Transaction SAT - Analyze Performance of Classes

Hi,
is it possible to analyze the performance of classes directly? In transaction SAT, I'm only able to execute a Transaction, Program, or Function Module. How can I analyze the performance of classes with transaction SAT without writing a program?
The Tips and Tricks editors are not editable in my system.
Thanks.
/Björn-Henrik

Hi,
how would you test / run your classes / methods without writing a program?
Whatever transaction you use for that, you can trace. Use the "Explicit Switching on and off measurement" setting in
the trace variant.
The principle is explained here for ST12 (which can't trace function modules) with the "particular unit" setting, which
is exactly the same as the "Explicit Switching on and off measurement" setting.
ST12 - tracing function modules
Hope this helps,
Hermann

Similar Messages

  • Transaction SAT for extractors

    I have some issues with the performance of some extractors. Can I use the runtime analysis tool (transaction SAT) for BW extractors?

    Hi,
    Yes, you can. You may need to trace the extractor's back-end programs.
    You can also find which steps take the most time through the SM37 job log details.
    At which step are you actually facing the performance problem while loading data into BW?
    Can you tell us the data source name?
    During extraction, do the ECC/BW servers have enough free application servers to execute the loads in time?
    Have you monitored the data loads in SM58 while loading data from the source to PSA?
    Thanks

  • MSI Z97-G45 GAMING Sata controller performance

    Hello
    I have a question about the Intel Z97 Express chipset SATA 6Gb/s controller on my MSI Z97-G45 GAMING motherboard.
    Is there any difference in performance between the SATA ports that could produce different results when testing hard drives? I have a Seagate 3TB HDD and a WD RE 3TB HDD, and the Seagate seems faster. When testing the drives with the manufacturers' test suites, the Seagate takes about 1 min and the WD about 2 min, and in HD Tune Pro the Seagate shows a faster speed and a more even curve.
    Could this big difference between the hard drives be caused by which SATA port is being used, or are they all the same?
    Thanx
    Niklas

    I'm quite sure the SATA cables in my new computer say 6Gb on them, but I'm not 100% sure. I'm using Asus black SATA cables with black/white connectors plus the SATA cable included with the motherboard, so there should not be any problem.
    From reading the test linked below, it seems there is no difference between 6Gb and 3Gb cables as long as they are high-quality cables from known manufacturers. I only own Asus red 3Gb cables in my old computer, and black/white Asus 6Gb cables plus the ones included with the motherboard in my new computer.
    Sata II vs Sata III cable test:
    http://www.pugetsystems.com/labs/articles/SATA-cables-Is-there-a-difference-97/

  • Transaction propogation to java helper class

              Hi
              EJB A calls EJB B, and EJB B does a certain number of updates/inserts. If one of them
              fails, everything rolls back in the second EJB. This works fine.
              But in the initial design, EJB B was not an EJB but a regular Java helper
              class. EJB A creates a new instance of this Java class and calls a method on it.
              EJB A is set to be transactional (Required). The Java class gets a DB
              connection from a TX datasource and does the same operations as the EJB B described
              above. I tried throwing an exception to EJB A; on catching the exception it did
              a setRollbackOnly(), but the transaction does not roll back. I was under
              the impression that even if DB operations are done in a Java class, they are still
              under the same transaction, and thus the container is still in control of it if
              it is a container-managed transaction. It does not look like that is the case.
              Is there any requirement that the DB connection obtained / operations made need to be
              inside the EJB method itself and not in the Java helper class? Does this mean
              that the transaction gets suspended for the duration of execution of the Java helper
              class? I was under the impression that they are all under the same transaction
              context and that it applies here as well. Any help on this is greatly appreciated.
              

    Well, that's the only way. The container has to start the transaction in its
              invocation wrapper for CMT EJBs and then terminate the transaction (commit or
              rollback) after the method has exited (either normally or by throwing an
              exception). Anything that happens inside the call to the business method is in
              the transaction...
              --dejan
              Toad wrote:
              > That's good to know but somewhat surprising.
              >
              > "Deyan D. Bektchiev" <[email protected]> wrote in message
              > news:[email protected]...
              > > The transaction is associated with the thread that the EJB method
              > > executes in, so any calls in the same thread are part of the
              > > transaction. So even if you have a separate class that functions as
              > > a connection factory (in the end getting the connections from a TX
              > > datasource), those connections would still be part of the transaction.
              > >
              > > You can test that if you do
              > > System.out.println(weblogic.transaction.TxHelper.getTransaction())
              > > and you should see the current transaction.
              > >
              > >
              > > --dejan
              > >
              > > Toad wrote:
              > >
              > > > I'm thinking the key to the failure to rollback is that the helper
              > > > bean "gets a connection", which is effectively stepping outside the
              > > > confines of your CMP model. How would the container know that you
              > > > hand-carved a connection, or even a set of connections, and executed
              > > > several transactions independently? That would be no mean feat if it
              > > > wasn't specifically designed in.
              > > >
              > > > "Priya Vasudevan" <[email protected]> wrote in message
              > > > news:[email protected]...
              

  • SAP Business Explorer Analyzer Performance testing

    Hi, All
    I don't have any experience in SAP at all. So maybe I will ask silly questions.
    I was asked to do performance testing for SAP Business Explorer Analyzer. As you know, it is an Excel-based application with an ActiveX control from SAP. HP LoadRunner doesn't support such a configuration (it can only work with SAPGUI), so I am forced to search for other possible solutions. Maybe here I can get ideas and opinions about the options:
    1. I asked the SAP admins about the possibility of doing the same things users do from BEx (query execution + some drill-down in the data) in SAPGUI, maybe with predefined parameters or additional customization. Unfortunately, I got the answer that it isn't possible at all. Is that really so?
    2. Another idea is to simulate what users do in BEx with VBA, since Excel interacts with the BEx engine via VBA anyway. There I met another problem: there is no documentation about the ActiveX components (e.g. com.sap.bi.et.analyzer.addin.BExConnect) or the API used in BExAnalyzer.xla. The questions are: "Is it possible to fully automate all data manipulation with VBA?" and "Where can I get documentation about the SAP VBA API?"
    Thanks,
    Alexander

    You can run the queries with drilldown using transaction RSRT in list mode and interact via the standard screen functions.
    This won't, however, give you the WAN or presentation server overhead (i.e. formatting of the cells with Excel wrappers of styles).
    Read here for a testing example
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/60981d00-ca87-2910-fdb8-d4a2640d69d4?quicklink=index&overridelayout=true

  • Performance:  large classes or inheritence?

    I have several classes that share common function requirements. Previously, I set up a third class as a static function provider (violating most known OO concepts :). In an effort to redeem myself, I've moved the common functions into a base class, which the other classes extend.
    My question is: setting aside OO design theory, which system is better for performance, public static methods from a class that just provides them, or protected static methods from a shared base class? (Memory is not as much of a consideration as speed in this case.)
    I wish there was a Java performance forum here ... :)
    thanks for your time,
    Andrew

    I think pronouncing on performance without measured data would be pure speculation. It's a religious issue then.
    I think you're really arguing about inheritance vs. composition.
    Is inheritance always better? No - composition's favored in some circles because it can be more flexible.
    If you group those common methods into an interface and make all the other classes implement it, you can change the behavior of each one individually without affecting the others, even after you're finished writing the code, just by passing in a new implementation when you instantiate it.
    It all depends on what you're trying to accomplish, IMO. I'll leave the performance discussion to those with strong opinions and, hopefully, data. - MOD
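    The interface-based composition mentioned above can be sketched in a few lines; all class and method names here are invented for illustration, not taken from the original poster's code:

```java
// Composition over inheritance: shared behavior is supplied through an
// interface, so each class can be given a different implementation at
// construction time without affecting the others.
interface Formatter {
    String format(String s);
}

class UpperFormatter implements Formatter {
    public String format(String s) { return s.toUpperCase(); }
}

class QuotedFormatter implements Formatter {
    public String format(String s) { return "\"" + s + "\""; }
}

// Instead of extending a base class to get the common methods,
// the class receives the behavior as a collaborator.
class Report {
    private final Formatter formatter;
    Report(Formatter formatter) { this.formatter = formatter; }
    String render(String body) { return formatter.format(body); }
}

public class CompositionSketch {
    public static void main(String[] args) {
        // Two instances of the same class with different behavior:
        System.out.println(new Report(new UpperFormatter()).render("hi"));
        System.out.println(new Report(new QuotedFormatter()).render("hi"));
    }
}
```

    As the reply notes, whether this is faster than static methods in a base class is a question for measurement, not speculation; the sketch only shows the flexibility argument.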

  • Long-running transactions and the performance penalty

    If I change the orch or scope Transaction Type to "Long Running" and do not create any other transaction scopes inside, I'm getting this warning:
    warning X4018: Performance Warning: marking service '***' as a longrunning transaction is not necessary and incurs the performance penalty of an extra commit
    I didn't find any description of such penalties.
    So my questions to gurus:
    Does it create some additional persistence point(s) / commit(s) in LR orchestration/scope?
    Where do these persistence points happen, especially in an LR orchestration?
    Leonid Ganeline [BizTalk MVP] BizTalk Development Architecture

    The wording may make it sound so, but IMHO, if during the build of an orchestration we get carried away with scope shapes, we end up with more persistence points, which do affect performance; one additional point should not make so much of a difference.
    The warning may have been added because of end-user feedback: people may have opted for long-running transactions without realizing the performance overheads, and in subsequent performance optimization sessions with Microsoft put it on the product enhancement list as "provide us with an indication if we're going to incur performance penalties". A lot of people design orchestrations the way they write code (not saying that is a bad thing), using the scope shape along the lines of a try/catch block, and with Microsoft marketing Long Running Transactions/Compensation blocks as USPs for BizTalk, people did get carried away into using them without understanding the implications.
    I'm not saying that no additional persistence points are added, just wondering whether adding one is sufficient to warrant the warning. But if I nest enough scope shapes and mark them all as long-running, they may add up.
    So when I looked at things other than persistence points, I tried to think about how one might implement a long-running transaction (nested, incorporating atomic scopes, etc.). Would you be able to leverage the .NET transaction object (something the pipelines use and execute under), or would that model not handle the complexities of a Long Running Transaction, which by its very definition can span days or months? Keeping .NET Transaction objects active, or serializing/de-serializing them into the operating context, would cause more issues.
    Regards.

  • How do i know the no. of transaction doDML will perform?

    Hi,
    In my application, I am using the custom Java data source implementation methodology as per the document. I have to fetch records from a service, populate them into a view object, and post the records that belong to the current transaction back to the service. I am able to populate data into the view object but have an issue getting the dirty records and posting them to my service.
    The requirement scenario is explained below.
    1. The user has changed some of the fetched records, added some new records, and deleted some records.
    2. When he presses Save, because of the execution of the commit operation, control will come to the doDML method as many times as needed, based on the number of transactions.
    3. My requirement is to identify all the records, pack them, and send them to my service in a single shot.
    I need some place where I can identify how many times the doDML method belonging to a particular entity object will be called, so that I can decide when to call my service. For example, if I have done 2 inserts and 1 delete, then as per the framework, doDML will be called 3 times. I need to capture the record details during these 3 calls and call my service only on the 3rd one.
    Any idea on how to achieve this?
    Thanks in advance.
    Raguraman

    This will not help, Srinidhi, because if I perform my service call after control has come out of the doDML() method, the dirty transaction will become undirty. So even if I get an error from my service while posting data to it, the next time the user corrects the data and clicks Save, control will not come to the doDML() method at all. So I should perform my service call inside doDML() itself.
    Edited by: Raguraman on Mar 23, 2011 10:30 AM
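    The counting idea described above (accumulate rows across the expected number of doDML calls and post once on the last one) can be sketched outside of ADF with a small batcher. The ADF-specific part, obtaining the expected call count from the transaction's pending changes, is assumed here, and all names are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of "call the service once, on the last doDML": the framework
// invokes handleDml once per changed row; the expected count is known up
// front, and the accumulated batch is flushed only on the final call.
public class DmlBatcher {
    private final int expectedCalls;
    private final List<String> batch = new ArrayList<>();
    private int calls = 0;
    private boolean flushed = false;

    DmlBatcher(int expectedCalls) { this.expectedCalls = expectedCalls; }

    // Analogous to doDML being invoked once per insert/update/delete.
    void handleDml(String record) {
        batch.add(record);
        calls++;
        if (calls == expectedCalls) {
            callService(batch);   // single shot, with all records
            flushed = true;
        }
    }

    // Stand-in for the actual service call.
    void callService(List<String> records) {
        System.out.println("posting " + records.size() + " records");
    }

    boolean isFlushed() { return flushed; }

    public static void main(String[] args) {
        DmlBatcher b = new DmlBatcher(3);  // e.g. 2 inserts + 1 delete
        b.handleDml("insert A");
        b.handleDml("insert B");
        b.handleDml("delete C");           // service called here, once
    }
}
```

    Since the flush happens inside the final handleDml call, a service error can still fail the surrounding transaction, which is the constraint Raguraman describes.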

  • Filesystem and SATA drive performance

    Hi you all,
    I'm in the process of installing Arch Linux for the third time on my system and I'm in need of some suggestions. The previous installations went without problems, but I have realized that the system was not really tuned for some video work I'm doing, i.e. I need support for large files. I did a Google search and found references to the XFS filesystem, which should do the trick. All went OK and I could work without problems on DV files 10-13 GB in size, but suddenly I noticed the abysmally low performance of my hard drive when copying files, e.g. 20 min for a folder of 512 MB (admittedly with multiple subfolders and small files)! The hardware I'm using is:
    AMD Athlon 2500
    Gigabyte GA-7VM400AMF (VIA 8237 -sata controller)
    Seagate 160 GB SATA harddrive
    512 MB ram
    -the "hdparm -tT /dev/sda" command gives me:
    /dev/sda:
    Timing cached reads:   1260 MB in  2.00 seconds = 629.77 MB/sec
    Timing buffered disk reads:  152 MB in  3.01 seconds =  50.43 MB/sec
    -the "sdparm /dev/sda" output is:
    /dev/sda: ATA       ST3160827AS       3.42
    Read write error recovery mode page:
      AWRE        1
      ARRE        1
      PER         0
    Caching (SBC) mode page:
      WCE         1
      RCD         0
    Control mode page:
      SWP         0
    -the "sdparm -i --verbose /dev/sda" command output is:
      /dev/sda: ATA       ST3160827AS       3.42
      PQual=0  Device_type=0x0  RMB=0  version=0x05  [SPC-3]
      [AERC=0]  [TrmTsk=0]  NormACA=0  HiSUP=0  Resp_data_format=2
      SCCS=0  ACC=0  TGPS=0  3PC=0  Protect=0  BQue=0
      EncServ=0  MultiP=0  MChngr=0  [ACKREQQ=0]  Addr16=0
      [RelAdr=0]  WBus16=0  Sync=0  Linked=0  [TranDis=0]  CmdQue=0
    Device identification VPD page:
      Addressed logical unit:
        desig_type: vendor specific [0x0],  code_set: ASCII
    00     20 20 20 20 20 20 20 20  20 20 20 20 34 4d 54 30                4MT0
    10     30 47 4b 48                                         0GKH
        desig_type: T10 vendor identification,  code_set: ASCII
          vendor id: ATA
          vendor specific: ST3160827AS                                         4MT00 GKH
    -I have the following partitons:
    Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1        3918    31471303+   c  W95 FAT32 (LBA)
    /dev/sda2            3919        3930       96390   83  Linux
    /dev/sda3            3931        4055     1004062+  83  Linux
    /dev/sda4            4056       19457   123716565   83  Linux
    sda1 partition is with winXP (still using win bootloader)
    sda2 is /boot formatted as ReiserFs
    sda3 is swap
    sda4 is XFS formatted mounted as /
    Now I'm preparing to reformat the whole drive, partitioning it in the same manner but using JFS on the "/" partition ... and this is what I would ask you guys:
    1. Are the hdparm readings "normal" for my system? Please reply with some of yours if you have SATA drives.
    2. Please comment on the sdparm readings (I have no idea what these could be used for).
    3. The real-life test of copying files tells me that there must be a problem with my system; I mean, my Vaio laptop with a PATA drive at 4200 rpm does better. If this is the case, what could it be: hardware? The sata_via driver (I would be willing to test a proprietary driver if one exists)? The filesystem?
    Thank you.

    Your new drive may just be 'slower' than the old one. If they are both 7200 RPM disks, it is likely that the older/smaller drive has a faster access time and/or transfer rate. This is not always the case, but it is possible.
    Also, ensure that you have the latest nVidia nForce2 drivers installed. If you know the new drive should be 'as fast as' or 'faster than' the old drive, then check the settings that JeanGuy suggested, and if they are all set correctly, look into reverting your nVidia IDE driver to an earlier version.

  • GC performance and Class Loading/Unloading

    We have EP 6.0, SP11 on Solaris with JDK 1.4.8_02. We are running Web Dynpro version of MSS/ESS and Adobe Document Services. This is a Java stack only Web AS.
    We are experiencing very uneven performance on the Portal. Usually, when the Portal grinds to a halt, the server log shows GC entries or Class unloading entries for the entire time the Portal stops working.
    I am thinking about setting the GC parameters to the same size to try and eliminate sudden GC interruptions. Also, what parameter can I set to allow as many classes to be loaded at startup and stay in memory for as long as possible?
    Thanks,
    Rob Bartlett

    Hi Robert
    Also, if the host running the WebAS is a multi-processor machine, then setting the flags
    -XX:+UseConcMarkSweepGC and
    -XX:+UseParNewGC
    will help reduce the pause times during GC collection in the old generation and the young generation respectively, as the GC will happen using multiple threads.
    I can suggest you check whether the GC performs a minor collection or a major collection by enabling the flags
    -verbose:gc
    -XX:+PrintGCTimeStamps
    -XX:+PrintGCDetails
    Based on this, try to tune the young or old generation.
    Regards
    Madhu
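    Alongside the -verbose:gc flags above, the same collection counts and pause totals can be read at runtime through the standard management API; a minimal sketch, requiring no JVM flags:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Prints, for each collector, how many collections have run and how much
// total time they took; a quick programmatic cross-check of what the
// -verbose:gc output shows.
public class GcStats {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName()
                    + ": count=" + gc.getCollectionCount()
                    + ", timeMs=" + gc.getCollectionTime());
        }
    }
}
```

    On most JVMs the bean names distinguish young-generation from old-generation collectors, so a collector whose time keeps growing points at where to tune.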

  • Auto analyzer performance improvements in version 9

    Has anyone run performance tests on a manual execution of version 8 of the auto-analyzer and compared this with version 9?
    I'm running PrE and Organizer version 8 and am actually quite happy with it.  I have many hours of video and thousands of photos which have been 'organized'.  I'm now taking advantage of the manual execution of the Auto-Analyzer feature for my videos.  I'm a week into this, running my computer 24x7, and figure I have another two weeks or so to go before the Auto-Analyzer is done executing.  I currently have M2T (1440x1050 60i) and DV-AVI flavored clips.

    In my experience, everything about version 9 runs more efficiently than it did in version 8.
    When I had version 8, I couldn't even run the Auto Analyzer in the background because it would stall my computer. In version 9, I can leave the Organizer to run the Auto Analyzer (aka Media Analyzer) in the background and it doesn't interfere with my work whatsoever.

  • Slow Analyzer Performance

    I would like to ask about slow performance for the Analyzer client (java) on the users' workstations (startup, shutdown/logoff, jumping between reports, etc). Logoff can sometimes take 5 minutes and eats my CPU. We're currently using Java version 1.3.1_09, but have tried other versions; and we're running the latest Analyzer 6.5.0.1 and Essbase 6.5.4. Is there a diagnostic, or file to change to enhance performance for the users? We've monitored the workstations and it appears that their CPUs max at 100% for several seconds when you click a button before making updates to the reports. At this point, we are considering other user interfaces/applications. (around 10 seconds for simple report changes, up to 300 seconds to logout, 75 seconds to go to Home, etc). The database appears fine, just the client performance is of great concern. Thank you, Les

    Hi,
    We use the following products: Essbase 6.1.4, Analyzer 6.1.1, Java 1.3.0_02, WebSphere 3.5.5, DB2 7.2. This version combination is the best one for us. Migrating to higher versions always produces problems with code pages and foreign (non-American) characters, which is a big problem for us. We keep trying to migrate to a higher version, but the above problems always appear.
    About performance: we only have a problem with starting up the Analyzer Java web client; it takes about 30 seconds. Opening a report takes just 2 to 3 seconds, and the same for storing reports. A problem only appears if we open more than 10 reports (or open a report group), so try to minimize the number of reports opened at a time. Also check whether Analyzer runs on very slow client computers; that can be a problem. Running on a faster client PC can speed up the loading.
    According to your description, there are probably some problems on the server side (also check the network). On the servers (database server, application server, etc.) check whether paging (swapping) occurs: on Windows check with Task Manager, on Unix check with the vmstat command. Paging is not good at all! Locate the problem: is it on the Analyzer server, the Essbase server, the database server, or the application server? Are there operating system problems (see the swapping tips above)? Are there hardware problems, etc.? Try using Excel with its add-in to check whether the problem is in Analyzer or Essbase.
    On the client, try using Java 1.3.0_02 (this version of the Java client is recommended). You can find this Java on the original Analyzer server CD or on the Sun web page. I have tried some other versions of Java, but I noticed some freezing problems. If you are not from America, you have to use the international version of Java. On the Analyzer CD there are two versions: an American one and an international one (the international one has the "i" in the exe file name).
    You didn't specify the operating system you use (Windows, Unix, etc.), what application server you use, or the database type and version. All these things can impact performance. I am only familiar with the products described above, so if you have a different installation I can't help you.
    Hope this helps,
    Grofaty

  • BEx Analyzer Performance

    Hi All,
    We're using the BEx Analyzer (7.x version) based on SAP Gui 6.40 patch level 29.
    When opening workbooks in the (new) BEx Analyzer, the local PC's CPU usage explodes and stays at 99% / 100% all the time, even when the results are in and no navigation is being done. All this results in poor workbook and PC performance.
    Any ideas?
    Our specs are:
    Gui640, version 29 (gui640_29-10001615.exe)
    Business Warehouse, version 13 (bw350_13-10001615.exe)
    BW700, version 16 (BW700SP16_1600-10001615.EXE)
    Thanks a lot!
    Cheers, Ron

    Hi Daniel,
    RSRT is the tcode for analyzing your query performance.
    Try this.
    NOTE: Assign points if it helps
    Regards,
    Arun.M.D

  • Transaction propagation via plain Java classes?

              Hello,
              I have a question on transaction propagation in the following scenario:
              1. a method of EJB1 with setting "Required" is invoked.
              2. the method creates a plain Java class and invokes a method of the class
              3. the class's method invokes a method of EJB2 with setting "Required".
              Is my understanding of EJB spec correct, when I assume that the transaction created
              when the first EJB method was called will be propagated through the plain Java
              class (supposedly via association with current thread), so the second EJB will
              participate in the same transaction?
              Thank you in advance,
              Sergey
              

    Yup, current transaction is associated with the current thread.
              Dimitri
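              The thread association described in both of these threads can be illustrated with a small self-contained sketch. This is not container code; a real container does this via JTA. The ThreadLocal stands in for the container's transaction context, and all names are invented:

```java
import java.util.ArrayList;
import java.util.List;

// Illustration of the principle: the "transaction" is associated with the
// current thread, so a plain Java class called on that thread sees the
// same transaction as the EJB that created it.
public class TxPropagationSketch {

    static final ThreadLocal<List<String>> CURRENT_TX = new ThreadLocal<>();

    // Plays the role of the container's invocation wrapper around a CMT
    // EJB method: start the tx, run the method, then end the tx.
    static List<String> invokeWithRequiredTx(Runnable businessMethod) {
        CURRENT_TX.set(new ArrayList<>());      // container starts the tx
        try {
            businessMethod.run();               // EJB1 -> helper -> EJB2
            return CURRENT_TX.get();            // "commit": work done so far
        } finally {
            CURRENT_TX.remove();                // container ends the tx
        }
    }

    // A plain Java helper class: no EJB machinery, yet it still sees the
    // thread-associated transaction.
    static class PlainHelper {
        void doWork() {
            CURRENT_TX.get().add("helper-update");
        }
    }

    public static void main(String[] args) {
        List<String> committed = invokeWithRequiredTx(() -> {
            CURRENT_TX.get().add("ejb1-update");
            new PlainHelper().doWork();         // propagates via the thread
            CURRENT_TX.get().add("ejb2-update");
        });
        System.out.println(committed);          // all three ops in one tx
    }
}
```

              The same reasoning explains the rollback question in the earlier thread: whether the rollback actually reaches the helper's work depends on its connection coming from the TX datasource on that same thread.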
              

  • K7N2 Delta SATA Raid Performance

    I just installed this motherboard and was wondering what kind of performance improvement I might see using SATA converters on my almost-new WD 80 GB ATA-100 HDDs, taking advantage of the on-board Serial ATA RAID controller.  The converters are only $20, while 2 new SATA HDDs would be $260.  Also, if I use Serial ATA for my HDDs, which IDE plug do I use for my optical drives?  Any help would be appreciated.

    What they say as regards a single drive is true, but you would still see the benefits of running in RAID, and they are quite high. I'm still using my HighPoint controller on my WD drives, and I would not go back to single drives from RAID 0 now; it just makes the PC more snappy. Some of the adaptors, like the Abit ones, have got a bad reputation though; it seems they are prone to falling out.
    As I recall, on a clean install my C: drive in Sandra was benching around 54000.
