How to measure the performance of Extractor

Hi,
How can I measure the time taken by the extractor when it is executed from RSA3 for a given selection?
A lot of threads talk about ST05, but that trace is too granular to analyse.
I need the overall time taken as well as the time taken by the individual SQL statements. Please provide specific pointers.
Thanks,
Balaji

Maybe SE30 can help you....
Regards,
Fred

Similar Messages

  • How to measure the performance of sql query?

    Hi Experts,
    How can I measure the performance, efficiency and CPU cost of a SQL query?
    What measures are available for a SQL query?
    How can I tell whether I am writing an optimal query?
    I am using Oracle 9i...
    It will help me write efficient queries.
    Thanks & Regards

    psram wrote:
    Hi Experts,
    How can I measure the performance, efficiency and CPU cost of a SQL query?
    What measures are available for a SQL query?
    How can I tell whether I am writing an optimal query?
    I am using Oracle 9i...
    You might want to start with a feature of SQL*Plus: the AUTOTRACE (TRACEONLY) option, which executes your statement, fetches all records (if there is anything to fetch) and shows you some basic statistics, including the number of logical I/Os performed, the number of sorts, etc.
    This gives you an indication of the effectiveness of your statement, so that you can check how many logical I/Os (and physical reads) had to be performed.
    Note, however, that there are more things to consider, as you've already mentioned: the CPU cost is not included in these statistics, and the work performed by SQL workareas (e.g. by hash joins) is only reflected in a very limited way (number of sorts); for example, it doesn't cover writes to temporary segments caused by sort or hash operations spilling to disk.
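    As a minimal sketch (assuming SQL*Plus and that the PLUSTRACE role has been granted, typically via $ORACLE_HOME/sqlplus/admin/plustrce.sql), an AUTOTRACE session could look like this:
    -- Sketch: report execution statistics without displaying the result rows.
    set timing on
    set autotrace traceonly statistics
    select ... <your query here>
    set autotrace off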
    You can use the following approach to get a deeper understanding of the operations performed by each row source:
    alter session set statistics_level=all;
    alter session set timed_statistics = true;
    select /* findme */ ... <your query here>
    SELECT
             SUBSTR(LPAD(' ',DEPTH - 1)||OPERATION||' '||OBJECT_NAME,1,40) OPERATION,
             OBJECT_NAME,
             CARDINALITY,
             LAST_OUTPUT_ROWS,
             LAST_CR_BUFFER_GETS,
             LAST_DISK_READS,
             LAST_DISK_WRITES
    FROM     V$SQL_PLAN_STATISTICS_ALL P,
             (SELECT *
              FROM   (SELECT   *
                      FROM     V$SQL
                      WHERE    SQL_TEXT LIKE '%findme%'
                               AND SQL_TEXT NOT LIKE '%V$SQL%'
                               AND PARSING_USER_ID = SYS_CONTEXT('USERENV','CURRENT_USERID')
                      ORDER BY LAST_LOAD_TIME DESC)
              WHERE  ROWNUM < 2) S
    WHERE    S.HASH_VALUE = P.HASH_VALUE
             AND S.CHILD_NUMBER = P.CHILD_NUMBER
    ORDER BY ID
    /
    Check the V$SQL_PLAN_STATISTICS_ALL view for more statistics available. In 10g there is a convenient function DBMS_XPLAN.DISPLAY_CURSOR which can show this information with a single call, but in 9i you need to do it yourself.
    Note that "statistics_level=all" adds a significant overhead to the processing, so use with care and only when required:
    http://jonathanlewis.wordpress.com/2007/11/25/gather_plan_statistics/
    http://jonathanlewis.wordpress.com/2007/04/26/heisenberg/
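    For reference (a sketch only, and 10g or later only): after running the statement in the same session with statistics_level=all or the gather_plan_statistics hint in effect, the DBMS_XPLAN.DISPLAY_CURSOR function mentioned above could be used like this:
    -- Show the last execution's actual row source statistics for the most
    -- recently executed cursor of this session.
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));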
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • How to measure the performance of a SQL query?

    Hello,
    I want to measure the performance of a group of SQL queries in order to compare them, but I don't know how to do it.
    Is there any application to do it?
    Thanks.

    You can use STATSPACK (in 10g it is called AWR - Automatic Workload Repository).
    Statspack -> A set of SQL, PL/SQL, and SQL*Plus scripts that allow the collection, automation, storage, and viewing of performance data. This feature has been replaced by the Automatic Workload Repository.
    Automatic Workload Repository - Collects, processes, and maintains performance statistics for problem detection and self-tuning purposes
    Oracle Database Performance Tuning Guide - Automatic Workload Repository
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14211/autostat.htm#PFGRF02601
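    As a rough sketch of how the snapshots are taken (assuming Statspack has been installed in the PERFSTAT schema, or on 10g that AWR is available), you bracket the workload with two snapshots and then run the report:
    -- Statspack (9i): snapshot before and after the workload, then report.
    exec statspack.snap;
    -- ... run the group of SQL queries to be compared ...
    exec statspack.snap;
    @?/rdbms/admin/spreport.sql
    -- AWR (10g): snapshots are taken automatically, or manually with:
    exec dbms_workload_repository.create_snapshot;
    @?/rdbms/admin/awrrpt.sql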
    or
    you can use EXPLAIN PLAN
    EXPLAIN PLAN -> A SQL statement that enables examination of the execution plan chosen by the optimizer for DML statements. EXPLAIN PLAN causes the optimizer to choose an execution plan and then to put data describing the plan into a database table.
    Oracle Database Performance Tuning Guide - Using EXPLAIN PLAN
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14211/ex_plan.htm#PFGRF009
    Oracle Database SQL Reference - EXPLAIN PLAN
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_9010.htm#sthref8881
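    A minimal EXPLAIN PLAN sketch (assuming a PLAN_TABLE exists, e.g. created with @?/rdbms/admin/utlxplan.sql; on 10g one is available by default) could look like this:
    -- Record the optimizer's chosen plan, then display it.
    explain plan for
    select ... <your query here>;
    select * from table(dbms_xplan.display);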

  • How to measure the performance

    Hi all,
    I would like to compare the performance of my Web Dynpros on different machines. I need to measure the execution time of my Web Dynpro, including some interaction from the user (navigation between the pages, etc.). How can I do it? I cannot add any additional code; is there any transaction to measure it?
    Thanks a lot,
    Anna

    Thank you.
    I opened this transaction, but I could only enter a Transaction, Program or Function module to analyze. How can I measure a Web Dynpro?
    Regards,
    Anna

  • How to improve the performance of extractor?

    Hi,
    we have an extractor for which data loading is taking too much time!
    It is taking 5 hours, whereas the same extractor in a different source system (different client) completes the load within an hour.
    What can one do to improve the performance of the extractor?
    Many thanks in advance!
    Ravi

    Hi,
    Are you sure that both source systems have the same amount of data?
    Also make sure that master data has been uploaded from both systems. It takes more time to upload transaction data if the master data has not yet been uploaded from the same source system.
    Also check SM58 in the source system for any blocked tRFCs. If there are any with text in red, update them manually. A huge number of blocked tRFCs can slow down the upload.
    with rgds,
    Anil Kumar Sharma .P

  • Measuring the Performance

    How to measure the Performance of an XI system and also an XI Interface?
    What are the ways we can tune an XI system for optimal performance, i.e. which areas do we have to look into for superior performance?

    Pete,
    You can check the start time and end time of the message in sxmb_moni and can find out how much time it takes for the interface to execute.
    Go to RWB and then click on performance monitoring. You can get this for each interface individually or for the system as a whole.
    Also go through these documents:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/489f5844-0c01-0010-79be-acc3b52250fd
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/defd5544-0c01-0010-ba88-fd38caee02f7?prtmode=navigate
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/70ada5ef-0201-0010-1f8b-c935e444b0ad
    /people/prasad.illapani/blog/2007/04/27/performance-tuning-checks-in-sap-exchange-infrastructurexi-part-iii
    http://help.sap.com/saphelp_nw04/helpdata/en/9e/6921e784677d4591053564a8b95e7d/frameset.htm
    ---Satish

  • How to improve the performance of Standard Extractors

    Hi All,
    Can you please let me know how to monitor the performance of standard extractors? One of the extractors, which has 350K records in the source, is taking 4 hours to load to BI.
    I want to know which part of the program is taking more time.
    Thanks in Advance
    RK

    No, that exception is thrown when the underlying socket (TCP/IP) implementation cannot reserve buffer space for new sockets, AFAIK. I get it on Windows machines when the concurrent connection load reaches 17-18k connections. I do believe it's configurable though.
    And what happens if your max server load is 7-8 connections an hour and you never close the connections?

  • How can I use BI Technical Content for measuring the performance of

    Hi
    Recently I implemented BI 7.0 for one client. I want to know how to use the BI Administration Cockpit.
    What does the technical content actually do?
    And how can I use the technical content or statistics for measuring the performance of my queries? Please let me know.
    kumar

    Hi Ravi,
    The BI Admin Cockpit is an enhancement of BW Statistics.
    http://help.sap.com/saphelp_nw2004s/helpdata/en/44/08a75d19e32d2fe10000000a11466f/frameset.htm
    Check this thread also:
    BI Statistics comparision with the old
    Regarding the performance check the link below.
    Re: Query - Performance
    http://help.sap.com/saphelp_nw2004s/helpdata/en/43/15c54048035a39e10000000a422035/frameset.htm
    Regards,
    Anil

  • How to measure the size of an object written by myself?

    Hi all,
    I'm going to measure the throughput performance of an ad hoc wireless network that is set up for my project. I wrote a Java class that represents a particular piece of data. In order to calculate the throughput, I'm going to send these data objects from one node to another in the network for a certain time. But I've got a problem with it: how do I measure the size, in bytes or bits, of an object of a class I wrote myself in Java? Please help me with it. Thank you very much.

    LindaL22 wrote:
    I wrote a java class that represents a particular data. In order to calculate the throughput, I'm going to send this data objects from one node to another one in the network for a certain time.
    "A data" doesn't exist. So there's nothing to measure.
    But I've got a problem with it - How to measure the size of an object that was written by myself in byte or bit in Java?
    Not.

  • How to measure the task instance size

    I am trying to understand how to measure the size of a custom task instance in the cache repository. Is there a way to obtain the size (in KB) in Lighthouse 4.1 + SP5? Please help.

    Hi, Luiz!
    In order to estimate the instance size, you have to sum up the sizes of all the instance variables.
    From a practical perspective it may be difficult to use this approach when your instance variables are not simple types.
    For already existing instances, you can look up the instance size in the database:
    An instance is just a Java object that is serialized as a byte[] and stored in the engine database, in the PPROCINSTANCE table, column INSTDATA.
    You can write a simple program (either Java or SQL*Plus) that reads the instance from the BLOB and measures its size.
    And yes, with large instances you can have performance problems.
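    As a rough illustration of the SQL*Plus route (a sketch only; the table and column names are taken from the description above, and the id column is shown here only as a hypothetical example - adjust to your actual schema):
    -- Report the serialized size of each stored instance in KB, largest first.
    -- DBMS_LOB.GETLENGTH returns the BLOB length in bytes.
    SELECT id,
           ROUND(DBMS_LOB.GETLENGTH(instdata) / 1024, 1) AS size_kb
    FROM   pprocinstance
    ORDER  BY size_kb DESC;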

  • Measuring the performance of Networking code

    Lately I've had renewed interest in Java networking, and been doing some reading on various ways of optimizing networking code.
    But then it hit me.
    I don't know of any way of benchmarking I/O or networking code. To take a simple example, how exactly am I supposed to know whether read(buf, i, len) is more efficient than read()? Or how do I know the performance difference between setting sendBufferSize to 8k versus 32k? Etc.
    1)
    When people say "this networking code is faster than that", I assume they are referring to latency. Correct? Obviously these claims need to be verifiable. How do they do that?
    2)
    I am aware of Java profilers ( http://java-source.net/open-source/profilers), but most of them measure things like CPU, memory, heap, etc. - I can't seem to find any profiler that measures networking code. Should I be looking at OS/system-level tools? If so, which ones?
    I don't want to commit the cardinal sin of blindly optimizing because "people say so". I want to measure the performance and see it with my own eyes.
    Appreciate the assistance.
    Edited by: GizmoC on Apr 23, 2008 11:53 PM

    If you're not prepared to assume they know what they're talking about, why do you assume that you know what they're talking about?
    Ok, so what criteria determine whether a certain piece of "networking code" is better/faster than another? My guess is: latency, CPU usage, memory usage - that's all I can think of. Anyway, I think we are derailing here.
    The rest of your problem is trivial. All you have to do is time a large download under the various conditions of interest.
    1)
    Hmm, well, for my purpose I am mainly interested in latency. I am writing a SOCKS server which is currently encapsulating multiplayer game data. Currently I pay an approx. 100 latency overhead - I don't understand why, considering both the SOCKS client (my game) and the SOCKS server are on localhost. And I don't think merely reading a few bytes of SOCKS header information can cause such an overhead.
    2)
    Let's say I make certain changes to my networking code which result in a slightly faster download - however, can I assume that this will also mean lower latency while gaming? Game traffic is extremely sporadic, unlike a regular HTTP download, which is a continuous stream of bytes.
    3)
    "Timing a large download" implies that I am using some kind of external mechanism to test my networking performance. Though this sounds like a pragmatic solution, I think there ought to be a formal, finely grained test harness that tests networking performance in Java, no?

  • How to monitor the performance of EBS iAS?

    Hello All,
    I am looking for your advice on how to monitor the performance of the EBS (11i/R12) iAS.
    How to measure the iAS workload during a certain period?
    How to know the number of iAS hits/requests?
    How to measure the iAS response time?
    Regards,
    Tareq

    Thanks EBSDBA and Helios for your time.
    Are there any scripts or reporting tools that help me monitor iAS performance other than OEM for EBS?
    Note 370583.1 is very helpful in monitoring JDBC connections. Are there any similar notes/scripts that can answer my questions:
    How to measure the iAS workload during a certain period?
    How to know the number of iAS hits/requests?
    How to measure the iAS response time?
    Appreciating your cooperation,
    Tareq

  • How to improve the performance of adobe forms

    Hi,
    Please give me some suggestions on how to improve the performance of an Adobe form.
    Right now, when I am executing user events, it works fine for the first 6 or 7 user events. From the next one onwards it hangs.
    I read about the wizard form design approach; how can I use it here?
    Thanks,
    Aravind

    Hi Otto,
    The form is created using HCM Forms and Processes. I am performing user events in the form.
    User events do a round trip, in which form data is sent to the backend SAP system. Processing happens on the ABAP side and the result appears on the form. The first 6 or 7 user events work correctly;
    the result appears on the form. Around the 8th or 9th one, the wait symbol appears and the form is not re-rendered. The form is 6 pages long. The issue does not occur with a form of 1 page.
    I was reading ways to improve performance during re-rendering given below.
    http://www.adobe.com/devnet/livecycle/articles/DynamicInteractiveFormPerformance.pdf
    It talks about the wizard form design approach, but in the SFP transaction I do not see any kind of wizard.
    Let me know if you need further details.
    Thanks,
    Aravind

  • How to measure the baseline of a noisy, pulsed signal

    Hi
    I am measuring the torque exerted by a large motor on a shaft using a load cell and lever arm. The shaft runs at approx 150 rpm. I have attached a drawing that shows the output I get. This is a test rig.
    I have written some code that measures the maximum peak out of a group of approx 5 peaks and writes this to a shift register. This gives me an idea of the maximum torque "spike".
    I also wish to measure the baseline torque (due to the bearings in the machine). Even when highly filtered (my noise filter is set to 49Hz) the signal exhibits this noise which is probably due to vibration in the system. The signal is zeroed when the motor is not running.
    Does anyone have any ideas on how to measure the "baseline" torque? The large spike in torque prevents me from doing a running average. Can anyone think of a way of averaging just the noisy part of the signal to get an average value? I aim to subtract the average baseline torque from the peak value to get an idea of the torque due to the event which causes
    the spike.
    Any help would be greatly appreciated.
    Many thanks.
    Attachments:
    drawing of torque signal.gif (26 KB)

    Thanks for the reply. I understand what you are saying. However, I might have to modify my method for measuring the peaks if I choose to implement your idea. I have taken a screenshot of my "peak finder" code and attached it.
    Basically, the reset terminal is wired to a timer which outputs a pulse every few seconds. This resets the VI (a standard NI one, I think) and sets the peak magnitude back to zero. This way, I am windowing the signal and measuring the maximum peak in every window. This is what I need to do.
    So I could use a logical filter to feed data to the running average only if:
    the amplitude of the signal is less than a certain threshold,
    and the current value has similar low peaks on either side of it.
    How would you construct the code to delay the evaluation so that the values in front and behind of the current data point can be analysed?
    thanks again
    Attachments:
    peak_find_screenshot.jpg (45 KB)

  • Do you have an idea how to improve the performance ?

    Hi All,
    Greeting,
    I'm doing SEM-IP. Regarding performance, do you have some thoughts on this?
    I have a planning report for projects. As we know, if we forecast against a project, the time horizon is the life of the project itself.
    That means it could be more than 10 years (forecast period) plus 10 years (actual period). Currently I separate actual and forecast into different InfoCubes.
    But the performance of the planning report is slow now. Do you have any ideas on how to increase the performance? The performance I mean here is the time from entering the values on the selection screen to the report appearing.
    The other question: at the moment I have a MultiProvider that consists of 2 InfoCubes (actual and forecast), and my aggregation level sits on top of that MultiProvider.
    Is that approach correct? Or should I create one aggregation level (only for forecast), and then have a MultiProvider consisting of the forecast aggregation level and the actual cube, with my query sitting on top of that MultiProvider?
    Which one is better?
    Thanks a lot all,
    really need your help,

    Hi,
    For performance tuning, you can consider any of the following methods:
    1. Indices
    With an increasing number of data records in the InfoCube, not only the load but also the query performance can be reduced. This is attributed to the increasing demands on the system for maintaining indexes. The indexes that are created in the fact table for each dimension allow you to easily find and select the data.
    2. Partitioning
    By using partitioning you can split up the whole dataset for an InfoCube into several, smaller, physically independent and redundancy-free units. Thanks to this separation, performance is increased when reporting, or also when deleting data from the InfoCube.
    3. Aggregates 
    Aggregates make it possible to access InfoCube data quickly in Reporting. Aggregates serve, in a similar way to database indexes, to improve performance.
    4. Compressing the Infocube
    InfoCube compression means aggregation of the data ignoring the request IDs. After compression, the system need not perform aggregation using the request ID every time you execute a query.
    And I feel that in your scenario you first need to compress the data, based on the user requirements, and have only the required data in the InfoCube.
    And for the approach regarding the Aggregation level design, choosing between the two approaches depends on the user requirements. For example,
    If you have aggregation level created on top of multiprovider containing actual and forecast cube, in your report (created on top of aggregation level) you can view the key figure values present in both the cubes, which is not possible in the other approach.
    So this approach is suited if your requirement is to view the records from both the cubes in your report (Comparing planning and actual values).
    The second approach is used if your requirement is only to report on planning forecast cube.
    Hope this solves your issue.
    Regards,
    Balajee
