How to measure performance?

Hi all,
I have a scenario wherein I need to check the performance of the design being used.
I have one InfoCube in which data is stored on a calendar-day basis. I have loaded that data into another cube on a fiscal-year basis, with only the specific characteristics and key figures I wanted from the first cube.
How do I check the performance when data is fetched from the first cube, and compare it with the time taken to fetch data from the other cube?
Can I measure it if I am fetching the data in a function module using ABAP?

Hi,
If you want that, then create a query on both cubes and take the query statistics.
For this you can use transaction RSRT; it shows the raw time, not the percentage of time that the query spent in each area.
For the percentages, you can either calculate them yourself or use transaction ST03 (expert mode), which shows the breakdown by percentage.
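If you want to measure it from ABAP, as you asked, you can also take the timing yourself with GET RUN TIME, either inside your function module or in a small test report. Here is a minimal sketch; the two read calls are only placeholders (RSDRI_INFOPROV_READ is one common way to read an InfoProvider, but use whatever read logic you already have):
DATA: lv_t0    TYPE i,
      lv_t1    TYPE i,
      lv_t2    TYPE i,
      lv_cube1 TYPE i,
      lv_cube2 TYPE i.

GET RUN TIME FIELD lv_t0.
" ... read the selection from the first cube (calendar-day based) here ...
GET RUN TIME FIELD lv_t1.
" ... read the same selection from the second cube (fiscal-year based) here ...
GET RUN TIME FIELD lv_t2.

lv_cube1 = lv_t1 - lv_t0.   " microseconds spent on the first read
lv_cube2 = lv_t2 - lv_t1.   " microseconds spent on the second read
WRITE: / 'Cube 1 read time (microseconds):', lv_cube1,
       / 'Cube 2 read time (microseconds):', lv_cube2.
Repeat the measurement a few times with identical selections and compare the averages, because OLAP and database buffering can distort a single run.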
Alternatively, you can schedule the following process chains in order to load BI statistics data into the technical content:
Master Data
System Master Data - 0TCT_MD_S_FULL_P01
This loads texts for objects like 'Process Status', 'BI Object Type', 'Process Type'
Content Master Data - 0TCT_MD_C_FULL_P01
This loads attributes and texts for objects like 'Process Variants', 'Process Chain'
Initialization Loads
Query Runtime Statistics - Init - 0TCT_C0_INIT_P01
Data Load Statistics - Init - 0TCT_C2_INIT_P01
These process chains need to run only once (Immediate scheduling).
Delta Loads
Query Runtime Statistics - Delta 0TCT_C0_DELTA_P01
Data Load Statistics - Delta 0TCT_C2_DELTA_P01
These process chains can be scheduled for periodic execution
I have already given you a link; check that as well.
Hope this helps.
Regards,
Debjani

Similar Messages

  • How to Measure performance in Interaction centre

    Dear Experts
    Please tell me whether we can measure IC agent performance based on the calls attended by them in the Interaction Center.
    If yes, how?
    Regards
    Rajat

    Dear Gurinder
    There are various ways of finding out an IC agent's performance in a call center.
    There are some standard reports, delivered through BI, with which the performance can be measured.
    Some of these reports are
    Abandonment Rate
    Average Time to Abandonment
    Volume of Connections
    Transfers
    Service Level
    Average Response Time
    Average Handling Time
    The above reports can help evaluate not only the agent's performance but also the performance of the entire process.
    Let me give you an example of how these reports can reflect an agent's performance:
    - Let's take the example of handling time. Handling time, in a generic way, is defined as the time taken to talk to a customer.
      Handling time is more often than not one of the most important metrics of running a call center.
    Let's assume that the average handling time of the call center is 5 minutes but the average handling time of a particular agent is 8 minutes; that difference gives you an idea of this agent's performance.
    Another metric could be first call resolution. This means how often the call is resolved in the first interaction itself.
    There is a particular way to feed data to BI to get the analysis in the reports mentioned above: data coming from the CTI provider is loaded into CRM tables through a function module and is then extracted by BI for this analysis.
    Another way of conducting analysis is Blended analytics. Please read about it.
    Thanks
    Tarang

  • How to measure performance of supplier when using scheduling agreement ?

    Hello all,
    My client has an absolute need to be able to measure the performance of its suppliers based on delivery dates and delivered quantities. That is to say, he needs to be able to compare the requested dates and quantities with what has actually been delivered.
    Most of the procurement processes used are based on scheduling agreements : schedule lines are generated by MRP and forecast is sent to supplier while firm requirements are sent through JIT calls.
    It seems that when doing GR in MIGO, it is done against the outline agreement number, and not against the call. Therefore, we have no way to compare dates and quantity with what was expected (in the JIT call).
    Do you know whether SAP offers a standard solution to this, and what a workable approach to this issue could be?
    Thanks for your help
    E. Vallez

    Hi,
    My client faced the same problem and we ended up developing our own analysis in LIS. Since a GR is not linked to a specific schedule line (SAP does some kind of apportioning, but it does not necessarily correlate to the correct match), one needs to make assumptions. Our assumption was the closest schedule line, i.e. each GR is related to the schedule line with the closest date. All GRs on the same day are then totalled together before the quantity reliability is calculated, since the very same shipment can be reported through several GR transactions in SAP (one per pallet).
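    To make that assumption concrete, here is a rough, hypothetical ABAP sketch of the closest-date matching. The structures are simplified stand-ins; in a real system the schedule lines would come from EKET and the GR history from EKBE, and same-day GRs should be totalled before the comparison:
    TYPES: BEGIN OF ty_sched,
             eindt TYPE d,                " scheduled delivery date
             menge TYPE p DECIMALS 3,     " scheduled quantity
           END OF ty_sched,
           BEGIN OF ty_gr,
             budat TYPE d,                " GR posting date
             menge TYPE p DECIMALS 3,     " received quantity
           END OF ty_gr.

    DATA: lt_sched TYPE STANDARD TABLE OF ty_sched,
          lt_gr    TYPE STANDARD TABLE OF ty_gr,
          ls_gr    TYPE ty_gr,
          ls_sched TYPE ty_sched,
          ls_best  TYPE ty_sched,
          lv_diff  TYPE i,
          lv_best  TYPE i.

    " For every GR, pick the schedule line whose delivery date is closest
    " to the GR posting date, then compare dates and quantities.
    LOOP AT lt_gr INTO ls_gr.
      lv_best = 999999.
      LOOP AT lt_sched INTO ls_sched.
        lv_diff = abs( ls_gr-budat - ls_sched-eindt ).
        IF lv_diff < lv_best.
          lv_best = lv_diff.
          ls_best = ls_sched.
        ENDIF.
      ENDLOOP.
      " ls_best now holds the assumed matching schedule line for this GR.
    ENDLOOP.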
    If anybody has info about what SAP has to offer in this question (or is developing), please tell us!
    BR
    Raf

  • How to measure performance in HourGlass Model and Modified HourGlass Model

    Hello All,
    I'm trying to understand as to how the HourGlass Model (which says that the outline should be designed as Dimension tagged as Account, Dimension tagged as Time, Dense Dimensions from most dense to least dense, Sparse Dimensions from least sparse to most sparse) exactly works in terms of optimizing performance and aggregation.
    Also I want to understand the working of the new Modified HourGlass Model on Stick (which says that the outline should be designed as Dimension tagged as Account, Dimension tagged as Time, Dense Dimensions from most dense to least dense, Aggregating Sparse Dimensions from least sparse to most sparse, Non Aggregating Sparse Dimensions).
    Why are these approaches better and how do they work internally in the system?
    How exactly does it pick up combinations during calculations and aggregations?
    In some documents I learned that we should keep the Time dimension as the first dimension in the outline, since it is dense and there is a greater chance of similar data values across the same fiscal year, which lets compression work efficiently. If that is the case, doesn't it conflict with the HourGlass model, and in such situations which model should we go with?
    Thank You,
    MM

    Hi Damian,
    Here are a few more general tips for query performance:
    1) Always gather statistics for the query optimizer. In addition, we usually see better performance with column group statistics for PS and PC column groups.
    exec sem_apis.analyze_model('my_model',METHOD_OPT =>'FOR COLUMNS (P_VALUE_ID, CANON_END_NODE_ID) SIZE AUTO',DEGREE=>4);
    exec sem_apis.analyze_model('my_model',METHOD_OPT =>'FOR COLUMNS (P_VALUE_ID, START_NODE_ID) SIZE AUTO',DEGREE=>4);
    exec sem_perf.gather_stats(just_on_values_table=>true,degree=>4);
    Note: the DEGREE argument is for degree of parallelism
    Usually, you would load data, then gather statistics, and then periodically re-gather them as updates are done (maybe when 20% of the data is new).
    2) Create appropriate semantic network indexes. We generally recommend PCSM and PSCM indexes. PCSM is always there, and PSCM is created by default in the latest patch but not in 11.2.0.1.0 release (11.2.0.1.0 has a PSCF index that should be dropped and replaced with PSCM).
    Both of these items are covered in the documentation.
    You may also find the following presentation from SemTech 2010 helpful. It covers many best practices for load, query and inference.
    http://download.oracle.com/otndocs/tech/semantic_web/pdf/2010_ora_semtech_wkshp.pdf
    Thanks,
    Matt

  • How do you measure performance of an item renderer?

    I'm creating an ItemRenderer in Flex 4.6 and I want to know how to measure total time to create, view and render an item renderer and how long it takes to view and render that item renderer when it's being reused.
    I just watched the video, Performance Tips and Tricks for Flex and Flash Development and it describes the creation time, validation time and render time and also the reset time. This is described at 36:43 and 40:25.
    I'm looking for a way to get numbers in milliseconds for total item renderer render time and reset time (what is being done in the video). 

    To answer your first question, in this video Ryan Frishberg recommends measuring and tuning your code. I'm trying to follow his example for my own item renderers.
    I've taken some key slides out to show you.

  • How to measure the performance of sql query?

    Hi Experts,
    How do I measure the performance, efficiency and CPU cost of a SQL query?
    What measures are available for a SQL query?
    How do I identify whether I am writing an optimal query?
    I am using Oracle 9i...
    It will be useful for me to be able to write efficient queries.
    Thanks & Regards

    psram wrote:
    Hi Experts,
    How do I measure the performance, efficiency and CPU cost of a SQL query?
    What measures are available for a SQL query?
    How do I identify whether I am writing an optimal query?
    I am using Oracle 9i...
    You might want to start with a feature of SQL*Plus: the AUTOTRACE (TRACEONLY) option, which executes your statement, fetches all records (if there is something to fetch) and shows you some basic statistics, including the number of logical I/Os performed, the number of sorts, etc.
    This gives you an indication of the effectiveness of your statement, so that you can check how many logical I/Os (and physical reads) had to be performed.
    Note however that there are more things to consider, as you've already mentioned: the CPU time is not included in these statistics, and the work performed by SQL workareas (e.g. by hash joins) is only reflected in a very limited way (number of sorts); for example, it doesn't cover any writes to temporary segments caused by sort or hash operations spilling to disk.
    You can use the following approach to get a deeper understanding of the operations performed by each row source:
    alter session set statistics_level=all;
    alter session set timed_statistics = true;
    select /* findme */ ... <your query here>
    SELECT
             SUBSTR(LPAD(' ',DEPTH - 1)||OPERATION||' '||OBJECT_NAME,1,40) OPERATION,
             OBJECT_NAME,
             CARDINALITY,
             LAST_OUTPUT_ROWS,
             LAST_CR_BUFFER_GETS,
             LAST_DISK_READS,
             LAST_DISK_WRITES
    FROM     V$SQL_PLAN_STATISTICS_ALL P,
             (SELECT *
              FROM   (SELECT   *
                      FROM     V$SQL
                      WHERE    SQL_TEXT LIKE '%findme%'
                               AND SQL_TEXT NOT LIKE '%V$SQL%'
                               AND PARSING_USER_ID = SYS_CONTEXT('USERENV','CURRENT_USERID')
                      ORDER BY LAST_LOAD_TIME DESC)
              WHERE  ROWNUM < 2) S
    WHERE    S.HASH_VALUE = P.HASH_VALUE
             AND S.CHILD_NUMBER = P.CHILD_NUMBER
    ORDER BY ID
    /
    Check the V$SQL_PLAN_STATISTICS_ALL view for more statistics available. In 10g there is a convenient function DBMS_XPLAN.DISPLAY_CURSOR which can show this information with a single call, but in 9i you need to do it yourself.
    Note that "statistics_level=all" adds a significant overhead to the processing, so use with care and only when required:
    http://jonathanlewis.wordpress.com/2007/11/25/gather_plan_statistics/
    http://jonathanlewis.wordpress.com/2007/04/26/heisenberg/
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • How to measure query run time and monitor performance

    Hai All,
    A simple question: how do I measure query run time and monitor performance? I want to see parameters like how long it took to execute, how much space it took, etc.
    Thank you.

    Hi,
    Some ways:
    1. Use transaction ST03, expert mode.
    2. Tables RSDDSTAT* (see the sketch below).
    3. Install the BW statistics (technical content).
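    For option 2, a minimal sketch of looking at the raw statistics directly; this assumes the classic RSDDSTAT table (BW 3.x). In later releases the data is split across several RSDDSTAT* tables, so check SE11 for your release first:
    DATA: lt_stat TYPE STANDARD TABLE OF rsddstat.

    " Pull a sample of raw query runtime statistics records for a quick look.
    SELECT * FROM rsddstat
      INTO TABLE lt_stat
      UP TO 100 ROWS.
    From there you can evaluate the runtime figures per query, or simply browse the table in SE16.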
    There are docs on this, and also the BI performance knowledge center.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    BW Performance Tuning Knowledge Center - SAP Developer Network (SDN)
    Business Intelligence Performance Tuning [original link is broken]
    also take a look
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/31b6b490-0201-0010-e4b6-a1523327025e
    Prakash's weblog on this topic..
    /people/prakash.darji/blog/2006/01/27/query-creation-checklist
    /people/prakash.darji/blog/2006/01/26/query-optimization
    OSS notes
    557870 'FAQ BW Query Performance'
    and 567746 'Composite note BW 3.x performance Query and Web'.

  • How to measure the performance of Extractor

    Hi,
    How do I measure the time taken by the extractor when it is executed from RSA3 for a given selection?
    Lots of threads mention ST05, but those traces are too granular to analyse on their own.
    I need both the overall time taken and the time taken by the individual SQL statements. Please provide specific pointers.
    Thanks,
    Balaji

    Maybe SE30 can help you....
    Regards,
    Fred

  • How to measure the sap system performance

    Hi experts,
    I want to understand how system performance is calculated in SAP. For example, how are system and database performance calculated by SAP in SE30 and SCI (SAP Code Inspector)?
    With regards,
    James
    Valuable answers will be rewarded.

    Hi James,
    Check these links, they will help you
    http://help.sap.com/saphelp_nw04s/helpdata/en/8a/3b834014d26f1de10000000a1550b0/frameset.htm
    http://help.sap.com/saphelp_nw04s/helpdata/en/d1/801f7c454211d189710000e8322d00/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/d1/801f7c454211d189710000e8322d00/frameset.htm
    Thanks
    Janani
    award points if helpful

  • How to measure mapping execution speed

    Hi,
    Currently I'm trying to measure performance differences between Interface Mappings which contain one single Message Mapping and Interface Mappings which contain 2 or 3 Message Mappings.
    I already tried to do this with the RWB and Performance Monitoring. But Performance Monitoring shows the processing time through the whole of XI, not just the mapping execution time, so it is difficult to get a clean measurement there without influences from queueing and so on.
    The Test tab in the Integration Builder has too coarse a resolution (one second); the mapping execution time is below that.
    Do you have any ideas to measure this?
    Or do you have experience with performance differences between those two kinds of Interface Mappings?
    regards,
    ms
    P.S. i'm using XI 3.0

    Hi, Manuel:
    For the two scenarios whose performance you want to compare, trigger them separately.
    Take the following steps to take measurements for the two scenarios:
    Go to SXMB_MONI, find the message, and go to a pipeline step after your "Request Message Mapping" step,
    e.g. you can select the "Technical Routing" step, expand it, -> SOAP Header -> Performance Header.
    You will see the start timestamp for each step executed up to the current step.
    Locate your mapping program, take its begin and end timestamps, and then you will know how long the mapping program takes.
    For the scenario with several mapping programs, make sure you take the begin timestamp of the first mapping program and the end timestamp of the last one; the difference is the total time your mapping programs take.
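    If you prefer to compute the difference programmatically rather than by hand, a minimal ABAP sketch (the two timestamp values are made-up examples; transcribe the real begin/end timestamps from the Performance Header):
    DATA: lv_begin TYPE timestampl,
          lv_end   TYPE timestampl,
          lv_secs  TYPE tzntstmpl.

    " Example values only - copy the real timestamps from SXMB_MONI.
    lv_begin = '20080329054210.123456'.
    lv_end   = '20080329054212.765432'.

    lv_secs = cl_abap_tstmp=>subtract( tstmp1 = lv_end
                                       tstmp2 = lv_begin ).
    WRITE: / 'Mapping run time in seconds:', lv_secs.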
    Hope this helps.
    Liang

  • How to measure the size of an object written by myself?

    Hi all,
    I'm going to measure the performance on throughput of an ad hoc wireless network that is set up for my project. I wrote a java class that represents a particular data. In order to calculate the throughput, I'm going to send this data objects from one node to another one in the network for a certain time. But I've got a problem with it- How to measure the size of an object that was written by myself in byte or bit in Java? Please help me with it. Thank you very much.

    LindaL22 wrote:
    "wrote a java class that represents a particular data. In order to calculate the throughput, I'm going to send this data"
    "A data" doesn't exist, so there's nothing to measure.
    "objects from one node to another one in the network for a certain time. But I've got a problem with it- How to measure the size of an object that was written by myself in byte or bit in Java?"
    Not.

  • Use SE30 to measure performance in background

    Hi,
      I want to measure performance of an existing program by running it in background because it takes a really long time to run.
      I'm trying to use the "Schedule Measurements for user service" option from the menu. But when I click on the new icon after going to the Schedule Measurements for user service menu, I get a short dump with the message
    MOVE_NOT_SUPPORTED.
    on the following line of code of standard SAP program SAPMS38T
    > convert time stamp l_ts time zone sy-zonlo.
    My question is
    1. Am I using the right option to measure the performance in the background, or is there any other way (apart from changing the code to add log statements)?
    2. How can I fix the above problem.
    Will give points to the right answer. Thanks for reading.

    I generally use ST05 to measure performance.
    Rob

  • How to measure the task instance size

    I am trying to understand how to measure the size a custom task instance occupies in the cache repository. Is there a way to obtain the size (in KB) in Lighthouse 4.1 + SP5? Please help.

    Hi, Luiz!
    In order to estimate the instance size, you will have to sum up the sizes of all the instance variables.
    From a practical perspective it may be difficult to use that approach when your instance variables are not simple types.
    For already existing instances, you can look up the instance size in the database:
    An instance is just a Java object that is serialized as a byte[] and stored in the engine database, in the PPROCINSTANCE table, column INSTDATA.
    You can write a simple program (either Java or SQL*Plus) that reads the instance from the BLOB and measures its size.
    And yes, with large instances you can have performance problems.

  • How to measure/estimate Bias of the signal

    Dear Sir
    I am performing the FFT of a signal which I collect in real time from the Hall effect current transducer SCT-013-005. I need to measure/estimate the bias of this signal. Can you please guide me on how I can do that? (Attached is my VI, which was developed in LabVIEW 2012.)
    I shall be thankful to you for your attention and consideration.
    Kind Regards
    Urfee
    Attachments:
    How to measure Biase of the signal.vi (930 KB)

    Hi tronoh,
    what about using the "Basic Averaged DC&RMS" function on your periodic signal?
    Best regards,
    GerdW
    CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
    Kudos are welcome

  • How to judge performance prob

    Dear All,
    I have gone through ST03, ST06, ST02 and ST04 several times but never reach any conclusion about where the problem is.
    What do I have to check?
    Total CPU, average CPU time, total response time, average response time, total DB time, average DB time?
    Which background task is taking too much time?
    Is the system busy with RFC calls?
    Is any transaction taking too much CPU time but not much DB time?
    How do I measure these times against a benchmark, just like a doctor checks blood pressure against a range of 80 to 120?
    What should I check in ST02?
    I think I am looking for threshold values.
    Suppose the CPU time is 3,200 ms; is that high or normal?
    I am very confused about performance analysis.
    Please advise.
    Any good docs, links, wikis or tips?

    As long as no user complains and all jobs run OK, you are basically fine.
    But let me give an example: you have a user complaining that transaction X is running slowly. Now you need to find out what is taking the largest part of the response time. I often just monitor the work process in transaction SM50, constantly refreshing while the transaction is running. If you see the work process stuck on one single database table, you will have to look that table up. If hardly any time is spent on database tables, most of the time is being spent in ABAP code. You can also use transaction ST03 to figure out where the main part of the response time is spent.
    Depending on this, you can do an SQL trace with ST05 (if the time is spent on the database) or a runtime analysis in SE30 to find which parts consume the most time. Or you can use the debugger to interrupt the running transaction; often the place where you land is the part taking the most time.
    Causes for high database times are often missing indexes, wrong database access (wrong cbo decision) or suboptimal coding.
    Causes for high abap times are often nested loops, searching on unsorted lists etc.
    Regards, Michael
