Urgent Performance Issue

Hi,
Can someone please explain: if I have a performance issue on the portal side, what cache settings should I choose in RSRT, and what settings at the reporting agent level?
I would appreciate a detailed explanation.
I will award points.
Thanks
Sakshi

Hi Sakshi,
Please read the following carefully; it should give you the correct information regarding your question.
Read Mode
The read mode determines how the OLAP processor gets data during navigation. Three alternatives are supported:
1. Read when navigating/expanding the hierarchy
In this method, the system transports the smallest amount of data from the database to the OLAP processor but the number of read processes is the largest.
In the "Read when navigating" mode below, data is requested in a hierarchy drilldown for the fully expanded hierarchy. In the "Read when navigating/expanding the hierarchy" mode, data in the hierarchy is aggregated by the database and transferred to the OLAP processor from the lowest hierarchy level displayed in the start list. When expanding a hierarchy node, the system intentionally reads this node's children.
You can improve the performance of queries with large presentation hierarchies by creating aggregates in a middle hierarchy level that is greater than or equal to the start level.
2. Read when navigating
The OLAP processor only requests the data required for each query navigation status in the Business Explorer. The data required is read for each navigation step.
In contrast to the "Read when navigating/expanding the hierarchy" mode, the system always fully reads presentation hierarchies at tree level.
When expanding nodes, the OLAP processor can read the data from the main memory.
When accessing the database, the system uses the most suitable aggregate table and, if possible, aggregates in the database itself.
3. Read everything at once
There is only one read process in this mode. When executing the query in the Business Explorer, the data required for all possible navigation steps for this query is read to the OLAP processor's main memory area. When navigating, all new navigation statuses are aggregated and calculated from the main memory data.
The "Read when navigating/expanding the hierarchy" mode has a markedly better performance in almost all cases than the other two modes. This is because the system only requests the data that the user wants to see in this mode.
The "Read when navigating" setting, in contrast to "Read when navigating/expanding the hierarchy", only has a better performance for queries with presentation hierarchies.
In contrast to the two previous modes, the "Read everything at once" setting also has a better performance with queries with free characteristics. The idea behind aggregates, that is working with pre-aggregated data, is least supported in the "Read everything at once" mode. This is because the OLAP processor carries out aggregation in each query view.
We recommend you choose the "Read when navigating/ expanding the hierarchy" mode.
Only use different mode to "Read when navigating/ expanding the hierarchy" in exceptional circumstances.
The "Read everything at once" mode can be useful in the following cases:
The InfoProvider does not support selection, meaning the OLAP processor reads significantly more data than the query needs anyway.
A user exit is active in the query that prevents the data from being aggregated in the database.
If this information is useful to you, please award points.
Have a nice day.
ANR

Similar Messages

  • Urgent: Performance Issue DELETE, INSERT INTO SELECT, UPDATE

    Hi,
    NEED ASSISTANCE TO OPTIMIZE THE INSERT STATEMENT (INSERT INTO ... SELECT):
    =================================================
    We have a report.
    As per the current design, the following steps are used to populate the custom table XXX_TEMP_REP, which is used for reporting:
    1) DELETE all the records from the custom table XXX_TEMP_REP.
    2) INSERT records into the custom table XXX_TEMP_REP (assume all the records relate to type A)
    using
    INSERT..... INTO..... SELECT.....
    statement.
    3) UPDATE records in XXX_TEMP_REP using some custom logic for the records populated.
    4) INSERT records into the custom table XXX_TEMP_REP (records related to type B)
    using
    INSERT..... INTO..... SELECT.....
    statement.
    Statistics gathered for the INSERT statement are:
    Event Wait Information
    SID 460 is waiting on event : db file sequential read
    P1 Text : file#
    P1 Value : 20
    P2 Text : block#
    P2 Value : 435039
    P3 Text : blocks
    P3 Value : 1
    Session Statistics
    redo size : 293.84 M
    parse count (hard) : 34
    parse count (total) : 1217
    user commits : 3
    Transaction and Rollback Information
    Rollback Used : 35.1796875 M
    Rollback Records : 355886
    Rollback Segment Number : 12
    Rollback Segment Name : _SYSSMU12$
    Logical IOs : 1627182
    Physical IOs : 136409
    RBS Starting Extent ID : 14
    Transaction Start Time : 09/29/10 04:22:11
    Transaction_Status : ACTIVE
    Please suggest how this can be optimized.
    Regards,
    Narender

    Hello,
    Is there any relation with the Oracle Forms tool ?
    Francois
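    For reference, a common first optimization for this load pattern is to replace the full DELETE with TRUNCATE and use a direct-path insert, both of which sharply cut the redo and undo shown in the statistics above. A minimal sketch, assuming hypothetical column names and source table:
    TRUNCATE TABLE xxx_temp_rep;
    INSERT /*+ APPEND */ INTO xxx_temp_rep (col1, col2)   -- direct-path insert
    SELECT col1, col2
    FROM   source_a                                       -- hypothetical source
    WHERE  rec_type = 'A';
    COMMIT;  -- a direct-path insert must be committed before the table can be queried again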

  • Urgent: general ABAP performance issue

    Hi folks,
    I did some development in a new smartform and it works fine, but I have a database performance issue: database time is 76% of the total runtime. I use code similar to the below, with various conditions, in 12 places. Is it possible to improve the performance of this type of code? Please check it and let me know how, as soon as possible. Also, what percentage is considered good for this type of performance measure?
    DATA : BEGIN OF ITVBRPC OCCURS 0,
           LV_POSNR LIKE VBRP-POSNR,
           END OF ITVBRPC.
    DATA : BEGIN OF ITKONVC OCCURS 0,
            LV_KNUMH LIKE KONV-KNUMH,
            LV_KSCHL LIKE KONV-KSCHL,
           END OF ITKONVC.
    DATA:  BEGIN OF ITKONHC OCCURS 0,
           LV_KNUMH LIKE KONH-KNUMH,
           LV_KSCHL LIKE KONH-KSCHL,
           LV_KZUST LIKE KONH-KZUST,
           END OF ITKONHC.
    DATA: BEGIN OF ITKONVC1 OCCURS 0,
           LV_KWERT LIKE KONV-KWERT,
           END OF ITKONVC1.
    DATA :  BEGIN OF ITCALCC OCCURS 0,
           LV_KWERT LIKE KONV-KWERT,
           END OF ITCALCC.
    DATA: COUNTC(3) TYPE n,
           TOTALC LIKE KONV-KWERT.
    SELECT POSNR FROM VBRP INTO ITVBRPC
      WHERE VBELN = INV_HEADER-VBELN AND ARKTX = WA_INVDATA-ARKTX .
    APPEND ITVBRPC.
    ENDSELECT.
    LOOP AT ITVBRPC.
    SELECT KNUMH KSCHL FROM KONV INTO ITKONVC WHERE KNUMV =
    LV_VBRK-KNUMV AND KPOSN = ITVBRPC-LV_POSNR AND KSCHL = 'ZLAC'.
    APPEND ITKONVC.
    ENDSELECT.
    ENDLOOP.
    SORT ITKONVC BY LV_KNUMH.
    DELETE ADJACENT DUPLICATES FROM ITKONVC.
    LOOP AT ITKONVC.
    SELECT KNUMH KSCHL KZUST FROM KONH INTO ITKONHC WHERE KNUMH = ITKONVC-LV_KNUMH AND KSCHL = 'ZLAC' AND KZUST = 'Z02'.
    APPEND ITKONHC.
    ENDSELECT.
    ENDLOOP.
    LOOP AT ITKONHC.
    SELECT KWERT FROM KONV INTO ITKONVC1 WHERE KNUMH = ITKONHC-LV_KNUMH AND
    KSCHL = ITKONHC-LV_KSCHL AND KNUMV = LV_VBRK-KNUMV.
    MOVE ITKONVC1-LV_KWERT TO ITCALCC-LV_KWERT.
    APPEND ITCALCC.
    ENDSELECT.
    endloop.
    LOOP AT ITCALCC.
    COUNTC = COUNTC + 1.
    TOTALC = TOTALC + ITCALCC-LV_KWERT.
      ENDLOOP.
    MOVE ITKONHC-LV_KSCHL TO LV_CKSCHL.
    MOVE TOTALC TO LV_CKWERT.
    It's urgent.
    Thanks, bye,
    suresh

    You need to use FOR ALL ENTRIES instead of a SELECT inside the loop.
    Try this:
    DATA: BEGIN OF ITVBRPC OCCURS 0,
            VBELN LIKE VBRP-VBELN,
            LV_POSNR LIKE VBRP-POSNR,
          END OF ITVBRPC.
    DATA: IT_VBRPC_TMP LIKE ITVBRPC OCCURS 0 WITH HEADER LINE.
    DATA: BEGIN OF ITKONVC OCCURS 0,
            LV_KNUMH LIKE KONV-KNUMH,
            LV_KSCHL LIKE KONV-KSCHL,
          END OF ITKONVC.
    DATA: BEGIN OF ITKONHC OCCURS 0,
            LV_KNUMH LIKE KONH-KNUMH,
            LV_KSCHL LIKE KONH-KSCHL,
            LV_KZUST LIKE KONH-KZUST,
          END OF ITKONHC.
    DATA: BEGIN OF ITKONVC1 OCCURS 0,
            KNUMH LIKE KONV-KNUMH,
            KSCHL LIKE KONV-KSCHL,
            LV_KWERT LIKE KONV-KWERT,
          END OF ITKONVC1.
    DATA: BEGIN OF ITCALCC OCCURS 0,
            LV_KWERT LIKE KONV-KWERT,
          END OF ITCALCC.
    DATA: COUNTC(3) TYPE n,
          TOTALC LIKE KONV-KWERT.
    *SELECT POSNR FROM VBRP INTO ITVBRPC
    *WHERE VBELN = INV_HEADER-VBELN AND ARKTX = WA_INVDATA-ARKTX .
    *APPEND ITVBRPC.
    *ENDSELECT.
    SELECT VBELN POSNR FROM VBRP INTO TABLE ITVBRPC
      WHERE VBELN = INV_HEADER-VBELN
        AND ARKTX = WA_INVDATA-ARKTX.
    IF sy-subrc EQ 0.
      IT_VBRPC_TMP[] = ITVBRPC[].
      SORT IT_VBRPC_TMP BY VBELN LV_POSNR.
      DELETE ADJACENT DUPLICATES FROM IT_VBRPC_TMP COMPARING VBELN LV_POSNR.
      " One array fetch instead of a SELECT inside LOOP AT ITVBRPC
      SELECT KNUMH KSCHL FROM KONV
        INTO TABLE ITKONVC
        FOR ALL ENTRIES IN IT_VBRPC_TMP
        WHERE KNUMV = LV_VBRK-KNUMV
          AND KPOSN = IT_VBRPC_TMP-LV_POSNR
          AND KSCHL = 'ZLAC'.
      IF sy-subrc EQ 0.
        SORT ITKONVC BY LV_KNUMH.
        DELETE ADJACENT DUPLICATES FROM ITKONVC COMPARING LV_KNUMH.
        SELECT KNUMH KSCHL KZUST FROM KONH
          INTO TABLE ITKONHC
          FOR ALL ENTRIES IN ITKONVC
          WHERE KNUMH = ITKONVC-LV_KNUMH
            AND KSCHL = 'ZLAC'
            AND KZUST = 'Z02'.
        IF sy-subrc EQ 0.
          SELECT KNUMH KSCHL KWERT FROM KONV
            INTO TABLE ITKONVC1
            FOR ALL ENTRIES IN ITKONHC
            WHERE KNUMV = LV_VBRK-KNUMV
              AND KNUMH = ITKONHC-LV_KNUMH
              AND KSCHL = ITKONHC-LV_KSCHL.
        ENDIF.
      ENDIF.
    ENDIF.
    *LOOP AT ITVBRPC.
    *SELECT KNUMH KSCHL FROM KONV INTO ITKONVC WHERE KNUMV =
    *LV_VBRK-KNUMV AND KPOSN = ITVBRPC-LV_POSNR AND KSCHL = 'ZLAC'.
    *APPEND ITKONVC.
    *ENDSELECT.
    *ENDLOOP.
    *SORT ITKONVC BY LV_KNUMH.
    *DELETE ADJACENT DUPLICATES FROM ITKONVC.
    *LOOP AT ITKONVC.
    *SELECT KNUMH KSCHL KZUST FROM KONH INTO ITKONHC WHERE KNUMH = ITKONVC-LV_KNUMH AND KSCHL = 'ZLAC' AND KZUST = 'Z02'.
    *APPEND ITKONHC.
    *ENDSELECT.
    *ENDLOOP.
    *LOOP AT ITKONHC.
    *SELECT KWERT FROM KONV INTO ITKONVC1 WHERE KNUMH = ITKONHC-LV_KNUMH *AND
    *KSCHL = ITKONHC-LV_KSCHL AND KNUMV = LV_VBRK-KNUMV.
    *MOVE ITKONVC1-LV_KWERT TO ITCALCC-LV_KWERT.
    *APPEND ITCALCC.
    *ENDSELECT.
    *endloop.
    " ITCALCC is no longer filled; total directly over the array-fetched ITKONVC1.
    LOOP AT ITKONVC1.
      COUNTC = COUNTC + 1.
      TOTALC = TOTALC + ITKONVC1-LV_KWERT.
    ENDLOOP.
    " Fill the header line before the final MOVEs.
    READ TABLE ITKONHC INDEX 1.
    MOVE ITKONHC-LV_KSCHL TO LV_CKSCHL.
    MOVE TOTALC TO LV_CKWERT.

  • Oracle Forms6i Query Performance issue - Urgent

    Hi All,
    I'm using Oracle Forms 6i and Oracle DB 9i.
    I'm facing a performance issue in query forms: the detail block of the form takes a long time to load the data.
    The form contains 2 non-database blocks:
    1. HDR - 3 input parameters
    2. DETAILS - grid - details
    HDR input fields:
    1. Company Code
    2. Company Account No
    3. Customer Name
    The DETAILS grid displays the details.
    There are 2 tables involved:
    1. Table1 - 1 crore records
    2. Table2 - 4 crore records
    In a form procedure, one cursor is built and fetched directly, and the values are assigned to the form block fields.
    Below I've pasted the query:
    SELECT
      t1.entry_dt,
      t2.authoriser_code,
      t1.company_code,
      t1.company_ac_no,
      initcap(t1.customer_name) cust_name,
      t2.agreement_no,
      t1.customer_id
    FROM
      table1 t1,
      table2 t2
    WHERE
      (t2.trans_no = t1.trans_no OR t2.temp_trans_no = t1.trans_no)
      AND t1.company_code = nvl(:hdr.l_company_code, t1.company_code)
      AND t1.company_ac_no = nvl(:hdr.l_company_ac_no, t1.company_ac_no)
      AND lower(t1.customer_name) LIKE lower(nvl('%'||:hdr.l_customer_name||'%', t1.customer_name))
    GROUP BY
      t1.entry_dt,
      t2.authoriser_code,
      t1.company_code,
      t1.company_ac_no,
      t1.customer_name,
      t2.agreement_no,
      t1.customer_id;
    WHERE clause analysis:
    1. OR operator (in table2, two different columns are compared with one column in table1)
    2. LIKE operator
    3. All the columns have indexes, but they are not used properly; there is always a full table scan
    4. NVL checks
    5. If I run the query directly in the database it is a little faster; from the front end it is very slow
    Input parameters vs. number of records retrieved:
    Only company code: 50 - 500 records
    Company code and company account number: 1 - 5 records
    Company code, company account number, and customer name: 1 - 5 records
    I have tried the following approaches:
    1. Split the query using UNION (OR clause separated): nested loops cost 850, nested loops cost 750; index access by rowid cost 160, index access by rowid cost 152; full table access.
    2. Dynamic SQL build: DBMS_SQL.DEFINE_COLUMN ...
    3. Given only one input parameter: nested loops cost 780, nested loops cost 780; index access by rowid cost 148, index access by rowid cost 152; full table access.
    Still I am facing the same issue.
    Please help me out on this.
    Thanks and Regards,
    Oracle1001
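    For reference, the standard OR-expansion of that join condition, sketched with the same aliases; LNNVL keeps the second branch disjoint from the first so no rows are duplicated, and each branch can then use its own index (the NVL-style filters would be appended to both branches):
    SELECT t1.entry_dt, t2.authoriser_code, t1.company_code, t1.company_ac_no
    FROM   table1 t1, table2 t2
    WHERE  t2.trans_no = t1.trans_no
    UNION ALL
    SELECT t1.entry_dt, t2.authoriser_code, t1.company_code, t1.company_ac_no
    FROM   table1 t1, table2 t2
    WHERE  t2.temp_trans_no = t1.trans_no
    AND    LNNVL(t2.trans_no = t1.trans_no);   -- true when the first branch did not match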

    Sudhakar P wrote:
    the below query takes more than one minute while updating the records through Pro*C.
    Execute 562238 161.03 174.15 7 3932677 2274833 562238
    Hi Sudhakar,
    If the database is capable of executing 562,238 update statements in one minute, then that's pretty good, don't you think?
    Your real problem is in the application code which probably looks something like this in pseudocode:
    for i in (some set containing 562,238 rows)
    loop
      <your update statement with all the bind variables>
    end loop;
    If you transform your code to do a single update statement, you'll gain a lot of seconds.
    Regards,
    Rob.
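    A minimal sketch of that transformation (table and column names are hypothetical):
    -- Row by row from the client: 562,238 round trips
    -- for r in (select id, new_val from staging) loop
    --   update target set val = r.new_val where id = r.id;
    -- end loop;
    -- Set-based: one statement does the same work
    MERGE INTO target t
    USING staging s
    ON (t.id = s.id)
    WHEN MATCHED THEN
      UPDATE SET t.val = s.new_val;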

  • Performance issues with schema registration, select & insert queries

    I am facing performance issues with schema registration and with select and insert queries on version 10.2.0.3. It is taking around 45 minutes to register a schema, and around 5 minutes to insert a single document into XML DB, whereas it took less than a minute to insert a single document into XML DB on version 9.2.0.6. I would like to know the cause and the solution to resolve this issue. Please help me out, as it is very urgent for me.

    Since it appears that this is an XML DB specific question, you're probably better off posting in the XML DB forum. The folks over there have much more experience with the ins and outs of that particular product.
    Justin

  • Performance issue in Portal Reports

    Hi
    We are experiencing a serious performance issue in a report and need an urgent fix for it.
    The report is a "Reports From SQL Query" report. I need to find a way to create the WHERE clause dynamically; otherwise I have to write the statement in a way that excludes the use of indexes.
    A full table scan is not a valid option here; the number of records is simply too high (several millions; it's a data warehouse solution). In the Developer package we can build the WHERE clause dynamically; this basic yet extremely important feature is essential to any database application.
    We need to know how to do it, and if this functionality is not natively supported, then this should be one of the priority-one features to implement in future releases.
    However, what do I do for now?
    Thanks in advance

    I have found a temporary workaround by editing the WHERE clause in the stored procedure manually. However, this fix has to be redone every time a change is committed in the wizard, so it is still not a solution to rely on indefinitely, but it's OK for now.
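    For comparison, this is how the Developer package mentioned above does it: a lexical parameter built in the After Parameter Form trigger, generating only the predicates for the parameters that were actually supplied, so indexed access paths stay usable. A minimal sketch (parameter and column names here are illustrative):
    function AfterPForm return boolean is
    begin
      :where_clause := 'WHERE 1 = 1';
      if :p_company_code is not null then
        :where_clause := :where_clause || ' AND company_code = :p_company_code';
      end if;
      if :p_customer_name is not null then
        :where_clause := :where_clause || ' AND customer_name = :p_customer_name';
      end if;
      return (TRUE);
    end;
    The report query then references it as: SELECT ... FROM facts_table &where_clause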

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What are the query performance issues we need to take care of? Please explain and let me know the transaction codes. It's urgent.
    What are the data-loading performance issues we need to take care of? Please explain and let me know the transaction codes. It's urgent.
    Will reward full points.
    Regards,
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9) Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table, using the ABAP Dictionary, to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • Performance issue on Fiscal period.

    Hi all,
    I have a MultiProvider built on an InfoSet. The InfoSet is built on 3 standard ODS objects (0FIGL_O02, 0PUR_O01, 0PUR_DS03). The user runs the report by Company Code and Fiscal Period.
    Company Code and Fiscal Period are available only in the FI-GL ODS; the purchasing ODS has only the Fiscal Year Variant time characteristic. When I try to run the report, it takes an unusually long time.
    How should I resolve this performance issue?
    1) Will getting Fiscal Period into the purchasing ODS help improve the performance? If so, can anyone please give me the step-by-step process? This is very urgent.
    2) Or should I take some other approach to improve the performance? The FI-GL ODS already has secondary indexes on it.
    Please advise.

    Duplicate post:
    Performance issue on AFPO

  • Smartform performance issue

    Hi folks,
    I did some development in a new smartform and it works fine, but I have a database performance issue: database time is 76% of the total runtime. I use code similar to the below, with various conditions, in 12 places. Is it possible to improve the performance of this type of code? Please check it and let me know how, as soon as possible. Also, what percentage is considered good for this type of performance measure?
    DATA : BEGIN OF ITVBRPC OCCURS 0,
           LV_POSNR LIKE VBRP-POSNR,
           END OF ITVBRPC.
    DATA : BEGIN OF ITKONVC OCCURS 0,
            LV_KNUMH LIKE KONV-KNUMH,
            LV_KSCHL LIKE KONV-KSCHL,
           END OF ITKONVC.
    DATA:  BEGIN OF ITKONHC OCCURS 0,
           LV_KNUMH LIKE KONH-KNUMH,
           LV_KSCHL LIKE KONH-KSCHL,
           LV_KZUST LIKE KONH-KZUST,
           END OF ITKONHC.
    DATA: BEGIN OF ITKONVC1 OCCURS 0,
           LV_KWERT LIKE KONV-KWERT,
           END OF ITKONVC1.
    DATA :  BEGIN OF ITCALCC OCCURS 0,
           LV_KWERT LIKE KONV-KWERT,
           END OF ITCALCC.
    DATA: COUNTC(3) TYPE n,
           TOTALC LIKE KONV-KWERT.
    SELECT POSNR FROM VBRP INTO ITVBRPC
      WHERE VBELN = INV_HEADER-VBELN AND ARKTX = WA_INVDATA-ARKTX .
    APPEND ITVBRPC.
    ENDSELECT.
    LOOP AT ITVBRPC.
    SELECT KNUMH KSCHL FROM KONV INTO ITKONVC WHERE KNUMV =
    LV_VBRK-KNUMV AND KPOSN = ITVBRPC-LV_POSNR AND KSCHL = 'ZLAC'.
    APPEND ITKONVC.
    ENDSELECT.
    ENDLOOP.
    SORT ITKONVC BY LV_KNUMH.
    DELETE ADJACENT DUPLICATES FROM ITKONVC.
    LOOP AT ITKONVC.
    SELECT KNUMH KSCHL KZUST FROM KONH INTO ITKONHC WHERE KNUMH = ITKONVC-LV_KNUMH AND KSCHL = 'ZLAC' AND KZUST = 'Z02'.
    APPEND ITKONHC.
    ENDSELECT.
    ENDLOOP.
    LOOP AT ITKONHC.
    SELECT KWERT FROM KONV INTO ITKONVC1 WHERE KNUMH = ITKONHC-LV_KNUMH AND
    KSCHL = ITKONHC-LV_KSCHL AND KNUMV = LV_VBRK-KNUMV.
    MOVE ITKONVC1-LV_KWERT TO ITCALCC-LV_KWERT.
    APPEND ITCALCC.
    ENDSELECT.
    endloop.
    LOOP AT ITCALCC.
    COUNTC = COUNTC + 1.
    TOTALC = TOTALC + ITCALCC-LV_KWERT.
      ENDLOOP.
    MOVE ITKONHC-LV_KSCHL TO LV_CKSCHL.
    MOVE TOTALC TO LV_CKWERT.
    It's urgent.
    Thanks, bye,
    suresh

    Hi,
    You can increase the performance as follows:
    1. Change the OCCURS clause to a TYPES-style declaration. You need to declare it like this:
       DATA: BEGIN OF xVBRPC,
               LV_POSNR LIKE VBRP-POSNR,
             END OF xVBRPC.
       DATA: ITVBRPC LIKE TABLE OF xVBRPC.
    2. Do not use SELECT statements inside LOOP statements.
    Ashven

  • Performance issue in MIRO

    Hi Friends,
    While executing transaction MIRO, we are facing a performance issue. The details are as below.
    We create a PO, then receive the goods against the PO, and then do invoice verification. In the PO we check the box for GR-Based IV.
    We have observed that if we check the box for GR-Based IV, there is a performance issue with MIRO, but if we do not check this box, there is no performance issue.
    Could you tell me how to resolve this issue?
    I will reward every useful response.
    Please treat it as urgent.
    Thanks in advance,
    Bobby

    If you check the box for GR-Based IV, then the GR data needs to be picked up from tables MKPF and MSEG for MIRO.
    Check with your Basis team... they may find the exact reason.

  • AP invoice workbench performance issue.

    Hi Guys,
    We have a production system with RHEL 5, R12 12.0.6, and DB 10.2.0.4.
    We are facing a performance issue in the Invoice Workbench: when users click on Invoice Batches or Invoices, it takes a long time to open the invoice form.
    Validation of invoices also consumes a lot of time, which is unacceptable.
    Please give some pointers or note IDs to follow; we have already logged an SR with Oracle.
    This is urgent.
    Regards,
    Milan

    Please see these docs.
    R12 Invoice Workbench Form Has A Performance Issue [ID 1072338.1]
    Bad Performance When Checking Funds In Invoice Workbench [ID 1091280.1]
    Bad Performance In Invoice Workbench (APXINWKB) Find Window When Searching By Purchase Order [ID 1195623.1]
    R12 AP Invoice Workbench Performance Issues [ID 957105.1]
    Poor Performance On Invoice Validation In The Invoice Workbench [ID 1130313.1]
    Invoice Workbench> Actions: Pay in Full Performance Issue [ID 983804.1]
    Invoice Workbench (APXINWKB) Performance Issue While Selecting The Self Assessment Check Box [ID 1210340.1]
    Performance of Project Expenditure LOV At AP Invoice Header in Invoice Workbench [ID 1143943.1]
    R12.1.1 Performance Problem in AP Invoice Workbench [ID 861205.1]
    R12 Invoice Performance FAQs [ID 579737.1]
    Thanks,
    Hussein

  • Report Performance Issue - Activity

    Hi gurus,
    I'm developing an Activity report using the transactional database (online real-time objects).
    The purpose of the report is to list all contact-related activities, and activities NOT related to a contact, by activity owner (user id).
    In order to fulfill that requirement I've created 2 reports:
    1) All activities related to a contact -- Report A
    pulls in Activity ID, Activity Type, Status, Contact ID
    2) All activities not related to a contact UNION all activities related to a contact (base report) -- Report B
    To get the list of activities not related to a contact, I'm using an advanced filter based on the result of another request, which I think is the part that slows down the query:
    <Activity ID not equal to any Activity ID in Report B>
    Has anyone encountered a performance issue due to an advanced filter in Analytics before?
    Any input is really appreciated.
    Thanks in advance,
    Fina

    Fina,
    Union is always the last option. If you can get all records in one report, do not use UNION.
    Since all the records you are targeting are in the Activity subject area, it is not necessary to combine reports. Add a column with the following logic:
    if contact id is null (or = 'Unspecified') then owner name else contact name
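    In the Answers column formula this could be expressed roughly as follows (the exact field names depend on the subject area and are illustrative):
    CASE WHEN "Contact ID" IS NULL OR "Contact ID" = 'Unspecified'
         THEN "Owner Name"
         ELSE "Contact Name"
    END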
    Hopefully this helps.

  • Report performance Issue in BI Answers

    Hi All,
    We have a performance issue with reports: a report runs for more than 10 minutes. We took the query from the session log and ran it in the database, where it took no more than 2 minutes. We have verified that there are proper indexes on the WHERE clause columns.
    Could anyone suggest how to improve the performance in BI Answers?
    Thanks in advance,

    I hope you don't have many CASE statements and complex calculations in Answers.
    The next thing to monitor is how many rows of data you are trying to retrieve with the query. If the volume is huge, it takes time to do the formatting in Answers, because you are dumping huge volumes of data. A database (like Teradata) initially returns something like 1-2,000 records; if you hit 'show all records', even the database will take a fair amount of time if you are dumping many records.
    hope it helps
    thanks
    Prash

  • BW BCS cube(0bcs_vc10 ) Report huge performance issue

    Hi Masters,
    I am working on a solution for a BW report developed on the 0bcs_vc10 virtual cube.
    Some of the queries take 15 to 20 minutes to execute.
    This is a huge performance issue. We are using BW 3.5, and the report was developed in BEx and published through the portal. If anyone has faced a similar problem, please advise how you tackled it, and give the detailed analysis approach you used to resolve it.
    The current support package levels we are using are:
    SAP_BW 350 0016 SAPKW35016
    FINBASIS 300 0012 SAPK-30012INFINBASIS
    BI_CONT 353 0008 SAPKIBIFP8
    SEM-BW 400 0012 SAPKGS4012
    Best of Luck
    Chris

    Ravi,
    I already did that; it is not helping the performance much. Reports are still taking 15 to 20 minutes. I wanted to know whether anybody in this forum has had the same issue, and how they resolved it.
    Regards,
    Chris

  • Interested in performance issues?  Read this!  If you can explain it, you're a Jedi master!

    This is the question we will try to answer...
    What is the hardware bottleneck of Adobe Premiere Pro CS6?
    I used PPBM5 as a benchmark testing template.
    All the data and logs have been collected using performance counters.
    First of all, let me describe my computer...
    Operating System
    Microsoft Windows 8 Pro 64-bit
    CPU
    Intel Xeon E5 2687W @ 3.10GHz
    Sandy Bridge-EP/EX 32nm Technology
    RAM
    Corsair Dominator Platinum 64.0 GB DDR3
    Motherboard
    EVGA Corporation Classified SR-X
    Graphics
    PNY Nvidia Quadro 6000
    EVGA Nvidia GTX 680   // Yes, I created bench stats for both cards
    Hard Drives
    16.0GB Romex RAMDISK (RAID)
    556GB LSI MegaRAID 9260-8i SATA3 6Gb/s, 5 disks, with FastPath chip installed (RAID 0)
    I have other RAID installed, but not relevant for the present post...
    PSU
    Corsair 1000 Watts
    After many days of tests, I want to share my results with the community and comment on them.
    CPU Introduction
    I tested my CPU and pushed it to maximum speed to understand where its limit is and whether I can reach it, and I've logged all results precisely in a graph (see picture 1).
    Intro : I tested my E5 Xeon 2687W (8 cores, Hyper-Threading, 16 threads) to see whether programs can make full use of it.  I used Prime95 to get the result.  // I know this seems ordinary, but you will understand soon...
    The result : Yes, I can get 100% out of my CPU with 1 program using 20 threads in parallel.  The CPU gives everything it can!
    Comment : I put the 3 IO counters (CPU, disk, RAM) on the graph of my computer during the test...
    (picture 1)
    Disk Introduction
    I tested my disk and pushed it to maximum speed to understand where the limit is, and I've logged all results precisely in a graph (see picture 2).
    Intro : I tested my 556GB RAID 0 (LSI MegaRAID 9260-8i SATA3 6Gb/s, 5 disks, with FastPath chip installed) to see whether I can reach maximum disk usage (0% idle time).
    The result : As you can see in picture 2, yes, I can max out my drive at ~1.2 GB/sec read/write, steady!
    Comment : I put the 3 IO counters (CPU, disk, RAM) on the graph of my computer during the test to see the impact of transferring many GB of data during ~10 sec...
    (picture 2)
    Now I know my limits!  It's time to go deeper into the subject!
    PPBM5 (H.264) Result
    I rendered the sequence (H.264) using Adobe Media Encoder.
    The result :
    My CPU is not used at 100%; it hovers around 50%.
    My disk is totally idle!
    All process usage is idle except the Adobe Media Encoder process.
    The transfer rate looks like a wave (up and down), probably caused by (encode... write... encode... write...).  // It's ok, ~5 MB/sec transfer rate!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's ok, the clock is stable during the process).
    RAM: more than enough!  39 GB of RAM free after the test!  // Excellent
    ~65 threads opened by Adobe Media Encoder (good; threads are a sign that the program tries to use many cores!)
    GPU load on the card also looks like a wave (up and down): ~40% GPU usage during the encoding process.
    GPU RAM usage is 1.2 GB (but no problem with the GTX 680, and no problem with the Quadro 6000 and its 6 GB of RAM!)
    Comment/Question : CPU is free (50%), disks are free (99%), GPU is free (60%), RAM is free (62%); my computer is not pushed to its limit during the encoding process.  Why????  Is there some time delay in the encoding process?
    Other : The Quadro 6000 and the GTX 680 give the same result!
    (picture 3)
    PPBM5 (Disk Test) Result (RAID LSI)
    I rendered the sequence (Disk Test) using Adobe Media Encoder on my LSI RAID 0 disk.
    The result :
    My CPU is not used at 100%.
    My disk waves up and down again, but stays far, far from its limit!
    All process usage is idle except the Adobe Media Encoder process.
    The transfer rate waves up and down again, probably caused by (buffering... write... buffering... write...).  // It's ok, ~375 MB/sec peak transfer rate!  Easy!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's ok, the clock is stable during the process).
    RAM: more than enough!  40.5 GB of RAM free after the test!  // Excellent
    ~48 threads opened by Adobe Media Encoder (good; threads are a sign that the program tries to use many cores!)
    GPU load on the card = 0 (this kind of encoding does not use the GPU).
    GPU RAM usage is 400 MB (not used for encoding).
    Comment/Question : CPU is free (65%), disks are free (60%), GPU is free (100%), RAM is free (63%); my computer is not pushed to its limit during the encoding process.  Why????  Is there some time delay in the encoding process?
    (picture 4)
    PPBM5 (Disk Test) Result (directly in RAMDrive)
    I rendered the same sequence (Disk Test) using Adobe Media Encoder directly on my RAM drive.
    Comment/Question : Look at the transfer rate in picture 5.  It's exactly the same speed as with my RAID 0 LSI controller.  Impossible!  Look in the same picture at the transfer rate I can reach with the RAM drive (> 3.0 GB/sec steady), and I don't go under 30% of disk usage.  CPU is idle (70%), disk is idle (100%), GPU is idle (100%) and RAM is free (63%).  // These kinds of results leave me REALLY confused.  It smells like a bug and a big problem with hardware and IO usage in CS6!
    (picture 5)
    PPBM5 (MPEG-DVD) Result
    I rendered the sequence (MPEG-DVD) using Adobe Media Encoder.
    The result :
    My CPU is not used at 100%.
    My disk is totally idle!
    All process usage is idle except the Adobe Media Encoder process.
    The transfer rate waves up and down again, probably caused by (encoding... write... encoding... write...).  // It's ok, ~2 MB/sec transfer rate!  A real joke!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's ok, the clock is stable during the process).
    RAM: more than enough!  40 GB of RAM free after the test!  // Excellent
    ~80 threads opened by Adobe Media Encoder (a lot of threads, but that's ok in multi-threaded apps!)
    GPU load on the card = 100 (this uses the maximum of my GPU).
    GPU RAM usage is 1 GB.
    Comment/Question : CPU is free (70%), disks are free (98%), GPU is loaded (MAX), RAM is free (63%); my computer is pushed to its limit during the encoding process for the GPU only.  So for this kind of encoding, the speed limit is set by the slowest component: the video card's GPU.
    Other : The Quadro 6000 is slower than the GTX 680 for this kind of encoding (~20 s slower than the GTX).
    (picture 6)
    Encoding a single FULL HD AVCHD clip to H.264 Result (Premiere Pro CS6)
    You can see the result in the picture.
    Comment/Question : CPU is free (55%), disks are free (99%), GPU is free (90%), RAM is free (65%); my computer is not pushed to its limit during the encoding process.  Why????   Adobe Premiere seems to have some bug with thread management.  My hardware is idle!  I understand AVCHD can be very difficult to decode, but where does the time go?  My computer is willing, but the software is not!
    (picture 7)
    Render composition using the 3D ray tracer in After Effects CS6
    You can see the result in the picture.
    Comment : The GPU seems to be the bottleneck when using After Effects.  CPU is free (99%), disks are free (98%), memory is free (60%), and it depends on the settings and type of project.
    Other : The Quadro 6000 and the GTX 680 give the same rendering time for the composition.
    (picture 8)
    Conclusion
    There is nothing you can do (I think) with CS6 to get better performance right now.  The GTX 680 is the best consumer-grade card and the Quadro 6000 is the best professional card.  Both cards give really similar results (I will probably return my GTX 680, since I don't really get any better performance from it).  I have not used a Tesla card with my Quadro, but at the moment neither Premiere Pro nor After Effects uses multiple GPUs.  I tried to use both cards together (GTX and Quadro), but After Effects gives priority to the slower card (in this case, the GTX 680).
    Premiere Pro, I'm speechless!  Premiere Pro is not able to get the maximum performance out of my computer.  Not just 10% or 20% short, but 60% on average.  I'm a programmer; multi-threaded apps are difficult to manage, and I can understand Adobe's programmers.  But if anybody has a comment about this post, tricks, or any kind of solution, please comment.  It seems to be a bug...
    Thank you.

    Patrick,
    I can't explain everything, but let me give you some background as I understand it.
    The first issue is that CS6 has a far less efficient internal buffering or caching system than CS5/5.5. That is why the MPEG encoding in CS6 is roughly 2-3 times slower than the same test with CS5. There is some 'under-the-hood' processing going on that causes this significant performance loss.
    The second issue is that AME does not handle regular memory and inter-process memory very well. I have described this here: Latest News
    As to your test results, there are some other noteworthy things to mention. 3D ray tracing in AE is not very good at using all CUDA cores. In fact it is lousy: it only uses very few cores, the threading is pretty bad, and it does not use the video card's capabilities effectively. Whether that is a driver issue with nVidia or an Adobe issue, I don't know, but whichever way you turn it, the end result is disappointing.
    The overhead AME carries in our tests is something we are looking into and the next test will only use direct export and no longer the AME queue, to avoid some of the problems you saw. That entails other problems for us, since we lose the capability to check encoding logs, but a solution is in the works.
    You see very low GPU usage during the H.264 test, since there are only very few accelerated parts in the timeline, in contrast to the MPEG2-DVD test, where there is rescaling going on and that is CUDA accelerated. The disk I/O test suffers from the problems mentioned above and is the reason that my own Disk I/O results are only 33 seconds with the current test, but when I extend the duration of that timeline to 3 hours, the direct export method gives me 22 seconds, although the amount of data to be written, 37,092 MB has increased threefold. An effective write speed of 1,686 MB/s.
    There are a number of performance issues with CS6 that Adobe is aware of, but whether they can be solved and in what time, I haven't the faintest idea.
    Just my $ 0.02
