Regarding the performance of a report

Hi ABAP Gurus,
I am working on a report. After executing the report, the data is extracted only after 11 hours; the required data itself comes out perfectly. How can I improve the performance? Any tips for improving report performance would help. If possible, post the code.
Moderator Message: Please search the forum for available information.

Hi,
Please check the thread below:
Extract from ALV List
Regards
Jana
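
Since the post contains no code, here is a generic, hedged illustration (the report name ZPERF_SKETCH, the tables VBAK/VBAP, and the selection S_ERDAT are only familiar placeholders, not taken from the question): the most common cause of multi-hour report runtimes is a SELECT executed inside a LOOP. Reading the data once with a join and then working in memory usually helps:

REPORT zperf_sketch.
TABLES vbak.
SELECT-OPTIONS s_erdat FOR vbak-erdat.

DATA: BEGIN OF gt_pos OCCURS 0,
        vbeln TYPE vbap-vbeln,
        posnr TYPE vbap-posnr,
        matnr TYPE vbap-matnr,
      END OF gt_pos.

START-OF-SELECTION.
* one array SELECT with a join instead of one SELECT per header row
  SELECT b~vbeln b~posnr b~matnr
    INTO TABLE gt_pos
    FROM vbak AS a INNER JOIN vbap AS b
      ON a~vbeln = b~vbeln
    WHERE a~erdat IN s_erdat.

  SORT gt_pos BY vbeln posnr.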

Similar Messages

  • Urgent: regarding increasing the performance of a report

    Hi,
    I have a report that displays the correct data, but when I execute it on the PRD server it gets a request timeout. So I want to increase its performance. Please help me with this.
    REPORT  ZWIP_STOCK NO STANDARD PAGE HEADING LINE-SIZE 150.
    TABLES: AFPO, AFRU, MARA, MAKT.
    DATA: BEGIN OF ITAB OCCURS 0,
          AUFNR LIKE AFPO-AUFNR,
          MATNR LIKE AFPO-MATNR,
          LGORT LIKE AFPO-LGORT,
          MEINS LIKE MARA-MEINS,
          NTGEW LIKE MARA-NTGEW,
          MTART LIKE MARA-MTART,
          STOCK TYPE P LENGTH 10 DECIMALS 3,
          END OF ITAB.
    DATA : ITAB2 LIKE ITAB OCCURS 0 WITH HEADER LINE.
    DATA : DESC LIKE MAKT-MAKTX.
    SELECT-OPTIONS : MAT_TYPE FOR MARA-MTART.
    SELECT-OPTIONS : P_MATNR FOR AFPO-MATNR.
    DATA : V_MINOPR LIKE AFRU-VORNR,
           V_MAXOPR LIKE AFRU-VORNR,
           V_QTYMIN LIKE AFRU-GMNGA,
           V_QTYMAX LIKE AFRU-GMNGA,
           V_QTY TYPE P LENGTH 10 DECIMALS 3.
    SELECT A~AUFNR A~MATNR A~LGORT B~MEINS B~NTGEW B~MTART FROM AFPO AS A
      INNER JOIN MARA AS B ON A~MATNR = B~MATNR
      INTO TABLE ITAB WHERE A~ELIKZ <> 'X' AND B~MTART IN MAT_TYPE AND A~MATNR IN P_MATNR.
        ITAB2[] = ITAB[].
        SORT ITAB2 BY MATNR MEINS MTART NTGEW.
        DELETE ADJACENT DUPLICATES FROM ITAB2 COMPARING MATNR MEINS MTART NTGEW.
       LOOP AT ITAB2.
        V_QTY = 0.
          LOOP AT ITAB WHERE MATNR = ITAB2-MATNR.
            SELECT MIN( VORNR ) INTO V_MINOPR FROM AFRU WHERE AUFNR = ITAB-AUFNR.
            SELECT MAX( VORNR ) INTO V_MAXOPR FROM AFRU WHERE AUFNR = ITAB-AUFNR.
            SELECT SUM( GMNGA ) INTO V_QTYMIN FROM AFRU WHERE AUFNR = ITAB-AUFNR AND VORNR =  V_MINOPR.
            SELECT SUM( GMNGA ) INTO V_QTYMAX FROM AFRU WHERE AUFNR = ITAB-AUFNR AND VORNR =  V_MAXOPR.
            V_QTY = V_QTY + V_QTYMIN - V_QTYMAX.
          ENDLOOP.
          ITAB2-STOCK = V_QTY.
          MODIFY ITAB2.
        ENDLOOP.
        LOOP AT ITAB2.
              WRITE:/ ITAB2-MATNR,ITAB2-STOCK.
        ENDLOOP.

    Instead of the code from itab2[] = itab[] up to the last ENDLOOP, try the code given below:
    data : begin of minopr occurs 0,
           aufnr type afru-aufnr,
           vornr type afru-vornr,
           end of minopr.
    data : begin of maxopr occurs 0,
           aufnr type afru-aufnr,
           vornr type afru-vornr,
           end of maxopr.
    data : begin of qtymin occurs 0,
           aufnr type afru-aufnr,
           vornr type afru-vornr,
           gmnga type afru-gmnga,
           end of qtymin.
    data : begin of qtymax occurs 0,
           aufnr type afru-aufnr,
           vornr type afru-vornr,
           gmnga type afru-gmnga,
           end of qtymax.
    data : v_matnr type afpo-matnr.
    " FOR ALL ENTRIES with an empty driver table would read all of AFRU
    if not itab[] is initial.
    select aufnr vornr into table minopr from afru
    for all entries in itab where aufnr = itab-aufnr.
    endif.
    maxopr[] = minopr[].
    sort minopr by aufnr vornr ascending.
    sort maxopr by aufnr vornr descending.
    delete adjacent duplicates from minopr comparing aufnr.
    delete adjacent duplicates from maxopr comparing aufnr.
    SELECT aufnr vornr GMNGA INTO TABLE QTYMIN FROM AFRU for all entries in minopr WHERE AUFNR = minopr-AUFNR AND VORNR = MINOPR-vornr.
    SELECT aufnr vornr GMNGA INTO TABLE QTYMAX FROM AFRU for all entries in maxopr WHERE AUFNR = maxopr-AUFNR AND VORNR = maxopr-vornr.
    sort qtymin by aufnr.
    sort qtymax by aufnr.
    sort itab by matnr MEINS MTART NTGEW.
    LOOP AT ITAB.
    " when the material changes, append the total for the previous material
    if sy-tabix > 1 and itab-matnr <> v_matnr.
    clear itab2.
    itab2-matnr = v_matnr.
    itab2-stock = v_qty.
    append itab2.
    v_qty = 0.
    endif.
    v_matnr = itab-matnr.
    v_qtymin = 0.
    v_qtymax = 0.
    read table qtymin with key aufnr = itab-aufnr binary search.
    if sy-subrc = 0.
    loop at qtymin from sy-tabix.
    if qtymin-aufnr = itab-aufnr.
    v_qtymin = v_qtymin + qtymin-gmnga.
    else.
    exit.
    endif.
    endloop.
    endif.
    read table qtymax with key aufnr = itab-aufnr binary search.
    if sy-subrc = 0.
    loop at qtymax from sy-tabix.
    if qtymax-aufnr = itab-aufnr.
    v_qtymax = v_qtymax + qtymax-gmnga.
    else.
    exit.
    endif.
    endloop.
    endif.
    V_QTY = V_QTY + V_QTYMIN - V_QTYMAX.
    ENDLOOP.
    " append the total for the last material
    if not itab[] is initial.
    clear itab2.
    itab2-matnr = v_matnr.
    itab2-stock = v_qty.
    append itab2.
    endif.
    LOOP AT ITAB2.
    WRITE:/ ITAB2-MATNR,ITAB2-STOCK.
    ENDLOOP.
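    For reference, a further hedged sketch in the same direction: read AFRU once and let COLLECT aggregate per order and operation in memory. The confirmation key fields RUECK/RMZHL are included because FOR ALL ENTRIES silently drops duplicate result rows. This is a sketch against the data model above, not a tested drop-in replacement:
    types: begin of ty_afru,
             aufnr type afru-aufnr,
             vornr type afru-vornr,
             rueck type afru-rueck, " confirmation key keeps FAE rows distinct
             rmzhl type afru-rmzhl,
             gmnga type afru-gmnga,
           end of ty_afru,
           begin of ty_opqty,
             aufnr type afru-aufnr,
             vornr type afru-vornr,
             gmnga type afru-gmnga,
           end of ty_opqty.
    data: lt_afru  type standard table of ty_afru with header line,
          lt_opqty type standard table of ty_opqty with header line.
    if not itab[] is initial.
      select aufnr vornr rueck rmzhl gmnga
        into table lt_afru
        from afru
        for all entries in itab
        where aufnr = itab-aufnr.
    endif.
    loop at lt_afru.
      move-corresponding lt_afru to lt_opqty.
      collect lt_opqty. " sums GMNGA per AUFNR/VORNR (character fields form the key)
    endloop.
    sort lt_opqty by aufnr vornr.
    " per AUFNR, the first row now holds the first operation's confirmed
    " quantity and the last row the final operation's confirmed quantity.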

  • Split of Cubes to improve the performance of reports

    Hello friends. We are now implementing Finance GL line items for global automobile operations at BMW, with services outsourced to Japan, which has increased the data volume to 300 million records over the last 2 years since go-live. We have 200 company codes.
    How to improve performance:
    1. Please suggest whether I should split the cubes based on year and company code (which is region based). That means Europeans would run the report out of one cube, and the same report for America would run on another cube.
    The question here: if I make 8 cubes (2 for each year: 1 for current-year company code ABC and 1 for current-year DEF; 2 for each year: 1 for previous-year company code ABC and 1 for previous-year DEF; 2 for each year: 1 for archive-year company code ABC and 1 for archive-year DEF),
    then how can I tell the query which cube to read the data from? Since company code is an authorization variable, picking up that company code value and building a customer exit variable for the InfoProvider would be a lot of work.
    Is there any good way to do this? Does a split of cubes make sense based on company code, or should it just be done by year?
    Please suggest an excellent step-by-step approach to splitting cubes for 60 million records over 2 years; growth will be the same for the
    next 4 years since more company codes are coming.
    2. Please suggest whether a split of cubes will improve report performance, or whether it will make it worse since the query now needs to go through 5-6 different cubes.
    Thanks
    Regards
    Soniya

    Hi Soniya,
    There are two ways in which you can split your cube: either based on year, or based on company code (i.e. region). While loading the data, write code in the start routine that filters the data. For example, if you are loading data for three regions, say 1, 2 and 3, your code will be something like
    DELETE SOURCE_PACKAGE WHERE REGION EQ '2' OR
    REGION EQ '3'.
    This will load data to your cube corresponding to region 1. (A sketch of the surrounding start routine follows at the end of this reply.)
    You can build your reports either on these cubes, or you can have a MultiProvider above these cubes and build the report on that.
    Thanks..
    Shambhu
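    For orientation, a minimal sketch of where such a filter sits in a BW 7.x transformation start routine. The method name and SOURCE_PACKAGE table are the ones generated by the system; the field name REGION is an assumption and must match your own source structure:
    METHOD start_routine.
    " keep only region 1 before the transformation runs;
    " SOURCE_PACKAGE is the generated table of source records
      DELETE source_package WHERE region EQ '2'
                               OR region EQ '3'.
    ENDMETHOD.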

  • Can the Performance Detail reports be exported as PDF without the Detail Table?

    We previously generated significant numbers of Performance Detail reports out of SCOM 2007 R2 as PDF files, and the 'Detail Table' did not show.
    Now that we are on SCOM 2012 R2, the same reports show the 'Detail Table' when exported to PDF. We report on 6 to 12 months of data, so these tables are huge, and are also useless for our purposes.
    Is there some way to suppress the Detail Table when exporting a Performance Detail report to PDF?

    Hello,
    Please see if the method in the following post can meet your requirements:
    SCOM reports on performance counters for large groups of servers
    http://www.bictt.com/blogs/bictt.php/2010/11/28/scom-reports-on-performance-counters-for-large-groups-of-servers

  • Regarding the performance comparision of application/web servers

    I want details on a performance comparison of application/web servers, and which is the best, most efficient, and easiest to use.

    Try Google.
    However, for the most part the performance is not related to the application/web server that you use.
    It depends more on:
    - the machine you are running it on
    - what code you are running on it.
    With regards to "best", you can debate that forever.
    Tomcat is widely used, mainly because it is free. But it is solid.

  • Does the ETL workload effect the performance of reports?

    Hello
    There are several heavy ETL processes running on production. Do they affect the performance of reports that users may execute during this time?
    thanks

    Definitely... Your server has a finite number of resources available to share with all those processes... Running a report is just one more process... Users can be (and are) impacted by other processes in the system...

  • Regarding the performance of the Transaction F.01 Program RFBILA00

    Hi Everyone,
    We are running transaction F.01 for financial statements. The program is RFBILA00. We are facing a performance issue with this program. The version is ECC 6.0.
    Is there any solution to reduce the running time of this program? We are not using business areas.
    Thanks,
    Senthil

    1. Runtime analysis: transaction SE30
    This transaction gives a complete analysis of an ABAP program with respect to database and non-database processing.
    2. SQL trace: transaction ST05
    The trace list has many lines that are not related to the SELECT statement in the ABAP program. This is because the execution of any ABAP program requires additional administrative SQL calls. To restrict the list output, use the filter when displaying the trace list.
    The trace list contains different SQL statements related to one and the same SELECT statement in the ABAP program. This is because the R/3 Database Interface - a sophisticated component of the R/3 Application Server - maps every Open SQL statement to one or a series of physical database calls and brings it to execution. This mapping, crucial to R/3's performance, depends on the particular call and database system. For example, a SELECT-ENDSELECT loop on the SPFLI table is mapped to a PREPARE-OPEN-FETCH sequence of physical calls in an Oracle environment.
    The WHERE clause in the trace list's SQL statement differs from the WHERE clause in the ABAP statement. This is because in an R/3 system, a client is a self-contained unit with separate master records and its own set of table data (in commercial, organizational, and technical terms). With ABAP, every Open SQL statement automatically executes within the correct client environment. For this reason, a condition with the actual client code is added to every WHERE clause if a client field is a component of the searched table.
    To see a statement's execution plan, just position the cursor on the PREPARE statement and choose Explain SQL. A detailed explanation of the execution plan depends on the database system in use.
    Alternatively, you can use load-balancing servers to handle user overload.
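    To illustrate the kind of measurement SE30 automates, here is a minimal hedged sketch that times a code block in microseconds (the commented-out block is only a placeholder for the statements under investigation):
    DATA: t_start TYPE i,
          t_end   TYPE i.
    GET RUN TIME FIELD t_start.
    " ... block under investigation, e.g. the main SELECT of the report ...
    GET RUN TIME FIELD t_end.
    t_end = t_end - t_start.
    WRITE: / 'Runtime in microseconds:', t_end.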

  • Regarding the simple ALV report

    Hi All,
    I created simple code to display an ALV report; the code is provided below. But the output shows only the column headings, not the data in the grid. I can't figure out where the problem is; please find the problem in the code.
    Regards
    Sai
    ********sample code**********
    type-pools : slis.
    data : b_display type slis_t_fieldcat_alv,
           w_display type slis_fieldcat_alv.
    data : begin of itab_display occurs 0,
            kunnr type kna1-kunnr,
            name1 type kna1-name1,
            end of itab_display.
    data : gd_repid like sy-repid.
          itab_display-name1 = 'ram'.
          itab_display-kunnr = '10000033242'.
          append itab_display.
          clear itab_display.
          itab_display-name1 = 'sai'.
          itab_display-kunnr = '10000033243'.
          append itab_display.
          clear itab_display.
       w_display-col_pos = 0.
       w_display-fieldname = 'name1'.
       w_display-seltext_m = 'name'.
       append w_display to b_display.
       clear w_display.
       w_display-col_pos = 1.
       w_display-fieldname = 'kunnr'.
       w_display-seltext_m = 'cus.no'.
       append w_display to b_display.
       clear w_display.
    gd_repid  = sy-repid.
    CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
      EXPORTING
        I_CALLBACK_PROGRAM = gd_repid
        IT_FIELDCAT        = b_display[]
        I_SAVE             = 'X'
      TABLES
        T_OUTTAB           = itab_display.

    Hi,
    Please use the following code. The field names in the field catalog must be given in uppercase, and the table name must be passed as a character literal:
    type-pools : slis.
    data : b_display type slis_t_fieldcat_alv,
           w_display type slis_fieldcat_alv.
    data : begin of itab_display occurs 0,
             kunnr type kna1-kunnr,
             name1 type kna1-name1,
           end of itab_display.
    data : gd_repid like sy-repid.
    itab_display-name1 = 'ram'.
    itab_display-kunnr = '10000033242'.
    append itab_display.
    clear itab_display.
    itab_display-name1 = 'sai'.
    itab_display-kunnr = '10000033243'.
    append itab_display.
    clear itab_display.
    " field names in the field catalog must be in UPPERCASE
    w_display-col_pos   = 0.
    w_display-fieldname = 'NAME1'.
    w_display-tabname   = 'ITAB_DISPLAY'.
    w_display-seltext_m = 'Name'.
    w_display-ddictxt   = 'M'.
    append w_display to b_display.
    clear w_display.
    w_display-col_pos   = 1.
    w_display-fieldname = 'KUNNR'.
    w_display-tabname   = 'ITAB_DISPLAY'.
    w_display-seltext_m = 'cus.no'.
    w_display-ddictxt   = 'M'.
    append w_display to b_display.
    clear w_display.
    gd_repid = sy-repid.
    CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
      EXPORTING
        I_CALLBACK_PROGRAM = gd_repid
        IT_FIELDCAT        = b_display[]
        I_SAVE             = 'X'
      TABLES
        T_OUTTAB           = itab_display[].
    Regards,
    Sankar.
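    As an optional variant (a sketch added for illustration, not part of the answer above): the field catalog can also be generated from the internal table definition with REUSE_ALV_FIELDCATALOG_MERGE, which avoids typos in manually built entries. This assumes itab_display is defined in the main program and gd_repid has already been set:
    CALL FUNCTION 'REUSE_ALV_FIELDCATALOG_MERGE'
      EXPORTING
        i_program_name         = gd_repid
        i_internal_tabname     = 'ITAB_DISPLAY'
        i_inclname             = gd_repid
      CHANGING
        ct_fieldcat            = b_display
      EXCEPTIONS
        inconsistent_interface = 1
        program_error          = 2
        OTHERS                 = 3.
    IF sy-subrc <> 0.
      WRITE: / 'Field catalog could not be generated'.
    ENDIF.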

  • Regarding the Crystal Xcelsius Report

    Hello Experts,
    I am new to Business Intelligence...
    I have a task to create Xcelsius reports for SCM EWM.
    I have the following doubt:
    Can you please let me know the procedure, i.e. how to start?
    Do I have to extract data using an extractor? If I extract data using an extractor, will it be available in the
    Xcelsius report?
    Please provide me with a Crystal Xcelsius document for developing reports.
    Thanks and regards,
    zubera

    Hello,
    Thanks for your help.
    Please give me the link for the Business Objects site.
    Thanks and regards,
    zubera

  • Regarding Database performance for report creation

    Hi,
    Currently I have one database with two schemas: one for table data and a second for report creation. But it is taking more time to display data because there is more load on the first schema.
    I want to create a second database for the report schema, but I have to access the table data from the first database.
    1) One option to fetch data from the first database is through a DB link. But I think that also takes more time to fetch data.
    Is there any way I can access data from the first database and get better performance by creating a new database?
    Kindly give me a suggestion. What should I do to improve report performance?

    You have two more options:
    1. Use Oracle Streams and replicate tables between the databases. For reporting, you'll refer to the second database.
    2. Create a standby database; it's a clone of your database which you keep up to date by applying archived redo log files from the primary database.

  • Regarding the performance

    I have an application which consists of many forms, and each form has different form objects. The values already in the database for these form objects should be displayed to the user in a popup so that he can select from the existing ones. There can be 10 form objects for which data should be displayed, and for each form object there can be thousands of records.
    For this i have approached the following way:
    As soon as the user clicks on the form, based on the form objects for which data has to be displayed, I retrieve the first 1000 records (if there are more than 1000) for each form object and store them in the session. When the user browses through all these records and clicks Next, I fetch the next 1000 records from the database and replace the session contents with the new records. That is, at any point in time there will be only 1000 records per form object, and there are 10 such form objects.
    I retrieve these records using cursors within a cursor and send them to the Java client. At the client end I split these cursors into result sets.
    Right now I am successfully doing the above, but my question is whether the procedure I am following is an ideal one. Is there a better way to do this task? Suggestions are appreciated.
    Thanks for your time
    bye
    s

    This sounds like a similar situation to the one described in Re: Getting the records from database in order, which was posted last week. Perhaps if you read the advice there it might help.
    Incidentally, why do you think you may have a problem? If the thing is working, then I suggest you leave it alone until a user complains about performance (or it fails some predefined acceptance criterion). I know refactoring is a very popular concept, but it is not always appropriate.
    Cheers, APC

  • How to maximize the performance of report service 9i?

    I have a report running in the Oracle 9iAS report service. The report is invoked from Forms by the PL/SQL built-in RUN_REPORT_OBJECT and generated as a PDF file. The report is so complex that the RDF file size reaches 6 MB.
    Now it takes about 30 seconds to run the report. My Forms service and Reports service are installed on the same machine, which has 2 Intel Xeon 2 GHz CPUs and 4 GB of memory. If I run the report twice at the same time, the report jobs are put into a queue and run one by one, so it takes 60 seconds. I find the CPU load is about 20% while the report is running, which means the report service does not make full use of the CPU.
    How can the report service process multiple jobs at the same time on a multi-CPU machine? What is the best configuration for the number of engines in the report service?

    Hi,
    Whenever you start a new Reports Server in 9iAS, by default it starts 1 Reports Engine (rwEng). You can increase the number of engines to 2 to process the 2 jobs simultaneously. You can do this using Oracle Enterprise Manager (via the browser), or you can change the following parameters in the server config file directly:
    file name: OH/reports/conf/<your_reports_server_name>.conf
    <engine id="rwEng" ... initEngine="1" maxEngine="1" minEngine="0" ... >
    You can either increase both initEngine and maxEngine to 2, or only maxEngine; in the latter case, the second reports engine will be started only when needed.
    However, increasing the number of engines will obviously increase the overhead. The optimum number of engines depends on your machine, load, required response characteristics, etc. You can test with a few values to arrive at an optimum number.
    I would recommend going through the Oracle Reports Tuning whitepaper
    http://otn.oracle.com/products/reports/htdocs/getstart/whitepapers/cb_tuning8.pdf
    Navneet.

  • Do you have an idea how to improve the performance ?

    Hi All,
    Greeting,
    I'm doing SEM IP. Regarding the performance, do you have some thoughts on this?
    I have a planning report for projects. As we know, if we forecast against a project, the date range is the life of the project itself.
    That means it could be more than 10 years (forecast period) plus 10 years (actual period). Currently I segregate actuals and forecasts into different InfoCubes.
    But the performance of the planning report is now slow. Do you have any ideas on how to increase the performance? The performance I mean here is when entering the report (after filling in the values on the selection screen).
    The other question: at the moment I have a MultiProvider, and this MultiProvider consists of 2 InfoCubes (actual and forecast). My aggregation level sits on top of that MultiProvider.
    My question is whether that approach is correct or not. Or do we have to create 1 aggregate (only for forecast), and then have a MultiProvider consisting of the forecast aggregation and the actual cube,
    with my query sitting on top of that MultiProvider?
    Which one is better?
    Thanks a lot all,
    really need your help,

    Hi,
       For the performance tuning, you can consider any of the following three methods,
    1. Indices
    With an increasing number of data records in the InfoCube, not only load performance but also query performance can degrade. This is attributed to the increasing system effort for maintaining the indexes. The indexes that are created on the fact table for each dimension allow you to easily find and select the data.
    2. Partitioning
    By using partitioning you can split the whole dataset of an InfoCube into several smaller, physically independent, and redundancy-free units. This separation improves performance when reporting, and also when deleting data from the InfoCube.
    3. Aggregates 
    Aggregates make it possible to access InfoCube data quickly in Reporting. Aggregates serve, in a similar way to database indexes, to improve performance.
    4. Compressing the Infocube
    InfoCube compression means aggregation of the data ignoring the request IDs. After compression, the system need not perform aggregation using the request ID every time you execute a query.
    And I feel that in your scenario, you first need to compress the data based on user requirements and keep only the required data in the InfoCube.
    And for the approach regarding the Aggregation level design, choosing between the two approaches depends on the user requirements. For example,
    If you have aggregation level created on top of multiprovider containing actual and forecast cube, in your report (created on top of aggregation level) you can view the key figure values present in both the cubes, which is not possible in the other approach.
    So this approach is suited if your requirement is to view the records from both the cubes in your report (Comparing planning and actual values).
    The second approach is used if your requirement is only to report on planning forecast cube.
    Hope this solves your issue.
    Regards,
    Balajee

  • How can I use BI Technical Content for measuring the performance of queries

    Hi
    Recently I implemented BI 7.0 for one client. I want to know how to use the BI Administration Cockpit.
    What does the technical content actually do?
    And how can I use the technical content or statistics for measuring the performance of my queries? Please let me know.
    kumar

    Hi Ravi,
    BI Admin Cockpit is an enhancement of BW Statistics.
    http://help.sap.com/saphelp_nw2004s/helpdata/en/44/08a75d19e32d2fe10000000a11466f/frameset.htm
    Check this thread also:
    BI Statistics comparision with the old
    Regarding the performance, check the link below.
    Re: Query - Performance
    http://help.sap.com/saphelp_nw2004s/helpdata/en/43/15c54048035a39e10000000a422035/frameset.htm
    Regards,
    Anil

  • Performance Monitor Reports Missing in FDM

    Hi All -
    I'm trying to run the "Performance Monitor Reports" in FDM and am getting the below error messages for both types of reports. Any help on this is greatly appreciated. Thank you!
    Here's the message from the error log:
    Invalid Report File: \\..\Apps\FDMLTD\Reports\PerformanceGraphAvgProcessTime.rpt
    I logged into the app server and looked in the folder, and the .rpt file isn't there.
    The same goes for the Min-Max report:
    Error: Invalid Report File: \\..\Apps\FDMLTD\Reports\PerformanceGraphMinMaxProcessTime.rpt

    Can you please let me know the steps to import the report XML? I logged into the Workbench client, clicked View -> Reports, and tried to import, but I was not able to locate the required XML files for the Performance Monitor reports.
    I tried to find the steps in the help guide as well, but couldn't find anything. Thanks!
