To improve performance for report

Hi Expert,
I have developed an open sales order report that fetches data from VBAK, and it takes a long time to execute even in the foreground.
It goes into a dump in the foreground, and when I execute it in the background it also goes into a dump.
SELECT vbeln
       auart
       submi
       vkorg
       vtweg
       spart
       knumv
       vdatu
       vprgr
       ihrez
       bname
       kunnr
  FROM vbak
  APPENDING TABLE itab_vbak_vbap
  FOR ALL ENTRIES IN l_itab_temp
*BEGIN OF change 17/Oct/2008.
  WHERE erdat IN s_erdat            AND
        submi = l_itab_temp-submi   AND
*End of Changes 17/Oct/2008.
        auart = l_itab_temp-auart   AND
        vkorg = l_itab_temp-vkorg   AND
        vtweg = l_itab_temp-vtweg   AND
        spart = l_itab_temp-spart   AND
        vdatu = l_itab_temp-vdatu   AND
        vprgr = l_itab_temp-vprgr   AND
        ihrez = l_itab_temp-ihrez   AND
        bname = l_itab_temp-bname   AND
        kunnr = l_itab_temp-sap_kunnr.
    DELETE itab_temp FROM l_v_from_rec TO l_v_to_rec.
  ENDDO.
Please give me suggestions for improving the performance of this program.

Hi,
Try something like this:
TABLES: vbak.                         " required for the SELECT-OPTIONS below

* Sales order headers
DATA: BEGIN OF itab1 OCCURS 0,
        vbeln LIKE vbak-vbeln,
      END OF itab1.
* Sales order items
DATA: BEGIN OF itab2 OCCURS 0,
        vbeln LIKE vbap-vbeln,
        posnr LIKE vbap-posnr,
        matnr LIKE vbap-matnr,
      END OF itab2.
* Combined header/item result, if you need it
DATA: BEGIN OF itab3 OCCURS 0,
        vbeln TYPE vbeln_va,
        posnr TYPE posnr_va,
        matnr TYPE matnr,
      END OF itab3.

SELECT-OPTIONS: s_vbeln FOR vbak-vbeln.

START-OF-SELECTION.
* Select only the fields you really need
  SELECT vbeln FROM vbak INTO TABLE itab1
    WHERE vbeln IN s_vbeln.

* Never run FOR ALL ENTRIES on an empty driver table
  IF itab1[] IS NOT INITIAL.
    SELECT vbeln posnr matnr FROM vbap INTO TABLE itab2
      FOR ALL ENTRIES IN itab1
      WHERE vbeln = itab1-vbeln.
  ENDIF.
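
Applied to your own report, the same pattern would look roughly like the sketch below. It is only a sketch that reuses the names from your snippet (l_itab_temp, itab_vbak_vbap, s_erdat), assumes l_itab_temp is declared with a header line as in classic reports, and assumes the surrounding DO loop with its block-wise DELETE still allows the driver table to be sorted: remove duplicate driver rows first, and never run FOR ALL ENTRIES on an empty table, otherwise the whole of VBAK is read.

* Sketch: tightened FOR ALL ENTRIES access for the VBAK read
* Duplicate driver rows only repeat the same work on the database
SORT l_itab_temp BY submi auart vkorg vtweg spart vdatu vprgr ihrez bname sap_kunnr.
DELETE ADJACENT DUPLICATES FROM l_itab_temp
  COMPARING submi auart vkorg vtweg spart vdatu vprgr ihrez bname sap_kunnr.

* An empty driver table would make FOR ALL ENTRIES read all of VBAK
IF l_itab_temp[] IS NOT INITIAL.
  SELECT vbeln auart submi vkorg vtweg spart knumv
         vdatu vprgr ihrez bname kunnr
    FROM vbak
    APPENDING TABLE itab_vbak_vbap
    FOR ALL ENTRIES IN l_itab_temp
    WHERE erdat IN s_erdat
      AND submi = l_itab_temp-submi
      AND auart = l_itab_temp-auart
      AND vkorg = l_itab_temp-vkorg
      AND vtweg = l_itab_temp-vtweg
      AND spart = l_itab_temp-spart
      AND vdatu = l_itab_temp-vdatu
      AND vprgr = l_itab_temp-vprgr
      AND ihrez = l_itab_temp-ihrez
      AND bname = l_itab_temp-bname
      AND kunnr = l_itab_temp-sap_kunnr.
ENDIF.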

Similar Messages

  • Improve Performance of Report

    I have a report on the opportunities subject area but the performance for this report is very slow.
    I have included the external ID, but the output always times out.
    Is there a way to improve the performance for this report?
    Edited by: user636400 on Aug 27, 2008 11:11 PM

    Garima,
    You mentioned that you used the ID field in the report, but you are interested in the sum of values... do you really need to see each record in the report, or an aggregation of data by user? The ID column forces the report to return every record and then perform the aggregation. Without the ID, the query is able to leverage the database server to perform the aggregation rather than the report having to calculate it over the entire dataset.
    Look for columns in your report that are unique at the record level and remove those first, then as Alex said, add them back in one at a time and you will quickly find the column that is causing the report to time out.
    Also, this message recently went out to all primary contacts:
    We have discovered a product defect that enables Oracle CRM On Demand users to submit real-time reports that take longer than 10 minutes to run. When this occurs, there is the potential for performance degradation for all users who are co-located on your Pod.
    A fix for this defect will be rolled out across our entire fleet over the next eight weeks. After this fix is deployed to your Pod, any real-time report that fails to complete within 10 minutes will be terminated and will result in the display of the following error message: "The user request exceeded the maximum query governing execution time."
    Note that historical reports are unaffected by this fix. Should you have any real-time reports which exceed the 10 minute window, then you need to set them up as historical reports using Analytics. Alternatively, you can continue to run them as real-time reports but must reduce the amount of data selected using filters, column prompts, dashboard prompts or report filter chains so that the report completes within the 10 minute window.
    Regards,
    Mike L.

  • Regarding Database performance for report creation

    Hi,
    Currently I have one database with two schemas: one for the table data and a second for report creation. But it is taking more time to display the data because there is more load on the first schema.
    I want to create a second database for the report schema, but I have to access the table data from the first database.
    1) One option is to fetch the data from the first database through a DB link, but I think that also takes more time.
    Is there any way I can access the data from the first database and still get better performance by creating a new database?
    Kindly give me your suggestions. What should I do to improve report performance?

    user647572 wrote:
    What should I do to improve report performance?
    You have two more options:
    1. Use Oracle Streams and replicate the tables between the databases. For reporting, you then refer to the second database.
    2. Create a standby database: it is a clone of your database that you keep up to date by applying archived redo log files from the primary database.

  • Times ten to improve performance for search results in Oracle eBS

    Hi ,
    We have various search scenarios in our ERP implementation using Oracle Apps eBS, for example searching for an item. Oracle Apps does provide item search, but performance is not great. We have about 30 million items, and to improve the performance of the search we thought TimesTen might help.
    Can anyone please clarify whether TimesTen can be used to improve performance on the eBS database, and if yes, how?

    Vikash,
    We were thinking along the same lines (using TimesTen for massive item search in e-Business Suite), in our case massive item / parametric search leveraging the Product Information Management application. We were thinking about setting up a POC on a Linux server with a Vision instance. Perhaps we should compare notes?
    SParker

  • Need to improve performance for bex queries

    Dear Experts,
    Here we have BEx queries built on a BW InfoSet; the InfoSet in turn is built on 2 DSOs and 4 InfoObjects.
    We have built secondary indexes on the two DSOs assuming this would improve performance, but the query execution time is still very long.
    Could you advise me on this?
    Thanks in advance,
    Mannu

    Hi,
    Thanks for the response.
    But as I have mentioned, the InfoSet is based on DSOs and InfoObjects, so we cannot work with aggregates.
    In RSRT I have tried changing the read mode of the query, i.e. to 'X', which is also appropriate since the query needs to fetch a huge amount of data.
    Could you please look into other possible areas in order to improve this?
    Thanks in advance,
    Mannu

  • BIA to improve performance for BPS Applications

    Hi All,
    Is it possible to improve the performance of BPS applications using BIA? Currently we are running applications on BI-BPS which, because of the huge period range, are having performance issues.
    Please share whether BIA would be helpful for the read and write operations of BPS, and to what extent performance can be increased.
    An early reply would be appreciated, as the system is in really bad shape and users are grappling with poor performance.
    Rgds,
    Rajeev

    Hi Rajeev,
    If the performance issue you are facing is while running the query on the real-time (transactional) InfoCube used in BPS, then BIA can help. The closed requests from the real-time cube can be indexed in BIA. At query runtime, the analytic engine reads data from the database for the open request and from BIA for the closed, indexed requests; it combines this data with the plan buffer cache to produce the result.
    Hence, if your issue is query response time, BIA will definitely help.
    Regards,
    Praveen

  • How to improve performance for bulk data load in Dynamics CRM 2013 Online

    Hi all,
    We need to bulk update (or create) contacts in Dynamics CRM 2013 Online every night due to data updated from another external data source. The data size is around 100,000 records and the data loading duration is around 6 hours.
    We are already using the ExecuteMultiple web service to handle the integration; however, the 6-hour integration duration is still not acceptable and we are seeking advice for further improvement.
    Any help is highly appreciated.  Many thanks.
    Gary

    I think Andrii's referring to running multiple threads in parallel (see
    http://www.mscrmuk.blogspot.co.uk/2012/02/data-migration-performance-to-crm.html - it's a bit dated, but should still be relevant).
    Microsoft does have some throttling limits applied in CRM Online, and it is worth contacting them to see if you can get those raised.
    100,000 records per night seems a large number. Are all these records new or updated, or are some of them unchanged, in which case you could filter them out before uploading? Or are there useful ways to summarise the data before loading?
    Microsoft CRM MVP - http://mscrmuk.blogspot.com/ http://www.excitation.co.uk

  • How to improve performance for Custom Extractor in BI..

    Hi all,
    I am new to BI and have been working with it for a couple of weeks. I created a custom extractor (data view) in the source system, and pulling the data takes a lot of time. Can anyone suggest how to improve the performance of my custom extractor? Please do the needful.
      Thanks and Regards,
    Venugopal..

    Dear Venugopal,
    Use transaction ST05 to check whether your SQL statements are optimal and that you do not have redundant database calls. You should use "bulking" as much as possible, which means fetching the required data with one request to the database rather than with multiple requests.
    Use transaction SE30 to check whether you are wasting time in loops and, if so, optimize the algorithm.
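    As a rough illustration of that bulking (a sketch only; ZTAB and its fields DOCNR and POSNR are invented placeholders, not tables from your system), the idea is to replace a per-row SELECT inside a loop with one array read:
    " Hypothetical example - ZTAB, DOCNR, POSNR are placeholders
    TYPES: BEGIN OF ty_key,
             docnr TYPE char10,
           END OF ty_key,
           BEGIN OF ty_item,
             docnr TYPE char10,
             posnr TYPE char6,
           END OF ty_item.
    DATA: lt_keys  TYPE STANDARD TABLE OF ty_key,
          lt_items TYPE STANDARD TABLE OF ty_item.

    " Instead of SELECT ... FROM ztab inside a LOOP AT lt_keys (n database calls),
    " fetch everything with a single call:
    IF lt_keys IS NOT INITIAL.
      SELECT docnr posnr
        FROM ztab
        INTO TABLE lt_items
        FOR ALL ENTRIES IN lt_keys
        WHERE docnr = lt_keys-docnr.
    ENDIF.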
    Best Regards,
    Sylvia

  • Improving performance for SM35

    Hi all,
    Are there any ways to improve the performance (time taken to load data) of SM35?
    We are aware of executing the session in the background, but due to the high data volume (more than 10,000 records per file), the loading is still slow (about 3 hours per file).

    Hi Raj,
    The previous posters gave already all the information you need, but since the question is still open, let me try to summarize it.
    You're getting almost 1 transaction processed per second, which might be ok depending on the application area and the complexity of the executed transaction. So as Hermann initially pointed out, you should first profile the transaction you're running and check for any inefficiencies (custom coding in exits/BAdI's are often sources of slow-downs). If you find any problems, tune your transaction/application (not SM35).
    If your application is fast enough (i.e. you cannot find any easy measures for making your transaction faster), you can compare application/transaction processing time versus total time taken in SM35. I personally doubt that you'll find any worthwhile discrepancy there (i.e. process time taken up by SM35, which is not due to the called transaction). Thus you should be left with Hermann's initial point of running several BDC's in parallel - meaning that you'll have to split your input file (you can automate that if you have to run such loads regularly). Without parallel processing you will always encounter unacceptable processing times when running huge data loads (even with optimal coding throughout the application).
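    To make the parallelization idea concrete, here is a minimal sketch (ZBDC_LOAD_PART and its parameter P_PART are hypothetical placeholders for your own report that processes one chunk of the input, and the number of parts is arbitrary) that schedules each chunk as its own background job:
    " Sketch: run the load in parallel background jobs, one per part of the file
    DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'ZBDC_PARALLEL',
          lv_jobcount TYPE tbtcjob-jobcount,
          lv_part     TYPE i.

    DO 4 TIMES.                           " e.g. split the input file into 4 parts
      lv_part = sy-index.

      CALL FUNCTION 'JOB_OPEN'
        EXPORTING
          jobname  = lv_jobname
        IMPORTING
          jobcount = lv_jobcount.

      " ZBDC_LOAD_PART is a hypothetical report that processes one part
      SUBMIT zbdc_load_part
        WITH p_part = lv_part
        VIA JOB lv_jobname NUMBER lv_jobcount
        AND RETURN.

      CALL FUNCTION 'JOB_CLOSE'
        EXPORTING
          jobcount  = lv_jobcount
          jobname   = lv_jobname
          strtimmed = 'X'.                " start each job immediately
    ENDDO.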
    Kind regards, harald

  • How can I improve performance for BC4J/JSP-application

    Hi,
    I have developed a JSP application with master-detail views. This application has been implemented
    as an SSO-enabled web portlet in the portal. When I click on a row retrieved from the master view (Master.jsp), it takes 15 seconds
    until the associated data in the detail view is displayed in the other window (Detail.jsp). The master table has about 2,500 records and the detail table about 7,000. (In another case it takes 2 seconds for 162 master rows and 228 detail rows.)
    In Master.jsp I set rangesize = "10" to reduce data loading time.
    ======================== master.jsp =================================
    <jbo:DataSource id="dsMaster appid=testAppId viewobject="MaterView" rangesize="10">
    <a href="detail.jsp?RowKeyValue=<jbo:ShowValue datasource="dsMaster" dataitem="RowKey"/>">
    Here Click
    </a>
    ==================================================================
    Because all records from the master view first have to be retrieved to locate the right row, I set rangesize="-1" in detail.jsp. Consequently this leads to lower performance.
    When rangesize="20" is set instead of rangesize="-1", the performance is good, but the desired data from the detail view is not displayed if the master row clicked on is not within this range.
    ======================== detail.jsp ======================================
    <jbo:DataSource id="dsMaster appid=testAppId viewobject="MaterView" rangesize="-1">
    <jbo:RefreshDataSource datasource="dsMaster">
    <jbo:Row id="msRow" datasource="dsMaster" action="Find" rowkeyparam="RowkeyValue">
    <jbo:dataSource id="dsDetail" appid="testAppId viewobject="DetailView">
    ======================================================================
    Is my programming logic simply not suited to high performance?
    How can I improve the performance, if it is so?
    Many thanks for your help.
    regards,
    Yoo

    http://forums.adobe.com/thread/1369260?tstart=0

  • How to improve Performance for Select statement

    Hi Friends,
    Can  you please help me in improving the performance of the following query:
    SELECT SINGLE MAX( policyterm ) startterm INTO (lv_term, lv_cal_date) FROM zu1cd_policyterm WHERE gpart = gv_part GROUP BY startterm.
    Thanks and Regards,
    Johny

    Long lists cannot be produced with a SELECT SINGLE, and there is also nothing to group.
    But I guess the SINGLE is a bug:
    SELECT MAX( policyterm ) startterm
                  INTO (lv_term, lv_cal_date)
                  FROM zu1cd_policyterm
                  WHERE gpart = gv_part
                  GROUP BY startterm.
    How many records are in zu1cd_policyterm?
    Is there an index starting with gpart?
    If the first answer is 'large' and the second is 'no', the SELECT will be slow.
    What is the meaning of gpart? How many different values can it assume?
    If it has many different values, then an index makes sense, provided you are allowed
    to create one.
    Otherwise you must be patient.
    Siegfried

  • Improving performance for java

    I'm new to this so please bear with me ... I have 2 basic questions.
    I just upgraded my server to SunOS 5.10 Generic_139555-08 sun4u sparc SUNW,Sun-Fire-V440
    I also upgraded java to java version "1.6.0_14"
    This is a 4 processor box. Top gives me:
    last pid: 26233; load averages: 2.79, 2.99, 3.12 13:23:57
    174 processes: 172 sleeping, 2 on cpu
    CPU states: 40.2% idle, 54.2% user, 5.6% kernel, 0.0% iowait, 0.0% swap
    Memory: 8192M real, 3059M free, 6156M swap in use, 4105M swap free
    PID USERNAME THR PRI NICE SIZE RES STATE TIME CPU COMMAND
    17294 prodslic 270 0 0 654M 641M cpu/1 527:36 50.02% java
    *1st Question:*
    *1. Why is java using so much cpu time?*
    When I run ps -ef | grep java:
    root 15666 1 0 Aug 10 ? 4:52 /usr/java/bin/java -server -Xmx128m -XX:+UseParallelGC -XX:ParallelGCThreads=4
    prodslic 17294 1 25 18:07:14 ? 530:07 /usr/jdk/instances/jdk1.6.0/bin/java -Xmx1024m -Djava.awt.headless=true -Djava.
    *2nd Question:*
    *2. Why are there 2 java version running?*
    /usr/java/bin/java -versionjava version "1.5.0_18"
    /usr/jdk/instances/jdk1.6.0/bin/java -versionjava version "1.6.0_14"
    This is confusing to me. I'd also like to know what the different command-line options mean.
    Thanks
    Edited by: BH80477 on Aug 14, 2009 8:19 AM

    Claudia,
    We have not yet released a "production" version of OHJ 4.2 because 1) we'd like to add a few more features before calling it production, and 2) OHJ 4.2 has not yet shipped with an Oracle product and therefore hasn't undergone the rigorous testing that takes place on larger Oracle products. If you're concerned about the label of "production" versus "beta," I think you'd be pretty safe going with the latest release of OHJ, or we wouldn't have put it OTN for you to use. The developers have tested thoroughly. :) I'd recommend trying out 4.2.1 since it's got some substantial improvements over the 4.1 branch, and on the off-chance you run into problems, please let us know.
    - Ryan

  • How to improve performance for Azure Table Storage bulk loads

    Hello all,
    Would appreciate your help as we are facing a challenge.
    We are trying to bulk load Azure Table storage. We have a file that contains nearly 2 million rows.
    We would need to reach a point where we can bulk load 100,000-150,000 entries per minute. Currently it takes more than 10 hours to process the file.
    We have tried Parallel.ForEach but it doesn't help. Today I discovered partitioning in PLINQ. Would that be the way to go?
    Any ideas? I have spent nearly two days trying to optimize it using PLINQ, but I am still not sure what the best approach is.
    Kindly note that we shouldn't be using SQL / Azure SQL for this.
    I would really appreciate your help.
    Thanks

    I'd think you're just pooling the parallel connections to Azure, if you do it on one system.  You'd also have a bottleneck of round trip time from you, through the internet to Azure and back again.
    You could speed it up by moving the data file to the cloud and process it with a Cloud worker role.  That way you'd be in the datacenter (which is a much faster, more optimized network.)
    Or, if that's not fast enough - if you can split the data so multiple WorkerRoles could each process part of the file, you can use the VM's scale to put enough machines to it that it gets done quickly.
    Darin R.

  • How to improve performance for this code

    Hi,
    LOOP AT lt_element INTO ls_element.
      READ TABLE lt_element_ident INTO ls_element_ident
        WITH KEY element_id = ls_element-element_id BINARY SEARCH.
      IF sy-subrc EQ 0.
        MOVE ls_element_ident-value TO lv_guid.
        SELECT * FROM zcm_valuation_at
          APPENDING CORRESPONDING FIELDS OF TABLE lt_caseattributes
          WHERE case_guid = lv_guid.
      ENDIF.
    ENDLOOP.
    LOOP AT lt_caseattributes INTO ls_caseattributes.
      IF ls_caseattributes-ext_key IS INITIAL.
        SELECT SINGLE ext_key
          INTO CORRESPONDING FIELDS OF ls_caseattributes
          FROM scmg_t_case_attr
          WHERE case_guid = ls_caseattributes-case_guid.
      ENDIF.
      " To get the status description of the case
      SELECT SINGLE stat_ordno_descr
        INTO ls_caseattributes-status
        FROM scmgstatprofst AS a
        INNER JOIN scmg_t_case_attr AS b
          ON  a~profile_id   = b~profile_id
          AND a~stat_orderno = b~stat_orderno
        WHERE b~case_guid = ls_caseattributes-case_guid.
      MODIFY lt_caseattributes FROM ls_caseattributes INDEX sy-tabix
        TRANSPORTING status ext_key.
    ENDLOOP.
    READ TABLE lt_caseattributes INTO ls_caseattributes INDEX 1.
    Regards,
    Maruti

    Hi,
    Try this kind of code:
    ==================================
    " --- start new ---
    DATA:
      lt_scmgstatprofst LIKE scmgstatprofst OCCURS 0 WITH HEADER LINE,
      wa_scmg_t_case_attr LIKE scmg_t_case_attr,
      lv_tabix LIKE sy-tabix.
    SELECT * FROM scmgstatprofst INTO TABLE lt_scmgstatprofst.
    SORT lt_scmgstatprofst BY profile_id stat_orderno.
    " --- end new ---
    LOOP AT lt_element INTO ls_element.
      READ TABLE lt_element_ident INTO ls_element_ident
        WITH KEY element_id = ls_element-element_id BINARY SEARCH.
      IF sy-subrc EQ 0.
        MOVE ls_element_ident-value TO lv_guid.
        SELECT * FROM zcm_valuation_at
          APPENDING CORRESPONDING FIELDS OF TABLE lt_caseattributes
          WHERE case_guid = lv_guid.
      ENDIF.
    ENDLOOP.
    LOOP AT lt_caseattributes INTO ls_caseattributes.
      lv_tabix = sy-tabix.       " the READ TABLE below overwrites sy-tabix
      IF ls_caseattributes-ext_key IS INITIAL.
        SELECT SINGLE ext_key
          INTO CORRESPONDING FIELDS OF ls_caseattributes
          FROM scmg_t_case_attr
          WHERE case_guid = ls_caseattributes-case_guid.
      ENDIF.
      " To get the status description of the case
      " --- start deletion ---
      " SELECT SINGLE stat_ordno_descr
      "   INTO ls_caseattributes-status
      "   FROM scmgstatprofst AS a
      "   INNER JOIN scmg_t_case_attr AS b
      "     ON  a~profile_id   = b~profile_id
      "     AND a~stat_orderno = b~stat_orderno
      "   WHERE b~case_guid = ls_caseattributes-case_guid.
      " --- end deletion ---
      " --- start new ---
      CLEAR wa_scmg_t_case_attr.
      SELECT SINGLE * FROM scmg_t_case_attr INTO wa_scmg_t_case_attr
        WHERE case_guid = ls_caseattributes-case_guid.
      READ TABLE lt_scmgstatprofst WITH KEY
          profile_id   = wa_scmg_t_case_attr-profile_id
          stat_orderno = wa_scmg_t_case_attr-stat_orderno
        BINARY SEARCH.
      IF sy-subrc IS INITIAL.
        ls_caseattributes-status = lt_scmgstatprofst-stat_ordno_descr.
      ENDIF.
      " --- end new ---
      MODIFY lt_caseattributes FROM ls_caseattributes INDEX lv_tabix
        TRANSPORTING status ext_key.
    ENDLOOP.
    READ TABLE lt_caseattributes INTO ls_caseattributes INDEX 1.
    ==================================
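    One further idea along the same lines (just a sketch, not part of the original reply; it reuses the names from the code above): the SELECT SINGLE on scmg_t_case_attr inside the loop can also be bulked into one FOR ALL ENTRIES read before the loop, with a binary READ TABLE inside it.
    " Sketch only: read all needed scmg_t_case_attr rows in one database call
    DATA: lt_case_attr LIKE scmg_t_case_attr OCCURS 0 WITH HEADER LINE.

    IF lt_caseattributes IS NOT INITIAL.
      SELECT * FROM scmg_t_case_attr
        INTO TABLE lt_case_attr
        FOR ALL ENTRIES IN lt_caseattributes
        WHERE case_guid = lt_caseattributes-case_guid.
      SORT lt_case_attr BY case_guid.
    ENDIF.

    " Inside the LOOP, replace the SELECT SINGLE with:
    " READ TABLE lt_case_attr WITH KEY case_guid = ls_caseattributes-case_guid
    "   BINARY SEARCH.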
    Regards
    Walter Habich
    Edited by: Walter Habich on Jun 17, 2008 8:41 AM

  • Split of Cubes to improve the performance of reports

    Hello Friends. We are now implementing Finance GL line items for global automobile operations at BMW, with services outsourced to Japan, which has increased the data volume to 300 million records over the 2 years since go-live. We have 200 company codes.
    How to improve performance:
    1. Please advise on splitting the cubes based on year and company code, which are region based; this means the European users would run the report out of one cube and the same report for America would run on another cube.
    But the question here is: if I make 8 cubes (2 for each year: one for current-year company code ABC and one for current-year DEF), (2 for each year: one for previous-year ABC and one for previous-year DEF),
    (2 for each year: one for archive-year ABC and one for archive-year DEF),
    then how can I tell the query which cube to read the data from? Since company code is an authorization variable, picking up that company code value and building a customer exit variable for the InfoProvider would add a lot of work.
    Is there a good way to do this? Does splitting the cubes by company code make sense, or should it be by year only?
    Please suggest a good step-by-step approach to splitting cubes for 60 million records over 2 years; growth will be the same for the
    next 4 years since more company codes are coming.
    2. Please advise whether splitting the cube will improve report performance, or whether it will make it worse, since the query would now need to go through 5-6 different cubes.
    Thanks
    Regards
    Soniya

    Hi Soniya,
    There are two ways in which you can split your cube: either based on year or based on company code (i.e. region). While loading the data, write code in the start routine which filters the data. For example, if you are loading data for three regions, say 1, 2, and 3, your code will be something like
    DELETE SOURCE_PACKAGE WHERE REGION EQ '2' OR
                                REGION EQ '3'.
    This will load into this cube only the data corresponding to region 1.
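    For orientation, a minimal sketch of where that DELETE sits in a BI 7.x transformation start routine (the REGION field name comes from the example above and must exist in your own source structure):
    METHOD start_routine.
      " Keep only region 1 in the cube that this transformation feeds
      DELETE source_package WHERE region EQ '2'
                               OR region EQ '3'.
    ENDMETHOD.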
    you can build your reports either on these cubes or you can have a multiprovider above these cubes and build the report.
    Thanks..
    Shambhu
