Improving backup performance for Essbase / Planning

Hi all,
I am trying to improve our backup performance and limit downtime for our Planning applications. Currently we are running 11.1.1.3 and an instance of 11.1.2.1. We are doing your standard cold backups on the Essbase side (shut down, copy files), then we do a level 0 export as a logical backup. Downtime is getting a bit restrictive and the business wants a more DR-friendly solution.
Has anyone tried any of these options? If so, please let me know how they worked:
1. Use BCV technology to snap off a copy of the Essbase databases to limit the downtime of a normal copy.
2. The HTBase backup utility. The demo looked good, but is it proven?
3. Transaction log mode with hot backups.
I'm just trying to weigh the best options and see what is proven out there.
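For reference, the level 0 export step of our current cold backup is scripted in MaxL roughly like this (the Plan1 app/db name, credentials and file name are just placeholders, not our real ones):

    login admin password on localhost;
    /* keep users out while the copy and export run */
    alter application Plan1 disable connects;
    /* level 0 export used as the logical backup */
    export database Plan1.Plan1 level0 data to data_file 'plan1_lev0.txt';
    alter application Plan1 enable connects;
    logout;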
Any help would be great.

Hi Mike,
You can take a backup using the "Archive database" functionality. While data is being archived, the database is put into read-only mode. The only catch is that you won't be able to write data to the cube during that time, and the backup may take longer because the archive backs up the whole database with all its files.
You can also use the BSO replay feature by enabling transaction logging.
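A minimal MaxL sketch of both ideas, assuming a placeholder app/db of Plan1.Plan1 (transaction logging itself must first be enabled in essbase.cfg with a TRANSACTIONLOGLOCATION entry before replay is available):

    login admin password on localhost;
    /* hot backup: the database goes read-only while the archive file is written */
    alter database Plan1.Plan1 archive to file '/backup/plan1.arc';
    /* after a restore, replay the logged transactions captured since a point in time */
    /* timestamp format per the MaxL reference: mm_dd_yyyy:hh:mm:ss */
    alter database Plan1.Plan1 replay transactions after '10_20_2012:14:20:00';
    logout;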
See if this helps.
Cheers,
SS

Similar Messages

  • Hot backups for essbase planning databases

    Hi gurus,
    Can anyone please advise on the best way to make hot backups of our Essbase Planning databases for disaster recovery?
    We tried DoubleTake, but files are locked unless Essbase is put in read-only mode.
    Any good tools for this? Anything certified? I've heard maybe VBE?
    TIA
    M.

    Hi,
    First of all, have you read the documentation on backing up Essbase? :- http://download.oracle.com/docs/cd/E10530_01/doc/epm.931/hfm_backup.pdf
    If you are taking a backup while Essbase is running, you will need to put the databases into read-only mode. This can be done with a MaxL script, and with another MaxL script to take them out of read-only mode again afterwards.
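    For example, a minimal MaxL sketch of the read-only toggle, assuming a placeholder database Sample.Basic (the listing file records which files to back up):

        login admin password on localhost;
        /* put the database into read-only (archive) mode */
        alter database Sample.Basic begin archive to file 'basic_backup.lst';
        /* ... run the file-system copy here ... */
        /* return the database to read-write mode */
        alter database Sample.Basic end archive;
        logout;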
    There are lots of third-party applications out there for backing up file structures; it all depends on what you want it to do and how much you want to pay.
    If it is a small installation you could even script it yourself, depending on your knowledge, to back up the files and copy them to a SAN or backup drive.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Need help in improving the performance for the sql query

    Thanks in advance for helping me.
    I was trying to improve the performance of the query below. I tried the following methods: used MERGE instead of UPDATE, used BULK COLLECT / FORALL updates, used the ORDERED hint, and created a temp table and updated the target table from it. None of these methods improved performance. The update touches 2 million records in the target table, and the target table has 15 million records.
    Any suggestions or solutions for improving performance are appreciated.
    SQL query:
    update targettable tt
    set mnop = 'G'
    where ( tt.x, tt.y, tt.z ) in
          ( select a.x, a.y, a.z
            from table1 a
            where ( a.x, a.y, a.z ) not in
                  ( select b.x, b.y, b.z
                    from table2 b
                    where 'O' = b.defg ) )
      and tt.mnop = 'P'
      and tt.hijkl = 'UVW';

    987981 wrote:
    I was trying to improve the performance of the below query. I tried the following methods: used merge instead of update, used bulk collect / FORALL update, used the ordered hint, created a temp table and updated the target table using it. The methods which I used did not improve performance.
    And that meant what? Surely if you spend all that time and effort trying various approaches, it should mean something? Failures are as important teachers as successes. You need to learn from failures too. :-)
    The data count which is updated in the target table is 2 million records and the target table has 15 million records.
    Tables have rows, btw, not records. Database people tend to get upset when rows are called records, as records exist in files and a database is not a mere collection of records and files.
    The failure to find a single faster method with the approaches you tried suggests that you do not know what the actual performance problem is. And without knowing the problem, you still went ahead, guns blazing.
    The very first step in dealing with any software engineering problem is to identify the problem. Seeing the symptoms (slow performance) is still a long way from problem identification.
    Part of identifying the performance problem is understanding the workload. Just what does the code ask the database to do?
    From your comments, it needs to find 2 million rows among 15 million rows, change those rows, and then write 2 million rows back to disk.
    That is not a small workload. Simple example: say the 2 million row find costs 1 ms/row and the 2 million row write also costs 1 ms/row. That is 4,000,000 ms in total, or roughly a 66 minute workload. Due to the number of rows, an increase in time/row either way will potentially have a 2-million-fold impact.
    So where is the performance problem? Time spent finding the 2 million rows (where other tables need to be read, indexes used, etc.)? Time spent writing the 2 million rows (where triggers need to be fired and indexes maintained)? Both?

  • How to improve the performance for integrating third party search engine

    hi,
    I have been working on integrating the Verity search engine with KM. The time to retrieve search results depends entirely on how many results are returned: for example, with fewer than 10 records it takes only 3 seconds, but with 200 records it takes about 3 minutes. Is that normal? Any way to improve it? Thanks!
    T.J.

    Thilo,
    Thanks for the response. Would you recommend some documentation for configuring the KM cache service? I changed the memory cache and also the dynamic web repository; what else is out there that I can change? Right now I have one instance (EP6.4 SP11) that works well: it returns 200 records from Stellent within 6s. But when I put this KM global service on EP6.0 SP2 (our current system) it takes about 15s. I am not sure whether this is because of the different EP version or something else. I have tried my best to slim down the SOAP component from Stellent; I don't think anything else can be done on that side anymore. Before I changed the SOAP, it took about 60s. I just wonder what else I can do on the KM side to improve performance. Thanks!
    T.J.

  • Improving retrieval performance of essbase server in unix environment

    Hi,
    Our production environment is a UNIX system. Can anyone suggest settings which impact retrieval performance, and how to apply these settings in a UNIX environment?

    Naveen,
    For retrieval performance:
    1. Increase the retrieval buffer size. The default is 10 KB for 32-bit platforms and 20 KB for 64-bit; make it 100 KB (see the MaxL sketch after this list).
    2. If the data block size is large and you are retrieving cells across several blocks, set VLBREPORT TRUE in the essbase.cfg configuration file.
    NOTE: this will speed up the retrieval process, but it is only applicable to outlines which do not include dynamic calcs.
    3. If the format of your report is not of much importance, group dense dimensions in columns and sparse dimensions in rows; this will be faster.
    4. An application/database does have a limit on its memory consumption, so RAM is the key for speed. The best part is that, as you have a UNIX operating system, addressable memory in your case is 3.9 GB per application (which is very good), unlike 2 GB in the case of Windows.
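    For point 1, a minimal MaxL sketch (Sample.Basic and the credentials are placeholders; the buffer sizes are given in KB):

        login admin password on localhost;
        /* raise the retrieval and retrieval sort buffers from the defaults to 100 KB */
        alter database Sample.Basic set retrieve_buffer_size 100;
        alter database Sample.Basic set retrieve_sort_buffer_size 100;
        logout;
        /* note: VLBREPORT TRUE from point 2 is an essbase.cfg setting, not a MaxL statement */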
    Sandeep Reddy Enti
    HCC

  • Performance for table BKFP

    Hi,
    I would like to ask whether there is any way I can improve the performance of reads on table BKPF from the ABAP code point of view.
    We have customized a program to generate a report for the asset master listing.
    One of the SELECT statements is shown below:
          SELECT SINGLE * FROM BKPF WHERE BUKRS = ANEP-BUKRS
                                      AND GJAHR = ANEP-GJAHR
                                      AND AWKEY = AWKEYUS.
    I would like to know how it differs from the select statement below:
    SELECT SINGLE * FROM BKPF INTO CORRESPONDING FIELDS OF T_BKPF
          WHERE
          BUKRS = ANEP-BUKRS
      AND GJAHR = ANEP-GJAHR
      AND AWKEY = AWKEY.
    Which of the select statements above would improve the report? Currently we are facing quite a bad performance issue with it.
    Can I post the ABAP code on this forum?
    Hope someone can help me on this. Thank you.

    Hi,
    As much as possible, use the primary keys of BKPF, which are BUKRS, BELNR and GJAHR. Also, select only the fields and records which are needed, to increase performance. Please look at the code below:
    DATA: lv_age_of_rec TYPE p.
      FIELD-SYMBOLS: <fs_final> LIKE LINE OF it_final.
      LOOP AT it_final ASSIGNING <fs_final>.
    *   get records from BKPF
        SELECT SINGLE bukrs belnr gjahr budat bldat xblnr bktxt FROM bkpf
        INTO (bkpf-bukrs, bkpf-belnr, bkpf-gjahr, <fs_final>-budat,
              <fs_final>-bldat, <fs_final>-xblnr, <fs_final>-bktxt)
        WHERE bukrs = <fs_final>-bukrs
          AND belnr = <fs_final>-belnr
          AND gjahr = <fs_final>-gjahr.
    *   if <fs_final>-shkzg = 'H', multiply dmbtr (amount in local currency)
    *   by negative 1
        IF <fs_final>-shkzg = 'H'.
          <fs_final>-dmbtr = <fs_final>-dmbtr * -1.
        ENDIF.
    *   combine company code (bukrs), accounting document number (belnr),
    *   fiscal year (gjahr) and line item (buzei) to get the long text
        CONCATENATE <fs_final>-bukrs <fs_final>-belnr
                    <fs_final>-gjahr <fs_final>-buzei
                    INTO it_thead-tdname.
        CALL FUNCTION 'READ_TEXT'
          EXPORTING
            client                        = sy-mandt
            id                            = '0001'
            language                      = sy-langu
            name                          = it_thead-tdname
            object                        = 'DOC_ITEM'
    *       archive_handle                = 0
    *       local_cat                     = ' '
    *     IMPORTING
    *       header                        =
          TABLES
            lines                         = it_lines
          EXCEPTIONS
            id                            = 1
            language                      = 2
            name                          = 3
            not_found                     = 4
            object                        = 5
            reference_check               = 6
            wrong_access_to_archive       = 7
            OTHERS                        = 8.
        IF sy-subrc <> 0.
          MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                  WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
        ENDIF.
    *   if successful, split the long text into start and end date
        IF sy-subrc = 0.
          READ TABLE it_lines TRANSPORTING tdline.
          IF sy-subrc = 0.
            SPLIT it_lines-tdline AT '-' INTO
                  <fs_final>-s_dat <fs_final>-e_dat.
          ENDIF.
        ENDIF.
    *   get vendor name from LFA1
        SELECT SINGLE name1 FROM lfa1
        INTO <fs_final>-name1
        WHERE lifnr = <fs_final>-lifnr.
        lv_age_of_rec = p_budat - <fs_final>-budat.
    *   condition for age of deposits
        IF lv_age_of_rec <= 30.
          <fs_final>-amount1 = <fs_final>-dmbtr.
        ELSEIF lv_age_of_rec > 30 AND lv_age_of_rec <= 60.
          <fs_final>-amount2 = <fs_final>-dmbtr.
        ELSEIF lv_age_of_rec > 60 AND lv_age_of_rec <= 90.
          <fs_final>-amount3 = <fs_final>-dmbtr.
        ELSEIF lv_age_of_rec > 90 AND lv_age_of_rec <= 120.
          <fs_final>-amount4 = <fs_final>-dmbtr.
        ELSEIF lv_age_of_rec > 120 AND lv_age_of_rec <= 180.
          <fs_final>-amount5 = <fs_final>-dmbtr.
        ELSEIF lv_age_of_rec > 180.
          <fs_final>-amount6 = <fs_final>-dmbtr.
        ENDIF.
        CLEAR: bkpf, it_lines-tdline, lv_age_of_rec.
      ENDLOOP.
    Hope this helps...
    P.S. Please award points for useful answers.

  • Performance for the below code

    Can any one help me in improving the performance for the below code.
    FORM RETRIEVE_DATA .
    CLEAR WA_TERRINFO.
    CLEAR WA_KNA1.
    CLEAR WA_ADRC.
    CLEAR SORT2.
    *To retrieve the territory information from ZPSDSALREP
    SELECT ZZTERRMG
           ZZSALESREP
           NAME1
           ZREP_PROFILE
           ZTEAM
         INTO TABLE GT_TERRINFO
         FROM ZPSDSALREP.
    *Preparing Corporate ID from KNA1 & ADRC and storing it in SORT2 field
    LOOP AT GT_TERRINFO INTO WA_TERRINFO.
      SELECT SINGLE * FROM KNA1 INTO WA_KNA1
                      WHERE KUNNR = WA_TERRINFO-SALESREP.
      SELECT SINGLE * FROM ADRC INTO WA_ADRC
                      WHERE ADDRNUMBER = WA_KNA1-ADRNR.
      IF NOT WA_ADRC-SORT2 IS INITIAL.
      CONCATENATE 'U' WA_ADRC-SORT2 INTO SORT2.
      MOVE SORT2 TO WA_TERRINFO-SORT2.
    MODIFY GT_TERRINFO1 FROM WA_TERRINFO.
      APPEND WA_TERRINFO TO GT_TERRINFO1.
      CLEAR WA_TERRINFO.
      ENDIF.
      CLEAR WA_KNA1.
      CLEAR WA_ADRC.
    ENDLOOP.
    ENDFORM.                    " RETRIEVE_DATA

    Hi
    The code is simple, so I don't think there is much you can do; you can only try to limit the reading of KNA1:
    FORM RETRIEVE_DATA .
      CLEAR WA_TERRINFO.
      CLEAR WA_KNA1.
      CLEAR WA_ADRC.
      CLEAR SORT2.
    *To retrieve the territory information from ZPSDSALREP
      SELECT ZZTERRMG
      ZZSALESREP
      NAME1
      ZREP_PROFILE
      ZTEAM
      INTO TABLE GT_TERRINFO
      FROM ZPSDSALREP.
      SORT GT_TERRINFO BY SALESREP.
    *Preparing Corporate ID from KNA1 & ADRC and storing it in SORT2 field
      LOOP AT GT_TERRINFO INTO WA_TERRINFO.
        IF WA_TERRINFO-SALESREP <> WA_KNA1-KUNNR.
          SELECT SINGLE * FROM KNA1 INTO WA_KNA1
               WHERE KUNNR = WA_TERRINFO-SALESREP.
          IF SY-SUBRC <> 0.
            CLEAR: WA_KNA1, WA_ADRC.
          ELSE.
            SELECT SINGLE * FROM ADRC INTO WA_ADRC
                                     WHERE ADDRNUMBER = WA_KNA1-ADRNR.
            IF SY-SUBRC <> 0. CLEAR WA_ADRC. ENDIF.
          ENDIF.
        ENDIF.
        IF NOT WA_ADRC-SORT2 IS INITIAL.
          CONCATENATE 'U' WA_ADRC-SORT2 INTO SORT2.
          MOVE SORT2 TO WA_TERRINFO-SORT2.
    * MODIFY GT_TERRINFO1 FROM WA_TERRINFO.
          APPEND WA_TERRINFO TO GT_TERRINFO1.
          CLEAR WA_TERRINFO.
        ENDIF.
      ENDLOOP.
    ENDFORM. " RETRIEVE_DATA
    If the program takes a long time to read the data from ZPSDSALREP, you can try to split the read into several packages:
    SELECT ZZTERRMG ZZSALESREP NAME1 ZREP_PROFILE ZTEAM
      INTO TABLE GT_TERRINFO PACKAGE SIZE <...>
      FROM ZPSDSALREP.
      SORT GT_TERRINFO BY SALESREP.
    *Preparing Corporate ID from KNA1 & ADRC and storing it in SORT2 field
      LOOP AT GT_TERRINFO INTO WA_TERRINFO.
        IF WA_TERRINFO-SALESREP <> WA_KNA1-KUNNR.
          SELECT SINGLE * FROM KNA1 INTO WA_KNA1
               WHERE KUNNR = WA_TERRINFO-SALESREP.
          IF SY-SUBRC <> 0.
            CLEAR: WA_KNA1, WA_ADRC.
          ELSE.
            SELECT SINGLE * FROM ADRC INTO WA_ADRC
                                     WHERE ADDRNUMBER = WA_KNA1-ADRNR.
            IF SY-SUBRC <> 0. CLEAR WA_ADRC. ENDIF.
          ENDIF.
        ENDIF.
        IF NOT WA_ADRC-SORT2 IS INITIAL.
          CONCATENATE 'U' WA_ADRC-SORT2 INTO SORT2.
          MOVE SORT2 TO WA_TERRINFO-SORT2.
    * MODIFY GT_TERRINFO1 FROM WA_TERRINFO.
          APPEND WA_TERRINFO TO GT_TERRINFO1.
          CLEAR WA_TERRINFO.
        ENDIF.
      ENDLOOP.
    ENDSELECT.
    Max

  • Improve web form performance of Hyperion Planning

    Hi all,
    I am not sure whether anyone has tried the method recommended by Oracle to compress the web forms used in Hyperion Planning so as to improve network performance:
    http://download.oracle.com/docs/cd/E12032_01/doc/epm.921/html_hp_admin/frameset.htm?/docs/cd/E12032_01/doc/epm.921/html_hp_admin/hpamin-19-17.htm
    Slow Performance When Opening Data Forms Using a Dial-Up Connection
    Scenario:
    Opening a data form using a slow network connection (for example, with a modem) takes a long time.
    Solution:
    You can significantly improve performance over the network when opening data forms by modifying the web.xml file, as described in this section. This solution compresses the data stream sent from the Planning server to the client by approximately 90%.
    Note: If you are using a WebLogic (all supported versions) Web application server, complete the second procedure, which is specific to WebLogic. If you are using any other Web application server, complete the first procedure, which contains general instructions.
    To modify the web.xml file to improve performance, for any Web application server except WebLogic:
    1 With a text editor, open the web.xml file, located in HyperionPlanning.ear or HyperionPlanning.war.
    2 After the tag </description> and before the tag <listener>, insert the following lines:
    <filter>
      <filter-name>HspCompressionFilter</filter-name>
      <filter-class>com.hyperion.planning.HspCompressionFilter</filter-class>
      <init-param>
        <param-name>compressionThreshold</param-name>
        <param-value>2048</param-value>
      </init-param>
      <init-param>
        <param-name>debug</param-name>
        <param-value>1</param-value>
      </init-param>
    </filter>
    <filter-mapping>
      <filter-name>HspCompressionFilter</filter-name>
      <url-pattern>/EnterData.jsp</url-pattern>
    </filter-mapping>
    3 Save the web.xml file.
    If you are using WebLogic, you must manually modify the .ear file and redeploy it for the Web application server.
    To improve performance with a WebLogic application server:
    1 Unzip the HyperionPlanning.ear file to C:\ear, for example.
    2 Unzip HyperionPlanning.war under C:\ear to C:\war.
    3 With a text editor, open the C:\war\WEB-INF\web.xml file and modify it using the instructions in step 2 of the preceding procedure.
    4 Compress the content of C:\war into C:\ear\HyperionPlanning.war.
    5 Compress the content of C:\ear into a new HyperionPlanning.ear.
    6 Deploy the new HyperionPlanning.ear for the WebLogic Web application server.

    Hi,
    Yes, we did this with a bank whose branches connected to the servers via dial-up. They were complaining about the amount of data being transferred between branches and servers. It worked well there and improved the data transfer performance considerably. However, nowadays the performance bottlenecks are generally not due to the amount of data being transferred over the network, but due to the JavaScript components that have to run on the client and to heavy calculation, data and report creation processes.
    Cheers,
    Alp

  • Re: How to Improve the performance on Rollup of Aggregates for PCA Infocube

    Hi BW Gurus,
    I have an unresolved issue and our team is still working on it.
    I have already posted several questions on this, but I am still not clear on how to reduce the time of the Rollup of Aggregates process.
    I have requested an OSS note and am searching myself, but I still could not find one.
    Finally I executed one of the cubes in RSRV with the database check
    "Database indexes of an InfoCube and its aggregates" and got warning messages. I tried to correct the errors and executed it once again, but I still get the warnings. The messages are as follows (this is only for one InfoCube; we have 6 InfoCubes and I am executing them one by one):
    ORACLE: Index /BI0/IACCOUNT~0 has possibly degenerated
    ORACLE: Index /BI0/IPROFIT_CTR~0 has possibly degenerated     
    ORACLE: Index /BI0/SREQUID~0 has possibly degenerated
    ORACLE: Index /BIC/D1001072~010 has possibly degenerated
    ORACLE: Index /BIC/D1001132~010 has possibly degenerated
    ORACLE: Index /BIC/D1001212~010 has possibly degenerated
    ORACLE: Index /BIC/DGPCOGC062~01 has possibly degenerated
    ORACLE: Index /BIC/IGGRA_CODE~0 has possibly degenerated
    ORACLE: Index /BIC/QGMAPGP1~0 has possibly degenerated
    ORACLE: Index /BIC/QGMAPPC2~0 has possibly degenerated
    ORACLE: Index /BIC/SGMAPGP1~0 has possibly degenerated
    I don't know how to move further on this. Can anyone tell me how to tackle this problem to increase the performance of Rollup of Aggregates (PCA InfoCubes)?
    I regularly create indexes and statistics to improve the performance; it works for a couple of days and then the performance of the rollup of aggregates gradually degrades again.
    Thanks and Regards,
    Venkat

    hi,
    Check in a SQL client the SQL created by BI against the query that you use directly from your physical layer.
    The time between these two must be 2-3 seconds, otherwise you have problems (these seconds are for scripts needed by BI).
    If you use "like" in your SQL, then forget indexes...
    For more information about indexes, check Google or ask your DBA.
    Last, I mentioned that the materialized view is not perfect, but it helps a lot... so why not try to split it into smaller ones...
    ex...
    logical dimensions
    year-half-day
    company-department
    fact
    quantity
    instead of making one... make 3:
    year - department - quantity
    half - department - quantity
    day - department - quantity
    and add them as data sources and assign them the appropriate logical level in the business layer in the Administration tool...
    Do you use the partitioning functionality???
    I hope I helped...
    http://greekoraclebi.blogspot.com/

  • Financial report for essbase and planning application

    Hi,
    Can anyone please explain the difference between creating a financial report against an Essbase application and against a Planning application,
    considering we will have an Essbase cube for the Planning application?
    Thanks in advance

    Correct, the data that is stored in Essbase will be the same regardless of which connection type you use.
    Data that is only stored in Planning can only be retrieved using a Planning connection. This includes Supporting Detail and SmartLists as these are stored in the Planning relational database.
    As a general rule I always use an Essbase connection unless I specifically want to get supporting detail or Smart Lists, but that is just me.
    Hope this helps
    Stuart

  • Only Essbase read access for a Planning App

    A user needs access only to read data values from Essbase for a Planning application, similar to the "View User" role in Planning except that they don't need access to Planning itself. How can this be achieved?
    Thanks

    I would have thought that, as it is a Planning application, you would need to provision a Planning role for the user; the application will be seen as a Planning application in Shared Services and will not appear under Essbase.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Smart View for Essbase Not Available in Planning Rapid Deployment

    I have successfully deployed EPM 11.1.2.3 by following the steps in the Planning Rapid Deployment guide (http://docs.oracle.com/cd/E40248_01/epm.1112/epm_planning_rapid_deploy/epm_planning_rapid_deploy.html). However, when I bring up Smart View using a Shared Connections URL of http://<servername>:9000/workspace/SmartViewProviders, I only get connections for Planning and for the Reporting and Analysis Framework. How can I get a connection for Essbase?

    As it turns out, the 11.1.2.3 Planning Rapid Deployment wizard doesn't install Essbase Provider Services, just Planning Provider Services and Reporting and Analysis Provider Services. Until I installed Essbase Provider Services, I wasn't even able to create a private connection to Essbase. I had to run InstallTool.cmd and select "Provider Services Web Application" from the Essbase section of the selection screen. I also had to run the configuration utility and select "Deploy to Application Server" under "Provider Services" in the Essbase section of the selection screen.
    After that, I was able to create a private connection to Essbase with this URL: http://<server name>:9000/aps/APS
    Although the private connection was now working, the Essbase connection was still not showing up as a shared connection in Excel, when I set the Excel Smart View option for Shared Connections URL to http://<server name>:9000/workspace/SmartViewProviders. I was able to get this working by running the configuration utility again and selecting "Configure Web Server" in the "Hyperion Foundation" section of the selection screen.
    I was also able to create an XML file as Celvin suggested, using http://<server name>:9000/aps/APS for the Essbase connection. Thanks for the suggestion!
    -Tom

  • Times ten to improve performance for search results in Oracle eBS

    Hi ,
    We have various search scenarios in our ERP implementation using Oracle Apps eBS, for example searching for an item. Oracle Apps does provide item search, but the performance is not great. We have about 30 million items, and so to improve search performance we thought TimesTen might help.
    Can anyone please clarify whether TimesTen can be used to improve performance on the eBS database, and if so, how?

    Vikash,
    We were thinking along the same lines (using TimesTen for massive item search in e-Business Suite). In our case it is massive item / parametric search leveraging the Product Information Management application. We were thinking about setting up a POC on a Linux server with a Vision instance. Should we compare notes?
    SParker

  • How to improve performance for Custom Extractor in BI..

    Hi all,
    I am new to BI and started working on it a couple of weeks ago. I created a custom extractor (data view) in the source system, and pulling data takes a lot of time. Can anyone suggest how I can improve the performance of my custom extractor? Please do the needful.
      Thanks and Regards,
    Venugopal..

    Dear Venugopal,
    use transaction ST05 to check whether your SQL statements are optimal and that you do not have redundant database calls. You should use as much "bulking" as possible, which means fetching the required data with one request to the database rather than with multiple requests.
    Use transaction SE30 to check if you are wasting time in loops and if yes, optimize the algorithm.
    Best Regards,
    Sylvia

  • To improve performance for report

    Hi Expert,
    I have generated the open sales order report, which fetches data from VBAK, and it is taking a long time to execute in the foreground; it ends in a dump. I have also executed it in the background, but it dumps there too.
    SELECT vbeln
               auart
               submi
               vkorg
               vtweg
               spart
               knumv
               vdatu
               vprgr
               ihrez
               bname
               kunnr
        FROM vbak
        APPENDING TABLE itab_vbak_vbap
        FOR ALL ENTRIES IN l_itab_temp
    *BEGIN OF change 17/Oct/2008.
        WHERE erdat IN s_erdat              AND
             submi = l_itab_temp-submi     AND
    *End of Changes 17/Oct/2008.
              auart = l_itab_temp-auart     AND
    *BEGIN OF change 17/Oct/2008.
              submi = l_itab_temp-submi     AND
    *End of Changes 17/Oct/2008.
              vkorg = l_itab_temp-vkorg     AND
              vtweg = l_itab_temp-vtweg     AND
              spart = l_itab_temp-spart     AND
              vdatu = l_itab_temp-vdatu     AND
              vprgr = l_itab_temp-vprgr     AND
              ihrez = l_itab_temp-ihrez     AND
              bname = l_itab_temp-bname     AND
              kunnr = l_itab_temp-sap_kunnr.
        DELETE itab_temp FROM l_v_from_rec TO l_v_to_rec.
      ENDDO.
    Please give me suggestions for improving the performance of this program.

    hi,
    try something like this:
    DATA:BEGIN OF itab1 OCCURS 0,
         vbeln LIKE vbak-vbeln,
         END OF itab1.
    DATA: BEGIN OF itab2 OCCURS 0,
          vbeln LIKE vbap-vbeln,
          posnr LIKE vbap-posnr,
          matnr LIKE vbap-matnr,
          END OF itab2.
    DATA: BEGIN OF itab3 OCCURS 0,
          vbeln TYPE vbeln_va,
          posnr TYPE posnr_va,
          matnr TYPE matnr,
          END OF itab3.
    SELECT-OPTIONS: s_vbeln FOR vbak-vbeln.
    START-OF-SELECTION.
      SELECT vbeln FROM vbak INTO TABLE itab1
      WHERE vbeln IN s_vbeln.
      IF itab1[] IS NOT INITIAL.
        SELECT vbeln posnr matnr FROM vbap INTO TABLE itab2
        FOR ALL ENTRIES IN itab1
        WHERE vbeln = itab1-vbeln.
      ENDIF.
