Report script taking too long to export data

Hello guys,
I have a report script to export data out of a BSO cube. The cube is 200 GB in size, but the exported text file is only 10 MB, and the export takes around 40 minutes.
I have exported data of this size in under a minute from other databases, but this one is taking far too long.
I also have a calc script for the same export, but that takes 20 minutes, which is not reasonable for a 10 MB export either.
Any idea why a report script could take this long? Is it due to the huge size of the database? Or is there a way to optimize the report script?
Any help would be appreciated.
Thanks

Thanks for the input guys.
My DATAEXPORT takes half the time of my report script export. So yes, it is much faster, but still not reasonable (20 minutes for one month of data) compared to other databases that export very quickly.
In my calc script I am just FIXing on level-0 members for most of the dimensions against the specific period, year, and scenario. I have checked the conditions for an optimal report script, and I think mine is fine.
The outline has about 15 dimensions, and only two of them are dense. Do you think the reason might be the huge size of the DB along with too many sparse dimensions?
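In case it helps to compare notes, this is roughly the shape of level-0 DATAEXPORT calc the description above suggests. It is only a sketch: the member names (Actual, FY12, Jan, Entity, Product) and the file path are placeholders, not your actual outline.

```
SET DATAEXPORTOPTIONS
{
  DataExportLevel "LEVEL0";
  DataExportDynamicCalc OFF;  /* dynamic members are expensive to export */
  DataExportColFormat ON;
};

FIX ("Actual", "FY12", "Jan",
     @LEVMBRS ("Entity", 0),
     @LEVMBRS ("Product", 0))
  DATAEXPORT "File" "," "/tmp/lev0_export.txt" "#MI";
ENDFIX
```

With many sparse dimensions, DataExportDynamicCalc OFF and a FIX that is as tight as possible on the sparse dimensions usually matter most.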
I appreciate your help on this.
Thanks

Similar Messages

  • Report script taking too long

    My report script execution time is too long. Previously it used to complete within 5 minutes, when cost center was in the page definition rather than in the rows, but the business requirements changed and we now need data for level-zero cost centers in the row definition.
    I have tried tuning the data buffer size (increased to 100K) and the sort buffer size, but the script still takes too long to complete. Can somebody review the report definition and see if I am missing anything? It runs on the prod environment, which is a beefy box with quad processors and 8 GB of RAM.
    /ESS_LOCALE English_UnitedStates.Latin1@Binary
    // Report Script Name - FY09 Extract
    <ACCON
    <SUPSHARE
    {TABDELIMIT}
    {DECIMAL 9}
    <QUOTEMBRNAMES
    {SUPEMPTYROWS}
    {SUPMISSINGROWS}
    {SUPZEROROWS}
    {MISSINGTEXT  "0"}
    {ROWREPEAT}
    {NOINDENTGEN}
    <SPARSE
    <PAGE("Years","Project","Currency")
    <COLUMN("Time Periods")
    <ROW("Accounts","Entity","SubAccount","Cost Center")
    // Page Members
    // Selection rules and output options for dimension: Years
    {OUTMBRNAMES}
    "FY-2009"
    // Selection rules and output options for dimension: Currencies
    {OUTMBRNAMES}
    "Local"
    // Selection rules and output options for dimension: Project
    {OUTMBRNAMES}
    "Project"
    // Column Members
    // Selection rules and output options for dimension: Time Periods
    {OUTMBRNAMES}
    "Feb"
    "Mar"
    "Apr"
    "May"
    "Jun"
    "Jul"
    "Aug"
    "Sep"
    "Oct"
    "Nov"
    "Dec"
    "Jan"
    // Row Members
    // Selection rules and output options for dimension: Account
    "LEV0,Accounts"
    // Selection rules and output options for dimension: Entities
    "LEV0,Entity"
    // Selection rules and output options for dimension: SubAccount
    "LEV0,SubAccount"
    // Selection rules and output options for dimension: CostCenter
    "LEV0,Cost Center"
    // End of Report

    There's probably no silver bullet for this beast. Going to level 0 on 4 dimensions is going to be tough in the reporting engine. You could probably get some improvement by using the link approach rather than the Lev0 approach.
    The only other way I could think of is taking the report and breaking it into different sections for one of the smaller row dimensions, but it doesn't look like the ones you listed fit that category (they all appear to have more than a handful of members).
    One note: I don't think you need to list the {OUTMBRNAMES} for each dimension up top, but that's cosmetic and you might want it that way (to change one without affecting the others later).
    The reason the report runs for a long time is that it has to build a row index up front so the rows can be sorted, even if you don't request a sort. You could try SORTNONE, but I don't think it removes that step entirely from the sequence, so the report engine burns a lot of time creating the row indexes anyway.
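    For what it's worth, the link approach mentioned above would replace the four default level names in the row section with something like the following (dimension names copied from the report above; the rest of the script stays unchanged):

    ```
    // Row Members, using LINK instead of the default level names
    <LINK (<LEV ("Accounts", 0))
    <LINK (<LEV ("Entity", 0))
    <LINK (<LEV ("SubAccount", 0))
    <LINK (<LEV ("Cost Center", 0))
    ```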

  • WebI Report is taking too long time to opening

    Hi All,
    When I try to open the WebI report in InfoView, it takes a long time to open and refresh.
    Please suggest a solution.
    Thanks in advance..
    Regards,
    Mahesh

    Hi,
    As the issue you are facing is that the WebI report takes too long to open and refresh, I would recommend the steps below.
    1. Check whether the WebI report is set to "Refresh on Open"; if yes, you probably need to uncheck it, save the report, and open it again.
    2. Try to run the same query in the backend database and see if it returns the data.
    3. Try to refresh the report for a smaller data selection.
    4. Make the report run on a specific WebI server, and while it refreshes have your BOBJ admin monitor that process to see whether it is hanging, using high memory, etc.
    5. Restart the WebI process and run again.
    Thanks,
    aKs

  • R/3 Extraction taking too long to load data into BW

    HI There,
    I'm trying to extract the SAP standard extractor 0FI_AP_4 into BW, and it is taking endless time.
    Even the extractor checker RSA3 takes too long to execute. I don't know why it is so slow,
    since there is not enough data to justify such a long runtime.
    I enhanced the datasource with three fields from BSEG using user exits.
    Is that the reason it is taking so long? Does a user exit slow down the extraction process?
    What measures should I take to speed up the process?
    Thanks for your time
    Vandana

    Thanks for all you replies.
    Please go through the steps I have gone through:
    - Installed the Business Content and its in version 3.5
    - Changed the update rules, Transfer rules and migrated the datasource to BI 7
    - Enhanced the 0FI_AP_3 to include three fields BSEG table
    - Ran RSA3: the new fields are showing, but the extraction is quite slow.
    - Commented out the code and ran RSA3: with little difference, the data shows up.
    - Removed the comments and ran it again: it is fine, though it takes a little more time than the previous step, but the data shows up.
    - Replicated the datasource into BW
    - Created the info package and started the init process (before this deleted the previous stored init process)
    - Data isn't loading and please see the error message below.
    Diagnosis
    The data request was a full update.  In this case, the corresponding table in the source system does not
    contain any data. System Response Info IDoc received with status 8. Procedure Check the data basis in the source system.
    - Checked the transformation between datasource 0FI_AP_4 and Infosource ZFI_AP_4
       and I did NOT find the three fields which I enhanced from the BSEG table in the 0FI_AP_4 datasource.
    - Replicated the datasource 0FI_AP_4 again, but no change.
    Now I don't know what's happening here.
    When i check the datasource 0FI_AP_4 in RSA6, i can see the three new fields from BSEG.
    When i check RSA3, i can see the data getting populated with the three new fields from BSEG,
    When i check the fields in the datasource 0FI_AP_4 in BW, I can see the three new fields. It shows
    that the connection between BW and R/3 is fine, isn't it?
    Now...Can anyone please suggest me how to go forward from here?
    Thanks for your time
    Vandana

  • Report script taking very long time to export in ASO

    Hi All,
    My report script takes a very long time to execute, and finally a message appears saying it timed out.
    I'm working on ASO cubes; there are 14 dimensions, and I need to export all data across all dimensions for only one version.
    The data is very large and the member count in each dimension is also huge, which is making it difficult for me to export the data.
    Any suggestions??
    Thanks

    Here is a link that addresses several ways to optimize your report script. I use report scripts for level-0 exports in an ASO environment as well; however, the majority of our dimensions are attribute dimensions.
    These are the most effective solutions we have implemented to improve our exports via report scripts:
    1. Make sure your report script is written in the order in which the Report Extractor retrieves data.
    2. Suppress zero and missing data.
    3. Use the LINK command within reports for dimensions that are really big and pulled at level 0.
    4. Use symmetric reports.
    5. Break the export into multiple reports.
    However, you may also consider some additional solutions outlined in this link:
    1. The MDX optimizing commands
    2. Back end system settings
    http://download.oracle.com/docs/cd/E12825_01/epm.111/esb_dbag/drpoptim.htm
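    On ASO, an MDX query is sometimes faster than the report extractor for a level-0 pull. This is only a sketch; the cube name, dimension names, and the Working member are placeholders for whatever your outline actually uses:

    ```
    SELECT
      {[Version].[Working]} ON COLUMNS,
      NON EMPTY
        CrossJoin ([Account].Levels (0).Members,
                   [Entity].Levels (0).Members) ON ROWS
    FROM [Sample].[Basic]
    ```

    NON EMPTY suppresses the empty tuples, which is usually where the time goes on a sparse outline.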
    I hope this helps. Maybe posting your report script would also help users to provide feedback.
    Thanks

  • DTP taking too long to load data

    Hello
    I want to know what performance-related settings we can apply to a DTP load. I have read some notes, and most of them say that if the load is a huge volume of data, you should try increasing the background processes of the DTP, using semantic groups, etc.
    What I have noticed is that each DTP has a default value of 3 background processes. If I change this to 5, what would the impact be? Is it advisable to change this?
    The DTP has transferred 17,219,409 records and added 4,717,473 records. Can someone please suggest what other performance activities can be checked, as this load takes nearly 4 hours to load data into a DSO.
    Thanks

    All right Tom,
    Yes, I would definitely increase the number of background processes. This can have an impact on user performance: the more background processes you add, the more dialog processes are removed.
    But if, for instance, you do the load during the night when there are no users, you can remove all dialog processes and have, say, 12 background processes, which should load the data faster.
    If you need to load the data during the day, analyse how many dialog processes are free on average and adjust accordingly.
    I have also found that reducing the package size often speeds up data loads.
    Thanks,
    Nick.

  • Taking too long for Loaded data visible in Reporting

    Hi experts,
    I have a cost center cube which takes very long to show the "Loaded data is not yet visible in Reporting" icon for the request. I do not have aggregates on the cube, and no other requests are red or yellow. What could be the reason for this behavior? I ran the consistency check, and there are no overlaps or errors for the request.
    Any help is appreciated.
    Thanks in advance
    D Bret

    Hi Dave,
    Try running RSRV for the cube and then refreshing the data in the cube.
    RSRV --> Combined test --> Transaction data --> Status of data in the InfoCube.
    If you find any errors after running this check, repair them and then refresh the cube's data.
    Hope this helps.
    Thanks & Regards,
    Suchitra.V

  • BI Publisher report is taking too long

    Hello,
    I have created a report with BI Publisher (Release 10.1.3.4) that has 5 data sets and summarizes the sales by channel for the company.
    When I run each dataset independently with SQL Developer, each SQL takes about 40 seconds to return results. But when I run the report with BI Publisher, which has to produce a page with the results of these datasets, it takes almost 90 minutes!
    Could anybody guide me on where to look? I have made many reports with BI Publisher, and they usually take longer than running the SQL directly in SQL Developer, but never this much longer.
    Any ideas? I suppose I have to tune BI Publisher, because the SQL is fine, using the correct indexes and everything. But how do I tune it?
    Thanks in advance for any help,
    Alida

    Hi,
    I am facing the same problem. I have a report created in the latest release of BI Publisher. When I run the query from the sqlplus prompt, it takes hardly a few seconds to start printing rows. However, when I run the same query through BI Publisher for interactive reports, it takes more than an hour and the screen still shows processing. The number of rows is greater than 10K.
    Please help; any suggestion will be helpful.

  • Report is taking too long time. Please advice

    Hi Experts,
    I have created a report which explodes the BOM, gets all the material
    documents from MKPF for the given date, and then, based on those material documents, selects the 601 (sales) movements from MSEG. It then drills down based on material, batch, and 901: if it finds raw materials, it adds them; if it finds finished or semi-finished goods, it explodes them further,
    and so on down to the deepest level.
    I have run this report for almost all the plants and they are updated successfully,
    but some plants with large data volumes, due to the batch-level drill-down, end with a dump.
    Please suggest how I can tune it so that all my plants return data quickly. Thanks in advance.

    The code is:
    REPORT  zsales_variance_1 NO STANDARD PAGE HEADING MESSAGE-ID ymsg LINE-SIZE 1000.
    TABLES: mara,
            zcon_variance,
            zsales_variance,
            t001k,
            makt,
            mkpf,
            aufm,
            mbew.
    SELECTION-SCREEN BEGIN OF BLOCK b1 WITH FRAME TITLE t2.
    PARAMETER       s_bukrs LIKE zcon_variance-bukrs DEFAULT '1000' OBLIGATORY.
    SELECT-OPTIONS: s_matnr FOR mara-matnr DEFAULT 'FG only'.
    PARAMETER:      so_werks LIKE zsales_variance-werks OBLIGATORY.
    SELECT-OPTIONS: s_pcdf FOR aufm-budat OBLIGATORY.
    SELECTION-SCREEN END OF BLOCK b1.
    RANGES: s_werks FOR zsales_variance-werks.
    *******Filling the Range internal table for fixing the plants for Company code 1000: Start
    s_werks-sign = 'I'.             "Include
    s_werks-option = 'BT'.          "Pattern
    s_werks-low = '1001'.            "Low Value
    s_werks-high = '1025'.              "High Value
    APPEND s_werks.
    *******Filling the Range internal table for fixing the plants for Company code 1000: End
    DATA: BEGIN OF it_mara OCCURS 0,
          matnr LIKE mara-matnr,
          mtart LIKE mara-mtart,
          matkl TYPE matkl,
          werks LIKE marc-werks,
          indi1(8) TYPE c,
          ok(2) TYPE c,
         maktx LIKE makt-maktx,
          END OF it_mara.
    *DATA it_mara1 LIKE it_mara OCCURS 0 WITH HEADER LINE.
    DATA: BEGIN OF it_t001k OCCURS 0,
          bwkey LIKE t001k-bwkey,
          matnr LIKE mara-matnr,
          zplp1 LIKE mbew-zplp1,
          maktx LIKE makt-maktx,
          END OF it_t001k.
    DATA: BEGIN OF it_mbew OCCURS 0,
          matnr LIKE mara-matnr,
          bwkey LIKE t001k-bwkey,
          zplp1 LIKE mbew-zplp1,
          maktx LIKE makt-maktx,
          END OF it_mbew.
    DATA: BEGIN OF it_makt OCCURS 0,
          matnr LIKE makt-matnr,
          maktx LIKE makt-maktx,
          END OF it_makt.
    DATA: it_makt1 LIKE it_makt OCCURS 0 WITH HEADER LINE.
    DATA: BEGIN OF it_mkpf OCCURS 0,
          mblnr LIKE mkpf-mblnr,
          mjahr LIKE mkpf-mjahr,
          END OF it_mkpf.
    *DATA: BEGIN OF it_mseg OCCURS 0,
    *      mblnr LIKE mseg-mblnr,
    *      matnr LIKE mara-matnr,
    *      werks LIKE mseg-werks,
    *      bwart LIKE mseg-bwart,
    *      menge LIKE mseg-menge,
    *      meins LIKE mseg-meins,
    *      dmbtr LIKE mseg-dmbtr,
    *      END OF it_mseg.
    DATA: BEGIN OF it_final OCCURS 0 ,
              mandt TYPE mandt,
              bukrs LIKE zcon_variance-bukrs,
              werks LIKE zcon_variance-werks,
              matnr LIKE zcon_variance-matnr,
              component LIKE zcon_variance-component,
              mtart LIKE zcon_variance-mtart,
              maktx LIKE zcon_variance-maktx,
              componentdisc LIKE zcon_variance-componentdisc,
              matkl LIKE zcon_variance-matkl,
              menge LIKE zcon_variance-menge,
              meins LIKE zcon_variance-meins,
              dmbtr LIKE zcon_variance-dmbtr,
              verpr LIKE zcon_variance-verpr,
              zplp1 LIKE zcon_variance-zplp1,
              confirmtodate LIKE zcon_variance-confirmtodate,
              confirmfromdate LIKE zcon_variance-confirmtodate,
              currency TYPE waers,
              mtart_matnr TYPE mtart,
             ok(2) TYPE c,
      END OF it_final.
    DATA: BEGIN OF it_final_compo OCCURS 0 ,
            mandt TYPE mandt,
            bukrs LIKE zcon_variance-bukrs,
            werks LIKE zcon_variance-werks,
           matnr LIKE zcon_variance-matnr,
            component LIKE zcon_variance-component,
            mtart LIKE zcon_variance-mtart,
           maktx LIKE zcon_variance-maktx,
            componentdisc LIKE zcon_variance-componentdisc,
            matkl LIKE zcon_variance-matkl,
            menge LIKE zcon_variance-menge,
            meins LIKE zcon_variance-meins,
            dmbtr LIKE zcon_variance-dmbtr,
            verpr LIKE zcon_variance-verpr,
            zplp1 LIKE zcon_variance-zplp1,
            confirmtodate LIKE zcon_variance-confirmtodate,
            confirmfromdate LIKE zcon_variance-confirmtodate,
            currency TYPE waers,
            mtart_matnr TYPE mtart,
       ok(2) TYPE c,
    END OF it_final_compo.
    DATA: ok(2) TYPE c.
    DATA : it2 LIKE it_final OCCURS 0 WITH HEADER LINE.
    DATA : it2_temp LIKE it2 OCCURS 0  WITH HEADER LINE.
    DATA:  it_compo LIKE it_final_compo OCCURS 0 WITH HEADER LINE.
    DATA: BEGIN OF it3 OCCURS 0 ,
        mandt TYPE mandt,
        bukrs LIKE zcon_variance-bukrs,
        werks LIKE zcon_variance-werks,
        matnr LIKE zcon_variance-matnr,
        component LIKE zcon_variance-component,
        mtart LIKE zcon_variance-mtart,
        maktx LIKE zcon_variance-maktx,
        componentdisc LIKE zcon_variance-componentdisc,
        matkl LIKE zcon_variance-matkl,
        menge LIKE zcon_variance-menge,
        meins LIKE zcon_variance-meins,
        dmbtr LIKE zcon_variance-dmbtr,
        verpr LIKE zcon_variance-verpr,
        zplp1 LIKE zcon_variance-zplp1,  " if this is 0.
        confirmtodate LIKE zcon_variance-confirmtodate,
        confirmfromdate LIKE zcon_variance-confirmtodate,
        currency TYPE waers,
    END OF it3.
    DATA: BEGIN OF it_aufm OCCURS 0,
           aufnr LIKE aufm-aufnr,
           matnr LIKE aufm-matnr,
           charg LIKE aufm-charg,
           bwart LIKE aufm-bwart,
          END OF it_aufm.
    DATA: it_aufm1 LIKE it_aufm OCCURS 0 WITH HEADER LINE.
    DATA: it_aufm3 LIKE it_aufm1 OCCURS 0 WITH HEADER LINE.
    DATA: BEGIN OF it_aufm2 OCCURS 0,
          matnr LIKE aufm-matnr,
          charg LIKE aufm-charg,
          bwart LIKE aufm-bwart,
          werks LIKE aufm-werks,
          END OF it_aufm2.
    DATA: BEGIN OF it_aufm_901 OCCURS 0,
          aufnr LIKE aufm-aufnr,
          matnr LIKE aufm-matnr,
          werks LIKE aufm-werks,
          bwart LIKE aufm-bwart,
          budat LIKE aufm-budat,
          menge LIKE aufm-menge,
          meins LIKE aufm-meins,
          dmbtr LIKE aufm-dmbtr,
          bwart_901 LIKE aufm-bwart,
          bwart_902 LIKE aufm-bwart,
          bwart_261 LIKE aufm-bwart,
          bwart_262 LIKE aufm-bwart,
          indi1(8) TYPE c,
          901 LIKE mara-matnr,
    *      aufnr LIKE aufm-aufnr,
    END OF it_aufm_901.
    DATA: BEGIN OF it_mpb OCCURS 0,
         aufnr LIKE aufm-aufnr,
          matnr LIKE mara-matnr,
          werks LIKE aufm-werks,
          bwart LIKE aufm-bwart,
          yes TYPE c,
          END OF it_mpb.
    DATA: it_temp LIKE it_aufm_901 OCCURS 0 WITH HEADER LINE.
    DATA: it_aufm_901_backup LIKE it_aufm_901 OCCURS 0 WITH HEADER LINE.
    DATA: indi1(8) TYPE c.
    DATA: bwart_901 LIKE aufm-bwart,
          bwart_902 LIKE aufm-bwart,
          bwart_261 LIKE aufm-bwart,
          bwart_262 LIKE aufm-bwart.
    DATA: BEGIN OF it_aufnr OCCURS 0,
          aufnr LIKE aufm-aufnr,
          yes TYPE c,
          matnr LIKE aufm-matnr,
          werks LIKE aufm-werks,
      END OF it_aufnr.
    DATA yes TYPE c.
    DATA: BEGIN OF it_aufm_901_filter OCCURS 0,
          aufnr LIKE aufm-aufnr,
          matnr LIKE aufm-matnr,
          werks LIKE aufm-werks,
          bwart LIKE aufm-bwart,
          bwart_pm(4) TYPE c,
          indi TYPE c,
          budat LIKE aufm-budat,
          menge LIKE aufm-menge,
          meins LIKE aufm-meins,
          dmbtr LIKE aufm-dmbtr,
         bwart_901 LIKE aufm-bwart,
         bwart_902 LIKE aufm-bwart,
         bwart_261 LIKE aufm-bwart,
         bwart_262 LIKE aufm-bwart,
    *      aufnr LIKE aufm-aufnr,
      END OF it_aufm_901_filter.
    DATA: BEGIN OF it1 OCCURS 0,
          aufnr LIKE aufm-aufnr,
          matnr LIKE aufm-matnr,
          werks LIKE aufm-werks,
          bwart LIKE aufm-bwart,
         bwart_pm(4) TYPE c,
          indi TYPE c,
          budat LIKE aufm-budat,
          menge LIKE aufm-menge,
          menge_f(14) TYPE c,
          meins LIKE aufm-meins,
          dmbtr LIKE aufm-dmbtr,
          dmbtr_f(14) TYPE c,
          dmbtr_v(14) TYPE c,
          menge_v(14) TYPE c,
          901 LIKE mara-matnr,
      END OF it1.
    DATA: it4 LIKE it1 OCCURS 0 WITH HEADER LINE.
    DATA: it5 LIKE it4 OCCURS 0 WITH HEADER LINE.
    DATA: add_menge TYPE menge_d,
          add_dmbtr TYPE dmbtr.
    DATA: BEGIN OF it1902 OCCURS 0,
          aufnr LIKE aufm-aufnr,
          matnr LIKE aufm-matnr,
          werks LIKE aufm-werks,
          bwart LIKE aufm-bwart,
         bwart_pm(4) TYPE c,
          indi TYPE c,
          budat LIKE aufm-budat,
          menge(14) TYPE c,
          menge_v(14) TYPE c,
          meins LIKE aufm-meins,
          dmbtr(14) TYPE c,
          dmbtr_v(14) TYPE c,
      END OF it1902.
    DATA: it1902_add LIKE it1902 OCCURS 0 WITH HEADER LINE.
    DATA: it1902_add1 LIKE it1902 OCCURS 0 WITH HEADER LINE.
    DATA: BEGIN OF it1262 OCCURS 0,
          aufnr LIKE aufm-aufnr,
          matnr LIKE aufm-matnr,
          werks LIKE aufm-werks,
          bwart LIKE aufm-bwart,
         bwart_pm(4) TYPE c,
          indi TYPE c,
          budat LIKE aufm-budat,
          menge(14) TYPE c,
          menge_v(14) TYPE c,
          meins LIKE aufm-meins,
          dmbtr(14) TYPE c,
          dmbtr_v(14) TYPE c,
      END OF it1262.
    DATA: it1262_add LIKE it1262 OCCURS 0 WITH HEADER LINE.
    DATA: it1262_add1 LIKE it1262 OCCURS 0 WITH HEADER LINE.
    DATA: indi TYPE c,
         bwart_pm(4) TYPE c,
          menge_v(14) TYPE c,
          dmbtr_v(14) TYPE c,
          menge_f(14) TYPE c,
          dmbtr_f(14) TYPE c.
    DATA: matnr LIKE mara-matnr,
          maktx LIKE makt-maktx,
          cnt TYPE i,
          cnt1 TYPE i,
          cnt2 TYPE i.
    DATA: BEGIN OF it_aufm_901_1 OCCURS 0,
          matnr LIKE aufm-matnr,
          werks LIKE aufm-werks,
          bwart LIKE aufm-bwart,
         budat LIKE aufm-budat,
          menge LIKE aufm-menge,
          meins LIKE aufm-meins,
          dmbtr LIKE aufm-dmbtr,
          mengefinal LIKE aufm-menge,
          aufnr LIKE aufm-aufnr,
          END OF it_aufm_901_1.
    DATA: BEGIN OF it_aufm_901_2 OCCURS 0,
         matnr LIKE aufm-matnr,
         werks LIKE aufm-werks,
         bwart LIKE aufm-bwart,
         budat LIKE aufm-budat,
         menge LIKE aufm-menge,
         meins LIKE aufm-meins,
         dmbtr LIKE aufm-dmbtr,
         mengefinal LIKE aufm-menge,
          aufnr LIKE aufm-aufnr,
          END OF it_aufm_901_2.
    DATA: BEGIN OF it_aufm_901_3 OCCURS 0,
          bukrs LIKE zcon_variance-bukrs,
          werks LIKE zcon_variance-werks,
          matnr LIKE zcon_variance-matnr,
          maktx LIKE zcon_variance-maktx,
          component LIKE zcon_variance-component,
          componentdisc LIKE zcon_variance-componentdisc,
          mtart LIKE zcon_variance-mtart,
          matkl LIKE zcon_variance-matkl,
          menge LIKE zcon_variance-menge,
          meins LIKE zcon_variance-meins,
          dmbtr LIKE zcon_variance-dmbtr,
          map LIKE zcon_variance-verpr,
          zplp1 LIKE zcon_variance-zplp1,
          confirmtodate LIKE zcon_variance-confirmtodate,
          confirmfromdate LIKE zcon_variance-confirmtodate,
          bwart LIKE mseg-bwart,
         END OF it_aufm_901_3.
    DATA: BEGIN OF it_aufm_261 OCCURS 0,
          matnr LIKE aufm-matnr,
          werks LIKE aufm-werks,
          bwart LIKE aufm-bwart,
          budat LIKE aufm-budat,
          menge LIKE aufm-menge,
          meins LIKE aufm-meins,
          dmbtr LIKE aufm-dmbtr,
          aufnr LIKE aufm-aufnr,
          END OF it_aufm_261.
    DATA: BEGIN OF it_aufm_261_1 OCCURS 0,
          matnr LIKE aufm-matnr,
          werks LIKE aufm-werks,
          bwart LIKE aufm-bwart,
         budat LIKE aufm-budat,
          menge LIKE aufm-menge,
          meins LIKE aufm-meins,
          dmbtr LIKE aufm-dmbtr,
          issueq LIKE aufm-menge,
          aufnr LIKE aufm-aufnr,
          map LIKE zcon_variance-verpr,
          END OF it_aufm_261_1.
    DATA: BEGIN OF it_aufm_261_2 OCCURS 0,
          matnr LIKE aufm-matnr,
          werks LIKE aufm-werks,
          bwart LIKE aufm-bwart,
         budat LIKE aufm-budat,
          menge LIKE aufm-menge,
          meins LIKE aufm-meins,
          dmbtr LIKE aufm-dmbtr,
          issueq LIKE aufm-menge,
          aufnr LIKE aufm-aufnr,
          map LIKE zcon_variance-verpr,
          END OF it_aufm_261_2.
    DATA: flag1,
          flag2,
          flag3,
          flag4,
          menge LIKE aufm-menge,
          menge902 LIKE aufm-menge,
          menge262 LIKE aufm-menge,
          mengefinal LIKE aufm-menge,
          issueq LIKE aufm-menge,
          map LIKE zcon_variance-verpr,
          dmbtr LIKE aufm-dmbtr,
          flagtab TYPE c.
    DATA: mandt TYPE sy-mandt.
    DATA: it_zcv TYPE zcon_variance,
          it_zcv_compo TYPE zcon_variance_c.
    DATA: BEGIN OF it_mseg OCCURS 0,
          mblnr LIKE mseg-mblnr,
          matnr LIKE mseg-matnr,
          werks LIKE mseg-werks,
          charg LIKE mseg-charg,
          menge LIKE mseg-menge,
          dmbtr LIKE mseg-dmbtr,
          bwart LIKE mseg-bwart,
          END OF it_mseg.
    DATA: it_mseg1 LIKE it_mseg OCCURS 0 WITH HEADER LINE.
    DATA: it_mseg_temp LIKE it_mseg OCCURS 0 WITH HEADER LINE.
    DATA: it_mseg_601 LIKE it_mseg OCCURS 0 WITH HEADER LINE,
          it_mseg_601_add LIKE it_mseg OCCURS 0 WITH HEADER LINE.
    DATA: it_mseg_602 LIKE it_mseg OCCURS 0 WITH HEADER LINE,
          it_mseg_602_add LIKE it_mseg OCCURS 0 WITH HEADER LINE.
    DATA: BEGIN OF it_1 OCCURS 0,
          matnr LIKE mseg-matnr,
          werks LIKE mseg-werks,
          charg LIKE mseg-charg,
          menge(16) TYPE p DECIMALS 2, " LIKE mseg-menge, Changed later to get more space and to avoid any possible overflow: Viren;c
          dmbtr(16) TYPE p DECIMALS 2, " LIKE mseg-dmbtr, Changed later to get more space and to avoid any possible overflow: Viren;c
          bwart LIKE mseg-bwart,
          meins LIKE mseg-meins,
          END OF it_1.
    DATA: it_temp1 LIKE it_1 OCCURS 0 WITH HEADER LINE.
    DATA: it_261 LIKE it_1 OCCURS 0 WITH HEADER LINE,
          it_262 LIKE it_1 OCCURS 0 WITH HEADER LINE.
    DATA: BEGIN OF it_sv OCCURS 0,
          mandt TYPE mandt,
          bukrs TYPE bukrs,
          werks TYPE  werks_d,
          matnr TYPE  matnr,
          component TYPE zcomponent,
          charg TYPE charg_d,
          period TYPE zperiod,
          gjahr TYPE gjahr,
          mtart TYPE mtart,
          maktx TYPE maktx,
          componentdisc TYPE zcomponentdisc,
          matkl TYPE matkl,
          menge(16) TYPE p DECIMALS 2, "menge_d,     Changed later to get more space and to avoid any possible overflow: Viren;c
          meins TYPE meins,
          dmbtr(16) TYPE p DECIMALS 2, "  dmbtr,     Changed later to get more space and to avoid any possible overflow: Viren;c
          zplp1 TYPE dzplp1,
          sales_variance TYPE zvariance,
          charg_fg TYPE charg_d,
          confirmtodate LIKE zcon_variance-confirmtodate,
          confirmfromdate LIKE zcon_variance-confirmtodate,
          currency TYPE waers,
          END OF it_sv.
    DATA: it_zsv TYPE zsales_variance.
    DATA: cyear(4) TYPE c,
          cmonth(2) TYPE c.
    INITIALIZATION.
      t2 = 'Please select'.
    START-OF-SELECTION.
      PERFORM determine_material_codes.
      PERFORM dtrmine_plnt_bsd_on_cmpny_cods.
    PERFORM determine_material_documents.
      PERFORM determine_mat_doc1.
      PERFORM fill_sales_variance.
    END-OF-SELECTION.
    *&      Form  determine_material_codes
    *       text
    *  -->  p1        text
    *  <--  p2        text
    FORM determine_material_codes .
    * For Material selection
      IF s_matnr-low IS INITIAL AND s_matnr-high IS NOT INITIAL.
        s_matnr-low = s_matnr-high.
      ENDIF.
      SELECT matnr mtart
      FROM   mara
      INTO TABLE it_mara
      WHERE  matnr IN s_matnr
      AND mtart = 'IFIG'.
      IF it_mara[] IS INITIAL.
        MESSAGE e001.
        EXIT.
      ENDIF.
    ENDFORM.                    " determine_material_codes
    *&      Form  dtrmine_plnt_bsd_on_cmpny_cods
    *       text
    *  -->  p1        text
    *  <--  p2        text
    FORM dtrmine_plnt_bsd_on_cmpny_cods .
      SELECT bwkey FROM t001k INTO TABLE it_t001k
      WHERE bukrs = s_bukrs.
      IF it_t001k[] IS INITIAL.
       mandt = sy-mandt.
       DELETE FROM zsales_variance CLIENT SPECIFIED WHERE mandt = mandt.
       IF sy-subrc = 0.
         COMMIT WORK.
       ENDIF.
        MESSAGE e002.
        EXIT.
      ENDIF.
    ENDFORM.                    " dtrmine_plnt_bsd_on_cmpny_cods
    *&      Form  determine_mat_doc1
    *       text
    *  -->  p1        text
    *  <--  p2        text
    FORM determine_mat_doc1 .
    * If the optional field is not filled then go by plant specifications
      CLEAR it_mbew.
      CLEAR  it_t001k.
      SORT it_t001k BY bwkey.
      DELETE ADJACENT DUPLICATES FROM it_t001k COMPARING bwkey.
      PERFORM select_material_docs.
    ENDFORM.                    " determine_mat_doc1
    *&      Form  select_material_docs
    *       text
    *  -->  p1        text
    *  <--  p2        text
    FORM select_material_docs .
      SELECT mblnr mjahr
      FROM mkpf
      INTO TABLE it_mkpf
      WHERE budat IN s_pcdf.
      IF  NOT it_mkpf[] IS INITIAL.
        CLEAR  it_mkpf.
        SORT it_mkpf BY mblnr.
        DELETE ADJACENT DUPLICATES FROM it_mkpf COMPARING mblnr.
        SELECT mblnr
               matnr
               werks
               charg
               menge
               dmbtr
               bwart
        FROM mseg
        INTO TABLE it_mseg
        FOR ALL ENTRIES IN it_mkpf
        WHERE mblnr = it_mkpf-mblnr
        AND   mjahr = it_mkpf-mjahr
        AND   ( bwart = '601' OR bwart = '602' )
        AND matnr IN s_matnr. " Added to select the only materials given in Material field.
        cnt = 0.
        DATA lv_index TYPE sy-tabix.
        SORT it_mara BY matnr ASCENDING.
        SORT it_mseg BY matnr ASCENDING.
       IF NOT it_mara[] IS INITIAL AND NOT it_mseg[] IS INITIAL.
         LOOP AT it_mara.
           LOOP AT it_mseg WHERE MATNR = IT_MARA-MATNR .
             IF NOT it_mseg IS INITIAL.
               MOVE-CORRESPONDING it_mseg TO it_mseg1.
               APPEND it_mseg1.
               CLEAR it_mseg.
             ENDIF.
           ENDLOOP.
         ENDLOOP.
       ENDIF.
        DATA index TYPE sy-tabix.
    * start
       LOOP AT it_mara.
         index = sy-tabix.
         CLEAR it_mseg.
         READ TABLE it_mseg WITH KEY matnr = it_mara-matnr BINARY SEARCH.
         IF NOT it_mseg IS INITIAL.
           it_mara-indi1 = 'FOUND'.
         ELSE.
           it_mara-indi1 = 'NOTFOUND'.
         ENDIF.
         MODIFY it_mara INDEX index.
       ENDLOOP.
       DELETE it_mara WHERE indi1 = 'NOTFOUND'.
       SORT it_mara BY matnr ASCENDING.
       SORT it_mseg BY matnr ASCENDING.
       lv_index = 1.
       IF NOT it_mara[] IS INITIAL AND NOT it_mseg[] IS INITIAL.
         LOOP AT it_mara.
           CLEAR it_mseg.
           LOOP AT it_mseg FROM lv_index .
             IF it_mseg-matnr <> it_mara-matnr.
               lv_index = sy-tabix.
               EXIT.
             ELSE.
               MOVE-CORRESPONDING it_mseg TO it_mseg1.
               APPEND it_mseg1.
               CLEAR it_mseg1.
             ENDIF.
           ENDLOOP.
         ENDLOOP.
       ENDIF.
       CLEAR: it_mseg,
              it_mseg1.
       REFRESH: it_mseg.
       it_mseg[] = it_mseg1[].
    END
    * Select the manufacturing plants for given company code.
        IF NOT it_mseg[] IS INITIAL.
          SELECT matnr charg bwart werks
          FROM aufm
          INTO TABLE it_aufm2
   *      FOR ALL ENTRIES IN it_mseg
   *      WHERE matnr = it_mseg-matnr
   *      AND charg =   it_mseg-charg
          WHERE bwart = '901'.
          SORT it_mseg BY matnr charg bwart.
          SORT it_aufm2 BY matnr charg bwart.
          LOOP AT it_mseg.
            CLEAR it_aufm2.
            READ TABLE it_aufm2 WITH KEY matnr = it_mseg-matnr
                                         charg = it_mseg-charg
                                         bwart = '901' BINARY SEARCH.
            IF sy-subrc = 0.
              MOVE it_aufm2-werks TO it_mseg-werks.
              MODIFY it_mseg.
            ENDIF.
          ENDLOOP.
        ENDIF.
    * Delete the plants which are not manufacturing plants.
        IF NOT it_mseg[] IS INITIAL.
          DELETE it_mseg WHERE werks <> so_werks.
        ENDIF.
        IF NOT it_mseg[] IS INITIAL.
          LOOP AT it_mseg.
            MOVE it_mseg-matnr TO it_mseg_temp-matnr.
            MOVE it_mseg-werks TO it_mseg_temp-werks.
            MOVE it_mseg-charg TO it_mseg_temp-charg.
            MOVE it_mseg-menge TO it_mseg_temp-menge.
            MOVE it_mseg-dmbtr TO it_mseg_temp-dmbtr.
            MOVE it_mseg-bwart TO it_mseg_temp-bwart.
            COLLECT it_mseg_temp.
          ENDLOOP.
        ENDIF.
        CLEAR it_mseg.
        REFRESH it_mseg.
        it_mseg[] = it_mseg_temp[].
      ENDIF.
      IF NOT it_mseg[] IS INITIAL.
    * Separate the entries having movement type '602'.
        LOOP AT it_mseg.
          IF it_mseg-bwart = '601'.
            MOVE it_mseg-matnr TO it_mseg_601-matnr.
            MOVE it_mseg-werks TO it_mseg_601-werks.
            MOVE it_mseg-charg TO it_mseg_601-charg.
            MOVE it_mseg-menge TO it_mseg_601-menge.
            MOVE it_mseg-dmbtr TO it_mseg_601-dmbtr.
            MOVE it_mseg-bwart TO it_mseg_601-bwart.
            APPEND it_mseg_601.
          ENDIF.
          IF it_mseg-bwart = '602'.
            MOVE it_mseg-matnr TO it_mseg_602-matnr.
            MOVE it_mseg-werks TO it_mseg_602-werks.
            MOVE it_mseg-charg TO it_mseg_602-charg.
            MOVE it_mseg-menge TO it_mseg_602-menge.
            MOVE it_mseg-dmbtr TO it_mseg_602-dmbtr.
            MOVE it_mseg-bwart TO it_mseg_602-bwart.
            APPEND it_mseg_602.
          ENDIF.
        ENDLOOP.
   *    DELETE it_mseg WHERE bwart = '602'.
    * Add the entries of '601' movement type
        IF NOT it_mseg_601[] IS INITIAL.
          LOOP AT it_mseg_601.
            MOVE it_mseg_601-matnr TO it_mseg_601_add-matnr.
            MOVE it_mseg_601-werks TO it_mseg_601_add-werks.
            MOVE it_mseg_601-charg TO it_mseg_601_add-charg.
            MOVE it_mseg_601-menge TO it_mseg_601_add-menge.
            MOVE it_mseg_601-dmbtr TO it_mseg_601_add-dmbtr.
            MOVE it_mseg_601-bwart TO it_mseg_601_add-bwart.
            COLLECT it_mseg_601_add.
          ENDLOOP.
        ENDIF.
    * Add the entries of '602' movement type
        IF NOT it_mseg_602[] IS INITIAL.
          LOOP AT it_mseg_602.
            MOVE it_mseg_602-matnr TO it_mseg_602_add-matnr.
            MOVE it_mseg_602-werks TO it_mseg_602_add-werks.
            MOVE it_mseg_602-charg TO it_mseg_602_add-charg.
            MOVE it_mseg_602-menge TO it_mseg_602_add-menge.
            MOVE it_mseg_602-dmbtr TO it_mseg_602_add-dmbtr.
            MOVE it_mseg_602-bwart TO it_mseg_602_add-bwart.
            COLLECT it_mseg_602_add.
          ENDLOOP.
        ENDIF.
      ELSE.
        MESSAGE e003.
        EXIT.
      ENDIF.
    * Deduct the quantity and the amount for movement type '602' from '601'.
      cnt = 0.
      IF NOT it_mseg_601_add[] IS INITIAL.
        LOOP AT it_mseg_601_add.
          cnt = sy-tabix.
          CLEAR: menge_f,
                 dmbtr_f,
                 it_mseg_602_add.
          READ TABLE it_mseg_602_add WITH KEY matnr = it_mseg_601_add-matnr
                                              werks = it_mseg_601_add-werks
                                              charg = it_mseg_601_add-charg.
          IF sy-subrc = 0.
            menge_f = it_mseg_601_add-menge - it_mseg_602_add-menge.
            dmbtr_f = it_mseg_601_add-dmbtr - it_mseg_602_add-dmbtr.
            MOVE menge_f TO it_mseg_601_add-menge.
            MOVE dmbtr_f TO it_mseg_601_add-dmbtr.
            MODIFY it_mseg_601_add INDEX cnt TRANSPORTING menge dmbtr.
          ENDIF.
        ENDLOOP.
      ELSE.
        MESSAGE e003.
        EXIT.
      ENDIF.
      CLEAR: it_mseg,
             it_mseg_601_add.
      REFRESH: it_mseg.
      it_mseg[] = it_mseg_601_add[].
    * Select all materials and material type to match at ISFG material level
      CLEAR: it_mara.
      REFRESH: it_mara.
      SELECT matnr
             mtart
      INTO TABLE it_mara
      FROM mara
      WHERE mtart = 'ISFG'.
    * Select all the order no's from AUFM to reduce the data traffic
   *  SELECT aufnr matnr charg bwart
   *  FROM aufm
   *  INTO TABLE it_aufm
   *  FOR ALL ENTRIES IN it_mseg
   *  WHERE matnr = it_mseg-matnr
   *  AND   charg = it_mseg-charg
   *  AND   bwart = '901'.
      DATA : it_mc9 LIKE it_aufm OCCURS 0 WITH HEADER LINE.
      SELECT aufnr
             matnr
             charg
             bwart
      INTO TABLE it_mc9
      FROM aufm
      FOR ALL ENTRIES IN it_mseg
      WHERE matnr = it_mseg-matnr
      AND   charg = it_mseg-charg
      AND  bwart = '901'.
      LOOP AT it_mseg.
        CLEAR it_mc9.
   *   READ TABLE it_mc9 WITH KEY matnr = it_mseg-matnr
   *                              charg = it_mseg-charg
   *                              bwart  = '901'.
        LOOP AT it_mc9 WHERE matnr = it_mseg-matnr AND
                             charg = it_mseg-charg AND
                             bwart  = '901'.
          IF NOT it_mc9 IS INITIAL.
            MOVE-CORRESPONDING it_mc9 TO it_aufm.
            APPEND it_aufm.
          ENDIF.
        ENDLOOP.
   *   SELECT aufnr matnr charg bwart
   *   FROM aufm
   *   APPENDING TABLE it_aufm
   *   WHERE matnr = it_mseg-matnr
   *   AND   charg = it_mseg-charg
   *   AND   bwart = '901'.
      ENDLOOP.
      cnt = 0.
    * Select the order no's giving material code and batch.
      CLEAR it_mseg.
      LOOP AT it_mseg.
        cnt = sy-tabix.
        CLEAR it_aufm.
   *   READ TABLE it_aufm WITH KEY matnr = it_mseg-matnr
   *                               charg = it_mseg-charg
   *                               bwart = '901'.
        LOOP AT it_aufm WHERE matnr = it_mseg-matnr AND
                              charg = it_mseg-charg AND
                              bwart = '901'.
          IF NOT it_aufm IS INITIAL.
   *      READ TABLE it_aufm.
            SELECT matnr
                   werks
                   charg
                   menge
                   dmbtr
                   bwart
                   meins
            INTO TABLE it_1
            FROM aufm
            WHERE aufnr = it_aufm-aufnr
            AND bwart IN (261,262)
            AND budat IN s_pcdf. " Vi
            IF NOT it_1[] IS INITIAL.
              SORT it_1 BY matnr ASCENDING.
    * Select order no's one time to prevent data traffic in IT_AUFM1.
              IF NOT it_aufm1[] IS INITIAL.
                APPEND LINES OF it_aufm1 TO it_aufm3.
              ENDIF.
              SELECT aufnr
                     matnr
                     charg
                     bwart
              FROM aufm
              INTO TABLE it_aufm1
              FOR ALL ENTRIES IN it_1
              WHERE matnr = it_1-matnr
              AND   charg = it_1-charg
              AND   bwart = '901'
              AND budat IN s_pcdf. " Vi
   *          IF NOT it_aufm1[] IS INITIAL.
   *            it_aufm3[] = it_aufm1[].
   *          ENDIF.
              LOOP AT it_1.
                CLEAR it_mara.
                READ TABLE it_mara WITH KEY matnr = it_1-matnr.
                IF NOT it_mara IS INITIAL.
                  CLEAR it_aufm1.
   *             READ TABLE it_aufm1 WITH KEY matnr = it_1-matnr
   *                                          charg = it_1-charg
   *                                          bwart = '901'.
                  LOOP AT it_aufm1 WHERE matnr = it_1-matnr AND
                                         charg = it_1-charg AND
                                         bwart = '901'.
                    IF sy-subrc = 0.
                      READ TABLE it_aufm3 WITH KEY aufnr = it_aufm1-aufnr
                                                   matnr = it_aufm1-matnr.
                      IF sy-subrc <> 0.
                        SELECT matnr
                               werks
                               charg
                               menge
                               dmbtr
                               bwart
                               meins
                        APPENDING TABLE it_1
                        FROM aufm
                        WHERE aufnr = it_aufm1-aufnr
                        AND bwart IN (261,262)
                        AND budat IN s_pcdf. " Vi
                      ELSE.
                        DELETE it_1.
                      ENDIF.
                    ENDIF.
                  ENDLOOP.
                ELSE.
                  MOVE-CORRESPONDING it_1 TO it_temp1.
                  COLLECT it_temp1.
                  DELETE it_1.
   *              ELSE.
   *                DELETE it_1.
   *              ENDIF.
   *              ENDLOOP. " Second
                ENDIF.
                CLEAR it_1.
   *        delete it_1." Vi
              ENDLOOP.
            ENDIF.
            CLEAR: it_aufm.
          ENDIF.
          CLEAR: it_aufm.
        ENDLOOP.
    * Separate the data for movement type 262 and subtract 262 from 261.
        IF NOT it_temp1[] IS INITIAL.
          CLEAR it_temp1.
          LOOP AT it_temp1.
            IF it_temp1-bwart = '262'.
              MOVE-CORRESPONDING it_temp1 TO it_262.
              COLLECT it_262.
            ENDIF.
          ENDLOOP.
        ENDIF.
        DELETE it_temp1 WHERE bwart = '262'.
        IF NOT it_temp1[] IS INITIAL.
          LOOP AT it_temp1.
            IF NOT it_262[] IS INITIAL.
              CLEAR it_262.
              READ TABLE it_262 WITH KEY matnr = it_temp1-matnr
                                         werks = it_temp1-werks
                                         charg = it_temp1-charg.
              IF NOT it_262 IS INITIAL.
                it_temp1-menge = it_temp1-menge - it_262-menge.
                it_temp1-dmbtr = it_temp1-dmbtr - it_262-dmbtr.
              ELSE.
    *         do nothing
              ENDIF.
              MODIFY it_temp1.
            ENDIF.
          ENDLOOP.
        ENDIF.
    * Push complete data about one FG code in table IT_SV.
        IF NOT it_temp1[] IS INITIAL.
          LOOP AT it_temp1.
            CLEAR it_sv.
            MOVE sy-mandt TO it_sv-mandt.
            MOVE s_bukrs TO it_sv-bukrs.
            MOVE it_temp1-werks TO it_sv-werks.
            MOVE it_mseg-matnr TO it_sv-matnr.  " This will the FG code
            MOVE it_temp1-charg TO it_sv-charg.
            MOVE it_mseg-charg TO it_sv-charg_fg.
            MOVE it_temp1-matnr TO it_sv-component.
            MOVE it_temp1-menge TO it_sv-menge.
            MOVE it_temp1-meins TO it_sv-meins.
            MOVE it_temp1-dmbtr TO it_sv-dmbtr.
            APPEND it_sv.
          ENDLOOP.
        ENDIF.
        CLEAR:    it_temp1,
                  it_1,
                  it_aufm,
                  it_aufm1.
   *              it_262.
        REFRESH:  it_temp1,
   *             it_262,
                  it_1,
                  it_aufm1.
      ENDLOOP.
      CLEAR: it_makt,
             it_mara.
      REFRESH: it_makt,
               it_mara.
      SELECT matnr maktx
      FROM makt
      INTO TABLE it_makt
      FOR ALL ENTRIES IN it_sv
      WHERE matnr = it_sv-matnr.
      SELECT matnr maktx
      FROM makt
      INTO TABLE it_makt1
      FOR ALL ENTRIES IN it_sv
      WHERE matnr = it_sv-component.
      SELECT matnr mtart matkl
      FROM mara
      INTO TABLE it_mara
      FOR ALL ENTRIES IN it_sv
      WHERE matnr  = it_sv-component.
      IF NOT it_sv[] IS INITIAL.
        LOOP AT it_sv.
          CLEAR it_makt.
          READ TABLE it_makt WITH KEY matnr = it_sv-matnr.
          IF NOT it_makt IS INITIAL.
            MOVE it_makt-maktx TO it_sv-maktx.
          ENDIF.
          CLEAR it_makt1.
          READ TABLE it_makt1 WITH KEY matnr = it_sv-component.
          IF NOT it_makt1 IS INITIAL.
            MOVE it_makt1-maktx TO it_sv-componentdisc.
          ENDIF.
          CLEAR it_mara.
          READ TABLE it_mara WITH KEY matnr = it_sv-component.
          IF NOT it_mara IS INITIAL.
            MOVE it_mara-mtart TO it_sv-mtart.
            MOVE it_mara-matkl TO it_sv-matkl.
          ENDIF.
          CLEAR mbew.
          SELECT SINGLE zplp1
          INTO mbew-zplp1
          FROM mbew
          WHERE matnr = it_sv-component
          AND   bwkey = it_sv-werks.
          IF sy-subrc = 0.
            MOVE mbew-zplp1 TO it_sv-zplp1.
          ENDIF.
          DATA: temp_var LIKE zcon_variance-verpr.
          temp_var = it_sv-zplp1 * it_sv-menge.
          it_sv-sales_variance = temp_var - it_sv-dmbtr.
          MOVE s_pcdf-low TO it_sv-confirmfromdate.
          MOVE s_pcdf-high TO it_sv-confirmtodate.
          CLEAR aufm.
          SELECT SINGLE * FROM aufm WHERE matnr = it_sv-component.
          IF sy-subrc = 0.
            MOVE aufm-waers TO it_sv-currency.
          ENDIF.
          CLEAR: cyear,
                 cmonth.
          cyear = s_pcdf-low+0(4).
          cmonth = s_pcdf-low+4(2).
          MOVE cmonth TO it_sv-period.
          MOVE cyear TO it_sv-gjahr.
          MODIFY it_sv.
        ENDLOOP.
      ENDIF.
    ENDFORM.                    " select_material_docs
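Much of the runtime in the form above comes from LOOP AT ... WHERE and READ TABLE without SORT/BINARY SEARCH executed inside outer loops, which scan the inner table linearly for every outer row. A keyed lookup does the same matching in one pass. A language-neutral sketch (Python, for illustration; the table shapes mirror it_mseg/it_aufm but the values are invented):

```python
# Simplified rows in the spirit of it_mseg: (matnr, charg, menge)
mseg = [("M1", "B1", 5), ("M2", "B2", 3), ("M1", "B3", 7)]
# Simplified rows in the spirit of it_aufm: (matnr, charg, bwart, aufnr)
aufm = [("M1", "B1", "901", "ORD1"), ("M1", "B3", "901", "ORD2")]

def join_linear(mseg, aufm):
    """Per-row scan of the inner table, like LOOP AT ... WHERE or an
    unsorted READ TABLE ... WITH KEY: O(n * m) comparisons."""
    out = []
    for matnr, charg, menge in mseg:
        for a_matnr, a_charg, bwart, aufnr in aufm:
            if (a_matnr, a_charg, bwart) == (matnr, charg, "901"):
                out.append((matnr, charg, aufnr))
                break
    return out

def join_hashed(mseg, aufm):
    """One keyed lookup per row, like SORT + READ TABLE ... BINARY SEARCH
    or a HASHED internal table: O(n + m)."""
    by_key = {(m, c): o for m, c, b, o in aufm if b == "901"}
    return [(m, c, by_key[(m, c)]) for m, c, _ in mseg if (m, c) in by_key]

# Both strategies produce the same result; only the cost differs.
assert join_linear(mseg, aufm) == join_hashed(mseg, aufm)
print(join_hashed(mseg, aufm))
```

With thousands of MSEG rows the difference between the two strategies dominates the runtime, which is why the nested-loop pattern above is the usual first suspect in this kind of report.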
    *&      Form  fill_sales_variance
    *       text
    *      -->  p1        text
    *      <--  p2        text
    FORM fill_sales_variance .
   *  mandt = sy-mandt.
   *  SELECT SINGLE * FROM zsales_variance INTO it_zsv.
   *  IF NOT it_zsv IS INITIAL.
   *     DELETE FROM zsales_variance CLIENT SPECIFIED WHERE mandt = mandt.
   *  ENDIF.
    * Delete the plants which are not manufacturing plants.
      IF NOT it_sv[] IS INITIAL.
        DELETE it_sv WHERE werks <> so_werks.
      ENDIF.
      IF NOT it_sv[] IS INITIAL.
        SORT it_sv BY period gjahr ASCENDING.
        LOOP AT it_sv.
          MOVE-CORRESPONDING it_sv TO zsales_variance.
          MODIFY zsales_variance.
        ENDLOOP.
      ELSE.
        MESSAGE e208(00) WITH 'No record found...'.
      ENDIF.
      IF sy-subrc = 0.
        COMMIT WORK.
        MESSAGE s208(00) WITH 'Records inserted successfully...'.
      ELSE.
        ROLLBACK WORK.
        MESSAGE e208(00) WITH 'Error in records insertion'.
      ENDIF.
    ENDFORM.                    " fill_sales_variance
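The program above leans heavily on COLLECT, which sums the numeric fields of a row into an existing row with identical key fields (or appends the row if no such entry exists). A minimal Python sketch of that semantics, using invented sample values shaped like it_mseg_601:

```python
from collections import defaultdict

# Rows in the spirit of it_mseg_601: (matnr, werks, charg, bwart) are the
# character-like key fields; menge and dmbtr are the numeric fields summed.
rows = [
    ("M1", "P1", "B1", "601", 10, 100.0),
    ("M1", "P1", "B1", "601", 5, 50.0),
    ("M2", "P1", "B2", "601", 2, 20.0),
]

def collect(rows):
    """Aggregate menge and dmbtr per key, like COLLECT into a table
    with a header line."""
    totals = defaultdict(lambda: [0, 0.0])
    for matnr, werks, charg, bwart, menge, dmbtr in rows:
        key = (matnr, werks, charg, bwart)
        totals[key][0] += menge
        totals[key][1] += dmbtr
    return {key: tuple(sums) for key, sums in totals.items()}

print(collect(rows))
```

Because the aggregation happens per unique key, COLLECT-style summation shrinks the working tables early; doing it before, rather than after, the expensive joins is one of the cheaper optimizations available here.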

  • Discoverer report taking too long time to open.

    HI,
    Discoverer reports are taking too long to open. Please help to resolve this.
    Regards,
    Bhatia

    What are the Discoverer version and the Applications release?
    Please refer to the following links (For both Discoverer 4i and 10g). Please note that some Discoverer 4i notes also apply to Discoverer 10g.
    Note: 362851.1 - Guidelines to setup the JVM in Apps Ebusiness Suite 11i and R12
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=362851.1
    Note: 68100.1 - Discoverer Performance When Running On Oracle Applications
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=68100.1
    Note: 465234.1 - Recommended Client Java Plug-in (JVM/JRE) For Discoverer Plus 10g (10.1.2)
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=465234.1
    Note: 329674.1 - Slow Performance When Opening Plus Workbooks from Oracle 11.5.10 Applications Home Page
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=329674.1
    Note: 190326.1 - Ideas for Improving Discoverer 4i Performance in an Applications 11i Environment
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=190326.1
    Note: 331435.1 - Slow Perfomance Using Disco 4.1 Admin/Desktop in Oracle Applications Mode EUL
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=331435.1
    Note: 217669.1 - Refreshing Folders and opening workbooks is slow in Apps 11i environment
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=217669.1

  • AirPlay Screen Mirroring (Mavericks) disconnects frequently with "Feedback taking too long to send" message

    My company has several TVs with AppleTVs (3rd generation units) connected in our conference rooms so we can "Screen Mirror" our Mac laptops via AirPlay during meetings. Many employees have complained that AirPlay Screen Mirroring drops frequently during meetings for no apparent reason.
    In attempts to determine the cause of the issue, I removed the AppleTV units from Wi-Fi and hard-wired them all to the LAN (100Mbps/Full duplex, no switchport errors seen on the Cisco switch). I upgraded the AppleTVs to the latest firmware. I had our AppleTV users ensure they were running MacOS Mavericks with the latest software updates installed. I had the Mac laptops hard-wired into the LAN during meetings in the conference rooms. None of these changes resolved the AirPlay issue.
    I reviewed the MacOS "/var/log/system.log" file from the laptops of several users that reported issues. I found a pattern that seemed to indicate that the "coreaudiod" process reported "Feedback taking too long to send" several times before the AppleTV connection was terminated. Also, from a network trace (using "tcpdump") taken during an unexpected AirPlay Screen Mirroring disconnection, I could see that the Mac laptop sent a TCP FIN packet to the AppleTV unit (this would indicate that the MacOS laptop initiated the closing of the AirPlay connection).
    I have included the relevant log file entries below. Please note that the LAN internal to our company is "solid" and there have been no connectivity issues detected or reported during the times the AirPlay sessions were disconnected.
    I believe I have found a workaround to this issue. By going into "System Preferences", "Sound" and then changing the "Output" device BACK to the "Internal Speakers" (rather than the AirPlay destination), the AirPlay Screen Monitoring connection seems to remain stable.
    My questions are:
    - is anyone else experiencing this type of problem? any other solutions recommended?
    - is there a way to change the AirPlay defaults so that Screen Mirroring only sends the video (not audio)?
    - does anyone know what the log file entries indicate (like, what does "Feedback taking too long to send...." mean)?
    - any fix planned for this issue?
    From: "/var/log/system.log":
    Jan 16 10:50:16 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:16.454404 AM [AirPlay] ### Feedback taking too long to send (1 seconds, 1 total)
    Jan 16 10:50:18 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:18.524517 AM [AirPlay] ### Feedback taking too long to send (4 seconds, 2 total)
    Jan 16 10:50:20 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:20.533639 AM [AirPlay] ### Feedback taking too long to send (6 seconds, 3 total)
    Jan 16 10:50:22 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:22.548168 AM [AirPlay] ### Feedback taking too long to send (8 seconds, 4 total)
    Jan 16 10:50:24 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:24.554522 AM [AirPlay] ### Feedback taking too long to send (10 seconds, 5 total)
    Jan 16 10:50:24 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:24.554809 AM [AirPlay] ### Report network status (3, en0) failed: 1/0x1 kCFHostErrorHostNotFound / kCFStreamErrorSOCKSSubDomainVersionCode / kCFStreamErrorSOCKS5BadResponseAddr / kCFStreamErrorDomainPOSIX / evtNotEnb / siInitSDTblErr / kUSBPending / dsBusError / kStatusIsError / kOTSerialSwOverRunErr / cdevResErr / EPERM
    Jan 16 10:50:26 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:26.545531 AM [AirPlay] ### Feedback taking too long to send (12 seconds, 6 total)
    Jan 16 10:50:28 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:28.559050 AM [AirPlay] ### Feedback taking too long to send (14 seconds, 7 total)
    Jan 16 10:50:30 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:30.628868 AM [AirPlay] ### Feedback taking too long to send (16 seconds, 8 total)
    Jan 16 10:50:32 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:32.655638 AM [AirPlay] ### Feedback taking too long to send (18 seconds, 9 total)
    Jan 16 10:50:34 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:34.641952 AM [AirPlay] ### Feedback taking too long to send (20 seconds, 10 total)
    Jan 16 10:50:36 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:36.659854 AM [AirPlay] ### Feedback taking too long to send (22 seconds, 11 total)
    Jan 16 10:50:38 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:38.653594 AM [AirPlay] ### Feedback taking too long to send (24 seconds, 12 total)
    Jan 16 10:50:40 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:40.659279 AM [AirPlay] ### Feedback taking too long to send (26 seconds, 13 total)
    Jan 16 10:50:42 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:42.745549 AM [AirPlay] ### Feedback taking too long to send (28 seconds, 14 total)
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.532853 AM [AirPlay] ### Endpoint "AppleTV" feedback error: -6722/0xFFFFE5BE kTimeoutErr
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.533151 AM [AirPlay] ### Feedback failed: -6722/0xFFFFE5BE kTimeoutErr
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.533273 AM [AirPlay] ### Error with endpoint "AppleTV": -6722/0xFFFFE5BE kTimeoutErr
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.533427 AM [BonjourBrowser] Reconfirming PTR for AppleTV._airplay._tcp.local. on en0
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.533588 AM [BonjourBrowser] Reconfirming PTR for 9C207BBD8EA1@AppleTV._raop._tcp.local. on en0
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.533839 AM [AirPlay] ### AirPlay report: Network dead for 10+ seconds after 159 seconds, screen, nm "AppleTV", tp WiFi, md AppleTV3,1, sv 190.9, rt 0, fu 0, rssi -54
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.534104 AM [AirPlay] ### Report network status (5, en0) failed: 1/0x1 kCFHostErrorHostNotFound / kCFStreamErrorSOCKSSubDomainVersionCode / kCFStreamErrorSOCKS5BadResponseAddr / kCFStreamErrorDomainPOSIX / evtNotEnb / siInitSDTblErr / kUSBPending / dsBusError / kStatusIsError / kOTSerialSwOverRunErr / cdevResErr / EPERM
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.534315 AM [AirPlay] Deactivating virtual display stream for quiesce
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.543682 AM [AirPlayScreenClient] Stopping session
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.543815 AM [AirPlay] Quiescing endpoint 'AppleTV'
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.543907 AM [AirPlayScreenClient] Stopping session internal
    Jan 16 10:50:44 My-MacBook-Pro.local coreaudiod[161]: 2014-01-16 10:50:44.544218 AM [AirPlayScreenClient] Stopped session internal
    Jan 16 10:50:44 My-MacBook-Pro.local AirPlayUIAgent[985]: 2014-01-16 10:50:44.544266 AM [AirPlayAVSys] ### Quiesce AirPlay
    Jan 16 10:50:44 My-MacBook-Pro.local SystemUIServer[159]: 2014-01-16 10:50:44.544297 AM [AirPlayAVSys] ### Quiesce AirPlay
    Jan 16 10:50:44 My-MacBook-Pro.local SystemUIServer[159]: 2014-01-16 10:50:44.553084 AM [AirPlayAVSys] ### Quiesce AirPlay
    Jan 16 10:50:44 My-MacBook-Pro.local AirPlayUIAgent[985]: 2014-01-16 10:50:44.554904 AM [AirPlayAVSys] ### Quiesce AirPlay
    Jan 16 10:50:44 My-MacBook-Pro.local SystemUIServer[159]: 2014-01-16 10:50:44.557604 AM [AirPlayAVSys] Ignoring route away when AirPlay not current
    Jan 16 10:50:44 My-MacBook-Pro.local AirPlayUIAgent[985]: 2014-01-16 10:50:44.560307 AM [AirPlayAVSys] Ignoring route away when AirPlay not current
    Jan 16 10:50:44 My-MacBook-Pro.local WindowServer[89]: Display 0x04280880: GL mask 0x21; bounds (0, 0)[1920 x 1080], 62 modes available
    Jan 16 10:50:44 My-MacBook-Pro.local WindowServer[89]: GLCompositor: GL renderer id 0x01022727, GL mask 0x0000001f, accelerator 0x00004ccb, unit 0, caps QEX|MIPMAP, vram 2048 MB
    I am happy to provide more information if needed.
    Thank you.
    -Tim

    I'm currently still experiencing this as well. I've confirmed it occurs on 10.9.1, 10.9.2, and 10.9.3 on MacBook Pro Retinas and on Mid-2012 and 2013 MacBooks. It happens on multiple ATVs, not just one; all are updated to 6.1.1, and a simple reboot seems to fix it temporarily, but it does come back. All the ATVs connect to the network via wireless, not Ethernet. These are 3rd-gen ATVs, but I checked the serial numbers and they do not match the bad batch of Apple TVs from 2013 that Apple offered up for replacement due to the bad firmware update. None of the computers have the firewall turned on. Here are the two logs that we always find after the issue occurs (the logs are recent; this happened this morning):
    5/30/14 8:57:36.017 AM coreaudiod[183]: 2014-05-30 08:57:36.016946 AM [AirPlay] ### Feedback taking too long to send (30 seconds, 17 total)
    5/30/14 8:57:36.332 AM coreaudiod[183]: 2014-05-30 08:57:36.331492 AM [AirPlay] ### Feedback failed: -6723/0xFFFFE5BD kCanceledErr
    The user will get disconnected from AirPlay anywhere between 30 seconds and 3 minutes after logging on, and can reconnect but then once again gets disconnected after the same time period. One interesting thing to note: when the "Feedback taking too long to send" error starts occurring and the countdown to disconnect starts ticking toward 30, it's solely referring to audio not being sent over the network; video works just fine. If I try to play sound I get another log entry and the sound doesn't play through the speakers. After a reboot, sound works fine and the feedback errors do not show up. I've also tried switching to Internal Speakers (since it switches to AirPlay speakers by default) after connecting to AirPlay and seeing the feedback timer start in the Console logs, but even after that the log continues to say it's taking too long to send, and it disconnects in 30 seconds.
    This issue has been ongoing for months; I've got a ticket logged as far back as January with this occurring, but it's infrequent enough that we've just restarted and moved on. I'd say it's an issue that occurs in about 5%-10% of meetings, but that's an entire meeting that doesn't have the ability to AirPlay until someone comes down and reboots it.
    I don't often post in this forum, but this is still an active issue with no resolution, proof that it's occurring on other people's systems, and no firmware updates have been released to correct it. It'd be nice to know of any workarounds other than having to buy some lamp timers for each conference room just to get a functional ATV, or putting up a sign that says: hey, if you get disconnected every 3 minutes, reboot the ATV. The whole reason we're using Apple products is ease of use; otherwise I'd put together a much cheaper solution myself. Any help or recommended troubleshooting steps would be fantastic at this point.

  • Data Archive Script is taking too long to delete a large table

    Hi All,
    We have data archive scripts that move data for a date range to a different table. Each script has two parts: first, copy data from the original table to the archive table; second, delete the copied rows from the original table. The first part executes very fast, but the deletion is taking too long, i.e. around 2-3 hours. The customer analysed the delete query and is saying the script is not using an index and is doing a full table scan, but the predicate itself is the primary key. Please help... More info below
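Whether a delete restricted by the primary key actually uses the index can be verified generically with the database's plan facility before running the statement. A minimal, self-contained sketch of the copy-then-delete archive pattern using Python's sqlite3 module (illustrative only: the table and column names are simplified stand-ins, and SQLite's planner is not Oracle's):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mon_txns (id_txn INTEGER PRIMARY KEY, amnt_amount INTEGER)")
con.executemany("INSERT INTO mon_txns VALUES (?, ?)",
                [(i, i * 10) for i in range(1000)])

# Part 1: copy a key range into the archive table.
con.execute("CREATE TABLE mon_txns_arch AS "
            "SELECT * FROM mon_txns WHERE id_txn BETWEEN 100 AND 199")

# Part 2: inspect the plan of the delete before executing it.
plan = con.execute("EXPLAIN QUERY PLAN "
                   "DELETE FROM mon_txns WHERE id_txn BETWEEN 100 AND 199").fetchall()
print(plan[0][3])  # a SEARCH on the primary key rather than a full SCAN

con.execute("DELETE FROM mon_txns WHERE id_txn BETWEEN 100 AND 199")
remaining = con.execute("SELECT COUNT(*) FROM mon_txns").fetchone()[0]
print(remaining)
```

On the real system the equivalent check is the optimizer's explain plan for the delete; if it shows a full scan despite a primary-key predicate, the usual suspects are stale statistics or the overhead of validating the many foreign keys shown in the DDL below.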
    CREATE TABLE "APP"."MON_TXNS"
       (    "ID_TXN" NUMBER(12,0) NOT NULL ENABLE,
        "BOL_IS_CANCELLED" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
        "ID_PAYER" NUMBER(12,0),
        "ID_PAYER_PI" NUMBER(12,0),
        "ID_PAYEE" NUMBER(12,0),
        "ID_PAYEE_PI" NUMBER(12,0),
        "ID_CURRENCY" CHAR(3 BYTE) NOT NULL ENABLE,
        "STR_TEXT" VARCHAR2(60 CHAR),
        "DAT_MERCHANT_TIMESTAMP" DATE,
        "STR_MERCHANT_ORDER_ID" VARCHAR2(30 BYTE),
        "DAT_EXPIRATION" DATE,
        "DAT_CREATION" DATE,
        "STR_USER_CREATION" VARCHAR2(30 CHAR),
        "DAT_LAST_UPDATE" DATE,
        "STR_USER_LAST_UPDATE" VARCHAR2(30 CHAR),
        "STR_OTP" CHAR(6 BYTE),
        "ID_AUTH_METHOD_PAYER" NUMBER(1,0),
        "AMNT_AMOUNT" NUMBER(23,0) DEFAULT 0,
        "BOL_IS_AUTOCAPTURE" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
        "ID_USE_CASE" NUMBER(4,0) NOT NULL ENABLE,
        "ID_AUTH_METHOD_PAYEE" NUMBER(2,0),
         CONSTRAINT "CKC_BOL_IS_CANCELLED_MON_TXNS" CHECK (BOL_IS_CANCELLED in ('Y','N')) ENABLE,
         CONSTRAINT "PK_MON_TXNS" PRIMARY KEY ("ID_TXN")
      USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_INDEX"  ENABLE,
         CONSTRAINT "FK_MON_TXNS_CURRENCIES" FOREIGN KEY ("ID_CURRENCY")
          REFERENCES "APP"."CURRENCIES" ("ID_CURRENCY") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PAYER" FOREIGN KEY ("ID_PAYER")
          REFERENCES "APP"."CUSTOMERS" ("ID_CUSTOMER") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PAYEE" FOREIGN KEY ("ID_PAYEE")
          REFERENCES "APP"."CUSTOMERS" ("ID_CUSTOMER") ENABLE,
         CONSTRAINT "FK_MON_TXNS_REFERENCE_TXNS" FOREIGN KEY ("ID_TXN")
          REFERENCES "APP"."TXNS" ("ID_TXN") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PI_PAYER" FOREIGN KEY ("ID_PAYER_PI")
          REFERENCES "APP"."PIS" ("ID_PI") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_PI_PAYEE" FOREIGN KEY ("ID_PAYEE_PI")
          REFERENCES "APP"."PIS" ("ID_PI") ENABLE,
         CONSTRAINT "FK_MON_TXNS_TO_AUTHMETHOD" FOREIGN KEY ("ID_AUTH_METHOD_PAYER")
          REFERENCES "APP"."AUTHENTICATION_METHODS" ("ID_AUTHENTICATION_METHOD") ENABLE,
         CONSTRAINT "FK_MON_TXNS_USE_CASE_ID" FOREIGN KEY ("ID_USE_CASE")
          REFERENCES "APP"."USE_CASES" ("ID_USE_CASE") ENABLE,
         CONSTRAINT "FK_MON_TXN_AUTH_PAYEE" FOREIGN KEY ("ID_AUTH_METHOD_PAYEE")
      REFERENCES "APP"."AUTHENTICATION_METHODS" ("ID_AUTHENTICATION_METHOD") ENABLE
       ) ;
      CREATE INDEX "APP"."IDX_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYER")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_INDEX" ;
      CREATE INDEX "APP"."IDX_PAYEE_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYEE")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE INDEX "APP"."IDX_PYE_PI_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYEE_PI")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE INDEX "APP"."IDX_PYR_PI_MON_TXNS" ON "APP"."MON_TXNS" ("ID_PAYER_PI")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE INDEX "APP"."IDX_USE_CASE_MON_TXNS" ON "APP"."MON_TXNS" ("ID_USE_CASE")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_DATA" ;
      CREATE UNIQUE INDEX "APP"."PK_MON_TXNS" ON "APP"."MON_TXNS" ("ID_TXN")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "LARGE_INDEX" ;
    Data is first moved to the table schema3.OTW, and then we delete those rows from the original table. Below is the explain plan for the delete:
    SQL> explain plan for
      2  delete from schema1.mon_txns where id_txn in (select id_txn from schema3.OTW);
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 2798378986

    ---------------------------------------------------------------------------------------
    | Id  | Operation              | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    ---------------------------------------------------------------------------------------
    |   0 | DELETE STATEMENT       |            |  2520 |   233K|    87   (2)| 00:00:02 |
    |   1 |  DELETE                | MON_TXNS   |       |       |            |          |
    |*  2 |   HASH JOIN RIGHT SEMI |            |  2520 |   233K|    87   (2)| 00:00:02 |
    |   3 |    INDEX FAST FULL SCAN| OTW_ID_TXN |  2520 | 15120 |     3   (0)| 00:00:01 |
    |   4 |    TABLE ACCESS FULL   | MON_TXNS   | 14260 |  1239K|    83   (0)| 00:00:02 |
    ---------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    Please help,
    thanks,
    Banka Ravi

    'Best practice' is just what Oracle is already doing, as you have already been told: DELETE FROM myTable WHERE myDate BETWEEN myStart AND myEnd.
    Your use case is why many orgs elect to use partitioning, with that DATE column as the partition key. Then it is VERY FAST and VERY EASY to truncate or drop partitions that contain old data when you no longer need them.
    The other solution is to stop waiting so long to delete data, so that you don't have to delete large amounts at once. Instead of deleting data once a month, delete it once a week or even every night. Then the number of rows being deleted will be much smaller and, if the stats are kept current, Oracle may decide to use the index.
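    The partitioning idea above can be sketched roughly as follows. This is only an illustration, not the poster's schema: it assumes Oracle 11.2 or later (for interval partitioning and DROP PARTITION FOR), that MON_TXNS can be rebuilt, and that a suitable date column exists — DT_CREATED here is a hypothetical name, since the DDL shown does not include one.

    ```sql
    -- Sketch only: rebuild the table range-partitioned by month on a
    -- hypothetical DT_CREATED date column (Oracle 11.2+ interval syntax).
    CREATE TABLE APP.MON_TXNS_P (
        ID_TXN      NUMBER        NOT NULL,
        DT_CREATED  DATE          NOT NULL,
        AMNT_AMOUNT NUMBER(23,0)  DEFAULT 0
        -- ... remaining columns and constraints as in MON_TXNS ...
    )
    PARTITION BY RANGE (DT_CREATED)
    INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
    (PARTITION p_initial VALUES LESS THAN (DATE '2013-01-01'));

    -- Removing a month of old data then becomes a fast metadata
    -- operation instead of a large DELETE:
    ALTER TABLE APP.MON_TXNS_P
      DROP PARTITION FOR (DATE '2013-01-15')
      UPDATE INDEXES;
    ```

    The UPDATE INDEXES clause keeps the global indexes (such as the primary key on ID_TXN) usable after the partition is dropped, at the cost of some extra maintenance work during the drop.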

  • SQL Statement taking too long to get the data

    Hi,
    There are over 2500 records in a table, and when I retrieve all of them using 'SELECT * FROM table' it takes too long to get the data, i.e., 4.3 seconds.
    Is there any possible way to shorten the processing time?
    Thanks

    Hi Patrick,
    Here is the sql statement and table desc.
    ID     Number
    SN     Varchar2(12)
    FN     Varchar2(30)
    LN     Varchar2(30)
    By     Varchar(255)
    Dt     Date(7)
    Add     Varchar2(50)
    Add1     Varchar2(30)
    Cty     Varchar2(30)
    Stt     Varchar2(2)
    Zip     Varchar2(12)
    Ph     Varchar2(15)
    Email     Varchar2(30)
    ORgId     Number
    Act     Varchar2(3)     
    select A."FN" || '' '' || A."LN" || '' ('' || A."SN" || '')'' "Name",
    A."By", A."Dt",
    A."Add" || ''
    '' || A."Cty" || '', '' || A."Stt" || '' '' || A."Zip" "Location",
    A."Ph", A."Email", A."ORgId", A."ID",
    A."SN" "OSN", A."Act"
    from "TBL_OPTRS" A where A."ID" <> 0 ';
    I'm displaying all rows in a report.
    If I use 'select * from TBL_OPTRS', that also takes 4.3 to 4.6 seconds.
    Thanks.
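    One way to see whether those 4.3 seconds are spent in the database or in fetching and rendering is a quick SQL*Plus check. This is a diagnostic sketch, not a fix: it assumes SQL*Plus access and the PLUSTRACE role for AUTOTRACE; TBL_OPTRS is the table from the post.

    ```sql
    -- Time the query and show execution statistics without
    -- printing the rows themselves.
    SET TIMING ON
    SET AUTOTRACE TRACEONLY STATISTICS

    SELECT * FROM "TBL_OPTRS";

    -- If "consistent gets" is small, most of the elapsed time is
    -- network round trips / client rendering, not database work.
    -- A larger fetch array reduces the round trips:
    SET ARRAYSIZE 500
    SELECT * FROM "TBL_OPTRS";
    ```

    For only 2500 rows, the database time is usually negligible and the elapsed time is dominated by how the client fetches and displays the result set.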

  • Data manager jobs taking too long or hanging

    Hoping someone here can provide some assistance with regard to the 4.2 version. We are specifically using BPC/OutlookSoft 4.2SP4 (and in process of upgrading to BPC7.5). Three server environment - SQL, OLAP and Web.
    Problem: Data manager jobs in each application of a production appset with five applications are either taking too long to complete for very small jobs (single entity/single period data copy/clear, under 1000 records) or hanging completely for larger jobs. This has been an issue for the last 7 days. During normal operation, small DM jobs ran in under a minute and large ones took only a few minutes.
    Failed attempts at resolution thus far:
    1. Processed all applications from the OLAP server
    2. Confirmed issue is specific to our appset and is not present in ApShell
    3. Copied packages from ApShell to application to eliminate package corruption
    4. Windows security updates were applied to all three servers but I assume this would also impact ApShell.
    5. Cleared tblDTSLog history
    6. Rebooted all three servers
    7. Suspected antivirus however, problem persists with antivirus disabled on all three servers.
    Other Observations
    There are several tables in the SQL database named k2import# and several stored procedures named DMU_k2import#. My guess is these did not get removed because I killed the hung jobs. I'm not sure if their existence is causing any issues.
    To make a long story short, how can I narrow down at which point the jobs are hanging, or what is taking the longest time? I have turned on Debug Script but I don't have documentation to make sense of all this info. What exactly happens when I run a Clear package? At this point, my next step is to run SQL Profiler to get a look at what is going on behind the scenes on the SQL server. I also want to rule out the COM+ objects on the web server but am not sure where to start.
    Any help is greatly appreciated!!
    Thank you,
    Hitesh

    Hi,
    The problem seems to be related to the database. Do you have any maintenance plan for the database?
    It is specific to your appset because each appset has its own database.
    I suspect you have to run sp_updatestats (Update Statistics) on your database, and I think the issue with your hanging jobs will then be solved.
    The DMU_k2import# tables come from hung imports; you can delete these tables, because they just grow the size of the database and are certainly not used anymore.
    Regards
    Sorin Radulescu
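    The cleanup suggested above might look roughly like this on the appset's SQL Server database. A sketch only, assuming SQL Server 2005 or later; the name patterns follow the k2import# / DMU_k2import# objects described in the thread.

    ```sql
    -- Refresh out-of-date optimizer statistics for the whole database.
    EXEC sp_updatestats;

    -- List the leftover import tables and procedures from hung Data
    -- Manager jobs, so they can be reviewed before dropping manually.
    SELECT name FROM sys.tables     WHERE name LIKE 'k2import%';
    SELECT name FROM sys.procedures WHERE name LIKE 'DMU_k2import%';
    ```

    Reviewing the lists first is safer than dropping by pattern, since a job that is still running may legitimately own one of these objects.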

  • I was backing up my iPhone by changing the location of the library because I don't have enough space. My phone was taking too long to copy the file, so I cancelled it. The data is stored in the desired location, and now I can't delete that backup

    I was backing up my iPhone by changing the location of the library because I don't have enough space. My phone was taking too long to copy the file, so I cancelled it. The data is stored in the desired location, and now I can't delete that backup.
    Also, how does the iPhone 4 perform with iOS 7.1.1?
    T0X1C

    rabidrabbit wrote:
    Can I back up my iPhone 4S to my iPad 3 (64 GB)?
    No.
    rabidrabbit wrote:
    However, now I don't have enough space in iCloud to back up either device. Why not?
    iCloud only gives so much space for free storage; if you exceed the 5 GB limit you have to pay for additional storage.
