DELETE statement performance issue

Dear all,
I have the following problem:
When running the delete script below:
DELETE FROM aux_exist_relationship_3 aer_3
WHERE EXISTS (SELECT ''
FROM aux_kind_of_control_1 akc_1
WHERE aer_3.cons_rel_id_new= akc_1.cons_rel_id)
AND NOT EXISTS (SELECT ''
FROM aux_kind_of_control_3 akc_3
WHERE akc_3.cons_rel_id=aer_3.cons_rel_id_new)
it takes more than 50 minutes to execute, even though there are indexes on the three tables involved, on the fields akc_1.cons_rel_id, akc_3.cons_rel_id and aer_3.cons_rel_id_new.
Can someone advise me how to solve the issue?

How many rows do you expect to delete?
You can try:
DELETE FROM aux_exist_relationship_3 aer_3
      WHERE aer_3.cons_rel_id_new IN(SELECT akc_1.cons_rel_id
                                       FROM aux_kind_of_control_1 akc_1
                                     MINUS
                                     SELECT akc_3.cons_rel_id
                                       FROM aux_kind_of_control_3 akc_3)
Urs
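
It may also be worth ruling out stale optimizer statistics on the three tables before tuning further; a minimal sketch, assuming the tables are owned by the current schema and that gathering statistics is permitted:
BEGIN
  -- refresh table and index statistics so the optimizer can size the
  -- semi-join and anti-join correctly
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'AUX_EXIST_RELATIONSHIP_3', cascade => TRUE);
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'AUX_KIND_OF_CONTROL_1',    cascade => TRUE);
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'AUX_KIND_OF_CONTROL_3',    cascade => TRUE);
END;
/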

Similar Messages

  • SQL statement performance issues

    Hi: A SQL statement I have is taking too long to complete, but if I break it into two separate statements, it runs fine.
    SQL 1 runs fine and returns 163.
    SELECT /*+ FIRST_ROWS */ entity_group.entity FROM auth, entity_group, user_group WHERE entity_group.auth_group = user_group.groupid and user_group.userid = auth.id and auth.username = 'ing';
    SQL 2 runs fine, prints about 20 records
    select /*+ FIRST_ROWS */ EVENT_DATA.info from EVENT_DATA, AGENTS where EVENT_DATA.agent_id = agents.id AND agents.entity IN (163);
    SQL 3 takes a long time to run. The field AGENT_ID is indexed in EVENT_DATA.
    select /*+ FIRST_ROWS */ EVENT_DATA.info from EVENT_DATA, AGENTS where EVENT_DATA.agent_id = agents.id AND agents.entity IN (SELECT /*+ FIRST_ROWS */ entity_group.entity FROM auth, entity_group, user_group WHERE entity_group.auth_group = user_group.groupid and user_group.userid = auth.id and auth.username = 'ing');
    Any suggestions on improving the SQL statement, or on finding out whether anything is wrong with the database?
    Thanks
    Ravi

    First, lose the hints, analyse your tables and let the CBO do its job. If it is still not fast enough, then Nicolas' solution may be faster.
    Another possible construct would be:
    SELECT event_data.info
    FROM event_data, agents,
         (SELECT DISTINCT entity_group.entity
          FROM auth, entity_group, user_group
          WHERE entity_group.auth_group = user_group.groupid and
                user_group.userid = auth.id and
                auth.username = 'ing') eg
    WHERE event_data.agent_id = agents.id and
          agents.entity = eg.entity
    TTFN
    John
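
    For comparison, the same restructuring can also be expressed with a correlated EXISTS instead of the inline view; a sketch reusing only the tables and columns from the original statements:
    SELECT event_data.info
    FROM   event_data, agents
    WHERE  event_data.agent_id = agents.id
    AND    EXISTS (SELECT NULL
                   FROM   auth, entity_group, user_group
                   WHERE  entity_group.auth_group = user_group.groupid
                   AND    user_group.userid       = auth.id
                   AND    auth.username           = 'ing'
                   AND    entity_group.entity     = agents.entity);
    Whether this beats the inline-view version depends on the plans the CBO picks once the tables are analysed.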

  • DELETE performance issue

    Dear All,
    I am posting my performance issue for a delete statement again (this time following the instructions for posting a SQL statement tuning request). The following delete statement was running for more than 2 hours without success. Please note that the tables are indexed on the fields used in the query. If I try a simple SELECT, it completes within 16 seconds:
    DELETE FROM aux_exist_relationship_3 aer_3
            WHERE EXISTS (SELECT ''
                          FROM aux_kind_of_control_1 akc_1
                          WHERE aer_3.cons_rel_id_new= akc_1.cons_rel_id)
            AND NOT EXISTS (SELECT ''
                            FROM aux_kind_of_control_3 akc_3
                            WHERE akc_3.cons_rel_id=aer_3.cons_rel_id_new)
    The version of the DB is 10.2.0.4
    These are the parameters relevant to the optimizer:
    show parameter optimizer
    NAME TYPE VALUE
    optimizer_dynamic_sampling integer 2
    optimizer_features_enable string 10.2.0.4
    optimizer_index_caching integer 0
    optimizer_index_cost_adj integer 100
    optimizer_mode string ALL_ROWS
    optimizer_secure_view_merging boolean TRUE
    SQL> show parameter db_file_multi
    NAME TYPE VALUE
    db_file_multiblock_read_count integer 128
    SQL>
    SQL>
    SQL> show parameter db_block_size
    NAME TYPE VALUE
    db_block_size integer 8192
    SQL>
    SQL> show parameter cursor_sharing;
    NAME TYPE VALUE
    cursor_sharing string EXACT
    SQL>
    SQL> column sname format a20
    SQL> column pname format a20
    SQL> column pval2 format a20
    SQL>
    SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;
    SNAME PNAME PVAL1 PVAL2
    SYSSTATS_INFO STATUS COMPLETED
    SYSSTATS_INFO DSTART 10-30-2008 16:28
    SYSSTATS_INFO DSTOP 10-30-2008 16:28
    SYSSTATS_INFO FLAGS 1
    SYSSTATS_MAIN CPUSPEEDNW 1217.17877
    SYSSTATS_MAIN IOSEEKTIM 10
    SYSSTATS_MAIN IOTFRSPEED 4096
    SYSSTATS_MAIN SREADTIM
    SYSSTATS_MAIN MREADTIM
    SYSSTATS_MAIN CPUSPEED
    SYSSTATS_MAIN MBRC
    SNAME PNAME PVAL1 PVAL2
    SYSSTATS_MAIN MAXTHR
    SYSSTATS_MAIN SLAVETHR
    13 rows selected.
    Here is the output of EXPLAIN PLAN
    explain plan for
    2 DELETE FROM aux_exist_relationship_3 aer_3
    3 WHERE EXISTS (SELECT ''
    4 FROM aux_kind_of_control_1 akc_1
    5 WHERE aer_3.cons_rel_id_new= akc_1.cons_rel_id)
    6 AND NOT EXISTS (SELECT ''
    7 FROM aux_kind_of_control_3 akc_3
    8 WHERE akc_3.cons_rel_id=aer_3.cons_rel_id_new);
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 4002353621
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU) | Time |
    | 0 | DELETE STATEMENT | | 2334 | 42012 | | 1174 (1) | 00:00:15 |
    | 1 | DELETE | AUX_EXIST_RELATIONSHIP_3 | | | | | |
    |* 2 | HASH JOIN SEMI | | 2334 | 42012 | | 1174 (1) |00:00:15 |
    |* 3 | HASH JOIN ANTI | | 2334 | 28008 | 1992K | 989 (1) | 00:00:12 |
    | 4 | TABLE ACCESS FULL | AUX_EXIST_RELATIONSHIP_3 | 113K| 663K| | 718 (1) | 00:00:09 |
    | 5 | INDEX FAST FULL SCAN| AUX_KIND_OF_CONTROL_3_IDX1 | 113K| 663K| | 74 (2) | 00:00:01 |
    | 6 | INDEX FAST FULL SCAN | AUX_KIND_OF_CONTROL_1_IDX1 | 221K| 1298K| | 183 (2) | 00:00:03 |
    Predicate Information (identified by operation id):
    PLAN_TABLE_OUTPUT
    2 - access("AER_3"."CONS_REL_ID_NEW"="AKC_1"."CONS_REL_ID")
    3 - access("AKC_3"."CONS_REL_ID"="AER_3"."CONS_REL_ID_NEW")
    19 rows selected.
    I will be very grateful if someone can tell me where the problem might be. I got suggestions yesterday for using MINUS in the delete clause:
    DELETE  FROM aux_exist_relationship_3 aer_3
          WHERE aer_3.cons_rel_id_new IN(SELECT akc_1.cons_rel_id
                                           FROM aux_kind_of_control_1 akc_1
                                         MINUS
                                         SELECT akc_3.cons_rel_id
                                          FROM aux_kind_of_control_3 akc_3)
    or using a view, but it did not help.
    My guess is that this is linked to the UNDO tablespace, but how can I prove it? Or perhaps I am wrong?
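
    One way to test the undo theory is to watch, from a second session, how much undo the DELETE's transaction generates while it runs; a sketch, assuming access to the V$ views:
    -- used_ublk / used_urec keep growing while the DELETE generates undo
    SELECT s.sid, s.username, t.used_ublk, t.used_urec, t.start_time
      FROM v$transaction t, v$session s
     WHERE s.taddr = t.addr;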

    Good day to everyone!
    Thanks a lot for the suggestions.
    I have tested the same query in a different DB, into which I imported the 3 tables together with their data, and the execution took 2.54 seconds. I guess the problem might be coming from the UNDO tablespace. I have found the following query:
    select b.tablespace_name, tbs_size SizeMb, a.free_space FreeMb
    from  (select tablespace_name, round(sum(bytes)/1024/1024 ,2) as free_space
           from dba_free_space
           group by tablespace_name) a,
          (select tablespace_name, sum(bytes)/1024/1024 as tbs_size
           from dba_data_files
           group by tablespace_name) b
    where a.tablespace_name(+)=b.tablespace_name;
    and the result is:
    TABLESPACE_NAME | SIZEMB | FREEMB
    SYSAUX | 600 | 48
    UNDOTBS | 9500 | 2.44
    SYSTEM | 400 | 122.69
    EGR_TBS_DATA | 13000 | 96.56
    and also
    select SEGMENT_SPACE_MANAGEMENT,RETENTION,MAX_EXTENTS
    from dba_tablespaces
    where tablespace_name = 'UNDOTBS';
    result:
    SEGMENT_SPACE_MANAGEMENT |RETENTION | MAX_EXTENTS
    MANUAL | NOGUARANTEE | 2147483645
    So perhaps it is coming from the UNDO tablespace.
    Do you think this could be the issue?
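
    Since UNDOTBS reports only 2.44 MB free, it may also help to see how much of that undo is actually reusable; a sketch, assuming DBA access:
    -- EXPIRED extents can be overwritten; a tablespace dominated by
    -- ACTIVE/UNEXPIRED extents can make a large transaction wait for undo space
    SELECT status, ROUND(SUM(bytes)/1024/1024, 2) AS mb
      FROM dba_undo_extents
     WHERE tablespace_name = 'UNDOTBS'
     GROUP BY status;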

  • Oracle Advance Compression Deletion Performance issue in 11g R1

    Hi,
    We have implemented OAC in our data warehouse environment to enable table and index compression. We tested it on our test machine and gained almost 600 GB thanks to Advanced Compression, without any issues, and all the Informatica loads ran fine. Hence we implemented the same in production, but unfortunately two sessions that involve deletion of data are now taking about three times as long to complete, which affects our production environment.
    The tables causing the issue are all non-partitioned tables.
    I need to know whether Oracle Advanced Compression can decrease delete performance, and whether there is any way to disable Advanced Compression on those particular tables.
    Our environment details:
    DB earlier version: 11.1.0.6
    DB current version : Oracle 11.1.0.7
    Applied PSU: 11.1.0.7.6
    Operating system: Solaris 5.9
    Syntax used for compression:
    ALTER TABLE TABLE_NAME MOVE COMPRESS FOR ALL OPERATIONS;
    Thanks in Advance.

    Hi,
    Thanks for your reply.
    The note is about an update performance issue, and I have already applied the necessary patches for improving update performance.
    The update sessions are all working fine; only the deletion sessions are causing problems.
    Could someone help me resolve this problem?
    Thanks,
    VBK
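
    On the question of disabling Advanced Compression for just the affected tables, one option (the object names below are placeholders) is to move each table without compression and rebuild its indexes afterwards, since MOVE leaves them unusable:
    -- decompress the table and switch it back to NOCOMPRESS (placeholder name)
    ALTER TABLE your_table MOVE NOCOMPRESS;
    -- MOVE changes rowids, so every index on the table must be rebuilt
    ALTER INDEX your_table_idx1 REBUILD;
    Whether this is acceptable depends on the maintenance window, since MOVE locks the table and rewrites all of its blocks.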

  • Performance issue with the ABAP statements

    Hello,
    Can someone please help me with the statements below, where I am facing a performance problem.
    SELECT * FROM /BIC/ASALHDR0100 into Table CHDATE.
    SORT CHDATE by DOC_NUMBER.
    SORT SOURCE_PACKAGE by DOC_NUMBER.
    LOOP AT CHDATE INTO WA_CHDATE.
       READ TABLE SOURCE_PACKAGE INTO WA_CIDATE WITH KEY DOC_NUMBER =
       WA_CHDATE-DOC_NUMBER BINARY SEARCH.
       MOVE WA_CHDATE-CREATEDON  to WA_CIDATE-CREATEDON.
    APPEND WA_CIDATE to CIDATE.
    ENDLOOP.
    I wrote the above code for the following requirement:
    1. I have 2 tables from which I am getting the data.
    2. Both tables have a common field, the CREATEDON date, and both tables contain values for it.
    3. While reading the two tables and copying to a third table, I have to modify the field.
    I am getting performance issues with the above statements.
    Thanks
    Edited by: Rob Burbank on Jul 29, 2010 10:06 AM

    Hello,
    Try a SELECT like the following one instead of your code:
    SELECT field field2 ...
      INTO TABLE it_table
      FROM table1 AS t1 INNER JOIN table2 AS t2
        ON t1~doc_number = t2~doc_number.

  • Performance Issues DPS - Single Edition - PDF Rendering, Multi-State Text

    Hello,
    Please advise on the above issue.
    On a Retina display iPad, I'm having issues with multi-state objects that contain text. The column on the left is crisp text in a non-multi-state object. However, when I place the text in a multi-state object (right), it is not rendering in high resolution; it is a bit pixelated.
    Single Issue - Creative Cloud License
    iPad Gen III Retina Original Version
    My application is compiled using app version: v24
    Also, I experience performance issues using PDF mode... when you change a page everything is super grainy, and then about 2 seconds later it renders crisply. It used to be pre-rendered, or at least much faster; perhaps I was using PNG? I switched to PDF mode as recommended in countless articles. Is the slow render just a side effect of PDF? Is there anything I can do to optimize? Recommendations?
    Thanks so much!
    Richie

    Thank You, Bob!
    Vector fixed it! I only have to change it on 29872498329 pages
    One thing I just realized is that I have the same issue with buttons. I have a button with Normal and Click states. It has the same rendering issue (grainy), and I couldn't find a place to make the button render as 'vector'.

  • Finding Delete statement issued in particular object

    Dear All,
    Please let me know how to find all the DELETE statements issued against an object in Oracle. Someone has deleted some data from one table, and I want to find out when the delete statement was fired in my schema.
    With Regards
    Ramesh

    You have auditing turned on, right? There should be a report for DML in Audit Vault.
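
    If object auditing is not yet switched on for that table, a minimal sketch (schema and table names are placeholders, and this only captures deletes issued after auditing is enabled) would be:
    -- requires the AUDIT_TRAIL initialization parameter to be set (e.g. DB)
    AUDIT DELETE ON your_schema.your_table BY ACCESS;

    -- afterwards, see who deleted from the table and when
    SELECT username, timestamp, action_name, obj_name
      FROM dba_audit_trail
     WHERE owner = 'YOUR_SCHEMA'
       AND obj_name = 'YOUR_TABLE'
       AND action_name = 'DELETE';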

  • Performance of delete statement

    This delete statement does not complete even after 20 hours. I have indexes on all columns mentioned in the WHERE clause. I have also tried using function-based indexes and analyzing the tables, but this does not give any results.
    When I remove the NVL it processes in 1 minute. There are 400,000 records in edw_err_invoice_stg. Can anybody suggest anything?
    delete from stgown.EDW_ERR_INVOICE_STG a
    where a.dwh_load_key != (select max(b.dwh_load_key)
    from EDW_ERR_INVOICE_STG b
    where NVL(a.sys_ent_id,0) = NVL(b.sys_ent_id,0)
    and NVL(a.invoice_no,0) = NVL(b.invoice_no,0)
    and NVL(a.invoice_line_no,0) = NVL(b.invoice_line_no,0)
    and NVL(a.invoice_transaction_date,trunc(sysdate)) = NVL(b.invoice_transaction_date,trunc(sysdate))
    and NVL(a.supplier_no,0) = NVL(b.supplier_no,0)
    and NVL(a.transaction_flag,0) = NVL(b.transaction_flag,0));

    Hi,
    What about :
    delete from EDW_ERR_INVOICE_STG
    where rowid not in (select max(rowid) keep (dense_rank last order by dwh_load_key)
                                from    EDW_ERR_INVOICE_STG
                            group by sys_ent_id, invoice_no, invoice_line_no, invoice_transaction_date, supplier_no, transaction_flag);
    Nicolas.
    Well, perhaps if you have 2 rows with the max value for the same group, it's not a solution... because the statement above keeps only one of them.
    Message was edited by:
    N. Gasparotto
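
    If the rows tied on the maximum dwh_load_key must all survive (which is what the original != condition does), an analytic RANK keeps every row sharing the max while still reading the table only once; a sketch reusing the columns from the original statement:
    DELETE FROM edw_err_invoice_stg
     WHERE rowid IN (SELECT rid
                       FROM (SELECT rowid AS rid,
                                    RANK() OVER (PARTITION BY sys_ent_id, invoice_no, invoice_line_no,
                                                              invoice_transaction_date, supplier_no, transaction_flag
                                                 ORDER BY dwh_load_key DESC) AS rnk
                               FROM edw_err_invoice_stg)
                      WHERE rnk > 1);
    PARTITION BY treats NULLs in the grouping columns as equal, which roughly matches the NVL comparisons in the original statement.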

  • Simplest way to issue a DELETE statement from ADF?

    What is the simplest way to code a delete statement in an ADF application?
    In my case, I have a set of rows that have become obsolete and need to be deleted. The timeout parameter that governs this is not in the database, but configured in the application. A parameterized stored procedure seems overly complicated, and using an entity based view to retrieve all rows solely for the purpose of deleting them seems inefficient. Do I have to explicitly deal with PreparedStatement, or can I have something like a read-only view object based on a delete statement and simply say executeQuery() ?
    -- Sebastian

    Hi,
    You can write a custom method in the Application Module (if you have a table with row selection on the page), e.g.
    public void executeDelete() {
        Row r = getYourViewObject().getCurrentRow();
        r.remove();
        getDBTransaction().commit();
    }
    or
    public void executeDelete() {
        getYourViewObject().removeCurrentRow();
        getDBTransaction().commit();
    }
    Optionally, to refresh the data displayed on the page after deleting, you may invoke in your methods (after committing):
    getYourViewObject().executeQuery();
    I hope it helps.
    Kuba

  • Performance Issue for BI system

    Hello,
    We are facing performance issues on our BI system. It is a pre-production system and its performance is degrading badly every day. While checking the system, I found that the program buffer is suffering from high swaps, which increase every day. So I asked for the parameter abap/buffersize to be changed from 300 MB to 500 MB, but still no major improvement can be seen in the system.
    There is 16 GB of RAM available; the server runs HP-UX with NetWeaver 2004s and Oracle 10.2.0.4.0 installed.
    The main problem is that running a report or creating a query takes far too long.
    Kindly help me.

    Hello Siva,
    Thanks for your reply, but I have checked ST02 and ST03 and also SM50, and they look normal.
    We have 9 dialog processes, 3 background, 2 update and 1 spool.
    No one is using the system currently, but in ST02 I can see the swaps are in red.
    Buffer                 HitRatio   % Alloc. KB  Freesp. KB   % Free Sp.   Dir. Size  FreeDirEnt   % Free Dir    Swaps    DB Accs
    Nametab (NTAB)                                                                                0
       Table definition     99,60     6.798                                                   20.000                                            29.532    153.221
       Field definition     99,82      31.562        784                 2,61           20.000      6.222          31,11          17.246     41.248
       Short NTAB           99,94     3.625      2.446                81,53          5.000        2.801          56,02             0            2.254
       Initial records      73,95        6.625        998                 16,63          5.000        690             13,80             40.069     49.528
                                                                                    0
    Program                97,66     300.000     1.074                 0,38           75.000     67.177        89,57           219.665    725.703
    CUA                    99,75         3.000        875                   36,29          1.500      1.401          93,40            55.277      2.497
    Screen                 99,80         4.297      1.365                 33,35          2.000      1.811          90,55              119         3.214
    Calendar              100,00       488            361                  75,52            200         42              21,00               0            158
    OTR                   100,00         4.096      3.313                  100,00        2.000      2.000          100,00              0
                                                                                    0
    Tables                                                                                0
       Generic Key          99,17    29.297      1.450                  5,23           5.000        350             7,00             2.219      3.085.633
       Single record        99,43    10.000      1.907                  19,41           500         344            68,80              39          467.978
                                                                                    0
    Export/import          82,75     4.096         43                      1,30            2.000        662          33,10            137.208
    Exp./ Imp. SHM         89,83     4.096        438                    13,22         2.000      1.482          74,10               0    
    SAP Memory      Curr.Use %    CurUse[KB]    MaxUse[KB]    In Mem[KB]    OnDisk[KB]    SAPCurCach      HitRatio %
    Roll area               2,22                5.832               22.856             131.072     131.072                   IDs           96,61
    Page area              1,08              2.832                24.144               65.536    196.608              Statement     79,00
    Extended memory     22,90       958.464           1.929.216          4.186.112          0                                         0,00
    Heap memory                                    0                  0                    1.473.767          0                                         0,00
    Call Stati             HitRatio %     ABAP/4 Req      ABAP Fails     DBTotCalls         AvTime[ms]      DBRowsAff.
      Select single     88,59               63.073.369        5.817.659      4.322.263             0                         57.255.710
      Select               72,68               284.080.387          0               13.718.442             0                        32.199.124
      Insert                 0,00                  151.955             5.458             166.159               0                           323.725
      Update               0,00                    378.161           97.884           395.814               0                            486.880
      Delete                 0,00                    389.398          332.619          415.562              0                             244.495
    Edited by: Srikanth Sunkara on May 12, 2011 11:50 AM

  • Performance Issue For Opening And Closing Balance In FBL1N/3N/5N

    Dear experts,
    I have a requirement to bring the opening and closing balance into FBL1N, FBL3N and FBL5N.
    For this requirement I used the BAdI FI_ITEMS_CH_DATA~CHANGE_ITEMS. Below is my code for FBL1N, and I've done the same for 3N/5N with the related BAPI.
    *   IF SY-TCODE = 'FBL1N'.
    *    LOOP AT ct_items INTO gs_items.
    *      CALL FUNCTION 'RP_CALC_DATE_IN_INTERVAL'
    *        EXPORTING
    *          date      = gs_items-budat
    *          days      = '01'
    *          months    = '00'
    *          signum    = '-'
    *          years     = '00'
    *        IMPORTING
    *          calc_date = lv_date.
    *      CALL FUNCTION 'BAPI_AP_ACC_GETKEYDATEBALANCE'
    *        EXPORTING
    *          companycode        = gs_items-bukrs
    *          vendor             = gs_items-konto
    *          keydate            = lv_date
    **   BALANCESPGLI       = ' '
    **   NOTEDITEMS         = ' '
    ** IMPORTING
    **   RETURN             =
    *        TABLES
    *          keybalance         =  lv_obal.
    *      CALL FUNCTION 'BAPI_AP_ACC_GETKEYDATEBALANCE'
    *        EXPORTING
    *          companycode        = gs_items-bukrs
    *          vendor             = gs_items-konto
    *          keydate            = gs_items-budat
    **   BALANCESPGLI       = ' '
    **   NOTEDITEMS         = ' '
    ** IMPORTING
    **   RETURN             =
    *        TABLES
    *          keybalance         = lv_cbal.
    *      READ TABLE lv_cbal INTO gs_cbal INDEX 1.
    *      gs_items-cbal = gs_cbal-lc_bal.
    *      READ TABLE lv_obal INTO gs_obal INDEX 1.
    *      gs_items-obal = gs_obal-lc_bal.
    *      MODIFY ct_items FROM gs_items TRANSPORTING obal cbal.
    *      CLEAR: gs_items,gs_obal,gs_cbal.
    *    ENDLOOP.
    *   ENDIF.
    The above code is causing the performance issue; kindly suggest a solution.
    Regards,
    Uday.

    Hi Uday,
    I am sending you the code I used to create a Z report based on FBL5N. Please check if it can be of any help.
    *& Report  ZFBL5N                                                      *
    REPORT  zfbl5n_new  .
    TABLES : bsid,knc1,lfc1.
    TYPE-POOLS: slis.
    TYPES: BEGIN OF ty_bsid,
              bukrs TYPE bsid-bukrs,
              kunnr TYPE bsid-kunnr,
              belnr TYPE bsid-belnr,
              buzei TYPE bsid-buzei,
              bldat TYPE bsid-bldat,
              blart TYPE bsid-blart,
              bschl TYPE bsid-bschl,
              shkzg TYPE bsid-shkzg,
              dmbtr TYPE bsid-dmbtr,
              augdt TYPE bsid-augdt,
              augbl TYPE bsid-augbl,
              zuonr TYPE bsid-zuonr,
              sgtxt TYPE bsid-sgtxt,
              zfbdt TYPE bsid-zfbdt,
              zterm TYPE bsid-zterm,
              zbd1t TYPE bsid-zbd1t,
              zbd2t TYPE bsid-zbd2t,
              zbd3t TYPE bsid-zbd3t,
              kkber TYPE bsid-kkber,
              bstat TYPE bsid-bstat,
              umskz TYPE bsid-umskz,
            END OF ty_bsid.
    TYPES: BEGIN OF ty_bsik,
             bukrs TYPE bsik-bukrs,
              lifnr TYPE bsik-lifnr,
              belnr TYPE bsik-belnr,
              buzei TYPE bsik-buzei,
              bldat TYPE bsik-bldat,
              blart TYPE bsik-blart,
              bschl TYPE bsik-bschl,
              shkzg TYPE bsik-shkzg,
              dmbtr TYPE bsik-dmbtr,
              augdt TYPE bsik-augdt,
              augbl TYPE bsik-augbl,
              zuonr TYPE bsik-zuonr,
              sgtxt TYPE bsik-sgtxt,
               zfbdt TYPE bsik-zfbdt,
    *         KKBER TYPE bsik-kkber,
              zterm TYPE bsik-zterm,
               zbd1t TYPE bsik-zbd1t,
              zbd2t TYPE bsik-zbd2t,
              zbd3t TYPE bsik-zbd3t,
              bstat TYPE bsid-bstat,
              umskz TYPE bsid-umskz,
            END OF ty_bsik.
    TYPES: BEGIN OF ty_final,
              belnr TYPE bsid-belnr,
    *         buzei TYPE bsak-buzei,
              bldat TYPE bsid-bldat,
              blart TYPE bsid-blart,
              chq TYPE bsid-zuonr,
              debit TYPE bsid-dmbtr,
              credit TYPE bsid-dmbtr,
              txt TYPE bsid-sgtxt,
              date TYPE bsid-zfbdt,
              kkber TYPE bsid-kkber,
              zterm TYPE bsid-zterm,
              augbl TYPE bsid-augbl,
              augdt TYPE bsid-augdt,
              flag TYPE c,
            END OF ty_final.
    TYPES : BEGIN OF gs_openbal,
              bukrs TYPE bapi3007_2-comp_code,
              kunnr TYPE bapi3007_2-customer,
              dmbtr TYPE bapi3007_2-lc_amount,
             END OF gs_openbal.
    DATA: it_bsid TYPE STANDARD TABLE OF ty_bsid,
           it_bsik TYPE STANDARD TABLE OF ty_bsik,
           it_final TYPE STANDARD TABLE OF ty_final.
    DATA: wa_bsid TYPE ty_bsid,
           wa_bsik TYPE ty_bsik,
           wa_final TYPE ty_final.
    DATA: w_days TYPE t5a4a-dlydy,
           w_month TYPE t5a4a-dlymo,
           w_year TYPE t5a4a-dlyyr,
           w_date TYPE p0001-begda,
           w_name1 TYPE kna1-name1,
           w_ort01 TYPE kna1-ort01,
           w_lifnr TYPE kna1-lifnr,
           w_dmbtr1 TYPE bsid-dmbtr,
           w_dmbtr2 TYPE bsid-dmbtr,
           w_dmbtr3 TYPE bsad-dmbtr,
           w_dmbtr4 TYPE bsad-dmbtr,
           w_opbal TYPE bsid-dmbtr,
           w_credit TYPE bsik-dmbtr,
           w_debit TYPE bsik-dmbtr,
           w_clobal TYPE bsik-dmbtr,
           w_credit1 TYPE bsik-dmbtr,
           w_debit1 TYPE bsik-dmbtr,
           w_clobal1 TYPE bsik-dmbtr.
    DATA: ld_yrper LIKE rwcoom-fiscper,
           kunnr LIKE kna1-kunnr,
           x_norm TYPE c,
           x_park,
           x_apar,
           x_merk,
           ok_code(4),
           wa_x001 LIKE x001,
           return LIKE bapireturn,
           line_count LIKE sy-loopc,
           number_of_records TYPE i,
           xindex LIKE sy-tabix,
           open LIKE knc1-um01s,
           temp(20),
           close LIKE knc1-um01s,
           gjahr LIKE bsid-gjahr,
           period LIKE bkpf-monat,
           f(1),
           v_char(2),
           closec(20),
           openc(20),
           debit LIKE bapi3007_2-lc_amount,
           credit LIKE debit.
    DATA : v_dmbtr LIKE bsid-dmbtr.
    *DATA : tot_debit LIKE t_ar-debit,
    *       tot_credit LIKE t_ar-credit.
    DATA : t_kna1 LIKE kna1 OCCURS 1  WITH HEADER LINE,
            t_knb1 LIKE knb1 OCCURS 10 WITH HEADER LINE.
    DATA ibsid LIKE bsid OCCURS 0 WITH HEADER LINE.
    DATA ibsad LIKE bsad OCCURS 0 WITH HEADER LINE.
    DATA ibsik LIKE bsik OCCURS 0 WITH HEADER LINE.
    DATA ibsak LIKE bsak OCCURS 0 WITH HEADER LINE.
    DATA : it_fieldcat_alv   TYPE slis_t_fieldcat_alv,
            wa_fieldcat_alv     TYPE slis_fieldcat_alv,
            is_layout_alv  TYPE slis_layout_alv,
            wa_layout_alv  TYPE slis_layout_alv,
            it_list_top_of_page TYPE slis_t_listheader,
            it_events TYPE slis_t_event,
            wa_events TYPE LINE OF slis_t_event.
    DATA : BEGIN OF ibukrs OCCURS 0,
               bukrs LIKE t001-bukrs,
              END OF ibukrs.
    DATA : BEGIN OF ikunnr1 OCCURS 0,
              kunnr LIKE knc1-kunnr,
             END OF ikunnr1.
    DATA : BEGIN OF ikunnr OCCURS 0,
               kunnr LIKE knc1-kunnr,
               bukrs LIKE t001-bukrs,
               lifnr LIKE lfc1-lifnr,
              END OF ikunnr.
    DATA: it_sort TYPE slis_t_sortinfo_alv,
           wa_sort TYPE slis_sortinfo_alv.
    DATA:    r_bschl TYPE RANGE OF bschl,
              wa_bschl LIKE LINE OF r_bschl.
    SELECTION-SCREEN BEGIN OF BLOCK b1 WITH FRAME TITLE text-001.
    PARAMETERS : p_kunnr TYPE bsid-kunnr OBLIGATORY,
                  p_bukrs TYPE bsid-bukrs OBLIGATORY.
    SELECT-OPTIONS: so_budat FOR bsid-budat .
    SELECTION-SCREEN END OF BLOCK b1.
    SELECTION-SCREEN BEGIN OF BLOCK b2 WITH FRAME TITLE text-002.
    PARAMETERS : p_normal AS CHECKBOX,
                  p_spl    AS CHECKBOX,
                  p_vendor AS CHECKBOX.
    SELECTION-SCREEN END OF BLOCK b2.
    PERFORM get_data.
    PERFORM process_data.
    *PERFORM calculate_openbal. " Commented by anish
    PERFORM calculate_open_bal.
    PERFORM calculate_closing_bal.
    PERFORM build_catalog_sort USING it_sort.
    PERFORM reuse_alv_events_get .
    PERFORM display_data.
    *&      Form  GET_DATA
    *       text
    *  -->  p1        text
    *  <--  p2        text
    FORM get_data .
       SELECT bukrs kunnr belnr buzei bldat blart bschl shkzg dmbtr augdt augbl zuonr sgtxt zfbdt zterm zbd1t zbd2t zbd3t kkber
         bstat umskz FROM bsid
         INTO TABLE it_bsid
         WHERE bukrs = p_bukrs
          AND kunnr = p_kunnr
          AND budat IN so_budat.
       SELECT bukrs kunnr belnr buzei bldat blart bschl shkzg dmbtr augdt augbl zuonr sgtxt zfbdt zterm zbd1t zbd2t zbd3t kkber
        bstat umskz FROM bsad
        APPENDING TABLE it_bsid
        WHERE bukrs = p_bukrs
         AND kunnr = p_kunnr
         AND budat IN so_budat.
       SELECT SINGLE name1 ort01 lifnr FROM kna1
         INTO (w_name1 , w_ort01 , w_lifnr)
         WHERE kunnr = p_kunnr.
       IF p_vendor IS NOT INITIAL.
         SELECT bukrs lifnr belnr buzei bldat blart bschl shkzg dmbtr augdt augbl zuonr sgtxt zfbdt zterm zbd1t zbd2t zbd3t
         bstat umskz   FROM bsik
         APPENDING TABLE it_bsik
         WHERE bukrs = p_bukrs
           AND lifnr = w_lifnr
           AND budat IN so_budat.
         SELECT bukrs lifnr belnr buzei bldat blart bschl shkzg dmbtr augdt augbl zuonr sgtxt zfbdt zterm zbd1t zbd2t zbd3t
         bstat umskz  FROM bsak
        APPENDING TABLE it_bsik
        WHERE bukrs = p_bukrs
          AND lifnr = w_lifnr
          AND budat IN so_budat.
       ENDIF.
       SORT it_bsid BY bschl.
       DELETE  it_bsid WHERE bschl = '04'.
       DELETE  it_bsid WHERE bschl = '07'.
       DELETE  it_bsid WHERE bschl = '17'.
       DELETE  it_bsid WHERE bschl = '34'.
       DELETE  it_bsid WHERE bschl = '27'.
       DELETE  it_bsid WHERE bschl = '37'.
       SORT it_bsik BY bschl.
       DELETE  it_bsik WHERE bschl = '04'.
       DELETE  it_bsik WHERE bschl = '07'.
       DELETE  it_bsik WHERE bschl = '17'.
       DELETE  it_bsik WHERE bschl = '34'.
       DELETE  it_bsik WHERE bschl = '27'.
       DELETE  it_bsik WHERE bschl = '37'.
    ENDFORM.                    " GET_DATA
    *&      Form  PROCESS_DATA
    *       text
    *  -->  p1        text
    *  <--  p2        text
    FORM process_data .
       DATA:okay       TYPE c VALUE space.
       w_month = '00'.
       w_year = '00'.
       SORT it_bsid BY bldat .
       LOOP AT it_bsid INTO wa_bsid.
         PERFORM check_item_ok  USING p_normal
                                      p_spl
                                      p_vendor
    *                               x_park
                                      wa_bsid
                                CHANGING okay.
         CHECK okay = 'X'.
         wa_final-belnr = wa_bsid-belnr.
         wa_final-bldat = wa_bsid-bldat.
         wa_final-blart = wa_bsid-blart.
         wa_final-txt = wa_bsid-sgtxt.
         wa_final-kkber = wa_bsid-kkber.
         wa_final-zterm = wa_bsid-zterm.
         wa_final-augbl = wa_bsid-augbl.
         wa_final-augdt = wa_bsid-augdt.
         wa_final-flag = 'C'.
         IF wa_bsid-blart = 'DZ'.
           wa_final-chq = wa_bsid-zuonr.
         ENDIF.
         IF wa_bsid-shkzg = 'S'.
           wa_final-debit = wa_bsid-dmbtr.
         ELSEIF wa_bsid-shkzg = 'H'.
           wa_final-credit = wa_bsid-dmbtr.
         ENDIF.
         w_credit = w_credit + wa_final-credit.
         w_debit = w_debit + wa_final-debit.
    ****** Net due  date
         IF wa_bsid-zbd1t IS NOT INITIAL.
           w_days = wa_bsid-zbd1t.
         ELSEIF wa_bsid-zbd2t IS NOT INITIAL.
           w_days = wa_bsid-zbd2t.
         ELSEIF wa_bsid-zbd3t IS NOT INITIAL.
           w_days = wa_bsid-zbd3t.
         ENDIF.
         IF w_days IS INITIAL.
           wa_final-date = wa_bsid-zfbdt.
         ELSE.
           CALL FUNCTION 'RP_CALC_DATE_IN_INTERVAL'
             EXPORTING
               date      = wa_bsid-zfbdt
               days      = w_days
               months    = w_month
               signum    = '+'
               years     = w_year
             IMPORTING
               calc_date = w_date.
           wa_final-date = w_date.
         ENDIF.
         APPEND wa_final TO it_final.
         CLEAR: w_days , w_date , wa_final .
       ENDLOOP.
       IF it_bsik IS NOT INITIAL.
         CLEAR: w_days , w_date.
         SORT it_bsik BY bldat.
         LOOP AT it_bsik INTO wa_bsik.
           wa_final-belnr = wa_bsik-belnr.
           wa_final-bldat = wa_bsik-bldat.
           wa_final-blart = wa_bsik-blart.
           wa_final-txt = wa_bsik-sgtxt.
    *    wa_final-kkber = wa_bsik-kkber.
           wa_final-zterm = wa_bsik-zterm.
           wa_final-augbl = wa_bsik-augbl.
           wa_final-augdt = wa_bsik-augdt.
           wa_final-flag = 'V'.
           IF wa_bsik-blart = 'DZ'.
             wa_final-chq = wa_bsik-zuonr.
           ENDIF.
           IF wa_bsik-shkzg = 'S'.
             wa_final-debit = wa_bsik-dmbtr.
           ELSEIF wa_bsik-shkzg = 'H'.
             wa_final-credit = wa_bsik-dmbtr.
           ENDIF.
           w_credit1 = w_credit1 + wa_final-credit.
           w_debit1 = w_debit1 + wa_final-debit.
    *******  Net Due date
           IF wa_bsik-zbd1t IS NOT INITIAL.
             w_days = wa_bsik-zbd1t.
           ELSEIF wa_bsik-zbd2t IS NOT INITIAL.
             w_days = wa_bsik-zbd2t.
           ELSEIF wa_bsik-zbd3t IS NOT INITIAL.
             w_days = wa_bsik-zbd3t.
           ENDIF.
           IF w_days IS INITIAL.
             wa_final-date = wa_bsik-zfbdt.
           ELSE.
             CALL FUNCTION 'RP_CALC_DATE_IN_INTERVAL'
               EXPORTING
                 date      = wa_bsik-zfbdt
                 days      = w_days
                 months    = w_month
                 signum    = '+'
                 years     = w_year
               IMPORTING
                 calc_date = w_date.
           ENDIF.
           wa_final-date = w_date.
           APPEND wa_final TO it_final.
           CLEAR: wa_final.
         ENDLOOP.
       ENDIF.
    ENDFORM.                    " PROCESS_DATA
    *&      Form  DISPLAY_DATA
    *       text
    *  -->  p1        text
    *  <--  p2        text
    FORM display_data .
       wa_fieldcat_alv-fieldname = 'BELNR'.
       wa_fieldcat_alv-tabname = 'IT_FINAL'.
       wa_fieldcat_alv-seltext_l = text-003.
       wa_fieldcat_alv-outputlen = '11'.
       APPEND wa_fieldcat_alv TO it_fieldcat_alv.
       CLEAR wa_fieldcat_alv.
       wa_fieldcat_alv-fieldname = 'BLDAT'.
       wa_fieldcat_alv-tabname = 'IT_FINAL'.
       wa_fieldcat_alv-seltext_l = text-004.
       wa_fieldcat_alv-outputlen = '13'.
       APPEND wa_fieldcat_alv TO it_fieldcat_alv.
       CLEAR wa_fieldcat_alv.
       wa_fieldcat_alv-fieldname = 'BLART'.
       wa_fieldcat_alv-tabname = 'IT_FINAL'.
       wa_fieldcat_alv-seltext_l = text-005.
       wa_fieldcat_alv-outputlen = '02'.
       APPEND wa_fieldcat_alv TO it_fieldcat_alv.
       CLEAR wa_fieldcat_alv.
       wa_fieldcat_alv-fieldname = 'CHQ'.
       wa_fieldcat_alv-tabname = 'IT_FINAL'.
       wa_fieldcat_alv-seltext_l = text-006.
       wa_fieldcat_alv-outputlen = '09'.
       APPEND wa_fieldcat_alv TO it_fieldcat_alv.
       CLEAR wa_fieldcat_alv.
       wa_fieldcat_alv-fieldname = 'DEBIT'.
       wa_fieldcat_alv-tabname = 'IT_FINAL'.
       wa_fieldcat_alv-seltext_l = text-007.
       wa_fieldcat_alv-outputlen = '15'.
       wa_fieldcat_alv-do_sum = 'X'.
       APPEND wa_fieldcat_alv TO it_fieldcat_alv.
       CLEAR wa_fieldcat_alv.
       wa_fieldcat_alv-fieldname = 'CREDIT'.
       wa_fieldcat_alv-tabname = 'IT_FINAL'.
       wa_fieldcat_alv-seltext_l = text-008.
       wa_fieldcat_alv-outputlen = '15'.
       wa_fieldcat_alv-do_sum = 'X'.
       APPEND wa_fieldcat_alv TO it_fieldcat_alv.
       CLEAR wa_fieldcat_alv.
       wa_fieldcat_alv-fieldname = 'TXT'.
       wa_fieldcat_alv-tabname = 'IT_FINAL'.
       wa_fieldcat_alv-seltext_l = text-009.
       wa_fieldcat_alv-outputlen = '50'.
       APPEND wa_fieldcat_alv TO it_fieldcat_alv.
       CLEAR wa_fieldcat_alv.
       wa_fieldcat_alv-fieldname = 'DATE'.
       wa_fieldcat_alv-tabname = 'IT_FINAL'.
       wa_fieldcat_alv-seltext_l = text-010.
       wa_fieldcat_alv-outputlen = '12'.
       APPEND wa_fieldcat_alv TO it_fieldcat_alv.
       CLEAR wa_fieldcat_alv.
       wa_fieldcat_alv-fieldname = 'KKBER'.
       wa_fieldcat_alv-tabname = 'IT_FINAL'.
       wa_fieldcat_alv-seltext_l = text-011.
       wa_fieldcat_alv-outputlen = '04'.
       APPEND wa_fieldcat_alv TO it_fieldcat_alv.
       CLEAR wa_fieldcat_alv.
       wa_fieldcat_alv-fieldname = 'ZTERM'.
       wa_fieldcat_alv-tabname = 'IT_FINAL'.
       wa_fieldcat_alv-seltext_l = text-012.
       wa_fieldcat_alv-outputlen = '13'.
       APPEND wa_fieldcat_alv TO it_fieldcat_alv.
       CLEAR wa_fieldcat_alv.
       wa_fieldcat_alv-fieldname = 'AUGBL'.
       wa_fieldcat_alv-tabname = 'IT_FINAL'.
       wa_fieldcat_alv-seltext_l = text-013.
       wa_fieldcat_alv-outputlen = '15'.
       APPEND wa_fieldcat_alv TO it_fieldcat_alv.
       CLEAR wa_fieldcat_alv.
       wa_fieldcat_alv-fieldname = 'AUGDT'.
       wa_fieldcat_alv-tabname = 'IT_FINAL'.
       wa_fieldcat_alv-seltext_l = text-014.
       wa_fieldcat_alv-outputlen = '17'.
       APPEND wa_fieldcat_alv TO it_fieldcat_alv.
       CLEAR wa_fieldcat_alv.
       wa_fieldcat_alv-fieldname = 'FLAG'.
       wa_fieldcat_alv-tabname = 'IT_FINAL'.
       wa_fieldcat_alv-tech = 'X'.
       APPEND wa_fieldcat_alv TO it_fieldcat_alv.
       CLEAR wa_fieldcat_alv.
       CALL FUNCTION 'REUSE_ALV_LIST_DISPLAY'
        EXPORTING
          i_callback_program             = sy-repid
          is_layout                      = wa_layout_alv
          it_fieldcat                    = it_fieldcat_alv
    *   IT_EXCLUDING                   =
    *   IT_SPECIAL_GROUPS              =
          it_sort                        = it_sort
          it_events                      = it_events
          i_save                            = 'A'
         TABLES
           t_outtab                       = it_final
        EXCEPTIONS
          program_error                  = 1
       OTHERS                         = 2.
       IF sy-subrc <> 0.
    * MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
    *         WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
       ENDIF.
    *  CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
    *    EXPORTING
    *      i_callback_program                = sy-repid
    *     i_callback_top_of_page            = 'TOP_OF_PAGE'
    *      is_layout                         = wa_layout_alv
    *      it_fieldcat                       = it_fieldcat_alv
    *      it_sort                           = it_sort
    ***   I_DEFAULT                         = 'X'
    **      i_save                            = 'A'
    ***   IT_EVENTS                         =
    *     TABLES
    *       t_outtab                          = it_final
    *    EXCEPTIONS
    *      program_error                     = 1
    *      OTHERS                            = 2
    *  IF sy-subrc <> 0.
    *** MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
    ***         WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
    *  ENDIF.
    ENDFORM.                    " DISPLAY_DATA
    *&      Form  TOP_OF_PAGE
    *       Header at top of page.
    FORM top_of_page.
       SKIP 1.
       WRITE: AT 35 'Account Statement from' , so_budat-low , 'to' , so_budat-high.
       SKIP 2.
       WRITE: AT /5 'CUSTOMER:' , p_kunnr.
       WRITE: AT 35 'Name:' , w_name1.
       WRITE: AT /5 'Company:' , p_bukrs.
       WRITE: AT 35 'City:' , w_ort01.
       SKIP 1.
       WRITE: AT /5 'Opening Balance as on' , so_budat-low , '   ' ,  w_opbal LEFT-JUSTIFIED.
       SKIP 2.
    ENDFORM.                    "TOP_OF_PAGE
    *&      Form  END_OF_PAGE
    *       Footer at End of page.
    FORM end_of_page.
       SKIP 2.
       IF so_budat-high IS NOT INITIAL.
         WRITE: AT 5 'Closing Balance as on' , so_budat-high , '   ' ,  w_clobal LEFT-JUSTIFIED.
       ELSE.
         WRITE: AT 5 'Closing Balance  ' , w_clobal LEFT-JUSTIFIED.
       ENDIF.
    ENDFORM.                    "end_of_page
    *&      Form  CALCULATE_OPENBAL
    *       text
    *  -->  p1        text
    *  <--  p2        text
    FORM calculate_openbal .
       DATA:v_gjahr       TYPE bsid-gjahr.
       DATA: v_period LIKE  t009b-poper,v_monat LIKE t001-periv.
       CALL FUNCTION 'FI_PERIOD_DETERMINE'
              EXPORTING
                   i_budat        = so_budat-low
                   i_bukrs        = p_bukrs
    *           I_PERIV        = ' '
    *           I_GJAHR        = 0000
    *           I_MONAT        = 00
    *           X_XMO16        = ' '
              IMPORTING
                   e_gjahr        = v_gjahr
    *            e_monat        = v_monat
                   e_poper        = v_period.
       IF sy-subrc NE 0.
       ENDIF.
       DATA: f_date LIKE sy-datum.
       CALL FUNCTION 'FIRST_DAY_IN_PERIOD_GET'
         EXPORTING
           i_gjahr  = v_gjahr
           i_monmit = 00
           i_periv  = 'V3'
           i_poper  = v_period
         IMPORTING
           e_date   = f_date.
       period = v_period - 1.
       gjahr = v_gjahr.
       DATA wa_kna1 LIKE kna1.
       CALL FUNCTION 'READ_KNA1'
         EXPORTING
           xkunnr         = p_kunnr
         IMPORTING
           xkna1          = wa_kna1
         EXCEPTIONS
           key_incomplete = 1
           not_authorized = 2
           not_found      = 3
           OTHERS         = 4.
       IF sy-subrc <> 0.
         MESSAGE w023(zwww).
         CALL SCREEN 0010.
       ENDIF.
       MOVE-CORRESPONDING wa_kna1 TO t_kna1.
       APPEND t_kna1.
       SELECT kunnr FROM kna1 INTO TABLE ikunnr1
         WHERE kunnr = p_kunnr.
       SELECT bukrs FROM t001 INTO TABLE ibukrs
        FOR ALL ENTRIES IN t_knb1
        WHERE bukrs = t_knb1-bukrs.
       LOOP AT ikunnr1.
         LOOP AT ibukrs.
           ikunnr-kunnr = ikunnr1-kunnr.
           ikunnr-bukrs = ibukrs-bukrs.
           READ TABLE t_kna1 WITH  KEY kunnr = ikunnr1-kunnr.
           ikunnr-lifnr = t_kna1-lifnr.
           APPEND ikunnr.
         ENDLOOP.
       ENDLOOP.
       DELETE ikunnr WHERE bukrs NE p_bukrs.
       LOOP AT ikunnr.
         CLEAR: knc1,lfc1,f.
         IF NOT ( ikunnr-kunnr IS INITIAL ) AND NOT ( p_vendor IS INITIAL ).
           SELECT SINGLE * FROM lfc1
                  WHERE gjahr = gjahr AND bukrs = ikunnr-bukrs
                                      AND lifnr = ikunnr-lifnr.
         ENDIF.
         SELECT SINGLE * FROM knc1
           WHERE gjahr = gjahr AND bukrs = p_bukrs
                 AND kunnr = p_kunnr.
         IF sy-subrc = 0.
           CASE period .
             WHEN 12.
               open = knc1-umsav +
               knc1-um01s - knc1-um01h + knc1-um02s - knc1-um02h +
               knc1-um03s - knc1-um03h + knc1-um04s - knc1-um04h +
               knc1-um05s - knc1-um05h + knc1-um06s - knc1-um06h +
               knc1-um07s - knc1-um07h + knc1-um08s - knc1-um08h +
               knc1-um09s - knc1-um09h + knc1-um10s - knc1-um10h +
               knc1-um11s - knc1-um11h + knc1-um12s - knc1-um12h.
               IF NOT ( lfc1 IS INITIAL ).
                 open = open + lfc1-umsav +
                 lfc1-um01s - lfc1-um01h + lfc1-um02s - lfc1-um02h +
                 lfc1-um03s - lfc1-um03h + lfc1-um04s - lfc1-um04h +
                 lfc1-um05s - lfc1-um05h + lfc1-um06s - lfc1-um06h +
                 lfc1-um07s - lfc1-um07h + lfc1-um08s - lfc1-um08h +
                 lfc1-um09s - lfc1-um09h + lfc1-um10s - lfc1-um10h +
                 lfc1-um11s - lfc1-um11h + lfc1-um12s - lfc1-um12h.
               ENDIF.
             WHEN 11.
               open = knc1-umsav +
               knc1-um01s - knc1-um01h + knc1-um02s - knc1-um02h +
               knc1-um03s - knc1-um03h + knc1-um04s - knc1-um04h +
               knc1-um05s - knc1-um05h + knc1-um06s - knc1-um06h +
               knc1-um07s - knc1-um07h + knc1-um08s - knc1-um08h +
               knc1-um09s - knc1-um09h + knc1-um10s - knc1-um10h +
               knc1-um11s - knc1-um11h.
               IF NOT ( lfc1 IS INITIAL ) .
                 open = open + lfc1-umsav +
                 lfc1-um01s - lfc1-um01h + lfc1-um02s - lfc1-um02h +
                 lfc1-um03s - lfc1-um03h + lfc1-um04s - lfc1-um04h +
                 lfc1-um05s - lfc1-um05h + lfc1-um06s - lfc1-um06h +
                 lfc1-um07s - lfc1-um07h + lfc1-um08s - lfc1-um08h +
                 lfc1-um09s - lfc1-um09h + lfc1-um10s - lfc1-um10h +
                 lfc1-um11s - lfc1-um11h.
               ENDIF.
             WHEN 10.
               open = knc1-umsav +
               knc1-um01s - knc1-um01h + knc1-um02s - knc1-um02h +
               knc1-um03s - knc1-um03h + knc1-um04s - knc1-um04h +
               knc1-um05s - knc1-um05h + knc1-um06s - knc1-um06h +
               knc1-um07s - knc1-um07h + knc1-um08s - knc1-um08h +
               knc1-um09s - knc1-um09h + knc1-um10s - knc1-um10h .
               IF NOT ( lfc1 IS INITIAL ) .
                 open = open + lfc1-umsav +
                 lfc1-um01s - lfc1-um01h + lfc1-um02s - lfc1-um02h +
                 lfc1-um03s - lfc1-um03h + lfc1-um04s - lfc1-um04h +
                 lfc1-um05s - lfc1-um05h + lfc1-um06s - lfc1-um06h +
                 lfc1-um07s - lfc1-um07h + lfc1-um08s - lfc1-um08h +
                 lfc1-um09s - lfc1-um09h + lfc1-um10s - lfc1-um10h.
               ENDIF.
             WHEN 9.
               open = knc1-umsav +
               knc1-um01s - knc1-um01h + knc1-um02s - knc1-um02h +
               knc1-um03s - knc1-um03h + knc1-um04s - knc1-um04h +
               knc1-um05s - knc1-um05h + knc1-um06s - knc1-um06h +
               knc1-um07s - knc1-um07h + knc1-um08s - knc1-um08h +
               knc1-um09s - knc1-um09h .
               IF NOT ( lfc1 IS INITIAL ) .
                 open = open + lfc1-umsav +
                 lfc1-um01s - lfc1-um01h + lfc1-um02s - lfc1-um02h +
                 lfc1-um03s - lfc1-um03h + lfc1-um04s - lfc1-um04h +
                 lfc1-um05s - lfc1-um05h + lfc1-um06s - lfc1-um06h +
                 lfc1-um07s - lfc1-um07h + lfc1-um08s - lfc1-um08h +
                 lfc1-um09s - lfc1-um09h.
               ENDIF.
             WHEN 8.
               open = knc1-umsav +
               knc1-um01s - knc1-um01h + knc1-um02s - knc1-um02h +
               knc1-um03s - knc1-um03h + knc1-um04s - knc1-um04h +
               knc1-um05s - knc1-um05h + knc1-um06s - knc1-um06h +
               knc1-um07s - knc1-um07h + knc1-um08s - knc1-um08h.
               IF NOT ( lfc1 IS INITIAL ) .
                 open = open + lfc1-umsav +
                 lfc1-um01s - lfc1-um01h + lfc1-um02s - lfc1-um02h +
                 lfc1-um03s - lfc1-um03h + lfc1-um04s - lfc1-um04h +
                 lfc1-um05s - lfc1-um05h + lfc1-um06s - lfc1-um06h +
                 lfc1-um07s - lfc1-um07h + lfc1-um08s - lfc1-um08h .
               ENDIF.
             WHEN 7.
               open = knc1-umsav +
               knc1-um01s - knc1-um01h + knc1-um02s - knc1-um02h +
               knc1-um03s - knc1-um03h + knc1-um04s - knc1-um04h +
               knc1-um05s - knc1-um05h + knc1-um06s - knc1-um06h +
               knc1-um07s - knc1-um07h .
               IF NOT ( lfc1 IS INITIAL ) .
                 open = open + lfc1-umsav +
                 lfc1-um01s - lfc1-um01h + lfc1-um02s - lfc1-um02h +
                 lfc1-um03s - lfc1-um03h + lfc1-um04s - lfc1-um04h +
                 lfc1-um05s - lfc1-um

  • Performance Issue in a report

    Hi,
         Can you please help me modify this code? I have a performance issue with the following code.
    Here t_po is the final internal table.
        select  ekko~bukrs ekko~ebeln ekko~lifnr ekko~bsart    " EKKO
                ekko~ernam ekko~aedat ekko~memory               " EKKO
                ekpo~ebelp ekpo~idnlf ekpo~txz01 ekpo~loekz     " EKPO
                ekpo~effwr ekpo~menge                           " EKPO
                ekkn~sakto ekkn~ps_psp_pnr                      " EKKN
                ekkn~zekkn ekkn~menge                           " EKKN
                into (t_po-bukrs,t_po-ebeln,t_po-lifnr,t_po-bsart,
                      t_po-ernam,t_po-aedat,t_po-memory,
                      t_po-ebelp,t_po-idnlf,t_po-txz01,t_po-loekz,
                      t_po-effwr,t_po-menge,
                      t_po-sakto,t_po-ps_psp_pnr,
                      t_po-zekkn,t_po-menge1)
                from ( ekko inner join ekpo
             on   ekko~ebeln eq ekpo~ebeln )
                     inner join ekkn
             on   ekko~ebeln eq ekkn~ebeln
             and  ekpo~ebelp eq ekkn~ebelp
                for all entries in t_pspnr          
                where ekko~bukrs      in s_bukrs  and
                      ekko~lifnr      in s_lifnr  and
                      ekko~bsart      in s_bsart  and
                      ekko~aedat      in s_aedat  and
                      ekko~ernam      in s_ernam  and
                      ekpo~idnlf      in s_idnlf  and
                      ps_psp_pnr      =  t_pspnr-pspnr.   
          append t_po.
          clear  t_po.
        endselect.
    Another code:
      sort t_po by bukrs idnlf ebeln ebelp.
      loop at t_po.
        select single post1 psphi into (t_po-post1,t_po-psphi)             "Performance issue
                      from prps
                      where pspnr = t_po-ps_psp_pnr.
        select single pspid into t_po-pspid                                         ""Performance issue
                      from proj
                      where pspnr = t_po-psphi.
        if t_po-pspid in s_pspid.
        " do nothing
        else.
          delete t_po index sy-tabix.
          continue.
        endif.
    " Get invoiced amount for a PO line item
        select dmbtr shkzg into (ekbe-dmbtr,ekbe-shkzg)              ""Performance issue
                  from ekbe
                  where ebeln = t_po-ebeln and
                        ebelp = t_po-ebelp and
                        vgabe = '2'.
          if ekbe-shkzg = 'H'.
            ekbe-dmbtr = ekbe-dmbtr * -1.
          endif.
          t_po-invamt = t_po-invamt + ekbe-dmbtr.
        endselect.                                                                                ""Performance issue
        if not t_po-menge eq 0.
          t_po-polinamt = t_po-effwr * ( t_po-menge1 / t_po-menge ).
        endif.
        t_po-amtopen  = t_po-polinamt - t_po-invamt.
        modify t_po index sy-tabix.
        clear: t_po,prps,ekbe,proj.
      endloop.

    Hi,
    Instead of selecting fields and moving them into the internal table one by one, you can select them directly into an internal table that contains only the required fields.
    Eg.
    TYPES: begin of itab,
                 vorna type vorna,
                nachn type nachn,
                end of itab.
    data: wt_itab type table of itab.
    select vorna nachn into table wt_itab from pa0002 where cond.
    This increases performance because the system does not have to match each of the individual fields listed in the INTO clause.
    Regards,
    Rani.

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What are the query performance issues we need to take care of? Please explain and let me know the transaction codes; this is urgent.
    What are the data loading performance issues we need to take care of? Please explain and let me know the transaction codes.
    Will reward full points.
    Regards,
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPacket uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using ABAP Dictionary to improve better selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes and regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling, and you can drop indexes in the Manage InfoCube screen in the Administrator Workbench (see the sketch after this list).
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
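    For items 9 and 13 above, the database-level equivalent on an Oracle-based BW system looks roughly like the sketch below. The object names are purely illustrative, and in practice these steps are carried out through the ABAP Dictionary and the InfoPackage/process-chain settings rather than with hand-written SQL:
        -- Item 9: a secondary index on the selection fields of a DataSource table
        -- (illustrative table/column names; in BW this is defined via the ABAP Dictionary)
        CREATE INDEX zds_extract_sel_ix ON zds_extract_tab (sel_field1, sel_field2);

        -- Item 13: mark the InfoCube fact-table index unusable before a mass load
        -- and rebuild it afterwards (BW automates this via InfoPackage scheduling)
        ALTER INDEX fact_tab_ix1 UNUSABLE;
        -- ... perform the data load ...
        ALTER INDEX fact_tab_ix1 REBUILD;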
    Hope it Helps
    Chetan
    @CP..

  • Performance issue with pl/sql code

    Hi Oracle Gurus,
    I need your recommendations for a performance issue that I am facing in a production environment. There is a PL/SQL procedure whose elapsed time varies widely between executions: 30 minutes, 40 minutes, 65 minutes, 3 minutes, 3 seconds.
    The expected elapsed time is a maximum of 3 minutes (but sometimes it took only 3 seconds!).
    The output of all executions is the same: deletion and insertion of about 12K records into a table.
    Here are the auto-trace details for two different scenarios.
    Slow execution - 33.65 minutes
    Stat Name                                Statement   Per Execution % Snap
    Elapsed Time (ms)                         1,712,343    1,712,342.6    41.4
    CPU Time (ms)                             1,679,689    1,679,688.6    44.7
    Executions                                        1            N/A     N/A
    Buffer Gets                              ##########  167,257,973.0    86.9
    Disk Reads                                    1,284        1,284.0     0.4
    Parse Calls                                       1            1.0     0.0
    User I/O Wait Time (ms)                       4,264            N/A     N/A
    Cluster Wait Time (ms)                        3,468            N/A     N/A
    Application Wait Time (ms)                        0            N/A     N/A
    Concurrency Wait Time (ms)                        6            N/A     N/A
    Invalidations                                     0            N/A     N/A
    Version Count                                     4            N/A     N/A
    Sharable Mem(KB)                                 85            N/A     N/A
              -------------------------------------------------------------
    Fast execution - 5 seconds
    Stat Name                                Statement   Per Execution % Snap
    Elapsed Time (ms)                            41,550       41,550.3     0.7
    CPU Time (ms)                                40,776       40,776.3     1.0
    Executions                                        1            N/A     N/A
    Buffer Gets                               2,995,677    2,995,677.0     4.2
    Disk Reads                                       22           22.0     0.0
    Parse Calls                                       1            1.0     0.0
    User I/O Wait Time (ms)                         162            N/A     N/A
    Cluster Wait Time (ms)                          621            N/A     N/A
    Application Wait Time (ms)                        0            N/A     N/A
    Concurrency Wait Time (ms)                       55            N/A     N/A
    Invalidations                                     0            N/A     N/A
    Version Count                                     4            N/A     N/A
    Sharable Mem(KB)                                 85            N/A     N/A
              -------------------------------------------------------------
    For security reasons, I cannot share the actual code. It is a report-generating procedure that deletes data and reloads it into a table using INSERT INTO ... SELECT statements.
    Delete from table;
    cursor X to get the master data (98 records)
    For each X loop
        insert into tableA select * from tables where a = X.a and b = X.b and c = X.c ..... ;
        -- 12K records inserted on average
        insert into tableB select * from tables where a = X.a and b = X.b and c = X.c ..... ;
        -- 12K records inserted on average
    end loop;
    1. The select query is complex, with bind variables (the explain plan varies for different bind values).
    2. I have checked the tablespace of the tables involved; it is 82% used. The DBA confirmed that this is not the reason.
    3. Disk reads are high during the long executions.
    4. During the long runs, I can see a 'db file sequential read' wait event on an index object. This index is on the table where the data is inserted.
    All I need to find out is why this code takes 3 seconds and 60 minutes on the same day, on consecutive executions.
    Is there any other approach to find the root cause of this behaviour and to fix it? Kindly advise.
    Thanks in advance for your help.
    Regards,
    Hari
    Edited by: BluShadow on 26-Sep-2012 08:24
    edited to add {noformat}{noformat} tags. You've been a member long enough to know to do this yourself... so please do so in future. ({message:id=9360002})

    Hariharan ST wrote:
    Hi Oracle Gurus,
    I need your recommendations for a performance issue that I am facing in a production environment. There is a PL/SQL procedure whose elapsed time varies widely between executions.
    Please re-edit your post and add some code tags around the trace information. This would improve readability greatly and will help us to help you.
    example
    {<b></b>code}
    select * from dual;
    {<b></b>code}
    Based upon your description I can imagine two things.
    a) The execution plan for the select query does change frequently.
    A typical reason is out-of-date optimizer statistics.
    b) Some locking/wait conflict, for example on a unique key (UK) index.
    Are there any other operations going on while it is slow? If another session inserts a value, then your session will wait if it tries to insert the same (PK/UK) value.
    Those wait events can be recognized using standard tools like Oracle SQL Developer or Enterprise Manager while the query is slow.
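    For instance, queries along these lines show whether the plan hash value changes between runs, what the session is waiting on while the procedure runs, and how to regather statistics for point a (a rough sketch only; the sql_id, the session id and the table name TABLEA are placeholders based on your pseudo code):
        -- has the execution plan changed between runs of the insert/select?
        SELECT sql_id, child_number, plan_hash_value, executions,
               ROUND(elapsed_time / 1e6) AS elapsed_sec, buffer_gets
          FROM v$sql
         WHERE sql_id = '&sql_id';

        -- what is the session waiting on while the procedure is slow?
        SELECT sid, sql_id, event, wait_class, blocking_session, seconds_in_wait
          FROM v$session
         WHERE sid = &slow_session_sid;

        -- refresh possibly stale statistics (point a)
        BEGIN
          DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'TABLEA', cascade => TRUE);
        END;
        /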
    Also go through the links in the FAQ. They tell you how to get better information for making a tuning request.
    SQL and PL/SQL FAQ
    Edited by: Sven W. on Sep 25, 2012 6:41 PM

  • Performance issue of frequently data inserted tables

    Hi all,
    I have a table named raw_trap_store with columns trap_id (NUMBER, PK), Source_IP (VARCHAR2), OID (VARCHAR2), Message (CLOB) and received_time (DATE).
    The table is partitioned into 24 partitions, with received_time as the partitioning column (each partition holds one hour's data).
    It is inserted into at 40-50 records/sec on average, around 2.8-3 million records per day. Data is retained for 2 days.
    No updates are performed on this table.
    Performance issue:
    I need a report that selects records from this table based on certain values of Source_IP (filter condition on the source_ip column).
    I need a report that selects records from this table based on certain values of OID (filter condition on the OID column).
    But if I create indexes on the Source_IP and OID columns, the inserts become slow. (I created normal indexes, not partitioned indexes.)
    Please help me to address the above issue.

    Given the nature of your report (based on Source_IP and OID) and the nature of your table partitioning (range partitioned by received_time), you have already made a good decision to create these particular indexes as normal (b-tree, global) indexes rather than locally partitioned ones. If you had partitioned them locally, your reports would not eliminate partitions (because they do not include the partition key in their WHERE clause), and hence your index range scans would scan all 24 partitions, generating a lot of logical I/O.
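    As a rough illustration (the index names here are hypothetical; the table and column names are taken from the post), the normal, global indexes would simply be:
        CREATE INDEX rts_source_ip_ix ON raw_trap_store (source_ip);
        CREATE INDEX rts_oid_ix       ON raw_trap_store (oid);
        -- A LOCAL alternative, partitioned like the table, would force the reports
        -- (which do not filter on received_time) to probe all 24 index partitions:
        -- CREATE INDEX rts_source_ip_ix ON raw_trap_store (source_ip) LOCAL;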
    That said, remember that generally we insert once and select many times; you have to balance that. If you are sure that it is the creation of your two indexes that has degraded the insert performance, then you may set them to an unusable state before the insert and rebuild them afterwards. But this is good advice only if the volume of data to be inserted is much bigger than the volume of data already in the table.
    And if you are not deleting from the table, and the table has no triggers or integrity constraints (such as FK constraints), then you can opt for a direct-path insert using the /*+ append */ hint, as sketched below.
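    Putting the last two points together, a hedged sketch of such a load (the index names match the ones above, the staging source is hypothetical, and skip_unusable_indexes is assumed to be TRUE, its default since 10g):
        ALTER INDEX rts_source_ip_ix UNUSABLE;
        ALTER INDEX rts_oid_ix       UNUSABLE;

        -- direct-path insert; appropriate only because the table has no triggers or FK constraints
        INSERT /*+ append */ INTO raw_trap_store (trap_id, source_ip, oid, message, received_time)
        SELECT trap_id, source_ip, oid, message, received_time
          FROM stage_raw_traps;   -- hypothetical staging source
        COMMIT;                   -- the session cannot query the table again until it commits

        ALTER INDEX rts_source_ip_ix REBUILD;
        ALTER INDEX rts_oid_ix       REBUILD;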
    Best regards
    Mohamed Houri
    <mod. action: removed unnecessary blog ref.>
    Message was edited by: Nicolas.Gasparotto
