OWB job taking too much time to execute

While creating a job in OWB, I am using three tables, a joiner, and an aggregator, which are all joined through another joiner to load into the final table. The output is correct, but the generated SQL query is very complex, with many sub-queries, so it takes a long time to execute. Please help me reduce the cost.
-KC

It depends on what kind of code it generates at each stage. The first step would be to collect statistics for all the tables used and check the generated SQL using EXPLAIN PLAN. See which sub-query or inline view contributes the most cost.
Generate the SQL at various stages and see if you can achieve the same result with a different operator.
The other option would be passing hints to the selected tables.
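For example, that first diagnostic step could look like this (the schema and table names here are placeholders for your own mapping's objects, not anything OWB generates):

-- refresh optimizer statistics for one of the source tables
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'WH_OWNER', tabname => 'SRC_TABLE_1', cascade => TRUE);

-- explain the generated statement and inspect the cost of each sub-query / inline view
EXPLAIN PLAN FOR
SELECT COUNT(*) FROM WH_OWNER.SRC_TABLE_1;   -- replace with the SQL generated by OWB

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);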
- K

Similar Messages

  • Archive Delete job taking too much time - STXH Sequential Read

    Hello,
    We have been running archive sessions in our production system for the last couple of months. We use SARA, selecting the appropriate variants for the WRITE, DELETE and STORAGE options.
    Currently we use the archive object FI_DOCUMNT. The write job finishes in its normal time (5 hrs, based on the selection criteria), and for the last 3 months the delete job has always completed in 1 to 2 hrs.
    But in the last few days the delete job is taking too much time to complete (around 8-10 hrs). When I monitored the system, I found that the sequential read on table STXH is taking too much time, and it seems this is the cause.
    Could you please provide a solution so that the job runs as fast as it did before.
    Thanks for your time
    Shyl

    Hi Juan,
    After the statistics run, the performance is quite good. Now the job finishes as expected.
    Thanks. Problem solved
    Shyl

  • Comma Separated Value Taking too Much Time to Execute

    Hi,
    select ES_DIAGNOSIS_CODE DC from ecg_study WHERE PS_PROTOCOL_ID LIKE 'H6L-MC-LFAN'
    The above query returns a comma-separated value.
    I am using the query below to split the comma-separated value, but it is taking a lot of time to return the data.
    select DC from (
      with t as ( select ES_DIAGNOSIS_CODE DC from ecg_study where PS_PROTOCOL_ID like 'H6L-MC-LFAN' )
      select regexp_substr(DC, '[^,]+', 1, level) DC
      from t
      connect by level <= length(regexp_replace(DC, '[^,]*')) + 1
    )
    Please suggest an alternative way to split this comma-separated value.
    Thanks
    Sudhir

    Nikolay Savvinov wrote:
    Hi BluShadow,
    I know that this function is fast with varchar2 strings from several years of using it. With CLOBs one may need something faster, but the OP didn't mention CLOBs.
    Best regards,
    Nikolay

    Just because you perceive it to be fast doesn't mean it's faster than doing it in SQL alone.
    For starters, you are context switching from the SQL engine to PL/SQL to call it.
    Then in your code you are doing this...
    select substr(v_str, v_last_break+1, decode(v_nxt_break, 0, v_length, v_nxt_break-v_last_break-1)) into v_result from dual;
    ...which is context switching back from the PL/SQL engine to the SQL engine for each entry in the string.
    Why people do that I don't know... when PL/SQL alone could do it without a context switch, e.g.
    v_result := substr(v_str, v_last_break+1, case when v_nxt_break = 0 then v_length else v_nxt_break-v_last_break-1 end);
    So, if you still think it's faster than pure SQL (which is what the OP is using), please go ahead and prove it to us.
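    For reference, here is a self-contained version of the pure-SQL split being discussed, runnable as-is because the sample string is hard-coded (REGEXP_COUNT needs 11g; on 10g use the LENGTH(REGEXP_REPLACE(...)) idiom from the OP's query):

    with t as (
      select 'C10,D25,E40' DC from dual  -- hypothetical comma-separated value
    )
    select regexp_substr(DC, '[^,]+', 1, level) DC
    from t
    connect by level <= regexp_count(DC, ',') + 1;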

  • My ALV Report is taking too much time to execute

    Hi Friends,
    My ALV report is taking a long time to execute (more than 1.5 hrs). Please suggest changes to improve the performance. It's very urgent; please respond as soon as possible.
    Thanks & Regards,
    Sunil Maurya
    Report is as follows :
    REPORT  YSEG_PROFIT.
    TABLES : ZSEGMENT, coep.
    TYPE-POOLS: slis.
    DATA : BEGIN OF I_COEP OCCURS 0,
             BELNR  LIKE COEP-BELNR,
             BUZEI  LIKE COEP-BUZEI,
             PERIO  LIKE COEP-PERIO,
             WOGBTR LIKE COEP-WOGBTR,
             OBJNR  LIKE COEP-OBJNR,
             KSTAR  LIKE COEP-KSTAR,
             PAOBJNR LIKE COEP-PAOBJNR,
             KVGR5  LIKE ZSEGMENT-KVGR5,
             KAUFN  LIKE CE4KBL1_ACCT-KAUFN,
           END OF I_COEP.
    DATA : BEGIN OF I_SECTOR OCCURS 0,
            KVGR5  LIKE ZSEGMENT-KVGR5,
          END OF I_SECTOR.
    DATA : BEGIN OF I_AUFK OCCURS 0,
            OBJNR LIKE AUFK-OBJNR,
            PSPEL LIKE AUFK-PSPEL,
            KDAUF LIKE AUFK-KDAUF,
            KDPOS LIKE AUFK-KDPOS,
           END OF I_AUFK.
    DATA : BEGIN OF I_VBAKP OCCURS 0,
            OBJNR LIKE VBAP-OBJNR,
            KVGR5 LIKE VBAK-KVGR5,
          END OF I_VBAKP.
    DATA : BEGIN OF I_PRPS OCCURS 0,
            OBJNR LIKE PRPS-OBJNR,
            PSPHI LIKE PRPS-PSPHI,
            ASTNR LIKE PROJ-ASTNR,
          END OF I_PRPS.
    DATA : BEGIN OF I_OUTPUT OCCURS 0,
             KSTAR LIKE COEP-KSTAR,
             MCTXT LIKE CSKU-MCTXT,
             S01   LIKE COEP-WOGBTR,
             S02   LIKE COEP-WOGBTR,
             S03   LIKE COEP-WOGBTR,
             S04   LIKE COEP-WOGBTR,
             S05   LIKE COEP-WOGBTR,
             S06   LIKE COEP-WOGBTR,
             S07   LIKE COEP-WOGBTR,
             S08   LIKE COEP-WOGBTR,
             S09   LIKE COEP-WOGBTR,
             OTH   like COEP-WOGBTR,
             TOTAL LIKE COEP-WOGBTR,
          END OF I_OUTPUT.
    DATA : BEGIN OF I_AFVC OCCURS 0,
            OBJNR LIKE AFVC-OBJNR,
            PROJN LIKE AFVC-PROJN,
            PROJ LIKE   PROJ-PSPID,
            PSPNR LIKE PROJ-PSPNR,
           END OF I_AFVC.
    DATA : BEGIN OF I_PROJ OCCURS 0,
            PSPNR LIKE PROJ-PSPNR,
            ASTNR LIKE PROJ-ASTNR,
           END OF I_PROJ.
    DATA : I_NP LIKE STANDARD TABLE OF I_COEP WITH HEADER LINE.
    DATA : I_NV LIKE STANDARD TABLE OF I_COEP WITH HEADER LINE.
    DATA : I_DETAIL LIKE STANDARD TABLE OF I_COEP WITH HEADER LINE.
    DATA : I_WB LIKE STANDARD TABLE OF I_COEP WITH HEADER LINE.
    DATA : I_PR LIKE STANDARD TABLE OF I_COEP WITH HEADER LINE.
    data : t_fieldcat_sum_rep TYPE slis_t_fieldcat_alv.
    data : t_fieldcat_det_rep TYPE slis_t_fieldcat_alv.
    DATA: k_fieldcat    TYPE slis_fieldcat_alv.
     *Declaration by Sunil Maurya for sorting
    DATA : GT_SORT TYPE SLIS_T_SORTINFO_ALV,
           GS_SORT TYPE SLIS_SORTINFO_ALV.
    DATA : GT_SORT1 TYPE SLIS_T_SORTINFO_ALV,
           GS_SORT1 TYPE SLIS_SORTINFO_ALV.
    *data : it_sortcat type slis_t_sortinfo_alv.
    *DATA : k_sortcat  like line of it_sortcat.
    **data : wa_sort like line of it_sortcat.
     *Declaration by Sunil Maurya for sorting
    constants :  c_user_command TYPE char30 VALUE 'USER_COMMAND'.
    *Selection screen
    SELECTION-SCREEN BEGIN OF BLOCK A WITH FRAME TITLE TEXT-001.
    PARAMETER : P_PERIO1   LIKE COEP-PERIO obligatory,
               P_PERIO2   LIKE COEP-PERIO MODIF ID D1,
                P_GJAHR    LIKE COEP-GJAHR obligatory.
    select-options :  P_KSTAR  for COEP-KSTAR,
                      P_GSBER  FOR COEP-GSBER.
    SELECT-OPTIONS : S_KVGR5 FOR ZSEGMENT-KVGR5 MODIF ID D1.
    SELECTION-SCREEN END OF BLOCK A.
    INITIALIZATION.
       S_KVGR5-OPTION = 'BT' .
       S_KVGR5-LOW = 'S01'.
       S_KVGR5-HIGH = 'S09'.
       APPEND S_KVGR5.
    AT SELECTION-SCREEN OUTPUT.
        LOOP AT SCREEN.
          IF SCREEN-GROUP1 = 'D1'.
             SCREEN-INPUT = '0'.
             MODIFY SCREEN.
          ENDIF.
        ENDLOOP.
    START-OF-SELECTION.
      PERFORM GET_SECTORS.
      PERFORM GET_DATA_COEP.
      PERFORM VALIDATE_SECTOR.
      PERFORM GROUP_OUTPUT.
     *PERFORM BUILD_SORTCAT. " Inserted by Sunil Maurya for sorting
      PERFORM BUILD_CATLOG.
      PERFORM DISPLAY_OUTPUT.
     *&      Form  GET_DATA_COEP
     *       text
     *  -->  p1        text
     *  <--  p2        text
    FORM GET_DATA_COEP .
    SELECT BELNR BUZEI PERIO WOGBTR OBJNR KSTAR PAOBJNR
    FROM COEP INTO TABLE I_COEP WHERE
            KOKRS = 'KBL' AND
     *       PERIO >= P_PERIO1 AND
     *       PERIO <= P_PERIO2 AND
            PERIO = P_PERIO1 AND
            GJAHR = P_GJAHR AND
            KSTAR NE '' AND
            OBJNR NE '' AND
            KSTAR in P_KSTAR AND
             GSBER IN P_GSBER. "AND
     *       BELNR = '0103991827'.
      SORT I_COEP BY OBJNR.
      DELETE I_COEP WHERE OBJNR+0(2) <> 'VB' AND
                          OBJNR+0(2) <> 'PR' AND
                          OBJNR+0(2) <> 'NV' AND
                          OBJNR+0(2) <> 'NP' AND
                          OBJNR+0(2) <> 'AO'.
      SORT I_COEP BY KSTAR.
      DELETE I_COEP WHERE KSTAR+0(5) <> '00003' AND
                          KSTAR+0(5) <> '00004'.
      LOOP AT I_COEP WHERE OBJNR+0(2) = 'NP'.
        MOVE I_COEP TO I_NP.
        APPEND I_NP.
        CLEAR : I_NP, I_COEP.
      ENDLOOP.
      LOOP AT I_COEP WHERE OBJNR+0(2) = 'NV'.
        MOVE I_COEP TO I_NV.
        APPEND I_NV.
        CLEAR : I_NV, I_COEP.
      ENDLOOP.
      LOOP AT I_COEP WHERE OBJNR+0(2) = 'VB' .
        MOVE I_COEP TO I_WB.
        APPEND I_WB.
        CLEAR : I_WB, I_COEP.
      ENDLOOP.
      LOOP AT I_COEP WHERE OBJNR+0(2) = 'PR' .
        MOVE I_COEP TO I_PR.
        APPEND I_PR.
        CLEAR : I_PR, I_COEP.
      ENDLOOP.
    *Inserted by Sunil Maurya for PAOBJNR = "AO....."
    Data : ind type sy-tabix.
     loop at i_coep where OBJNR+0(2) = 'AO'.
       ind = sy-tabix.
       select single KAUFN into i_coep-KAUFN from CE4KBL1_ACCT
         where PAOBJNR = i_coep-PAOBJNR.
       select single KVGR5 into i_coep-kvgr5 from vbak
         where vbeln = i_coep-kaufn.
       modify i_coep index ind.
       clear i_coep.
     endloop.
    *Inserted by Sunil Maurya for PAOBJNR = "AO....."
    *LOOP AT I_COEP WHERE OBJNR+0(2) = 'AO' .
     *  MOVE I_COEP TO I_AO.
     *  APPEND I_AO.
     *  CLEAR : I_AO, I_COEP.
    *ENDLOOP.
    ENDFORM.                    " GET_DATA_COEP
     *&      Form  GET_SECTORS
     *       text
     *  -->  p1        text
     *  <--  p2        text
    FORM GET_SECTORS .
      DATA : L_FR TYPE I,
             L_TO TYPE I.
      DATA :  L_CH1(1).
      LOOP AT S_KVGR5.
        IF S_KVGR5-OPTION = 'EQ'.
          I_SECTOR-KVGR5 = S_KVGR5-LOW.
          APPEND I_SECTOR.
          CLEAR : I_SECTOR.
          CONCATENATE '5' S_KVGR5-LOW+1(2) INTO I_SECTOR-KVGR5.
          APPEND I_SECTOR.
          CLEAR : I_SECTOR.
        ENDIF.
        IF S_KVGR5-OPTION = 'BT'.
          L_FR = S_KVGR5-LOW+1(2).
          L_TO = S_KVGR5-HIGH+1(2).
          WHILE L_FR <= L_TO.
            L_CH1 = L_FR.
            CONCATENATE 'S0' L_CH1 INTO I_SECTOR-KVGR5.
            APPEND I_SECTOR.
            CONCATENATE '50' L_CH1 INTO I_SECTOR-KVGR5.
            APPEND I_SECTOR.
            CLEAR : I_SECTOR, L_CH1.
            L_FR = L_FR + 1.
          ENDWHILE.
        ENDIF.
      ENDLOOP.
    ENDFORM.                    " GET_SECTORS
     *&      Form  VALIDATE_SECTOR
     *       text
     *  -->  p1        text
     *  <--  p2        text
    FORM VALIDATE_SECTOR .
    *get data from AUFK for NP & NV type intab
    IF I_NP[] IS NOT INITIAL.
      SELECT OBJNR PSPEL KDAUF KDPOS FROM AUFK
      INTO TABLE I_AUFK
      FOR ALL ENTRIES IN I_NP
      WHERE OBJNR = I_NP-OBJNR.
    *Push this data in I_WB where order no exist in AUFK
      LOOP AT I_AUFK WHERE KDAUF NE ''.
        I_WB-OBJNR = I_AUFK-OBJNR.
        APPEND I_WB.
        CLEAR : I_AUFK, I_WB.
      ENDLOOP.
    *Push this data in I_PR where order no exist in AUFK
      LOOP AT I_AUFK WHERE PSPEL NE ''.
        I_PR-OBJNR = I_AUFK-OBJNR.
        APPEND I_PR.
        CLEAR : I_AUFK, I_PR.
      ENDLOOP.
    ENDIF.
       SELECT B~OBJNR A~KVGR5 FROM VBAK AS A INNER JOIN VBAP AS B
       ON A~VBELN = B~VBELN
       INTO TABLE I_VBAKP
       FOR ALL ENTRIES IN I_WB
       WHERE B~VBELN = I_WB-OBJNR+2(10).
      SORT I_VBAKP BY OBJNR.
      SORT I_COEP BY OBJNR.
      LOOP AT I_WB.
        READ TABLE I_VBAKP WITH KEY OBJNR = I_WB-OBJNR.
        IF SY-SUBRC = 0.
          READ TABLE I_SECTOR WITH KEY KVGR5 = I_VBAKP-KVGR5.
          IF SY-SUBRC <> 0.
     *     DELETE I_COEP WHERE OBJNR = I_WB-OBJNR.
             loop at i_coep where objnr = i_wb-objnr.
                  i_coep-kvgr5 = 'OTH'.
                  modify i_coep.
                  clear : i_coep.
             endloop.
          ELSE.
     *    READ TABLE I_COEP WITH KEY BELNR = I_WB-BELNR BUZEI = I_WB-BUZEI.
           READ TABLE I_COEP WITH KEY OBJNR = I_WB-OBJNR.
            IF SY-SUBRC = 0.
              I_COEP-KVGR5 = I_SECTOR-KVGR5.
              MODIFY I_COEP INDEX SY-TABIX.
              LOOP AT I_COEP WHERE OBJNR = I_WB-OBJNR.
                I_COEP-KVGR5 = I_SECTOR-KVGR5.
                MODIFY I_COEP.
                CLEAR : I_COEP.
              ENDLOOP.
            ENDIF.
          ENDIF.
        ELSE.
     *    DELETE I_COEP WHERE OBJNR = I_WB-OBJNR.
             loop at i_coep where objnr = i_wb-objnr.
                  i_coep-kvgr5 = 'OTH'.
                  modify i_coep.
                  clear : i_coep.
             endloop.
        ENDIF.
        CLEAR : I_VBAKP, I_SECTOR, I_COEP.
      ENDLOOP.
    IF I_PR[] IS NOT INITIAL.
       SELECT A~OBJNR A~PSPHI B~ASTNR
       FROM PRPS AS A INNER JOIN PROJ AS B
       ON A~PSPHI = B~PSPNR
      INTO TABLE I_PRPS
      FOR ALL ENTRIES IN I_PR
      WHERE A~OBJNR = I_PR-OBJNR.
    ENDIF.
      IF I_NV[] IS NOT INITIAL.
        SELECT OBJNR PROJN FROM AFVC INTO TABLE I_AFVC
        FOR ALL ENTRIES IN I_NV
        WHERE OBJNR = I_NV-OBJNR AND
              PROJN <> ''  .
        LOOP AT I_AFVC.
          CALL FUNCTION 'CONVERSION_EXIT_ABPSP_OUTPUT'
            EXPORTING
              INPUT  = I_AFVC-PROJN
            IMPORTING
              OUTPUT = I_AFVC-PROJ.
          I_AFVC-PROJ = I_AFVC-PROJ+0(9).
          CALL FUNCTION 'CONVERSION_EXIT_KONPD_INPUT'
            EXPORTING
              INPUT  = I_AFVC-PROJ
            IMPORTING
              OUTPUT = I_AFVC-PSPNR.
          MODIFY I_AFVC.
          CLEAR : I_AFVC.
        ENDLOOP.
        SELECT PSPNR ASTNR FROM PROJ INTO TABLE I_PROJ
        FOR ALL ENTRIES IN I_AFVC
        WHERE PSPNR = I_AFVC-PSPNR.
        LOOP AT I_NV.
           I_PRPS-OBJNR = I_NV-OBJNR.
           READ TABLE I_AFVC WITH KEY OBJNR = I_NV-OBJNR.
           IF SY-SUBRC = 0.
             READ TABLE I_PROJ WITH KEY PSPNR = I_AFVC-PSPNR.
             IF SY-SUBRC = 0.
               I_PRPS-ASTNR = I_PROJ-ASTNR.
             ENDIF.
           ENDIF.
            APPEND I_PRPS.
            I_PR-OBJNR = I_NV-OBJNR.
            APPEND I_PR.
            CLEAR : I_NV, I_AFVC, I_PROJ, I_PR.
        ENDLOOP.
      ENDIF.
      SORT I_PRPS BY OBJNR.
      LOOP AT I_PR.
        READ TABLE I_PRPS WITH KEY OBJNR = I_PR-OBJNR.
        IF SY-SUBRC = 0.
          READ TABLE I_SECTOR WITH KEY KVGR5 = I_PRPS-ASTNR+5(3).
          IF SY-SUBRC <> 0.
     *     DELETE I_COEP WHERE OBJNR = I_PR-OBJNR.
             loop at i_coep where objnr = i_pr-objnr.
                  i_coep-kvgr5 = 'OTH'.
                  modify i_coep.
                  clear : i_coep.
             endloop.
          ELSE.
            READ TABLE I_COEP WITH KEY OBJNR = I_PR-OBJNR.
            IF SY-SUBRC = 0.
              CONCATENATE 'S' I_SECTOR-KVGR5+1(2) INTO I_COEP-KVGR5.
              MODIFY I_COEP INDEX SY-TABIX.
              LOOP AT I_COEP WHERE OBJNR = I_PR-OBJNR.
                CONCATENATE 'S' I_SECTOR-KVGR5+1(2) INTO I_COEP-KVGR5.
                MODIFY I_COEP.
                CLEAR : I_COEP.
              ENDLOOP.
            ENDIF.
          ENDIF.
        ELSE.
     *    DELETE I_COEP WHERE OBJNR = I_PR-OBJNR.
            loop at i_coep where objnr = i_pr-objnr.
                  i_coep-kvgr5 = 'OTH'.
                  modify i_coep.
                  clear : i_coep.
             endloop.
        ENDIF.
        CLEAR : I_PR, I_PRPS, I_SECTOR, I_COEP.
      ENDLOOP.
    ENDFORM.                    " VALIDATE_SECTOR
     *&      Form  GROUP_OUTPUT
     *       text
     *  -->  p1        text
     *  <--  p2        text
    FORM GROUP_OUTPUT .
      LOOP AT I_COEP.
        I_OUTPUT-KSTAR = I_COEP-KSTAR.
        IF I_COEP-KVGR5 = 'S01'.
          I_OUTPUT-S01 = I_COEP-WOGBTR.
        ENDIF.
        IF I_COEP-KVGR5 = 'S02'.
          I_OUTPUT-S02 = I_COEP-WOGBTR.
        ENDIF.
        IF I_COEP-KVGR5 = 'S03'.
          I_OUTPUT-S03 = I_COEP-WOGBTR.
        ENDIF.
        IF I_COEP-KVGR5 = 'S04'.
          I_OUTPUT-S04 = I_COEP-WOGBTR.
        ENDIF.
        IF I_COEP-KVGR5 = 'S05'.
          I_OUTPUT-S05 = I_COEP-WOGBTR.
        ENDIF.
        IF I_COEP-KVGR5 = 'S06'.
          I_OUTPUT-S06 = I_COEP-WOGBTR.
        ENDIF.
        IF I_COEP-KVGR5 = 'S07'.
          I_OUTPUT-S07 = I_COEP-WOGBTR.
        ENDIF.
        IF I_COEP-KVGR5 = 'S08'.
          I_OUTPUT-S08 = I_COEP-WOGBTR.
        ENDIF.
        IF I_COEP-KVGR5 = 'S09'.
          I_OUTPUT-S09 = I_COEP-WOGBTR.
        ENDIF.
        IF I_COEP-KVGR5 = 'OTH' OR I_COEP-KVGR5 = ''.
          I_OUTPUT-OTH = I_COEP-WOGBTR.
        ENDIF.
        COLLECT I_OUTPUT.
        CLEAR : I_COEP, I_OUTPUT.
      ENDLOOP.
      LOOP AT I_OUTPUT.
        SELECT SINGLE MCTXT FROM CSKU
        INTO I_OUTPUT-MCTXT
        WHERE KTOPL = 'KBL' AND
              SPRAS = SY-LANGU AND
              KSTAR = I_OUTPUT-KSTAR.
        I_OUTPUT-TOTAL = I_OUTPUT-S01 + I_OUTPUT-S02 + I_OUTPUT-S03
                        + I_OUTPUT-S04 + I_OUTPUT-S05 + I_OUTPUT-S06
                        + I_OUTPUT-S07 + I_OUTPUT-S08 + I_OUTPUT-S09
                        + I_OUTPUT-OTH.
        MODIFY I_OUTPUT.
        CLEAR : I_OUTPUT.
      ENDLOOP.
    ENDFORM.                    " GROUP_OUTPUT
     *&      Form  BUILD_CATLOG
     *       text
     *  -->  p1        text
     *  <--  p2        text
    FORM BUILD_CATLOG .
      CLEAR k_fieldcat.
      k_fieldcat-fieldname = 'KSTAR'.
      k_fieldcat-seltext_l  = text-002.
      k_fieldcat-hotspot   = 'X'.
      APPEND k_fieldcat TO t_fieldcat_sum_rep.
      CLEAR k_fieldcat.
      k_fieldcat-fieldname = 'MCTXT'.
      k_fieldcat-seltext_l  = text-003.
      k_fieldcat-hotspot   = 'X'.
      APPEND k_fieldcat TO t_fieldcat_sum_rep.
      READ TABLE I_SECTOR WITH KEY KVGR5 = 'S01'.
      IF SY-SUBRC = 0.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'S01'.
        k_fieldcat-seltext_l  = text-004.
        k_fieldcat-hotspot   = 'X'.
        k_fieldcat-do_sum    = 'X'. "Statement inserted by Sunil Maurya
        APPEND k_fieldcat TO t_fieldcat_sum_rep.
      ENDIF.
      READ TABLE I_SECTOR WITH KEY KVGR5 = 'S02'.
      IF SY-SUBRC = 0.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'S02'.
        k_fieldcat-seltext_l  = text-005.
        k_fieldcat-hotspot   = 'X'.
        k_fieldcat-do_sum    = 'X'. "Statement inserted by Sunil Maurya
        APPEND k_fieldcat TO t_fieldcat_sum_rep.
      ENDIF.
      READ TABLE I_SECTOR WITH KEY KVGR5 = 'S03'.
      IF SY-SUBRC = 0.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'S03'.
        k_fieldcat-seltext_l  = text-006.
        k_fieldcat-hotspot   = 'X'.
        k_fieldcat-do_sum    = 'X'. "Statement inserted by Sunil Maurya
        APPEND k_fieldcat TO t_fieldcat_sum_rep.
      ENDIF.
      READ TABLE I_SECTOR WITH KEY KVGR5 = 'S04'.
      IF SY-SUBRC = 0.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'S04'.
        k_fieldcat-seltext_l  = text-007.
        k_fieldcat-hotspot   = 'X'.
        k_fieldcat-do_sum    = 'X'. "Statement inserted by Sunil Maurya
        APPEND k_fieldcat TO t_fieldcat_sum_rep.
      ENDIF.
      READ TABLE I_SECTOR WITH KEY KVGR5 = 'S05'.
      IF SY-SUBRC = 0.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'S05'.
        k_fieldcat-seltext_l  = text-008.
        k_fieldcat-hotspot   = 'X'.
        k_fieldcat-do_sum    = 'X'. "Statement inserted by Sunil Maurya
        APPEND k_fieldcat TO t_fieldcat_sum_rep.
      ENDIF.
      READ TABLE I_SECTOR WITH KEY KVGR5 = 'S06'.
      IF SY-SUBRC = 0.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'S06'.
        k_fieldcat-seltext_l  = text-009.
        k_fieldcat-hotspot   = 'X'.
        k_fieldcat-do_sum    = 'X'. "Statement inserted by Sunil Maurya
        APPEND k_fieldcat TO t_fieldcat_sum_rep.
      ENDIF.
      READ TABLE I_SECTOR WITH KEY KVGR5 = 'S07'.
      IF SY-SUBRC = 0.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'S07'.
        k_fieldcat-seltext_l  = text-010.
        k_fieldcat-hotspot   = 'X'.
        k_fieldcat-do_sum    = 'X'. "Statement inserted by Sunil Maurya
        APPEND k_fieldcat TO t_fieldcat_sum_rep.
      ENDIF.
      READ TABLE I_SECTOR WITH KEY KVGR5 = 'S08'.
      IF SY-SUBRC = 0.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'S08'.
        k_fieldcat-seltext_l  = text-011.
        k_fieldcat-hotspot   = 'X'.
        k_fieldcat-do_sum    = 'X'. "Statement inserted by Sunil Maurya
        APPEND k_fieldcat TO t_fieldcat_sum_rep.
      ENDIF.
      READ TABLE I_SECTOR WITH KEY KVGR5 = 'S09'.
      IF SY-SUBRC = 0.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'S09'.
        k_fieldcat-seltext_l  = text-012.
        k_fieldcat-hotspot   = 'X'.
        k_fieldcat-do_sum    = 'X'. "Statement inserted by Sunil Maurya
        APPEND k_fieldcat TO t_fieldcat_sum_rep.
      ENDIF.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'OTH'.
        k_fieldcat-seltext_l  = text-019.
        k_fieldcat-hotspot   = 'X'.
        k_fieldcat-do_sum    = 'X'. "Statement inserted by Sunil Maurya
        APPEND k_fieldcat TO t_fieldcat_sum_rep.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'TOTAL'.
        k_fieldcat-seltext_l  = text-020.
        k_fieldcat-do_sum    = 'X'. "Statement inserted by Sunil Maurya
        APPEND k_fieldcat TO t_fieldcat_sum_rep.
    *=======================================================================
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'BELNR'.
        k_fieldcat-seltext_l  = text-013.
        APPEND k_fieldcat TO t_fieldcat_det_rep.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'BUZEI'.
        k_fieldcat-seltext_l  = text-014.
        APPEND k_fieldcat TO t_fieldcat_det_rep.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'PERIO'.
        k_fieldcat-seltext_l  = text-015.
        APPEND k_fieldcat TO t_fieldcat_det_rep.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'WOGBTR'.
        k_fieldcat-seltext_l  = text-016.
        APPEND k_fieldcat TO t_fieldcat_det_rep.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'OBJNR'.
        k_fieldcat-seltext_l  = text-018.
        APPEND k_fieldcat TO t_fieldcat_det_rep.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'KSTAR'.
        k_fieldcat-seltext_l  = text-002.
        APPEND k_fieldcat TO t_fieldcat_det_rep.
         CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'KVGR5'.
        k_fieldcat-seltext_l  = text-017.
        APPEND k_fieldcat TO t_fieldcat_det_rep.
    *==============================================================
     *      Statements inserted by Sunil Maurya for sorting
    CLEAR GS_SORT.
    GS_SORT-FIELDNAME = 'KSTAR'.
    GS_SORT-SPOS      = 1.
    GS_SORT-UP        = 'X'.
    APPEND GS_SORT TO GT_SORT.
    CLEAR GS_SORT1.
    GS_SORT1-FIELDNAME = 'KSTAR'.
    GS_SORT1-SPOS      = 1.
    GS_SORT1-UP        = 'X'.
    GS_SORT1-SUBTOT    = 'X'.
    APPEND GS_SORT1 TO GT_SORT1.
    *CLEAR GS_SORT1.
    *GS_SORT1-FIELDNAME = 'WOGBTR'.
    *GS_SORT1-SPOS      = 2.
    *GS_SORT1-UP        = 'X'.
    *GS_SORT1-SUBTOT    = 'X'.
    *APPEND GS_SORT1 TO GT_SORT1.
     *  FORM build_sortcat.
     *    k_sortcat-spos = 1.
     *    k_sortcat-fieldname = 'KSTAR'.
     *    k_sortcat-up        = 'X'.
     *    k_sortcat-down      = 'X'.
     *    APPEND k_sortcat TO it_sortcat.
     *    CLEAR k_sortcat.
     *  ENDFORM.
     *  Statements inserted by Sunil Maurya for sorting
    ENDFORM.                    " BUILD_CATLOG
     *&      Form  DISPLAY_OUTPUT
     *       text
     *  -->  p1        text
     *  <--  p2        text
    FORM DISPLAY_OUTPUT .
      DATA l_repid TYPE syrepid.
      l_repid = sy-repid.
      CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
        EXPORTING
          i_callback_program      = l_repid
          it_fieldcat             = t_fieldcat_sum_rep
          i_callback_user_command = c_user_command
          i_save                  = 'A'
          it_sort                 = GT_SORT[] "Statements inserted by Sunil
        TABLES
          t_outtab                = I_OUTPUT
        EXCEPTIONS
          program_error           = 1
          OTHERS                  = 2.
      IF sy-subrc <> 0.
        MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      ENDIF.
    ENDFORM.                    " DISPLAY_OUTPUT
     *&      Form  user_command
     *       text
    FORM user_command USING r_ucomm     LIKE sy-ucomm
                            rs_selfield TYPE slis_selfield.
      DATA: l_repid TYPE syrepid.
      l_repid = sy-repid.
      CASE r_ucomm.
        WHEN '&IC1'.
          clear : i_detail. refresh : i_detail.
          LOOP AT I_COEP WHERE KVGR5 = rs_selfield-fieldname.
              MOVE i_coep to i_detail.
              append i_detail.
              clear : i_detail, i_coep.
          endloop.
          CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
            EXPORTING
              i_callback_program      = l_repid
              it_fieldcat             = t_fieldcat_det_rep
              i_callback_user_command = c_user_command
              i_save                  = 'A'
              it_sort                 = GT_SORT1[] "Inserted by Sunil
            TABLES
              t_outtab                = i_detail
            EXCEPTIONS
              program_error           = 1
              OTHERS                  = 2.
          IF sy-subrc <> 0.
            MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                    WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
          ENDIF.
      ENDCASE.
      ENDFORM.

    Run SE30 with the internal tables, then examine the top lines.
    See:
    SE30
    The ABAP Runtime Trace (SE30) -  Quick and Easy
    Define your tables as sorted tables or at least use READ ... BINARY SEARCH; otherwise these sorts are useless
    and the nested loop is slow.
    SORT I_VBAKP BY OBJNR.
    SORT I_COEP BY OBJNR.
    LOOP AT I_WB.
    READ TABLE I_VBAKP WITH KEY OBJNR = I_WB-OBJNR.
    IF SY-SUBRC = 0.
    READ TABLE I_SECTOR WITH KEY KVGR5 = I_VBAKP-KVGR5.
    IF SY-SUBRC <> 0.
    DELETE I_COEP WHERE OBJNR = I_WB-OBJNR.
    loop at i_coep where objnr = i_wb-objnr.
    i_coep-kvgr5 = 'OTH'.
    modify i_coep.
    clear : i_coep.
    endloop.
    ELSE.
    READ TABLE I_COEP WITH KEY BELNR = I_WB-BELNR BUZEI = I_WB-BUZEI.
    READ TABLE I_COEP WITH KEY OBJNR = I_WB-OBJNR.
    IF SY-SUBRC = 0.
    I_COEP-KVGR5 = I_SECTOR-KVGR5.
    MODIFY I_COEP INDEX SY-TABIX.
    LOOP AT I_COEP WHERE OBJNR = I_WB-OBJNR.
    I_COEP-KVGR5 = I_SECTOR-KVGR5.
    MODIFY I_COEP.
    CLEAR : I_COEP.
    ENDLOOP.
    This is all very slow.
    Read my blog on internal tables:
    Measurements on internal tables: Reads and Loops:
    Runtimes of  Reads and Loops on Internal Tables
    If you really want to identify all bugs, then try this:
    Z_SE30_COMPARE
    A Tool to Compare Runtime Measurements: Z_SE30_COMPARE
    Nonlinearity Check
    Nonlinearity Check Using the Z_SE30_COMPARE
    It needs a bit of experience, but then it is very powerful!
    That is also a waste of time, though not a nested loop: you sort and re-sort the same table,
    but the sort is useless because the deletes are still sequential on standard tables.
    Put all of this into ONE loop on I_COEP.
    SORT I_COEP BY OBJNR.
    DELETE I_COEP WHERE OBJNR+0(2) <> 'VB' AND
    OBJNR+0(2) <> 'PR' AND
    OBJNR+0(2) <> 'NV' AND
    OBJNR+0(2) <> 'NP' AND
    OBJNR+0(2) <> 'AO'.
    SORT I_COEP BY KSTAR.
    DELETE I_COEP WHERE KSTAR+0(5) <> '00003' AND
    KSTAR+0(5) <> '00004'.
    LOOP AT I_COEP WHERE OBJNR+0(2) = 'NP'.
    MOVE I_COEP TO I_NP.
    APPEND I_NP.
    CLEAR : I_NP, I_COEP.
    ENDLOOP.
    LOOP AT I_COEP WHERE OBJNR+0(2) = 'NV'.
    MOVE I_COEP TO I_NV.
    APPEND I_NV.
    CLEAR : I_NV, I_COEP.
    ENDLOOP.
    LOOP AT I_COEP WHERE OBJNR+0(2) = 'VB' .
    MOVE I_COEP TO I_WB.
    APPEND I_WB.
    CLEAR : I_WB, I_COEP.
    ENDLOOP.
    LOOP AT I_COEP WHERE OBJNR+0(2) = 'PR' .
    MOVE I_COEP TO I_PR.
    APPEND I_PR.
    CLEAR : I_PR, I_COEP.
    ENDLOOP.
    There is probably more, but with the compare tool you can find everything.
    Siegfried

  • Logic taking too much time to execute

    Dear Friends,
    My i_bkpf internal table has approx. 300,000 records; another table, i_del, has 200,000 records. I have to delete these 200,000 records of i_del from i_bkpf.
    I had written the following piece of code for this, but it takes a huge amount of time and ultimately I encounter a timeout.
    Kindly have a look at the following code and suggest how can I improve it.
    sort i_bkpf by belnr.
    sort i_del by belnr.
    loop at i_del.
    DELETE I_BKPF WHERE BELNR = I_DEL-BELNR AND
                        GJAHR = I_DEL-GJAHR AND
                        BUKRS = I_DEL-BUKRS.
    ENDLOOP.
    Regards,
    Alok.

    Hi
    do like this
    sort i_bkpf by bukrs belnr gjahr.
    sort i_del by bukrs belnr gjahr.
    loop at i_del.
      read table i_bkpf with key bukrs = i_del-bukrs
                                 belnr = i_del-belnr
                                 gjahr = i_del-gjahr
                        binary search.
      if sy-subrc = 0.
        delete i_bkpf index sy-tabix.
      endif.
    endloop.
    Reward points for useful answers
    Regards
    Anji

  • Taking too much time using BufferedWriter to write to a file

    Hi,
    I'm using the method extractItems(), given below, to write data to a file. This method is taking too much time to execute when the number of records in the enumeration is 10000 and above; to be precise, it takes around 70 minutes. The writing pauses intermittently for 20 seconds after writing a few lines, and sometimes for much more. Has somebody faced this problem before, and if so, what could be the cause? This is very high priority work and it would be really helpful if someone could give me some info on this.
    Thanks in advance.
    public String extractItems() throws InternalServerException {
        try {
            String extractFileName = getExtractFileName();
            FileWriter fileWriter = new FileWriter(extractFileName);
            BufferedWriter bufferWrt = new BufferedWriter(fileWriter);
            CXBusinessClassIfc editClass = new ExploreClassImpl(className, mdlMgr);
            System.out.println("Before -1");
            CXPropertyInfoIfc[] propInfo = editClass.getClassPropertyInfo(configName);
            System.out.println("After -1");
            PrintWriter out = new PrintWriter(bufferWrt);
            System.out.println("Before -2");
            TemplateHeaderInfo.printHeaderInfo(propInfo, out, mdlMgr);
            System.out.println("After -2");
            XDItemSet itemSet = getItemsForObjectIds(catalogEditDO.getSelectedItems());
            Enumeration allitems = itemSet.allItems();
            System.out.println("the batch size : " + itemSet.getBatchSize());
            XDForm frm = itemSet.getXDForm();
            XDFormProperty[] props = frm.getXDFormProperties();
            System.out.println("Before -3");
            bufferWrt.newLine();
            long startTime, startTime1, startTime2, startTime3;
            startTime = System.currentTimeMillis();
            System.out.println("time here is--before-while : " + startTime);
            while (allitems.hasMoreElements()) {
                String aRow = "";
                XDItem item = (XDItem) allitems.nextElement();
                for (int i = 0; i < props.length; i++) {
                    String value = item.getStringValue(props[i]);
                    if (value == null || value.equalsIgnoreCase("null"))
                        value = "";
                    if (i == 0)
                        aRow = value;
                    else
                        aRow += ("\t" + value);
                }
                startTime1 = System.currentTimeMillis();
                System.out.println("time here is--before-writing to buffer --new: " + startTime1);
                bufferWrt.write(aRow.toCharArray());
                bufferWrt.flush(); // added by rosmon to check extra time taken for extraction
                bufferWrt.newLine();
                startTime2 = System.currentTimeMillis();
                System.out.println("time here is--after-writing to buffer : " + startTime2);
            }
            startTime3 = System.currentTimeMillis();
            System.out.println("time here is--after-while : " + startTime3);
            out.close(); // added by rosmon to check extra time taken for extraction
            bufferWrt.close();
            fileWriter.close();
            System.out.println("After -3");
            return extractFileName;
        } catch (Exception e) {
            e.printStackTrace();
            throw new InternalServerException(e.getMessage());
        }
    }

    Hi fiontan,
    Thanks a lot for the response!!!
    Yeah!! I know it's a lot of code, but I thought it'd be more informative if the whole function was quoted.
    I'm in fact using the PrintWriter to wrap the BufferedWriter, but am not using the print() method.
    Does it save any time to use the print() method?
    The place where the delay is occurring is the while loop shown below:
    while (allitems.hasMoreElements()) {
        String aRow = "";
        XDItem item = (XDItem) allitems.nextElement();
        for (int i = 0; i < props.length; i++) {
            String value = item.getStringValue(props[i]);
            if (value == null || value.equalsIgnoreCase("null"))
                value = "";
            if (i == 0)
                aRow = value;
            else
                aRow += ("\t" + value);
        }
        startTime1 = System.currentTimeMillis();
        System.out.println("time here is--before-writing to buffer --out.flush() done: " + startTime1);
        bufferWrt.write(aRow.toCharArray());
        out.flush(); // added by rosmon to check extra time taken for extraction
        bufferWrt.flush(); // added by rosmon to check extra time taken for extraction
        bufferWrt.newLine();
        startTime2 = System.currentTimeMillis();
        System.out.println("time here is--after-writing to buffer : " + startTime2);
    }
    What exactly happens is that after a few loops it just seems to sleep for around 20 seconds and then again starts off and ............it goes on till the records are done.
    Please do lemme know if you have any idea as to why this is happening !!!!! This bug is giving me the scare.
    thanks in advance

  • Code taking too much time to output

    The following code is taking too much time to execute (sometimes giving a TIME_OUT):
    ind = sy-tabix.
        SELECT SINGLE * FROM mseg INTO mseg
           WHERE bwart = '102' AND
                 lfbnr = itab-mblnr AND
                 ebeln = itab-ebeln AND
                 ebelp = itab-ebelp.
        IF sy-subrc = 0.
          DELETE itab INDEX ind.
          CONTINUE.
        ENDIF.
    Is there any other way to write this code to reduce the execution time?
    Thanks

    Hi,
    I think you are executing this code in a loop which is causing the problem. The rule is "Never put SELECT statements inside a loop".
    Try to rewrite the code as follows:
    * Outside the loop
    SELECT *
      FROM mseg
      INTO TABLE lt_mseg
      FOR ALL ENTRIES IN itab
      WHERE bwart = '102' AND
            lfbnr = itab-mblnr AND
            ebeln = itab-ebeln AND
            ebelp = itab-ebelp.
    Then inside the loop, do a READ on the internal table:
    LOOP AT itab.
      READ TABLE lt_mseg WITH KEY lfbnr = itab-mblnr
                                  ebeln = itab-ebeln
                                  ebelp = itab-ebelp.
      IF sy-subrc = 0.
        DELETE itab. "index is automatically determined here from SY-TABIX
      ENDIF.
    ENDLOOP.
    I think this should optimise performance. You can check your code's performance using SE30 or ST05.
    Hope this helps! Please revert if you need anything else!!
    Cheers,
    Shailesh.
    Always provide feedback for helpful answers!

  • Job is taking too much time

    Hi,
    I am running one piece of SQL code that takes 1 hr, but when I schedule the same code in a JOB using a package, it takes 5 hrs.
    Could anybody suggest why it is taking so much more time?
    Regards
    Gagan

    Use TRACE and TKPROF with wait events to see where the time is being spent (or wasted).
    See these informative threads:
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    HOW TO: Post a SQL statement tuning request - template posting
    Also you can use V$SESSION and/or V$SESSION_LONGOPS to see what code is currently executing.
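    A minimal sketch of that approach (the SID/serial# values and the MODULE filter are placeholders; look up the job's actual session in V$SESSION first):

    -- find the session running the job (the module filter is a guess)
    SELECT sid, serial#, sql_id, module FROM v$session WHERE module LIKE 'DBMS_SCHEDULER%';

    -- enable a trace with wait events for that session, then analyse the trace file with TKPROF
    EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, waits => TRUE, binds => FALSE);

    -- watch long-running operations while the job executes
    SELECT opname, target, sofar, totalwork, time_remaining
      FROM v$session_longops
     WHERE sofar < totalwork;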

  • Job is taking too much time during Delta loads

    hi
    when I tried to extract delta records from R3 for the standard extractor 0FI_GL_4, it took 46 mins even though there are very few delta records (193 records only).
    Please find attached the R3 job log. The major time is spent in calling the customer enhancement BW_BTE_CALL_BW204010_E.
    Please let me know why this is taking so much time.
    06:10:16  4 LUWs confirmed and 4 LUWs to be deleted with FB RSC2_QOUT_CONFIRM_DATA
    06:56:46  Call up of customer enhancement BW_BTE_CALL_BW204010_E (BTE) with 193 records
    06:56:46  Result of customer enhancement: 193 records
    06:56:46  Call up of customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 193 records
    06:56:46  Result of customer enhancement: 193 records
    06:56:46  Asynchronous sending of data package 1 in task 0002 (1 parallel tasks)
    06:56:47  IDOC: InfoIDOC 2, IDOC no. 121289649, duration 00:00:00
    06:56:47  IDOC: Begin 09.05.2011 06:10:15, end 09.05.2011 06:10:15
    06:56:48  Asynchronous sending of InfoIDOCs 3 in task 0003 (1 parallel tasks)
    06:56:48  Through selection conditions, 0 records filtered out in total
    06:56:48  IDOC: InfoIDOC 3, IDOC no. 121289686, duration 00:00:00
    06:56:48  IDOC: Begin 09.05.2011 06:56:48, end 09.05.2011 06:56:48
    06:56:54  tRFC: Data package 1, TID = 3547D5D96D2C4DC7740F217E, duration 00:00:07, ARFCSTATE =
    06:56:54  tRFC: Begin 09.05.2011 06:56:47, end 09.05.2011 06:56:54
    06:56:55  Synchronous sending of InfoIDOCs 4 (0 parallel tasks)
    06:56:55  IDOC: InfoIDOC 4, IDOC no. 121289687, duration 00:00:00
    06:56:55  IDOC: Begin 09.05.2011 06:56:55, end 09.05.2011 06:56:55
    06:56:55  Job finished
    Regards
    Atul

    Hi Atul,
    Have you written any customer exit code? If yes, check it for optimization.
    Kind Regards,
    Ashutosh Singh

  • SAP GUI taking too much time to open transactions

    Hi guys,
    I have done a complete system copy from the Production to the Quality server.
    After that I started SAP on the Quality server; it is taking too much time to open SAP transactions (it is going into compilation mode).
    I started SGEN, but it is giving TIME_OUT errors. Please help me on this issue.
    MY hardware details on quality server
    operating system : SuSE Linux 10 SP2
    Database : 10.2.0.2
    SAP : ECC 6.0  SR2
    RAM size : 8 GB
    Hard disk space : 500 GB
    swap space : 16 GB.
    regards
    Ramesh

    Hi,
    >i started SGEN, but it is giving time_out errors. please help me on this issue.
    You are supposed to run SGEN as a batch job, so it should not be possible to get time-out errors.
    I've seen a full SGEN last from 3 hours on high-end systems up to 8 full days on PC hardware...
    Regards,
    Olivier

  • Taking too much time in Rules (DTP Schedule run)

    Hi,
    I am scheduling a DTP which has filters to minimize the load data.
    When I run the DTP, it takes too much time in the "rules" step (I can see the DTP monitor status package by package and step by step: "Start Routine", "Rules", "End Routine").
    It is consuming too much time in the rules mapping.
    What is the problem, and are there any solutions?
    regards,
    sree

    Hi,
    The time taken at "rules" depends on the complexity of your routine there. If it is a complex calculation, it will take time.
    Also check your DTP batch settings, i.e. how many background processes are used to perform the DTP, and the job class.
    You can find these as follows:
    Go to the DTP, open the "Goto" menu and select "Settings for Batch Manager".
    In that screen, increase the number of processes from 3 to a higher number (max 9).
    Change the job class to 'A'.
    If your DTP is still running, cancel it (i.e. kill the DTP), delete the request from the cube,
    change these settings and run your DTP one more time.
    You can observe the difference.
    Reddy

  • Report taking too much time in the portal

    Hi friends,
    we have developed a report on the ODS and published it on the portal.
    The problem is that when users execute the report at the same time, it takes too much time, so the performance is very poor.
    Is there any way to sort out this issue? For instance, can we send the report to each user's mail ID
    so that they do not have to log in to the portal,
    or can we create the same report on the cube?
    What would be the main difference between the report built on the cube and on the ODS?
    please help me
    thanks in advance
    sridath

    Hi
    Try this to improve the performance of the query.
    First find the query runtime.
    Where to find the query runtime:
    Note 557870 'FAQ BW Query Performance'
    Note 130696 - Performance trace in BW
    This info may be helpful.
    General tips:
    Use aggregates and compression.
    Use fewer and less complex cell definitions where possible.
    1. Avoid using too many navigational attributes.
    2. Avoid RKFs and CKFs.
    3. Avoid many characteristics in rows.
    By using T-codes ST03 or ST03N:
    Go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution for a particular day, check the query execution time.
    /people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
    /people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
    Try table rsddstats to get the statistics
    Using cache memory will decrease the loading time of the report.
    Run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
    Also try
    1. Use different parameters in ST03 to see the two important metrics: the aggregation ratio and the ratio of records transferred to the front end versus records selected from the DB.
    2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
    Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
    It will show the dimension vs. fact table sizes in percent. If you take query speed on a cube as the cube's performance metric, measure the query runtime.
    3. To check the performance of the aggregates, see the valuation and usage columns of the aggregates.
    Open the aggregates and observe the VALUATION and USAGE columns.
    The signs shown are the valuation of the aggregate design and usage (e.g. -3). Plus signs mean the aggregate's compression ratio is good and it is accessed frequently, so performance is good; the more plus signs, the more useful the aggregate and the more queries it satisfies. Minus signs mean the compression ratio and access are not so good; the greater the number of minus signs, the worse the evaluation of the aggregate.
    If the valuation is "-----", the aggregate is just overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
    In the usage column, you can see how often the aggregate has actually been used by queries.
    Thus you can check the performance of each aggregate.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    performance ISSUE related to AGGREGATE
    Note 356732 - Performance Tuning for Queries with Aggregates
    Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
    4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Also, your query performance can depend on the selection criteria; since you have a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
    5. In BI 7 statistics need to be activated for ST03 and BI admin cockpit to work.
    By implementing the BW Statistics Business Content: you need to install it, feed it data, and then analyze through the ready-made reports.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can go to T-Code DB20 which gives you all the performance related information like
    Partitions
    Databases
    Schemas
    Buffer Pools
    Tablespaces etc
    use tool RSDDK_CHECK_AGGREGATE in se38 to check for the corrupt aggregates
    If aggregates contain incorrect data, you must regenerate them.
    202469 - Using aggregate check tool
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    You can find out whether an aggregate is useful or useless by checking the RSDDSTATAGGRDEF* tables.
    Run the query in RSRT with "Execute + Statistics", copy the STATUID you get, and look it up in the table...
    This tells you exactly which InfoObjects the query hits; if any one of the objects is missing, the aggregate is useless.
    6. Check SE11 > table RSDDAGGRDIR. You can find the last call-up of each aggregate in the table.
    Generate Report in RSRT
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    Assign points if useful
    Cheers
    SM

  • ODS to CUBE load taking too much time..

    Hi all ,
    we are loading data from our ZODS to ZCUBE, but the data load is taking too much time. We haven't created any indexes; we also tried making an InfoSource for the ODS, but still have the same problem. It always shows 0 of 345674 records, i.e. the records are not getting extracted from the ODS.
    Can anybody help me in this regard? It is a bit urgent.
    Thanks in advance.

    Hi,
    there are a few things you can check. First, check in ST22 whether the job ended in a dump.
    If the job doesn't end abnormally, the next thing you can do is reduce the number of records processed at the same time. Sometimes the system has trouble if the number of records it has to process at once is too large. Go to the InfoPackage -> DataS. Default Data Transfer -> set the maximum to 10% of the default value. Try to run the load again.
    If the job still doesn't finish, check whether any ABAP routines and/or formulas are involved in the update rules. Maybe they are running in a loop.
    regards,
    Raymond Baggen
    Uphantis bv

  • Importing is taking too much time (2 DAYS)

    Dear All,
    I'm importing the support packages below together in a queue @ SAP Solution Manager 4.
    SAPKB70015             Basis Support Package 15 for 7.00
    SAPKA70015             ABA Support Package 15 for 7.00
    SAPKITL426             ST 400: Patch 0016, CRT for SAPKB70015
    SAPKIBIIP6             BI_CONT 703: patch 0006
    SAPKIBIIP7             BI_CONT 703: patch 0007
    SAPKIBIIP8             BI_CONT 703: patch 0008
    SAPK-40010INCPRXRPM    CPRXRPM 400: patch 0010
    SAPK-40011INCPRXRPM    CPRXRPM 400: patch 0011
    SAPK-40012INCPRXRPM    CPRXRPM 400: patch 0012
    SAPKIPYJ7E             PI_BASIS 2005_1_700: patch 0014
    SAPKW70016             BW Support Package 16 for 7.00
    The import is taking too much time (2 DAYS) in the main import phase. I have looked at the SLOG; there are many rows saying "I am waiting 1 sec" and "6 sec". I also checked transaction STMS: all support packages were imported except one, "SAPKW70016".
    Please advice.
    Best Regards'
    HE

    Hello Mohan,
    The DBTABLOG table does get large; the best option is to switch off logging. If that's not possible, increase the frequency of your delete job. Also explore one more alternative: have a look at the archival object BC_DBLOGS, with which you could archive old records (in accordance with your customer's data retention policies) to reduce the size of the table.
    Also, have a look at the following notes, they will advise you on how to improve the performance of your delete job:
    Note 531923 - Audit Trail: Indexes on table DBTABLOG
    Note 579980 - Table logs: Performance during access to DBTABLOG
    Regards,
    Siddhesh

  • Uploading Taking too much Time

    Hi all,
    I am uploading my video files from a client computer to the server. The server has 1 terabyte of storage, of which 500 GB is free. The upload is taking too much time, even though no other computer is connected, just the one client and the server. The left pane "jobs in progress" is shown and never ends.
    what should be the possible reason for that.
    Junaid
    Broadcast Engineer

    Our FCS server was hellishly slow, to the point where our producers never used it, as it would take them forever to upload their capture scratches.
    We recently installed a Fibre network with an Xsan and switched all of the producers over to that network. Now it is just as fast (if not faster) than using an external USB/FireWire drive. Now the only challenge is to get them to use FCS again...
    But really, if you're on a 10/100 network then you might as well not use FCS for any type of real movie editing. Even on our 10/100/1000 network it was still slow enough to make everyone hate using FCS. So if you are planning on using FCS in a wide-scale deployment, prepare to shell out some cash for Fibre.
