Crystal Report Query taking too much time

Hi,
We are developing a report in Crystal Reports against SQL Server 2008. There are around 50,000 valid combinations in the database, and based on dynamic filters we only need to bring a few records into the report. Since these filters are applied at report level, and Crystal Reports uses a microcube, the report takes more than 15 minutes to execute.
Is there any option to fetch records based on the filters applied at report level?
Regards
Baby

Hi,
First of all, thank you very much.
Since we have a cascading prompt, we never thought of it this way.
Details:
For our report we have 4 prompts:
1. category -> family -> brand (cascading, mandatory)
2. season (mandatory)
3. collection (mandatory)
4. owner (not mandatory)
Previously we set all these filters at record level.
Now we set season and collection at query level, and brand and owner at report level, so the report only queries the selected season and collection (roughly as in the sketch below).
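Roughly, the idea is that the query-level filters end up in the WHERE clause sent to SQL Server, so only the selected rows ever reach the microcube; brand and owner are then applied as report-level filters on that much smaller result set. A minimal sketch of the kind of query this produces (all table, column and parameter names here are made-up placeholders, not our real schema):
-- Query-level prompts: only the selected season/collection rows come back from SQL Server
SELECT p.category, p.family, p.brand,
       s.season, s.collection, s.owner, s.amount
FROM   sales s
       JOIN product p ON p.product_id = s.product_id
WHERE  s.season     = @p_season        -- query-level prompt
  AND  s.collection = @p_collection;   -- query-level prompt
-- brand and owner remain report-level filters on this smaller result set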
Thanks once again.
Regards
Baby

Similar Messages

  • Still the report is taking too much time

    Hi All,
    When I refresh a Webi report, it takes too much time to refresh (open).
    In the back end I have checked all the connections, contexts, cardinalities, joins, conditions, etc., and in Webi I have enabled the 'query stripping' check box.
    But the report is still taking too much time, and I have not been able to identify the problem.
    Please help me with this.
    Thanks in advance.

    Hi Mark,
    How many queries are there? -- 2
    How many rows are returned? -- 2000+
    Are all measures defined with aggregates? -- Yes
    What is the array fetch size? (Make it 1000 if it isn't already.)

  • Report is taking too much time when running from parameter form

    Dear All,
    I have developed a report in Oracle Reports Builder 10g. While running it from Reports Builder, the data comes back very fast.
    But if it is run from the parameter form, it takes too much time to format the report as PDF.
    Please suggest any configuration or setting if anybody has an idea.
    Thanks

  • Delete query taking too much time

    Hi All,
    My delete query is taking too much time: around 1 hr 30 min for 1.5 lakh (150,000) records.
    I have already dropped the MV log on the table and disabled all the triggers on it.
    Moreover, the deletion is based on the primary key:
    delete from table_name where primary_key in (values)
    The above is a dummy format of my query.
    Can anyone please tell me what other reason there could be for the query performing that slowly?
    Is there anything to check in the DB other than triggers, MV logs and constraints in order to improve the performance?
    Please reply asap.

    Delete is the most time-consuming operation, as the whole record has to be stored in the undo segments. On the other hand, the part of the query used to select the records to delete, the IN (values) clause, is probably adding extra overhead to the process. It would be nice if you could post another dummy of this (values) clause; I would guess it is a subquery, and that obtaining the list requires running an inefficient query.
    You can gather the execution plan so you can see where the heaviest part of the query is. That way a better tuning approach and a more accurate diagnosis can be made.
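    For example, a quick way to capture the plan for the delete (a sketch; the subquery shown is just a placeholder for whatever actually produces your value list):
    EXPLAIN PLAN FOR
      DELETE FROM table_name
      WHERE  primary_key IN (SELECT primary_key FROM staging_ids);  -- placeholder for your real IN list / subquery
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);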
    ~ Madrid.

  • Query taking too much time with dates??

    hello folks,
    I am trying to pull some data using a date condition, and for some reason it is taking too much time to return the data.
       and trunc(al.activity_date) = TRUNC (SYSDATE, 'DD') - 1     -- if I use this it takes too much time
       and al.activity_date >= to_date('20101123 000000', 'YYYYMMDD HH24MISS')
       and al.activity_date <= to_date('20101123 235959', 'YYYYMMDD HH24MISS')  -- if I use this it returns the data in a second. Why is that?
    How do I get the previous day without hardcoding to_date('20101123 000000', 'YYYYMMDD HH24MISS'), and still retrieve it fast?

    Presumably you've got an index on activity_date.
    If you apply a function like TRUNC to activity_date, you can no longer use that index.
    Post the execution plans to verify. The condition rewritten without a function on the column (a fuller sketch follows below):
    and al.activity_date >= TRUNC (SYSDATE, 'DD') - 1
    and al.activity_date < TRUNC (SYSDATE, 'DD')
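    Put together as a complete statement, it might look like this (only a sketch; the table name behind the alias "al" and the select list are assumptions). The half-open range covers all of yesterday and leaves the column bare, so an index range scan stays possible:
    SELECT al.*
    FROM   activity_log al                               -- assumed name for the table aliased "al"
    WHERE  al.activity_date >= TRUNC(SYSDATE, 'DD') - 1  -- start of yesterday
    AND    al.activity_date <  TRUNC(SYSDATE, 'DD');     -- up to, but not including, today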

  • Why is query taking too much time ?

    Hi gurus,
    I have a table named test which has 100,000 records in it. Now the question I would like to ask is:
    when I query "select * from test" there is no problem with response time, but when I fire the same query the next day it takes much more time, say 3 times as long. I would also like to tell you that everything is OK with respect to tuning: the DB is properly tuned and the network is tuned properly. What could be the hurting factor here?
    take care
    All expertise.

    Here is a small test on my Windows PC.
    Oracle 9i Release 1.
    Table: emp_test
    Number of records: 42k
    set autot trace exp stat
    15:29:13 jaffar@PRIMEDB> select * from emp_test;
    41665 rows selected.
    Elapsed: 00:00:02.06 ==> response time.
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=24 Card=41665 Bytes=916630)
    1 0 TABLE ACCESS (FULL) OF 'EMP_TEST' (Cost=24 Card=41665 Bytes=916630)
    Statistics
    0 recursive calls
    0 db block gets
    2951 consistent gets
    178 physical reads
    0 redo size
    1268062 bytes sent via SQL*Net to client
    31050 bytes received via SQL*Net from client
    2779 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    41665 rows processed
    15:29:40 jaffar@PRIMEDB> delete from emp_test where deptno = 10;
    24998 rows deleted.
    Elapsed: 00:00:10.06
    15:31:19 jaffar@PRIMEDB> select * from emp_test;
    16667 rows selected.
    Elapsed: 00:00:00.09 ==> response time
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=24 Card=41665 Bytes=916630)
    1 0 TABLE ACCESS (FULL) OF 'EMP_TEST' (Cost=24 Card=41665 Bytes=916630)
    Statistics
    0 recursive calls
    0 db block gets
    1289 consistent gets
    0 physical reads
    0 redo size
    218615 bytes sent via SQL*Net to client
    12724 bytes received via SQL*Net from client
    1113 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    16667 rows processed
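    To compare the two days directly, you can capture just the statistics both times and look at the 'physical reads' and 'consistent gets' figures, as in the traces above. A minimal SQL*Plus sketch (the table name is taken from your post):
    SET AUTOTRACE TRACEONLY STATISTICS
    SELECT * FROM test;   -- run on day 1 and again on day 2, then compare the statistics
    SET AUTOTRACE OFF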

  • My ALV Report is taking too much time to execute

    Hi Friends,
    My ALV report is taking a long time to execute (more than 1.5 hrs). Please suggest the changes to be made to improve the performance. It is very urgent; please respond as soon as possible.
    Thanks & Regards,
    Sunil Maurya
    The report is as follows:
    REPORT  YSEG_PROFIT.
    TABLES : ZSEGMENT, coep.
    TYPE-POOLS: slis.
    DATA : BEGIN OF I_COEP OCCURS 0,
             BELNR  LIKE COEP-BELNR,
             BUZEI  LIKE COEP-BUZEI,
             PERIO  LIKE COEP-PERIO,
             WOGBTR LIKE COEP-WOGBTR,
             OBJNR  LIKE COEP-OBJNR,
             KSTAR  LIKE COEP-KSTAR,
             PAOBJNR LIKE COEP-PAOBJNR,
             KVGR5  LIKE ZSEGMENT-KVGR5,
             KAUFN  LIKE CE4KBL1_ACCT-KAUFN,
           END OF I_COEP.
    DATA : BEGIN OF I_SECTOR OCCURS 0,
            KVGR5  LIKE ZSEGMENT-KVGR5,
          END OF I_SECTOR.
    DATA : BEGIN OF I_AUFK OCCURS 0,
            OBJNR LIKE AUFK-OBJNR,
            PSPEL LIKE AUFK-PSPEL,
            KDAUF LIKE AUFK-KDAUF,
            KDPOS LIKE AUFK-KDPOS,
           END OF I_AUFK.
    DATA : BEGIN OF I_VBAKP OCCURS 0,
            OBJNR LIKE VBAP-OBJNR,
            KVGR5 LIKE VBAK-VBELN,
          END OF I_VBAKP.
    DATA : BEGIN OF I_PRPS OCCURS 0,
            OBJNR LIKE PRPS-OBJNR,
            PSPHI LIKE PRPS-PSPHI,
            ASTNR LIKE PROJ-ASTNR,
          END OF I_PRPS.
    DATA : BEGIN OF I_OUTPUT OCCURS 0,
             KSTAR LIKE COEP-KSTAR,
             MCTXT LIKE CSKU-MCTXT,
             S01   LIKE COEP-WOGBTR,
             S02   LIKE COEP-WOGBTR,
             S03   LIKE COEP-WOGBTR,
             S04   LIKE COEP-WOGBTR,
             S05   LIKE COEP-WOGBTR,
             S06   LIKE COEP-WOGBTR,
             S07   LIKE COEP-WOGBTR,
             S08   LIKE COEP-WOGBTR,
             S09   LIKE COEP-WOGBTR,
             OTH   like COEP-WOGBTR,
             TOTAL LIKE COEP-WOGBTR,
          END OF I_OUTPUT.
    DATA : BEGIN OF I_AFVC OCCURS 0,
            OBJNR LIKE AFVC-OBJNR,
            PROJN LIKE AFVC-PROJN,
            PROJ LIKE   PROJ-PSPID,
            PSPNR LIKE PROJ-PSPNR,
           END OF I_AFVC.
    DATA : BEGIN OF I_PROJ OCCURS 0,
            PSPNR LIKE PROJ-PSPNR,
            ASTNR LIKE PROJ-ASTNR,
           END OF I_PROJ.
    DATA : I_NP LIKE STANDARD TABLE OF I_COEP WITH HEADER LINE.
    DATA : I_NV LIKE STANDARD TABLE OF I_COEP WITH HEADER LINE.
    DATA : I_DETAIL LIKE STANDARD TABLE OF I_COEP WITH HEADER LINE.
    DATA : I_WB LIKE STANDARD TABLE OF I_COEP WITH HEADER LINE.
    DATA : I_PR LIKE STANDARD TABLE OF I_COEP WITH HEADER LINE.
    data : t_fieldcat_sum_rep TYPE slis_t_fieldcat_alv.
    data : t_fieldcat_det_rep TYPE slis_t_fieldcat_alv.
    DATA: k_fieldcat    TYPE slis_fieldcat_alv.
    * Declaration by Sunil Maurya for sorting
    DATA : GT_SORT TYPE SLIS_T_SORTINFO_ALV,
           GS_SORT TYPE SLIS_SORTINFO_ALV.
    DATA : GT_SORT1 TYPE SLIS_T_SORTINFO_ALV,
           GS_SORT1 TYPE SLIS_SORTINFO_ALV.
    *data : it_sortcat type slis_t_sortinfo_alv.
    *DATA : k_sortcat  like line of it_sortcat.
    **data : wa_sort like line of it_sortcat.
    * Declaration by Sunil Maurya for sorting
    constants :  c_user_command TYPE char30 VALUE 'USER_COMMAND'.
    *Selection screen
    SELECTION-SCREEN BEGIN OF BLOCK A WITH FRAME TITLE TEXT-001.
    PARAMETER : P_PERIO1   LIKE COEP-PERIO obligatory,
               P_PERIO2   LIKE COEP-PERIO MODIF ID D1,
                P_GJAHR    LIKE COEP-GJAHR obligatory.
    select-options :  P_KSTAR  for COEP-KSTAR,
                      P_GSBER  FOR COEP-GSBER.
    SELECT-OPTIONS : S_KVGR5 FOR ZSEGMENT-KVGR5 MODIF ID D1.
    SELECTION-SCREEN END OF BLOCK A.
    INITIALIZATION.
       S_KVGR5-OPTION = 'BT' .
       S_KVGR5-LOW = 'S01'.
       S_KVGR5-HIGH = 'S09'.
       APPEND S_KVGR5.
    AT SELECTION-SCREEN OUTPUT.
        LOOP AT SCREEN.
          IF SCREEN-GROUP1 = 'D1'.
             SCREEN-INPUT = '0'.
             MODIFY SCREEN.
          ENDIF.
        ENDLOOP.
    START-OF-SELECTION.
      PERFORM GET_SECTORS.
      PERFORM GET_DATA_COEP.
      PERFORM VALIDATE_SECTOR.
      PERFORM GROUP_OUTPUT.
    PERFORM BUILD_SORTCAT. " Inserted by by Sunil Maurya for sorting
      PERFORM BUILD_CATLOG.
      PERFORM DISPLAY_OUTPUT.
    *&      Form  GET_DATA_COEP
    *       text
    *  -->  p1        text
    *  <--  p2        text
    FORM GET_DATA_COEP .
    SELECT BELNR BUZEI PERIO WOGBTR OBJNR KSTAR PAOBJNR
    FROM COEP INTO TABLE I_COEP WHERE
            KOKRS = 'KBL' AND
           PERIO >= P_PERIO1 AND
           PERIO <= P_PERIO2 AND
            PERIO = P_PERIO1 AND
            GJAHR = P_GJAHR AND
            KSTAR NE '' AND
            OBJNR NE '' AND
            KSTAR in P_KSTAR AND
            GSBER IN P_GSBER. "AND
    *        BELNR = '0103991827' .
      SORT I_COEP BY OBJNR.
      DELETE I_COEP WHERE OBJNR+0(2) <> 'VB' AND
                          OBJNR+0(2) <> 'PR' AND
                          OBJNR+0(2) <> 'NV' AND
                          OBJNR+0(2) <> 'NP' AND
                          OBJNR+0(2) <> 'AO'.
      SORT I_COEP BY KSTAR.
      DELETE I_COEP WHERE KSTAR+0(5) <> '00003' AND
                          KSTAR+0(5) <> '00004'.
      LOOP AT I_COEP WHERE OBJNR+0(2) = 'NP'.
        MOVE I_COEP TO I_NP.
        APPEND I_NP.
        CLEAR : I_NP, I_COEP.
      ENDLOOP.
      LOOP AT I_COEP WHERE OBJNR+0(2) = 'NV'.
        MOVE I_COEP TO I_NV.
        APPEND I_NV.
        CLEAR : I_NV, I_COEP.
      ENDLOOP.
      LOOP AT I_COEP WHERE OBJNR+0(2) = 'VB' .
        MOVE I_COEP TO I_WB.
        APPEND I_WB.
        CLEAR : I_WB, I_COEP.
      ENDLOOP.
      LOOP AT I_COEP WHERE OBJNR+0(2) = 'PR' .
        MOVE I_COEP TO I_PR.
        APPEND I_PR.
        CLEAR : I_PR, I_COEP.
      ENDLOOP.
    *Inserted by Sunil Maurya for PAOBJNR = "AO....."
    Data : ind type sy-tabix.
    loop at i_coep where OBJNR+0(2) = 'AO'.
    ind = sy-tabix.
    select single KAUFN into i_coep-KAUFN from CE4KBL1_ACCT where PAOBJNR =
    i_coep-PAOBJNR.
    select single KVGR5 into i_coep-kvgr5 from vbak where vbeln =
    i_coep-kaufn.
    modify i_coep index ind.
    clear i_coep.
    endloop.
    *Inserted by Sunil Maurya for PAOBJNR = "AO....."
    *LOOP AT I_COEP WHERE OBJNR+0(2) = 'AO' .
    *   MOVE I_COEP TO I_AO.
    *   APPEND I_AO.
    *   CLEAR : I_AO, I_COEP.
    *ENDLOOP.
    ENDFORM.                    " GET_DATA_COEP
    *&      Form  GET_SECTORS
    *       text
    *  -->  p1        text
    *  <--  p2        text
    FORM GET_SECTORS .
      DATA : L_FR TYPE I,
             L_TO TYPE I.
      DATA :  L_CH1(1).
      LOOP AT S_KVGR5.
        IF S_KVGR5-OPTION = 'EQ'.
          I_SECTOR-KVGR5 = S_KVGR5-LOW.
          APPEND I_SECTOR.
          CLEAR : I_SECTOR.
          CONCATENATE '5' S_KVGR5-LOW+1(2) INTO I_SECTOR-KVGR5.
          APPEND I_SECTOR.
          CLEAR : I_SECTOR.
        ENDIF.
        IF S_KVGR5-OPTION = 'BT'.
          L_FR = S_KVGR5-LOW+1(2).
          L_TO = S_KVGR5-HIGH+1(2).
          WHILE L_FR <= L_TO.
            L_CH1 = L_FR.
            CONCATENATE 'S0' L_CH1 INTO I_SECTOR-KVGR5.
            APPEND I_SECTOR.
            CONCATENATE '50' L_CH1 INTO I_SECTOR-KVGR5.
            APPEND I_SECTOR.
            CLEAR : I_SECTOR, L_CH1.
            L_FR = L_FR + 1.
          ENDWHILE.
        ENDIF.
      ENDLOOP.
    ENDFORM.                    " GET_SECTORS
    *&      Form  VALIDATE_SECTOR
    *       text
    *  -->  p1        text
    *  <--  p2        text
    FORM VALIDATE_SECTOR .
    *get data from AUFK for NP & NV type intab
    IF I_NP[] IS NOT INITIAL.
      SELECT OBJNR PSPEL KDAUF KDPOS FROM AUFK
      INTO TABLE I_AUFK
      FOR ALL ENTRIES IN I_NP
      WHERE OBJNR = I_NP-OBJNR.
    *Push this data in I_WB where order no exist in AUFK
      LOOP AT I_AUFK WHERE KDAUF NE ''.
        I_WB-OBJNR = I_AUFK-OBJNR.
        APPEND I_WB.
        CLEAR : I_AUFK, I_WB.
      ENDLOOP.
    *Push this data in I_PR where order no exist in AUFK
      LOOP AT I_AUFK WHERE PSPEL NE ''.
        I_PR-OBJNR = I_AUFK-OBJNR.
        APPEND I_PR.
        CLEAR : I_AUFK, I_PR.
      ENDLOOP.
    ENDIF.
      SELECT B~OBJNR A~KVGR5 FROM VBAK AS A INNER JOIN VBAP AS B
      ON A~VBELN = B~VBELN
      INTO TABLE I_VBAKP
      FOR ALL ENTRIES IN I_WB
      WHERE B~VBELN = I_WB-OBJNR+2(10).
      SORT I_VBAKP BY OBJNR.
      SORT I_COEP BY OBJNR.
      LOOP AT I_WB.
        READ TABLE I_VBAKP WITH KEY OBJNR = I_WB-OBJNR.
        IF SY-SUBRC = 0.
          READ TABLE I_SECTOR WITH KEY KVGR5 = I_VBAKP-KVGR5.
          IF SY-SUBRC <> 0.
           DELETE I_COEP WHERE OBJNR = I_WB-OBJNR.
             loop at i_coep where objnr = i_wb-objnr.
                  i_coep-kvgr5 = 'OTH'.
                  modify i_coep.
                  clear : i_coep.
             endloop.
          ELSE.
         READ TABLE I_COEP WITH KEY BELNR = I_WB-BELNR BUZEI = I_WB-BUZEI.
           READ TABLE I_COEP WITH KEY OBJNR = I_WB-OBJNR.
            IF SY-SUBRC = 0.
              I_COEP-KVGR5 = I_SECTOR-KVGR5.
              MODIFY I_COEP INDEX SY-TABIX.
              LOOP AT I_COEP WHERE OBJNR = I_WB-OBJNR.
                I_COEP-KVGR5 = I_SECTOR-KVGR5.
                MODIFY I_COEP.
                CLEAR : I_COEP.
              ENDLOOP.
            ENDIF.
          ENDIF.
        ELSE.
         DELETE I_COEP WHERE OBJNR = I_WB-OBJNR.
             loop at i_coep where objnr = i_wb-objnr.
                  i_coep-kvgr5 = 'OTH'.
                  modify i_coep.
                  clear : i_coep.
             endloop.
        ENDIF.
        CLEAR : I_VBAKP, I_SECTOR, I_COEP.
      ENDLOOP.
    IF I_PR[] IS NOT INITIAL.
      SELECT A~OBJNR A~PSPHI B~ASTNR
      FROM PRPS AS A INNER JOIN PROJ AS B
      ON A~PSPHI = B~PSPNR
      INTO TABLE I_PRPS
      FOR ALL ENTRIES IN I_PR
      WHERE A~OBJNR = I_PR-OBJNR.
    ENDIF.
      IF I_NV[] IS NOT INITIAL.
        SELECT OBJNR PROJN FROM AFVC INTO TABLE I_AFVC
        FOR ALL ENTRIES IN I_NV
        WHERE OBJNR = I_NV-OBJNR AND
              PROJN <> ''  .
        LOOP AT I_AFVC.
          CALL FUNCTION 'CONVERSION_EXIT_ABPSP_OUTPUT'
            EXPORTING
              INPUT         = I_AFVC-PROJN
            IMPORTING
              OUTPUT        = I_AFVC-PROJ.
          I_AFVC-PROJ = I_AFVC-PROJ+0(9).
            CALL FUNCTION 'CONVERSION_EXIT_KONPD_INPUT'
              EXPORTING
                INPUT           = I_AFVC-PROJ
             IMPORTING
               OUTPUT          =  I_AFVC-PSPNR.
            MODIFY I_AFVC.
            CLEAR : I_AFVC.
        ENDLOOP.
        SELECT PSPNR ASTNR FROM PROJ INTO TABLE I_PROJ
        FOR ALL ENTRIES IN I_AFVC
        WHERE PSPNR = I_AFVC-PSPNR.
        LOOP AT I_NV.
           I_PRPS-OBJNR = I_NV-OBJNR.
           READ TABLE I_AFVC WITH KEY OBJNR = I_NV-OBJNR.
           IF SY-SUBRC = 0.
             READ TABLE I_PROJ WITH KEY PSPNR = I_AFVC-PSPNR.
             IF SY-SUBRC = 0.
               I_PRPS-ASTNR = I_PROJ-ASTNR.
             ENDIF.
           ENDIF.
            APPEND I_PRPS.
            I_PR-OBJNR = I_NV-OBJNR.
            APPEND I_PR.
            CLEAR : I_NV, I_AFVC, I_PROJ, I_PR.
        ENDLOOP.
      ENDIF.
      SORT I_PRPS BY OBJNR.
      LOOP AT I_PR.
        READ TABLE I_PRPS WITH KEY OBJNR = I_PR-OBJNR.
        IF SY-SUBRC = 0.
          READ TABLE I_SECTOR WITH KEY KVGR5 = I_PRPS-ASTNR+5(3).
          IF SY-SUBRC <> 0.
           DELETE I_COEP WHERE OBJNR = I_PR-OBJNR.
             loop at i_coep where objnr = i_pr-objnr.
                  i_coep-kvgr5 = 'OTH'.
                  modify i_coep.
                  clear : i_coep.
             endloop.
          ELSE.
            READ TABLE I_COEP WITH KEY OBJNR = I_PR-OBJNR.
            IF SY-SUBRC = 0.
              CONCATENATE 'S' I_SECTOR-KVGR5+1(2) INTO I_COEP-KVGR5.
              MODIFY I_COEP INDEX SY-TABIX.
              LOOP AT I_COEP WHERE OBJNR = I_PR-OBJNR.
                CONCATENATE 'S' I_SECTOR-KVGR5+1(2) INTO I_COEP-KVGR5.
                MODIFY I_COEP.
                CLEAR : I_COEP.
              ENDLOOP.
            ENDIF.
          ENDIF.
        ELSE.
         DELETE I_COEP WHERE OBJNR = I_PR-OBJNR.
            loop at i_coep where objnr = i_pr-objnr.
                  i_coep-kvgr5 = 'OTH'.
                  modify i_coep.
                  clear : i_coep.
             endloop.
        ENDIF.
        CLEAR : I_PR, I_PRPS, I_SECTOR, I_COEP.
      ENDLOOP.
    ENDFORM.                    " VALIDATE_SECTOR
    *&      Form  GROUP_OUTPUT
    *       text
    *  -->  p1        text
    *  <--  p2        text
    FORM GROUP_OUTPUT .
      LOOP AT I_COEP.
        I_OUTPUT-KSTAR = I_COEP-KSTAR.
        IF I_COEP-KVGR5 = 'S01'.
          I_OUTPUT-S01 = I_COEP-WOGBTR.
        ENDIF.
        IF I_COEP-KVGR5 = 'S02'.
          I_OUTPUT-S02 = I_COEP-WOGBTR.
        ENDIF.
        IF I_COEP-KVGR5 = 'S03'.
          I_OUTPUT-S03 = I_COEP-WOGBTR.
        ENDIF.
        IF I_COEP-KVGR5 = 'S04'.
          I_OUTPUT-S04 = I_COEP-WOGBTR.
        ENDIF.
        IF I_COEP-KVGR5 = 'S05'.
          I_OUTPUT-S05 = I_COEP-WOGBTR.
        ENDIF.
        IF I_COEP-KVGR5 = 'S06'.
          I_OUTPUT-S06 = I_COEP-WOGBTR.
        ENDIF.
        IF I_COEP-KVGR5 = 'S07'.
          I_OUTPUT-S07 = I_COEP-WOGBTR.
        ENDIF.
        IF I_COEP-KVGR5 = 'S08'.
          I_OUTPUT-S08 = I_COEP-WOGBTR.
        ENDIF.
        IF I_COEP-KVGR5 = 'S09'.
          I_OUTPUT-S09 = I_COEP-WOGBTR.
        ENDIF.
        IF I_COEP-KVGR5 = 'OTH' OR I_COEP-KVGR5 = ''.
          I_OUTPUT-OTH = I_COEP-WOGBTR.
        ENDIF.
        COLLECT I_OUTPUT.
        CLEAR : I_COEP, I_OUTPUT.
      ENDLOOP.
      LOOP AT I_OUTPUT.
        SELECT SINGLE MCTXT FROM CSKU
        INTO I_OUTPUT-MCTXT
        WHERE KTOPL = 'KBL' AND
              SPRAS = SY-LANGU AND
              KSTAR = I_OUTPUT-KSTAR.
        I_OUTPUT-TOTAL = I_OUTPUT-S01 + I_OUTPUT-S02 + I_OUTPUT-S03
                        + I_OUTPUT-S04 + I_OUTPUT-S05 + I_OUTPUT-S06
                        + I_OUTPUT-S07 + I_OUTPUT-S08 + I_OUTPUT-S09
                        + I_OUTPUT-OTH.
        MODIFY I_OUTPUT.
        CLEAR : I_OUTPUT.
      ENDLOOP.
    ENDFORM.                    " GROUP_OUTPUT
    *&      Form  BUILD_CATLOG
    *       text
    *  -->  p1        text
    *  <--  p2        text
    FORM BUILD_CATLOG .
      CLEAR k_fieldcat.
      k_fieldcat-fieldname = 'KSTAR'.
      k_fieldcat-seltext_l  = text-002.
      k_fieldcat-hotspot   = 'X'.
      APPEND k_fieldcat TO t_fieldcat_sum_rep.
      CLEAR k_fieldcat.
      k_fieldcat-fieldname = 'MCTXT'.
      k_fieldcat-seltext_l  = text-003.
      k_fieldcat-hotspot   = 'X'.
      APPEND k_fieldcat TO t_fieldcat_sum_rep.
      READ TABLE I_SECTOR WITH KEY KVGR5 = 'S01'.
      IF SY-SUBRC = 0.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'S01'.
        k_fieldcat-seltext_l  = text-004.
        k_fieldcat-hotspot   = 'X'.
        k_fieldcat-do_sum    = 'X'. "Statement inserted by Sunil Maurya
        APPEND k_fieldcat TO t_fieldcat_sum_rep.
      ENDIF.
      READ TABLE I_SECTOR WITH KEY KVGR5 = 'S02'.
      IF SY-SUBRC = 0.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'S02'.
        k_fieldcat-seltext_l  = text-005.
        k_fieldcat-hotspot   = 'X'.
        k_fieldcat-do_sum    = 'X'. "Statement inserted by Sunil Maurya
        APPEND k_fieldcat TO t_fieldcat_sum_rep.
      ENDIF.
      READ TABLE I_SECTOR WITH KEY KVGR5 = 'S03'.
      IF SY-SUBRC = 0.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'S03'.
        k_fieldcat-seltext_l  = text-006.
        k_fieldcat-hotspot   = 'X'.
        k_fieldcat-do_sum    = 'X'. "Statement inserted by Sunil Maurya
        APPEND k_fieldcat TO t_fieldcat_sum_rep.
      ENDIF.
      READ TABLE I_SECTOR WITH KEY KVGR5 = 'S04'.
      IF SY-SUBRC = 0.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'S04'.
        k_fieldcat-seltext_l  = text-007.
        k_fieldcat-hotspot   = 'X'.
        k_fieldcat-do_sum    = 'X'. "Statement inserted by Sunil Maurya
        APPEND k_fieldcat TO t_fieldcat_sum_rep.
      ENDIF.
      READ TABLE I_SECTOR WITH KEY KVGR5 = 'S05'.
      IF SY-SUBRC = 0.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'S05'.
        k_fieldcat-seltext_l  = text-008.
        k_fieldcat-hotspot   = 'X'.
        k_fieldcat-do_sum    = 'X'. "Statement inserted by Sunil Maurya
        APPEND k_fieldcat TO t_fieldcat_sum_rep.
      ENDIF.
      READ TABLE I_SECTOR WITH KEY KVGR5 = 'S06'.
      IF SY-SUBRC = 0.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'S06'.
        k_fieldcat-seltext_l  = text-009.
        k_fieldcat-hotspot   = 'X'.
        k_fieldcat-do_sum    = 'X'. "Statement inserted by Sunil Maurya
        APPEND k_fieldcat TO t_fieldcat_sum_rep.
      ENDIF.
      READ TABLE I_SECTOR WITH KEY KVGR5 = 'S07'.
      IF SY-SUBRC = 0.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'S07'.
        k_fieldcat-seltext_l  = text-010.
        k_fieldcat-hotspot   = 'X'.
        k_fieldcat-do_sum    = 'X'. "Statement inserted by Sunil Maurya
        APPEND k_fieldcat TO t_fieldcat_sum_rep.
      ENDIF.
      READ TABLE I_SECTOR WITH KEY KVGR5 = 'S08'.
      IF SY-SUBRC = 0.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'S08'.
        k_fieldcat-seltext_l  = text-011.
        k_fieldcat-hotspot   = 'X'.
        k_fieldcat-do_sum    = 'X'. "Statement inserted by Sunil Maurya
        APPEND k_fieldcat TO t_fieldcat_sum_rep.
      ENDIF.
      READ TABLE I_SECTOR WITH KEY KVGR5 = 'S09'.
      IF SY-SUBRC = 0.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'S09'.
        k_fieldcat-seltext_l  = text-012.
        k_fieldcat-hotspot   = 'X'.
        k_fieldcat-do_sum    = 'X'. "Statement inserted by Sunil Maurya
        APPEND k_fieldcat TO t_fieldcat_sum_rep.
      ENDIF.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'OTH'.
        k_fieldcat-seltext_l  = text-019.
        k_fieldcat-hotspot   = 'X'.
        k_fieldcat-do_sum    = 'X'. "Statement inserted by Sunil Maurya
        APPEND k_fieldcat TO t_fieldcat_sum_rep.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'TOTAL'.
        k_fieldcat-seltext_l  = text-020.
        k_fieldcat-do_sum    = 'X'. "Statement inserted by Sunil Maurya
        APPEND k_fieldcat TO t_fieldcat_sum_rep.
    *=======================================================================
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'BELNR'.
        k_fieldcat-seltext_l  = text-013.
        APPEND k_fieldcat TO t_fieldcat_det_rep.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'BUZEI'.
        k_fieldcat-seltext_l  = text-014.
        APPEND k_fieldcat TO t_fieldcat_det_rep.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'PERIO'.
        k_fieldcat-seltext_l  = text-015.
        APPEND k_fieldcat TO t_fieldcat_det_rep.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'WOGBTR'.
        k_fieldcat-seltext_l  = text-016.
        APPEND k_fieldcat TO t_fieldcat_det_rep.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'OBJNR'.
        k_fieldcat-seltext_l  = text-018.
        APPEND k_fieldcat TO t_fieldcat_det_rep.
        CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'KSTAR'.
        k_fieldcat-seltext_l  = text-002.
        APPEND k_fieldcat TO t_fieldcat_det_rep.
         CLEAR k_fieldcat.
        k_fieldcat-fieldname = 'KVGR5'.
        k_fieldcat-seltext_l  = text-017.
        APPEND k_fieldcat TO t_fieldcat_det_rep.
    *==============================================================
    *       Statements inserted by Sunil Maurya for sorting
    CLEAR GS_SORT.
    GS_SORT-FIELDNAME = 'KSTAR'.
    GS_SORT-SPOS      = 1.
    GS_SORT-UP        = 'X'.
    APPEND GS_SORT TO GT_SORT.
    CLEAR GS_SORT1.
    GS_SORT1-FIELDNAME = 'KSTAR'.
    GS_SORT1-SPOS      = 1.
    GS_SORT1-UP        = 'X'.
    GS_SORT1-SUBTOT    = 'X'.
    APPEND GS_SORT1 TO GT_SORT1.
    *CLEAR GS_SORT1.
    *GS_SORT1-FIELDNAME = 'WOGBTR'.
    *GS_SORT1-SPOS      = 2.
    *GS_SORT1-UP        = 'X'.
    *GS_SORT1-SUBTOT    = 'X'.
    *APPEND GS_SORT1 TO GT_SORT1.
       FORM build_sortcat.
       k_sortcat-spos = 1.
       k_sortcat-fieldname = 'KSTAR'.
       k_sortcat-up      = 'X'.
      k_sortcat-down      = 'X'.
       APPEND k_sortcat TO it_sortcat.
    clear k_sortcat.
       ENDFORM.
    *       Statements inserted by Sunil Maurya for sorting
    ENDFORM.                    " BUILD_CATLOG
    *&      Form  DISPLAY_OUTPUT
    *       text
    *  -->  p1        text
    *  <--  p2        text
    FORM DISPLAY_OUTPUT .
      DATA l_repid TYPE syrepid.
      l_repid = sy-repid.
      CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
        EXPORTING
          i_callback_program      = l_repid
          it_fieldcat             = t_fieldcat_sum_rep
          i_callback_user_command = c_user_command
          i_save                  = 'A'
          it_sort                 = GT_SORT[] "Statements inserted by Sunil
        TABLES
          t_outtab                = I_OUTPUT
        EXCEPTIONS
          program_error           = 1
          OTHERS                  = 2.
      IF sy-subrc <> 0.
        MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      ENDIF.
    ENDFORM.                    " DISPLAY_OUTPUT
    *&      Form  user_command
    *       text
    FORM user_command USING r_ucomm     LIKE sy-ucomm
                            rs_selfield TYPE slis_selfield.
      DATA: l_repid TYPE syrepid.
      l_repid = sy-repid.
      CASE r_ucomm.
        WHEN '&IC1'.
          clear : i_detail. refresh : i_detail.
          LOOP AT I_COEP WHERE KVGR5 = rs_selfield-fieldname.
              MOVE i_coep to i_detail.
              append i_detail.
              clear : i_detail, i_coep.
          endloop.
          CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
            EXPORTING
              i_callback_program      = l_repid
              it_fieldcat             = t_fieldcat_det_rep
              i_callback_user_command = c_user_command
              i_save                  = 'A'
              it_sort                 = GT_SORT1[] "Inserted by Sunil
            TABLES
              t_outtab                = i_detail
            EXCEPTIONS
              program_error           = 1
              OTHERS                  = 2.
          IF sy-subrc <> 0.
            MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                    WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
          ENDIF.
      ENDCASE.
      ENDFORM.

    Run SE30 with internal tables switched on, then examine the top lines.
    See:
    SE30
    The ABAP Runtime Trace (SE30) - Quick and Easy
    Define your tables as sorted tables, or at least use BINARY SEARCH on the reads; otherwise these sorts are useless
    and the nested loop is slow.
    SORT I_VBAKP BY OBJNR.
    SORT I_COEP BY OBJNR.
    LOOP AT I_WB.
    READ TABLE I_VBAKP WITH KEY OBJNR = I_WB-OBJNR.
    IF SY-SUBRC = 0.
    READ TABLE I_SECTOR WITH KEY KVGR5 = I_VBAKP-KVGR5.
    IF SY-SUBRC <> 0.
    DELETE I_COEP WHERE OBJNR = I_WB-OBJNR.
    loop at i_coep where objnr = i_wb-objnr.
    i_coep-kvgr5 = 'OTH'.
    modify i_coep.
    clear : i_coep.
    endloop.
    ELSE.
    READ TABLE I_COEP WITH KEY BELNR = I_WB-BELNR BUZEI = I_WB-BUZEI.
    READ TABLE I_COEP WITH KEY OBJNR = I_WB-OBJNR.
    IF SY-SUBRC = 0.
    I_COEP-KVGR5 = I_SECTOR-KVGR5.
    MODIFY I_COEP INDEX SY-TABIX.
    LOOP AT I_COEP WHERE OBJNR = I_WB-OBJNR.
    I_COEP-KVGR5 = I_SECTOR-KVGR5.
    MODIFY I_COEP.
    CLEAR : I_COEP.
    ENDLOOP.
    This is all very slow.
    Read my blog on internal tables:
    Measurements on internal tables: Reads and Loops:
    Runtimes of Reads and Loops on Internal Tables
    If you really want to identify all bugs, then try this:
    Z_SE30_COMPARE
    A Tool to Compare Runtime Measurements: Z_SE30_COMPARE
    Nonlinearity Check
    Nonlinearity Check Using the Z_SE30_COMPARE
    It needs a bit of experience, but then it is very powerful!
    The following is also a waste of time, though not a nested loop: you sort and re-sort the same table,
    but the sort is useless because the DELETEs are still sequential on standard tables.
    Put all this into ONE loop on I_COEP.
    SORT I_COEP BY OBJNR.
    DELETE I_COEP WHERE OBJNR+0(2) <> 'VB' AND
    OBJNR+0(2) <> 'PR' AND
    OBJNR+0(2) <> 'NV' AND
    OBJNR+0(2) <> 'NP' AND
    OBJNR+0(2) <> 'AO'.
    SORT I_COEP BY KSTAR.
    DELETE I_COEP WHERE KSTAR+0(5) <> '00003' AND
    KSTAR+0(5) <> '00004'.
    LOOP AT I_COEP WHERE OBJNR+0(2) = 'NP'.
    MOVE I_COEP TO I_NP.
    APPEND I_NP.
    CLEAR : I_NP, I_COEP.
    ENDLOOP.
    LOOP AT I_COEP WHERE OBJNR+0(2) = 'NV'.
    MOVE I_COEP TO I_NV.
    APPEND I_NV.
    CLEAR : I_NV, I_COEP.
    ENDLOOP.
    LOOP AT I_COEP WHERE OBJNR+0(2) = 'VB' .
    MOVE I_COEP TO I_WB.
    APPEND I_WB.
    CLEAR : I_WB, I_COEP.
    ENDLOOP.
    LOOP AT I_COEP WHERE OBJNR+0(2) = 'PR' .
    MOVE I_COEP TO I_PR.
    APPEND I_PR.
    CLEAR : I_PR, I_COEP.
    ENDLOOP.
    There is probably more, but with the compare tool you can find everything.
    Siegfried

  • Query taking too much time!!!!!

    Sorry for posting without format. I will post another question using your formatting blog.
    Edited by: San on 22 Feb, 2011 3:18 PM

    When your query takes too long ...
    HOW TO: Post a SQL statement tuning request - template posting

  • Select query taking too much time to fetch data from pool table a005

    Dear all,
    I am using two pool tables, A005 and A006, in my program, and I am using a select query to fetch data from these tables. An example is shown below.
    select * from a005 into table t_a005 for all entries in it_itab
                       where vkorg in s_vkorg
                       and     matnr in  s_matnr
                       and     aplp   in  s_aplp
                       and     kmunh = it_itab-kmunh.
    I can't create an index either, as these are pool tables. If there is any solution, please help me with this.
    Thanks,

    It would be helpful to know what other fields are in the internal table you are using for the FOR ALL ENTRIES.
    In general, you should code the fields in your selection in the same order as they appear in the database table. If you do not supply the top key field, the entire table is read, and if it is large then that is going to take a lot of time. The more key fields from the beginning of the key that you can supply, the faster the retrieval.
    Regards,
    Brent

  • Insert Query taking too much time (5 hrs)

    Hi experts,
    We are using "Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit" on our server, and we are trying to execute a SQL script which inserts data into a table from 5 different tables.
    Note: the SQL query is passed directly to the server.
    We need your expertise as to what is causing the script to run so slowly (it takes around 5 hrs to execute).
    The script is as follows:
    Insert /*+ append */ into TABLE1 NOLOGGING
    SELECT DISTINCT /*+ PARALLEL(TABLE1, 4) */ ROW_NUMBER() OVER
    (ORDER BY COL1, COL2, COL3, COL4, COL5, COL6, COL7, COL8, COL9) AS PK_ID, TABLE1.COL1 AS PRODUCT_ID, TABLE1.M_NBR AS M_NBR, TABLE1.P_NBR AS P_NBR, TABLE1.C_ID AS C_ID, TABLE1.TABLE1_SERV_DT AS TABLE1_SERV_DT, TABLE1.C_KEY AS C_KEY, TABLE1.C_DEN AS C_DEN, TABLE1.A_S_DT AS A_S_DT,
    TABLE1.F_ID AS F_ID, TABLE1.L_DT AS L_DT, TABLE1.A_DT AS A_DT, TABLE1.D_DT AS D_DT, TABLE1.C_TYPE AS C_TYPE,
    TABLE1.S_COL AS S_COL, TABLE1.S_TBL AS S_TBL, TABLE4.ME AS ME, TABLE1.ER_YN AS ER_YN
    FROM
    TABLE1,
    TABLE2,
    TABLE3,
    TABLE4,
    TABLE5
    WHERE
    TABLE1.COL1 = TABLE2.COL1
    AND
    TABLE1.COL2 = TABLE3.COL2
    AND
    TABLE1.COL3 = TABLE4.COL3
    AND
    (TABLE1.SERV_DT BETWEEN RY_START AND RY_END)
    AND
    (TABLE1.CLAIM = :"SYS_B_0")
    AND
    (TABLE2.INACTIVATED = :"SYS_B_1")
    AND
    (TABLE2.TABLE1 = :"SYS_B_2")
    AND
    (TABLE2.PRELIM = :"SYS_B_3")
    AND
    (TABLE3.MEASURE = :"SYS_B_4")
    AND
    (TABLE3.EVENTS = :"SYS_B_5");

    In addition to APC's reply, here's some more on the subject (the values of the CURSOR_SHARING parameter):
    EXACT - (default) Only statements with an exact text match will share the same SQL area.
    SIMILAR - Oracle will substitute bind variables for all literals, thereby increasing the chances of a text match. Oracle will force similar statements to share the SQL area without deteriorating execution plans.
    FORCE - The same as SIMILAR, except that execution plans may deteriorate. This option should only be used if the risk of suboptimal plans is outweighed by the increase in cursor sharing.
    Summary from:
    http://www.oracle-base.com/articles/9i/performance-enhancements-9i.php#CursorSharing
    In-depth:
    http://www.oracle.com/technetwork/issue-archive/2006/06-jan/o16asktom-101983.html
    and you can search the docs @ http://www.oracle.com/pls/db112/homepage for more.
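    As a quick check (a sketch; adjust to your environment and agree it with your DBA), you can see which cursor-sharing mode the instance is running with and test the literal-preserving setting at session level before re-running the insert:
    -- The :"SYS_B_n" binds in your script suggest cursor_sharing is SIMILAR or FORCE
    SELECT name, value FROM v$parameter WHERE name = 'cursor_sharing';
    -- Try exact matching for this session only, then re-run the insert and compare
    ALTER SESSION SET cursor_sharing = 'EXACT';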
    Hope your DBA and you can fix it.
    Keep us posted.

  • Partition Table Query taking Too Much Time

    I have created a partitioned table and created a local partitioned index on a column whose datatype is DATE.
    Now when I query the table and use the index column in the WHERE clause, it scans the whole table (full scan). The query is:
    Select * From mytable
    where to_char(transaction_date, 'DD-MON-YY') = '01-Aug-07';
    I have to use the to_char function rather than to_date due to a front-end application problem.

    Before we go too far with this: if you manually query with TO_DATE on the variable instead of TO_CHAR on the column, does the query actually use the index?
    The TO_CHAR on the column will definitely stop Oracle from using any index on that column. If the query does use the index when you TO_DATE the variable, then as I see it you have three options. First, fix the application problem that won't let you use TO_DATE from the application. Second, change the application to call a function returning a ref cursor, pass the date string as a parameter to the function, and do the TO_DATE inside the function.
    Third, you could consider creating a function-based index on TO_CHAR(transaction_date, 'dd-Mon-yy'). This would be the least desirable option, particularly if you also select records based on a range of transaction_dates, since it loses a lot of information that the optimizer could use in devising an efficient query plan. It could also change your results for a range scan.
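    To illustrate the first and third options (a sketch only; the index name is made up, and the 'DD-MON-RR' format is an assumption about how the front end supplies the date):
    -- Option 1: leave transaction_date bare so partition pruning / the local index can be used
    SELECT *
    FROM   mytable
    WHERE  transaction_date >= TO_DATE('01-AUG-07', 'DD-MON-RR')
    AND    transaction_date <  TO_DATE('01-AUG-07', 'DD-MON-RR') + 1;
    -- Option 3 (least desirable): a local function-based index matching the existing predicate
    CREATE INDEX mytable_txndt_fbi
      ON mytable (TO_CHAR(transaction_date, 'dd-Mon-yy')) LOCAL;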
    John

  • Query taking too much time when running through discover

    Hi
    I have created a report with a SQL query by creating a custom folder in Oracle Discoverer Desktop. The query uses a parameter with sys_context. When the report is executed from Discoverer it takes more than 5 minutes, while the same query runs in 30 seconds when executed in the database through TOAD.
    Please let me know what could be the reason for this.
    Thanks

    Hi,
    The first thing to check is whether the query is running to completion in TOAD. By default, TOAD just selects the first 50 rows, whereas Discoverer must return all the rows before displaying results if a crosstab report is used.
    Secondly, check that the queries and the explain plans are the same in Discoverer and TOAD. Although Discoverer shows the SQL in the SQL Inspector, this isn't necessarily the SQL actually sent to the database. Use TOAD to interrogate the Discoverer session to determine the actual SQL, and compare this SQL and its explain plan to the SQL you ran in TOAD.
    Thirdly, check that the session context is the same in both cases. So check that any custom contexts and the USERENV context are the same, and if any security packages or VPD policies are used in the SQL that these have been initialised the same way; a quick comparison like the sketch below can help.
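    For example (a sketch; 'MY_APP_CTX' and its attribute are made-up names standing in for whatever your sys_context parameter actually references), run the same check in the Discoverer session and in the TOAD session and compare the output:
    SELECT SYS_CONTEXT('USERENV', 'SESSION_USER')   AS usr,
           SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA') AS sch,
           SYS_CONTEXT('MY_APP_CTX', 'ORG_ID')      AS app_ctx   -- hypothetical custom context
    FROM   dual;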
    If you still cannot determine the difference then trace both sessions.
    Rod West

  • Spatial query with sdo_aggregate_union taking too much time

    Hello friends,
    The following query is taking too much time to execute.
    table1 contains around 2000 records.
    table2 contains 124 rows.
    SELECT
    table1.id
    , table1.txt
    , table1.id2
    , table1.acti
    , table1.acti
    , table1.geom as geom
    FROM
    table1
    WHERE
    sdo_relate(
    table1.geom,
    (select sdo_aggr_union(sdoaggrtype(geom, 0.0005)) from table2),
    'mask=(ANYINTERACT) querytype=window'
    ) = 'TRUE'
    I am new to Spatial. I am trying to find the list of geometries in table1 which fall within the geometry stored in table2.
    Thanks

    Hi, thanks a lot for your reply.
    But is it not required to use the sdo_aggr_union function to find out whether the geometry in one table is inside the other geometry?
    Let me give you a clearer picture...
    What I am trying to do is: table1 contains the list of all stations (station information) of the state, and table2 contains the list of city areas. So I want to find the stations which belong to a city.
    For this I thought to get the aggregated union of the city areas and then check for any interaction of that final aggregation result with the station geometry, to check whether it is in a city or not.
    I hope this helps you understand my query.
    Thanks
    I appreciate your efforts.
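    For reference: if the goal is just to find the stations that fall inside any city area, a per-row spatial join avoids building the aggregate union altogether. A sketch (column names follow the query above; it assumes a spatial index exists on table1.geom):
    SELECT DISTINCT s.id, s.txt
    FROM   table1 s,   -- stations
           table2 c    -- city areas
    WHERE  sdo_anyinteract(s.geom, c.geom) = 'TRUE';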

  • Report taking too much time in the portal

    Hi friends,
    We have developed a report on an ODS and published it on the portal.
    The problem is that when users execute the report at the same time it takes too much time, and because of this the performance is very poor.
    Is there any way to sort out this issue? For example, can we send the report to the individual users' mail IDs so that they do not have to log in to the portal, or can we create the same report on a cube?
    What would be the main difference if the report were built on the cube rather than the ODS?
    Please help me.
    Thanks in advance,
    sridath

    Hi
    Try the following to improve the performance of the query.
    Find the query run-time. Where to find the query run-time:
    SAP Note 557870 - 'FAQ BW Query Performance'
    SAP Note 130696 - Performance trace in BW
    This info may be helpful.
    General tips
    Use aggregates and compression.
    Use fewer and less complex cell definitions where possible.
    1. Avoid using too many navigational attributes.
    2. Avoid RKFs and CKFs.
    3. Avoid too many characteristics in the rows.
    Use T-codes ST03 or ST03N:
    Go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution for a particular day, check the query execution time.
    /people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
    /people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
    Try table rsddstats to get the statistics
    Using cache memory will decrease the loading time of the report.
    Run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
    Also try
    1. Use different parameters in ST03 to see the two important figures: the aggregation ratio, and the records transferred to the front end versus the records selected from the DB.
    2. Use the program SAP_INFOCUBE_DESIGNS (performance of BW InfoCubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and its aggregates.
    Go to SE38 > run the program SAP_INFOCUBE_DESIGNS.
    It shows the dimension vs. fact table sizes in percent. If you take the speed of queries on a cube as the performance metric of the cube, measure the query runtime.
    3. To check the performance of the aggregates, see the columns Valuation and Usage for the aggregates.
    Open the aggregates and observe the VALUATION and USAGE columns.
    The plus/minus signs are the valuation of the aggregate design and usage. The more plus signs, the better the evaluation: the aggregate's compression ratio is good, it is accessed often, and it satisfies more queries (in effect, performance is good). The more minus signs, the worse the evaluation: the compression ratio and the access are not so good.
    If the valuation is "-----" the aggregate is just overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
    In the Usage column you can see how far the aggregate has actually been used by queries.
    Thus you can check the performance of the aggregates.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    performance ISSUE related to AGGREGATE
    Note 356732 - Performance Tuning for Queries with Aggregates
    Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
    4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Also, your query performance can depend on the selection criteria; since you have a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
    5. In BI 7, statistics need to be activated for ST03 and the BI Administration Cockpit to work.
    Implement the BW Statistics Business Content: you need to install it, feed data into it, and then analyse via the ready-made reports.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can go to T-Code DB20 which gives you all the performance related information like
    Partitions
    Databases
    Schemas
    Buffer Pools
    Tablespaces etc
    Use the tool RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
    If aggregates contain incorrect data, you must regenerate them.
    202469 - Using aggregate check tool
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    You can find out whether an aggregate is useful or useless through a process of checking the tables RSDDSTATAGGRDEF*.
    Run the query in RSRT with statistics, execute, and when you come back you will get a STATUID; copy this and check it in the table.
    This shows exactly which InfoObjects the query hits; if any one of the objects is missing, it is a useless aggregate.
    6. Check SE11 > table RSDDAGGRDIR. You can find the last call-up in the table.
    Generate Report in RSRT
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    Assign points if useful
    Cheers
    SM

  • Query is taking too much time

    hi
    The following query is taking too much time (more than 30 minutes); we are working with 11g.
    The table has three columns (rid, ida, geometry) and indexes have been created on all columns.
    The table has around 5,40,000 (540,000) records of point geometries.
    Please help me with your suggestions. I want to select the duplicate point geometries where ida='CORD'.
    SQL> select a.rid, b.rid from totalrecords a, totalrecords b where a.ida='CORD' and b.idat='CORD' and
    sdo_equal(a.geometry, b.geometry)='TRUE' and a.rid !=b.rid order by 1,2;
    regards

    "I have removed some AND conditions that were not necessary."
    It's just that Oracle can see, for example, in
    a.ida='CORD' AND
    b.idat='CORD' AND
    a.rid !=b.rid AND
    sdo_equal(a.geometry, b.geometry)='TRUE'
    ORDER BY 1,2;
    that if a.ida does not equal 'CORD', the whole set of conditions evaluates to FALSE, so Oracle will not bother evaluating the rest of the conditions, because it is all AND'ed together and TRUE AND FALSE = FALSE.
    So if you place your least expensive conditions first (even though the optimizer can and will reorder conditions) this will give you a small performance benefit. Too small to notice normally, but on 5.4 million records it should be noticeable.
    "and I have set layer_gtype=POINT."
    Good, that will help. I forgot about that one (thanks Luc!).
    "Now I am facing the problem of DELETING the duplicate point geometries. The following query is taking too much time."
    What is too much time? Do you need to delete these duplicate points on a daily or hourly basis? Or is this a one-time cleanup action? If it is a one-time cleanup operation, does it really matter if it takes half an hour?
    And if this is a daily or even hourly operation, then why not prevent the duplicates from entering the table in the first place? That will save you from having to clean up afterwards. Of course, this might not be possible with your business requirements.
    Lastly: can you post an explain plan for your queries? That might give us an idea of what is taking so much time. Please enclose the results of the explain plan with
    [ c o d e ]
    <code/results here>
    [ / c o d e ]
    that way the original formatting is kept and it makes things much easier to read.
    Regards,
    Stefan
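    For reference, one common pattern for the cleanup itself, as a rough, untested sketch (it assumes rid is unique and that you want to keep the lowest rid of each duplicate group; try it on a copy of the table first):
    DELETE FROM totalrecords a
    WHERE  a.ida = 'CORD'
    AND    EXISTS (SELECT 1
                   FROM   totalrecords b
                   WHERE  b.ida = 'CORD'
                   AND    b.rid < a.rid                              -- keep the smallest rid
                   AND    sdo_equal(b.geometry, a.geometry) = 'TRUE');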
