Query performance improvement techniques (urgent)

Hi experts,
I have a business requirement where I must fill a mandatory variable with 59 days of data from different regional cubes. These are all compressed cubes, with almost 45 million records per cube. When I run with the default selection it picks one aggregate defined on the cube, but after running for some time it goes to a short dump. I know that if report execution takes more than 10 minutes the performance is very bad! Can you suggest how I can improve the performance of this report?
When I enter the same selections in the list cube transaction, it raises an LSLVCF36 error (message_type_text error).
Can you guys help me with this? I'll give full points.
thanks and regards,
veeru

Hi,
Try these
Go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution for a particular day > check the query execution time.
1. Use different parameters in ST03 to see the two important figures: the aggregation ratio and the number of records selected in the database versus transferred to the frontend (a rough SQL sketch of this check follows the links below).
2. Use the program SAP_INFOCUBE_DESIGNS to see the aggregation ratio for the cube. If the cube does not appear in the output of this report, try running RSRV checks on the cube and its aggregates.
3. The plus/minus signs are the valuation of the aggregate's design and usage. The greater the number of plus signs, the better the evaluation: the aggregate compresses well, is accessed often, and satisfies more queries (in effect, performance is good). The greater the number of minus signs, the worse the evaluation: the compression ratio is not good and the aggregate is rarely accessed.
If the valuation is "-----", the aggregate is pure overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
Refer.
http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
4. Run your query in RSRT in debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug options. This will tell you whether it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
Also, query performance can depend on the selection criteria; since you have given a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
5. In BI 7, statistics need to be activated for ST03 and the BI Administration Cockpit to work.
http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
/people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
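As a rough illustration of the aggregation-ratio check in point 1, you could also read the BW statistics table directly with SQL. This is only a hedged sketch: the field names QDBSEL (records selected on the database) and QDBTRANS (records transferred) come from the 3.x RSDDSTAT layout, so verify them in SE11 for your release first.
-- per-cube ratio of records read on the DB to records handed to the frontend
SELECT infocube,
       SUM(qdbsel)   AS db_selected,
       SUM(qdbtrans) AS transferred,
       SUM(qdbsel) / NULLIF(SUM(qdbtrans), 0) AS aggregation_ratio
FROM   rsddstat
GROUP  BY infocube;
A ratio well above 10 is the usual sign that an aggregate (or a better-designed one) would pay off.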
Thanks,
JituK

Similar Messages

  • Select query performance improvement - Index on EDIDC table

    Hi Experts,
    I have a scenario where in I have to select data from the table EDIDC. The select query being used is given below.
      SELECT  docnum
              direct
              mestyp
              mescod
              rcvprn
              sndprn
              upddat
              updtim
      INTO CORRESPONDING FIELDS OF TABLE t_edidc
      FROM edidc
      FOR ALL ENTRIES IN t_error_idoc
      WHERE
      upddat GE gv_date1 AND
      upddat LE gv_date2 AND
      updtim GE p_time AND
      status EQ t_error_idoc-status.
    As the volume of data is very high, our client requested that we create an index, or use an existing one, to improve the performance of the data selection query.
    Question:
    1.    How do we identify the index to be used?
    2.    On which fields should the index be created to improve the performance (if the available indexes don't cater to our case)?
    3.    What will be the impact on the table performance if we create a new index?
    Regards ,
    Raghav

    Question:
    1.    How do we identify the index to be used?
    Generally the index is selected automatically by SAP (the DB optimizer). (You can still mention the index name in your select query by changing the syntax.)
    For your select query the second index will be used automatically by the optimizer, because the select query has 'upddat' and 'updtim' in the sequence before 'status'.
    2.    On which fields should the index be created to improve the performance (if the available indexes don't cater to our case)?
    (Create a new index with MANDT and the 4 fields which are in the WHERE clause, in sequence.)
    3.    What will be the impact on the table performance if we create a new index?
    (Since the newly created index will be only the 4th index on the table, there shouldn't be any side effects.)
    After creating the index, check the change in performance of the current program and also of some other programs that have select queries on EDIDC (with various types of WHERE clauses, preferably) to verify that the newly created index does not have a negative impact on performance. Additionally, if possible, check whether you can avoid INTO CORRESPONDING FIELDS. A hedged sketch of the suggested index follows.
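    The sketch below is plain DDL only for illustration; the Z01 name is hypothetical, and on an SAP system you would create the secondary index in SE11 rather than directly on the database:
      -- hypothetical secondary index matching the WHERE clause above
      CREATE INDEX "EDIDC~Z01" ON edidc
          (mandt, upddat, updtim, status);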
    Regards ,
    Seth

  • Index not getting used in the query(Query performance improvement)

    Hi,
    I am using Oracle 10g and have this query:
    select distinct bk.name             "Book Name",
                    fs.feed_description "Feed Name",
                    fbs.cob_date        "Cob",
                    at.description      "Data Type",
                    ah.user_name        " User",
                    ah.comments         "Comments",
                    ah.time_draft
      from Action_type       at,
           action_history    ah,
           sensitivity_audit sa,
           logical_entity    le,
           feed_static       fs,
           feed_book_status  fbs,
           feed_instance     fi,
           marsnode          bk
    where at.description = 'Regress Positions'
       and fbs.cob_date BETWEEN '01 Feb 2011' AND '08 Feb 2011'
       and fi.most_recent = 'Y'
       and bk.close_date is null
       and ah.time_draft = 'after'
       and sa.close_action_id is null
       and le.close_action_id is null
       and at.action_type_id = ah.action_type_id
       and ah.action_id = sa.create_action_id
       and le.logical_entity_id = sa.type_id
       and sa.feed_id = fs.feed_id
       and sa.book_id = bk.node_id
       and sa.feed_instance_id = fi.feed_instance_id
       and fbs.feed_instance_id = fi.feed_instance_id
       and fi.feed_id = fs.feed_id
    union
    select distinct bk.name             "Book Name",
                    fs.feed_description "Feed Name",
                    fbs.cob_date        "Cob",
                    at.description      "Data Type",
                    ah.user_name        " User",
                    ah.comments         "Comments",
                    ah.time_draft
      from feed_book_status         fbs,
           marsnode                 bk,
           feed_instance            fi,
           feed_static              fs,
           feed_book_status_history fbsh,
           Action_type              at,
           Action_history           ah
    where fbs.cob_date BETWEEN '01 Feb 2011' AND '08 Feb 2011'
       and ah.action_type_id = 103
       and bk.close_date is null
       and ah.time_draft = 'after'
    --   and ah.action_id = fbs.action_id
       and fbs.book_id = bk.node_id
       and fbs.book_id = fbsh.book_id
       and fbs.feed_instance_id = fi.feed_instance_id
       and fi.feed_id = fs.feed_id
       and fbsh.create_action_id = ah.action_id
       and at.action_type_id = ah.action_type_id
    union
    select distinct bk.name             "Book Name",
                    fs.feed_description "Feed Name",
                    fbs.cob_date        "Cob",
                    at.description      "Data Type",
                    ah.user_name        " User",
                    ah.comments         "Comments",
                    ah.time_draft
      from feed_book_status         fbs,
           marsnode                 bk,
           feed_instance            fi,
           feed_static              fs,
           feed_book_status_history fbsh,
           Action_type              at,
           Action_history           ah
    where fbs.cob_date BETWEEN '01 Feb 2011' AND '08 Feb 2011'
       and ah.action_type_id = 101
       and bk.close_date is null
       and ah.time_draft = 'after'
       and fbs.book_id = bk.node_id
       and fbs.book_id = fbsh.book_id
       and fbs.feed_instance_id = fi.feed_instance_id
       and fi.feed_id = fs.feed_id
       and fbsh.create_action_id = ah.action_id
       and at.action_type_id = ah.action_type_id;
    This is the execution plan:
    | Id  | Operation                            | Name                     | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT                     |                          |   231 | 43267 |   104K (85)|
    |   1 |  SORT UNIQUE                         |                          |   231 | 43267 |   104K (85)|
    |   2 |   UNION-ALL                          |                          |       |       |            |
    |   3 |    NESTED LOOPS                      |                          |     1 |   257 | 19540  (17)|
    |   4 |     NESTED LOOPS                     |                          |     1 |   230 | 19539  (17)|
    |   5 |      NESTED LOOPS                    |                          |     1 |   193 | 19537  (17)|
    |   6 |       NESTED LOOPS                   |                          |     1 |   152 | 19534  (17)|
    |*  7 |        HASH JOIN                     |                          |   213 | 26625 | 19530  (17)|
    |*  8 |         TABLE ACCESS FULL            | LOGICAL_ENTITY           |    12 |   264 |     2   (0)|
    |*  9 |         HASH JOIN                    |                          |  4267 |   429K| 19527  (17)|
    |* 10 |          HASH JOIN                   |                          |  3602 | 90050 |  1268  (28)|
    |* 11 |           INDEX RANGE SCAN           | IDX_FBS_CD_FII_BI        |  3602 | 46826 |    22   (5)|
    |* 12 |           TABLE ACCESS FULL          | FEED_INSTANCE            |   335K|  3927K|  1217  (27)|
    |* 13 |          TABLE ACCESS FULL           | SENSITIVITY_AUDIT        |   263K|    19M| 18236  (17)|
    |  14 |        TABLE ACCESS BY INDEX ROWID   | FEED_STATIC              |     1 |    27 |     1   (0)|
    |* 15 |         INDEX UNIQUE SCAN            | IDX_FEED_STATIC_FI       |     1 |       |     0   (0)|
    |* 16 |       TABLE ACCESS BY INDEX ROWID    | MARSNODE                 |     1 |    41 |     3   (0)|
    |* 17 |        INDEX RANGE SCAN              | PK_MARSNODE              |     3 |       |     2   (0)|
    |* 18 |      TABLE ACCESS BY INDEX ROWID     | ACTION_HISTORY           |     1 |    37 |     2   (0)|
    |* 19 |       INDEX UNIQUE SCAN              | PK_ACTION_HISTORY        |     1 |       |     1   (0)|
    |* 20 |     TABLE ACCESS BY INDEX ROWID      | ACTION_TYPE              |     1 |    27 |     1   (0)|
    |* 21 |      INDEX UNIQUE SCAN               | PK_ACTION_TYPE           |     1 |       |     0   (0)|
    |* 22 |    TABLE ACCESS BY INDEX ROWID       | MARSNODE                 |     1 |    41 |     3   (0)|
    |  23 |     NESTED LOOPS                     |                          |   115 | 21505 | 42367  (28)|
    |* 24 |      HASH JOIN                       |                          |   114 | 16644 | 42023  (28)|
    |  25 |       NESTED LOOPS                   |                          |   114 | 13566 | 42007  (28)|
    |* 26 |        HASH JOIN                     |                          |   114 | 12426 | 41777  (28)|
    |* 27 |         HASH JOIN                    |                          |   957 | 83259 | 41754  (28)|
    |* 28 |          TABLE ACCESS FULL           | ACTION_HISTORY           |  2480 | 91760 | 30731  (28)|
    |  29 |          NESTED LOOPS                |                          |  9570K|   456M| 10234  (21)|
    |  30 |           TABLE ACCESS BY INDEX ROWID| ACTION_TYPE              |     1 |    27 |     1   (0)|
    |* 31 |            INDEX UNIQUE SCAN         | PK_ACTION_TYPE           |     1 |       |     0   (0)|
    |  32 |           TABLE ACCESS FULL          | FEED_BOOK_STATUS_HISTORY |  9570K|   209M| 10233  (21)|
    |* 33 |         INDEX RANGE SCAN             | IDX_FBS_CD_FII_BI        |  3602 | 79244 |    22   (5)|
    |  34 |        TABLE ACCESS BY INDEX ROWID   | FEED_INSTANCE            |     1 |    10 |     2   (0)|
    |* 35 |         INDEX UNIQUE SCAN            | PK_FEED_INSTANCE         |     1 |       |     1   (0)|
    |  36 |       TABLE ACCESS FULL              | FEED_STATIC              |  2899 | 78273 |    16   (7)|
    |* 37 |      INDEX RANGE SCAN                | PK_MARSNODE              |     1 |       |     2   (0)|
    |* 38 |    TABLE ACCESS BY INDEX ROWID       | MARSNODE                 |     1 |    41 |     3   (0)|
    |  39 |     NESTED LOOPS                     |                          |   115 | 21505 | 42367  (28)|
    |* 40 |      HASH JOIN                       |                          |   114 | 16644 | 42023  (28)|
    |  41 |       NESTED LOOPS                   |                          |   114 | 13566 | 42007  (28)|
    |* 42 |        HASH JOIN                     |                          |   114 | 12426 | 41777  (28)|
    |* 43 |         HASH JOIN                    |                          |   957 | 83259 | 41754  (28)|
    |* 44 |          TABLE ACCESS FULL           | ACTION_HISTORY           |  2480 | 91760 | 30731  (28)|
    |  45 |          NESTED LOOPS                |                          |  9570K|   456M| 10234  (21)|
    |  46 |           TABLE ACCESS BY INDEX ROWID| ACTION_TYPE              |     1 |    27 |     1   (0)|
    |* 47 |            INDEX UNIQUE SCAN         | PK_ACTION_TYPE           |     1 |       |     0   (0)|
    |  48 |           TABLE ACCESS FULL          | FEED_BOOK_STATUS_HISTORY |  9570K|   209M| 10233  (21)|
    |* 49 |         INDEX RANGE SCAN             | IDX_FBS_CD_FII_BI        |  3602 | 79244 |    22   (5)|
    |  50 |        TABLE ACCESS BY INDEX ROWID   | FEED_INSTANCE            |     1 |    10 |     2   (0)|
    |* 51 |         INDEX UNIQUE SCAN            | PK_FEED_INSTANCE         |     1 |       |     1   (0)|
    |  52 |       TABLE ACCESS FULL              | FEED_STATIC              |  2899 | 78273 |    16   (7)|
    |* 53 |      INDEX RANGE SCAN                | PK_MARSNODE              |     1 |       |     2   (0)|
    ------------------------------------------------------------------------------------------------------
    and the predicate info:
    Predicate Information (identified by operation id):
       7 - access("LE"."LOGICAL_ENTITY_ID"="SA"."TYPE_ID")
       8 - filter("LE"."CLOSE_ACTION_ID" IS NULL)
       9 - access("SA"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
      10 - access("FBS"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
      11 - access("FBS"."COB_DATE">=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "FBS"."COB_DATE"<=TO_DATE(' 2011-02-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
      12 - filter("FI"."MOST_RECENT"='Y')
      13 - filter("SA"."CLOSE_ACTION_ID" IS NULL)
      15 - access("FI"."FEED_ID"="FS"."FEED_ID")
           filter("SA"."FEED_ID"="FS"."FEED_ID")
      16 - filter("BK"."CLOSE_DATE" IS NULL)
      17 - access("SA"."BOOK_ID"="BK"."NODE_ID")
      18 - filter("AH"."TIME_DRAFT"='after')
      19 - access("AH"."ACTION_ID"="SA"."CREATE_ACTION_ID")
      20 - filter("AT"."DESCRIPTION"='Regress Positions')
      21 - access("AT"."ACTION_TYPE_ID"="AH"."ACTION_TYPE_ID")
      22 - filter("BK"."CLOSE_DATE" IS NULL)
      24 - access("FI"."FEED_ID"="FS"."FEED_ID")
      26 - access("FBS"."BOOK_ID"="FBSH"."BOOK_ID")
      27 - access("FBSH"."CREATE_ACTION_ID"="AH"."ACTION_ID" AND
                  "AT"."ACTION_TYPE_ID"="AH"."ACTION_TYPE_ID")
      28 - filter("AH"."ACTION_TYPE_ID"=103 AND "AH"."TIME_DRAFT"='after')
      31 - access("AT"."ACTION_TYPE_ID"=103)
      33 - access("FBS"."COB_DATE">=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "FBS"."COB_DATE"<=TO_DATE(' 2011-02-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
      35 - access("FBS"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
      37 - access("FBS"."BOOK_ID"="BK"."NODE_ID")
      38 - filter("BK"."CLOSE_DATE" IS NULL)
      40 - access("FI"."FEED_ID"="FS"."FEED_ID")
      42 - access("FBS"."BOOK_ID"="FBSH"."BOOK_ID")
      43 - access("FBSH"."CREATE_ACTION_ID"="AH"."ACTION_ID" AND
                  "AT"."ACTION_TYPE_ID"="AH"."ACTION_TYPE_ID")
      44 - filter("AH"."ACTION_TYPE_ID"=101 AND "AH"."TIME_DRAFT"='after')
      47 - access("AT"."ACTION_TYPE_ID"=101)
      49 - access("FBS"."COB_DATE">=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "FBS"."COB_DATE"<=TO_DATE(' 2011-02-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
      51 - access("FBS"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
      53 - access("FBS"."BOOK_ID"="BK"."NODE_ID")
    Note
       - 'PLAN_TABLE' is old version
    In this query, mainly the ACTION_HISTORY and FEED_BOOK_STATUS_HISTORY tables are being accessed fully even though there are indexes created on them, like this:
    ACTION_HISTORY
    ACTION_ID  column Unique index
    FEED_BOOK_STATUS_HISTORY
    (FEED_INSTANCE_ID, BOOK_ID, COB_DATE, VERSION) composite index
    I tried all the best combinations, but the indexes are not getting used anywhere.
    Could you please suggest some way to make the query perform better?
    Thanks,
    Aashish

    Hi Mohammed,
    This is what I got with your method of getting the execution plan:
    SQL_ID  4vmc8rzgaqgka, child number 0
    select distinct bk.name "Book Name" ,       fs.feed_description "Feed Name" ,       fbs.cob_date
    "Cob" ,       at.description "Data Type" ,       ah.user_name " User" ,       ah.comments "Comments"
    ,       ah.time_draft from Action_type at, action_history  ah, sensitivity_audit sa, logical_entity
    le, feed_static fs, feed_book_status fbs, feed_instance fi, marsnode bk where  at.description =
    'Regress Positions' and    fbs.cob_date BETWEEN '01 Feb 2011' AND '08 Feb 2011' and
    fi.most_recent = 'Y' and    bk.close_date is  null and    ah.time_draft='after' and
    sa.close_action_id is null and    le.close_action_id is null and    at.action_type_id =
    ah.action_type_id and    ah.action_id=sa.create_action_id and    le.logical_entity_id = sa.type_id
    and    sa.feed_id = fs.feed_id and    sa.book_id = bk.node_id and    sa.feed_instance_id =
    fi.feed_instance_id and    fbs.feed_instance_id = fi.feed_instance_id and    fi.feed_id = fs.feed_id
    union select distinct bk.name "Book Name" ,       fs.
    Plan hash value: 1006571916
    | Id  | Operation                            | Name                     | E-Rows |  OMem |  1Mem | Used-Mem |
    |   1 |  SORT UNIQUE                         |                          |    231 |  6144 |  6144 | 6144  (0)|
    |   2 |   UNION-ALL                          |                          |        |       |       |          |
    |   3 |    NESTED LOOPS                      |                          |      1 |       |       |          |
    |   4 |     NESTED LOOPS                     |                          |      1 |       |       |          |
    |   5 |      NESTED LOOPS                    |                          |      1 |       |       |          |
    |   6 |       NESTED LOOPS                   |                          |      1 |       |       |          |
    |*  7 |        HASH JOIN                     |                          |    213 |  1236K|  1236K| 1201K (0)|
    |*  8 |         TABLE ACCESS FULL            | LOGICAL_ENTITY           |     12 |       |       |          |
    |*  9 |         HASH JOIN                    |                          |   4267 |  1023K|  1023K| 1274K (0)|
    |* 10 |          HASH JOIN                   |                          |   3602 |  1095K|  1095K| 1296K (0)|
    |* 11 |           INDEX RANGE SCAN           | IDX_FBS_CD_FII_BI        |   3602 |       |       |          |
    |* 12 |           TABLE ACCESS FULL          | FEED_INSTANCE            |    335K|       |       |          |
    |* 13 |          TABLE ACCESS FULL           | SENSITIVITY_AUDIT        |    263K|       |       |          |
    |  14 |        TABLE ACCESS BY INDEX ROWID   | FEED_STATIC              |      1 |       |       |          |
    |* 15 |         INDEX UNIQUE SCAN            | IDX_FEED_STATIC_FI       |      1 |       |       |          |
    |* 16 |       TABLE ACCESS BY INDEX ROWID    | MARSNODE                 |      1 |       |       |          |
    |* 17 |        INDEX RANGE SCAN              | PK_MARSNODE              |      3 |       |       |          |
    |* 18 |      TABLE ACCESS BY INDEX ROWID     | ACTION_HISTORY           |      1 |       |       |          |
    |* 19 |       INDEX UNIQUE SCAN              | PK_ACTION_HISTORY        |      1 |       |       |          |
    |* 20 |     TABLE ACCESS BY INDEX ROWID      | ACTION_TYPE              |      1 |       |       |          |
    |* 21 |      INDEX UNIQUE SCAN               | PK_ACTION_TYPE           |      1 |       |       |          |
    |* 22 |    TABLE ACCESS BY INDEX ROWID       | MARSNODE                 |      1 |       |       |          |
    |  23 |     NESTED LOOPS                     |                          |    115 |       |       |          |
    |* 24 |      HASH JOIN                       |                          |    114 |   809K|   809K|  817K (0)|
    |  25 |       NESTED LOOPS                   |                          |    114 |       |       |          |
    |* 26 |        HASH JOIN                     |                          |    114 |   868K|   868K| 1234K (0)|
    |* 27 |         HASH JOIN                    |                          |    957 |   933K|   933K| 1232K (0)|
    |* 28 |          TABLE ACCESS FULL           | ACTION_HISTORY           |   2480 |       |       |          |
    |  29 |          NESTED LOOPS                |                          |   9570K|       |       |          |
    |  30 |           TABLE ACCESS BY INDEX ROWID| ACTION_TYPE              |      1 |       |       |          |
    |* 31 |            INDEX UNIQUE SCAN         | PK_ACTION_TYPE           |      1 |       |       |          |
    |  32 |           TABLE ACCESS FULL          | FEED_BOOK_STATUS_HISTORY |   9570K|       |       |          |
    |* 33 |         INDEX RANGE SCAN             | IDX_FBS_CD_FII_BI        |   3602 |       |       |          |
    |  34 |        TABLE ACCESS BY INDEX ROWID   | FEED_INSTANCE            |      1 |       |       |          |
    |* 35 |         INDEX UNIQUE SCAN            | PK_FEED_INSTANCE         |      1 |       |       |          |
    |  36 |       TABLE ACCESS FULL              | FEED_STATIC              |   2899 |       |       |          |
    |* 37 |      INDEX RANGE SCAN                | PK_MARSNODE              |      1 |       |       |          |
    |* 38 |    TABLE ACCESS BY INDEX ROWID       | MARSNODE                 |      1 |       |       |          |
    |  39 |     NESTED LOOPS                     |                          |    115 |       |       |          |
    |* 40 |      HASH JOIN                       |                          |    114 |   743K|   743K|  149K (0)|
    |  41 |       NESTED LOOPS                   |                          |    114 |       |       |          |
    |* 42 |        HASH JOIN                     |                          |    114 |   766K|   766K|  208K (0)|
    |* 43 |         HASH JOIN                    |                          |    957 |   842K|   842K|  204K (0)|
    |* 44 |          TABLE ACCESS FULL           | ACTION_HISTORY           |   2480 |       |       |          |
    |  45 |          NESTED LOOPS                |                          |   9570K|       |       |          |
    |  46 |           TABLE ACCESS BY INDEX ROWID| ACTION_TYPE              |      1 |       |       |          |
    |* 47 |            INDEX UNIQUE SCAN         | PK_ACTION_TYPE           |      1 |       |       |          |
    |  48 |           TABLE ACCESS FULL          | FEED_BOOK_STATUS_HISTORY |   9570K|       |       |          |
    |* 49 |         INDEX RANGE SCAN             | IDX_FBS_CD_FII_BI        |   3602 |       |       |          |
    |  50 |        TABLE ACCESS BY INDEX ROWID   | FEED_INSTANCE            |      1 |       |       |          |
    |* 51 |         INDEX UNIQUE SCAN            | PK_FEED_INSTANCE         |      1 |       |       |          |
    |  52 |       TABLE ACCESS FULL              | FEED_STATIC              |   2899 |       |       |          |
    |* 53 |      INDEX RANGE SCAN                | PK_MARSNODE              |      1 |       |       |          |
    Predicate Information (identified by operation id):
       7 - access("LE"."LOGICAL_ENTITY_ID"="SA"."TYPE_ID")
       8 - filter("LE"."CLOSE_ACTION_ID" IS NULL)
       9 - access("SA"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
      10 - access("FBS"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
      11 - access("FBS"."COB_DATE">=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "FBS"."COB_DATE"<=TO_DATE(' 2011-02-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
      12 - filter("FI"."MOST_RECENT"='Y')
      13 - filter("SA"."CLOSE_ACTION_ID" IS NULL)
      15 - access("FI"."FEED_ID"="FS"."FEED_ID")
           filter("SA"."FEED_ID"="FS"."FEED_ID")
      16 - filter("BK"."CLOSE_DATE" IS NULL)
      17 - access("SA"."BOOK_ID"="BK"."NODE_ID")
      18 - filter("AH"."TIME_DRAFT"='after')
      19 - access("AH"."ACTION_ID"="SA"."CREATE_ACTION_ID")
      20 - filter("AT"."DESCRIPTION"='Regress Positions')
      21 - access("AT"."ACTION_TYPE_ID"="AH"."ACTION_TYPE_ID")
      22 - filter("BK"."CLOSE_DATE" IS NULL)
      24 - access("FI"."FEED_ID"="FS"."FEED_ID")
      26 - access("FBS"."BOOK_ID"="FBSH"."BOOK_ID")
      27 - access("FBSH"."CREATE_ACTION_ID"="AH"."ACTION_ID" AND
                  "AT"."ACTION_TYPE_ID"="AH"."ACTION_TYPE_ID")
      28 - filter(("AH"."ACTION_TYPE_ID"=103 AND "AH"."TIME_DRAFT"='after'))
      31 - access("AT"."ACTION_TYPE_ID"=103)
      33 - access("FBS"."COB_DATE">=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "FBS"."COB_DATE"<=TO_DATE(' 2011-02-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
      35 - access("FBS"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
      37 - access("FBS"."BOOK_ID"="BK"."NODE_ID")
      38 - filter("BK"."CLOSE_DATE" IS NULL)
      40 - access("FI"."FEED_ID"="FS"."FEED_ID")
      42 - access("FBS"."BOOK_ID"="FBSH"."BOOK_ID")
      43 - access("FBSH"."CREATE_ACTION_ID"="AH"."ACTION_ID" AND
                  "AT"."ACTION_TYPE_ID"="AH"."ACTION_TYPE_ID")
      44 - filter(("AH"."ACTION_TYPE_ID"=101 AND "AH"."TIME_DRAFT"='after'))
      47 - access("AT"."ACTION_TYPE_ID"=101)
      49 - access("FBS"."COB_DATE">=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "FBS"."COB_DATE"<=TO_DATE(' 2011-02-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
      51 - access("FBS"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
      53 - access("FBS"."BOOK_ID"="BK"."NODE_ID")
    Note
       - Warning: basic plan statistics not available. These are only collected when:
           * hint 'gather_plan_statistics' is used for the statement or
           * parameter 'statistics_level' is set to 'ALL', at session or system level
    122 rows selected.
    Elapsed: 00:00:02.18
    The action_type_id column is of NUMBER type.
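    A hedged sketch rather than a definitive fix: given the filter predicates in the plans (action_type_id plus time_draft on ACTION_HISTORY, create_action_id on FEED_BOOK_STATUS_HISTORY), composite indexes along these lines might give the optimizer an alternative to the two full scans. The index names are hypothetical; verify selectivity with fresh statistics before creating them.
      CREATE INDEX idx_ah_type_draft
          ON action_history (action_type_id, time_draft);
      CREATE INDEX idx_fbsh_create_action
          ON feed_book_status_history (create_action_id, book_id);
      -- then re-run with the gather_plan_statistics hint (see the Note above)
      -- and compare estimated vs. actual rows:
      SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));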

  • Query performance improvement using pipelined table function

    Hi,
    I have got two select queries. One is like...
    select * from table
    The other uses a pipelined table function:
    select *
    from table(pipelined_function(cursor(select * from table)))
    Which query will return the result set faster?
    Please suggest methods for retrieving a dataset faster using a pipelined table function than with a normal select query.
    rgds
    somy

    Compare the performance between these solutions:
    create table big as select * from all_objects;
    First test the performance of a normal select statement:
    begin
      for r in (select * from big) loop
       null;
      end loop;
    end;
    /
    Second, a pipelined function:
    create type rc_vars as object 
    (OWNER  VARCHAR2(30)
    ,OBJECT_NAME     VARCHAR2(30));
    create or replace type rc_vars_table as table of  rc_vars ;
    create or replace
    function rc_get_vars
    return rc_vars_table
    pipelined
    as
      cursor c_aobj
             is
             select owner, object_name
             from   big;
      l_aobj c_aobj%rowtype;
    begin
      for r_aobj in c_aobj loop
        pipe row(rc_vars(r_aobj.owner,r_aobj.object_name));
      end loop;
      return;
    end;
    /
    Test the performance of the pipelined function:
    begin
      for r in (select * from table(rc_get_vars)) loop
       null;
      end loop;
    end;
    /
    On my system the simple select statement is 20 times faster.
    Correction: It is 10 times faster, not 20.
    Message was edited by:
    wateenmooiedag
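    For completeness, a minimal way to time the two variants yourself, assuming the BIG table and the RC_GET_VARS function defined above (SQL*Plus syntax):
    SET TIMING ON
    begin
      for r in (select * from big) loop
       null;
      end loop;
    end;
    /
    begin
      for r in (select owner, object_name from table(rc_get_vars)) loop
       null;
      end loop;
    end;
    /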

  • Query Performance with Unit Conversion

    Hi Experts,
    Right now my customer has asked me to improve the runtime of some queries.
    I detected a problem in one query related to unit conversion. I ran workbook statistics and found that the time is concentrated in the data conversion step.
    I am querying the whole year and it takes around 20 minutes to return a result, which is too much time. The only expensive thing in the query is the data conversion.
    How can I improve the performance? What is the checklist in this case?
    Thanks for your help.
    Jose

    Hi Jose,
    You might not be able to reduce the unit conversion time itself, so try to apply the general query performance improvement techniques, e.g. caching the query results.
    But there is one thing that can help you: if the end user only ever uses one target unit, for example the user always executes the report in USD but the source currency is different from USD, you can create another data source and do the conversion at the time of data loading. In the new data source all data will already be available in the required currency, no conversion will happen at runtime, and query performance will improve drastically (see the sketch below).
    The above solution is not feasible, however, if there are many currencies and the report needs to be run in multiple currencies frequently.
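    As a hedged illustration of that load-time conversion (every table, column, and rate-source name here is hypothetical), the load step could store a pre-converted amount next to the source amount so queries never convert at runtime:
      -- hypothetical load step: convert once while loading, not on every query
      INSERT INTO sales_usd (doc_id, amount_src, currency_src, amount_usd)
      SELECT s.doc_id,
             s.amount,
             s.currency,
             s.amount * r.rate_to_usd
      FROM   sales_src s
      JOIN   fx_rates  r
        ON   r.currency = s.currency
       AND   s.posting_date BETWEEN r.valid_from AND r.valid_to;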
    Regards,
    Durgesh.

  • How to improve Query Performance

    Hi Friends...
    I want to improve query performance. I need the following things:
    1. What is the process to find out the performance? Are there any transaction codes, and how do I use them?
    2. How can I know whether a query is running well or badly from a performance perspective?
    3. I want to see the values, i.e. how much time it takes to run, and where the defect is.
    4. How do I improve the query performance? After I have done the needful things to improve performance, I want to see the query execution time, i.e. whether it is running faster or not.
    E.g. 1: Need to create aggregates.
    Solution: where can I create aggregates? I am currently in the production system, so where do I need to create them, i.e. in development, quality, or production?
    Do I need to make any changes in development, given that I am in production?
    So please give me solutions to my questions.
    Thanks
    Ganga
    Message was edited by: Ganga N

    hi ganga
    please refer to OSS note 557870: Frequently asked questions on query performance
    also refer to
    Prakash's weblogs:
    /people/prakash.darji/blog/2006/01/27/query-creation-checklist
    /people/prakash.darji/blog/2006/01/26/query-optimization
    Performance docs on queries:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695
    These are the FAQs on query performance from the OSS note:
    1. What kind of tools are available to monitor the overall Query Performance?
    1.     BW Statistics
    2.     BW Workload Analysis in ST03N (use expert mode!)
    3.     Content of Table RSDDSTAT
    2. Do I have to do something to enable such tools?
    Yes, you need to turn on the BW Statistics:
      RSA1, choose Tools -> BW statistics for InfoCubes
      (Choose OLAP and WHM for your relevant Cubes)
    3. What kind of tools is available to analyze a specific query in    detail?
    1.     Transaction RSRT
    2.     Transaction RSRTRACE
    4.  Do I have an overall query performance problem?
    i. Use ST03N -> BW System load values to recognize the problem. Use the numbers given in the table 'Reporting - InfoCubes: Share of total time (s)' to check whether one of the columns %OLAP, %DB, or %Frontend shows a high number for all InfoCubes.
    ii. You need to run ST03N in expert mode to get these values
    5. What can I do if the database proportion is high for all queries?
    Check:
    1.     Whether the database statistics strategy is set up properly for your DB platform (above all for the BW-specific tables)
    2.     Whether the database parameter setup accords with SAP Notes and SAP Services (EarlyWatch)
    3.     Whether buffers, I/O, CPU, or memory on the database server are exhausted
    4.     Whether cube compression is used regularly
    5.     Whether database partitioning is used (not available on all DB platforms)
    6. What can I do if the OLAP proportion is high for all queries?
    Check:
    1.     Whether the CPUs on the application server are exhausted
    2.     Whether the SAP R/3 memory setup is done properly (use transaction ST02 to find bottlenecks)
    3.     Whether the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT, Customizing default)
    7. What can I do if the client proportion is high for all queries?
    Check whether most of your clients are connected via a WAN connection and the amount of data transferred is rather high.
    8. Where can I get specific runtime information for one query?
    1.     Again you can use ST03N -> BW System Load.
    2.     Depending on the time frame you select, you get historical or current data.
    3.     To get to a specific query you need to drill down using the InfoCube name.
    4.     Use the Aggregation Query to get more runtime information about a single query. Use the tab 'All data' to get to the details (DB, OLAP, and Frontend time, plus selected/transferred records, plus number of cells and formats).
    9. What kind of query performance problems can I recognize using ST03N
       values for a specific query?
    (Use Details to get the runtime segments)
    1.     High Database Runtime
    2.     High OLAP Runtime
    3.     High Frontend Runtime
    10. What can I do if a query has a high database runtime?
    1.     Check if an aggregate is suitable (use 'All data' to compare "selected records" to "transferred records"; a high ratio here is an indicator that an aggregate would improve query performance).
    2.     Check if the database statistics are up to date for the cube/aggregate; use transaction RSRV (use the database check for statistics and indexes).
    3.     Check if the read mode of the query is unfavourable; recommended is (H).
    11. What can I do if a query has a high OLAP runtime?
    1.     Check if a high number of cells is transferred to the OLAP processor (use "All data" to get the value "No. of Cells").
    2.     Use the RSRT technical information to check if any extra OLAP processing is necessary (stock query, exception aggregation, calculation before aggregation, virtual characteristics/key figures, attributes in calculated key figures, time-dependent currency translation) together with a high number of records transferred.
    3.     Check whether a user exit is involved in the OLAP runtime.
    4.     Check if large hierarchies are used and the entry hierarchy level is as deep as possible. This limits the levels of the hierarchy that must be processed. Use SE16 on the inclusion tables and use the List of Values feature on the columns successor and predecessor to see which entry level of the hierarchy is used.
    5.     Check if a proper index on the inclusion table exists.
    12. What can I do if a query has a high frontend runtime?
    1.     Check if a very high number of cells and formats is transferred to the frontend (use "All data" to get the value "No. of Cells"), which causes high network and frontend (processing) runtime.
    2.     Check if the frontend PCs are within the recommendations (RAM, CPU MHz).
    3.     Check if the bandwidth of the WAN connection is sufficient.
    REWARDING POINTS IS THE WAY OF SAYING THANKS IN SDN
    CHEERS
    RAVI

  • Poor query performance only with migrated 7.0 queries

    Dear Team,
    We are facing a serious query performance issue after migrating queries from 3.5 to 7.0.
    I executed a query in 3.5 with some variable values and it takes a fraction of a second to display the output. But the same migrated query with the same variable entries takes very long and gives a time-out error.
    We are not using any aggregates at the InfoProvider level.
    Both queries are based on the same cube, but the 3.5 query takes less time while the 7.0 one takes very long when a larger selection is made.
    I checked for notes but didn't find a specific note for this particular scenario; I found notes only for general query performance improvement.
    I want to know why only in 7.0 the same 3.5 query takes a long time and gives a time-out error. Please suggest some notes or ideas related to this scenario.
    Regards,
    Chan

    Hi,
    Queries in BI 7.0 are almost the same as queries in 3.x format.
    In order to check whether the problem is in the query runtime (database time) or the Java runtime (probably rendering), you should try running it from RSRT once in Java Web and once in ABAP Web.
    If the problem is only with Java Web, then you should take the URL and add &profiling=X at the end.
    After the query execution you can use the statistics, which will be shown at the top of the page.
    In my experience, the problem is usually in the rendering phase of the query. One thing that can be done is to limit the number of rows shown on each page; that can be done by changing the 0ANALYSIS web template - it is one of the web template parameters.
    Tomer.

  • How to improve the query performance in to report level and designer level

    How can the query performance be improved at the report level and at the designer level?
    Please explain in detail.

    First, it all depends on the design of the database, the universe, and the report.
    At the universe level, you have to check your contexts very well to get the optimal performance out of the universe, and also your joins; keep your joins on key fields, which will give you the best performance.
    At the report level, try to make the reports as dynamic as you can (parameters and so on).
    And when you create a parameter, try to make it match the key fields in the database.
    good luck
    Amr

  • How to improve query performance built on a ODS

    Hi,
    I've built a report on the FI_GL ODS (BW 3.5). The report execution takes almost 1 hour.
    Is there any method to improve or optimize the performance of a query built on an ODS?
    The ODS holds a huge volume of data, ~300 million records for 2 years.
    Thanx in advance,
    Guru.

    Hi Raj,
    Here are a few tips which may help you improve your query performance.
    Checklist for Query Performance
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local ones in order to analyze any filters that can be removed and moved to the global definition of the query. You can then change the calculated key figure and go back to using the global calculated key figure if desired.
    9. If Alternative UOM solution is used, turn off query cache.
    10. Set the read mode of the query based on static or dynamic use. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select the "Read data during navigation and when expanding the hierarchy" option to avoid reading data for hierarchy nodes that are not expanded. Reserve the "Read all data" mode for special queries, for instance when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Review the order of restrictions in formulas. Do as many restrictions as you can before calculations; try to avoid calculations before restrictions.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
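    Beyond this query-design checklist, with ~300 million records a database-level secondary index on the ODS active table is often the biggest single win for ODS reporting. The sketch below is hedged: the ODS name ZFIGL and the filter fields are hypothetical (the active table of an ODS <name> is /BIC/A<name>00), and in practice you would maintain such a secondary index via SE11 rather than with raw DDL:
      -- hypothetical secondary index on the ODS active table,
      -- covering the characteristics most queries filter on
      CREATE INDEX "/BIC/AZFIGL00~Z01"
          ON "/BIC/AZFIGL00" (comp_code, fiscper);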

  • Pls help me to modify the query for performance improvement

    Hi,
    I have the below initialization
    DECLARE @Active bit =1 ;
    Declare @id int
    SELECT @Active=CASE WHEN id=@id and [Rank] ='Good' then 0 else 1 END  FROM dbo.Students
    I have to change this query in such a way that the conditions id=@id and [Rank]='Good' go into the WHERE clause of the query. In that case, how can I use a CASE statement to return 1 or 0? Can you please help me modify this initialization?

    I don't understand your query... Maybe the below? Or provide us sample data and your expected output...
    SELECT *  FROM dbo.students
    where @Active=CASE
    WHEN id=@id and rank ='Good' then 0 else 1 END
    But I doubt you will get a performance improvement here.
    Do you have an index on id?
    If you are looking to get the data for @id with rank = 'Good', then use the below, and make sure you have an index on the id, rank combination (a hedged sketch of that index follows the query):
    SELECT *  FROM dbo.students
    where  id=@id
    and rank ='Good' 
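    A hedged sketch of that supporting index; the index name is hypothetical:
      -- hypothetical index for the id + rank filter
      CREATE NONCLUSTERED INDEX IX_Students_Id_Rank
          ON dbo.students (id, [rank]);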

  • How to improve Query performance on large table in MS SQL Server 2008 R2

    I have a table with 20 million records. What is the best option to improve query performance on this table? Is partitioning the table into filegroups the best option, or splitting the table into multiple smaller tables?

    Hi bala197164,
    First, I want to point out that both partitioning the table into filegroups and splitting the table into multiple smaller tables can improve query performance; they fit different situations. For example, suppose our table has one hundred columns and some columns are not directly related to the table's subject (say a table named userinfo stores user information and has the columns address_street, address_zip, and address_province; we can create a new table named Address and add a foreign key in the userinfo table referencing the Address table). In that situation, by splitting the large table into smaller, individual tables, queries that access only a fraction of the data can run faster because there is less data to scan. The other situation is when the table's records can be grouped easily, for example by a column named year that stores the product release date; in that case we can partition the table into filegroups to improve query performance. Usually we apply both methods together. Additionally, we can add indexes to the table to improve query performance. For more detailed information, please refer to the following documents:
    Partitioning:
    http://msdn.microsoft.com/en-us/library/ms178148.aspx
    CREATE INDEX (Transact-SQL):
    http://msdn.microsoft.com/en-us/library/ms188783.aspx
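    To make the filegroup idea concrete, here is a hedged sketch in T-SQL; every name (the partition function, the scheme, the table, and the release_year column) is hypothetical and simply follows the year-based grouping described above:
      -- hypothetical range partitioning by release year
      CREATE PARTITION FUNCTION pf_year (int)
          AS RANGE RIGHT FOR VALUES (2012, 2013, 2014);
      CREATE PARTITION SCHEME ps_year
          AS PARTITION pf_year ALL TO ([PRIMARY]);
      CREATE TABLE dbo.Product
      (
          product_id   INT          NOT NULL,
          product_name VARCHAR(100) NOT NULL,
          release_year INT          NOT NULL
      ) ON ps_year (release_year);
    In a real setup each partition would typically map to its own filegroup instead of ALL TO ([PRIMARY]).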
    Allen Li
    TechNet Community Support

  • How to improve query performance using infoset

    I created one InfoSet including 4 characteristics and 3 DSOs, which are all time-dependent. When the query runs, the system shows very poor performance; sometimes no data shows in BEx Analyzer. In that case I have to close BEx Analyzer first and then open it again, after which it shows real results. It seems very strange. Does anybody have experience with InfoSet performance improvement? Please advise, thanks!

    Hi
    As an InfoSet itself doesn't store any data, it helps performance.
    Also go through the tips below.
    Find the query runtime
    Where to find the query runtime?
    OSS note 557870: 'FAQ BW Query Performance'
    OSS note 130696: Performance trace in BW
    This info may be helpful.
    General tips
    Use aggregates and compression.
    Use fewer and less complex cell definitions where possible.
    1. Avoid using too many navigational attributes.
    2. Avoid RKFs and CKFs.
    3. Avoid many characteristics in rows.
    By using T-codes ST03 or ST03N
    Go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution for a particular day > check the query execution time.
    Statistical Records Part 4: How to read ST03N datasets from DB in NW2004
    How to read ST03N datasets from DB
    Try table RSDDSTAT to get the statistics.
    Using cache memory will decrease the loading time of the report.
    Run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
    Also try
    1. Use different parameters in ST03 to see the two important figures: the aggregation ratio and the number of records selected in the database versus transferred to the frontend.
    2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
    Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
    It will show the dimension vs. fact table sizes in percent. If you mean the speed of queries on a cube as the performance metric of the cube, measure query runtime.
    3. To check the performance of the aggregates,see the columns valuation and usage in aggregates.
    Open the Aggregates...and observe VALUATION and USAGE columns.
    "---" sign is the valuation of the aggregate. You can say -3 is the valuation of the aggregate design and usage. ++ means that its compression is good and access is also more (in effect, performance is good). If you check its compression ratio, it must be good. -- means the compression ratio is not so good and access is also not so good (performance is not so good).The more is the positives...more is useful the aggregate and more it satisfies the number of queries. The greater the number of minus signs, the worse the evaluation of the aggregate. The larger the number of plus signs, the better the evaluation of the aggregate.
    if "-----" then it means it just an overhead. Aggregate can potentially be deleted and "+++++" means Aggregate is potentially very useful.
    In the valuation column, more plus signs mean the aggregate performs well and it is useful to have it; more minus signs mean we had better not use that aggregate.
    In the usage column, we can see how often the aggregate has been used by queries.
    Thus we can check the performance of the aggregate.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    performance ISSUE related to AGGREGATE
    Note 356732 - Performance Tuning for Queries with Aggregates
    Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
    4. Run your query in RSRT in debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug options. This will tell you whether it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Also, your query performance can depend on the selection criteria; since you have given a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X, or H); the advisable read mode is X.
    5. In BI 7, statistics need to be activated for ST03 and the BI Administration Cockpit to work.
    By implementing the BW Statistics Business Content: you need to install it, feed it data, and then analyze via the ready-made reports.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can go to transaction DB20, which gives you all the performance-related information like:
    Partitions
    Databases
    Schemas
    Buffer Pools
    Tablespaces etc
    Use the tool RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
    If aggregates contain incorrect data, you must regenerate them.
    Note 202469 - Using the aggregate check tool
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    You can find out whether an aggregate is useful or useless through a process of checking the tables RSDDSTATAGGRDEF*.
    Run the query in RSRT with statistics execution and come back; you will get a STATUID. Copy this and check it in the table.
    This shows you exactly which InfoObjects the query hits; if any one of the objects is missing, it's a useless aggregate.
    6. Check table RSDDAGGRDIR in SE11. You can find the last call-up of each aggregate in the table.
    Generate Report in RSRT
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    Assign points if useful
    Cheers
    SM

  • Improving a query performance

    Hello gurus,
    We use RSRT and ST03 (expert mode) to check whether a query is running slowly. What exactly do we check there, and how?
    Can anyone please explain how it's done?
    Thanks in advance
    S N

    Hi SN,
    You need to check the DB time taken by the queries and the summarization ratio of the queries. As a rule of thumb, if the summarization ratio is > 10 and the database time is > 30% of total runtime, creating an aggregate will improve query performance.
    By summarization ratio I mean the ratio between the number of records read from the cube and the number of records displayed in the report. For example, a query that reads 1,000,000 records from the cube and displays 50,000 has a summarization ratio of 20.
    Thx,
    Soumya

  • How many ways can i improve query performance?

    Hi All,
    can anybody help me:
    how many ways can I improve query performance in Oracle?
    Thanks,
    narasimha

    As many as you can think of!

  • How to improve the query performance

    ALTER PROCEDURE [SPNAME]
    @Portfolio INT,
    @Program INT,
    @Project INT
    AS
    BEGIN
    --DECLARE @StartDate DATETIME
    --DECLARE @EndDate DATETIME
    --SET @StartDate = '11/01/2013'
    --SET @EndDate = '02/28/2014'
    IF OBJECT_ID('tempdb..#Dates') IS NOT NULL
    DROP TABLE #Dates
    IF OBJECT_ID('tempdb..#DailyTasks') IS NOT NULL
    DROP TABLE #DailyTasks
    CREATE TABLE #Dates(WorkDate DATE)
    --CREATE INDEX IDX_Dates ON #Dates(WorkDate)
    ;WITH Dates AS
    (
    SELECT (@StartDate) DateValue
    UNION ALL
    SELECT DateValue + 1
    FROM Dates
    WHERE DateValue + 1 <= @EndDate
    )
    INSERT INTO #Dates
    SELECT DateValue
    FROM Dates D
    LEFT JOIN tb_Holidays H
    ON H.HolidayOn = D.DateValue
    AND H.OfficeID = 2
    WHERE DATEPART(dw,DateValue) NOT IN (1,7)
    AND H.UID IS NULL
    OPTION(MAXRECURSION 0)
    SELECT TSK.TaskID,
    TR.ResourceID,
    WC.WorkDayCount,
    (TSK.EstimateHrs/WC.WorkDayCount) EstimateHours,
    D.WorkDate,
    TSK.ProjectID,
    RES.ResourceName
    INTO #DailyTasks
    FROM Tasks TSK
    INNER JOIN TasksResource TR
    ON TSK.TaskID = TR.TaskID
    INNER JOIN tb_Resource RES
    ON TR.ResourceID=RES.UID
    OUTER APPLY (SELECT COUNT(*) WorkDayCount
    FROM #Dates
    WHERE WorkDate BETWEEN TSK.StartDate AND TSK.EndDate)WC
    INNER JOIN #Dates D
    ON WorkDate BETWEEN TSK.StartDate AND TSK.EndDate
    -------WHERE TSK.ProjectID = @Project-----
    SELECT D.ResourceID,
    D.WorkDayCount,
    SUM(D.EstimateHours/D.WorkDayCount) EstimateHours,
    D.WorkDate,
    T.TaskID,
    D.ResourceName
    FROM #DailyTasks D
    OUTER APPLY (SELECT (SELECT CAST(TaskID AS VARCHAR(255))+ ','
    FROM #DailyTasks DA
    WHERE D.WorkDate = DA.WorkDate
    AND D.ResourceID = DA.ResourceID
    FOR XML PATH('')) AS TaskID) T
    LEFT JOIN tb_Project PRJ
    ON D.ProjectID=PRJ.UID
    INNER JOIN tb_Program PR
    ON PRJ.ProgramID=PR.UID
    INNER JOIN tb_Portfolio PF
    ON PR.PortfolioID=PF.UID
    WHERE (@Portfolio = -1 or PF.UID = @Portfolio)
    AND (@Program = -1 or PR.UID = @Program)
    AND (@Project = -1 or PRJ.UID = @Project)
    GROUP BY D.ResourceID,
    D.WorkDate,
    T.TaskID,
    D.WorkDayCount,
    D.ResourceName
    HAVING SUM(D.EstimateHours/D.WorkDayCount) > 8
    hi..
    My SP is as above.
    I connected this SP to a dataset in an SSRS report. As per my logic, a Portfolio contains many Programs and a Program contains many Projects.
    When I selected the ALL value for the parameters Program and Project, I was unable to get output,
    but when I select values for all 3 parameters I get output. I set default values for the parameters as well.
    So I commented out the WHERE condition in the SP as shown above:
    --------where TSK.ProjectID=@Project-------------
    Now I get output when selecting the ALL value for the parameters,
    but here the issue is performance: it takes 10 seconds to retrieve a single project when I execute the SP.
    How can I create an index on a temp table in this SP, and how can I improve the query performance?
    Please help.
    thanks in advance..
    lucky

    Didn't I provide you a solution in the other thread?
    ALTER PROCEDURE [SPNAME]
    @Portfolio INT,
    @Program INT,
    @Project INT
    AS
    BEGIN
    --DECLARE @StartDate DATETIME
    --DECLARE @EndDate DATETIME
    --SET @StartDate = '11/01/2013'
    --SET @EndDate = '02/28/2014'
    IF OBJECT_ID('tempdb..#Dates') IS NOT NULL
    DROP TABLE #Dates
    IF OBJECT_ID('tempdb..#DailyTasks') IS NOT NULL
    DROP TABLE #DailyTasks
    CREATE TABLE #Dates(WorkDate DATE)
    --CREATE INDEX IDX_Dates ON #Dates(WorkDate)
    ;WITH Dates AS
    (
    SELECT (@StartDate) DateValue
    UNION ALL
    SELECT DateValue + 1
    FROM Dates
    WHERE DateValue + 1 <= @EndDate
    )
    INSERT INTO #Dates
    SELECT DateValue
    FROM Dates D
    LEFT JOIN tb_Holidays H
    ON H.HolidayOn = D.DateValue
    AND H.OfficeID = 2
    WHERE DATEPART(dw,DateValue) NOT IN (1,7)
    AND H.UID IS NULL
    OPTION(MAXRECURSION 0)
    SELECT TSK.TaskID,
    TR.ResourceID,
    WC.WorkDayCount,
    (TSK.EstimateHrs/WC.WorkDayCount) EstimateHours,
    D.WorkDate,
    TSK.ProjectID,
    RES.ResourceName
    INTO #DailyTasks
    FROM Tasks TSK
    INNER JOIN TasksResource TR
    ON TSK.TaskID = TR.TaskID
    INNER JOIN tb_Resource RES
    ON TR.ResourceID=RES.UID
    OUTER APPLY (SELECT COUNT(*) WorkDayCount
    FROM #Dates
    WHERE WorkDate BETWEEN TSK.StartDate AND TSK.EndDate)WC
    INNER JOIN #Dates D
    ON WorkDate BETWEEN TSK.StartDate AND TSK.EndDate
    WHERE (TSK.ProjectID = @Project OR @Project = -1)
    SELECT D.ResourceID,
    D.WorkDayCount,
    SUM(D.EstimateHours/D.WorkDayCount) EstimateHours,
    D.WorkDate,
    T.TaskID,
    D.ResourceName
    FROM #DailyTasks D
    OUTER APPLY (SELECT (SELECT CAST(TaskID AS VARCHAR(255))+ ','
    FROM #DailyTasks DA
    WHERE D.WorkDate = DA.WorkDate
    AND D.ResourceID = DA.ResourceID
    FOR XML PATH('')) AS TaskID) T
    LEFT JOIN tb_Project PRJ
    ON D.ProjectID=PRJ.UID
    INNER JOIN tb_Program PR
    ON PRJ.ProgramID=PR.UID
    INNER JOIN tb_Portfolio PF
    ON PR.PortfolioID=PF.UID
    WHERE (@Portfolio = -1 or PF.UID = @Portfolio)
    AND (@Program = -1 or PR.UID = @Program)
    AND (@Project = -1 or PRJ.UID = @Project)
    GROUP BY D.ResourceID,
    D.WorkDate,
    T.TaskID,
    D.WorkDayCount,
    D.ResourceName
    HAVING SUM(D.EstimateHours/D.WorkDayCount) > 8
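    On the specific question of indexing the temp tables: you can create indexes right after each table is populated (after the INSERT INTO #Dates and after the SELECT ... INTO #DailyTasks). A hedged sketch, index names hypothetical:
      -- #Dates is small, but a clustered index keeps the BETWEEN range probes cheap
      CREATE CLUSTERED INDEX IDX_Dates ON #Dates (WorkDate);
      -- supports the self-join inside the OUTER APPLY (match on ResourceID + WorkDate)
      CREATE NONCLUSTERED INDEX IDX_DailyTasks
          ON #DailyTasks (ResourceID, WorkDate)
          INCLUDE (TaskID, EstimateHours);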
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs
