Query performance question!

I have two SQL statements shown below: the first uses no index, and the second uses two indexes. It's a huge table with no primary key. From the explain plans below, which is more efficient? Or can someone help rewrite the query? I just use COUNT(*) to get an explain plan easily instead of selecting columns from the table. Any input is appreciated.
SQL> l
  1  select count(*)
  2    from  etl_orcl.tmp_trade_lot34 tt
  3*  where  not (nvl(trim(status_code), 'NA') = 'OO' or nvl(trim(status_code), 'NA') = 'PC' or nvl(trim(m_status_code), 'NA') = 'PC')
SQL> /
  COUNT(*)
   2574571
Elapsed: 00:00:47.74
Execution Plan
| Id  | Operation          | Name            | Rows  | Bytes | Cost (%CPU)|
|   0 | SELECT STATEMENT   |                 |     1 |     7 | 28330   (6)|
|   1 |  SORT AGGREGATE    |                 |     1 |     7 |            |
|   2 |   TABLE ACCESS FULL| TMP_TRADE_LOT34 |   322 |  2254 | 28330   (6)|
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
SQL> l
  1  select count(*)
  2    from  etl_orcl.tmp_trade_lot34 tt
  3*  where  not (status_code in ('OO','PC') or m_status_code = 'PC')
SQL> /
  COUNT(*)
   2574571
Elapsed: 00:00:24.24
Execution Plan
| Id  | Operation               | Name              | Rows  | Bytes | Cost (%CPU)|
|   0 | SELECT STATEMENT        |                   |     1 |     7 |  7798   (6)|
|   1 |  SORT AGGREGATE         |                   |     1 |     7 |            |
|   2 |   VIEW                  | index$_join$_001  |  2574K|    17M|  7798   (6)|
|   3 |    HASH JOIN            |                   |       |       |            |
|   4 |     INDEX FAST FULL SCAN| IDX#2_TRADE_LOT34 |  2574K|    17M|  1659   (7)|
|   5 |     INDEX FAST FULL SCAN| IDX#1_TRADE_LOT34 |  2574K|    17M|  1768   (7)|
----------------------------------------------------------------------------------

From below explain plan, which is more efficient?
Which one finishes first?
where not (nvl(trim(status_code), 'NA') = 'OO' or nvl(trim(status_code), 'NA') = 'PC' or nvl(trim(m_status_code), 'NA') = 'PC')
The NVL functions are causing the optimizer to bypass the index, since the results of the functions are not indexed.
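If the NVL/TRIM wrapping were really required, one option would be a function-based index, so the expression results themselves are indexed. A minimal sketch, assuming the table and columns from the post (the index name is invented; whether it helps here still depends on the NOT predicate, and simply dropping the redundant NVL/TRIM, as in the second query, is the cleaner fix):
-- hypothetical function-based index on the wrapped expressions
create index idx_fbi_trade_lot34
   on etl_orcl.tmp_trade_lot34 (nvl(trim(status_code), 'NA'),
                                nvl(trim(m_status_code), 'NA'));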
Keep in mind that counts can use a different execution path than returning data if the index can be used;
Sample table with index;
create table t as (
   select rownum rn, rownum*10 val
   from dual
   connect by rownum <= 100000);
Table created
create index t_i on t(rn);
Index created
Explain plan for COUNT;
explain plan for
   select count(*)
   from t
   where rn between 10 and 20;
Explained
select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 4142320527
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT  |      |     1 |    13 |     2   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE   |      |     1 |    13 |            |          |
|*  2 |   INDEX RANGE SCAN| T_I  |    11 |   143 |     2   (0)| 00:00:01 |
Predicate Information (identified by operation id):
   2 - access("RN">=10 AND "RN"<=20)
Note
   - dynamic sampling used for this statement
18 rows selected
Explain plan for query that returns data from the table;
explain plan for
   select val
   from t
   where rn between 10 and 20;
Explained
select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 1658221841
| Id  | Operation                   | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT            |      |    11 |   286 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T    |    11 |   286 |     3   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T_I  |    11 |       |     2   (0)| 00:00:01 |
Predicate Information (identified by operation id):
   2 - access("RN">=10 AND "RN"<=20)
Note
   - dynamic sampling used for this statement
18 rows selected
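As a side note, EXPLAIN PLAN shows a predicted plan. A minimal sketch of one way to check the plan that was actually executed (standard DBMS_XPLAN usage on 10g and later; the hint and format string are one common choice):
select /*+ gather_plan_statistics */ count(*)
  from t
 where rn between 10 and 20;
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));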

Similar Messages

  • SQL query performance question

    So I had this long query that looked like this:
    SELECT a.BEGIN_DATE, a.END_DATE, a.DEAL_KEY, (select name from ideal dd where a.deal_key = dd.deal_key) DEALNAME, a.deal_term_key
    FROM ideal d, ideal_term a,
         (select deal_key, deal_term_key, max(createdOn) maxdate
          from Ideal_term B
          where createdOn <= '03-OCT-12 10.03.00 AM'
          group by deal_key, deal_term_key) B
    WHERE a.begin_date <= '20-MAR-09 01.01.00 AM'
      and a.end_date >= '19-MAR-09 01.00.00 AM'
      and a.deal_key = b.deal_key
      and a.deal_term_key = b.deal_term_key
      and a.createdOn = b.maxdate
      and d.deal_key = a.deal_key
      and d.name like 'MVPP1 B'
    order by a.begin_date, a.deal_key, a.deal_term_key;
    This performed very poorly for a record in one of the tables that has 43,000+ revisions. It took about 1 minute and 40 seconds. I asked the database guy at my company for help with it and he re-wrote it like so:
    SELECT a.BEGIN_DATE, a.END_DATE, a.DEAL_KEY, (select name from ideal dd where a.deal_key = dd.deal_key) DEALNAME, a.deal_term_key
    FROM ideal d
    INNER JOIN (SELECT deal_key,
    deal_term_key,
    MAX(createdOn) maxdate
    FROM Ideal_term B2
    WHERE '03-OCT-12 10.03.00 AM' >= createdOn
    GROUP BY deal_key, deal_term_key) B1
    ON d.deal_key = B1.deal_key
    INNER JOIN ideal_term a
    ON B1.deal_key = A.deal_key
    AND B1.deal_term_key = A.deal_term_key
    AND B1.maxdate = a.createdOn
    AND d.deal_key = a.deal_key + 0
    WHERE a.begin_date <= '20-MAR-09 01.01.00 AM'
    AND a.end_date >= '19-MAR-09 01.00.00 AM'
    AND d.name LIKE 'MVPP1 B'
    ORDER BY a.begin_date, a.deal_key, a.deal_term_key
    This works much better; it only takes 0.13 seconds. I've been trying to figure out why exactly his version performs so much better. His only explanation was that the "+ 0" in the WHERE clause prevented Oracle from using an index on that column, which had created a bad plan initially.
    I think there has to be more to it than that, though. Can someone give me a detailed explanation of why the second version of the query performs so much faster?
    Thanks.

    I used Autotrace in SQL developer. Is that sufficient? Here is the Autotrace and Explain for the slow query:
    and for the fast query:
    I said that I thought there was more to it because, when my team members and I looked at the reworked query the database guy sent us, our initial thought was that in the slow query some of the tables didn't have joins defined, the query therefore formed a Cartesian product, and this resulted in a huge 43,000+ row matrix.
    In his version all tables had joins properly defined, and in addition he had that +0, which told Oracle to ignore the index on the deal_key attribute of table ideal_term. I spoke with the database guy today and he confirmed our theory.
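    For reference, a minimal sketch of that idiom on a toy table (names invented here): appending + 0 to an indexed numeric column turns the predicate into an expression, so a plain index on that column can no longer be used (for character columns the equivalent trick is appending || ''):
    create table t2 (deal_key number, payload varchar2(30));
    create index t2_i on t2 (deal_key);
    -- can use T2_I:
    select payload from t2 where deal_key = 42;
    -- cannot use T2_I: deal_key + 0 is an expression, not the indexed column
    select payload from t2 where deal_key + 0 = 42;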

  • SQL Developer vs TOAD - query performance question

    Somebody pointed out to me that the same queries execute slower in SQL Developer than in TOAD. I'm rather curious about this issue; I understand Java is "slow", but I can't find any other thread about this point. I don't use TOAD, so I can't compare...
    Can this be related to the amount of data being returned by the query? What could be the other reasons for SQL Developer running slower with an identical query?
    Thanks,
    Attila

    It also occurs to me that TOAD always uses the equivalent of the JDBC "thick" driver. SQL Developer can use either the "thin" driver or the "thick" driver, but connections are usually configured with the "thin" driver, since you need an Oracle client to use the "thick" driver.
    The difference is that "thin" drivers are written entirely in Java, but "thick" drivers are written with only a little Java that calls the native executable (hence you need an Oracle client) to do most of the work. Theoretically, a thick driver is faster because the object code doesn't need to be interpreted by the JVM. However, I've heard that the difference in performance is not that large. The only way to know for sure is to configure a connection in SQL Developer to use the thick driver, and see if it is faster (I'd use a stop-watch).
    Someone correct me if I'm wrong, but I think that if you use "TNS" as your connection type, SQL Developer will use the thick driver, while the default, "Basic" connection type uses the thin driver. Otherwise, you're going to have to use the "Advanced" connection type and type in the Custom JDBC URL for the thick driver.
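    For illustration, the two driver styles are distinguished by the JDBC URL; a rough sketch (host, port, service name and TNS alias are placeholders):
    jdbc:oracle:thin:@//dbhost:1521/ORCL   (thin driver: pure Java, no Oracle client needed)
    jdbc:oracle:oci:@MYTNSALIAS            (thick/OCI driver: calls the installed Oracle client)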

  • Question related to query performance

    Hi Experts,
    Could anyone please explain: when BWA indexes or aggregates are created, query performance should be high, because the query hits the aggregate tables, which we can see in the statistics table RSDDSTAT_DM.
    But now the problem is that, although aggregates are created, the query sometimes hits the 'F' table, and in those cases the read time is very high.
    In which cases will a query hit the 'F' table even though aggregates or BWA indexes are present?
    Please explain.

    A query will read the data from BWA or aggregates only if the data is present there, i.e. the data required per the query definition. If it doesn't find the data in BWA or the aggregates, it will go and read from the InfoCube.
    Ex.: let's say you have an aggregate that holds data only for company code 1234.
    However, the user runs the query seeking the details of company code 4567. In this case the query will read the data from the F table, even though aggregates are present.
    Rgds..
    Shambhu

  • Select query performance improvement - Index on EDIDC table

    Hi Experts,
    I have a scenario where in I have to select data from the table EDIDC. The select query being used is given below.
      SELECT  docnum
              direct
              mestyp
              mescod
              rcvprn
              sndprn
              upddat
              updtim
      INTO CORRESPONDING FIELDS OF TABLE t_edidc
      FROM edidc
      FOR ALL ENTRIES IN t_error_idoc
      WHERE
      upddat GE gv_date1 AND
      upddat LE gv_date2 AND
      updtim GE p_time AND
      status EQ t_error_idoc-status.
    As the volume of data is very high, our client asked us to add an index, or use an existing one, to improve the performance of this selection query.
    Question:
    1.    How do we identify the index to be used?
    2.    On which fields should the index be created to improve performance (if the available indexes don't cater to our case)?
    3.    What will be the impact on the table's performance if we create a new index?
    Regards ,
    Raghav

    Question:
    1.    How do we identify the index to be used?
    Generally the index is selected automatically by SAP (the DB optimizer). (You can still name the index in your select query by changing the syntax.) For your select query the second index will be chosen automatically by the optimizer, because the select query has 'upddat' and 'updtim' in the sequence before 'status'.
    2.    On which fields should the index be created to improve performance (if the available indexes don't cater to our case)?
    Create a new index with MANDT and the 4 fields that are in the WHERE clause, in that sequence (a sketch follows at the end of this reply).
    3.    What will be the impact on the table's performance if we create a new index?
    Since the newly created index will be only the 4th index on the table, there shouldn't be any side effects.
    After creating the index, check the change in performance of the current program and also of some other programs that run select queries on EDIDC (with various types of WHERE clauses, preferably) to verify that the new index does not have a negative impact on their performance. Additionally, if possible, check whether you can avoid INTO CORRESPONDING FIELDS.
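    A rough sketch of what such an index amounts to, for illustration only (in practice you define it in the ABAP Dictionary via SE11, which generates the database DDL; the index name here is invented):
    CREATE INDEX "EDIDC~Z01" ON edidc (mandt, upddat, updtim, status);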
    Regards ,
    Seth

  • SELECT query performance : One big table Vs many small tables

    Hello,
    We are using BDB 11g with SQLite support. I have a question about SELECT query performance when we have one huge table vs. multiple small tables.
    Basically, in our application we need to run the select query multiple times, and today we have one huge table. Do you think breaking it into multiple small tables will help?
    For test purposes we tried creating multiple tables, but the performance of the SELECT query was more or less the same. Would that be because all tables map to only one database in the backend with key/value pairs, so a lookup (select query) on a small table or a big table won't make a difference?
    Thanks.

    Hello,
    There is some information on this topic in the FAQ at:
    http://www.oracle.com/technology/products/berkeley-db/faq/db_faq.html#9-63
    If this does not address your question, please just let me know.
    Thanks,
    Sandra

  • Query Performance Issues

    Hi Gurus,
    I'm working on performance tuning at the moment and would like some tips regarding query performance tuning, if anyone can help me with that.
    The thing is that I have got an idea of the system, and now the issues are with DB space, ABAP dumps, free space in tablespaces, no number range buffering, cubes using too many aggregates, large InfoCubes, large ODS objects, and many others.
    So my question is: can anyone tell me how to resolve the issues with the large master data tables and the large ODS objects? One more important issue is KPIs exceeding their reference values; any idea how to deal with them?
    Waiting for your valuable responses.
    Thanks in advance
    Regards
    Amit

    Hi Amit
    For query performance issues you can go for:
    Aggregates: they will help you a lot to make your query faster, because the query doesn't hit your cube; it hits the aggregates, which have a very small number of records compared to your cube.
    Secondly, I would suggest you use CKFs in place of formulas, if there are any in the query.
    Another thing: avoid, to the extent possible, the use of navigational attributes. If you must use them, use them at a minimal level. The reason I say so is that during query execution, whenever there is a navigational attribute it adds an unnecessary join to your master data and thus decreases query performance.
    Be specific with rows and columns; if you are not sure of a key figure or a characteristic, better put it in the free characteristics.
    Use filters if possible.
    If you follow these, I'm sure your query performance will increase.
    Thanks
    puneet

  • Reg: Process Chain, query performance tuning steps

    Hi All,
    I came across a question like this: there is a process chain of 20 processes, out of which 5 have completed; at the 6th step an error occurred and it cannot be rectified. I should start the chain again from the 7th step. If I go to a particular step I can run that particular step, but how can I start the entire chain again from step 7? I know that I need to use a function module, but I don't know the name of the FM. Please, somebody help me out.
    Please also let me know the steps involved in query performance tuning and aggregate tuning.
    Thanks & Regards
    Omkar.K

    Hi,
    Process Chain
    Method 1 (when it fails in a step/request)
    /people/siegfried.szameitat/blog/2006/02/26/restarting-processchains
    How is it possible to restart a process chain at a failed step/request?
    Sometimes, it doesn't help to just set a request to green status in order to run the process chain from that step on to the end.
    You need to set the failed request/step to green in the database as well as you need to raise the event that will force the process chain to run to the end from the next request/step on.
    Therefore you need to open the messages of a failed step by right clicking on it and selecting 'display messages'.
    In the opened popup click on the tab 'Chain'.
    In a parallel session, go to transaction SE16 for table rspcprocesslog and display the entries with the following selections:
    1. copy the variant from the popup to the variante of table rspcprocesslog
    2. copy the instance from the popup to the instance of table rspcprocesslog
    3. copy the start date from the popup to the batchdate of table rspcprocesslog
    Press F8 to display the entries of table rspcprocesslog.
    Now open another session and go to transaction SE37. Enter RSPC_PROCESS_FINISH as the name of the function module and run the FM in test mode.
    Now copy the entries of table rspcprocesslog to the input parameters of the function module as follows:
    1. rspcprocesslog-log_id -> i_logid
    2. rspcprocesslog-type -> i_type
    3. rspcprocesslog-variante -> i_variant
    4. rspcprocesslog-instance -> i_instance
    5. enter 'G' for parameter i_state (sets the status to green).
    Now press F8 to run the fm.
    Now the actual process will be set to green and the following process in the chain will be started and the chain can run to the end.
    Of course you can also set the state of a specific step in the chain to any other possible value like 'R' = ended with errors, 'F' = finished, 'X' = cancelled ....
    Check out the value help on field rspcprocesslog-state in transaction se16 for the possible values.
    Query performance tuning
    General tips
    Using aggregates and compression.
    Using fewer and less complex cell definitions where possible.
    1. Avoid using too many navigational attributes.
    2. Avoid RKFs and CKFs.
    3. Avoid many characteristics in rows.
    By using T-codes ST03 or ST03N
    Go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution for a particular day, check the query execution time.
    /people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
    /people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
    Try table rsddstats to get the statistics
    Using cache memory will decrease the loading time of the report.
    Run the reporting agent at night and send the results to email. This ensures use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
    Also try
    1. Use different parameters in ST03 to see the two important metrics: the aggregation ratio, and records transferred to the front end vs. records selected from the DB.
    2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
    Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
    It will show the dimension vs. fact table sizes in percent. If you mean the speed of queries on a cube as the performance metric of the cube, measure query runtime.
    3. The +/- signs are the valuation of the aggregate design and usage. The larger the number of plus signs, the better the evaluation: its compression is good, it is accessed often, and it satisfies more queries. The greater the number of minus signs, the worse the evaluation: the compression ratio is not so good and access is not so good either.
    If the valuation is "-----", the aggregate is just overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Also, your query performance can depend on the selection criteria, and since you have given a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
    5. In BI 7, statistics need to be activated for ST03 and the BI admin cockpit to work.
    This is done by implementing the BW Statistics Business Content: you need to install it, feed it data, and analyze through the ready-made reports.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can go to T-Code DB20 which gives you all the performance related information like
    Partitions
    Databases
    Schemas
    Buffer Pools
    Tablespaces etc
    Use the program RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
    If aggregates contain incorrect data, you must regenerate them.
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    Thanks,
    JituK

  • Impact of real time cube on query performance and OLAP cache

    Hi:
    We have actual and plan cubes, both set up as real-time cubes (only the plan cube is being planned against, not the actual cube), and both cubes are compressed once a day.
    We are planning to implement the BIA accelerator and have questions related to query performance optimization:
    1/ Are there any query performance benefits in changing the actual cube to a basic cube (using program SAP_CONVERT_NORMAL_TRANS) if the F table is fully compressed?
    2/ Can the OLAP cache be leveraged for queries run against the real-time cubes, e.g. the actual cube?
    3/ What is the impact on BIA of having the actual cube as real-time (whether or not data is being loaded/planned during the day in that cube)?
    Thank you in advance,
    Catherine

    1) Are there any query performance benefits in changing the actual cube to a basic cube (using program SAP_CONVERT_NORMAL_TRANS) if the F table is fully compressed?
    From the performance point of view, basic (standard) cubes are relatively better.
    2) Yes, the OLAP cache can be leveraged for bringing up the plan query, but all the calculations are done in the planning buffer.
    3) Not sure.

  • Query performance and Transport

    Guys (those of you who already work in the field may be able to better answer this question):
    Let's say an end user comes and complains about a certain query's performance, and the developer ends up creating aggregates on the cube. Would he do all of this on the dev box and then transport to QA and on to the production environment, or directly on the production environment with no need to transport anything? Any kind of documentation on this would be helpful.
    Thanks,
    RG

    Aggregates are created directly in production.
    We create aggregates based on query statistics, which are based on the data read by the query and the various runtimes.
    Since this is based on data in prod, we develop directly in prod. I mean, dev won't have any data, so you don't have any basis to create the aggregate. Even if you use statistics data from prod to create the aggregate in dev, you cannot check the performance improvement from the aggregate there, because dev won't have the data prod has.

  • Bad Query Performance

    Hi Experts,
    I have a query built on a MultiProvider which has slow performance.
    I think the main problem comes from selecting too many records from the database: it selects 1.1 million rows to display about 500 rows in the result.
    Another point could be the complicated restricted and calculated key figures, which might spend a lot of time in the OLAP processor.
    Here are the statistics of the query:
    OLAP Initialization:    3.186906
    Wait Time, User:       56.971169
    OLAP: Settings:         0.983193
    Read Cache:             0.015642
    Delete Cache:           0.019030
    Write Cache:            0.087655
    Data Manager:         462.039167
    OLAP: Data Selection:   0.671566
    OLAP: Data Transfer:    1.257884
    ST03 stats:
    %OLAP:     22.74
    %DB:       77.18
    OLAP Time: 29.2
    DB Time:   99.1
    It seems that most of the time is consumed in the database.
    Any suggestion to speed up this Query response time would be great.
    Thanks in advance.
    BR
    Srini.

    Hi,
    You need to have standard query performance tuning done for the underlying cubes: better design, aggregates, etc.
    Improve Performance of Queries/Reports on Multi Cubes
    Refer SAP Note Number: 869487
    Performance optimization for MultiCubes
    How to Create Efficient Multi-Provider Queries
    Please see the How to Guide "How to Create Efficient MultiProvider Queries" at http://service.sap.com/bi > SAP NetWeaver 2004 - Release-Specific Information > How-to Guides > Business Intelligence
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/biw/how%20to%20create%20efficient%20multiprovider%20queries.pdf
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/afbad390-0201-0010-daa4-9ef0168d41b6
    Performance of MultiProviders
    Multiprovider performance / aggregate question
    Query Performance
    Multicube performances
    Create Efficient MultiProvider Queries
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/b03b7f4c-c270-2910-a8b8-91e0f6d77096
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/751be690-0201-0010-5e80-f4f92fb4e4ab
    Also try
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    tuning, short dumps
    Performance tuning in BW:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/afbad390-0201-0010-daa4-9ef0168d41b6
    Also notes
    0000903559 MultiProvider optimization is only partially active
    0000825396 Performance in reports with many selections
    multiprovider explanation i need
    Note 629541 - Multiprovider: Parallel Processing 
    Thanks,
    JituK

  • Tuning query performance

    Dear experts,
    I have a question regarding the performance of a BW query.
    It takes 10 minutes to display about 23 thousand lines.
    This query reads its data from an ODS object.
    Based on the WHERE clause of the SELECT statement observed in the Oracle session while the query was running, I created an index for this ODS object.
    After rerunning the query, I found that the index was used by Oracle when reading this table (the estimated cost was reduced from about 3000 to 2).
    However, the query takes the same time as before.
    Is there any other reason, or other factors I should consider, in tuning the performance of this query?
    Thanks in advance

    Hi David,
    Query performance when reporting on an ODS object is slower compared to InfoCubes, InfoSets, MultiProviders etc., because there are no aggregates and other performance techniques in a DSO.
    Basically, for a DSO/ODS you need to turn on the BEx reporting flag, which again is an overhead for query execution and affects performance.
    To improve performance when reporting on an ODS you can create secondary indexes from the BW workbench, as sketched below.
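    Purely as an illustration of what such a secondary index amounts to at the database level (the ODS active table and column names below are invented; in practice you create the index from the workbench, not by direct DDL):
    CREATE INDEX "/BIC/AZSALES00~Z01" ON "/BIC/AZSALES00" ("/BIC/ZCUSTNO");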
    Please check the below links.
    Re: performance issues of ODS
    Which criteria to follow to pick InfoObj. as secondary index of ODS?
    Hope this helps.
    Regards,
    Haritha.

  • Query Performance - Analyzer vs WAD

    Hi experts,
    We've been having user complaints regarding performance in BW. The queries are accessed using the Analyzer/Excel interface. As a test, I tried comparing opening the query through Analyzer and then through Web Application Designer. With Excel my load from S900 was taking almost 2 minutes. With the web browser my initial load was just 30 seconds! This is great; however, the number is deceiving because it's only showing me a subset of the data: the first 25% of the data is shown, and if I want to view more I have to click 'next page' (called 'Next Rows' on screen). The subsequent set of records (say 25-50%) then takes another 30 seconds. So if you hit next page each time, you end up with 4 loads of 30 seconds each, thus the same performance as Excel (2 minutes total wait), although maybe less stressful to sit and watch.
    QUESTION:
    It seems like the subsequent loads are taking 30 seconds because they are re-querying the back-end data (the S900 Cube) instead of re-querying the resultset.  Is that true?  Do you see my problem?  Is there a better way to do this to improve performance?
    Thanks for any help you can provide!
    -Patrick

    hi,
    Dealing with query performance, please check the following; hope you get some ideas...
    Prakash's weblog on this topic..
    /people/prakash.darji/blog/2006/01/27/query-creation-checklist
    /people/prakash.darji/blog/2006/01/26/query-optimization
    oss note
    557870 'FAQ BW Query Performance'
    and 567746 'Composite note BW 3.x performance Query and Web'
    some docs
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc

  • Query Performance OBIEE 11.1.1.5-Essbase 9.3.1

    I have installed OBIEE 11.1.1.5 and Essbase 9.3.1 on a PC with 8 GB RAM and a Core i7. When I run a report or dashboard for the first time (no data stored in cache), the display is very slow (it takes up to 15 minutes); if I run the same report or dashboard a second time, the display is instantaneous (as it already exists in the query cache). Is there a way to get the data cached beforehand? Some settings, or a parameter that I am not considering? Can you give me some tips to improve query performance? Thanks.

    One good place to start troubleshooting what took so long: take the logical SQL and run it against the database to see how long it takes to retrieve the results.

  • Query performance to increase

    Hi, I'm using CTEs as defined below, and took a temp table to hold the columns from the CTEs. When I execute this stored procedure logic, it takes 7 minutes for 3 lakh (300,000) records. Can anyone please help me improve my query performance?
    The code I'm using is:
    create temtable as(cols);WITH Datematrix(AllocationDate)/*cte*/
    As
    SELECT @StartDate AS AllocationDate
    UNION ALL
    SELECT DATEADD(D,1,AllocationDate) AS AllocationDate
    FROM Datematrix WHERE AllocationDate<@EndDate
    /*cte*/ Allocation (Division,DivisionID,ResourceName,ResourceEmailID,ResourceID,Project
    ,ProjectID,Scope,ScopeID,WorkItem,TaskStartDate,TaskEndDate
    ,ProgramID ,Program,PortfolioID ,Portfolio,StatusID,Status,TaskID,EstimateHrs,ScopeEstimateHrs)
    AS
    SELECT
    DIV.Division
    ,RES.DivisionID
    ,RES.ResourceName
    ,ResourceEmailID = STUFF((
    SELECT COALESCE( ', ' + CONVERT(VARCHAR,RES.Email1), '')
    FROM dbo.TasksResource TSKRES WITH(NOLOCK) LEFT OUTER JOIN
    dbo.tb_Resource RES WITH(NOLOCK) ON RES.UID = TSKRES.ResourceID
    WHERE TSKRES.TaskID = TSK.TaskID
    FOR XML PATH('')), 1, 1, '')
    ,RES.UID ResourceID
    ,PRJ.Project + ' (' + CONVERT(VARCHAR(15),PRJ.StartDate,101) +' - ' + CONVERT(VARCHAR(15),PRJ.EndDate,101) + ')' as Project
    ,PRJ.UID ProjectID
    ,SCP.Title Scope
    ,SCP.ScopeID
    ,TSK.Title WorkItem
    ,TSK.StartDate TaskStartDate
    ,TSK.EndDate TaskEndDate
    ,PRJ.ProgramID
    ,PR.Program
    ,PR.PortfolioID
    ,PF.Portfolio
    ,TSK.StatusID
    ,ST.Status
    ,TSK.TaskID
    ,TSK.EstimateHrs
    ,(isnull(SCP.EstimateARCH,0) + isnull(SCP.EstimateBA,0) + isnull(SCP.EstimateDev,0) + isnull(SCP.EstimatePM,0) + isnull(SCP.EstimateQA,0) + isnull(SCP.EstimateRM,0)) as ScopeEstimateHrs
    --SCP.EstimateARCH + SCP.EstimateBA +SCP.EstimateDev +SCP.EstimatePM +SCP.EstimateQA +SCP.EstimateRM as ScopeEstimateHrs
    FROM Tasks TSK WITH(NOLOCK)
    INNER JOIN dbo.Scope SCP WITH(NOLOCK) ON TSK.ScopeID = SCP.ScopeID
    INNER JOIN dbo.tb_Project PRJ WITH(NOLOCK)ON TSK.ProjectID = PRJ.UID
    INNER JOIN dbo.tb_Program PR WITH(NOLOCK) ON PR.UID=PRJ.ProgramID
    INNER JOIN dbo.tb_Portfolio PF WITH(NOLOCK)ON PF.UID=PR.PortfolioID
    LEFT OUTER JOIN dbo.TasksResource TSKRES WITH(NOLOCK)ON TSKRES.TaskID = TSK.TaskID
    LEFT OUTER JOIN dbo.tb_Resource RES WITH(NOLOCK) ON RES.UID = TSKRES.ResourceID
    LEFT JOIN dbo.tb_Division DIV WITH(NOLOCK) ON RES.DivisionID = DIV.UID
    LEFT JOIN dbo.tb_Status ST WITH(NOLOCK) ON TSK.StatusID=ST.UID /*relating with the high level work items */
    WHERE (PRJ.UID = @Project OR @Project = -1)
    AND (PRJ.ProgramID = @Program OR @Program = -1)
    AND (PRJ.PortfolioID =@Portfolio OR @Portfolio = -1)
    ,/*columns used in 2 cte's are taken in below maindata*/
    MainData (AllocationDate,Division,DivisionID,ResourceName,ResourceEmailID,ResourceID,Project,ProjectID
    ,Scope,ScopeID,WorkItem,TaskStartDate,TaskEndDate
    ,ProgramID ,Program,PortfolioID ,Portfolio,StatusID,Status,TaskID,EstimateHrs,ScopeEstimateHrs,Allocated)
    AS
    ( SELECT
    Datematrix.*
    ,Allocation.*
    ,CASE WHEN ISDATE(TaskStartDate)=1 THEN 1 ELSE 0 END AS Allocated
    FROM Datematrix FULL OUTER JOIN Allocation
    ON ( Datematrix.AllocationDate>= Allocation.TaskStartDate
    AND Datematrix.AllocationDate<=Allocation.TaskEndDate
    )INSERT INTO #TempTable
    SELECT * FROM MainData
    OPTION (MAXRECURSION 0);
    This is the way the code goes. Please help; I need my query tuned! Thanks in advance.
    lucky

    When asking performance-related questions, it is usually a bad idea to use pseudo code. In this case, I am referring to the fact that your "temtable" declaration is invalid, and the fact that it is unclear where the local variables and/or parameters originate from.
    In your query, you are using the "optional parameter" pattern for local variables and/or parameters @Project, @Program and @Portfolio. I therefore assume that we are talking about a stored procedure here.
    In stored procedures, optional parameters often have a negative effect on performance. You can cancel some of that effect by adding the OPTION(RECOMPILE) hint (assuming it is part of the stored procedure, and we are talking about parameters, not local variables), as sketched below.
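    A minimal sketch of that hint applied to the optional-parameter pattern from your query (only the shape matters; the table and parameters are taken from your post):
    SELECT PRJ.*
    FROM dbo.tb_Project PRJ
    WHERE (PRJ.UID = @Project OR @Project = -1)
      AND (PRJ.ProgramID = @Program OR @Program = -1)
    OPTION (RECOMPILE); -- compile a plan for the actual parameter values on each execution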
    Also, you could consider using a calendar table instead of the Datematrix CTE. The reason is that the join with Allocation is probably difficult right now: the optimizer probably hasn't established that Datematrix is a unique range of dates, and the CTE will not have any index. A calendar table with a unique index on the date can help; see the sketch below.
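    A minimal sketch of such a calendar table (the column name is chosen to match your CTE; the seed query is just one way to populate it):
    -- persisted calendar table replacing the recursive Datematrix CTE
    CREATE TABLE dbo.Calendar (AllocationDate DATE NOT NULL PRIMARY KEY);
    INSERT INTO dbo.Calendar (AllocationDate)
    SELECT DATEADD(DAY, ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1, '2000-01-01')
    FROM sys.all_objects; -- a few thousand consecutive dates; cross join for more
    The join condition then stays a simple range test, e.g. Calendar.AllocationDate BETWEEN Allocation.TaskStartDate AND Allocation.TaskEndDate.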
    The rest is probably up to the indexes on the base tables. Since you did not post any DDL, I can't comment on that. Lack of proper indexes are usually the biggest reason of Select performance problems.
    Gert-Jan
