Query is slow and taking too much time

Hi All,
My DB version is 10.2.0.
OS is Windows Server 2003.
I am new to performance tuning. One of our users came to me and said that his query is running slow. Can any guru suggest what I need to do? Where should I start, and what sequence of steps should I follow? Should I begin with an explain plan or with AWR?
Thanks in advance
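To narrow down one user's slow query, a common first step on 10g is to look at its execution plan. A minimal sketch (the statement below is a hypothetical placeholder; substitute the user's actual SQL):

```sql
-- Explain the suspect statement (placeholder query)
EXPLAIN PLAN FOR
  SELECT * FROM sal_ord_d WHERE sod_div_code = 'LM0000';

-- Show the plan that was just explained
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```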

Below is the report I got when I ran ADDM:
DETAILED ADDM REPORT FOR TASK 'ADDM:1051257157_1_1412' WITH ID 12851
              Analysis Period: 22-JUN-2012 from 13:30:14 to 14:22:25
         Database ID/Instance: 1051257157/1
      Database/Instance Names: SIDB/sidb
                    Host Name: SERVER1
             Database Version: 10.2.0.1.0
               Snapshot Range: from 1411 to 1412
                Database Time: 2037 seconds
        Average Database Load: .7 active sessions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
FINDING 1: 41% impact (829 seconds)
Individual database segments responsible for significant user I/O wait were
found.
   RECOMMENDATION 1: Segment Tuning, 17% benefit (346 seconds)
      ACTION: Run "Segment Advisor" on TABLE "SIUSER.SAL_ORD_D" with object id
         52739.
         RELEVANT OBJECT: database object with id 52739
      ACTION: Investigate application logic involving I/O on TABLE
         "SIUSER.SAL_ORD_D" with object id 52739.
         RELEVANT OBJECT: database object with id 52739
      RATIONALE: The I/O usage statistics for the object are: 243 full object
         scans, 1763014 physical reads, 64170 physical writes and 0 direct
         reads.
      RATIONALE: The SQL statement with SQL_ID "4ws9bwmth4yn0" spent
         significant time waiting for User I/O on the hot object.
         RELEVANT OBJECT: SQL statement with SQL_ID 4ws9bwmth4yn0
         Update SAL_ORD_D set SOD_BLCK_QTY  = nvl((select sum(BIQD_BLCK_QTY)
         from SAL_BLCK_ITEM_QTY_D,SAL_BLCK_ITEM_QTY_M where
         BIQD_DIV_CODE=BIQM_DIV_CODE and BIQD_MASCOM_CODE=BIQM_MASCOM_CODE and
         BIQD_REF_NUM=BIQM_REF_NUM  and BIQD_DIV_CODE='LM0000' and
         BIQD_MASCOM_CODE='LATL01' and BIQM_APPROVED_FLAG='Y' and
         BIQD_ITEM_CODE=SOD_ITEM_CODE),0)   where  SOD_CANCELED_FLAG 'Y' and
         SOD_STAT_FLAG 'Y' and SOD_DIV_CODE='LM0000' and
         SOD_MASCOM_CODE='LATL01' 
   RECOMMENDATION 2: Segment Tuning, 11% benefit (231 seconds)
      ACTION: Run "Segment Advisor" on TABLE "SIUSER.SAL_INVC_M" with object
         id 52723.
         RELEVANT OBJECT: database object with id 52723
      ACTION: Investigate application logic involving I/O on TABLE
         "SIUSER.SAL_INVC_M" with object id 52723.
         RELEVANT OBJECT: database object with id 52723
      RATIONALE: The I/O usage statistics for the object are: 387 full object
         scans, 607878 physical reads, 22 physical writes and 0 direct reads.
   RECOMMENDATION 3: Segment Tuning, 8.8% benefit (178 seconds)
      ACTION: Run "Segment Advisor" on TABLE "SIUSER.SAL_DA_D" with object id
         52694.
         RELEVANT OBJECT: database object with id 52694
      ACTION: Investigate application logic involving I/O on TABLE
         "SIUSER.SAL_DA_D" with object id 52694.
         RELEVANT OBJECT: database object with id 52694
      RATIONALE: The I/O usage statistics for the object are: 56 full object
         scans, 708645 physical reads, 24904 physical writes and 0 direct
         reads.
   RECOMMENDATION 4: Segment Tuning, 3.6% benefit (73 seconds)
      ACTION: Run "Segment Advisor" on TABLE "SIUSER.SAL_DA_SRV_N_RATE" with
         object id 52712.
         RELEVANT OBJECT: database object with id 52712
      ACTION: Investigate application logic involving I/O on TABLE
         "SIUSER.SAL_DA_SRV_N_RATE" with object id 52712.
         RELEVANT OBJECT: database object with id 52712
      RATIONALE: The I/O usage statistics for the object are: 112 full object
         scans, 173184 physical reads, 5402 physical writes and 0 direct
         reads.
   SYMPTOMS THAT LED TO THE FINDING:
      SYMPTOM: Wait class "User I/O" was consuming significant database time.
               (57% impact [1165 seconds])
FINDING 2: 16% impact (324 seconds)
The SGA was inadequately sized, causing additional I/O or hard parses.
   RECOMMENDATION 1: DB Configuration, 15% benefit (297 seconds)
      ACTION: Increase the size of the SGA by setting the parameter
         "sga_target" to 876 M.
   ADDITIONAL INFORMATION:
      The value of parameter "sga_target" was "584 M" during the analysis
      period.
   SYMPTOMS THAT LED TO THE FINDING:
      SYMPTOM: Wait class "User I/O" was consuming significant database time.
               (57% impact [1165 seconds])
      SYMPTOM: Hard parsing of SQL statements was consuming significant
               database time. (5.9% impact [120 seconds])
FINDING 3: 6.1% impact (125 seconds)
Time spent on the CPU by the instance was responsible for a substantial part
of database time.
   RECOMMENDATION 1: Application Analysis, 6.1% benefit (125 seconds)
      ACTION: Parsing SQL statements were consuming significant CPU. Please
         refer to other findings in this task about parsing for further
         details.
   ADDITIONAL INFORMATION:
      The instance spent significant time on CPU. However, there were no
      predominant SQL statements responsible for the CPU load.
FINDING 4: 5.5% impact (112 seconds)
SQL statements were not shared due to the usage of literals. This resulted in
additional hard parses which were consuming significant database time.
   RECOMMENDATION 1: Application Analysis, 5.5% benefit (112 seconds)
      ACTION: Investigate application logic for possible use of bind variables
         instead of literals.
      ACTION: Alternatively, you may set the parameter "cursor_sharing" to
         "force".
      RATIONALE: At least 28 SQL statements with PLAN_HASH_VALUE 3296081222
         were found to be using literals. Look in V$SQL for examples of such
         SQL statements.
      RATIONALE: At least 14 SQL statements with PLAN_HASH_VALUE 3662582453
         were found to be using literals. Look in V$SQL for examples of such
         SQL statements.
      RATIONALE: At least 7 SQL statements with PLAN_HASH_VALUE 3360087625
         were found to be using literals. Look in V$SQL for examples of such
         SQL statements.
      RATIONALE: At least 6 SQL statements with PLAN_HASH_VALUE 571809715 were
         found to be using literals. Look in V$SQL for examples of such SQL
         statements.
   SYMPTOMS THAT LED TO THE FINDING:
      SYMPTOM: Hard parsing of SQL statements was consuming significant
               database time. (5.9% impact [120 seconds])
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
          ADDITIONAL INFORMATION
Wait class "Application" was not consuming significant database time.
Wait class "Commit" was not consuming significant database time.
Wait class "Concurrency" was not consuming significant database time.
Wait class "Configuration" was not consuming significant database time.
Wait class "Network" was not consuming significant database time.
Session connect and disconnect calls were not consuming significant database
time.
The database's maintenance windows were active during 100% of the analysis
period.
The analysis of I/O performance is based on the default assumption that the
average read time for one database block is 10000 micro-seconds.
An explanation of the terminology used in this report is available when you
run the report with the 'ALL' level of detail.
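Regarding Finding 4, the statements using literals can be pulled from V$SQL with the plan hash values listed in the report, and a literal can then be replaced with a bind variable. A sketch (the bind variable name is made up; the table and column come from the report's hot SQL):

```sql
-- Locate statements that differ only by literal values
SELECT sql_text FROM v$sql WHERE plan_hash_value = 3296081222;

-- Literal version: each distinct value forces a new hard parse
SELECT * FROM siuser.sal_ord_d WHERE sod_div_code = 'LM0000';

-- Bind version: one shared cursor for all values (SQL*Plus syntax)
VARIABLE div VARCHAR2(10)
EXEC :div := 'LM0000'
SELECT * FROM siuser.sal_ord_d WHERE sod_div_code = :div;
```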
Please guide me on this.
Edited by: Vikas Kohli on Jun 22, 2012 3:18 PM

Similar Messages

  • Spatial query with sdo_aggregate_union taking too much time

    Hello friends,
    The following query is taking too much time to execute.
    table1 contains around 2000 records.
    table2 contains 124 rows.
    SELECT
        table1.id,
        table1.txt,
        table1.id2,
        table1.acti,
        table1.acti,
        table1.geom as geom
    FROM
        table1
    WHERE
        sdo_relate(
            table1.geom,
            (select sdo_aggr_union(sdoaggrtype(geom, 0.0005)) from table2),
            'mask=(ANYINTERACT) querytype=window'
        ) = 'TRUE'
    I am new to spatial. I am trying to find the list of geometries that fall within the geometries stored in table2.
    Thanks

    Hi, thanks a lot for your reply.
    But it should not be necessary to use an sdo_aggregate function to find out whether a geometry in one table lies within a geometry in the other.
    Let me give you a clearer picture.
    What I am trying to do is this: table1 contains a list of all stations (station information) of a state, and table2 contains a list of city areas. I want to find the stations that belong to a city.
    For this I thought to take the aggregate union of the city areas and then check for any interaction of that aggregate result with each station geometry, to determine whether the station is in the city or not.
    I hope this helps you understand my query.
    Thanks
    I appreciate your efforts.

  • Simple APD is taking too much time in running

    Hi All,
    We have one APD created on our development system which is taking too much time to run.
    This APD fetches data from a query having only 1200 records and puts it directly into a master attribute.
    The query runs fine in transaction RSRT and gives output within 5 seconds, but in the APD, if I display data over the query, it takes too much time.
    The APD takes around 1 hour 20 minutes to run.
    Thanks in advance!

    Hi,
    When a query runs in an APD it normally takes much, much longer than it does in RSRT. Run times such as you describe (5 seconds in RSRT and over an hour in the APD) are quite normal; I have seen some of my queries run for several hours in an APD as well.
    You just have to wait for it to complete.
    Regards,
    Suhas

  • Delete query taking too much time

    Hi All,
    My delete query is taking too much time: around 1 hour 30 minutes for 1.5 lakh (150,000) records.
    I have dropped the MV log on the table and disabled all the triggers on it.
    Moreover, the deletion is based on the primary key:
    delete from table_name where primary_key in (values)
    The above is a dummy format of my query.
    Can anyone please tell me what other reason there could be for the query performing that slowly?
    Is there anything to check in the DB other than triggers, MV logs and constraints in order to improve the performance?
    Please reply asap.

    Delete is the most time-consuming operation, as the whole record has to be stored in the undo segments. On the other hand, the part of the query used to select the records to delete, the in (values) clause, is probably adding extra overhead to the process. It would be nice if you could post another dummy of this (values) clause. I would guess it is a subquery, and that in order to obtain this list you are running an inefficient query.
    You can gather the execution plan so you can see where the heaviest part of the query is. That way a better tuning approach and a more accurate diagnosis can be made.
    ~ Madrid.
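Gathering the execution plan the reply suggests can be sketched like this (the IN subquery is a hypothetical stand-in for the real (values) list):

```sql
EXPLAIN PLAN FOR
  DELETE FROM table_name
  WHERE primary_key IN (SELECT primary_key FROM staging_keys);  -- placeholder subquery

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```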

  • For the last two days my iPhone has been very slow to open apps and very slow when I check the notification window; it takes too much time to open when I tap down. Help me resolve the issue.

    Hi, for the last two days my iPhone (iPhone 4 with iOS 5) has been very slow to open apps and very slow when I check the notification window; it takes too much time to open when I tap down. Help me resolve the issue.

    The Basic Troubleshooting Steps are:
    Restart... Reset... Restore...
    iPhone Reset
    http://support.apple.com/kb/ht1430
    Try this First... You will Not Lose Any Data...
    Turn the Phone Off...
    Press and Hold the Sleep/Wake Button and the Home Button at the Same Time...
    Wait for the Apple logo to Appear and then Disappear...
    Usually takes about 15 - 20 Seconds... ( But can take Longer...)
    Release the Buttons...
    Turn the Phone On...
    If that does not help... See Here:
    Backing up, Updating and Restoring
    http://support.apple.com/kb/HT1414

  • Why is query taking too much time ?

    Hi gurus,
    I have a table named test which has 100,000 records in it. Now the question I would like to ask is:
    When I query select * from test; there is no problem with response time. But when I fire the same query the next day, it takes too much time, say 3 times as long. I would also like to tell you that everything is OK with respect to tuning: the DB is properly tuned, and the network is tuned properly. What could be the hurting factor here?
    Take care,
    All expertise.

    Here is a small test on my Windows PC.
    Oracle 9i Release 1.
    Table: emp_test
    Number of records: 42k
    set autot trace exp stat
    15:29:13 jaffar@PRIMEDB> select * from emp_test;
    41665 rows selected.
    Elapsed: 00:00:02.06 ==> response time.
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=24 Card=41665 Bytes=916630)
    1 0 TABLE ACCESS (FULL) OF 'EMP_TEST' (Cost=24 Card=41665 Bytes=916630)
    Statistics
    0 recursive calls
    0 db block gets
    2951 consistent gets
    178 physical reads
    0 redo size
    1268062 bytes sent via SQL*Net to client
    31050 bytes received via SQL*Net from client
    2779 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    41665 rows processed
    15:29:40 jaffar@PRIMEDB> delete from emp_test where deptno = 10;
    24998 rows deleted.
    Elapsed: 00:00:10.06
    15:31:19 jaffar@PRIMEDB> select * from emp_test;
    16667 rows selected.
    Elapsed: 00:00:00.09 ==> response time
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=24 Card=41665 Bytes=916630)
    1 0 TABLE ACCESS (FULL) OF 'EMP_TEST' (Cost=24 Card=41665 Bytes=916630)
    Statistics
    0 recursive calls
    0 db block gets
    1289 consistent gets
    0 physical reads
    0 redo size
    218615 bytes sent via SQL*Net to client
    12724 bytes received via SQL*Net from client
    1113 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    16667 rows processed

  • Query is taking too much time

    hi
    The following query is taking too much time (more than 30 minutes), working with 11g.
    The table has three columns (rid, ida, geometry) and an index has been created on all columns.
    The table has around 540,000 records of point geometries.
    Please help me with your suggestions. I want to select duplicate point geometries where ida='CORD'.
    SQL> select a.rid, b.rid from totalrecords a, totalrecords b where a.ida='CORD' and b.idat='CORD' and
    sdo_equal(a.geometry, b.geometry)='TRUE' and a.rid !=b.rid order by 1,2;
    regards

    I have removed some AND conditions that were not necessary. It's just that Oracle can see, for example, in
    a.ida='CORD' AND
    b.idat='CORD' AND
    a.rid !=b.rid AND
    sdo_equal(a.geometry, b.geometry)='TRUE'
    ORDER BY 1,2;
    that if a.ida does not equal 'CORD', the whole set of conditions evaluates to FALSE, so Oracle will not bother evaluating the rest of the conditions, because it is all AND'ed together and TRUE AND FALSE = FALSE.
    So if you place your least expensive conditions first (even though the optimizer can and will reorder conditions) this will give you a small performance benefit. Normally too small to notice, but on 540,000 records it should be noticeable.
    You wrote "and I have set layer_gtype=POINT". Good, that will help. I forgot about that one (thanks Luc!).
    You wrote "Now I am facing the problem to DELETE duplicate point geometry. The following query is taking too much time." What is too much time? Do you need to delete these duplicate points on a daily or hourly basis? Or is this a one-time cleanup action? If it is a one-time cleanup operation, does it really matter if it takes half an hour?
    And if this is a daily or even hourly operation, then why don't you prevent the duplicates from entering the table in the first place? That will save you from having to clean up afterwards. Of course, this might not be possible with your business requirements.
    Lastly: can you post an explain plan for your queries? Those might give us an idea of what is taking so much time. Please enclose the results of the explain plan with
    [ c o d e ]
    <code/results here>
    [ / c o d e ]
    that way the original formatting is kept and it makes things much easier to read.
    Regards,
    Stefan
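If it does turn out to be a one-time cleanup, the delete the poster mentions (whose SQL was not posted) could be sketched along these lines. This is an assumption, not the original statement; it keeps the lowest rid of each group of equal points:

```sql
-- Hypothetical cleanup: for each set of sdo_equal points, delete all but the lowest rid
DELETE FROM totalrecords b
WHERE b.ida = 'CORD'
  AND EXISTS (
    SELECT 1
    FROM totalrecords a
    WHERE a.ida = 'CORD'
      AND a.rid < b.rid
      AND sdo_equal(a.geometry, b.geometry) = 'TRUE'
  );
```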

  • Initramfs too slow and ISR taking too much time !

    On my SMP platform (2 cores with 10 siblings each) I have a PCIe device, a video card, which does DMA transfers to the application buffers.
    This works fine with SUSE 13.1 installed on an SSD drive.
    When I took the whole image, created an initramfs from it and ran from that (without any HDD), the whole system was very slow and I saw the PCIe ISRs taking too much time, and hence the driver is failing!
    Any help on this is much appreciated.
    There is a first initramfs image, which is the usual initrd that openSUSE installs; I patched it to unpack the second initramfs (a full rootfs of 1.8 GB) into RAM (32 GB) as tmpfs and exec_init from it.
    Last edited by Abhayadev S (2015-05-21 16:28:38)

    Abhayadev S,
    Although your problem definitely looks very interesting, we can't help with issues with SUSE Linux.
    Or are you considering testing Arch Linux on that machine?

  • Query taking so much time..

    Hi All,
    I have one request for you.
    Basically I am running a query on the dev server and it is taking so much time, but if I run the same query in my production environment it runs very fast.
    I checked the indexes on the table; they are all fine and in a valid state. I rebuilt the indexes again.
    I also got the explain plan in both environments; it is given below.
    EXPLAIN PLAN::: DEV Environment
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
    | 0 | SELECT STATEMENT | 1 | 47 | 206K (2)|
    | 1 | SORT AGGREGATE | | 1 | 47 | |
    | 2 | NESTED LOOPS | | 1 | 47 | 206K (2)|
    | 3 | TABLE ACCESS FULL | TRANSACTION_DETAILS | 1 | 22 | 206K (2)|
    | 4 | TABLE ACCESS BY INDEX ROWID | TRANSACTIONS | 1 | 25 | 1 (0)|
    | 5 | INDEX UNIQUE SCAN | TRA_PK | 1 | | 1 (0)|
    EXPLAIN PLAN::: Production Environment.
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
    | 0 | SELECT STATEMENT | | 1 | 48 | 296 (1)|
    | 1 | SORT AGGREGATE | | 1 | 48 | |
    | 2 | TABLE ACCESS BY INDEX ROWID | TRANSACTION_DETAILS | 1 | 23 | 4 (0)|
    | 3 | NESTED LOOPS | | 83 | 3984 | 296 (1)|
    | 4 | TABLE ACCESS BY INDEX ROWID| TRANSACTIONS | 60 | 1500 | 55 (0)|
    | 5 | INDEX RANGE SCAN | TRA_CODE_TYPE | 61 | | 4 (0)|
    | 6 | INDEX RANGE SCAN | TRAD_PK | 2 | | 3 (0)|
    Please help me out, it's very urgent. Thanks in advance.
    Regards
    Sunny.
    Edited by: Sunny on Jul 14, 2011 1:24 PM

    Sunny wrote:
    Justin,
    Please find the reply to your queries.
    "Are the statistics on the tables accurate?" - How can I know whether the statistics on the tables are accurate?
    See http://jonathanlewis.wordpress.com/2009/05/11/cardinality-feedback/
    "How do you gather statistics?" - I find the statistics at table level and at schema level also.
    I believe he was asking what command you use to gather the statistics. If, for example, you created the big table with a few rows, then populated it with a lot of rows without regathering statistics, you could have very misleading statistics. The opposite is possible too: some data skew in the large table, followed by gathering statistics with the 10g default statistics gathering, could give you bogus statistics. See http://jonathanlewis.wordpress.com/category/oracle/statistics/histograms/
    "How many rows are in the tables?" - The query uses two tables:
    Transaction table - 40 rows
    Transaction_detail - 51,151,804 rows.
    "Advise me more on this." Please show the plans including the estimated and actual cardinalities. Something is telling the optimizer that a full table scan is better than nested loops with index range scans. Also show us the non-default init.ora settings.
    Regards,
    Sunny
    You also need to show us in detail which version you are on. Remember, this is a volunteer forum; "urgent" is not a nice thing to say.
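Since the reply asks what command was used to gather statistics, doing so explicitly can be sketched as follows (the schema name is a placeholder):

```sql
-- Regather statistics on the big table (adjust ownname to your schema)
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'APP_SCHEMA',           -- hypothetical schema name
    tabname => 'TRANSACTION_DETAILS',
    cascade => TRUE                    -- also gather index statistics
  );
END;
/
```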

  • For tablets, touch is very slow in Firefox OS. Sometimes it hangs. While dialing a number to call, it takes too much time. How do I fix and debug this issue?

    For tablets, touch is very slow in Firefox OS. Sometimes it hangs. While dialing a number to call, it takes too much time. How do I fix and debug this issue? Waiting for your reply.

    Hi sb00349044,
    I'm sorry to hear that you are having problems with your Firefox OS device. Can you please specify the device Model and Version?
    If your device is one of the Firefox OS Tablets from the contribution program, please be aware that those builds are still being improved and ironed out. If that is the case, please follow the guidelines for the contributor program to report issues with the device.
    Thanks,
    - Ralph

  • Query taking too much time with dates??

    Hello folks,
    I am trying to pull some data using a date condition, and for some reason it is taking too much time to return the data.
       and trunc(al.activity_date) = TRUNC (SYSDATE, 'DD') - 1     -- If I use this it takes too much time
      and al.activity_date >= to_date('20101123 000000', 'YYYYMMDD HH24MISS')
       and al.activity_date <= to_date('20101123 235959', 'YYYYMMDD HH24MISS') -- If I use this it returns the data in a second. Why is that?
    How do I get the previous day without using the hardcoded to_date('20101123 000000', 'YYYYMMDD HH24MISS'), if I need to retrieve it faster?

    Presumably you've got an index on activity_date.
    If you apply a function like TRUNC to activity_date, you can no longer use the index.
    Post execution plans to verify.
    and al.activity_date >= TRUNC (SYSDATE, 'DD') - 1
    and al.activity_date < TRUNC (SYSDATE, 'DD')
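If the query genuinely must filter on TRUNC(activity_date), another option is a function-based index, which lets the TRUNC predicate use an index. This assumes you are free to add an index; the table name is hypothetical, guessed from the alias al:

```sql
-- Function-based index matching the TRUNC(...) predicate
CREATE INDEX al_trunc_activity_idx
  ON activity_log (TRUNC(activity_date));

-- This predicate can now use the index
SELECT COUNT(*)
FROM activity_log al
WHERE TRUNC(al.activity_date) = TRUNC(SYSDATE) - 1;
```

The plain range predicates shown above remain the simpler fix, since they need no extra index.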

  • Owb job taking too much time to execute

    While creating a job in OWB, I am using three tables, a joiner and an aggregator, which are all joined through another joiner to load into the final table. The output is correct, but the generated SQL query is very complex, with many sub-queries, so it takes a long time to execute. Please help me reduce the cost.
    -KC

    It depends on what kind of code it generates at each stage. The first step would be to collect stats for all the tables used and check the generated SQL using EXPLAIN PLAN. See which sub-query or inline view creates the most cost.
    Generate the SQL at various stages and see if you can achieve the same result with a different operator.
    The other option would be passing HINTS to the selected tables.
    - K

  • Report taking too much time in the portal

    Hi friends,
    We have developed a report on the ODS and published it on the portal.
    The problem is that when users execute the report at the same time it takes too much time, and because of this the performance is very poor.
    Is there any way to sort out this issue? For example, can we send the report to the individual users' mail IDs so that they do not have to log in to the portal? Or can we create the same report on the cube?
    What would be the main difference if the report were made on the cube rather than the ODS?
    Please help me.
    Thanks in advance,
    Sridath

    Hi
    Try this to improve the performance of the query.
    Find the query run time. Where to find the query run time:
    Note 557870 - 'FAQ BW Query Performance'
    Note 130696 - Performance trace in BW
    This info may be helpful.
    General tips
    Using aggregates and compression.
    Using fewer and less complex cell definitions if possible.
    1. Avoid using too many nav. attr
    2. Avoid RKF and CKF
    3. Many chars in row.
    By using T-codes ST03 or ST03N
    Go to transaction ST03 > switch to expert mode > from left side menu > and there in system load history and distribution for a particular day > check query execution time.
    /people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
    /people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
    Try table rsddstats to get the statistics
    Using cache memory will decrease the loading time of the report.
    Run reporting agent at night and sending results to email. This will ensure use of OLAP cache. So later report execution will retrieve the result faster from the OLAP cache.
    Also try
    1. Use different parameters in ST03 to see the two important parameters aggregation ratio and records transferred to F/E to DB selected.
    2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
    Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
    It will show dimension vs. fact table sizes in percent. If you take the speed of queries on a cube as the performance metric of the cube, measure query runtime.
    3. To check the performance of the aggregates,see the columns valuation and usage in aggregates.
    Open the Aggregates...and observe VALUATION and USAGE columns.
    "---" sign is the valuation of the aggregate. You can say -3 is the valuation of the aggregate design and usage. ++ means that its compression is good and access is also more (in effect, performance is good). If you check its compression ratio, it must be good. -- means the compression ratio is not so good and access is also not so good (performance is not so good).The more is the positives...more is useful the aggregate and more it satisfies the number of queries. The greater the number of minus signs, the worse the evaluation of the aggregate. The larger the number of plus signs, the better the evaluation of the aggregate.
    if "-----" then it means it just an overhead. Aggregate can potentially be deleted and "+++++" means Aggregate is potentially very useful.
    In valuation column,if there are more positive sign it means that the aggregate performance is good and it is useful to have this aggregate.But if it has more negative sign it means we need not better use that aggregate.
    In usage column,we will come to know how far the aggregate has been used in query.
    Thus we can check the performance of the aggregate.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    performance ISSUE related to AGGREGATE
    Note 356732 - Performance Tuning for Queries with Aggregates
    Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
    4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Also, your query performance can depend on the selection criteria; since you have a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
    5. In BI 7 statistics need to be activated for ST03 and BI admin cockpit to work.
    By implementing BW Statistics Business Content: you need to install it, feed data into it, and use the ready-made reports for analysis.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can go to T-Code DB20 which gives you all the performance related information like
    Partitions
    Databases
    Schemas
    Buffer Pools
    Tablespaces etc
    use tool RSDDK_CHECK_AGGREGATE in se38 to check for the corrupt aggregates
    If aggregates contain incorrect data, you must regenerate them.
    202469 - Using aggregate check tool
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    You can find out whether an aggregate is useful or useless through a process of checking the tables RSDDSTATAGGRDEF*.
    Run the query in RSRT with "execute with statistics"; afterwards you will get a STATUID. Copy this and check it in the table.
    This shows you exactly which InfoObjects the query hits; if any one of the objects is missing, the aggregate is useless.
    6. Check SE11 > table RSDDAGGRDIR. You can find the last call-up in the table.
    Generate Report in RSRT
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    Assign points if useful
    Cheers
    SM

  • While condition is taking too much time

    I have a query that returns around 2100 records (not many!). When I am processing my result set with a while condition, it takes too much time (around 30 seconds). Here is the code:
    public static GroupHierEntity load(Connection con)
         throws SQLException
    {
         internalCustomer = false;
         String customerNameOfLogger = com.photomask.framework.ControlServlet.CUSTOMER_NAME;
         // note: the null/empty checks must come before startsWith() to avoid a NullPointerException
         if (customerNameOfLogger == null || customerNameOfLogger.equals("")
                 || customerNameOfLogger.startsWith("DPI") || customerNameOfLogger.startsWith("DUPONT")
                 || customerNameOfLogger.equals("Unavailable"))
         { internalCustomer = true; }
         // System.out.println(" ***************** customer name of logger " + com.photomask.framework.ControlServlet.CUSTOMER_NAME + " internal customer " + internalCustomer);
         // show all groups to internal customers and only their customer groups for external customers
         if (internalCustomer) {
              stmtLoad = con.prepareStatement(sqlLoad);
              ResultSet rs = stmtLoad.executeQuery();
              return new GroupHierEntity(rs);
         } else {
              stmtLoadExternal = con.prepareStatement(sqlLoadExternal);
              stmtLoadExternal.setString(1, customerNameOfLogger);
              stmtLoadExternal.setString(2, customerNameOfLogger);
              // System.out.println("***** sql " + sqlLoadExternal);
              ResultSet rs = stmtLoadExternal.executeQuery();
              return new GroupHierEntity(rs);
         }
    }
    // calling code:
    GroupHierEntity ge = GroupHierEntity.load(con);
    while (ge.next()) {
         lvl = ge.getInt("lvl");
         oid = ge.getLong("oid");
         name = ge.getString("name");
         if (internalCustomer && lvl == 2) {
              int i = getAlphaIndex(name);
              super.setAppendRoot(alphaIndex);
         }
         gn = new GroupListDataNode(lvl + 1, oid, name);
         gn.setSelectable(true);
         this.addNode(gn);
         count++;
    }
    System.out.println("*** count " + count);
    ge.close();
    ========================
    Then I removed everything in the while clause and ran just the following; it still takes the same time (30 secs):
    while (ge.next())
    { count++; }
    Why is the while condition (ge.next()) taking so much time? Is there any other, more efficient way of reading the result set?
    Thanks,
    Bala

    I tried all these things. The query is not taking much time (1 sec), but resultset.next() is taking too much time. I measured the time by putting System.out.println at various points to see which part takes how long.
    executeQuery() takes only 1 sec. Processing the result set (moving the cursor to the next position) is what takes too much time.
    I have similar queries that return some 800 rows; those take only 1 sec.
    I suspect resultset.next(). Any other alternative?

  • Report is taking too much time when running from parameter form

    Dear All
    I have developed a report in Oracle Reports Builder 10g. While running it from Reports Builder, the data comes very fast.
    But if it is run from the parameter form, it takes too much time to format the report as PDF.
    Please suggest any configuration or setting, if anybody has an idea.
    Thanks

    Hi,
    The first thing to check is whether the query runs to completion in TOAD. By default, TOAD selects just the first 50 rows, whereas Discoverer must return all the rows before displaying results if a crosstab report is used.
    Secondly, check that the queries and the explain plans are the same in Discoverer and TOAD. Although Discoverer shows the SQL in the SQL inspector, this isn't necessarily the SQL actually sent to the database. Use TOAD to interrogate the Discoverer session to determine the actual SQL, and compare this SQL and its explain plan to the SQL you ran in TOAD.
    Thirdly, check that the session context is the same in both cases: check that any custom contexts and the USER_ENV context are the same, and that if any security packages or VPD policies are used in the SQL, they have been initialised the same way.
    If you still cannot determine the difference, then trace both sessions.
    Rod West
