Slow query performance in Oracle 10.2.0.3

Hi,
We have Oracle 10.2.0.3 installed on RHEL 5 (64-bit). We have two queries: one is a SELECT and the other is an INSERT. First we execute the INSERT query, which inserts 10,000 rows into a table, and then we run the SELECT query on this table. This works fine in one thread. But when we do the same thing in 10 threads, the INSERT is fine but the SELECT takes a very long time across the 10 threads. Is there any bug related to parallel execution of SELECT queries in 10.2.0.3? Any suggestions?
Thanks in advance.
Regards,
RJ.

Justin,
We run the same INSERT and SELECT queries in 10 manual sessions, out of which the SELECT query is taking more time to execute. Please refer to the waits given below. No, there is no bottleneck as far as hardware is concerned, because we tested it on servers with different configurations.
Event                         Waits     Time(s)   Avg Wait(ms)   % Total Call Time   Wait Class
CPU time                                52                       93.2
latch: cache buffers chains   45,542    6         0              10.7                Concurrency
log file parallel write       2,107     3         1              5.2                 System I/O
log file sync                 805       2         2              3.5                 Commit
latch: session allocation     5,116     1         0              2.6                 Other
•     s - second
•     cs - centisecond - 100th of a second
•     ms - millisecond - 1000th of a second
•     us - microsecond - 1000000th of a second
•     ordered by wait time desc, waits desc (idle events last)
Event                                         Waits    %Time-outs  Total Wait Time (s)  Avg wait (ms)  Waits/txn
latch: cache buffers chains                   45,542   0.00        6                    0              22.99
log file parallel write                       2,107    0.00        3                    1              1.06
log file sync                                 805      0.00        2                    2              0.41
latch: session allocation                     5,116    0.00        1                    0              2.58
buffer busy waits                             20,482   0.00        1                    0              10.34
db file sequential read                       157      0.00        1                    4              0.08
control file parallel write                   1,330    0.00        0                    0              0.67
wait list latch free                          39       0.00        0                    10             0.02
enq: TX - index contention                    632      0.00        0                    0              0.32
latch free                                    996      0.00        0                    0              0.50
SQL*Net break/reset to client                 1,738    0.00        0                    0              0.88
SQL*Net message to client                     108,947  0.00        0                    0              55.00
os thread startup                             2        0.00        0                    19             0.00
cursor: pin S wait on X                       3        100.00      0                    11             0.00
latch: In memory undo latch                   136      0.00        0                    0              0.07
log file switch completion                    4        0.00        0                    7              0.00
latch: shared pool                            119      0.00        0                    0              0.06
latch: undo global data                       121      0.00        0                    0              0.06
buffer deadlock                               238      99.58       0                    0              0.12
control file sequential read                  1,735    0.00        0                    0              0.88
SQL*Net more data to client                   506      0.00        0                    0              0.26
log file single write                         2        0.00        0                    2              0.00
SQL*Net more data from client                 269      0.00        0                    0              0.14
reliable message                              12       0.00        0                    0              0.01
LGWR wait for redo copy                       26       0.00        0                    0              0.01
rdbms ipc reply                               6        0.00        0                    0              0.00
latch: library cache                          7        0.00        0                    0              0.00
latch: redo allocation                        2        0.00        0                    0              0.00
enq: RO - fast object reuse                   2        0.00        0                    0              0.00
direct path write                             21       0.00        0                    0              0.01
cursor: pin S                                 1        0.00        0                    0              0.00
log file sequential read                      2        0.00        0                    0              0.00
direct path read                              8        0.00        0                    0              0.00
SQL*Net message from client                   108,949  0.00        43,397               398            55.00
jobq slave wait                               14,527   49.56       35,159               2420           7.33
Streams AQ: qmn slave idle wait               246      0.00        3,524                14326          0.12
Streams AQ: qmn coordinator idle wait         451      45.45       3,524                7814           0.23
wait for unread message on broadcast channel  3,597    100.00      3,516                978            1.82
virtual circuit status                        120      100.00      3,516                29298          0.06
class slave wait                              2        0.00        0                    0              0.00
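
For what it's worth, the two biggest non-idle waits above (latch: cache buffers chains and buffer busy waits) usually point at a small set of hot blocks. A minimal sketch for narrowing them down, using only standard 10g dynamic views (nothing here is specific to this system):

-- Illustrative only: which segments are accumulating buffer busy waits,
-- and which sessions/SQL are currently stuck on the cache buffers chains latch.
select owner, object_name, object_type, value
from   v$segment_statistics
where  statistic_name = 'buffer busy waits'
order  by value desc;

select sid, sql_id, event, p1 latch_address
from   v$session
where  event = 'latch: cache buffers chains';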

Similar Messages

  • OAF page : How to get its query performance from Oracle Apps Screen?

    Hi Team,
    How can we get the query performance of an OAF page using the Oracle Apps screen?
    regards
    sridhar

    Go through this link:
    Any tools to validate performance of an OAF Page?
    However, do let us know, as these queries' performance can also be checked from the backend.
    Thanks
    --Anil
    http://oracleanil.blogspot.com/
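
    One common backend approach (a minimal sketch, not specific to OAF; the SID/serial# values below are placeholders for the database session that renders the page) is to trace that session and then format the raw trace file with tkprof:

    -- Illustrative only: enable extended SQL trace for the target session,
    -- reproduce the slow page, then disable the trace and run tkprof on the file.
    BEGIN
      DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123,
                                        serial_num => 456,
                                        waits      => TRUE,
                                        binds      => TRUE);
    END;
    /
    BEGIN
      DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);
    END;
    /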

  • Slow Query Performance During Process Of SSAS Tabular

    As part of my SSAS Tabular process script task in an SSIS package, I read all new rows from the database and insert them into the Tabular database using Process Add. The process works fine, but for the duration of the Process Add, user queries against my Tabular model become very slow.
    Is there a way to prevent the impact of Process Add on user queries? Users need near real time queries.
    I am using SQL Server 2012 SP2.
    Thanks

    Hi AL.M,
    According to your description, when you query the tabular model during a Process Add, the performance is slow. Right?
    In Analysis Services, it's not supported for an MDX/DAX query to ignore the Process Add on the tabular model; it will always query the updated model. In this scenario, if you really need good query performance, I suggest you create two tabular databases.
    One is for end users to get data; the other one is used for updates (full process). After the process is done, let the users query the updated database.
    If you have any question, please feel free to ask.
    Regards,
    Simon Hou
    TechNet Community Support

  • Slow query against seg$ - Oracle 10g

    Hi,
    Our AWR report shows the following slow query, 3 minutes per execution,
    select file#, block# from seg$ where type# = 3 and ts# = :1
    This query isn't from our application, for sure. Does anyone know what background jobs or processes may execute this query?
    Thanks.

    user632535 wrote:
    Hi,
    Our AWR report shows the following slow query, 3 minutes per execution,
    select file#, block# from seg$ where type# = 3 and ts# = :1
    This query isn't from our application, for sure. Does anyone know what background jobs or processes may execute this query?
    It looks like the type of thing the SMON would run to clear up temporary segments after a process has done a rebuild, move, drop or similar. One reason why it might be slow is if you have a very large number of objects in a given tablespace that is subject to a lot of drops, creates etc. (E.g. a tablespace holding a complicated composite partitioned object with lots of indexes that goes through a frequent cycle of add/drop partition).
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "The temptation to form premature theories upon insufficient data is the bane of our profession."
    Sherlock Holmes (Sir Arthur Conan Doyle) in "The Valley of Fear".
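
    To see how often the statement runs and under which module/schema it is parsed, one option (a minimal sketch against the standard V$SQL view; adjust the text literal if the cached SQL differs slightly) is:

    -- Illustrative only: recursive SQL issued by SMON normally shows up
    -- under the SYS parsing schema with a background module.
    select sql_id,
           executions,
           round(elapsed_time / 1e6) total_elapsed_secs,
           module,
           parsing_schema_name
    from   v$sql
    where  sql_text like 'select file#, block# from seg$%';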

  • Bad Query Performance in Oracle Text

    Hello everyone, I have the following problem:
    I have a table, TABLE_A from now on, of roughly 1,000,000 rows, with a CONTEXT index using FILE_DATASTORE, CTXSYS.DEFAULT_STORAGE, CTXSYS.NULL_FILTER and CTXSYS.BASIC_LEXER, and I query the index in the following way:
    SELECT /*+FIRST_ROWS*/ A.ID, B.ID2, SCORE(1) FROM TABLE_A A, TABLE_B B WHERE A.ID = B.ID AND CONTAINS(A.PATH, '<SOME KW>', 1) > 0 ORDER BY SCORE(1) DESC
    where TABLE_B has another 1,000,000 rows.
    The problem is that the query response time is much higher after some period of inactivity on those tables. How can I avoid this behavior? Those inactivity periods (no more than 20 min) are core to my application, so I always get long response times for my queries.
    Is there any cache, or cache-time parameter, that affects this behavior? I have checked the Oracle Text documentation without finding anything about it...
    More data: I am using Oracle 9.2.0.1, but I have tested with the latest patches and the behavior is the same...
    Thank you very much in advance.

    Pablo,
    This appears to be a generic database or OS issue, not a Text-specific issue. It really depends on what your application is doing.
    If your application is doing some other database activity, such as queries or DML on other non-Text tables, chances are the Oracle Text related data blocks are being aged out of the cache. You can either increase the db_cache_size init parameter or try to keep the Text tables' and index tables' blocks in cache using ALTER TABLE commands.
    If your app is doing non-database activity, then chances are your application is taking up much of the machine's physical memory, such that the OS is swapping Oracle out of memory. In that case, you may want to consider adding more memory to the machine or having Oracle run on a separate machine by itself.
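
    For example, a hedged sketch of the "keep blocks in cache" idea (TABLE_A and the DR$...$I token table name follow the standard Oracle Text naming convention for a CONTEXT index called MY_INDEX; substitute your own names):

    -- Illustrative only: assign the base table and the Text $I token table
    -- to the KEEP buffer pool so their blocks are less likely to be aged out.
    ALTER TABLE table_a        STORAGE (BUFFER_POOL KEEP);
    ALTER TABLE dr$my_index$i  STORAGE (BUFFER_POOL KEEP);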

  • Slow Query Performance

    Hi BI expert,
    I have a query written on a sales order line cube. This cube holds about 60,000 records at the moment. A selection screen has also been designed for it, for example: creation date, brand and SKU. If the users specify one of the selection criteria, it takes about 45 minutes to bring back 20,000 records, but if they do not specify anything on the selection screen, it takes hours and hours to fetch records from the cube. In my process chain, it deletes the index, loads data and creates the index every day.
    Could you kindly suggest why it is taking such a long time to run the query?
    Thanks.

    Hi
    go for Aggregates
    Most of the result sets of reporting and analysis processes consist of aggregated data. An aggregate is a redundantly stored, usually aggregated view on a specific InfoCube. Without aggregates, the OLAP engine would have to read all relevant records at the lowest level stored in the InfoCube, which obviously takes some time for
    large InfoCubes. Aggregates allow you to physically store frequently used aggregated result sets in relational or multidimensional databases. Aggregates stored in relational databases essentially use the same data model as used for storing InfoCubes. Aggregates stored in multidimensional databases (Microsoft SQL Server 2000) have been introduced with SAP BW 3.0.
    Aggregates are still the most powerful means SAP BW provides to optimize the performance of reporting and analysis processes. Not only can SAP BW automatically take care of updating aggregates whenever necessary (upload of master or transaction data), it also automatically determines the most efficient aggregate available at query execution time.
    mahesh

  • Slow query performance in excel 2007 vs excel 2003

    Hi,
    Some of our clients recently upgraded to BI 7.0 and also upgraded to Excel 2007.
    They experience lots of performance problems when using the BEx Analyzer in Excel 2007.
    Refreshing queries and using 'simple' workbooks is up to 10 times slower than before with Excel 2003.
    Has anyone experienced the same?
    Any tips/tricks to solve that problem?
    With regards,
    Tom.

    Hello all,
    1) Please set the following parameters to X in transaction
        RS_FRONTEND_INIT and check the issue.
    Parameters to be set are
    ANA_USE_SIDGRIDDELTA
    ANA_USE_SIDGRIDMASS
    ANA_SINGLEDPREFRESH
    ANA_CACHE_WORKBOOK
    ANA_USE_OPTIMIZE_STG
    ANA_USE_TABLE
    2) Also refer to below KBA link which would help to resolve the issue.
       1570478 BW Report in Excel 2007 or Excel 2010 takes much more time than
    3) In the workbook properties please set the flag
         - Use Compression When Saving Workbook
    4)  If you are working with big hierarchies, please try to improve
    performance with the following setting directly in the Analysis Grid:
       - Properties of Analysis Grid - Display Hierarchy Icons
       - switch to "+/-"
    Regards,
    Arvind

  • Database upgrade - slow query performance

    Hi,
    recently we upgraded our 8i database to a 10g database.
    While we were testing our Forms application against the new
    10g database, there was a very slow SQL statement which runs for
    several minutes, but against the 8i database it runs within seconds.
    With SQL*Plus it sometimes runs fast, sometimes slow (see execution plans below)
    in 10g.
    The sql-statement in detail:
    SELECT name1, vornam, aboid, liefstat
    FROM aktuellerabosatz
    WHERE aboid = evitadba.get_evitaid ('0000002100')
    "aktuellerabosatz" is a view on a table with about 3.000.000 records.
    The function get_evitaid gets only the substring of the last 4 diggits of the whole
    number.
    execution plan with slow response time:
    12:05:31 EVITADBA-TSUN>SELECT name1, vornam, aboid, liefstat
    12:05:35 2 FROM aktuellerabosatz
    12:05:35 3 WHERE aboid = evitadba.get_evitaid ('0000002100');
    NAME1 VORNAM ABOID L
    RETHMANN ENTSORGUNGSWIRTSCHAFT 2100 A
    1 row selected.
    Elapsed: 00:00:55.07
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=4 Card=1 Bytes=38)
    1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EVTABO' (TABLE) (Cost=4
    Card=1 Bytes=38)
    2 1 INDEX (RANGE SCAN) OF 'EVIABO22' (INDEX (UNIQUE)) (Cost=
    3 Card=1)
    Statistics
    100 recursive calls
    0 db block gets
    121353 consistent gets
    121285 physical reads
    0 redo size
    613 bytes sent via SQL*Net to client
    500 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    execution plan with fast response time:
    12:06:43 EVITADBA-TSUN>SELECT name1, vornam, aboid, liefstat
    12:06:58 2 FROM aktuellerabosatz
    12:06:58 3 WHERE aboid = evitadba.get_evitaid ('0000002100');
    NAME1 VORNAM ABOID L
    RETHMANN ENTSORGUNGSWIRTSCHAFT 2100 A
    1 row selected.
    Elapsed: 00:00:00.00
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=1 Bytes=38)
    1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EVTABO' (TABLE) (Cost=4
    Card=1 Bytes=38)
    2 1 INDEX (RANGE SCAN) OF 'EVIABO22' (INDEX (UNIQUE)) (Cost=
    3 Card=1)
    Statistics
    110 recursive calls
    8 db block gets
    49 consistent gets
    0 physical reads
    0 redo size
    613 bytes sent via SQL*Net to client
    500 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    In the fast response the consistent gets and physical reads are very small,
    but the other time they are very high, which (it seems) results in the slow performance.
    What could be the reasons?
    kind regards
    Marco

    The two execution plans above are both from 10g SQL*Plus sessions on the same database with the same user. We gather statistics for the database with the DBMS_STATS package. Normally we use the ALL_ROWS option. The confusing thing is that sometimes the SQL statement runs fast and sometimes slow in a SQL*Plus session with the same execution plan; only the physical reads and consistent gets are extremely different.
    If we rewrite the SQL statement to use the table EVTABO with an additional
    where clause (which is from the view) instead of using the view, then it runs fast:
    14:24:04 H00ZRETH-TSUN>SELECT name1, vornam, aboid, liefstat
    14:24:14 2 FROM aktuellerabosatz
    14:24:14 3 WHERE aboid = evitadba.get_evitaid ('0000000246');
    no rows selected
    Elapsed: 00:00:43.07
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=27315 Card=1204986
    Bytes=59044314)
    1 0 VIEW OF 'EVTABO_V1' (VIEW) (Cost=27315 Card=1204986 Bytes=
    59044314)
    2 1 TABLE ACCESS (FULL) OF 'EVTABO' (TABLE) (Cost=27315 Card
    =1204986 Bytes=45789468)
    14:24:59 H00ZRETH-TSUN>SELECT name1, vornam, aboid, liefstat
    14:25:26 2 FROM evtabo
    14:25:26 3 WHERE aboid = evitadba.get_evitaid ('0000002100')
    14:25:26 4 and gueltab <= TRUNC(sysdate) AND (gueltbs >=TRUNC(SYSDATE) OR gueltbs IS NULL);
    NAME1 VORNAM ABOID L
    RETHMANN ENTSORGUNGSWIRTSCHAFT 2100 A
    1 row selected.
    Elapsed: 00:00:00.00
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=4 Card=1 Bytes=38)
    1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EVTABO' (TABLE) (Cost=4
    Card=1 Bytes=38)
    2 1 INDEX (RANGE SCAN) OF 'EVIABO22' (INDEX (UNIQUE)) (Cost=
    3 Card=1)
    What could be the reason for the different performance in 8i and 10g?
    Thanks
    Marco

  • Weblogic 8.1.6 and Oracle 9.2.0.8 - query performance

    Folks,
    We are upgrading WebLogic from 8.1.5 to 8.1.6 and Oracle from 9.2.0.6 to 9.2.0.8. We use the Oracle thin client driver for 9.2.0.8 to connect from the application to Oracle.
    When we use the following combination of the stack we see SQL query performance degradation: -
    Oracle 9.2.0.8 database, Oracle 9.2.0.8 driver, WL 8.1.6
    Oracle 9.2.0.8 database, Oracle 9.2.0.1 driver, WL 8.1.6
    We do not see the degradation in case of the following: -
    Oracle 9.2.0.8 database, Oracle 9.2.0.1 driver, WL 8.1.5
    Oracle 9.2.0.6 database, Oracle 9.2.0.1 driver, WL 8.1.5
    This shows that the problem could be with the WL 8.1.6 version and I was wondering if any of you have faced this before? The query retrieves a set of data from Oracle none of which contain the AsciiStream data type, which is noted as a problem in WL 8.1.6, but that too, only for WL JDBC drivers.
    Any ideas appreciated.

  • How many ways can i improve query performance?

    Hi All,
    Can anybody help me?
    How many ways can I improve query performance in Oracle?
    Thanks,
    narasimha

    As many as you can think of them!!!

  • Query Performance issue in Oracle Forms

    Hi All,
    I am using oracle 9i DB and forms 6i.
    In the query form, the query takes a long time to load the data into the form.
    There are two tables used here.
    One table (A) contains 5 crore records; the other table (B) has 2 crore records.
    The fetch range is 1-500 records.
    Table (A) has no index on the main columns; after creating an index on the main columns of table A, the query fetches the data quickly.
    But the DBA team doesn't want to create an index on table A because of a tablespace problem.
    If we create the index on the main table (A), then there is a performance overhead in production.
    Concurrent user capacity is 1500.
    Are there any alternative methods to handle this problem?
    Regards,
    RS

    1) What is a crore? Wikipedia seems to indicate that it's either 10,000,000 or 500,000
    http://en.wikipedia.org/wiki/Crore
    I'll assume that we're talking about tables with 50 million and 20 million rows, respectively.
    2) Large tables with no indexes are definitely going to be slow. If you don't have the disk space to create an appropriate index, surely the right answer is to throw a bit of disk into the system.
    3) I don't understand the comment "If create the index on main table (A) ,then performance overhead in production." That seems to contradict the comment you made earlier that the query performs well when you add the index. Are you talking about some other performance overhead?
    Justin
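
    If a bit of disk is added, the build itself is simple (a hedged sketch; the index, column, and tablespace names below are placeholders): building it ONLINE in a dedicated tablespace isolates the space cost and keeps the table available during the build.

    -- Illustrative only: placeholder names throughout.
    CREATE INDEX a_main_col_idx ON a (main_col)
      TABLESPACE idx_ts
      ONLINE;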

  • Query performance on RAC is a lot slower than single instance

    I simply followed the steps provided by Oracle to install a RAC database of 2 nodes.
    The performance of insertion (Java, thin OJDBC) is pretty much the same compared to a single instance on NFS.
    However, the performance of the SELECT query is very slow compared to a single instance.
    I have tried using different methods for the storage configuration (ASM with raw, OCFS2) but the performance is still slow.
    When I shut down one instance, leaving only one instance up, the query performance is very fast (as fast as a single instance).
    I am using RHEL 5 64-bit (16 GB of physical memory) and Oracle 11.1.0.6 with patchset 11.1.0.7.
    Could someone help me how to debug this problem?
    Thanks,
    Chau

    Top 5 timed foreground events:
    DB CPU: time 943 s, %DB time 47.5%
    cursor: pin S wait on X: waits 13,940, time 321 s, avg wait 23 ms, %DB time 16.15%
    direct path read: waits 95,436, time 288 s, avg wait 3 ms, %DB time 14.51%
    IPC send completion sync: waits 546,712, time 149 s, avg wait 0 ms, %DB time 7.49%
    gc cr multi block request: waits 7,574, time 78 s, avg wait 10 ms, %DB time 4.0%
    Another thing I see is that the "avg global cache cr block flush time (ms)" is 37.6 ms.

    The DB CPU Oracle metric is the amount of CPU time (in microseconds) spent on database user-level calls.
    You should check your SQL statements from the report and tune them.
    - Check the execution plan.
    - If there is no index, consider using an index.
    SQL> set autot trace explain
    SQL> sql statement;
    cursor: pin S wait on X
    A session waits on this event when requesting a mutex for sharable operations related to pins (such as executing a cursor), but the mutex cannot be granted because it is being held exclusively by another session (which is most likely parsing the cursor).
    Use bind variables and avoid dynamic SQL.
    http://blog.tanelpoder.com/2008/08/03/library-cache-latches-gone-in-oracle-11g/
    Check the MEMORY_TARGET initialization parameter.
    By the way, you have a high "DB CPU" (47.5%), so you should tune your SQL statements (check the SQL in the report and tune it).
    Good Luck
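
    As a minimal sketch of the bind-variable advice (the table and column names here are invented for illustration):

    -- Illustrative only: the literal version is hard parsed for every distinct
    -- value and aggravates mutex waits such as "cursor: pin S wait on X";
    -- the bind version reuses one shared cursor.
    DECLARE
      l_count PLS_INTEGER;
    BEGIN
      -- hard parsed for every distinct id:
      -- EXECUTE IMMEDIATE 'select count(*) from orders where id = 42' INTO l_count;

      -- parsed once, shared thereafter:
      EXECUTE IMMEDIATE 'select count(*) from orders where id = :1'
        INTO l_count
        USING 42;
    END;
    /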

  • Oracle 10g vs Oracle 11g query performance

    Hi everyone,
    We are moving from Oracle 10g to Oracle 11g database.
    I have a query which in Oracle 10g takes 85 seconds to run, but when I run the same query in the Oracle 11g database, it takes 635 seconds.
    I have confirmed that all indexes on the tables involved are enabled.
    Does anyone have any pointers on what I should look into? I have compared the explain plans and clearly they are different. Oracle 11g is taking a different approach than Oracle 10g.
    Thanks

    Pl post details of OS versions, exact database versions (to 4 digits) and init.ora parameters of the 10g and 11g databases. Have statistics been gathered after the upgrade ?
    For posting tuning requests, pl see these threads
    HOW TO: Post a SQL statement tuning request - template posting
    When your query takes too long ...
    Pl see if the SQL Performance Analyzer can help - MOS Doc 562899.1 (TESTING SQL PERFORMANCE IMPACT OF AN ORACLE 9i TO ORACLE DATABASE 10g RELEASE 2 UPGRADE WITH SQL PERFORMANCE ANALYZER)
    HTH
    Srini
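
    If statistics were not regathered after the upgrade, a minimal sketch (the schema name is a placeholder) would be:

    -- Illustrative only: refresh optimizer statistics so the 11g optimizer
    -- is not working from stale 10g statistics.
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname          => 'APP_SCHEMA',
        cascade          => TRUE,
        method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
    END;
    /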

  • Oracle Query Performance While calling a function in a View

    Hi,
    We have a performance issue in one of our Oracle queries.
    Here is the scenario
    We use a hard coded value (which is the maximum value from a table) in couple of DECODE statements in our query. We would like to remove this hard coded value from the query. So we wrote a function which will return a maximum value from the table. Now when we execute the query after replacing the hard coded value with the function, this function is called four times which hampers the query performance.
    Pl find below the DECODE statements in the query. This query is part of a main VIEW.
    Using Hardcoded values
    =================
    DECODE(pro_risk_weighted_ctrl_scr, 10, 9.9, pro_risk_weighted_ctrl_scr)
    DECODE(pro_risk_score, 46619750, 46619749, pro_risk_score)
    Using Functions
    ============
    DECODE (pro_risk_weighted_ctrl_scr, rprowbproc.fn_max_rcsa_range_values ('CSR'), rprowbproc.fn_max_rcsa_range_values('CSR')- 0.1, pro_risk_weighted_ctrl_scr)
    DECODE (pro_risk_score, rprowbproc.fn_max_rcsa_range_values ('RSR'), rprowbproc.fn_max_rcsa_range_values ('RSR') - 1, pro_risk_score)
    Can any one suggest a way to improve the performance of the query.
    Thanks & Regards,
    Raji

    drop table max_demo;
    create table max_demo
    (rcsa   varchar2(10)
    ,value  number);
    insert into max_demo
    select case when mod(rownum,2) = 0
                then 'CSR'
                else 'RSR'
           end
    ,      rownum
    from   dual
    connect by rownum <= 10000;   
    create or replace function f_max (
      i_rcsa    in   max_demo.rcsa%TYPE
    ) return number
    as
      l_max number;
    begin
       select max(value)
       into   l_max
       from   max_demo
       where  rcsa = i_rcsa;
       return l_max;
    end;
    /
    -- slooooooooooooowwwwww
    select m.*
    ,      f_max(rcsa)
    ,      decode(rcsa,'CSR',decode(value,f_max('CSR'),'Y - max is '||f_max('CSR'),'N - max is '||f_max('CSR'))) is_max_csr
    ,      decode(rcsa,'RSR',decode(value,f_max('RSR'),'Y - max is '||f_max('RSR'),'N - max is '||f_max('RSR'))) is_max_rsr
    from   max_demo m
    order by value desc;
    -- ssllooooowwwww
    with subq_max as
         (select f_max('CSR') max_csr,
                 f_max('RSR') max_rsr
          from   dual)
    select m.*
    ,      decode(rcsa,'CSR',s.max_csr,'RSR',s.max_rsr) max
    ,      decode(rcsa,'CSR',decode(value,s.max_csr,'Y - max is '||s.max_csr,'N - max is '||s.max_csr)) is_max_csr
    ,      decode(rcsa,'RSR',decode(value,s.max_rsr,'Y - max is '||s.max_rsr,'N - max is '||s.max_rsr)) is_max_rsr
    from   max_demo m
    ,      subq_max s
    order by value desc;
    -- faster
    with subq_max as
         (select /*+materialize */
                 f_max('CSR') max_csr,
                 f_max('RSR') max_rsr
          from   dual)
    select m.*
    ,      decode(rcsa,'CSR',s.max_csr,'RSR',s.max_rsr) max
    ,      decode(rcsa,'CSR',decode(value,s.max_csr,'Y - max is '||s.max_csr,'N - max is '||s.max_csr)) is_max_csr
    ,      decode(rcsa,'RSR',decode(value,s.max_rsr,'Y - max is '||s.max_rsr,'N - max is '||s.max_rsr)) is_max_rsr
    from   max_demo m
    ,      subq_max s
    order by value desc;
    -- faster
    with subq_max as
         (select f_max('CSR') max_csr,
                 f_max('RSR') max_rsr,
                 rownum
          from   dual)
    select m.*
    ,      decode(rcsa,'CSR',s.max_csr,'RSR',s.max_rsr) max
    ,      decode(rcsa,'CSR',decode(value,s.max_csr,'Y - max is '||s.max_csr,'N - max is '||s.max_csr)) is_max_csr
    ,      decode(rcsa,'RSR',decode(value,s.max_rsr,'Y - max is '||s.max_rsr,'N - max is '||s.max_rsr)) is_max_rsr
    from   max_demo m
    ,      subq_max s
    order by value desc;
    -- sloooooowwwwww
    select m.*
    ,      decode(rcsa,'CSR',s.max_csr,'RSR',s.max_rsr) max
    ,      decode(rcsa,'CSR',decode(value,s.max_csr,'Y - max is '||s.max_csr,'N - max is '||s.max_csr)) is_max_csr
    ,      decode(rcsa,'RSR',decode(value,s.max_rsr,'Y - max is '||s.max_rsr,'N - max is '||s.max_rsr)) is_max_rsr
    from   max_demo m
    ,      (select /*+ materialize */
                 f_max('CSR') max_csr,
                 f_max('RSR') max_rsr
          from   dual) s
    order by value desc;
    -- faster
    select m.*
    ,      decode(rcsa,'CSR',s.max_csr,'RSR',s.max_rsr) max
    ,      decode(rcsa,'CSR',decode(value,s.max_csr,'Y - max is '||s.max_csr,'N - max is '||s.max_csr)) is_max_csr
    ,      decode(rcsa,'RSR',decode(value,s.max_rsr,'Y - max is '||s.max_rsr,'N - max is '||s.max_rsr)) is_max_rsr
    from   max_demo m
    ,      (select f_max('CSR') max_csr,
                   f_max('RSR') max_rsr,
                   rownum
            from   dual) s
    order by value desc;

  • Query Performance - Query very slow to run

    I have built a query to show payroll costings per month per employee by cost centre for the current fiscal year. The cost centres are selected with a hierarchy variable - it's quite a large hierarchy. The problem is the query takes ages to run - nearly ten minutes. It's built on a DSO so I can't aggregate it. Is there anything I can do to improve performance?

    Hi Joel,
    Walkthrough Checklist for Query Performance:
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
    9. If Alternative UOM solution is used, turn off query cache.
    10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries, for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Do review of order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
    16. Check Sequential vs Parallel read on Multiproviders.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
    24. Leverage characteristics or navigational attributes rather than hierarchies. Using a hierarchy requires reading temporary hierarchy tables and creates additional overhead compared to characteristics and navigational attributes. Therefore, characteristics or navigational attributes result in significantly better query performance than hierarchies, especially as the size of the hierarchy (e.g., the number of nodes and levels) and the complexity of the selection criteria increase.
    25. If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing. The "not assigned" nodes in the hierarchy should be filtered out, and you should use a variable to reduce the number of hierarchy nodes selected.
    Regards
    Vivek Tripathi
