Query performance is slow in RAC

Hi,
I am evaluating Oracle RAC and how it would fit into our product, so I have set up a two-node RAC 10g cluster in our lab and am running various tests against it.
Test 1 : Fail-over:
~~~~~~~~~~~
First I started with failover testing, covering two types: "connect-time" failover and TAF.
Here TAF has a limitation: it does not fail over in-flight DML transactions.
Test 2 : Performance:
~~~~~~~~~~~~~~
Second, I did performance testing. I ran insert, update, read and delete operations over 10,000 records against a single instance and against both nodes. There was no performance difference between one node and two.
But I had assumed RAC would provide higher performance than single-instance Oracle.
So I am confused about whether we should choose Oracle RAC for our project.
DBAs,
Please give me your answers to the following questions; it would be a great help in reaching a conclusion:
1. What is the main purpose of RAC (since, from what I have seen, failover is only partially supported and there is no difference in query-processing performance)?
2. What kind of business environment does RAC fit best?
3. What unique benefits does RAC offer that single-instance Oracle does not?
Thanks
Edited by: Anandhan on Aug 7, 2009 1:40 AM

Hi !
Well, RAC ensures high availability - with conditions attached!
For the database, create more than one service and have applications connect to the database using these services.
Access to a RAC database is service-driven, so if it is planned thoughtfully, load on the database can be distributed physically using the services created for it.
So if you have a single database servicing more than one application (of any type, e.g. OLTP, warehouse, etc.), connect each application using a different service so that the init parameters are set for the purpose of the connection.
NOTE: each database instance running on a node can have a different init_sid.ora, to ensure optimum performance for its designated purpose.
RAC uses Cache Fusion to reduce I/O on a running production server: when a buffer is required, it is shipped to the requesting node from the global cache rather than read from disk, reducing physical reads. This is its contribution on the performance front.
For any database that must be accessed with different init.ora settings against the same physical data, RAC is the best way!
For high availability, use a TAF-type service; a sketch follows.
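As a rough illustration of the service-driven setup above (a hedged sketch - the service name, settings, and the PL/SQL route are assumptions, not from the original posts), a TAF-enabled service can be created and started with DBMS_SERVICE:

BEGIN
  -- Hypothetical service name OLTP_SRV; FAILOVER_TYPE_SELECT lets TAF resume
  -- in-flight SELECTs after an instance failure (DML is still rolled back,
  -- per the limitation noted above).
  DBMS_SERVICE.CREATE_SERVICE(
    service_name     => 'OLTP_SRV',
    network_name     => 'OLTP_SRV',
    failover_method  => DBMS_SERVICE.FAILOVER_METHOD_BASIC,
    failover_type    => DBMS_SERVICE.FAILOVER_TYPE_SELECT,
    failover_retries => 30,
    failover_delay   => 5);
  DBMS_SERVICE.START_SERVICE('OLTP_SRV');
END;
/

On RAC the same service would more commonly be registered through srvctl, so that Clusterware knows its preferred and available instances; the PL/SQL route is shown here only because it is self-contained.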

Similar Messages

  • Query performance is slow for only 500 objects- CMS timeout

    When a report is initially scheduled, I only have the job id as a key to determine the status of what happens to the report.
    Yes, the job id is not indexed - but the others are. What other fields can I use? I am only retrieving 500 records.
    SELECT
         SI_ID, SI_NAME ,....
    FROM
         CI_INFOOBJECTS
    WHERE
         SI_INSTANCE=1
         AND SI_RECURRING=0
         AND SI_OWNER='CustomApp'
         AND SI_UPDATE_TS > '2010-01-01'
         AND SI_NEW_JOB_ID IN (1,2,3.........500)
    and it times out after 9 minutes :
    com.crystaldecisions.sdk.exception.SDKServerException: CMS operation timed out after 9 minutes.
    cause:com.crystaldecisions.enterprise.ocaframework.idl.OCA.oca_abuse: IDL:img.seagatesoftware.com/OCA/oca_abuse:3.2
    detail:CMS operation timed out after 9 minutes.
    I can increase the timeout - but that would not be the solution.
    TIA,
    JM

    SI_NEW_JOB_ID isn't something you filter on.  That field isn't even guaranteed to persist beyond the lifetime of the InfoObject instance that you've scheduled.
    How SI_NEW_JOB_ID works is this: you schedule an InfoObject instance. That creates a new InfoObject instance, with a new SI_ID. So that you can determine this SI_ID, the value is placed in the SI_NEW_JOB_ID property of the original InfoObject. So you'd schedule the InfoObject instance, then immediately read back SI_NEW_JOB_ID from it, to find out the SI_ID of the scheduled instance.
    SI_OWNER does a double lookup.  It retrieves the SI_OWNERID for the instance, then looks up the name for that User.
    So filter on SI_ID to identify any scheduled instances, and filter on SI_OWNERID to filter on owners.
    SI_ID and SI_OWNERID are indexed properties.
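    For example (a hedged rewrite - the SI_OWNERID value is illustrative, and the ID list would be the SI_IDs you read back from SI_NEW_JOB_ID):
    SELECT
         SI_ID, SI_NAME ,....
    FROM
         CI_INFOOBJECTS
    WHERE
         SI_INSTANCE=1
         AND SI_RECURRING=0
         AND SI_OWNERID=12345
         AND SI_UPDATE_TS > '2010-01-01'
         AND SI_ID IN (1,2,3.........500)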
    Sincerely,
    Ted Ueda

  • Query performance on RAC is a lot slower than single instance

    I simply followed the steps provided by Oracle to install a two-node RAC database.
    The performance of inserts (Java, thin JDBC) is pretty much the same as a single instance on NFS.
    However, select query performance is very slow compared to a single instance.
    I have tried different storage configurations (ASM on raw devices, OCFS2) but performance is still slow.
    When I shut down one instance, leaving only one instance up, query performance is very fast (as fast as a single instance).
    I am using RHEL 5 64-bit (16 GB of physical memory) and Oracle 11.1.0.6 with patchset 11.1.0.7.
    Could someone help me debug this problem?
    Thanks,
    Chau
    Edited by: user638637 on Aug 6, 2009 8:31 AM

    top 5 timed foreground events:
    DB CPU: time 943(s), %db time 47.5%
    cursor: pin S wait on X: waits 13,940, time 321(s), avg wait 23ms, %db time 16.15%
    direct path read: waits 95,436, time 288(s), avg wait 3ms, %db time 14.51%
    IPC send completion sync: waits 546,712, time 149(s), avg wait 0ms, %db time 7.49%
    gc cr multi block request: waits 7,574, time 78(s), avg wait 10ms, %db time 4.0%
    Another thing I see is that the avg global cache cr block flush time (ms) is 37.6 ms.

    The DB CPU Oracle metric is the amount of CPU time (in microseconds) spent on database user-level calls.
    You should check the SQL statements from the report and tune them.
    - Check the execution plan.
    - If an index is missing, consider creating one.
    SQL> set autot trace explain
    SQL> sql statement;
    cursor: pin S wait on X:
    A session waits on this event when requesting a mutex for sharable operations related to pins (such as executing a cursor), but the mutex cannot be granted because it is being held exclusively by another session (which is most likely parsing the cursor).
    Use bind variables and avoid dynamic SQL - see the sketch after this reply.
    http://blog.tanelpoder.com/2008/08/03/library-cache-latches-gone-in-oracle-11g/
    Check the MEMORY_TARGET initialization parameter.
    By the way, you have high DB CPU (47.5%), so you should tune your SQL statements (check the SQL in the report and tune it).
    Good Luck
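    A minimal sketch of the bind-variable advice above (hypothetical table and column names, not from the original post):
    SQL> VARIABLE v_id NUMBER
    SQL> EXEC :v_id := 42
    SQL> SELECT order_total FROM orders WHERE order_id = :v_id;
    Because the statement text stays identical across executions, it is hard-parsed once and then shared, which reduces the parse-time mutex contention behind "cursor: pin S wait on X".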

  • Select query performance is very slow

    Could you please explain BITMAP CONVERSION FROM ROWIDS to me?
    Why does the query below perform BITMAP CONVERSION FROM ROWIDS twice on the same table?
    SQL> SELECT AGG.AGGREGATE_SENTENCE_ID ,
      2         AGG.ENTITY_ID,
      3         CAR.REQUEST_ID REQUEST_ID
      4    FROM epic.eh_aggregate_sentence agg ,om_cpps_active_requests car
      5   WHERE car.aggregate_sentence_id =agg.aggregate_sentence_id
      6  AND car.service_unit = '0ITNMK0020NZD0BE'
      7  AND car.request_type = 'CMNTY WORK'
      8  AND agg.hours_remaining > 0         
      9  AND NOT EXISTS (SELECT 'X'
    10                    FROM epic.eh_agg_sent_termination aggSentTerm
    11                   WHERE aggSentTerm.aggregate_sentence_id = agg.aggregate_sentence_id
    12                     AND aggSentTerm.date_terminated <= epic.epicdatenow);
    Execution Plan
    Plan hash value: 1009556971
    | Id  | Operation                           | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                    |                            |     5 |   660 |    99   (2)| 00:00:02 |
    |*  1 |  HASH JOIN ANTI                     |                            |     5 |   660 |    99   (2)| 00:00:02 |
    |   2 |   NESTED LOOPS                      |                            |       |       |            |          |
    |   3 |    NESTED LOOPS                     |                            |     7 |   658 |    95   (0)| 00:00:02 |
    |*  4 |     TABLE ACCESS BY INDEX ROWID     | OM_CPPS_ACTIVE_REQUESTS    |    45 |  2565 |    50   (0)| 00:00:01 |
    |   5 |      BITMAP CONVERSION TO ROWIDS    |                            |       |       |            |          |
    |   6 |       BITMAP AND                    |                            |       |       |            |          |
    |   7 |        BITMAP CONVERSION FROM ROWIDS|                            |       |       |            |          |
    |*  8 |         INDEX RANGE SCAN            | OM_CA_REQUEST_REQUEST_TYPE |   641 |       |    12   (0)| 00:00:01 |
    |   9 |        BITMAP CONVERSION FROM ROWIDS|                            |       |       |            |          |
    |* 10 |         INDEX RANGE SCAN            | OM_CA_REQUEST_SERVICE_UNIT |   641 |       |    20   (0)| 00:00:01 |
    |* 11 |     INDEX UNIQUE SCAN               | PK_EH_AGGREGATE_SENTENCE   |     1 |       |     0   (0)| 00:00:01 |
    |* 12 |    TABLE ACCESS BY INDEX ROWID      | EH_AGGREGATE_SENTENCE      |     1 |    37 |     1   (0)| 00:00:01 |
    |  13 |   TABLE ACCESS BY INDEX ROWID       | EH_AGG_SENT_TERMINATION    |    25 |   950 |     3   (0)| 00:00:01 |
    |* 14 |    INDEX RANGE SCAN                 | DATE_TERMINATED_0520       |     4 |       |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("AGGSENTTERM"."AGGREGATE_SENTENCE_ID"="AGG"."AGGREGATE_SENTENCE_ID")
       4 - filter("CAR"."AGGREGATE_SENTENCE_ID" IS NOT NULL)
       8 - access("CAR"."REQUEST_TYPE"='CMNTY WORK')
      10 - access("CAR"."SERVICE_UNIT"='0ITNMK0020NZD0BE')
      11 - access("CAR"."AGGREGATE_SENTENCE_ID"="AGG"."AGGREGATE_SENTENCE_ID")
      12 - filter("AGG"."HOURS_REMAINING">0)
       14 - access("AGGSENTTERM"."DATE_TERMINATED"<="EPIC"."EPICDATENOW"())
    Now this query is giving the correct result, but performance is slow.
    Please help to improve the performance.
    SQL> desc epic.eh_aggregate_sentence
    Name                                      Null?    Type
    ENTITY_ID                                          CHAR(16)
    AGGREGATE_SENTENCE_ID                     NOT NULL CHAR(16)
    HOURS_REMAINING                                    NUMBER(9,2)
    SQL> desc om_cpps_active_requests
    Name                                      Null?    Type
    REQUEST_ID                                NOT NULL VARCHAR2(16)
    AGGREGATE_SENTENCE_ID                              VARCHAR2(16)
    REQUEST_TYPE                              NOT NULL VARCHAR2(20)
    SERVICE_UNIT                                       VARCHAR2(16)
    SQL> desc epic.eh_agg_sent_termination
    Name                                      Null?    Type
    TERMINATION_ID                            NOT NULL CHAR(16)
    AGGREGATE_SENTENCE_ID                     NOT NULL CHAR(16)
    DATE_TERMINATED                           NOT NULL CHAR(20)
    .

    user10594152 wrote:
    Thanks for your reply.
    Still I am getting the same problem.

    It is not a problem. Bitmap conversion usually is a very good thing. Using this feature the database can take one or several unselective b*tree indexes, combine them, and do a kind of bitmap selection. This should be slightly faster than a full table scan and much faster than a normal index access.
    Your problem is that your filter criteria seem not to be very useful. Which criterion achieves the best reduction in rows?
    Also, any kind of NOT EXISTS is potentially not very fast (NOT IN is worse). You can rewrite your query with an OUTER JOIN. Sometimes this will help, but not always:
    SELECT AGG.AGGREGATE_SENTENCE_ID ,
            AGG.ENTITY_ID,
            CAR.REQUEST_ID REQUEST_ID
    FROM epic.eh_aggregate_sentence agg
    JOIN om_cpps_active_requests car ON car.aggregate_sentence_id = agg.aggregate_sentence_id
    LEFT JOIN epic.eh_agg_sent_termination aggSentTerm ON aggSentTerm.aggregate_sentence_id = agg.aggregate_sentence_id AND aggSentTerm.date_terminated <= epic.epicdatenow
    WHERE car.service_unit = '0ITNMK0020NZD0BE'
    AND car.request_type = 'CMNTY WORK'
    AND agg.hours_remaining > 0
    AND aggSentTerm.aggregate_sentence_id IS NULL
    Edited by: Sven W. on Aug 31, 2010 4:01 PM

  • Query Performance - Query very slow to run

    I have built a query to show payroll costings per month per employee by cost centre for the current fiscal year. The cost centres are selected with a hierarchy variable - it's quite a large hierarchy. The problem is the query takes ages to run - nearly ten minutes. It's built on a DSO, so I can't use aggregates. Is there anything I can do to improve performance?

    Hi Joel,
    Walkthrough Checklist for Query Performance:
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
    9. If Alternative UOM solution is used, turn off query cache.
    10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries - for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Do review of order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
    16. Check Sequential vs Parallel read on Multiproviders.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
    24. Leverage characteristics or navigational attributes rather than hierarchies. Using a hierarchy requires reading temporary hierarchy tables and creates additional overhead compared to characteristics and navigational attributes. Therefore, characteristics or navigational attributes result in significantly better query performance than hierarchies, especially as the size of the hierarchy (e.g., the number of nodes and levels) and the complexity of the selection criteria increase.
    25. If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing. The "not assigned" nodes in the hierarchy should be filtered out, and you should use a variable to reduce the number of hierarchy nodes selected.
    Regards
    Vivek Tripathi

  • Query performance slow

    Hi Experts,
    Please clarify my doubts.
    1. How can we tell that a particular query is performing slowly?
    2. How do we define a cell in BEx?
    3. An InfoCube is an InfoProvider; why is an InfoObject not an InfoProvider?
    Thanks in advance

    Hi,
    1. How can we tell that a particular query is performing slowly?
       When a query takes a long time to run, you can find out where that time goes by collecting statistics: select your cube and set the BI statistics check box, and statistics will then be recorded for your queries - DB time, front-end (query) time, aggregation time, etc. Based on that, you choose performance measures such as aggregates, compression, indexes, etc.
    2. How do we define a cell in BEx?
       Cell definition is only enabled when your BEx query uses two structures. Use it when you want to create a different formula for an individual cell.
    3. An InfoCube is an InfoProvider; why is an InfoObject not an InfoProvider?
       An InfoObject can also be an InfoProvider: you can convert it using "Convert as data target".
    Thanks and Regards,
    Venkat.
    Edited by: venkatewara reddy on Jul 27, 2011 12:05 PM

  • Query performance slow WHY

    Its 11G R2 version, and query is performing very slow
    SELECT OBJSTATE
    FROM
    SUB_CON_CALL_OFF WHERE SUB_CON_NO = :B2 AND CALL_OFF_SEQ = :B1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse      140      0.00       0.00          0          0          0           0
    Execute 798747      8.34      14.01          0          4          0           0
    Fetch   798747     22.22      35.54          0    7987470          0      798747
    total   1597634     30.56      49.56          0    7987474          0      798747
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 51     (recursive depth: 1)
    Rows     Row Source Operation
          5  FILTER  (cr=50 pr=0 pw=0 time=239 us)
          5   NESTED LOOPS  (cr=40 pr=0 pw=0 time=164 us)
          5    NESTED LOOPS  (cr=30 pr=0 pw=0 time=117 us)
          5     TABLE ACCESS BY INDEX ROWID SUB_CON_CALL_OFF_TAB (cr=15 pr=0 pw=0 time=69 us)
          5      INDEX UNIQUE SCAN SUB_CON_CALL_OFF_PK (cr=10 pr=0 pw=0 time=41 us)(object id 59706)
          5     TABLE ACCESS BY INDEX ROWID SUB_CONTRACT_TAB (cr=15 pr=0 pw=0 time=42 us)
          5      INDEX UNIQUE SCAN SUB_CONTRACT_PK (cr=10 pr=0 pw=0 time=26 us)(object id 59666)
          5    INDEX UNIQUE SCAN USER_PROFILE_ENTRY_SYS_PK (cr=10 pr=0 pw=0 time=41 us)(object id 60891)
          5   INDEX UNIQUE SCAN USER_ALLOWED_SITE_PK (cr=10 pr=0 pw=0 time=36 us)(object id 60866)
          5    FAST DUAL  (cr=0 pr=0 pw=0 time=4 us)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      library cache lock                              1        0.00          0.00
      gc cr block 2-way                               3        0.00          0.00
      gc current block 2-way                          1        0.00          0.00
      gc cr multi block request                       4        0.00          0.00
    Edited by: 842638 on Feb 2, 2013 5:52 AM

    Hi Mark,
    Just have a few basic doubts regarding the query performance below:
    call     count       cpu    elapsed       disk      query    current        rows
    Parse      140      0.00       0.00          0          0          0           0
    Execute 798747      8.34      14.01          0          4          0           0
    Fetch   798747     22.22      35.54          0    7987470          0      798747
    total   1597634     30.56      49.56          0    7987474          0      798747
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 51     (recursive depth: 1)
    Rows     Row Source Operation
           5  FILTER  (cr=50 pr=0 pw=0 time=239 us)
           5   NESTED LOOPS  (cr=40 pr=0 pw=0 time=164 us)
           5    NESTED LOOPS  (cr=30 pr=0 pw=0 time=117 us)
           5     TABLE ACCESS BY INDEX ROWID SUB_CON_CALL_OFF_TAB (cr=15 pr=0 pw=0 time=69 us)
           5      INDEX UNIQUE SCAN SUB_CON_CALL_OFF_PK (cr=10 pr=0 pw=0 time=41 us)(object id 59706)
           5     TABLE ACCESS BY INDEX ROWID SUB_CONTRACT_TAB (cr=15 pr=0 pw=0 time=42 us)
           5      INDEX UNIQUE SCAN SUB_CONTRACT_PK (cr=10 pr=0 pw=0 time=26 us)(object id 59666)
           5    INDEX UNIQUE SCAN USER_PROFILE_ENTRY_SYS_PK (cr=10 pr=0 pw=0 time=41 us)(object id 60891)
           5   INDEX UNIQUE SCAN USER_ALLOWED_SITE_PK (cr=10 pr=0 pw=0 time=36 us)(object id 60866)
           5    FAST DUAL  (cr=0 pr=0 pw=0 time=4 us)
    Elapsed times include waiting on following events:
       Event waited on                             Times   Max. Wait  Total Waited
       ----------------------------------------   Waited  ----------  ------------
       library cache lock                              1        0.00          0.00
       gc cr block 2-way                               3        0.00          0.00
       gc current block 2-way                          1        0.00          0.00
       gc cr multi block request                       4        0.00          0.00
    1] How do you determine that this query's performance is ok?
    2] What is the actual need of checking query performance this way?
    3] Is this TKPROF output?
    4] How do you know that the query was called 798,747 times? The Execute row shows 0.
    Could you please help me with this?
    Thanks.
    Ranit B.
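    For 3] and 4] (a hedged editorial note, grounded only in the numbers shown above): yes, that layout is TKPROF output, and the count column shows the number of calls - 798,747 Execute and Fetch calls; the 0 under rows for Execute is rows returned, not calls. A quick sanity check: 7,987,470 query gets / 798,747 executions is roughly 10 consistent gets per execution, which is generally considered fine for unique-index lookups. A minimal sketch of producing such output (the tracefile identifier is illustrative):
    SQL> ALTER SESSION SET tracefile_identifier = 'subcon';
    SQL> ALTER SESSION SET events '10046 trace name context forever, level 8';
    SQL> -- run the workload under test, then:
    SQL> ALTER SESSION SET events '10046 trace name context off';
    followed on the server by: tkprof <instance>_ora_<pid>_subcon.trc subcon_report.txt sys=no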

  • Urgent: WAD performance is very slow than query performance

    Hi,
       If I execute the report (using a query with variables), which returns 850,000 records, in WAD, it takes more than 900 seconds and then reports "Connection timed out".
      If I execute the same query from Query Designer in a web browser, it takes 400 seconds to show all the data in hierarchy or tabular view.
      I've already done tuning in RSRT on the query read mode and persistence mode, etc.
      Can you please help me?
    THanks in advance. Points will be given...
    Reg,
      Varun

    Varun,
    Did you ever solve the performance issue of the WAD report? We are having the same issue: WAD performance is a lot slower than executing it just as a query.
    Thanks

  • Slow Query Performance During Process Of SSAS Tabular

    As part of my SSAS Tabular process script task in an SSIS package, I read all new rows from the database and insert them into the Tabular database using Process Add. The process works fine, but for the duration of the Process Add, user queries against my Tabular model become very slow.
    Is there a way to prevent the impact of Process Add on user queries? Users need near real time queries.
    I am using SQL Server 2012 SP2.
    Thanks

    Hi AL.M,
    According to your description, when you query the tabular model during a Process Add, performance is slow. Right?
    In Analysis Services, it's not supported for an MDX/DAX query to ignore a Process Add on the tabular database; queries will always hit the tabular database being updated. In this scenario, if you really need good query performance, I suggest you create two tabular databases:
    one for end users to query, and the other one used for updates (full process). After the process is done, have the users query the updated database.
    If you have any question, please feel free to ask.
    Regards,
    Simon Hou
    TechNet Community Support

  • Report Performance is slow

    Hi
    In our production system one report is running very slowly: for one month of data it now takes 10 to 12 minutes, where earlier it took only 2 minutes. We are confused about why it's happening. The query runs on a MultiProvider. On the cube we are using only one aggregate, and in the query designer we are using a hierarchy (active) and a virtual key figure.
    We have not changed anything on the query recently, so why is the report suddenly running very slowly?
    Please advice 
    Thanks,
    Manu.

    Hi,
    Check whether the aggregate is up to date and whether it is actually being used by your query.
    Also check the number of records: if the output is huge, it is obvious that the query can take more time to run. And as you are using a virtual key figure, that generally degrades query performance.
    Regards,
    rik

  • Query performance.

    Hi
    I have created a procedure that accepts two bind variables from a report. The user will select one or the other, both, or neither of the variables. To return the appropriate results, I have created a view over the entire result set, and the procedure contains a number of IF statements that determine what to place in the WHERE clause when selecting from the view, depending on which variables are populated.
    My concern is that the query that generates the view includes several joins, outputs around 150,000 records in total, and seems rather slow to run.
    Would you recommend another solution, such as placing the query in the procedure itself, repeated for every IF statement?
    Or should I work on the query performance?
    What would be the most efficient solution for my problem?
    Any advice would be greatly appreciated.
    Thanks

    When your query takes too long: http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0
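    As a hedged aside (the view and bind names below are invented; the post doesn't show them): optional filters can often be expressed in one static statement, so the procedure needs no IF branching and the cursor stays shared. A NULL bind simply means "ignore this filter":
    SELECT v.*
    FROM   report_view v
    WHERE  (:p_param1 IS NULL OR v.col1 = :p_param1)
    AND    (:p_param2 IS NULL OR v.col2 = :p_param2);
    Whether this beats a separate query per parameter combination depends on the resulting plans, so it is worth testing both.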

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What are the query performance issues we need to take care of? Please explain and let me know the T-codes. It's urgent.
    What are the data-loading performance issues we need to take care of? Please explain and let me know the T-codes. It's urgent.
    Will reward full points.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using ABAP Dictionary to improve better selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • Query Performance issue in Oracle Forms

    Hi All,
    I am using an Oracle 9i DB and Forms 6i.
    In a query form, the query takes a long time to load the data into the form.
    There are two tables used here.
    One table (A) contains 5 crore records; another table (B) has 2 crore records.
    The fetch range is 1-500 records.
    Table A had no index on the main columns; after creating an index on the main columns of table A, the query fetched the data quickly.
    But the DBA team doesn't want to create an index on table A because of tablespace constraints.
    Their view is: if we create the index on the main table (A), there will be performance overhead in production.
    Concurrent user capacity is 1500.
    Are there any alternative methods to handle this problem?
    Regards,
    RS

    1) What is a crore? Wikipedia seems to indicate that it's either 10,000,000 or 500,000
    http://en.wikipedia.org/wiki/Crore
    I'll assume that we're talking about tables with 50 million and 20 million rows, respectively.
    2) Large tables with no indexes are definitely going to be slow. If you don't have the disk space to create an appropriate index, surely the right answer is to throw a bit of disk into the system.
    3) I don't understand the comment "if we create the index on the main table (A), there will be performance overhead in production". That seems to contradict the comment you made earlier that the query performs well when you add the index. Are you talking about some other performance overhead?
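    If the disk-space objection can be overcome, a hedged sketch of the index under discussion (table, column, and tablespace names are assumptions; the post names none):
    -- ONLINE lets DML continue against the table during the build.
    CREATE INDEX table_a_main_idx
      ON table_a (main_col1, main_col2)
      TABLESPACE indx_ts
      ONLINE;
    Key compression (COMPRESS 1) can also shrink the index footprint when the leading column repeats heavily, which speaks to the tablespace concern.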
    Justin

  • How to improve query performance built on a ODS

    Hi,
    I've built a report on the FI_GL ODS (BW 3.5). The report execution takes almost 1 hour.
    Is there any method to improve or optimize the performance of a query built on an ODS?
    The ODS holds a huge volume of data: ~300 million records covering 2 years.
    Thanks in advance,
    Guru.

    Hi Raj,
    Here are a few tips which will help you improve your query performance.
    Checklist for Query Performance
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
    9. If Alternative UOM solution is used, turn off query cache.
    10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries - for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Do review of order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
    16. Check Sequential vs Parallel read on Multiproviders.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.

  • Bad Query Performance

    Hi Experts,
    I have a query built on a MultiProvider which has slow performance.
    I think the main problem comes from selecting too many records from the database:
    it selects about 1.1 million rows to display about 500 rows in the result.
    Another point could be the complicated restricted and calculated key figures, which might spend a lot of time in the OLAP processor.
    Here are the Statistics of the Query.
    OLAP Initialization:    3.186906
    Wait Time, User:       56.971169
    OLAP: Settings:         0.983193
    Read Cache:             0.015642
    Delete Cache:           0.019030
    Write Cache:            0.087655
    Data Manager:         462.039167
    OLAP: Data Selection:   0.671566
    OLAP: Data Transfer:    1.257884
    ST03 stats:
    %OLAP:     22.74
    %DB:       77.18
    OLAP Time: 29.2
    DB Time:   99.1
    It seems that most of the time is consumed in the database.
    Any suggestion to speed up this query's response time would be great.
    Thanks in advance.
    BR
    Srini.

    Hi,
    You need standard query performance tuning done for the underlying cubes: better design, aggregates, etc.
    Improve Performance of Queries/Reports on Multi Cubes
    Refer SAP Note Number: 869487
    Performance optimization for MultiCubes
    How to Create Efficient Multi-Provider Queries
    Please see the How to Guide "How to Create Efficient MultiProvider Queries" at http://service.sap.com/bi > SAP NetWeaver 2004 - Release-Specific Information > How-to Guides > Business Intelligence
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/biw/how%20to%20create%20efficient%20multiprovider%20queries.pdf
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/afbad390-0201-0010-daa4-9ef0168d41b6
    Performance of MultiProviders
    Multiprovider performance / aggregate question
    Query Performance
    Multicube performances
    Create Efficient MultiProvider Queries
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/b03b7f4c-c270-2910-a8b8-91e0f6d77096
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/751be690-0201-0010-5e80-f4f92fb4e4ab
    Also try
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    tuning, short dumps
    Performance tuning in BW:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/afbad390-0201-0010-daa4-9ef0168d41b6
    Also notes
    0000903559 MultiProvider optimization is only partially active
    0000825396 Performance in reports with many selections
    multiprovider explanation i need
    Note 629541 - Multiprovider: Parallel Processing 
    Thanks,
    JituK
