Query performance tuning: need your suggestions

Hi,
Below are the SQL query and its explain plan. The query takes about 2 hours to execute, and sometimes it errors out due to a memory issue.
I need to improve the performance of this query. Please suggest how to tweak it so that it executes faster and consumes less memory.
select a11.DATE_ID DATE_ID,
sum(a11.C_MEASURE) WJXBFS1,
count(a11.PKEY_GUID) WJXBFS2,
count(Case when a11.C_MEASURE <= 10 then a11.PKEY_GUID END) WJXBFS3,
count(Case when a11.STATUS = 'Y' and a11.C_MEASURE > 10 then a11.PKEY_GUID END) WJXBFS4,
count(Case when a11.STATUS = 'N' then a11.PKEY_GUID END) WJXBFS5,
sum(((a11.C_MEASURE ))) WJXBFS6,
a17.DESC_DATE_MM_DD_YYYY DESC_DATE_MM_DD_YYYY,
a11.DNS DNS,
a12.VVALUE VVALUE,
a12.VNAME VNAME,
a13.VVALUE VVALUE0,
a13.VNAME VNAME0,
a14.VVALUE VVALUE1,
a14.VNAME VNAME1,
a15.VVALUE VVALUE2,
a15.VNAME VNAME2,
a16.VVALUE VVALUE3,
a16.VNAME VNAME3,
a11.PKEY_GUID PKEY_GUID,
a11.UPKEY_GUID UPKEY_GUID,
a17.DAY_OF_WEEK DAY_OF_WEEK,
a17.D_WEEK D_WEEK,
a17.MNTH_ID DAY_OF_MONTH,
a17.YEAR_ID YEAR_ID,
a17.DESC_YEAR_FULL DESC_YEAR_FULL,
a17.WEEK_ID WEEK_ID,
a17.WEEK_OF_YEAR WEEK_OF_YEAR
from ACTIVITY_F a11
join (SELECT A.ORG as ORG,
A.DATE_ID as DATE_ID,
A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
A.DATE_HOUR_ID as DATE_HOUR_ID,
A.TASK as TASK,
A.PKEY_GUID as PKEY_GUID,
A.VNAME as VNAME,
A.VVALUE as VVALUE
FROM W_ORG_D A join W_PERSON_D B on
(A.TASK = B.TASK AND A.ORG = B.ID
AND A.VNAME = B.VNAME)
WHERE B.VARIABLE_OBJ = 1 ) a12
on (a11.PKEY_GUID = a12.PKEY_GUID and
a11.DATE_ID = a12.DATE_ID and
a11.ORG = a12.ORG)
join (SELECT A.ORG as ORG,
A.DATE_ID as DATE_ID,
A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
A.DATE_HOUR_ID as DATE_HOUR_ID,
A.TASK as TASK,
A.PKEY_GUID as PKEY_GUID,
A.VNAME as VNAME,
A.VVALUE as VVALUE
FROM W_ORG_D A join W_PERSON_D B on
(A.TASK = B.TASK AND A.ORG = B.ID
AND A.VNAME = B.VNAME)
WHERE B.VARIABLE_OBJ = 2) a13
on (a11.PKEY_GUID = a13.PKEY_GUID and
a11.DATE_ID = a13.DATE_ID and
a11.ORG = a13.ORG)
join (SELECT A.ORG as ORG,
A.DATE_ID as DATE_ID,
A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
A.DATE_HOUR_ID as DATE_HOUR_ID,
A.TASK as TASK,
A.PKEY_GUID as PKEY_GUID,
A.VNAME as VNAME,
A.VVALUE as VVALUE
FROM W_ORG_D A join W_PERSON_D B on
(A.TASK = B.TASK AND A.ORG = B.ID
AND A.VNAME = B.VNAME)
WHERE B.VARIABLE_OBJ = 3 ) a14
on (a11.PKEY_GUID = a14.PKEY_GUID and
a11.DATE_ID = a14.DATE_ID and
a11.ORG = a14.ORG)
join (SELECT A.ORG as ORG,
A.DATE_ID as DATE_ID,
A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
A.DATE_HOUR_ID as DATE_HOUR_ID,
A.TASK as TASK,
A.PKEY_GUID as PKEY_GUID,
A.VNAME as VNAME,
A.VVALUE as VVALUE
FROM W_ORG_D A join W_PERSON_D B on
(A.TASK = B.TASK AND A.ORG = B.ID
AND A.VNAME = B.VNAME)
WHERE B.VARIABLE_OBJ = 4) a15
on (a11.PKEY_GUID = a15.PKEY_GUID and
a11.DATE_ID = a15.DATE_ID and
a11.ORG = a15.ORG)
join (SELECT A.ORG as ORG,
A.DATE_ID as DATE_ID,
A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
A.DATE_HOUR_ID as DATE_HOUR_ID,
A.TASK as TASK,
A.PKEY_GUID as PKEY_GUID,
A.VNAME as VNAME,
A.VVALUE as VVALUE
FROM W_ORG_D A join W_PERSON_D B on
(A.TASK = B.TASK AND A.ORG = B.ID
AND A.VNAME = B.VNAME)
WHERE B.VARIABLE_OBJ = 9) a16
on (a11.PKEY_GUID = a16.PKEY_GUID and
a11.DATE_ID = a16.DATE_ID and
a11.ORG = a16.ORG)
join W_DATE_D a17
on (a11.DATE_ID = a17.ID)
join W_SALES_D a18
on (a11.TASK = a18.ID)
where (a17.TIMSTAMP between To_Date('2001-02-24 00:00:00', 'YYYY-MM-DD HH24:MI:SS') and To_Date('2002-09-12 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
and a11.ORG in (12)
and a18.SRC_TASK = 'AX012Z')
group by a11.DATE_ID,
a17.DESC_DATE_MM_DD_YYYY,
a11.DNS,
a12.VVALUE,
a12.VNAME,
a13.VVALUE,
a13.VNAME,
a14.VVALUE,
a14.VNAME,
a15.VVALUE,
a15.VNAME,
a16.VVALUE,
a16.VNAME,
a11.PKEY_GUID,
a11.UPKEY_GUID,
a17.DAY_OF_WEEK,
a17.D_WEEK,
a17.MNTH_ID,
a17.YEAR_ID,
a17.DESC_YEAR_FULL,
a17.WEEK_ID,
a17.WEEK_OF_YEAR;
Explained.
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 1245 | 47 (9)| 00:00:01 |
| 1 | HASH GROUP BY | | 1 | 1245 | 47 (9)| 00:00:01 |
|* 2 | HASH JOIN | | 1 | 1245 | 46 (7)| 00:00:01 |
|* 3 | HASH JOIN | | 1 | 1179 | 41 (5)| 00:00:01 |
|* 4 | HASH JOIN | | 1 | 1113 | 37 (6)| 00:00:01 |
|* 5 | HASH JOIN | | 1 | 1047 | 32 (4)| 00:00:01 |
|* 6 | HASH JOIN | | 1 | 981 | 28 (4)| 00:00:01 |
| 7 | NESTED LOOPS | | 1 | 915 | 23 (0)| 00:00:01 |
| 8 | NESTED LOOPS | | 1 | 763 | 20 (0)| 00:00:01 |
| 9 | NESTED LOOPS | | 1 | 611 | 17 (0)| 00:00:01 |
| 10 | NESTED LOOPS | | 1 | 459 | 14 (0)| 00:00:01 |
| 11 | NESTED LOOPS | | 1 | 307 | 11 (0)| 00:00:01 |
| 12 | NESTED LOOPS | | 1 | 155 | 7 (0)| 00:00:01 |
| 13 | NESTED LOOPS | | 1 | 72 | 3 (0)| 00:00:01 |
| 14 | TABLE ACCESS BY INDEX ROWID| W_SALES_D | 1 | 13 | 2 (0)| 00:00:01 |
|* 15 | INDEX UNIQUE SCAN | CONS_UNQ_W_SALES_D_SRC_ID | 1 | | 1 (0)| 00:00:01 |
| 16 | TABLE ACCESS BY INDEX ROWID| W_DATE_D | 1 | 59 | 1 (0)| 00:00:01 |
|* 17 | INDEX UNIQUE SCAN | UIDX_DD_TIMSTAMP | 1 | | 0 (0)| 00:00:01 |
| 18 | TABLE ACCESS BY INDEX ROWID | ACTIVITY_F | 1 | 83 | 4 (0)| 00:00:01 |
|* 19 | INDEX RANGE SCAN | PK_ACTIVITY_F | 1 | | 3 (0)| 00:00:01 |
|* 20 | TABLE ACCESS BY INDEX ROWID | W_ORG_D      | 1 | 152 | 4 (0)| 00:00:01 |
|* 21 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
|* 22 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 3 (0)| 00:00:01 |
|* 23 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
|* 24 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 3 (0)| 00:00:01 |
|* 25 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
|* 26 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 3 (0)| 00:00:01 |
|* 27 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
|* 28 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 3 (0)| 00:00:01 |
|* 29 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
|* 30 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
|* 31 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
|* 32 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
|* 33 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
|* 34 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------------------

Hi,
I'm not a tuning expert but I can suggest you to post your request according to this template:
Thread: HOW TO: Post a SQL statement tuning request - template posting
HOW TO: Post a SQL statement tuning request - template posting
Then:
a) You should post code that is easy to read. What about formatting? Your code had to be fixed in a couple of places.
b) You could simplify your code using a WITH clause. This has nothing to do with tuning, but it helps the readability of the query.
Check it below:
WITH tab1 AS (SELECT a.org AS org
                   , a.date_id AS date_id
                   , a.time_of_day_id AS time_of_day_id
                   , a.date_hour_id AS date_hour_id
                   , a.task AS task
                   , a.pkey_guid AS pkey_guid
                   , a.vname AS vname
                   , a.vvalue AS vvalue
                   , b.variable_obj
                FROM    w_org_d a
                     JOIN
                        w_person_d b
                     ON (    a.task = b.task
                         AND a.org = b.id
                         AND a.vname = b.vname))
  SELECT a11.date_id date_id
       , SUM (a11.c_measure) wjxbfs1
       , COUNT (a11.pkey_guid) wjxbfs2
       , COUNT (CASE WHEN a11.c_measure <= 10 THEN a11.pkey_guid END) wjxbfs3
       , COUNT (CASE WHEN a11.status = 'Y' AND a11.c_measure > 10 THEN a11.pkey_guid END) wjxbfs4
       , COUNT (CASE WHEN a11.status = 'N' THEN a11.pkey_guid END) wjxbfs5
       , SUM ( ( (a11.c_measure))) wjxbfs6
       , a17.desc_date_mm_dd_yyyy desc_date_mm_dd_yyyy
       , a11.dns dns
       , a12.vvalue vvalue
       , a12.vname vname
       , a13.vvalue vvalue0
       , a13.vname vname0
       , a14.vvalue vvalue1
       , a14.vname vname1
       , a15.vvalue vvalue2
       , a15.vname vname2
       , a16.vvalue vvalue3
       , a16.vname vname3
       , a11.pkey_guid pkey_guid
       , a11.upkey_guid upkey_guid
       , a17.day_of_week day_of_week
       , a17.d_week d_week
       , a17.mnth_id day_of_month
       , a17.year_id year_id
       , a17.desc_year_full desc_year_full
       , a17.week_id week_id
       , a17.week_of_year week_of_year
    FROM activity_f a11
         JOIN tab1 a12
            ON (    a11.pkey_guid = a12.pkey_guid
                AND a11.date_id = a12.date_id
                AND a11.org = a12.org
                AND a12.variable_obj = 1)
         JOIN tab1 a13
            ON (    a11.pkey_guid = a13.pkey_guid
                AND a11.date_id = a13.date_id
                AND a11.org = a13.org
                AND a13.variable_obj = 2)
         JOIN tab1 a14
            ON (    a11.pkey_guid = a14.pkey_guid
                AND a11.date_id = a14.date_id
                AND a11.org = a14.org
                AND a14.variable_obj = 3)
         JOIN tab1 a15
            ON (    a11.pkey_guid = a15.pkey_guid
                AND a11.date_id = a15.date_id
                AND a11.org = a15.org
                AND a15.variable_obj = 4)
         JOIN tab1 a16
            ON (    a11.pkey_guid = a16.pkey_guid
                AND a11.date_id = a16.date_id
                AND a11.org = a16.org
                AND a16.variable_obj = 9)
         JOIN w_date_d a17
            ON (a11.date_id = a17.id)
         JOIN w_sales_d a18
            ON (a11.task = a18.id)
   WHERE (a17.timstamp BETWEEN TO_DATE ('2001-02-24 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
                           AND TO_DATE ('2002-09-12 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
          AND a11.org IN (12)
          AND a18.src_task = 'AX012Z')
GROUP BY a11.date_id, a17.desc_date_mm_dd_yyyy, a11.dns, a12.vvalue
       , a12.vname, a13.vvalue, a13.vname, a14.vvalue
       , a14.vname, a15.vvalue, a15.vname, a16.vvalue
       , a16.vname, a11.pkey_guid, a11.upkey_guid, a17.day_of_week
       , a17.d_week, a17.mnth_id, a17.year_id, a17.desc_year_full
       , a17.week_id, a17.week_of_year;
I hope I did not miss anything while reformatting the code. I could not test it, not having the proper tables.
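Along the same lines: since tab1 is joined five times, you could also try scanning it once and pivoting the five VARIABLE_OBJ values into columns. An untested sketch; it assumes a single row per ORG, DATE_ID, PKEY_GUID and VARIABLE_OBJ, and the NOT NULL filters mimic the inner joins above, which drop rows missing any of the five values:
WITH pvt AS (SELECT a.org, a.date_id, a.pkey_guid
                  , MAX(CASE WHEN b.variable_obj = 1 THEN a.vvalue END) AS vvalue
                  , MAX(CASE WHEN b.variable_obj = 1 THEN a.vname END) AS vname
                  , MAX(CASE WHEN b.variable_obj = 2 THEN a.vvalue END) AS vvalue0
                  , MAX(CASE WHEN b.variable_obj = 2 THEN a.vname END) AS vname0
                  , MAX(CASE WHEN b.variable_obj = 3 THEN a.vvalue END) AS vvalue1
                  , MAX(CASE WHEN b.variable_obj = 3 THEN a.vname END) AS vname1
                  , MAX(CASE WHEN b.variable_obj = 4 THEN a.vvalue END) AS vvalue2
                  , MAX(CASE WHEN b.variable_obj = 4 THEN a.vname END) AS vname2
                  , MAX(CASE WHEN b.variable_obj = 9 THEN a.vvalue END) AS vvalue3
                  , MAX(CASE WHEN b.variable_obj = 9 THEN a.vname END) AS vname3
               FROM w_org_d a
                    JOIN w_person_d b
                       ON (a.task = b.task AND a.org = b.id AND a.vname = b.vname)
              WHERE b.variable_obj IN (1, 2, 3, 4, 9)
              GROUP BY a.org, a.date_id, a.pkey_guid)
SELECT a11.pkey_guid, a11.date_id, a11.org
     , p.vvalue, p.vname, p.vvalue0, p.vname0, p.vvalue1, p.vname1
     , p.vvalue2, p.vname2, p.vvalue3, p.vname3
  FROM activity_f a11
       JOIN pvt p
          ON (    a11.pkey_guid = p.pkey_guid
              AND a11.date_id = p.date_id
              AND a11.org = p.org)
 WHERE p.vname IS NOT NULL AND p.vname0 IS NOT NULL AND p.vname1 IS NOT NULL
   AND p.vname2 IS NOT NULL AND p.vname3 IS NOT NULL;
-- The joins to W_DATE_D and W_SALES_D, the measures and the GROUP BY from the
-- query above carry over unchanged; the ORG = 12 filter could also be pushed
-- into pvt to cut the scan further.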
As I said before, I'm not a tuning expert, nor do I pretend to be, but I see this:
1) Table W_PERSON_D is read with a full table scan five times. Any possibility of using indexes? (See the sketch after point 2.)
2) Tables W_SALES_D, W_DATE_D, ACTIVITY_F and W_ORG_D are accessed with TABLE ACCESS BY INDEX ROWID, which is definitely not fast.
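For point 1, something along these lines might help (a sketch only; check the existing indexes and the column selectivity on your data first):
CREATE INDEX idx_wpd_vobj_task_id_vname
    ON w_person_d (variable_obj, task, id, vname);
-- Leads with the VARIABLE_OBJ filter, then the three join columns, so each
-- of the five probes can be satisfied from the index.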
You should provide additional information for tuning your query, as described in the post I mentioned previously.
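In particular, the actual execution plan says much more than the estimated one. For example:
-- Estimated plan:
EXPLAIN PLAN FOR
SELECT banner FROM v$version;   -- stand-in statement; put the problem query here
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- Actual plan with runtime row counts: run the statement once with the
-- GATHER_PLAN_STATISTICS hint, then in the same session:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));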
Regards.
Al

Similar Messages

  • Reg: Process Chain, query performance tuning steps

    Hi All,
    I came across a question like this: there is a process chain of 20 processes, of which 5 have completed. At the 6th step an error occurred that cannot be rectified, and I should restart the chain from the 7th step. If I go to a particular step I can run that particular step, but how can I restart the entire chain from step 7? I know I need to use a function module, but I don't know its name. Please somebody help me out.
    Please let me know the steps involved in query performance tuning and aggregate tuning.
    Thanks & Regards
    Omkar.K

    Hi,
    Process Chain
    Method 1 (when it fails in a step/request)
    /people/siegfried.szameitat/blog/2006/02/26/restarting-processchains
    How is it possible to restart a process chain at a failed step/request?
    Sometimes, it doesn't help to just set a request to green status in order to run the process chain from that step on to the end.
    You need to set the failed request/step to green in the database, and you also need to raise the event that forces the process chain to run to the end from the next request/step on.
    Therefore you need to open the messages of a failed step by right clicking on it and selecting 'display messages'.
    In the opened popup click on the tab 'Chain'.
    In a parallel session goto transaction se16 for table rspcprocesslog and display the entries with the following selections:
    1. copy the variant from the popup to the variante of table rspcprocesslog
    2. copy the instance from the popup to the instance of table rspcprocesslog
    3. copy the start date from the popup to the batchdate of table rspcprocesslog
    Press F8 to display the entries of table rspcprocesslog.
    Now open another session and goto transaction se37. Enter RSPC_PROCESS_FINISH as the name of the function module and run the fm in test mode.
    Now copy the entries of table rspcprocesslog into the input parameters of the function module as follows:
    1. rspcprocesslog-log_id -> i_logid
    2. rspcprocesslog-type -> i_type
    3. rspcprocesslog-variante -> i_variant
    4. rspcprocesslog-instance -> i_instance
    5. enter 'G' for parameter i_state (sets the status to green).
    Now press F8 to run the fm.
    Now the actual process will be set to green and the following process in the chain will be started and the chain can run to the end.
    Of course you can also set the state of a specific step in the chain to any other possible value like 'R' = ended with errors, 'F' = finished, 'X' = cancelled ....
    Check out the value help on field rspcprocesslog-state in transaction se16 for the possible values.
    Query performance tuning
    General tips
    Using aggregates and compression.
    Use fewer and less complex cell definitions where possible.
    1. Avoid using too many navigational attributes.
    2. Avoid RKFs and CKFs.
    3. Avoid many characteristics in rows.
    Use T-codes ST03 or ST03N:
    Go to transaction ST03 > switch to expert mode > from the left-side menu, under System Load History and Distribution for a particular day, check the query execution time.
    /people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
    /people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
    Try table rsddstats to get the statistics
    Using cache memory will decrease the loading time of the report.
    Run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
    Also try
    1. Use different parameters in ST03 to see the two important figures: the aggregation ratio and the ratio of records transferred to the frontend versus records selected from the DB.
    2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW InfoCubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and its aggregates.
    Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
    It will show dimension vs. fact table sizes in percent. If you take the speed of queries on a cube as the performance metric of the cube, measure query runtime.
    3. The signs are the valuation of the aggregate design and usage. ++ means its compression is good and it is accessed often (in effect, performance is good); if you check its compression ratio, it should be good. -- means the compression ratio is not so good and access is also not so good (performance is not so good). The more plus signs, the more useful the aggregate and the more queries it satisfies; the more minus signs, the worse the evaluation of the aggregate.
    If the valuation is "-----", the aggregate is just overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Also, your query performance can depend on the selection criteria. Since you have a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
    5. In BI 7, statistics need to be activated for ST03 and the BI Admin Cockpit to work.
    You do this by implementing the BW Statistics Business Content: you need to install it, feed data to it, and use the ready-made reports for analysis.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can go to T-Code DB20, which gives you all the performance-related information like:
    Partitions
    Databases
    Schemas
    Buffer Pools
    Tablespaces etc
    Use the tool RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
    If aggregates contain incorrect data, you must regenerate them.
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    Thanks,
    JituK

  • Oracle query performance tuning

    Hi
    I am doing Oracle programming and would like to learn query performance tuning.
    Could you guide me on how I could learn this online and which books to refer to?
    Thank you

    I would recommend purchasing a copy of Cary Millsap's book now:
    http://www.amazon.com/Optimizing-Oracle-Performance-Cary-Millsap/dp/059600527X/ref=sr_1_1?ie=UTF8&qid=1248985270&sr=8-1
    And Jonathan Lewis' when you feel you are at a slightly more advanced level.
    http://www.amazon.com/Cost-Based-Oracle-Fundamentals-Experts-Voice/dp/1590596366/ref=pd_sim_b_2
    Both belong in everyone's bookcase.

  • VAL_FIELD selection to determine RSDRI or MDX query: performance tuning

    According to one of the HTGs, I am working on performance tuning. One of the tips is to query base members by using BAS(xxx) in the expansion pane of a BPC report.
    I did so and found an interesting issue in one of the COPA reports.
    With the income statement, when I choose one node, GROSS_PROFIT, i.e. BAS(GROSS_PROFIT), it generates an RSDRI query, as I can see in UJSTAT. When I choose its parent, BAS(DIRECT_INCOME), it generates an MDX query!
    I checked that DIRECT_INCOME has three members, GROSS_PROFIT, SGA and REV_OTHER; none of them has any formulas.
    Instead of calling BAS(DIRECT_INCOME), I called BAS(GROSS_PROFIT), BAS(SGA) and BAS(REV_OTHER), and I got an RSDRI query again.
    So in summary:
    BAS(PARENT) => MDX query.
    BAS(CHILD1) => RSDRI query.
    BAS(CHILD2) => RSDRI query.
    BAS(CHILD3) => RSDRI query.
    BAS(CHILD1),BAS(CHILD2),BAS(CHILD3) => RSDRI query.
    I know VAL_FIELD is a SAP-reserved name for BPC dimensions. My question is: why does BAS(PARENT) generate an MDX query?
    Interestingly, I can repeat this behavior in my system. My intention is to always get RSDRI queries.
    George

    Ok - it turns out that Crystal Reports disregards BEx Query variables when they are put in the Default Values section of the filter selection.
    I had mine there, and even though CR prompted me for the variables AND the SQL statement it generated had an INCLUDE statement with those variables, I could see from my result set that it still returned everything in the cube, as if there were no restriction on Plant, for instance.
    I should have paid more attention to the info message I got in the BEx Query Designer. It specifically states that the "Variable located in Default Values will be ignored in the MDX Access".
    After moving the variables to the Characteristic Restrictions, my report worked as expected. The slow response time is still an issue, but at least it's not compounded by trying to retrieve all records in the cube while I'm expecting fewer than 2k.
    Hope this helps someone else

  • Need your suggestion ..

    Dear folks,
    Could you share to me your suggestions please .. ??
    I have a summary-type report that comes from InfoCubes. Before the data reaches the InfoCube, it is sent to an ODS first.
    The source of data come from CRM System.
    The case is like this: sometimes my user wants to delete data that has been displayed in the report.
    Could you suggest what I should do regarding this requirement? What procedure should I follow?
    Many thanks.
    regards,
    Niel.

    Dear All,
    I'd like to express my thanks to you for your responses.
    The user wants to delete the data from the report permanently.
    That means, if I have data about transaction 1 for customer A, the data should be deleted from the InfoCube and also from the source (CRM).
    I've heard of a method where we can delete the record in the ODS by marking 0RECORDMODE with 'D'.
    My questions are :
    1. If i use that method, how can i delete the data in info-cube ?
    2. Is there other method except this ?
    fyi, my data flow :
    SAP CRM -> BW : data source -> PSA -> ODS -> Cube.
    Regards,
    Niel.

  • Need Your suggestion on Performance Management

    Hi All
    When creating objectives, they can be linked to different criteria. What I need to know is whether this linking can be based on:
    Person
    Department
    Band
    Can objectives be mapped in one-to-many and many-to-one relations?
    The employee/manager should have an option to remove unwanted objectives. I think it can be done, but I still need all the gurus' inputs.
    When an employee is transferred from one org to another, the existing objectives will need to be attached to the employee.
    Upon transfer of an employee, the final appraisal rating should be given by both managers.
    Do a mid-term appraisal with ratings but do not complete the appraisal.
    Do an annual appraisal with final ratings.
    Can these be possible as a straight fit? I need all your valuable inputs. This is very urgent.

    When creating objectives, can they be linked to different criteria such as person, department or band?
    Yes, you can create different eligibility profiles as required.
    Can objectives be mapped in one-to-many and many-to-one relations?
    You cannot have more than one eligibility profile linked to an objective, but you can combine all the eligibility factors into a single profile; this way you will not need more than one eligibility profile per objective. You can also link one eligibility profile to more than one objective.
    Should the employee/manager have an option to remove unwanted objectives?
    Yes.
    When an employee is transferred from one org to another, do the existing objectives need to be attached to the employee?
    Not sure, but during the appraisal the manager can attach more objectives. Also, if you republish the plan, the extra objectives get added to the employee's scorecard.
    Upon transfer of an employee, should the final appraisal rating be given by both managers?
    The final rating can only be given by the appraiser.
    1. The previous appraiser can change the Main Appraiser and select the new appraiser.
    2. The previous appraiser can add the new manager as a participant and get his inputs.
    Do a mid-term appraisal with ratings without completing the appraisal, and an annual appraisal with final ratings?
    You need to create two separate appraisals. You can create two separate appraisal tasks in the same PMP, one for mid-year and the second for annual. The final rating column will still be there, but you can ignore it.

  • Performance tuning needed for a query.

    Hi all,
    I am facing a problem extracting data and inserting it into another table. I need to fetch part numbers from all_parts_already and then, based on the result, extract places from all_parts and insert them into all_parts_already.
    Sample data in the tables:
    table all_parts_already:
    part part_desc technique company place
    1 A Engine TVS B1
    1 Av Engine TVS B2
    1 Ab Engine TVS B3
    2 Ah Engine TVS B3
    2 Ap Engine TVS B2
    table all_parts:
    technique company place
    Engine TVS B1
    Kim TVS B2
    Engine TVS B3
    Engine TVS B4
    XXXXX TVS B5
    Engine TVS B6
    for c1 in (select distinct parts from all_parts_already
                where technique = 'Engine'
                  and company = 'TVS') loop
        for c2 in (select distinct place from all_parts
                    where technique = 'Engine'
                      and company = 'TVS'
                   minus
                   select distinct place from all_parts_already
                    where technique = 'Engine'
                      and company = 'TVS'
                      and parts = c1.parts) loop
            insert into all_parts_already
                (select c2.place, place_desc, c1.parts, c2.place
                   from place_master
                  where parts = c1.parts and place = c2.place);
        end loop;
    end loop;
    The data I am dealing with is in the millions. One technique may have 1000 parts, and one part may have 500 places, so the loops run that many times, which creates the delay.
    Please tell me how to move forward. I am getting the output I need, but the time it takes is too long (it runs for days).
    Thanks a lot
    :)

    Hi, this is the Oracle Designer forum. You may be better off asking in one of the database/SQL/PL/SQL forums.
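    That said, the row-by-row loops can usually be collapsed into one set-based INSERT ... SELECT, which lets the optimizer use hash joins instead of re-running the inner query once per part. An untested sketch, reusing the table and column names from your code (adjust to the real schema; the select list mirrors your original insert):
    insert into all_parts_already
    select pm.place, pm.place_desc, c.parts, pm.place
      from place_master pm
           join (select c1.parts, c2.place
                   from (select distinct parts
                           from all_parts_already
                          where technique = 'Engine' and company = 'TVS') c1
                        cross join
                        (select distinct place
                           from all_parts
                          where technique = 'Engine' and company = 'TVS') c2
                  where not exists
                        (select 1
                           from all_parts_already apa
                          where apa.technique = 'Engine'
                            and apa.company = 'TVS'
                            and apa.parts = c1.parts
                            and apa.place = c2.place)) c
        on pm.parts = c.parts
       and pm.place = c.place;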

  • Query Performance tuning and scope of improvement

    Hi All ,
    I am on Oracle 10g on a Linux OS.
    I have this below query which I am trying to optimise:
    SELECT 'COMPANY' AS device_brand, mach_sn AS device_source_id,
                           'COMPANY' AS device_brand_raw,
                           CASE
                               WHEN fdi.serial_number IS NOT NULL THEN
                                fdi.serial_number
                               ELSE
                                mach_sn || model_no
                           END AS serial_number_raw,
                           gmd.generic_meter_name AS counter_id,
                           meter_name AS counter_id_raw,
                           meter_value AS counter_value,
                           meter_hist_tstamp AS device_timestamp,
                           rcvd_tstamp AS server_timestamp
                      FROM rdw.v_meter_hist vmh
                      JOIN rdw.generic_meter_def gmd
                        ON vmh.generic_meter_id = gmd.generic_meter_id
                      LEFT OUTER JOIN fdr.device_info fdi
                        ON vmh.mach_sn = fdi.clean_serial_number
                     WHERE meter_hist_id IN
                           (SELECT /*+ PUSH_SUBQ */ MAX(meter_hist_id)
                              FROM rdw.v_meter_hist
                             WHERE vmh.mach_sn IN
                                   ('URR893727')
                               AND vmh.meter_name IN
                                    ('TotalImpressions','TotalBlackImpressions','TotalColorImpressions')
                               AND vmh.meter_hist_tstamp >=to_date ('04/16/2011', 'mm/dd/yyyy')
                               AND vmh.meter_hist_tstamp <= to_date ('04/18/2011', 'mm/dd/yyyy')
                             GROUP BY mach_sn, vmh.meter_def_id)
                     ORDER BY device_source_id, vmh.meter_def_id, meter_hist_tstamp;
    Earlier it was taking too much time, but it started to work faster when I added the /*+ PUSH_SUBQ */ hint to the select query.
    The explain plan generated for the same is :
    | Id  | Operation                | Name              | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |                   |    29M|  3804M|       |    15M  (4)| 53:14:08 |
    |   1 |  SORT ORDER BY           |                   |    29M|  3804M|  8272M|    15M  (4)| 53:14:08 |
    |*  2 |   FILTER                 |                   |       |       |       |            |          |
    |*  3 |    HASH JOIN             |                   |    29M|  3804M|       |  8451K  (2)| 28:10:19 |
    |   4 |     TABLE ACCESS FULL    | GENERIC_METER_DEF |    11 |   264 |       |     3   (0)| 00:00:01 |
    |*  5 |     HASH JOIN RIGHT OUTER|                   |    29M|  3137M|    19M|  8451K  (2)| 28:10:17 |
    |   6 |      TABLE ACCESS FULL   | DEVICE_INFO       |   589K|    12M|       |   799   (2)| 00:00:10 |
    |*  7 |      HASH JOIN           |                   |    29M|  2527M|  2348M|  8307K  (2)| 27:41:29 |
    |*  8 |       HASH JOIN          |                   |    28M|  2016M|       |  6331K  (2)| 21:06:19 |
    |*  9 |        TABLE ACCESS FULL | METER_DEF         |    33 |   990 |       |     4   (0)| 00:00:01 |
    |  10 |        TABLE ACCESS FULL | METER_HIST        |  3440M|   137G|       |  6308K  (2)| 21:01:44 |
    |  11 |       TABLE ACCESS FULL  | MACH_XFER_HIST    |   436M|  7501M|       |  1233K  (1)| 04:06:41 |
    |* 12 |    FILTER                |                   |       |       |       |            |          |
    |  13 |     HASH GROUP BY        |                   |     1 |    26 |       |  6631K  (7)| 22:06:15 |
    |* 14 |      FILTER              |                   |       |       |       |            |          |
    |  15 |       TABLE ACCESS FULL  | METER_HIST        |  3440M|    83G|       |  6304K  (2)| 21:00:49 |
    ------------------------------------------------------------------------------------------------------
    Is there any other way to optimise it further? Please suggest, since I am new to query tuning.
    Thanks and Regards
    KK

    Hi Dom,
    Greetings. Sorry for the delayed response. I have read the How to Post document.
    I will now provide all the required information:
    Version : 10.2.0.4
    OS : Linux
    The SQL query which is facing the performance issue:
    SELECT mh.meter_hist_id, mxh.mach_sn, mxh.collectiontag, mxh.rcvd_tstamp,
              mxh.mach_xfer_id, md.meter_def_id, md.meter_name, md.meter_type,
              md.meter_units, md.meter_desc, mh.meter_value, mh.meter_hist_tstamp,
              mh.max_value, md.generic_meter_id
         FROM meter_hist mh JOIN mach_xfer_hist mxh
              ON mxh.mach_xfer_id = mh.mach_xfer_id
           JOIN meter_def md ON md.meter_def_id = mh.meter_def_id;
    Explain plan for this query:
    Plan hash value: 1878059220
    | Id  | Operation           | Name           | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT    |                |  3424M|   497G|       |    17M  (1)| 56:42:49 |
    |*  1 |  HASH JOIN          |                |  3424M|   497G|       |    17M  (1)| 56:42:49 |
    |   2 |   TABLE ACCESS FULL | METER_DEF      |   423 | 27918 |       |     4    | 00:00:01   |
    |*  3 |   HASH JOIN         |                |  3424M|   287G|    26G|    16M  (1)| 56:38:16 |
    |   4 |    TABLE ACCESS FULL| MACH_XFER_HIST |   432M|    21G|       |  1233K  (1)| 04:06:40 |
    |   5 |    TABLE ACCESS FULL| METER_HIST     |  3438M|   115G|       |  6299K  (2)| 20:59:54 |
    Predicate Information (identified by operation id):
       1 - access("MD"."METER_DEF_ID"="MH"."METER_DEF_ID")
       3 - access("MH"."MACH_XFER_ID"="MXH"."MACH_XFER_ID")
    Parameters:
    show parameter optimizer
    NAME                                 TYPE                                         VALUE
    optimizer_dynamic_sampling           integer                                      2
    optimizer_features_enable            string                                       10.2.0.4
    optimizer_index_caching              integer                                      70
    optimizer_index_cost_adj             integer                                      50
    optimizer_mode                       string                                       ALL_ROWS
    optimizer_secure_view_merging        boolean                                      TRUE
    show parameter db_file_multi
    db_file_multiblock_read_count        integer                                      8
    show parameter cursor_sharing
    NAME                                 TYPE                                         VALUE
    cursor_sharing                       string                                       EXACT
    select  sname  , pname  , pval1  , pval2  from  sys.aux_stats$;
    SNAME                          PNAME                               PVAL1 PVAL2
    SYSSTATS_INFO                  STATUS                                    COMPLETED
    SYSSTATS_INFO                  DSTART                                    07-12-2011 09:22
    SYSSTATS_INFO                  DSTOP                                     07-12-2011 09:52
    SYSSTATS_INFO                  FLAGS                                   0
    SYSSTATS_MAIN                  CPUSPEEDNW                     1153.92254
    SYSSTATS_MAIN                  IOSEEKTIM                              10
    SYSSTATS_MAIN                  IOTFRSPEED                           4096
    SYSSTATS_MAIN                  SREADTIM                            4.398
    SYSSTATS_MAIN                  MREADTIM                            3.255
    SYSSTATS_MAIN                  CPUSPEED                              180
    SYSSTATS_MAIN                  MBRC                                    8
    SYSSTATS_MAIN                  MAXTHR                          244841472
    SYSSTATS_MAIN                  SLAVETHR                           933888
      show parameter db_block_size
    NAME                                 TYPE                                         VALUE
    db_block_size                        integer                                      8192
    Please let me know if any other information is needed. This query is currently taking almost one hour to run.
    Also, we have two indexes, on the columns xfer_id and meter_def_id in both tables, but they are not getting used without any filtering (WHERE clause).
    Will adding a hint to the query above be of some help?
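    For example, something like this (a sketch; the index names below are made up, so the real names from USER_INDEXES would have to be substituted):
    SELECT /*+ INDEX(mh ix_mh_meter_def_id) INDEX(mxh ix_mxh_mach_xfer_id) */
           mh.meter_hist_id, mxh.mach_sn, md.meter_name, mh.meter_value
      FROM meter_hist mh
           JOIN mach_xfer_hist mxh ON mxh.mach_xfer_id = mh.mach_xfer_id
           JOIN meter_def md ON md.meter_def_id = mh.meter_def_id;
    Or is the hash-join/full-scan plan simply the right choice when the whole tables are being read?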
    Thanks and Regards
    KK

  • Query performance tuning

    Hello,
    Can someone please tell me if there is any way to make this query perform better. I tried using a temp table instead of the CTE; it didn't change anything. I think there is a problem with the latter part of the query. When I execute every select individually, they run faster. This query is taking hours to execute and is taking all the CPU.
    lucky

    Why do you need a FULL JOIN if you're applying a filter against the cte AGE anyway? A LEFT OUTER JOIN on a.PAT_ID = b.PAT_ID would do the same...
    Furthermore, I wouldn't place the WHERE clause at the very end of this rather complex query.
    Limit the result set of the CTE AGE by using
    WHERE b.DOB < DATEADD(YEAR,-10,@date) AND b.DOB > DATEADD(YEAR,-23,@date)
    for each section of the CTE.
    If you're using DISTINCT all over the place, get rid of the overhead of eliminating duplicates within the CTE and use UNION ALL instead (see the sketch at the end of this post).
    Normalize your table and therewith avoid the numerous calls to CA.dbo.Trans_VS_ValueSetsToCodes_2014.
    Perform the UNION (ALL?) of QI.dbo.SWHP_CLAIMS_MASTER and SWHP_ANALYTICS.dbo.CLAIMS and apply the WHERE clause to the result set instead of to each set separately.
    Maybe even store some of the sub-results in a temp table and go from there.
    That's all I can see at a first glance.
    To summarize: there's huge room for improvement and most probably much more to do than just that single query... And definitely more than can/should be answered on a forum...
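    To illustrate the UNION ALL point, a stripped-down sketch (the table names here are hypothetical):
    DECLARE @date date = GETDATE();
    -- UNION dedupes and pays for a sort or hash; UNION ALL just concatenates.
    -- Note the DOB filter applied inside each branch rather than once at the end.
    SELECT a.PAT_ID, a.DOB
      FROM dbo.ClaimsSourceA a      -- hypothetical table
     WHERE a.DOB < DATEADD(YEAR, -10, @date)
       AND a.DOB > DATEADD(YEAR, -23, @date)
    UNION ALL
    SELECT b.PAT_ID, b.DOB
      FROM dbo.ClaimsSourceB b      -- hypothetical table
     WHERE b.DOB < DATEADD(YEAR, -10, @date)
       AND b.DOB > DATEADD(YEAR, -23, @date);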

  • Query Performance Tuning - Help

    Hello Experts,
    Good Day to all...
    TEST@ora10g>select * from v$version;
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    "CORE     10.2.0.4.0     Production"
    TNS for IBM/AIX RISC System/6000: Version 10.2.0.4.0 - Productio
    NLSRTL Version 10.2.0.4.0 - Production
    SELECT fa.user_id,
              fa.notation_type,
                 MAX(fa.created_date) maxDate,
                                      COUNT(*) bk_count
    FROM  book_notations fa
    WHERE fa.user_id IN
        ( SELECT user_id
         FROM
           ( SELECT /*+ INDEX(f2,FBK_AN_ID_IDX) */ f2.user_id,
                                                      MAX(f2.notatn_id) f2_annotation_id
            FROM  book_notations f2,
                  title_relation tdpr
            WHERE f2.user_id IN ('100002616221644',
                                          '100002616221645',
                                          '100002616221646',
                                          '100002616221647',
                                          '100002616221648')
              AND f2.pack_id=tdpr.pack_id
              AND tdpr.title_id =93402
            GROUP BY f2.user_id
            ORDER BY 2 DESC)
         WHERE ROWNUM <= 10)
    GROUP BY fa.user_id,
             fa.notation_type
    ORDER BY 3 DESC;
    The cost of the query is too high...
    Below is the explain plan of the query
    | Id  | Operation                                  | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                           |                                |    29 |  1305 |    52  (10)| 00:00:01 |
    |   1 |  SORT ORDER BY                             |                                |    29 |  1305 |    52  (10)| 00:00:01 |
    |   2 |   HASH GROUP BY                            |                                |    29 |  1305 |    52  (10)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID             | book_notations                 |    11 |   319 |     4   (0)| 00:00:01 |
    |   4 |     NESTED LOOPS                           |                                |    53 |  2385 |    50   (6)| 00:00:01 |
    |   5 |      VIEW                                  | VW_NSO_1                       |     5 |    80 |    29   (7)| 00:00:01 |
    |   6 |       HASH UNIQUE                          |                                |     5 |    80 |            |          |
    |*  7 |        COUNT STOPKEY                       |                                |       |       |            |          |
    |   8 |         VIEW                               |                                |     5 |    80 |    29   (7)| 00:00:01 |
    |*  9 |          SORT ORDER BY STOPKEY             |                                |     5 |   180 |    29   (7)| 00:00:01 |
    |  10 |           HASH GROUP BY                    |                                |     5 |   180 |    29   (7)| 00:00:01 |
    |  11 |            TABLE ACCESS BY INDEX ROWID     | book_notations                 |  5356 |   135K|    26   (0)| 00:00:01 |
    |  12 |             NESTED LOOPS                   |                                |  6917 |   243K|    27   (0)| 00:00:01 |
    |  13 |              MAT_VIEW ACCESS BY INDEX ROWID| title_relation                         |     1 |    10 |     1   (0)| 00:00:01 |
    |* 14 |               INDEX RANGE SCAN             | IDX_TITLE_ID                   |     1 |       |     1   (0)| 00:00:01 |
    |  15 |              INLIST ITERATOR               |                                |       |       |            |          |
    |* 16 |               INDEX RANGE SCAN             | FBK_AN_ID_IDX                  |  5356 |       |     4   (0)| 00:00:01 |
    |* 17 |      INDEX RANGE SCAN                      | FBK_AN_ID_IDX                  |   746 |       |     1   (0)| 00:00:01 |
    Table Details
    SELECT COUNT(*) FROM book_notations; --111367
    Columns
    user_id -- nullable field - VARCHAR2(50 BYTE)
    pack_id -- NOT NULL --NUMBER
    notation_type--     VARCHAR2(50 BYTE)     -- nullable field
    CREATED_DATE     - DATE     -- nullable field
    notatn_id     - VARCHAR2(50 BYTE)     -- nullable field      
    Index
    FBK_AN_ID_IDX - Non unique - Composite columns --> (user_id and pack_id)
    SELECT COUNT(*) FROM title_relation; --12678
    Columns
    pack_id - not null - number(38) - PK
    title_id - not null - number(38)
    Index
    IDX_TITLE_ID - Non Unique - TITLE_ID
    Please help...
    Thanks...

    Linus wrote:
    Thanks Bravid for your reply; I highly appreciate it.
    So as you say, index creation on the NULL column doesn't have any impact. OK, fine.
    What happens to the execution plan, performance and the stats when you remove the index hint?
    Find below the Execution Plan and Predicate information
    "PLAN_TABLE_OUTPUT"
    "Plan hash value: 126058086"
    "| Id  | Operation                                  | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |"
    "|   0 | SELECT STATEMENT                           |                                |    25 |  1125 |    55  (11)| 00:00:01 |"
    "|   1 |  SORT ORDER BY                             |                                |    25 |  1125 |    55  (11)| 00:00:01 |"
    "|   2 |   HASH GROUP BY                            |                                |    25 |  1125 |    55  (11)| 00:00:01 |"
    "|   3 |    TABLE ACCESS BY INDEX ROWID             | book_notations                 |    10 |   290 |     4   (0)| 00:00:01 |"
    "|   4 |     NESTED LOOPS                           |                                |    50 |  2250 |    53   (8)| 00:00:01 |"
    "|   5 |      VIEW                                  | VW_NSO_1                       |     5 |    80 |    32  (10)| 00:00:01 |"
    "|   6 |       HASH UNIQUE                          |                                |     5 |    80 |            |          |"
    "|*  7 |        COUNT STOPKEY                       |                                |       |       |            |          |"
    "|   8 |         VIEW                               |                                |     5 |    80 |    32  (10)| 00:00:01 |"
    "|*  9 |          SORT ORDER BY STOPKEY             |                                |     5 |   180 |    32  (10)| 00:00:01 |"
    "|  10 |           HASH GROUP BY                    |                                |     5 |   180 |    32  (10)| 00:00:01 |"
    "|  11 |            TABLE ACCESS BY INDEX ROWID     | book_notations                 |  5875 |   149K|    28   (0)| 00:00:01 |"
    "|  12 |             NESTED LOOPS                   |                                |  7587 |   266K|    29   (0)| 00:00:01 |"
    "|  13 |              MAT_VIEW ACCESS BY INDEX ROWID| title_relation                      |     1 |    10 |     1   (0)| 00:00:01 |"
    "|* 14 |               INDEX RANGE SCAN             | IDX_TITLE_ID                   |     1 |       |     1   (0)| 00:00:01 |"
    "|  15 |              INLIST ITERATOR               |                                |       |       |            |          |"
    "|* 16 |               INDEX RANGE SCAN             | FBK_AN_ID_IDX                  |  5875 |       |     4   (0)| 00:00:01 |"
    "|* 17 |      INDEX RANGE SCAN                      | FBK_AN_ID_IDX                  |   775 |       |     1   (0)| 00:00:01 |"
    "Predicate Information (identified by operation id):"
    "   7 - filter(ROWNUM<=10)"
    "   9 - filter(ROWNUM<=10)"
    "  14 - access(""TDPR"".""TITLE_ID""=93402)"
    "  16 - access((""F2"".""USER_ID""='100002616221644' OR ""F2"".""USER_ID""='100002616221645' OR "
    "              ""F2"".""USER_ID""='100002616221646' OR ""F2"".""USER_ID""='100002616221647' OR "
    "              ""F2"".""USER_ID""='100002616221648') AND ""F2"".""PACK_ID""=""TDPR"".""PACK_ID"")"
    "  17 - access(""FA"".""USER_ID""=""$nso_col_1"")"
    The cost is the same because the plan is the same. The optimiser chose to use that index anyway. The point is, now that you have removed it, the optimiser is free to choose other indexes or a full table scan if it wants to.
    Statistics
    BEGIN
    DBMS_STATS.GATHER_TABLE_STATS ('TEST', 'BOOK_NOTATIONS');
    END;
    "COLUMN_NAME"     "NUM_DISTINCT"     "NUM_BUCKETS"     "HISTOGRAM"
    "NOTATION_ID"     110269     1     "NONE"
    "USER_ID"     213     212     "FREQUENCY"
    "PACK_ID"     20     20     "FREQUENCY"
    "NOTATION_TYPE"     8     8     "FREQUENCY"
    "CREATED_DATE"     87     87     "FREQUENCY"
    "CREATED_BY"     1     1     "NONE"
    "UPDATED_DATE"     2     1     "NONE"
    "UPDATED_BY"     2     1     "NONE"
    After removing the hint, the query still shows the same "COST".
    Autotrace
    recursive calls     1
    db block gets     0
    consistent gets     34706
    physical reads     0
    redo size     0
    bytes sent via SQL*Net to client     964
    bytes received via SQL*Net from client     1638
    SQL*Net roundtrips to/from client     2
    sorts (memory)     3
    sorts (disk)     0
    Output of query
    "USER_ID"     "NOTATION_TYPE"     "MAXDATE"     "COUNT"
    "100002616221647"     "WTF"     08-SEP-11     20000
    "100002616221645"     "LOL"     08-SEP-11     20000
    "100002616221644"     "OMG"     08-SEP-11     20000
    "100002616221648"     "ABC"     08-SEP-11     20000
    "100002616221646"     "MEH"     08-SEP-11     20000Thanks...I still don't know what we're working towards at the moment. WHat is the current run time? What is the expected run time?
    I can't tell you if there's a better way to write this query, or indeed if there is another way to write it, because I don't know what it is attempting to achieve.
    I can see that you're accessing 100k rows from a 110k-row table and it's using an index to look those rows up. That seems like a job for a full table scan rather than index lookups.
    David
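    One quick way to test that theory: remove the inner INDEX hint, force the full scan on the outer block, and compare the autotrace figures. Same statement as above otherwise:
    SELECT /*+ FULL(fa) */ fa.user_id,
           fa.notation_type,
           MAX(fa.created_date) maxDate,
           COUNT(*) bk_count
      FROM book_notations fa
     WHERE fa.user_id IN
           (SELECT user_id
              FROM (SELECT f2.user_id,
                           MAX(f2.notatn_id) f2_annotation_id
                      FROM book_notations f2,
                           title_relation tdpr
                     WHERE f2.user_id IN ('100002616221644',
                                          '100002616221645',
                                          '100002616221646',
                                          '100002616221647',
                                          '100002616221648')
                       AND f2.pack_id = tdpr.pack_id
                       AND tdpr.title_id = 93402
                     GROUP BY f2.user_id
                     ORDER BY 2 DESC)
             WHERE ROWNUM <= 10)
     GROUP BY fa.user_id, fa.notation_type
     ORDER BY 3 DESC;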

  • Outerjoin query performance tuning

    Hi,
    I have two tables, tab1 (7 million records) and tab2 (50,000 records).
    The following query takes more than 15 minutes to fetch the result from the database.
    SELECT a.col11, a.col12, b.col11, b.col12
    FROM tab1 a, tab2 b
    WHERE a.col1 = b.col1 (+) AND
    a.col2 = b.col2 (+) AND
    a.col3 = b.col3 (+) AND
    a.col4 = b.col4 (+);
    Please suggest ways to tune the above query. I am working on Oracle 9i Release 2.
    Thanks in advance.

    You should probably go through the usual steps to tune the query ... get a plan, check indexes, etc.
    Ideally you could eliminate the outer join, but that's not usually an option :(.
    One idea that might or might not help is to execute two different queries: a normal join UNIONed to one covering the rows where the join fails. In my experience it stands about a 50% chance of improving performance, but you must be absolutely sure the modified query returns the same results as the original. The correlated subquery below must use indexes for this to help. The idea looks something like this:
    select a.col11, a.col12, b.col11, b.col12
      from tab1 a, tab2 b
     where a.col1 = b.col1
       and a.col2 = b.col2
       and a.col3 = b.col3
       and a.col4 = b.col4
    UNION ALL
    select a.col11, a.col12, null, null
      from tab1 a
     where not exists (
           select 0
             from tab2 b
            where a.col1 = b.col1
              and a.col2 = b.col2
              and a.col3 = b.col3
              and a.col4 = b.col4
           );

  • Query performance tuning beyond entity caching

    Hi,
    We have an extremely large read-only dataset stored using BDB-JE DPL. I'm seeking to tune our use of BDB while querying, and I'm wondering what options we have beyond simply attempting to cache more entities? Our expected cache hit rate is low. Can I tune things to keep more of the btree nodes and other internal structures buffered? What kind of configuration parameters should I be looking at?
    Thanks,
    Brian

    No, you don't have to preload the leaf nodes. But if you don't preload the secondary at all, you'll see more I/O when you read by secondary index.
    If you don't have enough cache to load leaf nodes, you should not call setLoadLNs(true), for primary or secondary DBs. Instead, try to load the internal nodes for all DBs if possible. You can limit the time taken by preload using PreloadConfig.
    I strongly suspect that the primary DB loads faster because it is probably written in key order, while the secondaries are not.
    The LRU-only setting is an environment wide setting and should apply to all databases, that is not a problem. If you are doing random access in general, this is the correct setting.
    Please use preload to reduce the amount of I/O that you see in the environment stats. If performance is still not adequate, you may want to look at the IO subsystem you're using -- do you know what you're getting for seek and read times? Also, you may want to turn on the Java verbose GC option and see if full GCs are occurring -- if so, tuning the Java GC will be necessary.
    --mark

  • Query Performance Tuning from a cockpit

    When running some queries from within a cockpit we are experiencing long search times from the cubes. What methods can be used to tune queries and improve search times? Is there a way to get an explain plan (an Oracle feature) of the query? Can we trace the steps and bottlenecks that a query runs into?

    Ryan, try searching for the topic posted a couple of days back. All you have to do is search for the word "Performance".
    Topic: "Performance", started by "thisquestion 4u"
    Posted: Jul 25, 2005
    Let me know if you need help.
    Pete

  • Need your suggestions

    I'm so frustrated. I'm not going to follow the lemming iPod rush.
    Could someone please review my info and give me some suggestions as to which Creative MP3 player would best suit my needs? I would hate to buy the wrong player. I appreciate your honest opinions on your personal Creative player.
    Please do not recommend a player that is soon to be discontinued! I really appreciate your help in this matter. I'm currently dropping hints that I want a player for Christmas and hope to have a product name/price to tell people when they ask what I want.
    Must haves:
    - Plenty of storage for 0 plus audio books and over 500 songs
    - Bookmark function for audio books
    - Minimum of 8 hours of continuous play
    - Flash memory (I don't plan to drop it, but if a good song starts to play I might start dancing)
    - Windows XP compatible
    - Upgrades and online support
    - Easy upload of audio books and songs
    - Decent sound quality from headphones
    - Comes with headphones and other cables/cords/software to start using straight from the box
    - Accessories that are available most anywhere
    - Minimum 30-day warranty that covers most aspects
    Not needed:
    - Features such as photo or video storage/viewing, calendar, address book, memo pad, etc.
    - Color screen

    Once you say flash player, your options are limited when it comes to Audible support. First off, the capacity of a flash player is small for what Creative is offering, so right there you need to get a hard-drive player.
    If flash is what you REALLY want, then consider another brand. But if you can settle for a hard drive, then go for a Zen Micro, Zen Neeon 5GB/6GB, or Zen Sleek. I'm not sure if the Zen Micro Photo and Zen Sleek Photo have Audible support, but I would think so; hopefully someone can confirm this, as the websites for both devices don't list Audible as a compatible format.
    Also, on the accessories front: as the market is dominated by the iPod, that's pretty much all you see. Accessories are pretty much "iPod only" versus everything else, so with anything but an iPod your choice of accessories is limited.

  • Need your suggestion / Input about choosing Apex for new application

    Guys,
    I came across Oracle APEX this week and started digging through the documentation and presentations, and read some forums as well. I now believe our application could benefit from APEX, but I would like to share the high-level functionality I am trying to implement through it.
    The application's functionality is as follows:
    1) There will be 40 different entities in the application (create/edit/read/delete)
    2) Each entity and its CRUD processing will be role-based
    3) On various events, send email and generate XML files for interfacing with external systems
    4) Scheduled report generation and normal report generation
    5) Detailed audit trail of the application (doesn't depend on Ajax, but I still want to point it out)
    6) Various search capabilities for each entity
    7) On various events and user selections, generate a PDF file which the user can download/print
    8) Many client-side and server-side validations
    Based on my reading, I believe this can be achieved, certainly with some learning curve in Ajax (which I have to tackle myself, as I lost my entire project team due to budget issues). I have a web development background using Java, JSP, and Servlets for 5 years (nothing in the last 3 years, though).
    Please let me know your thoughts based on my current situation and functionality.
    Also, do I need to buy anything to get started with APEX and implement a full-blown APEX application? (They say it's free, but I just want to understand your perspective as well.)
    Thank you for reading the post and your support.
    -Raj
