Forcing a query to use a different plan
I have one database where a SQL plan appears to be correct in tests. In another database the join order is flipped, although the data is relatively close. Hinting the query would require a code change. What is the easiest way to force the plan from the other database?
If this new plan does not work, what is the easiest way to revert back?
Hi,
Another option is SQL Plan Management
SQL Plan Management (SPM) prevents performance regressions resulting from sudden changes to the execution plan of a SQL statement by providing components for capturing, selecting, and evolving SQL plan information. Changes to the execution plan may result from database upgrades, system and data changes, application upgrades, or bug fixes. When SPM is enabled, the system maintains a plan history that contains all plans generated by the optimizer and stores them in a component called the SQL plan baseline. Among the plans in the history, those that are verified not to cause performance regressions are marked as accepted. The plan baseline is used by the optimizer to decide on the best plan to use when compiling a SQL statement. The repository in the data dictionary that stores the plan baselines and the statement log maintained by the optimizer is called the SQL Management Base (SMB).
http://www.oracle.com/technology/pub/articles/oracle-database-11g-top-features/11g-sqlplanmanagement.html
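As a hedged sketch of how that could look on 11g (the sql_id, plan hash value, sql_handle, and plan name below are placeholders you would look up in V$SQL and DBA_SQL_PLAN_BASELINES; they are not from the thread):

```sql
-- Load the desired plan from the cursor cache into a SQL plan baseline.
-- Placeholders: &good_sql_id and &good_plan_hash come from V$SQL on the
-- database where the plan is correct. (Baselines can also be moved between
-- databases with DBMS_SPM.CREATE_STGTAB_BASELINE / PACK_STGTAB_BASELINE /
-- UNPACK_STGTAB_BASELINE.)
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
         sql_id          => '&good_sql_id',
         plan_hash_value => &good_plan_hash);
END;
/

-- To revert if the new plan does not work out, disable (or drop) the baseline:
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.ALTER_SQL_PLAN_BASELINE(
         sql_handle      => '&sql_handle',
         plan_name       => '&plan_name',
         attribute_name  => 'ENABLED',
         attribute_value => 'NO');
END;
/
```

Because nothing in the application changes, disabling the baseline immediately returns the optimizer to its own choice of plan, which answers the "easiest way to revert" part of the question.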
Regards,
Similar Messages
-
SQL Query C# Using Execution Plan Cache Without SP
I have a situation where i am executing an SQL query thru c# code. I cannot use a stored procedure because the database is hosted by another company and i'm not allowed to create any new procedures. If i run my query on the sql mgmt studio the first time
is approx 3 secs then every query after that is instant. My query is looking for date ranges and accounts. So if i loop thru accounts each one takes approx 3 secs in my code. If i close the program and run it again the accounts that originally took 3 secs
now are instant in my code. So my conclusion was that it is using an execution plan that is cached. I cannot find how to make the execution plan run on non-stored procedure code. I have created a sqlcommand object with my query and 3 params. I loop thru each
one keeping the same command object and only changing the 3 params. It seems that each version with the different params are getting cached in the execution plans so they are now fast for that particular query. My question is how can i get sql to not do this
by either loading the execution plan or by making sql think that my query is the same execution plan as the previous? I have found multiple questions on this that pertain to stored procedures but nothing i can find with direct text query code.
Bob;
I did the query running different accounts and different dates with instant results AFTER the very first query that took the expected 3 secs. I changed all 3 fields that i've got code for parameters for and it still remains instant in the mgmt studio but
still remains slow in my code. I'm providing a sample of the base query i'm using.
select i.Field1, i.Field2,
d.Field3 'Field3',
ip.Field4 'Field4',
k.Field5 'Field5'
from SampleDataTable1 i,
SampleDataTable2 k,
SampleDataTable3 ip,
SampleDataTable4 d
where i.Field1 = k.Field1 and i.Field4 = ip.Field4
and i.FieldDate between '<fromdate>' and '<thrudate>'
and k.Field6 = <Account>
Obviously the field names have been altered because the database is not mine, but other than the actual names it is accurate. It works; it just takes too long in code, as described in the initial post.
My params setup during the init for the connection and the command.
sqlCmd.Parameters.Add("@FromDate", SqlDbType.DateTime);
sqlCmd.Parameters.Add("@ThruDate", SqlDbType.DateTime);
sqlCmd.Parameters.Add("@Account", SqlDbType.Decimal);
Each loop thru the code changes these 3 fields.
sqlCommand.Parameters["@FromDate"].Value = dtFrom;
sqlCommand.Parameters["@ThruDate"].Value = dtThru;
sqlCommand.Parameters["@Account"].Value = sAccountNumber;
SqlDataReader reader = sqlCommand.ExecuteReader();
while (reader.Read())
{
    // process each row here
}
reader.Close();
One thing i have noticed is that the account field is decimal(20,0) and by default the init i'm using defaults to decimal(10) so i'm going to change the init to
sqlCmd.Parameters["@Account"].Precision = 20;
sqlCmd.Parameters["@Account"].Scale = 0;
I don't believe this would change anything but at this point i'm ready to try anything to get the query running faster.
Bob; -
How to force a sql_id to use a specific hash plan value
Hi,
in my 11.2 database I've seen that a sql_id changed plan hash value.
The original plan was faster than the second, but the optimizer uses the second.
How can I force the sql_id to use the first plan hash value?
Tnx
Hi,
I have a problem.
I fixed the plan 3 days ago, but looking at AWR today, it still uses the old plan hash.
# Plan Hash Value  Total Elapsed Time(ms)  Executions  1st Capture Snap ID  Last Capture Snap ID
1 2658787094       3,592,228               0           334                  334

Looking into dba_sql_plan_baselines I have 2 plan names, and only one is fixed.
select *
from table(dbms_xplan.display_sql_plan_baseline(sql_handle => 'SQL_3d5572d6f5e8cdda', format => 'basic'));
Plan name: SQL_PLAN_3upbkuvuyjmfub082c0c4 Plan id: 2961359044
Enabled: YES Fixed: NO Accepted: YES Origin: MANUAL-LOAD
Plan hash value: 2658787094
Plan name: SQL_PLAN_3upbkuvuyjmfu7ca80389 Plan id: 2091385737
Enabled: YES Fixed: YES Accepted: YES Origin: MANUAL-LOAD
Plan hash value: 3534976400
id  plan hash   last seen            elapsed (s)  origin  note
1   3534976400  2012-10-26/14:00:11  183.232      AWR     not reproducible
2   2658787094  2012-10-29/12:00:24  23116.872    AWR     not reproducible

Why do I see the old plan (2658787094)?
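One hedged observation on the output above: a FIXED baseline plan is only used if the optimizer can actually reproduce it on the current database (note the "not reproducible" entries). If the fixed plan cannot be reproduced, the optimizer silently falls back to another accepted plan. A quick way to check what SPM itself records (the sql_handle is the one from the thread; the REPRODUCED column is available on 11g):

```sql
-- Check the state of each baseline plan for this statement.
-- REPRODUCED = 'NO' would explain why a fixed plan is silently ignored.
SELECT plan_name, plan_id, enabled, accepted, fixed, reproduced
FROM   dba_sql_plan_baselines
WHERE  sql_handle = 'SQL_3d5572d6f5e8cdda';
```

If the fixed plan turns out not to be reproducible (for example, an index it depends on is missing or unusable here), fixing it harder will not help; the supporting objects have to exist first.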
Thanks -
Spatial Query not using Spatial Index
Hi All,
I have a query which uses the SDO_WITHIN_DISTANCE operator, but is taking far too long to complete.
SELECT
RT.*,RD.RPD_NODE_ID, RD.RPD_XCOORD,RD.RPD_YCOORD
FROM
railplan_data RD
LEFT JOIN Walk_data_sets WDS ON RD.RPD_RPS_ID = WDS.WDS_RPS_ID
LEFT JOIN RWNet_Temp RT ON WDS.WDS_ID = RT.RW_WDS_ID
WHERE
WDS.wds_id = 441
AND
MDSYS.SDO_WITHIN_DISTANCE(RT.RW_GEOM,RD.RPD_GEOLOC,'DISTANCE=' || TO_CHAR(RT.RW_BUFFER) || ' UNIT=METER') = 'TRUE';
Upon generation of the explain plan I have realised that the spatial index is not being used in the query, but I can't for the life of me get the thing working
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 25841 | 99M| | 201 |
|* 1 | FILTER | | | | | |
| 2 | MERGE JOIN OUTER | | | | | |
|* 3 | HASH JOIN | | 12652 | 420K| 2968K| 185 |
| 4 | TABLE ACCESS FULL | RAILPLAN_DATA | 75910 | 2075K| | 60 |
| 5 | TABLE ACCESS BY INDEX ROWID| WALK_DATA_SETS | 1 | 6 | | 1 |
|* 6 | INDEX UNIQUE SCAN | WDS_PK | 1 | | | |
|* 7 | SORT JOIN | | 16 | 63760 | | 16 |
|* 8 | TABLE ACCESS FULL | RWNET_TEMP | 16 | 63760 | | 4 |
If anyone could help me out in figuring out why the spatial index is not being used, I would be most appreciative.
TIA
Dan
Hi all again,
Well I finally got an upgrade to Oracle 10 (yay!), so I am now trying to implement the SDO_JOIN method as per my earlier posts. In fact it is actually working, but I have a question. When I run an explain plan it does not show the use of any domain indexes which I would expect to see, but performs fine (1.07s) with just a few records (10 in 1 table, 15000 in the other), please see code and explain plan below:
SELECT
Distinct
RT.RW_ID, RD.RPD_NODE_ID,
RD.RPD_XCOORD,RD.RPD_YCOORD
FROM
RPD_TEMP_762 RD,
WALK_DATA_SETS WDS,
RWNET_TEMP RT,
TABLE
(SDO_JOIN
( 'RWNET_TEMP',
'RW_GEOM',
'RPD_TEMP_762',
'RPD_GEOLOC',
'distance= ' || TO_CHAR(RT.RW_BUFFER) || ' unit=meter')) SPATIAL_JOIN_RESULT
WHERE WDS.WDS_ID = RT.RW_WDS_ID
AND WDS.WDS_ID = 762
AND SPATIAL_JOIN_RESULT.ROWID1 = RT.ROWID
AND SPATIAL_JOIN_RESULT.ROWID2 = RD.ROWID
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 74 | 5994 | 21753 (1)|
| 1 | SORT UNIQUE | | 74 | 5994 | 21691 (1)|
|* 2 | HASH JOIN | | 1046K| 80M| 1859 (1)|
| 3 | NESTED LOOPS | | 6076 | 213K| 1824 (1)|
| 4 | NESTED LOOPS | | 74 | 2516 | 194 (1)|
|* 5 | INDEX UNIQUE SCAN | WDS_PK | 1 | 4 | 0 (0)|
|* 6 | TABLE ACCESS FULL | RWNET_TEMP | 74 | 2220 | 194 (1)|
|* 7 | COLLECTION ITERATOR PICKLER FETCH| SDO_JOIN | | | |
| 8 | TABLE ACCESS FULL | RPD_TEMP_762 | 17221 | 756K| 28 (0)|
------------------------------------------------------------------------------------------

When I try to add hints to force the use of spatial indexes, the performance of this query drops through the floor (it takes minutes/hours). Index hint shown below:
/*+ ORDERED INDEX(RW rw_geom) INDEX(RD rpd_geoloc) */
My question is: is the first query using domain indexes, and if not, how do I get it to?
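A hedged note on the hint above: the INDEX hint expects the table alias used in the query and the *name* of the index, not the geometry column (and the aliases in the query are RT/RD, not RW). The index names below are hypothetical, and a fixed distance stands in for the per-row RW_BUFFER; the real index names can be looked up first:

```sql
-- Find the real spatial (domain) index names first:
SELECT table_name, index_name
FROM   user_indexes
WHERE  ityp_name = 'SPATIAL_INDEX';

-- Then reference the query's aliases (RT, RD) and those index names,
-- e.g. (RWNET_TEMP_SIDX and RPD_TEMP_762_SIDX are hypothetical):
SELECT /*+ ORDERED INDEX(RT RWNET_TEMP_SIDX) INDEX(RD RPD_TEMP_762_SIDX) */
       RT.RW_ID, RD.RPD_NODE_ID
FROM   RWNET_TEMP RT, RPD_TEMP_762 RD
WHERE  SDO_WITHIN_DISTANCE(RT.RW_GEOM, RD.RPD_GEOLOC,
                           'distance=100 unit=meter') = 'TRUE';
```

With a correctly named domain-index hint, a poor plan may still simply mean the domain index is the wrong access path for this data volume; SDO_JOIN, as used above, is often the better tool for table-to-table distance joins.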
TIA
Dan -
SQL Query not using Composite Index
Hi,
Please look at the below query:
SELECT pde.participant_uid
,pde.award_code
,pde.award_type
,SUM(decode(pde.distribution_type
,'FORFEITURE'
,pde.forfeited_quantity *
pde.sold_price * cc.rate
,pde.distributed_quantity *
pde.sold_price * cc.rate)) AS gross_Amt_pref_Curr
FROM part_distribution_exec pde
,currency_conversion cc
,currency off_curr
WHERE pde.participant_uid = 4105
AND off_curr.currency_iso_code =
pde.offering_currency_iso_code
AND cc.from_currency_uid = off_curr.currency_uid
AND cc.to_currency_uid = 1
AND cc.latest_flag = 'Y'
GROUP BY pde.participant_uid
,pde.award_code
,pde.award_type
In Oracle 9i, I've executed the above query; it takes 6 seconds and the cost is 616. This is due to non-usage of the composite index CURRENCY_CONVERSION_IDX (FROM_CURRENCY_UID, TO_CURRENCY_UID, LATEST_FLAG). I wonder why this index is not used while executing the above query. So I dropped the index and recreated it, and now the query uses the index. But after inserting many rows, or say after one day, the same query again stops using the index, so every day the index has to be dropped and recreated.
I don't want this daily drop and recreation of the index; I need a permanent solution.
Can anyone tell me why this index goes stale after a period of time? Please take some time and help solve this issue.
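A hedged sketch of an alternative to the daily drop/recreate: the index is probably not really going "stale"; more likely the statistics drift as rows are inserted, and the drop/recreate only helps because it refreshes the index statistics as a side effect. Regathering statistics on a schedule usually achieves the same thing (resolving the schema via USER is an assumption; adjust ownname as needed):

```sql
-- Refresh optimizer statistics (including index stats via cascade) so the
-- composite index stays attractive without being dropped and recreated.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => USER,
    tabname    => 'CURRENCY_CONVERSION',
    cascade    => TRUE,
    method_opt => 'FOR ALL COLUMNS SIZE AUTO');
END;
/
```

Scheduling this nightly (e.g. via DBMS_JOB on 9i) is far cheaper than rebuilding the index, and it addresses the actual cause rather than the symptom.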
-Sankar
Hi David,
This is Sankar here. Thank you for your reply.
I've got the plan table output for this problematic query. Please go through it and help me understand why the index CURRENCY_CONVERSION_IDX is used now, but not when the query is executed after a day or after inserting some records...
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | 26 | 15678 | 147 |
| 1 | TABLE ACCESS BY INDEX ROWID | PART_AWARD_PAYOUT_SCHEDULE | 1 | 89 | 2 |
|* 2 | INDEX UNIQUE SCAN | PART_AWARD_PAYOUT_SCHEDULE_PK1 | 61097 | | 1 |
| 3 | SORT AGGREGATE | | 1 | 67 | |
|* 4 | FILTER | | | | |
|* 5 | INDEX RANGE SCAN | PART_AWARD_PAYOUT_SCHEDULE_PK1 | 1 | 67 | 2 |
| 6 | SORT AGGREGATE | | 1 | 94 | |
|* 7 | FILTER | | | | |
|* 8 | TABLE ACCESS BY INDEX ROWID | PART_AWARD_PAYOUT_SCHEDULE | 1 | 94 | 3 |
|* 9 | INDEX RANGE SCAN | PART_AWARD_PAYOUT_SCHEDULE_PK1 | 1 | | 2 |
|* 10 | FILTER | | | | |
|* 11 | HASH JOIN | | 26 | 15678 | 95 |
|* 12 | HASH JOIN OUTER | | 26 | 11596 | 91 |
|* 13 | HASH JOIN | | 26 | 10218 | 86 |
| 14 | VIEW | | 1 | 82 | 4 |
| 15 | SORT GROUP BY | | 1 | 116 | 4 |
|* 16 | TABLE ACCESS BY INDEX ROWID | PART_AWARD_LEDGER | 1 | 116 | 2 |
|* 17 | INDEX RANGE SCAN | PARTICIPANT_UID_IDX | 1 | | 1 |
|* 18 | HASH JOIN OUTER | | 26 | 8086 | 82 |
|* 19 | HASH JOIN | | 26 | 6006 | 71 |
| 20 | NESTED LOOPS | | 36 | 5904 | 66 |
| 21 | NESTED LOOPS | | 1 | 115 | 65 |
| 22 | TABLE ACCESS BY INDEX ROWID | CURRENCY_CONVERSION | 18 | 756 | 2 |
|* 23 | INDEX RANGE SCAN | KLS_IDX_CURRENCY_CONV | 3 | | 1 |
| 24 | VIEW | | 1 | 73 | 4 |
| 25 | SORT GROUP BY | | 1 | 71 | 4 |
| 26 | TABLE ACCESS BY INDEX ROWID| PART_AWARD_VALUE | 1 | 71 | 2 |
|* 27 | INDEX RANGE SCAN | PAV_PARTICIPANT_UID_IDX | 1 | | 1 |
| 28 | TABLE ACCESS BY INDEX ROWID | PARTICIPANT_AWARD | 199 | 9751 | 1 |
|* 29 | INDEX UNIQUE SCAN | PARTICIPANT_AWARD_PK1 | 100 | | |
|* 30 | INDEX FAST FULL SCAN | PARTICIPANT_AWARD_TYPE_PK1 | 147 | 9849 | 4 |
| 31 | VIEW | | 1 | 80 | 10 |
| 32 | SORT GROUP BY | | 1 | 198 | 10 |
|* 33 | TABLE ACCESS BY INDEX ROWID | CURRENCY_CONVERSION | 1 | 42 | 2 |
| 34 | NESTED LOOPS | | 1 | 198 | 8 |
| 35 | NESTED LOOPS | | 2 | 312 | 4 |
| 36 | TABLE ACCESS BY INDEX ROWID| PART_DISTRIBUTION_EXEC | 2 | 276 | 2 |
|* 37 | INDEX RANGE SCAN | IND_PARTICIPANT_UID | 1 | | 1 |
| 38 | TABLE ACCESS BY INDEX ROWID| CURRENCY | 1 | 18 | 1 |
|* 39 | INDEX UNIQUE SCAN | CURRENCY_AK | 1 | | |
|* 40 | INDEX RANGE SCAN | CURRENCY_CONVERSION_AK | 2 | | 1 |
| 41 | VIEW | | 1 | 53 | 4 |
| 42 | SORT GROUP BY | | 1 | 62 | 4 |
|* 43 | TABLE ACCESS BY INDEX ROWID | PART_AWARD_VESTING | 1 | 62 | 2 |
|* 44 | INDEX RANGE SCAN | PAVES_PARTICIPANT_UID_IDX | 1 | | 1 |
| 45 | TABLE ACCESS FULL | AWARD | 1062 | 162K| 3 |
| 46 | TABLE ACCESS BY INDEX ROWID | CURRENCY | 1 | 18 | 2 |
|* 47 | INDEX UNIQUE SCAN | CURRENCY_AK | 102 | | 1 |
Predicate Information (identified by operation id):
2 - access("PAPS"."AWARD_CODE"=:B1 AND "PAPS"."PARTICIPANT_UID"=4105 AND "PAPS"."AWARD_TYPE"=:B2
"PAPS"."INSTALLMENT_NUM"=1)
4 - filter(4105=:B1)
5 - access("PAPS"."AWARD_CODE"=:B1 AND "PAPS"."PARTICIPANT_UID"=4105 AND "PAPS"."AWARD_TYPE"=:B2)
7 - filter(4105=:B1)
8 - filter("PAPS"."STATUS"='OPEN')
9 - access("PAPS"."AWARD_CODE"=:B1 AND "PAPS"."PARTICIPANT_UID"=4105 AND "PAPS"."AWARD_TYPE"=:B2)
10 - filter("CC_A_P_CURR"."FROM_CURRENCY_UID"= (SELECT /*+ */ "CURRENCY"."CURRENCY_UID" FROM
"EWAPDBO"."CURRENCY" "CURRENCY" WHERE "CURRENCY"."CURRENCY_ISO_CODE"=:B1))
11 - access("SYS_ALIAS_7"."AWARD_CODE"="A"."AWARD_CODE")
12 - access("SYS_ALIAS_7"."AWARD_CODE"="PVS"."AWARD_CODE"(+))
13 - access("SYS_ALIAS_8"."AWARD_CODE"="PALS"."AWARD_CODE" AND
"SYS_ALIAS_8"."AWARD_TYPE"="PALS"."AWARD_TYPE")
16 - filter(TRUNC("PAL1"."LEDGER_ENTRY_DATE")<=TRUNC(SYSDATE@!) AND "PAL1"."ALLOC_TYPE"='IPU')
17 - access("PAL1"."PARTICIPANT_UID"=4105)
filter("PAL1"."PARTICIPANT_UID"=4105)
18 - access("SYS_ALIAS_8"."AWARD_CODE"="PDES"."AWARD_CODE"(+) AND
"SYS_ALIAS_8"."AWARD_TYPE"="PDES"."AWARD_TYPE"(+))
19 - access("SYS_ALIAS_7"."AWARD_CODE"="SYS_ALIAS_8"."AWARD_CODE")
23 - access("CC_A_P_CURR"."TO_CURRENCY_UID"=1 AND "CC_A_P_CURR"."LATEST_FLAG"='Y')
27 - access("PAV"."PARTICIPANT_UID"=4105)
filter("PAV"."PARTICIPANT_UID"=4105)
29 - access("SYS_ALIAS_7"."AWARD_CODE"="SYS_ALIAS_9"."AWARD_CODE" AND
"SYS_ALIAS_7"."PARTICIPANT_UID"=4105)
30 - filter("SYS_ALIAS_8"."PARTICIPANT_UID"=4105)
33 - filter("CC"."LATEST_FLAG"='Y')
37 - access("PDE"."PARTICIPANT_UID"=4105)
filter("PDE"."PARTICIPANT_UID"=4105)
39 - access("OFF_CURR"."CURRENCY_ISO_CODE"="PDE"."OFFERING_CURRENCY_ISO_CODE")
40 - access("CC"."FROM_CURRENCY_UID"="OFF_CURR"."CURRENCY_UID" AND "CC"."TO_CURRENCY_UID"=1)
43 - filter("PV"."VESTING_DATE"<=SYSDATE@!)
44 - access("PV"."PARTICIPANT_UID"=4105)
filter("PV"."PARTICIPANT_UID"=4105)
47 - access("CURRENCY"."CURRENCY_ISO_CODE"=:B1)
Note: cpu costing is off
93 rows selected.
Please help me out...
-Sankar -
Why is this query not using the index?
check out this query:-
SELECT CUST_PO_NUMBER, HEADER_ID, ORDER_TYPE, PO_DATE
FROM TABLE1
WHERE STATUS = 'N'
and here's the explain plan:-
-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2735K| 140M| 81036 (2)|
|* 1 | TABLE ACCESS FULL| TABLE1 | 2735K| 140M| 81036 (2)|
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("STATUS"='N')
There is already an index on this column, as is shown below:-
INDEX_NAME INDEX_TYPE UNIQUENESS TABLE_NAME COLUMN_NAME COLUMN_POSITION
1 TABLE1_IDX2 NORMAL NONUNIQUE TABLE1 STATUS 1
2 TABLE1_IDX NORMAL NONUNIQUE TABLE1 HEADER_ID 1
So why is this query not using the index on the 'STATUS' Column?
I've already tried using optimizer hints and regathering the stats on the table, but the execution plan still remains the same, i.e. it still uses a FTS.
I have tried this command also:-
exec dbms_stats.gather_table_stats('GECS','GEPS_CS_SALES_ORDER_HEADER',method_opt=>'for all indexed columns size auto',cascade=>true,degree=>4);
inspite of this, the query is still using a full table scan.
The table has around 55 lakh (5.5 million) records, across 60 columns. And because of the FTS, the query is taking a long time to execute. How do I make it use the index?
Please help.
Edited by: user10047779 on Mar 16, 2010 6:55 AM
If the cardinality is really as skewed as that, you may want to look at putting a histogram on the column (sounds like it would be in order, and that you don't have one).
create table skewed_a_lot
as
select
case when mod(level, 1000) = 0 then 'N' else 'Y' end as Flag,
level as col1
from dual connect by level <= 1000000;
create index skewed_a_lot_i01 on skewed_a_lot (flag);
exec dbms_stats.gather_table_stats(user, 'SKEWED_A_LOT', cascade => true, method_opt => 'for all indexed columns size auto');
The above is an example.
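Continuing the worked example above, the effect can be verified like this (a sketch; note that SIZE AUTO only builds a histogram once column usage has been recorded, so 'for all indexed columns size 254' or 'size skewonly' may be needed to force one):

```sql
-- After gathering stats with a histogram on FLAG, the optimizer should
-- pick the index for the rare value 'N' but a full scan for 'Y'.
EXPLAIN PLAN FOR
  SELECT * FROM skewed_a_lot WHERE flag = 'N';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

One caveat: this only helps when the query uses a literal (or bind peeking sees a representative value); with a shared cursor and bind variables the same plan may still be reused for both values.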
Improving a simple select query, which uses all rows.
Hi All,
Please excuse me if the question is too silly. Below is my code
SQL> select * from v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
Elapsed: 00:00:00.07
SQL> show parameter optim
NAME TYPE VALUE
object_cache_optimal_size integer 102400
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.1
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
plsql_optimize_level integer 2
SQL> explain plan for select SUM(decode(transaction_type,'D',txn_amount,0)) payments_reversals,
2 SUM(decode(transaction_type,'C',txn_amount,0)) payments,primary_card_no,statement_date
3 from credit_card_pymt_dtls group by primary_card_no,statement_date;
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 2801218574
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1912K| 56M| | 21466 (3)| 00:04:18 |
| 1 | SORT GROUP BY | | 1912K| 56M| 161M| 21466 (3)| 00:04:18 |
| 2 | TABLE ACCESS FULL| CREDIT_CARD_PYMT_DTLS | 1912K| 56M| | 4863 (3)| 00:00:59 |
9 rows selected.
SQL> select index_name,index_type
2 from all_indexes
3 where table_name = 'CREDIT_CARD_PYMT_DTLS';
INDEX_NAME INDEX_TYPE
INDX_TRANTYPE BITMAP
INDX_PCARD NORMAL
INDX_PSTATEMENT_DATE NORMAL

The query is using all the records in the CREDIT_CARD_PYMT_DTLS table. Transaction type will be either 'C' or 'D'.
CREDIT_CARD_PYMT_DTLS has 2 million rows and the query will output 1.5 million rows. Table statistics are up to date.
The query now takes almost 5 minutes. Is there any way to reduce the time?
Our DB server has 8 CPUs and 8 GB memory. Is the timing genuine?
Thanks in Advance.
Edited by: user11115924 on Apr 29, 2009 2:43 AM
All the columns used in the query are already indexed. (Of course, not only for this query.)
Hi All,
Thanks for the helps provided. Expecting it once more..
My actual query is as below
select primary_card_no,base_segment_number,atab.previous_balance,current_balance,intrest_amt_due_this_cycle,total_min_amt_due,total_credit_limit,
total_purchase_this_cycle,total_cash_trns_this_cycle,available_credit_limit,payments,utilization,payment_ratio,payments_reversals,cash_limit,
available_cash_limit, description
from
( select primary_card_no,DECODE(base_segment_number,NULL,primary_card_no,base_segment_number) base_segment_number,
SUM(previous_balance) previous_balance,SUM(current_balance) current_balance ,SUM(intrest_amt_due_this_cycle) intrest_amt_due_this_cycle,
SUM(total_min_amt_due) total_min_amt_due,SUM(total_credit_limit_all) total_credit_limit,
SUM(total_purchase_this_cycle) total_purchase_this_cycle,SUM(total_cash_trns_this_cycle) total_cash_trns_this_cycle,
SUM(available_credit_limit) available_credit_limit,SUM(payments) payments,
(SUM(NVL(current_balance,0)) / SUM(total_credit_limit_all)) * 100 utilization,
(SUM(NVL(payments,0)) / DECODE(SUM(previous_balance),0,NULL,SUM(previous_balance))) * 100 payment_ratio,
SUM(payments_reversals) payments_reversals,SUM(cash_limit) cash_limit,SUM(available_cash_limit) available_cash_limit
from
( select a.*,NVL(payments_reversals,0)payments_reversals ,NVL(payments,0) payments
from
( select primary_card_no,previous_balance,current_balance,intrest_amt_due_this_cycle,total_min_amt_due,total_purchase_this_cycle,
total_cash_trns_this_cycle,statement_date,available_credit_limit,cash_limit,available_cash_limit,
(case when statement_date <= TO_DATE('301108','ddmmyy') then NULLIF(total_credit_limit,0)
else NULLIF((select credit_limit
from ccm_dbf_chtxn_v0 t1
where t1.batch_id = '011208'
and SUBSTR(t1.card_number,4) = a.primary_card_no),0)
end) total_credit_limit_all
from
( select primary_card_no,previous_balance,current_balance,INTREST_AMT_DUE_THIS_CYCLE,
TOTAL_MIN_AMT_DUE,TOTAL_PURCHASE_THIS_CYCLE,TOTAL_CASH_TRNS_THIS_CYCLE,statement_date,
AVAILABLE_CREDIT_LIMIT,cash_limit,available_cash_limit,total_credit_limit
from credit_card_master_all@FGBAPPL_LINK
) a
where statement_date between ADD_MONTHS(TRUNC(SYSDATE,'mm'),-6) and TRUNC(SYSDATE,'mm')-1
) a,
( select SUM(decode(transaction_type,'D',txn_amount,0)) payments_reversals,
SUM(decode(transaction_type,'C',txn_amount,0)) payments,primary_card_no,TO_CHAR(statement_date,'MON-RRRR') sdate
from credit_card_pymt_dtls
group by primary_card_no,TO_CHAR(statement_date,'MON-RRRR')
) b
where TO_CHAR(a.statement_date,'MON-RRRR')= b.sdate(+)
and a.primary_card_no= b.primary_card_no(+)
) a,
( select SUBSTR(a.card_number,4) card_number,base_segment_number,TO_DATE(account_creation_date,'DDMMYYYY') account_creation_date,
a.batch_id, credit_limit credit_limit_current
from
( select *
from ccm_dbf_phtxn_v0
where batch_id= (SELECT to_char(MAX(TO_DATE(SUBSTR(BATCH_ID,1,6),'DDMMRR')),'DDMMRR') FROM CCM_MST_V0)
) a,
( select *
from ccm_dbf_chtxn_v0
where batch_id=(SELECT to_char(MAX(TO_DATE(SUBSTR(BATCH_ID,1,6),'DDMMRR')),'DDMMRR') FROM CCM_MST_V0)
) b
where a.card_number=b.card_number
and TO_NUMBER(ROUND(MONTHS_BETWEEN(SYSDATE,TO_DATE(account_creation_date,'DDMMYYYY')),2)) >=6
and a.company ='BNK'
) b
where a.primary_card_no = b.card_number
group by primary_card_no,base_segment_number) atab, card_summary_param btab
where utilization between utilization_low and utilization_high
and payment_ratio between payment_ratio_low and payment_ratio_high
and SIGN(atab.previous_balance) = btab.previous_balance

Where do I have to put the PARALLEL hint for maximum performance?
Sorry for asking blindly without doing any R&D. Time is not permitting that...
Edited by: user11115924 on Apr 29, 2009 5:09 AM
Sorry for the kiddy aliases.. Query is not written by me.. -
Tuning query without using hint
Hi,
I want to change the plan of a query without using a hint.
Also, I want to use the plan that the hint generates in my original query.
My db is 11g.
How can i do this?
tnx
You can use SQL Plan Management. You might find this interesting:
http://blogs.oracle.com/optimizer/entry/what_should_i_do_with_old_hints_in_my_workload
The link above basically suggests these steps:
1. Run the query with the hints, then
2. Take the plan from #1 and associate its SQL plan baseline with the query with no hints
3. Remove the hints for that query in the code and start capturing and evolving plans for the un-hinted query -
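A hedged sketch of those steps with DBMS_SPM (the sql_ids, plan hash value, and sql_handle are placeholders to look up in V$SQL and DBA_SQL_PLAN_BASELINES after running both versions of the query once):

```sql
DECLARE
  n PLS_INTEGER;
BEGIN
  -- Step 2a: create a baseline for the UN-hinted statement as it stands.
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
         sql_id => '&unhinted_sql_id');

  -- Step 2b: attach the HINTED cursor's plan to that baseline, so the
  -- un-hinted SQL text is matched to the hinted plan.
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
         sql_id          => '&hinted_sql_id',
         plan_hash_value => &hinted_plan_hash,
         sql_handle      => '&unhinted_sql_handle');
END;
/
```

After that, the hints can be removed from the code (step 3) and SPM keeps capturing and evolving plans for the un-hinted query.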
How to use Hierarchy for Planning?
Hi everybody,
I have created a query, which is input ready.
This query uses a hierarchy which contains different positions being used for cashflow-planning.
My Problem is that cashflow not only consists of positions that have to be added, but also positions, which have to be subtracted/discounted.
I don't know whether it is possible to put my hierarchy into a structure and work with functions (because the function is not able to select child-nodes dynamically, I guess).
That's the reason for me asking:
Can I just instruct my hierarchy not to add every node, but also to subtract some of them?
I will be very thankful for every hint being given.
Regards,
Martin
(p.s.: I'm sorry for my bad English. I'm out of practice.)
Hi Martin,
welcome to sdn.
follow this link regarding your query...
http://help.sap.com/saphelp_nw04s/helpdata/en/f8/05d13fa69a4921e10000000a1550b0/frameset.htm
assign points if it helps. -
Create a print button in webtemplate used for integrated planning - BI 7.0
Heelo Guys,
Can you please let me know how to add a print button in the web template which has an input query and is used for integrated planning? The user wants to print the report. I haven't worked with web templates and thought of asking whether there is any button or standard SAP functionality I can use for printing the report. Thanks in advance for your help. We are on BI 7.0.
Thanks
Senthil
Hello
What you are experiencing is not clear to me, but in general you have to install BI ABAP + BI Java to have the planning modeler working. Ensure that BI Diagnostics is set up (note 937697), and also ensure that you have set up your system according to notes
919850 and 947376.
I hope this helps.
BR
Lucimar
Edited by: Lucimar Moresco on Oct 14, 2010 3:58 PM -
Querying RESOURCE_VIEW using XQuery
Hi all:
Does anybody know the best way to query RESOURCE_VIEW using XQuery to show directory listing information?
For example, I want to produce an XML Document with something like this using XQuery:
<directoryListing>
<dir anyPath="/public/JURRICULUM/cms/en">
<DisplayName xmlns="http://xmlns.oracle.com/xdb/XDBResource.xsd">en</DisplayName>
</dir>
<dir anyPath="/public/JURRICULUM/cms/en/live">
<DisplayName xmlns="http://xmlns.oracle.com/xdb/XDBResource.xsd">live</DisplayName>
</dir>
<dir anyPath="/public/JURRICULUM/cms/es">
<DisplayName xmlns="http://xmlns.oracle.com/xdb/XDBResource.xsd">es</DisplayName>
</dir>
<dir anyPath="/public/JURRICULUM/cms/es/live">
<DisplayName xmlns="http://xmlns.oracle.com/xdb/XDBResource.xsd">live</DisplayName>
</dir>
</directoryListing>
I made this result by executing this query:
SELECT XMLQuery('declare namespace res = "http://xmlns.oracle.com/xdb/XDBResource.xsd";
<directoryListing>
{for $i in $directoryListing/dir
where $i/res:Resource/@Container="true"
return
<dir anyPath="{$i/@ANY_PATH}">
{$i/res:Resource/res:DisplayName}
</dir>}
</directoryListing>'
PASSING (
select XMLAgg(XMLElement("dir",XMLATTRIBUTES(any_path,resid),res))
from resource_view where
under_path(res,'/public/JURRICULUM/cms')=1
) as "directoryListing" RETURNING CONTENT).getStringVal()
FROM dual
I had injected resource_view's content as an argument.
Another way is to use ora:view() extension function, but I can't use under_path functionality for example.
Is there some extension function like doc() or collection() that, instead of returning the content of the document, returns the RESOURCE_VIEW information associated with the URI?
Is the above query optimal in terms of execution plan?
I tested it with JDeveloper and it shows an execution plan similar to the query on resource_view alone.
Best regards, Marcelo.
Hi all:
I had implemented an XQuery extension library ready to run inside the Oracle JVM, but it can run outside as well.
The code is on the XQuery forums:
How to write an XQuery Extension library
I tested outside the database and the result is:
/usr/java/jdk1.5.0_04/bin/java -hotspot -classpath ... com.prism.cms.xquery.Application1 /public/PCT_ADMIN/cms/es/3-AcercaParque/ 7934
testXQL elapsed time: 3094
testXQ elapsed time: 2164
Running as Java Stored Procedure, it looks like this:
SQL> exec testXQ('/public/PCT_ADMIN/cms/es/3-AcercaParque/','7934')
testXQL elapsed time: 943
testXQ elapsed time: 854
PL/SQL procedure successfully completed.
Obviously, running as a Java stored procedure is around 3.5 times faster than a regular application.
Injecting the resource_view content as an argument instead of using an XQuery extension library seems to perform about the same (943 ~ 854), so I'll use the extension library mechanism to keep the code clear.
Best regards, Marcelo -
Optimizer not using correct execution plan
Hi ,
DB version : 11.2.0.3
My SQL query ran last month in 1 hour, but today the same query has been running for four hours. It looks like the optimizer is not using the correct execution plan. I used the tuning advisor and applied the recommended SQL profile, and query execution is back to normal. I can see statistics are up to date for the tables. What other factors could explain why the optimizer is not choosing the correct execution plan?
Thanks.
What is the correct plan according to you? Multiple factors can cause the optimizer to choose a different plan. As a rudimentary example: an indexed column turning out to have lower cardinality than expected after new data has been inserted. Never expect your query to keep the same execution plan for its entire lifetime, unless the underlying data never changes and nobody changes database settings.
You have to give a lot of information if you are looking for performance tuning. Pls see following thread
https://forums.oracle.com/message/9362003#9362003 -
Is there any Utility industry company using PP-production planning & exec
Dear All,
Is there any Utility industry company using the PP (Production Planning & Execution) module and the CO-PC (Product Costing) module?
Meaning: any power generation/transmission/distribution or water production/transmission/distribution company using SAP PP or APO for their operational business processes.
Our client is actually an integrated Utility having very much integrated product lines like Processed Sea water for cooling, Process water for Industries, Potable water & Power for the communities/ industries.
Please let me know, as we are planning to implement PP and CO-PC for product costing.
As I know, Tata Chemicals, Gujarat, India is using the SAP PP module for their captive power generation (I have worked there as an SAP CO consultant). But there is no transmission and distribution as such.
But the production processes are covered by PP, product costing (CO-PC) is also implemented, and it gives a very accurate cost per unit both for plan and actual.
regards,
George
Hello George,
I would not be able to provide you much detail about the PP solution, as I was taking care of the controlling part. The BOMs of power and water were defined as you have mentioned; the only thing we had additionally was that gas was also included in the BOM, since we had 6 gas turbines. But all of them were marked as not relevant for costing, so there was no cost being calculated from the material side. We had a formula planning template (CPT1) that calculated the costs under the following heads, which were defined as activity types.
a) Gas
b) Employee Costs
c) Consumables / Supplies
d) Operating Expenses
e) Depreciation
These components were planned at an initial price in KP26, and the quantity for them (except gas) was the units of power generated (for the cost component it was 1 MW; for actuals it was based on the quantity in the product cost collector). A power cost estimate was created at the beginning of each period with the help of this template, and the per-MW cost was used to valuate the power generation in each of the product cost collectors (the turbines were defined as separate work centers as well as product cost collectors).
In the actual execution, the power material was generated in all the product cost collectors and the usage of gas was confirmed. The template was executed at month end (CPTA) and allocated the costs posted in the power generation cost center to the product cost collectors (the amount being the power units generated multiplied by the KP26 rate for employee costs, depreciation, etc., and for gas the gas units consumed multiplied by the KP26 rate for the gas activity). The power material generated was then issued to the generation cost center, along with the usage of gas for power.
The next step was execution of the splitting transaction, which allocated the amount posted in the cost center to the various activities; then actual price calculation was carried out for the activities, and the product cost collectors were revalued with the actual price. Upon settlement of the product cost collector to the generation cost center, you would have the power material issue cost + settlement account.
The amounts posted in these two accounts were transferred to the water cost center using an assessment cycle with an SKF recording the usage of water in power, and another assessment cycle taking the costs in the water cost center with an SKF quantifying the usage of water in power. Then the power material cost + settlement amount - sent to water + received from water was sent to another cost center called power distribution, and from there the amounts, including any distribution costs booked therein, were transferred to the other user cost centers using usage units defined as SKFs.
In hindsight, the water allocation could have been done before settlement, and an activity type could have been defined for that purpose.
That is as much as I can remember at the moment, and I would be happy to answer any further queries you may have. It was a good design, though it had a lot of scope for fine tuning.
Kind Regards // Shaubhik -
How can I know if my query is using the index?
Hello...
How can I know whether my query is using the index of the table or not?
I'm using SET AUTOTRACE ON... but is there another way to do it?
Thanks!
Alessandro Falanque
Hi,
You can use EXPLAIN PLAN to check whether your query is using the proper index or not. First check that PLAN_TABLE exists in your schema; if it does not, you can create it by running $ORACLE_HOME/rdbms/admin/utlxplan.sql, or with DDL like this:
CREATE TABLE PLAN_TABLE (
STATEMENT_ID VARCHAR2 (30),
TIMESTAMP DATE,
REMARKS VARCHAR2 (80),
OPERATION VARCHAR2 (30),
OPTIONS VARCHAR2 (30),
OBJECT_NODE VARCHAR2 (128),
OBJECT_OWNER VARCHAR2 (30),
OBJECT_NAME VARCHAR2 (30),
OBJECT_INSTANCE NUMBER,
OBJECT_TYPE VARCHAR2 (30),
OPTIMIZER VARCHAR2 (255),
SEARCH_COLUMNS NUMBER,
ID NUMBER,
PARENT_ID NUMBER,
POSITION NUMBER,
COST NUMBER,
CARDINALITY NUMBER,
BYTES NUMBER,
OTHER_TAG VARCHAR2 (255),
PARTITION_START VARCHAR2 (255),
PARTITION_STOP VARCHAR2 (255),
PARTITION_ID NUMBER,
OTHER LONG,
DISTRIBUTION VARCHAR2 (30))
TABLESPACE SYSTEM NOLOGGING
PCTFREE 10
PCTUSED 40
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 10240
NEXT 10240
PCTINCREASE 50
MINEXTENTS 1
MAXEXTENTS 121
FREELISTS 1 FREELIST GROUPS 1 )
NOCACHE;
After that, run the following at the SQL prompt:
EXPLAIN PLAN FOR <your SELECT statement>;
Select level, SubStr( lpad(' ',2*(Level-1)) || operation || ' ' ||
object_name || ' ' || options || ' ' ||
decode(id, null , ' ', decode(position, null,' ', 'Cost = ' || position) ),1,100)
|| ' ' || nvl(other_tag, ' ') Operation
from PLAN_TABLE
start with id = 0
connect by
prior id = parent_id;
This will show how the query is executed and which indexes, if any, it uses.
Cheers.
Samujjwal Basu -
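As an addendum to the reply above: on Oracle 9i and later the same check is simpler with the DBMS_XPLAN package, which formats the plan for you. A minimal sketch, assuming the default PLAN_TABLE exists (EMP is just an example table):

```sql
-- Populate PLAN_TABLE for the statement you want to check
EXPLAIN PLAN FOR
SELECT * FROM emp WHERE empno = 7839;

-- Pretty-print the most recent plan; look for INDEX UNIQUE SCAN /
-- INDEX RANGE SCAN lines to confirm which index, if any, is used
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```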
How to write a hint to force a SELECT statement to use an index
Hello expert,
Will you please tell me how to write a hint that forces a SELECT statement to use an index?
Many Thanks,
Not sure what you mean by "compile", but a hint is enclosed in /*+ hint */. The index hint is INDEX(table_name,index_name). For example:
SQL> explain plan for
2 select * from emp
3 /
Explained.
SQL> @?\rdbms\admin\utlxpls
PLAN_TABLE_OUTPUT
Plan hash value: 3956160932
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 14 | 546 | 3 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| EMP | 14 | 546 | 3 (0)| 00:00:01 |
8 rows selected.
SQL> explain plan for
2 select /*+ index(emp,pk_emp) */ *
3 from emp
4 /
Explained.
SQL> @?\rdbms\admin\utlxpls
PLAN_TABLE_OUTPUT
Plan hash value: 4170700152
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 14 | 546 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| EMP | 14 | 546 | 2 (0)| 00:00:01 |
| 2 | INDEX FULL SCAN | PK_EMP | 14 | | 1 (0)| 00:00:01 |
9 rows selected.
SQL>
The hint in the above example forces the optimizer to use an index, which here results in a bad execution plan. Most of the time the optimizer does not need hints and chooses an optimal plan; in most cases a sub-optimal plan is the result of stale or incomplete statistics.
SY.
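Since the reply above points at stale statistics as the usual culprit, a minimal sketch of refreshing them with DBMS_STATS (assuming a table named EMP in your own schema; run from SQL*Plus):

```sql
-- Gather fresh optimizer statistics for one table; re-run EXPLAIN PLAN
-- afterwards to see whether the plan changes without any hint
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'EMP');
```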