SQL query performance tuning
Hi all,
I am facing a problem extracting data and inserting it into another table. I need to fetch the part numbers from all_parts_already, and based on the result, extract the places from all_parts and insert them into all_parts_already.
Sample data in the tables:
table all_parts_already:
part part_desc technique company place
1 A Engine TVS B1
1 Av Engine TVS B2
1 Ab Engine TVS B3
2 Ah Engine TVS B3
2 Ap Engine TVS B2
table all_parts:
technique company place
Engine TVS B1
Kim TVS B2
Engine TVS B3
Engine TVS B4
XXXXX TVS B5
Engine TVS B6
for c1 in (select distinct part
             from all_parts_already
            where technique = 'Engine'
              and company = 'TVS') loop
  for c2 in (select distinct place
               from all_parts
              where technique = 'Engine'
                and company = 'TVS'
             minus
             select distinct place
               from all_parts_already
              where technique = 'Engine'
                and company = 'TVS'
                and part = c1.part) loop
    insert into all_parts_already
      select c2.place, place_desc, c1.part, c2.place
        from place_master
       where part = c1.part
         and place = c2.place;
  end loop;
end loop;
The data I am dealing with runs into millions of rows. One technique may have 1,000 parts, and one part may have 500 places, so the loops run that many times, causing the delay.
Please tell me how to move forward. I am getting the output I need, but it takes far too long (it runs for days).
Thanks a lot
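For reference, the loop logic above can usually be collapsed into a single set-based statement. The following is only a sketch, assuming the part/place column names from the sample data and the place_master table referenced in the original insert:

```sql
-- Sketch: every (part, place) combination that does not yet exist
-- in all_parts_already, built once instead of via nested loops.
insert into all_parts_already
select pm.place, pm.place_desc, m.part, pm.place
  from (
        select p.part, q.place
          from (select distinct part
                  from all_parts_already
                 where technique = 'Engine' and company = 'TVS') p
         cross join
               (select distinct place
                  from all_parts
                 where technique = 'Engine' and company = 'TVS') q
        minus
        select part, place
          from all_parts_already
         where technique = 'Engine' and company = 'TVS'
       ) m
  join place_master pm
    on pm.part  = m.part
   and pm.place = m.place;
```

A single INSERT ... SELECT like this lets the optimizer choose hash joins over millions of rows instead of executing one small query per part per place.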
RESULT TABLE AS BELOW
part place machine country
P2 C3 M1 I1
P2 C4 M1 I1
P4 C1 M1 I1
P4 C2 M1 I1
P4 C4 M1 I1
P3 C1 M1 I1
P3 C2 M1 I1
P3 C3 M1 I1
I don't get the relationship to country.
How do you determine what the missing country is?
You say on all of the above the I1 is missing and yet in table 'a' parts can have different countries?
That point aside....
This probably isn't the simplest way to do it, but maybe this is moving in the right direction:
SQL> with b as
  (select 'c1' place, 'm1' machine from dual union all
   select 'c2' place, 'm1' machine from dual union all
   select 'c3' place, 'm1' machine from dual union all
   select 'c4' place, 'm1' machine from dual)
, a as
  (select 'p1' part, 'c1' place, 'm1' machine, 'i1' country from dual union all
   select 'p1' part, 'c2' place, 'm1' machine, 'i2' country from dual union all
   select 'p1' part, 'c3' place, 'm1' machine, 'i3' country from dual union all
   select 'p1' part, 'c4' place, 'm1' machine, 'i1' country from dual union all
   select 'p2' part, 'c1' place, 'm1' machine, 'i1' country from dual union all
   select 'p2' part, 'c2' place, 'm1' machine, 'i2' country from dual union all
   select 'p4' part, 'c3' place, 'm1' machine, 'i3' country from dual union all
   select 'p3' part, 'c4' place, 'm1' machine, 'i1' country from dual)
, x as
  (select distinct part, machine
   from a)
, all_parts as
  (select b.place, x.part, x.machine
   from x
   , b
   where b.machine = x.machine)
select *
from all_parts ap
, a aa
where aa.place (+) = ap.place
and aa.part (+) = ap.part
and aa.machine (+) = ap.machine
--and aa.place is null
;
PL PA MA PA PL MA CO
c1 p1 m1 p1 c1 m1 i1
c2 p1 m1 p1 c2 m1 i2
c3 p1 m1 p1 c3 m1 i3
c4 p1 m1 p1 c4 m1 i1
c1 p2 m1 p2 c1 m1 i1
c2 p2 m1 p2 c2 m1 i2
c3 p4 m1 p4 c3 m1 i3
c4 p3 m1 p3 c4 m1 i1
c2 p4 m1
c3 p2 m1
c3 p3 m1
c2 p3 m1
c4 p4 m1
c4 p2 m1
c1 p4 m1
c1 p3 m1
16 rows selected.
SQL>
Just uncomment the line "and aa.place is null" to get the missing rows.
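The same "missing rows" check can also be written as an anti-join without the outer-join/IS NULL trick. This is just a sketch using the same b, a, x and all_parts factored subqueries as in the session above:

```sql
-- Sketch: NOT EXISTS anti-join, equivalent to the outer join
-- with the "aa.place is null" filter uncommented.
select ap.place, ap.part, ap.machine
  from all_parts ap
 where not exists
       (select null
          from a aa
         where aa.place   = ap.place
           and aa.part    = ap.part
           and aa.machine = ap.machine);
```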
Edited by: Dom Brooks on Nov 10, 2011 12:09 PM
Similar Messages
-
Reg: Process Chain, query performance tuning steps
Hi All,
I came across a question like this: there is a process chain of 20 processes, of which 5 completed; at the 6th step an error occurred and it cannot be rectified, and I should start the chain again from the 7th step. If I go to a particular step I can run that particular step, but how can I restart the entire chain from step 7? I know I need to use a function module, but I don't know the name of the FM. Please could somebody help me out.
Please let me know the steps involved in query performance tuning and aggregate tuning.
Thanks & Regards
Omkar.K
Hi,
Process Chain
Method 1 (when it fails in a step/request)
/people/siegfried.szameitat/blog/2006/02/26/restarting-processchains
How is it possible to restart a process chain at a failed step/request?
Sometimes, it doesn't help to just set a request to green status in order to run the process chain from that step on to the end.
You need to set the failed request/step to green in the database as well as you need to raise the event that will force the process chain to run to the end from the next request/step on.
Therefore you need to open the messages of a failed step by right clicking on it and selecting 'display messages'.
In the opened popup click on the tab 'Chain'.
In a parallel session goto transaction se16 for table rspcprocesslog and display the entries with the following selections:
1. copy the variant from the popup to the variante of table rspcprocesslog
2. copy the instance from the popup to the instance of table rspcprocesslog
3. copy the start date from the popup to the batchdate of table rspcprocesslog
Press F8 to display the entries of table rspcprocesslog.
Now open another session and goto transaction se37. Enter RSPC_PROCESS_FINISH as the name of the function module and run the fm in test mode.
Now copy the entries of table rspcprocesslog to the input parameters of the function module like described as follows:
1. rspcprocesslog-log_id -> i_logid
2. rspcprocesslog-type -> i_type
3. rspcprocesslog-variante -> i_variant
4. rspcprocesslog-instance -> i_instance
5. enter 'G' for parameter i_state (sets the status to green).
Now press F8 to run the fm.
Now the actual process will be set to green and the following process in the chain will be started and the chain can run to the end.
Of course you can also set the state of a specific step in the chain to any other possible value like 'R' = ended with errors, 'F' = finished, 'X' = cancelled ....
Check out the value help on field rspcprocesslog-state in transaction se16 for the possible values.
Query performance tuning
General tips
Using aggregates and compression.
Using fewer and less complex cell definitions if possible.
1. Avoid using too many navigational attributes.
2. Avoid restricted and calculated key figures (RKFs and CKFs).
3. Avoid too many characteristics in the rows.
By using T-codes ST03 or ST03N
Go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution for a particular day, check the query execution time.
/people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
/people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
Try table rsddstats to get the statistics
Using cache memory will decrease the loading time of the report.
Run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
Also try
1. Use different parameters in ST03 to see the two important figures: the aggregation ratio, and the ratio of records transferred to the frontend versus records selected from the DB.
2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
It will show the dimension-to-fact-table size in percent. If you take the speed of queries on a cube as the performance metric of the cube, measure query runtime.
3. The +/- signs are the valuation of an aggregate's design and usage (for example, -3 is a poor valuation). The more plus signs, the better the evaluation: compression is good, the aggregate is accessed often, and it satisfies many queries. The more minus signs, the worse the evaluation: the compression ratio is not good and access is infrequent, so performance benefit is poor.
If the valuation is "-----", the aggregate is just overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
Refer.
http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
Your query performance can also depend on the selection criteria; since you have given a selection on only one InfoProvider, check whether you are selecting a huge amount of data in the report.
Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
5. In BI 7 statistics need to be activated for ST03 and BI admin cockpit to work.
By implementing the BW Statistics Business Content: you need to install it, feed it data, and then use the ready-made reports for analysis.
http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
/people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
You can go to T-Code DB20 which gives you all the performance related information like
Partitions
Databases
Schemas
Buffer Pools
Tablespaces etc
Use the program RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
If aggregates contain incorrect data, you must regenerate them.
Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
Thanks,
JituK -
Oracle query performance tuning
Hi
I am doing Oracle programming and would like to learn query performance tuning.
Could you guide me on how I could learn this online and which books to refer to?
Thank you
I would recommend purchasing a copy of Cary Millsap's book now:
http://www.amazon.com/Optimizing-Oracle-Performance-Cary-Millsap/dp/059600527X/ref=sr_1_1?ie=UTF8&qid=1248985270&sr=8-1
And Jonathan Lewis' when you feel you are at a slightly more advanced level.
http://www.amazon.com/Cost-Based-Oracle-Fundamentals-Experts-Voice/dp/1590596366/ref=pd_sim_b_2
Both belong in everyone's bookcase. -
SQL query performance issues.
Hi All,
I worked on the query a month ago and the fix worked for me in a test instance but failed in production. Following is the URL of the previous thread.
SQL query performance issues.
Following is the tkprof file.
CURSOR_ID:76 LENGTH:2383 ADDRESS:f6b40ab0 HASH_VALUE:2459471753 OPTIMIZER_GOAL:ALL_ROWS USER_ID:443 (APPS)
insert into cos_temp(
TRX_DATE, DEPT, PRODUCT_LINE, PART_NUMBER,
CUSTOMER_NUMBER, QUANTITY_SOLD, ORDER_NUMBER,
INVOICE_NUMBER, EXT_SALES, EXT_COS,
GROSS_PROFIT, ACCT_DATE,
SHIPMENT_TYPE,
FROM_ORGANIZATION_ID,
FROM_ORGANIZATION_CODE)
select a.trx_date,
g.segment5 dept,
g.segment4 prd,
m.segment1 part,
d.customer_number customer,
b.quantity_invoiced units,
-- substr(a.sales_order,1,6) order#,
substr(ltrim(b.interface_line_attribute1),1,10) order#,
a.trx_number invoice,
(b.quantity_invoiced * b.unit_selling_price) sales,
(b.quantity_invoiced * nvl(price.operand,0)) cos,
(b.quantity_invoiced * b.unit_selling_price) -
(b.quantity_invoiced * nvl(price.operand,0)) profit,
to_char(to_date('2010/02/28 00:00:00','yyyy/mm/dd HH24:MI:SS'),'DD-MON-RR') acct_date,
'DRP',
l.ship_from_org_id,
p.organization_code
from ra_customers d,
gl_code_combinations g,
mtl_system_items m,
ra_cust_trx_line_gl_dist c,
ra_customer_trx_lines b,
ra_customer_trx_all a,
apps.oe_order_lines l,
apps.HR_ORGANIZATION_INFORMATION i,
apps.MTL_INTERCOMPANY_PARAMETERS inter,
apps.HZ_CUST_SITE_USES_ALL site,
apps.qp_list_lines_v price,
apps.mtl_parameters p
where a.trx_date between to_date('2010/02/01 00:00:00','yyyy/mm/dd HH24:MI:SS')
and to_date('2010/02/28 00:00:00','yyyy/mm/dd HH24:MI:SS')+0.9999
and a.batch_source_id = 1001 -- Sales order shipped other OU
and a.complete_flag = 'Y'
and a.customer_trx_id = b.customer_trx_id
and b.customer_trx_line_id = c.customer_trx_line_id
and a.sold_to_customer_id = d.customer_id
and b.inventory_item_id = m.inventory_item_id
and m.organization_id
= decode(substr(g.segment4,1,2),'01',5004,'03',5004,
'02',5003,'00',5001,5002)
and nvl(m.item_type,'0') <> '111'
and c.code_combination_id = g.code_combination_id+0
and l.line_id = b.interface_line_attribute6
and i.organization_id = l.ship_from_org_id
and p.organization_id = l.ship_from_org_id
and i.org_information3 <> '5108'
and inter.ship_organization_id = i.org_information3
and inter.sell_organization_id = '5108'
and inter.customer_site_id = site.site_use_id
and site.price_list_id = price.list_header_id
and product_attr_value = to_char(m.inventory_item_id)
call count cpu elapsed disk query current rows misses
Parse 1 0.47 0.56 11 197 0 0 1
Execute 1 3733.40 3739.40 34893 519962154 11 188 0
total 2 3733.87 3739.97 34904 519962351 11 188 1
| Rows Row Source Operation
| ------------ ---------------------------------------------------
| 188 HASH JOIN (cr=519962149 pr=34889 pw=0 time=2607.35)
| 741 .TABLE ACCESS BY INDEX ROWID QP_PRICING_ATTRIBUTES (cr=519939426 pr=34889 pw=0 time=2457.32)
| 254644500 ..NESTED LOOPS (cr=519939265 pr=34777 pw=0 time=3819.67)
| 254643758 ...NESTED LOOPS (cr=8921833 pr=29939 pw=0 time=1274.41)
| 741 ....NESTED LOOPS (cr=50042 pr=7230 pw=0 time=11.37)
| 741 .....NESTED LOOPS (cr=48558 pr=7229 pw=0 time=11.35)
| 741 ......NESTED LOOPS (cr=47815 pr=7223 pw=0 time=11.32)
| 3237 .......NESTED LOOPS (cr=41339 pr=7223 pw=0 time=12.42)
| 3237 ........NESTED LOOPS (cr=38100 pr=7223 pw=0 time=12.39)
| 3237 .........NESTED LOOPS (cr=28296 pr=7139 pw=0 time=12.29)
| 1027 ..........NESTED LOOPS (cr=17656 pr=4471 pw=0 time=3.81)
| 1027 ...........NESTED LOOPS (cr=13537 pr=4404 pw=0 time=3.30)
| 486 ............NESTED LOOPS (cr=10873 pr=4240 pw=0 time=0.04)
| 486 .............NESTED LOOPS (cr=10385 pr=4240 pw=0 time=0.03)
| 486 ..............TABLE ACCESS BY INDEX ROWID RA_CUSTOMER_TRX_ALL (cr=9411 pr=4240 pw=0 time=0.02)
| 75253 ...............INDEX RANGE SCAN RA_CUSTOMER_TRX_N5 (cr=403 pr=285 pw=0 time=0.38)
| 486 ..............TABLE ACCESS BY INDEX ROWID HZ_CUST_ACCOUNTS (cr=974 pr=0 pw=0 time=0.01)
| 486 ...............INDEX UNIQUE SCAN HZ_CUST_ACCOUNTS_U1 (cr=488 pr=0 pw=0 time=0.01)
| 486 .............INDEX UNIQUE SCAN HZ_PARTIES_U1 (cr=488 pr=0 pw=0 time=0.01)
| 1027 ............TABLE ACCESS BY INDEX ROWID RA_CUSTOMER_TRX_LINES_ALL (cr=2664 pr=164 pw=0 time=1.95)
| 2063 .............INDEX RANGE SCAN RA_CUSTOMER_TRX_LINES_N2 (cr=1474 pr=28 pw=0 time=0.22)
| 1027 ...........TABLE ACCESS BY INDEX ROWID RA_CUST_TRX_LINE_GL_DIST_ALL (cr=4119 pr=67 pw=0 time=0.54)
| 1027 ............INDEX RANGE SCAN RA_CUST_TRX_LINE_GL_DIST_N1 (cr=3092 pr=31 pw=0 time=0.20)
| 3237 ..........TABLE ACCESS BY INDEX ROWID MTL_SYSTEM_ITEMS_B (cr=10640 pr=2668 pw=0 time=15.35)
| 3237 ...........INDEX RANGE SCAN MTL_SYSTEM_ITEMS_B_U1 (cr=2062 pr=40 pw=0 time=0.33)
| 3237 .........TABLE ACCESS BY INDEX ROWID OE_ORDER_LINES_ALL (cr=9804 pr=84 pw=0 time=0.77)
| 3237 ..........INDEX UNIQUE SCAN OE_ORDER_LINES_U1 (cr=6476 pr=47 pw=0 time=0.43)
| 3237 ........TABLE ACCESS BY INDEX ROWID MTL_PARAMETERS (cr=3239 pr=0 pw=0 time=0.04)
| 3237 .........INDEX UNIQUE SCAN MTL_PARAMETERS_U1 (cr=2 pr=0 pw=0 time=0.01)
| 741 .......TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION (cr=6476 pr=0 pw=0 time=0.10)
| 6474 ........INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK2 (cr=3239 pr=0 pw=0 time=0.03)
Please help.
Regards
Ashish
| 254644500 ..NESTED LOOPS (cr=519939265 pr=34777 pw=0 time=3819.67)
| 254643758 ...NESTED LOOPS (cr=8921833 pr=29939 pw=0 time=1274.41)
There is no way the optimizer should choose to process that many rows using nested loops.
Either the statistics are not up to date, the data values are skewed, or you have some optimizer parameter set to a non-default value to force index access.
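Refreshing statistics on the large tables and re-checking the plan is usually the first step. A sketch only; the owner/table names below are taken from the posted query, and the sampling choice is just an example:

```sql
-- Gather fresh optimizer statistics on one of the big tables
-- (repeat for the other tables in the join).
begin
  dbms_stats.gather_table_stats(
    ownname          => 'APPS',
    tabname          => 'OE_ORDER_LINES_ALL',
    cascade          => true,   -- also gather index statistics
    estimate_percent => dbms_stats.auto_sample_size);
end;
/

-- Then regenerate and display the execution plan.
explain plan for
select count(*) from apps.oe_order_lines;  -- substitute the real statement

select * from table(dbms_xplan.display);
```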
Please post explain plan and optimizer* parameter settings. -
How does Index fragmentation and statistics affect the sql query performance
Hi,
How does Index fragmentation and statistics affect the sql query performance
Thanks
Shashikala
Shashikala
How does index fragmentation and statistics affect SQL query performance?
Very simple answer: outdated statistics lead the optimizer to create bad plans, which in turn require more resources, and this impacts performance. If an index is fragmented (mainly the clustered index, though this holds true for nonclustered indexes as well), the time spent finding a value will be greater, as the query has to search the fragmented index to look for the data, and the additional empty space increases search time.
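On SQL Server, for example, both factors can be inspected and corrected along these lines. A sketch only; dbo.MyTable and IX_MyTable are placeholder names:

```sql
-- Check fragmentation for all indexes on a table.
SELECT i.name, s.avg_fragmentation_in_percent, s.page_count
FROM sys.dm_db_index_physical_stats(
       DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'LIMITED') AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id;

-- Common rule of thumb: reorganize light fragmentation,
-- rebuild heavy fragmentation.
ALTER INDEX IX_MyTable ON dbo.MyTable REORGANIZE;  -- roughly 5-30%
ALTER INDEX IX_MyTable ON dbo.MyTable REBUILD;     -- above ~30%

-- Refresh statistics so the optimizer costs plans correctly.
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;
```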
-
SQL Query Performance needed.
Hi All,
I am facing a performance issue with my SQL query below. When I run the whole statement it takes 823.438 seconds, but the subquery inside the IN clause alone takes 8.578 seconds, and the outer query alone takes 7.579 seconds.
SELECT BAL.L_ID, BAL.L_TYPE, BAL.L_NAME, BAL.NATURAL_ACCOUNT,
BAL.LOCATION, BAL.PRODUCT, BAL.INTERCOMPANY, BAL.FUTURE1, BAL.FUTURE2, BAL.CURRENCY, BAL.AMOUNT_PTD, BAL.AMOUNT_YTD, BAL.CREATION_DATE,
BAL.CREATED_BY, BAL.LAST_UPDATE_DATE, BAL.LAST_UPDATED_BY, BAL.STATUS, BAL.ANET_STATUS, BAL.COG_STATUS, BAL.comb_id, BAL.MESSAGE,
SEG.SEGMENT_DESCRIPTION FROM ACC_SEGMENTS_V_TST SEG , ACC_BALANCE_STG BAL where BAL.NATURAL_ACCOUNT = SEG.SEGMENT_VALUE AND SEG.SEGMENT_COLUMN = 'SEGMENT99' AND BAL.ACCOUNTING_PERIOD = 'MAY-10' and BAL.comb_id
in
(select comb_id from
(select comb_id, rownum r from
(select distinct(comb_id),LAST_UPDATE_DATE from ACC_BALANCE_STG where accounting_period='MAY-10' order by LAST_UPDATE_DATE )
where rownum <=100) where r >0)
Please help me fine-tune the above. I am using an Oracle 10g database. There are a total of 8000 records. Let me know if any other info is required.
Thanks in advance.
In recent versions of Oracle an EXISTS predicate should produce the same execution plan as the corresponding IN clause.
Follow the advice in the tuning threads as suggested by SomeoneElse.
It looks to me like you could avoid the double pass on ACC_BALANCE_STG by using an analytical function like ROW_NUMBER() and then joining to ACC_SEGMENTS_V_TST SEG, maybe using subquery refactoring to make it look nicer.
e.g. something like (untested)
WITH subq_bal as
(SELECT *
FROM (SELECT BAL.L_ID, BAL.L_TYPE, BAL.L_NAME, BAL.NATURAL_ACCOUNT,
BAL.LOCATION, BAL.PRODUCT, BAL.INTERCOMPANY, BAL.FUTURE1, BAL.FUTURE2,
BAL.CURRENCY, BAL.AMOUNT_PTD, BAL.AMOUNT_YTD, BAL.CREATION_DATE,
BAL.CREATED_BY, BAL.LAST_UPDATE_DATE, BAL.LAST_UPDATED_BY, BAL.STATUS, BAL.ANET_STATUS,
BAL.COG_STATUS, BAL.comb_id, BAL.MESSAGE,
ROW_NUMBER() OVER (ORDER BY LAST_UPDATE_DATE) rn
FROM acc_balance_stg
WHERE accounting_period='MAY-10')
WHERE rn <= 100)
SELECT *
FROM subq_bal bal
, acc_Segments_v_tst seg
where BAL.NATURAL_ACCOUNT = SEG.SEGMENT_VALUE
AND SEG.SEGMENT_COLUMN = 'SEGMENT99';
However, the parentheses you use around comb_id make me question what your intention is here in the subquery.
Do you have multiple rows in ACC_BALANCE_STG for the same comb_id and last_update_date?
If so you may want to do a MAX on last_update_date, group by comb_id before doing the analytic restriction.
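That MAX/GROUP BY variant could look something like this (an untested sketch, under the same assumptions as the rewrite above):

```sql
-- Sketch: collapse duplicates per comb_id first, then take the
-- 100 comb_ids with the earliest latest-update, then join out.
WITH latest AS
 (SELECT comb_id, MAX(last_update_date) AS last_update_date
    FROM acc_balance_stg
   WHERE accounting_period = 'MAY-10'
   GROUP BY comb_id),
top100 AS
 (SELECT comb_id
    FROM (SELECT comb_id,
                 ROW_NUMBER() OVER (ORDER BY last_update_date) rn
            FROM latest)
   WHERE rn <= 100)
SELECT bal.*, seg.segment_description
  FROM acc_balance_stg bal
  JOIN top100 t
    ON t.comb_id = bal.comb_id
  JOIN acc_segments_v_tst seg
    ON bal.natural_account = seg.segment_value
 WHERE seg.segment_column = 'SEGMENT99'
   AND bal.accounting_period = 'MAY-10';
```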
Edited by: DomBrooks on Jun 16, 2010 5:56 PM -
VAL_FIELD selection to determine RSDRI or MDX query: performance tuning
According to one of the how-to guides (HTG), I am working on performance tuning. One of the tips is to try to query base members by using BAS(xxx) in the expansion pane of the BPC report.
I did so and found an interesting issue in one of the COPA report.
With the income statement, when I choose one node, GROSS_PROFIT, i.e. BAS(GROSS_PROFIT), it generates an RSDRI query, as I can see in UJSTAT. When I choose its parent, BAS(DIRECT_INCOME), it generates an MDX query!
I checked that DIRECT_INCOME has three members: GROSS_PROFIT, SGA, REV_OTHER. None of them has any formulas.
Instead of calling BAS(DIRECT_INCOME), I called BAS(GROSS_PROFIT), BAS(SGA), BAS(REV_OTHER), and I got an RSDRI query again.
So in summary:
BAS(PARENT) =>MDX query.
BAS(CHILD1)=>RSDRI query.
BAS(CHILD2)=>RSDRI query.
BAS(CHILD3)=>RSDRI query.
BAS(CHILD1),BAS(CHILD2),BAS(CHILD3)=>RSDRI query
I know VAL_FIELD is a SAP reserved name for BPC dimensions. My question is: why does BAS(PARENT) generate an MDX query?
Interestingly, I can repeat this behavior in my system. My intention is to always get an RSDRI query.
George
OK - it turns out that Crystal Reports disregards BEx Query variables when they are put in the Default Values section of the filter selection.
I had mine there, and even though CR prompted me for the variables AND the SQL statement it generated had an INCLUDE statement with those variables, I could see from my result set that it still returned everything in the cube, as if there were no restriction on Plant, for instance.
I should have paid more attention to the info message I got in the BEx Query Designer. It specifically states that a "Variable located in Default Values will be ignored in the MDX Access".
After moving the variables to the Characteristic Restrictions my report worked as expected. The slow response time is still an issue but at least it's not compounded by trying to retrieve all records in the cube while I'm expecting less than 2k.
Hope this helps someone else -
Query performance tuning need your suggestions
Hi,
Below are the SQL query and explain plan. The query takes 2 hours to execute and sometimes errors out due to a memory issue.
I need to improve the performance of this query. Please give your suggestions for tweaking it so that execution takes less time and consumes less memory.
select a11.DATE_ID DATE_ID,
sum(a11.C_MEASURE) WJXBFS1,
count(a11.PKEY_GUID) WJXBFS2,
count(Case when a11.C_MEASURE <= 10 then a11.PKEY_GUID END) WJXBFS3,
count(Case when a11.STATUS = 'Y' and a11.C_MEASURE > 10 then a11.PKEY_GUID END) WJXBFS4,
count(Case when a11.STATUS = 'N' then a11.PKEY_GUID END) WJXBFS5,
sum(((a11.C_MEASURE ))) WJXBFS6,
a17.DESC_DATE_MM_DD_YYYY DESC_DATE_MM_DD_YYYY,
a11.DNS DNS,
a12.VVALUE VVALUE,
a12.VNAME VNAME,
a13.VVALUE VVALUE0,
a13.VNAME VNAME0,
a14.VVALUE VVALUE1,
a14.VNAME VNAME1,
a15.VVALUE VVALUE2,
a15.VNAME VNAME2,
a16.VVALUE VVALUE3,
a16.VNAME VNAME3,
a11.PKEY_GUID PKEY_GUID,
a11.UPKEY_GUID UPKEY_GUID,
a17.DAY_OF_WEEK DAY_OF_WEEK,
a17.D_WEEK D_WEEK,
a17.MNTH_ID DAY_OF_MONTH,
a17.YEAR_ID YEAR_ID,
a17.DESC_YEAR_FULL DESC_YEAR_FULL,
a17.WEEK_ID WEEK_ID,
a17.WEEK_OF_YEAR WEEK_OF_YEAR
from ACTIVITY_F a11
join (SELECT A.ORG as ORG,
A.DATE_ID as DATE_ID,
A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
A.DATE_HOUR_ID as DATE_HOUR_ID,
A.TASK as TASK,
A.PKEY_GUID as PKEY_GUID,
A.VNAME as VNAME,
A.VVALUE as VVALUE
FROM W_ORG_D A join W_PERSON_D B on
(A.TASK = B.TASK AND A.ORG = B.ID
AND A.VNAME = B.VNAME)
WHERE B.VARIABLE_OBJ = 1 ) a12
on (a11.PKEY_GUID = a12.PKEY_GUID and
a11.DATE_ID = a12.DATE_ID and
a11.ORG = a12.ORG)
join (SELECT A.ORG as ORG,
A.DATE_ID as DATE_ID,
A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
A.DATE_HOUR_ID as DATE_HOUR_ID,
A.TASK as TASK,
A.PKEY_GUID as PKEY_GUID,
A.VNAME as VNAME,
A.VVALUE as VVALUE
FROM W_ORG_D A join W_PERSON_D B on
(A.TASK = B.TASK AND A.ORG = B.ID
AND A.VNAME = B.VNAME)
WHERE B.VARIABLE_OBJ = 2) a13
on (a11.PKEY_GUID = a13.PKEY_GUID and
a11.DATE_ID = a13.DATE_ID and
a11.ORG = a13.ORG)
join (SELECT A.ORG as ORG,
A.DATE_ID as DATE_ID,
A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
A.DATE_HOUR_ID as DATE_HOUR_ID,
A.TASK as TASK,
A.PKEY_GUID as PKEY_GUID,
A.VNAME as VNAME,
A.VVALUE as VVALUE
FROM W_ORG_D A join W_PERSON_D B on
(A.TASK = B.TASK AND A.ORG = B.ID
AND A.VNAME = B.VNAME)
WHERE B.VARIABLE_OBJ = 3 ) a14
on (a11.PKEY_GUID = a14.PKEY_GUID and
a11.DATE_ID = a14.DATE_ID and
a11.ORG = a14.ORG)
join (SELECT A.ORG as ORG,
A.DATE_ID as DATE_ID,
A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
A.DATE_HOUR_ID as DATE_HOUR_ID,
A.TASK as TASK,
A.PKEY_GUID as PKEY_GUID,
A.VNAME as VNAME,
A.VVALUE as VVALUE
FROM W_ORG_D A join W_PERSON_D B on
(A.TASK = B.TASK AND A.ORG = B.ID
AND A.VNAME = B.VNAME)
WHERE B.VARIABLE_OBJ = 4) a15
on (a11.PKEY_GUID = a15.PKEY_GUID and
a11.DATE_ID = a15.DATE_ID and
a11.ORG = a15.ORG)
join (SELECT A.ORG as ORG,
A.DATE_ID as DATE_ID,
A.TIME_OF_DAY_ID as TIME_OF_DAY_ID,
A.DATE_HOUR_ID as DATE_HOUR_ID,
A.TASK as TASK,
A.PKEY_GUID as PKEY_GUID,
A.VNAME as VNAME,
A.VVALUE as VVALUE
FROM W_ORG_D A join W_PERSON_D B on
(A.TASK = B.TASK AND A.ORG = B.ID
AND A.VNAME = B.VNAME)
WHERE B.VARIABLE_OBJ = 9) a16
on (a11.PKEY_GUID = a16.PKEY_GUID and
a11.DATE_ID = a16.DATE_ID and
A11.ORG = A16.ORG)
join W_DATE_D a17
ON (A11.DATE_ID = A17.ID)
join W_SALES_D a18
on (a11.TASK = a18.ID)
where (a17.TIMSTAMP between To_Date('2001-02-24 00:00:00', 'YYYY-MM-DD HH24:MI:SS') and To_Date('2002-09-12 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
and a11.ORG in (12)
and a18.SRC_TASK = 'AX012Z')
group by a11.DATE_ID,
a17.DESC_DATE_MM_DD_YYYY,
a11.DNS,
a12.VVALUE,
a12.VNAME,
a13.VVALUE,
a13.VNAME,
a14.VVALUE,
a14.VNAME,
a15.VVALUE,
a15.VNAME,
a16.VVALUE,
a16.VNAME,
a11.PKEY_GUID,
a11.UPKEY_GUID,
a17.DAY_OF_WEEK,
a17.D_WEEK,
a17.MNTH_ID,
a17.YEAR_ID,
a17.DESC_YEAR_FULL,
a17.WEEK_ID,
a17.WEEK_OF_YEAR;
Explained.
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 1245 | 47 (9)| 00:00:01 |
| 1 | HASH GROUP BY | | 1 | 1245 | 47 (9)| 00:00:01 |
|* 2 | HASH JOIN | | 1 | 1245 | 46 (7)| 00:00:01 |
|* 3 | HASH JOIN | | 1 | 1179 | 41 (5)| 00:00:01 |
|* 4 | HASH JOIN | | 1 | 1113 | 37 (6)| 00:00:01 |
|* 5 | HASH JOIN | | 1 | 1047 | 32 (4)| 00:00:01 |
|* 6 | HASH JOIN | | 1 | 981 | 28 (4)| 00:00:01 |
| 7 | NESTED LOOPS | | 1 | 915 | 23 (0)| 00:00:01 |
| 8 | NESTED LOOPS | | 1 | 763 | 20 (0)| 00:00:01 |
| 9 | NESTED LOOPS | | 1 | 611 | 17 (0)| 00:00:01 |
| 10 | NESTED LOOPS | | 1 | 459 | 14 (0)| 00:00:01 |
| 11 | NESTED LOOPS | | 1 | 307 | 11 (0)| 00:00:01 |
| 12 | NESTED LOOPS | | 1 | 155 | 7 (0)| 00:00:01 |
| 13 | NESTED LOOPS | | 1 | 72 | 3 (0)| 00:00:01 |
| 14 | TABLE ACCESS BY INDEX ROWID| W_SALES_D | 1 | 13 | 2 (0)| 00:00:01 |
|* 15 | INDEX UNIQUE SCAN | CONS_UNQ_W_SALES_D_SRC_ID | 1 | | 1 (0)| 00:00:01 |
| 16 | TABLE ACCESS BY INDEX ROWID| W_DATE_D | 1 | 59 | 1 (0)| 00:00:01 |
|* 17 | INDEX UNIQUE SCAN | UIDX_DD_TIMSTAMP | 1 | | 0 (0)| 00:00:01 |
| 18 | TABLE ACCESS BY INDEX ROWID | ACTIVITY_F | 1 | 83 | 4 (0)| 00:00:01 |
|* 19 | INDEX RANGE SCAN | PK_ACTIVITY_F | 1 | | 3 (0)| 00:00:01 |
|* 20 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 4 (0)| 00:00:01 |
|* 21 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
|* 22 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 3 (0)| 00:00:01 |
|* 23 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
|* 24 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 3 (0)| 00:00:01 |
|* 25 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
|* 26 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 3 (0)| 00:00:01 |
|* 27 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
|* 28 | TABLE ACCESS BY INDEX ROWID | W_ORG_D | 1 | 152 | 3 (0)| 00:00:01 |
|* 29 | INDEX RANGE SCAN | IDX_FK_CVSF_PKEY_GUID | 10 | | 3 (0)| 00:00:01 |
|* 30 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
|* 31 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
|* 32 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
|* 33 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
|* 34 | TABLE ACCESS FULL | W_PERSON_D | 1 | 66 | 4 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------------------
Hi,
I'm not a tuning expert but I can suggest you to post your request according to this template:
Thread: HOW TO: Post a SQL statement tuning request - template posting
HOW TO: Post a SQL statement tuning request - template posting
Then:
a) You should post code which is easy to read. What about formatting? Your code had to be fixed up before it could be read.
b) You could simplify your code using the with statement. This has nothing to do with the tuning but it will help the readability of the query.
Check it below:
WITH tab1 AS (SELECT a.org AS org
, a.date_id AS date_id
, a.time_of_day_id AS time_of_day_id
, a.date_hour_id AS date_hour_id
, a.task AS task
, a.pkey_guid AS pkey_guid
, a.vname AS vname
, a.vvalue AS vvalue
, b.variable_obj
FROM w_org_d a
JOIN
w_person_d b
ON ( a.task = b.task
AND a.org = b.id
AND a.vname = b.vname))
SELECT a11.date_id date_id
, SUM (a11.c_measure) wjxbfs1
, COUNT (a11.pkey_guid) wjxbfs2
, COUNT (CASE WHEN a11.c_measure <= 10 THEN a11.pkey_guid END) wjxbfs3
, COUNT (CASE WHEN a11.status = 'Y' AND a11.c_measure > 10 THEN a11.pkey_guid END) wjxbfs4
, COUNT (CASE WHEN a11.status = 'N' THEN a11.pkey_guid END) wjxbfs5
, SUM ( ( (a11.c_measure))) wjxbfs6
, a17.desc_date_mm_dd_yyyy desc_date_mm_dd_yyyy
, a11.dns dns
, a12.vvalue vvalue
, a12.vname vname
, a13.vvalue vvalue0
, a13.vname vname0
, a14.vvalue vvalue1
, a14.vname vname1
, a15.vvalue vvalue2
, a15.vname vname2
, a16.vvalue vvalue3
, a16.vname vname3
, a11.pkey_guid pkey_guid
, a11.upkey_guid upkey_guid
, a17.day_of_week day_of_week
, a17.d_week d_week
, a17.mnth_id day_of_month
, a17.year_id year_id
, a17.desc_year_full desc_year_full
, a17.week_id week_id
, a17.week_of_year week_of_year
FROM activity_f a11
JOIN tab1 a12
ON ( a11.pkey_guid = a12.pkey_guid
AND a11.date_id = a12.date_id
AND a11.org = a12.org
AND a12.variable_obj = 1)
JOIN tab1 a13
ON ( a11.pkey_guid = a13.pkey_guid
AND a11.date_id = a13.date_id
AND a11.org = a13.org
AND a13.variable_obj = 2)
JOIN tab1 a14
ON ( a11.pkey_guid = a14.pkey_guid
AND a11.date_id = a14.date_id
AND a11.org = a14.org
AND a14.variable_obj = 3)
JOIN tab1 a15
ON ( a11.pkey_guid = a15.pkey_guid
AND a11.date_id = a15.date_id
AND a11.org = a15.org
AND a15.variable_obj = 4)
JOIN tab1 a16
ON ( a11.pkey_guid = a16.pkey_guid
AND a11.date_id = a16.date_id
AND a11.org = a16.org
AND a16.variable_obj = 9)
JOIN w_date_d a17
ON (a11.date_id = a17.id)
JOIN w_sales_d a18
ON (a11.task = a18.id)
WHERE (a17.timstamp BETWEEN TO_DATE ('2001-02-24 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
AND TO_DATE ('2002-09-12 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
AND a11.org IN (12)
AND a18.src_task = 'AX012Z')
GROUP BY a11.date_id, a17.desc_date_mm_dd_yyyy, a11.dns, a12.vvalue
, a12.vname, a13.vvalue, a13.vname, a14.vvalue
, a14.vname, a15.vvalue, a15.vname, a16.vvalue
, a16.vname, a11.pkey_guid, a11.upkey_guid, a17.day_of_week
, a17.d_week, a17.mnth_id, a17.year_id, a17.desc_year_full
, a17.week_id, a17.week_of_year;
I hope I did not miss anything while reformatting the code. I could not test it not having the proper tables.
As I said before, I'm not a tuning expert, nor do I pretend to be, but I see this:
1) Table W_PERSON_D is read in full scan. Any possibility of using indexes?
2) Tables W_SALES_D, W_DATE_D, ACTIVITY_F and W_ORG_D have TABLE ACCESS BY INDEX ROWID which definitely is not fast.
You should provide additional information for tuning your query checking the post I mentioned previously.
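For instance, actual (rather than estimated) row counts can be captured like this (a sketch; run the statement once with the hint, then display the cursor):

```sql
-- Run the query once with runtime statistics collection enabled.
select /*+ gather_plan_statistics */
       ...  -- your full query here
;

-- Then show estimated vs actual rows for the last statement
-- executed in this session.
select *
  from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
```

Comparing the E-Rows and A-Rows columns shows exactly where the optimizer's estimates go wrong.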
Regards.
Al -
Query Performance tuning and scope of imporvement
Hi All ,
I am on oracle 10g and on Linux OS.
I have this below query which I am trying to optimise :
SELECT 'COMPANY' AS device_brand, mach_sn AS device_source_id,
'COMPANY' AS device_brand_raw,
CASE
WHEN fdi.serial_number IS NOT NULL THEN
fdi.serial_number
ELSE
mach_sn || model_no
END AS serial_number_raw,
gmd.generic_meter_name AS counter_id,
meter_name AS counter_id_raw,
meter_value AS counter_value,
meter_hist_tstamp AS device_timestamp,
rcvd_tstamp AS server_timestamp
FROM rdw.v_meter_hist vmh
JOIN rdw.generic_meter_def gmd
ON vmh.generic_meter_id = gmd.generic_meter_id
LEFT OUTER JOIN fdr.device_info fdi
ON vmh.mach_sn = fdi.clean_serial_number
WHERE meter_hist_id IN
(SELECT /*+ PUSH_SUBQ */ MAX(meter_hist_id)
FROM rdw.v_meter_hist
WHERE vmh.mach_sn IN
('URR893727')
AND vmh.meter_name IN
('TotalImpressions','TotalBlackImpressions','TotalColorImpressions')
AND vmh.meter_hist_tstamp >=to_date ('04/16/2011', 'mm/dd/yyyy')
AND vmh.meter_hist_tstamp <= to_date ('04/18/2011', 'mm/dd/yyyy')
GROUP BY mach_sn, vmh.meter_def_id)
ORDER BY device_source_id, vmh.meter_def_id, meter_hist_tstamp;
Earlier, it was taking too much time, but it started to work faster when I added this:
/*+ PUSH_SUBQ */ in the select query.
The explain plan generated for the same is :
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 29M| 3804M| | 15M (4)| 53:14:08 |
| 1 | SORT ORDER BY | | 29M| 3804M| 8272M| 15M (4)| 53:14:08 |
|* 2 | FILTER | | | | | | |
|* 3 | HASH JOIN | | 29M| 3804M| | 8451K (2)| 28:10:19 |
| 4 | TABLE ACCESS FULL | GENERIC_METER_DEF | 11 | 264 | | 3 (0)| 00:00:01 |
|* 5 | HASH JOIN RIGHT OUTER| | 29M| 3137M| 19M| 8451K (2)| 28:10:17 |
| 6 | TABLE ACCESS FULL | DEVICE_INFO | 589K| 12M| | 799 (2)| 00:00:10 |
|* 7 | HASH JOIN | | 29M| 2527M| 2348M| 8307K (2)| 27:41:29 |
|* 8 | HASH JOIN | | 28M| 2016M| | 6331K (2)| 21:06:19 |
|* 9 | TABLE ACCESS FULL | METER_DEF | 33 | 990 | | 4 (0)| 00:00:01 |
| 10 | TABLE ACCESS FULL | METER_HIST | 3440M| 137G| | 6308K (2)| 21:01:44 |
| 11 | TABLE ACCESS FULL | MACH_XFER_HIST | 436M| 7501M| | 1233K (1)| 04:06:41 |
|* 12 | FILTER | | | | | | |
| 13 | HASH GROUP BY | | 1 | 26 | | 6631K (7)| 22:06:15 |
|* 14 | FILTER | | | | | | |
| 15 | TABLE ACCESS FULL | METER_HIST | 3440M| 83G| | 6304K (2)| 21:00:49 |
------------------------------------------------------------------------------------------------------
Is there any other way to optimise it more? Please suggest, since I am new to query tuning.
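One thing worth trying (a sketch only, written against the aliases in your query and not tested): instead of joining back to rdw.v_meter_hist with IN (SELECT MAX(meter_hist_id) ...), keep the latest row per (mach_sn, meter_def_id) in a single pass with an analytic function, which avoids the second scan of the huge METER_HIST table:

```sql
SELECT *
FROM (SELECT vmh.*,
             -- number the rows so the latest meter_hist_id per group is rn = 1
             ROW_NUMBER() OVER (PARTITION BY mach_sn, meter_def_id
                                ORDER BY meter_hist_id DESC) AS rn
        FROM rdw.v_meter_hist vmh
       WHERE mach_sn = 'URR893727'
         AND meter_name IN ('TotalImpressions','TotalBlackImpressions','TotalColorImpressions')
         AND meter_hist_tstamp >= TO_DATE('04/16/2011','mm/dd/yyyy')
         AND meter_hist_tstamp <= TO_DATE('04/18/2011','mm/dd/yyyy'))
WHERE rn = 1;
```

Whether this beats the PUSH_SUBQ version depends on your data volumes, so compare the plans before committing to it.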
Thanks and Regards
KK
Hi Dom,
Greetings. Sorry for the delayed response. I have read the How to Post document.
I will provide all the required information here now :
Version : 10.2.0.4
OS : Linux
The SQL query which is facing the performance issue:
SELECT mh.meter_hist_id, mxh.mach_sn, mxh.collectiontag, mxh.rcvd_tstamp,
mxh.mach_xfer_id, md.meter_def_id, md.meter_name, md.meter_type,
md.meter_units, md.meter_desc, mh.meter_value, mh.meter_hist_tstamp,
mh.max_value, md.generic_meter_id
FROM meter_hist mh JOIN mach_xfer_hist mxh
ON mxh.mach_xfer_id = mh.mach_xfer_id
JOIN meter_def md ON md.meter_def_id = mh.meter_def_id;
Explain plan for this query:
Plan hash value: 1878059220
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 3424M| 497G| | 17M (1)| 56:42:49 |
|* 1 | HASH JOIN | | 3424M| 497G| | 17M (1)| 56:42:49 |
| 2 | TABLE ACCESS FULL | METER_DEF | 423 | 27918 | | 4 | 00:00:01 |
|* 3 | HASH JOIN | | 3424M| 287G| 26G| 16M (1)| 56:38:16 |
| 4 | TABLE ACCESS FULL| MACH_XFER_HIST | 432M| 21G| | 1233K (1)| 04:06:40 |
| 5 | TABLE ACCESS FULL| METER_HIST | 3438M| 115G| | 6299K (2)| 20:59:54 |
Predicate Information (identified by operation id):
1 - access("MD"."METER_DEF_ID"="MH"."METER_DEF_ID")
3 - access("MH"."MACH_XFER_ID"="MXH"."MACH_XFER_ID")
Parameters :
show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.4
optimizer_index_caching integer 70
optimizer_index_cost_adj integer 50
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
show parameter db_file_multi
db_file_multiblock_read_count integer 8
show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
select sname , pname , pval1 , pval2 from sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 07-12-2011 09:22
SYSSTATS_INFO DSTOP 07-12-2011 09:52
SYSSTATS_INFO FLAGS 0
SYSSTATS_MAIN CPUSPEEDNW 1153.92254
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM 4.398
SYSSTATS_MAIN MREADTIM 3.255
SYSSTATS_MAIN CPUSPEED 180
SYSSTATS_MAIN MBRC 8
SYSSTATS_MAIN MAXTHR 244841472
SYSSTATS_MAIN SLAVETHR 933888
show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
Please let me know if any other information is needed. This query is currently taking almost one hour to run.
Also, we have two indexes, on the columns xfer_id and meter_def_id, in both tables, but they are not getting used without any filtering (WHERE clause).
Would adding a hint to the query above be of some help?
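If you want to test whether the optimizer will use those indexes at all, you can force the point with an index hint; this is only a hypothetical sketch (the index name is made up, check the real one in USER_INDEXES):

```sql
SELECT /*+ INDEX(mh meter_hist_def_idx) */
       mh.meter_hist_id, mh.meter_value, mh.meter_hist_tstamp
FROM   meter_hist mh
JOIN   meter_def md ON md.meter_def_id = mh.meter_def_id;
```

But note: with no WHERE clause the query has to visit every row anyway, so full scans plus hash joins are usually the cheapest plan, and an index hint may well make it slower, not faster.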
Thanks and Regards
KK -
Hi There,
We have a SQL query that runs between 2 databases on the same machine; the SQL takes about 2 mins and returns about 6400 rows. When the process started running we used to see results in about 13 secs; now it's taking almost 2 mins for the same data set. We have updated the stats (table and index) but to no avail. I've been trying to get the execution plan to see if there is anything abnormal going on, but as the core of the SQL is done remotely, we haven't been able to get much out of it.
Here is the sql:
SELECT
--/*+ DRIVING_SITE(var) ALL_ROWS */
ventity_id, ar_action_performed, action_date,
'ventity_ar' ar_tab
FROM (SELECT var.ventity_id, var.ar_action_performed, var.action_date,
var.familyname_id, var.status, var.isprotected,
var.dateofbirth, var.gender, var.sindigits,
LAG (var.familyname_id) OVER (PARTITION BY var.ventity_id ORDER BY action_date)
lag_familyname_id,
LAG (var.status) OVER (PARTITION BY var.ventity_id ORDER BY action_date)
lag_status,
LAG (var.isprotected) OVER (PARTITION BY var.ventity_id ORDER BY action_date)
lag_isprotected,
LAG (var.dateofbirth) OVER (PARTITION BY var.ventity_id ORDER BY action_date)
lag_dateofbirth,
LAG (var.gender) OVER (PARTITION BY var.ventity_id ORDER BY action_date)
lag_gender,
LAG (var.sindigits) OVER (PARTITION BY var.ventity_id ORDER BY action_date)
lag_sindigits
FROM cpp_schema.ventity_ar@CdpP var,
-- reduce the set to ventity_id that had a change within the time frame,
-- and filter out RETRIEVEs as they do not signal change
(SELECT DISTINCT ventity_id
FROM cpp_schema.ventity_ar@CdpP
WHERE action_date BETWEEN '01-MAR-10' AND '10-APR-10'
AND ar_action_performed <> 'RTRV') m
WHERE var.action_date <= '10-APR-10'
AND var.ventity_id = m.ventity_id
AND var.ar_action_performed <> 'RTRV') mm
WHERE action_date BETWEEN '01-MAR-10' AND '10-APR-10'
-- most of the columns from the data table allow nulls
AND ( (NVL (familyname_id, 0) <> NVL (lag_familyname_id, 0))
OR (NVL (status, 'x') <> NVL (lag_status, 'x'))
OR (NVL (isprotected, 2) <> NVL (lag_isprotected, 2))
OR (NVL (dateofbirth, TO_DATE ('15000101', 'yyyymmdd')) <>
NVL (lag_dateofbirth, TO_DATE ('15000101', 'yyyymmdd')))
OR (NVL (gender, 'x') <> NVL (lag_gender, 'x'))
OR (NVL (sindigits, 'x') <> NVL (lag_sindigits, 'x')))
ORDER BY ventity_id, action_date DESC
6401 rows selected.
Elapsed: 00:01:47.03
Execution Plan
Plan hash value: 3953446945
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Inst |IN-OUT|
| 0 | SELECT STATEMENT | | 12M| 1575M| | 661K (1)| 02:12:22 | | |
| 1 | SORT ORDER BY | | 12M| 1575M| 2041M| 661K (1)| 02:12:22 | | |
|* 2 | VIEW | | 12M| 1575M| | 291K (2)| 00:58:13 | | |
| 3 | REMOTE | | | | | | | CCP01 | R->S |
2 - filter("action_date">='01_MAR-10' AND "action_date"<='10-APR-10' AND
(NVL("FAMILYNAME_id",0)<>NVL("LAG_FAMILYNAME_id",0) OR
NVL("STATUS",'x')<>NVL("LAG_STATUS",'x') OR NVL("ISPROTECTED",2)<>NVL("LAG_ISPROTECTED",2
) OR NVL("DATEOFBIRTH",TO_DATE(' 1500-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss'))<>NVL("LAG_DATEOFBIRTH",TO_DATE(' 1500-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss')) OR NVL("GENDER",'x')<>NVL("LAG_GENDER",'x') OR
NVL("SINDIGITS",'x')<>NVL("LAG_SINDIGITS",'x')))
Remote SQL Information (identified by operation id):
3 - EXPLAIN PLAN SET STATEMENT_ID='PLUS4294967295' INTO PLAN_TABLE@! FOR SELECT
"A2"."ventity_id","A2"."AR_ACTION_PERFORMED","A2"."action_date","A2"."FAMILYNAME_id","A2"
."STATUS","A2"."ISPROTECTED","A2"."DATEOFBIRTH","A2"."GENDER","A2"."SINDIGITS",DECODE(COU
NT(*) OVER ( PARTITION BY "A2"."ventity_id" ORDER BY "A2"."action_date" ROWS BETWEEN 1
PRECEDING AND 1 PRECEDING ),1,FIRST_VALUE("A2"."FAMILYNAME_id") OVER ( PARTITION BY
"A2"."ventity_id" ORDER BY "A2"."action_date" ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING
),NULL),DECODE(COUNT(*) OVER ( PARTITION BY "A2"."ventity_id" ORDER BY
"A2"."action_date" ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING ),1,FIRST_VALUE("A2"."STATUS")
OVER ( PARTITION BY "A2"."ventity_id" ORDER BY "A2"."action_date" ROWS BETWEEN 1
PRECEDING AND 1 PRECEDING ),NULL),DECODE(COUNT(*) OVER ( PARTITION BY
"A2"."ventity_id" ORDER BY "A2"."action_date" ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING
),1,FIRST_VALUE("A2"."ISPROTECTED") OVER ( PARTITION BY "A2"."ventity_id" ORDER BY
"A2"."action_date" ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING ),NULL),DECODE(COUNT(*) OVER (
PARTITION BY "A2"."ventity_id" ORDER BY "A2"."action_date" ROWS BETWEEN 1 PRECEDING
AND 1 PRECEDING ),1,FIRST_VALUE("A2"."DATEOFBIRTH") OVER ( PARTITION BY
"A2"."ventity_id" ORDER BY "A2"."action_date" ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING
),NULL),DECODE(COUNT(*) OVER ( PARTITION BY "A2"."ventity_id" ORDER BY
"A2"."action_date" ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING ),1,FIRST_VALUE("A2"."GENDER")
OVER ( PARTITION BY "A2"."ventity_id" ORDER BY "A2"."action_date" ROWS BETWEEN 1
PRECEDING AND 1 PRECEDING ),NULL),DECODE(COUNT(*) OVER ( PARTITION BY
"A2"."ventity_id" ORDER BY "A2"."action_date" ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING
),1,FIRST_VALUE("A2"."SINDIGITS") OVER ( PARTITION BY "A2"."ventity_id" ORDER BY
"A2"."action_date" ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING ),NULL) FROM
"CPP_SCHEMA"."ventity_AR" "A2", (SELECT DISTINCT "A3"."ventity_id"
"ventity_id" FROM "CPP_SCHEMA"."ventity_AR" "A3" WHERE
"A3"."action_date">='01_MAR-10' AND "A3"."action_date"<='10-APR-10' AND
"A3"."AR_ACTION_PERFORMED"<>'RETRIEVE' AND TO_DATE('01_MAR-10')<=TO_DATE('10-APR-10'))
"A1" WHERE "A2"."action_date"<='10-APR-10' AND "A2"."ventity_id"="A1"."ventity_id"
AND "A2"."AR_ACTION_PERFORMED"<>'RETRIEVE' (accessing 'EBCP01.EBC.GOV.BC.CA' )
Your advice and/or help is highly appreciated.
THanks
Edited by: rsar001 on Apr 20, 2010 6:57 AM
Maybe I'm missing something, but this subquery seems inefficient:
SELECT var.ventity_id, var.ar_action_performed, var.action_date,
var.familyname_id, var.status, var.isprotected,
var.dateofbirth, var.gender, var.sindigits,
LAG (var.familyname_id) OVER (PARTITION BY var.ventity_id ORDER BY action_date)
lag_familyname_id,
LAG (var.status) OVER (PARTITION BY var.ventity_id ORDER BY action_date)
lag_status,
LAG (var.isprotected) OVER (PARTITION BY var.ventity_id ORDER BY action_date)
lag_isprotected,
LAG (var.dateofbirth) OVER (PARTITION BY var.ventity_id ORDER BY action_date)
lag_dateofbirth,
LAG (var.gender) OVER (PARTITION BY var.ventity_id ORDER BY action_date)
lag_gender,
LAG (var.sindigits) OVER (PARTITION BY var.ventity_id ORDER BY action_date)
lag_sindigits
FROM cpp_schema.ventity_ar@CdpP var,
-- reduce the set to ventity_id that had a change within the time frame,
-- and filter out RETRIEVEs as they do not signal change
(SELECT DISTINCT ventity_id
FROM cpp_schema.ventity_ar@CdpP
WHERE action_date BETWEEN '01-MAR-10' AND '10-APR-10'
AND ar_action_performed <> 'RTRV') m
WHERE var.action_date <= '10-APR-10'
AND var.ventity_id = m.ventity_id
AND var.ar_action_performed != 'RTRV'
I don't think accessing the VENTITY_AR table twice is helping you here. The comments look like you want to restrict the set of VENTITY_IDs, but if you look at the plan, it is not happening. The plan is reading them from the index and joining against the full VENTITY_AR table anyway. I recommend you consolidate it into something like this:
SELECT var.ventity_id
, var.ar_action_performed
, var.action_date
, var.familyname_id
, var.status
, var.isprotected
, var.dateofbirth
, var.gender
, var.sindigits
, LAG (var.familyname_id) OVER (PARTITION BY var.ventity_id ORDER BY action_date) AS lag_familyname_id
, LAG (var.status) OVER (PARTITION BY var.ventity_id ORDER BY action_date) AS lag_status
, LAG (var.isprotected) OVER (PARTITION BY var.ventity_id ORDER BY action_date) AS lag_isprotected
, LAG (var.dateofbirth) OVER (PARTITION BY var.ventity_id ORDER BY action_date) AS lag_dateofbirth
, LAG (var.gender) OVER (PARTITION BY var.ventity_id ORDER BY action_date) AS lag_gender
, LAG (var.sindigits) OVER (PARTITION BY var.ventity_id ORDER BY action_date) AS lag_sindigits
FROM cpp_schema.ventity_ar@CdpP var
WHERE var.action_date BETWEEN TO_DATE('01-MAR-10','DD-MON-YY') AND TO_DATE('10-APR-10','DD-MON-YY')
AND var.ar_action_performed != 'RTRV'
It may then be useful to put an index on (ACTION_DATE, AR_ACTION_PERFORMED) if one doesn't already exist.
*::EDIT::*
I noticed the large amount of NVL calls in your outer query. These NVLs could possibly be eliminated if you use the optional second and third arguments of the LAG analytical function. I'm not sure if this would improve performance but it may make the query more readable and maintainable.
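For example (a sketch only, reusing one column from the query above), LAG's optional second and third arguments are the row offset and the default returned when there is no prior row in the partition; the default can replace the NVLs that only guard the first row of each partition:

```sql
SELECT ventity_id,
       status,
       -- offset 1, default 'x' when there is no previous row in the partition
       LAG(status, 1, 'x') OVER (PARTITION BY ventity_id
                                 ORDER BY action_date) AS lag_status
FROM   cpp_schema.ventity_ar
```

The outer comparison then becomes simply status <> lag_status for that column; NVL is still needed wherever the column itself can be NULL on later rows.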
HTH!
Edited by: Centinul on Apr 20, 2010 10:50 AM -
T-SQL query performance (CLR func + webservice)
Hi guys
I have CLR function which accepts address as a parameter, calls geocoding webservice and returns some information (coordinates etc.)
I run SQL query
SELECT * FROM T CROSS APPLY CLR_Func(T.Address) F
The table contains 8 million records and obviously the query runs very slowly.
Do you know any nice way to improve performance in this situation?
Thank you,
Max
No WHERE condition? SQL Server will call the function 8 million times ....
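One hedged workaround (sketch only, assuming many rows share the same address, and that a temp table is acceptable): geocode each distinct address once, then join the cached results back, so the CLR/webservice call count drops from 8 million to the number of distinct addresses:

```sql
-- Call the function once per distinct address (names are illustrative)
SELECT d.Address, F.Latitude, F.Longitude
INTO   #Geo
FROM   (SELECT DISTINCT Address FROM T) AS d
CROSS APPLY CLR_Func(d.Address) AS F;

-- Then join the cached coordinates back to the full table
SELECT t.*, g.Latitude, g.Longitude
FROM   T AS t
JOIN   #Geo AS g ON g.Address = t.Address;
```

The column names returned by CLR_Func are assumptions here; adjust them to whatever your function actually returns.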
Best Regards, Uri Dimant, SQL Server MVP -
SQL query performance question
So I had this long query that looked like this:
SELECT a.BEGIN_DATE, a.END_DATE, a.DEAL_KEY, (select name from ideal dd where a.deal_key = dd.deal_key) DEALNAME, a.deal_term_key
FROM
ideal d, ideal_term a,( select deal_key, deal_term_key, max(createdOn) maxdate from Ideal_term B
where createdOn <= '03-OCT-12 10.03.00 AM' group by deal_key, deal_term_key ) B
WHERE a.begin_date <= '20-MAR-09 01.01.00 AM'
and a.end_date >= '19-MAR-09 01.00.00 AM'
and A.deal_key = b.deal_key
and A.deal_term_key = b.deal_term_key
and a.createdOn = b.maxdate
and d.deal_key = a.deal_key
and d.name like 'MVPP1 B'
order by
a.begin_date, a.deal_key, a.deal_term_key;
This performed very poorly for a record in one of the tables that has 43,000+ revisions. It took about 1 minute and 40 seconds. I asked the database guy at my company for help with it and he re-wrote it like so:
SELECT a.BEGIN_DATE, a.END_DATE, a.DEAL_KEY, (select name from ideal dd where a.deal_key = dd.deal_key) DEALNAME, a.deal_term_key
FROM ideal d
INNER JOIN (SELECT deal_key,
deal_term_key,
MAX(createdOn) maxdate
FROM Ideal_term B2
WHERE '03-OCT-12 10.03.00 AM' >= createdOn
GROUP BY deal_key, deal_term_key) B1
ON d.deal_key = B1.deal_key
INNER JOIN ideal_term a
ON B1.deal_key = A.deal_key
AND B1.deal_term_key = A.deal_term_key
AND B1.maxdate = a.createdOn
AND d.deal_key = a.deal_key + 0
WHERE a.begin_date <= '20-MAR-09 01.01.00 AM'
AND a.end_date >= '19-MAR-09 01.00.00 AM'
AND d.name LIKE 'MVPP1 B'
ORDER BY a.begin_date, a.deal_key, a.deal_term_key
this works much better; it only takes 0.13 seconds. I've been trying to figure out why exactly his version performs so much better. His only explanation was that the "+ 0" in the join condition prevented Oracle from using an index for that column, which created a bad plan initially.
I think there has to be more to it than that, though. Can someone give me a detailed explanation of why the second version of the query performed so much faster?
Thanks.
Edited by: su**** on Oct 10, 2012 1:31 PM
I used Autotrace in SQL Developer. Is that sufficient? Here is the Autotrace and Explain for the slow query:
and for the fast query:
I said that I thought there was more to it because when my team members and I looked at the re-worked query the database guy sent us, our initial thought was that in the slow query some of the tables didn't have joins, and because of that the query formed a Cartesian product, resulting in a huge 43,000+ row matrix.
In his version all tables had their joins properly defined, and in addition he had that "+ 0", which told Oracle to ignore the index on the deal_key attribute of table ideal_term. I spoke with the database guy today and he confirmed our theory. -
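For reference, the "+ 0" trick works because applying any function or arithmetic to an indexed column stops the optimizer from using a plain index on it. A more self-documenting alternative is the NO_INDEX hint; this is only a sketch, and the index name here is hypothetical:

```sql
-- Suppress a specific index explicitly instead of hiding the intent in "+ 0"
SELECT /*+ NO_INDEX(a ideal_term_deal_key_idx) */
       a.begin_date, a.end_date, a.deal_key, a.deal_term_key
FROM   ideal d
JOIN   ideal_term a ON d.deal_key = a.deal_key;
```

Both approaches stop the optimizer from driving through that index; the hint states the intent explicitly instead of relying on an arithmetic no-op that a future maintainer might "clean up".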
How to compare same SQL query performance in different DB servers.
We have Production and Validation Environment of Oracle11g DB on two Solaris OSs.
The H/W, DB, and other configurations of the two Oracle DBs are almost the same in PROD and VAL.
But we detected a large SQL query performance difference between the PROD DB and the VAL DB for the same SQL query.
I would like to find and solve the cause of this situation.
How could I do that ?
I plan to compare the SQL execution plans in the PROD and VAL DBs, and the index fragmentation.
Before that, I thought I needed to keep the DB statistics information in the same condition in the PROD and VAL DBs.
So, I plan to execute: alter system FLUSH BUFFER_CACHE;
But I am worried about the bad effects of alter system FLUSH BUFFER_CACHE; on end users.
If we did alter system FLUSH BUFFER_CACHE; and got the execution plan of that SQL query at a time when end users are not using the system,
would there be no large bad effect on end users after those operations?
Could you please let me know your recommendations for comparing SQL query performance? Thank you.
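Note that the buffer cache affects elapsed time, not which plan the optimizer chooses, so for comparing plans you may not need FLUSH BUFFER_CACHE at all. A safer first check (a sketch; the owner and table names are placeholders) is whether the optimizer statistics actually match between the two databases:

```sql
-- Run on both PROD and VAL and diff the results
SELECT table_name, num_rows, blocks, last_analyzed
FROM   dba_tab_statistics
WHERE  owner = 'YOUR_SCHEMA'
AND    table_name IN ('YOUR_TABLE_1', 'YOUR_TABLE_2');
```

If NUM_ROWS or LAST_ANALYZED differ significantly between PROD and VAL, the plans will likely differ too. Also be aware that FLUSH BUFFER_CACHE forces every session to re-read its blocks from disk, so expect a temporary slowdown for all end users even off-hours.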
I got an AWR report for the VAL DB server only, but it looks strange.
Is there anything wrong in the DB, or in how I got the AWR report?
Host Name | Platform | CPUs | Cores | Sockets | Memory (GB)
xxxx | Solaris[tm] OE (64-bit) | | | | .00
 | Snap Id | Snap Time | Sessions | Cursors/Session
Begin Snap: | xxxx | 13-Apr-15 04:00:04 | |
End Snap: | xxxx | 14-Apr-15 04:00:22 | |
Elapsed: | | 1,440.30 (mins) | |
DB Time: | | 0.00 (mins) | |
Report Summary
Cache Sizes
 | Begin | End
Buffer Cache: | M | M | Std Block Size: | K
Shared Pool Size: | 0M | 0M | Log Buffer: | K
Load Profile
 | Per Second | Per Transaction | Per Exec | Per Call
DB Time(s): | 0.0 | 0.0 | 0.00 | 0.00
DB CPU(s): | 0.0 | 0.0 | 0.00 | 0.00
Redo size: | | | |
Logical reads: | 0.0 | 1.0 | |
Block changes: | 0.0 | 1.0 | |
Physical reads: | 0.0 | 1.0 | |
Physical writes: | 0.0 | 1.0 | |
User calls: | 0.0 | 1.0 | |
Parses: | 0.0 | 1.0 | |
Hard parses: | | | |
W/A MB processed: | 16.7 | 1,442,472.0 | |
Logons: | | | |
Executes: | 0.0 | 1.0 | |
Rollbacks: | | | |
Transactions: | 0.0 | | |
Instance Efficiency Percentages (Target 100%)
Buffer Nowait %: | | Redo NoWait %: |
Buffer Hit %: | | In-memory Sort %: |
Library Hit %: | 96.69 | Soft Parse %: |
Execute to Parse %: | 0.00 | Latch Hit %: |
Parse CPU to Parse Elapsd %: | | % Non-Parse CPU: |
Shared Pool Statistics
 | Begin | End
Memory Usage %: | |
% SQL with executions>1: | 34.82 | 48.31
% Memory for SQL w/exec>1: | 63.66 | 73.05
Top 5 Timed Foreground Events
Event | Waits | Time(s) | Avg wait (ms) | % DB time | Wait Class
DB CPU | | 0 | | 100.00 |
Host CPU (CPUs: Cores: Sockets: )
Load Average Begin | Load Average End | %User | %System | %WIO | %Idle
(no values reported)
Instance CPU
%Total CPU | %Busy CPU | %DB time waiting for CPU (Resource Manager)
(no values reported)
Memory Statistics
 | Begin | End
Host Mem (MB): | |
SGA use (MB): | 46,336.0 | 46,336.0
PGA use (MB): | 713.6 | 662.6
% Host Mem used for SGA+PGA: | |
Time Model Statistics
No data exists for this section of the report.
Operating System Statistics
No data exists for this section of the report.
Operating System Statistics - Detail
No data exists for this section of the report.
Foreground Wait Class
s - second, ms - millisecond - 1000th of a second
ordered by wait time desc, waits desc
%Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Captured Time accounts for % of Total DB time .00 (s)
Total FG Wait Time: (s) DB CPU time: .00 (s)
Wait Class | Waits | %Time-outs | Total Wait Time (s) | Avg wait (ms) | %DB time
DB CPU | | | 0 | | 100.00
Foreground Wait Events
No data exists for this section of the report.
Background Wait Events
ordered by wait time desc, waits desc (idle events last)
Only events with Total Wait Time (s) >= .001 are shown
%Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Event | Waits | %Time-outs | Total Wait Time (s) | Avg wait (ms) | Waits/txn | % bg time
log file parallel write | 527,034 | 0 | 2,209 | 4 | 527,034.00 |
db file parallel write | 381,966 | 0 | 249 | 1 | 381,966.00 |
os thread startup | 2,650 | 0 | 151 | 57 | 2,650.00 |
latch: messages | 125,526 | 0 | 89 | 1 | 125,526.00 |
control file sequential read | 148,662 | 0 | 54 | 0 | 148,662.00 |
control file parallel write | 41,935 | 0 | 28 | 1 | 41,935.00 |
Log archive I/O | 5,070 | 0 | 14 | 3 | 5,070.00 |
Disk file operations I/O | 8,091 | 0 | 10 | 1 | 8,091.00 |
log file sequential read | 3,024 | 0 | 6 | 2 | 3,024.00 |
db file sequential read | 1,299 | 0 | 2 | 2 | 1,299.00 |
latch: shared pool | 722 | 0 | 1 | 1 | 722.00 |
enq: CF - contention | 4 | 0 | 1 | 208 | 4.00 |
reliable message | 1,316 | 0 | 1 | 1 | 1,316.00 |
log file sync | 71 | 0 | 1 | 9 | 71.00 |
enq: CR - block range reuse ckpt | 36 | 0 | 0 | 13 | 36.00 |
enq: JS - queue lock | 459 | 0 | 0 | 1 | 459.00 |
log file single write | 414 | 0 | 0 | 1 | 414.00 |
enq: PR - contention | 5 | 0 | 0 | 57 | 5.00 |
asynch descriptor resize | 67,076 | 100 | 0 | 0 | 67,076.00 |
LGWR wait for redo copy | 5,184 | 0 | 0 | 0 | 5,184.00 |
rdbms ipc reply | 1,234 | 0 | 0 | 0 | 1,234.00 |
ADR block file read | 384 | 0 | 0 | 0 | 384.00 |
SQL*Net message to client | 189,490 | 0 | 0 | 0 | 189,490.00 |
latch free | 559 | 0 | 0 | 0 | 559.00 |
db file scattered read | 17 | 0 | 0 | 6 | 17.00 |
resmgr:internal state change | 1 | 100 | 0 | 100 | 1.00 |
direct path read | 301 | 0 | 0 | 0 | 301.00 |
enq: RO - fast object reuse | 35 | 0 | 0 | 2 | 35.00 |
direct path write | 122 | 0 | 0 | 1 | 122.00 |
latch: cache buffers chains | 260 | 0 | 0 | 0 | 260.00 |
db file parallel read | 1 | 0 | 0 | 41 | 1.00 |
ADR file lock | 144 | 0 | 0 | 0 | 144.00 |
latch: redo writing | 55 | 0 | 0 | 1 | 55.00 |
ADR block file write | 120 | 0 | 0 | 0 | 120.00 |
wait list latch free | 2 | 0 | 0 | 10 | 2.00 |
latch: cache buffers lru chain | 44 | 0 | 0 | 0 | 44.00 |
buffer busy waits | 3 | 0 | 0 | 2 | 3.00 |
latch: call allocation | 57 | 0 | 0 | 0 | 57.00 |
SQL*Net more data to client | 55 | 0 | 0 | 0 | 55.00 |
ARCH wait for archivelog lock | 78 | 0 | 0 | 0 | 78.00 |
rdbms ipc message | 3,157,653 | 40 | 4,058,370 | 1285 | 3,157,653.00 |
Streams AQ: qmn slave idle wait | 11,826 | 0 | 172,828 | 14614 | 11,826.00 |
DIAG idle wait | 170,978 | 100 | 172,681 | 1010 | 170,978.00 |
dispatcher timer | 1,440 | 100 | 86,417 | 60012 | 1,440.00 |
Streams AQ: qmn coordinator idle wait | 6,479 | 48 | 86,413 | 13337 | 6,479.00 |
shared server idle wait | 2,879 | 100 | 86,401 | 30011 | 2,879.00 |
Space Manager: slave idle wait | 17,258 | 100 | 86,324 | 5002 | 17,258.00 |
pmon timer | 46,489 | 62 | 86,252 | 1855 | 46,489.00 |
smon timer | 361 | 66 | 86,145 | 238628 | 361.00 |
VKRM Idle | 1 | 0 | 14,401 | 14400820 | 1.00 |
SQL*Net message from client | 253,909 | 0 | 419 | 2 | 253,909.00 |
class slave wait | 379 | 0 | 0 | 0 | 379.00 |
Wait Event Histogram
No data exists for this section of the report.
Wait Event Histogram Detail (64 msec to 2 sec)
No data exists for this section of the report.
Wait Event Histogram Detail (4 sec to 2 min)
No data exists for this section of the report.
Wait Event Histogram Detail (4 min to 1 hr)
No data exists for this section of the report.
Service Statistics
No data exists for this section of the report.
Service Wait Class Stats
No data exists for this section of the report.
SQL Statistics
No data exists for any of its sections (SQL ordered by Elapsed Time, CPU Time, User I/O Wait Time, Gets, Reads, Physical Reads (UnOptimized), Executions, Parse Calls, Sharable Memory, Version Count; Complete List of SQL Text).
Instance Activity Statistics
Instance Activity Stats: No data exists for this section of the report.
Instance Activity Stats - Absolute Values: No data exists for this section of the report.
Instance Activity Stats - Thread Activity
Statistics identified by '(derived)' come from sources other than SYSSTAT
Statistic | Total | per Hour
log switches (derived) | 69 | 2.87
IO Stats
IOStat by Function summary
'Data' columns suffixed with M,G,T,P are in multiples of 1024; other columns suffixed with K,M,G,T,P are in multiples of 1000
ordered by (Data Read + Write) desc
Function Name | Reads: Data | Reqs per sec | Data per sec | Writes: Data | Reqs per sec | Data per sec | Waits: Count | Avg Tm(ms)
Others | 28.8G | 20.55 | .340727 | 16.7G | 2.65 | .198442 | 1803K | 0.01
Direct Reads | 43.6G | 57.09 | .517021 | 411M | 0.59 | .004755 | 0 |
LGWR | 19M | 0.02 | .000219 | 41.9G | 21.87 | .496493 | 2760 | 0.08
Direct Writes | 16M | 0.00 | .000185 | 8.9G | 1.77 | .105927 | 0 |
DBWR | 0M | 0.00 | 0M | 6.7G | 4.42 | .079670 | 0 |
Buffer Cache Reads | 3.1G | 3.67 | .037318 | 0M | 0.00 | 0M | 260.1K | 3.96
TOTAL: | 75.6G | 81.33 | .895473 | 74.7G | 31.31 | .885290 | 2065.8K | 0.51
IOStat by Filetype summary
'Data' columns suffixed with M,G,T,P are in multiples of 1024; other columns suffixed with K,M,G,T,P are in multiples of 1000
Small Read and Large Read are average service times, in milliseconds
Ordered by (Data Read + Write) desc
Filetype Name | Reads: Data | Reqs per sec | Data per sec | Writes: Data | Reqs per sec | Data per sec | Small Read | Large Read
Data File | 53.2G | 78.33 | .630701 | 8.9G | 7.04 | .105197 | 0.37 | 21.51
Log File | 13.9G | 0.18 | .164213 | 41.9G | 21.85 | .496123 | 0.02 | 2.93
Archive Log | 0M | 0.00 | 0M | 13.9G | 0.16 | .164213 | |
Temp File | 5.6G | 0.67 | .066213 | 8.1G | 0.80 | .096496 | 5.33 | 3713.27
Control File | 2.9G | 2.16 | .034333 | 2G | 1.46 | .023247 | 0.05 | 19.98 -
TDE Table encryption SQL Query performance is very very slow
Hi,
We have encrypted one column of one table using the TDE method with the no salt option, and it has impacted the response time of the SQL query, which now takes 32 hours.
Oracle database version is 10.2.0.5
Example like
alter table abc modify (numberx encrypt no salt);
after encryption the SQL execution is taking more time; below is the statement:
================================
declare fNumber cardx.numberx%TYPE;
fCount integer :=0;
fserno cardx.serno%TYPE;
fcaccserno cardx.caccserno%TYPE;
ftrxnfeeprofserno cardx.trxnfeeprofserno%TYPE;
fstfinancial cardx.stfinancial%TYPE;
fexpirydate cardx.expirydate%TYPE;
fpreviousexpirydate cardx.previousexpirydate%TYPE;
fexpirydatestatus cardx.expirydatestatus%TYPE;
fblockeddate cardx.blockeddate%TYPE;
fproduct cardx.product%TYPE;
faccstmtsummaryind cardx.accstmtsummaryind%TYPE;
finstitution_id cardx.institution_id%TYPE;
fdefaultaccounttype cardx.defaultaccounttype%TYPE;
flanguagecode cardx.languagecode%TYPE;
froute integer;
begin
  for i in (select c.numberx from cardx c where c.stgeneral='NORM')
  loop
    select c.serno, c.caccserno, c.trxnfeeprofserno, c.stfinancial, c.expirydate,
           c.previousexpirydate, c.expirydatestatus, c.blockeddate, c.product,
           c.accstmtsummaryind, c.institution_id, c.defaultaccounttype, c.languagecode,
           (select count(*) from caccountrouting ar
             where ar.cardxserno=c.serno
               and ar.rtrxntype=ISS_REWARDS.GetRewardTrxnTypeserno)
      into fserno, fcaccserno, ftrxnfeeprofserno, fstfinancial, fexpirydate,
           fpreviousexpirydate, fexpirydatestatus, fblockeddate, fproduct,
           faccstmtsummaryind, finstitution_id, fdefaultaccounttype, flanguagecode, froute
      from cardx c
     where c.numberx=i.numberx;
    fCount := fCount+1;
  end loop;
  dbms_output.put_line(fCount);
end;
===============================
Any help would be great appreciate
Thanks,
Mohammed.
Edited by: Mohammed Yousuf on Oct 7, 2011 12:47 PM
Still, that's not enough evidence to prove that TDE is indeed the culprit. Can you trace the query before and after enabling TDE using event 10046 and post it here?
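A minimal sketch of such a 10046 trace session (these are standard Oracle session events; the tracefile identifier is only there to make the file easy to find):

```sql
-- Tag the trace file name, then enable extended SQL trace
ALTER SESSION SET tracefile_identifier = 'tde_test';
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- ... run the PL/SQL block / query here ...

-- Turn tracing off again
ALTER SESSION SET EVENTS '10046 trace name context off';
```

Level 12 includes bind values and wait events; the resulting trace file can be formatted with tkprof and the before/after-encryption runs compared line by line.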
Aman.... -
Speed up SQL Query performance
Hi,
I have a SQL query with some inner joins between tables.
In this query I will be selecting values from a set of values obtained by going through all rows in a table.
I am using an inner join between two tables to achieve this purpose.
But, as the table whose rows I go through is extremely big, it takes a lot of time to scan them all and the query slows down.
Is there any other way by which I can speed up the query?
This is the output of my test plan.
Please suggest which one needs to be improved.
PLAN_TABLE_OUTPUT
Plan hash value: 3453987661
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 3 | 1002 | 3920 (1)| 00:00:48 |
| 1 | SORT ORDER BY | | 3 | 1002 | 3920 (1)| 00:00:48 |
|* 2 | TABLE ACCESS BY INDEX ROWID | AS_EVENT_CHR_DATA | 1 | 17 | 4 (0)| 00:00:01 |
| 3 | NESTED LOOPS | | 3 | 1002 | 3919 (1)| 00:00:48 |
|* 4 | HASH JOIN | | 3 | 951 | 3907 (1)| 00:00:47 |
|* 5 | TABLE ACCESS FULL | EV_CHR_DATA_TYPE | 1 | 46 | 2 (0)| 00:00:01 |
| 6 | TABLE ACCESS BY INDEX ROWID | AS_EVENT_CHR_DATA | 702 | 50544 | 3883 (1)| 00:00:47 |
| 7 | NESTED LOOPS | | 348 | 94308 | 3904 (1)| 00:00:47 |
| 8 | NESTED LOOPS | | 1 | 199 | 21 (5)| 00:00:01 |
| 9 | NESTED LOOPS | | 1 | 174 | 20 (5)| 00:00:01 |
|* 10 | HASH JOIN | | 1 | 127 | 18 (6)| 00:00:01 |
| 11 | NESTED LOOPS | | 1 | 95 | 13 (0)| 00:00:01 |
| 12 | NESTED LOOPS | | 1 | 60 | 12 (0)| 00:00:01 |
| 13 | NESTED LOOPS | | 1 | 33 | 10 (0)| 00:00:01 |
| 14 | TABLE ACCESS BY INDEX ROWID| ASSET | 1 | 21 | 2 (0)| 00:00:01 |
|* 15 | INDEX UNIQUE SCAN | SERIAL_NUMBER_K3 | 1 | | 1 (0)| 00:00:01 |
|* 16 | INDEX FAST FULL SCAN | SYS_C0053318 | 1 | 12 | 8 (0)| 00:00:01 |
| 17 | TABLE ACCESS BY INDEX ROWID | SEGMENT_CHILD | 1 | 27 | 2 (0)| 00:00:01 |
|* 18 | INDEX RANGE SCAN | SYS_C0053319 | 12 | | 1 (0)| 00:00:01 |
| 19 | TABLE ACCESS BY INDEX ROWID | SEGMENT | 1 | 35 | 1 (0)| 00:00:01 |
|* 20 | INDEX UNIQUE SCAN | SYS_C0053318 | 1 | | 0 (0)| 00:00:01 |
|* 21 | TABLE ACCESS FULL | SEGMENT_TYPE | 1 | 32 | 4 (0)| 00:00:01 |
| 22 | TABLE ACCESS BY INDEX ROWID | ASSET_ON_SEGMENT | 1 | 47 | 2 (0)| 00:00:01 |
|* 23 | INDEX RANGE SCAN | ASSET_ON_SEGME_UK8115533871153 | 1 | | 1 (0)| 00:00:01 |
| 24 | TABLE ACCESS BY INDEX ROWID | ASSET | 1 | 25 | 1 (0)| 00:00:01 |
|* 25 | INDEX UNIQUE SCAN | SYS_C0053240 | 1 | | 0 (0)| 00:00:01 |
|* 26 | INDEX RANGE SCAN | AS_EV_CHR_DATA_ASSETPK | 4673 | | 28 (4)| 00:00:01 |
|* 27 | INDEX RANGE SCAN | SYS_C0053249 | 5 | | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter("PARAMETRIC_TAG_NAME"."DATA_VALUE"='EngineOilConsumption')
4 - access("AS_EVENT_CHR_DATA"."EC_DB_SITE"="EV_CHR_DATA_TYPE"."EC_DB_SITE" AND
"AS_EVENT_CHR_DATA"."EC_DB_ID"="EV_CHR_DATA_TYPE"."EC_DB_ID" AND
"AS_EVENT_CHR_DATA"."EC_TYPE_CODE"="EV_CHR_DATA_TYPE"."EC_TYPE_CODE")
5 - filter("EV_CHR_DATA_TYPE"."NAME"='servicing ptric time unit')
10 - access("OILSEG"."SG_TYPE_CODE"="SEGMENT_TYPE"."SG_TYPE_CODE")
15 - access("ASSET"."SERIAL_NUMBER"='30870')
16 - filter("ASSET"."ASSET_ID"="SEGMENT"."SEGMENT_ID")
18 - access("SEGMENT"."SEGMENT_SITE"="SEGMENT_CHILD"."SEGMENT_SITE" AND
"SEGMENT"."SEGMENT_ID"="SEGMENT_CHILD"."SEGMENT_ID")
20 - access("SEGMENT_CHILD"."CHILD_SG_SITE"="OILSEG"."SEGMENT_SITE" AND
"SEGMENT_CHILD"."CHILD_SG_ID"="OILSEG"."SEGMENT_ID")
21 - filter("SEGMENT_TYPE"."NAME"='Aircraft Equipment Engine Holder')
23 - access("OILSEG"."SEGMENT_ID"="ASSET_ON_SEGMENT"."SEGMENT_ID")
25 - access("ASSET_ON_SEGMENT"."ASSET_ORG_SITE"="OILASSET"."ASSET_ORG_SITE" AND
"ASSET_ON_SEGMENT"."ASSET_ID"="OILASSET"."ASSET_ID")
26 - access("ASSET_ON_SEGMENT"."ASSET_ORG_SITE"="AS_EVENT_CHR_DATA"."ASSET_ORG_SITE" AND
"ASSET_ON_SEGMENT"."ASSET_ID"="AS_EVENT_CHR_DATA"."ASSET_ID")
27 - access("AS_EVENT_CHR_DATA"."AS_EV_ID"="PARAMETRIC_TAG_NAME"."AS_EV_ID")
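Coming back to the original problem: the row-by-row nested cursor loops can usually be collapsed into one set-based INSERT ... SELECT, which removes the per-part, per-place looping entirely and lets Oracle do the work as a single join. Below is a sketch only, assuming the table and column names from the question (all_parts_already, all_parts, place_master) and a guessed column list for the insert, since the original snippet is ambiguous about column order:

```sql
-- One set-based statement instead of two nested cursor loops.
-- Build (part, place) pairs for every part already present, subtract
-- the pairs that already exist, then fetch the matching rows from
-- place_master in bulk.
INSERT INTO all_parts_already (part, part_desc, technique, company, place)
SELECT pm.part,
       pm.place_desc,
       'Engine',
       'TVS',
       missing.place
FROM  (SELECT apa.part, ap.place
       FROM  (SELECT DISTINCT part
              FROM   all_parts_already
              WHERE  technique = 'Engine'
              AND    company   = 'TVS') apa
       CROSS JOIN
             (SELECT DISTINCT place
              FROM   all_parts
              WHERE  technique = 'Engine'
              AND    company   = 'TVS') ap
       MINUS
       SELECT part, place
       FROM   all_parts_already
       WHERE  technique = 'Engine'
       AND    company   = 'TVS') missing
JOIN   place_master pm
       ON  pm.part  = missing.part
       AND pm.place = missing.place;
```

The MINUS branch can equally be written as a NOT EXISTS correlated subquery; either way the optimizer can execute it as a single (anti-)join pass over the tables instead of re-running the inner query once per part, which is where the days-long runtime was going.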