Performance: Query is taking a long time
Hi,
A query on the Cube jumps to a query on the ODS, and the query on the ODS takes a very long time. How can we optimize/improve this?
Rgds,
C.V.
Message was edited by:
C.V. P
Hi,
Well, I am sure you are aware that Data Stores are not optimized for reporting.
The Data Store active table can become very large, so reporting on that table means reporting on a HUGE amount of data.
The common solution is to create an additional index on the ODS table to speed up reporting performance. In 3.x this can be done from the ODS maintenance screen.
Also make sure the DB statistics are up to date (add the ODS active table in transaction DB20).
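For reference, the secondary index that the ODS maintenance screen generates is an ordinary database index on the active table. A hypothetical sketch (the table name /BIC/AZSALES00 and the column list are placeholders for your own ODS object):

```sql
-- Placeholder names: replace /BIC/AZSALES00 and the column list with your
-- ODS active table and the characteristics your queries filter on.
-- BW generates an equivalent index when you add it in ODS maintenance.
CREATE INDEX "/BIC/AZSALES00~Z01"
    ON "/BIC/AZSALES00" (DOC_NUMBER, CALDAY);
```

Creating the index from the ODS maintenance screen (rather than directly on the database) keeps it known to BW, so BW can manage it across activations.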
Look at this thread for the options you have:
ODS Performance?
Please assign points if useful,
Gili
Similar Messages
-
Account query is taking a long time and not giving desired results
Hi,
I'm trying to run the following query to get code combination IDs for the last six months with: 1) no activity in gl_je_lines;
2) sum of begin balance and PTD equal to zero.
Here's the query:
select
cc.segment3 SEGMENT
from apps.gl_code_combinations cc,
apps.gl_balances bal,
apps.fnd_flex_values_vl ffvl,
apps.fnd_flex_value_sets ffvs,
(select period_name
from apps.gl_periods
where period_set_name ='abc Calendar'
and to_date(period_name,'Mon-YY') between add_months(sysdate,(6*-1)-1) and sysdate) p
where cc.CODE_COMBINATION_ID = bal.CODE_COMBINATION_ID
and ffvl.FLEX_VALUE = cc.segment3
and ffvl.FLEX_VALUE_SET_ID = ffvs.flex_value_set_id
and ffvs.flex_value_set_id = 2222222
and bal.SET_OF_BOOKS_ID =555
and cc.CHART_OF_ACCOUNTS_ID = 11111
and bal.period_name = p.period_name
and ffvl.ENABLED_FLAG = 'Y'
and ffvl.END_DATE_ACTIVE is null
and bal.TEMPLATE_ID IS NULL
and bal.actual_flag='A'
and bal.currency_code <> 'STAT'
and ffvl.creation_date <= add_months(sysdate,(6*-1)-1)
and cc.SEGMENT3 not in (
select
distinct gcc.SEGMENT3
from apps.gl_je_lines l
, apps.gl_code_Combinations gcc,
(select period_name
from apps.gl_periods
where period_set_name ='abc Calendar'
and end_date > add_months(last_day(sysdate),-1) and end_date <= last_day(sysdate)) lp
where l.code_combination_id = gcc.code_combination_id
and gcc.CHART_OF_ACCOUNTS_ID = 11111
and l.period_name=lp.period_name
and l.set_of_books_id = 555
and l.status='P')
group by cc.SEGMENT3
HAVING sum(abs(nvl(bal.BEGIN_BALANCE_DR,0))-abs(nvl(bal.BEGIN_BALANCE_CR,0))+abs(nvl(bal.PERIOD_NET_DR,0))-abs(nvl(bal.PERIOD_NET_CR,0))) = 0
Here's the explain plan:
Operation Node Cost IO Cost CPU Cost Cardinality Object Name Options Object Type Optimizer
SELECT STATEMENT 5155 5094 554583434 1 ALL_ROWS
FILTER
HASH (GROUP BY) 5155 5094 554583434 1 GROUP BY
FILTER
TABLE ACCESS (BY INDEX ROWID) 4 4 31301 1 GL_BALANCES BY INDEX ROWID TABLE ANALYZED
NESTED LOOPS 18 18 427467 1
MERGE JOIN (CARTESIAN) 14 14 396166 1 CARTESIAN
TABLE ACCESS (BY INDEX ROWID) 4 4 49409 1 GL_CODE_COMBINATIONS BY INDEX ROWID TABLE ANALYZED
NESTED LOOPS 8 8 82677 1
NESTED LOOPS 4 4 33268 1
NESTED LOOPS 4 4 31368 1
INDEX (UNIQUE SCAN) 1 1 8171 1 FND_FLEX_VALUE_SETS_U1 UNIQUE SCAN INDEX (UNIQUE) ANALYZED
TABLE ACCESS (BY INDEX ROWID) 3 3 23196 1 FND_FLEX_VALUES BY INDEX ROWID TABLE ANALYZED
INDEX (RANGE SCAN) 2 2 15293 1 FND_FLEX_VALUES_N3 RANGE SCAN INDEX ANALYZED
INDEX (UNIQUE SCAN) 0 0 1900 1 FND_FLEX_VALUES_TL_U1 UNIQUE SCAN INDEX (UNIQUE) ANALYZED
INDEX (RANGE SCAN) 2 2 33113 4 GL_CODE_COMBINATIONS_N3 RANGE SCAN INDEX ANALYZED
FILTER
TABLE ACCESS (BY INDEX ROWID) 20 20 163416 1 GL_JE_LINES BY INDEX ROWID TABLE ANALYZED
NESTED LOOPS 5136 5076 545022343 1
MERGE JOIN (CARTESIAN) 5103 5043 544739760 4 CARTESIAN
TABLE ACCESS (BY INDEX ROWID) 5 5 37587 1 GL_PERIODS BY INDEX ROWID TABLE ANALYZED
INDEX (RANGE SCAN) 2 2 15043 4 GL_PERIODS_N2 RANGE SCAN INDEX ANALYZED
BUFFER (SORT) 5098 5038 544702173 7 SORT
TABLE ACCESS (FULL) 5098 5038 544702173 7 GL_CODE_COMBINATIONS FULL TABLE ANALYZED
INDEX (RANGE SCAN) 3 3 29414 36 GL_JE_LINES_N1 RANGE SCAN INDEX ANALYZED
BUFFER (SORT) 10 10 346757 1 SORT
INDEX (FULL SCAN) 6 6 313489 1 GL_PERIODS_U1 FULL SCAN INDEX (UNIQUE) ANALYZED
INDEX (RANGE SCAN) 2 2 15493 1 GL_BALANCES_N1 RANGE SCAN INDEX ANALYZED
This query is taking two hours to get results.
How can I tune this query to get results faster?
Hi,
170 posts and you still do not know how to use {noformat}{noformat} tags?
Please read <a href="https://forums.oracle.com/forums/thread.jspa?threadID=2174552#9360002">How do I ask a question on the forums?</a>
If you have a performance issue have a look at <a href="https://forums.oracle.com/forums/thread.jspa?threadID=2174552#9360003">How to improve the performance of my query? / My query is running slow. </a>
Additionally, when you post some code, please enclose it between two lines starting with {noformat}{noformat}
i.e.:
{noformat}{noformat}
SELECT ...
{noformat}{noformat}
Also consider closing some of your questions when someone has answered them. Looking at your profile:
Handle: user518071
Status Level: Newbie
Registered: Jun 30, 2006
Total Posts: 170
Total Questions: 92 (58 unresolved)
It still looks like you have 58 unresolved questions. Are they really all unresolved?
Regards.
Al -
Simple query is taking a long time
Hi Experts,
The query below is taking a long time.
[code]SELECT FS.*
FROM ORL.FAX_STAGE FS
INNER JOIN
ORL.FAX_SOURCE FSRC
INNER JOIN
GLOBAL_BU_MAPPING GBM
ON GBM.BU_ID = FSRC.BUID
ON UPPER (FSRC.FAX_NUMBER) = UPPER (FS.DESTINATION)
WHERE FSRC.IS_DELETED = 'N'
AND GBM.BU_ID IS NOT NULL
AND UPPER (FS.FAX_STATUS) ='COMPLETED';[/code]
This query returns 1,645,457 records.
[code]PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 625K| 341M| 45113 (1)|
| 1 | HASH JOIN | | 625K| 341M| 45113 (1)|
| 2 | NESTED LOOPS | | 611 | 14664 | 22 (0)|
| 3 | TABLE ACCESS FULL| FAX_SOURCE | 2290 | 48090 | 22 (0)|
| 4 | INDEX RANGE SCAN | GLOBAL_BU_MAPPING_BUID | 1 | 3 | 0 (0)|
| 5 | TABLE ACCESS FULL | FAX_STAGE | 2324K| 1214M| 45076 (1)|
PLAN_TABLE_OUTPUT
Note
- 'PLAN_TABLE' is old version
15 rows selected.[/code]
Record counts per distinct value in each table:
[code]SELECT FAX_STATUS,count(*)
FROM fax_STAGE
GROUP BY FAX_STATUS;
FAX_STATUS COUNT(*)
BROKEN 10
Broken - New 9
Completed 2324493
New 20
SELECT is_deleted,COUNT(*)
FROM FAX_SOURCE
GROUP BY IS_DELETED;
IS_DELETED COUNT(*)
N 2290
Y 78[/code]
Total number of records in each table.
[code]SELECT COUNT(*) FROM ORL.FAX_SOURCE FSRC-- 2368
SELECT COUNT(*) FROM ORL.FAX_STAGE--2324532
SELECT COUNT(*) FROM APPS_GLOBAL.GLOBAL_BU_MAPPING--9
[/code]
To improve the performance of this query I have created the following indexes.
[code]Functional based index on UPPER (FSRC.FAX_NUMBER) ,UPPER (FS.DESTINATION) and UPPER (FS.FAX_STATUS).
Bitmap index on FSRC.IS_DELETED.
Normal Index on GBM.BU_ID and FSRC.BUID.
[/code]
But the performance of this query is still bad.
What else can I do, apart from this, to improve the performance of this query?
Please help me.
Thanks in advance.
I have created the following indexes:
CREATE INDEX ORL.IDX_DESTINATION_RAM ON ORL.FAX_STAGE(UPPER("DESTINATION"))
CREATE INDEX ORL.IDX_FAX_STATUS_RAM ON ORL.FAX_STAGE(LOWER("FAX_STATUS"))
CREATE INDEX ORL.IDX_UPPER_FAX_STATUS_RAM ON ORL.FAX_STAGE(UPPER("FAX_STATUS"))
CREATE INDEX ORL.IDX_BUID_RAM ON ORL.FAX_SOURCE(BUID)
CREATE INDEX ORL.IDX_FAX_NUMBER_RAM ON ORL.FAX_SOURCE(UPPER("FAX_NUMBER"))
CREATE BITMAP INDEX ORL.IDX_IS_DELETED_RAM ON ORL.FAX_SOURCE(IS_DELETED)
After creating these indexes, performance improved.
But our DBA said that the new bitmap index on the FAX_SOURCE table (ORL.IDX_IS_DELETED_RAM) can cause locks on multiple rows if the IS_DELETED column is in use, and asked us to proceed with detailed tests.
I am sending the explain plans from before and after the indexes were created.
SELECT FS.*
FROM ORL.FAX_STAGE FS
INNER JOIN
ORL.FAX_SOURCE FSRC
INNER JOIN
GLOBAL_BU_MAPPING GBM
ON GBM.BU_ID = FSRC.BUID
ON UPPER (FSRC.FAX_NUMBER) = UPPER (FS.DESTINATION)
WHERE FSRC.IS_DELETED = 'N'
AND GBM.BU_ID IS NOT NULL
AND UPPER (FS.FAX_STATUS) =:B1;
--OLD without indexes
PLAN_TABLE_OUTPUT
Plan hash value: 3076973749
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 141K| 85M| 45130 (1)| 00:09:02 |
|* 1 | HASH JOIN | | 141K| 85M| 45130 (1)| 00:09:02 |
| 2 | NESTED LOOPS | | 611 | 18330 | 22 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL| FAX_SOURCE | 2290 | 59540 | 22 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | GLOBAL_BU_MAPPING_BUID | 1 | 4 | 0 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | FAX_STAGE | 23245 | 13M| 45106 (1)| 00:09:02 |
PLAN_TABLE_OUTPUT
Predicate Information (identified by operation id):
1 - access(UPPER("FSRC"."FAX_NUMBER")=UPPER("FS"."DESTINATION"))
3 - filter("FSRC"."IS_DELETED"='N')
4 - access("GBM"."BU_ID"="FSRC"."BUID")
filter("GBM"."BU_ID" IS NOT NULL)
5 - filter(UPPER("FS"."FAX_STATUS")=SYS_OP_C2C(:B1))
21 rows selected.
--NEW with indexes.
PLAN_TABLE_OUTPUT
Plan hash value: 665032407
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 5995 | 3986K| 3117 (1)| 00:00:38 |
|* 1 | HASH JOIN | | 5995 | 3986K| 3117 (1)| 00:00:38 |
| 2 | NESTED LOOPS | | 611 | 47658 | 20 (5)| 00:00:01 |
|* 3 | VIEW | index$_join$_002 | 2290 | 165K| 20 (5)| 00:00:01 |
|* 4 | HASH JOIN | | | | | |
|* 5 | HASH JOIN | | | | | |
PLAN_TABLE_OUTPUT
| 6 | BITMAP CONVERSION TO ROWIDS| | 2290 | 165K| 1 (0)| 00:00:01 |
|* 7 | BITMAP INDEX SINGLE VALUE | IDX_IS_DELETED_RAM | | | | |
| 8 | INDEX FAST FULL SCAN | IDX_BUID_RAM | 2290 | 165K| 8 (0)| 00:00:01 |
| 9 | INDEX FAST FULL SCAN | IDX_FAX_NUMBER_RAM | 2290 | 165K| 14 (0)| 00:00:01 |
|* 10 | INDEX RANGE SCAN | GLOBAL_BU_MAPPING_BUID | 1 | 4 | 0 (0)| 00:00:01 |
| 11 | TABLE ACCESS BY INDEX ROWID | FAX_STAGE | 23245 | 13M| 3096 (1)| 00:00:38 |
|* 12 | INDEX RANGE SCAN | IDX_UPPER_FAX_STATUS_RAM | 9298 | | 2434 (1)| 00:00:30 |
Predicate Information (identified by operation id):
PLAN_TABLE_OUTPUT
1 - access(UPPER("DESTINATION")="FSRC"."SYS_NC00035$")
3 - filter("FSRC"."IS_DELETED"='N')
4 - access(ROWID=ROWID)
5 - access(ROWID=ROWID)
7 - access("FSRC"."IS_DELETED"='N')
10 - access("GBM"."BU_ID"="FSRC"."BUID")
filter("GBM"."BU_ID" IS NOT NULL)
12 - access(UPPER("FAX_STATUS")=SYS_OP_C2C(:B1))
31 rows selected
Please confirm the DBA's comment. Does this bitmap index lock rows in my case?
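On the bitmap-index concern: bitmap indexes do lock ranges of rows when the indexed column is updated concurrently, so the DBA's caution is justified for a table that takes DML. As a hedged alternative (a sketch, not the thread's own solution), a function-based B-tree index that only contains the rows of interest avoids bitmap locking while staying compact:

```sql
-- Hypothetical alternative to the bitmap index: a function-based B-tree
-- index that contains entries only for IS_DELETED = 'N' rows (all-NULL
-- keys are not stored in a B-tree index), so it stays small without
-- bitmap locking behaviour.
CREATE INDEX orl.idx_is_deleted_n_ram
    ON orl.fax_source (CASE WHEN is_deleted = 'N' THEN 'N' END);

-- The query must then use the same expression to pick up the index:
-- WHERE CASE WHEN fsrc.is_deleted = 'N' THEN 'N' END = 'N'
```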
Thanks. -
Hi all,
db:oracle 9i
I am facing the below query problem.
The problem is that the query is now taking 45 minutes, where earlier it took 10 seconds.
Please can anyone suggest something?
SQL> SELECT MAX (tdar1.ID) ID, tdar1.request_id, tdar1.lolm_transaction_id,
2 tdar1.transaction_version
3 FROM transaction_data_arc tdar1
4 WHERE tdar1.transaction_name ='O96U '
5 AND tdar1.transaction_type = 'REQUEST'
6 AND tdar1.message_type_code ='PCN'
7 AND NOT EXISTS (
8 SELECT NULL
9 FROM transaction_data_arc tdar2
10 WHERE tdar2.request_id = tdar1.request_id
11 AND tdar2.lolm_transaction_id != tdar1.lolm_transaction_id
12 AND tdar2.ID > tdar1.ID)
13 GROUP BY tdar1.request_id,
14 tdar1.lolm_transaction_id,
15 tdar1.transaction_version;
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=17 Card=1 Bytes=42)
1 0 SORT (GROUP BY) (Cost=12 Card=1 Bytes=42)
2 1 FILTER
3 2 TABLE ACCESS (BY INDEX ROWID) OF 'TRANSACTION_DATA_ARC
' (Cost=1 Card=1 Bytes=42)
4 3 INDEX (RANGE SCAN) OF 'NK_TDAR_2' (NON-UNIQUE) (Cost
=3 Card=1)
5 2 TABLE ACCESS (BY INDEX ROWID) OF 'TRANSACTION_DATA_ARC
' (Cost=5 Card=918 Bytes=20196)
6 5 INDEX (RANGE SCAN) OF 'NK_TDAR_7' (NON-UNIQUE) (Cost
=8 Card=4760)
"prob is that query is taking more time 45 min than earliar (10 sec)."
Then something must have changed (data growth, stale statistics, ...?).
You should post as many details as possible, how and what is described in the FAQ, see:
*3. How to improve the performance of my query? / My query is running slow*.
When your query takes too long...
How to post a SQL statement tuning request
SQL and PL/SQL FAQ
Also, given your database version, using NOT IN instead of NOT EXISTS might make a difference (but they're not the same).
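To illustrate that remark, a hypothetical simplification of the posted query (column names taken from the post): NOT EXISTS treats a NULL from the subquery as "no match", while NOT IN returns no rows at all if the subquery yields even one NULL, so the two forms are only interchangeable when the compared column can never be NULL.

```sql
-- NOT EXISTS: a NULL in the subquery column simply means "no match".
SELECT t1.id
  FROM transaction_data_arc t1
 WHERE NOT EXISTS (SELECT NULL
                     FROM transaction_data_arc t2
                    WHERE t2.request_id = t1.request_id
                      AND t2.id > t1.id);

-- NOT IN: if the subquery returns even one NULL, no rows come back at all,
-- so this rewrite is only safe when request_id is declared NOT NULL.
SELECT t1.id
  FROM transaction_data_arc t1
 WHERE t1.request_id NOT IN (SELECT t2.request_id
                               FROM transaction_data_arc t2
                              WHERE t2.id > t1.id);
```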
See: SQL and PL/SQL FAQ -
Select query is taking a lot of time to fetch data
The following SELECT query is taking a lot of time to fetch data:
SELECT a~lgnum a~tanum a~bdatu a~bzeit a~bname a~benum b~matnr b~maktx b~qdatu b~qzeit b~vlenr b~nlenr b~vltyp b~vlber b~vlpla
b~nltyp b~nlber b~nlpla b~vsola b~vorga INTO TABLE it_final FROM ltak AS a
INNER JOIN ltap AS b ON b~tanum EQ a~tanum AND a~lgnum EQ b~lgnum
WHERE a~lgnum = p_whno
AND a~tanum IN s_tono
AND a~bdatu IN s_tocd
AND a~bzeit IN s_bzeit
AND a~bname IN s_uname
AND a~betyp = 'P'
AND b~matnr IN s_mno
AND b~vorga <> 'ST'.
Moderator message: Please Read before Posting in the Performance and Tuning Forum
Edited by: Thomas Zloch on Mar 27, 2011 12:05 PM
Hi Shiva,
I am using two more SELECT queries in the same manner.
Here are the other two SELECT queries:
***************1************************
SELECT * APPENDING CORRESPONDING FIELDS OF TABLE tbl_summary
FROM ztftelpt LEFT JOIN ztfzberep
ON ztfzberep~gjahr = st_input-gjahr
AND ztfzberep~poper = st_input-poper
AND ztfzberep~cntr = ztftelpt~rprctr
WHERE rldnr = c_telstra_projects
AND rrcty = c_actual
AND rvers = c_ver_001
AND rbukrs = st_input-bukrs
AND racct = st_input-saknr
AND ryear = st_input-gjahr
and rzzlstar in r_lstar
AND rpmax = c_max_period.
and the second one is
*************************2************************
SELECT * APPENDING CORRESPONDING FIELDS OF TABLE tbl_summary
FROM ztftelnt LEFT JOIN ztfzberep
ON ztfzberep~gjahr = st_input-gjahr
AND ztfzberep~poper = st_input-poper
AND ztfzberep~cntr = ztftelnt~rprctr
WHERE rldnr = c_telstra_networks
AND rrcty = c_actual
AND rvers = c_ver_001
AND rbukrs = st_input-bukrs
AND racct = st_input-saknr
AND ryear = st_input-gjahr
and rzzlstar in r_lstar
AND rpmax = c_max_period.
For both of the above tables the program takes very little time, although both tables used in the above queries hold a similar amount of data. And I cannot remove the APPENDING CORRESPONDING FIELDS, because I have to append the data after fetching it from the tables; if I don't use it, the data fetched earlier is deleted.
Thanks on advanced......
Sourabh -
hi
The following query is taking too much time (more than 30 minutes), working with 11g.
The table has three columns (rid, ida, geometry) and an index has been created on each column.
The table has around 540,000 records of point geometries.
Please help me with your suggestions. I want to select duplicate point geometries where ida = 'CORD'.
SQL> select a.rid, b.rid from totalrecords a, totalrecords b where a.ida='CORD' and b.idat='CORD' and
sdo_equal(a.geometry, b.geometry)='TRUE' and a.rid !=b.rid order by 1,2;
regards
I have removed some AND conditions that were not necessary. It's just that Oracle can see, for example, that in
a.ida='CORD' AND
b.idat='CORD' AND
a.rid !=b.rid AND
sdo_equal(a.geometry, b.geometry)='TRUE'
ORDER BY 1,2;
if a.ida does not equal 'CORD', the whole set of conditions evaluates to FALSE, so Oracle will not bother evaluating the rest of the conditions, because it is all AND'ed together and TRUE AND FALSE = FALSE.
So if you place your least expensive conditions first (even though the optimizer can and will reorder conditions), you get a small performance benefit per row. Too small to notice on one row, but over 540,000 records it should be noticeable.
"and I have set layer_gtype=POINT."
Good, that will help. I forgot about that one (thanks Luc!).
"Now I am facing the problem of deleting duplicate point geometry. The following query is taking too much time."
What is too much time? Do you need to delete these duplicate points on a daily or hourly basis, or is this a one-time cleanup action? If it is a one-time cleanup operation, does it really matter if it takes half an hour?
And if this is a daily or even hourly operation, then why don't you prevent the duplicates from entering the table in the first place? That will save you from having to clean up afterwards. Of course, this might not be possible with your business requirements.
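As a sketch of the cleanup itself (hypothetical, and assuming rid is unique), deleting every duplicate except the lowest rid per group of spatially equal points could look like:

```sql
-- Hypothetical sketch: keep the row with the lowest rid in each group of
-- spatially equal points and delete the rest. sdo_equal is evaluated per
-- pair, so this still examines many combinations; an exact-coordinate
-- comparison would be cheaper if the duplicates are bitwise identical.
DELETE FROM totalrecords a
 WHERE a.ida = 'CORD'
   AND EXISTS (SELECT 1
                 FROM totalrecords b
                WHERE b.ida = 'CORD'
                  AND b.rid < a.rid
                  AND sdo_equal(a.geometry, b.geometry) = 'TRUE');
```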
Lastly: can you post an explain plan for your queries? Those might give us an idea of what is taking so much time. Please enclose the results of the explain plan with
[ c o d e ]
<code/results here>
[ / c o d e ]
that way the original formatting is kept and it makes things much easier to read.
Regards,
Stefan -
When a query is taking too long
When a query is taking too long, where and how do I start tuning it?
Here I've listed a few things that need to be considered, out of my knowledge and understanding:
1. What the SQL is waiting for (wait events)
2. Parameter modifications to be done at system/session level
3. The query has to be tuned (using hints)
4. Gathering/deleting statistics
List out any other things that need to be taken into account.
Which approach must be followed, and on what basis must that approach be considered?
"When query is taking too long time, Where and how to start tuning it?"
An explain plan is a good start; a trace, too.
"Here I've listed a few things that need to be considered, out of my knowledge and understanding"
"1. What the SQL is waiting for (wait events)"
When Oracle executes a SQL statement, it is not constantly executing; sometimes it has to wait for a specific event to happen before it can proceed.
Read
http://www.adp-gmbh.ch/ora/tuning/event.html
"2. Parameter modification need to be done at system/session level"
Depends on the parameter; define the parameter. Tracing, for example, is done at session level.
"3. The query has to be tuned (using hints)"
Hints could help you, but you must know how to use them.
"4. Gathering/deleting statistics"
Do it in non-working hours; it will impact database performance, but it is good.
"List out any other things that need to be taken into account?"
Which account?
"Which approach must be followed and on what basis that approach must be considered?"
You could use a lot of tools: trace, AWR. -
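As a sketch of the two starting points named above (explain plan and trace), on Oracle this typically looks like the following (the SELECT is a hypothetical placeholder for the slow statement):

```sql
-- Show the optimizer's plan for a statement without running it.
EXPLAIN PLAN FOR
  SELECT * FROM employees WHERE department_id = 10;  -- hypothetical query
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Trace the current session; the trace file can then be formatted with tkprof.
ALTER SESSION SET sql_trace = TRUE;
-- ... run the slow statement here ...
ALTER SESSION SET sql_trace = FALSE;
```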
Query is taking too long to execute - contd
I am unable to post the entire explain plan in one post, as it exceeds the maximum length.
Please advise on how to post this.
Previous post link: Query is taking too long to execute
Regards,
Sreekanth Munagala.
Edited by: Sreekanth Munagala on Oct 27, 2009 8:31 AM
Edited by: Sreekanth Munagala on Oct 27, 2009 8:34 AM
Hi Tubby,
Today I executed only the first query in the view, and it took almost 2.5 hours.
Here is the explain plan for this query:
SQL> SET SERVEROUTPUT ON
SQL> set linesize 200
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | 1 | 766 | 2448 |
| 1 | TABLE ACCESS BY INDEX ROWID | PO_VENDORS | 1 | 13 | 3 |
|* 2 | INDEX UNIQUE SCAN | PO_VENDORS_U1 | 1 | | 2 |
| 3 | TABLE ACCESS BY INDEX ROWID | PO_VENDORS | 1 | 29 | 3 |
|* 4 | INDEX UNIQUE SCAN | PO_VENDORS_U1 | 1 | | 2 |
| 5 | VIEW | POC_ASN_PICKUP_LOCATIONS_V | 2 | 2426 | 17 |
| 6 | UNION-ALL | | | | |
| 7 | NESTED LOOPS | | 1 | 85 | 4 |
| 8 | NESTED LOOPS | | 1 | 78 | 4 |
|* 9 | TABLE ACCESS BY INDEX ROWID | PO_VENDOR_SITES_ALL | 1 | 73 | 3 |
|* 10 | INDEX UNIQUE SCAN | PO_VENDOR_SITES_U2 | 1 | | 2 |
|* 11 | INDEX UNIQUE SCAN | PO_VENDORS_U1 | 1 | 5 | 1 |
|* 12 | INDEX UNIQUE SCAN | FND_TERRITORIES_TL_U1 | 1 | 7 | |
| 13 | NESTED LOOPS | | 1 | 91 | 13 |
| 14 | NESTED LOOPS | | 1 | 84 | 13 |
| 15 | TABLE ACCESS BY INDEX ROWID | PO_VENDORS | 1 | 13 | 3 |
|* 16 | INDEX UNIQUE SCAN | PO_VENDORS_U1 | 1 | | 2 |
PLAN_TABLE_OUTPUT
|* 17 | TABLE ACCESS BY INDEX ROWID | FND_LOOKUP_VALUES | 1 | 71 | 10 |
|* 18 | INDEX RANGE SCAN | FND_LOOKUP_VALUES_U2 | 13 | | 2 |
|* 19 | INDEX UNIQUE SCAN | FND_TERRITORIES_TL_U1 | 1 | 7 | |
|* 20 | COUNT STOPKEY | | | | |
| 21 | TABLE ACCESS BY INDEX ROWID | MTL_SYSTEM_ITEMS_B | 8 | 136 | 12 |
|* 22 | INDEX RANGE SCAN | MTL_SYSTEM_ITEMS_B_U1 | 8 | | 3 |
|* 23 | COUNT STOPKEY | | | | |
| 24 | TABLE ACCESS BY INDEX ROWID | MTL_SYSTEM_ITEMS_B | 8 | 288 | 12 |
|* 25 | INDEX RANGE SCAN | MTL_SYSTEM_ITEMS_B_U1 | 8 | | 3 |
| 26 | TABLE ACCESS BY INDEX ROWID | FND_TERRITORIES_TL | 1 | 24 | 2 |
|* 27 | INDEX UNIQUE SCAN | FND_TERRITORIES_TL_U1 | 1 | | 1 |
| 28 | NESTED LOOPS | | 1 | 40 | 5 |
| 29 | TABLE ACCESS BY INDEX ROWID | HZ_CUST_ACCOUNTS | 1 | 11 | 3 |
|* 30 | INDEX UNIQUE SCAN | HZ_CUST_ACCOUNTS_U1 | 1 | | 2 |
| 31 | TABLE ACCESS BY INDEX ROWID | HZ_PARTIES | 1 | 29 | 2 |
|* 32 | INDEX UNIQUE SCAN | HZ_PARTIES_U1 | 1 | | 1 |
| 33 | TABLE ACCESS BY INDEX ROWID | FND_TERRITORIES_TL | 1 | 24 | 2 |
|* 34 | INDEX UNIQUE SCAN | FND_TERRITORIES_TL_U1 | 1 | | 1 |
| 35 | TABLE ACCESS BY INDEX ROWID | FND_TERRITORIES_TL | 1 | 24 | 2 |
|* 36 | INDEX UNIQUE SCAN | FND_TERRITORIES_TL_U1 | 1 | | 1 |
|* 37 | COUNT STOPKEY | | | | |
PLAN_TABLE_OUTPUT
|* 38 | TABLE ACCESS BY INDEX ROWID | ONTC_MTC_PROFORMA_HEADERS | 1 | 21 | 3 |
|* 39 | INDEX RANGE SCAN | ONTC_MTC_PROFORMA_HEADERS_U2 | 1 | | 2 |
| 40 | TABLE ACCESS BY INDEX ROWID | FND_TERRITORIES_TL | 1 | 24 | 2 |
|* 41 | INDEX UNIQUE SCAN | FND_TERRITORIES_TL_U1 | 1 | | 1 |
|* 42 | COUNT STOPKEY | | | | |
|* 43 | TABLE ACCESS BY INDEX ROWID | ONTC_MTC_PROFORMA_HEADERS | 1 | 21 | 3 |
|* 44 | INDEX RANGE SCAN | ONTC_MTC_PROFORMA_HEADERS_U2 | 1 | | 2 |
| 45 | SORT AGGREGATE | | 1 | 39 | |
| 46 | NESTED LOOPS OUTER | | 2 | 78 | 1828 |
|* 47 | TABLE ACCESS FULL | ONTC_MTC_PROFORMA_HEADERS | 1 | 24 | 1825 |
| 48 | TABLE ACCESS BY INDEX ROWID | ONTC_MTC_PROFORMA_LINES | 5 | 75 | 3 |
|* 49 | INDEX RANGE SCAN | ONTC_MTC_PROFORMA_LINES_PK | 11 | | 2 |
| 50 | NESTED LOOPS | | 1 | 766 | 2448 |
| 51 | NESTED LOOPS | | 1 | 761 | 2447 |
| 52 | NESTED LOOPS | | 1 | 746 | 2445 |
| 53 | NESTED LOOPS | | 1 | 694 | 2443 |
| 54 | NESTED LOOPS | | 1 | 682 | 2441 |
| 55 | NESTED LOOPS | | 1 | 671 | 2439 |
| 56 | NESTED LOOPS | | 1 | 612 | 2437 |
| 57 | NESTED LOOPS | | 1 | 600 | 2435 |
| 58 | NESTED LOOPS | | 1 | 575 | 2433 |
PLAN_TABLE_OUTPUT
| 59 | NESTED LOOPS | | 1 | 552 | 2431 |
| 60 | NESTED LOOPS | | 1 | 533 | 2429 |
| 61 | NESTED LOOPS | | 1 | 524 | 2428 |
| 62 | NESTED LOOPS | | 1 | 455 | 2427 |
| 63 | NESTED LOOPS | | 1 | 429 | 2426 |
| 64 | NESTED LOOPS | | 1 | 389 | 2424 |
| 65 | NESTED LOOPS | | 1 | 368 | 2422 |
| 66 | NESTED LOOPS | | 1 | 308 | 2421 |
| 67 | NESTED LOOPS | | 1 | 281 | 2419 |
| 68 | NESTED LOOPS | | 1 | 253 | 2418 |
| 69 | NESTED LOOPS | | 1 | 214 | 2416 |
| 70 | NESTED LOOPS | | 39 | 7371 | 2338 |
|* 71 | TABLE ACCESS FULL | RCV_SHIPMENT_HEADERS | 39 | 5070 | 2221 |
|* 72 | TABLE ACCESS BY INDEX ROWID| RCV_SHIPMENT_LINES | 1 | 59 | 3 |
|* 73 | INDEX RANGE SCAN | RCV_SHIPMENT_LINES_U2 | 1 | | 2 |
|* 74 | TABLE ACCESS BY INDEX ROWID | PO_LINES_ALL | 1 | 25 | 2 |
|* 75 | INDEX UNIQUE SCAN | PO_LINES_U1 | 1 | | 1 |
|* 76 | TABLE ACCESS BY INDEX ROWID | PO_LINE_LOCATIONS_ALL | 1 | 39 | 2 |
|* 77 | INDEX UNIQUE SCAN | PO_LINE_LOCATIONS_U1 | 1 | | 1 |
|* 78 | TABLE ACCESS BY INDEX ROWID | PO_HEADERS_ALL | 1 | 28 | 1 |
|* 79 | INDEX UNIQUE SCAN | PO_HEADERS_U1 | 1 | | |
PLAN_TABLE_OUTPUT
|* 80 | TABLE ACCESS BY INDEX ROWID | OE_ORDER_LINES_ALL | 1 | 27 | 2 |
|* 81 | INDEX UNIQUE SCAN | OE_ORDER_LINES_U1 | 1 | | 1 |
| 82 | TABLE ACCESS BY INDEX ROWID | OE_ORDER_HEADERS_ALL | 1 | 60 | 1 |
|* 83 | INDEX UNIQUE SCAN | OE_ORDER_HEADERS_U1 | 1 | | |
|* 84 | TABLE ACCESS BY INDEX ROWID | HZ_CUST_SITE_USES_ALL | 1 | 21 | 2 |
|* 85 | INDEX UNIQUE SCAN | HZ_CUST_SITE_USES_U1 | 1 | | 1 |
|* 86 | TABLE ACCESS BY INDEX ROWID | HZ_CUST_SITE_USES_ALL | 1 | 40 | 2 |
|* 87 | INDEX UNIQUE SCAN | HZ_CUST_SITE_USES_U1 | 1 | | 1 |
| 88 | TABLE ACCESS BY INDEX ROWID | WSH_CARRIERS | 1 | 26 | 1 |
|* 89 | INDEX UNIQUE SCAN | WSH_CARRIERS_U2 | 1 | | |
|* 90 | TABLE ACCESS BY INDEX ROWID | WSH_CARRIER_SERVICES | 1 | 69 | 1 |
|* 91 | INDEX RANGE SCAN | WSH_CARRIER_SERVICES_N1 | 2 | | |
|* 92 | TABLE ACCESS BY INDEX ROWID | WSH_ORG_CARRIER_SERVICES | 1 | 9 | 1 |
|* 93 | INDEX RANGE SCAN | WSH_ORG_CARRIER_SERVICES_N1 | 1 | | |
| 94 | TABLE ACCESS BY INDEX ROWID | HZ_CUST_ACCOUNTS | 1 | 19 | 2 |
|* 95 | INDEX UNIQUE SCAN | HZ_CUST_ACCOUNTS_U1 | 1 | | 1 |
|* 96 | TABLE ACCESS BY INDEX ROWID | HZ_CUST_ACCT_SITES_ALL | 1 | 23 | 2 |
|* 97 | INDEX UNIQUE SCAN | HZ_CUST_ACCT_SITES_U1 | 1 | | 1 |
|* 98 | TABLE ACCESS BY INDEX ROWID | HZ_CUST_ACCT_SITES_ALL | 1 | 25 | 2 |
|* 99 | INDEX UNIQUE SCAN | HZ_CUST_ACCT_SITES_U1 | 1 | | 1 |
| 100 | TABLE ACCESS BY INDEX ROWID | HZ_PARTY_SITES | 1 | 12 | 2 |
PLAN_TABLE_OUTPUT
|*101 | INDEX UNIQUE SCAN | HZ_PARTY_SITES_U1 | 1 | | 1 |
| 102 | TABLE ACCESS BY INDEX ROWID | HZ_LOCATIONS | 1 | 59 | 2 |
|*103 | INDEX UNIQUE SCAN | HZ_LOCATIONS_U1 | 1 | | 1 |
|*104 | INDEX RANGE SCAN | HZ_LOC_ASSIGNMENTS_N1 | 1 | 11 | 2 |
| 105 | TABLE ACCESS BY INDEX ROWID | HZ_PARTY_SITES | 1 | 12 | 2 |
|*106 | INDEX UNIQUE SCAN | HZ_PARTY_SITES_U1 | 1 | | 1 |
| 107 | TABLE ACCESS BY INDEX ROWID | HZ_LOCATIONS | 1 | 52 | 2 |
|*108 | INDEX UNIQUE SCAN | HZ_LOCATIONS_U1 | 1 | | 1 |
|*109 | INDEX RANGE SCAN | HZ_LOC_ASSIGNMENTS_N1 | 1 | 15 | 2 |
|*110 | INDEX UNIQUE SCAN | HZ_PARTIES_U1 | 1 | 5 | 1 |
I will put the predicate information in another post.
193 rows selected.
SQL> spool off
Please suggest how we can improve the performance.
Regards,
Sreekanth Munagala. -
Update query which is taking more time
Hi
I am running an update query which is taking more time. Any help to make it run faster?
update arm538e_tmp t
set t.qtr5 =(select (sum(nvl(m.net_sales_value,0))/1000) from mnthly_sales_actvty m
where m.vndr#=t.vndr#
and m.cust_type_cd=t.cust_type
and m.cust_type_cd<>13
and m.yymm between 201301 and 201303
group by m.vndr#,m.cust_type_cd;
Help will be appreciated.
thank you
Edited by: 960991 on Apr 16, 2013 7:11 AM
960991 wrote:
Hi
I am running an update query which takeing more time any help to run this fast.
update arm538e_tmp t
set t.qtr5 =(select (sum(nvl(m.net_sales_value,0))/1000) from mnthly_sales_actvty m
where m.vndr#=t.vndr#
and m.cust_type_cd=t.cust_type
and m.cust_type_cd <> 13
and m.yymm between 201301 and 201303
group by m.vndr#,m.cust_type_cd;
help will be appreciable
thank you
Updates with subqueries can be slow. Get an execution plan for the update to see what SQL is doing.
Some things to look at ...
1. Are you sure you posted the right SQL? I could not "balance" the parentheses: 4 "(" and 3 ")".
2. The unnecessary "(" ")" around "sum" in the subquery are confusing.
3. Updates with subqueries can be slow. The t.qtr5 value seems to evaluate to a constant; you might improve performance by computing the value beforehand and using a variable instead of the subquery.
4. The subquery appears to be correlated - good! Make sure the subquery is properly indexed if it reads < 20% of the rows in the table (this figure depends on the version of Oracle).
5. Is t.qtr5 part of an index? It is a bad idea to update indexed columns. -
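A hedged sketch of one alternative (not from the thread): when many rows need the same correlated aggregate, a single MERGE over a pre-aggregated subquery often beats a per-row correlated subquery. Table and column names are taken from the post; the balanced parentheses are my guess at the intended query.

```sql
-- Hypothetical rewrite: aggregate mnthly_sales_actvty once, then merge the
-- result into arm538e_tmp, instead of re-running the subquery per row.
MERGE INTO arm538e_tmp t
USING (SELECT m.vndr#,
              m.cust_type_cd,
              SUM(NVL(m.net_sales_value, 0)) / 1000 AS qtr5_val
         FROM mnthly_sales_actvty m
        WHERE m.cust_type_cd <> 13
          AND m.yymm BETWEEN 201301 AND 201303
        GROUP BY m.vndr#, m.cust_type_cd) s
   ON (s.vndr# = t.vndr# AND s.cust_type_cd = t.cust_type)
 WHEN MATCHED THEN
   UPDATE SET t.qtr5 = s.qtr5_val;
```

Note that, unlike the original UPDATE, this touches only the rows that have a matching aggregate; rows without a match keep their old qtr5 instead of being set to NULL.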
Why is this query taking much longer than expected?
Hi,
I need experts support on the below mentioned issue:
Why is this query taking much longer than expected? Sometimes I am getting a connection timeout error. Is there a better way to achieve the result in a shorter time? Below please find the DDL & DML.
DDL
BHDCollections
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[BHDCollections](
[BHDCollectionid] [bigint] IDENTITY(1,1) NOT NULL,
[GroupMemberid] [int] NOT NULL,
[BHDDate] [datetime] NOT NULL,
[BHDShift] [varchar](10) NULL,
[SlipValue] [decimal](18, 3) NOT NULL,
[ProcessedValue] [decimal](18, 3) NOT NULL,
[BHDRemarks] [varchar](500) NULL,
[Createdby] [varchar](50) NULL,
[Createdon] [datetime] NULL,
CONSTRAINT [PK_BHDCollections] PRIMARY KEY CLUSTERED
(
[BHDCollectionid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
BHDCollectionsDet
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[BHDCollectionsDet](
[CollectionDetailid] [bigint] IDENTITY(1,1) NOT NULL,
[BHDCollectionid] [bigint] NOT NULL,
[Currencyid] [int] NOT NULL,
[Denomination] [decimal](18, 3) NOT NULL,
[Quantity] [int] NOT NULL,
CONSTRAINT [PK_BHDCollectionsDet] PRIMARY KEY CLUSTERED
(
[CollectionDetailid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
Banks
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[Banks](
[Bankid] [int] IDENTITY(1,1) NOT NULL,
[Bankname] [varchar](50) NOT NULL,
[Bankabbr] [varchar](50) NULL,
[BankContact] [varchar](50) NULL,
[BankTel] [varchar](25) NULL,
[BankFax] [varchar](25) NULL,
[BankEmail] [varchar](50) NULL,
[BankActive] [bit] NULL,
[Createdby] [varchar](50) NULL,
[Createdon] [datetime] NULL,
CONSTRAINT [PK_Banks] PRIMARY KEY CLUSTERED
(
[Bankid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
Groupmembers
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[GroupMembers](
[GroupMemberid] [int] IDENTITY(1,1) NOT NULL,
[Groupid] [int] NOT NULL,
[BAID] [int] NOT NULL,
[Createdby] [varchar](50) NULL,
[Createdon] [datetime] NULL,
CONSTRAINT [PK_GroupMembers] PRIMARY KEY CLUSTERED
(
[GroupMemberid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
ALTER TABLE [dbo].[GroupMembers] WITH CHECK ADD CONSTRAINT [FK_GroupMembers_BankAccounts] FOREIGN KEY([BAID])
REFERENCES [dbo].[BankAccounts] ([BAID])
GO
ALTER TABLE [dbo].[GroupMembers] CHECK CONSTRAINT [FK_GroupMembers_BankAccounts]
GO
ALTER TABLE [dbo].[GroupMembers] WITH CHECK ADD CONSTRAINT [FK_GroupMembers_Groups] FOREIGN KEY([Groupid])
REFERENCES [dbo].[Groups] ([Groupid])
GO
ALTER TABLE [dbo].[GroupMembers] CHECK CONSTRAINT [FK_GroupMembers_Groups]
BankAccounts
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[BankAccounts](
[BAID] [int] IDENTITY(1,1) NOT NULL,
[CustomerID] [int] NOT NULL,
[Locationid] [varchar](25) NOT NULL,
[Bankid] [int] NOT NULL,
[BankAccountNo] [varchar](50) NOT NULL,
CONSTRAINT [PK_BankAccounts] PRIMARY KEY CLUSTERED
(
[BAID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
ALTER TABLE [dbo].[BankAccounts] WITH CHECK ADD CONSTRAINT [FK_BankAccounts_Banks] FOREIGN KEY([Bankid])
REFERENCES [dbo].[Banks] ([Bankid])
GO
ALTER TABLE [dbo].[BankAccounts] CHECK CONSTRAINT [FK_BankAccounts_Banks]
GO
ALTER TABLE [dbo].[BankAccounts] WITH CHECK ADD CONSTRAINT [FK_BankAccounts_Locations1] FOREIGN KEY([Locationid])
REFERENCES [dbo].[Locations] ([Locationid])
GO
ALTER TABLE [dbo].[BankAccounts] CHECK CONSTRAINT [FK_BankAccounts_Locations1]
Currency
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[Currency](
[Currencyid] [int] IDENTITY(1,1) NOT NULL,
[CurrencyISOCode] [varchar](20) NOT NULL,
[CurrencyCountry] [varchar](50) NULL,
[Currency] [varchar](50) NULL,
CONSTRAINT [PK_Currency] PRIMARY KEY CLUSTERED
(
[Currencyid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
CurrencyDetails
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[CurrencyDetails](
[CurDenid] [int] IDENTITY(1,1) NOT NULL,
[Currencyid] [int] NOT NULL,
[Denomination] [decimal](15, 3) NOT NULL,
[DenominationType] [varchar](25) NOT NULL,
CONSTRAINT [PK_CurrencyDetails] PRIMARY KEY CLUSTERED
(
[CurDenid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
QUERY
WITH TEMP_TABLE AS
(
SELECT 0 AS COINS, BHDCollectionsDet.Quantity AS BN, BHDCollections.BHDDate AS CollectionDate, BHDCollectionsDet.Currencyid,
(BHDCollections.BHDCollectionid) AS DSLIPS, Banks.Bankname
FROM BHDCollections INNER JOIN
BHDCollectionsDet ON BHDCollections.BHDCollectionid = BHDCollectionsDet.BHDCollectionid INNER JOIN
GroupMembers ON BHDCollections.GroupMemberid = GroupMembers.GroupMemberid INNER JOIN
BankAccounts ON GroupMembers.BAID = BankAccounts.BAID INNER JOIN
Currency ON BHDCollectionsDet.Currencyid = Currency.Currencyid INNER JOIN
CurrencyDetails ON Currency.Currencyid = CurrencyDetails.Currencyid INNER JOIN
Banks ON BankAccounts.Bankid = Banks.Bankid
GROUP BY BHDCollectionsDet.Quantity, BHDCollections.BHDDate, BankAccounts.Bankid, BHDCollectionsDet.Currencyid, CurrencyDetails.DenominationType,
CurrencyDetails.Denomination, BHDCollectionsDet.Denomination, Banks.Bankname,BHDCollections.BHDCollectionid
HAVING (BHDCollections.BHDDate BETWEEN @FromDate AND @ToDate) AND (BankAccounts.Bankid = @Bankid) AND (CurrencyDetails.DenominationType = 'Currency') AND
(CurrencyDetails.Denomination = BHDCollectionsDet.Denomination)
UNION ALL
SELECT BHDCollectionsDet.Quantity AS COINS, 0 AS BN, BHDCollections.BHDDate AS CollectionDate, BHDCollectionsDet.Currencyid,
(BHDCollections.BHDCollectionid) AS DSLIPS, Banks.Bankname
FROM BHDCollections INNER JOIN
BHDCollectionsDet ON BHDCollections.BHDCollectionid = BHDCollectionsDet.BHDCollectionid INNER JOIN
GroupMembers ON BHDCollections.GroupMemberid = GroupMembers.GroupMemberid INNER JOIN
BankAccounts ON GroupMembers.BAID = BankAccounts.BAID INNER JOIN
Currency ON BHDCollectionsDet.Currencyid = Currency.Currencyid INNER JOIN
CurrencyDetails ON Currency.Currencyid = CurrencyDetails.Currencyid INNER JOIN
Banks ON BankAccounts.Bankid = Banks.Bankid
GROUP BY BHDCollectionsDet.Quantity, BHDCollections.BHDDate, BankAccounts.Bankid, BHDCollectionsDet.Currencyid, CurrencyDetails.DenominationType,
CurrencyDetails.Denomination, BHDCollectionsDet.Denomination, Banks.Bankname,BHDCollections.BHDCollectionid
HAVING (BHDCollections.BHDDate BETWEEN @FromDate AND @ToDate) AND (BankAccounts.Bankid = @Bankid) AND (CurrencyDetails.DenominationType = 'COIN') AND
(CurrencyDetails.Denomination = BHDCollectionsDet.Denomination)),
TEMP_TABLE2 AS (
SELECT CollectionDate,Bankname,DSLIPS AS DSLIPS,SUM(BN) AS BN,SUM(COINS) AS COINS FROM TEMP_TABLE GROUP BY CollectionDate,DSLIPS,Bankname)
SELECT CollectionDate,Bankname,count(DSLIPS) AS DSLIPS,sum(BN) AS BN,sum(COINS) AS coins FROM TEMP_TABLE2 Group By CollectionDate,Bankname
HAVING COUNT(DSLIPS)<>0;
Without seeing an execution plan of the query it is hard to suggest something useful. Note that your HAVING clauses contain only predicates on non-aggregated columns; moving them into a WHERE clause would filter rows before grouping instead of after. Try inserting the result of the UNION ALL into a temporary table and then performing the aggregation on that table, not a CTE.
Just
SELECT CollectionDate,Bankname,DSLIPS AS DSLIPS,SUM(BN) AS BN,SUM(COINS)AS COINS FROM
#tmp Group By CollectionDate,DSLIPS,Bankname
HAVING COUNT(DSLIPS)<>0;
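The temp-table approach suggested above can be sketched as follows. This is a sketch, not a drop-in replacement: the two `SELECT ...` placeholders stand for the two branches of the UNION ALL from the CTE above, and the predicates currently in the HAVING clauses are assumed to have been moved into WHERE clauses inside those branches.

```sql
-- Materialize the UNION ALL result once, then aggregate on the temp table.
SELECT COINS, BN, CollectionDate, Currencyid, DSLIPS, Bankname
INTO #tmp
FROM (
    SELECT ... -- first branch of the UNION ALL above (DenominationType = 'Currency')
    UNION ALL
    SELECT ... -- second branch of the UNION ALL above (DenominationType = 'COIN')
) AS src;

-- Two-level aggregation, matching TEMP_TABLE2 and the final SELECT above:
SELECT CollectionDate, Bankname,
       COUNT(DSLIPS) AS DSLIPS, SUM(BN) AS BN, SUM(COINS) AS coins
FROM (
    SELECT CollectionDate, Bankname, DSLIPS,
           SUM(BN) AS BN, SUM(COINS) AS COINS
    FROM #tmp
    GROUP BY CollectionDate, DSLIPS, Bankname
) AS t
GROUP BY CollectionDate, Bankname
HAVING COUNT(DSLIPS) <> 0;

DROP TABLE #tmp;
```

Materializing into #tmp gives the optimizer accurate row counts and statistics for the aggregation step, which a CTE (expanded inline on each reference) does not.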
Best Regards,
Uri Dimant SQL Server MVP
http://sqlblog.com/blogs/uri_dimant/ -
Why the processing time for loading is taking longtime
Hi All,
why is the processing time for loading taking so long? I would like to know the solution.
Thanks,
chandu
To analyze the process chain and fix it, go through the below document:
[SAP BW Data Load Performance Analysis and Tuning|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3a699d90-0201-0010-bc99-d5c0e3a2c87b]
Hope it helps,
Naveen -
Query is taking hours to give output
Hi All,
The following query is taking hours to give output. Is there any way to optimize it for better performance?
I took the explain plan for the SQL.
SELECT SUM(L.EXTENDED_AMOUNT)
FROM mtl_categories cat,
mtl_item_categories mic ,
RA_CUSTOMER_TRX_ALL H ,
RA_CUST_TRX_TYPES_ALL T ,
RA_CUSTOMER_TRX_LINES_ALL L
WHERE cat.category_id = 4341
AND cat.category_id = mic.category_id
AND mic.organization_id = 4
AND mic.category_set_id = 4
AND mic.inventory_item_id = l.inventory_item_id
AND TO_CHAR(H.TRX_DATE,'MON-RR')= 'MAR-12'
AND l.CUSTOMER_TRX_ID = H.CUSTOMER_TRX_ID
AND T.TYPE = 'INV'
AND T.NAME NOT IN ('PB-INV-MEMO', 'PB-RMA-MEMO','PB-CM-MEMO','JM-INV-MEMO','JM-RMA-MEMO','JM-CM-MEMO','JM-CO','PB-CO')
AND H.CUST_TRX_TYPE_ID = T.CUST_TRX_TYPE_ID
AND H.BILL_TO_CUSTOMER_ID = 3284
Explain plan for the SQL:
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | 1 | 89 | 417 |
| 1 | SORT AGGREGATE | | 1 | 89 | |
| 2 | TABLE ACCESS BY INDEX ROWID | RA_CUSTOMER_TRX_LINES_ALL | 1 | 15 | 3 |
| 3 | NESTED LOOPS | | 1 | 89 | 417 |
| 4 | NESTED LOOPS | | 42 | 3108 | 291 |
| 5 | MERGE JOIN CARTESIAN | | 56 | 2856 | 11 |
| 6 | NESTED LOOPS | | 1 | 28 | 8 |
| 7 | NESTED LOOPS | | 1 | 13 | 1 |
|* 8 | INDEX UNIQUE SCAN | MTL_CATEGORIES_B_U1 | 1 | 5 | 1 |
|* 9 | INDEX UNIQUE SCAN | MTL_CATEGORIES_TL_U1 | 1 | 8 | |
| 10 | TABLE ACCESS BY INDEX ROWID | MTL_ITEM_CATEGORIES | 1 | 15 | 7 |
|* 11 | INDEX RANGE SCAN | MTL_ITEM_CATEGORIES_N1 | 1 | | 2 |
| 12 | BUFFER SORT | | 57 | 1311 | 4 |
|* 13 | TABLE ACCESS FULL | RA_CUST_TRX_TYPES_ALL | 57 | 1311 | 3 |
|* 14 | TABLE ACCESS BY INDEX ROWID | RA_CUSTOMER_TRX_ALL | 1 | 23 | 5 |
|* 15 | INDEX RANGE SCAN | RA_CUSTOMER_TRX_N11 | 26 | | 2 |
|* 16 | INDEX RANGE SCAN | JM_RA_CUSTOMER_TRX_LINES_N1 | 1 | | 2 |
8 - access("B"."CATEGORY_ID"=4341)
9 - access("T"."CATEGORY_ID"=4341 AND "T"."LANGUAGE"=:B1)
filter("B"."CATEGORY_ID"="T"."CATEGORY_ID")
11 - access("MIC"."ORGANIZATION_ID"=4 AND "MIC"."CATEGORY_SET_ID"=4 AND
"MIC"."CATEGORY_ID"=4341)
filter("B"."CATEGORY_ID"="MIC"."CATEGORY_ID")
13 - filter("SYS_ALIAS_0000"."TYPE"='INV' AND "SYS_ALIAS_0000"."NAME"<>'PB-INV-MEMO' AND
"SYS_ALIAS_0000"."NAME"<>'PB-RMA-MEMO' AND "SYS_ALIAS_0000"."NAME"<>'PB-CM-MEMO' AND
"SYS_ALIAS_0000"."NAME"<>'JM-INV-MEMO' AND "SYS_ALIAS_0000"."NAME"<>'JM-RMA-MEMO' AND
"SYS_ALIAS_0000"."NAME"<>'JM-CM-MEMO' AND "SYS_ALIAS_0000"."NAME"<>'JM-CO' AND
"SYS_ALIAS_0000"."NAME"<>'PB-CO')
14 - filter(TO_CHAR("H"."TRX_DATE",'MON-RR')='MAR-12')
15 - access("H"."BILL_TO_CUSTOMER_ID"=3284 AND
"H"."CUST_TRX_TYPE_ID"="SYS_ALIAS_0000"."CUST_TRX_TYPE_ID")
16 - access("L"."CUSTOMER_TRX_ID"="H"."CUSTOMER_TRX_ID" AND
"MIC"."INVENTORY_ITEM_ID"="L"."INVENTORY_ITEM_ID")
Note: cpu costing is off -
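One likely contributor, visible at step 14 of the plan, is the non-sargable predicate `TO_CHAR(H.TRX_DATE,'MON-RR') = 'MAR-12'`: wrapping the column in TO_CHAR forces a filter on every candidate row and prevents any index on TRX_DATE from being used. A sketch of a range rewrite, assuming TRX_DATE is a DATE column and 'MAR-12' means March 2012:

```sql
-- Replace the TO_CHAR comparison with a half-open date range,
-- so an index on TRX_DATE (if one exists) becomes usable.
-- Original: AND TO_CHAR(H.TRX_DATE,'MON-RR') = 'MAR-12'
AND H.TRX_DATE >= TO_DATE('01-MAR-2012', 'DD-MON-YYYY')
AND H.TRX_DATE <  TO_DATE('01-APR-2012', 'DD-MON-YYYY')
```

The half-open upper bound also keeps rows with a time component on the last day of the month, which `TRUNC`- or `BETWEEN`-based rewrites can miss.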
Query is taking 4 minutes to update 100000 records.
Hi Experts,
Query is taking 4 minutes to update 100000 records.
Please help me.How to improve the performance of this query.
Please help me.
Thanks.
Most of your time seems to be spent on the nested loop join between the old and new data - and the "actuals" show that this was a bad choice of join mechanism, introduced by a very bad estimate of the number of rows returned by the first join to product_SDS.
It would be useful to see the predicate section of the in-memory plan, but my best guess from the information given is that the estimate is bad because Oracle has used the "independent predicates" multiplication of selectivity on
PDS.DTPS_ID ='PROCESSED'
PDS.PD_ID = 4
PDS.DESCP IS NOT NULL
PDS.STCK_CD IS NOT NULL
PDS.UNIT IS NOT NULL
It's possible that a column group (extended statistics) on some subset of these 5 columns would address the cardinality problem sufficiently well that the optimizer would switch to a hash join. Alternatively, it's possible that the estimates are bad because you need a couple of histograms - this might also explain the discrepancy on the indexed access between the estimated 16,739 rows and the actual 128K rows. Columns to consider are PDS.SO_ID and PDS.DTOS_ID.
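A sketch of creating such a column group with DBMS_STATS - the table name PRODUCT_SDS and the column pair are assumptions taken from the predicates quoted above; adjust both to the real schema:

```sql
-- Create a column group so the optimizer sees the combined selectivity
-- of the correlated predicates instead of multiplying their individual
-- selectivities.
SELECT DBMS_STATS.CREATE_EXTENDED_STATS(
         ownname   => USER,
         tabname   => 'PRODUCT_SDS',
         extension => '(DTPS_ID, PD_ID)'
       ) AS extension_name
FROM   dual;

-- Regather statistics so the new column group gets its own stats.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => USER,
    tabname    => 'PRODUCT_SDS',
    method_opt => 'FOR ALL COLUMNS SIZE AUTO');
END;
/
```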
Bottom line - the run-statistics row counts suggest you should be doing a hash join between the old and new data - the run time information suggests quite strongly that the optimizer needs some help with statistics
Regards
Jonathan Lewis -
Frm-40505:ORACLE error: unable to perform query in oracle forms 10g
Hi,
I get error frm-40505:ORACLE error: unable to perform query on oracle form in 10g environment, but the same form works properly in 6i.
Please let me know what do i need to do to correct this problem.
Regards,
Priya
Hi everyone,
I have a block created on view V_LE_USID_1L (which gives the error FRM-40505). We don't need any updates on this block, so the 'Update Allowed' property is set to 'No'.
To fix this error I modified the 'Key Mode' property, setting it to 'Updateable' from 'Automatic'. This change solved the FRM-40505 problem, but it leads to one more problem.
The data block v_le_usid_1l now allows the user to enter text (i.e. update the fields). When the data is saved, no message is shown. When the data is refreshed on the screen, the change done previously on the block is not seen (this is because 'Update Allowed' on the block is set to 'No'). How do we stop the fields of the block being editable?
We don't want to go ahead with this solution, as we might find several similar screens and it is difficult to modify each one of them individually. When they work properly in 6i, why don't they in 10g? Does it require any registry setting?
Regards,
Priya -
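One way to keep Key Mode as 'Updateable' while stopping the user from typing into the block is to lock the items at runtime with the Forms built-ins. A sketch of a WHEN-NEW-FORM-INSTANCE trigger - the block and item names are hypothetical placeholders for the real ones:

```sql
-- Oracle Forms PL/SQL sketch: leave the block queryable but make its
-- items read-only, so no user edit (and hence no lost edit) is possible.
BEGIN
  SET_BLOCK_PROPERTY('V_LE_USID_1L', INSERT_ALLOWED, PROPERTY_FALSE);
  SET_BLOCK_PROPERTY('V_LE_USID_1L', DELETE_ALLOWED, PROPERTY_FALSE);
  -- Repeat per item as needed; 'FIELD1' is a hypothetical item name:
  SET_ITEM_PROPERTY('V_LE_USID_1L.FIELD1', UPDATE_ALLOWED, PROPERTY_FALSE);
  SET_ITEM_PROPERTY('V_LE_USID_1L.FIELD1', INSERT_ALLOWED, PROPERTY_FALSE);
END;
```

Because this is trigger code rather than a design-time property, the same trigger text can be copied to the other similar screens instead of editing each block's properties individually.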
Query is taking more time to execute
Hi,
Query is taking more time to execute.
But when I execute the same query on another server, it gives output immediately.
What is the reason for it?
Thanks in advance.
'My car doesn't start, please help me to start my car.'
Do you think we are clairvoyant?
Or is your salary subtracted for every letter you type here?
Please be aware this is not a chatroom, and we can not see your webcam.
Sybrand Bakker
Senior Oracle DBA