Query taking much time to execute
The following query takes more than 4 hours to execute:
select l_extendedprice , count(l_extendedprice) from dbo.lineitem group by l_extendedprice
Cardinality of the table: 6,001,215 (> 6 million)
There is an index on l_extendedprice.
ReadAheadLobThreshold = 500
Database version 7.7.06.09
I need to optimize this query. Kindly suggest a way out.
Thanks
Priyank
Data Cache : 80296 KB
Ok, that's 8 Gigs for cache.
The index occupies 16,335 pages × 8 KB = 130,680 KB ≈ 128 MB.
Fits completely into RAM - the same is true for the additional temp resultset.
So once the index has been read to cache I assume the query is a lot quicker than 4 hours.
6 Data Volumes
first 3 of size 51,200 KB
other 3 of size 1,048,576 KB
Well, that's not the smartest thing to do.
That way the larger volumes will get double the I/O requests which eventually saturates the I/O channel.
Yes, given the cardinality of the table a lot of I/O is required, but more than 4 hours is still quite unrealistic. Some tuning is required.
We're not talking about cardinality here - you want all the data.
What matters, then, is the number of pages.
And as we've seen, the table is not touched for this query.
Instead the smaller index is completely read in a index only strategy.
Loading 128 MB from disk, creating temporary data of the same size and spooling out the information (thereby reading the 128 MB of temp data again) in 4 hours adds up to ca. 384 MB / 4 hours = 96 MB/hour = 1.6 MB/minute.
Not too good really - I suspect that the I/O system here is not the quickest one.
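As a sanity check, the arithmetic above can be reproduced in a few lines (a sketch; the 16,335-page index size and the three passes over the data are the figures quoted in this thread):

```python
# Throughput estimate for the 4-hour GROUP BY:
# one read of the index, one write of the temp result, one read of it back.
index_mb = 16335 * 8 / 1024   # 16,335 pages x 8 KB each, roughly 128 MB
total_mb = 3 * index_mb       # index read + temp write + temp read
hours = 4

mb_per_hour = total_mb / hours
mb_per_minute = mb_per_hour / 60

print(f"index ~ {index_mb:.0f} MB, total I/O ~ {total_mb:.0f} MB")
print(f"{mb_per_hour:.0f} MB/hour = {mb_per_minute:.1f} MB/minute")
```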
You may want to activate time measurement and set the DB Analyzer interval to 120 seconds.
Then activate the Command and Resource Monitor and look for statements taking longer than 10 minutes.
Now run your statement again, let us know the information from the Command/Resource Monitor, and check for warnings in the DB Analyzer output.
regards,
Lars
Similar Messages
-
Stopping a query that takes a long time to execute at runtime in Oracle Forms.
Hi,
In the present application, one of the Oracle Forms screens takes a long time to execute a query. The user wanted an option to stop the query midway and browse the result (whatever had been fetched before stopping the query).
We have tried three approaches.
1. set max fetch record in form and block level.
2. set max fetch time in form and block level.
The above two methods did not provide an appropriate solution for us.
3. The third approach we applied was setting the interaction mode to "NON BLOCKING" at the form level.
It seemed to work: while the query took a long time to execute, Oracle Application Server prompted a message to press Esc to cancel the query and displayed the results fetched up to that point.
But the drawback is that on pressing Esc, it kills the session itself, which causes the entire application to collapse.
Please suggest if there is any alternative approach, or how to overcome this particular scenario.
This kind of facility is already present in TOAD and PL/SQL Developer, where we can stop an executing query and browse the results fetched up to that point. If a similar facility is available in Oracle Forms, please suggest.
Thanks and Regards,
Suraj
Edited by: user10673131 on Jun 25, 2009 4:55 AM
Hello Friend,
Your query will definitely take more time, or even fail, in PROD because of the way it is written. Here are a few observations that may help:
1. XLA_AR_INV_AEL_SL_V XLA_AEL_SL_V: Never use a view inside such a long query, because a view is just a window onto the records,
and when it is joined to other tables, all the tables used to create the view also become part of the join condition.
First of all, please check whether you really need this view. I guess you are using it to check whether the records have been created as journal entries or not?
Please check the possibility of finding this through other AR tables.
2. Remove the _ALL tables; instead use the corresponding org-specific views (if you are on 11i) or the synonyms (in R12).
For example : For ra_cust_trx_types_all use ra_cust_trx_types.
This will ensure that the query will execute only for those ORG_IDs which are assigned to that responsibility.
3. Check with the DBA whether GATHER SCHEMA STATS has been run at least for the ONT and RA tables.
You can also check the same using:
SELECT LAST_ANALYZED FROM ALL_TABLES WHERE TABLE_NAME = 'RA_CUSTOMER_TRX_ALL';
If the tables are not analyzed, the CBO will not be able to tune your query.
4. Try to remove the DISTINCT keyword. This is the MAJOR reason for this problem.
5. If it's a report, try to separate the logic into separate queries (using a procedure), populate the whole data into a custom table, and use this custom table for generating the report.
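Point 4 can be illustrated even on a toy engine: a DISTINCT forces a de-duplication step (a sort or temp index) over the full result set. A small SQLite sketch, with a made-up table, shows the extra step appearing in the plan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trx (trx_id INTEGER, org_id INTEGER)")
conn.executemany("INSERT INTO trx VALUES (?, ?)",
                 [(i, i % 10) for i in range(1000)])

# With DISTINCT the planner adds a temp B-tree to de-duplicate rows.
plan_distinct = conn.execute(
    "EXPLAIN QUERY PLAN SELECT DISTINCT org_id FROM trx").fetchall()
plan_plain = conn.execute(
    "EXPLAIN QUERY PLAN SELECT org_id FROM trx").fetchall()

has_temp = any("TEMP B-TREE" in row[-1] for row in plan_distinct)
print("DISTINCT uses temp b-tree:", has_temp)
```

The same cost shows up in Oracle as a SORT UNIQUE or HASH UNIQUE step in the explain plan.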
Thanks,
Neeraj Shrivastava
[email protected]
Edited by: user9352949 on Oct 1, 2010 8:02 PM
Edited by: user9352949 on Oct 1, 2010 8:03 PM -
Query taking much time - Oracle 9i
Hi,
**How can we tune an SQL query in Oracle 9i?**
The select query takes more than 1 hour 30 minutes to return its result.
Because of this,
we created a materialized view on the select query and also submitted a job in dba_jobs to refresh the materialized view daily.
When we retrieve the data from the materialized view, we get the result very quickly.
But the refresh job in dba_jobs takes just as long to complete as the query used to.
Since the job takes so long in the test database, we are worried it may cause load if we move the same scripts to the production environment.
Please suggest how to resolve the issue and also how to tune the SQL.
With Regards,
Srinivas
Edited by: Srinivas.. on Dec 17, 2009 6:29 AM
Hi Srinivas;
Please follow this search and see if it is helpful.
Regards,
Helios -
Query taking much time.
Hi All,
I have one query which takes much time in the dev environment, where the data size is very small, and I am planning to run it on the production database, where the database size is huge. Please let me know how I can optimize this query.
select count(*) from (
select /*+ full(tls) full(tlo) parallel(tls, 2) parallel(tlo, 2) */
tls.siebel_ba, tls.msisdn
from
TDB_LIBREP_SIEBEL tls, TDB_LIBREP_ONDB tlo
where
tls.siebel_ba = tlo.siebel_ba (+) and
tls.msisdn = tlo.msisdn (+) and
tlo.siebel_ba is null and
tlo.msisdn is null
union
select /*+ full(tls) full(tlo) parallel(tls, 2) parallel(tlo, 2) */
tlo.siebel_ba, tlo.msisdn
from
TDB_LIBREP_SIEBEL tls, TDB_LIBREP_ONDB tlo
where
tls.siebel_ba (+) = tlo.siebel_ba and
tls.msisdn (+) = tlo.msisdn and
tls.siebel_ba is null and
tls.msisdn is null
);
The explain plan of the above query is:
| Id | Operation | Name | Rows | Bytes | Cost | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 1 | | 14 | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | SORT AGGREGATE | | 1 | | | 41,04 | P->S | QC (RAND) |
| 3 | VIEW | | 164 | | 14 | 41,04 | PCWP | |
| 4 | SORT UNIQUE | | 164 | 14104 | 14 | 41,04 | PCWP | |
| 5 | UNION-ALL | | | | | 41,03 | P->P | HASH |
|* 6 | FILTER | | | | | 41,03 | PCWC | |
|* 7 | HASH JOIN OUTER | | | | | 41,03 | PCWP | |
| 8 | TABLE ACCESS FULL| TDB_LIBREP_SIEBEL | 82 | 3526 | 1 | 41,03 | PCWP | |
| 9 | TABLE ACCESS FULL| TDB_LIBREP_ONDB | 82 | 3526 | 2 | 41,00 | S->P | BROADCAST |
|* 10 | FILTER | | | | | 41,03 | PCWC | |
|* 11 | HASH JOIN OUTER | | | | | 41,03 | PCWP | |
| 12 | TABLE ACCESS FULL| TDB_LIBREP_ONDB | 82 | 3526 | 2 | 41,01 | S->P | HASH |
| 13 | TABLE ACCESS FULL| TDB_LIBREP_SIEBEL | 82 | 3526 | 1 | 41,02 | P->P | HASH |
Predicate Information (identified by operation id):
6 - filter("TLO"."SIEBEL_BA" IS NULL AND "TLO"."MSISDN" IS NULL)
7 - access("TLS"."SIEBEL_BA"="TLO"."SIEBEL_BA"(+) AND "TLS"."MSISDN"="TLO"."MSISDN"(+))
10 - filter("TLS"."SIEBEL_BA" IS NULL AND "TLS"."MSISDN" IS NULL)
11 - access("TLS"."SIEBEL_BA"(+)="TLO"."SIEBEL_BA" AND "TLS"."MSISDN"(+)="TLO"."MSISDN")
I dunno, it looks like you are getting all the rows that have no match via an outer join, so won't that decide to full scan anyway? Plus the UNION means it does the work twice and then a distinct to get rid of dups - see how the plan does a UNION-ALL and then a SORT UNIQUE. Somehow I have the feeling there might be a trickier way to do what you want, so maybe you should state exactly what you want in English. -
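One candidate "trick": the two branches are disjoint (each keeps only keys missing from the other table), so UNION ALL returns the same rows as UNION while skipping the SORT UNIQUE, provided neither branch produces duplicates of its own. A sketch against SQLite with made-up data (the real system is Oracle, so this only illustrates the idea; SQLite's LEFT JOIN plays the role of the (+) outer joins):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE siebel (siebel_ba INTEGER, msisdn TEXT);
CREATE TABLE ondb   (siebel_ba INTEGER, msisdn TEXT);
INSERT INTO siebel VALUES (1,'a'), (2,'b'), (3,'c');
INSERT INTO ondb   VALUES (2,'b'), (3,'c'), (4,'d');
""")

# Same symmetric-difference count with UNION vs UNION ALL.
query = """
SELECT count(*) FROM (
  SELECT s.siebel_ba, s.msisdn FROM siebel s
  LEFT JOIN ondb o ON s.siebel_ba = o.siebel_ba AND s.msisdn = o.msisdn
  WHERE o.siebel_ba IS NULL AND o.msisdn IS NULL
  {op}
  SELECT o.siebel_ba, o.msisdn FROM ondb o
  LEFT JOIN siebel s ON s.siebel_ba = o.siebel_ba AND s.msisdn = o.msisdn
  WHERE s.siebel_ba IS NULL AND s.msisdn IS NULL
)
"""

n_union     = conn.execute(query.format(op="UNION")).fetchone()[0]
n_union_all = conn.execute(query.format(op="UNION ALL")).fetchone()[0]
print(n_union, n_union_all)   # same count: only (1,'a') and (4,'d') are unmatched
```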
ExecuteQuery method of view object taking much time to execute
Hi All,
I am using a view object and execute the VO query using executeQuery method in VOImpl java.
But the problem is, it takes a long time to return results after setting the parameters and executing the query. The same query in TOAD takes 4 seconds; through executeQuery it takes 5 minutes.
It is an urgent issue. Please help me. Thanks.
Regards, Soorya
Hi Kali,
Thanks for your prompt response.
Yes. It has bind parameters. I have printed the statement before and after the executeQuery method
++VOImpl Code snippet++
setWhereClauseParams(params);
System.out.println("before executing query:Time:"+System.currentTimeMillis());
executeQuery();
System.out.println( "after executing query:Time:"+System.currentTimeMillis());
+++
I have removed some conditions in the query as it is business confidential. Please find the jdev log.
++++++++
before executing query:Time:1322071711046
[724] Column count: 41
[725] ViewObject close prepared statements...
[726] ViewObject : Created new QUERY statement
[727] ViewObject: VO1
[728] UserDefined Query: SELECT DISTINCT
ai.invoice_num invoice_num
FROM ap_invoices_all ai
, ap_checks_all ac
WHERE ...
ai.org_id = :p_orgid
AND ac.id = :p_id
[729] Binding param 1: ,468,
[730] Binding param 2: 247
[731] The resource pool monitor thread invoked cleanup on the application module pool, AM, at 2011-11-23 23:41:32.781
after executing query:Time:1322072052875
+++++++
Regards,
Soorya -
Dear all ,
I am fetching data from pool table A005. The select query is mentioned below.
select * from a005 into table i_a005 for all entries in it_table
where kappl = 'V'
and kschl IN s_kschl
and vkorg in s_vkorg
and vtweg in s_vtgew
and matnr in s_matnr
and knumh = it_table-knumh .
Here every field is a key field except KNUMH, which is compared against table it_table. Because of this field the query takes too much time, as KNUMH is not a key field. And A005 is a pool table, so I can't create an index on it. If there is an alternate solution, please let me know.
Thank You ,
Also, in the technical settings of the table it is mentioned as fully buffered and the size category is 0, but there are around 9,000,000 records. Is that the issue, or what? Can somebody give a genuine reason, or an improvement for my select query?
Edited by: TVC6784 on Jun 30, 2011 3:31 PM
TVC6784 wrote:
Hi Yuri ,
>
> Thanks for your reply... I will check as per your comment...
> But if I remove the field KNUMH from the selection condition, and also the FOR ALL ENTRIES on it_itab, then the data is fetched very fast, as KNUMH is not a primary key.
> . the example is below
>
> select * from a005 into table i_a005
> where kappl = 'V'
> and kschl IN s_kschl
> and vkorg in s_vkorg
> and vtweg in s_vtgew
> and matnr in s_matnr.
>
> Can you comment anything about it ?
>
> And can you please say how can i check its size as you mention that is 2-3 Mb More ?
>
> Edited by: TVC6784 on Jun 30, 2011 7:37 PM
I cannot see the trace and other information about the table so I cannot judge why the select w/o KNUMH is faster.
Basically, if the table is buffered and its contents are in the SAP application server memory, the access should be really fast. It does not really matter whether it is with KNUMH or without.
I would really like to see at least ST05 trace of your report that is doing this select. This would clarify many things.
You can check the size by multiplying the number of entries in the A005 table by 138. This is (in my test system) the ABAP width of the structure.
If you have 9,000,000 records in A005, it would take 1.24 GB in the buffer (which is a clear sign to unbuffer). -
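The buffer-size estimate is easy to reproduce; the 138-byte row width is the figure measured in the test system above, so treat it as an assumption:

```python
rows = 9_000_000
row_width_bytes = 138   # ABAP width of the A005 structure (per the post)

total_bytes = rows * row_width_bytes
total_gb = total_bytes / 1_000_000_000   # decimal GB, matching the 1.24 GB figure

print(f"{total_gb:.2f} GB needed in the table buffer")
```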
One query taking different time to execute on different environments
I am working on Oracle 10g. We have setup of two different environments - Development and Alpha.
I have written a query which gets some records from a table. This table contains around 1,000,000 records in both environments.
This query takes 5 seconds on the Development environment to get 200 records, but the same query takes around 50 seconds on the Alpha environment to retrieve the same number of records.
Data and indexes on the table are the same in both environments. There are no joins in the query.
Please let me know what are the all possible reasons for this?
Edited by: 956610 on Sep 3, 2012 2:37 AM
Below is the trace on the two environments ---
-----------------------Development ------------------------------
CPU used by this session 1741
CPU used when call started 1741
Cached Commit SCN referenced 15634
DB time 1752
Effective IO time 7236
Number of read IOs issued 173
SQL*Net roundtrips to/from client 14
buffer is not pinned count 90474
buffer is pinned count 264554
bytes received via SQL*Net from client 4507
bytes sent via SQL*Net to client 28859
calls to get snapshot scn: kcmgss 6
calls to kcmgcs 13
cell physical IO interconnect bytes 165330944
cleanout - number of ktugct calls 5273
cleanouts only - consistent read gets 5273
commit txn count during cleanout 5273
consistent gets 202533
consistent gets - examination 101456
consistent gets direct 19686
consistent gets from cache 182847
consistent gets from cache (fastpath) 81013
enqueue releases 3
enqueue requests 3
execute count 6
file io wait time 1582
immediate (CR) block cleanout applications 5273
index fetch by key 36608
index scans kdiixs1 36582
no buffer to keep pinned count 8
no work - consistent read gets 95791
non-idle wait count 42
non-idle wait time 2
opened cursors cumulative 6
parse count (hard) 1
parse count (total) 6
parse time cpu 1
parse time elapsed 2
physical read IO requests 181
physical read bytes 163299328
physical read total IO requests 181
physical read total bytes 163299328
physical read total multi block requests 162
physical reads 19934
physical reads direct 19934
physical reads direct temporary tablespace 248
physical write IO requests 8
physical write bytes 2031616
physical write total IO requests 8
physical write total bytes 2031616
physical write total multi block requests 8
physical writes 248
physical writes direct 248
physical writes direct temporary tablespace 248
physical writes non checkpoint 248
recursive calls 31
recursive cpu usage 1
rows fetched via callback 23018
session cursor cache hits 4
session logical reads 202533
session uga memory max 65488
sorts (memory) 3
sorts (rows) 19516
sql area evicted 2
table fetch by rowid 140921
table scan blocks gotten 19686
table scan rows gotten 2012896
table scans (direct read) 2
table scans (long tables) 2
user I/O wait time 2
user calls 16
workarea executions - onepass 4
workarea executions - optimal 7
workarea memory allocated 17
------------------------------------------------------ For Alpha ------------------------------------------------------------------
CCursor + sql area evicted 1
CPU used by this session 5763
CPU used when call started 5775
Cached Commit SCN referenced 9264
Commit SCN cached 1
DB time 6999
Effective IO time 4262103
Number of read IOs issued 2155
OS All other sleep time 10397
OS Chars read and written 340383180
OS Involuntary context switches 18766
OS Other system trap CPU time 27
OS Output blocks 12445
OS Process stack size 24576
OS System call CPU time 223
OS System calls 20542
OS User level CPU time 5526
OS User lock wait sleep time 86045
OS Voluntary context switches 15739
OS Wait-cpu (latency) time 273
SQL*Net roundtrips to/from client 14
buffer is not pinned count 2111
buffer is pinned count 334
bytes received via SQL*Net from client 4486
bytes sent via SQL*Net to client 28989
calls to get snapshot scn: kcmgss 510
calls to kcmgas 4
calls to kcmgcs 119
cell physical IO interconnect bytes 340041728
cleanout - number of ktugct calls 1
cleanouts only - consistent read gets 1
cluster key scan block gets 179
cluster key scans 168
commit txn count during cleanout 1
consistent gets 41298
consistent gets - examination 722
consistent gets direct 30509
consistent gets from cache 10789
consistent gets from cache (fastpath) 9038
cursor authentications 2
db block gets 7
db block gets from cache 7
dirty buffers inspected 1
enqueue releases 58
enqueue requests 58
execute count 510
file io wait time 6841235
free buffer inspected 8772
free buffer requested 8499
hot buffers moved to head of LRU 27
immediate (CR) block cleanout applications 1
index fast full scans (full) 1
index fetch by key 196
index scans kdiixs1 331
no work - consistent read gets 40450
non-idle wait count 1524
non-idle wait time 1208
opened cursors cumulative 511
parse count (hard) 39
parse count (total) 44
parse time cpu 78
parse time elapsed 343
physical read IO requests 3293
physical read bytes 329277440
physical read total IO requests 3293
physical read total bytes 329277440
physical read total multi block requests 1951
physical reads 40195
physical reads cache 8498
physical reads cache prefetch 7467
physical reads direct 31697
physical reads direct temporary tablespace 1188
physical write IO requests 126
physical write bytes 10764288
physical write total IO requests 126
physical write total bytes 10764288
physical writes 1314
physical writes direct 1314
physical writes direct temporary tablespace 1314
physical writes non checkpoint 1314
prefetched blocks aged out before use 183
recursive calls 1329
recursive cpu usage 76
rows fetched via callback 7
session cursor cache count 8
session cursor cache hits 491
session logical reads 41305
session pga memory max 851968
session uga memory -660696
session uga memory max 3315160
shared hash latch upgrades - no wait 14
sorts (disk) 1
sorts (memory) 177
sorts (rows) 21371
sql area evicted 10
table fetch by rowid 613
table scan blocks gotten 30859
table scan rows gotten 3738599
table scans (direct read) 4
table scans (long tables) 8
table scans (short tables) 3
user I/O wait time 1208
user calls 16
workarea executions - onepass 7
workarea executions - optimal 113
workarea memory allocated -617 -
ABAP QUERY taking much time after ERP Upgrade from 4.6 to 6.0
Hi All,
I have an ABAP QUERY which uses the INFOSET INVOICE_INBOUND and the USER GROUP InvoiceVerif. The INFOSET is using the tables RBKP and RSEG connected using a JOIN on BELNR and GJAHR fields.
The query was working fine in 4.6 C Version. Now the system has been upgraded to 6.0 version.
Now it takes so much time that the processing is not getting completed. Do we have to make any changes to the existing queries for an upgrade?
Thanks a lot in advance.
Gautham.
Did you regenerate the query, the InfoSet and the program before transporting them to ECC 6.0?
-
SFTP is taking much time to execute
Hi,
I am working on SFTP.
SFTP is working, but it takes 8 minutes to execute.
Do I need to set any time-related properties?
Regards,
Divya.
Hi,
Use a smaller polling interval for high-throughput scenarios where the message size is not very large.
Regards
Abhi -
Request for the reasons of Query taking much time
Hi,
I have one SQL query. When I execute it from TOAD it takes some 120 seconds, but the same query, when executed from Forms (front end), takes nearly 5 minutes. I don't understand where the problem is. Can anyone please help me with the reasons and solutions (steps to overcome this)?
Regards,
Rao.
Can you do an explain plan of the query in Toad?
And a SQL trace of the form when it executes the query?
If you have DBA rights you can enable SQL trace in the Forms session by using:
dbms_system.set_sql_trace_in_session(..sid.., ..serial#.., true);
where sid and serial# are the values of the Forms session (they can be found in v$session).
Toon -
Request for query taking much time
Hi,
I have one SQL query. When I execute it from TOAD it takes some 120 seconds, but the same query, when executed from Forms (front end), takes nearly 5 minutes. I don't understand where the problem is. Can anyone please help me with the reasons and solutions (steps to overcome this)?
Regards,
Rao.
Hi,
There are many factors involved in this.
user11312630 wrote:
Hi,
I have one SQL query. When I execute it from TOAD it takes some 120 seconds, but the same query, when executed from Forms (front end), takes nearly 5 minutes. I don't understand where the problem is. Can anyone please help me with the reasons and solutions (steps to overcome this)?
Because Toad contacts the database directly, but with Forms the request goes to the application server, the server sends the query to the database, gets the rows, and passes the response back to the client.
>
Regards,
Rao.So, the factors like, network latency etc., will also play a part in the performance.
Take at a look at the performance tuning guides.
http://www.oracle.com/technology/products/forms/pdf/275191.pdf
http://download.oracle.com/docs/cd/B25527_01/doc/frs/forms/B14032_02/tuning.htm
-Arun -
Update query takes much time to execute
Hi Experts,
I need help regarding performance of the query.
update TEST_TAB
set fail=1, msg='HARD'
where id in (
select src.id from TEST_TAB src
inner join TEST_TAB l_1 on src.email=l_1.email and l_1.database_id=335090 and l_1.msg='HARD' and l_1.fail=1
inner join TEST_TAB l_2 on src.email=l_2.email and l_2.database_id=338310 and l_2.msg='HARD' and l_2.fail=1
inner join TEST_TAB l_3 on src.email=l_3.email and l_3.database_id=338470 and l_3.msg='HARD' and l_3.fail=1
where src.database_id=1111111);
This query runs for too long - it takes more than 1 hour and updates 26,000 records.
But if we run the inner select query alone:
select src.id from TEST_TAB src
inner join TEST_TAB l_1 on src.email=l_1.email and l_1.database_id=335090 and l_1.msg='HARD' and l_1.fail=1
inner join TEST_TAB l_2 on src.email=l_2.email and l_2.database_id=338310 and l_2.msg='HARD' and l_2.fail=1
inner join TEST_TAB l_3 on src.email=l_3.email and l_3.database_id=338470 and l_3.msg='HARD' and l_3.fail=1
where src.database_id=1111111
It takes less than 1 minute to execute.
Please give me suggestions for the update query so that I can improve its performance.
SELECT src.id FROM lead src
inner join lead l_1 ON src.email=l_1.email AND
l_1.database_id=335090 AND l_1.bounce_msg_t='HARD' AND l_1.failed=1
inner join lead l_2 ON src.email=l_2.email AND
l_2.database_id=338310 AND l_2.bounce_msg_t='HARD' AND l_2.failed=1
inner join lead l_3 ON src.email=l_3.email AND
l_3.database_id=338470 AND l_3.bounce_msg_t='HARD' AND l_3.failed=1
WHERE src.database_id=264170;
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
SELECT STATEMENT Optimizer Mode=ALL_ROWS 1 10453
TABLE ACCESS BY INDEX ROWID LEAD 1 32 27
NESTED LOOPS 1 130 10453
HASH JOIN 1 98 10426
HASH JOIN 199 12 K 6950
TABLE ACCESS BY INDEX ROWID LEAD 202 6 K 3476
INDEX RANGE SCAN LEAD_DATABASE_FK_I 94 K 259
TABLE ACCESS BY INDEX ROWID LEAD 94 K 3 M 3473
INDEX RANGE SCAN LEAD_DATABASE_FK_I 94 K 259
TABLE ACCESS BY INDEX ROWID LEAD 202 6 K 3476
INDEX RANGE SCAN LEAD_DATABASE_FK_I 94 K 259
INDEX RANGE SCAN LEAD_IDX_4 24 3
Update for one row:
UPDATE lead SET failed=1, bounce_msg_t='HARD'
WHERE id IN (SELECT src.id FROM lead src
inner join lead l_1 ON src.email=l_1.email AND
l_1.database_id=335090 AND l_1.bounce_msg_t='HARD' AND l_1.failed=1
inner join lead l_2 ON src.email=l_2.email AND
l_2.database_id=338310 AND l_2.bounce_msg_t='HARD' AND l_2.failed=1
inner join lead l_3 ON src.email=l_3.email AND
l_3.database_id=338470 AND l_3.bounce_msg_t='HARD' AND l_3.failed=1
WHERE src.database_id=264170
AND ROWNUM=1)
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
UPDATE STATEMENT Optimizer Mode=ALL_ROWS 1 10456
UPDATE LEAD
NESTED LOOPS 1 32 10456
VIEW VW_NSO_1 1 13 10453
SORT UNIQUE 1 130
COUNT STOPKEY
TABLE ACCESS BY INDEX ROWID LEAD 1 32 27
NESTED LOOPS 1 130 10453
HASH JOIN 1 98 10426
HASH JOIN 199 12 K 6950
TABLE ACCESS BY INDEX ROWID LEAD 202 6 K 3476
INDEX RANGE SCAN LEAD_DATABASE_FK_I 94 K 259
TABLE ACCESS BY INDEX ROWID LEAD 94 K 3 M 3473
INDEX RANGE SCAN LEAD_DATABASE_FK_I 94 K 259
TABLE ACCESS BY INDEX ROWID LEAD 202 6 K 3476
INDEX RANGE SCAN LEAD_DATABASE_FK_I 94 K 259
INDEX RANGE SCAN LEAD_IDX_4 24 3
TABLE ACCESS BY INDEX ROWID LEAD 1 19 2
INDEX UNIQUE SCAN LEADS_PK 1 1 -
Owb job taking too much time to execute
While creating a job in OWB, I am using three tables, a joiner and an aggregator, which are all joined through another joiner to load into the final table. The output is correct, but the generated SQL query is very complex, with many sub-queries, so it takes a long time to execute. Please help me reduce the cost.
-KC
It depends on what kind of code it generates at each stage. The first step would be to collect stats for all the tables used and check the generated SQL using EXPLAIN PLAN. See which sub-query or inline view creates the most cost.
Generate SQL at various stages and see if you can achieve the same with a different operator.
The other option would be passing HINTS to the tables selected.
- K -
Query is taking more time to execute
Hi,
The query takes more time to execute.
But when I execute the same query on another server, it gives the output immediately.
What is the reason for this?
Thanks in advance.
'My car doesn't start, please help me to start my car.'
Do you think we are clairvoyant?
Or is your salary subtracted for every letter you type here?
Please be aware this is not a chatroom, and we can not see your webcam.
Sybrand Bakker
Senior Oracle DBA -
Taking much time when trying to drill down on a characteristic in the BW report.
Hi All,
When we execute the BW report it takes nearly 1 to 2 minutes; then, when we try to drill down on a characteristic, it takes much longer - nearly 30 minutes to 1 hour - and throws the error message:
"An error has occurred during loading. Please look in the upper frame for further information."
I have executed this query in RSRT and checked the query properties;
this query brings the data directly from aggregates, but some characteristics are not available in the aggregates.
So, after execution, when we try to drill down, it takes much time for the characteristics which are not available in the aggregates. For the other characteristics, which are available in the aggregates, it takes only 2 to 3 minutes.
How can we drill down on the characteristics which are not available in the aggregates without the long runtime and the error?
Could you kindly give any solution for this.
Thanks & Regards,
Raju. E
Hi,
The only solution is to include all the characteristics used in the report in the aggregates; otherwise this is the issue you will face.
Just create an aggregate proposal before creating any new aggregates, as it will give you an idea which ones are used most.
Also you should make sure that all the navigation characteristics are part of the aggregates.
Thanks
Ajeet