Query taking long
Hi, I have a query that is running for a long time.
I have tried every tuning option I can think of, and I have attached the explain plan for it. It seems to be doing a Cartesian product.
SELECT analytic_source_cd,
SUM (CASE WHEN pricing_dt = '24jan2014' THEN cnt ELSE 0 END)
AS Prev_Count,
SUM (CASE WHEN pricing_dt = '27jan2014' THEN cnt ELSE 0 END)
AS Current_Count
FROM (SELECT af.analytic_source_cd,
af.pricing_dt,
COUNT (DISTINCT af.fi_instrument_id) cnt
FROM analytics_fact af,
fund f,
instrument_alternate_id iai,
(SELECT pricing_dt, vendor_instrument_id, index_cd
FROM fi_idx_benchmark_holdings
WHERE pricing_dt IN
('24jan2014', '27jan2014')
UNION
SELECT pricing_dt, vendor_instrument_id, index_cd
FROM fi_idx_forward_holdings
WHERE pricing_dt IN
('24jan2014', '27jan2014')) bh
WHERE
af.pricing_dt = bh.pricing_dt
AND f.official_index = bh.index_cd
AND af.fi_instrument_id = iai.fi_instrument_id
AND bh.vendor_instrument_id = iai.alternate_id
AND iai.alternate_id_type_code IN ('FMR_CUSIP', 'CUSIP')
and af.pricing_dt IN ('24jan2014', '27jan2014')
AND f.official_index IS NOT NULL
AND af.oad IS NOT NULL
GROUP BY af.analytic_source_cd, af.pricing_dt)
GROUP BY analytic_source_cd
ORDER BY 1;
Please check the plan below.
Plan
SELECT STATEMENT ALL_ROWSCost: 210,133 Bytes: 27 Cardinality: 1
27 SORT GROUP BY Cost: 210,133 Bytes: 27 Cardinality: 1
26 VIEW A519350. Cost: 210,133 Bytes: 27 Cardinality: 1
25 HASH GROUP BY Cost: 210,133 Bytes: 26 Cardinality: 1
24 VIEW VIEW SYS.VM_NWVW_1 Cost: 210,133 Bytes: 26 Cardinality: 1
23 HASH GROUP BY Cost: 210,133 Bytes: 87 Cardinality: 1
22 HASH JOIN Cost: 210,132 Bytes: 87 Cardinality: 1
10 MERGE JOIN CARTESIAN Cost: 130,054 Bytes: 63 Cardinality: 1
7 NESTED LOOPS Cost: 129,831 Bytes: 61 Cardinality: 1
4 INLIST ITERATOR
3 PARTITION RANGE ITERATOR Cost: 129,827 Bytes: 30 Cardinality: 1 Partition #: 10 Partitions accessed #KEY(INLIST)
2 TABLE ACCESS BY LOCAL INDEX ROWID TABLE FI_PORTFOLIO_DM.ANALYTICS_FACT Cost: 129,827 Bytes: 30 Cardinality: 1 Partition #: 10 Partitions accessed #KEY(INLIST)
1 INDEX RANGE SCAN INDEX (UNIQUE) FI_PORTFOLIO_DM.ANALYTICS_FACT_PK Cost: 667 Cardinality: 206,474 Partition #: 10 Partitions accessed #KEY(INLIST)
6 PARTITION LIST INLIST Cost: 4 Bytes: 31 Cardinality: 1 Partition #: 13 Partitions accessed #KEY(INLIST)
5 INDEX RANGE SCAN INDEX (UNIQUE) FI_REFERENCE.INSTRUMENT_ALTERNATE_ID_PPK Cost: 4 Bytes: 31 Cardinality: 1 Partition #: 13 Partitions accessed #KEY(INLIST)
9 BUFFER SORT Cost: 130,050 Bytes: 1,642 Cardinality: 821
8 TABLE ACCESS FULL TABLE FI_REFERENCE.FUND Cost: 224 Bytes: 1,642 Cardinality: 821
21 VIEW A519350. Cost: 80,049 Bytes: 63,861,216 Cardinality: 2,660,884
20 SORT UNIQUE Cost: 80,049 Bytes: 66,522,100 Cardinality: 2,660,884
19 UNION-ALL
14 INLIST ITERATOR
13 PARTITION RANGE ITERATOR Cost: 24,599 Bytes: 25,284,850 Cardinality: 1,011,394 Partition #: 21 Partitions accessed #KEY(INLIST)
12 TABLE ACCESS BY LOCAL INDEX ROWID TABLE FI_BENCHMARK.FI_IDX_BENCHMARK_HOLDINGS Cost: 24,599 Bytes: 25,284,850 Cardinality: 1,011,394 Partition #: 21 Partitions accessed #KEY(INLIST)
11 INDEX RANGE SCAN INDEX FI_BENCHMARK.FI_IDX_BENCHMARK_HOLDINGS_I2 Cost: 1,973 Cardinality: 1,011,394 Partition #: 21 Partitions accessed #KEY(INLIST)
18 INLIST ITERATOR
17 PARTITION RANGE ITERATOR Cost: 36,066 Bytes: 41,237,250 Cardinality: 1,649,490 Partition #: 25 Partitions accessed #KEY(INLIST)
16 TABLE ACCESS BY LOCAL INDEX ROWID TABLE FI_BENCHMARK.FI_IDX_FORWARD_HOLDINGS Cost: 36,066 Bytes: 41,237,250 Cardinality: 1,649,490 Partition #: 25 Partitions accessed #KEY(INLIST)
15 INDEX RANGE SCAN INDEX FI_BENCHMARK.FI_IDX_FORWARD_HOLDINGS_I2 Cost: 3,499 Cardinality: 1,649,490 Partition #: 25 Partitions accessed #KEY(INLIST)
Could you please help? Am I missing anything?
One best practice: do not hard-code date literals; use the TO_DATE function with an explicit format mask instead.
For the performance issue, check the join order. For example, you only need af.pricing_dt IN ('24jan2014', '27jan2014') to restrict the af table, but the query joins all matching columns with bh first and then gives the date condition for af, so the intermediate row counts are higher than they need to be.
Another best practice: use the JOIN keywords when joining tables. Putting everything in the WHERE clause makes the code complicated. Simplicity is not easy, but it is impressive.
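As an illustration, here is the same query sketched with TO_DATE and explicit JOIN syntax (a sketch only: the table and column names come from the post above, and the date format is assumed to be DD-MON-YYYY):

```sql
-- Sketch: same query with explicit dates and ANSI join syntax.
SELECT analytic_source_cd,
       SUM(CASE WHEN pricing_dt = TO_DATE('24-JAN-2014', 'DD-MON-YYYY') THEN cnt ELSE 0 END) AS prev_count,
       SUM(CASE WHEN pricing_dt = TO_DATE('27-JAN-2014', 'DD-MON-YYYY') THEN cnt ELSE 0 END) AS current_count
FROM (SELECT af.analytic_source_cd,
             af.pricing_dt,
             COUNT(DISTINCT af.fi_instrument_id) cnt
        FROM analytics_fact af
        JOIN instrument_alternate_id iai
          ON af.fi_instrument_id = iai.fi_instrument_id
         AND iai.alternate_id_type_code IN ('FMR_CUSIP', 'CUSIP')
        JOIN (SELECT pricing_dt, vendor_instrument_id, index_cd
                FROM fi_idx_benchmark_holdings
               WHERE pricing_dt IN (TO_DATE('24-JAN-2014', 'DD-MON-YYYY'),
                                    TO_DATE('27-JAN-2014', 'DD-MON-YYYY'))
              UNION
              SELECT pricing_dt, vendor_instrument_id, index_cd
                FROM fi_idx_forward_holdings
               WHERE pricing_dt IN (TO_DATE('24-JAN-2014', 'DD-MON-YYYY'),
                                    TO_DATE('27-JAN-2014', 'DD-MON-YYYY'))) bh
          ON bh.pricing_dt = af.pricing_dt
         AND bh.vendor_instrument_id = iai.alternate_id
        JOIN fund f
          ON f.official_index = bh.index_cd
       WHERE af.pricing_dt IN (TO_DATE('24-JAN-2014', 'DD-MON-YYYY'),
                               TO_DATE('27-JAN-2014', 'DD-MON-YYYY'))
         AND af.oad IS NOT NULL
       GROUP BY af.analytic_source_cd, af.pricing_dt)
GROUP BY analytic_source_cd
ORDER BY 1;
```

Written this way, each table's join condition sits next to the table it belongs to, which makes it easy to see that fund only connects to the rest through bh. That is exactly the structure the optimizer had to discover on its own in the original, where it ended up ordering fund before bh and produced the MERGE JOIN CARTESIAN in the plan.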
Regards,
Dilek
Similar Messages
-
Query taking a long time (more than 24 hours) to extract data
Hi,
This query has been extracting data for more than 24 hours. Please find the query and explain plan details below; even though indexes are available on the tables, it goes for FULL TABLE SCANs. Please advise.
SQL> explain plan for
select a.account_id, round(a.account_balance,2) account_balance,
       nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
       to_char(ah.effective_start_date,'DD-MON-YYYY') transaction_date,
       to_char(nvl(i.payment_due_date,
                   to_date('30-12-9999','dd-mm-yyyy')),'DD-MON-YYYY') due_date,
       ah.current_balance - ah.previous_balance amount,
       decode(ah.invoice_id, null, 'A', 'I') transaction_type
from account a, account_history ah, invoice i
where a.account_id = ah.account_id
and a.account_type_id = 1000002
and round(a.account_balance,2) > 0
and (ah.invoice_id is not null or ah.adjustment_id is not null)
and ah.current_balance > ah.previous_balance
and ah.invoice_id = i.invoice_id(+)
and a.account_balance > 0
order by a.account_id, ah.effective_start_date desc;
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
| 0 | SELECT STATEMENT | | 544K| 30M| | 693K (20)|
| 1 | SORT ORDER BY | | 544K| 30M| 75M| 693K (20)|
|* 2 | HASH JOIN | | 544K| 30M| | 689K (20)|
|* 3 | TABLE ACCESS FULL | ACCOUNT | 20080 | 294K| | 6220 (18)|
|* 4 | HASH JOIN OUTER | | 131M| 5532M| 5155M| 678K (20)|
|* 5 | TABLE ACCESS FULL| ACCOUNT_HISTORY | 131M| 3646M| | 197K (25)|
| 6 | TABLE ACCESS FULL| INVOICE | 262M| 3758M| | 306K (18)|
Predicate Information (identified by operation id):
2 - access("A"."ACCOUNT_ID"="AH"."ACCOUNT_ID")
3 - filter("A"."ACCOUNT_TYPE_ID"=1000002 AND "A"."ACCOUNT_BALANCE">0 AND
ROUND("A"."ACCOUNT_BALANCE",2)>0)
4 - access("AH"."INVOICE_ID"="I"."INVOICE_ID"(+))
5 - filter("AH"."CURRENT_BALANCE">"AH"."PREVIOUS_BALANCE" AND ("AH"."INVOICE_ID"
IS NOT NULL OR "AH"."ADJUSTMENT_ID" IS NOT NULL))
22 rows selected.
Index Details:
SQL> select INDEX_OWNER,INDEX_NAME,COLUMN_NAME,TABLE_NAME from dba_ind_columns where
2 table_name in ('INVOICE','ACCOUNT','ACCOUNT_HISTORY') order by 4;
INDEX_OWNER INDEX_NAME COLUMN_NAME TABLE_NAME
OPS$SVM_SRV4 P_ACCOUNT ACCOUNT_ID ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT_NAME ACCOUNT_NAME ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT CUSTOMER_NODE_ID ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT ACCOUNT_TYPE_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_ACCOUNT_TYPE ACCOUNT_TYPE_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_INVOICE INVOICE_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_PREVIOUS_INVOICE PREVIOUS_INVOICE_ID ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_NAME ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_ID ACCOUNT
OPS$SVM_SRV4 I_LAST_MODIFIED_ACCOUNT LAST_MODIFIED ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_INVOICE_ACCOUNT INVOICE_ACCOUNT_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT ACCOUNT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT SEQNR ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_INVOICE INVOICE_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV INVOICE_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA CURRENT_BALANCE ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA INVOICE_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ADJUSTMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ACCOUNT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_LMOD LAST_MODIFIED ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV ADJUSTMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_PAYMENT PAYMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADJUSTMENT ADJUSTMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_APPLIED_DT APPLIED_DATE ACCOUNT_HISTORY
OPS$SVM_SRV4 P_INVOICE INVOICE_ID INVOICE
OPS$SVM_SRV4 U_INVOICE CUSTOMER_INVOICE_STR INVOICE
OPS$SVM_SRV4 I_LAST_MODIFIED_INVOICE LAST_MODIFIED INVOICE
OPS$SVM_SRV4 U_INVOICE_ACCOUNT ACCOUNT_ID INVOICE
OPS$SVM_SRV4 U_INVOICE_ACCOUNT BILL_RUN_ID INVOICE
OPS$SVM_SRV4 I_INVOICE_BILL_RUN BILL_RUN_ID INVOICE
OPS$SVM_SRV4 I_INVOICE_INVOICE_TYPE INVOICE_TYPE_ID INVOICE
OPS$SVM_SRV4 I_INVOICE_CUSTOMER_NODE CUSTOMER_NODE_ID INVOICE
32 rows selected.
Regards,
Bathula
Oracle-DBA
I have some suggestions. But first, you realize that you have some redundant indexes, right? You have an index on account(account_name) and also account(account_name, account_id), and also account_history(invoice_id) and account_history(invoice_id, adjustment_id). No matter, I will suggest some new composite indexes.
Also, you do not need two lines for these conditions:
and round(a.account_balance, 2) > 0
AND a.account_balance > 0
You can just use: and a.account_balance >= 0.005
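The boundary works out as follows (Oracle's ROUND on NUMBER rounds a half away from zero):

```sql
-- round(0.004, 2) = 0     -> fails round(a.account_balance,2) > 0
-- round(0.005, 2) = 0.01  -> passes
SELECT ROUND(0.004, 2) AS below_boundary,
       ROUND(0.005, 2) AS at_boundary
  FROM dual;
```

So any balance of at least 0.005 rounds to a positive two-decimal value, which is why the two original conditions collapse into the single a.account_balance >= 0.005 test.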
So the formatted query is:
select a.account_id,
round(a.account_balance, 2) account_balance,
nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
to_char(ah.effective_start_date, 'DD-MON-YYYY') transaction_date,
to_char(nvl(i.payment_due_date, to_date('30-12-9999', 'dd-mm-yyyy')),
'DD-MON-YYYY') due_date,
ah.current_balance - ah.previous_balance amount,
decode(ah.invoice_id, null, 'A', 'I') transaction_type
from account a, account_history ah, invoice i
where a.account_id = ah.account_id
and a.account_type_id = 1000002
and (ah.invoice_id is not null or ah.adjustment_id is not null)
and ah.CURRENT_BALANCE > ah.previous_balance
and ah.invoice_id = i.invoice_id(+)
AND a.account_balance >= .005
order by a.account_id, ah.effective_start_date desc;
You will probably want to select:
1. From ACCOUNT first (your smaller table), for which you supply a literal on account_type_id. That should limit the accounts retrieved from ACCOUNT_HISTORY
2. From ACCOUNT_HISTORY. We want to limit the records as much as possible on this table because of the outer join.
3. INVOICE we want to access last because it seems to be least restricted, it is the biggest, and it has the outer join condition so it will manufacture rows to match as many rows as come back from account_history.
Try the query above after creating the following composite indexes. The order of the columns is important:
create index account_composite_i on account(account_type_id, account_balance, account_id);
create index acct_history_comp_i on account_history(account_id, invoice_id, adjustment_id, current_balance, previous_balance, effective_start_date);
create index invoice_composite_i on invoice(invoice_id, payment_due_date);
All the columns used in the where clause will be indexed, in a logical order suited to the needs of the query. Plus each selected column is indexed as well, so we should not need to touch the tables at all to satisfy the query.
Try the query after creating these indexes.
A final suggestion is to try larger sort and hash area sizes and a manual workarea policy:
alter session set workarea_size_policy = manual;
alter session set sort_area_size = 2147483647;
alter session set hash_area_size = 2147483647; -
Query taking a long time to run
The following query is taking a long time to run. Is there anything that can be done, such as changing the SQL, to make it run faster?
select distinct
A.DEPTID,
A.POSITION_NBR,
A.EMPLID,
A.EMPL_RCD_NBR,
A.EFFDT,
B.NAME,
A.EMPL_STATUS,
A.JOBCODE,
A.ANNUAL_RT,
A.STD_HOURS,
A.PRIMARY_JOB,
C.POSN_STATUS,
case when A.POSITION_NBR = ' ' then 0 else C.STD_HOURS end,
case when A.POSITION_NBR = ' ' then ' ' else C.DEPTID end
from PS_JOB A,
PS_PERSONAL_DATA B,
PS_POSITION_DATA C
where A.EMPLID = B.EMPLID
and
((A.POSITION_NBR = C.POSITION_NBR
and A.EFFSEQ = (select max(D.EFFSEQ)
from PS_JOB D
where D.EMPLID = A.EMPLID
and D.EMPL_RCD_NBR = A.EMPL_RCD_NBR
and D.EFFDT = A.EFFDT)
and C.POSN_STATUS <> 'G'
and C.EFFDT = (select max(E.EFFDT)
from PS_POSITION_DATA E
where E.POSITION_NBR = A.POSITION_NBR
and E.EFFDT <= A.EFFDT)
and C.EFFSEQ = (select max(F.EFFSEQ)
from PS_POSITION_DATA F
where F.POSITION_NBR = A.POSITION_NBR
and F.EFFDT = C.EFFDT))
or
(A.POSITION_NBR = C.POSITION_NBR
and A.EFFDT = (select max(D.EFFDT)
from PS_JOB D
where D.EMPLID = A.EMPLID
and D.EMPL_RCD_NBR = A.EMPL_RCD_NBR
and D.EFFDT <= C.EFFDT)
and A.EFFSEQ = (select max(E.EFFSEQ)
from PS_JOB E
where E.EMPLID = A.EMPLID
and E.EMPL_RCD_NBR = A.EMPL_RCD_NBR
and E.EFFDT = A.EFFDT)
and C.POSN_STATUS <> 'G'
and C.EFFSEQ = (select max(F.EFFSEQ)
from PS_POSITION_DATA F
where F.POSITION_NBR = A.POSITION_NBR
and F.EFFDT = C.EFFDT)))
or
(A.POSITION_NBR = ' '
and A.EFFSEQ = (select max(D.EFFSEQ)
from PS_JOB D
where D.EMPLID = A.EMPLID
and D.EMPL_RCD_NBR = A.EMPL_RCD_NBR
and D.EFFDT = A.EFFDT)))
Using the distributive law A and (B or C) = (A and B) or (A and C) from right to left, we can have:
select distinct A.DEPTID,A.POSITION_NBR,A.EMPLID,A.EMPL_RCD_NBR,A.EFFDT,B.NAME,A.EMPL_STATUS,
A.JOBCODE,A.ANNUAL_RT,A.STD_HOURS,A.PRIMARY_JOB,C.POSN_STATUS,
case when A.POSITION_NBR = ' ' then 0 else C.STD_HOURS end,
case when A.POSITION_NBR = ' ' then ' ' else C.DEPTID end
from PS_JOB A,PS_PERSONAL_DATA B,PS_POSITION_DATA C
where A.EMPLID = B.EMPLID
and (
(A.POSITION_NBR = C.POSITION_NBR
and A.EFFSEQ = (select max(D.EFFSEQ)
from PS_JOB D
where D.EMPLID = A.EMPLID
and D.EMPL_RCD_NBR = A.EMPL_RCD_NBR
and D.EFFDT = A.EFFDT)
and C.EFFSEQ = (select max(E.EFFSEQ)
from PS_POSITION_DATA E
where E.POSITION_NBR = A.POSITION_NBR
and E.EFFDT = C.EFFDT)
and C.POSN_STATUS != 'G'
and (
C.EFFDT = (select max(E.EFFDT)
from PS_POSITION_DATA E
where E.POSITION_NBR = A.POSITION_NBR
and E.EFFDT <= A.EFFDT)
or
A.EFFDT = (select max(D.EFFDT)
from PS_JOB D
where D.EMPLID = A.EMPLID
and D.EMPL_RCD_NBR = A.EMPL_RCD_NBR
and D.EFFDT <= C.EFFDT)
))
or
(A.POSITION_NBR = ' '
and A.EFFSEQ = (select max(D.EFFSEQ)
from PS_JOB D
where D.EMPLID = A.EMPLID
and D.EMPL_RCD_NBR = A.EMPL_RCD_NBR
and D.EFFDT = A.EFFDT)))
This may not help much, as the optimizer might have guessed it already.
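Another direction worth testing is to replace the correlated MAX(EFFSEQ)/MAX(EFFDT) subqueries with analytic functions, so each table is scanned once. This is a sketch only, using the posted PS_JOB column names; it covers just the "highest EFFSEQ per EMPLID/EMPL_RCD_NBR/EFFDT" part and has not been validated against PeopleSoft data:

```sql
-- Sketch: rank PS_JOB rows so the max-EFFSEQ row per key is picked in one
-- pass, instead of re-running a correlated MAX(EFFSEQ) subquery per row.
select *
from (select j.*,
             row_number() over (partition by j.emplid, j.empl_rcd_nbr, j.effdt
                                order by j.effseq desc) rn
      from PS_JOB j) cur_job
where cur_job.rn = 1;
```

The same pattern applies to the PS_POSITION_DATA effective-date subqueries; the ranked inline views can then be joined on plain equality predicates instead of correlated subqueries.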
Regards
Etbin -
CDHDR table query taking a long time
Hi all,
A select from the CDHDR table is taking a long time. In the where condition I am giving OBJECTCLAS = 'MAT_FULL', udate = sy-datum and langu = 'EN'.
Any suggestion to improve the performance? I want to select all the articles which got changed on the current date.
regards
shibu
This will always be slow for large data volumes, since CDHDR is designed for quick access by object ID (in this case the material number), not by date.
I'm afraid you would need to introduce a secondary index on OBJECTCLAS and UDATE, if that query is crucial enough to warrant the additional disk space and processing time taken by the new index.
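At the database level such an index would look roughly like the statement below. A sketch only: in an SAP system the index should be defined in the ABAP Dictionary (transaction SE11) so the system knows about it, and the index name here is made up:

```sql
-- Hypothetical secondary index supporting selection by object class and
-- change date, matching the WHERE clause OBJECTCLAS = ... AND UDATE = ...
CREATE INDEX cdhdr_z01 ON cdhdr (objectclas, udate);
```

With that index the selection by change date becomes an index range scan instead of reading the table by its primary key order.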
Greetings
Thomas -
SQL Query taking longer time as seen from Trace file
Below are the query execution timings:
Any help will be beneficial, as this is affecting business needs.
SELECT MATERIAL_DETAIL_ID
FROM
GME_MATERIAL_DETAILS WHERE BATCH_ID = :B1 FOR UPDATE OF ACTUAL_QTY NOWAIT
call count cpu elapsed disk query current rows
Parse 1 0.00 0.70 0 0 0 0
Execute 2256 8100.00 24033.51 627 12298 31739 0
Fetch 2256 900.00 949.82 0 12187 0 30547
total 4513 9000.00 24984.03 627 24485 31739 30547
Thanks and Regards
Thanks Buddy.
Data Collected from Trace file:
SELECT STEP_CLOSE_DATE
FROM
GME_BATCH_STEPS WHERE BATCH_ID
IN (SELECT
DISTINCT BATCH_ID FROM
GME_MATERIAL_DETAILS START WITH BATCH_ID = :B2 CONNECT BY PRIOR PHANTOM_ID=BATCH_ID)
AND NVL(STEP_CLOSE_DATE, :B1) > :B1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.54 0 0 0 0
Execute 2256 800.00 1120.32 0 0 0 0
Fetch 2256 9100.00 13551.45 396 77718 0 0
total 4513 9900.00 14672.31 396 77718 0 0
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 66 (recursive depth: 1)
Rows Row Source Operation
0 TABLE ACCESS BY INDEX ROWID GME_BATCH_STEPS
13160 NESTED LOOPS
6518 VIEW
6518 SORT UNIQUE
53736 CONNECT BY WITH FILTERING
30547 NESTED LOOPS
30547 INDEX RANGE SCAN GME_MATERIAL_DETAILS_U1 (object id 146151)
30547 TABLE ACCESS BY USER ROWID GME_MATERIAL_DETAILS
23189 NESTED LOOPS
53736 BUFFER SORT
53736 CONNECT BY PUMP
23189 TABLE ACCESS BY INDEX ROWID GME_MATERIAL_DETAILS
23189 INDEX RANGE SCAN GME_MATERIAL_DETAILS_U1 (object id 146151)
4386 INDEX RANGE SCAN GME_BATCH_STEPS_U1 (object id 146144)
The package contains many SQL statements using the CONNECT BY clause.
Does the use of the CONNECT BY clause degrade performance?
As you can see, the Rows column shows 0, but the query and elapsed times are high.
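CONNECT BY itself is not inherently slow; what stands out in the trace is that the statement executed 2,256 times, so the hierarchy walk is repeated on every execution. As a sketch only (table, column, and bind names taken from the trace), subquery factoring makes that walk explicit and easier to profile:

```sql
-- Sketch: factor the phantom-batch hierarchy walk into a named subquery.
WITH batch_tree AS (
  SELECT DISTINCT batch_id
    FROM gme_material_details
   START WITH batch_id = :b2
 CONNECT BY PRIOR phantom_id = batch_id
)
SELECT s.step_close_date
  FROM gme_batch_steps s
  JOIN batch_tree t ON t.batch_id = s.batch_id
 WHERE NVL(s.step_close_date, :b1) > :b1;
```

If the 2,256 executions come from a PL/SQL loop, processing all batches in one set-based statement will usually matter more than any rewrite of the individual query.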
Regards -
SAP BI query taking a long time to execute
Hi
When I try to run the BEx query, it takes a long time. Please suggest.
Thanks
sreedhar
Hello
One of my reports is taking long to execute. I have range-based partitioning on the table, with indexes, by month. Following are the details:
SQL*Plus: Release 11.2.0.1.0 Production on Fri Oct 7 13:41:04 2011
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Enter user-name: sys as sysdba
Enter password:
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> show parameter optimizer;
NAME TYPE VALUE
optimizer_capture_sql_plan_baselines boolean FALSE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 11.2.0.1
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 10
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
optimizer_use_invisible_indexes boolean FALSE
optimizer_use_pending_statistics boolean FALSE
optimizer_use_sql_plan_baselines boolean TRUE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 64
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 16384
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL>
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS AUTOGATHERING
SYSSTATS_INFO DSTART 10-07-2011 14:12
SYSSTATS_INFO DSTOP 10-07-2011 14:42
SYSSTATS_INFO FLAGS 0
SYSSTATS_MAIN CPUSPEEDNW 2526.08695652174
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
SYSSTATS_TEMP SBLKRDS 640991392
SYSSTATS_TEMP SBLKRDTIM 23353628654370
SYSSTATS_TEMP MBLKRDS 128258266
SYSSTATS_TEMP MBLKRDTIM 6812382430610
SYSSTATS_TEMP CPUCYCLES 75032664
SYSSTATS_TEMP CPUTIM 29682662
SYSSTATS_TEMP JOB 12769
SYSSTATS_TEMP CACHE_JOB 12770
SYSSTATS_TEMP MBRTOTAL 3373935275
22 rows selected
Any help or suggestion to improve its performance?
Execution plan is:
PLAN_TABLE_OUTPUT
Plan hash value: 2727856908
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 1 | 1183 | 138K (1)| 00:32:21 | | | | | |
|* 1 | COUNT STOPKEY | | | | | | | | | | |
| 2 | PX COORDINATOR | | | | | | | | | | |
| 3 | PX SEND QC (ORDER) | :TQ10008 | 1 | 1183 | 138K (1)| 00:32:21 | | | Q1,08 | P->S | QC (ORDER) |
| 4 | VIEW | | 1 | 1183 | 138K (1)| 00:32:21 | | | Q1,08 | PCWP | |
|* 5 | SORT ORDER BY STOPKEY | | 1 | 400 | 138K (1)| 00:32:21 | | | Q1,08 | PCWP | |
| 6 | PX RECEIVE | | 1 | 1183 | | | | | Q1,08 | PCWP | |
| 7 | PX SEND RANGE | :TQ10007 | 1 | 1183 | | | | | Q1,07 | P->P | RANGE |
|* 8 | SORT ORDER BY STOPKEY | | 1 | 1183 | | | | | Q1,07 | PCWP | |
| 9 | NESTED LOOPS | | | | | | | | Q1,07 | PCWP | |
| 10 | NESTED LOOPS | | 1 | 400 | 138K (1)| 00:32:21 | | | Q1,07 | PCWP | |
| 11 | NESTED LOOPS | | 1 | 377 | 138K (1)| 00:32:21 | | | Q1,07 | PCWP | |
| 12 | NESTED LOOPS | | 1 | 341 | 138K (1)| 00:32:21 | | | Q1,07 | PCWP | |
|* 13 | HASH JOIN | | 1 | 324 | 138K (1)| 00:32:21 | | | Q1,07 | PCWP | |
| 14 | PX RECEIVE | | 1 | 289 | 123K (1)| 00:28:53 | | | Q1,07 | PCWP | |
| 15 | PX SEND BROADCAST | :TQ10006 | 1 | 289 | 123K (1)| 00:28:53 | | | Q1,06 | P->P | BROADCAST |
|* 16 | HASH JOIN | | 1 | 289 | 123K (1)| 00:28:53 | | | Q1,06 | PCWP | |
| 17 | PX RECEIVE | | 4 | 1016 | 123K (1)| 00:28:51 | | | Q1,06 | PCWP | |
| 18 | PX SEND BROADCAST | :TQ10005 | 4 | 1016 | 123K (1)| 00:28:51 | | | Q1,05 | P->P | BROADCAST |
|* 19 | HASH JOIN OUTER BUFFERED | | 4 | 1016 | 123K (1)| 00:28:51 | | | Q1,05 | PCWP | |
| 20 | PX RECEIVE | | | | | | | | Q1,05 | PCWP | |
| 21 | PX SEND HASH | :TQ10003 | | | | | | | Q1,03 | P->P | HASH |
| 22 | NESTED LOOPS | | | | | | | | Q1,03 | PCWP | |
| 23 | NESTED LOOPS | | 4 | 892 | 123K (1)| 00:28:51 | | | Q1,03 | PCWP | |
| 24 | NESTED LOOPS | | 6 | 948 | 123K (1)| 00:28:51 | | | Q1,03 | PCWP | |
| 25 | NESTED LOOPS | | 7 | 1043 | 123K (1)| 00:28:51 | | | Q1,03 | PCWP | |
|* 26 | HASH JOIN | | 5784 | 728K| 118K (1)| 00:27:45 | | | Q1,03 | PCWP | |
| 27 | PX RECEIVE | | 667K| 26M| 11597 (1)| 00:02:43 | | | Q1,03 | PCWP | |
| 28 | PX SEND BROADCAST | :TQ10000 | 667K| 26M| 11597 (1)| 00:02:43 | | | Q1,00 | P->P | BROADCAST |
| 29 | PX PARTITION RANGE ALL | | 667K| 26M| 11597 (1)| 00:02:43 | 1 | 14 | Q1,00 | PCWC | |
| 30 | TABLE ACCESS BY LOCAL INDEX ROWID | PAH01V1_DG1 | 667K| 26M| 11597 (1)| 00:02:43 | 1 | 14 | Q1,00 | PCWP | |
| 31 | BITMAP CONVERSION TO ROWIDS | | | | | | | | Q1,00 | PCWP | |
|* 32 | BITMAP INDEX SINGLE VALUE | MINDX_PAH01V1_DG1_14 | | | | | 1 | 14 | Q1,00 | PCWP | |
|* 33 | HASH JOIN | | 1826K| 151M| 107K (1)| 00:25:03 | | | Q1,03 | PCWP | |
| 34 | PX RECEIVE | | 2682 | 18774 | 58 (0)| 00:00:01 | | | Q1,03 | PCWP | |
| 35 | PX SEND BROADCAST | :TQ10001 | 2682 | 18774 | 58 (0)| 00:00:01 | | | Q1,01 | P->P | BROADCAST |
| 36 | PX BLOCK ITERATOR | | 2682 | 18774 | 58 (0)| 00:00:01 | 1 | 14 | Q1,01 | PCWC | |
| 37 | TABLE ACCESS FULL | PAH01V1_DG3 | 2682 | 18774 | 58 (0)| 00:00:01 | 1 | 14 | Q1,01 | PCWP | |
|* 38 | HASH JOIN | | 2409K| 183M| 107K (1)| 00:25:02 | | | Q1,03 | PCWP | |
| 39 | PX RECEIVE | | 476K| 19M| 5904 (1)| 00:01:23 | | | Q1,03 | PCWP | |
| 40 | PX SEND BROADCAST | :TQ10002 | 476K| 19M| 5904 (1)| 00:01:23 | | | Q1,02 | P->P | BROADCAST |
| 41 | PX PARTITION RANGE ALL | | 476K| 19M| 5904 (1)| 00:01:23 | 1 | 14 | Q1,02 | PCWC | |
| 42 | TABLE ACCESS BY LOCAL INDEX ROWID | PAH01V1_DG5 | 476K| 19M| 5904 (1)| 00:01:23 | 1 | 14 | Q1,02 | PCWP | |
| 43 | BITMAP CONVERSION TO ROWIDS | | | | | | | | Q1,02 | PCWP | |
| 44 | BITMAP MINUS | | | | | | | | Q1,02 | PCWP | |
| 45 | BITMAP MINUS | | | | | | | | Q1,02 | PCWP | |
| 46 | BITMAP MERGE | | | | | | | | Q1,02 | PCWP | |
| 47 | BITMAP INDEX FULL SCAN | MINDX_PAH01V1_DG517 | | | | | 1 | 14 | Q1,02 | PCWP | |
|* 48 | BITMAP INDEX SINGLE VALUE | MINDX_PAH01V1_DG5_3 | | | | | 1 | 14 | Q1,02 | PCWP | |
|* 49 | BITMAP INDEX SINGLE VALUE | MINDX_PAH01V1_DG5_3 | | | | | 1 | 14 | Q1,02 | PCWP | |
| 50 | PX BLOCK ITERATOR | | 275M| 9G| 101K (1)| 00:23:37 | 1 | 14 | Q1,03 | PCWC | |
|* 51 | TABLE ACCESS FULL | PAH01V1_JT | 275M| 9G| 101K (1)| 00:23:37 | 1 | 14 | Q1,03 | PCWP | |
| 52 | PARTITION RANGE ALL | | 1 | 20 | 1 (0)| 00:00:01 | 1 | 14 | Q1,03 | PCWP | |
|* 53 | TABLE ACCESS BY LOCAL INDEX ROWID | PAH01V1_DG2 | 1 | 20 | 1 (0)| 00:00:01 | 1 | 14 | Q1,03 | PCWP | |
|* 54 | INDEX RANGE SCAN | PKINDX_PAH01V1_DG2_28 | 1 | | 1 (0)| 00:00:01 | 1 | 14 | Q1,03 | PCWP | |
| 55 | PARTITION RANGE ALL | | 1 | 9 | 0 (0)| 00:00:01 | 1 | 14 | Q1,03 | PCWP | |
| 56 | TABLE ACCESS BY LOCAL INDEX ROWID | PAH01V1_DG4 | 1 | 9 | 0 (0)| 00:00:01 | 1 | 14 | Q1,03 | PCWP | |
|* 57 | INDEX RANGE SCAN | PKINDX_PAH01V1_DG4 | 1 | | 0 (0)| 00:00:01 | 1 | 14 | Q1,03 | PCWP | |
| 58 | PARTITION RANGE ALL | | 1 | | 1 (0)| 00:00:01 | 1 | 14 | Q1,03 | PCWP | |
|* 59 | INDEX RANGE SCAN | PKINDX_PAH01V1_DG0 | 1 | | 1 (0)| 00:00:01 | 1 | 14 | Q1,03 | PCWP | |
| 60 | TABLE ACCESS BY LOCAL INDEX ROWID | PAH01V1_DG0 | 1 | 65 | 1 (0)| 00:00:01 | 1 | 1 | Q1,03 | PCWP | |
| 61 | PX RECEIVE | | 296 | 9176 | 2 (0)| 00:00:01 | | | Q1,05 | PCWP | |
| 62 | PX SEND HASH | :TQ10004 | 296 | 9176 | 2 (0)| 00:00:01 | | | Q1,04 | P->P | HASH |
| 63 | PX BLOCK ITERATOR | | 296 | 9176 | 2 (0)| 00:00:01 | | | Q1,04 | PCWC | |
| 64 | TABLE ACCESS FULL | PAH01V1_STORE_LKP | 296 | 9176 | 2 (0)| 00:00:01 | | | Q1,04 | PCWP | |
| 65 | PX PARTITION RANGE ITERATOR | | 1571K| 52M| 167 (1)| 00:00:03 | KEY | 14 | Q1,06 | PCWC | |
| 66 | TABLE ACCESS BY LOCAL INDEX ROWID | PAH01V1_DG0 | 1571K| 52M| 167 (1)| 00:00:03 | KEY | 14 | Q1,06 | PCWP | |
| 67 | BITMAP CONVERSION TO ROWIDS | | | | | | | | Q1,06 | PCWP | |
| 68 | BITMAP AND | | | | | | | | Q1,06 | PCWP | |
| 69 | BITMAP MERGE | | | | | | | | Q1,06 | PCWP | |
|* 70 | BITMAP INDEX RANGE SCAN | MINDX_PAH01V1_DG022 | | | | | KEY | 14 | Q1,06 | PCWP | |
| 71 | BITMAP MERGE | | | | | | | | Q1,06 | PCWP | |
|* 72 | BITMAP INDEX RANGE SCAN | MINDX_PAH01V1_DG0_8 | | | | | KEY | 14 | Q1,06 | PCWP | |
| 73 | PX PARTITION RANGE ITERATOR | | 2594K| 86M| 14858 (1)| 00:03:29 | KEY | 14 | Q1,07 | PCWC | |
| 74 | TABLE ACCESS BY LOCAL INDEX ROWID | PAH01V1_JT | 2594K| 86M| 14858 (1)| 00:03:29 | KEY | 14 | Q1,07 | PCWP | |
| 75 | BITMAP CONVERSION TO ROWIDS | | | | | | | | Q1,07 | PCWP | |
|* 76 | BITMAP INDEX RANGE SCAN | MINDX_PAH01V1_JT_PF_ | | | | | KEY | 14 | Q1,07 | PCWP | |
| 77 | PARTITION RANGE ITERATOR | | 1 | 17 | 1 (0)| 00:00:01 | KEY | 14 | Q1,07 | PCWP | |
|* 78 | TABLE ACCESS BY LOCAL INDEX ROWID | PAH01V1_DG2 | 1 | 17 | 1 (0)| 00:00:01 | KEY | 14 | Q1,07 | PCWP | |
|* 79 | INDEX RANGE SCAN | PKINDX_PAH01V1_DG2_28 | 1 | | 1 (0)| 00:00:01 | KEY | 14 | Q1,07 | PCWP | |
| 80 | PARTITION RANGE ITERATOR | | 1 | 36 | 1 (0)| 00:00:01 | KEY | 14 | Q1,07 | PCWP | |
| 81 | TABLE ACCESS BY LOCAL INDEX ROWID | PAH01V1_DG5 | 1 | 36 | 1 (0)| 00:00:01 | KEY | 14 | Q1,07 | PCWP | |
|* 82 | INDEX RANGE SCAN | PKINDX_PAH01V1_DG5_18 | 1 | | 1 (0)| 00:00:01 | KEY | 14 | Q1,07 | PCWP | |
| 83 | PARTITION RANGE ITERATOR | | 1 | | 1 (0)| 00:00:01 | KEY | 14 | Q1,07 | PCWP | |
|* 84 | INDEX RANGE SCAN | PKINDX_PAH01V1_DG1_22 | 1 | | 1 (0)| 00:00:01 | KEY | 14 | Q1,07 | PCWP | |
|* 85 | TABLE ACCESS BY LOCAL INDEX ROWID | PAH01V1_DG1 | 1 | 23 | 1 (0)| 00:00:01 | 1 | 1 | Q1,07 | PCWP | |
135 rows selected -
The below query is taking a very long time.
select /*+ PARALLEL(a,8) PARALLEL(b,8) */ a.personid,a.winning_id, b.questionid from
winning_id_cleanup a , rm_personquestion b
where a.personid = b.personid and (a.winning_id,b.questionid) not in
(select /*+ PARALLEL(c,8) */ c.personid,c.questionid from rm_personquestion c where c.personid=a.winning_id);
The rm_personquestion table has 45 million rows and winning_id_cleanup has 1 million rows.
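One common rewrite to test for this pattern is to turn the pairwise NOT IN into a NOT EXISTS anti-join. A sketch only, using the posted table and column names; note that NOT IN and NOT EXISTS differ when the subquery columns can be NULL, so verify that personid and questionid are NOT NULL before relying on it:

```sql
-- Sketch: anti-join form of the correlated NOT IN predicate.
SELECT /*+ PARALLEL(a,8) PARALLEL(b,8) */
       a.personid, a.winning_id, b.questionid
  FROM winning_id_cleanup a
  JOIN rm_personquestion b ON a.personid = b.personid
 WHERE NOT EXISTS (SELECT 1
                     FROM rm_personquestion c
                    WHERE c.personid   = a.winning_id
                      AND c.questionid = b.questionid);
```

The correlation c.personid = a.winning_id already forced the first NOT IN column to match, so the rewrite only needs to add the questionid equality inside the subquery.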
Please tell me how to tune this query?
Please post your query in the PL/SQL forum.
This forum is not for SQL and PL/SQL questions.
QUERY taking longer time than usual
Hello Gurus,
The query below used to take 5-10 minutes depending on resource availability, but this time it is taking 4-5 hours to complete.
INSERT /*+ APPEND */ INTO TAG_STAGING
SELECT /*+ INDEX(A,ALL_tags_INDX1) */
DISTINCT TRIM (serial) serial_num,
TRIM (COMPANY_numBER) COMPANY_NUM,
TRIM (PERSON_id) PERSON_id
FROM ALL_tags@DWDB_link a
WHERE serviceS IN (SELECT /*+ INDEX(B,service_CODES_INDX2) */
services
FROM service_CODES b
WHERE srvc_cd = 'R')
AND (ORDERDATE_date BETWEEN TO_DATE ('01-JAN-2007','dd-mon-yyyy')
AND TO_DATE ('31-DEC-2007','dd-mon-yyyy'))
AND ( (TRIM (status_1) IS NULL)
OR (TRIM (status_1) = 'R')
AND (TRIM (status_2) = 'R' OR TRIM (status_2) IS NULL));
TAG_STAGING table is empty with primary key on the three given columns
ALL_tags@DWDB_link table has about 100M rows
Ideally the query should fetch about 4M rows.
Could any one please give me an idea as to how to proceed to quicken the process.
Thanks in advance
Thanks,
TT
First I'd check the explain plan to make sure that it makes sense. Perhaps an index was dropped or perhaps the stats are wrong for some reason.
If the explain plan looks good then I'd trace it and see where the time is being spent. -
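For the tracing step, a minimal sketch (assuming you can find the session's SID and SERIAL# in V$SESSION and have execute rights on DBMS_MONITOR; the values 123 and 456 are placeholders):

```sql
-- Enable SQL trace with wait events for the target session, reproduce the
-- slow run, then disable and format the trace file with tkprof.
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, waits => TRUE, binds => FALSE);
-- ... let the slow INSERT ... SELECT run ...
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);
```

The tkprof report will show where the elapsed time actually goes, per statement and per wait event.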
Hi All,
I am trying to run a SELECT statement which joins 6 tables. That query generally takes 25-30 minutes to produce its output.
Today it has been running for more than 2 hours. I have checked that there are no locks on those tables and no other process is using them.
What else should I check in order to figure out why my SELECT statement is taking so long?
Any help will be much appreciated.
Thanks!
"Please let me know if you still want me to provide all the information mentioned in the link."
Yes, please.
Before you can even start optimizing, it should be clear what parts of the query are running slow.
The link contains the steps to take to identify the things that make the query run slow.
Ideally you post a trace/tkprof report with wait events; it'll show what the time is being spent on, and give an execution plan and the database version all at once...
"Today it is running from more than 2 hours. I have checked there are no locks on those tables and no other process is using them."
Well, something must have changed.
And you must identify what exactly has changed, but it's a broad range you have to check:
- it could be outdated table statistics
- it could be data growth or skewness that makes Optimizer choose a wrong plan all of a sudden
- it could be a table that got modified with some bad index
- it could be ...
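For the first item on the list, a quick check of statistics freshness could look like this (a sketch; the table names are placeholders for the six tables in the query, and the STALE_STATS column assumes a reasonably recent Oracle release):

```sql
-- When were optimizer statistics last gathered, and are they flagged stale?
SELECT owner, table_name, last_analyzed, stale_stats
  FROM dba_tab_statistics
 WHERE table_name IN ('T1', 'T2', 'T3', 'T4', 'T5', 'T6');
```

A LAST_ANALYZED date from before a big data load, or STALE_STATS = 'YES', points at the statistics rather than the SQL itself.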
So, by posting the information in the link, you'll leave less room for guesses from us, and you'll get an explanation that makes sense faster; or, while investigating by following the steps in the link, you'll find the explanation yourself. -
Hi
I have a query which is a 3-table join, but it takes a long time to execute. I checked with the plan table; it shows FULL ACCESS on one of the tables.
I have 2 clarifications:
1. Will checking the status column for NULL prevent the use of an index? (It shouldn't use the index.)
2. Are CASE statements recommended in queries?
Query
Select .........
FROM CLIENT LEFT OUTER JOIN INTERNET_LOGIN ON INTERNET_LOGIN.NUM_CLIENT_ID=CLIENT.NUM_CLIENT_ID,
POLI_MOT
WHERE
POLI_MOT.NUM_CLIENT_ID=CLIENT.NUM_CLIENT_ID
AND
(POLI_MOT.CHR_CANCEL_STATUS='N'
OR
POLI_MOT.CHR_CANCEL_STATUS IS NULL)
AND
CLIENT.NUM_CONTACT_TYPE_ID IN (1,3)
AND
(NVL(POLI_MOT.VCH_NEW_IC_NO,'A') =
CASE WHEN (NVL(null,NULL) IS NULL) THEN
NVL(POLI_MOT.VCH_NEW_IC_NO,'A')
ELSE
NVL(null,NULL)
END
OR
POLI_MOT.VCH_OLD_IC_NO =
CASE WHEN nvl(null,null) IS NULL THEN
POLI_MOT.VCH_OLD_IC_NO
ELSE
NVL(null,NULL)
END )
AND POLI_MOT.VCH_POLICY_NO =
CASE WHEN UPPER(nvl(NULL,null)) IS NULL THEN
POLI_MOT.VCH_POLICY_NO
ELSE
NVL(NULL,NULL)
END
AND POLI_MOT.VCH_VEHICLE_NO =
CASE WHEN UPPER(NVL('123',NULL)) IS NULL THEN
POLI_MOT.VCH_VEHICLE_NO
ELSE
NVL('123',NULL)
END
Hi,
There is nothing wrong with a full table access as such. When you do the explain plan, please check which table costs the most, and try to work on that table.
To tune the performance of your query you can try either indexing or parallel access.
The syntax for the parallel hint is
/*+ PARALLEL("TBL_NM",100) */ (or any other degree)...
For an index, please use a hint with the name of the index on the table you want indexed.
regards
Bharath -
Query taking a long time to fetch the results
Hi!
When I run the query, it takes too long to fetch the result sets.
Please find the query below.
SELECT
A.BUSINESS_UNIT,
A.JOURNAL_ID,
TO_CHAR(A.JOURNAL_DATE,'YYYY-MM-DD'),
A.UNPOST_SEQ,
A.FISCAL_YEAR,
A.ACCOUNTING_PERIOD,
A.JRNL_HDR_STATUS,
C.INVOICE,
C.ACCT_ENTRY_TYPE,
C.LINE_DST_SEQ_NUM,
C.TAX_AUTHORITY_CD,
C.ACCOUNT,
C.MONETARY_AMOUNT,
D.BILL_SOURCE_ID,
D.IDENTIFIER,
D.VAT_AMT_BSE,
D.VAT_TRANS_AMT_BSE,
D.VAT_TXN_TYPE_CD,
D.TAX_CD_VAT,
D.TAX_CD_VAT_PCT,
D.VAT_APPLICABILITY,
E.BILL_TO_CUST_ID,
E.BILL_STATUS,
E.BILL_CYCLE_ID,
TO_CHAR(E.INVOICE_DT,'YYYY-MM-DD'),
TO_CHAR(E.ACCOUNTING_DT,'YYYY-MM-DD'),
TO_CHAR(E.DT_INVOICED,'YYYY-MM-DD'),
E.ENTRY_TYPE,
E.ENTRY_REASON,
E.AR_LVL,
E.AR_DST_OPT,
E.AR_ENTRY_CREATED,
E.GEN_AR_ITEM_FLG,
E.GL_LVL, E.GL_ENTRY_CREATED,
(Case when c.account in ('30120000','30180050','30190000','30290000','30490000',
'30690000','30900040','30990000','35100000','35120000','35150000','35160000',
'39100050','90100000')
and D.TAX_CD_VAT_PCT <> 0 then 'Ej_Momskonto_med_moms'
When c.account not in ('30120000','30180050','30190000','30290000',
'30490000','30690000','30900040','30990000','35100000','35120000','35150000',
'35160000','39100050','90100000')
and D.TAX_CD_VAT_PCT <> 25 then 'Momskonto_utan_moms' end)
FROM
sysadm.PS_JRNL_HEADER A,
sysadm.PS_JRNL_LN B,
sysadm.PS_BI_ACCT_ENTRY C,
sysadm.PS_BI_LINE D,
sysadm.PS_BI_HDR E
WHERE A.BUSINESS_UNIT = '&BU'
AND A.JOURNAL_DATE BETWEEN TO_DATE('&From_date','YYYY-MM-DD')
AND TO_DATE('&To_date','YYYY-MM-DD')
AND A.SOURCE = 'BI'
AND A.BUSINESS_UNIT = B.BUSINESS_UNIT
AND A.JOURNAL_ID = B.JOURNAL_ID
AND A.JOURNAL_DATE = B.JOURNAL_DATE
AND A.UNPOST_SEQ = B.UNPOST_SEQ
AND B.BUSINESS_UNIT = C.BUSINESS_UNIT
AND B.JOURNAL_ID = C.JOURNAL_ID
AND B.JOURNAL_DATE = C.JOURNAL_DATE
AND B.JOURNAL_LINE = C.JOURNAL_LINE
AND C.ACCT_ENTRY_TYPE = 'RR'
AND C.BUSINESS_UNIT = '&BU'
AND C.BUSINESS_UNIT = D.BUSINESS_UNIT
AND C.INVOICE = D.INVOICE
AND C.LINE_SEQ_NUM = D.LINE_SEQ_NUM
AND D.BUSINESS_UNIT = '&BU'
AND D.BUSINESS_UNIT = E.BUSINESS_UNIT
AND D.INVOICE = E.INVOICE
AND E.BUSINESS_UNIT = '&BU'
AND
((c.account in ('30120000','30180050','30190000','30290000','30490000',
'30690000','30900040','30990000','35100000','35120000','35150000','35160000',
'39100050','90100000')
and D.TAX_CD_VAT_PCT <> 0)
OR
(c.account not in ('30120000','30180050','30190000','30290000','30490000',
'30690000','30900040','30990000','35100000','35120000','35150000','35160000',
'39100050','90100000')
and D.TAX_CD_VAT_PCT <> 25))
GROUP BY
A.BUSINESS_UNIT,
A.JOURNAL_ID,
TO_CHAR(A.JOURNAL_DATE,'YYYY-MM-DD'),
A.UNPOST_SEQ, A.FISCAL_YEAR,
A.ACCOUNTING_PERIOD,
A.JRNL_HDR_STATUS,
C.INVOICE,
C.ACCT_ENTRY_TYPE,
C.LINE_DST_SEQ_NUM,
C.TAX_AUTHORITY_CD,
C.ACCOUNT,
D.BILL_SOURCE_ID,
D.IDENTIFIER,
D.VAT_TXN_TYPE_CD,
D.TAX_CD_VAT,
D.TAX_CD_VAT_PCT,
D.VAT_APPLICABILITY,
E.BILL_TO_CUST_ID,
E.BILL_STATUS,
E.BILL_CYCLE_ID,
TO_CHAR(E.INVOICE_DT,'YYYY-MM-DD'),
TO_CHAR(E.ACCOUNTING_DT,'YYYY-MM-DD'),
TO_CHAR(E.DT_INVOICED,'YYYY-MM-DD'),
E.ENTRY_TYPE, E.ENTRY_REASON,
E.AR_LVL, E.AR_DST_OPT,
E.AR_ENTRY_CREATED,
E.GEN_AR_ITEM_FLG,
E.GL_LVL,
E.GL_ENTRY_CREATED,
C.MONETARY_AMOUNT,
D.VAT_AMT_BSE,
D.VAT_TRANS_AMT_BSE
having
(Case when c.account in ('30120000','30180050','30190000','30290000',
'30490000','30690000','30900040','30990000','35100000','35120000','35150000',
'35160000','39100050','90100000')
and D.TAX_CD_VAT_PCT <> 0 then 'Ej_Momskonto_med_moms'
When c.account not in ('30120000','30180050','30190000','30290000','30490000',
'30690000','30900040','30990000','35100000','35120000','35150000','35160000',
'39100050','90100000')
and D.TAX_CD_VAT_PCT <> 25 then 'Momskonto_utan_moms' end) is not null
Could you provide a solution to fix this issue?
Thanks
senthil
[url http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0]When your query takes too long ...
Regards,
Rob. -
Query taking long time in oracle 10g
I have a query which runs in 1 minute on Oracle 8 but takes 2 hours on Oracle 10. The query has a couple of subqueries to select the max effective date as well as the current effective sequence. I checked the parameters and their values are as follows. I want to know whether any values can be increased to make it run faster. Also, I did not find the parameter _unnest_subquery; I think it should be set to FALSE, but I did not see it when I did select * from v$parameter. Is it FALSE by default, or should I explicitly set it? Thanks
Statistic Name Result
processes 200
sessions 225
timed_statistics TRUE
sga_target 335544320
control_files /ora1db13/oradata/KVSU2P13/control01.ctl, /ora2db13/o
db_block_size 8192
compatible 10.2.0.1.0
db_file_multiblock_read_count 4
undo_management AUTO
undo_tablespace ADP_UNDO
db_domain
service_names KVSU2P13, KVSU2P13_VSUP
dispatchers (PROTOCOL=tcp)(DISPATCHERS=4)(CONNECTIONS=50)
shared_servers 10
max_shared_servers 20
shared_server_sessions 150
job_queue_processes 10
background_dump_dest /udb01/app/oracle/admin/KVSU2P13/bdump
user_dump_dest /udb01/app/oracle/admin/KVSU2P13/udump
core_dump_dest /udb01/app/oracle/admin/KVSU2P13/cdump
db_name KVSU2P13
open_cursors 300
_optimizer_cost_based_transformation off
_always_semi_join off
optimizer_index_cost_adj 10
optimizer_index_caching 50
pga_aggregate_target 25165824
workarea_size_policy auto
Please read these standard threads:
How to post a tuning request:
HOW TO: Post a SQL statement tuning request - template posting
When your query takes too long:
When your query takes too long ... -
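As a side note on the _unnest_subquery question above: underscore ("hidden") parameters do not appear in v$parameter unless they have been explicitly set. Connected as SYS, they can be inspected through the unsupported x$ksppi / x$ksppcv fixed tables; a commonly used sketch (use at your own risk on a test system) is:

```sql
-- List hidden parameters matching a pattern (run as SYS; unsupported fixed tables).
SELECT i.ksppinm  AS parameter_name,
       v.ksppstvl AS current_value
  FROM x$ksppi  i,
       x$ksppcv v
 WHERE i.indx = v.indx
   AND i.ksppinm LIKE '\_unnest%' ESCAPE '\';
```

Rather than setting hidden parameters system-wide, it is usually safer to test the behavior per query with the NO_UNNEST hint inside the subquery.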
SQL Query taking long time....its very urgent !!!
Hi All,
Can anybody help me out tuning this query? Its cost is 62,900, and there is a full table scan on ap_invoices_all...
For one invoice ID it is taking 20 seconds...
SELECT /*+ INDEX ( i2 AP_INVOICES_N8 ) INDEX ( i1 AP_INVOICES_N8 ) */ DISTINCT ou.name operating_unit,
NVL(SUBSTR(UPPER(TRANSLATE(i1.invoice_num,'a!@#\/-_$%^&*.','a')),
1,:P_MATCH_LENGTH),'NomatchKluDge1') match_string,
UPPER(v.vendor_name) upper_supplier_name,
i1.invoice_num invoice_number,
to_char(i1.invoice_date,'DD-MON-YYYY') invoice_date,
--i1.invoice_date invoice_date,
NVL(i1.invoice_amount,0) invoice_amount,
i1.invoice_currency_code currency_code,
v.segment1 supplier_number,
v.vendor_name supplier_name,
ssa.vendor_site_code supplier_code,
lc.displayed_field invoice_type,
poh.segment1 po_number,
(select min(por.release_num)
from po_releases_all por
where poh.po_header_id = por.po_header_id) release_num,
gcc.segment1 location,
i1.payment_method_code payment_method_code,
DECODE(LENGTH(TO_CHAR(aca.check_number)),9,aca.check_number,aca.doc_sequence_value) payment_doc_number
FROM ap_invoices_all i1,
ap_invoices_all i2,
ap_suppliers v ,
ap_supplier_sites_all ssa,
ap_lookup_codes lc,
/* (select distinct pha.SEGMENT1, i.PO_HEADER_ID, i.INVOICE_ID
from ap_invoice_lines_all i
,po_headers_all pha
where pha.PO_HEADER_ID = i.PO_HEADER_ID) poh, */
po_headers_all poh,
ap_invoice_lines_all ail,
ap_invoice_distributions_all aida,
gl_code_combinations gcc,
ap_checks_all aca,
ap_invoice_payments_all ipa,
hr_all_organization_units ou
WHERE i1.invoice_id <> i2.invoice_id
AND NVL(substr(upper(translate(i1.invoice_num,'a!@#\/-_$%^&*.','a')),
1,:P_MATCH_LENGTH),'NomatchKluDge1')
= NVL(substr(upper(translate(i2.invoice_num,'a!@#\/-_$%^&*.','a')),
1,:P_MATCH_LENGTH),'abcdefghijklm')
--AND i1.creation_date between :p_creation_date_from and :p_creation_date_to
AND i1.cancelled_date IS NULL
--AND i2.creation_date between :p_creation_date_from and :p_creation_date_to
AND i2.cancelled_date IS NULL
AND i1.invoice_amount = nvl(i2.invoice_amount,-1)
--AND i1.vendor_id = i2.vendor_id
AND i1.vendor_id+0 = i2.vendor_id+0
AND nvl(i1.vendor_id,-1) = v.vendor_id
AND i1.invoice_id = aida.invoice_id
AND aida.distribution_line_number = 1
AND gcc.code_combination_id = aida.dist_code_combination_id
AND lc.lookup_code (+) = i1.invoice_type_lookup_code
AND lc.lookup_type (+) = 'INVOICE TYPE'
AND i1.vendor_site_id = ssa.vendor_site_id(+)
--AND i1.invoice_id = poh.invoice_id (+)
AND i1.invoice_id = ail.invoice_id
--AND ail.line_number = 1
AND aida.INVOICE_LINE_NUMBER = 1
--AND ail.po_header_id = poh.po_header_id (+)
AND ail.po_header_id = poh.po_header_id
AND ail.INVOICE_ID = aida.INVOICE_ID
and ail.LINE_NUMBER = aida.INVOICE_LINE_NUMBER
AND i1.invoice_id = ipa.invoice_id(+)
AND ipa.check_id = aca.check_id(+)
AND i1.org_id = ou.organization_id
and i1.invoice_id = 123456
ORDER BY upper(v.vendor_name),
NVL(substr(upper(translate(i1.invoice_num,'a!@#\/-_$%^&*.','a')),
1,:P_MATCH_LENGTH),'abcdefghijklm'),
upper(i1.invoice_num);
Regards
--Harry
I tried to rewrite this query to format it into something more readable. Since I can't test it, this may have introduced syntax errors:
SELECT /*+ INDEX ( i2 AP_INVOICES_N8 ) INDEX ( i1 AP_INVOICES_N8 ) */
DISTINCT ou.name operating_unit,
NVL(SUBSTR(UPPER(TRANSLATE(i1.invoice_num,
'a!@#\/-_$%^&*.','a')),
1,:P_MATCH_LENGTH),'NomatchKluDge1') match_string,
UPPER(v.vendor_name) upper_supplier_name,
i1.invoice_num invoice_number,
to_char(i1.invoice_date,'DD-MON-YYYY') invoice_date,
NVL(i1.invoice_amount,0) invoice_amount,
i1.invoice_currency_code currency_code,
v.segment1 supplier_number,
v.vendor_name supplier_name,
ssa.vendor_site_code supplier_code,
lc.displayed_field invoice_type,
poh.segment1 po_number,
(SELECT MIN(por.release_num)
FROM po_releases_all por
WHERE poh.po_header_id = por.po_header_id) release_num,
gcc.segment1 location,
i1.payment_method_code payment_method_code,
DECODE(LENGTH(TO_CHAR(aca.check_number)),9,
aca.check_number,aca.doc_sequence_value) payment_doc_number
FROM ap_invoices_all i1
INNER JOIN ap_invoices_all i2
ON i1.invoice_id <> i2.invoice_id
AND i1.invoice_amount = NVL(i2.invoice_amount,-1)
AND i1.vendor_id+0 = i2.vendor_id+0
INNER JOIN ap_suppliers v
ON NVL(i1.vendor_id,-1) = v.vendor_id
INNER JOIN ap_lookup_codes lc
ON lc.lookup_code = i1.invoice_type_lookup_code
INNER JOIN ap_invoice_distributions_all aida
ON i1.invoice_id = aida.invoice_id
INNER JOIN gl_code_combinations gcc
ON gcc.code_combination_id = aida.dist_code_combination_id
INNER JOIN ap_invoice_lines_all ail
ON i1.invoice_id = ail.invoice_id
INNER JOIN po_headers_all poh
ON ail.po_header_id = poh.po_header_id
INNER JOIN hr_all_organization_units ou
ON i1.org_id = ou.organization_id
LEFT JOIN (ap_invoice_payments_all ipa
INNER JOIN ap_checks_all aca
ON ipa.check_id = aca.check_id)
ON i1.invoice_id = ipa.invoice_id
LEFT JOIN ap_supplier_sites_all ssa
ON i1.vendor_site_id = ssa.vendor_site_id
WHERE NVL(substr(upper(translate(i1.invoice_num,'a!@#\/-_$%^&*.','a')),
1,:P_MATCH_LENGTH),'NomatchKluDge1')
= NVL(substr(upper(translate(i2.invoice_num,'a!@#\/-_$%^&*.','a')),
1,:P_MATCH_LENGTH),'abcdefghijklm')
AND i1.cancelled_date IS NULL
AND i2.cancelled_date IS NULL
AND aida.distribution_line_number = 1
AND aida.INVOICE_LINE_NUMBER = 1
AND ail.LINE_NUMBER = 1
AND lc.lookup_type = 'INVOICE TYPE'
AND i1.invoice_id = 123456
ORDER BY upper(v.vendor_name),
NVL(substr(upper(translate(i1.invoice_num,'a!@#\/-_$%^&*.','a')),
1,:P_MATCH_LENGTH),'abcdefghijklm'),
upper(i1.invoice_num);
I dislike queries in the SELECT clause like the one you have to get RELEASE_NUM. One thing in particular that I see about this one is that it appears to be the only place anything from the PO_HEADERS_ALL table is used. PO_HEADERS_ALL is only pulled into the query via the AP_INVOICE_LINES_ALL table. Since the join column used for that is PO_HEADER_ID, and that's the same one used in the SELECT clause query, do you really even need the PO_HEADERS_ALL table? This would remove one join at least.
Your query had "AND aida.INVOICE_LINE_NUMBER = 1" and "AND ail.LINE_NUMBER = aida.INVOICE_LINE_NUMBER". The second needn't reference AIDA, so I changed it to "AND ail.LINE_NUMBER = 1". It likely won't make a performance impact, but the SQL is clearer. -
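Following that observation, a sketch of the simplification (untested, and assuming poh.segment1, the PO number, is not actually needed in the output): correlate the scalar subquery on the invoice line's PO_HEADER_ID so PO_HEADERS_ALL drops out entirely.

```sql
-- Correlate on ail.po_header_id directly; PO_HEADERS_ALL is no longer joined.
-- The "..." stands for the rest of the original select list and joins.
SELECT ...,
       (SELECT MIN(por.release_num)
          FROM po_releases_all por
         WHERE por.po_header_id = ail.po_header_id) AS release_num
  FROM ap_invoice_lines_all ail,
       ...
```

If the PO number is needed, keep the join but verify an index exists on po_releases_all(po_header_id) so the scalar subquery doesn't scan per row.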
Essbase Integration System problem: We build cubes using an EIS query. The query, which used to take around 10 minutes to fetch records, is now taking more than 100 minutes. There hasn't been any significant change in the fetched data; the number of records is almost the same. From the DBA's point of view, the query is sorting data at a large scale. Any idea why this happens? It has suddenly started to happen.
-
Query taking long time please help
select
o.merchantid as merchantId,
o.orderid as orderId,
p.effortid as effortId,
p.attemptid as attemptId,
o.customerid as customerId,
p.contractid as contractId,
o.merchantreference as merchantReference,
o.ordertype as orderType,
o.statusid as orderstatusId,
o.amount/100 as orderAmount,
o.currencycode as ordercurrencyCode,
o.amountrefunded/100 as amountRefunded,
o.totalamountpaid/100 as totalAmountPaid,
o.totalamountrefunded/100 as totalAmountRefunded,
p.paymentreference as paymentReference,
p.statusid as statusId,
p.amount/100 as amount,
to_char(p.statusdate,
'YYYY-MM-DD HH24:MI') as statusDate,
to_char(p.receiveddate,
'YYYY-MM-DD HH24:MI') as receivedDate,
p.creditdebitindicator as creditdebitIndicator,
to_char(p.paymentdate,
'YYYY-MM-DD HH24:MI') as paymentDate,
p.paymentmethodid as paymentMethodId,
p.paymentproductid as paymentProductId,
p.currencycode as currencyCode,
p.paymentamount/100 as paymentAmount,
p.paymentcurrencycode as paymentCurrencyCode,
pe.rejectioncode as rejectionCode,
p.amountreceived/100 as amountReceived,
pe.rejectionparameters as rejectionParameters,
pp.paymentproductgroupid as paymentproductgroupId,
ps.Statuscode as statusCode,
pp.paymentproductname as paymentProductName
FROM
Opr_Order o,
opr_paymentattempt p,
opr_paymentattempt_error pe,
gpm_paymentstatus ps,
gpm_paymentproduct pp
WHERE
pe.merchantid(+) = p.merchantid
and pe.orderid(+) = p.orderid
and pe.effortid(+) = p.effortid
and pe.attemptid(+) = p.attemptid
and o.merchantid(+) = p.merchantid
and o.orderid(+) = p.orderid
AND pp.paymentproductid = p.paymentproductid
and p.paymentmethodid = ps.paymentmethodid
and p.statusid = ps.statusid
and pp.validindicator = 1
AND ROWNUM <= 1500
AND (
pp.PAYMENTPRODUCTGROUPID = 10
OR (
pp.PAYMENTPRODUCTGROUPID = 20
OR (
pp.PAYMENTPRODUCTGROUPID = 30
OR (
pp.PAYMENTPRODUCTGROUPID = 40
OR (
pp.PAYMENTPRODUCTGROUPID = 50
OR (
pp.PAYMENTPRODUCTGROUPID = 60
OR (
pp.PAYMENTPRODUCTGROUPID = 70
OR (
pp.PAYMENTPRODUCTGROUPID = 80
AND p.receiveddate BETWEEN TO_DATE(20050801000000,'yyyyMMddhh24miss') AND TO_DATE(20110810235900,'yyyyMMddhh24miss')
Please follow these guidelines to submit your request for query tuning:
SQL and PL/SQL FAQ
OR
When your query takes too long ...
HOW TO: Post a SQL statement tuning request - template posting
/* Formatted on 9/13/2011 8:20:23 AM (QP5 v5.163.1008.3004) */
SELECT o.merchantid AS merchantId,
o.orderid AS orderId,
p.effortid AS effortId,
p.attemptid AS attemptId,
o.customerid AS customerId,
p.contractid AS contractId,
o.merchantreference AS merchantReference,
o.ordertype AS orderType,
o.statusid AS orderstatusId,
o.amount / 100 AS orderAmount,
o.currencycode AS ordercurrencyCode,
o.amountrefunded / 100 AS amountRefunded,
o.totalamountpaid / 100 AS totalAmountPaid,
o.totalamountrefunded / 100 AS totalAmountRefunded,
p.paymentreference AS paymentReference,
p.statusid AS statusId,
p.amount / 100 AS amount,
TO_CHAR (p.statusdate, 'YYYY-MM-DD HH24:MI') AS statusDate,
TO_CHAR (p.receiveddate, 'YYYY-MM-DD HH24:MI') AS receivedDate,
p.creditdebitindicator AS creditdebitIndicator,
TO_CHAR (p.paymentdate, 'YYYY-MM-DD HH24:MI') AS paymentDate,
p.paymentmethodid AS paymentMethodId,
p.paymentproductid AS paymentProductId,
p.currencycode AS currencyCode,
p.paymentamount / 100 AS paymentAmount,
p.paymentcurrencycode AS paymentCurrencyCode,
pe.rejectioncode AS rejectionCode,
p.amountreceived / 100 AS amountReceived,
pe.rejectionparameters AS rejectionParameters,
pp.paymentproductgroupid AS paymentproductgroupId,
ps.Statuscode AS statusCode,
pp.paymentproductname AS paymentProductName
FROM Opr_Order o,
opr_paymentattempt p,
opr_paymentattempt_error pe,
gpm_paymentstatus ps,
gpm_paymentproduct pp
WHERE p.merchantid = pe.merchantid(+)
AND p.orderid = pe.orderid(+)
AND p.effortid = pe.effortid(+)
AND p.attemptid = pe.attemptid(+)
AND p.merchantid = o.merchantid(+)
AND p.orderid = o.orderid(+)
AND p.paymentproductid = pp.paymentproductid
AND p.paymentmethodid = ps.paymentmethodid
AND p.statusid = ps.statusid
AND pp.validindicator = 1
AND ROWNUM <= 1500
AND pp.PAYMENTPRODUCTGROUPID IN ( 10, 20, 30, 40, 50, 60, 70, 80)
AND p.receiveddate BETWEEN TO_DATE (20050801000000, 'yyyyMMddhh24miss') AND TO_DATE (20110810235900, 'yyyyMMddhh24miss')
Edited by: user130038 on Sep 13, 2011 5:28 AM
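One more small point on that predicate: TO_DATE is being fed a NUMBER literal, which forces an implicit number-to-string conversion before the date conversion. Passing string literals avoids that (sketch of the same predicate only):

```sql
-- String literals: no implicit NUMBER -> VARCHAR2 conversion inside TO_DATE.
AND p.receiveddate BETWEEN TO_DATE('20050801000000', 'yyyymmddhh24miss')
                       AND TO_DATE('20110810235900', 'yyyymmddhh24miss')
```

It won't change the plan by itself, but it removes one source of surprise if NLS numeric settings ever differ between environments.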