Merge query taking a long time!!!
Hi all,
I'm using the update/insert load type in one of my mappings.
The mapping usually takes 3-4 minutes to complete.
Suddenly it has taken 1 hour to complete. If I take the SELECT query from the mapping and run it on its own, it runs fast and completes in 2 minutes.
I checked the source data count and the indexes on the source columns; everything is the same.
At the mapping level I didn't make any changes. Can anyone suggest what the possible reason would be?
Thanks and Regards
Ela
Lots of things come into play when you're tuning a query.
An (unformatted) execution plan isn't enough.
Tuning takes time and understanding how (a lot of) things work, there is no ASAP in the world of tuning.
Please post other important details, like your database version, optimizer settings, how/when are table statistics gathered etc.
So, read the following informative threads (and please take your time, this really is important stuff), and adjust your thread as needed.
That way you'll have a bigger chance of getting help that makes sense...
Your DBA should/ought to be able to help you in this as well.
Re: HOW TO: Post a SQL statement tuning request - template posting
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
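As a concrete starting point, the kind of detail those threads ask for can be collected with a few queries. This is only a sketch; MY_TABLE is a placeholder for the tables your mapping actually touches:

```sql
-- Database version:
select * from v$version;

-- Optimizer-relevant settings:
select name, value
from   v$parameter
where  name like 'optimizer%';

-- When statistics were last gathered on the tables involved:
select table_name, num_rows, last_analyzed
from   user_tables
where  table_name = 'MY_TABLE';

-- A formatted plan for the statement you just ran in this session:
select * from table(dbms_xplan.display_cursor(null, null, 'TYPICAL'));
```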
Similar Messages
-
Query taking long time for EXTRACTING the data more than 24 hours
Hi ,
The query is taking a long time for extracting the data, more than 24 hours. Please find the query and explain plan details below; even though indexes are available on the tables, it goes for a FULL TABLE SCAN. Please suggest.
SQL> explain plan for
  select a.account_id, round(a.account_balance, 2) account_balance,
         nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
         to_char(ah.effective_start_date, 'DD-MON-YYYY') transaction_date,
         to_char(nvl(i.payment_due_date,
                     to_date('30-12-9999', 'dd-mm-yyyy')), 'DD-MON-YYYY') due_date,
         ah.current_balance - ah.previous_balance amount,
         decode(ah.invoice_id, null, 'A', 'I') transaction_type
  from account a, account_history ah, invoice i
  where a.account_id = ah.account_id
  and a.account_type_id = 1000002
  and round(a.account_balance, 2) > 0
  and (ah.invoice_id is not null or ah.adjustment_id is not null)
  and ah.current_balance > ah.previous_balance
  and ah.invoice_id = i.invoice_id(+)
  and a.account_balance > 0
  order by a.account_id, ah.effective_start_date desc;
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
| 0 | SELECT STATEMENT | | 544K| 30M| | 693K (20)|
| 1 | SORT ORDER BY | | 544K| 30M| 75M| 693K (20)|
|* 2 | HASH JOIN | | 544K| 30M| | 689K (20)|
|* 3 | TABLE ACCESS FULL | ACCOUNT | 20080 | 294K| | 6220 (18)|
|* 4 | HASH JOIN OUTER | | 131M| 5532M| 5155M| 678K (20)|
|* 5 | TABLE ACCESS FULL| ACCOUNT_HISTORY | 131M| 3646M| | 197K (25)|
| 6 | TABLE ACCESS FULL| INVOICE | 262M| 3758M| | 306K (18)|
Predicate Information (identified by operation id):
2 - access("A"."ACCOUNT_ID"="AH"."ACCOUNT_ID")
3 - filter("A"."ACCOUNT_TYPE_ID"=1000002 AND "A"."ACCOUNT_BALANCE">0 AND
ROUND("A"."ACCOUNT_BALANCE",2)>0)
4 - access("AH"."INVOICE_ID"="I"."INVOICE_ID"(+))
5 - filter("AH"."CURRENT_BALANCE">"AH"."PREVIOUS_BALANCE" AND ("AH"."INVOICE_ID"
IS NOT NULL OR "AH"."ADJUSTMENT_ID" IS NOT NULL))
22 rows selected.
Index Details:
SQL> select INDEX_OWNER,INDEX_NAME,COLUMN_NAME,TABLE_NAME from dba_ind_columns where
2 table_name in ('INVOICE','ACCOUNT','ACCOUNT_HISTORY') order by 4;
INDEX_OWNER INDEX_NAME COLUMN_NAME TABLE_NAME
OPS$SVM_SRV4 P_ACCOUNT ACCOUNT_ID ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT_NAME ACCOUNT_NAME ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT CUSTOMER_NODE_ID ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT ACCOUNT_TYPE_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_ACCOUNT_TYPE ACCOUNT_TYPE_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_INVOICE INVOICE_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_PREVIOUS_INVOICE PREVIOUS_INVOICE_ID ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_NAME ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_ID ACCOUNT
OPS$SVM_SRV4 I_LAST_MODIFIED_ACCOUNT LAST_MODIFIED ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_INVOICE_ACCOUNT INVOICE_ACCOUNT_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT ACCOUNT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT SEQNR ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_INVOICE INVOICE_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV INVOICE_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA CURRENT_BALANCE ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA INVOICE_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ADJUSTMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ACCOUNT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_LMOD LAST_MODIFIED ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV ADJUSTMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_PAYMENT PAYMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADJUSTMENT ADJUSTMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_APPLIED_DT APPLIED_DATE ACCOUNT_HISTORY
OPS$SVM_SRV4 P_INVOICE INVOICE_ID INVOICE
OPS$SVM_SRV4 U_INVOICE CUSTOMER_INVOICE_STR INVOICE
OPS$SVM_SRV4 I_LAST_MODIFIED_INVOICE LAST_MODIFIED INVOICE
OPS$SVM_SRV4 U_INVOICE_ACCOUNT ACCOUNT_ID INVOICE
OPS$SVM_SRV4 U_INVOICE_ACCOUNT BILL_RUN_ID INVOICE
OPS$SVM_SRV4 I_INVOICE_BILL_RUN BILL_RUN_ID INVOICE
OPS$SVM_SRV4 I_INVOICE_INVOICE_TYPE INVOICE_TYPE_ID INVOICE
OPS$SVM_SRV4 I_INVOICE_CUSTOMER_NODE CUSTOMER_NODE_ID INVOICE
32 rows selected.
Regards,
Bathula
Oracle-DBA
I have some suggestions. But first, you realize that you have some redundant indexes, right? You have an index on account(account_name) and also account(account_name, account_id), and likewise account_history(invoice_id) and account_history(invoice_id, adjustment_id). No matter, I will suggest some new composite indexes.
Also, you do not need two lines for these conditions:
and round(a.account_balance, 2) > 0
AND a.account_balance > 0
You can just use: and a.account_balance >= 0.005
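A quick sanity check of that equivalence (Oracle's ROUND rounds half away from zero, so round(x, 2) > 0 first becomes true at x = 0.005):

```sql
-- round(x, 2) > 0 flips from false to true at the 0.005 boundary:
select round(0.004, 2) as just_below,  -- 0
       round(0.005, 2) as at_boundary  -- 0.01
from   dual;
```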
So the formatted query is:
select a.account_id,
round(a.account_balance, 2) account_balance,
nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
to_char(ah.effective_start_date, 'DD-MON-YYYY') transaction_date,
to_char(nvl(i.payment_due_date, to_date('30-12-9999', 'dd-mm-yyyy')),
'DD-MON-YYYY') due_date,
ah.current_balance - ah.previous_balance amount,
decode(ah.invoice_id, null, 'A', 'I') transaction_type
from account a, account_history ah, invoice i
where a.account_id = ah.account_id
and a.account_type_id = 1000002
and (ah.invoice_id is not null or ah.adjustment_id is not null)
and ah.CURRENT_BALANCE > ah.previous_balance
and ah.invoice_id = i.invoice_id(+)
AND a.account_balance >= .005
order by a.account_id, ah.effective_start_date desc;
You will probably want to select:
1. From ACCOUNT first (your smaller table), for which you supply a literal on account_type_id. That should limit the accounts retrieved from ACCOUNT_HISTORY
2. From ACCOUNT_HISTORY. We want to limit the records as much as possible on this table because of the outer join.
3. INVOICE we want to access last because it seems to be least restricted, it is the biggest, and it has the outer join condition so it will manufacture rows to match as many rows as come back from account_history.
Try the query above after creating the following composite indexes. The order of the columns is important:
create index account_composite_i on account(account_type_id, account_balance, account_id);
create index acct_history_comp_i on account_history(account_id, invoice_id, adjustment_id, current_balance, previous_balance, effective_start_date);
create index invoice_composite_i on invoice(invoice_id, payment_due_date);
All the columns used in the where clause will be indexed, in a logical order suited to the needs of the query. Plus each selected column is indexed as well, so we should not need to touch the tables at all to satisfy the query.
Try the query after creating these indexes.
A final suggestion is to try larger sort and hash area sizes and a manual workarea policy:
alter session set workarea_size_policy = manual;
alter session set sort_area_size = 2147483647;
alter session set hash_area_size = 2147483647;
-
Query taking long time to run.
The following query is taking a long time to run. Is there anything that can be done to make it run faster, e.g. by changing the SQL?
select distinct
A.DEPTID,
A.POSITION_NBR,
A.EMPLID,
A.EMPL_RCD_NBR,
A.EFFDT,
B.NAME,
A.EMPL_STATUS,
A.JOBCODE,
A.ANNUAL_RT,
A.STD_HOURS,
A.PRIMARY_JOB,
C.POSN_STATUS,
case when A.POSITION_NBR = ' ' then 0 else C.STD_HOURS end,
case when A.POSITION_NBR = ' ' then ' ' else C.DEPTID end
from PS_JOB A,
PS_PERSONAL_DATA B,
PS_POSITION_DATA C
where A.EMPLID = B.EMPLID
and
((A.POSITION_NBR = C.POSITION_NBR
and A.EFFSEQ = (select max(D.EFFSEQ)
from PS_JOB D
where D.EMPLID = A.EMPLID
and D.EMPL_RCD_NBR = A.EMPL_RCD_NBR
and D.EFFDT = A.EFFDT)
and C.POSN_STATUS <> 'G'
and C.EFFDT = (select max(E.EFFDT)
from PS_POSITION_DATA E
where E.POSITION_NBR = A.POSITION_NBR
and E.EFFDT <= A.EFFDT)
and C.EFFSEQ = (select max(F.EFFSEQ)
from PS_POSITION_DATA F
where F.POSITION_NBR = A.POSITION_NBR
and F.EFFDT = C.EFFDT))
or
(A.POSITION_NBR = C.POSITION_NBR
and A.EFFDT = (select max(D.EFFDT)
from PS_JOB D
where D.EMPLID = A.EMPLID
and D.EMPL_RCD_NBR = A.EMPL_RCD_NBR
and D.EFFDT <= C.EFFDT)
and A.EFFSEQ = (select max(E.EFFSEQ)
from PS_JOB E
where E.EMPLID = A.EMPLID
and E.EMPL_RCD_NBR = A.EMPL_RCD_NBR
and E.EFFDT = A.EFFDT)
and C.POSN_STATUS <> 'G'
and C.EFFSEQ = (select max(F.EFFSEQ)
from PS_POSITION_DATA F
where F.POSITION_NBR = A.POSITION_NBR
and F.EFFDT = C.EFFDT)))
or
(A.POSITION_NBR = ' '
and A.EFFSEQ = (select max(D.EFFSEQ)
from PS_JOB D
where D.EMPLID = A.EMPLID
and D.EMPL_RCD_NBR = A.EMPL_RCD_NBR
and D.EFFDT = A.EFFDT)))
Using the distributive law A and (B or C) = (A and B) or (A and C) from right to left we can have:
select distinct A.DEPTID,A.POSITION_NBR,A.EMPLID,A.EMPL_RCD_NBR,A.EFFDT,B.NAME,A.EMPL_STATUS,
A.JOBCODE,A.ANNUAL_RT,A.STD_HOURS,A.PRIMARY_JOB,C.POSN_STATUS,
case when A.POSITION_NBR = ' ' then 0 else C.STD_HOURS end,
case when A.POSITION_NBR = ' ' then ' ' else C.DEPTID end
from PS_JOB A,PS_PERSONAL_DATA B,PS_POSITION_DATA C
where A.EMPLID = B.EMPLID
and (
     A.POSITION_NBR = C.POSITION_NBR
 and A.EFFSEQ = (select max(D.EFFSEQ)
                 from PS_JOB D
                 where D.EMPLID = A.EMPLID
                 and D.EMPL_RCD_NBR = A.EMPL_RCD_NBR
                 and D.EFFDT = A.EFFDT)
 and C.EFFSEQ = (select max(F.EFFSEQ)
                 from PS_POSITION_DATA F
                 where F.POSITION_NBR = A.POSITION_NBR
                 and F.EFFDT = C.EFFDT)
 and C.POSN_STATUS != 'G'
 and (
      C.EFFDT = (select max(E.EFFDT)
                 from PS_POSITION_DATA E
                 where E.POSITION_NBR = A.POSITION_NBR
                 and E.EFFDT <= A.EFFDT)
      or
      A.EFFDT = (select max(D.EFFDT)
                 from PS_JOB D
                 where D.EMPLID = A.EMPLID
                 and D.EMPL_RCD_NBR = A.EMPL_RCD_NBR
                 and D.EFFDT <= C.EFFDT)
     )
 or
     A.POSITION_NBR = ' '
 and A.EFFSEQ = (select max(D.EFFSEQ)
                 from PS_JOB D
                 where D.EMPLID = A.EMPLID
                 and D.EMPL_RCD_NBR = A.EMPL_RCD_NBR
                 and D.EFFDT = A.EFFDT)
)
may not help much as the optimizer might have guessed it already.
Regards
Etbin -
CDHDR table query taking long time
Hi all,
The select query on the CDHDR table is taking a long time. In the WHERE condition I am giving OBJECTCLAS = 'MAT_FULL', UDATE = sy-datum and LANGU = 'EN'.
Any suggestion to improve the performance? I want to select all the articles which got changed on the current date.
regards
shibu
This will always be slow for large data volumes, since CDHDR is designed for quick access by object ID (in this case material number), not by date.
I'm afraid you would need to introduce a secondary index on OBJECTCLAS and UDATE, if that query is crucial enough to warrant the additional disk space and processing time taken by the new index.
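At the database level, the suggested secondary index would amount to roughly the statement below. In practice you would define it in the ABAP Dictionary (SE11) so SAP knows about it, and the index suffix "Z01" here is invented for illustration:

```sql
-- Illustrative only: SAP secondary indexes are created via SE11, not direct DDL.
-- MANDANT leads the index because the client is always specified in SAP SQL.
create index "CDHDR~Z01" on cdhdr (mandant, objectclas, udate);
```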
Greetings
Thomas -
SAP BI - query taking long time to execute
Hi
When I try to run the BEx query, it's taking a long time. Please suggest.
Thanks
sreedhar
The below query is taking a very long time.
select /*+ PARALLEL(a,8) PARALLEL(b,8) */ a.personid,a.winning_id, b.questionid from
winning_id_cleanup a , rm_personquestion b
where a.personid = b.personid and (a.winning_id,b.questionid) not in
(select /*+ PARALLEL(c,8) */ c.personid,c.questionid from rm_personquestion c where c.personid=a.winning_id);
The rm_personquestion table has 45 million rows and winning_id_cleanup has 1 million rows.
Please tell me how to tune this query?
Please post your query in the SQL and PL/SQL forum; this forum is not for SQL and PL/SQL questions.
SQL Query taking longer time as seen from Trace file
Below Query Execution timings:
Any help will be beneficial, as this is affecting business needs.
SELECT MATERIAL_DETAIL_ID
FROM
GME_MATERIAL_DETAILS WHERE BATCH_ID = :B1 FOR UPDATE OF ACTUAL_QTY NOWAIT
call count cpu elapsed disk query current rows
Parse 1 0.00 0.70 0 0 0 0
Execute 2256 8100.00 24033.51 627 12298 31739 0
Fetch 2256 900.00 949.82 0 12187 0 30547
total 4513 9000.00 24984.03 627 24485 31739 30547
Thanks and Regards
Thanks Buddy.
Data Collected from Trace file:
SELECT STEP_CLOSE_DATE
FROM
GME_BATCH_STEPS WHERE BATCH_ID
IN (SELECT
DISTINCT BATCH_ID FROM
GME_MATERIAL_DETAILS START WITH BATCH_ID = :B2 CONNECT BY PRIOR PHANTOM_ID=BATCH_ID)
AND NVL(STEP_CLOSE_DATE, :B1) > :B1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.54 0 0 0 0
Execute 2256 800.00 1120.32 0 0 0 0
Fetch 2256 9100.00 13551.45 396 77718 0 0
total 4513 9900.00 14672.31 396 77718 0 0
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 66 (recursive depth: 1)
Rows Row Source Operation
0 TABLE ACCESS BY INDEX ROWID GME_BATCH_STEPS
13160 NESTED LOOPS
6518 VIEW
6518 SORT UNIQUE
53736 CONNECT BY WITH FILTERING
30547 NESTED LOOPS
30547 INDEX RANGE SCAN GME_MATERIAL_DETAILS_U1 (object id 146151)
30547 TABLE ACCESS BY USER ROWID GME_MATERIAL_DETAILS
23189 NESTED LOOPS
53736 BUFFER SORT
53736 CONNECT BY PUMP
23189 TABLE ACCESS BY INDEX ROWID GME_MATERIAL_DETAILS
23189 INDEX RANGE SCAN GME_MATERIAL_DETAILS_U1 (object id 146151)
4386 INDEX RANGE SCAN GME_BATCH_STEPS_U1 (object id 146144)
In the Package there are lots of SQL Statements using CONNECT BY CLAUSE.
Does the use of the CONNECT BY clause degrade performance?
As you can see, the Rows column shows 0, but the query (consistent gets) and elapsed time are still high.
Regards -
Hi
I have a query which is a 3-table join, but it takes a long time to execute. I checked it with the plan table; it shows FULL ACCESS on one of the tables.
I have 2 clarifications:
1. The status check for NULL - will it prevent the index from being used? (it shouldn't use the index)
2. Are CASE statements recommended in queries?
Query
Select .........
FROM CLIENT LEFT OUTER JOIN INTERNET_LOGIN ON INTERNET_LOGIN.NUM_CLIENT_ID=CLIENT.NUM_CLIENT_ID,
POLI_MOT
WHERE
POLI_MOT.NUM_CLIENT_ID=CLIENT.NUM_CLIENT_ID
AND
(POLI_MOT.CHR_CANCEL_STATUS='N'
OR
POLI_MOT.CHR_CANCEL_STATUS IS NULL)
AND
CLIENT.NUM_CONTACT_TYPE_ID IN (1,3)
AND
(NVL(POLI_MOT.VCH_NEW_IC_NO,'A') =
CASE WHEN (NVL(null,NULL) IS NULL) THEN
NVL(POLI_MOT.VCH_NEW_IC_NO,'A')
ELSE
NVL(null,NULL)
END
OR
POLI_MOT.VCH_OLD_IC_NO =
CASE WHEN nvl(null,null) IS NULL THEN
POLI_MOT.VCH_OLD_IC_NO
ELSE
NVL(null,NULL)
END )
AND POLI_MOT.VCH_POLICY_NO =
CASE WHEN UPPER(nvl(NULL,null)) IS NULL THEN
POLI_MOT.VCH_POLICY_NO
ELSE
NVL(NULL,NULL)
END
AND POLI_MOT.VCH_VEHICLE_NO =
CASE WHEN UPPER(NVL('123',NULL)) IS NULL THEN
POLI_MOT.VCH_VEHICLE_NO
ELSE
NVL('123',NULL)
END
Hi,
There is nothing wrong in having a full table access. When you do the explain plan, please check which table costs you the maximum, and try to work on that table.
To tune the performance of your query you can try either indexing or parallel access.
The syntax for the parallel hint is:
/*+ PARALLEL("TBL_NM", 100) */ (any degree number)
For the index hint, please use the name of an index on the table you want accessed by index.
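For example, with the tables from the query above, the hints are written against the alias used in the FROM clause. This is a sketch; the index name idx_poli_mot_client is invented, and a degree of 8 is used rather than the very high 100 mentioned above:

```sql
-- Parallel full scan of POLI_MOT at degree 8:
select /*+ parallel(p, 8) */ count(*)
from   poli_mot p;

-- Force use of a specific index (idx_poli_mot_client is a made-up name):
select /*+ index(p idx_poli_mot_client) */ p.vch_policy_no
from   poli_mot p
where  p.num_client_id = :client_id;
```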
regards
Bharath -
Query taking long time to fetch the results
Hi!
When I run the query, it takes too long to fetch the result sets.
Please find the query below for the same.
SELECT
A.BUSINESS_UNIT,
A.JOURNAL_ID,
TO_CHAR(A.JOURNAL_DATE,'YYYY-MM-DD'),
A.UNPOST_SEQ,
A.FISCAL_YEAR,
A.ACCOUNTING_PERIOD,
A.JRNL_HDR_STATUS,
C.INVOICE,
C.ACCT_ENTRY_TYPE,
C.LINE_DST_SEQ_NUM,
C.TAX_AUTHORITY_CD,
C.ACCOUNT,
C.MONETARY_AMOUNT,
D.BILL_SOURCE_ID,
D.IDENTIFIER,
D.VAT_AMT_BSE,
D.VAT_TRANS_AMT_BSE,
D.VAT_TXN_TYPE_CD,
D.TAX_CD_VAT,
D.TAX_CD_VAT_PCT,
D.VAT_APPLICABILITY,
E.BILL_TO_CUST_ID,
E.BILL_STATUS,
E.BILL_CYCLE_ID,
TO_CHAR(E.INVOICE_DT,'YYYY-MM-DD'),
TO_CHAR(E.ACCOUNTING_DT,'YYYY-MM-DD'),
TO_CHAR(E.DT_INVOICED,'YYYY-MM-DD'),
E.ENTRY_TYPE,
E.ENTRY_REASON,
E.AR_LVL,
E.AR_DST_OPT,
E.AR_ENTRY_CREATED,
E.GEN_AR_ITEM_FLG,
E.GL_LVL, E.GL_ENTRY_CREATED,
(Case when c.account in ('30120000','30180050','30190000','30290000','30490000',
'30690000','30900040','30990000','35100000','35120000','35150000','35160000',
'39100050','90100000')
and D.TAX_CD_VAT_PCT <> 0 then 'Ej_Momskonto_med_moms'
When c.account not in ('30120000','30180050','30190000','30290000',
'30490000','30690000','30900040','30990000','35100000','35120000','35150000',
'35160000','39100050','90100000')
and D.TAX_CD_VAT_PCT <> 25 then 'Momskonto_utan_moms' end)
FROM
sysadm.PS_JRNL_HEADER A,
sysadm.PS_JRNL_LN B,
sysadm.PS_BI_ACCT_ENTRY C,
sysadm.PS_BI_LINE D,
sysadm.PS_BI_HDR E
WHERE A.BUSINESS_UNIT = '&BU'
AND A.JOURNAL_DATE BETWEEN TO_DATE('&From_date','YYYY-MM-DD')
AND TO_DATE('&To_date','YYYY-MM-DD')
AND A.SOURCE = 'BI'
AND A.BUSINESS_UNIT = B.BUSINESS_UNIT
AND A.JOURNAL_ID = B.JOURNAL_ID
AND A.JOURNAL_DATE = B.JOURNAL_DATE
AND A.UNPOST_SEQ = B.UNPOST_SEQ
AND B.BUSINESS_UNIT = C.BUSINESS_UNIT
AND B.JOURNAL_ID = C.JOURNAL_ID
AND B.JOURNAL_DATE = C.JOURNAL_DATE
AND B.JOURNAL_LINE = C.JOURNAL_LINE
AND C.ACCT_ENTRY_TYPE = 'RR'
AND C.BUSINESS_UNIT = '&BU'
AND C.BUSINESS_UNIT = D.BUSINESS_UNIT
AND C.INVOICE = D.INVOICE
AND C.LINE_SEQ_NUM = D.LINE_SEQ_NUM
AND D.BUSINESS_UNIT = '&BU'
AND D.BUSINESS_UNIT = E.BUSINESS_UNIT
AND D.INVOICE = E.INVOICE
AND E.BUSINESS_UNIT = '&BU'
AND
((c.account in ('30120000','30180050','30190000','30290000','30490000',
'30690000','30900040','30990000','35100000','35120000','35150000','35160000',
'39100050','90100000')
and D.TAX_CD_VAT_PCT <> 0)
OR
(c.account not in ('30120000','30180050','30190000','30290000','30490000',
'30690000','30900040','30990000','35100000','35120000','35150000','35160000',
'39100050','z')
and D.TAX_CD_VAT_PCT <> 25))
GROUP BY
A.BUSINESS_UNIT,
A.JOURNAL_ID,
TO_CHAR(A.JOURNAL_DATE,'YYYY-MM-DD'),
A.UNPOST_SEQ, A.FISCAL_YEAR,
A.ACCOUNTING_PERIOD,
A.JRNL_HDR_STATUS,
C.INVOICE,
C.ACCT_ENTRY_TYPE,
C.LINE_DST_SEQ_NUM,
C.TAX_AUTHORITY_CD,
C.ACCOUNT,
D.BILL_SOURCE_ID,
D.IDENTIFIER,
D.VAT_TXN_TYPE_CD,
D.TAX_CD_VAT,
D.TAX_CD_VAT_PCT,
D.VAT_APPLICABILITY,
E.BILL_TO_CUST_ID,
E.BILL_STATUS,
E.BILL_CYCLE_ID,
TO_CHAR(E.INVOICE_DT,'YYYY-MM-DD'),
TO_CHAR(E.ACCOUNTING_DT,'YYYY-MM-DD'),
TO_CHAR(E.DT_INVOICED,'YYYY-MM-DD'),
E.ENTRY_TYPE, E.ENTRY_REASON,
E.AR_LVL, E.AR_DST_OPT,
E.AR_ENTRY_CREATED,
E.GEN_AR_ITEM_FLG,
E.GL_LVL,
E.GL_ENTRY_CREATED,
C.MONETARY_AMOUNT,
D.VAT_AMT_BSE,
D.VAT_TRANS_AMT_BSE
having
(Case when c.account in ('30120000','30180050','30190000','30290000',
'30490000','30690000','30900040','30990000','35100000','35120000','35150000',
'35160000','39100050','90100000')
and D.TAX_CD_VAT_PCT <> 0 then 'Ej_Momskonto_med_moms'
When c.account not in ('30120000','30180050','30190000','30290000','30490000',
'30690000','30900040','30990000','35100000','35120000','35150000','35160000',
'39100050','90100000')
and D.TAX_CD_VAT_PCT <> 25 then 'Momskonto_utan_moms' end) is not null
So Could you provide the solution to fix this issue?
Thanks
senthil
When your query takes too long ... - http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0
Regards,
Rob. -
The following query I wrote returns 1400 records, and the lines below are taking much time.
1.5 seconds taken by:
count = quer != null ? quer.Count() : 0;
and 2 seconds taken by:
candidateList = quer.Skip((pageIndex - 1) * pageSize).Take(pageSize).ToList();
Please suggest.
Hi Jon,
In SharePoint, I suggest you use a CAML query. If you use LINQ, the performance isn't guaranteed.
For the first query, you can use SPQury.Count to achieve it, for the second query, you can build a proper CAML to filter the data.
Here are some detailed articles for your reference:
SPList.GetItems method (SPQuery)
SPQuery.Query Property
Zhengyu Guo
TechNet Community Support -
QUERY taking longer time than usual
Hello Gurus,
The query below used to take 5-10 minutes depending on the resource availability, but this time it is taking 4-5 hrs to complete this transaction.
INSERT /*+ APPEND */ INTO TAG_STAGING
SELECT /*+ INDEX(A,ALL_tags_INDX1) */
DISTINCT TRIM (serial) serial_num,
TRIM (COMPANY_numBER) COMPANY_NUM,
TRIM (PERSON_id) PERSON_id
FROM ALL_tags@DWDB_link a
WHERE serviceS IN (SELECT /*+ INDEX(B,service_CODES_INDX2) */
services
FROM service_CODES b
WHERE srvc_cd = 'R')
AND (ORDERDATE_date BETWEEN TO_DATE ('01-JAN-2007','dd-mon-yyyy')
AND TO_DATE ('31-DEC-2007','dd-mon-yyyy'))
AND ( (TRIM (status_1) IS NULL)
OR (TRIM (status_1) = 'R')
AND (TRIM (status_2) = 'R' OR TRIM (status_2) IS NULL))
TAG_STAGING table is empty with primary key on the three given columns
ALL_tags@DWDB_link table has about 100M rows
Ideally the query should fetch about 4M rows.
Could any one please give me an idea as to how to proceed to quicken the process.
Thanks in advance
Thanks,
TTFirst I'd check the explain plan to make sure that it makes sense. Perhaps an index was dropped or perhaps the stats are wrong for some reason.
If the explain plan looks good then I'd trace it and see where the time is being spent. -
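Both checks can be scripted. A sketch of the usual sequence follows; the simplified SELECT is a stand-in for the real INSERT statement, and the 10046 event at level 8 adds wait events to the trace, which tkprof then summarizes:

```sql
-- 1. Does the plan still make sense?
explain plan for
select /*+ INDEX(a ALL_tags_INDX1) */ count(*)
from   ALL_tags@DWDB_link a;        -- simplified stand-in for the real statement

select * from table(dbms_xplan.display);

-- 2. Trace the real run with wait events:
alter session set tracefile_identifier = 'tag_staging_load';
alter session set events '10046 trace name context forever, level 8';
-- ... run the INSERT /*+ APPEND */ here ...
alter session set events '10046 trace name context off';
-- Then format the raw trace file with tkprof at the OS prompt.
```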
Hi All,
I am trying to run one SELECT statement which joins 6 tables. That query generally takes 25-30 minutes to generate output.
Today it is running from more than 2 hours. I have checked there are no locks on those tables and no other process is using them.
What else I should check in order to figure out why my SELECT statement is taking time?
Any help will be much appreciated.
Thanks!
Please let me know if you still want me to provide all the information mentioned in the link.
Yes, please.
Before you can even start optimizing, it should be clear what parts of the query are running slow.
The links contains the steps to take regarding how to identify the things that make the query run slow.
Ideally you post a trace/tkprof report with wait events, it'll show on what time is being spent, give an execution plan and a database version all in once...
Today it is running for more than 2 hours. I have checked there are no locks on those tables and no other process is using them.
Well, something must have changed.
And you must identify what exactly has changed, but it's a broad range you have to check:
- it could be outdated table statistics
- it could be data growth or skewness that makes Optimizer choose a wrong plan all of a sudden
- it could be a table that got modified with some bad index
- it could be ...
So, by posting the information in the link, you'll leave less room for guesses from us, so you'll get an explanation that makes sense faster or, while investigating by following the steps in the link, you'll get the explanation yourself. -
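One of the suspects listed above, outdated statistics, can be checked directly. A sketch; replace the owner and table names with your own six tables:

```sql
-- STALE_STATS flips to YES once roughly 10% of a table's rows have
-- changed since statistics were last gathered:
select table_name, num_rows, last_analyzed, stale_stats
from   dba_tab_statistics
where  owner = 'APP_OWNER'                 -- placeholder schema
and    table_name in ('T1', 'T2');         -- placeholder table names
```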
Select query taking long time (more than 6 min)
Dear experts,
TYPES: BEGIN OF TY_BSAS,
         BUKRS TYPE BSAS-BUKRS,
         HKONT TYPE BSAS-HKONT,
         AUGDT TYPE BSAS-AUGDT,
         AUGBL TYPE BSAK-AUGBL,
         ZUONR TYPE BSAK-ZUONR,
         GJAHR TYPE BSAK-GJAHR,
         BELNR TYPE BSAK-BELNR,
         BUZEI TYPE BSAK-BUZEI,
         BUDAT TYPE BSAK-BUDAT,
         XBLNR TYPE BSAK-XBLNR,
         BLART TYPE BSAK-BLART,
         SHKZG TYPE BSAK-SHKZG,
         DMBTR TYPE BSAK-DMBTR,
         WMWST TYPE BSAK-WMWST,
         AUGGJ TYPE BSAK-AUGGJ,     " clearing fiscal year
         OT_TAX TYPE BSAK-DMBTR,
         TDS TYPE BSAK-DMBTR,
         VAT TYPE BSAK-DMBTR,       " VAT amount
         WCT TYPE BSAK-DMBTR,
         ADV TYPE BSAK-DMBTR,       " advance
         CHAMT TYPE BSAK-DMBTR,
         CHNO TYPE PAYR-CHECT,
         CHDATE TYPE PAYR-ZALDT,
         DBIT_NOTE TYPE BSAK-DMBTR,
         PAY_ADJ TYPE BSAK-DMBTR,
         PEND_SES TYPE BSAK-DMBTR,  " pending SES
         CR_PARTY(50) TYPE C,
       END OF TY_BSAS.
* The TYPES definition must come before the DATA declarations that use it:
DATA: IT_CHEQ2 TYPE TABLE OF TY_BSAS,
      WA_CHEQ2 LIKE LINE OF IT_CHEQ2.
DATA: IT_CHEQ3 TYPE STANDARD TABLE OF TY_BSAS WITH HEADER LINE.
SELECT BUKRS HKONT AUGDT AUGBL ZUONR GJAHR BELNR BUZEI BUDAT XBLNR BLART SHKZG
DMBTR WMWST
FROM BSAS INTO " APPENDING
CORRESPONDING FIELDS OF TABLE IT_CHEQ3
FOR ALL ENTRIES IN IT_CHEQ2
WHERE AUGBL = IT_CHEQ2-AUGBL and
BUKRS = IT_CHEQ2-BUKRS AND
* AUGBL = IT_CHEQ2-AUGBL
GJAHR = IT_CHEQ2-GJAHR
AND XBLNR = IT_CHEQ2-XBLNR.
line company code hkont augdt augbl zuonr gjahr belnr buzei budat
1 1018 0012100030 20110831 2100009710 20110831 2011 2100009710 005 20110831
xblnr blart shkzg
RA03 KZ H 37067.00 0.00 2011 0.00 0.00
2 1018 0012100030 20110831 2100009710 20110831 2011 2100009710 006 20110831
RA03 KZ H 393850.00 0.00 2011 0.00 0.00
3 1018 0012100030 20110831 2100009710 20110831 2011 2100009710 004 20110831 RA03 KZ S 723589.13 0.00 2011 0.00 0.00
4 1018 0012100030 20110831 2100009710 20110823 2011 3900001250 001 20110823 RA03 RS H 712921.13 0.00 2011 0.00 0.00
5 1018 0023200000 20110831 2100009710 20110831 2011 2100009710 008 20110831 RA03 KZ H 21788.00 0.00 2011 0.00 0.00
6 1018 0023200000 20110831 2100009710 20110831 2011 2100009710 007 20110831 RA03 KZ H 1162821.00 0.00 2011 0.00 0.00
If I put the same entry in SE11 for BSAS, it takes 7 seconds,
but in the query it takes more than 6 minutes. Kindly tell me why.
help me gurus
regards
victor
Tested point 2.
There is no difference.
REPORT Z_YZ_SELECT_ORDER.
types: begin of t_orderadm,
description type CRMT_PROCESS_DESCRIPTION,
created_at type COMT_CREATED_AT_USR,
LOGICAL_SYSTEM type CRMT_LOGSYS,
TEMPLATE_TYPE type CRMT_TEMPLATE_TYPE_DB,
VERIFY_DATE type CRMT_VERIFY_DATE,
GUID type CRMT_OBJECT_GUID,
end of t_orderadm.
types: begin of t_orderadm_1,
GUID type CRMT_OBJECT_GUID,
description type CRMT_PROCESS_DESCRIPTION,
LOGICAL_SYSTEM type CRMT_LOGSYS,
TEMPLATE_TYPE type CRMT_TEMPLATE_TYPE_DB,
created_at type COMT_CREATED_AT_USR,
VERIFY_DATE type CRMT_VERIFY_DATE,
end of t_orderadm_1.
data: lt_orders type table of t_orderadm,
lt_orders_1 type table of t_orderadm_1.
select description created_at logical_system template_type verify_date guid
into table lt_orders
from crmd_orderadm_h.
select guid description logical_system template_type created_at verify_date
into table lt_orders_1
from crmd_orderadm_h.
write 'done'.
First select - mixed order of fields. Response time: 82.155 microseconds for 39380 records selected.
Second select - fields in the order of the table. Response time: 81.061 microseconds for the same 39380 records selected.
Then I changed the order of SELECT statements. I have put first the select with ordered fields, and second - select with mixed order of fields. The runtimes were the following:
Ordered fields - 82.649 microseconds
Mixed order of fields - 80.270 microseconds.
So I'm going to change the Wiki page in order to avoid future advice that makes no sense.
Select Query taking long time to run second time
Hi All,
I have Oracle 11gR1 in windows server 2008 R2 .
I have some tables with 10 million records. When I run the select query on those tables the first time, it returns the result in 15 seconds, but if I run the same script a second time from the same session, it takes 15 minutes to complete.
Why it is happening? What may be the solution for this ?
Thanks & Regards,
Vikash jain (Junior DBA)
Hi Mohamed,
I just saw that the execution plan for the same query is different between the two runs.
here are the details :
First time:  sql_id g84m3qqjv2p3q, plan_hash_value 2733045235
Second time: sql_id g84m3qqjv2p3q, plan_hash_value 1310485984
So please tell me how I should force the database to use the first execution plan.
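For reference, the core of pinning a known-good plan is a single call; a minimal hedged version, using the same sql_id and plan_hash_value as above, would be:

```sql
-- Minimal SQL Plan Baseline load from the cursor cache; the longer script
-- below adds renaming and version handling around this same call.
declare
  n pls_integer;
begin
  n := dbms_spm.load_plans_from_cursor_cache(
         sql_id          => 'g84m3qqjv2p3q',
         plan_hash_value => 2733045235,
         fixed           => 'YES',
         enabled         => 'YES');
  dbms_output.put_line(n || ' plan(s) loaded as a baseline');
end;
/
```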
I got this script for forcing the Db to use the same execution plan
accept sql_id -
prompt 'Enter value for sql_id: ' -
default 'X0X0X0X0'
accept plan_hash_value -
prompt 'Enter value for plan_hash_value: ' -
default 'X0X0X0X0'
accept fixed -
prompt 'Enter value for fixed (NO): ' -
default 'NO'
accept enabled -
prompt 'Enter value for enabled (YES): ' -
default 'YES'
accept plan_name -
prompt 'Enter value for plan_name (ID_sqlid_planhashvalue): ' -
default 'X0X0X0X0'
set feedback off
set sqlblanklines on
set serveroutput on
declare
l_plan_name varchar2(40);
l_old_plan_name varchar2(40);
l_sql_handle varchar2(40);
ret binary_integer;
l_sql_id varchar2(13);
l_plan_hash_value number;
l_fixed varchar2(3);
l_enabled varchar2(3);
major_release varchar2(3);
minor_release varchar2(3);
begin
select regexp_replace(version,'\..*'), regexp_substr(version,'[0-9]+',1,2) into major_release, minor_release from v$instance;
minor_release := 2;
l_sql_id := '&&sql_id';
l_plan_hash_value := to_number('&&plan_hash_value');
l_fixed := '&&fixed';
l_enabled := '&&enabled';
ret := dbms_spm.load_plans_from_cursor_cache(
sql_id=>l_sql_id,
plan_hash_value=>l_plan_hash_value,
fixed=>l_fixed,
enabled=>l_enabled);
if minor_release = '1' then
-- 11gR1 has a bug that prevents renaming Baselines
dbms_output.put_line(' ');
dbms_output.put_line('Baseline created.');
dbms_output.put_line(' ');
else
-- This statements looks for Baselines create in the last 4 seconds
select sql_handle, plan_name,
decode('&&plan_name','X0X0X0X0','SQLID_'||'&&sql_id'||'_'||'&&plan_hash_value','&&plan_name')
into l_sql_handle, l_old_plan_name, l_plan_name
from dba_sql_plan_baselines spb
where created > sysdate-(1/24/60/15);
ret := dbms_spm.alter_sql_plan_baseline(
sql_handle=>l_sql_handle,
plan_name=>l_old_plan_name,
attribute_name=>'PLAN_NAME',
attribute_value=>l_plan_name);
dbms_output.put_line(' ');
dbms_output.put_line('Baseline '||upper(l_plan_name)||' created.');
dbms_output.put_line(' ');
end if;
end;
undef sql_id
undef plan_hash_value
undef plan_name
undef fixed
set feedback on
Output:
Enter value for sql_id: g84m3qqjv2p3q
Enter value for plan_hash_value: 2733045235
Enter value for fixed (NO):
Enter value for enabled (YES):
Enter value for plan_name (ID_sqlid_planhashvalue): g84m3qqjv2p3q
old 16: l_sql_id := '&&sql_id';
new 16: l_sql_id := 'g84m3qqjv2p3q';
old 17: l_plan_hash_value := to_number('&&plan_hash_value');
new 17: l_plan_hash_value := to_number('2733045235');
old 18: l_fixed := '&&fixed';
new 18: l_fixed := 'NO';
old 19: l_enabled := '&&enabled';
new 19: l_enabled := 'YES';
old 40: decode('&&plan_name','X0X0X0X0','SQLID_'||'&&sql_id'||'_'||'&&plan_hash_value','&&plan_name')
new 40: decode('g84m3qqjv2p3q','X0X0X0X0','SQLID_'||'g84m3qqjv2p3q'||'_'||'2733045235','g84m3qqjv2p3q')
declare
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at line 39
Kindly help me to resolve the issue ..
Thanks & Regards,
Vikash Jain(Junior DBA) -
Query taking long time in oracle 10g
I have a query which runs in 1 minute in Oracle 8 but takes 2 hours in Oracle 10. The query has a couple of subqueries to select the max effective date as well as the current effective sequence. I checked the parameters and their values are as follows. I want to know whether any values can be increased to make it run faster. Also, I did not find the parameter _unnest_subquery; I think it should be set to FALSE, but I did not find the value when I did select * from v$parameter. Is it set to FALSE by default, or should I explicitly declare it? Thanks
Statistic Name Result
processes 200
sessions 225
timed_statistics TRUE
sga_target 335544320
control_files /ora1db13/oradata/KVSU2P13/control01.ctl, /ora2db13/o
db_block_size 8192
compatible 10.2.0.1.0
db_file_multiblock_read_count 4
undo_management AUTO
undo_tablespace ADP_UNDO
db_domain
service_names KVSU2P13, KVSU2P13_VSUP
dispatchers (PROTOCOL=tcp)(DISPATCHERS=4)(CONNECTIONS=50)
shared_servers 10
max_shared_servers 20
shared_server_sessions 150
job_queue_processes 10
background_dump_dest /udb01/app/oracle/admin/KVSU2P13/bdump
user_dump_dest /udb01/app/oracle/admin/KVSU2P13/udump
core_dump_dest /udb01/app/oracle/admin/KVSU2P13/cdump
db_name KVSU2P13
open_cursors 300
_optimizer_cost_based_transformation off
_always_semi_join off
optimizer_index_cost_adj 10
optimizer_index_caching 50
pga_aggregate_target 25165824
workarea_size_policy auto
Please read these standard threads:
How to post a tuning request:
HOW TO: Post a SQL statement tuning request - template posting
When your query takes too long:
When your query takes too long ...