Calculations in query taking a long time to load the target table
Hi,
I am pulling approximately 45 million records using the query below in an SSIS package, which pulls from a DB on one server and loads the results into a target table on another server. In the SELECT I have a calculation for 6 columns. The target table is truncated and loaded every day. Also, most of the source columns I use in the calculations contain 0, and it took approximately 1 hour 45 minutes to load the target table. Is there any way to reduce the load time? Also, can I do the calculations after all the 47 M records are loaded, and then calculate for the non-zero records alone?
SELECT T1.Col1,
       T1.Col2,
       T1.Col3,
       T2.Col1,
       T2.Col2,
       T3.Col1,
       CONVERT(NUMERIC(8,5), CONVERT(NUMERIC, T3.Col2) / 1000000) AS Colu2,
       CONVERT(NUMERIC(8,5), CONVERT(NUMERIC, T3.Col3) / 1000000) AS Colu3,
       CONVERT(NUMERIC(8,5), CONVERT(NUMERIC, T3.Col4) / 1000000) AS Colu4,
       CONVERT(NUMERIC(8,5), CONVERT(NUMERIC, T3.Col5) / 1000000) AS Colu5,
       CONVERT(NUMERIC(8,5), CONVERT(NUMERIC, T3.Col6) / 1000000) AS Colu6,
       CONVERT(NUMERIC(8,5), CONVERT(NUMERIC, T3.Col7) / 1000000) AS Colu7
FROM Tab1 T1
JOIN Tab2 T2
  ON (T1.Col1 = T2.Col1)
JOIN Tab3 T3
  ON (T3.Col9 = T3.Col9)
Anand
So 45 or 47? Nevertheless ...
This is hardly a heavy calculation, so the savings from removing it will be minimal; numeric arithmetic is very easy on the CPU in general.
But

convert( numeric(8,5), (convert( numeric,T3.COl7) / 1000000))

is not optimal. Dividing by a decimal literal makes the inner conversion unnecessary:

CONVERT(NUMERIC(8,5), T3.Col7 / 1000000.00000)
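As a sketch (table and column names taken from the question), both forms below should return the same value, but the second performs only one explicit conversion per row:

```sql
-- Both expressions scale Col7 down by one million to NUMERIC(8,5);
-- the decimal literal 1000000.00000 already forces numeric division.
SELECT CONVERT(NUMERIC(8,5), CONVERT(NUMERIC, T3.Col7) / 1000000) AS two_converts,
       CONVERT(NUMERIC(8,5), T3.Col7 / 1000000.00000)             AS one_convert
FROM Tab3 T3;
```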
Now it boils down to how to make the load faster: do it in parallel. Find out how many sockets the machine has and split the table into that many chunks. Also profile to find out where most of the time is spent. I have seen cases where the network was the bottleneck, so you may want to play with buffers and packet sizes; for example, if OLE DB is used, double the packet size and see if it works faster, then double it again, and so forth.
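To illustrate the chunking idea (a sketch only; it assumes Col1 is a reasonably uniform integer key, and the names come from the question), each of four parallel SSIS data-flow sources pulls a disjoint slice:

```sql
-- Source query for chunk 0 of 4; the other three sources use
-- T1.Col1 % 4 = 1, 2 and 3 respectively, so every row lands in exactly one chunk.
SELECT T1.Col1, T1.Col2, T1.Col3
FROM Tab1 T1
WHERE T1.Col1 % 4 = 0;
```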
To help you further you need to tell us more, e.g. what the source and destination are, and how you configured the load. Please understand that there is no silver bullet or blanket solution anywhere, and you need to tell me your desired load time. E.g. if you tell me it needs to load in 5 minutes, I will give your ask a pass.
Arthur
MyBlog
Twitter
Similar Messages
-
Query taking long time for EXTRACTING the data more than 24 hours
Hi ,
The query below has been taking a long time, more than 24 hours, to extract the data. Please find the query and explain plan details below; even though indexes are available on the tables, it goes for a FULL TABLE SCAN. Please suggest.
SQL> explain plan for select a.account_id,round(a.account_balance,2) account_balance,
nvl(ah.invoice_id,ah.adjustment_id) transaction_id,
to_char(ah.effective_start_date,'DD-MON-YYYY') transaction_date,
to_char(nvl(i.payment_due_date,
to_date('30-12-9999','dd-mm-yyyy')),'DD-MON-YYYY')
due_date, ah.current_balance-ah.previous_balance amount,
decode(ah.invoice_id,null,'A','I') transaction_type
from account a,account_history ah,invoice i
where a.account_id=ah.account_id
and a.account_type_id=1000002
and round(a.account_balance,2) > 0
and (ah.invoice_id is not null or ah.adjustment_id is not null)
and ah.CURRENT_BALANCE > ah.previous_balance
and ah.invoice_id=i.invoice_id(+)
AND a.account_balance > 0
order by a.account_id,ah.effective_start_date desc;
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
| 0 | SELECT STATEMENT | | 544K| 30M| | 693K (20)|
| 1 | SORT ORDER BY | | 544K| 30M| 75M| 693K (20)|
|* 2 | HASH JOIN | | 544K| 30M| | 689K (20)|
|* 3 | TABLE ACCESS FULL | ACCOUNT | 20080 | 294K| | 6220 (18)|
|* 4 | HASH JOIN OUTER | | 131M| 5532M| 5155M| 678K (20)|
|* 5 | TABLE ACCESS FULL| ACCOUNT_HISTORY | 131M| 3646M| | 197K (25)|
| 6 | TABLE ACCESS FULL| INVOICE | 262M| 3758M| | 306K (18)|
Predicate Information (identified by operation id):
2 - access("A"."ACCOUNT_ID"="AH"."ACCOUNT_ID")
3 - filter("A"."ACCOUNT_TYPE_ID"=1000002 AND "A"."ACCOUNT_BALANCE">0 AND
ROUND("A"."ACCOUNT_BALANCE",2)>0)
4 - access("AH"."INVOICE_ID"="I"."INVOICE_ID"(+))
5 - filter("AH"."CURRENT_BALANCE">"AH"."PREVIOUS_BALANCE" AND ("AH"."INVOICE_ID"
IS NOT NULL OR "AH"."ADJUSTMENT_ID" IS NOT NULL))
22 rows selected.
Index Details:
SQL> select INDEX_OWNER,INDEX_NAME,COLUMN_NAME,TABLE_NAME from dba_ind_columns where
table_name in ('INVOICE','ACCOUNT','ACCOUNT_HISTORY') order by 4;
INDEX_OWNER INDEX_NAME COLUMN_NAME TABLE_NAME
OPS$SVM_SRV4 P_ACCOUNT ACCOUNT_ID ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT_NAME ACCOUNT_NAME ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT CUSTOMER_NODE_ID ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT ACCOUNT_TYPE_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_ACCOUNT_TYPE ACCOUNT_TYPE_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_INVOICE INVOICE_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_PREVIOUS_INVOICE PREVIOUS_INVOICE_ID ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_NAME ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_ID ACCOUNT
OPS$SVM_SRV4 I_LAST_MODIFIED_ACCOUNT LAST_MODIFIED ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_INVOICE_ACCOUNT INVOICE_ACCOUNT_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT ACCOUNT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT SEQNR ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_INVOICE INVOICE_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV INVOICE_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA CURRENT_BALANCE ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA INVOICE_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ADJUSTMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ACCOUNT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_LMOD LAST_MODIFIED ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV ADJUSTMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_PAYMENT PAYMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADJUSTMENT ADJUSTMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_APPLIED_DT APPLIED_DATE ACCOUNT_HISTORY
OPS$SVM_SRV4 P_INVOICE INVOICE_ID INVOICE
OPS$SVM_SRV4 U_INVOICE CUSTOMER_INVOICE_STR INVOICE
OPS$SVM_SRV4 I_LAST_MODIFIED_INVOICE LAST_MODIFIED INVOICE
OPS$SVM_SRV4 U_INVOICE_ACCOUNT ACCOUNT_ID INVOICE
OPS$SVM_SRV4 U_INVOICE_ACCOUNT BILL_RUN_ID INVOICE
OPS$SVM_SRV4 I_INVOICE_BILL_RUN BILL_RUN_ID INVOICE
OPS$SVM_SRV4 I_INVOICE_INVOICE_TYPE INVOICE_TYPE_ID INVOICE
OPS$SVM_SRV4 I_INVOICE_CUSTOMER_NODE CUSTOMER_NODE_ID INVOICE
32 rows selected.
Regards,
Bathula
Oracle-DBA

I have some suggestions. But first, you realize that you have some redundant indexes, right? You have an index on account(account_name) and also account(account_name, account_id), and likewise account_history(invoice_id) and account_history(invoice_id, adjustment_id). No matter, I will suggest some new composite indexes.
Also, you do not need two lines for these conditions:
and round(a.account_balance, 2) > 0
AND a.account_balance > 0
You can just use: and a.account_balance >= 0.005
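The two predicates collapse into one because ROUND(a.account_balance, 2) > 0 is true exactly when a non-negative balance is at least 0.005. A quick sanity check of the boundary, as a sketch:

```sql
-- Values just below and above the 0.005 rounding boundary.
-- ROUND(0.004, 2) = 0, so both the old and the new filter drop that row;
-- ROUND(0.005, 2) = 0.01, so both filters keep it.
SELECT x, ROUND(x, 2) AS rounded
FROM (SELECT 0.004 AS x FROM dual
      UNION ALL SELECT 0.005 FROM dual
      UNION ALL SELECT 0.010 FROM dual);
```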
So the formatted query is:

select a.account_id,
round(a.account_balance, 2) account_balance,
nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
to_char(ah.effective_start_date, 'DD-MON-YYYY') transaction_date,
to_char(nvl(i.payment_due_date, to_date('30-12-9999', 'dd-mm-yyyy')),
'DD-MON-YYYY') due_date,
ah.current_balance - ah.previous_balance amount,
decode(ah.invoice_id, null, 'A', 'I') transaction_type
from account a, account_history ah, invoice i
where a.account_id = ah.account_id
and a.account_type_id = 1000002
and (ah.invoice_id is not null or ah.adjustment_id is not null)
and ah.CURRENT_BALANCE > ah.previous_balance
and ah.invoice_id = i.invoice_id(+)
AND a.account_balance >= .005
order by a.account_id, ah.effective_start_date desc;

You will probably want to select:
1. From ACCOUNT first (your smaller table), for which you supply a literal on account_type_id. That should limit the accounts retrieved from ACCOUNT_HISTORY
2. From ACCOUNT_HISTORY. We want to limit the records as much as possible on this table because of the outer join.
3. INVOICE we want to access last because it seems to be least restricted, it is the biggest, and it has the outer join condition so it will manufacture rows to match as many rows as come back from account_history.
Try the query above after creating the following composite indexes. The order of the columns is important:

create index account_composite_i on account(account_type_id, account_balance, account_id);
create index acct_history_comp_i on account_history(account_id, invoice_id, adjustment_id, current_balance, previous_balance, effective_start_date);
create index invoice_composite_i on invoice(invoice_id, payment_due_date);

All the columns used in the where clause will be indexed, in a logical order suited to the needs of the query. Plus each selected column is indexed as well, so we should not need to touch the tables at all to satisfy the query.
Try the query after creating these indexes.
A final suggestion is to try larger sort and hash area sizes and a manual workarea policy:

alter session set workarea_size_policy = manual;
alter session set sort_area_size = 2147483647;
alter session set hash_area_size = 2147483647; -
Query taking long time to run.
The following query is taking a long time to run. Is there anything that can be done to make it run faster, by changing the SQL etc.?
select distinct
A.DEPTID,
A.POSITION_NBR,
A.EMPLID,
A.EMPL_RCD_NBR,
A.EFFDT,
B.NAME,
A.EMPL_STATUS,
A.JOBCODE,
A.ANNUAL_RT,
A.STD_HOURS,
A.PRIMARY_JOB,
C.POSN_STATUS,
case when A.POSITION_NBR = ' ' then 0 else C.STD_HOURS end,
case when A.POSITION_NBR = ' ' then ' ' else C.DEPTID end
from PS_JOB A,
PS_PERSONAL_DATA B,
PS_POSITION_DATA C
where A.EMPLID = B.EMPLID
and
((A.POSITION_NBR = C.POSITION_NBR
and A.EFFSEQ = (select max(D.EFFSEQ)
from PS_JOB D
where D.EMPLID = A.EMPLID
and D.EMPL_RCD_NBR = A.EMPL_RCD_NBR
and D.EFFDT = A.EFFDT)
and C.POSN_STATUS <> 'G'
and C.EFFDT = (select max(E.EFFDT)
from PS_POSITION_DATA E
where E.POSITION_NBR = A.POSITION_NBR
and E.EFFDT <= A.EFFDT)
and C.EFFSEQ = (select max(F.EFFSEQ)
from PS_POSITION_DATA F
where F.POSITION_NBR = A.POSITION_NBR
and F.EFFDT = C.EFFDT))
or
(A.POSITION_NBR = C.POSITION_NBR
and A.EFFDT = (select max(D.EFFDT)
from PS_JOB D
where D.EMPLID = A.EMPLID
and D.EMPL_RCD_NBR = A.EMPL_RCD_NBR
and D.EFFDT <= C.EFFDT)
and A.EFFSEQ = (select max(E.EFFSEQ)
from PS_JOB E
where E.EMPLID = A.EMPLID
and E.EMPL_RCD_NBR = A.EMPL_RCD_NBR
and E.EFFDT = A.EFFDT)
and C.POSN_STATUS <> 'G'
and C.EFFSEQ = (select max(F.EFFSEQ)
from PS_POSITION_DATA F
where F.POSITION_NBR = A.POSITION_NBR
and F.EFFDT = C.EFFDT)))
or
(A.POSITION_NBR = ' '
and A.EFFSEQ = (select max(D.EFFSEQ)
from PS_JOB D
where D.EMPLID = A.EMPLID
and D.EMPL_RCD_NBR = A.EMPL_RCD_NBR
and D.EFFDT = A.EFFDT)))

Using the distributive law A and (B or C) = (A and B) or (A and C) from right to left we can have:
select distinct A.DEPTID,A.POSITION_NBR,A.EMPLID,A.EMPL_RCD_NBR,A.EFFDT,B.NAME,A.EMPL_STATUS,
A.JOBCODE,A.ANNUAL_RT,A.STD_HOURS,A.PRIMARY_JOB,C.POSN_STATUS,
case when A.POSITION_NBR = ' ' then 0 else C.STD_HOURS end,
case when A.POSITION_NBR = ' ' then ' ' else C.DEPTID end
from PS_JOB A,PS_PERSONAL_DATA B,PS_POSITION_DATA C
where A.EMPLID = B.EMPLID
and (
A.POSITION_NBR = C.POSITION_NBR
and A.EFFSEQ = (select max(D.EFFSEQ)
from PS_JOB D
where D.EMPLID = A.EMPLID
and D.EMPL_RCD_NBR = A.EMPL_RCD_NBR
and D.EFFDT = A.EFFDT)
and C.EFFSEQ = (select max(F.EFFSEQ)
from PS_POSITION_DATA F
where F.POSITION_NBR = A.POSITION_NBR
and F.EFFDT = C.EFFDT)
and C.POSN_STATUS != 'G'
and (
C.EFFDT = (select max(E.EFFDT)
from PS_POSITION_DATA E
where E.POSITION_NBR = A.POSITION_NBR
and E.EFFDT <= A.EFFDT)
or
A.EFFDT = (select max(D.EFFDT)
from PS_JOB D
where D.EMPLID = A.EMPLID
and D.EMPL_RCD_NBR = A.EMPL_RCD_NBR
and D.EFFDT <= C.EFFDT)
or
(A.POSITION_NBR = ' '
and A.EFFSEQ = (select max(D.EFFSEQ)
from PS_JOB D
where D.EMPLID = A.EMPLID
and D.EMPL_RCD_NBR = A.EMPL_RCD_NBR
and D.EFFDT = A.EFFDT))
)
)

may not help much as the optimizer might have guessed it already
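On a trivial predicate (names made up), the same factoring looks like this:

```sql
-- (x = 1 AND y = 2) OR (x = 1 AND y = 3)
-- factors, via A and (B or C) = (A and B) or (A and C), into:
SELECT *
FROM some_table
WHERE x = 1
  AND (y = 2 OR y = 3);
```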
Regards
Etbin -
CDHDR table query taking long time
Hi all,
A SELECT query from the CDHDR table is taking a long time. In the WHERE condition I am giving OBJECTCLAS = 'MAT_FULL', udate = sy-datum and langu = 'EN'.
Any suggestion to improve the performance? I want to select all the articles which got changed on the current date.
regards
shibu

This will always be slow for large data volumes, since CDHDR is designed for quick access by object ID (in this case material number), not by date.
I'm afraid you would need to introduce a secondary index on OBJECTCLAS and UDATE, if that query is crucial enough to warrant the additional disk space and processing time taken by the new index.
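At the database level the suggested secondary index would look roughly like this (a sketch; in SAP you would normally create it via the ABAP Dictionary / transaction SE11 rather than raw DDL, and the index name is made up):

```sql
-- Secondary index supporting selection by change-document class and date.
-- MANDT (client) leads because every CDHDR access is client-specific.
CREATE INDEX cdhdr_z01 ON cdhdr (mandt, objectclas, udate);
```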
Greetings
Thomas -
SAP BI -- query taking long time to execute
Hi
When I try to run the BEx query, it takes a long time. Please suggest.
Thanks
sreedhar -
Query optimization - Query is taking long time even there is no table scan in execution plan
Hi All,
The query below takes a very long time to execute even though all the required indexes are present, and there is no table scan in the execution plan. I did a lot of research but I am unable to find a solution.
Please help, this is required very urgently. Thanks in advance. :)
WITH cte
AS (
	SELECT Acc_ex1_3
	FROM Acc_ex1
	INNER JOIN Acc_ex5 ON (
		Acc_ex1.Acc_ex1_Id = Acc_ex5.Acc_ex5_Id
		AND Acc_ex1.OwnerID = Acc_ex5.OwnerID
		)
	WHERE (
		cast(Acc_ex5.Acc_ex5_92 AS DATETIME) >= '12/31/2010 18:30:00'
		AND cast(Acc_ex5.Acc_ex5_92 AS DATETIME) < '01/31/2014 18:30:00'
		)
	)
SELECT DISTINCT R.ReportsTo AS directReportingUserId
	,UC.UserName AS EmpName
	,UC.EmployeeCode AS EmpCode
	,UEx1.Use_ex1_1 AS PortfolioCode
	,(
		SELECT TOP 1 TerritoryName
		FROM UserTerritoryLevelView
		WHERE displayOrder = 6
			AND UserId = R.ReportsTo
		) AS BranchName
	,GroupsNotContacted AS groupLastContact
	,GroupCount AS groupTotal
FROM ReportingMembers R
INNER JOIN TeamMembers T ON (
	T.OwnerID = R.OwnerID
	AND T.MemberID = R.ReportsTo
	AND T.ReportsTo = 1
	)
INNER JOIN UserContact UC ON (
	UC.CompanyID = R.OwnerID
	AND UC.UserID = R.ReportsTo
	)
INNER JOIN Use_ex1 UEx1 ON (
	UEx1.OwnerId = R.OwnerID
	AND UEx1.Use_ex1_Id = R.ReportsTo
	)
INNER JOIN (
	SELECT Accounts.AssignedTo
		,count(DISTINCT Acc_ex1_3) AS GroupCount
	FROM Accounts
	INNER JOIN Acc_ex1 ON (
		Accounts.AccountID = Acc_ex1.Acc_ex1_Id
		AND Acc_ex1.Acc_ex1_3 > '0'
		AND Accounts.OwnerID = 109
		)
	GROUP BY Accounts.AssignedTo
	) TotalGroups ON (TotalGroups.AssignedTo = R.ReportsTo)
INNER JOIN (
	SELECT Accounts.AssignedTo
		,count(DISTINCT Acc_ex1_3) AS GroupsNotContacted
	FROM Accounts
	INNER JOIN Acc_ex1 ON (
		Accounts.AccountID = Acc_ex1.Acc_ex1_Id
		AND Acc_ex1.OwnerID = Accounts.OwnerID
		AND Acc_ex1.Acc_ex1_3 > '0'
		)
	INNER JOIN Acc_ex5 ON (
		Accounts.AccountID = Acc_ex5.Acc_ex5_Id
		AND Acc_ex5.OwnerID = Accounts.OwnerID
		)
	WHERE Accounts.OwnerID = 109
		AND Acc_ex1.Acc_ex1_3 NOT IN (
			SELECT Acc_ex1_3
			FROM cte
			)
	GROUP BY Accounts.AssignedTo
	) TotalGroupsNotContacted ON (TotalGroupsNotContacted.AssignedTo = R.ReportsTo)
WHERE R.OwnerID = 109
Please mark it as an answer/helpful if you find it useful. Thanks, Satya Prakash Jugran

Hi All,
Thanks for the replies.
I have optimized that query to make it run in a few seconds.
Here is my final query.
select ReportsTo as directReportingUserId,
UserName AS EmpName,
EmployeeCode AS EmpCode,
Use_ex1_1 AS PortfolioCode,
BranchName,
GroupInfo.groupTotal,
GroupInfo.groupLastContact,
case when exists
(select 1 from ReportingMembers RM
where RM.ReportsTo = UserInfo.ReportsTo
and RM.MemberID <> UserInfo.ReportsTo
) then 0 else UserInfo.ReportsTo end as memberid1,
(select code from Regions where ownerid=109 and name=UserInfo.BranchName) as BranchCode,
ROW_NUMBER() OVER (ORDER BY directReportingUserId) AS ROWNUMBER
FROM
(select distinct R.ReportsTo, UC.UserName, UC.EmployeeCode,UEx1.Use_ex1_1,
(select top 1 TerritoryName
from UserTerritoryLevelView
where displayOrder = 6
and UserId = R.ReportsTo) as BranchName,
Case when R.ReportsTo = Accounts.AssignedTo then Accounts.AssignedTo else 0 end as memberid1
from ReportingMembers R
INNER JOIN TeamMembers T ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo AND T.ReportsTo = 1)
inner join UserContact UC on (UC.CompanyID = R.OwnerID and UC.UserID = R.ReportsTo )
inner join Use_ex1 UEx1 on (UEx1.OwnerId = R.OwnerID and UEx1.Use_ex1_Id = R.ReportsTo)
inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
union
select distinct R.ReportsTo, UC.UserName, UC.EmployeeCode,UEx1.Use_ex1_1,
(select top 1 TerritoryName
from UserTerritoryLevelView
where displayOrder = 6
and UserId = R.ReportsTo) as BranchName,
Case when R.ReportsTo = Accounts.AssignedTo then Accounts.AssignedTo else 0 end as memberid1
from ReportingMembers R
--INNER JOIN TeamMembers T ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo)
inner join UserContact UC on (UC.CompanyID = R.OwnerID and UC.UserID = R.ReportsTo)
inner join Use_ex1 UEx1 on (UEx1.OwnerId = R.OwnerID and UEx1.Use_ex1_Id = R.ReportsTo)
inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
where R.MemberID = 1
) UserInfo
inner join
(select directReportingUserId, sum(Groups) as groupTotal, SUM(GroupsNotContacted) as groupLastContact
from
(select distinct R.ReportsTo as directReportingUserId, Acc_ex1_3 as GroupName, 1 as Groups,
case when Acc_ex5.Acc_ex5_92 between GETDATE()-365*10 and GETDATE() then 1 else 0 end as GroupsNotContacted
FROM ReportingMembers R
INNER JOIN TeamMembers T
ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo AND T.ReportsTo = 1)
inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
inner join Acc_ex1 on (Acc_ex1.OwnerID = 109 and Acc_ex1.Acc_ex1_Id = Accounts.AccountID and Acc_ex1.Acc_ex1_3 > '0')
inner join Acc_ex5 on (Acc_ex5.OwnerID = 109 and Acc_ex5.Acc_ex5_Id = Accounts.AccountID )
--where TerritoryID in ( select ChildRegionID RegionID from RegionWithSubRegions where OwnerID =109 and RegionID = 729)
union
select distinct R.ReportsTo as directReportingUserId, Acc_ex1_3 as GroupName, 1 as Groups,
case when Acc_ex5.Acc_ex5_92 between GETDATE()-365*10 and GETDATE() then 1 else 0 end as GroupsNotContacted
FROM ReportingMembers R
INNER JOIN TeamMembers T
ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo)
inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
inner join Acc_ex1 on (Acc_ex1.OwnerID = 109 and Acc_ex1.Acc_ex1_Id = Accounts.AccountID and Acc_ex1.Acc_ex1_3 > '0')
inner join Acc_ex5 on (Acc_ex5.OwnerID = 109 and Acc_ex5.Acc_ex5_Id = Accounts.AccountID )
--where TerritoryID in ( select ChildRegionID RegionID from RegionWithSubRegions where OwnerID =109 and RegionID = 729)
where R.MemberID = 1
) GroupWiseInfo
group by directReportingUserId
) GroupInfo
on UserInfo.ReportsTo = GroupInfo.directReportingUserId
Please mark it as an answer/helpful if you find it useful. Thanks, Satya Prakash Jugran -
The below query is taking a very long time.
select /*+ PARALLEL(a,8) PARALLEL(b,8) */ a.personid,a.winning_id, b.questionid from
winning_id_cleanup a , rm_personquestion b
where a.personid = b.personid and (a.winning_id,b.questionid) not in
(select /*+ PARALLEL(c,8) */ c.personid,c.questionid from rm_personquestion c where c.personid=a.winning_id);
The rm_personquestion table has 45 million rows and winning_id_cleanup has 1 million rows.
Please tell me how to tune this query?

Please post your query in the SQL and PL/SQL forum; this forum is not for SQL and PL/SQL questions. -
Export (exp) taking long time and reading UNDO
Hi Guys,
Oracle 9.2.0.7 on AIX 5.3
A schema-level export job is scheduled at night. Since the day before yesterday it has been taking a really long time. It used to finish in 8 hours or so, but yesterday it took around 20 hours and was still running. The schema size to be exported is around 1 TB. (I know it is a bit stupid to take such daily exports, but customer requirement, you know ;) ) Today it is still running, although I scheduled it to start even earlier, by 1 and 1/2 hours.
The command used is:
exp userid=abc/abc file=expabc.pipe buffer=100000 rows=y direct=y
recordlength=65535 indexes=n triggers=n grants=y
constraints=y statistics=none log=expabc.log owner=abc

I have monitored the session, and the whole time the wait event is db file sequential read. From p1 I figured out that all the datafiles it reads belong to the UNDO tablespace. What surprises me is that when consistent=y is not specified, should it go to read UNDO so frequently?
There is a total of around 1800 tables in the schema. From the export log I can see that it exported around 60 tables and has been stuck since then. Neither the logfile nor the dumpfile has been updated for a long time.
Any hints, clues in which direction to diagnose please.
Any other information required, please let me know.
Regards,
Amardeep Sidhu

Thanks Hemant.
As i wrote above, it runs from a cron job.
Here is the output from a simple SQL querying v$session_wait & v$datafile:
13:50:00 SQL> l
1* select a.sid,a.p1,a.p2,a.p3,b.file#,b.name
from v$session_wait a,v$datafile b where a.p1=b.file# and a.sid=154
13:50:01 SQL> /
SID P1 P2 P3 FILE# NAME
154 509 158244 1 509 /<some_path_here>/undotbs_45.dbf
13:50:03 SQL> /
SID P1 P2 P3 FILE# NAME
154 509 157566 1 509 /<some_path_here>/undotbs_45.dbf
13:50:07 SQL> /
SID P1 P2 P3 FILE# NAME
154 509 157016 1 509 /<some_path_here>/undotbs_45.dbf
13:50:11 SQL> /
SID P1 P2 P3 FILE# NAME
154 509 156269 1 509 /<some_path_here>/undotbs_45.dbf
13:50:16 SQL> /
SID P1 P2 P3 FILE# NAME
154 508 167362 1 508 /<some_path_here>/undotbs_44.dbf
13:50:58 SQL> /
SID P1 P2 P3 FILE# NAME
154 508 166816 1 508 /<some_path_here>/undotbs_44.dbf
13:51:02 SQL> /
SID P1 P2 P3 FILE# NAME
154 508 165024 1 508 /<some_path_here>/undotbs_44.dbf
13:51:14 SQL> /
SID P1 P2 P3 FILE# NAME
154 507 159019 1 507 /<some_path_here>/undotbs_43.dbf
13:52:09 SQL> /
SID P1 P2 P3 FILE# NAME
154 506 193598 1 506 /<some_path_here>/undotbs_42.dbf
13:52:12 SQL> /
SID P1 P2 P3 FILE# NAME
154 506 193178 1 506 /<some_path_here>/undotbs_42.dbf
13:52:14 SQL>

Regards,
Amardeep Sidhu
Edited by: Amardeep Sidhu on Jun 9, 2010 2:26 PM
Replaced few paths with <some_path_here> ;) -
SQL Query taking longer time as seen from Trace file
Below Query Execution timings:
Any help will be beneficial as it is affecting business needs.
SELECT MATERIAL_DETAIL_ID
FROM
GME_MATERIAL_DETAILS WHERE BATCH_ID = :B1 FOR UPDATE OF ACTUAL_QTY NOWAIT
call count cpu elapsed disk query current rows
Parse 1 0.00 0.70 0 0 0 0
Execute 2256 8100.00 24033.51 627 12298 31739 0
Fetch 2256 900.00 949.82 0 12187 0 30547
total 4513 9000.00 24984.03 627 24485 31739 30547
Thanks and Regards

Thanks Buddy.
Data Collected from Trace file:
SELECT STEP_CLOSE_DATE
FROM
GME_BATCH_STEPS WHERE BATCH_ID
IN (SELECT
DISTINCT BATCH_ID FROM
GME_MATERIAL_DETAILS START WITH BATCH_ID = :B2 CONNECT BY PRIOR PHANTOM_ID=BATCH_ID)
AND NVL(STEP_CLOSE_DATE, :B1) > :B1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.54 0 0 0 0
Execute 2256 800.00 1120.32 0 0 0 0
Fetch 2256 9100.00 13551.45 396 77718 0 0
total 4513 9900.00 14672.31 396 77718 0 0
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 66 (recursive depth: 1)
Rows Row Source Operation
0 TABLE ACCESS BY INDEX ROWID GME_BATCH_STEPS
13160 NESTED LOOPS
6518 VIEW
6518 SORT UNIQUE
53736 CONNECT BY WITH FILTERING
30547 NESTED LOOPS
30547 INDEX RANGE SCAN GME_MATERIAL_DETAILS_U1 (object id 146151)
30547 TABLE ACCESS BY USER ROWID GME_MATERIAL_DETAILS
23189 NESTED LOOPS
53736 BUFFER SORT
53736 CONNECT BY PUMP
23189 TABLE ACCESS BY INDEX ROWID GME_MATERIAL_DETAILS
23189 INDEX RANGE SCAN GME_MATERIAL_DETAILS_U1 (object id 146151)
4386 INDEX RANGE SCAN GME_BATCH_STEPS_U1 (object id 146144)
In the package there are lots of SQL statements using the CONNECT BY clause. Does the use of the CONNECT BY clause degrade performance?
As you can see, the Rows section shows 0 rows returned, yet the query buffer gets and the elapsed time are high.
Regards -
Hi
I have a query which is a 3-table join but takes a long time to execute. I checked the plan table; it shows FULL ACCESS on one of the tables.
I have 2 clarifications:
1. Will checking the status as NULL prevent the index from being used?
2. Are CASE statements recommended in queries?
Query
Select .........
FROM CLIENT LEFT OUTER JOIN INTERNET_LOGIN ON INTERNET_LOGIN.NUM_CLIENT_ID=CLIENT.NUM_CLIENT_ID,
POLI_MOT
WHERE
POLI_MOT.NUM_CLIENT_ID=CLIENT.NUM_CLIENT_ID
AND
(POLI_MOT.CHR_CANCEL_STATUS='N'
OR
POLI_MOT.CHR_CANCEL_STATUS IS NULL)
AND
CLIENT.NUM_CONTACT_TYPE_ID IN (1,3)
AND
(NVL(POLI_MOT.VCH_NEW_IC_NO,'A') =
CASE WHEN (NVL(null,NULL) IS NULL) THEN
NVL(POLI_MOT.VCH_NEW_IC_NO,'A')
ELSE
NVL(null,NULL)
END
OR
POLI_MOT.VCH_OLD_IC_NO =
CASE WHEN nvl(null,null) IS NULL THEN
POLI_MOT.VCH_OLD_IC_NO
ELSE
NVL(null,NULL)
END )
AND POLI_MOT.VCH_POLICY_NO =
CASE WHEN UPPER(nvl(NULL,null)) IS NULL THEN
POLI_MOT.VCH_POLICY_NO
ELSE
NVL(NULL,NULL)
END
AND POLI_MOT.VCH_VEHICLE_NO =
CASE WHEN UPPER(NVL('123',NULL)) IS NULL THEN
POLI_MOT.VCH_VEHICLE_NO
ELSE
NVL('123',NULL)
END

Hi,
There is nothing wrong with a full table access as such. When you look at the explain plan, check which table costs the most and try to work on that table.
To tune the performance of your query you can try either indexing or parallel access.
The syntax for the parallel hint is
/*+ PARALLEL("TBL_NM", 100) */ (or any suitable degree of parallelism)...
For an index hint, please use the name of an index on the table you want to drive the access from.
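For example (a sketch only; the table comes from the query above, but the alias, degree, index name and bind variable are made up):

```sql
-- Parallel full scan of POLI_MOT at degree 8.
SELECT /*+ PARALLEL(p, 8) */ *
FROM poli_mot p
WHERE p.chr_cancel_status = 'N' OR p.chr_cancel_status IS NULL;

-- Forcing a specific (hypothetical) index instead.
SELECT /*+ INDEX(p poli_mot_client_ix) */ *
FROM poli_mot p
WHERE p.num_client_id = :client_id;
```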
regards
Bharath -
Query taking long time to fetch the results
Hi!
when I run the query, it takes too long to fetch the result sets.
Please find the query below for the same.
SELECT
A.BUSINESS_UNIT,
A.JOURNAL_ID,
TO_CHAR(A.JOURNAL_DATE,'YYYY-MM-DD'),
A.UNPOST_SEQ,
A.FISCAL_YEAR,
A.ACCOUNTING_PERIOD,
A.JRNL_HDR_STATUS,
C.INVOICE,
C.ACCT_ENTRY_TYPE,
C.LINE_DST_SEQ_NUM,
C.TAX_AUTHORITY_CD,
C.ACCOUNT,
C.MONETARY_AMOUNT,
D.BILL_SOURCE_ID,
D.IDENTIFIER,
D.VAT_AMT_BSE,
D.VAT_TRANS_AMT_BSE,
D.VAT_TXN_TYPE_CD,
D.TAX_CD_VAT,
D.TAX_CD_VAT_PCT,
D.VAT_APPLICABILITY,
E.BILL_TO_CUST_ID,
E.BILL_STATUS,
E.BILL_CYCLE_ID,
TO_CHAR(E.INVOICE_DT,'YYYY-MM-DD'),
TO_CHAR(E.ACCOUNTING_DT,'YYYY-MM-DD'),
TO_CHAR(E.DT_INVOICED,'YYYY-MM-DD'),
E.ENTRY_TYPE,
E.ENTRY_REASON,
E.AR_LVL,
E.AR_DST_OPT,
E.AR_ENTRY_CREATED,
E.GEN_AR_ITEM_FLG,
E.GL_LVL, E.GL_ENTRY_CREATED,
(Case when c.account in ('30120000','30180050','30190000','30290000','30490000',
'30690000','30900040','30990000','35100000','35120000','35150000','35160000',
'39100050','90100000')
and D.TAX_CD_VAT_PCT <> 0 then 'Ej_Momskonto_med_moms'
When c.account not in ('30120000','30180050','30190000','30290000',
'30490000','30690000','30900040','30990000','35100000','35120000','35150000',
'35160000','39100050','90100000')
and D.TAX_CD_VAT_PCT <> 25 then 'Momskonto_utan_moms' end)
FROM
sysadm.PS_JRNL_HEADER A,
sysadm.PS_JRNL_LN B,
sysadm.PS_BI_ACCT_ENTRY C,
sysadm.PS_BI_LINE D,
sysadm.PS_BI_HDR E
WHERE A.BUSINESS_UNIT = '&BU'
AND A.JOURNAL_DATE BETWEEN TO_DATE('&From_date','YYYY-MM-DD')
AND TO_DATE('&To_date','YYYY-MM-DD')
AND A.SOURCE = 'BI'
AND A.BUSINESS_UNIT = B.BUSINESS_UNIT
AND A.JOURNAL_ID = B.JOURNAL_ID
AND A.JOURNAL_DATE = B.JOURNAL_DATE
AND A.UNPOST_SEQ = B.UNPOST_SEQ
AND B.BUSINESS_UNIT = C.BUSINESS_UNIT
AND B.JOURNAL_ID = C.JOURNAL_ID
AND B.JOURNAL_DATE = C.JOURNAL_DATE
AND B.JOURNAL_LINE = C.JOURNAL_LINE
AND C.ACCT_ENTRY_TYPE = 'RR'
AND C.BUSINESS_UNIT = '&BU'
AND C.BUSINESS_UNIT = D.BUSINESS_UNIT
AND C.INVOICE = D.INVOICE
AND C.LINE_SEQ_NUM = D.LINE_SEQ_NUM
AND D.BUSINESS_UNIT = '&BU'
AND D.BUSINESS_UNIT = E.BUSINESS_UNIT
AND D.INVOICE = E.INVOICE
AND E.BUSINESS_UNIT = '&BU'
AND
((c.account in ('30120000','30180050','30190000','30290000','30490000',
'30690000','30900040','30990000','35100000','35120000','35150000','35160000',
'39100050','90100000')
and D.TAX_CD_VAT_PCT <> 0)
OR
(c.account not in ('30120000','30180050','30190000','30290000','30490000',
'30690000','30900040','30990000','35100000','35120000','35150000','35160000',
'39100050','z')
and D.TAX_CD_VAT_PCT <> 25))
GROUP BY
A.BUSINESS_UNIT,
A.JOURNAL_ID,
TO_CHAR(A.JOURNAL_DATE,'YYYY-MM-DD'),
A.UNPOST_SEQ, A.FISCAL_YEAR,
A.ACCOUNTING_PERIOD,
A.JRNL_HDR_STATUS,
C.INVOICE,
C.ACCT_ENTRY_TYPE,
C.LINE_DST_SEQ_NUM,
C.TAX_AUTHORITY_CD,
C.ACCOUNT,
D.BILL_SOURCE_ID,
D.IDENTIFIER,
D.VAT_TXN_TYPE_CD,
D.TAX_CD_VAT,
D.TAX_CD_VAT_PCT,
D.VAT_APPLICABILITY,
E.BILL_TO_CUST_ID,
E.BILL_STATUS,
E.BILL_CYCLE_ID,
TO_CHAR(E.INVOICE_DT,'YYYY-MM-DD'),
TO_CHAR(E.ACCOUNTING_DT,'YYYY-MM-DD'),
TO_CHAR(E.DT_INVOICED,'YYYY-MM-DD'),
E.ENTRY_TYPE, E.ENTRY_REASON,
E.AR_LVL, E.AR_DST_OPT,
E.AR_ENTRY_CREATED,
E.GEN_AR_ITEM_FLG,
E.GL_LVL,
E.GL_ENTRY_CREATED,
C.MONETARY_AMOUNT,
D.VAT_AMT_BSE,
D.VAT_TRANS_AMT_BSE
having
(Case when c.account in ('30120000','30180050','30190000','30290000',
'30490000','30690000','30900040','30990000','35100000','35120000','35150000',
'35160000','39100050','90100000')
and D.TAX_CD_VAT_PCT <> 0 then 'Ej_Momskonto_med_moms'
When c.account not in ('30120000','30180050','30190000','30290000','30490000',
'30690000','30900040','30990000','35100000','35120000','35150000','35160000',
'39100050','90100000')
and D.TAX_CD_VAT_PCT <> 25 then 'Momskonto_utan_moms' end) is not null
So Could you provide the solution to fix this issue?
Thanks
senthil

When your query takes too long ... : http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0
Regards,
Rob. -
The following query I wrote returns 1400 records, and the lines below are taking much of the time.
1.5 seconds are taken by
count = quer != null ? quer.Count() : 0;
and 2 seconds are taken by
candidateList = quer.Skip((pageIndex - 1) * pageSize).Take(pageSize).ToList();
Please suggest.

Hi Jon,
In SharePoint, I suggest you use a CAML query. If you use LINQ, the performance won't be guaranteed.
For the first query, you can use SPQuery.Count to achieve it; for the second query, you can build a proper CAML query to filter the data.
Here are some detailed articles for your reference:
SPList.GetItems method (SPQuery)
SPQuery.Query Property
Zhengyu Guo
TechNet Community Support -
SQL query taking long time ... it's very urgent!!!
Hi All,
Can anybody help me tune this query? Its cost is 62,900, and there is a full table scan on ap_invoices_all.
For one invoice ID it is taking 20 seconds.
SELECT /*+ INDEX ( i2 AP_INVOICES_N8 ) INDEX ( i1 AP_INVOICES_N8 ) */ DISTINCT ou.name operating_unit,
NVL(SUBSTR(UPPER(TRANSLATE(i1.invoice_num,'a!@#\/-_$%^&*.','a')),
1,:P_MATCH_LENGTH),'NomatchKluDge1') match_string,
UPPER(v.vendor_name) upper_supplier_name,
i1.invoice_num invoice_number,
to_char(i1.invoice_date,'DD-MON-YYYY') invoice_date,
--i1.invoice_date invoice_date,
NVL(i1.invoice_amount,0) invoice_amount,
i1.invoice_currency_code currency_code,
v.segment1 supplier_number,
v.vendor_name supplier_name,
ssa.vendor_site_code supplier_code,
lc.displayed_field invoice_type,
poh.segment1 po_number,
(select min(por.release_num)
from po_releases_all por
where poh.po_header_id = por.po_header_id) release_num,
gcc.segment1 location,
i1.payment_method_code payment_method_code,
DECODE(LENGTH(TO_CHAR(aca.check_number)),9,aca.check_number,aca.doc_sequence_value) payment_doc_number
FROM ap_invoices_all i1,
ap_invoices_all i2,
ap_suppliers v ,
ap_supplier_sites_all ssa,
ap_lookup_codes lc,
/* (select distinct pha.SEGMENT1, i.PO_HEADER_ID, i.INVOICE_ID
from ap_invoice_lines_all i
,po_headers_all pha
where pha.PO_HEADER_ID = i.PO_HEADER_ID) poh, */
po_headers_all poh,
ap_invoice_lines_all ail,
ap_invoice_distributions_all aida,
gl_code_combinations gcc,
ap_checks_all aca,
ap_invoice_payments_all ipa,
hr_all_organization_units ou
WHERE i1.invoice_id <> i2.invoice_id
AND NVL(substr(upper(translate(i1.invoice_num,'a!@#\/-_$%^&*.','a')),
1,:P_MATCH_LENGTH),'NomatchKluDge1')
= NVL(substr(upper(translate(i2.invoice_num,'a!@#\/-_$%^&*.','a')),
1,:P_MATCH_LENGTH),'abcdefghijklm')
--AND i1.creation_date between :p_creation_date_from and :p_creation_date_to
AND i1.cancelled_date IS NULL
--AND i2.creation_date between :p_creation_date_from and :p_creation_date_to
AND i2.cancelled_date IS NULL
AND i1.invoice_amount = nvl(i2.invoice_amount,-1)
--AND i1.vendor_id = i2.vendor_id
AND i1.vendor_id+0 = i2.vendor_id+0
AND nvl(i1.vendor_id,-1) = v.vendor_id
AND i1.invoice_id = aida.invoice_id
AND aida.distribution_line_number = 1
AND gcc.code_combination_id = aida.dist_code_combination_id
AND lc.lookup_code (+) = i1.invoice_type_lookup_code
AND lc.lookup_type (+) = 'INVOICE TYPE'
AND i1.vendor_site_id = ssa.vendor_site_id(+)
--AND i1.invoice_id = poh.invoice_id (+)
AND i1.invoice_id = ail.invoice_id
--AND ail.line_number = 1
AND aida.INVOICE_LINE_NUMBER = 1
--AND ail.po_header_id = poh.po_header_id (+)
AND ail.po_header_id = poh.po_header_id
AND ail.INVOICE_ID = aida.INVOICE_ID
and ail.LINE_NUMBER = aida.INVOICE_LINE_NUMBER
AND i1.invoice_id = ipa.invoice_id(+)
AND ipa.check_id = aca.check_id(+)
AND i1.org_id = ou.organization_id
and i1.invoice_id = 123456
ORDER BY upper(v.vendor_name),
NVL(substr(upper(translate(i1.invoice_num,'a!@#\/-_$%^&*.','a')),
1,:P_MATCH_LENGTH),'abcdefghijklm'),
upper(i1.invoice_num);
Regards
--Harry

I tried to rewrite this query to format it into something more readable. Since I can't test it, this may have introduced syntax errors:
SELECT /*+ INDEX ( i2 AP_INVOICES_N8 ) INDEX ( i1 AP_INVOICES_N8 ) */
DISTINCT ou.name operating_unit,
NVL(SUBSTR(UPPER(TRANSLATE(i1.invoice_num,
'a!@#\/-_$%^&*.','a')),
1,:P_MATCH_LENGTH),'NomatchKluDge1') match_string,
UPPER(v.vendor_name) upper_supplier_name,
i1.invoice_num invoice_number,
to_char(i1.invoice_date,'DD-MON-YYYY') invoice_date,
NVL(i1.invoice_amount,0) invoice_amount,
i1.invoice_currency_code currency_code,
v.segment1 supplier_number,
v.vendor_name supplier_name,
ssa.vendor_site_code supplier_code,
lc.displayed_field invoice_type,
poh.segment1 po_number,
(SELECT MIN(por.release_num)
FROM po_releases_all por
WHERE poh.po_header_id = por.po_header_id) release_num,
gcc.segment1 location,
i1.payment_method_code payment_method_code,
DECODE(LENGTH(TO_CHAR(aca.check_number)),9,
aca.check_number,aca.doc_sequence_value) payment_doc_number
FROM ap_invoices_all i1
INNER JOIN ap_invoices_all i2
ON i1.invoice_id <> i2.invoice_id
AND i1.invoice_amount = NVL(i2.invoice_amount,-1)
AND i1.vendor_id+0 = i2.vendor_id+0
INNER JOIN ap_suppliers v
ON NVL(i1.vendor_id,-1) = v.vendor_id
INNER JOIN ap_lookup_codes lc
ON lc.lookup_code = i1.invoice_type_lookup_code
INNER JOIN ap_invoice_distributions_all aida
ON i1.invoice_id = aida.invoice_id
INNER JOIN gl_code_combinations gcc
ON gcc.code_combination_id = aida.dist_code_combination_id
INNER JOIN ap_invoice_lines_all ail
ON i1.invoice_id = ail.invoice_id
INNER JOIN po_headers_all poh
ON ail.po_header_id = poh.po_header_id
INNER JOIN hr_all_organization_units ou
ON i1.org_id = ou.organization_id
LEFT JOIN (ap_invoice_payments_all ipa
INNER JOIN ap_checks_all aca
ON ipa.check_id = aca.check_id)
ON i1.invoice_id = ipa.invoice_id
LEFT JOIN ap_supplier_sites_all ssa
ON i1.vendor_site_id = ssa.vendor_site_id
WHERE NVL(substr(upper(translate(i1.invoice_num,'a!@#\/-_$%^&*.','a')),
1,:P_MATCH_LENGTH),'NomatchKluDge1')
= NVL(substr(upper(translate(i2.invoice_num,'a!@#\/-_$%^&*.','a')),
1,:P_MATCH_LENGTH),'abcdefghijklm')
AND i1.cancelled_date IS NULL
AND i2.cancelled_date IS NULL
AND aida.distribution_line_number = 1
AND aida.INVOICE_LINE_NUMBER = 1
AND ail.LINE_NUMBER = 1
AND lc.lookup_type = 'INVOICE TYPE'
AND i1.invoice_id = 123456
ORDER BY upper(v.vendor_name),
NVL(substr(upper(translate(i1.invoice_num,'a!@#\/-_$%^&*.','a')),
1,:P_MATCH_LENGTH),'abcdefghijklm'),
upper(i1.invoice_num);

I dislike queries in the SELECT clause like the one you have to get RELEASE_NUM. One thing in particular I notice is that this appears to be the only place anything from the PO_HEADERS_ALL table is used. PO_HEADERS_ALL is only pulled into the query by the AP_INVOICE_LINES_ALL table. Since the JOIN column used for that is PO_HEADER_ID, and that's the same one used in the SELECT-clause subquery, do you really even need the PO_HEADERS_ALL table? This would remove one join at least.
Your query had "AND aida.INVOICE_LINE_NUMBER = 1" and "AND ail.LINE_NUMBER = aida.INVOICE_LINE_NUMBER". The second needn't reference AIDA, so I changed it to "AND ail.LINE_NUMBER = 1". It likely won't make a performance impact, but the SQL is clearer.
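If PO_NUMBER itself isn't actually needed in the output, the release-number lookup can hang off AIL.PO_HEADER_ID directly, dropping the PO_HEADERS_ALL join entirely. A minimal, untested sketch; the elided columns and joins are unchanged from the query above:

```sql
SELECT ...,
       -- scalar subquery keyed on the invoice line's PO header,
       -- so PO_HEADERS_ALL is no longer needed just for this:
       (SELECT MIN(por.release_num)
          FROM po_releases_all por
         WHERE por.po_header_id = ail.po_header_id) release_num,
       ...
  FROM ap_invoices_all i1
 INNER JOIN ap_invoice_lines_all ail
       ON i1.invoice_id = ail.invoice_id
 -- ...all other joins as before, minus PO_HEADERS_ALL
```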
Merge Query Taking Long time !!!
Hi all,
I'm using the update/insert loading type in one of my mappings.
The mapping usually takes 3-4 minutes to complete.
Suddenly it has taken 1 hour to complete. If I take the select query from the mapping and run it on its own, it runs fast and completes in 2 minutes.
I checked the source data count and the indexes on the source columns. Everything is the same.
At the mapping level, I didn't make any changes. Can anyone suggest what the possible reason could be?
Thanks and Regards
Ela

Lots of things come into play when you're tuning a query.
An (unformatted) execution plan isn't enough.
Tuning takes time and understanding how (a lot of) things work, there is no ASAP in the world of tuning.
Please post other important details, like your database version, optimizer settings, how/when are table statistics gathered etc.
So, read the following informative threads (and please take your time, this really is important stuff), and adjust your thread as needed.
That way you'll have a bigger chance of getting help that makes sense...
Your DBA should/ought to be able to help you in this as well.
Re: HOW TO: Post a SQL statement tuning request - template posting
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
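On the statistics point above: checking when stats were last gathered on the source tables, and refreshing them if stale, is a quick first sanity check. A hedged sketch with placeholder owner/table names:

```sql
-- When were statistics last gathered, and how many rows does the
-- optimizer think the table has?
SELECT table_name, last_analyzed, num_rows
  FROM all_tables
 WHERE owner = 'MY_SCHEMA'
   AND table_name = 'MY_TABLE';

-- Refresh them if they look stale (CASCADE also does the indexes):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'MY_SCHEMA',
    tabname => 'MY_TABLE',
    cascade => TRUE);
END;
/
```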
QUERY taking longer time than usual
Hello Gurus,
The query below used to take 5-10 minutes depending on resource availability, but this time it is taking 4-5 hours to complete.
INSERT /*+ APPEND */ INTO TAG_STAGING
SELECT /*+ INDEX(A,ALL_tags_INDX1) */
DISTINCT TRIM (serial) serial_num,
TRIM (COMPANY_numBER) COMPANY_NUM,
TRIM (PERSON_id) PERSON_id
FROM ALL_tags@DWDB_link a
WHERE serviceS IN (SELECT /*+ INDEX(B,service_CODES_INDX2) */
services
FROM service_CODES b
WHERE srvc_cd = 'R')
AND (ORDERDATE_date BETWEEN TO_DATE ('01-JAN-2007','dd-mon-yyyy')
AND TO_DATE ('31-DEC-2007','dd-mon-yyyy'))
AND (TRIM (status_1) IS NULL OR TRIM (status_1) = 'R')
AND (TRIM (status_2) = 'R' OR TRIM (status_2) IS NULL);
TAG_STAGING table is empty with primary key on the three given columns
ALL_tags@DWDB_link table has about 100M rows
Ideally the query should fetch about 4M rows.
Could anyone please give me an idea how to speed up the process?
Thanks in advance
Thanks,
TT

First I'd check the explain plan to make sure that it makes sense. Perhaps an index was dropped or perhaps the stats are wrong for some reason.
If the explain plan looks good then I'd trace it and see where the time is being spent.
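For reference, getting the plan and a session trace in Oracle looks roughly like this; the full INSERT ... SELECT from the post goes where the ellipsis is:

```sql
-- 1) Ask the optimizer for its plan without running the statement:
EXPLAIN PLAN FOR
INSERT /*+ APPEND */ INTO TAG_STAGING
SELECT ... ;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- 2) If the plan looks reasonable, trace the session to see where
--    the time actually goes, then format the trace file with tkprof:
BEGIN
  DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE);
END;
/
```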