QUERY taking longer time than usual
Hello Gurus,
The query below used to take 5-10 minutes, depending on resource availability, but it is now taking 4-5 hours to complete.
INSERT /*+ APPEND */ INTO TAG_STAGING
SELECT /*+ INDEX(A,ALL_tags_INDX1) */
DISTINCT TRIM (serial) serial_num,
TRIM (COMPANY_numBER) COMPANY_NUM,
TRIM (PERSON_id) PERSON_id
FROM ALL_tags@DWDB_link a
WHERE serviceS IN (SELECT /*+ INDEX(B,service_CODES_INDX2) */
services
FROM service_CODES b
WHERE srvc_cd = 'R')
AND (ORDERDATE_date BETWEEN TO_DATE ('01-JAN-2007','dd-mon-yyyy')
AND TO_DATE ('31-DEC-2007','dd-mon-yyyy'))
AND ( (TRIM (status_1) IS NULL)
OR (TRIM (status_1) = 'R') )
AND (TRIM (status_2) = 'R' OR TRIM (status_2) IS NULL);
TAG_STAGING is empty, with a primary key on the three selected columns.
ALL_tags@DWDB_link has about 100M rows.
Ideally the query should fetch about 4M rows.
Could anyone please suggest how to speed this up?
Thanks in advance
Thanks,
TT
First I'd check the explain plan to make sure that it makes sense. Perhaps an index was dropped or perhaps the stats are wrong for some reason.
If the explain plan looks good then I'd trace it and see where the time is being spent.
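A minimal sketch of both steps, using the poster's objects (the trace event level and the column list shown are just one common choice):

```sql
-- 1. Check what plan the optimizer currently picks for the SELECT part.
EXPLAIN PLAN FOR
SELECT /*+ INDEX(A,ALL_tags_INDX1) */
       DISTINCT TRIM(serial), TRIM(COMPANY_numBER), TRIM(PERSON_id)
FROM   ALL_tags@DWDB_link a
WHERE  ORDERDATE_date BETWEEN TO_DATE('01-JAN-2007','dd-mon-yyyy')
                          AND TO_DATE('31-DEC-2007','dd-mon-yyyy');
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- 2. If the plan looks sane, trace the real run (level 8 includes wait
--    events), then format the resulting trace file with tkprof.
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
-- ... run the INSERT /*+ APPEND */ ... SELECT here, then:
ALTER SESSION SET EVENTS '10046 trace name context off';
```

Because ALL_tags sits behind a database link, it is also worth checking the plan on the remote site; distributed statements often change plans when remote statistics go stale.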
Similar Messages
-
Query taking long time for EXTRACTING the data more than 24 hours
Hi ,
The query below has been extracting data for more than 24 hours. Even though indexes are available on the tables, the plan goes to FULL TABLE SCAN. Please find the query and explain plan details below and suggest.
SQL> explain plan for
select a.account_id, round(a.account_balance,2) account_balance,
nvl(ah.invoice_id,ah.adjustment_id) transaction_id,
to_char(ah.effective_start_date,'DD-MON-YYYY') transaction_date,
to_char(nvl(i.payment_due_date,
to_date('30-12-9999','dd-mm-yyyy')),'DD-MON-YYYY') due_date,
ah.current_balance-ah.previous_balance amount,
decode(ah.invoice_id,null,'A','I') transaction_type
from account a, account_history ah, invoice i
where a.account_id=ah.account_id
and a.account_type_id=1000002
and round(a.account_balance,2) > 0
and (ah.invoice_id is not null or ah.adjustment_id is not null)
and ah.CURRENT_BALANCE > ah.previous_balance
and ah.invoice_id=i.invoice_id(+)
AND a.account_balance > 0
order by a.account_id,ah.effective_start_date desc;
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
| 0 | SELECT STATEMENT | | 544K| 30M| | 693K (20)|
| 1 | SORT ORDER BY | | 544K| 30M| 75M| 693K (20)|
|* 2 | HASH JOIN | | 544K| 30M| | 689K (20)|
|* 3 | TABLE ACCESS FULL | ACCOUNT | 20080 | 294K| | 6220 (18)|
|* 4 | HASH JOIN OUTER | | 131M| 5532M| 5155M| 678K (20)|
|* 5 | TABLE ACCESS FULL| ACCOUNT_HISTORY | 131M| 3646M| | 197K (25)|
| 6 | TABLE ACCESS FULL| INVOICE | 262M| 3758M| | 306K (18)|
Predicate Information (identified by operation id):
2 - access("A"."ACCOUNT_ID"="AH"."ACCOUNT_ID")
3 - filter("A"."ACCOUNT_TYPE_ID"=1000002 AND "A"."ACCOUNT_BALANCE">0 AND
ROUND("A"."ACCOUNT_BALANCE",2)>0)
4 - access("AH"."INVOICE_ID"="I"."INVOICE_ID"(+))
5 - filter("AH"."CURRENT_BALANCE">"AH"."PREVIOUS_BALANCE" AND ("AH"."INVOICE_ID"
IS NOT NULL OR "AH"."ADJUSTMENT_ID" IS NOT NULL))
22 rows selected.
Index Details:
SQL> select INDEX_OWNER,INDEX_NAME,COLUMN_NAME,TABLE_NAME from dba_ind_columns where
2 table_name in ('INVOICE','ACCOUNT','ACCOUNT_HISTORY') order by 4;
INDEX_OWNER INDEX_NAME COLUMN_NAME TABLE_NAME
OPS$SVM_SRV4 P_ACCOUNT ACCOUNT_ID ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT_NAME ACCOUNT_NAME ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT CUSTOMER_NODE_ID ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT ACCOUNT_TYPE_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_ACCOUNT_TYPE ACCOUNT_TYPE_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_INVOICE INVOICE_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_PREVIOUS_INVOICE PREVIOUS_INVOICE_ID ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_NAME ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_ID ACCOUNT
OPS$SVM_SRV4 I_LAST_MODIFIED_ACCOUNT LAST_MODIFIED ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_INVOICE_ACCOUNT INVOICE_ACCOUNT_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT ACCOUNT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT SEQNR ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_INVOICE INVOICE_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV INVOICE_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA CURRENT_BALANCE ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA INVOICE_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ADJUSTMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ACCOUNT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_LMOD LAST_MODIFIED ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV ADJUSTMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_PAYMENT PAYMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADJUSTMENT ADJUSTMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_APPLIED_DT APPLIED_DATE ACCOUNT_HISTORY
OPS$SVM_SRV4 P_INVOICE INVOICE_ID INVOICE
OPS$SVM_SRV4 U_INVOICE CUSTOMER_INVOICE_STR INVOICE
OPS$SVM_SRV4 I_LAST_MODIFIED_INVOICE LAST_MODIFIED INVOICE
OPS$SVM_SRV4 U_INVOICE_ACCOUNT ACCOUNT_ID INVOICE
OPS$SVM_SRV4 U_INVOICE_ACCOUNT BILL_RUN_ID INVOICE
OPS$SVM_SRV4 I_INVOICE_BILL_RUN BILL_RUN_ID INVOICE
OPS$SVM_SRV4 I_INVOICE_INVOICE_TYPE INVOICE_TYPE_ID INVOICE
OPS$SVM_SRV4 I_INVOICE_CUSTOMER_NODE CUSTOMER_NODE_ID INVOICE
32 rows selected.
Regards,
Bathula
Oracle-DBA

I have some suggestions. But first, you realize that you have some redundant indexes, right? You have an index on account(account_name) and also account(account_name, account_id), and also account_history(invoice_id) and account_history(invoice_id, adjustment_id). No matter, I will suggest some new composite indexes.
Also, you do not need two lines for these conditions:
and round(a.account_balance, 2) > 0
AND a.account_balance > 0
You can just use: and a.account_balance >= 0.005
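Since Oracle's ROUND() rounds half away from zero, the two original predicates do collapse to the single `>= 0.005` test. A quick sanity check of that equivalence, emulating Oracle's rounding with Python's decimal module (the sample balances are made up):

```python
from decimal import Decimal, ROUND_HALF_UP

def oracle_round(x, places=2):
    """Emulate Oracle ROUND(), which rounds ties away from zero."""
    q = Decimal(1).scaleb(-places)
    return Decimal(str(x)).quantize(q, rounding=ROUND_HALF_UP)

def original_filter(bal):
    # round(a.account_balance, 2) > 0 AND a.account_balance > 0
    return oracle_round(bal) > 0 and Decimal(str(bal)) > 0

def rewritten_filter(bal):
    # a.account_balance >= 0.005
    return Decimal(str(bal)) >= Decimal("0.005")

# The two predicates agree on balances around the 0.005 boundary.
for b in [-1.0, 0.0, 0.001, 0.004, 0.0049, 0.005, 0.0051, 0.01, 1.0, 100.25]:
    assert original_filter(b) == rewritten_filter(b)
```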
So the formatted query is:
select a.account_id,
round(a.account_balance, 2) account_balance,
nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
to_char(ah.effective_start_date, 'DD-MON-YYYY') transaction_date,
to_char(nvl(i.payment_due_date, to_date('30-12-9999', 'dd-mm-yyyy')),
'DD-MON-YYYY') due_date,
ah.current_balance - ah.previous_balance amount,
decode(ah.invoice_id, null, 'A', 'I') transaction_type
from account a, account_history ah, invoice i
where a.account_id = ah.account_id
and a.account_type_id = 1000002
and (ah.invoice_id is not null or ah.adjustment_id is not null)
and ah.CURRENT_BALANCE > ah.previous_balance
and ah.invoice_id = i.invoice_id(+)
AND a.account_balance >= .005
order by a.account_id, ah.effective_start_date desc;

You will probably want to select:
1. From ACCOUNT first (your smaller table), for which you supply a literal on account_type_id. That should limit the accounts retrieved from ACCOUNT_HISTORY
2. From ACCOUNT_HISTORY. We want to limit the records as much as possible on this table because of the outer join.
3. INVOICE we want to access last because it seems to be least restricted, it is the biggest, and it has the outer join condition so it will manufacture rows to match as many rows as come back from account_history.
Try the query above after creating the following composite indexes. The order of the columns is important:
create index account_composite_i on account(account_type_id, account_balance, account_id);
create index acct_history_comp_i on account_history(account_id, invoice_id, adjustment_id, current_balance, previous_balance, effective_start_date);
create index invoice_composite_i on invoice(invoice_id, payment_due_date);
All the columns used in the where clause will be indexed, in a logical order suited to the needs of the query. Plus each selected column is indexed as well, so we should not need to touch the tables at all to satisfy the query.
Try the query after creating these indexes.
A final suggestion is to try larger sort and hash area sizes and a manual workarea policy:
alter session set workarea_size_policy = manual;
alter session set sort_area_size = 2147483647;
alter session set hash_area_size = 2147483647; -
When I send an email no matter how small it now seems to take a much longer time than usual (by watching the gear wheel spinning). Anyone have any ideas how I can get my sending back to a much shorter time?
Have you burned Discs with other programs using this computer? Are you certain that you have a drive that will burn discs?
-
Query taking long time to run.
The following query is taking a long time to run. Is there anything that can be done to make it run faster, e.g. by changing the SQL?
select distinct
A.DEPTID,
A.POSITION_NBR,
A.EMPLID,
A.EMPL_RCD_NBR,
A.EFFDT,
B.NAME,
A.EMPL_STATUS,
A.JOBCODE,
A.ANNUAL_RT,
A.STD_HOURS,
A.PRIMARY_JOB,
C.POSN_STATUS,
case when A.POSITION_NBR = ' ' then 0 else C.STD_HOURS end,
case when A.POSITION_NBR = ' ' then ' ' else C.DEPTID end
from PS_JOB A,
PS_PERSONAL_DATA B,
PS_POSITION_DATA C
where A.EMPLID = B.EMPLID
and
((A.POSITION_NBR = C.POSITION_NBR
and A.EFFSEQ = (select max(D.EFFSEQ)
from PS_JOB D
where D.EMPLID = A.EMPLID
and D.EMPL_RCD_NBR = A.EMPL_RCD_NBR
and D.EFFDT = A.EFFDT)
and C.POSN_STATUS <> 'G'
and C.EFFDT = (select max(E.EFFDT)
from PS_POSITION_DATA E
where E.POSITION_NBR = A.POSITION_NBR
and E.EFFDT <= A.EFFDT)
and C.EFFSEQ = (select max(F.EFFSEQ)
from PS_POSITION_DATA F
where F.POSITION_NBR = A.POSITION_NBR
and F.EFFDT = C.EFFDT))
or
(A.POSITION_NBR = C.POSITION_NBR
and A.EFFDT = (select max(D.EFFDT)
from PS_JOB D
where D.EMPLID = A.EMPLID
and D.EMPL_RCD_NBR = A.EMPL_RCD_NBR
and D.EFFDT <= C.EFFDT)
and A.EFFSEQ = (select max(E.EFFSEQ)
from PS_JOB E
where E.EMPLID = A.EMPLID
and E.EMPL_RCD_NBR = A.EMPL_RCD_NBR
and E.EFFDT = A.EFFDT)
and C.POSN_STATUS <> 'G'
and C.EFFSEQ = (select max(F.EFFSEQ)
from PS_POSITION_DATA F
where F.POSITION_NBR = A.POSITION_NBR
and F.EFFDT = C.EFFDT)))
or
(A.POSITION_NBR = ' '
and A.EFFSEQ = (select max(D.EFFSEQ)
from PS_JOB D
where D.EMPLID = A.EMPLID
and D.EMPL_RCD_NBR = A.EMPL_RCD_NBR
and D.EFFDT = A.EFFDT)))

Using the distributive law A and (B or C) = (A and B) or (A and C) from right to left, we can have:
select distinct A.DEPTID,A.POSITION_NBR,A.EMPLID,A.EMPL_RCD_NBR,A.EFFDT,B.NAME,A.EMPL_STATUS,
A.JOBCODE,A.ANNUAL_RT,A.STD_HOURS,A.PRIMARY_JOB,C.POSN_STATUS,
case when A.POSITION_NBR = ' ' then 0 else C.STD_HOURS end,
case when A.POSITION_NBR = ' ' then ' ' else C.DEPTID end
from PS_JOB A,PS_PERSONAL_DATA B,PS_POSITION_DATA C
where A.EMPLID = B.EMPLID
and (
(A.POSITION_NBR = C.POSITION_NBR
and A.EFFSEQ = (select max(D.EFFSEQ)
from PS_JOB D
where D.EMPLID = A.EMPLID
and D.EMPL_RCD_NBR = A.EMPL_RCD_NBR
and D.EFFDT = A.EFFDT)
and C.EFFSEQ = (select max(F.EFFSEQ)
from PS_POSITION_DATA F
where F.POSITION_NBR = A.POSITION_NBR
and F.EFFDT = C.EFFDT)
and C.POSN_STATUS != 'G'
and (C.EFFDT = (select max(E.EFFDT)
from PS_POSITION_DATA E
where E.POSITION_NBR = A.POSITION_NBR
and E.EFFDT <= A.EFFDT)
or
A.EFFDT = (select max(D.EFFDT)
from PS_JOB D
where D.EMPLID = A.EMPLID
and D.EMPL_RCD_NBR = A.EMPL_RCD_NBR
and D.EFFDT <= C.EFFDT)))
or
(A.POSITION_NBR = ' '
and A.EFFSEQ = (select max(D.EFFSEQ)
from PS_JOB D
where D.EMPLID = A.EMPLID
and D.EMPL_RCD_NBR = A.EMPL_RCD_NBR
and D.EFFDT = A.EFFDT))
)
may not help much, as the optimizer might have guessed it already.
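The Boolean identity behind this factoring can be checked exhaustively over all eight truth assignments, just to make the rewrite step concrete:

```python
from itertools import product

# Distributive law: A and (B or C) == (A and B) or (A and C),
# used here right-to-left to factor common conjuncts out of the OR branches.
for a, b, c in product([False, True], repeat=3):
    assert (a and (b or c)) == ((a and b) or (a and c))
```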
Regards
Etbin -
CDHDR table query taking long time
Hi all,
A select query on the CDHDR table is taking a long time. In the WHERE condition I am giving OBJECTCLASS = 'MAT_FULL', udate = sy-datum and langu = 'EN'.
Any suggestion to improve the performance? I want to select all the articles that were changed on the current date.
regards
shibu

This will always be slow for large data volumes, since CDHDR is designed for quick access by object ID (in this case material number), not by date.
I'm afraid you would need to introduce a secondary index on OBJECTCLAS and UDATE, if that query is crucial enough to warrant the additional disk space and processing time taken by the new index.
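At the database level, the suggested secondary index would look roughly like this (a sketch only; in practice it would be created in the ABAP Dictionary via SE11, and the index name here is made up):

```sql
-- Hypothetical secondary index supporting selection by change-document
-- class and change date rather than by object ID.
CREATE INDEX cdhdr_z01 ON cdhdr (objectclas, udate);
```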
Greetings
Thomas -
SAP BI query taking long time to execute
Hi
When I try to run the BEx query, it takes a long time. Please suggest.
Thanks
sreedhar
Oracle SQL Select query takes long time than expected.
Hi,
I am facing a problem with a SQL SELECT statement: it takes a long time to return results from the database.
The query is as follows.
select /*+rule */ f1.id,f1.fdn,p1.attr_name,p1.attr_value from fdnmappingtable f1,parametertable p1 where p1.id = f1.id and ((f1.object_type ='ne_sub_type.780' )) and ( (f1.id in(select id from fdnmappingtable where fdn like '0=#1#/14=#S0058-3#/17=#S0058-3#/18=#1#/780=#5#%')))order by f1.id asc
This query is taking more than 4 seconds to get the results on a system where the DB has been running for more than 1 month.
The same query takes only a few milliseconds (50-100 ms) on a system where the DB is freshly installed, and the data in the tables is the same on both systems.
Kindly advise what is going wrong.
Regards,
Purushotham

SQL> @/alcatel/omc1/data/query.sql
2 ;
9 rows selected.
Execution Plan
Plan hash value: 3745571015
| Id | Operation | Name |
| 0 | SELECT STATEMENT | |
| 1 | SORT ORDER BY | |
| 2 | NESTED LOOPS | |
| 3 | NESTED LOOPS | |
| 4 | TABLE ACCESS FULL | PARAMETERTABLE |
|* 5 | TABLE ACCESS BY INDEX ROWID| FDNMAPPINGTABLE |
|* 6 | INDEX UNIQUE SCAN | PRIMARY_KY_FDNMAPPINGTABLE |
|* 7 | TABLE ACCESS BY INDEX ROWID | FDNMAPPINGTABLE |
|* 8 | INDEX UNIQUE SCAN | PRIMARY_KY_FDNMAPPINGTABLE |
Predicate Information (identified by operation id):
5 - filter("F1"."OBJECT_TYPE"='ne_sub_type.780')
6 - access("P1"."ID"="F1"."ID")
7 - filter("FDN" LIKE '0=#1#/14=#S0058-3#/17=#S0058-3#/18=#1#/780=#5#
8 - access("F1"."ID"="ID")
Note
- rule based optimizer used (consider using cbo)
Statistics
0 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
0 bytes sent via SQL*Net to client
0 bytes received via SQL*Net from client
0 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
9 rows processed
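Given the note above that the rule-based optimizer was used (the `/*+rule */` hint forces it), a reasonable next step is to drop the hint and let the cost-based optimizer cost the plan. A sketch, assuming the statistics are simply missing or stale on the older system:

```sql
-- Gather fresh statistics (cascade => TRUE includes the indexes),
-- then rerun the query WITHOUT the /*+rule */ hint.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'FDNMAPPINGTABLE', cascade => TRUE);
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'PARAMETERTABLE',  cascade => TRUE);
END;
/
```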
SQL> -
The query below is taking a very long time.
select /*+ PARALLEL(a,8) PARALLEL(b,8) */ a.personid,a.winning_id, b.questionid from
winning_id_cleanup a , rm_personquestion b
where a.personid = b.personid and (a.winning_id,b.questionid) not in
(select /*+ PARALLEL(c,8) */ c.personid,c.questionid from rm_personquestion c where c.personid=a.winning_id);
The rm_personquestion table has about 45 million rows and winning_id_cleanup has about 1 million rows.
Please tell me how to tune this query?

Please post your query in the PL/SQL forum; this one is not for SQL and PL/SQL.
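Whichever forum it belongs in, the `NOT IN` against a 45M-row table is the usual suspect here. A common rewrite is `NOT EXISTS`; a sketch only, not verified against the poster's data, and note the two forms differ when the subquery columns can be NULL, so check that first:

```sql
SELECT /*+ PARALLEL(a,8) PARALLEL(b,8) */
       a.personid, a.winning_id, b.questionid
FROM   winning_id_cleanup a, rm_personquestion b
WHERE  a.personid = b.personid
AND    NOT EXISTS (SELECT NULL
                   FROM   rm_personquestion c
                   WHERE  c.personid   = a.winning_id
                   AND    c.questionid = b.questionid);
```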
SQL Query taking longer time as seen from Trace file
Below Query Execution timings:
Any help will be appreciated, as this is affecting business needs.
SELECT MATERIAL_DETAIL_ID
FROM
GME_MATERIAL_DETAILS WHERE BATCH_ID = :B1 FOR UPDATE OF ACTUAL_QTY NOWAIT
call count cpu elapsed disk query current rows
Parse 1 0.00 0.70 0 0 0 0
Execute 2256 8100.00 24033.51 627 12298 31739 0
Fetch 2256 900.00 949.82 0 12187 0 30547
total 4513 9000.00 24984.03 627 24485 31739 30547
Thanks and Regards

Thanks Buddy.
Data Collected from Trace file:
SELECT STEP_CLOSE_DATE
FROM
GME_BATCH_STEPS WHERE BATCH_ID
IN (SELECT
DISTINCT BATCH_ID FROM
GME_MATERIAL_DETAILS START WITH BATCH_ID = :B2 CONNECT BY PRIOR PHANTOM_ID=BATCH_ID)
AND NVL(STEP_CLOSE_DATE, :B1) > :B1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.54 0 0 0 0
Execute 2256 800.00 1120.32 0 0 0 0
Fetch 2256 9100.00 13551.45 396 77718 0 0
total 4513 9900.00 14672.31 396 77718 0 0
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 66 (recursive depth: 1)
Rows Row Source Operation
0 TABLE ACCESS BY INDEX ROWID GME_BATCH_STEPS
13160 NESTED LOOPS
6518 VIEW
6518 SORT UNIQUE
53736 CONNECT BY WITH FILTERING
30547 NESTED LOOPS
30547 INDEX RANGE SCAN GME_MATERIAL_DETAILS_U1 (object id 146151)
30547 TABLE ACCESS BY USER ROWID GME_MATERIAL_DETAILS
23189 NESTED LOOPS
53736 BUFFER SORT
53736 CONNECT BY PUMP
23189 TABLE ACCESS BY INDEX ROWID GME_MATERIAL_DETAILS
23189 INDEX RANGE SCAN GME_MATERIAL_DETAILS_U1 (object id 146151)
4386 INDEX RANGE SCAN GME_BATCH_STEPS_U1 (object id 146144)
In the package there are lots of SQL statements using the CONNECT BY clause.
Does the use of the CONNECT BY clause degrade performance?
As you can see, the Rows column shows 0, but the query reads and elapsed time are high.
Regards -
Hi
I have a 3-table join query that takes a long time to execute. I checked the plan table; it shows FULL ACCESS on one of the tables.
I have 2 clarifications.
1. Will checking the status for NULL prevent the index from being used?
2. Are CASE statements recommended in queries?
Query
Select .........
FROM CLIENT LEFT OUTER JOIN INTERNET_LOGIN ON INTERNET_LOGIN.NUM_CLIENT_ID=CLIENT.NUM_CLIENT_ID,
POLI_MOT
WHERE
POLI_MOT.NUM_CLIENT_ID=CLIENT.NUM_CLIENT_ID
AND
(POLI_MOT.CHR_CANCEL_STATUS='N'
OR
POLI_MOT.CHR_CANCEL_STATUS IS NULL)
AND
CLIENT.NUM_CONTACT_TYPE_ID IN (1,3)
AND
(NVL(POLI_MOT.VCH_NEW_IC_NO,'A') =
CASE WHEN (NVL(null,NULL) IS NULL) THEN
NVL(POLI_MOT.VCH_NEW_IC_NO,'A')
ELSE
NVL(null,NULL)
END
OR
POLI_MOT.VCH_OLD_IC_NO =
CASE WHEN nvl(null,null) IS NULL THEN
POLI_MOT.VCH_OLD_IC_NO
ELSE
NVL(null,NULL)
END )
AND POLI_MOT.VCH_POLICY_NO =
CASE WHEN UPPER(nvl(NULL,null)) IS NULL THEN
POLI_MOT.VCH_POLICY_NO
ELSE
NVL(NULL,NULL)
END
AND POLI_MOT.VCH_VEHICLE_NO =
CASE WHEN UPPER(NVL('123',NULL)) IS NULL THEN
POLI_MOT.VCH_VEHICLE_NO
ELSE
NVL('123',NULL)
END

Hi,
There is nothing wrong with a full table access as such. When you do the explain plan, please check which table costs the most and try to work on that table.
To tune the performance of your query you can try either indexing or parallel access.
The syntax for the parallel hint is:
/*+ PARALLEL("TBL_NM",100) */ (the degree can be any number)...
For indexing, please use the index name of the table you want to index.
regards
Bharath -
Query taking long time to fetch the results
Hi!
When I run the query, it takes too long to fetch the result set.
Please find the query below for the same.
SELECT
A.BUSINESS_UNIT,
A.JOURNAL_ID,
TO_CHAR(A.JOURNAL_DATE,'YYYY-MM-DD'),
A.UNPOST_SEQ,
A.FISCAL_YEAR,
A.ACCOUNTING_PERIOD,
A.JRNL_HDR_STATUS,
C.INVOICE,
C.ACCT_ENTRY_TYPE,
C.LINE_DST_SEQ_NUM,
C.TAX_AUTHORITY_CD,
C.ACCOUNT,
C.MONETARY_AMOUNT,
D.BILL_SOURCE_ID,
D.IDENTIFIER,
D.VAT_AMT_BSE,
D.VAT_TRANS_AMT_BSE,
D.VAT_TXN_TYPE_CD,
D.TAX_CD_VAT,
D.TAX_CD_VAT_PCT,
D.VAT_APPLICABILITY,
E.BILL_TO_CUST_ID,
E.BILL_STATUS,
E.BILL_CYCLE_ID,
TO_CHAR(E.INVOICE_DT,'YYYY-MM-DD'),
TO_CHAR(E.ACCOUNTING_DT,'YYYY-MM-DD'),
TO_CHAR(E.DT_INVOICED,'YYYY-MM-DD'),
E.ENTRY_TYPE,
E.ENTRY_REASON,
E.AR_LVL,
E.AR_DST_OPT,
E.AR_ENTRY_CREATED,
E.GEN_AR_ITEM_FLG,
E.GL_LVL, E.GL_ENTRY_CREATED,
(Case when c.account in ('30120000','30180050','30190000','30290000','30490000',
'30690000','30900040','30990000','35100000','35120000','35150000','35160000',
'39100050','90100000')
and D.TAX_CD_VAT_PCT <> 0 then 'Ej_Momskonto_med_moms'
When c.account not in ('30120000','30180050','30190000','30290000',
'30490000','30690000','30900040','30990000','35100000','35120000','35150000',
'35160000','39100050','90100000')
and D.TAX_CD_VAT_PCT <> 25 then 'Momskonto_utan_moms' end)
FROM
sysadm.PS_JRNL_HEADER A,
sysadm.PS_JRNL_LN B,
sysadm.PS_BI_ACCT_ENTRY C,
sysadm.PS_BI_LINE D,
sysadm.PS_BI_HDR E
WHERE A.BUSINESS_UNIT = '&BU'
AND A.JOURNAL_DATE BETWEEN TO_DATE('&From_date','YYYY-MM-DD')
AND TO_DATE('&To_date','YYYY-MM-DD')
AND A.SOURCE = 'BI'
AND A.BUSINESS_UNIT = B.BUSINESS_UNIT
AND A.JOURNAL_ID = B.JOURNAL_ID
AND A.JOURNAL_DATE = B.JOURNAL_DATE
AND A.UNPOST_SEQ = B.UNPOST_SEQ
AND B.BUSINESS_UNIT = C.BUSINESS_UNIT
AND B.JOURNAL_ID = C.JOURNAL_ID
AND B.JOURNAL_DATE = C.JOURNAL_DATE
AND B.JOURNAL_LINE = C.JOURNAL_LINE
AND C.ACCT_ENTRY_TYPE = 'RR'
AND C.BUSINESS_UNIT = '&BU'
AND C.BUSINESS_UNIT = D.BUSINESS_UNIT
AND C.INVOICE = D.INVOICE
AND C.LINE_SEQ_NUM = D.LINE_SEQ_NUM
AND D.BUSINESS_UNIT = '&BU'
AND D.BUSINESS_UNIT = E.BUSINESS_UNIT
AND D.INVOICE = E.INVOICE
AND E.BUSINESS_UNIT = '&BU'
AND
((c.account in ('30120000','30180050','30190000','30290000','30490000',
'30690000','30900040','30990000','35100000','35120000','35150000','35160000',
'39100050','90100000')
and D.TAX_CD_VAT_PCT <> 0)
OR
(c.account not in ('30120000','30180050','30190000','30290000','30490000',
'30690000','30900040','30990000','35100000','35120000','35150000','35160000',
'39100050','z')
and D.TAX_CD_VAT_PCT <> 25))
GROUP BY
A.BUSINESS_UNIT,
A.JOURNAL_ID,
TO_CHAR(A.JOURNAL_DATE,'YYYY-MM-DD'),
A.UNPOST_SEQ, A.FISCAL_YEAR,
A.ACCOUNTING_PERIOD,
A.JRNL_HDR_STATUS,
C.INVOICE,
C.ACCT_ENTRY_TYPE,
C.LINE_DST_SEQ_NUM,
C.TAX_AUTHORITY_CD,
C.ACCOUNT,
D.BILL_SOURCE_ID,
D.IDENTIFIER,
D.VAT_TXN_TYPE_CD,
D.TAX_CD_VAT,
D.TAX_CD_VAT_PCT,
D.VAT_APPLICABILITY,
E.BILL_TO_CUST_ID,
E.BILL_STATUS,
E.BILL_CYCLE_ID,
TO_CHAR(E.INVOICE_DT,'YYYY-MM-DD'),
TO_CHAR(E.ACCOUNTING_DT,'YYYY-MM-DD'),
TO_CHAR(E.DT_INVOICED,'YYYY-MM-DD'),
E.ENTRY_TYPE, E.ENTRY_REASON,
E.AR_LVL, E.AR_DST_OPT,
E.AR_ENTRY_CREATED,
E.GEN_AR_ITEM_FLG,
E.GL_LVL,
E.GL_ENTRY_CREATED,
C.MONETARY_AMOUNT,
D.VAT_AMT_BSE,
D.VAT_TRANS_AMT_BSE
having
(Case when c.account in ('30120000','30180050','30190000','30290000',
'30490000','30690000','30900040','30990000','35100000','35120000','35150000',
'35160000','39100050','90100000')
and D.TAX_CD_VAT_PCT <> 0 then 'Ej_Momskonto_med_moms'
When c.account not in ('30120000','30180050','30190000','30290000','30490000',
'30690000','30900040','30990000','35100000','35120000','35150000','35160000',
'39100050','90100000')
and D.TAX_CD_VAT_PCT <> 25 then 'Momskonto_utan_moms' end) is not null
So could you provide a solution to fix this issue?
Thanks
senthil

When your query takes too long ...: http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0
Regards,
Rob. -
Wait step taking longer time than the defined wait time
Hi,
We have very simple BPM with 3 steps: Receive, Wait and Send.
Wait step has following parameters:
Type: Wait Specified Time Period
Duration: 1
Unit: Minutes
From the above parameters, ideally it should wait only 60 seconds and then start the next step, which is the send step. But the wait step is actually taking: 1m 09s, 1m 40s, 3m 40s, 1m 41s, 2m 10s, 2m 16s, 2m 46s, 1m 09s, 1m 40s, 3m 39s.
Does anyone know what could be the reason for the longer wait times?
Thx
N@v!n

Hi Naveen,
How did you measure these times? The additional time might be because message processing time is included, because the message is bigger to process, or because of network traffic.
Check each timestamp in the message monitoring tool in RWB and verify for accuracy.
VJ -
The query I wrote returns 1400 records, and the lines below are taking much time.
1.5 seconds are taken by
count = quer != null ? quer.Count() : 0;
and 2 seconds are taken by
candidateList = quer.Skip((pageIndex - 1) * pageSize).Take(pageSize).ToList();
Please suggest.

Hi Jon,
In SharePoint, I suggest you use a CAML query. If you use LINQ, the performance isn't guaranteed.
For the first query, you can use SPQuery.Count to achieve it; for the second query, you can build a proper CAML query to filter the data.
Here are some detailed articles for your reference:
SPList.GetItems method (SPQuery)
SPQuery.Query Property
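As a sketch of the server-side filtering the reply is pointing at (the field name and value are hypothetical, not from the original post):

```xml
<!-- Hypothetical CAML filter: assigned to SPQuery.Query, with
     SPQuery.RowLimit set to the page size, so filtering and paging
     happen on the server instead of in LINQ on the client. -->
<Where>
  <Eq>
    <FieldRef Name="Status" />
    <Value Type="Text">Active</Value>
  </Eq>
</Where>
```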
Zhengyu Guo
TechNet Community Support -
Suddenly ODI scheduled executions taking more time than usual.
Hi,
I have set ODI packages scheduled for execution.
For some days now they have been taking more time to execute.
They used to take approx. 1 hr 30 mins.
Now they are taking approx. 3 hrs - 3 hrs 15 mins.
And there is no major change in the data in terms of quantity.
My ODI version is:
Standalone Edition Version 11.1.1
Build ODI_11.1.1.3.0_GENERIC_100623.1635
ODI packages are mainly using Oracle as SOURCE and TARGET DB.
What should I check to find the reasons for the sudden increase in execution time?
Any pointers regarding this would be appreciated.
Thanks,
Mahesh

Mahesh,
Use some repository queries to retrieve the session task timings and compare your slow execution to a previous acceptable execution, then look for the biggest changes. This will highlight where you are slowing down; then it's off to tune the item accordingly.
See here for some example reports; you might need to tweak them for your current repository version, but I don't think the table structures have changed that much:
http://rnm1978.wordpress.com/2010/11/03/analysing-odi-batch-performance/ -
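A sketch of the kind of repository query meant here, against the 11g work repository (table and column names are from memory of that schema; verify them against your own repository before relying on this):

```sql
-- Per-task durations for one session, longest first. SESS_NO is the
-- session number shown in ODI Operator; :session_number is a bind you supply.
SELECT t.sess_no,
       t.task_beg,
       t.task_end,
       (t.task_end - t.task_beg) * 24 * 60 AS minutes
FROM   snp_sess_task_log t
WHERE  t.sess_no = :session_number
ORDER  BY minutes DESC;
```

Running this for a fast execution and a slow one, then diffing the top tasks, pinpoints which step regressed.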
Cube content deletion is taking more time than usual.
Hi Experts,
We have a process chain that ideally should run every two hours. This chain has a "delete data cube content" step before the new data is loaded into the cube. The chain runs fine in one instance, while the other instance takes more time, so the behavior is quite intermittent.
In the process chain we are also deleting contents from the dimension tables (in the delete content step). We need your inputs to improve the performance of this step.
Thanks & Regards
Mayank Tyagi.

Hi Mayank,
You can delete the indexes of the cube before deleting its contents. The concept is the same as for data loading: loads happen faster when indexes are deleted.
If you have aggregates over this cube, they will also be adjusted.
Kind Regards,
Ashutosh Singh