Performance issue with SQL query. Please explain.
I have a SQL query.
A.column1 and B.column1 are indexed.
Query 1: select A.column1, A.column3, B.column2 from tableA, tableB where A.column1 = B.column1;
Query 2: select A.column1, A.column3, B.column2, B.column4 from tableA, tableB where A.column1 = B.column1;
1. Do both queries take the same time? If not, why?
They both go after the same rows, just with a different number of columns. And since the complete data block is loaded into the database buffer cache, there should be no extra time taken up to that point.
Please tell me if I am wrong.
For me, the extra "bytes sent via SQL*Net to client" (as well as "bytes received via SQL*Net from client") will make the network chatty, which will degrade performance.
SQL> COLUMN plan_plus_exp FORMAT A100
SQL> SET LINESIZE 1000
SQL> SET AUTOTRACE TRACEONLY
SQL> SELECT *
2 FROM emp
3 /
14 rows selected.
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=14 Bytes=616)
1 0 TABLE ACCESS (FULL) OF 'EMP' (TABLE) (Cost=3 Card=14 Bytes=616)
Statistics
1 recursive calls
0 db block gets
8 consistent gets
0 physical reads
0 redo size
1631 bytes sent via SQL*Net to client
423 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
14 rows processed
SQL> SELECT ename
2 FROM emp
3 /
14 rows selected.
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=14 Bytes=154)
1 0 TABLE ACCESS (FULL) OF 'EMP' (TABLE) (Cost=3 Card=14 Bytes=154)
Statistics
1 recursive calls
0 db block gets
8 consistent gets
0 physical reads
0 redo size
456 bytes sent via SQL*Net to client
423 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
14 rows processed
Khurram
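The effect shown by the autotrace statistics above can be reproduced outside Oracle. Below is a small standalone sketch using Python's sqlite3 module (the table contents are made up for illustration): both statements visit the same rows, but the narrower projection produces a much smaller result payload, which is what "bytes sent via SQL*Net to client" measures.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER, ename TEXT, job TEXT, sal REAL, remarks TEXT)")
conn.executemany(
    "INSERT INTO emp VALUES (?, ?, ?, ?, ?)",
    [(i, "name%d" % i, "CLERK", 1000.0 + i, "x" * 50) for i in range(14)],
)

def result_payload(sql):
    # Approximate "bytes sent to the client" by serializing the fetched rows.
    rows = conn.execute(sql).fetchall()
    return len(json.dumps(rows).encode("utf-8")), len(rows)

wide_bytes, wide_rows = result_payload("SELECT * FROM emp")
narrow_bytes, narrow_rows = result_payload("SELECT ename FROM emp")
# Same 14 rows either way; only the shipped bytes differ.
print(wide_rows, narrow_rows, wide_bytes, narrow_bytes)
```

The same rows (and blocks) are read in both cases; the cost difference is almost entirely in column projection and network transfer, matching the 1631 vs 456 bytes seen above.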
Similar Messages
-
Performance Issue with sql query
Hi,
My DB is 10.2.0.5 with RAC on ASM; Clusterware version is 10.2.0.5.
With bsoa table as
SQL> desc bsoa;
Name Null? Type
ID NOT NULL NUMBER
LOGIN_TIME DATE
LOGOUT_TIME DATE
SUCCESSFUL_IND VARCHAR2(1)
WORK_STATION_NAME VARCHAR2(80)
OS_USER VARCHAR2(30)
USER_NAME NOT NULL VARCHAR2(30)
FORM_ID NUMBER
AUDIT_TRAIL_NO NUMBER
CREATED_BY VARCHAR2(30)
CREATION_DATE DATE
LAST_UPDATED_BY VARCHAR2(30)
LAST_UPDATE_DATE DATE
SITE_NO NUMBER
SESSION_ID NUMBER(8)
The query
UPDATE BSOA SET LOGOUT_TIME =SYSDATE WHERE SYS_CONTEXT('USERENV', 'SESSIONID') = SESSION_ID
is taking a lot of time to execute, and in the AWR reports it is also at the top of:
1. SQL ordered by Elapsed Time
2. SQL ordered by Reads
3. SQL ordered by Gets
So I am trying to find a way to solve the performance issue, as the application is slow, especially during login and logout times.
I understand that the function in the WHERE condition causes the FTS, but I cannot think what other parts to look at.
Also:
SQL> SELECT COUNT(1) FROM BSOA;
COUNT(1)
7800373
The explain plan for "UPDATE BSOA SET LOGOUT_TIME =SYSDATE WHERE SYS_CONTEXT('USERENV', 'SESSIONID') = SESSION_ID" is
{code}
PLAN_TABLE_OUTPUT
Plan hash value: 1184960901
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | UPDATE STATEMENT | | 1 | 26 | 18748 (3)| 00:03:45 |
| 1 | UPDATE | BSOA | | | | |
|* 2 | TABLE ACCESS FULL| BSOA | 1 | 26 | 18748 (3)| 00:03:45 |
Predicate Information (identified by operation id):
2 - filter("SESSION_ID"=TO_NUMBER(SYS_CONTEXT('USERENV','SESSIONID')))
{code}
Hi,
There are also triggers before update and AUDITS on this table.
CREATE OR REPLACE TRIGGER B2.TRIGGER1
BEFORE UPDATE
ON B2.BSOA REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
BEGIN
  :NEW.LAST_UPDATED_BY := USER;
  :NEW.LAST_UPDATE_DATE := SYSDATE;
END;
CREATE OR REPLACE TRIGGER B2.TRIGGER2
BEFORE INSERT
ON B2.BSOA REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
BEGIN
  :NEW.CREATED_BY := USER;
  :NEW.CREATION_DATE := SYSDATE;
  :NEW.LAST_UPDATED_BY := USER;
  :NEW.LAST_UPDATE_DATE := SYSDATE;
END;
And also there is an audit on this table
AUDIT UPDATE ON B2.BSOA BY ACCESS WHENEVER SUCCESSFUL;
AUDIT UPDATE ON B2.BSOA BY ACCESS WHENEVER NOT SUCCESSFUL;
And the SESSION_ID column in BSOA has a height-balanced histogram.
When I create an index I get the following error. As I am on 10g I can't use DDL_LOCK_TIMEOUT, so I may have to wait for the next downtime.
SQL> CREATE INDEX B2.BSOA_SESSID_I ON B2.BSOA(SESSION_ID) TABLESPACE B2 COMPUTE STATISTICS;
CREATE INDEX B2.BSOA_SESSID_I ON B2.BSOA(SESSION_ID) TABLESPACE B2 COMPUTE STATISTICS
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified
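For what it's worth, the predicate itself is index-friendly: the plan shows TO_NUMBER() applied to SYS_CONTEXT, not to SESSION_ID, so a plain index on SESSION_ID can be used once it exists. A toy sqlite3 analogue (table and data are made up) shows the plan flipping from a full scan to an index search after the index is created:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bsoa (id INTEGER, session_id INTEGER, logout_time TEXT)")
conn.executemany("INSERT INTO bsoa VALUES (?, ?, NULL)",
                 [(i, i % 1000) for i in range(5000)])

def plan(sql):
    # Concatenate the 'detail' column of EXPLAIN QUERY PLAN output.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

stmt = "UPDATE bsoa SET logout_time = datetime('now') WHERE session_id = 42"
before = plan(stmt)   # no index yet: full table scan
conn.execute("CREATE INDEX bsoa_sessid_i ON bsoa(session_id)")
after = plan(stmt)    # planner now searches the index
print(before)
print(after)
```

This is only an analogue of the Oracle behavior, not Oracle itself, but the mechanism (a plain b-tree on the filtered column replacing the full scan for each logout) is the same one the CREATE INDEX at line 1 above is trying to get.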
Thanks -
Performance issue with SQL with different input data
Hi Experts,
I have a SQL query that uses three tables: imtermination, imconnection, and imconnectiondesign. These tables have around 2.3 million, 1.1 million, and 1.1 million rows respectively. There are appropriate indexes on these tables.
Now there is a query:
SELECT
/*+ NO_MERGE(a) ORDERED USE_NL(c) */ c.objectid,
c.typeid,
c.transactionstatus,
c.usersessionid,
cd.objectid designid,
c.reservationid,
c.networkid,
c.networktype,
cd.inprojectid,
cd.inprojecttype,
cd.outprojectid,
cd.outprojecttype,
cd.asiteid,
cd.asitetype,
cd.anetworkelementid,
cd.anetworkelementtype,
cd.aportid,
cd.aporttype,
cd.achannelpath,
c.asignaltype,
cd.zsiteid,
cd.zsitetype,
cd.znetworkelementid,
cd.znetworkelementtype,
cd.zportid,
cd.zporttype,
cd.zchannelpath,
c.zsignaltype,
c.signaltype,
c.visualkey,
c.resourcestate,
cd.assignmentstate,
c.effectivefrom,
cd.effectiveto,
c.channelized,
c.circuitusage,
c.hardwired,
c.consumedsignaltype,
c.flowqualitycode,
c.capacityused,
c.percentused,
c.maxcapacity,
c.warningthreshold,
c.typecode,
cd.lastupdateddate,
c.lastreconcileddate,
c.bandwidth,
c.unit
FROM
(SELECT terminatedid
FROM imtermination t1
WHERE t1.networkelementid = 9200150)
a,
imconnectiondesign cd,
imconnection c
WHERE cd.objectid = a.terminatedid
AND c.objectid = cd.connectionid
AND(SUBSTR('10000000000000000000011111111111110000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000', c.consumedsignaltype + 1, 1) = '1' OR 
SUBSTR('10000000000000000000011111111111110000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000', c.signaltype + 1, 1) = '1')
AND c.typeid = '$131'
AND cd.assignmentstate IN(2, 3)
The above query takes around 70 secs to execute with input t1.networkelementid = 9200150. Moreover, I have observed in Enterprise Manager that this run has a very high I/O wait time.
The same query takes around 5 secs to execute with input t1.networkelementid = 42407448. Both objects, 9200150 and 42407448, return almost the same number of rows; without any other condition, each matches about 6500 rows across the three tables.
The execution plan is the same for both inputs.
The rows corresponding to t1.networkelementid = 9200150 were created through the application over a period of time, while the rows corresponding to t1.networkelementid = 42407448 I created manually, so they are contiguous in the three tables.
Is the above behavior because the rows corresponding to t1.networkelementid = 9200150 are not contiguous, having been created over a period of time?
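That is a plausible explanation; Oracle captures it as the index clustering factor. For the same row count, values scattered one-per-block force far more block reads than values packed together. A back-of-the-envelope sketch (rows-per-block and row counts are made-up numbers, not taken from this system):

```python
# Toy model: distinct data blocks that must be read to fetch 6500 rows
# when the rows are packed together vs. spread across the table.
ROWS_PER_BLOCK = 100  # assumed rows per data block

def blocks_touched(row_ids):
    """Count distinct blocks containing the given row positions."""
    return len({rid // ROWS_PER_BLOCK for rid in row_ids})

contiguous = list(range(6500))               # manually loaded: packed rows
scattered = list(range(0, 6500 * 100, 100))  # grown over time: one row per block

print(blocks_touched(contiguous), blocks_touched(scattered))
```

With these assumptions the contiguous case touches 65 blocks and the scattered case 6500, a 100x difference in physical I/O for identical row counts and identical plans, which matches the symptom of high "db file scattered read" waits for one input only.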
Execution Statistics
                   Total     Per Execution   Per Row
Executions         1         1               0.02
CPU Time (sec)     0.91      0.91            0.02
Buffer Gets        11943     11943.00        238.86
Disk Reads         4804      4804.00         96.08
Direct Writes      0         0.00            0.00
Rows               50        50.00           1
Fetches            5         5.00            0.10
Wait activity: User I/O Waits (98.7%), CPU (1.3%)
Enterprise Manager shows high "db file scattered read" waits for t1.networkelementid = 9200150, the input for which it takes 70 secs.
I request the experts to provide some pointers to fix this issue, as I am not an expert in DB tuning.
Thanks in advance for your help.
Regards
Hi David,
Please find below the output:
SQL> SELECT table_name, num_rows, last_analyzed
2 FROM all_tables
3 WHERE table_name IN ('IMTERMINATION', 'IMCONNECTIONDESIGN', 'IMCONNECTION')
4 /
TABLE_NAME NUM_ROWS LAST_ANAL
IMTERMINATION 2338746 19-SEP-11
IMCONNECTIONDESIGN 1129298 19-SEP-11
IMCONNECTION 1169373 19-SEP-11
IMTERMINATION 19852 13-SEP-11
IMCONNECTIONDESIGN 6820 13-SEP-11
IMCONNECTION 9926 13-SEP-11
6 rows selected.
SQL> SELECT table_name, index_name,num_rows, last_analyzed
2 FROM all_indexes
3 WHERE table_name IN ('IMTERMINATION', 'IMCONNECTIONDESIGN', 'IMCONNECTION')
4 order by table_name,index_name
5 /
TABLE_NAME INDEX_NAME NUM_ROWS LAST_ANAL
IMCONNECTION IMCONNECTION_A_NE 9925 13-SEP-11
IMCONNECTION IMCONNECTION_A_NE 1169154 19-SEP-11
IMCONNECTION IMCONNECTION_A_PORT 84743 19-SEP-11
IMCONNECTION IMCONNECTION_A_PORT 3371 13-SEP-11
IMCONNECTION IMCONNECTION_A_SITE 1169373 19-SEP-11
IMCONNECTION IMCONNECTION_A_SITE 9926 13-SEP-11
IMCONNECTION IMCONNECTION_NET 0 19-SEP-11
IMCONNECTION IMCONNECTION_NET 12 13-SEP-11
IMCONNECTION IMCONNECTION_PK 9926 13-SEP-11
IMCONNECTION IMCONNECTION_PK 1169373 19-SEP-11
IMCONNECTION IMCONNECTION_RES 0 13-SEP-11
IMCONNECTION IMCONNECTION_RES 0 19-SEP-11
IMCONNECTION IMCONNECTION_ST 2 13-SEP-11
IMCONNECTION IMCONNECTION_ST 60 19-SEP-11
IMCONNECTION IMCONNECTION_TYPEID 4 13-SEP-11
IMCONNECTION IMCONNECTION_TYPEID 64 19-SEP-11
IMCONNECTION IMCONNECTION_UR 26880 19-SEP-11
IMCONNECTION IMCONNECTION_UR 3 13-SEP-11
IMCONNECTION IMCONNECTION_VK 9810 13-SEP-11
IMCONNECTION IMCONNECTION_VK 1191866 19-SEP-11
IMCONNECTION IMCONNECTION_Z_NE 1169173 19-SEP-11
IMCONNECTION IMCONNECTION_Z_NE 9925 13-SEP-11
IMCONNECTION IMCONNECTION_Z_PORT 84092 19-SEP-11
IMCONNECTION IMCONNECTION_Z_PORT 3370 13-SEP-11
IMCONNECTION IMCONNECTION_Z_SITE 9926 13-SEP-11
IMCONNECTION IMCONNECTION_Z_SITE 1169373 19-SEP-11
IMCONNECTIONDESIGN IMCONNECTIONDESIGN_CON 1129298 19-SEP-11
IMCONNECTIONDESIGN IMCONNECTIONDESIGN_CON 6820 13-SEP-11
IMCONNECTIONDESIGN IMCONNECTIONDESIGN_PK 1129298 19-SEP-11
IMCONNECTIONDESIGN IMCONNECTIONDESIGN_PK 6820 13-SEP-11
IMCONNECTIONDESIGN IMCONNECTIONDESIGN_ST 6820 13-SEP-11
IMCONNECTIONDESIGN IMCONNECTIONDESIGN_ST 1129298 19-SEP-11
IMTERMINATION IMTERMINATION_ID 19852 13-SEP-11
IMTERMINATION IMTERMINATION_ID 2279477 19-SEP-11
IMTERMINATION IMTERMINATION_NE 19850 13-SEP-11
IMTERMINATION IMTERMINATION_NE 2327175 19-SEP-11
IMTERMINATION IMTERMINATION_PORT 168835 19-SEP-11
IMTERMINATION IMTERMINATION_PORT 6741 13-SEP-11
IMTERMINATION IMTERMINATION_SITE 19852 13-SEP-11
IMTERMINATION IMTERMINATION_SITE 2391415 19-SEP-11
40 rows selected.
SQL> select table_name,index_name,column_name,column_position
2 FROM all_ind_columns
3 WHERE table_name IN ('IMTERMINATION', 'IMCONNECTIONDESIGN', 'IMCONNECTION')
4 order by table_name,index_name, column_position
5 /
TABLE_NAME           INDEX_NAME               COLUMN_NAME        COLUMN_POSITION
IMCONNECTION         IMCONNECTION_A_NE        ANETWORKELEMENTID  1
IMCONNECTION         IMCONNECTION_A_NE        ANETWORKELEMENTID  1
IMCONNECTION         IMCONNECTION_A_PORT      APORTID            1
IMCONNECTION         IMCONNECTION_A_PORT      APORTID            1
IMCONNECTION         IMCONNECTION_A_SITE      ASITEID            1
IMCONNECTION         IMCONNECTION_A_SITE      ASITEID            1
IMCONNECTION         IMCONNECTION_NET         NETWORKID          1
IMCONNECTION         IMCONNECTION_NET         NETWORKID          1
IMCONNECTION         IMCONNECTION_PK          OBJECTID           1
IMCONNECTION         IMCONNECTION_PK          OBJECTID           1
IMCONNECTION         IMCONNECTION_RES         RESERVATIONID      1
IMCONNECTION         IMCONNECTION_RES         RESERVATIONID      1
IMCONNECTION         IMCONNECTION_ST          RESOURCESTATE      1
IMCONNECTION         IMCONNECTION_ST          RESOURCESTATE      1
IMCONNECTION         IMCONNECTION_TYPEID      TYPEID             1
IMCONNECTION         IMCONNECTION_TYPEID      TYPEID             1
IMCONNECTION         IMCONNECTION_UR          USERSESSIONID      1
IMCONNECTION         IMCONNECTION_UR          USERSESSIONID      1
IMCONNECTION         IMCONNECTION_VK          VISUALKEY          1
IMCONNECTION         IMCONNECTION_VK          VISUALKEY          1
IMCONNECTION         IMCONNECTION_Z_NE        ZNETWORKELEMENTID  1
IMCONNECTION         IMCONNECTION_Z_NE        ZNETWORKELEMENTID  1
IMCONNECTION         IMCONNECTION_Z_PORT      ZPORTID            1
IMCONNECTION         IMCONNECTION_Z_PORT      ZPORTID            1
IMCONNECTION         IMCONNECTION_Z_SITE      ZSITEID            1
IMCONNECTION         IMCONNECTION_Z_SITE      ZSITEID            1
IMCONNECTIONDESIGN   IMCONNECTIONDESIGN_CON   CONNECTIONID       1
IMCONNECTIONDESIGN   IMCONNECTIONDESIGN_CON   CONNECTIONID       1
IMCONNECTIONDESIGN   IMCONNECTIONDESIGN_PK    OBJECTID           1
IMCONNECTIONDESIGN   IMCONNECTIONDESIGN_PK    OBJECTID           1
IMCONNECTIONDESIGN   IMCONNECTIONDESIGN_ST    ASSIGNMENTSTATE    1
IMCONNECTIONDESIGN   IMCONNECTIONDESIGN_ST    ASSIGNMENTSTATE    1
IMTERMINATION        IMTERMINATION_ID         TERMINATEDID       1
IMTERMINATION        IMTERMINATION_ID         TERMINATEDID       1
IMTERMINATION        IMTERMINATION_NE         NETWORKELEMENTID   1
IMTERMINATION        IMTERMINATION_NE         NETWORKELEMENTID   1
IMTERMINATION        IMTERMINATION_PORT       PORTID             1
IMTERMINATION        IMTERMINATION_PORT       PORTID             1
IMTERMINATION        IMTERMINATION_SITE       SITEID             1
IMTERMINATION        IMTERMINATION_SITE       SITEID             1
40 rows selected.
Plan without sql hints:
SQL> select * from table(dbms_xplan.display)
2 /
PLAN_TABLE_OUTPUT
Plan hash value: 2493901029
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 40 | 9960 | 6316 (1)| 00:01:16 |
| 1 | NESTED LOOPS | | 40 | 9960 | 6316 (1)| 00:01:16 |
| 2 | NESTED LOOPS | | 1359 | 160K| 3592 (1)| 00:00:44 |
| 3 | TABLE ACCESS BY INDEX ROWID| IMTERMINATION | 1359 | 16308 | 915 (1)| 00:00:11 |
|* 4 | INDEX RANGE SCAN | IMTERMINATION_NE | 1359 | | 6 (0)| 00:00:01 |
|* 5 | TABLE ACCESS BY INDEX ROWID| IMCONNECTIONDESIGN | 1 | 109 | 2 (0)| 00:00:01 |
|* 6 | INDEX UNIQUE SCAN | IMCONNECTIONDESIGN_PK | 1 | | 1 (0)| 00:00:01 |
|* 7 | TABLE ACCESS BY INDEX ROWID | IMCONNECTION | 1 | 128 | 2 (0)| 00:00:01 |
|* 8 | INDEX UNIQUE SCAN | IMCONNECTION_PK | 1 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - access("T1"."NETWORKELEMENTID"=9200150)
5 - filter("CD"."ASSIGNMENTSTATE"=2 OR "CD"."ASSIGNMENTSTATE"=3)
6 - access("CD"."OBJECTID"="TERMINATEDID")
7 - filter((("C"."CONSUMEDSIGNALTYPE"=21 OR "C"."CONSUMEDSIGNALTYPE"=22 OR
"C"."CONSUMEDSIGNALTYPE"=23 OR "C"."CONSUMEDSIGNALTYPE"=24 OR "C"."CONSUMEDSIGNALTYPE"=25 OR
"C"."CONSUMEDSIGNALTYPE"=26 OR "C"."CONSUMEDSIGNALTYPE"=27 OR "C"."CONSUMEDSIGNALTYPE"=28 OR
"C"."CONSUMEDSIGNALTYPE"=29 OR "C"."CONSUMEDSIGNALTYPE"=30 OR "C"."CONSUMEDSIGNALTYPE"=31 OR
"C"."CONSUMEDSIGNALTYPE"=32 OR "C"."CONSUMEDSIGNALTYPE"=33) OR ("C"."SIGNALTYPE"=21 OR
"C"."SIGNALTYPE"=22 OR "C"."SIGNALTYPE"=23 OR "C"."SIGNALTYPE"=24 OR "C"."SIGNALTYPE"=25 OR
"C"."SIGNALTYPE"=26 OR "C"."SIGNALTYPE"=27 OR "C"."SIGNALTYPE"=28 OR "C"."SIGNALTYPE"=29 OR
"C"."SIGNALTYPE"=30 OR "C"."SIGNALTYPE"=31 OR "C"."SIGNALTYPE"=32 OR "C"."SIGNALTYPE"=33)) AND
"C"."TYPEID"='$131')
8 - access("C"."OBJECTID"="CD"."CONNECTIONID")
32 rows selected. -
Performance Problem with SQL Query
Hi
I have a SQL query (say, SSS) which runs quite fast when I run it from TOAD.
But when the same query is used as
INSERT INTO <table> <SSS>;
it takes hours to complete.
The original query (OSQ) was changed to this SQL query (SSS) for performance improvement.
The cost also improved quite a lot from OSQ to SSS, but when SSS runs from inside a procedure it takes the same long hours OSQ used to take.
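One common explanation (an assumption here, not something confirmed in this thread): GUI tools such as TOAD fetch only the first page of rows, so an expensive query can look fast interactively, while INSERT INTO ... SELECT must materialize every row. A sqlite3 sketch of the difference between fetching a first page and draining the cursor:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(300)])

# The cross join yields 300 * 300 = 90,000 rows, produced lazily by the engine.
cur = conn.execute("SELECT a.x FROM t a, t b")
first_page = cur.fetchmany(50)   # what a GUI result grid shows: cheap
remaining = cur.fetchall()       # what an INSERT ... SELECT must consume: everything
print(len(first_page), len(remaining))
```

So before comparing TOAD timings with procedure timings, it is worth timing the full fetch of SSS (e.g. wrapping it in a COUNT or scrolling to the last row) rather than just the first screen of results.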
Please help ASAP as it needs to get fixed.
Thanks
Arnab
SELECT SRM.ID PROJECT_ID,
SRM.UNIQUE_NAME PROJECT_CODE,
ODFP.GS_FINANCE_PROJ_CODE,
ODFP.GS_PRODUCT_CODE,
ODFP.GS_CUSTOMER,
ODFP.GS_LEGAL_ENT_REF,
ODF_PRJ_TYPE.NAME GS_PROJ_TYPE,
ODF_SRC_SYS.NAME GS_SOURCE_SYSTEM,
RESM.FIRST_NAME||' '||RESM.LAST_NAME GS_PROJECT_MANAGER,
ODFP.GS_LEG_PROJ_NUM,
ODFP.GS_SUPP_DOC ,
ODFP.GS_REQUEST_ID ,
ODFP.GS_CONTRACT ,
ODFP.GS_SIEBEL_REF,
DECODE(budg.ODF_PARENT_ID,NULL,PRJP.bdgt_cst_total, budg.lab_budg)LAB_BDGT,
DECODE(budg.ODF_PARENT_ID,NULL,0,budg.nli_budg)NLI_BDGT,
(DECODE(budg.ODF_PARENT_ID,NULL,PRJP.bdgt_cst_total, budg.lab_budg))+(DECODE(budg.ODF_PARENT_ID,NULL,0,budg.nli_budg)) sum_lab_nonlab_budg,
frct.LABOUR,
frct.NON_LABOUR,
frct.LABOUR+frct.NON_LABOUR FORCAST_TOT_SUM,
gfr.code GFR_CODE,
odfp.gs_rev_recognition,
rev_rec.NAME gs_rev_recognition,
DECODE(odfp.gs_prevent_reqtxn,1,'YES','0','NO','NO'),
GS_PROGTYPE.NAME,
COMM_APPR.UNIQUE_NAME COMM_APPROVER_EIN,
COMM_APPR.FIRST_NAME||' '||COMM_APPR.LAST_NAME COMM_APPROVER_NAME,
PR_APPR.UNIQUE_NAME PR_APPROVER_EIN,
PR_APPR.FIRST_NAME||' '||PR_APPR.LAST_NAME PR_APPROVER_NAME,
ODFP.GS_BILL_BUD,
frct.FRCST_REVENUE TOTAL_FRC_REVENUE,
actuals.LABOUR_ACTUALS,
actuals.NL_ACTUALS,
-- get_wip_totalcost(prjp.prid,'L') LABOUR_ACTUALS,
-- get_wip_totalcost(prjp.prid,'M') NL_ACTUALS,
ODFP.GS_ASS_PRODUCT_CODE,
PROJ_TEMPLATE.ID GS_PROJ_TEMPLATE_ID,
PROJ_TEMPLATE.UNIQUE_NAME GS_PROJ_TEMPLATE_CODE,
PROJ_TEMPLATE.name GS_PROJ_TEMPLATE_NAME,
DECODE( NVL(ODFP.GS_FIN_TEMPLATE,0) ,0,'NO',1,'YES') GS_FIN_TEMPLATE,
ODF_PRJ_TYPE.LOOKUP_CODE GS_PROJ_TYPE
FROM PROJECTS SRM,
J_PROJECTS PRJP,
A_PROJECT ODFP,
RESOURCES RESM,
s_gfr gfr,
RESOURCES PROJ_ACCT,
RESOURCES COMM_APPR,
RESOURCES PR_APPR,
(SELECT CAP.NAME
,LOOKUP.LOOKUP_CODE
FROM S_NLS CAP,
N_LOOKUPS LOOKUP
WHERE LOOKUP_TYPE ='GS_PROJECT_TYPE'
AND LOOKUP.ID =CAP.PK_ID
AND CAP.LANGUAGE_CODE='en'
AND CAP.TABLE_NAME = 'CMN_LOOKUPS') ODF_PRJ_TYPE,
(SELECT CAP.NAME
,LOOKUP.LOOKUP_CODE
FROM S_NLS CAP,
N_LOOKUPS LOOKUP
WHERE LOOKUP_TYPE ='GS_SOURCE_SYSTEM'
AND LOOKUP.ID =CAP.PK_ID
AND CAP.LANGUAGE_CODE='en'
AND CAP.TABLE_NAME = 'CMN_LOOKUPS') ODF_SRC_SYS,
(SELECT CAP.NAME
,LOOKUP.LOOKUP_CODE
FROM S_NLS CAP,
N_LOOKUPS LOOKUP
WHERE LOOKUP_TYPE ='GS_PROG_TYPE'
AND LOOKUP.ID =CAP.PK_ID
AND CAP.LANGUAGE_CODE='en'
AND CAP.TABLE_NAME = 'CMN_LOOKUPS')GS_PROGTYPE,
(SELECT odf_parent_id
,SUM(gs_lab_budg) lab_budg
,SUM(gs_nl_budg) nli_budg
FROM T_CHANGES
WHERE gs_status='APPR'
GROUP BY odf_parent_id) budg,
(SELECT CAP.NAME
,LOOKUP.LOOKUP_CODE
FROM S_NLS CAP,
N_LOOKUPS LOOKUP
WHERE LOOKUP_TYPE ='GS_REV_RECOGNITION'
AND LOOKUP.ID = CAP.PK_ID
AND CAP.LANGUAGE_CODE='en'
AND CAP.TABLE_NAME = 'CMN_LOOKUPS') rev_rec,
(SELECT FRCP1.project_id,
NVL(SUM(CASE WHEN srmr.resource_type=0 THEN for_val.cost ELSE NULL END),0) AS LABOUR,
NVL(SUM(CASE WHEN srmr.resource_type <> 0 THEN for_val.cost ELSE NULL END),0) AS NON_LABOUR,
NVL(SUM(for_val.revenue),0) AS FRCST_REVENUE
FROM T_PROPERTIES FRCP1,
T_DETAILS FOR_DET,
RESOURCES SRMR,
T_VALUES FOR_VAL
WHERE (FRCP1.project_id, revision) IN (SELECT project_id, MAX(revision)
FROM T_PROPERTIES FRCP
WHERE FRCP.status=2
AND FRCP.PERIOD_TYPE='SEMI_MONTHLY'
GROUP BY project_id)
AND FRCP1.PERIOD_TYPE='SEMI_MONTHLY'
AND FOR_DET.forecast_id=frcp1.id
AND SRMR.id(+)=FOR_DET.detail_id
AND FOR_VAL.currency_type='BILLING'
AND FOR_VAL.forecast_details_id=FOR_DET.ID
GROUP BY FRCP1.project_id) frct,
(SELECT srmp.id
,NVL(SUM(CASE WHEN wip.transtype='L' THEN WIPVAL.TOTALCOST ELSE NULL END),0) AS LABOUR_ACTUALS
,NVL(SUM(CASE WHEN wip.transtype='M' THEN WIPVAL.TOTALCOST ELSE NULL END),0) AS NL_ACTUALS
FROM A_WIP WIP
,om_periods biz
,P_VALUES WIPVAL
,PROJECTS SRMP
,A_PROJECT ODF
WHERE biz.period_type ='SEMI_MONTHLY'
AND TRUNC(wip.TRANSDATE) BETWEEN biz.start_date AND biz.end_date
AND WIP.TRANSNO=WIPVAL.TRANSNO
AND WIPVAL.CURRENCY_TYPE='BILLING'
AND SRMP.UNIQUE_NAME=WIP.PROJECT_CODE
AND SRMP.ID=ODF.ID
AND UPPER(ODF.partition_code)='GLOBAL'
AND UPPER(WIP.external_id) <> 'SPREADSHEET'
GROUP BY srmp.id) actuals,
RESOURCES BUD_HOLDER,
PROJECTS PROJ_TEMPLATE
WHERE SRM.ID = PRJP.PRID
AND ODFP.ID=PRJP.PRID
AND ODFP.PARTITION_CODE='GLOBAL'
AND RESM.id(+)=prjp.manager_id
AND ODFP.GS_PROJECT_TYPE = ODF_PRJ_TYPE.LOOKUP_CODE(+)
AND ODFP.GS_SOURCE_SYSTEM = ODF_SRC_SYS.LOOKUP_CODE(+)
AND ODFP.GS_prog_type = GS_PROGTYPE.LOOKUP_CODE(+)
AND budg.odf_parent_id(+)=SRM.ID
AND frct.project_id(+) = prjp.prid
AND ODFP.gs_risk_gfr=gfr.code(+)
AND ODFP.GS_REV_RECOGNITION=rev_rec.LOOKUP_CODE(+)
AND ODFP.GS_PROJ_ACCT = PROJ_ACCT.ID(+)
AND ODFP.GS_COMM_APPR = COMM_APPR.ID(+)
AND ODFP.GS_PR_APPR = PR_APPR.ID(+)
AND actuals.id(+) = prjp.prid -- Arnab
AND ODFP.GS_BUDGET_HOLDER=BUD_HOLDER.ID(+)
AND ODFP.GS_TEMPLATE_USED=PROJ_TEMPLATE.ID(+)
The above is OSQ, with cost 45000; it takes 1-1.5 hours through Toad.
SELECT SRM.ID PROJECT_ID,
SRM.UNIQUE_NAME PROJECT_CODE,
ODFP.GS_FINANCE_PROJ_CODE,
ODFP.GS_PRODUCT_CODE,
ODFP.GS_CUSTOMER,
ODFP.GS_LEGAL_ENT_REF,
ODF_PRJ_TYPE.NAME GS_PROJ_TYPE,
ODF_SRC_SYS.NAME GS_SOURCE_SYSTEM,
RESM.FIRST_NAME||' '||RESM.LAST_NAME GS_PROJECT_MANAGER,
ODFP.GS_LEG_PROJ_NUM,
ODFP.GS_SUPP_DOC ,
ODFP.GS_REQUEST_ID ,
ODFP.GS_CONTRACT ,
ODFP.GS_SIEBEL_REF,
DECODE(budg.ODF_PARENT_ID,NULL,PRJP.bdgt_cst_total, budg.lab_budg)LAB_BDGT,
DECODE(budg.ODF_PARENT_ID,NULL,0,budg.nli_budg)NLI_BDGT,
(DECODE(budg.ODF_PARENT_ID,NULL,PRJP.bdgt_cst_total, budg.lab_budg))+(DECODE(budg.ODF_PARENT_ID,NULL,0,budg.nli_budg)) sum_lab_nonlab_budg,
frct.LABOUR,
frct.NON_LABOUR,
frct.LABOUR+frct.NON_LABOUR FORCAST_TOT_SUM,
gfr.code GFR_CODE,
odfp.gs_rev_recognition,
rev_rec.NAME gs_rev_recognition,
DECODE(odfp.gs_prevent_reqtxn,1,'YES','0','NO','NO'),
GS_PROGTYPE.NAME,
COMM_APPR.UNIQUE_NAME COMM_APPROVER_EIN,
COMM_APPR.FIRST_NAME||' '||COMM_APPR.LAST_NAME COMM_APPROVER_NAME,
PR_APPR.UNIQUE_NAME PR_APPROVER_EIN,
PR_APPR.FIRST_NAME||' '||PR_APPR.LAST_NAME PR_APPROVER_NAME,
ODFP.GS_BILL_BUD,
frct.FRCST_REVENUE TOTAL_FRC_REVENUE,
-- actuals.LABOUR_ACTUALS, -- Arnab
-- actuals.NL_ACTUALS, -- Arnab
get_wip_totalcost(prjp.prid,'L') LABOUR_ACTUALS, -- Arnab
get_wip_totalcost(prjp.prid,'M') NL_ACTUALS, -- Arnab
ODFP.GS_ASS_PRODUCT_CODE,
PROJ_TEMPLATE.ID GS_PROJ_TEMPLATE_ID,
PROJ_TEMPLATE.UNIQUE_NAME GS_PROJ_TEMPLATE_CODE,
PROJ_TEMPLATE.name GS_PROJ_TEMPLATE_NAME,
DECODE( NVL(ODFP.GS_FIN_TEMPLATE,0) ,0,'NO',1,'YES') GS_FIN_TEMPLATE,
ODF_PRJ_TYPE.LOOKUP_CODE GS_PROJ_TYPE
FROM PROJECTS SRM,
J_PROJECTS PRJP,
A_PROJECT ODFP,
RESOURCES RESM,
s_gfr gfr,
RESOURCES PROJ_ACCT,
RESOURCES COMM_APPR,
RESOURCES PR_APPR,
(SELECT CAP.NAME
,LOOKUP.LOOKUP_CODE
FROM S_NLS CAP,
N_LOOKUPS LOOKUP
WHERE LOOKUP_TYPE ='GS_PROJECT_TYPE'
AND LOOKUP.ID =CAP.PK_ID
AND CAP.LANGUAGE_CODE='en'
AND CAP.TABLE_NAME = 'CMN_LOOKUPS') ODF_PRJ_TYPE,
(SELECT CAP.NAME
,LOOKUP.LOOKUP_CODE
FROM S_NLS CAP,
N_LOOKUPS LOOKUP
WHERE LOOKUP_TYPE ='GS_SOURCE_SYSTEM'
AND LOOKUP.ID =CAP.PK_ID
AND CAP.LANGUAGE_CODE='en'
AND CAP.TABLE_NAME = 'CMN_LOOKUPS') ODF_SRC_SYS,
(SELECT CAP.NAME
,LOOKUP.LOOKUP_CODE
FROM S_NLS CAP,
N_LOOKUPS LOOKUP
WHERE LOOKUP_TYPE ='GS_PROG_TYPE'
AND LOOKUP.ID =CAP.PK_ID
AND CAP.LANGUAGE_CODE='en'
AND CAP.TABLE_NAME = 'CMN_LOOKUPS')GS_PROGTYPE,
(SELECT odf_parent_id
,SUM(gs_lab_budg) lab_budg
,SUM(gs_nl_budg) nli_budg
FROM T_CHANGES
WHERE gs_status='APPR'
GROUP BY odf_parent_id) budg,
(SELECT CAP.NAME
,LOOKUP.LOOKUP_CODE
FROM S_NLS CAP,
N_LOOKUPS LOOKUP
WHERE LOOKUP_TYPE ='GS_REV_RECOGNITION'
AND LOOKUP.ID = CAP.PK_ID
AND CAP.LANGUAGE_CODE='en'
AND CAP.TABLE_NAME = 'CMN_LOOKUPS') rev_rec,
(SELECT FRCP1.project_id,
NVL(SUM(CASE WHEN srmr.resource_type=0 THEN for_val.cost ELSE NULL END),0) AS LABOUR,
NVL(SUM(CASE WHEN srmr.resource_type <> 0 THEN for_val.cost ELSE NULL END),0) AS NON_LABOUR,
NVL(SUM(for_val.revenue),0) AS FRCST_REVENUE
FROM T_PROPERTIES FRCP1,
T_DETAILS FOR_DET,
RESOURCES SRMR,
T_VALUES FOR_VAL
WHERE (FRCP1.project_id, revision) IN (SELECT project_id, MAX(revision)
FROM T_PROPERTIES FRCP
WHERE FRCP.status=2
AND FRCP.PERIOD_TYPE='SEMI_MONTHLY'
GROUP BY project_id)
AND FRCP1.PERIOD_TYPE='SEMI_MONTHLY'
AND FOR_DET.forecast_id=frcp1.id
AND SRMR.id(+)=FOR_DET.detail_id
AND FOR_VAL.currency_type='BILLING'
AND FOR_VAL.forecast_details_id=FOR_DET.ID
GROUP BY FRCP1.project_id) frct,
/* (SELECT srmp.id
,NVL(SUM(CASE WHEN wip.transtype='L' THEN WIPVAL.TOTALCOST ELSE NULL END),0) AS LABOUR_ACTUALS
,NVL(SUM(CASE WHEN wip.transtype='M' THEN WIPVAL.TOTALCOST ELSE NULL END),0) AS NL_ACTUALS
FROM A_WIP WIP
,om_periods biz
,P_VALUES WIPVAL
,PROJECTS SRMP
,A_PROJECT ODF
WHERE biz.period_type ='SEMI_MONTHLY'
AND TRUNC(wip.TRANSDATE) BETWEEN biz.start_date AND biz.end_date
AND WIP.TRANSNO=WIPVAL.TRANSNO
AND WIPVAL.CURRENCY_TYPE='BILLING'
AND SRMP.UNIQUE_NAME=WIP.PROJECT_CODE
AND SRMP.ID=ODF.ID
AND UPPER(ODF.partition_code)='GLOBAL'
AND UPPER(WIP.external_id) <> 'SPREADSHEET'
GROUP BY srmp.id) actuals, */ -- Arnab
RESOURCES BUD_HOLDER,
PROJECTS PROJ_TEMPLATE
WHERE SRM.ID = PRJP.PRID
AND ODFP.ID=PRJP.PRID
AND ODFP.PARTITION_CODE='GLOBAL'
AND RESM.id(+)=prjp.manager_id
AND ODFP.GS_PROJECT_TYPE = ODF_PRJ_TYPE.LOOKUP_CODE(+)
AND ODFP.GS_SOURCE_SYSTEM = ODF_SRC_SYS.LOOKUP_CODE(+)
AND ODFP.GS_prog_type = GS_PROGTYPE.LOOKUP_CODE(+)
AND budg.odf_parent_id(+)=SRM.ID
AND frct.project_id(+) = prjp.prid
AND ODFP.gs_risk_gfr=gfr.code(+)
AND ODFP.GS_REV_RECOGNITION=rev_rec.LOOKUP_CODE(+)
AND ODFP.GS_PROJ_ACCT = PROJ_ACCT.ID(+)
AND ODFP.GS_COMM_APPR = COMM_APPR.ID(+)
AND ODFP.GS_PR_APPR = PR_APPR.ID(+)
-- AND actuals.id(+) = prjp.prid -- Arnab
AND ODFP.GS_BUDGET_HOLDER=BUD_HOLDER.ID(+)
AND ODFP.GS_TEMPLATE_USED=PROJ_TEMPLATE.ID(+)
This one is SSS, where the lines marked "-- Arnab" are the only changes.
The cost of this is 7000; it finishes in 1-1.5 min in Toad.
It is used as INSERT INTO <table> <select query>; this was taking 1.5 hours to run.
It was changed to use BULK COLLECT, but it still takes around 1.5 hours to run.
Thanks
Arnab -
Issue with SQL Query with Presentation Variable as Data Source in BI Publisher
Hello All
I have an issue with creating a BIP report based on an OBIEE dashboard report that is written using direct SQL. To create the pixel-perfect version of this report, I am creating a BIP data model using SQL Query as the data source. The physical query used to build the OBIEE report has several presentation variables in its WHERE clause.
select TILE4, max(APPTS), 'Top Count' from (
SELECT c5 as division, nvl(DECODE(C2,0,0,(c1/c2)*100),0) AS APPTS, NTILE(4) OVER (ORDER BY nvl(DECODE(C2,0,0,(c1/c2)*100),0)) AS TILE4,
c4 as dept, c6 as month FROM (
select sum(case when T6736.TYPE = 'ATM' then T7608.COUNT end ) as c1,
sum(case when T6736.TYPE in ('Call Center', 'LSM') then T7608.CONFIRMED_COUNT end ) as c2,
T802.NAME_LEVEL_6 as c3,
T802.NAME_LEVEL_1 as c4,
T6172.CALENDARMONTHNAMEANDYEAR as c5,
T6172.CALENDARMONTHNUMBERINYEAR as c6,
T802.DEPT_CODE as c7
from
DW_date_DIM T6736 /* z_dim_date */ ,
DW_MONTH_DIM T6172 /* z_dim_month */ ,
DW_GEOS_DIM T802 /* z_dim_dept_geo_hierarchy */ ,
DW_Count_MONTH_AGG T7608 /* z_fact_Count_month_agg */
where ( T802.DEpt_CODE = T7608.DEPT_CODE and T802.NAME_LEVEL_1 = '@{PV_D}{RSD}'
and T802.CALENDARMONTHNAMEANDYEAR = 'July 2013'
and T6172.MONTH_KEY = T7608.MONTH_KEY and T6736.DATE_KEY = T7608.DATE_KEY
and (T6172.CALENDARMONTHNUMBERINYEAR between substr('@{Month_Start}',0,6) and substr('@{Month_END}',8,13))
and (T6736.TYPE in ('Call Center', 'LSM')) )
group by T802.DEPT_CODE, T802.NAME_LEVEL_6, T802.NAME_LEVEL_1, T6172.CALENDARMONTHNAMEANDYEAR, T6172.CALENDARMONTHNUMBERINYEAR
order by c4, c3, c6, c7, c5
)) where tile4=3 group by tile4
When I try to view data after creating the data set, I get the following error:
Failed to load XML
XML Parsing Error: mismatched tag. Expected: . Location: http://172.20.17.142:9704/xmlpserver/servlet/xdo Line Number 2, Column 580:
Now when I remove those Presention variables (@{PV1}, @{PV2}) in the query with some hard coded values, it is working fine.
So I know it is the PV that's causing this error.
How can I work around it?
There is no way to create equivalent report without using the direct sql..
Thanks in advanceI have found a solution to this problem after some more investigation. PowerQuery does not support to use SQL statement as source for Teradata (possibly same for other sources as well). This is "by design" according to Microsoft. Hence the problem
is not because different PowerQuery versions as mentioned above. When designing the query in PowerQuery in Excel make sure to use the interface/navigation to create the query/select tables and NOT a SQL statement. The SQL statement as source works fine on
a client machine but not when scheduling it in Power BI in the cloud. I would like to see that the functionality within PowerQuery and Excel should be the same as in Power BI in the cloud. And at least when there is a difference it would be nice with documentation
or more descriptive errors.
//Jonas -
Performance issue with insert query !
Hi ,
I am using dbxml-2.4.16; my node-storage container is loaded with a large document (a 54 MB XML).
My document contains around 65k records in the same table (65k child nodes under one parent node). I need to insert more records into my DB, and my insert XQuery is consuming a lot of time (~23 sec) to insert one entry through the command line and around 50 sec through code.
My container is indexed with "node-attribute-equality-string". The insert query I used:
insert nodes <NS:sampleEntry mySSIAddress='70011' modifier = 'create'><NS:sampleIPZone1Address>AABBCCDD</NS:sampleIPZone1Address><NS:myICMPFlag>1</NS:myICMPFlag><NS:myIngressFilter>1</NS:myIngressFilter><NS:myReadyTimer>4</NS:myReadyTimer><NS:myAPNNetworkID>ggsntest</NS:myAPNNetworkID><NS:myVPLMNFlag>2</NS:myVPLMNFlag><NS:myDAC>100</NS:myDAC><NS:myBcastLLIFlag>2</NS:myBcastLLIFlag><NS:sampleIPZone2Address>00000000</NS:sampleIPZone2Address><NS:sampleIPZone3Address>00000000</NS:sampleIPZone3Address><NS:sampleIPZone4Address>00000000</NS:sampleIPZone4Address><NS:sampleIPZone5Address>00000000</NS:sampleIPZone5Address><NS:sampleIPZone6Address>00000000</NS:sampleIPZone6Address><NS:sampleIPZone7Address>00000000</NS:sampleIPZone7Address></NS:sampleEntry> into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable
If I modify my query to use
into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:sampleTable/NS:sampleEntry[@mySSIAddress='1']
instead of
into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable
the time taken reduces only by 8 secs.
I have also tried insert "after", "before", "as first", "as last", but there is no difference in performance.
Is anything wrong with my query? What should be the expected time to insert one record into a DB of 65k records?
Does anybody have any idea regarding this performance issue?
Kindly help me out.
Thanks,
Kapil.
Hi George,
Thanks for your reply.
Here is the info you requested,
dbxml> listIndexes
Index: unique-node-metadata-equality-string for node {http://www.sleepycat.com/2002/dbxml}:name
Index: node-attribute-equality-string for node {}:mySSIAddress
2 indexes found.
dbxml> info
Version: Oracle: Berkeley DB XML 2.4.16: (October 21, 2008)
Berkeley DB 4.6.21: (September 27, 2007)
Default container name: n_b_i_f_c_a_z.dbxml
Type of default container: NodeContainer
Index Nodes: on
Shell and XmlManager state:
Not transactional
Verbose: on
Query context state: LiveValues,Eager
The insert query with the update takes ~32 sec (shown below):
time query "declare namespace foo='MY-SAMPLE';declare namespace NS='NS';insert nodes <NS:sampleEntry mySSIAddress='70000' modifier = 'create' ><NS:sampleIPZone1Address>AABBCCDD</NS:sampleIPZone1Address><NS:myICMPFlag>1</NS:myICMPFlag><NS:myIngressFilter>1</NS:myIngressFilter><NS:myReadyTimer>4</NS:myReadyTimer><NS:myAPNNetworkID>ggsntest</NS:myAPNNetworkID><NS:myVPLMNFlag>2</NS:myVPLMNFlag><NS:myDAC>100</NS:myDAC><NS:myBcastLLIFlag>2</NS:myBcastLLIFlag><NS:sampleIPZone2Address>00000000</NS:sampleIPZone2Address><NS:sampleIPZone3Address>00000000</NS:sampleIPZone3Address><NS:sampleIPZone4Address>00000000</NS:sampleIPZone4Address><NS:sampleIPZone5Address>00000000</NS:sampleIPZone5Address><NS:sampleIPZone6Address>00000000</NS:sampleIPZone6Address><NS:sampleIPZone7Address>00000000</NS:sampleIPZone7Address></NS:sampleEntry> into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable"
Time in seconds for command 'query': 32.5002
and the query without the update part takes ~14 sec (shown below):
time query "declare namespace foo='MY-SAMPLE';declare namespace NS='NS'; doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable"
Time in seconds for command 'query': 13.7289
The query :
time query "declare namespace foo='MY-SAMPLE';declare namespace NS='NS'; doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//PMB:sampleTable/PMB:sampleEntry[@mySSIAddress='1000']"
Time in seconds for command 'query': 0.005375
is very fast.
The update of the document seems to consume most of the time.
Regards,
Kapil. -
Performance issue with select query
Hi friends ,
This is my select query, which is taking a long time to retrieve the records. Can anyone help me solve this performance issue?
*- Get the Goods receipts mainly selected per period (=> MKPF secondary
SELECT mseg~ebeln mseg~ebelp mseg~werks
ekko~bukrs ekko~lifnr ekko~zterm ekko~ekorg ekko~ekgrp
ekko~inco1 ekko~exnum
lfa1~name1 lfa1~land1 lfa1~ktokk lfa1~stceg
mkpf~mblnr mkpf~mjahr mseg~zeile mkpf~bldat mkpf~budat
mseg~bwart
*Start of changes for CIP 6203752 by PGOX02
mseg~smbln
*End of changes for CIP 6203752 by PGOX02
ekpo~matnr ekpo~txz01 ekpo~menge ekpo~meins
ekbe~menge ekbe~dmbtr ekbe~wrbtr ekbe~waers
ekpo~lgort ekpo~matkl ekpo~webaz ekpo~konnr ekpo~ktpnr
ekpo~plifz ekpo~bstae
INTO TABLE it_temp
FROM mkpf JOIN mseg ON mseg~mblnr EQ mkpf~mblnr
AND mseg~mjahr EQ mkpf~mjahr
JOIN ekbe ON ekbe~ebeln EQ mseg~ebeln
AND ekbe~ebelp EQ mseg~ebelp
AND ekbe~zekkn EQ '00'
AND ekbe~vgabe EQ '1'
AND ekbe~gjahr EQ mseg~mjahr
AND ekbe~belnr EQ mseg~mblnr
AND ekbe~buzei EQ mseg~zeile
JOIN ekpo ON ekpo~ebeln EQ ekbe~ebeln
AND ekpo~ebelp EQ ekbe~ebelp
JOIN ekko ON ekko~ebeln EQ ekpo~ebeln
JOIN lfa1 ON lfa1~lifnr EQ ekko~lifnr
WHERE mkpf~budat IN so_budat
AND mkpf~bldat IN so_bldat
AND mkpf~vgart EQ 'WE'
AND mseg~bwart IN so_bwart
AND mseg~matnr IN so_matnr
AND mseg~werks IN so_werks
AND mseg~lifnr IN so_lifnr
AND mseg~ebeln IN so_ebeln
AND ekko~ekgrp IN so_ekgrp
AND ekko~bukrs IN so_bukrs
AND ekpo~matkl IN so_matkl
AND ekko~bstyp IN so_bstyp
AND ekpo~loekz EQ space
AND ekpo~plifz IN so_plifz.
Thanks & Regards,
Manoj Kumar .Thatha
Moderator message - Please see Please Read before Posting in the Performance and Tuning Forum before posting and please use code tags when posting code - post locked
Edited by: Rob Burbank on Feb 4, 2010 9:03 AM
Issue with SQL query of Oracle Apps Alert
Hi,
I have build the following SQL Query for one of the alert.
But when I try to validate it, it throws the error:
"APP-ALR-04108: SQL Error - ORA-01744 inappropriate INTO occurred in '
My SQL is as follows:
===============
SELECT details into
&v_details from (SELECT 2, RPAD(SUBSTR(lookup_code,1),25,' ') ||' - '||RPAD(SUBSTR(NVL(lu.description,' '),1),240,' ') Details
INTO &V_seq,&V_resp
from apps.FND_LOOKUP_VALUES_VL lu,apps.fnd_user fu
where lookup_type ='XX_TEST'
AND lookup_code = fu.user_name
AND (fu.end_date IS NULL OR fu.end_date >= SYSDATE))
Please help on this.
Thanks,
NazWhy there are 2 into clauses? You need to change your query to fetch all columns first and then use into to fetch into variables.
Thanks
Shree -
Performance issue with this query.
Hi Experts,
This query is fetching 500 records.
SELECT
RECIPIENT_ID ,FAX_STATUS
FROM
FAX_STAGE WHERE LOWER(FAX_STATUS) like 'moved to%'
Execution Plan
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 159K| 10M| 2170 (1)|
| 1 | TABLE ACCESS BY INDEX ROWID| FAX_STAGE | 159K| 10M| 2170 (1)|
| 2 | INDEX RANGE SCAN | INDX_FAX_STATUS_RAM | 28786 | | 123 (0)|
Note
- 'PLAN_TABLE' is old version
Statistics
1 recursive calls
0 db block gets
21 consistent gets
0 physical reads
0 redo size
937 bytes sent via SQL*Net to client
375 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
19 rows processed
Total number of records in the table.
SELECT COUNT(*) FROM FAX_STAGE--3679418
Distinct records are few for this column.
SELECT DISTINCT FAX_STATUS FROM FAX_STAGE;
Completed
BROKEN
Broken - New
moved to - America
MOVED to - Australia
Moved to Canada and australia
There is a function-based index on FAX_STAGE(LOWER(FAX_STATUS)).
Stats are up to date.
The cost is still high.
How can I improve the performance of this query?
Please help me.
Thanks in advance.With no heavy activity on your fax_stage table a bitmap index might do better - see CREATE INDEX
I would try FTS (Full Table Scan) first as 6 vs. 3679418 is low cardinality for sure so using an index is not very helpful in this case (maybe too much Exadata oriented)
There's a lot of web pages where you can read: full table scans are not always evil and indexes are not always good or vice versa Ask Tom &quot;How to avoid the full table scan&quot;
Regards
Etbin -
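One way to test the full-scan suggestion above, using the table and column names from the thread (the alias fs is mine), is a FULL hint:

```sql
-- Force a full table scan so its cost and elapsed time can be
-- compared against the function-based index range scan.
SELECT /*+ FULL(fs) */ recipient_id, fax_status
FROM   fax_stage fs
WHERE  LOWER(fax_status) LIKE 'moved to%';
```

With only 6 distinct status values in ~3.7 million rows, the optimizer's index choice is worth double-checking against the full scan under AUTOTRACE.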
Performance issue with select query and for all entries.
hi,
i have a report to be performance tuned.
the database table has around 20 million entries and 25 fields.
so, the report fetches the distinct values of two fields using one select query.
so, the first select query fetches around 150 entries from the table for 2 fields.
then it applies some logic and eliminates some entries and makes entries around 80-90...
and then it again applies the select query on the same table using for all entries applied on the internal table with 80-90 entries...
in short,
it accesses the same database table twice.
So, I tried to load the database table into an internal table, apply the logic there, and delete the unwanted entries, but it gave me a memory dump; it won't take that huge amount of data into ABAP memory...
is around 80-90 entries too much for using "for all entries"?
the logic that is applied to eliminate the entries from internal table is too long, and hence cannot be converted into where clause to convert it into single select..
i really cant find the way out...
please help.
chinmay kulkarni wrote:
Chinmay,
Even though you tried to ask the question with detailed explanation, unfortunately it is still not clear.
It is perfectly fine to access the same database twice. If that is working for you, I don't think there is any need to change the logic. As Rob mentioned, 80 or 8000 records is not a problem in "for all entries" clause.
>
> so, i tried to get the database table in internal table and apply the logic on internal table and delete the unwanted entries.. but it gave me memory dump, and it wont take that huge amount of data into abap memory...
>
It is not clear what you tried to do here. Did you try to bring all 20 million records into an internal table? That will certainly cause the program to short dump with memory shortage.
> the logic that is applied to eliminate the entries from internal table is too long, and hence cannot be converted into where clause to convert it into single select..
>
That is fine. Actually, it is better (performance wise) to do much of the work in ABAP than writing a complex WHERE clause that might bog down the database. -
Performance Issue with the query
Hi Experts,
While working on PeopleSoft today, I got stuck on a problem: one of the queries performs really badly. Even with statistics updated, the query does not perform well. On one of the tables, the query spends a lot of time doing IO (db file sequential read waits). If I delete the stats for that table and let dynamic sampling take place, the query runs fine and there is no IO wait on the table.
Here is the query
SELECT A.BUSINESS_UNIT_PC, A.PROJECT_ID, E.DESCR, A.EMPLID, D.NAME, C.DESCR, A.TRC,
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 1, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 2, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 3, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 4, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 5, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 6, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 7, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 8, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 9, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 10, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 11, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 12, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 13, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 14, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 15, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 16, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 17, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 18, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 19, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 20, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 21, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 22, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 23, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 24, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 25, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 26, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 27, 'MM-DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 28, 'MM-DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 29, 'MM-DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 30, 'MM-DD'), A.TL_QUANTITY, 0)),
SUM( A.EST_GROSS),
DECODE( A.TRC, 'ROVA1', 0, 'ROVA2', 0, ( SUM( A.EST_GROSS)/100) * 9.75),
'2012-07-01',
'2012-07-31',
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 1, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 2, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 3, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 4, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 5, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 6, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 7, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 8, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 9, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 10, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 11, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 12, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 13, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 14, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 15, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 16, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 17, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 18, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 19, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 20, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 21, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 22, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 23, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 24, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 25, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 26, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 27, 'MM-DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 28, 'MM-DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 29, 'MM-DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 30, 'MM-DD'), A.TL_QUANTITY, 0)),
DECODE( A.CURRENCY_CD, 'USD', '$', 'GBP', '£', 'EUR', '€', 'AED', 'D', 'NGN', 'N', ' '),
DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'NG', F.PER_ORG),
DECODE(TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'MM'),
TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'MM'),
TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ')
|| TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY'), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'Mon ')
|| TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'YYYY') || ' / ' || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ')
|| TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY')),
C.TRC, TO_CHAR(C.EFFDT,'YYYY-MM-DD'),
E.BUSINESS_UNIT,
E.PROJECT_ID
FROM
PS_TL_PAYABLE_TIME A,
PS_TL_TRC_TBL C,
PS_PROJECT E,
PS_PERSONAL_DATA D,
PS_PERALL_SEC_QRY D1,
PS_JOB F,
PS_EMPLMT_SRCH_QRY F1
WHERE
D.EMPLID = D1.EMPLID
AND D1.OPRID = 'TMANI'
AND F.EMPLID = F1.EMPLID
AND F.EMPL_RCD = F1.EMPL_RCD
AND F1.OPRID = 'TMANI'
AND A.DUR BETWEEN TO_DATE('2012-07-01','YYYY-MM-DD') AND TO_DATE('2012-07-31','YYYY-MM-DD')
AND C.TRC = A.TRC
AND C.EFFDT = (SELECT
MAX(C_ED.EFFDT)
FROM
PS_TL_TRC_TBL C_ED
WHERE
C.TRC = C_ED.TRC
AND C_ED.EFFDT <= SYSDATE)
AND E.BUSINESS_UNIT = A.BUSINESS_UNIT_PC
AND E.PROJECT_ID = A.PROJECT_ID
AND A.EMPLID = D.EMPLID
AND A.EMPLID = F.EMPLID
AND A.EMPL_RCD = F.EMPL_RCD
AND F.EFFDT = (SELECT
MAX(F_ED.EFFDT)
FROM
PS_JOB F_ED
WHERE
F.EMPLID = F_ED.EMPLID
AND F.EMPL_RCD = F_ED.EMPL_RCD
AND F_ED.EFFDT <= SYSDATE)
AND F.EFFSEQ = (SELECT
MAX(F_ES.EFFSEQ)
FROM
PS_JOB F_ES
WHERE
F.EMPLID = F_ES.EMPLID
AND F.EMPL_RCD = F_ES.EMPL_RCD
AND F.EFFDT = F_ES.EFFDT)
AND F.GP_PAYGROUP = DECODE(' ', ' ', F.GP_PAYGROUP, ' ')
AND A.CURRENCY_CD = DECODE(' ', ' ', A.CURRENCY_CD, ' ')
AND DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'L', DECODE( F.PER_ORG, 'CWR', 'L', 'E')) = DECODE(' ', ' ', DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'L', DECODE( F.PER_ORG, 'CWR', 'L', 'E')), 'L', 'L', 'E', 'E', 'A', DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'L', DECODE( F.PER_ORG, 'CWR', 'L', 'E')))
AND ( A.EMPLID, A.EMPL_RCD) IN (SELECT
B.EMPLID,
B.EMPL_RCD
FROM
PS_TL_GROUP_DTL B
WHERE
B.TL_GROUP_ID = DECODE('ER012', ' ', B.TL_GROUP_ID, 'ER012'))
AND E.PROJECT_USER1 = DECODE(' ', ' ', E.PROJECT_USER1, ' ')
AND A.PROJECT_ID = DECODE(' ', ' ', A.PROJECT_ID, ' ')
AND A.PAYABLE_STATUS <>
CASE
WHEN to_number(TO_CHAR(sysdate, 'DD')) < 15
THEN
CASE
WHEN A.DUR > last_day(add_months(sysdate, -2))
THEN ' '
ELSE 'NA'
END
ELSE
CASE
WHEN A.DUR > last_day(add_months(sysdate, -1))
THEN ' '
ELSE 'NA'
END
END
AND A.EMPLID = DECODE(' ', ' ', A.EMPLID, ' ')
GROUP BY A.BUSINESS_UNIT_PC,
A.PROJECT_ID,
E.DESCR,
A.EMPLID,
D.NAME,
C.DESCR,
A.TRC,
'2012-07-01',
'2012-07-31',
DECODE( A.CURRENCY_CD, 'USD', '$', 'GBP', '£', 'EUR', '€', 'AED', 'D', 'NGN', 'N', ' '),
DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'NG', F.PER_ORG),
DECODE(TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'MM'), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'MM'), TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ')
|| TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY'), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'Mon ')
|| TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'YYYY') || ' / '
|| TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ') || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY')),
C.TRC, TO_CHAR(C.EFFDT,'YYYY-MM-DD'),
E.BUSINESS_UNIT, E.PROJECT_ID
HAVING SUM( A.EST_GROSS) <> 0
ORDER BY 1,
2,
5,
6 ;
Here is the screenshot for IO wait
[https://lh4.googleusercontent.com/-6PFW2FSK3yE/UCrwUbZ0pvI/AAAAAAAAAPA/eHM48AOC0Uo]
Edited by: Oceaner on Aug 14, 2012 5:38 PM
Here is the execution plan with all the statistics present
PLAN_TABLE_OUTPUT
Plan hash value: 1575300420
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 237 | 2703 (1)| 00:00:33 |
|* 1 | FILTER | | | | | |
| 2 | SORT GROUP BY | | 1 | 237 | | |
| 3 | CONCATENATION | | | | | |
|* 4 | FILTER | | | | | |
|* 5 | FILTER | | | | | |
|* 6 | HASH JOIN | | 1 | 237 | 1695 (1)| 00:00:21 |
|* 7 | HASH JOIN | | 1 | 204 | 1689 (1)| 00:00:21 |
|* 8 | HASH JOIN SEMI | | 1 | 193 | 1352 (1)| 00:00:17 |
| 9 | NESTED LOOPS | | | | | |
| 10 | NESTED LOOPS | | 1 | 175 | 1305 (1)| 00:00:16 |
|* 11 | HASH JOIN | | 1 | 148 | 1304 (1)| 00:00:16 |
| 12 | JOIN FILTER CREATE | :BF0000 | | | | |
| 13 | NESTED LOOPS | | | | | |
| 14 | NESTED LOOPS | | 1 | 134 | 967 (1)| 00:00:12 |
| 15 | NESTED LOOPS | | 1 | 103 | 964 (1)| 00:00:12 |
|* 16 | TABLE ACCESS FULL | PS_PROJECT | 197 | 9062 | 278 (1)| 00:00:04 |
|* 17 | TABLE ACCESS BY INDEX ROWID | PS_TL_PAYABLE_TIME | 1 | 57 | 7 (0)| 00:00:01 |
|* 18 | INDEX RANGE SCAN | IDX$$_C44D0007 | 16 | | 3 (0)| 00:00:01 |
|* 19 | INDEX RANGE SCAN | IDX$$_3F450003 | 1 | | 2 (0)| 00:00:01 |
|* 20 | TABLE ACCESS BY INDEX ROWID | PS_JOB | 1 | 31 | 3 (0)| 00:00:01 |
| 21 | VIEW | PS_EMPLMT_SRCH_QRY | 5428 | 75992 | 336 (2)| 00:00:05 |
PLAN_TABLE_OUTPUT
| 22 | SORT UNIQUE | | 5428 | 275K| 336 (2)| 00:00:05 |
|* 23 | FILTER | | | | | |
| 24 | JOIN FILTER USE | :BF0000 | 55671 | 2827K| 335 (1)| 00:00:05 |
| 25 | NESTED LOOPS | | 55671 | 2827K| 335 (1)| 00:00:05 |
| 26 | TABLE ACCESS BY INDEX ROWID| PSOPRDEFN | 1 | 20 | 2 (0)| 00:00:01 |
|* 27 | INDEX UNIQUE SCAN | PS_PSOPRDEFN | 1 | | 1 (0)| 00:00:01 |
|* 28 | TABLE ACCESS FULL | PS_SJT_PERSON | 55671 | 1739K| 333 (1)| 00:00:04 |
| 29 | CONCATENATION | | | | | |
| 30 | NESTED LOOPS | | 1 | 63 | 2 (0)| 00:00:01 |
|* 31 | INDEX FAST FULL SCAN | PSASJT_OPR_CLS | 1 | 24 | 2 (0)| 00:00:01 |
|* 32 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 33 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|* 34 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|* 35 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 36 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|* 37 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|* 38 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
|* 39 | INDEX UNIQUE SCAN | PS_PERSONAL_DATA | 1 | | 0 (0)| 00:00:01 |
| 40 | TABLE ACCESS BY INDEX ROWID | PS_PERSONAL_DATA | 1 | 27 | 1 (0)| 00:00:01 |
|* 41 | INDEX FAST FULL SCAN | PS_TL_GROUP_DTL | 323 | 5814 | 47 (3)| 00:00:01 |
| 42 | VIEW | PS_PERALL_SEC_QRY | 7940 | 87340 | 336 (2)| 00:00:05 |
| 43 | SORT UNIQUE | | 7940 | 379K| 336 (2)| 00:00:05 |
|* 44 | FILTER | | | | | |
|* 45 | FILTER | | | | | |
| 46 | NESTED LOOPS | | 55671 | 2663K| 335 (1)| 00:00:05 |
| 47 | TABLE ACCESS BY INDEX ROWID | PSOPRDEFN | 1 | 20 | 2 (0)| 00:00:01 |
|* 48 | INDEX UNIQUE SCAN | PS_PSOPRDEFN | 1 | | 1 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
|* 49 | TABLE ACCESS FULL | PS_SJT_PERSON | 55671 | 1576K| 333 (1)| 00:00:04 |
| 50 | CONCATENATION | | | | | |
| 51 | NESTED LOOPS | | 1 | 63 | 2 (0)| 00:00:01 |
|* 52 | INDEX FAST FULL SCAN | PSASJT_OPR_CLS | 1 | 24 | 2 (0)| 00:00:01 |
|* 53 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 54 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|* 55 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|* 56 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 57 | CONCATENATION | | | | | |
| 58 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|* 59 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|* 60 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 61 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|* 62 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|* 63 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 64 | NESTED LOOPS | | 1 | 63 | 3 (0)| 00:00:01 |
| 65 | INLIST ITERATOR | | | | | |
|* 66 | INDEX RANGE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 2 (0)| 00:00:01 |
|* 67 | INDEX RANGE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
| 68 | TABLE ACCESS FULL | PS_TL_TRC_TBL | 922 | 30426 | 6 (0)| 00:00:01 |
| 69 | SORT AGGREGATE | | 1 | 20 | | |
|* 70 | INDEX RANGE SCAN | PSAJOB | 1 | 20 | 3 (0)| 00:00:01 |
| 71 | SORT AGGREGATE | | 1 | 14 | | |
|* 72 | INDEX RANGE SCAN | PS_TL_TRC_TBL | 2 | 28 | 2 (0)| 00:00:01 |
| 73 | SORT AGGREGATE | | 1 | 23 | | |
|* 74 | INDEX RANGE SCAN | PSAJOB | 1 | 23 | 3 (0)| 00:00:01 |
|* 75 | FILTER | | | | | |
PLAN_TABLE_OUTPUT
|* 76 | FILTER | | | | | |
|* 77 | HASH JOIN | | 1 | 237 | 974 (2)| 00:00:12 |
|* 78 | HASH JOIN | | 1 | 226 | 637 (1)| 00:00:08 |
|* 79 | HASH JOIN | | 1 | 193 | 631 (1)| 00:00:08 |
| 80 | NESTED LOOPS | | | | | |
| 81 | NESTED LOOPS | | 1 | 179 | 294 (1)| 00:00:04 |
| 82 | NESTED LOOPS | | 1 | 152 | 293 (1)| 00:00:04 |
| 83 | NESTED LOOPS | | 1 | 121 | 290 (1)| 00:00:04 |
|* 84 | HASH JOIN SEMI | | 1 | 75 | 289 (1)| 00:00:04 |
|* 85 | TABLE ACCESS BY INDEX ROWID | PS_TL_PAYABLE_TIME | 1 | 57 | 242 (0)| 00:00:03 |
|* 86 | INDEX SKIP SCAN | IDX$$_C44D0007 | 587 | | 110 (0)| 00:00:02 |
|* 87 | INDEX FAST FULL SCAN | PS_TL_GROUP_DTL | 323 | 5814 | 47 (3)| 00:00:01 |
|* 88 | TABLE ACCESS BY INDEX ROWID | PS_PROJECT | 1 | 46 | 1 (0)| 00:00:01 |
|* 89 | INDEX UNIQUE SCAN | PS_PROJECT | 1 | | 0 (0)| 00:00:01 |
|* 90 | TABLE ACCESS BY INDEX ROWID | PS_JOB | 1 | 31 | 3 (0)| 00:00:01 |
|* 91 | INDEX RANGE SCAN | IDX$$_3F450003 | 1 | | 2 (0)| 00:00:01 |
|* 92 | INDEX UNIQUE SCAN | PS_PERSONAL_DATA | 1 | | 0 (0)| 00:00:01 |
| 93 | TABLE ACCESS BY INDEX ROWID | PS_PERSONAL_DATA | 1 | 27 | 1 (0)| 00:00:01 |
| 94 | VIEW | PS_EMPLMT_SRCH_QRY | 5428 | 75992 | 336 (2)| 00:00:05 |
| 95 | SORT UNIQUE | | 5428 | 275K| 336 (2)| 00:00:05 |
|* 96 | FILTER | | | | | |
| 97 | NESTED LOOPS | | 55671 | 2827K| 335 (1)| 00:00:05 |
| 98 | TABLE ACCESS BY INDEX ROWID | PSOPRDEFN | 1 | 20 | 2 (0)| 00:00:01 |
|* 99 | INDEX UNIQUE SCAN | PS_PSOPRDEFN | 1 | | 1 (0)| 00:00:01 |
|*100 | TABLE ACCESS FULL | PS_SJT_PERSON | 55671 | 1739K| 333 (1)| 00:00:04 |
| 101 | CONCATENATION | | | | | |
| 102 | NESTED LOOPS | | 1 | 63 | 2 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
|*103 | INDEX FAST FULL SCAN | PSASJT_OPR_CLS | 1 | 24 | 2 (0)| 00:00:01 |
|*104 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 105 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|*106 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|*107 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 108 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|*109 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|*110 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 111 | TABLE ACCESS FULL | PS_TL_TRC_TBL | 922 | 30426 | 6 (0)| 00:00:01 |
| 112 | VIEW | PS_PERALL_SEC_QRY | 7940 | 87340 | 336 (2)| 00:00:05 |
| 113 | SORT UNIQUE | | 7940 | 379K| 336 (2)| 00:00:05 |
|*114 | FILTER | | | | | |
|*115 | FILTER | | | | | |
| 116 | NESTED LOOPS | | 55671 | 2663K| 335 (1)| 00:00:05 |
| 117 | TABLE ACCESS BY INDEX ROWID | PSOPRDEFN | 1 | 20 | 2 (0)| 00:00:01 |
|*118 | INDEX UNIQUE SCAN | PS_PSOPRDEFN | 1 | | 1 (0)| 00:00:01 |
|*119 | TABLE ACCESS FULL | PS_SJT_PERSON | 55671 | 1576K| 333 (1)| 00:00:04 |
| 120 | CONCATENATION | | | | | |
| 121 | NESTED LOOPS | | 1 | 63 | 2 (0)| 00:00:01 |
|*122 | INDEX FAST FULL SCAN | PSASJT_OPR_CLS | 1 | 24 | 2 (0)| 00:00:01 |
|*123 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 124 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|*125 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|*126 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 127 | CONCATENATION | | | | | |
| 128 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|*129 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
|*130 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 131 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|*132 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|*133 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 134 | NESTED LOOPS | | 1 | 63 | 3 (0)| 00:00:01 |
| 135 | INLIST ITERATOR | | | | | |
|*136 | INDEX RANGE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 2 (0)| 00:00:01 |
|*137 | INDEX RANGE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
| 138 | SORT AGGREGATE | | 1 | 20 | | |
|*139 | INDEX RANGE SCAN | PSAJOB | 1 | 20 | 3 (0)| 00:00:01 |
| 140 | SORT AGGREGATE | | 1 | 14 | | |
|*141 | INDEX RANGE SCAN | PS_TL_TRC_TBL | 2 | 28 | 2 (0)| 00:00:01 |
| 142 | SORT AGGREGATE | | 1 | 23 | | |
|*143 | INDEX RANGE SCAN | PSAJOB | 1 | 23 | 3 (0)| 00:00:01 |
| 144 | SORT AGGREGATE | | 1 | 14 | | |
|*145 | INDEX RANGE SCAN | PS_TL_TRC_TBL | 2 | 28 | 2 (0)| 00:00:01 |
| 146 | SORT AGGREGATE | | 1 | 20 | | |
|*147 | INDEX RANGE SCAN | PSAJOB | 1 | 20 | 3 (0)| 00:00:01 |
| 148 | SORT AGGREGATE | | 1 | 23 | | |
|*149 | INDEX RANGE SCAN | PSAJOB | 1 | 23 | 3 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------------
Most of the IO wait occurs at step 17, although the explain plan alone does not show this. When we run this query from the PeopleSoft application (online page), it is visible through Grid Control SQL Monitoring.
Second part of the Question continues.... -
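As a side note to the query above, the repeated SUBSTR(TO_CHAR(A.DUR, ...)) day comparisons can be expressed directly on the DATE value itself. A sketch of one daily bucket, assuming A.DUR is a DATE column as the query implies:

```sql
-- One daily bucket of the pivot, rewritten with TRUNC on the DATE
-- column instead of string manipulation (a sketch, not the full query).
SELECT SUM(CASE
             WHEN TRUNC(a.dur) = DATE '2012-07-01'
             THEN a.tl_quantity ELSE 0
           END) AS day_01
FROM   ps_tl_payable_time a
WHERE  a.dur BETWEEN DATE '2012-07-01' AND DATE '2012-07-31';
```

Avoiding per-row TO_CHAR/SUBSTR calls will not change the access path, but it cuts CPU in the select list and makes the intent clearer.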
Performance issue with infoset query
Dear Experts,
when I try to run a query based on an InfoSet, the execution time of the query is very high. When I open the query in the Query Designer, I get the following warning...
Warning Message :
Diagnosis
InfoProvider XXXX does not contain characteristic 0CALYEAR. The exception aggregation for key figure XXXX can therefore not be applied.
System Response
The key figure will therefore be aggregated using all characteristics of XXXXX with the generic aggregation SUM.
Could anyone guide me on what to check at the InfoSet level to resolve this issue?
Thanks,
Mannu!
Hi Mannu.
Go to change mode in your InfoSet and identify the Char 0CALYEAR. Check whether it is selected (click the box beside 0CALYEAR).
This needs to be done, because the exception aggregation can occur only for the fields available in the BEx Query!
Since it's not available, the system tries to aggregate based on all fields selected in the InfoSet. This may be the reason for your performance issue.
Make this setting and I hope your issue will get resolved. Do post the outcome.
Cheers,
VA
Edited by: Vishwa Anand on Sep 6, 2010 5:15 PM -
Performance Issue with the Query urgent
Is there any way to get the data for materials rejected with movement types 122 & 123 other than from the MSEG table? My report is very slow...
my query is as below : -
SELECT SUM( a~dmbtr )INTO value1
FROM mseg AS a INNER JOIN mkpf AS b
ON a~mblnr = b~mblnr
AND a~mjahr = b~mjahr
WHERE a~lifnr = p_lifnr
AND a~bwart IN ('122')
AND b~budat IN s_budat
GROUP BY lifnr.
ENDSELECT.
abhishek suppal
Hi Abhi,
Try like this ....
SELECT SUM( a~dmbtr ) INTO value1
  FROM mseg AS a INNER JOIN mkpf AS b
    ON a~mblnr = b~mblnr
   AND a~mjahr = b~mjahr
 WHERE a~lifnr = p_lifnr
   AND a~bwart IN ('122', '123')
   AND b~budat IN s_budat
 GROUP BY a~lifnr.
ENDSELECT.
or ...
define ranges...like
ranges: r_bwart for XXXX-bwart.
r_bwart-sign = 'I'.
r_bwart-option = 'EQ'.
r_bwart-low = '122'.
APPEND r_bwart.
r_bwart-low = '123'.
APPEND r_bwart.
now...
in the SELECT statement you just add
AND a~bwart IN r_bwart
Thanks
Eswar -
Performance issue with Spatial query
Need inputs to optimize a spatial query that takes 9 seconds to return 34410 rows. All tables have been analyzed. The plan shows the indexed columns. Details below:
SELECT count(*) FROM (select ANT_DIR_GEOLOC , decode(D_UL_SQE_LOW,0,0,D_UL_SQE_LOW/D_UL_SQE_CHECK) as D_UL_SQE_BAD_SECTORS
from dae_admin.VIEW_D_UL_SQE_BAD_SECTOR G , SECTORS S Where G.SECTOR_ID = S.SECTOR_ID and WEEK_NUMBER=32 and
WEEK_NUMBER_YEAR=2007 and WEEKEND = 'N')
WHERE MDSYS.SDO_FILTER(ANT_DIR_GEOLOC, MDSYS.SDO_GEOMETRY(2003, 99900001,NULL, MDSYS.SDO_ELEM_INFO_ARRAY(1, 1003,
3),MDSYS.SDO_ORDINATE_ARRAY(-1.06449263071068E7,3111428.11503326,-8140237.76425813,5605276.61542001)), 'querytype=WINDOW') ='TRUE'
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 146 | | 26699 (1)| 00:05:21 |
| 1 | SORT AGGREGATE | | 1 | 146 | | | |
|* 2 | HASH JOIN | | 1123 | 160K| | 26699 (1)| 00:05:21 |
| 3 | PARTITION RANGE ALL | | 1123 | 152K| | 603 (1)| 00:00:08 |
| 4 | TABLE ACCESS BY LOCAL INDEX ROWID | SECTORS | 1123 | 152K| | 603 (1)| 00:00:
|* 5 | DOMAIN INDEX | SECTORS_ANTDIR_SX | | | | | |
| 6 | VIEW | VIEW_D_UL_SQE_BAD_SECTOR | 20510 | 140K| | 26095 (1)| 00:05:14 |
| 7 | HASH GROUP BY | | 20510 | 901K| | 26095 (1)| 00:05:14 |
| 8 | VIEW | VIEW_D_UL_SQE_BAD_CARRIER | 984K| 42M| | 26095 (1)| 00:05:14 |
| 9 | HASH GROUP BY | | 984K| 73M| 180M| 26095 (1)| 00:05:14 |
| 10 | TABLE ACCESS BY LOCAL INDEX ROWID| FACT_DAILY_CHANNEL_STATS | 395K| 12M| | 603
| 11 | NESTED LOOPS | | 984K| 73M| | 8179 (1)| 00:01:39 |
| 12 | MERGE JOIN CARTESIAN | | 2 | 90 | | 4 (0)| 00:00:01 |
| 13 | TABLE ACCESS BY INDEX ROWID | NETWORK_PARAMETERS | 1 | 19 | | 2 (0)| 00:00
|* 14 | INDEX RANGE SCAN | NP_NAME_IDX | 1 | | | 1 (0)| 00:00:01 |
| 15 | BUFFER SORT | | 5 | 130 | | 2 (0)| 00:00:01 |
| 16 | TABLE ACCESS BY INDEX ROWID | DIM_TIME | 5 | 130 | | 2 (0)| 00:00:01 |
|* 17 | INDEX RANGE SCAN | DIM_TIME_WEEK_3_IX | 5 | | | 1 (0)| 00:00:01 |
| 18 | PARTITION RANGE ITERATOR | | 395K| | | 2139 (1)| 00:00:26 |
|* 19 | INDEX RANGE SCAN | FA_DCC_EVENT_DATE_NP_ID | 395K| | | 2139 (1)| 00:00:26 |
CREATE OR REPLACE VIEW VIEW_D_UL_SQE_BAD_SECTOR
(SECTOR_ID, WEEK_NUMBER_YEAR, WEEK_NUMBER, WEEKEND, D_UL_SQE_LOW,
D_UL_SQE_CHECK)
AS
SELECT sector_id,
week_number_year,
week_number,
weekend,
SUM(d_ul_sqe_low) d_ul_sqe_low,
SUM(d_ul_sqe_check) d_ul_sqe_check
FROM view_d_ul_sqe_bad_carrier
GROUP BY sector_id, week_number_year, week_number, weekend
view_d_ul_sqe_bad_carrier:
SELECT /*+ index(f) */
f.sector_id,
f.channel,
f.br_id,
t.week_number_year,
t.week_number,
t.weekend,
SUM(f.np_value_numerator) d_ul_sqe_low,
SUM(f.np_value_denominator) d_ul_sqe_check
FROM fact_daily_channel_stats f, network_parameters p, dim_time t
WHERE f.np_id = p.np_id
AND f.event_date = t.event_date
AND p.name = 'DL_UL_SQE_LOW_RATE'
GROUP BY f.sector_id, f.channel, f.br_id, t.week_number_year, t.week_number, t.weekend
fact_daily_channel_stats is a table that is partitioned on event_date.
Ivan - Yes, all the columns in the filter are indexed.
Row Count:
==========
fact_daily_channel_stats - 101263303
network_parameters - 168
SECTORS - 105216
dim_time - 7306
Any tip/ clue on re-writing the query would be of great help.
Rich - Below is the response from our Spatial Team :
1. Projection:
We are working in Mercator projection. Since Oracle has out of the box Mercator with datum NAD 27 (SRID=49155), and we needed Mercator with NAD 83, we defined our own projection with SRID = 99900001, which you are seeing in our query. Projection is defined as:
insert into sdo_coord_ref_system
SELECT 99900001, a.coord_ref_sys_name, a.coord_ref_sys_kind,
a.coord_sys_id, a.datum_id, 10076,
2000023, a.projection_conv_id, a.cmpd_horiz_srid,
a.cmpd_vert_srid, a.information_source, a.data_source,
a.is_legacy, a.legacy_code, 'PROJCS["Mercator", GEOGCS [ "NAD 83", DATUM ["NAD 83", SPHEROID ["GRS 80", 6378137, 298.257222101]], PRIMEM [ "Greenwich", 0.000000 ], UNIT ["Decimal Degree", 0.01745329251994330]], PROJECTION ["Mercator"], UNIT ["Meter", 1.000000000000]]', a.legacy_cs_bounds, a.is_valid, a.supports_sdo_geometry
FROM mdsys.sdo_coord_ref_sys a where srid=49155;
2. Metadata for SECTORS table is:
INSERT INTO USER_SDO_GEOM_METADATA (TABLE_NAME, COLUMN_NAME, DIMINFO, SRID)
VALUES('SECTORS', 'GEOLOC',SDO_DIM_ARRAY(SDO_DIM_ELEMENT('X', -20037566, 20037566, 1),
SDO_DIM_ELEMENT('Y', -71957574, 71957574, 1)),99900001);
which means tolerance is 1 meter.
3. Precision - I don't think we need to supply coordinates to MDSYS.SDO_FILTER with such great precision. Maybe rounding them (dropping the fractional part entirely; 1 meter should be enough) would help. If not, then maybe we need to round the coordinates we generate when converting from longitude/latitude to Mercator and store in the SECTORS table. But on the other hand, the SECTORS table defines a tolerance of 1 meter in its metadata, and the spatial index should use that tolerance. -
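As a sketch of the rounding idea above: the original window ordinates rounded outward to whole metres (so the window can only grow, never lose rows), which stays within the 1-metre layer tolerance declared in USER_SDO_GEOM_METADATA:

```sql
-- Same window filter, with the ordinates rounded outward to whole metres
SELECT COUNT(*)
FROM   sectors s
WHERE  MDSYS.SDO_FILTER(
         s.ant_dir_geoloc,
         MDSYS.SDO_GEOMETRY(2003, 99900001, NULL,
           MDSYS.SDO_ELEM_INFO_ARRAY(1, 1003, 3),
           MDSYS.SDO_ORDINATE_ARRAY(-10644927, 3111428, -8140237, 5605277)),
         'querytype=WINDOW') = 'TRUE';
```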
Performance issues with pipelined table functions
I am testing pipelined table functions to be able to re-use the base_query function. Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? The processor function is from "Improving Performance with Pipelined Table Functions" (http://www.oracle-developer.net/display.php?id=429).
Edit: The underlying query returns 500,000 rows in about 3 minutes, so there are no performance issues with the query itself.
Many thanks in advance.
CREATE OR REPLACE PACKAGE pipeline_example
IS
TYPE resultset_typ IS REF CURSOR;
TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
TYPE table_typ IS TABLE OF row_typ;
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ;
c_default_limit CONSTANT PLS_INTEGER := 100;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
END pipeline_example;
CREATE OR REPLACE PACKAGE BODY pipeline_example
IS
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ
IS
o_resultset resultset_typ;
BEGIN
OPEN o_resultset FOR
SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB;
RETURN o_resultset;
END base_query;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
IS
aa_source_data table_typ;-- := table_typ ();
BEGIN
LOOP
FETCH p_source_data
BULK COLLECT INTO aa_source_data
LIMIT p_limit_size;
EXIT WHEN aa_source_data.COUNT = 0;
/* Process the batch of (p_limit_size) records... */
FOR i IN 1 .. aa_source_data.COUNT
LOOP
PIPE ROW (aa_source_data (i));
END LOOP;
END LOOP;
CLOSE p_source_data;
RETURN;
END processor;
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT /*+ PARALLEL(t, 5) */ colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END) de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END) ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN 1 END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN 0 END) de_zero
FROM TABLE (processor (base_query (argA, argB), 100)) t
GROUP BY colC
ORDER BY colC;
END with_pipeline;
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END)de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END)ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN 1 END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN 0 END) de_zero
FROM (SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB)
GROUP BY colC
ORDER BY colC;
END no_pipeline;
END pipeline_example;
ALTER PACKAGE pipeline_example COMPILE;
Earthlink wrote:
Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? Well, we're missing a lot here.
Like:
- a database version
- how did you test
- what data do you have, how is it distributed, indexed
and so on.
If you want to find out what's going on then use a TRACE with wait events.
All necessary steps are explained in these threads:
HOW TO: Post a SQL statement tuning request - template posting
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
Another nice one is RUNSTATS:
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701
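As a minimal sketch of the suggested trace-with-waits approach (DBMS_MONITOR is available from 10g on; the resulting trace file in user_dump_dest can then be formatted with tkprof):

```sql
-- Enable SQL trace for the current session, including wait events
BEGIN
  DBMS_MONITOR.session_trace_enable(waits => TRUE, binds => FALSE);
END;
/
-- ... run the with_pipeline and no_pipeline tests here ...
BEGIN
  DBMS_MONITOR.session_trace_disable;
END;
/
```

Comparing the wait-event sections of the two trace files should show where the extra time in the pipelined version is spent.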