Merge the Query
SELECT
MAX(fndattdoc.LAST_UPDATE_DATE ) as LAST_UPDATE_DATE,
MAX(DECODE(fndcatusg.format,'H', st.short_text,NULL,st.short_text, NULL)) as COMMENTS,
MAX(fnddoc.description) as REASON
FROM fnd_attachment_functions fndattfn,
fnd_doc_category_usages fndcatusg,
fnd_documents_vl fnddoc,
fnd_attached_documents fndattdoc,
fnd_documents_short_text st,
fnd_document_categories_tl fl,
WSH_NEW_DELIVERIES DLVRY
WHERE fndattfn.attachment_function_id = fndcatusg.attachment_function_id
AND fndcatusg.category_id = fnddoc.category_id
AND fnddoc.document_id = fndattdoc.document_id
AND fndattfn.function_name = 'WSHFSTRX'
AND fndattdoc.entity_name = 'WSH_NEW_DELIVERIES'
AND fl.CATEGORY_ID = fnddoc.category_id
AND fl.LANGUAGE = 'US'
AND fl.USER_NAME ='Delivery Failure'
AND fndattdoc.pk1_value =DLVRY.DELIVERY_ID
AND FNDDOC.DESCRIPTION NOT IN ('Corrected Actual Delivery Date','Corrected Promised Date')
AND fnddoc.media_id=st.media_id
GROUP BY fndattdoc.pk1_value
SELECT
MAX(DECODE(fndcatusg.format,'H', st.short_text,NULL,st.short_text, NULL)) as CORRECTD_ACTUAL_DELIVERY_DATE,
MAX(fndattdoc.LAST_UPDATE_DATE ) as LAST_UPDATE_DATE
FROM fnd_attachment_functions fndattfn,
fnd_doc_category_usages fndcatusg,
fnd_documents_vl fnddoc,
fnd_attached_documents fndattdoc,
fnd_documents_short_text st,
fnd_document_categories_tl fl,
WSH_NEW_DELIVERIES DLVRY
WHERE fndattfn.attachment_function_id = fndcatusg.attachment_function_id
AND fndcatusg.category_id = fnddoc.category_id
AND fnddoc.document_id = fndattdoc.document_id
AND fndattfn.function_name = 'WSHFSTRX'
AND fndattdoc.entity_name = 'WSH_NEW_DELIVERIES'
AND fl.CATEGORY_ID = fnddoc.category_id
AND fl.LANGUAGE = 'US'
AND fl.USER_NAME ='Delivery Failure'
AND fndattdoc.pk1_value =DLVRY.DELIVERY_ID
AND FNDDOC.DESCRIPTION = 'Corrected Actual Delivery Date'
AND fnddoc.media_id=st.media_id
GROUP BY fndattdoc.pk1_value
SELECT
MAX(DECODE(fndcatusg.format,'H', st.short_text,NULL,st.short_text, NULL) ) AS CORRECTD_PROMISE_DATE,
MAX(fndattdoc.LAST_UPDATE_DATE ) as LAST_UPDATE_DATE
FROM fnd_attachment_functions fndattfn,
fnd_doc_category_usages fndcatusg,
fnd_documents_vl fnddoc,
fnd_attached_documents fndattdoc,
fnd_documents_short_text st,
fnd_document_categories_tl fl,
WSH_NEW_DELIVERIES DLVRY
WHERE fndattfn.attachment_function_id = fndcatusg.attachment_function_id
AND fndcatusg.category_id = fnddoc.category_id
AND fnddoc.document_id = fndattdoc.document_id
AND fndattfn.function_name = 'WSHFSTRX'
AND fndattdoc.entity_name = 'WSH_NEW_DELIVERIES'
AND fl.CATEGORY_ID = fnddoc.category_id
AND fl.LANGUAGE = 'US'
AND fl.USER_NAME ='Delivery Failure'
AND fndattdoc.pk1_value =DLVRY.DELIVERY_ID
AND FNDDOC.DESCRIPTION = 'Corrected Promised Date'
AND fnddoc.media_id=st.media_id
GROUP BY fndattdoc.pk1_value
Hi, I have the above three SELECT statements and need to merge them into a single SELECT statement. All three share the same join and filter conditions, except that each statement has one filter condition that differs (I highlighted the differing conditions in bold in each query). The merged statement should finally return 7 columns like
LAST_UPDATE_DATE, COMMENTS, REASON, CORRECTD_ACTUAL_DELIVERY_DATE, LAST_UPDATE_DATE, CORRECTD_PROMISE_DATE, LAST_UPDATE_DATE
Please help on this
Thanks
Venki
Use CASE?
Possibly something like this:
SELECT
MAX(CASE WHEN FNDDOC.DESCRIPTION NOT IN ('Corrected Actual Delivery Date','Corrected Promised Date')
THEN fndattdoc.LAST_UPDATE_DATE
END) as LAST_UPDATE_DATE,
MAX(CASE WHEN FNDDOC.DESCRIPTION NOT IN ('Corrected Actual Delivery Date','Corrected Promised Date')
THEN DECODE(fndcatusg.format,'H', st.short_text,NULL,st.short_text, NULL)
END) as COMMENTS,
MAX(CASE WHEN FNDDOC.DESCRIPTION NOT IN ('Corrected Actual Delivery Date','Corrected Promised Date')
THEN fnddoc.description
END) as REASON,
MAX(CASE WHEN FNDDOC.DESCRIPTION = 'Corrected Actual Delivery Date'
THEN DECODE(fndcatusg.format,'H', st.short_text,NULL,st.short_text, NULL)
END) as CORRECTD_ACTUAL_DELIVERY_DATE,
MAX(CASE WHEN FNDDOC.DESCRIPTION = 'Corrected Actual Delivery Date'
THEN fndattdoc.LAST_UPDATE_DATE
END) as LAST_UPDATE_DATE,
MAX(CASE WHEN FNDDOC.DESCRIPTION = 'Corrected Promised Date'
THEN DECODE(fndcatusg.format,'H', st.short_text,NULL,st.short_text, NULL)
END) AS CORRECTD_PROMISE_DATE,
MAX(CASE WHEN FNDDOC.DESCRIPTION = 'Corrected Promised Date'
THEN fndattdoc.LAST_UPDATE_DATE
END) as LAST_UPDATE_DATE
FROM fnd_attachment_functions fndattfn,
fnd_doc_category_usages fndcatusg,
fnd_documents_vl fnddoc,
fnd_attached_documents fndattdoc,
fnd_documents_short_text st,
fnd_document_categories_tl fl,
WSH_NEW_DELIVERIES DLVRY
WHERE fndattfn.attachment_function_id = fndcatusg.attachment_function_id
AND fndcatusg.category_id = fnddoc.category_id
AND fnddoc.document_id = fndattdoc.document_id
AND fndattfn.function_name = 'WSHFSTRX'
AND fndattdoc.entity_name = 'WSH_NEW_DELIVERIES'
AND fl.CATEGORY_ID = fnddoc.category_id
AND fl.LANGUAGE = 'US'
AND fl.USER_NAME ='Delivery Failure'
AND fndattdoc.pk1_value =DLVRY.DELIVERY_ID
AND fnddoc.media_id=st.media_id
GROUP BY fndattdoc.pk1_value
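If the CASE-inside-MAX pattern is unfamiliar, it can help to see conditional aggregation in isolation. Below is a minimal, hypothetical sketch of the same technique using SQLite via Python; the table and column names are invented stand-ins for the attachments join, not the real FND tables:

```python
import sqlite3

# Toy stand-in for the attachments join: one row per (delivery, description).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE docs (delivery_id INTEGER, description TEXT, short_text TEXT, last_update TEXT);
INSERT INTO docs VALUES
  (1, 'Late truck',                     'driver delayed', '2020-01-01'),
  (1, 'Corrected Actual Delivery Date', '2020-01-05',     '2020-01-02'),
  (1, 'Corrected Promised Date',        '2020-01-06',     '2020-01-03');
""")

# Conditional aggregation: each CASE picks out only the rows matching one
# of the three original filters, so one pass replaces three separate queries.
row = conn.execute("""
SELECT delivery_id,
       MAX(CASE WHEN description NOT IN ('Corrected Actual Delivery Date','Corrected Promised Date')
                THEN short_text END) AS comments,
       MAX(CASE WHEN description = 'Corrected Actual Delivery Date'
                THEN short_text END) AS corrected_actual,
       MAX(CASE WHEN description = 'Corrected Promised Date'
                THEN short_text END) AS corrected_promise
FROM docs
GROUP BY delivery_id
""").fetchone()
print(row)  # (1, 'driver delayed', '2020-01-05', '2020-01-06')
```

One caveat on the merged query as posted: it reuses the alias LAST_UPDATE_DATE three times, which most clients tolerate in ad-hoc SQL but which should be given distinct names (e.g. a suffix per source filter) if the query ever feeds a view or report.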
Similar Messages
-
Hi,
I need to merge the two queries below. Could someone please help?
The major_id column carries the primary/foreign key relationship:
major.major_id is the primary key.
major_inactive_list.major_id is the referencing (foreign key) column.
SELECT m.major_inactive_id, m.institution_id, m.major_id, m.created_on,
m.modified_on, m.created_by, m.modified_by, m.active_status
FROM major_inactive_list m
WHERE m.institution_id IN (1, 5)
ORDER BY m.institution_id
SELECT M.major_id ,
M.major_name ,
M.major_code ,
M.major_comment,
M.delete_status,
M.active_status,
M.institution_id,
M.default_major
FROM major M
WHERE (M.institution_id IN (1, 5, 6))
AND M.delete_status = '0'
AND M.active_status = '1'
AND (M.major_id NOT IN (768002, 767978))
OR M.institution_id = 6
AND M.active_status = '0'
AND M.delete_status = '0'
ORDER BY M.institution_id DESC, UPPER (M.major_name)
This needs to be rewritten as a single query instead of two. There should be some logic behind making it a single query, and I was looking for you to explain that. If you asked me to make it a single query without giving any details, I would do this:
with t1 as (
SELECT m.major_inactive_id, m.institution_id, m.major_id, m.created_on,
m.modified_on, m.created_by, m.modified_by, m.active_status
FROM major_inactive_list m
WHERE m.institution_id IN (1, 5)
),
t2 as (
SELECT M.major_id ,
M.major_name ,
M.major_code ,
M.major_comment,
M.delete_status,
M.active_status,
M.institution_id,
M.default_major
FROM major M
WHERE (M.institution_id IN (1, 5, 6)
AND M.delete_status = '0'
AND M.active_status = '1'
AND M.major_id NOT IN (768002, 767978))
OR (M.institution_id = 6
AND M.active_status = '0'
AND M.delete_status = '0')
)
select t1.*, t2.*
from t1, t2
where t1.major_id = t2.major_id
-
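As a sanity check on the WITH-clause approach in the thread above, here is a small runnable sketch (SQLite via Python, with invented toy data) showing that joining the two CTEs on the primary/foreign key column yields one combined row per major_id present in both result sets. Note this is an inner join: inactive-list rows whose major falls outside t2's filters disappear, so an outer join would be needed to keep them.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE major (major_id INTEGER PRIMARY KEY, major_name TEXT, institution_id INTEGER);
CREATE TABLE major_inactive_list (major_inactive_id INTEGER, major_id INTEGER REFERENCES major);
INSERT INTO major VALUES (1,'Math',1),(2,'Physics',5),(3,'History',6);
INSERT INTO major_inactive_list VALUES (10,1),(11,2);
""")

# Joining the two CTEs on the PK/FK column returns one combined row per
# major that appears in both result sets (an inner join).
rows = conn.execute("""
WITH t1 AS (SELECT major_inactive_id, major_id FROM major_inactive_list),
     t2 AS (SELECT major_id, major_name, institution_id FROM major
            WHERE institution_id IN (1, 5))
SELECT t1.major_inactive_id, t2.major_name
FROM t1 JOIN t2 ON t1.major_id = t2.major_id
ORDER BY t1.major_inactive_id
""").fetchall()
print(rows)  # [(10, 'Math'), (11, 'Physics')]
```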
How to improve the query performance or tune query from Explain Plan
Hi
The following is the explain plan for my SQL query (generated by Toad v9.7). How can I fix the query?
SELECT STATEMENT ALL_ROWSCost: 4,160 Bytes: 25,296 Cardinality: 204
8 NESTED LOOPS Cost: 3 Bytes: 54 Cardinality: 1
5 NESTED LOOPS Cost: 2 Bytes: 23 Cardinality: 1
2 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 13 Cardinality: 1
1 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1
4 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_CUST_ACCOUNTS Cost: 1 Bytes: 10 Cardinality: 1
3 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_CUST_ACCOUNTS_U1 Cost: 1 Cardinality: 1
7 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_PARTIES Cost: 1 Bytes: 31 Cardinality: 1
6 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_PARTIES_U1 Cost: 1 Cardinality: 1
10 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1
9 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1
15 NESTED LOOPS Cost: 2 Bytes: 29 Cardinality: 1
12 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1
11 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1
14 TABLE ACCESS BY INDEX ROWID TABLE ONT.OE_ORDER_HEADERS_ALL Cost: 1 Bytes: 17 Cardinality: 1
13 INDEX RANGE SCAN INDEX (UNIQUE) ONT.OE_ORDER_HEADERS_U2 Cost: 1 Cardinality: 1
21 FILTER
16 TABLE ACCESS FULL TABLE ONT.OE_TRANSACTION_TYPES_TL Cost: 2 Bytes: 1,127 Cardinality: 49
20 NESTED LOOPS Cost: 2 Bytes: 21 Cardinality: 1
18 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1
17 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1
19 INDEX RANGE SCAN INDEX (UNIQUE) ONT.OE_ORDER_HEADERS_U2 Cost: 1 Bytes: 9 Cardinality: 1
23 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1
22 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1
45 NESTED LOOPS Cost: 4,160 Bytes: 25,296 Cardinality: 204
42 NESTED LOOPS Cost: 4,150 Bytes: 23,052 Cardinality: 204
38 NESTED LOOPS Cost: 4,140 Bytes: 19,992 Cardinality: 204
34 NESTED LOOPS Cost: 4,094 Bytes: 75,850 Cardinality: 925
30 NESTED LOOPS Cost: 3,909 Bytes: 210,843 Cardinality: 3,699
26 PARTITION LIST ALL Cost: 2,436 Bytes: 338,491 Cardinality: 14,717 Partition #: 29 Partitions accessed #1 - #18
25 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_AE_HEADERS Cost: 2,436 Bytes: 338,491 Cardinality: 14,717 Partition #: 29 Partitions accessed #1 - #18
24 INDEX SKIP SCAN INDEX XLA.XLA_AE_HEADERS_N1 Cost: 264 Cardinality: 1,398,115 Partition #: 29 Partitions accessed #1 - #18
29 PARTITION LIST ITERATOR Cost: 1 Bytes: 34 Cardinality: 1 Partition #: 32
28 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_AE_LINES Cost: 1 Bytes: 34 Cardinality: 1 Partition #: 32
27 INDEX RANGE SCAN INDEX (UNIQUE) XLA.XLA_AE_LINES_U1 Cost: 1 Cardinality: 1 Partition #: 32
33 PARTITION LIST ITERATOR Cost: 1 Bytes: 25 Cardinality: 1 Partition #: 35
32 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_DISTRIBUTION_LINKS Cost: 1 Bytes: 25 Cardinality: 1 Partition #: 35
31 INDEX RANGE SCAN INDEX XLA.XLA_DISTRIBUTION_LINKS_N3 Cost: 1 Cardinality: 1 Partition #: 35
37 PARTITION LIST SINGLE Cost: 1 Bytes: 16 Cardinality: 1 Partition #: 38
36 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_EVENTS Cost: 1 Bytes: 16 Cardinality: 1 Partition #: 39 Partitions accessed #2
35 INDEX UNIQUE SCAN INDEX (UNIQUE) XLA.XLA_EVENTS_U1 Cost: 1 Cardinality: 1 Partition #: 40 Partitions accessed #2
41 PARTITION LIST SINGLE Cost: 1 Bytes: 15 Cardinality: 1 Partition #: 41
40 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_TRANSACTION_ENTITIES Cost: 1 Bytes: 15 Cardinality: 1 Partition #: 42 Partitions accessed #2
39 INDEX UNIQUE SCAN INDEX (UNIQUE) XLA.XLA_TRANSACTION_ENTITIES_U1 Cost: 1 Cardinality: 1 Partition #: 43 Partitions accessed #2
44 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_CODE_COMBINATIONS Cost: 1 Bytes: 11 Cardinality: 1
43 INDEX UNIQUE SCAN INDEX (UNIQUE) GL.GL_CODE_COMBINATIONS_U1 Cost: 1 Cardinality: 1
damorgan wrote:
Tuning is NOT about reducing the cost of i/o.
i/o is only one of many contributors to cost and only one of many contributors to waits.
Any time you would like to explore this further run this code:
SELECT 1 FROM dual
WHERE regexp_like(' ','^*[ ]*a');
but not on a production box because you are going to experience an extreme tuning event with zero i/o.
And when I say "extreme" I mean "EXTREME!"
You've been warned.
I think you just need a faster server.
SQL> set autotrace traceonly statistics
SQL> set timing on
SQL> select 1 from dual
2 where
3 regexp_like (' ','^*[ ]*a');
no rows selected
Elapsed: 00:00:00.00
Statistics
1 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
243 bytes sent via SQL*Net to client
349 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed
Repeated from an Oracle 10.2.0.x instance:
SQL> SELECT DISTINCT SID FROM V$MYSTAT;
SID
310
SQL> ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
Session altered.
SQL> select 1 from dual
2 where
3 regexp_like (' ','^*[ ]*a');
The session is hung. Wait a little while and connect to the database using a different session:
COLUMN STAT_NAME FORMAT A35 TRU
SET PAGESIZE 200
SELECT
STAT_NAME,
VALUE
FROM
V$SESS_TIME_MODEL
WHERE
SID=310;
STAT_NAME VALUE
DB time 9247
DB CPU 9247
background elapsed time 0
background cpu time 0
sequence load elapsed time 0
parse time elapsed 6374
hard parse elapsed time 5997
sql execute elapsed time 2939
connection management call elapsed 1660
failed parse elapsed time 0
failed parse (out of shared memory) 0
hard parse (sharing criteria) elaps 0
hard parse (bind mismatch) elapsed 0
PL/SQL execution elapsed time 95
inbound PL/SQL rpc elapsed time 0
PL/SQL compilation elapsed time 0
Java execution elapsed time 0
repeated bind elapsed time 48
RMAN cpu time (backup/restore) 0
Seems to be using a bit of time for the hard parse (hard parse elapsed time). Wait a little while, then re-execute the query:
STAT_NAME VALUE
DB time 9247
DB CPU 9247
background elapsed time 0
background cpu time 0
sequence load elapsed time 0
parse time elapsed 6374
hard parse elapsed time 5997
sql execute elapsed time 2939
connection management call elapsed 1660
failed parse elapsed time 0
failed parse (out of shared memory) 0
hard parse (sharing criteria) elaps 0
hard parse (bind mismatch) elapsed 0
PL/SQL execution elapsed time 95
inbound PL/SQL rpc elapsed time 0
PL/SQL compilation elapsed time 0
Java execution elapsed time 0
repeated bind elapsed time 48
RMAN cpu time (backup/restore) 0
The session is not reporting additional CPU usage or parse time.
Let's check one of the session's statistics:
SELECT
SS.VALUE
FROM
V$SESSTAT SS,
V$STATNAME SN
WHERE
SN.NAME='consistent gets'
AND SN.STATISTIC#=SS.STATISTIC#
AND SS.SID=310;
VALUE
163
Not many consistent gets after 20+ minutes.
Let's take a look at the plan:
SQL> SELECT SQL_ID,CHILD_NUMBER FROM V$SQL WHERE SQL_TEXT LIKE 'select 1 from dual%';
SQL_ID CHILD_NUMBER
04mpgrzhsv72w 0
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('04mpgrzhsv72w',0,'TYPICAL'))
select 1 from dual where regexp_like (' ','^*[ ]*a')
NOTE: cannot fetch plan for SQL_ID: 04mpgrzhsv72w, CHILD_NUMBER: 0
Please verify value of SQL_ID and CHILD_NUMBER;
It could also be that the plan is no longer in cursor cache (check v$sql_plan)
No plan...
Let's take a look at the 10053 trace file:
Registered qb: SEL$1 0x19157f38 (PARSER)
signature (): qb_name=SEL$1 nbfros=1 flg=0
fro(0): flg=4 objn=258 hint_alias="DUAL"@"SEL$1"
Predicate Move-Around (PM)
PM: Considering predicate move-around in SEL$1 (#0).
PM: Checking validity of predicate move-around in SEL$1 (#0).
CBQT: Validity checks failed for 7uqx4guu04x3g.
CVM: Considering view merge in query block SEL$1 (#0)
CBQT: Validity checks failed for 7uqx4guu04x3g.
Subquery Unnest
SU: Considering subquery unnesting in query block SEL$1 (#0)
Set-Join Conversion (SJC)
SJC: Considering set-join conversion in SEL$1 (#0).
Predicate Move-Around (PM)
PM: Considering predicate move-around in SEL$1 (#0).
PM: Checking validity of predicate move-around in SEL$1 (#0).
PM: PM bypassed: Outer query contains no views.
FPD: Considering simple filter push in SEL$1 (#0)
FPD: Current where clause predicates in SEL$1 (#0) :
REGEXP_LIKE (' ','^*[ ]*a')
kkogcp: try to generate transitive predicate from check constraints for SEL$1 (#0)
predicates with check contraints: REGEXP_LIKE (' ','^*[ ]*a')
after transitive predicate generation: REGEXP_LIKE (' ','^*[ ]*a')
finally: REGEXP_LIKE (' ','^*[ ]*a')
apadrv-start: call(in-use=592, alloc=16344), compile(in-use=37448, alloc=42256)
kkoqbc-start
: call(in-use=592, alloc=16344), compile(in-use=38336, alloc=42256)
kkoqbc-subheap (create addr=000000001915C238)
Looks like the query never had a chance to start executing - it is still parsing after 20 minutes.
I am not sure that this is a good example - the query either executes very fast, or never has a chance to start executing. But it might still make your point: physical I/O is not always the problem when performance problems are experienced.
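The exact behaviour of regexp_like(' ','^*[ ]*a') is Oracle-specific, but the general failure mode (a backtracking regex engine exploring a combinatorial number of ways to match before giving up, so the statement burns CPU with zero I/O) can be reproduced in miniature with Python's re module. The pattern below is an invented textbook example, not the Oracle one:

```python
import re
import time

# Classic catastrophic-backtracking shape: (a|a)* can consume a run of 'a's
# in 2^n different ways, and every one must fail before the final 'b' does.
pattern = re.compile(r'^(a|a)*b$')

# A matching input succeeds almost immediately...
assert pattern.match('a' * 15 + 'b') is not None

# ...but a near-miss forces the engine through every alternation choice.
# Work roughly doubles per extra 'a'; n is kept small so this stays quick.
start = time.perf_counter()
assert pattern.match('a' * 15) is None  # no trailing 'b': exhaustive failure
print(f"exhaustive failure took {time.perf_counter() - start:.4f}s")
```

Adding a few more 'a' characters makes the failing match take seconds, then minutes: the same "all parse/CPU, no I/O" signature described in this thread.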
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
We are getting multiple 8623 Errors in SQL Log while running Vendor's software.
How can you catch which Query causes the error?
I tried to catch it using SQL Profiler Trace but it doesn't show which Query/Sp is the one causing an error.
I also tried to use Extended Event session to catch it, but it doesn't create any output either.
Error:
The query processor ran out of internal resources and could not produce a query plan. This is a rare event and only expected for extremely complex queries or queries that
reference a very large number of tables or partitions. Please simplify the query. If you believe you have received this message in error, contact Customer Support Services for more information.
Extended Event Session that I used;
CREATE EVENT SESSION
overly_complex_queries
ON SERVER
ADD EVENT sqlserver.error_reported
ACTION (sqlserver.sql_text, sqlserver.tsql_stack, sqlserver.database_id, sqlserver.username)
WHERE ([severity] = 16
AND [error_number] = 8623)
ADD TARGET package0.asynchronous_file_target
(SET filename = 'E:\SQLServer2012\MSSQL11.MSSQLSERVER\MSSQL\Log\XE\overly_complex_queries.xel' ,
metadatafile = 'E:\SQLServer2012\MSSQL11.MSSQLSERVER\MSSQL\Log\XE\overly_complex_queries.xem',
max_file_size = 10,
max_rollover_files = 5)
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS)
GO
-- Start the session
ALTER EVENT SESSION overly_complex_queries
ON SERVER STATE = START
GO
It creates only .xel file, but not .xem
Any help/advice is greatly appreciated
Hi VK_DBA,
According to your error message, the statement failing with error 8623 can be worked around, as mentioned in the other post, with trace flags 4102 & 4118. Another approach is to look for queries with very long IN lists, a large number of UNIONs, or a large number of nested sub-queries; these are the most common causes of this particular error message.
Error 8623 occurs when attempting to select records through a query with a large number of entries in the "IN" clause (> 10,000). To avoid this error, I suggest that you apply the latest Cumulative Update for SQL Server 2012 Service Pack 1 and then simplify the query. You may try a divide-and-conquer approach: get part of the query working (as a temp table) and then add the extra joins / conditions. Or you could try running the query with the hint OPTION (FORCE ORDER), OPTION (HASH JOIN), or OPTION (MERGE JOIN) via a plan guide.
For more information about error 8623, you can review the following article.
http://blogs.technet.com/b/mdegre/archive/2012/03/13/8623-the-query-processor-ran-out-of-internal-resources-and-could-not-produce-a-query-plan.aspx
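The "divide and conquer" suggestion above, materializing part of the predicate as a temp table instead of a huge literal IN list, can be sketched generically. This hypothetical example uses SQLite via Python (the posted problem is SQL Server, so this only illustrates the shape of the rewrite, not T-SQL syntax):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i * 10) for i in range(1000)])

wanted = list(range(0, 1000, 7))   # stand-in for a huge literal IN list

# Instead of 'WHERE id IN (0, 7, 14, ...)' with thousands of literals,
# load the values into a temp table and join: the planner handles a join
# far more gracefully than a giant IN list.
conn.execute("CREATE TEMP TABLE wanted_ids (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO wanted_ids VALUES (?)", [(i,) for i in wanted])

total = conn.execute(
    "SELECT COUNT(*) FROM orders o JOIN wanted_ids w ON o.id = w.id"
).fetchone()[0]
print(total)  # 143
```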
Regards,
Sofiya Li
Sofiya Li
TechNet Community Support -
Can users see the query plan of a SQL query in Oracle?
Hi,
For a given SQL query, after the optimizer has processed it, can I see the query plan in Oracle? If yes, how do I do that? Thank you.
Xing
You can use EXPLAIN PLAN in SQL*Plus:
SQL> explain plan for select * from user_tables;
Explained.
Elapsed: 00:00:01.63
SQL> select * from table(dbms_xplan.display());
PLAN_TABLE_OUTPUT
Plan hash value: 806004009
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2014 | 1123K| 507 (6)| 00:00:07 |
|* 1 | HASH JOIN RIGHT OUTER | | 2014 | 1123K| 507 (6)| 00:00:07 |
| 2 | TABLE ACCESS FULL | SEG$ | 4809 | 206K| 34 (3)| 00:00:01 |
|* 3 | HASH JOIN RIGHT OUTER | | 1697 | 873K| 472 (6)| 00:00:06 |
| 4 | TABLE ACCESS FULL | USER$ | 74 | 1036 | 3 (0)| 00:00:01 |
|* 5 | HASH JOIN OUTER | | 1697 | 850K| 468 (6)| 00:00:06 |
| 6 | NESTED LOOPS OUTER | | 1697 | 836K| 315 (6)| 00:00:04 |
|* 7 | HASH JOIN | | 1697 | 787K| 226 (8)| 00:00:03 |
| 8 | TABLE ACCESS FULL | TS$ | 13 | 221 | 5 (0)| 00:00:01 |
| 9 | NESTED LOOPS | | 1697 | 759K| 221 (8)| 00:00:03 |
| 10 | MERGE JOIN CARTESIAN | | 1697 | 599K| 162 (10)| 00:00:02 |
|* 11 | HASH JOIN | | 1 | 326 | 1 (100)| 00:00:01 |
|* 12 | FIXED TABLE FULL | X$KSPPI | 1 | 55 | 0 (0)| 00:00:01 |
| 13 | FIXED TABLE FULL | X$KSPPCV | 100 | 27100 | 0 (0)| 00:00:01 |
| 14 | BUFFER SORT | | 1697 | 61092 | 162 (10)| 00:00:02 |
|* 15 | TABLE ACCESS FULL | OBJ$ | 1697 | 61092 | 161 (10)| 00:00:02 |
|* 16 | TABLE ACCESS CLUSTER | TAB$ | 1 | 96 | 1 (0)| 00:00:01 |
|* 17 | INDEX UNIQUE SCAN | I_OBJ# | 1 | | 0 (0)| 00:00:01 |
| 18 | TABLE ACCESS BY INDEX ROWID| OBJ$ | 1 | 30 | 1 (0)| 00:00:01 |
|* 19 | INDEX UNIQUE SCAN | I_OBJ1 | 1 | | 0 (0)| 00:00:01 |
| 20 | TABLE ACCESS FULL | OBJ$ | 52728 | 411K| 151 (4)| 00:00:02 |
Predicate Information (identified by operation id):
1 - access("T"."FILE#"="S"."FILE#"(+) AND "T"."BLOCK#"="S"."BLOCK#"(+) AND
"T"."TS#"="S"."TS#"(+))
3 - access("CX"."OWNER#"="CU"."USER#"(+))
5 - access("T"."DATAOBJ#"="CX"."OBJ#"(+))
7 - access("T"."TS#"="TS"."TS#")
11 - access("KSPPI"."INDX"="KSPPCV"."INDX")
12 - filter("KSPPI"."KSPPINM"='_dml_monitoring_enabled')
15 - filter("O"."OWNER#"=USERENV('SCHEMAID') AND BITAND("O"."FLAGS",128)=0)
16 - filter(BITAND("T"."PROPERTY",1)=0)
17 - access("O"."OBJ#"="T"."OBJ#")
19 - access("T"."BOBJ#"="CO"."OBJ#"(+))
42 rows selected.
Elapsed: 00:00:03.61
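For readers without an Oracle instance handy, the same explain-plan workflow can be tried in miniature with SQLite's EXPLAIN QUERY PLAN, a much simpler facility than DBMS_XPLAN, used here purely as an illustrative stand-in:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.execute("CREATE INDEX t_val ON t (val)")

# EXPLAIN QUERY PLAN is SQLite's analogue of Oracle's
# EXPLAIN PLAN FOR ... + DBMS_XPLAN.DISPLAY: it reports the access path
# the optimizer chose, without executing the statement.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE val = 'x'"
).fetchall()
detail = plan[0][-1]
print(detail)   # e.g. "SEARCH t USING INDEX t_val (val=?)"
```

The exact wording of the detail column varies between SQLite versions, but a predicate on an indexed column should always resolve via the index rather than a full scan.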
SQL>
If your plan table does not exist, execute the script $ORACLE_HOME/RDBMS/ADMIN/utlxplan.sql to create the table.
-
Help to rewrite the query -- performance issue
Hi,
Please help rewrite this query, since its performance is not good. In particular, the second inline query (the one with the CASE statements in its SELECT clause) is taking most of the cost.
Database: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit
SELECT *
FROM
(SELECT q.*,
COUNT(*) OVER() AS record_count,
ROWNUM AS row_num
FROM
(SELECT ExName.examiner_code,
examiner_name,
:v_year,
:v_month,
count_fb,
NVL(count_entered_fb, 0) count_entered_fb,
NVL(count_sent_fb, 0) count_sent_fb,
NVL(count_edited_fb, 0) count_edited_fb,
NVL(count_complete_fb, 0) count_complete_fb,
NVL(count_withibcardiff_fb, 0) count_withibcardiff_fb
FROM
(SELECT examiner_code,
COUNT(*) AS count_fb
FROM
(SELECT
examiner_code,
paper_code,
assessment_school
FROM
( SELECT DISTINCT ce.examiner_code,
ce.paper_code,
ce.assessment_school
FROM
(SELECT
DISTINCT assessment_school,
paper_code,
examiner_code
FROM candidate_examiner_allocation cea
WHERE cea.element = 'Moderation of IA'
AND cea.year = :v_year
AND cea.month = :v_month
) ce,
subject_group sg,
subject_component sc
WHERE (:v_padded_examiner_code IS NULL
OR ce.examiner_code = :v_padded_examiner_code)
AND (:v_subject_group IS NULL
OR sg.group_number = :v_subject_group)
AND sg.year = :v_year
AND sg.month = :v_month
AND sc.year = :v_year
AND sc.month = :v_month
AND sc.paper_code = ce.paper_code
AND sc.subject = sg.subject
AND sc.lvl = sg.lvl
AND (:v_subject IS NULL
OR sc.subject = :v_subject)
AND (:v_lvl IS NULL
OR sc.lvl = :v_lvl)
) ea
GROUP BY examiner_code
) ExName,
(SELECT examiner_code,
COUNT(
CASE
WHEN UPPER(wfi.status) = 'ENTERED'
THEN 1
ELSE NULL
END) AS count_entered_fb,
COUNT(
CASE
WHEN UPPER(wfi.status) = 'SENT'
THEN 1
ELSE NULL
END) AS count_sent_fb,
COUNT(
CASE
WHEN UPPER(wfi.status) = 'EDITED'
THEN 1
ELSE NULL
END) AS count_edited_fb,
COUNT(
CASE
WHEN UPPER(wfi.status) = 'COMPLETE'
THEN 1
ELSE NULL
END) AS count_complete_fb,
COUNT(
CASE
WHEN UPPER(wfi.status) = 'WITH IBCARDIFF'
THEN 1
ELSE NULL
END) AS count_withibcardiff_fb
FROM ia_instances ia1,
workflow_instance wfi
WHERE wfi.instance_id = ia1.workflow_instance_id
AND ia1.year = :v_year
AND ia1.month = :v_month
GROUP BY ia1.year,
ia1.month,
examiner_code
) iaF,
(SELECT person_code,
title
|| ' '
|| firstname
|| ' '
|| lastname AS examiner_name
FROM person
WHERE :v_examiner_name IS NULL
OR UPPER(title
|| ' '
|| firstname
|| ' '
|| lastname) LIKE :v_search_examiner_name
) P
WHERE ExName.examiner_code = iaF.examiner_code (+)
AND ExName.examiner_code = p.person_code
ORDER BY ExName.examiner_code
) q
) rc
WHERE row_num >= :v_start_row
AND row_num <= (:v_start_row+(:v_max_row-1));
explain plan
line 1: SQLPLUS Command Skipped: set linesize 130
line 2: SQLPLUS Command Skipped: set pagesize 0
PLAN_TABLE_OUTPUT
Plan hash value: 1581970599
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 276 | | 2187 (6)| 00:00:34 |
|* 1 | FILTER | | | | | | |
|* 2 | VIEW | | 1 | 276 | | 2187 (6)| 00:00:34 |
| 3 | WINDOW BUFFER | | 1 | 250 | | 2187 (6)| 00:00:34 |
| 4 | COUNT | | | | | | |
| 5 | VIEW | | 1 | 250 | | 2187 (6)| 00:00:34 |
| 6 | SORT ORDER BY | | 1 | 119 | | 2187 (6)| 00:00:34 |
| 7 | NESTED LOOPS | | 1 | 119 | | 2186 (6)| 00:00:34 |
|* 8 | HASH JOIN OUTER | | 1 | 92 | | 2185 (6)| 00:00:34 |
| 9 | VIEW | | 1 | 20 | | 51 (4)| 00:00:01 |
| 10 | SORT GROUP BY | | 1 | 7 | | 51 (4)| 00:00:01 |
| 11 | VIEW | | 1 | 7 | | 51 (4)| 00:00:01 |
| 12 | SORT UNIQUE | | 1 | 127 | | 51 (4)| 00:00:01 |
| 13 | NESTED LOOPS | | 1 | 127 | | 50 (2)| 00:00:01 |
|* 14 | HASH JOIN | | 1 | 68 | | 44 (3)| 00:00:01 |
|* 15 | TABLE ACCESS BY INDEX ROWID| SUBJECT_COMPONENT | 13 | 520 | | 40 (0)| 00:00:01 |
|* 16 | INDEX RANGE SCAN | SUBJECT_COMPONENT_ASSESS_TYPE | 1059 | | | 9 (0)| 00:00:01 |
|* 17 | INDEX RANGE SCAN | SUBJECT_GROUP_PK | 41 | 1148 | | 3 (0)| 00:00:01 |
|* 18 | INDEX RANGE SCAN | CEA_AUTOMATIC_ALLOCATION_STATS | 5 | 295 | | 6 (0)| 00:00:01 |
| 19 | VIEW | | 679 | 48888 | | 2133 (6)| 00:00:33 |
| 20 | SORT GROUP BY | | 679 | 25123 | | 2133 (6)| 00:00:33 |
|* 21 | HASH JOIN | | 52408 | 1893K| 1744K| 2126 (6)| 00:00:33 |
| 22 | TABLE ACCESS BY INDEX ROWID | IA_INSTANCES | 52408 | 1125K| | 688 (1)| 00:00:11 |
|* 23 | INDEX RANGE SCAN | IND_IA_INSTANCES | 49077 | | | 137 (2)| 00:00:03 |
| 24 | TABLE ACCESS FULL | WORKFLOW_INSTANCE | 1075K| 15M| | 960 (7)| 00:00:15 |
|* 25 | TABLE ACCESS BY INDEX ROWID | PERSON | 1 | 27 | | 1 (0)| 00:00:01 |
|* 26 | INDEX UNIQUE SCAN | PERSON_PK | 1 | | | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter(TO_NUMBER(:V_START_ROW)<=TO_NUMBER(:V_START_ROW)+(TO_NUMBER(:V_MAX_ROW)-1))
2 - filter("ROW_NUM">=TO_NUMBER(:V_START_ROW) AND "ROW_NUM"<=TO_NUMBER(:V_START_ROW)+(TO_NUMBER(:V_MAX_ROW)-1))
8 - access("EXNAME"."EXAMINER_CODE"="IAF"."EXAMINER_CODE"(+))
14 - access("SC"."SUBJECT"="SG"."SUBJECT" AND "SC"."LVL"="SG"."LVL")
15 - filter((:V_SUBJECT IS NULL OR "SC"."SUBJECT"=:V_SUBJECT) AND ("SC"."LVL"=:V_LVL OR :V_LVL IS NULL))
16 - access("SC"."YEAR"=TO_NUMBER(:V_YEAR) AND "SC"."MONTH"=:V_MONTH)
17 - access("SG"."YEAR"=TO_NUMBER(:V_YEAR) AND "SG"."MONTH"=:V_MONTH)
filter(:V_SUBJECT_GROUP IS NULL OR "SG"."GROUP_NUMBER"=TO_NUMBER(:V_SUBJECT_GROUP))
18 - access("CEA"."YEAR"=TO_NUMBER(:V_YEAR) AND "CEA"."MONTH"=:V_MONTH AND "SC"."PAPER_CODE"="PAPER_CODE" AND
"CEA"."ELEMENT"='Moderation of IA')
filter("CEA"."ELEMENT"='Moderation of IA' AND (:V_PADDED_EXAMINER_CODE IS NULL OR
"EXAMINER_CODE"=:V_PADDED_EXAMINER_CODE))
21 - access("WFI"."INSTANCE_ID"="IA1"."WORKFLOW_INSTANCE_ID")
23 - access("IA1"."YEAR"=TO_NUMBER(:V_YEAR) AND "IA1"."MONTH"=:V_MONTH)
25 - filter(:V_EXAMINER_NAME IS NULL OR UPPER("TITLE"||' '||"FIRSTNAME"||' '||"LASTNAME") LIKE :V_SEARCH_EXAMINER_NAME)
26 - access("EXNAME"."EXAMINER_CODE"="PERSON_CODE")
53 rows selected
Hi,
please find the right explain plan below.
PLAN_TABLE_OUTPUT
SQL_ID 2ct41vyyzqyh7, child number 0
SELECT * FROM (SELECT q.*, COUNT(*) OVER() AS record_count, ROWNUM AS row_num FROM (SELECT
ExName.examiner_code, examiner_name, :v_year, :v_month, count_fb, NVL(count_entered_fb,
0) count_entered_fb, NVL(count_sent_fb, 0) count_sent_fb, NVL(count_edited_fb, 0) count_edited_fb,
NVL(count_complete_fb, 0) count_complete_fb, NVL(count_withibcardiff_fb, 0) count_withibcardiff_fb FROM
(SELECT examiner_code, COUNT(*) AS count_fb FROM (SELECT
examiner_code, paper_code, assessment_school FROM ( SELECT DISTINCT
ce.examiner_code, ce.paper_code, ce.assessment_school FROM (SELECT
DISTINCT assessment_school,
paper_code, examiner
Plan hash value: 651311258
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | | | | 2785 (100)| |
|* 1 | FILTER | | | | | | |
|* 2 | VIEW | | 4 | 1104 | | 2785 (7)| 00:00:43 |
| 3 | WINDOW BUFFER | | 4 | 1000 | | 2785 (7)| 00:00:43 |
| 4 | COUNT | | | | | | |
| 5 | VIEW | | 4 | 1000 | | 2785 (7)| 00:00:43 |
| 6 | NESTED LOOPS | | 4 | 476 | | 2785 (7)| 00:00:43 |
| 7 | MERGE JOIN OUTER | | 4 | 368 | | 2781 (7)| 00:00:43 |
| 8 | VIEW | | 4 | 80 | | 72 (3)| 00:00:02 |
| 9 | SORT GROUP BY | | 4 | 28 | | 72 (3)| 00:00:02 |
| 10 | VIEW | | 4 | 28 | | 72 (3)| 00:00:02 |
| 11 | SORT UNIQUE | | 4 | 508 | | 72 (3)| 00:00:02 |
| 12 | NESTED LOOPS | | 4 | 508 | | 71 (2)| 00:00:02 |
|* 13 | HASH JOIN | | 1 | 68 | | 44 (3)| 00:00:01 |
|* 14 | TABLE ACCESS BY INDEX ROWID| SUBJECT_COMPONENT | 13 | 520 | | 40 (0)| 00:00:01 |
|* 15 | INDEX RANGE SCAN | SUBJECT_COMPONENT_ASSESS_TYPE | 1059 | | | 9 (0)| 00:00:01 |
|* 16 | INDEX RANGE SCAN | SUBJECT_GROUP_PK | 41 | 1148 | | 3 (0)| 00:00:01 |
|* 17 | INDEX RANGE SCAN | CEA_AUTOMATIC_ALLOCATION_STATS | 30 | 1770 | | 27 (0)| 00:00:01 |
|* 18 | SORT JOIN | | 576 | 41472 | | 2709 (7)| 00:00:42 |
| 19 | VIEW | | 576 | 41472 | | 2708 (7)| 00:00:42 |
| 20 | SORT GROUP BY | | 576 | 21312 | | 2708 (7)| 00:00:42 |
|* 21 | HASH JOIN | | 52408 | 1893K| 1744K| 2701 (7)| 00:00:41 |
|* 22 | TABLE ACCESS FULL | IA_INSTANCES | 52408 | 1125K| | 1263 (6)| 00:00:20 |
| 23 | TABLE ACCESS FULL | WORKFLOW_INSTANCE | 1075K| 15M| | 960 (7)| 00:00:15 |
|* 24 | TABLE ACCESS BY INDEX ROWID | PERSON | 1 | 27 | | 1 (0)| 00:00:01 |
|* 25 | INDEX UNIQUE SCAN | PERSON_PK | 1 | | | 0 (0)| |
Predicate Information (identified by operation id):
1 - filter(TO_NUMBER(:V_START_ROW)<=TO_NUMBER(:V_START_ROW)+(TO_NUMBER(:V_MAX_ROW)-1))
2 - filter(("ROW_NUM">=TO_NUMBER(:V_START_ROW) AND "ROW_NUM"<=TO_NUMBER(:V_START_ROW)+(TO_NUMBER(:V_MAX_ROW)-1)))
13 - access("SC"."SUBJECT"="SG"."SUBJECT" AND "SC"."LVL"="SG"."LVL")
14 - filter(((:V_SUBJECT IS NULL OR "SC"."SUBJECT"=:V_SUBJECT) AND ("SC"."LVL"=:V_LVL OR :V_LVL IS NULL)))
15 - access("SC"."YEAR"=TO_NUMBER(:V_YEAR) AND "SC"."MONTH"=:V_MONTH)
16 - access("SG"."YEAR"=TO_NUMBER(:V_YEAR) AND "SG"."MONTH"=:V_MONTH)
filter((:V_SUBJECT_GROUP IS NULL OR "SG"."GROUP_NUMBER"=TO_NUMBER(:V_SUBJECT_GROUP)))
17 - access("CEA"."YEAR"=TO_NUMBER(:V_YEAR) AND "CEA"."MONTH"=:V_MONTH AND "SC"."PAPER_CODE"="PAPER_CODE" AND
"CEA"."ELEMENT"='Moderation of IA')
filter(("CEA"."ELEMENT"='Moderation of IA' AND (:V_PADDED_EXAMINER_CODE IS NULL OR
"EXAMINER_CODE"=:V_PADDED_EXAMINER_CODE)))
18 - access("EXNAME"."EXAMINER_CODE"="IAF"."EXAMINER_CODE")
filter("EXNAME"."EXAMINER_CODE"="IAF"."EXAMINER_CODE")
21 - access("WFI"."INSTANCE_ID"="IA1"."WORKFLOW_INSTANCE_ID")
22 - filter(("IA1"."MONTH"=:V_MONTH AND "IA1"."YEAR"=TO_NUMBER(:V_YEAR)))
24 - filter((:V_EXAMINER_NAME IS NULL OR UPPER("TITLE"||' '||"FIRSTNAME"||' '||"LASTNAME") LIKE :V_SEARCH_EXAMINER_NAME))
25 - access("EXNAME"."EXAMINER_CODE"="PERSON_CODE")
66 rows selected
-
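The pagination scaffolding in the query from the thread above (ROWNUM plus COUNT(*) OVER() to return one page of rows together with the total row count) is a common pattern. Here is a small hypothetical sketch of the same shape using ROW_NUMBER() in SQLite via Python (window functions require SQLite 3.25+; the table and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE examiners (code INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO examiners VALUES (?, ?)",
                 [(i, f"Examiner {i}") for i in range(1, 11)])

# Same pagination shape as the Oracle query: number the ordered rows and
# compute the total count in one pass, then keep only the requested window.
start_row, max_rows = 4, 3
rows = conn.execute("""
SELECT * FROM (
    SELECT code, name,
           COUNT(*)    OVER ()                AS record_count,
           ROW_NUMBER() OVER (ORDER BY code)  AS row_num
    FROM examiners
)
WHERE row_num BETWEEN ? AND ? + ? - 1
""", (start_row, start_row, max_rows)).fetchall()
print(rows)  # three rows (codes 4..6), each carrying record_count = 10
```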
Automatic Refresh of a Merged Power Query giving 'Table Not Registered' error
Scenario: I have 5 tables in my Azure VM SQL database associated with my ERP system. I have loaded all 5 tables into a 'Build Excel' workbook, merged the tables together and created a dataset that my users will be using. From that 'Build
Excel' workbook - I published to the BI catalog a data set simply called 'Customer information and Statistics' - This is searchable by my users and they don't have to go through the effort of combining the 5 tables.
I have a published workbook that uses the 'Customer Information and Statistics' data set and nothing else. I tried to schedule a refresh of the data and it failed (below is the error when I did a manual refresh):
OnPremise error: Sorry, the data source for this data connection isn't registered for Power BI. Ask your Power BI admin to register the data source in the Power BI admin center. An error occurred while processing table 'Customer Information and Statistics'.
The current operation was cancelled because another operation in the transaction failed.
So, going down that rabbit hole, I began the process of 'registering' this in Power BI - however, when I copied in the connection string it began walking me through a process of providing credentials to the 5 tables in my SQL server - that are already exposed
in my main data Sources. I figured I would continue and tried to use the service account credentials that my main data sources use and it would not properly authenticate.
Taking a step back it prompted a few questions:
Why am I being asked to register these 5 merged data sources that are already exposed in my base data sources?
Is my approach of creating 'Friendly named' Merged datasets for my users an incorrect approach?
Is there a different method I should be using to establish autorefresh on Merged datasets in these published workbooks using Power Query?
Thanks for the help -
Tom Grounds
Tom, can you submit this as a bug in the UI via the Smile/Frown button?
Thanks!
Ed Price, Azure & Power BI Customer Program Manager (Blog,
Small Basic,
Wiki Ninjas,
Wiki)
Answer an interesting question?
Create a wiki article about it! -
Merging the output of two queries into one
Dear experts,
There are two reports generated with sap queries.
I want some of the fields of the first query to be displayed in the second.
Is there any option for merging two queries?
Where is the request of the query saved?
How do I make changes in the query, and how do I transport it after the changes?
How do I find the user group and infoset of the query?
Please help me.
Please search this forum; you can find a lot of threads, e.g.:
/people/shafiq.rehman3/blog/2008/06/16/sap-adhoc-query-sq01-sq02-sq03
a® -
I need help with the query
Here is what I need
For a particular comm record if there is no Salary record where comm:Date = Salary:Date, then
• Find maximum dated Salary record as of comm:Date.
• Clone this record and set Salary:Date = comm:Date
• Set Salary:rate = comm:rate
Likewise, for a particular Salary record, if there is no comm record where Salary:Date = comm:Date, then
• Find maximum effective dated comm record as of Salary:Date
• Apply Rate 2 amount from this maximum effective dated record to Salary record i.e. Set Salary:rate = comm:rate
Example
Salary Table :
ID Sal_Date Rate Hours
1 07/01/2011 400.00 40
2 02/15/2011 200.00 40
3 01/01/2011 160.00 40
Sal_comm Table:
Sal_Date comm_Rate
1 07/01/2011 10.00
4 03/01/2011 7.50
3 01/01/2011 4.00
I need to merge the comm_Rate column into the Salary table. Since there is no salary record as of 03/01/2011, I need to find the maximum dated salary record as of 03/01/2011,
i.e. the record dated 02/15/2011. Now I need to clone that salary record, set the SAL_date to 03/01/2011, and update the Rate2 amount. So the record set will be like:
Sal_Date:
id sal_Date Rate Hours comm_Rate
1 07/01/2011 400.00 40 10.00
4 03/01/2011 200.00 40 7.50
2 02/15/2011 200.00 40 4.00
3 01/01/2011 160.00 40 4.00
So you need all used dates as the "driving" dataset. And you need the corresponding data for each of these.
WITH salary_table as
(select 1 id,to_date('07/01/2011','MM/DD/YYYY')sal_date,400 rate,40 hours from dual union all
select 2 id,to_date('02/15/2011','MM/DD/YYYY')sal_date,200 rate,40 hours from dual union all
select 3 id,to_date('01/01/2011','MM/DD/YYYY')sal_date,160 rate,40 hours from dual),
sal_comm as
(select 1 id,to_date('07/01/2011','MM/DD/YYYY')sal_date,10 comm_Rate from dual union all
select 4 id,to_date('03/01/2011','MM/DD/YYYY')sal_date,7.5 comm_Rate from dual union all
select 3 id,to_date('01/01/2011','MM/DD/YYYY')sal_date,4 comm_Rate from dual)
select to_char(all_dates.sal_date,'MM/DD/YYYY') sal_date,sal.rate,sal.hours,com.comm_rate
from (select sal_date from salary_table
union
select sal_date from sal_comm) all_dates
inner join (select s1.*,lead(sal_date-1,1,to_date('31/12/9999','DD/MM/YYYY')) over (order by sal_date) next_sal_date
from salary_table s1) sal
on (all_dates.sal_date between sal.sal_date and sal.next_sal_date)
inner join (select s1.*,lead(sal_date-1,1,to_date('31/12/9999','DD/MM/YYYY')) over (order by sal_date) next_sal_date
from sal_comm s1) com
on (all_dates.sal_date between com.sal_date and com.next_sal_date)
order by all_dates.sal_date desc;
SAL_DATE RATE HOURS COMM_RATE
07/01/2011 400 40 10
03/01/2011 200 40 7.5
02/15/2011 200 40 4
01/01/2011 160 40 4
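For readers who want to check the as-of logic outside SQL*Plus, the same merge can be sketched in plain Python. This is a hypothetical re-implementation of the query above, not part of the original answer; it reproduces the "driving dataset of all dates" plus "latest record as of each date" idea:

```python
from datetime import date

# Sample data mirroring the thread's WITH clauses
salary = [(1, date(2011, 7, 1), 400.0, 40),
          (2, date(2011, 2, 15), 200.0, 40),
          (3, date(2011, 1, 1), 160.0, 40)]   # (id, sal_date, rate, hours)
comm = [(1, date(2011, 7, 1), 10.0),
        (4, date(2011, 3, 1), 7.5),
        (3, date(2011, 1, 1), 4.0)]           # (id, sal_date, comm_rate)

def as_of(rows, d):
    """Latest row whose date is on or before d (the 'maximum dated record as of d')."""
    return max((r for r in rows if r[1] <= d), key=lambda r: r[1])

# The union of all dates is the "driving" dataset, exactly as in the SQL above
all_dates = sorted({r[1] for r in salary} | {r[1] for r in comm}, reverse=True)
merged = [(d, as_of(salary, d)[2], as_of(salary, d)[3], as_of(comm, d)[2])
          for d in all_dates]
for row in merged:
    print(row)
```

The four printed rows match the SQL output above: the 03/01 row borrows the 02/15 salary, and the 02/15 row borrows the 01/01 comm rate.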
-
Need to merge the below two rows
Hi,
I have a sample data as shown below:
ID OBJID ACT_CODE ADDNL_INFO ENTRY_TIME
2523540 333003736 900 from WIP default to Queue PSD1. 1/3/2012 15:07
2523540 333003271 100 100 from Queue PSD1 to WIP For assigment. 1/3/2012 15:43
2523540 333003744 900 900 from WIP default to Queue PSD1. 1/3/2012 15:49
2523540 333004966 100 100 from Queue PSD1 to WIP For assigment. 1/3/2012 16:04
I need to merge the first two rows and get a single record for each "from" and "to" as shown below (desired output)
ID_NUMBER ADDNL_INFO ENTRY_TIME EXIT_TIME TOTAL TIME
2523540 PSD1 1/3/2012 15:07 1/3/2012 15:43 0.025069444
2523540 PSD1 1/3/2012 15:49 1/3/2012 16:04 0.010231481
I have used function on the addnl_info column to display only the name of the queue "PSD1"
(SUBSTR(ADDNL_INFO, INSTR(ADDNL_INFO, 'PSD1'), LENGTH('PSD1'))) QUEUE_NAME
Can anyone help me out in getting the desired output?
Below is a solution to your query:
drop table test_Table;
create table test_Table (
ID number,
objid number,
act_code number,
addl_info varchar2(500),
entry_time timestamp
);
insert into test_Table values (2523540, 333003736, 900, 'from WIP default to Queue PSD1.', to_timestamp('1/3/2012 15:07', 'DD/MM/YYYY HH24:MI'));
insert into test_Table values (2523540, 333003271, 100, 'from Queue PSD1 to WIP For assigment.', to_timestamp('1/3/2012 15:43', 'DD/MM/YYYY HH24:MI'));
insert into test_Table values (2523540, 333003744, 900, 'from WIP default to Queue PSD1.', to_timestamp('1/3/2012 15:49', 'DD/MM/YYYY HH24:MI'));
insert into test_Table values (2523540, 333004966, 100, 'from Queue PSD1 to WIP For assigment.', to_timestamp('1/3/2012 16:04', 'DD/MM/YYYY HH24:MI'));
select * from test_table;
select id, addl_info, entry_time, exit_time, total_time
from (
select a.id, a.objid, 'PSD1' addl_info, a.entry_time, lead(a.entry_time, 1, null) over (order by a.entry_time) exit_time,
lead(a.entry_time, 1, null) over (order by a.entry_time) - a.entry_time total_time, DECODE(a.act_code, 900, 'D', 'ND') disp
from test_Table a
where a.id = 2523540
)
where disp = 'D';
ID ADDL_INFO ENTRY_TIME EXIT_TIME TOTAL_TIME
2523540 PSD1 01-MAR-12 03.07.00.000000000 PM 01-MAR-12 03.43.00.000000000 PM 0 0:36:0.0
2523540 PSD1 01-MAR-12 03.49.00.000000000 PM 01-MAR-12 04.04.00.000000000 PM 0 0:15:0.0
I see a shortcoming in the design:
1. For "WIP default to Queue PSD1" there are two records with different OBJID but the same ID, which ideally should not happen. You MUST have a COMPOSITE key on ID and OBJID; this will help you identify the distinct records.
My solution is not perfect, as it is based on ENTRY_TIME. Simply ordering by OBJID would not lead to a correct difference in the transaction time (referred to by you as TOTAL_TIME); hence I had to use ENTRY_TIME to get total_time correctly.
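The LEAD-over-ENTRY_TIME pairing described above can also be sketched procedurally. This is a hypothetical Python re-implementation of the same logic, for readers who want to see the pairing step by step:

```python
from datetime import datetime

# (id, objid, act_code, addnl_info, entry_time) -- the four sample rows
rows = [
    (2523540, 333003736, 900, 'from WIP default to Queue PSD1.',       datetime(2012, 3, 1, 15, 7)),
    (2523540, 333003271, 100, 'from Queue PSD1 to WIP For assigment.', datetime(2012, 3, 1, 15, 43)),
    (2523540, 333003744, 900, 'from WIP default to Queue PSD1.',       datetime(2012, 3, 1, 15, 49)),
    (2523540, 333004966, 100, 'from Queue PSD1 to WIP For assigment.', datetime(2012, 3, 1, 16, 4)),
]

rows.sort(key=lambda r: r[4])          # order by ENTRY_TIME, like the LEAD() window
pairs = []
for cur, nxt in zip(rows, rows[1:]):   # nxt plays the role of LEAD(entry_time)
    if cur[2] == 900:                  # keep only the 'from WIP ... to Queue' rows
        total = (nxt[4] - cur[4]).total_seconds() / 86400   # fraction of a day
        pairs.append((cur[0], 'PSD1', cur[4], nxt[4], total))
```

This yields two merged records, with TOTAL_TIME expressed as a fraction of a day (36 minutes and 15 minutes respectively for the sample data).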
If you wish you may follow the solution else I shall always recommend to change the Table Design and follow the correct approach.
If you are changing the Table design then following shall be a solution:
select id, addl_info, entry_time, exit_time, total_time
from (
select a.id, a.objid, 'PSD1' addl_info, a.entry_time, lead(a.entry_time, 1, null) over (order by a.id, a.objid) exit_time,
lead(a.entry_time, 1, null) over (order by a.id, a.objid) - a.entry_time total_time, DECODE(a.act_code, 900, 'D', 'ND') disp
from test_Table a
)
where disp = 'D';
Regards,
P. -
Hi,
I have created Credit Management workflow in CRM and now I am stuck with having the Credit Request Summary report. I tried Print preview, however it does not provide me the entire details. Under request, I have tabs such as Request details, Customer
details, Facility Details (multiple tables) and other securities details. Customer needs to have Print of Credit Request, to solve this I have created multiple reports which display the Facility details. But now I am stuck with merging those reports.
Need assistance....
Hi,
You can merge the column by using the Table Interface in WAD. What you can do is include the list of attendees (person) in the query, and in the Table Interface have a lookup to the InfoProvider to get the corresponding value of the list of attendees (company). Then you can concatenate both values and display them under the same column in the WAD report.
You can refer the below attached document for Table interface....
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/49dfeb90-0201-0010-a1a2-9d7a7ca1a238
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/f0aca990-0201-0010-0380-f3aac4127a57
http://help.sap.com/saphelp_nw04/helpdata/en/a2/06a83a4bd5a27ae10000000a11402f/frameset.htm
Shown below is sample code for how we merge two columns in a WAD report:
if I_X = 3.
MOVE C_CELL_CONTENT to CELL_DETAILS.
SEARCH CELL_DETAILS FOR '>'.
STARTING_OFFSET = SY-FDPOS + 1.
SEARCH CELL_DETAILS FOR '<' STARTING AT STARTING_OFFSET.
LENGTH = SY-FDPOS - 1.
MOVE CELL_DETAILS+STARTING_OFFSET(LENGTH) TO POSMDESC.
concatenate POSMDESC '<BR>' into POSMDESC.
read table itab_/BIC/TAPMPOSM into it_/BIC/TAPMPOSM with key
/BIC/APMPOSM = POSMID.
concatenate POSMDESC it_/BIC/TAPMPOSM-TXTLG into POSMDESC.
move POSMDESC to C_CELL_CONTENT.
endif.
Hope it helps....
Regards,
Umesh. -
Problems with putting the Schema on the query!!!! Need Help.
Hi guys!
I have a problem and a doubt about putting the schema name on my query. I want to know if it is necessary to specify the schema name in the query I want to execute. All my queries are in my application; I connect from the beginning to my Oracle database with a user and password, and this user is only allowed for that schema. So my question is whether I can omit the schema name in the query.
Example:
Select * From Table
Select * from Student
Select * from Schema.Table
Select * from Institution.Student
Thanks, and I hope you can help me,
Yuni.
YOU WROTE: "I have a problem and a doubt about putting schema name on my query. I want to know if is neccesery specify the schema name on the query I want to execute. All my queries are on my application, I connect from the begging to my oracle data base with a user and password, this user is only alow for that schema. So my question is if I can ommit the schema name on the query."
don't use words that you don't know!
also, your example (in the first post) gave the schema.table as INSTITUTION.STUDENT
so these are all words that you started with.
now, PAY ATTENTION and RUN THE EXAMPLE SQLS I GAVE
connect to the database as INSTITUTION (user, schema, I don't care which).
execute the sql statement "select * from student".
did it do anything? did it return data, did it say "no rows", or did it have an error? my magic 8-ball and x-ray glasses are broken and I can't see your monitor (my old computer only has that one-way window). if it works, then you clearly do not need to be "putting the Schema on the query!!!!". if it didn't work, then TELL US EXACTLY WHAT HAPPENED (copy AND paste).
and where are my cigars ;-)
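The name-resolution behaviour under discussion can be demonstrated with a small sqlite3 analogy (a hypothetical sketch: Oracle resolves an unqualified name against the connected user's schema; sqlite's `main.` prefix plays a loosely similar role here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE student (id INTEGER, name TEXT)")
con.execute("INSERT INTO student VALUES (1, 'Yuni')")

# Unqualified name: resolved against the default ('main') schema,
# much as Oracle resolves an unqualified name against the session user's schema.
unqualified = con.execute("SELECT * FROM student").fetchall()

# Qualified name: explicit schema prefix, same table.
qualified = con.execute("SELECT * FROM main.student").fetchall()

print(unqualified == qualified)   # True: both forms hit the same table
```

So if you connect as the schema owner, the prefix is redundant; you only need it (or a synonym) when querying another user's objects.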
MERGE AGAIN????? PLZ Somebody HELP ME!!!!! -
Is it possible to merge 2 query results?
I have a function that returns a query result.
<cfset temp = query1>
<cfset FNCTransfers = temp>
Now I want to change the query to return a merged query
result
<cfset temp = query1>
<cfset temp2 = query2>
Is it possible to merge the two results?
Something like this (I know that it cannot be done like this):
<cfset FNCTransfers = temp1 & temp2>
Maybe it should be a UNION and query-of-query?
UNION could be the way to go. The query results will have to match up between the two queries (field data types need to match for both queries):
<cfquery name="qryMergedQueries" dbtype="query">
SELECT * FROM query1
UNION
SELECT * FROM query2
</cfquery>
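The same UNION behaviour (including its de-duplication, and the requirement that column types line up) can be demonstrated outside ColdFusion with a small sqlite3 sketch; the table names here are hypothetical stand-ins for the two query results:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE query1 (id INTEGER, name TEXT)")
con.execute("CREATE TABLE query2 (id INTEGER, name TEXT)")
con.executemany("INSERT INTO query1 VALUES (?, ?)", [(1, 'a'), (2, 'b')])
con.executemany("INSERT INTO query2 VALUES (?, ?)", [(2, 'b'), (3, 'c')])

# UNION de-duplicates the combined rows; UNION ALL would keep the duplicate (2, 'b')
merged = con.execute(
    "SELECT * FROM query1 UNION SELECT * FROM query2 ORDER BY id"
).fetchall()
print(merged)   # [(1, 'a'), (2, 'b'), (3, 'c')]
```

Use UNION ALL instead when you want to keep rows that appear in both result sets.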
-
Hi ,
I have a Query that needs to be tuned.
The query joins two views with some filter condition.
When running each view's query individually with the filter condition, I get the results quickly, within a second.
But when joining the views with the same filter criteria I used for the individual queries, it takes more than 30 minutes.
I am struggling to tune this query, which was written using the views.
Note :
My problem is that, in the explain plan, a unique sort is taking most of the cost.
Can I reduce the time by giving some optimizer hints to reduce the unique sort cost for the query using views?
Thanks & regards,
Senthur pandi M
Hi,
BluShadow wrote:
957595 wrote:
Hi ,
I have a Query that needs to be tuned.
The query joins two views with some filter condition.
While running the individual view query with the filter condition i can able to get the results quickly within a seconds.
But while joining the views conditions of the same criteria which i have used for the individual query takes more than 30 minute.
i am struggling to tuning this query which was written using the views.
Note :
My problem is while checking the explain plan unique sort is taking more cost.
Cost is not necessarily a good comparison to use. The cost is a figure determined on a per-query basis.
The problem with cost is that it's a prediction made by the optimizer, rather than an actual measure of query performance. The optimizer often makes mistakes about expected query performance. Ironically, people normally look at query cost when it needs tuning, i.e. when the chance that the optimizer made a mistake is especially high.
In many internet forums one can see claims that cost estimates are meaningless across different queries. Such claims are unfounded. When calculated correctly, cost is quite meaningful, and in such cases there is nothing wrong about comparing cost not only for different queries, but also for different databases (if they have same optimizer settings and system stats).
is that i can reduce the time by giving some optimizer hints to reduce the unique sort cost for query using views?
Hints are not the way to improve performance.
That's an overstatement. The sad truth is that in many cases there is no viable alternative to using hints. Rather than avoiding hints at all costs, it's better to understand how hints affect optimizer behavior, and when it's safe to use them.
They are great for identifying the cause of a performance issue, but shouldn't be used in production code, as that would be like saying that you know better than Oracle how to retrieve the data, not just now, but in the future as more data is added, deleted, and replaced. By adding hints you are effectively forcing the optimizer to execute the query in a particular way, which may be fast now, but in the future may be worse than what the optimizer can determine itself.
Hints that force the optimizer to use a specific access path or a specific join method are dangerous, because they only lock in one part of the plan, not the entire plan (e.g. an INDEX hint only ensures that an index is used if possible, but it cannot ensure an INDEX UNIQUE/RANGE SCAN, so you may end up with the optimizer doing an expensive and meaningless INDEX FULL SCAN because of a hint that was intended to force a different, more selective access method).
Hints that don't do that, but rather prevent the optimizer from trying to be smart when it's better to keep things simple, are relatively safe.
So, use the hints to identify where there are issues in the SQL or in the database design, and fix those issues, rather than leave hints in production code.
As a general rule, sure. Here, however, the problem seems obvious: if the views are fast separately and slow when joined, that suggests the optimizer doesn't merge them correctly.
Best regards,
Nikolay -
How get cfspreadsheet to return the query dump in Excel?
I read my data as shown in the examples, but my data just displays the query dump. If I add cfheader and cfcontent, the query dump just displays in an Excel shell.
I'm using CF 9.01 and Excel 2007. Here is my code:
<cfscript>
//Use an absolute path for the files. --->
theDir=GetDirectoryFromPath(GetCurrentTemplatePath());
theFile=theDir & "TrackEverythingXLS.xls";
//Create two empty ColdFusion spreadsheet objects. --->
theSheet = SpreadsheetNew("TBI_2009");
theSheet2 = SpreadsheetNew("TBI_2008");
//Populate each object with a query. --->
SpreadsheetAddRows(theSheet,QO_getAllData);
SpreadsheetAddRows(theSheet2,QO_getAllData_TBI_2008);
</cfscript>
<!--- Write the sheet --->
<cfspreadsheet action="write" filename="#theFile#" overwrite="true" name="theSheet"
sheetname="QO_getAllData">
<cfspreadsheet action="update" filename="#theFile#" name="theSheet2"
sheetname="QO_getAllData_TBI_2008">
<cfheader name="Content-Disposition" value="inline; filename=TrackEverythingXLS.xls">
<cfcontent type="application/vnd.msexcel">
<cfspreadsheet action="read" src="#theFile#" sheetname="QO_getAllData"
query="spreadsheetData">
<cfspreadsheet action="read" src="#theFile#" sheetname="QO_getAllData_TBI_2008"
query="spreadsheetData2">
<cfdump var="#spreadsheetData#" />
<cfdump var="#spreadsheetData2#" />
As you can see, I'm trying to write 2 tabs. That doesn't work either. All the data is dumped into one tab.
My initial problem may start with "Simply create your spreadsheet as usual." How do I do that?
With the code I just tried below, I did get a spreadsheet. So using cfheader and cfcontent is correct?
I did not get 2 sheets though.
<cfscript>
//Use an absolute path for the files. --->
theDir=GetDirectoryFromPath(GetCurrentTemplatePath());
theFile=theDir & "TrackEverythingXLS.xls";
//Create two empty ColdFusion spreadsheet objects. --->
theSheet = SpreadsheetNew("TBI_2009");
SpreadsheetAddRows(theSheet,QO_getAllData);
//Create a new sheet.
SpreadsheetCreateSheet(theSheet, "QO_getAllData_TBI_2008"); // 2nd query
//Set the sheet as active.
SpreadsheetSetActiveSheet(theSheet, "QO_getAllData_TBI_2008");
//Populate each object with a query. --->
SpreadsheetAddRows(theSheet,QO_getAllData_TBI_2008);
</cfscript>
<!--- Write the sheet --->
<cfspreadsheet action="write" filename="#theFile#" overwrite="true" name="theSheet"
sheetname="QO_getAllData">
<!--- (no formatting) Works best. 8/10/11 2:54 pm --->
<cfheader name="Content-Disposition" value="inline; filename=TrackEverythingXLS.xls">
<cfcontent type="application/vnd.msexcel">
<cfspreadsheet action="read" src="#theFile#" sheetname="QO_getAllData"
query="spreadsheetData">
<cfdump var="#spreadsheetData#" />
Also, will I be able to code a Sort? Right now, I get an error message saying: "This operation requires the merged cells to be identically sized." (I had selected a Custom Sort on the entire sheet.) Is this a formatting issue?
Thanks for helping me out.