How to improve performance of a query that is based on an xmltype table
Dear Friends,
I have a query that pulls records from an XMLType table with 9,000 rows, and it is running very slowly.
I am using XMLTABLE to retrieve the rows. It takes up to 30 minutes to finish.
Could you suggest how I can make it faster? Thanks.
Below is the query:
INSERT INTO temp_sap_po_receipt_history_t
(po_number, po_line_number, doc_year,
material_doc, material_doc_item, quantity, sap_ref_doc_no_long,
reference_doc, movement_type_code,
sap_ref_doc_no, posting_date, entry_date, entry_time, hist_type)
SELECT :pin_po_number po_number,
b.po_line_number, b.doc_year,
b.material_doc, b.material_doc_item, b.quantity, b.sap_ref_doc_no_long,
b.reference_doc, b.movement_type_code,
b.sap_ref_doc_no, to_date(b.posting_date,'rrrr-mm-dd'),
to_date(b.entry_date,'rrrr-mm-dd'), b.entry_time, b.hist_type
FROM temp_xml t,
XMLTABLE(XMLNAMESPACES('urn:sap-com:document:sap:rfc:functions' AS "n0"),
'/n0:BAPI_PO_GETDETAIL1Response/POHISTORY/item'
PASSING t.object_value
COLUMNS PO_LINE_NUMBER VARCHAR2(20) PATH 'PO_ITEM',
DOC_YEAR varchar2(4) PATH 'DOC_YEAR',
MATERIAL_DOC varchar2(30) PATH 'MAT_DOC',
MATERIAL_DOC_ITEM VARCHAR2(10) PATH 'MATDOC_ITEM',
QUANTITY NUMBER(20,6) PATH 'QUANTITY',
SAP_REF_DOC_NO_LONG VARCHAR2(20) PATH 'REF_DOC_NO_LONG',
REFERENCE_DOC VARCHAR2(20) PATH 'REF_DOC',
MOVEMENT_TYPE_CODE VARCHAR2(4) PATH 'MOVE_TYPE',
SAP_REF_DOC_NO VARCHAR2(20) PATH 'REF_DOC_NO',
POSTING_DATE VARCHAR2(10) PATH 'PSTNG_DATE',
ENTRY_DATE VARCHAR2(10) PATH 'ENTRY_DATE',
ENTRY_TIME VARCHAR2(8) PATH 'ENTRY_TIME',
HIST_TYPE VARCHAR2(5) PATH 'HIST_TYPE') b;
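Before going down the registered-schema route below, it is worth checking how temp_xml stores its XMLType. A hedged sketch, not from the original thread: on Oracle 11g or later, storing the documents as SECUREFILE binary XML with a structured XMLIndex over the repeating POHISTORY items is another common way to speed up XMLTABLE. All object names here (temp_xml_bin, temp_xml_idx, po_hist_tab) are made up for illustration:

```sql
-- Sketch only (assumes 11gR2+): binary XML storage plus a structured XMLIndex.
CREATE TABLE temp_xml_bin OF XMLTYPE
  XMLTYPE STORE AS SECUREFILE BINARY XML;

CREATE INDEX temp_xml_idx ON temp_xml_bin (object_value)
  INDEXTYPE IS XDB.XMLINDEX
  PARAMETERS ('XMLTABLE po_hist_tab
      XMLNAMESPACES (''urn:sap-com:document:sap:rfc:functions'' AS "n0"),
      ''/n0:BAPI_PO_GETDETAIL1Response/POHISTORY/item''
      COLUMNS po_line_number VARCHAR2(20) PATH ''PO_ITEM'',
              material_doc   VARCHAR2(30) PATH ''MAT_DOC''');
```

With such an index in place, the optimizer can answer the XMLTABLE query from the relational side table instead of re-parsing each 9,000-row document.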
Based on response from mdrake on this thread:
Re: XML file processing into oracle
For large XMLs, you can speed up the processing of XMLTABLE by using a registered schema...
declare
SCHEMAURL VARCHAR2(256) := 'http://xmlns.example.org/xsd/testcase.xsd';
XMLSCHEMA VARCHAR2(4000) := '<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xdb="http://xmlns.oracle.com/xdb" xdb:storeVarrayAsTable="true">
<xs:element name="cust_order" type="cust_orderType" xdb:defaultTable="CUST_ORDER_TBL"/>
<xs:complexType name="groupType" xdb:maintainDOM="false">
<xs:sequence>
<xs:element name="item" type="itemType" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="id" type="xs:byte" use="required"/>
</xs:complexType>
<xs:complexType name="itemType" xdb:maintainDOM="false">
<xs:simpleContent>
<xs:extension base="xs:string">
<xs:attribute name="id" type="xs:short" use="required"/>
<xs:attribute name="name" type="xs:string" use="required"/>
</xs:extension>
</xs:simpleContent>
</xs:complexType>
<xs:complexType name="cust_orderType" xdb:maintainDOM="false">
<xs:sequence>
<xs:element name="group" type="groupType" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="cust_id" type="xs:short" use="required"/>
</xs:complexType>
</xs:schema>';
INSTANCE CLOB :=
'<cust_order cust_id="12345">
<group id="1">
<item id="1" name="Standard Mouse">100</item>
<item id="2" name="Keyboard">100</item>
<item id="3" name="Memory Module 2Gb">200</item>
<item id="4" name="Processor 3Ghz">25</item>
<item id="5" name="Processor 2.4Ghz">75</item>
</group>
<group id="2">
<item id="1" name="Graphics Tablet">15</item>
<item id="2" name="Keyboard">15</item>
<item id="3" name="Memory Module 4Gb">15</item>
<item id="4" name="Processor Quad Core 2.8Ghz">15</item>
</group>
<group id="3">
<item id="1" name="Optical Mouse">5</item>
<item id="2" name="Ergo Keyboard">5</item>
<item id="3" name="Memory Module 2Gb">10</item>
<item id="4" name="Processor Dual Core 2.4Ghz">5</item>
<item id="5" name="Dual Output Graphics Card">5</item>
<item id="6" name="28inch LED Monitor">10</item>
<item id="7" name="Webcam">5</item>
<item id="8" name="A3 1200dpi Laser Printer">2</item>
</group>
</cust_order>';
begin
dbms_xmlschema.registerSchema(
schemaurl => SCHEMAURL
,schemadoc => XMLSCHEMA
,local => TRUE
,genTypes => TRUE
,genBean => FALSE
,genTables => TRUE
,enableHierarchy => DBMS_XMLSCHEMA.ENABLE_HIERARCHY_NONE
);
execute immediate 'insert into CUST_ORDER_TBL values (XMLTYPE(:INSTANCE))' using INSTANCE;
end;
/
SQL> desc CUST_ORDER_TBL
Name Null? Type
TABLE of SYS.XMLTYPE(XMLSchema "http://xmlns.example.org/xsd/testcase.xsd" Element "cust_order") STORAGE Object-relational TYPE "cust_orderType222_T"
SQL> set autotrace on explain
SQL> set pages 60 lines 164 heading on
SQL> col cust_id format a8
SQL> select extract(object_value,'/cust_order/@cust_id') as cust_id
2 ,grp.id as group_id, itm.id as item_id, itm.inm as item_name, itm.qty as item_qty
3 from CUST_ORDER_TBL
4 ,XMLTABLE('/cust_order/group'
5 passing object_value
6 columns id number path '@id'
7 ,item xmltype path 'item'
8 ) grp
9 ,XMLTABLE('/item'
10 passing grp.item
11 columns id number path '@id'
12 ,inm varchar2(30) path '@name'
13 ,qty number path '.'
14 ) itm
15 /
CUST_ID GROUP_ID ITEM_ID ITEM_NAME ITEM_QTY
12345 1 1 Standard Mouse 100
12345 1 2 Keyboard 100
12345 1 3 Memory Module 2Gb 200
12345 1 4 Processor 3Ghz 25
12345 1 5 Processor 2.4Ghz 75
12345 2 1 Graphics Tablet 15
12345 2 2 Keyboard 15
12345 2 3 Memory Module 4Gb 15
12345 2 4 Processor Quad Core 2.8Ghz 15
12345 3 1 Optical Mouse 5
12345 3 2 Ergo Keyboard 5
12345 3 3 Memory Module 2Gb 10
12345 3 4 Processor Dual Core 2.4Ghz 5
12345 3 5 Dual Output Graphics Card 5
12345 3 6 28inch LED Monitor 10
12345 3 7 Webcam 5
12345 3 8 A3 1200dpi Laser Printer 2
17 rows selected.
Need at least 10.2.0.3 for performance, i.e. to avoid COLLECTION ITERATOR PICKLER FETCH in the execution plan...
On 10.2.0.1:
Execution Plan
Plan hash value: 3741473841
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 24504 | 89M| 873 (1)| 00:00:11 |
| 1 | NESTED LOOPS | | 24504 | 89M| 873 (1)| 00:00:11 |
| 2 | NESTED LOOPS | | 3 | 11460 | 805 (1)| 00:00:10 |
| 3 | TABLE ACCESS FULL | CUST_ORDER_TBL | 1 | 3777 | 3 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | SYS_IOT_TOP_774117 | 3 | 129 | 1 (0)| 00:00:01 |
| 5 | COLLECTION ITERATOR PICKLER FETCH| XMLSEQUENCEFROMXMLTYPE | | | | |
Predicate Information (identified by operation id):
4 - access("NESTED_TABLE_ID"="CUST_ORDER_TBL"."SYS_NC0000900010$")
filter("SYS_NC_TYPEID$" IS NOT NULL)
Note
- dynamic sampling used for this statement
On 10.2.0.3:
Execution Plan
Plan hash value: 1048233240
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 17 | 132K| 839 (0)| 00:00:11 |
| 1 | NESTED LOOPS | | 17 | 132K| 839 (0)| 00:00:11 |
| 2 | MERGE JOIN CARTESIAN | | 17 | 131K| 805 (0)| 00:00:10 |
| 3 | TABLE ACCESS FULL | CUST_ORDER_TBL | 1 | 3781 | 3 (0)| 00:00:01 |
| 4 | BUFFER SORT | | 17 | 70839 | 802 (0)| 00:00:10 |
|* 5 | INDEX FAST FULL SCAN| SYS_IOT_TOP_56154 | 17 | 70839 | 802 (0)| 00:00:10 |
|* 6 | INDEX UNIQUE SCAN | SYS_IOT_TOP_56152 | 1 | 43 | 2 (0)| 00:00:01 |
|* 7 | INDEX RANGE SCAN | SYS_C006701 | 1 | | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
5 - filter("SYS_NC_TYPEID$" IS NOT NULL)
6 - access("SYS_NTpzENS1H/RwSSC7TVzvlqmQ=="."NESTED_TABLE_ID"="SYS_NTnN5b8Q+8Txi9V
w5Ysl6x9w=="."SYS_NC0000600007$")
filter("SYS_NC_TYPEID$" IS NOT NULL AND
"NESTED_TABLE_ID"="CUST_ORDER_TBL"."SYS_NC0000900010$")
7 - access("SYS_NTpzENS1H/RwSSC7TVzvlqmQ=="."NESTED_TABLE_ID"="SYS_NTnN5b8Q+8Txi9V
w5Ysl6x9w=="."SYS_NC0000600007$")
Note
- dynamic sampling used for this statement
----------------------------------------------------------------------------------------------------------
-- CLEAN UP
DROP TABLE CUST_ORDER_TBL purge;
exec dbms_xmlschema.deleteschema('http://xmlns.example.org/xsd/testcase.xsd');
Similar Messages
-
How to improve performance of attached query
Hi,
How to improve performance of the below query, Please help. also attached explain plan -
SELECT Camp.Id,
rCam.AccountKey,
Camp.Id,
CamBilling.Cpm,
CamBilling.Cpc,
CamBilling.FlatRate,
Camp.CampaignKey,
Camp.AccountKey,
CamBilling.billoncontractedamount,
(SUM(rCam.Impressions) * 0.001 + SUM(rCam.Clickthrus)) AS GR,
rCam.AccountKey as AccountKey
FROM Campaign Camp, rCamSit rCam, CamBilling, Site xSite
WHERE Camp.AccountKey = rCam.AccountKey
AND Camp.AvCampaignKey = rCam.AvCampaignKey
AND Camp.AccountKey = CamBilling.AccountKey
AND Camp.CampaignKey = CamBilling.CampaignKey
AND rCam.AccountKey = xSite.AccountKey
AND rCam.AvSiteKey = xSite.AvSiteKey
AND rCam.RmWhen BETWEEN to_date('01-01-2009', 'DD-MM-YYYY') and
to_date('01-01-2011', 'DD-MM-YYYY')
GROUP By rCam.AccountKey,
Camp.Id,
CamBilling.Cpm,
CamBilling.Cpc,
CamBilling.FlatRate,
Camp.CampaignKey,
Camp.AccountKey,
CamBilling.billoncontractedamount
Explain Plan :-
Description Object_owner Object_name Cost Cardinality Bytes
SELECT STATEMENT, GOAL = ALL_ROWS 14 1 13
SORT AGGREGATE 1 13
VIEW GEMINI_REPORTING 14 1 13
HASH GROUP BY 14 1 103
NESTED LOOPS 13 1 103
HASH JOIN 12 1 85
TABLE ACCESS BY INDEX ROWID GEMINI_REPORTING RCAMSIT 2 4 100
NESTED LOOPS 9 5 325
HASH JOIN 7 1 40
SORT UNIQUE 2 1 18
TABLE ACCESS BY INDEX ROWID GEMINI_PRIMARY SITE 2 1 18
INDEX RANGE SCAN GEMINI_PRIMARY SITE_I0 1 1
TABLE ACCESS FULL GEMINI_PRIMARY SITE 3 27 594
INDEX RANGE SCAN GEMINI_REPORTING RCAMSIT_I 1 1 5
TABLE ACCESS FULL GEMINI_PRIMARY CAMPAIGN 3 127 2540
TABLE ACCESS BY INDEX ROWID GEMINI_PRIMARY CAMBILLING 1 1 18
INDEX UNIQUE SCAN GEMINI_PRIMARY CAMBILLING_U1 0 1
duplicate thread..
How to improve performance of attached query -
How to improve Performance of a Query whcih is on a Vritual Cube
Hi All,
Please suggest some tips through which we can improve the performance of queries built on Virtual Cubes.
Thanks in advance.
Regards,
Raj
Hi Raj,
How is your direct access DataSource built? Is it a standard DataSource or a generic DataSource on a view/table/function module? This strengthens my second point.
Suppose you built a virtual cube on a direct access DataSource built on the AUFK table with Order as primary key (Order master data). When you use Order as a selection on a query built on this virtual cube, it retrieves the data faster than firing the query on other selections.
If your selections are different. You can possibly create a secondary index on the table with selections used in query.
Regards
vamsi -
How to improve performance of my query
Hello Friends,
Good Morning.
I have the following query, which never finishes. Can anyone throw some light on how to improve its performance? This is the query generated in ODI (Oracle Data Integrator 11g).
The only thing I can put in this query is optimizer hints.
- issue resolved
Please advise.
Thanks / Kumar
Edited by: kumar73 on May 18, 2012 6:38 AM
Edited by: kumar73 on May 18, 2012 6:39 AM
Edited by: kumar73 on May 18, 2012 12:04 PM
The two DISTINCTs are redundant, as UNION already returns unique records; a set can't have duplicates.
Other than that the query is not formatted and unreadable, and you didn't provide a description of the tables involved.
Your strategy seems to be maximum help from this forum with minimum effort from yourself, other than hitting copy and paste.
Sybrand Bakker
Senior Oracle DBA -
How to improve performance of select query when primary key is not referred
Hi,
There is a SELECT query where we are unable to reference the primary key of the tables.
Since the code below references the vgbel and vgpos fields instead of vbeln and posnr, the performance is very slow.
select vbeln posnr into (wa-vbeln1, wa-posnr1)
from lips
where ( pstyv ne 'ZBAT'
and pstyv ne 'ZNLN' )
and vgbel = i_vbap-vbeln
and vgpos = i_vbap-posnr.
endselect.
Please let me know if you have some tips.
Hi,
If you are using the SELECT statement inside a LOOP ... ENDLOOP, move it outside the loop to improve performance:
if not i_vbap[] is initial.
select vbeln posnr into table it_lips
from lips
for all entries in i_vbap
where ( pstyv ne 'ZBAT'
and pstyv ne 'ZNLN' )
and vgbel = i_vbap-vbeln
and vgpos = i_vbap-posnr.
endif. -
How to Improve Performance of this query??
Hi experts,
Kindly suggest me some perfomance optimization on the below code.
SELECT * FROM vtrdi AS v
INTO TABLE six
FOR ALL ENTRIES IN r_vbeln
WHERE vbeln EQ r_vbeln-low
AND trsta IN s_trsta
AND vstel IN s_vstel
AND tddat IN s_tddat
AND vbtyp IN r_vbtyp
AND lstel IN s_lstel
AND route IN s_route
AND tragr IN s_tragr
AND vsbed IN s_vsbed
AND land1 IN s_land1
AND lzone IN s_lzone
AND wadat IN s_wadat
AND wbstk IN s_wbstk
AND lddat IN s_lddat
AND lfdat IN s_lfdat
AND kodat IN s_kodat
AND kunnr IN s_kunnr
AND spdnr IN s_spdnr
AND inco1 IN s_inco1
AND inco2 IN s_inco2
AND lprio IN s_lprio
AND EXISTS ( SELECT * FROM likp
WHERE vbeln EQ v~vbeln
AND lifnr IN s_lifnr
AND lgtor IN s_lgtor
AND lgnum IN s_lgnum
AND lfuhr IN s_lfuhr
AND aulwe IN s_aulwe
AND traty IN s_traty
AND traid IN s_traid
AND vsart IN s_vsart
AND trmtyp IN s_trmtyp
AND sdabw IN s_sdabw
AND cont_dg IN r_cont_dg ).
Thanks in Advance...
Santosh.
Try writing two SELECTs:
SELECT * FROM vtrdi AS v
INTO TABLE six
FOR ALL ENTRIES IN r_vbeln
WHERE vbeln EQ r_vbeln-low
AND trsta IN s_trsta
AND vstel IN s_vstel
AND tddat IN s_tddat
AND vbtyp IN r_vbtyp
AND lstel IN s_lstel
AND route IN s_route
AND tragr IN s_tragr
AND vsbed IN s_vsbed
AND land1 IN s_land1
AND lzone IN s_lzone
AND wadat IN s_wadat
AND wbstk IN s_wbstk
AND lddat IN s_lddat
AND lfdat IN s_lfdat
AND kodat IN s_kodat
AND kunnr IN s_kunnr
AND spdnr IN s_spdnr
AND inco1 IN s_inco1
AND inco2 IN s_inco2
AND lprio IN s_lprio.
SELECT * FROM likp INTO TABLE itab
FOR ALL ENTRIES IN six
WHERE vbeln EQ six-vbeln
AND lifnr IN s_lifnr
AND lgtor IN s_lgtor
AND lgnum IN s_lgnum
AND lfuhr IN s_lfuhr
AND aulwe IN s_aulwe
AND traty IN s_traty
AND traid IN s_traid
AND vsart IN s_vsart
AND trmtyp IN s_trmtyp
AND sdabw IN s_sdabw
AND cont_dg IN r_cont_dg.
LOOP AT six.
  " check whether a matching LIKP entry exists in itab
  READ TABLE itab WITH KEY vbeln = six-vbeln TRANSPORTING NO FIELDS.
  IF sy-subrc <> 0.
    DELETE six. " no match - remove the row from the six internal table
  ENDIF.
ENDLOOP.
Thanks
Venkat -
How to improve performance of insert statement
Hi all,
How to improve performance of insert statement
I am inserting 1 lakh (100,000) records into a table and it takes around 20 minutes.
Plz help.
Thanx In Advance.
I tried:
SQL> create table test as select * from dba_objects;
Table created.
SQL> delete from test;
3635 rows deleted.
SQL> commit;
Commit complete.
SQL> select count(*) from dba_extents where segment_name='TEST';
COUNT(*)
4
SQL> insert /*+ APPEND */ into test select * from dba_objects;
3635 rows created.
SQL> commit;
Commit complete.
SQL> select count(*) from dba_extents where segment_name='TEST';
COUNT(*)
6
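A hedged note on what the extent counts above show: a direct-path (APPEND) insert loads above the table's high-water mark, so the space freed by the earlier DELETE is not reused and the segment grows from 4 to 6 extents. If the goal were to reclaim that space, the usual options are:

```sql
-- General Oracle practice, not specific to this thread:
TRUNCATE TABLE test;                 -- resets the high-water mark (DELETE does not)
-- or, to compact a table that must keep its rows:
ALTER TABLE test MOVE;               -- rebuilds the segment; its indexes go UNUSABLE
-- ALTER INDEX <index_name> REBUILD; -- rebuild each index after the MOVE
```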
Cheers, Bhupinder -
How to improve performance of the attached query
Hi,
How to improve performance of the below query, Please help. also attached explain plan -
SELECT Camp.Id,
rCam.AccountKey,
Camp.Id,
CamBilling.Cpm,
CamBilling.Cpc,
CamBilling.FlatRate,
Camp.CampaignKey,
Camp.AccountKey,
CamBilling.billoncontractedamount,
(SUM(rCam.Impressions) * 0.001 + SUM(rCam.Clickthrus)) AS GR,
rCam.AccountKey as AccountKey
FROM Campaign Camp, rCamSit rCam, CamBilling, Site xSite
WHERE Camp.AccountKey = rCam.AccountKey
AND Camp.AvCampaignKey = rCam.AvCampaignKey
AND Camp.AccountKey = CamBilling.AccountKey
AND Camp.CampaignKey = CamBilling.CampaignKey
AND rCam.AccountKey = xSite.AccountKey
AND rCam.AvSiteKey = xSite.AvSiteKey
AND rCam.RmWhen BETWEEN to_date('01-01-2009', 'DD-MM-YYYY') and
to_date('01-01-2011', 'DD-MM-YYYY')
GROUP By rCam.AccountKey,
Camp.Id,
CamBilling.Cpm,
CamBilling.Cpc,
CamBilling.FlatRate,
Camp.CampaignKey,
Camp.AccountKey,
CamBilling.billoncontractedamount
Explain Plan :-
Description Object_owner Object_name Cost Cardinality Bytes
SELECT STATEMENT, GOAL = ALL_ROWS 14 1 13
SORT AGGREGATE 1 13
VIEW GEMINI_REPORTING 14 1 13
HASH GROUP BY 14 1 103
NESTED LOOPS 13 1 103
HASH JOIN 12 1 85
TABLE ACCESS BY INDEX ROWID GEMINI_REPORTING RCAMSIT 2 4 100
NESTED LOOPS 9 5 325
HASH JOIN 7 1 40
SORT UNIQUE 2 1 18
TABLE ACCESS BY INDEX ROWID GEMINI_PRIMARY SITE 2 1 18
INDEX RANGE SCAN GEMINI_PRIMARY SITE_I0 1 1
TABLE ACCESS FULL GEMINI_PRIMARY SITE 3 27 594
INDEX RANGE SCAN GEMINI_REPORTING RCAMSIT_I 1 1 5
TABLE ACCESS FULL GEMINI_PRIMARY CAMPAIGN 3 127 2540
TABLE ACCESS BY INDEX ROWID GEMINI_PRIMARY CAMBILLING 1 1 18
INDEX UNIQUE SCAN GEMINI_PRIMARY CAMBILLING_U1 0 1duplicate thread..
How to improve performance of attached query -
How to improve performance of query
Hi all,
How to improve performance of query.
please send :
[email protected]
thanks in advance
bhaskar
Hi,
go through the following links for performance
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
http://www.asug.com/client_files/Calendar/Upload/ASUG%205-mar-2004%20BW%20Performance%20PDF.pdf
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2 -
How to improve performance of Siebel Configurator
Hi All,
We are using Siebel Configurator to model the item structures and wrote a few constraint rules on it. But the configurator takes a long time to open.
Even without rules, it behaves the same way.
Any inputs on this would be highly appreciated.
RAM
duplicate thread..
How to improve performance of attached query -
How to improve performance of MediaPlayer?
I tried to use the MediaPlayer with an On2 VP6 FLV movie.
Showing a video with a resolution of 1024x768 works.
Showing a video with a resolution of 1280x720 and an average bitrate of 1700 kb/s causes the video to lag a couple of seconds behind the audio. VLC, Media Player Classic, and a couple of other players have no problem with the video; only the FX MediaPlayer shows poor performance.
Additionally, mouse events in a second stage (the first stage is used for the video) are not processed in 2 of 3 cases. If the MediaPlayer is switched off, the mouse events work reliably.
Does somebody know a solution for these problems?
Cheers
masim
duplicate thread..
How to improve performance of attached query -
HOW TO IMPROVE PERFORMANCE ON SUM FUNCTION IN INLINE SQL QUERY
SELECT NVL(SUM(B1.T_AMOUNT),0) PAYMENT,B1.ACCOUNT_NUM,B1.BILL_SEQ
FROM (
SELECT P.T_AMOUNT,P.ACCOUNT_NUM,P.BILL_SEQ
FROM PAYMENT_DATA_VIEW P
WHERE TRUNC(P.ACC_PAYMENT_DATE) < '01-JAN-2013'
AND P.CUSTOMER_NAME ='XYZ'
AND P.CLASS_ID IN (-1,1,2,94)
) B1
GROUP BY B1.ACCOUNT_NUM,B1.BILL_SEQ
Above is the query. If we run the inner query by itself, it takes a few seconds to execute, but when we sum the same amount per ACCOUNT_NUM and BILL_SEQ through the inline view, it takes a long time.
Note: Count of rows selected from inner query will be around >10 Lac
How to improve the performance for this query?
Pls suggest
Thanks in advance
989209 wrote:
SELECT NVL(SUM(B1.T_AMOUNT),0) PAYMENT,B1.ACCOUNT_NUM,B1.BILL_SEQ
FROM (
SELECT P.T_AMOUNT,P.ACCOUNT_NUM,P.BILL_SEQ
FROM PAYMENT_DATA_VIEW P
WHERE TRUNC(P.ACC_PAYMENT_DATE) < '01-JAN-2013'
AND P.CUSTOMER_NAME ='XYZ'
AND P.CLASS_ID IN (-1,1,2,94)
) B1
GROUP BY B1.ACCOUNT_NUM,B1.BILL_SEQ
Above is the query.If we run inner query it takes few second to execute but while we are summing up the same amount and bill_Seq using inline view, it takes time to execute it.
Note: Count of rows selected from inner query will be around >10 Lac
How to improve the performance for this query?
Pls suggest
Thanks in advance
a) Lac is not an international unit, so it is not understood by everyone. This is an international forum, so please use international units.
b) Please read the FAQ: {message:id=9360002} to learn how to format your question correctly for people to help you.
c) As your question relates to performance tuning, please also read the two threads linked to in the FAQ: {message:id=9360003} for an idea of what specific information you need to provide for people to help you tune your query. -
How to improve performance of this SQL query?
Hi,
I have a query that tries to build a string (RPATH) for use as a url parameter. The query is:
SELECT DISTINCT USERNAME, PASSWORD, ROLE, RIGHTS,
DECODE(GEO, ROLE, 'g='||NVL(GEO,'none'), NULL)||
DECODE(AREA, ROLE, 'g='||NVL(GEO,'none')||'&a='||NVL(AREA,'none'), NULL)||
DECODE(REGION, ROLE, 'g='||NVL(GEO,'none')||'&a='||NVL(AREA,'none')||'&r='||NVL(REGION,'none'), NULL)||
DECODE(DISTRICT, ROLE, 'g='||NVL(GEO,'none')||'&a='||NVL(AREA,'none')||'&r='||NVL(REGION,'none')||'&d='||NVL(DISTRICT,'none'), NULL)||
DECODE(OFFICE, ROLE, 'g='||NVL(GEO,'none')||'&a='||NVL(AREA,'none')||'&r='||NVL(REGION,'none')||'&d='||NVL(DISTRICT,'none')||'&o='||NVL(OFFICE,'none')
, NULL) RPATH
FROM (SELECT U.*, L.*
FROM (SELECT * FROM T_USERS WHERE USERNAME='xxx' AND PASSWORD='yyy') U, T_LOC_SUB L
WHERE U.ROLE IN ('WW', L.GEO, L.AREA, L.REGION, L.DISTRICT, L.OFFICE))
GROUP BY USERNAME, PASSWORD, ROLE, RIGHTS, GEO, AREA, REGION, DISTRICT, OFFICE;
T_USERS is defined as
CREATE TABLE T_USERS (
username VARCHAR2(10) CONSTRAINT T_USERS_username_pk PRIMARY KEY,
password VARCHAR2(10),
role CONSTRAINT T_USERS_role_FK REFERENCES T_LOC_MAIN(loc),
rights VARCHAR2(3)
);
T_LOC_SUB is defined as
CREATE TABLE T_LOC_SUB (
geo CONSTRAINT T_LOC_SUB_geo_FK REFERENCES T_LOC_MAIN(loc),
area CONSTRAINT T_LOC_SUB_area_FK REFERENCES T_LOC_MAIN(loc),
region CONSTRAINT T_LOC_SUB_region_FK REFERENCES T_LOC_MAIN(loc),
district CONSTRAINT T_LOC_SUB_district_FK REFERENCES T_LOC_MAIN(loc),
office CONSTRAINT T_LOC_SUB_office_FK REFERENCES T_LOC_MAIN(loc)
);
T_LOC_MAIN is defined as
CREATE TABLE T_LOC_MAIN (
loc VARCHAR2(4) CONSTRAINT T_LOC_MAIN_loc_PK PRIMARY KEY,
label VARCHAR2(60),
rank NUMBER
);
REGION and DISTRICT columns in T_LOC_SUB may be NULL at times. How can I rewrite the SQL to make it run faster or more efficiently?
Please help. Thank you.
Hi,
I just realised I can simplify the query to:
SELECT DISTINCT USERNAME, PASSWORD, ROLE, RIGHTS,
DECODE(ROLE,
GEO, 'g='||NVL(GEO,'none'),
AREA, 'g='||NVL(GEO,'none')||'&a='||NVL(AREA,'none'),
REGION, 'g='||NVL(GEO,'none')||'&a='||NVL(AREA,'none')||'&r='||NVL(REGION,'none'),
DISTRICT, 'g='||NVL(GEO,'none')||'&a='||NVL(AREA,'none')||'&r='||NVL(REGION,'none')||'&d='||NVL(DISTRICT,'none'),
OFFICE, 'g='||NVL(GEO,'none')||'&a='||NVL(AREA,'none')||'&r='||NVL(REGION,'none')||'&d='||NVL(DISTRICT,'none')||'&o='||NVL(OFFICE,'none'),
NULL) RPATH
FROM (SELECT U.*, L.*
FROM (SELECT * FROM T_USERS WHERE USERNAME='xxx' AND PASSWORD='yyy') U, T_LOC_SUB L
WHERE U.ROLE IN ('WW', L.GEO, L.AREA, L.REGION, L.DISTRICT, L.OFFICE))
GROUP BY USERNAME, PASSWORD, ROLE, RIGHTS, GEO, AREA, REGION, DISTRICT, OFFICE;
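One further simplification, offered as a hedged observation: because the query uses GROUP BY with no aggregate functions, the grouped rows are already unique, so the DISTINCT is redundant and can be dropped. In miniature (illustrative columns only):

```sql
-- GROUP BY with no aggregates already returns one row per group, so
-- SELECT DISTINCT ... GROUP BY c1, c2 is equivalent to SELECT ... GROUP BY c1, c2.
SELECT username, role      -- illustrative columns
FROM   t_users
GROUP  BY username, role;  -- no DISTINCT needed
```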
Can anyone offer a better, more efficient improvement?
Thanx! -
How to improve on insert-select query performance
Hi,
I would like to get some opinions on how to improve this query inside my stored procedure.
This INSERT statement has run for more than 4 hours to insert around 62k records.
I have identified the bottleneck as the function call within the SELECT statement.
Could anyone help fine-tune it?
INSERT INTO STG_PRICE_OUT
( ONPN,
EFFECTIVE_DT,
PRICE_CATENAME,
QUEUE_ID
)
SELECT P.ONPN, P.EFFECTIVE_DT,
gps_get_catename(P.PART_STATUS ,P.PROGRAM_CD ,P.MARKET_CD),
'1'
FROM PRICE P,
GPS_INV_ITEMS GII
WHERE P.ONPN = GII.ONPN
FUNCTION Gps_Get_Catename (
p_status VARCHAR2,
p_pgm VARCHAR2,
p_market VARCHAR2
) RETURN VARCHAR2
IS
catename VARCHAR2(30);
BEGIN
SELECT PRICE_CATENAME
INTO catename
FROM PRICE_CATEGORY PC
WHERE NVL(PC.PART_STATUS,' ')= NVL(p_status,' ')
AND NVL(PC.PROGRAM_CD,' ') = NVL(p_pgm,' ')
AND NVL(PC.MARKET_CD,' ') = NVL(p_market,' ');
RETURN catename;
EXCEPTION
WHEN NO_DATA_FOUND
THEN
RETURN NULL;
WHEN OTHERS
THEN
DBMS_OUTPUT.PUT_LINE('gps_get_catename: Exception caught!! (' || SQLCODE || ') : ' || SQLERRM);
RETURN catename;
END;
STG_PRICE_OUT has around 1 mil records
GPS_INV_ITEMS has around 140K records
PRICE has around 60k records
INDEX:
STG_PRICE_OUT - INDEX 1(ONPN), INDEX2(ONPN,QUEUE_ID)
GPS_INV_ITEMS - INDEX 3(ONPN)
PRICE - INDEX 4(ONPN)
PRICE_CATEGORY - INDEX 5(PART_STATUS ,PROGRAM_CD ,MARKET_CD)
Thanks and regards,
WH
Only use PL/SQL when you can't do it all in SQL...
INSERT INTO STG_PRICE_OUT
( ONPN,
EFFECTIVE_DT,
PRICE_CATENAME,
QUEUE_ID
)
SELECT P.ONPN, P.EFFECTIVE_DT,
PC.PRICE_CATENAME,
'1'
FROM PRICE_CATEGORY PC, PRICE P,
GPS_INV_ITEMS GII
WHERE P.ONPN = GII.ONPN
AND PC.PART_STATUS(+) = P.PART_STATUS
AND PC.PROGRAM_CD(+) = P.PROGRAM_CD
AND PC.MARKET_CD(+) = P.MARKET_CD
/
Cheers, APC
P.S. You may need to tweak the outer joins - I'm not quite sure what your business rule is. -
How to Improve performance issue when we are using BRM LDB
HI All,
I am facing a performance issue when retrieving data from BKPF and the corresponding BSEG table. I see that for the fiscal period there are around 60 lakh (6 million) records, and populating the final internal table from these tables takes a very long time.
When I tried to use the BRM LDB with SAP Query/QuickViewer, I hit the same issue.
Please suggest how to resolve the performance issue.
Thanks in advance
Chakradhar
Moderator message - Please see Please Read before Posting in the Performance and Tuning Forum before posting - post locked
Rob