How to speed up the performance of a query
Hi,
I am running a query on a partitioned table and it is taking too much time in the stage database,
whereas the same query takes very little time in prod.
The table stats are not stale, the indexes are the same as prod, and the data is also the same as prod. Still, the query is taking much longer on the stage database.
Any suggestions to tune the query?
Any workaround?
Thanks.
Have you compared the explain plan between prod and the stage database?
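If the plans differ, that usually points to an optimizer parameter, statistics, or partition-level difference between the two databases. A minimal sketch for capturing the plan on each side (the table, predicate, and bind are placeholders for your own query):
EXPLAIN PLAN FOR
SELECT * FROM your_partitioned_table WHERE part_key = :p1;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- Or, after running the query on each database, fetch the plan actually used:
-- SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id'));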
Please check the following thread:
When your query takes too long ...
HTH
-Anantha
Similar Messages
-
How to improve performance of attached query
Hi,
How to improve the performance of the below query? Please help; the explain plan is attached below.
SELECT Camp.Id,
rCam.AccountKey,
Camp.Id,
CamBilling.Cpm,
CamBilling.Cpc,
CamBilling.FlatRate,
Camp.CampaignKey,
Camp.AccountKey,
CamBilling.billoncontractedamount,
(SUM(rCam.Impressions) * 0.001 + SUM(rCam.Clickthrus)) AS GR,
rCam.AccountKey as AccountKey
FROM Campaign Camp, rCamSit rCam, CamBilling, Site xSite
WHERE Camp.AccountKey = rCam.AccountKey
AND Camp.AvCampaignKey = rCam.AvCampaignKey
AND Camp.AccountKey = CamBilling.AccountKey
AND Camp.CampaignKey = CamBilling.CampaignKey
AND rCam.AccountKey = xSite.AccountKey
AND rCam.AvSiteKey = xSite.AvSiteKey
AND rCam.RmWhen BETWEEN to_date('01-01-2009', 'DD-MM-YYYY') and
to_date('01-01-2011', 'DD-MM-YYYY')
GROUP By rCam.AccountKey,
Camp.Id,
CamBilling.Cpm,
CamBilling.Cpc,
CamBilling.FlatRate,
Camp.CampaignKey,
Camp.AccountKey,
CamBilling.billoncontractedamount
Explain Plan :-
Description Object_owner Object_name Cost Cardinality Bytes
SELECT STATEMENT, GOAL = ALL_ROWS 14 1 13
SORT AGGREGATE 1 13
VIEW GEMINI_REPORTING 14 1 13
HASH GROUP BY 14 1 103
NESTED LOOPS 13 1 103
HASH JOIN 12 1 85
TABLE ACCESS BY INDEX ROWID GEMINI_REPORTING RCAMSIT 2 4 100
NESTED LOOPS 9 5 325
HASH JOIN 7 1 40
SORT UNIQUE 2 1 18
TABLE ACCESS BY INDEX ROWID GEMINI_PRIMARY SITE 2 1 18
INDEX RANGE SCAN GEMINI_PRIMARY SITE_I0 1 1
TABLE ACCESS FULL GEMINI_PRIMARY SITE 3 27 594
INDEX RANGE SCAN GEMINI_REPORTING RCAMSIT_I 1 1 5
TABLE ACCESS FULL GEMINI_PRIMARY CAMPAIGN 3 127 2540
TABLE ACCESS BY INDEX ROWID GEMINI_PRIMARY CAMBILLING 1 1 18
INDEX UNIQUE SCAN GEMINI_PRIMARY CAMBILLING_U1 0 1
duplicate thread..
How to improve performance of attached query -
How to improve performance of a query that is based on an xmltype table
Dear Friends,
I have a query that is pulling records from an XMLType table with 9000 rows and it is running very slowly.
I am using the XMLTABLE function to retrieve the rows. It is taking up to 30 minutes to finish.
Would you be able to suggest how I can make it faster? Thanks.
Below is the query.....
INSERT INTO temp_sap_po_receipt_history_t
(po_number, po_line_number, doc_year,
material_doc, material_doc_item, quantity, sap_ref_doc_no_long,
reference_doc, movement_type_code,
sap_ref_doc_no, posting_date, entry_date, entry_time, hist_type)
SELECT :pin_po_number po_number,
b.po_line_number, b.doc_year,
b.material_doc, b.material_doc_item, b.quantity, b.sap_ref_doc_no_long,
b.reference_doc, b.movement_type_code,
b.sap_ref_doc_no, to_date(b.posting_date,'rrrr-mm-dd'),
to_date(b.entry_date,'rrrr-mm-dd'), b.entry_time, b.hist_type
FROM temp_xml t,
XMLTABLE(XMLNAMESPACES('urn:sap-com:document:sap:rfc:functions' AS "n0"),
'/n0:BAPI_PO_GETDETAIL1Response/POHISTORY/item'
PASSING t.object_value
COLUMNS PO_LINE_NUMBER VARCHAR2(20) PATH 'PO_ITEM',
DOC_YEAR varchar2(4) PATH 'DOC_YEAR',
MATERIAL_DOC varchar2(30) PATH 'MAT_DOC',
MATERIAL_DOC_ITEM VARCHAR2(10) PATH 'MATDOC_ITEM',
QUANTITY NUMBER(20,6) PATH 'QUANTITY',
SAP_REF_DOC_NO_LONG VARCHAR2(20) PATH 'REF_DOC_NO_LONG',
REFERENCE_DOC VARCHAR2(20) PATH 'REF_DOC',
MOVEMENT_TYPE_CODE VARCHAR2(4) PATH 'MOVE_TYPE',
SAP_REF_DOC_NO VARCHAR2(20) PATH 'REF_DOC_NO',
POSTING_DATE VARCHAR2(10) PATH 'PSTNG_DATE',
ENTRY_DATE VARCHAR2(10) PATH 'ENTRY_DATE',
ENTRY_TIME VARCHAR2(8) PATH 'ENTRY_TIME',
HIST_TYPE VARCHAR2(5) PATH 'HIST_TYPE') b;
Based on the response from mdrake on this thread:
Re: XML file processing into oracle
For large XML's, you can speed up the processing of XMLTABLE by using a registered schema...
declare
SCHEMAURL VARCHAR2(256) := 'http://xmlns.example.org/xsd/testcase.xsd';
XMLSCHEMA VARCHAR2(4000) := '<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xdb="http://xmlns.oracle.com/xdb" xdb:storeVarrayAsTable="true">
<xs:element name="cust_order" type="cust_orderType" xdb:defaultTable="CUST_ORDER_TBL"/>
<xs:complexType name="groupType" xdb:maintainDOM="false">
<xs:sequence>
<xs:element name="item" type="itemType" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="id" type="xs:byte" use="required"/>
</xs:complexType>
<xs:complexType name="itemType" xdb:maintainDOM="false">
<xs:simpleContent>
<xs:extension base="xs:string">
<xs:attribute name="id" type="xs:short" use="required"/>
<xs:attribute name="name" type="xs:string" use="required"/>
</xs:extension>
</xs:simpleContent>
</xs:complexType>
<xs:complexType name="cust_orderType" xdb:maintainDOM="false">
<xs:sequence>
<xs:element name="group" type="groupType" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="cust_id" type="xs:short" use="required"/>
</xs:complexType>
</xs:schema>';
INSTANCE CLOB :=
'<cust_order cust_id="12345">
<group id="1">
<item id="1" name="Standard Mouse">100</item>
<item id="2" name="Keyboard">100</item>
<item id="3" name="Memory Module 2Gb">200</item>
<item id="4" name="Processor 3Ghz">25</item>
<item id="5" name="Processor 2.4Ghz">75</item>
</group>
<group id="2">
<item id="1" name="Graphics Tablet">15</item>
<item id="2" name="Keyboard">15</item>
<item id="3" name="Memory Module 4Gb">15</item>
<item id="4" name="Processor Quad Core 2.8Ghz">15</item>
</group>
<group id="3">
<item id="1" name="Optical Mouse">5</item>
<item id="2" name="Ergo Keyboard">5</item>
<item id="3" name="Memory Module 2Gb">10</item>
<item id="4" name="Processor Dual Core 2.4Ghz">5</item>
<item id="5" name="Dual Output Graphics Card">5</item>
<item id="6" name="28inch LED Monitor">10</item>
<item id="7" name="Webcam">5</item>
<item id="8" name="A3 1200dpi Laser Printer">2</item>
</group>
</cust_order>';
begin
dbms_xmlschema.registerSchema(
schemaurl => SCHEMAURL
,schemadoc => XMLSCHEMA
,local => TRUE
,genTypes => TRUE
,genBean => FALSE
,genTables => TRUE
,ENABLEHIERARCHY => DBMS_XMLSCHEMA.ENABLE_HIERARCHY_NONE
);
execute immediate 'insert into CUST_ORDER_TBL values (XMLTYPE(:INSTANCE))' using INSTANCE;
end;
SQL> desc CUST_ORDER_TBL
Name Null? Type
TABLE of SYS.XMLTYPE(XMLSchema "http://xmlns.example.org/xsd/testcase.xsd" Element "cust_order") STORAGE Object-relational TYPE "cust_orderType222_T"
SQL> set autotrace on explain
SQL> set pages 60 lines 164 heading on
SQL> col cust_id format a8
SQL> select extract(object_value,'/cust_order/@cust_id') as cust_id
2 ,grp.id as group_id, itm.id as item_id, itm.inm as item_name, itm.qty as item_qty
3 from CUST_ORDER_TBL
4 ,XMLTABLE('/cust_order/group'
5 passing object_value
6 columns id number path '@id'
7 ,item xmltype path 'item'
8 ) grp
9 ,XMLTABLE('/item'
10 passing grp.item
11 columns id number path '@id'
12 ,inm varchar2(30) path '@name'
13 ,qty number path '.'
14 ) itm
15 /
CUST_ID GROUP_ID ITEM_ID ITEM_NAME ITEM_QTY
12345 1 1 Standard Mouse 100
12345 1 2 Keyboard 100
12345 1 3 Memory Module 2Gb 200
12345 1 4 Processor 3Ghz 25
12345 1 5 Processor 2.4Ghz 75
12345 2 1 Graphics Tablet 15
12345 2 2 Keyboard 15
12345 2 3 Memory Module 4Gb 15
12345 2 4 Processor Quad Core 2.8Ghz 15
12345 3 1 Optical Mouse 5
12345 3 2 Ergo Keyboard 5
12345 3 3 Memory Module 2Gb 10
12345 3 4 Processor Dual Core 2.4Ghz 5
12345 3 5 Dual Output Graphics Card 5
12345 3 6 28inch LED Monitor 10
12345 3 7 Webcam 5
12345 3 8 A3 1200dpi Laser Printer 2
17 rows selected.
Need at least 10.2.0.3 for performance, i.e. to avoid COLLECTION ITERATOR PICKLER FETCH in the execution plan...
On 10.2.0.1:
Execution Plan
Plan hash value: 3741473841
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 24504 | 89M| 873 (1)| 00:00:11 |
| 1 | NESTED LOOPS | | 24504 | 89M| 873 (1)| 00:00:11 |
| 2 | NESTED LOOPS | | 3 | 11460 | 805 (1)| 00:00:10 |
| 3 | TABLE ACCESS FULL | CUST_ORDER_TBL | 1 | 3777 | 3 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | SYS_IOT_TOP_774117 | 3 | 129 | 1 (0)| 00:00:01 |
| 5 | COLLECTION ITERATOR PICKLER FETCH| XMLSEQUENCEFROMXMLTYPE | | | | |
Predicate Information (identified by operation id):
4 - access("NESTED_TABLE_ID"="CUST_ORDER_TBL"."SYS_NC0000900010$")
filter("SYS_NC_TYPEID$" IS NOT NULL)
Note
- dynamic sampling used for this statement
On 10.2.0.3:
Execution Plan
Plan hash value: 1048233240
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 17 | 132K| 839 (0)| 00:00:11 |
| 1 | NESTED LOOPS | | 17 | 132K| 839 (0)| 00:00:11 |
| 2 | MERGE JOIN CARTESIAN | | 17 | 131K| 805 (0)| 00:00:10 |
| 3 | TABLE ACCESS FULL | CUST_ORDER_TBL | 1 | 3781 | 3 (0)| 00:00:01 |
| 4 | BUFFER SORT | | 17 | 70839 | 802 (0)| 00:00:10 |
|* 5 | INDEX FAST FULL SCAN| SYS_IOT_TOP_56154 | 17 | 70839 | 802 (0)| 00:00:10 |
|* 6 | INDEX UNIQUE SCAN | SYS_IOT_TOP_56152 | 1 | 43 | 2 (0)| 00:00:01 |
|* 7 | INDEX RANGE SCAN | SYS_C006701 | 1 | | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
5 - filter("SYS_NC_TYPEID$" IS NOT NULL)
6 - access("SYS_NTpzENS1H/RwSSC7TVzvlqmQ=="."NESTED_TABLE_ID"="SYS_NTnN5b8Q+8Txi9V
w5Ysl6x9w=="."SYS_NC0000600007$")
filter("SYS_NC_TYPEID$" IS NOT NULL AND
"NESTED_TABLE_ID"="CUST_ORDER_TBL"."SYS_NC0000900010$")
7 - access("SYS_NTpzENS1H/RwSSC7TVzvlqmQ=="."NESTED_TABLE_ID"="SYS_NTnN5b8Q+8Txi9V
w5Ysl6x9w=="."SYS_NC0000600007$")
Note
- dynamic sampling used for this statement
----------------------------------------------------------------------------------------------------------
-- CLEAN UP
DROP TABLE CUST_ORDER_TBL purge;
exec dbms_xmlschema.deleteschema('http://xmlns.example.org/xsd/testcase.xsd'); -
How to improve performance of a query which is on a virtual cube
Hi All,
Please suggest some tips to improve the performance of queries built on virtual cubes.
Thanks in advance.
Regards,
Raj
Hi Raj,
How is your direct access datasource built? Is it a standard datasource, or a generic datasource on a view/table/function module? This strengthens my second point.
Suppose you built a virtual cube on a direct access datasource built on the AUFK table, with Order as the primary key (order master data). When you use Order as a selection in a query built on this virtual cube, it retrieves the data faster than firing the query with other selections.
If your selections are different, you can possibly create a secondary index on the table with the selections used in the query.
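At the database level a secondary index is just an ordinary index on the selection fields. A hypothetical sketch (ERNAM merely stands in for whatever selection field your query actually uses, and in SAP you would define the index via transaction SE11 rather than raw SQL):
CREATE INDEX zaufk_sel ON aufk (mandt, ernam);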
Regards
vamsi -
How to improve performance of my query
Hello Friends,
Good Morning.
I have the following query, which is never ending. Can anyone throw some light on how to improve the performance of the said query? This is the query generated in ODI (Oracle Data Integrator 11g).
The only thing I can add to this query is optimizer hints.
- issue resolved
Please advise.
Thanks / Kumar
Edited by: kumar73 on May 18, 2012 6:38 AM
Edited by: kumar73 on May 18, 2012 6:39 AM
Edited by: kumar73 on May 18, 2012 12:04 PM
The two DISTINCTs are redundant: UNION already returns unique records, as a set can't have duplicates.
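For example (a generic sketch; t1 and t2 are hypothetical tables), this:
SELECT DISTINCT col FROM t1
UNION
SELECT DISTINCT col FROM t2;
returns exactly the same rows as the cheaper:
SELECT col FROM t1
UNION
SELECT col FROM t2;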
Other than that the query is not formatted and unreadable, and you didn't provide a description of the tables involved.
Your strategy seems to be maximum help from this forum with minimum effort from yourself, other than hitting copy and paste.
Sybrand Bakker
Senior Oracle DBA -
How to improve performance of a select query when the primary key is not referenced
Hi,
There is a select query where we are unable to reference the primary key of the tables.
Since the below code references the VGBEL and VGPOS fields instead of VBELN and POSNR, the performance is very slow.
select vbeln posnr into (wa-vbeln1, wa-posnr1)
from lips
where ( pstyv ne 'ZBAT'
and pstyv ne 'ZNLN' )
and vgbel = i_vbap-vbeln
and vgpos = i_vbap-posnr.
endselect.
Please let me know if you have some tips.
Hi,
I assume you are using the SELECT statement inside a LOOP ... ENDLOOP; move it outside the loop to improve the performance:
if not i_vbap[] is initial.
select vbeln posnr into table it_lips
from lips
for all entries in i_vbap
where ( pstyv ne 'ZBAT'
and pstyv ne 'ZNLN' )
and vgbel = i_vbap-vbeln
and vgpos = i_vbap-posnr.
endif. -
Hi everyone
Let's say you have a PL/SQL routine that is processing data from a source table and for each record, it checks to see whether a matching record exists in a header table (TableA); if one does, it uses it otherwise it creates a new one. It then inserts associated detail records (into TableB) linked to the header record. So the process is:
Read record from source table
Check to see if matching header record exists in TableA (using indexed field)
If match found then store TXH_ID (PK in TableA)
If no match found then create new header record in TableA with new TXH_ID
Create detail record in TableB where TXD_TXH_ID (FK on TableB) = TXH_ID
If the header table (Table A) starts getting big (i.e. the process adds a few million records to it), presumably the stats on TableA will start to get stale and therefore the query in step 2 will become more time consuming?
If so, is there any way to rectify this? Would updating the stats at certain points in the process be effective?
Would it be any different if a MERGE was used to (conditionally) insert the header records into TableA? (i.e. would the stats still get stale?)
DB is 11GR2 and OS is Windows Server 2008
Thanks
Let's say you have a PL/SQL routine that is processing data from a source table and for each record, it checks to see whether a matching record exists in a header table (TableA); if one does, it uses it otherwise it creates a new one. It then inserts associated detail records (into TableB) linked to the header record. So the process is:
Read record from source table
Check to see if matching header record exists in TableA (using indexed field)
If match found then store TXH_ID (PK in TableA)
If no match found then create new header record in TableA with new TXH_ID
Create detail record in TableB where TXD_TXH_ID (FK on TableB) = TXH_ID
If the header table (Table A) starts getting big (i.e. the process adds a few million records to it), presumably the stats on TableA will start to get stale and therefore the query in step 2 will become more time consuming?
What do you mean 'presumably the stats . .'?
In item #3 you said that TXH_ID is the primary key. That means only ONE value will EVER be found in the index so there should be NO degradation for looking up that primary key value.
The plan you posted shows an index range scan. A range scan is NOT used to lookup primary key values since they must be unique (meaning there is NO RANGE).
So there should be NO impact due to the header table 'getting big'.
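For reference, the MERGE variant asked about might look roughly like this (a sketch only: the column and sequence names are assumptions based on the thread, and the point above applies to it equally, since MERGE performs the same indexed lookup on the join key):
MERGE INTO tablea a
USING (SELECT :src_match_key AS match_key FROM dual) s
ON (a.match_key = s.match_key)
WHEN NOT MATCHED THEN
INSERT (txh_id, match_key)
VALUES (txh_id_seq.NEXTVAL, s.match_key);
-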
How to Improve Performance of this query??
Hi experts,
Kindly suggest some performance optimizations for the below code.
SELECT * FROM vtrdi AS v
INTO TABLE six
FOR ALL ENTRIES IN r_vbeln
WHERE vbeln EQ r_vbeln-low
AND trsta IN s_trsta
AND vstel IN s_vstel
AND tddat IN s_tddat
AND vbtyp IN r_vbtyp
AND lstel IN s_lstel
AND route IN s_route
AND tragr IN s_tragr
AND vsbed IN s_vsbed
AND land1 IN s_land1
AND lzone IN s_lzone
AND wadat IN s_wadat
AND wbstk IN s_wbstk
AND lddat IN s_lddat
AND lfdat IN s_lfdat
AND kodat IN s_kodat
AND kunnr IN s_kunnr
AND spdnr IN s_spdnr
AND inco1 IN s_inco1
AND inco2 IN s_inco2
AND lprio IN s_lprio
AND EXISTS ( SELECT * FROM likp
WHERE vbeln EQ v~vbeln
AND lifnr IN s_lifnr
AND lgtor IN s_lgtor
AND lgnum IN s_lgnum
AND lfuhr IN s_lfuhr
AND aulwe IN s_aulwe
AND traty IN s_traty
AND traid IN s_traid
AND vsart IN s_vsart
AND trmtyp IN s_trmtyp
AND sdabw IN s_sdabw
AND cont_dg IN r_cont_dg ).
Thanks in Advance...
Santosh.
Try to write two separate SELECTs:
SELECT * FROM vtrdi AS v
INTO TABLE six
FOR ALL ENTRIES IN r_vbeln
WHERE vbeln EQ r_vbeln-low
AND trsta IN s_trsta
AND vstel IN s_vstel
AND tddat IN s_tddat
AND vbtyp IN r_vbtyp
AND lstel IN s_lstel
AND route IN s_route
AND tragr IN s_tragr
AND vsbed IN s_vsbed
AND land1 IN s_land1
AND lzone IN s_lzone
AND wadat IN s_wadat
AND wbstk IN s_wbstk
AND lddat IN s_lddat
AND lfdat IN s_lfdat
AND kodat IN s_kodat
AND kunnr IN s_kunnr
AND spdnr IN s_spdnr
AND inco1 IN s_inco1
AND inco2 IN s_inco2
AND lprio IN s_lprio.
if not six[] is initial.
SELECT * FROM likp INTO TABLE itab
FOR ALL ENTRIES IN six
WHERE vbeln EQ six-vbeln
AND lifnr IN s_lifnr
AND lgtor IN s_lgtor
AND lgnum IN s_lgnum
AND lfuhr IN s_lfuhr
AND aulwe IN s_aulwe
AND traty IN s_traty
AND traid IN s_traid
AND vsart IN s_vsart
AND trmtyp IN s_trmtyp
AND sdabw IN s_sdabw
AND cont_dg IN r_cont_dg.
endif.
* Keep only the entries in SIX that have a matching LIKP record
loop at six.
read table itab with key vbeln = six-vbeln transporting no fields.
if sy-subrc ne 0.
delete six.
endif.
endloop.
Thanks
Venkat -
How can I perform this kind of range join query using DPL?
How can I perform this kind of range join query using DPL?
SELECT * from t where 1<=t.a<=2 and 3<=t.b<=5
In this pdf : http://www.oracle.com/technology/products/berkeley-db/pdf/performing%20queries%20in%20oracle%20berkeley%20db%20java%20edition.pdf,
It shows how to perform a "two equality-conditions query on a single primary database", just like SELECT * FROM tab WHERE col1 = A AND col2 = B, using the entity join class, but it does not give a solution for the range join query.
I'm sorry, I think I've misled you. I suggested that you perform two queries and then take the intersection of the results. You could do this, but the solution to your query is much simpler. I'll correct my previous message.
Your query is very simple to implement. You should perform the first part of the query to get a cursor on the index for 'a' for the "1<=t.a<=2" part. Then simply iterate over that cursor, and process the entities where the "3<=t.b<=5" expression is true. You don't need a second index (on 'b') or another cursor.
This is called "filtering" because you're iterating through entities that you obtain from one index, and selecting some entities for processing and discarding others. The white paper you mentioned has an example of filtering in combination with the use of an index.
An alternative is to reverse the procedure above: use the index for 'b' to get a cursor for the "3<=t.b<=5" part of the query, then iterate and filter the results based on the "1<=t.a<=2" expression.
If you're concerned about efficiency, you can choose the index (i.e., choose which of these two alternatives to implement) based on which part of the query you believe will return the smallest number of results. The less entities read, the faster the query.
Contrary to what I said earlier, taking the intersection of two queries that are ANDed doesn't make sense -- filtering is the better solution. However, taking the union of two queries does make sense, when the queries are ORed. Sorry for the confusion.
--mark -
How to get SQL Server performance counters using a query?
Hi, I want to see my SQL Server performance counters, like Full Scans/sec, Buffer Cache Hit Ratio, Database Transactions/sec, User Connections, Average Latch Wait Time (ms), Lock Waits/sec, Lock Timeouts/sec, Number of Deadlocks/sec, Total Server Memory, SQL Re-Compilations/sec, and User Settable Query. If anyone knows how to get these using a query, please help me.
Thanks in advance
Hello,
Below is a query created by Jonathan Kehayias for measuring performance counters using the DMV sys.dm_os_performance_counters.
You can download the book from the link below:
https://www.simple-talk.com/books/sql-books/troubleshooting-sql-server-a-guide-for-the-accidental-dba/
DECLARE @CounterPrefix NVARCHAR(30)
SET @CounterPrefix = CASE WHEN @@SERVICENAME = 'MSSQLSERVER'
THEN 'SQLServer:'
ELSE 'MSSQL$' + @@SERVICENAME + ':'
END ;
-- Capture the first counter set
SELECT CAST(1 AS INT) AS collection_instance ,
[OBJECT_NAME] ,
counter_name ,
instance_name ,
cntr_value ,
cntr_type ,
CURRENT_TIMESTAMP AS collection_time
INTO #perf_counters_init
FROM sys.dm_os_performance_counters
WHERE ( OBJECT_NAME = @CounterPrefix + 'Access Methods'
AND counter_name = 'Full Scans/sec' )
OR ( OBJECT_NAME = @CounterPrefix + 'Access Methods'
AND counter_name = 'Index Searches/sec' )
OR ( OBJECT_NAME = @CounterPrefix + 'Buffer Manager'
AND counter_name = 'Lazy Writes/sec' )
OR ( OBJECT_NAME = @CounterPrefix + 'Buffer Manager'
AND counter_name = 'Page life expectancy' )
OR ( OBJECT_NAME = @CounterPrefix + 'General Statistics'
AND counter_name = 'Processes Blocked' )
OR ( OBJECT_NAME = @CounterPrefix + 'General Statistics'
AND counter_name = 'User Connections' )
OR ( OBJECT_NAME = @CounterPrefix + 'Locks'
AND counter_name = 'Lock Waits/sec' )
OR ( OBJECT_NAME = @CounterPrefix + 'Locks'
AND counter_name = 'Lock Wait Time (ms)' )
OR ( OBJECT_NAME = @CounterPrefix + 'SQL Statistics'
AND counter_name = 'SQL Re-Compilations/sec' )
OR ( OBJECT_NAME = @CounterPrefix + 'Memory Manager'
AND counter_name = 'Memory Grants Pending' )
OR ( OBJECT_NAME = @CounterPrefix + 'SQL Statistics'
AND counter_name = 'Batch Requests/sec' )
OR ( OBJECT_NAME = @CounterPrefix + 'SQL Statistics'
AND counter_name = 'SQL Compilations/sec' ) ;
-- Wait one second between the two collections
WAITFOR DELAY '00:00:01' ;
-- Capture the second counter set
SELECT CAST(2 AS INT) AS collection_instance ,
OBJECT_NAME ,
counter_name ,
instance_name ,
cntr_value ,
cntr_type ,
CURRENT_TIMESTAMP AS collection_time
INTO #perf_counters_second
FROM sys.dm_os_performance_counters
WHERE ( OBJECT_NAME = @CounterPrefix + 'Access Methods'
AND counter_name = 'Full Scans/sec' )
OR ( OBJECT_NAME = @CounterPrefix + 'Access Methods'
AND counter_name = 'Index Searches/sec' )
OR ( OBJECT_NAME = @CounterPrefix + 'Buffer Manager'
AND counter_name = 'Lazy Writes/sec' )
OR ( OBJECT_NAME = @CounterPrefix + 'Buffer Manager'
AND counter_name = 'Page life expectancy' )
OR ( OBJECT_NAME = @CounterPrefix + 'General Statistics'
AND counter_name = 'Processes Blocked' )
OR ( OBJECT_NAME = @CounterPrefix + 'General Statistics'
AND counter_name = 'User Connections' )
OR ( OBJECT_NAME = @CounterPrefix + 'Locks'
AND counter_name = 'Lock Waits/sec' )
OR ( OBJECT_NAME = @CounterPrefix + 'Locks'
AND counter_name = 'Lock Wait Time (ms)' )
OR ( OBJECT_NAME = @CounterPrefix + 'SQL Statistics'
AND counter_name = 'SQL Re-Compilations/sec' )
OR ( OBJECT_NAME = @CounterPrefix + 'Memory Manager'
AND counter_name = 'Memory Grants Pending' )
OR ( OBJECT_NAME = @CounterPrefix + 'SQL Statistics'
AND counter_name = 'Batch Requests/sec' )
OR ( OBJECT_NAME = @CounterPrefix + 'SQL Statistics'
AND counter_name = 'SQL Compilations/sec' ) ;
-- Calculate the cumulative counter values
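-- cntr_type 272696576 (PERF_COUNTER_BULK_COUNT) is cumulative per-second data:
-- report the delta between the two samples taken one second apart.
-- cntr_type 65792 (PERF_COUNTER_LARGE_RAWCOUNT) is a point-in-time value:
-- report the second sample as-is.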
SELECT i.OBJECT_NAME ,
i.counter_name ,
i.instance_name ,
CASE WHEN i.cntr_type = 272696576
THEN s.cntr_value - i.cntr_value
WHEN i.cntr_type = 65792 THEN s.cntr_value
END AS cntr_value
FROM #perf_counters_init AS i
JOIN #perf_counters_second AS s
ON i.collection_instance + 1 = s.collection_instance
AND i.OBJECT_NAME = s.OBJECT_NAME
AND i.counter_name = s.counter_name
AND i.instance_name = s.instance_name
ORDER BY OBJECT_NAME
-- Cleanup tables
DROP TABLE #perf_counters_init
DROP TABLE #perf_counters_second
Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers -
HOW TO IMPROVE PERFORMANCE ON SUM FUNCTION IN INLINE SQL QUERY
SELECT NVL(SUM(B1.T_AMOUNT),0) PAYMENT,B1.ACCOUNT_NUM,B1.BILL_SEQ
FROM
( SELECT P.T_AMOUNT,P.ACCOUNT_NUM,P.BILL_SEQ
FROM PAYMENT_DATA_VIEW P
WHERE TRUNC(P.ACC_PAYMENT_DATE) < '01-JAN-2013'
AND P.CUSTOMER_NAME ='XYZ'
AND P.CLASS_ID IN (-1,1,2,94)
) B1
GROUP BY B1.ACCOUNT_NUM,B1.BILL_SEQ
Above is the query. If we run the inner query, it takes a few seconds to execute, but when we sum up the same amount and BILL_SEQ using the inline view, it takes a long time to execute.
Note: Count of rows selected from inner query will be around >10 Lac
How to improve the performance for this query?
Pls suggest
Thanks in advance
989209 wrote:
SELECT NVL(SUM(B1.T_AMOUNT),0) PAYMENT,B1.ACCOUNT_NUM,B1.BILL_SEQ
FROM
( SELECT P.T_AMOUNT,P.ACCOUNT_NUM,P.BILL_SEQ
FROM PAYMENT_DATA_VIEW P
WHERE TRUNC(P.ACC_PAYMENT_DATE) < '01-JAN-2013'
AND P.CUSTOMER_NAME ='XYZ'
AND P.CLASS_ID IN (-1,1,2,94)
) B1
GROUP BY B1.ACCOUNT_NUM,B1.BILL_SEQ
Above is the query. If we run the inner query, it takes a few seconds to execute, but when we sum up the same amount and BILL_SEQ using the inline view, it takes a long time to execute.
Note: Count of rows selected from inner query will be around >10 Lac
How to improve the performance for this query?
Pls suggest
Thanks in advance
a) Lac is not an international unit, so it is not understood by everyone. This is an international forum, so please use international units.
b) Please read the FAQ: {message:id=9360002} to learn how to format your question correctly for people to help you.
c) As your question relates to performance tuning, please also read the two threads linked to in the FAQ: {message:id=9360003} for an idea of what specific information you need to provide for people to help you tune your query. -
How to improve the performance of the query
Hi,
Help me by giving tips how to improve the performance of the query. Can I post the query?
Suresh
Below is the formatted query, and no wonder it is taking a lot of time. I will give you a list of issues soon after analyzing it more. Till then, see the pitfalls yourself in this formatted query.
SELECT rt.awb_number,
ar.activity_id as task_id,
t.assignee_org_unit_id,
t.task_type_code,
ar.request_id
FROM activity_task ar,
request_task rt,
task t
WHERE ar.activity_id =t.task_id
AND ar.request_id = rt.request_id
AND ar.complete_status != 'act.stat.closed'
AND t.assignee_org_unit_id in (SELECT org_unit_id
FROM org_unit
WHERE org_unit_id in (SELECT oo.org_unit_id
FROM org_unit oo
WHERE oo.org_unit_id='3'
OR oo.parent_id ='3'
OR parent_id in (SELECT oo.org_unit_id
FROM org_unit oo
WHERE oo.org_unit_id='3'
OR oo.parent_id ='3'
AND has_queue=1
AND ar.parent_task_id not in (SELECT tt.task_id
FROM task tt
WHERE tt.assignee_org_unit_id in (SELECT org_unit_id
FROM org_unit
WHERE org_unit_id in (SELECT oo.org_unit_id
FROM org_unit oo
WHERE oo.org_unit_id='3'
OR oo.parent_id ='3'
OR parent_id in (SELECT oo.org_unit_id
FROM org_unit oo
WHERE oo.org_unit_id='3'
OR oo.parent_id ='3'
AND has_queue=1
AND rt.awb_number is not null
ORDER BY rt.awb_number
Cheers
Sarma. -
Hi All, how to improve the performance of a given query?
HI All,
How can I improve the performance of the given query?
The query is:
PARAMETERS : p_vbeln type lips-vbeln.
DATA : par_charg TYPE LIPS-CHARG,
par_werks TYPE LIPS-WERKS,
PAR_MBLNR TYPE MSEG-MBLNR .
SELECT SINGLE charg
werks
INTO (par_charg, par_werks)
FROM lips
WHERE vbeln = p_vbeln.
IF par_charg IS NOT INITIAL.
SELECT MAX( mblnr )
INTO par_mblnr
FROM mseg
WHERE bwart EQ '101'
AND werks EQ par_werks    " index on WERKS only
AND charg EQ par_charg.
ENDIF.
Regards
Steve
Hi Steve,
Can't you use the material in your query (and not only the batch)?
I am assuming your system has an index MSEG~M by MANDT + MATNR + WERKS (+ other fields). Depending on your system (how many different materials you have), this will probably speed up the query considerably.
Anyway, in our system we ended up creating an index on CHARG, but leave that as a last option, only if selecting by MATNR and WERKS is not good enough for your scenario.
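For reference, such a batch index would look something like this at the database level (the name is illustrative; in a real SAP system you would create it via SE11 so it exists on every database):
CREATE INDEX zmseg_charg ON mseg (mandt, charg);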
Hope this helps,
Rui Dantas -
Please help me how to improve the performance of this query further.
Hi All,
Please help me how to improve the performance of this query further.
Thanks.
Hi,
this is not your first SQL tuning request in this community -- you really should learn how to obtain performance diagnostics.
The information you posted is not nearly enough to even start troubleshooting the query -- you haven't specified elapsed time, I/O, or the actual number of rows the query returns.
The only piece of information we have is saying that your query executes within a second. If we believe this, then your query doesn't need tuning. If we don't, then we throw it away
and we're left with nothing.
Start by reading this blog post: Kyle Hailey » Power of DISPLAY_CURSOR
and applying this knowledge to your case.
Best regards,
Nikolay