Improving query performance
Hello gurus,
We use RSRT and ST03 (expert mode) to check whether a query is running slowly. What exactly do we check there, and how?
Can anyone please explain how it's done?
Thanks in advance
S N
Hi SN,
You need to check the DB time taken by the queries and the summarization ratio for the queries. As a rule of thumb, if the summarization ratio is > 10 and the database time is > 30% of total runtime, then creating an aggregate will improve the query performance.
By summarization ratio I mean the ratio of the number of records read from the cube to the number of records displayed in the report.
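A minimal sketch of that rule of thumb (the figures are hypothetical, and `should_build_aggregate` is an illustrative helper, not any SAP API):

```python
def should_build_aggregate(records_read, records_displayed, db_time, total_time):
    """Rule of thumb from the post: consider building an aggregate when the
    summarization ratio (records read / records displayed) exceeds 10 and
    database time exceeds 30% of the total query runtime."""
    ratio = records_read / records_displayed
    return ratio > 10 and (db_time / total_time) > 0.30

# Hypothetical RSRT/ST03 figures: 500,000 records read, 2,000 displayed,
# 12s of DB time out of 20s total -> an aggregate should help.
print(should_build_aggregate(500_000, 2_000, 12.0, 20.0))
```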
Thx,
Soumya
Similar Messages
-
How to improve query performance at the report level and the designer level
How can I improve query performance at the report level and the designer level? Please explain in detail.
First, it all depends on the design of the database, the universe, and the report.
At the universe level, check your contexts carefully to get the optimal performance out of the universe, and review your joins: keeping your joins on key fields will give you the best performance.
At the report level, make the reports as dynamic as you can (parameters and so on).
And when you create a parameter, try to match it with a key field in the database.
good luck
Amr -
How to improve the query performance
ALTER PROCEDURE [SPNAME]
@Portfolio INT,
@Program INT,
@Project INT
AS
BEGIN
-- @StartDate and @EndDate are referenced by the recursive Dates CTE below,
-- so they must be declared and set:
DECLARE @StartDate DATETIME
DECLARE @EndDate DATETIME
SET @StartDate = '11/01/2013'
SET @EndDate = '02/28/2014'
IF OBJECT_ID('tempdb..#Dates') IS NOT NULL
DROP TABLE #Dates
IF OBJECT_ID('tempdb..#DailyTasks') IS NOT NULL
DROP TABLE #DailyTasks
CREATE TABLE #Dates(WorkDate DATE)
--CREATE INDEX IDX_Dates ON #Dates(WorkDate)
;WITH Dates AS
(
SELECT @StartDate AS DateValue
UNION ALL
SELECT DateValue + 1
FROM Dates
WHERE DateValue + 1 <= @EndDate
)
INSERT INTO #Dates
SELECT DateValue
FROM Dates D
LEFT JOIN tb_Holidays H
ON H.HolidayOn = D.DateValue
AND H.OfficeID = 2
WHERE DATEPART(dw,DateValue) NOT IN (1,7)
AND H.UID IS NULL
OPTION(MAXRECURSION 0)
SELECT TSK.TaskID,
TR.ResourceID,
WC.WorkDayCount,
(TSK.EstimateHrs/WC.WorkDayCount) EstimateHours,
D.WorkDate,
TSK.ProjectID,
RES.ResourceName
INTO #DailyTasks
FROM Tasks TSK
INNER JOIN TasksResource TR
ON TSK.TaskID = TR.TaskID
INNER JOIN tb_Resource RES
ON TR.ResourceID=RES.UID
OUTER APPLY (SELECT COUNT(*) WorkDayCount
FROM #Dates
WHERE WorkDate BETWEEN TSK.StartDate AND TSK.EndDate)WC
INNER JOIN #Dates D
ON WorkDate BETWEEN TSK.StartDate AND TSK.EndDate
-------WHERE TSK.ProjectID = @Project-----
SELECT D.ResourceID,
D.WorkDayCount,
SUM(D.EstimateHours/D.WorkDayCount) EstimateHours,
D.WorkDate,
T.TaskID,
D.ResourceName
FROM #DailyTasks D
OUTER APPLY (SELECT (SELECT CAST(TaskID AS VARCHAR(255))+ ','
FROM #DailyTasks DA
WHERE D.WorkDate = DA.WorkDate
AND D.ResourceID = DA.ResourceID
FOR XML PATH('')) AS TaskID) T
LEFT JOIN tb_Project PRJ
ON D.ProjectID=PRJ.UID
INNER JOIN tb_Program PR
ON PRJ.ProgramID=PR.UID
INNER JOIN tb_Portfolio PF
ON PR.PortfolioID=PF.UID
WHERE (@Portfolio = -1 or PF.UID = @Portfolio)
AND (@Program = -1 or PR.UID = @Program)
AND (@Project = -1 or PRJ.UID = @Project)
GROUP BY D.ResourceID,
D.WorkDate,
T.TaskID,
D.WorkDayCount,
D.ResourceName
HAVING SUM(D.EstimateHours/D.WorkDayCount) > 8
hi..
My SP is as above.
I connected this SP to a dataset in an SSRS report. As per my logic, a Portfolio contains many Programs, and a Program contains many Projects.
When I selected the ALL value for the Program and Project parameters, I was unable to get output,
but when I select values for all 3 parameters, I get output. I set default values for the parameters as well.
So I commented out the WHERE condition in the SP as shown above:
--------where TSK.ProjectID=@Project-------------
Now I get output when selecting the ALL value for the parameters,
but the issue here is performance: it takes 10 seconds to retrieve a single project when I execute the SP.
How can I create an index on a temp table in this SP, and how can I improve the query performance?
Please help.
Thanks in advance.
lucky
Didn't I provide you a solution in the other thread?
ALTER PROCEDURE [SPNAME]
@Portfolio INT,
@Program INT,
@Project INT
AS
BEGIN
-- @StartDate and @EndDate are referenced by the recursive Dates CTE below,
-- so they must be declared and set:
DECLARE @StartDate DATETIME
DECLARE @EndDate DATETIME
SET @StartDate = '11/01/2013'
SET @EndDate = '02/28/2014'
IF OBJECT_ID('tempdb..#Dates') IS NOT NULL
DROP TABLE #Dates
IF OBJECT_ID('tempdb..#DailyTasks') IS NOT NULL
DROP TABLE #DailyTasks
CREATE TABLE #Dates(WorkDate DATE)
--CREATE INDEX IDX_Dates ON #Dates(WorkDate)
;WITH Dates AS
(
SELECT @StartDate AS DateValue
UNION ALL
SELECT DateValue + 1
FROM Dates
WHERE DateValue + 1 <= @EndDate
)
INSERT INTO #Dates
SELECT DateValue
FROM Dates D
LEFT JOIN tb_Holidays H
ON H.HolidayOn = D.DateValue
AND H.OfficeID = 2
WHERE DATEPART(dw,DateValue) NOT IN (1,7)
AND H.UID IS NULL
OPTION(MAXRECURSION 0)
SELECT TSK.TaskID,
TR.ResourceID,
WC.WorkDayCount,
(TSK.EstimateHrs/WC.WorkDayCount) EstimateHours,
D.WorkDate,
TSK.ProjectID,
RES.ResourceName
INTO #DailyTasks
FROM Tasks TSK
INNER JOIN TasksResource TR
ON TSK.TaskID = TR.TaskID
INNER JOIN tb_Resource RES
ON TR.ResourceID=RES.UID
OUTER APPLY (SELECT COUNT(*) WorkDayCount
FROM #Dates
WHERE WorkDate BETWEEN TSK.StartDate AND TSK.EndDate)WC
INNER JOIN #Dates D
ON WorkDate BETWEEN TSK.StartDate AND TSK.EndDate
WHERE (TSK.ProjectID = @Project OR @Project = -1)
SELECT D.ResourceID,
D.WorkDayCount,
SUM(D.EstimateHours/D.WorkDayCount) EstimateHours,
D.WorkDate,
T.TaskID,
D.ResourceName
FROM #DailyTasks D
OUTER APPLY (SELECT (SELECT CAST(TaskID AS VARCHAR(255))+ ','
FROM #DailyTasks DA
WHERE D.WorkDate = DA.WorkDate
AND D.ResourceID = DA.ResourceID
FOR XML PATH('')) AS TaskID) T
LEFT JOIN tb_Project PRJ
ON D.ProjectID=PRJ.UID
INNER JOIN tb_Program PR
ON PRJ.ProgramID=PR.UID
INNER JOIN tb_Portfolio PF
ON PR.PortfolioID=PF.UID
WHERE (@Portfolio = -1 or PF.UID = @Portfolio)
AND (@Program = -1 or PR.UID = @Program)
AND (@Project = -1 or PRJ.UID = @Project)
GROUP BY D.ResourceID,
D.WorkDate,
T.TaskID,
D.WorkDayCount,
D.ResourceName
HAVING SUM(D.EstimateHours/D.WorkDayCount) > 8
Please mark this as answer if it helps to solve the issue.
Visakh
http://visakhm.blogspot.com/
https://www.facebook.com/VmBlogs -
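As a rough, hypothetical sketch of the two things asked above (building the weekday calendar with a recursive CTE, then indexing the temp table after it is populated), here is the same pattern in SQLite via Python. SQLite only stands in to keep the example self-contained; on SQL Server the equivalent would be `CREATE CLUSTERED INDEX IX_Dates ON #Dates(WorkDate)` right after `CREATE TABLE #Dates`:

```python
import sqlite3

# Illustrative sketch only: SQLite stands in for SQL Server, so the syntax
# differs, but the idea is the same -- build the weekday calendar with a
# recursive CTE, then index the temp table before joining against it.
con = sqlite3.connect(":memory:")
cur = con.cursor()

cur.execute("CREATE TEMP TABLE Dates(WorkDate TEXT)")
cur.execute("""
    WITH RECURSIVE d(dt) AS (
        SELECT date('2013-11-01')
        UNION ALL
        SELECT date(dt, '+1 day') FROM d WHERE dt < date('2013-11-30')
    )
    INSERT INTO Dates(WorkDate)
    SELECT dt FROM d
    WHERE strftime('%w', dt) NOT IN ('0', '6')  -- drop Sunday (0) and Saturday (6)
""")
# An index on a temp table can be created after the table is populated.
cur.execute("CREATE INDEX ix_dates ON Dates(WorkDate)")

workdays = cur.execute("SELECT COUNT(*) FROM Dates").fetchone()[0]
print(workdays)  # November 2013 has 21 weekdays
```

Creating the index after the bulk insert (rather than before) is usually cheaper, because the index is built once instead of being maintained row by row.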
Hi, I need help improving query performance.
I have a query in which I am joining 2 tables, joining after some aggregation. Both tables have more than 50 million records.
There is no index created on these tables; both tables are loaded after truncation. So is it required to create an index on these tables before joining? The query status was showing 'suspended' since it was running for a long time. As a temporary measure, I just executed the query multiple times, changing the month filter each time.
How can I improve this instead of adding a month filter and running it multiple times?
Hi Nikkred,
According to your description, you are joining 2 tables which contain more than 50 million records, and what you want is to improve query performance, right?
Query tuning is not an easy task. Basically it depends on three factors: your degree of knowledge, the query itself, and the amount of optimization required. So in your scenario, please post your detailed query so that you can get more help. Besides, you can create an index on your table, which can improve the performance. Here are some links with performance tuning tips for your reference.
http://www.mssqltips.com/sql-server-tip-category/9/performance-tuning/
http://www.infoworld.com/d/data-management/7-performance-tips-faster-sql-queries-262
Regards,
Charlie Liao
TechNet Community Support -
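A toy illustration of the index advice above, in Python with SQLite (the table and column names are made up, and the row counts are tiny compared to the 50M-row case in the thread; the point is only the pattern of indexing the join key after the bulk load, before running the join):

```python
import sqlite3

# Toy illustration: index the join key AFTER the bulk load (cheaper than
# maintaining the index row by row during the load), then run the join.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE fact(dim_id INTEGER, amount INTEGER);
    CREATE TABLE dim(id INTEGER PRIMARY KEY, name TEXT);
""")
cur.executemany("INSERT INTO fact VALUES (?, ?)",
                [(i % 100, i) for i in range(10_000)])
cur.executemany("INSERT INTO dim VALUES (?, ?)",
                [(i, f"d{i}") for i in range(100)])

# Index the join/filter column once the data is loaded.
cur.execute("CREATE INDEX ix_fact_dim ON fact(dim_id)")

row = cur.execute("""
    SELECT d.name, SUM(f.amount)
    FROM dim d JOIN fact f ON f.dim_id = d.id
    WHERE d.id = 7
    GROUP BY d.name
""").fetchone()
print(row)  # ('d7', 495700)
```

At real scale, always confirm with the actual execution plan that the index is being used; a filter that touches most of the table may still legitimately scan.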
Using DB Links - Improving SELECT query performance
Hi there,
I am using dblink in the following query:
I would like to improve the performance of the query by using hints as described in the link: http://www.experts-exchange.com/Database/Oracle/9.x/Q_23640348.html. However, I am not sure how I can include this in my SELECT query.
Details are:
Oracle - 9i Database Terminal Release .8
DB Link: TCPROD
Could someone please explain with an example how to use hints to get the query to select data on the remote database and then return the results to the target database?
Many Thanks.
SELECT ec.obid AS prObid,
ec.b2ProgramName AS program,
ec.projectName AS project,
ec.wbsID AS prNo,
ec.wbsName AS title,
ec.revision AS revision,
ec.superseded AS revisionSuperseded,
ec.lifeCycleState AS lifeCycleState,
ec.b2ChangeType AS type,
ec.b2Complexity AS subType,
ec.r1SsiCode AS ssi,
ec.b2disposition as disposition,
ec.wbsOriginator AS requestor,
ec.wbsAdministrator AS administrator,
ec.changepriority as priority,
ec.r1tsc as tsc,
ec.t1comments as tenixComments,
ec.b2securityclass as securityClassification,
ec.t1changesafety as safety,
ec.t1actionofficer as actionOfficer,
ec.t1changereason as changeReason,
ec.t1wbsextchangenumber as extChangeNo,
ec.creator as creator,
to_date(substr(ec.creationdate,
0,
instr(ec.creationdate, ':', 1, 3) - 1),
'YYYY/MM/DD-HH24:MI:SS') as creationdate,
to_date(ec.originatorassigndate, 'YYYY/MM/DD') as originatorassigndate,
zbd.description as description,
zbc.comments as comments
FROM (SELECT obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM awdbt1m4.cmPrRpIt@TCPROD
UNION
SELECT obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM mart1m4.cmPrRpIt@TCPROD
UNION
SELECT obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM mpsdt1m4.cmPrRpIt@TCPROD
UNION
SELECT obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM nondt1m4.cmPrRpIt@TCPROD
UNION
SELECT obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM rnast1m4.cmPrRpIt@TCPROD
UNION
SELECT obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM rnlht1m4.cmPrRpIt@TCPROD
UNION
SELECT obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM rnolt1m4.cmPrRpIt@TCPROD
UNION
SELECT obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM rzptt1m4.cmPrRpIt@TCPROD
It's the table name in the hint, not the column name.
Something like:
SELECT ec.obid AS prObid,
ec.b2ProgramName AS program,
ec.projectName AS project,
ec.wbsID AS prNo,
ec.wbsName AS title,
ec.revision AS revision,
ec.superseded AS revisionSuperseded,
ec.lifeCycleState AS lifeCycleState,
ec.b2ChangeType AS type,
ec.b2Complexity AS subType,
ec.r1SsiCode AS ssi,
ec.b2disposition as disposition,
ec.wbsOriginator AS requestor,
ec.wbsAdministrator AS administrator,
ec.changepriority as priority,
ec.r1tsc as tsc,
ec.t1comments as tenixComments,
ec.b2securityclass as securityClassification,
ec.t1changesafety as safety,
ec.t1actionofficer as actionOfficer,
ec.t1changereason as changeReason,
ec.t1wbsextchangenumber as extChangeNo,
ec.creator as creator,
to_date(substr(ec.creationdate,
0,
instr(ec.creationdate, ':', 1, 3) - 1),
'YYYY/MM/DD-HH24:MI:SS') as creationdate,
to_date(ec.originatorassigndate, 'YYYY/MM/DD') as originatorassigndate
FROM (SELECT /*+ DRIVING_SITE(cmPrRpIt) */ obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM awdbt1m4.cmPrRpIt@TCPROD
UNION
SELECT /*+ DRIVING_SITE(cmPrRpIt) */ obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM mart1m4.cmPrRpIt@TCPROD
UNION
SELECT /*+ DRIVING_SITE(cmPrRpIt) */ obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM mpsdt1m4.cmPrRpIt@TCPROD
UNION
SELECT /*+ DRIVING_SITE(cmPrRpIt) */ obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM nondt1m4.cmPrRpIt@TCPROD
UNION
SELECT /*+ DRIVING_SITE(cmPrRpIt) */ obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM rnast1m4.cmPrRpIt@TCPROD
UNION
SELECT /*+ DRIVING_SITE(cmPrRpIt) */ obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM rnlht1m4.cmPrRpIt@TCPROD
UNION
SELECT /*+ DRIVING_SITE(cmPrRpIt) */ obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM rnolt1m4.cmPrRpIt@TCPROD
UNION
SELECT /*+ DRIVING_SITE(cmPrRpIt) */ obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM rzptt1m4.cmPrRpIt@TCPROD) ec
(not tested, of course) -
How to improve query performance or tune a query from its explain plan
Hi
The following is the explain plan for my SQL query (generated by Toad v9.7). How can I fix the query?
SELECT STATEMENT ALL_ROWS Cost: 4,160 Bytes: 25,296 Cardinality: 204
8 NESTED LOOPS Cost: 3 Bytes: 54 Cardinality: 1
5 NESTED LOOPS Cost: 2 Bytes: 23 Cardinality: 1
2 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 13 Cardinality: 1
1 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1
4 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_CUST_ACCOUNTS Cost: 1 Bytes: 10 Cardinality: 1
3 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_CUST_ACCOUNTS_U1 Cost: 1 Cardinality: 1
7 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_PARTIES Cost: 1 Bytes: 31 Cardinality: 1
6 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_PARTIES_U1 Cost: 1 Cardinality: 1
10 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1
9 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1
15 NESTED LOOPS Cost: 2 Bytes: 29 Cardinality: 1
12 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1
11 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1
14 TABLE ACCESS BY INDEX ROWID TABLE ONT.OE_ORDER_HEADERS_ALL Cost: 1 Bytes: 17 Cardinality: 1
13 INDEX RANGE SCAN INDEX (UNIQUE) ONT.OE_ORDER_HEADERS_U2 Cost: 1 Cardinality: 1
21 FILTER
16 TABLE ACCESS FULL TABLE ONT.OE_TRANSACTION_TYPES_TL Cost: 2 Bytes: 1,127 Cardinality: 49
20 NESTED LOOPS Cost: 2 Bytes: 21 Cardinality: 1
18 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1
17 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1
19 INDEX RANGE SCAN INDEX (UNIQUE) ONT.OE_ORDER_HEADERS_U2 Cost: 1 Bytes: 9 Cardinality: 1
23 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1
22 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1
45 NESTED LOOPS Cost: 4,160 Bytes: 25,296 Cardinality: 204
42 NESTED LOOPS Cost: 4,150 Bytes: 23,052 Cardinality: 204
38 NESTED LOOPS Cost: 4,140 Bytes: 19,992 Cardinality: 204
34 NESTED LOOPS Cost: 4,094 Bytes: 75,850 Cardinality: 925
30 NESTED LOOPS Cost: 3,909 Bytes: 210,843 Cardinality: 3,699
26 PARTITION LIST ALL Cost: 2,436 Bytes: 338,491 Cardinality: 14,717 Partition #: 29 Partitions accessed #1 - #18
25 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_AE_HEADERS Cost: 2,436 Bytes: 338,491 Cardinality: 14,717 Partition #: 29 Partitions accessed #1 - #18
24 INDEX SKIP SCAN INDEX XLA.XLA_AE_HEADERS_N1 Cost: 264 Cardinality: 1,398,115 Partition #: 29 Partitions accessed #1 - #18
29 PARTITION LIST ITERATOR Cost: 1 Bytes: 34 Cardinality: 1 Partition #: 32
28 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_AE_LINES Cost: 1 Bytes: 34 Cardinality: 1 Partition #: 32
27 INDEX RANGE SCAN INDEX (UNIQUE) XLA.XLA_AE_LINES_U1 Cost: 1 Cardinality: 1 Partition #: 32
33 PARTITION LIST ITERATOR Cost: 1 Bytes: 25 Cardinality: 1 Partition #: 35
32 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_DISTRIBUTION_LINKS Cost: 1 Bytes: 25 Cardinality: 1 Partition #: 35
31 INDEX RANGE SCAN INDEX XLA.XLA_DISTRIBUTION_LINKS_N3 Cost: 1 Cardinality: 1 Partition #: 35
37 PARTITION LIST SINGLE Cost: 1 Bytes: 16 Cardinality: 1 Partition #: 38
36 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_EVENTS Cost: 1 Bytes: 16 Cardinality: 1 Partition #: 39 Partitions accessed #2
35 INDEX UNIQUE SCAN INDEX (UNIQUE) XLA.XLA_EVENTS_U1 Cost: 1 Cardinality: 1 Partition #: 40 Partitions accessed #2
41 PARTITION LIST SINGLE Cost: 1 Bytes: 15 Cardinality: 1 Partition #: 41
40 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_TRANSACTION_ENTITIES Cost: 1 Bytes: 15 Cardinality: 1 Partition #: 42 Partitions accessed #2
39 INDEX UNIQUE SCAN INDEX (UNIQUE) XLA.XLA_TRANSACTION_ENTITIES_U1 Cost: 1 Cardinality: 1 Partition #: 43 Partitions accessed #2
44 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_CODE_COMBINATIONS Cost: 1 Bytes: 11 Cardinality: 1
43 INDEX UNIQUE SCAN INDEX (UNIQUE) GL.GL_CODE_COMBINATIONS_U1 Cost: 1 Cardinality: 1
damorgan wrote:
Tuning is NOT about reducing the cost of I/O.
I/O is only one of many contributors to cost and only one of many contributors to waits.
Any time you would like to explore this further, run this code:
SELECT 1 FROM dual
WHERE regexp_like(' ','^*[ ]*a');
but not on a production box, because you are going to experience an extreme tuning event with zero I/O.
And when I say "extreme" I mean "EXTREME!"
You've been warned.
I think you just need a faster server.
SQL> set autotrace traceonly statistics
SQL> set timing on
SQL> select 1 from dual
2 where
3 regexp_like (' ','^*[ ]*a');
no rows selected
Elapsed: 00:00:00.00
Statistics
1 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
243 bytes sent via SQL*Net to client
349 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed
Repeated from an Oracle 10.2.0.x instance:
SQL> SELECT DISTINCT SID FROM V$MYSTAT;
SID
310
SQL> ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
Session altered.
SQL> select 1 from dual
2 where
3 regexp_like (' ','^*[ ]*a');
The session is hung. Wait a little while and connect to the database using a different session:
COLUMN STAT_NAME FORMAT A35 TRU
SET PAGESIZE 200
SELECT
STAT_NAME,
VALUE
FROM
V$SESS_TIME_MODEL
WHERE
SID=310;
STAT_NAME VALUE
DB time 9247
DB CPU 9247
background elapsed time 0
background cpu time 0
sequence load elapsed time 0
parse time elapsed 6374
hard parse elapsed time 5997
sql execute elapsed time 2939
connection management call elapsed 1660
failed parse elapsed time 0
failed parse (out of shared memory) 0
hard parse (sharing criteria) elaps 0
hard parse (bind mismatch) elapsed 0
PL/SQL execution elapsed time 95
inbound PL/SQL rpc elapsed time 0
PL/SQL compilation elapsed time 0
Java execution elapsed time 0
repeated bind elapsed time 48
RMAN cpu time (backup/restore) 0
Seems to be using a bit of time for the hard parse (hard parse elapsed time). Wait a little while, then re-execute the query:
STAT_NAME VALUE
DB time 9247
DB CPU 9247
background elapsed time 0
background cpu time 0
sequence load elapsed time 0
parse time elapsed 6374
hard parse elapsed time 5997
sql execute elapsed time 2939
connection management call elapsed 1660
failed parse elapsed time 0
failed parse (out of shared memory) 0
hard parse (sharing criteria) elaps 0
hard parse (bind mismatch) elapsed 0
PL/SQL execution elapsed time 95
inbound PL/SQL rpc elapsed time 0
PL/SQL compilation elapsed time 0
Java execution elapsed time 0
repeated bind elapsed time 48
RMAN cpu time (backup/restore) 0
The session is not reporting additional CPU usage or parse time.
Let's check one of the session's statistics:
SELECT
SS.VALUE
FROM
V$SESSTAT SS,
V$STATNAME SN
WHERE
SN.NAME='consistent gets'
AND SN.STATISTIC#=SS.STATISTIC#
AND SS.SID=310;
VALUE
163
Not many consistent gets after 20+ minutes.
Let's take a look at the plan:
SQL> SELECT SQL_ID, CHILD_NUMBER FROM V$SQL WHERE SQL_TEXT LIKE 'select 1 from dual%';
SQL_ID CHILD_NUMBER
04mpgrzhsv72w 0
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('04mpgrzhsv72w',0,'TYPICAL'))
select 1 from dual where regexp_like (' ','^*[ ]*a')
NOTE: cannot fetch plan for SQL_ID: 04mpgrzhsv72w, CHILD_NUMBER: 0
Please verify value of SQL_ID and CHILD_NUMBER;
It could also be that the plan is no longer in the cursor cache (check v$sql_plan)
No plan...
Let's take a look at the 10053 trace file:
Registered qb: SEL$1 0x19157f38 (PARSER)
signature (): qb_name=SEL$1 nbfros=1 flg=0
fro(0): flg=4 objn=258 hint_alias="DUAL"@"SEL$1"
Predicate Move-Around (PM)
PM: Considering predicate move-around in SEL$1 (#0).
PM: Checking validity of predicate move-around in SEL$1 (#0).
CBQT: Validity checks failed for 7uqx4guu04x3g.
CVM: Considering view merge in query block SEL$1 (#0)
CBQT: Validity checks failed for 7uqx4guu04x3g.
Subquery Unnest
SU: Considering subquery unnesting in query block SEL$1 (#0)
Set-Join Conversion (SJC)
SJC: Considering set-join conversion in SEL$1 (#0).
Predicate Move-Around (PM)
PM: Considering predicate move-around in SEL$1 (#0).
PM: Checking validity of predicate move-around in SEL$1 (#0).
PM: PM bypassed: Outer query contains no views.
FPD: Considering simple filter push in SEL$1 (#0)
FPD: Current where clause predicates in SEL$1 (#0) :
REGEXP_LIKE (' ','^*[ ]*a')
kkogcp: try to generate transitive predicate from check constraints for SEL$1 (#0)
predicates with check contraints: REGEXP_LIKE (' ','^*[ ]*a')
after transitive predicate generation: REGEXP_LIKE (' ','^*[ ]*a')
finally: REGEXP_LIKE (' ','^*[ ]*a')
apadrv-start: call(in-use=592, alloc=16344), compile(in-use=37448, alloc=42256)
kkoqbc-start
: call(in-use=592, alloc=16344), compile(in-use=38336, alloc=42256)
kkoqbc-subheap (create addr=000000001915C238)
Looks like the query never had a chance to start executing - it is still parsing after 20 minutes.
I am not sure that this is a good example - the query either executes very fast or never has a chance to start executing. But it might still make your point: physical I/O is not always the problem when performance problems are experienced.
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
SELECT statement taking too much time. How to improve the query performance?
SELECT DISTINCT ORDERKEY, SUM(IMPRESSIONCNT) AS ActualImpressions ,SUM(DiscountedSales)AS ActualRevenue ,SUM(AgencyCommAmt) as AgencyCommAmt
,SUM(SalesHouseCommAMT) as SalesHouseCommAMT
--INTO Anticiapted_ADXActualsMeasures
FROM AdRevenueFact_ADX ADx WITH(NOLOCK)
Where FiscalMonthkey >=201301 and Exists (Select 1 from Anticipated_cdr_AX_OrderItem OI Where Adx.Orderkey=Oi.Orderkey)
GROUP BY ORDERKEY
Clustered indexes on orderkey, fiscalmonthkey, and orderkey in AdRevenueFact_ADX (which contains more than 170 million rows).
thanks
As mentioned by Kalman, if your clustered index starts with Orderkey, then this query will require a full table scan. If it is an option to change the clustered index in such a way that FiscalMonthkey is the leading column, then only the data of the last two years has to be queried.
In addition, you should have a look at the indexes of table Anticipated_cdr_AX_OrderItem. Ideally, there is a nonclustered index on Orderkey.
To get better advice, please post the query plan and list all available indexes of these tables.
Finally, an off topic remark: it is a good practice to keep consistent spelling of object names, and to keep the same spelling as their declaration. Your query would cause serious problems if the database is ever run with case sensitive collation.
Gert-Jan -
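A small sketch of the leading-column advice above, in Python with SQLite. SQLite has no clustered indexes, so this only illustrates the general composite-index principle (a range predicate can seek only on the leading column of the index); the table name mimics the thread but the data is made up:

```python
import sqlite3

# Illustrative only: a composite index whose LEADING column matches the
# range predicate (FiscalMonthkey >= ...) lets the range be seeked rather
# than requiring a full scan.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE AdRevenueFact(FiscalMonthkey INTEGER, OrderKey INTEGER, Sales INTEGER)")
cur.executemany("INSERT INTO AdRevenueFact VALUES (?, ?, ?)",
                [(201300 + (i % 12) + 1, i % 50, i) for i in range(6_000)])

# FiscalMonthkey leads the index, so the month range confines the scan.
cur.execute("CREATE INDEX ix_month_order ON AdRevenueFact(FiscalMonthkey, OrderKey)")

# Inspect the plan (detail text varies by SQLite version, so just print it).
for r in cur.execute("EXPLAIN QUERY PLAN SELECT COUNT(*) FROM AdRevenueFact "
                     "WHERE FiscalMonthkey >= 201307"):
    print(r[3])

n = cur.execute("SELECT COUNT(*) FROM AdRevenueFact "
                "WHERE FiscalMonthkey >= 201307").fetchone()[0]
print(n)  # 3000 of the 6000 rows fall in months 201307..201312
```

With the column order reversed (`OrderKey, FiscalMonthkey`), the month range could no longer be seeked, which is the full-table-scan situation Gert-Jan describes.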
How to eliminate the correlated subquery to improve query performance
Please find the query below, which takes a long time to fetch the records.
SQL> SET LINE 120
SQL> EXPLAIN PLAN FOR select *
2 from KEMP_SRC a1
3 where ('MOFF' is null or eq_name = 'MOFF')
4 and
5 is_ad_hoc <> 1
6 and (pb_proc_id is null
7 or pb_proc_id in (select proc_id from KEMP_CONFIG where frequency_type <> -1)
8 )
9 and KEMPUtility.DTTM(end_dt, end_tm) in (select max(KEMPUtility.DTTM(end_dt, end_tm))
10 from KEMP_SRC a2
11 where a2.eq_name = a1.eq_name
12 and (a2.pb_proc_id = a1.pb_proc_id or (a2.pb_proc_id is null and a1.pb_proc_id is null))
13 and a2.is_ad_hoc <> -1 -- repeating case
14 group by eq_name, pb_proc_id
15 );
Explained.
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY());
PLAN_TABLE_OUTPUT
Plan hash value: 2624956131
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 96 | 69399 (3)| 00:13:53 |
|* 1 | FILTER | | | | | |
|* 2 | TABLE ACCESS FULL | KEMP_SRC | 2896 | 271K| 124 (2)| 00:00:02 |
|* 3 | TABLE ACCESS FULL | KEMP_CONFIG | 1 | 26 | 2 (0)| 00:00:01 |
|* 4 | FILTER | | | | | |
| 5 | HASH GROUP BY | | 1 | 35 | 125 (3)| 00:00:02 |
PLAN_TABLE_OUTPUT
|* 6 | TABLE ACCESS FULL| KEMP_SRC | 364 | 12740 | 124 (2)| 00:00:02 |
Predicate Information (identified by operation id):
1 - filter(("PB_PROC_ID" IS NULL OR EXISTS (SELECT /*+ */ 0 FROM
"KEMP_CONFIG" "KEMP_CONFIG" WHERE "PROC_ID"=:B1 AND
"FREQUENCY_TYPE"<>(-1))) AND EXISTS (SELECT /*+ */ 0 FROM "KEMP_SRC" "A2" WHERE
"A2"."EQ_NAME"=:B2 AND ("A2"."PB_PROC_ID"=:B3 OR :B4 IS NULL AND "A2"."PB_PROC_ID" IS
NULL) AND "A2"."IS_AD_HOC"<>(-1) GROUP BY "EQ_NAME","PB_PROC_ID" HAVING
PLAN_TABLE_OUTPUT
"KEMPUtility"."DTTM"(:B5,:B6)=MAX("KEMPUtility"."DTTM"("END_DT","END_TM"))))
2 - filter("EQ_NAME"='BILAN_MAZOUT_BFOE' AND "IS_AD_HOC"<>1)
3 - filter("PROC_ID"=:B1 AND "FREQUENCY_TYPE"<>(-1))
4 - filter("KEMPUtility"."DTTM"(:B1,:B2)=MAX("KEMPUtility"."DTTM"("END_DT","END_TM")))
6 - filter("A2"."EQ_NAME"=:B1 AND ("A2"."PB_PROC_ID"=:B2 OR :B3 IS NULL AND
"A2"."PB_PROC_ID" IS NULL) AND "A2"."IS_AD_HOC"<>(-1))
28 rows selected.
When I comment out the references to a1 in the subquery, the cost is drastically reduced:
select *
2 from KEMP_SRC a1
3 where ('MOFF' is null or eq_name = 'MOFF')
4 and
5 is_ad_hoc != 1
6 and (pb_proc_id is null
7 or pb_proc_id in (select proc_id from KEMP_CONFIG where frequency_type <> -1)
8 )
9 and KEMPUtility.DTTM(end_dt, end_tm) in (select max(KEMPUtility.DTTM(end_dt, end_tm))
10 from KEMP_SRC a2
11 where
-- a2.eq_name = a1.eq_name
12 -- and (a2.pb_proc_id = a1.pb_proc_id or (a2.pb_proc_id is null and a1.pb_proc_id is null))
13 --and
a2.is_ad_hoc != -1 -- repeating case
14 group by eq_name, pb_proc_id
15 );
PLAN_TABLE_OUTPUT
Plan hash value: 3739658629
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 78 | 8190 | 249 (3)| 00:00:03 |
|* 1 | FILTER | | | | | |
|* 2 | HASH JOIN | | 203 | 21315 | 249 (3)| 00:00:03 |
| 3 | VIEW | VW_NSO_1 | 7 | 63 | 125 (3)| 00:00:02 |
| 4 | HASH UNIQUE | | 7 | 245 | 125 (3)| 00:00:02 |
| 5 | HASH GROUP BY | | 7 | 245 | 125 (3)| 00:00:02 |
PLAN_TABLE_OUTPUT
|* 6 | TABLE ACCESS FULL| KEMP_SRC | 2896 | 98K| 124 (2)| 00:00:02 |
|* 7 | TABLE ACCESS FULL | KEMP_SRC | 2896 | 271K| 124 (2)| 00:00:02 |
|* 8 | TABLE ACCESS FULL | KEMP_CONFIG | 1 | 26 | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("PB_PROC_ID" IS NULL OR EXISTS (SELECT /*+ */ 0 FROM
"KEMP_CONFIG" "KEMP_CONFIG" WHERE "PROC_ID"=:B1 AND
"FREQUENCY_TYPE"<>(-1)))
PLAN_TABLE_OUTPUT
2 - access("$nso_col_1"="KEMPUTILITY"."DTTM"("END_DT","END_TM"))
6 - filter("A2"."EQ_NAME"='BILAN_MAZOUT_BFOE' AND "A2"."IS_AD_HOC"<>(-1))
7 - filter("EQ_NAME"='BILAN_MAZOUT_BFOE' AND "IS_AD_HOC"<>1)
8 - filter("PROC_ID"=:B1 AND "FREQUENCY_TYPE"<>(-1)) -
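A common way to eliminate a correlated "latest row per group" subquery like the one above is a single window-function pass. A hedged sketch in Python with SQLite (requires SQLite 3.25+ for window functions; the table and data here are made up stand-ins for KEMP_SRC):

```python
import sqlite3

# Sketch: replace the correlated max(end_dt, end_tm) per (eq_name, pb_proc_id)
# subquery with one ROW_NUMBER() pass, reading the table once instead of
# re-probing it for every outer row.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE kemp_src(eq_name TEXT, pb_proc_id INTEGER, end_dttm TEXT)")
cur.executemany("INSERT INTO kemp_src VALUES (?, ?, ?)", [
    ("MOFF", 1, "2013-01-01 10:00"),
    ("MOFF", 1, "2013-02-01 10:00"),   # latest for (MOFF, 1)
    ("MOFF", 2, "2013-01-15 09:00"),   # only row for (MOFF, 2)
])

rows = cur.execute("""
    SELECT eq_name, pb_proc_id, end_dttm
    FROM (SELECT s.*,
                 ROW_NUMBER() OVER (PARTITION BY eq_name, pb_proc_id
                                    ORDER BY end_dttm DESC) AS rn
          FROM kemp_src s)
    WHERE rn = 1
    ORDER BY pb_proc_id
""").fetchall()
print(rows)
```

Note that ROW_NUMBER keeps exactly one latest row per group; if ties on the end timestamp must all survive (as the original IN-subquery allows), RANK would be the closer rewrite. Oracle supports the same analytic functions, so the pattern carries over, but verify the plan on the real data.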
"Improving SQL query performance using secondary indexes"
I have a very old copy of this document from 1997. I'm hoping to find a newer version, if one exists, but the search facility on SDN is not working at the moment. Does anyone have a more up-to-date copy or link they can point me to?
thanks,
Malcolm.
Hi,
Check these out; maybe they will help you:
[http://www.stanford.edu/dept/itss/docs/oracle/10g/server.101/b10743/schema.htm]
[http://teradata.uark.edu/research/wang/indexes.html]
[http://www.geekinterview.com/question_details/33720] -
How to improve query performance built on a ODS
Hi,
I've built a report on the FI_GL ODS (BW 3.5). The report execution takes almost 1 hour.
Is there any method to improve or optimize the performance of a query built on an ODS?
The ODS holds a huge volume of data, ~300 million records covering 2 years.
Thanx in advance,
Guru.
Hi Raj,
Here are a few tips to help you improve your query performance.
Checklist for Query Performance
1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
3. Within structures, make sure the filter order exists with the highest level filter first.
4. Check code for all exit variables used in a report.
5. Move Time restrictions to a global filter whenever possible.
6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
9. If Alternative UOM solution is used, turn off query cache.
10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select the Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries, for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
11. Turn off formatting and results rows to minimize Frontend time whenever possible.
12. Check for nested hierarchies. Always a bad idea.
13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
15. Review the order of restrictions in formulas. Apply as many restrictions as you can before calculations; try to avoid calculations before restrictions.
17. Turn off warning messages on queries.
18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
19. Check to see where currency conversions are happening if they are used.
20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
21. Avoid Cell Editor use if at all possible.
22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order. -
How to improve Query performance on large table in MS SQL Server 2008 R2
I have a table with 20 million records. What is the best option to improve query performance on this table? Is partitioning the table into filegroups the best option, or splitting it into multiple smaller tables?
Hi bala197164,
First, note that both partitioning the table into filegroups and splitting the table into multiple smaller tables can improve query performance; they fit different situations. For example, suppose a table has one hundred columns and some of them are not directly related to the table's subject (say, a userinfo table that stores user information but also has address_street, address_zip, and address_province columns; we can create a new table named Address and add a foreign key in the userinfo table referencing it). In that situation, splitting the large table into smaller, individual tables means queries that access only a fraction of the data can run faster, because there is less data to scan. Another situation is when the table's records can be grouped easily; for example, if a column named year stores the product release date, we can partition the table into filegroups to improve query performance. Usually we apply both methods together. Additionally, we can add indexes to the table to improve query performance. For more details, please refer to the following documents:
Partitioning:
http://msdn.microsoft.com/en-us/library/ms178148.aspx
CREATE INDEX (Transact-SQL):
http://msdn.microsoft.com/en-us/library/ms188783.aspx
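The vertical split described above can be sketched in T-SQL. The Address table and the address columns are taken from the example; the exact column types and the AddressID key are assumptions.

```sql
-- Sketch of the vertical split described above.
CREATE TABLE Address (
    AddressID        int IDENTITY(1,1) PRIMARY KEY,
    address_street   nvarchar(200),
    address_zip      nvarchar(20),
    address_province nvarchar(100)
);

-- Add the foreign key to userinfo; the existing address columns would then
-- be migrated into Address and dropped from userinfo.
ALTER TABLE userinfo
    ADD AddressID int NULL
        CONSTRAINT FK_userinfo_Address REFERENCES Address (AddressID);
```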
Allen Li
TechNet Community Support -
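A minimal sketch of the filegroup-partitioning approach, following the year-column example above. The table, filegroup names, and boundary years are all assumptions, and the filegroups must already exist in the database.

```sql
-- Four partitions with RANGE RIGHT: <2012, [2012,2013), [2013,2014), >=2014.
CREATE PARTITION FUNCTION pfYear (int)
    AS RANGE RIGHT FOR VALUES (2012, 2013, 2014);

-- One filegroup per partition (these must be created beforehand).
CREATE PARTITION SCHEME psYear
    AS PARTITION pfYear TO (FG2011, FG2012, FG2013, FG2014);

-- Partition the table on its year column.
CREATE TABLE Sales (
    SaleID   int   NOT NULL,
    SaleYear int   NOT NULL,
    Amount   money NOT NULL
) ON psYear (SaleYear);
```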
How to improve query performance of an ODS- with 320 million records
<b>Issue:</b>
The reports are giving time-outs while execution.
<b>Scenario</b>:
We have an ODS with approximately 320 million records in it.
The reports are based on
The ODS and
InfoSets based on this ODS.
These reports are giving time-outs while execution.
<b>A few facts about this ODS:</b>
There are around 75 restricted and calculated key figures used in the query definition.
We can't replace this ODS with a cube, as there is a requirement for an InfoSet on it.
This is in a BW 3.5 environment.
<b>A few things we tried:</b>
Secondary indexes were created on the fields that appear in the selection screens of the reports. This hasn't helped.
The restriction/calculation logic in the query definition could be moved to the backend. Would that make a difference?
Question:
Can you suggest the ways to improve the query performance of this ODS?
Your immediate response is highly appreciated. Thanks in advance.
Hey!
I think Oliver's questions are good. 320 million records are too much for an ODS. If you can get rid of the InfoSet, that would be helpful; why exactly do you need it? If you don't need it, you could partition your ODS by a characteristic and report over a MultiProvider.
Is there a way to delete some data from the ODS?
Maybe you make an Upgrade to 7.0 in the next time? There you can use InfoSets on InfoCubes.
You could also try precalculation, as Sam says. This is possible with the reporting agent or Information Broadcasting; then you have the result in your cache. Make sure your cache is large enough. Maybe you can use a table or something similar.
Do you only need one or a few special reports at particular times? Maybe you can run an update into another ODS that holds just the result. For this you can use update rules, or the Analysis Process Designer (transaction RSANWB) may be the better way.
Maybe it is also possible to increase the dialog runtime parameter rdisp/max_wprun_time (if you don't know it, your Basis team should; otherwise see https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/ab254cf2-0c01-0010-c28d-b26d04627e61).
Best regards,
Peter -
How to improve query performance when reporting on ods object?
Hi,
Can anybody tell me how to improve query performance when reporting on an ODS object?
Thanks in advance,
Ravi Alakuntla.
Hi Ravi,
Check these links, which may cater to your requirement:
Re: performance issues of ODS
Which criteria to follow to pick InfoObj. as secondary index of ODS?
PDF on BW performance tuning,
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
Regards,
Mani. -
Steps to Improve Query Performance
Hi All,
I have a request from User to improve the Query performance for few of the sales reports. Please let me know the different steps I need to perform to improve it.
The data is coming from R/3 and the query is on a Multicube modeled upon three Cubes.
It takes a lot of time to open, refresh, and then execute the report. The data volume is not really huge, but it still takes time.
Tell me which areas I need to check to understand the issue and how I should proceed to improve the performance.
It would be a great help if you could provide as much detail as possible.
Thanks in advance,
Pavan
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695 -
How to improve Query Performance
Hi Friends...
I Want to improve query performance.I need following things.
1. What is the process to find out the performance? Which transaction codes are used, and how?
2. How can I know whether the query is performing well or badly?
3. I want to see the values, i.e. how much time it takes to run, and where the defect is.
4. How do I improve the query performance? After I have done what is needed to improve it, I want to check the query execution time again, i.e. whether it runs faster or not.
For example:
Example 1: I need to create aggregates.
Question: Where can I create aggregates? I'm in the production system, so where should I create them: in development, quality, or production?
Do I need to make any changes in development, given that I'm working in the production system?
So please tell me solution for my questions.
Thanks
Ganga
Message was edited by: Ganga N
Hi Ganga,
Please refer to OSS Note 557870: Frequently asked questions on query performance.
also refer to
Prakash's weblog
/people/prakash.darji/blog/2006/01/27/query-creation-checklist
/people/prakash.darji/blog/2006/01/26/query-optimization
performance docs on query
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/064fed90-0201-0010-13ae-b16fa4dab695
This is the oss notes of FAQ on query performance
1. What kind of tools are available to monitor the overall Query Performance?
1. BW Statistics
2. BW Workload Analysis in ST03N (use Expert Mode!)
3. Content of Table RSDDSTAT
2. Do I have to do something to enable such tools?
Yes, you need to turn on the BW Statistics:
RSA1, choose Tools -> BW statistics for InfoCubes
(Choose OLAP and WHM for your relevant Cubes)
3. What kind of tools is available to analyze a specific query in detail?
1. Transaction RSRT
2. Transaction RSRTRACE
4. Do I have an overall query performance problem?
i. Use ST03N -> BW System load values to recognize the problem. Use the number given in table 'Reporting - InfoCubes:Share of total time (s)' to check if one of the columns %OLAP, %DB, %Frontend shows a high number in all Info Cubes.
ii. You need to run ST03N in expert mode to get these values
5. What can I do if the database proportion is high for all queries?
Check:
1. If the database statistic strategy is set up properly for your DB platform (above all for the BW specific tables)
2. If database parameter set up accords with SAP Notes and SAP Services (EarlyWatch)
3. If Buffers, I/O, CPU, memory on the database server are exhausted?
4. If Cube compression is used regularly
5. If Database partitioning is used (not available on all DB platforms)
6. What can I do if the OLAP proportion is high for all queries?
Check:
1. If the CPUs on the application server are exhausted
2. If the SAP R/3 memory set up is done properly (use TX ST02 to find bottlenecks)
3. If the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT, Customizing default)
7. What can I do if the client proportion is high for all queries?
Check whether most of your clients are connected via a WAN connection and the amount of data which is transferred is rather high.
8. Where can I get specific runtime information for one query?
1. Again you can use ST03N -> BW System Load
2. Depending on the time frame you select, you get historical data or current data.
3. To get to a specific query you need to drill down using the InfoCube name
4. Use Aggregation Query to get more runtime information about a single query. Use tab All data to get to the details. (DB, OLAP, and Frontend time, plus Select/ Transferred records, plus number of cells and formats)
9. What kind of query performance problems can I recognize using ST03N
values for a specific query?
(Use Details to get the runtime segments)
1. High Database Runtime
2. High OLAP Runtime
3. High Frontend Runtime
10. What can I do if a query has a high database runtime?
1. Check if an aggregate is suitable (use "All data" to get the ratio of selected records to transferred records; a high number here is an indicator that an aggregate would improve query performance).
2. Check whether the database statistics are up to date for the Cube/Aggregate; use the TX RSRV output (use the database check for statistics and indexes).
3. Check if the read mode of the query is unfavourable - Recommended: (H).
11. What can I do if a query has a high OLAP runtime?
1. Check if a high number of cells is transferred to the OLAP engine (use "All data" to get the value "No. of Cells").
2. Use the RSRT Technical Information to check whether any extra OLAP processing is necessary (stock query, exception aggregation, calculation before aggregation, virtual characteristics/key figures, attributes in calculated key figures, time-dependent currency translation) together with a high number of records transferred.
3. Check whether a user exit is involved in the OLAP runtime.
4. Check if large hierarchies are used and whether the entry hierarchy level is as deep as possible; this limits the levels of the hierarchy that must be processed. Use SE16 on the inclusion tables and use the List of Values feature on the successor and predecessor columns to see which entry level of the hierarchy is used.
5. Check if a proper index on the inclusion table exists.
12. What can I do if a query has a high frontend runtime?
1. Check if a very high number of cells and a lot of formatting are transferred to the frontend (use "All data" to get the value "No. of Cells"), which causes high network and frontend (processing) runtime.
2. Check if the frontend PCs are within the recommendations (RAM, CPU MHz).
3. Check if the bandwidth of the WAN connection is sufficient.
CHEERS
RAVI