Performance of the query is poor
Hi All,
This is Prasad. I have a problem with a query: it is taking too long to retrieve data from the cube. The query uses a variable of type Customer Exit. The cube is not compressed, and I think the issue is that the F fact table has a high number of partitions (one per request) that the query has to select from. If I compress the cube, will query performance improve or not? Is there any alternative for improving the performance of the query? Somebody suggested a "result set query"; I am not aware of this technique, so if you know it, please let me know.
Thanks in advance
Hi Prasad,
Query performance will depend on many factors like
1. Aggregates
2. Compression of requests
3. Query read mode setting
4. Cache memory setting
5. BI Accelerator (BIA) indexes on InfoCubes
6. Indexes
Proposing aggregates to improve query performance:
Proposing aggregates:
First execute the query in RSRT to see how long it takes and whether it is worth building an aggregate for it. To get this information, go to SE11, enter table name RSDDSTAT_DM (BI 7.0) or RSDDSTAT (BW 3.x), choose Display -> Contents, enter today's date as the from/to date values, your user name, and the query name,
--> execute.
You will get a list with fields such as Object name (report name), Time read, InfoProvider name (MultiProvider), PartProvider name (cube), Aggregate name, etc. A Time read below 100,000,000 (100 sec) is acceptable. If the Time read is more than 100 sec, it is recommended to create aggregates for that query to increase performance. Note down this Time read for later comparison.
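The statistics lookup described above can also be done with a plain SQL query against the statistics table. This is only a sketch: the column names below (UNAME, OBJNAME, TIMEREAD and so on) are assumptions, so check the actual field list of RSDDSTAT_DM in SE11 for your release.

```sql
-- Sketch: inspect BW query runtime statistics (BI 7.0 table RSDDSTAT_DM).
-- Column names are illustrative; verify them in SE11 before running.
SELECT objname   AS query_name,     -- technical name of the report
       infoprov  AS infoprovider,   -- MultiProvider / PartProvider
       aggrname  AS aggregate_used, -- aggregate hit, if any
       timeread  AS time_read       -- compare against ~100,000,000 (100 sec)
FROM   rsddstat_dm
WHERE  uname   = 'YOUR_USER'        -- placeholder: your BW user name
AND    objname = 'YOUR_QUERY'       -- placeholder: the query's technical name
ORDER  BY timeread DESC;
```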
Again go to RSRT, enter the query name, and choose Execute + Debug.
In the popup, select the checkbox "Display aggregates found" and continue. If any aggregates already exist for the
query, they are displayed first; press Continue again and it will show which fields are read from which cube. Copy this list of objects, on which an aggregate can be created, into a text file.
Then select that particular cube in RSA1, open the context menu, choose Maintain Aggregates -> Create by yourself, click the Create Aggregate button at the top left, give the aggregate a description, and continue. Take the first object from your list, click the Find button in the aggregate creation screen, enter the object name and search, then drag and drop the object into the aggregate on the right-hand side (drag and drop all the fields into the aggregate this way).
Activate the aggregate; this will take some time. Once activation finishes, make sure the aggregate is switched on.
Execute the query from RSRT again and compare the new Time read with the first one. If it is lower than the first Time read, you can propose this aggregate to increase the performance of the query.
I hope this helps. Go through the links below to understand aggregates more clearly.
http://help.sap.com/saphelp_nw04s/helpdata/en/10/244538780fc80de10000009b38f842/frameset.htm
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
Follow this thread for creation of BIA Indexes:
Re: BIA Creation
Hope this helps...
Regards,
Ramki.
Similar Messages
-
Please help me to increase the performance of the query
Hello
I am not an Oracle expert or developer, and I have a problem to resolve.
Below are the query and its explain plan; I am seeking help to improve the performance of the query.
Our analysis:
The query normally runs well, fetching its results in under one minute, but during peak time it takes 8 minutes.
I would appreciate anyone's suggestions to improve the query.
The query is generated by a Microsoft DLL, so we don't control the SQL code and need some help on tuning the tables.
If tuning the query itself would also help, please suggest that too.
Environment: Solaris 8
DB: Oracle 9i
(SELECT vw.dispapptobjid, vw.custsiteobjid, vw.emplastname, vw.empfirstname,
        vw.scheduledonsite AS starttime, vw.appttype, vw.latestart,
        vw.endtime, vw.typetitle, vw.empobjid, vw.latitude, vw.longitude,
        vw.workduration AS DURATION, vw.dispatchtype, vw.availability
 FROM ora_appt_disp_view vw
 WHERE (    (    vw.starttime >= TO_DATE ('2/12/2007 4:59 PM', 'MM/DD/YYYY HH12:MI AM')
             AND vw.starttime <  TO_DATE ('2/21/2007 3:59 PM', 'MM/DD/YYYY HH12:MI AM'))
         OR (    vw.endtime   >  TO_DATE ('2/12/2007 4:59 PM', 'MM/DD/YYYY HH12:MI AM')
             AND vw.endtime   <= TO_DATE ('2/21/2007 3:59 PM', 'MM/DD/YYYY HH12:MI AM'))
         OR (    vw.starttime <= TO_DATE ('2/12/2007 4:59 PM', 'MM/DD/YYYY HH12:MI AM')
             AND vw.endtime   >= TO_DATE ('2/21/2007 3:59 PM', 'MM/DD/YYYY HH12:MI AM'))))
UNION
(SELECT 0 AS dispapptobjid, emp.emp_physical_site2site AS custsiteobjid,
        emp.last_name AS emplastname, emp.first_name AS empfirstname,
        TO_DATE ('1/1/3000', 'MM/DD/YYYY') AS starttime, 'E' AS appttype,
        NULL AS latestart, NULL AS endtime, '' AS typetitle,
        emp.objid AS empobjid, 0 AS latitude, 0 AS longitude, 0 AS DURATION,
        '' AS dispatchtype, 0 AS availability
 FROM table_employee emp, table_user usr
 WHERE emp.employee2user = usr.objid AND emp.field_eng = 1 AND usr.status = 1)
ORDER BY empobjid, starttime, endtime DESC
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
SELECT STATEMENT Optimizer Mode=HINT: ALL_ROWS 23 K 11312
SORT UNIQUE 23 K 3 M 11140
UNION-ALL
VIEW ORA_APPT_DISP_VIEW 17 K 3 M 10485
UNION-ALL
CONCATENATION
NESTED LOOPS OUTER 68 24 K 437
NESTED LOOPS 68 23 K 369
NESTED LOOPS OUTER 68 25 K 505
NESTED LOOPS OUTER 68 24 K 505
NESTED LOOPS 68 23 K 369
NESTED LOOPS 68 22 K 369
NESTED LOOPS OUTER 68 22 K 369
NESTED LOOPS 19 6 K 312
NESTED LOOPS 19 5 K 312
HASH JOIN 19 5 K 293
NESTED LOOPS 19 5 K 274
NESTED LOOPS 19 4 K 236
NESTED LOOPS 19 4 K 198
NESTED LOOPS OUTER 19 3 K 160
NESTED LOOPS OUTER 19 3 K 160
NESTED LOOPS OUTER 19 4 K 160
NESTED LOOPS OUTER 19 1 K 103
NESTED LOOPS OUTER 19 2 K 103
NESTED LOOPS OUTER 19 2 K 103
TABLE ACCESS BY INDEX ROWID TABLE_DISPTCHFE 19 1 K 46
INDEX RANGE SCAN GSA_SCHED_REPAIR 44 3
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 22
INDEX RANGE SCAN GSA_COMDFE 1 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 22
INDEX RANGE SCAN GSA_COMDFE 1 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 22 3
INDEX RANGE SCAN GSA_COMDFE 1 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 28
INDEX RANGE SCAN IND_CASE_COMMIT2CASE 2 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 28
INDEX RANGE SCAN IND_CASE_COMMIT2CASE 2 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 28 3
INDEX RANGE SCAN IND_CASE_COMMIT2CASE 2 2
TABLE ACCESS BY INDEX ROWID TABLE_CASE 1 30 2
INDEX UNIQUE SCAN CASE_OBJINDEX 1 1
TABLE ACCESS BY INDEX ROWID TABLE_SITE 1 12 2
INDEX UNIQUE SCAN SITE_OBJINDEX 1 1
TABLE ACCESS BY INDEX ROWID TABLE_ADDRESS 1 12 2
INDEX UNIQUE SCAN ADDRESS_OBJINDEX 1 1
TABLE ACCESS FULL TABLE_EMPLOYEE 1 34 1
INDEX UNIQUE SCAN SITE_OBJINDEX 1 6 1
INDEX UNIQUE SCAN USER_OBJINDEX 1 6
TABLE ACCESS BY INDEX ROWID TABLE_X_GSA_TIME_STAMPS 4 48 3
INDEX RANGE SCAN GSAIDX_TS2DISP 1 2
INDEX UNIQUE SCAN GBST_ELM_OBJINDEX 1 6
INDEX UNIQUE SCAN GBST_ELM_OBJINDEX 1 6
TABLE ACCESS BY INDEX ROWID TABLE_MOD_LEVEL 1 12 1
INDEX UNIQUE SCAN MOD_LEVEL_OBJINDEX 1
INDEX UNIQUE SCAN PART_NUM_OBJINDEX 1 6
INDEX UNIQUE SCAN GBST_ELM_OBJINDEX 1 6
INDEX UNIQUE SCAN SUBCASE_OBJINDX 1 6 1
NESTED LOOPS OUTER 68 25 K 505
NESTED LOOPS OUTER 68 24 K 505
NESTED LOOPS OUTER 68 24 K 437
NESTED LOOPS 68 23 K 369
NESTED LOOPS 68 23 K 369
NESTED LOOPS 68 22 K 369
NESTED LOOPS OUTER 68 22 K 369
NESTED LOOPS 19 6 K 312
NESTED LOOPS 19 5 K 312
NESTED LOOPS 19 5 K 293
NESTED LOOPS 19 5 K 274
NESTED LOOPS 19 4 K 236
NESTED LOOPS 19 4 K 198
NESTED LOOPS OUTER 19 4 K 160
NESTED LOOPS OUTER 19 3 K 160
NESTED LOOPS OUTER 19 3 K 160
NESTED LOOPS OUTER 19 2 K 103
NESTED LOOPS OUTER 19 2 K 103
NESTED LOOPS OUTER 19 1 K 103
TABLE ACCESS BY INDEX ROWID TABLE_DISPTCHFE 19 1 K 46
INDEX RANGE SCAN GSA_SCHED_REPAIR 44 3
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 22 3
INDEX RANGE SCAN GSA_COMDFE 1 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 22
INDEX RANGE SCAN GSA_COMDFE 1 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 22
INDEX RANGE SCAN GSA_COMDFE 1 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 28 3
INDEX RANGE SCAN IND_CASE_COMMIT2CASE 2 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 28
INDEX RANGE SCAN IND_CASE_COMMIT2CASE 2 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 28
INDEX RANGE SCAN IND_CASE_COMMIT2CASE 2 2
TABLE ACCESS BY INDEX ROWID TABLE_CASE 1 30 2
INDEX UNIQUE SCAN CASE_OBJINDEX 1 1
TABLE ACCESS BY INDEX ROWID TABLE_SITE 1 12 2
INDEX UNIQUE SCAN SITE_OBJINDEX 1 1
TABLE ACCESS BY INDEX ROWID TABLE_ADDRESS 1 12 2
INDEX UNIQUE SCAN ADDRESS_OBJINDEX 1 1
TABLE ACCESS BY INDEX ROWID TABLE_EMPLOYEE 1 34 1
INDEX UNIQUE SCAN EMPLOYEE_OBJINDEX 1
INDEX UNIQUE SCAN SITE_OBJINDEX 1 6 1
INDEX UNIQUE SCAN USER_OBJINDEX 1 6
TABLE ACCESS BY INDEX ROWID TABLE_X_GSA_TIME_STAMPS 4 48 3
INDEX RANGE SCAN GSAIDX_TS2DISP 1 2
INDEX UNIQUE SCAN GBST_ELM_OBJINDEX 1 6
INDEX UNIQUE SCAN GBST_ELM_OBJINDEX 1 6
INDEX UNIQUE SCAN GBST_ELM_OBJINDEX 1 6
INDEX UNIQUE SCAN SUBCASE_OBJINDX 1 6 1
TABLE ACCESS BY INDEX ROWID TABLE_MOD_LEVEL 1 12 1
INDEX UNIQUE SCAN MOD_LEVEL_OBJINDEX 1
INDEX UNIQUE SCAN PART_NUM_OBJINDEX 1 6
NESTED LOOPS OUTER 68 25 K 505
NESTED LOOPS OUTER 68 24 K 505
NESTED LOOPS OUTER 68 24 K 437
NESTED LOOPS 68 23 K 369
NESTED LOOPS 68 23 K 369
NESTED LOOPS 68 22 K 369
NESTED LOOPS OUTER 68 22 K 369
NESTED LOOPS 19 6 K 312
NESTED LOOPS 19 5 K 312
NESTED LOOPS 19 5 K 293
NESTED LOOPS 19 5 K 274
NESTED LOOPS 19 4 K 236
NESTED LOOPS 19 4 K 198
NESTED LOOPS OUTER 19 4 K 160
NESTED LOOPS OUTER 19 3 K 160
NESTED LOOPS OUTER 19 3 K 160
NESTED LOOPS OUTER 19 2 K 103
NESTED LOOPS OUTER 19 2 K 103
NESTED LOOPS OUTER 19 1 K 103
TABLE ACCESS BY INDEX ROWID TABLE_DISPTCHFE 19 1 K 46
INDEX RANGE SCAN GSA_REQ_ETA 44 3
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 22 3
INDEX RANGE SCAN GSA_COMDFE 1 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 22
INDEX RANGE SCAN GSA_COMDFE 1 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 22
INDEX RANGE SCAN GSA_COMDFE 1 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 28 3
INDEX RANGE SCAN IND_CASE_COMMIT2CASE 2 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 28
INDEX RANGE SCAN IND_CASE_COMMIT2CASE 2 2
TABLE ACCESS BY INDEX ROWID TABLE_COMMIT_LOG 1 28
INDEX RANGE SCAN IND_CASE_COMMIT2CASE 2 2
TABLE ACCESS BY INDEX ROWID TABLE_CASE 1 30 2
INDEX UNIQUE SCAN CASE_OBJINDEX 1 1
TABLE ACCESS BY INDEX ROWID TABLE_SITE 1 12 2
INDEX UNIQUE SCAN SITE_OBJINDEX 1 1
TABLE ACCESS BY INDEX ROWID TABLE_ADDRESS 1 12 2
INDEX UNIQUE SCAN ADDRESS_OBJINDEX 1 1
TABLE ACCESS BY INDEX ROWID TABLE_EMPLOYEE 1 34 1
INDEX UNIQUE SCAN EMPLOYEE_OBJINDEX 1
INDEX UNIQUE SCAN SITE_OBJINDEX 1 6 1
INDEX UNIQUE SCAN USER_OBJINDEX 1 6
TABLE ACCESS BY INDEX ROWID TABLE_X_GSA_TIME_STAMPS 4 48 3
INDEX RANGE SCAN GSAIDX_TS2DISP 1 2
INDEX UNIQUE SCAN GBST_ELM_OBJINDEX 1 6
INDEX UNIQUE SCAN GBST_ELM_OBJINDEX 1 6
INDEX UNIQUE SCAN GBST_ELM_OBJINDEX 1 6
INDEX UNIQUE SCAN SUBCASE_OBJINDX 1 6 1
TABLE ACCESS BY INDEX ROWID TABLE_MOD_LEVEL 1 12 1
INDEX UNIQUE SCAN MOD_LEVEL_OBJINDEX 1
INDEX UNIQUE SCAN PART_NUM_OBJINDEX 1 6
NESTED LOOPS 16 K 2 M 5812
HASH JOIN 16 K 2 M 5812
HASH JOIN 16 K 2 M 5286
TABLE ACCESS FULL TABLE_EMPLOYEE 13 K 441 K 28
HASH JOIN 16 K 1 M 5243
TABLE ACCESS FULL TABLE_SCHEDULE 991 11 K 2
HASH JOIN OUTER 16 K 1 M 5240
HASH JOIN OUTER 16 K 1 M 3866
HASH JOIN OUTER 16 K 1 M 450
HASH JOIN 16 K 1 M 44
TABLE ACCESS FULL TABLE_GBST_ELM 781 14 K 2
TABLE ACCESS FULL TABLE_APPOINTMENT 16 K 822 K 41
INDEX FAST FULL SCAN CASE_OBJINDEX 1 M 6 M 201
TABLE ACCESS FULL TABLE_SITE 967 K 11 M 3157
TABLE ACCESS FULL TABLE_ADDRESS 961 K 11 M 1081
INDEX FAST FULL SCAN SITE_OBJINDEX 967 K 5 M 221
INDEX UNIQUE SCAN USER_OBJINDEX 1 6
HASH JOIN 6 K 272 K 51
TABLE ACCESS FULL TABLE_USER 6 K 51 K 21
TABLE ACCESS FULL TABLE_EMPLOYEE 6 K 220 K 28
Hi,
First off, it appears that you are querying a view. I would redo the query against the base tables.
Next, look at a function-based index for the DATE column. Here are my notes:
http://www.dba-oracle.com/t_function_based_indexes.htm
http://www.dba-oracle.com/oracle_tips_index_scan_fbi_sql.htm
Also, make sure you are analyzed properly with dbms_stats:
http://www.dba-oracle.com/art_builder_dbms_stats.htm
And histograms, if appropriate:
http://www.dba-oracle.com/art_builder_histo.htm
Lastly, look at increasing hash_area_size or pga_aggregate_target, depending on your table sizes:
http://www.dba-oracle.com/art_so_undocumented_pga_parameters.htm
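As a sketch of the function-based index idea above: since the view definition isn't shown, the table and column names here are hypothetical (TABLE_DISPTCHFE is taken from the plan; the STARTTIME column and the TRUNC wrapper are assumptions).

```sql
-- Hypothetical example of a function-based index on a DATE column,
-- useful when the query wraps the column in a function such as TRUNC.
CREATE INDEX disp_start_trunc_idx
    ON table_disptchfe (TRUNC (starttime));  -- starttime is an assumed column

-- Re-gather statistics so the optimizer can cost the new index:
BEGIN
   DBMS_STATS.gather_table_stats (
      ownname => USER,
      tabname => 'TABLE_DISPTCHFE',
      cascade => TRUE);
END;
/
```

Note that on older releases you may additionally need the QUERY REWRITE privilege and QUERY_REWRITE_ENABLED=TRUE for the optimizer to use a function-based index.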
Hope this helps. . . .
Donald K. Burleson
Oracle Press Author -
How to improve the performance of the query
Hi,
Help me by giving tips how to improve the performance of the query. Can I post the query?
Suresh
Below is the formatted query, and no wonder it is taking a lot of time. I will give you a list of issues soon after analyzing it further. Until then, try to spot the pitfalls yourself from this formatted query.
SELECT rt.awb_number,
       ar.activity_id AS task_id,
       t.assignee_org_unit_id,
       t.task_type_code,
       ar.request_id
FROM activity_task ar,
     request_task rt,
     task t
WHERE ar.activity_id = t.task_id
AND ar.request_id = rt.request_id
AND ar.complete_status != 'act.stat.closed'
AND t.assignee_org_unit_id IN (SELECT org_unit_id
                               FROM org_unit
                               WHERE org_unit_id IN (SELECT oo.org_unit_id
                                                     FROM org_unit oo
                                                     WHERE oo.org_unit_id = '3'
                                                     OR oo.parent_id = '3'
                                                     OR parent_id IN (SELECT oo.org_unit_id
                                                                      FROM org_unit oo
                                                                      WHERE oo.org_unit_id = '3'
                                                                      OR oo.parent_id = '3'))
                               AND has_queue = 1)
AND ar.parent_task_id NOT IN (SELECT tt.task_id
                              FROM task tt
                              WHERE tt.assignee_org_unit_id IN (SELECT org_unit_id
                                                                FROM org_unit
                                                                WHERE org_unit_id IN (SELECT oo.org_unit_id
                                                                                      FROM org_unit oo
                                                                                      WHERE oo.org_unit_id = '3'
                                                                                      OR oo.parent_id = '3'
                                                                                      OR parent_id IN (SELECT oo.org_unit_id
                                                                                                       FROM org_unit oo
                                                                                                       WHERE oo.org_unit_id = '3'
                                                                                                       OR oo.parent_id = '3'))
                                                                AND has_queue = 1))
AND rt.awb_number IS NOT NULL
ORDER BY rt.awb_number
Cheers
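For illustration, one obvious pitfall is that the same deeply nested IN chain over org_unit is evaluated twice, once for the task filter and once for the NOT IN. It could be factored out once and the NOT IN turned into a correlated NOT EXISTS, along these lines (a sketch only; the real schema and the exact placement of the has_queue filter would need to be carried over):

```sql
-- Sketch: resolve the org-unit hierarchy once, then reuse it.
-- Assumes org_unit(org_unit_id, parent_id, has_queue) as implied by the query.
WITH unit_tree AS (
   SELECT org_unit_id
   FROM   org_unit
   WHERE  has_queue = 1
   AND    (   org_unit_id = '3'
           OR parent_id   = '3'
           OR parent_id IN (SELECT org_unit_id
                            FROM   org_unit
                            WHERE  org_unit_id = '3' OR parent_id = '3'))
)
SELECT rt.awb_number, ar.activity_id AS task_id,
       t.assignee_org_unit_id, t.task_type_code, ar.request_id
FROM   activity_task ar, request_task rt, task t
WHERE  ar.activity_id = t.task_id
AND    ar.request_id  = rt.request_id
AND    ar.complete_status != 'act.stat.closed'
AND    t.assignee_org_unit_id IN (SELECT org_unit_id FROM unit_tree)
AND    NOT EXISTS (SELECT 1
                   FROM   task tt
                   WHERE  tt.task_id = ar.parent_task_id
                   AND    tt.assignee_org_unit_id IN (SELECT org_unit_id
                                                      FROM   unit_tree))
AND    rt.awb_number IS NOT NULL
ORDER  BY rt.awb_number;
```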
Sarma. -
Performance of the query: runtime increased to 1 hour 15 mins....
The view is working, but the performance of the query is horrible.
Our pull time has increased from 25 minutes to 1 hour and 15 minutes. Can you please advise whether the same solution applies to the production box as well?
Production database details::::
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The query in production database::::
SELECT /*+ALL_ROWS*/
       a.lcl_id AS Ora_Order, --Order_Number,
       a.closed_date AS Closed_Date,
       a.modified_date AS Modified_Date,
       a.received_date AS Received_Date,
       a.status AS Status,
       b.seq AS Ora_Line, --Line_Number
       b.sub_seq AS Ora_sub_line,
       c.seq AS Unit_Number,
       SUBSTR (c.olig_group_id, INSTR (c.olig_group_id,
                                       '.',
                                       -1,
                                       1)
               + 1)
          AS shipment_number,
       c.tag AS Tag,
       c.special_tag AS Customer_Tag,
       h.fmly_serial_id AS Serial_Number,
       d.allocation_timestamp AS Alloc_Date,
       MIN (f.closed_timestamp) AS First_Event_On_Floor,
       -- CALIBRATION
       MAX (DECODE (f.uutt_mstr_id, 1, f.closed_timestamp, NULL))
          AS Calibration_Date,
       -- PACKAGING
       MAX (DECODE (f.uutt_mstr_id, 50, f.closed_timestamp, NULL))
          AS Package_Date,
       -- CAPS KITTING
       MAX (DECODE (
               f.uutt_mstr_id,
               100,
               DECODE (f.stnd_seq, 2024961, f.closed_timestamp, NULL)
            ))
          AS Caps_Kitting_Date,
       lastprodsn.pm_mstr_id AS Tagged_Model,
       b.CEP AS ETO_Number,
       j.VALUE AS Product_Options,
       a.PO AS PO_Number,
       -- lastprodsn.uut_glbl_id as LastProdSN_UUT_Glbl_ID -- replaced on 3/31/2011 BJACK
       MAX (DECODE (f.uutt_mstr_id, 2, f.glbl_id, NULL))
          AS LastProdSN_UUT_Glbl_ID
FROM ssc.ordr_hdrs a, -- glbl_id = sales order number
     ssc.ordr_lns b, -- oh_glbl_id = SO #, SEQ = line number
     ssc.ordr_ln_itms c, -- ol_oh_glbl_id = SO #, ol_seq = line #, seq = unit #, olig_group_id = shipment #
     ssc.omar_track_maps d, -- for tracking id, holds the allocation timestamp
     (SELECT x.uut_glbl_id,
             x.oli_ol_oh_glbl_id,
             x.oli_ol_seq,
             x.oli_ol_sub_seq,
             x.oli_seq,
             x.sm_glbl_id,
             x.pm_mstr_id
      FROM ssc.serial_prod_uut_maps x
      JOIN
      (SELECT oli_ol_oh_glbl_id,
              oli_ol_seq,
              oli_ol_sub_seq,
              oli_seq,
              MAX (uut_glbl_id) Max_oli_uut_glbl_id
       FROM ssc.serial_prod_uut_maps
       GROUP BY oli_ol_oh_glbl_id,
                oli_ol_seq,
                oli_ol_sub_seq,
                oli_seq) MAXOLIUUT
      ON MAXOLIUUT.Max_oli_uut_glbl_id = x.uut_glbl_id
      AND MAXOLIUUT.oli_ol_oh_glbl_id = x.oli_ol_oh_glbl_id
      AND MAXOLIUUT.oli_ol_seq = x.oli_ol_seq
      AND MAXOLIUUT.oli_ol_sub_seq = x.oli_ol_sub_seq
      AND MAXOLIUUT.oli_seq = x.oli_seq) lastprodsn, -- find latest uut for OLI (assumes UUT ids are in sequence so max is latest; needed to deal with SN or product chgs for OLI)
     ssc.serial_prod_uut_maps e, -- go get all UUT IDs for the OLI's latest product number and serial number
     ssc.uuts f, -- go get UUT details for all of the good OLI-product-SNs
     ssc.uut_params g, -- go get the package void parameter (so can exclude them)
     ssc.serial_mstrs h, -- go get serial number for the SN id
     ssc.ORDR_LN_PARAMS j -- go get options for product number
WHERE -- join a to b sales orders to sales order lines
      a.glbl_id = b.oh_glbl_id
AND -- join b to c to get sales order line items (units for a line item)
      b.oh_glbl_id = c.ol_oh_glbl_id
AND b.seq = c.ol_seq
AND b.sub_seq = c.ol_sub_seq
AND -- join c to d to get allocation date if available (outer join)
      c.otm_track_id = d.track_id(+)
AND -- join c to lastprodsn
      c.ol_oh_glbl_id = lastprodsn.oli_ol_oh_glbl_id(+)
AND c.ol_seq = lastprodsn.oli_ol_seq(+)
AND c.ol_sub_seq = lastprodsn.oli_ol_sub_seq(+)
AND c.seq = lastprodsn.oli_seq(+)
AND -- join lastprodsn to k to get serial number for last product/serial number processed
      lastprodsn.sm_glbl_id = h.glbl_id(+)
AND -- join lastprodsn to e to go get all the UUT ids for this OLI + Product # + Serial #
      lastprodsn.oli_ol_oh_glbl_id = e.oli_ol_oh_glbl_id(+)
AND lastprodsn.oli_ol_seq = e.oli_ol_seq(+)
AND lastprodsn.oli_ol_sub_seq = e.oli_ol_sub_seq(+)
AND lastprodsn.oli_seq = e.oli_seq(+)
AND lastprodsn.pm_mstr_id = e.pm_mstr_id(+)
AND lastprodsn.sm_glbl_id = e.sm_glbl_id(+)
AND --join e to f to get UUT details for the good OLI-Product-SN combos
      e.uut_glbl_id = f.glbl_id(+)
AND -- join f to g to get the voided parameter
      f.glbl_id = g.uut_glbl_id(+)
AND -- join c to j to get the option codes for the product number (parameter 2070)
      c.ol_oh_glbl_id = j.ol_oh_glbl_id(+)
AND c.ol_seq = j.ol_seq(+)
AND c.ol_sub_seq = j.ol_sub_seq(+)
AND c.seq = j.seq(+)
AND j.par_mstr_id(+) = 2070
AND j.VALUE(+) IS NOT NULL
AND -- un-voided packages only
      g.par_mstr_id(+) = 1003
AND (g.uut_glbl_id IS NULL OR g.VALUE = 'N')
/* AND -- 1003 = package void status parameter
   g.VALUE(+) = 'N' */
GROUP BY a.lcl_id,
         b.seq,
         b.sub_seq,
         c.seq,
         c.olig_group_id,
         a.closed_date,
         a.modified_date,
         a.received_date,
         a.status,
         c.tag,
         c.special_tag,
         h.fmly_serial_id,
         d.allocation_timestamp,
         lastprodsn.pm_mstr_id,
         b.CEP,
         j.VALUE,
         a.PO
/
SQL>=================
explain plan:::
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 82182 | 29M| | 160K (2)| 00:32:09 |
| 1 | HASH GROUP BY | | 82182 | 29M| 30M| 160K (2)| 00:32:09 |
| 2 | NESTED LOOPS OUTER | | 82182 | 29M| | 154K (2)| 00:30:51 |
| 3 | NESTED LOOPS OUTER | | 82182 | 26M| | 145K (2)| 00:29:12 |
|* 4 | HASH JOIN | | 82182 | 23M| 10M| 137K (2)| 00:27:33 |
| 5 | TABLE ACCESS FULL | ORDR_HDRS | 159K| 8716K| | 397 (4)| 00:00:05 |
|* 6 | HASH JOIN | | 89664 | 20M| 15M| 135K (2)| 00:27:09 |
| 7 | TABLE ACCESS FULL | ORDR_LNS | 506K| 9882K| | 688 (5)| 00:00:09 |
|* 8 | HASH JOIN RIGHT OUTER | | 89424 | 19M| 17M| 133K (2)| 00:26:39 |
| 9 | TABLE ACCESS FULL | OMAR_TRACK_MAPS | 567K| 10M| | 725 (5)| 00:00:09 |
|* 10 | FILTER | | | | | | |
|* 11 | HASH JOIN RIGHT OUTER | | 89424 | 17M| 4440K| 130K (2)| 00:26:09 |
|* 12 | TABLE ACCESS FULL | UUT_PARAMS | 133K| 2869K| | 3608 (7)| 00:00:44 |
|* 13 | HASH JOIN RIGHT OUTER | | 3244K| 563M| 85M| 96934 (3)| 00:19:24 |
| 14 | TABLE ACCESS FULL | UUTS | 2247K| 60M| | 4893 (4)| 00:00:59 |
|* 15 | HASH JOIN RIGHT OUTER | | 3244K| 476M| 239M| 62078 (3)| 00:12:25 |
| 16 | TABLE ACCESS FULL | SERIAL_PROD_UUT_MAPS | 3639K| 197M| | 6481 (4)| 00:01:18 |
|* 17 | HASH JOIN RIGHT OUTER | | 3244K| 300M| | 26716 (4)| 00:05:21 |
| 18 | VIEW | | 1 | 48 | | 18639 (4)| 00:03:44 |
|* 19 | FILTER | | | | | | |
| 20 | HASH GROUP BY | | 1 | 85 | | 18639 (4)| 00:03:44 |
|* 21 | HASH JOIN | | 308K| 25M| 40M| 18587 (4)| 00:03:44 |
|* 22 | TABLE ACCESS FULL| SERIAL_PROD_UUT_MAPS | 1060K| 28M| | 6520 (5)| 00:01:19 |
|* 23 | TABLE ACCESS FULL| SERIAL_PROD_UUT_MAPS | 1060K| 57M| | 6520 (5)| 00:01:19 |
| 24 | TABLE ACCESS FULL | ORDR_LN_ITMS | 3244K| 151M| | 8011 (4)| 00:01:37 |
|* 25 | TABLE ACCESS BY INDEX ROWID | ORDR_LN_PARAMS | 1 | 35 | | 1 (0)| 00:00:01 |
|* 26 | INDEX RANGE SCAN | OLP_OL_FK_I | 1 | | | 1 (0)| 00:00:01 |
| 27 | TABLE ACCESS BY INDEX ROWID | SERIAL_MSTRS | 1 | 37 | | 1 (0)| 00:00:01 |
|* 28 | INDEX RANGE SCAN | SM_PK | 1 | | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - access("A"."GLBL_ID"="B"."OH_GLBL_ID")
6 - access("B"."OH_GLBL_ID"="C"."OL_OH_GLBL_ID" AND "B"."SEQ"="C"."OL_SEQ" AND
"B"."SUB_SEQ"="C"."OL_SUB_SEQ")
8 - access("C"."OTM_TRACK_ID"="D"."TRACK_ID"(+))
10 - filter("G"."UUT_GLBL_ID" IS NULL OR "G"."VALUE"='N')
11 - access("F"."GLBL_ID"="G"."UUT_GLBL_ID"(+))
12 - filter("G"."PAR_MSTR_ID"(+)=1003)
13 - access("E"."UUT_GLBL_ID"="F"."GLBL_ID"(+))
15 - access("LASTPRODSN"."OLI_OL_OH_GLBL_ID"="E"."OLI_OL_OH_GLBL_ID"(+) AND
"LASTPRODSN"."OLI_OL_SEQ"="E"."OLI_OL_SEQ"(+) AND "LASTPRODSN"."OLI_OL_SUB_SEQ"="E"."OLI_OL_SUB_SEQ"(+)
AND "LASTPRODSN"."OLI_SEQ"="E"."OLI_SEQ"(+) AND "LASTPRODSN"."PM_MSTR_ID"="E"."PM_MSTR_ID"(+) AND
"LASTPRODSN"."SM_GLBL_ID"="E"."SM_GLBL_ID"(+))
17 - access("C"."OL_OH_GLBL_ID"="LASTPRODSN"."OLI_OL_OH_GLBL_ID"(+) AND
"C"."OL_SEQ"="LASTPRODSN"."OLI_OL_SEQ"(+) AND "C"."OL_SUB_SEQ"="LASTPRODSN"."OLI_OL_SUB_SEQ"(+) AND
"C"."SEQ"="LASTPRODSN"."OLI_SEQ"(+))
19 - filter("X"."UUT_GLBL_ID"=MAX("UUT_GLBL_ID"))
21 - access("OLI_OL_OH_GLBL_ID"="X"."OLI_OL_OH_GLBL_ID" AND "OLI_OL_SEQ"="X"."OLI_OL_SEQ" AND
"OLI_OL_SUB_SEQ"="X"."OLI_OL_SUB_SEQ" AND "OLI_SEQ"="X"."OLI_SEQ")
22 - filter("OLI_OL_OH_GLBL_ID" IS NOT NULL AND "OLI_OL_SEQ" IS NOT NULL AND "OLI_SEQ" IS NOT NULL AND
"OLI_OL_SUB_SEQ" IS NOT NULL)
23 - filter("X"."OLI_OL_OH_GLBL_ID" IS NOT NULL AND "X"."OLI_OL_SEQ" IS NOT NULL AND "X"."OLI_SEQ" IS
NOT NULL AND "X"."OLI_OL_SUB_SEQ" IS NOT NULL)
25 - filter("J"."PAR_MSTR_ID"(+)=2070 AND "J"."VALUE"(+) IS NOT NULL AND "C"."SEQ"="J"."SEQ"(+))
26 - access("C"."OL_OH_GLBL_ID"="J"."OL_OH_GLBL_ID"(+) AND "C"."OL_SEQ"="J"."OL_SEQ"(+) AND
"C"."OL_SUB_SEQ"="J"."OL_SUB_SEQ"(+))
28 - access("LASTPRODSN"."SM_GLBL_ID"="H"."GLBL_ID"(+))
SQL>
mod. action: adding tags, is that so difficult? -
Help required for improving performance of the Query
Hello SAP Techies,
I have MRP Query which shows Inventory projection by Calendar Year/Month wise.
There are 2 variables, Plant and Material, in the free characteristics, which have been restricted by replacement path from a query result.
The other query is the Control M query, which is based on a multiprovider. The multiprovider is created on 5 cubes.
The query is taking 15-20 minutes to return the result.
Because of the replacement path by query result for the 2 variables, the Control M query is executed first. The business wanted to see in the MRP query all materials that are allocated to the base plant, hence they designed the query to use replacement path by query result. So it gets all the materials and plants from the Control M query and then finds the inventory projection for the same selection in the MRP query.
Is there any way I can improve the performance of the query?
Query performance has been discussed innumerable times in the forums and there is a lot of information on the blogs and the WIKI - please search the forums before posting and if the existing posts do no answer your question satisfactorily then please raise a new post - else almost all the answers you get will be rehashed versions of previous posts ( and in most cases without attribution to the original author )
Edited by: Arun Varadarajan on Apr 19, 2011 9:23 PM
Hi,
Please see if you can make these changes to the report; they will help improve the performance of the query.
1. Select the right read mode.
Reading data during navigation minimizes the impact on
the application server resources, because only the data that
the user requires will be retrieved.
2. Leverage filters as much as possible. Using filters reduces
the number of database reads and the size of the result set,
thereby significantly improving query runtimes.
Filters are especially valuable when associated with "big
dimensions" where there is a large number of characteristics, such as
customers and document numbers.
3. Reduce restricted key figures (RKFs) in the query to as few as possible. Also, define
calculated key figures and RKFs at the InfoProvider level instead of locally within the query.
Regards
Garima -
Poor Performance of the query.
Hi all,
i am using this query
select address1, address2, address3, city, place, pincode, siteid, bpcnum_0, contactname, fax, mobile, phone, website
from (select address1, address2, address3, city, place, pincode, siteid, bpcnum_0, contactname, fax, mobile, phone, website,
             row_number() over (partition by contactname, address1
                                order by contactname, address1) as rn
      from vw_sub_cl_add1
      where siteid = 10 and bpcnum_0 = '0063') emp
where rn = 1
I used explain plan for the query; the result is:
Plan hash value: 3976107967
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Inst |IN-OUT|
| 0 | SELECT STATEMENT | | | | 0 (0)| | |
| 1 | REMOTE | | | | | INFO | R->S |
8 rows returned in 0.04 seconds, but the query actually returns 10 rows.
The view "vw_sub_cl_add1" is created using database links (remote database server).
I am using this query in a FOR loop to retrieve the records and print them one by one.
The problem is that the performance of the query is poor: it takes 1.08 sec to display all the records.
What steps should I take to minimize the retrieval time?
Thanks in advance
bye
Srikavi
Since this is a query that is processed completely on the remote site, there are at least two potential issues you should check if you don't want to use the "materialized view" approach:
1. The time it takes to transport the result set to your local database, i.e. potential network issues
2. The time it takes to process the query on the remote site
Since you're only fetching 10 rows - if I understand you correctly - the first point shouldn't be an issue in your case.
If you have suitable access to the remote site, you should generate an execution plan for the "local" version of the query by logging directly into the remote site, to find out why it takes longer than you expect. Possibly some indexes are missing, if the number of rows to process should be only a few and you expect it to return more quickly.
Here are simple instructions how to generate a meaningful execution plan if you want to post it here:
Could you please post a properly formatted explain plan output using DBMS_XPLAN.DISPLAY, including the "Predicate Information" section below the plan, to provide more details regarding your statement. Please use the \[code\] and \[/code\] tags to enhance the readability of the output:
In SQL*Plus:
SET LINESIZE 130
EXPLAIN PLAN FOR <your statement>;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
Note that DBMS_XPLAN.DISPLAY is only available from 9i onwards.
In previous versions you could run the following in SQL*Plus (on the server) instead:
@?/rdbms/admin/utlxpls
A different approach in SQL*Plus:
SET AUTOTRACE ON EXPLAIN
<run your statement>;
will also show the execution plan.
In order to get a better understanding where your statement spends the time you might want to turn on SQL trace as described here:
[When your query takes too long|http://forums.oracle.com/forums/thread.jspa?threadID=501834]
and post the "tkprof" output here, too.
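As a side note, the "materialized view" approach mentioned at the start of this reply might look roughly like this. It is only a sketch: the database link name `remote_db` and the hourly refresh interval are assumptions, and whether you snapshot the view or the underlying remote tables depends on your setup.

```sql
-- Sketch: snapshot the remote data locally so queries avoid the
-- network round-trip on every execution.
CREATE MATERIALIZED VIEW mv_sub_cl_add1
   BUILD IMMEDIATE
   REFRESH COMPLETE
   START WITH SYSDATE
   NEXT SYSDATE + 1 / 24           -- refresh hourly (adjust to your needs)
AS
   SELECT *
   FROM vw_sub_cl_add1@remote_db;  -- remote_db is a hypothetical DB link name
```

Queries then read mv_sub_cl_add1 locally, where you can also create ordinary local indexes on the columns you filter by.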
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Performance of the query for YTD info
Hi Experts,
I have a query that takes a long time when it is run for a period or a year. With a selection criterion of only a few days it works quickly, but run for a whole period it takes more than 10 minutes.
Please give me suggestions on how to speed up the query.
Thanks
Build aggregates. Go to RSRT, execute and debug, switch on the statistics and display-aggregates checkboxes. See whether an aggregate is hit, or why not, and build aggregates accordingly.
Also look at the statistics: if the database is the performance killer, see events 9000ff for this.
Other hints: reduce the start level of your query and remove some characteristics if possible.
Regards,
Juergen -
Hello Guys,
I am having a performance problem with a query. When I run the query with the initial variables it displays the report quickly, but when I drill down with filter values it takes 10 minutes to display the report. Can anybody suggest possible solutions for improving the performance?
Regards
Priya
Hi Priya,
First, you have to check what is causing the performance issue. You can do this by running the query in transaction RSRT. Execute the query in debug mode with the option "Display Statistics Data". You can navigate the query as you would normally. After that, check the statistics information and see what causes the performance issue. My guess is that you need to build an aggregate.
If the Data Manager time is high (a large % of the total runtime) and the ratio of the number of records selected vs. the number of records transferred is high (e.g. > 10), then try to build an aggregate to help the performance. To check for aggregate suggestions, run RSRT again with the option "Display Aggregates Found". It will show you which characteristics and characteristic selections would help (note that the suggestion might not always be the optimal one).
If OLAP Data Transfer time is high, then try optimizing the query design (e.g. try reducing the amount of restricted KFs or try calculating some KFs during the data flow instead of calculating them in the query).
Hope this helps. -
Performance - reading the query cost
Which of these three represents the most efficient query? I'm wondering if the smallest plan in bytes is the best plan. Also, are eager spools expensive or inexpensive, and is a smaller sub-tree cost better than a larger one? The
first plan is 48 bytes and the second is 160 bytes. Both return the same records. The project table is smaller than the architecture table, but architecture comes from a different database.
R, J
The architecture table is small, with about 1700 records; the project table is small, with about 1100 records; and the access_level, approval_version, and security_classifications tables are tiny. This is a group of tables called repeatedly for security checks.
It's more like a scalar function (though it seeds a table), or perhaps it can be considered a boolean qualifier. The upper plan is for a modification that removes the tables joined in from other databases; those have been removed
and placed into a single schema-bound table. The query executes in about 10 milliseconds but is frequently blocked by itself.
My question is more about the way the scan plan changes (hashes, eager spools, and so on). The cost of the architecture scan doesn't change much, but it pushes the table in the calling database to the top of the query plan, and most of the scan becomes
a hash. There's a third rendition where the LEFT JOIN is changed to a LEFT LOOP JOIN (which I recently discovered). I expected the schema binding to provide better results, so the first of the two should be the better plan, I suppose.
I'm not sure. With the loop join hint, I was able to force a seek. It also crossed my mind that, given the tiny size of the tables, perhaps they'd be better off without an index.
Here is what happens when the LOOP JOIN hint is added.
R, J -
Facing a performance problem with a query that calls a function
Hello Team,
I am facing a performance issue with the following query, even though indexes exist on the columns in the WHERE condition. I also tried hints, but with no benefit. Please suggest how I can improve the performance. Currently it takes around 5 minutes to execute the SELECT statement.
SELECT crf_id, crf_code,crf_nme,
DECODE(UPPER( fn_hrc_refs( crf_id)),'Y','Yes','No') ua_ind,
creation_date, datetime_stamp, user_id
FROM BC_FBCTOR
ORDER BY crf_nme DESC;
FUNCTION fn_hrc_refs (pi_crf_id IN NUMBER)
RETURN VARCHAR2
IS
childref VARCHAR2 (1) := 'N';
cnt NUMBER := 0;
BEGIN
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM BC_CALIB
WHERE crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM BC_CALIB_DET
WHERE crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM BC_MPG_DTL
WHERE crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM BC_CNTRY_DTL
WHERE crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM BC_COR
WHERE x_axis_crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM BC_RESI_COR
WHERE y_axis_crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM BC_PRIME
WHERE crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM DR_FBCT
WHERE crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM DR_RISK
WHERE crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM EC_DTL
WHERE crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM EC_RESULT
WHERE crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM EC_LOSS
WHERE crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM EC_CRITERIA
WHERE crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM EC_CORR
WHERE crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM EC_PORT
WHERE crf_id = pi_crf_id;
IF cnt<> 0
THEN
childref := 'Y';
RETURN childref;
END IF;
RETURN childref;
EXCEPTION
WHEN OTHERS
THEN
childref := 'N';
RETURN childref;
END;
Regards,
Ashis
You are checking for the existence of detail records. What is the purpose of this? Most applications I know rely on normal foreign key constraints to ensure parent-child relationships.
The select is slow for two reasons.
Reason one: you count all detail records when you only need to know whether at least one detail record exists.
Reason two: there are multiple context switches between SQL and PL/SQL for each row of your parent table.
SELECT NVL (COUNT (*), 0)
INTO cnt
FROM BC_CALIB
WHERE crf_id = pi_crf_id;
IF cnt <> 0
THEN
childref := 'Y';
RETURN childref;
END IF;

This could be replaced by
begin
SELECT 'Y'
INTO childref
FROM BC_CALIB
WHERE crf_id = pi_crf_id
and rownum = 1; /* only fetch one row */
exception
when no_data_found then
/* continue with the next select */
end;
return childref;

But this would still do a lot of context switches, especially for parents without any detail records.
Also consider this option:
SELECT crf_id, crf_code,crf_nme,
case when exists (SELECT null FROM BC_CALIB t WHERE t.crf_id = BC_FBCTOR.crf_id)
then 'Yes'
when exists (SELECT null FROM BC_CALIB_DET t WHERE t.crf_id = BC_FBCTOR.crf_id)
then 'Yes'
else
'No'
end ua_ind,
creation_date, datetime_stamp, user_id
FROM BC_FBCTOR
ORDER BY crf_nme DESC;

It should also be possible to use a UNION ALL select instead of the separate CASE branches, but I chose this way since it resembles your original selection a bit better. And you could change the 'Yes' to something different, like 'Children in table abc'.
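The shape of the EXISTS-based rewrite above can be tried on toy data. This is a minimal sketch in Python using SQLite (the mini-tables are invented stand-ins named after the post's tables; the EXISTS semantics carry over to Oracle). Unlike the per-row COUNT(*) function, EXISTS can stop at the first matching row and the whole check runs in one statement:

```python
import sqlite3

# Toy schema standing in for BC_FBCTOR and two of its detail tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bc_fbctor (crf_id INTEGER PRIMARY KEY, crf_nme TEXT);
    CREATE TABLE bc_calib (crf_id INTEGER);
    CREATE TABLE bc_calib_det (crf_id INTEGER);
    INSERT INTO bc_fbctor VALUES (1, 'A'), (2, 'B'), (3, 'C');
    INSERT INTO bc_calib VALUES (1), (1);   -- crf_id 1 has children here
    INSERT INTO bc_calib_det VALUES (2);    -- crf_id 2 has children here
""")

# Set-based rewrite: one statement, no PL/SQL context switches per row.
rows = conn.execute("""
    SELECT f.crf_id,
           CASE WHEN EXISTS (SELECT 1 FROM bc_calib c
                             WHERE c.crf_id = f.crf_id) THEN 'Yes'
                WHEN EXISTS (SELECT 1 FROM bc_calib_det d
                             WHERE d.crf_id = f.crf_id) THEN 'Yes'
                ELSE 'No'
           END AS ua_ind
    FROM bc_fbctor f
    ORDER BY f.crf_id
""").fetchall()
print(rows)  # [(1, 'Yes'), (2, 'Yes'), (3, 'No')]
```

In the real query the remaining detail tables would simply become further WHEN EXISTS branches.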
Edited by: Sven W. on Sep 5, 2008 2:29 PM -
Performance of the BEx query built on an InfoSet is very poor
Dear experts,
I have a BEx query developed on an InfoSet that has hit a performance problem.
When I check the query, I find the following warning message:
Diagnosis
InfoProvider ABC does not contain characteristic 0CALYEAR. The exception aggregation for key figure xyz can therefore not be applied.
System Response
The key figure will therefore be aggregated using all characteristics of ABC with the generic aggregation SUM.
But the InfoObject 0CALYEAR is active and is in the InfoSet as an attribute of one of the master data InfoObjects.
Now, could you please help me to improve the performance of the query, which is built on the InfoSet.
Thanks,
Mannu
Hi,
If the InfoSet is based on a cube, then:
-->Create an aggregate on the cube for the objects used in the query
-->Compress the InfoCube
-->Then run the query
Also, in RSRT there are many properties; please check which property is suitable for your target.
Best Regards
Obaid -
How to analyse the performance by using RSRT and by seeing the query results
Hi,
I want to see the performance of the query in each landscape. I have executed my query
using transaction RSRT. How can we analyse whether the query requires aggregates or not?
I have taken the number of records in the cube, and I also saw the number of records in the aggregates,
but I did not get a clear picture.
I selected the options Aggregates, Statistics and Do not use cache. The query executed and displayed a report, but I am unable to analyse the performance.
Can anyone please guide me with steps? Which factors do we need to consider from the performance point of view?
Points will be rewarded.
Thanks in advance for all your help.
Vamsi
Hi,
This info may be helpful.
General tips
Using aggregates and compression.
Using fewer and less complex cell definitions if possible.
By using T-codes ST03 or ST03N
Go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution for a particular day, check the query execution time.
Using cache memory will decrease the loading time of the report.
Run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions retrieve the result faster from it.
Also try
1. Use different parameters in ST03 to see the two important indicators: the aggregation ratio and the number of records transferred to the front end vs. selected from the database.
2. Use the program SAP_INFOCUBE_DESIGNS to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
3. The +/- signs are the valuation of the aggregate's design and usage (e.g. -3). The greater the number of plus signs, the better the evaluation: the aggregate is useful and satisfies more queries. The greater the number of minus signs, the worse the evaluation.
A valuation of "-----" means the aggregate is just overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
Refer.
http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
Also your query performance can depend upon criteria and since you have given selection only on one infoprovider...just check if you are selecting huge amount of data in the report
5. In BI 7, statistics need to be activated for ST03 and the BI Administration Cockpit to work. Implement the BW Statistics Business Content: you need to install it, feed it data, and then analyse it through the ready-made reports.
http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
/people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
Performance of BW infocubes
Go to SE38
Run the program SAP_INFOCUBE_DESIGNS
It will show the dimension vs. fact table sizes in percent. If you mean the speed of queries on a cube as the performance metric of the cube, measure the query runtime.
You can go to T-Code DB20 which gives you all the performance related information like
Partitions
Databases
Schemas
Buffer Pools
Tablespaces etc
Thanks,
JituK -
Parameters to be considered for tuning the QUERY Performance.
Hi Guys
I want to identify the query which is taking more resources through the dynamic performance views, not through OEM. I want to tune the query.
Please advise me on this. Also, I want to know the parameters to be considered for tuning a query, like query execution time, I/O hits, library cache hits, etc., and from which performance views these parameters can be taken. I also want to know how to find the size of the library cache, how many queries the library cache can hold, what the retention of queries in the library cache is, and how to retrieve the queries from the library cache.
Thanks a lot
Edited by: user448837 on Sep 23, 2008 9:24 PM
Hmm, there is a parameter called makemyquery=super_fast; check that ;-).
OK, jokes apart: your question is like asking a doctor how to troubleshoot a sick person's sickness. Does the doctor start giving medicine immediately? No, right? He has to check the patient's symptoms first. The same is applicable to the query. There is no view as such which would tell you whether the performance of the query is good or bad. Tell me, if a query takes 5 minutes, what would you say: is it good or bad? Or if the query took this much CPU time, is that too much or too little? I am sure your answer would be "it depends". That's the key point.
The most important point in the performance check is the plan of the query: what kind of plan the optimizer chose, and whether it was really good or not. There are millions of ways to do something, but the idea is to do it in the best way. That's the point of reading the explain plan. I would suggest getting yourself familiar with the explain plan's various access paths and their pros and cons. For the query's characteristics, you can check the V$SQL, V$SQLAREA and V$SQLTEXT views.
As for your other question about the size of the library cache, its contents and the retention: it is all internally managed. You never size the library cache itself, only the shared pool. Similarly, the best bet for a DBA/developer is to ensure the queries are mostly shareable and that there are no duplicate SQLs; the cursor_sharing parameter can be used for this. The control is not given to us to manage the retention or the flushing of queries from the cache.
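The point about shareable SQL can be illustrated without a database. This conceptual sketch (the `:id` placeholder is Oracle-style bind syntax; the statements are invented) shows why literals defeat statement sharing while a bind variable keeps one reusable text:

```python
# With literal values, every execution produces a distinct statement text,
# so a shared-pool-style cache would hold one entry per value:
literal_sqls = {f"SELECT * FROM t WHERE id = {v}" for v in (1, 2, 3)}

# With a bind variable, the statement text is identical on every execution;
# only the bound value changes, so one cached plan can be reused:
bind_sqls = {"SELECT * FROM t WHERE id = :id" for v in (1, 2, 3)}

print(len(literal_sqls), len(bind_sqls))  # 3 1
```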
HTH
Aman.... -
How to tune the performance of the following query?
Hi all,
I want a particular set of G/L account numbers for the list of billing documents that I have in an internal table, i.e. pt_vbrk. I am using table BSIS. However, when I use the following query it takes a very long time. I even tried using a LOOP...ENDLOOP statement. Can anyone tell me how to enhance the performance of the query? Thanks in advance.
i = 0.
loop at pt_vbrk.
i = i + 1.
endloop.
while j <= i.
read table pt_vbrk index j.
select single hkont from bsis into corresponding fields of
pt_vbrk where
( hkont = '0013100000' or hkont = '0013105000'
or hkont = '0013110000' or hkont = '0013112000'
or hkont = '0013115000' or hkont = '0013120000'
or hkont = '0013125000' or hkont = '0013135500'
or hkont = '0013140000' or hkont = '0013145000'
or hkont = '0013155000' or hkont = '0013165000'
or hkont = '0013175000' or hkont = '0013170000' )
and belnr = pt_vbrk-belnr and gjahr = pt_vbrk-gjahr . "#EC CI_NOFIRST
modify pt_vbrk from pt_vbrk index j.
j = j + 1.
endwhile.
Moderator message - Moved to the correct forum
Edited by: Rob Burbank on Sep 22, 2009 9:06 AM
Try the following approach.
TYPES: BEGIN OF ty_bsis,
bukrs TYPE bsis-bukrs,
hkont TYPE bsis-hkont,
augdt TYPE bsis-augdt,
augbl TYPE bsis-augbl,
zuonr TYPE bsis-zuonr,
gjahr TYPE bsis-gjahr,
belnr TYPE bsis-belnr,
buzei TYPE bsis-buzei,
END OF ty_bsis.
DATA: w_bsis TYPE ty_bsis ,
w_index TYPE sy-tabix,
t_bsis TYPE SORTED TABLE OF ty_bsis
WITH NON-UNIQUE KEY bukrs belnr gjahr ,
t_vbrk_tmp LIKE TABLE OF pt_vbrk .
RANGES: r_hkont FOR bsis-hkont.
IF NOT pt_vbrk[] IS INITIAL.
REFRESH r_hkont.
r_hkont-sign = 'I'.
r_hkont-option = 'EQ'.
r_hkont-low = '0013100000'.
APPEND r_hkont.
r_hkont-low = '0013105000'.
APPEND r_hkont.
r_hkont-low = '0013110000'.
APPEND r_hkont.
r_hkont-low = '0013112000'.
APPEND r_hkont.
r_hkont-low = '0013115000'.
APPEND r_hkont.
r_hkont-low = '0013120000'.
APPEND r_hkont.
r_hkont-low = '0013125000'.
APPEND r_hkont.
r_hkont-low = '0013135500'.
APPEND r_hkont.
r_hkont-low = '0013140000'.
APPEND r_hkont.
r_hkont-low = '0013145000'.
APPEND r_hkont.
r_hkont-low = '0013155000'.
APPEND r_hkont.
r_hkont-low = '0013165000'.
APPEND r_hkont.
r_hkont-low = '0013175000'.
APPEND r_hkont.
r_hkont-low = '0013170000'.
APPEND r_hkont.
t_vbrk_tmp[] = pt_vbrk[].
SORT t_vbrk_tmp BY bukrs gjahr belnr.
DELETE ADJACENT DUPLICATES FROM t_vbrk_tmp
COMPARING bukrs gjahr belnr.
SELECT bukrs
hkont
augdt
augbl
zuonr
gjahr
belnr
buzei
FROM bsis
INTO TABLE t_bsis
FOR ALL ENTRIES IN t_vbrk_tmp
WHERE bukrs EQ t_vbrk_tmp-bukrs
AND hkont IN r_hkont
AND gjahr EQ t_vbrk_tmp-gjahr
AND belnr EQ t_vbrk_tmp-belnr.
IF sy-subrc EQ 0.
LOOP AT pt_vbrk.
w_index = sy-tabix.
READ TABLE t_bsis INTO w_bsis
WITH KEY bukrs = pt_vbrk-bukrs
belnr = pt_vbrk-belnr
gjahr = pt_vbrk-gjahr
TRANSPORTING
hkont.
IF sy-subrc EQ 0.
pt_vbrk-hkont = w_bsis-hkont.
MODIFY pt_vbrk INDEX w_index
TRANSPORTING
hkont.
ENDIF.
ENDLOOP.
ENDIF.
ENDIF. -
Sub Queries VS Variables in the Query
#1
for i in ( select a.name,b.price,a.date from
list a, sales b
where a.header_id=b.header_id
and b.date_key= (select date_key from date_tab
where date_field=trunc(sysdate)) )
loop
end loop.
#2
select date_key into v_date_key from date_tab where date_field=trunc(sysdate);
for i in ( select a.name,b.price,a.date from
list a, sales b
where a.header_id=b.header_id
and b.date_key= v_date_key)
loop
end loop.
Please advise me: will using the sub-query (#1) or the substitution variable (#2) in the condition improve the performance of the query?
Thanks
Most likely, there won't be a difference in performance.
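Both forms do return identical rows, which can be checked on a toy SQLite schema via Python (the table and column names mirror the post, but the data and schema are invented; any difference between the two forms is in parse and execution overhead, not in the result):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE date_tab (date_key INTEGER, date_field TEXT);
    CREATE TABLE list  (header_id INTEGER, name TEXT);
    CREATE TABLE sales (header_id INTEGER, date_key INTEGER, price REAL);
    INSERT INTO date_tab VALUES (42, '2008-09-05');
    INSERT INTO list  VALUES (1, 'widget');
    INSERT INTO sales VALUES (1, 42, 9.5), (1, 7, 8.0);
""")

# #1: the lookup runs as a sub-query inside the statement
q1 = conn.execute("""
    SELECT a.name, b.price
    FROM list a JOIN sales b ON a.header_id = b.header_id
    WHERE b.date_key = (SELECT date_key FROM date_tab
                        WHERE date_field = '2008-09-05')
""").fetchall()

# #2: fetch the key first, then supply it as a bound variable
(v_date_key,) = conn.execute(
    "SELECT date_key FROM date_tab WHERE date_field = '2008-09-05'").fetchone()
q2 = conn.execute("""
    SELECT a.name, b.price
    FROM list a JOIN sales b ON a.header_id = b.header_id
    WHERE b.date_key = ?
""", (v_date_key,)).fetchall()

print(q1 == q2)  # True: same rows either way
```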
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC