Performance Tuning - Self Join Issue
Hi,
The following query takes a long time to execute. Is there a better way of rewriting it to reduce its execution time?
INSERT INTO TT_TEMP_MAINGUI_SP_PERCENT_MOV
(prev_prc_dt,asset_id,pricing_pt_id,price_dt)
SELECT max(tpm2.prc_dt),
tpm2.asset_id ,
tpm2.pricing_pt_id ,
tpm1.prc_dt
FROM t_prc_master tpm1,
t_prc_master tpm2
WHERE tpm1.prc_dt = '19-Dec-07'
AND tpm1.asset_id = tpm2.asset_id
AND tpm1.pricing_pt_id = tpm2.pricing_pt_id
AND tpm2.prc_dt < tpm1.prc_dt
AND tpm2.accept_flg = 'Y'
AND tpm1.accept_flg = 'Y'
AND EXISTS (SELECT 1 FROM t_temp_prcmov
WHERE pca_flg = 'P'
AND tpm1.pricing_pt_id = prc_pt_cntry_atyp)
GROUP BY tpm2.asset_id, tpm2.pricing_pt_id,tpm1.prc_dt;
select count(*) from t_prc_master
where prc_dt = '19-Dec-07'
COUNT(*)
784161
-- Here is the TKPROF Output
INSERT INTO TT_TEMP_MAINGUI_SP_PERCENT_MOV
(prev_prc_dt,asset_id,pricing_pt_id,price_dt)
SELECT max(tpm2.prc_dt),
tpm2.asset_id ,
tpm2.pricing_pt_id ,
tpm1.prc_dt
FROM t_prc_master tpm1,
t_prc_master tpm2
WHERE tpm1.prc_dt = '19-Dec-07'
AND tpm1.asset_id = tpm2.asset_id
AND tpm1.pricing_pt_id = tpm2.pricing_pt_id
AND tpm2.prc_dt < tpm1.prc_dt
AND tpm2.accept_flg = 'Y'
AND tpm1.accept_flg = 'Y'
AND EXISTS (SELECT 1 FROM t_temp_prcmov
WHERE pca_flg = 'P'
AND tpm1.pricing_pt_id = prc_pt_cntry_atyp)
GROUP BY tpm2.asset_id, tpm2.pricing_pt_id,tpm1.prc_dt
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 226.01 317.50 1980173 4915655 805927 780544
Fetch 0 0.00 0.00 0 0 0 0
total 2 226.01 317.51 1980173 4915655 805927 780544
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 98 (PRSDBO)
Rows Row Source Operation
780544 SORT GROUP BY (cr=4915236 r=1980165 w=0 time=312751120 us)
40416453 NESTED LOOPS (cr=4915236 r=1980165 w=0 time=245408132 us)
783459 NESTED LOOPS (cr=956325 r=92781 w=0 time=17974163 us)
55 TABLE ACCESS FULL T_TEMP_PRCMOV (cr=3 r=0 w=0 time=406 us)
783459 TABLE ACCESS BY INDEX ROWID T_PRC_MASTER (cr=956322 r=92781 w=0 time=17782856 us)
784161 INDEX RANGE SCAN PRC_DT_ASSET_ID (cr=412062 r=69776 w=0 time=14136725 us)(object id 450059)
40416453 INDEX RANGE SCAN ASSET_DT_ACCEPT_FLG (cr=3958911 r=1887384 w=0 time=217215303 us)(object id 450055)
Rows Execution Plan
0 INSERT STATEMENT GOAL: CHOOSE
780544 SORT (GROUP BY)
40416453 NESTED LOOPS
783459 NESTED LOOPS
55 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'T_TEMP_PRCMOV'
783459 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'T_PRC_MASTER'
784161 INDEX GOAL: ANALYZED (RANGE SCAN) OF 'PRC_DT_ASSET_ID'
(NON-UNIQUE)
40416453 INDEX GOAL: ANALYZED (RANGE SCAN) OF 'ASSET_DT_ACCEPT_FLG'
(UNIQUE)
Could somebody help me resolve this issue? It would be much appreciated...
Well, it's a bit of a mess to read. Please use the pre or code tags enclosed in [] next time to preserve the formatting of the code.
First thing that looks 'bad' to me is
WHERE tpm1.prc_dt = '19-Dec-07'

which should be (I assume you want 2007 and not 1907):

WHERE tpm1.prc_dt = TO_DATE('19-Dec-2007', 'DD-MON-YYYY');

The next thing I'm very confused by is: why are you self-joining the table? You should be able to just do this (logically, it should produce the same results, though it's obviously not tested :D):
SELECT
max(tpm2.prc_dt),
tpm2.asset_id ,
tpm2.pricing_pt_id ,
TO_DATE('19-Dec-2007', 'DD-MON-YYYY') AS prc_dt
FROM t_prc_master tpm2
WHERE tpm2.prc_dt < TO_DATE('19-Dec-2007', 'DD-MON-YYYY')
AND tpm2.accept_flg = 'Y'
AND EXISTS (
SELECT
NULL
FROM t_prc_master tpm1
WHERE tpm1.prc_dt = TO_DATE('19-Dec-2007', 'DD-MON-YYYY')
AND tpm1.asset_id = tpm2.asset_id
AND tpm1.pricing_pt_id = tpm2.pricing_pt_id
AND tpm1.accept_flg = 'Y'
AND tpm1.pricing_pt_id IN (SELECT tmov.prc_pt_cntry_atyp FROM t_temp_prcmov tmov WHERE tmov.pca_flg = 'P')
)
GROUP BY tpm2.asset_id, tpm2.pricing_pt_id, TO_DATE('19-Dec-2007', 'DD-MON-YYYY');

Message was edited by: Tubby
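For completeness: the self-join can also be avoided entirely with a single pass over t_prc_master, using conditional aggregation. This is an untested sketch based only on the columns shown in the original post; verify it against the real data before relying on it.

```sql
INSERT INTO tt_temp_maingui_sp_percent_mov
  (prev_prc_dt, asset_id, pricing_pt_id, price_dt)
SELECT MAX(CASE WHEN prc_dt < DATE '2007-12-19' THEN prc_dt END) AS prev_prc_dt,
       asset_id,
       pricing_pt_id,
       DATE '2007-12-19' AS price_dt
FROM   t_prc_master tpm
WHERE  accept_flg = 'Y'
AND    prc_dt <= DATE '2007-12-19'
AND    EXISTS (SELECT 1
               FROM   t_temp_prcmov
               WHERE  pca_flg = 'P'
               AND    prc_pt_cntry_atyp = tpm.pricing_pt_id)
GROUP  BY asset_id, pricing_pt_id
-- keep only groups that have an accepted price on 19-Dec-2007
-- AND at least one earlier accepted price
HAVING MAX(CASE WHEN prc_dt = DATE '2007-12-19' THEN 1 ELSE 0 END) = 1
AND    MAX(CASE WHEN prc_dt < DATE '2007-12-19' THEN prc_dt END) IS NOT NULL;
```

This reads t_prc_master once instead of twice, at the price of scanning all historical rows for qualifying assets.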
Similar Messages
-
How to perform a self-join in WebI?
Post Author: willgreenland
CA Forum: WebIntelligence Reporting
I want to perform a self-join on a table in WebI, in order to achieve the following result (of course, if there is another way of doing this I'd be glad to hear it):
I have a table that lists the department in which an employee is located at given dates in the past:
EMPLID DEPT DATE
123 Sales 2007...
I want to use this table to track migration between departments. In other words, I want to produce the following output table, showing how in 2008, 5 employees moved from Sales to Marketing (etc.):
DEPT_A DATE_A DEPT_B  DATE_B COUNT(EMPLID)
Sales  2007   Mrkting 2008   5
...
In order to do this in SQL, I would do the following:
SELECT a.DEPT, b.DEPT, count(distinct EMPLID)
FROM EMPL_DEPT a, EMPL_DEPT b -- note the self-join here
WHERE a.EMPLID = b.EMPLID
  AND a.DATE = '2007'
  AND b.DATE = '2008'
GROUP BY a.DEPT, b.DEPT;
Is there a way of doing this in WebI, ideally without resorting to manual SQL editing (I want this to be a report that other users can make sense of without necessarily getting into the SQL)?

Post Author: amr_foci
CA Forum: WebIntelligence Reporting
You can't do something like that in WebI directly; you have to manage it at the universe level first.
good luck -
Hi all,
SQL> select * from TEST6152;
A B DD
2 USD 12-DEC-07
30 USD 12-DEC-07
30 USD 12-NOV-07
15 USD 22-NOV-07
65 USD 13-SEP-07
I require this output from the query:
CUR DT SUM(T.A)
USD 30-SEP-07 65
USD 30-NOV-07 110
USD 31-DEC-07 142
which is basically the cumulative sum
Now this is what I tried
SQL> ed
Wrote file afiedt.buf
1 select t1.curr, t1.dt, sum(t.a) from
2 test6152 t,
3 (
4 select B curr, last_day(trunc(DD)) dt from TEST6152 group by B, last_day(trunc(dd))
5 ) t1
6 where
7 t.b = t1.curr
8 and t.dd < t1.dt
9 group by
10 t1.curr, t1.dt
11* order by 1, 2
SQL> /
CUR DT SUM(T.A)
USD 30-SEP-07 65
USD 30-NOV-07 110
USD 31-DEC-07 142
Now my question is: is there a better way to do this in SQL? When I translate the same logic to a bigger table (more than 3 million rows), it gives me a performance issue.
Thanks in advance.

Hi,
you can use analytic functions:
select
    B as cur,
    DD as dt,
    sum(A) over (partition by B order by DD) as cum_sum
from test6152; -
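To reproduce the poster's exact month-end output in one pass, the analytic sum can be combined with the LAST_DAY grouping. An untested sketch against the posted TEST6152 table:

```sql
SELECT b AS cur,
       LAST_DAY(TRUNC(dd)) AS dt,
       SUM(SUM(a)) OVER (PARTITION BY b
                         ORDER BY LAST_DAY(TRUNC(dd))) AS cum_sum
FROM   test6152
GROUP  BY b, LAST_DAY(TRUNC(dd))
ORDER  BY 1, 2;
```

The inner SUM(a) is the per-month total; the outer analytic SUM turns it into a running total, so the table is scanned only once.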
Idoc views updation, Workflow, Performance tuning techniques!
Hello,
Greetings for the Day!
Currently my client is facing the following issues and seeks help with them. Below is the client's current landscape.
Sector – Mining
SAP NW MDM 7.1 SP 09
SAP ECC EHP 5
SAP PI 7.0
List of Issues:
Classification (CLFMAS IDoc) and Quality (MATQM IDoc) views try to update before the MATMAS IDoc creates the material in the ECC table.
At workflow level: how to assign incoming record approval requests, put them in mask-like functionality, and approve them as bulk records.
Performance tuning techniques.
Issue description:
Classification (CLFMAS IDoc) and Quality (MATQM IDoc) views try to update before the MATMAS IDoc creates the material in a table.
Currently, the client's MATMAS IDoc updates Basic Data 1 and Basic Data 2 along with other views, and the material gets created in the ECC table. But whenever a record also has classification and quality views to update via the CLFMAS and MATQM IDocs, these two IDocs try to look the material up in the ECC table before the respective MATMAS has updated it. As the basic data has not yet been created for the material, the entire IDoc fails. Kindly suggest a solution: how can we align the process so that the classification and quality views are updated only after the basic data views have been updated in the material master? Is there any way we can make the views update sequentially?
At workflow level: how to assign incoming record approval requests, put them in mask-like functionality, and approve them as bulk records.
Currently, super users are configured within the system; they have 2 roles assigned to their IDs: 1. custodian and 2. steward. In the custodian role, the user assigns the MDM material number, checks other relevant assignments on the record creation request, and approves the material request, after which the request goes to the steward role. As one user has both roles, the same user need not check everything again in the steward role. Hence the user wants to take whatever requests arrive in the steward inbox, create one single group for those 20-30 records, and approve all the materials with a single click so they disappear from his workflow level. Is there any way this can be achieved?
Performance tuning techniques.
Currently, the client's MDM system response time is very slow; after a single click of an action, it takes a long time for the action to be reflected in MDM. The material database holds around 250,000 (2.5 lakh) records, a standard structure has been used, and it is not a complex landscape. Both the ECC and MDM servers are on a single piece of hardware, with only logically separate DBs. Kindly suggest performance techniques if any.
Kindly suggest !
Regards,
Neil

Hi Neil,
Kindly try the below options
-> Performance tuning techniques.
SAP's recommendation is to put the application server and database in different boxes. I am not sure how you managed to install both MDM and ECC in the same box, but that is a big NO NO.
Make sure there is enough hardware to support a separate MDM box.
-> Classification (CLFMAS IDoc) and Quality (MATQM IDoc) views try to update before the MATMAS IDoc creates the material in a table.
MDM only sends out an XML file, so you definitely need middleware (PI) to do the conversion.
You can use PI logic (ccBPM) to send the IDocs in the necessary sequence.
Else you can maintain this logic in the processing code of the ECC system.
PS: The PI option is the more recommended.
Regards,
Vag VIgnesh Shenoy -
A process for the performance monitoring, tuning and fixing issues
Hello
Any recommendations for a 10g process/procedure/methodology for performance monitoring, tuning, and fixing issues for a team to follow?

Ranker wrote:
Hello
Any recommendations for a 10g process/procedure/methodology for performance monitoring, tuning, and fixing issues for a team to follow?

1) Upgrade the DB to a supported version.
2) Read The Fine Manual; Performance Tuning Guide
http://docs.oracle.com/cd/E11882_01/server.112/e10822/toc.htm
Handle: Ranker
Status Level: Newbie
Registered: May 12, 2013
Total Posts: 13
Total Questions: 4 (4 unresolved)
How sad!
why do you never get your questions answered here? -
Oracle Memory Issue/ performance tuning
I have Oracle 9i running on a Windows 2003 server. 2 GB of memory is allocated to the Oracle DB (even though the server has 14 GB of memory).
Recently, the Oracle process has been slow running queries.
I ran the Windows Task Manager. Here are the numbers that I see:
Mem usage: 556660k
page Faults: 1075029451
VM size: 1174544 K
I am not sure how to analyze this data. Why is the page fault count so huge, and why is Mem usage half of VM size?
How can I do performance tuning on this box?

I'm having a similar issue with Oracle 10g R2 64-bit on Windows 2003 x64. Performance on complicated queries is abysmal because [I think] most of the SGA is sitting in a page file, even though there is plenty of physical RAM to be had. Performance on simple queries is probably bad also, but it's not really noticeable. Anyway, page faults skyrocket when I hit the "go" button on big queries. Our legacy system runs our test queries in about 5 minutes, but the new system takes at least 30 if not 60. The new system has 24 gigs of RAM, but at this point, I'm only allocating 1 gig to the SGA and 1/2 gig to the PGA. Windows reports oracle.exe has 418,000K in RAM and 1,282,000K in the page file (I rounded a bit). When I had the PGA set to 10 gigs, the page usage jumped to over 8 gigs.
I tried adding ORA_LPENABLE=1 to the registry, but this issue seems to be independent. Interestingly, the amount of RAM taken by oracle.exe goes down a bit (to around 150,000K) when I do this. I also added "everyone" to the security area "lock pages in memory", but again, this is probably unrelated.
I did an OS datafile copy and cloned the database to a 32-bit windows machine (I had to invalidate and recompile all objects to get this to work), and this 32-bit test machine now has the same problem.
Any ideas? -
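On either box, a first diagnostic step is to ask Oracle itself where its memory sits rather than relying on Task Manager. These are standard dynamic performance views (v$sgainfo exists from 10g onward; on the 9i instance use v$sga / v$sgastat instead):

```sql
-- SGA components in MB (10g and later)
SELECT name, ROUND(bytes / 1024 / 1024) AS mb
FROM   v$sgainfo;

-- PGA: target vs. what is actually allocated
SELECT name, ROUND(value / 1024 / 1024) AS mb
FROM   v$pgastat
WHERE  name IN ('aggregate PGA target parameter', 'total PGA allocated');
```

If 'total PGA allocated' far exceeds the target while the OS shows heavy paging, the problem is at the OS/memory-configuration level, not in the SQL.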
Performance tuning related issues
hi experts
I am new to performance tuning. Can anyone share some material (BW 3.5 and BI 7) related to this area? Please send any relevant docs to my ID: [email protected].
thanks in advance
regards
gavaskar
[email protected]

Hi Gavaskar,
check this, you can download lot of performance materials
Business Intelligence Performance Tuning [original link is broken]
and e-learning -> intermediate course and advance course
https://www.sdn.sap.com/irj/sdn/developerareas/bi?rid=/webcontent/uuid/fe5b0b5e-0501-0010-cd88-c871915ec3bf [original link is broken]
e.g
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/media/uuid/10b589ad-0701-0010-0299-e5c282b7aaad
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/media/uuid/d9fd84ad-0701-0010-d9a5-ba726caa585d
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/media/uuid/8e6183ad-0701-0010-e083-9ab1c6afe6f2
performance tools in bw 3.5
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/media/uuid/07a4f070-0701-0010-3b91-a6bf7644c98f
(here you can also download the presentation by right-clicking the disk drive icon)
hope this helps. -
Hi folks,
I'm having a problem with performance tuning ... Below is a sample query:
SELECT /*+ PARALLEL (K 4) */ DISTINCT ltrim(rtrim(ibc_item)), substr(IBC_BUSINESS_CLASS, 1,1)
FROM AAA K
WHERE ltrim(rtrim(ibc_item)) NOT IN
(
  SELECT /*+ PARALLEL (II 4) */ DISTINCT ltrim(rtrim(THIRD_MAINKEY)) FROM BBB II
  WHERE SECOND_MAINKEY = 3
  UNION
  SELECT /*+ PARALLEL (III 4) */ DISTINCT ltrim(rtrim(BLN_BUSINESS_LINE_NAME)) FROM CCC III
  WHERE BLN_BUSINESS_LINE = 3
)
The above query has a cost of 460 million. I tried creating an index, but Oracle is not using it, as a full-table scan looks better. (I too feel the FT scan is best, as 90% of the table's rows are used.)
After using the parallel hint the cost goes to 100 Million ....
Is there any way to decrease the cost ...
Thanks in advance for your help!

Be aware too, Nalla, that the PARALLEL hint will rule out the use of an index if Oracle adheres to it.
This is what I would try:
SELECT /*+ PARALLEL (K 4) */ DISTINCT TRIM(ibc_item), substr(IBC_BUSINESS_CLASS, 1,1)
FROM AAA K
WHERE NOT EXISTS (
SELECT 1
FROM BBB II
WHERE SECOND_MAINKEY = 3
AND TRIM(THIRD_MAINKEY) = TRIM(K.ibc_item))
AND NOT EXISTS (
SELECT 1
FROM CCC III
WHERE BLN_BUSINESS_LINE = 3
AND TRIM(BLN_BUSINESS_LINE_NAME) = TRIM(K.ibc_item))

But I don't like this at all: TRIM(K.ibc_item). And you never need to use DISTINCT with NOT IN or NOT EXISTS.
Try this:
SELECT DISTINCT TRIM(ibc_item), substr(IBC_BUSINESS_CLASS, 1,1)
FROM AAA K
WHERE NOT EXISTS (
SELECT 1
FROM BBB II
WHERE SECOND_MAINKEY = 3
AND TRIM(THIRD_MAINKEY) = K.ibc_item)
AND NOT EXISTS (
SELECT 1
FROM CCC III
WHERE BLN_BUSINESS_LINE = 3
AND TRIM(BLN_BUSINESS_LINE_NAME) = K.ibc_item)

This may not work though, since you may have whitespace in K.ibc_item. -
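If the TRIM(ibc_item) comparison ever does need index access, a function-based index on the expression is one option (hypothetical index name; the predicate must match the indexed expression exactly, and statistics must exist for the optimizer to pick it up):

```sql
CREATE INDEX aaa_trim_ibc_item_ix ON aaa (TRIM(ibc_item));

-- A query can then use the index only when the predicate matches the expression:
SELECT ibc_item
FROM   aaa
WHERE  TRIM(ibc_item) = :val;
```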
Tuning question. Self join vs Analytical function
Hi all,
I am a bit confused about this query cost.
So I have this query. The original one (which I later rewrote) follows first:
SELECT /*+ parallel (d 8) parallel(h 8) parallel(c 8) */
DISTINCT
d.customer_node_id AS root_customer_node_id,
d.customer_node_id AS customer_node_id,
nvl(h.account_id,c.account_id) AS account_id,
nvl(h.account_name,c.account_name) AS account_name,
d.service_id AS service_id,
nvl((SELECT /*+ parallel(x 8) */ max(x.service_name) FROM delta_service_history x
WHERE x.service_id=d.service_id AND v_upperbound_upd_dt BETWEEN x.effective_start_date AND x.effective_end_date GROUP BY x.service_id),d.service_name) AS service_name
FROM
delta_service_history d,
delta_account c,
stg_hierarchy h
WHERE
d.customer_node_id=c.customer_node_id(+) AND
d.customer_node_id=h.customer_node_id(+)
...and the new one (I decided to use an analytic function to calculate max(service_name) for each service_id instead of the self-join on delta_service_history).
I thought that the self-join was very heavy...
Anyway, my two questions are:
1. Why is the second one heavier than the first? I reduced the number of joins.
2. How can the first query be rewritten? In particular, I don't like that self-join... :)
Select Distinct
root_customer_node_id,
customer_node_id,
account_id,
account_name,
service_id,
service_name
From (
SELECT /*+ parallel (d 8) parallel(h 8) parallel(c 8) */
d.customer_node_id AS root_customer_node_id,
d.customer_node_id AS customer_node_id,
nvl(h.account_id,c.account_id) AS account_id,
nvl(h.account_name,c.account_name) AS account_name,
d.service_id AS service_id,
d.service_name,
row_number() over (partition by d.service_id order by d.service_name desc) r1
FROM
delta_service_history d,
delta_account c,
stg_hierarchy_new h
WHERE
d.customer_node_id=c.customer_node_id(+) AND
d.customer_node_id=h.customer_node_id(+) AND
v_upperbound_upd_dt BETWEEN d.effective_start_date AND d.effective_end_date
)a
Where a.r1 = 1
Thank you all.

I post the query plans below.
First query (the original):
Plan
MERGE STATEMENT ALL_ROWSCost: 2.691.669 Bytes: 784.141.119.324 Cardinality: 1.754.230.692
27 MERGE STGADMIN.STG_HIERARCHY
26 PX COORDINATOR
25 PX SEND QC (RANDOM) PARALLEL_TO_SERIAL SYS.:TQ10005 :Q1005Cost: 2.691.669 Bytes: 475.396.517.532 Cardinality: 1.754.230.692
24 VIEW PARALLEL_COMBINED_WITH_PARENT STGADMIN. :Q1005
23 HASH JOIN RIGHT OUTER BUFFERED PARALLEL_COMBINED_WITH_PARENT :Q1005Cost: 2.691.669 Bytes: 475.396.517.532 Cardinality: 1.754.230.692
4 BUFFER SORT PARALLEL_COMBINED_WITH_CHILD :Q1005
3 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1005Cost: 20,828 Bytes: 860.278.720 Cardinality: 15.362.120
2 PX SEND HASH PARALLEL_FROM_SERIAL SYS.:TQ10000 Cost: 20,828 Bytes: 860.278.720 Cardinality: 15.362.120
1 TABLE ACCESS FULL TABLE STGADMIN.STG_HIERARCHY Cost: 20,828 Bytes: 860.278.720 Cardinality: 15.362.120
22 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1005Cost: 2.669.426 Bytes: 376.698.378.630 Cardinality: 1.752.085.482
21 PX SEND HASH PARALLEL_TO_PARALLEL SYS.:TQ10004 :Q1004Cost: 2.669.426 Bytes: 376.698.378.630 Cardinality: 1.752.085.482
20 VIEW PARALLEL_COMBINED_WITH_PARENT STGADMIN. :Q1004Cost: 2.669.426 Bytes: 376.698.378.630 Cardinality: 1.752.085.482
19 SORT UNIQUE PARALLEL_COMBINED_WITH_PARENT :Q1004Cost: 2.669.426 Bytes: 127.902.240.186 Cardinality: 1.752.085.482
18 HASH JOIN OUTER PARALLEL_COMBINED_WITH_PARENT :Q1004Cost: 35,386 Bytes: 127.902.240.186 Cardinality: 1.752.085.482
13 HASH JOIN OUTER PARALLEL_COMBINED_WITH_PARENT :Q1004Cost: 4,86 Bytes: 647.395.154 Cardinality: 13.212.146
8 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1004Cost: 1,961 Bytes: 158.611.600 Cardinality: 6.344.464
7 PX SEND HASH PARALLEL_TO_PARALLEL SYS.:TQ10001 :Q1001Cost: 1,961 Bytes: 158.611.600 Cardinality: 6.344.464
6 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1001Cost: 1,961 Bytes: 158.611.600 Cardinality: 6.344.464
5 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT STGADMIN.DELTA_SERVICE_HISTORY :Q1001Cost: 1,961 Bytes: 158.611.600 Cardinality: 6.344.464
12 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1004Cost: 2,879 Bytes: 368.690.880 Cardinality: 15.362.120
11 PX SEND HASH PARALLEL_TO_PARALLEL SYS.:TQ10002 :Q1002Cost: 2,879 Bytes: 368.690.880 Cardinality: 15.362.120
10 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1002Cost: 2,879 Bytes: 368.690.880 Cardinality: 15.362.120
9 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT STGADMIN.STG_HIERARCHY :Q1002Cost: 2,879 Bytes: 368.690.880 Cardinality: 15.362.120
17 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1004Cost: 28,982 Bytes: 5.838.854.928 Cardinality: 243.285.622
16 PX SEND HASH PARALLEL_TO_PARALLEL SYS.:TQ10003 :Q1003Cost: 28,982 Bytes: 5.838.854.928 Cardinality: 243.285.622
15 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1003Cost: 28,982 Bytes: 5.838.854.928 Cardinality: 243.285.622
14 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT STGADMIN.DELTA_ACCOUNT :Q1003Cost: 28,982 Bytes: 5.838.854.928 Cardinality: 243.285.622
...second query
Plan
MERGE STATEMENT ALL_ROWSCost: 3.521.711 Bytes: 291.687.979.305 Cardinality: 652.545.815
32 MERGE STGADMIN.STG_HIERARCHY
31 PX COORDINATOR
30 PX SEND QC (RANDOM) PARALLEL_TO_SERIAL SYS.:TQ10006 :Q1006Cost: 3.521.711 Bytes: 176.839.915.865 Cardinality: 652.545.815
29 VIEW PARALLEL_COMBINED_WITH_PARENT STGADMIN. :Q1006
28 HASH JOIN RIGHT OUTER BUFFERED PARALLEL_COMBINED_WITH_PARENT :Q1006Cost: 3.521.711 Bytes: 176.839.915.865 Cardinality: 652.545.815
4 BUFFER SORT PARALLEL_COMBINED_WITH_CHILD :Q1006
3 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1006Cost: 20,828 Bytes: 860.278.720 Cardinality: 15.362.120
2 PX SEND HASH PARALLEL_FROM_SERIAL SYS.:TQ10000 Cost: 20,828 Bytes: 860.278.720 Cardinality: 15.362.120
1 TABLE ACCESS FULL TABLE STGADMIN.STG_HIERARCHY Cost: 20,828 Bytes: 860.278.720 Cardinality: 15.362.120
27 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1006Cost: 3.500.345 Bytes: 140.125.783.665 Cardinality: 651.747.831
26 PX SEND HASH PARALLEL_TO_PARALLEL SYS.:TQ10005 :Q1005Cost: 3.500.345 Bytes: 140.125.783.665 Cardinality: 651.747.831
25 VIEW PARALLEL_COMBINED_WITH_PARENT STGADMIN. :Q1005Cost: 3.500.345 Bytes: 140.125.783.665 Cardinality: 651.747.831
24 SORT UNIQUE PARALLEL_COMBINED_WITH_PARENT :Q1005Cost: 3.500.345 Bytes: 121.225.096.566 Cardinality: 651.747.831
23 VIEW PARALLEL_COMBINED_WITH_PARENT STGADMIN. :Q1005Cost: 1.195.554 Bytes: 121.225.096.566 Cardinality: 651.747.831
22 WINDOW SORT PUSHED RANK PARALLEL_COMBINED_WITH_PARENT :Q1005Cost: 1.195.554 Bytes: 58.005.556.959 Cardinality: 651.747.831
21 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1005Cost: 1.195.554 Bytes: 58.005.556.959 Cardinality: 651.747.831
20 PX SEND HASH PARALLEL_TO_PARALLEL SYS.:TQ10004 :Q1004Cost: 1.195.554 Bytes: 58.005.556.959 Cardinality: 651.747.831
19 WINDOW CHILD PUSHED RANK PARALLEL_COMBINED_WITH_PARENT :Q1004Cost: 1.195.554 Bytes: 58.005.556.959 Cardinality: 651.747.831
18 HASH JOIN OUTER PARALLEL_COMBINED_WITH_PARENT :Q1004Cost: 34,402 Bytes: 58.005.556.959 Cardinality: 651.747.831
13 HASH JOIN OUTER PARALLEL_COMBINED_WITH_PARENT :Q1004Cost: 4,859 Bytes: 319.455.955 Cardinality: 4.914.707
8 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1004Cost: 1,963 Bytes: 152.576.580 Cardinality: 3.721.380
7 PX SEND HASH PARALLEL_TO_PARALLEL SYS.:TQ10001 :Q1001Cost: 1,963 Bytes: 152.576.580 Cardinality: 3.721.380
6 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1001Cost: 1,963 Bytes: 152.576.580 Cardinality: 3.721.380
5 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT STGADMIN.DELTA_SERVICE_HISTORY :Q1001Cost: 1,963 Bytes: 152.576.580 Cardinality: 3.721.380
12 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1004Cost: 2,879 Bytes: 368.690.880 Cardinality: 15.362.120
11 PX SEND HASH PARALLEL_TO_PARALLEL SYS.:TQ10002 :Q1002Cost: 2,879 Bytes: 368.690.880 Cardinality: 15.362.120
10 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1002Cost: 2,879 Bytes: 368.690.880 Cardinality: 15.362.120
9 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT STGADMIN.STG_HIERARCHY :Q1002Cost: 2,879 Bytes: 368.690.880 Cardinality: 15.362.120
17 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1004Cost: 28,982 Bytes: 5.838.854.928 Cardinality: 243.285.622
16 PX SEND HASH PARALLEL_TO_PARALLEL SYS.:TQ10003 :Q1003Cost: 28,982 Bytes: 5.838.854.928 Cardinality: 243.285.622
15 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1003Cost: 28,982 Bytes: 5.838.854.928 Cardinality: 243.285.622
14 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT STGADMIN.DELTA_ACCOUNT :Q1003Cost: 28,982 Bytes: 5.838.854.928 Cardinality: 243.285.622 -
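As an aside on question 2: the correlated scalar subquery in the first query can be replaced by one pre-aggregated inline view that is outer-joined a single time. An untested sketch, keeping the post's column names and the v_upperbound_upd_dt variable:

```sql
SELECT /*+ parallel(d 8) parallel(h 8) parallel(c 8) */
       DISTINCT
       d.customer_node_id                      AS root_customer_node_id,
       d.customer_node_id                      AS customer_node_id,
       NVL(h.account_id, c.account_id)         AS account_id,
       NVL(h.account_name, c.account_name)     AS account_name,
       d.service_id                            AS service_id,
       NVL(x.max_service_name, d.service_name) AS service_name
FROM   delta_service_history d,
       delta_account c,
       stg_hierarchy h,
       (SELECT service_id, MAX(service_name) AS max_service_name
        FROM   delta_service_history
        WHERE  v_upperbound_upd_dt BETWEEN effective_start_date
                                       AND effective_end_date
        GROUP  BY service_id) x
WHERE  d.customer_node_id = c.customer_node_id(+)
AND    d.customer_node_id = h.customer_node_id(+)
AND    d.service_id       = x.service_id(+);
```

This still reads delta_service_history twice, but once per statement rather than once per row, which is usually far cheaper than the correlated form.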
Performance tuning in oracle 10g
Hi Guys
I hope all are well. Have a nice day!
I want to discuss a performance tuning issue.
Recently I joined a new project that aims to improve the efficiency of an application. In this environment, the Oracle PL/SQL language is used. If I need to improve the efficiency of the application, what steps should be taken, and what is the way to go through the process of improvement?
Kindly help me.

Generate Statspack/AWR reports.
HOW To Make TUNING request
https://forums.oracle.com/forums/thread.jspa?threadID=2174552#9360003 -
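For reference, taking and reporting an AWR snapshot uses the standard DBMS_WORKLOAD_REPOSITORY package (10g and later; AWR requires the Diagnostics Pack license):

```sql
-- Take a snapshot before and after the workload you want to analyze:
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- Then generate the report between the two snapshot IDs:
-- SQL> @?/rdbms/admin/awrrpt.sql
```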
Performance tuning for Sales Order and its configuration data extraction
I write here the data fetching subroutine of an extract report.
This report takes 2.5 hours to extract 36000 records in the quality server.
Kindly provide me some suggestions for performance tuning it.
SELECT auart vkorg vtweg spart vkbur augru
kunnr yxinsto bstdk vbeln kvgr1 kvgr2 vdatu
gwldt audat knumv
FROM vbak
INTO TABLE it_vbak
WHERE vbeln IN s_vbeln
AND erdat IN s_erdat
AND auart IN s_auart
AND vkorg = p_vkorg
AND spart IN s_spart
AND vkbur IN s_vkbur
AND vtweg IN s_vtweg.
IF NOT it_vbak[] IS INITIAL.
SELECT mvgr1 mvgr2 mvgr3 mvgr4 mvgr5
yyequnr vbeln cuobj
FROM vbap
INTO TABLE it_vbap
FOR ALL ENTRIES IN it_vbak
WHERE vbeln = it_vbak-vbeln
AND posnr = '000010'.
SELECT bstkd inco1 zterm vbeln
prsdt
FROM vbkd
INTO TABLE it_vbkd
FOR ALL ENTRIES IN it_vbak
WHERE vbeln = it_vbak-vbeln.
SELECT kbetr kschl knumv
FROM konv
INTO TABLE it_konv
FOR ALL ENTRIES IN it_vbak
WHERE knumv = it_vbak-knumv
AND kschl = 'PN00'.
SELECT vbeln parvw kunnr
FROM vbpa
INTO TABLE it_vbpa
FOR ALL ENTRIES IN it_vbak
WHERE vbeln = it_vbak-vbeln
AND parvw IN ('PE', 'YU', 'RE').
ENDIF.
LOOP AT it_vbap INTO wa_vbap.
IF NOT wa_vbap-cuobj IS INITIAL.
CALL FUNCTION 'VC_I_GET_CONFIGURATION'
EXPORTING
instance = wa_vbap-cuobj
language = sy-langu
TABLES
configuration = it_config
EXCEPTIONS
instance_not_found = 1
internal_error = 2
no_class_allocation = 3
instance_not_valid = 4
OTHERS = 5.
IF sy-subrc = 0.
READ TABLE it_config WITH KEY atnam = 'IND_PRODUCT_LINES'.
IF sy-subrc = 0.
wa_char-obj = wa_vbap-cuobj.
wa_char-atnam = it_config-atnam.
wa_char-atwrt = it_config-atwrt.
APPEND wa_char TO it_char.
CLEAR wa_char.
ENDIF.
READ TABLE it_config WITH KEY atnam = 'IND_GQ'.
IF sy-subrc = 0.
wa_char-obj = wa_vbap-cuobj.
wa_char-atnam = it_config-atnam.
wa_char-atwrt = it_config-atwrt.
APPEND wa_char TO it_char.
CLEAR wa_char.
ENDIF.
READ TABLE it_config WITH KEY atnam = 'IND_VKN'.
IF sy-subrc = 0.
wa_char-obj = wa_vbap-cuobj.
wa_char-atnam = it_config-atnam.
wa_char-atwrt = it_config-atwrt.
APPEND wa_char TO it_char.
CLEAR wa_char.
ENDIF.
READ TABLE it_config WITH KEY atnam = 'IND_ZE'.
IF sy-subrc = 0.
wa_char-obj = wa_vbap-cuobj.
wa_char-atnam = it_config-atnam.
wa_char-atwrt = it_config-atwrt.
APPEND wa_char TO it_char.
CLEAR wa_char.
ENDIF.
READ TABLE it_config WITH KEY atnam = 'IND_HQ'.
IF sy-subrc = 0.
wa_char-obj = wa_vbap-cuobj.
wa_char-atnam = it_config-atnam.
wa_char-atwrt = it_config-atwrt.
APPEND wa_char TO it_char.
CLEAR wa_char.
ENDIF.
READ TABLE it_config WITH KEY atnam = 'IND_CALCULATED_INST_HOURS'.
IF sy-subrc = 0.
wa_char-obj = wa_vbap-cuobj.
wa_char-atnam = it_config-atnam.
wa_char-atwrt = it_config-atwrt.
APPEND wa_char TO it_char.
CLEAR wa_char.
ENDIF.
ENDIF.
ENDIF.
ENDLOOP. " End of loop on it_vbap
Edited by: jaya rangwani on May 11, 2010 12:50 PM
Edited by: jaya rangwani on May 11, 2010 12:52 PM

Hello Jaya,
I will provide some points which will increase the performance of the program:
1. VBAK & VBAP are header and item tables, so the relation is 1 to many. In this case, you can use an inner join instead of multiple select statements.
2. If you are confident in handling inner joins, then you can use a single statement to get the data from VBAK, VBAP & VBKD.
3. Before using FOR ALL ENTRIES, check that the internal table is not initial.
Also sort the internal table and delete adjacent duplicates.
4. Sort all the resultant internal tables by the required key fields and always read using a binary search.
You will find a number of documents that give a fair idea of what should and should not be done in a program with performance issues.
There are also a number of function modules and BAPIs from which you can get the sales order details. You can try 'BAPISDORDER_GETDETAILEDLIST'.
Regards,
Selva K. -
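In plain SQL terms (not ABAP Open SQL syntax), the single-join fetch suggested in points 1-2 is roughly the following. Field lists are abbreviated and the selection-screen ranges are omitted; treat it as a sketch of the shape, not a drop-in replacement:

```sql
SELECT k.vbeln, k.auart, k.vkorg, k.knumv,
       p.mvgr1, p.cuobj,
       d.bstkd, d.inco1, d.zterm
FROM   vbak k
JOIN   vbap p ON p.vbeln = k.vbeln
             AND p.posnr = '000010'
JOIN   vbkd d ON d.vbeln = k.vbeln
WHERE  k.vkorg = :p_vkorg;
```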
Performance Tuning for OBIEE Reports
Hi Experts,
I had a requirement for which I ended up building a snowflake model in the Physical layer, i.e. one dimension table with three snowflake tables (materialized views).
The key point is that the dimension table is used in most of the OOTB reports.
So all the reports use the other three snowflake tables in their join conditions, due to which the reports take much longer than before, around 10 minutes.
Can anyone suggest good performance tuning tips for these reports?
I created some indexes on the materialized view columns and on the dimension table columns.
I created the materialized views with cache enabled, refreshing only once in 24 hours, etc.
Is there anything else I can do to improve performance, or do I have to consider redesigning the Physical layer without the snowflake?
Please Provide valuable suggestions and comments
Thank You
Kumar

Kumar,
Most of the performance tuning should be done at the back end. So calculate all the aggregates in the repository itself and create a fast refresh for the MV. You can also schedule an iBot to run the report every hour or so, so that the report data will be cached; when the user runs the report, the BI Server extracts the data from the cache.
Hope that helps
~Srix -
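A minimal sketch of the fast-refresh MV setup the reply mentions, with hypothetical object names; real definitions must satisfy Oracle's fast-refresh restrictions (an MV log on every base table, COUNT(*) present for aggregate MVs, etc.):

```sql
-- MV log recording changes on the (hypothetical) dimension table
CREATE MATERIALIZED VIEW LOG ON dim_customer
  WITH ROWID, SEQUENCE (cust_id, region)
  INCLUDING NEW VALUES;

-- Aggregate MV that can then be refreshed incrementally instead of fully
CREATE MATERIALIZED VIEW mv_cust_by_region
  BUILD IMMEDIATE
  REFRESH FAST ON DEMAND
  AS
  SELECT region, COUNT(*) AS cnt
  FROM   dim_customer
  GROUP  BY region;
```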
Performance Tuning - Suggestions
Hi,
I have an ABAP (Interactive List) program that times out in PRD very often. The ABAP runtime is about 99%; the DB time is less than 1%. All the select statements have table indexes in place. It is actually processing all the production orders (released but not confirmed/closed). Please let me know if you have any suggestions.
Appreciate Your Help.
Thanks,
Kannan.

Hi
1) Don't use nested select statements.
2) If possible, use FOR ALL ENTRIES in addition.
3) In the WHERE clause, make sure you give all the primary key fields.
4) Use an index for the selection criteria.
5) You can also use inner joins.
6) You can try to put the data from the first select statement into an itab, and then, to select the data from the second table, use FOR ALL ENTRIES.
7) Use the runtime analysis SE30 and SQL Trace (ST05) to identify the performance and also to identify where the load is heavy, so that you can change the code accordingly
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/5d0db4c9-0e01-0010-b68f-9b1408d5f234
ABAP performance depends upon various factors and is divided into three parts:
1. Database
2. ABAP
3. System
Run any program using SE30 (performance analysis); to improve performance, refer to the tips and tricks section of SE30. Always remember that ABAP performance is improved when there is the least load on the database.
You can get an interactive graph regarding this in SE30, along with a file.
Also, if you want to find the runtime of parts of the code, use:
Switch on RTA dynamically within ABAP code
*To turn runtime analysis on within ABAP code, insert the following:
SET RUN TIME ANALYZER ON.
*To turn runtime analysis off within ABAP code, insert the following:
SET RUN TIME ANALYZER OFF.
Always check that the driver internal table is not empty when using FOR ALL ENTRIES.
Avoid for all entries in JOINS
Try to avoid joins and use FOR ALL ENTRIES.
Try to restrict the joins to 1 level only ie only for tables
Avoid using Select *.
Avoid having multiple Selects from the same table in the same object.
Try to minimize the number of variables to save memory.
The sequence of fields in 'where clause' must be as per primary/secondary index ( if any)
Avoid creation of index as far as possible
Avoid operators like <>, > , < & like % in where clause conditions
Avoid select/select single statements in loops.
Try to use 'binary search' in READ internal table. Ensure table is sorted before using BINARY SEARCH.
Avoid using aggregate functions (SUM, MAX etc) in selects ( GROUP BY , HAVING,)
Avoid using ORDER BY in selects
Avoid Nested Selects
Avoid Nested Loops of Internal Tables
Try to use FIELD SYMBOLS.
Try to avoid into Corresponding Fields of
Avoid using Select Distinct, Use DELETE ADJACENT
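The BINARY SEARCH and field-symbol tips above can be sketched like this; the type ty_price and its fields are invented for illustration:

```abap
* Sketch only: ty_price and its fields are made-up names.
TYPES: BEGIN OF ty_price,
         matnr TYPE char18,
         price TYPE p LENGTH 8 DECIMALS 2,
       END OF ty_price.

DATA: lt_price TYPE STANDARD TABLE OF ty_price,
      ls_price TYPE ty_price.
FIELD-SYMBOLS <fs_price> TYPE ty_price.

* Sort first, then READ ... BINARY SEARCH: O(log n) instead of
* the O(n) linear scan of a plain READ TABLE ... WITH KEY.
SORT lt_price BY matnr.
READ TABLE lt_price INTO ls_price
     WITH KEY matnr = 'MAT-001' BINARY SEARCH.
IF sy-subrc = 0.
  " row found in ls_price
ENDIF.

* ASSIGNING avoids copying each row into a work area, and a change
* made through the field symbol needs no MODIFY statement.
LOOP AT lt_price ASSIGNING <fs_price>.
  <fs_price>-price = <fs_price>-price * '1.1'.
ENDLOOP.
```

Without the SORT, BINARY SEARCH silently returns wrong results, which is why the tip insists the table is sorted first.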
Check the following Links
Re: performance tuning
Re: Performance tuning of program
http://www.sapgenie.com/abap/performance.htm
http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
check the below link
http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
See the following link if it's any help:
http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
Check also http://service.sap.com/performance
and
books like
http://www.sap-press.com/product.cfm?account=&product=H951
http://www.sap-press.com/product.cfm?account=&product=H973
http://www.sap-img.com/abap/more-than-100-abap-interview-faqs.htm
http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
Performance tuning for Data Selection Statement
http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
Debugger
http://help.sap.com/saphelp_47x200/helpdata/en/c6/617ca9e68c11d2b2ab080009b43351/content.htm
http://www.cba.nau.edu/haney-j/CIS497/Assignments/Debugging.doc
http://help.sap.com/saphelp_erp2005/helpdata/en/b3/d322540c3beb4ba53795784eebb680/frameset.htm
Run Time Analyser
http://help.sap.com/saphelp_47x200/helpdata/en/c6/617cafe68c11d2b2ab080009b43351/content.htm
SQL trace
http://help.sap.com/saphelp_47x200/helpdata/en/d1/801f7c454211d189710000e8322d00/content.htm
CATT - Computer Aided Testing Tool
http://help.sap.com/saphelp_47x200/helpdata/en/b3/410b37233f7c6fe10000009b38f936/frameset.htm
Test Workbench
http://help.sap.com/saphelp_47x200/helpdata/en/a8/157235d0fa8742e10000009b38f889/frameset.htm
Coverage Analyser
http://help.sap.com/saphelp_47x200/helpdata/en/c7/af9a79061a11d4b3d4080009b43351/content.htm
Runtime Monitor
http://help.sap.com/saphelp_47x200/helpdata/en/b5/fa121cc15911d5993d00508b6b8b11/content.htm
Memory Inspector
http://help.sap.com/saphelp_47x200/helpdata/en/a2/e5fc84cc87964cb2c29f584152d74e/content.htm
eCATT - Extended Computer Aided Testing Tool
http://help.sap.com/saphelp_47x200/helpdata/en/20/e81c3b84e65e7be10000000a11402f/frameset.htm
You can go to transaction SE30 to run a runtime analysis of your program. Also try transaction SCI, which is the SAP Code Inspector.
Regards
Anji -
Hi All,
I have 2 tables.
Parts and Part_Property
Parts
Part ID | Status
1       | Complete
2       | Pending
3       | Complete
Part_Property
Part ID | Status   | Part Type         | Part String
1       | Complete | Active            | True
1       | Complete | Data_Status       | Raw
1       | Complete | Temp_Verification | Valid_Test
1       | Complete | Name              | Screw
2       | Complete | Active            | False
2       | Complete | Data_Status       | Raw
2       | Complete | Temp_Verification | Valid_Test
2       | Complete | Name              | Hooks
3       | Complete | Active            | True
3       | Complete | Data_Status       | Raw
3       | Complete | Temp_Verification | Valid_Test
3       | Complete | Name              | Bolt
The rows above are a small sample of the data. Our previous code, with a LEFT OUTER JOIN, has a huge scan count (> 15000) and logical reads (> 40000), and the users complain that it is always very slow.
SQL Server Execution Times:
CPU time = 313 ms, elapsed time = 318 ms.
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
Is there any way to overcome this performance bottleneck?
You violated ISO-11179 naming standards. You violated First Normal Form (1NF).
You have committed EAV (Entity-Attribute-Value), a huge design flaw. We never, never, never mix data and metadata in a table. Using a "VW_" prefix in VIEW names is called "Volkswagen programming", another noob error.
This is a pretty simple and very clean EAV example. In practice you will see one table, descriptively named something like "DATA", all NULL-able columns, no attempt at keys and worse. Much worse.
CREATE TABLE Attributes_Values
(attribute_name VARCHAR (10) NOT NULL,
attribute_value VARCHAR (50) NOT NULL,
PRIMARY KEY (attribute_name, attribute_value));
INSERT INTO Attributes_Values
VALUES ('LOCATION', 'bedroom'),
('LOCATION', 'dining room'),
('LOCATION', 'bathroom'),
('LOCATION', 'courtyard'),
('EVENT', 'verbal aggression'),
('EVENT', 'peer'),
('EVENT', 'bad behavior'),
('EVENT', 'other');
CREATE TABLE Entities
(physical_row_locator INTEGER IDENTITY (1,1) NOT NULL,
generic_entity_id INTEGER,
attribute_name VARCHAR (10) NOT NULL,
attribute_value VARCHAR (50) NOT NULL,
FOREIGN KEY (attribute_name, attribute_value)
REFERENCES Attributes_Values (attribute_name, attribute_value));
INSERT INTO Entities
VALUES (1, 'LOCATION', 'bedroom'),
(1, 'EVENT', 'other'),
(1, 'EVENT', 'bad behavior'),
(2, 'LOCATION', 'bedroom'),
(2, 'EVENT', 'other'),
(2, 'EVENT', 'verbal aggression'),
(3, 'LOCATION', 'courtyard'),
(3, 'EVENT', 'other'),
(3, 'EVENT', 'peer');
Please notice that there is nothing to prevent me from inserting a row (2, 'AUTHORITY', 'police') or (42, 'SHOESIZE', '10') that may or may not make sense. No two generic_entity_id's are required to have the same structure when you join their
attributes together. There is no restriction on the attribute names or values; every typo is a new attribute or value.
The poster wanted a simple (Location, Event, COUNT(*) ) report. That is about as basic as you can get. Here is one shot at it.
WITH Locations (generic_entity_id, location_name) -- notice aliasing!
AS
(SELECT generic_entity_id, attribute_value
FROM Entities AS E1
WHERE attribute_name = 'LOCATION'),
Events (generic_entity_id, incident_type) -- notice aliasing!
AS
(SELECT generic_entity_id, attribute_value
FROM Entities AS E2
WHERE attribute_name = 'EVENT'),
Incidents (incident_nbr, location_name, incident_type) -- notice aliasing!
AS
(SELECT L1.generic_entity_id, L1.location_name, E1.incident_type
FROM Locations AS L1,
Events AS E1
WHERE L1.generic_entity_id = E1.generic_entity_id)
SELECT location_name, incident_type, COUNT(*) AS incident_cnt
FROM Incidents
GROUP BY location_name, incident_type;
This is a general pattern for EAV queries. Each column is extracted from the entity-attribute-value data. A query with (n) columns becomes an n-way self-join under the covers, and each of those working tables then needs (n-1) joins to the Entities table. This gives us something like a table, but one with no data integrity, no guarantee of a key, no constraints, and no numeric or temporal data types.
The use of CTEs is simply to make the query easier to read. It does not help performance. You also have to give particular names to the generic data as you extract it. How do you get everyone to agree on those names?
Did you want to add a 'FINE' attribute to the table? Well, the values can hold only character data, so you now have the overhead of CAST() calls. In fact, since you cannot predict what will go into a column, you have to use the most general data type that can cast to anything else -- NVARCHAR(MAX). But you will probably use a VARCHAR(<big number here>) instead.
I worked for a software company that used an EAV model for a package to compute insurance salesman's commission based on an elaborate tiered scheme. They got paid based on the performance of their personal sales, their team's sales, their district sales and
finally the company as a whole. This is a fairly common way to compute commissions.
The reason given for the EAV model was that the commission algorithm could be easily changed by end users on the fly. The bad news was that it was changed by the end users on the fly. Orphan rows could not be removed for fear of breaking something, even if you could figure out the chains of GUIDs used to link things together. Servers filled with junk data and locked up the system in a few months.
The bigger problem is that EAV has no data integrity. Consider the constraints you need to add to the simple Attributes_Values table to make it almost work:
CREATE TABLE Attributes_Values
(attribute_name VARCHAR (10) NOT NULL,
attribute_value VARCHAR (50) NOT NULL,
PRIMARY KEY (attribute_name, attribute_value),
CONSTRAINT Valid_Attribute_Names
CHECK (attribute_name IN ('LOCATION', 'EVENT')),
CONSTRAINT Generic_DRI
CHECK (CASE WHEN attribute_name = 'LOCATION'
AND attribute_value
IN ('bedroom', 'dining room', 'bathroom', 'courtyard')
THEN 'T'
WHEN attribute_name = 'EVENT'
AND attribute_value
IN ('verbal aggression', 'peer', 'bad behavior', 'other')
THEN 'T' ELSE 'F' END = 'T'));
INSERT INTO Attributes_Values
VALUES ('LOCATION', 'bedroom'),
('LOCATION', 'dining room'),
('LOCATION', 'bathroom'),
('LOCATION', 'courtyard'),
('EVENT', 'verbal aggression'),
('EVENT', 'peer'),
('EVENT', 'bad behavior'),
('EVENT', 'other');
Now add a FINE attribute and constraint it to being non-negative money amounts. Now add a constraint that the amount of the fine cannot be over $5.00 if the event was 'verbal aggression' in a bedroom.
Try to write a single DEFAULT clause for all the entities crammed into one column. Impossible, unless they all happen to use NULL.
The same thing in a proper schema would start with a sane design. There should be separate referenced tables or CHECK() constraints for Locations and Events, since they are attributes of something.
CREATE TABLE Incident_Reports
(incident_report_nbr CHAR(12) NOT NULL PRIMARY KEY,
location_code VARCHAR(15) NOT NULL
REFERENCES Locations (location_code)
ON DELETE CASCADE
ON UPDATE CASCADE,
incident_type VARCHAR(20) NOT NULL
REFERENCES Incident_Types (incident_type)
ON UPDATE CASCADE,
etc);
Entities, attributes and values are where they belong, so the query is now trivial:
SELECT location_code, incident_type, COUNT(*)
FROM Incident_Reports
GROUP BY location_code, incident_type;
Then I could get fancier report with a simple change to the GROUP BY clause:
SELECT location_code, incident_type, COUNT(*)
FROM Incident_Reports
GROUP BY ROLLUP (location_code, incident_type);
The EAV version is left as an exercise for the reader.
There is such a thing as "too" generic. "To be is to be something in particular; to be nothing in particular or everything in general is to be nothing at all." --Law of Identity, Parmenides the Eleatic (circa BCE 490)
References
For those who are interested, there are couple of links to articles on EAV I found on the net:
Generic Design of Web-Based Clinical Databases
http://www.jmir.org/2003/4/e27/
The Attributes_Values/CR Model of Data Representation
http://ycmi.med.yale.edu/nadkarni/eav_CR_contents.htm
An Introduction to Entity-Attribute-Value Design for Generic Clinical Study Data Management Systems
http://ycmi.med.yale.edu/nadkarni/Introduction%20to%20EAV%20systems.htm
Data Extraction and Ad Hoc Query of an Entity-Attribute-Value Database
http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pubmed&pubme...
Exploring Performance Issues for a Clinical Database Organized Using an Entity-Attribute-Value Representation
http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pubmed&pubme...
A really good horror story about this kind of disaster is at:
http://www.simple-talk.com/opinion/opinion-pieces/bad-carma/
--CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
in Sets / Trees and Hierarchies in SQL -
Hi,
Please can anybody help me with the performance tuning of the VA01 transaction, since it's consuming a lot of time in production.
This issue is very urgent.
Pls help.
you can refer these links :
http://www.sapgenie.com/abap/performance.htm
cheers!
sri