Query taking time
We have a query that is taking a long time to execute. From the explain plan, what I found is that there is a full table scan on W_GL_OTHER_F. Please help in identifying the problem area and possible solutions.
The query is,
select D1.c1 as c1,
D1.c2 as c2,
D1.c3 as c3,
D1.c4 as c4,
D1.c5 as c5,
D1.c6 as c6,
D1.c7 as c7,
D1.c8 as c8
from
(select distinct D1.c2 as c1,
D1.c3 as c2,
D1.c4 as c3,
D1.c5 as c4,
D1.c6 as c5,
D1.c7 as c6,
D1.c8 as c7,
D1.c1 as c8,
D1.c5 as c9
from
(select sum(case when T324628.OTHER_DOC_AMT is null then 0 else T324628.OTHER_DOC_AMT end ) as c1,
T91397.GL_ACCOUNT_NUM as c2,
T149255.SEGMENT_VAL_CODE as c3,
T148908.SEGMENT_VAL_DESC as c4,
T148543.HIER4_CODE as c5,
T148543.HIER4_NAME as c6,
T91707.ACCT_DOC_NUM as c7,
T91707.X_LINE_DESCRIPTION as c8
from
W_GL_OTHER_F T91707 /* Fact_W_GL_OTHER_F */ ,
W_GL_ACCOUNT_D T91397 /* Dim_W_GL_ACCOUNT_D */ ,
W_STATUS_D T96094 /* Dim_W_STATUS_D_Generic */ ,
WC_GL_OTHER_F_MV T324628 /* Fact_WC_GL_OTHER_MV */ ,
W_GL_SEGMENT_D T149255 /* Dim_W_GL_SEGMENT_D_Segment1 */ ,
W_GL_SEGMENT_D T148937 /* Dim_W_GL_SEGMENT_D_Segment3 */ ,
W_HIERARCHY_D T148543 /* Dim_W_HIERARCHY_D_Segment3 */ ,
W_GL_SEGMENT_D T148908 /* Dim_W_GL_SEGMENT_D_Segment2 */
where ( T91397.ROW_WID = T91707.GL_ACCOUNT_WID and T91707.DOC_STATUS_WID = T96094.ROW_WID and T96094.ROW_WID = T324628.DOC_STATUS_WID and T148543.HIER_CODE = T148937.SEGMENT_LOV_ID and T148543.HIER20_CODE = T148937.SEGMENT_VAL_CODE and T324628.DELETE_FLG = 'N' and T324628.X_CURRENCY_CODE = 'CAD' and T148543.HIER4_CODE <> '00000000000' and T91397.RECON_TYPE_CODE is not null and T91397.ROW_WID = T324628.GL_ACCOUNT_WID and T91397.ACCOUNT_SEG3_CODE = T148937.SEGMENT_VAL_CODE and T91397.ACCOUNT_SEG3_ATTRIB = T148937.SEGMENT_LOV_ID and T91397.ACCOUNT_SEG2_CODE = T148908.SEGMENT_VAL_CODE and T91397.ACCOUNT_SEG2_ATTRIB = T148908.SEGMENT_LOV_ID and T91397.ACCOUNT_SEG1_CODE = T149255.SEGMENT_VAL_CODE and T91397.ACCOUNT_SEG1_ATTRIB = T149255.SEGMENT_LOV_ID and (T96094.W_STATUS_CODE in ('POSTED', 'REVERSED')) and T91397.GL_ACCOUNT_NUM like '%98%' )
group by T91397.GL_ACCOUNT_NUM, T91707.ACCT_DOC_NUM, T91707.X_LINE_DESCRIPTION, T148543.HIER4_CODE, T148543.HIER4_NAME, T148908.SEGMENT_VAL_DESC, T149255.SEGMENT_VAL_CODE
) D1
) D1
order by c1, c2, c3, c4, c5, c6, c7
The plan is,
PLAN_TABLE_OUTPUT
Plan hash value: 3196636288
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Psto
| 0 | SELECT STATEMENT | | 810K| 306M| | 266K (1)| 01:20:03 | | |
| 1 | HASH GROUP BY | | 810K| 306M| 320M| 266K (1)| 01:20:03 | | |
|* 2 | HASH JOIN | | 810K| 306M| 38M| 239K (1)| 01:11:56 | | |
|* 3 | MAT_VIEW ACCESS FULL | WC_GL_OTHER_F_MV | 1137K| 40M| | 9771 (2)| 00:0
|* 4 | HASH JOIN | | 531K| 189M| | 222K (1)| 01:06:38 | | |
| 5 | INLIST ITERATOR | | | | | | | | |
|* 6 | INDEX RANGE SCAN | W_STATUS_D_U2 | 4 | 56 | | 1 (0)| 00:00:01 |
|* 7 | HASH JOIN | | 607K| 208M| 8704K| 222K (1)| 01:06:38 | | |
|* 8 | HASH JOIN | | 40245 | 8214K| 2464K| 10843 (2)| 00:03:16 | | |
| 9 | VIEW | index$_join$_007 | 35148 | 2025K| | 122 (32)| 00:00:03 | |
|* 10 | HASH JOIN | | | | | | | | |
|* 11 | HASH JOIN | | | | | | | | |
|* 12 | HASH JOIN | | | | | | | | |
| 13 | BITMAP CONVERSION TO ROWIDS | | 35148 | 2025K| | 1 (0)| 00:00:01 | |
| 14 | BITMAP INDEX FULL SCAN | W_HIERARCHY_D_M2 | | | | | | |
| 15 | BITMAP CONVERSION TO ROWIDS | | 35148 | 2025K| | 24 (0)| 00:00:01 | |
| 16 | BITMAP INDEX FULL SCAN | W_HIERARCHY_D_M4 | | | | | | |
| 17 | BITMAP CONVERSION TO ROWIDS | | 35148 | 2025K| | 24 (0)| 00:00:01 | |
|* 18 | BITMAP INDEX FULL SCAN | X_W_HIERARCHY_D_M11 | | | | | | |
| 19 | BITMAP CONVERSION TO ROWIDS | | 35148 | 2025K| | 33 (0)| 00:00:01 | |
| 20 | BITMAP INDEX FULL SCAN | X_W_HIERARCHY_D_M12 | | | | | | |
|* 21 | HASH JOIN | | 40246 | 5895K| 4096K| 10430 (2)| 00:03:08 | |
| 22 | VIEW | index$_join$_008 | 65417 | 3321K| | 197 (14)| 00:00:04 |
|* 23 | HASH JOIN | | | | | | | | |
|* 24 | HASH JOIN | | | | | | | | |
| 25 | BITMAP CONVERSION TO ROWIDS | | 65417 | 3321K| | 3 (0)| 00:00:01 | |
| 26 | BITMAP INDEX FULL SCAN | W_GL_SEGMENT_D_M1 | | | | | | |
| 27 | BITMAP CONVERSION TO ROWIDS | | 65417 | 3321K| | 66 (2)| 00:00:02 | |
| 28 | BITMAP INDEX FULL SCAN | W_GL_SEGMENT_D_M2 | | | | | | |
| 29 | BITMAP CONVERSION TO ROWIDS | | 65417 | 3321K| | 100 (1)| 00:00:02 | |
| 30 | BITMAP INDEX FULL SCAN | W_GL_SEGMENT_D_M3 | | | | | | |
|* 31 | HASH JOIN | | 40246 | 3851K| | 9953 (1)| 00:03:00 | | |
| 32 | VIEW | index$_join$_006 | 65417 | 1149K| | 82 (18)| 00:00:02 | |
|* 33 | HASH JOIN | | | | | | | | |
| 34 | BITMAP CONVERSION TO ROWIDS | | 65417 | 1149K| | 3 (0)| 00:00:01 | |
| 35 | BITMAP INDEX FULL SCAN | W_GL_SEGMENT_D_M1 | | | | | | |
| 36 | BITMAP CONVERSION TO ROWIDS | | 65417 | 1149K| | 66 (2)| 00:00:02 | |
| 37 | BITMAP INDEX FULL SCAN | W_GL_SEGMENT_D_M2 | | | | | | |
|* 38 | HASH JOIN | | 40246 | 3144K| | 9870 (1)| 00:02:58 | | |
| 39 | VIEW | index$_join$_005 | 65417 | 1149K| | 82 (18)| 00:00:02 | |
|* 40 | HASH JOIN | | | | | | | | |
| 41 | BITMAP CONVERSION TO ROWIDS| | 65417 | 1149K| | 3 (0)| 00:00:01 | |
| 42 | BITMAP INDEX FULL SCAN | W_GL_SEGMENT_D_M1 | | | | | | |
| 43 | BITMAP CONVERSION TO ROWIDS| | 65417 | 1149K| | 66 (2)| 00:00:02 | |
| 44 | BITMAP INDEX FULL SCAN | W_GL_SEGMENT_D_M2 | | | | | | |
|* 45 | TABLE ACCESS FULL | W_GL_ACCOUNT_D | 40246 | 2436K| | 9788 (1)| 00:02:57
| 46 | PARTITION RANGE ALL | | 11M| 4261M| | 152K (2)| 00:45:43 | 1 |1048
| 47 | TABLE ACCESS FULL | W_GL_OTHER_F | 11M| 4261M| | 152K (2)| 00:45:43
Predicate Information (identified by operation id):
2 - access("T96094"."ROW_WID"="T324628"."DOC_STATUS_WID" AND "T91397"."ROW_WID"="T324628"."GL_ACC
3 - filter("T324628"."X_CURRENCY_CODE"='CAD' AND "T324628"."DELETE_FLG"='N')
4 - access("T91707"."DOC_STATUS_WID"="T96094"."ROW_WID")
6 - access("T96094"."W_STATUS_CODE"='POSTED' OR "T96094"."W_STATUS_CODE"='REVERSED')
7 - access("T91397"."ROW_WID"="T91707"."GL_ACCOUNT_WID")
8 - access("T148543"."HIER_CODE"="T148937"."SEGMENT_LOV_ID" AND "T148543"."HIER20_CODE"="T148937"
10 - access(ROWID=ROWID)
11 - access(ROWID=ROWID)
12 - access(ROWID=ROWID)
18 - filter("T148543"."HIER4_CODE"<>'00000000000')
21 - access("T91397"."ACCOUNT_SEG2_CODE"="T148908"."SEGMENT_VAL_CODE" AND
"T91397"."ACCOUNT_SEG2_ATTRIB"="T148908"."SEGMENT_LOV_ID")
23 - access(ROWID=ROWID)
24 - access(ROWID=ROWID)
31 - access("T91397"."ACCOUNT_SEG3_CODE"="T148937"."SEGMENT_VAL_CODE" AND
"T91397"."ACCOUNT_SEG3_ATTRIB"="T148937"."SEGMENT_LOV_ID")
33 - access(ROWID=ROWID)
38 - access("T91397"."ACCOUNT_SEG1_CODE"="T149255"."SEGMENT_VAL_CODE" AND
"T91397"."ACCOUNT_SEG1_ATTRIB"="T149255"."SEGMENT_LOV_ID")
40 - access(ROWID=ROWID)
45 - filter("T91397"."GL_ACCOUNT_NUM" LIKE '%98%' AND "T91397"."RECON_TYPE_CODE" IS NOT NULL)
79 rows selected.
user605926 wrote:
We have this query which is taking a long time to execute. From the explain plan what i found out is there is a full table scan going on W_GL_OTHER_F. Please help in identifying the problem area and solutions.
You may want to have a look at "HOW TO: Post a SQL statement tuning request - template posting" to see what additional details are needed in order for somebody to provide a better answer.
Based on what you have posted so far, you may want to share answers to the following questions (in addition to the details in the above link):
1) How much time does the query currently take to execute? How much time do you expect it to take? Also, how are you measuring query execution time?
2) Your plan suggests that the query is expected to return 810K rows. Is this figure close to the actual number of records? What are you doing with this huge amount of data?
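One way to check whether those 810K estimated rows are anywhere near reality (a sketch; assumes a 10g or later database and access to the V$ views) is to run the statement once with rowsource statistics enabled, then display the plan with actual row counts next to the estimates:

```sql
-- Run the statement once with rowsource statistics collected.
-- The hint goes on the outermost SELECT of the slow query.
select /*+ gather_plan_statistics */ D1.c1, D1.c2 /* ... rest of the query ... */
from /* ... */;

-- Then show the last executed plan with E-Rows vs A-Rows per step
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
```

Steps where E-Rows and A-Rows diverge badly are the first places to look for stale statistics or bad cardinality estimates; a full scan of W_GL_OTHER_F is not necessarily wrong if most of its 11M rows really are needed.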
Similar Messages
-
The following query is taking time. Is there any better way to write this query?
SELECT PROGRAM_NAME_ID ,PROGRAM_NAME,sum(balance)"Unpaid Balance"
FROM (
SELECT DISTINCT
PROGRAM_NAME_ID ,PROGRAM_NAME,
t.billing_key billing_key,
(TUFF_GENERIC_PKG.GET_TOTAL(t.billing_key,t.program_key)+
nvl(PENALTY_INTEREST(t.billing_key,t.program_key,b.company_id,b.report_period ),0))
-PAYMENT_AMOUNT(B.COMPANY_ID,T.PROGRAM_KEY,B.REPORT_PERIOD) Balance,
Report_period,company_id
FROM BILLING B,
PROG_SURCH T ,
mv_program_dict P
WHERE
B.BILLING_KEY=T.BILLING_KEY
AND p.program_key= t.program_key(+)
and company_id=:p3_hide_comp
and b.SUBMIT_STATUS='S'
union
SELECT DISTINCT
PROGRAM_NAME_ID ,PROGRAM_NAME,
t.billing_key billing_key,
(TUFF_GENERIC_PKG.GET_TOTAL(t.billing_key,t.program_key)+
nvl(PENALTY_INTEREST(t.billing_key,t.program_key,b.company_id,b.report_period ),0))
-PAYMENT_AMOUNT(B.COMPANY_ID,T.PROGRAM_KEY,B.REPORT_PERIOD) Balance,
Report_period,company_id
FROM MV_BILLING B,
MV_PROG_SURCH T ,
mv_program_dict P
WHERE
B.BILLING_KEY=T.BILLING_KEY
AND p.program_key= t.program_key(+)
and company_id=:p3_hide_comp
order by report_period,program_name_id )
where balance>=0
GROUP BY PROGRAM_NAME_ID,PROGRAM_NAME
ORDER BY PROGRAM_NAME_ID
Hi,
This is totally right.
>
Being one such call. The price for calling pl/sql functions in SQL can be quite high. I'd highly recommend you find a way to incorporate the pl/sql code into the SQL query.
>
but try this query. I hope it helps you and returns the rows you want.
SELECT program_name_id, program_name,
       SUM (  tuff_generic_pkg.get_total (billing_key, program_key)
            + NVL (penalty_interest (billing_key,
                                     program_key,
                                     company_id,
                                     report_period),
                   0)
            - payment_amount (company_id, program_key, report_period)
           ) "Unpaid Balance"
  FROM (SELECT program_name_id, program_name, t.billing_key, t.program_key,
               b.company_id, b.report_period
          FROM billing b, prog_surch t, mv_program_dict p
         WHERE b.billing_key = t.billing_key
           AND p.program_key = t.program_key(+)
           AND company_id = :p3_hide_comp
           AND b.submit_status = 'S'
        UNION
        SELECT program_name_id, program_name, t.billing_key, t.program_key,
               b.company_id, b.report_period
          FROM mv_billing b, mv_prog_surch t, mv_program_dict p
         WHERE b.billing_key = t.billing_key
           AND p.program_key = t.program_key(+)
           AND company_id = :p3_hide_comp) sub
 WHERE    tuff_generic_pkg.get_total (billing_key, program_key)
        + NVL (penalty_interest (billing_key,
                                 program_key,
                                 company_id,
                                 report_period),
               0)
        - payment_amount (company_id, program_key, report_period) >= 0
 GROUP BY program_name_id, program_name
Obviously I cannot test this.
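Following up on the point above about the cost of calling PL/SQL functions from SQL: one common mitigation (a sketch only; whether it helps depends on how many distinct input combinations the data contains) is to wrap each function call in a scalar subquery, so Oracle can cache the result for repeated inputs instead of re-executing the function on every row:

```sql
-- Instead of calling the function once per row:
--   tuff_generic_pkg.get_total(billing_key, program_key)
-- wrap it in a scalar subquery; Oracle caches results for
-- repeated (billing_key, program_key) input pairs:
SELECT sub.program_name_id,
       (SELECT tuff_generic_pkg.get_total (sub.billing_key, sub.program_key)
          FROM dual) AS total
  FROM (/* the union query above */) sub;
```

The same wrapping can be applied to penalty_interest and payment_amount. This changes only how often the functions fire, not what they return.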
HTH -- johnxjean -- -
Hello,
I have created a query based on inventory cube 0IC_C03. When I execute the InfoCube for a particular date I am able to see the output, but when I execute the query for that same date, the query takes a long time to execute and throws a message that the time limit was exceeded.
Could anyone suggest why the query shows this message, along with a resolution?
Thanks,
Kumkum
Edited by: kumkum basu on Nov 29, 2010 2:33 PM
Hi,
There can be a number of reasons.
What you can do is:
Put the unwanted characteristics in Free Characteristics.
Remove unwanted cell references.
Try using partitions in cubes.
Use aggregates for summarised data.
If the above options don't work, then try pre-caching. This will definitely help!
Use proper selections to get a small subset of data.
Go to RSRT >> type your query name >> Query properties >> select cache mode = 4.
In addition to RSRT, ST05 (sql trace), SE30 (runtime analysis) and system statistics (ST03) may help you in identifying performance issues with a report.
Thanks,.
Saveen Kumar -
I have a query --> select c1,c2,c3 from table1. This query takes only a few milliseconds. But when I take a count from the same query, i.e. when I execute select count(c1,c2,c3) from table1, it takes a very long time (about 1 min). The table1 contains about 25000 rows. Please help to improve the performance of the count query.
Satej wrote:
I have a query --> select c1,c2,c3 from table1. This query takes only a few milliseconds. But when I take a count from the same query, i.e. when I execute select count(c1,c2,c3) from table1, it takes a very long time (about 1 min).
Classic misperception of Toad, SQL Navigator and similar tool users. All these tools fetch just the first screen of results and show the time it took to fetch just that, not the time to fetch all rows. And in order to count, you need to fetch all rows. That is why select count(*) takes longer. But 1 min for 25000 rows is a bit long. Check the execution plan to see what is going on.
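To see where that minute goes rather than guessing (a sketch; table1 stands in for the real table name), check the access path the count actually uses. The best case is a fast full scan of a small index on a NOT NULL column rather than a scan of the whole table:

```sql
-- COUNT must visit every row, so what matters is how it reads them
explain plan for select count(*) from table1;
select * from table(dbms_xplan.display);
```

If the plan shows TABLE ACCESS FULL on a table with many wide rows or lots of empty blocks below the high-water mark, that would explain the long runtime.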
SY. -
Hi:
We are using a common component for generating query screens wherein we just have to pass a query number to the component which in turn return a list view of records.
We have a particular Query which when executed through Toad or any SQL tool returns records within 4-5 seconds but the same one takes almost 3 minutes to load from Front End.
What could be the possible reason for the page to take so much time to load.
We are using WebSphere 6.0.2.17 server. Is there any settings to be done for statement cache size or for any connection timeout or is there anything to be done at Oracle side.
Since the moment we change the query number to some simple query like select * from dual kind the screen loads instantly.
P.S: The query takes almost 4-5 seconds to execute in backend through TOAD.
Avadhoot Sawant
So 45 or 47? Nevertheless ...
This is hardly a heavy calculation, the savings will be dismal. Also anything numeric is very easy on CPU in general.
But
convert( numeric(8,5), (convert( numeric, T3.COl7) / 1000000))
is not optimal;
CONVERT( NUMERIC(8,5), 300 / 1000000.00000 )
is.
Now it boils down to how to make the load faster: do it in parallel. Find how many sockets the machine has and split the table into as many chunks. Also profile to find out where it spends most of the time. I have sometimes seen the network be the bottleneck, so you may want to play with buffers and packet sizes; for example, if OLEDB is used, increase the packet size two times and see if it works faster, then x2 more, and so forth.
To help you further, you need to tell us more, e.g. what the source and destination are and how you configured the load.
Please understand that there is no Silver Bullet anywhere, or a blanket solution, and you need to tell me your desired load time. E.g. if you tell me it needs to load in 5 min I will give your ask a pass.
Arthur
MyBlog
Twitter -
Query taking time when MVIEW is used!!!
Dear All,
Whenever I try to execute a query involving a Materialized View (MVIEW), I have often seen that if the query takes *5hrs* to execute and the MVIEW refresh is every *2hrs*, it throws an error like "error -- precedeing line from ...".
I just wanted to confirm: will there be a problem when query execution overlaps the next MVIEW refresh?
** Sorry could not provide the exact error
So what do you think ORACLE is doing? Giving you the "latest" update? Of course not; you have read consistency, so as soon as the data changes, your query reads from UNDO. Is the MV a "fast" or "complete" refresh? If it's "fast" it should not matter too much, depending on the amount of updates. But I bet it's "complete".
How long does the refresh take? That should be < 2h, right? So your query will not get substantially slower when using the source of the MV directly.
5h is massive. I bet that can be done better. If not (wanna bet ? ;-) ) how long does the query take without MV refresh (for a test)? Then you can put the MV data in a temp table and use that for a start.
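To settle the fast-versus-complete question, the data dictionary can be queried directly (a sketch; MY_MV is a placeholder for the actual materialized view name, and this assumes the MV is in your own schema):

```sql
-- REFRESH_METHOD shows how the MV is defined to refresh;
-- LAST_REFRESH_TYPE shows what the most recent refresh actually did
select mview_name, refresh_method, last_refresh_type, last_refresh_date
from   user_mviews
where  mview_name = 'MY_MV';  -- placeholder name
```

If LAST_REFRESH_TYPE keeps coming back as COMPLETE, the long overlap with the 5hr query is much more likely to be the issue.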
And get your error handler right.
-- andy -
Query taking time in Oracle 10g
Hi,
Recently we had a database upgrade from 9.2.0.8 to 10.2.0.4. We use HP-UX B11.23 as the OS. The problem is we have a query which used to take 3 mins in the 9i database, but it returns no output in the 10g database even after running for 8 hours, after which we need to kill it. The query is,
SELECT DPPB.CO_CD, DPPB.PRC_BOOK_CD,NVL(PB.CO_PRC_BOOK_CD,'NULL') ,
NVL(BP.BASE_PROD_CD,'NULL'),NVL(FG.FG_CD,'NULL'),DPPB.EFFTV_STRT_DT,
DPPB.EFFTV_END_DT,PRC_BOOK_AMT, PRC_LST_RPT_IND ,
SYSDATE + (RANK () OVER (PARTITION BY PROD_PRC_BOOK_CD ORDER BY DPPB.EFFTV_STRT_DT)/(24*60*60)) "RANK",
SYSDATE FROM
DIM_PROD_PRC_BOOK DPPB,dim_prod FG,dim_prod BP,dim_prc_book PB
WHERE
DPPB.BASE_PROD_OID =BP.BASE_PROD_OID and bp.end_date>sysdate and bp.be_id=bp.base_prod_oid AND
FG.FG_OID=DPPB.FG_OID and fg.end_date>sysdate and fg.be_id=fg.fg_oid
AND DPPB.PRC_BOOK_OID=PB.prc_book_oid and pb.end_date>sysdate and pb.be_id=pb.PRC_BOOK_OID
AND DPPB.EFFTV_END_DT > ADD_MONTHS(TRUNC(SYSDATE), -15)
AND DPPB.CURR_IND='Y'
AND
PROD_PRC_BOOK_CD ||'-'||TO_CHAR(DPPB.END_DATE ,'DD-MM-YYYY hh24:mi:ss')
IN(
SELECT PROD_PRC_BOOK_CD ||'-'||TO_CHAR(MAX(DPPB.END_DATE ),'DD-MM-YYYY hh24:mi:ss')
FROM DIM_PROD_PRC_BOOK DPPB WHERE PROD_PRC_BOOK_CD IS NOT NULL GROUP BY PROD_PRC_BOOK_CD ,EFFTV_STRT_DT
)
The explain plan of the query in 9i is,
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
SELECT STATEMENT Optimizer Mode=CHOOSE 1 2964
WINDOW SORT 1 661 2964
HASH JOIN 1 661 2958
TABLE ACCESS BY INDEX ROWID WHSUSR.DIM_PROD 1 73 1
NESTED LOOPS 1 355 290
NESTED LOOPS 1 282 289
HASH JOIN 164 32 K 284
TABLE ACCESS FULL WHSUSR.DIM_PRC_BOOK 1 57 2
TABLE ACCESS FULL WHSUSR.DIM_PROD_PRC_BOOK 6 K 957 K 281
TABLE ACCESS BY INDEX ROWID WHSUSR.DIM_PROD 1 77 1
INDEX RANGE SCAN WHSUSR.XN15_DIM_PROD 3 1
INDEX RANGE SCAN WHSUSR.XN22_DIM_PROD 5 1
VIEW SYS.VW_NSO_1 132 K 38 M 2665
SORT UNIQUE 132 K 6 M 2665
SORT GROUP BY 132 K 6 M 2665
TABLE ACCESS FULL WHSUSR.DIM_PROD_PRC_BOOK 132 K 6 M 281
And the explain plan of the query in the 10g database is,
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
SELECT STATEMENT Optimizer Mode=ALL_ROWS 4 1702
WINDOW SORT 4 1 K 1702
FILTER
TABLE ACCESS BY INDEX ROWID WHSUSR.DIM_PROD 1 73 1
NESTED LOOPS 1 339 899
NESTED LOOPS 14 3 K 898
HASH JOIN 2 K 428 K 805
TABLE ACCESS FULL WHSUSR.DIM_PRC_BOOK 1 53 3
TABLE ACCESS FULL WHSUSR.DIM_PROD_PRC_BOOK 93 K 12 M 801
TABLE ACCESS BY INDEX ROWID WHSUSR.DIM_PROD 1 77 1
INDEX RANGE SCAN WHSUSR.XN15_DIM_PROD 2 1
INDEX RANGE SCAN WHSUSR.XN22_DIM_PROD 5 1
FILTER
HASH GROUP BY 1 K 59 K 802
TABLE ACCESS FULL WHSUSR.DIM_PROD_PRC_BOOK 117 K 5 M 794
Please help in identifying the problem and how to tune it.
user605926 wrote:
Thanks Sir for your immense help. I used the hint /*+ optimizer_features_enable('9.2.0.8') */ and the query took only 2 seconds. I am really delighted.
Sorry for not clicking the 'helpful' button earlier since honestly I did not know about the rules. Going forward I will not forget to do that.
Don't apologise, it wasn't intended as a personal criticism - it's just a footnote I tend to use at present as a general reminder to everyone that feedback is useful.
I have one question. Do I have to use this hint for each and every query that is becoming a headache, or is there any permanent solution to fix all the queries that used to run well on the 9.2.0.8 database? Please suggest.
When doing an upgrade it is always valid (in the short term) to set the optimizer_features_enable parameter to the value of the database you're moving from so that you can get the code improvements (or bug fixes) of the newer software without risking execution plan changes.
After that the ideal is to test the software and identify generic cases where a change like an index definition, or some statistical information, needs to be corrected for a particular reason in classes of queries. Eventually you get down to the point where you have a few awkward queries which the optimizer can't handle, where you need hints. The optimizer_features_enable is very convenient here. In 10g, however, you could then capture the older plan and record it as an SQL Baseline against the unhinted query rather than permanently including hints.
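For reference, the two forms of the parameter discussed above look like this (a sketch; the `...` stands for the rest of whatever statement is being tested):

```sql
-- per statement, as the hint used earlier in this thread:
select /*+ optimizer_features_enable('9.2.0.8') */ ... from ...;

-- or for the whole session, convenient while testing a workload:
alter session set optimizer_features_enable = '9.2.0.8';
```

The session-level form is the usual short-term safety net right after an upgrade, while individual plans are being investigated.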
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
A general reminder about "Forum Etiquette / Reward Points": http://forums.oracle.com/forums/ann.jspa?annID=718
If you never mark your questions as answered people will eventually decide that it's not worth trying to answer you because they will never know whether or not their answer has been of any use, or whether you even bothered to read it.
It is also important to mark answers that you thought helpful - again it lets other people know that you appreciate their help, but it also acts as a pointer for other people when they are researching the same question, moreover it means that when you mark a bad or wrong answer as helpful someone may be prompted to tell you (and the rest of the forum) what's so bad or wrong about the answer you found helpful. -
I am creating a JSP application. I am retrieving some values from my database, Oracle 10g. My query takes time to execute even in iSQL*Plus. I am using Tomcat 5 and Oracle 10gR2. Can anyone please tell me why this happens? My query fetches data from four tables. I don't know exactly why it's taking so much time, say 50 seconds. When I run the application on my localhost it retrieves quickly; when I run it on the server it creates a problem.
Actually it works fine locally; when I deployed it on my client's server it takes time, sometimes at most one minute.
1. Please tell me how I can test my query's performance.
2. Is there any command in Oracle to test it?
3. How many bytes does it take?
Look at this thread...
When your query takes too long ... -
Hi All,
I am trying to run one SELECT statement which uses 6 tables. That query generally takes 25-30 minutes to generate output.
Today it has been running for more than 2 hours. I have checked that there are no locks on those tables and no other process is using them.
What else should I check in order to figure out why my SELECT statement is taking so long?
Any help will be much appreciated.
Thanks!
Please let me know if you still want me to provide all the information mentioned in the link.
Yes, please.
Before you can even start optimizing, it should be clear what parts of the query are running slow.
The links contains the steps to take regarding how to identify the things that make the query run slow.
Ideally you post a trace/tkprof report with wait events; it'll show what the time is being spent on, give an execution plan, and give the database version all at once...
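For reference, one way to produce such a trace/tkprof report is the classic 10046 event (a sketch; the tracefile identifier is an arbitrary tag of your choosing, and tkprof is run on the database server against the generated .trc file):

```sql
alter session set tracefile_identifier = 'slow_select';  -- arbitrary tag
alter session set events '10046 trace name context forever, level 8';
-- ... run the slow SELECT here ...
alter session set events '10046 trace name context off';
-- then, on the server:
--   tkprof <tracefile>.trc report.txt sys=no sort=prsela,exeela,fchela
```

Level 8 includes wait events, which is exactly what is needed to see whether the time goes to I/O, CPU, or something else.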
Today it has been running for more than 2 hours. I have checked that there are no locks on those tables and no other process is using them.
Well, something must have changed.
And you must identify what exactly has changed, but it's a broad range you have to check:
- it could be outdated table statistics
- it could be data growth or skew that makes the optimizer choose a wrong plan all of a sudden
- it could be a table that got modified with some bad index
- it could be ...
So, by posting the information in the link, you'll leave less room for guesses from us, so you'll get an explanation that makes sense faster or, while investigating by following the steps in the link, you'll get the explanation yourself. -
Query taking long time for EXTRACTING the data more than 24 hours
Hi ,
The query has been taking more than 24 hours to extract the data. Please find the query and explain plan details below; even though indexes are available on the tables, it goes to a FULL TABLE SCAN. Please suggest.
SQL> explain plan for
select a.account_id, round(a.account_balance,2) account_balance,
nvl(ah.invoice_id,ah.adjustment_id) transaction_id,
to_char(ah.effective_start_date,'DD-MON-YYYY') transaction_date,
to_char(nvl(i.payment_due_date,
to_date('30-12-9999','dd-mm-yyyy')),'DD-MON-YYYY')
due_date, ah.current_balance-ah.previous_balance amount,
decode(ah.invoice_id,null,'A','I') transaction_type
from account a, account_history ah, invoice i
where a.account_id=ah.account_id
and a.account_type_id=1000002
and round(a.account_balance,2) > 0
and (ah.invoice_id is not null or ah.adjustment_id is not null)
and ah.CURRENT_BALANCE > ah.previous_balance
and ah.invoice_id=i.invoice_id(+)
AND a.account_balance > 0
order by a.account_id, ah.effective_start_date desc;
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
| 0 | SELECT STATEMENT | | 544K| 30M| | 693K (20)|
| 1 | SORT ORDER BY | | 544K| 30M| 75M| 693K (20)|
|* 2 | HASH JOIN | | 544K| 30M| | 689K (20)|
|* 3 | TABLE ACCESS FULL | ACCOUNT | 20080 | 294K| | 6220 (18)|
|* 4 | HASH JOIN OUTER | | 131M| 5532M| 5155M| 678K (20)|
|* 5 | TABLE ACCESS FULL| ACCOUNT_HISTORY | 131M| 3646M| | 197K (25)|
| 6 | TABLE ACCESS FULL| INVOICE | 262M| 3758M| | 306K (18)|
Predicate Information (identified by operation id):
2 - access("A"."ACCOUNT_ID"="AH"."ACCOUNT_ID")
3 - filter("A"."ACCOUNT_TYPE_ID"=1000002 AND "A"."ACCOUNT_BALANCE">0 AND
ROUND("A"."ACCOUNT_BALANCE",2)>0)
4 - access("AH"."INVOICE_ID"="I"."INVOICE_ID"(+))
5 - filter("AH"."CURRENT_BALANCE">"AH"."PREVIOUS_BALANCE" AND ("AH"."INVOICE_ID"
IS NOT NULL OR "AH"."ADJUSTMENT_ID" IS NOT NULL))
22 rows selected.
Index Details:
SQL> select INDEX_OWNER,INDEX_NAME,COLUMN_NAME,TABLE_NAME from dba_ind_columns where
2 table_name in ('INVOICE','ACCOUNT','ACCOUNT_HISTORY') order by 4;
INDEX_OWNER INDEX_NAME COLUMN_NAME TABLE_NAME
OPS$SVM_SRV4 P_ACCOUNT ACCOUNT_ID ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT_NAME ACCOUNT_NAME ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT CUSTOMER_NODE_ID ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT ACCOUNT_TYPE_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_ACCOUNT_TYPE ACCOUNT_TYPE_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_INVOICE INVOICE_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_PREVIOUS_INVOICE PREVIOUS_INVOICE_ID ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_NAME ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_ID ACCOUNT
OPS$SVM_SRV4 I_LAST_MODIFIED_ACCOUNT LAST_MODIFIED ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_INVOICE_ACCOUNT INVOICE_ACCOUNT_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT ACCOUNT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT SEQNR ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_INVOICE INVOICE_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV INVOICE_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA CURRENT_BALANCE ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA INVOICE_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ADJUSTMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ACCOUNT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_LMOD LAST_MODIFIED ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV ADJUSTMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_PAYMENT PAYMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADJUSTMENT ADJUSTMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_APPLIED_DT APPLIED_DATE ACCOUNT_HISTORY
OPS$SVM_SRV4 P_INVOICE INVOICE_ID INVOICE
OPS$SVM_SRV4 U_INVOICE CUSTOMER_INVOICE_STR INVOICE
OPS$SVM_SRV4 I_LAST_MODIFIED_INVOICE LAST_MODIFIED INVOICE
OPS$SVM_SRV4 U_INVOICE_ACCOUNT ACCOUNT_ID INVOICE
OPS$SVM_SRV4 U_INVOICE_ACCOUNT BILL_RUN_ID INVOICE
OPS$SVM_SRV4 I_INVOICE_BILL_RUN BILL_RUN_ID INVOICE
OPS$SVM_SRV4 I_INVOICE_INVOICE_TYPE INVOICE_TYPE_ID INVOICE
OPS$SVM_SRV4 I_INVOICE_CUSTOMER_NODE CUSTOMER_NODE_ID INVOICE
32 rows selected.
Regards,
Bathula
Oracle-DBA
I have some suggestions. But first, you realize that you have some redundant indexes, right? You have an index on account(account_name) and also account(account_name, account_id), and also account_history(invoice_id) and account_history(invoice_id, adjustment_id). No matter, I will suggest some new composite indexes.
Also, you do not need two lines for these conditions:
and round(a.account_balance, 2) > 0
AND a.account_balance > 0
You can just use: and a.account_balance >= 0.005
So the formatted query is
select a.account_id,
round(a.account_balance, 2) account_balance,
nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
to_char(ah.effective_start_date, 'DD-MON-YYYY') transaction_date,
to_char(nvl(i.payment_due_date, to_date('30-12-9999', 'dd-mm-yyyy')),
'DD-MON-YYYY') due_date,
ah.current_balance - ah.previous_balance amount,
decode(ah.invoice_id, null, 'A', 'I') transaction_type
from account a, account_history ah, invoice i
where a.account_id = ah.account_id
and a.account_type_id = 1000002
and (ah.invoice_id is not null or ah.adjustment_id is not null)
and ah.CURRENT_BALANCE > ah.previous_balance
and ah.invoice_id = i.invoice_id(+)
AND a.account_balance >= .005
order by a.account_id, ah.effective_start_date desc;
You will probably want to select:
1. From ACCOUNT first (your smaller table), for which you supply a literal on account_type_id. That should limit the accounts retrieved from ACCOUNT_HISTORY.
2. From ACCOUNT_HISTORY next. We want to limit the rows from this table as much as possible because of the outer join.
3. From INVOICE last. It seems to be the least restricted, it is the biggest, and it sits on the outer-join side, so it has to supply (or null-fill) a match for every row that comes back from ACCOUNT_HISTORY.
Try the query above after creating the following composite indexes. The order of the columns is important:
create index account_composite_i on account(account_type_id, account_balance, account_id);
create index acct_history_comp_i on account_history(account_id, invoice_id, adjustment_id, current_balance, previous_balance, effective_start_date);
create index invoice_composite_i on invoice(invoice_id, payment_due_date);
All the columns used in the where clause will be indexed, in a logical order suited to the needs of the query. Plus each selected column is indexed as well, so we should not need to touch the tables at all to satisfy the query.
Try the query after creating these indexes.
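The covering effect is easy to demonstrate outside Oracle too. Here is a sketch in Python against an in-memory SQLite copy of ACCOUNT (column names taken from the query above, everything else invented); SQLite's plan output explicitly says COVERING INDEX when the query can be answered from the index alone:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE account (
    account_id INTEGER, account_type_id INTEGER,
    account_balance REAL, account_name TEXT)""")
# Same column order as the suggested Oracle index
cur.execute("""CREATE INDEX account_composite_i
               ON account (account_type_id, account_balance, account_id)""")

plan = cur.execute("""EXPLAIN QUERY PLAN
    SELECT account_id FROM account
    WHERE account_type_id = 1000002 AND account_balance >= 0.005""").fetchall()
detail = " ".join(str(row[-1]) for row in plan)
print(detail)  # the plan should mention a COVERING INDEX search
assert "COVERING INDEX" in detail
```

The same idea carries over to Oracle: an INDEX RANGE SCAN with no TABLE ACCESS BY ROWID step in the plan.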
A final suggestion is to try larger sort and hash area sizes and a manual workarea policy:
alter session set workarea_size_policy = manual;
alter session set sort_area_size = 2147483647;
alter session set hash_area_size = 2147483647; -
Using Materialized view in a query .. query is taking time????
Hi I have a query :-
SELECT rownum as id, u.last_name, u.first_name, u.phone phone, u.empid, u.supervisor_id
FROM emp_view u -- using view
CONNECT BY PRIOR u.empid = u.supervisor_id
START WITH u.sbcuid = 'ph2755';
here emp_view is a view .
------ The above query is taking 3 sec to execute.
Then I created a materialized view emp_mv whose defining query is the same as the emp_view query.
After this I executed the following SQL:
SELECT rownum as id, u.last_name, u.first_name, u.phone phone, u.empid, u.supervisor_id
FROM emp_mv u -- using materialized view
CONNECT BY PRIOR u.empid = u.supervisor_id
START WITH u.sbcuid = 'ph2755';
this query is taking 15 sec to execute..... :(
can anyone please tell me why the MV query is taking time????
Hi,
In your first case you query a view, meaning that you query the underlying tables. These probably have indexes and up-to-date statistics.
In your second case you query a materialized view, meaning that you query the underlying base table of that mview.
That base table probably does not have the same indexes to support the query.
But of course, I'm just guessing based on the little information provided.
If you want to take this further, please search for "When your query takes too long" and "How to post a tuning request".
These two threads hold valuable information, not only on how to ask this kind of question, but also on how to start solving it on your own.
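To make the index point concrete: the CONNECT BY walk needs a fast lookup on sbcuid for the START WITH row and on supervisor_id at each level, so the mview's base table would want indexes on both. A portable sketch (SQLite via Python, invented sample rows, with the CONNECT BY rewritten as a recursive CTE since SQLite has no CONNECT BY):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE emp_mv (
    empid INTEGER, supervisor_id INTEGER, sbcuid TEXT,
    last_name TEXT, first_name TEXT, phone TEXT)""")
# The two indexes the mview's base table would need for this access pattern
cur.execute("CREATE INDEX emp_mv_sbcuid_i ON emp_mv (sbcuid)")
cur.execute("CREATE INDEX emp_mv_sup_i ON emp_mv (supervisor_id)")
cur.executemany("INSERT INTO emp_mv VALUES (?,?,?,?,?,?)", [
    (1, None, 'ph2755', 'Boss',   'Big', 'x100'),
    (2, 1,    'aa1111', 'Worker', 'One', 'x101'),
    (3, 1,    'bb2222', 'Worker', 'Two', 'x102'),
    (4, 2,    'cc3333', 'Intern', 'New', 'x103'),
])
# CONNECT BY PRIOR empid = supervisor_id START WITH sbcuid = 'ph2755',
# expressed as a recursive CTE
tree = cur.execute("""
    WITH RECURSIVE subordinates AS (
        SELECT empid, supervisor_id, last_name
        FROM emp_mv WHERE sbcuid = 'ph2755'
        UNION ALL
        SELECT e.empid, e.supervisor_id, e.last_name
        FROM emp_mv e JOIN subordinates s ON e.supervisor_id = s.empid
    )
    SELECT empid FROM subordinates ORDER BY empid""").fetchall()
print(tree)
assert [r[0] for r in tree] == [1, 2, 3, 4]
```

Without the index on sbcuid every START WITH probe is a full scan of the mview, which alone can explain the 3 s vs 15 s gap.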
Regards
Peter -
Query taking much time Oracle 9i
Hi,
**How can we tune the sql query in oracle 9i.**
The select query is taking more than 1 hour and 30 min to return the result.
Due to this,
we have created a materialized view on the select query and also submitted a job in dba_jobs to refresh the materialized view daily.
When we try to retrieve the data from the materialized view we get the result very quickly.
But the refresh job assigned in dba_jobs is taking as much time to complete as the query itself used to take.
Since the job takes so much time in the test database, we feel it may cause load if we move the same scripts to the production environment.
Please suggest how to resolve the issue and also how to tune the sql.
With Regards,
Srinivas
Edited by: Srinivas.. on Dec 17, 2009 6:29 AM
Hi Srinivas;
Please follow this search and see if it is helpful.
Regards
Helios -
How to know which sql query is taking time for concurrent program
Hi sir,
I am running a concurrent program that is taking a long time to execute. I want to know which sql query is causing the performance problem.
Thanks,
Sreekanth
Hi,
My Learning: Diagnosing Oracle Applications Concurrent Programmes - 11i/R12
How to run a Trace for a Concurrent Program? (Doc ID 415640.1)
FAQ: Common Tracing Techniques in Oracle E-Business Applications 11i and R12 (Doc ID 296559.1)
How To Get Level 12 Trace And FND Debug File For Concurrent Programs (Doc ID 726039.1)
How To Trace a Concurrent Request And Generate TKPROF File (Doc ID 453527.1)
Regards
Yoonas -
Sql Query taking very long time to complete
Hi All,
DB:oracle 9i R2
OS:sun solaris 8
Below is the Sql Query taking very long time to complete
Could any one help me out regarding this.
SELECT MAX (md1.ID) ID, md1.request_id, md1.jlpp_transaction_id,
md1.transaction_version
FROM transaction_data_arc md1
WHERE md1.transaction_name = :b2
AND md1.transaction_type = 'REQUEST'
AND md1.message_type_code = :b1
AND NOT EXISTS (
SELECT NULL
FROM transaction_data_arc tdar2
WHERE tdar2.request_id = md1.request_id
AND tdar2.jlpp_transaction_id != md1.jlpp_transaction_id
AND tdar2.ID > md1.ID)
GROUP BY md1.request_id,
md1.jlpp_transaction_id,
md1.transaction_version
Any alternate query to get the same results?
Kindly let me know if anyone knows.
regards,
kk.
Edited by: kk001 on Apr 27, 2011 11:23 AM
Dear
/* Formatted on 2011/04/27 08:32 (Formatter Plus v4.8.8) */
SELECT MAX (md1.ID) ID, md1.request_id, md1.jlpp_transaction_id,
md1.transaction_version
FROM transaction_data_arc md1
WHERE md1.transaction_name = :b2
AND md1.transaction_type = 'REQUEST'
AND md1.message_type_code = :b1
AND NOT EXISTS (
SELECT NULL
FROM transaction_data_arc tdar2
WHERE tdar2.request_id = md1.request_id
AND tdar2.jlpp_transaction_id != md1.jlpp_transaction_id
AND tdar2.ID > md1.ID)
GROUP BY md1.request_id
,md1.jlpp_transaction_id
,md1.transaction_version
Could you please post here:
(a) the available indexes on transaction_data_arc table
(b) the description of transaction_data_arc table
(c) and the formatted explain plan you will get after executing the query and issuing:
select * from table(dbms_xplan.display_cursor);
Hope this helps
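As for an alternate query: assuming the filter columns (transaction_name, transaction_type, message_type_code) and transaction_version are constant within a request_id, the NOT EXISTS self-join can be replaced by one ROW_NUMBER() pass, avoiding the second visit to the table. A sketch against a throwaway SQLite copy with invented sample rows, checking the two forms agree:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE transaction_data_arc (
    id INTEGER, request_id INTEGER, jlpp_transaction_id TEXT,
    transaction_version INTEGER, transaction_name TEXT,
    transaction_type TEXT, message_type_code TEXT)""")
cur.executemany("INSERT INTO transaction_data_arc VALUES (?,?,?,?,?,?,?)", [
    # request 1: transaction changes from A to B; only the trailing B row
    # survives the NOT EXISTS
    (1, 1, 'A', 1, 'N1', 'REQUEST', 'M1'),
    (2, 1, 'A', 1, 'N1', 'REQUEST', 'M1'),
    (3, 1, 'B', 1, 'N1', 'REQUEST', 'M1'),
    # request 2: single transaction; both rows survive, MAX(id) collapses them
    (4, 2, 'X', 2, 'N1', 'REQUEST', 'M1'),
    (5, 2, 'X', 2, 'N1', 'REQUEST', 'M1'),
])
filters = ("transaction_name = 'N1' AND transaction_type = 'REQUEST' "
           "AND message_type_code = 'M1'")

original = f"""
SELECT MAX(md1.id) id, md1.request_id, md1.jlpp_transaction_id,
       md1.transaction_version
FROM transaction_data_arc md1
WHERE {filters}
  AND NOT EXISTS (SELECT NULL FROM transaction_data_arc tdar2
                  WHERE tdar2.request_id = md1.request_id
                    AND tdar2.jlpp_transaction_id != md1.jlpp_transaction_id
                    AND tdar2.id > md1.id)
GROUP BY md1.request_id, md1.jlpp_transaction_id, md1.transaction_version
ORDER BY id"""

rewrite = f"""
SELECT id, request_id, jlpp_transaction_id, transaction_version
FROM (SELECT t.*, ROW_NUMBER() OVER
             (PARTITION BY request_id ORDER BY id DESC) rn
      FROM transaction_data_arc t)
WHERE rn = 1 AND {filters}
ORDER BY id"""

a = cur.execute(original).fetchall()
b = cur.execute(rewrite).fetchall()
print(a)
assert a == b == [(3, 1, 'B', 1), (5, 2, 'X', 2)]
```

In Oracle the same shape would be ROW_NUMBER() OVER (PARTITION BY request_id ORDER BY id DESC) in an inline view; whether it is actually faster depends on the indexes, which is why the plan and table description were requested above.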
Mohamed Houri -
I have nearly 2 crore (20 million) records at present in my table.
I want to get the avg of price from my table.
I put the query like:
select avg(sum(price)) from table group by product_id
The query is taking more than 5 mins to execute...
Is there any other way I can simplify my query?
Warren:
Your first query gives:
SQL> SELECT AVG(SUM(price)) sum_price
2 FROM t;
SELECT AVG(SUM(price)) sum_price
ERROR at line 1:
ORA-00978: nested group function without GROUP BY
and your second gives:
SQL> SELECT product_id, AVG(SUM(price))
2 FROM t
3 GROUP BY product_id;
SELECT product_id, AVG(SUM(price))
ERROR at line 1:
ORA-00937: not a single-group group function
Symon:
What exactly are you trying to accomplish? Your query as posted will calculate the average of the sums of the prices for all product_id values. That is, it is equivalent to:
SELECT AVG(sum_price)
FROM (SELECT SUM(price) sum_price
FROM t
GROUP BY product_id)
So given:
SQL> SELECT * FROM t;
PRODUCT_ID PRICE
PROD1 5
PROD1 7
PROD1 10
PROD2 3
PROD2 4
PROD2 5
The sum of the prices per product_id is:
SQL> SELECT SUM(price) sum_price
2 FROM t
3 GROUP BY product_id;
SUM_PRICE
22
12
and the average of that is (22 + 12) / 2 = 17. Is that what you are looking for? If so, then the equivalent query I posted above is at least clearer, but may not be any faster. If this is not what you are looking for, then some sample data and expected results may help. Although, it appears that you need to full scan the table in either case, so that may be as good as it gets.
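For the record, the inline-view form can be checked against the sample rows above in a throwaway SQLite session from Python (SQLite, like Oracle without a GROUP BY, rejects the nested AVG(SUM(...)) form outright, so only the rewritten form is run):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (product_id TEXT, price INTEGER)")
cur.executemany("INSERT INTO t VALUES (?, ?)",
                [('PROD1', 5), ('PROD1', 7), ('PROD1', 10),
                 ('PROD2', 3), ('PROD2', 4), ('PROD2', 5)])

# Average of the per-product sums: (22 + 12) / 2
avg_of_sums = cur.execute(
    "SELECT AVG(s) FROM (SELECT SUM(price) s FROM t GROUP BY product_id)"
).fetchone()[0]
print(avg_of_sums)
assert avg_of_sums == 17.0
```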
John