Issue with RANGE windows in analytic functions
Hi,
I have a table abc described as
SQL> desc abc;
Name Null? Type
ID VARCHAR2(10)
NAME VARCHAR2(20)
VALUE NUMBER(10)
I am quite confused by the behaviour of the SQL query below.
select id, name, value, SUM(value) OVER (PARTITION BY id ORDER BY value RANGE BETWEEN 2 PRECEDING AND 2 FOLLOWING) a from abc;
The output obtained is
ID NAME VALUE A
BEN a1 1 4
BEN c3 3 13
BEN d4 4 12
BEN e5 5 12
BEN b2 17 17
TAB a1 11 11
TAB c3 20 20
TAB b2 100 100
I am getting the values correctly for the first partition ('BEN'). But could you please explain the values that appear in field A for the second partition ('TAB')?
It's not giving the sum of the range; instead it's simply giving the field value.
Thanks in advance.
regards,
Sreenath
Hi, Sreenath,
858163 wrote:
... select id, name, value, SUM(value) OVER (PARTITION BY id ORDER BY value RANGE BETWEEN 2 PRECEDING AND 2 FOLLOWING) a from abc;
The output obtained is
ID NAME VALUE A
BEN a1 1 4
BEN c3 3 13
BEN d4 4 12
BEN e5 5 12
BEN b2 17 17
TAB a1 11 11
TAB c3 20 20
TAB b2 100 100
I am getting the values correctly for the first partition ('BEN'). But could you please explain the values that appear in field A for the second partition ('TAB')?
It's not giving the sum of the range, instead it's simply giving the field value.
They happen to be the same in those cases.
Look at the row where value=11. The window is
RANGE BETWEEN 2 PRECEDING AND 2 FOLLOWING, which means
value BETWEEN 9 and 13.
The row with value=11 itself is the only row in that range.
The same goes for the row with value=20; there is no other row with value BETWEEN 18 and 22.
The same goes for the row with value=100; there is no other row with value BETWEEN 98 and 102.
What results did you want?
Perhaps you meant ROWS BETWEEN 2 PRECEDING AND 2 FOLLOWING.
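To see the difference numerically, here is a small Python sketch (an illustration, not Oracle code) that computes both window definitions over the sample partitions above:

```python
# Simulate SUM(value) OVER (... RANGE BETWEEN 2 PRECEDING AND 2 FOLLOWING)
# versus ROWS BETWEEN 2 PRECEDING AND 2 FOLLOWING for one partition.
values = [11, 20, 100]  # the 'TAB' partition, already ordered by value

def range_window_sum(vals, delta=2):
    # RANGE: the window holds rows whose VALUE lies within +/- delta of the current VALUE
    return [sum(w for w in vals if v - delta <= w <= v + delta) for v in vals]

def rows_window_sum(vals, n=2):
    # ROWS: the window holds the n physical rows before and after the current row
    return [sum(vals[max(0, i - n):i + n + 1]) for i in range(len(vals))]

print(range_window_sum(values))  # [11, 20, 100] -- each row is alone in its range
print(rows_window_sum(values))   # [131, 131, 131] -- all three rows fall in every window
```

Running `range_window_sum([1, 3, 4, 5, 17])` on the 'BEN' partition reproduces the 4, 13, 12, 12, 17 column from the original output, which confirms that the query is behaving as documented.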
Edited by: Frank Kulash on May 11, 2011 10:12 AM
I hope this answers your question.
If not, post a little sample data (CREATE TABLE and INSERT statements, relevant columns only), and also post the results you want from that data (formatted, between \ tags).
Explain, using specific examples, how you get those results from that data.
Always say which version of Oracle you're using.
Similar Messages
-
Custom window in analytic function
SELECT device_id,
case_id,
value,
fo_value,
nomination,
symbol,
cur_id,
time,
data_type,
COALESCE (
value,
LAST_VALUE (value IGNORE NULLS)
OVER (PARTITION BY device_id, cur_id, nomination
ORDER BY time
ROWS UNBOUNDED PRECEDING)
+ SUM (fo_value)
OVER (PARTITION BY device_id, cur_id, nomination
ORDER BY time
ROWS UNBOUNDED PRECEDING)
) AS val
FROM tab
DEVICE_ID CASE_ID VALUE FO_VALUE NOMINATION SYMBOL CUR_ID TIME DATA_TYPE VAL
8744 4901 -23 20 PLN 1 2009-08-25 11:00:00.000000 F -48
8744 4901 -2 20 PLN 1 2009-08-25 07:00:00.000000 F -25
8744 4901 100 -13 20 PLN 1 2009-08-24 23:00:00.000000 I 100
8744 4901 -6 20 PLN 1 2009-08-24 17:00:00.000000 F 45
8744 4901 -5 20 PLN 1 2009-08-24 14:00:00.000000 F 51
8744 4901 -5 20 PLN 1 2009-08-24 11:00:00.000000 F 56
The VAL column is defined as the last VALUE from which the cumulative sum of the FO_VALUE column was subtracted.
Every record with data_type = 'I' has a number in the VALUE column. What I need is to define a window in the expression so that the sum of FO_VALUE adds only the values since the last non-NULL VALUE; currently the query sums all the values in the partition.
I need something like a window from the last row with a non-NULL VALUE to the current row.
Desired output should look like this:
DEVICE_ID CASE_ID VALUE FO_VALUE NOMINATION SYMBOL CUR_ID TIME DATA_TYPE VAL
8744 4901 -23 20 PLN 1 2009-08-25 11:00:00.000000 F 62
8744 4901 -2 20 PLN 1 2009-08-25 07:00:00.000000 F 85
8744 4901 100 -13 20 PLN 1 2009-08-24 23:00:00.000000 I 87
8744 4901 -6 20 PLN 1 2009-08-24 17:00:00.000000 F 45
8744 4901 -5 20 PLN 1 2009-08-24 14:00:00.000000 F 51
8744 4901 -5 20 PLN 1 2009-08-24 11:00:00.000000 F 56
I'd appreciate any suggestions.
Hi,
Sorry, I don't understand the problem.
In particular, I don't see how you get the vals (56, 51, 45) for the earlier times. It looks like you should get NULL for val in the earliest 3 rows. Can you explain how you get those numbers?
Is it possible for a partition (device_id, cur_id, nomination) to have 2 or more rows where value is not NULL? If so, please post an example of the results you would want.
Rather than adjusting ROWS ... to get the results you want, it might be easier to adjust the PARTITION BY ...
For example, this gets the results you want on and after the row where value is not NULL:
WITH got_grp_num
AS
(
SELECT device_id, case_id, value, fo_value, nomination, symbol, cur_id, time, data_type,
COUNT (value) OVER ( PARTITION BY device_id, cur_id, nomination
ORDER BY time
) AS grp_num
FROM tab
)
SELECT device_id, case_id, value, fo_value, nomination, symbol, cur_id, time, data_type,
LAST_VALUE (value IGNORE NULLS) OVER ( PARTITION BY device_id, cur_id, nomination, grp_num
ORDER BY time
)
+ SUM (fo_value) OVER ( PARTITION BY device_id, cur_id, nomination, grp_num
ORDER BY time
) AS val
FROM got_grp_num
ORDER BY device_id, cur_id, nomination,
time DESC
;
As you can see, there is a new column, grp_num, in the PARTITION BY clauses that compute val. grp_num gets a new value whenever there is a non-NULL value.
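The grouping trick can be checked outside the database. This Python sketch (an illustration with made-up rows, not the poster's disputed desired numbers) shows the mechanism: a running count of non-NULL values gives each non-NULL VALUE its own group, so the running sum of fo_value restarts there:

```python
# Running count of non-null values partitions rows into groups that
# restart at every non-null VALUE (the grp_num trick from the query above).
def group_nums(values):
    grp, out = 0, []
    for v in values:
        if v is not None:
            grp += 1      # a non-null value opens a new group
        out.append(grp)
    return out

def val_column(values, fo_values):
    # within each group: last non-null value + running sum of fo_value
    out, last, running, prev_grp = [], None, 0, None
    for v, fo, g in zip(values, fo_values, group_nums(values)):
        if g != prev_grp:
            running, prev_grp = 0, g   # new group: restart the running sum
        if v is not None:
            last = v
        running += fo
        out.append(None if last is None else last + running)
    return out

# rows ordered by time ascending; VALUE is non-null only on the 'I' row
print(val_column([None, None, None, 100, None, None],
                 [20, 20, 20, -13, 20, 20]))
# -> [None, None, None, 87, 107, 127]
```

Rows before the first non-NULL VALUE come out as NULL, which matches the question Frank raises about the earliest three rows.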
Edited by: Frank Kulash on Aug 28, 2009 1:27 PM -
Return multiple columns from an analytic function with a window
Hello,
Is it possible to obtain multiple columns from an analytic function with a window?
I have a table with 4 columns, an id, a test id, a date, and the result of a test. I'm using an analytic function to obtain, for each row, the current test value, and the maximum test value in the next 2 days like so:
select
id,
test_id,
date,
result,
MAX ( result ) over ( partition BY id, test_id order by date RANGE BETWEEN CURRENT ROW AND INTERVAL '2' DAY FOLLOWING ) AS max_result_next_two_day
from table
This is working fine, but I would like to also obtain the date when the max result occurs. I can see that this would be possible using a self join, but I'd like to know if there is a better way? I cannot use the FIRST_VALUE aggregate function and order by result, because the window function needs to be ordered by the date.
It would be a great help if you could provide any pointers/suggestions.
Thanks,
Dan
http://danieljamesscott.org
Assuming RESULT is a positive integer with a maximum width of, say, 10,
and assuming date has no time component:
select
id
,test_id
,date
,result
,to_number(substr(max_result_with_date,1,10)) as max_result_next_two_day
,to_date(substr(max_result_with_date,11),'YYYYMMDD') as date_where_max_result_occurs
from (select
id
,test_id
,date
,result
,MAX(lpad(to_char(result),10,'0')||to_char(date,'YYYYMMDD'))
over (partition BY id, test_id
order by date
RANGE BETWEEN CURRENT ROW AND INTERVAL '2' DAY FOLLOWING )
AS max_result_with_date
from table) -
Using analytical function to calculate concurrency between date range
Folks,
I'm trying to use analytical functions to come up with a query that gives me the
concurrency of jobs executing between a date range.
For example:
JOB100 - started at 9AM - stopped at 11AM
JOB200 - started at 10AM - stopped at 3PM
JOB300 - started at 12PM - stopped at 2PM
The query would tell me that JOB100 ran with a concurrency of 2, because JOB100 and JOB200 were running during the same time. JOB200 ran with a concurrency
of 3 because all jobs ran within its start and stop time. The output would look like this.
JOB START STOP CONCURRENCY
=== ==== ==== =========
100 9AM 11AM 2
200 10AM 3PM 3
300 12PM 2PM 2
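Counting the overlaps directly is a useful cross-check for any SQL solution. Here is a Python sketch under the intervals described above (inclusive overlap; each job is counted against every job, itself included):

```python
# Concurrency of a job = number of jobs (itself included) whose
# [start, stop] interval overlaps the job's own interval.
def concurrency(jobs):
    # jobs: {job_id: (start, stop)} with comparable start/stop values
    return {
        jid: sum(1 for s2, e2 in jobs.values() if s2 <= e and e2 >= s)
        for jid, (s, e) in jobs.items()
    }

jobs = {100: (9, 11), 200: (10, 15), 300: (12, 14)}  # hours on a 24h clock
print(concurrency(jobs))  # {100: 2, 200: 3, 300: 2}
```

Two intervals [s1, e1] and [s2, e2] overlap exactly when s2 <= e1 and e2 >= s1, which is the same pair of predicates the MODEL rule below splits into two count(*) cell references.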
I've been looking at this post, and this one is very similar...
Analytic functions using window date range
Here is the sample data..
CREATE TABLE TEST_JOB
( jobid NUMBER,
created_time DATE,
start_time DATE,
stop_time DATE
);
insert into TEST_JOB values (100, sysdate -1, to_date('05/04/08 09:00:00','MM/DD/YY hh24:mi:ss'), to_date('05/04/08 11:00:00','MM/DD/YY hh24:mi:ss'));
insert into TEST_JOB values (200, sysdate -1, to_date('05/04/08 10:00:00','MM/DD/YY hh24:mi:ss'), to_date('05/04/08 13:00:00','MM/DD/YY hh24:mi:ss'));
insert into TEST_JOB values (300, sysdate -1, to_date('05/04/08 12:00:00','MM/DD/YY hh24:mi:ss'), to_date('05/04/08 14:00:00','MM/DD/YY hh24:mi:ss'));
select * from test_job;
JOBID|CREATED_TIME |START_TIME |STOP_TIME
----------|--------------|--------------|--------------
100|05/04/08 09:28|05/04/08 09:00|05/04/08 11:00
200|05/04/08 09:28|05/04/08 10:00|05/04/08 13:00
300|05/04/08 09:28|05/04/08 12:00|05/04/08 14:00
Any help with this query would be greatly appreciated.
thanks.
-peter
After some checking, the model rule wasn't working exactly as expected.
I believe it's working right now. I'm posting a self-contained example for completeness' sake. I use two functions to convert back and forth between epoch Unix timestamps, so
I'll post them here as well.
Like I said I think this works okay, but any feedback is always appreciated.
-peter
CREATE OR REPLACE FUNCTION date_to_epoch(p_dateval IN DATE)
RETURN NUMBER
AS
BEGIN
return (p_dateval - to_date('01/01/1970','MM/DD/YYYY')) * (24 * 3600);
END;
CREATE OR REPLACE FUNCTION epoch_to_date (p_epochval IN NUMBER DEFAULT 0)
RETURN DATE
AS
BEGIN
return to_date('01/01/1970','MM/DD/YYYY') + (( p_epochval) / (24 * 3600));
END;
DROP TABLE TEST_MODEL3 purge;
CREATE TABLE TEST_MODEL3
( jobid NUMBER,
start_time NUMBER,
end_time NUMBER);
insert into TEST_MODEL3
VALUES (300,date_to_epoch(to_date('05/07/2008 10:00','MM/DD/YYYY hh24:mi')),
date_to_epoch(to_date('05/07/2008 19:00','MM/DD/YYYY hh24:mi')));
insert into TEST_MODEL3
VALUES (200,date_to_epoch(to_date('05/07/2008 09:00','MM/DD/YYYY hh24:mi')),
date_to_epoch(to_date('05/07/2008 12:00','MM/DD/YYYY hh24:mi')));
insert into TEST_MODEL3
VALUES (400,date_to_epoch(to_date('05/07/2008 10:00','MM/DD/YYYY hh24:mi')),
date_to_epoch(to_date('05/07/2008 14:00','MM/DD/YYYY hh24:mi')));
insert into TEST_MODEL3
VALUES (500,date_to_epoch(to_date('05/07/2008 11:00','MM/DD/YYYY hh24:mi')),
date_to_epoch(to_date('05/07/2008 16:00','MM/DD/YYYY hh24:mi')));
insert into TEST_MODEL3
VALUES (600,date_to_epoch(to_date('05/07/2008 15:00','MM/DD/YYYY hh24:mi')),
date_to_epoch(to_date('05/07/2008 22:00','MM/DD/YYYY hh24:mi')));
insert into TEST_MODEL3
VALUES (100,date_to_epoch(to_date('05/07/2008 09:00','MM/DD/YYYY hh24:mi')),
date_to_epoch(to_date('05/07/2008 23:00','MM/DD/YYYY hh24:mi')));
commit;
SELECT jobid,
epoch_to_date(start_time) AS start_time,
epoch_to_date(end_time) AS end_time,
n AS concurrency
FROM TEST_MODEL3
MODEL
DIMENSION BY (start_time,end_time)
MEASURES (jobid,0 n)
(n[any,any] =
count(*)[start_time <= cv(start_time), end_time >= cv(start_time)] +
count(*)[start_time > cv(start_time) and start_time <= cv(end_time), end_time >= cv(start_time)]
)
ORDER BY start_time;
The results look like this:
JOBID|START_TIME|END_TIME |CONCURRENCY
----------|---------------|--------------|-------------------
100|05/07/08 09:00|05/07/08 23:00| 6
200|05/07/08 09:00|05/07/08 12:00| 5
300|05/07/08 10:00|05/07/08 19:00| 6
400|05/07/08 10:00|05/07/08 14:00| 5
500|05/07/08 11:00|05/07/08 16:00| 6
600|05/07/08 15:00|05/07/08 22:00| 4 -
Discoverer Analytic Function windowing - errors and bad aggregation
I posted this first on Database General forum, but then I found this was the place to put it:
Hi, I'm using this kind of windowing function:
SUM(Receitas Especificas) OVER(PARTITION BY Tipo Periodo,Calculado,"Empresa Descrição (Operador)","Empresa Descrição" ORDER BY Ini Periodo RANGE BETWEEN INTERVAL '12' MONTH PRECEDING AND INTERVAL '12' MONTH PRECEDING )
If I use the "Receitas Especificas SUM" instead of
"Receitas Especificas" I get the following error running the report:
"an error occurred while attempting to run..."
This is not in accordance to:
http://www.boku.ac.at/oradoc/ias/10g(9.0.4)/bi.904/b10268.pdf
but ok, the version without SUM inside works.
Another problem is the fact that for analytic function with PARTITION BY,
this does not work (shows the cannot aggregate symbol) if we collapse or use "<All>" in page items.
But it works if we remove the item from the PARTITION BY and also remove from workbook.
It's even worse for windowing functions(query above), because the query
only works if we remove the item from the PARTITION BY but we have to show it on the workbook - and this MAKES NO SENSE... :(
Please help.
Unfortunately Discoverer doesn't show (correct) values for analytic functions when selecting "<All>" in a page item. I found out that it does work when you add the analytic function to the db-view instead of adding it to the report as a calculation or as a calculated item on the folder.
The only problem is that you have to name all page items in the PARTITION window, so when adding a page item to the report you have to change the db-view and alter the PARTITION window.
Michael -
Analytic Functions with GROUP-BY Clause?
I'm just getting acquainted with analytical functions. I like them. I'm having a problem, though. I want to sum up the results, but either I'm running into a limitation or I'm writing the SQL wrong. Any hints for me?
Hypothetical table SALES, consisting of DAY_ID, PRODUCT_ID, PURCHASER_ID, and PURCHASE_PRICE, lists all the sales.
Hypothetical Business Question: Product prices can fluctuate over the course of a day. I want to know how much per day I would have made had I sold one each of all my products at their max price for that day. Silly question, I know, but it's the best I could come up with to show the problem.
INSERT INTO SALES VALUES(1,1,1,1.0);
INSERT INTO SALES VALUES(1,1,1,2.0);
INSERT INTO SALES VALUES(1,2,1,3.0);
INSERT INTO SALES VALUES(1,2,1,4.0);
INSERT INTO SALES VALUES(2,1,1,5.0);
INSERT INTO SALES VALUES(2,1,1,6.0);
INSERT INTO SALES VALUES(2,2,1,7.0);
INSERT INTO SALES VALUES(2,2,1,8.0);
COMMIT;
Day 1: If I had sold one product 1 at $2 and one product 2 at $4, I would have made $6.
Day 2: If I had sold one product 1 at $6 and one product 2 at $8, I would have made $14.
The desired result set is:
DAY_ID MY_MEASURE
1 6
2 14
The following SQL gets me tantalizingly close:
SELECT DAY_ID,
MAX(PURCHASE_PRICE)
KEEP(DENSE_RANK FIRST ORDER BY PURCHASE_PRICE DESC)
OVER(PARTITION BY DAY_ID, PRODUCT_ID) AS MY_MEASURE
FROM SALES
ORDER BY DAY_ID
DAY_ID MY_MEASURE
1 2
1 2
1 4
1 4
2 6
2 6
2 8
2 8
But as you can see, my result set is "longer" than I wanted it to be. I want a single row per DAY_ID. I understand what the analytic functions are doing here, and I acknowledge that I am "not doing it right." I just can't seem to figure out how to make it work.
Trying to do a sum() of max() simply does not work, nor does any semblance of a group-by clause that I can come up with. Unfortunately, as soon as I add the windowing function, I am no longer allowed to use group-by expressions (I think).
I am using a reporting tool, so unfortunately using things like inline views are not an option. I need to be able to define "MY_MEASURE" as something the query tool can apply the SUM() function to in its generated SQL.
(Note: The actual problem is slightly less easy to conceptualize, but solving this conundrum will take me much closer to solving the other.)
I humbly solicit your collective wisdom, oh forum.
Thanks, SY. I went that way originally too. Unfortunately that's no different from what I could get without the RANK function.
SELECT DAY_ID,
PRODUCT_ID,
MAX(PURCHASE_PRICE) MAX_PRICE
FROM SALES
GROUP BY DAY_ID,
PRODUCT_ID
ORDER BY DAY_ID,
PRODUCT_ID
DAY_ID PRODUCT_ID MAX_PRICE
1 1 2
1 2 4
2 1 6
2 2 8 -
Speed up query with analytic function
Hi
how can I speed up the query below?
All the time is spent in the analytic function (WINDOW SORT).
Thanks for your help
11.2.0.1
Rows Row Source Operation
28987 HASH UNIQUE (cr=12677 pr=155778 pw=109730 time=25010 us cost=5502 size=3972960 card=14880)
1668196 WINDOW SORT (cr=12677 pr=155778 pw=109730 time=890411840 us cost=5502 size=3972960 card=14880)
1668196 HASH JOIN RIGHT OUTER (cr=12677 pr=0 pw=0 time=1069165 us cost=3787 size=3972960 card=14880)
30706 TABLE ACCESS FULL FLO_FML_EVENT (cr=270 pr=0 pw=0 time=7420 us cost=56 size=814158 card=30154)
194733 HASH JOIN RIGHT OUTER (cr=12407 pr=0 pw=0 time=571145 us cost=3730 size=3571200 card=14880)
613 VIEW (cr=342 pr=0 pw=0 time=489 us cost=71 size=23840 card=745)
613 HASH UNIQUE (cr=342 pr=0 pw=0 time=244 us cost=71 size=20115 card=745)
745 WINDOW SORT (cr=342 pr=0 pw=0 time=1736 us cost=71 size=20115 card=745)
745 MAT_VIEW ACCESS FULL MVECRF_CUR_QUERY (cr=342 pr=0 pw=0 time=1736 us cost=69 size=20115 card=745)
194733 HASH JOIN (cr=12065 pr=0 pw=0 time=431813 us cost=3658 size=3095040 card=14880)
43 MAT_VIEW ACCESS FULL MVECRF_VISIT_REVS (cr=3 pr=0 pw=0 time=0 us cost=2 size=946 card=43)
194733 HASH JOIN OUTER (cr=12062 pr=0 pw=0 time=292098 us cost=3656 size=2767680 card=14880)
194733 HASH JOIN OUTER (cr=10553 pr=0 pw=0 time=234394 us cost=2962 size=2574240 card=14880)
194733 HASH JOIN (cr=9999 pr=0 pw=0 time=379996 us cost=2570 size=2380800 card=14880)
30076 MAT_VIEW ACCESS FULL MVECRF_ACTIVATED_FORMS (cr=1817 pr=0 pw=0 time=28411 us cost=361 size=2000285 card=29855)
194733 HASH JOIN (cr=8182 pr=0 pw=0 time=209061 us cost=1613 size=9026301 card=97057)
628 MAT_VIEW ACCESS FULL MVECRF_STUDYVERSION_FORMS (cr=19 pr=0 pw=0 time=250 us cost=6 size=18212 card=628)
194733 MAT_VIEW ACCESS FULL MVECRF_FORMITEMS (cr=8163 pr=0 pw=0 time=80733 us cost=1606 size=12462912 card=194733)
132342 MAT_VIEW ACCESS FULL MVECRF_ITEM_SDV (cr=554 pr=0 pw=0 time=23678 us cost=112 size=1720446 card=132342)
221034 MAT_VIEW ACCESS FULL MVECRF_ITEMDATA (cr=1509 pr=0 pw=0 time=46459 us cost=299 size=2873442 card=221034)
SELECT
DISTINCT
'CL238093011' AS ETUDE,
FI.STUDYID,
FI.STUDYVERSIONID,
FI.SITEID,
FI.SUBJECTID,
FI.VISITID,
VR.VISITREFNAME,
FI.SUBJECTVISITID,
FI.FORMID,
FI.FORMINDEX,
SVF.FORMREFNAME,
SVF.FORMMNEMONIC AS FMLNOM,
EVENT_ITEM.EVENT AS EVENUM,
EVENT_ITEM.EVENT_ROW AS LIGNUM,
NULL AS CODVISEVE,
MIN(DID.MINENTEREDDATE)
OVER (
PARTITION BY FI.SUBJECTID, FI.VISITID, FI.FORMID, FI.FORMINDEX)
AS ATTDAT1ERSAI,
MIN(IFSDV.ITEMFIRSTSDV)
OVER (
PARTITION BY FI.SUBJECTID, FI.VISITID, FI.FORMID, FI.FORMINDEX)
AS ATTDAT1ERSDV,
MAX(IFSDV.ITEMFIRSTSDV)
OVER (
PARTITION BY FI.SUBJECTID, FI.VISITID, FI.FORMID, FI.FORMINDEX)
AS ATTDATDERSDV,
DECODE (AF.SDVCOMPLETESTATE,
0,
'N',
1,
'Y')
AS ATTINDSDVCOP,
AF.FMINSDVCOMPLETESTATE AS ATTDAT1ERSDVCOP,
DECODE (AF.SDVPARTIALSTATE,
0,
'N',
1,
'Y')
AS ATTINDSDVPTL,
EVENT_ITEM.EVENT_RELECT AS ATTINDRVUMEDCOP,
DECODE (QUERY.NBQSTFML, NULL, 'N', 'Y') AS ATTINDQST,
DECODE (AF.MISSINGITEMSSTATE,
0,
'N',
1,
'Y')
AS ATTINDITMABS,
DECODE (AF.FROZENSTATE,
0,
'N',
1,
'Y')
AS ATTINDETACON,
AF.FMINFROZENSTATE AS ATTDAT1ERCON,
AF.FMAXFROZENSTATE AS ATTDATDERCON,
DECODE (AF.DELETEDSTATE,
0,
'N',
1,
'Y')
AS ATTINDETASPR,
EVENT_ITEM.ROW_DELETED AS ATTINDLIGSPR
FROM CL238093011.MVECRF_FORMITEMS FI,
CL238093011.MVECRF_STUDYVERSION_FORMS SVF,
CL238093011.MVECRF_ACTIVATED_FORMS AF,
CL238093011.MVECRF_ITEM_SDV IFSDV,
CL238093011.MVECRF_VISIT_REVS VR,
CL238093011.MVECRF_ITEMDATA DID,
(SELECT DISTINCT
SUBJECTID,
VISITID,
FORMID,
FORMINDEX,
COUNT (
DISTINCT QUERYID)
OVER (
PARTITION BY SUBJECTID, VISITID, FORMID, FORMINDEX)
AS NBQSTFML
FROM CL238093011.MVECRF_CUR_QUERY
WHERE QUERYSTATE IN (0, 1, 2)) QUERY,
CL238093011.FLO_FML_EVENT EVENT_ITEM
WHERE (AF.VISITDELETED IS NULL OR AF.VISITDELETED = 0)
AND AF.FORMTYPE NOT IN (4, 5, 6, 7, 8, 103)
AND (AF.DELETEDDYNAMICFORMSTATE IS NULL
OR AF.DELETEDDYNAMICFORMSTATE = 0)
AND FI.SUBJECTVISITID = AF.SUBJECTVISITID
AND FI.FORMID = AF.FORMID
AND FI.FORMREV = AF.FORMREV
AND FI.FORMINDEX = AF.FORMINDEX
AND FI.VISITID = VR.VISITID
AND FI.VISITREV = VR.VISITREV
AND FI.CONTEXTID = IFSDV.CONTEXTID(+)
AND FI.CONTEXTID = DID.CONTEXTID(+)
AND FI.SUBJECTID = QUERY.SUBJECTID(+)
AND FI.VISITID = QUERY.VISITID(+)
AND FI.FORMID = QUERY.FORMID(+)
AND FI.FORMINDEX = QUERY.FORMINDEX(+)
AND FI.STUDYVERSIONID = SVF.STUDYVERSIONID
AND FI.FORMID = SVF.FORMID
AND FI.VISITID = SVF.VISITID
AND FI.SUBJECTID = EVENT_ITEM.SUBJECTID(+)
AND FI.VISITID = EVENT_ITEM.VISITID(+)
AND FI.FORMID = EVENT_ITEM.FORMID(+)
AND FI.FORMINDEX = EVENT_ITEM.FORMINDEX(+)
user12045475 wrote:
[quoted query and row-source plan snipped]
Do you have the license for parallel query (may/may not help)? PQO can help with sorts ... -
Having clause with Analytic function
Can you please let me know if we can use a HAVING clause with an analytic function.
select eid,empno,sum(sal) over(partition by year)
from employee
where dept = 'SALES'
having sum(sal) > 10000
I'm getting an error while using the above.
Is it that we can use a HAVING clause with PARTITION BY?
Thanks in advance
Your HAVING clause isn't using an analytic function; it's using a regular aggregate function.
You also can't use analytic functions in the WHERE clause or HAVING clause like that, as they are windowing functions and belong at the top of the query.
You would have to wrap the query to achieve what you want e.g.
SQL> ed
Wrote file afiedt.buf
1 select deptno, total_sal
2 from (
3 select deptno,sum(sal) over (partition by deptno) as total_sal
4 from emp
5 )
6 group by deptno, total_sal
7* having total_sal > 10000
SQL> /
DEPTNO TOTAL_SAL
20 10875
SQL> -
EVALUATE in OBIEE with Analytic function LAST_VALUE
Hi,
I'm trying to use EVALUATE with the analytic function LAST_VALUE, but it is giving me the error below:
[nQSError: 17001] Oracle Error code: 30483, message: ORA-30483: window functions are not allowed here at OCI call OCIStmtExecute. [nQSError: 17010] SQL statement preparation failed. (HY000)
Thanks
Kumar.
Hi Kumar,
The ORA error tells me that this is something conveyed by the oracle database but not the BI Server. In this case, the BI server might have fired the incorrect query onto the DB and you might want to check what's wrong with it too.
The LAST_VALUE is an analytic function which works over a set/partition of records. Request you to refer to the semantics at http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions073.htm and see if it is violating any rules here. You may want to post the physical sql here too to check.
Hope this helps.
Thank you,
Dhar -
I successfully use the following analytical function to sum all net_movement of a position (key for a position: bp_id, prtfl_num, instrmnt_id, cost_prc_crncy) from first occurrence until current row:
SELECT SUM (net_movement) OVER (PARTITION BY bp_id, prtfl_num, instrmnt_id, cost_prc_crncy ORDER BY TRUNC (val_dt) RANGE BETWEEN UNBOUNDED PRECEDING AND 0 FOLLOWING) holding,
What I need is another column to sum the net_movement of a position, but only for the current date. All my approaches fail:
- add the date (val_dt) to the 'partition by' clause and therefore sum only values with same position and date
SELECT SUM (net_movement) OVER (PARTITION BY val_dt, bp_id, prtfl_num, instrmnt_id, cost_prc_crncy ORDER BY TRUNC (val_dt) RANGE BETWEEN UNBOUNDED PRECEDING AND 0 FOLLOWING) today_net_movement
- take the holding for the last date and subtract it from the current holding afterwards
SELECT SUM (net_movement) OVER (PARTITION BY bp_id, prtfl_num, instrmnt_id, cost_prc_crncy ORDER BY TRUNC (val_dt) RANGE BETWEEN UNBOUNDED PRECEDING AND -1 FOLLOWING) last_holding,
- using lag on the analytical function which calculates holding fails too
I also want to avoid creating a table which stores the last holding..
Does anyone sees where I make a mistake or knows an alternative to get this value?
It would help me much!
Thanks in advance!
Thank you,
but I already tried that, and it returns strange values which are certainly not the correct ones.
It is always the same value for each row, if it's not 0, and a very high one (500500 for example), even if the sum of all net_movement for that date is 0 (and the statement for holding returns 0 too).
I also tried with trunc(val_dt,'DDD'), with the same result (without trunc it is the same issue).
please help if you can, thanks in advance! -
COUNT(DISTINCT) WITH ORDER BY in an analytic function
-- I create a table with three fields: Name, Amount, and a Trans_Date.
CREATE TABLE TEST
(
NAME VARCHAR2(19) NULL,
AMOUNT VARCHAR2(8) NULL,
TRANS_DATE DATE NULL
);
-- I insert a few rows into my table:
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '110', TO_DATE('06/01/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '20', TO_DATE('06/01/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '110', TO_DATE('06/02/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '21', TO_DATE('06/03/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '68', TO_DATE('06/04/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '110', TO_DATE('06/05/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Anna', '20', TO_DATE('06/06/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Bill', '43', TO_DATE('06/01/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Bill', '77', TO_DATE('06/02/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Bill', '221', TO_DATE('06/03/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Bill', '43', TO_DATE('06/04/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
INSERT INTO TEST ( TEST.NAME, TEST.AMOUNT, TEST.TRANS_DATE ) VALUES ( 'Bill', '73', TO_DATE('06/05/2005 08:00:00 PM', 'MM/DD/YYYY HH12:MI:SS PM') );
commit;
/* I want to retrieve, for every row, the distinct count of AMOUNT as an analytic COUNT(DISTINCT AMOUNT), partitioned by name and ordered by trans_date, where I only count over the last four trans_dates for each row (i.e., for the row "Anna 110 6/5/2005 8:00:00.000 PM," I only want to look at the dates from 6/2/2005 to 6/5/2005 and count how many distinct amounts there are for Anna). Note: I cannot use the DISTINCT keyword in this query because it doesn't work with the ORDER BY */
select NAME, AMOUNT, TRANS_DATE, COUNT(/*DISTINCT*/ AMOUNT) over ( partition by NAME
order by TRANS_DATE range between numtodsinterval(3,'day') preceding and current row ) as COUNT_AMOUNT
from TEST t;
This is the results I get if I just count all the AMOUNT without using distinct:
NAME AMOUNT TRANS_DATE COUNT_AMOUNT
Anna 110 6/1/2005 8:00:00.000 PM 2
Anna 20 6/1/2005 8:00:00.000 PM 2
Anna 110 6/2/2005 8:00:00.000 PM 3
Anna 21 6/3/2005 8:00:00.000 PM 4
Anna 68 6/4/2005 8:00:00.000 PM 5
Anna 110 6/5/2005 8:00:00.000 PM 4
Anna 20 6/6/2005 8:00:00.000 PM 4
Bill 43 6/1/2005 8:00:00.000 PM 1
Bill 77 6/2/2005 8:00:00.000 PM 2
Bill 221 6/3/2005 8:00:00.000 PM 3
Bill 43 6/4/2005 8:00:00.000 PM 4
Bill 73 6/5/2005 8:00:00.000 PM 4
The COUNT_DISTINCT_AMOUNT is the desired output:
NAME AMOUNT TRANS_DATE COUNT_DISTINCT_AMOUNT
Anna 110 6/1/2005 8:00:00.000 PM 1
Anna 20 6/1/2005 8:00:00.000 PM 2
Anna 110 6/2/2005 8:00:00.000 PM 2
Anna 21 6/3/2005 8:00:00.000 PM 3
Anna 68 6/4/2005 8:00:00.000 PM 4
Anna 110 6/5/2005 8:00:00.000 PM 3
Anna 20 6/6/2005 8:00:00.000 PM 4
Bill 43 6/1/2005 8:00:00.000 PM 1
Bill 77 6/2/2005 8:00:00.000 PM 2
Bill 221 6/3/2005 8:00:00.000 PM 3
Bill 43 6/4/2005 8:00:00.000 PM 3
Bill 73 6/5/2005 8:00:00.000 PM 4
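The window being asked for can be simulated outside the database; this is a Python sketch (not Oracle code) of COUNT(DISTINCT amount) over the 3-day preceding RANGE, using the sample rows shown above. Note that true RANGE semantics treat the two Anna rows on 6/1 (equal trans_date) as peers, so both get 2, slightly unlike the 1-then-2 in the desired output.

```python
from datetime import date, timedelta

# Sample rows from the thread: (name, amount, trans_date)
rows = [
    ("Anna", 110, date(2005, 6, 1)), ("Anna", 20, date(2005, 6, 1)),
    ("Anna", 110, date(2005, 6, 2)), ("Anna", 21, date(2005, 6, 3)),
    ("Anna", 68, date(2005, 6, 4)), ("Anna", 110, date(2005, 6, 5)),
    ("Anna", 20, date(2005, 6, 6)),
    ("Bill", 43, date(2005, 6, 1)), ("Bill", 77, date(2005, 6, 2)),
    ("Bill", 221, date(2005, 6, 3)), ("Bill", 43, date(2005, 6, 4)),
    ("Bill", 73, date(2005, 6, 5)),
]

def count_distinct_range(data, days=3):
    """Simulates COUNT(DISTINCT amount) OVER (PARTITION BY name
    ORDER BY trans_date RANGE BETWEEN numtodsinterval(days,'day')
    PRECEDING AND CURRENT ROW)."""
    result = []
    for name, _, d in data:
        # all rows of the same name whose date falls in [d - days, d]
        window = {amt for n, amt, d2 in data
                  if n == name and d - timedelta(days=days) <= d2 <= d}
        result.append(len(window))
    return result
```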
Thanks in advance.
You can try to write your own UDAG (user-defined aggregate).
Here is a contrived example, just to show how it could work. I am using only 1, 2, 4, 8, 16 and 32 (powers of two) as potential values.
create or replace type CountDistinctType as object
(
bitor_number number,
static function ODCIAggregateInitialize(sctx IN OUT CountDistinctType)
return number,
member function ODCIAggregateIterate(self IN OUT CountDistinctType,
value IN number) return number,
member function ODCIAggregateTerminate(self IN CountDistinctType,
returnValue OUT number, flags IN number) return number,
member function ODCIAggregateMerge(self IN OUT CountDistinctType,
ctx2 IN CountDistinctType) return number
);
/
create or replace type body CountDistinctType is
static function ODCIAggregateInitialize(sctx IN OUT CountDistinctType)
return number is
begin
sctx := CountDistinctType('');
return ODCIConst.Success;
end;
member function ODCIAggregateIterate(self IN OUT CountDistinctType, value IN number)
return number is
begin
if (self.bitor_number is null) then
self.bitor_number := value;
else
self.bitor_number := self.bitor_number+value-bitand(self.bitor_number,value);
end if;
return ODCIConst.Success;
end;
member function ODCIAggregateTerminate(self IN CountDistinctType, returnValue OUT
number, flags IN number) return number is
begin
returnValue := 0;
for i in 0..log(2,self.bitor_number) loop
if (bitand(power(2,i),self.bitor_number)!=0) then
returnValue := returnValue+1;
end if;
end loop;
return ODCIConst.Success;
end;
member function ODCIAggregateMerge(self IN OUT CountDistinctType, ctx2 IN
CountDistinctType) return number is
begin
return ODCIConst.Success;
end;
end;
/
CREATE or REPLACE FUNCTION CountDistinct (n number) RETURN number
PARALLEL_ENABLE AGGREGATE USING CountDistinctType;
/
drop table t;
create table t as select rownum r, power(2,trunc(dbms_random.value(0,6))) p from all_objects;
SQL> select r,p,countdistinct(p) over (order by r) d from t where rownum<10 order by r;
R P D
1 4 1
2 1 2
3 8 3
4 32 4
5 1 4
6 16 5
7 16 5
8 4 5
9 4 5
Buy a good book if you want to start writing your own "distinct" algorithm.
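As an aside, the arithmetic inside ODCIAggregateIterate is a bitwise OR in disguise (a + v - bitand(a, v) equals a OR v for any non-negative integers), and ODCIAggregateTerminate simply counts the set bits. A Python sketch of the same trick, fed the P column from the run above:

```python
def iterate(acc, value):
    # self.bitor_number + value - bitand(self.bitor_number, value) == acc | value,
    # because acc + value == (acc | value) + (acc & value)
    return value if acc is None else acc + value - (acc & value)

acc = None
for v in [4, 1, 8, 32, 1, 16, 16, 4, 4]:  # the P column from the run above
    acc = iterate(acc, v)

distinct = bin(acc).count("1")  # terminate step: popcount of the accumulated OR
```

This only works when every possible value maps to its own bit, which is why the example restricts itself to powers of two.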
Message was edited by:
Laurent Schneider
A simpler but memory-hungry algorithm would store the values in a PL/SQL table inside the UDAG and do the count(distinct) over that table to return the value -
There is misleading information in two system views (sys.data_spaces & sys.destination_data_spaces) about the physical location of data after a partitioning MERGE and before an INDEX REBUILD operation on a partitioned table. In SQL Server 2012 SP1 CU6, the script below (SQLCMD mode; set the DataDrive & LogDrive variables for your runtime environment) will create a test database with file groups and files to support a partitioned table. The partition function and scheme spread the test data across 4 file groups; an empty partition, file group and file are maintained at the start and at the end of the range. A problem occurs after the SWITCH and MERGE RANGE operations: the views sys.data_spaces & sys.destination_data_spaces show the logical, not the physical, location of data.
--=================================================================================
-- PartitionLabSetup_RangeRight.sql
-- 001. Create test database
-- 002. Add file groups and files
-- 003. Create partition function and schema
-- 004. Create and populate a test table
--=================================================================================
USE [master]
GO
-- 001 - Create Test Database
:SETVAR DataDrive "D:\SQL\Data\"
:SETVAR LogDrive "D:\SQL\Logs\"
:SETVAR DatabaseName "workspace"
:SETVAR TableName "TestTable"
-- Drop if exists and create Database
IF DATABASEPROPERTYEX(N'$(databasename)','Status') IS NOT NULL
BEGIN
ALTER DATABASE $(DatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DROP DATABASE $(DatabaseName)
END
CREATE DATABASE $(DatabaseName)
ON
( NAME = $(DatabaseName)_data,
FILENAME = N'$(DataDrive)$(DatabaseName)_data.mdf',
SIZE = 10,
MAXSIZE = 500,
FILEGROWTH = 5 )
LOG ON
( NAME = $(DatabaseName)_log,
FILENAME = N'$(LogDrive)$(DatabaseName).ldf',
SIZE = 5MB,
MAXSIZE = 5000MB,
FILEGROWTH = 5MB ) ;
GO
-- 002. Add file groups and files
--:SETVAR DatabaseName "workspace"
--:SETVAR TableName "TestTable"
--:SETVAR DataDrive "D:\SQL\Data\"
--:SETVAR LogDrive "D:\SQL\Logs\"
DECLARE @nSQL NVARCHAR(2000) ;
DECLARE @x INT = 1;
WHILE @x <= 6
BEGIN
SELECT @nSQL =
'ALTER DATABASE $(DatabaseName)
ADD FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';
ALTER DATABASE $(DatabaseName)
ADD FILE
NAME= ''$(TableName)_f' + CAST(@x AS CHAR(5)) + ''',
FILENAME = ''$(DataDrive)\$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + '.ndf''
TO FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';'
EXEC sp_executeSQL @nSQL;
SET @x = @x + 1;
END
-- 003. Create partition function and schema
--:SETVAR TableName "TestTable"
--:SETVAR DatabaseName "workspace"
USE $(DatabaseName);
CREATE PARTITION FUNCTION $(TableName)_func (int)
AS RANGE RIGHT FOR VALUES
(
0,
15,
30,
45,
60
);
CREATE PARTITION SCHEME $(TableName)_scheme
AS
PARTITION $(TableName)_func
TO
(
$(TableName)_fg1,
$(TableName)_fg2,
$(TableName)_fg3,
$(TableName)_fg4,
$(TableName)_fg5,
$(TableName)_fg6
);
-- Create TestTable
--:SETVAR TableName "TestTable"
--:SETVAR BackupDrive "D:\SQL\Backups\"
--:SETVAR DatabaseName "workspace"
CREATE TABLE [dbo].$(TableName)(
[Partition_PK] [int] NOT NULL,
[GUID_PK] [uniqueidentifier] NOT NULL,
[CreateDate] [datetime] NULL,
[CreateServer] [nvarchar](50) NULL,
[RandomNbr] [int] NULL,
CONSTRAINT [PK_$(TableName)] PRIMARY KEY CLUSTERED
(
[Partition_PK] ASC,
[GUID_PK] ASC
)
) ON $(TableName)_scheme(Partition_PK)
ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_GUID_PK] DEFAULT (newid()) FOR [GUID_PK]
ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateDate] DEFAULT (getdate()) FOR [CreateDate]
ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateServer] DEFAULT (@@servername) FOR [CreateServer]
-- 004. Create and populate a test table
-- Load TestTable Data - Seconds 0-59 are used as the Partitioning Key
--:SETVAR TableName "TestTable"
SET NOCOUNT ON;
DECLARE @Now DATETIME = GETDATE()
WHILE @Now > DATEADD(minute,-1,GETDATE())
BEGIN
INSERT INTO [dbo].$(TableName)
([Partition_PK]
,[RandomNbr])
VALUES
( DATEPART(second,GETDATE())
,ROUND((RAND() * 100),0) )
END
-- Confirm table partitioning - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
SELECT
N'DatabaseName' = DB_NAME()
, N'SchemaName' = s.name
, N'TableName' = o.name
, N'IndexName' = i.name
, N'IndexType' = i.type_desc
, N'PartitionScheme' = ps.name
, N'DataSpaceName' = ds.name
, N'DataSpaceType' = ds.type_desc
, N'PartitionFunction' = pf.name
, N'PartitionNumber' = dds.destination_id
, N'BoundaryValue' = prv.value
, N'RightBoundary' = pf.boundary_value_on_right
, N'PartitionFileGroup' = ds2.name
, N'RowsOfData' = p.[rows]
FROM
sys.objects AS o
INNER JOIN sys.schemas AS s
ON o.[schema_id] = s.[schema_id]
INNER JOIN sys.partitions AS p
ON o.[object_id] = p.[object_id]
INNER JOIN sys.indexes AS i
ON p.[object_id] = i.[object_id]
AND p.index_id = i.index_id
INNER JOIN sys.data_spaces AS ds
ON i.data_space_id = ds.data_space_id
INNER JOIN sys.partition_schemes AS ps
ON ds.data_space_id = ps.data_space_id
INNER JOIN sys.partition_functions AS pf
ON ps.function_id = pf.function_id
LEFT OUTER JOIN sys.partition_range_values AS prv
ON pf.function_id = prv.function_id
AND p.partition_number = prv.boundary_id
LEFT OUTER JOIN sys.destination_data_spaces AS dds
ON ps.data_space_id = dds.partition_scheme_id
AND p.partition_number = dds.destination_id
LEFT OUTER JOIN sys.data_spaces AS ds2
ON dds.data_space_id = ds2.data_space_id
ORDER BY
DatabaseName
,SchemaName
,TableName
,IndexName
,PartitionNumber
--=================================================================================
-- SECTION 2 - SWITCH OUT
-- 001 - Create TestTableOut
-- 002 - Switch out partition in range 0-14
-- 003 - Merge range 0 -29
-- 001. TestTableOut
:SETVAR TableName "TestTable"
IF OBJECT_ID('dbo.$(TableName)Out') IS NOT NULL
DROP TABLE [dbo].[$(TableName)Out]
CREATE TABLE [dbo].[$(TableName)Out](
[Partition_PK] [int] NOT NULL,
[GUID_PK] [uniqueidentifier] NOT NULL,
[CreateDate] [datetime] NULL,
[CreateServer] [nvarchar](50) NULL,
[RandomNbr] [int] NULL,
CONSTRAINT [PK_$(TableName)Out] PRIMARY KEY CLUSTERED
(
[Partition_PK] ASC,
[GUID_PK] ASC
)
) ON $(TableName)_fg2;
GO
-- 002 - Switch out partition in range 0-14
--:SETVAR TableName "TestTable"
ALTER TABLE dbo.$(TableName)
SWITCH PARTITION 2 TO dbo.$(TableName)Out;
-- 003 - Merge range 0 - 29
--:SETVAR TableName "TestTable"
ALTER PARTITION FUNCTION $(TableName)_func()
MERGE RANGE (15);
-- Confirm table partitioning
-- Original source of this query - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
SELECT
N'DatabaseName' = DB_NAME()
, N'SchemaName' = s.name
, N'TableName' = o.name
, N'IndexName' = i.name
, N'IndexType' = i.type_desc
, N'PartitionScheme' = ps.name
, N'DataSpaceName' = ds.name
, N'DataSpaceType' = ds.type_desc
, N'PartitionFunction' = pf.name
, N'PartitionNumber' = dds.destination_id
, N'BoundaryValue' = prv.value
, N'RightBoundary' = pf.boundary_value_on_right
, N'PartitionFileGroup' = ds2.name
, N'RowsOfData' = p.[rows]
FROM
sys.objects AS o
INNER JOIN sys.schemas AS s
ON o.[schema_id] = s.[schema_id]
INNER JOIN sys.partitions AS p
ON o.[object_id] = p.[object_id]
INNER JOIN sys.indexes AS i
ON p.[object_id] = i.[object_id]
AND p.index_id = i.index_id
INNER JOIN sys.data_spaces AS ds
ON i.data_space_id = ds.data_space_id
INNER JOIN sys.partition_schemes AS ps
ON ds.data_space_id = ps.data_space_id
INNER JOIN sys.partition_functions AS pf
ON ps.function_id = pf.function_id
LEFT OUTER JOIN sys.partition_range_values AS prv
ON pf.function_id = prv.function_id
AND p.partition_number = prv.boundary_id
LEFT OUTER JOIN sys.destination_data_spaces AS dds
ON ps.data_space_id = dds.partition_scheme_id
AND p.partition_number = dds.destination_id
LEFT OUTER JOIN sys.data_spaces AS ds2
ON dds.data_space_id = ds2.data_space_id
ORDER BY
DatabaseName
,SchemaName
,TableName
,IndexName
,PartitionNumber
The table below shows the results of the ‘Confirm Table Partitioning’ query, before and after the MERGE.
The T-SQL code below illustrates the problem.
-- PartitionLab_RangeRight
USE workspace;
DROP TABLE dbo.TestTableOut;
USE master;
ALTER DATABASE workspace
REMOVE FILE TestTable_f3 ;
-- ERROR
--Msg 5042, Level 16, State 1, Line 1
--The file 'TestTable_f3 ' cannot be removed because it is not empty.
ALTER DATABASE workspace
REMOVE FILE TestTable_f2 ;
-- Works surprisingly!!
use workspace;
ALTER INDEX [PK_TestTable] ON [dbo].[TestTable] REBUILD PARTITION = 2;
--Msg 622, Level 16, State 3, Line 2
--The filegroup "TestTable_fg2" has no files assigned to it. Tables, indexes, text columns, ntext columns, and image columns cannot be populated on this filegroup until a file is added.
--The statement has been terminated.
If you run ALTER INDEX REBUILD before trying to remove files from File Group 3, it works. Rerun the database setup script then the code below.
-- RANGE RIGHT
-- Rerun PartitionLabSetup_RangeRight.sql before the code below
USE workspace;
DROP TABLE dbo.TestTableOut;
ALTER INDEX [PK_TestTable] ON [dbo].[TestTable] REBUILD PARTITION = 2;
USE master;
ALTER DATABASE workspace
REMOVE FILE TestTable_f3;
-- Works as expected!!
The file in File Group 2 appears to contain data, but it can be dropped. Although the system views report the data as being in File Group 2, it still physically resides in File Group 3 and isn't moved until the index is rebuilt. With RANGE RIGHT, the left file group (File Group 2) is the one retained when ranges are merged.
RANGE LEFT would have retained the data in File Group 3, where it already resided, so no INDEX REBUILD would be necessary to effectively complete the MERGE operation. The script below implements the same partitioning strategy (the same data distribution between partitions) on the test table, but uses different boundary definitions and RANGE LEFT.
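To make the RANGE RIGHT / RANGE LEFT difference concrete, here is a toy Python model (not SQL Server code, and the filegroup names are just the ones from this post): the boundary value belongs to the right partition under RANGE RIGHT and to the left partition under RANGE LEFT, and after a MERGE the combined partition keeps the filegroup of the side that did not own the boundary, so rows from the other side must physically move there.

```python
def merge_range(boundaries, filegroups, b, range_right=True):
    """Toy model of ALTER PARTITION FUNCTION ... MERGE RANGE (b).

    boundaries has N values; filegroups has N + 1 entries, one per partition.
    The merged partition keeps the filegroup of the partition that did NOT
    own boundary value b: the left one under RANGE RIGHT, the right one
    under RANGE LEFT.
    """
    i = boundaries.index(b)  # boundary b separates partitions i and i + 1
    survivor = filegroups[i] if range_right else filegroups[i + 1]
    return (boundaries[:i] + boundaries[i + 1:],
            filegroups[:i] + [survivor] + filegroups[i + 2:])

# RANGE RIGHT setup from this post: MERGE RANGE (15) keeps fg2, so rows
# that lived in fg3 are logically re-homed to fg2 (hence the index rebuild).
b_r, f_r = merge_range([0, 15, 30, 45, 60],
                       ["fg1", "fg2", "fg3", "fg4", "fg5", "fg6"],
                       15, range_right=True)

# RANGE LEFT setup: MERGE RANGE (14) keeps fg3, where the data already is.
b_l, f_l = merge_range([-1, 14, 29, 44, 59],
                       ["fg1", "fg2", "fg3", "fg4", "fg5", "fg6"],
                       14, range_right=False)
```

The same model also shows why merging at the empty leading boundary (the eventual solution below) moves nothing: both partitions adjacent to that boundary are empty after the SWITCH.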
--=================================================================================
-- PartitionLabSetup_RangeLeft.sql
-- 001. Create test database
-- 002. Add file groups and files
-- 003. Create partition function and schema
-- 004. Create and populate a test table
--=================================================================================
USE [master]
GO
-- 001 - Create Test Database
:SETVAR DataDrive "D:\SQL\Data\"
:SETVAR LogDrive "D:\SQL\Logs\"
:SETVAR DatabaseName "workspace"
:SETVAR TableName "TestTable"
-- Drop if exists and create Database
IF DATABASEPROPERTYEX(N'$(databasename)','Status') IS NOT NULL
BEGIN
ALTER DATABASE $(DatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DROP DATABASE $(DatabaseName)
END
CREATE DATABASE $(DatabaseName)
ON
( NAME = $(DatabaseName)_data,
FILENAME = N'$(DataDrive)$(DatabaseName)_data.mdf',
SIZE = 10,
MAXSIZE = 500,
FILEGROWTH = 5 )
LOG ON
( NAME = $(DatabaseName)_log,
FILENAME = N'$(LogDrive)$(DatabaseName).ldf',
SIZE = 5MB,
MAXSIZE = 5000MB,
FILEGROWTH = 5MB ) ;
GO
-- 002. Add file groups and files
--:SETVAR DatabaseName "workspace"
--:SETVAR TableName "TestTable"
--:SETVAR DataDrive "D:\SQL\Data\"
--:SETVAR LogDrive "D:\SQL\Logs\"
DECLARE @nSQL NVARCHAR(2000) ;
DECLARE @x INT = 1;
WHILE @x <= 6
BEGIN
SELECT @nSQL =
'ALTER DATABASE $(DatabaseName)
ADD FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';
ALTER DATABASE $(DatabaseName)
ADD FILE
NAME= ''$(TableName)_f' + CAST(@x AS CHAR(5)) + ''',
FILENAME = ''$(DataDrive)\$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + '.ndf''
TO FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';'
EXEC sp_executeSQL @nSQL;
SET @x = @x + 1;
END
-- 003. Create partition function and schema
--:SETVAR TableName "TestTable"
--:SETVAR DatabaseName "workspace"
USE $(DatabaseName);
CREATE PARTITION FUNCTION $(TableName)_func (int)
AS RANGE LEFT FOR VALUES
(
-1,
14,
29,
44,
59
);
CREATE PARTITION SCHEME $(TableName)_scheme
AS
PARTITION $(TableName)_func
TO
(
$(TableName)_fg1,
$(TableName)_fg2,
$(TableName)_fg3,
$(TableName)_fg4,
$(TableName)_fg5,
$(TableName)_fg6
);
-- Create TestTable
--:SETVAR TableName "TestTable"
--:SETVAR BackupDrive "D:\SQL\Backups\"
--:SETVAR DatabaseName "workspace"
CREATE TABLE [dbo].$(TableName)(
[Partition_PK] [int] NOT NULL,
[GUID_PK] [uniqueidentifier] NOT NULL,
[CreateDate] [datetime] NULL,
[CreateServer] [nvarchar](50) NULL,
[RandomNbr] [int] NULL,
CONSTRAINT [PK_$(TableName)] PRIMARY KEY CLUSTERED
(
[Partition_PK] ASC,
[GUID_PK] ASC
)
) ON $(TableName)_scheme(Partition_PK)
ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_GUID_PK] DEFAULT (newid()) FOR [GUID_PK]
ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateDate] DEFAULT (getdate()) FOR [CreateDate]
ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateServer] DEFAULT (@@servername) FOR [CreateServer]
-- 004. Create and populate a test table
-- Load TestTable Data - Seconds 0-59 are used as the Partitioning Key
--:SETVAR TableName "TestTable"
SET NOCOUNT ON;
DECLARE @Now DATETIME = GETDATE()
WHILE @Now > DATEADD(minute,-1,GETDATE())
BEGIN
INSERT INTO [dbo].$(TableName)
([Partition_PK]
,[RandomNbr])
VALUES
( DATEPART(second,GETDATE())
,ROUND((RAND() * 100),0) )
END
-- Confirm table partitioning - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
SELECT
N'DatabaseName' = DB_NAME()
, N'SchemaName' = s.name
, N'TableName' = o.name
, N'IndexName' = i.name
, N'IndexType' = i.type_desc
, N'PartitionScheme' = ps.name
, N'DataSpaceName' = ds.name
, N'DataSpaceType' = ds.type_desc
, N'PartitionFunction' = pf.name
, N'PartitionNumber' = dds.destination_id
, N'BoundaryValue' = prv.value
, N'RightBoundary' = pf.boundary_value_on_right
, N'PartitionFileGroup' = ds2.name
, N'RowsOfData' = p.[rows]
FROM
sys.objects AS o
INNER JOIN sys.schemas AS s
ON o.[schema_id] = s.[schema_id]
INNER JOIN sys.partitions AS p
ON o.[object_id] = p.[object_id]
INNER JOIN sys.indexes AS i
ON p.[object_id] = i.[object_id]
AND p.index_id = i.index_id
INNER JOIN sys.data_spaces AS ds
ON i.data_space_id = ds.data_space_id
INNER JOIN sys.partition_schemes AS ps
ON ds.data_space_id = ps.data_space_id
INNER JOIN sys.partition_functions AS pf
ON ps.function_id = pf.function_id
LEFT OUTER JOIN sys.partition_range_values AS prv
ON pf.function_id = prv.function_id
AND p.partition_number = prv.boundary_id
LEFT OUTER JOIN sys.destination_data_spaces AS dds
ON ps.data_space_id = dds.partition_scheme_id
AND p.partition_number = dds.destination_id
LEFT OUTER JOIN sys.data_spaces AS ds2
ON dds.data_space_id = ds2.data_space_id
ORDER BY
DatabaseName
,SchemaName
,TableName
,IndexName
,PartitionNumber
--=================================================================================
-- SECTION 2 - SWITCH OUT
-- 001 - Create TestTableOut
-- 002 - Switch out partition in range 0-14
-- 003 - Merge range 0 -29
-- 001. TestTableOut
:SETVAR TableName "TestTable"
IF OBJECT_ID('dbo.$(TableName)Out') IS NOT NULL
DROP TABLE [dbo].[$(TableName)Out]
CREATE TABLE [dbo].[$(TableName)Out](
[Partition_PK] [int] NOT NULL,
[GUID_PK] [uniqueidentifier] NOT NULL,
[CreateDate] [datetime] NULL,
[CreateServer] [nvarchar](50) NULL,
[RandomNbr] [int] NULL,
CONSTRAINT [PK_$(TableName)Out] PRIMARY KEY CLUSTERED
(
[Partition_PK] ASC,
[GUID_PK] ASC
)
) ON $(TableName)_fg2;
GO
-- 002 - Switch out partition in range 0-14
--:SETVAR TableName "TestTable"
ALTER TABLE dbo.$(TableName)
SWITCH PARTITION 2 TO dbo.$(TableName)Out;
-- 003 - Merge range 0 - 29
:SETVAR TableName "TestTable"
ALTER PARTITION FUNCTION $(TableName)_func()
MERGE RANGE (14);
-- Confirm table partitioning
-- Original source of this query - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
SELECT
N'DatabaseName' = DB_NAME()
, N'SchemaName' = s.name
, N'TableName' = o.name
, N'IndexName' = i.name
, N'IndexType' = i.type_desc
, N'PartitionScheme' = ps.name
, N'DataSpaceName' = ds.name
, N'DataSpaceType' = ds.type_desc
, N'PartitionFunction' = pf.name
, N'PartitionNumber' = dds.destination_id
, N'BoundaryValue' = prv.value
, N'RightBoundary' = pf.boundary_value_on_right
, N'PartitionFileGroup' = ds2.name
, N'RowsOfData' = p.[rows]
FROM
sys.objects AS o
INNER JOIN sys.schemas AS s
ON o.[schema_id] = s.[schema_id]
INNER JOIN sys.partitions AS p
ON o.[object_id] = p.[object_id]
INNER JOIN sys.indexes AS i
ON p.[object_id] = i.[object_id]
AND p.index_id = i.index_id
INNER JOIN sys.data_spaces AS ds
ON i.data_space_id = ds.data_space_id
INNER JOIN sys.partition_schemes AS ps
ON ds.data_space_id = ps.data_space_id
INNER JOIN sys.partition_functions AS pf
ON ps.function_id = pf.function_id
LEFT OUTER JOIN sys.partition_range_values AS prv
ON pf.function_id = prv.function_id
AND p.partition_number = prv.boundary_id
LEFT OUTER JOIN sys.destination_data_spaces AS dds
ON ps.data_space_id = dds.partition_scheme_id
AND p.partition_number = dds.destination_id
LEFT OUTER JOIN sys.data_spaces AS ds2
ON dds.data_space_id = ds2.data_space_id
ORDER BY
DatabaseName
,SchemaName
,TableName
,IndexName
,PartitionNumber
The table below shows the results of the ‘Confirm Table Partitioning’ query, before and after the MERGE.
The data in the File and File Group to be dropped (File Group 2) has already been switched out; File Group 3 contains the data so no index rebuild is needed to move data and complete the MERGE.
RANGE RIGHT would not be a problem in a ‘Sliding Window’ if the same file group were used for all partitions; when partitions are created and dropped across multiple file groups, however, it introduces a dependency on full index rebuilds. It is typically the larger tables that are partitioned, and a full index rebuild can be an expensive operation. I’m not sure how a RANGE RIGHT partitioning strategy could be implemented, with an ascending partitioning key, using multiple file groups without having to move data. Using a single file group (with multiple files) for all partitions within a table would avoid physically moving data between file groups; no index rebuild would be necessary to complete a MERGE, and the system views would accurately reflect the physical location of data.
If a RANGE RIGHT partition function is used, the data is physically in the wrong file group after the MERGE assuming a typical ascending partitioning key, and the 'Data Spaces' system views might be misleading. Thanks to Manuj and Chris for a lot of help
investigating this.
NOTE 10/03/2014 - The solution
The solution is so easy it's embarrassing, I was using the wrong boundary points for the MERGE (both RANGE LEFT & RANGE RIGHT) to get rid of historic data.
-- Wrong Boundary Point Range Right
--ALTER PARTITION FUNCTION $(TableName)_func()
--MERGE RANGE (15);
-- Wrong Boundary Point Range Left
--ALTER PARTITION FUNCTION $(TableName)_func()
--MERGE RANGE (14);
-- Correct Boundary Points for MERGE
ALTER PARTITION FUNCTION $(TableName)_func()
MERGE RANGE (0); -- or -1 for RANGE LEFT
The empty, switched-out partition (on File Group 2) is then MERGED with the empty partition maintained at the start of the range, and no data movement is necessary. I retract the suggestion that a problem exists with RANGE RIGHT Sliding Windows using multiple file groups, and apologize :-)
Hi Paul Brewer,
Thanks for your post, and glad to hear that the issue is resolved. It is kind of you to post a reply sharing your solution; that way, other community members can benefit from it.
Regards,
Sofiya Li
TechNet Community Support -
To use "analytic function" at "recursive with clause"
http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_10002.htm#i2077142
The recursive member cannot contain any of the following elements:
・An aggregate function. However, analytic functions are permitted in the select list.
OK, I will use an analytic function in the recursive member :-)
SQL> select * from v$version;
BANNER
Oracle Database 11g Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
SQL> with rec(Val,TotalRecCnt) as(
2 select 1,1 from dual
3 union all
4 select Val+1,count(*) over()
5 from rec
6 where Val+1 <= 5)
7 select * from rec;
select * from rec
ERROR at line 7:
ORA-32486: unsupported operation in recursive branch of recursive WITH clause
Why does ORA-32486 happen? :|
Hi Aketi,
It works in 11.2.0.2, so it is probably a bug:
select * from v$version
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for IBM/AIX RISC System/6000: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
with rec(Val,TotalRecCnt) as(
select 1,1 from dual
union all
select Val+1,count(*) over()
from rec
where Val+1 <= 5)
select * from rec
VAL TOTALRECCNT
1 1
2 1
3 1
4 1
5 1
Regards,
Bob -
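One possible reading of the 11.2.0.2 result (TOTALRECCNT = 1 on every row): each step of the recursion sees only the row set produced by the previous step, so COUNT(*) OVER () is evaluated per iteration rather than over the final result. A Python sketch of that assumed evaluation model (an assumption about the semantics, not Oracle code):

```python
def recursive_cte(limit=5):
    """Model of: anchor row (1, 1); recursive member produces Val + 1 with
    count(*) over () evaluated across that iteration's row set only."""
    out = [(1, 1)]            # anchor member: (Val, TotalRecCnt)
    prev = [1]
    while True:
        nxt = [v + 1 for v in prev if v + 1 <= limit]
        if not nxt:
            break
        # count(*) over () sees only this iteration's rows
        out.extend((v, len(nxt)) for v in nxt)
        prev = nxt
    return out
```

Since each iteration here produces exactly one row, TOTALRECCNT comes out as 1 everywhere, matching the 11.2.0.2 output above.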
Analytical function count(*) with order by Rows unbounded preceding
Hi
I have a question about the analytic function COUNT(*) with ORDER BY (col) ROWS UNBOUNDED PRECEDING.
If I remove the ORDER BY ... ROWS UNBOUNDED PRECEDING, it behaves differently.
Can anybody tell me what the impact of ROWS UNBOUNDED PRECEDING is on the COUNT(*) analytic function?
Please help me, and thanks in advance.
Sweety,
When an analytic function has an ORDER BY but no explicit windowing clause, the default window is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW. ROWS UNBOUNDED PRECEDING is shorthand for ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW: it computes COUNT(*) over the physical rows from the beginning of the window up to and including the current row. The difference from the default is how ties are handled: RANGE treats rows with the same ORDER BY value as peers of the current row and includes them all, while ROWS stops at the current physical row.
The beginning of the window of a result set will depend on how you have defined the PARTITION BY clause of the analytic function.
If you specify ROWS 2 PRECEDING, it will calculate COUNT(*) from 2 rows prior to the current row. It is a physical offset.
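A quick way to see the ROWS-vs-default difference is to compute the running count both ways by hand; a Python sketch (not Oracle code) with a tie in the ordering column:

```python
vals = [10, 20, 20, 30]  # values in ORDER BY order; note the tie on 20

# ROWS UNBOUNDED PRECEDING: physical rows from the start up to the current row
rows_count = [i + 1 for i in range(len(vals))]

# Default window (RANGE UNBOUNDED PRECEDING): peers -- rows with the same
# ORDER BY value as the current row -- are all included, so both 20s get 3
range_count = [sum(1 for w in vals if w <= v) for v in vals]
```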
Regards,
Message was edited by:
henryswift -
How to use analytic function with aggregate function
Hello,
Can we use an analytic function and an aggregate function in the same query? I tried to find an example on the net, but could not find one showing how these two kinds of function work together. Please share any link or example.
Edited by: Oracle Studnet on Nov 15, 2009 10:29 PM
select
t1.region_name,
t2.division_name,
t3.month,
t3.amount mthly_sales,
max(t3.amount) over (partition by t1.region_name, t2.division_name)
max_mthly_sales
from
region t1,
division t2,
sales t3
where
t1.region_id=t3.region_id
and
t2.division_id=t3.division_id
and
t3.year=2004
Source: http://www.orafusion.com/art_anlytc.htm
Here MAX (aggregate) and OVER (PARTITION BY ...) (analytic) appear in the same query. So we can use aggregate and analytic functions in the same query, and also more than one analytic function in the same query.
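To make the combination concrete, here is a Python sketch with invented data (not from the linked article) of the two-step idea a query like SELECT region, SUM(amount), RANK() OVER (ORDER BY SUM(amount) DESC) performs: a true aggregate computed per group, then an analytic computed over the aggregated rows.

```python
sales = [("east", 10), ("east", 5), ("west", 20)]  # hypothetical (region, amount)

# Aggregate step: SUM(amount) GROUP BY region
totals = {}
for region, amount in sales:
    totals[region] = totals.get(region, 0) + amount

# Analytic step: RANK() OVER (ORDER BY SUM(amount) DESC),
# applied to the aggregated rows
ranked = sorted(totals.items(), key=lambda kv: -kv[1])
rank = {region: i + 1 for i, (region, _) in enumerate(ranked)}
```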
Hth
Girish Sharma