Custom window in analytic function
SELECT device_id,
       case_id,
       value,
       fo_value,
       nomination,
       symbol,
       cur_id,
       time,
       data_type,
       COALESCE ( value,
                  LAST_VALUE (value IGNORE NULLS)
                      OVER ( PARTITION BY device_id, cur_id, nomination
                             ORDER BY time
                             ROWS UNBOUNDED PRECEDING )
                  + SUM (fo_value)
                        OVER ( PARTITION BY device_id, cur_id, nomination
                               ORDER BY time
                               ROWS UNBOUNDED PRECEDING )
                ) AS val
FROM tab
DEVICE_ID CASE_ID VALUE FO_VALUE NOMINATION SYMBOL CUR_ID TIME DATA_TYPE VAL
8744 4901 -23 20 PLN 1 2009-08-25 11:00:00.000000 F -48
8744 4901 -2 20 PLN 1 2009-08-25 07:00:00.000000 F -25
8744 4901 100 -13 20 PLN 1 2009-08-24 23:00:00.000000 I 100
8744 4901 -6 20 PLN 1 2009-08-24 17:00:00.000000 F 45
8744 4901 -5 20 PLN 1 2009-08-24 14:00:00.000000 F 51
8744 4901 -5 20 PLN 1 2009-08-24 11:00:00.000000 F 56

The VAL column is defined as the last VALUE from which the cumulative sum of the FO_VALUE column is subtracted.
Every record with data_type = 'I' has a number in the VALUE column. What I need is to define a window in the expression so that the SUM of FO_VALUE adds only the values since the last row with a non-NULL VALUE; currently the query sums all the values in the partition.
I need something like a window running from the last row with a non-NULL VALUE up to the current row.
Desired output should look like this:
DEVICE_ID CASE_ID VALUE FO_VALUE NOMINATION SYMBOL CUR_ID TIME DATA_TYPE VAL
8744 4901 -23 20 PLN 1 2009-08-25 11:00:00.000000 F 62
8744 4901 -2 20 PLN 1 2009-08-25 07:00:00.000000 F 85
8744 4901 100 -13 20 PLN 1 2009-08-24 23:00:00.000000 I 87
8744 4901 -6 20 PLN 1 2009-08-24 17:00:00.000000 F 45
8744 4901 -5 20 PLN 1 2009-08-24 14:00:00.000000 F 51
8744 4901 -5 20 PLN 1 2009-08-24 11:00:00.000000 F 56

I'd very much appreciate any suggestions.
Hi,
Sorry, I don't understand the problem.
In particular, I don't see how you get the vals (56, 51, 45) for the earlier times. It looks like you should get NULL for val in the earliest 3 rows. Can you explain how you get those numbers?
Is it possible for a partition (device_id, cur_id, nomination) to have 2 or more rows where value is not NULL? If so, please post an example of the results you would want.
Rather than adjusting ROWS ... to get the results you want, it might be easier to adjust the PARTITION BY ...
For example, this gets the results you want on and after the row where value is not NULL:
WITH got_grp_num AS
(
    SELECT device_id, case_id, value, fo_value, nomination, symbol, cur_id, time, data_type,
           COUNT (value) OVER ( PARTITION BY device_id, cur_id, nomination
                                ORDER BY time
                              ) AS grp_num
    FROM tab
)
SELECT device_id, case_id, value, fo_value, nomination, symbol, cur_id, time, data_type,
       LAST_VALUE (value IGNORE NULLS) OVER ( PARTITION BY device_id, cur_id, nomination, grp_num
                                              ORDER BY time
                                            )
       + SUM (fo_value) OVER ( PARTITION BY device_id, cur_id, nomination, grp_num
                               ORDER BY time
                             ) AS val
FROM got_grp_num
ORDER BY device_id, cur_id, nomination,
         time DESC
;

As you can see, there is a new column in the PARTITION BY clauses that compute val. The new column is grp_num, which has a new value whenever there is a non-null value.
Edited by: Frank Kulash on Aug 28, 2009 1:27 PM
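Frank's grouping trick can be checked outside Oracle. Below is a minimal sketch in Python using the stdlib sqlite3 module, with a hypothetical five-row table; since SQLite has no IGNORE NULLS, MAX(value) over the group stands in for LAST_VALUE (value IGNORE NULLS), which works because each group contains exactly one non-NULL value (its anchor row):

```python
import sqlite3

# A running COUNT(value) bumps by 1 at every non-NULL value, so it labels
# each "non-NULL anchor row plus the NULL rows after it" with one grp_num.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tab (t INTEGER, value INTEGER, fo_value INTEGER);
INSERT INTO tab VALUES
  (1, 50, 5),   -- anchor row: the running total must restart here
  (2, NULL, 5),
  (3, NULL, 5),
  (4, 70, 5),   -- new anchor: sums restart again from this row
  (5, NULL, 5);
""")
rows = conn.execute("""
WITH got_grp_num AS (
  SELECT t, value, fo_value,
         COUNT(value) OVER (ORDER BY t) AS grp_num
  FROM tab
)
SELECT t,
       -- within a group only the anchor row has a non-NULL value,
       -- so MAX() stands in for LAST_VALUE ... IGNORE NULLS
       MAX(value) OVER (PARTITION BY grp_num)
       + SUM(fo_value) OVER (PARTITION BY grp_num ORDER BY t) AS val
FROM got_grp_num
ORDER BY t
""").fetchall()
print(rows)
```

The sums restart at t=4 instead of accumulating across the whole partition, which is exactly the behaviour the original poster asked for.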
Similar Messages
-
Issue with RANGE Windows in analytical functions
Hi,
I have a table abc described as
SQL> desc abc;
Name Null? Type
ID VARCHAR2(10)
NAME VARCHAR2(20)
VALUE NUMBER(10)
I am quite a bit confused by the behaviour of below written SQl query.
select ID,NAME,VALUE,SUM(VALUE) OVER(PARTITION BY ID order by value RANGE BETWEEN 2 PRECEDING and 2 following)a from abc;
The output obtained is
ID NAME VALUE A
BEN a1 1 4
BEN c3 3 13
BEN d4 4 12
BEN e5 5 12
BEN b2 17 17
TAB a1 11 11
TAB c3 20 20
TAB b2 100 100
I am getting the values correctly for the first partition ('BEN'). But could you please explain the values showing up in field A for the second partition ('TAB')?
It's not giving the sum of the range; instead it's simply giving the field value.
Thanks in advance.
regards,
Sreenath

Hi, Sreenath,
858163 wrote:
... select ID,NAME,VALUE,SUM(VALUE) OVER(PARTITION BY ID order by value RANGE BETWEEN 2 PRECEDING and 2 following)a from abc;
The output obtained is
ID NAME VALUE A
BEN a1 1 4
BEN c3 3 13
BEN d4 4 12
BEN e5 5 12
BEN b2 17 17
TAB a1 11 11
TAB c3 20 20
TAB b2 100 100
I am getting the values correctly for first partition of 'BEN'. But could you please explain the values which are coming in the for field A for the second partition of 'TAB'?
Its not giving the sum of the range, instead its simply giving the field value.

They happen to be the same in those cases.
Look at the row where value=11. The window is
RANGE BETWEEN 2 PRECEDING AND 2 FOLLOWING, which means
value BETWEEN 9 and 13.
The row with value=11 itself is the only row in that range.
The same goes for the row with value=20; there is no other row with value BETWEEN 18 and 22.
The same goes for the row with value=100; there is no other row with value BETWEEN 98 and 102.
What results did you want?
Perhaps you meant ROWS BETWEEN 2 PRECEDING AND 2 FOLLOWING.
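The RANGE-versus-ROWS point above can be demonstrated with any window-function engine. Here's a sketch in Python via stdlib sqlite3, on the 'TAB' rows from the question (numeric RANGE frames need SQLite 3.28+, bundled with recent Python builds):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE abc (id TEXT, name TEXT, value INTEGER);
INSERT INTO abc VALUES ('TAB','a1',11), ('TAB','c3',20), ('TAB','b2',100);
""")
# RANGE: the frame is defined by *values* (value - 2 .. value + 2);
# none of 11, 20, 100 has a neighbour that close, so each sum is the row itself.
range_sum = conn.execute("""
SELECT value, SUM(value) OVER (ORDER BY value
       RANGE BETWEEN 2 PRECEDING AND 2 FOLLOWING)
FROM abc ORDER BY value
""").fetchall()
# ROWS: the frame is defined by *row position* (2 rows before .. 2 rows after),
# so with only 3 rows every frame covers the whole partition.
rows_sum = conn.execute("""
SELECT value, SUM(value) OVER (ORDER BY value
       ROWS BETWEEN 2 PRECEDING AND 2 FOLLOWING)
FROM abc ORDER BY value
""").fetchall()
print(range_sum)
print(rows_sum)
```

This reproduces the behaviour Frank describes: under RANGE each 'TAB' row is alone in its window, while ROWS counts physical neighbours regardless of how far apart the values are.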
Edited by: Frank Kulash on May 11, 2011 10:12 AM
I hope this answers your question.
If not, post a little sample data (CREATE TABLE and INSERT statements, relevant columns only), and also post the results you want from that data (formatted, between code tags).
Explain, using specific examples, how you get those results from that data.
Always say which version of Oracle you're using. -
Discoverer Analytic Function windowing - errors and bad aggregation
I posted this first on Database General forum, but then I found this was the place to put it:
Hi, I'm using this kind of windowing function:
SUM(Receitas Especificas) OVER(PARTITION BY Tipo Periodo,Calculado,"Empresa Descrição (Operador)","Empresa Descrição" ORDER BY Ini Periodo RANGE BETWEEN INTERVAL '12' MONTH PRECEDING AND INTERVAL '12' MONTH PRECEDING )
If I use the "Receitas Especificas SUM" instead of
"Receitas Especificas" I get the following error running the report:
"an error occurred while attempting to run..."
This is not in accordance to:
http://www.boku.ac.at/oradoc/ias/10g(9.0.4)/bi.904/b10268.pdf
but ok, the version without SUM inside works.
Another problem is the fact that for analytic function with PARTITION BY,
this does not work (shows the cannot aggregate symbol) if we collapse or use "<All>" in page items.
But it works if we remove the item from the PARTITION BY and also remove from workbook.
It's even worse for windowing functions(query above), because the query
only works if we remove the item from the PARTITION BY but we have to show it on the workbook - and this MAKES NO SENSE... :(
Please help.

Unfortunately Discoverer doesn't show (correct) values for analytical functions when selecting "<All>" in a page item. I found out that it does work when you add the analytical function to the db-view instead of to the report as a calculation or as a calculated item on the folder.
The only problem is you have to name all page items in the PARTITION window, so when adding a page item to the report you have to change the db-view and alter the PARTITION window.
Michael -
Return multiple columns from an analytic function with a window
Hello,
Is it possible to obtain multiple columns from an analytic function with a window?
I have a table with 4 columns, an id, a test id, a date, and the result of a test. I'm using an analytic function to obtain, for each row, the current test value, and the maximum test value in the next 2 days like so:
select
id,
test_id,
date,
result,
MAX ( result ) over ( partition BY id, test_id order by date RANGE BETWEEN CURRENT ROW AND INTERVAL '2' DAY FOLLOWING ) AS max_result_next_two_day
from table
This is working fine, but I would like to also obtain the date when the max result occurs. I can see that this would be possible using a self join, but I'd like to know if there is a better way? I cannot use the FIRST_VALUE aggregate function and order by result, because the window function needs to be ordered by the date.
It would be a great help if you could provide any pointers/suggestions.
Thanks,
Dan
http://danieljamesscott.org

Assuming RESULT is a positive integer with a maximum width of, say, 10,
and assuming date has no time-component:
select
id
,test_id
,date
,result
,to_number(substr(max_result_with_date,1,10)) as max_result_next_two_day
,to_date(substr(max_result_with_date,11),'YYYYMMDD') as date_where_max_result_occurs
from (select
id
,test_id
,date
,result
,MAX(lpad(to_char(result),10,'0')||to_char(date,'YYYYMMDD'))
over (partition BY id, test_id
order by date
RANGE BETWEEN CURRENT ROW AND INTERVAL '2' DAY FOLLOWING )
AS max_result_with_date
from table) -
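The pack-and-unpack trick above can be sketched in Python via stdlib sqlite3 with hypothetical data. Oracle's RANGE ... INTERVAL '2' DAY FOLLOWING is approximated here with ROWS BETWEEN CURRENT ROW AND 2 FOLLOWING, which only matches because the sample dates are consecutive days:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE results (id INTEGER, d TEXT, result INTEGER);
INSERT INTO results VALUES (1, '2024-01-01', 7),
                           (1, '2024-01-02', 9),
                           (1, '2024-01-03', 4);
""")
rows = conn.execute("""
SELECT d,
       CAST(substr(packed, 1, 10) AS INTEGER) AS max_result,  -- first 10 chars: the result
       substr(packed, 11)                     AS max_date     -- the rest: its date
FROM (SELECT d,
             -- zero-pad the result to a fixed width and append the date;
             -- MAX of the string then picks the max result, date riding along
             MAX(printf('%010d', result) || d)
                 OVER (ORDER BY d ROWS BETWEEN CURRENT ROW AND 2 FOLLOWING)
                 AS packed
      FROM results)
ORDER BY d
""").fetchall()
print(rows)
```

One column carries both values, and the outer query splits them apart again, avoiding the self-join the poster wanted to skip.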
A Job for 'PARTITION BY' Analytical Function?
Hi,
I'm still a little fuzzy on using partitions but this looks like a possible candidate to me.
I need to count the number of different customers that visit an office in a single day. If a customer visits an office more than once in a single day that counts as 1.
Input
OFFICE CUSTOMER TRAN_DATE
1 11 1-Apr-09
1 11 1-Apr-09
1 11 1-Apr-09
1 11 2-Apr-09
2 22 2-Apr-09
2 22 2-Apr-09
2 33 2-Apr-09
select a.office as "OFFICE", a.customer AS "CUSTOMER", a.tran_date AS "TRAN_DATE", COUNT(*)
FROM
(SELECT 1 AS "OFFICE", 11 AS "CUSTOMER", '01-APR-2009' AS "TRAN_DATE" FROM DUAL
UNION ALL
SELECT 1 , 11 , '01-APR-2009' FROM DUAL
UNION ALL
SELECT 1 , 11 , '01-APR-2009' FROM DUAL
UNION ALL
SELECT 1 , 11 , '02-APR-2009' FROM DUAL
UNION ALL
SELECT 2 , 22 , '02-APR-2009' FROM DUAL
UNION ALL
SELECT 2 , 22 , '02-APR-2009' FROM DUAL
UNION ALL
SELECT 2 , 33 , '02-APR-2009' FROM DUAL
) a;
Desired Result
1 1-Apr-09 1
1 2-Apr-09 1
2 2-Apr-09 2
Is this possible with partitions, do I need to use subqueries, or some other method?
Thank You in Advance for Your Help,
Lou
Edited by: Wind In Face on Apr 15, 2009 1:34 PM

"I wanted to use PARTITION BY instead of what John suggested because it is my understanding that PARTITION BY will be faster"
It may be, or it may not be. As Frank pointed out, analytic functions have their uses, and aggregate functions have theirs. In some places those uses do overlap, but not always. Your query is equivalent to mine, that is, it returns the same result set in this case; however, there are some differences.
For the relatively small amount of data I generated, it is probably not significant, but the analytic version does two sorts (one unique) while my aggregate version does only one.
SQL> CREATE TABLE test (office NUMBER, customer NUMBER, tran_dt DATE);
Table created.
SQL> INSERT /*+ APPEND */ INTO test
2 SELECT MOD(rownum, 10)+1, MOD(rownum, 121)+1, TRUNC(sysdate+MOD(rownum, 42))
3 FROM all_objects;
18135 rows created.
SQL> COMMIT;
Commit complete.
SQL> SELECT office, tran_dt, COUNT(DISTINCT customer) cust_count
2 FROM test
3 GROUP BY office, tran_dt;
210 rows selected.
Execution Plan
Plan hash value: 2407667464
| Id | Operation | Name |
| 0 | SELECT STATEMENT | |
| 1 | SORT GROUP BY | |
| 2 | TABLE ACCESS FULL| TEST |
Statistics
0 recursive calls
0 db block gets
27 consistent gets
0 physical reads
0 redo size
6061 bytes sent via SQL*Net to client
631 bytes received via SQL*Net from client
15 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
210 rows processed
SQL> SELECT DISTINCT office office, tran_dt tran_date,
2 COUNT(DISTINCT customer) OVER(PARTITION BY office, tran_dt) cust_count
3 FROM test;
210 rows selected.
Execution Plan
Plan hash value: 1303194651
| Id | Operation | Name |
| 0 | SELECT STATEMENT | |
| 1 | SORT UNIQUE | |
| 2 | WINDOW SORT | |
| 3 | TABLE ACCESS FULL| TEST |
Statistics
0 recursive calls
0 db block gets
27 consistent gets
0 physical reads
0 redo size
6063 bytes sent via SQL*Net to client
631 bytes received via SQL*Net from client
15 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
210 rows processed

For a larger result set, this could make a significant difference.
In general, my preference is use the simplest construct that will work.
John -
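For the original question in this thread, the plain aggregate version reproduces the desired result directly. A sketch in Python via stdlib sqlite3, using the poster's sample rows (dates as ISO strings for portability):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test (office INTEGER, customer INTEGER, tran_dt TEXT);
INSERT INTO test VALUES
  (1, 11, '2009-04-01'), (1, 11, '2009-04-01'), (1, 11, '2009-04-01'),
  (1, 11, '2009-04-02'),
  (2, 22, '2009-04-02'), (2, 22, '2009-04-02'), (2, 33, '2009-04-02');
""")
# COUNT(DISTINCT customer) per (office, day): repeat visits by the same
# customer on the same day collapse to 1.
rows = conn.execute("""
SELECT office, tran_dt, COUNT(DISTINCT customer) AS cust_count
FROM test
GROUP BY office, tran_dt
ORDER BY office, tran_dt
""").fetchall()
print(rows)
```

This matches the "Desired Result" block above: office 1 counts one distinct customer on each day, and office 2 counts two on 2-Apr.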
Completion of data series by analytical function
I have the pleasure of learning the benefits of analytical functions and hope to get some help
The case is as follows:
Different projects gets funds from different sources over several years, but not from each source every year.
I want to produce the cumulative sum of funds for each source for each year for each project, but so far I have not been able to do so for years without fund for a particular source.
I have used this syntax:
SUM(fund) OVER(PARTITION BY project, source ORDER BY year ROWS UNBOUNDED PRECEDING)
I have also experimented with different variations of the window clause, but without any luck.
This is the last step in a big job I have been working on for several weeks, so I would be very thankful for any help.

If you want to use analytic functions, and if you are on the 10.1.3.3 version of BI EE, then try using Evaluate and Evaluate_aggr, which support native database functions. I have blogged about it here: http://oraclebizint.wordpress.com/2007/09/10/oracle-bi-ee-10133-support-for-native-database-functions-and-aggregates/. But in your case all you might want to do is have a column with the following function.
SUM(Measure BY Col1, Col2...)
I have also blogged about it here http://oraclebizint.wordpress.com/2007/10/02/oracle-bi-ee-101332-varying-aggregation-based-on-levels-analytic-functions-equivalence/.
Thanks,
Venkat
http://oraclebizint.wordpress.com -
Custom window not displaying it's custom template
I have created a custom window class named CrackenWindow, and a custom template in the Generic.xaml resource dictionary.
After I create a new window, which inherits CrackenWindow, nothing happens. The visual remains the same and I cannot use CrackenWindow's extra functionality. If someone has the time, please review what I'm doing wrong. I have uploaded the code to https://onedrive.live.com/redir?resid=fa5f36f7b4d34c12%21106.
Thank you for your time,
suzi9spal

I was kind of put off by seeing a collection of stuff there when I took a quick look on OneDrive.
Are you sure you want this thing to be a custom control rather than just a window that has a template?
Custom controls ought to be the choice of last resort.
If you explain what you're trying to do, maybe I/we can suggest an alternative route.
Like what extra functionality?
You can add eventhandlers, dependencyproperties and whatnot in a base class.
For example:
public class BaseFancyWindow : Window
{
    public BaseFancyWindow()
    {
        CloseCommand = new RelayCommand(CloseExecute);
    }

    public RelayCommand CloseCommand { get; set; }

    private void CloseExecute()
    {
        this.Close();
    }
}
and use that
<local:BaseFancyWindow x:Class="wpf_WindowChrome.Window6"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="clr-namespace:wpf_WindowChrome"
Title="Finished Fancy Window" Height="300" Width="300"
Style="{StaticResource FinishedWindow}"
>
<Grid>
<TextBlock Text="This is some content in the window"/>
</Grid>
</local:BaseFancyWindow>
And I have a template, which is just in a resource dictionary, that controls how that window looks.
Hope that helps.
Technet articles: Uneventful MVVM;
All my Technet Articles -
Analytic function to retrieve a value one year ago
Hello,
I'm trying to find an analytic function to get a value on another row by looking on a date with Oracle 11gR2.
I have a table with a date_id (truncated date), a flag and a measure. For each date, I have at least one row (sometimes 2), so it is gapless.
I would like to find analytic functions to show for each date :
sum of the measure for that date
sum of the measure one week ago
sum of the measure one year ago
As it is gapless, I managed to do it for the week by doing a group by date in a subquery and using a LAG with offset set to 7 on top of it (see below).
However I'm struggling on how to do that for the data one year ago as we might have leap years. I cannot simply set the offset to 365.
Is it possible to do it with a RANGE BETWEEN window clause? I can't manage to have it working with dates.
Week :LAG with offset 7
SQL Fiddle
or
create table daily_counts
( date_id        date,
  internal_flag  number,
  measure1       number
);
insert into daily_counts values ('01-Jan-2013', 0, 8014);
insert into daily_counts values ('01-Jan-2013', 1, 2);
insert into daily_counts values ('02-Jan-2013', 0, 1300);
insert into daily_counts values ('02-Jan-2013', 1, 37);
insert into daily_counts values ('03-Jan-2013', 0, 19);
insert into daily_counts values ('03-Jan-2013', 1, 14);
insert into daily_counts values ('04-Jan-2013', 0, 3);
insert into daily_counts values ('05-Jan-2013', 0, 0);
insert into daily_counts values ('05-Jan-2013', 1, 1);
insert into daily_counts values ('06-Jan-2013', 0, 0);
insert into daily_counts values ('07-Jan-2013', 1, 3);
insert into daily_counts values ('08-Jan-2013', 0, 33);
insert into daily_counts values ('08-Jan-2013', 1, 9);
commit;
select
date_id,
total1,
LAG(total1, 7) OVER(ORDER BY date_id) total_one_week_ago
from
  ( select
      date_id,
      SUM(measure1) total1
    from daily_counts
    group by date_id
  )
order by 1;
Year : no idea?
I can't give a gapless example, would be too long but if there is a solution with the date directly :
SQL Fiddle
or add this to the schema above :
insert into daily_counts values ('07-Jan-2012', 0, 11);
insert into daily_counts values ('07-Jan-2012', 1, 1);
insert into daily_counts values ('08-Jan-2012', 1, 4);
Thank you for your help.
Floyd

Hi,
Sorry, I'm not sure I understand the problem.
If you are certain that there is at least 1 row for every day, then you can be sure that the GROUP BY will produce exactly 1 row per day, and you can use LAG (total1, 365) just like you already use LAG (total1, 7).
Are you concerned about leap years? That is, when the day is March 1, 2016, do you want the total_one_year_ago column to reflect March 1, 2015, which was 366 days earlier? In that case, use
date_id - ADD_MONTHS (date_id, -12)
instead of 365.
LAG only works with an exact number, but you can use RANGE BETWEEN with other analytic functions, such as MIN or SUM:
SELECT DISTINCT
date_id
, SUM (measure1) OVER (PARTITION BY date_id) AS total1
, SUM (measure1) OVER ( ORDER BY date_id
RANGE BETWEEN 7 PRECEDING
AND 7 PRECEDING
) AS total1_one_week_ago
, SUM (measure1) OVER ( ORDER BY date_id
RANGE BETWEEN 365 PRECEDING
AND 365 PRECEDING
) AS total1_one_year_ago
FROM daily_counts
ORDER BY date_id
Again, use date arithmetic instead of the hard-coded 365, if that's an issue.
As Hoek said, it really helps to post the exact results you want from the given sample data. You're miles ahead of the people who don't even post the sample data, though.
You're right not to post hundreds of INSERT statements to get a year's data. Here's one way to generate sample data for lots of rows at the same time:
-- Put a 0 into the table for every day in 2012
INSERT INTO daily_counts (date_id, measure1)
SELECT DATE '2011-12-31' + LEVEL
, 0
FROM dual
CONNECT BY LEVEL <= 366 -
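Both points above, the leap-year hazard of a fixed 365-day offset and the row-generator idea, can be sketched in Python via stdlib sqlite3. SQLite has neither ADD_MONTHS nor CONNECT BY, so julianday arithmetic and a recursive CTE stand in for them:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# "One year ago" spans 366 days when a Feb 29 falls in between, which is
# why a hard-coded LAG(total1, 365) goes wrong after a leap day.
span = conn.execute(
    "SELECT CAST(julianday('2016-03-01') - julianday('2015-03-01') AS INTEGER)"
).fetchone()[0]
print(span)

# SQLite stand-in for Frank's CONNECT BY LEVEL <= 366 row generator:
# a recursive CTE producing every day of 2012 (a leap year, hence 366 rows).
days = conn.execute("""
WITH RECURSIVE cal(d) AS (
  SELECT date('2012-01-01')
  UNION ALL
  SELECT date(d, '+1 day') FROM cal WHERE d < '2012-12-31'
)
SELECT COUNT(*), MIN(d), MAX(d) FROM cal
""").fetchone()
print(days)
```

The 366-day span is exactly the case where date arithmetic (Oracle's `date_id - ADD_MONTHS(date_id, -12)`) beats a constant offset.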
Using analytical function to calculate concurrency between date range
Folks,
I'm trying to use analytical functions to come up with a query that gives me the
concurrency of jobs executing between a date range.
For example:
JOB100 - started at 9AM - stopped at 11AM
JOB200 - started at 10AM - stopped at 3PM
JOB300 - started at 12PM - stopped at 2PM
The query would tell me that JOB1 ran with a concurrency of 2 because JOB1 and JOB2
were running started and finished within the same time. JOB2 ran with the concurrency
of 3 because all jobs ran within its start and stop time. The output would look like this.
JOB START STOP CONCURRENCY
=== ==== ==== =========
100 9AM 11AM 2
200 10AM 3PM 3
300 12PM 2PM 2
I've been looking at this post, and this one if very similar...
Analytic functions using window date range
Here is the sample data..
CREATE TABLE TEST_JOB
( jobid        NUMBER,
  created_time DATE,
  start_time   DATE,
  stop_time    DATE
);
insert into TEST_JOB values (100, sysdate -1, to_date('05/04/08 09:00:00','MM/DD/YY hh24:mi:ss'), to_date('05/04/08 11:00:00','MM/DD/YY hh24:mi:ss'));
insert into TEST_JOB values (200, sysdate -1, to_date('05/04/08 10:00:00','MM/DD/YY hh24:mi:ss'), to_date('05/04/08 13:00:00','MM/DD/YY hh24:mi:ss'));
insert into TEST_JOB values (300, sysdate -1, to_date('05/04/08 12:00:00','MM/DD/YY hh24:mi:ss'), to_date('05/04/08 14:00:00','MM/DD/YY hh24:mi:ss'));
select * from test_job;
JOBID|CREATED_TIME |START_TIME |STOP_TIME
----------|--------------|--------------|--------------
100|05/04/08 09:28|05/04/08 09:00|05/04/08 11:00
200|05/04/08 09:28|05/04/08 10:00|05/04/08 13:00
300|05/04/08 09:28|05/04/08 12:00|05/04/08 14:00
Any help with this query would be greatly appreciated.
thanks.
-peter

After some checking, the MODEL rule wasn't working exactly as expected.
I believe it's working right now. I'm posting a self-contained example for completeness' sake. I use two functions to convert back and forth between epoch Unix timestamps, so
I'll post them here as well.
Like I said I think this works okay, but any feedback is always appreciated.
-peter
CREATE OR REPLACE FUNCTION date_to_epoch(p_dateval IN DATE)
RETURN NUMBER
AS
BEGIN
return (p_dateval - to_date('01/01/1970','MM/DD/YYYY')) * (24 * 3600);
END;
CREATE OR REPLACE FUNCTION epoch_to_date (p_epochval IN NUMBER DEFAULT 0)
RETURN DATE
AS
BEGIN
return to_date('01/01/1970','MM/DD/YYYY') + (( p_epochval) / (24 * 3600));
END;
DROP TABLE TEST_MODEL3 purge;
CREATE TABLE TEST_MODEL3
( jobid NUMBER,
start_time NUMBER,
end_time NUMBER);
insert into TEST_MODEL3
VALUES (300,date_to_epoch(to_date('05/07/2008 10:00','MM/DD/YYYY hh24:mi')),
date_to_epoch(to_date('05/07/2008 19:00','MM/DD/YYYY hh24:mi')));
insert into TEST_MODEL3
VALUES (200,date_to_epoch(to_date('05/07/2008 09:00','MM/DD/YYYY hh24:mi')),
date_to_epoch(to_date('05/07/2008 12:00','MM/DD/YYYY hh24:mi')));
insert into TEST_MODEL3
VALUES (400,date_to_epoch(to_date('05/07/2008 10:00','MM/DD/YYYY hh24:mi')),
date_to_epoch(to_date('05/07/2008 14:00','MM/DD/YYYY hh24:mi')));
insert into TEST_MODEL3
VALUES (500,date_to_epoch(to_date('05/07/2008 11:00','MM/DD/YYYY hh24:mi')),
date_to_epoch(to_date('05/07/2008 16:00','MM/DD/YYYY hh24:mi')));
insert into TEST_MODEL3
VALUES (600,date_to_epoch(to_date('05/07/2008 15:00','MM/DD/YYYY hh24:mi')),
date_to_epoch(to_date('05/07/2008 22:00','MM/DD/YYYY hh24:mi')));
insert into TEST_MODEL3
VALUES (100,date_to_epoch(to_date('05/07/2008 09:00','MM/DD/YYYY hh24:mi')),
date_to_epoch(to_date('05/07/2008 23:00','MM/DD/YYYY hh24:mi')));
commit;
SELECT jobid,
epoch_to_date(start_time)start_time,
epoch_to_date(end_time)end_time,
n concurrency
FROM TEST_MODEL3
MODEL
DIMENSION BY (start_time,end_time)
MEASURES (jobid,0 n)
( n[any,any] =
    count(*)[start_time <= cv(start_time), end_time >= cv(start_time)]
  + count(*)[start_time > cv(start_time) and start_time <= cv(end_time), end_time >= cv(start_time)]
)
ORDER BY start_time;
The results look like this:
JOBID|START_TIME|END_TIME |CONCURRENCY
----------|---------------|--------------|-------------------
100|05/07/08 09:00|05/07/08 23:00| 6
200|05/07/08 09:00|05/07/08 12:00| 5
300|05/07/08 10:00|05/07/08 19:00| 6
400|05/07/08 10:00|05/07/08 14:00| 5
500|05/07/08 11:00|05/07/08 16:00| 6
600|05/07/08 15:00|05/07/08 22:00| 4 -
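The concurrency numbers in this thread can also be produced without the MODEL clause, via a plain interval-overlap self-join. A sketch in Python using stdlib sqlite3, on the 9AM-11AM / 10AM-3PM / 12PM-2PM example from the question (hours as plain integers for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test_job (jobid INTEGER, start_t INTEGER, stop_t INTEGER);
INSERT INTO test_job VALUES (100, 9, 11), (200, 10, 15), (300, 12, 14);
""")
rows = conn.execute("""
SELECT a.jobid, COUNT(*) AS concurrency
FROM test_job a
JOIN test_job b
  ON b.start_t < a.stop_t     -- b starts before a ends
 AND b.stop_t  > a.start_t    -- and b ends after a starts: intervals overlap
GROUP BY a.jobid
ORDER BY a.jobid
""").fetchall()
print(rows)
```

Each job overlaps itself, so the self-match is already included in its own count, matching the expected output in the question (2, 3, 2). For large tables the MODEL or analytic approaches may scale better than this O(n^2) join.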
How do you get a topic to open in a custom window when accessed from the index?
I right click on an index key word and open Properties. There is only one topic associated with this key word. I click on the Advanced tab, select the custom window that I created. Click OK, compile, etc.
When I try to access this topic from the index, it still opens in the default pane. When I access it from the TOC, it opens in the custom window. How can I get the topic to open in the custom window from the index?
Thanks!
Sue

Hello,
The link might not work in Preview (When the option to open link in new tab is checked) however it will work if you do a File>Preview page/site in Browser.
Muse Preview has only one tab and cannot open another tab/window, which is why this functionality does not work in Muse Preview, but it works in preview in the browser and on the published site.
Regards,
Sachin -
Analytic Functions with GROUP-BY Clause?
I'm just getting acquainted with analytical functions. I like them. I'm having a problem, though. I want to sum up the results, but either I'm running into a limitation or I'm writing the SQL wrong. Any hints for me?
Hypothetical Table SALES, consisting of DAY_ID, PRODUCT_ID, PURCHASER_ID, and PURCHASE_PRICE, lists all the sales.
Hypothetical Business Question: Product prices can fluctuate over the course of a day. I want to know how much per day I would have made had I sold one each of all my products at their max price for that day. Silly question, I know, but it's the best I could come up with to show the problem.
INSERT INTO SALES VALUES(1,1,1,1.0);
INSERT INTO SALES VALUES(1,1,1,2.0);
INSERT INTO SALES VALUES(1,2,1,3.0);
INSERT INTO SALES VALUES(1,2,1,4.0);
INSERT INTO SALES VALUES(2,1,1,5.0);
INSERT INTO SALES VALUES(2,1,1,6.0);
INSERT INTO SALES VALUES(2,2,1,7.0);
INSERT INTO SALES VALUES(2,2,1,8.0);
COMMIT;
Day 1: If I had sold one product 1 at $2 and one product 2 at $4, I would have made $6.
Day 2: If I had sold one product 1 at $6 and one product 2 at $8, I would have made $14.
The desired result set is:
DAY_ID MY_MEASURE
1 6
2 14

The following SQL gets me tantalizingly close:
SELECT DAY_ID,
MAX(PURCHASE_PRICE)
KEEP(DENSE_RANK FIRST ORDER BY PURCHASE_PRICE DESC)
OVER(PARTITION BY DAY_ID, PRODUCT_ID) AS MY_MEASURE
FROM SALES
ORDER BY DAY_ID
DAY_ID MY_MEASURE
1 2
1 2
1 4
1 4
2 6
2 6
2 8
2 8

But as you can see, my result set is "longer" than I wanted it to be. I want a single row per DAY_ID. I understand what the analytical functions are doing here, and I acknowledge that I am "not doing it right." I just can't seem to figure out how to make it work.
Trying to do a sum() of max() simply does not work, nor does any semblance of a group-by clause that I can come up with. Unfortunately, as soon as I add the windowing function, I am no longer allowed to use group-by expressions (I think).
I am using a reporting tool, so unfortunately using things like inline views are not an option. I need to be able to define "MY_MEASURE" as something the query tool can apply the SUM() function to in its generated SQL.
(Note: The actual problem is slightly less easy to conceptualize, but solving this conundrum will take me much closer to solving the other.)
I humbly solicit your collective wisdom, oh forum.

Thanks, SY. I went that way originally too. Unfortunately that's no different from what I could get without the RANK function.
SELECT DAY_ID,
PRODUCT_ID,
MAX(PURCHASE_PRICE) MAX_PRICE
FROM SALES
GROUP BY DAY_ID,
PRODUCT_ID
ORDER BY DAY_ID,
PRODUCT_ID
DAY_ID PRODUCT_ID MAX_PRICE
1 1 2
1 2 4
2 1 6
2 2 8 -
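The standard way to collapse that per-product result down to one row per day is a sum of per-group maxima, i.e. the inline view the poster's reporting tool unfortunately rules out. As a correctness check, here is a sketch in Python via stdlib sqlite3 on the thread's sample rows, showing the two-level aggregate produces the desired numbers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (day_id INTEGER, product_id INTEGER,
                    purchaser_id INTEGER, purchase_price REAL);
INSERT INTO sales VALUES
  (1,1,1,1.0), (1,1,1,2.0), (1,2,1,3.0), (1,2,1,4.0),
  (2,1,1,5.0), (2,1,1,6.0), (2,2,1,7.0), (2,2,1,8.0);
""")
rows = conn.execute("""
SELECT day_id, SUM(max_price) AS my_measure
FROM (SELECT day_id, product_id,
             MAX(purchase_price) AS max_price   -- best price per product per day
      FROM sales
      GROUP BY day_id, product_id)
GROUP BY day_id                                 -- then add the products up
ORDER BY day_id
""").fetchall()
print(rows)
```

Day 1 yields 2 + 4 = 6 and day 2 yields 6 + 8 = 14, the exact desired result set from the question.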
Using Analytic functions...
Hi All,
I need help in writing a query using analytic functions.
Foll is my scenario. I have a table cust_points
CREATE TABLE cust_points
( cust_id        varchar2(10),
  pts_dt         date,
  reward_points  number(3),
  bal_points     number(3)
);

insert into cust_points values ('ABC', '01-MAY-2004', 5, 15);
insert into cust_points values ('ABC', '05-MAY-2004', 3, 12);
insert into cust_points values ('ABC', '09-MAY-2004', 3, 9);
insert into cust_points values ('XYZ', '02-MAY-2004', 8, 4);
insert into cust_points values ('XYZ', '03-MAY-2004', 5, 1);
insert into cust_points values ('JKL', '10-MAY-2004', 5, 11);
I want a result set which shows, for each customer, the sum of his/her reward points,
but the balance points as of the last date. So for the above I should have the following results:
cust_id reward_pts bal_points
ABC 11 9
XYZ 13 1
JKL 5 11
I have tried using last_value(), e.g.
Select cust_id, sum(reward_points), last_value(bal_points) over (partition by cust_id)... but I run into grouping errors.
Can anyone help?

Try this...
SELECT a.pkcol,
nvl(SUM(b.col1),0) col1,
nvl(SUM(b.col2),0) col2,
nvl(SUM(b.col3),0) col3
FROM table1 a, table2 b, table3 c
WHERE a.pkcol = b.plcol(+)
AND a.pkcol = c.pkcol
GROUP BY a.pkcol;
SQL> select a.deptno,
2 nvl((select sum(sal) from test_emp b where a.deptno = b.deptno),0) col1,
3 nvl((select sum(comm) from test_emp b where a.deptno = b.deptno),0) col2
4 from test_dept a;
DEPTNO COL1 COL2
10 12786 0
20 13237 738
30 11217 2415
40 0 0
99 0 0
SQL> select a.deptno,
2 nvl(sum(b.sal),0) col1,
3 nvl(sum(b.comm),0) col2
4 from test_dept a,test_emp b
5 where a.deptno = b.deptno
6 group by a.deptno;
DEPTNO COL1 COL2
30 11217 2415
20 13237 738
10 12786 0
SQL> select a.deptno,
2 nvl(sum(b.sal),0) col1,
3 nvl(sum(b.comm),0) col2
4 from test_dept a,test_emp b
5 where a.deptno = b.deptno(+)
6 group by a.deptno;
DEPTNO COL1 COL2
10 12786 0
20 13237 738
30 11217 2415
40 0 0
99 0 0
SQL> -
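The original cust_points question has a direct window-function answer that the reply above skips past: sum the reward points over the whole partition, and pick the balance from the latest date with FIRST_VALUE over a descending date order. A sketch in Python via stdlib sqlite3, with the thread's data as ISO dates:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cust_points (cust_id TEXT, pts_dt TEXT,
                          reward_points INTEGER, bal_points INTEGER);
INSERT INTO cust_points VALUES
  ('ABC','2004-05-01',5,15), ('ABC','2004-05-05',3,12), ('ABC','2004-05-09',3,9),
  ('XYZ','2004-05-02',8,4),  ('XYZ','2004-05-03',5,1),
  ('JKL','2004-05-10',5,11);
""")
rows = conn.execute("""
SELECT DISTINCT cust_id,
       SUM(reward_points) OVER (PARTITION BY cust_id) AS reward_pts,
       -- balance on the customer's latest date: first row when sorted DESC
       FIRST_VALUE(bal_points) OVER
           (PARTITION BY cust_id ORDER BY pts_dt DESC) AS bal_points
FROM cust_points
ORDER BY cust_id
""").fetchall()
print(rows)
```

Every row in a partition gets the same two window values, so DISTINCT collapses each customer to one line: ABC 11/9, JKL 5/11, XYZ 13/1, matching the desired output in the question.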
Analytic Functions - Need resultset only in one select
Hello Experts,
Problem Definition: Using Analytic Function, get Total sales for the Product P1 and Customer C1 [Total sales for the customer itself] in one line. I want to restrict the ResultSet of the query to Product P1, please look at the data below, queries and problems..
Data
Customer Product Qtr Sales
C1 P1 19991 100.00
C1 P1 19992 125.00
C1 P1 19993 175.00
C1 P1 19994 300.00
C1 P2 19991 100.00
C1 P2 19992 125.00
C1 P2 19993 175.00
C1 P2 19994 300.00
C2 P1 19991 100.00
C2 P1 19992 125.00
C2 P1 19993 175.00
C2 P1 19994 300.00
Problem, I want to display....
Customer Product ProdSales CustSales
C1 P1 700 1400
But without using an outer query, i.e. please look below for the query that returns this result with two selects. I want this result in one query only.
Select * From ----*** want to avoid this... ***----
(Select Customer,Product,
Sum(Sales) ProdSales,
Sum(Sum(Sales)) Over(Partition By Customer) CustSales
From t1
Where customer='C1')
Where
Product='P1' ;
Also, I want to avoid Hard coding of P1 in the select clause....
I mean, I can do it in one shot/select, but look at the query below, it uses P1 in the select clause, which is No No!! P1 is allowed only in Where or Having ..
Select Customer,Decode(Product, 'P1','P1','P1') Product,
Decode(Product,'P1',Sales,0) ProdSales,
Sum(Sum(Sales)) Over (Partition By Customer ) CustSales
From t1
Where customer='C1' ;
This will get me what I want, but as I said earlier, I want to avoid using P1 in the
Select clause..
Goal is to Avoid using
1-> Two Select/Outer Query/In Line Views
2-> Product 'P1' in the Select clause...No hard coded product name in the select clause and group by clause..
Thanks
-Dhaval

Select * From ----*** want to avoid this... ***----
(Select Customer,Product,
Sum(Sales) ProdSales,
Sum(Sum(Sales)) Over(Partition By Customer)
CustSales
From t1
Where customer='C1')
Where
Product='P1' ;
Goal is to Avoid using
1-> Two Select/Outer Query/In Line Views

Why?
Restrict Query Resultset which uses Analytic Function
Gents,
Problem Definition: Using Analytic Function, get Total sales for the Product P1
and Customer C1 [Total sales for the customer itself] in one line.
I want to restrict the ResultSet of the query to Product P1,
please look at the data below, queries and problems..
Data
Customer Product Qtr Sales
C1 P1 19991 100.00
C1 P1 19992 125.00
C1 P1 19993 175.00
C1 P1 19994 300.00
C1 P2 19991 100.00
C1 P2 19992 125.00
C1 P2 19993 175.00
C1 P2 19994 300.00
C2 P1 19991 100.00
C2 P1 19992 125.00
C2 P1 19993 175.00
C2 P1 19994 300.00
Problem, I want to display....
Customer Product ProdSales CustSales
C1 P1 700 1400
But without using an outer query, i.e. please look below for the query that
returns this result with two selects. I want this result in one query only.
Select * From ----*** want to avoid this... ***----
(Select Customer,Product,
Sum(Sales) ProdSales,
Sum(Sum(Sales)) Over(Partition By Customer) CustSales
From t1
Where customer='C1')
Where
Product='P1' ;
Also, I want to avoid Hard coding of P1 in the select clause....
I mean, I can do it in one shot/select, but look at the query below, it uses
P1 in the select clause, which is No No!! P1 is allowed only in Where or Having ..
Select Customer,Decode(Product, 'P1','P1','P1') Product,
Decode(Product,'P1',Sales,0) ProdSales,
Sum(Sum(Sales)) Over (Partition By Customer ) CustSales
From t1
Where customer='C1' ;
This will get me what I want, but as I said earlier, I want to avoid using P1 in the
Select clause..
Goal is to Avoid using
1-> Two Select/Outer Query/In Line Views
2-> Product 'P1' in the Select clause...
Thanks
-Dhaval Rasania
I don't understand goal number 1 of not using an inline view.
What is the harm?
-
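The reply's skepticism is easy to justify: the analytic CustSales has to see both products' rows, so any Product filter must run after the window function, and the inline view is the natural place to put it. Here is a minimal sketch using SQLite in-memory as a stand-in for Oracle (table t1 and the sample data come from the post; SQLite needs the window computed over a pre-aggregated subquery instead of Oracle's Sum(Sum(Sales)) Over shorthand, and CASE stands in for DECODE):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (customer TEXT, product TEXT, qtr INT, sales INT)")
con.executemany(
    "INSERT INTO t1 VALUES (?,?,?,?)",
    [(c, p, q, s)
     for c, p in [("C1", "P1"), ("C1", "P2"), ("C2", "P1")]
     for q, s in [(19991, 100), (19992, 125), (19993, 175), (19994, 300)]])

# Inline-view form: the Product filter runs *after* the analytic sum,
# so CustSales still covers every product of the customer.
inline_view = con.execute("""
    SELECT customer, product, prodsales, custsales
    FROM (SELECT customer, product, prodsales,
                 SUM(prodsales) OVER (PARTITION BY customer) AS custsales
          FROM (SELECT customer, product, SUM(sales) AS prodsales
                FROM t1
                WHERE customer = 'C1'
                GROUP BY customer, product))
    WHERE product = 'P1'""").fetchall()

# One-pass form via conditional aggregation; it avoids the inline view
# but still hard-codes 'P1' in the SELECT list, which is exactly what
# the poster wants to rule out.
one_pass = con.execute("""
    SELECT customer, 'P1' AS product,
           SUM(CASE WHEN product = 'P1' THEN sales ELSE 0 END) AS prodsales,
           SUM(sales) AS custsales
    FROM t1
    WHERE customer = 'C1'
    GROUP BY customer""").fetchall()

print(inline_view, one_pass)  # both: [('C1', 'P1', 700, 1400)]
```

Both forms return the desired row, which is the reply's point: the inline view costs nothing here, so there is no obvious harm in using it.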
Speed up query with analytic function
Hi
how can I speed up the query below?
Almost all of the time is spent in the analytic functions (the WINDOW SORT step).
Thanks for your help
11.2.0.1
Rows Row Source Operation
28987 HASH UNIQUE (cr=12677 pr=155778 pw=109730 time=25010 us cost=5502 size=3972960 card=14880)
1668196 WINDOW SORT (cr=12677 pr=155778 pw=109730 time=890411840 us cost=5502 size=3972960 card=14880)
1668196 HASH JOIN RIGHT OUTER (cr=12677 pr=0 pw=0 time=1069165 us cost=3787 size=3972960 card=14880)
30706 TABLE ACCESS FULL FLO_FML_EVENT (cr=270 pr=0 pw=0 time=7420 us cost=56 size=814158 card=30154)
194733 HASH JOIN RIGHT OUTER (cr=12407 pr=0 pw=0 time=571145 us cost=3730 size=3571200 card=14880)
613 VIEW (cr=342 pr=0 pw=0 time=489 us cost=71 size=23840 card=745)
613 HASH UNIQUE (cr=342 pr=0 pw=0 time=244 us cost=71 size=20115 card=745)
745 WINDOW SORT (cr=342 pr=0 pw=0 time=1736 us cost=71 size=20115 card=745)
745 MAT_VIEW ACCESS FULL MVECRF_CUR_QUERY (cr=342 pr=0 pw=0 time=1736 us cost=69 size=20115 card=745)
194733 HASH JOIN (cr=12065 pr=0 pw=0 time=431813 us cost=3658 size=3095040 card=14880)
43 MAT_VIEW ACCESS FULL MVECRF_VISIT_REVS (cr=3 pr=0 pw=0 time=0 us cost=2 size=946 card=43)
194733 HASH JOIN OUTER (cr=12062 pr=0 pw=0 time=292098 us cost=3656 size=2767680 card=14880)
194733 HASH JOIN OUTER (cr=10553 pr=0 pw=0 time=234394 us cost=2962 size=2574240 card=14880)
194733 HASH JOIN (cr=9999 pr=0 pw=0 time=379996 us cost=2570 size=2380800 card=14880)
30076 MAT_VIEW ACCESS FULL MVECRF_ACTIVATED_FORMS (cr=1817 pr=0 pw=0 time=28411 us cost=361 size=2000285 card=29855)
194733 HASH JOIN (cr=8182 pr=0 pw=0 time=209061 us cost=1613 size=9026301 card=97057)
628 MAT_VIEW ACCESS FULL MVECRF_STUDYVERSION_FORMS (cr=19 pr=0 pw=0 time=250 us cost=6 size=18212 card=628)
194733 MAT_VIEW ACCESS FULL MVECRF_FORMITEMS (cr=8163 pr=0 pw=0 time=80733 us cost=1606 size=12462912 card=194733)
132342 MAT_VIEW ACCESS FULL MVECRF_ITEM_SDV (cr=554 pr=0 pw=0 time=23678 us cost=112 size=1720446 card=132342)
221034 MAT_VIEW ACCESS FULL MVECRF_ITEMDATA (cr=1509 pr=0 pw=0 time=46459 us cost=299 size=2873442 card=221034)
SELECT
DISTINCT
'CL238093011' AS ETUDE,
FI.STUDYID,
FI.STUDYVERSIONID,
FI.SITEID,
FI.SUBJECTID,
FI.VISITID,
VR.VISITREFNAME,
FI.SUBJECTVISITID,
FI.FORMID,
FI.FORMINDEX,
SVF.FORMREFNAME,
SVF.FORMMNEMONIC AS FMLNOM,
EVENT_ITEM.EVENT AS EVENUM,
EVENT_ITEM.EVENT_ROW AS LIGNUM,
NULL AS CODVISEVE,
MIN(DID.MINENTEREDDATE)
OVER (PARTITION BY FI.SUBJECTID, FI.VISITID, FI.FORMID, FI.FORMINDEX)
AS ATTDAT1ERSAI,
MIN(IFSDV.ITEMFIRSTSDV)
OVER (PARTITION BY FI.SUBJECTID, FI.VISITID, FI.FORMID, FI.FORMINDEX)
AS ATTDAT1ERSDV,
MAX(IFSDV.ITEMFIRSTSDV)
OVER (PARTITION BY FI.SUBJECTID, FI.VISITID, FI.FORMID, FI.FORMINDEX)
AS ATTDATDERSDV,
DECODE (AF.SDVCOMPLETESTATE,
0,
'N',
1,
'Y')
AS ATTINDSDVCOP,
AF.FMINSDVCOMPLETESTATE AS ATTDAT1ERSDVCOP,
DECODE (AF.SDVPARTIALSTATE,
0,
'N',
1,
'Y')
AS ATTINDSDVPTL,
EVENT_ITEM.EVENT_RELECT AS ATTINDRVUMEDCOP,
DECODE (QUERY.NBQSTFML, NULL, 'N', 'Y') AS ATTINDQST,
DECODE (AF.MISSINGITEMSSTATE,
0,
'N',
1,
'Y')
AS ATTINDITMABS,
DECODE (AF.FROZENSTATE,
0,
'N',
1,
'Y')
AS ATTINDETACON,
AF.FMINFROZENSTATE AS ATTDAT1ERCON,
AF.FMAXFROZENSTATE AS ATTDATDERCON,
DECODE (AF.DELETEDSTATE,
0,
'N',
1,
'Y')
AS ATTINDETASPR,
EVENT_ITEM.ROW_DELETED AS ATTINDLIGSPR
FROM CL238093011.MVECRF_FORMITEMS FI,
CL238093011.MVECRF_STUDYVERSION_FORMS SVF,
CL238093011.MVECRF_ACTIVATED_FORMS AF,
CL238093011.MVECRF_ITEM_SDV IFSDV,
CL238093011.MVECRF_VISIT_REVS VR,
CL238093011.MVECRF_ITEMDATA DID,
(SELECT DISTINCT
SUBJECTID,
VISITID,
FORMID,
FORMINDEX,
COUNT (DISTINCT QUERYID)
OVER (PARTITION BY SUBJECTID, VISITID, FORMID, FORMINDEX)
NBQSTFML
FROM CL238093011.MVECRF_CUR_QUERY
WHERE QUERYSTATE IN (0, 1, 2)) QUERY,
CL238093011.FLO_FML_EVENT EVENT_ITEM
WHERE (AF.VISITDELETED IS NULL OR AF.VISITDELETED = 0)
AND AF.FORMTYPE NOT IN (4, 5, 6, 7, 8, 103)
AND (AF.DELETEDDYNAMICFORMSTATE IS NULL
OR AF.DELETEDDYNAMICFORMSTATE = 0)
AND FI.SUBJECTVISITID = AF.SUBJECTVISITID
AND FI.FORMID = AF.FORMID
AND FI.FORMREV = AF.FORMREV
AND FI.FORMINDEX = AF.FORMINDEX
AND FI.VISITID = VR.VISITID
AND FI.VISITREV = VR.VISITREV
AND FI.CONTEXTID = IFSDV.CONTEXTID(+)
AND FI.CONTEXTID = DID.CONTEXTID(+)
AND FI.SUBJECTID = QUERY.SUBJECTID(+)
AND FI.VISITID = QUERY.VISITID(+)
AND FI.FORMID = QUERY.FORMID(+)
AND FI.FORMINDEX = QUERY.FORMINDEX(+)
AND FI.STUDYVERSIONID = SVF.STUDYVERSIONID
AND FI.FORMID = SVF.FORMID
AND FI.VISITID = SVF.VISITID
AND FI.SUBJECTID = EVENT_ITEM.SUBJECTID(+)
AND FI.VISITID = EVENT_ITEM.VISITID(+)
AND FI.FORMID = EVENT_ITEM.FORMID(+)
AND FI.FORMINDEX = EVENT_ITEM.FORMINDEX(+)
Do you have the license for parallel query? It may or may not help, but PQO can help with sorts ...
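Beyond parallel query, it is worth noting that both WINDOW SORT steps in the plan come from the SELECT DISTINCT plus analytic-function pattern. In the QUERY inline view, every projected column is also in the partition key, so COUNT(DISTINCT QUERYID) OVER (...) combined with DISTINCT is a GROUP BY in disguise; rewriting it removes one WINDOW SORT and one HASH UNIQUE. A sketch in SQLite (table and column names follow the post; the sample data is invented, and SQLite would reject the DISTINCT-inside-a-window form outright):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE mvecrf_cur_query
               (subjectid INT, visitid INT, formid INT, formindex INT,
                queryid INT, querystate INT)""")
con.executemany(
    "INSERT INTO mvecrf_cur_query VALUES (?,?,?,?,?,?)",
    [(1, 10, 100, 0, 1000, 0),
     (1, 10, 100, 0, 1001, 1),
     (1, 10, 100, 0, 1001, 2),   # same queryid twice: counted once
     (2, 10, 100, 0, 1002, 0),
     (2, 10, 100, 0, 1003, 9)])  # querystate 9: filtered out

# Equivalent of
#   SELECT DISTINCT subjectid, visitid, formid, formindex,
#          COUNT(DISTINCT queryid)
#            OVER (PARTITION BY subjectid, visitid, formid, formindex) nbqstfml
# written as a plain aggregate: one pass, no window sort, no extra
# de-duplication step.
result = con.execute("""
    SELECT subjectid, visitid, formid, formindex,
           COUNT(DISTINCT queryid) AS nbqstfml
    FROM mvecrf_cur_query
    WHERE querystate IN (0, 1, 2)
    GROUP BY subjectid, visitid, formid, formindex
    ORDER BY subjectid""").fetchall()

print(result)  # [(1, 10, 100, 0, 2), (2, 10, 100, 0, 1)]
```

The outer query's big WINDOW SORT (the 890-second step, spilling to temp with pw=109730) uses the same DISTINCT-plus-analytic shape over the MIN/MAX partitions, so a similar restructuring, or simply more sort workarea memory, is where to look first.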