A Job for 'PARTITION BY' Analytical Function?
Hi,
I'm still a little fuzzy on using partitions but this looks like a possible candidate to me.
I need to count the number of different customers that visit an office in a single day. If a customer visits an office more than once in a single day that counts as 1.
Input
OFFICE CUSTOMER TRAN_DATE
1 11 1-Apr-09
1 11 1-Apr-09
1 11 1-Apr-09
1 11 2-Apr-09
2 22 2-Apr-09
2 22 2-Apr-09
2 33 2-Apr-09
select a.office as "OFFICE", a.customer AS "CUSTOMER", a.tran_date AS "TRAN_DATE", COUNT(*)
FROM
(SELECT 1 AS "OFFICE", 11 AS "CUSTOMER", '01-APR-2009' AS "TRAN_DATE" FROM DUAL
UNION ALL
SELECT 1 , 11 , '01-APR-2009' FROM DUAL
UNION ALL
SELECT 1 , 11 , '01-APR-2009' FROM DUAL
UNION ALL
SELECT 1 , 11 , '02-APR-2009' FROM DUAL
UNION ALL
SELECT 2 , 22 , '02-APR-2009' FROM DUAL
UNION ALL
SELECT 2 , 22 , '02-APR-2009' FROM DUAL
UNION ALL
SELECT 2 , 33 , '02-APR-2009' FROM DUAL
) a;
Desired Result
1 1-Apr-09 1
1 2-Apr-09 1
2 2-Apr-09 2
Is this possible with partitions, do I need to use subqueries, or some other method?
Thank You in Advance for Your Help,
Lou
Edited by: Wind In Face on Apr 15, 2009 1:34 PM
"I wanted to use PARTITION BY instead of what John suggested because it is my understanding that PARTITION BY will be faster"
It may be, or it may not be. As Frank pointed out, analytic functions have their uses, and aggregate functions have theirs. In some places those uses do overlap, but not always. Your query is equivalent to mine, that is, it returns the same resultset in this case; however, there are some differences.
For the relatively small amount of data I generated, it is probably not significant, but the analytic version does two sorts (one unique) while my aggregate version does only one.
SQL> CREATE TABLE test (office NUMBER, customer NUMBER, tran_dt DATE);
Table created.
SQL> INSERT /*+ APPEND */ INTO test
2 SELECT MOD(rownum, 10)+1, MOD(rownum, 121)+1, TRUNC(sysdate+MOD(rownum, 42))
3 FROM all_objects;
18135 rows created.
SQL> COMMIT;
Commit complete.
SQL> SELECT office, tran_dt, COUNT(DISTINCT customer) cust_count
2 FROM test
3 GROUP BY office, tran_dt;
210 rows selected.
Execution Plan
Plan hash value: 2407667464
| Id | Operation | Name |
| 0 | SELECT STATEMENT | |
| 1 | SORT GROUP BY | |
| 2 | TABLE ACCESS FULL| TEST |
Statistics
0 recursive calls
0 db block gets
27 consistent gets
0 physical reads
0 redo size
6061 bytes sent via SQL*Net to client
631 bytes received via SQL*Net from client
15 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
210 rows processed
SQL> SELECT DISTINCT office office, tran_dt tran_date,
2 COUNT(DISTINCT customer) OVER(PARTITION BY office, tran_dt) cust_count
3 FROM test;
210 rows selected.
Execution Plan
Plan hash value: 1303194651
| Id | Operation | Name |
| 0 | SELECT STATEMENT | |
| 1 | SORT UNIQUE | |
| 2 | WINDOW SORT | |
| 3 | TABLE ACCESS FULL| TEST |
Statistics
0 recursive calls
0 db block gets
27 consistent gets
0 physical reads
0 redo size
6063 bytes sent via SQL*Net to client
631 bytes received via SQL*Net from client
15 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
210 rows processed
For a larger resultset, this could make a significant difference.
In general, my preference is to use the simplest construct that will work.
John
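For readers who want to try John's aggregate version without an Oracle instance, here is a rough stand-in using Python's sqlite3 and the sample rows from the original post (table and column names are illustrative; dates are kept as ISO strings):

```python
import sqlite3

# Sample visits from the original post; repeated rows model repeat visits.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (office INTEGER, customer INTEGER, tran_date TEXT)")
conn.executemany(
    "INSERT INTO visits VALUES (?, ?, ?)",
    [
        (1, 11, "2009-04-01"),
        (1, 11, "2009-04-01"),
        (1, 11, "2009-04-01"),
        (1, 11, "2009-04-02"),
        (2, 22, "2009-04-02"),
        (2, 22, "2009-04-02"),
        (2, 33, "2009-04-02"),
    ],
)

# COUNT(DISTINCT customer) per (office, day) collapses repeat visits
# by the same customer on the same day to 1, as required.
result = conn.execute(
    """
    SELECT office, tran_date, COUNT(DISTINCT customer) AS cust_count
    FROM visits
    GROUP BY office, tran_date
    ORDER BY office, tran_date
    """
).fetchall()
print(result)  # [(1, '2009-04-01', 1), (1, '2009-04-02', 1), (2, '2009-04-02', 2)]
```

This matches the desired result in the question: one row per office per day, with repeat visits counted once.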
Similar Messages
-
Analytical Functions: Parent/child bucketing (distribution)
OK, here is an interesting problem for hardcore users of analytic functions. I have parent and child tables. A parent record can have N child records. The child table has an FK to the parent table, so all child records have a parent. I need to distribute my parents into a number of buckets based on the number of children they have. The number of buckets will be variable and user-defined. Each bucket should have a similar number of child records, but the number of parents is not important. Here is a simple output example with 6 parents using 3 buckets of 10 children each:
Parent Number_Of_Children Bucket
1 5 1
2 5 1
3 6 2
4 3 2
5 1 2
6 4 3
At first this looks like an easy job for the NTILE analytic function to bucket the child records. Another solution could be to use the ROW_NUMBER analytic function to number every child record and then use the WIDTH_BUCKET function to bucket the parents based on their children's ROW_NUMBER values. But there is an additional requirement that makes these two approaches unusable. When doing the bucket distribution I need to guarantee that all children from the same parent are bucketed into the same bucket. This obviously makes it much more difficult. The distribution will leave non-equiheight buckets, since there is no guarantee I can fill all my buckets with the same number of child records. Small differences are acceptable, however. The distribution can be random and doesn't need to be the best possible one (the one that makes the buckets as equiheight as possible). Finally, this needs to be a SQL query, not PL/SQL. Thanks
Thanks Frank, that was very helpful. I did some research on bin packing and indeed there doesn't seem to be a SQL solution for this kind of problem. Luckily for me, some of my requirements are slightly different from the bin packing problem, so I was able to come up with something that works. My requirements are that I need to bucket the data by the number of bins rather than by the size of the buckets (unlike most bin packing problems, where the size of the bin is fixed). So the bin's size will be based on the total number of children divided by the number of desired buckets. The buckets don't have to (and probably won't) be of the same size, given that parents can have any number of children. Furthermore, we are not looking to fill the buckets as much as possible; small differences are OK. Based on the above I came up with the following solution.
1) I calculate the size of each bin based on the total number of children and the number of bins.
2) I then adjust this size by finding the top N parents by child count, where N = number of bins. This is done since I am going to bucket parents by simple order (Parent ID and Child ID). This means I will most likely end up with parents that have children in two buckets, which obviously is not desired. My approach is to move these parents to the last bucket, hence I have to increase the bucket size. In the worst-case scenario I will have the parents with the most children overflowing into the next bucket for all buckets. So I find out which are the top N parents and then increase my total child count by that much.
3) I then bucket the data using the WIDTH_BUCKET function and ROW_NUMBER by simple order (Parent ID and Child ID).
4) I then find out which parents have children in two buckets.
5) And finally I move the "broken" parents to the last bucket.
This approach seems to work well. I ran it for ~130,000 parents and 500,000 children, and the query returns the bucketed data in 4 seconds. The number of buckets does not affect the performance of the query.
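Before the SQL, here is a plain-Python sketch of the same greedy five-step idea (function and variable names are mine, not from the thread); it is an illustration of the technique, not the exact query logic:

```python
import math
from collections import defaultdict

def bucket_children(pairs, bin_count):
    """Greedy bucketing per the steps above; `pairs` is a list of
    (parent_id, child_id) tuples. All names are illustrative."""
    counts = defaultdict(int)
    for parent, _ in pairs:
        counts[parent] += 1

    # Steps 1-2: bin size padded by the top-N parents' child counts,
    # to absorb parents that would otherwise straddle two bins.
    total = len(pairs)
    top_n = sum(sorted(counts.values(), reverse=True)[:bin_count])
    size_each = math.ceil((total + top_n) / bin_count)

    # Step 3: bucket by simple (parent, child) order, like
    # WIDTH_BUCKET over ROW_NUMBER in the SQL version.
    assigned = {}
    for rownum, key in enumerate(sorted(pairs), start=1):
        assigned[key] = min((rownum - 1) // size_each + 1, bin_count)

    # Steps 4-5: parents whose children straddle two buckets are
    # moved whole to the last bucket.
    seen = defaultdict(set)
    for (parent, _), bucket in assigned.items():
        seen[parent].add(bucket)
    broken = {p for p, buckets in seen.items() if len(buckets) > 1}
    for key in assigned:
        if key[0] in broken:
            assigned[key] = bin_count
    return assigned

# Illustrative run: six parents with 5, 5, 6, 3, 1 and 4 children, 3 buckets.
pairs = [(p, c) for p, n in enumerate([5, 5, 6, 3, 1, 4], start=1)
         for c in range(n)]
buckets = bucket_children(pairs, 3)
```

The hard requirement holds by construction: after the final step every parent's children share one bucket, at the cost of a possibly oversized last bucket.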
WITH total_bins AS
(
-- Load the number of bins
SELECT 5 AS BIN_COUNT
FROM DUAL
),
count_childs AS
(
-- Calculate the sum of child records for the top N parents (N = number of bins) and the total number of child records
SELECT SUM(CASE WHEN ranked.RANK <= total_bins.BIN_COUNT THEN ranked.CHILD_COUNT ELSE NULL END) AS TOP_N_CHILDS,
SUM(CHILD_COUNT) AS TOTAL_CHILDS
FROM
(
-- Rank parents by their child count
SELECT CHILD_COUNT, ROW_NUMBER() OVER (ORDER BY CHILD_COUNT DESC) AS RANK FROM
(
-- Count all child records for each parent
SELECT PARENT_ID, COUNT(1) AS CHILD_COUNT FROM PARENT_CHILD GROUP BY PARENT_ID
)
) ranked
CROSS JOIN
total_bins
),
bins AS
(
-- Calculate each bin's size based on the number of children and bins.
-- top_n_childs is used to enlarge the bins in case children from one
-- parent fall into 2 different bins
SELECT CEIL((TOTAL_CHILDS + TOP_N_CHILDS) / total_bins.BIN_COUNT) AS SIZE_EACH,
CEIL((TOTAL_CHILDS + TOP_N_CHILDS) / total_bins.BIN_COUNT) * total_bins.BIN_COUNT AS SIZE_ALL
FROM count_childs
CROSS JOIN
total_bins
),
bucket_data AS
(
-- Bucket the data using the WIDTH_BUCKET function; most likely some parents' child records will end up in 2 buckets
SELECT PARENT_ID,
CHILD_ID,
WIDTH_BUCKET(ORDER_NUMBER, 1, (SELECT bins.SIZE_ALL FROM bins), (SELECT total_bins.BIN_COUNT FROM total_bins)) AS BUCKET
FROM
(
SELECT PARENT_ID, CHILD_ID, ROW_NUMBER() OVER (ORDER BY PARENT_ID, CHILD_ID) ORDER_NUMBER FROM PARENT_CHILD
)
),
broken_parents AS
(
-- Find the parents that have child records in more than 1 bucket so they can be fixed
SELECT PARENT_ID FROM bucket_data GROUP BY PARENT_ID HAVING COUNT(DISTINCT BUCKET) > 1
),
fixed_data AS
(
-- Outer-join bucket_data to broken_parents and move the broken parents to the last bucket
SELECT bucket_data.PARENT_ID,
bucket_data.CHILD_ID,
CASE WHEN broken_parents.PARENT_ID IS NOT NULL THEN (SELECT total_bins.BIN_COUNT FROM total_bins) ELSE bucket_data.BUCKET END AS BUCKET
FROM bucket_data,
broken_parents
WHERE bucket_data.PARENT_ID = broken_parents.PARENT_ID (+)
)
SELECT PARENT_ID,
CHILD_ID,
BUCKET
FROM fixed_data
-- Check number of childs per bucket
-- SELECT BUCKET, COUNT(1) FROM fixed_data GROUP BY BUCKET ORDER BY 1
-- Check all parents have their childs on the same bucket
-- SELECT PARENT_ID FROM fixed_data GROUP BY PARENT_ID HAVING COUNT(DISTINCT BUCKET) > 1 -
I don't know whether analytic functions can do this for me or not
Hey everyone,
I'm looking for a way of handling this report for my own job.
a table having the following attributes exists.
Create table Test (
Public_Date varchar2(10),
City varchar2(10),
count number(3))
A query with the following output could readily be produced using a GROUP BY clause.
Year Sum
2005 23
2006 36
2007 15
2008 10
But the question is how I can produce the following output.
(I want to merge some records into one record in the output; in this example,
the sum of all years after 2006 is what interests me, not each year individually.)
Year(s) Sum
2005 23
2006 36
2007,2008 25 /*(15+10)*/
I think analytic functions may be useful in producing this output but I don't know how.
Could anyone help me with how to handle this?
Hi,
You can use a CASE (or DECODE) statement to map all the years after 2006 to some common value, like 9999, and GROUP BY that computed value.
If you want the 9999 row to be labeled '2007, 2008', do a search for "string aggregate" for various techniques, or see Tom Kyte's excellent page on this subject:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2196162600402
I use STRAGG (near the top of the page). -
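A rough sqlite3 sketch of the CASE-mapping idea (GROUP_CONCAT stands in for Oracle string aggregation such as STRAGG or LISTAGG; the table here is already reduced to per-year totals for brevity, and all names are my own):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yearly (public_year INTEGER, total INTEGER)")
conn.executemany("INSERT INTO yearly VALUES (?, ?)",
                 [(2005, 23), (2006, 36), (2007, 15), (2008, 10)])

# Map every year after 2006 to one common group key, then aggregate;
# GROUP_CONCAT builds the '2007,2008' label (its ordering is not guaranteed).
result = conn.execute("""
    SELECT GROUP_CONCAT(public_year) AS years, SUM(total) AS total
    FROM yearly
    GROUP BY CASE WHEN public_year > 2006 THEN 9999 ELSE public_year END
    ORDER BY MIN(public_year)
""").fetchall()
print(result)
```

The 9999 value is only a sentinel group key, exactly as suggested above; it never appears in the output because the label is rebuilt from the member years.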
Analytic function for grouping?
Hello @all
10gR2
Is it possible to use an analytic function to group the following (example) query:
SELECT job, ename, sal,
ROW_NUMBER() OVER(PARTITION BY job ORDER BY empno) AS no,
RANK() OVER(PARTITION BY job ORDER BY NULL) AS JobNo
FROM emp;
The output is as follows:
JOB ENAME SAL NO JOBNO
ANALYST SCOTT 3000 1 1
ANALYST FORD 3000 2 1
CLERK SMITH 818 1 1
CLERK ADAMS 1100 2 1
CLERK JAMES 950 3 1
CLERK MILLER 1300 4 1
MANAGER Müller 1000 1 1
MANAGER JONES 2975 2 1
....
The JobNo should increase per job group; my desired output should look like this:
JOB ENAME SAL NO JOBNO
ANALYST SCOTT 3000 1 1
ANALYST FORD 3000 2 1
CLERK SMITH 818 1 2
CLERK ADAMS 1100 2 2
CLERK JAMES 950 3 2
CLERK MILLER 1300 4 2
MANAGER Müller 1000 1 3
MANAGER JONES 2975 2 3
MANAGER BLAKE 2850 3 3
MANAGER CLARK 2450 4 3
PRESIDENT KING 5000 1 4
SALESMAN ALLEN 1600 1 5
SALESMAN WARD 1250 2 5
SALESMAN MARTIN 1250 3 5
SALESMAN TURNER 1500 4 5
How can I achieve this?
This, perhaps?
with emp as (select 1 empno, 'ANALYST' job, 'SCOTT' ename, 3000 sal from dual union all
select 2 empno, 'ANALYST' job, 'FORD' ename, 3000 sal from dual union all
select 3 empno, 'CLERK' job, 'SMITH' ename, 818 sal from dual union all
select 4 empno, 'CLERK' job, 'ADAMS' ename, 1100 sal from dual union all
select 5 empno, 'CLERK' job, 'JAMES' ename, 950 sal from dual union all
select 6 empno, 'CLERK' job, 'MILLER' ename, 1300 sal from dual union all
select 7 empno, 'MANAGER' job, 'Müller' ename, 1000 sal from dual union all
select 8 empno, 'MANAGER' job, 'JONES' ename, 2975 sal from dual union all
select 9 empno, 'MANAGER' job, 'BLAKE' ename, 2850 sal from dual union all
select 10 empno, 'MANAGER' job, 'CLARK' ename, 2450 sal from dual union all
select 11 empno, 'PRESIDENT' job, 'KING' ename, 5000 sal from dual union all
select 12 empno, 'SALESMAN' job, 'ALLEN' ename, 1600 sal from dual union all
select 13 empno, 'SALESMAN' job, 'WARD' ename, 1250 sal from dual union all
select 14 empno, 'SALESMAN' job, 'MARTIN' ename, 1250 sal from dual union all
select 15 empno, 'SALESMAN' job, 'TURNER' ename, 1500 sal from dual)
select job, ename, sal,
row_number() over(partition by job order by empno) no,
dense_rank() over(order by job) jobno
from emp
JOB ENAME SAL NO JOBNO
ANALYST SCOTT 3000 1 1
ANALYST FORD 3000 2 1
CLERK SMITH 818 1 2
CLERK ADAMS 1100 2 2
CLERK JAMES 950 3 2
CLERK MILLER 1300 4 2
MANAGER Müller 1000 1 3
MANAGER JONES 2975 2 3
MANAGER BLAKE 2850 3 3
MANAGER CLARK 2450 4 3
PRESIDENT KING 5000 1 4
SALESMAN ALLEN 1600 1 5
SALESMAN WARD 1250 2 5
SALESMAN MARTIN 1250 3 5
SALESMAN TURNER 1500 4 5 -
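The DENSE_RANK() OVER (ORDER BY job) trick in the answer above works because each distinct job gets the next consecutive number. A plain-Python sketch of the same idea, on a reduced emp sample (data trimmed for brevity):

```python
from itertools import groupby

# Reduced emp sample: (empno, job, ename, sal).
emp = [
    (1, "ANALYST", "SCOTT", 3000), (2, "ANALYST", "FORD", 3000),
    (3, "CLERK", "SMITH", 818), (4, "CLERK", "ADAMS", 1100),
    (5, "MANAGER", "JONES", 2975), (6, "PRESIDENT", "KING", 5000),
]

# DENSE_RANK() OVER (ORDER BY job): each distinct job gets the next
# consecutive number, so it acts as a per-group counter.
jobno = {job: n for n, job in enumerate(sorted({r[1] for r in emp}), start=1)}

# ROW_NUMBER() OVER (PARTITION BY job ORDER BY empno): restarts at 1 per job.
rows = []
for job, grp in groupby(sorted(emp, key=lambda r: (r[1], r[0])),
                        key=lambda r: r[1]):
    for no, (empno, _, ename, sal) in enumerate(grp, start=1):
        rows.append((job, ename, sal, no, jobno[job]))
print(rows)
```

Both counters come out exactly as in the desired output: NO restarts within each job, while JOBNO stays constant within a job and steps up between jobs.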
Analytical function fine within TOAD but throwing an error for a mapping.
Hi,
When I validate an expression based on SUM .... OVER PARTITION BY in a mapping, I am getting the following error.
Line 4, Col 23:
PLS-00103: Encountered the symbol "OVER" when expecting one of the following:
* & = - + < / > at in is mod remainder not rem then
<an exponent (**)> <> or != or ~= >= <= <> and or like LIKE2_
LIKE4_ LIKEC_ between || multiset member SUBMULTISET_
However, using TOAD, the expression is working fine.
A staging table has got three columns, col1, col2 and col3. The expression is checking for a word in col3. The expression is as under.
(CASE WHEN SUM (CASE WHEN UPPER(INGRP1.col3) LIKE 'some_value%'
THEN 1
ELSE 0
END) OVER (PARTITION BY INGRP1.col1
,INGRP1.col2) > 0
THEN 'Y'
ELSE 'N'
END)
I searched the forum for similar issues, but not able to resolve my issue.
Could you please let me know what's wrong here?
Many thanks,
Manoj.
Yes, expression validation in 10g simply does not work with (i.e. does not recognize) analytic functions.
It can simply be ignored. You should also set Generation mode to "Set Based only". Otherwise the mapping will fail to deploy under certain circumstances (when using non-set-based (PL/SQL) operators after the analytic function). -
Query for using "analytical functions" in DWH...
Dear team,
I would like to know if following task can be done using analytical functions...
If it can be done using other ways, please do share the ideas...
I have table as shown below..
Create Table t As
Select *
From
(
Select 12345 PRODUCT, 'W1' WEEK, 10000 SOH, 0 DEMAND, 0 SUPPLY, 0 EOH From dual Union All
Select 12345, 'W2', 0, 100, 50, 0 From dual Union All
Select 12345, 'W3', 0, 100, 50, 0 From dual Union All
Select 12345, 'W4', 0, 100, 50, 0 From dual
);
PRODUCT WEEK SOH DEMAND SUPPLY EOH
12345 W1 10,000 0 0 10000
12345 W2 0 100 50 0
12345 W3 0 100 50 0
12345 W4 0 100 50 0
Now I want to calculate the EOH (ending on hand) quantity for W1.
This EOH for W1 becomes the SOH (starting on hand) for W2, and so on, till the end of the weeks.
The formula is: EOH = SOH - DEMAND + SUPPLY
The output should be as follows...
PRODUCT WEEK SOH DEMAND SUPPLY EOH
12345 W1 10,000 10000
12345 W2 10,000 100 50 9950
12345 W3 9,950 100 50 9900
12345 W4 9,000 100 50 8950
Kindly share your ideas.
Nicloei W wrote:
Means SOH_AFTER_SUPPLY for W1, should be displayed under SOH FOR W2...i.e. SOH for W4 should be SOH_AFTER_SUPPLY for W3, right?
If yes, why are you expecting it to be 9000 for W4??
So in output should be...
PRODUCT WE SOH DEMAND SUPPLY EOH SOH_AFTER_SUPPLY
12345 W1 10000 0 0 0 10000
12345 W2 10000 100 50 0 9950
12345 W3 9950 100 50 0 *9900*
12345 W4 *9000* 100 50 0 9850
per logic you explained, shouldn't it be *9900* instead???
you could customize Martin Preiss's logic for your requirement :
SQL> with
2 data
3 As
4 (
5 Select 12345 PRODUCT, 'W1' WEEK, 10000 SOH, 0 DEMAND, 0 SUPPLY, 0 EOH From dual Union All
6 Select 12345, 'W2', 0, 100, 50, 0 From dual Union All
7 Select 12345, 'W3', 0, 100, 50, 0 From dual Union All
8 Select 12345, 'W4', 0, 100, 50, 0 From dual
9 )
10 Select Product
11 ,Week
12 , Sum(Soh) Over(Partition By Product Order By Week)- Sum(Supply) Over(Partition By Product Order By Week)+Supply Soh
13 ,Demand
14 ,Supply
15 , Sum(Soh) Over(Partition By Product Order By Week)- Sum(Supply) Over(Partition By Product Order By Week) eoh
16 from data;
PRODUCT WE SOH DEMAND SUPPLY EOH
12345 W1 10000 0 0 10000
12345 W2 10000 100 50 9950
12345 W3 9950 100 50 9900
12345 W4 9900 100 50 9850
Vivek L
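The running balance in the reply can also be checked outside the database; a minimal Python sketch using the corrected figures above (note that the sample output implies EOH = SOH - DEMAND + SUPPLY):

```python
from itertools import accumulate

# (week, soh, demand, supply); only W1 carries an opening SOH.
weeks = [("W1", 10000, 0, 0), ("W2", 0, 100, 50),
         ("W3", 0, 100, 50), ("W4", 0, 100, 50)]

# Running EOH: cumulative sum of (SOH - DEMAND + SUPPLY); each week's
# displayed SOH is the previous week's EOH, seeded by the opening SOH.
eoh = list(accumulate(soh - demand + supply
                      for _, soh, demand, supply in weeks))
soh_col = [eoh[i - 1] if i else weeks[0][1] for i in range(len(weeks))]
print(list(zip([w for w, *_ in weeks], soh_col, eoh)))
# [('W1', 10000, 10000), ('W2', 10000, 9950), ('W3', 9950, 9900), ('W4', 9900, 9850)]
```

This reproduces the reply's output, including 9900 (not 9000) as the W4 starting balance.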
Alternate for analytic functions
Hello All,
I'm trying to write a query without using analytic functions.
Using Analytic func,
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
"CORE 11.2.0.2.0 Production"
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
SELECT id, sal, rank() OVER (PARTITION BY ID ORDER BY SAL) rnk FROM
(SELECT 10 AS id, 100 AS sal FROM DUAL
UNION ALL
SELECT 10, 300 FROM DUAL
UNION ALL
SELECT 10, 400 FROM DUAL
UNION ALL
SELECT 20, 200 FROM DUAL
UNION ALL
SELECT 20, 200 FROM DUAL
UNION ALL
SELECT 20, 300 FROM DUAL
UNION ALL
SELECT 30, 100 FROM DUAL
UNION ALL
SELECT 40, 100 FROM DUAL
UNION ALL
SELECT 40, 200 FROM DUAL
)
Expected results. I want these results without analytic functions:
10 100 1
10 300 2
10 400 3
20 200 1
20 200 1
20 300 3
30 100 1
40 100 1
40 200 2
Hi,
SamFisher wrote:
Thank You Frank. That was simple.
I was trying to get the results without using analytical functions. Just trying to improve my SQL skills.
Yes, I admit that practising using the wrong tools can improve your SQL skills, but I think there's a lot to be said for practising using the right tools, too.
I tried all sorts of things. I thought a hierarchical query would do it, but hard luck for me.
Do you want to use a CONNECT BY query for this? Here's one way:
WITH got_max_level AS
(
SELECT id
, sal
, MAX (LEVEL) AS max_level
FROM table_x
CONNECT BY NOCYCLE id = PRIOR id
AND sal >= PRIOR sal
AND ( sal > PRIOR sal
OR ROWID > PRIOR ROWID
)
GROUP BY id
, sal
)
, got_cnt AS
(
SELECT id
, sal
, COUNT (*) AS cnt
FROM table_x
GROUP BY id
, sal
)
SELECT x.id
, x.sal
, l.max_level + 1 - c.cnt AS rnk
FROM table_x x
JOIN got_max_level l ON x.id = l.id
AND x.sal = l.sal
JOIN got_cnt c ON x.id = c.id
AND x.sal = c.sal
ORDER BY x.id
, x.sal
;
This is even less efficient, as well as more complicated, than the scalar sub-query solution.
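The scalar sub-query solution Frank refers to is not shown in the thread; here is a rough reconstruction using Python's sqlite3 (table name is mine). RANK() is emulated as 1 plus the number of rows in the same id with a strictly smaller sal, so ties share a rank and the following rank is skipped:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, sal INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(10, 100), (10, 300), (10, 400), (20, 200), (20, 200),
                  (20, 300), (30, 100), (40, 100), (40, 200)])

# RANK() without an analytic function: 1 + count of same-id rows with a
# strictly smaller sal. Ties get the same rank; the next rank is skipped.
result = conn.execute("""
    SELECT id, sal,
           (SELECT COUNT(*) + 1
              FROM t t2
             WHERE t2.id = t.id AND t2.sal < t.sal) AS rnk
    FROM t
    ORDER BY id, sal
""").fetchall()
print(result)
```

Note how the two (20, 200) rows both rank 1 and the next row ranks 3, matching the expected results in the question.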
Are analytic functions useful only for data warehouses?
Hi,
I deal with reporting queries on Oracle databases but I don't work on data warehouses, thus I'd like to know if learning to use analytic functions (SQL for analysis, such as ROLLUP, CUBE, GROUPING, ...) might be useful in helping me develop better reports, or if analytic functions are usually useful only for data warehouse queries. I mean, are ROLLUP, CUBE, GROUPING, ... useful also on an operational database, or do they make sense only on a DWH?
Thanks!
Mark1970 wrote:
thus is it worth learning them for improving report queries not on a DWH but on common operational databases?
Why pigeonhole report queries as "operational" or "data warehouse"?
Do you tell a user/manager that "No, this report cannot be done as it looks like a data warehouse report and we have an operational database!"?
Data processing and data reporting requirements do not care what label you assign to your database.
Simple real world example of using analytical queries on a non warehouse. We supply data to an external system via XML. They require that we limit the number of parent entities per XML file we supply. E.g. 100 customer elements (together with all their child elements) per file. Analytical SQL enables this to be done by creating "buckets" that can only contain 100 parent elements at a time. Complete process is SQL driven - no slow-by-slow row by row processing in PL/SQL using nested cursor loops and silly approaches like that.
Analytical SQL is a tool in the developer toolbox. It would be unwise to remove it from the toolbox, thinking that it is not applicable and won't be needed for the work that's to be done. -
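The "100 parent elements per file" bucketing described above reduces to integer division over a row number; a minimal sketch (the SQL equivalent would be something like CEIL(ROW_NUMBER() OVER (ORDER BY customer_id) / 100), with all names here hypothetical):

```python
# 250 hypothetical customers split into files of at most 100 parents each.
customer_ids = list(range(1, 251))
file_no = {cid: (rn - 1) // 100 + 1
           for rn, cid in enumerate(customer_ids, start=1)}
print(file_no[100], file_no[101], file_no[250])  # 1 2 3
```

Each customer lands in exactly one file, and no file holds more than 100 parents, which is the whole constraint being enforced.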
Disco -- Any analytical functions for comparisons
Hi:
I'm wondering if there are any analytical function to help me out with comparisons? Users often need to displays totals based on date ranges, and show the difference between the two totals, as well as percent change.
For example, a workbook would show the comparison of cases and dollars, for 2002 vs. 2003. Currently, my solution for this is to create DECODE calculations based on year and type (cases or dollars), and perform the
comparisons in separate calculations. I'd like to know if Discoverer already has a function that would handle some of this, and reduce the number of DECODES and separate calculations the users have to create...
Thanks,
Subramanyam
Sr. Technical Consultant
Oracle Direct
Hello
You can use the Oracle database SQL analytic functions to perform comparison and window-based calculations, for example LAG, LEAD, RANK, etc.
Please consult the Oracle 9.2 database documentation as well as Discoverer documentation for examples and syntax.
Regards
Discoverer Product Management -
Analytical function sum() ...for Till-date reporting
Hi,
I need help in forming an SQL with analytical function.
Here is my scenario:
create table a (name varchar2(10), qty_sold number, on_date date);
insert into a values ('abc', 10, to_date('10-JAN-2007 00:01:00', 'DD-MON-YYYY HH24:MI:SS'));
insert into a values ('abc', 1, to_date('10-JUL-2007 00:01:00', 'DD-MON-YYYY HH24:MI:SS'));
insert into a values ('abc', 5, to_date('10-JUL-2007 08:11:00', 'DD-MON-YYYY HH24:MI:SS'));
insert into a values ('abc', 17, to_date('10-JUL-2007 09:11:00', 'DD-MON-YYYY HH24:MI:SS'));
insert into a values ('def', 10, to_date('10-JAN-2006 08:01:00', 'DD-MON-YYYY HH24:MI:SS'));
insert into a values ('def', 1, to_date('10-JUN-2006 10:01:00', 'DD-MON-YYYY HH24:MI:SS'));
insert into a values ('def', 5, to_date('10-JUL-2006 08:10:00', 'DD-MON-YYYY HH24:MI:SS'));
insert into a values ('pqr', 17, to_date('10-JUL-2006 09:11:00', 'DD-MON-YYYY HH24:MI:SS'));
Now I want an SQL query that displays the following:
NAME--TOTAL_QTY_SOLD_IN_LAST_10_DAYS, TOTAL_QTY_SOLD_IN_LAST_20_DAYS...etc
I know we can do it using SUM(qty_sold) OVER (ORDER BY on_date RANGE INTERVAL '10' DAY PRECEDING) ... but I get too many rows for each "NAME", one for each date in table a. I want just one row for each name, and the sum should run up to SYSDATE.
Any help is highly appreciated.
Thanks.
SQL> select name
2 , sum(case when sysdate - on_date <= 10 then qty_sold end) total_qty_last_10_days
3 , sum(case when sysdate - on_date <= 100 then qty_sold end) total_qty_last_100_days
4 , sum(case when sysdate - on_date <= 500 then qty_sold end) total_qty_last_500_days
5 from a
6 group by name
7 /
NAME TOTAL_QTY_LAST_10_DAYS TOTAL_QTY_LAST_100_DAYS TOTAL_QTY_LAST_500_DAYS
abc 23 33
def 6
pqr 17
3 rows selected.
Regards,
Rob. -
Error while using a background job for a planning function in BPS
I have created a function module and a program for scheduling a background job for a planning function.
I have created the planning function with the exit option and passed the global sequence name as a parameter.
The problem is that a lot of jobs are created while executing in BPS0.
Kindly help me with the same.
Regards
GR
Hi Rama,
It seems there are two different function modules (UPC_BUNDLE_EXECUTE and UPC_BUNDLE_EXECUTE_STEP). The second one divides the planning sequences on the basis of something you specify (e.g. company code). Just make sure that you are using the correct FM.
just a thought.......
Regards,
SK -
Analytical functions approach for this scenario?
Here is my data:
SQL*Plus: Release 11.2.0.2.0 Production on Tue Feb 26 17:03:17 2013
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> select * from batch_parameters;
LOW HI MIN_ORDERS MAX_ORDERS
51 100 6 8
121 200 1 5
201 1000 1 1
SQL> select * from orders;
ORDER_NUMBER LINE_COUNT BATCH_NO
4905 154
4899 143
4925 123
4900 110
4936 106
4901 103
4911 101
4902 91
4903 91
4887 90
4904 85
4926 81
4930 75
4934 73
4935 71
4906 68
4907 66
4896 57
4909 57
4908 56
4894 55
4910 51
4912 49
4914 49
4915 48
4893 48
4916 48
4913 48
2894 47
4917 47
4920 46
4918 46
4919 46
4886 45
2882 45
2876 44
2901 44
4921 44
4891 43
4922 43
4923 42
4884 41
4924 40
4927 39
4895 38
2853 38
4890 37
2852 37
4929 37
2885 37
4931 37
4928 37
2850 36
4932 36
4897 36
2905 36
4933 36
2843 36
2833 35
4937 35
2880 34
4938 34
2836 34
2872 34
2841 33
4889 33
2865 31
2889 30
2813 29
2902 28
2818 28
2820 27
2839 27
2884 27
4892 27
2827 26
2837 22
2883 20
2866 18
2849 17
2857 17
2871 17
4898 16
2840 15
4874 13
2856 8
2846 7
2847 7
2870 7
4885 6
1938 6
2893 6
4888 2
4880 1
4875 1
4881 1
4883 1
ORDER_NUMBER LINE_COUNT BATCH_NO
4879 1
2899 1
2898 1
4882 1
4877 1
4876 1
2891 1
2890 1
2892 1
4878 1
107 rows selected.
SQL>
The batch_parameters:
hi - high count of lines in the batch.
low - low count of lines in the batch.
min_orders - number of minimum orders in the batch
max_orders - number of maximum orders in the batch.
The issue is to create optimally sized batches for us to pick the orders. Usually, you have to stick within a given low-hi count, but there is a leeway of around, let's say, 5 percent on the batch size (for the number of lines in the batch).
But, for the number of orders in a batch, the leeway is zero.
So, I have to assign these 'orders' into the optimal mix of batches. Now, for every run, if I don't find the mix I am looking for, then, the last batch could be as small as a one line one order too. But, every Order HAS to be batched in that run. No exceptions.
I have a procedure that does "sort of" this, but it leaves non-optimal orders alone. There is a potential for orders not getting batched because they didn't fall into the optimal mix, potentially missing our required dates. (I can write another procedure that cleans up afterwards.)
I was thinking (maybe just a general direction would be enough), with what analytical functions can do these days, if somebody can come up with the 'sql' that gets us the batch number (think of it as a sequence starting at 1).
Also, the batch_parameters limits are not hard and fast. Those numbers can change but, give you a general idea.
Any ideas?
Ok, sorry about that. Those were just guesstimates. I ran the program and here are the results.
SQL> SELECT SUM(line_count) no_of_lines_in_batch,
2 COUNT(*) no_of_orders_in_batch,
3 batch_no
4 FROM orders o
5 GROUP BY o.batch_no;
NO_OF_LINES_IN_BATCH NO_OF_ORDERS_IN_BATCH BATCH_NO
199 4 241140
99 6 241143
199 5 241150
197 6 241156
196 5 241148
199 6 241152
164 6 241160
216 2 241128
194 6 241159
297 2 241123
199 3 241124
192 2 241132
199 6 241136
199 5 241142
94 7 241161
199 6 241129
154 2 241135
193 6 241154
199 5 241133
199 4 241138
199 6 241146
191 6 241158
22 rows selected.
SQL> select * from orders;
ORDER_NUMBER LINE_COUNT BATCH_NO
4905 154 241123
4899 143 241123
4925 123 241124
4900 110 241128
4936 106 241128
4901 103 241129
4911 101 241132
4903 91 241132
4902 91 241129
4887 90 241133
4904 85 241133
4926 81 241135
4930 75 241124
4934 73 241135
4935 71 241136
4906 68 241136
4907 66 241138
4896 57 241136
4909 57 241138
4908 56 241138
4894 55 241140
4910 51 241140
4914 49 241142
4912 49 241140
4915 48 241142
4916 48 241142
4913 48 241142
4893 48 241143
2894 47 241143
4917 47 241146
4919 46 241146
4918 46 241146
4920 46 241146
2882 45 241148
4886 45 241148
2901 44 241148
2876 44 241148
4921 44 241140
4891 43 241150
4922 43 241150
4923 42 241150
4884 41 241150
4924 40 241152
4927 39 241152
2853 38 241152
4895 38 241152
4931 37 241154
2885 37 241152
4929 37 241154
4890 37 241154
4928 37 241154
2852 37 241154
2843 36 241156
2850 36 241156
4932 36 241156
4897 36 241156
4933 36 241158
2905 36 241156
2833 35 241158
4937 35 241158
4938 34 241158
2880 34 241159
2872 34 241159
2836 34 241158
2841 33 241159
4889 33 241159
2865 31 241159
2889 30 241150
2813 29 241159
2902 28 241160
2818 28 241160
4892 27 241160
2884 27 241160
2820 27 241160
2839 27 241160
2827 26 241161
2837 22 241133
2883 20 241138
2866 18 241148
2849 17 241161
2871 17 241156
2857 17 241158
4898 16 241161
2840 15 241161
4874 13 241146
2856 8 241154
2847 7 241161
2846 7 241161
2870 7 241152
2893 6 241142
1938 6 241161
4888 2 241129
2890 1 241133
2899 1 241136
4877 1 241143
4875 1 241143
2892 1 241136
ORDER_NUMBER LINE_COUNT BATCH_NO
4878 1 241146
4876 1 241136
2891 1 241133
4880 1 241129
4883 1 241143
4879 1 241143
2898 1 241129
4882 1 241129
4881 1 241124
106 rows selected.
As you can see, my code is a little buggy in that it may not have strictly followed the batch_parameters. But this is acceptable; it is in the general area.
Question for analytic functions experts
Hi,
I have an ugly table containing an implicit master detail relation.
The table can be ordered by sequence and then each detail is beneath its master (in sequence).
If it is a detail, the master column is NULL and vice versa.
Sample:
SEQUENCE MASTER DETAIL BOTH_PRIMARY_KEYS
1____________A______________1
2___________________A_______1
3___________________B_______2
4____________B______________2
5___________________A_______3
6___________________B_______4
Task: Go into the table with the primary key of my detail, and find the primary key of its master.
I already have a solution for getting it, but I would like to know if there is an analytic statement
which is more elegant, instead of self-referencing my table three times. Is anybody well-versed in analytic functions?
Thanks,
Dirk
Hi,
Do you mean like this?
with data as (
select 1 sequence, 'A' master, null detail, 1 both_primary_keys from dual union all
select 2, null, 'A', 1 from dual union all
select 3, null, 'B', 2 from dual union all
select 4, 'B', null, 2 from dual union all
select 5, null, 'A', 3 from dual union all
select 6, null, 'B', 4 from dual )
select (select max(both_primary_keys) keep (dense_rank last order by sequence)
from data
where sequence < detail_record.sequence and detail is null) master_primary_key
from data detail_record
where (both_primary_keys=3 /*lookup detail key 3 */ and master is null) -
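The KEEP (DENSE_RANK LAST ORDER BY sequence) lookup above can be read as "the nearest preceding master row"; a small Python rendering of the same sample data (column order as in the post):

```python
# (sequence, master, detail, both_primary_keys), as in the sample above.
rows = [
    (1, "A", None, 1),
    (2, None, "A", 1),
    (3, None, "B", 2),
    (4, "B", None, 2),
    (5, None, "A", 3),
    (6, None, "B", 4),
]

def master_key_for_detail(detail_key):
    """Return the primary key of the nearest preceding master row
    (detail IS NULL), mirroring the KEEP (DENSE_RANK LAST) lookup."""
    seq = next(s for s, _, d, k in rows if d is not None and k == detail_key)
    masters = [(s, k) for s, _, d, k in rows if d is None and s < seq]
    return max(masters)[1] if masters else None

print(master_key_for_detail(3))  # 2: detail row 5 follows master B (row 4)
```

For detail key 3 (sequence 5) the latest master row before it is B at sequence 4, so the answer is master key 2, matching what the analytic query returns.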
2.1 EA Bug: group by auto complete generates group by for analytic function
Hi,
when using an analytic function in the sql text, sqldeveloper generates an automatic group by statement in the sql text.
Regards,
Ingo
Personally, I don't want anything changed automatically EVER. The day you don't notice and you run a wrong statement, the consequences may be very costly (read: disaster).
Can this be turned off all together? If there's a preference I didn't find, can this be left off by default?
Thanks,
K. -
Completion of data series by analytical function
I have the pleasure of learning the benefits of analytical functions and hope to get some help
The case is as follows:
Different projects gets funds from different sources over several years, but not from each source every year.
I want to produce the cumulative sum of funds for each source, for each year, for each project, but so far I have not been able to do so for years without funds from a particular source.
I have used this syntax:
SUM(fund) OVER(PARTITION BY project, source ORDER BY year ROWS UNBOUNDED PRECEDING)
I have also experimented with different variations of the window clause, but without any luck.
This is the last step in a big job I have been working on for several weeks, so I would be very thankful for any help.
If you want to use analytic functions, and if you are on version 10.1.3.3 of BI EE, then try using EVALUATE and EVALUATE_AGGR, which support native database functions. I have blogged about it here http://oraclebizint.wordpress.com/2007/09/10/oracle-bi-ee-10133-support-for-native-database-functions-and-aggregates/. But in your case all you might want to do is have a column with the following function.
SUM(Measure BY Col1, Col2...)
I have also blogged about it here http://oraclebizint.wordpress.com/2007/10/02/oracle-bi-ee-101332-varying-aggregation-based-on-levels-analytic-functions-equivalence/.
Thanks,
Venkat
http://oraclebizint.wordpress.com