Alternate for analytic functions
Hello All,
I'm trying to write a query without using analytic functions.
Using Analytic func,
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
"CORE 11.2.0.2.0 Production"
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
SELECT id, sal, rank() OVER (PARTITION BY ID ORDER BY SAL) rnk FROM
(SELECT 10 AS id, 100 AS sal FROM DUAL
UNION ALL
SELECT 10, 300 FROM DUAL
UNION ALL
SELECT 10, 400 FROM DUAL
UNION ALL
SELECT 20, 200 FROM DUAL
UNION ALL
SELECT 20, 200 FROM DUAL
UNION ALL
SELECT 20, 300 FROM DUAL
UNION ALL
SELECT 30, 100 FROM DUAL
UNION ALL
SELECT 40, 100 FROM DUAL
UNION ALL
SELECT 40, 200 FROM DUAL
)
Expected results (I want these results without analytic functions):
ID  SAL  RNK
10  100  1
10  300  2
10  400  3
20  200  1
20  200  1
20  300  3
30  100  1
40  100  1
40  200  2
Hi,
SamFisher wrote:
Thank You Frank. That was simple.
I was trying to get the results without using analytic functions, just trying to improve my SQL skills.
Yes, I admit that practising with the wrong tools can improve your SQL skills, but I think there's a lot to be said for practising with the right tools, too.
I tried all sorts of things. I thought a hierarchical query would do it, but hard luck for me.
Do you want to use a CONNECT BY query for this? Here's one way:
WITH got_max_level AS
(
    SELECT    id
    ,         sal
    ,         MAX (LEVEL)  AS max_level
    FROM      table_x
    CONNECT BY NOCYCLE  id  = PRIOR id
          AND sal >= PRIOR sal
          AND ( sal   > PRIOR sal
             OR ROWID > PRIOR ROWID
              )
    GROUP BY  id
    ,         sal
)
, got_cnt AS
(
    SELECT    id
    ,         sal
    ,         COUNT (*)  AS cnt
    FROM      table_x
    GROUP BY  id
    ,         sal
)
SELECT    x.id
,         x.sal
,         l.max_level + 1 - c.cnt  AS rnk
FROM      table_x        x
JOIN      got_max_level  l  ON  x.id  = l.id
                            AND x.sal = l.sal
JOIN      got_cnt        c  ON  x.id  = c.id
                            AND x.sal = c.sal
ORDER BY  x.id
,         x.sal
;
This is even less efficient, as well as more complicated, than the scalar sub-query solution.
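For comparison, the scalar sub-query approach mentioned above might look something like this (a sketch, assuming the sample rows live in a table called table_x):

```sql
-- Emulate RANK() without analytic functions: a row's rank is
-- 1 + the number of rows in the same id group with a strictly lower sal.
-- Ties share the same rank and the next rank is skipped, as with RANK().
SELECT    x.id
,         x.sal
,         ( SELECT  COUNT (*) + 1
            FROM    table_x  y
            WHERE   y.id  =  x.id
            AND     y.sal <  x.sal
          )  AS rnk
FROM      table_x  x
ORDER BY  x.id
,         x.sal
;
```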
Similar Messages
-
Alternate for Replicate function of sqlserver
What is the alternative in Oracle to SQL Server's REPLICATE function?
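For simple cases, Oracle's built-in RPAD can often stand in for REPLICATE (a sketch; it assumes the pattern to repeat is a fixed string):

```sql
-- RPAD pads 'abc' out to 9 characters (3 copies of a 3-character string),
-- using the string itself as the pad character, which repeats it.
SELECT RPAD('abc', 3 * LENGTH('abc'), 'abc') AS replicated
FROM   dual;
-- REPLICATED
-- ----------
-- abcabcabc
```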
Or, you can create it easily.
SQL> create or replace
2 function replicate(in_str varchar2,in_int number)
3 return varchar2
4 is
5 p_str varchar2(4000);
6 begin
7 p_str := '';
8 for i in 1..in_int loop
9 p_str := p_str||in_str;
10 end loop;
11 return p_str;
12 end;
13 /
Function created.
SQL> select replicate('abc',3) from dual;
REPLICATE('ABC',3)
abcabcabc -
Question for analytic functions experts
Hi,
I have an ugly table containing an implicit master-detail relation.
The table can be ordered by sequence, and then each detail is beneath its master (in sequence).
If it is a detail, the master column is NULL, and vice versa.
Sample:
SEQUENCE  MASTER  DETAIL  BOTH_PRIMARY_KEYS
1         A               1
2                 A       1
3                 B       2
4         B               2
5                 A       3
6                 B       4
Task: go into the table with the primary key of my detail, and find the primary key of its master.
I already have a solution for getting it, but I would like to know if there is an analytic statement
which is more elegant, instead of self-referencing my table three times. Is anybody used to analytic functions?
Thanks,
Dirk
Hi,
Do you mean like this?
with data as (
select 1 sequence, 'A' master, null detail, 1 both_primary_keys from dual union all
select 2, null, 'A', 1 from dual union all
select 3, null, 'B', 2 from dual union all
select 4, 'B', null, 2 from dual union all
select 5, null, 'A', 3 from dual union all
select 6, null, 'B', 4 from dual )
select (select max(both_primary_keys) keep (dense_rank last order by sequence)
from data
where sequence < detail_record.sequence and detail is null) master_primary_key
from data detail_record
where (both_primary_keys=3 /* look up detail key 3 */ and master is null)
-
2.1 EA Bug: group by auto complete generates group by for analytic function
Hi,
when using an analytic function in the sql text, sqldeveloper generates an automatic group by statement in the sql text.
Regards,
Ingo
Personally, I don't want anything changed automatically EVER. The day you don't notice and you run a wrong statement, the consequences may be very costly (read: disaster).
Can this be turned off all together? If there's a preference I didn't find, can this be left off by default?
Thanks,
K. -
Help:alternate for calling function in where clause
Hi ,
In the query below I'm calling a function in the WHERE clause to exclude records with COMPLETE status; because of this, the query takes 700 seconds to return results. If I remove the function condition, it returns results within 5 seconds. Can someone advise an alternative approach?
WHERE mark_status != 'COMPLETE'
SELECT assessment_school,
subject,
subject_option,
lvl,
component,
mark_status,
NULL AS grade_status,
NULL AS sample_status,
:v_year,
:v_month,
:v_formated_date,
:v_type,
cand_lang
FROM
(SELECT assessment_school,
subject,
subject_option,
lvl,
programme,
component,
paper_code,
cand_lang,
mark_entry.get_ia_entry_status(:v_year, :v_month, assessment_school, subject_option, lvl, cand_lang, component, paper_code) AS mark_status
FROM
(SELECT DISTINCT ccr.assessment_school,
ccr.subject,
ccr.subject_option,
ccr.lvl,
ccr.programme,
ccr.language AS cand_lang,
ccr.paper_code,
ccr.component
FROM candidate_component_reg ccr
WHERE ccr.split_session_year = :v_year
AND ccr.split_session_month = :v_month
AND EXISTS
(SELECT 1
FROM IBIS.subject_component sc
WHERE sc.year = ccr.split_session_year
AND sc.month = ccr.split_session_month
AND sc.paper_code = ccr.paper_code
AND sc.assessment_type = 'INTERNAL'
AND sc.subject_option NOT LIKE '%self taught%'
AND sc.component NOT IN ('PERFORMANCE PRODUCTION','PRESENTATION WORK','REFLECTIVE PROJECT','SPECIAL SYLLABUS INT. ASSESSMENT')
AND NVL(ccr.withdrawn,'N') = 'N'
AND ccr.mark_status != 'COMPLETE'
AND EXISTS
(SELECT 1
FROM school s
WHERE s.school_code = ccr.assessment_school
AND s.training_school = 'N'
WHERE mark_status != 'COMPLETE';
One thing you can test quickly is to put the function call in its own SELECT ... FROM dual.
This might make a difference.
However, only you can check this, I don't have your tables or data.
So, what happens if you use:
paper_code,
cand_lang,
(select mark_entry.get_ia_entry_status(:v_year, :v_month, assessment_school, subject_option, lvl, cand_lang, component, paper_code) from dual ) AS mark_status
FROM
(SELECT DISTINCT ccr.assessment_school, --<< is the DISTINCT really needed?
ccr.subject,
ccr.subject_option,
...
Also, try to find out the purpose of the DISTINCT above: is it really needed, or is there some join missing?
-
Alternate for stuff function of sqlserver
Hi, can anyone please help me migrate the following code from SQL Server to Oracle?
set @sLastItemInPkts_2 = Stuff(@sLastItemInPkts_2,
                               @iOffset,
                               @iElementSize,
                               SubString(@sLastItemInPkts_1,
                                         @iOffset,
                                         @iElementSize))
set @sLastItemInPkts_1 = Stuff(@sLastItemInPkts_1,
                               @iOffset,
                               @iElementSize,
                               Convert(varChar(10),
                                       Replace(Str(@iDocId, @iElementSize),
                                               0)))
Thanks in advance.
If we need to replace not a certain sequence of characters in a string, but a specified number of characters starting from some position, it's simpler to use the STUFF function:
STUFF ( <character_expression1> , <start> , <length> , <character_expression2> )
This function replaces the substring of length <length> that starts at position <start> in <character_expression1> with <character_expression2>.
-
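For the migration above: Oracle has no built-in STUFF, but it can typically be emulated with SUBSTR concatenation (a sketch; the function name and parameter order mirror the T-SQL signature):

```sql
-- Hypothetical STUFF emulation for Oracle: replace in_len characters
-- starting at position in_pos of in_str with in_repl.
CREATE OR REPLACE FUNCTION stuff (
    in_str   VARCHAR2,
    in_pos   NUMBER,
    in_len   NUMBER,
    in_repl  VARCHAR2
) RETURN VARCHAR2
IS
BEGIN
    RETURN SUBSTR(in_str, 1, in_pos - 1)   -- part before the replaced span
        || in_repl                         -- the replacement text
        || SUBSTR(in_str, in_pos + in_len);-- part after the replaced span
END;
/
```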
Looking for Analytical function to filter query
Source dataset for one of policy
PC_COVKEY POLICY_NUMBER TERM_IDENT COVERAGE_NUMBER TRANSACTION_TYPE COV_CHG_EFF_DATE TIMESTAMP_ENTERED
10897523P7013MC0010072 10897523P7 013 001 10 17/NOV/2008 20/NOV/2008 05:36:45.482025 PM
10897523P7013MR0030062 10897523P7 013 003 10 17/NOV/2008 20/NOV/2008 05:36:45.514349 PM
10897523P7013MC0010062 10897523P7 013 001 03 20/NOV/2008 20/NOV/2008 05:26:13.205097 PM
10897523P7013MR0030052 10897523P7 013 003 03 20/NOV/2008 20/NOV/2008 05:26:42.587605 PM
10897523P7013MC0010082 10897523P7 013 001 07 20/NOV/2008 20/NOV/2008 05:36:51.605820 PM
10897523P7013MR0030072 10897523P7 013 003 07 20/NOV/2008 20/NOV/2008 05:36:51.971094 PM
10897523P7013MC0010092 10897523P7 013 001 03 20/NOV/2008 23/MAR/2010 04:00:21.801816 PM
10897523P7013MR0030082 10897523P7 013 003 03 20/NOV/2008 23/MAR/2010 04:03:01.402111 PM
Filter records
10897523P7013MC0010062 10897523P7 013 001 03 20/NOV/2008 20/NOV/2008 05:26:13.205097 PM
10897523P7013MR0030052 10897523P7 013 003 03 20/NOV/2008 20/NOV/2008 05:26:42.587605 PM
Rule:
Rows with transaction_type 03 that come after a 09 or 10 group and before the next 07 or 06.
Like this?
with t
as
(
select '10897523P7013MC0010072' pc_covkey, '10897523P7' policy_number, '013' term_ident, '001' coverage_number, '10' transaction_type, to_date('17/NOV/2008', 'dd/MON/yyyy') cov_chg_eff_date, to_timestamp('20/NOV/2008 05:36:45.482025 PM', 'dd/MON/yyyy hh:mi:ss.ff PM') timestamp_entered from dual
union all
select '10897523P7013MR0030062' pc_covkey, '10897523P7' policy_number, '013' term_ident, '003' coverage_number, '10' transaction_type, to_date('17/NOV/2008', 'dd/MON/yyyy') cov_chg_eff_date, to_timestamp('20/NOV/2008 05:36:45.514349 PM', 'dd/MON/yyyy hh:mi:ss.ff PM') timestamp_entered from dual
union all
select '10897523P7013MC0010062' pc_covkey, '10897523P7' policy_number, '013' term_ident, '001' coverage_number, '03' transaction_type, to_date('20/NOV/2008', 'dd/MON/yyyy') cov_chg_eff_date, to_timestamp('20/NOV/2008 05:26:13.205097 PM', 'dd/MON/yyyy hh:mi:ss.ff PM') timestamp_entered from dual
union all
select '10897523P7013MR0030052' pc_covkey, '10897523P7' policy_number, '013' term_ident, '003' coverage_number, '03' transaction_type, to_date('20/NOV/2008', 'dd/MON/yyyy') cov_chg_eff_date, to_timestamp('20/NOV/2008 05:26:42.587605 PM', 'dd/MON/yyyy hh:mi:ss.ff PM') timestamp_entered from dual
union all
select '10897523P7013MC0010082' pc_covkey, '10897523P7' policy_number, '013' term_ident, '001' coverage_number, '07' transaction_type, to_date('20/NOV/2008', 'dd/MON/yyyy') cov_chg_eff_date, to_timestamp('20/NOV/2008 05:36:51.605820 PM', 'dd/MON/yyyy hh:mi:ss.ff PM') timestamp_entered from dual
union all
select '10897523P7013MR0030072' pc_covkey, '10897523P7' policy_number, '013' term_ident, '003' coverage_number, '07' transaction_type, to_date('20/NOV/2008', 'dd/MON/yyyy') cov_chg_eff_date, to_timestamp('20/NOV/2008 05:36:51.971094 PM', 'dd/MON/yyyy hh:mi:ss.ff PM') timestamp_entered from dual
union all
select '10897523P7013MC0010092' pc_covkey, '10897523P7' policy_number, '013' term_ident, '001' coverage_number, '03' transaction_type, to_date('20/NOV/2008', 'dd/MON/yyyy') cov_chg_eff_date, to_timestamp('23/MAR/2010 04:00:21.801816 PM', 'dd/MON/yyyy hh:mi:ss.ff PM') timestamp_entered from dual
union all
select '10897523P7013MR0030082' pc_covkey, '10897523P7' policy_number, '013' term_ident, '003' coverage_number, '03' transaction_type, to_date('20/NOV/2008', 'dd/MON/yyyy') cov_chg_eff_date, to_timestamp('23/MAR/2010 04:03:01.402111 PM', 'dd/MON/yyyy hh:mi:ss.ff PM') timestamp_entered from dual
)
select *
from (
      select lag(transaction_type) over(order by cov_chg_eff_date, timestamp_entered) prev,
             lead(transaction_type) over(order by cov_chg_eff_date, timestamp_entered) post,
             t.*
      from t
     )
where transaction_type = '03'
and prev in ('03', '09', '10')
and post in ('03', '07', '06')
-
Discoverer Analytic Function windowing - errors and bad aggregation
I posted this first on Database General forum, but then I found this was the place to put it:
Hi, I'm using this kind of windowing function:
SUM(Receitas Especificas) OVER(PARTITION BY Tipo Periodo,Calculado,"Empresa Descrição (Operador)","Empresa Descrição" ORDER BY Ini Periodo RANGE BETWEEN INTERVAL '12' MONTH PRECEDING AND INTERVAL '12' MONTH PRECEDING )
If I use the "Receitas Especificas SUM" instead of
"Receitas Especificas" I get the following error running the report:
"an error occurred while attempting to run..."
This is not in accordance to:
http://www.boku.ac.at/oradoc/ias/10g(9.0.4)/bi.904/b10268.pdf
but ok, the version without SUM inside works.
Another problem is the fact that for analytic function with PARTITION BY,
this does not work (shows the cannot aggregate symbol) if we collapse or use "<All>" in page items.
But it works if we remove the item from the PARTITION BY and also remove from workbook.
It's even worse for windowing functions(query above), because the query
only works if we remove the item from the PARTITION BY but we have to show it on the workbook - and this MAKES NO SENSE... :(
Please help.Unfortunately Discoverer doesn't show (correct) values for analytical functions when selecting "<All>" in a page item. I found out that it does work when you add the analytical function to the db-view instead of to the report as a calculation or as a calculated item on the folder.
The only problem is that you have to name all page items in the PARTITION window; so, when adding a page item to the report, you have to change the db-view and alter the PARTITION window.
Michael -
Hide duplicate row and analytic functions
Hi all,
I have to count how many customers have two particular product on the same area.
Table cols are:
AREA
PRODUCT_CODE (PK)
CUSTOMER_ID (PK)
QTA
The query is:
select distinct area, count(customer_id) over(PARTITION BY area)
from all_products
where product_code in ('BC01007', 'BC01004')
group by area, customer_id
having sum(decode(product_code, 'BC01007', qta, 0)) > 0
and    sum(decode(product_code, 'BC01004', qta, 0)) > 0;
In SQL*Plus this works fine, but in Oracle Discoverer I can't get distinct results, even if I check "Hide duplicate rows" in the Table Layout.
Anybody have another way to get distinct results for analytic function results?
Thanks in advance,
Giuseppe
The query in Disco is exactly the one I've posted before.
Results are there:
AREA.........................................C1
01704 - AREA VR NORD..............3
01704 - AREA VR NORD..............3
01704 - AREA VR NORD..............3
01705 - AREA VR SUD.................1
02702 - AREA EMILIA NORD........1
If I check "hide duplicate rows" in the layout options, the results don't change.
If I add the DISTINCT clause manually in SQL*Plus, the query becomes:
SELECT distinct o141151.AREA as E141152,COUNT(o141151.CUSTOMER_ID) OVER(PARTITION BY o141151.AREA ) as C_1
FROM BPN.ALL_PRODUCTS o141151
WHERE (o141151.PRODUCT_CODE IN ('BC01006','BC01007','BC01004'))
GROUP BY o141151.AREA,o141151.CUSTOMER_ID
HAVING (( SUM(DECODE(o141151.PRODUCT_CODE,'BC01006',1,0)) ) > 0 AND ( SUM(DECODE(o141151.PRODUCT_CODE,'BC01004',1,0)) ) > 0)
and the results are no longer duplicated.
AREA.........................................C1
01704 - AREA VR NORD..............3
01705 - AREA VR SUD.................1
02702 - AREA EMILIA NORD........1
Is there any other way to force a DISTINCT clause in Discoverer?
Thank you
Giuseppe -
From analytical function to regular query
Hi all,
I was reading a tutorial on analytic functions and I found something like this:
sum(principal) keep (dense_rank first order by d_date) over (partition by userid, alias, sec_id, flow, p_date)
Can somebody translate this into simple queries / subqueries? I am aware that analytic functions are faster, but I would like to know
how this translates to a regular query.
Can someone help me write a regular query that will produce the same result as the expression above? Thanks
Edited by: Devx on Jun 10, 2010 11:16 AM
Hi,
WITH CUSTOMERS AS
(
SELECT 1 CUST_ID ,'NJ' STATE_CODE,1 TIMES_PURCHASED FROM DUAL UNION ALL
SELECT 1,'CT',1 FROM DUAL UNION ALL
SELECT 2,'NY',10 FROM DUAL UNION ALL
SELECT 2,'NY',10 FROM DUAL UNION ALL
SELECT 1,'CT',10 FROM DUAL UNION ALL
SELECT 3,'NJ',2 FROM DUAL UNION ALL
SELECT 4,'NY',4 FROM DUAL
)
SELECT SUM(TIMES_PURCHASED) KEEP(DENSE_RANK FIRST ORDER BY CUST_ID ASC) OVER (PARTITION BY STATE_CODE) SUM_TIMES_PURCHASED_WITH_MIN,
SUM(TIMES_PURCHASED) KEEP(DENSE_RANK LAST ORDER BY CUST_ID) OVER (PARTITION BY STATE_CODE) SUM_TIMES_PURCHASED_WITH_MAX,
C.*
FROM CUSTOMERS C;
SUM_TIMES_PURCHASED_WITH_MIN SUM_TIMES_PURCHASED_WITH_MAX CUST_ID STATE_CODE TIMES_PURCHASED
11 11 1 CT 10
11 11 1 CT 1
1 2 3 NJ 2
1 2 1 NJ 1
20 4 4 NY 4
20 4 2 NY 10
20 4 2 NY 10
The above example is self-explanatory: execute the SQL and you'll notice that in the first column, the sum of TIMES_PURCHASED for the FIRST cust_id is repeated across the STATE_CODE partition, and in the second column, the sum of TIMES_PURCHASED for the LAST cust_id is repeated across the STATE_CODE partition.
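To answer the original question directly, the first column could be rewritten without analytic functions along these lines (illustrative only, against the CUSTOMERS data above):

```sql
-- Non-analytic equivalent of
--   SUM(times_purchased) KEEP (DENSE_RANK FIRST ORDER BY cust_id)
--     OVER (PARTITION BY state_code):
-- sum the rows whose cust_id equals the minimum cust_id in the same state.
SELECT c.*
,      ( SELECT SUM(c2.times_purchased)
         FROM   customers c2
         WHERE  c2.state_code = c.state_code
         AND    c2.cust_id = ( SELECT MIN(c3.cust_id)
                               FROM   customers c3
                               WHERE  c3.state_code = c.state_code )
       ) AS sum_times_purchased_with_min
FROM   customers c;
```

The DENSE_RANK LAST column is the same query with MIN replaced by MAX.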
HTH
*009*
Edited by: 009 on Jun 10, 2010 10:53 PM -
How can I restrict the rows of a SELECT which uses analytical functions?
Hello all,
Can anyone please tell me how to restrict the following query:
SELECT empno,
ename,
deptno,
SUM(sal) over(PARTITION BY deptno) sum_per_dept
FROM emp;
I would need just the lines which have sum_per_dept>100, without using a SUBSELECT.
Is there any way which is specific for analytical functions?
Thank you in advance,
Eugen
Message was edited by:
misailescu
SQL> select empno,
2 ename,
3 deptno,sum_per_dept
4 from
5 (
6 SELECT empno,
7 ename,
8 deptno,
9 SUM(sal) over(PARTITION BY deptno) sum_per_dept
10 FROM emp
11 )
12 where sum_per_dept>1000;
EMPNO ENAME DEPTNO SUM_PER_DEPT
7839 KING 10 8750
7782 CLARK 10 8750
7934 MILLER 10 8750
7902 FORD 20 6775
7369 SMITH 20 6775
7566 JONES 20 6775
7900 JAMES 30 9400
7844 TURNER 30 9400
7654 MARTIN 30 9400
7521 WARD 30 9400
7499 ALLEN 30 9400
7698 BLAKE 30 9400
12 rows selected
SQL>
SQL> select empno,
2 ename,
3 deptno,sum_per_dept
4 from
5 (
6 SELECT empno,
7 ename,
8 deptno,
9 SUM(sal) over(PARTITION BY deptno) sum_per_dept
10 FROM emp
11 )
12 where sum_per_dept>9000;
EMPNO ENAME DEPTNO SUM_PER_DEPT
7900 JAMES 30 9400
7844 TURNER 30 9400
7654 MARTIN 30 9400
7521 WARD 30 9400
7499 ALLEN 30 9400
7698 BLAKE 30 9400
6 rows selected
SQL>
Greetings...
Sim -
Analytical function fine within TOAD but throwing an error for a mapping.
Hi,
When I validate an expression based on SUM .... OVER PARTITION BY in a mapping, I am getting the following error.
Line 4, Col 23:
PLS-00103: Encountered the symbol "OVER" when expecting one of the following:
* & = - + < / > at in is mod remainder not rem then
<an exponent (**)> <> or != or ~= >= <= <> and or like LIKE2_
LIKE4_ LIKEC_ between || multiset member SUBMULTISET_
However, using TOAD, the expression is working fine.
A staging table has got three columns, col1, col2 and col3. The expression is checking for a word in col3. The expression is as under.
(CASE WHEN SUM (CASE WHEN UPPER(INGRP1.col3) LIKE 'some_value%'
THEN 1
ELSE 0
END) OVER (PARTITION BY INGRP1.col1
,INGRP1.col2) > 0
THEN 'Y'
ELSE 'N'
END)
I searched the forum for similar issues, but not able to resolve my issue.
Could you please let me know what's wrong here?
Many thanks,
Manoj.
Yes, expression validation in 10g simply does not work for (i.e. does not recognize) analytic functions.
It can simply be ignored. You should also set Generation mode to "Set Based only". Otherwise the mapping will fail to deploy under certain circumstances (when using non-set-based (PL/SQL) operators after the analytic function). -
Query for using "analytical functions" in DWH...
Dear team,
I would like to know if the following task can be done using analytic functions...
If it can be done using other ways, please do share the ideas...
I have table as shown below..
Create Table t As
Select *
From
(
Select 12345 PRODUCT, 'W1' WEEK, 10000 SOH, 0 DEMAND, 0 SUPPLY, 0 EOH From dual Union All
Select 12345, 'W2', 0, 100, 50, 0 From dual Union All
Select 12345, 'W3', 0, 100, 50, 0 From dual Union All
Select 12345, 'W4', 0, 100, 50, 0 From dual
);
PRODUCT WEEK SOH DEMAND SUPPLY EOH
12345 W1 10,000 0 0 10000
12345 W2 0 100 50 0
12345 W3 0 100 50 0
12345 W4 0 100 50 0
Now I want to calculate the EOH (ending on hand) quantity for W1...
This EOH for W1 becomes the SOH (starting on hand) for W2... and so on, till the end of the weeks.
The formula (consistent with the expected output below) is: EOH = SOH - DEMAND + SUPPLY
The output should be as follows...
PRODUCT WEEK SOH DEMAND SUPPLY EOH
12345 W1 10,000 0 0 10000
12345 W2 10,000 100 50 9950
12345 W3 9,950 100 50 9900
12345 W4 9,000 100 50 8950
Kindly share your ideas...
Nicloei W wrote:
Means SOH_AFTER_SUPPLY for W1 should be displayed under SOH for W2... i.e. SOH for W4 should be SOH_AFTER_SUPPLY for W3, right?
If yes, why are you expecting it to be 9000 for W4?
So the output should be...
PRODUCT WE SOH DEMAND SUPPLY EOH SOH_AFTER_SUPPLY
12345 W1 10000 0 0 0 10000
12345 W2 10000 100 50 0 9950
12345 W3 9950 100 50 0 *9900*
12345 W4 *9000* 100 50 0 9850
per logic you explained, shouldn't it be *9900* instead???
you could customize Martin Preiss's logic for your requirement:
SQL> with
2 data
3 As
4 (
5 Select 12345 PRODUCT, 'W1' WEEK, 10000 SOH, 0 DEMAND, 0 SUPPLY, 0 EOH From dual Union All
6 Select 12345, 'W2', 0, 100, 50, 0 From dual Union All
7 Select 12345, 'W3', 0, 100, 50, 0 From dual Union All
8 Select 12345, 'W4', 0, 100, 50, 0 From dual
9 )
10 Select Product
11 ,Week
12 , Sum(Soh) Over(Partition By Product Order By Week)- Sum(Supply) Over(Partition By Product Order By Week)+Supply Soh
13 ,Demand
14 ,Supply
15 , Sum(Soh) Over(Partition By Product Order By Week)- Sum(Supply) Over(Partition By Product Order By Week) eoh
16 from data;
PRODUCT WE SOH DEMAND SUPPLY EOH
12345 W1 10000 0 0 10000
12345 W2 10000 100 50 9950
12345 W3 9950 100 50 9900
12345 W4 9900 100 50 9850
Vivek L
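For what it's worth, the running balance can also be written as a single cumulative sum (a sketch over the data CTE above, using EOH = SOH - DEMAND + SUPPLY, which matches the expected output):

```sql
-- Each week's EOH is the W1 SOH plus the cumulative net movement so far;
-- each week's SOH is simply the previous week's EOH.
Select Product
,      Week
,      Nvl( Sum(Soh - Demand + Supply)
              Over (Partition By Product Order By Week
                    Rows Between Unbounded Preceding And 1 Preceding)
          , 0 ) + Soh                                         As Soh
,      Demand
,      Supply
,      Sum(Soh - Demand + Supply)
         Over (Partition By Product Order By Week)            As Eoh
From   t;
```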
I don't know whether analytic functions can do it for me or not
Hey everyone,
I'm looking for a way handling this report for my own job.
a table having the following attributes exists.
Create table Test (
Public_Date varchar2(10),
City varchar2(10),
count number(3))
A query with the following output could readily be produced using a GROUP BY clause.
Year Sum
2005 23
2006 36
2007 15
2008 10
But the question is: how can I produce the following output?
(I want to merge some records into one record in the output; in this example,
the combined sum for 2007 and 2008 is my interest, not each year individually as before.)
Year(s) Sum
2005 23
2006 36
2007,2008 25 /*(15+10)*/
I think analytic functions may be useful in producing this output, but I don't know how.
Could anyone help me handle this?
Hi,
You can use a CASE (or DECODE) statement to map all the years after 2006 to some common value, like 9999, and GROUP BY that computed value.
If you want the 9999 row to be labeled '2007, 2008', do a search for "string aggregate" for various techniques, or see Tom Kyte's excellent page on this subject:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2196162600402
I use STRAGG (near the top of the page). -
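The CASE-mapping idea might be sketched like this against the Test table above (LISTAGG, available from 11g Release 2, stands in for STRAGG; the 'later' bucket label is illustrative):

```sql
-- Per-year totals first, then map every year after 2006 to one bucket
-- and aggregate the bucket's years into a comma-separated label.
SELECT LISTAGG(yr, ',') WITHIN GROUP (ORDER BY yr) AS years
,      SUM(cnt)                                    AS total
FROM  ( SELECT SUBSTR(t.public_date, 1, 4) AS yr
        ,      SUM(t.count)                AS cnt
        FROM   test t
        GROUP  BY SUBSTR(t.public_date, 1, 4)
      )
GROUP BY CASE WHEN yr > '2006' THEN 'later' ELSE yr END;
```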
Are analytic functions usefull only for data warehouses?
Hi,
I deal with reporting queries on Oracle databases, but I don't work on data warehouses, so I'd like to know whether learning analytic functions (SQL for analysis such as ROLLUP, CUBE, GROUPING, ...) might be useful in helping me develop better reports, or whether analytic functions are usually useful only for data-warehouse queries. I mean, are ROLLUP, CUBE, GROUPING, ... useful also on an operational database, or do they make sense only on a DWH?
Thanks!
Mark1970 wrote:
thus is it worth learning them for improving report queries on common operational databases too, not just on DWHs?
Why pigeonhole report queries as "operational" or "data warehouse"?
Do you tell a user/manager that "No, this report cannot be done as it looks like a data warehouse report and we have an operational database!"?
Data processing and data reporting requirements do not care what label you assign to your database.
Simple real world example of using analytical queries on a non warehouse. We supply data to an external system via XML. They require that we limit the number of parent entities per XML file we supply. E.g. 100 customer elements (together with all their child elements) per file. Analytical SQL enables this to be done by creating "buckets" that can only contain 100 parent elements at a time. Complete process is SQL driven - no slow-by-slow row by row processing in PL/SQL using nested cursor loops and silly approaches like that.
Analytical SQL is a tool in the developer toolbox. It would be unwise to remove it from the toolbox, thinking that it is not applicable and won't be needed for the work that's to be done.
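The XML-file "buckets" example above can be sketched in one line of analytic SQL (table and column names are illustrative):

```sql
-- Assign each parent customer to a file of at most 100 parents:
-- number the rows, then divide into groups of 100.
SELECT customer_id
,      CEIL(ROW_NUMBER() OVER (ORDER BY customer_id) / 100) AS file_no
FROM   customers;
```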