Replacing Oracle's FIRST_VALUE and LAST_VALUE analytical functions.
Hi,
I am using OBI 10.1.3.2.1 where, I guess, EVALUATE is not available. I would like to know alternatives, especially to replace Oracle's FIRST_VALUE and LAST_VALUE analytical functions.
I want to track some changes. For example, there are four methods of travel: Air, Train, Road and Sea. I would like to know a traveler's first and last method of travel in a year. If the two match, one action is taken; if they do not match, another action is taken.
I tried as under.
1. Get a sequence ID for each travel within a year per traveler, as Sequence_Id.
2. Get the lowest sequence ID (which should be 1) for travels within a year per traveler, as Sequence_LId.
3. Get the highest sequence ID (which could be 1 or greater) for travels within a year per traveler, as Sequence_HId.
4. If Sequence ID = Lowest Sequence ID then display the method of travel as First Method of Travel.
5. If Sequence ID = Highest Sequence ID then display the method of travel as Latest Method of Travel.
6. If First Method of Travel = Latest Method of Travel then display Yes/No as Match.
The issue is that the cells in First Method of Travel and Last Method of Travel can be blank unless the traveler traveled only once in a year.
Using Oracle's FIRST_VALUE and LAST_VALUE analytical functions, I can get a result like
Traveler | Card Issue Date | Journey Date | Method | First Method of Travel | Last Method of Travel | Match?
ABC | 01/01/2000 | 04/04/2000 | Road | Road | Air | No
ABC | 01/01/2000 | 15/12/2000 | Air | Road | Air | No
XYZ | 01/01/2000 | 04/05/2000 | Train | Train | Train | Yes
XYZ | 01/01/2000 | 04/11/2000 | Train | Train | Train | Yes
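The Oracle-side result above can be reproduced with standard window functions. As a hedged, runnable sketch (table name and data are invented to mirror the thread's example; SQLite's FIRST_VALUE/LAST_VALUE follow the same semantics as Oracle's here):

```python
import sqlite3

# Illustrative table and data, mirroring the thread's example.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE travel (traveler TEXT, journey_date TEXT, method TEXT);
INSERT INTO travel VALUES
  ('ABC', '2000-04-04', 'Road'),
  ('ABC', '2000-12-15', 'Air'),
  ('XYZ', '2000-05-04', 'Train'),
  ('XYZ', '2000-11-04', 'Train');
""")

rows = con.execute("""
SELECT traveler, journey_date, method,
       FIRST_VALUE(method) OVER (
         PARTITION BY traveler ORDER BY journey_date
       ) AS first_method,
       -- LAST_VALUE needs a frame reaching the end of the partition;
       -- the default frame stops at the current row, which produces
       -- exactly the kind of blank/partial result described here.
       LAST_VALUE(method) OVER (
         PARTITION BY traveler ORDER BY journey_date
         ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
       ) AS last_method
FROM travel
ORDER BY traveler, journey_date
""").fetchall()

for traveler, jdate, method, first, last in rows:
    print(traveler, jdate, method, first, last,
          'Yes' if first == last else 'No')
```

Note the explicit `ROWS BETWEEN ... UNBOUNDED FOLLOWING` frame on LAST_VALUE: without it, the window only extends to the current row, which is a common source of unexpected results with LAST_VALUE in Oracle as well.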
Using OBI Answers, I am getting something like this.
Traveler | Card Issue Date | Journey Date | Method | First Method of Travel | Last Method of Travel | Match?
ABC | 01/01/2000 | 04/04/2000 | Road | Road | <BLANK> | No
ABC | 01/01/2000 | 15/12/2000 | Air | <BLANK> | Air | No
XYZ | 01/01/2000 | 04/05/2000 | Train | Train | <BLANK> | No
XYZ | 01/01/2000 | 04/11/2000 | Train | <BLANK> | Train | No
Above, for traveler XYZ, the Match? column clearly shows a wrong result (although somehow it is correct for traveler ABC).
I would appreciate it if someone could guide me on how to resolve this issue.
Many thanks,
Manoj.
Edited by: mandix on 27-Nov-2009 08:43
Edited by: mandix on 27-Nov-2009 08:47
Hi,
Just to recap: in OBI 10.1.3.2.1, I am trying to find an alternative to the FIRST_VALUE and LAST_VALUE analytical functions used in Oracle. Somehow, I feel it's achievable. I would like to know the answers to the following questions.
1. Is there any way of referring to a cell value and displaying it in other cells of the same column for a given reference value?
For example, can I display the First Method of Travel for traveler 'ABC' and 'XYZ' for all the rows returned in the same column, respectively?
2. I tried the RMIN and RMAX functions in the RPD, but they do not accept a "BY" clause (for example, RMIN(Transaction_Id BY Traveler) to define the lowest sequence ID per traveler). Am I doing something wrong here? Why can a formula with a "BY" clause be defined in Answers but not in the RPD? The idea is to use this in Answers. This is related to my first question.
Could someone please let me know?
I understand that what I have posted in this thread may relate to something that can be done outside OBI, but I would still like to know.
If anything is not clear please let me know.
Thanks,
Manoj.
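One classic workaround when FIRST_VALUE/LAST_VALUE (and BY-clause aggregates in the RPD) are unavailable is to pair plain MIN/MAX with a sortable concatenated key: prefix the method with the journey date, aggregate, then strip the prefix. A hedged, runnable sketch (names and data invented; in Answers this would be expressed with MIN(expr BY traveler) / MAX(expr BY traveler) rather than GROUP BY):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE travel (traveler TEXT, journey_date TEXT, method TEXT);
INSERT INTO travel VALUES
  ('ABC', '2000-04-04', 'Road'),
  ('ABC', '2000-12-15', 'Air'),
  ('XYZ', '2000-05-04', 'Train'),
  ('XYZ', '2000-11-04', 'Train');
""")

rows = con.execute("""
SELECT traveler,
       -- journey_date is ISO-formatted (fixed width 10), so string MIN/MAX
       -- orders chronologically; SUBSTR(x, 12) drops the 'YYYY-MM-DD|' key.
       SUBSTR(MIN(journey_date || '|' || method), 12) AS first_method,
       SUBSTR(MAX(journey_date || '|' || method), 12) AS last_method
FROM travel
GROUP BY traveler
ORDER BY traveler
""").fetchall()
print(rows)
```

The trick only works if the key prefix has a fixed width and sorts in the desired order, which is why an ISO-style date string is used here.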
Similar Messages
-
Oracle property path and arq property functions
Hi,
I am using the Oracle Jena Adapter. My problem is about property paths in SPARQL queries. When I try to query "SELECT * WHERE { ?t rdf:type owl:ObjectProperty . ?t rdfs:domain ?o . ?o owl:unionOf ?union . ?union rdf:rest*/rdf:first ?member . }" against a ModelOracleSem, it gives me a parser error.
However, if I try it using a Jena in-memory model, it works perfectly, as in the code below.
hybridGraph = OracleGraphWrapperForOntModel.getInstance(graph1);
model = ModelFactory.createModelForGraph(hybridGraph);
ontModel = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM, model);
I also tried ARQ's list function, as in "?union list:member ?member". It does not return any results from ModelOracleSem, but when I try it with a Jena in-memory model, it again works perfectly.
Can anyone help me to solve this issue?
Thanks,
Regards,

Hi Mustafa,
We have generated the following code snippet to execute the SPARQL query using a ModelOracleSem model:
String szQuery = " PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> "+
" PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> " +
" PREFIX owl: <http://www.w3.org/2002/07/owl#> " +
" select * where { " +
" ?t rdf:type owl:ObjectProperty . " +
" ?t rdfs:domain ?o . " +
" ?o owl:unionOf ?union . " +
" ?union rdf:rest*/rdf:first ?member ." +
System.err.println("Create Oracle Connection");
Oracle oracle = new Oracle(szJdbcURL, szUser, szPasswd);
System.err.println("Create Oracle Model");
ModelOracleSem oracleModel = ModelOracleSem.createOracleSemModel(
oracle, szModelName);
oracleModel.removeAll();
System.err.println("Populate model");
String insertString =
" PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> "+
" PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> " +
" PREFIX owl: <http://www.w3.org/2002/07/owl#> " +
" PREFIX urn: <http://example/> " +
" INSERT DATA " +
" { urn:objA rdf:type owl:ObjectProperty . " +
" urn:objB rdf:type owl:ObjectProperty . " +
" urn:objC rdf:type owl:ObjectProperty . " +
" urn:objD rdf:type owl:ObjectProperty . " +
" urn:objA rdfs:domain urn:dom1 . " +
" urn:objB rdfs:domain urn:dom2 . " +
" urn:objC rdfs:domain urn:dom3 . " +
" urn:objD rdfs:domain urn:dom4 . " +
" urn:dom1 owl:unionOf _:setA . " +
" _:setA rdf:rest _:mem1 . " +
" _:setA rdf:first urn:C1 . " +
" _:mem1 rdf:first urn:C2 . " +
" _:mem1 rdf:rest rdf:nil . " +
UpdateAction.parseExecute(insertString, oracleModel);
System.err.println("Population done " + oracleModel.size());
System.err.println("Execute query over OracleModel");
Query query = QueryFactory.create(szQuery, Syntax.syntaxARQ);
QueryExecution qexec = QueryExecutionFactory.create(query, oracleModel);
ResultSet results = qexec.execSelect();
ResultSetFormatter.out(System.out, results, query);
Can you execute this code snippet and see if you get the expected result?
| t | o | union | member |
===============================================================================
| <http://example/objA> | <http://example/dom1> | _:b0 | <http://example/C1> |
| <http://example/objA> | <http://example/dom1> | _:b0 | <http://example/C2> |
Thanks,
Gaby -
[8i] Subquery vs Multiple Analytic Functions
Does anyone have an idea which is better performance-wise?
I have a query that is 3 layers deep already with sub-queries. In the topmost level, I have a choice. I can calculate one analytic function twice, and one analytic function three times, or I can make the topmost level into a subquery, and calculate each analytic function only once.
In case it matters for this problem, this query is running on an 8i database.
A simplified example:
CREATE TABLE my_data
( order_no CHAR(10)
, seq_nbr CHAR(4)
, area_id CHAR(4)
, start_date DATE
, unit_time NUMBER(7,2)
);
INSERT INTO my_data VALUES ('0000567890','0010','A001',TO_DATE('05/01/2000','mm/dd/yyyy'),0.34);
INSERT INTO my_data VALUES ('0000567890','0020','A001',TO_DATE('05/02/2000','mm/dd/yyyy'),0.78);
INSERT INTO my_data VALUES ('0000567890','0030','A002',TO_DATE('05/03/2000','mm/dd/yyyy'),0.91);
INSERT INTO my_data VALUES ('0000567890','0040','A003',TO_DATE('05/03/2000','mm/dd/yyyy'),0.27);
INSERT INTO my_data VALUES ('0000123456','0010','A001',TO_DATE('04/01/2001','mm/dd/yyyy'),0.39);
INSERT INTO my_data VALUES ('0000123456','0020','A001',TO_DATE('04/02/2001','mm/dd/yyyy'),0.98);
INSERT INTO my_data VALUES ('0000123456','0030','A002',TO_DATE('04/03/2001','mm/dd/yyyy'),0.77);
INSERT INTO my_data VALUES ('0000123456','0040','A003',TO_DATE('04/03/2001','mm/dd/yyyy'),0.28);
INSERT INTO my_data VALUES ('0000123123','0010','A001',TO_DATE('12/01/2001','mm/dd/yyyy'),0.31);
INSERT INTO my_data VALUES ('0000123123','0020','A001',TO_DATE('12/02/2001','mm/dd/yyyy'),0.86);
INSERT INTO my_data VALUES ('0000123123','0030','A002',TO_DATE('12/03/2001','mm/dd/yyyy'),0.82);
INSERT INTO my_data VALUES ('0000123123','0040','A003',TO_DATE('12/03/2001','mm/dd/yyyy'),0.23);
INSERT INTO my_data VALUES ('0000111111','0010','A001',TO_DATE('06/01/2002','mm/dd/yyyy'),0.29);
INSERT INTO my_data VALUES ('0000111111','0020','A001',TO_DATE('06/02/2002','mm/dd/yyyy'),0.84);
INSERT INTO my_data VALUES ('0000111111','0030','A002',TO_DATE('06/03/2002','mm/dd/yyyy'),0.78);
INSERT INTO my_data VALUES ('0000111111','0040','A003',TO_DATE('06/03/2002','mm/dd/yyyy'),0.26);
INSERT INTO my_data VALUES ('0000654321','0010','A001',TO_DATE('05/01/2003','mm/dd/yyyy'),0.28);
INSERT INTO my_data VALUES ('0000654321','0020','A001',TO_DATE('05/02/2003','mm/dd/yyyy'),0.88);
INSERT INTO my_data VALUES ('0000654321','0030','A002',TO_DATE('05/03/2003','mm/dd/yyyy'),0.75);
INSERT INTO my_data VALUES ('0000654321','0040','A003',TO_DATE('05/03/2003','mm/dd/yyyy'),0.25);

My choices for the example are:
SELECT area_id
, period_start
, period_end
, AVG(tot_area_unit_hrs)
OVER (
PARTITION BY area_id
ORDER BY period_start
RANGE BETWEEN period_end - period_start PRECEDING
AND CURRENT ROW
) -
STDDEV(tot_area_unit_hrs)
OVER (
PARTITION BY area_id
ORDER BY period_start
RANGE BETWEEN period_end - period_start PRECEDING
AND CURRENT ROW
) AS lo_unit_time
, AVG(tot_area_unit_hrs)
OVER (
PARTITION BY area_id
ORDER BY period_start
RANGE BETWEEN period_end - period_start PRECEDING
AND CURRENT ROW
) AS avg_unit_time
, AVG(tot_area_unit_hrs)
OVER (
PARTITION BY area_id
ORDER BY period_start
RANGE BETWEEN period_end - period_start PRECEDING
AND CURRENT ROW
) +
STDDEV(tot_area_unit_hrs)
OVER (
PARTITION BY area_id
ORDER BY period_start
RANGE BETWEEN period_end - period_start PRECEDING
AND CURRENT ROW
) AS hi_unit_time
FROM (
SELECT order_no
, area_id
, ADD_MONTHS(MIN(start_date),-24)+1 AS period_start
, MIN(start_date) AS period_end
, SUM(unit_time) AS tot_area_unit_hrs
FROM my_data
GROUP BY order_no
, area_id
)
ORDER BY area_id
, period_end
;

or
SELECT area_id
, period_start
, period_end
, avg_unit_time - stdev_unit_time AS lo_unit_time
, avg_unit_time
, avg_unit_time + stdev_unit_time AS hi_unit_time
FROM (
SELECT area_id
, period_start
, period_end
, STDDEV(tot_area_unit_hrs)
OVER (
PARTITION BY area_id
ORDER BY period_start
RANGE BETWEEN period_end - period_start PRECEDING
AND CURRENT ROW
) AS stdev_unit_time
, AVG(tot_area_unit_hrs)
OVER (
PARTITION BY area_id
ORDER BY period_start
RANGE BETWEEN period_end - period_start PRECEDING
AND CURRENT ROW
) AS avg_unit_time
FROM (
SELECT order_no
, area_id
, ADD_MONTHS(MIN(start_date),-24)+1 AS period_start
, MIN(start_date) AS period_end
, SUM(unit_time) AS tot_area_unit_hrs
FROM my_data
GROUP BY order_no
, area_id
)
)
ORDER BY area_id
, period_end
;

My gut instinct would be that the 2nd option (with the sub-query) is faster, but before I try this on my actual data set, which is much larger, I'd like a second opinion. I don't want to accidentally start running a "neverending" query.

Sorry for the delay in response here... I was busy deleting 39 GB of trace files, because some silly person (hangs head in shame) accidentally set TRACE_LEVEL_CLIENT=SUPPORT months ago, forgot to turn it off, and didn't notice until her hard drive was full and she couldn't save a file.
Anyway...
@Dev
For the real query...
option 1 explain plan:
OPERATION OPTIONS OBJECT_NODE OBJECT_OWNER OBJECT_NAME OBJECT_INSTANCE OBJECT_TYPE OPTIMIZER ID PARENT_ID POSITION COST CARDINALITY BYTES
SELECT STATEMENT REMOTE HINT: CHOOSE 0 6076 6076 1 91
SORT ORDER BY 1 0 1 6076 1 91
WINDOW SORT 2 1 1 1 91
SORT GROUP BY 3 2 1 6076 1 91
FILTER 4 3 1
NESTED LOOPS 5 4 1 6071 1 91
TABLE ACCESS FULL DB8I.WORLD ASCHEMA DETAILS 2 6 5 1 6068 1 50
INDEX RANGE SCAN DB8I.WORLD ASCHEMA ORDERS_IX3 UNIQUE ANALYZED 7 5 2 3 141930 5819130
TABLE ACCESS BY INDEX ROWID DB8I.WORLD ASCHEMA DETAILS 3 8 4 2 7 1 44
INDEX RANGE SCAN DB8I.WORLD ASCHEMA DETAILS_IX1 NON-UNIQUE ANALYZED 9 8 1 3 1

option 2 explain plan:
OPERATION OPTIONS OBJECT_NODE OBJECT_OWNER OBJECT_NAME OBJECT_INSTANCE OBJECT_TYPE OPTIMIZER ID PARENT_ID POSITION COST CARDINALITY BYTES
SELECT STATEMENT HINT: CHOOSE 0 2777 2777 96 14208
SORT ORDER BY 1 0 1 2777 96 14208
VIEW LOCALUSER 1 2 1 1 2776 96 14208
WINDOW SORT 3 2 1 2776 96 26400
HASH GROUP BY 4 3 1 2776 96 26400
FILTER 5 4 1
REMOTE DB8I REMOTE 6 5 1 1908 386 59830
REMOTE DB8I DETAILS REMOTE 7 5 2 4 1 129

You should know that, in order to get an explain plan on these queries, I have to run them via a database link to the 8i database through XE 10g on my local machine, as I don't have access to run explain plan on the remote database itself. I've never seen my local user name appear in an explain plan run this way before, but it did in the one for option #2 (subquery), so I'd guess that isn't the explain plan I'd get if I ran it directly on the 8i database. I also find it really odd that the ORDERS table doesn't seem to be referenced in the 2nd explain plan... I do think that's still going to be my best option, though, so I'm going to try it and hope it doesn't take too long.
@Frank Kulash
Agreed. That's what I'll try. I also think that I can't put the analytic function in with the aggregates without doing some additional computations in that sub-query, which I think would defeat the purpose of putting them there in the first place. -
EVALUATE in OBIEE with Analytic function LAST_VALUE
Hi,
I'm trying to use EVALUATE with analytic function LAST_VALUE but it is giving me error below:
[nQSError: 17001] Oracle Error code: 30483, message: ORA-30483: window functions are not allowed here at OCI call OCIStmtExecute. [nQSError: 17010] SQL statement preparation failed. (HY000)
Thanks
Kumar.

Hi Kumar,
The ORA error tells me that this is something raised by the Oracle database, not by the BI Server. In this case, the BI Server might have fired an incorrect query at the DB, and you might want to check what's wrong with it too.
LAST_VALUE is an analytic function which works over a set/partition of records. Please refer to the semantics at http://docs.oracle.com/cd/B19306_01/server.102/b14200/functions073.htm and see if the query violates any rules. You may also want to post the physical SQL here so we can check it.
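ORA-30483 belongs to a well-defined class of misuse: window functions may appear only in the SELECT list or ORDER BY, never in WHERE, GROUP BY, or HAVING. As an illustrative sketch (table and data invented; SQLite rejects the same misuse, so it makes a cheap test bed), the standard fix is to compute the window function in an inline view and filter outside it:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (grp TEXT, val INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [('a', 1), ('a', 2), ('b', 3)])

# Invalid: window function inside WHERE -- the database rejects it.
err = None
try:
    con.execute("""
        SELECT grp, val FROM t
        WHERE val = LAST_VALUE(val) OVER (PARTITION BY grp ORDER BY val)
    """)
except sqlite3.OperationalError as e:
    err = str(e)
print("rejected:", err)

# Valid: compute the window function in an inline view, filter outside.
rows = con.execute("""
    SELECT grp, val FROM (
        SELECT grp, val,
               LAST_VALUE(val) OVER (
                 PARTITION BY grp ORDER BY val
                 ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
               ) AS last_val
        FROM t
    )
    WHERE val = last_val
    ORDER BY grp
""").fetchall()
print(rows)
```

If the BI Server generated SQL of the first shape from the EVALUATE expression, that would explain the ORA-30483.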
Hope this helps.
Thank you,
Dhar -
OLAP Expression Analytical Functions and NA Values
Hello,
I am trying to use the SUM and MAX functions over a hierarchy where there are potentially NA values. I believe in OLAP DML, the natural behavior is to skip these values. Can a skip be accomplished with either the SUM or MAX OLAP Expression Syntax functions?
Cheers!

Pre-requisites:
===============
Time dimension with level=DAY.... i have restricted data to 1 month approx.. 20100101 to 20100201 (32 days).
Measure of interest - a (say)
Time dimension attribute which indicates WEEKDAY... if you have an END_DATE attribute with a DATE datatype, we can extract the day (MON/TUE/WED/...) from it and decide the weekday/weekend status for the DAY.
Sort time as per END_DATE ..
Take care of other dimensions during testing... restrict all other dimensions of cube to single value. Final formula would be independent of other dimensions but this helps development/testing.
Step 1:
======
"Firm up the required design in olap dml
"rpr down time
" w 10 heading 't long' time_long_description
" w 10 heading 't end date' time_end_date
" w 20 heading 'Day Type' convert(time_end_date text 'DY')
" a
NOTE: version 1 of moving total
" heading 'moving minus 2 all' movingtotal(a, -2, 0, 1, time status)
" w 20 heading 'Day Type' convert(time_end_date text 'DY')
" heading 'a wkday' if convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN' then a else na
NOTE: version 2 of moving total
" heading 'moving minus 2 wkday' movingtotal(a, -2, 0, 1, time convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN')
" w 20 heading 'Day Type' convert(time_end_date text 'DY')
" heading 'a wkday non-na' if convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN' and a ne na then a else na
NOTE: version 3 of moving total
" heading 'moving minus 2 wkday non-na' movingtotal(a, -2, 0, 1, time convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN' and a ne na)
OLAP DML Command:
rpr down time w 10 heading 't long' time_long_description w 10 heading 't end date' time_end_date w 20 heading 'Day Type' convert(time_end_date text 'DY') a heading 'moving minus 2 all' movingtotal(a, -2, 0, 1, time status) w 20 heading 'Day Type' convert(time_end_date text 'DY') heading 'a wkday' if convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN' then a else na heading 'moving minus 2 wkday' movingtotal(a, -2, 0, 1, time convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN') w 20 heading 'Day Type' convert(time_end_date text 'DY') heading 'a wkday non-na' if convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN' and a ne na then a else na heading 'moving minus 2 wkday non-na' movingtotal(a, -2, 0, 1, time convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN' and a ne na)
Step 2:
======
"Define additional measure to contain the required/desired formula implementing the business requirements (version 3 above)
" create formula AF1 which points to last column... i.e. OLAP_DML_EXPRESSION
dfn af1 formula movingtotal(a, -2, 0, 1, time convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN' and a ne na)
"NOTE: Do this via AWM using calculated member with template type = OLAP_DML_EXPRESSION so that the cube view for cube contains a column for measure AF1
OLAP DML Command:
rpr down time w 10 heading 't long' time_long_description w 10 heading 't end date' time_end_date w 20 heading 'Day Type' convert(time_end_date text 'DY') a heading 'a wkday non-na' if convert(time_end_date text 'DY') ne 'SAT' and convert(time_end_date text 'DY') ne 'SUN' and a ne na then a else na heading 'moving minus 2 wkday non-na (AF1)' af1
Step 3:
=======
Extend Oracle OLAP with regular SQL functionality like SQL ANALYTICAL functions to fill up the gaps for intermediate week days like DAY_20100104 (TUE), DAY_20100105 (WED) etc.
Use the SQL analytic function LAST_VALUE() in the query, i.e. in the report or query... don't use AF1 directly but use LAST_VALUE(af1), as in the pseudo-code below:
LAST_VALUE(cube_view.af1) OVER (PARTITION BY <product, organization, ... non-time dimensions> ORDER BY <DAY_KEY_Col> ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
HTH
Shankar -
HTMLDB 1.6 and "order by" in analytic functions
In HTMLDB 1.6, Oracle 10g, when I enter the string "order by" in the region source of a report of type "SQL query (PL/SQL function body returning SQL query)", I get
1 error has occurred
* Your query can't include an "ORDER BY" clause when having column heading sorting enabled.
I understand the reason for this error, but unfortunately I need this for an analytic function:
row_number() over (partition by ... order by ...)
It seems that the check is performed by simply looking for the string "order by" in the region source (in fact, the error fires even if that string is contained within a comment).
I know possible workarounds (e.g. creating a view and selecting from it); I just wanted to let you know.
Regards
Alberto

Another one under the 'obvious route' category:
Seems that the ORDER BY check is apparently for ORDER<space>BY... so simply adding extra whitespace between ORDER and BY bypasses the check (at least in 2.1.0.00.39).
To make it a bit more obvious that a separation is intended, an empty comment, i.e. ORDER/**/BY, works nicely.
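The comment trick works because "ORDER BY" is two tokens and a comment counts as whitespace between them, so the SQL stays valid while the string match fails. A quick illustrative check (using SQLite as a stand-in parser; table and data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(3,), (1,), (2,)])

# An empty comment between ORDER and BY parses as whitespace,
# so the query is legal SQL but no longer contains "ORDER BY" literally.
rows = con.execute("SELECT x FROM t ORDER/**/BY x").fetchall()
print(rows)
```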
Edited by: mcstock on Nov 19, 2008 10:29 AM -
Grouping error in Oracle's analytic function PERCENTILE_CONT()
Hi,
I have a question regarding the usage of Oracle's analytic function PERCENTILE_CONT(). The underlying time data in the table is of hourly granularity and I want to fetch average, peak values for the day along with 80th percentile for that day. For the sake of clarification I am only posting relevant portion of the query.
Any idea how to rewrite the query and achieve the same objective?
SELECT TRUNC (sdd.ts) AS ts,
max(sdd.maxvalue) AS max_value, avg(sdd.avgvalue) AS avg_value,
PERCENTILE_CONT(0.80) WITHIN GROUP (ORDER BY sdd.avgvalue ASC) OVER (PARTITION BY pm.sysid,trunc(sdd.ts)) as Percentile_Cont_AVG
FROM XYZ
WHERE
XYZ
GROUP BY TRUNC (sdd.ts)
ORDER BY TRUNC (sdd.ts)
Oracle Error:
ERROR at line 5:
ORA-00979: not a GROUP BY expression

You probably mixed up the aggregate and analytic versions of PERCENTILE_CONT.
The version below should work, but I don't know if it produces the desired results.
SELECT TRUNC (sdd.ts) AS ts,
max(sdd.maxvalue) AS max_value, avg(sdd.avgvalue) AS avg_value,
PERCENTILE_CONT(0.80) WITHIN GROUP (ORDER BY sdd.avgvalue ASC) as Percentile_Cont_AVG
FROM XYZ
Sorry, what is this WHERE clause for?
WHERE
XYZ
GROUP BY TRUNC (sdd.ts)
ORDER BY TRUNC (sdd.ts)

Edited by: chris227 on 26.03.2013 05:45
Understanding row_number() and using it in an analytic function
Dear all;
I have been playing around with ROW_NUMBER and trying to understand how to use it, and yet I still can't figure it out...
I have the following code below
create table Employee(
ID VARCHAR2(4 BYTE) NOT NULL,
First_Name VARCHAR2(10 BYTE),
Last_Name VARCHAR2(10 BYTE),
Start_Date DATE,
End_Date DATE,
Salary Number(8,2),
City VARCHAR2(10 BYTE),
Description VARCHAR2(15 BYTE)
);
insert into Employee(ID, First_Name, Last_Name, Start_Date, End_Date, Salary, City, Description)
values ('01','Jason', 'Martin', to_date('19960725','YYYYMMDD'), to_date('20060725','YYYYMMDD'), 1234.56, 'Toronto', 'Programmer');
insert into Employee(ID, First_Name, Last_Name, Start_Date, End_Date, Salary, City, Description)
values ('02','Alison', 'Mathews', to_date('19760321','YYYYMMDD'), to_date('19860221','YYYYMMDD'), 6661.78, 'Vancouver','Tester');
insert into Employee(ID, First_Name, Last_Name, Start_Date, End_Date, Salary, City, Description)
values ('03','James', 'Smith', to_date('19781212','YYYYMMDD'), to_date('19900315','YYYYMMDD'), 6544.78, 'Vancouver','Tester');
insert into Employee(ID, First_Name, Last_Name, Start_Date, End_Date, Salary, City, Description)
values ('04','Celia', 'Rice', to_date('19821024','YYYYMMDD'), to_date('19990421','YYYYMMDD'), 2344.78, 'Vancouver','Manager');
insert into Employee(ID, First_Name, Last_Name, Start_Date, End_Date, Salary, City, Description)
values ('05','Robert', 'Black', to_date('19840115','YYYYMMDD'), to_date('19980808','YYYYMMDD'), 2334.78, 'Vancouver','Tester');
insert into Employee(ID, First_Name, Last_Name, Start_Date, End_Date, Salary, City, Description)
values ('06','Linda', 'Green', to_date('19870730','YYYYMMDD'), to_date('19960104','YYYYMMDD'), 4322.78,'New York', 'Tester');
insert into Employee(ID, First_Name, Last_Name, Start_Date, End_Date, Salary, City, Description)
values ('07','David', 'Larry', to_date('19901231','YYYYMMDD'), to_date('19980212','YYYYMMDD'), 7897.78,'New York', 'Manager');
insert into Employee(ID, First_Name, Last_Name, Start_Date, End_Date, Salary, City, Description)
values ('08','James', 'Cat', to_date('19960917','YYYYMMDD'), to_date('20020415','YYYYMMDD'), 1232.78,'Vancouver', 'Tester');

I did a simple select statement
select * from Employee e
and it returns this below
ID FIRST_NAME LAST_NAME START_DAT END_DATE SALARY CITY DESCRIPTION
01 Jason Martin 25-JUL-96 25-JUL-06 1234.56 Toronto Programmer
02 Alison Mathews 21-MAR-76 21-FEB-86 6661.78 Vancouver Tester
03 James Smith 12-DEC-78 15-MAR-90 6544.78 Vancouver Tester
04 Celia Rice 24-OCT-82 21-APR-99 2344.78 Vancouver Manager
05 Robert Black 15-JAN-84 08-AUG-98 2334.78 Vancouver Tester
06 Linda Green 30-JUL-87 04-JAN-96 4322.78 New York Tester
07 David Larry 31-DEC-90 12-FEB-98 7897.78 New York Manager
08 James Cat 17-SEP-96 15-APR-02 1232.78 Vancouver Tester

I wrote another select statement with ROW_NUMBER; see below.
SELECT first_name, last_name, salary, city, description, id,
ROW_NUMBER() OVER(PARTITION BY description ORDER BY city desc) "Test#"
FROM employee
and I get this result
First_name last_name Salary City Description ID Test#
Celina Rice 2344.78 Vancouver Manager 04 1
David Larry 7897.78 New York Manager 07 2
Jason Martin 1234.56 Toronto Programmer 01 1
Alison Mathews 6661.78 Vancouver Tester 02 1
James Cat 1232.78 Vancouver Tester 08 2
Robert Black 2334.78 Vancouver Tester 05 3
James Smith 6544.78 Vancouver Tester 03 4
Linda Green 4322.78 New York Tester 06 5
I understand the PARTITION BY: for each group, a unique number is assigned per row. So since Tester is one group, Manager another, and Programmer another, each group gets its own numbering.

What is throwing me off is the ORDER BY and how the numbers are assigned. Why is 1 assigned to Alison Mathews in the Tester group, 2 to James Cat, and 3 to Robert Black?
I apologize if this is a stupid question; I have tried reading about it online and looking at the Oracle documentation, but I still don't fully understand why.

user13328581 wrote:
understanding row_number() and using it in an analytic function

ROW_NUMBER() IS an analytic function. Are you trying to use the result of ROW_NUMBER in another analytic function? If so, you need a sub-query: analytic functions can't be nested within other analytic functions.
...I have the following code below
... I did a simple select statement

Thanks for posting all that! It's really helpful.
... and I get this result
First_name last_name Salary City Description ID Test#
Celina Rice 2344.78 Vancouver Manager 04 1
David Larry 7897.78 New York Manager 07 2
Jason Martin 1234.56 Toronto Programmer 01 1
Alison Mathews 6661.78 Vancouver Tester 02 1
James Cat 1232.78 Vancouver Tester 08 2
Robert Black 2334.78 Vancouver Tester 05 3
James Smith 6544.78 Vancouver Tester 03 4
Linda Green 4322.78 New York Tester 06 5

... What is throwing me off is the order by and how this numbering are assigned. why is 1 assigned to Alison Mathews for the tester group and 2 assigned to James Cat and 3 assigned Robert Black

That's determined by the analytic ORDER BY clause. You said "ORDER BY city desc", so a row where city='Vancouver' will get a lower number than one where city='New York', since 'Vancouver' comes after 'New York' in alphabetical order.
If you have several rows that all have the same city, then you can be sure that ROW_NUMBER will assign them consecutive numbers, but it's arbitrary which one of them will be lowest and which highest. For example, you have 5 'Tester's: 4 from Vancouver and 1 from New York. There's no particular reason why the one with first_name='Alison' got assigned #1 and 'James' got #2. If you run the same query again, without changing the table at all, then 'Robert' might be #1. It's certain that the 4 Vancouver rows will be assigned numbers 1 through 4, but there's no way of telling which of those 4 rows will get which of those 4 numbers.
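This is easy to verify empirically: once the ORDER BY is extended with tie-breaker columns, the numbering becomes fully deterministic. A hedged sketch (data mirrors the thread's Tester rows; SQLite stands in for Oracle, column names are trimmed for brevity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE employee (first_name TEXT, last_name TEXT, city TEXT, descr TEXT);
INSERT INTO employee VALUES
  ('Alison', 'Mathews', 'Vancouver', 'Tester'),
  ('James',  'Cat',     'Vancouver', 'Tester'),
  ('Robert', 'Black',   'Vancouver', 'Tester'),
  ('James',  'Smith',   'Vancouver', 'Tester'),
  ('Linda',  'Green',   'New York',  'Tester');
""")

rows = con.execute("""
SELECT first_name, last_name,
       ROW_NUMBER() OVER (
         PARTITION BY descr
         -- city alone leaves ties; last_name, first_name pin them down
         ORDER BY city DESC, last_name, first_name
       ) AS test_nbr
FROM employee
ORDER BY test_nbr
""").fetchall()
for r in rows:
    print(r)
```

With the tie-breakers in place, the four Vancouver testers always number Black, Cat, Mathews, Smith (1-4), and the New York tester is always 5, on every run.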
Similar to a query's ORDER BY clause, the analytic ORDER BY clause can have two or more expressions. The N-th one will only be considered if there was a tie for all (N-1) earlier ones. For example, "ORDER BY city DESC, last_name, first_name" would mean 'Vancouver' comes before 'New York', but, if multiple rows all have city='Vancouver', last_name would determine the order: 'Black' would get a lower number than 'Cat'. If you had multiple rows with city='Vancouver' and last_name='Black', then the order would be determined by first_name. -
Hide duplicate row and analytic functions
Hi all,
I have to count how many customers have two particular products in the same area.
Table cols are:
AREA
PRODUCT_CODE (PK)
CUSTOMER_ID (PK)
QTA
The query is:
select distinct area, count(customer_id) over (partition by area)
from all_products FPP
where product_code in ('BC01007', 'BC01004')
group by area, customer_id
having ( sum(decode(FPP.PRODOTTO_CODE,'BC01007',qta,0)) > 0 )
and ( sum(decode(FPP.PRODOTTO_CODE,'BC01004',qta,0)) > 0 );
In SQL*Plus this works fine, but in Oracle Discoverer I can't get distinct results, even if I check "Hide duplicate rows" in the Table Layout.
Anybody have another way to get distinct results for analytic function results?
Thanks in advance,
Giuseppe

The query in Disco is exactly the one I posted before.
Results are there:
AREA.........................................C1
01704 - AREA VR NORD..............3
01704 - AREA VR NORD..............3
01704 - AREA VR NORD..............3
01705 - AREA VR SUD.................1
02702 - AREA EMILIA NORD........1
If I check "hide duplicate rows" in the layout options, the results don't change.
If I add distinct clause manually in SQL*PLUS the query become:
SELECT distinct o141151.AREA as E141152,COUNT(o141151.CUSTOMER_ID) OVER(PARTITION BY o141151.AREA ) as C_1
FROM BPN.ALL_PRODUCTS o141151
WHERE (o141151.PRODUCT_CODE IN ('BC01006','BC01007','BC01004'))
GROUP BY o141151.AREA,o141151.CUSTOMER_ID
HAVING (( SUM(DECODE(o141151.PRODUCT_CODE,'BC01006',1,0)) ) > 0 AND ( SUM(DECODE(o141151.PRODUCT_CODE,'BC01004',1,0)) ) > 0)
and the results are not duplicate.
AREA.........................................C1
01704 - AREA VR NORD..............3
01705 - AREA VR SUD.................1
02702 - AREA EMILIA NORD........1
Is there any other way to force a distinct clause in Discoverer?
Thank you
Giuseppe -
Oracle Analytic function tuning
Hi all,
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bi
PL/SQL Release 10.2.0.3.0 - Production
CORE 10.2.0.3.0 Production
TNS for IBM/AIX RISC System/6000: Version 10.2.0.3.0 - Productio
NLSRTL Version 10.2.0.3.0 - Production
I have a query with an analytic function that uses a large amount of space in the temporary tablespace, resulting in 'direct path read temp' and 'direct path write temp' wait events taking too much time. Is there any way to tune such a query?
Thanks in advance.

user9074365 wrote:
Hi all,
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bi
PL/SQL Release 10.2.0.3.0 - Production
CORE 10.2.0.3.0 Production
TNS for IBM/AIX RISC System/6000: Version 10.2.0.3.0 - Productio
NLSRTL Version 10.2.0.3.0 - Production
I have a query which has analytic function uses large space on temporary tablespace resulting in direct path temp read and direct path temp write wait events taking too much time. Is there any way to tune such query?
With your version of Oracle, and high volumes of data going through an analytic function, it's likely that this blog note applies. You may need an upgrade or a special patch: http://jonathanlewis.wordpress.com/2009/09/07/analytic-agony/
Regards
Jonathan Lewis -
Advantages and disadvantages of Analytical function
Please list the advantages and disadvantages of normal queries versus queries using analytic functions (performance-wise).
I'm not sure how you wish to compare?
Analytic functions give you functionality that cannot otherwise be achieved easily in many cases. They can introduce some performance degradation to a query, but you have to compare on a query-by-query basis to determine whether analytic functions are the best solution for the issue. If it were as simple as saying that analytic functions are always slower than doing without them, Oracle wouldn't have bothered introducing them into the language.
Analytic function and aggregate function
What are analytic functions and aggregate functions? What is the difference between them?
hi,
Analytic Functions :----------
Analytic functions compute an aggregate value based on a group of rows. They differ from aggregate functions in that they return multiple rows for each group. The group of rows is called a window and is defined by the analytic_clause. For each row, a sliding window of rows is defined. The window determines the range of rows used to perform the calculations for the current row. Window sizes can be based on either a physical number of rows or a logical interval such as time.
Analytic functions are the last set of operations performed in a query except for the final ORDER BY clause. All joins and all WHERE, GROUP BY, and HAVING clauses are completed before the analytic functions are processed. Therefore, analytic functions can appear only in the select list or ORDER BY clause.
Analytic functions are commonly used to compute cumulative, moving, centered, and reporting aggregates.
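To make the windowing idea concrete, here is a minimal sketch of a running (cumulative) total. It is not Oracle; it uses SQLite's window-function support (3.25+) through Python's sqlite3 module, and the `sales` table and its values are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EAST", 100), ("EAST", 200), ("WEST", 50), ("WEST", 75)])

# Analytic: every input row is returned; the OVER clause defines a
# per-region window, and the ROWS frame turns the SUM into a running total.
rows = conn.execute("""
    SELECT region, amount,
           SUM(amount) OVER (PARTITION BY region
                             ORDER BY amount
                             ROWS UNBOUNDED PRECEDING) AS running_total
    FROM sales
    ORDER BY region, amount
""").fetchall()
print(rows)
# [('EAST', 100, 100), ('EAST', 200, 300), ('WEST', 50, 50), ('WEST', 75, 125)]
```

Note that all four input rows survive; the analytic column is simply appended to each of them.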
Aggregate Functions :----------
Aggregate functions return a single result row based on groups of rows, rather than on single rows. Aggregate functions can appear in select lists and in ORDER BY and HAVING clauses. They are commonly used with the GROUP BY clause in a SELECT statement, where Oracle Database divides the rows of a queried table or view into groups. In a query containing a GROUP BY clause, the elements of the select list can be aggregate functions, GROUP BY expressions, constants, or expressions involving one of these. Oracle applies the aggregate functions to each group of rows and returns a single result row for each group.
If you omit the GROUP BY clause, then Oracle applies aggregate functions in the select list to all the rows in the queried table or view. You use aggregate functions in the HAVING clause to eliminate groups from the output based on the results of the aggregate functions, rather than on the values of the individual rows of the queried table or view.
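For contrast, a minimal sketch of the aggregate case (again SQLite via Python's sqlite3, with an invented `sales` table): GROUP BY collapses each group to a single result row instead of preserving the input rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EAST", 100), ("EAST", 200), ("WEST", 50)])

# Aggregate: GROUP BY collapses each group of rows into one result row.
totals = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(totals)  # [('EAST', 300), ('WEST', 50)]
```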
Let me know if you have any problem understanding this.
Thanks.
Edited by: varun4dba on Jan 27, 2011 3:32 PM -
Help with Oracle Analytic Function scenario
Hi,
I am new to analytic functions and was wondering if someone could help me with the data scenario below. I have a table with the following data
COLUMN A COLUMN B COLUMN C
13368834 34323021 100
13368835 34438258 50
13368834 34438258 50
13368835 34323021 100
The output I want is
COLUMN A COLUMN B COLUMN C
13368834 34323021 100
13368835 34438258 50
A simple DISTINCT won't give me the desired output, so I was wondering if there is any way I can get the result using analytic functions and DISTINCT.
Any help will be greatly appreciated.
Thanks.
Hi,
Welcome to the forum!
Whenever you have a question, please post your sample data in a form that people can use to re-create the problem and test their solutions.
For example:
CREATE TABLE table_x
( columna NUMBER
, columnb NUMBER
, columnc NUMBER
);
INSERT INTO table_x (columna, columnb, columnc) VALUES (13368834, 34323021, 100);
INSERT INTO table_x (columna, columnb, columnc) VALUES (13368835, 34438258, 50);
INSERT INTO table_x (columna, columnb, columnc) VALUES (13368834, 34438258, 50);
INSERT INTO table_x (columna, columnb, columnc) VALUES (13368835, 34323021, 100);
Do you want something that works in your version of Oracle? Of course you do! So tell us which version that is.
How do you get the results that you want? Explain what each row of output represents. It looks like
the 1st row contains the 1st distinct value from each column (where "first" means descending order for columnc, and ascending order for the others),
the 2nd row contains the 2nd distinct value,
the 3rd row contains the 3rd distinct value, and so on.
If that's what you want, here's one way to get it (in Oracle 9 and up):
WITH got_nums AS
(
    SELECT columna, columnb, columnc
         , DENSE_RANK () OVER (ORDER BY columna ) AS a_num
         , DENSE_RANK () OVER (ORDER BY columnb ) AS b_num
         , DENSE_RANK () OVER (ORDER BY columnc DESC) AS c_num
    FROM table_x
)
SELECT MAX (a.columna) AS columna
     , MAX (b.columnb) AS columnb
     , MAX (c.columnc) AS columnc
FROM got_nums a
FULL OUTER JOIN got_nums b ON b.b_num = a.a_num
FULL OUTER JOIN got_nums c ON c.c_num = COALESCE (a.a_num, b.b_num)
GROUP BY COALESCE (a.a_num, b.b_num, c.c_num)
ORDER BY COALESCE (a.a_num, b.b_num, c.c_num)
;
I've been trying to find a good name for this type of query. The best I've heard so far is "Prix Fixe Query", named after the menus where you get a choice of soups (listed in one column), appetizers (in another column), main dishes (in a 3rd column), and so on. The items on the first row don't necessarily have any relationship to each other.
The solution does not assume that there are the same number of distinct items in each column.
For example, if you add this row to the sample data:
INSERT INTO table_x (columna, columnb, columnc) VALUES (13368835, 34323021, 99);
which is a copy of the last row, except that there is a completely new value for columnc, then the output is:
 COLUMNA  COLUMNB  COLUMNC
13368834 34323021      100
13368835 34438258       99
                        50
Starting in Oracle 11, you can also do this with an unpivot-pivot query. -
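The "prix fixe" idea can also be sketched outside SQL. The following plain-Python illustration (not the Oracle solution itself, just the concept) de-duplicates and orders each column independently, then zips the columns together row by row, padding the shorter "menus" with blanks:

```python
from itertools import zip_longest

rows = [
    (13368834, 34323021, 100),
    (13368835, 34438258, 50),
    (13368834, 34438258, 50),
    (13368835, 34323021, 100),
]

# Each column becomes an independently ordered "menu" of distinct values:
# ascending for columna/columnb, descending for columnc.
col_a = sorted({r[0] for r in rows})
col_b = sorted({r[1] for r in rows})
col_c = sorted({r[2] for r in rows}, reverse=True)

# Zip the menus row by row; shorter columns pad with None (blank cells).
result = list(zip_longest(col_a, col_b, col_c))
print(result)  # [(13368834, 34323021, 100), (13368835, 34438258, 50)]
```

With the extra (13368835, 34323021, 99) row added, columnc would have three distinct values and the third output row would be (None, None, 50), matching the blank cells in the SQL output above.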
Replacing Oracle EBS FND Attachment functionality with UCM
I am new to Oracle UCM and wondering about replacing Oracle EBS FND Attachment functionality (the paper clip in EBS forms) with UCM. Can we achieve this by simply defining the Attachment Repository using the System Administrator responsibility in EBS? Can you share some documentation on this?
If we have to customize the forms to re-direct the document repository from FND to UCM, please share the steps to do so.
Didn't see the link in your post; it may have been stripped.
If you want to use the EBS native attachment lists, what we've done is use a custom BPEL composite to load items into the appropriate entity in FND_DOCUMENTS and all of the other related EBS tables. This takes WebCenter out of it for the most part. You could also load links, similar to the imaging solution's approach. This way, again, uses the EBS attachments, but stores the image in IPM (in the imaging case, that is). There is a link loaded into the FND tables.
If you want to leverage the EBS-side attachment lists, managed attachments may not be the approach you want to take. You may want to look into the imaging solution or creating a custom process.
-ryan -
Reports 6i and analytical function
hi
I have this query, which works fine in TOAD:
SELECT rvt.receipt_num srv_no, rvt.supplier supplier,
rvt.transaction_date srv_date, inv.segment1 item_no,
rvt.item_desc item_description, hrov.NAME,
( SUBSTR (v.standard_industry_class, 1, 1)
|| '-'
|| po_headers.segment1
|| '-'
|| TO_CHAR (po_headers.creation_date, 'RRRR')
) po_no,
po_headers.creation_date_disp po_date,
( (rvt.currency_conversion_rate * po_lines.unit_price)
* rvt.transact_qty
) amount,
----Analytic function used here
SUM( ( (rvt.currency_conversion_rate * po_lines.unit_price)
* rvt.transact_qty)) over(partition by hrov.NAME) SUM_AMOUNT,
(SELECT SUM (mot.on_hand)
FROM mtl_onhand_total_mwb_v mot
WHERE inv.inventory_item_id = mot.inventory_item_id
-- AND INV.ORGANIZATION_ID=MOT.ORGANIZATION_ID
AND loc.inventory_location_id = mot.locator_id
AND loc.organization_id = mot.organization_id
AND rvt.locator_id = mot.locator_id) onhand
FROM rcv_vrc_txs_v rvt,
mtl_system_items_b inv,
mtl_item_locations loc,
hr_organization_units_v hrov,
po_headers_v po_headers,
ap_vendors_v v,
po_lines_v po_lines
WHERE inv.inventory_item_id(+) = rvt.item_id
AND po_headers.vendor_id = v.vendor_id
AND rvt.po_line_id = po_lines.po_line_id
AND rvt.po_header_id = po_lines.po_header_id
AND rvt.po_header_id = po_headers.po_header_id
AND rvt.supplier_id = v.vendor_id
AND inv.organization_id = hrov.organization_id
AND rvt.transaction_type = 'DELIVER'
AND rvt.inspection_status_code <> 'REJECTED'
AND rvt.organization_id = inv.organization_id(+)
AND to_char(to_date(rvt.transaction_date, 'DD/MM/YYYY'), 'DD-MON-YYYY') BETWEEN (:p_from_date)
AND NVL (:p_to_date, :p_from_date)
AND rvt.locator_id = loc.physical_location_id(+)
AND transaction_id NOT IN (
SELECT parent_transaction_id
FROM rcv_vrc_txs_v rvtd
WHERE rvt.item_id = rvtd.item_id
AND rvtd.transaction_type IN
('RETURN TO RECEIVING', 'RETURN TO VENDOR'))
GROUP BY rvt.receipt_num, rvt.supplier,
rvt.transaction_date, inv.segment1,
rvt.item_desc, hrov.NAME, v.standard_industry_class, po_headers.segment1, po_headers.creation_date,
po_headers.creation_date_disp, inv.inventory_item_id, loc.inventory_location_id, loc.organization_id,
rvt.locator_id, rvt.currency_conversion_rate, po_lines.unit_price, rvt.transact_qty
but it gives a blank page in Reports 6i.
Could it be that Reports 6i does not support analytic functions? Kindly suggest an alternative.
Thanking in advance.
Edited by: makdutakdu on Mar 25, 2012 2:22 PM
Hi,
Will the view be like
create view S_Amount as SELECT rvt.receipt_num srv_no, rvt.supplier supplier,
rvt.transaction_date srv_date, inv.segment1 item_no,
rvt.item_desc item_description, hrov.NAME,
( SUBSTR (v.standard_industry_class, 1, 1)
|| '-'
|| po_headers.segment1
|| '-'
|| TO_CHAR (po_headers.creation_date, 'RRRR')
) po_no,
po_headers.creation_date_disp po_date,
( (rvt.currency_conversion_rate * po_lines.unit_price)
* rvt.transact_qty
) amount,
----Analytic function used here
SUM( ( (rvt.currency_conversion_rate * po_lines.unit_price)
* rvt.transact_qty)) over(partition by hrov.NAME) SUM_AMOUNT,
(SELECT SUM (mot.on_hand)
FROM mtl_onhand_total_mwb_v mot
WHERE inv.inventory_item_id = mot.inventory_item_id
-- AND INV.ORGANIZATION_ID=MOT.ORGANIZATION_ID
AND loc.inventory_location_id = mot.locator_id
AND loc.organization_id = mot.organization_id
AND rvt.locator_id = mot.locator_id) onhand
FROM rcv_vrc_txs_v rvt,
mtl_system_items_b inv,
mtl_item_locations loc,
hr_organization_units_v hrov,
po_headers_v po_headers,
ap_vendors_v v,
po_lines_v po_lines
WHERE inv.inventory_item_id(+) = rvt.item_id
AND po_headers.vendor_id = v.vendor_id
AND rvt.po_line_id = po_lines.po_line_id
AND rvt.po_header_id = po_lines.po_header_id
AND rvt.po_header_id = po_headers.po_header_id
AND rvt.supplier_id = v.vendor_id
AND inv.organization_id = hrov.organization_id
AND rvt.transaction_type = 'DELIVER'
AND rvt.inspection_status_code <> 'REJECTED'
AND rvt.organization_id = inv.organization_id(+)
AND rvt.locator_id = loc.physical_location_id(+)
AND transaction_id NOT IN (
SELECT parent_transaction_id
FROM rcv_vrc_txs_v rvtd
WHERE rvt.item_id = rvtd.item_id
AND rvtd.transaction_type IN
('RETURN TO RECEIVING', 'RETURN TO VENDOR'))
GROUP BY rvt.receipt_num, rvt.supplier,
rvt.transaction_date, inv.segment1,
rvt.item_desc, hrov.NAME, v.standard_industry_class, po_headers.segment1, po_headers.creation_date,
po_headers.creation_date_disp, inv.inventory_item_id, loc.inventory_location_id, loc.organization_id,
rvt.locator_id, rvt.currency_conversion_rate, po_lines.unit_price, rvt.transact_qty
Is this correct? I mean, I have not included the bind parameters in the view. Moreover, should this view be joined on all the columns in the FROM clause of the original query?
kindly guide
thanking in advance
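On the view question: yes, an analytic function can live inside a view and be read with a plain SELECT, which is the usual workaround when a tool cannot parse the OVER (...) syntax itself. Here is a minimal sketch of that pattern (SQLite 3.25+ via Python's sqlite3, with a made-up `receipts` table, not the real EBS views):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE receipts (org TEXT, amount INTEGER)")
conn.executemany("INSERT INTO receipts VALUES (?, ?)",
                 [("HR", 10), ("HR", 20), ("IT", 5)])

# The analytic SUM is computed inside the view, so any client -- even one
# that cannot parse OVER (...) itself -- only issues a plain SELECT.
conn.execute("""
    CREATE VIEW s_amount AS
    SELECT org, amount,
           SUM(amount) OVER (PARTITION BY org) AS sum_amount
    FROM receipts
""")
rows = conn.execute("SELECT * FROM s_amount ORDER BY org, amount").fetchall()
print(rows)  # [('HR', 10, 30), ('HR', 20, 30), ('IT', 5, 5)]
```

As the poster suspects, the bind parameters (:p_from_date, :p_to_date) cannot go inside the view; the report would apply them in its own WHERE clause against the view's columns, and no extra joins are needed beyond selecting the view's columns.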