Aggregations on CE functions
Hi,
I was going through a presentation on CE functions. Please refer to the screenshot below.
It shows that the block of code above has been translated into the CE-function style of code below.
Everything looks fine except that the SUM(D) written in the SELECT has not been handled in the CE version of the code.
Can you please explain why this has not been done?
Thanks,
Sammy
Hi Sammy,
If you need aggregation, you need to use CE_AGGREGATION.
In the screenshot you pasted, I don't think any aggregation is being done with CE functions, or something went wrong in it.
It should have been something like this:
sel1 = CE_COLUMN_TABLE ( tab1,[A,B,C,D]);
sel2 = CE_COLUMN_TABLE ( tab2,[A,B,C,D]);
agg1 = CE_AGGREGATION(:sel1, [SUM(D) AS "D"], [A, B, C]);
agg2 = CE_AGGREGATION(:sel2, [SUM(D) AS "D"], [A, B, C]);
out_result = CE_UNION_ALL ( :agg1,:agg2);
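For what it's worth, the intended data flow (aggregate each table on the non-aggregated columns, then union the results) can be sketched outside HANA. The tiny tables here are made-up stand-ins, not real data:

```python
from collections import defaultdict

def aggregate(rows):
    # group by (A, B, C) and sum D -- the CE_AGGREGATION step
    totals = defaultdict(int)
    for a, b, c, d in rows:
        totals[(a, b, c)] += d
    return [(a, b, c, d) for (a, b, c), d in sorted(totals.items())]

# hypothetical contents of tab1 and tab2, columns (A, B, C, D)
tab1 = [("x", "y", "z", 1), ("x", "y", "z", 2)]
tab2 = [("p", "q", "r", 5)]

# CE_UNION_ALL of the two aggregated selections
out_result = aggregate(tab1) + aggregate(tab2)
```

Note that aggregating each input before the union is not the same as unioning first and aggregating once; which you want depends on the required semantics.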
Regards,
Krishna Tangudu
Similar Messages
-
Aggregation of analytic functions not allowed
Hi all, I have a calculated field called Calculation1 with the following calculation:
AVG(Resolution_time) KEEP(DENSE_RANK FIRST ORDER BY RANK ) OVER(PARTITION BY "User's Groups COMPL".Group Name,"Tickets Report #7 COMPL".Resource Name )
The result of this calculation is correct, but is repeated for all the rows I have in the dataset.
Group Name Resourse name Calculation1
SH Group Mr. A 10
SH Group Mr. A 10
SH Group Mr. A 10
SH Group Mr. A 10
SH Group Mr. A 10
5112 rows
I tried to create another calculation in order to have only ONE value for the couple (Group Name, Resource Name) as AVG(Calculation1), but I get the error: Aggregation of analytic functions not allowed.
I also saw in the "Edit worksheet" panel that Calculation1 *is not represented* with the "Sigma" symbol (as, for example, a simple AVG(field_1) is), and in the SQL code I don't have GROUP BY Group Name, Resource Name...
I'd like to see ONLY one row as:
Group Name Resourse name Calculation1
SH Group Mr. A 10
...that means I grouped by Group Name, Resource Name.
Does anyone know how I can achieve this result, or any workarounds?
Thanks in advance
Alex
Hi Rod,
Unfortunately I can't use the AVG(Resolution_time) because my dataset is quite strange... let me explain it better.
I start from this situation:
!http://www.freeimagehosting.net/uploads/6c7bba26bd.jpg!
There are 3 calculated fields:
RANK is the first calculated field:
ROW_NUMBER() OVER(PARTITION BY "User's Groups COMPL".Group Name,"Tickets Report Assigned To & Created By COMPL".Resource Name,"Tickets Report Assigned To & Created By COMPL".Incident Id ORDER BY "Tickets Report Assigned To & Created By COMPL".Select Flag )
RT Calc is the 2nd calculation:
CASE WHEN RANK = 1 THEN Resolution_time END
and Calculation2 is the 3rd calculation:
AVG(Resolution_time) KEEP(DENSE_RANK FIRST ORDER BY RANK ) OVER(PARTITION BY "User's Groups COMPL".Group Name,"Tickets Report Assigned To & Created By COMPL".Resource Name )
As you can see, from the initial dataset I have duplicated incident ids, and a simple AVG(Resolution Time) also counts all the duplicates.
I used the rank (based on the "Flag" field) to take, for each ticket, ONLY one "resolution time" value (in my case I need the resolution time where rank = 1).
So, with Calculation2 I calculated the right AVG(Resolution time) for each couple Group Name, Resource Name, but as you can see... this result is duplicated for each incident_id...
What I need instead is to see the AVG(Resolution time) *once* for each couple 'Group Name, Resource Name'.
In other words, I need to calculate the AVG(Resolution time) considering only the values written in the RT Calc field (where they are NOT NULL, so the total number of tickets is not 14, but 9).
I tried to aggregate again using AVG(Calculation2)... but I got the error "Aggregation of analytic functions not allowed"...
Do you know a way to fix this problem ?
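The computation Alex describes (average only the rank-1 resolution times, producing one row per Group/Resource couple) can be sketched outside the BI tool. The sample rows below are invented for illustration:

```python
from collections import defaultdict

# invented sample rows: (group, resource, rank, resolution_time)
rows = [
    ("SH Group", "Mr. A", 1, 10),
    ("SH Group", "Mr. A", 2, 99),   # duplicated incident, rank > 1: ignored
    ("SH Group", "Mr. B", 1, 20),
]

buckets = defaultdict(list)
for group, resource, rank, rt in rows:
    if rank == 1:                   # the RT Calc condition: keep only rank-1 rows
        buckets[(group, resource)].append(rt)

# one output row per couple, instead of the analytic value repeated on every row
averages = {k: sum(v) / len(v) for k, v in buckets.items()}
```

In SQL terms this corresponds to filtering to the rank-1 rows first (e.g. in an inline view) and only then applying a plain GROUP BY aggregate, rather than aggregating an analytic column.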
Thanks
Alex -
LAST Aggregation and AGO Function in OBI Administration Tool
I'm struggling a bit with the AGO function and LAST aggregation. The problem arises when I try to compare the balance from the last day of last month with the balance from the last day of this month.
To get the balance of the last day of this month, I'm using dimension-based LAST aggregation on the logical column KNSALDOPROMEDIOlast (avg balance).
KNSALDOPROMEDIOlast:
Dimension Formula
Other SUM(BWM.DWH_FTASA.KNSALDOPROMEDIOlast)
DTIEMPO_Dim LAST(BWM.DWH_FTASA.KNSALDOPROMEDIOlast)
To get the last day of last month, I'm applying the AGO function to the existing logical column KNSALDOPROMEDIOlast over the time dimension, Month - 1.
KNSALDOPROMEDIOlast-1:
AGO(BWM.DWH_FTASA.KNSALDOPROMEDIOlast, BWM.DTIEMPO_Dim.MES_Dim,1).
The problem is when the last month has more days than the current month.
Example:
Month Last Day KNSALDOPROMEDIOlast KNSALDOPROMEDIOlast-1
JAN 2010 JAN 31 437,880,393 0
FEB 2010 FEB 28 484,804,165 442,880,934
MAR 2010 MAR 31 562,201,480 484,804,165
APR 2010 APR 30 583,255,351 570,661,690
MAY 2010 MAY 31 663,660,138 583,255,351
1. The balance for the last day of the previous month of February is 442,880,934. This amount is wrong: it is from Jan 28th; the real value is 437,880,393, from the last day of January, Jan 31st. January has more days than February.
2. The balance for the last day of the previous month of March is 484,804,165. This amount is correct: it is from the last day of February, Feb 28th. February has fewer days than March.
3. The balance for the last day of the previous month of April is 570,661,690. This amount is wrong: it is from Mar 30th; the real value is 562,201,480, from the last day of March, Mar 31st. March has more days than April.
4. The balance for the last day of the previous month of May is 583,255,351. This amount is correct: it is from the last day of April, Apr 30th. April has fewer days than May.
Edited by: user10541559 on Aug 18, 2010 1:44 PM
Hi,
I'm not sure about your issue.
The aggregation option "LAST" is used when you need the last value. The most typical example is inventory: if you want the inventory quantity of a product in a month, you don't sum the value for each day in the month; you only take the value of the inventory on the last day of the required month, right?
Have you tried using session variables?
You can get the last day of the last month from Oracle DB with the function "LAST_DAY". For example:
SELECT LAST_DAY(ADD_MONTHS(CURRENT_DATE, -1)) from DUAL
I know that is not exactly what you need, but I hope it gives you an idea of other things you can try.
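As a quick cross-check of the LAST_DAY(ADD_MONTHS(...)) idea, the same date arithmetic can be sketched in Python:

```python
from datetime import date, timedelta

def last_day_of_previous_month(d):
    # the first of the current month, minus one day, is the last day of the previous month
    return d.replace(day=1) - timedelta(days=1)

# last_day_of_previous_month(date(2010, 3, 15)) -> date(2010, 2, 28)
```

This correctly handles months of different lengths, which is exactly the case where the AGO-based logical column above goes wrong.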
Regards,
Jorge. -
Planning level to be added in aggregation level
Hi,
Currently my aggregation level has the following elements:
1. Plant
2. Sales Group
3. Customer
Now I need to change from Plant to Customer Plant. Here are my queries on the same:
1. Can I add Customer Plant in addition to Plant in the aggregation level?
2. Henceforth I would be doing planning at Customer Plant level; will that be an issue?
3. Will there be any issue with the historical data already planned at plant level?
Kindly help me with the above issue.
Regards,
Bhushan
Hi Bhushan,
1. You can freely add your additional characteristic and your model will technically keep on working; of course, you will need to adjust the filters and functions to the new aggregation level of data.
2. Again, you will need to adjust your business logic in all components of the aggregation level (filters, functions).
3. Historical data currently appears under Customer Plant = '#'; you should consider reposting the historical data in order to align current with historical data.
Regards,
Eitan. -
I'm attempting to use Discoverer to create a rolling 12-month attrition report. It works fine, up to the point of trying to create an average headcount for each month in the current 12-month period. The problem I'm encountering involves the use of the MIN() function in selecting active employees in each month, mostly due to a data-cleansing issue which I'd hoped to bypass. Because some individuals have two "data conversion" records - i.e., they were converted to the new database and an additional, subsequent record re-used what should have been a unique action reason - I need to test their MIN(position start date) so as to use their actual start against the first record, whereas I can go on to use their position start to capture their FTE for any subsequent active records.
So I created a calculation, Min Start, to hold the earliest start date for each employee:
MIN(Position.Start.Date) OVER (PARTITION BY Employee.Number ORDER BY Position.Start.Date RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
Then I test the status of each position for each month in the report and return the FTE where it tests as active through month-end.
CASE WHEN Position.Action.Reason = 'CNV' THEN
CASE WHEN Position.Start.Date = Min Start THEN
CASE WHEN Hire.Date <= Month.End THEN
CASE WHEN Termination.Date is NULL THEN FTE
WHEN Termination.Date >= Month.End THEN FTE END END
WHEN Position.Start.Date <= Month.End THEN
CASE WHEN Hire.Date <= Month.End THEN
CASE WHEN Termination.Date is NULL THEN FTE
WHEN Termination.Date >= Month.End THEN FTE END END END
WHEN Position.Start.Date <= Month.End THEN
CASE WHEN Termination.Date is NULL THEN FTE
WHEN Termination.Date >= Month.End THEN FTE END END
However, I can't SUM this because nesting isn't permitted and I can't average the sum because Aggregation of Analytic functions is not allowed either. I need a different approach. Data is always going to be dirty, so coding to account for such problems means I can perform my reporting requirements without interruption for clean-ups. (I know that keeping the data clean is best and highlighting such problems brings the attention of managers and staff to rectifying and avoiding such problems, but I still need to get the results out.)
I guess I need a new approach, but I'm at the end of every google search and textbook I can locate. Any suggestions would be very welcome.
Edited by: 814208 on 21/11/2010 19:14
Updated: For the analysis authorization, for these characteristics, in addition to the specific values, there is an entry for ":", and this did not resolve the issue.
Authorization Check
Detail Check for InfoProvider ZBL_13IT
Preprocessing:
Selection Checked for Consistency, Preprocessed and Supplemented As Needed
Subselection (Technical SUBNR) 1
Check Node Definitions and Value Authorizations...
Node- and Value Authorizations Are OK
End of Preprocessing
Main Check:
Subselection (Technical SUBNR) 1
Supplementation of Selection for Aggregated Characteristics
Check Added for Aggregation Authorization: 0COMP_CODE
Check Added for Aggregation Authorization: 0PLANT
Following Set Is Checked Comparison with Following Authorized Set Result Restmenge
Characteristic Contents
0PLANT
0SALESORG
0TCAACTVT
0COMP_CODE
SQL Format:
COMP_CODE = ':'
AND PLANT = ':'
AND SALESORG = '0403'
AND TCAACTVT = '03'
Characteristic Contents
0PLANT I EQ IC01
I EQ IC02
I EQ IC03
I EQ IE02
I EQ IE04
I EQ IZ01
I EQ MC01
I EQ ME01
I EQ :
0SALESORG I EQ 0301
I EQ 0341
I EQ :
0TCAACTVT I EQ 03
I EQ :
0COMP_CODE I EQ 0327
I EQ 0341
I EQ :
Not Authorized
All Authorizations Tested
Message EYE007: You Do Not Have Sufficient Authorization
No Sufficient Authorization for This Subselection (SUBNR)
Following CHANMIDs Are Affected:
142 ( 0COMP_CODE )
189 ( 0PLANT )
192 ( 0SALESORG )
Authorization Check Complete -
Aggregator: Setting up navigation from one SWF to the end of a previous SWF.
If anyone has a strategy that has worked for them, please let me know. In the meantime, I'll try an idea I have that employs a main swf to hold variables and sets those variables with navigation buttons. For example, if the back button is clicked, I can set a variable indicating that the target location for the previous file is the last page. I can use a conditional statement on the first slide.
I'll let you know.
Thomas
I'm beginning to wonder if I am misunderstanding the purpose of Aggregator. I always try to keep .swf files small. I planned to use Aggregator to host variables from subordinate swf files. The Adobe help file at http://help.adobe.com/en_US/captivate/cp/using/W8c1b83f70210cd101-1ff8d6a911d0a0500a3-8000.html seems to suggest a different vision. If Aggregator's only function is to combine multiple swfs into one giant swf, then clearly the author of Flash's scenes has escaped the asylum.
-
How to install BI-CONT 7.37 in SAP ECC system EHP 5
Hi SAP Gurus,
We recently upgraded BI_CONT in our BW landscape (SAP NetWeaver 7.3) from 7.36 to 7.37.
What is the strategy to upgrade BI_CONT, or make the corresponding changes, in the SAP ECC system (EHP 5)?
Currently we don't have BI_CONT installed in our ECC system.
Kindly let me know how to upgrade BI_CONT-related objects.
Earlier we used custom extractors and data sources in the ECC system for BI_CONT; now the business has planned to use all standard data sources.
The question here is: do we need to install BI_CONT in SAP ECC, and what is the procedure?
Thanks,
Avadhesh
+91 8095226536
Hi Avadhesh,
There is no need to upgrade or install the BI CONTENT add-on in the ECC system if you already have a BI landscape for doing BI functions.
BI Content includes the functionality below, which can be installed as an add-on in ECC when you don't have a BI landscape:
DataSources
Process chains
InfoObjects
InfoSources
Transformations
InfoProvider (InfoCubes and DataStore objects)
Variables
Data mining models
Queries
Workbooks
Web templates
Roles
Aggregation level
Planning function
Planning function type
SAP Crystal Reports (BI Content Ext.)
SAP BusinessObjects Dashboards (BI Content Ext.)
Regards,
Karthik -
Hi,
My DB is 9i.
I run a script that deletes all records from a table, and then repopulates based on a SQL insert.
This works fine.
I try to run the same combination but with...
BEGIN
END;...around the code, all other things being equal, and the script errors, complaining that my combination of stragg and distinct is not valid.
"ORA-06550: line 36, column 22:
PL/SQL: ORA-30482: DISTINCT option not allowed for this function
ORA-06550: line 5, column 1:
PL/SQL: SQL Statement ignored"
This being the offending line>
, apps.stragg(distinct substr(cmt_code,1,2)) parent_types
But it does work as pure SQL...
Stragg, being a string aggregation text summary function that I picked up courtesy of 'Ask Tom' - shameless plug!
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:2196162600402
My full code, is below - anyone have any idea why this would fail in an anonymous block run as exactly the same user???
Thanks for your time,
Robert
begin
delete cust.xx_eris_users_all;
insert into cust.xx_eris_users_all
(RAW_ENCRYPTED_KEY,
NAME,
E_MAIL,
COST_CENTRES,
NUM_COST_CENTRES,
PARENT_CODES,
PARENT_TYPES,
NUM_OF_PARENTS,
PERIOD_NAME,
PERIOD_NUM,
PERIOD_YEAR,
QUARTER_NUM)
(select UTL_RAW.CAST_TO_RAW(isis.inner_text_link_1) raw_encrypted_key
, isis.inner_attribute_1 name
, isis.inner_attribute_2 e_mail
, apps.stragg(eris.cost_centre) cost_centres
, count(eris.cost_centre) num_cost_centres
, inna.parent_codes
, inna.parent_types
, count(inna.parent_codes) num_of_parents
, to_char(add_months(sysdate,-1),'MON-YY') period_name
, prd.period_num
, prd.period_year
, prd.quarter_num
from cust.xx_isis_all isis
, gl.gl_periods prd
, CUST.XX_ERIS_COST_CENTRE_SECURE_MV eris
, (select encrypted_key
, apps.stragg(cmt_code) parent_codes
, apps.stragg(distinct substr(cmt_code,1,2)) parent_types
from CUST.XX_ERIS_CMT_SECURE_MV eriscmt
group by
encrypted_key) inna
where isis.isis_protocol_id = 'XX_ERIS_USER'
and isis.inner_attribute_1 <> 'ALL'
--and isis.inner_attribute_1 = 'Robert Angel'
and eris.encrypted_key(+) = isis.inner_text_link_1
and inna.encrypted_key(+) = isis.inner_text_link_1
and prd.period_name = to_char(add_months(sysdate,-1),'MON-YY')
group by
UTL_RAW.CAST_TO_RAW(isis.inner_text_link_1)
, isis.inner_attribute_1
, isis.inner_attribute_2
, inna.parent_codes
, inna.parent_types
, prd.period_num
, prd.period_year
, prd.quarter_num);
end;
Edited by: Robert Angel on 11-Nov-2010 02:02 - Added Ask Tom link to summary function
On the same page, search for the post dated "November 23, 2009"; Tom has provided an alternative to this problem.
Alternatively you can code two separate functions one performs distinct and one doesn't. Something like this.
SQL> select * from v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for 64-bit Windows: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
SQL> create or replace type string_agg_nodist_type as object
2 (
3 total varchar2(4000),
4
5 static function
6 ODCIAggregateInitialize(sctx IN OUT string_agg_nodist_type )
7 return number,
8
9 member function
10 ODCIAggregateIterate(self IN OUT string_agg_nodist_type ,
11 value IN varchar2 )
12 return number,
13
14 member function
15 ODCIAggregateTerminate(self IN string_agg_nodist_type ,
16 returnValue OUT varchar2,
17 flags IN number)
18 return number,
19
20 member function
21 ODCIAggregateMerge(self IN OUT string_agg_nodist_type ,
22 ctx2 IN string_agg_nodist_type)
23 return number
24
25 );
26 /
Type created.
SQL> create or replace type body string_agg_nodist_type
2 is
3
4 static function ODCIAggregateInitialize(sctx IN OUT string_agg_nodist_type )
5 return number
6 is
7 begin
8 sctx := string_agg_nodist_type ( null );
9 return ODCIConst.Success;
10 end;
11
12 member function ODCIAggregateIterate(self IN OUT string_agg_nodist_type ,
13 value IN varchar2 )
14 return number
15 is
16 begin
17 self.total := self.total || ',' || value;
18 return ODCIConst.Success;
19 end;
20
21 member function ODCIAggregateTerminate(self IN string_agg_nodist_type ,
22 returnValue OUT varchar2,
23 flags IN number)
24 return number
25 is
26 begin
27 returnValue := ltrim(self.total,',');
28 return ODCIConst.Success;
29 end;
30
31 member function ODCIAggregateMerge(self IN OUT string_agg_nodist_type ,
32 ctx2 IN string_agg_nodist_type)
33 return number
34 is
35 begin
36 self.total := self.total || ctx2.total;
37 return ODCIConst.Success;
38 end;
39
40
41 end;
42 /
Type body created.
SQL>
SQL> create or replace type string_agg_dist_type as object
2 (
3 total varchar2(4000),
4
5 static function
6 ODCIAggregateInitialize(sctx IN OUT string_agg_dist_type )
7 return number,
8
9 member function
10 ODCIAggregateIterate(self IN OUT string_agg_dist_type ,
11 value IN varchar2 )
12 return number,
13
14 member function
15 ODCIAggregateTerminate(self IN string_agg_dist_type ,
16 returnValue OUT varchar2,
17 flags IN number)
18 return number,
19
20 member function
21 ODCIAggregateMerge(self IN OUT string_agg_dist_type ,
22 ctx2 IN string_agg_dist_type)
23 return number
24
25 );
26 /
Type created.
SQL>
SQL> create or replace type body string_agg_dist_type
2 is
3
4 static function ODCIAggregateInitialize(sctx IN OUT string_agg_dist_type )
5 return number
6 is
7 begin
8 sctx := string_agg_dist_type ( null );
9 return ODCIConst.Success;
10 end;
11
12 member function ODCIAggregateIterate(self IN OUT string_agg_dist_type ,
13 value IN varchar2 )
14 return number
15 is
16 begin
17 if instr(nvl(self.total,' '), value,1) = 0
18 then
19 self.total := self.total || ',' || value;
20 else
21 self.total := self.total;
22 End If;
23 return ODCIConst.Success;
24 end;
25
26 member function ODCIAggregateTerminate(self IN string_agg_dist_type ,
27 returnValue OUT varchar2,
28 flags IN number)
29 return number
30 is
31 begin
32 returnValue := ltrim(self.total,',');
33 return ODCIConst.Success;
34 end;
35
36 member function ODCIAggregateMerge(self IN OUT string_agg_dist_type ,
37 ctx2 IN string_agg_dist_type)
38 return number
39 is
40 begin
41 self.total := self.total || ctx2.total;
42 return ODCIConst.Success;
43 end;
44
45
46 end;
47 /
Type body created.
SQL>
SQL>
SQL> CREATE or replace
2 FUNCTION stragg_nodistinct(input varchar2 )
3 RETURN varchar2
4 PARALLEL_ENABLE AGGREGATE USING string_agg_nodist_type;
5 /
Function created.
SQL>
SQL> CREATE or replace
2 FUNCTION stragg_distinct(input varchar2 )
3 RETURN varchar2
4 PARALLEL_ENABLE AGGREGATE USING string_agg_dist_type;
5 /
Function created.
SQL> begin
2 for i in (
3 with t
4 as
5 (
6 select 1 empno, 'hello' name from dual union all
7 select 1, 'hello' from dual union all
8 select 1, 'world' from dual union all
9 select 2, 'abc' from dual union all
10 select 2, 'dual' from dual
11 )
12 select empno, stragg_nodistinct(name) nodistinct_string, stragg_distinct(name) distinct_string from t
13 group by empno
14 )
15 Loop
16 dbms_output.put_line('Empno : ' || i.empno);
17 dbms_output.put_line('No distinct string : ' || i.nodistinct_string);
18 dbms_output.put_line('Distinct String : ' || i.distinct_string);
19 end loop;
20 end;
21 /
Empno : 1
No distinct string : hello,hello,world
Distinct String : hello,world
Empno : 2
No distinct string : abc,dual
Distinct String : abc,dual
PL/SQL procedure successfully completed.
SQL> spool off
Hope this helps.
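As an aside, the behaviour of the two aggregate functions can be sketched in a few lines of Python. (Note also that the instr-based duplicate check in ODCIAggregateIterate above would treat a value that is a substring of an earlier one as a duplicate; keeping a set of seen values avoids that.)

```python
def stragg(values, distinct=False):
    # comma-join values in arrival order; optionally drop exact repeats
    seen, out = set(), []
    for v in values:
        if distinct and v in seen:
            continue
        seen.add(v)
        out.append(v)
    return ",".join(out)

# mirrors the SQL demo above
nodist = stragg(["hello", "hello", "world"])               # "hello,hello,world"
dist = stragg(["hello", "hello", "world"], distinct=True)  # "hello,world"
```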
Regards
Raj -
Hello
I'm having real problems trying to put together a hierarchical query. I've not really done too much with them before other than really basic stuff, so any help would be greatly appreciated.
I'm trying to set up a structure that will let me represent time windows as part of a batch job controller. Each window can have a parent and can also have multiple dependencies...probably best if I use a diagram:
Main batch
|
|----Data Preparation
| |
| |
| |-----60m
|
|----Thread 1 *Data preparation
| |
| |
| |-----30m
|----Thread 2 *Data preparation
| |
| |
| |-----90m
|----Thread 3 *Data preparation
| |
| |
| |-----180m
|----Thread 4 *Data preparation
| |
| |
| |-----10m
|
|----Reports *Thread 1,2,3,4
| |
| |
| |-----130m
|
|-----11 hours
This shows that the "Main Batch" window is 11 hours in total. It contains 6 further windows, each with its own duration. Each window can also be dependent on the completion of one or more other windows. So in this case, the "Reports" window is owned by the "Main Batch" window, but is dependent on the completion of Threads 1-4. The text in italics denotes a dependency. Represented another way:
Main batch(11hr)
|
data preparation(1hr)
|
| | | |
thread 1 thread 2 thread 3 thread 4
(30m) (1hr30m) (3hr) (10m)
| | | |
|
Report(2hr10m)
To represent this I've got 2 tables and data like so:
CREATE TABLE dt_test_window
( id NUMBER,
  label VARCHAR2(30),
  duration INTERVAL DAY TO SECOND,
  parent_id NUMBER
);
CREATE TABLE dt_test_window_dependency
( dependent_id NUMBER,
  dependee_id NUMBER
);
INSERT INTO dt_test_window VALUES (1, 'Main batch window', TO_DSINTERVAL('0 11:00:00'), NULL);
INSERT INTO dt_test_window VALUES (2, 'Data preparation', TO_DSINTERVAL('0 01:00:00'), 1);
INSERT INTO dt_test_window VALUES (3, 'Thread 1', TO_DSINTERVAL('0 00:30:00'), 1);
INSERT INTO dt_test_window VALUES (4, 'Thread 2', TO_DSINTERVAL('0 01:30:00'), 1);
INSERT INTO dt_test_window VALUES (5, 'Thread 3', TO_DSINTERVAL('0 03:00:00'), 1);
INSERT INTO dt_test_window VALUES (6, 'Thread 4', TO_DSINTERVAL('0 00:10:00'), 1);
INSERT INTO dt_test_window VALUES (7, 'Thread 0 Reports', TO_DSINTERVAL('0 02:10:00'), 1);
INSERT INTO dt_test_window_dependency VALUES (3, 2);
INSERT INTO dt_test_window_dependency VALUES (4, 2);
INSERT INTO dt_test_window_dependency VALUES (5, 2);
INSERT INTO dt_test_window_dependency VALUES (6, 2);
INSERT INTO dt_test_window_dependency VALUES (7, 3);
INSERT INTO dt_test_window_dependency VALUES (7, 4);
INSERT INTO dt_test_window_dependency VALUES (7, 5);
INSERT INTO dt_test_window_dependency VALUES (7, 6);
/
What I'd like to do is run a query that will show the duration of each window, and also sum up the intervals for each window that it depends on, and where there are multiple dependencies at the same level, pick the longest duration. This would allow me to put in a start time, and show the expected end time for each stage. For these data I would expect to see
start_time := 12:00
label duration max_end_time
Main batch 11:00:00 23:00
Data preparation 01:00:00 13:00
Thread 1 00:30:00 13:30
Thread 2 01:30:00 14:30
Thread 3 03:00:00 16:00
Thread 4 00:10:00 13:10
Reports 02:10:00 18:10
The expected time for the reports to finish would be 18:10 because they depend on threads 1-4. Thread 3 is the longest-running, so Reports cannot start until 16:00.
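The expected end times follow from a simple rule: a window may start only once its longest dependency chain has finished. That rule can be sketched directly (window ids and durations taken from the test data above):

```python
from datetime import datetime, timedelta

# durations per window id, from the dt_test_window rows
durations = {
    1: timedelta(hours=11), 2: timedelta(hours=1),
    3: timedelta(minutes=30), 4: timedelta(hours=1, minutes=30),
    5: timedelta(hours=3), 6: timedelta(minutes=10),
    7: timedelta(hours=2, minutes=10),
}
# dependent -> dependees, from the dt_test_window_dependency rows
deps = {3: [2], 4: [2], 5: [2], 6: [2], 7: [3, 4, 5, 6]}

def end_time(wid, start):
    # a window starts when its slowest dependency finishes
    ready = max((end_time(d, start) for d in deps.get(wid, [])), default=start)
    return ready + durations[wid]

start = datetime(2007, 1, 23, 12, 0)
# end_time(7, start) gives 18:10 for Reports, matching the expected table
```

This is the behaviour the SQL below is trying to reproduce with CONNECT BY and a MAX over summed intervals.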
This is the query I have and the result I get, can anyone tell me where I'm going wrong?
tylerd@DEV2> WITH batch AS
2 ( SELECT
3 TO_DATE('23-JAN-2007 12:00:00','DD-MON-YYYY HH24:MI:SS') start_date
4 FROM
5 dual
6 )
7 SELECT
8 id,
9 label,
10 parent_id,
11 batch.start_date + max_duration
12 FROM
13 ( SELECT
14 win.id,
15 win.label,
16 win.parent_id,
17 MAX( ( SELECT
18 SUM_INTERVAL(win_parent.duration)
19 FROM
20 dt_test_window win_parent
21 WHERE
22 win_parent.id = win_dep.dependee_id
23 CONNECT BY
24 PRIOR dependee_id = dependent_id
25 ) + win.duration
26 ) max_duration
27 FROM
28 dt_test_window win,
29 dt_test_window_dependency win_dep
30 WHERE
31 win.id = win_dep.dependent_id(+)
32 GROUP BY
33 win.id,
34 win.label,
35 win.parent_id
36 ),
37 batch
38 ORDER BY
39 id
40 /
ID LABEL PARENT_ID BATCH.START_DATE+MA
1 Main batch window 23/01/2007 23:00:00
2 Data preparation 1 23/01/2007 13:00:00
3 Thread 1 1 23/01/2007 13:30:00
4 Thread 2 1 23/01/2007 14:30:00
5 Thread 3 1 23/01/2007 16:00:00
6 Thread 4 1 23/01/2007 13:10:00
7 Thread 0 Reports 1 23/01/2007 17:10:00 <- should be 18:10
7 rows selected.
Thank you
David
p.s.
Here's the code for sum_interval:
CREATE OR REPLACE TYPE SumInterval AS OBJECT
( runningSum INTERVAL DAY(9) TO SECOND(9),
-- Responsible for initialising the aggregation context
STATIC FUNCTION ODCIAggregateInitialize
( actx IN OUT SumInterval
) RETURN NUMBER,
-- Adds values to the aggregation context - NULLS are ignored
MEMBER FUNCTION ODCIAggregateIterate
( self IN OUT SumInterval,
val IN DSINTERVAL_UNCONSTRAINED
) RETURN NUMBER,
-- Routine invoked by Oracle to combine two aggregation contexts. This happens when an
-- aggregate is invoked in parallel
MEMBER FUNCTION ODCIAggregateMerge
( self IN OUT SumInterval,
ctx2 IN SumInterval
) RETURN NUMBER,
-- Terminate the aggregation context and return the result
MEMBER FUNCTION ODCIAggregateTerminate
( self IN SumInterval,
returnValue OUT DSINTERVAL_UNCONSTRAINED,
flags IN NUMBER
) RETURN NUMBER
);
/
CREATE OR REPLACE TYPE BODY SumInterval AS
STATIC FUNCTION ODCIAggregateInitialize
( actx IN OUT SumInterval
) RETURN NUMBER
IS
BEGIN
IF actx IS NULL THEN
actx := SumInterval (INTERVAL '0 0:0:0.0' DAY TO SECOND);
ELSE
actx.runningSum := INTERVAL '0 0:0:0.0' DAY TO SECOND;
END IF;
RETURN ODCIConst.Success;
END;
MEMBER FUNCTION ODCIAggregateIterate
( self IN OUT SumInterval,
val IN DSINTERVAL_UNCONSTRAINED
) RETURN NUMBER
IS
BEGIN
self.runningSum := self.runningSum + val;
RETURN ODCIConst.Success;
END;
MEMBER FUNCTION ODCIAggregateTerminate
( self IN SumInterval,
ReturnValue OUT DSINTERVAL_UNCONSTRAINED,
flags IN NUMBER
) RETURN NUMBER
IS
BEGIN
returnValue := self.runningSum;
RETURN ODCIConst.Success;
END;
MEMBER FUNCTION ODCIAggregateMerge
( self IN OUT SumInterval,
ctx2 IN SumInterval
) RETURN NUMBER
IS
BEGIN
self.runningSum := self.runningSum + ctx2.runningSum;
RETURN ODCIConst.Success;
END;
END;
/
CREATE OR REPLACE FUNCTION sum_interval
( x DSINTERVAL_UNCONSTRAINED
) RETURN DSINTERVAL_UNCONSTRAINED
PARALLEL_ENABLE
AGGREGATE USING SumInterval;
/
Hello
After some more experimentation I've found that I need to decode the dependent/dependee ids and group on that to get the correct result. Using id 7 (Reports) as the starting point:
tylerd@DEV2> SELECT
2 win_lookup.parent_id,
3 win_lookup.id,
4 win_dep_lookup.dependent_id,
5 win_dep_lookup.dependee_id,
6 win_lookup.duration,
7 CONNECT_BY_ISLEAF
8 FROM
9 dt_test_window_dependency win_dep_lookup,
10 dt_test_window win_lookup
11 WHERE
12 win_lookup.id = win_dep_lookup.dependee_id
13 START WITH
14 win_dep_lookup.dependent_id=7
15 CONNECT BY
16 PRIOR win_dep_lookup.dependee_id = win_dep_lookup.dependent_id
17 /
PARENT_ID ID DEPENDENT_ID DEPENDEE_ID DURATION CONNECT_BY_ISLEAF
1 3 7 3 +00 00:30:00.000000 0
1 2 3 2 +00 01:00:00.000000 1
1 4 7 4 +00 01:30:00.000000 0
1 2 4 2 +00 01:00:00.000000 1
1 5 7 5 +00 03:00:00.000000 0
1 2 5 2 +00 01:00:00.000000 1
1 6 7 6 +00 00:10:00.000000 0
1 2 6 2 +00 01:00:00.000000 1
8 rows selected.
I can see that if I group using DECODE and CONNECT_BY_ISLEAF like so:
tylerd@DEV2> SELECT
2 DECODE(CONNECT_BY_ISLEAF, 0, win_dep_lookup.dependee_id,win_dep_lookup.dependent_id) grouping_id,
3 SUM_INTERVAL(win_lookup.duration)
4 FROM
5 dt_test_window_dependency win_dep_lookup,
6 dt_test_window win_lookup
7 WHERE
8 win_lookup.id = win_dep_lookup.dependee_id
9 START WITH
10 win_dep_lookup.dependent_id=7
11 CONNECT BY
12 PRIOR win_dep_lookup.dependee_id = win_dep_lookup.dependent_id
13 GROUP BY
14 DECODE(CONNECT_BY_ISLEAF, 0, win_dep_lookup.dependee_id,win_dep_lookup.dependent_id)
15 /
GROUPING_ID SUM_INTERVAL(WIN_LOOKUP.DURATION)
3 +000000000 01:30:00.000000000
4 +000000000 02:30:00.000000000
5 +000000000 04:00:00.000000000
6 +000000000 01:10:00.000000000
I get the correct result for each thread. The problem is, when I try to plug this into the main query to select the MAX of these dependencies, I get:
tylerd@DEV2> WITH batch AS
2 ( SELECT
3 TO_DATE('23-JAN-2007 12:00:00','DD-MON-YYYY HH24:MI:SS') start_date
4 FROM
5 dual
6 )
7 SELECT
8 id,
9 label,
10 parent_id,
11 batch.start_date + max_duration
12 FROM
13 ( SELECT
14 win.id,
15 win.label,
16 win.parent_id,
17 MAX( ( SELECT
18 SUM_INTERVAL(win_lookup.duration)
19 FROM
20 dt_test_window_dependency win_dep_lookup,
21 dt_test_window win_lookup
22 WHERE
23 win_lookup.id = win_dep_lookup.dependee_id
24 START WITH
25 win_dep_lookup.dependent_id=win.id
26 CONNECT BY
27 PRIOR win_dep_lookup.dependee_id = win_dep_lookup.dependent_id
28 GROUP BY
29 DECODE(CONNECT_BY_ISLEAF, 0, win_dep_lookup.dependee_id,win_dep_lookup.dependent_id)
30 ) + win.duration
31 ) max_duration
32 FROM
33 dt_test_window win,
34 dt_test_window_dependency win_dep
35 WHERE
36 win.id = win_dep.dependent_id(+)
37 GROUP BY
38 win.id,
39 win.label,
40 win.parent_id
41 ),
42 batch
43 ORDER BY
44 id
45 /
MAX( ( SELECT
ERROR at line 17:
ORA-01427: single-row subquery returns more than one row
Which I do understand; I just don't know how to get round it.
Any ideas?
David -
Request does not save for real time cube
Hello,
I have created the real-time cube and copied the data from the actual cube.
I have created the MultiProvider on top of the actual and real-time cubes.
I created the aggregation level -> planning function etc.
I have created the input query; I try to make some modifications and then click to save the data.
I am getting the message that "data is saved", but no request is generated in BW for the real-time cube... The real-time cube is set to planning.
Appreciate your immediate help.
Thanks
Data is saved ... resolved.
Thanks -
Aggregation function for field in ALV Webdynpro ABAP
Dear all,
Can I create an aggregation function on a field of an ALV in Web Dynpro ABAP that is a currency-type field, and then display the result based on the currency key field?
I've tried using the code shown below:
lo_wd_field = lo_model->if_salv_wd_field_settings~get_field( 'TOTAL' ).
DATA: lo_field_aggr TYPE REF TO cl_salv_wd_aggr_rule.
lo_field_aggr = lo_wd_field->if_salv_wd_aggr~create_aggr_rule( ).
CALL METHOD lo_field_aggr->set_aggregation_type
EXPORTING
value = if_salv_wd_c_aggregation=>aggrtype_total.
lo_wd_field->set_reference_field_type( if_salv_wd_c_field_settings=>reffieldtype_curr ).
lo_wd_field->set_reference_field( 'WAERS' ).
and then I tried enabling the standard function as shown below:
lo_model->if_salv_wd_std_functions~set_aggregation_allowed( abap_true ).
But why is the result of the aggregation not formatted based on the currency key?
Best regards,
Agnis Virtinova Avency
Hi Virtinova,
My requirement is also similar.
I want to aggregate a total based on the currency key.
In the totals for sub-groups and the final total, without any text or description, I get only single, double, or triple dots (.).
Is it possible to populate / change those rows with a useful text or description?
I am not getting the currency type after the totals/sub-totals.
How can I fix this in the ALV list after using aggregation for the ALV?
Thanks in advance.
Dav -
Create a View with Aggregation Function (COUNT)
I've been looking everywhere for a way to create a view with a few basic fields and some other fields containing aggregation functions.
For instance:
To display a view that contain all the Contract Agreement and the corresponding count of the PO releases.
Agreement Nbr, Total PO releases
I need this view so that I can create a search help with this view.
I found something about the "CREATE VIEW" statement, but I don't have any idea how to use it.
Any helps toward this matter is very much appreciated, thanks.Hello Aldern
I guess you have read about the SQL statement "CREATE VIEW". When we create a view in the ABAP Dictionary, this SQL statement is eventually used to create the view on the database. Since Dictionary views do not offer any aggregation options, you cannot achieve what you want using views.
The solution for your problem is to create a <b>search help</b> having a <b>search help exit</b>. Within the exit you can apply your aggregation functions and add the resulting values to the displayed search help data.
Regards
Uwe -
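As an aside, outside the ABAP Dictionary a plain SQL view with an aggregate is straightforward; the following is a minimal sketch using Python's sqlite3, with hypothetical table and column names standing in for the contract-agreement and PO-release data described above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical po_release table: one row per PO release of an agreement.
cur.executescript("""
CREATE TABLE po_release (agreement_nbr TEXT, release_nbr INTEGER);
INSERT INTO po_release VALUES ('4600000001', 1), ('4600000001', 2), ('4600000002', 1);

-- A view with an aggregation function: each agreement and its release count.
CREATE VIEW v_agreement_releases AS
SELECT agreement_nbr, COUNT(*) AS total_po_releases
FROM po_release
GROUP BY agreement_nbr;
""")

rows = cur.execute(
    "SELECT * FROM v_agreement_releases ORDER BY agreement_nbr"
).fetchall()
print(rows)  # [('4600000001', 2), ('4600000002', 1)]
```

The limitation discussed above is specific to classic Dictionary views, not to SQL itself, which is why the search-help-exit workaround is needed on the ABAP side.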
How to disable Row label from the aggregation function in Pivot table
Hello everyone,
I have a table in Power Pivot as shown below:
Item_Name   Category     Vendor     Sales_Amount
Item 1      Category 1   Vendor 1   30
Item 2      Category 1   Vendor 2   25
Item 3      Category 2   Vendor 3   50
Item 3      Category 2   Vendor 3   60
Item 3      Category 2   Vendor 3   20
Item 2      Category 1   Vendor 2   10
Item 2      Category 1   Vendor 2   30
Item 2      Category 1   Vendor 2   100
Item 2      Category 1   Vendor 1   20
Using the above table, I have to create a rank (based on sales amount) by Category and Item name in a pivot table. I did this easily by adding the two dimension attributes (Category, Item_Name) to the row labels and Sales_Amount to the values area, and then I calculated
Rank:=RANKX(ALL(Sales[Item_Name]),[Sum of Sales_Amount])
so the pivot table finally looks as shown below:
But the end user also wants to see the vendor name in the pivot table, while the rank should still be based on sales amount by Category and Item name. If I also add the vendor name to the row labels, the rank is calculated based on sales amount by Category, Item name, and Vendor.
I would be really grateful if anyone could advise how to fix this problem, as it will be helpful for most of my reports.
Regards,
Robert
Darren Gosbell,
Thanks for your reply.
Item_Name   Category     Vendor     Sales_Amount
Item 1      Category 1   Vendor 1   30
Item 2      Category 1   Vendor 2   25
Item 3      Category 2   Vendor 3   50
Item 3      Category 2   Vendor 3   60
Item 3      Category 2   Vendor 3   20
Item 2      Category 1   Vendor 2   10
Item 2      Category 1   Vendor 2   30
Item 2      Category 1   Vendor 2   100
Item 2      Category 1   Vendor 2   20
Item 4      Category 1   Vendor 2   3
Item 4      Category 1   Vendor 2   50
Item 4      Category 1   Vendor 2   3
The above is my new source data.
I used this function to calculate the rank: Rank:=RANKX(ALL(Sales[Item_Name]),[Sum of Sales_Amount])
and I also used yours below:
Rank2:=RANKX(SUMMARIZE(ALL(Sales),[Item_Name],[Category]),CALCULATE([Sum of Sales_Amount],ALLEXCEPT(Sales,Sales[Item_Name],Sales[Category])))
The preceding screenshot shows the result of our two functions, but I want a pivot table as shown below:
Could you please help me fix this. -
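The desired behavior — ranking each item by its sales total per (Category, Item_Name) while still displaying vendor rows — can be sketched outside DAX. Below is a minimal pure-Python sketch over the sample data from the second table above; the skip-tie ranking (1 + number of strictly larger totals) mirrors what the RANKX/ALLEXCEPT combination aims for:

```python
from collections import defaultdict

# Rows from the second sample table: (item, category, vendor, sales).
rows = [
    ("Item 1", "Category 1", "Vendor 1", 30),
    ("Item 2", "Category 1", "Vendor 2", 25),
    ("Item 3", "Category 2", "Vendor 3", 50),
    ("Item 3", "Category 2", "Vendor 3", 60),
    ("Item 3", "Category 2", "Vendor 3", 20),
    ("Item 2", "Category 1", "Vendor 2", 10),
    ("Item 2", "Category 1", "Vendor 2", 30),
    ("Item 2", "Category 1", "Vendor 2", 100),
    ("Item 2", "Category 1", "Vendor 2", 20),
    ("Item 4", "Category 1", "Vendor 2", 3),
    ("Item 4", "Category 1", "Vendor 2", 50),
    ("Item 4", "Category 1", "Vendor 2", 3),
]

# Total sales per (category, item) -- the vendor is deliberately ignored,
# which is what ALLEXCEPT(Sales, Sales[Item_Name], Sales[Category]) does in DAX.
totals = defaultdict(int)
for item, category, vendor, sales in rows:
    totals[(category, item)] += sales

# Rank 1 = highest total; 1 + count of strictly larger totals.
rank = {key: 1 + sum(1 for t in totals.values() if t > total)
        for key, total in totals.items()}

print(rank[("Category 2", "Item 3")])  # Item 2 leads with 185, so Item 3 (130) prints 2
```

Every vendor row for a given (category, item) pair would then display the same rank, which is the behavior the question asks for.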
Does the Resume Function work in an Aggregator Project?
I hope this is a simple yes-or-no question: if I use the Aggregator to publish several SWFs, will I be able to use the resume function that is built into Captivate's TOC settings?
In my experience, no, it doesn't. The self-paced learning bookmarking is designed for single projects, not aggregated projects.
-
Dimension table to support Hierarchy and the aggregation functions
Hello expert,
Now I think I know why those aggregation functions (e.g. SUM, COUNT) fail whenever the report is executed.
I have a fact table REJECT_FACT contains all the information of rejects:
Reject ID
Reject Category
Reject Code
Reject Desc
Site Desc
Site Code
Region Desc
Age Group
Reject Date
So I created an alias REJECT_DIM based on REJECT_FACT. After several trials, I think the aggregation functions do not work with the alias, because after I remove REJECT_DIM the aggregation seems to work.
Is my concept right, or am I missing something? If the data model for a data warehouse should be simple, why do we need to create many dimension tables to support the hierarchy?
Hello expert,
Thank you very much for your reply.
Actually the data model is very simple. There is only one physical table REJECT_FACT. The structure is as follows:
Reject ID (NUMBER)
Reject Category (VARCHAR2)
Reject Code (VARCHAR2)
Reject Code Desc (VARCHAR2)
Site Desc (VARCHAR2)
Site Code (VARCHAR2)
Region Desc (VARCHAR2)
Age Group (VARCHAR2)
Reject Date (DATE)
The hierarchy required is as follows:
Reject Category -> Reject Code Desc -> Site Desc -> Region Desc -> Age Group -> Reject Date.
I want to produce a count at each hierarchy level.
How can I populate the hierarchy structure effectively?
Thanks......
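A count at each hierarchy level is essentially a rollup over successive prefixes of the hierarchy keys. The following is a minimal pure-Python sketch; the reject categories, code descriptions, and site names are hypothetical stand-ins for the REJECT_FACT columns listed above:

```python
from collections import Counter

# Hypothetical flat records shaped like the top of the hierarchy:
# (reject_category, reject_code_desc, site_desc)
records = [
    ("Damage", "Cracked casing", "Plant A"),
    ("Damage", "Cracked casing", "Plant B"),
    ("Damage", "Scratched surface", "Plant A"),
    ("Electrical", "Short circuit", "Plant A"),
]

# Count at each level by grouping on successive key prefixes:
# depth 1 = category only, depth 2 = category + code desc, and so on.
hierarchy = ["Reject Category", "Reject Code Desc", "Site Desc"]
for depth, level in enumerate(hierarchy, start=1):
    counts = Counter(rec[:depth] for rec in records)
    print(level, dict(counts))
```

In a reporting tool the same idea is usually expressed as a GROUP BY on each prefix of the drill path (or a GROUP BY ROLLUP), which is why a clean set of dimension levels makes the counts cheap to compute.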