SDO_UNION AGGREGATION ROUTINE
I am in search of a routine to perform aggregations of grouped data using the sdo_union function. Using the procedure below, a recordset is created from my ZIP_TEMP table WHERE State='NY' (basically I'm trying to create a New York object from all of the zip codes therein), and it loops through the recordset aggregating the objects. The procedure runs to completion, but it only creates one point... not even an object. Any takers?
create or replace procedure agg_union
as
  cnt           number := 0;
  result_geoloc mdsys.sdo_geometry;
  -- renamed from "diminfo" to avoid shadowing the DIMINFO column selected below
  v_diminfo     mdsys.sdo_dim_array;
  cursor c1 is
    SELECT GEOLOC FROM ZIP_TEMP WHERE STATE = 'NY';
begin
  SELECT diminfo INTO v_diminfo
  FROM user_sdo_geom_metadata
  WHERE table_name = 'ZIP_TEMP'
  AND column_name = 'GEOLOC';
  for r in c1 loop
    cnt := cnt + 1;
    if (cnt = 1) then
      result_geoloc := r.geoloc;
    else
      result_geoloc :=
        mdsys.sdo_geom.sdo_union(r.geoloc, v_diminfo, result_geoloc, v_diminfo);
    end if;
  end loop;
  INSERT INTO NY VALUES (1, result_geoloc);
  commit;
end;
/
Daniel,
Your example looks like an example we received from an Oracle Spatial employee. Is this true?
If not, I can e-mail the example we received.
We are using SDO_Union successfully, but only after using the Validate_Layer to verify that our geometries were valid. Until we had our data in Oracle correctly, we were unable to use SDO_Union and SDO_Difference.
Our geometries are now 2-D (x,y) instead of 3-D (x,y,z), which uses 25% less space.
Thomas LeBlanc
[email protected]
Similar Messages
-
Hi,
We have a log table that is notoriously getting corrupt.
The usage is to log all web requests and, in a batch job, aggregate those requests and delete the aggregated rows.
Approx. rows per day is 400,000 (during holidays...).
The table is defined as:
CREATE TABLE wisweb.DBA.Log (
Id int NOT NULL,
Time timestamp NULL,
Servlet char(100) NULL,
Params text NULL,
Type int NOT NULL,
TableName char(50) NULL,
IdFieldName char(50) NULL,
IdFieldValue int NULL,
Referer text NULL,
SearchString char(100) NULL,
RecordCount int NOT NULL,
Url text NULL,
TableId int NOT NULL,
ServletId int NOT NULL,
IP varchar(50) NULL,
UserAgent varchar(200) NULL
);
-- Creating indexes
CREATE INDEX "Time" ON Log ( "Time" ASC );
CREATE INDEX "Type" ON Log ( "Type" ASC );
CREATE INDEX "IdFieldValue" ON Log ( "IdFieldValue" ASC );
CREATE INDEX "RecordCount" ON Log ( "RecordCount" ASC );
CREATE INDEX "TableId" ON Log ( "TableId" ASC );
CREATE INDEX "ServletId" ON Log ( "ServletId" ASC );
CREATE INDEX "IdFieldName" ON Log ( "IdFieldName" ASC );
And when I validate the table I get:
Validate table log;
ERROR: Row count mismatch between table "Log" and index "Time"
SQL Anywhere Error -300: Run time SQL error -- Validation of table "Log" has failed
And the corresponding:
Validate index "Time" on Log;
Fails with the same error:(
I can rebuild the Time index without problem, but it still will not validate.
I can even drop the index and recreate it and it still fails validation!
And if I validate the table without the Time index, it fails on the IdFieldValue index.
Last time I had to rename table, create a new one and copy data over to the new table. But I only stayed uncorrupt for about two weeks:(
We are running the version:
dbsrv12 GA 12 0 1 3967 linux 2013/09/04 15:54:03 posix 64 production
Best regards
Ove Halseth
Ahh, I did not notice the Caution notice in the documentation :(
Then it makes sense that the table validates if we drop all indexes.
We found a bug in our aggregation routine that stopped the deletion of aggregated rows.
So when the number of rows grew, we tried to validate the table. And when that failed, we expected that to be the problem...
Thanks for the guidance:)
Ove
PS: I don't know why I can't mark your answer as correct...
-
Transformation End Routine - Aggregation - Dummy rule
Hello,
I am writing a transformation end routine.
I would like to use a 'SUM' aggregation behaviour for the key figures in my Result_Table instead of a MOVE.
The help at sap http://help.sap.com/saphelp_nw04s/helpdata/en/e3/732c42be6fde2ce10000000a1550b0/content.htm
says that
"key figures are updated by default with the aggregation behavior Overwrite (MOVE)".
In order to do otherwise, "You have to use a dummy rule to override this.".
Could someone explain to me how to do this?
Claudio
Claudio,
Map your KF to a dummy KF and then set the aggregation to Summation.
Then apply your transformation end routine and then the value calculated in the end routine will get summed up to the existing value.
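Illustratively, the difference between the two aggregation behaviours can be sketched outside BW; this is a plain Python sketch with made-up values, not SAP code:

```python
# Minimal sketch (hypothetical data): how a DSO key figure behaves under
# Overwrite (MOVE) versus Summation aggregation.

def update_key_figure(existing, incoming, aggregation):
    """Apply one incoming value to the value already stored in the target."""
    if aggregation == "MOVE":   # Overwrite: the incoming value replaces the stored one
        return incoming
    elif aggregation == "SUM":  # Summation: the incoming value is added
        return existing + incoming
    raise ValueError(aggregation)

stored = 100
print(update_key_figure(stored, 40, "MOVE"))  # 40  (end-routine default)
print(update_key_figure(stored, 40, "SUM"))   # 140 (after the dummy-rule trick)
```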
Arun
-
How to add new records in Start routine or end routine.
Hi All,
My requirement is to transfer data from one DSO to another DSO. But while transferring a single record from DSO1, I want to add 7 records to DSO2 for each record in DSO1, with a slight change in data (with a different key). I want to do it in a start routine or end routine. How can I do it? If you have any ABAP code for this, then please send it.
Regards
Amlan
You can use this code; replace the fields where I have marked with <>.
DATA : WA_RESULT_PACKAGE TYPE DSO2, "active-table line type of DSO2
WA_RESULT_PACKAGE_1 LIKE WA_RESULT_PACKAGE.
DATA : IT_RESULT_PACKAGE LIKE TABLE OF WA_RESULT_PACKAGE.
DATA : DATE1 TYPE SY-DATUM.
DATA : DAYDIFF TYPE i.
DATA : RECORD_NO TYPE rsarecord.
SORT RESULT_PACKAGE BY <KEY FIELDS>. "specify the key fields here
RECORD_NO = 1.
LOOP AT RESULT_PACKAGE INTO WA_RESULT_PACKAGE.
IF WA_RESULT_PACKAGE_1-<KEYFIELDS> NE WA_RESULT_PACKAGE-<KEYFIELDS>.
WA_RESULT_PACKAGE_1 = WA_RESULT_PACKAGE.
DAYDIFF = WA_RESULT_PACKAGE-ENDDATE - WA_RESULT_PACKAGE-STARTDATE.
WHILE DAYDIFF NE 0.
DATE1 = WA_RESULT_PACKAGE-STARTDATE + DAYDIFF.
MOVE DATE1 TO WA_RESULT_PACKAGE-<KEYFIELDDATE>.
MOVE RECORD_NO TO WA_RESULT_PACKAGE-RECORD.
APPEND WA_RESULT_PACKAGE TO IT_RESULT_PACKAGE. "APPEND ... TO, not INTO
DAYDIFF = DAYDIFF - 1.
RECORD_NO = RECORD_NO + 1.
CLEAR DATE1.
ENDWHILE.
CLEAR DAYDIFF.
ENDIF.
ENDLOOP.
CLEAR RESULT_PACKAGE[]. "empty the package; DELETE RESULT_PACKAGE[] is not valid syntax
RESULT_PACKAGE[] = IT_RESULT_PACKAGE[].
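Outside ABAP, the fan-out logic above can be sketched like this in Python (hypothetical field names; one source record becomes one record per day between its start and end dates):

```python
from datetime import date, timedelta

def explode_by_day(record):
    """Fan one record out into one record per day in (start, end],
    mirroring the WHILE DAYDIFF loop above (field names are made up)."""
    out = []
    day_diff = (record["end"] - record["start"]).days
    while day_diff > 0:
        # copy the record, stamping the varying key date
        copy = dict(record, key_date=record["start"] + timedelta(days=day_diff))
        out.append(copy)
        day_diff -= 1
    return out

rec = {"order": 1, "start": date(2024, 1, 1), "end": date(2024, 1, 8)}
rows = explode_by_day(rec)
print(len(rows))  # 7 records generated from one source record
```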
Regarding point 3:
The key figures will then show up aggregated in the report. Hope that is fine with you.
Note:
Before loading data, in DTP set the semantic key with the key field of the DSO1.This brings all the similar data w.r.t the key fields from the PSA together in a single package.
rgds, Ghuru
-
InfoSet in SAP BI 7.10 and Key figure aggregation
HI SAP Gurus,
I am new in SAP BI area. I have my first problem.
I want to create a report for the profit of goods.
The cost of goods (cogs) is constant for each material for one month.
The formula to calculate the profit of goods = sales turnover - cogs of month * sales amount.
I have defined in BW a time-dependent InfoObject with attribute cogs.
I have 2 data sources: an InfoCube for transactional sales data from R/3, and material cogs master data loaded from a csv file each month into the InfoObject.
The InfoProvider for the report is an InfoSet (transactional cube and cogs InfoObject).
My problems are
1) When I create an InfoSet, SAP BW automatically creates new technical names for all characteristics and key figures, and the first technical name should be an alias for each InfoCube and InfoObject in the InfoSet.
2) The new InfoSet technical name erased my aggregation reference characteristic (=calmonth).
3) In the report the key figure cogs was aggregated for each customer sales order and customer; that means the value of cogs is not constant when it is aggregated according to customer sales order.
Thanks a lot for your support
Solomon Kassaye
Munich Germany
Solomon, find some code below for the start routine; change the fields and edit the code to suit your exact structure and requirements, but the logic is all there.
4) Create a Start Routine on the transformation from sales DSO to Profit of Goods InfoCube.
Use a lookup on the COG DSO to populate the monthly COG field.
**Global Declaration
TYPES: BEGIN OF I_S_COG,
/BIC/GOODS_NUMBER TYPE /BIC/A<DSO Table name>-/BIC/GOODS_NUMBER,
/BIC/GOODS_NAME TYPE /BIC/A<DSO Table name>-/BIC/GOODS_NAME,
/BIC/COG TYPE /BIC/A<DSO Table name>-/BIC/COG,
/BIC/PERIOD TYPE /BIC/A<DSO Table name>-/BIC/PERIOD,
END OF I_S_COG.
DATA: I_T_COG TYPE STANDARD TABLE OF I_S_COG,
wa_COG LIKE LINE OF i_t_COG.
*Local Declaration
DATA: temp TYPE _ty_t_SC_1.
temp[] = SOURCE_PACKAGE[].
SELECT /BIC/GOODS_NUMBER /BIC/GOODS_NAME /BIC/COG /BIC/PERIOD
FROM /BIC/A<DSO Table name>
INTO CORRESPONDING FIELDS OF TABLE i_t_COG
FOR ALL ENTRIES IN temp
WHERE /BIC/GOODS_NUMBER = temp-/BIC/GOODS_NUMBER.
SORT i_t_COG BY /BIC/GOODS_NUMBER /BIC/PERIOD.
LOOP AT SOURCE_PACKAGE ASSIGNING <source_fields>.
LOOP AT i_t_COG INTO wa_COG
WHERE /BIC/GOODS_NUMBER = <source_fields>-/BIC/GOODS_NUMBER
AND /BIC/PERIOD = <source_fields>-/BIC/PERIOD.
* copy the looked-up COG; the field symbol updates SOURCE_PACKAGE in place
<source_fields>-/BIC/COG = wa_COG-/BIC/COG.
ENDLOOP.
ENDLOOP.
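The lookup is essentially a keyed join; here is a hedged Python sketch of the same idea, with made-up field names and values:

```python
# Hypothetical sketch of the start-routine lookup: enrich each source row
# with the monthly COG keyed by (goods_number, period).
cog_dso = {
    (1001, "2024-01"): 7.50,
    (1002, "2024-01"): 3.25,
}

source_package = [
    {"goods_number": 1001, "period": "2024-01", "qty": 10, "cog": None},
    {"goods_number": 1002, "period": "2024-01", "qty": 4, "cog": None},
]

for row in source_package:
    key = (row["goods_number"], row["period"])
    if key in cog_dso:          # on a miss, leave the field untouched
        row["cog"] = cog_dso[key]

print(source_package[0]["cog"])  # 7.5
```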
5) Create an End Routine which calculates Profit using the formula and updates the result set with the value in the Profit column.
Given your requirement for the profit calculation
profit of goods = sales turnover - cogs of month * sales amount
Write a simple end routine yourself
*Local Declaration
* placeholder field names - replace with your own InfoObjects
LOOP AT RESULT_PACKAGE ASSIGNING <result_fields>.
<result_fields>-profit = <result_fields>-salesturnover
- <result_fields>-cog * <result_fields>-salesamount.
* the field symbol updates RESULT_PACKAGE in place, so no MODIFY is needed
ENDLOOP.
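One thing worth noting about the formula: multiplication binds tighter than subtraction, so cogs * amount is computed before the subtraction. A quick sketch with made-up numbers:

```python
# Made-up numbers: profit = turnover - cogs * amount (multiplication first).
turnover = 1000.0
cogs = 7.5       # cost of goods for the month
amount = 40      # sales amount

profit = turnover - cogs * amount
print(profit)  # 700.0

# Parenthesizing differently would change the result entirely:
print((turnover - cogs) * amount)  # 39700.0
```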
As the above start and end routines are used to enhance your sales DSO, your fields for customer number and the sales order should already be in your DSO for drilldown.
Let me know how you get on.
-
Help with Aggregation Summation into DSO
Hi, I have a question about Key Figure Aggregation Summation in transformation rules into a DSO from 2LIS_11_VAITM.
We currently had an old order for an Order Qty of 600 pcs. A recent request came in to change it to 400. After the delta, our Order Qty was -200. The rule is Summation, and I figure it should work like 600 + (-600) + 400 = 400, but that is not what happened. It's almost like the rule considered the original order qty to be 0, and when the -600 and +400 delta came in they were summed to get the -200. Does it have anything to do with the change logs only having the last 30 days available?
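Assuming ABR-style before/after images, the arithmetic can be sketched as follows: summation only yields 400 if the original +600 is already in the target, while the delta pair applied to an empty target sums to -200 (a minimal Python illustration):

```python
# Additive (summation) update of a key figure from delta images.
def apply_deltas(initial, deltas):
    """Sum delta records onto whatever is already stored in the target."""
    total = initial
    for d in deltas:
        total += d
    return total

delta_pair = [-600, +400]   # before image, after image

print(apply_deltas(600, delta_pair))  # 400  -> expected, original load present
print(apply_deltas(0, delta_pair))    # -200 -> what was observed
```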
Can anyone tell me what is wrong here?
Kennet:
Could you please provide more details? For example:
- Is the problem (differences on the Key Figure values) at the DSO level or at the Cube level?
- Does your DataSource version have DSO capability? (please refer to SAP Note 440416 - "BW OLTP: Correction report for change of delta process").
- If your DataSource supports "ABR" extraction, does the data in the PSA look ok? (After / Before and Reverse images).
- Have you enhanced the DataSource to include custom fields? If so, does the ABAP routine use the SORT command?
- Do you update the DSO with the 2LIS_11_VAITM DataSource only, or do you use another DataSource to send data to the same DSO?
- Have you considered changing the Rules to "Overwrite" instead of "Summation"?
- What fields are included as part your DSO Key?
- Do you have the ROCANCEL field mapped to 0STORNO / 0RECORDMODE InfoObjects?
Regards,
Francisco Milán.
Edited by: Francisco Milan on Jul 1, 2010 9:13 AM
-
Key Figues - SUMMATION in Expert Routine
Hi All,
Is there any way to have the aggregation of Key Figures as "Summation" in an Expert Routine?
Please let me know if there is any work around for this..
Thanks,
Kapil
This type of routine is only intended for use in special cases. You can use the expert routine if there are not sufficient functions to perform a transformation. The expert routine should be used as an interim solution until the necessary functions are available in the standard routine.
Hi Kapil,
You can use expert routine to program the transformation yourself without using the available rule types. You must implement the message transfer to the monitor yourself.
If you have already created transformation rules, the system deletes them once you have created an expert routine.
If the target of the transformation is a DataStore object, key figures are updated by default with the aggregation behavior Overwrite (MOVE). So I think expert routine aggregation does not work for transformations to the standard DSO.
For more details:
[https://forums.sdn.sap.com/click.jspa?searchID=13078965&messageID=2823817]
Hope it helps you.
Regards,
Yokesh
-
Calculations in Update rules/Start routine/End Routine
Hi Friends,
I have loaded data into a DSO and I have three fields in it. Let's say Field1, Field2 and Field3. Field1 and Field2 are being populated through an update rule in the transformation. The aggregation type for these two fields is "Summation".
Now, after the transformation has executed, Field1 and Field2 are filled with values. I want to calculate the value of Field3 as follows:
Field3 = Field1 - Field2
Can anyone tell me where I can do this calculation? I know we can do this in an end routine and in the query, but I want to know if there is any other place I can do this calculation in the transformation. If I try to do this calculation in the update rule for Field3 in the transformation, I don't see Field1 and Field2, as these are not source fields. I cannot write a formula either, because formulas can be written only on source fields, not on data target fields.
Your help will be appreciated in terms of points.
Thanks,
manmit
Hi,
in the start routine in the global section define the two fields:
data: g_amount1 type /bic/oi<your keyfigure name>,
g_amount2 type /bic/oi<your keyfigure name>.
in the routines for your 2 key figures, store the result in those fields too.
routine for field1.
g_amount1 = result.
routine for field2.
g_amount2 = result.
and in the routine to field3
result = g_amount1 - g_amount2." or whatever calculation/derivation has to be done.
kind regards
Siggi
Message was edited by:
Siegfried Szameitat
-
PO Qty is getting Aggregated.
Hi All,
I have a requirement like I need Invoice Accounting Doc No, GR Accounting Doc No, GR Qty, IR Qty, PO No and PO Qty.
I have created one cube which is getting updated from the FI ODS.
For the PO No and PO Qty I have written a routine in the update rules which will update from the PO ODS.
My problem: while updating the PO Quantity from the PO ODS to the cube, the PO qty is displayed for each accounting document number (WE and RE), and at report level it gets aggregated.
PO Number | Item | Accounting Doc No | Type | PO Qty
2520555 10 45465454 WE 100
2520555 10 43546546 RE 100
2520555 20 465464 RE 200
This is how it comes out, which it should not...
For the PO, the PO Qty is 300, but it is aggregating and showing 400 in the report.
Kindly provide me the logic for how to handle this one.
Hi All,
We have created one Z program for fetching the PO Qty only once for all the accounting document numbers per PO, and it is working fine. So we copied the same code into the update rules for PO Qty, and it is not working there; it takes zeros for all the POs.
Please find the code below and kindly let me know how to correct it in the update rules.
I am doing a lookup for PO qty.
Code in Zprogram.
DATA : begin of itab occurs 0,
Doc_Num type /BI0/OIOI_EBELN,
AC_DOC_NO type /BI0/OIAC_DOC_NO,
Item_Num type /BI0/OIOI_EBELP,
qty type /BI0/OIORDER_QUAN,
end of itab.
DATA : Temp type /BI0/OIOI_EBELN,
Temp1 type /BI0/OIOI_EBELP.
SELECT DISTINCT
OI_EBELN
AC_DOC_NO
OI_EBELP
ORDER_QUAN
FROM /BIC/AYSPND_O300 INTO table itab WHERE /BIC/YSPNDIND = 'IT'.
SORT itab BY Doc_Num Item_Num. "sort by PO and item so only the first row per item carries the qty
Loop at itab.
IF Temp <> itab-Doc_Num OR Temp1 <> itab-Item_Num.
write : / itab-Doc_Num, itab-AC_DOC_NO, itab-Item_Num, itab-qty.
ELSE. write : / itab-Doc_Num, itab-AC_DOC_NO, itab-Item_Num, '0'.
ENDIF.
Temp = itab-Doc_Num.
Temp1 = itab-Item_Num.
endloop.
CLEAR itab.
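The write-the-qty-only-once idea can be sketched in Python with made-up rows; the same first-occurrence test is what the update-rule routine needs:

```python
# Emit the PO quantity only for the first accounting document of each
# (po, item); later documents get 0, so report-level summation yields
# the true PO quantity.
rows = [
    {"po": 2520555, "item": 10, "acc_doc": 45465454, "qty": 100},
    {"po": 2520555, "item": 10, "acc_doc": 43546546, "qty": 100},
    {"po": 2520555, "item": 20, "acc_doc": 465464,   "qty": 200},
]

rows.sort(key=lambda r: (r["po"], r["item"]))
seen = set()
for r in rows:
    key = (r["po"], r["item"])
    r["qty_out"] = r["qty"] if key not in seen else 0
    seen.add(key)

print(sum(r["qty_out"] for r in rows))  # 300, not 400
```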
Thnx
-
What is the use of end routine in bi 7.0
hi friends,
What is the use of the end routine in BI 7.0? In what scenario do we use an end routine?
Thanking you,
suneel.
Hi Suneel,
check
http://help.sap.com/saphelp_nw70/helpdata/en/e3/732c42be6fde2ce10000000a1550b0/frameset.htm
End Routine
An end routine is a routine with a table in the target structure format as input and output parameters. You can use an end routine to postprocess data after transformation on a package-by-package basis. For example, you can delete records that are not to be updated, or perform data checks.
If the target of the transformation is a DataStore object, key figures are updated by default with the aggregation behavior Overwrite (MOVE). You have to use a dummy rule to override this.
hope this helps.
-
Syntax error while executing Key Figure routine
Hello,
I am posting my question again, as I have not got any solution. Please help; it will be really appreciated. Here's the description.
I am loading data from a flatfile to an InfoCube with 3 key figures: Sales Price, Sales Quantity, Sales Revenue. I am getting values for Sales Price and Sales Quantity from the flatfile and calculating Sales Revenue (IO_VB_REV) using a routine.
- I created the transformations, and under the Rule Group: Standard Group box I have not mapped IO_VB_REV to any datasource field, and I see an 'X' sign against the field.
- In the rule details screen, I add two source fields of the rule, IO_VB_QU and IO_VB_PRC. I see IO_VB_REV already added under the 'Target fields of Rule' section.
- I then add the only following line to routine
RESULT = SOURCE_FIELDS_RULE-/BIC/IO_VB_QU * SOURCE_FIELDS_RULE-/BIC/IO_VB_PRC .
- Clicking on check button it gives no syntax error message. I save and exit back to Rule Details page.
- For the IO_VB_REV field, if I select either of the 2 options, i.e. Fixed Unit or No Conversion, it gives me a dump. So I select the 'from conversion' option. In that case I enter USD against the 'Conversion Type' field. It gives me an information box popup with an 'Incorrect Input' message.
- When the check button is hit on the 'Rule Details' page, it gives me the error message 'Syntax error in routine'.
Why is it giving me a syntax error, in spite of the fact that I get a 'no syntax error' message on the routine page? Also, why am I getting short dumps on changing the Conversion Type?
Hi Olivier,
I really thank you for your efforts in helping me solve this problem. Below is the complete description of the KFs defined,
1. Created InfoObjects for Sales Quantity, Sales Price, Sales Revenue
Definition of <b>Sales Quantity</b>
Name: IO_VB_QU
Type/Data Type : Quantity
Data Type: QUAN - Quantity field, points to unit field with format UN
Currency / unit of measure
Unit / Currency: 0UNIT
Definition of <b>Sales Price</b>
Name: IO_VB_PRC
Type/Data Type : Amount
Data Type: CURR - Currency field, stored as DEC
Currency / unit of measure
Fixed Currency: USD
Definition of <b>Sales Revenue</b>
Name: IO_VB_REV
Type/Data Type : Amount
Data Type: CURR - Currency field, stored as DEC
Currency / unit of measure
Fixed Currency: USD
2. As the data is being read from a flatfile, I created a DataSource with fields for Sales ID, Sales Price, Sales Quantity. As I am reading the unit for quantity from the file (values EA, BOX, CSE), I have a corresponding field UNIT in the DataSource. No field for Sales Revenue.
3. I use 'Create Transformation' functionality to automatically create transformations.
4. Rule Details page of each of 3 KFs has following values
<b>Rule Details page of Sales Quantity</b>
Rule Type: Direct Assignment
Aggregation : Summation
Target Unit: 0UNIT
Unit: from Source
Source Unit: UNIT
Source Fields of Rule: /BIC/IO_VB_QU, UNIT
Target Fields of Rule: 0UNIT, IO_VB_QU
<b>Rule Details page of Sales Price</b>
Rule Type: Direct Assignment
Aggregation : Maximum
Fixed Target Currency : USD
Currency: No Conversion
Source Fields of Rule: /BIC/IO_VB_PRC
Target Fields of Rule: IO_VB_PRC
<b>Rule Details page of Sales Revenue</b>
Rule Type: Routine
Aggregation : Summation
Fixed Target Currency : USD
Currency: from Conversion
Conversion Type: ??????.....(I entered USD it gives me Incorrect Input message)
Source Fields of Rule: /BIC/IO_VB_PRC, /BIC/IO_VB_QU, UNIT
Target Fields of Rule: IO_VB_REV
I have this line in the routine
RESULT = SOURCE_FIELDS_RULE-/BIC/IO_VB_QU * SOURCE_FIELDS_RULE-/BIC/IO_VB_PRC.
Let me know if you need any other info...
I really appreciate you trying to help me.
Vidya
-
Use of filters and aggregations based on hierarchy nodes in an update rule
Hello,
I need to calculate some indicators from an ODS (BW 3.5) that contains raw data into another one that will contain indicators. These figures are the results of the use of filters and aggregations based on hierarchy nodes (for example: all sales accounts under a node).
In fact, this is typically a query, but I want to store these figures, so I need to.
I understood I have to use a start routine. I have never done that before.
Could you provide me with easy-to-understand-for-newbies examples of:
- filtering data based on the value of an infoobject (value must be empty, for example)
- filtering and aggregation of data based on membership of hierarchy nodes (all sales figures, ...)
- aggregation of the key figures based on different characteristics after filtering of these
Well, I am asking a lot ...
Thank you very much
Thomas
Please go through the following link to learn more about aggregates:
https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/e55aaca6-0301-0010-928e-af44060bda32
Also go through the very detailed documentation:
https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/67efb9bb-0601-0010-f7a2-b582e94bcf8a
Regards,
Mahesh
-
Wrong index being chosen-start routine slow
Hi,
We are uploading 6 million-odd records using lookup and aggregation in the start routine of the update rules from an ODS into an InfoCube.
One such lookup is on the source ODS itself and takes the longest. The WHERE clause of the SELECT exactly matches one of the secondary indexes, 020, but that index is not used when the datapack size is 50,000; it is used only when the datapack size is a few hundred records. However, the index that does get used in the case of large data packages is 030, which only partially matches the fields in the WHERE clause.
I have tried the following modifications of the SELECT as two alternatives but neither works :
1) used the Oracle hint INDEX specifically for index 020
2) removed the FOR ALL ENTRIES IN DATA_PACKAGE clause and used a ranges table for selection in the WHERE clause on certain fields
The index 020 has been 'Analysed' and statistics have been created.
Does someone have ideas on why a particular index will not get used in this specific case ?
Best regards
Anuradha
Message was edited by: anuradha govil
Hi,
could you post the code ?
/manfred -
Problem with exception aggregation
Hi forums,
Can any one help in this issue?
We checked the data in the ECOM cube for order 63349312, where it gives 6 records even though we are deleting the adjacent duplicates of the order number in the start routine. (Does the deletion of adjacent duplicates happen at data-packet level instead of across the entire cube load? Because the same order might be present in different packets.)
Hence the data is aggregating for the key figure 'Number of Days' in the ECOM report. But if we check the key figure properties in RSA1, we selected the Exception Aggregation 'First Value' with aggregation reference characteristic 'ZN_CORD' (order number). Still it is aggregating the number-of-days value in the report.
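A small Python sketch (made-up data) of why packet-level deletion of adjacent duplicates leaves duplicates behind when the same order spans packets:

```python
# Deleting adjacent duplicates per data packet misses duplicates that fall
# into different packets; deduplicating the whole load does not.
def dedup_adjacent(rows):
    """Keep a row only if it differs from the previous one (sorted input)."""
    out = []
    for r in rows:
        if not out or out[-1] != r:
            out.append(r)
    return out

orders = [63349312] * 6                            # same order, six records
packets = [orders[0:2], orders[2:4], orders[4:6]]  # three packets of two

per_packet = [o for p in packets for o in dedup_adjacent(sorted(p))]
print(len(per_packet))                      # 3 -> one survivor per packet
print(len(dedup_adjacent(sorted(orders))))  # 1 -> dedup across the whole load
```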
Regards,
Charan.
Hi Rohit,
We are doing the same procedure as you mentioned. We assigned the order number as the reference characteristic for the key figure No. of Days at InfoObject level. But this is not working at report level.
Order number ZCUDATE ZN_GSTRP ZN_CORDKF ZN_DAYSKF
63349312 18.01.2009 01.10.2008 1,000 109
63349312 18.01.2009 01.10.2008 1,000 109
63349312 18.01.2009 01.10.2008 1,000 109
63349312 18.01.2009 01.10.2008 1,000 109
63349312 18.01.2009 01.10.2008 1,000 109
63349312 18.01.2009 01.10.2008 1,000 109
I want the output as
Order number ZCUDATE ZN_GSTRP ZN_CORDKF ZN_DAYSKF
63349312 18.01.2009 01.10.2008 1,000 109
but it is showing as
Order number ZCUDATE ZN_GSTRP ZN_CORDKF ZN_DAYSKF
63349312 18.01.2009 01.10.2008 1,000 676
-
Query rewrites with Nested materialized views with different aggregations
Platform used : Oracle 11g.
Here is a simple fact table (with measures m1, m2) and dimensions (a) Location, (b) Calendar and (c) Product. The business problem is that the aggregation operators for measures m1 and m2 are different along the Location dimension and the Calendar dimension. The intention is to preaggregate the measures for a product along the Calendar dimension and the Location dimension and store them as materialized views.
The direct option is to define a materialized view with inline queries (because of the different aggregation operators, it is not possible to write a query without an inline query). http://download-uk.oracle.com/docs/cd/B28359_01/server.111/b28313/qradv.htm#BABEAJBF documents the limitation that this works only for 'Text match' and 'Equivalent queries', and that is too limiting.
So I decided to have nested materialized views, with the first view having just joins (my_dim_mvw_joins), the second view having aggregations along the Calendar dimension (my_dim_mvw_calendar), and the third view having aggregations along the Location dimension (my_dim_mvw_location). Obviously I do not want the query I fire to know about the materialized views, so I fire it against the fact table. I see that the fired query (which needs aggregations along both Calendar and Location) is rewritten with just the second materialized view but not the third. (I had set QUERY_REWRITE_INTEGRITY to TRUSTED.)
Wanted to know whether there are limitations on query rewrite with nested materialized views? Thanks
(Have given a simple testable example below. Pls ignore the values given in 'CALENDAR_IDs', 'PRODUCT_IDs' etc as they are the same for all the queries)
-- Calendar hierarchy table
CREATE TABLE CALENDAR_HIERARCHY_TREE
( "CALENDAR_ID" NUMBER(5,0) NOT NULL ENABLE,
"HIERARCHY1_ID" NUMBER(5,0),
"HIERARCHY2_ID" NUMBER(5,0),
"HIERARCHY3_ID" NUMBER(5,0),
"HIERARCHY4_ID" NUMBER(5,0),
CONSTRAINT "CALENDAR_HIERARCHY_TREE_PK" PRIMARY KEY ("CALENDAR_ID")
);
-- Location hierarchy table
CREATE TABLE LOCATION_HIERARCHY_TREE
( "LOCATION_ID" NUMBER(3,0) NOT NULL ENABLE,
"HIERARCHY1_ID" NUMBER(3,0),
"HIERARCHY2_ID" NUMBER(3,0),
"HIERARCHY3_ID" NUMBER(3,0),
"HIERARCHY4_ID" NUMBER(3,0),
CONSTRAINT "LOCATION_HIERARCHY_TREE_PK" PRIMARY KEY ("LOCATION_ID")
);
-- Product hierarchy table
CREATE TABLE PRODUCT_HIERARCHY_TREE
( "PRODUCT_ID" NUMBER(3,0) NOT NULL ENABLE,
"HIERARCHY1_ID" NUMBER(3,0),
"HIERARCHY2_ID" NUMBER(3,0),
"HIERARCHY3_ID" NUMBER(3,0),
"HIERARCHY4_ID" NUMBER(3,0),
"HIERARCHY5_ID" NUMBER(3,0),
"HIERARCHY6_ID" NUMBER(3,0),
CONSTRAINT "PRODUCT_HIERARCHY_TREE_PK" PRIMARY KEY ("PRODUCT_ID")
);
-- Fact table
CREATE TABLE RETAILER_SALES_TBL
( "PRODUCT_ID" NUMBER,
"PRODUCT_KEY" VARCHAR2(50 BYTE),
"PLAN_ID" NUMBER,
"PLAN_PERIOD_ID" NUMBER,
"PERIOD_ID" NUMBER(5,0),
"M1" NUMBER,
"M2" NUMBER,
"M3" NUMBER,
"M4" NUMBER,
"M5" NUMBER,
"M6" NUMBER,
"M7" NUMBER,
"M8" NUMBER,
"LOCATION_ID" NUMBER(3,0),
"M9" NUMBER,
CONSTRAINT "RETAILER_SALES_TBL_LOCATI_FK1" FOREIGN KEY ("LOCATION_ID")
REFERENCES LOCATION_HIERARCHY_TREE ("LOCATION_ID") ENABLE,
CONSTRAINT "RETAILER_SALES_TBL_PRODUC_FK1" FOREIGN KEY ("PRODUCT_ID")
REFERENCES PRODUCT_HIERARCHY_TREE ("PRODUCT_ID") ENABLE,
CONSTRAINT "RETAILER_SALES_TBL_CALEND_FK1" FOREIGN KEY ("PERIOD_ID")
REFERENCES CALENDAR_HIERARCHY_TREE ("CALENDAR_ID") ENABLE
);
-- Location dimension definition to promote query rewrite
create DIMENSION LOCATION_DIM
LEVEL CHAIN IS LOCATION_HIERARCHY_TREE.HIERARCHY1_ID
LEVEL CONSUMER_SEGMENT IS LOCATION_HIERARCHY_TREE.HIERARCHY3_ID
LEVEL STORE IS LOCATION_HIERARCHY_TREE.LOCATION_ID
LEVEL TRADING_AREA IS LOCATION_HIERARCHY_TREE.HIERARCHY2_ID
HIERARCHY PROD_ROLLUP (
STORE CHILD OF
CONSUMER_SEGMENT CHILD OF
TRADING_AREA CHILD OF
CHAIN
);
-- Calendar dimension definition
create DIMENSION CALENDAR_DIM
LEVEL MONTH IS CALENDAR_HIERARCHY_TREE.HIERARCHY3_ID
LEVEL QUARTER IS CALENDAR_HIERARCHY_TREE.HIERARCHY2_ID
LEVEL WEEK IS CALENDAR_HIERARCHY_TREE.CALENDAR_ID
LEVEL YEAR IS CALENDAR_HIERARCHY_TREE.HIERARCHY1_ID
HIERARCHY CALENDAR_ROLLUP (
WEEK CHILD OF
MONTH CHILD OF
QUARTER CHILD OF
YEAR
);
-- Materialized view with just joins needed for other views
CREATE MATERIALIZED VIEW my_dim_mvw_joins build immediate refresh complete enable query rewrite as
select product_id, lht.HIERARCHY1_ID, lht.HIERARCHY2_ID, lht.HIERARCHY3_ID, lht.location_id, cht.HIERARCHY1_ID year,
cht.HIERARCHY2_ID quarter, cht.HIERARCHY3_ID month, cht.calendar_id week, m1, m3, m7, m9
from retailer_sales_tbl RS, calendar_hierarchy_tree cht, location_hierarchy_tree lht
WHERE RS.period_id = cht.CALENDAR_ID
and RS.location_id = lht.location_id
and cht.CALENDAR_ID in (10,236,237,238,239,608,609,610,611,612,613,614,615,616,617,618,619,1426,1427,1428,1429,1430,1431,1432,1433,1434,1435,1436,1437,1438,1439,1440,1441,1442,1443,1444,1445,1446,1447,1448,1449,1450,1451,1452,1453,1454,1455,1456,1457,1458,1459,1460,1461,1462,1463,1464,1465,1466,1467,1468,1469,1470,1471,1472,1473,1474,1475,1476,1477)
AND product_id IN (5, 6, 7, 8, 11, 12, 13, 14, 17, 18, 19, 20)
AND lht.location_id IN (2, 3, 11, 12, 13, 14, 15, 4, 16, 17, 18, 19, 20);
-- Materialized view which aggregate along calendar dimension
CREATE MATERIALIZED VIEW my_dim_mvw_calendar build immediate refresh complete enable query rewrite as
select product_id, HIERARCHY1_ID , HIERARCHY2_ID , HIERARCHY3_ID ,location_id, year, quarter, month, week,
sum(m1) m1_total, sum(m3) m3_total, sum(m7) m7_total, sum(m9) m9_total,
GROUPING_ID(product_id, location_id, year, quarter, month, week) dim_mvw_gid
from my_dim_mvw_joins
GROUP BY product_id, HIERARCHY1_ID , HIERARCHY2_ID , HIERARCHY3_ID , location_id,
rollup (year, quarter, month, week);
-- Materialized view which aggregate along Location dimension
CREATE MATERIALIZED VIEW my_dim_mvw_location build immediate refresh complete enable query rewrite as
select product_id, year, quarter, month, week, HIERARCHY1_ID, HIERARCHY2_ID, HIERARCHY3_ID, location_id,
sum(m1_total) m1_total_1, sum(m3_total) m3_total_1, sum(m7_total) m7_total_1, sum(m9_total) m9_total_1,
GROUPING_ID(product_id, HIERARCHY1_ID, HIERARCHY2_ID, HIERARCHY3_ID, location_id, year, quarter, month, week) dim_mvw_gid
from my_dim_mvw_calendar
GROUP BY product_id, year, quarter, month, week,
rollup (HIERARCHY1_ID, HIERARCHY2_ID, HIERARCHY3_ID, location_id);
-- SQL Query Fired (for simplicity SUM is used as the aggregation operator for both, but they will be different)
select product_id, year, HIERARCHY1_ID, HIERARCHY2_ID,
sum(m1_total) m1_total_1, sum(m3_total) m3_total_1, sum(m7_total) m7_total_1, sum(m9_total) m9_total_1
from
(
select product_id, HIERARCHY1_ID, HIERARCHY2_ID, year,
sum(m1) m1_total, sum(m3) m3_total, sum(m7) m7_total, sum(m9) m9_total
from
(
select product_id, lht.HIERARCHY1_ID, lht.HIERARCHY2_ID, lht.HIERARCHY3_ID, lht.location_id, cht.HIERARCHY1_ID year, cht.HIERARCHY2_ID quarter, cht.HIERARCHY3_ID month, cht.calendar_id week, m1, m3, m7, m9
from
retailer_sales_tbl RS, calendar_hierarchy_tree cht, location_hierarchy_tree lht
WHERE RS.period_id = cht.CALENDAR_ID
and RS.location_id = lht.location_id
and cht.CALENDAR_ID in (10,236,237,238,239,608,609,610,611,612,613,614,615,616,617,618,619,1426,1427,1428,1429,1430,1431,1432,1433,1434,1435,1436,1437,1438,1439,1440,1441,1442,1443,1444,1445,1446,1447,1448,1449,1450,1451,1452,1453,1454,1455,1456,1457,1458,1459,1460,1461,1462,1463,1464,1465,1466,1467,1468,1469,1470,1471,1472,1473,1474,1475,1476,1477)
AND product_id IN (5, 6, 7, 8, 11, 12, 13, 14, 17, 18, 19, 20)
AND lht.location_id IN (2, 3, 11, 12, 13, 14, 15, 4, 16, 17, 18, 19, 20)
)
GROUP BY product_id, HIERARCHY1_ID, HIERARCHY2_ID, HIERARCHY3_ID, location_id, year
) sales_time
GROUP BY product_id, year, HIERARCHY1_ID, HIERARCHY2_ID;
This query rewrites only with my_dim_mvw_calendar (as seen in the query plan and EXPLAIN_MVIEW). But we would like it to use my_dim_mvw_location, as that has aggregations for both dimensions.
blackhole001 wrote:
Hi all,
I'm trying to make my programmer's life easier by creating a database view for them to query the data, so they don't have to worry about joining tables.
This sounds like a pretty horrible idea. I say this because you will eventually end up with programmers that know nothing about your data model and how to properly interact with it.
Additionally, what you will get is a developer that takes one of your views and sees that, of the 20 columns in it, it has 4 that he needs. If all those 4 columns come from a simple 2-table join but the view has 8 tables, you're wasting a tonne of resources by using the view (and heaven forbid they have to join that view to another view to get 4 of the 20 columns from that other view as well).
Ideally you'd write stored routines that satisfy exactly what is required (if you are the database resource and these other programmers are java, .net, etc... based) and the front end developers would call those routines customized for an exact purpose.
Creating views is not bad, but it's by no means a proper solution to having developers not learn or understand SQL and/or the data model.