Flat aggregates
Hi,
What is the concept of flat aggregates, and what is its relationship with line item / high cardinality dimensions?
hi
If you create an aggregate with 14 or fewer characteristics, that aggregate is called a flat aggregate.
If an aggregate has fewer than 15 components, BW 3.x automatically puts each component into a separate dimension that is marked as a line item dimension (except the package and unit dimensions); these aggregates are called flat aggregates.
Note: flat aggregates can be rolled up on the database server (without loading data into the application server).
Simply put, a flat aggregate is one where each characteristic is placed in a separate dimension marked as a 'line item' dimension; in other words, every dimension contains only one characteristic (except the package and unit dimensions).
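As a sketch of the physical difference this makes, here is a toy model in Python (illustrative only; the table layouts, field names, and values are invented, not actual BW structures). In a normal dimension the fact table stores a DIM ID that must be joined to a dimension table to reach the characteristic's SID; in a line item dimension the SID sits in the fact table itself:

```python
# Illustrative sketch (not actual BW code): how a line item dimension
# removes one level of indirection. All names and values are invented.

# Normal dimension: the fact table stores a DIMID that must be joined
# to a dimension table to reach the characteristic's SID.
dim_table = {1001: {"customer_sid": 42, "region_sid": 7}}  # DIMID -> SIDs
fact_row_normal = {"dimid": 1001, "amount": 500}

def resolve_normal(fact_row):
    # Two lookups: fact row -> dimension table -> SID
    return dim_table[fact_row["dimid"]]["customer_sid"]

# Line item dimension (flat aggregate): the SID is written directly
# into the fact table, so the dimension-table join disappears.
fact_row_flat = {"customer_sid": 42, "amount": 500}

def resolve_flat(fact_row):
    # One lookup: the SID is already in the fact row
    return fact_row["customer_sid"]

print(resolve_normal(fact_row_normal) == resolve_flat(fact_row_flat))  # True
```

Both paths reach the same SID; the flat layout just skips the join, which is also why such aggregates can be rolled up entirely on the database server.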
Check this link on aggregates and line item / high cardinality dimensions:
http://help.sap.com/saphelp_nw04/helpdata/en/a7/d50f395fc8cb7fe10000000a11402f/frameset.htm
Similar Messages
-
What is meant by flat aggregates?
hi,
Can anyone explain what is meant by flat aggregates, with an example?
Thanks in advance
hari

hi hari,
https://www.sdn.sap.com/irj/sdn/profile?userid=2297751
regards
KP
Assign points if helpful -
What is Flat Aggregate ?
Hi,
I have checked many forums but did not find a correct definition.
Can someone please share the definition, advantages, business scenario, and technical details?
Thanks,
Dinesh

Nice quiz. Keep this running, Dinesh, the Derek O'Brien of SDN.
Dinesh Raju wrote:
Thanks!
>
> Here the correct answer:
>
> What is flat aggregate?
>
> If 14 or fewer characteristics are included in the aggregate, the BI system does not create real dimensions; line item dimensions are created instead. In the case of a line item dimension, the dimension table is eliminated and the characteristic InfoObject's SID is written directly to the fact table. When this happens, the aggregate is called a flat aggregate.
>
> If 15 or more characteristics are included in an aggregate, the BI system may proceed in two ways:
>
> 1. If two or more characteristics come from one dimension in the InfoCube, the DIM ID of the InfoCube is stored as a key in the fact table.
> 2. If only one characteristic comes from one dimension in the InfoCube, the SID is stored as a key in the fact table. A line-item dimension is used here.
>
> BR
> Dinesh R -
What are flat aggregates and normal aggregates?
hi experts,
I could not find the meaning of flat aggregates, or the business case in which they are applied.
thanks
vijay

Please search SDN.
-
What is "iTea: Aggregate"?
On occasion the volume keys on my keyboard are disabled, so I go into
System Preferences > Sound, and lo and behold, under the Sound Output tab
there are about 10 duplicate entries labeled "iTea: Aggregate", and one is highlighted.
This is the name of the device, but there is NO port indicated.
I switch it back to "Line Out", and my volume keys are functional again.
I don't know how they got there in the first place, and I haven't been able to
determine a cause/effect relationship, as it appears the switching occurs at
random.
Where did these items come from, and how do I eliminate them?
Many thanks! -
Please help me out with some fundamentals in BW
Hello,
Please guide me regarding the below mentioned questions.
1. What is the key date in the Query Designer?
2. When do we perform an attribute change run?
For example, once the master data is loaded, do we perform the attribute change run and then load the transactional data?
3. What is the disadvantage of using aggregates?
4. What is the full repair option?
Please help me out with these questions.

Hi,
Repair full request :
If you flag a request in full update mode as a repair request, it can be updated into all data targets, even if they already contain data from initialization runs or deltas for this DataSource / source system combination with overlapping selections.
Consequently, a repair request can be updated at any time without checking each ODS object. The system allows loading into an ODS object via a repair request without checking the data for overlaps or request sequencing, because you can also delete selectively from an ODS object without such checks.
Note that posting such requests can lead to duplicate data records in the data target.
Hierarchy/attribute change run after loading master data:
When hierarchies or attributes of characteristics change, the aggregates affected by the change can be adjusted manually or recalculated automatically in process chains.
Aggregates:
Aggregates are materialized, pre-aggregated views on InfoCube fact table data. They are independent structures where summary data is stored within separate transparent InfoCubes. The purpose of aggregates is purely to accelerate the response time of queries by reducing the amount of data that must be read in the database for a given query navigation step. In the best case, the records presented in the report will exactly match the records that were read from the database.
Aggregates can only be defined on basic InfoCubes for dimension characteristics, navigational attributes (time-dependent and time-independent) and on hierarchy levels (for time-dependent and time-independent hierarchy structures). Aggregates may not be created on ODS objects, MultiProviders or Remote Cubes.
Queries may be automatically split up into several subqueries, e.g for individual restricted key figures (restricted key figures sales 2001 and sales 2002). Each subquery can use one aggregate; hence, one query can involve several aggregates.
If an aggregate has less than 15 components, BW 3.x puts each component automatically into a separate dimension that will be marked as line item (except package and unit dimension); these aggregates are called flat aggregates. Hence, dimension tables are omitted and SID tables referenced directly. Flat aggregates can be rolled up on the DB server (i.e., without loading data into the application server). This accelerates the roll up (hence the upload) process.
Disadvantage : The more aggregates exist, the more time-consuming is the roll-up process and thus the data loading process; the change run is also affected.
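To illustrate the core idea in the description above, here is a minimal Python sketch (with invented data) of a materialized, pre-aggregated view: a customer-level query can be answered from the rolled-up rows instead of rescanning the full fact table:

```python
# Minimal sketch (invented data) of why an aggregate speeds up a query:
# the same totals are read from far fewer pre-summarized rows.
from collections import defaultdict

# Fact table rows: (customer, material, month, sales)
fact = [
    ("C1", "M1", "2024-01", 100),
    ("C1", "M2", "2024-01", 150),
    ("C2", "M1", "2024-02", 200),
    ("C2", "M2", "2024-02", 250),
]

def build_aggregate(rows):
    """Roll the fact table up to customer level (drops material and month)."""
    agg = defaultdict(int)
    for customer, _material, _month, sales in rows:
        agg[customer] += sales
    return dict(agg)

aggregate = build_aggregate(fact)  # 2 rows instead of 4

# A "sales by customer" query now reads the aggregate, not the fact table.
assert aggregate == {"C1": 250, "C2": 450}
```

The trade-off named in the disadvantage above shows up here too: every aggregate built this way must be refreshed (rolled up) whenever new fact rows arrive.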
Hope this info Helps.
Thanks,
Ramoji. -
Hello Gurus,
I have some misunderstandings and confusions. I know what partitioning is done on and which time characteristics are required. I also know how to do compression. Please read on.
Every time I read about partitioning, it has to do with the E table.
My questions are:
1. Is the F fact table partitioned before partitioning by time characteristics? That is, do request IDs partition the F fact table?
2. Time characteristic partitioning applies only to the E fact table, but to have data in the E fact table, I understand you have to compress. So does compression need to follow partitioning?
I mean: partition the empty cube > load data > compress?
3. Is partitioning possible on the F fact table? Will it help performance?
4. How do you partition aggregates if the cube is not partitioned?
Please answer these concerns for me. Points shall be assigned.
Thanks.

1) The F-table is partitioned by request ID, so that individual requests can be added and deleted easily.
2) You need to set up partitioning before the cube contains data. Then you load data into the F-table and compress those requests, which puts them into the individual time-partition buckets.
3) As mentioned earlier, partitioning of the F-table is done automatically by BW; it is partitioned by request ID.
4) I don't think you can control the database design of aggregates; they are handled by BW. Aggregates can be flat aggregates, they can share dimension tables with cubes, and there are some other idiosyncrasies that are all handled internally. I wouldn't worry about the aggregates; focus on your InfoCube design first.
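A rough Python model of points 1 and 2 above (the structures and request names are invented, not real BW tables): data sits in the F-table per request ID, and compression moves it into time-based E-table buckets, losing the request identity in the process:

```python
# Rough illustration (invented structures) of the answer above: the
# F-table holds data per request ID; compression moves it into the
# E-table, where rows land in time-based partition buckets.
from collections import defaultdict

# F-table: request_id -> list of (calmonth, amount)
f_table = {
    "REQ1": [("2024-01", 100), ("2024-02", 50)],
    "REQ2": [("2024-01", 30)],
}

e_table = defaultdict(int)  # partition bucket keyed by calmonth -> amount

def compress(request_id):
    """Move one request from the F-table into the E-table buckets."""
    for calmonth, amount in f_table.pop(request_id):
        e_table[calmonth] += amount  # request identity is lost here

compress("REQ1")
compress("REQ2")
assert dict(e_table) == {"2024-01": 130, "2024-02": 50}
assert f_table == {}  # compressed requests can no longer be deleted by ID
```

This is why compression must follow loading: rows enter by request, and only compression sorts them into the time partitions.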
hope that helps,
-John -
Key Figure Aggregate in BEx Query
Hi Gurus
I am using BI 7.0, but the 3.5x BEx tools.
I am loading 6 fields from a flat file, containing data for tickets. I have created an InfoObject that counts the number of tickets; no problem there. I also have key figures to which I am assigning the same value for all characteristics: 10,30 per ticket.
The Key figures are (Sum) with a Summation aggregation type.
In my query, the 10,30 aggregates up based on the number of tickets (characteristic) that are available.
Question: how do I stop my key figures from summing the values of 10,30 across the characteristic? I want only 10,30 to be shown regardless of the number of tickets (a constant value applied to the key figure).
Should I change my aggregation type? If so, to what? I see a number of options, such as Last Number, No Aggregation, etc.
Thank you

I found a solution to my requirement.
-
Key Figure Aggregate in BEx Query based on Characteristic
Hi Gurus
I am using BI 7.0, but the 3.5x BEx tools.
I am loading 6 fields from a flat file, containing data for tickets. I have created an InfoObject that counts the number of tickets; no problem there. I also have key figures to which I am assigning the same value for all characteristics: 10,30 per ticket.
The Key figures are (Sum) with a Summation aggregation type.
In my query, the 10,30 aggregates up based on the number of tickets (characteristic) that are available.
Question: how do I stop my key figures from summing the values of 10,30 across the characteristic? I want only 10,30 to be shown regardless of the number of tickets (a constant value applied to the key figure).
Should I change my aggregation type? If so, to what? I see a number of options, such as Last Number, No Aggregation, etc. Or can I override this in the query properties?
Thank you

Hi Client,
I would also like to know how to avoid aggregation of a key figure in BEx 3.5.
Thanks
GS -
Part 2: Flat files and Business Contents: Any issues with this scenario?
I would appreciate some clarification on points made in response to my previous post, "Flat files and Business Contents: Any issues with this scenario?"
1.
"...you'd better analyze those cubes for data redundancy and presence of data you'll never use." I would appreciate clarification on the type of analysis you are referring to; examples would help.
2.
"If you want to combine several found IOs in your custom dataprovider, then again you must know (or figure out) relationships between these IOs." I would appreciate clarification on the type of relationship you are referring to; examples would help.
3.
I am a bit confused by "...include into ODS structure ALL fields required for the cube," given that you also noted "...except navigational attributes and chars and KFs that are going to be determined in TRs or URs."
If you exclude all of these, haven't you excluded all the fields you included in the ODS structure?
4.
"Consider carefully the ODS key fields selection. Their combination should not allow data aggregation that you don't need."
I may be missing the point here. I understand that you need to select, as key fields, the fields that form the unique ID for the records in the ODS (please correct me if I am wrong about the purpose of the key fields), but I don't understand the mention of "aggregation" in this context.
Thanks in advance

Hello,
I will try to give some explanation based on the previous answer.
1. Data redundancy: make sure you do not store the same information more than once. It does not make sense to have data redundancy across your data warehouse, and it also has a cost. Store the information once if that gives you everything you need.
2. Whenever you build your data provider, which consists of InfoObjects, you need to know what kind of relationship (1:1, 1:n, n:m, and so on) exists between them. That will help you when you model your InfoProvider. For example, I would never put InfoObjects with an n:m relationship together in the same dimension if you expect high cardinality. Sometimes one InfoObject can be an attribute of another, depending on the relationship; for example, a business partner and his address. Usually this is a 1:1 relationship, in which case the address is an attribute of the business partner and should be stored in the master data rather than in the data provider.
3. Sometimes when you load from an ODS into a cube, you can fill some InfoObjects (which are in the InfoCube but not in the ODS) through an ABAP routine in the transfer rules or the start routine of the update rules. It does not make sense to include these InfoObjects in the ODS, as they would be NULL or blank there (the default value). This can happen when, for example, you first load Price and Quantity into the ODS and then calculate the sell value (Price * Quantity) later. Of course, this could also be done in BEx; it depends on other factors (performance, loading, sizing).
4. Make sure that the key definition of the ODS matches the data; otherwise you will aggregate the data, and if you later need the detail, it will be missing.
For example: Customer - Product - Distribution Channel - Sell Price.
If each customer can buy each product through any distribution channel, then when you build your ODS, Customer, Product, and Distribution Channel must all be key fields; otherwise (with only Customer and Product as keys, for example) you will lose the detail for Distribution Channel.
I hope everything is clear.
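Point 4 can be illustrated with a small Python sketch (invented records; the overwrite-on-duplicate-key behaviour is a simplified model of an ODS load): leaving Distribution Channel out of the key collapses rows that differ only in that field:

```python
# Sketch (invented records) of point 4 above: if Distribution Channel is
# left out of the ODS key, records that differ only in that field collapse
# into one row and the detail is lost.

rows = [
    # (customer, product, dist_chan, sell_price)
    ("CUST1", "PROD1", "DC10", 100),
    ("CUST1", "PROD1", "DC20", 120),
]

def load_ods(records, key_fields):
    """Overwrite-on-key model: later records replace earlier ones
    that share the same key-field combination."""
    ods = {}
    for customer, product, dist_chan, price in records:
        full = {"customer": customer, "product": product,
                "dist_chan": dist_chan, "price": price}
        key = tuple(full[f] for f in key_fields)
        ods[key] = full
    return ods

# Key without dist_chan: the DC10 record is overwritten -> 1 row survives.
assert len(load_ods(rows, ["customer", "product"])) == 1
# Key including dist_chan: both rows survive.
assert len(load_ods(rows, ["customer", "product", "dist_chan"])) == 2
```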
Regards
Mike -
Data loaded all level in a hierarchy and need to aggregate
I am relatively new to Essbase and I am having problems with the aggregation of a cube.
Cube outline
Compute_Date (dense)
20101010 ~
20101011 ~
20101012 ~
Scenario (dense)
S1 ~
S2 ~
S3 ~
S4 ~
Portfolio (sparse)
F1 +
F11 +
F111 +
F112 +
F113 +
F12 +
F121 +
F122 +
F13 +
F131 +
F132 +
Instrument (sparse)
I1 +
I2 +
I3 +
I4 +
I5 +
Accounts (dense)
AGGPNL ~
PNL ~
Portfolio is a ragged hierarchy
Scenario is a flat hierarchy
Instrument is a flat hierarchy
PNL values are loaded for instruments at different points in the portfolio hierarchy.
I then want to aggregate the PNL values up the portfolio hierarchy into AGGPNL; the loaded PNL values should remain unchanged. This is not working.
I have tried defining the following formula on AGGPNL, but it is not working:
IF (@ISLEV("portfolio",0))
    pnl;
ELSE
    pnl + @SUMRANGE("pnl",@RELATIVE(@CURRMBR("portfolio"),@CURGEN("portfolio")+1));
ENDIF;
using a calc script
AGG (instrument);
AGGPNL;
Having searched for a solution, I have seen that Essbase does implicit sharing when a parent has a single child. I can disable this, but I don't think it is the sole cause of my issue. The children of F11 are aggregated, but the value already present at F11 is overwritten; the value loaded at F11 is ignored in the aggregation.

^^^ That's the way Essbase works.
How about something like this:
F1 +
===F11 +
======F111 +
======F112 +
======F113 +
======F11A +
===F12 +
======F121 +
======F122 +
===F13 +
======F131 +
======F132 +
Value it like this:
F111 = 1
F112 = 2
F113 = 3
F11A = 4
Then F11 = 1 + 2 + 3 + 4 = 10.
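A toy Python model of this technique (not Essbase itself; the F11A member and values follow the example above): the value formerly loaded at F11 is moved to a level-0 child, so a plain bottom-up roll-up produces the intended total instead of overwriting it:

```python
# Simple model (not Essbase itself) of the technique above: move the
# value loaded at F11 to a new level-0 child (F11A) so a plain bottom-up
# aggregation produces the expected parent total instead of overwriting it.

children = {"F11": ["F111", "F112", "F113", "F11A"]}
level0 = {"F111": 1, "F112": 2, "F113": 3, "F11A": 4}  # F11A holds the old F11 value

def aggregate(member):
    """Bottom-up roll-up: a parent is the sum of its children."""
    if member in children:
        return sum(aggregate(child) for child in children[member])
    return level0.get(member, 0)

assert aggregate("F11") == 10  # 1 + 2 + 3 + 4, as in the example above
```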
Loading at upper levels is something I try to avoid whenever possible. The technique used above is incredibly common, practically universal, as it allows the group-level value to be loaded along with the detail and aggregated up correctly. Yes, you can load to upper-level members, but you have hit upon why it isn't done all that frequently.
NB: what you are doing is only possible in BSO cubes; ASO cube data must be at level 0.
Regards,
Cameron Lackpour -
Duplicate records in flat file extracted using openhub
Hi folks
I am extracting data from the cube to opnhub into a flat file, I see duplicate records in the file.
I am doing a full load to a flat file
I cannot have technical key because I am using a flat file.
Poonam

I am using aggregates (in the DTP there is an option to use aggregates), and the aggregates are compressed, yet I am still facing this issue.
Poonam -
Building a RULES file to load shared members in an Aggregate Storage outline
Hi there
I posted this yesterday (sorry, with a somewhat different subject description). I am trying to create an alternate hierarchy containing shared members, using a load rule and a flat file, in an Aggregate Storage outline. One response was to remove the "allow moves" option and use a parent/child build in the load rule. I tried that and it did not work. I was pointed to a section in the Essbase guide (which is about as clear as mud) and still cannot get this to work. First of all: can you do this with an ASO outline? If so, how? I tried a simple six-line flat file and recreated the load rule based on the above recommendations, and it will not build the shared members. Can someone out there help?
Thanks

Here is an example in the simplest form.
Create an ASO database (with "duplicate member names" not selected).
Create the Product dimension member, set to "Multiple Hierarchies Enabled" (though it would probably also work if dynamic).
Create a load rule, set as a parent/child dimension build for Product, and make sure "allow moves" is not ticked.
Data file
Parent,Child
100,100-20
200,200-20
Diet,100-20
Assign the field properties in the rule as Parent/Child.
Load the data; 100-20 will be created as a shared member.
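As a toy model of that behaviour (plain Python, not an actual load rule; the records match the data file above): in a parent/child build with "allow moves" off, a child that reappears under a second parent can be treated as shared rather than moved:

```python
# Toy model (not an actual load rule) of the example above: in a
# parent/child build with "allow moves" off, a child that appears under
# a second parent is created as a shared member instead of being moved.

records = [("100", "100-20"), ("200", "200-20"), ("Diet", "100-20")]

primary_parent = {}   # child -> first parent seen (the real member)
shared_under = []     # (parent, child) pairs stored as shared members

for parent, child in records:
    if child not in primary_parent:
        primary_parent[child] = parent          # first occurrence: real member
    else:
        shared_under.append((parent, child))    # repeat: shared, not moved

assert primary_parent["100-20"] == "100"        # stays under its first parent
assert ("Diet", "100-20") in shared_under       # second occurrence is shared
```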
Cheers
John
http://john-goodwin.blogspot.com/ -
Hi,
I'm trying to create aggregates on cubes. I right-click and go under "Maintain Aggregates". I choose the option for the system to generate them for me; when I do that, a window pops up, "Specify Statistics Data Evaluation". What dates am I supposed to put in there? For "From" I am putting today's date; for "To" do I put 12/31/9999? What is this for?
Also, what can I do to improve a DSO's performance?
Thanks

Hi,
Do you have secondary indexes on your ODS? As mentioned above, that would be the best way.
I think you want to improve DSO performance in order to improve query performance. If so, a good way to proceed would be to base your reporting on InfoCubes, or on MultiProviders built on those InfoCubes.
This will involve some development work, though: you will have to move the queries from the ODS to the cubes, deal with workbooks, and so on.
Then you can also create aggregates, because you cannot create aggregates on an ODS.
Check OSS Note 444287.
Using an ODS is a performance hit in terms of activation time and report execution time (tabular reporting). It is better to load the data from these ODS objects into individual cubes, create a MultiProvider over them, and report from the MultiProvider, since reporting from a multidimensional structure is faster than reporting from a flat table.
Also check these links:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/afbad390-0201-0010-daa4-9ef0168d41b6
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
Regards,
Debjani
Edited by: Debjani Mukherjee on Sep 26, 2008 8:07 AM -
Hi all.
I have a flat file data source with detailed records:
Field1 Field2 Amount
1 22 100
1 23 200
2 24 220
3 25 150
3 26 600
I want to get aggregated results by Field1 in my DSO:
Field1 Amount
1 300
2 220
3 750
How and where can I do this aggregation during the data load process into the DSO?

Hi,
Go to the transformation, go to the Amount field, and choose Summation as the aggregation type. The values will then be added up automatically.
Regards
Dina
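Sketched in plain Python with the sample data above (conceptually, the summation setting in the transformation behaves like this group-by on the DSO key):

```python
# The transformation rule described above (aggregation = summation),
# sketched in plain Python with the poster's sample data.
from collections import defaultdict

source = [  # (field1, field2, amount)
    (1, 22, 100), (1, 23, 200), (2, 24, 220), (3, 25, 150), (3, 26, 600),
]

dso = defaultdict(int)  # keyed by field1 only, so amounts sum per field1
for field1, _field2, amount in source:
    dso[field1] += amount

assert dict(dso) == {1: 300, 2: 220, 3: 750}
```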