Roll Up Aggregates
Could someone please explain what it means to "roll up aggregates"? I've read through some documentation and it doesn't quite make sense to me. Could someone provide an analogy or an example that better illustrates what this term means in the BW world?
Thank you!
Hi Shelly,
Aggregates make it possible to access InfoCube data quickly in reporting. Broadly speaking, repeated or only slightly changed query data can be retrieved from an aggregate instead of the InfoCube. Aggregates serve, in a similar way to database indexes, to improve performance.
It is recommended to create aggregates when the execution and navigation of a group of queries leads to delays, when you want to speed up the execution and navigation of a specific query, when you often use attributes in queries, or when you want to speed up reporting on characteristic hierarchies by aggregating specific hierarchy levels.
Now, coming to rollup: if new data packages (requests) are loaded into the InfoCube, they are not immediately available for reporting via an aggregate. To provide the aggregate with the new data from the InfoCube, you must first load the data into the aggregate tables at a time you can set. This process is known as a rollup.
A rollup presupposes that new data packages (requests) have been loaded into the InfoCube, and that aggregates for this InfoCube have already been activated and filled with data.
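To make the concept concrete, here is a minimal sketch (in plain Python, not SAP code — all names are illustrative) of the idea above: an aggregate is a pre-summarized copy of InfoCube data, and a newly loaded request only becomes visible in the aggregate once it is rolled up.

```python
# Illustrative sketch of the rollup idea. An "aggregate" is a pre-summarized
# copy of InfoCube data; newly loaded requests are only visible in the
# aggregate after they are rolled up. Names are hypothetical, not SAP APIs.

from collections import defaultdict

cube = []                       # fact rows: (request_id, region, amount)
aggregate = defaultdict(float)  # data summarized by region
rolled_up_to = 0                # highest request id already in the aggregate

def load_request(request_id, rows):
    """Load a new data package (request) into the cube only."""
    for region, amount in rows:
        cube.append((request_id, region, amount))

def roll_up():
    """Add all not-yet-rolled-up requests to the aggregate."""
    global rolled_up_to
    for request_id, region, amount in cube:
        if request_id > rolled_up_to:
            aggregate[region] += amount
    rolled_up_to = max((r for r, _, _ in cube), default=rolled_up_to)

load_request(1, [("EMEA", 100.0), ("APAC", 50.0)])
roll_up()
load_request(2, [("EMEA", 25.0)])  # loaded into the cube, not yet rolled up
print(aggregate["EMEA"])           # still 100.0 until the next roll_up()
roll_up()
print(aggregate["EMEA"])           # now 125.0
```

A query hitting the "aggregate" between the load and the rollup would miss request 2, which is exactly why BW hides new requests from aggregate-based reporting until the rollup runs.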
I hope this provides some insight, if not the full picture...
Similar Messages
-
Roll up aggregate performance.
Hi, experts,
we have performance issues with the rollup step; it takes too much time, even though I have already tried to speed it up (implemented note 778709).
Some questions about rollup. Can anyone tell me the detailed steps that run during a rollup job?
1. Why does the rollup take almost the same time whether I roll up 1 request or multiple requests (5 requests)? I would expect rolling up 1 request to be much faster.
2. Sometimes a later rollup job cancels the previous rollup job if it starts while the previous job is still running. Why?
3. Can anyone tell me how to reduce the rollup time?
4. Does compressing requests in the cube reduce the rollup time?
Thanks a lot.
Hi Jie,
The roll-up adds newly loaded transaction data to the existing aggregates. When aggregates are active, new data is not available for reporting until it is rolled up. For information on aggregates, see below. The time spent for the roll-up is determined by the number and the size of the aggregates; if aggregates can be built from other aggregates, they are arranged in an aggregate hierarchy.
Take the following hints into consideration in order to improve the aggregate hierarchy and, thus, the roll-up:
Build up very few basis aggregates out of the underlying InfoCube fact table
Try for summarization ratios of 10 or higher for every aggregate hierarchy level
Find good subsets of data (frequently used)
Build aggregates only on selected hierarchy levels (not all)
Build up aggregates that are neither too specific nor too general; they should serve many different query navigations
Monitor aggregates, remove those aggregates that are not used frequently (except basis aggregates)
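The "summarization ratio of 10 or higher" guideline above can be checked with simple arithmetic: divide the row count of the source (fact table or parent aggregate) by the row count of the aggregate built from it. A small sketch, with made-up row counts:

```python
# Illustrative check of the "summarization ratio >= 10" guideline: the ratio
# of rows in the underlying fact table (or parent aggregate) to rows in the
# aggregate built from it. All row counts below are invented for the example.

def summarization_ratio(source_rows, aggregate_rows):
    return source_rows / aggregate_rows

hierarchy = [
    ("fact table -> basis aggregate",        50_000_000, 2_000_000),
    ("basis aggregate -> monthly aggregate",  2_000_000,   150_000),
]

for name, src, agg in hierarchy:
    ratio = summarization_ratio(src, agg)
    verdict = "ok" if ratio >= 10 else "too specific, reconsider"
    print(f"{name}: ratio {ratio:.1f} ({verdict})")
```

An aggregate with a ratio near 1 is almost as large as its source, so rolling it up costs nearly as much as rebuilding it, with little query-time benefit.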
I hope this information will resolve your doubts. -
DB stat's & roll up job failures
Hi team,
The 0COSTCENTER master and text loads fail with: 0COSTCENTER: Data record 4101 ('100040012071 E '): Version '100040012071 ' is not valid (RSDMD).
Later the same cost center value appeared in transaction data loads, but with a different error (value '100040012071 ' has invalid characters).
The transaction loads were corrected after a couple of days of failures and the data was loaded to the cube successfully. In the process chain after these loads, the DB statistics and rollup jobs execute, but since the manual load of all the corrected data, they have been failing.
In the logs for building DB statistics on PCA_C01B:
Roll up for PCA_C01B terminated.
RSSM_PROCESS_DBSTAT terminated because InfoCube PCA_C01B could not be locked.
In the logs for rolling up aggregates on PCA_C01B:
Performing check & potential updates for status update table.
Rollup is running: data target PCA_C01B from 0000291125 to 0000295022
ABAP/4 processor: UNCAUGHT EXCEPTION
I have checked transaction SM12 while the DB statistics job runs and there is no additional lock. The Basis team has also checked the SM37 job BI_PROCESS_DBSTAT for any lock situations; usually this job runs for around 3 hours before it turns red.
I need to know what is going wrong with the DB statistics and rollup jobs and how I can correct them.
Edited by: Gemita on Jan 9, 2009 11:17 AM
Hi Surendra,
The only error shown by the database test I performed in RSRV, with no suggestion or solution provided, is:
Error /BIC/FPCA_C01B PARTITIONED 263 ( 30 - 50 ]
There are warnings that suggest:
The statistics can be automatically created for some database systems. Start the repair on the initial screen by choosing the symbol for Remove error.
You can also find this functionality in the Administrator Workbench under Maintain InfoCube content.
What do you suggest? -
How to make Attribute Change run alignment & Hierarchy changes in Aggregate
Hello
I want to understand how the attribute change run performs alignment and hierarchy changes in an aggregate.
I posted the same question previously, but there were no good answers and I was not able to understand it clearly.
Suppose there is a process chain XXY that runs the attribute change run for master data 0SPELLING.
Now there is an aggregate TRT that includes:
0SPELLING, Fiscal Period, Purchase Product, Purchase Category.
Please answer the following questions:
1) Process chain XXY only runs the attribute change run alignment for 0SPELLING. Will this process chain automatically perform the change run alignment for 0SPELLING in aggregate TRT? Yes or no?
2) If yes, do we then just need to roll up aggregate TRT after process chain XXY completes?
3) If no, what steps must be done to make sure that aggregate TRT has the new and correct values for 0SPELLING?
Please answer, and correct me if any of my questions are wrong.
For example:
You have 0SPELLING, which has attributes x, y, and z on day 1, with 10 records,
so you build your aggregates on day 1 with those same values.
On day 2 you have new attribute values y, z, s, and d and new hierarchies, so you add new records.
With the data load, the data is loaded in modified version M and is not available for reporting.
When you run the attribute change run, this modified version is activated to version A, the active version.
It also performs the change run alignment for the aggregate, for the new attribute values and new hierarchy values.
Now, in order for this data to be available for reporting, you need to roll up the aggregate:
if you roll up the aggregate before the attribute change run, the new data is not available for reporting;
if you roll up the aggregate after the attribute change run, the data is available for reporting;
if you do not roll up the aggregate, then even though the new data is in the data provider, it will not be available for reporting.
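The ordering described above can be sketched as a small state machine (plain Python, not SAP code — function and variable names are illustrative): modified (M) master data only becomes active (A) after the attribute change run, and new fact data only becomes reportable after the rollup.

```python
# Hypothetical sketch of the ordering above: the attribute change run
# activates master data from version M to version A and realigns the
# aggregate; the rollup makes new transaction data reportable. Illustrative
# names only -- this is not how SAP implements it internally.

master_data = {"version": "A", "attributes": ["x", "y", "z"]}
pending_attributes = None
rolled_up = True   # is the aggregate in sync with the cube?

def load_master_data(new_attributes):
    """Load new attribute values in modified (M) version."""
    global pending_attributes
    pending_attributes = new_attributes
    master_data["version"] = "M"

def attribute_change_run():
    """Activate M -> A and realign the aggregate's attribute values."""
    global pending_attributes
    if pending_attributes is not None:
        master_data["attributes"] = pending_attributes
        pending_attributes = None
    master_data["version"] = "A"

def load_transaction_data():
    global rolled_up
    rolled_up = False   # the new request is not yet in the aggregate

def roll_up():
    global rolled_up
    rolled_up = True

# Day 2: new attributes and new transaction data arrive.
load_master_data(["y", "z", "s", "d"])
load_transaction_data()
attribute_change_run()   # run this before the rollup
roll_up()
print(master_data["version"])   # "A"
print(rolled_up)                # True
```

Only after both steps (change run, then rollup) is the new data consistently reportable, which matches the three cases listed above.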
this is how it works -
How does attribute change run works for Aggregates and Master data?
Hi
Can anybody explain how the attribute change run works for master data?
For example:
0SPELLING has master data.
On day 1 there are 10 records;
on day 2 it has 12 records,
so with the attribute change run these 2 new records will get added.
The values for these 12 records will be added separately in the data load.
Is this how it works?
So what about aggregates that contain master data? For example:
You have 0SPELLING, which has attributes x, y, and z on day 1, with 10 records,
so you build your aggregates on day 1 with those same values.
On day 2 you have new attribute values y, z, s, and d and new hierarchies, so you add new records.
With the data load, the data is loaded in modified version M and is not available for reporting.
When you run the attribute change run, this modified version is activated to version A, the active version.
It also performs the change run alignment for the aggregate, for the new attribute values and new hierarchy values.
Now, in order for this data to be available for reporting, you need to roll up the aggregate:
if you roll up the aggregate before the attribute change run, the new data is not available for reporting;
if you roll up the aggregate after the attribute change run, the data is available for reporting;
if you do not roll up the aggregate, then even though the new data is in the data provider, it will not be available for reporting.
this is how it works -
Data in aggregates not updated after planning in SEM-BPS cubes
We are running BW 3.0B SP27 with SEM-BPS. Our database is DB/2. When users modify plan data on planning layouts, the data in the cube and the aggregates can become inconsistent.
For example, when a user runs a report that uses an aggregate, they get one number, but when they add a drilldown that causes the report to bypass the aggregate and hit the cube directly, the number will be different. The only way to fix this is to deactivate and then reactivate the aggregate that has stale data.
We've tried opening an OSS message, but that hasn't been helpful, since we aren't able to reliably reproduce the problem. Sometimes the aggregates are fine, other times the data will be stale.
We roll up aggregates in the SEM cubes at the end of each business day. We include a request ID filter for "most current data" (0S_RQMRC) in all our SEM queries, so the queries should be bringing in data from the latest open request in addition to the data in the relevant aggregate.
What causes this, and how can we fix it without manually reactivating aggregates?
Thanks,
Jason
Message was edited by: Jason Kraft
Is the request still open? Close the request, and then the aggregates will probably start rolling up.
Regards,
BWer
Assign points if helpful. -
Aggregates with Attribute Change Run and Master data
Hi !
I created an aggregate which includes master data such as Customer, Customer Number, Material, Material Number...
It also contains a navigational attribute and a hierarchy.
Now we all know that if master data and hierarchies change frequently (or some new values or attributes are added, etc.), we want to see the respective changes in the aggregates too.
So just rolling up the aggregate will not help;
we need to apply the attribute change run to the aggregates for this purpose.
Now the question is: how do we apply the attribute change run to aggregates?
How do we automate this process with process chains?
If I create an aggregate on master data Customer Number, is that aggregate automatically included in the attribute change run for Customer Number in the process chain? Yes or no?
(What I mean is: if there is an attribute change run for Customer Number in a process chain created specially for Customer Number, and aggregates are created on Customer Number, does it automatically apply the changes to the aggregate too, or do we have to create a special process chain for it?)
Please reply ASAP, it's urgent.
Hi,
check these links for attribute change run
What's the attribute change run? and the common sequence of a process chain
http://help.sap.com/saphelp_bw30b/helpdata/en/80/1a67ece07211d2acb80000e829fbfe/content.htm
regards
harikrishna N -
Compress aggregates via a process chain
I am rolling up aggregates and not compressing the aggregate. I would like to do this in a process chain for anything older than 10 days. Can anyone advise on how I can do this
Thanks
S B Deodhar wrote:
hi,
Thanks for the input.
>
> My scenario is this:
>
> We have had to drop and reload contents from a cube because it was quicker than dropping specific requests that had been rolled up and compressed
I guess you can't delete a request that is already compressed; the system doesn't allow you to delete those requests. For a problematic request you can do a selective deletion if required.
>
> So what I would like to know is as follows:
>
> 1. Can I only compress an aggregate at the same time as I carry out a rollup i.e. if I do not check the compress flag for a request at the time I rollup, I am not able to compress that request going forward
> 2. If I choose to compress data in the cube is it specifically the cube or will it also take into consideration the compression of requests in aggregates which are not compressed
If an InfoCube is compressed, then keeping the aggregates uncompressed won't help you, as the request ID will be lost.
You can also try Collapse: select the radio button Calculate Request IDs, and compress only those requests that are older than a certain number of days. Please note that it will also compress all requests below that one.
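The point about the request ID being lost can be illustrated with a small sketch (plain Python, illustrative names only): compression moves rows from the request-keyed F fact table into the E fact table with the request ID collapsed, so request-level deletion stops being possible for those rows.

```python
# Sketch of why a compressed request can no longer be deleted: compression
# moves rows from the request-keyed F table into the E table with the
# request id dropped, so the request boundary is gone. Purely illustrative;
# real E/F fact tables have a different structure.

f_table = [  # (request_id, customer, amount) -- uncompressed requests
    (101, "A", 10.0),
    (101, "B", 5.0),
    (102, "A", 7.0),
]
e_table = {}  # customer -> amount; request id collapsed away

def compress(up_to_request):
    """Move requests up to a given id into the E table, dropping the id."""
    global f_table
    keep = []
    for req, customer, amount in f_table:
        if req <= up_to_request:
            e_table[customer] = e_table.get(customer, 0.0) + amount
        else:
            keep.append((req, customer, amount))
    f_table = keep

def delete_request(request_id):
    """Request-level deletion only works while the request is still in F."""
    global f_table
    before = len(f_table)
    f_table = [row for row in f_table if row[0] != request_id]
    return len(f_table) != before  # False if already compressed

compress(101)
print(delete_request(101))   # False: request 101 no longer exists as a unit
print(delete_request(102))   # True: still uncompressed in the F table
```

Once the rows of request 101 are merged into the E table, there is no column left to select them by, which is why only selective deletion (by characteristic values) remains an option.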
hope it helps
regards
laksh -
Request for reporting available after rollup. Why not before?
Hi,
In InfoCubes with aggregates, a data load is not available for reporting until you roll up the aggregates. This is very undesirable behaviour, and we would like the loads to be available at all times, with or without a rolled-up aggregate. All our aggregates are rolled up in the same process chain block after all loads are done. This means that data is not available for reporting until all loads have finished, something our customers complain about.
Because parallel rollup jobs collide, rolling up after every InfoPackage would make our load schedule very inflexible (difficult to plan parallel processes).
Is it possible (by changing a setting somewhere) to make a load available for reporting at all times, or is this one of SAP's standard design decisions without any workarounds?
Cheers,
Eduard
Aggregate data is maintained in aggregate tables.
Unless you roll it up after every data load, the aggregate data won't be correct and won't be consistent with the data in the cube.
Yes, the rollup job will not complete unless the loading job is complete.
One suggestion is to review the process chain jobs and see if you can reshuffle them.
But the aggregate rollup has to happen after the data loading job of the cube on which the aggregate is built.
Ravi Thothadri -
Global Temp Table or Permanent Temp Tables
I have been doing research for a few weeks and trying to comfirm theories with bench tests concerning which is more performant... GTTs or permanent temp tables. I was curious as to what others felt on this topic.
I used FOR loops to test insert performance, and at high row counts the permanent temp table at times seemed to be much faster than the GTTs, contrary to many white papers and case studies I have read claiming that GTTs are much faster.
All I did was FOR loops that iterated INSERT/VALUES up to 10 million records. And for 10 million records, the permanent temp table was over 500k milliseconds faster...
Does anyone have useful tips or information that can help me determine which will be best in certain cases? The tables will be used for staging in ETL batch processing into a data warehouse. Rows within my fact and detail tables can reach into the millions before being moved to archives. Thanks so much in advance.
-Tim
> Do you have any specific experiences you would like to share?
I use both - GTTs and plain normal tables. The problem dictates the tools. :-)
I do have an exception, though, that does not use GTTs and still supports "restartability".
I need to continuously roll up (aggregate) data. Raw data collected for an hour gets aggregated into an hourly partition. Hourly partitions get rolled up into a daily partition. Several billion rows are processed like this monthly.
The eventual method I've implemented is a cross between materialised views and GTTs. Instead of dropping or truncating the source partition and running an insert to repopulate it with the latest aggregated data, I wrote an API that allows you to give it the name of the destination table, the name of the partition to "refresh", and a SQL (that does the aggregation - kind of like the select part of a MV).
It creates a brand new staging table using a CTAS, inspects the partitioned table, slaps the same indexes on the staging table, and then performs a partition exchange to replace the stale contents of the partition with that of the freshly built staging table.
No expensive delete. No truncate that results in an empty and query-useless partition for several minutes while the data is refreshed.
And any number of these partition refreshes can run in parallel.
Why not use a GTT? Because they cannot be used in a partition exchange. And the cost of writing data into a GTT has to be weighed against the cost of using that data by writing it (or some of it) into permanent tables. Ideally one wants to plough through a data set once.
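The refresh pattern described above can be expressed as a rough analogy in Python (illustrative only — the real mechanism is Oracle's `ALTER TABLE ... EXCHANGE PARTITION`, and the names here are invented): build the fresh aggregate in a separate staging structure, then swap it in with one atomic rebind, so readers never see a half-empty partition.

```python
# Rough Python analogy of the partition-exchange refresh: build the new
# aggregate in a "staging" structure, then replace the partition by a single
# reference swap, so queries never observe a truncated, half-loaded state.
# Illustrative only; not Oracle code.

partitions = {"2024-01": {"clicks": 1_000}}   # partition name -> aggregated data

def refresh_partition(name, fresh_aggregate):
    """Stage the fresh aggregate, then swap it in atomically."""
    staging = dict(fresh_aggregate)  # "CTAS": build a complete staging copy
    partitions[name] = staging       # "exchange": one atomic rebind, no delete

refresh_partition("2024-01", {"clicks": 1_250})
print(partitions["2024-01"]["clicks"])   # 1250
```

The design point is the same as in the post: no expensive DELETE, no window where the partition is empty, and independent partitions can be refreshed in parallel.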
Oracle has a fairly rich feature set - and these can be employed in all kinds of ways to get the job done. -
Hi Gurus,
Whenever I try to transport process chain ZFI_0FIGL_C10 from development to QA and production, the error "Object 'DM' (APCO) of type 'Application' is not available in version 'A'" is displayed at the end of creating the transport package.
The process chain was created to load data to 0FIGL_C10 and it works correctly on all systems (dev, QA, prod).
The process chain contains processes for:
- starting the process chain
- executing infopackage (for loading data to 0figl_o10),
- ACR (activate data in 0figl_o10)
- delete index 0figl_c10
- DTP loading data from 0figl_o14 to 0figl_c10
- executing infopackage 80FIGL_O10
- creating index in 0figl_c10
- building DB statistics
- rolling up aggregates in 0figl_c10
- deleting of requests from the changelog 0figl_o10
- deleting of requests from PSA (0fi_gl_10)
How can I find out what causes that error?
Regards,
Leszek
Hi,
A SAP consultant told me to ignore that error.
Indeed, after creating a transport with some new process chains (again I was informed about the "APCO ..." error) and transporting it to production, everything works correctly.
The problem is described in sap note: 883745.
Regards, Leszek -
Unable to Rollup a request in InfoCube
I am unable to roll up one request in the InfoCube. I am getting the following error messages in the Detail tab.
InfoCube XXX is locked by a terminated change run
Lock NOT set for :Roll up aggregates
Rollup terminated : Data target XXX from X to Y.
In SM12 I haven't found any lock on InfoCube XXX.
Please let me know how I can solve this issue.
hi,
please check my replies in your previous postings
InfoCube XXX is locked by a terminated change run
Realignment Run (attribute change run) -
Hi Gurus,
Sales Query 1
CUSTOMER ID
MATERIAL ID
Key Figures
SALES REP ID
No Applicable Data Found.
Could you please help me solve the above problem in BEx when executing the query?
No data is shown in BEx even though the InfoCube contains the data.
Cheers
Shrinu
Check whether the data in the InfoCube is available for reporting: go to the context menu of the cube -> Manage -> Requests tab, and check whether the 'request for reporting' column has the query icon.
Also, do you have aggregates defined on your cube? If you do, you will have to roll up the aggregates:
right-click the cube -> Manage -> Rollup tab -> press the Execute button.
After it completes, go to the Requests tab and refresh; you will then see the 'request for reporting' icon available.
Run your query again and you should be able to see data.
If the above doesn't work, search the forums for more info; this problem has been discussed several times.
Thanks
K
Edited by: Krishna K on Jan 20, 2008 2:48 AM -
Rollup process is failing every day
Hi,
BI statistics data is loaded to an InfoCube, and this cube has an Adjust process in its process chain.
The Adjust process fails daily, giving the following 18 messages, with step 16 showing as the error.
One more thing: even though the Adjust process fails, at the cube level the compression of aggregates and the rollup of the cube and aggregates happen daily; only this Adjust process in the process chain fails.
1. Roll up aggregates of Info Cube 0TCT_C01
2. Rollup is running: Data target 0TCT_C01, from 0000603080 to 0000603371
3. Editing of aggregate 100533 in main process
4. Aggregate 100533(0TCT_C01) is rolled up to 603371
5. Aggregate 100533 (0TCT_C01) rolled up in 4 seconds
6. 1 aggregates are condensed
7. Editing of aggregate 100533 in main process
8. Aggregate 100533 is condensed to 0REQUID <= 603371 and 0CNSID <= 0
9. Aggregation of Info Cube 100533 to request 603371
10. Statistic data written
11. Requirements check compelted: ok
12. enqueue set: 100533
13. compond index checked on table: /BIC/E100533
14. Request statistics calculated for p-dimid 1864
15. enqueue released: 100533
16. *******SQL0802N Arithmetic overflow or other arithmetic exception occurred. SQLSTATE=22003 row=1***error
17. Rollup was successful: Data target 0TCT_C01, from 0000603080 to 0000603371
18. Rollup is finished: Data target 0TCT_C01, from 0000603080 to 0000603371
Can anybody please tell me what is happening here and how we can avoid this permanently, ASAP?
Thanks and Regards,
Venkat
One of the key figures has type INT4. The value to be loaded into this key figure is bigger than the upper limit of the INT4 definition, hence the overflow error.
check note 992805
M.
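For context on the overflow diagnosed above: an ABAP INT4 key figure is a signed 32-bit integer, so any aggregated value above 2**31 - 1 triggers an arithmetic overflow (here surfaced by DB2 as SQL0802N) during the rollup. A quick sanity check, as a sketch:

```python
# Illustrative check of the INT4 limit behind the SQL0802N overflow: a signed
# 32-bit integer cannot hold values above 2**31 - 1, so an aggregated key
# figure exceeding that limit fails during rollup.

INT4_MAX = 2**31 - 1   # 2147483647

def fits_int4(value):
    return -2**31 <= value <= INT4_MAX

print(fits_int4(2_000_000_000))   # True
print(fits_int4(3_000_000_000))   # False -> would overflow on rollup
```

Note 992805 (referenced above) addresses this case; the usual fix is to change the key figure to a wider type such as DEC or FLTP.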