Aggregate roll up

Hi Guys
I created an aggregate on an InfoCube (which already had data) that already has a couple of aggregates built on it. I activated and filled my aggregate. My question: is my aggregate now ready for queries to use, or do I have to run an "aggregate roll up" first?
Thanks in advance
Raju

You probably don't want to set the automatic aggregate rollup on an InfoCube if you have requests that perform a full load daily and automatically delete an identical request loaded earlier (e.g. the day before), because this forces the aggregate to be refilled completely. While that should happen automatically, it tends to be problem-prone and can take a long time if the base InfoCube is large.

Similar Messages

  • Aggregate Roll Up and Compression

    Hi,
    I am familiar with the basics of aggregate roll-up and compression.
    What I need to know is which one should follow the other. I mean, should the aggregate roll-up be done before compression, or vice versa? What are the advantages of doing it in a particular order?
    Regards
    Arjun

    Hi Arjun,
    Aggregate rollup must be performed before compression; it must not be the other way around.
    The reason is that aggregate rollup is performed on a request basis, so it needs the request ID to roll up the data. Compressing a request in the cube means the request ID is lost, and after that the rollup is no longer possible. A toy illustration of this ordering rule follows below.
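    To make the ordering rule concrete, here is a toy sketch (plain Python, not SAP code; the request IDs and row structure are invented for illustration). Rollup works request by request, while compression erases exactly the request IDs it would need.
    ```python
    def rollup(fact_rows, rolled_up):
        """Add the request IDs of not-yet-rolled-up requests to the aggregate."""
        new = {r["request_id"] for r in fact_rows} - rolled_up - {0}
        rolled_up |= new
        return new                        # request IDs rolled up in this run

    def compress(fact_rows):
        """Compression: the request dimension is lost, every row becomes request 0."""
        for r in fact_rows:
            r["request_id"] = 0

    # Correct order: roll up first, then compress.
    fact_rows = [{"request_id": 101, "amount": 10}, {"request_id": 102, "amount": 5}]
    rolled_up = set()
    print(rollup(fact_rows, rolled_up))   # {101, 102} -> the aggregate is up to date
    compress(fact_rows)

    # Wrong order: compressing a fresh request first leaves nothing to roll up.
    fact_rows = [{"request_id": 103, "amount": 7}]
    rolled_up = set()
    compress(fact_rows)
    print(rollup(fact_rows, rolled_up))   # set() -> request 103 never reaches the aggregate
    ```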
    Please let me know if you need any more information.
    Regards,
    Pankaj

  • Problem in Process chain due to Aggregate Roll-up

    Hi,
    I have an InfoCube with aggregates built on it. I have loaded data into the InfoCube from 2000 to 2008, and rolled up and compressed the aggregates for this.
    I have also loaded the 2009 data into the same InfoCube using Prior Month and Current Month InfoPackages, for which I only roll up the aggregates; no compression of the aggregates is done. The Current and Prior Month load runs through a process chain four times per day. The process chain is built in such a way that it deletes the overlapping requests when it loads for the second/third/fourth time in a day.
    The problem is that when the overlapping requests are deleted, the process chain also takes the compressed aggregate requests (the 2000 to 2008 data), de-compresses them, deactivates the aggregates, activates them again, and re-fills and compresses the aggregates again. This makes the process chain take nearly an hour, when it should take no more than 3 minutes.
    So, what could be done to tackle this problem?  Any help would be highly appreciated.
    Thanks,
    Murali

    Hi all,
    Thanks for your reply.
    Arun: The problem with the solution you gave is that until I roll up the aggregates for the Current & Prior Month InfoPackage, the 'ready for reporting' symbol does not appear for that request.
    Thanks,
    Murali

  • Aggregate not rolling up automatically !

    Experts!
    I created an aggregate last month. I just noticed that its last rolled-up date is still from last month.
    There are a couple of other aggregates on that cube and they are rolled up automatically every day after the data load.
    Do I need to make any settings? I thought that because of the other aggregates, mine should be rolled up along with them every time.
    What is the problem, and how do I roll it up manually?
    Or should I just deactivate it and activate it again? (If I do that, it will load all the data again, and that might take time.)
    Is there a way for it to load only the data remaining since last month?
    Thanks

    There are a couple of other aggregates on that cube and they are rolled up automatically every day after the data load.
    If aggregate roll-up has been added to the daily load process chain, those aggregates are rolled up as part of that chain.
    What is the problem, and how do I roll it up manually?
    You should add the "Roll up of filled aggregates" process to the process chain. Or, to do it manually:
    InfoCube --> Manage --> 'Rollup' tab: make sure the displayed request ID is the most recent request ID, the same as under the 'Requests' tab,
    then execute.

  • Back end activities for Activation & Deactivation of Aggregates

    Hi,
    Could anybody help me understand the back-end activities performed at the time of activation and deactivation of aggregates?
    Is filling an aggregate the same as rolling it up?
    What is the difference between deactivating and deleting an aggregate?
    Thanks.
    Santanu

    Hi Bose,
    Activation:
    In order to use an aggregate in the first place, it must be defined, activated and filled. When you activate it, the required tables are created in the database from the aggregate definition. Technically speaking, an aggregate is actually a separate BasicCube with its own fact table and dimension tables. Dimension tables that agree with the InfoCube are used together. Upon creation, every aggregate is given a six-digit number that starts with the figure 1. The table names that make up the logical object that is the aggregate are then derived in a similar manner to the table names of an InfoCube. For example, if the aggregate has the technical name 100001, the fact tables are called /BIC/E100001 and /BIC/F100001. Its dimensions, which are not the same as those in the InfoCube, have the table names /BIC/D100001P, /BIC/D100001T and so on.
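    As a side note, the naming pattern described above can be reproduced with a small helper (plain Python, not part of BW; the dimension suffixes P and T are the ones from the example, a real aggregate has further dimension tables):
    ```python
    def aggregate_table_names(technical_name, dim_suffixes=("P", "T")):
        """Derive fact and dimension table names for an aggregate, e.g. '100001'."""
        fact_tables = [f"/BIC/E{technical_name}", f"/BIC/F{technical_name}"]
        dim_tables = [f"/BIC/D{technical_name}{s}" for s in dim_suffixes]
        return fact_tables, dim_tables

    print(aggregate_table_names("100001"))
    # (['/BIC/E100001', '/BIC/F100001'], ['/BIC/D100001P', '/BIC/D100001T'])
    ```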
    Rollup:
    New data packets / requests that are loaded into the InfoCube cannot be used at first for reporting if there are aggregates that are already filled. The new packets must first be written to the aggregates by a so-called “roll-up”. In other words, data that has been recently loaded into an InfoCube is not visible for reporting, from the InfoCube or aggregates, until an aggregate roll-up takes place. During this process you can continue to report using the data that existed prior to the recent data load. The new data is only displayed by queries that are executed after a successful roll-up.
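    The visibility rule can be pictured with a toy sketch (plain Python, not SAP code; the request IDs and amounts are invented). A query may be answered either from the cube or from an aggregate, so a new request is only released for reporting once it has been rolled up into the aggregate as well; otherwise the two sources would disagree.
    ```python
    cube = {101: 100, 102: 40}      # request ID -> amount loaded into the InfoCube
    aggregate = {101: 100}          # request 102 has not been rolled up yet
    rolled_up = set(aggregate)      # requests released for reporting

    def reportable_total(source):
        """Sum only the rolled-up requests, whichever source answers the query."""
        return sum(v for req, v in source.items() if req in rolled_up)

    assert reportable_total(cube) == reportable_total(aggregate) == 100  # consistent

    # After the roll-up, request 102 becomes visible and both sources still agree.
    aggregate[102] = cube[102]
    rolled_up.add(102)
    assert reportable_total(cube) == reportable_total(aggregate) == 140
    ```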
    See the link below for more information.
    http://sapbibw2010.blogspot.in/2010/10/aggregates.html
    Naresh

  • Error when running report: "Error in aggregate table for InfoCube"

    Hi Experts
    We had a temporary error, which I would like to find the root cause for.
    We were running a workbook which is based on a MultiProvider. For a short period of time (around 10 minutes) we got the following error when we executed the workbook based on this MultiProvider:
    "Error in aggregate table for InfoCube"
    There were no loads or aggregate rollups running on the cubes in the MultiProvider.
    I see no short dumps as well in ST22.
    Have anybody seen this error before, and how can I trace how this error occured?
    Thanks in advance.
    Kind regards,
    Torben

    Hi Sneha,
    I suggest you run some RSRV tests.
    Go to transaction RSRV. There you will find tests for aggregates. Perform them and see whether you get any discrepancies.
    Regds,
    Shashank

  • Aggregate for master data ?

    Hello friends,
    I created Aggregates on Navigational Attributes.
    They are working fine, but I need to know how to set up the roll-up for aggregates on navigational attributes.
    The point is that the master data changes frequently, e.g. for 0customer.
    So could anyone send me step-by-step documents on how to roll up aggregates on navigational attributes, or aggregates created on master data?
    How do I create process chains for this?
    Because if the master data changes, simply rolling up the aggregates will not help.
    So we would need a process chain that deactivates the aggregate, reactivates it and fills it up again.
    If I have misinterpreted something, please correct me.
    Please advise

    Hello, thanks for replying.
    Wond, I am not talking about aggregation on master data.
    I have an aggregate which has master data in it, e.g. 0customer,
    and this 0customer changes quite often, let's say every 4-5 days.
    Now if I do a roll-up on aggregates with 0customer, it will not add new values of 0customer to the aggregate; it will only load new values for the characteristics already in the aggregate.
    So my question is how to handle this in a process chain.
    With aggregate roll-up alone I will not be able to load new values added to the master data.
    For example, if 0customer has 5 characteristics with 15 values, that is fine, the aggregate is filled with these values.
    If new values are added to these 5 characteristics, that is solved by the aggregate roll-up.
    But if 2 more characteristics are added to 0customer, so that it now has 7 characteristics,
    then these 2 characteristics will not be added to the aggregate.
    I experimented and saw that switching the aggregate off and on again does add these 2 new characteristics.
    How do I automate this for the aggregate?
    I hope you understand.

  • Compression or Aggregates.

    We have an accounts payable cube; data is loaded into it daily and it has 11 reports built on it. There is a proposal to compress the data in the cube to improve query performance.
    Could anyone let me know whether it is feasible to go straight for compression, or whether we should first build aggregates on the cube and then compress the data?

    Hi,
    First build the aggregates and roll them up, then do the compression.
    If you compress first, compression deletes the requests and transfers the records from the F table to the E table, after which a request-based rollup is no longer possible. So the preferred order is always: first the aggregate roll-up, then the compression. A rough sketch of what compression does follows below.
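    As a rough picture (plain Python, not SAP code; the column names are invented for illustration), compression behaves like a group-by that drops the request ID: it reduces the row count, but it also removes the request-level detail that a rollup would need.
    ```python
    from collections import defaultdict

    # F-table rows, one per request and key: (request_id, customer, amount).
    f_table = [
        (101, "C1", 10.0),
        (102, "C1", 5.0),
        (102, "C2", 7.0),
    ]

    # Compression: aggregate into the E table, grouping only by the remaining key.
    e_table = defaultdict(float)
    for request_id, customer, amount in f_table:
        e_table[customer] += amount

    print(dict(e_table))   # {'C1': 15.0, 'C2': 7.0} -- the request IDs are gone
    ```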
    Ali.

  • Request for reporting available after rollup. Why not before?

    Hi,
    In InfoCubes with aggregates, a data load is not available for reporting until you roll up the aggregates. This is very undesirable behaviour and we'd like to have the loads available at all times, with or without a rolled-up aggregate. All our aggregates are rolled up in the same process chain block after all loads are done. This means that data will not be available for reporting until all loads have finished, something our customers complain about.
    Because parallel rollup jobs collide, rolling up after every InfoPackage would make our load schedule very inflexible (difficult to plan parallel processes).
    Is it possible (by changing a setting somewhere) to change this and make loads available for reporting at all times, or is this one of SAP's standard design decisions without any workaround?
    Cheers,
    Eduard

    Aggregate data is maintained in separate aggregate tables.
    Unless you roll up after every data load, the aggregate data won't be correct and won't be consistent with the data in the cube.
    Yes, the roll-up job cannot complete unless the loading job is complete.
    One suggestion is to review the process chain jobs and see if you can reshuffle them.
    But the aggregate roll-up has to happen after the data loading job of the cube on which the aggregate is built.
    Ravi Thothadri

  • Records not populating in BEx report

    I have a Business Content cube which was installed once in August. I ran a query on that cube and got records up to August. I have lots of new records after August too, but they are not coming into my query. I thought I had to update my cube, but I don't know how. I went to the InfoPackage manually and reloaded the data, but the request failed. So I deleted the cube and its content and reinstalled it. I can see in the InfoPackage that more records were uploaded, but when I run a query I still only see data up to the same old date. Can anyone please tell me how to update the cube, and how to bring in my data after August?
    Thank you.
    sajita

    Hi Sajita,
    you should check the status of your requests in the Administrator Workbench (RSA1): right-click the InfoCube and check the status of the requests (there is an indicator showing whether a request is available for reporting). Possible reasons are a missing aggregate roll-up or a missing OK status (the traffic light; you can click on it and set it manually). If after deleting everything you still see the old data, your query may even be on the wrong InfoCube. Check the technical names carefully.
    regards, Klaus

  • Error handling in process chain-doubts

    Hi,
    I have some doubts about error handling in process chains.
    1) I have a 'load InfoPackage' process and the subsequent process is 'update from PSA'.
    The 'load InfoPackage' process failed, so I loaded the InfoPackage manually and repeated the next process, i.e. 'update from PSA'.
    How do I correct the process chain from here?
    2) I have a 'load InfoPackage' process and the subsequent process is 'delete request in InfoCube'. The 'load InfoPackage' process failed, so I loaded the InfoPackage manually and repeated the next process, i.e. 'delete request in InfoCube'. The chain continued by deleting the right request. How is this possible?
    Please help me, as this is urgent and I have to deal with these issues daily. Any documents on error handling would be greatly appreciated.
    My mail id is [email protected]
    Regards,
    Pavan

    Hi Pavan,
    Hope the following links will give you a clear idea about process chains and clear your doubts.
    Business Intelligence Old Forum (Read Only Archive)
    http://help.sap.com/saphelp_nw2004s/helpdata/en/8f/c08b3baaa59649e10000000a11402f/frameset.htm
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/8da0cd90-0201-0010-2d9a-abab69f10045
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/19683495-0501-0010-4381-b31db6ece1e9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/36693695-0501-0010-698a-a015c6aac9e1
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/9936e790-0201-0010-f185-89d0377639db
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3507aa90-0201-0010-6891-d7df8c4722f7
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/263de690-0201-0010-bc9f-b65b3e7ba11c
    /people/siegfried.szameitat/blog/2006/02/26/restarting-processchains
    Errors in monitoring of process chains can be categorized into 4 different sections:
    Master data - Full Update
    Master data - Delta Update
    Transaction data - Full Update
    Transaction data - Delta Update. In terms of data loading, errors can be due to a server shutdown or system maintenance, or due to incorrect entries in the OLTP system, in which case you'll have to fix the errors in the PSA and load the data manually.
    Otherwise there can be errors from the attribute change run being locked by another job, an aggregate roll-up failing because an attribute change run is running at the same time, or problems with hierarchies and save hierarchies.
    There can be problems with DataStore activation if the ODS object contains an incorrect request; then you need to delete the incorrect request and reload the data.
    In case of a transaction data delta failure, you'll have to request a repeat, either manually in the InfoPackage or via the repeat option if it is available when right-clicking the load event.
    For master data delta failures, you need to do a re-init by deleting the previous initialization condition under "initialization options for source systems" in the scheduler menu, or reschedule the entire chain, because master data generally does not support a repeat of the last delta.
    For common data load errors check this link:
    /people/siegfried.szameitat/blog/2005/07/28/data-load-errors--basic-checks
    Regards,
    Ravikanth.

  • How to store as single record data coming from different ods to cube

    Hi All,
    we have the following scenario:
    The same contract values are uploaded from 3 ODS objects to a cube. In the cube this information is stored as 3 different records.
    Is there any option so that values with the same contract number and the same fiscal period are stored as a single record in the cube?
    Thanks in advance.
    Regards,
    Shradda.

    Hi Shradda,
    On the performance side, consider the points below:
    1. Indexes on the cube (i.e. delete indexes before the load and create them again after the load).
    2. Aggregate design (decisions on base aggregates, roll-up hierarchy, BW statistics etc.).
    3. Partitioning of the InfoCube (basically the decision on the number of partitions).
    4. Data package size (always try to have a larger data package so that pre-aggregation reduces the number of data records).
    The best source is service.sap.com; on the BI page you will find material on performance. That will help you.
    Regards,
    Vijay

  • 10g - cache, report contains oracle user defined function

    hi, experts,
    from http://obiee101.blogspot.com/2008/07/obiee-cache-management.html
    Reasons Why a Query is Not Added to the Cache:
    •Non-cacheable SQL element. If a SQL request contains Current_Timestamp, Current_Time, Rand, Populate, or a parameter marker then it is not added to the cache.
    •Non-cacheable table. Physical tables in the Oracle BI Server repository can be marked 'noncacheable'. If a query references any non-cacheable table then the query results will not be added to the cache.
    •Cache hit. In general, if the query gets a cache hit on a previously cached query, then the results of the current query are not added to the cache. The exception is query hits that are aggregate roll-up hits.
    •Result set is too big.
    •Query is cancelled. This can happen by explicit cancellation from Oracle BI Presentation Services or the Administration Tool, or implicitly through timeout.
    •Oracle BI Server is clustered. Queries that fall into the ‘cache seeding’ family are propagated throughout the cluster. Other queries continue to be stored locally. Therefore, even though a query may be put into the cache on Oracle BI Server node 1, it may not be on Oracle BI Server node 2.
    I would like to know
    if the request (report on dashboard) calls an oracle user defined function,  can the cache be created and saved for this report?
    thank you very much!

    Hi stephen,
    if the request (report on dashboard) calls an oracle user defined function, can the cache be created and saved for this report? Yes, it is cached: a function defined in the database and called from OBIEE is cached and saved.
    More information and example can be found here http://oraclebizint.wordpress.com/2007/09/10/oracle-bi-ee-10133-support-for-native-database-functions-and-aggregates/
    Hope it helps you. Please check whether all the other questions you posted have been answered.
    By,
    KK

  • System Log SM21

    Hi,
    The system log is showing the following message. Could you please explain to me what it means?
    Enqueue: Accumulated wait time for lock: 1300 seconds              
    > Uname: LVARA enxxhead8324                                      
    > Obj: SLEEP # enxxhead8328                                        
    > Key: 20070419082004848475002600TC030D71........................# 
    Regards,
    Neeraj

    Neeraj,
    Check the information below and see which of the following situations is the reason for your issue (a small lookup sketch of part of this matrix follows after the list).
    During loading
    you are not permitted to
    •        Delete data
    •        Archive data
    •        Delete indexes and reconstruct them
    •        Reconstruct statistics
    While indexes are being built
    you are not permitted to
    •        Delete indexes
    •        Fill aggregates
    •        Roll up requests in aggregates
    •        Create statistics
    •        Compress requests
    •        Archive data
    •        Update requests to other data targets
    •        Perform change run
    While statistics are being built
    you are not permitted to
    •        Delete indexes
    •        Build indexes
    •        Fill aggregates
    •        Roll up requests in aggregates
    •        Compress requests
    •        Archive data
    •        Update requests to other data targets
    •        Perform change run
    During roll up
    you are not permitted to
    •        Delete indexes
    •        Build indexes
    •        Create statistics
    •        Fill aggregates
    •        Compress requests
    •        Perform change run
    During compression
    you are not permitted to
    •        Delete indexes
    •        Build indexes
    •        Create statistics
    •        Fill aggregates
    •        Roll up requests in aggregates
    •        Archive data
    •        Update requests to other data targets
    •        Perform change run
    During deletion
    you are not permitted to
    •        Load data
    •        Delete indexes
    •        Build indexes
    •        Create statistics
    in certain circumstances you are not allowed to
    •        Fill and roll up aggregates
    •        Perform change run
    •        Compress requests
    •        Update requests to other data targets
    During updating
    you are not permitted to
    •        Compress requests
    •        Archive data
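    Part of the matrix above can be encoded as a simple lookup (plain Python, not SAP code; only three of the activities are shown as an example):
    ```python
    # Forbidden operations per running activity, taken from the list above (subset).
    FORBIDDEN = {
        "loading": {"delete data", "archive data",
                    "delete and reconstruct indexes", "reconstruct statistics"},
        "roll up": {"delete indexes", "build indexes", "create statistics",
                    "fill aggregates", "compress requests", "perform change run"},
        "compression": {"delete indexes", "build indexes", "create statistics",
                        "fill aggregates", "roll up requests in aggregates",
                        "archive data", "update requests to other data targets",
                        "perform change run"},
    }

    def is_allowed(running_activity, operation):
        """Return True if the operation may run while the given activity is active."""
        return operation not in FORBIDDEN.get(running_activity, set())

    print(is_allowed("compression", "roll up requests in aggregates"))  # False
    print(is_allowed("loading", "delete data"))                         # False
    ```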
    [email protected]

  • Migration from Sun to HP Unix to Windows

    Hi,
    A hardware and OS migration is scheduled in my current project. To give you the details, the BI system will be migrated from Sun to HP servers and the OS will be changed from Unix to Windows.
    I have been asked to test the system to check that any existing functionality that works before the migration still works after the migration.
    It was suggested that I look at the installation guide to prepare the checklist. I know the guides are available on the Service Marketplace, but I am wondering which one I need to refer to for the BI 7.0 installation.
    Can anyone please send the link to the installation guide specifically for BI 7.0?
    If anyone has done a similar migration before, please share the pre- and post-migration test steps that you performed.
    Regards,
    Neeraj

    Thanks Effen,
    The DB will remain Oracle. I have just gone through the system copy guide; I found it a bit confusing.
    I have prepared a list of all the necessary activities and checks required pre- and post-migration. I would appreciate it if you could have a look at it and add any check that you think should be added to the list.
    Pre Migration check:
    SAP R3 Side:
    • Take a snapshot of SM37 for all the LIS jobs to record whether they are in released state.
    • Flush all the delta queues from LBWE.
    • Bring the delta queues to zero (by scheduling the respective InfoPackages for the data targets).
    SAP BW:
    Analysis phase
    • Understand the BW landscape and identify all interfaces (SAP and non-SAP).
    • Prepare a list of all outstanding BAU issues. Also record all problems if anything is broken and not working before the migration.
    • Prepare the test scripts for technical and functional testing.
    Migration Phase:
    • Run the process chains to flush all the delta queues.
    • Remove all the jobs from scheduled status; identify all the event-based chains and all the scheduled chains.
    • Stop all chains and remove them from the schedule.
    Post Migration check:
    SAP R3 Side:
    • Change the jobs from scheduled to released state and schedule the LBWE job to flush the records from LBWQ to RSA7 at the one-hour frequency.
    • Check the next released job time and make sure the LBWE job has started sending records from LBWQ to RSA7.
    • Take screenshots before sending the data from LBWQ to RSA7.
    • Check the number of records in RSA7 as well as in LBWQ.
    • After a successful job run, check the number of records in RSA7.
    • Make sure the records in RSA7 match those in LBWQ.
    • Keep an eye on all the LIS jobs for two or three consecutive runs.
    SAP BW Side:
    Technical Check:
    • Before starting the process chains, check all the source system connections. This includes SAP and non-SAP source systems.
    • Check the connection of all SAP source systems with BW. Ensure that the RFC connection is established for all SAP source systems.
    • Check the connection of all non-SAP source systems with BW. Ensure that the DB Connect settings are correct and the connection is established. Check all the source system connections through DB Connect to make sure that no tables/views have been dropped after the migration. If any table has an inactive status, bring it to the notice of the administrators of that particular database.
    • Double-check that all event-based and scheduled process chains are successfully scheduled as per the existing timings.
    • While scheduling the jobs, there is a chance that the system will automatically postpone the loads by one day based on the defined job priority; ensure that the jobs run on the scheduled date and time.
    • Keep an eye on all the critical process chains, such as the APO loads etc. mentioned in the Excel sheet.
    • Keep monitoring the running process chains and also compare execution times as a performance check.
    • Randomly check the InfoCubes, update rules, InfoObjects and other objects and verify that they are in active mode.
    • Ensure that all externally scheduled processes are rescheduled: backups, DB statistics, monitoring processes, etc.
    • Verify the connectivity with the SAP portal.
    • To verify the BI Java configuration, run the IP transaction RSPLAN, open the BI Admin Cockpit and check that it is working fine after the migration.
    • Verify that all InfoCube administration activities such as index creation/deletion, aggregate roll-up, compression and DB statistics refresh are working fine after the migration.
    • Create/delete/maintain BW objects (DSO, InfoCube, InfoSource, update rule, transformation, DTP etc.) in the development systems.
    • Check that Full Request InfoPackage, Repair Full Request InfoPackage, Initialisation with Data Request InfoPackage, Initialisation without Data Request InfoPackage and Delta Request InfoPackage can be executed successfully.
    • Check consistency with the RSRV transaction checks.
    • Create, delete and modify a query in Query Designer. Verify that Query Designer works properly after the migration.
    Functional Check:
    Execute all functional test scripts and validate that the data is being loaded into all relevant InfoProviders.
