Roll up to aggregates

Hello Gurus,
Let's take a scenario where the cube is new and has no data, and I have activated and filled one aggregate. I can activate and fill an aggregate even though the cube has no records; I have already tested this and it works.
My question is: if I include this new cube in a process chain for roll-up to the aggregate, and the run finds there are 0 records to be rolled up (assuming the load to the cube delivered zero records), will the roll-up step fail?
Please suggest.
Thanks

Hi MK,
I don't think that the process will fail if there are 0 records to be rolled up...

Similar Messages

  • Process chain fails every time at "roll up of aggregates"

    hi all,
    The process chain is defined in such a way that:
        Start
          - Delta load to InfoCube1  ->  Delta load of aggregates (Cube1)
          - Delta load to InfoCube2  ->  Delta load of aggregates (Cube2)
          - Delta load to InfoCube3  ->  Delta load of aggregates (Cube3)
    Every time this process chain is executed, it fails at the "delta load of aggregates" step. If I repeat the step, it works fine, but repeating is not a solution, so I want a permanent fix. Please guide me on what to do, and explain why this happens.
    Error message: Process rollup of filled aggregates/BIA indexes, variant delta load of aggregates.
    Thanks in advance,
    Moushmi

    Hi Moushmi,
    Have you checked whether the system has work processes available while the chain is running? It could be that your system is overloaded and the aggregates won't roll up because no processes or memory are available.
    Check with your Basis resource. Also, try changing your chain to roll up only one cube at a time. Do you have many aggregates to load, are they big, and, most importantly, are they actually being used?
    Kathleen

  • Problem in Process chain due to Aggregate Roll-up

    Hi,
    I have an InfoCube with aggregates built on it. I have loaded data into the InfoCube from 2000 to 2008, and rolled up and compressed the aggregates for this.
    I have also loaded the 2009 data into the same InfoCube using Prior Month and Current Month InfoPackages, for which I only roll up the aggregates; no compression of the aggregates is done. The Current and Prior Month load runs through a process chain four times a day. The process chain is built in such a way that it deletes the overlapping requests when it loads for the second/third/fourth time in a day.
    The problem is that when the overlapping requests are deleted, the process chain also takes the compressed aggregate requests (2000 to 2008 data), de-compresses them, deactivates the aggregates, activates them again, and re-fills and compresses them again. This makes the process chain run for nearly an hour when it should take no more than 3 minutes.
    So, what could be done to tackle this problem? Any help would be highly appreciated.
    Thanks,
    Murali

    Hi all,
    Thanks for your reply.
    Arun: The problem with the solution you gave is that until I roll up the aggregates for the Current and Prior Month InfoPackages, the "Ready for Reporting" symbol does not appear for the particular request.
    Thanks,
    Murali

  • Error in Roll up & Filling up an aggregate

    Hi all,
    We have created 4 aggregates on 0SD_C03 (Sales Overview cube). Initially, after the history data load, we filled all the aggregates successfully. Now, after the delta load, we are trying to roll up the aggregates manually. We currently have only one delta request in the system. When we scheduled the roll-up manually, it gave an error stating:
    Roll-up terminated: SQL error 12801. The diagnosis of the error says:
    "The database system registered an SQL error. As available, the error number and a description are included in the short text. Possible causes for SQL errors include:
    1. Overflow of database objects such as buffers, temporary tablespaces, rollback segments or data containers/tablespaces.
    ->These problems can generally be eliminated by system administrators.
    2. Missing generated database objects such as tables or views based on inconsistent or inactive InfoCubes or InfoObjects. Examples of this include the view of the fact table for an InfoCube or the attribute SID table (X/Y table) of a characteristic.
    -> These problems can generally be eliminated by a BW administrator.
    3. SQL requests with incorrect content.
    -> Problems of this type are generally programming errors"
    In the Manage > Requests tab for the cube, the roll-up status says the roll-up is still running!
    Now, when we try to fill one of the aggregates after deactivating and reactivating it, that process also gets terminated. The error message from the job log states:
    The roll up for InfoCube <cubename> has terminated
    The system is not able to fill the aggregates at this time
    Lock NOT set for: Fill aggregate (initial)
    So we are not able to fill the aggregate either. We need your help badly, as we are short on our timeline as well.
    Thanks in advance,
    Abhishek.

    Thanks M & Maithili for your prompt replies.
    Let me give you more details.
    1. There are no run time errors in ST22
    2. There was a lock for the cube in SM12. I deleted it as you suggested and then tried filling the aggregate again, but unfortunately it gives exactly the same error once again.
    3. In SM37 I found a job BI_SAGR* being triggered as soon as I tried to fill the aggregate, but the job terminates after 4 seconds.
    4. In SM21 I found 2 errors:
    a. Database error 12801 in FET:
    Program: RSBATCH_EXECUTE_PROZESS | Class: K | Problem class: SAP Web AS Problem | Package: SBAC
    Module name: dbacds | Line: 1433 | Table name: 12801 | Field name: FET
    Documentation for system log message BY 2:
    After the attempt to call a database operation, the database system returned the error code specified in the message, which indicates that the operation concerned could not be executed successfully.
    b. Database error 12801:
    The log for this is exactly the same as the above, except that the field name is missing.
    Thanks,
    Abhishek

  • Proc Chain - Delete Overlapping Requests fails with aggregates

    BW Forum,
    Our weekly/daily load process chain loads several full (not delta) transaction InfoPackages. Those InfoPackages are intended to replace prior full loads and are then rolled up into aggregates on the cubes.
    The problem is that the process chain fails to delete the overlapping requests. I manually have to remove the aggregates, remove the InfoPackages, then rebuild the aggregates. It seems that the Delete Overlapping Requests step fails due to the aggregates or a missing index on the aggregates, but I'm not certain. The lengthy job log contains many references to the aggregate before it fails with the messages below.
    11/06/2004 13:47:53 SQL-END: 11/06/2004 13:47:53 00:00:00                                                 DBMAN        99
    11/06/2004 13:47:53     SQL-ERROR: 1,418 ORA-01418: specified index does not exist                        DBMAN        99
    11/06/2004 13:47:59 ABAP/4 processor: RAISE_EXCEPTION                                                       00        671
    11/06/2004 13:47:59 Job cancelled                                                                           00        518
    The raise_exception is a short dump with Exception condition "OBJECT_NOT_FOUND" raised.
    The termination occurred in the ABAP program "SAPLRRBA " in
    "RRBA_NUMBER_GET_BW".                                    
    The main program was "RSPROCESS ".                        
    I've looked for OSS notes. I've tried to find a process to delete aggregates prior to loading/deletion of overlapping requests. In the end, I've had to manually intervene each time we execute the process chain, so I've got to resolve the issue.
    Do others have this problem? Are the aggregates supposed to be deleted prior to loading full packages which will require deletion of overlapping requests? I presume not since there doesn't seem to be a process for this. Am I missing something?
    We're using BW 3.3 SP 15 on Oracle 9.2.0.3.
    Thanks for your time and consideration!
    Doug Maltby

    Are the aggregates compressed after the rollup? If you compress the aggregate completely, the request you are trying to delete is no longer identifiable once it is in the compressed E fact table (compression throws away the request ID).
    So you need to change the aggregate so that the most recent requests remain in the uncompressed F fact table. Then the request deletion should work.
    I thought that if the aggregate was fully compressed and you then wanted to delete a request, the system would recognize that the request was unavailable due to compression and would automatically refill the aggregate - but I'm not sure where I read that. Maybe it was a Note; maybe it doesn't happen in a process chain; I'm just not sure.
    The better solution, when you regularly back out a request, is simply not to fully compress the aggregate, letting it follow the compression of the base cube, which I'm assuming you have set to compress requests older than XX days.

  • Non-compressed aggregates data lost after Delete Overlapping Requests?

    Hi,
    I am going to setup the following scenario:
    The cube is receiving the delta load from InfoSource 1 and the full load from InfoSource 2. Aggregates are created and initially filled for the cube.
    Now, the flow in the process chain should be:
    Delete indexes
    Load delta
    Load full
    Create indexes
    Delete overlapping requests
    Roll-up
    Compress
    In the Management screen of the cube, on the Roll-up tab, "Compress After Roll-up" is deactivated, so compression should take place only when the cube data is compressed (but I don't know whether this influences how the roll-up is done via the Adjust process type in a process chain - will the deselected checkbox really prevent compression of aggregates after roll-up, or does the checkbox influence the manual start of roll-up only?).
    Nevertheless, let's assume here that the aggregates will not be compressed until compression runs on the cube. The Collapse process in the process chain is parametrized so that the newest 10 requests are not compressed.
    Therefore, I expect that after the compression it should look like this:
    RNR | Compressed in cube | Compressed in Aggr | Rollup | Update
    110 |                    |                    | X      | F
    109 |                    |                    | X      | D
    108 |                    |                    | X      | D
    107 |                    |                    | X      | D
    106 |                    |                    | X      | D
    105 |                    |                    | X      | D
    104 |                    |                    | X      | D
    103 |                    |                    | X      | D
    102 |                    |                    | X      | D
    101 |                    |                    | X      | D
    100 | X                  | X                  | X      | D
    099 | X                  | X                  | X      | D
    098 | X                  | X                  | X      | D
    If you ask why the ten newest requests are not compressed: it is for the sake of being able to delete the Full load by request ID (yes, I know that 10 is too many...).
    My question is:
    What will happen during Delete Overlapping Requests in the next process chain run, once a new Full request with RNR 111 has already been loaded?
    Some BW people say that using Delete Overlapping Requests will cause the aggregates to be deactivated and rebuilt. I cannot afford this because of the long runtime needed to rebuild the aggregates from scratch. But I still think that Delete Overlapping Requests should work the same way on non-compressed requests as the InfoPackage-based deletion of similar requests does, shouldn't it? Since the newest 10 requests are not compressed and the only overlapping request is the last Full load (RNR 111), I assume it should simply delete the RNR 110 data from the aggregate by request ID and then do a regular roll-up of RNR 111, instead of rebuilding the aggregates. Am I right? Please CONFIRM or DENY. Thanks! If Delete Overlapping Requests would still lead to rebuilding the aggregates, then the only option would be to set up the InfoPackage to delete similar requests and to remove Delete Overlapping Requests from the process chain.
    I hope that my question is clear
    Any answer is highly appreciated.
    Thanks
    Michal

    Hi,
    If I understand your question correctly:
    The "Compress After Roll-up" option applies to the aggregates of the cube, not to the cube itself. When it is selected, the aggregates are compressed if and only if roll-up has been done on them; this does not affect compression of the cube itself, i.e. moving the data from the F to the E fact table.
    If it is deselected, that also does not affect compression of the cube, but the system will not check the roll-up status of the aggregates before compressing them.
    "Will the deselected checkbox really avoid compression of aggregates after roll-up OR does the checkbox influence the manual start of roll-up only?"
    The checkbox has no influence even on a manual start of the roll-up, i.e. compression of the aggregates will not start automatically after your roll-up; it is done together with the compression of the cube itself.
    As for the second question: I believe the aggregates will be deactivated when deleting an overlapping request if that particular request has been rolled up. The same happens with manual deletion: if you need to delete a request that has been rolled up and whose aggregates are compressed, you have to deactivate the aggregates and refill them.
    In other words, as long as a request is not compressed in the cube and the aggregates are not compressed, it is a normal request, and we can delete it without deactivating the aggregates.
    So in your case, I think there is no need to remove the step from the chain.
    Correct me if you find any issue.
    rgds,

  • Parent - Child aggregates

    Hello BW Experts,
    What are parent-child aggregates?
    Suggestions appreciated.
    Thanks,
    BWer

    How about an example:
    You have an InfoCube with characteristics A, B, C, D, F.
    Some of your queries only use characteristics A, B, C, D, so you create aggregate 1, which consists of those four.
    Other queries only use characteristics A, B, C, so you create aggregate 2, consisting of just those three.
    Now you load data to your InfoCube, and that data needs to be rolled up to your two aggregates.
    Conceivably, the updates could be pulled from the base InfoCube and applied to each aggregate separately. But in this case BW is smart enough to realize that the detailed data can first be rolled up to aggregate 1, and that the more summarized data from aggregate 1 can then be used to roll up into aggregate 2.
    Aggregate 1 is the parent; aggregate 2 is the child. You could create an aggregate 3 with just characteristics A, B, and it would be a child of aggregate 2.
    This parent/child relationship is dynamic - that is to say, BW figures it out; it is not anything you have to configure. And if you create additional aggregates or change the ones you have, BW figures out the parent/child relationships again after the changes.
    One of the icons on the aggregate maintenance screen shows this relationship.
    Pizzaman

  • Aggregate tables have many partitions per request

    We are having some performance issues with aggregate tables and DB partitions. We are on BW 3.5 SP15 and use Oracle DB 9.2.06. After some analysis, we can see that for many of our aggregates there are sometimes as many as a hundred partitions in the aggregate's fact table. If we look at the InfoCube itself, there are only a few requests (for example, 10). However, we do delete and reload requests frequently. We understood that there should be only one partition per request in the aggregate (the InfoCube is NOT set up for partitioning by anything other than request).
    We suspect the high number of partitions is causing some performance issues, but we don't understand why they are being created.
    I have even tried deleting the aggregate (all aggregate F tables and partitions were dropped) and reloading, and we still see many more partitions than requests. (We also notice that many of the partitions have a very low record count - often fewer than 10 records in a partition.)
    We'd like to understand what is causing this. Could line item dimensions or high cardinality play a role?
    On a related topic: we have also seen an awful lot of empty partitions in both the InfoCube fact table and the aggregate fact table. I understand this is probably caused by the frequent deletion and reloading of requests, but I am surprised that the system does not do a better job of cleaning up these empty partitions automatically. (We are aware of program SAP_DROP_EMPTY_FPARTITIONS.)
    I am including some files which show these issues via screenshots and partition displays to help illustrate the issue.
    Any help would be appreciated.
    Brad Daniels
    302-275-1980
    215-592-2219

    Ideally the aggregates should get compressed by themselves - there could be some change runs that have affected the compression.
    Check the following:
    1. See whether compressing the cube and rolling up the aggregates will merge the partitions.
    2. What is the delta mode for the aggregates (are you loading deltas to the aggregates, or full loads)?
    3. Aggregates are partitioned according to the InfoCube, and since you are partitioning by request, the same is done on the aggregates. Select another partitioning characteristic if possible, because it is generally recommended not to use request for partitioning.
    Arun
    Assign points if it helps..
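
    For the empty partitions mentioned above, the cleanup report SAP_DROP_EMPTY_FPARTITIONS named in the question can also be run from a tiny wrapper. This is just a sketch: the wrapper name is hypothetical, and since the report's selection screen (cube name, test mode) varies by release, the sketch opens it for manual input rather than guessing parameter names.

        * Minimal sketch: invoke the cleanup report named in the question.
        * SAP_DROP_EMPTY_FPARTITIONS is taken from the post above; its
        * selection screen differs between releases, so it is opened for
        * manual input instead of supplying parameter values blindly.
        REPORT z_run_drop_empty_fpartitions.

        SUBMIT sap_drop_empty_fpartitions
          VIA SELECTION-SCREEN
          AND RETURN.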

  • How to create Process chain for Aggregate for Master data

    Hello friends,
    I created aggregates on navigational attributes. They are working fine, but I need to know how to set up the roll-up for aggregates on navigational attributes.
    The point is that the master data changes frequently, e.g. for 0CUSTOMER.
    So could anyone send me step-by-step documentation on how to roll up aggregates on navigational attributes, or aggregates created on master data, and how to create process chains for the same?
    Because if the master data changes, simply rolling up the aggregates will not help. So we would need to build the process chain so that it deactivates the aggregate, reactivates it, and fills it up again.
    If I have misinterpreted something, please correct me.
    Please advise.

    Hello,
    The change run that you have to schedule in order to activate the master data will adjust the aggregates automatically. There is no need to deactivate them after master data loads.
    Best regards,
    Ralf

  • How to delete a request on an InfoCube and on aggregates?

    hi all,
    How do I delete a request after it has been rolled up into an aggregate?
    I want to delete a request from the InfoCube and have the deletion reflected in the aggregates.
    Can anyone explain a scenario for deleting a request from the cube such that the deletion is reflected in the aggregates?
    regds
    hari

    Hi hari,
    When you delete the request, the system automatically adjusts the aggregates. It is only after a load that you need to "roll up" the aggregates.
    Bye
    Dinesh

  • Error when structuring the index of aggregate 100165 for InfoCube 0TCT_C01

    Hi,
    I was getting the following error:
    Error when structuring the index of aggregate 100165 for InfoCube 0TCT_C01
    I followed all the steps from the linked thread:
    1. Delete indexes.
    2. Load the InfoCube.
    3. Create indexes.
    4. Roll up the aggregates.
    but I am still getting the same error (the roll-up of the aggregate is failing).
    After the failure, if I repeat the step, it executes successfully.
    I have to monitor the process chain, and at the end of its run it fails (daily).
    Can anyone help me with this issue (why is the chain failing at the roll-up)?

    Go to transaction RSDDV with your cube and identify the technical ID of your aggregate (a number such as 100045).
    In transaction SE11, go to table RSDDAGGRDIR and filter the selection with AGGRCUBE = 100045.
    Copy the AGGRUID (something like 3QL29Z7ZLO3BQZDSSLRU0MGOI).
    Then go to transaction SE37, enter function module RSDDK_AGGREGATES_FILL, and hit the single-test button:
    - In I_T_AGGREGATE, enter your AGGRUID.
    - In I_T_INFOCUBE, enter the technical name of your cube.
    Execute. This will fill your aggregate.
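    The same lookup and fill can also be wrapped in a small ABAP report. This is only a sketch: the function module and its I_T_AGGREGATE / I_T_INFOCUBE parameters come from the steps above, but the report name is hypothetical, and the line types of the two tables and the parameter kind (EXPORTING vs. TABLES) are assumptions; verify the interface in SE37 before running this anywhere.

        * Sketch only - check the RSDDK_AGGREGATES_FILL interface in
        * SE37 first; the line types below are assumptions.
        REPORT z_fill_single_aggregate.

        DATA: lt_aggruid TYPE TABLE OF rsddaggrdir-aggruid, " aggregate UIDs
              lt_cube    TYPE TABLE OF rsinfocube.          " cube names (assumed type)

        * Step 1: look up the aggregate UID for the technical ID from RSDDV.
        SELECT aggruid FROM rsddaggrdir
          INTO TABLE lt_aggruid
          WHERE aggrcube = '100045'.      " your aggregate's technical ID

        * Step 2: the base InfoCube of the aggregate.
        APPEND '0TCT_C01' TO lt_cube.     " replace with your cube's name

        * Step 3: fill the aggregate, as the SE37 single test does.
        * (Passed as EXPORTING here; switch to TABLES if SE37 shows
        * table parameters on your release.)
        CALL FUNCTION 'RSDDK_AGGREGATES_FILL'
          EXPORTING
            i_t_aggregate = lt_aggruid
            i_t_infocube  = lt_cube.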
    Hope this will help......
    Regards,
    Mahesh

  • How to make Attribute Change run alignment & Hierarchy changes in Aggregate

    Hello,
    I want to understand how attribute change run alignment and hierarchy changes work for an aggregate.
    I posted the same question previously, but there were no good answers and I was not able to understand them clearly.
    Suppose there is a process chain XXY which runs the attribute change run for master data 0SPELLING, and there is an aggregate TRT which includes 0SPELLING, Fiscal Period, Purchase Product, and Purchase Category.
    Please answer the following questions:
    1) Process chain XXY runs only the attribute change run alignment for 0SPELLING. Will this process chain automatically do the change run alignment for 0SPELLING in aggregate TRT? YES or NO?
    2) If yes, do we then just have to roll up aggregate TRT after process chain XXY completes its job?
    3) If no, what steps have to be done to make sure that aggregate TRT has the new, correct values for 0SPELLING?
    Please answer, and correct me if any of my questions are wrong.

    For example:
    You have 0SPELLING, which has attribute values x, y and z on day 1, with 10 records, and your aggregates are built on day 1 with the same values.
    On day 2 you have new attribute values y, z, s, d and new hierarchies, so you add new records.
    With the data load, the new master data is loaded in the modified version (M) and is not available for reporting. If you run the attribute change run, this modified version is activated, i.e. becomes the active version (A). The change run also does the alignment of the aggregate for the new attribute values and new hierarchy values.
    Then, in order for this data to be available for reporting, you need to roll up the aggregate:
    - If you roll up the aggregate before the attribute change run, the new data is not available for reporting.
    - If you roll up the aggregate after the attribute change run, the data is available for reporting.
    - If you do not roll up the aggregate, then even though the new data is in the DataProvider, it will not be available for reporting.
    This is how it works.

  • How does the attribute change run work for aggregates and master data?

    Hi
    Can anybody explain how the attribute change run works for master data?
    For example: 0SPELLING has master data. On day 1 there are 10 records; on day 2 there are 12 records. So with the attribute change run, the 2 new records get activated, while the values for those 12 records were added separately by the data load.
    Is this how it works?
    And how about aggregates which contain this master data?

    For example:
    You have 0SPELLING, which has attribute values x, y and z on day 1, with 10 records, and your aggregates are built on day 1 with the same values.
    On day 2 you have new attribute values y, z, s, d and new hierarchies, so you add new records.
    With the data load, the new master data is loaded in the modified version (M) and is not available for reporting. If you run the attribute change run, this modified version is activated, i.e. becomes the active version (A). The change run also does the alignment of the aggregate for the new attribute values and new hierarchy values.
    Then, in order for this data to be available for reporting, you need to roll up the aggregate:
    - If you roll up the aggregate before the attribute change run, the new data is not available for reporting.
    - If you roll up the aggregate after the attribute change run, the data is available for reporting.
    - If you do not roll up the aggregate, then even though the new data is in the DataProvider, it will not be available for reporting.
    This is how it works.

  • Creating aggregate based on plan cube

    Hi,
    For a standard basic cube, we can use aggregates to improve query performance, and a new request is ready for reading only after it has been rolled up into the aggregate.
    Now I have created an aggregate and done the initial fill-up for it; all the requests with green status have been rolled up into the aggregate. There is no problem with that.
    But I have one question: the most recent request is usually in yellow status, because this cube is ready for input, and this yellow request cannot be rolled up into the aggregate until it turns green. Yet in our testing, the query can read the required data from both the aggregate and this yellow request; that is to say, it seems a query based on this plan cube can summarize data from both the aggregate and the yellow request.
    Can anyone confirm that our testing is correct and that this is a specific property of the plan cube?
    Many Thanks
    Jonathan

    Hi Jonathan,
    The OLAP processor knows whether requests are already contained in an aggregate or not. Depending on the "actual data" setting (cf. Note 1136163), the query is also able to read data from the yellow request; this is automatically the case for input-ready queries.
    In fact, even the OLAP cache may be involved in reading the data; cf. Note 1138864 for more details.
    Regards,
    Gregor
