Cube Compression and Aggregation

Hello BW Gurus,
Can I first compress my InfoCube data and then load data into the aggregates?
The reason being that when the InfoCube is compressed, the request IDs are removed.
Are the request IDs necessary for data to be transferred to the aggregates, and later on for aggregate compression?
Kindly suggest.
regards,
TR PRADEEP

Hi,
just to clarify this:
1) you can compress your infocube and then INITIALLY fill the aggregates. The Request information is then no longer needed.
2) But you can NOT compress requests in your infocube when your aggregates are already filled and these requests have not yet been "rolled up" into the aggregates (the system prohibits this action anyway).
Hope this helps,
Klaus

Similar Messages

  • Difference between Compression and Aggregation

    Hi,
      Can anybody explain the difference between compression and aggregation? Performance-wise, which is better? Please explain it in detail.
      Thanks,
      Chinna

    Hi,
    suppose you are having three characteristics in a cube, say X, Y, Z.
    Records that have the same combination of these characteristics but were loaded with different requests will not get aggregated.
    So when you go for compression, the system deletes the request number and aggregates the records that have the same combination of characteristic values.
    Coming to aggregates: if you build an aggregate on a characteristic, say 'X', it aggregates the records that have the same value for that particular characteristic.
    ex: say you are having the records
    x1, y1, z1, ... (some key figures)
    x1, y2, z1, ...
    x1, y1, z1, ...
    x3, y3, z3, ...
    If you compress them, you will get three records.
    If you go for an aggregate based on the characteristic 'X', you will get two records.
    So aggregates give a more aggregated level of data than compression.
    regards,
    haritha.
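    In SQL terms, the two operations haritha describes are roughly two GROUP BY statements at different levels. A sketch with hypothetical table and column names, not the actual BW-generated SQL:

        -- Compression: group by ALL characteristics; only the request ID is dropped.
        -- The four example records collapse to three.
        SELECT x, y, z, SUM(kf) AS kf
        FROM   fact_table
        GROUP  BY x, y, z;

        -- Aggregate on characteristic X: group by X alone.
        -- The four example records collapse to two (x1 and x3).
        SELECT x, SUM(kf) AS kf
        FROM   fact_table
        GROUP  BY x;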

  • Cube Compression and InfoSpoke Delta

    Dear Experts,
    I submitted a message ("InfoSpoke Delta Mechanism") the other day regarding a problem I am having with running an InfoSpoke as a delta against a cube, and I didn't receive an answer that fixed the problem. Since that time I have been told that we COMPRESS the data in the cube after it is loaded, and it is after the compression that I have been trying to run the delta InfoSpoke. As explained earlier, there have been 18 loads to the cube since the initial (full) run of the InfoSpoke. I am now trying to run the InfoSpoke in delta mode and get the "There is no new data" message. Could the compression of the cube be causing this message to appear when I try to run the InfoSpoke after the cube load and compression? If someone could also explain what happens in a compression, that would be helpful.
    Your help is greatly appreciated.
    Thank you,
    Dave

    You need uncompressed requests to feed your deltas. InfoCube deltas use request IDs. Compressed requests cannot be used, since their request IDs are set to zero.
    You need to resequence the events.
    1. Load into infocube.
    2. Run infospoke delta to extract delta requests.
    3. Compress.
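    Conceptually, the delta extraction selects only requests newer than the last one extracted, which is why compression breaks it. A simplified sketch with hypothetical names (the real InfoSpoke tracks extracted requests in its own status tables):

        -- Delta: pick up only rows from requests added since the last run.
        SELECT *
        FROM   fact_table
        WHERE  request_id > :last_extracted_request_id;

        -- After compression every row carries request_id = 0, so no row
        -- qualifies as "new" and the InfoSpoke reports "There is no new data".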

  • Cube compression and DB Statistics

    Hi,
       I am going to run cube compression on a number of my cubes and was wondering about a few facts regarding DB statistics, such as:
    1) How does the percentage of InfoCube space used for DB stats help? I know that the higher the percentage, the bigger the statistics and the faster the access, but the stats run takes longer. Would increasing the default value of 10% make any difference or bring overall performance improvements?
    2) I will compress the cubes on a weekly basis, and most of them will have around one request per day, so I will probably compress 7 requests per cube. Is it advisable to run stats on a weekly basis as well, or can they be run bi-weekly or monthly? And what factors does that depend on?
    Thanks. I think we can have a good discussion on these, apart from points.

    What DB are we talking about?
    Oracle provides so many options on when and how to collect statistics, even allowing Oracle itself to make the decisions.
    At any rate, there is no point in collecting statistics more than weekly if you are only going to compress weekly. Is your plan to compress all the requests when you run, or are you going to leave the most recent requests uncompressed in case you need to back one out for some reason? We compress weekly, but only requests that are more than 14 days old, so we can back out a request if there is a data issue.
    As far as the sampling percentage goes, 10% is good, and I definitely would not go below 5% on very large tables. My experience has been that sampling at less than 5% results in useful indexes not being selected. I have never seen a recommendation below 5% in any data warehouse material.
    Are you running the statistics on the InfoCube by using the performance tab option or a process chain? I cannot speak to the process chain statistics approach, though I imagine it is similar, but I know that when you run the statistics collection from the performance tab, it not only collects the stats on the fact and dimension tables, it also goes after all the master data tables for every InfoObject in the cube. That can cause some long run times.
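    On Oracle, the sampling percentage discussed above corresponds to the estimate_percent parameter of DBMS_STATS. A minimal sketch with hypothetical schema and table names (BW normally drives this itself via the performance tab or a process chain step):

        BEGIN
          DBMS_STATS.GATHER_TABLE_STATS(
            ownname          => 'BW_SCHEMA',  -- hypothetical schema
            tabname          => 'SALES_FACT', -- hypothetical fact table
            estimate_percent => 10,           -- the 10% sampling rate discussed above
            cascade          => TRUE);        -- gather index statistics as well
        END;
        /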

  • Cube compression and partitioning

    Hello BW Experts,
    Is it only possible to partition the cube after cube compression? That is, can we only partition the E table and not the F table?
    Thanks,
    BWer

    InfoCube Partitioning is not supported by all DBs that BW runs on - the option is greyed out for DBs that do not support it.
    You can partition on 0FISCPER or 0CALMONTH, although if you have a need to partition on something else it might be worth a customer message to SAP. You should review any proposed partitioning scheme with your DBA if you are not familiar with the concepts and DB implications.
    The E fact table is what gets partitioned using this option. The F fact table is already partitioned by request ID. In 3.x, the partitioning you specify for the InfoCube is also applied to any aggregate E tables that get created if the partitioning characteristic (0FISCPER/0CALMONTH) is in that aggregate. In NW2004s, you will have a choice whether you want the partitioning to apply to the aggregate or not.
    NW2004s also provides some additional partition tools, e.g. the ability to change the partitioning.
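    To picture what this means at the database level: on Oracle, for example, E-table partitioning by 0CALMONTH amounts to range partitioning on the month key. A hand-written sketch with hypothetical table and column names (BW generates the real DDL itself, keyed on SIDs):

        CREATE TABLE sales_e_fact (
          calmonth VARCHAR2(6),
          qty      NUMBER,
          rev      NUMBER
        )
        PARTITION BY RANGE (calmonth) (
          PARTITION p_200601 VALUES LESS THAN ('200602'),
          PARTITION p_200602 VALUES LESS THAN ('200603'),
          PARTITION p_rest   VALUES LESS THAN (MAXVALUE)
        );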

  • Cube operator and aggregation of measures on existing records

    Hi there,
    I'm using a cube operator with loading type LOAD in order to perform a merge on a fact table.
    We have the following situation:
    A record already exists on the fact table for a customer and product with qty 8.
    The incoming record has qty 2, so I tried using the cube operator, hoping that because the aggregation specified on the qty measure was SUM and set to solve the cube, the new qty would be 10.
    However, I looked at the SQL generated, and the record simply gets updated with the new qty rather than adding the new qty to the existing qty.
    I can achieve our aim by simply reading for any existing record and adding the new record's qty to the existing record's qty, but I was hoping the cube operator would do this for me.
    Has anybody achieved anything similar using simply the cube operator?
    Many Thanks

    Do you mean that you want to load the data into the AW using the AVG function instead of SUM? If this is true, are you planning to use AVG as the aggregation operator in the AW as well? Will this give the answer you want? The code currently defaults to SUM for the load even if you aggregate the cube using AVG, since AVG of AVG is not usually what people want. If you want to do it anyway, it is possible if you hand-edit the XML to add an attribute named AggregationMethod to the CubeMap. E.g.
        <CubeMap
          Name="MAP1"
          Query="SALES_FACT"
          AggregationMethod="AVG">
    But the simpler way to do it is to define a SQL view that aggregates to the load level using AVG and then map the cube to the view.
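    A minimal sketch of that view approach, with hypothetical table and column names:

        -- Pre-aggregate to the load level with AVG, then map the cube to this view.
        CREATE OR REPLACE VIEW sales_fact_avg AS
        SELECT customer_id,
               product_id,
               AVG(qty) AS qty
        FROM   sales_fact
        GROUP  BY customer_id, product_id;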

  • InfoCube roll-up, compression and aggregation

    Hi,
       Can anyone explain the following InfoCube concepts:
    F-table -->  F-Aggregate table
    F-table --> E-Table
    F-Aggregate table --> E-Aggregate table
    With some examples.
    Thanks in advance
    Regards,
    Swarnalatha.M

    Hi Swarna,
    An InfoCube contains two fact tables:
    1. The F fact table  2. The E fact table
    Whenever you run compression, the request ID is deleted and the data, aggregated based on characteristic values, goes and sits in the E fact table.
    Ex:
    The F fact table contains:
    Request ID   Customer No   Material No   Price   Qty   Rev
    1516         10000         101           1000    2     2000
    1517         10001         101           1000    2     2000
    1518         10000         101           1000    2     2000
    Once you have done the compression, the E table looks like this:
    Customer No   Material No   Price   Qty   Rev
    10000         101           1000    4     4000
    10001         101           1000    2     2000
    Aggregate table: an aggregate is also a smaller cube. It does not contain fact tables of its own, but it aggregates your data based on characteristic values.
    An aggregate table (here, built without Material No) looks like this:
    Customer No   Price   Qty   Rev
    10000         1000    4     4000
    10001         1000    2     2000
    I think you can understand the above example.
    Thanks & Regards,
    Venkat.
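    In SQL terms, the compression step Venkat describes behaves roughly like the following. A sketch with hypothetical simplified tables; the real F and E fact tables key on dimension IDs rather than on the characteristic values directly:

        -- Move the F-table data to the E table: the request ID is dropped and
        -- rows with identical characteristic combinations are summed.
        INSERT INTO e_fact (customer_no, material_no, price, qty, rev)
        SELECT customer_no, material_no, price, SUM(qty), SUM(rev)
        FROM   f_fact
        GROUP  BY customer_no, material_no, price;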

  • Cube compression and request IDs

    Can we decompress a compressed cube using the request IDs?
    What happens to the request IDs when the cube gets compressed?
    rgds

    Hi Nitin,
    when you load data into the InfoCube, entire requests can be inserted at the same time.
    Each of these requests has its own request ID, which is included in the fact table in the packet dimension. This makes it possible to pay particular attention to individual requests. One advantage of the request ID concept is that you can subsequently delete complete requests from the InfoCube.
    However, the request ID concept can also cause the same data record (all characteristics agree, with the exception of the request ID) to appear more than once in the fact table. This unnecessarily increases the volume of data, and reduces performance in Reporting, as the system has to aggregate using the request ID every time you execute a query.
    Using compression, you can eliminate these disadvantages and bring data from different requests together into one single request (request ID 0).
    This function is critical, as the compressed data can no longer be deleted from the InfoCube using its request IDs.
    Hope it's clearer now (and don't forget to assign some points by clicking on the star next to the contributors that helped you!)
    Bye,
    Roberto
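    Roberto's point about deletion can be put in SQL-like terms (hypothetical simplified schema; BW actually resolves request IDs through the package dimension):

        -- Before compression: a single load can be removed selectively.
        DELETE FROM fact_table
        WHERE  request_id = :request_to_back_out;

        -- After compression every row carries request_id = 0, so no predicate
        -- can isolate a single load any more.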

  • Compress and rollup the cube

    Hi Experts,
    Do we have to compress and then roll up the aggregates? What happens if we roll up before compression of the cube?
    Raj

    Hi,
    Data is rolled up into the aggregates request by request. So once the data is loaded, the request is rolled up into the aggregates to fill them with the new data; after compression, the request is no longer available.
    Whenever you load data, you do a roll-up to fill all the relevant aggregates.
    When you compress the data, all request IDs are dropped.
    So when you compress the cube, the "COMPRESS AFTER ROLLUP" option ensures that all the data is rolled up into the aggregates before the compression is done.
    hope this helps
    Regards,
    Haritha.
    Edited by: Haritha Molaka on Aug 7, 2009 8:48 AM

  • Performance of my query: based on cube or ODS?

    hi all,
    How do I determine the performance of a query based on a cube versus an ODS? I have a requirement involving a flat-file extraction; the extraction happens only once and there are few records. I need to work out whether my query will be faster based on a cube or on an ODS.
    Can anyone let me know how to measure the performance of my query based on a cube and on an ODS, and how to find out which one will be faster? I need to explain the whole process of either loading the data directly to an ODS and reporting from there, or loading the data directly to a cube and reporting from the cube.
    Thanks
    haritha

    Hi,
    An ODS is a flat, two-dimensional table, so avoid reporting on an ODS.
    A cube is multidimensional; for analysis purposes, go for reporting on a cube.
    Records in an ODS are overwritten, whereas in a cube records are aggregated.
    You can also run compression on a cube, which will increase query performance, so data retrieval from a cube is faster.
    Thanks

  • Cube Compression & Process Chains

    Hello Friends
    A few questions, as I am a beginner:
    1) What is the entire concept behind cube compression? Why is it preferred for delta uploads and not for full uploads?
    2) What do we mean by deleting and creating indexes using process chains?
    3) What is meant by the process chain step "DB Statistics Refresh"? Why do we need it?
    Any help is appreciated. Points will be generously assigned.
    Thanks and Regards
    Rishi

    Hello Rishi,
    As you may know, an InfoCube consists of fact tables and dimension tables. The fact table holds all key figures and the corresponding dimension keys; the dimension tables map dimension keys to InfoObject values.
    Now, there is not only one fact table but two: the F table and the E table. The difference from a technical point of view is just one InfoObject: 0REQUID, the request number. This InfoObject is missing in the E table. As a result, different records in the F table can be aggregated into one record in the E table if they have the same key but were loaded by different requests.
    As you may know, you can delete any request from an InfoCube by selecting its request number. And here is the disadvantage of the E table: as there is no request number, you cannot delete a request from this table.
    When data is loaded into an InfoCube, it is stored in the F table. By compressing the InfoCube, records are moved into the E table. Because of the disadvantage of the E table, it can be defined per InfoCube whether and when data is to be moved.
    More information can be found here: http://help.sap.com/saphelp_nw70/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/frameset.htm
    An index is a database mechanism to accelerate access to single records within a table. In BW, indexes are used to increase reporting speed.
    Whenever data in a table is added or deleted (in our case, loaded), the index has to be modified. Depending on the amount of change in the table, it can be less time-consuming to delete the index, load without an existing index, and rebuild the index afterwards. This can be done in process chains.
    DB statistics are something specific to the database (Oracle, for example); as far as I know (I do not work with Oracle), they are used by the optimizer to build efficient execution plans for the SQL commands that BW reports need.
    I hope that these explanations are helpful.
    Kind regards,
    Stefan
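    The index handling Stefan describes can be pictured as the following sequence. A sketch with hypothetical Oracle index and table names; the "delete index" and "create index" process chain steps issue the equivalent commands for you:

        -- Before a large load: drop the secondary index so inserts run faster.
        DROP INDEX sales_fact_idx;

        -- ... load the data packets ...

        -- After the load: rebuild the index for reporting.
        CREATE BITMAP INDEX sales_fact_idx
          ON sales_fact (customer_dimid);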

  • Data in the Cube not getting aggregated

    Hi Friends
    We have Cube 1 and Cube 2.
    The data flow is represented below:
    R/3 DataSource -> Cube1 -> Cube2
    In Cube1, data is stored by calendar day.
    Cube2 has calendar week.
    In the transformation between Cube1 and Cube2, the calday of Cube1 is mapped to the calweek of Cube2.
    In Cube2, when I upload data from Cube1, the key figure values are not getting summed.
    EXAMPLE: Data in Cube 1
    MatNo   CustNo   Qty   CalDay
    10001   xyz      100   01.01.2010
    10001   xyz      100   02.01.2010
    10001   xyz      100   03.01.2010
    10001   xyz      100   04.01.2010
    10001   xyz      100   05.01.2010
    10001   xyz      100   06.01.2010
    10001   xyz      100   07.01.2010
    Data in Cube 2:
    MatNo   CustNo   Qty   CalWeek
    10001   xyz      100   01.2010
    10001   xyz      100   01.2010
    10001   xyz      100   01.2010
    10001   xyz      100   01.2010
    10001   xyz      100   01.2010
    10001   xyz      100   01.2010
    10001   xyz      100   01.2010
    But the expected output should be:
    MatNo   CustNo   Qty   CalWeek
    10001   xyz      700   01.2010
    How do I achieve this?
    I checked in the transformation; all key figures are maintained with aggregation "summation".
    regards
    Preetam
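    For reference, the summation Preetam expects from the transformation corresponds to this grouping (hypothetical staging table and column names):

        -- Seven daily rows of qty 100 in week 01.2010 become one row with qty 700.
        SELECT matno, custno, SUM(qty) AS qty, calweek
        FROM   cube1_data
        GROUP  BY matno, custno, calweek;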

    Just now I performed a consistency check for the cube.
    I am getting the following warnings:
    Time characteristic 0CALWEEK value 200915 does not fit with time char 0CALMONTH val 0
    Consistency of time dimension of InfoCube &1
    Description
    This test checks whether or not the time characteristics of the InfoCube used in the time dimension are consistent. The consistency of time characteristics is extremely important for non-cumulative Cubes and partitioned InfoCubes.
    Values that do not fit together in the time dimension of an InfoCube result in incorrect results for non-cumulative cubes and InfoCubes that are partitioned according to time characteristics.
    For InfoCubes that have been partitioned according to time characteristics, conditions for the partitioning characteristic are derived from restrictions for the time characteristic.
    Errors
    When an error arises, the InfoCube is marked as a cube with an inconsistent time dimension. This has the following consequences:
    The derivation of conditions for partitioning criteria is deactivated on account of the non-fitting time characteristics. This usually has a negative effect on performance.
    When the InfoCube contains non-cumulatives, the system generates a warning for each query indicating that the displayed data may be incorrect.
    Repair Options
    Caution
    No action is required if the InfoCube does not contain non-cumulatives or is not partitioned.
    If the Infocube is partitioned, an action is only required if the read performance has gotten worse.
    You cannot automatically repair the entries of the time dimension table. However, you are able to delete entries that are no longer in use from the time dimension table.
    The system displays whether the incorrect dimension entries are still being used in the fact table.
    If these entries are no longer being used, you can carry out an automatic repair. In this case, all time dimension entries not being used in the fact table are removed.
    After the repair, the system checks whether or not the dimension is correct. If the time dimension is correct again, the InfoCube is marked as an InfoCube with a correct time dimension once again.
    If the entries are still being used, use transaction Listcube to check which data packages are affected.  You may be able to delete the data packages and then use the repair to remove the time dimension entries no longer being used. You can then reload the deleted data packages. Otherwise the InfoCube has to be built again.

  • Effect of Cube Compression on BIA Indexes

    What effect does cube compression have on a BIA index?
    Also, does SAP recommend rebuilding indexes on some periodic basis, and can we automate index deletion and rebuild processes for a specific cube using the standard process chain variants or programs?
    Thank you

    Compression: DB statistics and DB indexes for the InfoCubes are less relevant once you use the BI Accelerator.
    In the standard case, you could even completely forgo these processes. But please note the following aspects:
    Compression is still necessary for inventory InfoCubes, for InfoCubes with a significant number of cancellation requests (i.e. high compression rate), and for InfoCubes with a high number of partitions in the F-table. Note that compression requires DB statistics and DB indexes (P-index).
    DB statistics and DB indexes are not used for reporting on BIA-enabled InfoCubes. However for roll-up and change run, we recommend the P-index (package) on the F-fact table.
    Furthermore: up-to-date DB statistics and (some) DB indexes are necessary in the following cases:
    a) data mart (for mass data extraction, BIA is not used)
    b) real-time InfoProviders (with most-recent queries)
    Note also that you need compressed and indexed InfoCubes with up-to-date statistics whenever you switch off the BI accelerator index.
    Hope it Helps
    Chetan
    @CP..

  • Inventory cube compression

    Hi
    We have been live with inventory cube filling and deltas of material movements for the past year. However, we have not automated compression of the cube with marker update. What are the steps to automate roll-up and compression of the inventory cube, and is this recommended as a standard practice?

    Dear Anil
    I have followed the same procedure
    i.e.
    BX-init: Compression with marker update
    BF-init: Compression without marker update
    UM-init: Compression without marker update
    BF-delta loads: Compression with marker update
    UM-delta loads: Compression with marker update
    But surprisingly the values are not reflecting correctly
    Thanks & Regards
    Daniel

  • How data updates when there are two planning cubes in one aggregation level

    Suppose we have two planning cubes and a MultiProvider that includes these two planning cubes,
    and we have an aggregation level defined based on this MultiProvider.
    If the characteristics and key figures defined in the aggregation level are all included in both planning cubes, and we plan data with a planning query based on this aggregation level, which planning cube will be updated with the plan data, or will both of them be?

    Hi Wang,
    Yes, as Bindu said, you need to restrict the layout based on the InfoProvider.
    Let's assume that we have a MultiProvider M1 and it has two planning cubes, cube1 and cube2.
    In case you develop a layout for updating cube1, give the restriction in the filter section as InfoProvider = cube1,
    or vice versa.
    If your layout must input values for both cubes simultaneously, then create two restricted key figures, each having a restriction for cube1 and cube2 respectively.
    So when the user enters a value for the first restricted key figure it will get saved in cube1, and when he enters a value for the second restricted key figure it will get saved in cube2.
    This is how it works. I guess this was useful to you.
    Regards,
    Shafi.
