InfoCube Compression Policy

Hi,
Right now my client compresses InfoCubes every month, leaving the most recent 60 days uncompressed.
I am considering changing this policy so that we compress daily, leaving only the most recent 3 or 4 days uncompressed. This policy would not be in effect for InfoCubes where we load flat files, because we would not be able to remove those loads once they are compressed.
I've talked to a few colleagues, but some say they don't even compress regularly.
Does anyone see any issues with the policy/practice I want to set up?

I have 7 uncompressed requests in the cube. Reporting activity is low on Sundays, so compressions are scheduled for that time; our 24x7 support team runs them then.
There were cases where sales orders were changed and had to be re-extracted. We run reconciliations every weekend, and only once the numbers tally do we run the compression. If the reconciliation between R/3 and BI runs longer, compression is held back until it finishes, so that the delta can still be repeated; filling up the setup tables again would require downtime, which is to be avoided for business reasons.
We also have FI cubes that are compressed only once the month-end closing is over, as well as cubes with flat-file loads that are compressed daily.
FI cubes are compressed only once a month because no one looks at the numbers other than at month end!

Similar Messages

  • Infocube compression error CX_SQL_EXCEPTION

    Hi,
    We are encountering an InfoCube compression error (CX_SQL_EXCEPTION: parameter is missing).
    We have applied two notes, 1028847 and 973969, but they did not fix the error. In the system log we have the following error:
    ORA-00054: resource busy and acquire with NOWAIT specified.
    Every time the compression failed, we repeated it and it completed successfully on the second attempt.
    Does anyone know what we can do to fix this?
    Thanks!
    JXA

    Hello Girija,
    Please check OSS note 973969.
    Regards,
    Praveen

  • What is zero elimination in InfoCube compression?

    Can anybody please explain in detail what zero elimination is in InfoCube compression? When and why do we need to switch this on? I appreciate your help. Thank you.

    Hi Rafi,
       If you want to avoid the InfoCube containing entries whose key figures are zero values (in reverse posting for example) you can run a zero-elimination at the same time as the compression. In this case, the entries where all key figures are equal to 0 are deleted from the fact table.
    Zero-elimination is permitted only for InfoCubes, where key figures with the aggregation behavior ‘SUM’ appear exclusively. In particular, you are not permitted to run zero-elimination with non-cumulative values.
    More info at:
    http://help.sap.com/saphelp_nw04/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/content.htm
    Hope it Helps
    Srini
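    Conceptually, zero-elimination during compression just removes rows from the E fact table whose key figures are all zero. As a rough SQL sketch (hypothetical cube name ZSALES with hypothetical key-figure columns AMOUNT and QUANTITY; the real statement is generated by BW and is database-specific):

        DELETE FROM "/BIC/EZSALES"      -- E fact table of the hypothetical cube ZSALES
         WHERE "AMOUNT"   = 0           -- every key figure of the row is zero
           AND "QUANTITY" = 0;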

  • Performance Impact with InfoCube Compression

    Hi,
    Is there any delivered content which gives a comparative analysis of performance before InfoCube compression and after it? If not then which is the best way to have such stats?
    Thank you,
    sam

    The BW Technical Content cubes/queries can tell you if a query is performing better at different points in time. I always like to compare a volume of queries before and after, rather than look at a single execution. As mentioned, ST03 can provide info, as can RSRT.
    Three major components of compression can aid performance:
    Compression
    The compression itself - how many rows you end up with in the E fact table compared to what you had in the F fact table. This all depends on the data - some cubes compress quite a bit, others not at all. For example:
    Let's say you have a cube with a time grain of Calendar Month, and you load transactions to it daily. A particular combination of characteristic values occurs on a transaction every day, so after a month you have 30 transactions spread across 30 requests in the F fact table. Now you run compression - these 30 rows compress to just 1 row, and you have reduced the volume of data in your cube to about 3% of what it used to be. Queries should run much faster in this case. In real life I doubt you would see a 30-to-1 reduction, but a 2-to-1 or 3-to-1 is reasonable. It all depends on your data and your model (a rough SQL sketch of what compression does is at the end of this reply).
    Zero Elimination
    Some R/3 applications generate transactions where all the key figures are 0, or generate transactions that offset each other, netting to 0. Specifying zero elimination during compression gets rid of those records.
    Partitioning
    The E fact table can be partitioned on 0FISCPER or 0CALMONTH. If you have queries that restrict on those characteristics, the database can narrow in on just the partitions that hold the relevant data (usually referred to as partition pruning). If a query only goes after 1 month of data from a cube that has 5 years of data, this can be a big benefit.
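    In effect, compression aggregates the F fact table into the E fact table with the package (request) dimension set to 0. A rough SQL sketch of the idea (hypothetical cube ZSALES with one time dimension, one further dimension and one key-figure column AMOUNT; the real statement is generated by BW per database and uses a MERGE):

        INSERT INTO "/BIC/EZSALES" ("KEY_ZSALESP", "KEY_ZSALEST", "KEY_ZSALES1", "AMOUNT")
        SELECT 0,                             -- package/request dimension collapses to 0
               "KEY_ZSALEST",                 -- time dimension is kept
               "KEY_ZSALES1",                 -- other dimensions are kept
               SUM("AMOUNT")                  -- key figures are aggregated (SUM here)
          FROM "/BIC/FZSALES"                 -- F fact table holds one row per request
         GROUP BY "KEY_ZSALEST", "KEY_ZSALES1";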

  • No Marker Update in InfoCube compression

    Hi,
    Please explain how the 'No Marker Update' setting works in InfoCube compression for inventory management.
    Best Regards,
    Ramesh

    Marker update when uploading/compressing
    We will use an example to explain the procedure for a stock InfoCube when executing a query. The scenario is as follows:
    •     Current date: 31.03.2002
    •     You have set up an opening balance of 100 units on 01.01.2002 and loaded it into the stock InfoCube.
    •     Historical material movements from the three previous months (October 2001: 10 units; November 2001: 20 units; December 2001: 10 units) are loaded into BW.
    •     Since then, successive material movements have been transferred into BW in the delta process. The delta requests transferred at the end of January (20 units) and February (10 units) were already compressed after successful validation; the last delta request from the end of March (10 units) is still in the InfoCube in uncompressed form.
    To help explain the role of the marker (= reference point), the different upload steps are considered over time.
    After uploading the opening balance, the InfoCube looks like this:
    You can see that the opening stock is not assigned to the actual date, but posted to a point in infinity (0CALDAY = 31.12.9999, for example).
    After the three previous months have been uploaded and compressed, the InfoCube content looks like this:
    Note here that the marker value remains unchanged at 100 units. This can be achieved using the "No marker update" indicator during compression (see section 3.2.2, step 6). The marker is thus not changed.
    After successively uploading the deltas from January to March, of which only the first two are compressed, the InfoCube content has the following appearance:
    Compressing the requests for January and February executes a marker update, which can be seen by the marker now having the value 130 units. The values for March have not been included in the marker yet.
    Please go through the document:
    How To…Handle Inventory Management Scenarios in BW
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/f83be790-0201-0010-4fb0-98bd7c01e328
    Cheers
    Pagudala
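    To make the arithmetic concrete (my reading of the example above; the actual stock calculation is done by the OLAP processor at query time): after compressing the January and February deltas with marker update, the reference point holds 100 + 20 + 10 = 130 units. The March delta of 10 units is still uncompressed and therefore not included in the marker, so a query for the current stock on 31.03.2002 reads the marker plus the uncompressed delta, i.e. 130 + 10 = 140 units.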

  • Does the InfoCube compression process lock the InfoCube?

    HI All,
    First of all, thanks for your active support and cooperation.
    Does the compression process lock the cube? My doubt is: while the compression process is running on a cube, if I try to load data into the same cube, will it be allowed or not? Please reply as soon as you can.
    Many thanks in advance.
    Jagadeesh.

    hi,
    Compression is a process that removes the request IDs from the data and thereby saves space.
    When and why use InfoCube compression in real time?
    InfoCube compression does not create a new cube; it moves the data from the F fact table into the E fact table, aggregating records that are identical except for their request ID. Compressed InfoCubes require less storage space and are faster for retrieval of information. The catch is that once you compress, you can no longer delete the data by request ID, so you are safe as long as you don't have any errors in your model or your loads.
    This compression can be done through Process Chain and also manually.
    Check these Links:
    http://www.sap-img.com/business/infocube-compression.htm
    compression is done to increase the performance of the cube...
    http://help.sap.com/saphelp_nw2004s/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/frameset.htm
    http://help.sap.com/saphelp_erp2005vp/helpdata/en/b2/e91c3b85e6e939e10000000a11402f/frameset.htm
    InfoCube compression and aggregate compression are mostly independent.
    Usually, if you decide to keep the requests in the InfoCube, you can still compress the aggregates. If you need to delete a request, you just have to rebuild any aggregate that is already compressed. Therefore there are no problems in compressing aggregates, unless the rebuild of the aggregates takes a lot of time.
    It does not make sense to compress the InfoCube without compressing the aggregates. The idea behind compressing is to speed up InfoCube access by adding up all the data of the different requests. As a result you get rid of the request number; all other attributes stay the same. If there is more than one record per set of characteristics, the key figures are combined according to their aggregation behavior (SUM, MIN, MAX etc.). This reduces the number of records in the cube.
    Example (F fact table before compression):
    requestid   date       0material   0amount
    12345       20061201   3333        125
    12346       20061201   3333        -125
    12346       20061201   3333        200
    will result, after compression (request ID collapsed to 0), in:
    requestid   date       0material   0amount
    0           20061201   3333        200
    In this case 2 records are saved.
    But once the requestid is lost (due to compression) you cannot get it back.
    Therefore, once you have compressed the InfoCube, there is no sense in keeping the aggregates uncompressed. But as long as your InfoCube is uncompressed, you can always compress the aggregates without any problem other than the rebuild time of the aggregates.
    hope it helps..

  • Drawbacks of Infocube compression

    Hi Experts,
    Are there any drawbacks to InfoCube compression?
    Thanks
    DV

    Hi DV
    During the upload of data, a full request will always be inserted into the F-fact table. Each request gets
    its own request ID and partition (DB dependent), which is contained in the 'package' dimension. This
    feature enables you, for example, to delete a request from the F-fact table after the upload. However,
    this may result in several entries in the fact table with the same values for all characteristics except the
    Best Practice: Periodic Jobs and Tasks in SAP BW
    request ID. This will increase the size of the fact table and number of partitions (DB dependent)
    unnecessarily and consequently decrease the performance of your queries. During compression,
    these records are summarized to one entry with the request ID '0'.
    Once the data has been
    compressed, some functions are no longer available for this data (for example, it is not possible to
    delete the data for a specific request ID).
    Transactional InfoCubes in a BPS environment
    You should compress your InfoCubes regularly, especially the transactional InfoCubes.
    During compression, query has an impact if it hits its respective aggregate. As every time you finish compressing the aggregates are re-built.
    With non-cumulative InfoCubes, compression has an additional effect on query performance. Also, the marker for non-cumulatives in non-cumulative InfoCubes is updated. This means that, on the whole, less data is read for a non-cumulative query, and the reply time is therefore reduced.
    "If you are using an Oracle database as your BW database, you can also carry out a report using the relevant InfoCube in reporting while the compression is running. With other manufacturers’ databases, you will see a warning if you try to execute a query on an InfoCube while the compression is running. In this case you can execute the query once the compression has finished executing."
    Hope this may help you
    GTR

  • Non cumulative infocube compression

    Hi All,
    Presently we have a problem with non-cumulative InfoCube compression.
    The non-cumulative InfoCube contains 2 years of data, we are extracting data from two DataSources (2LIS_03_BX and 2LIS_03_BF), and it has not been compressed for those 2 years.
    Now we are going to do the compression as follows:
    2LIS_03_BX init load: compress with marker update (checkbox not selected).
    2LIS_03_BF init load: compress without marker update (checkbox selected).
    2LIS_03_BF delta loads: compress with marker update (checkbox not selected).
    2LIS_03_BX delta uploads: compress without marker update (checkbox selected).
    My doubt is: in between the delta loads there are some full uploads from the 2LIS_03_BF DataSource. How can I compress these full-upload requests from 2LIS_03_BF?
    Please help me, it is quite urgent.
    Thanks,
    Niranjan.

    Hi Niranjan,
    First of all, as I understood it, 2LIS_03_BX is the initial upload of stocks, so there is no delta load for this DataSource; it collects data from the MARC and MARD tables when the stock setup is run in R/3, and you have to load it just one time.
    If between the delta loads of 2LIS_03_BF you are loading full updates, you are duplicating material movement data. The idea of compression with marker update is that these movements affect the stock value in the query; that is also why you compress the delta init without marker update - those movements are already contained in the opening stock loaded with 2LIS_03_BX, so you don't want them to affect the stock calculation.
    You can refer to "How to Handle Inventory Management Scenarios in BW" for more detail on the topic.
    I hope this helps,
    Regards,
    Carlos.

  • Infocube compression report

    friends,
    I would like to know if there are any standard reports which can give me the list of InfoCubes that are not compressed yet.
    If there are none, can anyone give me the steps to achieve this, or ABAP code that could be written to extract this information?
    thanks

    Partitions on the E fact table may be more numerous, because queries usually contain time restrictions that reduce the number of partitions accessed (partition pruning); the partitions on the E fact table are by a time criterion (0CALMONTH or 0FISCPER).
    On the F fact table, every partition must be accessed for every query, because there is no effective restriction on the package-dimension partitioning key of the F fact table, and for load performance the indexes on the F table are all local indexes. This will cause unacceptable query response times if there are too many partitions.
    1) Use the report SAP_DROP_EMPTY_FPARTITIONS on an InfoCube to see how many partitions there are on a cube (works for aggregates as well) and how much data they contain.
    2) Have a look at the InfoCube's package dimension table (/BI[C|0]/D<cubename>P). The number of entries (minus 1 or 2) reflects the number of uncompressed requests.
    3) Use the report RSORAVDV to look at the partitioned tables in the Oracle database catalog. Use DBA / DBA_PART_TABLES, display TABLE_NAME and PARTITION_COUNT, and check whether there are tables with more than 10 partitions (PARTITION_COUNT > 9); restrict TABLE_NAME like '/BI%/F%' to see only the F fact tables.
    4) Use the program SAP_EXTEND_PARTITIONING_INF for E table information.
    Hope it helps
    Chetan
    @CP..
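    Steps 2) and 3) can be sketched as direct SQL against the Oracle catalog (hypothetical cube name ZSALES; DBA_PART_TABLES is a standard Oracle view, on other databases the catalog differs):

        -- Step 2: entries in the package dimension table; roughly (count - 1) uncompressed requests
        SELECT COUNT(*) FROM "/BIC/DZSALESP";

        -- Step 3: F fact tables with many partitions, i.e. many uncompressed requests
        SELECT TABLE_NAME, PARTITION_COUNT
          FROM DBA_PART_TABLES
         WHERE TABLE_NAME LIKE '/BI%/F%'
           AND PARTITION_COUNT > 9;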

  • Infocube Compression

    Hi All,
    Presently we have a problem with non-cumulative InfoCube compression.
    The non-cumulative InfoCube contains 2 years of data, we are extracting data from two DataSources (2LIS_03_BX and 2LIS_03_BF), and it has not been compressed for those 2 years.
    Now we are going to do the compression as follows:
    2LIS_03_BX init load: compress with marker update (checkbox not selected).
    2LIS_03_BF init load: compress without marker update (checkbox selected).
    2LIS_03_BF delta loads: compress with marker update (checkbox not selected).
    2LIS_03_BX delta uploads: compress without marker update (checkbox selected).
    My doubt is: in between the delta loads there are some full uploads from the 2LIS_03_BF DataSource. How can I compress these full-upload requests from 2LIS_03_BF?
    Please help me, it is quite urgent.
    Thanks,
    Niranjan.

    pamireddy,
    The best way to see whether it is okay is to check the aggregates. Especially since you are using non-cumulative key figures, check whether the rollup to the aggregates is happening fine - the aggregates are compressed automatically; the only thing is that there is no delta rollup for aggregates with non-cumulative key figures.
    If there are no aggregates, create one basis aggregate, roll it up and test. If it goes through fine and the query hits the aggregate and behaves fine, then you can go ahead with compression of the base data.
    Arun
    Assign points if useful

  • Non-cumulative InfoCube compression very slow

    See subject.
    2LIS_03_BX has loaded and compressed (30,000 rows, with marker update) without any problems.
    2LIS_03_BF/UM has loaded, but the compression (11 million rows, without marker update) does not finish (it takes a very long time).
    Sometimes the compression (BF/BX) processes fine (after deleting all data from the InfoCube and loading again (BX/BF/UM)), but not always.
    Version: BW 3.5, patch level 17.
    Please help me!

    Hi,
    Do you have a large number of records for compression, i.e. is the data volume in your cube huge?
    Try to delete the indexes, load the data, compress the InfoCube and then regenerate the indexes.
    Try to refresh the statistics of the cube at least once a week.
    Compress data at regular intervals, e.g. weekly or daily.
    If you wait for a month, a large amount of data accumulates; in that case try compressing 5 to 10 requests at a time. Sometimes compressing a larger number of requests may result in a dump because temporary space is unavailable.
    Hope this helps.
    Thanks,
    Arun

  • InfoCube compression has already been running for 7 days

    Dear BI Experts,
    In our system a BI_COMPxxx job has been running for more than 610.000 seconds (7 days). I want to know whether I can stop this job.
    Some observations:
    - At the start of the compression of the InfoCube, 2 aggregates were not rolled up/compressed.
    - Daily loads to the InfoCube via process chains fail at the Delete Index variant due to locking (expected behaviour).
    - The F-table of the infocube contains 21.674.550 records.
    - The E-table (compression target) contains 1.402.950 records at the moment.
    - There are no ST22 and SM21 entries (as the job is still running)
    - During the last 4 hours this number of records in the E-table has not grown.
    - In the job log there are no entries after 1,5 days of runtime and the last time entry shows nothing unusual: MERGE /*+ USE_NL ( FACT E ) */ INTO "/BIC/EZCOPABUD" E USING ( SELECT /*+ PARALLEL ( FACT , 3 ) */ 0 "PDIMID" , "KEY_ZCOPABUDT" , "KEY_ZCOPABUDU" , "KEY_ZCOPABUD1" , "KEY_ZCOPABUD2" , "KEY_ZCOPABUD3" , "KEY_ZCOPABUD4" , "KEY_ZCOPABUD5" , "KEY_ZCOPABUD6" , "KEY_ZCOPABUD7" , "KEY_ZCOPABUD8" , "KEY_ZCO... etc. (goes on for 7 pages/screens)
    My main questions:
    - How unsafe is it to stop this job under these conditions?
    - And what may be the effect of stopping this job on data consistency?
    Other questions:
    - Is the ratio between the F and E tables a normal ratio when the compression is finished, or does it seem like the compression has failed? There are a lot of requests with 0 records loaded.
    - Is this a documented error? Are there any notes, procedure, ... available? I could not find any.

    Hi
    Did you take a count of the records in the E table before you started the compression? I once ran a compression that took some 21 odd hours. At that time I noticed that the records in the E table were not increasing as expected, and my observation was that the commit on the database level happens only once the compression job has finished completely.
    At that time I checked the processing in SM66 (double-clicking the relevant process, identified via the PID of the job responsible for the compression) and it was doing something on the SQL level.
    What I can say at this point is that if you stop your process, everything will be rolled back and your cube should be safe, but probably nothing will be compressed and the status will be the same as it is now. This is because SAP generally creates a rollback point before doing any task, and if the task is not successful it rolls back to the previous status.
    That was my experience; check SM66 for the processing of your job and see how to proceed.
    Another idea is to compress your cube in smaller intervals. It seems to me that you have given the top request number in the selection, so it is now compressing the whole cube, and that is why it is taking so long.
    Try to confirm how SAP behaves if you stop the job; in my view nothing should happen and the data state should remain the same. If that is the case, I would suggest you compress your cube 5 to 10 requests at a time at regular intervals; each job will then take less time and the rest of the processing will not be hindered.
    Please see how to proceed now.
    I could only help this much.
    regards
    Vishal
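    If you want to see whether the long-running MERGE is actually making progress on the Oracle side (rather than only watching the E table row count, which may change only at commit time as discussed above), a rough sketch in SQL would be the following query on a standard Oracle view; filter on the SID of the compression work process shown in SM66/SM50:

        -- progress of long-running operations (e.g. the compression MERGE)
        SELECT SID, OPNAME, TARGET, SOFAR, TOTALWORK, TIME_REMAINING
          FROM V$SESSION_LONGOPS
         WHERE TOTALWORK > 0
           AND SOFAR < TOTALWORK;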

  • InfoCube compression with zero elimination

    Hi all,
    I have a cube which has been compressed daily for almost a year, but without zero elimination.
    Now I have a requirement to compress with zero elimination. I am planning to do this after the next load into the cube. So once I compress with the zero elimination checkbox ticked, will all the data records be compressed with zero elimination? Will this cause any problem?
    What is the best way to do it?
    I cannot delete the contents of the cube.
    Expecting a reply ASAP.
    Regards,
    Adarsh

    I hope nothing will happen to the data values; they will remain the same. You are just removing the zeros. If the zero records are kept, they also have to be aggregated before the value is shown in the report.
    So you can go ahead with zero elimination.
    Regards,
    Vikram
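    As a rough way to judge the potential effect beforehand, you could count how many all-zero rows are currently sitting in the E fact table (hypothetical cube name ZSTOCK and hypothetical key-figure columns AMOUNT and QUANTITY; the real column names depend on your cube):

        SELECT COUNT(*)
          FROM "/BIC/EZSTOCK"
         WHERE "AMOUNT"   = 0      -- rows where every key figure is zero
           AND "QUANTITY" = 0;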

  • System crash during infoCube compression

    Hello Community,
    Our system crashed due to a failed memory DIMM while two fact table compression jobs were active.
    Because it was a hard crash without an orderly shutdown, any transactions in SAP not yet committed were lost.
    However, the Oracle database did a successful recovery when the system was restarted, so no committed transactions were lost.
    Is it possible that data was lost, corrupted, or duplicated in those cubes as a result of the interrupted compression jobs?

    Hi Fredrik,
    you can either:
    a) remove all necessary parameters from the paramfile using the relevant dbmcli commands (param_directdel)
    b) perform a parameterfile recovery using dbmcli (param_restore)
    You're okay as long as nothing was written on the new volume.
    When using a) you need to make sure you delete all necessary parameter entries belonging to the datavolume (size, type etc.).
    When you're restoring to an old parameterfile, you need to make sure the parameterfile's only difference is the datavolume extension as the last change, otherwise you'd possibly reset more than you'd like. To check the parameterhistory, you can for example take a look at the <DBSID>.pah file in the /sapdb/data/config directory.
    If you're unsure what to do, i.e. if it's a productive system and you're an SAP customer: open an OSS message.
    Regards,
    Roland

  • Unable to compress a request.

    Hi experts,
    I am unable to compress a request manually, and it is also not compressed when scheduled in the process chain.
    When I open the Collapse tab it says "No valid request exists for compressing".
    Then I entered the request ID manually; the job finishes without compression, and in the job log (SM37) I found:
    No requests needing to be aggregated have been found in InfoCube
    Compression not necessary; no requests found for compressing
    Job finished
    within 10 seconds.
    The process chain also completes successfully without compression.
    I don't know why it is not compressing the requests.
    Thanks in advance.
    Regards,
    <BMP>

    Hi,
    Note: is there a tick mark against your request on the Requests tab?
    If you can see your request on the Requests tab as not compressed, then delete this request and reconstruct it from the Reconstruction tab. But please check the Reconstruction tab first to make sure the same request is available there.
    After the above activity, you can try to compress this request.
    Hope this helps.
    Regards,
    Suman
