0IC_C03 cube compression

Hello gurus,
I want to compress the stock cube (0IC_C03) to address performance issues.
Can you please confirm the process? I'm not sure about steps 3 and 4.
1- Delete the data in the cube
2- Start the delta DTP from the ODS to the cube
3- After the DTP execution completes, compress the cube with "No Marker Update" checked
4- After the compression completes, set "No Marker Update" unchecked
5- Start the delta DTP
Here is the model we have for the stock cube: [model screenshot not reproduced]
There are many documents about this issue. For example, I read OSS Note 1548125 - Interesting Facts about Inventory Cubes.
It is very useful, but the stock scenario is a little complicated. That's why I wanted to create this discussion.
I use aggregates on my cube. Is that a problem when compressing the cube?

Thanks for your messages.
The data in the cube is verified; we have the historical movements.
I have the ODS with the material movements (I have the related key fields).
So are steps 3 and 4 confirmed?
3- After the DTP execution completes, compress the cube with "No Marker Update" checked
4- After the compression completes, set "No Marker Update" unchecked
Suhas Karnik, I guess you suggest the inverse way:
"If you have ensured the above, perform setup, do a full load of all the material movements and compress with marker update (i.e. checkbox unticked). If you are splitting the data loads, ensure that you load the movements chronologically (i.e. load the oldest movements first)."

Similar Messages

  • Effect of Cube Compression on BIA index's

    What effect does cube compression have on a BIA index?
    Also, does SAP recommend rebuilding indexes on a periodic basis, and can we automate index deletion and rebuild for a specific cube using the standard process chain variants or programs?
    Thank you

    <b>Compression:</b> DB statistics and DB indexes for the InfoCubes are less relevant once you use the BI Accelerator.
    In the standard case, you could even completely forgo these processes. But please note the following aspects:
    Compression is still necessary for inventory InfoCubes, for InfoCubes with a significant number of cancellation requests (i.e. high compression rate), and for InfoCubes with a high number of partitions in the F-table. Note that compression requires DB statistics and DB indexes (P-index).
    DB statistics and DB indexes are not used for reporting on BIA-enabled InfoCubes. However for roll-up and change run, we recommend the P-index (package) on the F-fact table.
    Furthermore: up-to-date DB statistics and (some) DB indexes are necessary in the following cases:
    a) data mart (for mass data extraction, BIA is not used)
    b) real-time InfoProvider (with most-recent queries)
    Note also that you need compressed and indexed InfoCubes with up-to-date statistics whenever you switch off the BI accelerator index.
    Hope it Helps
    Chetan

  • When running a query of a 0IC_C03 cube copy inventory shows as blank

    When loading the 0IC_C03 cube copy ZMM_C01 (Material Stocks/Movements), if there
    are no movements for that day in a plant, the inventory values do not show when we run a query on that key date.
    When we post a movement and run the extraction, the ending inventory values
    then show when we run a query using that key date.
    So if there are no movements for a plant, the inventory balance does not show up in our query reports. Once we post a
    movement for a plant, extract the data, and rerun our reports, the inventory balances show up.

    Refer to this link:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/30f15839-0cf1-2b10-c6a7-ebe68cc87cdc?QuickLink=index&overridelayout=true
    and check your e-mail.

  • Missing movement type data in 0IC_C03 cube

    Hi Gurus,
    I have deltas running for the 0IC_C03 cube in BI 7.0, and I have used transformations for loading the cube.
    In a report on this cube for plant material stock, I can see the quantities matching the MB51 report in R/3, whereas the values are not matching for some materials (negative stock values with 0 quantities).
    I found that movement type 121 data is missing in BW.
    I have read a thread on the forum suggesting to work in transaction OMJJ and make movement type 121 relevant for statistics.
    But I want to know how this action will affect update control in R/3.
    Any inputs will be a great help.

    Hi,
    We faced a lot of problems with missing movement types. For that, we got confirmation from the MM team, then added the movement type in the update rules, deleted the data in the cube, and re-initialized the cube.
    To do this, you need ECC downtime. I'm talking about BW 3.5; in BI7 it may be easier (possibly without downtime, please check).
    Thanks
    Reddy

  • Adding fields to 0IC_C03 cube

    Hi friends,
    I have to add some fields to the 0IC_C03 cube. The fields are reason for movement, special stock indicator, sales order, etc. They are coming in the datasource 2LIS_03_BF. I did add them and loaded the data, but the data was not coming through properly. Is there any other method by which I can make use of these fields without adding them to the cube, and then build a MultiProvider on top? I also need the customer field. Will this data come through if I build a MultiProvider on top of this? Or can I create a generic datasource from the table which provides these fields, build a DSO with those fields, and put a MultiProvider on it?
    Will the key date concept work with this?
    I would appreciate your help.
    Thanks,
    Kapil

    Dear Kapil,
    Please go through the link provided below; hope this will be helpful.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/f83be790-0201-0010-4fb0-98bd7c01e328
    Cheers,
    VEERU.

  • Cube compression issue

    Hello Gurus,
    we have some strange behaviours with cube compression.
    All requests are compressed, but in the F table we still have some records.
    The same records are stored in the E table too, but with BEx query execution we can see the correct result.
    If we execute the query in debug mode in RSRT, with SQL code display, the query reads only from the F table or aggregates.
    How is this possible?
    We have just inserted the COMPNOMERGE object into the RSADMIN table, but only after the first compression. Do you think that a re-initialization of the cube and a new compression with the COMPNOMERGE object could solve our problem?
    Could you help us?
    Thanks in advance.
    Regards.

    Vito Savalli wrote:
    > Hi Lars, thanks for your support.
    > We don't have an open support message for this issue, but if it becomes necessary, we will open one.
    >
    > I - The same records are stored in E table too, but with BEx query execution we can see correct result.
    > You - The first part of this sentence is technically impossible. At least the request ID must be different in F- and E-fact table.
    >
    > OK about the request ID, I know that. But if we don't consider the request ID (which of course isn't equal) and we check the characteristic values by SID analysis, we find the same complete key both in the F and in the E table.
    >
    Well, but that's the whole point - the request ID!
    That's why we do compression at all - to merge together the data for the same keys if it exists in both tables.
    It's completely normal to have this situation.
    > I - If we execute the query in debug on RSRT, with SQL code display, the query reads only from the F table or aggregates. How is this possible?
    > You - Easy - your statement about all requests being compressed is not true, and/or it reads the necessary data from the aggregates.
    >
    > I executed in RSRT a query on one of the records which is in both tables.
    Well, obviously there was some other implicit restriction that led to the selections made by OLAP.
    Maybe the request read from the F fact table was neither rolled up nor compressed.
    > Very helpful, thanks.
    > Any other suggestions?
    I'd check exactly the status of the requests and where they can be read from.
    You may also try disabling aggregate usage in RSRT to see whether or not the data is also read from the E fact table, and check the result of the query.
    regards,
    Lars

  • Cube Compression - How it Affects Loading With Delete Overlapping Request

    Hi guys,
    Good day to all !!!
    Our scenario is that we have a process chain that loads data into an InfoCube and that has a delete overlapping request step. I just want to ask how cube compression affects loading with the delete overlapping request step. Is there any conflict or error that might be raised? Kindly advise.
    Marshanlou

    Hi,
    In the scenario you have mentioned:
    First, the InfoCube is loaded.
    Next, when it goes to the delete overlapping request step: in this particular step, it checks whether the request is overlapping (with the same date, or according to the overlapping condition defined in the InfoPackage).
    If the request is overlapping, then and only then does it delete the request; otherwise, no action is taken. In this way it ensures that data is not loaded twice, which would result in duplication.
    It has nothing to do with compression and in no way affects compression or loading.
    Sasi

  • Cube compression will affect any data availability

    Hi,
    I have an issue where a user is running exactly the same report with the same selection criteria but getting different results.
    The report was run from the backlog this morning at 09:56 and again at 10:23. Although the batch was delayed, the data was actually loaded prior to 09:45. However, there was a cube compression running between 09:45 and 10:11.
    So the first report was run during the compression, the second after the compression was complete.
    Could the compression process affect data availability to the end users? I can find no other explanation for this behaviour.
    Thanks,
    R Reddy

    Hi,
    One thing in advance: the following applies to Oracle databases only; I have no experience with other databases.
    Compression will usually not affect the reported data. But if a user runs a report while the compression is ongoing, it is indeed possible that the query will deliver wrong results. The reason is that the collapse run moves the data of the not yet collapsed requests from the F table into the E table, while the query will usually start parallel processes on the E table and on the F fact table. Because of the amount of data, the F table read is the longest-running job. After collecting the results, the partial results are added up.
    Depending on the timing of the collapse run relative to the query, it is possible that the collapsed request was already successfully merged into the E fact table but the deletion from the F table was not yet completed (result: key figures too high). Alternatively, the request may have already been deleted from the F table while the E-table merge was not yet completely committed when the query read it (result: key figures too low).
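    To make the timing window concrete, here is a toy sketch (simplified semantics in Python, not actual BW internals) of the "key figures too high" case:

        e_table = {"MAT01": 100}     # already compressed data
        f_table = {"MAT01": 25}      # request being compressed right now

        def query_total():
            # The query totals the E and F tables independently.
            return e_table.get("MAT01", 0) + f_table.get("MAT01", 0)

        # Compression step 1 commits: the row is merged into E ...
        e_table["MAT01"] += 25
        print(query_total())         # 150 -> key figures too high

        # ... compression step 2 commits: the row is deleted from F.
        del f_table["MAT01"]
        print(query_total())         # 125 -> consistent again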
    All in all, I would strongly recommend running compression at times when no queries are running on the cube.
    Kind regards,
    Jürgen

  • Display of Vendor in the 0IC_C03 cube

    Hi all,
    I am using the 0IC_C03 cube to display the quantity and value of raw materials received, issued, and consumed. Each one is restricted by movement types. Is there any means to display the vendor with regard to received from, issued to, and consumed by?
    At the moment the vendor is displayed only against items which do not have any movement type.
    Please suggest.
    Thanks in advance,
    BP

    Hi BP,
                I don't think you can add any vendor detail irrespective of movement type; that cube is specifically for movement-type-wise details.
    Regards,
    Rajdeep Rane.

  • Basic cube compression

    Hi colleagues:
    I have started a basic cube compression. Considering there are lots of delta load requests, the entire process will take more than 30 hours.
    However, I need to run the process chain to load deltas into other data providers and also into the same basic cube that is being compressed.
    <b>May I run the delta loads in parallel to the compression process?</b>
    Best regards
    Waldemar

    I think Jkyle has probably identified the biggest concern.
    This is a good example of why you should break up large processes into smaller pieces - I can't imagine a requirement to compress everything at once on a large InfoCube.
    Always process manageable chunks of data whenever possible, and benchmark before running larger processes; that way you can minimize:
    - impacts to system availability.
    - impact to system resources of large jobs.

  • Cube compression and DB Statistics

    Hi,
    I am going to run cube compression on a number of my cubes and was wondering about a few facts regarding DB statistics, like:
    1) How does the percentage of InfoCube space used for DB statistics help? I know that the higher the percentage, the bigger the statistics and the faster the access, but the statistics run takes longer. Would increasing the default value of 10% make any difference or bring overall performance improvements?
    2) I will compress the cubes on a weekly basis, and most of them will have around one request per day, so I will probably compress 7 requests for each cube. Is it advisable to run statistics on a weekly basis as well, or can they be run bi-weekly or monthly? And what factors does that depend on?
    Thanks. I think we can have a good discussion on these, apart from points.

    What DB are we talking about?
    Oracle provides so many options on when and how to collect statistics, even allowing Oracle itself to make the decisions.
    At any rate - there is no point in collecting statistics more than weekly if you are only going to compress weekly. Is your plan to compress all the requests when you run, or are you going to leave the most recent requests uncompressed in case you need to back one out for some reason? We compress weekly, but only requests that are more than 14 days old, so we can back out a request if there is a data issue.
    As far as the sampling percentage goes, 10% is good, and I definitely would not go below 5% on very large tables. My experience has been that sampling at less than 5% results in useful indexes not getting selected. I have never seen a recommendation below 5% in any data warehouse material I have seen.
    Are you running the statistics on the InfoCube by using the performance tab option or a process chain? I cannot speak to the process chain statistics approach, but I imagine it is similar. I do know that when you run the statistics collection from the performance tab, it not only collects the stats on the fact and dimension tables, but also goes after all the master data tables for every InfoObject in the cube. That can cause some long run times.

  • Cube compression WITH zero elimination option

    We have tested turning on the switch to perform "zero elimination" when a cube is compressed. We have tested this with an older cube with lots of data in the E table already, and also with a new cube on its first compression. In both cases, at the Oracle level we still found records where all of the key figures = zero. To us, this option did not seem to work. What are we missing? We are on Oracle 9.2.0.7.0 and BW 3.5 SP 17.
    Thanks, Peggy

    Haven't looked at ZERO Elimination in detail in the latest releases to see if there have been changes, but here's my understanding based on the last time I dug into it -
    When you run compression with zero elimination, the process first excludes any individual F fact table rows with all KFs = 0; then, if any of the summarized F fact table rows has all KFs = 0, that row is excluded (you could have two facts with amounts that net to 0 in the same request, or in different requests where all other DIM IDs are equal) and not written to the E fact table. Then, if an E fact table row is updated as a result of a new F fact table row being merged in, the process checks whether the updated row has all KF values = 0, and if so, deletes that updated row from the E fact table.
    I don't believe the compression process has ever gone through and read all existing E fact table rows and deleted the ones where all KFs = 0.
    Hope that made sense. We use Oracle, and it is possible that SAP has done some things differently on different DBs. It's also possible that the fiddling SAP has done over the last few years trying to use Oracle's MERGE functionality at different SP levels comes into play.
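    As a rough illustration of that understanding, here is a hedged sketch in Python (single key figure, simplified tables - not the actual DB procedure):

        def compress_with_zero_elim(f_rows, e_table):
            """f_rows: list of (key, qty); e_table: dict key -> qty (mutated)."""
            merged = {}
            for key, qty in f_rows:
                if qty == 0:                 # 1. individual all-zero F rows are excluded
                    continue
                merged[key] = merged.get(key, 0) + qty
            for key, qty in merged.items():
                if qty == 0:                 # 2. summarized rows netting to zero
                    continue                 #    are never written to E
                if key in e_table:
                    e_table[key] += qty
                    if e_table[key] == 0:    # 3. an updated E row that nets to
                        del e_table[key]     #    zero is deleted
                else:
                    e_table[key] = qty
            return e_table                   # untouched all-zero E rows survive

        # A pre-existing all-zero E row is never revisited:
        print(compress_with_zero_elim([("A", 5), ("A", -5)], {"B": 0}))  # {'B': 0}

    That matches Peggy's observation: rows that were already all zero in E before the compression run simply stay there.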
    Suggestions -
    I'm assuming that the E fact table holds a significant percentage of rows where all KFs = 0. If it doesn't, it's not worth pursuing.
    Contact SAP; perhaps they have a standalone program that deletes E fact table rows where all KFs = 0. It could be a nice tool to have.
    If they don't have one, consider writing your own program that deletes the rows in question. You'll need to keep downstream impacts in mind, e.g. aggregates (would need to be refilled - probably not a big deal) and InfoProviders that receive data from this cube.
    Another option would be to clone the cube and datamart the data to the new cube. Once in the new cube, compress with zero elimination - this should get rid of all your 0-KF rows. Then delete the contents of the original cube and datamart the cloned cube's data back to the original cube.
    You might be able to accomplish the same thing by datamarting the original cube's data to itself, which might save some hoop jumping. Then you would have to run a selective deletion to get rid of the original data; or, if the datamarted data went through the PSA, you could just delete all the original data from the cube and then load the datamarted data from the PSA. Once the new request is loaded, compress with zero elimination.
    Now, if you happen to have built all your reporting on this cube against a MultiProvider on top of the cube rather than directly against the cube itself, you could just create a new cube, export the data to it, and then swap the old and new cubes in the MultiProvider. This is one of the benefits of always using a MultiProvider on top of a cube for reporting (an SAP- and consultant-recommended practice) - you can literally swap underlying cubes with no impact to the user base.

  • Cube compression before Change run

    Hello,
    This might sound foolish, but I need to be sure...
    Does anyone know if there are any implications if cube compression runs before the attribute change run?
    Thank you.

    I don't think so either, but I wanted to check whether anyone has experienced issues arising from that sequence of execution. This cube contains a lot of data and I just want to be sure before proceeding.

  • Cube Compression & Process Chains

    Hello Friends
    Few Questions as I am a beginner.
    1) What is the entire concept behind cube compression? Why is it preferred for delta uploads and not for full uploads?
    2) What do we mean by deleting and creating indexes using process chains?
    3) What is meant by the process chain step "DB Statistics Refresh"? Why do we need it?
    Any help is appreciated. Points will be generously assigned.
    Thanks and Regards
    Rishi

    Hello Rishi,
    As you may know, an InfoCube consists of fact tables and dimension tables. The fact table hold all key figures and the corresponding dimension keys, the dimension tables refer from dimension keys to InfoObject values.
    Now, there is not only one fact table but two - the F table and the E table. The difference from a technical point of view is just one InfoObject: 0REQID, the request number. This InfoObject is missing in the E table. As a result, different records in the F table could be aggregated to one record in the E table if they have the same key and were loaded by different requests.
    As you may know, you can delete any request from an InfoCube by selecting the request number. And here is the disadvantage of the E table. As there is no request number you cannot delete a request from this table.
    When data is loaded into an InfoCube, it is stored in the F table. By compressing the InfoCube, records are moved into the E table. Because of this disadvantage of the E table, it can be defined per InfoCube whether and when data is to be moved.
    More information can be found here: http://help.sap.com/saphelp_nw70/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/frameset.htm
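    A small sketch may make this concrete (illustrative Python with simplified table layouts, not the actual DB structures) - compression is essentially a GROUP BY over everything except the request ID:

        from collections import defaultdict

        # Simplified F table rows: (request_id, material, calday, quantity)
        f_table = [
            (1001, "MAT01", "20120110", 40),
            (1002, "MAT01", "20120110", -15),   # same key, different request
            (1002, "MAT02", "20120111", 7),
        ]

        def compress(rows):
            """Drop the request ID and sum key figures over the remaining key."""
            e_table = defaultdict(int)
            for _req, material, calday, qty in rows:
                e_table[(material, calday)] += qty
            return dict(e_table)

        print(compress(f_table))
        # {('MAT01', '20120110'): 25, ('MAT02', '20120111'): 7}
        # Request 1001 can no longer be deleted individually from this result.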
    An index is a database mechanism to accelerate the access to single records within a table. In BW indexes are used to increase the reporting speed.
    Whenever data in a table is added or deleted - in our case loaded - the index has to be modified. Depending on the amount of changes in the table, it can be less time-consuming to delete the index, load without an existing index, and rebuild the index afterwards. This can be done in process chains.
    DB statistics are something specific to Oracle databases. As far as I know (I do not work with Oracle), they are used to optimize the SQL commands which are needed for BW reports.
    I hope that these explanations are helpful.
    Kind regards,
    Stefan

  • Cube compression and partition related ?

    Hello BW Experts,
    Is it only possible to partition the cube after cube compression? That means, can we only partition the E table and not the F table?
    Thanks,
    BWer

    InfoCube Partitioning is not supported by all DBs that BW runs on - the option is greyed out for DBs that do not support it.
    You can partition on 0FISCPER or 0CALMONTH, although if you have a need to partition on something else, it might be worth a customer message to SAP. You should review any proposed partitioning scheme with your DBA if you are not familiar with the concepts and DB implications.
    The E fact table is what gets partitioned using this option. The F fact table is already partitioned by request ID. In 3.x, the partitioning you specify for the InfoCube is also applied to any aggregate E tables that get created if the partitioning characteristic (0FISCPER/0CALMONTH) is in that aggregate. In NW2004s, you have a choice whether you want the partitioning to apply to the aggregate or not.
    NW2004s also provides some additional partitioning tools, e.g. the ability to change the partitioning.
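    As a rough illustration of why this matters for queries (hypothetical layout in Python, not the actual DB storage): once the E table is range-partitioned by 0CALMONTH, a query restricted to one month only has to read one partition.

        # E table split into partitions keyed by 0CALMONTH.
        e_partitions = {
            "201201": [("MAT01", 40)],
            "201202": [("MAT01", -15), ("MAT02", 7)],
        }

        def month_total(calmonth):
            # Partition pruning: only the matching partition is scanned.
            return sum(qty for _mat, qty in e_partitions.get(calmonth, []))

        print(month_total("201202"))   # -8, without touching 201201 at all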
