Cube compression and request IDs

Can we decompress a compressed cube using the request IDs?
What happens to the request IDs when the cube gets compressed?
rgds

Hi Nitin,
When you load data into the InfoCube, entire requests can be inserted at the same time.
Each of these requests has its own request ID, which is included in the fact table in the packet dimension. This makes it possible to address individual requests; one advantage of the request ID concept is that you can subsequently delete complete requests from the InfoCube.
However, the request ID concept can also cause the same data record (one where all characteristics agree except the request ID) to appear more than once in the fact table. This unnecessarily increases the data volume and reduces reporting performance, as the system has to aggregate over the request ID every time you execute a query.
Using compression, you can eliminate these disadvantages and bring the data from different requests together into one single request (request ID 0).
This function is critical in the sense that compressed data can no longer be deleted from the InfoCube by request ID.
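To picture what the compression run does, here is a minimal ABAP sketch (all names and values are made up for illustration; real fact tables key on dimension IDs rather than characteristic values). It merges two requests that carry the same characteristic combination into request ID 0 and sums the key figure:

REPORT z_compression_sketch.

* Hypothetical flattened fact rows; REQUEST stands for the packet dimension.
TYPES: BEGIN OF ty_row,
         request  TYPE n LENGTH 10,  " request ID (character-like, so part of the COLLECT key)
         material TYPE c LENGTH 10,  " stands in for all other characteristics
         quantity TYPE i,            " key figure
       END OF ty_row.

DATA: lt_f_fact TYPE STANDARD TABLE OF ty_row,  " "F table": one row per request
      lt_e_fact TYPE STANDARD TABLE OF ty_row,  " "E table": compressed result
      ls_row    TYPE ty_row.

* Two requests with an identical characteristic combination.
ls_row-request = 4711. ls_row-material = 'MAT01'. ls_row-quantity = 10.
APPEND ls_row TO lt_f_fact.
ls_row-request = 4712. ls_row-material = 'MAT01'. ls_row-quantity = 5.
APPEND ls_row TO lt_f_fact.

* Compression: force the request ID to 0; COLLECT then sums the numeric
* key figure for rows whose remaining key fields are identical.
LOOP AT lt_f_fact INTO ls_row.
  ls_row-request = 0.
  COLLECT ls_row INTO lt_e_fact.
ENDLOOP.

* lt_e_fact now holds a single row: request 0000000000 / MAT01 / quantity 15.

This is why duplicate fact rows collapse into one, and why the per-request aggregation at query time disappears.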
Hope it's clearer now (and don't forget to assign some points by clicking on the star for the contributors that helped you!!!)
Bye,
Roberto

Similar Messages

  • Cube Compression and InfoSpoke Delta

    Dear Experts,
    I submitted a message ("InfoSpoke Delta Mechanism") the other day regarding a problem I am having with running an InfoSpoke as a delta against a cube and didn't receive an answer that would fix the problem. Since then I have been told that we COMPRESS the data in the cube after it is loaded, and it is after the compression that I have been trying to run the delta InfoSpoke. As explained earlier, there have been 18 loads to the cube since the initial (full) run of the InfoSpoke. When I now try to run the InfoSpoke in delta mode, I get the "There is no new data" message. Could the compression of the cube be causing the "There is no new data" message that appears when I run the InfoSpoke after the cube load and compression? An explanation of what happens during a compression would also be helpful.
    Your help is greatly appreciated.
    Thank you,
    Dave

    You need uncompressed requests to feed your deltas. InfoCube deltas use request IDs, and compressed requests cannot be used because their request IDs are set to zero.
    You need to resequence the events:
    1. Load into the InfoCube.
    2. Run the InfoSpoke delta to extract the delta requests.
    3. Compress.

  • Cube compression and DB Statistics

    Hi,
    I am going to run cube compression on a number of my cubes and was wondering about a few things regarding DB statistics:
    1) How does the percentage of InfoCube space used for DB stats help? I know that the higher the percentage, the bigger the statistics and the faster the access, but the stats run takes longer. Would increasing the default value of 10% make any difference or bring overall performance improvements?
    2) I will compress the cubes on a weekly basis, and most of them receive around one request per day, so I will probably compress seven requests per cube. Is it advisable to run stats on a weekly basis as well, or can they be run bi-weekly or monthly? What factors does that depend on?
    Thanks. I think we can have a good discussion on these, points aside.

    What DB are we talking about?
    Oracle provides so many options on when and how to collect statistics, even allowing Oracle itself to make the decisions.
    At any rate - there is no point in collecting statistics more than weekly if you are only going to compress weekly. Is your plan to compress all the requests when you run, or are you going to leave the most recent requests uncompressed in case you need to back one out for some reason? We compress weekly, but only requests that are more than 14 days old, so we can back out a request if there is a data issue.
    As far as the sampling percentage goes, 10% is good, and I definitely would not go below 5% on very large tables. My experience has been that sampling at less than 5% results in useful indices not being selected, and I have never seen a recommendation below 5% in any data warehouse material.
    Are you running the statistics on the InfoCube using the performance tab option or a process chain? I cannot speak to the process chain statistics approach, though I imagine it is similar; but I know that when you run the statistics collection from the performance tab, it not only collects the stats on the fact and dimension tables, it also goes after all the master data tables for every InfoObject in the cube. That can cause some long run times.
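    For reference on Oracle: a 10% sample on a single table boils down to one DBMS_STATS call. Below is a minimal sketch via ABAP Native SQL, assuming an Oracle backend that passes the anonymous PL/SQL block through as-is; the schema owner SAPR3 and the fact table /BIC/FZSALES are placeholders, and in practice the BW statistics jobs should do this for you:

    REPORT z_gather_stats_sketch.

    * Sketch only: collect optimizer statistics with a 10% sample on one
    * (placeholder) fact table. Oracle-specific.
    EXEC SQL.
      BEGIN
        DBMS_STATS.GATHER_TABLE_STATS(
          OWNNAME          => 'SAPR3',          /* placeholder schema owner */
          TABNAME          => '/BIC/FZSALES',   /* placeholder F fact table */
          ESTIMATE_PERCENT => 10,
          CASCADE          => TRUE );           /* index statistics as well */
      END;
    ENDEXEC.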

  • Cube Compression and Aggregation

    Hello BW Gurus,
    Can I first compress my InfoCube data and then load data into the aggregates?
    The reason being that when the InfoCube is compressed, the request IDs are removed.
    Are the request IDs necessary for data to be transferred to the aggregates, and later on for aggregate compression?
    Kindly suggest.
    regards,
    TR PRADEEP

    Hi,
    just to clarify this:
    1) You can compress your InfoCube and then INITIALLY fill the aggregates. The request information is then no longer needed.
    2) But you can NOT compress requests in your InfoCube when your aggregates are already filled and those requests are not yet rolled up into the aggregates (this action is prohibited by the system anyway).
    Hope this helps,
    Klaus

  • Cube compression and partitioning

    Hello BW Experts,
    Is it only possible to partition the cube after cube compression? That is, can we only partition the E table and not the F table?
    Thanks,
    BWer

    InfoCube Partitioning is not supported by all DBs that BW runs on - the option is greyed out for DBs that do not support it.
    You can partition on 0FISCPER or 0CALMONTH, although if you have a need to partition on something else it might be worth a customer message to SAP. You should review any proposed partitioning scheme with your DBA if you are not familiar with the concepts and DB implications.
    The E fact table is what gets partitioned using this option; the F fact table is already partitioned by request ID. In 3.x, the partitioning you specify for the InfoCube is also applied to any aggregate E tables that get created, if the partitioning characteristic (0FISCPER/0CALMONTH) is in that aggregate. In NW2004s, you have a choice whether you want the partitioning to apply to the aggregate or not.
    NW2004s also provides some additional partitioning tools, e.g. the ability to change the partitioning.

  • Compress and rollup the cube

    Hi Experts,
    Do we have to compress and then roll up the aggregates? What happens if we roll up before compressing the cube?
    Raj

    Hi,
    Data is rolled up to the aggregates request by request, so once the data is loaded, the request is rolled up to fill the aggregates with the new data. After compression, the request is no longer available.
    Whenever you load data, you do a rollup to fill all the relevant aggregates.
    When you compress the data, all request IDs are dropped.
    So when you compress the cube, the 'COMPRESS AFTER ROLLUP' option ensures that all the data is rolled up into the aggregates before the compression is performed.
    hope this helps
    Regards,
    Haritha.

  • Unable to Compress the request of a Cube

    Hi All,
    I was trying to compress the cube data and got stuck at one request. Whenever I try to compress that request manually, the compression fails and the job produces the following log:
    Job started
    Step 001 started (program RSCOMP1, variant &0000000000190, user ID XXXXXX0006)
    Performing check and potential update for status control table
    No requests needing to be aggregated have been found in InfoCube 0XX_X01
    Compression not necessary; no requests found for compressing
    Compression not necessary; no requests found for compressing
    Job finished
    After further analysis, I observed that the request I am compressing has had a selective deletion. Since it is two-year-old data, I would like to compress it. Please suggest: is it possible to compress a request that has had a selective deletion?
    please suggest.
    Thanks & Regards,
    Abdul

    Hi Abdul,
    Check whether the cube being compressed feeds data to a different InfoProvider; if so, complete that load first and then do the compression. This can sometimes be the reason. If the cube is not currently feeding anything, try compressing the request again.
    In some cases, when the compression is performed manually (on the Collapse tab of the cube), no compression is done. In those cases you can try creating a process chain with only the compression step; this often solves the problem.
    If neither of the above solves your problem, try SAP Note 407230 (such problems can occur after installing SP15).
    Regards,
    Sudheer.

  • Cube Compression - How it Affects Loading With Delete Overlapping Request

    Hi guys,
    Good day to all !!!
    Our scenario is that we have a process chain that loads data to an InfoCube and includes a delete overlapping requests step. I just want to ask how cube compression affects loading with the delete overlapping request step. Is there any conflict or error that will be raised? Kindly advise.
    Marshanlou

    Hi,
    In the scenario you have mentioned:
    First the InfoCube is loaded.
    Next comes the delete overlapping requests step. In this step, the system checks whether a request is overlapping (loaded with the same date, or according to the overlapping condition defined in the InfoPackage).
    Only if the request is overlapping does it delete that request; otherwise no action is taken. In this way it ensures that data is not loaded twice, which would result in duplicates.
    It has nothing to do with compression and in no way affects compression or loading.
    Sasi

  • When do we compress the requests in a cube?

    Hello all,
    We have been live for a few months now. After how many days/weeks/months should we compress the requests in a cube? How often do we have to compress the data in the cube?
    I know that once compressed we cannot delete the data by request, but if there is an issue and we need to delete data completely from the cube, that is possible, right?
    Thanks in advance

    How often compression needs to be done depends on the volume of data that is loaded every day. Unless the volume of records loaded daily is unusually high, doing compression once a week might be good enough.
    Also, you are right about losing the ability to delete by request in the InfoCube after compressing the request. But we don't necessarily have to delete the whole cube in case of issues; there is always the option of doing a selective deletion.
    Next, when doing the compression you have the option of specifying a date range. Choose the option of compressing requests that are older than 4 days (an arbitrary number) to keep a buffer for deleting a request in case of issues.
    Hope this helps!

  • Compress the request of inventory cube 0ic_c03

    Hi experts,
    I want to compress the InfoCube 0ic_c03 daily for the delta requests through process chains.
    In the compression process type of the process chain there are 2 options:
    1. Collapse only those requests that were loaded XXX days ago
    2. Number of requests that you do not want to collapse
    Which one should I select, and how many days should I go back?
    I want the daily delta requests to be compressed.
    Pls guide me accordingly.
    Regards,
    Nishuv.

    Hi,
    Inventory data is non-cumulative, and we usually keep 30 days uncompressed. Once compressed, we cannot make any changes if there are errors on a particular day. We had a problem when FICO consultants changed a valuation in the system directly and we were not aware of it. Later, when the FICO values differed from Inventory, we figured out that there had been some manual changes in FICO. Luckily we hadn't compressed those requests, and through some routines we fixed the issue.
    Thanks
    Srikanth

  • Effect of Cube Compression on BIA Indexes

    What effect does cube compression have on a BIA index?
    Also does SAP recommend rebuilding indexes on some periodic basis and also can we automate index deletes and rebuild processes for a specific cube using the standard process chain variants or programs?
    Thank you

    Compression: DB statistics and DB indexes for the InfoCubes are less relevant once you use the BI Accelerator.
    In the standard case, you could even completely forgo these processes. But please note the following aspects:
    Compression is still necessary for inventory InfoCubes, for InfoCubes with a significant number of cancellation requests (i.e. high compression rate), and for InfoCubes with a high number of partitions in the F-table. Note that compression requires DB statistics and DB indexes (P-index).
    DB statistics and DB indexes are not used for reporting on BIA-enabled InfoCubes. However for roll-up and change run, we recommend the P-index (package) on the F-fact table.
    Furthermore: up-to-date DB statistics and (some) DB indexes are necessary in the following cases:
    a) data mart (for mass data extraction, BIA is not used)
    b) real-time InfoProvider (with most-recent queries)
    Note also that you need compressed and indexed InfoCubes with up-to-date statistics whenever you switch off the BI accelerator index.
    Hope it Helps
    Chetan

  • Compression and Index

    Hi BW Experts,
    I deleted the indexes before loading the data.
    Then I compressed the request without creating the indexes.
    The compression is taking a very long time.
    Is this the right procedure? Will compression take too long after deleting the indexes?
    Thanks in advance.
    Regards,
    Anjali

    Hi Anjali,
    Deletion of indexes, creation of indexes, and then compression is the general procedure.
    It is only worth deleting and recreating indexes when you are loading data into the cube.
    As far as I know, the index operations have no real significance for the compression itself: compression generates a separate E table, whereas those indexes work on the F table only.
    If you are doing a cube load followed by compression, the standard steps are as below (see the sketch after the list):
    1. Delete cube contents (depends on your requirement)
    2. Delete indexes
    3. Load cube data
    4. Compression
    5. DB statistics (you can skip this step if there is no performance issue)
    6. Create indexes
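    As referenced in the list above, here is a sketch of steps 2 and 6 in ABAP using the standard BW function modules for dropping and rebuilding the fact table indexes. The parameter name i_infocube is an assumption from memory, so verify the interfaces in SE37 on your release first:

    REPORT z_index_cycle_sketch.

    * Placeholder InfoCube name - replace with your own cube.
    CONSTANTS gc_cube TYPE c LENGTH 30 VALUE 'ZSALES01'.

    * Step 2: drop the secondary indexes on the F fact table before loading.
    CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_DROP'
      EXPORTING
        i_infocube = gc_cube.

    * ... step 3 (data load) and step 4 (compression) happen here ...

    * Step 6: recreate the indexes after the load.
    CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_REPAIR'
      EXPORTING
        i_infocube = gc_cube.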
    One school of thought is not to include the compression step in the process chain at all, because once compression is done there is no possibility to delete the data request by request, which you may need if there is an error in the data extraction.
    Drop the indexes during data extraction, because they can cause performance problems during data loading.

  • Cube compression issue

    Hello Gurus,
    we have some strange behaviour with cube compression.
    All requests are compressed, but in the F table we still have some records.
    The same records are stored in the E table too, yet with BEx query execution we see a correct result.
    If we execute the query in debug in RSRT, with SQL code display, the query reads only from the F table or aggregates.
    How is that possible?
    We only inserted the COMPNOMERGE object into the RSADMIN table after the first compression. Do you think that reinitializing the cube and running a new compression with the COMPNOMERGE object in place could solve our problem?
    Could you help us?
    Thanks in advance.
    Regards.

    Vito Savalli wrote:
    > Hi Lars, thanks for your support.
    > We don't have an open support message for this issue, but if it will be necessary, we will open it.
    >
    > I - The same records are stored in E table too, but with BEx query execution we can see correct result.
    > You - The first part of this sentence is technically impossible. At least the request ID must be different in F- and E-fact table.
    >
    > OK about the request ID, I know it. But if we don't consider the request ID (which of course is not equal) and we check the characteristic values by SID analysis, we find the same complete key both in the F and in the E table.
    >
    Well, but that's the whole point - the request ID!
    That's why we do compression at all - to merge the data for identical keys when it exists in both tables.
    It's completely normal to have this situation.
    > I - If we execute the query in debug in RSRT, with SQL code display, the query reads only from the F table or aggregates. How is that possible?
    > You - Easy - your statement about all requests being compressed is not true, and/or it reads the necessary data from the aggregates.
    >
    > I executed in RSRT a query on one of the records which is in both tables.
    Well, obviously there was some other implicit restriction that led to the selections made by the OLAP processor.
    Maybe the request read from the F-Facttable was neither rolled up nor compressed.
    > Very helpful, thanks.
    > Any others suggestions?
    I'd check the exact status of the requests and where they can be read from.
    You may also try disabling aggregate usage in RSRT to see whether the data is then read from the E fact table as well, and check the result of the query.
    regards,
    Lars

  • Cube Compression: How do I solve the issue

    Hello,
    I had 4 requests (21 million records) in a cube to be compressed. The compression process started, successfully loaded the data into the E table of the cube, and the compression flag was set in the cube, so I saw the records as successfully compressed.
    However, during the next step, which deletes the records from the F table, the server rebooted and SAP rolled back the deletion from the F table.
    Question: How do I delete the records from the F table? There are currently 21 million records lying in the fact table for no reason. If I retry compressing the request, SAP gives a message that the request is already compressed.
    I tried loading one more request and compressing it; however, the program only deletes records from the F table for the current load, not the previous loads.
    Please help.
    thanks,
    -HAri

    From the FAQ note on compression, SAP Note 407260:
    2. The data should never become inconsistent by running a compression. Even if you stop the process manually, a consistent state should be reached. Whether the requests (or at least some of them) end up compressed or whether the changes are rolled back depends on the phase the compression was in when it was canceled. The compression of a single request can be divided into 2 main phases.
    a) In the first phase the following actions are executed:
    - Insert or update every row of the request being compressed into the E fact table
    - Delete the entry for the corresponding request from the package dimension of the cube
    - Change the 'compr-dual' flag in the table RSMDATASTATE
    - Finally, a COMMIT is executed
    b) In the second phase the remaining data in the F fact table is deleted, either by a 'DROP PARTITION' or by a 'DELETE'. As this data is no longer accessible in queries (the entry in the package dimension is deleted), it does not matter if this phase is terminated.
    Concluding this: if the process is terminated while the compression of a request is in phase (a), the data is rolled back, but if the compression is terminated in phase (b), no rollback is executed. The only problem then is that the F fact table might contain unusable data. This data can be deleted with the function module RSCDS_DEL_OLD_REQUESTS. To run this function module you only have to enter the name of the InfoCube; if you want, you can also specify the dimension ID of the request you want to delete (if you know it). If no ID is specified, the module deletes all entries without a corresponding entry in the package dimension.
    If you are compressing several requests in a single run and the process breaks during the compression of request x, all smaller requests are committed, and only request x is handled as described above.
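    Going by the note text alone, the cleanup call is as simple as the sketch below. The parameter name i_infocube is my assumption - the note only says the module takes the InfoCube name and, optionally, the dimension ID of the request - so check the actual interface in SE37 before running it:

    REPORT z_del_old_requests_sketch.

    * Deletes F table rows that no longer have an entry in the package
    * dimension, as described in the note. The InfoCube name is a placeholder.
    CALL FUNCTION 'RSCDS_DEL_OLD_REQUESTS'
      EXPORTING
        i_infocube = 'ZSALES01'.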

  • Cube Compression

    Hi all,
    One of the GEO clients is using version 3.0 and has asked us to compress the cubes. We have more than 20 cubes, with data existing from 2005.
    There are no process chains to load the data; it is loaded manually by users, through jobs.
    Is there any way to compress all the requests in a cube in one go, or all the cubes at a time?
    Please let me know which is the best option.
    thanks

    Hi,
    Execute the Manage option for each of your InfoCubes and select the compression option from the Performance tab (I think).
    Here you can give a day range for the request calculation, or select any request number from the Requests tab and put it in the compression input box. All the previous requests for this particular cube, up to and including that request, will then be compressed automatically.
    You can give a request number of your choice and all the previous requests will be compressed; there is no need to go request by request. After compression you will still see all the requests in the Requests tab, but they will now have a green compression status.
    In the same fashion you will have to repeat the procedure for all the cubes.
    Regards,
    Durgesh.
