Does the InfoCube compression process lock the InfoCube?

Hi all,
First of all, thanks for your active support and co-operation.
Does the compression process lock the cube? My doubt is: while compression is running on a cube, if I try to load data into the same cube, will the load be allowed or not? Please reply as soon as you can.
Many thanks in advance.
Jagadeesh.

hi,
Compression is a process that removes the request IDs from the fact table, which saves space.
When and why do we use InfoCube compression in real time?
InfoCube compression consolidates the cube by eliminating duplicate records across requests. Compressed InfoCubes require less storage space and are faster to read. The catch is that once you compress, you can no longer delete individual requests from the InfoCube, so you are safe as long as you don't have any error in your modeling.
Compression can be done through a process chain and also manually.
Check these Links:
http://www.sap-img.com/business/infocube-compression.htm
Compression is done to increase the performance of the cube:
http://help.sap.com/saphelp_nw2004s/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/frameset.htm
http://help.sap.com/saphelp_erp2005vp/helpdata/en/b2/e91c3b85e6e939e10000000a11402f/frameset.htm
InfoCube compression and aggregate compression are mostly independent.
Usually, if you decide to keep the requests in the InfoCube, you can still compress the aggregates. If you need to delete a request, you just have to rebuild the aggregate if it is compressed. So there is no problem in compressing aggregates, unless rebuilding them takes a lot of time.
It does not make sense to compress the InfoCube without compressing the aggregates. The idea behind compressing is to speed up InfoCube access by adding up the data of the different requests. As a result you get rid of the request number; all other characteristics stay the same. If there is more than one record per combination of characteristics, the key figures are aggregated according to their aggregation behavior (SUM, MIN, MAX, etc.). This reduces the number of records in the cube.
Example:

requestid   date       0material   0amount
12345       20061201   3333        125
12346       20061201   3333        -125
12346       20061201   3333        200

will result in

requestid   date       0material   0amount
0           20061201   3333        200

In this case two records are saved.
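To make the mechanics concrete, here is a minimal SQL sketch of what compression effectively does, assuming a hypothetical cube ZSALES with F and E fact tables /BIC/FZSALES and /BIC/EZSALES (the real statement is generated by BW; on Oracle it is a MERGE into the E table, and the package dimension ID of 0 is what "request ID 0" means physically):

    -- Collapse all requests: aggregate by the remaining characteristics and
    -- write the result to the E fact table with package dimension ID 0
    INSERT INTO "/BIC/EZSALES" ("PDIMID", "CALDAY", "MATERIAL", "AMOUNT")
    SELECT 0, "CALDAY", "MATERIAL", SUM("AMOUNT")
    FROM "/BIC/FZSALES"
    GROUP BY "CALDAY", "MATERIAL";
    -- The compressed requests are then removed from the F fact table
    DELETE FROM "/BIC/FZSALES";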
But once the request ID is lost (due to compression), you cannot get it back.
Therefore, once you have compressed the InfoCube, there is no sense in keeping the aggregates uncompressed. But as long as your InfoCube is uncompressed, you can always compress the aggregates without any problem other than the rebuild time of the aggregates.
hope it helps..

Similar Messages

  • Compression (Collapse) of the InfoCube

    After partial completion of the realignment process (approximately 20,000 CVCs) for the Planning Object Structure and InfoCube, we are unable to compress the InfoCube. Around 20,000+ requests were generated in the InfoCube during the realignment.
    We realigned both the Planning Object Structure and the InfoCube at the same time.
    As part of the daily master and transaction data loads, we follow these steps for this InfoCube:
    1) Delete Index (Delta)
    2) DTP (Delta)
    3) Create Index
    4) Create Statistics
    5) Compress/Collapse (Delta)
    The job fails when it reaches the "Compress/Collapse of InfoCube" step.
    I appreciate all suggestions.

    What does the job log say? Any errors?
    I think this is more of a BI topic; you will get better answers if you post it there.

  • How to find which process locked the directory

    I have a folder with some TFS projects. Now I want to rename or move the directory, but it says "the action can't be completed because the folder or a file in it is open in another program". So I downloaded Process Explorer, pressed Ctrl+F, and entered the directory name, but it found nothing. I thought that files in use are listed by their full names, e.g. C:\SomeDir\SmthElse\...

    Hi,
    You can check with:
    1. MMC - Shared Folders (if it is a shared folder). Here you can check Open Files to see whether files in your folder are locked because other users have them open. You can kill the session to unlock a file.
    2. Download this tool:
    http://technet.microsoft.com/en-us/sysinternals/bb896655.aspx
    Handle.exe can help you find which session is locking the file on your computer. See if a file in that folder is listed, and use -c to close the handle.
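    For example, running handle.exe SomeDir lists the processes holding open handles whose path matches "SomeDir", and handle.exe -c 1A4 -p 2280 would close handle 1A4 in process 2280 (the folder name, handle value, and PID here are made up; take the real ones from Handle's own output). Be careful: closing a handle out from under a process can destabilize it, so prefer closing the owning application when possible.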

  • InfoCube compression has already been running for 7 days

    Dear BI Experts,
    In our system a BI_COMPxxx job has been running for more than 610,000 seconds (7 days), and I want to know if I can stop this job.
    Some observations:
    - At the start of the compression of the InfoCube, 2 aggregates were not rolled up/compressed.
    - Daily loads to the InfoCube via process chains fail at the Delete Index variant due to locking (expected behaviour).
    - The F table of the InfoCube contains 21,674,550 records.
    - The E table (the compression target) currently contains 1,402,950 records.
    - There are no ST22 or SM21 entries (the job is still running).
    - During the last 4 hours the number of records in the E table has not grown.
    - There have been no entries in the job log after 1.5 days of runtime, and the last entry shows nothing unusual: MERGE /*+ USE_NL ( FACT E ) */ INTO "/BIC/EZCOPABUD" E USING ( SELECT /*+ PARALLEL ( FACT , 3 ) */ 0 "PDIMID" , "KEY_ZCOPABUDT" , "KEY_ZCOPABUDU" , "KEY_ZCOPABUD1" , "KEY_ZCOPABUD2" , "KEY_ZCOPABUD3" , "KEY_ZCOPABUD4" , "KEY_ZCOPABUD5" , "KEY_ZCOPABUD6" , "KEY_ZCOPABUD7" , "KEY_ZCOPABUD8" , "KEY_ZCO... etc. (goes on for 7 pages/screens)
    My main questions:
    - How unsafe is it to stop this job under these conditions?
    - What might be the effect of stopping this job on data consistency?
    Other questions:
    - Is the ratio between the F and E tables a normal ratio for a finished compression, or does it look like the compression has failed? A lot of requests with 0 records have been loaded.
    - Is this a documented error? Are there any notes or procedures available? I could not find any.

    Hi,
    Did you take a count of the records in the E table before you started the compression? I once ran a compression that took some 21-odd hours. At that time I noticed that the record count in the E table was not increasing as expected, and my observation was that the commit at the database level happens only once the compression job has finished completely.
    At the time I checked the processing in SM66 (double-click the relevant process, identified by the PID of the job responsible for the compression) and it was working at the SQL level.
    What I can say at this point is that if you stop your process, everything will be rolled back and your cube should be safe, but probably nothing will be compressed and the status will stay as it is now. This is because SAP generally creates a rollback point before doing any task and, if the task is not successful, rolls back to the previous status.
    That was my experience; check SM66 for the processing of your job and see how to proceed.
    Another idea: compress your cube in smaller intervals. It seems to me that you entered the top request number in the selection, so the job is now compressing the whole cube, and that is why it is taking so long.
    Try to confirm how SAP behaves if you stop the job; in my view nothing should happen and the data state should stay the same. If that is the case, I would suggest compressing your cube 5-10 requests at a time at regular intervals; each job will take less time and the rest of your processing will not be hindered.
    I could help only this much.
    regards
    Vishal

  • Adding a new field in the infocube

    Hi Friends,
    I have a small issue: I need to add a field to an InfoCube that already has data in it.
    I went through the concept of remodelling, the posts on SDN, and a link that guided me through the entire process, but I still have a problem.
    My InfoCube has 5 characteristics and 3 key figures, with data in it.
    Through help.sap.com I came to know that I need to take a backup of the data before I do the remodelling.
    Does that mean I need to create a new InfoCube and push all the data there, do the remodelling on the old InfoCube, and then push the data from the new InfoCube back into the old one? How do I do this?
    And how do I populate data for the new fields in the old InfoCube?
    Can someone explain the steps? It would be very useful to me.
    Thanks in advance.
    Regards,
    Harish

    Hi Kanagaraj,
    Thanks a lot for your help.
    Actually those steps didn't work, but I created a new InfoCube, copied the structure from the old InfoCube, and then just created a transformation and a DTP. It worked fine; I did not generate an export DataSource.
    But regarding my previous question:
    I have 3 characteristics and 3 key figures.
    I want to add a new field based on the department ID, and the values have to be populated for the new field.
    It is not a constant value, so what should I choose in the conditions?
    Regards,
    Harish

  • I have a request at the report level but the same is missing in the InfoCube

    Dear Experts,
    I have a request at the report level, but the same request is missing at the compressed InfoCube level. What could be the cause? Does compression delete the request? If so, why am I still able to view the other requests at the InfoCube manage level?
    Kindly provide more information.
    Thanks.

    Hi
    Compressing InfoCubes
    Use
    When you load data into the InfoCube, entire requests can be inserted at the same time. Each of these requests has its own request ID, which is included in the fact table in the packet dimension. This makes it possible to pay particular attention to individual requests. One advantage of the request ID concept is that you can subsequently delete complete requests from the InfoCube.
    However, the request ID concept can also cause the same data record (all characteristics agree, with the exception of the request ID) to appear more than once in the fact table. This unnecessarily increases the volume of data, and reduces performance in reporting, as the system has to perform aggregation using the request ID every time you execute a query.
    Using compression, you can eliminate these disadvantages and bring the data from different requests together into one single request (request ID 0).
    This function is critical, as the compressed data can no longer be deleted from the InfoCube using its request ID. You must be absolutely certain that the data loaded into the InfoCube is correct.
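    As a hedged SQL sketch of why deletion by request only works before compression (hypothetical F fact table /BIC/FZSALES and package dimension column; BW really resolves the request through the package dimension):

        -- Before compression, each F-table row still points to its request via
        -- the package dimension, so a single request can be deleted selectively:
        DELETE FROM "/BIC/FZSALES"
        WHERE "KEY_ZSALESP" = :dimid_of_request;
        -- After compression, the rows sit in the E table under request ID 0,
        -- so no predicate identifies an individual request any more.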
    Features
    You can choose request IDs and release them to be compressed. You can schedule the function immediately or in the background, and can schedule it with a process chain.
    Compressing one request takes approx. 2.5 ms per data record.
    With non-cumulative InfoCubes, compression has an additional effect on query performance. Also, the marker for non-cumulatives in non-cumulative InfoCubes is updated. This means that, on the whole, less data is read for a non-cumulative query, and the reply time is therefore reduced. See also Modeling of Non-Cumulatives with Non-Cumulative Key Figures.
    If you run the compression for a non-cumulative InfoCube, the summarization time (including the time to update the markers) will be about 5 ms per data record.
    If you are using an Oracle database as your BW database, you can also report on the relevant InfoCube while the compression is running. With other manufacturers' databases, you will see a warning if you try to execute a query on an InfoCube while compression is running; in this case you can execute the query once the compression has finished.
    If you want to avoid the InfoCube containing entries whose key figures are all zero (after reverse postings, for example), you can run a zero-elimination at the same time as the compression. In this case, the entries where all key figures are equal to 0 are deleted from the fact table.
    Zero-elimination is permitted only for InfoCubes whose key figures all have the aggregation behavior 'SUM'. In particular, you are not permitted to run zero-elimination with non-cumulative values.
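    As a rough illustration, zero-elimination amounts to something like this (hypothetical E fact table and key figure columns; the actual statement is generated by BW):

        -- Remove rows in which every key figure is zero,
        -- e.g. rows left over after reverse postings
        DELETE FROM "/BIC/EZSALES"
        WHERE "AMOUNT" = 0 AND "QUANTITY" = 0;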
    For non-cumulative InfoCubes, you can ensure that the non-cumulative marker is not updated by setting the indicator No Marker Updating. You have to use this option if you are loading historic non-cumulative value changes into an InfoCube after an initialization has already taken place with the current non-cumulative. Otherwise the results produced in the query will not be correct. For performance reasons, you should compress subsequent delta requests.

  • Infocube compression error CX_SQL_EXCEPTION

    Hi,
    We are encountering an InfoCube compression error (CX_SQL_EXCEPTION: parameter is missing).
    We have applied two notes, 1028847 and 973969, but they did not fix the error. In the system log we have the following error:
    ORA-00054: resource busy and acquire with NOWAIT specified.
    Every time the compression fails, we repeat it and it completes successfully on the second attempt.
    Does anyone know what we can do to fix this?
    Thanks!
    JXA

    Hello Girija,
    Please check OSS note 973969.
    Regards,
    Praveen

  • SM12 not showing infocube name when locked

    Hi experts,
    We upgraded to BW 7.4 on HANA, and SM12 works differently than before.
    When an InfoCube is locked by an IP process, SM12 used to show which InfoCube was locked in the lock argument.
    In the first screenshot you can see SM12 in older versions: in the "lock argument" column we were able to see which InfoCube is locked.
    In the second screenshot you can see SM12 in BW 7.4 on HANA: there is no information about which InfoCube is locked.
    This is important for us because we have lots of IP InfoCubes, and we need to know who is locking which InfoCube.
    Does anybody know how to see that information?
    thanks in advance!
    Miguel

    Hi Miguel, can you check RSPLSE and reply with what you find there?
    I was looking at the information below and I don't see anything changed in 7.4:
    Displaying Active Locks - Planning Business Data with BW Integrated Planning - SAP Library
    If you want to delete active locks, use SAP lock management (transaction SM12). Depending on where the lock table is stored, you might have to select a different lock structure:
      Lock table on SAP lock server: table name RSPLS_S_LOCK.
      Lock table in shared objects memory: table name RSPLS_S_LOCK_SYNC.
      Lock table in SAP liveCache: table name LCA_GUID_STR.
    However, in SAP lock management (transaction SM12) you cannot see which data records are locked. To do this, use the maintenance transaction for lock settings in planning (transaction RSPLSE).
    Perhaps some security tasks are still pending after the upgrade; just wondering if you can try using DDIC and let us know.
    Thanks
    Srikanth M

  • No Marker Update in InfoCube compression

    Hi,
    Please explain how the 'No Marker Update' option works in InfoCube compression for inventory management.
    Best Regards,
    Ramesh

    Marker update when uploading/compressing
    We will use an example to explain the role of the marker for a stock InfoCube when executing a query. The scenario is as follows:
    •     Current date: 31.03.2002.
    •     You have set up an opening balance of 100 units on 01.01.2002 and loaded it into the stock InfoCube.
    •     Historical material movements from the three previous months (October 2001: 10 units; November 2001: 20 units; December 2001: 10 units) are loaded into BW.
    •     Since then, successive material movements have been transferred into BW using the delta process. The delta requests transferred at the end of January (20 units) and February (10 units) were already compressed after successful validation; the last delta request, from the end of March (10 units), is still in the InfoCube in uncompressed form.
    To explain the role of the marker (= reference point), consider the upload steps over time.
    After uploading the opening balance, the opening stock is not assigned to the actual date, but posted to a point in infinity (0CALDAY = 31.12.9999, for example).
    After the three previous months have been uploaded and compressed, the marker value remains unchanged at 100 units. This is achieved using the "No marker update" indicator during compression (see section 3.2.2, step 6); the marker is thus not changed.
    After successively uploading the deltas from January to March, of which only the first two are compressed, compressing the requests for January and February executes a marker update, which can be seen by the marker now having the value 130 units. The values for March have not been included in the marker yet.
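    A quick worked check of the figures in this example: the historical months (October to December 2001) are already contained in the opening balance of 100 units, which is exactly why they are compressed with "No marker update". Compressing January and February with marker update gives a marker of 100 + 20 + 10 = 130 units, and a query for the current stock on 31.03.2002 reads the marker plus the still uncompressed March request: 130 + 10 = 140 units.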
    Please go through the document:
    How To…Handle Inventory Management Scenarios in BW
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/f83be790-0201-0010-4fb0-98bd7c01e328
    Cheers
    Pagudala

  • The delta update for the InfoCube ZCUBE is invalidated

    Hi BI Experts,
    I am working on BW 3.5.
    For the DataSource 2LIS_13_VDITM, some delta requests were missing for 3 days. So I identified all the documents that were missing or not updated in BW. I deleted the setup tables and filled them with the new document numbers.
    I ran a full repair request at the InfoPackage level, and the data got updated in the ODS.
    But the DM status is missing for all the requests in the ODS.
    And the data from the full repair request did not get updated in the cube; I got the following error:
    The delta update for the InfoCube ZCUBE is invalidated. This can be due to one of the following:
    1. A request, which was already retrieved from the target system, was deleted from the InfoCube.
    2. A request, which was not yet retrieved from the target system, was compressed in the InfoCube.
    And the following delta was also not successful. How can I clear up this mess and ensure a smooth flow of deltas? How can I set the DM status?
    Thanks
    Kumar

    Thanks for your reply.
    When I ran the full repair request on the ODS, the data went into the new table and the DM status got reset (vanished). And the following day the delta failed (the data for that request did not get updated in the ODS or the cube).
    How can we bring back the DM status and make the deltas successful?
    Thanks
    Kumar

  • Steps for loading data into the infocube in BI7, with dso in between

    Dear All,
    I am loading data into an InfoCube in BI7, with a DSO in between. The data flow looks like this, from top to bottom:
    InfoCube (customized)
    Transformation
    DSO (customized)
    Transformation
    DataSource (customized)
    The mapping and everything else looks fine, and the data is also seen in the cube on the FULL load.
    But due to some minor error (I guess), I am unable to see the DELTA data in the DSO, although it is loaded into the DataSource through process chains.
    Kindly advise me where I went wrong.
    Or: step-by-step instructions for loading data into an InfoCube in BI7, with a DSO in between, would be really helpful.
    Regards,

    Hi,
    My first impulse would be to check whether the DSO is set to "direct update". In that case no delta is possible, because the change log is not maintained.
    My second thought would be to check the DTP moving data between the DSO and the target cube. If it is set to full, you will not get a delta. It is only possible to create one DTP, so if you created it in full mode you can't switch it to delta; just create the DTP in delta mode.
    Hope this helps.
    Kind regards,
    Jürgen

  • Loading performance of the infocube & ODS ?

    Hi Experts,
    Do we need to deactivate the aggregates on an InfoCube before loading so that it decreases the loading time, or does it not matter? I mean, does having aggregates on the InfoCube affect the loading of the cube in any way? Also, please give me a few tips to increase the loading performance of a cube/ODS. Some of them are:
    1. Delete the index before loading and create it again afterwards.
    2. Run parallel processes.
    3. Compression of the InfoCube: how does compressing an InfoCube decrease the loading time?
    Please throw some light on the loading performance of the cube/ODS.
    Thanks,

    Hi Daniel,
    Aggregates will not affect the data loading. Aggregates are just views, similar to the InfoCube itself.
    On the performance tuning options you mentioned for data loading:
    Compression is somewhat like archiving the InfoCube data. Once compressed, data cannot be decompressed, so you need to ensure the data is correct before compressing. When you compress the data, you free up some space, which can improve data loading performance.
    Other than the above options:
    1. If you have routines written at the transformation level, check whether they are tuned properly.
    2. PSA partition size: in transaction RSCUSTV6 the size of each PSA partition can be defined. This size defines the number of records that must be exceeded to create a new PSA partition. One request is contained in one partition, even if its size exceeds the user-defined PSA size; several packages can be stored within one partition.
    The PSA is partitioned to enable fast deletion (DDL statement DROP PARTITION; see the sketch after this list). Packages are not deleted physically until all packages in the same partition can be deleted.
    3. Export DataSource: the export DataSource (or data mart interface) enables the population of InfoCubes and ODS objects from other InfoCubes.
    The read operations of the export DataSource are single-threaded (i.e. sequential). Note that during the read operations (depending on the complexity of the source InfoCube) the initial time before data is retrieved (parsing, reading, sorting) can be significant.
    The posting to a subsequent data target can be parallelized by the ROIDOCPRMS settings for the "myself" system. But note that several data targets cannot be populated in parallel; there is only parallelism within one data target.
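    For illustration, the fast PSA deletion mentioned in point 2 boils down to something like this (a hedged sketch; the PSA table and partition names are hypothetical, and BW generates and manages the real statements itself):

        -- Dropping a whole partition discards all of its rows at once,
        -- far faster than deleting them row by row
        ALTER TABLE "/BIC/B0001234000" DROP PARTITION "P0000000005";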
    Hope it helps!!!
    Thanks,
    Lavanya.

  • Delete Overlapping Requests from InfoCube: Before or After the Generate Index?

    Hi,
    Should "Delete Overlapping Requests from InfoCube" run before or after the Generate Index step for the InfoCube? And why?
    I think "after", but the system (transaction RSPC) suggests: 1. Generate Index, 2. Delete Overlapping Requests from InfoCube ...
    Thanks
    Alessandro

    Hi Alessandro,
    Bottom line: an index speeds up reading. While loading data, you need to delete the index.
    An index degrades performance while updating or modifying DB entries (loading), and improves performance while reading the DB (reporting).
    This is not specific to BW; every RDBMS behaves this way.
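    As a sketch of what the Delete Index / Create Index steps of a load amount to under the hood (hypothetical Oracle index and table names; BW generates the real statements):

        -- Before the load: drop the secondary bitmap index on the F fact table,
        -- so the mass insert runs without index maintenance
        DROP INDEX "/BIC/FZSALES~010";
        -- After the load: recreate it so reporting is fast again
        CREATE BITMAP INDEX "/BIC/FZSALES~010"
          ON "/BIC/FZSALES" ("KEY_ZSALES1");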
    Regards,
    Nagesh.

  • Fact Table and the InfoCube

    Hi All,
    When I query inventory quantity on an InfoCube, I get all zero values. I verified the fact table for that InfoCube and found correct values there. The update rule applied is a direct InfoObject-to-InfoObject mapping (source key figure). But I am still getting all zero values for inventory quantity from the InfoCube. Why?
    Thanks & Regards 
    YJ

    hi,
    You can look for the answer here:
    a)
    http://service.sap.com/bi - choose BI InfoIndex from the left navigation area and then follow the "Non cumulatives" link in the list. There are two very important docs there.
    b)
    SAP Note 586163, "Composite Note on SAP R/3 inventory management in SAP BW"
    I had a similar problem with incorrect request compression or an incorrect validity table structure (you can find detailed info in those docs).
    Regards,
    Andrzej

  • Archiving Infocube through Process Chain...

    Hi All,
    I need help creating a process chain for archiving an InfoCube. I am able to archive the InfoCube manually, but not through a process chain.
    Is it possible to archive an InfoCube through a process chain? If yes, please give the steps to create a process chain for archiving.
    Thanks in advance.
    Bandana.

    Hi,
    It is possible to archive data from an InfoCube via a process chain.
    Have a start process followed by "Archive Data from an InfoProvider". The trick lies in the variants used for the archiving steps of the chain.
    Create a process by dragging in the "Archive Data..." process, give the variant a name, and use this variant for writing the archive file. Choose your archiving process (the same archiving process you created to archive data from the InfoCube). As this is the write phase, do not check the "Continue Open Archiving Requests" checkbox, and choose option "40 Write phase completed successfully" under "Continue Process Until Target Status". Now enter the required selection conditions. In case you want to reuse this chain, give a relative value under the "Primary Time Restriction" tab and save this variant. This is your variant 1.
    Now drag in the same archiving process and create another variant. In this process you need to select "Continue Open Archiving Request" and choose option "70 Deletion phase confirmed and request completed" from the dropdown list; this variant deletes the data from the InfoCube. This is your variant 2.
    So now you have a process chain: Start > Archive Process with variant 1 (to write the archive file) > Archive Process with variant 2 (to delete the data from the cube).
    That's it!
    Regards
