Compress and rollup

Hello,
It seems that for non-cumulative InfoProviders (for example 0IC_C03), the order of the compression and rollup processes is important.
We need to compress before rolling up into the aggregates.
However, in the process chain, if I try to compress before rolling up, the two processes end in error (RSMPC011 and RSMPC015).
In the management screen of the InfoProvider, the "compress after rollup" option is unchecked.
Please, can you tell me how I can do this?
Thank you everybody.
Best regards.
Vanessa Roulier

Hi
We can use either option.
Aggregates are compressed automatically following a successful rollup. If, subsequently, you want to delete a request, you first need to deactivate all the aggregates.
This process is very time-consuming.
If you compress the aggregates first, then even if the InfoCube is compressed, you are able to delete requests that have been rolled up, but not yet compressed, without any great difficulty.
Just try checking that option and see whether the load works.
Thanks
Tripple k

Similar Messages

  • Compress and rollup the cube

    Hi Experts,
    do we have to compress and then roll up the aggregates? What happens if we roll up before compressing the cube?
    Raj

    Hi,
    The data is rolled up to the aggregate based on the request. So once the data is loaded, the request is rolled up to the aggregate to fill it with new data. After compression, the request is no longer available.
    Whenever you load data, you do a rollup to fill all the relevant aggregates.
    When you compress the data, all request IDs are dropped.
    So when you compress the cube, the "COMPRESS AFTER ROLLUP" option ensures that all the data is rolled up into the aggregates before the compression is done.
    Hope this helps.
    Regards,
    Haritha.
    Edited by: Haritha Molaka on Aug 7, 2009 8:48 AM

  • How to compress and decompress a pdf file in java

    I have a PDF file.
    What I want to do is use a lossless compression technique to compress and decompress that file.
    Please help me to do so.
    I have always been worried about this topic.

    Here is a simple program that does the compression bit.
    import java.io.*;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipOutputStream;

    public class Zipper {
        // Compresses a single file into a zip archive (zip is lossless,
        // so the original PDF is restored exactly on extraction).
        static void compressFile(String compressedFilePath, String filePathTobeCompressed) {
            try {
                ZipOutputStream zipOutputStream = new ZipOutputStream(new FileOutputStream(compressedFilePath));
                File file = new File(filePathTobeCompressed);
                FileInputStream fileInputStream = new FileInputStream(file);
                // Read in fixed-size chunks instead of allocating a buffer the size of the whole file
                byte[] readBuffer = new byte[8192];
                int bytesIn;
                // Use the file name (not the full path) as the entry name inside the archive
                zipOutputStream.putNextEntry(new ZipEntry(file.getName()));
                while ((bytesIn = fileInputStream.read(readBuffer)) != -1)
                    zipOutputStream.write(readBuffer, 0, bytesIn);
                zipOutputStream.closeEntry();
                fileInputStream.close();
                zipOutputStream.close();
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
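
    The question also asked about decompression. Below is a minimal sketch of the reverse operation using java.util.zip.ZipInputStream; the class and method names (Unzipper, extractFirstEntry) and the 8 KB buffer are my own choices, not from the original post. Because zip is lossless, extracting the entry reproduces the original PDF byte for byte.

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipInputStream;

    public class Unzipper {
        // Extracts the first entry of a zip archive to the given output path.
        static void extractFirstEntry(String zipFilePath, String outputFilePath) throws IOException {
            ZipInputStream zipInputStream = new ZipInputStream(new FileInputStream(zipFilePath));
            ZipEntry entry = zipInputStream.getNextEntry();   // positions the stream at the first entry
            if (entry == null) {
                zipInputStream.close();
                throw new IOException("Archive is empty");
            }
            FileOutputStream out = new FileOutputStream(outputFilePath);
            byte[] buffer = new byte[8192];
            int bytesRead;
            // Read the current entry's decompressed bytes and write them out unchanged
            while ((bytesRead = zipInputStream.read(buffer)) != -1)
                out.write(buffer, 0, bytesRead);
            out.close();
            zipInputStream.close();
        }
    }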

  • The effect of cube and rollup function

    1. For under 500,000 rows in Oracle 9i, rollup is faster than cube.
    But above 1,000,000 rows, cube is faster than rollup.
    Right?
    2. Why do rollup(a) and cube(a) both take a parameter?
    3. With only one parameter, is the compute process of cube and rollup not the same?
    Thx.

    >
    1. For under 500,000 rows in Oracle 9i, rollup is faster than cube.
    But above 1,000,000 rows, cube is faster than rollup.
    Right?
    >
    Rollup summarises data differently from cube, so whether one is faster or slower than the other is not really the right question.
    The main consideration should be what format you would like the result set to be in.
    http://download.oracle.com/docs/cd/E11882_01/server.112/e16579/aggreg.htm#DWHSG8609
    http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#WBETL03002
    >
    2. Why do rollup(a) and cube(a) both take a parameter?
    >
    The parameter(s) refer to the grouping columns against which the data is summarised.
    The Oracle documentation states the following:
    >
    CUBE takes a specified set of grouping columns and creates subtotals for all of their possible combinations
    >
    So both rollup and cube can summarise across a group of one or more columns.
    >
    3. With only one parameter, is the compute process of cube and rollup not the same?
    >
    Actually, with a single column the two produce the same result: both ROLLUP(a) and CUBE(a) generate the grouping set (a) plus the grand total. The difference only appears with two or more grouping columns.
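
    To make the difference concrete, here is a small illustrative Java sketch (my own, not from the thread) that enumerates the grouping sets each construct produces. ROLLUP(c1..cn) yields the n+1 prefixes of the column list, while CUBE(c1..cn) yields all 2^n subsets; with a single column both reduce to the same two sets.

    import java.util.ArrayList;
    import java.util.List;

    public class GroupingSets {
        // ROLLUP(c1..cn) produces the n+1 prefixes of the column list.
        static List<List<String>> rollup(List<String> cols) {
            List<List<String>> sets = new ArrayList<>();
            for (int i = cols.size(); i >= 0; i--)
                sets.add(new ArrayList<>(cols.subList(0, i)));
            return sets;
        }

        // CUBE(c1..cn) produces all 2^n subsets of the column list.
        static List<List<String>> cube(List<String> cols) {
            List<List<String>> sets = new ArrayList<>();
            for (int mask = 0; mask < (1 << cols.size()); mask++) {
                List<String> set = new ArrayList<>();
                for (int i = 0; i < cols.size(); i++)
                    if ((mask & (1 << i)) != 0)
                        set.add(cols.get(i));
                sets.add(set);
            }
            return sets;
        }

        public static void main(String[] args) {
            List<String> cols = List.of("region", "product");
            System.out.println("ROLLUP: " + rollup(cols)); // [[region, product], [region], []]
            System.out.println("CUBE:   " + cube(cols));   // [[], [region], [product], [region, product]]
        }
    }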

  • Cube Reload and rollup ?

    Hello BW Experts,
    I have a cube A with two requests, Req1 and Req2. On day 1 I loaded req1.1 and req2.1, and the rollup was done for roll.req1.1 and roll.req2.1. Now I have to reload req1.r and req2.r. What is the procedure to be followed, and how do I do the rollup for roll.req1.r and roll.req2.r?
    1) delete the req2.1
    2) delete the req1.1
    3) is there anything to do with the rollups of req1 and req2?
    4) reload the req1.r
    5) reload the req2.r
    6) I am assuming the rollup happens automatically? Or does something need to be done to trigger the rollups?
    Q1) Please let me know what I have to do in steps 3 and 6.
    Q2) Is the sequence mentioned above correct? Is there anything I am missing?
    Q3) Is the rollup done automatically? How can I verify it?
    suggestions appreciated.
    Thanks,
    BWer

    Hi BWer,
    Q1) When you delete the requests, the aggregates are calculated again (rollup). If you reload the requests, it depends on how the rollup is set up, whether it is automatic or you have to start it manually. You will see this with the requests.
    Q2) The sequence seems to be correct.
    Q3) You can check the rollup when you manage the InfoCube: on the requests page, the fifth column (Rollup Status) should have a check mark.
    Hope it helps.
    Regards,
    Diego Lombardini

  • Cube and rollup

    How do I use CUBE and ROLLUP? What are they exactly?

    Hi,
    They are aggregate functions.
    Here's a nice explanation:
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1512805503041
    See also:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/aggreg.htm#sthref1669

  • When a CD jewel song list is printed from a playlist in iTunes, the list is compressed and unreadable. No problem before the latest software change. How do I fix this?

    When a song list is printed from a playlist in iTunes for inserting into a CD jewel case, the song list is compressed and indecipherable. I did not have this problem prior to the latest software change. How can I fix this?

    Can you play the song in iTunes?
    If you can't, the song file is probably corrupt and needs to be replaced.

  • Compression and query performance in data warehouses

    Hi,
    Using Oracle 11.2.0.3, we have a large fact table with bitmap indexes to the associated dimensions.
    I understand bitmap indexes are compressed by default, so I assume they cannot be compressed further.
    Is this correct?
    I wish to try compressing the large fact table to see if this will reduce the I/O on reads and therefore give performance benefits.
    ETL speed is fine; I just want to increase the report performance.
    Thoughts - has anyone seen significant gains in data warehouse report performance with compression?
    Also, the current PCTFREE on the table is 10%.
    As we only insert into the table, I am considering making this 1% to improve report performance.
    Thoughts?
    Thanks

    First of all:
    Table Compression and Bitmap Indexes
    To use table compression on partitioned tables with bitmap indexes, you must do the following before you introduce the compression attribute for the first time:
    Mark bitmap indexes unusable.
    Set the compression attribute.
    Rebuild the indexes.
    The first time you make a compressed partition part of an existing, fully uncompressed partitioned table, you must either drop all existing bitmap indexes or mark them UNUSABLE before adding a compressed partition. This must be done irrespective of whether any partition contains any data. It is also independent of the operation that causes one or more compressed partitions to become part of the table. This does not apply to a partitioned table having B-tree indexes only.
    This rebuilding of the bitmap index structures is necessary to accommodate the potentially higher number of rows stored for each data block with table compression enabled. Enabling table compression must be done only for the first time. All subsequent operations, whether they affect compressed or uncompressed partitions, or change the compression attribute, behave identically for uncompressed, partially compressed, or fully compressed partitioned tables.
    To avoid the recreation of any bitmap index structure, Oracle recommends creating every partitioned table with at least one compressed partition whenever you plan to partially or fully compress the partitioned table in the future. This compressed partition can stay empty or even can be dropped after the partition table creation.
    Having a partitioned table with compressed partitions can lead to slightly larger bitmap index structures for the uncompressed partitions. The bitmap index structures for the compressed partitions, however, are usually smaller than the appropriate bitmap index structure before table compression. This highly depends on the achieved compression rates.
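
    As a hedged illustration only, the three steps above could be scripted like this via JDBC (SALES_FACT and SALES_FACT_BIX are hypothetical object names, and the exact DDL, such as REBUILD PARTITION for partitioned indexes, should be checked against your version's documentation):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CompressFactTable {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details for the warehouse in question.
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/DWH", "dwh_user", "secret");
            Statement stmt = conn.createStatement();
            stmt.execute("ALTER INDEX sales_fact_bix UNUSABLE");  // 1. mark the bitmap index unusable
            stmt.execute("ALTER TABLE sales_fact COMPRESS");      // 2. set the compression attribute
            stmt.execute("ALTER INDEX sales_fact_bix REBUILD");   // 3. rebuild the index
            stmt.close();
            conn.close();
        }
    }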

  • Basic questions re zip, compression, and e-mailing large attachments

    Hi, I have never really understood what is meant by "zip" and compressing a file.
    I believe these things make it possible to make a file smaller for sending via e-mail, and then when the person at the other end opens the attachment, it returns to its full size in all its glory.
    Is this true?
    The reason I ask is that I have a couple of .mov files that I want to send as attachments to a friend, but they are both mammoth in size: 1 GB each.
    I know it's probably a lost cause, but can compressing and zip help me out here? Thanks for your patient replies.

    Yes, that's right. Read this to see how: http://docs.info.apple.com/article.html?path=Mac/10.6/en/8726.html. You'll have to experiment to see if you save enough space to send by email. Different Internet providers have different maximum attachment sizes, so just try a test email attachment. Your sig says 10.5.8 and the article is for 10.6, but hopefully it is the same.

  • Compress and Encryption Folder

    I would like to use Automator to compress and encrypt a folder.
    I've tried using Automator to create an encrypted compressed file (.zip), but I don't appear to have the option for encryption. Can someone suggest a workflow to encrypt a file?
    Thanks in advance!

    When I want to encrypt a file, I save it as an encrypted .pdf file.
    I found that here:
    http://docs.info.apple.com/article.html?path=Mac/10.4/en/mh1035.html
    Too bad it can only encrypt one file and not a folder.
    PowerBook G4   Mac OS X (10.4.4)  

  • Compress and uncompress data size

    Hi,
    I have checked the total used size from dba_segments, but I need to check compressed and uncompressed data size separately.
    DB is 10.2.0.4.

    >
    I have checked the total used size from dba_segments, but I need to check compressed and uncompressed data size separately.
    DB is 10.2.0.4.
    >
    Unless you have actually performed BULK inserts of data, NONE of your data is compressed.
    You haven't posted ANYTHING that suggests that ANY of your data might be compressed. In 10g, compression will only be performed on NEW data, and ONLY when that data is inserted using BULK INSERTS. See this white paper:
    http://www.oracle.com/technetwork/database/options/partitioning/twp-data-compression-10gr2-0505-128172.pdf
    >
    However, data which is modified without using bulk insertion or bulk loading techniques will not be compressed
    >
    1. Who compressed the data?
    2. How was it compressed?
    3. Have you actually performed any BULK INSERTS of data?
    SB already gave you the answer: if data is currently 'uncompressed' it will NOT have a 'compressed size', and if data is currently 'compressed' it will NOT have an 'uncompressed size'.
    >
    Now our management wants to know how much compressed data there is and how much uncompressed data there is.
    >
    1. Did that 'management' approve the use of compression?
    2. Did 'management' review the tests that were performed BEFORE compression was done? Those tests would have reported the expected compression and any expected DML performance changes that compression might cause.
    The time for testing the possible benefits of compression is BEFORE you actually implement it. Shame on management if they did not do that testing already.
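
    If you just want to see which tables currently have the compression attribute enabled, one hedged starting point (a sketch assuming access to the DBA views; the connection details are placeholders) is to query the COMPRESSION column of DBA_TABLES:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CompressionCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details for the 10.2.0.4 database in question.
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "system", "secret");
            Statement stmt = conn.createStatement();
            // DBA_TABLES.COMPRESSION shows whether compression is ENABLED per table.
            ResultSet rs = stmt.executeQuery(
                    "SELECT owner, table_name, compression FROM dba_tables " +
                    "WHERE compression = 'ENABLED'");
            while (rs.next())
                System.out.println(rs.getString(1) + "." + rs.getString(2)
                        + " -> " + rs.getString(3));
            rs.close();
            stmt.close();
            conn.close();
        }
    }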

  • How to find data compression and speed

    1. What is the command/way to view how much space the data takes up in HANA tables as opposed to the same data on disk? I mean, how do people measure that there has been a 10:1 data compression?
    2. The time taken for execution, as seen from executing the same SQL on HANA, varies (I see that when I am F8-ing the same query repeatedly), so it is not given in terms of pure CPU cycles, which would have been more absolute.
    I always thought that there must be a better way of checking the speed of execution, like checking a log which records all data regarding executions, rather than just watching the query executions in the output window.

    Rajarshi Muhuri wrote:
    1. What is the command/way to view how much space the data takes up in HANA tables as opposed to the same data on disk? I mean, how do people measure that there has been a 10:1 data compression?
    The data is stored the same way in memory as it is on disk. In fact, scans, joins etc. are performed on compressed data.
    To calculate the compression factor, we check the required storage after compression and compare it to what would be required to save the same amount of data uncompressed (you know, length of data x number of occurrences for each distinct value of a column).
    One thing to note here: compression factors must always be seen for one column at a time. There is no such measure as a "table compression factor".
    > 2. The time taken for execution, as seen from executing the same SQL on HANA, varies (I see that when I am F8-ing the same query repeatedly), so it is not given in terms of pure CPU cycles, which would have been more absolute.
    >
    > I always thought that there must be a better way of checking the speed of execution, like checking a log which records all data regarding executions, rather than just watching the query executions in the output window.
    Well, CPU cycles wouldn't be an absolute measure either.
    Think about the time that is not spent on the CPU.
    Wait time for locks, for example.
    Or time lost because other processes used the CPU.
    In reality you're usually not interested so much in the perfect execution of one query that has all the resources of the system bound to it; instead you strive to get the best performance when the system has its typical workload.
    In the end, the actual response time is what means money to business processes.
    So that's what we're looking at.
    And there are some tools available for that, the performance trace for example.
    And yes, query runtimes will always differ and will never be totally stable.
    That is why performance benchmarks take averages for multiple runs.
    regards,
    Lars
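
    As a hedged illustration of the per-column comparison Lars describes, the sketch below queries the monitoring view M_CS_COLUMNS; I am assuming its UNCOMPRESSED_SIZE and MEMORY_SIZE_IN_TOTAL columns and using illustrative schema/table names, so verify against your HANA revision:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class HanaCompressionFactor {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details; MY_SCHEMA/MY_TABLE are illustrative names.
            Connection conn = DriverManager.getConnection(
                    "jdbc:sap://hanahost:30015/", "monitor_user", "secret");
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT column_name, uncompressed_size, memory_size_in_total " +
                    "FROM m_cs_columns WHERE schema_name = ? AND table_name = ?");
            ps.setString(1, "MY_SCHEMA");
            ps.setString(2, "MY_TABLE");
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                long uncompressed = rs.getLong(2);
                long inMemory = rs.getLong(3);
                // Compression factor per column; as Lars notes, there is no table-level measure.
                double factor = inMemory > 0 ? (double) uncompressed / inMemory : 0.0;
                System.out.printf("%s: %.1f:1%n", rs.getString(1), factor);
            }
            rs.close();
            ps.close();
            conn.close();
        }
    }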

  • The detail algorithm of OLTP table compress and basic table compress?

    I'm doing research on the detailed algorithms of OLTP table compression and basic table compression. Anyone who knows, please tell me, and also the difference between them. Thank you.

    http://www.oracle.com/us/products/database/db-advanced-compression-option-1525064.pdf
    Edited by: Sanjaya Balasuriya on Dec 5, 2012 2:49 PM

  • Compress and Aggregates

    Hi experts,
    Please, what is the difference between aggregates and compression?
    How are they done? Can anybody give the steps?
    Thanks in advance.
    With regards,
    raghu

    Dear Raghu,
    Both compression and aggregates are used to increase reporting speed.
    To understand how compression works, you have to know BW's extended star schema. From a technical point of view, InfoCubes consist of fact tables and dimension tables. Fact tables store all your key figures; dimension tables tell the system which InfoObject identifications are being used with the key figures. Now, every InfoCube has two fact tables, a so-called F-table and an E-table. The E-table is an aggregation of the F-table's records, as the request ID is removed. Therefore an E-table normally has fewer records than an F-table. When you load data into an InfoCube, it is first stored only in the F-table. By compressing the InfoCube you update the E-table and delete the corresponding records from the F-table.
    Aggregates are, from a technical point of view, InfoCubes themselves. They are related to your "basis" InfoCube, but you have to define them manually. They consist of a subset of all the records in your InfoCube. In principle there are two ways to select the relevant records for an aggregate: either you select only some of the InfoObjects which are included in your InfoCube, or you choose fixed values for certain InfoObjects. Like compression, updating aggregates is a task which takes place after the loading of your InfoCube.
    When a report runs, BW automatically takes care of the F- and E-tables and any existing aggregates.
    Further information and instructions can be found in the SAP Help:
    http://help.sap.com/saphelp_nw04/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/91/270f38b165400fe10000009b38f8cf/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/7d/eb683cc5e8ca68e10000000a114084/frameset.htm
    Greetings,
    Stefan

  • Compress and Ship Archive Logs to DR

    I am currently working on an Oracle 10.2.0.3, Linux 5.5 DR setup. The DR is at a remote location. Automatic synchronous archive shipping is enabled. As the DR is at a remote location, transferring 500MB archive logs puts a load on the bandwidth. Is there any way I can compress and send the archive logs to the DR? I am aware of the 11g parameter Compression=enabled, but I guess there's nothing along similar lines for 10g.
    Kindly help!
    Regards,
    Bhavi Savla.

    AliD wrote:
    What is "automatic synchronous archive shipping"? Do you mean your standby is in MAXPROTECTION or MAXAVAILABILITY mode? I fail to see how transferring only 500MB of data to anywhere is a problem. Your system should be next to idle!
    Compression is an 11g feature and is licensed (Advanced Compression option). Also, if I'm not mistaken, 10.2.0.3 is out of support. You should plan to upgrade it as soon as possible.
    Edit - It seems you mean your archive logs are 500MB each, not a total of 500MB for the day. In that case, to eliminate the peaks in transfer, use LGWR in ASYNC mode.
    Edited by: AliD on 10/05/2012 23:24
    It is a problem as we have archives generating very frequently!!!
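
    Since 10g has no built-in redo transport compression, one common workaround is to ship the archived logs yourself, outside standard redo transport, compressing them before transfer and decompressing on the DR side before applying. A minimal sketch of the compression step, using java.util.zip.GZIPOutputStream with a hypothetical archive log path:

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.util.zip.GZIPOutputStream;

    public class GzipArchiveLog {
        // Compresses one archived log to <name>.gz; transfer and cleanup
        // are left to the shipping script.
        static void gzip(String logPath) throws IOException {
            FileInputStream in = new FileInputStream(logPath);
            GZIPOutputStream out = new GZIPOutputStream(new FileOutputStream(logPath + ".gz"));
            byte[] buffer = new byte[8192];
            int n;
            while ((n = in.read(buffer)) != -1)
                out.write(buffer, 0, n);
            in.close();
            out.close();   // finishes the gzip stream
        }

        public static void main(String[] args) throws IOException {
            gzip("/u01/arch/arch_1_12345.arc");   // hypothetical archive log path
        }
    }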
