Archiving only compressed InfoCubes?

Hello Colleagues,
We have an InfoCube with 5 million records. Whenever I try to select a date range for archiving, I always get the error that the request is not compressed. I could not find in any document a record count above which compression is required. Is it a general restriction that is simply never mentioned in the documentation? Does anybody have experience with archiving and has also run into this problem?
Thanks for the help
Henning

Hi,
It is recommended to archive only fully compressed InfoCubes, because you want to ensure that you do not archive data from requests while they are still being loaded.
In addition, if your InfoCube has a non-cumulative key figure, there is a hard check to ensure all requests are compressed before an archiving job can execute.
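For context, a rough sketch of what compression does at the database level (the table and column names below are invented for illustration; they are not the real BW schema):

    -- Compression aggregates the request-level F fact table into the E fact
    -- table and drops the request ID, so individual requests can no longer
    -- be identified (or deleted by request) afterwards.
    INSERT INTO e_fact (calday, material, quantity)
    SELECT calday, material, SUM(quantity)
      FROM f_fact
     GROUP BY calday, material;
    DELETE FROM f_fact;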
I hope this helps,
Mike

Similar Messages

  • Aggregation, archiving and compression

    Hello,
    How important is it to set up aggregation, archiving and compression before the go-live of a BW environment?
    Is it possible to do this after the cubes have been filled?
    We are going live after the weekend and haven't given this much consideration... The plan is to start thinking about this now, just after the go-live...
    Need I worry?
    Best regards,
    Fredrik

    hi Fredrik,
    aggregates, compression and archiving are all done to improve performance.
    yes, you can do compression and rollup (after an aggregate has been created) after the cubes have been filled, and normally you would include this in a process chain; there are process types for this. compression and rollup are done after the daily data has been loaded into the InfoCube.
    normally you can set the cube up for compression.
    aggregates are created based on an examination of query runtimes.
    check the doc on aggregates and query performance:
    Business Intelligence Performance Tuning [original link is broken]
    as for archiving: right after go-live you won't need it so soon; archiving is applied to historical data that is no longer used.
    you can start by including compression in your process chains, and evaluate your most-used queries to see if aggregates are needed.
    hope this helps.

  • Compress infocube with zero elimination

    Experts, is it true that you should always check "with zero elimination" when you compress InfoCubes? I was reading the well-known SAP doc "How to Handle Inventory Management Scenarios in BW", and in the screenshots for cube compression that checkbox is not checked. Does anybody know why?
    Thanks!

    Hello,
    With zero elimination, entries in which all key figures are equal to zero are deleted from the fact table during compression. You don't want that in Inventory Management, where records whose key figures net to zero can still matter for the non-cumulative stock calculation.
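    Roughly, in SQL terms (invented names, not the actual BW tables), the extra step during compression is:

        -- After summation into the E fact table, zero elimination removes
        -- rows in which every key figure is zero.
        DELETE FROM e_fact
         WHERE quantity = 0
           AND amount = 0;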
    Regards,
    Jorge Diogo

  • Media recovery not enabled or manual archival only

    db-version oracle 10.2.0.2
    os windows server 2003
    after starting up the database, I get the following LGWR trace file:
    *** SERVICE NAME:() 2007-01-21 02:20:06.326
    *** SESSION ID:(166.1) 2007-01-21 02:20:06.326
    Maximum redo generation record size = 156160 bytes
    Maximum redo generation change vector size = 150672 bytes
    *** 2007-01-21 10:00:32.272
    Media recovery not enabled or manual archival only 0x10000
    *** 2007-01-22 13:22:19.801
    Media recovery not enabled or manual archival only 0x10000
    *** 2007-01-23 05:49:22.517
    Media recovery not enabled or manual archival only 0x10000
    is it bug 4591936?

    Hi,
    >>is it the bug 4591936?
    Yes. These messages were added for debugging purposes and can be ignored.
    Cheers

  • Aggregate = compress infocube?

    I've heard that an aggregate has E and F fact tables, which is a property of compressed InfoCubes.
    So what's the difference between an aggregate and compressing an InfoCube?
    Joseph

    Whenever you have a huge volume of data, it is preferable to use aggregates; there are some other considerations as well. An aggregate is a separate, precomputed summary of the cube that suitable queries read instead of the full fact tables, whereas compression rewrites the cube's own fact tables.
    Please refer to the following link:
    http://help.sap.com/saphelp_nw04/helpdata/en/7d/eb683cc5e8ca68e10000000a114084/content.htm
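    To make the difference concrete, here is a loose SQL analogy (all names invented; BW manages this internally): an aggregate behaves like a materialized summary stored next to the cube, which the OLAP processor can route matching queries to, while compression, as sketched above, collapses requests inside the cube itself.

        -- An aggregate: a precomputed, smaller summary of the cube.
        CREATE TABLE cube_aggregate AS
        SELECT plant, calmonth, SUM(amount) AS amount
          FROM e_fact
         GROUP BY plant, calmonth;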

  • Archive only 3 batch classification tables

    Hi Gurus,
    I want to archive only AUSP, KSSK and INOB tables.
    Is there any way that I can archive only those 3 tables (i.e. without archiving any other tables linked to the particular archiving object)?
    Also, what are the prerequisites (technically completed production orders, closed purchase orders) for that?
    Thanks & Best Regards,
    Sandun

    There is no archiving object that can archive records from these tables without archiving records from some other leading table.
    These tables are not used only for batch classification; they hold values for several other classifications as well, since they are the central tables of the classification system.
    You have to archive the business object, e.g. MM_SPSTOCK for batches, to get the dependent records from tables AUSP, KSSK ... archived too.

  • Archive Log Compression

    Hi
    There is a feature that allows us to compress archive logs. This is set using "ALTER DATABASE ARCHIVELOG COMPRESS enable;"
    What I need to know is
    1) Which versions of Oracle support this?
    2) Has anyone experienced any drawbacks using this.
    Thanks

    This is what I got back from ORACLE.
    Generic Note
    Hi,
    The archive log compression feature was withdrawn before the Oracle 10g production release.
    It may appear to function as expected, but it is not supported and is intentionally not documented.
    Bug 3464260 - ALTER DATABASE ARCHIVELOG COMPRESS ENABLE/DISABLE IS MISSING IN 10G DOCS
    Status: 92 - Closed, Not a Bug
    An enhancement request has been created (Bug 6780870 - GENERATION OF COMPRESSED ARCHIVELOG FILES) to ask whether this functionality will be offered in a future release; it has not yet been updated by development.
    In other words, the SQL statement ALTER DATABASE ARCHIVELOG COMPRESS ENABLE is possible, but it is currently unsupported and undocumented, even in Oracle Database 11g.
    Archive log compression was a planned new feature for 10g, but unfortunately it was withdrawn, and it is still not available in 11g. This feature is expected in a future release.
    Please let me know if that answers your questions.
    Regards,
    One more question:
    We are running Windows. I know someone mentioned gzip for Linux; can we use that for Windows compression, or will WinZip, WinRAR or any other compression tool work?
    Thanks
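    A supported alternative worth mentioning (a suggestion, not part of Oracle's reply above): since 10g, RMAN can write archived logs into compressed backup sets, and this works the same way on Windows and Linux. A minimal sketch:

        RMAN> BACKUP AS COMPRESSED BACKUPSET ARCHIVELOG ALL DELETE INPUT;

    This compresses the logs as part of the backup and, with DELETE INPUT, removes the originals once they are backed up.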

  • Archiving Only option in Output Control

    Hello Gurus,
    There is an Archiving only option in the output type. Could you please explain what it is used for and how exactly it works? And what requirement needs to be used when adding this output type to the procedure?
    Points would be rewarded generously for apt answers.
    Thanks and Regards,
    Pavan P.

    hi,
    The document type to be used can then be entered in the storage system settings of the output type. The Storage mode selection field controls whether a message is output, stored, or output and stored. The entry of a document type is only necessary in the last two cases.
    Follow these steps (Tcode: NACE).
    Let's take the example of BA00, order confirmation:
    1. Select V1: Sales --> click the output types icon at the top --> in this screen double-click BA00 --> click the Storage system tab --> in the Storage mode field select 2 Archive only --> save.
    But why do you want to archive the document rather than outputting it? Please revert.
    regards,
    Arun prasad

  • Bad reporting performance after compressing infocubes

    Hi,
    as I learned, we should compress the requests in our InfoCubes. And since we're using Oracle 9.2.0.7 as the database, we can use partitioning on the E fact table to further increase reporting performance. So far the theory...
    After getting complaints about worse reporting performance we tested this theory. I created four InfoCubes (same datamodel):
    A - no compression
    B - compression, but no partitioning
    C - compression, one partition for each year
    D - compression, one partition for each month
    After loading 135 requests and compressing the cubes, we get this amount of data:
    15.6 million records in each cube
    Cube A: 135 partitions (one per request)
    Cube B:   1 partition
    Cube C:   8 partitions
    Cube D:  62 partitions
    Now I copied one query onto each cube and used it to test the performance (transaction RSRT, without aggregates and cache, comparing the database times QTIMEDB and DMTDBBASIC). In the query I always selected one month, some hierarchy nodes and one branch.
    With this selection on each cube, I expected cube D to be the fastest, since only one (small) partition holds the relevant data. But reality shows a different picture:
    Cube A is fastest with an avg. time of 8.15, followed by cube B (8.75, +8%), cube C (10.14, +24%) and finally cube D (26.75, +228%).
    Does anyone have an idea what's going wrong? Are there some DB parameters to "activate" the partitioning for the optimizer? Or do we have to do some other customizing?
    Thanks for your replies,
    Knut

    Hi Björn,
    thanks for your hints.
    1. After compressing the cubes I refreshed the statistics in the InfoCube administration.
    2. Cube C is partitioned using 0CALMONTH, cube D is partitioned using 0FISCPER.
    3. Here we are: all queries are filtered using 0FISCPER. Therefore I could increase the performance on cube C, but still not on D. I will change the query on cube C and do a retest at the end of this week.
    4. The loaded data spans 10 months. The records are nearly equally distributed over these 10 months.
    5. Partitioning was done for the period 01.2005 - 14.2009 (01.2005 - 12.2009 on cube C). So I have 5 years; the 8 partitions on cube C are the result of a slight miscalculation on my side: 5 years + 1 partition before + 1 partition after => I set the max. number of partitions to 7, not thinking of BI, which always adds one partition for the data after the requested period... So each partition on cube C does not contain one full year but roughly 8 months.
    6. Since I tested the cubes one after another without much time in between, the system load should have been nearly the same (on top of that, it was a Friday afternoon...). Our BI runs clustered with several other SAP installations on a big Unix server, so I cannot see the overall system load. But I did several runs with each query, and the times mentioned are averages over all runs; the averages show the same picture as the single runs (cube A is always fastest, cube D always the worst).
    Any further ideas?
    Greets,
    Knut
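    One way to verify whether the optimizer actually prunes partitions for these filters (a hedged sketch; the table name is invented, the real E fact table of a cube is named /BIC/E<cube>):

        EXPLAIN PLAN FOR
          SELECT SUM(amount) FROM e_fact WHERE fiscper = '2005001';
        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
        -- The Pstart/Pstop columns of the plan show which partitions are
        -- scanned; a single partition means pruning works for this predicate.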

  • How to delete the data in a compressed InfoCube

    hi, BI gurus,
    we are facing a problem with the Inventory Management InfoCube in BW production.
    normally, every time the inventory cube gets compressed, the data moves from the F fact table to the E fact table.
    now the problem is that we have some bad data in the latest five requests in this cube. as we all know, compressed data can't be deleted by deleting the request on the request tab; the only way is selective deletion, but I don't find any selective deletion option for the cube. we have PSA data with the correct data for those five requests. please help: how do we delete the bad data in the InfoCube and load the correct data which we have in the PSA?
    Thanks
    Joe

    Hi André,
    thank you for your answer.
    what I am saying is that there is an option for selective deletion for the inventory cube, but I don't find any specific option to delete the data, for example by calendar day.
    I hope you got my question.
    hi Saveen Kumar,
    thank you again.
    we are using the 3.x flow. if we do the request reverse posting for all 5 requests which updated the incorrect data, do we need to run compression again afterwards or not?
    and how do we reload the data from PSA to the InfoCube? because if the request is still available in the InfoCube, it will not allow me to do that.
    can you please tell me in detail, step by step, how to proceed? this is the first time I am doing a request reverse posting, and I have to do it in production.
    Thanks in advance,
    Joe

  • A question on different options for data archival and compression

    Hi Experts,
    I have a production database of about 5 terabytes and about 50 GB in development/QA. I am on Oracle 11.2.0.3 on Linux. We have RMAN backups configured. I have a question about data archival strategy. To keep the OLTP database size optimal, what options can be suggested for data archival?
    1) Should each table have an archival strategy?
    2) What is the best way to archive data - should it be sent to a separate archival database?
    In our environment, we have an archival strategy defined for only about 15 tables. For these tables, we copy their data every night to a separate schema meant to store this archived data, and eventually transfer it to a different archival database. For all other tables, there is no data archival strategy.
    What are the different options and best practices that can be reviewed to put a practical data archival strategy in place? I will be most thankful for your inputs.
    Also, what are the different data compression strategies? For example, we have about 25 tables that are read-only. Should they be compressed using the basic compression available since Oracle 9i (ALTER TABLE ... COMPRESS)?
    Thanks,
    OrauserN

    You are using 11g, and in 11g you can compress read-only as well as read-write tables; both are good candidates for compression. This will save space and can also increase performance, but always test it first. This was not an option in 10g. Read the following docs:
    http://www.oracle.com/technetwork/database/storage/advanced-compression-whitepaper-130502.pdf
    http://www.oracle.com/technetwork/database/options/compression/faq-092157.html
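    For the 25 read-only tables, a minimal sketch of applying compression (table and index names are invented; note that MOVE rebuilds the segment, locks the table while it runs, and marks its indexes UNUSABLE):

        -- Basic compression: a good fit for read-only, bulk-loaded data.
        ALTER TABLE sales_history MOVE COMPRESS;
        -- MOVE invalidates indexes, so rebuild them afterwards.
        ALTER INDEX sales_history_ix REBUILD;
        -- For tables that still receive DML (requires the 11g Advanced
        -- Compression option):
        ALTER TABLE orders MOVE COMPRESS FOR OLTP;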

  • Online Archive only visible in Outlook 2010 after OWA logon

    Dear.
    We implemented Exchange 2010 native archiving. We created our own retention policies, applied them to the mailboxes and archiving started successfully exactly as we want it.
    We're using Outlook 2010 clients in both cached and non-cached mode. The strange thing is that for the online archive to become visible in Outlook, the user needs to log on to OWA first, and only once. Afterwards, the online archive stays visible in Outlook 2010.
    Is there any other way to make the online archive visible without forcing the user to do a one-time logon to OWA?
    Thanks for the feedback.
    Regards.
    Peter
    Peter Van Keymeulen, IT Infrastructure Solution Architect, www.edeconsulting.be

    Hi,
    I'm glad you've solved the problem!
    Thanks for your feedback!
    Niko Cheng
    TechNet Community Support

  • How do I archive only the last Time Machine backup?

    Hi everyone
    So I've done a "clean reinstall" of Lion and restored some of my files manually from a TM backup located on an external hard drive by dragging them over via Finder. Now I want to delete all old TM backups on my external hard drive and keep only the last backup (the one I did before the Lion reinstall).
    If I keep the latest TM backup folder (named ~/Backups.backupdb/~/2012-04-11-233905) and delete all the older ones, would that preserve my entire last backup? Or does that folder contain aliases to files in previous backup folders? In other words, does the last created Time Machine backup folder contain all files from the last backup?
    Please help.

    Linc, you don't seem to understand my problem: I want to start my TM backups from scratch on my "new" computer and archive the last backup I did on my "old" one. I don't need all the other backed-up versions from my "old" computer, just the last backup, which I want to archive.
    My question is still:
    Is there a way to archive my entire last backup (and delete all the previous ones)?

  • Archive only masked records error

    Hi SDNers,
    I want to archive a repository with only 100 records.
    I have tried masking those records and, when archiving, used the option to select only that mask.
    But it results in an archive failure, and the report says:
    4952  2009/10/13 15:53:20.744 Report Info  XCatDBCopy.cpp     Table 55-HCLIQ00CUSTOMERLTM_m000-A2i_48_Tu_114
    4952  2009/10/13 15:53:20.744 Report Error  XCatDBCopy.cpp       Error accessing table A2i_48_Tu_114
    4952  2009/10/13 15:53:20.744 Report Info  XCatDBCopy.cpp   --~ Total KBytes: 34048, Partition#0(Main) KBytes: 33803
    4952  2009/10/13 15:53:20.837 Report Warning  XCatDBCopy.cpp   $$$ Operation ended in ERROR : 80000001H : Illegal value for parameter
    4952  2009/10/13 15:53:20.868 Report Warning  XCatDBCopy.cpp   Archiving operation for HCLI Q00 CUSTOMER LT Masking1 ended in error
    4952  2009/10/13 15:53:20.868 Report Info  XCatDBCopy.cpp       0min 10sec
    4952  2009/10/13 15:53:20.868 Report Info UNC TextReportLog.cpp   Report Duration: 0 day(s), 00:00:10
    Any idea?
    Thanks,
    Priti

    Hi Priti,
    I came across this note and thought of sharing it with you
    "Note 1424126 - Archiving a repository, that includes Tuples, via Mask"
    Check if this is the cause of your problem.
    Best Regards,
    Shiv

  • CiscoWorks: Why are the sync archives for all devices only partially successful?

    Hi,
    Whenever we sync archive a device, it is always only partially successful, and when we click "partially successful", the details below appear. We have 4000 devices, and all of them are only partially successful. The details below are from a partially successful device; it has the vlan.dat, and it was working OK before.
    *** Device Details for Device1 *** Protocol ==> Telnet Selected Protocols with order ==> SSH,Telnet,TFTP,RCP,HTTPS
    Execution Result: CM0062 Polling Device1 for changes to configuration. CM0065 No change in PRIMARY STARTUP config, config fetch not required CM0065 No change in PRIMARY RUNNING config, config fetch not required CM00 Polling not supported on VLAN RUNNING config, defaulting to fetch. VLAN CM0057 VLAN RUNNING Config fetch SUCCESS, archival failed for Device1 Cause: CM0005: Archive does not exist for Device1 Action: Verify that archive exists for device.
    Also, if I go to e:\NMSROOT\files\rme\jobs\ArchiveMgmt\6523 (JOBID) and then to the log, below is the output from the logs.
    [ Mon Nov 14  14:55:37 GMT 2011 ],ERROR,[main],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecutor,initJobPolicies,925,Could not configure Thread.
    [ Mon Nov 14  14:55:37 GMT 2011 ],ERROR,[main],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecutor,sendMail,1334,sendEmailMessage: Null recipient list
    [ Mon Nov 14  14:56:20 GMT 2011 ],ERROR,[main],com.cisco.nm.rmeng.dcma.jobdriver.DcmaJobExecutor,sendMail,1334,sendEmailMessage: Null recipient list
    I am getting the logs below. Please advise how to resolve the issue.
    Thanks.
    I am using LMS 3.2.1

    Yes, you are right that we restored from a backup, and since then this problem occurs. We uninstalled CiscoWorks from the c: drive and installed it on the e: drive, due to a high disk utilization issue on the c: drive.
    I have checked
    RME > Administration > Config Management > Archive Management > Archive settings, and the setting is already e:/CIS~1/files/rme/dcma.
    Also, from dbreader, by executing the SQL "select * from Config_Device_Archive" in rmeng, I can see that some file locations are on the C: drive and others are on the E: drive. So I ran the SQL below to correct it; although it executed correctly, I can see in the Config_Device_Archive table that some device locations are still c:, while others are e:.
    update Config_Device_Archive set
    location=replace(location,'C:\\PROGRA~1\\CSCOpx','E:\\CAPCIS~1')
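    One thing worth checking (a hedged suggestion, not from the thread): REPLACE only rewrites exact matches, so rows whose stored prefix is spelled differently (for example with single backslashes) would be left untouched. A quick way to see what is actually stored:

        -- List the distinct path prefixes stored, to spot any remaining
        -- spellings of the old C: location.
        SELECT DISTINCT SUBSTRING(location, 1, 25) FROM Config_Device_Archive;
        -- Then adjust the REPLACE pattern to the stored spelling, e.g.:
        UPDATE Config_Device_Archive
           SET location = REPLACE(location, 'C:\PROGRA~1\CSCOpx', 'E:\CAPCIS~1')
         WHERE location LIKE 'C:%';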
    Please advise how to resolve this issue.
    Thanks
