OLTP compression and Backupset Compression

We are testing out a new server before we migrate our production systems.
For the data we are using OLTP compression.
I am now testing the performance of RMAN backups, and finding they are very slow and CPU bound (on a single core).
I suspect this is because I have also specified that backups be created as compressed backupsets.
For the table blocks, I can understand that this attempt at double compression will cause a slowdown.
However, for index data (which of course cannot be compressed using OLTP compression), backupset compression would be very useful.
I have attempted to improve performance by increasing the parallelism of the backup, but from my testing this only increases
the number of channels writing the data; there is still only one core doing the compression.
Any idea how I can apply compression to index data, but not the already compressed table segments?
Or is it possible that something else is going on?
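For reference, here is roughly what I have been testing; splitting the largest datafiles with SECTION SIZE is my next attempt, since each channel compresses the blocks it reads (channel count and section size below are illustrative, not our real values):

     RUN {
       ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
       ALLOCATE CHANNEL c2 DEVICE TYPE DISK;
       ALLOCATE CHANNEL c3 DEVICE TYPE DISK;
       ALLOCATE CHANNEL c4 DEVICE TYPE DISK;
       -- SECTION SIZE (11g) splits one large datafile across channels,
       -- which should help if a single big file keeps one core busy
       BACKUP AS COMPRESSED BACKUPSET
         SECTION SIZE 8G
         DATABASE;
     }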

Hi Patrick,
You can also check my compression level test.
http://taliphakanozturken.wordpress.com/2012/04/07/comparing-of-rman-backup-compression-levels/
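For example, on 11g Release 2 the level can be chosen like this (LOW, MEDIUM and HIGH require the Advanced Compression Option license; BASIC does not):

     RMAN> CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';
     RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;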
Thanks,
Talip Hakan Ozturk
http://taliphakanozturken.wordpress.com/

Similar Messages

  • Compression and non-compression questions

    Hi Folks,
    I want to use my PC as a music jukebox. I plan on having an audio output lead plugged into my living room stereo from my PC and using iTunes to do the playing and managing of the downloaded music files. I want to import my music CDs into the PC using a lossless format, or rather no compression at all, because I want the music to be at its audio best.
    Then, when I want to use my iPod, I want to make a copy of the original PC-stored file into an iPod-appropriate file such as Apple's MPEG-4 (AAC) format. This conversion is because of the limited storage space on an iPod. You can get a whole lot more MPEG-4 files on an iPod than you can WAV format files. I don't have the exact ratio, but it's something like 8 to 1 in terms of the number of files you can upload onto an iPod.
    So there you have it. Quality, unadulterated, clear, dynamic music playing out of my living room stereo from the PC, and a compressed copy of the music on my iPod, where the audio quality isn't crucial.
    Got any ideas?
    Thanks
    Bart

    You have two options:
    1) Use completely separate libraries for your lossless files and lossy files (probably more pain than it's worth).
    2) Use separate playlists
    With 1), you have to hold Shift when starting iTunes to select or create an iTunes Library. You'll also have to either import separately, or delete the undesired files out of the library (without deleting the files from the hard disk), then add them into the other library. Possible but moderately difficult. To sync your iPod, you connect it while the "lossy" library is selected.
    With 2), you can just make two smart playlists -- one where the criteria is "Kind" is Apple Lossless, and another with "Kind" = "AAC audio file". To sync your iPod, you change the preference to only sync selected playlists, and select the playlist that has the lossy music.
    P.S. Lossless compression sounds identical to the source because "lossless" compression means exactly that -- no information is lost. Examples are the Apple Lossless Encoder and FLAC (FLAC is not supported by iTunes/iPod).

  • Rman compression of backupsets

    Hi Gurus,
    I am in the process of implementing a backup strategy using RMAN. Does RMAN compress backupsets by default? If so, what would the compression ratio be? If I compress with "backup as compressed backupset", in what way is it CPU intensive? Does it do additional compression on top of any compression RMAN already does? Your replies will be invaluable.
    Regards

    Hi there,
    I generally don't like to just say "read the docs", but sometimes that really is the easiest answer. Almost all of your questions are answered on this page:
    http://download.oracle.com/docs/cd/B19306_01/backup.102/b14192/bkup002.htm#BRBSC138
    RMAN does not do compression by default; you can enable it if you want. Yes, it will be CPU intensive, and as per the docs the impact will be visible. I am not sure about the exact ratio, but it is significant for tape media. Also, Oracle does not recommend combining tape compression and RMAN compression; it recommends tape/hardware compression over RMAN compression.
    I didn't get this part:
    Does it do additional compression on top of any compression RMAN already does?
    What compresses what? If you are talking about RMAN compression, then RMAN has already done the compression. Or are you asking about RMAN plus O/S compression, or vice versa?
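    For example (a minimal sketch; these are the standard RMAN commands and defaults):

         RMAN> SHOW ALL;   -- backup type defaults to BACKUPSET, without compression
         RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;
         RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;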
    Aman....

  • Compression for oracle database and index compression during import of data

    Hi All,
    I have a query: in order to import into an Oracle database with table compression and index compression, do we have some kind of load args for R3load, and do we also have to change the .tpl file?

    Hello,
    I did this kind of compression in a migration project before.
    I performed index compression first, and then export -> import with table compression.
    One thing you should take care of: delete the NOCOMPRESS flag from TARGET.SQL (created by the program SMIGR_CREATE_DDL, which generates purely non-compressed objects for the tables it considers non-standard). For tables with more than 255 columns, we should not delete this flag.
    Regarding index compression in the source system, please check the following notes:
    Note 1464156 - Support for index compression in BRSPACE 7.20
    Note 1109743 - Use of Index Key Compression for Oracle Databases
    Note 682926 - Composite SAP note: Problems with "create/rebuild index"
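    To illustrate the kind of DDL involved (object names here are hypothetical; the real statements come from TARGET.SQL and BRSPACE):

         -- index key compression in the source system (what BRSPACE performs)
         ALTER INDEX "SAPSR3"."MYTAB~0" REBUILD COMPRESS;
         -- table compression applied to the import target
         ALTER TABLE "SAPSR3"."MYTAB" MOVE COMPRESS FOR OLTP;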
    Best Regards,
    Ning Tong

  • The detailed algorithm of OLTP table compression and basic table compression?

    I'm doing research on the detailed algorithms of OLTP table compression and basic table compression. Anyone who knows, please tell me, and also the difference between them. Thank you.

    http://www.oracle.com/us/products/database/db-advanced-compression-option-1525064.pdf
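    In short, both methods use the same block-level symbol table to deduplicate repeated column values; they differ in when compression is triggered. A minimal sketch (table names are illustrative):

         -- BASIC: blocks are compressed only by direct-path operations
         -- (CTAS, INSERT /*+ APPEND */, ALTER TABLE ... MOVE)
         CREATE TABLE sales_basic COMPRESS BASIC
           AS SELECT * FROM sales;
         -- OLTP: blocks are also recompressed as conventional DML fills them
         -- (requires the Advanced Compression Option)
         CREATE TABLE sales_oltp COMPRESS FOR OLTP
           AS SELECT * FROM sales;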
    Edited by: Sanjaya Balasuriya on Dec 5, 2012 2:49 PM

  • Image compressed and stored in the database.

    Hi,
    We have a TIFF file stored in the database that is compressed using the following Oracle interMedia method
    process('compressionFormat=FAX4, maxScale=1696 2200');
    and then saved back into the database.
    The Oracle database is 8.1.7.4. The TIFF file gets processed when we use a TIFF generated by the old scanner, but it errors out with a TIFF generated by the latest scanners.
    The errors I get are ORA-29400 and IMG-00704.
    There is a difference in TIFF version between the outputs of the two scanners: the compression is TIFF modified G3 in the old one, while the new one uses Lempel-Ziv (LZW).
    Also, the TIFF loads and saves fine, but the image gets corrupted after the interMedia process method is run.
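    For context, the processing is done from PL/SQL roughly like this (table and column names below are hypothetical):

         DECLARE
           img ORDSYS.ORDImage;
         BEGIN
           SELECT doc_image INTO img FROM scanned_docs WHERE doc_id = 1 FOR UPDATE;
           -- setProperties re-reads the image header; I suspect the
           -- LZW-compressed TIFF is already not understood at this point
           img.setProperties;
           img.process('compressionFormat=FAX4, maxScale=1696 2200');
           UPDATE scanned_docs SET doc_image = img WHERE doc_id = 1;
           COMMIT;
         END;
         /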

    Thank you for your reply. I am glad to find that I did not miss an option. I was aware that I could move my pictures into some other folder, but you have forgotten the solution that I chose: to go back and use ZoomBrowser, which can access photos in any folder I choose. In addition, it loads promptly. I have only spent a brief time using ImageBrowser, but I don't recall seeing any enhancements over ZoomBrowser. Perhaps if I were attempting to interface with other software, but as a stand-alone product, perhaps you could enlighten me as to what makes ImageBrowser EX a better product. Regarding "Accept as Solution": are you implying that Canon restricts their search function to only those responses that the OP marks as accepted?

  • I uploaded a video from a flashdrive to my Mac and it compressed the video to lower quality and shrunk it in size. How do I get it to the original size?

    I uploaded a video from a flashdrive to my Mac and it compressed the video to lower quality and shrunk it in size. How do I get it to the original size?

    It changed the aspect ratio of the video, as if it had been shot on a phone instead of a camera. When first looking at the footage it was in landscape aspect, but when I transferred it to a memory unit to put on a different computer, it was uploaded in a different ratio.

  • Content and Font Compression using ASCII85Decode filter alone in Adobe Acrobat Pro

    I want to convert FlateDecode to ASCII85Decode, specifically for fonts, colorspaces, and content. Is there a way to do it?

    I can't imagine there is a plug-in - because I can't imagine why anyone would want to (or at least, not enough people who would pay for the work!)
    There is a purpose to modifying the decode for images: to reduce the file size. Clearly one only does it if the file is going to get smaller (e.g. DCTDecode to Flate is pointless) and much software will only do the conversion if space is saved. Clearly removing compression and inserting ASCII85Decode can only make the file larger, for which it is hard to make a business case!
    Adobe has in fact not used ASCII85Decode for any normal purpose for more than a decade; in effect, the idea of an ASCII PDF was a failed experiment. Only PDF readers still need to support it; writers need not.

  • How to compress and decompress a pdf file in java

    I have a PDF file.
    What I want to do is compress and decompress that file with a lossless compression technique.
    Please help me to do so;
    I am always worried about this topic.

    Here is a simple program that does the compression bit.
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipOutputStream;

    class FileZipper {
        static void compressFile(String compressedFilePath, String filePathTobeCompressed) {
            try {
                File file = new File(filePathTobeCompressed);
                FileInputStream fileInputStream = new FileInputStream(file);
                ZipOutputStream zipOutputStream = new ZipOutputStream(new FileOutputStream(compressedFilePath));
                // store the source file in the archive under its own path
                zipOutputStream.putNextEntry(new ZipEntry(file.getPath()));
                // copy in fixed-size chunks rather than one whole-file buffer
                byte[] readBuffer = new byte[4096];
                int bytesIn;
                while ((bytesIn = fileInputStream.read(readBuffer)) != -1) {
                    zipOutputStream.write(readBuffer, 0, bytesIn);
                }
                fileInputStream.close();
                zipOutputStream.close();
            } catch (IOException e) {
                // FileNotFoundException is a subclass of IOException
                e.printStackTrace();
            }
        }
    }
    Decompression is the mirror image, reading the entry back with java.util.zip.ZipInputStream; ZIP deflate is lossless, so the PDF comes back byte for byte.

  • H.264 and other compression

    Apple makes H.264 sound like it's lossless. Is it? Is there any compression codec out there that's lossless besides Animation?

    I wanna be able to compress the video files I have, and then be able to uncompress them without losing any info
    Can't be done. The best you can do is archive them as ZIP files, but that won't save you much room. If you compress the footage into another format, it tosses information away...revert back to the original form and you are missing a lot of information.
    No...can't compress and then decompress with any real savings in space. That isn't how video works.
    Shane

  • Compress and rollup the cube

    Hi Experts,
    Do we have to compress and then roll up the aggregates? What happens if we roll up before compressing the cube?
    Raj

    Hi,
    The data is rolled up into the aggregates request by request. So once the data is loaded, the request is rolled up to fill the aggregates with the new data; after compression, the request is no longer available.
    Whenever you load data, you do a rollup to fill all the relevant aggregates.
    When you compress the cube, all request IDs are dropped.
    So the "COMPRESS AFTER ROLLUP" option ensures that all the data is rolled up into the aggregates before the compression is done.
    Hope this helps.
    Regards,
    Haritha.
    Edited by: Haritha Molaka on Aug 7, 2009 8:48 AM

  • When the CD jewel case song list is printed from a playlist in iTunes, the list is compressed and unreadable. No problem before the latest software update. How do I fix this?

    When a song list is printed from a playlist in iTunes for inserting into a CD jewel case, the song list is compressed and indecipherable. I did not have this problem prior to the latest software update. How can I fix this?

    Can you play the song in iTunes?
    If you can't the song file is probably corrupt and needs to be replaced.

  • Compression and query performance in data warehouses

    Hi,
    Using Oracle 11.2.0.3, we have a large fact table with bitmap indexes to the associated dimensions.
    I understand bitmap indexes are compressed by default, so I assume they cannot be compressed further.
    Is this correct?
    I wish to try compressing the large fact table to see if this will reduce the I/O on reads and therefore give performance benefits.
    ETL speed is fine; I just want to increase report performance.
    Thoughts? Has anyone seen significant gains in data warehouse report performance with compression?
    Also, the current PCTFREE on the table is 10%.
    As we only ever insert into the table, I am considering making this 1% to improve report performance.
    Thoughts?
    Thanks

    First of all:
    Table Compression and Bitmap Indexes
    To use table compression on partitioned tables with bitmap indexes, you must do the following before you introduce the compression attribute for the first time:
    Mark bitmap indexes unusable.
    Set the compression attribute.
    Rebuild the indexes.
    The first time you make a compressed partition part of an existing, fully uncompressed partitioned table, you must either drop all existing bitmap indexes or mark them UNUSABLE before adding a compressed partition. This must be done irrespective of whether any partition contains any data. It is also independent of the operation that causes one or more compressed partitions to become part of the table. This does not apply to a partitioned table having B-tree indexes only.
    This rebuilding of the bitmap index structures is necessary to accommodate the potentially higher number of rows stored for each data block with table compression enabled. Enabling table compression must be done only for the first time. All subsequent operations, whether they affect compressed or uncompressed partitions, or change the compression attribute, behave identically for uncompressed, partially compressed, or fully compressed partitioned tables.
    To avoid the recreation of any bitmap index structure, Oracle recommends creating every partitioned table with at least one compressed partition whenever you plan to partially or fully compress the partitioned table in the future. This compressed partition can stay empty or even can be dropped after the partition table creation.
    Having a partitioned table with compressed partitions can lead to slightly larger bitmap index structures for the uncompressed partitions. The bitmap index structures for the compressed partitions, however, are usually smaller than the appropriate bitmap index structure before table compression. This highly depends on the achieved compression rates.
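    In SQL terms, the sequence the documentation describes looks like this (object names are illustrative):

         ALTER INDEX sales_cust_bix UNUSABLE;                       -- 1. mark bitmap indexes unusable
         ALTER TABLE sales MOVE PARTITION p2012 COMPRESS FOR OLTP;  -- 2. set the compression attribute
         ALTER INDEX sales_cust_bix REBUILD PARTITION p2012;        -- 3. rebuild the index partitions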

  • One integration scenario Many target system and Outgoing compressed data

    I have some theoretical questions about XI usage.
    First, I'm not quite sure about the following: suppose we have a corporate system and a lot of mobile computers (let's call them "clients") with small databases installed. The number of clients is not constant, but the integration scenario is the same for all of them. If we add a new client, we must add a new technical system, a business system, and a communication channel, configure the scenario, add the required receiver and interface determinations, and so on. If we remove a client, we must undo all these steps. In this case, is it possible in XI to automate these scenario configuration procedures? For example, could we create something like a business system template with communication channels, or have one business system and many communication channels in one integration scenario (in which case I suppose the message would be delivered to the target system by content-based routing) and quickly activate/deactivate these channels (in effect, the receiver determinations)?
    Second, we know that XI can receive and send compressed data if the client initiates communication with XI. Does anybody know about the reverse functionality, where XI initiates communication with the target system and compresses the data (on the understanding that the target system is able to receive and handle compressed data)? If anybody knows whether this is possible, can you point me to that information?
    Thank you for help,
    Best regards.

    Hi Maxim,
    You don't need to create technical and business systems in the SLD for your clients, as you can use business services instead. You do have to create a communication channel for every client (the receiver address is individual!). Receiver determinations and interface determinations can be configured dynamically with wildcards (*), and you can find receivers by conditions on the payload content.
    If you want to change these conditions very quickly (via a customizing table), you can use the following trick: use an ABAP mapping that reads that table. Unfortunately, the message is first routed and then mapped, so you have to send the message over the HTTP adapter back to XI, where the changed message is routed by the condition. This is of course quite a complex scenario, and the monitoring is even more complex because of the doubled number of messages (not a performance problem, though; the HTTP adapter is never a bottleneck).
    Regards,
    Udo

  • Compression and Index

    Hi BW Experts,
    I deleted the indexes before loading the data.
    Then I compressed the request without recreating the indexes.
    It is taking a very long time to compress.
    Is this the right procedure? Does compression take longer after deleting the indexes?
    Thanks in advance.
    Regards,
    Anjali

    Hi Anjali,
    Deleting the indexes, recreating the indexes, and then compressing is the general procedure.
    Deleting and recreating the indexes is only worthwhile if you are doing a data load into the cube.
    As far as I know, the index operations have no significance for compression: compression generates a separate E table, whereas the indexes work on the F table only.
    If you are doing a cube load followed by compression, the standard steps are as below:
    1. Delete cube contents (depends on your requirement)
    2. Delete indexes
    3. Load cube data
    4. Compression
    5. DB statistics (you can skip this step if there is no performance issue)
    6. Create indexes
    The best practice is not to include the compression step in the process chain, because once compression is done there is no way to delete the data request by request, which may be needed if there is an error in the data extraction.
    Drop the indexes during data extraction, because they can cause performance problems during data loading.
    Edited by: Amar on Oct 14, 2008 11:10 AM
