OSB Unused block compression and Undo backup compression

Dear all,
I know that OSB backup software is faster than other vendors' software, especially in the Oracle database backup area.
There are two special methods: unused block compression and undo backup compression.
But normally, comparable products using the same RMAN API (Veritas/Legato etc.) also back up only used blocks and skip unused blocks, right?
I'm confused about how a used-blocks-only backup differs from unused block compression.
Please explain unused block compression and undo backup compression in detail.
Sorry about my poor knowledge.
Thanks.

This is explained in detail in the OSB technical white paper:
http://www.oracle.com/technetwork/products/secure-backup/learnmore/osb-103-twp-166804.pdf
Let me know if you have any questions
Donna

Similar Messages

  • The detail algorithm of OLTP table compress and basic table compress?

I'm doing research on the detailed algorithms of OLTP table compression and basic table compression; anyone who knows, please tell me. Thanks. Also, what is the difference between them?

    http://www.oracle.com/us/products/database/db-advanced-compression-option-1525064.pdf
    Edited by: Sanjaya Balasuriya on Dec 5, 2012 2:49 PM

  • RMAN backup compression and tape hardware compression

    Hello,
    Should tape backup compression be ON and RMAN compression OFF? What is the correct Oracle answer?
    Common knowledge says that if both hardware and software compression are enabled at the same time, the resulting file may in some cases actually be bigger than the original. Hardware data compression is done by specialized chips, is faster than software compression, and increases tape performance.
    I read somewhere some time ago that tape hardware will not even attempt to compress data that cannot be compressed any further. But even if that is a hoax, wouldn't RMAN compression, at the cost of CPU, save I/O bandwidth?
    Thanks.
    Edited by: MaC on Dec 19, 2010 4:00 PM

    MaC wrote:
    I just located Oracle document ID 1018242.1, which outlines:
    Use either hardware or software compression, but NOT both. Using both will actually increase the size of the resulting file. The rule of thumb is: compression + compression = no_compression.
    Sounds kind of logical. I was actually going to post the same sentence, but you found it on your own :) . Another thing worth mentioning: from 11.2, RMAN backup compression comes with four options. Three of them, High, Medium and Low, require the Advanced Compression license. The fourth option, Default, is free of license, and its compression ratio lies between Medium and Low.
    HTH
    Aman....
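    The compression + compression = no_compression rule of thumb is easy to demonstrate outside RMAN. Here is a hedged sketch with Python's zlib (illustrative only; RMAN and tape drives use different algorithms, but the effect is the same): a second pass over already-compressed data saves nothing and usually adds a few bytes of overhead.

    ```python
    import zlib

    # Highly compressible payload, standing in for typical datafile blocks.
    payload = b"redo redo redo " * 10_000

    once = zlib.compress(payload)   # software compression (RMAN's role)
    twice = zlib.compress(once)     # a second pass on top ("hardware" role)

    print(len(payload), len(once), len(twice))
    # The first pass shrinks the data dramatically; the second cannot,
    # because the output of the first pass is already near-random.
    ```

    The same reasoning explains why the tape drive's compression chip is wasted effort on an RMAN compressed backupset: its input is already high-entropy.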

  • Related to Unused compression in RMAN backup for 10.2.0.4 database

    Hi
    I recently read through one of the documents related to unused block compression in RMAN backup for a 10.2.0.4 database. According to it,
    Unused Block Compression is done, if all of the following conditions apply:
    + The COMPATIBLE initialization parameter is set to 10.2
    + There are currently no guaranteed restore points defined for the database
    + The datafile is locally managed
    + The datafile is being backed up to a backup set as part of a full backup or a level 0 incremental backup
    + The backup set is being created on disk.
    I have two queries
    1) What is meant by "guaranteed restore points defined for the database"?
    2) If the backup is taken directly to tape, this feature would not be used?
    Rgds
    Harvinder

    1. Normal and Guaranteed Restore Points
    2. No

  • RMAN 10.2 "unused" block compression

    Has anyone tested the unused block compression? I realize the docs say it is the case now in 10.2 that RMAN only backs up currently used blocks, and not blocks that belonged to dropped or truncated tables. I can't prove it on my own system.
    I tried...
    1) creating a tbsp 100mb, with an 80mb table. Backed it up as a backupset
    2) Dropped the 80mb table and purged it from the recyclebin. Backed it up as a backupset.
    Both backupsets were the exact same size, indicating only that rman doesn't back up "never" used blocks.
    I did the same test AS COMPRESSED BACKUPSET and got the same result, both were compressed but were of equal size.
    Thanks,
    A

    Okay, no luck. I've tried every which way to back it up...
    1.4GB db with a 720MB table...
    BACKUP INCREMENTAL LEVEL 0 AS COMPRESSED BACKUPSET DATABASE results in a 230.6MB backupset file.
    DROP TABLE <720mb table> PURGE;
    Same backup commands. Same backup size 230.6MB.
    Doesn't work as far as I can tell.
    A

  • Compressing Time Machine backup folder and burning to DVD

    Can I compress the backup folder and burn to a DVD disc?
    Most people seem to assume that an external hard drive won't fail and so keeping all files backed up to it is a good thing. Surprise - external HDDs fail also.
    I'd really like to have compressed back ups on a DVD in order to have 3 locations with my files.

    It would have to be a pretty small backup to fit on a DVD. You won't likely find too much benefit from compressing the data because much of it is in binary form that is most likely not going to reduce much if at all.
    A single DL DVD can hold 8 GBs. If you have just OS X installed that will require 16 GBs less whatever you may save compressing the data. Personally, I would not risk compressing the data since that could prove just as problematic or more so than keeping it on a hard drive. Furthermore, if you want more than 5 years storage life out of the DVDs, then you have to avoid the stuff they sell at places like Office Depot which have a relatively short shelf life.
    If you need to use more than one DVD, then you need a backup utility designed to span a backup across multiple media. Your choices are somewhat limited: Retrospect, Apple's Backup (requires a .Mac account to download and use), and possibly Tri-Backup (but I am not sure about it).
    Or you can buy several hard drives of suitable size, make multiple backup copies to each drive, store one in a vault off-site. Rotate your weekly and monthly backups to the other drives and once every quarter rotate the off-site drive. The chance of all the drives failing at once is pretty small.

  • Compression and query performance in data warehouses

    Hi,
    Using Oracle 11.2.0.3; we have a large fact table with bitmap indexes to the associated dimensions.
    I understand bitmap indexes are compressed by default, so I assume they cannot be compressed further.
    Is this correct?
    We wish to try compressing the large fact table to see if this will reduce the I/O on reads and therefore give performance benefits.
    ETL speed is fine; we just want to increase report performance.
    Thoughts? Has anyone seen significant gains in data warehouse report performance with compression?
    Also, the current PCTFREE on the table is 10%.
    As we only insert into the table, we are considering making this 1% to improve report performance.
    Thoughts?
    Thanks

    First of all:
    Table Compression and Bitmap Indexes
    To use table compression on partitioned tables with bitmap indexes, you must do the following before you introduce the compression attribute for the first time:
    Mark bitmap indexes unusable.
    Set the compression attribute.
    Rebuild the indexes.
    The first time you make a compressed partition part of an existing, fully uncompressed partitioned table, you must either drop all existing bitmap indexes or mark them UNUSABLE before adding a compressed partition. This must be done irrespective of whether any partition contains any data. It is also independent of the operation that causes one or more compressed partitions to become part of the table. This does not apply to a partitioned table having B-tree indexes only.
    This rebuilding of the bitmap index structures is necessary to accommodate the potentially higher number of rows stored for each data block with table compression enabled. Enabling table compression must be done only for the first time. All subsequent operations, whether they affect compressed or uncompressed partitions, or change the compression attribute, behave identically for uncompressed, partially compressed, or fully compressed partitioned tables.
    To avoid the recreation of any bitmap index structure, Oracle recommends creating every partitioned table with at least one compressed partition whenever you plan to partially or fully compress the partitioned table in the future. This compressed partition can stay empty or even can be dropped after the partition table creation.
    Having a partitioned table with compressed partitions can lead to slightly larger bitmap index structures for the uncompressed partitions. The bitmap index structures for the compressed partitions, however, are usually smaller than the appropriate bitmap index structure before table compression. This highly depends on the achieved compression rates.

  • OLTP compression and Backupset Compression

    We are testing out a new server before we migrate our production systems.
    For the data we are using OLTP compression.
    I am now testing performance of rman backups, and finding they are very slow and CPU bound (on a single core).
    I guess that this is because I have also specified to create compressed backupsets.
    Of course for the table blocks I can understand this attempt at double compression will cause slowdown.
    However for index data (which of course cannot be compressed using OLTP compression), compression will be very useful.
    I have attempted to improve performance by increasing the parallelism of the backup, but from testing this only increases the channels writing the data; there is still only one core doing the compression.
    Any idea how I can apply compression to index data, but not the already compressed table segments?
    Or is it possible that something else is going on?

    Hi Patrick,
    You can also check my compression level test.
    http://taliphakanozturken.wordpress.com/2012/04/07/comparing-of-rman-backup-compression-levels/
    Thanks,
    Talip Hakan Ozturk
    http://taliphakanozturken.wordpress.com/
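    The tradeoff those levels expose can be reproduced in miniature with Python's zlib (a sketch only; zlib levels 1 and 9 here are stand-ins, not RMAN's actual LOW/MEDIUM/HIGH implementations): a higher level buys a smaller output for more CPU time, which is the same currency as the single-core bottleneck described in the question.

    ```python
    import zlib

    # Repetitive row data, standing in for compressible table blocks.
    data = b"order_id,amount,status\n" + b"1042,19.99,SHIPPED\n" * 5_000

    fast = zlib.compress(data, level=1)   # cheap on CPU, larger output
    best = zlib.compress(data, level=9)   # CPU-hungry, smallest output

    print(f"raw={len(data)}  level1={len(fast)}  level9={len(best)}")
    ```

    On the single-core observation: each RMAN channel compresses its own backup set, so it may be worth checking whether all datafiles are ending up in one backup set on one channel; spreading files across channels is the usual way to engage more cores.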

  • How to compress and decompress a pdf file in java

    I have a PDF file.
    What I want to do is compress and decompress that file with a lossless compression technique.
    Please help me do so;
    I have always been worried about this topic.

    Here is a simple program that does the compression bit (the ZIP format is lossless, so it satisfies your requirement; decompression is the mirror image using ZipInputStream).
    import java.io.*;
    import java.util.zip.*;

    static void compressFile(String compressedFilePath, String filePathTobeCompressed) {
         try {
              ZipOutputStream zipOutputStream = new ZipOutputStream(new FileOutputStream(compressedFilePath));
              File file = new File(filePathTobeCompressed);
              // Use a fixed-size buffer instead of loading the whole file into memory
              byte[] readBuffer = new byte[4096];
              int bytesIn;
              FileInputStream fileInputStream = new FileInputStream(file);
              zipOutputStream.putNextEntry(new ZipEntry(file.getName()));
              while ((bytesIn = fileInputStream.read(readBuffer)) != -1) {
                   zipOutputStream.write(readBuffer, 0, bytesIn);
              }
              zipOutputStream.closeEntry();
              fileInputStream.close();
              zipOutputStream.close();
         } catch (IOException e) {
              // FileNotFoundException is a subclass of IOException, so one catch suffices
              e.printStackTrace();
         }
    }

  • Gdal compression and pyramids

    Hi!
    I'm having problems importing raster-data using gdal with compression.
    DB-Version: 11.2.0.1.0
    gdal-Version: 1.72
    gdal-Statement: gdal_translate -of georaster \\Path\file.tif geor:user/pw@server,A_TABLE_GK2,GEORASTER -co compress=DEFLATE -co nbits=1 -co "INSERT=VALUES(100,'file.tif','GK2',SDO_GEOR.INIT('RDT_A_TABLE_GK2'))"
    The import works fine and the data is loaded into my table. I can validate the file using
    1. sdo_geor.validateblockMBR(georaster),
    2. SDO_GEOR_UTL.calcRasterStorageSize(georaster),
    3. substr(sdo_geor.getPyramidType(georaster),1,10) pyramidType, sdo_geor.getPyramidMaxLevel(georaster) maxLevel
    4. SELECT sdo_geor.getCompressionType(georaster) compType,sdo_geor.calcCompressionRatio(georaster) compRatio
    5. SELECT sdo_geor.getCellDepth(georaster) CellDepth,substr(sdo_geor.getInterleavingType(georaster),1,8) interleavingType,substr(sdo_geor.getBlockingType(georaster),1,8) blocking
    and all results are true (or feasible).
    Now my problem:
    DECLARE
    gr sdo_georaster;
    BEGIN
    SELECT georaster INTO gr
    FROM A_TABLE_GK2 WHERE georid = 11 FOR UPDATE;
    sdo_geor.generatePyramid(gr, 'resampling=CUBIC');
    UPDATE A_TABLE_GK2 SET georaster = gr WHERE georid = 11;
    COMMIT;
    END;
    Error report:
    ORA-01403: no data found
    ORA-06512: at line 4
    01403. 00000 - "no data found"
    *Cause:
    *Action:
    The pyramid cannot be calculated. Leaving out the parameter -co compress=DEFLATE allows me to generate pyramids (though this results in an exploding tablespace, as 2 GB of data in the file system grows to about 120 GB in the database without compression, and 2 GB is only a small fraction of the data needed).
    I already noticed that gdal needs the parameter -co compress=DEFLATE in upper case to allow validation of the georaster, but this doesn't change my problems calculating pyramids.
    Does anybody have an idea?
    NilsO

    We definitely need a colordepth of 1 bit as the input files are b/w. Importing with 8 bit blows up the filesize by a factor of 8 (surprise ;-) ) and our customer has a lot of data he can't handle at 8 bit.
    The georid in the import-statement is only a dummy. We're using a trigger to insert the georid (at the moment we're around georid 7000) but all data I gave is taken from the same georaster-object. I already ran a series of tests using nbits, compression, srid-statements in gdal. Importing using srid and nbits works fine with validation and pyramids. Using compression-parameter (with or without srid, nbits) doesn't.
    Current workaround is to import without compression and every 50 files we compress the data and shrink tablespace. Slow performance and I needed to write a tool to create a set of gdal-import statements combined with a function call on oracle using sqlplus. Works for the moment, but no solution for the future....
    C:\Program Files (x86)\gdal172>gdalinfo georaster:user/pw@db,A_TABLE_GK2,
    GEORASTER,GEORID=100 -mdd oracle
    Driver: GeoRaster/Oracle Spatial GeoRaster
    Files: none associated
    Size is 15748, 15748
    Coordinate System is `'
    Metadata (oracle):
    TABLE_NAME=A_TABLE_GK2
    COLUMN_NAME=GEORASTER
    RDT_TABLE_NAME=RDT_A_TABLE_GK2
    RASTER_ID=13209
    METADATA=<georasterMetadata xmlns="http://xmlns.oracle.com/spatial/georaster">
    <objectInfo>
    <rasterType>20001</rasterType>
    <isBlank>false</isBlank>
    </objectInfo>
    <rasterInfo>
    <cellRepresentation>UNDEFINED</cellRepresentation>
    <cellDepth>1BIT</cellDepth>
    <totalDimensions>2</totalDimensions>
    <dimensionSize type="ROW">
    <size>15748</size>
    </dimensionSize>
    <dimensionSize type="COLUMN">
    <size>15748</size>
    </dimensionSize>
    <ULTCoordinate>
    <row>0</row>
    <column>0</column>
    </ULTCoordinate>
    <blocking>
    <type>REGULAR</type>
    <totalRowBlocks>62</totalRowBlocks>
    <totalColumnBlocks>62</totalColumnBlocks>
    <rowBlockSize>256</rowBlockSize>
    <columnBlockSize>256</columnBlockSize>
    </blocking>
    <interleaving>BIP</interleaving>
    <pyramid>
    <type>NONE</type>
    </pyramid>
    <compression>
    <type>DEFLATE</type>
    </compression>
    </rasterInfo>
    <layerInfo>
    <layerDimension>BAND</layerDimension>
    </layerInfo>
    </georasterMetadata>
    Image Structure Metadata:
    INTERLEAVE=PIXEL
    COMPRESSION=DEFLATE
    NBITS=1
    Corner Coordinates:
    Upper Left ( 0.0, 0.0)
    Lower Left ( 0.0,15748.0)
    Upper Right (15748.0, 0.0)
    Lower Right (15748.0,15748.0)
    Center ( 7874.0, 7874.0)
    Band 1 Block=256x256 Type=Byte, ColorInterp=Gray
    With -checksum it's
    Can't see the beginning anymore on console...
    ERROR 1: ZLib return code (-3)
    ERROR 1: ZLib return code (-3)
    ERROR 1: ZLib return code (-3)
    ERROR 1: ZLib return code (-3)
    More than 1000 errors or warnings have been reported. No more will be reported from now.
    Checksum=12669
    regards,
    NilsO
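    For what it's worth, ZLib return code (-3) is Z_DATA_ERROR: the decompressor received bytes that are not a valid DEFLATE stream. That points at the blocks being written or read back wrongly, not merely compressed. A quick illustration with Python's zlib (an assumption worth stating: GDAL links the same C zlib, so the numeric code matches):

    ```python
    import zlib

    good = zlib.compress(b"raster block payload")
    corrupt = b"\x00\x01" + good[2:]   # clobber the two-byte zlib header

    assert zlib.decompress(good) == b"raster block payload"  # intact data round-trips

    try:
        zlib.decompress(corrupt)
        raised = None
    except zlib.error as e:
        raised = e                     # reports error -3 (Z_DATA_ERROR)
    print(raised)
    ```

    So the checksum run is telling you the stored blocks themselves are bad, which is consistent with the compression path in the import being the culprit.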

  • What thrid party tools are you using for backup compression?

    I have a few SQL Server 2008 Standard Edition instances and I'm curious what third-party tools people are using for backup compression. I know upgrading to Enterprise Edition or SQL Server 2008 R2 or above will give me that compression, but I do not have that option right now. I'm familiar with Idera, Red Gate and LiteSpeed, and I just want to know what else is out there.
    Thank you in Advance.

    There are many third-party tools.
    But why a third-party tool?
    SQL Server's integrated backup and restore feature is complete. For more info, please see these links:
    Backup Overview (SQL Server)
    Introduction to Backup and Restore Strategies in SQL Server
    SQL Server Backup Best Practices
    sqldevelop.wordpress.com

  • Attack and Release in Compression

    Okay, rather than a problem, another of my academic "how does it work" discussions.
    On another forum, I've been debating the meaning of "attack" and "release" in compression.  My contention is that the attack time is how long it takes to achieve the preset level of gain reduction once the signal crosses the threshold level, and release is the opposite: the time it takes to return to no gain reduction, again once the level crosses the threshold in the downward direction.
    Another poster, who usually knows what he is talking about, contends that the attack and release times are triggered not by levels hitting the threshold, but simply by the compressor sensing an upward or downward level change.
    As proof he posted a test from Sound Forge: a tone that transitioned between -12 and -3 dB at 300 ms intervals.  He then applied compression with the threshold set to -16 (so everything should be compressed) and, indeed, the waveform shows the effect of attack and release at each transition even though the whole test should be compressed.
    I tried the same in Audition and, although the artefacts aren't as noticeable as in Sound Forge, you can see the pumping on the waveform.
    I can't see how the attack and release could be clever enough to react to every change in level (especially on a real signal, not his test), so I have to assume the cause is a bit more basic.
    Here's a picture of the original file I made:
    And here's the same file compressed with an attack of 2 ms, a release of 5 ms and a threshold of -15 dB, i.e. lower than anything in the clip.
    As you can see, you can see where the attack and release happen (I won't post a pic but, with attack and release set to zero, you can't see any transitions), so if anyone can explain this you'll cure my curiosity!
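    A minimal feed-forward compressor model reproduces this behaviour (a sketch under assumptions: a hard-knee static curve and one-pole smoothing of the gain with separate attack/release coefficients; real plug-ins differ in detail). With the threshold below both tone levels, the two levels still demand two different amounts of gain reduction, so every -12 to -3 dB step forces the smoothed gain to glide to a new target; that glide, governed by the attack and release times, is the visible artefact even though the threshold is never crossed:

    ```python
    import math

    def simulate(levels_db, threshold=-16.0, ratio=4.0,
                 attack_ms=2.0, release_ms=5.0, sr=48_000):
        """Per-sample gain reduction (dB) of a feed-forward compressor model."""
        atk = math.exp(-1.0 / (sr * attack_ms / 1000.0))
        rel = math.exp(-1.0 / (sr * release_ms / 1000.0))
        gain = 0.0                        # current gain reduction in dB
        trace = []
        for lvl in levels_db:
            over = max(0.0, lvl - threshold)
            target = over - over / ratio  # reduction demanded by the static curve
            coeff = atk if target > gain else rel
            gain = coeff * gain + (1.0 - coeff) * target
            trace.append(gain)
        return trace

    # 300 ms at -12 dB then 300 ms at -3 dB, both above the -16 dB threshold
    trace = simulate([-12.0] * 14_400 + [-3.0] * 14_400)

    # Steady-state reduction is ~3 dB in the quiet section but ~9.75 dB in
    # the loud one, so the smoothed gain must glide between them at every
    # step; no threshold crossing is required for that movement to appear.
    print(round(trace[14_399], 2), round(trace[-1], 2))
    ```

    In other words, both forum positions are half right: the threshold decides how much reduction is demanded, but attack and release smooth the gain toward that demand whenever it changes, threshold crossing or not.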

    But I'd still like to know how software compression works anyway.
    I do not claim credit for writing this description. It comes from another recording site I frequent; it puts things in pretty plain words, so we made it a "sticky."
    [quoted post]
    Shotgun's Compressor Tools 1 of 2
    What you have to do is understand what compressors do, and what each of the controls does IN GENERAL.  Then you apply that knowledge to what you want out of using the compressor and what your ears hear AT THE TIME OF USE so that you can adjust as necessary.  So, read below for an overview of the box as a whole and each knob you're likely to find on it.
    Compression
    From the name, one can surmise that a compressor is going to squish, squash, mash or pulverize something.  Given that we plug audio signals into it, we can further surmise that what is getting squished, squashed, mashed or pulverized is, indeed, our audio signal.  And one would be completely correct in assuming that.  But what does that really mean?
    Well, consider an audio signal.  Let's say it's a recording of my mom yelling at me about leaving my laundry piled haphazardly in the hallway. First, mom starts out trying to reason with me, gently, "Shotgun, you know, it's just not conducive to laundry efficiency leaving that stuff piled haphazardly like that..." her voice is calm, even and even somewhat soft.  As I stare at her blankly, not understanding the finer points of sorting one's laundry and transporting it to the appropriate room in the house, her voice becomes stronger and louder.  "SHOTGUN! I'M GOING TO BEAT THE LIVING **** OUT OF YOU WITH A TIRE IRON IF YOU DON'T PICK THIS **** UP IMMEDIATELY AND PUT IT WHERE IT BELONGS SO HELP ME GOD!"  Now she's yelling, screaming, in fact.  Her face is red and frankly, I've just soiled myself which makes the entire laundry issue even more complicated.
    Now, let's assume we're going to lay this recording of mom over some Nine Inch Nails-style door slamming, pipe clanging, fuzz guitar backing tracks.  It's going to be an artistic tour-de-force.  However, when mom started out, her voice was hitting only about 65-70dB--normal conversational speech.  By the time she's done it's more like 105dB worth of banshee howling.  Unfortunately, our backing tracks are a pretty even volume the whole way through.  So, at the beginning of the track mom will be virtually inaudible whereas at the end she'll be drowning out my samples of whacking a stapler on a desk.  How do we deal with that?
    WE USE A COMPRESSOR!
    You see, what a compressor compresses is volume.  That is, technically, it compresses the amplitude of the signal, or its "gain".  So for every decibel that goes into the compressor, only a fraction of it will come out.  That means that (depending on our settings, see below) if mom's voice uncompressed winds up at 105dB then we can set our compressor so that it only gets as high as 52dB if we want.  How does that help you ask?  Won't it still be too low to hear over the backing music?  Yes it will, but read on and we'll cover that in the controls discussion.
    Threshold
    The threshold control on a compressor sets a level below which the compressor will do no work.  The control is graduated in dB (in this case dBV of signal level) and allows you to set an "on/off" point so that you can compress the LOUD parts of a signal, and leave the soft parts alone.  At times you may want to set this control low enough so that you're affecting the entire signal, at times you may not.  In the case of mom's rant-on-tape, what we may want to do is set the compressor so that it doesn't touch the signal until her voice reaches something like 85dB or so***, say, about halfway up the scale from softest to loudest.  So, we set the threshold so that we only see activity on our "gain reduction" meter when the track gets to a certain point. 
    To USE the threshold control effectively, you generally need to use your ears.  Have some idea, before you start, of what you hope to accomplish by using the compressor and set the threshold to Capture the part of a signal you wish to do whatever that is to.  In our example I want to lower the louder parts of my mom's tirade so I set the threshold to activate the compression at some arbitrary point in the track.  I could have done it several other ways and the only way to learn which is best is to experiment and listen.
    Ratio
    This is the control that tells us how much signal comes out of the box relative to what's coming in.  It is graduated in terms of a ratio (hence the name) of output to input.  So, let's say we set the control to point at "2:1".  That means that for every 2dB of incoming signal, we're only going to get 1dB of outgoing signal.  Which means that at its very loudest, mom's voice isn't going to be nearly as loud as it was originally.  Keep in mind that this ratio only applies to signals that meet or exceed the threshold setting.  Any signal that is below the threshold just passes through as though the compressor weren't there (kinda). 
    To use the ratio control effectively you, again, need some idea of what you want out of your compressor overall.  In our case I just need mom's voice to be more easily mixed in with the backing tracks so I just want it to be kind of even.  However, I still want it to start softer and get louder, just maybe not AS soft at the beginning and not AS loud at the end.  That is, still changing, just not as much.
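    For the arithmetic-minded, the threshold and ratio behaviour just described reduces to a two-line static curve. A minimal sketch in Python, using the 85 dB threshold and 2:1 ratio from the example (hard-knee assumed; real compressors vary in knee shape and detector design):

    ```python
    def compressed_level(in_db, threshold_db=85.0, ratio=2.0):
        """Hard-knee static curve: untouched below threshold, scaled above it."""
        if in_db <= threshold_db:
            return in_db
        # every ratio dB over the threshold comes out as 1 dB over it
        return threshold_db + (in_db - threshold_db) / ratio

    print(compressed_level(70.0))    # calm mom: below threshold, passes through
    print(compressed_level(105.0))   # yelling mom: 20 dB over becomes 10 dB over
    ```

    At 2:1 the 105 dB shriek lands at 95 dB while conversational speech passes through untouched, which is exactly the "loud parts closer to the soft parts" effect that makeup gain later compensates for.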
    Attack
    The attack control tells us how quickly, once a signal meets or exceeds the threshold, the compressor puts the smack down on said signal.  The control is usually graduated in intervals of time, usually marked in milliseconds.  So, let's say that I set my attack control to say 5ms.  That means that when the signal passing through reaches the threshold I've set, the compressor waits an additional 5ms before it begins to reduce the amplitude (again, gain).  This seems counter-intuitive doesn't it?  I mean, we want the level controlled WHEN it reaches the threshold, right?  Not 5ms later.  Well, there are reasons for slightly delaying the attack (and for that matter release) times.
    To use the attack time effectively (and by now you should have seen this coming) you need to know what you want out of your compressor in general.  Do you want the signal clamped down on fairly quickly?  Or not?  How do you know?  This brings in one of the most important concepts of recording: attack and decay.  Each sound has an attack and a decay.  Imagine hitting a drum (the easiest place to see this concept).  You hear the sharp, immediately loud sound as the stick hits the head, but you also hear the sound gently fade away.  That initial WHACK, that initial spike in amplitude is the sound's attack. Everything else is its decay.  Note that I use these terms in a "Shotgun" type of way and there are more correct ways to say this, I think, but I tend to, over time, develop my own language, so you're at a disadvantage.
    So then, we can hear an attack in mom's voice, too.  It's more subtle than the attack of a drum hit with a stick, or a guitar player's pick against a string, but it's there.  And if we set our compressor's attack time too short, we will lose all the definition of the attack of the sound.  Sometimes that's desirable, but in our case it is not.  A very large percentage of how people perceive sounds comes from the attack. You must strive to preserve that unless it is your desire to purposely not.  Therefore, be very careful with the attacks under your care.  In the case of a vocal track, the attack of the voice will lend very much to the intelligibility of the track, so we do NOT want to destroy it. So, we may want a slightly longer attack time than 5ms here.  But we can only tell BY LISTENING.  LISTEN to the track, sweep the attack control back and forth and listen to what happens to the attack of the sounds. If it sucks, move the control.  Don't look at where it's pointing until you're satisfied with how it sounds.  Then only look for the sake of curiosity because that setting may never work the same way again.  if you're using a plug-in make sure you allow ample time for the movement to take effect.  Moving a plug-in's controls can sometimes not take effect for a full second or two after you move it so if you're sweeping it back and forth rapidly you'll fool yourself.  In the case of plugins, make a move and pause until it changes.  If it doesn't change within 2-3 seconds, maybe you didn't move it far enough.
    Release
    As you might guess the release control handles the other end of the signal from the attack.  That is, when a signal drops back below the threshold, how long does the compressor wait to actually stop compressing.  All the same counter-intuitiveness applies here as well. However, remember that the decay or "tail" of a signal isn't as important to the listener as the attack so you can get away with a little more here.  Again this control is going to be graduated in units of time, usually ms.  However, the numbers will be larger than the attack times.  Sometimes up into the 100's of ms or even full seconds. 
    To set a proper release time, again, understand what you want out of your compressor.  Do you want a major thrashing to your sound, or do you just want kind of a gentle corrective measure?  What you have to look out for in the case of release times is pumping.  If your release time is set too short then the sound will drop below the threshold, the compressor will release it, but the sound will then jump UP in level because the compression is no longer making it softer, but it's below threshold.  That probably sounds confusing, but it happens.  And it will sound pretty odd.  The first time you hear it you'll understand why it's called "pumping".  It sounds almost like there's a new "attack" near the end of the signal's decay.  As I've said before, sometimes this is actually desirable.  Usually it's not though.  Your goal is to set a release time long enough to give the sound time to naturally decay to a point that when the compressor lets go it won't "pump" yet short enough so that the compressor isn't still active when the next "attack" comes along.  If you set your release time too long it will start ****ing around with the attacks because it's taking so long to let go the next loud signal is there before the last one is finished compressing.  So, if you get your attack set where you think it's right, but then you start losing your attack again, consider dropping that release time lower (faster). 
    Make up gain
    Here's where we answer your initial question of "Won't it still be too low to hear over the backing music?"  Remember that we noted that mom's voice started out so low that it was lost in the music.  And all we've done so far is to use our compressor to take the bite out of the louder part of the track so that it's not overpowering.  So, doesn't this leave the softer part still lost?  And, possibly, doesn't it make the WHOLE TRACK too soft now?  Yes, it absolutely does.  But that's what we have makeup gain for. 
    The makeup gain is going to look very similar to any other gain control you have seen.  It will be marked off in dB, possibly starting at 0dB and moving up to some obscene amount like 20 or 40 or 60 or 100,000 or something.  (It won't really be 100,000).  The makeup gain does just what it says it does, too.  It allows you to "make up" the gain that you're losing by compressing in the first place.  Now, that doesn't mean it UNDOES what you just did, not by any means.  It means that you can now take your newly compressed signal and make the WHOLE THING louder. This is how we're going to get the parts that are too soft up where they belong. 
    To set this control we're going to, of course, listen.  What we've done thus far is to compress down the loudest parts of the signal so that they're not so loud.  You can say that the loud parts are now "closer" to the soft parts so to speak.  So what you do with your makeup gain is to take the whole lot and move it back UP some smaller amount so that now the loudest parts are just still loud, but not AS loud and the softer parts are still soft, but loud enough to be heard.  Think of yourself playing basketball.  If you're short like me, there's no way you can slam dunk a basketball.  However, let's say you can lower your basketball goal by one foot.  Now it's lower, but you still can't slam dunk it, but lowering it any more would ruin the rest of the game because you'd just be dropping the thing in and not shooting.  So what you do is you make yourself magically grow a foot as well.  Now the goal is still a reasonable height, but you can slam dunk because you've grown a bit yourself.  Same sorta thing.  Your signal isn't so low it sucks now, but it isn't so high you can't get anything useful out of it as well.
    Here's a shocker: in terms of makeup gain there IS a general rule you can keep in mind.  As you're setting your compressor's other settings you will notice the meter marked "gain reduction" giving you some idea of what you're doing to the signal.  It could be a schizophrenic little peak meter or it could be a big, slow, thoughtful VU meter.  Either way it'll tell you "hey, you're getting about 5dB of gain reduction here pal!"  So, this tells you you can START your makeup gain at a setting of +5dB.  That should give you a compressed signal at the same general level as the uncompressed signal.  Kinda.  Sorta.  It's a really ROUGH starting point, but it's a starting point nonetheless.  Again, though, twist it and listen to get it where it really needs to be.  You may want more, you may want less.
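    The threshold/ratio/makeup-gain math described above can be sketched in a few lines of Python. The threshold, ratio, and makeup values here are purely illustrative numbers for the arithmetic, not recommended settings:

    ```python
    def compress_db(level_db, threshold_db=-18.0, ratio=4.0, makeup_db=5.0):
        """Static compressor curve: levels above the threshold rise at 1/ratio
        the rate of the input; makeup gain then lifts the whole signal."""
        if level_db > threshold_db:
            out_db = threshold_db + (level_db - threshold_db) / ratio
        else:
            out_db = level_db
        return out_db + makeup_db

    # A loud peak at -6 dB is squeezed to -15 dB, then makeup lifts it to -10 dB;
    # a soft passage at -30 dB passes through untouched and comes out at -25 dB.
    print(compress_db(-6.0))   # -10.0
    print(compress_db(-30.0))  # -25.0
    ```

    Notice how the 24 dB gap between the loud and soft parts shrinks to 15 dB: that "closer together" effect is exactly what the basketball-goal analogy describes.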
    The ******** you'll hear
    Now, as you get replies to this thread there will be plenty of numbskulls along to give the following answers:
    (1) Shotgun you're such a ****ing *******.  The guy just wanted some basic info, some basic starting points for his compressor why do you have to be such a *****? 
    (2) Shotgun, you don't understand compression and you've never done any recording, HAVE you?
    (3) Here are my basic settings and they'll probably work
    None of that is even remotely true.  Sure, there are plenty of basic starting points anybody here could give you.  In fact, many of these folks have only been using compressors for about 6 months, but even THEY will have ONE setting group that they like for some reason and are DYING to tell you it in order to appear knowledgeable.  Do not listen to any of this ****.  Develop your own views on good starter compression settings by applying what you learn and what you hear and what you observe in your own experience.  There are so many different kinds of compressors that anybody who gives you a rough setting diatribe is just pissing in the wind.  In fact, many types of compressors don't even HAVE some of the controls I mentioned.  Some have more.  Also, there are plenty of points we haven't covered.  For example limiting, which is a special kind of compression that uses a very high ratio (often infinity:1).
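    The limiting case mentioned above, a ratio of infinity:1, reduces to a hard clamp at the threshold. As a sketch (the -6 dB threshold is just an example value):

    ```python
    def limit_db(level_db, threshold_db=-6.0):
        """An infinity:1 'compressor' never lets the level exceed the threshold."""
        return min(level_db, threshold_db)

    print(limit_db(0.0))    # -6.0 (clamped at the threshold)
    print(limit_db(-12.0))  # -12.0 (below threshold, untouched)
    ```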
    [/quote]
    This isn't the complete post but it is the pertinent section.
    Jack

  • RMAN backup compress question

    Oracle ver 10.2.0.1 -- 24/7
    250G database. I ran the script
    run {
    allocate channel c1 type disk format '/hotbackup/%d/db_%d_t%t_s%s_p%p%c';
    set limit channel c1 kbytes 2024800;
    sql "alter system archive log current";
    backup as compressed backupset
    incremental level 0
    filesperset 20 (database include current controlfile);
    release channel c1;
    # Backup all archive log files
    allocate channel t1 type disk format '/hotbackup/%d/al_%d_%s_%p%c';
    backup as compressed backupset filesperset 20 (archivelog all delete input);
    release channel t1;
    delete noprompt backup completed before 'sysdate-1';
    }
    And the backup file is still around 220G. Is there another way to compress? Anybody have an idea how to make the backup smaller?
    Thank You

    How many CPUs does your server have?
    And why don't you let RMAN choose the defaults for files per set, backup set piece size, etc.? When running a compressed backup, RMAN normally uses approximately 12% of one CPU per channel. A 45-hour backup time is clearly unacceptable, all the more so for a 24/7 database, and the single channel you are allocating might be the culprit. I would suggest using about 4 channels (depending on your CPU load) and not restricting the files per set or the backup set piece size.
    We do a compressed backup to disk of a 230G database on an 8-processor server with 4 channels allocated, and the backup normally completes in about 1.45 hours; the full backup size produced is around 30G (level 0 is a full backup).
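    A multi-channel version of the original script might look like the sketch below. The channel count, format strings, and the decision to drop the per-piece size limit follow the suggestion above; adjust them to your own CPU load and disk layout:

    ```
    run {
      allocate channel c1 type disk format '/hotbackup/%d/db_%d_t%t_s%s_p%p';
      allocate channel c2 type disk format '/hotbackup/%d/db_%d_t%t_s%s_p%p';
      allocate channel c3 type disk format '/hotbackup/%d/db_%d_t%t_s%s_p%p';
      allocate channel c4 type disk format '/hotbackup/%d/db_%d_t%t_s%s_p%p';
      sql "alter system archive log current";
      backup as compressed backupset incremental level 0
        database include current controlfile;
      backup as compressed backupset archivelog all delete input;
      release channel c1;
      release channel c2;
      release channel c3;
      release channel c4;
    }
    ```

    With no filesperset or set limit clauses, RMAN spreads the datafiles across the four channels itself, which is usually what you want for a plain disk backup.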

  • I am exporting a Pages document to Epub and Pages is compressing my jpg images.  How do I keep the original jpg size during the export to epub process?

    I am exporting a Pages document to ePub and Pages is compressing my jpg images (I think to 72 dpi from the original 600 dpi). 
    How do I keep the original jpg size during the export to epub process?

    We are still trying to learn how to use Pages to build ePub documents with high-resolution graphics that will expand clearly when they are tapped. Very large screen shots are my example here.

  • Compress and rollup the cube

    Hi Experts,
    Do we have to compress and then roll up the aggregates? What happens if we roll up before compressing the cube?
    Raj

    Hi,
    Data is rolled up into the aggregates per request: once a load finishes, that request is rolled up so the aggregates are filled with the new data. After compression, the individual request is no longer available.
    Whenever you load data, you roll up to fill all the relevant aggregates.
    When you compress the cube, all request IDs are dropped.
    So when you compress the cube, the "COMPRESS AFTER ROLLUP" option ensures that all the data is rolled up into the aggregates before the compression runs.
    hope this helps
    Regards,
    Haritha.
    Edited by: Haritha Molaka on Aug 7, 2009 8:48 AM
