Tar command with compress option

We are in the process of developing a shell script to take an Oracle database backup using the tar command with a compress option, as below. But this command is not working. Please help.
cd /u01/oradev/datafiles
tar cvf - data | compress -c > /u02/oraprod/dbf/proddata_`date +%d%b%Y`.tar.Z
Appreciate your kind help.
OS : Linux v5.5

Assuming that you know what you are doing in using primitive tar to back up a database, as opposed to RMAN, the command on Linux would look something like this:
tar -czvf database-backup.tgz /u01/orclwhere
-c = create tar
-z = compress tar
-v = verbose
-f database-backup.tgz  = filename of tarball
/u01/orcl = database directory containing the data files
Also make sure that the control files and the pfile or spfile are included.
And seriously reconsider this approach. This process can run happily and without any errors, and still provide a totally useless database backup.
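For completeness, the pipeline from the original question can also be kept, with gzip substituted for the legacy `compress` utility (which is often not installed on modern Linux; `compress` produces .Z, gzip produces .gz). A minimal sketch, with the question's /u01 and /u02 paths treated as illustrative:

```shell
#!/bin/sh
# Sketch of the original tar | compress pipeline, with gzip standing in
# for the legacy `compress` tool. Paths are illustrative placeholders.
backup_data() {
    src=$1     # directory that contains the data/ subdirectory
    dest=$2    # directory that receives the dated tarball
    stamp=$(date +%d%b%Y)
    # -C enters $src first, so the archive stores relative paths
    tar -C "$src" -cf - data | gzip -c > "$dest/proddata_${stamp}.tar.gz"
}
```

e.g. backup_data /u01/oradev/datafiles /u02/oraprod/dbf. The warning above still applies: unless the database is shut down (or the tablespaces are in backup mode), the archive is not a usable backup.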

Similar Messages

  • BSP adds CR/LF marker at the end of the page even with compression options

    Hello!
    I'm creating normal BSP Page with Flow Logic and setting the contents myself:
    <%@page language="abap" %><%
      response->set_cdata( '-=-' ).
    %>
    The problem is that WebAS somehow always adds a CR/LF (new line) marker at the end of the page (even if I set compression option to "Remove leading and trailing spaces"). I also tried to set different mime-types.
    On the other side there is a BSP Application it00 (described here http://help.sap.com/saphelp_nw04/helpdata/en/eb/8c683c8de8a969e10000000a114084/content.htm ) that shows the uploaded file right.
    So I was trying to include that 'solution', setting the OnInputProcessing event to this:
    *         set response data to be the file content
              runtime->server->response->set_cdata( '!=!' ).
    *         set the mime-type and file size in the response
              runtime->server->response->set_header_field(
                name  = 'Content-Type'
                value = 'text/plain' ).
              runtime->server->response->set_header_field(
                name  = 'Content-Length'
                value = '3' ).
              navigation->response_complete( ).
    But nothing happens at all.
    Please help me get rid of these two annoying bytes (CR/LF) at the end of the page.

    Thank you, Cornelia!
    It works now.
    The following code is a "must have":
    navigation->response_complete( ).

  • SDO_RELATE command with 'coveredby' option doesnt return record as expected

    Hi all,
    I use Oracle Spatial 10.2. I use sdo_relate to compare two tables' geometries. Table 1 has ~2000 records, each a rectangle with an sdo_geometry field. 'Table 2' is a transient geometry, defined by sdo_geometry as a rectangle.
    The definition of table 2 is:
    SDO_GEOMETRY(2003, NULL, NULL, SDO_ELEM_INFO_ARRAY(1,1003,3), SDO_ORDINATE_ARRAY(-50,50, 50,-50))
    I am working in world coordinates.
    and the whole SQL for comparison is:
    select name from KMK.summary where sdo_relate(mbr,SDO_GEOMETRY(2003, NULL,NULL, SDO_ELEM_INFO_ARRAY(1,1003,3), SDO_ORDINATE_ARRAY(-50,50, 50,-50)), 'mask=COVEREDBY') = 'TRUE';
    The problem is that this command doesn't return any values, i.e. no records are selected from table 1. I am sure there are features that evaluate to TRUE for 'mask=COVEREDBY'.
    here is one of them:
    SQL> SELECT SDO_AGGR_MBR(mbr) FROM kmk.summary where name = ' GETDB_GLOBAL_004065_00O';
    SDO_AGGR_MBR(SHAPE)(SDO_GTYPE, SDO_SRID, SDO_POINT(X, Y, Z), SDO_ELEM_INFO, SDO_
    SDO_GEOMETRY(2003, NULL, NULL, SDO_ELEM_INFO_ARRAY(1, 1003, 3), SDO_ORDINATE_ARR
    AY(-50, -25.25, -48.25, -24.5))
    This feature's MBR goes to -50, so it should be selected.
    What am I doing wrong?
    Thanks

    Hi, I will give it a try on Monday with numwidth 20.
    I tried with the INSIDE mask and it returns the right number of records - only those that are completely within/inside the mask. This one record is still not selected.
    I use ARCGIS on the same table with SDE so I know what number of records to expect.
    When you did the test, was GETDB_GLOBAL_004065_00O selected with the coveredby mask?
    We are running v10.2.0.3 - could this make a difference?
    It seems to me that when the coordinates are evaluated the algorithm uses the following for comparison:
    (In the case of coveredby/inside) it compares every point to the mask points: if a point is less than the mask boundary it returns FALSE, but it doesn't also check for equality to return TRUE. On the other hand, it checks whether a point is greater than or equal, and if so returns TRUE. That is why it selects records that have one boundary on top of the search window but extend outside it.
    EDIT: It seems it is the precision. When I set it to 25, it returned -50.0000000003723 for the Xmin.
    Now, this table was created from an ESRI shp file, imported to Oracle using ArcSDE. Do you have any idea how shp files and their precisions are handled in Oracle?
    Edited by: garnet on Mar 16, 2009 10:13 AM
    Edited by: garnet on Mar 16, 2009 4:31 PM

  • Command, Control, and Option keys no longer works with Citrix Receiver after upgrading to OSX 10.9.5

    At first my keyboard did not work with Citrix Receiver after upgrading to OS X 10.9.5, and I found a workaround using "cmd + tab". However, once my keyboard worked within Citrix, the Command, Control, and Option keys no longer worked. Does anyone have a remedy for this?

    Keyboard no longer works with Citrix Receiver after upgrading to OSX 10.9.5

  • Repairing Library Permissions: the instructions didn't work - clicking the iPhoto icon with the Option and Command keys opened iPhoto but not First Aid as promised

    Went to Help, but the instructions didn't work: clicking the iPhoto icon with the Option and Command keys opened iPhoto but not First Aid as promised.

    Give this a try:
    1 - be sure to have a current backup of the iPhoto library.
    2 - download and launch http://www.macchampion.com/arbysoft/BatChmod.
    3 - click on the File button, locate and select your iPhoto Library.
    4 - check the Unlock and Apply to enclosed checkboxes.
    5 - make no changes to the other checkboxes
    6 - click on the Apply button.
    OT

  • For some reason, my compression option is grayed out under "File" menu, and when I select a folder with multiple files, or multiple documents, and try to right-click or tap the mousepad with two fingers, I do not get a "compress" option available. Help.

    I don't know why, but when I select a folder with multiple docs within it, or multiple files, the Compress option is grayed out under the File menu, and it is not available when I right-click either. Please help.

    Yes it is, but I have the same problem: the option does not appear with right-click (I'm actually Control-clicking), and Compress is grayed out under the File menu. I'd like to fix it.
    Hmm. After finding the Archive Utility,* I tried archiving the folder. It came out as .cpgz, ugh. Found there was a preference to make .zip the default. It worked!
    And now… the menus are not grayed out. ?!?! OK.
    Hope this helps someone.
    *You cannot find this via search; apparently it is a hidden file - you have to click from the root drive through to /System/Library/CoreServices. (Thanks for the file path, Alberto!)

  • Running reports via command-line with /batch option

    Hi,
    I have created a batch file which runs, and exports the results of 7 different discoverer reports with /batch option.
    Contents of batch file:
    c:\orant\discvr4\dis4usr /cmdfile "H:\Projects\DRP Import Modelling\automation\DRP Model Input1.txt"
    c:\orant\discvr4\dis4usr /cmdfile "H:\Projects\DRP Import Modelling\automation\DRP Model Input2.txt"
    c:\orant\discvr4\dis4usr /cmdfile "H:\Projects\DRP Import Modelling\automation\DRP Model Input3.txt"
    c:\orant\discvr4\dis4usr /cmdfile "H:\Projects\DRP Import Modelling\automation\DRP Model Input4.txt"
    c:\orant\discvr4\dis4usr /cmdfile "H:\Projects\DRP Import Modelling\automation\DRP Model Input5.txt"
    c:\orant\discvr4\dis4usr /cmdfile "H:\Projects\DRP Import Modelling\automation\DRP Model Input6.txt"
    c:\orant\discvr4\dis4usr /cmdfile "H:\Projects\DRP Import Modelling\automation\DRP Model Input7.txt"
    Sample of a cmd file the batch file runs:
    /connect user/password@oraprd /apps_responsibility "BIS Super User"
    /open "H:\Projects\DRP Import Modelling\Dev\Intransit Extract.dis"
    /sheet 1 /export xls "H:\Projects\DRP Import Modelling\automation\Intransit Input.xls" /batch
    The batch file runs ok and processes each report in sequence, and creates the excel export one after the other.
    However, I need to process the 7 reports in parallel, so I changed the batch file to run each in its own process:
    start c:\orant\discvr4\dis4usr /cmdfile "H:\Projects\DRP Import Modelling\automation\DRP Model Input1.txt"
    start c:\orant\discvr4\dis4usr /cmdfile "H:\Projects\DRP Import Modelling\automation\DRP Model Input2.txt"
    start c:\orant\discvr4\dis4usr /cmdfile "H:\Projects\DRP Import Modelling\automation\DRP Model Input3.txt"
    start c:\orant\discvr4\dis4usr /cmdfile "H:\Projects\DRP Import Modelling\automation\DRP Model Input4.txt"
    start c:\orant\discvr4\dis4usr /cmdfile "H:\Projects\DRP Import Modelling\automation\DRP Model Input5.txt"
    start c:\orant\discvr4\dis4usr /cmdfile "H:\Projects\DRP Import Modelling\automation\DRP Model Input6.txt"
    start c:\orant\discvr4\dis4usr /cmdfile "H:\Projects\DRP Import Modelling\automation\DRP Model Input7.txt"
    The reports kick off at the same time, and I can see 7 dis4usr.exe processes running via Task Manager.
    Problem:
    The first report runs and exports OK.
    All the other dis4usr.exe processes stop with the following error pop-up:
    ! Could not open file. OK
    Question:
    Can you only process reports sequentially via command-line mode with the /batch option?
    If not... what am I missing? It's driving me insane :)
    All assistance is greatly appreciated. Thanks

    Ladies and gentlemen... I have solved my problemo!
    The reason the rest of the reports were getting the 'could not open file.' error was that each Discoverer instance launched was trying to access the same standard log file simultaneously.
    I modified each cmd file to write logging info to an individual file, e.g.:
    /logfile "H:\Projects\DRP Import Modelling\Automation\input2_log.txt"
    Now that there's no contention with logging, the reports are firing off beautifully in parallel :)
    I am definitely having a beer this evening!
    Thanks for everyone's input...kept me on the righteous path :)

  • Script combining menu command with options bar option?

    Would like to call the Transform:Scale command AND activate the "Maintain aspect ratio" option from the options bar.
    Any idea? thx.

    This should call the transform command with the aspect ratio linked. And you should be able to use it in a Configurator button.
    function transformDialog() {
        var desc = new ActionDescriptor();
            var ref = new ActionReference();
            ref.putEnumerated( charIDToTypeID('Lyr '), charIDToTypeID('Ordn'), charIDToTypeID('Trgt') );
        desc.putReference( charIDToTypeID('null'), ref );
        desc.putEnumerated( charIDToTypeID('FTcs'), charIDToTypeID('QCSt'), charIDToTypeID('Qcsa') );
            var desc1 = new ActionDescriptor();
            desc1.putUnitDouble( charIDToTypeID('Hrzn'), charIDToTypeID('#Pxl'), 0.000000 );
            desc1.putUnitDouble( charIDToTypeID('Vrtc'), charIDToTypeID('#Pxl'), 0.000000 );
        desc.putObject( charIDToTypeID('Ofst'), charIDToTypeID('Ofst'), desc1 );
        desc.putUnitDouble( charIDToTypeID('Wdth'), charIDToTypeID('#Prc'), 100.000000 );
        desc.putUnitDouble( charIDToTypeID('Hght'), charIDToTypeID('#Prc'), 100.000000 );
        desc.putBoolean( charIDToTypeID('Lnkd'), true );
        desc.putEnumerated( charIDToTypeID('Intr'), charIDToTypeID('Intp'), charIDToTypeID('Bcbc') );
        try{
            executeAction( charIDToTypeID('Trnf'), desc, DialogModes.ALL );
        }catch(e){}
    }
    transformDialog();

  • Repost: TAR command issue

    I am moving this item to its own post because it turned out not to be related to the original item.
    I have been having a problem when attempting to create a VM using the VMs provided by JumpBox (www.jumpbox.com). It gets stuck untarring the data drive - or any tar, for that matter. My hypervisor system locks up completely when I simply try to untar the file at the command line, requiring a hard power reset of the server to recover.
    The file is a 10 GB compressed file, which on my other (standard Oracle Linux) box decompresses in 1 second. I can then transfer the file using sftp to the Oracle VM server and create the VM with no problems.
    I have not retested since changing the /OVS partition to sync. Could leaving this as the default have been the problem?

    The problem with the tar command was the mounting of the file system with the sync option. After retesting, the I/O to the disk is very, very slow: uncompressing a 10 GB virtual disk file takes roughly 1 hour on the VM server and 3 seconds on my regular Linux box. Any ideas on what I can do to increase the performance?
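The cost of synchronous writes described above can be reproduced in miniature with dd: forcing every block to disk (which is roughly what a sync mount does to ordinary writes) is far slower than buffered writing. A rough sketch, assuming GNU dd with oflag support:

```shell
# Illustrate the sync-vs-buffered write gap. Wrap each call in `time`
# to see the difference: the dsync variant forces every 4 KB block to
# disk before continuing, the buffered one lets the kernel batch writes.
demo_sync_cost() {
    dir=$1
    dd if=/dev/zero of="$dir/buffered" bs=4k count=256 2>/dev/null
    dd if=/dev/zero of="$dir/synced" bs=4k count=256 oflag=dsync 2>/dev/null
}
```

If the numbers confirm sync is the bottleneck, remounting /OVS without the sync option (where safe for your Oracle VM release) avoids that cost.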

  • Using tar command to copy faster

    Hi,
    I want to copy the apps directories and subdirectories to another location on the same machine for cloning purposes.
    I know this can be achieved using the tar command.
    Will you please send me the tar command to archive and copy at the same time using a pipe "|"?
    Thanks,
    Dnyanesh

    Hi Funky,
    Permissions are assigned based on uid, not username. When you unpack the tar file with the 'p' option as root, the files will have the same uid on the target system as they did on the source system. In this case, preserving permissions for 'root' should not be a problem. The uid for root (0) does not usually change. The complication is with the uid values for non-root users.
    As far as preserving the uid for non-root users (like oracle or applmgr)...mostly I just try to keep these in sync across my systems as much as possible. :) For example, although the applmgr user on my production system is 'applprod', and on my test system is 'appltest', they both have the same uid (1003). That way, when I unpack a tarball from my production system on my test system, all the files that were owned by 'applprod' show as being owned by 'appltest'.
    In your example, if user 'xxx' on production and user 'yyy' on the clone server had different uids (for example, 1003 and 1004), the easiest thing to do is to try changing the uid of user 'yyy' on the clone system:
    usermod -u 1003 yyy
    This will only work, of course, if there is no other user on your clone/target system with uid 1003. :-)
    If you do have another user on your target system with uid 1003, then when you unpack the tar file on the target, your files will definitely have the wrong owner. You can still change that without touching root-owned files by using chown to change just the files owned by the "wrong" user:
    chown -R --from wronguser correctuser $ORACLE_HOME
    If your Linux doesn't support the --from syntax for chown (I primarily use SUSE/SLES and don't have my Redhat VM handy to check), you could also use find:
    find $ORACLE_HOME -user wronguser -exec chown correctuser {} \;
    Again, the point of the chown and find commands is that they'll leave the files that are supposed to be owned by root untouched.
    Sorry for the long post, I tried to go for comprehensive instead of brief this time. :-)
    Regards,
    John P.
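For reference, the archive-and-copy-in-one-step pipe that the original question asked about is conventionally written as two tar processes joined by "|". A minimal sketch (the paths are illustrative; run it as root so ownership survives, per the uid discussion above):

```shell
# Copy a directory tree in one pipeline: the first tar streams the
# archive to stdout, the second unpacks it from stdin at the target.
copy_tree() {
    src=$1
    dest=$2
    mkdir -p "$dest"
    # p preserves permission bits; uid-based ownership is preserved
    # only when the extracting tar runs as root.
    (cd "$src" && tar cf - .) | (cd "$dest" && tar xpf -)
}
```

e.g. copy_tree /u01/apps /u02/apps_clone (hypothetical paths).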

  • Does Mac OS X v10.8.5's Preview have a compression option for JP(E)G images?

    Hello.
    Does Mac OS X v10.8.5's Preview have a compression option for JP(E)G images? I did not see an option for it.
    Thank you in advance.

    Always make and work with a copy of your original image.
    .jpg is a 'lossy' compression, so once saved to a smaller file size, the detail cannot be regained by attempting to re-save the smaller image back to a larger file size.
    In OS X 10.8.5, using Preview, I used File > Export (but you can also use Save As, revealed via the Duplicate command in the File menu).
    The original digital image was 6.7MB, created with a digital camera at 4000 pixels wide x 3000 pixels height.
    Using the slider as shown above to change the Quality setting in increments of 2 'bars' here are the file size results.
    Example image saved at '2' quality (smallest file size):
    A very 'busy' scene, saved at the smallest file size, might be OK for casual web use, such as social media posts.
    Viewed at magnification, the artifacts, loss of detail, and digital 'noise' become apparent.

  • Accessing tape device from Terminal program using UNIX tar command

    I am trying to create some tape backups of very large files (.dmg images) using the UNIX tar command.
    I have a 72GB HP 4mm DAT USB drive, which shows up in the system profiler as:
    DAT72 USB Tape:
    Version: 30.30
    Bus Power (mA): 500
    Speed: Up to 480 Mb/sec
    Manufacturer: Hewlett Packard
    Product ID: 0x0125
    Serial Number: 4855310528334645
    Vendor ID: 0x03f0
    But I do not see the device in the /dev/ directory (see below).
    Is there a way I can determine the UNIX device name?
    Or must I locate and load a driver for this DAT drive?
    Thank you
    Jeff Cameron
    Last login: Mon Jun 9 20:14:36 on ttys000
    caladan:~ jeff$ ls /dev/
    appleAE0 ptyq7 ptyuc ttyp0 ttyt4
    autofs ptyq8 ptyud ttyp1 ttyt5
    autofs_control ptyq9 ptyue ttyp2 ttyt6
    autofs_nowait ptyqa ptyuf ttyp3 ttyt7
    bpf0 ptyqb ptyv0 ttyp4 ttyt8
    bpf1 ptyqc ptyv1 ttyp5 ttyt9
    bpf2 ptyqd ptyv2 ttyp6 ttyta
    bpf3 ptyqe ptyv3 ttyp7 ttytb
    console ptyqf ptyv4 ttyp8 ttytc
    cu.Bluetooth-Modem ptyr0 ptyv5 ttyp9 ttytd
    cu.Bluetooth-PDA-Sync ptyr1 ptyv6 ttypa ttyte
    cu.Palm ptyr2 ptyv7 ttypb ttytf
    cu.modem ptyr3 ptyv8 ttypc ttyu0
    disk0 ptyr4 ptyv9 ttypd ttyu1
    disk0s1 ptyr5 ptyva ttype ttyu2
    disk0s3 ptyr6 ptyvb ttypf ttyu3
    disk1 ptyr7 ptyvc ttyq0 ttyu4
    disk1s1 ptyr8 ptyvd ttyq1 ttyu5
    disk1s3 ptyr9 ptyve ttyq2 ttyu6
    disk1s5 ptyra ptyvf ttyq3 ttyu7
    disk2 ptyrb ptyw0 ttyq4 ttyu8
    disk2s1 ptyrc ptyw1 ttyq5 ttyu9
    disk2s10 ptyrd ptyw2 ttyq6 ttyua
    disk2s11 ptyre ptyw3 ttyq7 ttyub
    disk2s2 ptyrf ptyw4 ttyq8 ttyuc
    disk2s3 ptys0 ptyw5 ttyq9 ttyud
    disk2s4 ptys1 ptyw6 ttyqa ttyue
    disk2s5 ptys2 ptyw7 ttyqb ttyuf
    disk2s6 ptys3 ptyw8 ttyqc ttyv0
    disk2s7 ptys4 ptyw9 ttyqd ttyv1
    disk2s8 ptys5 ptywa ttyqe ttyv2
    disk3 ptys6 ptywb ttyqf ttyv3
    disk3s1 ptys7 ptywc ttyr0 ttyv4
    disk3s3 ptys8 ptywd ttyr1 ttyv5
    disk3s5 ptys9 ptywe ttyr2 ttyv6
    dtrace ptysa ptywf ttyr3 ttyv7
    dtracehelper ptysb random ttyr4 ttyv8
    fbt ptysc rdisk0 ttyr5 ttyv9
    fd ptysd rdisk0s1 ttyr6 ttyva
    fsevents ptyse rdisk0s3 ttyr7 ttyvb
    klog ptysf rdisk1 ttyr8 ttyvc
    lockstat ptyt0 rdisk1s1 ttyr9 ttyvd
    machtrace ptyt1 rdisk1s3 ttyra ttyve
    null ptyt2 rdisk1s5 ttyrb ttyvf
    profile ptyt3 rdisk2 ttyrc ttyw0
    ptmx ptyt4 rdisk2s1 ttyrd ttyw1
    ptyp0 ptyt5 rdisk2s10 ttyre ttyw2
    ptyp1 ptyt6 rdisk2s11 ttyrf ttyw3
    ptyp2 ptyt7 rdisk2s2 ttys0 ttyw4
    ptyp3 ptyt8 rdisk2s3 ttys000 ttyw5
    ptyp4 ptyt9 rdisk2s4 ttys1 ttyw6
    ptyp5 ptyta rdisk2s5 ttys2 ttyw7
    ptyp6 ptytb rdisk2s6 ttys3 ttyw8
    ptyp7 ptytc rdisk2s7 ttys4 ttyw9
    ptyp8 ptytd rdisk2s8 ttys5 ttywa
    ptyp9 ptyte rdisk3 ttys6 ttywb
    ptypa ptytf rdisk3s1 ttys7 ttywc
    ptypb ptyu0 rdisk3s3 ttys8 ttywd
    ptypc ptyu1 rdisk3s5 ttys9 ttywe
    ptypd ptyu2 sdt ttysa ttywf
    ptype ptyu3 stderr ttysb urandom
    ptypf ptyu4 stdin ttysc vn0
    ptyq0 ptyu5 stdout ttysd vn1
    ptyq1 ptyu6 systrace ttyse vn2
    ptyq2 ptyu7 tty ttysf vn3
    ptyq3 ptyu8 tty.Bluetooth-Modem ttyt0 zero
    ptyq4 ptyu9 tty.Bluetooth-PDA-Sync ttyt1
    ptyq5 ptyua tty.Palm ttyt2
    ptyq6 ptyub tty.modem ttyt3
    caladan:~ jeff$ logout

    I downloaded the BRU LE 30-day evaluation, and my first impression is favorable. I still need to perform a bit of testing. The HP drive I have does 36 GB per tape (72 with 50% hardware compression). My first attempt was to back up a single 22 GB .dmg file. Even with no compression it should have fit on one DAT tape, but after backing up 90% of the 22 GB it asked for a second tape. I still have to do more testing, but I very much appreciate the pointer in a very promising direction.
    I will reply again to this thread with my results.
    Jeff Cameron

  • Can the Terminal add a user(with all options) on a single line?

    Howdy All,
    Can the Mac Terminal add a user, with all desired options, on a single line? If so, can I get an example of such a single terminal line to work from?
    Thanks

    Thanks Drew! You have some great thoughts there. Perhaps I should provide a clearer picture of the environment at each of the four high schools. All classrooms have a locked-down Windows environment (there are no Macs anywhere in any classrooms); the mini-tower will be the only Mac on each high school campus. We're in Dell country. The desktops are locked such that only the ghosted/installed software on each hard drive works and no other software can be installed. USB sticks can't run any exe, jar, or other executable file; even the other drives on the network can't run them. This was set up to prevent students from bringing games into the environment and running them.
    Only the single classroom in each high school where Web Tech is taught has an FTP tool installed. Students cannot bring and use their own laptops either, so there really isn't any stray FTP activity happening. You cannot FTP out of the network! And there is only the one Mac available per campus. All this certainly makes it easier to maintain.
    I am the only user in the school district they let run a personal laptop... mine has Ubuntu (tweaked). My knowledge of Linux has led them to assign this project to me. They are MS-based entirely, and so is their knowledge base.
    I've got to teach the other 4 teachers how to manage the mini-tower.  Basically it is this:
         Start and stop Apache2,
         Start and stop vsftp,
         Add ftp users (students will only have the directory /Library/Server/Web/Data/Sites/Default/studentlastname.firstname). Students will not log on to the mini-tower directly as a normal user would, but only through ftp (i.e. no /home directory),
         Remove/delete students as needed
         (I'll pre-install cgi-bin scripts on these to keep it simpler on the other teachers and myself.)
    With this in mind, you can see why a basic script or single command line to install these students would be great! I want to keep it simple to avoid mistakes the teachers might make adding users. Once I understand adding a user (in a terminal) on a Mac better, I may write a script to make it happen for them.
    I'd also like to understand removing/deleting a student/user better, to ensure everything is gone when executed.
    I hope this clears things up. Any help is much appreciated. THANKS!!!
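As a sketch of the 'single line' being asked for: on OS X, a local user record can be created from Terminal with a chain of dscl commands joined by &&, which keeps the whole thing on one line. The username, UID, and home directory below are hypothetical, and this helper only prints the command so it can be reviewed before running it with root privileges:

```shell
# Hypothetical helper: emits (but does not execute) a one-line dscl
# sequence that creates a local user record. Adjust the attributes
# (UID, group, shell, home) to your environment before running it.
make_adduser_cmd() {
    user=$1; uid=$2; home=$3
    echo "sudo dscl . -create /Users/$user && \
sudo dscl . -create /Users/$user UniqueID $uid && \
sudo dscl . -create /Users/$user PrimaryGroupID 20 && \
sudo dscl . -create /Users/$user UserShell /usr/bin/false && \
sudo dscl . -create /Users/$user NFSHomeDirectory $home"
}
make_adduser_cmd student.jane 1101 /Library/Server/Web/Data/Sites/Default/jane
```

A password would still need to be set separately (sudo dscl . -passwd /Users/name), and /usr/bin/false as the shell fits the FTP-only accounts described above.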

  • In my MacBook Pro, finder is crashing after logging in. It is showing crash report with Relaunch option, when i click on Relaunch option finder is working properly but i am unable to see top bar(Apple, Date, Wifi). Even in safari address bar not accepting

    In my MacBook Pro (10.7.5), Finder is crashing after logging in. It shows a crash report with a Relaunch option; when I click Relaunch, Finder works properly, but I am unable to see the top bar (Apple logo, date, Wi-Fi, etc.). Also, the Safari address bar and TextEdit are not accepting any text: when I type, nothing shows up, and I am unable to operate anything from the keyboard. Please help.

    One thing you can try is installing a 'fresh' version of OS X Lion. Boot into your Recovery partition (holding down the Command and R keys while booting) and elect to install OS X from the Recovery screen. You need not erase your hard drive, and you should not lose any of your data.
    Oh, and just as a precaution, I would use Disk Utility, once you're in Recovery mode, to verify your hard drive before trying to install the OS again.
    Clinton

  • Mapping problem with compressed key update record

    Hi, could you please advise?
    I'm getting the following problem:
    About a week ago the replicat abended with an "Error in mapping" error. I found in the discard file a record looking like:
    filed1 = NULL
    field2 =
    field3 =
    field4 =
    field5 =
    datefield = -04-09 00:00:00
    field6 =
    field8 =
    field9 = NULL
    field10 =
    Where filed9 = @GETENV("GGHEADER", "COMMITTIMESTAMP") and field10 = @GETENV("GGHEADER", "COMMITTIMESTAMP"); the others are table fields mapped by USEDEFAULTS.
    So I got "Mapping problem with compressed key update record" at 2012-06-01 15:44.
    I guess I need to mention that the extract failed 5 minutes before it with: VAM function VAMRead returned unexpected result: error 600 - VAM Client Report <[CFileInfo::Read] Timeout expired after 10 retries with 1000 ms delay, waiting to read transaction log or backup files. To increase the number of retries, use SETENV (GGS_CacheRetryCount = n) in Extract parameter file. To control retry delay time, use SETENV (GGS_CacheRetryDelay = n). handle: 0000000000000398 ReadFile GetLastError:997 Wait GetLastError:997>.
    I don't know if it has the same source as the data corruption; could you tell me if it does?
    Well, I created a new extract, starting 2012-06-01 15:30, to check if there was something wrong with the extract at that time, but got the same error.
    If I run the extract beginning at 15:52, it starts and works.
    But well, I got another one today. The data didn't look that bad, but one column still came with a null value :( And I'm using it as a key column, so I got "Mapping problem with compressed key update record" again :(
    I'm replicating from SQL Server 2008 to Oracle 11g.
    I'm actually using NOCOMPRESSUPDATES in Extract.
    CDC is enabled for all replicated tables. The only thing is that it is enabled not by the ADD TRANDATA command but by SQL Server's sys.sp_cdc_enable_table - does that matter?
    Could you please advise why this happens?

    Well, the problem begins somewhere in the extract or before the extract, maybe in the transaction log, I don't know :(
    Here are extract parameters:
    EXTRACT ETCHECK
    TRANLOGOPTIONS MANAGESECONDARYTRUNCATIONPOINT
    SOURCEDB TEST, USERID **, PASSWORD *****
    exttrail ./dirdat/ec
    NOCOMPRESSUPDATES
    NOCOMPRESSDELETES
    TABLE tst.table1, COLS (field1, field2, field3, field4, field5, field6, field7, field8 );
    TABLE tst.table2, COLS (field1, field2, field3, field4 );
    Data pump:
    EXTRACT DTCHECK
    SOURCEDB TEST, USERID **, PASSWORD *****
    RMTHOST ***, MGRPORT 7809
    RMTTRAIL ./dirdat/dc
    TABLE tst.table1;
    TABLE tst.table2;
    Replicat:
    REPLICAT rtcheck
    USERID tst, PASSWORD ***
    DISCARDFILE ./dirrpt/rtcheck.txt, PURGE
    SOURCEDEFS ./dirdef/sourcei.def
    HANDLECOLLISIONS
    UPDATEDELETES
    MAP tst.table1, t.table1, COLMAP (USEDEFAULTS , filed9 = @GETENV("GGHEADER", "COMMITTIMESTAMP"), filed10= @CASE(@GETENV("GGHEADER", "OPTYPE"), "SQL COMPUPDATE", "U", "PK UPDATE", "U",@GETENV("GGHEADER", "OPTYPE")) ), KEYCOLS (field3);
    MAP dbo.TPROCPERIODCONFIRMSTAV, TARGET R_019_000001.TPROCPERIODCONFIRMSTAV, COLMAP (USEDEFAULTS , field5 = @GETENV("GGHEADER", "COMMITTIMESTAMP"), filed6= @CASE(@GETENV("GGHEADER", "OPTYPE"), "SQL COMPUPDATE", "U", "PK UPDATE", "U",@GETENV("GGHEADER", "OPTYPE")) ), KEYCOLS (filed1, field2, field3);
    Rpt file for replicat:
    Oracle GoldenGate Delivery for Oracle
    Version 11.1.1.1 OGGCORE_11.1.1_PLATFORMS_110421.2040
    Windows x64 (optimized), Oracle 11g on Apr 22 2011 00:34:07
    Copyright (C) 1995, 2011, Oracle and/or its affiliates. All rights reserved.
    Starting at 2012-06-05 12:49:38
    Operating System Version:
    Microsoft Windows Server 2008 R2 , on x64
    Version 6.1 (Build 7601: Service Pack 1)
    Process id: 2264
    Description:
    ** Running with the following parameters **
    REPLICAT rtcheck
    USERID tst, PASSWORD ***
    DISCARDFILE ./dirrpt/rtcheck.txt, PURGE
    SOURCEDEFS ./dirdef/sourcei.def
    HANDLECOLLISIONS
    UPDATEDELETES
    MAP tst.table1, t.table1, COLMAP (USEDEFAULTS , filed9 = @GETENV("GGHEADER", "COMMITTIMESTAMP"), filed10= @CASE(@GETENV("GGHEADER", "OPTYPE"), "SQL COMPUPDATE", "U", "PK UPDATE", "U",@GETENV("GGHEADER", "OPTYPE")) ), KEYCOLS (field3);
    MAP dbo.TPROCPERIODCONFIRMSTAV, TARGET R_019_000001.TPROCPERIODCONFIRMSTAV, COLMAP (USEDEFAULTS , field5 = @GETENV("GGHEADER", "COMMITTIMESTAMP"), filed6= @CASE(@GETENV("GGHEADER", "OPTYPE"), "SQL COMPUPDATE", "U", "PK UPDATE", "U",@GETENV("GGHEADER", "OPTYPE")) ), KEYCOLS (filed1, field2, field3);
    CACHEMGR virtual memory values (may have been adjusted)
    CACHEBUFFERSIZE: 64K
    CACHESIZE: 512M
    CACHEBUFFERSIZE (soft max): 4M
    CACHEPAGEOUTSIZE (normal): 4M
    PROCESS VM AVAIL FROM OS (min): 1G
    CACHESIZEMAX (strict force to disk): 881M
    Database Version:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE     11.2.0.1.0     Production
    TNS for 64-bit Windows: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    Database Language and Character Set:
    NLS_LANG = "AMERICAN_AMERICA.CL8MSWIN1251"
    NLS_LANGUAGE = "AMERICAN"
    NLS_TERRITORY = "AMERICA"
    NLS_CHARACTERSET = "CL8MSWIN1251"
    For further information on character set settings, please refer to user manual.
    ** Run Time Messages **
    Opened trail file ./dirdat/dc000000 at 2012-06-05 12:49:39
    2012-06-05 12:58:14 INFO OGG-01020 Processed extract process RESTART_ABEND record at seq 0, rba 925 (aborted 0 records).
    MAP resolved (entry tst.table1):
    MAP tst.table1, t.table1, COLMAP (USEDEFAULTS , filed9 = @GETENV("GGHEADER", "COMMITTIMESTAMP"), filed10= @CASE(@GETENV("GGHEADER", "OPTYPE"), "SQL COMPUPDATE", "U", "PK UPDATE", "U",@GETENV("GGHEADER", "OPTYPE")) ), KEYCOLS (field3);
    2012-06-05 12:58:14 WARNING OGG-00869 No unique key is defined for table table1. All viable columns will be used to represent the key, but may not guarantee uniqueness. KEYCOLS may be used to define the key.
    Using the following default columns with matching names:
    field1=field1, field2=field2, field3=field3, field4=field4, field5=field5, field6=field6, field7=field7, field8=field8
    Using the following key columns for target table R_019_000001.TCALCULATE: field3.
    2012-06-05 12:58:14 WARNING OGG-01431 Aborted grouped transaction on 'tst.table1', Mapping error.
    2012-06-05 12:58:14 WARNING OGG-01003 Repositioning to rba 987 in seqno 0.
    2012-06-05 12:58:14 WARNING OGG-01151 Error mapping from tst.table1 to tst.table1.
    2012-06-05 12:58:14 WARNING OGG-01003 Repositioning to rba 987 in seqno 0.
    Source Context :
    SourceModule : [er.main]
    SourceID : [er/rep.c]
    SourceFunction : [take_rep_err_action]
    SourceLine : [16064]
    ThreadBacktrace : [8] elements
    : [C:\App\OGG\replicat.exe(ERCALLBACK+0x143034) [0x00000001402192B4]]
    : [C:\App\OGG\replicat.exe(ERCALLBACK+0x11dd44) [0x00000001401F3FC4]]
    : [C:\App\OGG\replicat.exe(<RCALLBACK+0x11dd44) [0x000000014009F102]]
    : [C:\App\OGG\replicat.exe(<RCALLBACK+0x11dd44) [0x00000001400B29CC]]
    : [C:\App\OGG\replicat.exe(<RCALLBACK+0x11dd44) [0x00000001400B8887]]
    : [C:\App\OGG\replicat.exe(releaseCProcessManagerInstance+0x25250) [0x000000014028F200]]
    : [C:\Windows\system32\kernel32.dll(BaseThreadInitThunk+0xd) [0x000000007720652D]]
    : [C:\Windows\SYSTEM32\ntdll.dll(RtlUserThreadStart+0x21) [0x000000007733C521]]
    2012-06-05 12:58:14 ERROR OGG-01296 Error mapping from tst.table1 to tst.table1.
    * ** Run Time Statistics ** *
    Last record for the last committed transaction is the following:
    Trail name : ./dirdat/dc000000
    Hdr-Ind : E (x45) Partition : . (x04)
    UndoFlag : . (x00) BeforeAfter: A (x41)
    RecLength : 249 (x00f9) IO Time : 2012-06-01 15:48:56.285333
    IOType : 115 (x73) OrigNode : 255 (xff)
    TransInd : . (x03) FormatType : R (x52)
    SyskeyLen : 0 (x00) Incomplete : . (x00)
    AuditRBA : 44 AuditPos : 71176199289771
    Continued : N (x00) RecCount : 1 (x01)
    2012-06-01 15:48:56.285333 GGSKeyFieldComp Len 249 RBA 987
    Name: DBO.TCALCULATE
    Reading ./dirdat/dc000000, current RBA 987, 0 records
    Report at 2012-06-05 12:58:14 (activity since 2012-06-05 12:58:14)
    From Table tst.table1 to tst.table1:
    # inserts: 0
    # updates: 0
    # deletes: 0
    # discards: 1
    Last log location read:
    FILE: ./dirdat/dc000000
    SEQNO: 0
    RBA: 987
    TIMESTAMP: 2012-06-01 15:48:56.285333
    EOF: NO
    READERR: 0
    2012-06-05 12:58:14 ERROR OGG-01668 PROCESS ABENDING.
    Discard file:
    Oracle GoldenGate Delivery for Oracle process started, group RTCHECK discard file opened: 2012-06-05 12:49:39
    Key column filed3 (0) is missing from update on table tst.table1
    Missing 1 key columns in update for table tst.table1.
    Current time: 2012-06-05 12:58:14
    Discarded record from action ABEND on error 0
    Aborting transaction on ./dirdat/dc beginning at seqno 0 rba 987
    error at seqno 0 rba 987
    Problem replicating tst.table1 to tst.table1
    Mapping problem with compressed key update record (target format)...
    filed1 = NULL
    field2 =
    field3 =
    field4 =
    field5 =
    datefield = -04-09 00:00:00
    field6 =
    field8 =
    field9 = NULL
    field10 =
    Process Abending : 2012-06-05 12:58:14
