Full db backup using expdp

Hi All,
My database version is 10.2.0.2 and I want to take a full database backup using expdp.
This is the parfile I use.
userid=system/manager
full=y
directory=exp_dmp
parallel=4
The database size is 45 GB and the expdp dumpfile comes to around 31 GB. What can be done to compress the expdp dumpfile?
Or are there other parameters I need to use?
thanks

I agree that expdp is a logical backup.
In 10g, COMPRESSION=METADATA_ONLY is the only available option, so it does not help compress the data in the backup.
Due to size restrictions I want the dumpfile compressed while the export is running. How can this be achieved?
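In 10g there is no supported way to have expdp compress table data while it runs: COMPRESSION=ALL only arrives in 11g, and the old named-pipe trick works with the legacy exp utility but not with Data Pump, because the server process writes the dump file directly through the DIRECTORY object. A common workaround, sketched below under the assumption that exp_dmp points at /u01/exp and that the Oracle environment is set, is to split the dump into pieces with FILESIZE and compress them as soon as the export finishes. Note that DUMPFILE with the %U substitution variable is needed anyway for PARALLEL=4 to write more than one file. An illustrative parfile (full_exp.par):
userid=system/manager
full=y
directory=exp_dmp
dumpfile=full_%U.dmp
filesize=5g
parallel=4
logfile=full_exp.log
Then run the export and compress afterwards:
expdp parfile=full_exp.par
gzip /u01/exp/full_*.dmp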

Similar Messages

  • Date-wise backup using expdp

    Dear Friends,
    I am using Oracle Database 10g. Every week I take a full export backup using 'expdp'; between those weekly runs I don't take any backup. Now I need the backup from two days ago. Is it possible to take a backup using Data Pump (expdp) as of two days ago?
    i.e., is there any facility in Oracle 10g Data Pump to take an export backup date-wise?
    My other question:
    Is there any incremental backup procedure available in Oracle 10g Data Pump?
    Waiting for your kind reply ...

    Hi,
    You must not treat expdp (a logical backup) as a backup that can be used to recover to an arbitrary point in the past. Please search the forums; there are many threads explaining this, and Oracle itself recommends against it.
    Go for custom backups and RMAN backups.
    - Pavan Kumar N
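    To make the incremental part concrete: Data Pump has no incremental mode, but RMAN does. A minimal sketch of what Pavan is recommending, assuming disk backups to an existing /backup directory:
    RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE FORMAT '/backup/lvl0_%U.bck';
    RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE FORMAT '/backup/lvl1_%U.bck';
    The level 0 backup is the weekly baseline; level 1 backups taken between baselines capture only the changed blocks.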

  • How to Restore Full Offline Backup using BRTOOLS from a Tape

    Dear all,
    I want to recover a PRD full offline backup using BRTOOLS from a tape on a new machine.
    I have installed SAP on the new machine.
    Now I want to restore the PRD full offline backup, taken through BRTOOLS to tape, on the new machine.
    1. I have shutdown the SAP.
    2. I have shutdown the database Oracle 10g.
    3. I have inserted the backup tape
    4. Through user ORASID I have executed BRTOOLS
    I got the following options:
    BRTools main menu
    1 = Instance management
    2 - Space management
    3 - Segment management
    4 - Backup and database copy
    5 - Restore and recovery
    6 - Check and verification
    7 - Database statistics
    8 - Additional functions
    9 - Exit program
    I selected Option 5 Restore and recovery
    And got the Following Options
    Restore and recovery
    1 = Complete database recovery
    2 - Database point-in-time recovery
    3 - Tablespace point-in-time recovery
    4 - Whole database reset
    5 - Restore of individual backup files
    6 - Restore and application of archivelog files
    7 - Disaster recovery
    8 - Reset program status
    1. Which option do I need to select now, and which options after that?
    2. Should the database be in mount state or shut down?
    Kindly help.

    Dear Mark,
    Thanks.
    As suggested I tried both options, however I faced some difficulties. Please suggest.
                                       brrestore -u / -c -b <backup_log.ext> -p init<SID>.sap -m full
    hinrnddev:oradev 1> brrestore -u / -c -b bedflluv.fft -p initDEV.sap -m full
    BR0401I BRRESTORE 7.00 (32)
    BR0405I Start of file restore: redhafeh.rsb 2010-05-22 11.55.55
    BR0484I BRRESTORE log file: /oracle/DEV/sapbackup/redhafeh.rsb
    BR0252E Function fopen() failed for '/oracle/DEV/sapbackup/bedflluv.fft' at location BrbLogRead-1
    BR0253E errno 2: No such file or directory
    BR0121E Processing of log file /oracle/DEV/sapbackup/bedflluv.fft failed
    BR0406I End of file restore: redhafeh.rsb 2010-05-22 11.55.55
    BR0280I BRRESTORE time stamp: 2010-05-22 11.55.55
    BR0404I BRRESTORE terminated with errors
    When I used the brrestore command I got the above error message.
    Then I tried the next option, Complete database recovery:
    BRRECOVER options for restore and recovery
    1 * Recovery type (type) ............. [complete]
    2 - BRRECOVER profile (profile) ...... [initDEV.sap]
    3 ~ BACKINT/Mount profile (parfile) .. []
    4 - Database user/password (user) .... [/]
    5 - Recovery interval (interval) ..... [30]
    6 - Confirmation mode (confirm) ...... [yes]
    7 - Scrolling line count (scroll) .... [20]
    8 - Message language (language) ...... [E]
    9 - BRRECOVER command line (command) . [-p initDEV.sap -t complete -i 30 -s 20 -l E]
    Standard keys: c - cont, b - back, s - stop, r - refr, h - help
    BR0662I Enter your choice:
    C
    Complete database recovery main menu
    1 = Check the status of database files
    2 * Select database backup
    3 * Restore data files
    4 * Restore and apply incremental backup
    5 * Restore and apply archivelog files
    6 * Open database and post-processing
    7 * Exit program
    8 - Reset program status
    Standard keys: c - cont, b - back, s - stop, r - refr, h - help
    BR0662I Enter your choice:
    C
    Complete database recovery main menu
    1 + Check the status of database files
    2 # Select database backup
    3 # Restore data files
    4 # Restore and apply incremental backup
    5 # Restore and apply archivelog files
    6 + Open database and post-processing
    7 = Exit program
    8 - Reset program status
    Standard keys: c - cont, b - back, s - stop, r - refr, h - help
    BR0662I Enter your choice:
    the = and * signs got changed to + and #,
    and then it keeps looping. Please suggest what I should do now.
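    One observation on the brrestore output above: BR0252E/BR0253E with errno 2 mean that the backup log bedflluv.fft is simply not present in /oracle/DEV/sapbackup on the new machine. Since this is a fresh installation, the sapbackup logs from the source system have to be copied over (or restored from tape) before brrestore can read them. A quick check, with the path taken from the log above and the source hostname being an assumption:
    ls -l /oracle/DEV/sapbackup/bedflluv.fft
    scp prd-host:/oracle/DEV/sapbackup/bedflluv.fft /oracle/DEV/sapbackup/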

  • Full Hot Backup using RMAN?

    Dear all,
    I want to take a full hot backup every week on Sunday, and the following are the commands:
    run {
      allocate channel ch1 type disk format '/db/BACKUP/RMAN/backup_%d_%t_%s_%p_%U.bck';
      backup incremental level 0 database plus archivelog delete input;
      backup current controlfile;
      backup spfile;
      release channel ch1;
    }
    I have the following questions:
    1) Do I need to create a directory '/db/BACKUP/RMAN' beforehand, so that the backup copies are put there, right?
    2) Where should this script be saved, and how do I run it on a weekly schedule?
    3) I saw the script above uses "channel ch1"; can I change it to "channel ch2"?
    I am a beginner using RMAN, please let me know.
    Best Regards,
    amy

    Which database version on which OS (looks like Unix/Linux)?
    You have to create the directory '/db/BACKUP/RMAN'; 'backup_%d...' is the filename pattern used by RMAN.
    %d Specifies the name of the database;
    %t Specifies the backup set time stamp, which is a 4-byte value derived as the number of seconds elapsed since a fixed reference time. The combination of %s and %t can be used to form a unique name for the backup set;
    %s Specifies the backup set number. This number is a counter in the control file that is incremented for each backup set. The counter value starts at 1 and is unique for the lifetime of the control file. If you restore a backup control file, then duplicate values can result. Also, CREATE CONTROLFILE initializes the counter back to 1;
    %p Specifies the piece number within the backup set. This value starts at 1 for each backup set and is incremented by 1 as each backup piece is created;
    %U Specifies a system-generated unique filename (default).
    For scheduling, a cron job could be defined, or you can use Enterprise Manager (10g and higher).
    Channel name doesn't matter.
    Werner
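    To make the weekly schedule concrete, the run block above can be saved to a command file and driven by cron; a minimal sketch, where the file path and the 02:00 Sunday slot are assumptions and the cron user is assumed to have the Oracle environment set:
    0 2 * * 0 rman target / cmdfile=/db/BACKUP/RMAN/weekly_full.rman log=/db/BACKUP/RMAN/weekly_full.log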

  • Full DB backup using RMAN

    Hi
    I have an RMAN backup taken using the "backup database" command. Does this backup include the control file, spfile, redo logs, and archive logs along with the datafiles?
    What should the backup strategy be so that the database can be recovered without any data loss?
    Thanks
    JIL

    If you CONFIGURE CONTROLFILE AUTOBACKUP ON, controlfile and spfile backups are automated. They go to the FRA (Flash Recovery Area) if you have an FRA defined; otherwise they go to $ORACLE_HOME/dbs (on Unix/Linux).
    Redo Logs are never backed up by RMAN.
    BACKUP DATABASE does not include archivelogs unless you run BACKUP DATABASE PLUS ARCHIVELOG. Alternatively, run BACKUP ARCHIVELOG ..specification.. separately after the BACKUP DATABASE.
    Please read the Oracle documentation on Backup and Recovery.
    Hemant K Chitale
    http://hemantoracledba.blogspot.com
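    A minimal sketch of the strategy Hemant describes, using standard RMAN commands:
    RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
    RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
    With autobackup on, every BACKUP command also writes a fresh controlfile/spfile backup, which is what makes the strategy recoverable without manually tracking those files.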

  • Full HD Backup using Mac OS Tiger 10.4

    Dear all,
    After a few years I've decided that I need to back up all my HD info before I lose it... again.
    I can't find any application that will do that for me, so all I can think of is to drag folders into an external HD every now and then (as this would do the job).
    However, I was wondering if:
    - A: Is there a way of getting Tiger to do this for me every now and then, so I don't have to worry about it?
    - B: If I decide to do it manually, will I be able to paste all my backed-up info (mainly iTunes and iPhoto) onto a new HD if I buy a new Mac (or my HD busts)?
    As I don't want to spend any more money on new software (say Leopard = Time Machine), I was wondering if anyone has an answer to my questions.
    Many thanks,
    geovladi

    *Hi geovladi, Welcome (Back)* to Apple's Users Help Users Forums.
    You already have some of the below but I have it preset so it's included again. The point is the purchase of SD so you can do the quick Smart BU. Relatively quick usually means more often. That's good.
    http://www.shirt-pocket.com/SuperDuper/SuperDuperDescription.html
    Purchase at $27.95 will allow smart backups that look at the bu files and only move over new ones. It's quick at ~7 mins to change ~1 gig out of 20.
    *Be sure to test that the clone boots and apps behave properly.*
    Here are other popular cloners.
    http://www.bombich.com/software/ccc.html
    http://www.prosoftengineering.com/products/drivegeniusinfo.php?PHPSESSID=909c070fb2e13b35097fa9cc1340bfc0
    Good Luck, JP

  • Error in Database backup using RMAN.

    Hi
    While taking a full online backup using RMAN I got the following error:
    ORA-19566: exceeded limit of 0 corrupt blocks for file /oracle/DEV/sapdata2/dev640_6/dev640.data6
    Please help me how to resolve this issue.

    Hi,
    Please run a DBVerify job and also analyze the alert<SID>.log file for your database to get more information about such logical or physical corruption. You may require block-level recovery for the affected data files. Please refer to this useful document " [Early Detection and Correction of Data Block Corruptions Using RMAN |http://www.ioug.org/client_files/members/select_pdf/04q4/RMAN.pdf]" and share it with your Oracle DBA.
    You can execute the following queries to get information about corrupted blocks, if there are any:
    select * from v$backup_corruption;
    select * from v$database_block_corruption;
    Regards,
    Bhavik G. Shroff
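    As a hedged illustration of the two steps Bhavik mentions: DBVerify on the complained-about file (the 8 KB block size is an assumption; check db_block_size first), and then block-level recovery in RMAN once v$database_block_corruption lists the corrupt blocks:
    dbv file=/oracle/DEV/sapdata2/dev640_6/dev640.data6 blocksize=8192
    RMAN> BLOCKRECOVER CORRUPTION LIST;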

  • Recovery using Full BCV backup & archive logs

    Hello DBAs,
    I have last Sunday's full BCV (Business Copy Volume) backup (i.e. the tablespaces were taken into backup mode, the BCV was synced and then split, the BCV was backed up, and the tablespaces were taken back to normal mode; in short, what we call a HOT BACKUP), and I am also taking only archive log and control file backups daily.
    Now I want to run my database on a different server using the above full BCV and the daily backed-up archive logs and control file.
    Kindly update me- Can I go for following recovery steps for above scenario.
    1) Install ORACLE software on new server.
    2) Restore the last Sunday's FULL BCV (Hot backup)
    3) Restore the archive log from last Sunday's Full BCV backup.
    4) Restore the latest backed up control file.
    5) Mount the database.
    6) Recover the database using backed up control file & archive logs.
    7) Open the database with RESETLOGS option.
    Please guide me: can I fully recover/start the database on this new server using the above steps? ---OR--- "Are there any more files except data files that need to be backed up daily after a full BCV?" ---OR--- More suggestions are welcome.
    I want to test this scenario because:
    a) I can't afford a daily BCV due to lack of storage with the current retention given by the business.
    b) I can't afford the long time taken by RMAN for backup and recovery, as it disturbs my other backup schedules; also the RTO should be very low as per my business requirement.
    c) Currently we are not considering any other backup options.
    OS platform -- SOLARIS 10
    Database version -- ORACLE 9i/10g
    Atul.
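    (For reference, the backup-mode sequence described in this post typically looks like the following; the tablespace name and controlfile path are illustrative, and on 10g a single ALTER DATABASE BEGIN BACKUP can replace the per-tablespace commands.)
    SQL> ALTER TABLESPACE users BEGIN BACKUP;
    (sync and split the BCV mirror here)
    SQL> ALTER TABLESPACE users END BACKUP;
    SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
    SQL> ALTER DATABASE BACKUP CONTROLFILE TO '/backup/control_bcv.ctl';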

    Hello everybody ....
    Actually I am testing the above scenario in practice, and I am facing a problem: when I apply my archive logs using 'recover database until cancel using backup controlfile', the recovery completes up to the last available archive log sequence, but it throws the following errors along with it:
    ORA-00279: change 7001791222 generated at 01/13/2008 07:23:42 needed for thread
    1
    ORA-00289: suggestion : /oracle/EPR/oraarch/1_9623_601570270.dbf
    ORA-00280: change 7001791222 for thread 1 is in sequence #9623
    ORA-00278: log file '/oracle/EPR/oraarch/1_9622_601570270.dbf' no longer needed
    for this recovery
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    ORA-00279: change 7001816252 generated at 01/13/2008 12:53:05 needed for thread
    1
    ORA-00289: suggestion : /oracle/EPR/oraarch/1_9624_601570270.dbf
    ORA-00280: change 7001816252 for thread 1 is in sequence #9624
    ORA-00278: log file '/oracle/EPR/oraarch/1_9623_601570270.dbf' no longer needed
    for this recovery
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    ORA-00308: cannot open archived log '/oracle/EPR/oraarch/1_9624_601570270.dbf'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory
    Additional information: 3
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/oracle/EPR/sapdata1/system_1/system.data1'
    SQL> SQL> SQL> SQL> SQL> SQL> SQL>
    SQL> alter database open resetlogs;
    alter database open resetlogs
    ERROR at line 1:
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/oracle/EPR/sapdata1/system_1/system.data1'
    So I opened my database using only the last (Sunday's) full hot backup, and it opened successfully without applying any archives.
    Now, my question is: why does my database not open with the OPEN RESETLOGS option when I recover it using the command "RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;"?
    I want to recover my database up to Saturday; what should I do?
    Please help me out!
    Atul.

  • Incomplete Recovery Fails using Full hot backup & Archive logs !!

    Hello DBAs!
    I am working on a recovery scenario where I have taken one full hot backup of my Portal database (EPR) and restored it on a new test server. I also restored the archive logs for the six days since that full hot backup, and I restored the latest (binary) control file to its original location. Then I started the recovery scenario as follows....
    1) Installed Oracle 10.2.0.2, matching the restored version of Oracle.
    2) Configured tnsnames.ora, listener.ora, sqlnet.ora with hostname of Test server.
    3) Restored all Hot backup files from Tape to Test Server.
    4) Restored all archive logs from tape to Test server.
    5) Restored Latest Binary Control file from Tape to Test Server.
    6) Now, Started recovery using following command from SQL prompt.
    SQL> recover database until cancel using backup controlfile;
    7) Open database after Recovery Completion using RESETLOGS option.
    In the above scenario I completed the steps up to 5) successfully. But when I executed step 6), the recovery completed with the warning that OPEN RESETLOGS would get the error "file needs more recovery to be consistent". Please find the following snapshot:
    ORA-00279: change 7001816252 generated at 01/13/2008 12:53:05 needed for thread
    1
    ORA-00289: suggestion : /oracle/EPR/oraarch/1_9624_601570270.dbf
    ORA-00280: change 7001816252 for thread 1 is in sequence #9624
    ORA-00278: log file '/oracle/EPR/oraarch/1_9623_601570270.dbf' no longer needed
    for this recovery
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    ORA-00308: cannot open archived log '/oracle/EPR/oraarch/1_9624_601570270.dbf'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory
    Additional information: 3
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/oracle/EPR/sapdata1/system_1/system.data1'
    SQL> SQL> SQL> SQL> SQL> SQL> SQL>
    SQL> alter database open resetlogs;
    alter database open resetlogs
    ERROR at line 1:
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/oracle/EPR/sapdata1/system_1/system.data1'
    Let me know what could be the reason behind the recovery failure.
    Note: I tried to open the database using only the last full hot backup, without applying any archives, and the database opened successfully. So my database installation and configuration are OK.
    Please let me know why my incomplete recovery using archive logs fails.
    Atul Patil.

    oh you made up a new thread so here again:
    there is nothing wrong.
    You restored your backup, archives etc.
    You started your recovery and Oracle applied all the archives, but the archive
    '/oracle/EPR/oraarch/1_9624_601570270.dbf'
    does not exist, because it represents your current online redo log file, and that is not present.
    The recovery process cancels by itself.
    The solution is: restart your recovery process with
    recover database until cancel using backup controlfile
    and when Oracle suggests '/oracle/EPR/oraarch/1_9624_601570270.dbf',
    type CANCEL!
    Now you should be able to open your database with open resetlogs.
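    A minimal transcript of the sequence just described, with the suggested log name taken from the error output above:
    SQL> RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
    ORA-00289: suggestion : /oracle/EPR/oraarch/1_9624_601570270.dbf
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    CANCEL
    SQL> ALTER DATABASE OPEN RESETLOGS;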

  • Incremental and Full backups using WBADMIN and Task Scheduler in Server 2008 R2

    I'd like to create an automated rotating schedule of backups using wbadmin and Task Scheduler, which would back up Bare Metal Recovery, System State, and drives C: and D: to a network share in a pattern like this:
    Monday - Incremental, overwrite last Monday's
    Tuesday - Incremental, overwrite last Tuesday's
    Wednesday - Incremental, overwrite last Wednesday's
    Thursday - Incremental, overwrite last Thursday's
    Friday - Incremental overwrite last Friday's
    Saturday - Full, overwrite last Saturday's
    I need to use the wbadmin commands within the Task Scheduler and do not know the required syntax to make sure everything goes smoothly; I do not want to do this through the CMD.

    I know each backup for the previous corresponding day will be replaced; how do you figure I won't be able to do incremental backups?
    Because incremental backup is based on the Volume Shadow Copy (VSS) feature, and due to a Windows Server 2008 R2 limitation (resolved in Windows 8) only one version of backed-up data can be stored in a shared folder. So the result is that every time you back up some data to a shared folder, you are actually creating a full backup of it.
    Is it not supported through Task Scheduler?
    The Task Scheduler is only a feature that runs the tasks you have defined for it. It actually runs the wbadmin command, which runs on an operating system with the mentioned limitation.
    I know you can do incremental backups through Windows Server Backup, but my limitation using that is I can't set up multiple backups.
    Yes, you are right. The Windows Server Backup feature in Windows Server 2008/2008 R2 does not have this functionality (although ntbackup in Windows XP and Windows Server 2003 did). So the only workaround to this limitation is using the Task Scheduler feature with the wbadmin command. For more information see the following article:
    http://blogs.technet.com/b/filecab/archive/2009/04/13/customizing-windows-server-backup-schedule.aspx
    So are you saying that even though I want each backup to go to a different place on the shared folder, it will replace the previous backup anyway?
    No, and because of this I said in my previous post that with some modifications and additions you can do this scenario. For example, you back up to a shared folder named Shared1 on Mondays, and you have also configured backup to another shared folder, named Shared2, on Wednesdays. When you repeat the backup operation to Shared1, only the backed-up data residing on it will be affected; the data on Shared2 remains intact.
    Please feel free to let us know if you have any question or concern.
    Hi R.Alikhani,
    Then do you know if wbadmin supports incremental backup in Windows 8? As you said, the VSS issue is fixed in Windows 8; however, wbadmin has fewer options than in Windows Server. I tried a bit but it seems it only supports full backup? PS: I use a network share - will incremental backup work if I define an iSCSI target instead? My remote backup PC is also running Windows 8.
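    For reference, a wbadmin invocation of the kind discussed in this thread might look as follows when run from a scheduled task; the share path is an assumption, and -allCritical covers the bare-metal-recovery set:
    wbadmin start backup -backupTarget:\\backupserver\Shared1 -include:C:,D: -allCritical -systemState -quiet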

  • Can we take backup of external table using expdp

    Can we take a backup of an external table using expdp? Please suggest any doc.
    thanks in advance.

    @Fahd
    About the ACCESS_METHOD parameter:
    - It is an undocumented Data Pump parameter and should be used only when requested by Oracle Support.
    - Its purpose is to specify the loading/unloading method, not to filter object types during the export or import.
    - If this parameter is not specified, Data Pump will automatically choose the best method to load or unload the data.
    So, the external tables method is used by Data Pump if the data cannot be moved by the default method for loading/unloading data, which is direct path mode, or if parallel SQL can be used to speed up the data movement even more.

  • TC full, had to delete backups using Airport Utility. Now connection failure. Any hope?

    Recently discovered that the TC hadn't backed up either the laptop or the desktop since October due to being full (used Time Machine Buddy to figure this out). I tried deleting some of the oldest backups for both the laptop and desktop but it gave me minimal room, so I bit the bullet and tried deleting all of the backups for my desktop. Although the TC then had around 400 GB free, neither computer would back up, so I deleted all of the backups using AirPort Utility.
    Now, neither computer will connect to the TC.
    Message: Connection failed - The server "TC-Time-Capsule.local" may not exist or it is unavailable at this time. Check the server name or IP address, check your network connection, and then try again.
    The network is fine, I've done a check through Airport Utility and everything is green. The desktop is connected via Ethernet but still gives me the same error message. If I click on it in the Finder, it says "Connection Failed". If I click on "Connect As..." I get the same error message above.
    Has my TC reached its useful life as a backup or is there a solution?
    Desktop: 20" iMac running Snow Leopard
    Laptop: MacBook Pro running Snow Leopard

    Try re-selecting the Time Capsule via Time Machine Preferences.
    If that doesn't help, try a "full reset" per #A4 in Time Machine - Troubleshooting.
    If still no help, try resetting the TC per http://support.apple.com/kb/ht3728
    Once you get it working on the iMac, leave it connected via Ethernet for the first, full backup, but do not try to back up the laptop yet.
    When the iMac's backup is done, connect the laptop via Ethernet to do its first, full backup.
    It sounds like there was some other problem with the earlier backups; you should have received some messages about why TM couldn't back up.
    How much data, in total, is on your 2 Macs? If your Time Capsule isn't at least twice that size, consider getting an external HD, connecting it to the iMac, and backing it up that way (that will be much faster and more reliable, too).

  • Recreate Standby using full datafile backup

    Dear All,
    I have to recreate my standby database due to a huge archive gap. I have a full datafile backup of the primary. Please help me.
    Thank you

    Dear CKPT,
    Thanx for your advice..
    At Primary Site
    ==========
    1. select max(sequence#) from v$archived_log;
    28595
    2. select severity,error_code,message,timestamp from v$dataguard_status where dest_id=2;
    Error     12541     FAL[server, ARCb]: Error 12541 creating remote archivelog file 'cstdby'     12/6/2012 12:44:06 PM
    Error     12541     FAL[server, ARCk]: Error 12541 creating remote archivelog file 'cstdby'     12/6/2012 12:44:06 PM
    Error     12541     FAL[server, ARCf]: Error 12541 creating remote archivelog file 'cstdby'     12/6/2012 12:44:06 PM
    Error     12541     FAL[server, ARCs]: Error 12541 creating remote archivelog file 'cstdby'     12/6/2012 12:44:06 PM
    Error     12541     FAL[server, ARCq]: Error 12541 creating remote archivelog file 'cstdby'     12/6/2012 12:44:06 PM
    Error     12541     FAL[server, ARCi]: Error 12541 creating remote archivelog file 'cstdby'     12/6/2012 12:44:06 PM
    Error     12541     FAL[server, ARC5]: Error 12541 creating remote archivelog file 'cstdby'     12/6/2012 12:44:06 PM
    Error     12541     FAL[server, ARCj]: Error 12541 creating remote archivelog file 'cstdby'     12/6/2012 12:44:06 PM
    Error     12541     FAL[server, ARCp]: Error 12541 creating remote archivelog file 'cstdby'     12/6/2012 12:44:06 PM
    Error     12541     FAL[server, ARCm]: Error 12541 creating remote archivelog file 'cstdby'     12/6/2012 12:44:06 PM
    Error     12541     FAL[server, ARCd]: Error 12541 creating remote archivelog file 'cstdby'     12/6/2012 12:44:06 PM
    Error     12541     FAL[server, ARCl]: Error 12541 creating remote archivelog file 'cstdby'     12/6/2012 12:44:06 PM
    Error     12541     FAL[server, ARCg]: Error 12541 creating remote archivelog file 'cstdby'     12/6/2012 12:44:06 PM
    Error     12541     FAL[server, ARCt]: Error 12541 creating remote archivelog file 'cstdby'     12/6/2012 12:44:06 PM
    Error     12541     FAL[server, ARCo]: Error 12541 creating remote archivelog file 'cstdby'     12/6/2012 12:44:06 PM
    Error     12541     FAL[server, ARC4]: Error 12541 creating remote archivelog file 'cstdby'     12/6/2012 12:44:06 PM
    Error     12541     FAL[server, ARC3]: Error 12541 creating remote archivelog file 'cstdby'     12/6/2012 12:44:06 PM
    Error     12541     FAL[server, ARC8]: Error 12541 creating remote archivelog file 'cstdby'     12/6/2012 12:44:06 PM
    Error     12541     FAL[server, ARC6]: Error 12541 creating remote archivelog file 'cstdby'     12/6/2012 12:44:06 PM
    Error     12541     FAL[server, ARC0]: Error 12541 creating remote archivelog file 'cstdby'     12/6/2012 12:44:06 PM
    Error     12541     FAL[server, ARCn]: Error 12541 creating remote archivelog file 'cstdby'     12/6/2012 12:44:06 PM
    Error     12541     FAL[server, ARC1]: Error 12541 creating remote archivelog file 'cstdby'     12/6/2012 12:44:06 PM
    Error     12541     FAL[server, ARC7]: Error 12541 creating remote archivelog file 'cstdby'     12/6/2012 12:44:06 PM
    Error     12541     FAL[server, ARCe]: Error 12541 creating remote archivelog file 'cstdby'     12/6/2012 12:44:06 PM
    Error     12541     FAL[server, ARCa]: Error 12541 creating remote archivelog file 'cstdby'     12/6/2012 12:44:06 PM
    Error     12541     PING[ARC2]: Heartbeat failed to connect to standby 'cstdby'. Error is 12541.     12/6/2012 12:44:22 PM
    Error     12541     PING[ARC2]: Heartbeat failed to connect to standby 'cstdby'. Error is 12541.     12/6/2012 12:45:22 PM
    Error     12541     Error 12541 for archive log file 2 to 'cstdby'     12/6/2012 12:45:56 PM
    Error     12541     FAL[server, ARCc]: Error 12541 creating remote archivelog file 'cstdby'     12/6/2012 12:45:56 PM
    Error     12541     FAL[server, ARCe]: Error 12541 creating remote archivelog file 'cstdby'     12/6/2012 12:45:57 PM
    Error     12541     PING[ARC2]: Heartbeat failed to connect to standby 'cstdby'. Error is 12541.     12/6/2012 12:46:25 PM
    Error     12541     PING[ARC2]: Heartbeat failed to connect to standby 'cstdby'. Error is 12541.     12/6/2012 12:47:25 PM
    Error     16198     WARN: ARCr: Terminating ARCH (pid 17929) hung on a network operation     12/6/2012 12:55:35 PM
    Error     16198     WARN: ARCr: Terminating ARCH (pid 17031) hung on a network operation     12/6/2012 12:55:36 PM
    Error     16198     WARN: ARCr: Terminating ARCH (pid 6032) hung on a network operation     12/6/2012 1:00:43 PM
    Error     16198     WARN: ARCr: Terminating ARCH (pid 18578) hung on a network operation     12/6/2012 1:04:58 PM
    Error     16198     WARN: ARCr: Terminating ARCH (pid 6030) hung on a network operation     12/6/2012 1:05:00 PM
    Error     16198     WARN: ARCr: Terminating ARCH (pid 5391) hung on a network operation     12/6/2012 1:05:01 PM
    Error     16198     WARN: ARCr: Terminating ARCH (pid 6448) hung on a network operation     12/6/2012 1:05:02 PM
    Error     16198     WARN: ARCr: Terminating ARCH (pid 19209) hung on a network operation     12/6/2012 1:05:04 PM
    Error     16198     WARN: ARCr: Terminating ARCH (pid 6904) hung on a network operation     12/6/2012 1:10:47 PM
    Error     16198     WARN: ARCr: Terminating ARCH (pid 13268) hung on a network operation     12/6/2012 1:13:17 PM
    Error     16198     WARN: ARCr: Terminating ARCH (pid 6906) hung on a network operation     12/6/2012 1:13:19 PM
    Error     12514     FAL[server, ARCj]: Error 12514 creating remote archivelog file 'cstdby'     12/6/2012 1:13:25 PM
    Error     16198     WARN: ARCr: Terminating ARCH (pid 7386) hung on a network operation     12/6/2012 1:15:52 PM
    Error     3113     FAL[server, ARCq]: FAL archival, error 3113 closing archivelog file 'cstdby'     12/6/2012 1:20:12 PM
    Error     3113     FAL[server, ARCj]: FAL archival, error 3113 closing archivelog file 'cstdby'     12/6/2012 1:20:12 PM
    Error     16198     WARN: ARCq: Terminating ARCH (pid 18233) hung on a network operation     12/6/2012 1:20:12 PM
    Error     3113     FAL[server, ARC2]: FAL archival, error 3113 closing archivelog file 'cstdby'     12/6/2012 1:20:12 PM
    Error     3113     FAL[server, ARCg]: FAL archival, error 3113 closing archivelog file 'cstdby'     12/6/2012 1:20:12 PM
    Error     3113     FAL[server, ARC1]: FAL archival, error 3113 closing archivelog file 'cstdby'     12/6/2012 1:20:12 PM
    Error     3113     FAL[server, ARC9]: FAL archival, error 3113 closing archivelog file 'cstdby'     12/6/2012 1:20:12 PM
    Error     3113     FAL[server, ARC0]: FAL archival, error 3113 closing archivelog file 'cstdby'     12/6/2012 1:20:12 PM
    Error     3113     NSA: Error 3113 archiving log 2 to 'cstdby'     12/6/2012 1:20:12 PM
    Error     3113     FAL[server, ARCf]: FAL archival, error 3113 closing archivelog file 'cstdby'     12/6/2012 1:20:12 PM
    Error     3113     FAL[server, ARC8]: FAL archival, error 3113 closing archivelog file 'cstdby'     12/6/2012 1:20:12 PM
    Error     3135     FAL[server, ARCt]: FAL archival, error 3135 closing archivelog file 'cstdby'     12/6/2012 1:20:12 PM
    Error     3113     FAL[server, ARCs]: FAL archival, error 3113 closing archivelog file 'cstdby'     12/6/2012 1:20:12 PM
    Error     12514     FAL[server, ARCj]: Error 12514 creating remote archivelog file 'cstdby'     12/6/2012 1:20:17 PM
    Error     12514     PING[ARC2]: Heartbeat failed to connect to standby 'cstdby'. Error is 12514.     12/6/2012 1:20:17 PM
    Error     12541     PING[ARC2]: Heartbeat failed to connect to standby 'cstdby'. Error is 12541.     12/6/2012 1:24:02 PM
    Error     12514     PING[ARC2]: Heartbeat failed to connect to standby 'cstdby'. Error is 12514.     12/6/2012 1:25:05 PM
    Error     12514     PING[ARC2]: Heartbeat failed to connect to standby 'cstdby'. Error is 12514.     12/6/2012 1:26:05 PM
    Error     16198     WARN: ARC2: Terminating ARCH (pid 18810) hung on a network operation     12/6/2012 1:32:13 PM
    Error     16198     WARN: ARC2: Terminating ARCH (pid 8224) hung on a network operation     12/6/2012 1:33:19 PM
    Error     16198     WARN: ARC2: Terminating ARCH (pid 7836) hung on a network operation     12/6/2012 1:35:26 PM
    Error     16198     WARN: ARC2: Terminating ARCH (pid 9182) hung on a network operation     12/6/2012 1:36:33 PM
    Error     16198     WARN: ARC2: Terminating ARCH (pid 9260) hung on a network operation     12/6/2012 1:39:42 PM
    Error     16198     WARN: ARC2: Terminating ARCH (pid 9468) hung on a network operation     12/6/2012 1:39:44 PM
    Error     16198     WARN: ARC2: Terminating ARCH (pid 7653) hung on a network operation     12/6/2012 1:39:45 PM
    Error     16198     WARN: ARC2: Terminating ARCH (pid 6910) hung on a network operation     12/6/2012 1:39:46 PM
    Error     16198     WARN: ARC9: Terminating ARCH (pid 5361) hung on a network operation     12/6/2012 1:40:14 PM
    Error     16198     WARN: ARCd: Terminating ARCH (pid 5361) hung on a network operation     12/6/2012 1:40:15 PM
    Error     16198     WARN: ARCr: Terminating ARCH (pid 9665) hung on a network operation     12/6/2012 1:40:56 PM
    Error     16198     WARN: ARC2: Terminating ARCH (pid 9665) hung on a network operation     12/6/2012 1:40:58 PM
    Error     16198     WARN: ARC2: Terminating ARCH (pid 19673) hung on a network operation     12/6/2012 1:44:04 PM
    Error     16198     WARN: ARC2: Terminating ARCH (pid 9940) hung on a network operation     12/6/2012 1:44:05 PM
    Error     16198     WARN: ARC2: Terminating ARCH (pid 10044) hung on a network operation     12/6/2012 1:45:13 PM
    Error     16198     WARN: ARCf: Terminating ARCH (pid 10044) hung on a network operation     12/6/2012 1:45:15 PM
    Error     16198     WARN: ARCr: Terminating ARCH (pid 9938) hung on a network operation     12/6/2012 1:46:00 PM
    Error     16198     WARN: ARCh: Terminating ARCH (pid 9938) hung on a network operation     12/6/2012 1:46:03 PM
    Error     16198     WARN: ARC2: Terminating ARCH (pid 18371) hung on a network operation     12/6/2012 1:47:31 PM
    Error     16198     WARN: ARC2: Terminating ARCH (pid 20131) hung on a network operation     12/6/2012 1:48:37 PM
    Error     16198     WARN: ARC2: Terminating ARCH (pid 10341) hung on a network operation     12/6/2012 1:48:38 PM
    Error     16198     WARN: ARC2: Terminating ARCH (pid 10425) hung on a network operation     12/6/2012 1:51:55 PM
    Error     16198     WARN: ARC2: Terminating ARCH (pid 9934) hung on a network operation     12/6/2012 1:53:02 PM
    Error     16198     WARN: ARC2: Terminating ARCH (pid 10786) hung on a network operation     12/6/2012 1:53:03 PM
    At Standby Site
    ===========
    1. select max(sequence#) from v$archived_log where applied='YES';
    28549
    Thanx & Rgds,
    Athurumithuru.
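    Note that error 12541 is "TNS: no listener", so before recreating the standby it is worth checking connectivity from the primary to the standby; 'cstdby' below is the TNS alias taken from the log above:
    tnsping cstdby
    lsnrctl status
    If tnsping fails from the primary, fixing the listener or the TNS entry may let FAL resolve the gap on its own, provided the missing archive logs still exist on the primary; otherwise proceed with rebuilding the standby from the full datafile backup.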

  • What other things should I consider when using expdp and impdp

    (11g Express) on Windows 2008 R2 and also XP (I installed it in two places for testing).
    I wanted to make a backup of one schema,
    so I used expdp.
    Please tell me: is this OK, or is there anything more reliable for making a backup of a schema and importing (impdp) it into any Oracle database?
    From the Windows prompt:
    MKDIR c:\oraclexe\app\tmp
    From SQL*Plus:
    sqlplus SYSTEM/password
    CREATE OR REPLACE DIRECTORY dmpdir AS 'c:\oraclexe\app\tmp';
    GRANT READ,WRITE ON DIRECTORY dmpdir TO hr;
    From the Windows command line:
    expdp SYSTEM/password SCHEMAS=hr DIRECTORY=dmpdir DUMPFILE=schema.dmp
    LOGFILE=expschema.log
    Then the following to import:
    impdp SYSTEM/password SCHEMAS=hr DIRECTORY=dmpdir DUMPFILE=schema.dmp
    REMAP_SCHEMA=hr:hrdev EXCLUDE=constraint, ref_constraint, index
    TABLE_EXISTS_ACTION=replace LOGFILE=impschema.log
    2) Please tell me: should I consider anything else while exporting or importing?
    3) I have removed EXCLUDE=constraint, ref_constraint, index because I need the constraints, indexes, etc., so please tell me why it is written that way in the documentation.
    Yours sincerely
    Edited by: 944768 on Mar 31, 2013 2:02 AM

    Hi,
    Yes, you can (should) remove the EXCLUDE parameter if you want to import all objects from the export.
    Here is an example of how you export a full schema:
    expdp SYSTEM/password SCHEMAS=hr DIRECTORY=dmpdir DUMPFILE=hr.dmp LOGFILE=exp_hr.log
    And import it into another database:
    impdp SYSTEM/password SCHEMAS=hr DIRECTORY=dmpdir DUMPFILE=hr.dmp LOGFILE=imp_hr.log
    Please note that you need to create the directory dmpdir on the target database also.
    Regards,
    Jari
    My Blog: http://dbswh.webhop.net/htmldb/f?p=BLOG:HOME:0
    Twitter: http://www.twitter.com/jariolai

  • Backup using backint fails for maxdb

    Hi All, I have configured backint for backup of the MaxDB of Content Server 640. I configured it as per the available documents: created the configuration file and the parameter file, and created the backup medium in DBMGUI. Now when I try to run the backup using the pipe I get the error below. Please find the dbm.ebp log for the same:
    more dbm.ebp
    2009-10-22 02:06:08 Setting environment variable 'TEMP' for the directory for temporary files and pipes to default ''. Setting environment variable 'TMP' for the directory for temporary files and pipes to default ''. Using connection to Backint for MaxDB Interface.
    2009-10-22 02:06:08 Checking existence and configuration of Backint for MaxDB. Using configuration variable 'BSI_ENV' = '/sapdb/CFC/lcbackup/apoatlas.env' as path of the configuration file of Backint for MaxDB. Setting environment variable 'BSI_ENV' for the path of the configuration file of Backint for MaxDB to configuration value '/sapdb/CFC/lcbackup/apoatlas.env'. Reading the Backint for MaxDB configuration file '/sapdb/CFC/lcbackup/apoatlas.env'. Found keyword 'BACKINT' with value '/sapdb/CFC/db/bin/backint'. Found keyword 'INPUT' with value '/tmp/backint4sapdbCFC.in'. Found keyword 'OUTPUT' with value '/tmp/backint4sapdbCFC.out'. Found keyword 'ERROROUTPUT' with value '/tmp/backint4sapdbCFC.err'. Found keyword 'PARAMETERFILE' with value '/sapdb/CFC/lcbackup/param.cfg'. Found keyword 'TIMEOUT_SUCCESS' with value '1800'. Found keyword 'TIMEOUT_FAILURE' with value '1800'. Finished reading of the Backint for MaxDB configuration file. Using '/sapdb/CFC/db/bin/backint' as Backint for MaxDB program. Using '/tmp/backint4sapdbCFC.in' as input file for Backint for MaxDB. Using '/tmp/backint4sapdbCFC.out' as output file for Backint for MaxDB. Using '/tmp/backint4sapdbCFC.err' as error output file for Backint for MaxDB. Using '/sapdb/CFC/lcbackup/param.cfg' as parameter file for Backint for MaxDB. Using '1800' seconds as timeout for Backint for MaxDB in the case of success. Using '1800' seconds as timeout for Backint for MaxDB in the case of failure. Using '/sapdb/data/wrk/CFC/dbm.knl' as backup history of a database to migrate. Using '/sapdb/data/wrk/CFC/dbm.ebf' as external backup history of a database to migrate. Checking availability of backups using backint's inquire function. Check passed successful.
    2009-10-22 02:06:08 Checking medium. Check passed successfully.
    2009-10-22 02:06:08 Preparing backup. Setting environment variable 'BI_CALLER' to value 'DBMSRV'. Setting environment variable 'BI_REQUEST' to value 'NEW'. Setting environment variable 'BI_BACKUP' to value 'FULL'. Constructed Backint for MaxDB call '/sapdb/CFC/db/bin/backint -u CFC -f backup -t file -p /sapdb/CFC/lcbackup/param.cfg -i /tmp/backint4sapdbCFC.in -c'. Created temporary file '/tmp/backint4sapdbCFC.out' as output for Backint for MaxDB. Created temporary file '/tmp/backint4sapdbCFC.err' as error output for Backint for MaxDB. Writing '/sapdb/CFC/lcbackup/pipe1 #PIPE' to the input file. Writing '/sapdb/CFC/lcbackup/pipe2 #PIPE' to the input file. Prepare passed successfully.
    2009-10-22 02:06:08 Creating pipes for data transfer. Creating pipe '/sapdb/CFC/lcbackup/pipe1' ... Done. Creating pipe '/sapdb/CFC/lcbackup/pipe2' ... Done. All data transfer pipes have been created.
    2009-10-22 02:06:08 Starting database action for the backup. Requesting 'SAVE DATA QUICK TO '/sapdb/CFC/lcbackup/pipe1' PIPE,'/sapdb/CFC/lcbackup/pipe2' PIPE BLOCKSIZE 8 NO CHECKPOINT MEDIANAME 'BACKINT_ONLINE1'' from db-kernel. The database is working on the request.
    2009-10-22 02:06:09 Waiting until database has prepared the backup. Asking for state of database.
    2009-10-22 02:06:09 Database is still preparing the backup. Waiting 1 second ... Done. Asking for state of database.
    2009-10-22 02:06:10 Database is still preparing the backup. Waiting 2 seconds ... Done. Asking for state of database.
    2009-10-22 02:06:12 Database has finished preparation of the backup. The database has prepared the backup successfully.
    2009-10-22 02:06:12 Starting Backint for MaxDB. Starting Backint for MaxDB process '/sapdb/CFC/db/bin/backint -u CFC -f backup -t file -p /sapdb/CFC/lcbackup/param.cfg -i /tmp/backint4sapdbCFC.in -c >>/tmp/backint4sapdbCFC.out 2>>/tmp/backint4sapdbCFC.err'. Process was started successfully. Backint for MaxDB has been started successfully.
    2009-10-22 02:06:12 Waiting for end of the backup operation.
    2009-10-22 02:06:12 The backup tool is running.
    2009-10-22 02:06:12 The database is working on the request.
    2009-10-22 02:06:14 The backup tool process has finished work with return code 2.
    2009-10-22 02:06:17 The database is working on the request.
    2009-10-22 02:06:27 The database is working on the request.
    2009-10-22 02:06:42 The database is working on the request.
    2009-10-22 02:07:02 The database is working on the request.
    2009-10-22 02:07:15 Canceling Utility-task after a timeout of 60 seconds elapsed ... OK.
    2009-10-22 02:07:17 The database has finished work on the request. Receiving a reply from the database kernel. Got the following reply from db-kernel: SQL-Code: -903. The backup operation has ended.
    2009-10-22 02:07:17 Filling reply buffer. Have encountered error -24920: The backup tool failed with 2 as sum of exit codes. The database request was canceled and ended with error -903. Constructed the following reply: ERR -24920,ERR_BACKUPOP: backup operation was unsuccessful. The backup tool failed with 2 as sum of exit codes. The database request was canceled and ended with error -903. Reply buffer filled.
    2009-10-22 02:07:17 Cleaning up. Removing data transfer pipes. Removing data transfer pipe /sapdb/CFC/lcbackup/pipe2 ... Done. Removing data transfer pipe /sapdb/CFC/lcbackup/pipe1 ... Done. Removed data transfer pipes successfully. Copying output of Backint for MaxDB to this file.
    --- Begin of output of Backint for MaxDB (/tmp/backint4sapdbCFC.out) ---
    Data Protection for mySAP(R) Interface between BR*Tools and Tivoli Storage Manager - Version 5, Release 4, Modification 0.0 for Linux x86_64 - Build: 303 compiled on Nov 16 2006 (c) Copyright IBM Corporation, 1996, 2006, All Rights Reserved.
    BKI0008E: The environment variable BI_CALLER is not set correctely. The current value is "DBMSRV"
    usage: backint -p <par_file> [-u <user>] [-f <function>] [-t <type>] [-i <in_file>] [-o <out_file>] [-c]
    where: <user> = backint utility user; <function> = backup | restore | inquire | password | delete; <type> = file | file_online; <par_file> = parameter file for backup utility; <in_file> = name of a text file that defines the objects (default: STDIN); <out_file> = pool for processing messages and the results of the executed function (default: STDOUT)
    BKI0020I: End of program at: Thu 22 Oct 2009 02:06:14 AM EDT.
    BKI0021I: Elapsed time: 01 sec.
    BKI0024I: Return code is: 2.
    --- End of output of Backint for MaxDB (/tmp/backint4sapdbCFC.out) ---
    Removed Backint for MaxDB's temporary output file '/tmp/backint4sapdbCFC.out'. Copying error output of Backint for MaxDB to this file.
    --- Begin of error output of Backint for MaxDB (/tmp/backint4sapdbCFC.err) ---
    --- End of error output of Backint for MaxDB (/tmp/backint4sapdbCFC.err) ---
    Removed Backint for MaxDB's temporary error output file '/tmp/backint4sapdbCFC.err'. Removed the Backint for MaxDB input file '/tmp/backint4sapdbCFC.in'. Have finished clean up successfully.
    Also, is there any specification about the user permissions and about how the backup should be run?

    Hi Lars,
    I understand that it's clumsy over here, but I have already raised an OSS message and SAP said that they cannot support this issue with backint. If you can provide me with an email id, I can send you the log files, which would be easier to read.
    My issue is that I am not able to run a backup of the MaxDB of Content Server 640 using the backint tool.
    I have created the configuration file and the parameter file as per the specifications from http://maxdb.sap.com/doc/7_7/a9/8a1ef21e4b402bb76ff75bb559a98a/content.htm and http://maxdb.sap.com/doc/7_7/50/075205962843f69b9ec41f34427be7/content.htm.
    The server is registered to the TSM server. Now when I run the wizard to take the backup using the backint tool, it gives the same backint output already quoted in the dbm.ebp log above, ending in:
    BKI0008E: The environment variable BI_CALLER is not set correctely. The current value is "DBMSRV"
    BKI0024I: Return code is: 2.
    I think this should be fine to read...
    Krishna KK
