Archive/backup p2

Hi there,
I've just completed my first P2 project and I'm looking for some advice on archiving. Currently the media is on three 2 TB drives: one has the original MXF files, one has an MXF backup, and the third has the QuickTime files that I cut with. Is one copy of all of the originals plus one copy of the media-managed QuickTime files a wise choice? We'd like to erase one drive for the next project.
Thanks for the help. Gobble gobble!

You can get down to 2 drives as a simple backup-plus-media-managed solution. To keep the backup that other posts described (which is absolutely correct), all you need to do is purchase something like the G-Tech G-RAID: a 2 TB unit (I know that's not big enough for what you said, but it's the price point I know) gives you 2 TB of storage on a built-in RAID for about £200, probably £250.
This means that if your drive does go down, you at least have the RAID copy to fall back on (they come with eSATA too).
Therefore you can go for a media-managed drive (do the same again for an extra backup) and have an Original Media drive.
I opt to copy straight off the card onto the Original Media drive (not to be touched),
then transfer onto bays 3 & 4 in the MacPro to actively work on,
then media manage onto another drive with the final production.
You can then erase the media on bays 3 and 4 if you like.
I should explain my use of the bays in the MacPro, actually:
Bay 1. Operating System (do not use at all for capture scratch!)
Bay 2. Active Project Files
Bay 3. Project Footage Storage (already backed up)
Bay 4. Project Footage Storage (already backed up)
3 steps. 2 drives. Loads of backup there.
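If it helps, here is a minimal sketch of the "straight off the card, then verify" step from a Terminal window. The volume and folder names are only examples, not your actual drives:
# copy the whole P2 card to the untouched Original Media drive, preserving the folder structure
rsync -avE /Volumes/P2_CARD/ /Volumes/Original_Media/Project01/Card01/
# spot-check the copy: checksums of source and destination MXF files should match
find /Volumes/P2_CARD -name '*.MXF' -exec md5 {} \;
find /Volumes/Original_Media/Project01/Card01 -name '*.MXF' -exec md5 {} \;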

Similar Messages

  • RMAN archive backup takes more than 1 hour.

    Hello,
    We execute the following command from HP Data Protector 5.50 in order to back up archived logs from an Oracle 8.1.7 database running on HP-UX 11.00:
    run {
      allocate channel 'dev_0' type 'sbt_tape'
        parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=inf,OB2BARLIST=Oracle_Archivers_Inf)';
      allocate channel 'dev_1' type 'sbt_tape'
        parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=inf,OB2BARLIST=Oracle_Archivers_Inf)';
      allocate channel 'dev_2' type 'sbt_tape'
        parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=inf,OB2BARLIST=Oracle_Archivers_Inf)';
      sql 'alter system switch logfile';
      sql 'alter system archive log current';
      backup
        filesperset 1
        format 'Oracle_Archivers_Inf<inf_%s:%t:%p>.dbf'
        archivelog until time 'SYSDATE-2'
        delete input;
      backup
        filesperset 1
        format 'Oracle_Archivers_Inf<inf_%s:%t:%p>.dbf'
        archivelog all;
    }
    This process takes more than 2 hours for just 3 GB, and we don't think this is a network issue.
    Is our RMAN command doing more things than just the archived log backup?
    Almost at the end of backup we get following message:
    Default exp binary /INF/oracle/app/product/8.1.7/bin/exp/ is used for catalog export.
    Does this message mean an export is done at the end of archive backup?
    Note that by the time this message appears, more than 2 hours have passed.
    Thanks in advance for your help.
    Regards,
    Carles

    Yes, it appears one of your utilities is doing an export backup; check the HP Data Protector settings.
    -- and/or -- you are having tape problems: ... type 'sbt_tape'
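    One way to narrow it down (just a sketch; the disk path is a placeholder) is to time the archivelog backup on its own, first over a disk channel and then over one of your existing 'sbt_tape' channels, outside of Data Protector's scheduling:
    # time a plain archivelog backup to disk, with no barlist and no catalog export involved
    time rman target / nocatalog <<'EOF'
    run {
      allocate channel d1 type disk;
      backup filesperset 1 format '/backup/arch_inf_%s_%p.dbf' archivelog all;
    }
    EOF
    # repeat the same block with one of the 'sbt_tape' channels from your script;
    # if only the tape run is slow, the time is going to the tape layer, not to the export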

  • Archive Backup Failure in Quality Server

    Hello Guru's
    Since this morning the archive backup has been failing on our quality server.
    I am attaching the log for your reference. Please guide us to resolve this issue.
    BR0002I BRARCHIVE 6.40 (50)                                                                               
    BR0006I Start of offline redo log processing: aeardegy.svd 2009-05-28 07.08.28                                                  
    BR0484I BRARCHIVE log file: /oracle/QBS/saparch/aeardegy.svd                                                                    
    BR0252W Function remove() failed for '/oracle/QBS/920_64/dbs/sap.ora' at location BrInitOraCreate-1                             
    BR0253W errno 13: Permission denied                                                                               
    BR0252W Function fopen() failed for '/oracle/QBS/920_64/dbs/initQBS.ora' at location BrInitOraCopy-2                            
    BR0253W errno 13: Permission denied                                                                               
    BR0101I Parameters                                                                               
    Name                           Value                                                                               
    oracle_sid                     QBS                                                                               
    oracle_home                    /oracle/QBS/920_64                                                                               
    oracle_profile                 /oracle/QBS/920_64/dbs/initQBS.ora                                                               
    sapdata_home                   /oracle/QBS                                                                               
    sap_profile                    /oracle/QBS/920_64/dbs/initQBS.sap                                                               
    backup_dev_type                util_file                                                                               
    util_par_file                  /oracle/QBS/920_64/dbs/initQBS.utl                                                               
    archive_dupl_del               only                                                                               
    system_info                    oraqbs/oracbs bascop08 AIX 3 5 00396F7C4C00                                                      
    oracle_info                    QBS 9.2.0.7.0 8192 423 5563906655                                                                
    sap_info                       640 SAPR3 QBS W1421187153 R3_ORA 0020121351                                                      
    make_info                      rs6000_64 OCI_920 Oct 11 2008                                                                    
    command_line                   brarchive -u system/******** -c -d util_file -sd                                                 
    BR0252E Function fopen() failed for '/oracle/QBS/saparch/archQBS.log' at location arch_last_get-1                               
    BR0253E errno 13: Permission denied                                                                               
    BR0016I 0 offline redo log files processed, total size 0.000 MB                                                                 
    BR0252W Function fopen() failed for '/oracle/QBS/saparch/archQBS.log' at location BrCleanup-8                                   
    BR0253W errno 13: Permission denied                                                                               
    BR0121W Processing of log file /oracle/QBS/saparch/archQBS.log failed                                                           
    BR0007I End of offline redo log processing: aeardegy.svd 2009-05-28 07.08.29                                                    
    BR0280I BRARCHIVE time stamp: 2009-05-28 07.08.29                                                                               
    BR0005I BRARCHIVE terminated with errors                                                                               
    Thanks & Regards
    Shishir

    Hello Shishir,
    It looks as if the issue is with the permissions of the saparch directory.
    Check the permissions of /oracle/QBS/saparch (including archQBS.log) and of the profile directory /oracle/QBS/920_64/dbs (which holds initQBS.ora), and compare them with the production server.
    Also make sure that initQBS.ora is present in the location specified.
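    A quick way to check and correct this (a sketch only; the oraqbs:dba ownership shown is the usual pattern, so confirm it against your production system before changing anything):
    # compare ownership and modes with the production server
    ls -ld /oracle/QBS/saparch /oracle/QBS/920_64/dbs
    ls -l  /oracle/QBS/saparch/archQBS.log /oracle/QBS/920_64/dbs/initQBS.ora
    # if they differ, restore the expected ownership and mode (run as root)
    chown -R oraqbs:dba /oracle/QBS/saparch
    chmod 775 /oracle/QBS/saparch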
    Rohit

  • Log file sync  during RMAN archive backup

    Hi,
    I have a small question. I hope someone can answer it.
    Our database (a cluster) needs to respond within 0.5 seconds. Most of the time it does, except when the RMAN backup is running.
    Once a week we run a full backup, every weekday an incremental backup, every hour a controlfile backup, and every 15 minutes an archivelog backup.
    During a backup, response time can be much longer than 0.5 seconds.
    Below is a typical example of the response time we see:
    EVENT: log file sync
    WAIT_CLASS: Commit
    TIME_WAITED: 10,774
    The time is in seconds, so it obviously takes very long to get a commit, and it is clearly related to the RMAN backup since this kind of response time only shows up while the backup is running.
    Why are response times so high even when I only back up the archivelog files? We didn't have this problem before; it started about two weeks ago and I can't find the cause.
    - We use an Oracle 11.2 RAC database on ASM. Redo logs and database files are on the same disks.
    - Autobackup of controlfile is off.
    - Dataguard: LogXptMode = 'arch'
    Greetings,

    Hi,
    Thank you. I am new here and so I was wondering how I can put things into the right category. It is very obvious I am in the wrong one so I thank the people who are still responding.
    - Actually, the example I gave is one of many hundreds a day. The response times during the archive backup are mostly between 2 and 11 seconds. When we back up the controlfile with it, these response times are guaranteed.
    - The autobackup of the controlfile is turned off because we already back up the controlfile every hour. Since we back up the archived logs every 15 minutes, it is not necessary to also back up the controlfile every 15 minutes, especially if that causes even more delay. The controlfile is a lifeline, but if you have properly backed up your archived logs, a full restore with at most 15 minutes of data loss is still possible. We turned autobackup off because it is severely in the way of performance at the moment.
    As already mentioned, for specific applications the DB has to respond within 0.5 seconds. When that doesn't happen, an entry is written to a table used by that application, so I can compare the time of the failure with the time of whatever else was going on. The times of the archivelog backups and the failures match in 95% of the cases, and log file sync at those moments is clearly part of the performance issue. I built a script for myself to determine, from the application's point of view, what the cause of the problem is:
    select ASH.INST_ID INST,
           ASH.EVENT EVENT,
           ASH.P2TEXT,
           ASH.WAIT_CLASS,
           DE.OWNER OWNER,
           DE.OBJECT_NAME OBJECT_NAME,
           DE.OBJECT_TYPE OBJECT_TYPE,
           ASH.TIJD,
           ASH.TIME_WAITED TIME_WAITED
      from (SELECT INST_ID,
                   EVENT,
                   CURRENT_OBJ#,
                   ROUND(TIME_WAITED / 1000000, 3) TIME_WAITED,
                   TO_CHAR(SAMPLE_TIME, 'DD-MON-YYYY HH24:MI:SS') TIJD,
                   WAIT_CLASS,
                   P2TEXT
              FROM gv$active_session_history
             WHERE PROGRAM IN ('yyyyy', 'xxxxx')) ASH,
           (SELECT OWNER, OBJECT_NAME, OBJECT_TYPE, OBJECT_ID FROM DBA_OBJECTS) DE
     WHERE DE.OBJECT_ID = ASH.CURRENT_OBJ#
       AND ASH.TIME_WAITED > 2
     ORDER BY 8, 6;
    - Our logfiles are 250M and we have 8 groups of 2 members.
    - The large pool is not set explicitly since we use memory_max_target and memory_target. I know Oracle may not size memory well with these parameters, so it is definitely something I should look into (see the quick check after this list).
    - I looked at the size of the log buffer. Ours is 28M, which in my opinion is very large, so maybe I should make it even smaller. It is quite possible that the log buffer is part of this problem. Thank you for the tip.
    - I will also definitely look into the I/O. Even though we work with ASM on RAID 10, I don't think it is wise to put redo logs and datafiles on the same disks. Then again, it was not installed by me, so you are right, I have to investigate.
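    The quick check I am using for the memory side (a sketch only; run it as a SYSDBA-privileged user):
    sqlplus -s "/ as sysdba" <<'EOF'
    show parameter memory_target
    show parameter log_buffer
    -- how much of the SGA automatic memory management is currently giving to each pool
    select pool, round(sum(bytes)/1024/1024) mb
      from v$sgastat
     where pool in ('shared pool', 'large pool')
     group by pool;
    EOF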
    Thank you all very much for still responding even if I put this in the totally wrong category.
    Greetings,

  • Archive Backup procedure & Advantage

    Hi,
    We want to configure archive backups, and we would also like to understand them better, so could anyone help and explain the following questions?
    1) How do we configure an archive backup?
    2) What are the advantages of this backup?
    3) When is it required, and when is it performed?
    4) Can it be configured both ways (from SAP and from the database)?
    Our System Information:
    SAP ECC6
    Database : DB2 9.1
    Regards
    Abhijit

    Hi,
    I didn't get any feedback regarding my issue.
    Regards
    Abhijit

  • Archive backup issue

    Hi ,
    We are taking archivelog backups using RMAN, but not all of the archived logs are deleted after the backup; only a few are deleted.
    Please help with this.
    The OS is Windows and the database version is 10g.
    Below is the script for taking the backup:
    run
    {
    sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
    allocate channel CH01 type 'sbt_tape';
    send 'NB_ORA_POLICY=N10-NDLORADB02-DB-Oracle-billhost-Archive-Full-Backup-Daily,NB_ORA_SERV=CNDBBKPRAPZT33,NB_ORA_CLIENT=NDLORADB02,NB_ORA_SCHED=Default-Application-Backup';
    backup filesperset 20 format 'ARC%S_%R.%T' (archivelog like 'T:\ARCHIVE\ARC%');
    backup format 'cf_%d_%U_%t' current controlfile tag='backup_controlfile';
    delete noprompt archivelog all backed up 1 times to sbt_tape ;
    release channel CH01;
    }
    Regards
    Gaurav

    No evidence for your assertion exists in your post.
    These two commands
    backup filesperset 20 format 'ARC%S_%R.%T' (archivelog like 'T:\ARCHIVE\ARC%');
    delete noprompt archivelog all backed up 1 times to sbt_tape ;
    are potentially contradictory, but since you also don't provide the log_archive_format parameter, who can tell.
    This forum is about providing help. In order to be able to help, people asking for help need to provide sufficient information.
    You don't ask for help; you ask us to solve a riddle without providing a webcam.
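    If the intent is simply "back up every archived log, then delete whatever has been backed up once to tape", a sketch like the following keeps the backup scope and the delete scope consistent (the NetBackup send parameters are elided here and the format string is only an example; adjust both to your environment):
    run
    {
    allocate channel CH01 type 'sbt_tape';
    send 'NB_ORA_POLICY=...,NB_ORA_SERV=...,NB_ORA_CLIENT=...,NB_ORA_SCHED=...';
    sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
    # back up all archived logs RMAN knows about, not only the ones matching one path
    backup filesperset 20 format 'arch_%d_%U' archivelog all;
    backup format 'cf_%d_%U_%t' current controlfile tag='backup_controlfile';
    delete noprompt archivelog all backed up 1 times to sbt_tape;
    release channel CH01;
    }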
    Sybrand Bakker
    Senior Oracle DBA

  • Archive backup error

    Hi,
    We are taking an archive backup and I am getting the error below. Please help me resolve it.
    BR0002I BRARCHIVE 7.00 (32)
    BR0006I Start of offline redo log processing: aecmkecv.cds 2010-02-02 10.51.45
    BR0484I BRARCHIVE log file: /oracle/KBP/saparch/aecmkecv.cds
    BR0280I BRARCHIVE time stamp: 2010-02-02 10.51.45
    BR0301E SQL error -9925 at location BrInitOraCreate-2, SQL statement:
    'CONNECT / AT PROF_CONN IN SYSOPER MODE'
    ORA-09925: Unable to create audit trail file
    IBM AIX RISC System/6000 Error: 13: Permission denied
    Additional information: 9925
    BR0303E Determination of Oracle version failed
    BR0007I End of offline redo log processing: aecmkecv.cds 2010-02-02 10.51.45
    BR0280I BRARCHIVE time stamp: 2010-02-02 10.51.46
    BR0005I BRARCHIVE terminated with errors
    kvl

    Hi kvl
    Check the parameter AUDIT_FILE_DEST. It points to a directory which has to exist and be writable by the ora<sid> account.
    Do you want to have an audit trail for your SAP database? Are you ever going to use it? If not, another option is to turn it off with another parameter, audit_trail = none.
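    A quick check (a sketch only; the directory below is just an example, so use whatever AUDIT_FILE_DEST actually reports, and confirm the usual ora<sid>:dba ownership for your system):
    sqlplus -s "/ as sysdba" <<'EOF'
    show parameter audit_file_dest
    show parameter audit_trail
    EOF
    # the reported directory must exist and be writable by the users running Oracle and brarchive
    ls -ld /oracle/KBP/saptrace/audit     # example path only - substitute the value shown above
    chown orakbp:dba /oracle/KBP/saptrace/audit
    chmod 775 /oracle/KBP/saptrace/audit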
    Regards
    Doug

  • SnapshotDB failed for archive backup

    hi!
    One server I manage had a power failure. Now when I try to start the calendar service I get this error:
    20070528154729 - 1 Fatal Error: SnapshotDB failed for archive backup at /var/opt/SUNWics5/csdb/archive/archive_20070528
    20070528154729 - csstoredExit: Exiting with [-1].
    The whole log:
    20070528154726 -   caldb.berkeleydb.homedir.path  = /var/opt/SUNWics5/csdb
    20070528154726 -   caldb.berkeleydb.archive.path  = /var/opt/SUNWics5/csdb/archive
    20070528154726 -   caldb.berkeleydb.hotbackup.path  = /var/opt/SUNWics5/csdb/hotbackup
    20070528154726 -   caldb.berkeleydb.archive.enable  = 1
    20070528154726 -   caldb.berkeleydb.hotbackup.enable  = 1
    20070528154726 -   caldb.berkeleydb.hotbackup.mindays  = 3
    20070528154726 -   caldb.berkeleydb.hotbackup.maxdays  = 6
    20070528154726 -   caldb.berkeleydb.hotbackup.threshold  = 70
    20070528154726 -   caldb.berkeleydb.archive.mindays  = 3
    20070528154726 -   caldb.berkeleydb.archive.maxdays  = 6
    20070528154726 -   caldb.berkeleydb.archive.threshold  = 70
    20070528154726 -   caldb.berkeleydb.circularlogging  = no
    20070528154726 -   caldb.berkeleydb.archive.interval  = 120
    20070528154726 -   alarm.msgalarmnoticercpt  = [email protected]
    20070528154726 -   service.admin.sleeptime  = 2
    20070528154726 -   local.serveruid  = icsuser
    20070528154726 -   local.hostname  = jespre.dominio.local
    20070528154726 -   service.http.calendarhostname  = jespre.dominio.local
    20070528154726 - Reading configuration file - Done
    csstored is started
    Calendar service(s) were started
    20070528154728 - Notice: Store Archiving is Enabled
    20070528154728 - Notice: Hot Backup is Enabled
    20070528154728 -   WARNING: Removing directory [archive_20061102] from [/var/opt/SUNWics5/csdb/archive] according to system settings.
    20070528154728 -            In backup directory [/var/opt/SUNWics5/csdb/archive] we had [6] days worth of backup
    20070528154728 -            According to the system settings, we must keep between [3] and [6] days of backup
    20070528154728 -            We now have [5] days of backup in [/var/opt/SUNWics5/csdb/archive]
    20070528154728 - Creating directory [/var/opt/SUNWics5/csdb/archive/archive_20070528] 20070528154728 - ... success
    20070528154729 - Checking condition for [/var/opt/SUNWics5/csdb/archive], threshold [70], DB [257031]KB
    20070528154729 - Hotbackup on [/var/opt/SUNWics5/csdb/hotbackup] mounted on [/dev/dsk/c0d0s6]
    20070528154729 - Archivebackup on [/var/opt/SUNWics5/csdb/archive] mounted on [/dev/dsk/c0d0s6]
    20070528154729 - Checking condition for [/var/opt/SUNWics5/csdb/archive], threshold [70], DB [257031]KB
    20070528154729 - Hotbackup on [/var/opt/SUNWics5/csdb/hotbackup] mounted on [/dev/dsk/c0d0s6]
    20070528154729 - Archivebackup on [/var/opt/SUNWics5/csdb/archive] mounted on [/dev/dsk/c0d0s6]
    20070528154729 - SnapshotDB: Creating archive copy at /var/opt/SUNWics5/csdb/archive/archive_20070528
    20070528154729 - Run CheckpointDB prior to backing up the database files
    20070528154729 -        Running CheckpointDB: /opt/SUNWics5/cal/lib/../tools/unsupported/bin/db_checkpoint -1 -h /var/opt/SUNWics5/csdb 2> /tmp/csstored.checkpoint.out
    20070528154729 -        Running CheckpointDB: /opt/SUNWics5/cal/lib/../tools/unsupported/bin/db_checkpoint -1 -h /var/opt/SUNWics5/csdb 2> /tmp/csstored.checkpoint.out
    20070528154729 - Copying database files to /var/opt/SUNWics5/csdb/archive/archive_20070528
    20070528154729 -        Copying database file ics50alarms.db to /var/opt/SUNWics5/csdb/archive/archive_20070528
    20070528154729 - SnapshotDB - Copy failed to /var/opt/SUNWics5/csdb/archive/archive_20070528 for ics50alarms.db
    20070528154729 - 1 Fatal Error: SnapshotDB failed for archive backup at /var/opt/SUNWics5/csdb/archive/archive_20070528
    20070528154729 - csstoredExit: Exiting with [-1].
    Any hint?

    Long time ago, in a galaxy far away...
    I discovered that csstored.pl doesn't play very well with the Spanish locale (not sure about other locales).
    If I launch /opt/SUNWics5/cal/sbin/start-cal with this locale:
    LANG=es_ES.UTF-8
    LC_CTYPE=es_ES.UTF-8
    LC_NUMERIC=es_ES.UTF-8
    LC_TIME="es_ES.UTF-8"
    LC_COLLATE=es_ES.UTF-8
    LC_MONETARY=es_ES.UTF-8
    LC_MESSAGES=es.UTF-8
    LC_ALL=
    I get the error described before. Then I just reset LANG and LC_ALL to C and all goes well.
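    In practice that just means starting the services with a C locale (a sketch; adjust the path if your install differs):
    # force a C locale for this invocation only, so csstored parses tool output as it expects
    env LANG=C LC_ALL=C /opt/SUNWics5/cal/sbin/start-cal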
    Thanks to truss for the inspiration.

  • DB13 ARCHIVE backup error

    Hi ,
    Can anybody help me? I am taking a backup using transaction DB13 and I got the following errors.
    Job log
    Job started
    Step 001 started (program RSDBAJOB, variant &0000000000025, user ID KUMARS)
    Execute logical command BRARCHIVE On host blrsap002
    Parameters:-u / -jid LOG__20070706121833 -c force -p initVMD.sap -s
    BR0002I BRARCHIVE 7.00 (22)
    BR0006I Start of offline redo log processing: advqhufa.sve 2007-07-06 12.18.42
    BR0484I BRARCHIVE log file: E:\oracle\VMD\saparch\advqhufa.sve
    BR0252W Function remove() failed for 'E:\oracle\VMD\102\database\sap.ora' at location BrInitOraCopy-7
    BR0253W errno 13: Permission denied
    BR0166I Parameter 'log_archive_dest' not found infile E:\oracle\VMD\102\database\initVMD.ora - default assumed
    BR0280I BRARCHIVE time stamp: 2007-07-06 12.18.43
    BR0008I Offline redo log processing for database instance: VMD
    BR0009I BRARCHIVE action ID: advqhufa
    BR0010I BRARCHIVE function ID: sve
    BR0048I Archive function: save
    BR0011I 2 offline redo log files found for processing, total size 59.521 MB
    BR0112I Files will not be compressed
    BR0130I Backup device type: disk
    BR0106I Files will be saved on disk in directory:F:\SAPDEVBACKUP\ARC
    BR0134I Unattended mode with 'force' active - no operator confirmation allowed
    BR0202I Saving init_ora
    BR0203I to F:\SAPDEVBACKUP\ARC\VMD ...
    BR0278E Command output of 'E: && cd E:\oracle\VMD\102\database && E:\usr\sap\VMD\SYS\exe\uc\NTI386\brtools.exe -f copyfile F:\SA
    Access is denied.
            0 file(s) copied.
    BR0280I BRARCHIVE time stamp: 2007-07-06 12.18.43
    BR0279E Return code from 'E: && cd E:\oracle\VMD\102\database && E:\usr\sap\VMD\SYS\exe\uc\NTI386\brtools.exe -f copyfile F:\SAP
    BR0222E Copying init_ora to/from F:\SAPDEVBACKUP\ARC\VMD failed due to previous errors
    BR0016I 0 offline redo log files processed, totalsize 0.000 MB
    BR0007I End of offline redo log processing: advqhufa.sve 2007-07-06 12.18.43
    BR0280I BRARCHIVE time stamp: 2007-07-06 12.18.43
    BR0005I BRARCHIVE terminated with errors
    External program terminated with exit code 5
    BRARCHIVE returned error status E
    Job finished
    Thanks
    siva kumar

    Hi,
    I think it is caused by incorrect authorizations/roles. Please check the following two things:
    1) Make sure that the directories and executables such as SAPDBA,
    BRBACKUP, BRARCHIVE, BRRESTORE, BRRECOVER, BRCONNECT
    and BRTOOLS have the correct authorizations.
    Please refer to SAP Note 113747.
    2) Run the sapdba role script to update the proper authorizations:
    Oracle 10g: sqlplus /nolog @sapdba_role <SID>
    You can get it from the \usr\sap\<SID>\SYS\exe\run folder.
    Regards,
    Venkat

  • Archive Backup job failing with no error

    Hi All,
    Can anybody help me fix this backup issue? Please find the RMAN log below and help me to fix this.
    Script /opt/rman_script/st80_oracle_arch.sh
    ==== started on Fri Jun 28 11:05:11 SGT 2013 ====
    RMAN: /OraBase/V10203/bin/rman
    ORACLE_SID: ST801
    ORACLE_USER: oracle
    ORACLE_HOME: /OraBase/V10203
    NB_ORA_SERV: zsswmasb
    NB_ORA_POLICY: bsswst80_archlog_daily
    Sun Microsystems Inc. SunOS 5.10 Generic January 2005
    You have new mail.
    Script /opt/rman_script/st80_oracle_arch.sh
    ==== ended in error on Fri Jun 28 11:05:11 SGT 2013 ====
    Thanks,
    Nayab

    Hi Sarat,
    Hope it is solved now. It was due to the archive log destination being full, which caused the job to hang; it worked after my system admin manually moved a few logs.
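    For next time, a quick check of the archive destination before digging into the script (a sketch only; the mount point is just an example):
    # is the archive destination file system (or the flash recovery area) filling up?
    df -h /oraarch                        # example mount point for the archive destination
    sqlplus -s "/ as sysdba" <<'EOF'
    select dest_name, status, error from v$archive_dest where status <> 'INACTIVE';
    select name, space_limit, space_used from v$recovery_file_dest;
    EOF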
    Thanks,
    Nayab

  • RMAN archive backup retention

    Hi,
    My backed-up archivelog files are not getting deleted from the backup location as per my retention policy.
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '${BACKUP_DIR}/${THEDBNAME}/cf_%d_%F';
    CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
    CONFIGURE BACKUP OPTIMIZATION ON;
    CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO BACKUPSET PARALLELISM 3;
    CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE = 5G;
    RUN
    {
    BACKUP DATABASE format '${BACKUP_DIR}/${THEDBNAME}/db_%d_%t_%s_%p' TAG = 'DB_backup' ;
    sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
    backup filesperset 25 archivelog all format '${BACKUP_DIR}/${THEDBNAME}/arch_%d_%U' TAG = 'ARCH_backup';
    delete noprompt archivelog until time 'sysdate -1' ;
    CROSSCHECK BACKUP;
    DELETE NOPROMPT EXPIRED BACKUP;
    DELETE NOPROMPT FORCE OBSOLETE;
    }
    EOF
    The retention policy is keeping 2 sets of backups, but it is not removing the backed-up archivelog files from the backup location.
    Regards,
    DBA

    Hi
    I have the same issue. It looks like a BUG:
    list backup of archivelog ... reports the existence of the archivelog backups,
    but the "delete obsolete recovery window" deletes only the datafile backups!
    I can also confirm that by checking the catalog directly:
    select bck_type, status, max(completion_time), min(completion_time), count(*) from rman.bs where db_key=5297591 group by bck_type , status
    My catalog is 11g and my target db is 9i
    Have you managed to find any workaround?
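    For what it's worth, one explicit workaround (a sketch only; the 7-day window and DEVICE TYPE DISK are examples matching the disk backups in the script above) is to age out the archivelog backupsets directly instead of relying on REDUNDANCY:
    crosscheck backup of archivelog all;
    delete noprompt backup of archivelog all completed before 'sysdate-7' device type disk;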
    Thanks

  • Finding the archive backup or not

    How do I check whether a particular archived log has been backed up or not in RMAN?

    You can use list backup of archivelog sequence <number>
    Example:
    RMAN> list backup of archivelog sequence 256;
    Please mark the question as answered if you feel you have the answer, and keep the forum clean.

  • Digital Photos-Best Way to Archive, Backup - Your Opinion...?

    With all the photos my family is taking in the new digital camera craze, I pose a question to the experienced photographers. If you only had ONE way to back up and archive your files, which ONE way would you use and why???
    PowerMacDualGig   Mac OS X (10.4.3)  

    Dave:
    If I had only one way, I'd burn the JPEG files, organized by year and shoot (Figure 1), to DVD discs. Make two of each disc, one to store in another location under stable conditions; if you had a large enough safety deposit box, that would be a very good choice.
    Now some will argue, "will we be able to read them in 20 years?", etc. I would think yes. As technology advances there is always going to be a way to copy from the current standard to the new one. What I do is create slide shows of all the digital images that I and other family members take and burn them to DVD, arranged by month (Figure 2). Then I burn the source files in the folders shown in Figure 1 to DVD as well. I then distribute the slideshow DVD and the source-file DVD to each family member. A little work, but it keeps me off the streets at night. Good luck.
    G4 DP-1G, 1.5G RAM, 22 Display, 2-80G HD, QT 7.0.3P   Mac OS X (10.4.3)   Canon S400, i850 & LIDE 50, Epson R200, 2G Nano

  • Oracle 10g RAC Archive Backup RMAN

    Hi All
    We are running a 3-node Oracle 10.2.0.4 RAC on Solaris 10. Because of dependencies during peak loads, we configured our archive destinations locally on each node and have them NFS-mounted on the other nodes participating in this cluster.
    Due to an NFS issue, whenever a node is rebooted or panics and restarts, these NFS mounts do not come back up on the other nodes, although all other services restart and are fully functional.
    In such events, our backup job fails to back up the archives (obviously, because the archives are not available). So I tried to modify the job by making an explicit connection to each of the nodes before running the block "backup archivelog all delete all input;". One archive job is now broken down into 3 independent blocks. However, the archive job on each node still fails because the archives on the other nodes are not reachable.
    I now want to ensure that this archivelog backup block runs as a single job and still successfully backs up the archives from all instances, irrespective of which node (instance) the job has connected to. Can this be done? Please advise.
    Sarat.

    Is there a way, at the RMAN> prompt, to connect to the other instances of the database and back up their archives even though they are not visible locally?
    I mean, let's say the RMAN job has connected to the orcl1 instance of the orcl database running on node 1 of a 3-node cluster. I normally make a connection like this:
    rman target / catalog rman/rman123@catalogue cmdfile=cmdtext.lst log=rmanbkp.log
    In this example, I now need to connect to the orcl2 and orcl3 instances as well and complete the archivelog backup. Can this be done in one single run block at the RMAN prompt?
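    A commonly used approach (a sketch only; the passwords and the orcl1/orcl2/orcl3 service names are placeholders for your own TNS entries) is to allocate one channel per instance inside the run block, each with its own connect string. Every channel then backs up the archived logs that are visible on its own node:
    rman target / catalog rman/rman123@catalogue <<'EOF'
    run
    {
    # one channel per instance; use 'type disk' instead of sbt if you back up to disk
    allocate channel n1 device type sbt connect 'sys/password@orcl1';
    allocate channel n2 device type sbt connect 'sys/password@orcl2';
    allocate channel n3 device type sbt connect 'sys/password@orcl3';
    backup archivelog all delete all input;
    }
    EOF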
    Sarat

  • External Hard drives archive/backup duration

    If I back up an external hard drive on the first of January, and then back the machine up [1000 times] WITHOUT that drive until December, will I still have the external hard drive's backup in my Time Machine, or will the many later backups discard it so that I won't have it in December?

    From [Apple's site|http://www.apple.com/macosx/features/timemachine.html]:
    *Timing is everything.*
    Every hour, every day, an incremental backup of your Mac is made automatically as long as your backup drive is attached to your Mac. Time Machine saves the hourly backups for the past 24 hours, daily backups for the past month, and weekly backups for everything older than a month.
    *Backing up to a full disk.*
    One day, no matter how large your backup drive is, it will run out of space. And Time Machine has an action plan. It alerts you that it will start deleting previous backups, oldest first. Before it deletes any backup, Time Machine copies files that might be needed to fully restore your disk for every remaining backup.
    So, it depends on the size of your backup disk and on what changes are made on the drives that are being backed up by the incremental backups. I mean, 1000 backups may not necessarily take a lot of space. But once there is no more space, the first backups to go will be the earliest ones, and thus likely the one containing your external drive.
    /p
