ASM RMAN backup to File System

Hi all,
I have an RMAN backup (datafiles and controlfile) that was taken from an Oracle 11.2.0.2 database using ASM storage (not RAC) on a Linux server. Now I want to restore the backup into a new database on a Windows/Linux server using regular file system storage (single-instance RDBMS) instead of ASM.
Is this possible?
Can I restore an RMAN backup taken from ASM to file system storage on a new server?
Kindly clarify.
Thanks in advance,
Nonuday

Nonuday wrote:
Hi Levi,
Thanks for your invaluable script and blog.
Can you clarify this query for me?
I have an RMAN backup taken from ASM; it is a database and controlfile backup, containing both datafiles and controlfiles.
Now I need to restore this on my system, where I use neither ASM nor archiving: a single instance running in NOARCHIVELOG mode.
I have restored the control file from the RMAN controlfile backup.
Before restoring the control file I checked the original pfile of the backed-up database, which had parameters like
'db_create_file_dest',
'db_create_online_log_dest_n',
'db_recovery_file_dest_size',
'db_recovery_file_dest',
'log_archive_dest'.
Since I am not going to run the DB in archivelog mode, I did not use any of the above parameters when creating the database.
Now my question is:
If I restore the database, the datafiles will get restored, and after renaming all the logfiles the database can be opened.
I want to know whether this method is correct or wrong, and whether the database will work as it did previously. Or do I need to set db_recovery_file_dest and the other parameters for this database as well?

About the parameters:
All these parameters should reflect your current environment; any reference to the old environment must be modified.
About the file system used:
It does not matter which file system you use: the files (datafile/redolog/controlfile/archivelog/backup piece) are created in a binary format that depends only on the platform. The same binary file (e.g., a datafile) has the same format and content on a raw device, ASM, ext3, ext2, and so on. To the database it is just a location where the files are stored; the files themselves are the same. ASM does, however, have a different architecture from a regular file system and must be managed in a different manner (i.e., using RMAN).
About the database:
Since your database files are the same even on a different file system, all you need to do is rename your datafiles in the controlfile during the restore; the redo files will be recreated.
So it does not matter whether your database is in NOARCHIVELOG or ARCHIVELOG mode: the way you restore to ASM is the same way you restore to a regular file system (it is only a matter of renaming the database files in the controlfile during the restore).
On the blog, the post "How Migrate All Files on ASM to Non-ASM (Unix/Linux)" is about moving files from one file system to another, but you can adapt the script for restore purposes:
## SET NEWNAME tells RMAN where each file will be restored and keeps the new location in its memory buffer
RMAN> set newname for datafile 1 to '<location>';
## SWITCH takes the list of files from the RMAN memory buffer and renames the already-restored files in the controlfile
RMAN> switch datafile all;
RMAN> switch tempfile all;
With the database mounted, use the script below:
I have commented out the three lines that are unnecessary in your case.
SET serveroutput ON;
DECLARE
  vcount  NUMBER:=0;
  vfname VARCHAR2(1024);
  CURSOR df
  IS
    SELECT file#,
      rtrim(REPLACE(name,'+DG_DATA/drop/datafile/','/u01/app/oracle/oradata/drop/'),'.0123456789') AS name
    FROM v$datafile;
  CURSOR tp
  IS
    SELECT file#,
      rtrim(REPLACE(name,'+DG_DATA/drop/tempfile/','/u01/app/oracle/oradata/drop/'),'.0123456789') AS name
    FROM v$tempfile;
BEGIN
--  dbms_output.put_line('CONFIGURE CONTROLFILE AUTOBACKUP ON;'); ### commented
  FOR dfrec IN df
  LOOP
    IF dfrec.name  != vfname THEN
      vcount      :=1;
      vfname     := dfrec.name;
    ELSE
      vcount := vcount+1;
      vfname:= dfrec.name;
    END IF;
  --  dbms_output.put_line('backup as copy datafile ' || dfrec.file# ||' format  "'||dfrec.name ||vcount||'.dbf";');  ### commented
  END LOOP;
  dbms_output.put_line('run');
  dbms_output.put_line('{');
  FOR dfrec IN df
  LOOP
    IF dfrec.name  != vfname THEN
      vcount      :=1;
      vfname     := dfrec.name;
    ELSE
      vcount := vcount+1;
      vfname:= dfrec.name;
    END IF;
    dbms_output.put_line('set newname for datafile ' || dfrec.file# ||'  to  '''||dfrec.name ||vcount||'.dbf'' ;');
  END LOOP;
  FOR tprec IN tp
  LOOP
    IF tprec.name  !=  vfname THEN
      vcount      :=1;
      vfname     := tprec.name;
    ELSE
      vcount := vcount+1;
      vfname:= tprec.name;
    END IF;
    dbms_output.put_line('set newname for tempfile ' || tprec.file# ||'  to  '''||tprec.name ||vcount||'.dbf'' ;');
  END LOOP;
  dbms_output.put_line('restore database;');
  dbms_output.put_line('switch tempfile all;');
  dbms_output.put_line('switch datafile all;');
  dbms_output.put_line('recover database;');
  dbms_output.put_line('}');
--  dbms_output.put_line('alter database open;');  ### commented because you must rename your redologs in the controlfile before opening the database
  dbms_output.put_line('exit');
END;
/
After the restore you must rename your redologs in the controlfile from the old location to the new location:
e.g.
## use this query to get the current location of the redologs
SQL> select group#, member from v$logfile order by 1;
## and change each member from <old_location> to <new_location>
SQL> ALTER DATABASE
  RENAME FILE '+DG_TSM_DATA/tsm/onlinelog/group_3.263.720532229'
           TO '/u01/app/oracle/oradata/logs/log3a.rdo';
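If there are many redo members, the rename statements can be generated in the same spirit as the script above; a minimal sketch, assuming the same '+DG_TSM_DATA/tsm/onlinelog/' to '/u01/app/oracle/oradata/logs/' mapping (the generated names keep the ASM file names, so edit them as needed):
SQL> select 'ALTER DATABASE RENAME FILE '''||member||''' TO '''
       ||replace(member, '+DG_TSM_DATA/tsm/onlinelog/', '/u01/app/oracle/oradata/logs/')
       ||''';' as cmd
     from v$logfile
     order by group#;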
When you have changed all the redologs in the controlfile, issue the command below:
SQL> alter database open resetlogs;
PS: Always track the database in real time using its alert log file.
HTH,
Levi Pereira

Similar Messages

  • Backup - ASM vs regular cooked file system

    Using Oracle 11g on Linux.
    The disk system is ASM. For instance, the FRA is configured on an ASM disk group +FRA (along with the multiplexed logs and control files). This is one place for the backup and multiplexing.
    Now I intend to place the multiplexed files in a second location (disk) as well. For this disk I have two choices: 1) use an ASM diskgroup; 2) use a regular file system (/u01/oracle/oracdata/back).
    A good thing about the cooked file system (I can think of) is that I can see the location of the files by going to the file manager and locating their directory; it is kind of transparent.
    In doing so, will it incur an operational cost in the future? (As compared to an ASM diskgroup, where the files are somewhat hidden from view, but Oracle takes care of everything.)
    So, any comment on the file system for the second disk? (Oracle ASM vs. a regular cooked file system.)
    Thanks

    I suggest you continue using ASM and use the ACFS feature.
    From 11.2.0.3 I don't use the FRA on ASM only; I'm using the FRA under an ASM/ACFS mount point.
    Oracle Automatic Storage Management Cluster File System (Oracle ACFS) is a multi-platform, scalable file system, and storage management technology that extends Oracle Automatic Storage Management (Oracle ASM) functionality to support customer files maintained outside of Oracle Database. Oracle ACFS supports many database and application files, including executables, database trace files, database alert logs, application reports, BFILEs, and configuration files. Other supported files are video, audio, text, images, engineering drawings, and other general-purpose application file data.
    Starting with Oracle Automatic Storage Management 11g Release 2 (11.2.0.3), Oracle ACFS supports RMAN backups (BACKUPSET file type), archive logs (ARCHIVELOG file type), and Data Pump dumpsets (DUMPSET file type).

  • RAC 10gR2 using ASM for RMAN: a cluster file system or a local directory

    The environment is a RAC with 2 nodes using ASM. I have to determine which design is better for backup and recovery with RMAN. The backups are going to be saved to disk only. The database is transactional only and small in size.
    I am not sure how to create a cluster file system, or whether it is better to use a local directory. Also, what is the benefit of having a recovery catalog, given that it is optional to the database?
    I very much appreciate your advice and recommendations, Terry

    Arf,
    I am new to RAC. I analyzed Alejandro's script. His main connection is to the first instance; then, through SQL*Plus, he connects to the second instance. He exits the second instance and starts the RMAN backup of the database, so the backup of the database is done from the first instance.
    I do not see where he runs setenv again to switch to the second instance and run RMAN to back it up. It looks to me as if the backup is only done from the first instance, not the second. I may be wrong, but I do not see a second-instance backup.
    Kindly, I request your assistance on the steps/connections needed to back up the second instance. Thank you so much!! Terry

  • RMAN backup control file

    Hi, I am working on Oracle 10g 10.2.0.4.0 on Solaris 10, with an ASM and RAC setup (2-node RAC).
    I have only one control file: +DATA_DG1/ftssdb/controlfile/current.270.664476369
    I am backing up the control file with RMAN:
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/backup/rman_node1/%F';
    c-31850833-20100909-00 is a backup piece of the control file.
    Now a system admin has suddenly deleted that control file. How can I recover my database using the RMAN backup?

    You will find entries like these:
    CREATE CONTROLFILE REUSE DATABASE "DBNAME" NORESETLOGS NOARCHIVELOG -- depends on your DB log mode
    MAXLOGFILES 32
    MAXLOGMEMBERS 2
    MAXDATAFILES 30
    MAXINSTANCES 8
    MAXLOGHISTORY 800
    LOGFILE
    GROUP 1 '/u01/oracle/7.1.6/dbs/log1p716.dbf' SIZE 500K,
    GROUP 2 '/u01/oracle/7.1.6/dbs/log2p716.dbf' SIZE 500K,
    GROUP 3 '/u01/oracle/7.1.6/dbs/log3p716.dbf' SIZE 500K
    DATAFILE
    '/u01/oracle/7.1.6/dbs/systp716.dbf' SIZE 40M,
    '/u01/oracle/7.1.6/dbs/tempp716.dbf' SIZE 550K,
    '/u01/oracle/7.1.6/dbs/toolp716.dbf' SIZE 15M
    # Recovery is required if any of the datafiles are restored backups,
    # or if the last shutdown was not normal or immediate.
    RECOVER DATABASE
    # Database can now be opened normally.
    ALTER DATABASE OPEN;
    Edited by: user00726 on Sep 9, 2010 4:42 AM
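    Since a controlfile autobackup piece exists, another option is to restore it directly with RMAN; a minimal sketch, assuming the autobackup piece path shown above (instance state and recovery steps must match your own environment, e.g., set the environment for one RAC instance first):
    RMAN> connect target /
    RMAN> startup nomount;
    RMAN> restore controlfile from '/backup/rman_node1/c-31850833-20100909-00';
    RMAN> alter database mount;
    RMAN> recover database;
    RMAN> alter database open resetlogs;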

  • Backup to file system and sbt_tape at the same time?

    hello!
    is it possible to do an RMAN backup to disk and to sbt_tape at the same time (just one backup, not two)? The reason is that I want to copy the RMAN backup files from the local file system via robocopy to another server (in a different building), so that I can use them for fast restores in case of a crash.
    if not, what is the recommended strategy in this case? Backups should be available on tape AND on the file system of a different machine.
    environment: Oracle 10g, Windows Server 2008, CommVault for backup to tape
    thanks for your advice.
    best regards,
    christian

    If you manually copy backupsets out of the FRA to another location that is still accessible as "local disk", you can use the CATALOG command in RMAN to catalog the copies.
    Thus, if you copy or move files from the FRA to /mybackups/MYDB, you would use
    "CATALOG START WITH '/mybackups/MYDB'" or individually "CATALOG BACKUPPIECE '/mybackups/MYDB/backuppiece_1'" etc.
    Once you have moved or removed the backupsets out of the FRA, you must use
    "CROSSCHECK BACKUP"
    and
    "DELETE EXPIRED BACKUP"
    to update the RMAN repository. Otherwise, Oracle will continue to account for the disk space consumed by the BackupPieces in V$FLASH_RECOVERY_AREA_USAGE and will soon "run out of space".
    You would do the same for archivelogs (CROSSCHECK ARCHIVELOG ALL; DELETE EXPIRED ARCHIVELOG ALL) if your archivelogs go to the FRA via USE_DB_RECOVERY_FILE_DEST.
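    Put together as a single RMAN session, the housekeeping after relocating pieces out of the FRA might look like this; a minimal sketch reusing the /mybackups/MYDB path from above:
    RMAN> catalog start with '/mybackups/MYDB' noprompt;
    RMAN> crosscheck backup;
    RMAN> delete noprompt expired backup;
    RMAN> crosscheck archivelog all;
    RMAN> delete noprompt expired archivelog all;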

  • After Restoring/Backup of File System XI Java Instances are not up!

    Hello all,
    We are facing a problem restoring the SAP XI system: after taking a backup of the system, the Java instances in the SAP XI system are not starting again. The ABAP connections are fine.
    Can anyone provide suggestions/solutions for restoring the XI system?
    The system information is as follows.
    System Component:     SAP NetWeaver 2004s, <b>PI 7.0</b>
    Operating System:     SunOS 5.9, SunOS 5.10
    Database:          ORACLE 9.2.0.
    Regards,
    Ketan Patel

    If it's REALLY a PI 7.0 (SAP_BASIS 700 and WebAS Java 7.00), then it's not compatible. WebAS 7.00 needs Oracle 10g (http://service.sap.com/pam)
    Also see
    http://service.sap.com/nw2004s
    --> Availibility
    --> SAP NetWeaver 7.0 (2004s) PAM
    If you open the PowerPoint, you will see that Oracle 9 is not listed. I wonder how you got that installed.
    Nevertheless, if you recover a Java instance, both the file system and the database content (of the Java schema) must be in sync. That means you need to restore both the database (schema) and the file system from backups taken at the same time.
    Check Java Backup and Restore :
    Restoring the System
           1.      Shut down the system.
           2.      Install a new AS Java system using SAPInst, or restore the file system from the offline backups that you created.
           3.      Import the database backup using the relevant tools provided by the database vendor.
           4.      Overwrite the SAP system directory /usr/sap/.
           5.      Start the system (see Starting and Stopping SAP NetWeaver ABAP and Java.)
    The J2EE Engine is restored with the last backup.
    Markus

  • Backup into file system

    Hi
    Setting backup-storage with the following configuration is not generating backup files under the specified location. We are pumping a huge volume of data, and the data (a few GB) is not getting backed up to the file system. Can you let me know what I am missing here?
    Thanks
    sunder
    <distributed-scheme>
      <scheme-name>distributed-Customer</scheme-name>
      <service-name>DistributedCache</service-name>
      <!-- <thread-count>5</thread-count> -->
      <backup-count>1</backup-count>
      <backup-storage>
        <type>file-mapped</type>
        <directory>/data/xx/backupstorage</directory>
        <initial-size>1KB</initial-size>
        <maximum-size>1KB</maximum-size>
      </backup-storage>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <scheme-name>DBCacheLoaderScheme</scheme-name>
          <internal-cache-scheme>
            <local-scheme>
              <scheme-ref>blaze-binary-backing-map</scheme-ref>
            </local-scheme>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>com.xxloader.DataBeanInitialLoadImpl</class-name>
              <init-params>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>{cache-name}</param-value>
                </init-param>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>com.xx.CustomerProduct</param-value>
                </init-param>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>CUSTOMER</param-value>
                </init-param>
              </init-params>
            </class-scheme>
          </cachestore-scheme>
          <read-only>true</read-only>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    <local-scheme>
      <scheme-name>blaze-binary-backing-map</scheme-name>
      <high-units>{back-size-limit 1}</high-units>
      <unit-calculator>BINARY</unit-calculator>
      <expiry-delay>{back-expiry 0}</expiry-delay>
      <cachestore-scheme></cachestore-scheme>
    </local-scheme>

    Hi
    We did try the following configuration:
    <near-scheme>
      <scheme-name>blaze-near-HeaderData</scheme-name>
      <front-scheme>
        <local-scheme>
          <eviction-policy>HYBRID</eviction-policy>
          <high-units>{front-size-limit 0}</high-units>
          <unit-calculator>FIXED</unit-calculator>
          <expiry-delay>{back-expiry 1h}</expiry-delay>
          <flush-delay>1m</flush-delay>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <distributed-scheme>
          <scheme-ref>blaze-distributed-HeaderData</scheme-ref>
        </distributed-scheme>
      </back-scheme>
      <invalidation-strategy>present</invalidation-strategy>
      <autostart>true</autostart>
    </near-scheme>
    <distributed-scheme>
      <scheme-name>blaze-distributed-HeaderData</scheme-name>
      <service-name>DistributedCache</service-name>
      <partition-count>200</partition-count>
      <backing-map-scheme>
        <partitioned>true</partitioned>
        <read-write-backing-map-scheme>
          <internal-cache-scheme>
            <external-scheme>
              <high-units>20</high-units>
              <unit-calculator>BINARY</unit-calculator>
              <unit-factor>1073741824</unit-factor>
              <nio-memory-manager>
                <initial-size>1MB</initial-size>
                <maximum-size>50MB</maximum-size>
              </nio-memory-manager>
            </external-scheme>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>com.xx.loader.DataBeanInitialLoadImpl</class-name>
              <init-params>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>{cache-name}</param-value>
                </init-param>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>com.xx.bean.HeaderData</param-value>
                </init-param>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>SDR.TABLE_NAME_XYZ</param-value>
                </init-param>
              </init-params>
            </class-scheme>
          </cachestore-scheme>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <backup-count>1</backup-count>
      <backup-storage>
        <type>off-heap</type>
        <initial-size>1MB</initial-size>
        <maximum-size>50MB</maximum-size>
      </backup-storage>
      <autostart>true</autostart>
    </distributed-scheme>
    With this configuration, the process's residual main-memory consumption is about 15 GB.
    When we changed the configuration to:
    <near-scheme>
      <scheme-name>blaze-near-HeaderData</scheme-name>
      <front-scheme>
        <local-scheme>
          <eviction-policy>HYBRID</eviction-policy>
          <high-units>{front-size-limit 0}</high-units>
          <unit-calculator>FIXED</unit-calculator>
          <expiry-delay>{back-expiry 1h}</expiry-delay>
          <flush-delay>1m</flush-delay>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <distributed-scheme>
          <scheme-ref>blaze-distributed-HeaderData</scheme-ref>
        </distributed-scheme>
      </back-scheme>
      <invalidation-strategy>present</invalidation-strategy>
      <autostart>true</autostart>
    </near-scheme>
    <distributed-scheme>
      <scheme-name>blaze-distributed-HeaderData</scheme-name>
      <service-name>DistributedCache</service-name>
      <partition-count>200</partition-count>
      <backing-map-scheme>
        <partitioned>true</partitioned>
        <read-write-backing-map-scheme>
          <internal-cache-scheme>
            <external-scheme>
              <high-units>20</high-units>
              <unit-calculator>BINARY</unit-calculator>
              <unit-factor>1073741824</unit-factor>
              <nio-memory-manager>
                <initial-size>1MB</initial-size>
                <maximum-size>50MB</maximum-size>
              </nio-memory-manager>
            </external-scheme>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>com.xx.loader.DataBeanInitialLoadImpl</class-name>
              <init-params>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>{cache-name}</param-value>
                </init-param>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>com.xx.bean.HeaderData</param-value>
                </init-param>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>SDR.TABLE_NAME_XYZ</param-value>
                </init-param>
              </init-params>
            </class-scheme>
          </cachestore-scheme>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <backup-count>1</backup-count>
      <backup-storage>
        <type>file-mapped</type>
        <initial-size>1MB</initial-size>
        <maximum-size>100MB</maximum-size>
        <directory>/data/xxcache/blazeload/backupstorage</directory>
        <file-name>{cache-name}.store</file-name>
      </backup-storage>
      <autostart>true</autostart>
    </distributed-scheme>
    Note that the backup storage is file-mapped:
    <backup-storage>
      <type>file-mapped</type>
      <initial-size>1MB</initial-size>
      <maximum-size>100MB</maximum-size>
      <directory>/data/xxcache/blazeload/backupstorage</directory>
      <file-name>{cache-name}.store</file-name>
    </backup-storage>
    We still see that the process's residual main-memory consumption is 15 GB, and we also see that the /data/xxcache/blazeload/backupstorage folder is empty.
    We wanted to check where backup storage maintains its information; we would like to offload this to a flat file.
    Appreciate any pointers in this regard.
    Thanks
    sunder

  • DNFS with ASM over dNFS with file system - advantages and disadvantages.

    Hello Experts,
    We are creating a 2-node RAC. There will be 3-4 DBs whose instances will be spread across these nodes.
    For storage we have 2 options: dNFS with ASM, and dNFS without ASM.
    The advantages of ASM are well known:
    1. Easier administration for the DBA; using this 'layer', we know the storage very well.
    2. Automatic rebalancing and dynamic reconfiguration.
    3. Striping and mirroring (though we are not using mirroring in our environment; external redundancy is provided at the storage level).
    4. Less (or no) dependency on the storage admin for DB file related tasks.
    5. Oracle also recommends using ASM rather than file system storage.
    Advantages of dNFS (Direct NFS):
    1. Oracle bypasses the OS layer and connects directly to the storage.
    2. Better performance, as user data need not be loaded into the OS kernel.
    3. It load-balances across multiple network interfaces in a similar fashion to how ASM operates in SAN environments.
    Now, if we combine these 2 options, how will that configuration behave in terms of administration/manageability/performance/downtime in case of a future migration?
    I have collected some points.
    In favor of NOT having ASM:
    1. ASM is an extra layer on top of the storage, so when using dNFS this layer could be removed, as there are no performance benefits.
    2. Store the data on the file system rather than in ASM.
    3. Striping will be provided at the storage level (not entirely sure about this).
    4. External redundancy is being used at the storage level, so it is better to remove ASM.
    Points for having ASM with dNFS:
    1. If we remove ASM, the DBA has little or no control over the storage; he cannot even see how much free space is left at the physical level.
    2. The striping option is there to gain performance benefits.
    3. Multiplexing has benefits over mirroring when it comes to recovery.
    (E.g., suppose one database is created with only 1 controlfile, with external mirroring in place at the storage level, and another database is created with 2 copies, multiplexed at the Oracle level; if an rm command is issued to remove that file, there will definitely be a difference in the time needed to restore it.)
    4. We are familiar and comfortable with ASM.
    I have checked MOS also but could not come to any conclusion. Oracle says:
    "Please also note that ASM is not required for using Direct NFS and NAS. ASM can be used if customers feel that ASM functionality is a value-add in their environment." -- How to configure ASM on top of dNFS disks in 11gR2 (Doc ID 1570073.1)
    Kindly advise which one I should go with. I would love to go with ASM, but if this turns out to be a wrong design later, I want to make sure it is corrected in the first place.
    Regards,
    Hemant

    I agree; having ASM on NFS is going to give little benefit whilst adding complexity. The NAS carries out mirroring and striping in hardware, whereas ASM does it in software.
    I would recommend dNFS only if NFS performance isn't acceptable, as dNFS introduces an additional layer with potential bugs! When I first used dNFS in 11gR1, I came across lots of bugs and worked with Oracle Support to have them all resolved. I recommend having a read of this Metalink note:
    Required Diagnostic for Direct NFS Issues and Recommended Patches for 11.1.0.7 Version (Doc ID 840059.1)
    Most of the fixes have been rolled into 11gR2, and I'm not sure what the state of play is in 12c.
    Hope this helps
    ZedDBA

  • Backup root file system on DVD

    I have a root file system about 4 GB large.
    Can I back it up to a DVD?
    I read that ufsdump works with tapes.
    Can it work with a DVD?
    If yes, what is the command format?


  • RMAN Backup Log File

    I have 3 control files in the datafile directory, and the backup becomes a single backupset file for the 3 control files. How can I know which backupset includes these 3 control files?
    Here is the hot backup log:
    Starting Control File and SPFILE Autobackup at 29-JUN-09
    piece handle=/backup/db/backup/RMAN/c-1357907388-20090629-00.bck comment=NONE
    Finished Control File and SPFILE Autobackup at 29-JUN-09
    Starting backup at 29-JUN-09
    channel ch1: starting full datafile backupset
    channel ch1: specifying datafile(s) in backupset
    including current controlfile in backupset
    channel ch1: starting piece 1 at 29-JUN-09
    channel ch1: finished piece 1 at 29-JUN-09
    piece handle=/backup/db/backup/RMAN/backup_PROD_690781724_4213_1_3lkiovgs_1_1.bck comment=NONE
    channel ch1: backup set complete, elapsed time: 00:00:01
    Finished backup at 29-JUN-09
    Starting backup at 29-JUN-09
    channel ch1: starting full datafile backupset
    channel ch1: specifying datafile(s) in backupset
    including current SPFILE in backupset
    channel ch1: starting piece 1 at 29-JUN-09
    channel ch1: finished piece 1 at 29-JUN-09
    piece handle=/backup/db/backup/RMAN/backup_PROD_690781725_4214_1_3mkiovgt_1_1.bck comment=NONE
    channel ch1: backup set complete, elapsed time: 00:00:02
    Finished backup at 29-JUN-09
    Starting Control File and SPFILE Autobackup at 29-JUN-09
    piece handle=/backup/db/backup/RMAN/c-1357907388-20090629-01.bck comment=NONE
    Finished Control File and SPFILE Autobackup at 29-JUN-09
    FAN

    Hi FAN,
    According to your output, the following three pieces contain controlfile backups:
    c-1357907388-20090629-00.bck
    backup_PROD_690781724_4213_1_3lkiovgs_1_1.bck
    c-1357907388-20090629-01.bck
    Each backup does not contain three separate copies; on restore, RMAN restores one controlfile and uses the CONTROL_FILES parameter to recreate all three.
    Regards,
    Tycho
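    RMAN can also report directly which backup sets contain controlfile backups; a minimal sketch:
    RMAN> list backup of controlfile;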

  • Aperture backup strategy - file system compatibility

    Hi all,
    I am beginning to run out of space for a managed library, so I'm starting to look for alternatives. Currently I have a managed library, with a vault on another disk in the same computer. This vault is then mirrored to another computer (Linux) via rsync. This seems to work fine.
    The solution I'm thinking about is to skip the vault and mirror the library + referenced files directly, while Aperture is not running, of course.
    The question that arises is whether Aperture does/needs anything "nonstandard" in its files, resource forks for example. The file names inside the library are a bit weird but within spec for a Unix filesystem, so that isn't a problem, but I would hate to recover a library and then discover that all the files are "disconnected" because of some minor change that occurred during backup/restore...
    Anyone with experience of this (mirroring libraries to a non-Apple filesystem and restoring them)?
    PowerMac G5 Dual 2.0 GHz, 2GB RAM Mac OS X (10.4.8) ATI Radeon 9650

    OK,
    I actually tried this myself instead. I created a new Library, imported a project into it to get some photos with metadata and versions, etc. The masters were relocated to outside the library. I then copied the library to and from a Linux server with rsync, and moved the masters to the file server as well.
    After opening Aperture again and reconnecting the masters, all seemed well.

  • Raw Device Backup to file system (OPS 8i)

    Hi
    Our current setup is:
    Oracle database 8.1.6 (Oracle Parallel Server), two nodes
    NOARCHIVELOG mode
    Solaris 2.6
    All database files, redo logfiles, and controlfiles are on raw devices.
    Database size: 16 GB
    Oracle block size: 8192
    Currently we are using only export backups of Oracle.
    But now I want to take a cold backup of the Oracle database to disk:
    cold backup raw --> disk
    How can we take a cold backup with the dd command and the skip parameter?
    Does anybody have practical experience with the dd command and the skip parameter?
    Thanks and regards
    Kuljeet Pal Singh

    You can use ufsdump instead of dd.

  • Do we need to back up the OS (file systems) on Exadata storage cells?

    We got confusing messages about whether we need to or not. I'd like to hear your opinions.
    Thanks!

    Hi,
    The answer is no.
    There is no need to backup the OS of the storage cell.
    Worst case, a complete storage cell needs to be replaced. A field engineer will open your broken storage cell, take out the onboard USB drive, and put it inside the new storage cell.
    The system will boot from this USB drive, and you can choose to apply the 'old' configuration to the new cell.
    Regards,
    Tycho

  • ASM Vs File system

    1. With file system, we were able to set some threshold alerts at the OS mount point level. Is this possible with ASM since it is a raw device at the OS level?
    2. The ASM directories are logical and is visible ONLY at the oracle ASM level. True?
    3. Is cold backup an option with ASM?
    4. Is RMAN the only solution with ASM disks?
    5. Are there any suggested Hardware snap/clone available for ASM storage?
    Thanks for your insight on this.

    KR wrote:
    1. With file system, we were able to set some threshold alerts at the OS mount point level. Is this possible with ASM since it is a raw device at the OS level?
    2. The ASM directories are logical and is visible ONLY at the oracle ASM level. True?
    3. Is cold backup an option with ASM?
    4. Is RMAN the only solution with ASM disks?
    5. Are there any suggested hardware snap/clone options available for ASM storage?
    1. Not possible from an OS point of view. See the earlier reply for a workaround, and the query sketch after this list;
    2. Yep, or use asmcmd (from the ASM ORACLE_HOME) or ftp;
    3. If you use RMAN to take the cold backup: yes. You cannot use a regular backup to back up your devices, unless you have designated files as 'disks' to ASM (but who would want that?). In that case you would shut down ASM and back up the file system where those files reside;
    4. Yes, it is;
    5. That depends on your hardware configuration. It is possible to run ASM on concurrent/shared, host-based mirrored volume groups/volumes. If you use SAN technology, you can always verify whether these options are available from the storage manufacturer. However, I would be really careful with these, always taking things down before taking snapshots/flashcopies/clones/whatever you call them.
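    On point 1, thresholds cannot be set at an OS mount point, but diskgroup free space is visible from inside the database; a minimal sketch:
    SQL> select name, total_mb, free_mb,
                round(free_mb/total_mb*100, 1) as pct_free
         from v$asm_diskgroup;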
    HTH.
    Arnoud Roth

  • RMAN to Disk (shared mounted file system) then secure backup to Tape

    Hi
    This is a little away from the norm, and I am new to Oracle Secure Backup.
    We have several databases on physically separate servers, all backing up to a central disk (a ZFS shared file system).
    We have a media server, also with the same mounted file system, so it can see all of the RMAN backups there.
    Secure Backup is installed on the media server and is configured as such.
    The question I have is that I need to back up to tape the file system where all the RMAN backups live. I have configured the dataset, but I get file permission errors for each of the RMAN backup files in the directory.
    I have tried to change these permissions but to no avail (assuming it is just a read/write access change that is needed, but this may be a problem in the long run). What is the general process for backing up already-created RMAN backups sitting in a shared area? I know it's not the norm to go to disk and then to tape, but can this be done? I would have installed the Secure Backup client on each server and managed the whole backup through Secure Backup, but this is not possible; I must go to tape from the file system. Any advice and guidance on this would be much appreciated.
    Kind regards
    Vicky
    Edited by: user10090654 on Oct 4, 2011 4:50 AM

    You can easily accomplish a very streamlined D2D2T strategy: RMAN backup to disk, then back up that disk to tape via RMAN and OSB. Upon restore, RMAN will restore from the best media, disk or tape, based on where the files are located.
    Donna
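    In RMAN terms, the second stage of that D2D2T flow is backing up the existing disk backup sets to the SBT device that OSB presents; a minimal sketch (an SBT channel configured for OSB is assumed to be in place):
    RMAN> backup device type sbt backupset all;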
