Raw Device Backup to File System (OPS 8i)

Hi
Our current setup is:
Oracle database 8.1.6 (Oracle Parallel Server), two nodes
NOARCHIVELOG mode
Solaris 2.6
All database files, redo log files and control files are on raw devices.
Database size: 16 GB
Oracle block size: 8192
Currently we are using only Oracle export backups.
Now I want to take a cold backup of the Oracle database to disk:
cold backup: raw --> disk
How can we take a cold backup with the dd command and its skip parameter?
Does anybody have practical experience with dd and skip?
Thanks and regards
Kuljeet Pal Singh

You can use ufsdump instead of dd.
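If you do want dd, a minimal sketch (the raw slice and target path below are hypothetical; whether you need skip at all depends on your platform's raw-device header, so confirm with your OS vendor first):
# database must be shut down cleanly - this is a NOARCHIVELOG cold backup
dd if=/dev/rdsk/c1t0d0s4 of=/backup/system01.dbf bs=8192
# if your platform keeps an OS header on the raw device, skip it in
# Oracle-block units, e.g. a 64 KB header at an 8192-byte block size:
dd if=/dev/rdsk/c1t0d0s4 of=/backup/system01.dbf bs=8192 skip=8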

Similar Messages

  • Raw devices versus Cluster File Systems in RAC 10gR2

    Hi,
    Is anyone using cluster file systems in a RAC 10gR2 installation, specifically IBM's GPFS?
    I've visited a company that is running RAC 10gR2 on AIX over raw devices. Why would someone choose raw devices, with all the administration problems, when modern file systems are so powerful? Are there any issues when combining cluster file systems with RAC? Are there considerable performance benefits to raw devices with RAC?
    I've always used Oracle stand-alone instances over file systems (since version 7), and performance was always very good. I tested raw devices almost 10 years ago, and even then (today's hardware is much better - SAN, 15K rpm disks, huge caches - and today's file system software is much better) the cost of administering them did not compensate for the benefits (only about 5% faster than file systems on Oracle 7).
    So, besides any limitations imposed by RAC, why use raw devices nowadays?
    Regards,
    Antonio Belloni

    Hi,
    spontaneously, my question would be: How did you eliminate the influence of the Linux file system cache on ext3? OCFS2 is accessed with the O_DIRECT flag - there will be no caching. The same holds true for raw devices. This could have an influence on your test, and I did not see a configuration step to avoid it.
    What I saw, though, is the counter test: "I have tried comparing simple file copies from an OS level and the speed differences are not apparent - so the issue only manifests itself when the data is read via an oracle db." I have no good answer to that one.
    Maybe this paper has: http://www.oracle.com/technology/tech/linux/pdf/Linux-FS-Performance-Comparison.pdf - it's a bit older, but explains some of the interdependencies.
    Last question: While you spent a lot of effort proving that this one query is slower on OCFS2 or raw devices than on ext3 for the initial read (which is why you flushed the buffer cache before each run), how realistic is this scenario once the system goes into production? I mean, how many times will this query be read completely from disk, as opposed to using blocks from the buffer cache? If you consider that, what impact does the "IO read time from disk" have on the overall performance of the system? And if you do not isolate the test to just reads, how do writes compare?
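    One way to take the file system cache out of such an OS-level copy test - a sketch assuming GNU dd, whose iflag=direct option bypasses the page cache (the test file path is hypothetical):
    dd if=/u01/testfile of=/dev/null bs=8k iflag=direct
    That makes the copy see roughly the same I/O path as the O_DIRECT-based OCFS2 and raw-device reads.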
    Just some questions. Thanks.

  • Raw devices and cluster file system

    What is the difference between raw devices and a cluster file system?

    See this thread, it may help:
    http://asktom.oracle.com/pls/asktom/f?p=100:11:3285616048047775::::P11_QUESTION_ID:7931107631402

  • ASM RMAN backup to File System

    Hi all,
    I have an RMAN backup (datafile and controlfile) which was taken from an ASM instance (not RAC), Oracle 11.2.0.2, on a Linux server. Now I want to restore that backup into a new database on a Windows/Linux server using regular file system storage (a single-instance RDBMS) instead of ASM.
    Is this possible?
    Can I restore an ASM RMAN backup onto file system storage on a new server?
    Kindly clarify my question.
    Thanks in advance,
    Nonuday

    Nonuday wrote:
    Hi Levi,
    Thanks for your invaluable script and blog.
    Can you clarify this query for me?
    I have an RMAN backup taken from ASM; it is a database and control file backup, containing datafiles and controlfiles.
    Now I need to restore this on my system, and here I don't use ASM or archive logs; I use a single instance in NOARCHIVELOG mode.
    I have restored the control file from the RMAN controlfile backup.
    Before restoring the control file I checked the original pfile of the backup database, which had parameters like
    'db_create_file_dest',
    'db_create_online_log_dest',
    'db_recovery_file_dest_size',
    'db_recovery_dest',
    'log_archive_dest'.
    Since I am not going to run the DB in archivelog mode, I didn't use any of the above parameters and created a database.
    Now my question is:
    If I restore the database, the datafiles will be restored, and after renaming all the logfiles the database will be opened.
    I want to know whether this method is correct or wrong, and whether the database will work as it did previously. Or do I need to create db_file_recovery and the other parameters for this database as well?

    About Parameters:
    All these parameters should reflect your current environment; any reference to the old environment must be modified.
    About Filesystem used:
    It does not matter what filesystem you are using: the files (datafile/redolog/controlfile/archivelog/backuppiece) are created in a binary format which depends on the platform only. So the same binary file (e.g. a datafile) has the same format and content on raw devices, ASM, ext3, ext2, and so on. To the database it is only a location where files are stored; the files are the same. ASM has a different architecture from a regular filesystem and needs to be managed in a different manner (i.e. using RMAN).
    About Database:
    Since your database files are the same even on a different filesystem, what you need to do is rename your datafiles/redofiles in the controlfile during the restore; the redo files will be recreated.
    So it does not matter whether your database is in noarchivelog or archivelog mode: the way you restore onto ASM is the same way you restore onto a regular filesystem (it's only about renaming the database files in the controlfile during restore).
    On the blog, the post "How Migrate All Files on ASM to Non-ASM (Unix/Linux)" is about moving files from one filesystem to another, but you can modify the script for restore purposes:
    ## SET NEWNAME tells RMAN where each file will be restored and keeps the file locations in a memory buffer
    RMAN> set newname for datafile 1 to <location>;
    ### SWITCH takes the list of files from the memory buffer (RMAN) and renames the already-restored files in the controlfile.
    RMAN> switch datafile/tempfile all;
    With the database mounted, use the script below:
    I just commented three lines that are unnecessary in your case.
    SET serveroutput ON;
    DECLARE
      vcount  NUMBER:=0;
      vfname VARCHAR2(1024);
      CURSOR df
      IS
        SELECT file#,
          rtrim(REPLACE(name,'+DG_DATA/drop/datafile/','/u01/app/oracle/oradata/drop/'),'.0123456789') AS name
        FROM v$datafile;
      CURSOR tp
      IS
        SELECT file#,
          rtrim(REPLACE(name,'+DG_DATA/drop/tempfile/','/u01/app/oracle/oradata/drop/'),'.0123456789') AS name
        FROM v$tempfile;
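      -- Note: the REPLACE/RTRIM in the cursors above rewrite each ASM path to the
      -- target filesystem path and strip the trailing ASM '.NNN.NNNNNNNNN' suffix;
      -- vcount in the loops below appends a sequence number so the generated
      -- filenames stay unique.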
    BEGIN
    --  dbms_output.put_line('CONFIGURE CONTROLFILE AUTOBACKUP ON;'); ### commented
      FOR dfrec IN df
      LOOP
        IF dfrec.name  != vfname THEN
          vcount      :=1;
          vfname     := dfrec.name;
        ELSE
          vcount := vcount+1;
          vfname:= dfrec.name;
        END IF;
      --  dbms_output.put_line('backup as copy datafile ' || dfrec.file# ||' format  "'||dfrec.name ||vcount||'.dbf";');  ### commented
      END LOOP;
      dbms_output.put_line('run');
      dbms_output.put_line('{');
      FOR dfrec IN df
      LOOP
        IF dfrec.name  != vfname THEN
          vcount      :=1;
          vfname     := dfrec.name;
        ELSE
          vcount := vcount+1;
          vfname:= dfrec.name;
        END IF;
        dbms_output.put_line('set newname for datafile ' || dfrec.file# ||'  to  '''||dfrec.name ||vcount||'.dbf'' ;');
      END LOOP;
      FOR tprec IN tp
      LOOP
        IF tprec.name  !=  vfname THEN
          vcount      :=1;
          vfname     := tprec.name;
        ELSE
          vcount := vcount+1;
          vfname:= tprec.name;
        END IF;
        dbms_output.put_line('set newname for tempfile ' || tprec.file# ||'  to  '''||tprec.name ||vcount||'.dbf'' ;');
        END LOOP;
          dbms_output.put_line('restore database;');
        dbms_output.put_line('switch tempfile all;');
        dbms_output.put_line('switch datafile all;');
        dbms_output.put_line('recover database;');
        dbms_output.put_line('}');
    ---   dbms_output.put_line('alter database open;');  ### commented because you need to rename your redologs in the controlfile before opening the database
        dbms_output.put_line('exit');
    END;
    /
    After the restore you must rename your redologs in the controlfile from the old location to the new location:
    e.g
    ##  use this query to get current location of redolog
    SQL>  select group#,member from v$logfile order by 1;
    ## and change from <old_location> to <new_location>
    SQL> ALTER DATABASE
      RENAME FILE '+DG_TSM_DATA/tsm/onlinelog/group_3.263.720532229'
               TO '/u01/app/oracle/oradata/logs/log3a.rdo';
    When you have changed all the redologs in the controlfile, issue the command below:
    SQL> alter database open resetlogs;
    PS: Always track the database in real time using the database alert log.
    HTH,
    Levi Pereira

  • Backup to file system and sbt_tape at the same time?

    Hello!
    Is it possible to do an RMAN backup to disk and sbt_tape at the same time (just one backup, not two)? The reason is that I want to copy the RMAN backup files from the local file system via robocopy to another server (in a different building), so that I can use them for fast restores in the case of a crash.
    If not, what is the recommended strategy in this case? Backups should be available on tape AND on the file system of a different machine.
    Environment: Oracle 10g, Windows Server 2008, CommVault for backup to tape.
    Thanks for your advice.
    Best regards,
    Christian

    If you manually copy backupsets out of the FRA to another location that is still accessible as "local disks", you can use the CATALOG command in RMAN to "catalog" the copies.
    Thus, if you copy or move files from the FRA to /mybackups/MYDB, you would use
    "CATALOG START WITH /mybackups/MYDB" or individually "CATALOG BACKUPPIECE /mybackups/MYDB/backuppiece_1" etc.
    Once you have moved or removed the backupsets out of the FRA, you must use
    "CROSSCHECK BACKUP"
    and
    "DELETE EXPIRED BACKUP"
    to update the RMAN repository. Otherwise, Oracle will continue to "account" for the disk space consumed by the BackupPieces in V$FLASH_RECOVERY_AREA_USAGE and will soon "run out of space".
    You would do the same for ArchiveLogs (CROSSCHECK ARCHIVELOG ALL; DELETE EXPIRED ARCHIVELOG ALL) if your ArchiveLogs go to the FRA via USE_DB_RECOVERY_FILE_DEST.
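    Putting the sequence together (a sketch, using the same example path /mybackups/MYDB as above):
    RMAN> CATALOG START WITH '/mybackups/MYDB';
    RMAN> CROSSCHECK BACKUP;
    RMAN> DELETE EXPIRED BACKUP;
    RMAN> CROSSCHECK ARCHIVELOG ALL;
    RMAN> DELETE EXPIRED ARCHIVELOG ALL;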

  • After Restoring/Backup of File System XI Java Instances are not up!

    Hello all,
    We are facing a problem restoring the SAP XI system: after taking a backup of the system, the Java instances in the SAP XI system do not start again. The ABAP connections are fine.
    Can anyone provide suggestions/solutions for restoring the XI system?
    The system information is as follows.
    System component: SAP NetWeaver 2004s, PI 7.0
    Operating system: SunOS 5.9, SunOS 5.10
    Database: Oracle 9.2.0
    Regards,
    Ketan Patel

    If it's REALLY a PI 7.0 (SAP_BASIS 700 and WebAS Java 7.00), then it's not compatible. WebAS 7.00 needs Oracle 10g (http://service.sap.com/pam).
    Also see
    http://service.sap.com/nw2004s
    --> Availability
    --> SAP NetWeaver 7.0 (2004s) PAM
    If you open the PowerPoint, you will see that Oracle 9 is not listed. I wonder how you got that installed.
    Nevertheless, if you recover a Java instance, both the filesystem and the database content (of the Java schema) must be in sync, meaning you need to restore both a database (schema) backup and a filesystem backup that were taken at the same time.
    Check Java Backup and Restore:
    Restoring the System
    1. Shut down the system.
    2. Install a new AS Java system using SAPInst, or restore the file system from the offline backups that you created.
    3. Import the database backup using the relevant tools provided by the database vendor.
    4. Overwrite the SAP system directory /usr/sap/.
    5. Start the system (see Starting and Stopping SAP NetWeaver ABAP and Java).
    The J2EE Engine is restored with the last backup.
    Markus

  • Backup into file system

    Hi
    Setting backup-storage with the following configuration is not generating backup files under the said location. We are pumping a huge volume of data, and the data (a few GB) is not getting backed up to the file system. Can you let me know what I am missing here?
    Thanks
    sunder
    <distributed-scheme>
         <scheme-name>distributed-Customer</scheme-name>
         <service-name>DistributedCache</service-name>
         <!-- <thread-count>5</thread-count> -->
         <backup-count>1</backup-count>
         <backup-storage>
         <type>file-mapped</type>
         <directory>/data/xx/backupstorage</directory>
         <initial-size>1KB</initial-size>
         <maximum-size>1KB</maximum-size>
         </backup-storage>
         <backing-map-scheme>
              <read-write-backing-map-scheme>
                   <scheme-name>DBCacheLoaderScheme</scheme-name>
                   <internal-cache-scheme>
                   <local-scheme>
                        <scheme-ref>blaze-binary-backing-map</scheme-ref>
                   </local-scheme>
                   </internal-cache-scheme>
                   <cachestore-scheme>
                        <class-scheme>
                             <class-name>com.xxloader.DataBeanInitialLoadImpl
                             </class-name>
                             <init-params>
                                  <init-param>
                                       <param-type>java.lang.String</param-type>
                                       <param-value>{cache-name}</param-value>
                                  </init-param>
                                  <init-param>
                                       <param-type>java.lang.String</param-type>
                                       <param-value>com.xx.CustomerProduct
                                       </param-value>
                                  </init-param>
                                  <init-param>
                                       <param-type>java.lang.String</param-type>
                                       <param-value>CUSTOMER</param-value>
                                  </init-param>
                             </init-params>
                        </class-scheme>
                   </cachestore-scheme>
                   <read-only>true</read-only>
              </read-write-backing-map-scheme>
         </backing-map-scheme>
         <autostart>true</autostart>
    </distributed-scheme>
    <local-scheme>
    <scheme-name>blaze-binary-backing-map</scheme-name>
    <high-units>{back-size-limit 1}</high-units>
    <unit-calculator>BINARY</unit-calculator>
    <expiry-delay>{back-expiry 0}</expiry-delay>
    <cachestore-scheme></cachestore-scheme>
    </local-scheme>

    Hi
    We tried the following configuration:
    <near-scheme>
         <scheme-name>blaze-near-HeaderData</scheme-name>
    <front-scheme>
    <local-scheme>
    <eviction-policy>HYBRID</eviction-policy>
    <high-units>{front-size-limit 0}</high-units>
    <unit-calculator>FIXED</unit-calculator>
    <expiry-delay>{back-expiry 1h}</expiry-delay>
    <flush-delay>1m</flush-delay>
    </local-scheme>
    </front-scheme>
    <back-scheme>
    <distributed-scheme>
    <scheme-ref>blaze-distributed-HeaderData</scheme-ref>
    </distributed-scheme>
    </back-scheme>
    <invalidation-strategy>present</invalidation-strategy>
    <autostart>true</autostart>
    </near-scheme>
    <distributed-scheme>
    <scheme-name>blaze-distributed-HeaderData</scheme-name>
    <service-name>DistributedCache</service-name>
    <partition-count>200</partition-count>
    <backing-map-scheme>
    <partitioned>true</partitioned>
    <read-write-backing-map-scheme>
    <internal-cache-scheme>
    <external-scheme>
    <high-units>20</high-units>
    <unit-calculator>BINARY</unit-calculator>
    <unit-factor>1073741824</unit-factor>
    <nio-memory-manager>
    <initial-size>1MB</initial-size>
    <maximum-size>50MB</maximum-size>
    </nio-memory-manager>
    </external-scheme>
    </internal-cache-scheme>
    <cachestore-scheme>
    <class-scheme>
    <class-name>
    com.xx.loader.DataBeanInitialLoadImpl
    </class-name>
    <init-params>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>{cache-name}</param-value>
    </init-param>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>com.xx.bean.HeaderData</param-value>
    </init-param>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>SDR.TABLE_NAME_XYZ</param-value>
    </init-param>
    </init-params>
    </class-scheme>
    </cachestore-scheme>
    </read-write-backing-map-scheme>
    </backing-map-scheme>
    <backup-count>1</backup-count>
    <backup-storage>
    <type>off-heap</type>
    <initial-size>1MB</initial-size>
    <maximum-size>50MB</maximum-size>
    </backup-storage>
    <autostart>true</autostart>
    </distributed-scheme>
    With this configuration the process's residual main-memory consumption is about 15 GB.
    When we changed this configuration to
    <near-scheme>
         <scheme-name>blaze-near-HeaderData</scheme-name>
    <front-scheme>
    <local-scheme>
    <eviction-policy>HYBRID</eviction-policy>
    <high-units>{front-size-limit 0}</high-units>
    <unit-calculator>FIXED</unit-calculator>
    <expiry-delay>{back-expiry 1h}</expiry-delay>
    <flush-delay>1m</flush-delay>
    </local-scheme>
    </front-scheme>
    <back-scheme>
    <distributed-scheme>
    <scheme-ref>blaze-distributed-HeaderData</scheme-ref>
    </distributed-scheme>
    </back-scheme>
    <invalidation-strategy>present</invalidation-strategy>
    <autostart>true</autostart>
    </near-scheme>
    <distributed-scheme>
    <scheme-name>blaze-distributed-HeaderData</scheme-name>
    <service-name>DistributedCache</service-name>
    <partition-count>200</partition-count>
    <backing-map-scheme>
    <partitioned>true</partitioned>
    <read-write-backing-map-scheme>
    <internal-cache-scheme>
    <external-scheme>
    <high-units>20</high-units>
    <unit-calculator>BINARY</unit-calculator>
    <unit-factor>1073741824</unit-factor>
    <nio-memory-manager>
    <initial-size>1MB</initial-size>
    <maximum-size>50MB</maximum-size>
    </nio-memory-manager>
    </external-scheme>
    </internal-cache-scheme>
    <cachestore-scheme>
    <class-scheme>
    <class-name>
    com.xx.loader.DataBeanInitialLoadImpl
    </class-name>
    <init-params>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>{cache-name}</param-value>
    </init-param>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>com.xx.bean.HeaderData</param-value>
    </init-param>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>SDR.TABLE_NAME_XYZ</param-value>
    </init-param>
    </init-params>
    </class-scheme>
    </cachestore-scheme>
    </read-write-backing-map-scheme>
    </backing-map-scheme>
    <backup-count>1</backup-count>
    <backup-storage>
    <type>file-mapped</type>
    <initial-size>1MB</initial-size>
    <maximum-size>100MB</maximum-size>
    <directory>/data/xxcache/blazeload/backupstorage</directory>
    <file-name>{cache-name}.store</file-name>
    </backup-storage>
    <autostart>true</autostart>
    </distributed-scheme>
    Note that the backup storage is file-mapped:
    <backup-storage>
    <type>file-mapped</type>
    <initial-size>1MB</initial-size>
    <maximum-size>100MB</maximum-size>
    <directory>/data/xxcache/blazeload/backupstorage</directory>
    <file-name>{cache-name}.store</file-name>
    </backup-storage>
    We still see that the process's residual main-memory consumption is 15 GB, and we also see that the /data/xxcache/blazeload/backupstorage folder is empty.
    We wanted to check where backup storage maintains its information - we would like to offload this to a flat file.
    Appreciate any pointers in this regard.
    Thanks
    sunder

  • Raw device backup with rman

    I would like to back up our Oracle9i DB (on raw devices) with RMAN.
    Is such a thing possible?

    Yes, you can use RMAN to take a backup of a database sitting on raw devices, and you can use OSB to take the backup to tape.
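    A minimal sketch (the disk path is illustrative, and the SBT step assumes a media-management channel, e.g. OSB, is already configured):
    RMAN> BACKUP DATABASE FORMAT '/backup/ora9i/%U';
    RMAN> BACKUP DEVICE TYPE SBT DATABASE;
    RMAN treats datafiles on raw devices like any other datafiles, so no special options are needed.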
    Daljit Singh

  • Backup root file system on DVD

    I have a root file system about 4 GB in size.
    Can I back it up to a DVD?
    I read that ufsdump works with tapes.
    Can it work with a DVD?
    If yes, what is the command format?

  • Aperture backup strategy - file system compatibility

    Hi all,
    I am beginning to run out of space for a managed library so I'm starting to look for alternatives. Currently I have a managed library, with a vault on another disk on the same computer. This vault is then mirrored to another computer (Linux) via rsync. This seems to work fine.
    The solution I'm thinking about is to skip the vault and mirror the library + referenced files directly, while Aperture is not running, of course.
    The question that arises is whether Aperture does/needs anything "nonstandard" in its files - resource forks, for example. The file names inside the library are a bit weird but within spec for a Unix filesystem, so that isn't a problem, but I would hate to have to recover a library and then discover that all the files are "disconnected" because of some minor change that occurred during backup/restore...
    Anyone with experience of this (mirroring libraries to a non-Apple filesystem and restoring them)?
    PowerMac G5 Dual 2.0 GHz, 2GB RAM Mac OS X (10.4.8) ATI Radeon 9650

    OK,
    I actually tried this myself instead. I created a new Library, imported a project into it to get some photos with metadata and versions, etc. The masters were relocated to outside the library. I then copied the library to and from a Linux server with rsync, and moved the masters to the file server as well.
    After opening Aperture again and reconnecting the masters, all seemed well.

  • Do we need to backup OS (file systems) on Exadata storage cells?

    We have received conflicting messages about whether we need to or not. I'd like to hear your opinions.
    Thanks!

    Hi,
    The answer is no.
    There is no need to backup the OS of the storage cell.
    Worst case, a complete storage cell needs to be replaced. A field engineer will open your broken storage cell, take out the onboard USB and put it inside the new storage cell.
    The system will boot from this USB, and you can choose to apply the 'old' configuration to the new cell.
    Regards,
    Tycho

  • Convert Raw Device to file system based file systems for datafiles [HP-UX]

    Hello experts,
    Once again seeking guidance...
    I am in the process of migrating my database (Oracle 7.2.3 on HP-UX 10.20 to Oracle 8.1.7 64-bit on HP-UX 11).
    One of our steps is to convert our raw-device datafiles to file-system-based files on the same server and version - Oracle 7.2.3 on HP-UX 10.20.
    E.g. /dev/vg00/rlvol1 is to become /d01/oradata/cmtdb/tbs1.dbf
    Is this possible?
    Can I just do the following?
    a. Shutdown database (normal)
    b. dd if=/dev/vg00/rlvol1 of=/d01/oradata/cmtdb/tbs1.dbf bs=20k
    c. chown oracle7:dba /d01/oradata/cmtdb/tbs1.dbf
    d. svrmgrl> startup mount
    e. alter database rename file '/dev/vg00/rlvol1' to '/d01/oradata/cmtdb/tbs1.dbf'
    f. alter database open.
    Thanks very much for your replies.
    Please tell me about any problems I should anticipate.
    Best Regards
    Yogeeraj
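
    One thing worth double-checking (a hedged note, not from the original thread): dd'ing the whole raw volume copies the volume's full size, which may be larger than the datafile itself. You can bound the copy from V$DATAFILE, assuming here a 2 KB DB_BLOCK_SIZE (substitute your own):
    SQL> SELECT bytes FROM v$datafile WHERE name = '/dev/vg00/rlvol1';
    $ dd if=/dev/vg00/rlvol1 of=/d01/oradata/cmtdb/tbs1.dbf bs=2048 count=<bytes/2048 + 1>
    The extra block covers the O/S file header Oracle keeps at the start of each datafile; HP-UX raw logical volumes normally carry no additional OS header to skip, but confirm this with your platform vendor.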


  • Using raw device

    Good morning,
    I would like to run some I/O tests with a 10g database. Currently the redo logs are on a standard ZFS filesystem (cooked). I would like to migrate these redo logs to raw devices without recreating the database - is this possible?
    Thanks for your help
    Fabrice Chapuis

    One more question: in case of node failure, is redo on raw devices more critical for a recovery than if it were on a filesystem?

    More critical? Or more difficult?
    Redo is critical, period. It does not matter whether you use a cooked file system or a raw device.
    As for difficulty - that depends on what you are attempting to do. If you want to treat the raw device as a file system, that will be difficult, as it is not a file system (which rather begs the question of why treat it like a file system when it is not).
    From a RMAN perspective - a device is a device. RMAN does not care.
    From a DB perspective - that is why ASM exists: to remove the complexities of using raw devices, eliminate the need for an external volume manager, and provide the DBA with a familiar SQL*Plus interface and SQL commands to administer ASM.
    It may seem difficult at first - but anything with a learning curve tends to seem difficult in the beginning. All you need to do is learn the basics and grasp the concepts... and that "difficulty" disappears.
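    For the original question - moving the redo logs to raw devices without recreating the database - a minimal sketch (the raw volume names are hypothetical; size each volume to hold the log plus any platform OS header):
    SQL> ALTER DATABASE ADD LOGFILE GROUP 4 ('/dev/vg01/rlv_redo4') SIZE 100M;
    SQL> ALTER DATABASE ADD LOGFILE GROUP 5 ('/dev/vg01/rlv_redo5') SIZE 100M;
    -- switch until the old file-system groups are INACTIVE, then drop them
    SQL> ALTER SYSTEM SWITCH LOGFILE;
    SQL> ALTER DATABASE DROP LOGFILE GROUP 1;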

  • Moving Oracle datafiles between raw devices and file systems

    Product: ORACLE SERVER
    Date written: 1999-11-30
    Moving Oracle datafiles between raw devices and file systems
    ======================================================
    The Unix dd command can be used to move Oracle datafiles between a Unix
    file system and a raw device, but platform-specific raw-device
    characteristics call for some care. If the database cannot start after a
    botched move, the error
    ORA-7366 (sfifi: invalid file, file does not have valid header block.)
    may be raised.
    For example, Digital Unix requires a 64 KB OS header on raw devices, so
    the iseek and oseek options of the dd command must be used.
    The following example walks through moving a datafile from a raw device
    to a Unix file system.
    (Current layout)
    - Current location: /dev/rdsk/c0t15d0s7
    - Target location: /oracle/file/system.dbf
    - File size: 488636416 bytes <--- the V$DATAFILE.BYTES column value!
    - DB_BLOCK_SIZE: 2048 bytes
    (Preparation)
    1. Calculate the number of Oracle blocks:
    BYTES / DB_BLOCK_SIZE = 488636416 / 2048 = 238592 (blocks)
    2. Add the O/S file header block:
    238592 + 1 = 238593 (blocks)
    : This can be verified with "ls -l /oracle/file/system.dbf"; the O/S
    file header is always exactly one block.
    3. Calculate the raw-device OS header blocks:
    64K / DB_BLOCK_SIZE = 65536 / 2048 = 32 (blocks)
    : The dd command below uses DB_BLOCK_SIZE (2048 bytes) as its block
    size, hence the division by 2048.
    (Command format)
    $ dd if=<raw device> of=<UFS file> bs=<oracle blocksize> \
    iseek=<blocks to skip> count=<total count>
    (Procedure)
    (1) SVRMGR> STARTUP MOUNT
    (2) SVRMGR> !dd if=/dev/rdsk/c0t15d0s7 of=/oracle/file/system.dbf
    bs=2048 iseek=32 count=238593
    (3) SVRMGR> ALTER DATABASE RENAME FILE '/dev/rdsk/c0t15d0s7' TO
    '/oracle/file/system.dbf';
    (4) SVRMGR> ALTER DATABASE OPEN;
    ========================================================================
    Conversely, the command format for moving from a Unix file system to a
    raw device is as follows.
    (Command format)
    $ dd if=<UFS file> of=<raw device> bs=<oracle blocksize> \
    oseek=<blocks to skip> count=<total count>
    Redo log files, not just datafiles, can be moved in the same way.
    [Caution] The size of the raw-device block header can differ per OS, so
    check with the platform vendor before doing this work.

  • ARE RAW DEVICES SUPPORTED OVER A CLUSTER FILE SYSTEM

    Can raw partitions be defined for datafiles after having chosen Cluster File System as the storage option for the database while creating a fresh database using DBCA?

    > Do update on how the partitions have to be defined in either case?
    For both ASM and OCFS, a partition must exist on the disk - it can be of any partition type, it does not matter; the point is simply that the s/w references a partition and not an entire disk.
    So, for example, /dev/sdaf and /dev/sdag are two shared devices on the cluster (LUNs on the SAN or whatever).
    You create a partition on each. E.g.
    # fdisk -l /dev/sdaf
    Disk /dev/sdaf: 36.5 GB, 36573020160 bytes
    255 heads, 63 sectors/track, 4446 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdaf1 1 4446 35712463+ 83 Linux
    To use the first device as an OCFS device, you need to build an ocfs file system on it using mkfs.
    It can then be mounted as a "normal" cooked file system. Remember that /etc/fstab needs to be updated so it is mounted on startup.
    To use the second device for ASM, you have two choices. If you have the ASMLib kernel module installed, you can use it to configure a volume label and assign the volume for use by ASM.
    Alternatively, you simply map the device (partition) to a raw device for detection by ASM. E.g.
    # raw /dev/raw/raw1 /dev/sdag1
    Of course, you also need to make this permanent by updating the raw-device list config file so that the mapping is performed on reboot. On Linux, this is the /etc/sysconfig/rawdevices file. Also remember that the user and group ownership of the logical raw device must give ASM full access to it (e.g. use chown oracle.dba /dev/raw/raw1).
    In a nutshell, this is how raw devices are used as OCFS and ASM volumes (on RHEL specifically, but I expect no major differences in this approach on other OSs).
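    For reference, the rawdevices file takes one "raw-device block-device" pair per line (a sketch using the same hypothetical partition as above):
    # /etc/sysconfig/rawdevices
    /dev/raw/raw1 /dev/sdag1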
