Datafile location migration

Due to a lack of space that may occur in the next few months, I have to move oradata (datafiles) to a different disk subsystem (partition). Is there a manual on how to do it properly?
Thanks to all replies!!
Edited by: gracic on Nov 10, 2008 3:30 PM

For the UNDO tablespace you can create a new undo tablespace with datafiles in the new location and drop the old one. Since you have to restart the database anyway to change the SYSTEM datafile location, you can handle the SYSTEM tablespace (and the rest) with the offline rename method.
The basic steps are:
Connect as SYSDBA with CONNECT / AS SYSDBA.
Shut down the database instance with SHUTDOWN IMMEDIATE.
Rename and/or move the datafiles at the OS level.
Start the Oracle database with STARTUP MOUNT.
ALTER DATABASE RENAME FILE '<original datafile name>' TO '<new datafile name>';
Open the database with ALTER DATABASE OPEN.
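As a concrete illustration, here is a minimal sketch of that sequence for a single datafile; the directories (/u01 as the old location, /u02 as the new one), the ORCL path component and the filename users01.dbf are placeholders for your own layout:
SQL> CONNECT / AS SYSDBA
SQL> SHUTDOWN IMMEDIATE
-- copy or move the file at the OS level, for example:
--   mv /u01/oradata/ORCL/users01.dbf /u02/oradata/ORCL/users01.dbf
SQL> STARTUP MOUNT
SQL> ALTER DATABASE RENAME FILE '/u01/oradata/ORCL/users01.dbf' TO '/u02/oradata/ORCL/users01.dbf';
SQL> ALTER DATABASE OPEN
SQL> SELECT name FROM v$datafile;
-- the query should now show the new path for every file you renamed
Repeat the RENAME FILE step for every datafile you moved before opening the database.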

Similar Messages

  • Template Datafile Location issue

    Hello,
    I previously created a new database template based on an existing database. I later deleted this template I created. Now when I attempt to create a new database with the DBCA using ANY template (i.e. General Purpose, etc.), I receive a DBCA screen that was not displayed before.
    The screen is labelled: Template Datafile Location
    It gives the following message:
    The template datafile "{ORACLE_HOME}\assistants\dbca\templates\Data_Warehouse.dfj" specified in the template doesn't exist. Specify new location of the template datafile:
    I looked in the registry and made sure the parameter for {ORACLE_HOME} is the same as the location of the 'assistants' folder.
    The template files currently in the "{ORACLE_HOME}\assistants\dbca\templates\" location are:
    Data_Warehouse.dbc
    General_Purpose.dbc
    New_Database.dbt
    Transaction_Processing.dbc
    Transaction_Processing.dfj
    Any useful advice would be appreciated. Thank you

    Hello,
    My mistake, the files are *.dfj as you say.
    I am using Oracle 9i on a Windows server. I will look into the files on the disks and I will check if any of my colleagues have these files in their systems too.
    Thanks again.
    I don't know what may have occurred to delete them!!!

  • Change datafile locations

    How to change datafile locations through RMAN.
    D:\user.dbf
    I want to move it to the E drive. So first I copy the datafile to the E drive, but then how do I update the controlfile with the new datafile location? Please guide me.
    Thanks

    You can do this while the database is open; for that the tablespace must first be made read only.
    So, with the database open, put the tablespace in read-only mode:
    ALTER TABLESPACE current_tablespace READ ONLY;
    Copy the datafiles to the new location, then take the tablespace offline:
    ALTER TABLESPACE current_tablespace OFFLINE;
    SQL> ALTER DATABASE RENAME FILE
    2 'd:/user.dbf'
    3 TO
    4 'e:/user.dbf';
    SQL> ALTER TABLESPACE current_tablespace ONLINE;
    Then make it read/write again:
    SQL> ALTER TABLESPACE current_tablespace READ WRITE;
    If you want to check how the controlfile was updated with the new datafile location:
    SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
    This produces a readable copy of the contents of your controlfile in user_dump_dest.
    And don't forget to remove the datafiles from the old location using OS commands.
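    Since the question was specifically about RMAN: an alternative on 10g and later (a sketch only, not taken from this thread, assuming the datafile can be taken offline and the database runs in archivelog mode) is to let RMAN copy the file and switch the controlfile to the copy:
    RMAN> SQL "ALTER DATABASE DATAFILE ''D:\user.dbf'' OFFLINE";
    RMAN> BACKUP AS COPY DATAFILE 'D:\user.dbf' FORMAT 'E:\user.dbf';
    RMAN> SWITCH DATAFILE 'D:\user.dbf' TO DATAFILECOPY 'E:\user.dbf';
    RMAN> RECOVER DATAFILE 'E:\user.dbf';
    RMAN> SQL "ALTER DATABASE DATAFILE ''E:\user.dbf'' ONLINE";
    The SWITCH command is what updates the controlfile to point at the new copy, which is the "how do I update the controlfile" part of the question.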

  • Reuse existing MaxDB datafiles during migration

    Hello all,
    We did a test heterogeneous migration of the SAP system with MaxDB 7.7 using SAPInst together with the Migration Monitor. Now we are planning to proceed with the productive migration.
    Creating the datafiles during the test run took 4 hours, so I am interested whether there is some way to reuse the existing datafiles and logfiles of MaxDB. When I configure SAPInst and define the number of log volumes I need, SAPInst asks if I want to reuse the logfiles (I respond "OK"). But when I configure the number of datafiles I need, I just receive an input error about not enough free space.
    Is there any way to reuse these MaxDB datafiles rather than deleting and recreating them?
    Thanks in advance for any suggestions.
    Best Regards,
    Emil

    > Is there any way to reuse these MaxDB datafiles rather than deleting and recreating them?
    No, there's no way to reuse data volumes.
    One of the reasons for that is that the data volumes are chained via pointers in the respective volume headers and have to fit into the chain of data volumes. The data volumes also have to match what is configured in the database parameters.
    You can however tell MaxDB not to format the data volumes, by setting the parameter FormatDataVolume (FORMAT_DATAVOLUME) to NO.
    This setting however means that the volumes are not completely formatted and only the space is allocated at the file system.
    Anyhow - are you positively sure that you need to perform a heterogeneous system copy?
    Be aware that this is really only necessary when you want to switch the platform's byte order.
    regards,
    Lars

  • RC_datafile location changed after trying to recover datafile onto another host

    Hi,
    I have a database whose datafiles were all in i:\ofs1. Yesterday, to verify my tape backup,
    I tried to restore and recover the database onto another host and location, c:\ofs1.
    I connected to the recovery catalog for the database while performing the restore and recovery, and everything completed successfully.
    But then I found out that the datafile locations in the rc_database view in my recovery catalog had been updated from i:\ofs1 to c:\ofs1;
    apparently, while doing the restore, RMAN also updated the datafile locations in rc_datafile.
    And now when I perform a backup of the original database, it searches for the datafiles in c:\ofs1 instead of i:\ofs1.
    Does anyone know what I should do to change the datafile locations in rc_datafile back?
    Thanks
    Vincent

    Hi,
    I can still perform backups, but the RMAN output shows that the datafiles are now in c:\ofs1, as below:
    input datafile fno=00005 name=C:\OFS1\DATA_SE.DBF
    input datafile fno=00006 name=C:\OFS1\INDEX_SE.DBF
    input datafile fno=00013 name=C:\OFS1\DATA_SE01.DBF
    input datafile fno=00014 name=C:\OFS1\INDEX_SE01.DBF
    input datafile fno=00003 name=C:\OFS1\SYSAUX01.DBF
    input datafile fno=00009 name=C:\OFS1\RMAN_CATALOG.DBF
    input datafile fno=00001 name=C:\OFS1\SYSTEM01.DBF
    input datafile fno=00002 name=C:\OFS1\UNDOTBS01.DBF
    input datafile fno=00010 name=C:\OFS1\DATA_ODCSE01.DBF
    input datafile fno=00012 name=C:\OFS1\FLOW_1.DBF
    input datafile fno=00007 name=C:\OFS1\DATA_Q107.DBF
    input datafile fno=00008 name=C:\OFS1\INDEX_Q107.DBF
    input datafile fno=00011 name=C:\OFS1\INDEX_ODCSE01.DBF
    input datafile fno=00004 name=C:\OFS1\USERS01.DBF
    channel oem_backup_disk1: starting piece 1 at 08-JUL-10
    channel oem_backup_disk1: finished piece 1 at 08-JUL-10
    piece handle=E:\BACKUP\OFS1\30LI990V_1_1 tag=TAG20100708T133318 comment=NONE
    channel oem_backup_disk1: backup set complete, elapsed time: 00:18:35
    Finished backup at 08-JUL-10
    and when I try to recover a datafile copy, the error below shows:
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 07/08/2010 13:52:22
    RMAN-00600: internal error, arguments [8064] [1] [C:\OFS1\SYSTEM01.DBF] []
    Is it OK if I change the data in the RMAN catalog to update the rc_datafile locations back to i:\ofs1?
    Thanks
    Vincent
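    Before changing anything by hand, it may help to look at exactly what the catalog currently records. A read-only check along these lines (the catalog owner and connect string are placeholders) shows the filenames stored per file#:
    SQL> CONNECT rcat_owner@catdb
    SQL> SELECT db.name, df.file#, df.name
      2  FROM   rc_database db, rc_datafile df
      3  WHERE  df.db_key = db.db_key
      4  AND    db.name = 'OFS1'
      5  ORDER  BY df.file#;
    Manually updating the catalog base tables is generally discouraged; connecting RMAN to the original target database together with the catalog and letting it resync (RMAN> RESYNC CATALOG;) is usually the safer first step, since the catalog is then refreshed from the original controlfile.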

  • Migrating with RMAN from 10g to 11g

    Hi gurus,
    I am using the following procedure to migrate a database from 10g to 11g using RMAN.
    Source side:
    RMAN>connect target
    RMAN>backup database;
    RMAN>backup archivelog all;
    RMAN>backup current controlfile;
    SQL> create pfile from spfile;
    Copied the datafile and archivelog backup files plus the pfile and password file to the target side, i.e. the 11g server.
    Target side:
    Set proper parameters for 11g
    SQL>startup nomount;
    RMAN>connect target
    RMAN>set dbid=<source database id>
    RMAN>catalog start with '<rman backup file location>';
    RMAN>restore controlfile;
    RMAN>run
    {
    set newname for datafile 1 to '<target datafile location with file name>';
    restore database;
    switch datafile all;
    }
    This finished, and up to now it has been successful. But when I try to recover with
    RMAN>recover database;
    it fails with
    RMAN-00571
    RMAN-00569
    RMAN-00571
    RMAN-03002
    ORA-19698
    Can you please suggest a solution for this?
    thanks a lot.

    I'm not sure what you're doing is supported.
    You are taking a 10g database and restoring and recovering it using 11g software.
    I think you are allowed to do that with 10g software only.
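    For reference only, and not a fix for the error above: when a restore of an older-release backup under newer binaries does complete, the usual follow-up described in Oracle's upgrade documentation is to open the restored database in upgrade mode and run the upgrade scripts, roughly like this sketch:
    RMAN> recover database;
    SQL> ALTER DATABASE OPEN RESETLOGS UPGRADE;
    SQL> @?/rdbms/admin/catupgrd.sql
    SQL> -- then restart the instance and recompile invalid objects
    SQL> @?/rdbms/admin/utlrp.sql
    The recovery itself still has to succeed first, so the RMAN/ORA errors above need to be resolved before any of this applies.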

  • Location of user .dbf files

    Hi
    I am a fresh database student.
    I have created a table named employee. In which location are these tables stored?
    Please help me.
    Thnx

    Hello,
    You can obtain both the logical (tablespace) and physical (datafile) location of a table with the query below:
    select A.owner, A.segment_name, A.segment_type, A.tablespace_name, A.extent_id, B.file_name
    from dba_extents A, dba_data_files B
    where A.file_id = B.file_id
    and A.owner = '<owner_of_your_table>'
    and A.segment_type = 'TABLE'
    and A.segment_name = '<your_table>';
    In fact, a (non-partitioned) table is a segment. The segment has one or several extents, and each extent may be located in separate datafiles (of the same tablespace).
    Hope this helps.
    Best regards,
    Jean-valentin
    Edited by: Lubiez Jean-Valentin on Mar 20, 2010 8:33 AM
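    For example, if the employee table was created in a schema called SCOTT (a hypothetical owner; substitute your own), the same query would be run as:
    select A.owner, A.segment_name, A.segment_type, A.tablespace_name, A.extent_id, B.file_name
    from dba_extents A, dba_data_files B
    where A.file_id = B.file_id
    and A.owner = 'SCOTT'
    and A.segment_type = 'TABLE'
    and A.segment_name = 'EMPLOYEE';
    Note that the owner and table name must be given in upper case, since that is how the data dictionary stores unquoted identifiers.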

  • How to write a program for the long text in EMIGALL for the DEVICE LOCATION

    Hi,
    I am pretty new to this ISU field and I have been asked to code a long text in EMIGALL for DEVICE LOCATION,
    so I would like someone to help me with it.
    thanks in advance.
    Robert.

    Robert,
    You can find most of the answers to your questions in the Guidelines:
    Chapter 2.4.2 describes how to configure the field rule Fixed Value.
    Chapter 2.4.5 describes how to configure the field rule via KSM.
    Chapter 2.5 describes the Key and Status Management and the usage of the KSM in field rules.
    The specific answers to your questions are:
    (1) ... When I was adding the fixed rule, it was asking for a domain, so what should be the Domain that I should add ... You may ignore the domain field when creating a fixed value object. It's more for information purposes.
    (2) ... and what should I keep the fixed value, string or field or abap rule ... I'd suggest using 'String' and entering the specific value in the field 'FV contents'.
    (3) ... Finally you need to adjust the RETURN-FIELD of the newly created BAPI migration object to AUTO-X_HEAD-TDNAME, where do I make this adjustment, can you specify that ... The return field can only be adjusted in the migration object maintenance screen (MigObject -> Change). Please see chapter 3.1 for more details on the 'return field' and figure 3-8 in chapter 3.4.2 for how to generate a BAPI migration object.
    (4) ... Well I would also like to ask about the x_head-tdname = via KSM (e.g. DEVLOC), where should I put the value ... You wanted to know how to migrate a long text of a device location (migration object DEVLOC). According to chapter 2.4.5 you need to enter the name of the superior migration object (here DEVLOC) in the 'MigObject1' field on the 'via KSM' sub-screen of the field maintenance screen.
    (5) ... and what exactly would it be, can I put DEVLOC in the ID of technical Object and where should I put this value ... I am not sure I understand your question. In the end you will need to pass the number (ID) of the technical object in the TDNAME field. Either you put the ID into the import file (only if you know it), or you use the 'via KSM' field rule so that the load report replaces the legacy system key of the device location with the SAP key before the auto data is passed to the application, i.e. your new function module.
    Kind regards,
    Fritz

  • Datafile Status - Recover

    Hi All,
    My currently configuration is as follows
    DB: 11.1.0.6.0
    OS: Enterprise Linux 5.2
    I was away most of last week and came into the office only to be greeted by the error below:
    ORA-00372: file 318 cannot be modified at this time
    SQL> SELECT * FROM v$recover_file;
       FILE# ONLINE  ONLINE_ ERROR                   CHANGE# TIME
         318 OFFLINE OFFLINE CANNOT OPEN FILE              0
    I dug a little deeper and discovered a datafile was deleted on the 12th.
    Fri Mar 12 12:46:28 2010
    ALTER DATABASE DATAFILE '/oradata13/zambia/cdrstorage131.dbf' RESIZE 10752M
    ORA-3297 signalled during: ALTER DATABASE DATAFILE '/oradata13/zambia/cdrstorage131.dbf' RESIZE 10752M ...
    Fri Mar 12 12:47:24 2010
    ALTER DATABASE DATAFILE '/oradata13/zambia/cdrstorage147.dbf' RESIZE 10752M
    ORA-3297 signalled during: ALTER DATABASE DATAFILE '/oradata13/zambia/cdrstorage147.dbf' RESIZE 10752M ...
    Fri Mar 12 12:56:48 2010
    ALTER TABLESPACE AGGREGATES ADD DATAFILE  '/oradata04/zambia/cdrstorage148.dbf' SIZE 30720M REUSE
    Fri Mar 12 13:00:35 2010
    Completed: ALTER TABLESPACE AGGREGATES ADD DATAFILE  '/oradata04/zambia/cdrstorage148.dbf' SIZE 30720M REUSE
    Fri Mar 12 13:00:45 2010
    Thread 1 advanced to log sequence 573213
      Current log# 5 seq# 573213 mem# 0: /var/zambia/ZAMBIA/onlinelog/redo5_1.log
      Current log# 5 seq# 573213 mem# 1: /oradata01/app/oracle/flash_recovery_area/ZAMBIA/onlinelog/redo5_2.log
    Fri Mar 12 13:05:24 2010
    ALTER TABLESPACE CDRSTORAGE ADD DATAFILE  '/oradata04/zambia/cdrstorage148.dbf' SIZE 30720M REUSE
    ORA-1537 signalled during: ALTER TABLESPACE CDRSTORAGE ADD DATAFILE  '/oradata04/zambia/cdrstorage148.dbf' SIZE 30720M REUSE ...
    Fri Mar 12 13:08:50 2010
    ALTER DATABASE DATAFILE '/oradata04/zambia/cdrstorage148.dbf'  OFFLINE DROP
    Completed: ALTER DATABASE DATAFILE '/oradata04/zambia/cdrstorage148.dbf'  OFFLINE DROP
    Fri Mar 12 13:09:25 2010
    ALTER TABLESPACE CDRSTORAGE ADD DATAFILE  '/oradata04/zambia/cdrstorage148.dbf' SIZE 30720M REUSE
    ORA-1537 signalled during: ALTER TABLESPACE CDRSTORAGE ADD DATAFILE  '/oradata04/zambia/cdrstorage148.dbf' SIZE 30720M REUSE ...
    Fri Mar 12 13:14:06 2010
    ALTER TABLESPACE AGGREGATES DROP DATAFILE '/oradata04/zambia/cdrstorage148.dbf'
    ORA-3264 signalled during: ALTER TABLESPACE AGGREGATES DROP DATAFILE '/oradata04/zambia/cdrstorage148.dbf' ...
    Fri Mar 12 13:14:43 2010
    ALTER DATABASE DATAFILE '/oradata04/zambia/cdrstorage148.dbf'  ONLINE
    ORA-1113 signalled during: ALTER DATABASE DATAFILE '/oradata04/zambia/cdrstorage148.dbf'  ONLINE ...
    Fri Mar 12 13:15:03 2010
    ALTER TABLESPACE AGGREGATES DROP DATAFILE '/oradata04/zambia/cdrstorage148.dbf'
    ORA-3264 signalled during: ALTER TABLESPACE AGGREGATES DROP DATAFILE '/oradata04/zambia/cdrstorage148.dbf' ...
    Fri Mar 12 13:16:10 2010
    Thread 1 advanced to log sequence 573214
      Current log# 6 seq# 573214 mem# 0: /var/zambia/ZAMBIA/onlinelog/redo6_1.log
    Unfortunately, I do not have a valid backup (we only export base tables on a daily basis).
    1. How do I check for objects affected by the missing datafile?
    2. Most of the data that sits on the affected TABLESPACE can easily be re-computed, but how do I go
    about sorting this issue out?
    3. As far as the database is concerned, the datafile needs media recovery; how do I sort this issue out?
    Regards,
    Phiri

    Unfortunately, I do not have a valid backup (we only export base tables on a daily basis).
    1. How do I check for objects affected by the missing datafile?
    2. Most of the data that sits on the affected TABLESPACE can easily be re-computed, but how do I go
    about sorting this issue out?
    3. As far as the database is concerned, the datafile needs media recovery; how do I sort this issue out?
    Regards,
    Phiri
    Hi,
    I am not sure which error you want resolved. Moreover, I am not sure why on earth anybody would resize and then, when the resize is not possible, drop that datafile without migrating the data.
    Anyways,
    1) You can query dba_segments for tablespaces and dba_extents for datafiles to get the list of objects affected by the missing datafile (see the sketch after this reply).
    2) Which issue do you mean?
    3) Is your database in archivelog mode? And do you have all the archives from the point when you issued the command? Maybe we could recover, but it all depends.
    Regards
    Anurag
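    As mentioned in point 1 above, a quick way to list the objects that have extents in the missing file (file 318 in this case) is a dictionary query along these lines:
    SQL> SELECT DISTINCT owner, segment_name, segment_type, tablespace_name
      2  FROM   dba_extents
      3  WHERE  file_id = 318;
    If dba_extents cannot be read for that file (which can happen with a locally managed tablespace whose file is truly gone), fall back to listing the segments of the affected tablespace from dba_segments. Anything listed would need to be rebuilt or reloaded if the file cannot be recovered and no backup exists.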

  • RMAN can't SET NEWNAME for datafiles added after Level 1

    Version: 11.2.0.3
    Platform : Solaris 10
    I have the most recent Level 0 , Level 1 and post-L1 Archive logs of the source DB.
    I am trying to restore and recover on a different machine using plain RMAN (not RMAN DUPLICATE) into a new datafile location.
    After the Level 1 backup was taken, 2 datafiles (namdata01.dbf, finaldata01.dbf) were added (this got 'recorded' in the subsequent post-L1 archivelogs).
    Before I ran restore and recover, I restored the latest control file from the most recent L1:
    RMAN> restore controlfile from '/u01/CATALOGTST/rmanBkpPieces/SNTCDEV_L1_0cnjqk54_1_1_20120829.rmbk';
    Understandably, this control file doesn't have info about the 2 datafiles added after L1. Wish I could restore the control file from an archive log :)
    So, I cataloged the archive logs as well using the CATALOG command.
    RMAN> catalog start with '/u01/CATALOGTST/rmanBkpPieces';
    using target database control file instead of recovery catalog
    searching for all files that match the pattern /u01/CATALOGTST/rmanBkpPieces
    List of Files Unknown to the Database
    =====================================
    File Name: /u01/CATALOGTST/rmanBkpPieces/SNTCDEV_full_07njqj6j_1_1_20120828.rmbk
    File Name: /u01/CATALOGTST/rmanBkpPieces/SNTCDEV_full_08njqj8u_1_1_20120828.rmbk
    File Name: /u01/CATALOGTST/rmanBkpPieces/SNTCDEV_L1_0bnjqk3d_1_1_20120829.rmbk
    File Name: /u01/CATALOGTST/rmanBkpPieces/SNTCDEV_L1_0cnjqk54_1_1_20120829.rmbk
    File Name: /u01/CATALOGTST/rmanBkpPieces/arch_1_13_790513173.arc
    File Name: /u01/CATALOGTST/rmanBkpPieces/arch_1_14_790513173.arc
    File Name: /u01/CATALOGTST/rmanBkpPieces/arch_1_15_790513173.arc
    File Name: /u01/CATALOGTST/rmanBkpPieces/06njqj6h_1_1
    File Name: /u01/CATALOGTST/rmanBkpPieces/09njqj90_1_1
    File Name: /u01/CATALOGTST/rmanBkpPieces/0anjqk3b_1_1
    File Name: /u01/CATALOGTST/rmanBkpPieces/0dnjqk56_1_1
    Do you really want to catalog the above files (enter YES or NO)? YES
    cataloging files...
    cataloging done
    List of Cataloged Files
    =======================
    File Name: /u01/CATALOGTST/rmanBkpPieces/SNTCDEV_full_07njqj6j_1_1_20120828.rmbk
    File Name: /u01/CATALOGTST/rmanBkpPieces/SNTCDEV_full_08njqj8u_1_1_20120828.rmbk
    File Name: /u01/CATALOGTST/rmanBkpPieces/SNTCDEV_L1_0bnjqk3d_1_1_20120829.rmbk
    File Name: /u01/CATALOGTST/rmanBkpPieces/SNTCDEV_L1_0cnjqk54_1_1_20120829.rmbk
    File Name: /u01/CATALOGTST/rmanBkpPieces/arch_1_13_790513173.arc                         -------------------> arch logs that contain info on the new datafiles
    File Name: /u01/CATALOGTST/rmanBkpPieces/arch_1_14_790513173.arc                         -------------------> arch logs that contain info on the new datafiles
    File Name: /u01/CATALOGTST/rmanBkpPieces/arch_1_15_790513173.arc                          -------------------> arch logs that contain info on the new datafiles
    File Name: /u01/CATALOGTST/rmanBkpPieces/06njqj6h_1_1
    File Name: /u01/CATALOGTST/rmanBkpPieces/09njqj90_1_1
    File Name: /u01/CATALOGTST/rmanBkpPieces/0anjqk3b_1_1
    File Name: /u01/CATALOGTST/rmanBkpPieces/0dnjqk56_1_1
    RMAN> EXIT
    During recovery, RMAN applied the archive logs and managed to create the datafiles successfully. But it didn't restore those datafiles to the new location specified with SET NEWNAME. Luckily, I had created the original path, and these 2 datafiles got restored there.
    RMAN doesn't seem to enforce SET NEWNAME for datafiles added after the Level 1 backup, despite the cataloging.
    Does SET NEWNAME work only for RESTORE?
    Log of restore and recover
    $ cat restore-recover.txt
    run
    {
    set newname for database to '/u01/app/CLONE1/oradata/sntcdev/%b' ;
    set newname for tempfile '/u01/app/oradata/sntcdev/temp01.dbf' to '/u01/app/CLONE1/oradata/sntcdev/temp01.dbf' ;
    restore database;
    switch datafile all;
    switch tempfile all;
    recover database;
    }
    $
    $ rman target / cmdfile=restore-recover.txt
    Recovery Manager: Release 11.2.0.3.0 - Production on Sun Sep 16 21:27:49 2012
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    connected to target database: SNTCDEV (DBID=2498462290, not open)
    RMAN> run
    2> {
    3> set newname for database to '/u01/app/CLONE1/oradata/sntcdev/%b' ;
    4> set newname for tempfile '/u01/app/oradata/sntcdev/temp01.dbf' to '/u01/app/CLONE1/oradata/sntcdev/temp01.dbf' ;
    5> restore database;
    6> switch datafile all;
    7> switch tempfile all;
    8> recover database;
    9> }
    10>
    11>
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    Starting restore at 16-SEP-12
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=18 device type=DISK
    channel ORA_DISK_1: starting datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    channel ORA_DISK_1: restoring datafile 00001 to /u01/app/CLONE1/oradata/sntcdev/system01.dbf
    channel ORA_DISK_1: restoring datafile 00002 to /u01/app/CLONE1/oradata/sntcdev/sysaux01.dbf
    channel ORA_DISK_1: restoring datafile 00003 to /u01/app/CLONE1/oradata/sntcdev/undotbs01.dbf
    channel ORA_DISK_1: restoring datafile 00004 to /u01/app/CLONE1/oradata/sntcdev/users01.dbf
    channel ORA_DISK_1: restoring datafile 00005 to /u01/app/CLONE1/oradata/sntcdev/example01.dbf
    channel ORA_DISK_1: restoring datafile 00006 to /u01/app/CLONE1/oradata/sntcdev/cisdata01.dbf
    channel ORA_DISK_1: reading from backup piece /u01/RMAN_bkp/BKP_sntcdev/SNTCDEV_full_07njqj6j_1_1_20120828.rmbk
    channel ORA_DISK_1: errors found reading piece handle=/u01/RMAN_bkp/BKP_sntcdev/SNTCDEV_full_07njqj6j_1_1_20120828.rmbk
    channel ORA_DISK_1: failover to piece handle=/u01/CATALOGTST/rmanBkpPieces/SNTCDEV_full_07njqj6j_1_1_20120828.rmbk tag=TAG20120828T234834
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:01:35
    Finished restore at 16-SEP-12
    datafile 1 switched to datafile copy
    input datafile copy RECID=8 STAMP=794179772 file name=/u01/app/CLONE1/oradata/sntcdev/system01.dbf
    datafile 2 switched to datafile copy
    input datafile copy RECID=9 STAMP=794179772 file name=/u01/app/CLONE1/oradata/sntcdev/sysaux01.dbf
    datafile 3 switched to datafile copy
    input datafile copy RECID=10 STAMP=794179772 file name=/u01/app/CLONE1/oradata/sntcdev/undotbs01.dbf
    datafile 4 switched to datafile copy
    input datafile copy RECID=11 STAMP=794179772 file name=/u01/app/CLONE1/oradata/sntcdev/users01.dbf
    datafile 5 switched to datafile copy
    input datafile copy RECID=12 STAMP=794179772 file name=/u01/app/CLONE1/oradata/sntcdev/example01.dbf
    datafile 6 switched to datafile copy
    input datafile copy RECID=13 STAMP=794179772 file name=/u01/app/CLONE1/oradata/sntcdev/cisdata01.dbf
    renamed tempfile 1 to /u01/app/CLONE1/oradata/sntcdev/temp01.dbf in control file
    Starting recover at 16-SEP-12
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting incremental datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    destination for restore of datafile 00001: /u01/app/CLONE1/oradata/sntcdev/system01.dbf
    destination for restore of datafile 00002: /u01/app/CLONE1/oradata/sntcdev/sysaux01.dbf
    destination for restore of datafile 00003: /u01/app/CLONE1/oradata/sntcdev/undotbs01.dbf
    destination for restore of datafile 00004: /u01/app/CLONE1/oradata/sntcdev/users01.dbf
    destination for restore of datafile 00005: /u01/app/CLONE1/oradata/sntcdev/example01.dbf
    destination for restore of datafile 00006: /u01/app/CLONE1/oradata/sntcdev/cisdata01.dbf
    channel ORA_DISK_1: reading from backup piece /u01/RMAN_bkp/BKP_sntcdev/SNTCDEV_L1_0bnjqk3d_1_1_20120829.rmbk
    channel ORA_DISK_1: errors found reading piece handle=/u01/RMAN_bkp/BKP_sntcdev/SNTCDEV_L1_0bnjqk3d_1_1_20120829.rmbk
    channel ORA_DISK_1: failover to piece handle=/u01/CATALOGTST/rmanBkpPieces/SNTCDEV_L1_0bnjqk3d_1_1_20120829.rmbk tag=TAG20120829T000356
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
    starting media recovery
    archived log for thread 1 with sequence 13 is already on disk as file /u01/CATALOGTST/rmanBkpPieces/arch_1_13_790513173.arc
    archived log for thread 1 with sequence 14 is already on disk as file /u01/CATALOGTST/rmanBkpPieces/arch_1_14_790513173.arc
    archived log for thread 1 with sequence 15 is already on disk as file /u01/CATALOGTST/rmanBkpPieces/arch_1_15_790513173.arc
    channel ORA_DISK_1: starting archived log restore to default destination
    channel ORA_DISK_1: restoring archived log
    archived log thread=1 sequence=12
    channel ORA_DISK_1: reading from backup piece /u01/CATALOGTST/rmanBkpPieces/0dnjqk56_1_1
    channel ORA_DISK_1: piece handle=/u01/CATALOGTST/rmanBkpPieces/0dnjqk56_1_1 tag=TAG20120829T000454
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
    archived log file name=/u01/archLogs/arch_1_12_790513173.arc thread=1 sequence=12
    archived log file name=/u01/CATALOGTST/rmanBkpPieces/arch_1_13_790513173.arc thread=1 sequence=13
    creating datafile file number=7 name=/u01/app/oradata/sntcdev/namdata01.dbf
    archived log file name=/u01/CATALOGTST/rmanBkpPieces/arch_1_13_790513173.arc thread=1 sequence=13
    archived log file name=/u01/CATALOGTST/rmanBkpPieces/arch_1_14_790513173.arc thread=1 sequence=14
    archived log file name=/u01/CATALOGTST/rmanBkpPieces/arch_1_15_790513173.arc thread=1 sequence=15
    creating datafile file number=8 name=/u01/app/oradata/sntcdev/finaldata01.dbf
    archived log file name=/u01/CATALOGTST/rmanBkpPieces/arch_1_15_790513173.arc thread=1 sequence=15
    unable to find archived log
    archived log thread=1 sequence=16
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 09/16/2012 21:29:51
    RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 16 and starting SCN of 1004015
    Recovery Manager complete.
    $
    $
    $ sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.3.0 Production on Sun Sep 16 21:30:04 2012
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> select name from v$datafile;
    NAME
    /u01/app/CLONE1/oradata/sntcdev/system01.dbf
    /u01/app/CLONE1/oradata/sntcdev/sysaux01.dbf
    /u01/app/CLONE1/oradata/sntcdev/undotbs01.dbf
    /u01/app/CLONE1/oradata/sntcdev/users01.dbf
    /u01/app/CLONE1/oradata/sntcdev/example01.dbf
    /u01/app/CLONE1/oradata/sntcdev/cisdata01.dbf
    /u01/app/oradata/sntcdev/namdata01.dbf           ----------------------> restored to old location ignoring SET NEWNAME ....
    /u01/app/oradata/sntcdev/finaldata01.dbf         ----------------------> restored to old location ignoring SET NEWNAME ....
    8 rows selected.
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    $ cd /u01/app/oradata/sntcdev            # -----------------------------> the old location
    $
    $ ls -alrt
    total 243924
    drwxr-xr-x   3 oracle   oinstall     512 Aug  5 10:55 ..
    drwxr-xr-x   2 oracle   oinstall     512 Sep 16 20:59 .
    -rw-r-----   1 oracle   oinstall 104865792 Sep 16 21:29 namdata01.dbf
    -rw-r-----   1 oracle   oinstall 19931136 Sep 16 21:29 finaldata01.dbf

    RMAN> run
    2> {
    3> set newname for database to '/u01/app/CLONE1/oradata/sntcdev/%b' ;
    4> set newname for tempfile '/u01/app/oradata/sntcdev/temp01.dbf' to '/u01/app/CLONE1/oradata/sntcdev/temp01.dbf' ;
    5> restore database;
    6> switch datafile all;
    7> switch tempfile all;
    8> recover database;
    9> }
    RMAN executes the commands in the run block stepwise, in your case starting with "set newname for database..." and ending with "recover database".
    Let me interpret it for you.
    1. You restored the controlfile from the L1 backup, which does not have any information about the 2 newly added datafiles. You cataloged the backup pieces and the archives to this controlfile, which means the controlfile is now aware that the required backups and archives are in the cataloged location.
    2. You set newname for database to the desired location; the restore is then performed from the L0 and L1 backups. (These 2 backups do not have any information about the newly added datafiles, and hence those 2 files are still not restored.)
    3. You execute restore database, which restores the files from the L0 and L1 backups.
    4. Switch datafile all: this renames all the files restored in the previous steps to the desired name/location mentioned in step 2.
    5. Recover database: this is where the archivelogs come into the picture. The datafiles recorded in the archives are created and recovered, but RMAN does not go back to steps 2 and 4 to restore them to the desired location and rename them. Those files have to be renamed later by moving them manually to the location you require.
    So, RMAN does not apply SET NEWNAME to datafiles that were added after the backup, because information about those files does not exist in the RMAN backup pieces.
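    To finish this particular clone, a sketch of the manual rename for the two files that were created during recovery (the paths are taken from the log above, and the database is still mounted, so the rename can be done before opening):
    -- first move the files at the OS level, for example:
    --   mv /u01/app/oradata/sntcdev/namdata01.dbf  /u01/app/CLONE1/oradata/sntcdev/namdata01.dbf
    --   mv /u01/app/oradata/sntcdev/finaldata01.dbf /u01/app/CLONE1/oradata/sntcdev/finaldata01.dbf
    SQL> ALTER DATABASE RENAME FILE '/u01/app/oradata/sntcdev/namdata01.dbf' TO '/u01/app/CLONE1/oradata/sntcdev/namdata01.dbf';
    SQL> ALTER DATABASE RENAME FILE '/u01/app/oradata/sntcdev/finaldata01.dbf' TO '/u01/app/CLONE1/oradata/sntcdev/finaldata01.dbf';
    After that, continue the recovery to the point you need (or open with RESETLOGS once recovery is complete).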

  • Standby Database Datafile Move

    Hi All
    Oracle v10g (10.2.0.4)
    Windows 2003
    I need to move some datafiles onto a larger volume on our primary database. I have no issue with this task, except for some uncertainty on what to do with the Standby database?
    Our Standby database (on the other side of the atlantic) has the exact same drive/file layout as the primary. I have a larger volume ready for the move on the Standby.
    We have standby_file_management set to auto.
    I assume that the data dictionary/control file information (datafile locations) will be updated via the logs being shipped? I assume I would need to manually move the files on the standby database?
    Has anyone practiced this recently? Could you give me a little guidance?
    Much appreciated.

    user3655049 wrote:
    Hi All
    Oracle v10g (10.2.0.4)
    Windows 2003
    I need to move some datafiles onto a larger volume on our primary database. I have no issue with this task, except for some uncertainty on what to do with the Standby database?
    Our Standby database (on the other side of the atlantic) has the exact same drive/file layout as the primary. I have a larger volume ready for the move on the Standby.
    We have standby_file_management set to auto.
    I assume that the data dictionary/control file information (datafile locations) will be updated via the logs being shipped? I assume I would need to manually move the files on the standby database?
    Has anyone practiced this recently? Could you give me a little guidance?
    Much appreciated.
    You do not need to modify the datafile locations on the standby. The structure can be different between primary and standby; even the primary can be ASM and the standby non-ASM.
    So no need to worry. But after changing the file locations, you have to update the DB_FILE_NAME_CONVERT parameter on the databases.
    Some more information to you, check this http://docs.oracle.com/cd/E11882_01/server.112/e17022/manage_ps.htm#i1034172
    >
    When you rename one or more datafiles in the primary database, the change is not propagated to the standby database. Therefore, if you want to rename the same datafiles on the standby database, you must manually make the equivalent modifications on the standby database because the modifications are not performed automatically, even if the STANDBY_FILE_MANAGEMENT initialization parameter is set to AUTO.
    >
    Edited by: CKPT on Jan 17, 2013 9:02 PM
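    For completeness, the manual modification on the standby that the quoted documentation refers to usually looks roughly like the sketch below (placeholder Windows paths; verify against the 10.2 Data Guard documentation for your configuration):
    SQL> -- on the standby: stop redo apply and switch to manual file management
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT='MANUAL';
    SQL> SHUTDOWN IMMEDIATE
    SQL> -- move the datafiles to the larger volume at the OS level, then:
    SQL> STARTUP MOUNT
    SQL> ALTER DATABASE RENAME FILE 'E:\ORADATA\PROD\USERS01.DBF' TO 'F:\ORADATA\PROD\USERS01.DBF';
    SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT='AUTO';
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    The rename on the primary is done separately with its own ALTER DATABASE RENAME FILE (with the tablespace offline or during a maintenance window) and, as the quote above says, is not propagated to the standby.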

  • How to migrate from a standard store setup to a split store (msg / idx) setup

    How can I migrate from a standard store setup to a split setup as described in
    https://wikis.oracle.com/display/CommSuite/Best+Practices+for+Messaging+Server+and+ZFS
    Can a 'reconstruct' run do the migration, or do I have to do an
    imsbackup / imsrestore?

    If your new setup would use the same filesystem layout as the old one (i.e. directory paths to the files would be the same when your migration is complete), you can just copy the existing store into the new structure, rename the old store directory to some other name, and mount the new hierarchy instead of it (zfs set mountpoint=...). The CommSuite Wiki also includes pages on more complex migrations, such as splitting the user populace into several stores (on different storage) and/or separate mailhosts. That generally requires that you lock the user in LDAP (perhaps deferring his incoming mail for later processing into the new location), migrate his mailbox, update the pointers in LDAP, and re-enable the account. The devil is in the details, for both methods. For the latter, see the Wiki; for the former I'll elaborate a bit here.
    1) To avoid any surprises, you should stop the messaging services before making the filesystem switch, finalize the data migration (probably with prepared data already mostly correct in the new hierarchy before you shut down the server, just resync'ing the recent changes into the new structure), make the switch and re-enable the server. If this is a lightly-used server which can tolerate some downtime - good for you. If it is a production server, you should schedule some time when it is not heavily used so you can shut it down, and try to be fast - so perhaps practice on a test system or a clone first.
    I'd strongly recommend taking this adventure in small reversible steps, using snapshots and backups, and renaming old files and directories instead of removing them - until you're sure it all works, at least.
    2) If your current setup already includes a message store on ZFS, and it is large enough for size to be a problem, you can save some time and space by tricks that lead to direct re-use of existing files as if they are the dataset with a prepopulated message store.
    * If this is a single dataset with lots of irrelevant data (i.e. one dataset for the messaging local zone root with everything in it, from OS to mailboxes) you can try zfs-cloning a snapshot of the existing filesystem and moving the message files to that clone's root (eradicating all irrelevant directories and files on the clone). Likewise, you'd remove the mailbox files on the original system (when the time is right, and after sync-ing).
    * If this is already a dedicated store dataset which contains directories like dbdata/, mboxlist/, partition/ and session/, and which you want to split further to store just some files (indices, databases) separately, you might find it easier to just make new filesystem datasets with proper recordsizes and relocate these files there, and move partition/primary to the remaining dataset's root, as above. In our setups, the other directories only take up a few megabytes and are not worth the hassle of cloning - which you can also do for larger setups (i.e. make 4 clones and keep different data at each one's root). Either way, when you're done, you can and should make sure that these datasets can mount properly into the hierarchy, yielding the pathnames you need.
    3) You might also look into separating the various log-file directories into datasets, perhaps with gzip-9 compression. In fact, to reduce needed IOPS and disk space at the expense of available CPU time, you might use lightweight compression (lzjb) on all messaging data, and gzip on WORM data sets - local zone (but not global OS) roots, logs, etc. Structured databases might better be left without compression, especially if you use reduced record sizes - they might just not compress enough to make a difference, just burning CPU cycles. Though you could look into "zle" compression, which only eliminates strings of null bytes - there are lots of these in fresh database files.
    4) If you need to recompress the data as suggested in point (3), or if you migrate from some other storage to ZFS, rsync may be your friend (at least, if your systems don't rely on ZFS/NFSv4 ACLs - in that case you're limited to Solaris tar or cpio, or perhaps to very recent rsync versions which claim ACL support). Namely, I'd suggest "rsync -acvPHK --delete-after $SRC/ $DST/" with maybe some more flags added for your needs. This would retain the hardlink structure which Messaging server uses a lot, and with "-c" it verifies file contents to make sure you've copied everything over (i.e. if a file changes without touching the timestamp).
    Also, if you were busy preparing the new data hierarchy with a running server, you'd need to rsync old data to new while the services are down. Note that reading and comparing the two structures can take considerable time - translating to downtime for the services.
    Note that if you migrate from ZFS to ZFS (splitting as described in (2)), you might benefit from "zfs diff" if your ZFS version supports it - this *should* report all objects that changed since the named snapshot, and you can try to parse and feed this to rsync or some other migration tool.
    Hope this helps and you don't nuke your system,
    //Jim Klimov

  • What advantages/disadvantages to having a non-RAC DB's datafiles on RAC clusterware

    Hello,
    We have a RAC 10.2.0.3 with OCFS2 files on Linux 4. We want to create a non-RAC database on one of the RAC nodes. The non-RAC 10.2.0.4 is in its own new, separate Oracle home. Would it be a bad idea to use the existing OCFS2 mountpoints for the non-RAC datafile locations? What would be the advantages/disadvantages of using OCFS2 for non-RAC database files? Thank you.

    I will disagree on whether a non-RAC database should use CFS or not. Since the file systems in use are clustered, I see no reason to use a non-clustered file system; by using a CFS, should the server the database is built on fail, you can, with a few minor edits, start the database on the other node. If you used a traditional file system to hold the non-RAC database, its data files would not be accessible to the remaining node.
    Placing the Oracle database on CFS and then starting it from another node in the event of failure is one of the early high availability (quick recovery) methods. This kind of setup was not uncommon in the VMS (DEC) world back in the late 1980s and early 1990s.
    Also, what if at a future date you determine you want to convert the database to RAC? It will be easier if the database is already on a CFS.
    IMHO -- Mark D Powell --

  • Why not to place Oracle Datafiles on local disks

    Hi, I want to ask a basic question.
    In almost every Oracle installation I have seen, the datafiles were placed on dedicated mount points, disks, etc.
    Is there a reason for this? Why does no one place Oracle datafiles on local disks? I couldn't find the answer in the installation guides.
    I am asking this because I am going to install Oracle (10g) on VMware and am trying to decide whether or not to put the datafiles on a vmdk (the local disk of the virtual machine)...
    Is this recommended? Or otherwise, why is it recommended to put datafiles on a mount point/external disk, etc.?
    Thanks in advance

    Hi,
    Oracle is flexible about placing datafiles at your desired location. To reduce I/O contention, it's advised to place the datafiles of a particular database in a specific location.
    For instance, while you create a database using DBCA, there will be an option to choose the datafile location.
    Regards
    KSG

  • 9i Standby without dataguard

    Hi all.
    Oracle 9i ( SE )
    I'm trying to figure out how to create a standby database without Data Guard. I've read that it's possible, but I can't find any documentation on exactly how to do it. From what I've gathered I should be able to:
    Create a duplicate database from primary backups
    Put standby database into "some recovery mode" - this is the bit I'm stumped at
    Copy archive logs from primary to standby, where they will magically be applied?
    I need to migrate lots of datafiles to new disks, and cannot afford the downtime to just move them, so I'm thinking the best way to do it is to create a standby and keep it as closely synced to the primary as possible, then shut down the primary, apply the final logs, and point the primary to the new datafile location.
    Can anyone either point me to a good article on doing a standby without Data Guard, or suggest a better way to move large (the move procedure will take about 4-5 hours) tablespaces/databases with minimum downtime?
    Thanks.

    Hrmm, maybe? So is "Data Guard" by itself available for SE?
    When I startup the database it reports
    "Oracle Data Guard is not available in this edition of Oracle."
    I'll go poke around that document too.
    btw, the document I read which suggests this is possible is http://www.dba-oracle.com/oracle_tips_failover.htm
    Thanks
    Message was edited by:
    nib000
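    For anyone hitting the same wall: the "some recovery mode" part of a manual (non-Data-Guard) standby is usually just a standby-mounted database that you periodically recover with the copied archive logs. A rough sketch, with placeholder paths, to be verified against the 9i "Managing a Standby Database" documentation rather than taken as a tested runbook:
    SQL> -- on the primary, once: create a standby controlfile and copy it (plus a backup of the datafiles) to the standby host
    SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/stby.ctl';
    SQL> -- on the standby:
    SQL> STARTUP NOMOUNT
    SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
    SQL> -- then, in a loop (e.g. from a scheduled job), after copying the newest archived logs over:
    SQL> RECOVER STANDBY DATABASE;
    (answer AUTO or supply the log names, and cancel when you want to stop applying)
    At cutover time you would stop the primary, ship and apply the final logs, then issue ALTER DATABASE ACTIVATE STANDBY DATABASE; and open the former standby.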
