Adding datafile - ORA-27038

Hello,
Trying to add a datafile and getting the following error:
alter tablespace SALES_BIG1 add datafile '/u02/oracle/admin/SALES/dbf/sales_big1_02.dbf' SIZE 4G REUSE;
ERROR at line 1:
ORA-01119: error in creating database file
'/u02/oracle/admin/SALES/dbf/sales_big1_02.dbf'
ORA-27038: created file already exists
HPUX-ia64 Error: 17: File exists
Additional information: 2
A soft link exists at that path, but there is no physical file behind it.
How can I add a datafile while that symbolic link exists?
Database is 10.2.0.4
Thanks
Bell
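ORA-27038 surfaces here because the OS-level create fails: the path is already occupied by the dangling symlink. A common fix is to delete the dangling link (or re-point it at a real file and add the datafile with REUSE). A minimal sketch of the check, using a throwaway path rather than Bell's real one:

```shell
# Simulate the situation: a symlink whose target does not exist.
link=/tmp/sales_big1_02_demo.dbf
ln -sf /tmp/no_such_target.dbf "$link"

# -L: the path is a symlink; -e follows the link, so it is false when the target is gone.
if [ -L "$link" ] && [ ! -e "$link" ]; then
    echo "dangling link: remove it, then re-run ALTER TABLESPACE ... ADD DATAFILE"
    rm "$link"
fi
```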

Let me describe my experience clearly.
The same thing happened to me. I added a tempfile 20 days earlier because I was getting temp-space errors. Later I saw the same error again. I checked how many tempfiles were in the TEMP tablespace via the DBA_TEMP_FILES view, but it showed only one tempfile (I don't know why this happened).
The error was: ORA-1652: unable to extend temp segment by 128 in tablespace TEMP
So I tried to add another tempfile of 1 GB with a 32 GB max size, but I got an error that a tempfile with the same name already existed. I checked again whether the file existed, but the dictionary still showed only one tempfile; the second file was not listed, even though I had actually created it earlier.
That is because it was a tempfile.
So I added the second tempfile again with the REUSE clause, and it worked fine. I just wanted to share that.
alter tablespace temp add tempfile '/data/orabi/temp02.dbf' size 1g reuse;
then
alter database tempfile '/data/orabi/temp02.dbf' autoextend on maxsize 32g;
Thanks
Ravi Kumar Shrivastava
ORACLE DBA
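A quick way to see the mismatch Ravi describes is to compare the controlfile's view of tempfiles with the dictionary view. A hedged sketch; only the file name comes from his post:

```sql
-- Controlfile view: every tempfile the instance still knows about
SELECT name, status FROM v$tempfile;

-- Dictionary view: this is the one that showed Ravi only a single file
SELECT file_name, bytes FROM dba_temp_files WHERE tablespace_name = 'TEMP';

-- REUSE lets the ADD succeed over the leftover OS file instead of raising ORA-27038
ALTER TABLESPACE temp ADD TEMPFILE '/data/orabi/temp02.dbf' SIZE 1G REUSE;
```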

Similar Messages

  • Error during recover datafile - ORA-01422: exact fetch returns more than ..

    Hi,
    we currently have a serious problem in our database. Some days ago we created a new datafile for a tablespace in the wrong directory:
    ALTER TABLESPACE "ANZSIIDX" ADD DATAFILE '/oralunadata/anzora8/ANZSIIDX08.dbf' SIZE 500M
    We recognized our mistake and created the datafile with the same name in the right directory:
    ALTER TABLESPACE "ANZSIIDX" ADD DATAFILE '/oralunaindex/anzora8/ANZSIIDX08.dbf' SIZE 500M
    We set the "wrong" datafile offline in order to rename it and move it to the right directory:
    alter database datafile '/oralunadata/anzora8/ANZSIIDX08.dbf' offline;
    ALTER TABLESPACE "ANZSIIDX"
    RENAME DATAFILE '/oralunadata/anzora8/ANZSIIDX08.dbf'
    TO '/oralunaindex/anzora8/ANZSIIDX09.dbf';
    After this we wanted to bring the datafile online again with a recovery, but
    this fails with the strange error message:
    SQL> recover datafile 109;
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01422: exact fetch returns more than requested number of rows
    ORA-06512: at line 20
    ORA-00279: change 10322956311023 generated at 04/10/2013 18:51:23 needed for
    thread 1
    ORA-00289: suggestion : /oralunaarchiv/anzora8/anzora8_1_315326_636567403.arc
    ORA-00280: change 10322956311023 for thread 1 is in sequence #315326
    A similar thing happens with our RMAN backup from last weekend, which failed:
    channel c4: backup set complete, elapsed time: 00:32:33
    input datafile fno=00109 name=/oralunadata/anzora8/ANZSIIDX08.dbf
    input datafile fno=00103 name=/oralunaindex/anzora8/ITOPROTOKOLLEIDX01.dbf
    input datafile fno=00097 name=/oralunadata/anzora8/ITOPROTOKOLLE03.dbf
    input datafile fno=00096 name=/oralunadata/anzora8/ITOPROTOKOLLE02.dbf
    channel c4: specifying datafile(s) in backupset
    channel c4: starting compressed incremental level 0 datafile backupset
    continuing other job steps, job failed will not be re-run
    ORA-00600: internal error code, arguments: [krbbfmx_notfound], [109], [12801], [], [], [], [], []
    ORA-01422: exact fetch returns more than requested number of rows
    ORA-00604: error occurred at recursive SQL level 1
    Has anybody an idea how we can bring the datafile back online in order to run an RMAN backup successfully?
    At the moment the only workaround we see is to move the objects from the affected tablespace to a new tablespace
    and then drop the empty tablespace, which would be quite time-consuming and not really practicable for us.
    kind regards,
    Marco

    Hi,
    currently we see this in v$datafile:
    /oralunaindex/anzora8/ANZSIIDX01.dbf     15     ANZSIIDX     10737418240     1310720     AVAILABLE     15     NO     0     0     0     10737352704     1310712     ONLINE
    /oralunaindex/anzora8/ANZSIIDX02.dbf     46     ANZSIIDX     10737418240     1310720     AVAILABLE     46     NO     0     0     0     10737352704     1310712     ONLINE
    /oralunaindex/anzora8/ANZSIIDX03.dbf     58     ANZSIIDX     10737418240     1310720     AVAILABLE     58     NO     0     0     0     10737352704     1310712     ONLINE
    /oralunaindex/anzora8/ANZSIIDX04.dbf     65     ANZSIIDX     10737418240     1310720     AVAILABLE     65     NO     0     0     0     10737352704     1310712     ONLINE
    /oralunaindex/anzora8/ANZSIIDX05.dbf     78     ANZSIIDX     10737418240     1310720     AVAILABLE     78     NO     0     0     0     10737352704     1310712     ONLINE
    /oralunaindex/anzora8/ANZSIIDX06.dbf     85     ANZSIIDX     10737418240     1310720     AVAILABLE     85     NO     0     0     0     10737352704     1310712     ONLINE
    /oralunaindex/anzora8/ANZSIIDX07.dbf     88     ANZSIIDX     10737418240     1310720     AVAILABLE     88     NO     0     0     0     10737352704     1310712     ONLINE
    /oralunaindex/anzora8/ANZSIIDX09.dbf     109     ANZSIIDX               AVAILABLE     109                                   RECOVER
    /oralunaindex/anzora8/ANZSIIDX08.dbf     110     ANZSIIDX     10737418240     1310720     AVAILABLE     110     NO     0     0     0     10737352704     1310712     ONLINE
    We don't use an RMAN catalog for backups; the information is stored only in the controlfile.
    The recover datafile command with the full path of the datafile failed with the same error message:
    SQL> connect / as sysdba
    Connected.
    SQL> recover datafile '/oralunaindex/anzora8/ANZSIIDX09.dbf';
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01422: exact fetch returns more than requested number of rows
    ORA-06512: at line 20
    ORA-00279: change 10322956311023 generated at 04/10/2013 18:51:23 needed for
    thread 1
    ORA-00289: suggestion : /oralunaarchiv/anzora8/anzora8_1_315326_636567403.arc
    ORA-00280: change 10322956311023 for thread 1 is in sequence #315326
    I guess it is an Oracle bug that sometimes occurs when you give two datafiles the same name in different directories, and that this produces errors like the above in the RMAN interface (packages)!?
    Maybe we could force the tablespace offline, rename the newly added datafiles and try to bring the tablespace online again, but nobody knows whether that would really work and whether we would get the tablespace back online.
    Therefore, at the moment, maybe the best way is to move the objects away from this tablespace and then drop it, isn't it?
    regards,
    Marco
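For the record, the usual sequence for moving a datafile avoids giving two files the same base name: take the file offline, move it at the OS level, rename it in the controlfile, then recover and bring it online. A sketch using Marco's paths; the '_moved' target name is illustrative, and it assumes the needed archived logs are still on disk:

```sql
ALTER DATABASE DATAFILE '/oralunadata/anzora8/ANZSIIDX08.dbf' OFFLINE;
-- at the OS level:  mv /oralunadata/anzora8/ANZSIIDX08.dbf /oralunaindex/anzora8/ANZSIIDX08_moved.dbf
ALTER DATABASE RENAME FILE '/oralunadata/anzora8/ANZSIIDX08.dbf'
                        TO '/oralunaindex/anzora8/ANZSIIDX08_moved.dbf';
RECOVER DATAFILE '/oralunaindex/anzora8/ANZSIIDX08_moved.dbf';
ALTER DATABASE DATAFILE '/oralunaindex/anzora8/ANZSIIDX08_moved.dbf' ONLINE;
```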

  • Datafile deleted with an OS command: ORA-1157, ORA-1110

    Product: ORACLE SERVER
    Written: 2004-03-08
    Datafile deleted with an OS command: ORA-1157, ORA-1110
    ======================================================
    [Note] The following does not apply to the SYSTEM tablespace.
    Before turning to database recovery, let us first look at the startup
    phases of an Oracle instance (i.e., the Oracle RDBMS).
    In the first phase the instance is started: the SGA (System Global Area)
    is allocated and the background processes are started, using the
    parameters in the initORACLE_SID.ora file.
    The second phase is mounting the database: the control file named in the
    parameter file is opened, and the database name and the redo log file
    names are read from it.
    In the third phase, all datafiles are opened using the information in the
    control file.
    SVRMGR> CONNECT INTERNAL;
    Connected.
    SVRMGR> STARTUP;
    ORACLE instance started.
    Database mounted.
    Database opened.
    Total System Global Area 1913196 bytes
    Fixed Size 27764 bytes
    Variable Size 1787128 bytes
    Database Buffers 65536 bytes
    Redo Buffers 32768 bytes
    At startup, the problem datafile still exists in the control file
    information but no longer exists on the OS, so the deleted datafile
    cannot be opened during the database open phase. The following
    datafile-open errors are therefore raised:
    SVRMGR> STARTUP;
    ORACLE instance started
    Database mounted
    ORA-01157 : cannot identify data file 11 - file not found
    ORA-01110 : data file 11 : '/user1/oracle7/dbs/user2.dbf'
    Attempting to dismount database .... Database dismounted
    Attempting to shutdown instance .... ORACLE instance shut down
    During the open phase the control file believes that datafile 11 (named
    in the ORA-1157 error) exists, but the file on the OS (the
    '/user1/oracle7/dbs/user2.dbf' file named in the ORA-1110 error) has
    been deleted.
    In this case, start the database only as far as STARTUP MOUNT, take the
    problem datafile offline, and then open the database.
    Note that once the database has opened successfully, unless you drop the
    tablespace containing the problem datafile, the same error will be
    raised at the datafile-open phase of every subsequent startup.
    Therefore, before taking the problem datafile offline and dropping the
    tablespace, you must first back up the data of the users who use that
    tablespace.
    The commands for taking the datafile offline are as follows.
    First, start SVRMGR in line mode.
    $ svrmgrl
    SVRMGR> CONNECT INTERNAL;
    SVRMGR> STARTUP MOUNT;
    ORACLE instance started.
    Database mounted.
    SVRMGR> ALTER DATABASE DATAFILE '/user1/oracle7/dbs/user2.dbf'
    OFFLINE DROP;
    Statement processed.
    SVRMGR> ALTER DATABASE OPEN;
    Statement processed.
    SVRMGR> DROP TABLESPACE tablespace_name INCLUDING CONTENTS;
    Statement processed.
    (A tablespace containing a datafile that has been taken offline with
    OFFLINE DROP in this way must be dropped. If the tablespace also
    contains other datafiles, export their contents first, drop and
    re-create the tablespace, and then import.)
    After the database has opened normally, querying V$DATAFILE (the view of
    database information from the control file, accessible as the SYS user)
    and DBA_DATA_FILES (the data dictionary view of datafile information,
    accessible as the SYSTEM user) shows the following:
    (1) SQL> SELECT * FROM V$DATAFILE ;
    FILE# STATUS NAME
    9 ONLINE /user1/oracle7/dbs/tools.dbf
    10 ONLINE /user1/oracle7/dbs/user1.dbf
    11 RECOVER /user1/oracle7/dbs/user2.dbf
    (2) SQL> SELECT * FROM DBA_DATA_FILES ;
    FILE_NAME FILE_ID TABLESPACE_NAME STATUS
    /user1/oracle7/dbs/tools.dbf 9 TOOLS AVAILABLE
    /user1/oracle7/dbs/user1.dbf 10 TEST AVAILABLE
    /user1/oracle7/dbs/user2.dbf 11 TEST AVAILABLE


  • ORA-27038: created file already exists

    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE 11.2.0.1.0 Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    RMAN-03009: failure of backup command on t1 channel at 01/06/2013 21:24:49
    ORA-19504: failed to create file "/oradbs/disk_bakup/amx53s/amx53s_arch_060113_1747"
    ORA-27038: created file already exists
    MY RMAN SCRIPT:
    export ORACLE_HOME
    ORACLE_SID=amx53s
    export ORACLE_SID
    date=`date '+%d%m%y_%H%M'`
    JOBLOG=/oradbs/disk_bakup/amx53s/amx53s.rman_$date.log
    $ORACLE_HOME/bin/rman target / NOCATALOG log "$JOBLOG" <<EOF
    run {
    allocate channel t1 type disk;
    backup
    format '/oradbs/disk_bakup/amx53s/amx53s_offline_$date'
    tag='amx53s_offline'
    database;
    backup
    format '/oradbs/disk_bakup/amx53s/amx53s_arch_$date'
    tag='amx53s_ARC_Tape_backup'
    archivelog all delete input ;
    backup current controlfile format '/oradbs/disk_bakup/amx53s/amx53s_Ctl_$date';
    crosscheck backup;
    release channel t1;
    }

    Hi,
    format '/oradbs/disk_bakup/amx53s/amx53s_offline_$date'
    Can you modify the output file format to amx53s_offline_%p_%s_$date, adding the piece number and backup set number, to make the output file name unique? RMAN creates a backup set with the name you provide and then uses the same name for the next backup set, which is why it returns the "file already exists" error. If your second backup set is created in the next minute the file name will be different, but within the same minute the file name is the same and this error occurs.
    Salman
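Salman's point is easy to demonstrate: the script's timestamp only has minute granularity, so two backup pieces formatted in the same minute collide on the same name, whereas RMAN's %s (backup set number) and %p (piece number) are unique per set and piece. A quick shell illustration using the naming from the script above:

```shell
# Both pieces formatted in the same minute get an identical suffix.
stamp=$(date '+%d%m%y_%H%M')
piece1="amx53s_arch_${stamp}"
piece2="amx53s_arch_${stamp}"
if [ "$piece1" = "$piece2" ]; then
    echo "name collision: $piece1"
fi
# With format '...%p_%s_$date', RMAN substitutes a unique piece and set number.
```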

  • Adding datafile on RAW partition

    hi masters,
    facing a problem adding a datafile on a raw partition. We are using Linux and Oracle 10g R2.
    Whenever we create a raw partition, a datafile (.dbf) is created with the partition.
    Now we want Oracle to use that datafile as an online datafile.
    We can't delete that file, because a monitoring file is created with every raw partition and datafile.
    Is there any command in Oracle that will instruct it to use that created datafile and bring it online?
    What is the standard procedure in this kind of scenario?
    We have also created symbolic links for all datafiles.
    Any suggestions?
    thanks and regards
    VD

    Hi Vikrant,
    Assuming you have done something like this to create the sym link
    ln -s /dev/raw/disk1 /oracle/links/datafile01.dbf
    then
    alter tablespace xxxx add datafile '/oracle/links/datafile01.dbf' size xxxM;
    should work fine.
    I've not got access to any raw devices to check this, though...
    Have you got another system with some free devices where you can create a new tablespace and add a file to it as a check? Are the other files for the tablespace also using symlinks?
    Regards,
    Harry

  • Error adding a datafile on a standby database

    Hi all.
    My Environment is as below:
    Oracle-8.1.7.4.0
    OS-HP Unix-11
    Primary database (only 1): Production
    Standby database: Different Machine but same location (HP box)
    Yesterday I added 2 datafiles to two different tablespaces. I have checked that the files are available on the production box, and one of the files is also available on the standby database.
    When I follow the steps for applying redo logs to the standby manually,
    I get an error.
    SVRMGRL>connect internal
    SVRMGRL>show parameter db_name
    SVRMGRL>recover standby database
    After above step I got the Error:
    ORA-00283 recovery session canceled due to error
    ORA-01157 can not identify/lock datafile 24 -see DBWR trace file
    ORA-01110 data file 24: '/location of .dbf file on standby database disk'
    Please let me know in detail because I am new in this field.
    Thanks in advance

    You will have the datafile information on the standby alert log.
    Something like '/u01/app/oracle/product/8174/db/<filename>.dbf'.
    1. connect as sysdba on standby database.
    2. alter database create datafile 'Production datafile name' as 'alert log filename';
    Example :
    alter database create datafile '/u01/data/user1.dbf'
    as '/u01/app/oracle/product/8174/db/<filename>.dbf';
    3. recover managed standby database;
    HTH.
    Regards,
    Arun

  • Adding Datafile

    Hi
    I am running on DB2/Linux(2.14.18)/ECC 5.0.
    I would like to know how I can add a datafile to a tablespace. I ask because I am very new to DB2 (I have never done this on this platform).
    Please advise.
    Best regards
    Ravi

    Hi ravi,
    <u><b>Adding a datafile using BRTOOLS</b></u>
    1) <b>su – ora<sid></b>
    2) start <b>brtools</b>
    3) Select option <b>2 -- Space management</b>
    4) Select option <b>1 -- Extend tablespace</b>
    5) Select option <b>3 --Tablespace name (specify tablespace name)</b> and say continue(<b>c- cont</b>)
    6) Select option <b>3 – New data file to be added</b> and return
    7) Select option<b> 5 -- Size of the new file in MB</b> (specify the size of the file)  and say continue
    regards,
    kanthi

  • Error while adding Image: ORA-00001: unique constraint

    Dear all,
    I have an error while adding images to MDM that I can't explain. I want to add 7231 images. About 6983 load fine. The rest throw this error:
    Error: Service 'SRM_MDM_CATALOG', Schema 'SRMMDMCATALOG2_m000', ERROR CODE=1 ||| ORA-00001: unique constraint (SRMMDMCATALOG2_M000.IDATA_6_DATAID) violated
    Last CMD: INSERT INTO A2i_Data_6 (PermanentId, DataId, DataGroupId, Description_L3, CodeName, Name_L3) VALUES (:1, :2, :3, :4, :5, :6)
    Name=PermanentId; Type=9; Value=1641157; ArraySize=0; NullInd=0;
    Name=DataId; Type=5; Value=426458; ArraySize=0; NullInd=0;
    Name=DataGroupId; Type=4; Value=9; ArraySize=0; NullInd=0;
    Name=Description_L3; Type=2; Value=; ArraySize=0; NullInd=0;
    Name=CodeName; Type=2; Value=207603_Img8078_gif; ArraySize=0; NullInd=0;
    Name=Name_L3; Type=2; Value=207603_Img8078.gif; ArraySize=0; NullInd=0;
    Error: Service 'SRM_MDM_CATALOG', Schema 'SRMMDMCATALOG2_m000', ERROR CODE=1 ||| ORA-00001: unique constraint (SRMMDMCATALOG2_M000.IDATA_6_DATAID) violated
    Last CMD: INSERT INTO A2i_Data_6 (PermanentId, DataId, DataGroupId, Description_L3, CodeName, Name_L3) VALUES (:1, :2, :3, :4, :5, :6)
    Name=PermanentId; Type=9; Value=1641157; ArraySize=0; NullInd=0;
    Name=DataId; Type=5; Value=426458; ArraySize=0; NullInd=0;
    Name=DataGroupId; Type=4; Value=9; ArraySize=0; NullInd=0;
    Name=Description_L3; Type=2; Value=; ArraySize=0; NullInd=0;
    Name=CodeName; Type=2; Value=207603_Img8085_gif; ArraySize=0; NullInd=0;
    Name=Name_L3; Type=2; Value=207603_Img8085.gif; ArraySize=0; NullInd=0;
    I checked all the data. There is no such dataset in the database. Can anybody give me a hint how to avoid this error?
    One thing I wonder about: the PermanentId is always the same, but I can't do anything about it here.
    BR
    Roman
    Edited by: Roman Becker on Jan 13, 2009 12:59 AM

    Hi Ritam,
    For such issues, please create a new thread or directly email the author rather than dragging up a very old thread; it is unlikely that the resolution would be the same, as the database/application/etc. releases would most probably be very different.
    For now I will close this thread as unanswered.
    SAP SRM Moderators.

  • Script for adding datafile to tablespace

    Hi
    Does anyone have a template alter tablespace add datafile script handy?
    Basically, in the event of a tablespace alert, we need to do three things:
    query that tablespace to see how much space is remaining;
    ascertain the size of the datafiles already added to that tablespace (to clarify what size we should make the additional datafile);
    and finally, run the command itself for adding the extra datafile.
    Thanks.
    10.0.2.0

    SELECT Total.tablespace_name "TSPACE_NAME",
    round(nvl(TOTAL_SPACE,0),2) Tot_space,
    round(nvl(USED_Space,0),2) Used_space,
    round(nvl(TOTAL_SPACE - USED_Space,0),2) Free_space,
    round(nvl(round(nvl((TOTAL_SPACE - USED_Space)*100,0),2)/(Total_space),0),2) "FREE%"
    FROM
    (select tablespace_name, sum(bytes/1024/1024) TOTAL_SPACE
    from sys.dba_data_files
    group by tablespace_name
    ) Total,
    (select tablespace_name, sum(bytes/1024/1024) USED_Space
    from sys.dba_segments
    group by tablespace_name
    ) USED
    WHERE Total.Tablespace_name(+) = USED.tablespace_name and
    round(nvl(round(nvl((TOTAL_SPACE - USED_Space)*100,0),2)/(Total_space),0),2)< 20;
    The above query displays tablespaces that have less than 20% free space.
    select file_name,autoextensible,sum(bytes)/1024/1024,sum(maxbytes)/1024/1024
    from dba_data_files
    where tablespace_name='TBS_NAME'
    group by file_name,autoextensible;
    The above query gives details of the datafiles, such as whether they are autoextensible.
    SELECT SUBSTR (df.NAME, 1, 40) file_name,dfs.tablespace_name, df.bytes / 1024 / 1024 allocated_mb,
    ((df.bytes / 1024 / 1024) - NVL (SUM (dfs.bytes) / 1024 / 1024, 0))
    used_mb,
    NVL (SUM (dfs.bytes) / 1024 / 1024, 0) free_space_mb
    FROM v$datafile df, dba_free_space dfs
    WHERE df.file# = dfs.file_id(+)
    GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes,dfs.tablespace_name
    ORDER BY file_name;
    The above query shows how much free space is left in each datafile.
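To make the arithmetic in the first query explicit, here is the same free-percentage check as a small awk sketch; the tablespace numbers are invented, and the 20% cutoff mirrors the query's WHERE clause:

```shell
# "Free %" as the first query computes it: (total - used) * 100 / total.
# Made-up numbers: total 4096 MB, used 3686.4 MB -> 10.00% free, below the 20% threshold.
awk 'BEGIN {
    total = 4096; used = 3686.4
    free_pct = (total - used) * 100 / total
    printf "free_pct=%.2f low=%s\n", free_pct, (free_pct < 20 ? "yes" : "no")
}'
```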

  • Adding datafile that is smaller than the original

    Hi,
    I was thinking of adding a new datafile to my existing two. The existing datafiles are set to a maximum size of 20 GB each. At the moment they are already at 90% usage.
    Could I add another datafile with a maximum size of 5 GB instead of 20 GB? Would there be any problems?
    Thanks!
    Flintz

    There's no reason you have to have your datafiles the same size, although most DBAs probably do, just for consistency.

  • After adding datafile with OEM a trace file genarating in alertSID.log file

    Hi to All,
    I added a datafile with OEM, but it was taking some time, so I terminated that program and added the datafile from the command prompt. Some time later I saw that datafile in the datafile list, but since then I have been getting errors, the server is very slow, users are accessing the server very slowly, and a trace file is being generated alongside the alertSID.log file, in the system language.

    user12239004 wrote:
    Hi anurag,
    # ls -lart /dev/vx/rdsk/datadg |grep his_link_y10_01_index
    crw------- 1 oracle dba 315,129093 Mar 14 09:53 his_link_y10_01_index_ts01
    crw------- 1 root root 315,129077 Mar 15 13:58 his_link_y10_01_index_ts02
    # chown oracle:dba his_link_y10_01_index_ts02
    # ls -lart /dev/vx/rdsk/datadg |grep his_link_y10_01_index
    crw------- 1 oracle dba 315,129093 Mar 14 09:53 his_link_y10_01_index_ts01
    crw------- 1 oracle dba 315,129077 Mar 15 13:58 his_link_y10_01_index_ts02
    root : root, I have changed the permission now .. hope the error will subside and my next startup will be smooth.
    Regards

    Great,
    So give it a try and let us know if you see any problem.
    Regards
    Anurag

  • Adding datafile to ASM file system tablespace

    Hi
    Can someone please help with writing a script to add a datafile to the SYSTEM tablespace on ASM storage?
    Below is the result of the query:
    select file_name, bytes, autoextensible, maxbytes from dba_data_files where tablespace_name='SYSTEM';
    FILE_NAME BYTES AUT MAXBYTES
    +DATA1/cir_p/datafile/system.260.6037360 5892997120 NO 0
    Thanks

    790072 wrote:
    Hi
    Can some one plz help in writing a script to add a datafile to the system tablespace on ASM filesystems.
    below is the result of the query ..
    select file_name, bytes, autoextensible, maxbytes from dba_data_files where tablespace_name='SYSTEM';
    FILE_NAME BYTES AUT MAXBYTES
    +DATA1/cir_p/datafile/system.260.6037360 5892997120 NO 0
    Thanks
    You can use
    ALTER TABLESPACE "SYSTEM" ADD DATAFILE '+DATA1' SIZE 1024M
    Cheers

  • Adding datafile automatically when tablespace is 90% full

    Hi,
    I have a tablespace with datafiles user01.dbf (autoextensible to 4 GB), user02.dbf (autoextensible to 4 GB), ..........
    I am thinking about automatically adding a datafile (alter tablespace add datafile):
    1. (total number of datafiles in the tablespace) * 4, minus the space used by the segments (from dba_segments) belonging to the USER tablespace; if that leaves less than 1 GB of space, add a datafile.
    I will put it in crontab and run it 24/7.
    Doesn't this look weird?
    Advice:
    1. Don't we have an option where the database server triggers an event when the tablespace is 90% full, so that I can trap the event and run a command to add a datafile? I know internally it does the same thing, but if the feature exists, why not use it?
    Any information will be helpful in this regard. The same question applies at the OS level: do I have to check the free space on the file system every second?

    Do one thing: write a script which will check the free space of your tablespaces, like..
    SELECT NAME FROM V$DATABASE;
    select to_char(sysdate,'dd-MON-yyyy hh:mi:ss')Snap_ShotTime from dual;
    SELECT Total.name "Tablespace Name",
    nvl(free_space, 0) free_space,
    nvl(total_space-free_space, 0) Used_space,
    total_space
    FROM
    (select tablespace_name, sum(bytes/1024/1024) free_Space
    from sys.dba_free_space
    group by tablespace_name
    ) free,
    (select b.name, sum(bytes/1024/1024) TOTAL_SPACE
    from sys.v_$datafile a, sys.v_$tablespace B
    where a.ts# = b.ts#
    group by b.name
    ) Total
    WHERE free.Tablespace_name(+) = Total.name
    AND Tablespace_name <> 'PERFSTAT'
    ORDER BY Total.name;
    and find the percentage of used vs. free space. Enable your mail server so that it sends you the alerts.
    Put it in the crontab file and schedule it accordingly.
    Pratap
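Wired into cron, Pratap's suggestion might look like the following. This is a sketch only: the script path, SID, and mail address are assumptions, and ts_free.sql stands for the query above saved to a file:

```shell
#!/bin/sh
# Hypothetical wrapper: run the free-space query and mail the output if any
# tablespace was reported. All names and paths here are illustrative.
ORACLE_SID=PROD; export ORACLE_SID
sqlplus -s "/ as sysdba" @/u01/scripts/ts_free.sql > /tmp/ts_free.out
if [ -s /tmp/ts_free.out ]; then
    mailx -s "Tablespace below 20% free on $ORACLE_SID" dba@example.com < /tmp/ts_free.out
fi

# crontab entry: check every 30 minutes
# */30 * * * * /u01/scripts/check_ts_free.sh
```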

  • Maxdb does not start after adding datafile

    Hi All,
    After trying to add extra datafiles, MaxDB does not start any more.
    The error is:
    -24994 Runtime environment error [db_online ]; 4,connection broken
    In knldiag.err the following is shown, but I do not understand what is going on:
        0x14B4 ERR     8 Messages End of the message list registry dump
    2007-10-16 15:57:41                               ___ Stopping GMT 2007-10-16 13:57:41           7.6.00   Build 018-123-119-055
    2007-10-16 16:20:40                               --- Starting GMT 2007-10-16 14:20:40           7.6.00   Build 018-123-119-055
    2007-10-16 16:25:31     0x17DC ERR 54008 MEMORY   RTEMem_Allocator  : could not allocate m
    2007-10-16 16:25:31     0x17DC ERR 54008 MEMORY   emory
    2007-10-16 16:25:31     0x17DC ERR 54008 MEMORY   required   : 7224
    2007-10-16 16:25:31     0x17DC ERR 54008 MEMORY   allocated  : 95666176
    2007-10-16 16:25:31     0x17DC ERR 54008 MEMORY   supplement : 1048576
    2007-10-16 16:25:31     0x17DC ERR 54008 MEMORY   limit      : -1
    2007-10-16 16:25:31     0x17DC ERR 54008 MEMORY   free blocks size 24 : 25
    2007-10-16 16:25:31     0x17DC ERR 54008 MEMORY   RTEMem_Allocator  : could not allocate m
    2007-10-16 16:25:31     0x17DC ERR 54008 MEMORY   emory
    2007-10-16 16:25:31     0x17DC ERR 54008 MEMORY   required   : 8672
    2007-10-16 16:25:31     0x17DC ERR 54008 MEMORY   allocated  : 95666176
    2007-10-16 16:25:31     0x17DC ERR 54008 MEMORY   supplement : 1048576
    2007-10-16 16:25:31     0x17DC ERR 54008 MEMORY   limit      : -1
    2007-10-16 16:25:31     0x17DC ERR 54008 MEMORY   free blocks size 24 : 25
    2007-10-16 16:25:31     0x17DC ERR 54008 MEMORY   RTEMem_Allocator  : could not allocate m
    2007-10-16 16:25:31     0x17DC ERR 54008 MEMORY   emory
    2007-10-16 16:25:31     0x17DC ERR 54008 MEMORY   required   : 9344
    2007-10-16 16:25:31     0x17DC ERR 54008 MEMORY   allocated  : 95666176
    2007-10-16 16:25:31     0x17DC ERR 54008 MEMORY   supplement : 1048576
    2007-10-16 16:25:31     0x17DC ERR 54008 MEMORY   limit      : -1
    2007-10-16 16:25:31     0x17DC ERR 54008 MEMORY   free blocks size 24 : 25
    2007-10-16 16:25:31     0x17DC ERR     3 Admin    Kernel_Administration.cpp:606
    2007-10-16 16:25:31     0x17DC ERR     3 Admin    2007-10-16 16:25:31 Admin Error 3
    2007-10-16 16:25:31     0x17DC ERR     3 Admin    Database state: OFFLINE
    2007-10-16 16:25:31     0x17DC ERR     3 Admin     + Kernel_Trace.cpp:223
    2007-10-16 16:25:31     0x17DC ERR     3 Admin     + 2007-10-16 16:25:31 KernelCommon Info 6
    2007-10-16 16:25:31     0x17DC ERR     3 Admin     -   Internal errorcode, Errorcode 1910 "sysbuf_storage_exceeded"
    2007-10-16 16:25:31     0x17DC ERR     3 Admin     + Kernel_Administration.cpp:387
    2007-10-16 16:25:31     0x17DC ERR     3 Admin     + 2007-10-16 16:25:31 Admin Warning 20024
    2007-10-16 16:25:31     0x17DC ERR     3 Admin     -   Sql lock manager could not be initialized:Too many buffers requested
    2007-10-16 16:25:31     0x174C ERR     7 Messages Msg_List.cpp:3617
    2007-10-16 16:25:31     0x174C ERR     7 Messages 2007-10-16 16:25:31 Messages Error 7
    2007-10-16 16:25:31     0x174C ERR     7 Messages Begin of dump of registered messages
    2007-10-16 16:25:31     0x174C ERR    10 Messages Msg_List.cpp:3631
    2007-10-16 16:25:31     0x174C ERR    10 Messages 2007-10-16 16:25:31 Messages Error 10
    2007-10-16 16:25:31     0x174C ERR    10 Messages abort dump of registered messages
    2007-10-16 16:25:31     0x174C ERR     8 Messages Msg_List.cpp:3636
    2007-10-16 16:25:31     0x174C ERR     8 Messages 2007-10-16 16:25:31 Messages Error 8
    2007-10-16 16:25:31     0x174C ERR     8 Messages End of the message list registry dump
    2007-10-16 16:25:32     0x165C ERR 54008 MEMORY   RTEMem_RteAllocator  : could not allocat
    2007-10-16 16:25:32     0x165C ERR 54008 MEMORY   e memory
    2007-10-16 16:25:32     0x165C ERR 54008 MEMORY   required   : 5270704
    2007-10-16 16:25:32     0x165C ERR 54008 MEMORY   allocated  : 7340032
    2007-10-16 16:25:32     0x165C ERR 54008 MEMORY   supplement : 5271552
    2007-10-16 16:25:32     0x165C ERR 54008 MEMORY   limit      : -1
    2007-10-16 16:25:33                               ___ Stopping GMT 2007-10-16 14:25:33           7.6.00   Build 018-123-119-055
    2007-10-16 16:48:56                               --- Starting GMT 2007-10-16 14:48:56           7.6.00   Build 018-123-119-055
    2007-10-16 16:49:21     0x150C ERR 20000 Log      Log_QueueRingBuffer.hpp:68
    2007-10-16 16:49:21     0x150C ERR 20000 Log      2007-10-16 16:49:21 Log Error 20000
    2007-10-16 16:49:21     0x150C ERR 20000 Log      Assertion of state Log_FrameAllocator::New() failed. failed!
    2007-10-16 16:49:21     0x150C ERR 18196 DBCRASH  vabort:Emergency Shutdown, Log_QueueRingBuffer.hpp: 68
    2007-10-16 16:49:21     0x10C8 ERR 18340 CONNECT  Could not get named-shared-memory: 'Global\SAPDBTech-CONSOLE-SHM-00000001-fd256071-0000044c-00002575-9e7e2559aa947573', rc = 8
    2007-10-16 16:49:21     0x10C8 ERR 13600 RTE      RTE_ConsoleDataCommunication.cpp:1581
    2007-10-16 16:49:21     0x10C8 ERR 13600 RTE      2007-10-16 16:49:21 RTE Error 13600
    2007-10-16 16:49:21     0x10C8 ERR 13600 RTE      Console: Attaching shared memory '0X77AFB83C' failed, rc = 8
    2007-10-16 16:49:21     0x10C8 ERR 13600 RTE       + RTE_ConsoleDataCommunication.cpp:217
    2007-10-16 16:49:21     0x10C8 ERR 13600 RTE       + 2007-10-16 16:49:21 RTE Error 13609
    2007-10-16 16:49:21     0x10C8 ERR 13600 RTE       -   Console: Opening shared memory failed
    2007-10-16 16:49:21     0x10C8 ERR 13600 RTE       + RTEThread_ConsoleConnections.cpp:266
    2007-10-16 16:49:21     0x10C8 ERR 13600 RTE       + 2007-10-16 16:49:21 RTE Error 13110
    2007-10-16 16:49:21     0x10C8 ERR 13600 RTE       -   Console Thread: Initialization of  reply worker failed
    2007-10-16 16:49:21     0x10C8 ERR 13600 RTE       + RTEThread_ConsoleWorkerThread.cpp:129
    2007-10-16 16:49:21     0x10C8 ERR 13600 RTE       + 2007-10-16 16:49:21 RTE Error 13114
    2007-10-16 16:49:21     0x10C8 ERR 13600 RTE       -   Console Thread: Connect request failed
    2007-10-16 16:49:51     0x150C ERR     7 Messages Msg_List.cpp:3617
    2007-10-16 16:49:51     0x150C ERR     7 Messages 2007-10-16 16:49:51 Messages Error 7
    2007-10-16 16:49:51     0x150C ERR     7 Messages Begin of dump of registered messages
    2007-10-16 16:49:51     0x150C ERR     8 Messages Msg_List.cpp:3636
    2007-10-16 16:49:51     0x150C ERR     8 Messages 2007-10-16 16:49:51 Messages Error 8
    2007-10-16 16:49:51     0x150C ERR     8 Messages End of the message list registry dump
    2007-10-16 16:49:52                               ___ Stopping GMT 2007-10-16 14:49:52           7.6.00   Build 018-123-119-055
    Please help.
    Regards,
    Koen

    Hi, Markus
    Reading these replies I have seen that we have a similar error.
    The MaxDB version is 7.5.00.52, the OS is Windows 2003 32-bit with 4 GB of physical memory, and the database size is approximately 4 TB.
    Recently both the backup to disk and the backup to external storage have been failing with the following error:
    0x1480 ERR 54008 MEMORY   RTEMem_Allocator : could not allocate memory
    0x1480 ERR 54008 MEMORY   required  : 8176
    0x1480 ERR 54008 MEMORY   allocated : 371347456
    0x1480 ERR 54008 MEMORY   supplement: 1048576
    0x1480 ERR 54008 MEMORY   limit     : 4294967295
    0x1480 ERR 20020 Converte No more memory for back up page no container. max page no= 479590865 #perm conv leaf p.= 257133 'leaf page entries= 1861
    0x1480 ERR     0 SAPDBErr Assertion of state BeginSaveData/Pages because of exhausted memory failed!
    0x1480 ERR 18196 DBCRASH  vabort:Emergency Shutdown, Log_Savepoint.cpp: 570
    --- Starting GMT 2012-07-24 12:54:03           7.5.0   Build 052-123-232-633
    I checked some parameters, including CACHE_SIZE, which is currently set to 260000.
    When we tried to lower it, the MaxDB resource never came back online.
    Best regards,
    Emiliano Zannelli
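
    For reference, CACHE_SIZE (counted in 8 KB pages, so 260000 is roughly 2 GB, already tight in a 32-bit address space) can be inspected and changed through dbmcli. A minimal sketch, assuming a hypothetical DBM operator `control,secret` and database name `MAXDB1`; the exact command names may vary slightly by MaxDB version:

    ```
    dbmcli -d MAXDB1 -u control,secret param_directget CACHE_SIZE
    dbmcli -d MAXDB1 -u control,secret db_offline
    dbmcli -d MAXDB1 -u control,secret param_directput CACHE_SIZE 200000
    dbmcli -d MAXDB1 -u control,secret db_online
    ```

    If the instance then refuses to come online, the knldiag file normally names the parameter check that failed.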

  • RMAN can't SET NEWNAME for datafiles added after Level 1

    Version: 11.2.0.3
    Platform : Solaris 10
    I have the most recent Level 0 , Level 1 and post-L1 Archive logs of the source DB.
    I am trying restore, recover in a different machine using plain RMAN (not RMAN DUPLICATE) into a new datafile location.
    After the Level 1 backup was taken, 2 datafiles (namdata01.dbf, finaldata01.dbf) were added ( this got 'recorded' on the subsequent post-L1 archivelogs )
    Before I ran restore and recover, I restored the latest control file from the most recent L1
    RMAN> restore controlfile from '/u01/CATALOGTST/rmanBkpPieces/SNTCDEV_L1_0cnjqk54_1_1_20120829.rmbk' ;
    Understandably, this control file doesn't have info about the 2 datafiles added after L1. (Wish I could restore a control file from an archive log :) )
    So, I cataloged the archive logs as well using CATALOG command.
    RMAN> catalog start with '/u01/CATALOGTST/rmanBkpPieces';
    using target database control file instead of recovery catalog
    searching for all files that match the pattern /u01/CATALOGTST/rmanBkpPieces
    List of Files Unknown to the Database
    =====================================
    File Name: /u01/CATALOGTST/rmanBkpPieces/SNTCDEV_full_07njqj6j_1_1_20120828.rmbk
    File Name: /u01/CATALOGTST/rmanBkpPieces/SNTCDEV_full_08njqj8u_1_1_20120828.rmbk
    File Name: /u01/CATALOGTST/rmanBkpPieces/SNTCDEV_L1_0bnjqk3d_1_1_20120829.rmbk
    File Name: /u01/CATALOGTST/rmanBkpPieces/SNTCDEV_L1_0cnjqk54_1_1_20120829.rmbk
    File Name: /u01/CATALOGTST/rmanBkpPieces/arch_1_13_790513173.arc
    File Name: /u01/CATALOGTST/rmanBkpPieces/arch_1_14_790513173.arc
    File Name: /u01/CATALOGTST/rmanBkpPieces/arch_1_15_790513173.arc
    File Name: /u01/CATALOGTST/rmanBkpPieces/06njqj6h_1_1
    File Name: /u01/CATALOGTST/rmanBkpPieces/09njqj90_1_1
    File Name: /u01/CATALOGTST/rmanBkpPieces/0anjqk3b_1_1
    File Name: /u01/CATALOGTST/rmanBkpPieces/0dnjqk56_1_1
    Do you really want to catalog the above files (enter YES or NO)? YES
    cataloging files...
    cataloging done
    List of Cataloged Files
    =======================
    File Name: /u01/CATALOGTST/rmanBkpPieces/SNTCDEV_full_07njqj6j_1_1_20120828.rmbk
    File Name: /u01/CATALOGTST/rmanBkpPieces/SNTCDEV_full_08njqj8u_1_1_20120828.rmbk
    File Name: /u01/CATALOGTST/rmanBkpPieces/SNTCDEV_L1_0bnjqk3d_1_1_20120829.rmbk
    File Name: /u01/CATALOGTST/rmanBkpPieces/SNTCDEV_L1_0cnjqk54_1_1_20120829.rmbk
    File Name: /u01/CATALOGTST/rmanBkpPieces/arch_1_13_790513173.arc                         -------------------> arch logs that contain info on the new datafiles
    File Name: /u01/CATALOGTST/rmanBkpPieces/arch_1_14_790513173.arc                         -------------------> arch logs that contain info on the new datafiles
    File Name: /u01/CATALOGTST/rmanBkpPieces/arch_1_15_790513173.arc                          -------------------> arch logs that contain info on the new datafiles
    File Name: /u01/CATALOGTST/rmanBkpPieces/06njqj6h_1_1
    File Name: /u01/CATALOGTST/rmanBkpPieces/09njqj90_1_1
    File Name: /u01/CATALOGTST/rmanBkpPieces/0anjqk3b_1_1
    File Name: /u01/CATALOGTST/rmanBkpPieces/0dnjqk56_1_1
    RMAN> EXIT
    During recovery, RMAN applied the archive logs and managed to create the 2 datafiles successfully, but it did not restore them to the new location specified by SET NEWNAME. Luckily, I had created the original path beforehand, so these 2 datafiles got restored there.
    RMAN does not seem to enforce SET NEWNAME for datafiles added after the Level 1 backup, despite the cataloging.
    Does SET NEWNAME work only for RESTORE?
    Log of restore and recover
    $ cat restore-recover.txt
    run
    set newname for database to '/u01/app/CLONE1/oradata/sntcdev/%b' ;
    set newname for tempfile '/u01/app/oradata/sntcdev/temp01.dbf' to '/u01/app/CLONE1/oradata/sntcdev/temp01.dbf' ;
    restore database;
    switch datafile all;
    switch tempfile all;
    recover database;
    $
    $ rman target / cmdfile=restore-recover.txt
    Recovery Manager: Release 11.2.0.3.0 - Production on Sun Sep 16 21:27:49 2012
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    connected to target database: SNTCDEV (DBID=2498462290, not open)
    RMAN> run
    2> {
    3> set newname for database to '/u01/app/CLONE1/oradata/sntcdev/%b' ;
    4> set newname for tempfile '/u01/app/oradata/sntcdev/temp01.dbf' to '/u01/app/CLONE1/oradata/sntcdev/temp01.dbf' ;
    5> restore database;
    6> switch datafile all;
    7> switch tempfile all;
    8> recover database;
    9> }
    10>
    11>
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    Starting restore at 16-SEP-12
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=18 device type=DISK
    channel ORA_DISK_1: starting datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    channel ORA_DISK_1: restoring datafile 00001 to /u01/app/CLONE1/oradata/sntcdev/system01.dbf
    channel ORA_DISK_1: restoring datafile 00002 to /u01/app/CLONE1/oradata/sntcdev/sysaux01.dbf
    channel ORA_DISK_1: restoring datafile 00003 to /u01/app/CLONE1/oradata/sntcdev/undotbs01.dbf
    channel ORA_DISK_1: restoring datafile 00004 to /u01/app/CLONE1/oradata/sntcdev/users01.dbf
    channel ORA_DISK_1: restoring datafile 00005 to /u01/app/CLONE1/oradata/sntcdev/example01.dbf
    channel ORA_DISK_1: restoring datafile 00006 to /u01/app/CLONE1/oradata/sntcdev/cisdata01.dbf
    channel ORA_DISK_1: reading from backup piece /u01/RMAN_bkp/BKP_sntcdev/SNTCDEV_full_07njqj6j_1_1_20120828.rmbk
    channel ORA_DISK_1: errors found reading piece handle=/u01/RMAN_bkp/BKP_sntcdev/SNTCDEV_full_07njqj6j_1_1_20120828.rmbk
    channel ORA_DISK_1: failover to piece handle=/u01/CATALOGTST/rmanBkpPieces/SNTCDEV_full_07njqj6j_1_1_20120828.rmbk tag=TAG20120828T234834
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:01:35
    Finished restore at 16-SEP-12
    datafile 1 switched to datafile copy
    input datafile copy RECID=8 STAMP=794179772 file name=/u01/app/CLONE1/oradata/sntcdev/system01.dbf
    datafile 2 switched to datafile copy
    input datafile copy RECID=9 STAMP=794179772 file name=/u01/app/CLONE1/oradata/sntcdev/sysaux01.dbf
    datafile 3 switched to datafile copy
    input datafile copy RECID=10 STAMP=794179772 file name=/u01/app/CLONE1/oradata/sntcdev/undotbs01.dbf
    datafile 4 switched to datafile copy
    input datafile copy RECID=11 STAMP=794179772 file name=/u01/app/CLONE1/oradata/sntcdev/users01.dbf
    datafile 5 switched to datafile copy
    input datafile copy RECID=12 STAMP=794179772 file name=/u01/app/CLONE1/oradata/sntcdev/example01.dbf
    datafile 6 switched to datafile copy
    input datafile copy RECID=13 STAMP=794179772 file name=/u01/app/CLONE1/oradata/sntcdev/cisdata01.dbf
    renamed tempfile 1 to /u01/app/CLONE1/oradata/sntcdev/temp01.dbf in control file
    Starting recover at 16-SEP-12
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting incremental datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    destination for restore of datafile 00001: /u01/app/CLONE1/oradata/sntcdev/system01.dbf
    destination for restore of datafile 00002: /u01/app/CLONE1/oradata/sntcdev/sysaux01.dbf
    destination for restore of datafile 00003: /u01/app/CLONE1/oradata/sntcdev/undotbs01.dbf
    destination for restore of datafile 00004: /u01/app/CLONE1/oradata/sntcdev/users01.dbf
    destination for restore of datafile 00005: /u01/app/CLONE1/oradata/sntcdev/example01.dbf
    destination for restore of datafile 00006: /u01/app/CLONE1/oradata/sntcdev/cisdata01.dbf
    channel ORA_DISK_1: reading from backup piece /u01/RMAN_bkp/BKP_sntcdev/SNTCDEV_L1_0bnjqk3d_1_1_20120829.rmbk
    channel ORA_DISK_1: errors found reading piece handle=/u01/RMAN_bkp/BKP_sntcdev/SNTCDEV_L1_0bnjqk3d_1_1_20120829.rmbk
    channel ORA_DISK_1: failover to piece handle=/u01/CATALOGTST/rmanBkpPieces/SNTCDEV_L1_0bnjqk3d_1_1_20120829.rmbk tag=TAG20120829T000356
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
    starting media recovery
    archived log for thread 1 with sequence 13 is already on disk as file /u01/CATALOGTST/rmanBkpPieces/arch_1_13_790513173.arc
    archived log for thread 1 with sequence 14 is already on disk as file /u01/CATALOGTST/rmanBkpPieces/arch_1_14_790513173.arc
    archived log for thread 1 with sequence 15 is already on disk as file /u01/CATALOGTST/rmanBkpPieces/arch_1_15_790513173.arc
    channel ORA_DISK_1: starting archived log restore to default destination
    channel ORA_DISK_1: restoring archived log
    archived log thread=1 sequence=12
    channel ORA_DISK_1: reading from backup piece /u01/CATALOGTST/rmanBkpPieces/0dnjqk56_1_1
    channel ORA_DISK_1: piece handle=/u01/CATALOGTST/rmanBkpPieces/0dnjqk56_1_1 tag=TAG20120829T000454
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
    archived log file name=/u01/archLogs/arch_1_12_790513173.arc thread=1 sequence=12
    archived log file name=/u01/CATALOGTST/rmanBkpPieces/arch_1_13_790513173.arc thread=1 sequence=13
    creating datafile file number=7 name=/u01/app/oradata/sntcdev/namdata01.dbf
    archived log file name=/u01/CATALOGTST/rmanBkpPieces/arch_1_13_790513173.arc thread=1 sequence=13
    archived log file name=/u01/CATALOGTST/rmanBkpPieces/arch_1_14_790513173.arc thread=1 sequence=14
    archived log file name=/u01/CATALOGTST/rmanBkpPieces/arch_1_15_790513173.arc thread=1 sequence=15
    creating datafile file number=8 name=/u01/app/oradata/sntcdev/finaldata01.dbf
    archived log file name=/u01/CATALOGTST/rmanBkpPieces/arch_1_15_790513173.arc thread=1 sequence=15
    unable to find archived log
    archived log thread=1 sequence=16
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 09/16/2012 21:29:51
    RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 16 and starting SCN of 1004015
    Recovery Manager complete.
    $
    $
    $ sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.3.0 Production on Sun Sep 16 21:30:04 2012
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> select name from v$datafile;
    NAME
    /u01/app/CLONE1/oradata/sntcdev/system01.dbf
    /u01/app/CLONE1/oradata/sntcdev/sysaux01.dbf
    /u01/app/CLONE1/oradata/sntcdev/undotbs01.dbf
    /u01/app/CLONE1/oradata/sntcdev/users01.dbf
    /u01/app/CLONE1/oradata/sntcdev/example01.dbf
    /u01/app/CLONE1/oradata/sntcdev/cisdata01.dbf
    /u01/app/oradata/sntcdev/namdata01.dbf           ----------------------> restored to old location ignoring SET NEWNAME ....
    /u01/app/oradata/sntcdev/finaldata01.dbf         ----------------------> restored to old location ignoring SET NEWNAME ....
    8 rows selected.
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    $ cd /u01/app/oradata/sntcdev            # -----------------------------> the old location
    $
    $ ls -alrt
    total 243924
    drwxr-xr-x   3 oracle   oinstall     512 Aug  5 10:55 ..
    drwxr-xr-x   2 oracle   oinstall     512 Sep 16 20:59 .
    -rw-r-----   1 oracle   oinstall 104865792 Sep 16 21:29 namdata01.dbf
    -rw-r-----   1 oracle   oinstall 19931136 Sep 16 21:29 finaldata01.dbf

    RMAN> run
    2> {
    3> set newname for database to '/u01/app/CLONE1/oradata/sntcdev/%b' ;
    4> set newname for tempfile '/u01/app/oradata/sntcdev/temp01.dbf' to '/u01/app/CLONE1/oradata/sntcdev/temp01.dbf' ;
    5> restore database;
    6> switch datafile all;
    7> switch tempfile all;
    8> recover database;
    9> }
    RMAN executes the commands in the run block stepwise. In your case it starts with "set newname for database ..." and ends with "recover database".
    Let me interpret it for you.
    1. You restored the control file from the L1 backup, which has no information about the 2 newly added datafiles. You then cataloged the backup pieces and archive logs to this control file, so it now knows that the required backups and archives are in the cataloged location.
    2. You SET NEWNAME for the database to the desired location. This only records new names for the datafiles the control file knows about; since it knows nothing about the 2 new files, no new names are recorded for them.
    3. RESTORE DATABASE restores the files from the L0 and L1 backups, which likewise contain nothing about the newly added datafiles.
    4. SWITCH DATAFILE ALL renames all the files restored in the previous step to the names/locations recorded in step 2.
    5. RECOVER DATABASE is where the archive logs come into play. While applying them, RMAN re-creates the 2 new datafiles and recovers them, but it does not go back to steps 2 and 4 to apply SET NEWNAME and SWITCH to these files; it creates them at the path recorded in the redo, i.e. the original location. You have to move/rename them manually afterwards.
    So, RMAN does not apply SET NEWNAME to datafiles added after the backup, because no information about these files exists in the RMAN backup pieces or the restored control file.
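
    One way to fix up the two files afterwards (not from the original post; a sketch assuming the clone is still mounted, using the paths from the log above - move the files at the OS level first, then repoint the control file):

    ```sql
    -- After: mv /u01/app/oradata/sntcdev/namdata01.dbf   /u01/app/CLONE1/oradata/sntcdev/
    --        mv /u01/app/oradata/sntcdev/finaldata01.dbf /u01/app/CLONE1/oradata/sntcdev/
    -- with the database MOUNTED (not open):
    ALTER DATABASE RENAME FILE
      '/u01/app/oradata/sntcdev/namdata01.dbf'
      TO '/u01/app/CLONE1/oradata/sntcdev/namdata01.dbf';
    ALTER DATABASE RENAME FILE
      '/u01/app/oradata/sntcdev/finaldata01.dbf'
      TO '/u01/app/CLONE1/oradata/sntcdev/finaldata01.dbf';
    ```

    RENAME FILE only updates the control file; it does not move anything on disk, which is why the OS-level move must happen first.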
