Transportable tablespace copy during cold backup
Is it possible to start the copy of the transportable tablespace files while the backup is taking place? Logically it doesn't sound like there should be a conflict, but I wanted to run the idea past you guys first.
I know that.
The process I want to use is:
1. Alter the tablespace(s) read only.
2. Export the transportable tablespace metadata.
3. Start the file copies.
4. Shut down the instance.
5. Startup mount.
6. Perform the RMAN backup.
7. Open the database.
8. When the copies complete, alter the tablespaces read write.
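The process above can be sketched as follows. This is a minimal outline, not a tested procedure: the tablespace name TS1 and the Data Pump directory are placeholders, and the expdp command line is shown as a comment because it runs at the OS prompt.

```sql
-- 1. Stop changes to the tablespace's datafiles
ALTER TABLESPACE ts1 READ ONLY;

-- 2. Export the transportable tablespace metadata, e.g. with Data Pump:
--    $ expdp system DIRECTORY=data_pump_dir DUMPFILE=tts_ts1.dmp \
--          TRANSPORT_TABLESPACES=ts1
-- 3. Start the OS-level file copies; the files should be safe to copy
--    while the tablespace stays read only.
-- 4-7. Shutdown, startup mount, RMAN backup, open the database.

-- 8. Once the copies are complete:
ALTER TABLESPACE ts1 READ WRITE;
```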
Sorry, I wasn't absolutely clear before; it made sense to me when I wrote it.
Similar Messages
-
User tablespace did not copy during cold backup
Can anybody explain why this happened and how to fix it?
I was doing my regular weekly cold backup and all other tablespaces got copied except the user tablespace. I cannot find any error in the alert log or the Windows event log. The file size is 25G. Any idea?
Thanks

The original poster has declared that this is a cold backup, so presumably no BEGIN/END BACKUP is involved here. Further, how would those commands do anything to cause a backup script to SKIP the files for a tablespace?
As for the OP's problem, I will guess that this is occurring on Windows. Because the default flags that Windows programs use to open files often specify exclusivity, many backup scripts and packages will not be able to open (or copy) a file that is open by another process. Notably, if the backup is hitting such a problem it should be receiving (and hopefully logging) an error.
I can't guess what other program might have had the files open, but if Oracle did not successfully shut down completely, or if some threads failed to exit during shutdown, you would end up with some files with open filehandles. That would prevent commands like COPY from working on those files.
Solutions (if this is Windows and my theory holds water) would include:
- Providing exception handling in the cold backup script to verify that Oracle has shut down completely and that no Oracle threads remain.
- Switching to ocopy.exe (Oracle's nonexclusive copy program for Windows) so that the backup would not fail on files with open filehandles. I guess this is a little reckless, since the database could be open and the cold backup could proceed heedless of that fact.
- Use Process Explorer to track down which program has the files open after shutdown.
Hope this helps,
Jeremiah Wilton
ORA-600 Consulting
http://www.ora-600.net -
11.5.10.2 Cold Backup
Hello all my friends;
I am very new to backups, which is why I need your help. I want to take a cold backup of my 11.5.10.2 system, but I don't know how to do it. I read some documents here but they are not very clear.
I know that if I want to take a cold backup I have to stop the APPS services, the database and the listener, but then what do I do? Which folders do I have to copy for the cold backup (e.g. all my apps and database directories, or just the APPL_TOP folder, or the $ORACLE_HOME folders, etc.)?
If anyone could explain a cold backup of 11.5.10.2 step by step it would be great. I am waiting for your answers.
Thanks
helios

Hi my friend, I found something like this in
"how we backup a database forms"
"Please have a look at the following links for a similar discussion:
System Backup
http://forums.oracle.com/forums/thread.jspa?messageID=1674975
Apps Backup
http://forums.oracle.com/forums/thread.jspa?messageID=1622999
Backup Oracle Applications (11.5.10.2)
http://forums.oracle.com/forums/thread.jspa?messageID=1765123
Best Backup Strategy
http://forums.oracle.com/forums/thread.jspa?messageID=1972656
how to clone with rapid clone utility
Note: 230672.1 - Cloning Oracle Applications Release 11i with Rapid Clone
https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=230672.1"
but it's not what I am asking you...
I was looking at my system. Should I copy all my apps and database directories, or
should I copy just my $APPS_TOP (as the applvis user) and $ORACLE_HOME (as the oravis user)?
But I checked, and my dbf files are stored in the visdata directory; if I copy only $ORACLE_HOME that means I won't copy the visdata directory, and that is what confuses me here...
Please help
thanks -
Build standby database using cold backup on a different file system & host
Hi gurus,
The database version is 11.2.0.3. OS is HP UX Itanium 11.31
I am building a standby database using a cold backup of the primary. The primary mount points are (/p003/oracle, /p004/oracle) on HOST1, and the standby file systems on HOST2 are (/s003/oracle, /s004/oracle). I am not using Data Guard to apply logs, as we have a script that periodically mounts the log location on the standby server and applies the logs to make it current. I am using a cold backup because the database is small, about 200G, and can be taken down. Could someone help me with the steps to build the standby using a cold backup with a different file location on the standby? My concern: I can copy the data files from /p003 to /s003, but how will I build the controlfile?
If it was same File system on both HOST 1 and HOST2 I can copy the cold backup to the standby server, build a standby control file on primary and copy and replace the standby control file and everything was set.
Thanks
Cherrish Vaidiyan

Hello;
I have a note on this using a cold copy of the current files instead of a copy backup :
http://www.visi.com/~mseberg/data_guard_on_oracle_10_step_by_step.html
I will post SCP SQL in a moment.
Best Regards
mseberg
set heading off
set feedback off
set pagesize 100
set linesize 400
select 'scp '||a.name ||' server_name:' || a.name as newname from v$datafile a;
select 'scp '||a.name ||' server_name:' || a.name as newname from v$controlfile a;
select 'scp '||a.member ||' server_name:' || a.member as newname from v$logfile a;
Edited by: mseberg on May 25, 2013 10:35 AM -
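On the different-file-system part of the question, one hedged approach is to create a standby controlfile on the primary and map /p003 and /p004 to /s003 and /s004 on the standby, either with the *_FILE_NAME_CONVERT parameters or with explicit renames. A sketch, with the individual file names illustrative only:

```sql
-- On the primary (HOST1): create the standby controlfile
ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/standby.ctl';

-- In the standby's init.ora/spfile on HOST2:
--   db_file_name_convert  = ('/p003/oracle','/s003/oracle',
--                            '/p004/oracle','/s004/oracle')
--   log_file_name_convert = ('/p003/oracle','/s003/oracle',
--                            '/p004/oracle','/s004/oracle')

-- Alternatively, with the standby mounted (and standby_file_management
-- set to MANUAL), rename each file explicitly:
ALTER DATABASE RENAME FILE '/p003/oracle/users01.dbf'
                        TO '/s003/oracle/users01.dbf';
```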
Oracle RAC cold backup to a non-RAC DB ...
We are running a Windows 2003 (SP2) box with Oracle 10g Release 2 (10.2.0.3). This was originally configured to be a RAC server (having node1 and node2) but we somehow lost node 2.
Now we want to create a fresh DB (no RAC just plain DB installation having the same OS and DB version as given above on the dead node 2) and copy the DB from node1 to node2 with cold backup method.
My question: will the cold backup work in this scenario, or will we have to do an EXPDP/IMPDP? I am not too familiar with impdp/expdp, which is why the plan is to copy the cold backup and bring up the instance.
Is this possible? Thanks.
So, doing the cold backup is:
take a copy of the CONTROL FILES, DATA FILES and REDO LOGFILES. Is that it?
Would someone have a document that provides a script to know which files need to be backed up and what their locations are in the file system? I know this is a basic question but I am just being careful here.
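Such a script is usually just a few dictionary queries run before the shutdown; a minimal sketch:

```sql
-- Files that must be part of a cold backup:
SELECT name   FROM v$datafile;     -- datafiles
SELECT name   FROM v$controlfile;  -- control files
SELECT member FROM v$logfile;      -- online redo logs
-- Tempfiles can be recreated, but listing them does no harm:
SELECT name   FROM v$tempfile;
```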
thanks, -
Cold backup tablespace restore
From a cold backup, can you restore a tablespace to a different database?
A datafile was created and dropped, and now we are receiving ORA-01186 and ORA-01157 (cannot identify/lock data file).
We know that if our database goes down it won't come back up. The tablespace has about 250 tables and it's huge, about 100G in size. Does anyone know what steps need to be taken?
Edited by: user10767182 on Jan 6, 2009 8:35 PM

I couldn't quite work out what you were saying, but I think you were suggesting that copies of the lost tables and data are sitting in a second database somewhere, and you would like to pull them out of that database and plug them into the broken database. Is that right?
If so, you cannot take a datafile from one database and plug it into another, unless you use the transportable tablespace option.
Basically, on your broken database, you'd shut it down, bring it back to the mount state and then say alter database datafile X offline drop
That will let you issue an alter database open followed by a drop tablespace X, and your broken database will at least be open, minus the important tablespace
You then get your second database open and make the important tablespace read-only
You'd drop to the command line and do an export using the TRANSPORT_TABLESPACE option -the command is too susceptible to the specifics to show you here. Check the documentation at http://download.oracle.com/docs/cd/B19306_01/backup.102/b14194/rcmsynta063.htm
You then copy the datafile and the export dump file to the server where your broken database is running
You then run an import, again specifying the TRANSPORT_TABLESPACE option
Effectively, the datafile copy gets 'plugged in' to the broken database and gets adopted as a native, brand new tablespace, complete with contents. You finish off by making the tablespace read-write in both databases once more.
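For reference, the export/import pair for those steps looks roughly like this on 10g. The OS-prompt commands are shown as comments; user names, file paths and the tablespace name are placeholders:

```sql
-- On the second (source) database, with the tablespace READ ONLY:
--   $ exp "'sys/password as sysdba'" TRANSPORT_TABLESPACE=y \
--         TABLESPACES=important_ts FILE=tts_meta.dmp
--
-- Copy tts_meta.dmp and the datafile(s) to the broken database's
-- server, then plug them in:
--   $ imp "'sys/password as sysdba'" TRANSPORT_TABLESPACE=y \
--         FILE=tts_meta.dmp DATAFILES='/u01/oradata/important_ts01.dbf'

-- Finally, on both databases:
ALTER TABLESPACE important_ts READ WRITE;
```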
Obviously, you lose data using this sort of process: the data comes back into your 'broken' database in the same state it was in your second database, and you can't apply redo to it to recover it to a more recent state. But that's going to be the best you can do if you don't have proper physical backups of the file. Your subject mentioning cold backups confused me a little on that score too.
So I won't go into any more detail for now. It may be that I misunderstood the reference to 'restore a tablespace to a different database' and your requirements completely. But if this sounds like what you are after, and if you are stuck on any of the details, then you can always post a follow-up. -
Upgrade 10.2 to 11.2 manually using a cold backup copy on a new server
Hi,
I am going to manually upgrade my databases from 10.2 to 11.2. I will use a cold backup of 10.2 and copy it to the new server running Red Hat Linux 5.
I found a note (742108.1) that covers upgrades up to 10.2.0.5, and it says to set
orapwd file=$ORACLE_HOME/dbs/orapw<SID>
But my upgrade path is 10.2 to 11.2, and I would like to verify whether I need to do the same before starting up in upgrade mode. Nothing is mentioned about this in the Upgrade Guide; it only says to copy the password file to $ORACLE_HOME/dbs/.
If so, how do I set it? I mean, at the $ prompt or the SQL prompt?

I have 10 databases on one server plus 2 separate servers. I will make 3 changes at once: 32-bit to 64-bit, SUSE Linux 9 to Red Hat Linux 5, and 10.2.0.4 to 11.2.0.1.
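On the prompt question: orapwd is an operating-system utility, so it runs at the $ prompt, not inside SQL*Plus. A hedged sketch, with the SID, password and entries count as placeholders:

```sql
-- At the OS ($) prompt, as the Oracle software owner:
--   $ cd $ORACLE_HOME/dbs
--   $ orapwd file=orapw<SID> password=<sys_password> entries=5
--
-- Then, once the instance is started, verify from SQL*Plus:
SELECT * FROM v$pwfile_users;
```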
So the best path to do it all at once is a cold backup, and I verified this with Oracle Support as well, because the 32-bit to 64-bit conversion will happen automatically during the upgrade.
My databases are nearly 300 GB each, not very big; they are used for data warehousing. -
How to backup only one tablespace in cold backup noarchivelog mode
Hi,
How do I back up only one tablespace as part of a cold backup in NOARCHIVELOG mode?
Reagrds,
Rushang

We would have to restore the whole 50 GB database if our scripts fail while restarting the data load.
Our scripts populate only two tablespaces.
That's why, if I take a backup of just those tablespaces, I only have to restore those tablespaces and not the whole database.
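To see exactly which datafiles belong to the two tablespaces the scripts populate (so the backup and a later restore can target just those files), a simple dictionary query works; the tablespace names below are placeholders. Be aware that in NOARCHIVELOG mode, restoring only those files later is safe only if the redo generated since the copy is still available in the online logs; otherwise a consistent whole-database restore is required.

```sql
SELECT tablespace_name, file_name
FROM   dba_data_files
WHERE  tablespace_name IN ('LOAD_TS1', 'LOAD_TS2')
ORDER  BY tablespace_name, file_name;
```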
Regards,
Rushang -
What is the best way to copy Oracle Transportable Tablespace datafiles between DBs?
Hi All,
We are planning to implement Oracle Transportable Tablespace feature to copy huge data (around 900 GB) between the DBs.
Our plan is to:
1. Offline the tablespace 1 and tablespace 2 on Linux BOX1.
2. Take export of these two tablespaces using TTS on Linux BOX1.
3. zip datafiles of tablespace1 & 2 on Linux BOX1.
4. Copy the zipped datafiles and dump file to Linux BOX2.
5. Unzip datafiles of tablespace1 & 2 on Linux BOX1.
6. Make the tablespaces online on Linux BOX1.
7. Unzip datafiles of tablespace1 & 2 on Linux BOX2.
8. Import the dump file to DB on Linux BOX2.
9. Online the tablespace1 & 2 on Linux BOX2.
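Two hedged notes on the plan: a transportable tablespace export requires the tablespaces to be READ ONLY rather than offline, and it is worth running the self-containment check first. A 9i-style sketch, with the OS-prompt command as a comment and passwords/file names as placeholders:

```sql
-- Check that the tablespace set is self-contained
EXEC dbms_tts.transport_set_check('TABLESPACE1,TABLESPACE2', TRUE);
SELECT * FROM transport_set_violations;

-- Make the tablespaces read only (instead of offline) before step 2
ALTER TABLESPACE tablespace1 READ ONLY;
ALTER TABLESPACE tablespace2 READ ONLY;

-- Step 2, at the OS prompt:
--   $ exp "'sys/password as sysdba'" TRANSPORT_TABLESPACE=y \
--         TABLESPACES=tablespace1,tablespace2 FILE=tts_meta.dmp
```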
However, I do have the below queries before I proceed.
1. Do you see any issue with the above approach?
2. To improve the copying speed across the network, do you suggest any solution other than rcp and ftp?
Our Environment: Oracle 9.2.0.8 running on Linux 2.6.5-7.308-bigsmp #1 SMP
Please share your experiences.
Thanks in advance for your time and suggestions :)
Krishna Bussu.

Take a look at Note 77523.1.
Hope this Helps
Regards -
Error ORA-39125 and ORA-04063 during export for transportable tablespace
I'm using the Oracle Enterprise Manager (browser is IE) to create a tablespace transport file. Maintenance...Transport Tablespaces uses the wizard to walk me through each step. The job gets created and submitted.
The 'Prepare' and 'Convert Datafile(s)' job steps complete successfully. The Export step fails with the following error. Can anyone shed some light on this for me?
Thank you in advance!
=======================================================
Output Log
Export: Release 10.2.0.2.0 - Production on Sunday, 03 September, 2006 19:31:34
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Username:
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "SYS"."GENERATETTS000024": SYS/******** AS SYSDBA dumpfile=EXPDAT_GENERATETTS000024.DMP directory=EM_TTS_DIR_OBJECT transport_tablespaces=SIEBEL job_name=GENERATETTS000024 logfile=EXPDAT.LOG
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
ORA-39125: Worker unexpected fatal error in KUPW$WORKER.UNLOAD_METADATA while calling DBMS_METADATA.FETCH_XML_CLOB [TABLE_STATISTICS]
ORA-04063: view "SYS.KU$_IOTABLE_VIEW" has errors
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.KUPW$WORKER", line 6241
----- PL/SQL Call Stack -----
object line object
handle number name
2CF48130 14916 package body SYS.KUPW$WORKER
2CF48130 6300 package body SYS.KUPW$WORKER
2CF48130 2340 package body SYS.KUPW$WORKER
2CF48130 6861 package body SYS.KUPW$WORKER
2CF48130 1262 package body SYS.KUPW$WORKER
2CF0850C 2 anonymous block
Job "SYS"."GENERATETTS000024" stopped due to fatal error at 19:31:44

More information:
Using SQL Developer, I checked the view SYS.KU$_IOTABLE_VIEW referred to in the error message, and it does indeed report a problem with that view. The following code is the definition of that view. I have no idea what it's supposed to be doing, because it was part of the default installation. I certainly didn't write it. I did, however, execute the 'Test Syntax' button (on the Edit View screen), and the result was this error message:
=======================================================
The SQL syntax is valid, however the query is invalid or uses functionality that is not supported.
Unknown error(s) parsing SQL: oracle.javatools.parser.plsql.syntax.ParserException: Unexpected token
=======================================================
The SQL for the view looks like this:
REM SYS KU$_IOTABLE_VIEW
CREATE OR REPLACE FORCE VIEW "SYS"."KU$_IOTABLE_VIEW" OF "SYS"."KU$_IOTABLE_T"
WITH OBJECT IDENTIFIER (obj_num) AS
select '2','3',
t.obj#,
value(o),
-- if this is a secondary table, get base obj and ancestor obj
decode(bitand(o.flags, 16), 16,
(select value(oo) from ku$_schemaobj_view oo, secobj$ s
where o.obj_num=s.secobj#
and oo.obj_num=s.obj#),
null),
decode(bitand(o.flags, 16), 16,
(select value(oo) from ku$_schemaobj_view oo, ind$ i, secobj$ s
where o.obj_num=s.secobj#
and i.obj#=s.obj#
and oo.obj_num=i.bo#),
null),
(select value(s) from ku$_storage_view s
where i.file# = s.file_num
and i.block# = s.block_num
and i.ts# = s.ts_num),
ts.name, ts.blocksize,
i.dataobj#, t.bobj#, t.tab#, t.cols,
t.clucols, i.pctfree$, i.initrans, i.maxtrans,
mod(i.pctthres$,256), i.spare2, t.flags,
t.audit$, t.rowcnt, t.blkcnt, t.empcnt, t.avgspc, t.chncnt, t.avgrln,
t.avgspc_flb, t.flbcnt, t.analyzetime, t.samplesize, t.degree,
t.instances, t.intcols, t.kernelcols, t.property, 'N', t.trigflag,
t.spare1, t.spare2, t.spare3, t.spare4, t.spare5, t.spare6,
decode(bitand(t.trigflag, 65536), 65536,
(select e.encalg from sys.enc$ e where e.obj#=t.obj#),
null),
decode(bitand(t.trigflag, 65536), 65536,
(select e.intalg from sys.enc$ e where e.obj#=t.obj#),
null),
(select c.name from col$ c
where c.obj# = t.obj#
and c.col# = i.trunccnt and i.trunccnt != 0
and bitand(c.property,1)=0),
cast( multiset(select * from ku$_column_view c
where c.obj_num = t.obj#
order by c.col_num, c.intcol_num
) as ku$_column_list_t
(select value(nt) from ku$_nt_parent_view nt
where nt.obj_num = t.obj#),
cast( multiset(select * from ku$_constraint0_view con
where con.obj_num = t.obj#
and con.contype not in (7,11)
) as ku$_constraint0_list_t
cast( multiset(select * from ku$_constraint1_view con
where con.obj_num = t.obj#
) as ku$_constraint1_list_t
cast( multiset(select * from ku$_constraint2_view con
where con.obj_num = t.obj#
) as ku$_constraint2_list_t
cast( multiset(select * from ku$_pkref_constraint_view con
where con.obj_num = t.obj#
) as ku$_pkref_constraint_list_t
(select value(ov) from ku$_ov_table_view ov
where ov.bobj_num = t.obj#
and bitand(t.property, 128) = 128), -- IOT has overflow
(select value(etv) from ku$_exttab_view etv
where etv.obj_num = o.obj_num)
from ku$_schemaobj_view o, tab$ t, ind$ i, ts$ ts
where t.obj# = o.obj_num
and t.pctused$ = i.obj# -- For IOTs, pctused has index obj#
and bitand(t.property, 32+64+512) = 64 -- IOT but not overflow
-- or partitioned (32)
and i.ts# = ts.ts#
AND (SYS_CONTEXT('USERENV','CURRENT_USERID') IN (o.owner_num, 0) OR
EXISTS ( SELECT * FROM session_roles
WHERE role='SELECT_CATALOG_ROLE' ));
GRANT SELECT ON "SYS"."KU$_IOTABLE_VIEW" TO PUBLIC; -
ORA-00313: open failed for members during restore from COLD backup
Hi all,
I took a cold backup of an 11.1 database using RMAN (database in mount state).
To restore it, I first restored the controlfile and then ran:
RESTORE DATABASE FROM TAG 'TAGxxxxxxxxxx';
Now I'm restoring it and it's taking too much time. When checking alert log it says:
ORA-51106: check failed to complete due to an error. See error below
ORA-48318: ADR Relation [HM_FINDING] of version=3 cannot be supported
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: '/oracle1/oradata/******/redo1_02.log'
ORA-27037: unable to obtain file status
HPUX-ia64 Error: 2: No such file or directory
Additional information: 3
RMAN is still in progress, why does it complain about redo?
I'm not doing recovery since it was a cold backup.
Thanks in advance.

Hi Michael,
Yes, it's the correct one because it says "full restore complete":
ORA-00312: online log 1 thread 1: '/oracle1/oradata/******/redo1_01.log'
ORA-27037: unable to obtain file status
HPUX-ia64 Error: 2: No such file or directory
Additional information: 3
Tue Apr 30 14:38:14 2013
Full restore complete of datafile 32 /oracle1/oradata/******/apr_sesm_index_01.dbf. Elapsed time: 1:25:42
checkpoint is 12652187135448
last deallocation scn is 12651877642745
Tue Apr 30 15:38:04 2013
Full restore complete of datafile 34 /oracle1/oradata/******/apr_stage_data_01.dbf. Elapsed time: 6:42:52
checkpoint is 12652187135448
last deallocation scn is 12651877637877
thanks, -
Recovering cold backup - running in circles - part 2 -Second opinion needed
Hello all,
Here I am again, but this time not to ask for a complete solution where I have to do nothing, but for a second opinion and some help in solving this.
In the thread [Recovering cold backup - alter database open failing - running in circles |http://forums.oracle.com/forums/thread.jspa?messageID=9663720] , I was trying to recover a database (I still am) for one of our clients.
Since then a few things have changed: we have now received from our client the installation media for their database software, Oracle 10.0.1.0.2.
I have installed this on Linux, an openSUSE box, while they used RHEL 4; placed the database files in a similar directory structure to theirs; and made sure that all the ORACLE environment variables were correctly set. We have also received the pfile, and I created the corresponding init file from it.
I start sqlplus / as sysdba
Mounting the database no problem.
Trying to open the database, I receive the following error: ORA-01207: file is more recent than controlfile - old controlfile. The error refers to Datafile 2: '/home/oracle/product/10.1.0/oradata/orcl/undotbs01.dbf'.
There is no corresponding error message in the alert log; well, there is, but it doesn't really give me the information I was looking for.
Mon Jul 11 16:38:26 2011
alter database open
ORA-1122 signalled during: alter database open...
Mon Jul 11 16:44:51 2011
ALTER DATABASE RECOVER database
Mon Jul 11 16:44:51 2011
Media Recovery Start
Media Recovery failed with error 1122
ORA-283 signalled during: ALTER DATABASE RECOVER database ...
I then attempted to do a media recovery using RECOVER DATABASE USING BACKUP CONTROLFILE, which fails because it cannot find a corresponding backup controlfile.
The section generated in the alert log did however worry me more:
ALTER DATABASE RECOVER database using backup controlfile
Mon Jul 11 16:46:15 2011
Media Recovery Start
WARNING! Recovering data file 1 from a fuzzy file. If not the current file
it might be an online backup taken without entering the begin backup command.
WARNING! Recovering data file 2 from a fuzzy file. If not the current file
it might be an online backup taken without entering the begin backup command.
WARNING! Recovering data file 3 from a fuzzy file. If not the current file
it might be an online backup taken without entering the begin backup command.
WARNING! Recovering data file 4 from a fuzzy file. If not the current file
it might be an online backup taken without entering the begin backup command.
WARNING! Recovering data file 5 from a fuzzy file. If not the current file
it might be an online backup taken without entering the begin backup command.
Starting datafile 1 with incarnation depth 0 in thread 1 sequence 1818761
Datafile 1: '/home/oracle/product/10.1.0/oradata/orcl/system01.dbf'
Starting datafile 2 with incarnation depth 0 in thread 1 sequence 1818764
Datafile 2: '/home/oracle/product/10.1.0/oradata/orcl/undotbs01.dbf'
Starting datafile 3 with incarnation depth 0 in thread 1 sequence 1818761
Datafile 3: '/home/oracle/product/10.1.0/oradata/orcl/sysaux01.dbf'
Starting datafile 4 with incarnation depth 0 in thread 1 sequence 1818765
Datafile 4: '/home/oracle/product/10.1.0/oradata/orcl/users01.dbf'
Starting datafile 5 with incarnation depth 0 in thread 1 sequence 1818761
Datafile 5: '/home/oracle/product/10.1.0/oradata/orcl/temp02.dbf'
Media Recovery Log
So that could mean that they just took a copy of the files without taking the database offline first. Is there any way to confirm this?
I've come to a point where I have no immediate clue what to do next; some help, even a little push, would be helpful.

I did
SQL> recover database using backup controlfile until cancel;
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/home/oracle/product/10.1.0/oradata/orcl/redo01.log
Tue Jul 12 12:36:03 2011
ALTER DATABASE RECOVER LOGFILE '/home/oracle/product/10.1.0/oradata/orcl/redo01.log'
Tue Jul 12 12:36:03 2011
Media Recovery Log /home/oracle/product/10.1.0/oradata/orcl/redo01.log
Tue Jul 12 12:36:03 2011
Incomplete recovery applied all redo ever generated.
Recovery completed through change 996740226
Tue Jul 12 12:36:03 2011
orcl; Media Recovery Complete
ARCH: Connecting to console port...
Completed: ALTER DATABASE RECOVER LOGFILE '/home/oracle/pr
Tue Jul 12 12:36:18 2011
ALTER DATABASE RECOVER database using backup controlfile until cancel
Tue Jul 12 12:36:18 2011
Media Recovery Start
Starting datafile 1 with incarnation depth 0 in thread 1 sequence 1
Datafile 1: '/home/oracle/product/10.1.0/oradata/orcl/system01.dbf'
Starting datafile 2 with incarnation depth 0 in thread 1 sequence 1
Datafile 2: '/home/oracle/product/10.1.0/oradata/orcl/undotbs01.dbf'
Starting datafile 3 with incarnation depth 0 in thread 1 sequence 1
Datafile 3: '/home/oracle/product/10.1.0/oradata/orcl/sysaux01.dbf'
Starting datafile 4 with incarnation depth 0 in thread 1 sequence 1
Datafile 4: '/home/oracle/product/10.1.0/oradata/orcl/users01.dbf'
Starting datafile 5 with incarnation depth 0 in thread 1 sequence 1
Datafile 5: '/home/oracle/product/10.1.0/oradata/orcl/temp02.dbf'
Media Recovery Log
It said media recovery complete in sqlplus
when I tried to open I got disconnection forced
Tue Jul 12 12:37:18 2011
alter database open resetlogs
RESETLOGS after complete recovery through change 996740226
Resetting resetlogs activation ID 1284019345 (0x4c889491)
Online log /home/oracle/product/10.1.0/oradata/orcl/redo02.log: Thread 1 Group 2 was previously cleared
Online log /home/oracle/product/10.1.0/oradata/orcl/redo03.log: Thread 1 Group 3 was previously cleared
Setting recovery target incarnation to 4
Tue Jul 12 12:37:18 2011
Setting recovery target incarnation to 4
Tue Jul 12 12:37:18 2011
Flashback Database Disabled
Tue Jul 12 12:37:18 2011
Assigning activation ID 1284062526 (0x4c893d3e)
Maximum redo generation record size = 120832 bytes
Maximum redo generation change vector size = 116476 bytes
Private_strands 7 at log switch
Thread 1 opened at log sequence 1
Current log# 1 seq# 1 mem# 0: /home/oracle/product/10.1.0/oradata/orcl/redo01.log
Successful open of redo thread 1
Tue Jul 12 12:37:18 2011
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Tue Jul 12 12:37:18 2011
SMON: enabling cache recovery
Tue Jul 12 12:37:18 2011
Successfully onlined Undo Tablespace 1.
Dictionary check beginning
Dictionary check complete
Tue Jul 12 12:37:18 2011
SMON: enabling tx recovery
Tue Jul 12 12:37:18 2011
WARNING: The following temporary tablespaces contain no files.
This condition can occur when a backup controlfile has
been restored. It may be necessary to add files to these
tablespaces. That can be done using the SQL statement:
ALTER TABLESPACE <tablespace_name> ADD TEMPFILE
Alternatively, if these temporary tablespaces are no longer
needed, then they can be dropped.
Empty temporary tablespace: TEMP
Database Characterset is UTF8
Tue Jul 12 12:37:18 2011
Published database character set on system events channel
Tue Jul 12 12:37:18 2011
All processes have switched to database character set
Tue Jul 12 12:37:18 2011
Errors in file /home/oracle/product/10.1.0/db_1/admin/orcl/udump/orcl_ora_19084.trc:
ORA-00600: internal error code, arguments: [4194], [21], [17], [], [], [], [], []
Tue Jul 12 12:37:19 2011
Errors in file /home/oracle/product/10.1.0/db_1/rdbms/log/orcl_smon_19072.trc:
ORA-00600: internal error code, arguments: [4194], [75], [7], [], [], [], [], []
Tue Jul 12 12:37:20 2011
Doing block recovery for file 2 block 25323
Block recovery range from rba 1.65.0 to scn 0.996740294
Recovery of Online Redo Log: Thread 1 Group 1 Seq 1 Reading mem 0
Mem# 0 errs 0: /home/oracle/product/10.1.0/oradata/orcl/redo01.log
Block recovery stopped at EOT rba 1.67.16
Block recovery completed at rba 1.67.16, scn 0.996740294
Doing block recovery for file 2 block 25
Block recovery range from rba 1.65.0 to scn 0.996740293
Recovery of Online Redo Log: Thread 1 Group 1 Seq 1 Reading mem 0
Mem# 0 errs 0: /home/oracle/product/10.1.0/oradata/orcl/redo01.log
Block recovery completed at rba 1.67.16, scn 0.996740294
Tue Jul 12 12:37:20 2011
Errors in file /home/oracle/product/10.1.0/db_1/rdbms/log/orcl_smon_19072.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-00607: Internal error occurred while making a change to a data block
ORA-00600: internal error code, arguments: [4194], [75], [7], [], [], [], [], []
Tue Jul 12 12:37:21 2011
Doing block recovery for file 2 block 1295
Block recovery range from rba 1.63.0 to scn 0.996740295
Recovery of Online Redo Log: Thread 1 Group 1 Seq 1 Reading mem 0
Mem# 0 errs 0: /home/oracle/product/10.1.0/oradata/orcl/redo01.log
Block recovery stopped at EOT rba 1.67.16
Block recovery completed at rba 1.67.16, scn 0.996740294
Doing block recovery for file 2 block 41
Block recovery range from rba 1.63.0 to scn 0.996740292
Recovery of Online Redo Log: Thread 1 Group 1 Seq 1 Reading mem 0
Mem# 0 errs 0: /home/oracle/product/10.1.0/oradata/orcl/redo01.log
Block recovery completed at rba 1.65.16, scn 0.996740293
Tue Jul 12 12:37:21 2011
Errors in file /home/oracle/product/10.1.0/db_1/admin/orcl/udump/orcl_ora_19084.trc:
ORA-00600: internal error code, arguments: [4193], [57818], [10145], [], [], [], [], []
Doing block recovery for file 2 block 622
Block recovery range from rba 1.67.0 to scn 0.996740298
Recovery of Online Redo Log: Thread 1 Group 1 Seq 1 Reading mem 0
Mem# 0 errs 0: /home/oracle/product/10.1.0/oradata/orcl/redo01.log
Block recovery stopped at EOT rba 1.69.16
Block recovery completed at rba 1.69.16, scn 0.996740298
Doing block recovery for file 2 block 9
Block recovery range from rba 1.67.0 to scn 0.996740297
Recovery of Online Redo Log: Thread 1 Group 1 Seq 1 Reading mem 0
Mem# 0 errs 0: /home/oracle/product/10.1.0/oradata/orcl/redo01.log
Block recovery completed at rba 1.69.16, scn 0.996740298
WARNING: Files may exists in db_recovery_file_dest
that are not known to the database. Use the RMAN command
CATALOG RECOVERY AREA to re-catalog any such files.
One of the following events caused this:
1. A backup controlfile was restored.
2. A standby controlfile was restored.
3. The controlfile was re-created.
4. db_recovery_file_dest had previously been enabled and
then disabled.
Tue Jul 12 12:37:23 2011
Errors in file /home/oracle/product/10.1.0/db_1/admin/orcl/udump/orcl_ora_19084.trc:
ORA-00600: internal error code, arguments: [4194], [75], [7], [], [], [], [], []
Doing block recovery for file 2 block 25323
Block recovery range from rba 1.65.0 to scn 0.996740294
Recovery of Online Redo Log: Thread 1 Group 1 Seq 1 Reading mem 0
Mem# 0 errs 0: /home/oracle/product/10.1.0/oradata/orcl/redo01.log
Block recovery completed at rba 1.67.16, scn 0.996740297
Doing block recovery for file 2 block 25
Block recovery range from rba 1.65.0 to scn 0.996740301
Recovery of Online Redo Log: Thread 1 Group 1 Seq 1 Reading mem 0
Mem# 0 errs 0: /home/oracle/product/10.1.0/oradata/orcl/redo01.log
Block recovery completed at rba 1.71.16, scn 0.996740302
Tue Jul 12 12:37:24 2011
Errors in file /home/oracle/product/10.1.0/db_1/admin/orcl/udump/orcl_ora_19084.trc:
ORA-00607: Internal error occurred while making a change to a data block
ORA-00600: internal error code, arguments: [4194], [75], [7], [], [], [], [], []
Error 607 happened during db open, shutting down database
USER: terminating instance due to error 607
Instance terminated by USER, pid = 19084
ORA-1092 signalled during: alter database open resetlogs...
-
Migrate Oracle 11g database from Windows To Linux using RMAN hot / cold backup ?
Hi Friends,
Is it possible to migrate an Oracle 11g database from Windows to Linux using an RMAN hot/cold backup (as I would like to perform point-in-time recovery)?
(or) Is the only way to use RMAN CONVERT, as mentioned here - Transporting Data Across Platforms?
(or) Is there any other method (except exp/imp and Data Pump)?
Regards,
DB

Hi,
This post describes the procedure required to migrate a database from Windows to Linux using the RMAN CONVERT DATABASE command.
Both Windows and Linux platforms have the same endian format, which makes it possible to transfer the whole database, making the migration process very straightforward and simple.
To migrate between platforms that have a different endian format, Cross-Platform Transportable Tablespaces (XTTS) need to be used instead.
List of Steps Needed to Complete the Migration
The migration process is simple, but as it has several steps it is convenient to be familiar with them before running it.
1. Check platform compatibility between source and target OS
2. Start the database in read only mode
3. Check database readiness for transport from Windows to Linux using DBMS_TDB.CHECK_DB
4. Check if there are any external objects
5. Execute the Rman Convert database command
6. Copy converted datafiles, generated Transport Script and Parameter File to Linux
7. Edit the init.ora for the new database
8. Edit the Transport Script and Parameter File changing the windows paths to Linux Paths
9. Execute the Transport Script
10. Change the Database ID
11. Check database integrity
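Step 5 above can be sketched roughly as follows. The database name, platform string, and paths are illustrative assumptions, not values from the original post; check V$DB_TRANSPORTABLE_PLATFORM for the exact platform name on your system:

```sql
-- Run from RMAN on the source (Windows) database, opened READ ONLY.
RMAN> CONVERT DATABASE NEW DATABASE 'lnxdb'
        TRANSPORT SCRIPT 'C:\convert\transport.sql'
        TO PLATFORM 'Linux x86 64-bit'
        DB_FILE_NAME_CONVERT 'C:\oracle\oradata\windb\' 'C:\convert\';
```

The converted files, transport script, and pfile produced under `C:\convert\` are then what you copy over in step 6.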
Thank you -
Why transportable tablespace for platform migration of same endian format?
RDBMS Version : 10.2.0.4
We are planning to migrate our DB to a different platform. Both platforms are of BIG endian format. From googling, I came across the following link:
http://levipereira.files.wordpress.com/2011/01/oracle_generic_migration_version_1.pdf
In this IBM document, they are migrating from Solaris 5.9 (SPARC) to AIX 6. Both are of BIG endian format. Since they are of the same endian format, can't they use TRANSPORTABLE DATABASE? Why are they using RMAN CONVERT DATAFILE (transportable tablespace)?
They are in fact using transportable database: they are not importing data into the dictionary or recreating users. They simply used CONVERT DATAFILE instead of CONVERT DATABASE to avoid converting all datafiles (only the SYSTEM and UNDO tablespaces need conversion). There's a MOS note on this: Avoid Datafile Conversion during Transportable Database [ID 732053.1].
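The "convert only SYSTEM and UNDO" approach from that MOS note looks roughly like this on the target host. File names, paths, and the platform string are illustrative assumptions only:

```sql
-- On the target, convert only the datafiles that contain undo data
-- (SYSTEM and UNDO tablespaces); all other datafiles are copied as-is.
RMAN> CONVERT DATAFILE '/stage/system01.dbf', '/stage/undotbs01.dbf'
        FROM PLATFORM 'Solaris[tm] OE (64-bit)'
        DB_FILE_NAME_CONVERT '/stage/', '/u01/oradata/db/';
```

This is why same-endian migrations with this method are much faster than a full CONVERT DATABASE: the bulk of the datafiles never pass through conversion.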
Basic steps for convert database:
1. Verify the prerequisites
2. Identify any external files and directories with DBMS_TDB.CHECK_EXTERNAL.
3. Shutdown (consistent) and restart the source database in READ ONLY mode.
4. Use DBMS_TDB.CHECK_DB to make sure the database is ready to be transported.
5. Run the RMAN convert database command.
6. Copy the converted files to the target database. Note that this implies that you will need 2x the storage on the source database for the converted files.
7. Copy the parameter file to the target database.
8. Adjust configuration files as required (parameter, listener.ora, tnsnames, etc).
9. Fire up the new database!
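Steps 2 and 4 above can be sketched as a single PL/SQL block, run on the source database while it is open READ ONLY. The target platform name is an assumption; it must match an entry in V$DB_TRANSPORTABLE_PLATFORM:

```sql
SET SERVEROUTPUT ON
DECLARE
  db_ready    BOOLEAN;
  external_ok BOOLEAN;
BEGIN
  -- Reports (via DBMS_OUTPUT) anything blocking transport to the target
  db_ready := DBMS_TDB.CHECK_DB('Linux x86 64-bit',
                                DBMS_TDB.SKIP_READONLY);
  -- Lists external tables, directories and BFILEs that must be
  -- copied or recreated manually on the target
  external_ok := DBMS_TDB.CHECK_EXTERNAL;
END;
/
```

If either call prints warnings, resolve them before running the RMAN convert in step 5.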
All other details are in:
http://docs.oracle.com/cd/B19306_01/backup.102/b14191/dbxptrn.htm#CHDFHBFI
Lukas -
How to recover a cold backup (Oracle 9i)
Hello,
We got the following situation, one of our clients wants to switch from Oracle 9i to Microsoft SQL Server 2008.
Our client is located in Singapore and we operate in Belgium, they have sent us an external HD with a media/cold backup of their database.
We also got both the database name and the passwords for the database.
In order to get this database available I have taken the following steps:
1) Created a virtual machine with Windows XP SP3,
2) Installed Oracle 9i on this VM,
3) Created a database using the database name and passwords given by the client.
Now I want to recover our client's database. How do I do this?
I know there are texts about it, but I need a step-by-step guide (that's the way I learn: by seeing how it's done and then reproducing it).
Thanks for any response.
After some calls and putting pressure on our client's IT, I've been able to find out that they are actually using either 11g or 10g.
Also found out that the name they gave me is not the name they use.
So I followed the instructions at [http://download.oracle.com/docs/cd/B10501_01/server.920/a96521/create.htm#1000691], though I did find some mistakes in the page.
I got the following create database statement
CREATE DATABASE dbname
USER SYS IDENTIFIED BY pz6r58
USER SYSTEM IDENTIFIED BY y1tz5p
LOGFILE GROUP 1 ('D:\dbname\redo01.log') SIZE 100M,
GROUP 2 ('D:\dbname\redo02.log') SIZE 100M,
GROUP 3 ('D:\dbname\redo03.log') SIZE 100M
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXLOGHISTORY 1
MAXDATAFILES 100
MAXINSTANCES 1
CHARACTER SET US7ASCII
DATAFILE 'D:\dbname\system01.dbf' SIZE 700M REUSE
EXTENT MANAGEMENT LOCAL
DEFAULT TEMPORARY TABLESPACE tempts1
TEMPFILE 'D:\dbname\temp01.dbf' *(The documentation said DATAFILE instead of TEMPFILE)*
SIZE 270M REUSE
SYSAUX DATAFILE 'D:\dbname\sysaux01.dbf' size 700M reuse
UNDO TABLESPACE undotbs
DATAFILE 'D:\dbname\undotbs01.dbf'
SIZE 3984M REUSE AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED;
After executing it, I got:
ORA-01501: CREATE DATABASE failed
ORA-00200: controlfile could not be created
ORA-00202: controlfile: 'D:\dbname\control01.ctl'
ORA-27038: file exists
OSD-04010: <create> option specified, file already exist
So what did I do wrong? Although I'm guessing I copied the original files too early.
DATAFILE 'D:\dbname\system01.dbf' SIZE 700M REUSE
Although I did use REUSE.
I shut down the database, removed the files, and replaced them with the files from the client.
Mounted the database successfully and tried to query the metadata.
Mixing the information found on
Creating an Oracle Database,
Oracle Recovery Procedure
Got ORA-01219, which is logical since the database isn't open yet.
I opened the database using ALTER DATABASE OPEN and got:
ORA-01157: cannot identify data file 1 - file not found
ORA-01110: data file 1: '/home/oracle/product/10.1.0/oradata/orcl/system01.dbf'
So any ideas what went wrong this time?
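ORA-01157 here usually means the control file still records the datafile paths from the client's original server. One common fix, with the database in MOUNT, is to repoint each file; the paths below are examples matching this scenario, not verified values:

```sql
-- With the database MOUNTED, repoint each datafile the control file
-- cannot find to its actual location on this machine:
ALTER DATABASE RENAME FILE
  '/home/oracle/product/10.1.0/oradata/orcl/system01.dbf'
  TO 'D:\dbname\system01.dbf';
-- Repeat for every file listed in V$DATAFILE (and V$LOGFILE), then:
ALTER DATABASE OPEN;
```

Note this only works for a consistent cold backup; an inconsistent copy would also need recovery before OPEN.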
Edited by: Resender on 19-May-2011 1:00