Datafile problem
Hi Gurus,
I have two different environments, A and B (different hardware), where two identical Oracle users exist.
My problem is that the environment which is currently live (env A) has a datafile size of 8G, whereas in the other one (env B) I had to resize the datafile to 40G. The number of tables and their data are the same for both users.
I had to resize the datafile because when I run the following query it reports 0% free for the tablespace:
SELECT b.tablespace_name,
       c.size_kb,
       b.free_kb,
       Trunc((b.free_kb/c.size_kb) * 100) percent_free
FROM   (SELECT tablespace_name,
               Trunc(Sum(bytes)/1024) free_kb
        FROM   dba_free_space
        GROUP  BY tablespace_name) b,
       (SELECT tablespace_name,
               Trunc(Sum(bytes)/1024) size_kb
        FROM   dba_data_files
        GROUP  BY tablespace_name) c
WHERE  b.tablespace_name = c.tablespace_name
AND    Round((b.free_kb/c.size_kb) * 100,2) < 10;
I run the above query to check that the tablespace doesn't run out of space.
Can anyone tell me where the problem is and what I should look into?
//saby
Why are you resizing them manually?
select file_name, autoextensible from dba_data_files;
And what do you mean by "The count of table and there data are same for both these two users."?
No user should ever do anything in either of these two tablespaces.
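If the intent is for a datafile to grow on demand rather than being resized by hand, autoextend can be enabled per datafile. A minimal sketch, assuming a file path (the path and limits below are illustrative, not taken from the poster's system):

```sql
-- Hypothetical file name; substitute the FILE_NAME reported by dba_data_files.
ALTER DATABASE DATAFILE '/u01/oradata/PROD/users01.dbf'
  AUTOEXTEND ON NEXT 100M MAXSIZE 40G;

-- Confirm the change took effect.
SELECT file_name, autoextensible, maxbytes/1024/1024 AS max_mb
FROM   dba_data_files;
```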
Similar Messages
-
Locking datafile problem using oracle agent
Hi All
we are facing datafile locking problems when we are taking backups using the Oracle agent.
In Veritas we have selected the option not to lock files when taking backups.
Is there any other setting which we have missed, so that it does not lock files?
regards
kedar
If you use a version locking policy, you should use a version field, not reuse your primary key for that. If you map the version field (you don't have to), you need to map it as read-only. This makes sense: you don't want the application to change this version field; only TopLink should change it. This is where your exception comes from.
If you want to use existing database fields for optimistic locking, you should use a field locking policy. It does not make sense to use the primary key for that: it never changes, so you never know when the object has been changed by another user.
So you can do two things now to fix your code:
create a version column in your database and use a version locking policy (preferably numeric), or use a field locking policy (with the job and salary fields, or all fields).
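As a sketch of the first option, the version column is just an extra numeric column on the mapped table (the table and column names here are hypothetical):

```sql
-- Add a numeric version column for TopLink's version locking policy.
-- The application should never update it; TopLink increments it on each write.
ALTER TABLE employee ADD (version NUMBER DEFAULT 0 NOT NULL);
```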
There is a pretty good description of locking policies in the TopLink guide:
http://www.oracle.com/technology/products/ias/toplink/doc/10131/main/_html/descun008.htm#CIHCFEIB
Hope this helps,
Lonneke -
Hi, I am using Oracle 7.3.4 on NT 4.
I am having a problem with one of my rollback segments, called HISTORY, which contains two datafiles: e:\rollback\history.ora and f:\rollback\history.ora.
This rollback segment was used to transfer old data to the HISTORY datafile in
f:\HISTORY\HISTORY.ora
(the SQL looks like this:
set transaction use rollback segment history;
insert into history.bk_tableA
where ..... )
Recently the HISTORY rollback segment was dropped by mistake. Since then I have lost access to the history tables (in the HISTORY datafile). When I open the storage manager
it shows these 3 files are in recover status.
What is the relationship between the History rollback seqment and History datafile?
The list below shows the errors I got when trying to recover the datafile.
First attempt:
SVRMGR>
recover datafile 'f:\history\history.ora';
ORA-00279: Change 525082216 generated at 12/08/99 16:24:02 needed for thread 1
ORA-00289: Suggestion : d:\ORANT\RDBMS73\%ORACLE_SID%25290.001
ORA-00280: Change 525082216 for thread 1 is in sequence #252904
Specify log: {=suggested | filename | AUTO | CANCEL}
ORA-00310: archived log contains sequence 252909; sequence 252904 required
ORA-00334: archived log: 'D:\ORANT\RDBMS73\ORCL25290.001'
I tried AUTO, and even the suggested log file orcl25290.001, and still could not recover the datafile. It seems the recovery program
cannot find the specific sequence # in the log file! Any suggestions?
Second attempt:
I tried incomplete, change-based recovery:
SVRMGR>recover until change 525082215
it shows Media recovery complete. Then, I open the database with
'alter database open noresetlogs'
But I still cannot access that datafile.
svrmgr>select count(*) from history.bk_sn_err;
count(*)
ora-00376: file 5 cannot be read at this time
ora-01110: data file 5: 'f:\history\history.ora'
From the storage manager the file is still in recover status.
Was the file recovered at all?
Is it possible that I could drop the damaged
rollback segment and recreate a new one?
Should I be able to gain access to the f:\history\HISTORY.ora datafile?
[email protected]
Hi Micheal,
when you said you had dropped the history, did you drop the rollback segment, did you delete the history.ora file from NT, or did you drop the file from Oracle by doing an
alter database datafile '...history...' offline drop?
also is your database running in archivelog mode?
depending on the above, you will have to use different methods to recover.
from the error messages you seem to be running in noarchivelog mode, so you have to offline drop the datafile.
If you do an offline drop on the datafile, then you will have to drop the tablespace and recreate it.
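A minimal sketch of that sequence (the file path is taken from the thread; the tablespace name and new size are placeholders):

```sql
-- Take the unrecoverable file offline; its contents are lost.
ALTER DATABASE DATAFILE 'f:\history\history.ora' OFFLINE DROP;

-- Once the database is open, drop and recreate the tablespace.
DROP TABLESPACE history INCLUDING CONTENTS;
CREATE TABLESPACE history
  DATAFILE 'f:\history\history.ora' SIZE 500M;
```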
Thanks,
Mandar
-
2 offline datafiles - problems with recovery?
Hi, in a database there are 2 datafiles that were brought offline. They cannot be brought online again because I have no archive logs, but they are empty and not used at all. Backups are made regularly without any warnings. I wonder if there could be any problem during any future database restore or database recovery?
Grid Control shows that the 2 files need media recovery and the severity is: X
Edited by: 852326 on 2011-06-07 06:31
You can drop the datafiles with the offline option, provided you are sure there is no data in them. Afterwards, perform a FULL backup once again.
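Before dropping them, it may be worth confirming that the files really contain no segments. A sketch (the file numbers are placeholders; substitute the FILE# values of the offline files):

```sql
-- Any rows here mean a file still holds extents and must not be dropped.
SELECT owner, segment_name, segment_type
FROM   dba_extents
WHERE  file_id IN (5, 6);
```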
alter database datafile <file#> offline drop; -
My database is in noarchivelog mode
I had a tablespace with 3 datafiles and 1 datafile is deleted accidently.
I know i cant recover the datafile as the database is in noarchivelog mode.
my DB is only a test database.
Now what do I have to do in order to run my database without any errors? I don't care about recovering the data.
I am getting error ORA-01116: error in opening database file string
Thankx...
Hi Taj,
OK, I will do as you suggested. But is there any possibility that I can recover the lost datafile?
I have a backup of the lost datafile on the production database. Can I copy that datafile to the test database and add it to the tablespace? -
Restore datafile problem (RMAN)
I deleted the datafile branch.dbf,
then restored it from yesterday's backup:
RMAN> restore tablespace branch;
successfully restored branch.dbf.
RMAN> recover tablespace branch;
archive log thread 1 sequence 12 is already on disk as file C:\ESAS\ORACLE\APP\O
RACLE\FLASH_RECOVERY_AREA\XE\ARCHIVELOG\2008_01_21\O1_MF_1_12_3S8KWFJM_.ARC
archive log thread 1 sequence 13 is already on disk as file C:\ESAS\ORACLE\APP\O
RACLE\FLASH_RECOVERY_AREA\XE\ARCHIVELOG\2008_01_21\O1_MF_1_13_3S8KWZM1_.ARC
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 01/21/2008 10:20:57
RMAN-06053: unable to perform media recovery because of missing log
RMAN-06025: no backup of log thread 1 seq 5 lowscn 28752248 found to restore
RMAN-06025: no backup of log thread 1 seq 4 lowscn 28749190 found to restore
RMAN-06025: no backup of log thread 1 seq 3 lowscn 28735538 found to restore
RMAN-06025: no backup of log thread 1 seq 2 lowscn 28732630 found to restore
RMAN-06025: no backup of log thread 1 seq 1 lowscn 28704663 found to restore
RMAN-06025: no backup of log thread 1 seq 4 lowscn 28704627 found to restore
RMAN-06025: no backup of log thread 1 seq 3 lowscn 28684595 found to restore
RMAN-06025: no backup of log thread 1 seq 2 lowscn 28664563 found to restore
RMAN-06025: no backup of log thread 1 seq 1 lowscn 28642042 found to restore
RMAN-06025: no backup of log thread 1 seq 2225 lowscn 28640983 found to restore
RMAN-06025: no backup of log thread 1 seq 2224 lowscn 28640978 found to restore
RMAN-06025: no backup of log thread 1 seq 2223 lowscn 28633884 found to restore
SQL> alter tablespace branch online;
alter tablespace branch online
ERROR at line 1:
ORA-01190: control file or data file 4 is from before the last RESETLOGS
ORA-01110: data file 4: 'C:\ESAS\ORACLE\ORADATA\XE\BRANCH.DBF'
Also, the size of branch.dbf is normally 2,800GB, but when I restore it from the backupset it is nearly 500MB.
If you did a RESETLOGS 3 days ago, that would have been between
the SCNs 28640983 and 28642042, based on these lines:
RMAN-06025: no backup of log thread 1 seq 1 lowscn 28642042 found to restore
RMAN-06025: no backup of log thread 1 seq 2225 lowscn 28640983 found to restore
However you do not expect to see a 500MB file. So I wonder if Oracle found
no backup of the datafile from recent days, but found the most recent backup to be from so long ago (3+ days) that the datafile then was only 500MB.
That would mean that your recent daily backups have been failing or silently erroring out.
If RMAN cannot find the most recent (e.g. yesterday's) backup of a datafile, it will go back in time until it can find the "latest" backup -- so the "latest" backup of that datafile was a 500MB image many days ago?
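One way to see what RMAN actually holds for that file is to ask it directly; a sketch (file number 4 is taken from the ORA-01110 message above):

```
RMAN> LIST BACKUP OF DATAFILE 4;
RMAN> REPORT NEED BACKUP;
RMAN> CROSSCHECK BACKUP;
```

LIST shows every backup piece RMAN knows about for the file; CROSSCHECK marks catalog entries whose pieces are missing on disk or tape.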
Check your backup destination directory -- your Flash Recovery Area -- to see if backups are really going there. Check your backup logs. -
Problem in Database Recovery..
I'm working in a Test environment; my database is running in archivelog mode. I have a current backup and an old backup (one month old).
For backup & recovery practice, I deleted the control files, online redo logs and datafiles, and restored from the old backup (one month old).
Note: I want to let you know that I have all the archived redo log files, and I have also created one tablespace (tablespace: abamco_test) that is not available in the old backup.
Is it possible to recover that tablespace with only archived redo log files (no datafile backup)?
=========================================================================
SQL> select name from v$database;
NAME
ROCK
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
<><> deleted all the datafiles, redologs, and control files.
<><> copied & pasted all the redologs, and control files from one month old backup
SQL> startup
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 1247876 bytes
Variable Size 83887484 bytes
Database Buffers 75497472 bytes
Redo Buffers 7139328 bytes
ORA-00205: error in identifying control file, check alert log for more info
SQL> shutdown immediate
ORA-01507: database not mounted
ORACLE instance shut down.
<><> Copied the one month old control file and pasted it in the OraData folder.
SQL> startup
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 1247876 bytes
Variable Size 83887484 bytes
Database Buffers 75497472 bytes
Redo Buffers 7139328 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\SYSTEM01.DBF'
SQL> shutdown immediate
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
<><> Copied all the datafiles and online redo log files from the one month old backup.
SQL> startup
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 1247876 bytes
Variable Size 83887484 bytes
Database Buffers 75497472 bytes
Redo Buffers 7139328 bytes
Database mounted.
ORA-01113: file 1 needs media recovery
ORA-01110: data file 1: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\SYSTEM01.DBF'
SQL> recover database;
ORA-00283: recovery session canceled due to errors
ORA-01110: data file 6: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\ABAMCO_TEST01.DBF'
ORA-01157: cannot identify/lock data file 6 - see DBWR trace file
ORA-01110: data file 6: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\ABAMCO_TEST01.DBF'
SQL> shutdown immediate
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 1247876 bytes
Variable Size 83887484 bytes
Database Buffers 75497472 bytes
Redo Buffers 7139328 bytes
Database mounted.
ORA-01122: database file 1 failed verification check
ORA-01110: data file 1: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\SYSTEM01.DBF'
ORA-01207: file is more recent than control file - old control file
SQL> recover database;
ORA-00283: recovery session canceled due to errors
ORA-01122: database file 1 failed verification check
ORA-01110: data file 1: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\SYSTEM01.DBF'
ORA-01207: file is more recent than control file - old control file
SQL> shutdown immediate;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
SQL> startup nomount;
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 1247876 bytes
Variable Size 83887484 bytes
Database Buffers 75497472 bytes
Redo Buffers 7139328 bytes
SQL> alter database backup controlfile to trace;
alter database backup controlfile to trace
ERROR at line 1:
ORA-01507: database not mounted
SQL> alter database mount;
Database altered.
SQL> alter database backup controlfile to trace;
Database altered.
SQL> shutdown immediate;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
<><> copied the contents from the generated trace file and saved them in the controlfile_recover.sql script.
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "ROCK" NORESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\REDO01.LOG' SIZE 50M,
GROUP 2 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\REDO02.LOG' SIZE 50M,
GROUP 3 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\REDO03.LOG' SIZE 50M
-- STANDBY LOGFILE
DATAFILE
'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\SYSTEM01.DBF',
'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\UNDOTBS01.DBF',
'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\SYSAUX01.DBF',
'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\USERS01.DBF',
'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\SYSTEM03.DBF',
'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\ABAMCO_TEST01.DBF'
CHARACTER SET WE8MSWIN1252
SQL> @D:\controlfile_recover.sql
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 1247876 bytes
Variable Size 83887484 bytes
Database Buffers 75497472 bytes
Redo Buffers 7139328 bytes
CREATE CONTROLFILE REUSE DATABASE "ROCK" RESETLOGS ARCHIVELOG
ERROR at line 1:
ORA-01503: CREATE CONTROLFILE failed
ORA-01565: error in identifying file 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\ABAMCO_TEST01.DBF'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
ORA-01507: database not mounted
ALTER DATABASE OPEN RESETLOGS
ERROR at line 1:
ORA-01507: database not mounted
ALTER TABLESPACE TEMP ADD TEMPFILE 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\TEMP01.DBF' REUSE
ERROR at line 1:
ORA-01109: database not open
SQL> alter tablespace ABAMCO_TEST offline;
alter tablespace ABAMCO_TEST offline
ERROR at line 1:
ORA-01109: database not open
SQL>
SQL>
SQL> shutdown immediate;
ORA-01507: database not mounted
ORACLE instance shut down.
<><> I removed one line 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\ABAMCO_TEST01.DBF' from controlfile_recover.sql
SQL> @D:\controlfile_recover.sql
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 1247876 bytes
Variable Size 83887484 bytes
Database Buffers 75497472 bytes
Redo Buffers 7139328 bytes
CREATE CONTROLFILE REUSE DATABASE "ROCK" RESETLOGS ARCHIVELOG
ERROR at line 1:
ORA-01503: CREATE CONTROLFILE failed
ORA-01163: SIZE clause indicates 12800 (blocks), but should match header 1536
ORA-01110: data file 5: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\SYSTEM03.DBF'
ORA-01507: database not mounted
ALTER DATABASE OPEN RESETLOGS
ERROR at line 1:
ORA-01507: database not mounted
ALTER TABLESPACE TEMP ADD TEMPFILE 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\TEMP01.DBF' REUSE
ERROR at line 1:
ORA-01109: database not open
SQL> SHUTDOWN IMMEDIATE;
ORA-01507: database not mounted
ORACLE instance shut down.
SQL> @D:\controlfile_recover.sql
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 1247876 bytes
Variable Size 83887484 bytes
Database Buffers 75497472 bytes
Redo Buffers 7139328 bytes
CREATE CONTROLFILE REUSE DATABASE "ROCK" RESETLOGS ARCHIVELOG
ERROR at line 1:
ORA-01503: CREATE CONTROLFILE failed
ORA-01163: SIZE clause indicates 16384 (blocks), but should match header 1536
ORA-01110: data file 5: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\SYSTEM03.DBF'
ORA-01507: database not mounted
ALTER DATABASE OPEN RESETLOGS
ERROR at line 1:
ORA-01507: database not mounted
ALTER TABLESPACE TEMP ADD TEMPFILE 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\TEMP01.DBF' REUSE
ERROR at line 1:
ORA-01109: database not open
SQL> shutdown immediate;
ORA-01507: database not mounted
ORACLE instance shut down.
SQL> @D:\controlfile_recover.sql
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 1247876 bytes
Variable Size 83887484 bytes
Database Buffers 75497472 bytes
Redo Buffers 7139328 bytes
CREATE CONTROLFILE REUSE DATABASE "ROCK" RESETLOGS ARCHIVELOG
ERROR at line 1:
ORA-01503: CREATE CONTROLFILE failed
ORA-01163: SIZE clause indicates 32768 (blocks), but should match header 1536
ORA-01110: data file 5: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\SYSTEM03.DBF'
ORA-01507: database not mounted
ALTER DATABASE OPEN RESETLOGS
ERROR at line 1:
ORA-01507: database not mounted
ALTER TABLESPACE TEMP ADD TEMPFILE 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\TEMP01.DBF' REUSE
ERROR at line 1:
ORA-01109: database not open
SQL> shutdown immediate;
ORA-01507: database not mounted
ORACLE instance shut down.
SQL> @D:\controlfile_recover.sql
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 1247876 bytes
Variable Size 83887484 bytes
Database Buffers 75497472 bytes
Redo Buffers 7139328 bytes
CREATE CONTROLFILE REUSE DATABASE "ROCK" RESETLOGS ARCHIVELOG
ERROR at line 1:
ORA-01503: CREATE CONTROLFILE failed
ORA-01163: SIZE clause indicates 65536 (blocks), but should match header 1536
ORA-01110: data file 5: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\SYSTEM03.DBF'
ORA-01507: database not mounted
ALTER DATABASE OPEN RESETLOGS
ERROR at line 1:
ORA-01507: database not mounted
ALTER TABLESPACE TEMP ADD TEMPFILE 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ROCK\TEMP01.DBF' REUSE
ERROR at line 1:
ORA-01109: database not open
SQL>
Any suggestions? What should I do now?
Here's what I found on Metalink:
Subject: ORA-1163 creating a controlfile
Doc ID: Note:377933.1 Type: PROBLEM
Last Revision Date: 24-JUL-2006 Status: REVIEWED
Problem Description:
====================
You are attempting to recreate your controlfile after a clean shutdown ( shutdown normal or immediate)
Upon running the create controlfile script you receive:
CREATE CONTROLFILE REUSE DATABASE "PRODAB" NORESETLOGS ARCHIVELOG
ERROR at line 1:
ORA-01503: CREATE CONTROLFILE failed
ORA-01163: SIZE clause indicates 12800 (blocks), but should match header 240160
ORA-01110: data file X: '<full path of datafile>'
Problem Explanation:
====================
Sample controlfile.sql
CREATE CONTROLFILE REUSE DATABASE "PRODAB" NORESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 454
LOGFILE
GROUP 1 '/oradata/PROD/redo01.log' SIZE 10M,
GROUP 2 '/oradata/PROD/redo02.log' SIZE 10M,
GROUP 3 '/oradata/PROD/redo03.log' SIZE 10M
-- STANDBY LOGFILE
DATAFILE
'/oradata/PROD/system01.dbf',
'/oradata/PROD/undotbs01.dbf',
'/oradata/PROD/sysaux01.dbf',
'/oradata/PROD/users01.dbf', <----------------Notice the extra comma after the last datafile.
CHARACTER SET WE8ISO8859P1
Search Words:
=============
create controlfile ORA-1503 ORA-1163 ORA-1110
Solution Description:
=====================
This extra comma is causing the create controlfile to raise this unexpected error as seen above.
Solution Explanation:
=====================
This is a syntax error, and removing the trailing comma should let the create controlfile bypass this error.
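In other words, the DATAFILE list from the sample above should end without a comma after the final entry:

```sql
DATAFILE
  '/oradata/PROD/system01.dbf',
  '/oradata/PROD/undotbs01.dbf',
  '/oradata/PROD/sysaux01.dbf',
  '/oradata/PROD/users01.dbf'   -- no trailing comma on the last file
CHARACTER SET WE8ISO8859P1;
```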
Wow! Oracle made a syntax error! Why am I not surprised? :)
Thanks Khurram for your help ! -
RAC instance, trying to recover UNDO datafile, RMAN gives RMAN-06054
Hello all,
This has been a troublesome instance... a quick bit of background: this was created a while back by someone else, and I inherited this 3-node RAC cluster running instance1.
I'm exporting out of one database (10G) into this instance1 (11G). When I was about to start the import..I found this instance wouldn't start. Turned out no backup had been going on of this empty instance. I backed up the archive logs to tape to free up the FRA..and things fired up.
I began the import, and found a bunch of errors... basically telling me that I couldn't access one of the undo tablespaces... datafile problems.
I went to look and saw:
SQL> select a.file_name, a.file_id, b.status, a.tablespace_name
2 from dba_data_files a, v$datafile b
3 where a.file_id = b.file#
4 order by a.file_name;
FILE_NAME FILE_ID STATUS TABLESPACE_NAME
+DATADG/instance1/datafile/sysaux.270.696702269 2 ONLINE SYSAUX
+DATADG/instance1/datafile/system.263.696702253 1 SYSTEM SYSTEM
+DATADG/instance1/datafile/undotbs1.257.696702279 3 ONLINE UNDOTBS1
+DATADG/instance1/datafile/undotbs2.266.696702305 4 ONLINE UNDOTBS2
+DATADG/instance1/datafile/undotbs3.269.696702313 5 RECOVER UNDOTBS3
+DATADG/instance1/datafile/users.268.696702321 6 ONLINE USERS
+DATADG/instance1/l_data_01_01 11 ONLINE L_DATA_01
+DATADG/instance1/s_data_01_01 7 ONLINE S_DATA_01
+DATADG/instance1/s_data_01_02 8 ONLINE S_DATA_01
+INDEXDG/instance1/l_index_01_01 12 ONLINE L_INDEX_01
+INDEXDG/instance1/s_index_01_01 9 ONLINE S_INDEX_01
FILE_NAME FILE_ID STATUS TABLESPACE_NAME
+INDEXDG/instance1/s_index_01_02 10 ONLINE S_INDEX_01
There is is, file #5.
So, I went into RMAN to try to restore/recover:
RMAN> restore datafile 5;
Starting restore at 06-APR-10
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=222 instance=instance1 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: NMO v4.5.0.0
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=222 instance=instance1 device type=DISK
creating datafile file number=5 name=+DATADG/instance1/datafile/undotbs3.269.696702313
restore not done; all files read only, offline, or already restored
Finished restore at 06-APR-10
RMAN> recover datafile 5;
Starting recover at 06-APR-10
using channel ORA_SBT_TAPE_1
using channel ORA_DISK_1
starting media recovery
RMAN-06560: WARNING: backup set with key 343546 will be read 2 times
available space of 8315779 kb needed to avoid reading the backup set multiple times
unable to find archived log
archived log thread=1 sequence=1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 04/06/2010 14:33:07
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 1 and starting SCN of 16016
This is all on ASM, and I am a bit of a newb with that. I basically have no data I'm worried about losing; I just need to get everything 'on the air' so I can import successfully and let users on this instance. I've set up the backups in GRID now... so it will be backed up in the future, but what is the quickest, most efficient way to get this UNDO tablespace datafile recovered?
Thank you,
cayenne
Hemant K Chitale wrote:
SET UNTIL SEQUENCE 27 wouldn't work if the recovery requires Sequence 1 and it is missing.
Hemant K Chitale
Oops... I meant to have start and set until both to "1".
However, I see what you mean. It seems I cannot find the file on tape.
Since the RAC instance hasn't yet had any data put into it, I'm thinking it might be best to just blow it away, and recreate everything.
Trouble is, I'm a bit new at RAC and ASM. I was thinking the best route might be to use DBCA to remove the database...? Would this not take care of removing all the datafiles from all the ASM instances in the RAC, as well as all the other directories, etc. on all 3 nodes?
I've already used the dbca to create templates of this instance, so recreation shouldn't be too difficult (although it will be my first RAC creation)...
Thank you in advance for the advice so far,
cayenne -
How to use old archieve logs with a new control file
Environment:
ORACLE_BASE="/export/home/oracle"
ORACLE_HOME="/export/home/oracle/product/8.1.6"
NLS_LANG=".UTF8"
2 partitions:
i) /data1 -- contains important datafiles (OS striping on 3 hard
disks)
ii) /export/home -- contains the oracle program, and SYS/SYSTEM
datafiles
Problem:
-/data1 cannot be read/mounted (damaged)
-oracle failed
Action Performed:
-reinstalled OS
-mounted /export/home successfully (all oracle system files, instance init files exist)
-/data1 is an empty partition
-created the oracle user, and its groups
-chown recursively for the $ORACLE_BASE directory
-set all the oracle environment variables
-attempted to start the instance, but it failed because a control file was missing
-since control files were set to be mirrored, I copied a control file from /export/home/oracle/oradata/<SID>/control1.ctl to /data1/oracle/oradata/<SID>/control2.ctl (i.e. put them back in their original locations)
-the instance failed to start as well, since the datafiles listed in the control files couldn't be found
-this forced me to re-create the control file... before I re-created a new control file, I backed up the old one
-once the control file was created, the database could be started, but, to oracle, all archive log information is lost (although the archive logs' physical files are still there)
-I imported the important data from a dump file that was exported a week before the system failure
-since we are using the new control file, the redo log switches contain no archive log information that the old control file had, so running "recover database" doesn't do anything
Purpose:
since the dump file is a week old, I'd like to get the data from after my last export and before the system failure. The database was run in archive log mode; how can I recover that data with the new control file?
Question:
-how can we create a new control file that can drive the old archive logs?
-can we convert the archive log data (.dbf format) into text format?
-can we still use the old control files to start the database?
-what's a suggested solution if we'd like to re-construct the database up to the moment before the system failure on another server?
thx
user3930585 wrote:
I am in an unenviable position, with an unsupported database.
We are running Oracle 9i on Windows XP. We are upgrading soon to Oracle 11g on a newer platform, but need to get our development environment working first.
We lost a system that was running our development database without having a database export. The C drive was placed into a new system as the D drive.
I have loaded Oracle 9i on the C drive, but I have been unable to determine how to point it to the existing data files on the D drive. My search skills may be the limiting factor here...
We cannot simply load the drive as C, since the hardware is different.
What are the steps to point the new database software at the data files on the D drive? Or, how do I copy the old data files into the new Oracle Home and have them recognized properly?
Are you stating that you don't know how to use the COPY command?
Can you recreate the same directory structure on the new C drive as exists on the old C drive?
Can you then drag & drop copies of the files? -
Transaction execution time and block size
Hi,
I have Oracle Database 11g R2 64 bit database on Oracle Linux 5.6. My system has ONE hard drive.
Recently I experimented with an 8.5 GB database in a TPC-E test. I was watching transaction time for 2K, 4K, and 8K Oracle block sizes. Each time I started a new test on a different block size, I would create a new database from scratch to avoid messing something up (each time the SGA and PGA parameters were identical).
In all experiments I gave my own tablespace (NEWTS) a different configuration because of oracle block/datafile size limits:
2K oracle block database had 3 datafiles, each 7GB.
4K oracle block database had 2 datafiles, each 10GB.
8K oracle block database had 1 datafile of 20GB.
Now the best transaction (transaction execution) time was on the 8K block; the 4K block had a slightly longer transaction time, but the 2K oracle block definitely had the worst transaction time.
I identified a SQL query (when using 2K and 4K blocks) that was creating hot segments on the E_TRANSACTION table, which is the largest table in the database (2.9GB), and was executed slowly (the number of executions was low compared to the 8K numbers).
Now here is my question. Is it possible that multiple datafiles are the reason for these low transaction times? I have AWR reports from that period, but as someone who is still learning DBA work, I would like to ask how I could identify this multi-datafile problem (if that is THE problem) by looking inside the AWR statistics.
THX to all.
It's always interesting to see the results of serious attempts to quantify the effects of variation in block sizes, but it's hard to do proper tests and eliminate side effects.
I have Oracle Database 11g R2 64 bit database on Oracle Linux 5.6. My system has ONE hard drive.
A single drive does make it a little too easy for apparently random variation in performance.
Recently I experimented with an 8.5 GB database in a TPC-E test. I was watching transaction time for 2K, 4K, and 8K Oracle block sizes. Each time I started a new test on a different block size, I would create a new database from scratch to avoid messing something up.
Did you do anything to ensure that the physical location of the data files was a very close match across databases - inner tracks vs. outer tracks could make a difference.
(each time the SGA and PGA parameters were identical).
Can you give us the list of parameters you set? As you change the block size, identical parameters DON'T necessarily result in the same configuration. Typically a large change in response time turns out to be due to changes in execution plan, and this can often be associated with a different configuration. Did you also check that the system statistics were appropriately matched (which doesn't mean identical across all databases)?
In all experiments a gave to my own tablespace (NEWTS) different configuration because of oracle block-datafile size limits :
2K oracle block database had 3 datafiles, each 7GB.
4K oracle block database had 2 datafiles, each 10GB.
8K oracle block database had 1 datafile of 20GB.
If you use bigfile tablespaces I think you can get 8TB in a single file for a tablespace.
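For reference, a bigfile tablespace is declared explicitly at creation time; a minimal sketch (the tablespace name, path, and sizes are illustrative):

```sql
-- A bigfile tablespace has exactly one, potentially very large, datafile.
CREATE BIGFILE TABLESPACE newts_big
  DATAFILE '/u01/oradata/TEST/newts_big01.dbf' SIZE 20G
  AUTOEXTEND ON NEXT 1G MAXSIZE 100G;
```

This would remove the multi-datafile variable from the experiment entirely, since every block size could then use a single file.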
Now best transaction (tranasaction execution) time was on 8K block, little longer tranasaction time had 4K block, but 2K oracle block had definitly worst transaction time.We need some values here, not just "best/worst" - it doesn't even begin to get interesting unless you have at least a 5% variation - and then it has to be consistent and reproducible.
I identified SQL query(when using 2K and 4K block) that was creating hot segments on E_TRANSACTION table, that is largest table in database (2.9GB), and was slowly executed (number of executions was low compared to 8K numbers).Query, or DML ? What do you mean by "hot" ? Is E_TRANSACTION a partitioned table - if not then it consists of one segment, so did you mean to say "blocks" rather than segments ? If blocks, which class of blocks ?
Now here is my question. Is it possible that multiple datafiles are the reason for these slow transaction times? I have AWR reports from that period, but as someone who is still learning about DBA work, I would like to ask how I could identify this multi-datafile problem (if that is THE problem) by looking at the AWR statistics.
On a single disc drive I could probably set something up that ensured you got different performance because of different numbers of files per tablespace. As SB has pointed out, there are some aspects of extent allocation that could have an effect - roughly speaking, extents for a single object go round-robin on the files, so if you have small extent sizes for a large object then a tablescan is more likely to result in larger (slower) head movements if the tablespace is made from multiple files.
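The round-robin allocation described above can be sketched as a toy model (all numbers below are hypothetical; real allocation also depends on extent management settings and per-file free space):

```python
# Toy model of round-robin extent allocation across a tablespace's datafiles.
def allocate_extents(num_extents, num_files):
    """Return, for each extent in allocation order, the datafile it lands in."""
    return [ext % num_files for ext in range(num_extents)]

# A segment of 12 extents spread over the 3 datafiles of the 2K-block database:
placement = allocate_extents(12, 3)
print(placement)
# Consecutive extents sit in different files, so a full tablescan keeps
# jumping between files - on a single spindle that means longer head movements.
```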
If the results are reproducible, then enable extended tracing (dbms_monitor, with waits) and show us what the tkprof summaries for the slow transactions look like. That may give us some clues.
Regards
Jonathan Lewis -
Facing Problem in resizing datafile
Hi, I am facing a problem resizing a datafile of size 2.5GB. I imported 2GB of data into this file and then reorganized the data into a different tablespace; now the used size of this datafile is 96MB. When I issue the command to reduce it to 200MB, it gives me an error that data exists and the datafile cannot be resized.
Tell me what should be done to resize it.
Thanks
Hi,
You can create a working tablespace with a suitable size:
CREATE TABLESPACE tbs_tmp
DATAFILE 'D:\Oracle\oradata\SID\file_tmp.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K;
And move all segments from your tablespace TBS1 into this new tablespace
(select segment_name,segment_type from dba_segments where tablespace_name = 'TBS1')
If indexes :
alter index owner.index_name rebuild tablespace tbs_tmp;
If table :
alter table owner.table_name move tablespace tbs_tmp;
Ensure that the tablespace TBS1 is empty
select segment_name,segment_type from dba_segments where tablespace_name = 'TBS1'
After what, if no row return, you can drop your first tablespace,
DROP TABLESPACE tbs1 INCLUDING CONTENTS CASCADE CONSTRAINTS;
recreate it with a good size,
CREATE TABLESPACE tbs1
DATAFILE 'D:\Oracle\oradata\SID\file_tbs1.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K;
and move all segments from TBS_TMP into this new TBS1
(select segment_name,segment_type from dba_segments where tablespace_name = 'TBS_TMP')
If indexes :
alter index owner.index_name rebuild tablespace tbs1;
If table :
alter table owner.table_name move tablespace tbs1;
Ensure that the tablespace TBS_TMP is empty
select segment_name,segment_type from dba_segments where tablespace_name = 'TBS_TMP'
If no row return, drop the working tablespace
DROP TABLESPACE tbs_tmp INCLUDING CONTENTS CASCADE CONSTRAINTS;
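The per-segment statements above can also be generated instead of typed by hand; a minimal sketch, assuming the segment list comes from the dba_segments query shown (the owner and segment names here are hypothetical):

```python
def move_statements(segments, target_ts):
    """Build the ALTER statements that relocate segments into target_ts."""
    stmts = []
    for owner, name, seg_type in segments:
        if seg_type == 'INDEX':
            stmts.append(f"alter index {owner}.{name} rebuild tablespace {target_ts};")
        elif seg_type == 'TABLE':
            stmts.append(f"alter table {owner}.{name} move tablespace {target_ts};")
    return stmts

# hypothetical rows fetched from dba_segments
segs = [('SCOTT', 'EMP', 'TABLE'), ('SCOTT', 'PK_EMP', 'INDEX')]
for stmt in move_statements(segs, 'TBS_TMP'):
    print(stmt)
```

Keep in mind that moving a table marks its indexes UNUSABLE, so indexes on moved tables need a rebuild afterwards in any case.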
Nicolas. -
Problem while resizing datafiles..
Hi Experts,
I'm facing problems while resizing my datafiles. I am using Oracle 10g on a Windows 2003 server.
I had a datafile of size 20GB.
I have 3 schemas sch1, sch2,sch3 and all have objects.
I dropped sch1 and truncated some tables in sch2.
I found the free space available in my datafile to be 16GB.
When I try to resize the datafile to 5GB, I get an ORA-03297 error.
I checked for objects that are beyond the 5GB mark using the following query.
I found that SCH3 objects are present beyond the 5GB mark in that datafile.
How can I resize the datafile to 5GB??
Is there any other way to resize the datafile?? Please help me.
SELECT owner, segment_name, segment_type, tablespace_name, file_id,
       ((block_id+1) * (SELECT value FROM v$parameter
                        WHERE UPPER(name) = 'DB_BLOCK_SIZE') + bytes)
         end_of_extent_is_at_this_byte
FROM dba_extents
WHERE ((block_id+1) * (SELECT value FROM v$parameter
                       WHERE UPPER(name) = 'DB_BLOCK_SIZE') + bytes)
        > (<needed size in MB>*1024*1024)
AND tablespace_name = '<tablespace_name>'
ORDER BY file_id, end_of_extent_is_at_this_byte;
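To see what the query's arithmetic does, here is the same end-of-extent calculation as a standalone sketch (the extent values are hypothetical):

```python
def end_of_extent_byte(block_id, extent_bytes, db_block_size):
    """Last byte of an extent, using the same formula as the query above."""
    return (block_id + 1) * db_block_size + extent_bytes

# hypothetical extent: starts at block 700000 of an 8K-block database, 1MB long
end = end_of_extent_byte(700_000, 1_048_576, 8192)
five_gb = 5 * 1024 * 1024 * 1024
print(end, end > five_gb)  # an extent ending beyond the 5GB mark
                           # prevents the file from shrinking below it
```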
Thanks in advance.
Take a look at the Metalink docs - probably because of the high water mark you are not able to do that:
Note 130866.1 - How to Resolve ORA-03297 When Resizing a Datafile by Finding the Table Highwatermark
Note 237654.1 - Resizing a Datafile Returns Error ORA-03297 -
Dataguard Problem missing datafile
Problem alert in the log file as follows:
Errors in file $ORACLE_BASE/admin/dump/bdump/db_mrp0_647370.trc:
ORA-01111: name for data file 29 is unknown - rename to correct file
ORA-01110: data file 29: '/home/ora10g/10.2/dbs/UNNAMED00029'
ORA-01157: cannot identify/lock data file 29 - see DBWR trace file
ORA-01111: name for data file 29 is unknown - rename to correct file
ORA-01110: data file 29: '/home/ora10g/10.2/dbs/UNNAMED00029'
Please help me.
Hello;
The issue is probably STANDBY_FILE_MANAGEMENT related, but could be a disk space or incorrect path issue too.
On the Standby check for a missing file:
SELECT * FROM V$RECOVER_FILE WHERE ERROR LIKE '%MISSING%';
Then on the Primary find that file:
SELECT FILE#,NAME FROM V$DATAFILE WHERE FILE#=29;
Back on the Standby find the file:
SELECT FILE#,NAME FROM V$DATAFILE WHERE FILE#=29;
Set to STANDBY_FILE_MANAGEMENT to MANUAL:
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=MANUAL;
ALTER DATABASE CREATE DATAFILE '/home/ora10g/10.2/dbs/UNNAMED00029' AS '<correct_path_and_filename>';
Set to STANDBY_FILE_MANAGEMENT back to AUTO:
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
Check the alert log and monitor the apply.
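The fix above can be wrapped in a small helper that emits the three statements in order; a sketch, assuming the correct target path has already been looked up on the primary (the target path below is hypothetical):

```python
def fix_unnamed_datafile(unnamed_path, correct_path):
    """Statements to recreate a standby datafile left as UNNAMEDnnnnn."""
    return [
        "ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=MANUAL;",
        f"ALTER DATABASE CREATE DATAFILE '{unnamed_path}' AS '{correct_path}';",
        "ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;",
    ]

for stmt in fix_unnamed_datafile('/home/ora10g/10.2/dbs/UNNAMED00029',
                                 '/u01/oradata/STBY/users29.dbf'):
    print(stmt)
```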
Best Regards
mseberg -
Problems with DUPLICATE DATABASE when datafile was added after full backup
Hi,
I'm facing a problem when performing a database duplication with the RMAN duplicate database command on a 10g database. If I perform the duplication from a full backup that is missing a datafile which was added to the database after the full backup, I get the following error message:
Starting restore at 10-10-2009 18:00:38
released channel: t1
released channel: t2
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 10/10/2009 18:00:39
RMAN-03015: error occurred in stored script Memory Script
RMAN-06026: some targets not found - aborting restore
RMAN-06100: no channel to restore a backup or copy of datafile 43
The redo log which was CURRENT at the time of datafile 43's creation is also available in the backups. It seems like RMAN can't use the information from the archived redo logs to reconstruct the contents of datafile 43. I suppose that is because the failure is reported already in the RESTORE and not in the RECOVER phase, so the archived redo logs aren't even accessed yet. I get the same message even if I make a separate backup of datafile 43 (so a backup that is not in the same backup set as the backup of all the other datafiles).
From the script the duplicate command produces, I guess that RMAN reads the contents of the source database's controlfile and tries to get a backup which contains all the datafiles to restore them on the auxiliary database - if such a backup is not found, it fails.
Of course if I try to perform a restore/recover of the source database it works without problems:
RMAN> restore database;
Starting restore at 13.10.09
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=156 devtype=DISK
creating datafile fno=43 name=F:\ORA10\ORADATA\SOVDEV\SOMEDATAFILE01.DBF
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to F:\ORA10\ORADATA\SOVDEV\SYSTEM01.DBF
.....
Datafile 43 is recreated, and then redo is applied over it.
So, does anyone know whether duplicate database can't use archived redo logs to recreate the contents of a datafile the way a normal restore/recover does? If so, then it's necessary to perform a full database backup before every run of duplicate database if a datafile was added after such a backup.
Thanks in advance for any answers.
Regards,
Jure
Hi Jure,
I have hit exactly the same problem during duplication.
Because we back up the archive logs every 6 hours with RMAN, I added an extra run block to this script.
run {
  backup incremental level 0
    format 'bk_%d_%s_%p_%t'
    filesperset 4
    database not backed up;
}
(I also then hit a bug in the catalog, which was solved by patching the catalog database from 11.1.0.6 to 11.1.0.7.)
This narrows the window during which a datafile is not part of any RMAN backup down to 6 hours, while skipping datafiles for which a backup already exists.
Regards,
Tycho -
When I choose to display the datafiles from the Administration page, the page loads, but I have 5 listings for each datafile, except for the temp tablespace files. I have 2 other instances of 10gR2 and they are fine; the only difference is that they are upgrades from 9i, whereas the one that has the problem is a new instance. I have looked at the v$datafile view in the database and at the table and view in the SYSMAN schema, and they list the datafiles correctly. Any idea what the problem is?
Thanks
Larry
Hi
I think you forgot a step.
In step 2, you must copy rather than move.
And in step 3, notice the change to the control file.
1) Alter tablespace xxx offline;
2) Copy the affected datafiles to the new location, again, copy rather than move.
3) For each data file affected, alter tablespace xxx rename datafile <old filename> to <new filename>;
4) Alter tablespace xxx online;
5) Alter database backup controlfile to trace, because we have changed the structure of the database and must preserve it.
6) Delete the old datafiles from the old file system.
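Step 3 above can be scripted when many files are affected; a small sketch (the paths are hypothetical) that emits one rename statement per relocated file:

```python
def rename_statements(tablespace, moves):
    """Build the step-3 rename statement for each (old, new) path pair."""
    return [
        f"alter tablespace {tablespace} rename datafile '{old}' to '{new}';"
        for old, new in moves
    ]

moves = [('/u01/oradata/SID/users01.dbf', '/u02/oradata/SID/users01.dbf')]
for stmt in rename_statements('USERS', moves):
    print(stmt)
```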