Corrupt block relative dba: in alert log
Hi,
In our recovery database, I've found the following error in the alert log.
Thu Mar 27 07:05:57 2008
Hex dump of (file 3, block 30826) in trace file /u01/app/oracle/admin/catdb/bdump/catdb_m000_21795.trc
Corrupt block relative dba: 0x00c0786a (file 3, block 30826)
Bad check value found during buffer read
Data in bad block:
type: 6 format: 2 rdba: 0x00c0786a
last change scn: 0x0000.0013ed4d seq: 0x1 flg: 0x06
spare1: 0x0 spare2: 0x0 spare3: 0x0
consistency value in tail: 0xed4d0601
check value in block header: 0x937c
computed block checksum: 0x8000
Reread of rdba: 0x00c0786a (file 3, block 30826) found same corrupted data
Thu Mar 27 07:05:59 2008
Corrupt Block Found
TSN = 2, TSNAME = SYSAUX
RFN = 3, BLK = 30826, RDBA = 12613738
OBJN = 8964, OBJD = 8964, OBJECT = WRH$_ENQUEUE_STAT, SUBOBJECT =
SEGMENT OWNER = SYS, SEGMENT TYPE = Table Segment
Now, how do I solve this error?
Thanks in advance.
Leo
Refer to metalink Note:77587.1. You may need to do further sanity block checking by placing the following parameters in the database instance init.ora parameter file:
db_block_checking=true
db_block_checksum=true
_db_block_cache_protect=true
Consult Oracle support before changing hidden parameters.
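As a sanity check on the alert entry itself, the RDBA in the message encodes the file and block numbers directly: the top 10 bits of the 4-byte DBA are the relative file number and the low 22 bits are the block number. A quick illustration (Python; the helper name is just for this sketch):

```python
def decode_rdba(rdba):
    """Split an Oracle 4-byte data block address into (relative file#, block#).

    The top 10 bits hold the relative file number, the low 22 bits the
    block number.
    """
    return rdba >> 22, rdba & 0x3FFFFF

# The alert.log entry above reports RDBA = 12613738 (0x00c0786a):
print(decode_rdba(0x00C0786A))  # (3, 30826) -> file 3, block 30826
```

This matches the "file 3, block 30826" printed in the alert log, so the RDBA and the decoded file/block are consistent.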
Similar Messages
-
Dear:
When running CheckDB in transaction DB13, I get the following error
message:
BR0976W Database message alert - level: ERROR, line: 16991, time: 2008-02-05 23.16.19, message:
Corrupt block relative dba: 0x00c02702 (file 3, block 9986)
Other Details :-
OS :- Win 2003 server
DB :- Oracle 10g
So please help on this issue.
Thanks.
Regards.
Hello,
>> Corrupt block relative dba: 0x00c02702 (file 3, block 9986)
You have to check whether block 9986 in file 3 is allocated and used by an active segment.
If it is an index, you can simply rebuild it and the problem is gone; if it is a table segment it is more complicated. But first check the block:
> sqlplus "/ as sysdba"
> alter system dump datafile 3 block 9986;
> cd /oracle/<SID>/saptrace/usertrace
Take a look at the trace file and check the seg/obj value. Convert the hex value to decimal, then query dba_objects:
> sqlplus "/ as sysdba"
> SELECT OBJECT_NAME FROM ALL_OBJECTS WHERE DATA_OBJECT_ID = <OBJN>;
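The seg/obj value in the trace file is printed in hex, while DATA_OBJECT_ID is stored in decimal, so convert before querying. The conversion can be done with any tool, e.g. in Python (the hex value here is only an illustration, not from this thread's trace):

```python
# seg/obj in the block dump trace is printed in hex; DBA_OBJECTS
# stores DATA_OBJECT_ID in decimal, so convert before querying.
objd_hex = "2304"   # example value; take yours from the trace file
objd = int(objd_hex, 16)
print(objd)         # 8964
```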
Regards
Stefan -
Corrupt block relative dba: 0x0041470c
Corrupt block relative dba: 0x0041470c (file 1, block 83724)
Fractured block found during buffer read
Data in bad block:
type: 6 format: 2 rdba: 0x0041470c
last change scn: 0x0009.90b485ad seq: 0x1 flg: 0x04
spare1: 0x0 spare2: 0x0 spare3: 0x0
consistency value in tail: 0x00000000
check value in block header: 0x3092
computed block checksum: 0x19de
Wed Oct 03 09:58:32 GMT-4 2012
Reread of rdba: 0x0041470c (file 1, block 83724) found same corrupted data
Wed Oct 03 09:58:32 GMT-4 2012
Errors in file /opt/oracle/admin/IXP/bdump/ixp_smon_19661.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-08103: object no longer exists
962800 wrote:
Corrupt block relative dba: 0x0041470c (file 1, block 83724)
Fractured block found during buffer read
Data in bad block:
type: 6 format: 2 rdba: 0x0041470c
last change scn: 0x0009.90b485ad seq: 0x1 flg: 0x04
spare1: 0x0 spare2: 0x0 spare3: 0x0
consistency value in tail: 0x00000000
check value in block header: 0x3092
computed block checksum: 0x19de
Wed Oct 03 09:58:32 GMT-4 2012
Reread of rdba: 0x0041470c (file 1, block 83724) found same corrupted data
Wed Oct 03 09:58:32 GMT-4 2012
Errors in file /opt/oracle/admin/IXP/bdump/ixp_smon_19661.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-08103: object no longer exists
It's not a corrupted block but a fractured block, and the object doesn't exist anymore, so there is nothing you are supposed to do.
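For background: the 4-byte consistency value in the block tail is derived from the low 16 bits of the last-change SCN base, the block type, and the seq. When head and tail disagree, as in the dump above (tail 0x00000000 against scn 0x0009.90b485ad), only part of the block made it to disk, i.e. it is fractured. A sketch of the check (this mirrors the commonly documented cache-layer format; treat it as illustrative):

```python
def expected_tail(scn_base, block_type, seq):
    # tail = low 16 bits of the SCN base, then the type byte, then the seq byte
    return ((scn_base & 0xFFFF) << 16) | ((block_type & 0xFF) << 8) | (seq & 0xFF)

# Healthy block from the first post: scn 0x0000.0013ed4d, type 6, seq 1
assert expected_tail(0x0013ED4D, 0x06, 0x01) == 0xED4D0601

# Fractured block above: scn base 0x90b485ad, type 6, seq 1
# -> expected 0x85AD0601, but the dump shows tail 0x00000000
print(hex(expected_tail(0x90B485AD, 0x06, 0x01)))  # 0x85ad0601
```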
Aman.... -
Cjq process related entries in alert log
Hi,
I have recently set up a 10g (10.2.0.4) database on Solaris 10. In the alert log of the DB, there are entries like this:
Mon May 25 02:31:56 2009
Starting background process CJQ0
CJQ0 started with pid=15, OS id=18484
Mon May 25 03:12:47 2009
Stopping background process CJQ0
The database is up and running, but there are several such messages in the alert log. I'm not sure why the CJQ processes should start and shut down. Can you suggest a possible cause?
thanks
Edited by: orausern on May 25, 2009 7:22 AM
From the documentation you get the following:
This is Oracle's dynamic job queue coordinator. It periodically selects jobs that need to be run, as scheduled by the Oracle job queue. The coordinator process dynamically spawns job queue slave processes (J000…J999) to run the jobs. These jobs can be PL/SQL statements or procedures on an Oracle instance. Please note that this is not a persistent process; it comes and goes.
Regards, Gerwin -
os Sun 5.10 oracle version 10.2.0.2 RAC 2 node
alert.log contents:
Hex dump of (file 206, block 393208) in trace file /oracle/app/oracle/admin/DBPGIC/udump/dbpgic1_ora_1424.trc
Corrupt block relative dba: 0x3385fff8 (file 206, block 393208)
Bad header found during backing up datafile
Data in bad block:
type: 32 format: 0 rdba: 0x00000001
last change scn: 0x0000.98b00394 seq: 0x0 flg: 0x00
spare1: 0x1 spare2: 0x27 spare3: 0x2
consistency value in tail: 0x00000001
check value in block header: 0x0
block checksum disabled
Reread of blocknum=393208, file=/dev/md/vg_rac06/rdsk/d119. found same corrupt data
Reread of blocknum=393208, file=/dev/md/vg_rac06/rdsk/d119. found same corrupt data
Reread of blocknum=393208, file=/dev/md/vg_rac06/rdsk/d119. found same corrupt data
Reread of blocknum=393208, file=/dev/md/vg_rac06/rdsk/d119. found same corrupt data
Reread of blocknum=393208, file=/dev/md/vg_rac06/rdsk/d119. found same corrupt data
When I search for the block ID where the corruption occurred, the block ID cannot be found.
I searched in dba_extents.
I wonder whether the block ID cannot be found because of the corruption.
When I run an export, the data exports normally.
That's a relief. It looks like the block corruption did not occur in a block that actually stores data. It also seems you found it during an RMAN backup - is that right?
Since the SCN is 0x0000.98b00394 rather than scn: 0x0000.00000000, this looks like a soft corruption rather than a physical corruption.
In that case it is quite likely a bug, and searching turned up:
Bug 4411228 - Block corruption with mixture of file system and RAW files
It may not be this one, though.
For the root-cause analysis and handling of this kind of block corruption, you should file a formal request with Oracle Corporation. Please open an SR through Metalink.
Export cannot find block corruption above the high water mark, and it also misses the few other cases listed below.
DB Verify (dbv) cannot find physical corruption; it can only find soft block corruption.
In my experience, there was a physical corruption where the datafile could not even be copied to /dev/null, yet dbv did not find the problem.
The best method, then, is RMAN. RMAN backs up data up to the high water mark while also checking the entire datafile.
It checks not only physical corruption but also logical corruption, so I think RMAN is the best way to run such a check.
The Export Utility
# Use a full export to check database consistency
# Export performs a full scan for all tables
# Export only reads:
- User data below the high-water mark
- Parts of the data dictionary, while looking up information concerning the objects being exported
# Export does not detect the following:
- Disk corruptions above the high-water mark
- Index corruptions
- Free or temporary extent corruptions
- Column data corruption (like invalid date values)
The proper way to recover from block corruption would be to restore and then recover, but the backup you would restore from may already contain the block corruption as well. So it is better to restore it on another server first, confirm the datafile is healthy, and only then restore it in the production environment.
If the backups also have the block corruption, or if there is no time, it would be better to move the data to another tablespace via ALTER TABLE ... MOVE or index rebuilds, then drop the problem tablespace and re-create it. (Since there is currently no data loss, the move tablespace / rebuild index approach looks good.)
Handling Corruptions
Check the alert file and system log file
Use diagnostic tools to determine the type of corruption
Dump blocks to find out what is wrong
Determine whether the error persists by running checks multiple times
Recover data from the corrupted object if necessary
Preferred resolution method: media recovery
Handling Corruptions
Always try to find out if the error is permanent. Run the analyze command multiple times or, if possible, perform a shutdown and a startup and try again to perform the operation that failed earlier.
Find out whether there are more corruptions. If you encounter one, there may be other corrupted blocks, as well. Use tools like DBVERIFY for this.
Before you try to salvage the data, perform a block dump as evidence to identify the actual cause of the corruption.
Make a hex dump of the bad block, using UNIX dd and od -x.
Consider performing a redo log dump to check all the changes that were made to the block so that you can discover when the corruption occurred.
Note: Remember that when you have a block corruption, performing media recovery is the recommended process after the hardware is verified.
Resolve any hardware issues:
- Memory boards
- Disk controllers
- Disks
Recover or restore data from the corrupt object if necessary
Handling Corruptions (continued)
There is no point in continuing to work if there are hardware failures. When you encounter hardware problems, the vendor should be contacted and the machine should be checked and fixed before continuing. Full hardware diagnostics should be run.
Many types of hardware failures are possible:
Bad I/O hardware or firmware
Operating system I/O or caching problem
Memory or paging problems
Disk repair utilities
Here is some related material below.
All About Data Blocks Corruption in Oracle
Vijaya R. Dumpa
Data Block Overview:
Oracle allocates logical database space for all data in a database. The units of database space allocation are data blocks (also called logical blocks, Oracle blocks, or pages), extents, and segments. The next level of logical database space is an extent. An extent is a specific number of contiguous data blocks allocated for storing a specific type of information. The level of logical database storage above an extent is called a segment. The high water mark is the boundary between used and unused space in a segment.
The header contains general block information, such as the block address and the type of segment (for example, data, index, or rollback).
Table Directory, this portion of the data block contains information about the table having rows in this block.
Row Directory, this portion of the data block contains information about the actual rows in the block (including addresses for each row piece in the row data area).
Free space is allocated for insertion of new rows and for updates to rows that require additional space.
Row data, this portion of the data block contains rows in this block.
Analyze the Table structure to identify block corruption:
By analyzing the table structure and its associated objects, you can perform a detailed check of data blocks to identify block corruption:
SQL> ANALYZE TABLE <table_name> VALIDATE STRUCTURE CASCADE;
(Use ANALYZE INDEX <index_name> or ANALYZE CLUSTER <cluster_name> for the other object types.)
Detecting data block corruption using the DBVERIFY Utility:
DBVERIFY is an external command-line utility that performs a physical data structure integrity check on an offline database. It can be used against backup files and online files. Integrity checks are significantly faster if you run against an offline database.
Restrictions:
DBVERIFY checks are limited to cache-managed blocks. It is only for use with datafiles; it will not work against control files or redo logs.
The following example shows sample output of a verification run for the datafile system_ts_01.dbf, with start block 9 and end block 25. The blocksize parameter is required only if the file to be verified has a non-2KB block size. The logfile parameter specifies the file to which logging information should be written. The feedback parameter has been given the value 2 to display one dot on the screen for every 2 blocks processed.
$ dbv file=system_ts_01.dbf start=9 end=25 blocksize=16384 logfile=dbvsys_ts.log feedback=2
DBVERIFY: Release 8.1.7.3.0 - Production on Fri Sep 13 14:11:52 2002
(c) Copyright 2000 Oracle Corporation. All rights reserved.
Output:
$ pg dbvsys_ts.log
DBVERIFY: Release 8.1.7.3.0 - Production on Fri Sep 13 14:11:52 2002
(c) Copyright 2000 Oracle Corporation. All rights reserved.
DBVERIFY - Verification starting : FILE = system_ts_01.dbf
DBVERIFY - Verification complete
Total Pages Examined : 17
Total Pages Processed (Data) : 10
Total Pages Failing (Data) : 0
Total Pages Processed (Index) : 2
Total Pages Failing (Index) : 0
Total Pages Processed (Other) : 5
Total Pages Empty : 0
Total Pages Marked Corrupt : 0
Total Pages Influx : 0
Detecting and reporting data block corruption using the DBMS_REPAIR package:
Note: this can only be used if the block "wrapper" is marked corrupt,
e.g. if the block reports ORA-1578.
1. Create DBMS_REPAIR administration tables:
To Create Repair tables, run the below package.
SQL> EXEC DBMS_REPAIR.ADMIN_TABLES('REPAIR_ADMIN', 1, 1, 'REPAIR_TS');
Note that the table names are prefixed with 'REPAIR_' or 'ORPHAN_'. If the second argument is 1, it creates REPAIR_-prefixed tables; if it is 2, it creates ORPHAN_-prefixed tables.
If the third (action) argument is
1, the package performs 'create' operations;
2, the package performs 'purge' (delete) operations;
3, the package performs 'drop' operations.
2. Scanning a specific table or Index using the DBMS_REPAIR.CHECK_OBJECT procedure:
In the following example we check the table EMP, belonging to the schema TEST, for possible corruptions. Let's assume that we have created our administration table called REPAIR_ADMIN in schema SYS.
To check the table block corruption use the following procedure:
SQL> VARIABLE A NUMBER;
SQL> EXEC DBMS_REPAIR.CHECK_OBJECT ('TEST', 'EMP', NULL,
1, 'REPAIR_ADMIN', NULL, NULL, NULL, NULL, :A);
SQL> PRINT A;
To check which block is corrupted, check in the REPAIR_ADMIN table.
SQL> SELECT * FROM REPAIR_ADMIN;
3. Fixing corrupt block using the DBMS_REPAIR.FIX_CORRUPT_BLOCK procedure:
SQL> VARIABLE A NUMBER;
SQL> EXEC DBMS_REPAIR.FIX_CORRUPT_BLOCKS ('TEST', 'EMP', NULL,
1, 'REPAIR_ADMIN', NULL, :A);
SQL> SELECT MARKED FROM REPAIR_ADMIN;
If you select from the EMP table now, you still get the error ORA-1578.
4. Skipping corrupt blocks using the DBMS_REPAIR.SKIP_CORRUPT_BLOCKS procedure:
SQL> EXEC DBMS_REPAIR.SKIP_CORRUPT_BLOCKS ('TEST', 'EMP', 1, 1);
Note the result of running the DBMS_REPAIR tool: some data has been lost. The main advantage of this tool is that you can retrieve the data past the corrupted block; however, the data in the corrupt block itself is gone.
5. This procedure is useful for identifying orphan keys in indexes that point to corrupt rows of the table:
SQL> EXEC DBMS_REPAIR.DUMP_ORPHAN_KEYS ('TEST', 'IDX_EMP', NULL,
2, 'REPAIR_ADMIN', 'ORPHAN_ADMIN', NULL, :A);
If you see any records in the ORPHAN_ADMIN table, you have to drop and re-create the index to avoid any inconsistencies in your queries.
6. The last thing you need to do when using the DBMS_REPAIR package is to run the DBMS_REPAIR.REBUILD_FREELISTS procedure to reinitialize the free list details in the data dictionary views.
SQL> EXEC DBMS_REPAIR.REBUILD_FREELISTS ('TEST', 'EMP', NULL, 1);
NOTE
Setting events 10210, 10211, 10212, and 10225 can be done by adding the following line for each event in the init.ora file:
Event = "event_number trace name errorstack forever, level 10"
When event 10210 is set, the data blocks are checked for corruption by checking their integrity. Data blocks that don't match the format are marked as soft corrupt.
When event 10211 is set, the index blocks are checked for corruption by checking their integrity. Index blocks that don't match the format are marked as soft corrupt.
When event 10212 is set, the cluster blocks are checked for corruption by checking their integrity. Cluster blocks that don't match the format are marked as soft corrupt.
When event 10225 is set, the fet$ and uset$ dictionary tables are checked for corruption by checking their integrity. Blocks that don't match the format are marked as soft corrupt.
Set event 10231 in the init.ora file to cause Oracle to skip software- and media-corrupted blocks when performing full table scans:
Event="10231 trace name context forever, level 10"
Set event 10233 in the init.ora file to cause Oracle to skip software- and media-corrupted blocks when performing index range scans:
Event="10233 trace name context forever, level 10"
To dump an Oracle block you can use the command below, from 8.x onwards:
SQL> ALTER SYSTEM DUMP DATAFILE 11 BLOCK 9;
This command dumps data block 9 of datafile 11 into the USER_DUMP_DEST directory.
Dumping Redo Logs file blocks:
SQL> ALTER SYSTEM DUMP LOGFILE '/usr/oracle8/product/admin/udump/rl.log';
Block corruption in rollback segments will cause problems (ORA-1578) while starting up the database.
With the support of Oracle, you can use the undocumented parameter below to start up the database:
_CORRUPTED_ROLLBACK_SEGMENTS=(RBS_1, RBS_2)
DB_BLOCK_COMPUTE_CHECKSUM
This parameter is normally used to debug corruptions that happen on disk.
The following V$ views contain information about blocks marked logically corrupt:
V$BACKUP_CORRUPTION, V$COPY_CORRUPTION
When this parameter is set, while reading a block from disk into the cache, Oracle will compute the checksum again and compare it with the value stored in the block.
If they differ, it indicates that the block is corrupted on disk. Oracle marks the block as corrupt and signals an error. There is an overhead involved in setting this parameter.
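To make the overhead concrete: a checksum of this style is just a cheap fold over the block contents, computed when the block is written and recomputed on read for comparison with the stored header value. The sketch below illustrates the XOR-folding idea only; it is not Oracle's actual algorithm or block layout.

```python
import struct

def xor_checksum(block):
    """Fold a block into a 16-bit value by XORing its 2-byte words.

    Toy illustration of a read-time checksum compare; Oracle's real
    algorithm and header layout differ in detail.
    """
    words = struct.unpack("<%dH" % (len(block) // 2), block)
    value = 0
    for w in words:
        value ^= w
    return value

block = bytearray(8192)
block[100:104] = b"data"
stored = xor_checksum(bytes(block))          # computed at write time

# At read time: recompute and compare with the stored value.
assert xor_checksum(bytes(block)) == stored  # clean read
block[500] ^= 0xFF                           # simulate a bad sector
assert xor_checksum(bytes(block)) != stored  # mismatch -> mark corrupt
```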
DB_BLOCK_CACHE_PROTECT=‘TRUE’
Oracle will catch stray writes made by processes in the buffer cache.
Oracle 9i new RMAN features:
Obtain the datafile numbers and block numbers for the corrupted blocks. Typically, you obtain this output from the standard output, the alert.log, trace files, or a media management interface. For example, you may see the following in a trace file:
ORA-01578: ORACLE data block corrupted (file # 9, block # 13)
ORA-01110: data file 9: '/oracle/dbs/tbs_91.f'
ORA-01578: ORACLE data block corrupted (file # 2, block # 19)
ORA-01110: data file 2: '/oracle/dbs/tbs_21.f'
$ rman target=rman/rman@rmanprod
RMAN> run {
2> allocate channel ch1 type disk;
3> blockrecover datafile 9 block 13 datafile 2 block 19;
4> }
Recovering Data blocks Using Selected Backups:
# restore from backupset
BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 FROM BACKUPSET;
# restore from datafile image copy
BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 FROM DATAFILECOPY;
# restore from backupset with tag "mondayAM"
BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 199 FROM TAG = mondayAM;
# restore using backups made before one week ago
BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 RESTORE
UNTIL 'SYSDATE-7';
# restore using backups made before SCN 100
BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 RESTORE UNTIL SCN 100;
# restore using backups made before log sequence 7024
BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 RESTORE
UNTIL SEQUENCE 7024;
Edited by:
Min Angel (Yeon Hong Min, Korean) -
ORACLE 8.0.5 on SuSE 5.3 and 6.0 - Corrupt Block
I do some heavy loading (Designer 2000) and I do get similar
errors on 2 different computer with mirrored disks -different
systems - on each one. So I'd like to exclude hardware problems.
it's experimental - so I do not run archives -
Designer on W95 crashes quite often but this should never lead
to corrupted data blocks.
Linux was hanging too - and disk-cache might not have been
written to disk ????? could this be a reason ???
Corrupt block relative dba: 0x01003aa8 file=4. blocknum=15016.
Fractured block found during buffer read
Data in bad block - type:6. format:2. rdba:0x01003aa8
last change scn:0x0000.00014914 seq:0x3 flg:0x00
consistancy value in tail 0x496b0605
check value in block header: 0x0, check value not calculated
spare1:0x0, spare2:0x0, spare2:0x0
would be happy to get some feedback
Ferdinand Gassauer (guest) wrote:
: I do some heavy loading (Designer 2000) and I do get similar
: errors on 2 different computer with mirrored disks -different
: systems - on each one. So I'd like to exclude hardware
problems.
: it's experimental - so I do not run archives -
: Designer on W95 crashes quite often but this should never lead
: to corrupted data blocks.
: Linux was hanging too - and disk-cache might not have been
: written to disk ????? could this be a reason ???
: Corrupt block relative dba: 0x01003aa8 file=4. blocknum=15016.
: Fractured block found during buffer read
: Data in bad block - type:6. format:2. rdba:0x01003aa8
: last change scn:0x0000.00014914 seq:0x3 flg:0x00
: consistancy value in tail 0x496b0605
: check value in block header: 0x0, check value not calculated
: spare1:0x0, spare2:0x0, spare2:0x0
: would be happy to get some feedback
Please check /var/log/messages first for any Linux errors. It is likely that if Linux crashes and cannot sync to disk, there might be some corruption problems. For this reason lots of people would like to see raw device support, but apparently Linus is not willing for some reason...
I assume some hardware-related problems, though.
Marcus
-
Corrupt block error + valid data found ???
Hi,
I am getting a peculiar block corruption error in my production database (9.2.0.8).
It also says "valid data found". I am able to analyze the suspect table without any reported issues. Can anyone please suggest?
Details from Alertlog below:-
Corrupt block relative dba: 0x01463cbf (file 5, block 408767)
Bad header found during user buffer read
Data in bad block -
type: 50 format: 0 rdba: 0x3a383020
last change scn: 0x0338.303a3630 seq: 0xc2 flg: 0x38
consistency value in tail: 0x36302036
check value in block header: 0x3c27, block checksum disabled
spare1: 0x30, spare2: 0x36, spare3: 0x1502
Reread of rdba: 0x01463cbf (file 5, block 408767) found valid data
Hex dump of Absolute File 5, Block 408768 in trace file d:\oracle\admin\fm\udump\fm_ora_5236.trc
---------------------------------------------------------------------------------------------
Hi,
May be this will help
Data block corruption…..
Regards
Jafar -
When I create a datafile bigger than 2 GB, I get a corrupt block error in Oracle 10g when I run DBV.
This only occurs with datafiles larger than 1 GB; below that size it does not occur.
OS: SUSE Enterprise 9 (x86)
2 GB memory
I created the datafile both from SQL*Plus and from Enterprise Manager, and creation itself reports no error.
Here is the dbv run and the error message:
Page 191883 is influx - most likely media corrupt
Corrupt block relative dba: 0x0142ed8b (file 5, block 191883)
Fractured block found during dbv:
Data in bad block:
type: 0 format: 0 rdba: 0x00000000
last change scn: 0x0000.00000000 seq: 0x0 flg: 0x00
spare1: 0x0 spare2: 0x0 spare3: 0x0
consistency value in tail: 0x00000000
check value in block header: 0x0
block checksum disabled -
Recovery is repairing media corrupt block x of file x in standby alert log
Hi,
oracle version:8.1.7.0.0
os version :solaris 5.9
We have Oracle 8i primary and standby databases. I am getting this error in the alert log file:
Thu Aug 28 22:48:12 2008
Media Recovery Log /oratranslog/arch_1_1827391.arc
Thu Aug 28 22:50:42 2008
Media Recovery Log /oratranslog/arch_1_1827392.arc
bash-2.05$ tail -f alert_pindb.log
Recovery is repairing media corrupt block 991886 of file 179
Recovery is repairing media corrupt block 70257 of file 184
Recovery is repairing media corrupt block 70258 of file 184
Recovery is repairing media corrupt block 70259 of file 184
Recovery is repairing media corrupt block 70260 of file 184
Recovery is repairing media corrupt block 70261 of file 184
Thu Aug 28 22:48:12 2008
Media Recovery Log /oratranslog/arch_1_1827391.arc
Thu Aug 28 22:50:42 2008
Media Recovery Log /oratranslog/arch_1_1827392.arc
Recovery is repairing media corrupt block 500027 of file 181
Recovery is repairing media corrupt block 500028 of file 181
Recovery is repairing media corrupt block 500029 of file 181
Recovery is repairing media corrupt block 500030 of file 181
Recovery is repairing media corrupt block 500031 of file 181
Recovery is repairing media corrupt block 991837 of file 179
Recovery is repairing media corrupt block 991838 of file 179
How can I resolve this?
Thanks
Prakash
Edited by: user612485 on Aug 28, 2008 10:53 AM
Dear satish kandi,
Recently we created an index on one table with the NOLOGGING option; I think that is the reason I am getting this error.
If I run the dbv utility on the files shown in the alert log file, I get the following results.
bash-2.05$ dbv file=/oracle15/oradata/pindb/pinx055.dbf blocksize=4096
DBVERIFY: Release 8.1.7.0.0 - Production on Fri Aug 29 12:18:27 2008
(c) Copyright 2000 Oracle Corporation. All rights reserved.
DBVERIFY - Verification starting : FILE = /oracle15/oradata/pindb/pinx053.dbf
Block Checking: DBA = 751593895, Block Type =
Found block already marked corrupted
Block Checking: DBA = 751593896, Block Type =
.DBVERIFY - Verification complete
Total Pages Examined : 1048576
Total Pages Processed (Data) : 0
Total Pages Failing (Data) : 0
Total Pages Processed (Index): 1036952
Total Pages Failing (Index): 0
Total Pages Processed (Other): 7342
Total Pages Empty : 4282
Total Pages Marked Corrupt : 0
Total Pages Influx : 0
bash-2.05$ dbv file=/oracle15/oradata/pindb/pinx053.dbf blocksize=4096
DBVERIFY: Release 8.1.7.0.0 - Production on Fri Aug 29 12:23:12 2008
(c) Copyright 2000 Oracle Corporation. All rights reserved.
DBVERIFY - Verification starting : FILE = /oracle15/oradata/pindb/pinx054.dbf
Block Checking: DBA = 759492966, Block Type =
Found block already marked corrupted
Block Checking: DBA = 759492967, Block Type =
Found block already marked corrupted
Block Checking: DBA = 759492968, Block Type =
.DBVERIFY - Verification complete
Total Pages Examined : 1048576
Total Pages Processed (Data) : 0
Total Pages Failing (Data) : 0
Total Pages Processed (Index): 585068
Total Pages Failing (Index): 0
Total Pages Processed (Other): 8709
Total Pages Empty : 454799
Total Pages Marked Corrupt : 0
Total Pages Influx : 0
bash-2.05$ dbv file=/oracle15/oradata/pindb/pinx054.dbf blocksize=4096
DBVERIFY: Release 8.1.7.0.0 - Production on Fri Aug 29 12:32:28 2008
(c) Copyright 2000 Oracle Corporation. All rights reserved.
DBVERIFY - Verification starting : FILE = /oracle15/oradata/pindb/pinx055.dbf
Block Checking: DBA = 771822208, Block Type =
Found block already marked corrupted
Block Checking: DBA = 771822209, Block Type =
Found block already marked corrupted
Block Checking: DBA = 771822210, Block Type =
.DBVERIFY - Verification complete
Total Pages Examined : 1048576
Total Pages Processed (Data) : 0
Total Pages Failing (Data) : 0
Total Pages Processed (Index): 157125
Total Pages Failing (Index): 0
Total Pages Processed (Other): 4203
Total Pages Empty : 887248
Total Pages Marked Corrupt : 0
Total Pages Influx : 0
My questions are:
1. If I drop the index and re-create it with the LOGGING option, will this error stop repeating in the alert log file?
2. If I activate the standby database in the future, will the database open without any errors?
Thanks
Prakash
-
Corrupt block detected in control file
Hi All,
I have a scenario where I have set up Active/Standby RACs and successfully have archive redo logs being applied to Standby - everything was ok
Versions - Oracle 11g R2 , of RHEL 5
Scenario 1:
Redo log application on the Standby works perfectly when I do not create our software application tables (via SQL scripts on the Primary) until AFTER the Data Guard/RAC setup steps are completed successfully.
Scenario 2:
Redo log application does not work when I run our SQL scripts BEFORE taking the RMAN backup of the Primary that is duplicated to the Standby.
Everything comes up on the Standby after the RMAN duplicate, and archive logs get transferred, but now they do not get applied.
I see the ORA-00227: corrupt block detected in control file: (block 1, # blocks 1) in the alert log when I put standby in Recovery Mode
My theory is that our SQL scripts are somehow breaking my RMAN backups when I run them before creating the RMAN backup of the Primary to load on the Standby. I just need someone to advise whether this is a possibility from their experience; if so, I will contact Oracle Support to investigate further. This is my first time working on RAC, DG, etc.
Thanks
Hi All,
I've tried to upgrade Oracle to 11.2.0.2 to fix this issue - which I can no longer remember!
I managed to complete the upgrade on the standby node (after having to reinstall due to a hostname change).
Now trying the Active node, I see the following error during the grid upgrade when I execute rootupgrade.sh:
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/11.2.0/grid2/crs/install/crsconfig_params
Creating trace directory
Failed to add (property/value):('OLD_OCR_ID/'-1') for checkpoint:ROOTCRS_OLDHOMEINFO.Error code is 256
The fixes for bug 9413827 are not present in the 11.2.0.1 crs home
Apply the patches for these bugs in the 11.2.0.1 crs home and then run rootupgrade.sh
/opt/app/11.2.0/grid2/perl/bin/perl -I/opt/app/11.2.0/grid2/perl/lib -I/opt/app/11.2.0/grid2/crs/install /opt/app/11.2.0/grid2/crs/install/rootcrs.pl execution failed
I have to download the patch for bug 9413827 from MOS, somehow apply it to the old 11.2.0.1 grid home, and then run rootupgrade.sh. -
What does "Non-local Process blocks cleaned out" mean in the RAC alert log?
Hi Experts,
We have a 4-node Oracle 11.1 RAC on Red Hat 5.1.
I saw lots of messages today about "Non-local Process blocks cleaned out" in the alert log files,
such as:
Tue Sep 8 16:31:04 2009
Reconfiguration started (old inc 18, new inc 20)
List of nodes:
0 1 2 3
Global Resource Directory frozen
Communication channels reestablished
* domain 0 valid = 1 according to instance 0
Tue Sep 8 16:31:04 2009
Master broadcasted resource hash value bitmaps
Non-local Process blocks cleaned out
Tue Sep 8 16:43:46 2009
LMS 0: 0 GCS shadows cancelled, 0 closed
Tue Sep 8 16:43:46 2009
LMS 1: 0 GCS shadows cancelled, 0 closed
Set master node info
Submitted all remote-enqueue requests
Dwn-cvts replayed, VALBLKs dubious
All grantable enqueues granted
Can some expert explain the above messages for me?
Thanks
Jim -
Block corruption error keep on repeating in alert log file
Hi,
Oracle version : 9.2.0.8.0
os : sun soalris
error in alert log file:
Errors in file /u01/app/oracle/admin/qtrain/bdump/qtrain_smon_24925.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-01578: ORACLE data block corrupted (file # 1, block # 19750)
ORA-01110: data file 1: '/u01/app/oracle/admin/qtrain/dbfiles/system.dbf'
The system datafile was restored from backup, but the error is still logged in the alert log file.
Inputs are appreciated.
Thanks
Prakash
Hi,
Thanks for the inputs
OWNER SEGMENT_NAME PARTITION_NAME SEGMENT_TYPE TABLESPACE_NAME EXTENT_ID FILE_ID BLOCK_ID BYTES BLOCKS RELATIVE_FNO
SYS SMON_SCN_TO_TIME CLUSTER SYSTEM 1 1 19749 16384 1 1
SYS SMON_SCN_TO_TIME CLUSTER SYSTEM 2 1 19750 32768 2 1
Thanks
Prakash -
Question about the Initialization Parameters Information in the Alert.log
Hi, All -
What is the correct answer to the following question?
Specifically, what information does Oracle provide you with in the alert.log regarding initialization parameters?
a. Values of all initialization parameters at startup
b. Values of initialization parameters modified since last startup
c. Values of initialization parameters with non-default values
d. Only values of initialization parameters that cannot be modified dynamically.
I think the answer should be B, but I would like to confirm.
The answer is C:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/process.htm#sthref1633
The alert log is a special trace file. The alert log of a database is a chronological log of messages and errors, and includes the following items:
All internal errors (ORA-600), block corruption errors (ORA-1578), and deadlock errors (ORA-60) that occur
Administrative operations, such as CREATE, ALTER, and DROP statements and STARTUP, SHUTDOWN, and ARCHIVELOG statements
Messages and errors relating to the functions of shared server and dispatcher processes
Errors occurring during the automatic refresh of a materialized view
The values of all initialization parameters that had nondefault values at the time the database and instance start
Kamran Agayev A. (10g OCP)
http://kamranagayev.wordpress.com -
We have recently migrated our database from Solaris to Linux (RHEL5) and since this migration we are seeing weird errors related to archive log shipping to the remote standby site and a corresponding ORA-600 in the standby site.
What's interesting is that everything resolves by itself; it always seems to happen during heavy database load, when log switches are frequent (~every 3 minutes). Initially it tries to archive to the remote site and fails with the following error in the primary alert log.
Errors in file /app/oracle/admin/UIIP01/bdump/uiip01_arc1_9772.trc:
ORA-00272: error writing archive log
Mon Jul 14 10:57:36 2008
FAL[server, ARC1]: FAL archive failed, see trace file.
Mon Jul 14 10:57:36 2008
Errors in file /app/oracle/admin/UIIP01/bdump/uiip01_arc1_9772.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed. Archiver continuing
Mon Jul 14 10:57:36 2008
ORACLE Instance UIIP01 - Archival Error. Archiver continuing.
And then we see an ORA-600 on the standby database related to this, which complains about redo block corruption.
Mon Jul 14 09:57:32 2008
Errors in file /app/oracle/admin/UIIP01/udump/uiip01_rfs_12775.trc:
ORA-00600: internal error code, arguments: [kcrrrfswda.11], [4], [368], [], [], [], [], []
Mon Jul 14 09:57:36 2008
And the trace file has this wonderful block corruption error:
Corrupt redo block 424432 detected: bad checksum
Flag: 0x1 Format: 0x22 Block: 0x000679f0 Seq: 0x000006ef Beg: 0x150 Cks:0xa2e5
----- Dump of Corrupt Redo Buffer -----
*** 2008-07-14 09:57:32.550
ksedmp: internal or fatal error
ORA-00600: internal error code, arguments: [kcrrrfswda.11], [4], [368], [], [], [], [], []
So ARCH tries to resend this redo log and succeeds. At the end of the day, all we have is a bunch of these ORA- errors in our alert logs triggering our monitors, and the errors resolve themselves without any manual intervention. I opened a TAR with Oracle Support, but since this is not affecting our primary database they are in no hurry to prioritize it, and they are reluctant to accept that it's a bug that resolves itself.
Just wanted to get it out here to see if anyone experienced a similar problem, let me know if you need any more details.
As I said earlier, this behaviour happens only during peak loads, especially when full 500M redo logs are being archived.
Thanks in advance.
Thanks, Madrid!
I scoured through these Metalink notes earlier, looking for possible solutions, and almost all of them were closed citing a customer problem related to OS, firewall, network, etc., or closed saying the requested data was not provided.
It looks as if they were never closed with a real resolution.
I just want to assure myself that the redo corruption the standby is reporting will not haunt me later when I am doing a recovery, or even a crash recovery, using the redo logs.
I have multiplexed my logs, just in case and have all the block checking parameters enabled on both primary and standby databases.
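The "bad checksum" detection those parameters enable works, in principle, like any block checksum: a value folded over the block at write time is stored in the header and recompared on read. Oracle's actual redo checksum algorithm is internal and not documented; this is only an illustrative sketch of the mechanism, with names of my own invention:

```python
import struct

def checksum16(data: bytes) -> int:
    """XOR-fold a byte string into a 16-bit checksum (illustrative only)."""
    if len(data) % 2:
        data += b"\x00"          # pad to a whole number of 16-bit words
    cks = 0
    for (word,) in struct.iter_unpack(">H", data):
        cks ^= word
    return cks

block = b"redo payload" * 40     # stand-in for a redo block body
stored = checksum16(block)       # what the writer would put in the header

# An undamaged read recomputes the same value...
assert checksum16(block) == stored
# ...while a single corrupted byte is flagged as "bad checksum".
damaged = b"\x00" + block[1:]
print(checksum16(damaged) == stored)  # False
```

The point is that a checksum mismatch tells you a block was damaged in transit or on disk, but nothing about why; the network/firewall angle Oracle Support raises is consistent with that.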
Thanks,
Ramki -
Logical data corruption in alert log
Thanks for taking my question! I am hoping someone can help me because I am really in a bind. I had some issues restoring my database the other day. I thought I had recovered everything, but now I am seeing more errors in the alert log and I have no idea what to do. Any help would be greatly appreciated!
I listed the alert log errors at the end; I am on 11g, Windows 2008.
My first mistake was stopping archive logging while a large job ran, to save some time.
After the job completed I restarted archive logging, then backed up the database and recorded an SCN.
I then recovered back to that SCN and it appeared to go OK, except I then noticed I had two rows in v$database_block_corruption.
I then decided to recover back several days and restore the PRD schema via an export taken before the first restore.
I changed the incarnation back by one, recovered, and imported the schema back.
I checked v$database_block_corruption and it was empty, so I was thinking I was good.
I restarted several batch jobs, which finished OK.
The backup ran and it was good.
I now have the logical errors below in the alert log. They appeared in between the nightly RMAN backup and the export.
Help! What is causing this and how do I fix it?
Kathie
export taken at 9:30
Wed Jul 22 22:00:03 2009
Error backing up file 2, block 47924: logical corruption
Error backing up file 2, block 47925: logical corruption
Error backing up file 2, block 47926: logical corruption
Error backing up file 2, block 47927: logical corruption
Error backing up file 2, block 71194: logical corruption
Error backing up file 2, block 78234: logical corruption
Error backing up file 2, block 78236: logical corruption
Error backing up file 2, block 78237: logical corruption
Error backing up file 2, block 78238: logical corruption
Error backing up file 2, block 78239: logical corruption
Error backing up file 2, block 78353: logical corruption
Error backing up file 2, block 78473: logical corruption
Error backing up file 2, block 79376: logical corruption
Error backing up file 2, block 79377: logical corruption
Error backing up file 2, block 79378: logical corruption
Error backing up file 2, block 81282: logical corruption
Error backing up file 2, block 81297: logical corruption
Error backing up file 2, block 81305: logical corruption
Error backing up file 2, block 81309: logical corruption
Error backing up file 2, block 81313: logical corruption
Error backing up file 2, block 81341: logical corruption
Error backing up file 2, block 81370: logical corruption
Error backing up file 2, block 81396: logical corruption
Error backing up file 2, block 82115: logical corruption
Error backing up file 2, block 82116: logical corruption
Error backing up file 2, block 82117: logical corruption
Error backing up file 2, block 82118: logical corruption
Error backing up file 2, block 82119: logical corruption
Error backing up file 2, block 85892: logical corruption
Error backing up file 2, block 85897: logical corruption
Error backing up file 2, block 85900: logical corruption
Error backing up file 2, block 85901: logical corruption
Error backing up file 2, block 85904: logical corruption
Error backing up file 2, block 85905: logical corruption
Error backing up file 2, block 85906: logical corruption
Error backing up file 2, block 85909: logical corruption
Error backing up file 2, block 85910: logical corruption
Error backing up file 2, block 85913: logical corruption
Error backing up file 2, block 85917: logical corruption
Error backing up file 2, block 85918: logical corruption
Error backing up file 2, block 85925: logical corruption
Error backing up file 2, block 85937: logical corruption
Error backing up file 2, block 85943: logical corruption
Error backing up file 2, block 85944: logical corruption
Error backing up file 2, block 85947: logical corruption
Error backing up file 2, block 85949: logical corruption
Error backing up file 2, block 85951: logical corruption
Error backing up file 2, block 85953: logical corruption
Error backing up file 2, block 85956: logical corruption
Error backing up file 2, block 85958: logical corruption
Error backing up file 2, block 85965: logical corruption
Error backing up file 2, block 85976: logical corruption
Error backing up file 2, block 85977: logical corruption
Error backing up file 2, block 85980: logical corruption
Error backing up file 2, block 85981: logical corruption
Error backing up file 2, block 85988: logical corruption
Error backing up file 2, block 85989: logical corruption
Error backing up file 2, block 85995: logical corruption
Error backing up file 2, block 86001: logical corruption
Error backing up file 2, block 86003: logical corruption
Error backing up file 2, block 86005: logical corruption
Error backing up file 2, block 86012: logical corruption
Error backing up file 2, block 86013: logical corruption
Error backing up file 2, block 86015: logical corruption
Error backing up file 2, block 93961: logical corruption
Error backing up file 2, block 93965: logical corruption
Error backing up file 2, block 93968: logical corruption
Error backing up file 2, block 93971: logical corruption
Error backing up file 2, block 93975: logical corruption
Error backing up file 2, block 93979: logical corruption
Error backing up file 2, block 93983: logical corruption
Error backing up file 2, block 93984: logical corruption
Error backing up file 2, block 93987: logical corruption
Error backing up file 2, block 93988: logical corruption
Error backing up file 2, block 93992: logical corruption
Error backing up file 2, block 93996: logical corruption
Error backing up file 2, block 94008: logical corruption
Error backing up file 2, block 94022: logical corruption
Error backing up file 2, block 94026: logical corruption
Error backing up file 2, block 94027: logical corruption
Error backing up file 2, block 94030: logical corruption
Error backing up file 2, block 94031: logical corruption
Error backing up file 2, block 94034: logical corruption
Error backing up file 2, block 94038: logical corruption
Error backing up file 2, block 94041: logical corruption
Error backing up file 2, block 94042: logical corruption
Error backing up file 2, block 94047: logical corruption
Error backing up file 2, block 94074: logical corruption
Error backing up file 2, block 94077: logical corruption
Error backing up file 2, block 118881: logical corruption
Error backing up file 2, block 118882: logical corruption
Error backing up file 2, block 118883: logical corruption
Error backing up file 2, block 118884: logical corruption
Wed Jul 22 22:00:27 2009
Thread 1 advanced to log sequence 279 (LGWR switch)
Current log# 3 seq# 279 mem# 0: F:\ORACLE\11.1.0\ORADATA\CS90QAP\REDO03.LOG
Current log# 3 seq# 279 mem# 1: E:\ORACLE\11.1.0\ORADATA\CS90QAP\REDO03A.LOG
Wed Jul 22 22:04:29 2009
Thread 1 advanced to log sequence 280 (LGWR switch)
Current log# 4 seq# 280 mem# 0: F:\ORACLE\11.1.0\ORADATA\CS90QAP\REDO04.LOG
Current log# 4 seq# 280 mem# 1: E:\ORACLE\11.1.0\ORADATA\CS90QAP\REDO04A.LOG
Wed Jul 22 22:37:15 2009
ALTER SYSTEM ARCHIVE LOG  (this is the nightly backup)
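Long runs of "Error backing up file F, block B" lines like the excerpt above are easier to scope once collapsed into contiguous block ranges per file, which you can then feed to DBA_EXTENTS to find the affected segments. A small sketch (my own helper, not an Oracle tool):

```python
import re

LINE = re.compile(r"Error backing up file (\d+), block (\d+): logical corruption")

def corrupt_ranges(alert_lines):
    """Group corrupt-block messages into sorted (start, end) runs per file."""
    blocks = {}
    for line in alert_lines:
        m = LINE.search(line)
        if m:
            blocks.setdefault(int(m.group(1)), []).append(int(m.group(2)))
    ranges = {}
    for f, blks in blocks.items():
        blks.sort()
        runs, start, prev = [], blks[0], blks[0]
        for b in blks[1:]:
            if b != prev + 1:          # gap ends the current run
                runs.append((start, prev))
                start = b
            prev = b
        runs.append((start, prev))
        ranges[f] = runs
    return ranges

sample = [
    "Error backing up file 2, block 47924: logical corruption",
    "Error backing up file 2, block 47925: logical corruption",
    "Error backing up file 2, block 71194: logical corruption",
]
print(corrupt_ranges(sample))  # {2: [(47924, 47925), (71194, 71194)]}
```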
Edited by: user579885 on Jul 23, 2009 7:36 AM
If the index is part of EM, why wait? How many EM users can you have? It should just be the DBAs.
Since the object is a PK, there could be FKs that reference it. If there are FKs to the PK, I would try ALTER INDEX ... REBUILD to see if that (1) works and (2) fixes the issue before resorting to drop/create, since the rebuild can be done without having to disable and re-enable the FKs.
Also note that under certain conditions, such as when the index status is INVALID, ALTER INDEX ... REBUILD will read the table rather than just the index for its data.
HTH -- Mark D Powell --