Logical corruption in datafile
What is logical corruption?
How can it occur in a datafile? Is it caused by disk problems?
How can it be avoided?
Is it possible to check for it at regular intervals with a scheduled job or script? Which command would do this? Will DBVERIFY do it?
Any good reading/URLs are most welcome.
Thank You Very Much.
user642237 wrote:
What is logical corruption?
How can it occur in a datafile? Is it caused by disk problems?
How can it be avoided?
Is it possible to check for it at regular intervals with a scheduled job or script? Which command would do this? Will DBVERIFY do it?
Any good reading/URLs are most welcome.
Thank You Very Much.

What's the DB version and OS? Where did you read the term "logical corruption" for datafiles? AFAIK, datafiles only get physically corrupted; logical corruption happens within blocks, for example an index entry pointing to a null rowid. I am not sure I have come across any situation or reference where this corruption is described for files as a whole. To check for it, the best tool is RMAN, which can do the job with a few simple commands.
HTH
Aman....
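To the scheduling part of the original question, here is a minimal sketch of a nightly check, assuming OS authentication and a placeholder SID (DBVERIFY can also scan files offline, but RMAN records its findings inside the database, which makes follow-up easier):

```shell
#!/bin/sh
# Hypothetical cron-driven check; ORACLE_SID and "/" authentication
# are assumptions -- adjust for your environment.
export ORACLE_SID=ORCL

rman target / <<'EOF'
# Reads every block and records physical + logical corruption in
# V$DATABASE_BLOCK_CORRUPTION without producing backup pieces.
BACKUP VALIDATE CHECK LOGICAL DATABASE;
EOF
```

Afterwards, `SELECT * FROM v$database_block_corruption;` in SQL*Plus; no rows means the last validation found nothing.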
Similar Messages
-
Hello,
I am running a backup and checking for any logical corruption -
RMAN> backup check logical database;
Starting backup at 03-MAR-10
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=135 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Data Protection for Oracle: version 5.5.1.0
allocated channel: ORA_SBT_TAPE_2
channel ORA_SBT_TAPE_2: SID=137 device type=SBT_TAPE
channel ORA_SBT_TAPE_2: Data Protection for Oracle: version 5.5.1.0
allocated channel: ORA_SBT_TAPE_3
channel ORA_SBT_TAPE_3: SID=138 device type=SBT_TAPE
channel ORA_SBT_TAPE_3: Data Protection for Oracle: version 5.5.1.0
channel ORA_SBT_TAPE_1: starting full datafile backup set
channel ORA_SBT_TAPE_1: specifying datafile(s) in backup set
input datafile file number=00014 name=/oracle1/data01/TESTDB/TESTDB_compress_test_01.dbf
input datafile file number=00006 name=/oracle/TESTDB/data01/TESTDB_shau_01.dbf
input datafile file number=00015 name=/oracle/product/11.1/dbs/ILM_TOOLKIT_IML_TEST_TAB_A.f
channel ORA_SBT_TAPE_1: starting piece 1 at 03-MAR-10
channel ORA_SBT_TAPE_2: starting full datafile backup set
channel ORA_SBT_TAPE_2: specifying datafile(s) in backup set
input datafile file number=00003 name=/oracle/TESTDB/data02/TESTDB_undo_01.dbf
input datafile file number=00013 name=/oracle/TESTDB/data01/TESTDB_roop_01.dbf
input datafile file number=00012 name=/oracle/TESTDB/data01/TESTDB_example_01.dbf
input datafile file number=00005 name=/oracle/TESTDB/data01/TESTDB_sysaud_tab_1m_01.dbf
channel ORA_SBT_TAPE_2: starting piece 1 at 03-MAR-10
channel ORA_SBT_TAPE_3: starting full datafile backup set
channel ORA_SBT_TAPE_3: specifying datafile(s) in backup set
input datafile file number=00004 name=/oracle/TESTDB/data01/TESTDB_users_01.dbf
input datafile file number=00001 name=/oracle/TESTDB/data01/TESTDB_system_01.dbf
input datafile file number=00002 name=/oracle/TESTDB/data01/TESTDB_sysaux_01.dbf
input datafile file number=00025 name=/oracle/export_files/TESTDB_users_02.dbf
channel ORA_SBT_TAPE_3: starting piece 1 at 03-MAR-10
channel ORA_SBT_TAPE_3: finished piece 1 at 03-MAR-10
piece handle=5ul7ltsd_1_1 tag=TAG20100303T204356 comment=API Version 2.0,MMS Version 5.5.1.0
channel ORA_SBT_TAPE_3: backup set complete, elapsed time: 00:05:15
channel ORA_SBT_TAPE_2: finished piece 1 at 03-MAR-10
piece handle=5tl7ltsd_1_1 tag=TAG20100303T204356 comment=API Version 2.0,MMS Version 5.5.1.0
channel ORA_SBT_TAPE_2: backup set complete, elapsed time: 00:06:56
channel ORA_SBT_TAPE_1: finished piece 1 at 03-MAR-10
piece handle=5sl7ltsd_1_1 tag=TAG20100303T204356 comment=API Version 2.0,MMS Version 5.5.1.0
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:08:16
Finished backup at 03-MAR-10
Starting Control File and SPFILE Autobackup at 03-MAR-10
piece handle=c-2109934325-20100303-0c comment=API Version 2.0,MMS Version 5.5.1.0
Finished Control File and SPFILE Autobackup at 03-MAR-10
Question: by looking at this output, how can I tell that RMAN performed a logical check for corruption? The output looks the same as a simple backup without the logical corruption check. Please advise.
Thanks!

Hi,
I think you won't see any summary for this; corruption is only reported when it is found.
There is also one related setting that can be used here; see this example:
Example 2-25 Specifying Corruption Tolerance for Datafile Backups
This example assumes a database that contains 5 datafiles. It uses the SET MAXCORRUPT command to indicate that no more than one corrupt block should be tolerated in each datafile. Because the CHECK LOGICAL option is specified on the BACKUP command, RMAN checks for both physical and logical corruption.
RUN
{
  SET MAXCORRUPT FOR DATAFILE 1,2,3,4,5 TO 1;
  BACKUP CHECK LOGICAL
    DATABASE;
}
use this to see clear output:
-- Check for physical corruption of all database files.
VALIDATE DATABASE;
-- Check for physical and logical corruption of a tablespace.
VALIDATE CHECK LOGICAL TABLESPACE USERS;
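Even with these commands, corrupt blocks are recorded in a view rather than always summarized in the output. A sketch of the follow-up query (column names per the 11g reference; verify on your version):

```sql
-- Populated by the most recent BACKUP ... CHECK LOGICAL or VALIDATE
-- run; zero rows means nothing was flagged.
SELECT file#, block#, blocks, corruption_change#, corruption_type
  FROM v$database_block_corruption;
```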
e.g.
List of Datafiles
=================
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- --------
1    FAILED 0              3536         57600           637711
File Name: /disk1/oradata/prod/system01.dbf
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data       1              41876
Index      0              7721
Other      0              4467 -
Logically corrupted blocks in standby
Hi
Assume I have a primary database and standby database.
Accidentally, Some of the objects (indexes and tables) are in nologging mode in primary database.
Force logging is not set.
When I scan the datafiles in standby I realize that some datafiles are logically corrupted because of this issue.
How can I get rid of these corrupted blocks?
If I rebuild indexes with logging option, and recreate table as logging,
Will it solve the problem? or any other suggestion
Many thanks

Sivok wrote:
Hi
Assume I have a primary database and standby database.
Accidentally, Some of the objects (indexes and tables) are in nologging mode in primary database.
Force logging is not set.
When I scan the datafiles in standby I realize that some datafiles are logically corrupted because of this issue.
How can I get rid of these corrupted blocks?
If I rebuild indexes with logging option, and recreate table as logging,
Will it solve the problem? or any other suggestion
Many thanks

Your primary should run in force logging mode (ALTER DATABASE FORCE LOGGING); then the object-level setting is ignored for direct-path operations. You can apply an incremental backup to the standby to catch up (or just recreate the standby, which might be as quick, depending on volumes).
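The two steps just described can be sketched as follows (an outline, not a tested procedure; the SCN and tag are placeholders you would take from the standby's corruption report):

```sql
-- On the primary: make NOLOGGING requests write full redo anyway.
ALTER DATABASE FORCE LOGGING;

-- Then, in RMAN on the primary, build an incremental that covers the
-- unlogged changes (replace 1234567 with an SCN at or below the
-- first corrupt change):
--   BACKUP INCREMENTAL FROM SCN 1234567 DATABASE TAG 'STBY_FIX';
-- Ship the pieces to the standby host, CATALOG them there, and apply
-- with: RECOVER DATABASE NOREDO;
```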
Niall Litchfield
http://www.orawin.info/ -
Questions on Logical corruption
Hello all,
My DB version is 10g through 11.2.0.3 on several different operating systems. We are in the process of deploying RMAN on our systems, and I am having a hard time testing and getting a grip on the whole logical corruption topic. From what I understand (please correct me if I am wrong):
1. I can add CHECK LOGICAL to my backup command (and that will check for both physical and logical corruption). But how much overhead does it have? It seems to be anywhere from 14-20% extra backup time.
2. Leaving MAXCORRUPT at its default (which I believe is 0): if there is physical corruption, my backup will fail and I should get an email/alert saying the backup broke.
3. Would this be the same for logical corruption? Would RMAN report logical corruption right away, the way it does for physical corruption? Or do I have to query V$DATABASE_BLOCK_CORRUPTION after the backup is done to find out whether I have logical corruption?
4. How would one test logical corruption (besides a NOLOGGING operation, as our DBs have force logging turned on)?
5. Is it good practice to have CHECK LOGICAL in your daily backup? I have no problem with it for small DBs, but some of ours are close to 50 TB+, and I think CHECK LOGICAL will increase backup time significantly.
6. If RMAN cannot repair logical corruption, why would I want CHECK LOGICAL at all (besides knowing I have a problem that the end user has to fix by reloading the data, assuming it's a table and not an index that is corrupt)?
7. Any best practices for checking logical corruption in DBs of 50+ TB?
I have searched here and on Google, but I could not find any way to reproduce logical corruption (maybe there is none), so I wanted to ask the community about it.
Thank you in advance for your time.

General info:
http://www.oracle.com/technetwork/database/focus-areas/availability/maa-datacorruption-bestpractices-396464.pdf
You might want to google "fractured block" for information about it without RMAN. You can simulate that by writing a C program to flip some bits, although technically that would be physical corruption. Also see Dealing with Oracle Database Block Corruption in 11g | The Oracle Instructor.
One way to simulate it is to use NOLOGGING operations and then try to recover (this is why force logging is used, so google "corruption force logging"). Here's an example: Block corruption after RMAN restore and recovery !!! | Practical Oracl. Hey, that one is no simulation, that's for real!
Somewhere in the recovery docs it explains... ah, I lost my train of thought; you might get better answers with shorter questions, or one question per thread, on this kind of forum. Oh yes: somewhere in the docs it explains that RMAN doesn't report the error right away, because later in the recovery stream it may decide the block is newly formatted and there wasn't really a problem.
This really depends on how much data is changing and how. If you do many NOLOGGING operations or run a complicated standby setup, you can run into this more. There's a trade-off between verifying everything and backup windows; site requirements control everything. That said, I've found that only paranoid DBAs check enough, while IT managers often say "that will never happen." Actually, even paranoid DBAs don't check enough; the vagaries of manual labor and flaky equipment can overshadow anything. -
Logical corruption found in the sysaux tablespace
Dear All:
We have lately been seeing a logical corruption error when running the DBVERIFY command, which reports block corruption. It is always in the SYSAUX tablespace. The database is 11g and the platform is Linux.
We get an error like "error backing up file 2 block xxxx: logical corruption", and it comes to the alert.log from the automated maintenance jobs, such as the SQL Tuning Advisor running during the maintenance window.
Now, as far as I know, we can't drop or rename the SYSAUX tablespace. There is a startup migrate option to drop SYSAUX, but it does not work here due to the presence of domain indexes. You can run RMAN block media recovery, but it may not fix the problem, since RMAN backups are physical and do not maintain logical integrity.
Any help, advice, or suggestions will be highly appreciated.

If you leave this corruption in place, you are likely to face a big issue that will compromise database availability sooner or later. SYSAUX is a critical tablespace, so you must proceed with caution.
Make sure you have a valid backup, and don't do anything unless you are sure about what you are doing and have a fallback procedure.
If you still have a valid backup, you can use RMAN to perform block-level recovery; this will help you fix the block. Otherwise, try to restore and recover the SYSAUX tablespace. If you cannot fix the block by refreshing the SYSAUX tablespace, then I suggest you create a new database and use the Transportable Tablespace technique to migrate all tablespaces from your current database to the new one, and retire this database.
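If a valid backup exists, the block-level recovery mentioned above looks roughly like this in 11g RMAN (a sketch: the file/block numbers are placeholders you would take from V$DATABASE_BLOCK_CORRUPTION, and pre-11g releases use BLOCKRECOVER instead of RECOVER):

```sql
-- Repair one known-bad block in the SYSAUX datafile (file 2 here):
RECOVER DATAFILE 2 BLOCK 12345;

-- Or repair everything currently listed in
-- V$DATABASE_BLOCK_CORRUPTION in one pass:
RECOVER CORRUPTION LIST;
```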
~ Madrid
http://hrivera99.blogspot.com -
Ocrcheck shows "Logical corruption check failed"
Hi, I have a strange issue that I am not sure how to recover from...
In a random 'ocrcheck' we found the above 'logical corruption'. In CRS_HOME/log/nodename/client/ I found that the previous ocrcheck had been done a month earlier and was successful. So something in the last month caused a logical corruption. The cluster is currently functioning OK.
So I tried doing an ocrdump on some backups we have, and I am receiving the following error:
#ocrdump -backupfile backup00.ocr <<< any backup I try from the past month
PROT-306: Failed to retrieve cluster registry data
This error occurs even on the backup file taken just prior to the successful ocrcheck from a month earlier. The log for this ocrdump shows:
cat ocrdump_6494.log
Oracle Database 11g CRS Release 11.1.0.7.0 - Production Copyright 1996, 2007 Oracle. All rights reserved.
2010-08-18 12:57:17.024: [ OCRDUMP][2813008768]ocrdump starts...
2010-08-18 12:57:17.038: [ OCROSD][2813008768]utread:3: Problem reading buffer 7473000 buflen 4096 retval 0 phy_offset 15982592 retry 0
2010-08-18 12:57:17.038: [ OCROSD][2813008768]utread:4: Problem reading the buffer errno 2 errstring No such file or directory
2010-08-18 12:57:17.038: [ OCRRAW][2813008768]gst: Dev/Page/Block [0/3870/3927] is CORRUPT (header)
2010-08-18 12:57:17.039: [ OCRRAW][2813008768]rbkp:2: could not read the free list
2010-08-18 12:57:17.039: [ OCRRAW][2813008768]gst:could not read fcl page 1
2010-08-18 12:57:17.039: [ OCRRAW][2813008768]rbkp:2: could not read the free list
2010-08-18 12:57:17.039: [ OCRRAW][2813008768]gst:could not read fcl page 2
2010-08-18 12:57:17.039: [ OCRRAW][2813008768]fkce:2: problem reading the tnode 131072
2010-08-18 12:57:17.039: [ OCRRAW][2813008768]propropen: Failed in finding key comp entry [26]
2010-08-18 12:57:17.039: [ OCRDUMP][2813008768]Failed to open key handle for key name [SYSTEM] [PROC-26: Error while accessing the physical storage]
2010-08-18 12:57:17.039: [ OCRDUMP][2813008768]Failure when trying to traverse ROOTKEY [SYSTEM]
2010-08-18 12:57:17.039: [ OCRDUMP][2813008768]Exiting [status=success]...
NOTE: an 'ocrdump' of the active ocr does work and creates the ocrdumpfile
The corruption in the ocr seems to be two keynames pointing to the same block.
Oracle Database 11g CRS Release 11.1.0.7.0 - Production Copyright 1996, 2007 Oracle. All rights reserved.
2010-08-18 13:22:54.095: [OCRCHECK][285084544]ocrcheck starts...
2010-08-18 13:22:55.447: [OCRCHECK][285084544]protchcheck: OCR status : total = [262120], used = [15496], avail = [246624]
2010-08-18 13:22:55.545: [OCRCHECK][285084544]LOGICAL CORRUPTION: current_keyname [SYSTEM.css.diskfile2], and keyname [SYSTEM.css.diskfile1.FILENAME] point to same block_number [3928]
2010-08-18 13:22:55.732: [OCRCHECK][285084544]LOGICAL CORRUPTION: current_keyname [SYSTEM.OCR.MANUALBACKUP.ITEMS.0], and keyname [SYSTEM.css.diskfile1] point to same block_number [3927]
2010-08-18 13:23:03.159: [OCRCHECK][285084544]Exiting [status=success]...
Since one of the keynames refers to the votedisk, that is not appearing correctly on a query -
crsctl query css votedisk
0. 0 /oracrsfiles/voting_disk_01
1. 0
2. 0 backup_20100818_103455.ocr <<<<this value changes if I issue a command that writes something to the ocr, in this case a manual backup.
My DBA is opening an SR, but I am wondering if I can use 'ocrconfig -restore' if the backupfile I want to use cannot be 'ocrdump'd?
Also, is anyone familiar with the 'ocrconfig -repair' as a possible solution?
Although this is a development cluster (two nodes), rebuilding would be a disaster ;)
Any help or thoughts would be much appreciated!

Hi buddy,
"My DBA is opening an SR" - Well... for corruption problems, no doubt it's better to work with the support team.
"But I am wondering if I can use 'ocrconfig -restore' if the backupfile I want to use cannot be 'ocrdump'd?" - No, that is not the idea. If your backup is not good, it's not safe to restore it. ;)
"Also, is anyone familiar with 'ocrconfig -repair' as a possible solution?" - That is for repairing nodes that were down while some kind of configuration change (replacing the OCR, for example) was executed, so I guess it's not your case.
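For reference, the commands being discussed, in the order you would sanity-check them (11.1 CLI, run as root; the backup path below is a made-up example, use whatever `ocrconfig -showbackup` actually reports):

```shell
# List automatic OCR backups and where they live:
ocrconfig -showbackup

# Dump a backup to verify it is readable BEFORE trusting it for a
# restore (this is the step failing with PROT-306 above):
ocrdump -backupfile /u01/app/crs/cdata/crs/backup00.ocr

# Only if the dump succeeds: stop CRS on all nodes, then restore.
ocrconfig -restore /u01/app/crs/cdata/crs/backup00.ocr
```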
Good Luck!
Cerreia -
Dear Experts
Can you please help me understand logical block corruption in detail?
Thanks
Asif Husain Khan

I wrote a small note about it on my blog; you may want to read that:
http://blog.aristadba.com/?p=109
HTH
Aman.... -
Hi DBA's,
We performed a corruption check using RMAN, and there are some corrupted blocks in 2 datafiles on the standby database.
My question: is it possible to recover those datafiles alone? If so, please advise.
Please give me feedback on my plan:
1. Take an RMAN datafile backup from the primary
2. Take the tablespace offline
3. Drop the datafile
4. Restore the datafile
5. Recover the datafile
6. Bring the tablespace online
corrupted files
FILE# BLOCK# BLOCKS CORRUPTION_CHANGE# CORRUPTIO
28 1508048 16 5446971876 NOLOGGING
28 1508112 16 5446971876 NOLOGGING
28 1508176 16 5446971876 NOLOGGING
28 1508240 16 5446971876 NOLOGGING
30 3083769 1289 5450419752 NOLOGGING
30 3085079 1289 5450419837 NOLOGGING
30 3086389 1289 5450419841 NOLOGGING
30 3087684 1304 5450419884 NOLOGGING
30 3088994 122 5450419888 NOLOGGING
Thanks in advance,
Raja...

Hi Raja,
Are your tablespaces in NOLOGGING mode in a standby environment? Or have you turned on FORCE LOGGING at the database level?
If you do a NOLOGGING operation in a database where logging is not enforced, you will certainly encounter these issues now and then.
Try the DBMS_REPAIR package to fix the corrupt blocks, or use RMAN to do block recovery.
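If you do end up restoring rather than repairing blocks, the standby-side flow sketches roughly as below (an outline with placeholder syntax for an 11g-style physical standby; the file numbers are from the corruption list in the question, and the backup must come from the primary after the NOLOGGING operations):

```sql
-- On the standby: stop redo apply first.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

-- In RMAN, connected to the standby, restore/recover just the two
-- affected files from a fresh primary backup:
--   RESTORE DATAFILE 28, 30;
--   RECOVER DATAFILE 28, 30;

-- Then restart redo apply:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```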
If you have to restore the datafiles, your steps look fine. But remember to turn on managed recovery in standby after the recovery. -
Logic corrupts DS_Store file?
I have some really strange behavior on my machine:
I upgraded today from 10.4.1 to 10.4.2 and everything went smoothly. After the install and the permissions repair, Logic 7.1 launched with no problem and everything seemed fine. A little later, when I did some work in the Finder, I noticed that I couldn't open some folders. I got the spinning beach ball, and the system seemed locked up even though there was no significant CPU, disk, or network activity. If I was lucky, I could open the Force Quit window and relaunch the Finder. After some testing I found that the hidden ".DS_Store" file seemed to be corrupted. For any folder I couldn't open, I deleted its .DS_Store file in the Terminal and the folder opened again.
After I deleted all those files from the problematic folders, everything seemed stable again. I continued to work with Logic and later found that some folders had the same problem again. After a tedious process of trial and error, I found that all the folders with the open-folder-spinning-ball issue were folders containing Logic files:
~/Library/Application Support
~/Library/Preferences
~/Library/Caches
This is my conclusion so far: Logic writes a file (most likely .plist files) to a folder and corrupts the .DS_Store file, or the Finder doesn't update the .DS_Store file with the new information from Logic.
Maybe something went wrong during the OSX 10.4.2 upgrade? I did the upgrade on a second G5 (same Dual 2.5GHz with similar configuration and Logic 7.1) a few weeks ago when it came out and I don't have any problem there.
Anyone with a clue what is happening here?
Thanks

AJD1170 wrote:
A .DS_Store file keeps appearing on my desktop. It returns seconds after I have moved it to the trash. I have deleted the .DS_Store files (hidden) from wherever I have found them, but like the desktop icon, they immediately reappear. I some time ago stumbled upon the solution, but it evades me now. How do I get rid of this nuisance permanently?
You want and need those .DS_Store files.
You need to hide hidden files again; in Terminal, copy and paste:
defaults write com.apple.finder AppleShowAllFiles FALSE; killall Finder
The Finder will create a .DS_Store file in every folder it accesses, including your Desktop folder. You witness this every time you try to delete it.
Data block corrupted on standby database (logical corruption)
Hi all,
we are getting the below error on our DR site; it is a manual physical standby database.
The following error has occurred:
ORA-01578: ORACLE data block corrupted (file # 3, block # 3236947)
ORA-01110: data file 3: '/bkp/oradata/orcl_raw_cadata01'
ORA-26040: Data block was loaded using the NOLOGGING option
I have checked on the primary database, and there are some objects whose changes are not being logged to the redo logfiles:
SQL> select table_name, index_name, logging from dba_indexes where logging = 'NO';
TABLE_NAME           INDEX_NAME                 LOG
MENU_MENUS           NUX_MENU_MENUS_01          NO
MENU_USER_MENUS      MENU_USER_MENUS_X          NO
OM_CITY              IDM_OM_CITY_CITY_NAME      NO
OM_EMPLOYER          EMPLR_CODE_PK              NO
OM_EMPLOYER          IDM_EMPLR_EMPLR_NAME       NO
OM_STUDENT_HEAD      OM_STUDENT_HEAD_HEAD_UK01  NO
OT_DAK_ENTRY_DETL    DED_SYS_ID_PK              NO
OT_DAK_ENTRY_HEAD    DEH_SYS_ID_PK              NO
OT_DAK_ENTRY_HEAD    IDM_DEH_DT_APPL_REGION     NO
OT_DAK_ENTRY_HEAD    IDM_DEH_REGION_CODE        NO
OT_DAK_REFUNDS_DETL  DRD_SYS_ID_PK              NO
OT_MEM_FEE_COL_DETL  IDM_MFCD_MFCH_SYS_ID       NO
OM_STUDENT_HEAD      IDM_STUD_COURSE            NO
13 rows selected.
So the main problem is in the OM_EMPLOYER table. If I drop the indexes on that table, recreate them with the LOGGING clause, and then apply the archived logs to the DR site, will the problem be resolved?
Please suggest.

Hi..
Firstly, how did you confirm that it was that index only? Can you post the output of:
SELECT tablespace_name, segment_type, owner, segment_name
FROM dba_extents WHERE file_id = 3 and 3236947 between block_id
AND block_id + blocks - 1;
This query can take time; if you are sure it's the index, don't run it.
Secondly, when you drop and recreate the index, the operation will be logged in the redo logfile. This information will also be logged to the archivelog file, as it is a replica of the redo logfile. Then, when you apply that archive log manually, it will drop the index and recreate it using the same SQL.
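If the segment does turn out to be the index, a lighter-weight alternative to drop/recreate is a logged rebuild on the primary (a sketch; the index name is taken from the question's listing):

```sql
-- REBUILD ... LOGGING writes full redo, so the archived logs applied
-- on the standby regenerate valid index blocks.
ALTER INDEX IDM_EMPLR_EMPLR_NAME REBUILD LOGGING;
```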
HTH
Anand -
How to logically corrupt a block
Sir,
I know different ways to check whether a block is corrupted, but I can test them only when a block actually is corrupted.
How would you make a block corrupt? I would be very grateful if you could give an example of block corruption.

Hi Sheila M
I need to find a way to extract lists of emails from Junk mail folder and add them en masse to blocked senders, without having to add them to my blocked senders one by one.
I have had the same email address for too long to change it, but I receive huge amounts of junk mail. I have so far barred nearly 2000 senders and have 1100 sitting in my junk folder from the last week. I barred them one by one, using the RULES function... I simply cannot keep doing this.
I have MacBook Pro running Mavericks, plus iPad plus iPhone 5.
The problem is aggravated by the fact that Apple has a weakness in Mail: I may bar thousands of senders on my Mac, but the filter does not sync the barred addresses to my iPad or iPhone, so if I use Mail on either, I get many thousands of emails at once and have to mark them one by one to delete. It's ridiculous.
To solve this problem, I have decided to do is block all the senders in webmail.
However I still have the same problem with extracting multiple addresses from my junk mail folder in order to copy them into blocked senders list.
Genius Bar said it can't be done. I think they're wrong. There simply has to be (a) a location on my Mac where I can find a list of all those I have already barred; (b) a way to extract email addresses from groups of received mail.
I'd be grateful for some advice, please. -
ORA-03113: end-of-file on communication channel
Hi
While starting up the Oracle database, I get the following error. What could be the issue, and how do I resolve it?
SQL> startup
ORACLE instance started.
Total System Global Area 864333824 bytes
Fixed Size 2231368 bytes
Variable Size 704644024 bytes
Database Buffers 150994944 bytes
Redo Buffers 6463488 bytes
Database mounted.
ORA-03113: end-of-file on communication channel
Process ID: 6507
Session ID: 580 Serial number: 5
Below is the relevant content from the alert log and trace files:
*#alert_orcl.log#*
Bad header found during crash/instance recovery
Reading datafile '+DATA/orcl/datafile/sysaux.257.762570243' for corruption at rdba: 0x0080f01b (file 2, block 61467)
Data in bad block:
type: 255 format: 2 rdba: 0x0000a2ff
last change scn: 0x0000.0080019f seq: 0x0 flg: 0x00
spare1: 0x0 spare2: 0x0 spare3: 0x4ff
consistency value in tail: 0x643e0346
check value in block header: 0x0
Read datafile mirror 'ASM5' (file 2, block 61467) found same corrupt data (no logical check)
block checksum disabled
Reading datafile '+DATA/orcl/datafile/sysaux.257.762570243' for corruption at rdba: 0x0080019f (file 2, block 415)
Read datafile mirror 'ASM4' (file 2, block 415) found same corrupt data (no logical check)
Read datafile mirror 'ASM1' (file 2, block 61467) found same corrupt data (no logical check)
Hex dump of (file 2, block 34539) in trace file /appl/oracle/diag/rdbms/orcl/orcl/trace/orcl_p000_6831.trc
Corrupt block relative dba: 0x008086eb (file 2, block 34539)
Bad header found during crash/instance recovery
Data in bad block:
type: 1 format: 6 rdba: 0x0000a201
last change scn: 0x0000.008086eb seq: 0x0 flg: 0x00
Read datafile mirror 'ASM3' (file 2, block 415) found same corrupt data (no logical check)
spare1: 0xbb spare2: 0xe1 spare3: 0x4ff
consistency value in tail: 0x02c20304
check value in block header: 0x0
block checksum disabled
Reading datafile '+DATA/orcl/datafile/sysaux.257.762570243' for corruption at rdba: 0x008086eb (file 2, block 34539)
Read datafile mirror 'ASM2' (file 2, block 34539) found same corrupt data (no logical check)
Hex dump of (file 2, block 420) in trace file /appl/oracle/diag/rdbms/orcl/orcl/trace/orcl_p002_6839.trc
Corrupt block relative dba: 0x008001a4 (file 2, block 420)
Bad header found during crash/instance recovery
Data in bad block:
type: 255 format: 2 rdba: 0x0000a206
last change scn: 0xe1f3.008001a4 seq: 0x74 flg: 0x00
spare1: 0x0 spare2: 0x0 spare3: 0x401
consistency value in tail: 0x474f4c20
check value in block header: 0x0
block checksum disabled
Reading datafile '+DATA/orcl/datafile/sysaux.257.762570243' for corruption at rdba: 0x008001a4 (file 2, block 420)
Read datafile mirror 'ASM4' (file 2, block 420) found same corrupt data (no logical check)
Read datafile mirror 'ASM1' (file 2, block 34539) found same corrupt data (no logical check)
Read datafile mirror 'ASM3' (file 2, block 420) found same corrupt data (no logical check)
Hex dump of (file 1, block 3097) in trace file /appl/oracle/diag/rdbms/orcl/orcl/trace/orcl_p002_6839.trc
Corrupt block relative dba: 0x00400c19 (file 1, block 3097)
Bad header found during crash/instance recovery
Data in bad block:
type: 2 format: 6 rdba: 0x0000a202
last change scn: 0x0000.00400c19 seq: 0x0 flg: 0x00
spare1: 0xdf spare2: 0xe2 spare3: 0x4ff
consistency value in tail: 0x09c10280
check value in block header: 0x0
Hex dump of (file 2, block 34765) in trace file /appl/oracle/diag/rdbms/orcl/orcl/trace/orcl_p000_6831.trc block checksum disabled
Corrupt block relative dba: 0x008087cd (file 2, block 34765)
Reading datafile '+DATA/orcl/datafile/system.256.762570243' for corruption at rdba: 0x00400c19 (file 1, block 3097)
Bad header found during crash/instance recovery
Data in bad block:
type: 255 format: 1 rdba: 0x0000a206
last change scn: 0xe27b.008087cd seq: 0x74 flg: 0x00
spare1: 0x0 spare2: 0x0 spare3: 0x401
Read datafile mirror 'ASM5' (file 1, block 3097) found same corrupt data (no logical check)
consistency value in tail: 0x00000000
check value in block header: 0x0
block checksum disabled
Reading datafile '+DATA/orcl/datafile/sysaux.257.762570243' for corruption at rdba: 0x008087cd (file 2, block 34765)
Read datafile mirror 'ASM3' (file 2, block 34765) found same corrupt data (no logical check)
Read datafile mirror 'ASM2' (file 1, block 3097) found same corrupt data (no logical check)
Hex dump of (file 3, block 272) in trace file /appl/oracle/diag/rdbms/orcl/orcl/trace/orcl_p002_6839.trc
Reading datafile '+DATA/orcl/datafile/undotbs1.258.762570243' for corruption at rdba: 0x00c00110 (file 3, block 272)
Read datafile mirror 'ASM1' (file 3, block 272) found same corrupt data (logically corrupt)
Read datafile mirror 'ASM5' (file 2, block 34765) found same corrupt data (no logical check)
Hex dump of (file 2, block 34771) in trace file /appl/oracle/diag/rdbms/orcl/orcl/trace/orcl_p000_6831.trc
Corrupt block relative dba: 0x008087d3 (file 2, block 34771)
Bad header found during crash/instance recovery
Data in bad block:
type: 1 format: 6 rdba: 0x0000a201
last change scn: 0x0000.008087d3 seq: 0x0 flg: 0x00
spare1: 0x3a spare2: 0xe3 spare3: 0x4ff
consistency value in tail: 0x00045055
check value in block header: 0x0
block checksum disabled
Reading datafile '+DATA/orcl/datafile/sysaux.257.762570243' for corruption at rdba: 0x008087d3 (file 2, block 34771)
Read datafile mirror 'ASM3' (file 2, block 34771) found same corrupt data (no logical check)
Read datafile mirror 'ASM2' (file 3, block 272) found same corrupt data (logically corrupt)
RECOVERY OF THREAD 1 STUCK AT BLOCK 272 OF FILE 3
Read datafile mirror 'ASM5' (file 2, block 34771) found same corrupt data (no logical check)
Wed Jun 27 05:49:55 2012
Hex dump of (file 2, block 65353) in trace file /appl/oracle/diag/rdbms/orcl/orcl/trace/orcl_dbw0_6713.trc
Corrupt block relative dba: 0x0080ff49 (file 2, block 65353)
Bad header found during buffer corrupt after write
Data in bad block:
type: 1 format: 6 rdba: 0x0000a206
last change scn: 0xe2bf.0080ff49 seq: 0x74 flg: 0x00
spare1: 0xf5 spare2: 0xe0 spare3: 0x602
consistency value in tail: 0x00000000
check value in block header: 0x0
block checksum disabled
Reread of rdba: 0x0080ff49 (file 2, block 65353) found different data
Hex dump of (file 2, block 65356) in trace file /appl/oracle/diag/rdbms/orcl/orcl/trace/orcl_dbw0_6713.trc
Corrupt block relative dba: 0x0080ff4c (file 2, block 65356)
Bad header found during buffer corrupt after write
Data in bad block:
type: 2 format: 6 rdba: 0x0000a206
last change scn: 0xe2a7.0080ff4c seq: 0x74 flg: 0x00
spare1: 0xbf spare2: 0xe2 spare3: 0x602
consistency value in tail: 0x00000059
check value in block header: 0x0
block checksum disabled
Reread of rdba: 0x0080ff4c (file 2, block 65356) found different data
Hex dump of (file 2, block 66114) in trace file /appl/oracle/diag/rdbms/orcl/orcl/trace/orcl_dbw0_6713.trc
Corrupt block relative dba: 0x00810242 (file 2, block 66114)
Bad header found during preparing block for write
Data in bad block:
type: 255 format: 1 rdba: 0x0000a206
last change scn: 0xe1bb.00810242 seq: 0x74 flg: 0x00
spare1: 0x0 spare2: 0x0 spare3: 0x401
consistency value in tail: 0x800102c1
check value in block header: 0x0
block checksum disabled
Errors in file /appl/oracle/diag/rdbms/orcl/orcl/trace/orcl_dbw0_6713.trc (incident=292893):
ORA-00600: internal error code, arguments: [kcbzpbuf_1], [4], [1], [], [], [], [], [], [], [], [], []
Incident details in: /appl/oracle/diag/rdbms/orcl/orcl/incident/incdir_292893/orcl_dbw0_6713_i292893.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Exception [type: SIGBUS, Non-existent physical address] [ADDR:0x72BFFFF8] [PC:0x3612E7CAE9, _wordcopy_bwd_dest_aligned()+185] [flags: 0x0, count: 1]
Errors in file /appl/oracle/diag/rdbms/orcl/orcl/trace/orcl_p000_6831.trc (incident=293021):
ORA-07445: exception encountered: core dump [_wordcopy_bwd_dest_aligned()+185] [SIGBUS] [ADDR:0x72BFFFF8] [PC:0x3612E7CAE9] [Non-existent physical address] []
Incident details in: /appl/oracle/diag/rdbms/orcl/orcl/incident/incdir_293021/orcl_p000_6831_i293021.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Exception [type: SIGSEGV, SI_KERNEL(general_protection)] [ADDR:0x0] [PC:0x546B040, kcbs_dump_adv_state()+634] [flags: 0x0, count: 2]
Wed Jun 27 05:49:59 2012
Dumping diagnostic data in directory=[cdmp_20120627054959], requested by (instance=1, osid=6831 (P000)), summary=[incident=293021].
Errors in file /appl/oracle/diag/rdbms/orcl/orcl/trace/orcl_p000_6831.trc (incident=293022):
ORA-07445: exception encountered: core dump [kcbs_dump_adv_state()+634] [SIGSEGV] [ADDR:0x0] [PC:0x546B040] [SI_KERNEL(general_protection)] []
ORA-07445: exception encountered: core dump [_wordcopy_bwd_dest_aligned()+185] [SIGBUS] [ADDR:0x72BFFFF8] [PC:0x3612E7CAE9] [Non-existent physical address] []
Incident details in: /appl/oracle/diag/rdbms/orcl/orcl/incident/incdir_293022/orcl_p000_6831_i293022.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Exception [type: SIGSEGV, SI_KERNEL(general_protection)] [ADDR:0x0] [PC:0x546B040, kcbs_dump_adv_state()+634] [flags: 0x0, count: 1]
Errors in file /appl/oracle/diag/rdbms/orcl/orcl/incident/incdir_293021/orcl_p000_6831_i293021.trc:
ORA-00607: Internal error occurred while making a change to a data block
ORA-00602: internal programming exception
ORA-07445: exception encountered: core dump [kcbs_dump_adv_state()+634] [SIGSEGV] [ADDR:0x0] [PC:0x546B040] [SI_KERNEL(general_protection)] []
ORA-07445: exception encountered: core dump [_wordcopy_bwd_dest_aligned()+185] [SIGBUS] [ADDR:0x72BFFFF8] [PC:0x3612E7CAE9] [Non-existent physical address] []
Errors in file /appl/oracle/diag/rdbms/orcl/orcl/trace/orcl_dbw0_6713.trc (incident=292894):
ORA-07445: exception encountered: core dump [kcbs_dump_adv_state()+634] [SIGSEGV] [ADDR:0x0] [PC:0x546B040] [SI_KERNEL(general_protection)] []
ORA-00600: internal error code, arguments: [kcbzpbuf_1], [4], [1], [], [], [], [], [], [], [], [], []
Incident details in: /appl/oracle/diag/rdbms/orcl/orcl/incident/incdir_292894/orcl_dbw0_6713_i292894.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Dumping diagnostic data in directory=[cdmp_20120627055004], requested by (instance=1, osid=6713 (DBW0)), summary=[incident=292893].
Wed Jun 27 05:50:08 2012
PMON (ospid: 6679): terminating the instance due to error 471
Wed Jun 27 05:50:08 2012
ORA-1092 : opitsk aborting process
Wed Jun 27 05:50:08 2012
License high water mark = 4
Instance terminated by PMON, pid = 6679
USER (ospid: 6860): terminating the instance
Instance terminated by USER, pid = 6860
*#trace logs#*
Corrupt block relative dba: 0x00810242 (file 2, block 66114)
Bad header found during preparing block for write
Data in bad block:
type: 255 format: 1 rdba: 0x0000a206
last change scn: 0xe1bb.00810242 seq: 0x74 flg: 0x00
spare1: 0x0 spare2: 0x0 spare3: 0x401
consistency value in tail: 0x800102c1
check value in block header: 0x0
block checksum disabled
kcra_dump_redo_internal: skipped for critical process
kcbz_try_block_recovery <1, 8454722>: tries=0 max=5 cur=1340797795 last=0
BH (0x7bbe0fc8) file#: 2 rdba: 0x00810242 (2/66114) class: 1 ba: 0x7b8f4000
set: 12 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 0,0
dbwrid: 0 obj: 68150 objn: -1 tsn: 1 afn: 2 hint: f
hash: [0x912f45b0,0x912f45b0] lru-req: [0x7bbdfdb0,0x90deff60]
lru-flags: on_auxiliary_list
obj-flags: object_write_list
ckptq: [0x7bbfc4c8,0x7bbea0a8] fileq: [NULL] objq: [0x8b251480,0x8b251480] objaq: [0x8b251450,0x7bbe0e88]
st: INST_RCV md: NULL rsop: 0x90d110e0
flags: buffer_dirty being_written block_written_once recovery_resilver
recovery_read_complete
cr pin refcnt: 0 sh pin refcnt: 0
kcra_dump_redo_internal: skipped for critical process
Incident 292893 created, dump file: /appl/oracle/diag/rdbms/orcl/orcl/incident/incdir_292893/orcl_dbw0_6713_i292893.trc
ORA-00600: internal error code, arguments: [kcbzpbuf_1], [4], [1], [], [], [], [], [], [], [], [], []
Incident 292894 created, dump file: /appl/oracle/diag/rdbms/orcl/orcl/incident/incdir_292894/orcl_dbw0_6713_i292894.trc
ORA-07445: exception encountered: core dump [kcbs_dump_adv_state()+634] [SIGSEGV] [ADDR:0x0] [PC:0x546B040] [SI_KERNEL(general_protection)] []
ORA-00600: internal error code, arguments: [kcbzpbuf_1], [4], [1], [], [], [], [], [], [], [], [], []
Did you actually read the alert log?
The problem is clear in there: your datafiles are corrupted!
While the database is trying to correct this, a lot of ORA-00600 and ORA-07445 errors are generated.
Consult Oracle Support to get this resolved.
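As an aside, the corrupt-block messages above quote each block by its relative DBA in hex (e.g. "Corrupt block relative dba: 0x00810242 (file 2, block 66114)"). A 32-bit relative DBA packs the relative file number into the top 10 bits and the block number into the low 22 bits, so you can decode these yourself; here is a minimal Python sketch (the masking step also copes with tools such as dbv that sometimes print the DBA as a signed negative integer):

```python
def decode_dba(dba):
    """Split a 32-bit relative DBA into (relative file#, block#)."""
    # Some tools print the DBA as a signed negative int; mask back to unsigned.
    dba &= 0xFFFFFFFF
    # Standard 32-bit RDBA layout: top 10 bits = relative file number,
    # low 22 bits = block number within that file.
    return dba >> 22, dba & 0x3FFFFF

print(decode_dba(0x00810242))   # (2, 66114)
print(decode_dba(0x0080ff4c))   # (2, 65356)
```

Decoding 0x00810242 this way reproduces exactly the "(file 2, block 66114)" shown in the trace lines above.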
Thanks
FJFranken -
BACKUP VALIDATE vs VALIDATE in checking logical/physical corruption
Hello all,
I am checking whether our 10gR2 database has any physical or logical corruption. I have read in some places that the VALIDATE command is enough to check a database for physical corruption. Our database was never backed up by RMAN before. Are any configuration settings needed for running the BACKUP VALIDATE command? The reason I am asking is that the plain VALIDATE command returns an error, while the BACKUP VALIDATE command runs without error but does not show the
"File Status Marked Corrupt Empty Blocks Blocks Examined High SCN" lines.
I used the command in two different formats and both do not show individual data file statuses:
RMAN> run {
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE DEVICE TYPE DISK PARALLELISM 10 BACKUP TYPE TO BACKUPSET;
BACKUP VALIDATE CHECK LOGICAL DATABASE FILESPERSET=10;
}
RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;
RMAN> VALIDATE DATABASE;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00558: error encountered while parsing input commands
RMAN-01009: syntax error: found "database": expecting one of: "backupset"
RMAN-01007: at line 1 column 10 file: standard input
However on a different database already being backed up by RMAN daily, BACKUP VALIDATE output shows list of datafiles and STATUS = OK as below:
List of Datafiles
=================
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
How can we check the status of every individual datafile? Appreciate your responses. Thanks.
Hi,
After you have run:
BACKUP VALIDATE CHECK LOGICAL DATABASE;
you can use sqlplus and run:
select * from v$database_block_corruption;
The output will tell you which block in which datafile is corrupt.
As for the RMAN-01009 error: on 10gR2 a standalone VALIDATE DATABASE command does not exist yet (it was introduced in 11g; in 10g, VALIDATE accepts only BACKUPSET), so BACKUP VALIDATE is the command to use there.
Regards,
Tycho
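To turn the view's output into something actionable, you can expand each row into an inclusive block range; the ranges then map directly onto RMAN's BLOCKRECOVER DATAFILE ... BLOCK ... command. A minimal Python sketch, using hypothetical sample rows shaped like the view's FILE#, BLOCK#, BLOCKS, CORRUPTION_CHANGE#, CORRUPTION_TYPE columns (actually fetching the rows from the database is assumed to happen elsewhere):

```python
# Hypothetical rows as fetched from v$database_block_corruption:
# (FILE#, BLOCK#, BLOCKS, CORRUPTION_CHANGE#, CORRUPTION_TYPE)
rows = [
    (1, 11477, 2, 228760588, "LOGICAL"),
    (1, 11514, 1, 228760329, "LOGICAL"),
]

def corrupt_ranges(rows):
    """Expand each row into an inclusive (file#, first block, last block) range."""
    return [(f, b, b + n - 1) for f, b, n, _scn, _type in rows]

for f, lo, hi in corrupt_ranges(rows):
    print(f"datafile {f}: blocks {lo}-{hi}")
```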
Edited by: tychos on 8-sep-2011 18:34 -
Corrupted datafiles in standby
Hello
I used to have three indexes in nologging mode in the primary database.
When I scan the datafiles of these indexes in the standby database, I notice that the datafiles are logically corrupted.
Today I dropped these indexes in the primary database and recreated them with logging. The archivelogs of these steps were also applied to the standby.
Even though there is no object in nologging mode any more, when I scan the datafiles in the standby they are somehow still logically corrupted!
What is the reason for that?
Cheers
Mr Sb,
They are created as logging.
It is really difficult to grasp this.
Here is the output for one of the corrupted datafiles with dbv:
DBV-00200: Block, dba -2126348425, already marked corrupted
DBV-00200: Block, dba -2126348424, already marked corrupted
DBV-00200: Block, dba -2126348423, already marked corrupted
DBV-00200: Block, dba -2126348422, already marked corrupted
DBV-00200: Block, dba -2126348421, already marked corrupted
DBV-00200: Block, dba -2126348420, already marked corrupted
DBV-00200: Block, dba -2126348419, already marked corrupted
DBV-00200: Block, dba -2126348418, already marked corrupted
DBV-00200: Block, dba -2126348417, already marked corrupted
DBV-00200: Block, dba -2126348416, already marked corrupted
DBV-00200: Block, dba -2126348415, already marked corrupted
DBV-00200: Block, dba -2126348414, already marked corrupted
DBV-00200: Block, dba -2126348413, already marked corrupted
DBV-00200: Block, dba -2126348412, already marked corrupted
DBV-00200: Block, dba -2126348411, already marked corrupted
DBV-00200: Block, dba -2126348410, already marked corrupted
DBV-00200: Block, dba -2126348409, already marked corrupted
DBV-00200: Block, dba -2126348408, already marked corrupted
DBV-00200: Block, dba -2126348407, already marked corrupted -
How to find physical corruption
Hi,
I used the backup validate command to find physical corruption in my database, as follows:
backup validate check logical database;
After this command, I queried v$database_block_corruption.
12:04:47 SQL> select * from v$database_block_corruption;
FILE#     BLOCK#     BLOCKS  CORRUPTION_CHANGE#  CORRUPTION_TYPE
    1      11477          2           228760588  LOGICAL
    1      11514          1           228760329  LOGICAL
12:05:09 SQL>
How can I find out whether this is physical corruption or logical corruption?
I also used the dbv command to check my system datafile (file# 1). The output is as follows:
D:\>dbv file=D:\ORACLE\ORADATA\OPERA\SYSTEM01.DBf
DBVERIFY: Release 10.2.0.4.0 - Production on Mon Nov 18 11:56:23 2013
Copyright (c) 1982, 2007, Oracle. All rights reserved.
DBVERIFY - Verification starting : FILE = D:\ORACLE\ORADATA\OPERA\SYSTEM01.DBf
DBV-00200: Block, DBA 4205781, already marked corrupt
DBV-00200: Block, DBA 4205782, already marked corrupt
DBV-00200: Block, DBA 4205818, already marked corrupt
DBVERIFY - Verification complete
Total Pages Examined : 168960
Total Pages Processed (Data) : 127180
Total Pages Failing (Data) : 0
Total Pages Processed (Index): 25248
Total Pages Failing (Index): 0
Total Pages Processed (Other): 2440
Total Pages Processed (Seg) : 0
Total Pages Failing (Seg) : 0
Total Pages Empty : 14092
Total Pages Marked Corrupt : 3
Total Pages Influx : 0
Highest block SCN : 245793757 (0.245793757)
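As a side note, the "Highest block SCN" line prints the SCN in wrap.base form; the full SCN value is wrap * 2^32 + base, so "245793757 (0.245793757)" is simply wrap 0, base 245793757. A small sketch of the arithmetic:

```python
def full_scn(wrap, base):
    """Combine the wrap.base form printed by dbv into the full SCN value."""
    return (wrap << 32) | base

print(full_scn(0, 245793757))  # 245793757
```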
dbv always checks for physical corruption, but how can I make sure, using RMAN, whether this is physical corruption or only logical corruption? Thanks
Alert log says something as follows
Sat Sep 14 00:22:50 2013
Errors in file d:\oracle\admin\opera\bdump\opera_p008_3916.trc:
ORA-01578: ORACLE data block corrupted (file # 1, block # 11478)
ORA-01110: data file 1: 'D:\ORACLE\ORADATA\OPERA\SYSTEM01.DBF'
ORA-10564: tablespace SYSTEM
ORA-01110: data file 1: 'D:\ORACLE\ORADATA\OPERA\SYSTEM01.DBF'
ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 5121
ORA-00607: Internal error occurred while making a change to a data block
ORA-00600: internal error code, arguments: [kddummy_blkchk], [1], [11478], [6101], [], [], [], []
SAQ
Hi,
DBVerify reports both physical and logical intrablock corruptions by default. A good place to start would be going through the below MOS note for understanding them well.
Physical and Logical Block Corruptions. All you wanted to know about it. (Doc ID 840978.1)
Basically, with physical corruption you will not be able to read data from the corrupt blocks at all, while with logical corruption you might still be able to read the data.
Also, DBVerify reports DBV-200/201 errors for logical corruption.
Hth
Abhishek
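Since the original question also asked about running such a check on a regular interval from a job script, one simple pattern is to invoke dbv from a scheduled job and parse its summary; anything non-zero in "Total Pages Marked Corrupt" warrants a follow-up with RMAN. A hedged Python sketch of just the parsing step (invoking dbv itself, e.g. from cron, is assumed to wrap around this):

```python
import re

def dbv_corrupt_pages(dbv_output):
    """Extract the 'Total Pages Marked Corrupt' count from DBVERIFY output;
    returns 0 when the line is absent."""
    m = re.search(r"Total Pages Marked Corrupt\s*:\s*(\d+)", dbv_output)
    return int(m.group(1)) if m else 0

# Sample trimmed from a real DBVERIFY summary:
sample = """DBVERIFY - Verification complete
Total Pages Examined : 168960
Total Pages Marked Corrupt : 3
Total Pages Influx : 0
"""
print(dbv_corrupt_pages(sample))  # 3
```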