Logical corruption

Hi DBAs,
We performed a corruption check using RMAN and found corrupted blocks in two datafiles on the standby database.
My question: is it possible to recover just those two datafiles? If so, please advise.
Please give me feedback on my plan:
1. Take an RMAN datafile backup from the primary
2. Take the tablespace offline
3. Drop the datafile
4. Restore the datafile
5. Recover the datafile
6. Bring the tablespace online
Corrupted blocks:
FILE#   BLOCK#  BLOCKS  CORRUPTION_CHANGE#  CORRUPTION_TYPE
   28  1508048      16          5446971876  NOLOGGING
   28  1508112      16          5446971876  NOLOGGING
   28  1508176      16          5446971876  NOLOGGING
   28  1508240      16          5446971876  NOLOGGING
   30  3083769    1289          5450419752  NOLOGGING
   30  3085079    1289          5450419837  NOLOGGING
   30  3086389    1289          5450419841  NOLOGGING
   30  3087684    1304          5450419884  NOLOGGING
   30  3088994     122          5450419888  NOLOGGING
Thanks in Advance,
Raja...

Hi Raja,
Are your tablespaces in NOLOGGING mode in a standby environment? Or have you enabled FORCE LOGGING at the database level anyway?
If you perform NOLOGGING operations in a database where force logging is not enforced, you will certainly run into these issues now and then.
Try the DBMS_REPAIR package to deal with the corrupt blocks, or use RMAN to do block recovery; a sketch of both options follows below.
If you have to restore the datafiles, your steps look fine. But remember to turn managed recovery back on on the standby after the recovery.
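A minimal sketch of the recovery options mentioned above, assuming a physical standby and the file numbers (28 and 30) from the listing; the commands are illustrative rather than a verified procedure, and NOLOGGING-damaged blocks can only be repaired if a good copy of them exists somewhere in a backup:
-- On the standby: stop managed recovery before touching the files
SQL>  ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
-- Option A: block media recovery of the reported blocks
RMAN> BLOCKRECOVER DATAFILE 28 BLOCK 1508048 DATAFILE 30 BLOCK 3083769;
-- Option B: restore and recover the affected datafiles from the backup taken on the primary
RMAN> RESTORE DATAFILE 28, 30;
RMAN> RECOVER DATAFILE 28, 30;
-- Re-enable managed recovery afterwards
SQL>  ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;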

Similar Messages

  • Questions on Logical corruption

    Hello all,
    My DB versions range from 10g up to 11.2.0.3 on various operating systems. We are in the process of deploying RMAN on our systems and I am having a hard time testing and getting a grip on the whole logical corruption topic. From what I understand (please correct me if I am wrong):
    1. I can add CHECK LOGICAL to my backup command (and that will check for both physical and logical corruption). But how much overhead does it have? It seems to be anywhere from 14-20% extra backup time.
    2. Leaving MAXCORRUPT at its default (which I believe is 0), if there is physical corruption my backup will fail and I should get an email/alert saying the backup broke.
    3. Would this be the same for logical corruption? Would RMAN report logical corruption right away the way it does for physical corruption, or do I have to query V$DATABASE_BLOCK_CORRUPTION after the backup is done to find out whether I have logical corruption?
    4. How would one test logical corruption (besides a NOLOGGING operation, since our DBs have force logging turned on)?
    5. Is it good practice to have CHECK LOGICAL in the daily backup? I have no problem with it for small DBs, but some of ours are close to 50 TB+ and I think CHECK LOGICAL will increase the backup time significantly.
    6. If RMAN cannot repair logical corruption, why would I want CHECK LOGICAL at all (besides knowing I have a problem that the end user has to fix by reloading the data, assuming it is a table and not an index that is corrupt)?
    7. Any best practices when it comes to checking logical corruption for DBs of 50+ TB?
    I have searched here and on Google, but I could not find any way of reproducing logical corruption (maybe there is none), so I wanted to ask the community about it.
    Thank you in advance for your time.

    General info:
    http://www.oracle.com/technetwork/database/focus-areas/availability/maa-datacorruption-bestpractices-396464.pdf
    You might want to google "fractured block" for information about it outside RMAN. You can simulate that by writing a C program to flip some bits, although technically that would be physical corruption. Also see Dealing with Oracle Database Block Corruption in 11g | The Oracle Instructor.
    One way to simulate it is to use NOLOGGING operations and then try to recover (this is why force logging is used, so google "corruption force logging"). Here's an example: Block corruption after RMAN restore and recovery !!! | Practical Oracle. Hey, no simulation there, that one's for real!
    Somewhere in the recovery docs it explains... aw, I lost my train of thought; you might get better answers with shorter questions, or one question per thread, in this kind of forum. Oh yeah, somewhere in the docs it explains that RMAN doesn't report the error right away, because later in the recovery stream it may decide the block was newly formatted and there wasn't really a problem.
    This really depends on how much data is changing and how. If you do many NOLOGGING operations or run a complicated standby setup, you can run into this more. There's a trade-off between verifying everything and backup windows; site requirements control everything. That said, I've found only paranoid DBAs check enough, while IT managers often say "that will never happen." Actually, even paranoid DBAs don't check enough; the vagaries of manual labor and flaky equipment can overshadow anything.
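    As a rough illustration of the NOLOGGING scenario mentioned above (assuming a primary with a physical standby and force logging disabled; the schema, table and tablespace names are made up):
    -- On the primary
    SQL> CREATE TABLE scott.nolog_demo TABLESPACE users NOLOGGING
         AS SELECT * FROM all_objects WHERE 1 = 0;
    SQL> INSERT /*+ APPEND */ INTO scott.nolog_demo SELECT * FROM all_objects;
    SQL> COMMIT;
    -- After the redo is shipped and applied, the direct-loaded blocks on the standby
    -- are invalid: reading them there raises ORA-01578 / ORA-26040, and a check such as
    RMAN> VALIDATE CHECK LOGICAL TABLESPACE users;   -- 11g syntax; BACKUP VALIDATE ... on 10g
    -- run against the standby reports them as corrupt.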

  • Logical corruption in datafile

    What is logical corruption?
    How can this occur in a datafile? Is it caused by the disk?
    How can it be avoided?
    Is it possible to check for this at a regular interval with some job or script? Any idea what command to use? Will dbverify do it?
    Any good reading/URL is most welcome.
    Thank you very much.

    user642237 wrote:
    What is logical corruption?
    How can this occur in a datafile? Is it caused by the disk?
    How can it be avoided?
    Is it possible to check for this at a regular interval with some job or script? Will dbverify do it?
    Any good reading/URL is most welcome.
    Thank you very much.

    What is the DB version and O/S? Where did you read the term "logical corruption" applied to datafiles? AFAIK, datafiles only get physically corrupted; logical corruption happens within blocks, for example an index entry pointing to a null rowid. I am not sure I have come across any situation/reference where this kind of corruption is described for files as well. To check for it, the best tool is RMAN, which can do the job with a few simple commands.
    HTH
    Aman....
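    For the "regular interval" part of the question, a minimal sketch of the two usual approaches (the datafile path is illustrative); either can be wrapped in a scheduled job:
    -- RMAN: checks for both physical and logical corruption, results land in the view below
    RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;
    SQL>  SELECT * FROM v$database_block_corruption;
    -- DBVERIFY against a single datafile from the OS prompt
    $ dbv file=/u01/app/oracle/oradata/ORCL/users01.dbf blocksize=8192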

  • Logical corruption found in the sysaux tablespace

    Dear All:
    Lately we have been seeing a logical corruption error when running the dbverify command, which reports block corruption. It is always in the SYSAUX tablespace. The database is 11g and the platform is Linux.
    We get an error like "error backing up file 2 block xxxx: logical corruption", and it shows up in the alert.log from the automated maintenance jobs (such as the SQL Tuning Advisor) that run during the maintenance window.
    Now, as far as I know, we can't drop or rename the SYSAUX tablespace. There is a startup migrate option to drop SYSAUX, but it does not work due to the presence of domain indexes. You may run RMAN block media recovery, but it ends up not fixing the problem, since RMAN backups are more about physical integrity than maintaining logical integrity.
    Any help, advise, suggestion will be highly appreciated.

    If you leave this corruption in place, you are likely to face a big issue that will compromise database availability sooner or later. SYSAUX is a critical tablespace, so you must proceed with caution.
    Make sure you have a valid backup, and don't do anything unless you are sure about what you are doing and you have a fallback procedure.
    If you still have a valid backup, you can use RMAN to perform block-level recovery; this may help you fix the block. Otherwise, try to restore and recover SYSAUX. If you cannot fix the block by refreshing the SYSAUX tablespace, then I suggest you create a new database and use the Transportable Tablespace technique to migrate all tablespaces from your current database to the new one and get rid of this database.
    ~ Madrid
    http://hrivera99.blogspot.com
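    A hedged sketch of the RMAN block-recovery route mentioned above (11g syntax; whether it actually repairs an intra-block logical corruption depends on a good copy of the block existing in a backup):
    RMAN> VALIDATE CHECK LOGICAL DATAFILE 2;          -- SYSAUX is typically file 2; confirm in V$DATAFILE
    SQL>  SELECT * FROM v$database_block_corruption;  -- note the file#/block# reported
    RMAN> RECOVER CORRUPTION LIST;                    -- attempts to repair every block listed in the view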

  • Backup and Logical Corruption

    Hello,
    I am running a backup and checking for any logical corruption -
    RMAN> backup check logical database;
    Starting backup at 03-MAR-10
    allocated channel: ORA_SBT_TAPE_1
    channel ORA_SBT_TAPE_1: SID=135 device type=SBT_TAPE
    channel ORA_SBT_TAPE_1: Data Protection for Oracle: version 5.5.1.0
    allocated channel: ORA_SBT_TAPE_2
    channel ORA_SBT_TAPE_2: SID=137 device type=SBT_TAPE
    channel ORA_SBT_TAPE_2: Data Protection for Oracle: version 5.5.1.0
    allocated channel: ORA_SBT_TAPE_3
    channel ORA_SBT_TAPE_3: SID=138 device type=SBT_TAPE
    channel ORA_SBT_TAPE_3: Data Protection for Oracle: version 5.5.1.0
    channel ORA_SBT_TAPE_1: starting full datafile backup set
    channel ORA_SBT_TAPE_1: specifying datafile(s) in backup set
    input datafile file number=00014 name=/oracle1/data01/TESTDB/TESTDB_compress_test_01.dbf
    input datafile file number=00006 name=/oracle/TESTDB/data01/TESTDB_shau_01.dbf
    input datafile file number=00015 name=/oracle/product/11.1/dbs/ILM_TOOLKIT_IML_TEST_TAB_A.f
    channel ORA_SBT_TAPE_1: starting piece 1 at 03-MAR-10
    channel ORA_SBT_TAPE_2: starting full datafile backup set
    channel ORA_SBT_TAPE_2: specifying datafile(s) in backup set
    input datafile file number=00003 name=/oracle/TESTDB/data02/TESTDB_undo_01.dbf
    input datafile file number=00013 name=/oracle/TESTDB/data01/TESTDB_roop_01.dbf
    input datafile file number=00012 name=/oracle/TESTDB/data01/TESTDB_example_01.dbf
    input datafile file number=00005 name=/oracle/TESTDB/data01/TESTDB_sysaud_tab_1m_01.dbf
    channel ORA_SBT_TAPE_2: starting piece 1 at 03-MAR-10
    channel ORA_SBT_TAPE_3: starting full datafile backup set
    channel ORA_SBT_TAPE_3: specifying datafile(s) in backup set
    input datafile file number=00004 name=/oracle/TESTDB/data01/TESTDB_users_01.dbf
    input datafile file number=00001 name=/oracle/TESTDB/data01/TESTDB_system_01.dbf
    input datafile file number=00002 name=/oracle/TESTDB/data01/TESTDB_sysaux_01.dbf
    input datafile file number=00025 name=/oracle/export_files/TESTDB_users_02.dbf
    channel ORA_SBT_TAPE_3: starting piece 1 at 03-MAR-10
    channel ORA_SBT_TAPE_3: finished piece 1 at 03-MAR-10
    piece handle=5ul7ltsd_1_1 tag=TAG20100303T204356 comment=API Version 2.0,MMS Version 5.5.1.0
    channel ORA_SBT_TAPE_3: backup set complete, elapsed time: 00:05:15
    channel ORA_SBT_TAPE_2: finished piece 1 at 03-MAR-10
    piece handle=5tl7ltsd_1_1 tag=TAG20100303T204356 comment=API Version 2.0,MMS Version 5.5.1.0
    channel ORA_SBT_TAPE_2: backup set complete, elapsed time: 00:06:56
    channel ORA_SBT_TAPE_1: finished piece 1 at 03-MAR-10
    piece handle=5sl7ltsd_1_1 tag=TAG20100303T204356 comment=API Version 2.0,MMS Version 5.5.1.0
    channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:08:16
    Finished backup at 03-MAR-10
    Starting Control File and SPFILE Autobackup at 03-MAR-10
    piece handle=c-2109934325-20100303-0c comment=API Version 2.0,MMS Version 5.5.1.0
    Finished Control File and SPFILE Autobackup at 03-MAR-10
    Question: by looking at the output, how can I tell that RMAN actually did a logical check for corruption? This output looks the same as a simple backup without the logical corruption check. Please advise how to verify this.
    Thanks!

    Hi,
    I think you won't see any summary of this; corruption details appear only when corruption is found.
    There is also one related setting that can be used here; see this example:
    Example 2-25 Specifying Corruption Tolerance for Datafile Backups
    This example assumes a database that contains 5 datafiles. It uses the SET MAXCORRUPT command to indicate that no more than 1 corruption should be tolerated in each datafile. Because the CHECK LOGICAL option is specified on the BACKUP command, RMAN checks for both physical and logical corruption.
    RUN
    {
      SET MAXCORRUPT FOR DATAFILE 1,2,3,4,5 TO 1;
      BACKUP CHECK LOGICAL DATABASE;
    }
    use this to see clear output:
    -- Check for physical corruption of all database files.
         VALIDATE DATABASE;
    -- Check for physical and logical corruption of a tablespace.
         VALIDATE CHECK LOGICAL TABLESPACE USERS;
    e.g.
    List of Datafiles
    =================
    File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
    ---- ------ -------------- ------------ --------------- --------
    1    FAILED 0              3536         57600           637711
      File Name: /disk1/oradata/prod/system01.dbf
      Block Type Blocks Failing Blocks Processed
      ---------- -------------- ----------------
      Data       1              41876
      Index      0              7721
      Other      0              4467
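    To confirm that the logical check ran and found nothing, a simple follow-up query (a sketch; the view stays empty when the CHECK LOGICAL backup detected no corruption):
    SQL> SELECT file#, block#, blocks, corruption_type
         FROM   v$database_block_corruption;
    no rows selected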

  • Ocrcheck shows "Logical corruption check failed"

    Hi, I have a strange issue, that I am not sure how to recover from...
    In a random 'ocrcheck' we found the above 'logical corruption'. In the CRS_HOME/log/nodename/client/ I found the previous ocrcheck was done a month earlier and was successful. So, something in the last month caused a logical corruption. The cluster is functioning ok currently.
    So, I tried doing an ocrdump on some backups we have and I am receiving the following error -
    #ocrdump -backupfile backup00.ocr <<< any backup I try for the past month
    PROT-306: Failed to retrieve cluster registry data
    This error occurs even on the backup file taken just prior to the successful ocrcheck from a month earlier. The log for this ocrdump shows:
    cat ocrdump_6494.log
    Oracle Database 11g CRS Release 11.1.0.7.0 - Production Copyright 1996, 2007 Oracle. All rights reserved.
    2010-08-18 12:57:17.024: [ OCRDUMP][2813008768]ocrdump starts...
    2010-08-18 12:57:17.038: [  OCROSD][2813008768]utread:3: Problem reading buffer 7473000 buflen 4096 retval 0 phy_offset 15982592 retry 0
    2010-08-18 12:57:17.038: [  OCROSD][2813008768]utread:4: Problem reading the buffer errno 2 errstring No such file or directory
    2010-08-18 12:57:17.038: [  OCRRAW][2813008768]gst: Dev/Page/Block [0/3870/3927] is CORRUPT (header)
    2010-08-18 12:57:17.039: [  OCRRAW][2813008768]rbkp:2: could not read the free list
    2010-08-18 12:57:17.039: [  OCRRAW][2813008768]gst:could not read fcl page 1
    2010-08-18 12:57:17.039: [  OCRRAW][2813008768]rbkp:2: could not read the free list
    2010-08-18 12:57:17.039: [  OCRRAW][2813008768]gst:could not read fcl page 2
    2010-08-18 12:57:17.039: [  OCRRAW][2813008768]fkce:2: problem reading the tnode 131072
    2010-08-18 12:57:17.039: [  OCRRAW][2813008768]propropen: Failed in finding key comp entry [26]
    2010-08-18 12:57:17.039: [ OCRDUMP][2813008768]Failed to open key handle for key name [SYSTEM] [PROC-26: Error while accessing the physical storage]
    2010-08-18 12:57:17.039: [ OCRDUMP][2813008768]Failure when trying to traverse ROOTKEY [SYSTEM]
    2010-08-18 12:57:17.039: [ OCRDUMP][2813008768]Exiting [status=success]...
    NOTE: an 'ocrdump' of the active ocr does work and creates the ocrdumpfile
    The corruption in the ocr seems to be two keynames pointing to the same block.
    Oracle Database 11g CRS Release 11.1.0.7.0 - Production Copyright 1996, 2007 Oracle. All rights reserved.
    2010-08-18 13:22:54.095: [OCRCHECK][285084544]ocrcheck starts...
    2010-08-18 13:22:55.447: [OCRCHECK][285084544]protchcheck: OCR status : total = [262120], used = [15496], avail = [246624]
    2010-08-18 13:22:55.545: [OCRCHECK][285084544]LOGICAL CORRUPTION: current_keyname [SYSTEM.css.diskfile2], and keyname [SYSTEM.css.diskfile1.FILENAME] point to same block_number [3928]
    2010-08-18 13:22:55.732: [OCRCHECK][285084544]LOGICAL CORRUPTION: current_keyname [SYSTEM.OCR.MANUALBACKUP.ITEMS.0], and keyname [SYSTEM.css.diskfile1] point to same block_number [3927]
    2010-08-18 13:23:03.159: [OCRCHECK][285084544]Exiting [status=success]...
    Since one of the keynames refers to the votedisk, the votedisk does not appear correctly in a query:
    crsctl query css votedisk
    0. 0 /oracrsfiles/voting_disk_01
    1. 0
    2. 0 backup_20100818_103455.ocr <<<<this value changes if I issue a command that writes something to the ocr, in this case a manual backup.
    My DBA is opening an SR, but I am wondering if I can use 'ocrconfig -restore' if the backupfile I want to use cannot be 'ocrdump'd?
    Also, is anyone familiar with the 'ocrconfig -repair' as a possible solution?
    Although this is a development cluster (two nodes), rebuilding would be a disaster ;)
    Any help or thoughts would be much appreciated!

    Hi buddy,
    "My DBA is opening an SR" - Well... for corruption problems, no doubt it's better to work with the support team.
    "but I am wondering if I can use 'ocrconfig -restore' if the backupfile I want to use cannot be 'ocrdump'd?" - No, that is not the idea... if your backup is not good, it's not safe to restore it. ;)
    "Also, is anyone familiar with the 'ocrconfig -repair' as a possible solution?" - That option is for repairing nodes that were down while some configuration change (replacing the OCR, for example) was executed, so I guess it's not your case.
    Good luck!
    Cerreia
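    If it helps, a minimal sketch for listing the available OCR backups and testing whether they can be read before attempting any restore (run as root; the backup path is illustrative):
    # ocrconfig -showbackup                                  # lists automatic and manual OCR backups
    # ocrdump -backupfile /u01/crs/cdata/crs/backup00.ocr    # a dump that succeeds is a restore candidate
    # ocrcheck                                               # re-run after any restore to verify the logical check passes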

  • Logically corrupted blocks in standby

    Hi
    Assume I have a primary database and standby database.
    Accidentally, some of the objects (indexes and tables) were created in NOLOGGING mode in the primary database.
    Force logging is not set.
    When I scan the datafiles on the standby, I see that some datafiles are logically corrupted because of this.
    How can I get rid of these corrupted blocks?
    If I rebuild the indexes with the LOGGING option and recreate the tables as LOGGING,
    will that solve the problem? Or is there any other suggestion?
    Many thanks

    Sivok wrote:
    Hi,
    Assume I have a primary database and a standby database.
    Accidentally, some of the objects (indexes and tables) were created in NOLOGGING mode in the primary database.
    Force logging is not set.
    When I scan the datafiles on the standby, I see that some datafiles are logically corrupted because of this.
    How can I get rid of these corrupted blocks?
    If I rebuild the indexes with the LOGGING option and recreate the tables as LOGGING, will that solve the problem? Or is there any other suggestion?
    Many thanks

    Your primary should run in force logging mode (ALTER DATABASE FORCE LOGGING); the object-level setting is then ignored for direct path operations. You can apply an incremental backup to the standby to catch up (or just recreate the standby, which might be as quick, depending on volumes); a sketch follows below.
    Niall Litchfield
    http://www.orawin.info/
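    A rough sketch of the "incremental backup to catch up" approach mentioned above (the SCN placeholder, paths and the NOREDO recovery step are illustrative; check the standby roll-forward procedure in the docs for your exact version):
    -- On the primary: prevent further NOLOGGING damage
    SQL>  ALTER DATABASE FORCE LOGGING;
    -- On the standby: note the current SCN, e.g. SELECT current_scn FROM v$database;
    -- On the primary: take an incremental backup starting from that SCN
    RMAN> BACKUP INCREMENTAL FROM SCN <standby_current_scn> DATABASE FORMAT '/backup/roll_fwd_%U';
    -- Copy the pieces to the standby host, then on the standby:
    RMAN> CATALOG START WITH '/backup/roll_fwd_';
    RMAN> RECOVER DATABASE NOREDO;
    -- Finally restart managed recovery on the standby.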

  • Logical corruption of block

    Dear Experts
    Can you please help me understand logical block corruption in detail?
    Thanks
    Asif Husain Khan

    I wrote a short note about it on my blog; you may want to read it:
    http://blog.aristadba.com/?p=109
    HTH
    Aman....

  • Logic corrupts DS_Store file?

    I have some really strange behavior on my machine:
    I upgraded today from 10.4.1 to 10.4.2 and everything went smoothly. After the install and the permission repair, Logic 7.1 launched with no problem and everything seemed to be fine. A little later, when I did some work in the Finder, I noticed that I couldn't open some folders. I got the spinning beach ball and the system seemed to be locked up, even though there was no significant CPU, disk or network activity. If I was lucky, I could open the Force Quit window and relaunch the Finder. After some testing I found that the hidden ".DS_Store" file seemed to be corrupted. Whenever I encountered a folder I couldn't open, I deleted its .DS_Store file in the Terminal and the folder opened again.
    After I deleted all those files from the problematic folders, everything seemed to be stable again. I continued to work with Logic and found later that some folders had the same problem again. After a tedious process of trial and error, I found that all the folders showing the open-folder-spinning-ball issue were folders containing Logic files:
    ~/Library/Application Support
    ~/Library/Preferences
    ~/Library/Caches
    This is my conclusion so far: either Logic writes a file (most likely .plist files) to a folder and corrupts the .DS_Store file, or the Finder doesn't update the .DS_Store file with the new information from Logic.
    Maybe something went wrong during the OSX 10.4.2 upgrade? I did the upgrade on a second G5 (same Dual 2.5GHz with similar configuration and Logic 7.1) a few weeks ago when it came out and I don't have any problem there.
    Anyone with a clue what is happening here?
    Thanks

    AJD1170 wrote:
    A .DS_Store file keeps appearing on my desktop. It returns seconds after I have moved it to the trash.  I have deleted the .DS_Store files (hidden) from wherever I have found them, but like the desktop icon, they immediately reappear.  I some time ago stumbled upon the solution, but it evades me now. How do I get rid of this nuisance permanently?
    You want and need those .DS_Store files.
    You need to hide hidden files in terminal,  copy and paste:
    defaults write com.apple.finder AppleShowAllFiles FALSE ;killall Finder
    Finder will create a .DS_Store file in every folder that it accesses, including your Desktop folder. You witness this every time you try to delete it.

  • Data block corrupted on standby database (logical corruption)

    Hi all,
    we are getting the error below on our DR site; it is a manually managed physical standby database.
    The following error has occurred:
    ORA-01578: ORACLE data block corrupted (file # 3, block # 3236947)
    ORA-01110: data file 3: '/bkp/oradata/orcl_raw_cadata01'
    ORA-26040: Data block was loaded using the NOLOGGING option
    I have checked on the primary database, and there are some objects whose changes are not being logged to the redo logfiles:
    SQL> select table_name, index_name, logging from dba_indexes where logging = 'NO';
    TABLE_NAME              INDEX_NAME                    LOG
    ----------------------  ----------------------------  ---
    MENU_MENUS              NUX_MENU_MENUS_01             NO
    MENU_USER_MENUS         MENU_USER_MENUS_X             NO
    OM_CITY                 IDM_OM_CITY_CITY_NAME         NO
    OM_EMPLOYER             EMPLR_CODE_PK                 NO
    OM_EMPLOYER             IDM_EMPLR_EMPLR_NAME          NO
    OM_STUDENT_HEAD         OM_STUDENT_HEAD_HEAD_UK01     NO
    OT_DAK_ENTRY_DETL       DED_SYS_ID_PK                 NO
    OT_DAK_ENTRY_HEAD       DEH_SYS_ID_PK                 NO
    OT_DAK_ENTRY_HEAD       IDM_DEH_DT_APPL_REGION        NO
    OT_DAK_ENTRY_HEAD       IDM_DEH_REGION_CODE           NO
    OT_DAK_REFUNDS_DETL     DRD_SYS_ID_PK                 NO
    OT_MEM_FEE_COL_DETL     IDM_MFCD_MFCH_SYS_ID          NO
    OM_STUDENT_HEAD         IDM_STUD_COURSE               NO
    13 rows selected.
    So the main problem is with the OM_EMPLOYER table. If I drop the indexes on that table, recreate them with the LOGGING clause, and then apply the archived logs to the DR site, will the problem be resolved?
    Please suggest.

    Hi..
    Firstly, how did you confirm that it is only that index? Can you post the output of
    SELECT tablespace_name, segment_type, owner, segment_name
    FROM dba_extents
    WHERE file_id = 3 AND 3236947 BETWEEN block_id AND block_id + blocks - 1;
    This query can take time; if you are sure it is that index, don't run it.
    Secondly, when you drop and recreate the index on the primary, the operation is logged in the redo logfile. That information also ends up in the archived log, since it is a replica of the redo. When you apply that archived log on the standby, it will drop the index and recreate it there using the same SQL.
    HTH
    Anand
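    A possible sequence on the primary, sketched with the index names from the listing above (confirm the affected segment first; rebuilding with LOGGING regenerates the index blocks with full redo, so the standby gets valid blocks once the logs are applied):
    SQL> ALTER DATABASE FORCE LOGGING;                              -- stop further NOLOGGING damage
    SQL> ALTER INDEX <owner>.IDM_EMPLR_EMPLR_NAME REBUILD LOGGING;
    SQL> ALTER INDEX <owner>.EMPLR_CODE_PK REBUILD LOGGING;
    -- then let the archived logs ship to the DR site, apply them, and re-check the affected blocks there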

  • How to logically corrupt a block

    Sir,
    I know different ways to check whether a block is corrupted,
    but I can only test them if some block actually is corrupted.
    How would you make a block corrupt? I would be very grateful if you could give an
    example of block corruption.

    Hi Sheila M
    I need to find a way to extract lists of email addresses from the Junk mail folder and add them en masse to blocked senders, without having to add them to my blocked senders one by one.
    I have had the same email address for too long to change it, but I receive huge amounts of junk mail. I have so far barred nearly 2000 senders and have 1100 sitting in my junk folder from the last week. I barred them one by one using the RULES function... I simply cannot keep doing this.
    I have a MacBook Pro running Mavericks, plus an iPad and an iPhone 5.
    The problem is aggravated by the fact that Apple has a weakness in Mail: I may bar thousands of senders on my Mac, but the filter does not sync the barred addresses to my iPad or iPhone, so if I use Mail on either, I get many thousands of emails at once and have to mark them one by one to delete. It's ridiculous.
    To solve this problem, what I have decided to do is block all the senders in webmail.
    However, I still have the same problem of extracting multiple addresses from my junk mail folder in order to copy them into the blocked senders list.
    The Genius Bar said it can't be done. I think they're wrong. There simply has to be (a) a location on my Mac where I can find a list of all those I have already barred; (b) a way to extract email addresses from groups of received mail.
    I'd be grateful for some advice, please.

  • BACKUP VALIDATE vs VALIDATE in checking logical/physical corruption

    Hello all,
    I am checking whether our 10gR2 database has any physical or logical corruption. I have read in some places that the VALIDATE command is enough to check a database for physical corruption. Our database was never backed up by RMAN before. Are any configuration settings needed for running the BACKUP VALIDATE command? The reason I am asking is that the plain VALIDATE command returns an error, while the BACKUP VALIDATE command runs without error but does not show the
    "File Status Marked Corrupt Empty Blocks Blocks Examined High SCN" lines.
    I used the command in different formats and none of them show individual datafile statuses:
    RMAN> run {
    CONFIGURE DEFAULT DEVICE TYPE TO DISK;
    CONFIGURE DEVICE TYPE DISK PARALLELISM 10 BACKUP TYPE TO BACKUPSET;
    BACKUP VALIDATE CHECK LOGICAL DATABASE FILESPERSET=10;
    }
    RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;
    RMAN> VALIDATE DATABASE;
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-00558: error encountered while parsing input commands
    RMAN-01009: syntax error: found "database": expecting one of: "backupset"
    RMAN-01007: at line 1 column 10 file: standard input
    However on a different database already being backed up by RMAN daily, BACKUP VALIDATE output shows list of datafiles and STATUS = OK as below:
    List of Datafiles
    =================
    File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
    How can we check the status of every individual datafile? Appreciate your responses. Thanks.

    Hi,
    After you have run:
    BACKUP VALIDATE CHECK LOGICAL DATABASE;
    you can use SQL*Plus and run:
    select * from v$database_block_corruption;
    The output will tell you which block in which datafile is corrupt.
    Regards,
    Tycho
    Edited by: tychos on 8-sep-2011 18:34
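    To get something close to a per-datafile status on 10gR2 (where the standalone VALIDATE DATABASE syntax and its "List of Datafiles" summary are not available), one option is to join the corruption view to V$DATAFILE after the BACKUP VALIDATE run; a sketch:
    SELECT d.file#, d.name, COUNT(c.block#) AS corrupt_blocks
    FROM   v$datafile d
           LEFT JOIN v$database_block_corruption c ON c.file# = d.file#
    GROUP  BY d.file#, d.name
    ORDER  BY d.file#;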

  • How to find physical corruption

    Hi,
    I used the backup validate command to find physical corruption in my database, as follows:
    backup validate check logical database;
    After this command, I queried v$database_block_corruption:
    12:04:47 SQL> select * from v$database_block_corruption;
         FILE#     BLOCK#     BLOCKS CORRUPTION_CHANGE# CORRUPTIO
             1      11477          2          228760588 LOGICAL
             1      11514          1          228760329 LOGICAL
    12:05:09 SQL>
    How can I find out whether this is physical corruption or logical corruption?
    I also used the dbv command to check my SYSTEM datafile (file# 1), and the output is as follows:
    D:\>dbv file=D:\ORACLE\ORADATA\OPERA\SYSTEM01.DBf
    DBVERIFY: Release 10.2.0.4.0 - Production on Mon Nov 18 11:56:23 2013
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    DBVERIFY - Verification starting : FILE = D:\ORACLE\ORADATA\OPERA\SYSTEM01.DBf
    DBV-00200: Block, DBA 4205781, already marked corrupt
    DBV-00200: Block, DBA 4205782, already marked corrupt
    DBV-00200: Block, DBA 4205818, already marked corrupt
    DBVERIFY - Verification complete
    Total Pages Examined         : 168960
    Total Pages Processed (Data) : 127180
    Total Pages Failing   (Data) : 0
    Total Pages Processed (Index): 25248
    Total Pages Failing   (Index): 0
    Total Pages Processed (Other): 2440
    Total Pages Processed (Seg)  : 0
    Total Pages Failing   (Seg)  : 0
    Total Pages Empty            : 14092
    Total Pages Marked Corrupt   : 3
    Total Pages Influx           : 0
    Highest block SCN            : 245793757 (0.245793757)
    dbv always checks for physical corruption, but how can I make sure, using RMAN, whether this is physical corruption or only logical corruption? Thanks.
    The alert log says something like the following:
    Sat Sep 14 00:22:50 2013
    Errors in file d:\oracle\admin\opera\bdump\opera_p008_3916.trc:
    ORA-01578: ORACLE data block corrupted (file # 1, block # 11478)
    ORA-01110: data file 1: 'D:\ORACLE\ORADATA\OPERA\SYSTEM01.DBF'
    ORA-10564: tablespace SYSTEM
    ORA-01110: data file 1: 'D:\ORACLE\ORADATA\OPERA\SYSTEM01.DBF'
    ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 5121
    ORA-00607: Internal error occurred while making a change to a data block
    ORA-00600: internal error code, arguments: [kddummy_blkchk], [1], [11478], [6101], [], [], [], []
    SAQ

    Hi,
    DBVerify reports both physical and logical intra-block corruption by default. A good place to start is the MOS note below, which explains them well:
    Physical and Logical Block Corruptions. All you wanted to know about it. (Doc ID 840978.1)
    Basically, with physical corruption you will not be able to read data from the corrupt blocks at all, while with logical corruption you may still be able to read the data.
    Also, DBVerify reports DBV-200/201 errors for logical corruption.
    Hth
    Abhishek
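    To see which segment owns the blocks reported in your own output above (file 1, block 11477), a lookup like this can help; note that it may run for a while on a large database:
    SELECT owner, segment_name, segment_type
    FROM   dba_extents
    WHERE  file_id = 1
    AND    11477 BETWEEN block_id AND block_id + blocks - 1;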

  • 11g Block Corruption Question..

    Hello All,
    I am running 11g (11.1.0.7) 64-bit on a Red Hat 5.3 host. I did an RMAN clone/copy of this database to another Red Hat host with the same file structure and DB version, identical to production. I am using 11g Enterprise Manager Grid Control to manage this newly cloned host. When I try to perform a full backup using the Grid Control GUI, it tells me that I have block corruption in datafile #7 and lists all the blocks. Now, I know for sure that my production host (where the clone was made from) does not have any corrupt blocks of any sort. I also ran 'validate database' and 'validate datafile 7' on the newly created host and everything reports back OK. The full backup completes successfully, but where is it getting this corrupt datafile 7 from? Can I clear this block corruption list somehow? Both production and the newly created clone are free of any block corruption. When I execute an RMAN> backup database command on the newly created host, it runs with no mention of any block corruption, but when I perform a full backup with the GUI, it warns me of block corruption, yet lets the full backup continue and complete. Is this a GUI issue with 11g Enterprise Manager Grid Control?
    Please advise..
    Thanks,
    RB

    It is not mandatory that a block-corruption message from the GUI is actually true. Since you are using RMAN, if it can't find a buffer to read in a few tries (3, I believe), it marks the buffer as "corrupted", but that's NOT real corruption. As long as you don't see ORA-1578 in the alert log repeatedly, you don't have corruption. What more you can do is run VALIDATE ... CHECK LOGICAL to confirm whether any logical corruption is detected or not. Other than that, IMO you are fine.
    Aman....
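    A quick way to double-check the GUI warning directly on the clone (11g syntax; datafile 7 is the file the GUI complained about):
    RMAN> VALIDATE CHECK LOGICAL DATAFILE 7;
    SQL>  SELECT * FROM v$database_block_corruption;   -- should return no rows if the file is clean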

  • Checking block corruption, why in alert it is saying Error in trace file

    Hi,
    I am using Oracle 10g (10.2.0.1) on 32-bit Linux.
    I wanted to check for block corruption using RMAN with the following statement:
    backup validate check logical database;
    When I executed the statement, the output was as follows:
    Starting backup at 09-MAY-08
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=91 devtype=DISK
    channel ORA_DISK_1: starting full datafile backupset
    channel ORA_DISK_1: specifying datafile(s) in backupset
    input datafile fno=00004 name=/u01/app/oracle/oradata/test/users01.dbf
    input datafile fno=00008 name=/u01/app/oracle/oradata/test/workflowuser
    input datafile fno=00001 name=/u01/app/oracle/oradata/test/system01.dbf
    input datafile fno=00003 name=/u01/app/oracle/oradata/test/sysaux01.dbf
    input datafile fno=00010 name=/u01/app/oracle/oradata/test/ifan
    input datafile fno=00002 name=/u01/app/oracle/oradata/test/undotbs01.dbf
    input datafile fno=00007 name=/u01/app/oracle/oradata/test/taker
    input datafile fno=00009 name=/u01/app/oracle/oradata/test/testing1
    input datafile fno=00005 name=/u01/app/oracle/oradata/test/brokerdb
    input datafile fno=00006 name=/u01/app/oracle/oradata/test/moneio
    input datafile fno=00011 name=/u01/app/oracle/oradata/test/web1
    input datafile fno=00012 name=/u01/app/oracle/oradata/test/e1
    input datafile fno=00013 name=/u01/app/oracle/oradata/test/ind1
    channel ORA_DISK_1: backup set complete, elapsed time: 00:06:57
    channel ORA_DISK_1: starting full datafile backupset
    channel ORA_DISK_1: specifying datafile(s) in backupset
    including current control file in backupset
    including current SPFILE in backupset
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
    Finished backup at 09-MAY-08
    And when I run the following query in SQL*Plus:
    select * from v$database_block_corruption;
    no rows are returned,
    so it means there is no logical corruption. But when I look at the alert log file, it shows the following lines:
    Fri May 9 10:14:04 2008
    Errors in file /u01/app/oracle/admin/test/udump/test_ora_6606.trc:
    Fri May 9 10:14:04 2008
    Errors in file /u01/app/oracle/admin/test/udump/test_ora_6606.trc:
    Fri May 9 10:14:04 2008
    Errors in file /u01/app/oracle/admin/test/udump/test_ora_6606.trc
    and in above trace file following contents
    /u01/app/oracle/admin/test4/udump/test_ora_6606.trc
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1
    System name: Linux
    Node name: test
    Release: 2.6.18-5-686
    Version: #1 SMP Wed Oct 3 00:12:50 UTC 2007
    Machine: i686
    Instance name: test
    Redo thread mounted by this instance: 1
    Oracle process number: 61
    Unix process pid: 6606, image: oracle@test (TNS V1-V3)
    *** 2008-05-09 10:14:04.093
    *** ACTION NAME:(0000040 STARTED19) 2008-05-09 10:14:04.071
    *** MODULE NAME:(backup full datafile) 2008-05-09 10:14:04.071
    *** SERVICE NAME:(SYS$USERS) 2008-05-09 10:14:04.071
    *** SESSION ID:(91.40318) 2008-05-09 10:14:04.071
    Is this normal? Why does the alert log say "Errors in ..."?
    Also, it did not create any backup set in the flash recovery area folder, yet Enterprise Manager shows the last backup as 09-May-2008. Why?
    Regards,

    Logical corruption is normally the term for an internal inconsistency within a block that is caused not by Oracle but by the user. If you find an internal inconsistency, the best option is to go to the user and have the values sorted out. If the internal inconsistency is something like index fragmentation or an index entry pointing to a null rowid, it is termed logical corruption; it should not impact normal operation, as the data is already there and there is no issue reading the block as such.
    The term "corrupted blocks" I would use, both for tables and backups, for data blocks that are unreadable by Oracle, which is actually physical corruption.
    "If I am doing the checking at 2:00 am, does it take more than two hours?" I didn't understand this part.
    "What can we do for physical corruption?" That needs the block to be recovered with the BLOCKRECOVER command of RMAN and a good backup. Read about it here:
    http://download.oracle.com/docs/cd/B19306_01/backup.102/b14194/rcmsynta010.htm
    About the Logical and Physical corruption checks , check here
    http://download-west.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1012.htm
    Aman....
