Recovery is repairing media corrupt block x of file x in standby alert log

Hi,
Oracle version: 8.1.7.0.0
OS version: Solaris 5.9
We have an Oracle 8i primary and standby database. I am getting the following errors in the standby alert log file:
Thu Aug 28 22:48:12 2008
Media Recovery Log /oratranslog/arch_1_1827391.arc
Thu Aug 28 22:50:42 2008
Media Recovery Log /oratranslog/arch_1_1827392.arc
bash-2.05$ tail -f alert_pindb.log
Recovery is repairing media corrupt block 991886 of file 179
Recovery is repairing media corrupt block 70257 of file 184
Recovery is repairing media corrupt block 70258 of file 184
Recovery is repairing media corrupt block 70259 of file 184
Recovery is repairing media corrupt block 70260 of file 184
Recovery is repairing media corrupt block 70261 of file 184
Thu Aug 28 22:48:12 2008
Media Recovery Log /oratranslog/arch_1_1827391.arc
Thu Aug 28 22:50:42 2008
Media Recovery Log /oratranslog/arch_1_1827392.arc
Recovery is repairing media corrupt block 500027 of file 181
Recovery is repairing media corrupt block 500028 of file 181
Recovery is repairing media corrupt block 500029 of file 181
Recovery is repairing media corrupt block 500030 of file 181
Recovery is repairing media corrupt block 500031 of file 181
Recovery is repairing media corrupt block 991837 of file 179
Recovery is repairing media corrupt block 991838 of file 179
How can I resolve this?
Thanks
Prakash
Edited by: user612485 on Aug 28, 2008 10:53 AM

Dear satish kandi,
We recently created an index on one table with the NOLOGGING option; I think that is the reason I am getting this error.
If I run the dbv utility on the files shown in the alert log file, I get the following results.
bash-2.05$ dbv file=/oracle15/oradata/pindb/pinx055.dbf blocksize=4096
DBVERIFY: Release 8.1.7.0.0 - Production on Fri Aug 29 12:18:27 2008
(c) Copyright 2000 Oracle Corporation. All rights reserved.
DBVERIFY - Verification starting : FILE = /oracle15/oradata/pindb/pinx053.dbf
Block Checking: DBA = 751593895, Block Type =
Found block already marked corrupted
Block Checking: DBA = 751593896, Block Type =
.DBVERIFY - Verification complete
Total Pages Examined : 1048576
Total Pages Processed (Data) : 0
Total Pages Failing (Data) : 0
Total Pages Processed (Index): 1036952
Total Pages Failing (Index): 0
Total Pages Processed (Other): 7342
Total Pages Empty : 4282
Total Pages Marked Corrupt : 0
Total Pages Influx : 0
bash-2.05$ dbv file=/oracle15/oradata/pindb/pinx053.dbf blocksize=4096
DBVERIFY: Release 8.1.7.0.0 - Production on Fri Aug 29 12:23:12 2008
(c) Copyright 2000 Oracle Corporation. All rights reserved.
DBVERIFY - Verification starting : FILE = /oracle15/oradata/pindb/pinx054.dbf
Block Checking: DBA = 759492966, Block Type =
Found block already marked corrupted
Block Checking: DBA = 759492967, Block Type =
Found block already marked corrupted
Block Checking: DBA = 759492968, Block Type =
.DBVERIFY - Verification complete
Total Pages Examined : 1048576
Total Pages Processed (Data) : 0
Total Pages Failing (Data) : 0
Total Pages Processed (Index): 585068
Total Pages Failing (Index): 0
Total Pages Processed (Other): 8709
Total Pages Empty : 454799
Total Pages Marked Corrupt : 0
Total Pages Influx : 0
bash-2.05$ dbv file=/oracle15/oradata/pindb/pinx054.dbf blocksize=4096
DBVERIFY: Release 8.1.7.0.0 - Production on Fri Aug 29 12:32:28 2008
(c) Copyright 2000 Oracle Corporation. All rights reserved.
DBVERIFY - Verification starting : FILE = /oracle15/oradata/pindb/pinx055.dbf
Block Checking: DBA = 771822208, Block Type =
Found block already marked corrupted
Block Checking: DBA = 771822209, Block Type =
Found block already marked corrupted
Block Checking: DBA = 771822210, Block Type =
.DBVERIFY - Verification complete
Total Pages Examined : 1048576
Total Pages Processed (Data) : 0
Total Pages Failing (Data) : 0
Total Pages Processed (Index): 157125
Total Pages Failing (Index): 0
Total Pages Processed (Other): 4203
Total Pages Empty : 887248
Total Pages Marked Corrupt : 0
Total Pages Influx : 0
My doubts are:
1. If I drop the index and recreate it with the LOGGING option, will this error stop appearing in the alert log file?
2. If I activate the standby database in the future, will the database open without any errors?
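A hedged sketch of how the affected segment could be identified on the primary and rebuilt with LOGGING (file 179 and block 991886 are taken from the alert log above; the owner and index name are placeholders, not known from the post):
SQL> SELECT owner, segment_name, segment_type FROM dba_extents WHERE file_id = 179 AND 991886 BETWEEN block_id AND block_id + blocks - 1;
SQL> ALTER INDEX <owner>.<index_name> REBUILD LOGGING;
The rebuild generates full redo, so the standby receives complete block images for the new index blocks; blocks left marked corrupt in free space are typically cleared by refreshing the affected standby datafiles from a fresh primary backup.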
Thanks
Prakash

Similar Messages

  • Repair a Corrupt PSD (Photoshop CS4) file on a mac

    I am on a Mac running 10.6.3 and using CS4.  Unfortunately I recently had a major system failure and had to wipe my drive and reinstall.  I was able to recover my data using data rescue 3, but a few of my Photoshop files won't open.  Photoshop tells me that they are not compatible with my version of photoshop, but of course, that is not true.
I assume they are corrupt and have been looking for a PSD file repair application, but can't seem to find anything for a Mac.
    Anyone have any ideas?  I need the files for a client/deadline and am desperate.  After repairing my system I have no time to redo the files.
    Thanks in advance.
    Sarah
    PS I have already re-installed from a freshly installed system

    Corrupted files may produce any number of errors, including that they are not compatible with your version of Photoshop. They may not be compatible with any version of Photoshop now that they have been corrupted.
    When you have spare time after being fired/dropped by your client, invest in a spare hard drive or two. Use at least one drive as a clone of your working system so that you have to only waste 10 minutes to switch over the next time your computer poops. You can use that clone drive or another spare drive to hold copies of your documents. Your client work should not be stored in only one place. Meet 'TimeMachine'. Apple did not include that in your OS for nothing.
    Sarah Meikle wrote:
    ...had to wipe my drive and reinstall
    Any sort of file recovery tool is out of the question now. You blew it when you reinstalled the system.

  • Data corrupt block

OS: Sun 5.10; Oracle version 10.2.0.2, 2-node RAC
alert.log contents:
    Hex dump of (file 206, block 393208) in trace file /oracle/app/oracle/admin/DBPGIC/udump/dbpgic1_ora_1424.trc
    Corrupt block relative dba: 0x3385fff8 (file 206, block 393208)
    Bad header found during backing up datafile
    Data in bad block:
    type: 32 format: 0 rdba: 0x00000001
    last change scn: 0x0000.98b00394 seq: 0x0 flg: 0x00
    spare1: 0x1 spare2: 0x27 spare3: 0x2
    consistency value in tail: 0x00000001
    check value in block header: 0x0
    block checksum disabled
    Reread of blocknum=393208, file=/dev/md/vg_rac06/rdsk/d119. found same corrupt data
    Reread of blocknum=393208, file=/dev/md/vg_rac06/rdsk/d119. found same corrupt data
    Reread of blocknum=393208, file=/dev/md/vg_rac06/rdsk/d119. found same corrupt data
    Reread of blocknum=393208, file=/dev/md/vg_rac06/rdsk/d119. found same corrupt data
    Reread of blocknum=393208, file=/dev/md/vg_rac06/rdsk/d119. found same corrupt data
When I search for the block id where the corruption occurred, the block id cannot be found.
I searched dba_extents.
I wonder whether the block id cannot be found because of the corruption.
If I run an export, the data exports normally.

That is fortunate. It looks like the block corruption did not occur in a block that stores data. It also appears you found it through an RMAN backup; is that correct?
Since the SCN is 0x0000.98b00394 rather than 0x0000.00000000, this looks like a soft corruption rather than a physical corruption.
In that case there is a good chance it is a bug, and a search turned up
Bug 4411228 - Block corruption with mixture of file system and RAW files
It may not be this one, though.
For handling and root-cause analysis of this kind of block corruption you should make a formal request to Oracle. Please open an SR through Metalink.
Export cannot detect block corruption above the high water mark, and it also misses the other cases listed below.
DBVERIFY (dbv) cannot detect physical corruption; it can only find soft block corruption.
In my experience there was a physical corruption where the datafile could not even be copied to /dev/null, yet dbv did not detect the problem.
So the best method is RMAN. RMAN backs up the data up to the high water mark while also checking the entire datafile. Since it checks for logical corruption as well as physical corruption, I think RMAN is the best way to verify.
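A hedged sketch of using RMAN purely as a checker, without keeping a backup (the CHECK LOGICAL clause adds logical checks on top of the physical ones; the blocks found are recorded in V$BACKUP_CORRUPTION / V$DATABASE_BLOCK_CORRUPTION):
RMAN> backup validate check logical database;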
    The Export Utility
    # Use a full export to check database consistency
    # Export performs a full scan for all tables
    # Export only reads:
    - User data below the high-water mark
    - Parts of the data dictionary, while looking up information concerning the objects being exported
    # Export does not detect the following:
    - Disk corruptions above the high-water mark
    - Index corruptions
    - Free or temporary extent corruptions
    - Column data corruption (like invalid date values)
The proper way to recover from block corruption is to restore and then recover, but the backup you would restore may already contain the block corruption. It is therefore best to restore it to another server first, confirm that the datafile is healthy, and only then restore it to the production environment.
If the backup also contains the block corruption, or if there is no time for this, it is probably best to move the data to another tablespace with a table MOVE TABLESPACE or an index REBUILD, then drop the problem tablespace and recreate it. (Since there is currently no data loss, the move tablespace / rebuild index approach looks like a good choice.)
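A minimal sketch of that move/rebuild approach (object and tablespace names are placeholders, not from the post):
SQL> ALTER TABLE app.some_table MOVE TABLESPACE good_ts;
SQL> ALTER INDEX app.some_index REBUILD TABLESPACE good_ts;
SQL> DROP TABLESPACE bad_ts INCLUDING CONTENTS AND DATAFILES;
Note that after a table MOVE its indexes become UNUSABLE and must be rebuilt as well.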
    Handling Corruptions
    Check the alert file and system log file
    Use diagnostic tools to determine the type of corruption
    Dump blocks to find out what is wrong
    Determine whether the error persists by running checks multiple times
    Recover data from the corrupted object if necessary
    Preferred resolution method: media recovery
    Handling Corruptions
    Always try to find out if the error is permanent. Run the analyze command multiple times or, if possible, perform a shutdown and a startup and try again to perform the operation that failed earlier.
    Find out whether there are more corruptions. If you encounter one, there may be other corrupted blocks, as well. Use tools like DBVERIFY for this.
    Before you try to salvage the data, perform a block dump as evidence to identify the actual cause of the corruption.
    Make a hex dump of the bad block, using UNIX dd and od -x.
    Consider performing a redo log dump to check all the changes that were made to the block so that you can discover when the corruption occurred.
    Note: Remember that when you have a block corruption, performing media recovery is the recommended process after the hardware is verified.
    Resolve any hardware issues:
    - Memory boards
    - Disk controllers
    - Disks
    Recover or restore data from the corrupt object if necessary
    Handling Corruptions (continued)
    There is no point in continuing to work if there are hardware failures. When you encounter hardware problems, the vendor should be contacted and the machine should be checked and fixed before continuing. A full hardware diagnostics should be run.
    Many types of hardware failures are possible:
    Bad I/O hardware or firmware
    Operating system I/O or caching problem
    Memory or paging problems
    Disk repair utilities
Some related material is below.
    All About Data Blocks Corruption in Oracle
    Vijaya R. Dumpa
    Data Block Overview:
    Oracle allocates logical database space for all data in a database. The units of database space allocation are data blocks (also called logical blocks, Oracle blocks, or pages), extents, and segments. The next level of logical database space is an extent. An extent is a specific number of contiguous data blocks allocated for storing a specific type of information. The level of logical database storage above an extent is called a segment. The high water mark is the boundary between used and unused space in a segment.
    The header contains general block information, such as the block address and the type of segment (for example, data, index, or rollback).
    Table Directory, this portion of the data block contains information about the table having rows in this block.
    Row Directory, this portion of the data block contains information about the actual rows in the block (including addresses for each row piece in the row data area).
    Free space is allocated for insertion of new rows and for updates to rows that require additional space.
    Row data, this portion of the data block contains rows in this block.
    Analyze the Table structure to identify block corruption:
    By analyzing the table structure and its associated objects, you can perform a detailed check of data blocks to identify block corruption:
SQL> ANALYZE TABLE|INDEX|CLUSTER <name> VALIDATE STRUCTURE CASCADE;
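For example, a minimal sketch against a hypothetical table and its index (the names are placeholders, not from the article):
SQL> ANALYZE TABLE scott.emp VALIDATE STRUCTURE CASCADE;
SQL> ANALYZE INDEX scott.pk_emp VALIDATE STRUCTURE;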
    Detecting data block corruption using the DBVERIFY Utility:
    DBVERIFY is an external command-line utility that performs a physical data structure integrity check on an offline database. It can be used against backup files and online files. Integrity checks are significantly faster if you run against an offline database.
    Restrictions:
    DBVERIFY checks are limited to cache-managed blocks. It’s only for use with datafiles, it will not work against control files or redo logs.
    The following example is sample output of verification for the data file system_ts_01.dbf. And its Start block is 9 and end block is 25. Blocksize parameter is required only if the file to be verified has a non-2kb block size. Logfile parameter specifies the file to which logging information should be written. The feedback parameter has been given the value 2 to display one dot on the screen for every 2 blocks processed.
    $ dbv file=system_ts_01.dbf start=9 end=25 blocksize=16384 logfile=dbvsys_ts.log feedback=2
    DBVERIFY: Release 8.1.7.3.0 - Production on Fri Sep 13 14:11:52 2002
    (c) Copyright 2000 Oracle Corporation. All rights reserved.
    Output:
    $ pg dbvsys_ts.log
    DBVERIFY: Release 8.1.7.3.0 - Production on Fri Sep 13 14:11:52 2002
    (c) Copyright 2000 Oracle Corporation. All rights reserved.
    DBVERIFY - Verification starting : FILE = system_ts_01.dbf
    DBVERIFY - Verification complete
    Total Pages Examined : 17
    Total Pages Processed (Data) : 10
    Total Pages Failing (Data) : 0
    Total Pages Processed (Index) : 2
    Total Pages Failing (Index) : 0
    Total Pages Processed (Other) : 5
    Total Pages Empty : 0
    Total Pages Marked Corrupt : 0
    Total Pages Influx : 0
    Detecting and reporting data block corruption using the DBMS_REPAIR package:
Note: this can only be used if the block "wrapper" is marked corrupt, e.g. if the block reports ORA-1578.
    1. Create DBMS_REPAIR administration tables:
    To Create Repair tables, run the below package.
SQL> EXEC DBMS_REPAIR.ADMIN_TABLES('REPAIR_ADMIN', 1, 1, 'REPAIR_TS');
Note that the table names are prefixed with 'REPAIR_' or 'ORPHAN_'. If the second variable is 1, it creates REPAIR_ tables; if it is 2, it creates ORPHAN_ tables.
If the third variable is
1, the package performs 'create' operations;
2, the package performs 'delete' operations;
3, the package performs 'drop' operations.
    2. Scanning a specific table or Index using the DBMS_REPAIR.CHECK_OBJECT procedure:
In the following example we check the table EMP, belonging to the schema TEST, for possible corruptions. Let's assume that we have created our administration table called REPAIR_ADMIN in schema SYS.
    To check the table block corruption use the following procedure:
SQL> VARIABLE A NUMBER;
SQL> EXEC DBMS_REPAIR.CHECK_OBJECT('TEST', 'EMP', NULL,
1, 'REPAIR_ADMIN', NULL, NULL, NULL, NULL, :A);
SQL> PRINT A;
    To check which block is corrupted, check in the REPAIR_ADMIN table.
    SQL> SELECT * FROM REPAIR_ADMIN;
3. Fixing corrupt blocks using the DBMS_REPAIR.FIX_CORRUPT_BLOCKS procedure:
SQL> VARIABLE A NUMBER;
SQL> EXEC DBMS_REPAIR.FIX_CORRUPT_BLOCKS('TEST', 'EMP', NULL,
1, 'REPAIR_ADMIN', NULL, :A);
SQL> SELECT MARKED FROM REPAIR_ADMIN;
If you select from the EMP table now, you still get the error ORA-1578.
4. Skipping corrupt blocks using the DBMS_REPAIR.SKIP_CORRUPT_BLOCKS procedure:
SQL> EXEC DBMS_REPAIR.SKIP_CORRUPT_BLOCKS('TEST', 'EMP', 1, 1);
Notice the result of running the DBMS_REPAIR tool: you have lost some data. One main advantage of this tool is that you can retrieve the data past the corrupted block; however, the data in the corrupted block itself is lost.
5. The DBMS_REPAIR.DUMP_ORPHAN_KEYS procedure is useful for identifying orphan keys in indexes that point to corrupt rows of the table:
SQL> EXEC DBMS_REPAIR.DUMP_ORPHAN_KEYS('TEST', 'IDX_EMP', NULL,
2, 'REPAIR_ADMIN', 'ORPHAN_ADMIN', NULL, :A);
If you see any records in the ORPHAN_ADMIN table, you have to drop and recreate the index to avoid inconsistencies in your queries.
6. The last thing to do while using the DBMS_REPAIR package is to run the DBMS_REPAIR.REBUILD_FREELISTS procedure to reinitialize the free list details in the data dictionary views.
SQL> EXEC DBMS_REPAIR.REBUILD_FREELISTS('TEST', 'EMP', NULL, 1);
    NOTE
    Setting events 10210, 10211, 10212, and 10225 can be done by adding the following line for each event in the init.ora file:
    Event = "event_number trace name errorstack forever, level 10"
    When event 10210 is set, the data blocks are checked for corruption by checking their integrity. Data blocks that don't match the format are marked as soft corrupt.
    When event 10211 is set, the index blocks are checked for corruption by checking their integrity. Index blocks that don't match the format are marked as soft corrupt.
    When event 10212 is set, the cluster blocks are checked for corruption by checking their integrity. Cluster blocks that don't match the format are marked as soft corrupt.
    When event 10225 is set, the fet$ and uset$ dictionary tables are checked for corruption by checking their integrity. Blocks that don't match the format are marked as soft corrupt.
    Set event 10231 in the init.ora file to cause Oracle to skip software- and media-corrupted blocks when performing full table scans:
    Event="10231 trace name context forever, level 10"
    Set event 10233 in the init.ora file to cause Oracle to skip software- and media-corrupted blocks when performing index range scans:
    Event="10233 trace name context forever, level 10"
To dump an Oracle block, you can use the command below from 8.x onwards:
    SQL> ALTER SYSTEM DUMP DATAFILE 11 block 9;
This command dumps data block 9 of datafile 11 into the USER_DUMP_DEST directory.
    Dumping Redo Logs file blocks:
SQL> ALTER SYSTEM DUMP LOGFILE '/usr/oracle8/product/admin/udump/rl.log';
Rollback segment block corruption will cause problems (ORA-1578) while starting up the database.
With the support of Oracle, you can use the undocumented parameter below to start up the database:
_CORRUPTED_ROLLBACK_SEGMENTS = (RBS_1, RBS_2)
DB_BLOCK_COMPUTE_CHECKSUM
This parameter is normally used to debug corruptions that happen on disk.
The following V$ views contain information about blocks marked logically corrupt:
V$BACKUP_CORRUPTION, V$COPY_CORRUPTION
When this parameter is set, while reading a block from disk into the cache, Oracle computes the checksum again and compares it with the value stored in the block.
If they differ, the block is corrupted on disk. Oracle marks the block as corrupt and signals an error. There is an overhead involved in setting this parameter.
DB_BLOCK_CACHE_PROTECT = TRUE
Oracle will catch stray writes made by processes in the buffer cache.
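A minimal init.ora sketch of the two settings discussed above (note this is an assumption on naming: in current releases the checksum parameter is DB_BLOCK_CHECKSUM, and DB_BLOCK_CACHE_PROTECT exists only in older versions):
db_block_checksum = true
db_block_cache_protect = true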
Oracle 9i new RMAN features:
    Obtain the datafile numbers and block numbers for the corrupted blocks. Typically, you obtain this output from the standard output, the alert.log, trace files, or a media management interface. For example, you may see the following in a trace file:
    ORA-01578: ORACLE data block corrupted (file # 9, block # 13)
    ORA-01110: data file 9: '/oracle/dbs/tbs_91.f'
    ORA-01578: ORACLE data block corrupted (file # 2, block # 19)
    ORA-01110: data file 2: '/oracle/dbs/tbs_21.f'
$ rman target=rman/rman@rmanprod
    RMAN> run {
    2> allocate channel ch1 type disk;
    3> blockrecover datafile 9 block 13 datafile 2 block 19;
    4> }
    Recovering Data blocks Using Selected Backups:
    # restore from backupset
    BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 FROM BACKUPSET;
    # restore from datafile image copy
    BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 FROM DATAFILECOPY;
    # restore from backupset with tag "mondayAM"
    BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 199 FROM TAG = mondayAM;
    # restore using backups made before one week ago
    BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 RESTORE
    UNTIL 'SYSDATE-7';
    # restore using backups made before SCN 100
    BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 RESTORE UNTIL SCN 100;
    # restore using backups made before log sequence 7024
    BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 RESTORE
    UNTIL SEQUENCE 7024;
Edited by:
Min Angel (Yeon Hong Min, Korean)

  • Corrupt Block and Standby Database

    Guys,
I created a standby database recently. I then discovered a corrupt block on my primary, and I assume the corruption is also on the standby since the files were copied. If I repair the corrupt block on the primary, how do I move the correction to the standby? Do I have to recreate it?
    DB version is 9iR2
    Delton

    Hi Delton,
    How do you plan to repair the corrupt block ?
    * Drop and re-create the object
    * Restore from backup
    In both cases, changes are replicated to the standby database, so nothing to worry about. As Sybrand has mentioned, make sure the changes are done with LOGGING option.
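A hedged sketch of making sure the repair generates redo and therefore reaches the standby (the index name is a placeholder; FORCE LOGGING is available from 9iR2 onwards):
SQL> ALTER DATABASE FORCE LOGGING;
SQL> ALTER INDEX app.some_index REBUILD LOGGING;
With FORCE LOGGING enabled on the primary, even NOLOGGING operations generate full redo, so the standby cannot drift out of sync this way again.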
    Regards
    Asif Momen
    http://momendba.blogspot.com

  • DB Recovery due to corrupt block in redologfile

    Hello.
    A block was corrupted in the redo log file # 2 and I don't have backup.
    So I tried recovering the database using the RECOVER DATABASE until a specified change #. The database required me to use the RECOVER DATABASE USING BACKUP CONTROLFILE instead. This command asked for a filename of a redo logfile. I entered the filename of redo log#1 since log#2 contained the corrupted block.
    The recovery wasn't successful. The error messages stated that the recovery required the sequence #333 but redo log#1 contained the sequence #332. I later found out that redo log#2 contained the sequence #333.
Basically, the scenario is this: according to ORA-00279, the recovery requires sequence #333, found in redo log #2, which contains the corrupted block. When I try the recovery I get the ORA-01194 error plus ORA-01112: media recovery not started.
    is there still hope in recovering the database?
    Thanks in advance

Well, I think you can only recover up to SCN #332 using log #1.
    If your redo members are not multiplexed you have lost some data.
    Acr
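A hedged sketch of the cancel-based incomplete recovery this implies (apply the available logs up to sequence 332, answer CANCEL when sequence 333 is requested, then open with RESETLOGS; everything in the lost sequence is gone):
SQL> RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
SQL> ALTER DATABASE OPEN RESETLOGS;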

  • Simulate block media corruption

How can I simulate block media corruption?
I am in the middle of a DR exercise with RMAN and am testing different recovery scenarios. I tried editing the datafile with vi and adding a 0, but then had to do a datafile recovery....

    Here you are
    Lets take the below example:
    sql> create user usr identified by usr;
    sql> grant dba to usr;
    sql> conn usr/usr;
    sql> create table tbl_corruption(id number);
    sql> insert into tbl_corruption values(1);
    sql> commit;
    sql> exit;
    $ rman target /
    rman> backup database plus archivelog;
    rman> exit
sql> select header_block from dba_segments where segment_name='TBL_CORRUPTION';
    header_block
    59
sql> select a.name from v$datafile a, dba_segments b where a.file#=b.header_file and b.segment_name='TBL_CORRUPTION';
    Name
    /u01/oracle/product/10.2.0/db_1/oradata/testdb/users01.dbf
    sql> exit
$ dd of=/u01/oracle/product/10.2.0/db_1/oradata/testdb/users01.dbf bs=8192 conv=notrunc seek=60 <<EOF
taking corruption
EOF
0+1 records in
0+1 records out
    sql> conn usr/usr
    sql> alter system flush buffer_cache;
    sql> select * from tbl_corruption;
    select * from tbl_corruption;
    Error at line1:
    ORA-01578: Oracle data block corrupted(File#4, block#60)
    ORA-01110: datafile 4:
    '/u01/oracle/product/10.2.0/db_1/oradata/testdb/users01.dbf'
    sql> exit
    $ rman target /
    rman> blockrecover datafile 4 block 60;
    sql> select * from tbl_corruption;
    ID
    1
    Block corruption resolved.

  • Skipping Corrupt Block

    Hi Gurus,
My alert log was showing a corrupt block and I repaired it with the DBMS_REPAIR utility. After this, Oracle started generating a trace file in the udump folder every minute, showing the following error:
    table scan: segment: file# 19 block# 67
    skipping corrupt block file# 19 block# 79161
    table scan: segment: file# 19 block# 67
    skipping corrupt block file# 19 block# 79161
    table scan: segment: file# 19 block# 67
    skipping corrupt block file# 19 block# 79161
    table scan: segment: file# 19 block# 67
    skipping corrupt block file# 19 block# 79161
    *** 2010-04-30 19:52:51.203
    table scan: segment: file# 19 block# 67
    skipping corrupt block file# 19 block# 79161
    table scan: segment: file# 19 block# 67
    skipping corrupt block file# 19 block# 79161
    table scan: segment: file# 19 block# 67
    skipping corrupt block file# 19 block# 79161
    table scan: segment: file# 19 block# 67
    skipping corrupt block file# 19 block# 79161
    table scan: segment: file# 19 block# 67
    skipping corrupt block file# 19 block# 79161
    table scan: segment: file# 19 block# 67
    skipping corrupt block file# 19 block# 79161
I don't want so many trace files to be generated. Can I do something?
Please help me.

    Vivek Agarwal wrote:
[original post quoted in full]
Dear Vivek, if you have used the DBMS_REPAIR package, it means that you have marked those blocks as corrupt and Oracle will bypass those blocks during I/O.
Why haven't you used RMAN Block Media Recovery? You can watch my video tutorial on this: http://kamranagayev.wordpress.com/2010/03/18/rman-video-tutorial-series-performing-block-media-recovery-with-rman/
(And please don't tell me that you haven't taken an RMAN backup if it's not a production database)
It is just information written to the trace files indicating that Oracle bypasses those data blocks.
    Read the following article :
    http://www.askthegerman.com/archives/107
    My Oracle Video Tutorials - http://kamranagayev.wordpress.com/oracle-video-tutorials/

  • Media corruption found: Needs workaround

I found a data block corruption on an index of the table CDM.FACT_CLAIM_REINSURANCE.
I tried to use DBMS_REPAIR to repair the corrupted block, but it reports a result of 0, implying no corruption.
Then I tried DBVERIFY: it shows some block corruption, which seems to be media corruption.
    DBVERIFY - Verification starting : FILE = /oracle/datafiles2/edwc/AGGREGATE_DATA_03.dbf
    Page 313303 is influx - most likely media corrupt
    Corrupt block relative dba: 0x0644c7d7 (file 25, block 313303)
    Fractured block found during dbv:
    Data in bad block:
    type: 6 format: 2 rdba: 0x0644c7d7
    ===================================
As the dbv output shows, there are no failing data pages, but corruption still exists, implying media corruption.
    My question is:
    Is there any work-around for the issue of media corruption, keeping in mind that the database is in noarchivelog mode and any operation using RMAN will not be possible.
    Regards,
    Frdz

The log clearly says the object is of type INDEX. Please have a look at the log file and the information about the object extracted from the database.
Will index rebuilding resolve the issue?
    =========================
    With the information from the alert log the following details were gathered.
    TSN = 15, TSNAME = AGGREGATE_DATA
    RFN = 25, BLK = 313303, RDBA = 105170903
    OBJN = 906126, OBJD = 906126, OBJECT = PK_FACT_CLAIM_REINSURANCE,
    SUBOBJECT = SEGMENT OWNER = CDM, SEGMENT TYPE = Index Segment
    Using the object number from the alert log file, the objects were identified. It was found to be an INDEX. Its base table was also identified.
    OBJECT_NAME OBJECT_ID OBJECT_TYPE OWNER
    PK_FACT_CLAIM_REINSURANCE 906126 INDEX CDM
    INDEX_NAME OWNER TABLE_NAME TABLE_OWNE TABLE_TYPE TABLESPACE_NAME
    PK_FACT_CLAIM_REINSURANCE CDM FACT_CLAIM_REINSURANCE CDM TABLE AGGREGATE_INDEX
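Since the corrupt block belongs to an index segment rather than table data, a hedged sketch of the rebuild-based workaround (the index name comes from the post; the target tablespace is a placeholder):
SQL> ALTER INDEX CDM.PK_FACT_CLAIM_REINSURANCE REBUILD TABLESPACE <good_ts>;
Rebuilding writes the index into fresh blocks, so no media recovery of the corrupt block is needed, which matters here because the database is in NOARCHIVELOG mode.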

  • Corrupted block rman

    Hi
As far as I know, if RMAN finds block corruption, it terminates the backup session.
However, in my scenario, RMAN skipped the corrupted block and continued taking the backup of the remaining datafiles.
What is the reason for this?
Note that I didn't use MAXCORRUPT in the RMAN script.
    The below is the end of the logfile:
    channel t4: backup set complete, elapsed time: 01:02:50
    released channel: t1
    released channel: t2
    released channel: t3
    released channel: t4
    released channel: t5
    released channel: t6
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03009: failure of backup command on t2 channel at 03/20/2010 17:29:52
    ORA-19566: exceeded limit of 0 corrupt blocks for file /dbs02/prd/mis_refrence.dbf01
    Recovery Manager complete.

    Hi,
    RMAN is terminated only when it does not find the data file etc. But it does not terminate due to block corruption.
    See the following excerpts from the following oracle document link.
    http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10734/rcmconc1.htm#1016108
    RMAN detects and responds to two primary types of backup errors: I/O errors and corrupt blocks. Any I/O errors that RMAN encounters when reading files or writing to the backup pieces or image copies cause RMAN to terminate the backup jobs. For example, if RMAN tries to back up a datafile but the datafile is not on disk, then RMAN terminates the backup. If multiple channels are being used, or redundant copies of backups are being created, RMAN may be able to continue the backup without user intervention.
    If BACKUP AS BACKUPSET creates more than one complete backup set and an error occurs, then RMAN needs to rewrite the backup sets that it was writing at the time of the error. However, it retains any backup sets that it successfully wrote before terminating. The NOT BACKED UP SINCE option of the BACKUP command restarts a backup that partially completed, backing up only files that did not get backed up.
    RMAN copies datafile blocks that are already identified as corrupt into the backup. If RMAN encounters datafile blocks that have not already been identified as corrupt, then RMAN stops the backup unless SET MAXCORRUPT has been used. Setting MAXCORRUPT allows a specified number of previously undetected block corruptions in datafiles during the execution of an RMAN BACKUP command. If RMAN detects more than this number of corruptions while taking the backup, then the command terminates. The default limit is zero, meaning that RMAN does not tolerate corrupt blocks by default.
    When RMAN finds corrupt blocks, until it finds enough to exceed the MAXCORRUPT limit, it writes the corrupt blocks to the backup with a reformatted header indicating that the block has media corruption. If the backup completes without exceeding MAXCORRUPT,then the database records the address of the corrupt blocks and the type of corruption in the control file. Access these records through the V$DATABASE_BLOCK_CORRUPTION view. Note that if more than MAXCORRUPT corrupt blocks are found, the V$DATABASE_BLOCK_CORRUPTION view is not populated. In such a case, you should set MAXCORRUPT higher and re-run the command to identify the corrupt blocks.
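A hedged sketch of how MAXCORRUPT is set inside a run block (the channel type, datafile number and limit are placeholders, not from the original log):
RMAN> run {
2> set maxcorrupt for datafile 5 to 10;
3> allocate channel t1 type disk;
4> backup database;
5> }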
    Regards
    Edited by: skvaish1 on Mar 24, 2010 4:39 PM

  • Volume bitmap needs minor repair for orphaned blocks

    Every once in a while I'll verify permissions using disk utility.  A few weeks ago I ran the verify disk and it came up with an issue like the one below.  I repaired it and now it has happened again.  This is the 3rd time actually.  Is my SSD having issues and need replacement or is this common?
    Verifying volume “Macintosh HD”
Checking file system
Performing live verification.
    Checking Journaled HFS Plus volume.
    Checking extents overflow file.
    Checking catalog file.
    Incorrect block count for file InstallESD.dmg
    (It should be 5712 instead of 1067949)
    Checking multi-linked files.
    Checking catalog hierarchy.
    Checking extended attributes file.
    Checking volume bitmap.
    Volume bitmap needs minor repair for orphaned blocks
    Checking volume information.
    Invalid volume free block count
    (It should be 41504458 instead of 40442221)
    The volume Macintosh HD was found corrupt and needs to be repaired.
    Error: This disk needs to be repaired using the Recovery HD. Restart your computer, holding down the Command key and the R key until you see the Apple logo. When the OS X Utilities window appears, choose Disk Utility.
    Thanks.

    The repair is inevitable and the sooner done the less data corruption will occur. Since the previous repair, has any app crashed or have you had to force-quit one?  Under normal circumstances, this is the usual cause: the app craps out and doesn't get a chance to free up disk space used on temp files and such before croaking.

  • ORA-19566: exceeded limit of 999 corrupt blocks for file

    Hi All,
    I am new to Oracle RMAN & RAC Administration. Looking for your support to solve the below issue.
We have 2 disk groups - +ETDATA & +ETFLASH - in our 3-node RAC environment, in which RMAN is configured on node 2 to take the backup. We do not have an RMAN catalog, and RMAN fetches its information from the control file.
Recently, the backup failed with the error ORA-19566: exceeded limit of 999 corrupt blocks for file +ETFLASH/datafile/users.6187.802328091.
We found that datafiles are present in both disk groups, and from the control file information we learned that the datafiles in +ETDATA are currently in use while +ETFLASH holds old datafiles.
    RMAN> show all;
    RMAN configuration parameters for database with db_unique_name LABWRKT are:
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 3 DAYS;
    CONFIGURE BACKUP OPTIMIZATION ON;
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
    CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+ETFLASH/CONTROLFILE/snapcf_LABWRKT.f';
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+ETFLASH/controlfile/snapcf_labwrkt.f';
The above configuration shows that the SNAPSHOT CONTROLFILE is pointing to +ETFLASH, so I changed the configuration so that the SNAPSHOT CONTROLFILE points to '+ETDATA/controlfile/snapcf_labwrkt.f'. At the end of the backup, the snapshot file was created in +ETDATA, and I was expecting it to be a copy of the control file in use, which has the dbf files located in +ETDATA. But the backup was still pointing to the old datafiles in +ETFLASH. Since we don't have an RMAN catalog, a resync is also not possible.
When I ran it manually, it was successful without any error and pointed to the existing datafiles.
    RMAN> backup database plus archivelog all;
I hope the issue will get resolved if RMAN points only to the datafiles present in +ETDATA. If I am correct, please let me know how I can make that happen. Also, please explain why the newly created snapshot file does not reflect the existing control file information.

    Hi,
    I am getting an error from the DBA Planning Calendar every time the job ...
So when was your last successful backup of this datafile? Check whether it is still available.
If it was some time ago, and you may currently be without any backup, try a backup without RMAN at once,
so you at least have something to work with in case you get additional errors right now.
Then you need to find out which object is affected. You are on the right track already. You need the statement
that goes to dba_extents to check which object the block belongs to (see the sketch below).
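A minimal sketch of that lookup (the file and block numbers are placeholders to be taken from the ORA-19566 / DBVERIFY output):
SQL> SELECT owner, segment_name, segment_type FROM dba_extents WHERE file_id = <file#> AND <block#> BETWEEN block_id AND block_id + blocks - 1;
If no row comes back, run the same predicates against dba_free_space; the corrupt blocks may lie in free space above the segment data.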
    Has the DB been recovered recently, so the block might possibly belong to an index created with nologging ?
    (this could be the case on BW systems).
    If the last good backup of that file is still available and the redologs belonging to this backup up to current time are as well, you could try to recover that file. But I'd do this only after a good backup without rman and by not destroying the original file.
    If the last good backup was an rman backup, you can do a verify restore of that datafile in advance, to check if the corruption is really not inside the file to be restored.
    Check out the -w (verify) option of brrestore first, to understand how it works.
    (I am not sure it this is already available in version 7.00, may be you need to switch to 7.10 or 7.20)
    brrestore -c -m /oracle/SHD/sapdata4/sr3_16/sr3.data16  -b xxxxxxxx.ffr -w only_rmv
You should do a dbv check of that file as well, to see whether it gives more information, i.e. whether more blocks are
affected. RMAN stops right after the first corruption, but usually you have a couple of those in a row, especially if they are
zeroed ones. (This would also work with version 7.00 brtools.)
    brbackup -c -u / -t online -m /oracle/SHD/sapdata4/sr3_16/sr3.data16 -w only_dbv
    Good luck.
    Volker

  • Recovery of thread 1 stuck at Block 113713 of file 643

    Hi,
    One day ago we had a Hardware problem with our Solaris Machine and 11g(11.1.0.7.0) Database.
    Our Server crashed saying..
ORA-00376: file 643 cannot be read at this time
The above error was repeated for another file.
We checked the file status and there was some problem (an I/O error when we ran the ls command) while accessing the file.
    Then we managed to restore that mount point.
    and then we started database but the database failed to open saying below error.
    RECOVERY OF THREAD 1 STUCK AT BLOCK 113713 OF FILE 643
    Thu Jul 21 16:45:41 2011
RECOVERY OF THREAD 1 STUCK AT BLOCK 42896 OF FILE 644
We checked V$RECOVER_FILE; no file was mentioned. We checked V$DATAFILE; all files' status was ONLINE and ENABLED was READ/WRITE. Then we manually recovered the failed datafiles with:
ALTER DATABASE RECOVER  datafile '/data/irsdata/undodbs03.dbf'
ALTER DATABASE RECOVER  datafile '/data/irsdata/undodbs04.dbf'
It succeeded with media recovery complete, and then we were able to open the database.
Even though V$RECOVER_FILE and V$DATAFILE did not flag the two problematic datafiles, why did we have to do media recovery?
    Any ideas?
    Regards!
    Edited by: Nitin Joshi on Jul 22, 2011 10:23 AM

Yes Hemant.
We were all puzzled by that scenario.
Posting the (incomplete) alert log stack from when we tried to open the database.
    MMAN started with pid=28, OS id=1277
    Thu Jul 21 16:38:21 2011
    DBW0 started with pid=2, OS id=1279
    Thu Jul 21 16:38:21 2011
    DBW1 started with pid=3, OS id=1281
    Thu Jul 21 16:38:21 2011
    DBW2 started with pid=32, OS id=1283
    Thu Jul 21 16:38:22 2011
    DBW3 started with pid=5, OS id=1285
    Thu Jul 21 16:38:22 2011
    LGWR started with pid=7, OS id=1287
    Thu Jul 21 16:38:22 2011
    CKPT started with pid=36, OS id=1289
    Thu Jul 21 16:38:22 2011
    SMON started with pid=40, OS id=1291
    Thu Jul 21 16:38:22 2011
    RECO started with pid=44, OS id=1293
    Thu Jul 21 16:38:22 2011
    MMON started with pid=48, OS id=1295
    Thu Jul 21 16:38:22 2011
    MMNL started with pid=52, OS id=1297
    DISM started, OS id=1299
    ORACLE_BASE from environment = /data87/ora11g/app/oracle
    Thu Jul 21 16:38:27 2011
    ALTER DATABASE   MOUNT
    Setting recovery target incarnation to 1
    Successful mount of redo thread 1, with mount id 2558427203
    Database mounted in Exclusive Mode
    Lost write protection disabled
    Completed: ALTER DATABASE   MOUNT
    Thu Jul 21 16:38:31 2011
    ALTER DATABASE OPEN
    Beginning crash recovery of 1 threads
    parallel recovery started with 15 processes
    Started redo scan
    Completed redo scan
    13775 redo blocks read, 631 data blocks need recovery
    Started redo application at
    Thread 1: logseq 112616, block 157510
    Recovery of Online Redo Log: Thread 1 Group 1 Seq 112616 Reading mem 0
      Mem# 0: /data1/irsdata/rlog1irs.dbf
      Mem# 1: /data3/irsdata/rlog11irs.dbf
    Thu Jul 21 16:38:34 2011
    RECOVERY OF THREAD 1 STUCK AT BLOCK 113713 OF FILE 643
    Thu Jul 21 16:38:34 2011
    RECOVERY OF THREAD 1 STUCK AT BLOCK 42896 OF FILE 644
    Completed redo application of 1.07MB
    Thu Jul 21 16:38:48 2011
    Non critical error ORA-48913 caught while writing to trace file "/data87/ora11g/app/diag/rdbms/irs/irs/trace/irs_p000_1307.trc"
    Error message: ORA-48913: Writing into trace file failed, file size limit [5242880] reached
    Writing to the above trace file is disabled for now on...
    Thu Jul 21 16:38:48 2011
    Non critical error ORA-48913 caught while writing to trace file "/data87/ora11g/app/diag/rdbms/irs/irs/trace/irs_p011_1329.trc"
    Error message: ORA-48913: Writing into trace file failed, file size limit [5242880] reached
    Writing to the above trace file is disabled for now on...
    Thu Jul 21 16:39:03 2011
Then, once we issued the command below, the recovery worked. Below is the alert log for the recovery command:
    ALTER DATABASE   MOUNT
    Setting recovery target incarnation to 1
    Successful mount of redo thread 1, with mount id 2558426450
    Database mounted in Exclusive Mode
    Lost write protection disabled
    Completed: ALTER DATABASE   MOUNT
    Thu Jul 21 17:20:18 2011
    ALTER DATABASE RECOVER  datafile '/data/irsdata/undodbs03.dbf' 
    Media Recovery Start
    Fast Parallel Media Recovery NOT enabled
    parallel recovery started with 15 processes
    Recovery of Online Redo Log: Thread 1 Group 1 Seq 112616 Reading mem 0
      Mem# 0: /data1/irsdata/rlog1irs.dbf
      Mem# 1: /data3/irsdata/rlog11irs.dbf
    Media Recovery Complete (irs)
    Completed: ALTER DATABASE RECOVER  datafile '/data/irsdata/undodbs03.dbf' 
    Thu Jul 21 17:21:00 2011
    ALTER DATABASE RECOVER  datafile '/data/irsdata/undodbs04.dbf' 
    Media Recovery Start
    Fast Parallel Media Recovery NOT enabled
    parallel recovery started with 15 processes
    Recovery of Online Redo Log: Thread 1 Group 1 Seq 112616 Reading mem 0
      Mem# 0: /data1/irsdata/rlog1irs.dbf
      Mem# 1: /data3/irsdata/rlog11irs.dbf
    Media Recovery Complete (irs)
Completed: ALTER DATABASE RECOVER  datafile '/data/irsdata/undodbs04.dbf'
It seems it applied all the data from the online redo log. Shouldn't that have been automatic, with no manual media recovery required?
    Regards!
    Edited by: Nitin Joshi on Jul 22, 2011 10:46 AM
    some typos

  • ORA-19566: exceeded limit of 0 corrupt blocks

    Hi All,
We have been encountering some issues with the RMAN backup; it has been erroring out with the same errors (max corrupt blocks). I ran DBVERIFY for the affected files and found that index pages are failing. When I tried to find the indexes from the extent views, I was unable to find them. It looks like these blocks are in free space, and the V$BACKUP_CORRUPTION view also shows the corruption as logical.
Waiting for your suggestions....
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bi
    PL/SQL Release 10.2.0.3.0 - Production
    CORE 10.2.0.3.0 Production
    TNS for HPUX: Version 10.2.0.3.0 - Production
    NLSRTL Version 10.2.0.3.0 - Production
    RMAN LOG:
    channel a3: starting piece 1 at 14-DEC-09
    RMAN-03009: failure of backup command on a2 channel at 12/14/2009 05:43:42
    ORA-19566: exceeded limit of 0 corrupt blocks for file /ub834/oradata/TERP/applsysd142.dbf
    continuing other job steps, job failed will not be re-run
    channel a2: starting incremental level 0 datafile backupset
    channel a2: specifying datafile(s) in backupset
    including current control file in backupset
    channel a2: starting piece 1 at 14-DEC-09
    channel a1: finished piece 1 at 14-DEC-09
    piece handle=TERP_1769708180_level0_292_1_1_20091213065437.rmn tag=TAG20091213T065459 comment=API Version 2.0,MMS Version 5.0.0.0
    channel a1: backup set complete, elapsed time: 01:14:45
    channel a2: finished piece 1 at 14-DEC-09
    piece handle=TERP_1769708180_level0_296_1_1_20091213065437.rmn tag=TAG20091213T065459 comment=API Version 2.0,MMS Version 5.0.0.0
    channel a2: backup set complete, elapsed time: 00:24:54
    RMAN-03009: failure of backup command on a4 channel at 12/14/2009 06:14:33
    ORA-19566: exceeded limit of 0 corrupt blocks for file /ub834/oradata/TERP/applsysd143.dbf
    continuing other job steps, job failed will not be re-run
    released channel: a1
    released channel: a2
    released channel: a3
    released channel: a4
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03009: failure of backup command on a3 channel at 12/14/2009 06:41:00
    ORA-19566: exceeded limit of 0 corrupt blocks for file /ub806/oradata/TERP/icxd01.dbf
    Recovery Manager complete.
    Thanks,
    Vimlendu
    Edited by: Vimlendu on Dec 20, 2009 10:27 AM

    dbv file=/ora/oradata/binadb/RAT_TRANS_IDX01.dbf blocksize=8192
    The result:
    DBVERIFY: Release 10.2.0.3.0 - Production on Thu Nov 20 11:14:01 2003
    (c) Copyright 2000 Oracle Corporation. All rights reserved.
    DBVERIFY - Verification starting : FILE =
    /ora/oradata/binadb/RAT_TRANS_IDX01.dbf
    Block Checking: DBA = 75520968, Block Type = KTB-managed data block
    **** row 80: key out of order
    ---- end index block validation
    Page 23496 failed with check code 6401
    DBVERIFY - Verification complete
    Total Pages Examined : 34560
    Total Pages Processed (Data) : 1
    Total Pages Failing (Data) : 0
    Total Pages Processed (Index): 31084
    Total Pages Failing (Index): 1
    Total Pages Processed (Other): 191
    Total Pages Empty : 3284
    Total Pages Marked Corrupt : 0
    Total Pages Influx : 0
It seems I have 1 page failing. I tried to run this script:
    select segment_type, segment_name, owner
    from sys.dba_extents
    where file_id = 18 and 23496 between block_id
    and block_id + blocks - 1;
    No rows returned.
    Then, I try to run this script:
    Select tablespace_name, file_id, block_id, bytes
    from dba_free_space
    where file_id = 18
    and 23496 between block_id and block_id + blocks - 1
This returned 1 row.
It seems the possibly corrupt block is in unused space.
    Edited by: Vimlendu on Dec 20, 2009 2:30 PM
    Edited by: Vimlendu on Dec 20, 2009 2:41 PM

  • How to make a corrupted block?

    Hi all,
    Is there any method to make a block corrupted in Oracle (for Windows)?
    My target is to test detecting and repairing corrupted blocks tools in Oracle.
    Thanks in advance,
    Ahmed B.

It's not a great practice and Oracle doesn't recommend it. However, Oracle Support has an internal utility for this; I don't know whether it is available to the public or not. Search on Metalink.
    Jaffar
I just recalled, there is a utility called 'bbed' which comes with Oracle.
However, in UNIX you can use the dd command to corrupt a few blocks. Something like:
dd if=/dev/zero of=full_path_name_of_file_to_corrupt bs=db_block_size seek=n count=1 conv=notrunc
You can use the freeware hexedit to simulate block corruption on Windows.
My sincere request is that you don't do this with your production DB. Use it very cautiously.
Message was edited by:
The Human Fly

  • How to repair corrupt sharepoint server (mdf) files

SharePoint Server has become a very popular enterprise application for enhanced collaboration. As the quantity and value of data stored on the SharePoint platform rises, backup and recovery become critical and prove to be a challenge for administrators. Corruption of your SharePoint Server database and MDF files can have many causes, including drive failures, accidental file deletion on WSS websites, server downtime, a saved backup gone bad, etc.
If you come across SharePoint damage, disaster recovery of SharePoint is important to carry out. A third-party database recovery application for SharePoint is the best alternative solution for dealing with a corrupted SharePoint database.
If you need to repair your corrupted SharePoint database due to drive failures, accidental file deletion on WSS websites, server downtime, a saved backup gone bad, or anything else, this also allows you to recover the MDF files in offline
mode. If you are searching for recovery software like this, you are at the right spot.
    From here, you may get the Free Trail Version Demo to check the features and to see the recovery process.
    Visit:- http://www.filesrecoverytool.com/sharepoint-database-repair.html

MS SQL Database Recovery software is an advanced solution to fix severely corrupted MDF files of MS SQL Server. It effectively recovers the maximum possible data from corrupted MDF files originally created in SQL Server 2000, 2005, 2008 and higher
versions.
In case the MDF database has become corrupt or damaged due to this error, you can repair the SQL database easily and productively using SQL Database Recovery software to repair all items of the corrupt MDF file. To know more about the software visit:
    Repair MDF File of SQL Server
    Thanks
    Regards
