DBVERIFY - Data in bad block

During RMAN validate structure I noticed that I have plenty of files with data corruption. I then ran DBVERIFY and identified those files and database objects. I know what to do with broken tables and indexes, but two lines are strange to me:
1) an object with segment type "TYPE2 UNDO" in the undo tablespace
2) a broken block in a user datafile which is empty (without any object on it)
Do I somehow repair those bad blocks, or leave them unchanged?
My DB: 9i
My OS: Linux x32

I can say it is a bug; I have seen such issues. See below.
From the 9i home:
idle> select file#, name from v$datafile where file#=11;

     FILE# NAME
---------- --------------------------------
        11 /oracle/oradata/demo92/undo2.dbf

idle>
test > /home/oracle: demo92> dbv file=/oracle/oradata/demo92/undo2.dbf
DBVERIFY: Release 9.2.0.8.0 - Production on Wed Jul 6 06:39:18 2011
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
DBVERIFY - Verification starting : FILE = /oracle/oradata/demo92/undo2.dbf
Page 1 is marked corrupt
Corrupt block relative dba: 0x00000001 (file 0, block 1)
Completely zero block found during dbv:
Page 2 is marked corrupt
Corrupt block relative dba: 0x00000002 (file 0, block 2)
Completely zero block found during dbv:
Page 3 is marked corrupt
Now I run DBV from the 11g home against the same file.
test > /oracle/11g/product/11.1.0/db1/bin: demo92> ./dbv file=/oracle/oradata/demo92/undo2.dbf
DBVERIFY: Release 11.1.0.7.0 - Production on Wed Jul 6 06:40:26 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
DBVERIFY - Verification starting : FILE = /oracle/oradata/demo92/undo2.dbf
DBVERIFY - Verification complete
Total Pages Examined : 57600
Total Pages Processed (Data) : 0
Total Pages Failing (Data) : 0
Total Pages Processed (Index): 0
Total Pages Failing (Index): 0
Total Pages Processed (Other): 30601
Total Pages Processed (Seg) : 0
Total Pages Failing (Seg) : 0
Total Pages Empty : 26999
Total Pages Marked Corrupt : 0
Total Pages Influx : 0
Message 419 not found; product=RDBMS; facility=DBV
Message 424 not found; product=RDBMS; facility=DBV
test > /oracle/11g/product/11.1.0/db1/bin: demo92>
I can't see the corruption here, so I suggest you upgrade the database.
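The "Completely zero block found" lines above mean dbv saw blocks that are entirely zero. A quick way to double-check such a report is to carve the block out with dd and count its non-zero bytes. This is only a sketch against a scratch file standing in for a datafile; the file name, block number, and the 8 KB block size are illustrative, not taken from the thread.

```shell
# Scratch "datafile": 4 blocks of random data, with block 2 zeroed out
# to imitate the "Completely zero block found" condition dbv reported.
BLOCK_SIZE=8192
BLOCK_NO=2
dd if=/dev/urandom of=demo.dbf bs=$BLOCK_SIZE count=4 2>/dev/null
dd if=/dev/zero of=demo.dbf bs=$BLOCK_SIZE seek=$BLOCK_NO count=1 conv=notrunc 2>/dev/null

# Carve out the suspect block and count its non-zero bytes;
# 0 confirms the block really is completely zero.
dd if=demo.dbf bs=$BLOCK_SIZE skip=$BLOCK_NO count=1 2>/dev/null | tr -d '\000' | wc -c
```

On a real datafile you would point `if=` at the file dbv complained about and take the block number from the "Corrupt block relative dba" line.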

Similar Messages

  • How to recover data from a hard drive with bad blocks?

    An external hard drive, 4TB Iomega...connected via eSATA cable...had a power outage and the drive won't be read by OSX now.  (Yes, it was on a surge protector and no I did not have a backup.  I was actually preparing for the process of creating a backup when the power went out!)  Anyway, I have tried using Data Rescue 3 and DiskDrill to try and recover data from the drive.  I can recover the first 1/3 of the drive, but it ejects when either app tries to access the bad block.  Can anyone tell me how/what software to use to recover the data?  I know there are programs that will avoid the bad block but I've only found them for Windows.  Are there any that will do such a thing in Lion?  Any help will be appreciated...and no, I can not afford a data recovery service.  Trying to do this on my own.

    Basics of File Recovery
    If you stop using the drive it's possible to recover deleted files that have not been overwritten by using recovery software such as Data Rescue II, File Salvage or TechTool Pro.  Each of the preceding come on bootable CDs to enable usage without risk of writing more data to the hard drive.  Two free alternatives are Disk Drill and TestDisk.  Look for them and demos at MacUpdate or CNET Downloads.
    The longer the hard drive remains in use and data are written to it, the greater the risk your deleted files will be overwritten.
    Also visit The XLab FAQs and read the FAQ on Data Recovery.

  • Mac Pro & Powerbook data transfer / what about bad blocks

    Hello,
    Well, I have a Mac Pro and an old PowerBook G4. Both are running OS X (one Intel version, the other PPC). The PowerBook is booting from an external FireWire drive on which I have installed OS X (PPC), mainly because the internal drive of the laptop died.
    So I want to transfer some data from the external drive to my Mac Pro. Is it okay to just shut down the PowerBook and connect and mount the external drive on the Mac Pro (not the actual PowerBook, just the external drive itself)? Will there be complications since it has OS X (PPC) installed? I want to transfer some files from the external to my Mac Pro.
    (...I know another option would be to network them together too.....)
    I am not trying to boot from it...
    another quick question....
    The external drive with OS X (PPC) installed has had a shady history: it has crashed a few times on me, bad blocks, sometimes a click here and there, and it has been partitioned a few times.
    When I transfer data or files from the bad drive to my good one, could it possibly transfer those bad data blocks to the other hard drive on my Mac Pro and "infect" it?

    The following may help you:
    A Basic Guide for Migrating to Intel-Macs
    If you are migrating a PowerPC system (G3, G4, or G5) to an Intel-Mac be careful what you migrate. Keep in mind that some items that may get transferred will not work on Intel machines and may end up causing your computer's operating system to malfunction.
    Rosetta supports "software that runs on the PowerPC G3 or G4 processor that are built for Mac OS X". This excludes the items that are not universal binaries or simply will not work in Rosetta:
    Classic Environment, and subsequently any Mac OS 9 or earlier applications
    Screensavers written for the PowerPC
    System Preference add-ons
    All Unsanity Haxies
    Browser and other plug-ins
    Contextual Menu Items
    Applications which specifically require the PowerPC G5
    Kernel extensions
    Java applications with JNI (PowerPC) libraries
    See also What Can Be Translated by Rosetta.
    In addition to the above you could also have problems with migrated cache files and/or cache files containing code that is incompatible.
    If you migrate a user folder that contains any of these items, you may find that your Intel-Mac is malfunctioning. It would be wise to take care when migrating your systems from a PowerPC platform to an Intel-Mac platform to assure that you do not migrate these incompatible items.
    If you have problems with applications not working, then completely uninstall said application and reinstall it from scratch. Take great care with Java applications and Java-based Peer-to-Peer applications. Many Java apps will not work on Intel-Macs as they are currently compiled. As of this time Limewire, Cabos, and Acquisition are available as universal binaries. Do not install browser plug-ins such as Flash or Shockwave from downloaded installers unless they are universal binaries. The version of OS X installed on your Intel-Mac comes with special compatible versions of Flash and Shockwave plug-ins for use with your browser.
    The same problem will exist for any hardware drivers such as mouse software unless the drivers have been compiled as universal binaries. For third-party mice the current choices are USB Overdrive or SteerMouse. Contact the developer or manufacturer of your third-party mouse software to find out when a universal binary version will be available.
    Also be careful with some backup utilities and third-party disk repair utilities. Disk Warrior (does not work), TechTool Pro (pre-4.5.1 versions do not work), SuperDuper (newest release works), and Drive Genius (untested) may not work properly on Intel-Macs. The same caution may apply to the many "maintenance" utilities that have not yet been converted to universal binaries.
    Before migrating or installing software on your Intel-Mac check MacFixit's Rosetta Compatibility Index.
    Additional links that will be helpful to new Intel-Mac users:
    Intel In Macs
    Apple Guide to Universal Applications
    MacInTouch List of Compatible Universal Binaries
    MacInTouch List of Rosetta Compatible Applications
    MacUpdate List of Intel-Compatible Software
    Transferring data with Setup Assistant - Migration Assistant FAQ
    Because Migration Assistant isn't the ideal way to migrate from PowerPC to Intel Macs, using Target Disk Mode or copying the critical contents to CD and DVD or an external hard drive will work better when moving from PowerPC to Intel Macs.
    Basically the instructions you should follow are:
    1. Backup your data first. This is vitally important in case you make a mistake or there's some other problem.
    2. Connect a Firewire cable between your old Mac and your new Intel Mac.
    3. Startup your old Mac in Target Disk Mode.
    4. Startup your new Mac for the first time, go through the setup and registration screens, but do NOT migrate data over. Get to your desktop on the new Mac without migrating any new data over.
    5. Copy the following items from your old Mac to the new Mac:
    In your /Home/ folder: Documents, Movies, Music, Pictures, and Sites folders.
    In your /Home/Library/ folder:
    /Home/Library/Application Support/AddressBook (copy the whole folder)
    /Home/Library/Application Support/iCal (copy the whole folder)
    Also in /Home/Library/Application Support (copy whatever else you need including folders for any third-party applications)
    /Home/Library/Keychains (copy the whole folder)
    /Home/Library/Mail (copy the whole folder)
    /Home/Library/Preferences/com.apple.mail.plist (* This is a very important file which contains all email account settings and general mail preferences.)
    /Home/Library/Preferences/ copy any preferences needed for third-party applications
    /Home /Library/iTunes (copy the whole folder)
    /Home /Library/Safari (copy the whole folder)
    If you want cookies:
    /Home/Library/Cookies/Cookies.plist
    /Home/Library/Application Support/WebFoundation/HTTPCookies.plist
    For Entourage users:
    Entourage is in /Home/Documents/Microsoft User Data
    Also in /Home/Library/Preferences/Microsoft
    Credit goes to another forum user for this information.
    If you need to transfer data for other applications please ask the vendor or ask in the Discussions where specific applications store their data.
    6. Once you have transferred what you need, restart the new Mac and test to make sure the contents are there for each of the applications.
    Written by Kappy with additional contributions from a brody.
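The hand-copy in step 4 above can be scripted: loop `cp -Rp` (preserving permissions and attributes) over the listed folders, skipping any that don't exist. In this sketch SRC and DEST point at scratch directories purely so it can run anywhere; on a real migration SRC would be the old Mac's home folder mounted via Target Disk Mode and DEST the new home folder.

```shell
# Stand-in source and destination home folders for the demo.
SRC=$(mktemp -d)/old_home
DEST=$(mktemp -d)/new_home
mkdir -p "$SRC/Documents" "$SRC/Library/Mail" "$DEST"
echo demo > "$SRC/Documents/thesis.txt"

# Copy each listed item if it exists, preserving attributes with -p.
for d in Documents Movies Music Pictures Sites \
         Library/Mail Library/Keychains Library/Safari; do
  if [ -e "$SRC/$d" ]; then
    mkdir -p "$DEST/$(dirname "$d")"
    cp -Rp "$SRC/$d" "$DEST/$d"
  fi
done
ls "$DEST/Documents"
```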

  • Recovering data from an NTFS drive with bad blocks

    I've got an 80gb NTFS drive that causes an unmountable boot volume BSOD in a PC. Mounted in a 2.5" USB enclosure, Windows 7 sees it only as an empty "RAW" partition and can't actually mount it. Using the Paragon NTFS driver, I can mount it just fine in Snow Leopard on my MBP.
    However, when I attempt to copy files from it, it pretty quickly finds one with a bad block and gives up. I'm looking for the best way to get as much data (really just images) from this as possible without going through and restarting the transfer after each of the many corrupt files.
    I've tried both Minicopier (which still prompted on every error) and Ultracopier (which would freeze every ten or so corrupt files) as far as simple copiers go, and I've tried DiskDrill and now FileSalvage for full-on data recovery applications. I'd rather use something like the former, but am open to free alternatives to the latter as well.
    Any ideas? Thanks!

    I've had problems with NTFS formatted external drives containing unremovable "corrupted files".
    My problem was resolved using windows. Since Mac isn't supposed to handle NTFS I could not find a suitable way to resolve corrupted file issues, though in windows there is a tool to handle this:
    chkdsk X: -F
    where X: is the volume. This resolved my corrupted file issues and the drive worked like a charm with my mac again.

  • Mac pro 13'' harddisk makes noises when writing and reading data, but Techtool scanning result shows no bad blocks in the disk, that is normal?

    I bought my MacBook Pro 13 inch (i7 processor, 750 GB storage) on 20th November 2012. But I found that when copying data to or from the Mac, the hard disk makes noises, like it is biting something. I thought there were bad blocks on the disk, but the TechTool scan result shows no bad blocks, and gives a passed conclusion.
    I went to the Apple store for testing, and the repairmen told me there were two choices available: 1) replace the hard disk, or 2) go back to the store I bought the machine from for a new one. I thought that since the machine was only a few days old it had better not be disassembled, so I went back for a new one. Unfortunately, the new one makes more noises than my old one. I don't know why notebooks from brands like Apple have such problems. Have you had this experience?

    All of the HDDs, be they in my MBPs or enclosures, are barely audible.  I would say that you deserve no less.  I suggest that you do not leave until you are satisfied with a near silent HDD.
    Why you got two in a row is a puzzle, but then some people beat the odds and win the lottery.  In your case the results are not exactly positive.  All HDDs eventually fail, and some fail sooner than others.  At least you should start with a quiet one.
    Good luck.
    Ciao.

  • A single bad block

    Hi,
    I am a graphic designer from Nepal. I have a single bad block on my 7-month-old MacBook Pro (2011). It hasn't been written over with a spare yet, as my HD is almost full. Should I wait for it to be written over with a spare block, or should I go ahead with a format? Will it damage more blocks if I wait? I would have returned it, but it would cost me more to do so, as I would have to send it abroad; there are no Apple Stores here (only authorized dealers). It's a single bad block, which is not that bad and can happen from the factory. I have even heard that many hard disk manufacturers don't exchange a drive if there are just a few bad blocks, as it is quite normal. My main question: should I wait for the bad block to be written over with a spare, or should I format it with zeros? Thanks in advance.

    You ran some drive-checking software and it located a bad block; no big deal, because all drives have bad blocks.
    When your computer attempts to write to the bad block and can't verify it, it writes the data to a new location and that bad block is mapped out.
    This occurs automatically and requires absolutely no assistance from you whatsoever.
    So don't do anything; it's all been taken care of. If you do, you're just wasting your time and could erase your data.
    The software you're running is for technical use only; just go about using your computer like before and nothing will happen from the bad block.

  • Hard disk failing -- how to safely backup data / remove bad sectors?

    I'm sorry if this is posted in the wrong forums, I wasn't sure where to go.
    It appears that I have a bad block on my hard drive.  During normal boot-up, arch ran its regular scan of my / partition, and failed giving an error that looked something like:
    Error reading block 13303898 (something here about a short read) while getting next inode from scan.
    It then let me login as root for maintenance and mounted / as read only.  By doing some work with du, I was able to narrow the problem to a bunch of files in a single folder.  Attempting to access them with du gave the error
    Ext3-fs error (device sda4): ext3_get_inode_loc: unable to read inode block - inode=3328795, block=13303898
    The block was the same for all of the bad files, although the inode changed. 
    First question (probably the most important):  Is it safe to mount my hard drive in read-write mode to try to backup my data?  If I do so, what is the best way to proceed?
    Second question (hopeful here, but not optimistic):  The folder with the bad block is not critical at all.  I've had the harddrive for less than a year, so I'm hoping it just got bumped and isn't completely dying.  I'd like to think I can back up my data, use some software to ignore the bad block, and go on my merry way using the rest of my hard drive.  Is this feasible or even possible?  If so, how would I go about fixing this?
    (Sigh) all of this the week before my final exams too, with critical data on the hard drive.  Oh well, when it rains it pours I guess...

    I would back up all the data I could to another HD, then I would zero the drive with dd, try to read it back with dd, and see if it complains.
    It may be a transient problem due to a failed write and not a physical problem.
    Something similar happened to me before. Either way, if dd solves your problem I would still keep a close eye on that disk.
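The zero-and-read-back check suggested above can be sketched like this, using a scratch file in place of the real disk (on real hardware `of=`/`if=` would be the device node, and zeroing it destroys all data on the drive):

```shell
IMG=$(mktemp)

# "Zero the drive": write 1 MB of zeros; a failing write surfaces here.
dd if=/dev/zero of="$IMG" bs=4096 count=256 2>/dev/null

# Read it all back; dd exits non-zero on an I/O error from a bad block.
dd if="$IMG" of=/dev/null bs=4096 2>/dev/null && echo "read back clean"
```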

  • Disk Utility: for bad blocks on hard disks, are seven overwrites any more effective than a single pass of zeros?

    In this topic I'm not interested in security or data remanence (for such things we can turn to e.g. Wilders Security Forums).
    I'm interested solely in best practice approaches to dealing with bad blocks on hard disks.
    I read potentially conflicting information. Examples:
    … 7-way write (not just zero all, it does NOT do a reliable safe job mapping out bad blocks) …
    — https://discussions.apple.com/message/8191915#8191915 (2008-09-29)
    … In theory zero all might find weak or bad blocks but there are better tools …
    — https://discussions.apple.com/message/11199777#11199777 (2010-03-09)
    … substitution will happen on the first re-write with Zeroes. More passes just takes longer.
    — https://discussions.apple.com/message/12414270#12414270 (2010-10-12)
    For bad block purposes alone I can't imagine seven overwrites being any more effective than a single pass of zeros.
    Please, can anyone elaborate?
    Anecdotally, I did find that a Disk Utility single pass of zeros seemed to make good (good enough for a particular purpose) a disk that was previously unreliable (a disk drive that had been dropped).

    @MrHoffman
    As well put as your answers are, you are not answering the original question, and regarding consumer hard drives your answers are misleading.
    Consumer hard drives ONLY remap a bad sector on write. That means regardless of how much spare capacity the drive has, it will NEVER remap the sector on a read. That means you ALWAYS have a bad file containing a bad sector.
    In other words, YOU would throw away an otherwise fully functional drive. That might be reasonable in a big enterprise where it is cheaper to replace the drive and let the RAID system take care of it.
    However, on an iMac or MacBook (Pro) an ordinary user cannot replace the drive himself, so on top of the drive cost he has to pay the repair bill (for a drive that is likely STILL in perfect shape, except for the one not-yet-remapped bad block).
    You simply miss the point that the drive can still have a million good reserve blocks, but will never remap the affected block in a particular email or song or calendar entry. So as soon as the affected file is READ, the machine hangs; all other processes more or less hang the moment they try to perform I/O, because the process trying to read the bad block is blocked in the kernel. This happens regardless of how many free reserve blocks you have, as the bad block never gets reallocated unless it is written to. And your email program won't rewrite an email that is 4 years old for you, because it is not programmed to realize that a certain file needs to be rewritten to get rid of a bad block.
    @Graham Perrin
    You are similarly stubborn in not realizing that your original question is answered.
    A bad block gets remapped on write.
    So obviously it happens on the first write.
    How do you come to the strange idea that writing several times makes a difference? How do you come to the strange idea that the bytes you write make a difference? Suppose block 1234 is bad, and blocks 100,000,000 to 100,000,999 are reserve blocks. When you write '********' to block 1234, the hard drive (firmware) will remap it to e.g. 100,000,101. All subsequent writes will go to the same NEW block. So why do you ask whether doing it several times will 'improve' this? After all the answers here you should have realized: your question makes no sense once you have understood how remapping works (or is supposed to work). And no, it does not matter whether you write a sequence of zeros, of '0's, of '1's, of 1s, of your social security number, or just 'help me, I'm held prisoner in a software forum'.
    I would try to find software that determines which file is affected, then try to read the bad block until you have in fact read it (that works surprisingly often, but may take anything from a few minutes to hours). In other words, you need software that tries to read the file and copies it completely, so that even the bad block is (hopefully) read successfully. Then write the whole data to a new file and delete the old one (deleting will free the bad block, and at some later time something will be written there and cause a remap).
    Writing zeros into the bad block basically only helps if you don't care that the affected file is corrupted afterwards. E.g. in the case of a movie, the player might crash when trying to display the affected area. If you know the affected file is a text file, it would make more sense to write a bunch of '-' signs, as they are readable while zero bytes are not (a text file is not supposed to contain zero bytes).
    Hope that helped ;)
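The rescue strategy described above (retry the read until it succeeds, then rewrite the data elsewhere) can be sketched with dd. A scratch file stands in for the disk here, so the read succeeds on the first attempt; on a genuinely marginal sector each failed attempt can take many seconds, and the block and device names would be real ones.

```shell
IMG=$(mktemp); BS=512; BAD=7        # BAD = illustrative bad-block number
dd if=/dev/urandom of="$IMG" bs=$BS count=16 2>/dev/null

# Retry reading the single suspect block until dd reports success,
# keeping the rescued copy for rewriting into a fresh file.
for attempt in 1 2 3 4 5; do
  if dd if="$IMG" of=rescued.blk bs=$BS skip=$BAD count=1 2>/dev/null; then
    echo "block read on attempt $attempt"
    break
  fi
done
```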

  • Bad blocks on an external drive, and disk tools for a MacIntel...

    I'm having a problem with my external drive which was pulled from my 12" PBook G4 and put into a USB 2.0 enclosure. When I try to transfer data from my internal HDD it runs for a bit, then, it just stops... no spinning ball and it won't allow me cancel the copy. If I fuss with it long enough (clicking on as many finder features as possible), the Finder will eventually stop responding.
    I ran Drive Genius, as it was the only software I could find that would work on my computer. Using the "Scan" function, it came back and told me that I had several bad blocks (I had to stop the scan at 115 because I needed to restart my computer). The program's help guide said that it cannot do anything about the bad blocks, and that I may need to reformat.
    So I reformatted using Disk Utility on my system start-up disk (not the disk that came with the computer), then I tried to zero out all data. Spinning beach ball, and the application stopped responding.
    Is it possible to quarantine bad blocks if they can't be reformatted or zeroed out on a MacIntel?
    Thank you kindly for any help.

    Sorry for the additional posts, I don't know how to edit my last post...
    Disk Utility is still at 49 minutes remaining, and it's been almost an hour. Still letting it run, and hoping for the best.

  • More than 1200 bad blocks do I need to change my Hard Disk

    Hi
    my hard disk on a Macbook pro has more than 1200 bad blocks do I need to change my Hard Disk
    Thanks for your help

    yousseffromlimoges wrote:
     my hard disk on a Macbook pro has more than 1200 bad blocks do I need to change my Hard Disk
    What software and version did you use to determine you had 1200+ bad blocks?
    Was it compatible with Lion?
    Run the scan again and take a screenshot of the results; make sure to save it to external media and disconnect it. You will need this to make a warranty call and have the drive replaced.
    After you have backed up your files to an external storage drive, disconnect it.
    Hold Command-R while rebooting to enter the Lion Recovery Partition and run Disk Utility; see if the drive needs repair. I suspect it does. Check the SMART status too.
    Follow the
    Restoring OS X 10.7 (new drive, total reformat method)
    https://discussions.apple.com/message/16276201#16276201
    Also make a clone of your OS X Lion partition on an external drive; this way you're prepared if the drive dies, as you can Option-boot off the clone. If you get a new drive you have a copy of Lion Recovery on the USB.
    It's highly unusual for a drive to have 1200+ bad blocks; the Zero Erase Free Space will confirm it, as it's going to use up all your spare blocks.
    The drive will likely brick, at which point you can Option-boot off the clone.
    Schedule an Apple warranty/AppleCare call if you're under it, or order a new drive online from OtherWorld Computing "kits", iFixit for videos, or other Mac places online.
    http://eshop.macsales.com/installvideos/
    You can read my link provided how to format the drive.
    However, if your Zero Erase Free Space turns out fine, then I suspect the software you used, or perhaps something else wrong with your OS X, is not correctly reporting your drive's characteristics.
    You could be spared a costly repair if that's the case.
    Good Luck.

  • Ideapad Flex 15 - HDD has bad block

    Hi All,
    Thanks for Any help to address the problem.
    I have already tried running the chkdsk /f /r which just got stuck at 10% and didn't move from there for about 40 minutes.
    I started getting below message in Event viewer :
    Log Name:      System
    Source:        disk
    Date:          1/19/2014 5:25:48 PM
    Event ID:      7
    Task Category: None
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:      PC
    Description:
    The device, \Device\Harddisk0\DR0, has a bad block.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="disk" />
        <EventID Qualifiers="49156">7</EventID>
        <Level>2</Level>
        <Task>0</Task>
        <Keywords>0x80000000000000</Keywords>
        <TimeCreated SystemTime="2014-01-19T23:25:48.278119900Z" />
        <EventRecordID>5403</EventRecordID>
        <Channel>System</Channel>
        <Computer>PC</Computer>
        <Security />
      </System>
      <EventData>
        <Data>\Device\Harddisk0\DR0</Data>
        <Binary>030080000100000000000000070004C0000100009C0000C00000000000000000007007C5090000000C10010000000000FFFFFFFF000000005800008402010000E8200AFF42072000000200003C000000C09AAB030000000080957F0100E0FFFF0000000000000000C0C8250100E0FFFF0000000000000000B883E20400000000280004E283B800000100000000000000700003000000000000000000110000000000000000000000</Binary>
      </EventData>
    </Event>
    Also similar message from Lenovo Solution Center:

    Are you encountering the same startup errors when booting straight from the One Key Recovery button? I assume the startup error is occurring while trying to boot the OS, and not while booting the recovery.
    As far as I know, Lenovo uses corporate activation (no printed key) on the Flex 15. You may be able to get Lenovo to send you recovery media, or even have them re-image the machine if covered by your warranty.

  • [Repaired] Bad blocks cause kernel blocking to the device

    I got
    May 10 10:08:13 qslap kernel: sd 4:0:0:0: [sdb] Unhandled sense code
    May 10 10:08:13 qslap kernel: sd 4:0:0:0: [sdb] Result: hostbyte=0x00 driverbyte=0x08
    May 10 10:08:13 qslap kernel: sd 4:0:0:0: [sdb] Sense Key : 0x3 [current]
    May 10 10:08:13 qslap kernel: sd 4:0:0:0: [sdb] ASC=0x14 ASCQ=0x0
    May 10 10:08:13 qslap kernel: sd 4:0:0:0: [sdb] CDB: cdb[0]=0x28: 28 00 25 42 ea af 00 00 01 00
    May 10 10:08:13 qslap kernel: end_request: I/O error, dev sdb, sector 625142447
    May 10 10:08:13 qslap kernel: Buffer I/O error on device sdb, logical block 78142805
    in system log when I try to access /dev/sdb in some way (for example, plug in, fdisk, gparted, but not palimpsest).
    This kind of log repeats several times and blocks any access to that device for tens of seconds (Seems kernel keep retrying, not give up the first time), which is annoying.
    From palimpsest, I can see:
    Current Pending Sector Count: Value: 1 sector
    Uncorrectable Sector Count: Value: 1 sector
    It says when write fails, "Current Pending Sector" will be remapped automatically by hardware.
    I got the sector size = 512 bytes:
    # fdisk -lu /dev/sdb
    Disk /dev/sdb: 320.1 GB, 320072933376 bytes
    255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xaaaaaaaa
    Disk /dev/sdb doesn't contain a valid partition table
    badblocks detects the bad sector well:
    # badblocks -svw -b 512 /dev/sdb 625142447 625142447
    Checking for bad blocks in read-write mode
    From block 625142447 to 625142447
    Testing with pattern 0xaa: done, 0:20 elapsed
    done
    Reading and comparing: done
    Testing with pattern 0x55: done
    Reading and comparing: done
    Testing with pattern 0xff: done
    Reading and comparing: done
    Testing with pattern 0x00: done
    Reading and comparing: done
    Pass completed, 1 bad blocks found.
    From the above, you can see that writing a block once takes ~20 seconds due to kernel blocking.
    badblocks writes 4 times, so ~80 seconds.
    Note: badblocks doesn't find any bad blocks when performing a full-disk read-only test.
    However, the sector wasn't automatically remapped (badblocks has already written to that sector);
    the kernel is still generating logs and blocking, which is very annoying.
    I also tried to write at that sector directly, no luck:
    # dd if=/dev/zero of=/dev/sdb bs=512 count=1 seek=625142447
    dd: writing `/dev/sdb': Input/output error
    1+0 records in
    0+0 records out
    0 bytes (0 B) copied, 7.26951 s, 0.0 kB/s
    What should I do to let the hardware remap that sector?
    If there is no way due to hardware limitations, then how can I mute the annoying log and keep the kernel from blocking?
    Additional: I am looking for a way to keep the kernel from blocking (give up at the beginning ASAP), or to have SMART mark that sector as no longer 'Pending', not for a way to create a fs with bad blocks marked.
    I know that if I provide a list of bad blocks to mkfs.*** when creating a fs, those blocks will not be used.
    However, when I plug in the removable hard disk, BEFORE performing ANY r/w instructions, the kernel starts to generate logs and /dev/sdb is not visible for tens of seconds. The same situation occurs when I run fdisk / gparted (these programs are unresponsive for tens of seconds due to kernel blocking).
    I guess that SMART does these checks automatically and causes the kernel blocking, while SMART can't handle these things well.
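To the "how do I force the remap" question above: two commonly suggested options are a direct-I/O dd write to the exact sector, and hdparm's `--write-sector`, which issues the ATA write command at the drive level and can succeed where a write through the block layer (like the failed dd above) does not. The sketch below only prints the commands, since both destroy that sector's contents and need a real device; the device and sector number are the ones from this thread.

```shell
DEV=/dev/sdb
SECTOR=625142447

# Printed rather than executed: both commands overwrite the sector's data.
echo "dd     : dd if=/dev/zero of=$DEV bs=512 count=1 seek=$SECTOR oflag=direct"
echo "hdparm : hdparm --write-sector $SECTOR --yes-i-know-what-i-am-doing $DEV"
```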
    This is the output of smartctl -a /dev/sdb -d sat, which may be helpful:
    smartctl 5.39.1 2010-01-28 r3054 [i686-pc-linux-gnu] (local build)
    Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net
    === START OF INFORMATION SECTION ===
    Model Family: Seagate Momentus 5400.5 series
    Device Model: ST9320320AS
    Serial Number: 5SX3YFQ8
    Firmware Version: SD03
    User Capacity: 320,072,933,376 bytes
    Device is: In smartctl database [for details use: -P show]
    ATA Version is: 8
    ATA Standard is: ATA-8-ACS revision 4
    Local Time is: Mon May 10 11:25:42 2010 CST
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled
    === START OF READ SMART DATA SECTION ===
    SMART overall-health self-assessment test result: PASSED
    See vendor-specific Attribute list for marginal Attributes.
    General SMART Values:
    Offline data collection status: (0x00) Offline data collection activity
    was never started.
    Auto Offline Data Collection: Disabled.
    Self-test execution status: ( 121) The previous self-test completed having
    the read element of the test failed.
    Total time to complete Offline
    data collection: ( 700) seconds.
    Offline data collection
    capabilities: (0x73) SMART execute Offline immediate.
    Auto Offline data collection on/off support.
    Suspend Offline collection upon new
    command.
    No Offline surface scan supported.
    Self-test supported.
    Conveyance Self-test supported.
    Selective Self-test supported.
    SMART capabilities: (0x0003) Saves SMART data before entering
    power-saving mode.
    Supports SMART auto save timer.
    Error logging capability: (0x01) Error logging supported.
    General Purpose Logging supported.
    Short self-test routine
    recommended polling time: ( 1) minutes.
    Extended self-test routine
    recommended polling time: ( 114) minutes.
    Conveyance self-test routine
    recommended polling time: ( 2) minutes.
    SCT capabilities: (0x103f) SCT Status supported.
    SCT Feature Control supported.
    SCT Data Table supported.
    SMART Attributes Data Structure revision number: 10
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
    1 Raw_Read_Error_Rate 0x000f 094 088 006 Pre-fail Always - 182650280
    3 Spin_Up_Time 0x0003 099 099 000 Pre-fail Always - 0
    4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 595
    5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 0
    7 Seek_Error_Rate 0x000f 075 060 030 Pre-fail Always - 30942693
    9 Power_On_Hours 0x0032 095 095 000 Old_age Always - 4482
    10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 1
    12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 579
    184 End-to-End_Error 0x0032 100 100 099 Old_age Always - 0
    187 Reported_Uncorrect 0x0032 001 001 000 Old_age Always - 1812
    188 Command_Timeout 0x0032 100 099 000 Old_age Always - 2
    189 High_Fly_Writes 0x003a 100 100 000 Old_age Always - 0
    190 Airflow_Temperature_Cel 0x0022 067 039 045 Old_age Always In_the_past 33 (0 166 39 23)
    191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 98
    192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 48
    193 Load_Cycle_Count 0x0032 011 011 000 Old_age Always - 178621
    194 Temperature_Celsius 0x0022 033 061 000 Old_age Always - 33 (0 12 0 0)
    195 Hardware_ECC_Recovered 0x001a 060 039 000 Old_age Always - 182650280
    197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 1
    198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 1
    199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0
    SMART Error Log Version: 1
    ATA Error Count: 1979 (device log contains only the most recent five errors)
    CR = Command Register [HEX]
    FR = Features Register [HEX]
    SC = Sector Count Register [HEX]
    SN = Sector Number Register [HEX]
    CL = Cylinder Low Register [HEX]
    CH = Cylinder High Register [HEX]
    DH = Device/Head Register [HEX]
    DC = Device Command Register [HEX]
    ER = Error register [HEX]
    ST = Status register [HEX]
    Powered_Up_Time is measured from power on, and printed as
    DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
    SS=sec, and sss=millisec. It "wraps" after 49.710 days.
    Error 1979 occurred at disk power-on lifetime: 4480 hours (186 days + 16 hours)
    When the command that caused the error occurred, the device was active or idle.
    After command completion occurred, registers were:
    ER ST SC SN CL CH DH
    40 51 00 ff ff ff 0f Error: UNC at LBA = 0x0fffffff = 268435455
    Commands leading to the command that caused the error were:
    CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
    25 da 01 ff ff ff 4f 00 13:43:15.498 READ DMA EXT
    25 da 01 ff ff ff 4f 00 13:43:13.155 READ DMA EXT
    25 da 01 ff ff ff 4f 00 13:43:10.887 READ DMA EXT
    25 da 01 ff ff ff 4f 00 13:43:10.887 READ DMA EXT
    25 da 01 ff ff ff 4f 00 13:43:10.886 READ DMA EXT
    Error 1978 occurred at disk power-on lifetime: 4480 hours (186 days + 16 hours)
    When the command that caused the error occurred, the device was active or idle.
    After command completion occurred, registers were:
    ER ST SC SN CL CH DH
    40 51 00 ff ff ff 0f Error: UNC at LBA = 0x0fffffff = 268435455
    Commands leading to the command that caused the error were:
    CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
    25 da 01 ff ff ff 4f 00 13:43:13.155 READ DMA EXT
    25 da 01 ff ff ff 4f 00 13:43:10.887 READ DMA EXT
    25 da 01 ff ff ff 4f 00 13:43:10.887 READ DMA EXT
    25 da 01 ff ff ff 4f 00 13:43:10.886 READ DMA EXT
    25 da 01 ff ff ff 4f 00 13:43:10.886 READ DMA EXT
    Error 1977 occurred at disk power-on lifetime: 4480 hours (186 days + 16 hours)
    When the command that caused the error occurred, the device was active or idle.
    After command completion occurred, registers were:
    ER ST SC SN CL CH DH
    40 51 00 ff ff ff 0f Error: UNC at LBA = 0x0fffffff = 268435455
    Commands leading to the command that caused the error were:
    CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
    25 da 01 ff ff ff 4f 00 13:43:10.887 READ DMA EXT
    25 da 01 ff ff ff 4f 00 13:43:10.887 READ DMA EXT
    25 da 01 ff ff ff 4f 00 13:43:10.886 READ DMA EXT
    25 da 01 ff ff ff 4f 00 13:43:10.886 READ DMA EXT
    25 da 01 ff ff ff 4f 00 13:43:10.885 READ DMA EXT
    Error 1976 occurred at disk power-on lifetime: 4480 hours (186 days + 16 hours)
    When the command that caused the error occurred, the device was active or idle.
    After command completion occurred, registers were:
    ER ST SC SN CL CH DH
    40 51 00 ff ff ff 0f Error: UNC at LBA = 0x0fffffff = 268435455
    Commands leading to the command that caused the error were:
    CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
    25 da 01 ff ff ff 4f 00 13:43:08.457 READ DMA EXT
    25 da 01 ff ff ff 4f 00 13:43:06.082 READ DMA EXT
    25 da 01 ff ff ff 4f 00 13:43:03.814 READ DMA EXT
    25 da 01 ff ff ff 4f 00 13:43:03.813 READ DMA EXT
    25 da 01 ff ff ff 4f 00 13:43:03.813 READ DMA EXT
    Error 1975 occurred at disk power-on lifetime: 4480 hours (186 days + 16 hours)
    When the command that caused the error occurred, the device was active or idle.
    After command completion occurred, registers were:
    ER ST SC SN CL CH DH
    40 51 00 ff ff ff 0f Error: UNC at LBA = 0x0fffffff = 268435455
    Commands leading to the command that caused the error were:
    CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
    25 da 01 ff ff ff 4f 00 13:43:06.082 READ DMA EXT
    25 da 01 ff ff ff 4f 00 13:43:03.814 READ DMA EXT
    25 da 01 ff ff ff 4f 00 13:43:03.813 READ DMA EXT
    25 da 01 ff ff ff 4f 00 13:43:03.813 READ DMA EXT
    25 da 01 ff ff ff 4f 00 13:43:03.813 READ DMA EXT
    SMART Self-test log structure revision number 1
    Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
    # 1 Extended offline Completed: read failure 90% 4480 625142447
    # 2 Short offline Completed: read failure 90% 4474 625142447
    # 3 Extended offline Completed: read failure 90% 4474 625142447
    # 4 Short offline Completed: read failure 90% 4474 625142447
    # 5 Conveyance offline Completed: read failure 90% 4473 625142447
    # 6 Short offline Completed: read failure 90% 4473 625142447
    SMART Selective self-test log data structure revision number 1
    SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
    1 0 0 Not_testing
    2 0 0 Not_testing
    3 0 0 Not_testing
    4 0 0 Not_testing
    5 0 0 Not_testing
    Selective self-test flags (0x0):
    After scanning selected spans, do NOT read-scan remainder of disk.
    If Selective self-test is pending on power-up, resume after 0 minute delay.
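    The remap being asked about can sometimes be forced from userspace with hdparm's raw sector commands. The sketch below only prints the command sequence so it can be reviewed first; it assumes an hdparm build that supports --read-sector/--write-sector (the write is destructive to that sector), and the LBA is the one reported in the self-test log above.

```shell
#!/bin/sh
# Sketch: ask the drive itself to rewrite the pending sector so its
# firmware can remap it. WARNING: --write-sector destroys the sector.
DEV=/dev/sdb        # device from the post above
LBA=625142447       # LBA_of_first_error from the SMART self-test log

READ_CMD="hdparm --read-sector $LBA $DEV"
WRITE_CMD="hdparm --yes-i-know-what-i-am-doing --write-sector $LBA $DEV"
CHECK_CMD="smartctl -A $DEV -d sat"  # Current_Pending_Sector should drop

# Printed rather than executed so the sequence can be reviewed first:
echo "$READ_CMD"    # 1) confirm the sector really errors out
echo "$WRITE_CMD"   # 2) overwrite it via the ATA WRITE SECTOR(S) command
echo "$CHECK_CMD"   # 3) verify the pending count afterwards
```

    Because --write-sector goes through the drive's ATA interface rather than the block layer, the firmware gets a chance to reallocate the sector from its spare pool on the write.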

    The bad block was repaired by SeaTools for DOS.
    It seems only SeaTools for DOS can repair this issue.
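    Whichever tool does the repair, it is worth re-reading the exact sector afterwards to confirm the remap took. A small sketch; check_sector is a helper name made up here, and the demonstration runs against a throwaway image file rather than the real /dev/sdb:

```shell
#!/bin/sh
# Sketch: read back exactly one 512-byte sector; a clean dd exit
# suggests the remap succeeded, an I/O error means it is still bad.
check_sector() {            # $1 = device or image file, $2 = LBA
  dd if="$1" of=/dev/null bs=512 count=1 skip="$2" 2>/dev/null
}

# Real usage would be: check_sector /dev/sdb 625142447
# Demonstrated on a throwaway 4-sector image instead:
dd if=/dev/zero of=/tmp/disk.img bs=512 count=4 2>/dev/null
if check_sector /tmp/disk.img 2; then
  echo "sector readable"
else
  echo "sector still failing"
fi
rm -f /tmp/disk.img
```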

  • Suggestions for formatting external drive with bad block?

    I have an OWC Mercury 300 GB external drive. I wanted to erase it and use it as a backup for one of my laptops. The drive had been working fine as a Time Machine backup for an old (Intel) iMac. I told Disk Utility to erase and reformat. It erased, but refused to reformat, saying there was a bad block on the drive. Now Disk Utility recognizes the external drive but won't do anything with it. No Verify. No Repair. All grayed out. Any suggestions?

    It wasn't a hard error, that's why. Repartitioning and reformatting will often fix soft errors in the directory structure. Had there been a real, new bad block on the drive, it would not have been repairable. A soft bad-block error can be fixed by reformatting; a hard block error may be fixable by using Zero Data (one pass), which may spare out the bad block. If none of that works, you need to replace the drive.
    You were fortunate that the error was probably spurious and resolved by repartitioning (it may have been in the disk's RDB).

  • How to keep track of newly appearing bad blocks on an HDD

    I have some 2.5" HDDs lying around and wanted to test them for bad blocks, and found that I can use badblocks for this.
    But how do I keep the list of bad blocks up to date if new bad blocks are detected, or is this not possible?
    Solixxx

    graysky wrote:You can detect and lock out bad ones... but the danger is not knowing which good blocks will go bad in the future.  I have a hdd in an old machine that has bad blocks on it... been running fine for 9+ months now after isolating them.
    Exactly :-)
    I've been using a drive with isolated bad blocks for over a year now, but I store only data I don't care that much about - I can re-download or recreate it.
    I thought you just wanted to list the bad blocks. If you're going to reformat the device, that is exactly the mechanism for isolating the bad blocks I wrote about, so you should be OK.
    I think badblocks prevents the bad blocks from being used by the filesystem, but they still reside on the device, so if you run badblocks again, it should list both the old bad blocks and any new ones.
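    The cumulative part can be scripted: keep the output of each badblocks run and merge it before handing the list to e2fsck. A sketch; the device, the paths, and the merge_badblocks helper are made up for illustration, and it assumes an ext2/3/4 filesystem:

```shell
#!/bin/sh
# Sketch: maintain a cumulative bad-block list across rescans and
# register it with the filesystem's bad-block inode via e2fsck -l.

merge_badblocks() {   # $1 = previous list, $2 = fresh scan; merged -> stdout
  sort -n -u "$1" "$2" 2>/dev/null
}

# Typical cycle (left as comments; device and paths are illustrative):
#   badblocks -sv -o /root/sdb1.new /dev/sdb1      # non-destructive read scan
#   merge_badblocks /root/sdb1.bad /root/sdb1.new > /root/sdb1.merged
#   mv /root/sdb1.merged /root/sdb1.bad
#   e2fsck -l /root/sdb1.bad /dev/sdb1             # register the blocks

# Demonstration of the merge step on two tiny lists:
printf '10\n20\n' > /tmp/old.list
printf '20\n30\n' > /tmp/new.list
merge_badblocks /tmp/old.list /tmp/new.list   # prints 10, 20, 30
rm -f /tmp/old.list /tmp/new.list
```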

  • ORA-01578 - bad blocks.

    Hello,
    We had a SAN crash last month, and with all the confusion I missed that we had some bad blocks after the crash. Our RMAN backups have now aged out, so we don't have an RMAN backup. What I do have is a Data Pump export from before the event. This is table corruption. Is there a way to restore the table to a staging location, catalog the table or datafile somehow, and then restore the bad block? Or, failing that, to determine which data is bad, pull it from the backed-up table, and insert it back into the bad table after I recreate it without the bad blocks?
    Oracle Support is just saying to recreate the table, skipping the bad blocks, but if the data is in that Data Pump backup there has to be some way to determine what it is and restore it.
    Thanks

    The FIX_CORRUPT_BLOCKS procedure did not work, and I have not yet attempted skipping the blocks. If I run SKIP_CORRUPT_BLOCKS, will my query work? Is that worth a shot next?
    select count(*) TRACKING_ID from JEBCS3NM.TRACKABLE_OBJECT t, JEBCS3NM.CONVEYANCE c where c.TRACKABLE_OBJECT_GUID = t.TRACKABLE_OBJECT_GUID and PRIMARY_MOVER_ID is null
    ERROR at line 1:
    ORA-01578: ORACLE data block corrupted (file # 75, block # 1024073)
    ORA-01110: data file 75: '/u02/oradata/bcso/jebcs3nm_d.dbf'
    SELECT OBJECT_NAME, BLOCK_ID, MARKED_CORRUPT
      FROM REPAIR_TABLE;
    OBJECT_NAME                      BLOCK_ID MARKED_COR
    CONVEYANCE                        1024073 TRUE
    CONVEYANCE                        1024105 TRUE
    CONVEYANCE                        1024113 TRUE
    SQL> SET SERVEROUTPUT ON
    SQL> DECLARE
           num_fix INT;
         BEGIN
           num_fix := 0;
           DBMS_REPAIR.FIX_CORRUPT_BLOCKS (
             SCHEMA_NAME       => 'JEBCS3NM',
             OBJECT_NAME       => 'CONVEYANCE',
             OBJECT_TYPE       => DBMS_REPAIR.TABLE_OBJECT,
             REPAIR_TABLE_NAME => 'REPAIR_TABLE',
             FIX_COUNT         => num_fix);
           DBMS_OUTPUT.PUT_LINE('num fix: ' || TO_CHAR(num_fix));
         END;
         /
    num fix: 0
    PL/SQL procedure successfully completed.
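    For what it's worth, the skip call being considered would look roughly like the sketch below (schema and table names are the ones from the session above; try it on a test copy first). With the blocks marked skippable, the count(*) should run without ORA-01578, but it will silently undercount by whatever rows those three blocks held, so the Data Pump copy is still needed to recover the missing rows.

```sql
-- Sketch: tell Oracle to skip corrupt blocks during scans of this
-- table instead of raising ORA-01578. Rows in those blocks are
-- simply omitted from query results.
BEGIN
  DBMS_REPAIR.SKIP_CORRUPT_BLOCKS (
    SCHEMA_NAME => 'JEBCS3NM',
    OBJECT_NAME => 'CONVEYANCE',
    OBJECT_TYPE => DBMS_REPAIR.TABLE_OBJECT,
    FLAGS       => DBMS_REPAIR.SKIP_FLAG);
END;
/
```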

Maybe you are looking for

  • Problem in Purchase Requisition Release Strategy

    Hi sapgurus,                        i am defining a release strategy for Purchase Requisition. but whenver in a strategy two release codes are incorporated suppose Release group :MM Release Codes: M1 and M2. but in relese simulation tab showing    "M

  • Business area Vs profit centre

    hi, can any one tell me what are the main differences between business area and profit centre.

  • ERROR: Wi-Fi has the self-assigned IP address

    I'm having an issue with both my MacBook Pros at my home.  If I try to connect to the Time Capsule after coming home from work, I receive the exclamation point in my wifi status icon and this error message in my network preferences pane: "Wi-Fi has t

  • Scope of SAP SD

    Hello Friends I am posting my new msg I am new to this forum Pl let me know the scope os SAP after recession.Will we have scope in getting jobs pl let me know...reg anand

  • Echinus Start-up applications

    I recently gave the echinus a try. Itis amazing. Fully Extended Hint Compliant. But there is a minor issue. I want to start a couple of command when echinus starts and I want it to be echinus specific (i.e I can't use places and methods that are bein