Root filesystem won't mount.

I'm getting a "file not found" / kernel panic error when the kernel tries to mount my root filesystem.
The partition is specified correctly on the GRUB kernel line, and I don't believe it is a module issue - this happens for both ext2 and ext3 partitions, but not all of them - just my Arch partition.
So what else could cause the kernel to not mount it?

Disk /dev/sda: 30.0 GB, 30005821440 bytes
255 heads, 63 sectors/track, 3648 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x86338633
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        2940    23615518+  83  Linux
/dev/sda2            3588        3648      489982+  82  Linux swap / Solaris
/dev/sda3            2941        3587     5197027+  83  Linux
Partition table entries are not in disk order
sda1 is my Ubuntu partition (ext3), which I'm currently running. sda3 is Arch (ext2).
Last edited by kristersaurus (2008-07-30 17:48:14)
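For reference, a typical Arch entry in GRUB legacy's menu.lst from this era looked roughly like the sketch below; the (hd0,2) and /dev/sda3 values follow the fdisk output above, while vmlinuz26 and kernel26.img are the stock Arch image names of the time and are assumptions to verify against the actual /boot contents. A missing or mismatched initrd line is a classic cause of this kind of panic even when root= points at the right partition.
# (hd0,2) is GRUB legacy's name for /dev/sda3; image names assume a stock Arch kernel
title  Arch Linux
root   (hd0,2)
kernel /boot/vmlinuz26 root=/dev/sda3 ro
initrd /boot/kernel26.img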

Similar Messages

  • "ludowngrade" - Sol 8 root filesystem trashed after mounting on Sol 10?

    I've been giving liveupgrade a shot and it seems to work well for upgrading Sol 8 to Sol 10 until a downgrade / rollback is attempted.
To make a long story short, luactivating back to the old Sol 8 instance doesn't work because I haven't figured out a way to completely unencapsulate the Sol 8 SVM root metadevice without completely removing all SVM metadevices and metadbs from the system before the luupgrade, and we can only reboot once, to activate the new upgrade.
    This leaves the old Sol 8 root filesystem metadevice around after the upgrade (even though it is not mounted anywhere). After an luactivate back to the Sol 8 instance, something gets set wrong and the 5.8 kernel panics with all kinds of undefined symbol errors.
    Which leaves me no choice but to reboot in Solaris 10, and mount the old Solaris 8 filesystem, then edit the Sol 8 /etc/system and vfstab files to boot off a plain, non-SVM root filesystem.
Here's the problem: once I have mounted the old Sol 8 filesystem in Sol 10, it fails fsck when booting Sol 8:
    /dev/rdsk/c1t0d0s0: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
    # fsck /dev/rdsk/c1t0d0s0
    BAD SUPERBLOCK AT BLOCK 16: BAD VALUES IN SUPER BLOCK
    LOOK FOR ALTERNATE SUPERBLOCKS WITH MKFS? y
    USE ALTERNATE SUPERBLOCK? y
    FOUND ALTERNATE SUPERBLOCK AT 32 USING MKFS
    CANCEL FILESYSTEM CHECK? n
    Fortunately, recovering the alternate superblock makes the filesystem usable in Sol 8 again. Is this supposed to happen?
    The only thing I can think of is I have logging enabled on the root FS in Sol 10, so apparently logging trashes the superblock in Sol 10 such that the FS cannot be mounted in Sol 8 without repair.
Better yet would be a HOWTO on how to luupgrade a root filesystem encapsulated in SVM without removing the metadevices first. It seems impossible, since without any fiddling all the LU instances on the host will share the SVM metadbs on the system, which leads to problems.
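If Solaris 10's default UFS logging really is what trashes the superblock, one hedged workaround (an assumption, not something tested here) is to mount the old Sol 8 root from Sol 10 with logging explicitly disabled while editing /etc/system and vfstab; nologging is a standard mount_ufs option:
# mount the Sol 8 root under Solaris 10 without UFS logging; device path taken from the fsck output above
mount -F ufs -o nologging /dev/dsk/c1t0d0s0 /mnt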

Did you upgrade the version of PowerPath to a release supported on Solaris 10?

  • Root filesystem errors, Arch mounts it as RO, unusable system

I don't know why, but a little while ago Arch stopped working: it just stops in the initscripts where it checks the filesystems.
    I'm getting at the beginning of initscripts this:
    "Using static /dev filesystem"
Then it goes into maintenance mode after the filesystem check, asking for my root password, with the root filesystem mounted read-only. It asks me to fsck it and change the superblock, but I can't do this because /dev/sda1 (my root FS) doesn't exist.
    What should I do?
    Thanks in advance

1. When it starts it will report a filesystem check error and ask if you want to fix it; type in your root password and press Enter.
2. Type mount -n -o remount,rw / and press Enter.
3. Type pacman -S initscripts and press Enter.
4. Type cp /etc/rc.local.shutdown.pacsave /etc/rc.local.shutdown and press Enter.
5. Type cp /etc/rc.local.pacsave /etc/rc.local and press Enter.
6. Type cp /etc/rc.conf.pacsave /etc/rc.conf and press Enter.
7. Type cp /etc/inittab.pacsave /etc/inittab and press Enter.
8. Type reboot and press Enter, and you should be able to boot into your system.
If pacman tells you that it can't find initscripts, then download the package from here:
ftp://ftp.archlinux.org/testing/os/i686 … pkg.tar.gz
and put it on a USB drive and mount it from inside another Linux, for example an Ubuntu live CD;
then mount your Arch root as well and copy initscripts-2008.02-1-i686.pkg.tar.gz to mountpoint/root/,
then go back and continue from step 3, but with pacman -U initscripts-2008.02-1-i686.pkg.tar.gz.
    Last edited by INCSlayer (2008-03-01 07:25:15)

  • Filesystem won't mount in rc.sysinit

The filesystem on my RAID card won't mount with the other filesystems in rc.sysinit; it displays
    mount: special device /dev/sda1 does not exist.
    However, I took it out of fstab and added the line
    mount -t reiserfs /dev/sda1 /var/lib/mysql
    to rc.multi, so it mounts the filesystem right before it starts the mysql daemon, and it works just fine.
    Any ideas as to what would cause this?
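One common cause on Arch of that era was the controller's driver module not yet being loaded when rc.sysinit walked fstab, so /dev/sda1 did not exist yet. A hedged sketch of forcing it to load early via the MODULES array in /etc/rc.conf (the module name below is a placeholder, not the actual driver for this card):
# /etc/rc.conf - load the RAID controller driver before filesystems are mounted;
# replace "your_raid_module" with the real driver name (see lsmod after a successful boot)
MODULES=(your_raid_module)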

    Hi Olaf,
    No, it hasn't gone away. I only have those two iPod menu items: HDSpecs and HDSMARTData.
    But I don't think there's anything wrong with the iPod; it works on other machines. And no other iOS devices will work on this Mac; I've tried an iPad and an iPhone. I spent 20 minutes on the Apple support chat yesterday. They suggested I zap the NVRAM and the SMC but I've tried both of those. They gave up after that and suggested it was something to do with the Mac's logic board or power management chip. But that doesn't explain why it works perfectly well with my Android phone and my Kindle.
    I don't want to take it in and find out the logic board is failing, so I'm just going to have to carry on listening to old music on my iPod, and never buy an iPad or iPhone!

  • SAN filesystems won't mount at reboot

We are seeing a problem with Red Hat Linux AS 3 (update 4). We have some filesystems on SAN and they won't mount at reboot. We have to explicitly modprobe the driver module for the SAN interface and then mount the filesystems. In one situation, we have a QLogic interface for the SAN, and we do “modprobe qla2300” before we attempt to mount. There are other situations now where we have Hitachi SAN. The sequence is, do a mod probe for SAN, mount the SAN file systems, and then start all apps needed on those filesystems.
    The problem is where to insert such commands in the boot sequence. The rc scripts won’t help as they get kicked off after filesystems mount. Because the filesystems don’t mount, all actions we want from boot on startup would fail.
    Anyone faced this kind of problem? Appreciate your feedback and any work-around you used.

"... and we do “modprobe qla2300” before we attempt to mount."
Why aren't you using /etc/modprobe.conf (modules.conf in earlier Linux versions)? Simply configure it and the QLogic module will be loaded automatically.
    Example of modprobe.conf for qla2300:
    alias scsi_hostadapter0 qla2300_conf
    alias scsi_hostadapter1 qla2300
    alias scsi_hostadapter2 qla2300_conf
    alias scsi_hostadapter3 qla2300
    post-remove qla2300 rmmod qla2300_conf
    options qla2300 ql2xuseextopts=1
    post-remove qla2200 rmmod qla2200_conf
"The sequence is, do a mod probe for SAN, mount the SAN file systems, and then start all apps needed on those filesystems."
Again, load the modules automatically via modprobe.conf.
"The problem is where to insert such commands in the boot sequence."
For automatic mounting you could use automount, for example. Of course you must have all the relevant modules loaded (see modprobe.conf).
Other scripts definitely belong in rc.
"The rc scripts won't help as they get kicked off after filesystems mount. Because the filesystems don't mount."
Why don't the filesystems mount? Is there an error message?

  • Broken root filesystem

    Hello,
I've got two eeePCs, both running Arch, and yesterday I updated packages and managed to break both of them.
They've each got an 8 GB SSD, on which only Arch is installed.
The first one has a boot partition (/dev/sda1) on ext2, and a root (/dev/sda2) on ext4.
When booting, the kernel gets loaded but it panics, saying the root filesystem cannot be mounted.
I cannot even mount the root partition using an Arch 2009.08-core-i686 image written on a USB stick: the boot partition (/dev/sdb1) works fine, but mount fails on /dev/sdb2, and fsck says that /dev/sdb2's superblock is not valid: bad magic number. Not even the backup superblock can be used...
Everything was working fine the last time I used the eeePC, I halted it correctly, and nothing bad happened to it in the meantime, so I'm hoping I didn't lose the filesystem and my data...
How could I check what the problem is?
The second machine (I haven't got it with me at the moment, forgot it at home... as soon as I get there I'll boot it using the USB stick) should have only one partition, containing the root filesystem, formatted with NILFS2.
If I recall correctly, GRUB cannot even load the kernel: maybe it stopped supporting NILFS2 filesystems? Anyway, unless I can't get my data out via the Arch live environment, this is not a big problem...
EDIT: I noticed two other similar threads... Maybe the last upgrade broke "exotic" filesystems?
    Anyway I cannot mount the EXT4 FS even from Arch Live (while it looks like the others can), and this sounds pretty weird to me...
    Last edited by peoro (2010-02-04 18:06:36)
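As a hedged starting point for "how could I check what the problem is", these are the commands commonly used to see what is left of an ext superblock and where the backup copies should live; device names follow the USB-boot naming above, and nothing here writes to the disk (the dry-run and read-only flags below exist for exactly that reason):
file -s /dev/sdb2                    # does the partition still look like ext4 at all?
dumpe2fs /dev/sdb2 | head            # dump whatever is readable of the primary superblock
mke2fs -n /dev/sdb2                  # dry run only: lists where the backup superblocks should sit
e2fsck -n -b 32768 /dev/sdb2         # read-only check against one of the listed backups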

Actually I don't think it's a problem like that...
After I last halted the system (and the halting process went fine, I didn't have to force anything) nothing happened to the machine (I didn't even move it)... Besides, it's not an old machine; I've only had it for a few months (consider that this Arch install is the first OS I put on it, and when I installed it ext4 was already out): it's highly unlikely that it reached the write limit...
Moreover, it looks like other users have had similar problems with filesystems in the same period:
http://bbs.archlinux.org/viewtopic.php?id=90179
http://bbs.archlinux.org/viewtopic.php?id=90292 (not sure how related this is; anyway it makes me think something about filesystems/partitions happened...)
    @leberyo: what FS were you using?

  • XFS Root Filesystem on LVM Stripe - Corruption

I want to create an LVM stripe across two SATA drives.  The problem I keep running into is, on boot, Arch reports that an fsck needs to be run on the root filesystem, so it mounts it read-only and allows me to type in the root password to get shell access.   This all happens after I do the initial Arch install, reboot, log in normally, do a pacman -Syu, and reboot.
    Here's what I have and what I did...
    I have two 320G SATA Drives plugged into my system board (an ABIT AN-M2HD).  My Bios is set to "AHCI Linux" for the SATA mode.
    Boot the latest Arch CD install at command prompt:
    cfdisk /dev/sda
    create 125M Partition (for /boot)        /dev/sda1
    create 319G Partition                      /dev/sda2
    cfdisk /dev/sdb
    create 320G Partition                     /dev/sdb1
    lvm pvcreate /dev/sda2 /dev/sdb1
    lvm vgcreate lvmgrp0 /dev/sda2 /dev/sdb1
    lvm lvcreate -i2 -I4 -L1G -nswap lvmgrp0
    lvm lvcreate -i2 -I4 -L594G -nroot lvmgrp0
    /arch/setup  (I'm going to abbreviate this)
    Hard Drive Setup (chose option 3)
        Set swap to swap LV
        Set root to root LV (xfs)
        Set /dev/sda1 to /boot (ext3)
/etc/rc.conf USELVM="yes"
Kernel Params = Boot from LVM Support? I answer Yes
    Grub Looks Good.
Everything seemingly installs fine; I can reboot and Arch comes right up.  But after I do pacman -Syu everything goes to pot.
I suspect I have something wrong in the alignment between the LVM stripe and XFS, but I don't know...
Alternatively, should I set my LVM stripe size to a value larger than 4? I thought that since the max XFS block size on i686 is 4096 (4K), they should match. Any thoughts on this?
    help! ;-)
    Ether..
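On the alignment question, the stripe geometry can be handed to mkfs.xfs explicitly instead of being guessed. A hedged sketch, using a 64 KiB stripe size as an example value (not the 4 KiB used above) and the volume names from the steps above:
# create the striped LV with a 64 KiB stripe size instead of 4 KiB
lvm lvcreate -i2 -I64 -L594G -nroot lvmgrp0
# tell XFS about the stripe geometry: su = stripe unit per disk, sw = number of stripes
mkfs.xfs -d su=64k,sw=2 /dev/lvmgrp0/root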

I just wanted to add an update.  It wasn't an LVM issue; it was an XFS issue.
I went to Arch 64-bit for a while and everything (above) worked just fine. Between the usability and lib32 issues, 64-bit lost its luster, and last weekend I went back to 32-bit Arch.
In going back to 32-bit I followed my partitioning procedure above just as I had with the 64-bit version, and after rebooting from the install and updating with pacman, the same thing occurred: the filesystem would start to corrupt, pacman would complain that libraries were truncated, and so forth.    After a reboot, the root partition would not mount read/write because it was not clean...
    I reinstalled Arch 32 bit 6 times on Saturday trying various things in a scientific way and came to this conclusion:
__and I have no idea why__, but when I use the XFS filesystem on an install with Arch 32-bit, it hoses up.  "You're crazy," you might say, but trust me, I've tried it different ways more than a dozen times.  (I'm no n00b either; I've been an RHCE since 2003 and a daily Linux user since 1999.)  Seems crazy, but XFS was a constant in all of my testing.  The only thing I can figure is that it's my hardware, and that's just a guess.  64-bit worked without a hitch with XFS.  I even got desperate and installed / on a lone 60 GB partition (/dev/sda1), completely deleting all LVMs, and it still corrupted with XFS and 32-bit.
    The Solution?
    I used JFS.  I have created the LVM Stripe as I have described above and it works great.  No issues.
    Weird..
    For the sake of knowledge I have a:
    AMD 5000+ Black Edition CPU
    Abit AN-M2HD Motherboard
    2 Gig Dual Channel RAM (2x1Gb)
2 x 320 GB SATA drives
    1 SATA DVD Writer
    -Ether..
    Last edited by EtherNut (2008-01-29 14:54:01)

  • Kinit mounts root filesystem as read only [HELP][solved]

    hello
I've been messing around with my mkinitcpio trying to optimize my boot speed. I removed some of the hooks; at first I couldn't boot at all, but now I can boot and the root filesystem is mounted read-only. I've tried everything: my fstab looks fine, / exists with defaults, and I've tried mounting it by its UUID and by its name with the same result. It mounts the filesystem read-only every time, no matter what I do.
There are no logs since I started playing with mkinitcpio, and I've searched everywhere in this forum and around the internet but can't find any solution that works. I restored all the hooks and modules in mkinitcpio and the result is still the same. I also changed menu.lst in GRUB to vga=773, but that's about it.
Can anyone help with this, please? I can't seem to boot properly.
    Regards
    Last edited by ricardoduarte (2008-09-14 16:16:25)
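One hedged thing to double-check: restoring the hooks in /etc/mkinitcpio.conf does nothing until the image is actually regenerated, and the image name has to match what menu.lst loads. A sketch assuming the stock kernel26 preset of that era (compare the hook list with your mkinitcpio.conf.pacnew rather than taking it verbatim):
# in /etc/mkinitcpio.conf, restore something close to the 2008-era default hook list
HOOKS="base udev autodetect pata scsi sata filesystems"
# then rebuild the image that menu.lst actually loads (kernel26 was the stock preset name)
mkinitcpio -p kernel26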

    Hello
Basically what happens is that it loads all the udev events, then the loopback; it mounts the root read-only, and then when it checks the filesystems it says:
    /dev/sda4: clean, 205184/481440 files, 1139604/1920356 blocks [fail]
    ************FILESYSTEM CHECK FAILED****************
    * Please repair manually and reboot. Note that the root *
    * file system is currently mounted read-only. To remount *
    * it read-write: mount -n -o remount,rw / *
* When you exit the maintenance shell the system will *
    * reboot automatically. *
Now what bugs me is that I can do that mount -n -o remount,rw / with no problems, and when I do
e2fsck -f /dev/sda4
it doesn't return any errors; it just says 0.9% non-contiguous.
None of this makes sense to me!! That's why I thought the problem could be coming from mkinitcpio or something.
Any ideas?
    Thanks for your help, btw thanks for the quick reply
    Regards
    Last edited by ricardoduarte (2008-09-14 15:48:49)

  • Firewire hard drive won't mount - Filesystem verify or repair failed.

Hi all - over the last day my LaCie hard drive won't mount - I have tried it on various computers with the same result.
Disk Utility can see it but won't mount or repair it, reporting the following error:
    Filesystem verify or repair failed.
    any help would be appreciated
    best
    darren/uk

    Try Disk Utility first. It's in your Applications/Utilities folder.
You'll want to select the disk in question, then Repair Disk (not Permissions). This actually repairs the File System that's on the disk.
    If that works, you're ok, unless it begins happening frequently. Then you need to save up for a new one.
    If DU can't fix it, then one of the others might. Disk Warrior is about $100, and many people here say it's the best. There are others for less money, but there's no guarantee that any of them will succeed.
You can also try just erasing it, and choose Security Options, then Zero Out Data. If that fails, or you have more problems with the drive, you need a new one.

  • Bootpartition won't mount, fsck finds no errors

    Hi guys,
Got a strange situation here. My boot partition /dev/sda2 won't mount in Arch anymore.
    mount /dev/sda2:
    mount: wrong fs type, bad option, bad superblock on /dev/sda2,
    missing codepage or helper program, or other error
Sometimes the syslog provides valuable information - try
dmesg | tail or so
The same error appears during boot. I don't know how Arch can boot without the boot partition, but it does...
    Another strange thing is that it mounts fine in Mandriva running on the same machine. Also testdisk under Windows didn't complain.
    dmesg | tail
EXT2-fs (sda2): error: corrupt root inode, run e2fsck
    e2fsck /dev/sda2
    e2fsck 1.41.14 (22-Dec-2010)
archboot: clean, 43/36288 files, 33179/144584 blocks
    grep -i archboot /etc/fstab
    LABEL=archboot /boot ext2 defaults 0 1
    file -s /dev/sda2
/dev/sda2: Linux rev 1.0 ext2 filesystem data, UUID=c477713e-ff47-40f3-92f0-537291f24937, volume name "archboot"
    So what now? Any help is appreciated.

No changes to any partitions were made; I just did a fatal "pacman -Syu"...
I removed /boot completely from fstab and tried to mount the partition manually.
I also recreated the whole partition under Ubuntu: copied the files out, deleted the entire partition, recreated it, and copied the files back.
Arch Linux still can't mount it.
Also, when I try to mount my JFS partition, the "mount" command hangs completely. Even "kill -SIGKILL $pid" can't get rid of it.
The JFS partition also mounts fine under Ubuntu; I ran tests on every partition under Ubuntu and they're all clean.
Mounting the JFS partition doesn't give any logfile entry; dmesg is also empty. It's just hanging. The funny thing is that ext4 and vfat partitions mount fine.
Could something be wrong with hal or dbus?
Here's the e2fsck output:
    e2fsck -pvf /dev/sda8
          35 inodes used (0.15%)
           4 non-contiguous files (11.4%)
           0 non-contiguous directories (0.0%)
             # of inodes with ind/dind/tind blocks: 10/6/0
       35980 blocks used (37.34%)
           0 bad blocks
           0 large files
          22 regular files
           4 directories
           0 character device files
           0 block device files
           0 fifos
           0 links
           0 symbolic links (0 fast symbolic links)
           0 sockets
          26 files
--------EDIT:
Finally found it!
For some reason (I don't know why) the /boot partition was not mounted during the update, and thus the kernel26 image was written into the /boot directory of the root partition instead of the /boot partition (where it belongs). This means that on the next bootup the system was booting the older kernel with the updated system. Maybe some things in filesystem handling changed between the two kernel versions.
I mounted the Arch partitions within Ubuntu and copied the files from the /boot directory to the partition which should be mounted at /boot on startup.
Everything's working now.
    Last edited by thok (2011-03-22 16:44:32)
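For anyone hitting the same thing, a hedged sketch of the equivalent fix from within Arch itself, assuming /boot is back in fstab (kernel26 was the kernel package name at the time of this thread):
# make sure the real boot partition is mounted before touching the kernel
mount /boot
# reinstall the kernel so the image in /boot matches the installed modules again
pacman -S kernel26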

  • [SOLVED] df -h does not reflect true root filesystem size

    Why does df -h show root filesystem as being only 20G?
    df -h
    Filesystem Size Used Avail Use% Mounted on
    /dev/mapper/cryptroot 20G 15G 4.6G 76% /
    dev 7.7G 0 7.7G 0% /dev
    run 7.7G 668K 7.7G 1% /run
    tmpfs 7.7G 70M 7.7G 1% /dev/shm
    tmpfs 7.7G 0 7.7G 0% /sys/fs/cgroup
    tmpfs 7.7G 224K 7.7G 1% /tmp
    /dev/sda1 239M 40M 183M 18% /boot
    tmpfs 1.6G 8.0K 1.6G 1% /run/user/1000
That is what my df -h output looks like. My setup is full disk encryption using dm-crypt with LUKS, per the guide on the Arch wiki. I basically created one /boot partition and left the rest of the disk as an encrypted partition for the root filesystem. So why is my system complaining about running out of space (and acting as if it is)? Have I forgotten something?
    Thank you for reading this. Let me know if you need any more logs or info on my setup - I realise I haven't provided very much info here, but I can't think of what to provide.
    Last edited by domentoi (2014-12-24 19:02:32)

    This is lsblk:
    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    sda 8:0 0 465.8G 0 disk
    ├─sda1 8:1 0 250M 0 part /boot
    └─sda2 8:2 0 465.5G 0 part
    └─cryptroot 254:0 0 465.5G 0 crypt /
    sdb 8:16 0 14.9G 0 disk
    └─sdb1 8:17 0 14.9G 0 part
    and fdisk -l
    Disk /dev/sdb: 14.9 GiB, 16013942784 bytes, 31277232 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x5da5572f
    Device Boot Start End Sectors Size Id Type
    /dev/sdb1 2048 31275007 31272960 14.9G 73 unknown
    Disk /dev/sda: 465.8 GiB, 500107862016 bytes, 976773168 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disklabel type: dos
    Disk identifier: 0xa5018820
    Device Boot Start End Sectors Size Id Type
    /dev/sda1 * 2048 514047 512000 250M 83 Linux
    /dev/sda2 514048 976773167 976259120 465.5G 83 Linux
    Disk /dev/mapper/cryptroot: 465.5 GiB, 499842572288 bytes, 976255024 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk /dev/sdc: 1.4 TiB, 1500301908480 bytes, 2930277165 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x445e51a8
    And graysky: I thought I made the partition for /boot 250 MB and the encrypted partition 465.5 GB but I'm now quite sure I did something wrong...
    Thank you all
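A hedged guess at what happened and how it is usually checked: the filesystem inside the mapping may simply have been created (or left) at about 20 GiB, even though the LUKS container already spans the whole 465.5 GiB partition, as the lsblk output shows. Assuming the root filesystem is ext4, it can normally be grown in place to fill the mapping:
# compare the size of the mapping with the size the filesystem thinks it has
lsblk /dev/mapper/cryptroot
# grow the ext4 filesystem to fill the mapping (growing works while mounted)
resize2fs /dev/mapper/cryptroot
df -h /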

  • Urgent - I think I just lost everything on my MacBook Pro. Hard drive won't mount and booting up shows "no" symbol

    My MacBook Pro (13" mid 2010, Yosemite) doesn't feel like booting up. It gives me the no symbol after the progress bar is about 1/4 of the way through.
    Additionally, it won't mount in the Recovery HD. It shows up as disk0s6 and clicking Mount does nothing. Can't seem to mount it in the Terminal either.
    The disk partition is encrypted, so unfortunately I can't pull the hard drive out of my MBP and pull the stuff off it with my PC.
    I tried First Aid - "no problems found" (such LIES!)
    I tried PRAM - didn't work
    I tried Verbose Mode - went fine for the first minute, then hung for a while before returning to the "No" screen and in the bottom right corner "Still waiting for root drive".
    I tried SMC - didn't work
Most recently I did install rEFInd, which is a boot manager. I was afraid it had screwed up the EFI partition - seemingly not, as it will boot to Recovery and external drives.
    The data still seems to be there, but the OS is having trouble accessing it.
    Do you know what would be causing this? I really need that data back :/

    It appears I can unlock the partition, using diskutil via the Terminal (diskutil coreStorage unlockVolume <UUID> -passphrase <passphrase>).
    However the volume doesn't have a UUID. It lists the partition type as "FFFFFFFF-FFFF-FFFF-FFFF-FFFFFFFFFFFF" (which is a UUID) but using that doesn't work.
Additionally, when I use diskutil coreStorage list, it comes up with nothing. I'm at a loss here; the partition doesn't even know its own UUID or that it's encrypted. It seems quite damaged to me, but I don't have the slightest idea why, as I CAN boot to it and enter the password, but it will stop at the "no" symbol part of the way along.

  • Upgraded to Mavericks. Now External HDD Enclosure Won't Mount. Is there a fix?

Upgraded to Mavericks.  Now I can't get my external HDD enclosure to mount.  A little about this enclosure, as it's not the usual setup.  I have four 3TB HDDs in a RAID5 setup.  The most unique part is that they are formatted as a Windows NT Filesystem (Tuxera NTFS).  Basically, when I set this up, I had to get a lot of info from a Windows-formatted HDD, but also be able to read/write afterwards. It's worked fine for ~1 year, but now that I've upgraded to Mavericks, nothing will mount.  I ran Disk Utility, where I see the HDDs (I've partitioned them into four drives of varying sizes), but they are greyed out.  I tried to mount them, but it says it won't and that I should run First Aid.  So I did... tried to "Repair Disk," but it just gets stuck on "Updating boot support partitions for the volume as required."  For over an hour it's been on that message.
    So, does anybody out there have a fix?  I've read several similar posts about people having a hard time mounting and, overall, dealing with external HDDs after upgrading to Mavericks.  But nothing seemed to fit my exact circumstances.  Lastly, I should mention that I've used both Firewire and USB connections.  Both resulted in the same.
    Thanks.

    I can't help with specifics for your issue, but some general thoughts:
The upgrade and the disk problem may be a coincidence. It would help you if you could verify that the drive still works fine on a Windows machine. If you run a Windows virtual machine or have a Windows PC, or a friend does, verify full functioning that way. If it still won't mount on the Mac, try to copy/back up its data onto the Windows machine (maybe to another external drive connected to it). You could then scrub the disk in Windows, leaving it unpartitioned and unformatted. Connect it back to the Mac, format it as HFS+ (Mac OS Extended, Journaled), and transfer the data back to it by sharing the Windows machine's external drive and connecting the two machines with a network cable.

  • External Hard Drive won't mount - tried Disk Warrior, am backing up as a .dmg...will this work?

    Hi Guys,
    Hopefully someone can help! Fairly desperate situation here...
    I backed up around 700GB of data to an Iomega external hard drive (supplied by the client), using my Macbook Pro...all was fine when I unmounted the drive at the end of the backup.
    The client took the drive away and called me to say he mounted the drive in Windows (on a PC), and he was able to work with the files for a time, but now the drive won't mount...
    I collected the drive, hooked it up to my Mac Pro, but OSX tells me it can't mount the drive. It shows up in Disk Utility in the sidebar, but not on the desktop...
    I tried repairing the disk using Disk Utility and with Disk Warrior, both tell me the disk cannot be repaired...
    I am currently making a backup of the drive as a .dmg file, using a program called Disk Drill...it's slow as I expected, but I'm still not sure the .dmg will mount...because surely it's just making a copy of the data exactly, of which some part is not mounting (a mount directory or something???)
    Is there any way you guys can think of me somehow taking the data and backing it up onto another disk?
Specialist recovery services seem to think there shouldn't be any problem recovering the data, and of course, as a costly last resort, that's what I'll have to do, but does anyone know how they might go about doing it? Is it something vastly complicated, such as a command line thing?
    Sorry about the length of this question...just hope someone can help!
    Thanks in advance...

    You didn't say what format (filesystem type) the disk was.
    However, I'd say your analysis is probably correct. Something has corrupted the disk. You can make a disk image copy to the .dmg which will preserve the data, such as it is, but I doubt extremely if it will fix any file system corruption.
    Furthermore, if both Disk Utility and Disk Warrior have said the disk is unreadable I'm dubious that anything else will magically repair it and that recovery probably consists of snuffling through the disk block by block.
    However if the filesystem is FAT (Windows) there may be other (Windows) utilities that can fix it.
    You said you 'backed up the data'. Does that mean you still have the original data on your MBP? Are you trying to recover the client's changes?

  • [SOLVED] Arch won't mount my hard drives correctly, problems at boot.

Sometimes (only sometimes) Arch Linux will have an error during bootup because my Linux drive gets detected as sdb instead of sda. My NTFS SATA storage drive gets detected in its place as sda.
    When this happens, Arch Linux will stop booting at the "checking filesystems" step:
    :: Checking Filesystems [BUSY]
    /dev/sda3:
The superblock could not be read or does not describe a correct ext2
    filesystem. If the device is valid and it really contains an ext2
    filesystem (and not swap or ufs or something else), then the superblock
    is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
    [FAIL]
    **************** FILESYSTEM CHECK FAILED ****************
    * Please repair manually and reboot. Note that the root *
    * file system is currently mounted read-only. To remount *
    * it read-write type: mount -n -o remount,rw / *
    * When you exit the maintenance shell the system will *
    * reboot automatically. *
    Give root password for maintenance
    (or type Control-D to continue):
And from there I have to remount my drives manually or restart the computer. Every time I start the computer, I simply have to hope that my Linux drive will come up as sda. It's totally hit and miss.
Now, I know that my superblock is not corrupt. fsck is failing because it is looking for my Linux filesystem on sda. It is encountering my NTFS SATA drive on sda instead of the expected Linux ext filesystem.
    So how do I know that this is happening?
    Well, after giving the root password, it shows the root prompt
    [root@(none) ~]#
and I proceed to use the lshw command to see what's up with the drives:
    [root@(none) ~]# lshw -short | grep /dev/
    /0/6/0.0.0 /dev/sdb disk 120GB WDC WD1200JB-00E
    /0/6/0.0.0/1 /dev/sdb1 volume 101MiB Linux filesystem partition
    /0/6/0.0.0/2 /dev/sdb2 volume 258MiB Linux swap volume
    /0/6/0.0.0/3 /dev/sdb3 volume 7506MiB EXT4 volume
    /0/6/0.0.0/4 /dev/sdb4 volume 104GiB EXT4 volume
    /0/6/0.1.0 /dev/sdc disk 81GB Maxtor 6Y080P0
    /0/6/0.1.0/1 /dev/sdc1 volume 76GiB Windows NTFS volume
    /0/8/0.0.0 /dev/sda disk 640GB Hitachi HDT72106
    /0/8/0.0.0/1 /dev/sda1 volume 596GiB Windows NTFS volume
So, clearly, this shows that my Linux drive has come up as sdb and my NTFS SATA drive has come up as sda. It's totally random: sometimes they come up the other way around and the system boots just fine.
    When Arch does happen to mount itself properly as sda and the system starts successfully, then the lshw command shows this:
    [root@(none) ~]# lshw -short | grep /dev/
    /0/6/0.0.0 /dev/sda disk 120GB WDC WD1200JB-00E
    /0/6/0.0.0/1 /dev/sda1 volume 101MiB Linux filesystem partition
    /0/6/0.0.0/2 /dev/sda2 volume 258MiB Linux swap volume
    /0/6/0.0.0/3 /dev/sda3 volume 7506MiB EXT4 volume
    /0/6/0.0.0/4 /dev/sda4 volume 104GiB EXT4 volume
    /0/6/0.1.0 /dev/sdb disk 81GB Maxtor 6Y080P0
    /0/6/0.1.0/1 /dev/sdb1 volume 76GiB Windows NTFS volume
    /0/8/0.0.0 /dev/sdc disk 640GB Hitachi HDT72106
    /0/8/0.0.0/1 /dev/sdc1 volume 596GiB Windows NTFS volume
The correctly detected arrangement above shows the drives in the same order as my hard disk boot priority in the BIOS, as well as the same order as the initial drive detection directly following the memory test (I don't know if that has anything to do with it, though).
So my question is this: how do I ensure that Arch Linux comes up as sda ALL of the time, and not randomly?
Or should I remove my sda entries in /etc/fstab and let Arch determine where my Linux filesystems are? If so, how?
    It's interesting to note that GRUB is set to boot Arch Linux from hd0, which should be sda.
    It's also intriguing to note that if I take out my Sata drive, I never encounter this problem.
    Last edited by trusktr (2010-06-15 07:49:31)

Thanks kgas, here's what I found out so far:
Alright, so after rebooting, this is what I determined:
When Linux detects the drives incorrectly (note the device name at the end of each line: the short identifiers are NTFS filesystems, the long ones are the Linux filesystems):
    [trusktr@rocketship ~]$ ls -l /dev/disk/by-uuid
    total 0
lrwxrwxrwx 1 root root 10 Jun 14 21:08 01CA836D8BE82040 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Jun 14 21:08 0ddf0e41-e7e6-4af5-b0e9-bc79a91b12eb -> ../../sdb1
lrwxrwxrwx 1 root root 10 Jun 14 21:08 92b88528-dd0f-4c1b-bcce-54084ef2aceb -> ../../sdb4
lrwxrwxrwx 1 root root 10 Jun 14 21:08 C838CF5838CF4462 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jun 14 21:08 cdb33de5-0100-4c5f-a9b1-5c1a444e6eac -> ../../sdb3
lrwxrwxrwx 1 root root 10 Jun 14 21:08 d0a5d49d-169d-43ce-af0f-216dc4a9f604 -> ../../sdb2
So an NTFS filesystem ends up on sda instead of the Linux filesystem.
When Linux detects everything properly:
    [trusktr@rocketship ~]$ ls -l /dev/disk/by-uuid
    total 0
lrwxrwxrwx 1 root root 10 Jun 14 21:08 01CA836D8BE82040 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Jun 14 21:08 0ddf0e41-e7e6-4af5-b0e9-bc79a91b12eb -> ../../sda1
lrwxrwxrwx 1 root root 10 Jun 14 21:08 92b88528-dd0f-4c1b-bcce-54084ef2aceb -> ../../sda4
lrwxrwxrwx 1 root root 10 Jun 14 21:08 C838CF5838CF4462 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Jun 14 21:08 cdb33de5-0100-4c5f-a9b1-5c1a444e6eac -> ../../sda3
lrwxrwxrwx 1 root root 10 Jun 14 21:08 d0a5d49d-169d-43ce-af0f-216dc4a9f604 -> ../../sda2
This doesn't tell us much except that I do indeed have UUIDs for all the drives.
So, I guess as kgas said, I probably need to use the UUID in fstab so that Linux always knows which hard drive is the Linux drive! In that case, only the UUID for the Linux drive will be necessary. For the other drives it wouldn't matter so much, I guess, since they don't contain the operating system.
Alright, I'll be back to confirm whether this fixes it!
    Last edited by trusktr (2010-06-15 06:41:25)
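A hedged sketch of what the UUID-based fstab entry could look like, using the UUIDs from the listings above; the mapping of UUID to mount point is an assumption based on the partition sizes in the lshw output (the 7.5 GiB ext4 volume as /), so verify with blkid before relying on it:
# /etc/fstab - identify the root filesystem by UUID so the sda/sdb shuffle no longer matters
# (the other entries follow the same pattern; confirm UUIDs and filesystem types with blkid)
UUID=cdb33de5-0100-4c5f-a9b1-5c1a444e6eac  /  ext4  defaults  0  1
If the initramfs itself still struggles to find the root device, the kernel line's root= parameter in menu.lst can be pointed at /dev/disk/by-uuid/<uuid> in the same spirit.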
