Changing Root Filesystem Causes Much Pain

I've just gone through a massive repartitioning, with the net result being this:
/:
     was: /dev/sda2 reiserfs
     is: /dev/sda4 ext3
/home:
     was: /dev/sda3 reiserfs
     is: /dev/sda3 ext3
/boot:
     was and still is: /dev/sda1 ext2
I can boot from the Arch installation disk using root=/dev/sda4, so that's all good.
The problem is getting my existing installation to boot. Even if I edit the boot entry in GRUB so that instead of "root=/dev/sda2", I have "root=/dev/sda4", the boot fails with:
init: Cannot open root device sda4(8,4)
init: init not found!
kernel panic - not syncing: Attempted to kill init!
Do I need to regenerate my initrd.img or something? I can't do it from the Arch install disk because it doesn't seem to support ext2...

You probably don't need to change anything, just regenerate. The 'autodetect' hook that you probably have in the HOOKS array in mkinitcpio.conf causes only the modules that are actually needed to be included in the initramfs. The last time you regenerated, that was the reiserfs module; when you regenerate now, it should include ext3 automatically.
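From the install disk, regenerating would look roughly like this (a sketch using the device names from the post above; the preset name depends on the kernel package, so treat it as an assumption):

```shell
# Boot the install disk, then mount the new root and boot partitions and chroot in
mount /dev/sda4 /mnt
mount /dev/sda1 /mnt/boot
chroot /mnt
# Rebuild the initramfs so the autodetect hook picks up ext3 this time
mkinitcpio -p kernel26    # preset name is an assumption; check /etc/mkinitcpio.d
```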

Similar Messages

  • [SOLVED] df -h does not reflect true root filesystem size

Why does df -h show the root filesystem as being only 20G?
    df -h
    Filesystem Size Used Avail Use% Mounted on
    /dev/mapper/cryptroot 20G 15G 4.6G 76% /
    dev 7.7G 0 7.7G 0% /dev
    run 7.7G 668K 7.7G 1% /run
    tmpfs 7.7G 70M 7.7G 1% /dev/shm
    tmpfs 7.7G 0 7.7G 0% /sys/fs/cgroup
    tmpfs 7.7G 224K 7.7G 1% /tmp
    /dev/sda1 239M 40M 183M 18% /boot
    tmpfs 1.6G 8.0K 1.6G 1% /run/user/1000
    That is what my df -h output looks like. My setup is full-disk encryption using dm-crypt with LUKS, per the guide on the Arch wiki. I basically created one /boot partition and left the rest of the disk as an encrypted partition for the root filesystem. So why is my system complaining, and acting, as if it's running out of space? Have I forgotten something?
    Thank you for reading this. Let me know if you need any more logs or info on my setup - I realise I haven't provided much info here, but I can't think of what else to provide.
    Last edited by domentoi (2014-12-24 19:02:32)

    This is lsblk:
    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    sda 8:0 0 465.8G 0 disk
    ├─sda1 8:1 0 250M 0 part /boot
    └─sda2 8:2 0 465.5G 0 part
    └─cryptroot 254:0 0 465.5G 0 crypt /
    sdb 8:16 0 14.9G 0 disk
    └─sdb1 8:17 0 14.9G 0 part
    and fdisk -l
    Disk /dev/sdb: 14.9 GiB, 16013942784 bytes, 31277232 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x5da5572f
    Device Boot Start End Sectors Size Id Type
    /dev/sdb1 2048 31275007 31272960 14.9G 73 unknown
    Disk /dev/sda: 465.8 GiB, 500107862016 bytes, 976773168 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disklabel type: dos
    Disk identifier: 0xa5018820
    Device Boot Start End Sectors Size Id Type
    /dev/sda1 * 2048 514047 512000 250M 83 Linux
    /dev/sda2 514048 976773167 976259120 465.5G 83 Linux
    Disk /dev/mapper/cryptroot: 465.5 GiB, 499842572288 bytes, 976255024 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk /dev/sdc: 1.4 TiB, 1500301908480 bytes, 2930277165 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x445e51a8
    And graysky: I thought I made the partition for /boot 250 MB and the encrypted partition 465.5 GB, but I'm now quite sure I did something wrong...
    Thank you all
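One way to narrow down where the 20G figure comes from is to compare the size of the opened LUKS mapping with the size the filesystem itself reports; a sketch, assuming an ext4 root inside the cryptroot mapping shown above:

```shell
# Size of the opened LUKS mapping, in bytes
blockdev --getsize64 /dev/mapper/cryptroot
# Size the filesystem believes it has (block count x block size)
dumpe2fs -h /dev/mapper/cryptroot 2>/dev/null | grep -iE 'block (count|size)'
# If the filesystem turns out smaller than the device, it can be grown to fill it:
# resize2fs /dev/mapper/cryptroot
```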

  • A seeming small change by Verizon causes major monetary and unrecoverable damages

    Back, I think in March or so, Verizon changed how they bill. Prior to this time frame my elderly mom paid the phone bill and I paid the DSL bill. The DSL bill came on my name and I paid by credit card and the phone bill came under her name and she paid by direct debit. She will not use a computer.
    So, they make both bills on her name and won't change it. They did, however, allow the payment from my credit card.
    The first unfortunate consequence was that the internet speed suddenly dropped by a factor of 2, to 1.5/364, and it took me probably a month to notice it. NO ONE that I called was able to find the problem. It took multiple attempts until some bright "call in technician" said the problem is on their end and could be fixed in software. Thus, I believe, because of the name change, the provisioning changed too. The phone line crapped out at this time too.
    The troubleshooting was stupid too. I took another modem, connected it directly to the NID and directly to the laptop, and the technician on the phone said it's either in your house or your modem, and wanted to send me a new modem, which they did. I don't get it. This is the wrong conclusion.
    This shows the need of modems like Westell 2200 and the Westell 6100 because of the ability to measure line statistics and provisioned data rates.
    Verizon sent me a D-link model DSL-2750B modem which I have to return because its just plain old stupid. It boasts Wireless n and the wired ports are 10/100. What's up with that? Next, there are no line stats and a horrible interface, akin to Verizon's website. Still can't figure out how to put it in Bridge mode. I own a D-link repeater and the interface is fine.
    I asked them what modem they would send me and they would not tell me, so I got whatever piece of junk they had lying around.
    I have now purchased a Network RAID array with no wireless access and with ~400 mb/s access rates. Yep, really stupid router.
    Because of the number of outages, I have to be able to quickly switch from a bridge-connected modem (Westell 6100) to one that is connected in direct mode (a Westell 2200). It's almost easy, but not quite, since the equipment is up in the rafters in the laundry room. The DSL cable is commercially terminated CAT5, which is rare. The modem is within 6' of the NID.
    Plans include, the ability to switch the DSL line with a switch with positions for Modem #1 (Bridge), Modem #2(Wired), #3(RJ11 – Normal wiring) and #4: RJ-11 (Reverse wiring) and moving the direct connect jack for easy access. In any event, testing with the DSL modem at the NID is the gold standard.
    In progress, is a 48 port Gigabit patch panel for phone and Ethernet, A separate 24 port RED patch panel will be bridged for phone.
    I have already upgraded my network infrastructure to include a UPS for the network infrastructure and POE (Power over Ethernet) for the DSL modem. The UPS backup does not backup a repeater although it may in the future.
    Now, let's go back to this seeming innocuous billing change.
    So, there's this constant need to access the modem in the alternate configuration, plus a medical condition (migraines) that makes it hard to think clearly. In one of these episodes, I placed my laptop on top of a container that was soaking laundry.
    So, the LCD screen had to be replaced at $300 and the battery had to be replaced. Then the Ethernet cord to the modem got pulled and caused the laptop to fall on the floor. It worked for a few weeks and then suffered an UNRECOVERABLE hard drive crash.
    During this time frame, in fact over a 2-day period: my mom ended up in the hospital and then a rehab facility; my car ended up staying in the shop for a few days because of parts unavailability; my hard drive crashed; and there was water in the basement. I don't have time for this.
    Recovery would have cost $2004.00, if it were possible. I only have a drive image from 2011 which I have yet to try to use. I'm now running UBUNTU Live rather than Windows 7 Pro. Some of the very important stuff was backed up on Flash, but a lot of stuff is gone forever.
    Plans are to contact the billing office and the attorney general's office in the state where billing questions need to be resolved.
    The last outage, fixed on July 27, was flagged as a phone outage by Verizon, when 18 routers supposedly died. 18 routers suspiciously suggests that they were not adequately protected against surges. Verizon should look at the way DelMarVa Power reports outages: see Delmarva (put the usual World Wide Web prefix and the COMmercial domain suffix).
    For god's sake, quit making the website circular. I thought Yahoo was the expert in that. I never visit Verizon's website unless there are problems. I mistakenly thought I could find outage information, say using my phone's browser. This time I had to purchase service from CLEAR internet to ride me through this fix.
    I know, you can't fix stupid and you can always find a better idiot. Comcast is looking more attractive.
    Compensation for my loss I will probably never get, but maybe with the help of the attorney general I can get the billing fixed. I do have power of attorney. Now, of course, the name the Verizon website addresses me by is not right on this forum.
    All, basically, because of Verizon's unreliable network, incompetent people, and the lack of maintenance of copper lines. The problem is, "We have little choice". Currently, we need traditional land-line service to support an Emergency Response system. It's unclear whether digital phone lines support faxes and alarm monitoring reliably.
    I'll bet that there is NO WAY to tell Verizon that a service address has special needs, like Emergency Medical reporting via phone or Internet, so that priorities for repair can be set properly. Utilities have that capability. Those areas with people with oxygen generators or other medical equipment get a higher priority.

    That should not be happening.  I sympathize with both of you (you and the person who started this thread).  That sounds like the reason I would never want Dish.
    I am fortunate that my service with Verizon Fios has been very reliable and that I haven't had too many issues over the years.  But don't get me started on Verizon Wireless.  I am through with doing any more business with them once my cell phone minutes run out.  I love it when they say they have done everything they could to investigate such a simple issue with running my debit card transaction wrong and blaming it on my bank, i.e. running it through as a "debit" when I chose "credit", all of a sudden, twice in a row, after 4 years of doing it right as "credit".  And now they consider the issue closed on their end when nothing has been resolved.  That's no way to treat a long-time customer, and I really felt I was treated like a bother to them.  It's not like I ever paid them much money for this pre-paid cell phone service that I have used infrequently, but I was really mad at the way they just wrote this off and pretty much told me "too bad".  Talk about "customer non-service".  It's also very hard to track the number of minutes left on my balance.  I'm due for an upgrade anyway.  I want to switch to the iPhone from this flip phone, but never again with Verizon Wireless.  T-Mobile is way more attractive.  Verizon Wireless is not only an entity separate from Fios, it's definitely a much different animal, and their customer service has really gone downhill.  When you're treated like they don't care, it's maddening.
    My customer service experience with Fios overall has been far superior to that most of the time.  My setup is simple and even though there are some people who will inevitably complain about anything just to vent, I definitely believe at least some of what is posted here.  And switching TV, phone, and Internet providers is a lot more involved than switching a cell phone provider. 

  • Moving root filesystem to another hard drive

    Hi everyone,
    I've been using Arch Linux for a few months now and it is by far my favorite Linux distribution.
    I currently have Arch on an old Pentium III, with a whopping 20 GB IDE hard drive.
    I'm building a new computer, and want to use a larger 160 GB SATA hard drive, and I was wondering what is the best way that I can just transfer my entire root filesystem so I don't have to bother copying all the scripts, programs, etc.
    Can I just tarball my root directory, partition my new hard drive, and use some other LiveCD to copy the root partition? I know this shouldn't be that difficult, because everything in the root partition is in /, unlike Windows, which has association problems with the registry.
    Thanks!

    I'd never want to lead anyone down the garden path without at least trying this myself, so for the last hour or so I've done just as described above on an old computer I have that I play around with, and a spare small 8 GB hard drive.  I first tried copying the files over with thunar but that didn't work too well (froze up), probably due to trying to copy /sys and /proc--duh.
    I tried again using midnight commander (to install, do #pacman -S mc) and it worked perfectly.  The /dev directory copied over just fine, but don't try copying /proc and /sys.  Just make empty directories so the system can put what it needs there.  Also just make an empty /mnt directory and mount your devices later--otherwise I'm not sure what would happen if you were copying your /mnt/newroot to itself, but I'm sure it wouldn't be good.
    I tried to just install grub on the slave with #grub-install /dev/sdb and it said it installed fine, but when I switched the slave to master and disconnected the old master, grub wouldn't boot (probably because it was installed to /dev/sdb?), so I had to use the arch install disk to install grub.  Then, much to my surprise--like I had any doubts--I rebooted and now I'm posting this from the newly copied Arch with xfce4.
    It might not be a bad idea to back up any really important stuff, but you are just copying, and unless you do something silly, everything should work fine.
    Last edited by bgc1954 (2008-03-11 23:13:58)
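The tarball approach asked about above also works; here is a sketch exercised on a scratch directory standing in for / (on the real machine you would run this from a live CD with both disks mounted; all paths are illustrative):

```shell
# Create a stand-in source tree and an empty destination "partition"
mkdir -p /tmp/oldroot/etc /tmp/newroot
echo "myhost" > /tmp/oldroot/etc/hostname

# -p preserves permissions; --one-file-system keeps /proc, /sys and other
# mounted pseudo-filesystems out of the archive
tar --one-file-system -C /tmp/oldroot -cpf /tmp/rootfs.tar .
tar -C /tmp/newroot -xpf /tmp/rootfs.tar

cat /tmp/newroot/etc/hostname
```

On a real migration you would still need to reinstall GRUB on the new disk and adjust fstab, as the post above describes.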

  • Kinit mounts root filesystem as read only [HELP][solved]

    hello
    I've been messing around with my mkinitcpio trying to optimize my boot speed. I removed some of the hooks; at first I couldn't boot at all, but now I can boot and the root filesystem mounts as read-only. I've tried everything: my fstab looks fine, / exists with defaults, and I tried mounting it by its UUID and by its name with the same result - it mounts the filesystem read-only every time, no matter what I do.
    There are no logs since I started playing with mkinitcpio, and I've searched everywhere in this forum and around the internet and can't find any solution that works. I restored all the hooks and modules in mkinitcpio and the result is still the same. I also changed menu.lst in grub to vga=773, but that's about it.
    Can anyone help with this please? I can't seem to boot properly.
    Regards
    Last edited by ricardoduarte (2008-09-14 16:16:25)

    Hello
    Basically what happens is that it loads all the udev events, then the loopback, and it mounts the root read-only; then, when it checks the filesystems, it says:
    /dev/sda4: clean, 205184/481440 files, 1139604/1920356 blocks [fail]
    ************FILESYSTEM CHECK FAILED****************
    * Please repair manually and reboot. Note that the root *
    * file system is currently mounted read-only. To remount *
    * it read-write: mount -n -o remount,rw / *
    * When you exit the maintenance shell the system will *
    * reboot automatically. *
    Now what bugs me is that I can do that mount -n -o remount,rw / with no problems, and when I do
    e2fsck -f /dev/sda4
    it doesn't return any errors; it just says 0.9% non-contiguous.
    None of this makes sense to me!! That's why I thought the problem could be coming from mkinitcpio or something.
    any ideas
    Thanks for your help, btw thanks for the quick reply
    Regards
    Last edited by ricardoduarte (2008-09-14 15:48:49)
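If the trimmed HOOKS line is the suspect, restoring the stock hooks and regenerating the image the bootloader points at is the usual first step; a sketch (the hook list and image name are assumptions matching that era's defaults, so check your own mkinitcpio.conf and menu.lst):

```shell
# In /etc/mkinitcpio.conf, restore something like the stock line:
# HOOKS="base udev autodetect pata scsi sata filesystems"
# then regenerate the image referenced by menu.lst
mkinitcpio -g /boot/kernel26.img
```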

  • CentOS-based Linux VM running on Hyper-V: checking the root filesystem fails when the kernel switches from the old PV drivers (paravirtualized drivers based on the 2.6.32 Linux kernel) to the new PV drivers (equivalent to Linux Integration Components 3.4)

    hi all,
    I am running a CentOS-based VM on top of a Hyper-V server. I upgraded the Hyper-V PV drivers in the 2.6.32 Linux kernel in order to support Windows Server 2012. Now, on Windows Server 2008, when the kernel switches from the old PV drivers (2.6.32-based) to the new ones (equivalent to Linux Integration Components 3.4), I am hitting the following filesystem check error messages:
    Setting hostname hostname:
    Checking root filesystem
    fsck.ext3 /dev/hda2:
    The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2
    filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
    : No such file or directory while trying to open /dev/hda2
    *** An error occurred during the filesystem check.
    *** Dropping you to a shell; the system will reboot
    *** When you leave the shell.
    Also, when I go into filesystem repair mode, I find strange behaviour when I run these commands:
    (Repair filesytem) 1 # mount
    /dev/hda2 on / type ext3 (rw)
    proc on /proc type proc (rw)
    (Repair filesystem) 1# cat /etc/mtab
    /dev/hda2 / ext3 rw 0 0
    proc /proc proc rw 0 0
    (Repair filesystem) 1# df
    Filesystem 1K-blocks Used Available Use% Mounted on
    /dev/hda2 4%
    I think all of the above commands should show /dev/sda2 instead of /dev/hda2.
    Also, my fstab and fdisk -l look OK to me.
    (Repair filesystem) 1# cat /etc/fstab
    LABEL=/ / ext3 defaults 1 1
    LABEL=/boot /boot ext3 defaults 1 2
    devpts /dev/pts devpts gid=5,mode=620 0 0
    tmpfs /dev/shm tmpfs defaults 0 0
    proc /proc proc defaults 0 0
    sysfs /sys sysfs defaults 0 0
    LABEL=swap-xvda3 swap swap defults 0 0
    (Repair filesystem) 1# fdisk -l
    Device Boot Start End Block Id System
    /dev/sda1 * 1 49 98535 83 Linux
    Partition 1 does not end with cylinder boundary.
    /dev/sda2 49 19197 39062500 83 Linux
    Partition 2 does not end with cylinder boundary.
    /dev/sda3 ......
    Partition 3 does not ......
    /dev/sda4 ......
    Partition 4 does not end ....
    (Repair filesystem) 1# e2label /dev/sda1
    /boot
    (Repair filesystem) 1# e2label /dev/sda2
    (Repair filesystem) 1# ls /dev/sd*
    /dev/sda /dev/sda1 /dev/sda2 /dev/sda3 /dev/sda4
    (Repair filesystem) 1# ls /dev/hd*
    ls: /dev/hd*: No such file or directory
    Kindly suggest any Windows Server configuration or kernel configs that might be missing, or how to resolve this issue.
    Many many thanks for your reply.
    thanks & Regards,
    Ujjwal
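Since the fstab above mounts by LABEL rather than by device node, it may help to check which node the labels actually resolve to; a sketch (device names as in the output above):

```shell
# List every block device blkid knows about, with labels and UUIDs
blkid
# Resolve the root label directly (the fstab above uses LABEL=/)
findfs LABEL=/
```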

    I am not able to understand the duplicate UUID, and where is it picking up /dev/hda* from?
    ~
    VVM:>>
    VVM:>> Output of dmesg | grep ata contain substring "Hyper-V" ?
    VVM:>>
    it doesn't contain "Hyper-V" or any ata-related message, and the output doesn't change with the boot parameter reserve=0x1f0,0x8
    ~~
    ~~~~
    ==
     output of dmesg related "ata" Ubuntu v13.04 mini.iso ( with boot parameter reserve=0x1f0, 0x8)
    ==
     see later ( in "good situation" example  )
    ~~
    ===
    Disable the legacy ATA driver by adding the following to the kernel command line in /boot/grub/menu.lst:
    reserve=0x1f0,0x8
    (This option reserves this I/O region and prevents ata_piix from loading.)
    ==
     See output of dmesg related "ata" Ubuntu v13.04 mini.iso ( with boot parameter reserve=0x1f0, 0x8) :
    ~~
    [ 0.176027] libata version 3.00 loaded.
    [ 0.713319] ata_piix 0000:00:07.1: version 2.13
    [ 0.713397] ata_piix 0000:00:07.1: device not available (can't reserve [io 0x0000-0x0007])
    [ 0.713404] ata_piix: probe of 0000:00:07.1 failed with error -22
    [ 0.713474] pata_acpi 0000:00:07.1: device not available (can't reserve [io 0x0000-0x0007])
    [ 0.713479] pata_acpi: probe of 0000:00:07.1 failed with error -22
    ~~
      As result: 1) IDE disk handled by hv_storvsc , but 2) no CD-ROM device
    ==
    ~ # blkid
    /dev/sda1: LABEL="ARCH_BOOT" UUID="009c2043-4b17-4f95-a14d-fb8951f95b5d" TYPE="ext2"
    ==
    ~~
    VVM>>
    VVM>>Q1: Output of blkid contain duplicate UUID ?
    VVM>>
    -> blkid contains a duplicate UUID; the output is below.
    ~~
     This situation is the classic problem:
    "use hv_storvsc instead of ata_piix to handle the IDE disk devices (but not for the DVD-ROM / CD-ROM device handling)"
    ~~
     For compare, see example "good situation": 
     See output of dmesg related "ata" Ubuntu v13.04 mini.iso ( without boot parameter reserve=0x1f0, 0x8) :
    ~~~~
    ~ # dmesg |grep ata
    [ 0.167224] libata version 3.00 loaded.
    [ 0.703109] ata_piix 0000:00:07.1: version 2.13
    [ 0.703267] ata_piix 0000:00:07.1: Hyper-V Virtual Machine detected, ATA device ignore set
    [ 0.703339] ata_piix 0000:00:07.1: setting latency timer to 64
    [ 0.704968] scsi0 : ata_piix
    [ 0.705713] scsi1 : ata_piix
    [ 0.706191] ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0xffa0 irq 14
    [ 0.706194] ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0xffa8 irq 15
    [ 0.868844] ata1.00: host indicates ignore ATA devices, ignored
    [ 0.869142] ata2.00: ATAPI: Virtual CD, , max MWDMA2
    [ 0.871736] ata2.00: configured for MWDMA2
    ~~~~
    ===
    ~ # uname -a
    Linux ubuntu 3.7.0-7-generic #15-Ubuntu SMP Sat Dec 15 14:13:08 UTC 2012 x86_64 GNU/Linux
    ~ # lsmod
    hv_netvsc 22769 0
    hv_storvsc 17496 3
    hv_utils 13569 0
    hv_vmbus 34432 3 hv_netvsc,hv_storvsc,hv_utils
    ~ # blkid
    /dev/sr0: LABEL="CDROM" TYPE="iso9660"
    /dev/sda1: LABEL="ARCH_BOOT" UUID="009c2043-4b17-4f95-a14d-fb8951f95b5d" TYPE="ext2"
    ===
     ( only CD-ROM and 1( one) IDE disk connected to ATA)
    ~~
    regarding ata_piix.c patch . . .
    As far as I understand this patch, it ignores ATA devices on Hyper-V when the PV drivers (CONFIG_HYPERV_STORAGE=y) are enabled.
    ~~
     Yes:
    it ignores ATA HDDs (but not ATA CD-ROMs) on Hyper-V when PV drivers (CONFIG_HYPERV_STORAGE=y) are enabled.
    ~
     these patches need to be backported:
      cd006086fa5d ata_piix: defer disks to the Hyper-V drivers by default
    and its prerequisite
      db63a4c8115a libata: add a host flag to ignore detected ATA devices
    ~
    ~~
    P.S.
     Did you do this:
    ==
    As a temporary solution, increase by 1-2 GB the size of all .vhd files connected to the IDE bus
    (but do not increase the size of the partitions inside the disks)
    ==
    ? Does fsck then print a message like "no errors in filesystem"?
    2013-01-24 Answer by Ujjwal Kumar: As a temporary solution it looks OK to me, but [ VVM: a true solution is needed ]
    P.P.S.
    To Ujjwal Kumar :
     My e-mail:
    ZZZZZZZZZZZZZZZ
    please send an e-mail to me; in reply I will send you the patches to ata_piix (and the *.c files before and after patching), etc.
    } on 2013-01-14 -- DoNe

  • ZFS root filesystem & slice 7 for metadb (SUNWjet)

    Hi,
    I'm planning to use a ZFS root filesystem in a Sun Cluster 3.3 environment. As written in the documentation, when we use a UFS shared diskset we need to create a small slice for the metadb on slice 7. In a standard installation we can't create slice 7 when we install Solaris with a ZFS root, but we can create it with the JumpStart profile below:
    # example Jumpstart profile -- ZFS with
    # space on s7 left out of the zpool for SVM metadb
    install_type initial_install
    cluster SUNWCXall
    filesys c0t0d0s7 32
    pool rpool auto 2G 2G c0t0d0s0
    So, my question is: when we use SUNWjet (the JumpStart(tm) Enterprise Toolkit), how can we write a profile similar to the JumpStart profile above?
    Thanks very much, for your best answer.

    This can be done with JET
    You create the template as normal.
    Then create a profile file with the slice 7 line.
    Then edit the template to use it.
    see
    ---8<
    # It is also possible to append additional profile information to the JET
    # derived one. Do this using the base_config_profile_append variable, but
    # don't forget to fill out the remaining base_config_profile variables.
    base_config_profile=""
    base_config_profile_append="
    ---8<
    That is how Ops Center (which uses JET) does it.
    JET questions are best asked on the external JET alias at yahoogroups (until the forum is set up on OTN).
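Putting the pieces together, the appended profile in the JET template might look like this (a sketch: the slice lines are copied from the JumpStart profile earlier in the thread, and the variable names come from the template excerpt above):

```shell
# In the JET template: leave the derived profile in place and append the
# ZFS-root-with-slice-7 lines to it
base_config_profile=""
base_config_profile_append="
filesys c0t0d0s7 32
pool rpool auto 2G 2G c0t0d0s0
"
```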

  • [Solved] NFS shares root filesystem

    [edit] Sorry, my fault. I was using Nautilus m(
    and I think it silently switched to using sftp... still wondering why no password was prompted.
    Hi!
    I set up NFS following the wiki. It works, but I can access the whole remote root filesystem, not just the NFS root. Is https://wiki.archlinux.org/index.php/NFS up to date?
    greetings
    Server:
    cat /etc/exports
    /srv/nfs4/ 192.168.0.0/24(ro,fsid=0,no_subtree_check)
    /srv/nfs4/a 192.168.0.0/24(ro,no_subtree_check)
    /srv/nfs4/b 192.168.0.0/24(ro,no_subtree_check)
    [root@alarmpi srv]#
    Client:
    showmount -e 192.168.0.116
    Export list for 192.168.0.116:
    /srv/nfs4/a 192.168.0.0/24
    /srv/nfs4/b 192.168.0.0/24
    /srv/nfs4 192.168.0.0/24
    Last edited by matto (2015-03-22 14:50:59)
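As the edit above notes, the file manager had quietly fallen back to sftp. Mounting the export from the command line makes it obvious which protocol is actually in use; a sketch, assuming the server address from the output above:

```shell
# Mount the NFSv4 pseudo-root explicitly
mount -t nfs4 192.168.0.116:/ /mnt
# Verify the mount really is NFS, not something a file manager picked
findmnt /mnt
```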

    Thanks for the hints. Meanwhile, I have done some more background reading on running Arch in VMWare Player. In particular I followed the guidelines in this article. So it turns out that X requires some special configuration when run from within VMWare.
    pacman -S xf86-input-vmmouse xf86-video-vmware xf86-video-vesa svga-dri
    and a vmwgfx module is supposed to be loaded. I installed the packages and tried to load vmwgfx, but when I do lsmod I just can't see it. Then it was pointed out to me that it might be a kernel mismatch problem. The Linux installed on the nfsroot is 3.6.2, whereas for some reason uname -a gives me version 3.5.6.
    I've tried to rebuild the initramfs with the new kernel using the -k switch of mkinitcpio, but it didn't help. It still boots into 3.5.6. It's as if the initramfs from the client was taking precedence. So then I went back to the previous image of my VM, updated the system, made the appropriate changes to mkinitcpio.conf and ran mkinitcpio... and it no longer boots: mount: protocol not supported. I will post the details in my other topic.

  • Damaged root filesystem

    Hello folks,
    I'm in some trouble and need help!
    I mirrored my Solaris 10 root filesystem using Solaris Volume Manager using the following sequence of commands:
    metainit -f d1 1 1 c1d0s0
    metainit d2 1 1 c2d0s0
    metainit d0 -m d1
    I then edited the /etc/vfstab file to mount /dev/md/dsk/d0 instead of /dev/dsk/c1d0s0 on /. Then I issued "init 6". The GRUB boot environment commands relating to booting from the disk containing the c1d0s0 slice were not changed.
    You'll immediately note the missing metaroot command; when I rebooted the root file system would not load and warned that it was unable to fsck the metadevice. It then proceeded to ask for the root password to access system maintenance mode.
    The question is: how can I safely roll back the change and reboot from /dev/dsk/c1d0s0? Can I start by going to system maintenance mode in order to use fsck -F ufs /dev/md/dsk/d0?
    Cheers!
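For reference, the missed step and a possible manual rollback look roughly like this (a sketch based on the commands above, not verified on this system):

```shell
# What metaroot would have done: update /etc/system and /etc/vfstab for d0
# metaroot d0

# Manual rollback from maintenance mode: check the underlying slice, then
# mount it and point vfstab back at the plain device
fsck -F ufs /dev/rdsk/c1d0s0
mount /dev/dsk/c1d0s0 /a
vi /a/etc/vfstab   # change /dev/md/dsk/d0 back to /dev/dsk/c1d0s0 for /
```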

    Here's the brief update: As Darren suggested, I was in fact able to run fsck on the slice underlying the metadevice and mount the root file system outside the control of the SVM service. Other problems unrelated to the subject of this thread have so far prevented me from closing this episode. For those interested, I'm providing details below.
    Here's the full update: I booted off the Solaris 10 1/06 installation DVD and ran fsck -F ufs /dev/dsk/c1d0s0 on the cloned drive. The 5th phase reported an impossible cylinder count and proceeded to correct it. A second fsck reported no further errors.
    I was then able to "mount /dev/dsk/c1d0s0 /tmp/goodroot", from where I corrected /etc/vfstab entries. Then, I rebooted.
    The root file system mounted but:
    1. I had trouble with another slice on the same cloned drive where I had saved my SVM database replicas, leading to a state of "Insufficient database replicas located."
    2. Furthermore, on this same cloned drive, a submirror (d51) of the /var filesystem mirror (d50) failed to load and reported that it needed maintenance.
    I tackled these problems using procedures documented in Sun doc 816-4520 (SVM Administration Guide):
    1. I used metadb -d -f c1d0s7 to remove the reference to the missing replicas of the metadevice database. I was then able to boot without the error messages relating to the replicas.
    2. After the reboot referred to above, I replaced the submirror of d50 using:
    metadetach -f d50 d51
    metaclear -f d51
    metainit d51 1 1 c1d0s5
    metattach d50 d51
    More info as it becomes available.
    Cheers!

  • Root filesystem errors, Arch mounts it as RO, unusable system

    I dunno why, but a little while ago Arch stopped working: it just stops in the initscripts where it checks the filesystems.
    I'm getting at the beginning of initscripts this:
    "Using static /dev filesystem"
    Then it goes into maintenance mode after the filesystem check, asking for my root password, with the root filesystem mounted read-only; then it asks me to fsck it and change the superblock. However, I can't do this because /dev/sda1 (my root FS) doesn't exist.
    What should I do?
    Thanks in advance

    1. When you start, it will say "filesystem check error" and ask if you want to fix it; type in your root password and press enter
    2. type mount -n -o remount,rw / and press enter
    3. type pacman -S initscripts and press enter
    4. type cp /etc/rc.local.shutdown.pacsave /etc/rc.local.shutdown and press enter
    5. type cp /etc/rc.local.pacsave /etc/rc.local and press enter
    6. type cp /etc/rc.conf.pacsave /etc/rc.conf and press enter
    7. type cp /etc/inittab.pacsave /etc/inittab and press enter
    8. type reboot and press enter, and you should be able to boot into your filesystem
    If pacman tells you that it can't find initscripts, then download the package from here:
    ftp://ftp.archlinux.org/testing/os/i686 … pkg.tar.gz
    and put it on a USB drive and mount it from inside another Linux (for example an Ubuntu live CD),
    then mount your Arch root as well and copy initscripts-2008.02-1-i686.pkg.tar.gz to mountpoint/root/,
    then go back and repeat from step 3, but with pacman -U initscripts-2008.02-1-i686.pkg.tar.gz
    Last edited by INCSlayer (2008-03-01 07:25:15)
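The steps above can be condensed into a sketch (run as root from the maintenance shell; the .pacsave names are taken directly from the steps):

```shell
# Make the root filesystem writable, reinstall initscripts, and restore
# the configuration files pacman saved as .pacsave
mount -n -o remount,rw /
pacman -S initscripts
for f in rc.local.shutdown rc.local rc.conf inittab; do
    cp "/etc/$f.pacsave" "/etc/$f"
done
reboot
```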

  • Btrfs root filesystem

    The release notes for OL6.3 indicate that it's possible to create a btrfs root filesystem on install using the alternative boot ISO media (excerpt from the release notes at https://oss.oracle.com/ol6/docs/RELEASE-NOTES-U3-en.html):
    Note: The standard installation media does not have support for creating a btrfs root filesystem on initial install. If you want to install Oracle Linux 6 Update 3 and use btrfs as your root filesystem, please use the alternative boot ISO media which uses btrfs as the default root filesystem. Using the boot.iso requires that the full installation source be available via a network method, i.e. FTP, HTTP or NFS.
    However, I still don't see how to do this. Just as with the full DVD install, there isn't an option to use btrfs when laying out the filesystem in the installer. Does anyone know how to do this?

    Oracle Linux uses the RHEL 6 installer, which has no btrfs support. However, it is not difficult to convert your existing installation to btrfs. It takes only several minutes. You can boot the system from V33412-01 (Oracle 6.3 boot.iso) available from https://edelivery.oracle.com/linux
    The following worked for me:
    Start the system from Oracle 6.3 boot DVD
    Select "Rescue installed system"
    When prompted select "local cd/dvd" as installation source (we don't need it)
    When prompted to start the network interface choose "no" (we don't need it)
    When prompted "The rescue environment…." select "Skip"
    Open "shell"
    To find your system volume group, e.g. vg_vm003:
    <pre>
    vgscan
    </pre>
    Activate the logical volumes in that volume group
    <pre>
    lvchange -ay vg_vm003
    </pre>
    To find your system partition (now ACTIVE), e.g. /dev/vg_vm003/lv_root
    <pre>
    lvscan
    </pre>
    Verify/Repair the filesystem and convert it to btrfs
    <pre>
    fsck -fy /dev/vg_vm003/lv_root
    btrfs-convert /dev/vg_vm003/lv_root
    </pre>
    Mount the system partition (Do NOT use /mnt!)
    <pre>
    mkdir /me
    mount /dev/vg_vm003/lv_root /me
    </pre>
    Modify fstab to change the fstype of your lv_root partition from "ext4" to "btrfs"
    <pre>
    vi /me/etc/fstab
    </pre>
    To address problems with SELinux, do the following to prevent "Respawning too fast. Stopped" errors at startup.
    <pre>
    touch /me/.autorelabel
    </pre>
    Finally dismount the partition
    <pre>
    umount /me
    </pre>
    Then remove the boot DVD and reset the computer. When the system restarts, use the default Oracle UEK kernel, which has btrfs support built in.

  • Root filesystem won't mount.

    I'm getting a "file not found"/kernel panic error when the kernel tries to mount my root filesystem.
    The partition is being specified correctly on the GRUB kernel line, and I don't believe it is a module issue - this happens for both ext2 and ext3 partitions, but not all of them - just my Arch partition.
    So what else could cause the kernel to fail to mount it?

    Disk /dev/sda: 30.0 GB, 30005821440 bytes
    255 heads, 63 sectors/track, 3648 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Disk identifier: 0x86338633
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1        2940    23615518+  83  Linux
    /dev/sda2            3588        3648      489982+  82  Linux swap / Solaris
    /dev/sda3            2941        3587     5197027+  83  Linux
    Partition table entries are not in disk order
    sda1 is my Ubuntu partition (ext3), which I'm currently running. sda3 is Arch (ext2).
    Last edited by kristersaurus (2008-07-30 17:48:14)

  • Root filesystem

    Can someone give me a best practice for increasing the root filesystem?

    replayed wrote:
    I did something stupid and screwed up the permissions in /Library and /Applications.
    Long story short: I wanted to backup those directories before sending my MBP in for repairs. Rather than take the time to figure out Time Machine as an old Unix hand and newcomer to Mac OS X, I chose to use tar and cpio to backup those files from / that had been modified since my last OS install. Rookie mistake.
    Having copied back those files and directories onto their former location, cpio lost all owner and group info, with everything winding up with owner root and group admin. Basic access permissions look preserved, but ACL info is presumably gone as well.
    Surprisingly, my computer seems fine for the most part. A few applications complained about extensions, and I managed to quiet them by chgrp'ing a few .kext directories back to wheel.
    But my latest software updates -- Safari 4.0.4 and Security Update 2009-006 -- are aborting while complaining about access issues with /.
    I fired up Disk Utility and tried to Verify Disk Permissions, but the attempt fails immediately after announcing "Reading permissions database" with the error "The underlying task reported failure on exit."
    I'm hoping to avoid a complete reinstall (of course), and I'm thinking I might get out of jail free if I can just manage to restore the correct ownership and permissions to the /Library/Receipts directory, which I understand is where canonical permissions are archived.
    Am I on the right track here? Is there anything else I can do to restore permissions short of a full reinstall?
    Any advice or suggestions would be appreciated.
    Honestly, I would reinstall in this situation. The permissions are too messed up at this point, and a reinstall will be much quicker than anything else. An archive-and-install on top of your current system should fix things up.

  • Security: Zone vs. Change Root

    Hi,
    can someone tell me the security benefits I gain by using zones instead of using change root?
    I'm in the process of setting up a couple of DMZ machines. I was playing around with zones to increase security, but I have the feeling I will decrease security instead of increasing it, because a zone has far too many features. I can't really install a tiny minimal Solaris with just a couple of files, and if an attacker got in he could use the zone itself to attack other systems. Correct?
    BTW: Is there a list of the minimal required Solaris packages? I removed all the packages I could, but I still found things like ssh, NIS, perl, ... After manually changing SUNW_PKG_ALLZONES I could remove a couple more, until the zone crashed.
    Right now I see two possibilities to go forward:
    1.) Use a zone and change-root the application. The zone part looks like an awful lot of work to me.
    2.) Forget about zones, and install and change-root the application directly in the global zone. This will minimize the maintenance: only one system to harden, and it's much faster to set up.
    Do you agree, or am I missing something?
    What are you doing to increase security on Solaris 10 (as opposed to Solaris 9)?
    Are there any guidelines on how to securely set up zones?
    I really like to hear some other thoughts about this.
    Thanks for reading and consideration
    Matthias

    First I want to say that I fully agree with Darren here. You can gain a little increase in security by applying tools, but nothing can beat having some basic understanding of the system you're working with.
    But, to try and answer your questions..
    "can someone tell me the security benefits I gain by using zones instead of using change root?"
    I have no idea whatsoever what a "change root" may be. If you mean a chroot, then the answer is simple: security. Breaking out of a chroot is rather trivial (just search Google for "breaking out chroot" and see for yourself). One of the stories I kind of like is http://www.bpfh.net/simes/computing/chroot-break.html.
    A zone is much more than a mere chroot; it's a whole new (controllable) environment.
    "I have the feeling I will decrease security instead of increasing it because a zone has far too many features. I can't really install a tiny minimal Solaris with just a couple of files, and if an attacker got me he can use the zone itself to attack other systems. Correct?"
    Wrong. It depends on how you set it up. And even if you use the default (with directory inheritance) you can still disable most of the services.
    But it's perfectly possible to install a zone and then start removing all but the core packages.
    "What are you doing to increase the security on Solaris 10 (in opposition to Solaris 9)?"
    What Darren already said.
    "Are there some guidelines how to securely setup zones?"
    docs.sun.com, and I'd say in particular:
    http://docs.sun.com/app/docs/doc/817-1592
    http://docs.sun.com/app/docs/doc/816-4557
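    As a concrete starting point (the zone name, path, interface, and address below are made up for illustration, not taken from this thread), a minimal Solaris 10 zone can be described with a zonecfg command file along these lines:

```
create
set zonepath=/zones/dmz1
set autoboot=true
add net
set address=10.0.0.5/24
set physical=bge0
end
commit
```

    Feed it in with something like zonecfg -z dmz1 -f dmz1.cfg, then zoneadm -z dmz1 install. The sparse-root default inherits /usr, /lib, /platform and /sbin read-only from the global zone, which by itself is a hardening win over a hand-built chroot.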

  • UniPack / root filesystem on E10K running SC3.4 gets full

    Hi !
    The problem is within a domain of my E10K running Sun Cluster 3.4. I am running CorDaptix, the electricity billing system developed by SPL World Group. This is a web-based application.
    My root filesystem, which sits on an internal 18GB UniPack disk on one of my domains, keeps filling up. I have moved /opt and re-created it on an external disk. This freed up space in the root filesystem, but I can see it filling up again, by about 200MB daily. When I check all my logs and look for files modified within the last few days, I cannot find the cause.
    Please, please help

    Process accounting should help clear up all the used space on the / partition for now...
    If you make a new /var partition, I'd wager you'll find a lot fewer instances of / filling up afterward.
    Process accounting used to be worthwhile if you didn't have a very busy system, but if you have a big honking database with many connections, or a web server taking plenty of hits and doing subsequent system-level processing (calling perl scripts and other things like that), you'll fill up the logfile filesystem quickly.
    TeamQuest probably spawns a lot of new processes during its monitoring and checking of your system health, so that would also contribute.
    Bottom line: I agree. Make a new /var partition, copy the data over (via some methodology: tar, cpio, ufsdump) from the original location to the new one, then edit /etc/vfstab so /var mounts on boot. If you have the opportunity, reboot with the new /var (plus the restored /var/sadm area), and you should be in good shape.
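    Before (or alongside) moving /var, it is worth confirming where the ~200MB/day is actually going; a quick sweep like this usually pins it down (GNU userland shown as an illustration; on Solaris the exact flags differ slightly):

```shell
#!/bin/sh
# Rank top-level directories by usage, staying on the root filesystem
# (-x stops du from descending into other mounted filesystems).
du -skx /* 2>/dev/null | sort -n | tail -5

# List files over ~10MB (20480 512-byte blocks) modified in the last day,
# again without crossing filesystem boundaries.
find / -xdev -type f -mtime -1 -size +20480 -print 2>/dev/null | head
```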

Maybe you are looking for

  • Use of DECODE function in Discoverer 3.1

    Hi all, is there any better way of using DECODE in Discoverer 3.1? I don't want to use that function in my Discoverer User edition or Admin edition. Many of my queries use the DECODE function. Your help is greatly appreciated.

  • Asset Scrapping - Calculating Depreciation for entire year

    Hi guys We want to scrap an asset without revenue using ABAVN. Asset Details: The asset will be completely depreciated by the end of this year (December) We have smoothing indicator on. We wish to post the retirement and close out the asset balances

  • Problem while handling Parent and child nodes in CE

    Hi all I am facing a problem with handling Child and Parent nodes in CE. I have a Table in which I have drop downs. On selecting a value in the drop down, I have to take that value and do some action. I am trying to access the value in the following

  • SIS-Sales information System (rejection flag )

    When sales order line items are "rejected" the cancellation amount does not flow as negative booking to the SIS structures. How do we activate ? Would appreciate your help on this. Thanks Namrita

  • Motion Short Cuts

    I am new to Motion and getting a pretty good handle on it, but I could work more efficiently if I knew some short-cuts. For example when I have a clip selected can I toggle back and forth from it's in and out points. In FCP I use Shift-I or O to do t