Unbootable Solaris 10 x86 installed on ZFS root file system

Hi all,
I have an unbootable Solaris 10 x86 system installed on a ZFS root file system, on an IDE HDD.
The BIOS keeps showing the message:
DISK BOOT FAILURE, PLEASE INSERT SYSTEM BOOT DISK
Please note:
1- the HDD is connected properly and is recognized by the system
2- GRUB doesn't show any messages
Is there any guide to recovering the system, or a detailed procedure to make it boot again?
Thanks,

It's not clear if this is a recently installed system that is refusing to boot OR if the system was working fine and crashed.
If it's the former, I would suggest you check the BIOS settings to make sure it's booting from the right hard disk. In any case, the Solaris 10 installation should have written the GRUB stage1 and stage2 blocks to the beginning of the disk.
If the system crashed and is refusing to boot, you can try to boot from a Solaris 10 installation DVD. Choose the single user shell option and see if it can find your system. You should be able to use format/devfsadm/etc to do the actual troubleshooting. If your disk is still responding, try a `zpool import` to see if there is any data that ZFS can recognize (it usually has many backup uberblocks and disk labels scattered around the disk).
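If the pool itself is intact and only the boot blocks are gone, the recovery from the DVD's single user shell might look roughly like this (a sketch only; rpool and c0d0s0 are placeholders for your pool name and boot disk):
# zpool import                   (scan for pools ZFS can still recognize)
# zpool import -f -R /a rpool    (import the root pool under a temporary mount root)
# zfs list -r rpool              (verify the boot environment datasets survived)
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0d0s0
installgrub rewrites the stage1/stage2 blocks the BIOS is failing to find. If zpool import shows nothing at all, suspect the disk itself.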

Similar Messages

  • Does Sol10 u8 installed on a ZFS Root File System have different swap needs?

    Does Sol10 u8 installed on a ZFS Root File System have different swap needs/processes?
    Information:
    I've installed Solaris 10 (10/09 s10s_u8wos_08a SPARC, Assembled 16 September 2009) on half a dozen servers, and every one of them no longer mounts swap at boot.
    The install program commented out the old swap entry and created this one:
    # grep swap /etc/vfstab
    swap - /tmp tmpfs - yes -
    Everything works like a champ. I didn't discover the issue until I tried to install some patches and the install failed. It didn't fail because of lack of swap - it refused to run because it found "No swap devices configured".
    Here are the symptoms:
    # swap -s
    total: 183216k bytes allocated + 23832k reserved = 207048k used, 13600032k available
    # swap -l
    No swap devices configured
    # mount | grep swap
    /etc/svc/volatile on swap read/write/setuid/devices/xattr/dev=5ac0001 on Mon Apr 19 08:06:45 2010
    /tmp on swap read/write/setuid/devices/xattr/dev=5ac0002 on Mon Apr 19 08:07:40 2010
    /var/run on swap read/write/setuid/devices/xattr/dev=5ac0003 on Mon Apr 19 08:07:40 2010
    #

    Hi Nitabills,
    I assume that you created a ZFS volume for swap with the command zfs create -V $size $ZPOOL/swap.
    Did you run the command:
    swap -a /dev/zvol/dsk/$ZPOOL/swap
    Also try this entry in /etc/vfstab:
    /dev/zvol/dsk/$ZPOOL/swap - - swap - no -
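    Assuming a root pool named rpool and a 4 GB volume (both placeholders), the full sequence would look something like:
    # zfs create -V 4G rpool/swap         (skip if the volume already exists)
    # swap -a /dev/zvol/dsk/rpool/swap
    # swap -l                             (should now list the zvol instead of "No swap devices configured")
    ...plus the vfstab line above, so the device is added again at every boot.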

  • Convert ZFS root file system to UFS with data.

    Hi, I need to convert my ZFS root file system to UFS and boot from the other disk as a slice (/dev/dsk/c1t0d0s0).
    I am OK with splitting one disk out of the root pool mirror. Any ideas on how this can be achieved?
    Please suggest. Thanks,

    from the same document that was quoted above in the Limitations section:
    Limitations
    Version 2.0 of the Oracle VM Server for SPARC P2V Tool has the following limitations:
    Only UFS file systems are supported.
    Only plain disks (/dev/dsk/c0t0d0s0), Solaris Volume Manager metadevices (/dev/md/dsk/dNNN), and VxVM encapsulated boot disks are supported on the source system.
    During the P2V process, each guest domain can have only a single virtual switch and virtual disk server. You can add more virtual switches and virtual disk servers to the domain after the P2V conversion.
    Support for VxVM volumes is limited to the following volumes on an encapsulated boot disk: rootvol, swapvol, usr, var, opt, and home. The original slices for these volumes must still be present on the boot disk. The P2V tool supports Veritas Volume Manager 5.x on the Solaris 10 OS. However, you can also use the P2V tool to convert Solaris 8 and Solaris 9 operating systems that use VxVM.
    You cannot convert Solaris 10 systems that are configured with zones.
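    Limitations aside, a hand conversion is possible in outline. This is a sketch only (c1t1d0s0 stands in for whichever disk you detach; it must be sliced for UFS first, and since ufsdump cannot read a ZFS root, cpio does the copy):
    # zpool detach rpool c1t1d0s0
    # format                              (re-label/slice the freed disk for UFS)
    # newfs /dev/rdsk/c1t1d0s0
    # mount /dev/dsk/c1t1d0s0 /mnt
    # cd / && find . -xdev -depth -print | cpio -pdm /mnt
                                          (repeat for separate datasets such as /var)
    # vi /mnt/etc/vfstab                  (point / at /dev/dsk/c1t1d0s0, type ufs)
    # installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c1t1d0s0
    Test-boot from the converted disk before touching the remaining ZFS side of the mirror.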

  • Automated install Arch x86_64 root-file system the "Raspberry" way..

    Dear Archers,
    I am currently deeply interested in installing Arch as dual boot on my
    current x86_64 system.
    And yes, there is plenty of information regarding this subject in the Arch wikis.
    However...
    I would like to automate the installation the "Raspberry-Pi" -way.
    Details on the "Installation tab" of:
    http://archlinuxarm.org/platforms/armv6/raspberry-pi.
    As you can see, the installation steps are easy to automate, in my opinion.
    That's why I wrote an automation script for this purpose.
    The information further suggests that there is a root filesystem for
    Arch named 'ArchLinuxARM-rpi-latest.tar.gz'.
    Unfortunately, I don't think this root filesystem will function on x86_64 systems.
    So I researched whether there was a root filesystem for Arch x86_64 and found filenames like 'root-image.fs.sfs', 'core.db' and 'core.img'.
    But I'm unsure if these archives/images really contain a root filesystem similar to the ArchARM one.
    Therefore my question:
    Does anybody know if there is a similar root filesystem for x86_64 so I can reproduce the "rpi-way" installation for x86_64 systems?
    Kind regards,
    Kees Epema

    You are definitely not going to be able to simply dd Arch to your machine unless you create an image to do so. Even then, that image would become old and stale pretty quickly, possibly leading to problems getting the system up to date.
    In my opinion, the Raspberry Pi way of simply dd'ing an image to an SD card is a disservice to ArchlinuxARM users. Though seemingly the proper way to get a system running on a Raspberry Pi, it masks the install process and leaves the user without a clue as to what went into creating the system.
    Go read the beginners guide.  Use a virtual machine to practice if you need.  Shortcuts are not going to help you with Arch.

  • Solaris 10:unable to mount a solaris root file system

    Hi All,
    I am trying to install Solaris 10 x86 on a ProLiant DL385 server; it has a Smart Array 6i. I have downloaded the driver from the HP web site. On booting up installation CD 1 and adding the device driver, it sees the device but now says it can't mount it. Any clues what I need to do?
    Screen Output:
    Unable to mount a Solaris root file system from the device
    DISK: Target 0, Bios primary drive - device 0x80
    on Smart Array 6i Controller on Board PCI bus 2, at Dev 4
    Error message from mount::
    /pci@0,0/pci1022,7450@7/pcie11,4091@4/cmdk@0,0:a: can't open - no vtoc
    Any assistance would be appreciated.

    Hi,
    I read Message 591 (Aug 2003) and the problem is much the same. A brief description: I have an ASUS laptop with a 60 GB internal HDD (HDD1) and a 100 GB USB storage HDD (HDD2 below). I installed Solaris 10 x86 on HDD2 (partition c2t0d0s0). At the end of the installation I removed the DVD and, using the BIOS features, I switched the boot to HDD2. All OK; I got the SUN blue screen and chose the active Solaris option, but at the beginning of the boot I received the following error message.
    Screen Output:
    Unable to mount a Solaris root file system from the device
    DISK: Target 0: IC25N060 ATMR04-0 on Board ....
    Error message from mount::
    /pci@0,0/pci-ide2,5/ide@1/cmdk@0,0:a: can't open
    Any assistance would be appreciated.
    Regards
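    The "can't open - no vtoc" part usually means the disk carries no Solaris VTOC label yet. A sketch of labelling it from the install media's single user shell (pick your disk from the list that format presents):
    # format
    format> fdisk                         (create a Solaris fdisk partition if prompted; x86 only)
    format> label
    format> quit
    After labelling, retry the install or the boot.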

  • Change ZFS root dataset name for root file system

    Hi all
    A quick one.
    I accepted the default ZFS root dataset name for the root file system during Solaris 10 installation.
    Can I change it to another name afterward without reinstalling the OS? For example,
    zfs rename rpool/ROOT/s10s_u6wos_07b rpool/ROOT/`hostname`
    zfs rename rpool/ROOT/s10s_u6wos_07b/var rpool/ROOT/`hostname`/var
    Thank you.

    Renaming the root pool is not recommended.
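    If the goal is just a boot environment with a friendlier name, Live Upgrade is the supported route on a ZFS root (a sketch; the new BE name is your choice):
    # lucreate -n `hostname`              (clones the current BE under the new name)
    # luactivate `hostname`
    # init 6
    # ludelete s10s_u6wos_07b             (only once the new BE is confirmed good)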

  • Is it possible to install OL6.2 with UEK using btrfs root file system?

    Can we use btrfs for the root file system with Oracle Linux?
    If yes, how do we install it? (The OL6.2 installer doesn't offer btrfs among the available file systems.)

    For what it's worth, I tried the following, which worked:
    host: vm022
    cd /etc/yum.repos.d
    rm public-yum*
    wget http://public-yum.oracle.com/public-yum-ol6.repo
    Edit public-yum-ol6.repo and enable [ol6_UEK_latest]
    yum update kernel-uek
    yum install btrfs-progs
    The system was successfully updated to UEK2 - 2.6.39-100
    Boot the system from a Fedora 15 Live CD and open a terminal. Enter the following:
    su - root
    yum install btrfs-progs
    lvscan
    btrfs-convert /dev/vg_vm022/lv_root
    (conversion complete)
    mount /dev/vg_vm022/lv_root /mnt
    Edit /mnt/etc/fstab and change the root volume filesystem from ext4 to btrfs
    umount /mnt
    reboot
    Start from the OL6.2 DVD in rescue mode. Select Continue to let it find the existing Linux system and drop into the shell.
    chroot /mnt/sysimage
    modprobe btrfs
    lsmod | grep btrfs
    cd /boot
    cp initramfs-2.6.39-100.5.1.el6uek.x86_64.img /root
    mkinitrd -f -v /boot/initramfs-2.6.39-100.5.1.el6uek.x86_64.img 2.6.39-100.5.1.el6uek.x86_64
    exit
    umount /mnt/sysimage
    After another restart (it restarted twice, doing some volume/label conversion - didn't catch it):
    mount
    /dev/mapper/vg_vm022-lv_root on / type btrfs (rw)
    Edited by: Dude on Mar 29, 2012 6:35 PM

  • Programmatic interface to get zone's root file system

    Hi,
    I am a newcomer to Solaris zones. Is there any programmatic (C API) way to know the path to the root file system of a zone, given its name, from the global zone?
    Thanks!

    A truss of zoneadm list -cv shows a bunch of zone related calls like:
    zone_lookup()
    zone_list()
    zone_getattr()
    Using the truss output as an example and including /usr/include/sys/zones.h and linking to libzonecfg
    (and maybe libzoneinfo) seems like a fairly straight-forward path to getting the info you are looking for.
    You could also parse /etc/zones/index
    which is (on my s10_63 machine) a colon-separated flat file containing [zone:install state:root path] that looks like:
    global:installed:/
    demo1:installed:/zones/demo1
    demo2:installed:/zones/demo2
    demo3:installed:/zones/demo3
    foo:installed:/zones/foo
    ldap1:installed:/zones/ldap1
    Neither of these methods is documented, so they are certainly subject to change or removal.
    Good luck!
    -William Hathaway
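    For the quick-and-dirty index route, a one-liner along these lines works (demo1 is a placeholder zone name; the same caveat about the undocumented format applies):
    # awk -F: '$1 == "demo1" { print $3 }' /etc/zones/index
    /zones/demo1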

  • Problem in Reducing the root file system space

    Hi All ,
    The root file system has reached 86%. We have cleared 1 GB of data in the /var file system, but the root file system still shows 86%. Please note that /var is not a separate file system.
    I have furnished the df -h output for your reference. Please provide a solution as soon as possible.
    /dev/dsk/c1t0d0s0 2.9G 2.4G 404M 86% /
    /devices 0K 0K 0K 0% /devices
    ctfs 0K 0K 0K 0% /system/contract
    proc 0K 0K 0K 0% /proc
    mnttab 0K 0K 0K 0% /etc/mnttab
    swap 30G 1.0M 30G 1% /etc/svc/volatile
    objfs 0K 0K 0K 0% /system/object
    /dev/dsk/c1t0d0s3 6.7G 3.7G 3.0G 56% /usr
    /platform/SUNW,Sun-Fire-T200/lib/libc_psr/libc_psr_hwcap1.so.1
    2.9G 2.4G 404M 86% /platform/sun4v/lib/libc_psr.so.1
    /platform/SUNW,Sun-Fire-T200/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
    2.9G 2.4G 404M 86% /platform/sun4v/lib/sparcv9/libc_psr.so.1
    fd 0K 0K 0K 0% /dev/fd
    swap 33G 3.5G 30G 11% /tmp
    swap 30G 48K 30G 1% /var/run
    /dev/dsk/c1t0d0s4 45G 30G 15G 67% /www
    /dev/dsk/c1t0d0s5 2.9G 1.1G 1.7G 39% /export/home
    Regards,
    R. Rajesh Kannan.

    I don't know if the root partition filling up was sudden, and thus due to the deletion of an in-use file, or some other problem. However, I have noticed that VAST amounts of space are used up just through the normal patching process.
    After I installed Sol 10 11/06, my 12 GB root partition was 48% full. Now, about 2 months later, after applying available patches, it is 53% full. That is about 600 MB taken up by the superseded versions of the installed patches. This is ridiculous. I have patched using Sun Update Manager, which by default does not use the patchadd -d option (which would skip backing up old patch versions), so the superseded patches are building up in /var, wasting massive amounts of space.
    Are Solaris users just supposed to put up with this, or is there some other way we should manage patches? It is time consuming and dangerous to manually clean up the old patch versions by using patchrm to delete all versions of a patch and then using patchadd to re-install only the latest revision.
    Thank you.
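    To at least see where that space sits, the patch backout data lives under the packages' save directories (a sketch):
    # du -sk /var/sadm/pkg/*/save | sort -n | tail -10
    The undo.Z archives under those directories are what patchrm would use to back a patch out; removing them reclaims the space but makes the corresponding patches permanently unremovable, so treat that as a last resort.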

  • Backup and restore Root File system

    Hi
    Can I take a backup of the root file system using ufsdump and later restore it completely using ufsrestore?
    Please give me the steps or a link
    Thanks in advance
    Ashraf.

    In short, yes. But the steps depend on where you are going to store the backup, whether you are running SPARC or x86, and what you do with your disk in between.
    Boot in single user.
    example# ufsdump 0cfu /dev/rmt/0 /dev/rdsk/c0t3d0s0
    /dev/rmt/0 (tape) can instead be a file on another partition, e.g. /backup/root_backup.
    For the restore it is safest to boot from CD.
    Mount your disk to restore to under /a
    cd /a
    ufsrestore rf /dev/rmt/0 (or your file /backup/root_backup)
    If the partition is reformatted you may have to install new bootblocks.
    Please read some from docs.sun.com
    This is just advice, not a detailed work order.
    /Gunnar
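    Put together for SPARC, the restore side might look like this (a sketch; c0t3d0s0 as in Gunnar's example):
    ok boot cdrom -s
    # newfs /dev/rdsk/c0t3d0s0            (only if the slice was damaged or reformatted)
    # mount /dev/dsk/c0t3d0s0 /a
    # cd /a
    # ufsrestore rf /dev/rmt/0
    # rm restoresymtable                  (left behind by ufsrestore r)
    # installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t3d0s0
    # cd /; umount /a; init 6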

  • Restoring root file system from netbackup

    Recently one of the hard disks, containing the root file system in my T2000, suddenly got many hard errors, but the system continued to run. I don't have OS mirroring. So I took a backup of the root file system using Veritas NetBackup and restored it onto the second hard disk. I also installed a boot block on it. I tried booting from the second hard disk and it booted and came to the prompt. But I found that most commands are not working. I tried df -h and it gave me an error message like "unable to open /etc/mnttab". Is this approach incorrect?
    My guess is that it doesn't work because I didn't set the same partition table on both disks.

    You need to use the baremetal backup/restore options with Netbackup to make backups of your operating system. It's generally recommended that you make a backup of your OS in single user mode, or booted from alternate media, so that you don't make backups of open files.
    Another backup method would be to use something like ufsdump for ufs filesystems, or ZFS send/receive
    When you do the restore, first boot into single user mode from CD so that you can update the files that still point to the old disk location, such as /etc/vfstab (for UFS), and update the device paths with a reconfiguration reboot.
    Edited by: 3sth3r on Jan 25, 2012 1:04 AM
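    The post-restore fixup could look like this (a sketch; c0t1d0s0 stands in for the new disk):
    ok boot cdrom -s
    # mount /dev/dsk/c0t1d0s0 /a
    # vi /a/etc/vfstab                    (point /, swap, etc. at the new disk)
    # touch /a/reconfigure                (forces a reconfiguration boot to rebuild device paths)
    # umount /a
    # init 6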

  • Zerofree: Shrinking ARCH guest VMDK--'remount the root file-system'?

    Hi!
    [using ZEROFREE]
    Getting great results with an extra Arch install running as a VMDK in Workstation.
    REALLY need tips on shrinking the VMDK. I have obviously deleted unneeded files,
    and now rather urgently need to learn what's eluding me so far.
    1) zerofree is installed IN the virtual machine (VMDK); Workstation is running on Windows 8.
    2) Here are the instructions for zerofree:
           filesystem has to be unmounted or mounted  read-only  for  zerofree  to
           work.  It  will exit with an error message if the filesystem is mounted
           writable.
           To remount the  root  file-system  readonly,  you  can  first
           switch to single user runlevel (telinit 1) then use mount -o remount,ro
           filesystem.
    As it's a VMDK and it's running, would the only/best option be to "remount the root file-system readonly"?
    OR, could I attach the VMDK to another running Arch system that I do have and NOT mount it, thereby
    allowing zerofree to run even better on that?
    Are both methods JUST as effective at shrinking? My guess would be that remounting the root file-system read-only
    would NOT be as efficient at shrinking.
    I could really use a brief walk-through on this, as all attempts have failed so far.
    I boot the Arch virtual machine and do what, may I ask?
    Last edited by tweed (2012-06-05 07:43:41)
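
    A minimal sketch of the in-VM route, assuming the root is an ext filesystem on /dev/sda1 (a placeholder; check with lsblk or mount):
    # telinit 1                           (single user, so nothing writes to /)
    # mount -o remount,ro /
    # zerofree -v /dev/sda1
    # reboot
    Then compact the disk from the host, e.g. with VMware's vmware-vdiskmanager -k disk.vmdk. Attaching the VMDK to another Arch system and running zerofree on the unmounted device should work just as well; zerofree only requires that the filesystem not be mounted read-write.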


  • Archive Repository - Content Server or Root File System?

    Hi All,
    We are in the process of evaluating a storage solution for archiving and I would like to hear your experiences and recommendations.  I've ruled out 3rd-party solutions such as IXOS as overkill for our requirement.  That leaves us with the i5/OS root file system or the SAP Content Server, in either a Linux partition or on a Windows server.  Has anyone done archiving with a similar setup?  What issues did you face?  I don't plan to replicate archive objects via MIMIX.
    Is anyone running the SAP Content Server in a Linux partition?  I'd like to know your experience with this even if you don't use the Content Server for archiving.  We use the Content Server (currently on Windows) for attaching files to SAP documents (e.g., Sales Documents) via Generic Object Services (GOS).  While I lean towards running separate instances of the Content Server for Archiving and GOS, I would like to run them both in the same Linux LPAR.
    TIA,
    Stan

    Hi Stanley,
    If you choose to store your data archive files at the file system level, is that a secure enough environment?  A third party certified storage solution provides a secure system where the archive files cannot be altered and also provides a way to manage the files over the years until they have met their retention limit.
    Another thing to consider: just because the end users may not need access to the archived data, your company might still need to be able to access the data easily due to an audit or lawsuit.
    I am a SAP customer whose job function is the technical lead for my company's SAP data archiving projects, not a 3rd-party storage solution provider, and I highly recommend a certified storage solution for compliance reasons.
    Also, here is some information from the SAP Data Archiving web pages concerning using SAP Content Server for data archive files:
    10. Is the SAP Content Server suitable for data archiving?
    Up to and including SAP Content Server 6.20 the SAP CS is not designed to handle large files, which are common in data archiving. The new SAP CS 6.30 is designed to also handle large files and can therefore technically be used to store archive files. SAP CS does not support optical media. It is especially important to regularly run backups on the existing data!
    Recommendation for using SAP CS for data archiving:
          Store the files on SAP CS in a decompressed format (make the settings at the repository)
          Install SAP CS and SAP DB on one server
          Use SAP CS for Unix (runtime tests to see how SAP CS for Windows behaves with large files still have to be carried out)
    Best Regards,
    Karin Tillotson

  • Mounting the Root File System into RAM

    Hi,
    I had been wondering, recently, how one can copy the entire root hierarchy, or wanted parts of it, into RAM, mount it at startup, and use it as the root itself.  At shutdown, the modified files and directories would be synchronized back to the non-volatile storage. This synchronization could also be performed manually, before shutting down.
    I have now succeeded, at least it seems, in performing such a task. There are still some issues.
    For anyone interested, I will be describing how I have done it, and I will provide the files that I have worked with.
    A custom kernel hook is used to (overall):
    Mount the non-volatile root in a mountpoint in the initramfs. I used /root_source
    Mount the volatile ramdisk in a mountpoint in the initramfs. I used /root_ram
    Copy the non-volatile content into the ramdisk.
    Bind-mount each of these two mountpoints into the new root, so that both volumes stay accessible from the ramdisk root itself once the root is changed, in order to synchronize modified RAM content back to the non-volatile storage medium: /rootfs/rootfs_{source,ram}
    A mount handler (mount_handler) is set to a custom function, which bind-mounts the new ramdisk root onto the root that the kernel will switch to.
    To integrate this hook into an initramfs, a preset is needed.
    I added this hook (named "ram") as the last one in mkinitcpio.conf. -- Adding it before some other hooks did not seem to work; and even now, it sometimes does not detect the physical disk.
    The kernel needs to be passed some custom arguments; at a minimum, these are required: ram=1
    When shutting down, the ramdisk contents are synchronized back with the source root by means of a bash script. This script can be run manually to save one's work before/without shutting down. For this (shutdown) event, I made a custom systemd service file.
    I chose to use unison to synchronize between the volatile and the non-volatile media. When synchronizing, nothing in the directory structure should be modified, because unison will not synchronize those changes in the end; it will complain and exit with an error, although it will still synchronize the rest. Thus, I recommend that if you sync manually (by running /root/Documents/rootfs/unmount-root-fs.sh, for example), you do not execute any other command before synchronization has completed, because ~/.bash_history, for example, would be updated, and unison would not update this file.
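    Reduced to its core, the hook's mount logic is along these lines (a simplified sketch, not the full hook; $root and $ramflags come from the kernel parameters listed further down):
        mount -o ro "$root" /root_source
        mount -t tmpfs -o "$ramflags" root_ram /root_ram
        cp -a /root_source/. /root_ram/       # or find+cpio / rsync / unison
        mkdir -p /root_ram/rootfs/rootfs_source /root_ram/rootfs/rootfs_ram
        mount -o bind /root_source /root_ram/rootfs/rootfs_source
        mount -o bind /root_ram /root_ram/rootfs/rootfs_ram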
    Some prerequisites exist (by default):
        Packages: unison(, cp), find, cpio, rsync and, of course, any other packages with which you can mount your root file system (type). I have included these: mount.{,cifs,fuse,ntfs,ntfs-3g,lowntfs-3g,nfs,nfs4}, so you may need to install ntfs-3g and the NFS-related packages (nfs-utils?), or remove the unwanted "mount.+" entries from /etc/initcpio/install/ram.
        Referencing paths:
            The variables:
                source=
                temporary=
            ...should have the same value in all of these files:
                "/etc/initcpio/hooks/ram"
                "/root/Documents/rootfs/unmount-root-fs.sh"
                "/root/.rsync/exclude.txt"    -- Should correspond.
            This is needed to sync the RAM disk back to the hard disk.
        I think that it is required to have the old root and the new root mountpoints directly residing at the root / of the initramfs, from what I have noticed. For example, "/new_root" and "/old_root".
    Here are all the accepted and used parameters:
        Parameter              Allowed values                                    Default      Considered values                Description
        root                   what "mount" accepts (UUID=*, /dev/disk/by-*/*)   none         any string                       The source root.
        rootfstype             what "mount -t" accepts                           "auto"       any string                       The FS type of the source root.
        rootflags              what "mount -o" accepts                           none         any string                       Options when mounting the source root.
        ram                    any string                                        none         "1"                              Whether this hook should run.
        ramfstype              what "mount -t" accepts                           "auto"       any string                       The FS type of the RAM disk.
        ramflags               what "mount -o" accepts                           "size=50%"   any string                       Options when mounting the RAM disk.
        ramcleanup             any string                                        none         "0"                              Whether any left-overs should be cleaned.
        ramcleanup_source      any string                                        none         "1"                              Whether the source root should be unmounted.
        ram_transfer_tool      cp, find, cpio, rsync, unison                     unison       cp, find, cpio, rsync            The tool used to transfer the root into RAM.
        ram_unison_fastcheck   true,false,default,yes,no,auto                    "default"    true,false,default,yes,no,auto   Argument to unison's "fastcheck" option. Relevant if ram_transfer_tool=unison.
        ramdisk_cache_use      0, 1                                              none         0                                Whether unison should use any available cache. Relevant if ram_transfer_tool=unison.
        ramdisk_cache_update   0, 1                                              none         0                                Whether unison should copy the cache to the RAM disk. Relevant if ram_transfer_tool=unison.
    This is the basic setup.
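    Per the table, the parameters are appended to the bootloader's kernel line; for example (values purely illustrative):
        root=UUID=... ro ram=1 ramflags=size=75% ram_transfer_tool=rsync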
    Optionally:
        I disabled /tmp as a tmpfs mountpoint: "systemctl mask tmp.mount" which executes "ln -s '/dev/null' '/etc/systemd/system/tmp.mount' ". I have included "/etc/systemd/system/tmp.mount" amongst the files.
        I unmount /dev/shm at each startup, using ExecStart from "/etc/systemd/system/ram.service".
    Here are the updated (version 3) files, archived: Root_RAM_FS.tar (I did not find a way to attach files -- do the Arch forums allow attachments?)
    I decided to separate the functionalities "mounting from various sources", and "mounting the root into RAM". Currently, I am working only on mounting the root into RAM. This is why the names of some files changed.
    Of course, use what you need from the provided files.
    Here are the values for the time spent copying during startup for each transfer tool. The size of the entire root FS was 1.2 GB:
        find+cpio:  2:10s (2:12s on slower hardware)
        unison:      3:10s - 4:00s
        cp:             4 minutes (31 minutes on slower hardware)
        rsync:        4:40s (55 minutes on slower hardware)
        Beware that the find/cpio option is currently broken; it is available to be selected, but it will not work when being used.
    These are the remaining issues:
        find+cpio option does not create any destination files.
        (On some older hardware) When booting up, the source disk is not always detected.
        When booting up, the custom initramfs is not detected, after it has been updated from the RAM disk. I think this represents an issue with synchronizing back to the source root.
    Inconveniences:
        Unison needs to perform an update detection at each startup.
        The initramfs's ash does not expand wildcard characters, for use with "cp".
    That's about what I can think of for now.
    I will gladly try to answer any questions.
    I don't consider myself a UNIX expert, so I would like to hear your suggestions for improvement, especially from those who do consider themselves experts.
    Last edited by AGT (2014-05-20 23:21:45)

    How did you use/test unison? In my case, unison, of course, is used in the cpio image, where there are no cache files, because unison has not been run yet in the initcpio image, before it had a chance to be used during boot time, to generate them; and during start up is when it is used; when it creates the archives. ...a circular dependency. Yet, files changed by the user would still need to be traversed to detect changes. So, I think that even providing pre-made cache files would not guarantee that they would be valid at start up, for all configurations of installation. -- I think, though, that these cache files could be copied/saved from the initcpio image to the root (disk and RAM), after they have been created, and used next time by copying them in the initcpio image during each start up. I think $HOME would need to be set.
    Unison was not using any cache previously anyway. I was aware of that, but I wanted to prove it by deleting any cache files remaining.
    Unison, actually, was slower (4 minutes) the first time it ran in the VM, compared to the physical hardware (3:10s). I have not measured the time for its subsequent runs, but it seemed that it was faster after the first run. The VM was hosted on a newer machine than what I have used so far: the VM host has an i3-3227U CPU at 1.9 GHz with 2 cores/4 threads and 8 GB of RAM (4 GB were dedicated to the VM); my hardware has a Pentium B940 CPU at 2 GHz with 2 cores/2 threads and 4 GB of RAM.
    I could see that, in the VM, rsync and cp were copying faster than on my hardware; they were scrolling quicker.
    Grub initially complains that there is no image, and shows a "Press any key to continue" message; if you continue, the kernel panics.
    I'll try using "poll_device()". What arguments does it need? More than just the device; also the number of seconds to wait?
    Last edited by AGT (2014-05-20 16:49:35)

  • Root file system 100%

    Hi,
    We are facing a problem in which our root file system is 100% full because of a lot of directories being created in the /proc directory. These directories are created with numbers as their names.
    Please give some advice.

    1- It should have nothing to do with the /proc file system; /proc is virtual and consumes no disk space (the numbered directories there are just the IDs of running processes).
    2- Sometimes a backup command has an 'o' instead of a 0 (zero) in /dev/rmt/0, and a file "o" is created in /dev/rmt that fills the entire space.
    3- If your slice really is large enough, use the du -sh /* command on Solaris 9 (du -sk on 8) to see which directories are taking the space.
    4- If you still find nothing and the directory sizes don't seem to pose any problem, umount all file systems and then do a du -sh /* again.
    Remember: if you have a directory (say /export/home) filled with stuff while nothing is mounted on it, that content consumes space from the / file system. When a slice is mounted over it later, df -k shows the consumed space of the mounted file system, but du -sk won't show those underlying files. Indeed, they stay hidden until you unmount the physical slice from /export/home.

    I had the root file system 100% used.
    I deleted all syslog and messages files; the best I could reclaim was 1%.
    I did "du -d / | more" to discover the sizes of files that might be filling up the root file system.
    I discovered that /dev/rmt was unusually large: about 2.5 GB.
    There was a file named "1" in /dev/rmt which was taking all the space.
    I did "rm -r 1" to remove the unwanted file and reclaimed 53% free space.
