ZFS root filesystem & slice 7 for metadb (SUNWjet)

Hi,
I'm planning to use a ZFS root filesystem in a Sun Cluster 3.3 environment. As the documentation says, when we use a UFS shared diskset we need to reserve a small slice (slice 7) for the metadb. A standard interactive installation won't let us create slice 7 when installing Solaris with a ZFS root, but we can create it with the JumpStart profile below:
# example Jumpstart profile -- ZFS with
# space on s7 left out of the zpool for SVM metadb
install_type initial_install
cluster SUNWCXall
filesys c0t0d0s7 32
pool rpool auto 2G 2G c0t0d0s0
So my question is: when we use SUNWjet (the JumpStart(tm) Enterprise Toolkit), how can we write a profile equivalent to the JumpStart profile above?
Thanks very much in advance for your answers.

This can be done with JET.
You create the template as normal.
Then create a profile file with the slice 7 line.
Then edit the template to use it.
See:
---8<
# It is also possible to append additional profile information to the JET
# derived one. Do this using the base_config_profile_append variable, but
# don't forget to fill out the remaining base_config_profile variables.
base_config_profile=""
base_config_profile_append="
---8<
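Filled in, that might look something like the sketch below (hedged: whether the pool line belongs in the appended profile text or in the template's own ZFS root variables is an assumption, so check the comments in your base_config template):
# JET template fragment (sketch only)
base_config_profile=""
base_config_profile_append="
filesys c0t0d0s7 32
pool rpool auto 2G 2G c0t0d0s0
"
The appended lines are simply the slice 7 and pool entries from the JumpStart profile above.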
This is how Ops Center (which uses JET) does it.
JET questions are best asked on the external JET alias at Yahoo Groups (until the forum is set up on OTN).

Similar Messages

  • Question about using ZFS root pool for whole disk?

    I played around with the newest version of Solaris 10/08 over my vacation by loading it onto a V210 with dual 72GB drives. I used the ZFS root partition configuration and all seemed to go well.
    After I was done, I wondered if using the whole disk for the zpool was a good idea or not. I did some looking around but I didn't see anything that suggested that was good or bad.
    I already know about some of the flash archive issues and will be playing with those shortly, but I am curious how others are setting up their root ZFS pools.
    Would it be smarter to set up, say, a 9 GB slice on both drives, create the root zpool on one and mirror it to the other, and then create another ZFS pool from the remaining space?
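    For what it's worth, the usual way to mirror an existing ZFS root pool onto a second drive is roughly the following (a sketch; the device names are assumptions):
    zpool attach rpool c1t0d0s0 c1t1d0s0
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
    The installboot step is needed on SPARC so that the second side of the mirror is bootable.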

    route1 wrote:
    Just a word of caution when using ZFS as your boot disk. There are tons of bugs in ZFS boot that can make the system un-bootable and un-recoverable.
    Can you expand upon that statement with supporting evidence (BugIDs and such)? I have a number of local zones (sparse and full) on three Sol10u6 SPARC machines and they've been booting fine. I am having problems LiveUpgrading (lucreate) that I'm scratching my head to resolve. But I haven't had any ZFS boot/root corruption.

  • Change ZFS root dataset name for root file system

    Hi all
    A quick one.
    I accepted the default ZFS root dataset name for the root file system during Solaris 10 installation.
    Can I change it to another name afterward without reinstalling the OS? For example,
    zfs rename rpool/ROOT/s10s_u6wos_07b rpool/ROOT/`hostname`
    zfs rename rpool/ROOT/s10s_u6wos_07b/var rpool/ROOT/`hostname`/var
    Thank you.

    Renaming the root pool is not recommended.
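    If you do try it, keep in mind that the boot environment name is also recorded in the pool's bootfs property (and is referenced by Live Upgrade and the boot menu), so a bare zfs rename alone will likely leave the system unbootable. A hedged sketch of keeping bootfs in sync (the new name is just an example):
    zpool get bootfs rpool
    zpool set bootfs=rpool/ROOT/newname rpool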

  • Live upgrade only for zfs root?

    Is Live Upgrade the only way to upgrade a ZFS root to 5/09? Is this true? I have tried to do live upgrades previously and have had no luck, particularly on my old Blade 1000 with an 18 GB drive.

    Reading over this post I see it is a little unclear. I am trying to upgrade a u6 installation that has a zfs root to u7.
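    For a ZFS root the Live Upgrade itself is fairly lightweight, since lucreate just clones the current boot environment; a rough sketch (the BE name and media path are assumptions):
    lucreate -n s10u7
    luupgrade -u -n s10u7 -s /cdrom/sol_10_509_sparc
    luactivate s10u7
    init 6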

  • Remounting Root Filesystem Read-only, what for?

    Hi.
    I have a question. Why is the root filesystem remounted (read-only) while rebooting/halting the system, just a second before all filesystems are unmounted? Why does it have to be mounted at all when shutting the computer off?
    Thanks in advance.

    alexpnx wrote:
    I have a question on a similar topic... How unsafe is it if the system is rebooted or shut down without unmounting Samba shares? Can the shares be corrupted?
    I think the file(s) you're writing to during shutdown can be corrupted (not wholly corrupted, just left incompletely saved), if you write to them at all, but nothing else, since the partition is still up on the host and the host takes care of the metadata. It's worse with local partitions, where the metadata can be saved incompletely, which can lead to files disappearing all over the filesystem, old files showing up again, and so on.
    Can someone else confirm this?

  • Damaged root filesystem

    Hello folks,
    I'm in some trouble and need help!
    I mirrored my Solaris 10 root filesystem using Solaris Volume Manager using the following sequence of commands:
    metainit -f d1 1 1 c1d0s0
    metainit d2 1 1 c2d0s0
    metainit d0 -m d1
    I then edited the /etc/vfstab file to mount /dev/md/dsk/d0 instead of /dev/dsk/c1d0s0 on /. Then I issued "init 6". The GRUB boot environment commands relating to booting from the disk containing the c1d0s0 slice were not changed.
    You'll immediately note the missing metaroot command; when I rebooted the root file system would not load and warned that it was unable to fsck the metadevice. It then proceeded to ask for the root password to access system maintenance mode.
    The question is: how can I safely roll back the change and reboot from /dev/dsk/c1d0s0? Can I start by going to system maintenance mode in order to use fsck -F ufs /dev/md/dsk/d0?
    Cheers!
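    For reference, the usual SVM root-mirroring sequence includes a metaroot step, which rewrites /etc/vfstab and /etc/system for you (a sketch using the disk names from the post):
    metainit -f d1 1 1 c1d0s0
    metainit d2 1 1 c2d0s0
    metainit d0 -m d1
    metaroot d0
    lockfs -fa
    init 6
    metattach d0 d2
    The metattach of the second submirror is done only after the reboot onto the metadevice.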

    Here's the brief update: As Darren suggested, I was in fact able to run fsck on the slice underlying the metadevice and mount the root file system outside the control of the SVM service. Other problems unrelated to the subject of this thread have so far prevented me from closing this episode. For those interested, I'm providing details below.
    Here's the full update: I booted off the Solaris 10 1/06 installation DVD and ran fsck -F ufs /dev/dsk/c1d0s0 on the cloned drive. The 5th phase reported an impossible cylinder count and proceeded to correct it. A second fsck reported no further errors.
    I was then able to "mount /dev/dsk/c1d0s0 /tmp/goodroot", from where I corrected /etc/vfstab entries. Then, I rebooted.
    The root file system mounted but:
    1. I had trouble with another slice on the same cloned drive where I had saved my SVM database replicas, leading to a state of "Insufficient database replicas located."
    2. Furthermore, on this same cloned drive, a submirror (d51) of the /var filesystem mirror (d50) failed to load and reported that it needed maintenance.
    I tackled these problems using procedures documented in Sun doc 816-4520 (SVM Administration Guide):
    1. I used metadb -d -f c1d0s7 to remove the reference to the missing replicas of the metadevice database. I was then able to boot without the error messages relating to the replicas.
    2. After the reboot referred to above, I replaced the submirror of d50 using:
    metadetach -f d50 d51
    metaclear -f d51
    metainit d51 1 1 c1d0s5
    metattach d50 d51
    More info as it becomes available,
    Cheers!

  • Solaris 10 with zfs root install and VMWare-How to grow disk?

    I have a Solaris 10 instance installed on an ESX host. During the install, I selected a 20 GB disk. Now I would like to grow the disk from 20 GB to 25 GB. I made the change in VMware, but now the issue seems to be on the Solaris side. I haven't seen anything on how to grow the filesystem in Solaris. Someone mentioned using fdisk to manually change the number of cylinders, but that seems awkward. I am using a ZFS root install too.
    bash-3.00# fdisk /dev/rdsk/c1t0d0s0
    Total disk size is 3263 cylinders
    Cylinder size is 16065 (512 byte) blocks
    Cylinders
    Partition Status Type Start End Length %
    ========= ====== ============ ===== === ====== ===
    1 Active Solaris2 1 2609 2609 80
    This shows the expanded number of cylinders. but a format command does not.
    bash-3.00# format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c1t0d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
    /pci@0,0/pci1000,30@10/sd@0,0
    Specify disk (enter its number):
    Any ideas?
    Thanks.

    That's the MBR label on the disk. That's easy to modify with fdisk.
    Inside the Solaris partition is another (VTOC) label. That one is harder to modify. It's what you see when you run 'format' -> 'print' -> 'partition' or 'prtvtoc'.
    To resize it, the only method I'm aware of is to record the slices somewhere, then destroy the label or run 'format -e' and create a new label for the autodetect device. Once you have the new label in place, you can recreate the old slices. All the data on the disk should be stable.
    Then you can make use of the space on the disk for new slices, for enlarging the last slice, or if you have a VM of some sort managing the disk.
    Darren
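    Once the new label is in place and slice 0 has been enlarged, newer Solaris 10 updates can (hedged: assuming the pool version supports the autoexpand property) grow the root pool into the extra space without further relabeling:
    zpool set autoexpand=on rpool
    zpool online -e rpool c1t0d0s0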

  • Convert ZFS root file system to UFS with data.

    Hi, I need to convert my ZFS root file system to UFS and boot from the other disk as a slice (/dev/dsk/c1t0d0s0).
    I am OK with splitting one disk out of the root pool mirror. Any ideas on how this can be achieved?
    Please suggest. Thanks,

    from the same document that was quoted above in the Limitations section:
    Limitations
    Version 2.0 of the Oracle VM Server for SPARC P2V Tool has the following limitations:
    Only UFS file systems are supported.
    Only plain disks (/dev/dsk/c0t0d0s0), Solaris Volume Manager metadevices (/dev/md/dsk/dNNN), and VxVM encapsulated boot disks are supported on the source system.
    During the P2V process, each guest domain can have only a single virtual switch and virtual disk server. You can add more virtual switches and virtual disk servers to the domain after the P2V conversion.
    Support for VxVM volumes is limited to the following volumes on an encapsulated boot disk: rootvol, swapvol, usr, var, opt, and home. The original slices for these volumes must still be present on the boot disk. The P2V tool supports Veritas Volume Manager 5.x on the Solaris 10 OS. However, you can also use the P2V tool to convert Solaris 8 and Solaris 9 operating systems that use VxVM.
    You cannot convert Solaris 10 systems that are configured with zones.
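    Setting the P2V tool aside, a manual ZFS-to-UFS conversion is sometimes sketched roughly as follows (hedged: the device names are taken from the question, and the copy step glosses over separate /var datasets, swap, and boot-archive details):
    newfs /dev/rdsk/c1t0d0s0
    mount /dev/dsk/c1t0d0s0 /mnt
    cd / && find . -xdev -print | cpio -pdum /mnt
    installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c1t0d0s0
    The copied /mnt/etc/vfstab then needs to be edited so / is mounted from /dev/dsk/c1t0d0s0, and the boot device changed accordingly.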

  • [SOLVED] Installing on ZFS root: "ZFS: cannot find bootfs" on boot.

    I have been experimenting with ZFS filesystems on external HDDs for some time now to get more comfortable with using ZFS in the hopes of one day reinstalling my system on a ZFS root.
    Today, I tried installing a system on an USB external HDD, as my first attempt to install on ZFS (I wanted to try in a safe, disposable environment before I try this on my main system).
    My partition configuration (from gdisk):
    Command (? for help): p
    Disk /dev/sdb: 3907024896 sectors, 1.8 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): 2FAE5B61-CCEF-4E1E-A81F-97C8406A07BB
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 3907024862
    Partitions will be aligned on 8-sector boundaries
    Total free space is 0 sectors (0 bytes)
    Number Start (sector) End (sector) Size Code Name
    1 34 2047 1007.0 KiB EF02 BIOS boot partition
    2 2048 264191 128.0 MiB 8300 Linux filesystem
    3 264192 3902828543 1.8 TiB BF00 Solaris root
    4 3902828544 3907024862 2.0 GiB 8300 Linux filesystem
    Partition #1 is for grub, obviously. Partition #2 is an ext2 partition that I mount on /boot in the new system. Partition #3 is where I make my ZFS pool.
    Partition #4 is an ext4 filesystem containing another minimal Arch system for recovery and setup purposes. GRUB is installed on the other system on partition #4, not in the new ZFS system.
    I let grub-mkconfig generate a config file from the system on partition #4 to boot that. Then, I manually edited the generated grub.cfg file to add this menu entry for my ZFS system:
    menuentry 'ZFS BOOT' --class arch --class gnu-linux --class gnu --class os {
    load_video
    set gfxpayload=keep
    insmod gzio
    insmod part_gpt
    insmod ext2
    set root='hd0,gpt2'
    echo 'Loading Linux core repo kernel ...'
    linux /vmlinuz-linux zfs=bootfs zfs_force=1 rw quiet
    echo 'Loading initial ramdisk ...'
    initrd /initramfs-linux.img
    }
    My ZFS configuration:
    # zpool list
    NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
    External2TB 1.81T 6.06G 1.81T 0% 1.00x ONLINE -
    # zpool status :(
    pool: External2TB
    state: ONLINE
    scan: none requested
    config:
    NAME STATE READ WRITE CKSUM
    External2TB ONLINE 0 0 0
    usb-WD_Elements_1048_575836314135334C32383131-0:0-part3 ONLINE 0 0 0
    errors: No known data errors
    # zpool get bootfs
    NAME PROPERTY VALUE SOURCE
    External2TB bootfs External2TB/ArchSystemMain local
    # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    External2TB 14.6G 1.77T 30K none
    External2TB/ArchSystemMain 293M 1.77T 293M /
    External2TB/PacmanCache 5.77G 1.77T 5.77G /var/cache/pacman/pkg
    External2TB/Swap 8.50G 1.78T 20K -
    The reason for the above configuration is that after I get this system to work, I want to install a second system in the same zpool on a different dataset, and have them share a pacman cache.
    GRUB "boots" successfully, in that it loads the kernel and the initramfs as expected from the 2nd GPT partition. The problem is that the kernel does not load the ZFS:
    ERROR: device '' not found. Skipping fsck.
    ZFS: Cannot find bootfs.
    ERROR: Failed to mount the real root device.
    Bailing out, you are on your own. Good luck.
    and I am left in busybox in the initramfs.
    What am I doing wrong?
    Also, here is my /etc/fstab in the new system:
    # External2TB/ArchSystemMain
    #External2TB/ArchSystemMain / zfs rw,relatime,xattr 0 0
    # External2TB/PacmanCache
    #External2TB/PacmanCache /var/cache/pacman/pkg zfs rw,relatime,xattr 0 0
    UUID=8b7639e2-c858-4ff6-b1d4-7db9a393578f /boot ext4 rw,relatime 0 2
    UUID=7a37363e-9adf-4b4c-adfc-621402456c55 none swap defaults 0 0
    I also tried to boot using "zfs=External2TB/ArchSystemMain" in the kernel options, since that was the more logical way to approach my intention of having multiple systems on different datasets. It would allow me to simply create separate grub menu entries for each, with different boot datasets in the kernel parameters. I also tried setting the mount points to "legacy" and uncommenting the zfs entries in my fstab above. That didn't work either and produced the same results, and that was why I decided to try to use "bootfs" (and maybe have a script for switching between the systems by changing the ZFS bootfs and mountpoints before reboot, reusing the same grub menuentry).
    Thanks in advance for any help.

    Sounds like a zpool.cache issue. I'm guessing your zpool.cache inside your arch-chroot is not up to date. So on boot the ZFS hook cannot find the bootfs. At least, that's what I assume the issue is, because of this line:
    ERROR: device '' not found. Skipping fsck.
    If your zpool.cache was populated, it would spit out something other than an empty string.
    Some assumptions:
    - You're using the ZFS packages provided by demizer (repository or AUR).
    - You're using the Arch Live ISO or some version of it.
    On cursory glance your configuration looks good. But verify anyway. Here are the steps you should follow to make sure your zpool.cache is correct and up to date:
    Outside arch-chroot:
    - Import pools (not using '-R') and verify the mountpoints.
    - Make a copy of the /etc/zfs/zpool.cache before you export any pools. Again, make a copy of the /etc/zfs/zpool.cache before you export any pools. The reason for this is once you export a pool the /etc/zfs/zpool.cache gets updated and removes any reference to the exported pool. This is likely the cause of your issue, as you would have an empty zpool.cache.
    - Import the pool containing your root filesystem using the '-R' flag, and mount /boot within.
    - Make sure to copy your updated zpool.cache to your arch-chroot environment.
    Inside arch-chroot:
    - Make sure your bootloader is configured properly (i.e. read 'mkinitcpio -H zfs').
    - Use the 'udev' hook and not the 'systemd' one in your mkinitcpio.conf. The zfs-utils package does not have a ported hook (as of 0.6.2_3.12.6-1).
    - Update your initramfs.
    Outside arch-chroot:
    - Unmount filesystems.
    - Export pools.
    - Reboot.
    Inside new system:
    - Make sure to update the hostid then rebuild your initramfs. Then you can drop the 'zfs_force=1'.
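    Condensed into commands, the procedure above might look roughly like this (a sketch; the pool name and boot device are taken from this thread, the saved-copy path and mount points are assumptions):
    # outside the chroot: import without -R so the cache file gets populated, then save it
    zpool import External2TB
    cp /etc/zfs/zpool.cache /root/zpool.cache.saved
    zpool export External2TB
    # re-import with an altroot, mount /boot, and push the saved cache into the chroot
    zpool import -R /mnt External2TB
    mount /dev/sdb2 /mnt/boot
    cp /root/zpool.cache.saved /mnt/etc/zfs/zpool.cache
    # inside the chroot: rebuild the initramfs with the zfs hook
    arch-chroot /mnt mkinitcpio -p linux
    # outside again: unmount, export, reboot
    umount /mnt/boot
    zpool export External2TB
    reboot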
    Good luck. I enjoy root on ZFS myself. However, I wouldn't recommend swap on ZFS. Despite what the ZoL tracker says, I still ran into deadlocks on occasion (as of a month ago). I can't say definitively what the cause was, but it was resolved when I moved swap off ZFS to a dedicated partition.

  • Live Upgrade fails on cluster node with zfs root zones

    We are having issues using Live Upgrade in the following environment:
    -UFS root
    -ZFS zone root
    -Zones are not under cluster control
    -System is fully up to date for patching
    We also use Live Upgrade with the exact same system configuration on other nodes, except that the zones there have UFS roots, and on those nodes Live Upgrade works fine.
    Here is the output of a Live Upgrade:
    bash-3.2# lucreate -n sol10-20110505 -m /:/dev/md/dsk/d302:ufs,mirror -m /:/dev/md/dsk/d320:detach,attach,preserve -m /var:/dev/md/dsk/d303:ufs,mirror -m /var:/dev/md/dsk/d323:detach,attach,preserve
    Determining types of file systems supported
    Validating file system requests
    The device name </dev/md/dsk/d302> expands to device path </dev/md/dsk/d302>
    The device name </dev/md/dsk/d303> expands to device path </dev/md/dsk/d303>
    Preparing logical storage devices
    Preparing physical storage devices
    Configuring physical storage devices
    Configuring logical storage devices
    Analyzing system configuration.
    Comparing source boot environment <sol10> file systems with the file
    system(s) you specified for the new boot environment. Determining which
    file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    The device </dev/dsk/c0t1d0s0> is not a root device for any boot environment; cannot get BE ID.
    Creating configuration for boot environment <sol10-20110505>.
    Source boot environment is <sol10>.
    Creating boot environment <sol10-20110505>.
    Creating file systems on boot environment <sol10-20110505>.
    Preserving <ufs> file system for </> on </dev/md/dsk/d302>.
    Preserving <ufs> file system for </var> on </dev/md/dsk/d303>.
    Mounting file systems for boot environment <sol10-20110505>.
    Calculating required sizes of file systems for boot environment <sol10-20110505>.
    Populating file systems on boot environment <sol10-20110505>.
    Checking selection integrity.
    Integrity check OK.
    Preserving contents of mount point </>.
    Preserving contents of mount point </var>.
    Copying file systems that have not been preserved.
    Creating shared file system mount points.
    Creating snapshot for <data/zones/img1> on <data/zones/img1@sol10-20110505>.
    Creating clone for <data/zones/img1@sol10-20110505> on <data/zones/img1-sol10-20110505>.
    Creating snapshot for <data/zones/jdb3> on <data/zones/jdb3@sol10-20110505>.
    Creating clone for <data/zones/jdb3@sol10-20110505> on <data/zones/jdb3-sol10-20110505>.
    Creating snapshot for <data/zones/posdb5> on <data/zones/posdb5@sol10-20110505>.
    Creating clone for <data/zones/posdb5@sol10-20110505> on <data/zones/posdb5-sol10-20110505>.
    Creating snapshot for <data/zones/geodb3> on <data/zones/geodb3@sol10-20110505>.
    Creating clone for <data/zones/geodb3@sol10-20110505> on <data/zones/geodb3-sol10-20110505>.
    Creating snapshot for <data/zones/dbs9> on <data/zones/dbs9@sol10-20110505>.
    Creating clone for <data/zones/dbs9@sol10-20110505> on <data/zones/dbs9-sol10-20110505>.
    Creating snapshot for <data/zones/dbs17> on <data/zones/dbs17@sol10-20110505>.
    Creating clone for <data/zones/dbs17@sol10-20110505> on <data/zones/dbs17-sol10-20110505>.
    WARNING: The file </tmp/.liveupgrade.4474.7726/.lucopy.errors> contains a
    list of <2> potential problems (issues) that were encountered while
    populating boot environment <sol10-20110505>.
    INFORMATION: You must review the issues listed in
    </tmp/.liveupgrade.4474.7726/.lucopy.errors> and determine if any must be
    resolved. In general, you can ignore warnings about files that were
    skipped because they did not exist or could not be opened. You cannot
    ignore errors such as directories or files that could not be created, or
    file systems running out of disk space. You must manually resolve any such
    problems before you activate boot environment <sol10-20110505>.
    Creating compare databases for boot environment <sol10-20110505>.
    Creating compare database for file system </var>.
    Creating compare database for file system </>.
    Updating compare databases on boot environment <sol10-20110505>.
    Making boot environment <sol10-20110505> bootable.
    ERROR: unable to mount zones:
    WARNING: zone jdb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/jdb3-sol10-20110505 does not exist.
    WARNING: zone posdb5 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/posdb5-sol10-20110505 does not exist.
    WARNING: zone geodb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/geodb3-sol10-20110505 does not exist.
    WARNING: zone dbs9 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs9-sol10-20110505 does not exist.
    WARNING: zone dbs17 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs17-sol10-20110505 does not exist.
    zoneadm: zone 'img1': "/usr/lib/fs/lofs/mount /.alt.tmp.b-tWc.mnt/global/backups/backups/img1 /.alt.tmp.b-tWc.mnt/zoneroot/img1-sol10-20110505/lu/a/backups" failed with exit code 111
    zoneadm: zone 'img1': call to zoneadmd failed
    ERROR: unable to mount zone <img1> in </.alt.tmp.b-tWc.mnt>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
    ERROR: Unable to remount ABE <sol10-20110505>: cannot make ABE bootable
    ERROR: no boot environment is mounted on root device </dev/md/dsk/d302>
    Making the ABE <sol10-20110505> bootable FAILED.
    ERROR: Unable to make boot environment <sol10-20110505> bootable.
    ERROR: Unable to populate file systems on boot environment <sol10-20110505>.
    ERROR: Cannot make file systems for boot environment <sol10-20110505>.
    Any ideas why it can't mount that "backups" lofs filesystem into /.alt? I am going to try to remove the lofs from the zone configuration and try again. But if that works, I still need to find a way to use lofs filesystems in the zones while using Live Upgrade.
    Thanks

    I was able to successfully do a Live Upgrade with Zones with a ZFS root in Solaris 10 update 9.
    When attempting to do a "lumount s10u9c33zfs", it gave the following error:
    ERROR: unable to mount zones:
    zoneadm: zone 'edd313': "/usr/lib/fs/lofs/mount -o rw,nodevices /.alt.s10u9c33zfs/global/ora_export/stage /zonepool/edd313 -s10u9c33zfs/lu/a/u04" failed with exit code 111
    zoneadm: zone 'edd313': call to zoneadmd failed
    ERROR: unable to mount zone <edd313> in </.alt.s10u9c33zfs>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: No such file or directory: error unmounting <rpool1/ROOT/s10u9c33zfs>
    ERROR: cannot mount boot environment by name <s10u9c33zfs>
    The solution in this case was:
    zonecfg -z edd313
    info ;# display current setting
    remove fs dir=/u05 ;#remove filesystem linked to a "/global/" filesystem in the GLOBAL zone
    verify ;# check change
    commit ;# commit change
    exit

  • ZFS root and Live upgrade

    Is it possible to create /var as its own ZFS dataset when using liveupgrade? With ufs, there's the -m option to lucreate. It seems like any liveupgrade to a ZFS root results in just the root, dump, and swap datasets for the boot environment.
    merill

    Hey man.
    I banged my head against the wall with the same question :-)
    One thing that might help you out anyway is that I found a way to move UFS filesystems to the new ZFS pool.
    Let's say you have a UFS filesystem with, say, an application server and its stuff on /app, which lives on c1t0d0s6.
    When you create the new ZFS-based BE, /app is shared between the BEs.
    To move it to the new BE, all you need to do is comment out the lines in /etc/vfstab you want moved,
    then run lucreate to create the ZFS BE.
    After that, create a new dataset for /app, just give it a different mountpoint.
    Copy all your stuff over,
    rename the original /app,
    and set the dataset's mountpoint.
    That's it; all your stuff is now on ZFS.
    Hope it will be useful,
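    In command form, the steps above come out roughly like this (a sketch; the BE, pool, and dataset names are just examples):
    lucreate -n zfsBE -p rpool
    zfs create -o mountpoint=/app.new rpool/app
    cd /app && find . -print | cpio -pdum /app.new
    # after activating and booting the new BE, where /app is no longer in vfstab:
    mv /app /app.old
    zfs set mountpoint=/app rpool/app
    The /app line in /etc/vfstab is commented out before running lucreate, as described above.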

  • Solaris cluster 3.2 with zfs failover filesystem failed. How can I recover?

    Hi all,
    I have just installed and configured Solaris Cluster 3.2U3, using ZFS for both the root filesystem and the shared-storage filesystem.
    The cluster had been operating fine, but today I can no longer see the zpool for the shared storage. I can still see the storage volume in the output of the format command.
    So all my resources have changed to offline status and my application has failed.
    How can I recover this cluster?
    Is there anybody who can help me? :(

    Have you used a SUNW.HAStoragePlus (HASP) resource to control your zpool? If not, the zpool probably needs importing; that is what the HASP resource would do for you. You would also need a dependency from your application on the HASP resource to ensure that your application does not try to start up before the storage is available.
    Regards,
    Tim
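    For reference, a failover zpool is normally placed under control of a SUNW.HAStoragePlus resource along these lines (a sketch; the resource-group, resource, and pool names are made up):
    clresourcetype register SUNW.HAStoragePlus
    clresource create -g app-rg -t SUNW.HAStoragePlus -p Zpools=sharedpool hasp-rs
    The application resource in app-rg is then given Resource_dependencies=hasp-rs so that it only starts once the pool has been imported.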

  • "ludowngrade" - Sol 8 root filesystem trashed after mounting on Sol 10?

    I've been giving liveupgrade a shot and it seems to work well for upgrading Sol 8 to Sol 10 until a downgrade / rollback is attempted.
    To make a long story short, luactivating back to the old Sol 8 instance doesn't work because I haven't figured out a way to completely unencapsulate the Sol 8 SVM root metadevice without completely removing all SVM metadevices and metadbs from the system before the luupgrade, and we can only reboot once, to activate the new upgrade.
    This leaves the old Sol 8 root filesystem metadevice around after the upgrade (even though it is not mounted anywhere). After an luactivate back to the Sol 8 instance, something gets set wrong and the 5.8 kernel panics with all kinds of undefined symbol errors.
    Which leaves me no choice but to reboot in Solaris 10, and mount the old Solaris 8 filesystem, then edit the Sol 8 /etc/system and vfstab files to boot off a plain, non-SVM root filesystem.
    Here's the problem: Once I have mounted the Old Sol 8 filesystem in Sol 10, it fails fsck when booting Sol 8;
    /dev/rdsk/c1t0d0s0: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
    # fsck /dev/rdsk/c1t0d0s0
    BAD SUPERBLOCK AT BLOCK 16: BAD VALUES IN SUPER BLOCK
    LOOK FOR ALTERNATE SUPERBLOCKS WITH MKFS? y
    USE ALTERNATE SUPERBLOCK? y
    FOUND ALTERNATE SUPERBLOCK AT 32 USING MKFS
    CANCEL FILESYSTEM CHECK? n
    Fortunately, recovering the alternate superblock makes the filesystem usable in Sol 8 again. Is this supposed to happen?
    The only thing I can think of is I have logging enabled on the root FS in Sol 10, so apparently logging trashes the superblock in Sol 10 such that the FS cannot be mounted in Sol 8 without repair.
    Better yet would be a HOWTO on how to luupgrade a root filesystem encapsulated in SVM without removing the metadevices first. It seems impossible since without any fiddling, all the LU instances on the host will share the SVM metadb's on the system, which leads to problems.
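    If the logging theory holds, mounting the old Solaris 8 root read-only (or with UFS logging disabled) from Solaris 10 should avoid the superblock being rewritten; a hedged example using the device from the post:
    # read-only, safest:
    mount -F ufs -o ro /dev/dsk/c1t0d0s0 /mnt
    # or writable with logging disabled:
    mount -F ufs -o nologging /dev/dsk/c1t0d0s0 /mnt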

    Did you upgrade the version of powerpath to a release supported on Solaris 10?

  • Cloning a ZFS rooted zone does a copy rather than snapshot and clone?

    Solaris 10 05/08 and 10/08 on SPARC
    When I clone an existing zone that is stored on a ZFS filesystem, the system creates a copy rather than taking a ZFS snapshot and clone, as the documentation suggests;
    Using ZFS to Clone Non-Global Zones and Other Enhancements
    Solaris 10 6/06 Release: When the source zonepath and the target zonepath both reside on ZFS and are in the same pool,
    zoneadm clone now automatically uses the ZFS clone feature to clone a zone. This enhancement means that zoneadm
    clone will take a ZFS snapshot of the source zonepath and set up the target zonepath.
    Currently I have a ZFS root pool for the global zone; the boot environment is s10u6:
    rpool 10.4G 56.5G 94K /rpool
    rpool/ROOT 7.39G 56.5G 18K legacy
    rpool/ROOT/s10u6 7.39G 56.5G 6.57G /
    rpool/ROOT/s10u6/zones 844M 56.5G 27K /zones
    rpool/ROOT/s10u6/zones/moetutil 844M 56.5G 844M /zones/moetutil
    My first zone is called moetutil and is up and running. I create a new zone ready to clone the original one;
    -bash-3.00# zonecfg -z newzone 'create; set autoboot=true; set zonepath=/zones/newzone; add net; set address=192.168.0.10; set physical=ce0; end; verify; commit; exit'
    -bash-3.00# zoneadm list -vc
    ID NAME STATUS PATH BRAND IP
    0 global running / native shared
    - moetutil installed /zones/moetutil native shared
    - newzone configured /zones/newzone native shared
    Now I clone it;
    -bash-3.00# zoneadm -z newzone clone moetutil
    Cloning zonepath /zones/moetutil...
    I'm expecting to see;
    -bash-3.00# zoneadm -z newzone clone moetutil
    Cloning snapshot rpool/ROOT/s10u6/zones/moetutil@SUNWzone1
    Instead of copying, a ZFS clone has been created for this zone.
    What am I missing?
    Thanks
    Mark

    Hi Mark,
    Sorry, I don't have an answer but I'm seeing the exact same behavior - also with S10u6. Please let me know if you get an answer.
    Thanks!
    Dave

  • [SOLVED] df -h does not reflect true root filesystem size

    Why does df -h show root filesystem as being only 20G?
    df -h
    Filesystem Size Used Avail Use% Mounted on
    /dev/mapper/cryptroot 20G 15G 4.6G 76% /
    dev 7.7G 0 7.7G 0% /dev
    run 7.7G 668K 7.7G 1% /run
    tmpfs 7.7G 70M 7.7G 1% /dev/shm
    tmpfs 7.7G 0 7.7G 0% /sys/fs/cgroup
    tmpfs 7.7G 224K 7.7G 1% /tmp
    /dev/sda1 239M 40M 183M 18% /boot
    tmpfs 1.6G 8.0K 1.6G 1% /run/user/1000
    That is what my df -h output looks like. My setup is full disk encryption using dm-crypt with LUKS, per the guide on the Arch wiki. I basically created one /boot partition and left the rest of the disk to be an encrypted partition for the root filesystem. So why is my system complaining about running out of space (and acting as if it actually is)? Have I forgotten something?
    Thank you for reading this. Let me know if you need any more logs or info on my setup - I realise I haven't provided very much info here, but I can't think of what to provide.

    This is lsblk:
    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    sda 8:0 0 465.8G 0 disk
    ├─sda1 8:1 0 250M 0 part /boot
    └─sda2 8:2 0 465.5G 0 part
    └─cryptroot 254:0 0 465.5G 0 crypt /
    sdb 8:16 0 14.9G 0 disk
    └─sdb1 8:17 0 14.9G 0 part
    and fdisk -l
    Disk /dev/sdb: 14.9 GiB, 16013942784 bytes, 31277232 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x5da5572f
    Device Boot Start End Sectors Size Id Type
    /dev/sdb1 2048 31275007 31272960 14.9G 73 unknown
    Disk /dev/sda: 465.8 GiB, 500107862016 bytes, 976773168 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disklabel type: dos
    Disk identifier: 0xa5018820
    Device Boot Start End Sectors Size Id Type
    /dev/sda1 * 2048 514047 512000 250M 83 Linux
    /dev/sda2 514048 976773167 976259120 465.5G 83 Linux
    Disk /dev/mapper/cryptroot: 465.5 GiB, 499842572288 bytes, 976255024 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk /dev/sdc: 1.4 TiB, 1500301908480 bytes, 2930277165 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x445e51a8
    And graysky: I thought I made the partition for /boot 250 MB and the encrypted partition 465.5 GB but I'm now quite sure I did something wrong...
    Thank you all
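    For anyone hitting the same symptom: when df reports a filesystem much smaller than the partition it sits on (20G versus the 465.5G cryptroot mapping above), a common cause is that the filesystem inside the dm-crypt container was never grown to fill it. Hedged, and assuming ext4 as in a default install, the fix is typically just:
    resize2fs /dev/mapper/cryptroot
    ext4 can be grown online, so this can usually be run against the mounted root filesystem.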
