Scrub ZFS root pool

Does anyone see any issue in having a cron job that scrubs the ZFS root pool rpool periodically?
Let's say every Sunday at midnight (00:00 on Sunday).
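For reference, a crontab entry for root matching that schedule might look like this (a sketch; it assumes zpool lives in /usr/sbin as on a stock install, and 0 in the last field is Sunday):
      0 0 * * 0 /usr/sbin/zpool scrub rpool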

Hi,
What you need to do is very easy; here is the procedure that I use:
1) Make a recursive snapshot of the rpool:
       zfs snapshot -r rpool@<snapshotname>
2) Send that snapshot somewhere safe (destroy the dump and swap snapshots first, since they don't need to be backed up):
       zfs destroy rpool/dump@<snapshotname>
       zfs destroy rpool/swap@<snapshotname>
       zfs send -R rpool@<snapshotname> | gzip > /net/<ipaddress>/<share>/<snapshotname>.gz
3) Once the above is done you can do the following.
     Boot from DVD, make sure you have a disk available, and start creating the rpool.
     The rpool can be created with an EFI or SMI label:
     for an EFI label use the whole disk, e.g. zpool create rpool c0d0;
     for an SMI label use slice 0, e.g. zpool create rpool c0d0s0 => make sure the disk is labeled and that all cylinders are in s0.
4) Create the rpool:
      zpool create rpool <disk>
5) Import the data again and recreate the dump and swap volumes:
     gzcat /mnt/<snapshotname>.gz | zfs receive -Fv rpool
     zfs create -V 4G rpool/dump
     zfs create -V 4G rpool/swap
6) Check the list of boot environments and make the restored one bootable:
        beadm list
        beadm mount <bootenv> /tmp/mnt
        bootadm install-bootloader -P rpool
        devfsadm -Cn -r /tmp/mnt
        touch /tmp/mnt/reconfigure
        beadm umount <bootenv>
        beadm activate <bootenv>
This is for Solaris 11, but it also works for Solaris 10; only the last part (step 6) is different.
I need to look this up again, but if I remember correctly, on Solaris 10 you need to set the bootfs property on the rpool.
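A sketch of what that looks like on Solaris 10 (not verified; <BE-name> and <disk> are placeholders):
      zpool set bootfs=rpool/ROOT/<BE-name> rpool
      installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<disk>s0    (SPARC)
      installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/<disk>s0                   (x86)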
If you want, I have a script that makes a backup of the rpool to an NFS share.
Hope this helps
Regards
Filip

Similar Messages

  • Question about using ZFS root pool for whole disk?

    I played around with the newest version of Solaris 10/08 over my vacation by loading it onto a V210 with dual 72GB drives. I used the ZFS root partition configuration and all seemed to go well.
    After I was done, I wondered whether using the whole disk for the zpool was a good idea or not. I did some looking around but didn't see anything that suggested it was good or bad.
    I already know about some of the flash archive issues and will be playing with those shortly, but I am curious how others are setting up their root ZFS pools.
    Would it be smarter to set up, say, a 9 GB slice on both drives, create the root zpool on that and mirror it to the other drive, and then create another ZFS pool from the remaining space?

    route1 wrote:
    Just a word of caution when using ZFS as your boot disk. There are tons of bugs in ZFS boot that can make the system un-bootable and un-recoverable.
    Can you expand upon that statement with supporting evidence (BugIDs and such)? I have a number of local zones (sparse and full) on three Sol10u6 SPARC machines and they've been booting fine. I am having problems LiveUpgrading (lucreate) that I'm scratching my head to resolve. But I haven't had any ZFS boot/root corruption.

  • ZFS Root Pool Restore of a EFI labelled Disk

    Hi Team
    Please let me know the procedure for backing up and restoring an EFI-labelled root pool using zfs send/receive.
    Note: the original operating system is installed on a T5 server with the latest firmware; here the default disk label is EFI instead of SMI, as it was with earlier firmware versions.
    The operating system is Solaris 11.1.
    I also need to know how to expand a LUN formatted as an EFI-labelled disk without losing its data.
    Expecting a positive response soon
    Regards
    Arun

    Hi,
    What you need to do is very easy; here is the procedure that I use:
    1) Make a recursive snapshot of the rpool:
           zfs snapshot -r rpool@<snapshotname>
    2) Send that snapshot somewhere safe (destroy the dump and swap snapshots first, since they don't need to be backed up):
           zfs destroy rpool/dump@<snapshotname>
           zfs destroy rpool/swap@<snapshotname>
           zfs send -R rpool@<snapshotname> | gzip > /net/<ipaddress>/<share>/<snapshotname>.gz
    3) Once the above is done you can do the following.
         Boot from DVD, make sure you have a disk available, and start creating the rpool.
         The rpool can be created with an EFI or SMI label:
         for an EFI label use the whole disk, e.g. zpool create rpool c0d0;
         for an SMI label use slice 0, e.g. zpool create rpool c0d0s0 => make sure the disk is labeled and that all cylinders are in s0.
    4) Create the rpool:
          zpool create rpool <disk>
    5) Import the data again and recreate the dump and swap volumes:
         gzcat /mnt/<snapshotname>.gz | zfs receive -Fv rpool
         zfs create -V 4G rpool/dump
         zfs create -V 4G rpool/swap
    6) Check the list of boot environments and make the restored one bootable:
            beadm list
            beadm mount <bootenv> /tmp/mnt
            bootadm install-bootloader -P rpool
            devfsadm -Cn -r /tmp/mnt
            touch /tmp/mnt/reconfigure
            beadm umount <bootenv>
            beadm activate <bootenv>
    This is for Solaris 11, but it also works for Solaris 10; only the last part (step 6) is different.
    I need to look this up again, but if I remember correctly, on Solaris 10 you need to set the bootfs property on the rpool.
    If you want, I have a script that makes a backup of the rpool to an NFS share.
    Hope this helps
    Regards
    Filip
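    For the LUN-expansion part of the question, a minimal sketch, assuming the LUN has already been grown on the storage side (the device name is a placeholder):
         zpool set autoexpand=on rpool
         zpool online -e rpool c0t<WWN>d0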

  • S10 x86 ZFS on VMWare - Increase root pool?

    I'm running Solaris 10 x86 on VMWare.
    I need more space in the zfs root pool.
    I doubled the provisioned space in Hard disk 1, but it is not visible to the VM (format).
    I tried creating a 2nd HD, but root pool can't have multiple VDEVs.
    How can I add space to my root pool without rebuilding?

    Hi,
    This is what I did in single user (it may fail in multi user):
    -> format -> partition -> print
    Current partition table (original):
    Total disk cylinders available: 1302 + 2 (reserved cylinders)
    Part Tag Flag Cylinders Size Blocks
    0 root wm 1 - 1301 9.97GB (1301/0/0) 20900565
    1 unassigned wm 0 0 (0/0/0) 0
    2 backup wm 0 - 1301 9.97GB (1302/0/0) 20916630
    3 unassigned wm 0 0 (0/0/0) 0
    4 unassigned wm 0 0 (0/0/0) 0
    5 unassigned wm 0 0 (0/0/0) 0
    6 unassigned wm 0 0 (0/0/0) 0
    7 unassigned wm 0 0 (0/0/0) 0
    8 boot wu 0 - 0 7.84MB (1/0/0) 16065
    9 unassigned wm 0 0 (0/0/0) 0
    -> format -> fdisk
    Total disk size is 1566 cylinders
    Cylinder size is 16065 (512 byte) blocks
    Cylinders
    Partition Status Type Start End Length %
    ========= ====== ============ ===== === ====== ===
    1 Active Solaris2 1 1304 1304 83
    -> format -> fdisk -> delete partition 1
    -> format -> fdisk -> create SOLARIS2 partition with 100% of the disk
    Total disk size is 1566 cylinders
    Cylinder size is 16065 (512 byte) blocks
    Cylinders
    Partition Status Type Start End Length %
    ========= ====== ============ ===== === ====== ===
    1 Active Solaris2 1 1565 1565 100
    format -> partition -> print
    Current partition table (original):
    Total disk cylinders available: 1563 + 2 (reserved cylinders)
    Part Tag Flag Cylinders Size Blocks
    0 unassigned wm 0 0 (0/0/0) 0
    1 unassigned wm 0 0 (0/0/0) 0
    2 backup wu 0 - 1562 11.97GB (1563/0/0) 25109595
    3 unassigned wm 0 0 (0/0/0) 0
    4 unassigned wm 0 0 (0/0/0) 0
    5 unassigned wm 0 0 (0/0/0) 0
    6 unassigned wm 0 0 (0/0/0) 0
    7 unassigned wm 0 0 (0/0/0) 0
    8 boot wu 0 - 0 7.84MB (1/0/0) 16065
    9 unassigned wm 0 0 (0/0/0) 0
    -> format -> partition -> 0 cyl=1 size=1562e
    Current partition table (unnamed):
    Total disk cylinders available: 1563 + 2 (reserved cylinders)
    Part Tag Flag Cylinders Size Blocks
    0 unassigned wm 1 - 1562 11.97GB (1562/0/0) 25093530
    1 unassigned wm 0 0 (0/0/0) 0
    2 backup wu 0 - 1562 11.97GB (1563/0/0) 25109595
    3 unassigned wm 0 0 (0/0/0) 0
    4 unassigned wm 0 0 (0/0/0) 0
    5 unassigned wm 0 0 (0/0/0) 0
    6 unassigned wm 0 0 (0/0/0) 0
    7 unassigned wm 0 0 (0/0/0) 0
    8 boot wu 0 - 0 7.84MB (1/0/0) 16065
    9 unassigned wm 0 0 (0/0/0) 0
    -> format -> partition -> label
    zpool set autoexpand=on rpool
    zpool list
    zpool scrub rpool
    zpool status
    Best regards,
    Ibraima

  • Can't boot with zfs root - Solaris 10 u6

    Having installed Solaris 10 u6 on one disk with native ufs and made this work by adding the following entries
    /etc/driver_aliases
    glm pci1000,f
    /etc/path_to_inst
    <lang pci string for my scsi controller> glm
    which are needed since the driver selected by default is the ncrs SCSI controller driver, which does not work in 64-bit mode.
    Now I would like to create a new boot env. on a second disk on the same scsi controller, but use zfs instead.
    Using Live Upgrade to create a new boot env on the second disk with zfs as file system worked fine.
    But when trying to boot of it I get the following error
    spa_import_rootpool: error 22
    panic[cpu0]/thread=fffffffffbc26ba0: cannot mount root path /pci@0,0-pci1002,4384@14,4/pci1000@1000@5/sd@1,0:a
    Well, that's the same error I got with ufs before making the above-mentioned changes to /etc/driver_aliases and /etc/path_to_inst.
    But that seems not to be enough when using zfs.
    What am I missing?

    Hmm I dropped the live upgrade from ufs to zfs because I was not 100% sure it worked.
    Then I did a reinstall, selecting zfs during the install, and made the changes to driver_aliases and path_to_inst before the first reboot.
    The system came up fine on the first reboot, used the glm scsi driver, and ran in 64-bit mode.
    But that was it: when the system was then rebooted (at which point it built a new boot archive) it stopped working, with the same error as before.
    I have managed to get it to boot in 32-bit mode, but I still get the same error (that is independent of which scsi driver is used).
    In all cases it does print the SunOS Release banner, it does load the driver (ncrs or glm), and it detects the disks with the correct path and numbering.
    But it fails to mount the root file system.
    So basically the current status is no-go if you need the ncrs/glm scsi driver to access the disks holding your zfs root pool.
    Failsafe boot works and can mount the zfs root pool, but that's no fun as a server OS :(
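    In case it helps anyone hitting the same wall: the driver edits can be re-applied and the boot archive rebuilt from the failsafe shell. A rough sketch, assuming the ZFS root BE is mounted on /a:
         # re-apply the glm entries in the mounted BE
         vi /a/etc/driver_aliases /a/etc/path_to_inst
         # rebuild the boot archive for that BE and reboot
         bootadm update-archive -R /a
         init 6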

  • Cloning a ZFS rooted zone does a copy rather than snapshot and clone?

    Solaris 10 05/08 and 10/08 on SPARC
    When I clone an existing zone that is stored on a ZFS filesystem the system creates a copy rather than take a ZFS snapshot and clone as the documentation suggests;
    Using ZFS to Clone Non-Global Zones and Other Enhancements
    Solaris 10 6/06 Release: When the source zonepath and the target zonepath both reside on ZFS and are in the same pool,
    zoneadm clone now automatically uses the ZFS clone feature to clone a zone. This enhancement means that zoneadm
    clone will take a ZFS snapshot of the source zonepath and set up the target zonepath.
    Currently I have a ZFS root pool for the global zone; the boot environment is s10u6:
    rpool 10.4G 56.5G 94K /rpool
    rpool/ROOT 7.39G 56.5G 18K legacy
    rpool/ROOT/s10u6 7.39G 56.5G 6.57G /
    rpool/ROOT/s10u6/zones 844M 56.5G 27K /zones
    rpool/ROOT/s10u6/zones/moetutil 844M 56.5G 844M /zones/moetutil
    My first zone is called moetutil and is up and running. I create a new zone ready to clone the original one;
    -bash-3.00# zonecfg -z newzone 'create; set autoboot=true; set zonepath=/zones/newzone; add net; set address=192.168.0.10; set physical=ce0; end; verify; commit; exit'
    -bash-3.00# zoneadm list -vc
    ID NAME STATUS PATH BRAND IP
    0 global running / native shared
    - moetutil installed /zones/moetutil native shared
    - newzone configured /zones/newzone native shared
    Now I clone it;
    -bash-3.00# zoneadm -z newzone clone moetutil
    Cloning zonepath /zones/moetutil...
    I'm expecting to see;
    -bash-3.00# zoneadm -z newzone clone moetutil
    Cloning snapshot rpool/ROOT/s10u6/zones/moetutil@SUNWzone1
    Instead of copying, a ZFS clone has been created for this zone.
    What am I missing?
    Thanks
    Mark

    Hi Mark,
    Sorry, I don't have an answer but I'm seeing the exact same behavior - also with S10u6. Please let me know if you get an answer.
    Thanks!
    Dave

  • How so I protect my root file system? - x86 solaris 10 - zfs data pools

    Hello all:
    I'm new to ZFS and am trying to understand it better before I start building a new file server. I'm looking for a low-cost file server for smaller projects I support and would like to use the ZFS capabilities. If I install Solaris 10 on an x86 platform and add a bunch of drives to it to create a zpool (raidz), how do I protect my root filesystem? The files in the ZFS file system are well protected, but what about my operating system files down in the root ufs filesystem? If the root filesystem gets corrupted, do I lose the zfs filesystem too? Or can I independently rebuild the root filesystem and just remount the zfs filesystem? Should I install Solaris 10 on a mirrored set of drives? Can the root filesystem be zfs too? I'd like to be able to use a fairly simple PC to do this, perhaps one that doesn't have built-in raid. I'm not looking for 10 terabytes of storage, maybe just four 500 GB SATA disks connected into a raidz zpool.
    thanks,

    patrickez wrote:
    "If I install Solaris 10 on a x86 platform and add a bunch of drives to it to create a zpool (raidz), how do I protect my root filesystem?"
    Solaris 10 doesn't yet support ZFS for a root filesystem, but it is working in some OpenSolaris distributions.
    You could use Sun Volume Manager to create a mirror for your root filesystem.
    "The files in the ZFS file system are well protected, but what about my operating system files down in the root ufs filesystem? If the root filesystem gets corrupted, do I lose the zfs filesystem too?"
    No. They're separate filesystems.
    "Or can I independently rebuild the root filesystem and just remount the zfs filesystem?"
    Yes. (Actually, you can import the ZFS pool you created.)
    "Should I install Solaris 10 on a mirrored set of drives?"
    If you have one, that would work as well.
    "Can the root filesystem be zfs too?"
    Not currently in Solaris 10. The initial root support in OpenSolaris will require the root pool be only a single disk or mirrors. No striping, no raidz.
    Darren
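    To flesh out the SVM suggestion above, a minimal UFS root-mirror sketch for Solaris 10 (device names are placeholders; it assumes a small slice 7 on each disk for the state database replicas):
         # state database replicas on both disks
         metadb -a -f -c 3 c0t0d0s7 c0t1d0s7
         # submirrors for the root slice, then the one-way mirror
         metainit -f d10 1 1 c0t0d0s0
         metainit d20 1 1 c0t1d0s0
         metainit d0 -m d10
         metaroot d0
         # after the reboot that metaroot requires:
         metattach d0 d20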

  • Booting from a mirrored disk on a zfs root system

    Hi all,
    I am a newbie here.
    I have a zfs root system with mirrored disks c0t0d0s0 and c1t0d0s0, grub has been installed on c0t0d0s0 and OS booting is just fine.
    Now the question is if I want to boot the OS from the mirrored disk c1t0d0s0, how can I achieve that.
    OS is solaris 10 update 7.
    I installed GRUB on c1t0d0s0 and assume menu.lst needs to be changed (but I don't know how); so far, no luck.
    # zpool status zfsroot
    pool: zfsroot
    state: ONLINE
    scrub: none requested
    config:
    NAME STATE READ WRITE CKSUM
    zfsroot ONLINE 0 0 0
    mirror ONLINE 0 0 0
    c1t0d0s0 ONLINE 0 0 0
    c0t0d0s0 ONLINE 0 0 0
    # bootadm list-menu
    The location for the active GRUB menu is: /zfsroot/boot/grub/menu.lst
    default 0
    timeout 10
    0 s10u6-zfs
    1 s10u6-zfs failsafe
    # tail /zfsroot/boot/grub/menu.lst
    title s10u6-zfs
    findroot (BE_s10u6-zfs,0,a)
    bootfs zfsroot/ROOT/s10u6-zfs
    kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
    module /platform/i86pc/boot_archive
    title s10u6-zfs failsafe
    findroot (BE_s10u6-zfs,0,a)
    bootfs zfsroot/ROOT/s10u6-zfs
    kernel /boot/multiboot kernel/unix -s -B console=ttya
    module /boot/x86.miniroot-safe
    Appreciate anyone can provide some tips.
    Thanks.
    Mizuki

    This is what I have in my notes.... not sure if I wrote them or not. This is a sparc example as well. I believe on my x86 I still have to tell the bios to boot the mirror.
    After attaching mirror (if the mirror was not present during the initial install) you need to fix the boot block.
    #installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
    If the primary then fails you need to set the obp to the mirror:
    ok>boot disk1
    for example
    Apparently there is a way to set the obp to search for a bootable disk automatically.
    Good notes on all kinds of zfs and boot issues here:
    http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#ZFS_Boot_Issues
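    Since the original question here is x86/GRUB rather than SPARC, the equivalent of installboot is installgrub; a sketch for the mirror disk in this thread:
         installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
    As far as I know menu.lst itself does not need to change; the remaining step is telling the BIOS (or the OBP on SPARC) to boot from the second disk.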

  • Change ZFS root dataset name for root file system

    Hi all
    A quick one.
    I accepted the default ZFS root dataset name for the root file system during Solaris 10 installation.
    Can I change it to another name afterward without reinstalling the OS? For example,
    zfs rename rpool/ROOT/s10s_u6wos_07b rpool/ROOT/`hostname`
    zfs rename rpool/ROOT/s10s_u6wos_07b/var rpool/ROOT/`hostname`/var
    Thank you.

    Renaming the root pool is not recommended.
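    If the goal is just a friendlier boot-environment name, a safer route on Solaris 10 is to clone the current BE under the desired name with Live Upgrade instead of renaming datasets in place. A sketch (untested):
         lucreate -n `hostname`
         luactivate `hostname`
         init 6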

  • Convert ZFS root file system to UFS with data.

    Hi, I need to convert my ZFS root file systems to UFS and boot from the other disk as a slice (/dev/dsk/c1t0d0s0).
    I am OK with splitting one disk out of the root pool mirror. Any ideas on how this can be achieved?
    Please suggest. Thanks,

    from the same document that was quoted above in the Limitations section:
    Limitations
    Version 2.0 of the Oracle VM Server for SPARC P2V Tool has the following limitations:
    Only UFS file systems are supported.
    Only plain disks (/dev/dsk/c0t0d0s0), Solaris Volume Manager metadevices (/dev/md/dsk/dNNN), and VxVM encapsulated boot disks are supported on the source system.
    During the P2V process, each guest domain can have only a single virtual switch and virtual disk server. You can add more virtual switches and virtual disk servers to the domain after the P2V conversion.
    Support for VxVM volumes is limited to the following volumes on an encapsulated boot disk: rootvol, swapvol, usr, var, opt, and home. The original slices for these volumes must still be present on the boot disk. The P2V tool supports Veritas Volume Manager 5.x on the Solaris 10 OS. However, you can also use the P2V tool to convert Solaris 8 and Solaris 9 operating systems that use VxVM.
    You cannot convert Solaris 10 systems that are configured with zones.

  • [SOLVED] Installing on ZFS root: "ZFS: cannot find bootfs" on boot.

    I have been experimenting with ZFS filesystems on external HDDs for some time now to get more comfortable with using ZFS in the hopes of one day reinstalling my system on a ZFS root.
    Today, I tried installing a system on an USB external HDD, as my first attempt to install on ZFS (I wanted to try in a safe, disposable environment before I try this on my main system).
    My partition configuration (from gdisk):
    Command (? for help): p
    Disk /dev/sdb: 3907024896 sectors, 1.8 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): 2FAE5B61-CCEF-4E1E-A81F-97C8406A07BB
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 3907024862
    Partitions will be aligned on 8-sector boundaries
    Total free space is 0 sectors (0 bytes)
    Number Start (sector) End (sector) Size Code Name
    1 34 2047 1007.0 KiB EF02 BIOS boot partition
    2 2048 264191 128.0 MiB 8300 Linux filesystem
    3 264192 3902828543 1.8 TiB BF00 Solaris root
    4 3902828544 3907024862 2.0 GiB 8300 Linux filesystem
    Partition #1 is for grub, obviously. Partition #2 is an ext2 partition that I mount on /boot in the new system. Partition #3 is where I make my ZFS pool.
    Partition #4 is an ext4 filesystem containing another minimal Arch system for recovery and setup purposes. GRUB is installed on the other system on partition #4, not in the new ZFS system.
    I let grub-mkconfig generate a config file from the system on partition #4 to boot that. Then, I manually edited the generated grub.cfg file to add this menu entry for my ZFS system:
    menuentry 'ZFS BOOT' --class arch --class gnu-linux --class gnu --class os {
    load_video
    set gfxpayload=keep
    insmod gzio
    insmod part_gpt
    insmod ext2
    set root='hd0,gpt2'
    echo 'Loading Linux core repo kernel ...'
    linux /vmlinuz-linux zfs=bootfs zfs_force=1 rw quiet
    echo 'Loading initial ramdisk ...'
    initrd /initramfs-linux.img
    }
    My ZFS configuration:
    # zpool list
    NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
    External2TB 1.81T 6.06G 1.81T 0% 1.00x ONLINE -
    # zpool status :(
    pool: External2TB
    state: ONLINE
    scan: none requested
    config:
    NAME STATE READ WRITE CKSUM
    External2TB ONLINE 0 0 0
    usb-WD_Elements_1048_575836314135334C32383131-0:0-part3 ONLINE 0 0 0
    errors: No known data errors
    # zpool get bootfs
    NAME PROPERTY VALUE SOURCE
    External2TB bootfs External2TB/ArchSystemMain local
    # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    External2TB 14.6G 1.77T 30K none
    External2TB/ArchSystemMain 293M 1.77T 293M /
    External2TB/PacmanCache 5.77G 1.77T 5.77G /var/cache/pacman/pkg
    External2TB/Swap 8.50G 1.78T 20K -
    The reason for the above configuration is that after I get this system to work, I want to install a second system in the same zpool on a different dataset, and have them share a pacman cache.
    GRUB "boots" successfully, in that it loads the kernel and the initramfs as expected from the 2nd GPT partition. The problem is that the kernel does not load the ZFS:
    ERROR: device '' not found. Skipping fsck.
    ZFS: Cannot find bootfs.
    ERROR: Failed to mount the real root device.
    Bailing out, you are on your own. Good luck.
    and I am left in busybox in the initramfs.
    What am I doing wrong?
    Also, here is my /etc/fstab in the new system:
    # External2TB/ArchSystemMain
    #External2TB/ArchSystemMain / zfs rw,relatime,xattr 0 0
    # External2TB/PacmanCache
    #External2TB/PacmanCache /var/cache/pacman/pkg zfs rw,relatime,xattr 0 0
    UUID=8b7639e2-c858-4ff6-b1d4-7db9a393578f /boot ext4 rw,relatime 0 2
    UUID=7a37363e-9adf-4b4c-adfc-621402456c55 none swap defaults 0 0
    I also tried to boot using "zfs=External2TB/ArchSystemMain" in the kernel options, since that was the more logical way to approach my intention of having multiple systems on different datasets. It would allow me to simply create separate grub menu entries for each, with different boot datasets in the kernel parameters. I also tried setting the mount points to "legacy" and uncommenting the zfs entries in my fstab above. That didn't work either and produced the same results, and that was why I decided to try to use "bootfs" (and maybe have a script for switching between the systems by changing the ZFS bootfs and mountpoints before reboot, reusing the same grub menuentry).
    Thanks in advance for any help.
    Last edited by tajjada (2013-12-30 20:03:09)

    Sounds like a zpool.cache issue. I'm guessing your zpool.cache inside your arch-chroot is not up to date. So on boot the ZFS hook cannot find the bootfs. At least, that's what I assume the issue is, because of this line:
    ERROR: device '' not found. Skipping fsck.
    If your zpool.cache was populated, it would spit out something other than an empty string.
    Some assumptions:
    - You're using the ZFS packages provided by demizer (repository or AUR).
    - You're using the Arch Live ISO or some version of it.
    On cursory glance your configuration looks good. But verify anyway. Here are the steps you should follow to make sure your zpool.cache is correct and up to date:
    Outside arch-chroot:
    - Import pools (not using '-R') and verify the mountpoints.
    - Make a copy of the /etc/zfs/zpool.cache before you export any pools. Again, make a copy of the /etc/zfs/zpool.cache before you export any pools. The reason for this is once you export a pool the /etc/zfs/zpool.cache gets updated and removes any reference to the exported pool. This is likely the cause of your issue, as you would have an empty zpool.cache.
    - Import the pool containing your root filesystem using the '-R' flag, and mount /boot within.
    - Make sure to copy your updated zpool.cache to your arch-chroot environment.
    Inside arch-chroot:
    - Make sure your bootloader is configured properly (i.e. read 'mkinitcpio -H zfs').
    - Use the 'udev' hook and not the 'systemd' one in your mkinitcpio.conf. The zfs-utils package does not have a ported hook (as of 0.6.2_3.12.6-1).
    - Update your initramfs.
    Outside arch-chroot:
    - Unmount filesystems.
    - Export pools.
    - Reboot.
    Inside new system:
    - Make sure to update the hostid then rebuild your initramfs. Then you can drop the 'zfs_force=1'.
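    Condensing the outside-chroot steps into commands (a rough sketch; the pool name comes from this thread, while the backup path and the /mnt altroot are assumptions):
         zpool import External2TB                        # import without -R and check the mountpoints
         cp /etc/zfs/zpool.cache /root/zpool.cache.bak   # save a copy before exporting anything
         zpool export External2TB
         zpool import -R /mnt External2TB                # re-import with an altroot
         mount /dev/sdb2 /mnt/boot                       # the /boot partition from the gdisk listing above
         cp /root/zpool.cache.bak /mnt/etc/zfs/zpool.cache
         arch-chroot /mnt mkinitcpio -p linux            # rebuild the initramfs inside the chroot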
    Good luck. I enjoy root on ZFS myself. However, I wouldn't recommend swap on ZFS. Despite what the ZoL tracker says, I still ran into deadlocks on occasion (as of a month ago). I can't say definitively what caused the issue, but it went away when I moved swap off ZFS to a dedicated partition.
    Last edited by NVS (2013-12-29 14:56:44)

  • ZFS root filesystem & slice 7 for metadb (SUNWjet)

    Hi,
    I'm planning to use a ZFS root filesystem in a Sun Cluster 3.3 environment. As written in the documentation, when we use a UFS shared diskset we need to create a small slice for the metadb on slice 7. In a standard installation we can't create slice 7 when we install Solaris with a ZFS root, but we can create it with the Jumpstart profile below:
    # example Jumpstart profile -- ZFS with
    # space on s7 left out of the zpool for SVM metadb
    install_type initial_install
    cluster SUNWCXall
    filesys c0t0d0s7 32
    pool rpool auto 2G 2G c0t0d0s0
    So my question is: when we use SUNWjet (JumpStart(tm) Enterprise Toolkit), how can we write a profile similar to the Jumpstart profile above?
    Thanks very much for your best answer.

    This can be done with JET
    You create the template as normal.
    Then create a profile file with the slice 7 line.
    Then edit the template to use it.
    see
    ---8<
    # It is also possible to append additional profile information to the JET
    # derived one. Do this using the base_config_profile_append variable, but
    # don't forget to fill out the remaining base_config_profile variables.
    base_config_profile=""
    base_config_profile_append="
    ---8<
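    For example, a sketch (untested) that appends the slice 7 line from the question to the JET-derived profile:
         base_config_profile=""
         base_config_profile_append="
         filesys c0t0d0s7 32
         "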
    It is how OpsCentre (which uses JET) does it.
    JET questions are best asked on the external JET alias at yahoogroups (until the forum is set up on OTN).

  • Trouble mirroring root pool onto larger disk

    Hi,
    I have Solaris 11 Express with a root pool installed on a 500 GB disk. I'd like to migrate it to a 2 TB disk. I've followed the instructions on the ZFS troubleshooting guide (http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Replacing.2FRelabeling_the_Root_Pool_Disk) and the Oracle ZFS Administration Guide (http://download.oracle.com/docs/cd/E19253-01/819-5461/ghzvx/index.html) pretty carefully. However, things still don't work: after re-silvering, I switch my BIOS to boot from the 2 TB disk and at boot, some kind of error message appears for < 1 second before the machine reboots itself. Is there any way I can view this message? I.e., is this message written to the log anywhere?
    As far as I can tell, I've set up all the partitions and slices correctly (VTOC below). The only error message I get is when I do:
    # zpool attach rpool c9t0d0s0 c13d1s0
    (c9t0d0s0 is the 500 GB original disk, c13d1s0 is the 2 TB new disk)
    I get:
    invalid vdev specification
    use '-f' to override the following errors:
    /dev/dsk/c13d1s0 overlaps with /dev/dsk/c13d1s2
    But that's a well-known bug, and I use "-f" to force it since the backup slice shouldn't matter. If anyone has any ideas, I'd really appreciate it.
    Here's my disk layout
    =============================================================
    500 GB disk
    fdisk
    Total disk size is 60801 cylinders
    Cylinder size is 16065 (512 byte) blocks
    Cylinders
    Partition Status Type Start End Length %
    ========= ====== ============ ===== === ====== ===
    1 Active Solaris2 1 60800 60800 100
    VTOC:
    partition> p
    Current partition table (original):
    Total disk cylinders available: 60798 + 2 (reserved cylinders)
    Part Tag Flag Cylinders Size Blocks
    0 root wm 1 - 60797 465.73GB (60797/0/0) 976703805
    1 unassigned wm 0 0 (0/0/0) 0
    2 backup wu 0 - 60797 465.74GB (60798/0/0) 976719870
    3 unassigned wm 0 0 (0/0/0) 0
    4 unassigned wm 0 0 (0/0/0) 0
    5 unassigned wm 0 0 (0/0/0) 0
    6 unassigned wm 0 0 (0/0/0) 0
    7 unassigned wm 0 0 (0/0/0) 0
    8 boot wu 0 - 0 7.84MB (1/0/0) 16065
    9 unassigned wm 0 0 (0/0/0) 0
    =============================================================
    2 TB disk:
    fdisk:
    Total disk size is 60799 cylinders
    Cylinder size is 64260 (512 byte) blocks
    Cylinders
    Partition Status Type Start End Length %
    ========= ====== ============ ===== === ====== ===
    1 Active Solaris2 1 60798 60798 100
    VTOC:
    partition> p
    Current partition table (original):
    Total disk cylinders available: 60796 + 2 (reserved cylinders)
    Part Tag Flag Cylinders Size Blocks
    0 root wm 1 - 60795 1.82TB (60795/0/0) 3906686700
    1 unassigned wm 0 0 (0/0/0) 0
    2 backup wu 0 - 60795 1.82TB (60796/0/0) 3906750960
    3 unassigned wm 0 0 (0/0/0) 0
    4 unassigned wm 0 0 (0/0/0) 0
    5 unassigned wm 0 0 (0/0/0) 0
    6 unassigned wm 0 0 (0/0/0) 0
    7 unassigned wm 0 0 (0/0/0) 0
    8 boot wu 0 - 0 31.38MB (1/0/0) 64260
    9 unassigned wm 0 0 (0/0/0) 0
    =============================================================

    Thanks for the suggestions! I fixed the problem. I took a video of the boot sequence using my iPhone and managed to catch the error messages. It appears that ZFS (specifically, zpools) aren't very robust to devices changing ports (i.e., names).
    My original boot device (500 GB) was on c9t0d0 (SATA port 0). My new boot device was on port c13d1 (on a PCI SATA card). The problem was a combination of devices getting renamed.
    After successfully attaching the 2TB disk to create a mirror, I of course could boot off the original 500GB disk. The problem was I didn't try to boot off the 2TB disk on the PCI card. Instead I swapped the cables, which led to the zpool freaking out about not being able to find the device (I only discovered this through the video! Automatic reboot on a kernel panic might not be such a great idea after all...). The other thing I originally tried was removing the 500 GB disk and try booting off the PCI card, but it seems that my BIOS isn't very robust to devices being removed either - it renames devices in its "list of hard drives" in such a way that it fails to boot from the default device. Manually rearranging the list, or using the boot sequence selector (F8) made it all work.
    In the end, since I really didn't want to boot off a PCI card, I simply detached the 500 GB disk, attached a different 2 TB disk to SATA port 0, and mirrored onto that. Finally, I detached the 2 TB disk on the PCI card (I don't have enough physical slots in the machine to hold that last disk!).
    Just to tie up any loose ends, does anyone know how to tell ZFS that a device has changed position (or name)? My data zpool is running pretty happily as in raid-z2. But if I take one of the disks and attach it to another SATA port, it complains that the device is missing. If I do a zpool replace, is it smart enough to recognize that the disk simply moved, and not waste a day re-silvering?
    Similarly, is there a way to change the port of a disk attached to the root pool without using an extra disk and doing two mirrors?
    Thanks!
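    On the last question: as far as I know you don't need zpool replace at all. Exporting and re-importing the pool makes ZFS re-scan the device paths and pick the disk up at its new port without resilvering; a sketch for a data pool (the pool name is a placeholder):
         zpool export tank
         zpool import -d /dev/dsk tank
    The root pool can't be exported while the system is running from it, so for that case the usual workaround is to do the export/import from boot media.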

  • ZFS root and Live upgrade

    Is it possible to create /var as its own ZFS dataset when using liveupgrade? With ufs, there's the -m option to lucreate. It seems like any liveupgrade to a ZFS root results in just the root, dump, and swap datasets for the boot environment.
    merill

    Hey man.
    I banged my head against the wall over the same question :-)
    One thing that might help you out anyway is that I found a way to move UFS filesystems to the new ZFS pool.
    Let's say you have a UFS filesystem with, say, an application server and related files on /app, which is on c1t0d0s6.
    When you create the new ZFS-based BE, /app is shared between the BEs.
    To move it to the new BE, all you need to do is comment out the lines in /etc/vfstab for the filesystems you want moved,
    then run lucreate to create the ZFS BE.
    After that, create a new dataset for /app, but give it a different mountpoint for now.
    Copy all your stuff across,
    rename the original /app,
    and set the dataset's mountpoint to /app.
    That's it; all your stuff is now on ZFS.
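    The steps above, condensed into a command sketch (the BE, dataset, and mountpoint names are assumptions):
         # comment out the /app line in /etc/vfstab first, then:
         lucreate -n zfsBE -p rpool                      # create the ZFS boot environment
         zfs create -o mountpoint=/app.new rpool/app     # new dataset with a temporary mountpoint
         cd /app && find . | cpio -pdmu /app.new         # copy the data across
         # retire/rename the original /app, then move the dataset into place:
         zfs set mountpoint=/app rpool/app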
    Hope it will be useful,

  • Flarecreate for zfs root dataset and ignore multiple dataset

    Hi All,
    I want to write a script to create flar images on multiple servers. On non-ZFS filesystems I use the -X option to point at a file that lists the mounts to exclude on the different servers.
    But on ZFS the -X option is not working, and I want multiple mounts to be ignored on ZFS-based systems during flarcreate.
    I can use the -D option to ignore datasets on a server, but that does not serve my purpose, as I maintain a common file listing the mounts to ignore across all the different servers.
    Please help me with this.
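    One possible workaround (a sketch, untested): keep the common exclude list in one file and expand it into repeated -D options on each server, for example:
         flarcreate -n `hostname` $(sed 's/^/-D /' /etc/flar_excludes) /net/server/share/`hostname`.flar
    Here /etc/flar_excludes and the /net/server/share path are placeholders for your own common file and archive location.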

