ZFS / VxFS Quick I/O

Hi,
Apologies if this has been asked previously, but I'll ask anyway...
We are having discussions around whether to configure ZFS or Veritas Foundation Suite for a new Oracle 10g installation. My thought is that we should definitely go with ZFS, but on previous Volume Manager installs a license for Quick I/O was purchased, so I was wondering which aspect of ZFS is comparable to Quick I/O. I appreciate that ZFS is a totally different filesystem, but unfortunately we only have ZFS installed on test/dev servers which do not have Oracle configured, and although I've seen many a blog comparing ZFS to VxFS, it's still a little unclear.
I did read that it was advisable NOT to place the Oracle redo logs on ZFS due to write performance, so has anyone any experience (good or bad) of installing Oracle on ZFS, or is it best to take the safe option and configure Veritas?
Thanks in advance...

waleed.badr wrote:
My point of view is that VxFS is much better than ZFS ... ZFS even tries to imitate VxFS in many ways, but it can't.
VxFS has excellent read/write I/O performance ... plus the support and the ease of troubleshooting and administration.
As for VxFS since 1997 ... there have been a lot of enhancements, a bunch of new features and ongoing development, and a massive improvement in data availability. It is a noticeable change in the file system.
Er... how exactly is VxFS "much better" than ZFS? And at what?
The elegance, simplicity and power of ZFS surpass any other filesystem I have encountered in a 15-year SA career. And yes, I swore by Veritas till I happened to work with ZFS. After that, there was no looking back.
Edited by: implicate_order on Apr 8, 2010 11:34 AM
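
For what it's worth, ZFS has no direct Quick I/O equivalent (Quick I/O gives Oracle raw-device-like access that bypasses the filesystem's buffering and single-writer locking; ZFS takes a different approach with the ARC and per-dataset properties). The usual starting point when putting Oracle datafiles on ZFS is to match recordsize to the database block size and give the redo logs their own dataset biased towards low-latency synchronous writes. A minimal sketch, assuming a pool named oradata, an 8 KB db_block_size and a ZFS release that supports the logbias property (all of these are assumptions, not a tuning guide):

# datafiles: recordsize matched to the assumed 8 KB Oracle block size
zfs create -o recordsize=8k oradata/data
# redo logs: separate dataset, synchronous writes biased towards low latency
zfs create -o logbias=latency oradata/redo

The often-quoted redo log concern is also commonly addressed with a dedicated log device (slog) on fast storage, but test against your own workload before committing either way.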

Similar Messages

  • Lucreate with zfs system

    Hello,
    I am relatively new to using Live Upgrade to patch Solaris 10 systems, but so far I have found it to work pretty well. I have come across an oddity on a system that I would like to have explained. The system has Solaris 10 installed, with one ZFS pool, rpool, and when I create an alternate BE to patch with: lucreate -n altBEname, it does not use ZFS clones/snapshots to quickly create the alternate BE. It looks like it is creating a full copy (like lucreate does on our systems using UFS), and takes about 45 minutes to 1 hour to complete. On other Solaris 10 systems installed with ZFS, the lucreate command completes in a minute or so, and a snapshot along with the alternate BE appears in the zfs list output. On this system there is no snapshot, only the alternate BE. Below is output from commands run on the system to show what I am trying to describe. Any ideas what might be the problem? Thanks:
    bash# lustatus
    Boot Environment            Is        Active  Active     Can     Copy
    Name                        Complete  Now     On Reboot  Delete  Status
    Solaris_10_012011_patched   yes       yes     yes        no      -
    Solaris_10_022011_patched   yes       no      no         yes     -
    bash# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 22.1G 112G 97K /rpool
    rpool/ROOT 17.0G 112G 21K legacy
    rpool/ROOT/Solaris_10_012011_patched 9.18G 112G 9.18G /
    rpool/ROOT/Solaris_10_022011_patched 7.87G 112G 7.87G /
    rpool/dump 1.00G 112G 1.00G -
    rpool/export 44K 112G 23K /export
    rpool/export/home 21K 112G 21K /export/home
    rpool/swap 4.05G 112G 4.05G -
    bash# lucreate -n Solaris_10_072011_patched
    Analyzing system configuration.
    Comparing source boot environment <Solaris_10_012011_patched> file systems
    with the file system(s) you specified for the new boot environment.
    Determining which file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <Solaris_10_072011_patched>.
    Source boot environment is <Solaris_10_012011_patched>.
    Creating boot environment <Solaris_10_072011_patched>.
    Creating file systems on boot environment <Solaris_10_072011_patched>.
    Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/Solaris_10_072011_patched>.
    /usr/lib/lu/lumkfs: test: unknown operator zfs
    Populating file systems on boot environment <Solaris_10_072011_patched>.
    Checking selection integrity.
    Integrity check OK.
    Populating contents of mount point </>.
    Copying.
    Creating shared file system mount points.
    Creating compare databases for boot environment <Solaris_10_072011_patched>.
    Creating compare database for file system </>.
    Creating compare database for file system </>.
    Updating compare databases on boot environment <Solaris_10_072011_patched>.
    Updating compare databases on boot environment <Solaris_10_022011_patched>.
    Making boot environment <Solaris_10_072011_patched> bootable.
    Population of boot environment <Solaris_10_072011_patched> successful.
    Creation of boot environment <Solaris_10_072011_patched> successful.
    bash# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 30.8G 103G 97K /rpool
    rpool/ROOT 25.8G 103G 21K legacy
    rpool/ROOT/Solaris_10_012011_patched 9.22G 103G 9.22G /
    rpool/ROOT/Solaris_10_022011_patched 7.87G 103G 7.87G /
    rpool/ROOT/Solaris_10_072011_patched 8.70G 103G 8.70G /
    rpool/dump 1.00G 103G 1.00G -
    rpool/export 44K 103G 23K /export
    rpool/export/home 21K 103G 21K /export/home
    rpool/swap 4.05G 103G 4.05G -

    Hi,
    I have installed Solaris 10 x86 in VMware. The root disk is currently on a UFS filesystem: "c0d0s0" on disk "c0d0".
    I have added another new disk and created a root pool (rpool): "c0t2d0s0" on "c0t2d0".
    lustatus -> shows c0d0s0 as the current boot environment.
    When I try to create a new ZFS boot environment it gives me an error. Please help:
    "lucreate -c c0d0s0 -n zfsBE -p rpool"
    It tells me "unknown option -- p".
    cat /etc/release shows me this:
    Solaris 10 5/08 s10x_u5wos_10 X86
    Please help!
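    A common cause of lucreate rejecting -p is that the Live Upgrade tools on the running system are older than the release being upgraded to; -p (and ZFS boot environments in general) arrived with the Solaris 10 10/08 Live Upgrade packages. The usual advice is to refresh the LU packages from the newer media before creating the ZFS BE. A sketch, assuming the target-release media is mounted under /cdrom/cdrom0 (the path and package availability are assumptions):
    # remove the old Live Upgrade packages, then install the ones shipped with the target release
    pkgrm SUNWlucfg SUNWluu SUNWlur
    pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu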

  • Mount usb thumbdrive

    hi there,
    Need your help. I run #volcheck when I'd like to detect a mounted floppy disk. What about a USB thumb drive? I would like to mount it and access it, but I can't see it when I run df -k.
    Please help me figure out how to mount my thumb drive and get at the files on it.
    thanks.
    rgds,
    kim

    Depending on OS version -
    I haven't been running vold on the Solaris 10 machines I'm playing with - on them, I simply plug the memory stick into either the onboard USB 1.1 connection or into my Adaptec USB 2.0 interface.
    Once connected, you should be able to run "rmformat -l" to see that the devices have changed.
    From my own experience, I've only been able to see/use FAT32-formatted drives with mount -F pcfs (mountable rw or ro), and NTFS-formatted drives with mount_ntfs (mountable ro, downloadable from the web), but you can still mount the drive on your SPARC/x86 machine and put ufs/zfs/vxfs on it. Since it's removable-format media, you will only really be able to use the "whole" drive rather than laying down your own VTOC on the disk (unless somebody knows how to get around that one) - regards - jeff
    #### snip ####
    root@node1:/# uname -a
    SunOS node1 5.10 Generic_118833-36 sun4u sparc SUNW,Sun-Blade-100
    root@node1:/# rmformat -l
    Looking for devices...
    1. Logical Node: /dev/rdsk/c0t1d0s2
    Physical Node: /pci@1f,0/ide@d/sd@1,0
    Connected Device: Memorex DVD+-RAM 510L v1 MWS7
    Device Type: DVD Reader/Writer
    2. Logical Node: /dev/rdsk/c3t0d0s2
    Physical Node: /pci@1f,0/pci@5/usb@0,2/storage@1/disk@0,0
    Connected Device: Maxtor OneTouch III 035f
    Device Type: Removable
    3. Logical Node: /dev/rdsk/c4t0d0s2
    Physical Node: /pci@1f,0/usb@c,3/storage@3/disk@0,0
    Connected Device: USB Flash Memory 6.50
    Device Type: Removable
    #### INSERT THE DRIVE ####
    root@node1:/# rmformat -l
    Looking for devices...
    1. Logical Node: /dev/rdsk/c0t1d0s2
    Physical Node: /pci@1f,0/ide@d/sd@1,0
    Connected Device: Memorex DVD+-RAM 510L v1 MWS7
    Device Type: DVD Reader/Writer
    2. Logical Node: /dev/rdsk/c1t0d0s2
    Physical Node: /pci@1f,0/usb@c,3/storage@4/disk@0,0
    Connected Device: STF Flash Drive 2.0 2.00
    Device Type: Removable
    3. Logical Node: /dev/rdsk/c3t0d0s2
    Physical Node: /pci@1f,0/pci@5/usb@0,2/storage@1/disk@0,0
    Connected Device: Maxtor OneTouch III 035f
    Device Type: Removable
    4. Logical Node: /dev/rdsk/c4t0d0s2
    Physical Node: /pci@1f,0/usb@c,3/storage@3/disk@0,0
    Connected Device: USB Flash Memory 6.50
    Device Type: Removable
    root@node1:/# mount -F pcfs /dev/dsk/c1t0d0s0 /mnt3
    root@node1:/# df -k /mnt3
    Filesystem kbytes used avail capacity Mounted on
    /dev/dsk/c1t0d0s0 125972 2 125970 1% /mnt3
    #### snip ####

  • Devspace of type file(F) and raw(R) together

    I have a database with data devspaces located in a filesystem (F).
    I want to migrate to devspaces of the type raw (R).
    My Question: Can I use both types of devspaces in one database together?
    Thanks for any help.
    Bernhard

    > For very modern filesystems (ZFS, VxFS > 4) this may be true but the FS check for e. g. ext3 and UFS still check the inode integrity compared to/in the superblocks, in practice checking an ext3 even with one big file can take hours. On Linux this will be done every 180 days after rebooting if not switched off before. This can increase a downtime unexpectedly if you don't know that you need to switch it off.
    I stand corrected on this - thanks for pointing this out. It really might become nasty on the half-yearly reboot weekend...
    > > Concerning the faster extension... we format RAW devices too  - so where do you expect to save time here?
    >
    > Just out of curiosity: since when are RAW devices formatted? I can build a 4 TB database in about 10 minutes, where 9.5 minutes are used to format the log area; using a filesystem this is a several-ten-hour task with high load on the SAN, even if formatted sequentially. In the last 12+ years I have never seen RAW devices really "formatted" the way the log area is.
    >
    > Two weeks ago I added 400 GB to a database using a script, it didn't take a minute until the space was available so I don't know...
    Hmm... OK, I just spent half an hour wading through what is called our volume-management code...
    It really looks as if we just perform a few write I/Os into the RAW devices (something like start - middle - end of the whole device area) and write the header page.
    You could get the same behaviour for filesystem data volumes by switching off formatting via the DB parameter.
    Why is this only done for data volumes? Phew... no idea, actually.
    > > The reason why we changed the recommendation is simply that the benefits don't equal out the potential problems that can occur due to things like double allocation, handling mistakes etc.
    >
    > Sapinst (up to 7.0 SR3) still has "raw device" as default selected. One has to explicitly choose filesystem, I don't know, however, for later installations.
    >
    > I still think that RAW devices are a better choice for big installations; nobody can accidentally do an rm -rf on them. I agree, though, that the handling is different and that one has to take special care.
    Well... ahem... the installation media barely represent current recommendations.
    No parameters, no recent patches, no nothing.
    Basically they represent a defined and working starting setup from where you can move on (<-- MY personal opinion! Not to be mistaken as official! ).
    Seen that way, I totally agree with Oracle's decision to drop RAW device support, and I wouldn't recommend moving to them either (but also not converting a working RAW device installation to filesystem - the effect is simply not worth the conversion effort).
    Hey, Markus - thanks for pointing out my misbeliefs here!
    regards,
    Lars

  • ZFS Full Filesystem -  Quick Question

    Folks,
    With traditional UFS filesystems, when it hits 100% a message similar to below is logged in /var/adm/messages
    "NOTICE: alloc: /var: file system full"
    I'm wondering if ZFS has a similar "NOTICE", as we have hit some 100% full filesystems in the past and ZFS has not logged anything to the messages file.
    Is there a way to get ZFS to log when a particular dataset within a storage pool hits 100%?
    TIA

    What is a real "volume" and how would it be different from an emulation of one?
    Most uses of "volume" are as a logical construction (such as Sun Volume Manager or Veritas Volume Manager). So there's not much "real" about them.
    ZFS also uses physical disks to store data that is accessed via a filesystem (or block device) interface. Like the other tools, it can store redundant information to increase availability in case of hardware failures, plus many other features.
    Darren
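    As for getting an alert when a dataset fills up: as the original poster observed, ZFS does not log a UFS-style "file system full" NOTICE, so a common workaround is a small cron job that polls capacity and writes to syslog. A minimal sketch, assuming bash, a pool named tank and a 95% threshold (all assumptions):
    #!/usr/bin/bash
    # log a NOTICE via syslog when the pool passes the threshold
    POOL=tank
    LIMIT=95
    cap=$(zpool list -H -o capacity "$POOL" | tr -d '%')
    if [ "$cap" -ge "$LIMIT" ]; then
        logger -p daemon.notice "zpool $POOL is ${cap}% full"
    fi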

  • Trouble installing ZFS in archlinux kernel 3.6.3-1-ARCH

    I've been trying to install ZFS on my system, and I can't get past a build error for SPL. Here is my install output:
    ==> Downloading zfs PKGBUILD from AUR...
    x zfs_preempt.patch
    x zfs.install
    x PKGBUILD
    Comment by: modular on Wed, 24 Oct 2012 03:09:04 +0000
    @demizer
    I don't/won't run ZFS as a root file system. I'm getting the following build error:
    http://pastebin.com/ZcWiaViK
    Comment by: demizer on Wed, 24 Oct 2012 04:11:54 +0000
    @modular, You're trying to build with the 3.6.2 kernel. The current version (rc11) does not work with the 3.6.2 kernel. If you want to use it, you will have to downgrade to the 3.5.6 kernel (linux and linux-headers). https://wiki.archlinux.org/index.php/Downgrading_Packages
    Thanks!
    Comment by: MilanKnizek on Wed, 24 Oct 2012 08:07:19 +0000
    @demizer: there still seems to be a problem during upgrading - zfs/spl requires a kernel of a specific version (hard-coded), and this blocks the upgrade (the old installed zfs/spl requires the old kernel, and the kernel can't be upgraded without breaking the dependency of zfs/spl, and therefore the build of the new zfs/spl fails too).
    So far, I have had to remove zfs/spl, upgrade the kernel, rebuild and install spl/zfs, and manually run depmod against the new kernel (i.e. the postinst "depmod -a" does not work until the next reboot), and only then reboot to load the new kernel's zfs modules successfully.
    That is quite clumsy and error-prone - I hope it will be resolved via DKMS.
    Comment by: srf21c on Sun, 28 Oct 2012 04:00:31 +0000
    All, if you're suffering zfs kernel upgrade pain fatigue, seriously consider going with the LTS (long term support) kernel. I just successfully built zfs on a system that I switched to the linux-lts 3.0.48-1. All you have to do is install the linux-lts and linux-lts-headers packages, reboot to the lts kernel, and change any instances of depends= or makedepends= lines in the package build file like so:
    Before:
    depends=('linux=3.5' "spl=${pkgver}" "zfs-utils=${pkgver}")
    makedepends=('linux-headers=3.5')
    After:
    depends=('linux-lts=3.0' "spl=${pkgver}" "zfs-utils=${pkgver}")
    makedepends=('linux-lts-headers=3.0')
    Then build and install each package in this order: spl-utils,spl,zfs-utils,zfs.
    Worked like a champ for me.
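    For reference, a minimal sketch of that build order with makepkg, assuming each AUR package has already been unpacked under ~/aur (the directory layout is an assumption):
    # build and install spl-utils, spl, zfs-utils and zfs in dependency order
    for pkg in spl-utils spl zfs-utils zfs; do
        (cd ~/aur/$pkg && makepkg -si --noconfirm) || break
    done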
    Comment by: stoone on Mon, 29 Oct 2012 12:09:29 +0000
    If you keep the linux and linux-headers packages while using the LTS kernel, you don't need to modify the PKGBUILDs, because the checks will pass but it will build the packages against your currently running kernel.
    Comment by: demizer on Mon, 29 Oct 2012 15:56:27 +0000
    Hey everybody, just a quick update. The new build tool I have been working on is now in master, https://github.com/demizer/aur-zfs. With it you can build and package two different groups of packages one for aur and one for split. Again, building the split packages is more efficient. I still have a lot of work to be done, but it is progressing. I will be adding git, dkms, and lts packages after I setup my repo. My next step is to add unofficial repository support to my build tool so I can easily setup a repo with precompiled binaries. I will be hosting the repo on my website at http://demizerone.com/archzfs. Initially it will only be for 64bit code since the ZOL FAQ states that ZOL is very unstable with 32bit code due to memory management differences in Solaris and Linux. I will notify you all in the future when that is ready to go.
    @MilanKnizek, Yes, updating is a pain. ZFS itself is hard-coded to specific Linux versions at build time. The ZFS build tool puts the modules in "/usr/lib/modules/3.5.6-1-ARCH/addon/zfs/", and this is the primary reason it has to be rebuilt on each upgrade, even minor point releases. Nvidia, for example, puts its module in "/usr/lib/modules/extramodules-3.5-ARCH/", so minor point releases are still fine and the nvidia package doesn't need to be re-installed. A possible reason for ZOL being hard-coded like this is that ZOL is still technically very beta code.
    I do have a question for the community, does anyone use ZFS on a 32bit system?
    Thanks!
    First Submitted: Thu, 23 Sep 2010 08:50:51 +0000
    zfs 0.6.0_rc11-2
    ( Unsupported package: Potentially dangerous ! )
    ==> Edit PKGBUILD ? [Y/n] ("A" to abort)
    ==> ------------------------------------
    ==> n
    ==> zfs dependencies:
    - linux>=3.5 (already installed)
    - linux-headers>=3.5 (already installed)
    - spl>=0.6.0_rc11 (building from AUR)
    - zfs-utils>=0.6.0_rc11 (building from AUR)
    ==> Edit zfs.install ? [Y/n] ("A" to abort)
    ==> ---------------------------------------
    n
    ==> Continue building zfs ? [Y/n]
    ==> -----------------------------
    ==>
    ==> Building and installing package
    ==> Install or build missing dependencies for zfs:
    ==> Downloading spl PKGBUILD from AUR...
    x spl.install
    x PKGBUILD
    Comment by: timemaster on Mon, 15 Oct 2012 22:42:32 +0000
    I am not able to compile this package after the upgrade to the 3.6 kernel. Anyone else? Any ideas?
    Comment by: mikers on Mon, 15 Oct 2012 23:34:17 +0000
    rc11 doesn't support Linux 3.6; there are some patches on GitHub that might apply against it (I've not done it myself), see:
    https://github.com/zfsonlinux/spl/pull/179
    https://github.com/zfsonlinux/zfs/pull/1039
    Otherwise downgrade to Linux 3.5.x or linux-lts and wait for rc12.
    Comment by: timemaster on Mon, 15 Oct 2012 23:54:03 +0000
    Yes, I saw that too late.
    https://github.com/zfsonlinux/zfs/commit/ee7913b644a2c812a249046f56eed39d1977d706
    Comment by: demizer on Tue, 16 Oct 2012 07:00:16 +0000
    Looks like the patches have been merged, now we wait for rc12.
    Comment by: vroomanj on Fri, 26 Oct 2012 17:07:19 +0000
    @demizer: 3.6 support is available in the master builds, which are stable but not officially released yet. Can't the build be updated to use the master tars?
    https://github.com/zfsonlinux/spl/tarball/master
    https://github.com/zfsonlinux/zfs/tarball/master
    Comment by: demizer on Fri, 26 Oct 2012 17:51:42 +0000
    @vroomanj, I plan on working on the git packages this weekend. All I have to figure out is whether it is going to be based on an actual git clone or just on the download links you provided. They are pretty much the same, but I'm not really clear on what the Arch Package Guidelines say about this yet. Also, I don't think the current packages in AUR should be based off git master. They should be based off the ZOL stable releases (rc10, rc11, ...). That's why I am making git packages, so people can use them if they want to upgrade to the latest kernel and the stable release hasn't been made yet, as is the case currently.
    First Submitted: Sat, 26 Apr 2008 14:34:31 +0000
    spl 0.6.0_rc11-2
    ( Unsupported package: Potentially dangerous ! )
    ==> Edit PKGBUILD ? [Y/n] ("A" to abort)
    ==> ------------------------------------
    ==> n
    ==> spl dependencies:
    - linux>=3.5 (already installed)
    - spl-utils>=0.6.0_rc11 (already installed)
    - linux-headers>=3.5 (already installed)
    ==> Edit spl.install ? [Y/n] ("A" to abort)
    ==> ---------------------------------------
    ==> n
    ==> Continue building spl ? [Y/n]
    ==> -----------------------------
    ==>
    ==> Building and installing package
    ==> Making package: spl 0.6.0_rc11-2 (Tue Oct 30 11:34:13 CET 2012)
    ==> Checking runtime dependencies...
    ==> Checking buildtime dependencies...
    ==> Retrieving Sources...
    -> Downloading spl-0.6.0-rc11.tar.gz...
    % Total % Received % Xferd Average Speed Time Time Time Current
    Dload Upload Total Spent Left Speed
    0 178 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
    100 136 100 136 0 0 154 0 --:--:-- --:--:-- --:--:-- 293
    100 508k 100 508k 0 0 357k 0 0:00:01 0:00:01 --:--:-- 1245k
    ==> Validating source files with md5sums...
    spl-0.6.0-rc11.tar.gz ... Passed
    ==> Extracting Sources...
    -> Extracting spl-0.6.0-rc11.tar.gz with bsdtar
    ==> Starting build()...
    configure.ac:34: warning: AM_INIT_AUTOMAKE: two- and three-arguments forms are deprecated. For more info, see:
    configure.ac:34: http://www.gnu.org/software/automake/manual/automake.html#Modernize-AM_INIT_AUTOMAKE-invocation
    checking metadata... yes
    checking build system type... i686-pc-linux-gnu
    checking host system type... i686-pc-linux-gnu
    checking target system type... i686-pc-linux-gnu
    checking whether to enable maintainer-specific portions of Makefiles... no
    checking whether make supports nested variables... yes
    checking for a BSD-compatible install... /usr/bin/install -c
    checking whether build environment is sane... yes
    checking for a thread-safe mkdir -p... /usr/bin/mkdir -p
    checking for gawk... gawk
    checking whether make sets $(MAKE)... yes
    checking for gcc... gcc
    checking whether the C compiler works... yes
    checking for C compiler default output file name... a.out
    checking for suffix of executables...
    checking whether we are cross compiling... no
    checking for suffix of object files... o
    checking whether we are using the GNU C compiler... yes
    checking whether gcc accepts -g... yes
    checking for gcc option to accept ISO C89... none needed
    checking for style of include used by make... GNU
    checking dependency style of gcc... gcc3
    checking how to print strings... printf
    checking for a sed that does not truncate output... /bin/sed
    checking for grep that handles long lines and -e... /usr/bin/grep
    checking for egrep... /usr/bin/grep -E
    checking for fgrep... /usr/bin/grep -F
    checking for ld used by gcc... /usr/bin/ld
    checking if the linker (/usr/bin/ld) is GNU ld... yes
    checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B
    checking the name lister (/usr/bin/nm -B) interface... BSD nm
    checking whether ln -s works... yes
    checking the maximum length of command line arguments... 1572864
    checking whether the shell understands some XSI constructs... yes
    checking whether the shell understands "+="... yes
    checking how to convert i686-pc-linux-gnu file names to i686-pc-linux-gnu format... func_convert_file_noop
    checking how to convert i686-pc-linux-gnu file names to toolchain format... func_convert_file_noop
    checking for /usr/bin/ld option to reload object files... -r
    checking for objdump... objdump
    checking how to recognize dependent libraries... pass_all
    checking for dlltool... no
    checking how to associate runtime and link libraries... printf %s\n
    checking for ar... ar
    checking for archiver @FILE support... @
    checking for strip... strip
    checking for ranlib... ranlib
    checking command to parse /usr/bin/nm -B output from gcc object... ok
    checking for sysroot... no
    checking for mt... no
    checking if : is a manifest tool... no
    checking how to run the C preprocessor... gcc -E
    checking for ANSI C header files... yes
    checking for sys/types.h... yes
    checking for sys/stat.h... yes
    checking for stdlib.h... yes
    checking for string.h... yes
    checking for memory.h... yes
    checking for strings.h... yes
    checking for inttypes.h... yes
    checking for stdint.h... yes
    checking for unistd.h... yes
    checking for dlfcn.h... yes
    checking for objdir... .libs
    checking if gcc supports -fno-rtti -fno-exceptions... no
    checking for gcc option to produce PIC... -fPIC -DPIC
    checking if gcc PIC flag -fPIC -DPIC works... yes
    checking if gcc static flag -static works... yes
    checking if gcc supports -c -o file.o... yes
    checking if gcc supports -c -o file.o... (cached) yes
    checking whether the gcc linker (/usr/bin/ld) supports shared libraries... yes
    checking whether -lc should be explicitly linked in... no
    checking dynamic linker characteristics... GNU/Linux ld.so
    checking how to hardcode library paths into programs... immediate
    checking whether stripping libraries is possible... yes
    checking if libtool supports shared libraries... yes
    checking whether to build shared libraries... yes
    checking whether to build static libraries... yes
    checking spl license... GPL
    checking linux distribution... arch
    checking default package type... arch
    checking whether rpm is available... no
    checking whether rpmbuild is available... no
    checking whether dpkg is available... no
    checking whether dpkg-buildpackage is available... no
    checking whether alien is available... no
    checking whether pacman is available... yes (4.0.3)
    checking whether makepkg is available... yes (4.0.3)
    checking spl config... kernel
    checking kernel source directory... /usr/src/linux-3.6.3-1-ARCH
    checking kernel build directory... /usr/src/linux-3.6.3-1-ARCH
    checking kernel source version... 3.6.3-1-ARCH
    checking kernel file name for module symbols... Module.symvers
    checking whether debugging is enabled... no
    checking whether basic debug logging is enabled... yes
    checking whether basic kmem accounting is enabled... yes
    checking whether detailed kmem tracking is enabled... no
    checking whether modules can be built... yes
    checking whether atomic types use spinlocks... no
    checking whether kernel defines atomic64_t... yes
    checking whether kernel defines atomic64_cmpxchg... no
    checking whether kernel defines atomic64_xchg... yes
    checking whether kernel defines uintptr_t... yes
    checking whether INIT_WORK wants 3 args... no
    checking whether register_sysctl_table() wants 2 args... no
    checking whether set_shrinker() available... no
    checking whether shrinker callback wants 3 args... no
    checking whether struct path used in struct nameidata... yes
    checking whether task_curr() is available... no
    checking whether unnumbered sysctl support exists... no
    checking whether struct ctl_table has ctl_name... no
    checking whether fls64() is available... yes
    checking whether device_create() is available... yes
    checking whether device_create() wants 5 args... yes
    checking whether class_device_create() is available... no
    checking whether set_normalized_timespec() is available as export... yes
    checking whether set_normalized_timespec() is an inline... yes
    checking whether timespec_sub() is available... yes
    checking whether init_utsname() is available... yes
    checking whether header linux/fdtable.h exists... yes
    checking whether files_fdtable() is available... yes
    checking whether __clear_close_on_exec() is available... yes
    checking whether header linux/uaccess.h exists... yes
    checking whether kmalloc_node() is available... yes
    checking whether monotonic_clock() is available... no
    checking whether struct inode has i_mutex... yes
    checking whether struct mutex has owner... yes
    checking whether struct mutex owner is a task_struct... yes
    checking whether mutex_lock_nested() is available... yes
    checking whether on_each_cpu() wants 3 args... yes
    checking whether kallsyms_lookup_name() is available... yes
    checking whether get_vmalloc_info() is available... no
    checking whether symbol *_pgdat exist... yes
    checking whether first_online_pgdat() is available... no
    checking whether next_online_pgdat() is available... no
    checking whether next_zone() is available... no
    checking whether pgdat_list is available... no
    checking whether global_page_state() is available... yes
    checking whether page state NR_FREE_PAGES is available... yes
    checking whether page state NR_INACTIVE is available... no
    checking whether page state NR_INACTIVE_ANON is available... yes
    checking whether page state NR_INACTIVE_FILE is available... yes
    checking whether page state NR_ACTIVE is available... no
    checking whether page state NR_ACTIVE_ANON is available... yes
    checking whether page state NR_ACTIVE_FILE is available... yes
    checking whether symbol get_zone_counts is needed... no
    checking whether user_path_dir() is available... yes
    checking whether set_fs_pwd() is available... no
    checking whether set_fs_pwd() wants 2 args... yes
    checking whether vfs_unlink() wants 2 args... yes
    checking whether vfs_rename() wants 4 args... yes
    checking whether vfs_fsync() is available... yes
    checking whether vfs_fsync() wants 2 args... yes
    checking whether struct fs_struct uses spinlock_t... yes
    checking whether struct cred exists... yes
    checking whether groups_search() is available... no
    checking whether __put_task_struct() is available... yes
    checking whether proc_handler() wants 5 args... yes
    checking whether kvasprintf() is available... yes
    checking whether rwsem_is_locked() acquires sem->wait_lock... no
    checking whether invalidate_inodes() is available... no
    checking whether invalidate_inodes_check() is available... no
    checking whether invalidate_inodes() wants 2 args... yes
    checking whether shrink_dcache_memory() is available... no
    checking whether shrink_icache_memory() is available... no
    checking whether symbol kern_path_parent exists in header... no
    checking whether kern_path_parent() is available... no
    checking whether zlib_deflate_workspacesize() wants 2 args... yes
    checking whether struct shrink_control exists... yes
    checking whether struct rw_semaphore member wait_lock is raw... yes
    checking that generated files are newer than configure... done
    configure: creating ./config.status
    config.status: creating Makefile
    config.status: creating lib/Makefile
    config.status: creating cmd/Makefile
    config.status: creating module/Makefile
    config.status: creating module/spl/Makefile
    config.status: creating module/splat/Makefile
    config.status: creating include/Makefile
    config.status: creating scripts/Makefile
    config.status: creating spl.spec
    config.status: creating spl-modules.spec
    config.status: creating PKGBUILD-spl
    config.status: creating PKGBUILD-spl-modules
    config.status: creating spl.release
    config.status: creating dkms.conf
    config.status: creating spl_config.h
    config.status: executing depfiles commands
    config.status: executing libtool commands
    make all-recursive
    make[1]: Entering directory `/tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11'
    Making all in module
    make[2]: Entering directory `/tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module'
    make -C /usr/src/linux-3.6.3-1-ARCH SUBDIRS=`pwd` CONFIG_SPL=m modules
    make[3]: Entering directory `/usr/src/linux-3.6.3-1-ARCH'
    CC [M] /tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-debug.o
    CC [M] /tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-proc.o
    CC [M] /tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-kmem.o
    CC [M] /tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-thread.o
    CC [M] /tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-taskq.o
    CC [M] /tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-rwlock.o
    CC [M] /tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-vnode.o
    /tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-vnode.c: In function 'vn_remove':
    /tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-vnode.c:327:2: error: implicit declaration of function 'path_lookup' [-Werror=implicit-function-declaration]
    cc1: some warnings being treated as errors
    make[5]: *** [/tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-vnode.o] Error 1
    make[4]: *** [/tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl] Error 2
    make[3]: *** [_module_/tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module] Error 2
    make[3]: Leaving directory `/usr/src/linux-3.6.3-1-ARCH'
    make[2]: *** [modules] Error 2
    make[2]: Leaving directory `/tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module'
    make[1]: *** [all-recursive] Error 1
    make[1]: Leaving directory `/tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11'
    make: *** [all] Error 2
    ==> ERROR: A failure occurred in build().
    Aborting...
    ==> ERROR: Makepkg was unable to build spl.
    ==> Restart building spl ? [y/N]
    ==> ----------------------------
    ... I'm stuck here, can anyone help me with this one? Please!

    Did you read the comments, either on the AUR page or in the output that you posted? They explain it.

  • How to count number of files on zfs filesystem

    Hi all,
    Is there a way to count the number of files on a ZFS filesystem, similar to how "df -o i /ufs_filesystem" works? I am looking for a way to do this without using find, as I suspect there are millions of files on a ZFS filesystem and that this is causing occasional slow performance on that particular ZFS file system.
    Thanks.

    So I have finished 90% of my testing and I have accepted df -t /filesystem | awk ' { if ( NR==1) F=$(NF-1) ; if ( NR==2) print $(NF-1) - F }' as acceptable in the absence of a known built-in zfs method. My main concern was with the reduction of available files in the df -t output as more files were added. I used a one-liner for loop to create empty files, to conserve the space used up, so I would have a better chance of seeing what happens if the available file count reached 0.
    root@fj-sol11:/zfstest/dir4# df -t /zfstest | awk ' { if ( NR==1) F=$(NF-1) ; if ( NR==2) print $(NF-1) - F }'
    _5133680_
    root@fj-sol11:/zfstest/dir4# df -t /zfstest
    /zfstest (pool1 ): 7237508 blocks *7237508* files
    total: 10257408 blocks 12372310 files
    root@fj-sol11:/zfstest/dir4#
    root@fj-sol11:/zfstest/dir7# df -t /zfstest | awk ' { if ( NR==1) F=$(NF-1) ; if ( NR==2) print $(NF-1) - F }'
    _6742772_
    root@fj-sol11:/zfstest/dir7# df -t /zfstest
    /zfstest (pool1 ): 6619533 blocks *6619533* files
    total: 10257408 blocks 13362305 files
    root@fj-sol11:/zfstest/dir7# df -t /zfstest | awk ' { if ( NR==1) F=$(NF-1) ; if ( NR==2) print $(NF-1) - F }'
    _7271716_
    root@fj-sol11:/zfstest/dir7# df -t /zfstest
    /zfstest (pool1 ): 6445809 blocks *6445809* files
    total: 10257408 blocks 13717010 files
    root@fj-sol11:/zfstest# df -t /zfstest | awk ' { if ( NR==1) F=$(NF-1) ; if ( NR==2) print $(NF-1) - F }'
    _12359601_
    root@fj-sol11:/zfstest# df -t /zfstest
    /zfstest (pool1 ): 4494264 blocks *4494264* files
    total: 10257408 blocks 16853865 files
    I noticed the total files kept increasing, and creating another 4 million-odd files (4494264) after the above example was taking more time than I had, having already created 12 million plus ( 12359601 ), which took 2 days on a slow machine on and off (mostly on). If anyone has any idea of creating them quicker than "touch filename$loop" in a for loop, let me know :)
    In the end I decided to use a really small (100 MB) file system on a virtual machine to test what happens as the free file count approaches 0. It turns out it never does ... it somehow increased:
    bash-3.00# df -t /smalltest/
    /smalltest (smalltest ): 31451 blocks *31451* files
    total: 112640 blocks 278542 files
    bash-3.00# pwd
    /smalltest
    bash-3.00# mkdir dir4
    bash-3.00# cd dir4
    bash-3.00# for arg in {1..47084}; do touch file$arg; done <--- I created 47084 files here, more than the free count listed above ( *31451* )
    bash-3.00# zfs list smalltest
    NAME USED AVAIL REFER MOUNTPOINT
    smalltest 47.3M 7.67M 46.9M /smalltest
    bash-3.00# df -t /smalltest/
    /smalltest (smalltest ): 15710 blocks *15710* files
    total: 112640 blocks 309887 files
    bash-3.00#
    The other 10% of my testing will be to see what happens when I run a find on 12 million plus files and pipe it to wc -l :)
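    As an aside, on creating test files faster than one touch per loop iteration: batching the names into a few touch invocations via xargs avoids forking one process per file. A quick bash sketch (counts and names are arbitrary, run it inside the target directory):
    # create 100000 empty files, 1000 names per touch invocation
    printf 'file%d\n' {1..100000} | xargs -n 1000 touch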

  • Extremely Disappointed and Frustrated with ZFS

    I feel like crying. In the last 2 years of my life ZFS has caused me more professional pain and grief than anything else. At the moment I'm anxiously waiting for a resilver to complete, I expect that my pool will probably fault and I'll have to rebuild it.
    I work at a small business, and I have two Solaris 11 servers which function as SAN systems, primarily serving virtual machine disk images to a Proxmox VE cluster. Each is fitted with an Areca 12-port SATA controller put in JBOD mode, and populated with WD Caviar Black 2TB drives (probably my first and biggest mistake was not using enterprise-class drives). One system is configured as a ZFS triple mirror, the other a double mirror; both have 3 hot spares.
    About a year ago I got CKSUM errors on one of the arrays. I promptly ran zpool scrub on the array, and stupidly I decided to scrub the other array at the same time. The scrubs quickly turned into resilvers on both arrays as more CKSUM errors were uncovered, and as the resilver continued the CKSUM count rose until both arrays faulted and were irrecoverable. "Irrecoverable metadata corruption" was the error I got from ZFS. After 20+ hours of attempted recovery, trying to play back the ZIL, I had to destroy them both and rebuild everything. I never knew for certain what the cause was, but I suspected disk write caching on the controller, and/or the use of a non-enterprise-class flash drive for the ZIL.
    In the aftermath I did extremely thorough checking of all devices. I checked each backplane port, each drive for bad sectors, the SATA controller's onboard RAM, main memory, ran extended burn-in testing, etc. I then rebuilt the arrays without controller write caching and with no separate ZIL device. I also scheduled weekly scrubbing and scripted ZFS alerting.
    Yesterday I got an alert from the controller on my array with the triple mirror about read errors on one of the ports. I ran a scrub, which completed, and then I proceeded to replace the drive I was getting read errors on. I offlined the old drive, inserted a brand new drive and ran zpool replace. The resilver started fast and then the rate quickly dropped down to 1.0MB/s, and my controller began spitting out a multitude of SATA command timeout errors on the port of the newly inserted drive. Since the whole array had essentially frozen up, I popped the drive out and everything ran back at full speed, resilvering against one of the hot spares. The resilver soon started uncovering CKSUM errors similar to the disaster I had last year. Error counts rose, and now my system has dropped another drive in the same mirror set and is resilvering 2 drives in the same set, with the third drive in the set showing 6 CKSUM errors. I'm afraid I'm going to lose the whole array again, as the only drive left in the set is showing errors as well. WTF?!?!?!?!?!?!
    So I suspect I have a bad batch of disks; however, why the heck did the zpool scrub complete and show no errors? What is the point of ZFS scrub if it doesn't accurately uncover errors? I'm so frustrated that these types of errors seem to show up only during resilvers. I'm beginning to think ZFS isn't as robust as advertised...

    FMA does notify admins automatically through the smtp-notify service via mail notification to root. You
    can customize this service to send notification to your own email account on any system.
    The poster said he has scripted ZFS alerting, but I don't know if that means reviewing zpool status or FMA data.
    With ongoing hardware problems, you need to review FMA data as well. See the example below.
    Rob Johnston has a good explanation of this smtp-notify service, here:
    https://blogs.oracle.com/robj/entry/fma_and_email_notifications
    For the system below, I had to enable sendmail to see the failure notice in root's mail,
    but that was it.
    Thanks, Cindy
    I failed a disk in a pool:
    # zpool status -v tank
    pool: tank
    state: DEGRADED
    status: One or more devices are unavailable in response to persistent errors.
    Sufficient replicas exist for the pool to continue functioning in a
    degraded state.
    action: Determine if the device needs to be replaced, and clear the errors
    using 'zpool clear' or 'fmadm repaired', or replace the device
    with 'zpool replace'.
    scan: resilvered 944M in 0h0m with 0 errors on Mon Dec 17 10:30:05 2012
    config:
    NAME        STATE     READ WRITE CKSUM
    tank        DEGRADED     0     0     0
      mirror-0  DEGRADED     0     0     0
        c3t1d0  ONLINE       0     0     0
        c3t2d0  UNAVAIL      0     0     0
    device details:
    c3t2d0 UNAVAIL cannot open
    status: ZFS detected errors on this device.
    The device was missing.
    see: http://support.oracle.com/msg/ZFS-8000-LR for recovery
    errors: No known data errors
    Check root's email:
    # mail
    From [email protected] Mon Dec 17 10:48:54 2012
    Date: Mon, 17 Dec 2012 10:48:54 -0700 (MST)
    From: No Access User <[email protected]>
    Message-Id: <[email protected]>
    Subject: Fault Management Event: tardis:ZFS-8000-LR
    To: [email protected]
    Content-Length: 751
    SUNW-MSG-ID: ZFS-8000-LR, TYPE: Fault, VER: 1, SEVERITY: Major
    EVENT-TIME: Mon Dec 17 10:48:53 MST 2012
    PLATFORM: SUNW,Sun-Fire-T200, CSN: 11223344, HOSTNAME: tardis
    SOURCE: zfs-diagnosis, REV: 1.0
    EVENT-ID: c2cfa39b-71f4-638e-fb44-9b223d9e0803
    DESC: ZFS device 'id1,sd@n500000e0117173e0/a' in pool 'tank' failed to open.
    AUTO-RESPONSE: An attempt will be made to activate a hot spare if available.
    IMPACT: Fault tolerance of the pool may be compromised.
    REC-ACTION: Use 'fmadm faulty' to provide a more detailed view of this event. Run 'zpool status -lx' for more information. Please refer to the associated reference document at http://support.oracle.com/msg/ZFS-8000-LR for the latest service procedures and policies regarding this diagnosis.
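    For reference, wiring those FMA events to a different mailbox is done through SMF notification parameters; roughly along these lines on Solaris 11 (a sketch - check svccfg(1M) on your release, and the address is a placeholder):
    # make sure the notification service is running
    svcadm enable svc:/system/fm/smtp-notify:default
    # mail all major faults to an address of your choosing
    svccfg setnotify -g from-major mailto:admin@example.com
    # review the current notification settings
    svccfg listnotify -g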

  • SAP R/3 on Oracle / Solaris / ZFS

    Hello,
    My Unix administrator is planning an upgrade to a new OS environment: Solaris 10 with ZFS. We currently run an SAP R/3 4.6C kernel / Oracle 9i on Solaris release 5.8, and I would like to know what the path to take from here would be. Does SAP R/3 4.6C, or even 4.7, support ZFS, and what about running Oracle 10g datafiles on a ZFS filesystem for the SAP database?
    Any support is appreciated,
    Arun

    Oracle doesn't certify filesystems or specific storage systems any more (see Oracle Metalink note 403202.1 - Zeta File System (ZFS) On Solaris 10 Certified/Supported By Oracle), but it is supported to run on them.
    We run several databases (Oracle and non-Oracle) on ZFS - and combined with zones it's GREAT to consolidate systems.
    Check http://www.sun.com/software/whitepapers/solaris10/zfs_veritas.pdf for a comparison between VXFS and ZFS.
    The SAP system itself is agnostic about the underlying filesystem.
    Markus

  • Zfs on solaris 10 and home directory creation

    I am using samba and a root preexec script to automatically create individual ZFS filesystem home directories with quotas on a Solaris 10 server that is a member of a Windows 2003 domain.
    There are about 60,000 users in Active Directory.
    My question is about best practice.
    I am worried about the overhead of having 60,000 ZFS filesystems to mount and run on Solaris 10.
    Edited by: fatfish on Apr 29, 2010 2:51 AM

    Testing results as follows -
    Solaris 10 10/09 running as VM on Vmware ESX server with 7 GB RAM 1 CPU 64 bit.
    ZFS pool created with three 50 GB FC LUNs from our SAN (hardware RAID 5). These are shared to the ESX server and presented to the Solaris VM as Raw Device Mappings (no VMFS).
    I set up a simple script to create 3000 ZFS filesystem home directories
    #!/usr/bin/bash
    for i in {1..3000}
    do
        zfs create tank/users/test$i
        echo "$i created"
    done
    The first 1000 created very quickly.
    By the time I reached about 2000 each filesystem was taking almost 5 seconds to create. Way too long. I gave up after about 2500.
    So I rebooted.
    The 2500 ZFS filesystems mounted in about 4 seconds, so no problem there.
    The problem I have is: why does the ZFS file system creation time drop off and become unworkable? I tried again to add to the pool after the reboot and saw the same slow creation time.
    Am I better off with just one ZFS file system with 60,000 user quotas applied and lots of ordinary user home directories created under that with mkdir?
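    If you do go the single-filesystem route, per-user quotas are applied with the userquota@ property (supported from Solaris 10 10/09 onwards). A sketch with a hypothetical user name and the default mountpoint assumed:
    # one shared filesystem, quota enforced per user instead of per dataset
    zfs create tank/users
    zfs set userquota@jsmith=1G tank/users
    # check how much that user has consumed
    zfs get userused@jsmith tank/users
    mkdir /tank/users/jsmith && chown jsmith /tank/users/jsmith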

  • Any "Best Practice" regarding use of zfs in LDOM with zones

    I have 3 different networks and I want to create a guest-domain for each of the three networks on the same control domain.
    Inside each guest-domain, I want to create 3 zones.
    To make it easy to handle growth and also make the zones more portable, I want to create a zpool inside each guest domain and then a zfs for each zoneroot.
    By doing this I will be able to handle growth by adding vdisks to the zpool(in the guest domain) and also to migrate individual zones by using zfs send/receive.
    In the "LDoms Community Cookbook", I found a description on how to use zfs clone in the control domain to decrease deploy time of new guest domains:
    " You can use ZFS to very efficiently, easily and quickly, take a copy of a previously prepared "golden" boot disk for one domain and redeploy multiple copies of that image as a pre-installed boot disk for other domains."
    I can see clear advantages in using zfs in both the control domain and the guest domain, but what is the downside?
    I end up with a kind of nested zfs, where I create a zpool inside a zpool: the first in the control domain and the second inside a guest domain.
    How is ZFS caching handled? Will I end up with a solution with performance problems and a lot of I/O overhead?
    Kindest,
    Tor

    I'm not familiar with the Sybase agent code and you are correct, only 15.0.3 seems to be supported. I think we'd need a little more debug information to determine if there was a workaround. May be switching on *.info messages in syslogd.conf might get some more useful hints (no guarantee).
    Unfortunately, I can't comment on if, or when, Sybase 15.5.x might be supported.
    Regards,
    Tim
    ---

  • Change ZFS root dataset name for root file system

    Hi all
    A quick one.
    I accepted the default ZFS root dataset name for the root file system during Solaris 10 installation.
    Can I change it to another name afterward without reinstalling the OS? For example,
    zfs rename rpool/ROOT/s10s_u6wos_07b rpool/ROOT/`hostname`
    zfs rename rpool/ROOT/s10s_u6wos_07b/var rpool/ROOT/`hostname`/var
    Thank you.

    Renaming the root pool is not recommended.

  • Migration from Netware 6.x  NSS to Solaris 10 ZFS

    Hi,
    I am looking at Solaris and ZFS as being the possible future of our file store for users.
    We have around 3TB (40million files) of data to transfer to ZFS from Netware 6.5 NSS volumes.
    What is the best way to do this?
    I have tried running utilities like richcopy (son of robocopy), teracopy and fastcopy from a Windows client mapped to the Netware server via NCP and to the Solaris server via Samba.
    In all tests the copy very quickly failed, rendering the Solaris server unusable. I imagine this has to do with the utilities expecting a destination NTFS filesystem; ZFS combined with Samba does not fit the bill.
    I have tried running the old rsync client from Netware, but this does not seem to talk to Solaris rsyncd.
    As well as NCP, Netware has the ability to export its NSS volumes as a CIFS share.
    I have tried mounting a CIFS share of the Netware volume on Solaris... but Solaris, as far as I am aware, does not support mount -t smbfs, as this is Linux only. You can mount smb:// from the GUI (Nautilus), but this does not help a great deal. I was hoping to run maybe Midnight Commander, but I presume that I would need a valid smb mount of the Netware volume from the command line?
    I really want to avoid the idea of staging on say NTFS first, then from NTFS to ZFS. A two part copy would take forever. It needs to be direct.
    BTW..I am not bothered about ACL's or quota. These can be backed up from Netware and reapplied with ZFS/chown/chmod commands.
    A wild, creative thought did occur to me, as follows -
    OpenSolaris, unlike Solaris, has its CIFS kernel addition, and hence smb mounts from the command line (I presume), but I am not happy running OpenSolaris in production. So maybe I could mount the Netware NSS volume as a CIFS share on OpenSolaris (as a staging server), copy all the data to a ZFS pool locally, and then do a send/receive to Solaris 10...
    Maybe not...
    I suppose there is FTP, if I can get it to work on Netware.
    I really need a utility with full error checking, and that can be left unattended.
    Any ideas?

    By unusable I mean that the ZFS Samba drive mapped to the Windows workstation died and was inaccessible.
    Logging onto the Solaris box after this from the console was almost impossible. There was a massive delay. When I did log in, there appeared to be no network at all. There were no errors in the smbd log file; I need to look at other logs to find out what is going on. Looking at the ZFS filesystem, some files had copied over before it died.
    After rebooting the Solaris box I then tried dragging and dropping the same files to the ZFS filesystem with the native Windows Explorer interface on the Windows client. This worked, in that the Solaris box did not die and the files were copying happily (until I manually stopped it). As we all know, Windows Explorer is not a safe, unattended way to copy large numbers of files.
    This tells me that the copy utilities on Windows are the problem, not native Windows copy/paste.
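    For what it's worth, when an unattended, restartable, logged copy is the priority, rsync driven from any intermediate host that can reach both sides (rather than from the Netware box itself) is usually the least painful option. A sketch, assuming the NSS volume and the ZFS filesystem are both mounted on that host (mount points are assumptions):
    rsync -av --partial --log-file=/var/tmp/nss-migrate.log /mnt/nss_users/ /mnt/zfs_users/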

  • Zones, crashing, vxfs and fsck

    Folks,
    We just had the unfortunate situation of having one of our zone hosting servers crash. This server has 10 local zones on it and on reboot (after a long delay to recover the crash dump) the local zones did not restart. Not a single one. That certainly gets your blood pumping.
    I quickly did a "zoneadm -z XX boot" for one and was told:
    zoneadm: zone 'XX': fsck of '/dev/vx/rdsk/vg05/dba00' failed with exit status 32; run fsck manually
    This repeated (with different volume groups/volume names, obviously) for several other zones as I tried to start them. I finally wrote a script to parse the /etc/zones/*.xml files to recover the raw slices and run fsck against them all (over 80 volumes). These being Veritas file systems, it was simply a matter of replaying the logs and it was done (no actual user interaction required).
    My takeaway from this is that the zone code does something like "fsck -m XXX" and, if the filesystem is not suitable for mounting, aborts. This puts me in a situation where my virtual servers cannot reboot automatically in the middle of the night -- not something I'm happy about.
    Is my experience in this regard reflected by others? Do I need to write a service for the global zone that fscks all the volumes on boot? I think that would be fairly simple to do (and make it a parent of the zone service in the SMF framework), but I'm hesitant to do that if I'm not understanding this situation correctly. I fear this may be part of the not-so-tight zone/VxFS integration that Sun has done as well...
    Any thoughts appreciated!

    Just checking in to see what you ended up doing...we're obviously having the exact same problem.
    zoneadm definitely just does fsck -m, and mounting the filesystems in the global zone and then exporting the raw device into the zone / using the loopback device doesn't seem like a fantastic way to go about this. Writing a service to fsck seems better, but ideally it would be something you could control in zonecfg.
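    For anyone hitting the same thing, the parse-the-zone-XML approach mentioned above can be roughly sketched like this (the paths and the vxfs assumption are mine; test it before trusting it at boot):
    #!/usr/bin/bash
    # replay the log / fsck every raw device referenced by a zone's filesystem entries
    for cfg in /etc/zones/*.xml; do
        for raw in $(sed -n 's/.*raw="\([^"]*\)".*/\1/p' "$cfg"); do
            echo "fsck $raw"
            fsck -F vxfs -y "$raw"
        done
    done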

  • How to back up a ZFS boot disk ?

    Hello all,
    I have just installed Solaris 10 update 6 (10/08) on a Sparc machine (an Ultra 45 workstation) using ZFS for the boot disk.
    Now I want to port a custom UFS boot disk backup script to ZFS.
    Basically, this script copies the boot disk to a secondary disk and makes the secondary disk bootable.
    With UFS, I had to play with the vfstab a bit to allow booting off the secondary disk, but this is not necessary with ZFS.
    How can I perform such a backup of my ZFS boot disk ?
    I tried the following (source disk: c1t0d0, target disk: c1t1d0):
    # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 110G 118G 94K /rpool
    rpool/ROOT 4.58G 118G 18K legacy
    rpool/ROOT/root 4.58G 25.4G 4.50G /
    rpool/ROOT/root/var 79.2M 4.92G 79.2M /var
    rpool/dump 16.0G 118G 16.0G -
    rpool/export 73.3G 63.7G 73.3G /export
    rpool/homelocal 21.9M 20.0G 21.9M /homelocal
    rpool/swap 16G 134G 16K -
    # zfs snapshot -r rpool@today
    # zpool create -f -R /mnt rbackup c1t1d0
    # zfs send -R rpool@today | zfs receive -F -d rbackup               <- This one fails (see below)
    # installboot /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
    The send/receive command fails after transferring the "/" filesystem (4.5 GB) with the following error message:
    cannot mount '/mnt': directory is not empty
    There may be some kind of unwanted recursion here (trying to back up the backup or something) but I cannot figure it out.
    I tried a workaround: creating the mount point outside the snapshot:
    zfs snapshot -r rpool@today
    mkdir /var/tmp/mnt
    zpool create -f -R /var/tmp/mnt rbackup c1t1d0
    zfs send -R rpool@today | zfs receive -F -d rbackup
    But it still fails, this time with mounting "/var/tmp/mnt".
    So how does one back up the ZFS boot disk to a secondary disk in a live environment ?

    OK, this post requires some clarification.
    First, thanks to robert.cohen and rogerfujii for giving some elements.
    The objective is to make a backup of the boot disk on another disk of the same machine. The backup must be bootable just like the original disk.
    The reason for doing this instead of (or, even better, in addition to) mirroring the boot disk is to be able to quickly recover a stable operating system in case anything gets corrupted on the boot disk. Corruption includes hardware failures, but also any software corruption which could be caused by a virus, an attacker or an operator mistake (rm -rf ...).
    After doing lots of experiments, I found two potential solutions to this need.
    Solution 1 looks like what rogerfujii suggested, albeit with a few practical additions.
    It consists in using ZFS mirroring and breaking up the mirror after resilvering:
         - Configure the backup disk as a mirror of the boot disk :
         zpool attach -f rpool <boot disk>s0 <backup disk>s0
         - Copy the boot block to the backup disk:
         installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<backup disk>s0
         - Monitor the mirror resilvering:
         zpool status rpool
         - Wait until the "action" field disappears (this can be scripted; see the sketch after the solution 1 discussion below).
         - Prevent any further resilvering:
         zpool offline rpool <backup disk>s0
         Note: this step is mandatory because detaching the disk without offlining it first results in a non bootable backup disk.
         - Detach the backup disk from the mirror:
         zpool detach rpool <backup disk>s0
         POST-OPERATIONS:
         After booting on the backup disk, assuming the main boot disk is unreachable:
         - Log in as super-user.
         - Detach the main boot disk from the mirror
         zpool detach rpool <boot disk>s0
    This solution has many advantages, including simplicity and using no dirty tricks. However, it has two major drawbacks:
    - When booting on the backup disk, if the main boot disk is online, it will be resilvered with the old data.
    - There is no easy way to access the backup disk data without rebooting.
    So if you accidentally lose one file on the boot disk, you cannot easily recover it from the backup.
    This is because the pool name is the same on both disks, therefore effectively preventing any pool import.
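    As an aside, the "wait until the resilver completes" step in solution 1 can be scripted by polling zpool status; a rough sketch (the exact status wording can vary between releases, so check yours):
         #!/usr/bin/bash
         # block until zpool status no longer reports a resilver in progress
         while zpool status rpool | grep "resilver in progress" > /dev/null; do
             sleep 30
         done
         echo "resilver complete"
         zpool status rpool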
    Here is now solution 2, which I favor.
    It is more complex and dependent on the disk layout and ZFS implementation changes, but overall offers more flexibility.
    It may need some additions if there are other disks than the boot disk with ZFS pools (I have not tested that case yet).
    ***** HOW TO BACKUP A ZFS BOOT DISK TO ANOTHER DISK *****
    1. Backup disk partitioning
    - Clean up ZFS information from the backup disk:
    The first and last megabyte of the backup disk, which hold ZFS information (plus other stuff) are erased:
    dd if=/dev/zero seek=<backup disk #blocks minus 2048> count=2048 of=/dev/rdsk/<backup disk>s2
    dd if=/dev/zero count=2048 of=/dev/rdsk/<backup disk>s2
    - Label and partition the backup disk in SMI :
    format -e <backup disk>
         label
         0          -> SMI label
         y
         (If more questions asked: press Enter 3 times.)
         partition
         (Create a single partition, number 0, filling the whole disk)
         label
         0
         y
         quit
         quit
    2. Data copy
    - Create the target ZFS pool:
    zpool create -f -o failmode=continue -R /mnt -m legacy rbackup <backup disk>s0
    Note: the chosen pool name is here "rbackup".
    - Create a snapshot of the source pool :
    zfs snapshot -r rpool@today
    - Copy the data :
    zfs send -R rpool@today | zfs receive -F -d rbackup
    - Remove the snapshot, plus its copy on the backup disk :
    zfs destroy -r rbackup@today
    zfs destroy -r rpool@today
    3. Backup pool reconfiguration
    - Edit the following files:
    /mnt/etc/vfstab
    /mnt/etc/power.conf
    /mnt/etc/dumpadm.conf
    In these files, replace the source pool name "rpool" with the backup pool name "rbackup".
    - Remove the ZFS mount list:
    rm /mnt/etc/zfs/zpool.cache
    4. Making the backup disk bootable
    - Note the name of the current boot filesystem:
    df -k /
    E.g.:
    # df -k /
    Filesystem kbytes used avail capacity Mounted on
    rpool/ROOT/root 31457280 4726390 26646966 16% /
    - Configure the boot filesystem on the backup pool:
    zpool set bootfs=rbackup/ROOT/root rbackup
    Note: "rbackup/ROOT/root" is derived from the main boot filesystem name "rpool/ROOT/root".
    - Copy the ZFS boot block to the backup disk:
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<backup disk>s0
    5. Cleaning up
    - Detach the target pool:
    zpool export rbackup
    I hope this howto will be useful to those like me who need to change all their habits while migrating to ZFS.
    Regards.
    HL

    I have a client who has upgraded their Mac. EHT Admin still shows all the clients in the FUI, but they lost the passwords. They don't seem to be in the keychain. Any ideas for retrieving/migrating them?