ZFS:  ***?

Ok, I confess that I instinctively hate anything new (except OS X itself, back when it was new).
This one, though, has me worried. I've never really quite bought the idea that HFS+ is somehow better than a normal unix file system, but mdfind and so forth at least finally convinced me that there was a point to the madness (however misguided).
So why yet another weirdo file system? Will this require us to reformat all our drives? Is there a chance that data backed up on HFS+-formatted drives and DVDs may one day be inaccessible?
I'm already quite worried, even though 90% of what I care about can live happily on any unix file system.
Darn censor won't let me type WTF.

Most file systems have failed to keep pace with the tremendous growth in storage needs (or at least availability) over recent years.
10 years ago hard drives were still being measured in megabytes. Now they're approaching terabytes - and that's just single drives. Once you head into RAID and arrays of drives you can even get into petabytes (thousands of terabytes), or even exabytes (thousands of petabytes).
It's hard to see how anyone could have foreseen that kind of growth, so it's not surprising that current generation file systems just aren't geared for it.
So, along come the latest filesystems including ZFS (although there are many others). They're designed around the cutting edge of storage needs today (and hopefully tomorrow).
Modern filesystems also implement better security - something that wasn't as important a few years ago - and can leverage faster CPUs to offer things like on-the-fly encryption (to protect data) and checksumming (to ensure data integrity).
Then come things like journaling (added to HFS+ relatively recently), which improves reliability and crash recovery, auditing (recording who changed what and when), and snapshots (multiple point-in-time copies of files so you can revert to a previous version).
Most of these newer features were never envisioned 10 years ago, and some of them are hard to back-port to existing file systems without breaking compatibility.
Do you need some of those features? No - at least not now, but knowing that they're there (or the OS silently taking advantage of them) is a good thing.
As for the other questions - will you need to reformat? Yes - at least to use ZFS, but there's nothing to say yet whether it will be required or merely available. You may be able to keep your HFS+ disks as they are. I guess we'll know more as Leopard Day approaches.
Even if ZFS does become the standard, that doesn't mean everything else is instantly obsolete. Old filesystems can still be accessed by newer OSes as long as support is included. Does that mean your HFS+ backup drive will still work in 10 years' time? Who can say, but ZFS's snapshot feature alone probably means you'll want to switch your backups to ZFS as soon as possible.
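To make the snapshot idea concrete, here's a minimal sketch of what that workflow looks like on any ZFS system (the pool and dataset names are purely illustrative):
# take a cheap, read-only point-in-time copy before making changes
zfs snapshot tank/home@before-upgrade
# list snapshots, and roll the dataset back if something goes wrong
zfs list -t snapshot
zfs rollback tank/home@before-upgrade
Snapshots are nearly free to create because ZFS is copy-on-write, which is why they make such an attractive basis for backups.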
As for DVDs I wouldn't worry about them. Their format is well defined and there are millions (billions?) of them already out there and, more importantly, not able to be changed due to the read-only nature of the disks. Any OS is going to need to support DVD-based formats for the foreseeable future.

Similar Messages

  • Hard drive array losing access - suspect controller - zfs

    i am having a problem with one of my arrays; this is a zfs filesystem.  It consists of a 1x500GB, 2x750GB, and 1x2TB linear array. the pool is named 'pool'.    I have to mention here that i don't have enough hard drives for a raidz (raid5) setup yet, so there is no actual redundancy - zfs can't auto-repair itself from a copy because there is none, so all auto-repair features can be thrown out the door here. That means i believe it's possible for the filesystem to be easily corrupted by the controller in this specific case, which is what i suspect.  Please keep that in mind while reading the following.
    I just upgraded my binaries, so i removed zfs-fuse and installed archzfs.  did i remove it completely?  not sure.  i wasn't able to get my array back up and running until i fiddled with the sata cables, moved around the sata connectors, and tinkered with the bios drive detect. after i got it running, i copied some files off of it over samba, thinking it might not last long.  the copy was successful, but problems began surfacing again shortly after.  so now i suspect i have a bad controller on my gigabyte board.  I found recently someone else who had this issue, so i'm thinking it's not the hard drive. 
    I did some smartmontools tests last night and found that all drives are showing good on a short test; they all passed.  today i'm not having so much luck getting access.  there are hangs on reboot, and the drive light stays on.  when i try to run zfs and zpool commands the system hangs.  i have been getting what appear to be HD errors as well; i'll have to manually type them in here since there's no copy and paste from the console to the machine i'm posting from, and the errors aren't showing up via ssh or i would copy them from the terminal i currently have open.
    ata7: SRST failed (errno=-16)
    reset failed, giving up,
    end_request I/O error, dev sdc, sector 637543760
    end_request I/O error, dev sdc, sector 637543833
    sd 6:0:0:0 got wrong page
    sd 6:0:0:0 asking for cache data failed
    sd 6:0:0:0 assuming drive cache: write through
    info task txg_sync:348 blocked for more than 120 seconds
    and so forth, and when i boot i see this each time which is making me feel that the HD is going bad, however i still want to believe its the controller.
    Note, it seems only those two sectors show up, is it possible that the controller shot out those two sectors with bad data?  {Note, i have had a windows system prior installed on this motherboard and after a few months of running lost a couple raid arrays of data as well.}   
    failed command: WRITE DMA EXT
    ... more stuff here...
    ata7.00 error DRDY ERR
    ICRC ABRT
    blah blah blah.
    so now i can give you some info from the diagnosis i'm doing on it, copied from a shell terminal.  Note that the following metadata errors JUST appeared after i was trying to delete some files; copying didn't cause this, so it appears either something is currently degrading, or it just inevitably happened from a bad controller
    [root@falcon wolfdogg]# zpool status -v
    pool: pool
    state: ONLINE
    status: One or more devices are faulted in response to IO failures.
    action: Make sure the affected devices are connected, then run 'zpool clear'.
    see: http://zfsonlinux.org/msg/ZFS-8000-HC
    scan: resilvered 33K in 0h0m with 0 errors on Sun Jul 21 03:52:53 2013
    config:
    NAME STATE READ WRITE CKSUM
    pool ONLINE 0 26 0
    ata-ST2000DM001-9YN164_W1E07E0G ONLINE 6 41 0
    ata-ST3750640AS_5QD03NB9 ONLINE 0 0 0
    ata-ST3750640AS_3QD0AD6E ONLINE 0 0 0
    ata-WDC_WD5000AADS-00S9B0_WD-WCAV93917591 ONLINE 0 0 0
    errors: Permanent errors have been detected in the following files:
    <metadata>:<0x0>
    <metadata>:<0x1>
    <metadata>:<0x14>
    <metadata>:<0x15>
    <metadata>:<0x16d>
    <metadata>:<0x171>
    <metadata>:<0x277>
    <metadata>:<0x179>
    if one of the devices is faulted, then why are all 4 of them stating ONLINE???
    [root@falcon dev]# smartctl -a /dev/sdc
    smartctl 6.1 2013-03-16 r3800 [x86_64-linux-3.9.9-1-ARCH] (local build)
    Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org
    === START OF INFORMATION SECTION ===
    Vendor: /6:0:0:0
    Product:
    User Capacity: 600,332,565,813,390,450 bytes [600 PB]
    Logical block size: 774843950 bytes
    scsiModePageOffset: response length too short, resp_len=47 offset=50 bd_len=46
    scsiModePageOffset: response length too short, resp_len=47 offset=50 bd_len=46
    >> Terminate command early due to bad response to IEC mode page
    A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.
    my drive list
    [root@falcon wolfdogg]# ls -lah /dev/disk/by-id/
    total 0
    drwxr-xr-x 2 root root 280 Jul 21 03:52 .
    drwxr-xr-x 4 root root 80 Jul 21 03:52 ..
    lrwxrwxrwx 1 root root 9 Jul 21 03:52 ata-_NEC_DVD_RW_ND-2510A -> ../../sr0
    lrwxrwxrwx 1 root root 9 Jul 21 03:52 ata-ST2000DM001-9YN164_W1E07E0G -> ../../sdc
    lrwxrwxrwx 1 root root 9 Jul 21 03:52 ata-ST3250823AS_5ND0MS6K -> ../../sdb
    lrwxrwxrwx 1 root root 10 Jul 21 03:52 ata-ST3250823AS_5ND0MS6K-part1 -> ../../sdb1
    lrwxrwxrwx 1 root root 10 Jul 21 03:52 ata-ST3250823AS_5ND0MS6K-part2 -> ../../sdb2
    lrwxrwxrwx 1 root root 10 Jul 21 03:52 ata-ST3250823AS_5ND0MS6K-part3 -> ../../sdb3
    lrwxrwxrwx 1 root root 10 Jul 21 03:52 ata-ST3250823AS_5ND0MS6K-part4 -> ../../sdb4
    lrwxrwxrwx 1 root root 9 Jul 21 03:52 ata-ST3750640AS_3QD0AD6E -> ../../sde
    lrwxrwxrwx 1 root root 9 Jul 21 03:52 ata-ST3750640AS_5QD03NB9 -> ../../sdd
    lrwxrwxrwx 1 root root 9 Jul 21 03:52 ata-WDC_WD5000AADS-00S9B0_WD-WCAV93917591 -> ../../sda
    lrwxrwxrwx 1 root root 9 Jul 21 03:52 wwn-0x5000c50045406de0 -> ../../sdc
    lrwxrwxrwx 1 root root 9 Jul 21 03:52 wwn-0x50014ee1ad3cc907 -> ../../sda
    and this one i don't get
    [root@falcon dev]# zfs list
    no datasets available
    i remember creating a dataset last year, so why is it reporting none but still working?
    is anybody seeing any patterns here?  i'm prepared to destroy the pool and recreate it just to see if it's bad data. But what i'm thinking now is that, since the problem appears to only be happening on the 2TB drive, either the controller just can't handle it, or the drive is bad.  So, to rule out the controller there might be hope.  I have a scsi card (pci to sata) connected that one of the drives in the array is attached to, since i only have 4 sata slots on the mobo; i keep the 500GB connected there and have not yet tried the 2TB on it.  So if i connect this 2TB drive to that card i should see the problems disappear, unless the drive got corrupted already. 
    Does anyone experienced on the arch forums know what's going on here?  did i mess up by not completely removing zfs-fuse, is my HD going bad, is my controller bad, or did ZFS just get misconfigured?
    Last edited by wolfdogg (2013-07-21 19:38:51)

    ok, something interesting happened when i connected it (the badly behaving 2TB drive) to the scsi pci card.  first of all, no errors on boot.... then take a look at this: some clues, some remnants of the older zfs-fuse setup, and a working pool.
    [root@falcon wolfdogg]# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    pool 2.95T 636G 23K /pool
    pool/backup 2.95T 636G 3.49G /backup
    pool/backup/falcon 27.0G 636G 27.0G /backup/falcon
    pool/backup/redtail 2.92T 636G 2.92T /backup/redtail
    [root@falcon wolfdogg]# zpool status
    pool: pool
    state: ONLINE
    status: The pool is formatted using a legacy on-disk format. The pool can
    still be used, but some features are unavailable.
    action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
    pool will no longer be accessible on software that does not support
    feature flags.
    scan: resilvered 33K in 0h0m with 0 errors on Sun Jul 21 04:52:52 2013
    config:
    NAME STATE READ WRITE CKSUM
    pool ONLINE 0 0 0
    ata-ST2000DM001-9YN164_W1E07E0G ONLINE 0 0 0
    ata-ST3750640AS_5QD03NB9 ONLINE 0 0 0
    ata-ST3750640AS_3QD0AD6E ONLINE 0 0 0
    ata-WDC_WD5000AADS-00S9B0_WD-WCAV93917591 ONLINE 0 0 0
    errors: No known data errors
    am i looking at needing a bios update here so the controller can talk to the 2TB properly?
    Last edited by wolfdogg (2013-07-21 19:50:18)
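    For what it's worth, once the drive is on the other controller, a reasonable way to double-check both the pool and the disk (assuming the pool is still named 'pool' and the 2TB drive is still /dev/sdc - adjust to taste) would be:
    # walk every block and verify checksums now that the suspect controller is out of the path
    zpool scrub pool
    zpool status -v pool
    # run a full surface test on the drive itself (it runs in the background; check results later with -a)
    smartctl -t long /dev/sdc
    smartctl -a /dev/sdc
    If the scrub and the long SMART test both come back clean on the new controller, that points at the onboard SATA ports (or their BIOS/firmware) rather than the disk.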

  • Mounting zfs in Solaris 10

    Currently I have three hard drives with:
    Disk1: Solaris 10
    Disk2: SXDE
    Disk3: OpenSolaris (ZFS)
    What is the proper way to mount Disk3 from Solaris 10 or SXDE

    Hi
    I think that the command that you need is 'zpool import' (on its own it will list pools that are available to import).
    You may have problems if the version of ZFS on disk 3 is higher than the drivers on either Solaris 10 or SXDE (I recently tried importing an SXCE created ZFS partition into FreeBSD 7, but it failed for this reason).
    Paul
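    For reference, a minimal sketch of what that looks like (the pool name here is illustrative - 'zpool import' with no arguments will show you the real one):
    # scan attached disks for importable pools
    zpool import
    # import by name; -f is needed if the pool was last used on another host
    zpool import -f mypool
    # check which ZFS/zpool versions this OS build actually supports
    zpool upgrade -v
    If the pool was created with a newer ZFS version than the importing system supports, the import will fail, as Paul describes.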

  • Does SAP support Solaris 10 ZFS filesystem when using DB2 V9.5 FP4?

    Hi,
    I'm installing NW7 (BI usage). SAPINST has failed in the step "ABAP LOAD" due to the DB2 error message
    "Unsupported file system type zfs for Direct I/O". It appears my Unix Admin must have decided to set these filesystems up as ZFS on this new server.
    I  have several questions requiring your expertise.
    1) Does SAP support ZFS filesystems on Solaris 10 (SPARC hardware)? I cannot find any reference in SDN or the Service Market Place. Any reference will be much appreciated.
    2) How can I confirm my sapdata filesystems are ZFS?
    3) What actions do you recommend for me to resolve the SAPINST errors? Do I follow the note "Note 995050 - DB6: NO FILE SYSTEM CACHING for Tablespaces" to disable "Direct I/O" for all DB2 tablespaces? I have seen Markus Doehr's forum thread "Re: DB2 on Solaris x64 - ZFS as filesystem possible?", but it does not state exactly how he overcame the error.
    regards
    Benny

    Hi Frank,
    Thanks for your input.
    I have also found the command "zfs list", which displays any ZFS filesystems.
    We have also gone back to UFS as the ZFS deployment schedule does not meet this particular SAP BW implementation timeline.
    Has anyone come across any SAP statement that states NW7 can be deployed with ZFS for the DB2 database on the Solaris SPARC platform? If not, I'll open an OSS message.
    regards
    Benny
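    For question 2, a quick hedged sketch (the sapdata path below is just an example - use your actual mount points):
    # on Solaris, df -n prints only the filesystem type for a mount point
    df -n /db2/BWP/sapdata1
    # list every ZFS dataset and where it is mounted
    zfs list -o name,mountpoint
    Anything that reports type 'zfs' in the first command, or appears as a dataset in the second, is ZFS rather than UFS.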

  • Trouble installing ZFS in archlinux kernel 3.6.3-1-ARCH

    I've been trying to install ZFS on my system, and i can't get past a build error for SPL. here is my install output:
    ==> Downloading zfs PKGBUILD from AUR...
    x zfs_preempt.patch
    x zfs.install
    x PKGBUILD
    Comment by: modular on Wed, 24 Oct 2012 03:09:04 +0000
    @demizer
    I don't/won't run ZFS as a root file system. I'm getting the following build error:
    http://pastebin.com/ZcWiaViK
    Comment by: demizer on Wed, 24 Oct 2012 04:11:54 +0000
    @modular, You're trying to build with the 3.6.2 kernel. The current version (rc11) does not work with the 3.6.2 kernel. If you want to use it, you will have to downgrade to the 3.5.6 kernel (linux and linux-headers). https://wiki.archlinux.org/index.php/Downgrading_Packages
    Thanks!
    Comment by: MilanKnizek on Wed, 24 Oct 2012 08:07:19 +0000
    @demizer: there still seemed to be a problem during upgrading - zfs/spl requires a kernel of a certain version (hard-coded) and this blocks the upgrade (the old installed zfs/spl requires the old kernel, and the kernel can't be upgraded without breaking the dependency of zfs/spl, so the build of the new zfs/spl fails, too).
    So far, I have had to remove zfs/spl, upgrade the kernel, rebuild + install spl/zfs and manually run depmod against the new kernel (i.e. the postinst 'depmod -a' does not work until the next reboot), and only then reboot to load the new kernel zfs modules successfully.
    That is quite clumsy and error-prone - I hope it will be resolved via DKMS.
    Comment by: srf21c on Sun, 28 Oct 2012 04:00:31 +0000
    All, if you're suffering zfs kernel upgrade pain fatigue, seriously consider going with the LTS (long term support) kernel. I just successfully built zfs on a system that I switched to the linux-lts 3.0.48-1. All you have to do is install the linux-lts and linux-lts-headers packages, reboot to the lts kernel, and change any instances of depends= or makedepends= lines in the package build file like so:
    Before:
    depends=('linux=3.5' "spl=${pkgver}" "zfs-utils=${pkgver}")
    makedepends=('linux-headers=3.5')
    After:
    depends=('linux-lts=3.0' "spl=${pkgver}" "zfs-utils=${pkgver}")
    makedepends=('linux-lts-headers=3.0')
    Then build and install each package in this order: spl-utils,spl,zfs-utils,zfs.
    Worked like a champ for me.
    Comment by: stoone on Mon, 29 Oct 2012 12:09:29 +0000
    If you keep the linux and linux-headers packages while using the LTS kernel, you don't need to modify the PKGBUILDs, because the checks will pass - but it will build the packages against your currently running kernel.
    Comment by: demizer on Mon, 29 Oct 2012 15:56:27 +0000
    Hey everybody, just a quick update. The new build tool I have been working on is now in master, https://github.com/demizer/aur-zfs. With it you can build and package two different groups of packages one for aur and one for split. Again, building the split packages is more efficient. I still have a lot of work to be done, but it is progressing. I will be adding git, dkms, and lts packages after I setup my repo. My next step is to add unofficial repository support to my build tool so I can easily setup a repo with precompiled binaries. I will be hosting the repo on my website at http://demizerone.com/archzfs. Initially it will only be for 64bit code since the ZOL FAQ states that ZOL is very unstable with 32bit code due to memory management differences in Solaris and Linux. I will notify you all in the future when that is ready to go.
    @MilanKnizek, Yes, updating is a pain. ZFS itself is hard-coded to linux versions at build time. The ZFS build tool puts the modules in "/usr/lib/modules/3.5.6-1-ARCH/addon/zfs/", and this is the primary reason it has to be rebuilt on each upgrade, even minor point releases. Nvidia, for example, puts their module in "/usr/lib/modules/extramodules-3.5-ARCH/", so minor point releases are still good and the nvidia package doesn't need to be re-installed. A possible reason for ZOL to be hard-coded like this is that ZOL is still technically very beta code.
    I do have a question for the community, does anyone use ZFS on a 32bit system?
    Thanks!
    First Submitted: Thu, 23 Sep 2010 08:50:51 +0000
    zfs 0.6.0_rc11-2
    ( Unsupported package: Potentially dangerous ! )
    ==> Edit PKGBUILD ? [Y/n] ("A" to abort)
    ==> ------------------------------------
    ==> n
    ==> zfs dependencies:
    - linux>=3.5 (already installed)
    - linux-headers>=3.5 (already installed)
    - spl>=0.6.0_rc11 (building from AUR)
    - zfs-utils>=0.6.0_rc11 (building from AUR)
    ==> Edit zfs.install ? [Y/n] ("A" to abort)
    ==> ---------------------------------------
    n
    ==> Continue building zfs ? [Y/n]
    ==> -----------------------------
    ==>
    ==> Building and installing package
    ==> Install or build missing dependencies for zfs:
    ==> Downloading spl PKGBUILD from AUR...
    x spl.install
    x PKGBUILD
    Comment by: timemaster on Mon, 15 Oct 2012 22:42:32 +0000
    I am not able to compile this package after the upgrade to the 3.6 kernel. Anyone else ? any idea?
    Comment by: mikers on Mon, 15 Oct 2012 23:34:17 +0000
    rc11 doesn't support Linux 3.6; there are some patches on GitHub that might apply against it (I've not done it myself), see:
    https://github.com/zfsonlinux/spl/pull/179
    https://github.com/zfsonlinux/zfs/pull/1039
    Otherwise downgrade to Linux 3.5.x or linux-lts and wait for rc12.
    Comment by: timemaster on Mon, 15 Oct 2012 23:54:03 +0000
    Yes, I saw that too late.
    https://github.com/zfsonlinux/zfs/commit/ee7913b644a2c812a249046f56eed39d1977d706
    Comment by: demizer on Tue, 16 Oct 2012 07:00:16 +0000
    Looks like the patches have been merged, now we wait for rc12.
    Comment by: vroomanj on Fri, 26 Oct 2012 17:07:19 +0000
    @demizer: 3.6 support is available in the master builds, which are stable but not officially released yet. Can't the build be updated to use the master tars?
    https://github.com/zfsonlinux/spl/tarball/master
    https://github.com/zfsonlinux/zfs/tarball/master
    Comment by: demizer on Fri, 26 Oct 2012 17:51:42 +0000
    @vroomanj, I plan on working on the git packages this weekend. All I have to figure out if it is going to be based on an actual git clone or if its just going to be the download links you provided. They are pretty much the same, but i'm not really clear what the Arch Package Guidelines say about this yet. Also, I don't think the current packages in AUR now should be based off of git master. They should be based off of the ZOL stable releases (rc10, rc11, ...). That's why I am making git packages so people can use them if they want to upgrade to the latest kernel and the stable release hasn't been made yet. As is the case currently.
    First Submitted: Sat, 26 Apr 2008 14:34:31 +0000
    spl 0.6.0_rc11-2
    ( Unsupported package: Potentially dangerous ! )
    ==> Edit PKGBUILD ? [Y/n] ("A" to abort)
    ==> ------------------------------------
    ==> n
    ==> spl dependencies:
    - linux>=3.5 (already installed)
    - spl-utils>=0.6.0_rc11 (already installed)
    - linux-headers>=3.5 (already installed)
    ==> Edit spl.install ? [Y/n] ("A" to abort)
    ==> ---------------------------------------
    ==> n
    ==> Continue building spl ? [Y/n]
    ==> -----------------------------
    ==>
    ==> Building and installing package
    ==> Making package: spl 0.6.0_rc11-2 (Tue Oct 30 11:34:13 CET 2012)
    ==> Checking runtime dependencies...
    ==> Checking buildtime dependencies...
    ==> Retrieving Sources...
    -> Downloading spl-0.6.0-rc11.tar.gz...
    % Total % Received % Xferd Average Speed Time Time Time Current
    Dload Upload Total Spent Left Speed
    0 178 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
    100 136 100 136 0 0 154 0 --:--:-- --:--:-- --:--:-- 293
    100 508k 100 508k 0 0 357k 0 0:00:01 0:00:01 --:--:-- 1245k
    ==> Validating source files with md5sums...
    spl-0.6.0-rc11.tar.gz ... Passed
    ==> Extracting Sources...
    -> Extracting spl-0.6.0-rc11.tar.gz with bsdtar
    ==> Starting build()...
    configure.ac:34: warning: AM_INIT_AUTOMAKE: two- and three-arguments forms are deprecated. For more info, see:
    configure.ac:34: http://www.gnu.org/software/automake/manual/automake.html#Modernize-AM_INIT_AUTOMAKE-invocation
    checking metadata... yes
    checking build system type... i686-pc-linux-gnu
    checking host system type... i686-pc-linux-gnu
    checking target system type... i686-pc-linux-gnu
    checking whether to enable maintainer-specific portions of Makefiles... no
    checking whether make supports nested variables... yes
    checking for a BSD-compatible install... /usr/bin/install -c
    checking whether build environment is sane... yes
    checking for a thread-safe mkdir -p... /usr/bin/mkdir -p
    checking for gawk... gawk
    checking whether make sets $(MAKE)... yes
    checking for gcc... gcc
    checking whether the C compiler works... yes
    checking for C compiler default output file name... a.out
    checking for suffix of executables...
    checking whether we are cross compiling... no
    checking for suffix of object files... o
    checking whether we are using the GNU C compiler... yes
    checking whether gcc accepts -g... yes
    checking for gcc option to accept ISO C89... none needed
    checking for style of include used by make... GNU
    checking dependency style of gcc... gcc3
    checking how to print strings... printf
    checking for a sed that does not truncate output... /bin/sed
    checking for grep that handles long lines and -e... /usr/bin/grep
    checking for egrep... /usr/bin/grep -E
    checking for fgrep... /usr/bin/grep -F
    checking for ld used by gcc... /usr/bin/ld
    checking if the linker (/usr/bin/ld) is GNU ld... yes
    checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B
    checking the name lister (/usr/bin/nm -B) interface... BSD nm
    checking whether ln -s works... yes
    checking the maximum length of command line arguments... 1572864
    checking whether the shell understands some XSI constructs... yes
    checking whether the shell understands "+="... yes
    checking how to convert i686-pc-linux-gnu file names to i686-pc-linux-gnu format... func_convert_file_noop
    checking how to convert i686-pc-linux-gnu file names to toolchain format... func_convert_file_noop
    checking for /usr/bin/ld option to reload object files... -r
    checking for objdump... objdump
    checking how to recognize dependent libraries... pass_all
    checking for dlltool... no
    checking how to associate runtime and link libraries... printf %s\n
    checking for ar... ar
    checking for archiver @FILE support... @
    checking for strip... strip
    checking for ranlib... ranlib
    checking command to parse /usr/bin/nm -B output from gcc object... ok
    checking for sysroot... no
    checking for mt... no
    checking if : is a manifest tool... no
    checking how to run the C preprocessor... gcc -E
    checking for ANSI C header files... yes
    checking for sys/types.h... yes
    checking for sys/stat.h... yes
    checking for stdlib.h... yes
    checking for string.h... yes
    checking for memory.h... yes
    checking for strings.h... yes
    checking for inttypes.h... yes
    checking for stdint.h... yes
    checking for unistd.h... yes
    checking for dlfcn.h... yes
    checking for objdir... .libs
    checking if gcc supports -fno-rtti -fno-exceptions... no
    checking for gcc option to produce PIC... -fPIC -DPIC
    checking if gcc PIC flag -fPIC -DPIC works... yes
    checking if gcc static flag -static works... yes
    checking if gcc supports -c -o file.o... yes
    checking if gcc supports -c -o file.o... (cached) yes
    checking whether the gcc linker (/usr/bin/ld) supports shared libraries... yes
    checking whether -lc should be explicitly linked in... no
    checking dynamic linker characteristics... GNU/Linux ld.so
    checking how to hardcode library paths into programs... immediate
    checking whether stripping libraries is possible... yes
    checking if libtool supports shared libraries... yes
    checking whether to build shared libraries... yes
    checking whether to build static libraries... yes
    checking spl license... GPL
    checking linux distribution... arch
    checking default package type... arch
    checking whether rpm is available... no
    checking whether rpmbuild is available... no
    checking whether dpkg is available... no
    checking whether dpkg-buildpackage is available... no
    checking whether alien is available... no
    checking whether pacman is available... yes (4.0.3)
    checking whether makepkg is available... yes (4.0.3)
    checking spl config... kernel
    checking kernel source directory... /usr/src/linux-3.6.3-1-ARCH
    checking kernel build directory... /usr/src/linux-3.6.3-1-ARCH
    checking kernel source version... 3.6.3-1-ARCH
    checking kernel file name for module symbols... Module.symvers
    checking whether debugging is enabled... no
    checking whether basic debug logging is enabled... yes
    checking whether basic kmem accounting is enabled... yes
    checking whether detailed kmem tracking is enabled... no
    checking whether modules can be built... yes
    checking whether atomic types use spinlocks... no
    checking whether kernel defines atomic64_t... yes
    checking whether kernel defines atomic64_cmpxchg... no
    checking whether kernel defines atomic64_xchg... yes
    checking whether kernel defines uintptr_t... yes
    checking whether INIT_WORK wants 3 args... no
    checking whether register_sysctl_table() wants 2 args... no
    checking whether set_shrinker() available... no
    checking whether shrinker callback wants 3 args... no
    checking whether struct path used in struct nameidata... yes
    checking whether task_curr() is available... no
    checking whether unnumbered sysctl support exists... no
    checking whether struct ctl_table has ctl_name... no
    checking whether fls64() is available... yes
    checking whether device_create() is available... yes
    checking whether device_create() wants 5 args... yes
    checking whether class_device_create() is available... no
    checking whether set_normalized_timespec() is available as export... yes
    checking whether set_normalized_timespec() is an inline... yes
    checking whether timespec_sub() is available... yes
    checking whether init_utsname() is available... yes
    checking whether header linux/fdtable.h exists... yes
    checking whether files_fdtable() is available... yes
    checking whether __clear_close_on_exec() is available... yes
    checking whether header linux/uaccess.h exists... yes
    checking whether kmalloc_node() is available... yes
    checking whether monotonic_clock() is available... no
    checking whether struct inode has i_mutex... yes
    checking whether struct mutex has owner... yes
    checking whether struct mutex owner is a task_struct... yes
    checking whether mutex_lock_nested() is available... yes
    checking whether on_each_cpu() wants 3 args... yes
    checking whether kallsyms_lookup_name() is available... yes
    checking whether get_vmalloc_info() is available... no
    checking whether symbol *_pgdat exist... yes
    checking whether first_online_pgdat() is available... no
    checking whether next_online_pgdat() is available... no
    checking whether next_zone() is available... no
    checking whether pgdat_list is available... no
    checking whether global_page_state() is available... yes
    checking whether page state NR_FREE_PAGES is available... yes
    checking whether page state NR_INACTIVE is available... no
    checking whether page state NR_INACTIVE_ANON is available... yes
    checking whether page state NR_INACTIVE_FILE is available... yes
    checking whether page state NR_ACTIVE is available... no
    checking whether page state NR_ACTIVE_ANON is available... yes
    checking whether page state NR_ACTIVE_FILE is available... yes
    checking whether symbol get_zone_counts is needed... no
    checking whether user_path_dir() is available... yes
    checking whether set_fs_pwd() is available... no
    checking whether set_fs_pwd() wants 2 args... yes
    checking whether vfs_unlink() wants 2 args... yes
    checking whether vfs_rename() wants 4 args... yes
    checking whether vfs_fsync() is available... yes
    checking whether vfs_fsync() wants 2 args... yes
    checking whether struct fs_struct uses spinlock_t... yes
    checking whether struct cred exists... yes
    checking whether groups_search() is available... no
    checking whether __put_task_struct() is available... yes
    checking whether proc_handler() wants 5 args... yes
    checking whether kvasprintf() is available... yes
    checking whether rwsem_is_locked() acquires sem->wait_lock... no
    checking whether invalidate_inodes() is available... no
    checking whether invalidate_inodes_check() is available... no
    checking whether invalidate_inodes() wants 2 args... yes
    checking whether shrink_dcache_memory() is available... no
    checking whether shrink_icache_memory() is available... no
    checking whether symbol kern_path_parent exists in header... no
    checking whether kern_path_parent() is available... no
    checking whether zlib_deflate_workspacesize() wants 2 args... yes
    checking whether struct shrink_control exists... yes
    checking whether struct rw_semaphore member wait_lock is raw... yes
    checking that generated files are newer than configure... done
    configure: creating ./config.status
    config.status: creating Makefile
    config.status: creating lib/Makefile
    config.status: creating cmd/Makefile
    config.status: creating module/Makefile
    config.status: creating module/spl/Makefile
    config.status: creating module/splat/Makefile
    config.status: creating include/Makefile
    config.status: creating scripts/Makefile
    config.status: creating spl.spec
    config.status: creating spl-modules.spec
    config.status: creating PKGBUILD-spl
    config.status: creating PKGBUILD-spl-modules
    config.status: creating spl.release
    config.status: creating dkms.conf
    config.status: creating spl_config.h
    config.status: executing depfiles commands
    config.status: executing libtool commands
    make all-recursive
    make[1]: Entering directory `/tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11'
    Making all in module
    make[2]: Entering directory `/tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module'
    make -C /usr/src/linux-3.6.3-1-ARCH SUBDIRS=`pwd` CONFIG_SPL=m modules
    make[3]: Entering directory `/usr/src/linux-3.6.3-1-ARCH'
    CC [M] /tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-debug.o
    CC [M] /tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-proc.o
    CC [M] /tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-kmem.o
    CC [M] /tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-thread.o
    CC [M] /tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-taskq.o
    CC [M] /tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-rwlock.o
    CC [M] /tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-vnode.o
    /tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-vnode.c: In function 'vn_remove':
    /tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-vnode.c:327:2: error: implicit declaration of function 'path_lookup' [-Werror=implicit-function-declaration]
    cc1: some warnings being treated as errors
    make[5]: *** [/tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-vnode.o] Error 1
    make[4]: *** [/tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl] Error 2
    make[3]: *** [_module_/tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module] Error 2
    make[3]: Leaving directory `/usr/src/linux-3.6.3-1-ARCH'
    make[2]: *** [modules] Error 2
    make[2]: Leaving directory `/tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module'
    make[1]: *** [all-recursive] Error 1
    make[1]: Leaving directory `/tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11'
    make: *** [all] Error 2
    ==> ERROR: A failure occurred in build().
    Aborting...
    ==> ERROR: Makepkg was unable to build spl.
    ==> Restart building spl ? [y/N]
    ==> ----------------------------
    ... i'm stuck here, can anyone help me with this one? please !

    Did you read the comments, either on the AUR page or in the output that you posted? They explain it.
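    In other words, at the time of that thread the practical options were to drop back to a kernel that spl/zfs rc11 could build against - roughly the following (the package file names and versions are illustrative of the commands, not exact):
    # reinstall the older kernel from the pacman cache, if it is still there
    pacman -U /var/cache/pacman/pkg/linux-3.5.6-1-x86_64.pkg.tar.xz \
              /var/cache/pacman/pkg/linux-headers-3.5.6-1-x86_64.pkg.tar.xz
    # or move to the LTS kernel and adjust the PKGBUILD depends lines as shown in the comments above
    pacman -S linux-lts linux-lts-headers
    then reboot into that kernel before rebuilding spl-utils, spl, zfs-utils and zfs.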

  • Lucreate zfs BE dataset does not exist

    Hi guys,
    I'm trying to upgrade a Solaris 10 full-zfs system with Live Upgrade, but unfortunately, after a day of searching and re-reading all the Sun docs, I'm blocked and cannot create a new BE.
    The final objective is to have the server on release 10 10/09 for VDI 3.1.1.
    If someone has an idea it would be so great.
    tks
    Please find here all informations:
    SunOS lima 5.10 Generic_142900-02 sun4v sparc SUNW,Sun-Blade-T6340
    Solaris 10 10/08 s10s_u6wos_07b SPARC
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 15.0G 119G 94K none
    rpool@install 0 - 94K -
    rpool/ROOT 5.83G 119G 18K legacy
    rpool/ROOT@install 0 - 18K -
    rpool/ROOT/s10s_u6wos_07b 5.83G 113G 5.71G /
    rpool/ROOT/s10s_u6wos_07b@install 127M - 5.70G -
    rpool/ROOT/s10s_u6wos_07b@sol10_u8 394K - 5.71G -
    rpool/ROOT/sol10_u8 0 119G 5.71G /.alt.tmp.b-.7.mnt/
    rpool/dump 2.00G 119G 2.00G -
    rpool/export 112K 1024M 20K /export
    rpool/export@install 16K - 20K -
    rpool/export/home 75.5K 1024M 56.5K /export/home
    rpool/export/home@install 19K - 31K -
    rpool/flar 4.38G 3.62G 4.38G /var/flar
    rpool/flar@install 67K - 68K -
    rpool/opt 134M 890M 133M /opt
    rpool/opt@install 200K - 89.0M -
    rpool/tarantella 2.61G 397M 1.11G /opt/SUNWvda
    rpool/tarantella@install 491M - 491M -
    rpool/tarantella/var 1.02G 397M 1.02G /var/tarantella
    Boot Environment Is Active Active Can Copy
    Name Complete Now On Reboot Delete Status
    old yes yes yes no -
    sol10_u8 no no no yes -
    The errors :
    root@ lima #lucreate -n sol10_u8
    Analyzing system configuration.
    Comparing source boot environment <old> file systems with the file
    system(s) you specified for the new boot environment. Determining which
    file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <sol10_u8>.
    Source boot environment is <old>.
    Creating boot environment <sol10_u8>.
    Cloning file systems from boot environment <old> to create boot environment <sol10_u8>.
    Creating snapshot for <rpool/ROOT/s10s_u6wos_07b> on <rpool/ROOT/s10s_u6wos_07b@sol10_u8>.
    Creating clone for <rpool/ROOT/s10s_u6wos_07b@sol10_u8> on <rpool/ROOT/sol10_u8>.
    Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/sol10_u8>.
    ERROR: cannot open ' ': dataset does not exist
    ERROR: cannot mount mount point </.alt.tmp.b-.7.mnt/opt> device < >
    ERROR: failed to mount file system < > on </.alt.tmp.b-.7.mnt/opt>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
    ERROR: Unable to mount ABE <sol10_u8>
    ERROR: Unable to clone the existing file systems from boot environment <old> to create boot environment <sol10_u8>.
    ERROR: Cannot make file systems for boot environment <sol10_u8>.

    In fact /opt was separated; including it in / resolved the issue.
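    A hedged way to sanity-check things before retrying, based on the datasets shown above:
    # the half-created clone and its snapshot should be visible (or absent) here
    zfs list -r rpool/ROOT
    # confirm what Live Upgrade thinks exists, then remove the broken BE before re-running lucreate
    lustatus
    ludelete sol10_u8
    With /opt merged back into the root dataset, lucreate no longer has a separately mounted /opt to clone, which is what was tripping over the empty dataset name.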

  • Performance problems when running PostgreSQL on ZFS and tomcat

    Hi all,
    I need help with some analysis and problem solution related to the below case.
    The long story:
    I'm running into some massive performance problems on two 8-way HP ProLiant DL385 G5 servers with 14 GB ram and a ZFS storage pool in raidz configuration. The servers are running Solaris 10 x86 10/09.
    The configuration between the two is pretty much the same and the problem therefore seems generic for the setup.
    Within a non-global zone I’m running a tomcat application (an institutional repository) connecting via localhost to a Postgresql database (the OS provided version). The processor load is typically not very high as seen below:
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU                            
        49 postgres  749M  669M   4,7%   7:14:38  13%
         1 jboss    2519M 2536M    18%  50:36:40 5,9%
    We are not 100% sure why we run into performance problems, but when it happens the application slows down and swaps out (see below). When it settles, everything seems to turn back to normal. When the problem is acute, the application is totally unresponsive.
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
        1 jboss    3104M  913M   6,4%   0:22:48 0,1%
    #sar -g 5 5
    SunOS vbn-back 5.10 Generic_142901-03 i86pc    05/28/2010
    07:49:08  pgout/s ppgout/s pgfree/s pgscan/s %ufs_ipf
    07:49:13    27.67   316.01   318.58 14854.15     0.00
    07:49:18    61.58   664.75   668.51 43377.43     0.00
    07:49:23   122.02  1214.09  1222.22 32618.65     0.00
    07:49:28   121.19  1052.28  1065.94  5000.59     0.00
    07:49:33    54.37   572.82   583.33  2553.77     0.00
    Average     77.34   763.71   771.43 19680.67     0.00
    Making more memory available to tomcat seemed to worsen the problem, or at least didn't prove to have any positive effect.
    My suspicion is currently focused on PostgreSQL. Turning off fsync boosted performance and made the problem appear less often.
    An unofficial performance evaluation on the database with “vacuum analyze” took 19 minutes on the server and only 1 minute on a desktop pc. This is horrific when taking the hardware into consideration.
    The short story:
    I'm trying different steps but running out of ideas. We've read that the database block size and the file system record size should match: PostgreSQL uses 8 KB and ZFS defaults to 128 KB. I didn't find much information on the matter, so if anyone can help please recommend how to make this change…
    Any other recommendations and ideas we could follow? We know from other installations that the above setup runs without a single problem on Linux on much smaller hardware without specific tuning. What makes Solaris in this configuration so darn slow?
    Any help appreciated and I will try to provide additional information on request if needed…
    Thanks in advance,
    Kasper

    raidz isn't a good match for databases. Databases tend to require good write performance, for which mirroring works better.
    Adding a pair of SSDs as a ZIL would probably also help, but chances are it's not an option for you..
    You can change the record size with "zfs set recordsize=8k <dataset>".
    It will only take effect for newly written data, not existing data.
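    As a rough sketch of the recordsize change (the dataset name is illustrative):
    # check the current value - 128K is the default
    zfs get recordsize pool/pgdata
    # match PostgreSQL's 8 KB page size on the dataset holding the database files
    zfs set recordsize=8k pool/pgdata
    Since only newly written blocks pick up the new size, the database files need to be dumped and reloaded (or copied) afterwards, and it's worth giving PostgreSQL its own dataset since recordsize is a per-dataset property.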

  • Using a ZFS volume with encryption on as the Virtual disk in a Sparc VM

    Hi guys,
    Working my way through understanding how virtualization works in VM for Sparc. One question I thought of deals with having encryption turned on in a ZFS volume. If I create a volume with encryption ON, and then attach that volume as the sole virtual disk in a guest domain, will the virtualization still work? Has anyone else dealt with a scenario like this? I'm about to try it now, I'll report back with my findings.

    From my understanding, if you create a ZFS disk with encryption ON, mount it on Solaris, and after that present that to the VM, you should not have problems. The encryption will work between the volume and Solaris. To the guest vm, it should be transparent.
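    For anyone trying the same thing, a hedged sketch of the setup (names are made up; on Solaris 11 the encryption property has to be set at creation time, and by default you are prompted for a passphrase):
    # create an encrypted ZFS volume to use as the backing store
    zfs create -V 20G -o encryption=on tank/guest1-disk0
    # export it through the virtual disk service and hand it to the guest domain
    ldm add-vdsdev /dev/zvol/dsk/tank/guest1-disk0 guest1-disk0@primary-vds0
    ldm add-vdisk vdisk0 guest1-disk0@primary-vds0 guest1
    The guest just sees an ordinary virtual disk; encryption and decryption happen in ZFS on the control/service domain, so it is transparent to the guest, as described above.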

  • Can someone please tell me how to format a new disk to ZFS format?

    I have a Sun v240 with Solaris 10 update 8 installed on a single 73GB harddisk. Everything is working fine. I just purchased another identical harddisk online. I plugged the disk into my v240 and ran 'devfsadm' and solaris found the new disk. I want to add this disk to my existing ZFS pool as a mirror. However, this disk was originally formatted with a UFS file system. So when I run:
    zpool attach rpool c1t0d0 c1t1d0
    I get:
    /dev/dsk/c1t1d0s0 contains a ufs filesystem.
    I understand the error message but I don't know how to format the disk to have a ZFS file system instead. Note that I am extremely new to Solaris, ZFS, and pretty much everything Sun - I bought this server on eBay so that I could learn more about it. It's been pretty fun so far but I need some help here and there.
    For some reason I can't find a single hit on Google telling me how to just simply format a disk to ZFS. Can I use the 'format' command? Maybe you don't "format" disks for ZFS? I have no idea. I might not have the right terminology. If so, apologies. Can anyone help me on this?
    Thanks a lot! =D
    Jonathon

    Yes, you were right. The partitions were totally different. Here is what I saw:
    For c1t0d0:
    # format
    Part      Tag    Flag     Cylinders         Size            Blocks
      0       root    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
      1 unassigned    wm       0                0         (0/0/0)             0
      2     backup    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
      3 unassigned    wm       0                0         (0/0/0)             0
      4 unassigned    wm       0                0         (0/0/0)             0
      5 unassigned    wm       0                0         (0/0/0)             0
      6 unassigned    wm       0                0         (0/0/0)             0
      7 unassigned    wm       0                0         (0/0/0)             0
    For c1t1d0:
    # format
    Part      Tag    Flag     Cylinders         Size            Blocks
      0       root    wm       0 - 12865       62.43GB    (12866/0/0) 130924416
      1       swap    wu   12866 - 14079        5.89GB    (1214/0/0)   12353664
      2     backup    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
      3 unassigned    wm   14080 - 14086       34.78MB    (7/0/0)         71232
      4 unassigned    wm       0                0         (0/0/0)             0
      5 unassigned    wm       0                0         (0/0/0)             0
      6 unassigned    wm       0                0         (0/0/0)             0
      7 unassigned    wm       0                0         (0/0/0)             0
    So then I ran the following:
    # prtvtoc /dev/rdsk/c1t0d0s0 | fmthard -s - /dev/rdsk/c1t1d0s0
    fmthard:  New volume table of contents now in place.
    Then I rechecked the partition table for c1t1d0:
    # format
    Part      Tag    Flag     Cylinders         Size            Blocks
      0       root    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
      1 unassigned    wu       0                0         (0/0/0)             0
      2     backup    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
      3 unassigned    wu       0                0         (0/0/0)             0
      4 unassigned    wu       0                0         (0/0/0)             0
      5 unassigned    wu       0                0         (0/0/0)             0
      6 unassigned    wu       0                0         (0/0/0)             0
      7 unassigned    wu       0                0         (0/0/0)             0
    Woo-hoo!! It matches the first disk now! :)
    Then I tried to attach the new disk to the pool again:
    # zpool attach -f rpool c1t0d0s0 c1t1d0s0
    Please be sure to invoke installboot(1M) to make 'c1t1d0s0' bootable.
    Make sure to wait until resilver is done before rebooting.
    bash-3.00# zpool status
      pool: rpool
    state: ONLINE
    status: One or more devices is currently being resilvered.  The pool will
            continue to function, possibly in a degraded state.
    action: Wait for the resilver to complete.
    scrub: resilver in progress for 0h0m, 0.40% done, 0h58m to go
    config:
            NAME          STATE     READ WRITE CKSUM
            rpool         ONLINE       0     0     0
              mirror-0    ONLINE       0     0     0
                c1t0d0s0  ONLINE       0     0     0
                c1t1d0s0  ONLINE       0     0     0  30.3M resilvered
    errors: No known data errors
    Boo-yah!!! ++Does little dance++
    Then, after resilvering completed I ran:
    # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
    I think I'm starting to understand this now. I also shut down the server to the OpenBoot prompt and booted off of the new disk and it worked! Also, my bootup time to login has drastically decreased - I would say it's about half the time it was before I added the mirror disk. So I believe the server is properly reading from both disks simultaneously in order to get better bandwidth. Cool! :)
    Thanks for the help!
    Jonathon

  • How to count number of files on zfs filesystem

    Hi all,
    Is there a way to count the number of files on a zfs filesystem, similar to how "df -o i /ufs_filesystem" works? I am looking for a way to do this without using find, as I suspect there are millions of files on a particular zfs filesystem that is sometimes causing slow performance.
    Thanks.

    So I have finished 90% of my testing and I have accepted df -t /filesystem | awk ' { if ( NR==1) F=$(NF-1) ; if ( NR==2) print $(NF-1) - F }' as acceptable in the absence of a known built-in zfs method. My main concern was the reduction of available files in the df -t output as more files were added. I used a one-liner for loop to create empty files, to conserve the space used up, so I would have a better chance of seeing what happens if the available files reached 0.
    root@fj-sol11:/zfstest/dir4# df -t /zfstest | awk ' { if ( NR==1) F=$(NF-1) ; if ( NR==2) print $(NF-1) - F }'
    _5133680_
    root@fj-sol11:/zfstest/dir4# df -t /zfstest
    /zfstest (pool1 ): 7237508 blocks *7237508* files
    total: 10257408 blocks 12372310 files
    root@fj-sol11:/zfstest/dir4#
    root@fj-sol11:/zfstest/dir7# df -t /zfstest | awk ' { if ( NR==1) F=$(NF-1) ; if ( NR==2) print $(NF-1) - F }'
    _6742772_
    root@fj-sol11:/zfstest/dir7# df -t /zfstest
    /zfstest (pool1 ): 6619533 blocks *6619533* files
    total: 10257408 blocks 13362305 files
    root@fj-sol11:/zfstest/dir7# df -t /zfstest | awk ' { if ( NR==1) F=$(NF-1) ; if ( NR==2) print $(NF-1) - F }'
    _7271716_
    root@fj-sol11:/zfstest/dir7# df -t /zfstest
    /zfstest (pool1 ): 6445809 blocks *6445809* files
    total: 10257408 blocks 13717010 files
    root@fj-sol11:/zfstest# df -t /zfstest | awk ' { if ( NR==1) F=$(NF-1) ; if ( NR==2) print $(NF-1) - F }'
    _12359601_
    root@fj-sol11:/zfstest# df -t /zfstest
    /zfstest (pool1 ): 4494264 blocks *4494264* files
    total: 10257408 blocks 16853865 files
    I noticed the total files kept increasing, and creating 4 million more files (4494264) after the above example was taking more time than I had, having already created 12 million plus ( 12359601 ), which took 2 days on a slow machine on and off (mostly on). If anyone has any idea how to create them quicker than "touch filename$loop" in a for loop, let me know :)
    In the end I decided to use a really small (100mb) file system on a virtual machine to test what happens as the free files approached 0. Turns out it never does ... it somehow increased
    bash-3.00# df -t /smalltest/
    /smalltest (smalltest ): 31451 blocks *31451* files
    total: 112640 blocks 278542 files
    bash-3.00# pwd
    /smalltest
    bash-3.00# mkdir dir4
    bash-3.00# cd dir4
    bash-3.00# for arg in {1..47084}; do touch file$arg; done <--- I created 47084 files here, more than the free count listed above ( *31451* )
    bash-3.00# zfs list smalltest
    NAME USED AVAIL REFER MOUNTPOINT
    smalltest 47.3M 7.67M 46.9M /smalltest
    bash-3.00# df -t /smalltest/
    /smalltest (smalltest ): 15710 blocks *15710* files
    total: 112640 blocks 309887 files
    bash-3.00#
    The other 10% of my testing will be to see what happens when I try a find on 12 million plus files and pipe it to wc -l :)
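    For convenience, here is the accepted one-liner wrapped up as a small shell function (same logic, just easier to reuse):
    count_zfs_files() {
        # "files in use" = total files minus free files, taken from Solaris `df -t` output
        df -t "$1" | awk '{ if (NR==1) F=$(NF-1); if (NR==2) print $(NF-1) - F }'
    }
    count_zfs_files /zfstest
    Keep in mind the caveat demonstrated above: on ZFS the total and free file counts are not fixed - they grow as the pool allocates more metadata - so the result is a point-in-time count rather than a hard limit.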

  • How can i create a zfs slice in the finish script of a jumpstart?

    Hi All,
    i need advice for a particular task.
    i have a jumpstart which creates a certain number of ufs slices (/, /var, etc...).
    it's working well, but i have a final goal: with half of the remaining free space, i want to create a zfs pool, with the help of the finish script.
    i think the best way is to use the "format" or "fdisk" command with a script, like "fdisk /dev/dsk/c0d0 < script.sh",
    and after that a simple "zpool create ...." command for creating the zfs pool.
    so i have 2 questions:
    do you think it's the best way?
    how can i write the "script.sh" to tell it to use only half of the free space?
    thx

    Why not make another slice for ZFS to use? Then just set up the zfs pool in your finish script. I use JET here as a friendly front end to jumpstart. You could just have your jumpstart setup create the slice (base_config_profile_s?_size) with no mountpoint (base_config_profile_s?_mtpt), then use that slice when you make the zfs pool later in the finish script.
    I do not believe you will be able to easily get ZFS to use just part of a device without some sort of partitioning. Do some reading on zpool (man zpool) under the vdev section.
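    As a rough sketch of the finish-script end of it (the slice name is illustrative; use whatever slice the profile left without a mountpoint):
    # create the pool on the dedicated slice, then carve datasets out of it
    zpool create -f tank c0t0d0s7
    zfs create tank/data
    The -f forces creation even if the slice still carries an old filesystem signature; drop it if the slice is known to be clean.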

  • Mounting a shared zfs onto a remote system : how to restrict permissions

    I'm trying to set up some basic nfs shares out of a zpool and failing miserably as it is.
    Here's what I have; all my machines are in 10.154.22.0/24 .. it's basically a test (pre-prod) network
    - oslo, the nfs server, 10.154.22.1, SunOS 5.10-x86
    - helsinki, the remote machine, 10.154.22.4, linux 2.6.22
    On oslo I've created a zpool named pool2 with a zfs filesystem called tools. pool2/tools is mounted at /tools. I've further restricted access to /tools with: chown 0:nfsusers /tools && chmod 770 /tools . I want to ensure that only users from the group nfsusers will be able to read/write/execute in /tools.
    I have a user, dbusr who is part of the nfsusers group. He can access the FS as he wants. All usernames / uids / gids are identical across the whole network .
    Ok now, on helsinki, I have a directory /export/helsinki/tools . This directory is also chowned 0:nfsusers and chmoded 770.
    Now, on helsinki, every time I try: mount -t nfs oslo:/tools /export/helsinki/tools I get:
    mount.nfs: oslo:/tools failed, reason given by server: Permission denied
    Server-side I've modified /etc/default/nfs so that both client and server run at NFSv3 (I've read somewhere that NFSv4 is not that well supported on Linux). The zfs share is set up like this:
    zfs set sharenfs=rw=10.154.22.0/255.255.255.0 pool2/tools .
    I'd like all my users in the group nfsusers to be able to write on the remote nfs FS, and optionally, root to be able, too.
    What am I missing, here ?
    Regards,
    Jeff

    It was not a problem. The raw partition with the soft partition or ZFS filesystem works fine in the local zone.
    Thank you for your help.
    -Yong
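    In case it helps anyone landing here: the usual culprits for that "Permission denied" are the share options rather than the directory modes. A hedged sketch (if I remember the share_nfs syntax correctly, network access lists take an '@' prefix, and root access has to be granted explicitly):
    zfs set sharenfs='rw=@10.154.22.0/24,root=@10.154.22.0/24' pool2/tools
    share                    # on oslo: verify the options actually being exported
    showmount -e oslo        # on helsinki: confirm the client is allowed to see the export
    The group-based restriction then keeps working exactly as set with the chown/chmod, since NFSv3 passes the client's uid/gid through.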

  • [SOLVED] What is the best way to use ZFS on linux-ck?

    Hi, I've recently gotten into ZFS to replace my Intel FakeRaid array I had set up for my /home folder. All is well under the stock kernel, but I'm running into issues while trying to run it under linux-ck.
    At boot systemd throws a fit, and in the emergency shell it seems that -CK cannot use the installed module (module not found). I noticed that the versions for -ck and -arch are currently slightly different, and tried recompiling -ck to match -arch's version to no avail.
    Inspecting the PKGBUILDs for the zfs packages, it looks like they're designed to be built for the current [core] version of the stock kernel only, which would explain why -ck can't see the zfs module.
    Is there an elegant, semi-automatic way to go about maintaining zfs/spl so that I can keep it up-to-date for the current linux-ck version?
    Thanks!
    Last edited by SirWuffleton (2014-05-09 11:22:06)

    graysky wrote:You will need to change the PKGBUILDs for zfs-git/spl-git to require linux-ck and linux-ck-headers rather than linux and linux-headers.
    Okay, I was thinking it would be something along those lines. Thanks for the clarification.
    After I modify the PKGBUILDs to use linux-ck, I should be able to build the modules against -ck even if I'm currently running -arch, correct? Just want to make sure, since I'd like to avoid breaking ZFS on both kernels at the same time, so I don't have to deal with some awkward situation where I need to build the modules against -ck from the emergency shell or something.
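    Analogous to the linux-lts example quoted earlier in this document, the change is just the depends/makedepends lines (the version pins shown are illustrative):
    Before:
    depends=('linux=3.14' "spl=${pkgver}" "zfs-utils=${pkgver}")
    makedepends=('linux-headers=3.14')
    After:
    depends=('linux-ck' "spl=${pkgver}" "zfs-utils=${pkgver}")
    makedepends=('linux-ck-headers')
    In principle makepkg builds against whatever kernel headers the PKGBUILD points at, so building the -ck modules while still booted into -arch should work, but how the PKGBUILD locates the kernel source varies, so treat that as an assumption to verify.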

  • RBAC and zlogin, zpool, zfs commands - doesn't work

    Question about RBAC and the zlogin, zpool, and zfs commands. If you go into SMC and look at the rights being assigned to a user, on the left side you have a long list of commands that are denied to the user. The zpool and zfs commands are not listed under /usr/bin or /usr/sbin. I can assign a user a very limited set of commands, and ones that remain in the left column (such as lustatus or format, for example) remain forbidden and cannot be used. However, commands like zfs create will still work, even though explicitly not granted through RBAC. There is a link in /usr/sbin for zfs to /sbin, and I added the /sbin directory to the list of commands denied, but with the link in place, the command still works for the user. When I log out of the session and then go back into SMC after logging back in, the /sbin directory I added is gone again and the commands still work. I tried creating a new right but the same thing happens. Similar things happen with commands located in /usr/ucb, which are all allowable since they cannot be explicitly denied. How do I deal with this situation?
    thanks
    mc

    If zfs believes there is an active pool using the disk, then even the -f flag will not work. What is the output of 'zpool status'? If it shows disk11 as a hot spare, and you don't want it as a hot spare, then use 'zpool remove'. Note that 'remove' will only work if it is in AVAIL state and not INUSE state.
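    On the zfs/zpool side specifically, note that ZFS also ships its own delegation mechanism, which is often easier to reason about than path-based RBAC profiles - a minimal, illustrative sketch (the username and dataset are made up):
    # grant a user only the dataset operations you want, on one subtree
    zfs allow mc create,mount,snapshot tank/home
    # review or revoke the delegation later
    zfs allow tank/home
    zfs unallow mc tank/home
    This doesn't by itself block a user who already holds the underlying privileges, but it is the supported way to scope what the zfs subcommands can do per dataset.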

  • How to access ZFS share from Windows 7?

    I am new to UNIX and am having a hard time getting a ZFS share to be accessible from windows 7 on my home network.
    I was able to access both the WHS 2011 and QNAP 459 shares from SE 11 by using the file manager (Server - windows) and then just using the IP address, username, and password. That was easy, or at least similar to what I was used to with windows 7.
    However, I have yet to be able to access a ZFS pool containing a share that I can access from another windows 7 machine at home.
    Apparently, I can mount the share from windows but the login name/password do not get accepted when I add a network connection in windows. Windows does seem to find the path \\solaris\tank_share1 and even mounts it, but the login for SE 11 does not work for some reason.
    I changes the workgroup name to WORKGROUP in windows but that did not change anything. I tried to edit the pam.conf file by changing the ownership from root to myself so I could use gedit since it has been 15 years since I last used vi. However, that corrupted the setup as I got "system error" message on reboot that never got out of that infinite loop.
    I am basically using the instruction through the following link:
    http://blogs.oracle.com/observatory/entry/accessing_opensolaris_shares_from_windows
    Any help to get this problem resolved is much appreciated
    Thanks,
    Kurt

    The documented procedure of editing the pam.conf file and then resetting one's password does seem to work after all. I believe that taking ownership away from root and giving it to my "admin user" is what screwed things up. I had to relearn how to use vi, but that didn't take very long.
    Got about 50 MB/s copying from a Windows SSD to an SE 11 SSD via a very small (5 GB) RAIDZ array in VMware (running on top of Win 7 64-bit). I have to try native SE 11 on the SSD next, as the VMware setup is just for practice.
    Q: Is there a way to launch gedit from the terminal window in root mode so I wouldn't have to use vi?
    Kurt
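    For reference, the procedure from the linked blog post amounts to roughly the following (the pam module line is the one commonly documented for Solaris SMB, so double-check it against your release; the user name and workgroup are just examples):

        # append to /etc/pam.conf so password changes also generate an SMB hash
        other   password required       pam_smb_passwd.so.1    nowarn

        # then reset the password so the SMB hash actually gets created
        passwd kurt

        # optionally make sure the server is in the same workgroup as the Windows box
        smbadm join -w WORKGROUP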

  • Extremely Disappointed and Frustrated with ZFS

    I feel like crying. In the last 2 years of my life ZFS has caused me more professional pain and grief than anything else. At the moment I'm anxiously waiting for a resilver to complete, I expect that my pool will probably fault and I'll have to rebuild it.
    I work at a small business, and I have two Solaris 11 servers which function as SAN systems, primarily serving virtual machine disk images to a Proxmox VE cluster. Each is fitted with an Areca 12-port SATA controller put in JBOD mode and populated with WD Caviar Black 2TB drives (probably my first and biggest mistake was not using enterprise-class drives). One system is configured as a ZFS triple mirror and the other a double mirror; both have 3 hot spares.
    About a year ago I got CKSUM errors on one of the arrays I promptly ran zpool scrub on the array, and stupidly I decided to scrub the other array at the same time. The scrub quickly turned into resilvers on both arrays as more CKSUM errors were uncovered, and as the resilver continued the CKSUM count rose until both arrays faulted and were irrecoverable. Irrecoverable metadata corruption was the error I got from ZFS. After 20+ hours of attempted recovery, trying to play back the ZIL I had to destroy them both and rebuild everything. I never knew for certain what the cause was, but I suspected disk write caching on the controller, and/or the use of a non-enterprise class flash drive for ZIL.
    In the aftermath I did extremely thorough checking of all devices. I checked each backplane port, checked each drive for bad sectors, tested the SATA controller's onboard RAM and main memory, ran extended burn-in testing, etc. I then rebuilt the arrays without controller write caching and with no separate ZIL device. I also scheduled weekly scrubbing and scripted ZFS alerting.
    Yesterday I got an alert from the controller on my array with the triple mirror about read errors on one of the ports. I ran a scrub, which completed, and then I proceeded to replace the drive I was getting read errors on. I offlined the old drive, inserted a brand new drive, and ran zpool replace. The resilver started fast, and then the rate quickly dropped down to 1.0MB/s while my controller began spitting out a multitude of SATA command timeout errors on the port of the newly inserted drive. Since the whole array had essentially frozen up, I popped the drive out and everything went back to full speed, resilvering against one of the hot spares. But the resilver soon started uncovering CKSUM errors similar to the disaster I had last year. Error counts rose, and now my system has dropped another drive off in the same mirror set and is resilvering 2 drives in that set, with the third drive in the set showing 6 CKSUM errors. I'm afraid I'm going to lose the whole array again, as the only drive left in the set is showing errors as well. WTF?!?!?!?!?!?!
    So I suspect I have a bad batch of disks. However, why the heck did zpool scrub complete and show no errors? What is the point of a scrub if it doesn't accurately uncover errors? I'm so frustrated that these types of errors seem to show up only during resilvers. I'm beginning to think ZFS isn't as robust as advertised....

    FMA does notify admins automatically through the smtp-notify service, via a mail notification to root. You
    can customize this service to send notifications to your own email account on any system.
    The poster said he has scripted ZFS alerting, but I don't know if that means reviewing zpool status output or
    FMA data.
    With ongoing hardware problems, you need to review FMA data as well. See the example below.
    Rob Johnston has a good explanation of this smtp-notify service, here:
    https://blogs.oracle.com/robj/entry/fma_and_email_notifications
    For the system below, I had to enable sendmail to see the failure notice in root's mail,
    but that was it.
    Thanks, Cindy
    I failed a disk in a pool:
    # zpool status -v tank
    pool: tank
    state: DEGRADED
    status: One or more devices are unavailable in response to persistent errors.
    Sufficient replicas exist for the pool to continue functioning in a
    degraded state.
    action: Determine if the device needs to be replaced, and clear the errors
    using 'zpool clear' or 'fmadm repaired', or replace the device
    with 'zpool replace'.
    scan: resilvered 944M in 0h0m with 0 errors on Mon Dec 17 10:30:05 2012
    config:
    NAME          STATE     READ WRITE CKSUM
    tank          DEGRADED     0     0     0
      mirror-0    DEGRADED     0     0     0
        c3t1d0    ONLINE       0     0     0
        c3t2d0    UNAVAIL      0     0     0
    device details:
    c3t2d0 UNAVAIL cannot open
    status: ZFS detected errors on this device.
    The device was missing.
    see: http://support.oracle.com/msg/ZFS-8000-LR for recovery
    errors: No known data errors
    Check root's email:
    # mail
    From [email protected] Mon Dec 17 10:48:54 2012
    Date: Mon, 17 Dec 2012 10:48:54 -0700 (MST)
    From: No Access User <[email protected]>
    Message-Id: <[email protected]>
    Subject: Fault Management Event: tardis:ZFS-8000-LR
    To: [email protected]
    Content-Length: 751
    SUNW-MSG-ID: ZFS-8000-LR, TYPE: Fault, VER: 1, SEVERITY: Major
    EVENT-TIME: Mon Dec 17 10:48:53 MST 2012
    PLATFORM: SUNW,Sun-Fire-T200, CSN: 11223344, HOSTNAME: tardis
    SOURCE: zfs-diagnosis, REV: 1.0
    EVENT-ID: c2cfa39b-71f4-638e-fb44-9b223d9e0803
    DESC: ZFS device 'id1,sd@n500000e0117173e0/a' in pool 'tank' failed to open.
    AUTO-RESPONSE: An attempt will be made to activate a hot spare if available.
    IMPACT: Fault tolerance of the pool may be compromised.
    REC-ACTION: Use 'fmadm faulty' to provide a more detailed view of this event. Run 'zpool status -lx' for more information. Please refer to the associated reference document at http://support.oracle.com/msg/ZFS-8000-LR for the latest service procedures and policies regarding this diagnosis.
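
    As a rough sketch of the setup described above (the service FMRIs and the setnotify event class are taken from the Solaris 11 docs as I understand them, and the email address is a placeholder, so verify all of it on your own system):

        # make sure the notification and mail services are running
        svcadm enable svc:/system/fm/smtp-notify:default
        svcadm enable svc:/network/smtp:sendmail

        # send diagnosed-problem events somewhere more useful than root's local mail
        svccfg setnotify problem-diagnosed mailto:[email protected]

        # when something does fire, get the detailed view
        fmadm faulty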
