If /etc is kept in a separate ZFS filesystem, Arch fails to boot

In my most recent installation I wanted to keep /etc in a separate ZFS filesystem so I could give it a higher compression setting.
But Arch fails to boot: it seems dbus needs the /etc directory before the ZFS daemon actually mounts it.
Has anyone had this problem? Is it possible to mount these filesystems earlier in the boot process?
Thanks
Last edited by ezzetabi (2013-02-17 15:29:23)

It happened to me ...
Resetting the BIOS alone won't help; you have to take out the battery for about ten minutes. In my case I tried half an hour and it worked.
Try that and let us know.

Similar Messages

  • Mounting ZFS filesystems: (1/10) cannot mount: directory is not empty

    Hi
    in zone:
    bash-3.00# reboot
    [NOTICE: Zone rebooting]
    SunOS Release 5.10 Version Generic_144488-17 64-bit
    Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.
    Hostname: dbspfox1
    Reading ZFS config: done.
    Mounting ZFS filesystems: (1/10)cannot mount '/zonedev/dbspfox1/biblio/P622/dev': directory is not empt(10/10 )
    svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: exit status 1
    Nov 4 10:07:33 svc.startd[12427]: svc:/system/filesystem/local:default: Method "/lib/svc/method/fs-local" failed with exit status 95.
    Nov 4 10:07:33 svc.startd[12427]: system/filesystem/local:default failed fatally: transitioned to maintenance (see 'svcs -xv' for details)
    The directory is indeed not empty, but then neither are the others.
    bash-3.00# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    zonedev 236G 57.6G 23K /zonedev
    zonedev/dbspfox1 236G 57.6G 1.06G /zonedev/dbspfox1
    zonedev/dbspfox1/biblio 235G 57.6G 23K /zonedev/dbspfox1/biblio
    zonedev/dbspfox1/biblio/P622 235G 57.6G 10.4G /zonedev/dbspfox1/biblio/P622
    zonedev/dbspfox1/biblio/P622/31mars 81.3G 57.6G 47.3G /zonedev/dbspfox1/biblio/P622/31mars
    zonedev/dbspfox1/biblio/P622/31mars/data 34.0G 57.6G 34.0G /zonedev/dbspfox1/biblio/P622/31mars/data
    zonedev/dbspfox1/biblio/P622/dev 89.7G 57.6G 50.1G /zonedev/dbspfox1/biblio/P622/dev
    zonedev/dbspfox1/biblio/P622/dev/data 39.6G 57.6G 39.6G /zonedev/dbspfox1/biblio/P622/dev/data
    zonedev/dbspfox1/biblio/P622/preprod 53.3G 57.6G 12.9G /zonedev/dbspfox1/biblio/P622/preprod
    zonedev/dbspfox1/biblio/P622/preprod/data 40.4G 57.6G 40.4G /zonedev/dbspfox1/biblio/P622/preprod/data
    bash-3.00# svcs -xv
    svc:/system/filesystem/local:default (local file system mounts)
    State: maintenance since Fri Nov 04 10:07:33 2011
    Reason: Start method exited with $SMF_EXIT_ERR_FATAL.
    See: http://sun.com/msg/SMF-8000-KS
    See: /var/svc/log/system-filesystem-local:default.log
    Impact: 33 dependent services are not running:
    svc:/system/webconsole:console
    svc:/system/filesystem/autofs:default
    svc:/system/system-log:default
    svc:/milestone/multi-user:default
    svc:/milestone/multi-user-server:default
    svc:/application/autoreg:default
    svc:/application/stosreg:default
    svc:/application/graphical-login/cde-login:default
    svc:/application/cde-printinfo:default
    svc:/network/smtp:sendmail
    svc:/application/management/seaport:default
    svc:/application/management/snmpdx:default
    svc:/application/management/dmi:default
    svc:/application/management/sma:default
    svc:/network/sendmail-client:default
    svc:/network/ssh:default
    svc:/system/sysidtool:net
    svc:/network/rpc/bind:default
    svc:/network/nfs/nlockmgr:default
    svc:/network/nfs/client:default
    svc:/network/nfs/status:default
    svc:/network/nfs/cbd:default
    svc:/network/nfs/mapid:default
    svc:/network/inetd:default
    svc:/system/sysidtool:system
    svc:/system/postrun:default
    svc:/system/filesystem/volfs:default
    svc:/system/cron:default
    svc:/application/font/fc-cache:default
    svc:/system/boot-archive-update:default
    svc:/network/shares/group:default
    svc:/network/shares/group:zfs
    svc:/system/sac:default
    svc:/network/rpc/gss:default (Generic Security Service)
    State: uninitialized since Fri Nov 04 10:07:31 2011
    Reason: Restarter svc:/network/inetd:default is not running.
    See: http://sun.com/msg/SMF-8000-5H
    See: man -M /usr/share/man -s 1M gssd
    Impact: 17 dependent services are not running:
    svc:/network/nfs/client:default
    svc:/system/filesystem/autofs:default
    svc:/system/webconsole:console
    svc:/system/system-log:default
    svc:/milestone/multi-user:default
    svc:/milestone/multi-user-server:default
    svc:/application/autoreg:default
    svc:/application/stosreg:default
    svc:/application/graphical-login/cde-login:default
    svc:/application/cde-printinfo:default
    svc:/network/smtp:sendmail
    svc:/application/management/seaport:default
    svc:/application/management/snmpdx:default
    svc:/application/management/dmi:default
    svc:/application/management/sma:default
    svc:/network/sendmail-client:default
    svc:/network/ssh:default
    svc:/application/print/server:default (LP print server)
    State: disabled since Fri Nov 04 10:07:31 2011
    Reason: Disabled by an administrator.
    See: http://sun.com/msg/SMF-8000-05
    See: man -M /usr/share/man -s 1M lpsched
    Impact: 1 dependent service is not running:
    svc:/application/print/ipp-listener:default
    svc:/network/rpc/smserver:default (removable media management)
    State: uninitialized since Fri Nov 04 10:07:32 2011
    Reason: Restarter svc:/network/inetd:default is not running.
    See: http://sun.com/msg/SMF-8000-5H
    See: man -M /usr/share/man -s 1M rpc.smserverd
    Impact: 1 dependent service is not running:
    svc:/system/filesystem/volfs:default
    svc:/network/rpc/rstat:default (kernel statistics server)
    State: uninitialized since Fri Nov 04 10:07:31 2011
    Reason: Restarter svc:/network/inetd:default is not running.
    See: http://sun.com/msg/SMF-8000-5H
    See: man -M /usr/share/man -s 1M rpc.rstatd
    See: man -M /usr/share/man -s 1M rstatd
    Impact: 1 dependent service is not running:
    svc:/application/management/sma:default
    bash-3.00# df -h
    Filesystem size used avail capacity Mounted on
    / 59G 1.1G 58G 2% /
    /dev 59G 1.1G 58G 2% /dev
    /lib 261G 7.5G 253G 3% /lib
    /platform 261G 7.5G 253G 3% /platform
    /sbin 261G 7.5G 253G 3% /sbin
    /usr 261G 7.5G 253G 3% /usr
    proc 0K 0K 0K 0% /proc
    ctfs 0K 0K 0K 0% /system/contract
    mnttab 0K 0K 0K 0% /etc/mnttab
    objfs 0K 0K 0K 0% /system/object
    swap 2.1G 248K 2.1G 1% /etc/svc/volatile
    fd 0K 0K 0K 0% /dev/fd
    swap 2.1G 0K 2.1G 0% /tmp
    swap 2.1G 16K 2.1G 1% /var/run
    zonedev/dbspfox1/biblio
    293G 23K 58G 1% /zonedev/dbspfox1/biblio
    zonedev/dbspfox1/biblio/P622
    293G 10G 58G 16% /zonedev/dbspfox1/biblio/P622
    zonedev/dbspfox1/biblio/P622/31mars
    293G 47G 58G 46% /zonedev/dbspfox1/biblio/P622/31mars
    zonedev/dbspfox1/biblio/P622/31mars/data
    293G 34G 58G 38% /zonedev/dbspfox1/biblio/P622/31mars/data
    zonedev/dbspfox1/biblio/P622/dev/data
    293G 40G 58G 41% /zonedev/dbspfox1/biblio/P622/dev/data
    zonedev/dbspfox1/biblio/P622/preprod
    293G 13G 58G 19% /zonedev/dbspfox1/biblio/P622/preprod
    zonedev/dbspfox1/biblio/P622/preprod/data
    293G 40G 58G 42% /zonedev/dbspfox1/biblio/P622/preprod/data
    What did I miss? What happened to the zfs dev directory?
    thank you
    Walter

    Hi
    I finally found the problem.
    ZFS naming restrictions:
    names must begin with a letter
    Walter
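    For completeness, a hedged way to see what is actually blocking such a mount (the path is taken from the error above):
    zfs unmount zonedev/dbspfox1/biblio/P622/dev 2>/dev/null   # make sure the dataset itself is not mounted
    ls -A /zonedev/dbspfox1/biblio/P622/dev                    # anything listed here lives in the underlying directory on the parent filesystem
    # move or remove those stray entries, then retry: zfs mount -a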

  • DskPercent not returned for ZFS filesystems?

    Hello.
    I'm trying to monitor the space usage of some ZFS filesystems on a Solaris 10 10/08 (137137-09) Sparc system with SNMP. I'm using the Systems Management Agent (SMA) agent.
    To monitor the stuff, I added the following to /etc/sma/snmp/snmpd.conf and restarted svc:/application/management/sma:default:
    # Bug in SMA?
    # Reporting - NET-SNMP, Solaris 10 - has a bug parsing config file for disk space.
    # -> http://forums.sun.com/thread.jspa?threadID=5366466
    disk /proc 42%  # Dummy value; it will (incorrectly) be ignored...
    disk / 5%
    disk /tmp 10%
    disk /apps 4%
    disk /data 3%
    Now I'm checking what I get via SNMP:
    --($ ~)-- snmpwalk -v2c -c public 10.0.1.26 dsk
    UCD-SNMP-MIB::dskIndex.1 = INTEGER: 1
    UCD-SNMP-MIB::dskIndex.2 = INTEGER: 2
    UCD-SNMP-MIB::dskIndex.3 = INTEGER: 3
    UCD-SNMP-MIB::dskIndex.4 = INTEGER: 4
    UCD-SNMP-MIB::dskPath.1 = STRING: /
    UCD-SNMP-MIB::dskPath.2 = STRING: /tmp
    UCD-SNMP-MIB::dskPath.3 = STRING: /apps
    UCD-SNMP-MIB::dskPath.4 = STRING: /data
    UCD-SNMP-MIB::dskDevice.1 = STRING: /dev/md/dsk/d200
    UCD-SNMP-MIB::dskDevice.2 = STRING: swap
    UCD-SNMP-MIB::dskDevice.3 = STRING: apps
    UCD-SNMP-MIB::dskDevice.4 = STRING: data
    UCD-SNMP-MIB::dskMinimum.1 = INTEGER: -1
    UCD-SNMP-MIB::dskMinimum.2 = INTEGER: -1
    UCD-SNMP-MIB::dskMinimum.3 = INTEGER: -1
    UCD-SNMP-MIB::dskMinimum.4 = INTEGER: -1
    UCD-SNMP-MIB::dskMinPercent.1 = INTEGER: 5
    UCD-SNMP-MIB::dskMinPercent.2 = INTEGER: 10
    UCD-SNMP-MIB::dskMinPercent.3 = INTEGER: 4
    UCD-SNMP-MIB::dskMinPercent.4 = INTEGER: 3
    UCD-SNMP-MIB::dskTotal.1 = INTEGER: 25821143
    UCD-SNMP-MIB::dskTotal.2 = INTEGER: 7150560
    UCD-SNMP-MIB::dskTotal.3 = INTEGER: 0
    UCD-SNMP-MIB::dskTotal.4 = INTEGER: 0
    UCD-SNMP-MIB::dskAvail.1 = INTEGER: 13584648
    UCD-SNMP-MIB::dskAvail.2 = INTEGER: 6471520
    UCD-SNMP-MIB::dskAvail.3 = INTEGER: 0
    UCD-SNMP-MIB::dskAvail.4 = INTEGER: 0
    UCD-SNMP-MIB::dskUsed.1 = INTEGER: 11978284
    UCD-SNMP-MIB::dskUsed.2 = INTEGER: 679040
    UCD-SNMP-MIB::dskUsed.3 = INTEGER: 0
    UCD-SNMP-MIB::dskUsed.4 = INTEGER: 0
    UCD-SNMP-MIB::dskPercent.1 = INTEGER: 47
    UCD-SNMP-MIB::dskPercent.2 = INTEGER: 9
    UCD-SNMP-MIB::dskPercent.3 = INTEGER: 0
    UCD-SNMP-MIB::dskPercent.4 = INTEGER: 0
    UCD-SNMP-MIB::dskPercentNode.1 = INTEGER: 9
    UCD-SNMP-MIB::dskPercentNode.2 = INTEGER: 0
    UCD-SNMP-MIB::dskPercentNode.3 = INTEGER: 0
    UCD-SNMP-MIB::dskPercentNode.4 = INTEGER: 0
    UCD-SNMP-MIB::dskErrorFlag.1 = INTEGER: noError(0)
    UCD-SNMP-MIB::dskErrorFlag.2 = INTEGER: noError(0)
    UCD-SNMP-MIB::dskErrorFlag.3 = INTEGER: noError(0)
    UCD-SNMP-MIB::dskErrorFlag.4 = INTEGER: noError(0)
    UCD-SNMP-MIB::dskErrorMsg.1 = STRING:
    UCD-SNMP-MIB::dskErrorMsg.2 = STRING:
    UCD-SNMP-MIB::dskErrorMsg.3 = STRING:
    UCD-SNMP-MIB::dskErrorMsg.4 = STRING:
    As expected, dskPercent.1 and dskPercent.2 (i.e. / and /tmp) returned good values. But why did Solaris/SNMP return 0 for dskPercent.3 (/apps) and dskPercent.4 (/data)? Those directories are on two ZFS filesystems which are on separate zpools:
    --($ ~)-- zpool list
    NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
    apps  39.2G  20.4G  18.9G    51%  ONLINE  -
    data   136G   121G  15.2G    88%  ONLINE  -
    --($ ~)-- zfs list apps data
    NAME   USED  AVAIL  REFER  MOUNTPOINT
    apps  20.4G  18.3G    20K  /apps
    data   121G  13.1G   101K  /data
    Or is it supposed to be that way? I'm pretty confused, because I found a blog posting by asyd at http://sysadmin.asyd.net/home/en/blog/asyd/zfs+snmp. Copying from there:
    snmpwalk -v2c -c xxxx katsuragi.global.asyd.net UCD-SNMP-MIB::dskTable
    UCD-SNMP-MIB::dskPath.1 = STRING: /
    UCD-SNMP-MIB::dskPath.2 = STRING: /home
    UCD-SNMP-MIB::dskPath.3 = STRING: /data/pkgsrc
    UCD-SNMP-MIB::dskDevice.1 = STRING: /dev/dsk/c1d0s0
    UCD-SNMP-MIB::dskDevice.2 = STRING: data/home
    UCD-SNMP-MIB::dskDevice.3 = STRING: data/pkgsrc
    UCD-SNMP-MIB::dskTotal.1 = INTEGER: 1017935
    UCD-SNMP-MIB::dskTotal.2 = INTEGER: 0
    UCD-SNMP-MIB::dskTotal.3 = INTEGER: 0
    UCD-SNMP-MIB::dskAvail.1 = INTEGER: 755538
    UCD-SNMP-MIB::dskAvail.2 = INTEGER: 0
    UCD-SNMP-MIB::dskAvail.3 = INTEGER: 0
    UCD-SNMP-MIB::dskPercent.1 = INTEGER: 21
    UCD-SNMP-MIB::dskPercent.2 = INTEGER: 18
    UCD-SNMP-MIB::dskPercent.3 = INTEGER: 5
    What I find confusing are his dskPercent.2 and dskPercent.3 outputs - he got back dskPercent values for what seem to be ZFS filesystems.
    Because of that I'm wondering how it is supposed to work - should Solaris return dskPercent values for ZFS?
    Thanks a lot,
    Alexander

    I don't have the ability to test out my theory, but I suspect that you are using the default mount created for the zpools you've created (apps & data), as opposed to specific ZFS file systems, which is what the asyd blog shows.
    Remember, the elements reported on in the asyd blog ARE zfs file systems; they are not just directories. They are indeed mountpoints, and it is reporting the utilization of those zfs file systems in the pool ("data") on which they are constructed. In the case of /home, the administrator has specifically set the mountpoint of the ZFS file system to be /home instead of the default /data/home.
    To test my theory, instead of using your zpool default mount point, try creating a zfs file system under each of your pools and using that as the entry point for your application to write data into the zpools. I suspect you will be rewarded with the desired result: reporting of "disk" (really, pool) percent usage.
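    A hedged sketch of that test (the dataset names are made up for illustration):
    zfs create apps/appdata
    zfs create data/appdata
    # point the applications at /apps/appdata and /data/appdata, change the 'disk'
    # directives in /etc/sma/snmp/snmpd.conf to the new mountpoints, restart
    # svc:/application/management/sma:default, then re-walk the table:
    snmpwalk -v2c -c public 10.0.1.26 UCD-SNMP-MIB::dskTable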

  • [SOLVED] Installing on ZFS root: "ZFS: cannot find bootfs" on boot.

    I have been experimenting with ZFS filesystems on external HDDs for some time now to get more comfortable with using ZFS in the hopes of one day reinstalling my system on a ZFS root.
    Today, I tried installing a system on a USB external HDD, as my first attempt to install on ZFS (I wanted to try in a safe, disposable environment before I try this on my main system).
    My partition configuration (from gdisk):
    Command (? for help): p
    Disk /dev/sdb: 3907024896 sectors, 1.8 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): 2FAE5B61-CCEF-4E1E-A81F-97C8406A07BB
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 3907024862
    Partitions will be aligned on 8-sector boundaries
    Total free space is 0 sectors (0 bytes)
    Number Start (sector) End (sector) Size Code Name
    1 34 2047 1007.0 KiB EF02 BIOS boot partition
    2 2048 264191 128.0 MiB 8300 Linux filesystem
    3 264192 3902828543 1.8 TiB BF00 Solaris root
    4 3902828544 3907024862 2.0 GiB 8300 Linux filesystem
    Partition #1 is for grub, obviously. Partition #2 is an ext2 partition that I mount on /boot in the new system. Partition #3 is where I make my ZFS pool.
    Partition #4 is an ext4 filesystem containing another minimal Arch system for recovery and setup purposes. GRUB is installed on the other system on partition #4, not in the new ZFS system.
    I let grub-mkconfig generate a config file from the system on partition #4 to boot that. Then, I manually edited the generated grub.cfg file to add this menu entry for my ZFS system:
    menuentry 'ZFS BOOT' --class arch --class gnu-linux --class gnu --class os {
    load_video
    set gfxpayload=keep
    insmod gzio
    insmod part_gpt
    insmod ext2
    set root='hd0,gpt2'
    echo 'Loading Linux core repo kernel ...'
    linux /vmlinuz-linux zfs=bootfs zfs_force=1 rw quiet
    echo 'Loading initial ramdisk ...'
    initrd /initramfs-linux.img
    }
    My ZFS configuration:
    # zpool list
    NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
    External2TB 1.81T 6.06G 1.81T 0% 1.00x ONLINE -
    # zpool status :(
    pool: External2TB
    state: ONLINE
    scan: none requested
    config:
    NAME STATE READ WRITE CKSUM
    External2TB ONLINE 0 0 0
    usb-WD_Elements_1048_575836314135334C32383131-0:0-part3 ONLINE 0 0 0
    errors: No known data errors
    # zpool get bootfs
    NAME PROPERTY VALUE SOURCE
    External2TB bootfs External2TB/ArchSystemMain local
    # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    External2TB 14.6G 1.77T 30K none
    External2TB/ArchSystemMain 293M 1.77T 293M /
    External2TB/PacmanCache 5.77G 1.77T 5.77G /var/cache/pacman/pkg
    External2TB/Swap 8.50G 1.78T 20K -
    The reason for the above configuration is that after I get this system to work, I want to install a second system in the same zpool on a different dataset, and have them share a pacman cache.
    GRUB "boots" successfully, in that it loads the kernel and the initramfs as expected from the 2nd GPT partition. The problem is that the kernel does not load the ZFS:
    ERROR: device '' not found. Skipping fsck.
    ZFS: Cannot find bootfs.
    ERROR: Failed to mount the real root device.
    Bailing out, you are on your own. Good luck.
    and I am left in busybox in the initramfs.
    What am I doing wrong?
    Also, here is my /etc/fstab in the new system:
    # External2TB/ArchSystemMain
    #External2TB/ArchSystemMain / zfs rw,relatime,xattr 0 0
    # External2TB/PacmanCache
    #External2TB/PacmanCache /var/cache/pacman/pkg zfs rw,relatime,xattr 0 0
    UUID=8b7639e2-c858-4ff6-b1d4-7db9a393578f /boot ext4 rw,relatime 0 2
    UUID=7a37363e-9adf-4b4c-adfc-621402456c55 none swap defaults 0 0
    I also tried to boot using "zfs=External2TB/ArchSystemMain" in the kernel options, since that was the more logical way to approach my intention of having multiple systems on different datasets. It would allow me to simply create separate grub menu entries for each, with different boot datasets in the kernel parameters. I also tried setting the mount points to "legacy" and uncommenting the zfs entries in my fstab above. That didn't work either and produced the same results, and that was why I decided to try to use "bootfs" (and maybe have a script for switching between the systems by changing the ZFS bootfs and mountpoints before reboot, reusing the same grub menuentry).
    Thanks in advance for any help.
    Last edited by tajjada (2013-12-30 20:03:09)

    Sounds like a zpool.cache issue. I'm guessing your zpool.cache inside your arch-chroot is not up to date. So on boot the ZFS hook cannot find the bootfs. At least, that's what I assume the issue is, because of this line:
    ERROR: device '' not found. Skipping fsck.
    If your zpool.cache was populated, it would spit out something other than an empty string.
    Some assumptions:
    - You're using the ZFS packages provided by demizer (repository or AUR).
    - You're using the Arch Live ISO or some version of it.
    At a cursory glance your configuration looks good, but verify anyway. Here are the steps you should follow to make sure your zpool.cache is correct and up to date (a condensed command sketch follows the steps):
    Outside arch-chroot:
    - Import pools (not using '-R') and verify the mountpoints.
    - Make a copy of the /etc/zfs/zpool.cache before you export any pools. Again, make a copy of the /etc/zfs/zpool.cache before you export any pools. The reason for this is once you export a pool the /etc/zfs/zpool.cache gets updated and removes any reference to the exported pool. This is likely the cause of your issue, as you would have an empty zpool.cache.
    - Import the pool containing your root filesystem using the '-R' flag, and mount /boot within.
    - Make sure to copy your updated zpool.cache to your arch-chroot environment.
    Inside arch-chroot:
    - Make sure your bootloader is configured properly (i.e. read 'mkinitcpio -H zfs').
    - Use the 'udev' hook and not the 'systemd' one in your mkinitcpio.conf. The zfs-utils package does not have a ported hook (as of 0.6.2_3.12.6-1).
    - Update your initramfs.
    Outside arch-chroot:
    - Unmount filesystems.
    - Export pools.
    - Reboot.
    Inside new system:
    - Make sure to update the hostid then rebuild your initramfs. Then you can drop the 'zfs_force=1'.
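    A condensed, hedged sketch of the sequence above, using the pool name from this thread (device and dataset names will differ on your setup):
    # outside the chroot (live ISO):
    zpool import External2TB                        # plain import; verify the mountpoints
    cp /etc/zfs/zpool.cache /root/zpool.cache.bak   # copy the cache BEFORE any export empties it
    zpool export External2TB
    zpool import -R /mnt External2TB                # re-import with an altroot for the installation work
    mount /dev/sdb2 /mnt/boot
    cp /root/zpool.cache.bak /mnt/etc/zfs/zpool.cache
    # inside the chroot: use the 'udev' hook (not 'systemd') together with 'zfs' in mkinitcpio.conf, then:
    mkinitcpio -p linux
    # outside again: umount /mnt/boot, zpool export External2TB, reboot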
    Good luck. I enjoy root on ZFS myself. However, I wouldn't recommend swap on ZFS. Despite what the ZoL tracker says, I still ran into deadlocks on occasion (as of a month ago). I cannot say definitively what caused the issue, but it went away when I moved swap off ZFS to a dedicated partition.
    Last edited by NVS (2013-12-29 14:56:44)

  • Confused about ZFS filesystems created with Solaris 11 Zone

    Hello.
    Installing a blank Zone in Solaris 10 with "zonepath=/export/zones/TESTvm01" just creates one ZFS filesystem:
    zfs list
    ...
    rzpool/export/zones/TESTvm01 4.62G 31.3G 4.62G /export/zones/TESTvm01
    Doing the same steps with Solaris 11 creates several more filesystems:
    zfs list
    ...
    rpool/export/zones/TESTvm05 335M 156G 32K /export/zones/TESTvm05
    rpool/export/zones/TESTvm05/rpool 335M 156G 31K /rpool
    rpool/export/zones/TESTvm05/rpool/ROOT 335M 156G 31K legacy
    rpool/export/zones/TESTvm05/rpool/ROOT/solaris 335M 156G 310M /export/zones/TESTvm05/root
    rpool/export/zones/TESTvm05/rpool/ROOT/solaris/var 24.4M 156G 23.5M /export/zones/TESTvm05/root/var
    rpool/export/zones/TESTvm05/rpool/export 62K 156G 31K /export
    rpool/export/zones/TESTvm05/rpool/export/home 31K 156G 31K /export/home
    I don't understand why Solaris 11 does this. Just one filesystem (as in Solaris 10) would be better for my setup; I want to configure all created volumes myself.
    Is it possible to deactivate this automatic "feature"?

    There are several reasons that it works like this, all guided by the simple idea "everything in a zone should work exactly like it does in the global zone, unless that is impractical." By having this layout we get:
    * The same zfs administrative practices within a zone that are found in the global zone. This allows, for example, compression, encryption, etc. of parts of the zone.
    * beadm(1M) and pkg(1) are able to create boot environments within the zone, thus making it easy to keep the global zone software in sync with non-global zone software as the system is updated (equivalent of patching in Solaris 10). Note that when Solaris 11 updates the kernel, core libraries, and perhaps other things, a new boot environment is automatically created (for the global zone and each zone) and the updates are done to the new boot environment(s). Thus, you get the benefits that Live Upgrade offered without the severe headaches that sometimes come with Live Upgrade.
    * The ability to have a separate /var file system. This is required by policies at some large customers, such as the US Department of Defense via the DISA STIG.
    * The ability to perform a p2v of a global zone into a zone (see solaris(5) for examples) without losing the dataset hierarchy or properties (e.g. compression, etc.) set on datasets in that hierarchy.
    When this dataset hierarchy is combined with the fact that the ZFS namespace is virtualized in a zone (a feature called "dataset aliasing"), you see the same hierarchy in the zone that you would see in the global zone. Thus, you don't have confusing output from df saying that / is mounted on / and such.
    Because there is integration between pkg, beadm, zones, and zfs, there is no way to disable this behavior. You can remove and optionally replace /export with something else if you wish.
    If your goal is to prevent zone administrators from altering the dataset hierarchy, you may be able to accomplish this with immutable zones (see zones admin guide or file-mac-profile in zonecfg(1M)). This will have other effects as well, such as making all or most of the zone unwritable. If needed, you can add fs or dataset resources which will not be subject to file-mac-profile and as such will be writable.
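    A hedged zonecfg sketch of the dataset and fs resources mentioned above (zone, dataset, and mountpoint names are only examples; for the fs resource the dataset should have its mountpoint set to legacy):
    zonecfg -z TESTvm05
    zonecfg:TESTvm05> add dataset
    zonecfg:TESTvm05:dataset> set name=rpool/delegated
    zonecfg:TESTvm05:dataset> end
    zonecfg:TESTvm05> add fs
    zonecfg:TESTvm05:fs> set dir=/appdata
    zonecfg:TESTvm05:fs> set special=rpool/appdata
    zonecfg:TESTvm05:fs> set type=zfs
    zonecfg:TESTvm05:fs> end
    zonecfg:TESTvm05> commit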

  • Mount options for ZFS filesystem on Solaris 10

    Do you know of any recommendations
    about mount options for SAP on Oracle
    with data on a ZFS filesystem?
    Which block size is recommended?
    We assume that the filesystem with the datafiles should use an 8 KB record size
    and the one for offline redo logs the default (128 KB).
    But what about online redo logs?
    Best regards
    Andy

    SUN Czech installed new production hardware for a Czech customer, with ZFS filesystems holding the data, redo and archive log files.
    Now we have a performance problem, and currently there is no SAP recommendation
    for the ZFS file system.
    The hardware, which by benchmark has about twice the power, shows worse response times than
    the old hardware.
    a) There is a bug in Solaris 10 - ZFS buffers, once allocated, are not released
        (generally we do not want to use buffering, to prevent double
         buffering).
    b) The ZFS buffers take about 20 GB (of 32 GB total) of memory on the DB server,
    and we are not able to define a large shared pool and DB cache. (It may be possible
    to set a special parameter in /etc/system to reduce the maximum size of the ZFS buffers to e.g. 4 GB; see the sketch below.)
    c) We are looking for a proven mount option for ZFS to enable asynchronous/concurrent I/O for the database filesystems.
    d) There is no proven, clear answer on the support of ZFS/Solaris/Oracle/SAP.
    SAP says it is an Oracle problem, Oracle has not certified filesystems since Jan 2007
    and says ask your OS provider, and SUN looks happy, but performance
    goes down, and that is not so funny for a system with a 1 TB database growing by over 30 GB
    per month.
    Andy
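    A hedged sketch of the /etc/system tuning mentioned in (b); zfs_arc_max is the standard Solaris 10 cap on the ZFS ARC, and the 4 GB value is only an example:
    set zfs:zfs_arc_max = 4294967296
    A reboot is required for /etc/system changes to take effect.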

  • Does SAP support Solaris 10 ZFS filesystem when using DB2 V9.5 FP4?

    Hi,
    I'm installing NW7 (BI usage). SAPINST has failed in the "ABAP LOAD" step due to the DB2 error message
    "Unsupported file system type zfs for Direct I/O". It appears my Unix admin decided to set up these filesystems as ZFS on this new server.
    I have several questions requiring your expertise.
    1) Does SAP support ZFS filesystems on Solaris 10 (SPARC hardware)? I cannot find any reference in SDN or the Service Marketplace. Any reference will be much appreciated.
    2) How can I confirm my sapdata filesystems are ZFS?
    3) What actions do you recommend to resolve the SAPINST errors? Should I follow "Note 995050 - DB6: NO FILE SYSTEM CACHING for Tablespaces" to disable Direct I/O for all DB2 tablespaces? I have seen Markus Doehr's forum thread "Re: DB2 on Solaris x64 - ZFS as filesystem possible?", but it does not state exactly how he overcame the error.
    regards
    Benny

    Hi Frank,
    Thanks for your input.
    I have also found the command "zfs list", which displays any ZFS filesystems.
    We have also gone back to UFS, as the ZFS deployment schedule does not fit this particular SAP BW implementation timeline.
    Has anyone come across an SAP statement that NW7 can be deployed with ZFS for a DB2 database on the Solaris SPARC platform? If not, I'll open an OSS message.
    regards
    Benny
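    To expand on the check mentioned above, two hedged ways to confirm whether a given mount point is ZFS (the path is only illustrative):
    df -n /db2/SID/sapdata1    # on Solaris 10 this prints the mount point together with its filesystem type
    zfs list                   # lists every ZFS dataset and its mountpoint; look for the sapdata paths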

  • How to count number of files on zfs filesystem

    Hi all,
    Is there a way to count the number of files on a ZFS filesystem, similar to how "df -o i /ufs_filesystem" works? I am looking for a way to do this without using find, as I suspect there are millions of files on a particular ZFS filesystem and that this is what sometimes causes its slow performance.
    Thanks.

    So I have finished 90% of my testing and I have accepted df -t /filesystem | awk ' { if ( NR==1) F=$(NF-1) ; if ( NR==2) print $(NF-1) - F }' as acceptable in the absence of a known built-in ZFS method. My main concern was the reduction in available files reported by df -t as more files were added. I used a one-liner for loop to create empty files, to conserve the space used up, so I would have a better chance of seeing what happens if the available file count reaches 0.
    root@fj-sol11:/zfstest/dir4# df -t /zfstest | awk ' { if ( NR==1) F=$(NF-1) ; if ( NR==2) print $(NF-1) - F }'
    5133680
    root@fj-sol11:/zfstest/dir4# df -t /zfstest
    /zfstest (pool1 ): 7237508 blocks 7237508 files
    total: 10257408 blocks 12372310 files
    root@fj-sol11:/zfstest/dir4#
    root@fj-sol11:/zfstest/dir7# df -t /zfstest | awk ' { if ( NR==1) F=$(NF-1) ; if ( NR==2) print $(NF-1) - F }'
    6742772
    root@fj-sol11:/zfstest/dir7# df -t /zfstest
    /zfstest (pool1 ): 6619533 blocks 6619533 files
    total: 10257408 blocks 13362305 files
    root@fj-sol11:/zfstest/dir7# df -t /zfstest | awk ' { if ( NR==1) F=$(NF-1) ; if ( NR==2) print $(NF-1) - F }'
    7271716
    root@fj-sol11:/zfstest/dir7# df -t /zfstest
    /zfstest (pool1 ): 6445809 blocks 6445809 files
    total: 10257408 blocks 13717010 files
    root@fj-sol11:/zfstest# df -t /zfstest | awk ' { if ( NR==1) F=$(NF-1) ; if ( NR==2) print $(NF-1) - F }'
    12359601
    root@fj-sol11:/zfstest# df -t /zfstest
    /zfstest (pool1 ): 4494264 blocks 4494264 files
    total: 10257408 blocks 16853865 files
    I noticed the total files kept increasing, and creating another 4 million files (4494264) after the above example would have taken more time than I had, after already creating 12 million plus (12359601), which took 2 days on a slow machine on and off (mostly on). If anyone has any idea of creating them quicker than "touch filename$loop" in a for loop, let me know :)
    In the end I decided to use a really small (100 MB) filesystem on a virtual machine to test what happens as the free file count approaches 0. It turns out it never does ... it somehow increases:
    bash-3.00# df -t /smalltest/
    /smalltest (smalltest ): 31451 blocks 31451 files
    total: 112640 blocks 278542 files
    bash-3.00# pwd
    /smalltest
    bash-3.00# mkdir dir4
    bash-3.00# cd dir4
    bash-3.00# for arg in {1..47084}; do touch file$arg; done <--- I created 47084 files here, more than the free count listed above (31451)
    bash-3.00# zfs list smalltest
    NAME USED AVAIL REFER MOUNTPOINT
    smalltest 47.3M 7.67M 46.9M /smalltest
    bash-3.00# df -t /smalltest/
    /smalltest (smalltest ): 15710 blocks 15710 files
    total: 112640 blocks 309887 files
    bash-3.00#
    The other 10% of my testing will be to see what happens when I try a find on 12 million plus files and pipe it to wc -l :)
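    For the record, that final check might look like this (a sketch; -xdev keeps find from descending into other filesystems mounted underneath):
    find /zfstest -xdev | wc -l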

  • What is the best way to backup ZFS filesystem on solaris 10?

    Normally, in a Linux environment, I'd use mondorescue to create an image (full & incremental) so it can easily be restored (in full, or individual files/folders) to a new, similar server environment in case of disaster.
    I'd like to know the best way to back up a ZFS filesystem to SAN storage and to restore it from there with minimal downtime, preferably with tools already available on Solaris 10.
    Thanks.

    The plan is to back up the whole OS and the configuration files.
    2 servers to be backed up
    server A zpool:
    - rootpool
    - usr
    - usrtmp
    server B zpool:
    - rootpool
    - usr
    - usrtmp
    If we want to cut hardware cost, is it possible to back up to a samba share?
    any suggestions?
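    One option that uses only tools already in Solaris 10 is a recursive snapshot streamed to a file on the SAN or samba mount (a hedged sketch; the pool name follows the list above, the target path is illustrative, and zfs send -R needs a reasonably recent Solaris 10 update):
    zfs snapshot -r rootpool@backup1
    zfs send -R rootpool@backup1 | gzip > /backupmount/serverA-rootpool-backup1.zfs.gz
    # restore (sketch): gunzip -c /backupmount/serverA-rootpool-backup1.zfs.gz | zfs receive -F newpool/restored
    Repeat per pool (usr, usrtmp) on each server.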

  • My CS6 Photoshop launches Adobe Application Manager which launches Photoshop which launches App Manager etc., etc. Photoshop eventually crashes. HELP

    My CS6 Photoshop launches Adobe Application Manager which launches Photoshop which launches App Manager etc, etc . Photoshop eventually crashes. HELP

    Which OS?

  • Slow down in zfs filesystem creation

    Solaris 10 10/09 running as a VM on a VMware ESX server with 7 GB RAM, 1 CPU, 64-bit.
    I wondered if anyone had seen the following issue, or indeed could see if they could replicate it -
    Try creating a script that creates thousands of ZFS filesystems in one pool.
    For example -
    #!/usr/bin/bash
    for i in {1..3000}
    do
    zfs create tank/users/test$i
    echo "$i created"
    done
    I have found that after about 1000 filesystems the creation time slows down massively, and it can take up to 4 seconds for each new filesystem to be created within the pool.
    If I do the same for ordinary directories (mkdir) then I have no delays at all.
    I was under the impression that ZFS filesystems were as easy to create as directories (folders), but this does not seem to be the case.
    This sounds like it could be a bug. I have been able to replicate it several times on my system, but need others to verify this.
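    A hedged variation of the script above that also reports how long each create takes (whole seconds from bash's SECONDS counter are enough to show a 4-second stall):
    #!/usr/bin/bash
    for i in {1..3000}
    do
    start=$SECONDS
    zfs create tank/users/test$i
    echo "$i created in $((SECONDS - start))s"
    done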

    Might be worth raising on the OpenSolaris forums, where there's at least a chance it will be read by a ZFS developer.

  • MAI for managed systems residing in separate network

    Hello!
    We are considering implementing the new Monitoring and Alerting Infrastructure (MAI) with Diagnostics Agents for our SAP systems.
    Most of our SAP managed systems reside in a separate network (different from the SolMan network).
    In order to retrieve CCMS data from these systems we must use RFC connections with an SAProuter string.
    Has anyone already successfully set up MAI for managed systems that reside in a separate network?
    Of particular interest is information about
    1) Integration of these systems into DBACOCKPIT of SOLMAN
    2) Installation of Diagnostic Agent on remote host
    Many thanks for your information.

    Hello SAP-SDN,
    as for your initial question:
    Has anyone already successfully set up MAI for managed systems that reside in a separate network?
    Of particular interest is information about
    1) Integration of these systems into DBACOCKPIT of SOLMAN
    2) Installation of Diagnostic Agent on remote host
    1. Currently in progress; so far we have found no problem doing this over a remote network through the SAProuter.
    2. Currently working and running; the only limitation is the connection between the Wily host agent and the Wily Enterprise Manager (EM).
    as for your next question:
    1a) database related data in your Alerting (e.g. tablespaces)
    1b) database related data in your IT Performance Reporting/Interactive Reporting (e.g. growth of database)
    1c) DBACOCKPIT connection for the remote SAP system
    1a) For EWA ABAP you can get that information from a managed system in a remote network without any problem.
    1b) For EWA ABAP you can get that information from a managed system in a remote network without any problem.
    1c) Still in progress, but I think it can be done; it is also possible to connect the remote SMD Diagnostics Agent through one or more SAProuters.

  • Zfs filesystem screwed up?

    Hi,
    I am running S10U3 (all patches applied).
    Today, by mistake, I extracted a big (4.5 GB) tar archive into my home directory (on ZFS), which ran out of space, and the tar command terminated with the error "Disk quota exceeded" (shouldn't it have been something like "No space left on device"?).
    I think the ZFS filesystem got screwed up. Now I am unable to delete any file with rm, as unlink(2) fails with error 49 (EDQUOT).
    I can't log in because there is no space left on /home.
    I even tried to delete files as root but I still get EDQUOT.
    Files can be read though.
    I tried zpool scrub (not sure what that does) and it shows no errors.
    zpool status shows no errors either.
    I am confident that my drive is not faulty.
    Restarting the system didn't help either.
    I had put all my important stuff on that ZFS filesystem thinking it would be safe; I never expected such a problem to occur.
    What should I do? Any suggestions?
    Is zfs completely reliable or are there any known problems?

    Robert,
    ZFS uses atomic operations to update filesystem metadata. This is implemented as follows: when a directory is updated, a shadow copy of it and all of its parents is created, all the way up to the root "superblock".
    Then the existing superblock is swapped for the shadow superblock as an atomic operation.
    A file deletion is a metadata operation like any other and requires making shadow copies.
    So what I think has happened is that the filesystem is so full that it can't find space to make the shadow copies needed to perform a delete.
    Thanks for the explanation; that's probably what happened, but I would consider it a very weak design if a user can cripple the filesystem just by filling it up.
    So one way out is to add an extra device, even a small one, to the pool. That will give you enough space to delete.
    Of course, since you can never remove a device from a pool, you'll be stuck with it.
    I would have certainly liked to do this, but this is just my desktop computer and I have only one hard disc with no extra space.
    You could try asking on the OpenSolaris ZFS forums. They might have a special technique for dealing with it.
    The guys at the OpenSolaris forums don't like to answer Solaris problems, but anyway I will give it a try.
    Thankfully, I lost no data because I had backups and because the damaged ZFS was readable, so the only damage done was a loss of confidence in ZFS.
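    One workaround sometimes suggested for a completely full ZFS filesystem is to truncate a large file in place before removing it, since the truncation may need less new metadata than the unlink (a hedged sketch; the path is illustrative and success is not guaranteed):
    cp /dev/null /export/home/user/big.tar   # truncate the file to zero length
    rm /export/home/user/big.tar             # the rm often succeeds once a little space has been freed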

  • I'm trying to update my Photoshop CS5, but continue to receive the same problem every time. The application manager update dialogue box simply states "some updates failed to install." There's no specific error code, but a link to contact support for furth

    I'm trying to update my Photoshop CS5, but I run into the same problem every time. The Application Manager update dialogue box simply states "some updates failed to install." There's no specific error code, but there is a link to contact support for further assistance. It doesn't take me to customer support, but it does take me to a screen which states: "Error: This serial number is not for a qualifying product." I've checked my account and the product serial number is associated with my account, so I don't see any problem. I have not been able to find a resolution to this problem, so I hope that someone can point me in the right direction. Thank you!

    update directly, Product updates

  • ZFS Filesystem for FUSE/Linux progressing

    About
    ZFS is an advanced modern filesystem from Sun Microsystems, originally designed for Solaris/OpenSolaris.
    This project is a port of ZFS to the FUSE framework for the Linux operating system.
    It is being sponsored by Google, as part of the Google Summer of Code 2006 program.
    Features
    ZFS has many features which can benefit all kinds of users - from the simple end-user to the biggest enterprise systems. ZFS's list of features:
          Provable integrity - it checksums all data (and metadata), which makes it possible to detect hardware errors (hard disk corruption, flaky IDE cables, ...). Read how ZFS helped to detect a faulty power supply after only two hours of usage, which had previously been silently corrupting data for almost a year!
          Atomic updates - the on-disk state is consistent at all times; there's no need to perform a lengthy filesystem check after forced reboots/power failures.
          Instantaneous snapshots and clones - it makes it possible to have hourly, daily and weekly backups efficiently, as well as experiment with new system configurations without any risks.
          Built-in (optional) compression
          Highly scalable
          Pooled storage model - creating filesystems is as easy as creating a new directory. You can efficiently have thousands of filesystems, each with its own quotas and reservations, and different properties (compression algorithm, checksum algorithm, etc.); see the short example after the links below.
          Built-in stripes (RAID-0), mirrors (RAID-1) and RAID-Z (it's like software RAID-5, but more efficient due to ZFS's copy-on-write transactional model).
          Among others (variable sector sizes, adaptive endianness, ...)
    http://www.wizy.org/wiki/ZFS_on_FUSE
    http://developer.berlios.de/project/sho … up_id=6836
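    A quick illustration of the pooled storage point above (pool, device, and dataset names are only examples; with zfs-fuse on Linux the device paths would differ):
    zpool create tank mirror c0t0d0 c0t1d0     # one command creates the pool and mounts /tank
    zfs create -o compression=on tank/home     # per-filesystem properties such as compression
    zfs set quota=10G tank/home                # quotas and reservations are set per filesystem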

    One workaround for this test was to drop down to NFSv3. That's fine for testing, but when I get ready to roll this thing into production, I hope there are no problems doing v4 from my NetApp hardware.

Maybe you are looking for

  • Photoshop CS5 on Mavericks crashes when opened

    Hi, My photoshop CS5 crashes every time  I open it but everything else on the Creative Suite like Illustrator and InDesign etc. works perfectly. Below is the crash report: Process:         Adobe Photoshop CS5 [2600] Path:            /Applications/Ado

  • Add fields to Standard Purchase Order

    How can I change the standard purchase order data definition? I need to add some fields that are not in the standard PO definition. Must I add fields to po_headers_xml? I think I also have to change the .xsd file, but how can I change them?

  • Where are the downloaded files? Project is not showing up in the programs area of Windows 8.1

    I downloaded the Project files.  Now it doesn't show up in the programs area.  What happened to it?  The files did download completely - it took a while!  Assistance is greatly appreciated! 

  • How do you import Elements 5 catalog to Elements 13?

    HI, Was running Elements v5.  I bought a new Win8 computer and the organizer will not open so I (finally) upgraded to Elements13.  I installed v13.  I thought I would easily be able to open my old catalogs (3 of them).  However, I did not see that op

  • Bank of America says to Mac Users: Upgrade Your Browser

    Upgrade Your Browser Your current browser does not support our online application. Your browser: Safari 523 Your browser isn't able to take advantage of the latest security standards. For your security, we require you to use a browser that supports 1