HAStoragePlus NFS for ZFS - nested ZFS mounts

I have a two-node cluster set up to be an HA NFS server for several zpools. Everything is going fine according to all the instructions, except I cannot seem to find any documents that discuss nested ZFS mounts. Here's what I have:
top-level zfs:
# zfs list t1qc/img_prod
NAME            USED  AVAIL  REFER  MOUNTPOINT
t1qc/img_prod  18.8M  5.88T  1.07M  /t1qc/img_prod
Descendant from that zfs are many other zfs filesystems (e.g., t1qc/img_prod/0000, t1qc/img_prod/0001, etc.).
The top-level filesystem is set up under HAStoragePlus as follows:
# clresourcegroup create -p PathPrefix=/t1qc/img_prod improdqc1-rg
# clreslogicalhostname create -g improdqc1-rg -h improdqc1 improdqc1-resource
# clresource create -g improdqc1-rg -t SUNW.HAStoragePlus -p Zpools=t1qc improdqc1-hastp-resource
# clresourcegroup online -M improdqc1-rg
# clresource create -g improdqc1-rg -t SUNW.nfs -p Resource_dependencies=improdqc1-hastp-resource improdqc1-nfs-resource
# clresourcegroup online -M improdqc1-rg
Contents of /t1qc/img_prod/SUNW.nfs/dfstab.improdqc1-nfs-resource:
share -F nfs -o rw -d "t1qc" /t1qc/img_prod
-----
Jump over to one of my other servers (Linux RHEL5) and mount the exported filesystem:
# mount -t nfs4 improd1:/t1qc/img_prod /zfs_img_prod
That works just fine; if I do an ls of that mounted filesystem, I see the listings for the descendant zfs filesystems. However, if I try to access one of those:
# ls /zfs_img_prod/0000
ls: reading directory /zfs_img_prod/0000: Input/output error
-----
Jump over to one of my Solaris 10 servers and mount the exported filesystem:
# mount -F nfs -o vers=4 improd1:/t1qc/img_prod /zfs_img_prod
That works just fine; if I do an ls of that mounted filesystem, I see the listings for the descendant zfs filesystems. However, if I try to access one of those:
# ls /zfs_img_prod/0000
#
Empty listing, even though there are files/directories in that zfs filesystem on the server.
This setup worked great without the cluster, i.e., just shared via ZFS. Is this not possible under the cluster, or am I missing something?
Thanks.
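One thing that might be worth checking (just a sketch on my part, assuming the SUNW.nfs resource only exports exactly what is listed in its dfstab): give each descendant filesystem its own share line in /t1qc/img_prod/SUNW.nfs/dfstab.improdqc1-nfs-resource, for example:
share -F nfs -o rw -d "t1qc" /t1qc/img_prod
share -F nfs -o rw -d "t1qc" /t1qc/img_prod/0000
share -F nfs -o rw -d "t1qc" /t1qc/img_prod/0001
With the sharenfs property outside the cluster, every descendant inherits the share automatically; a dfstab managed by the NFS resource has no such inheritance, which would explain why the children show up as empty stubs.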

Yes, I've been using NFSv4 on the client side since I discovered that in relation to zfs without the cluster. You mentioned you were using OpenSolaris, maybe there's been a change there that I don't have because I'm running Solaris 10...
If I add a zfs:
# zfs create t1qc/img_prod/testzfs
Share it on the server:
# scswitch -n -M -j improdqc1-nfs-resource
# share -F nfs -o rw -d "testzfs" /t1qc/img_prod/testzfs
# scswitch -e -M -j improdqc1-nfs-resource
On my client:
# ls /zfs_img_prod
testzfs
# ls /zfs_img_prod/testzfs
ls: reading directory /zfs_img_prod/testzfs: Input/output error
# mount -o remount /zfs_img_prod
# ls /zfs_img_prod/testzfs
... files are listed
I have to be missing something here... a setting... something.
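For what it's worth, here is a rough way to generate those per-descendant share lines instead of adding them by hand (an untested sketch; note that the first line it emits duplicates the existing top-level share, so prune that one):
#!/bin/sh
# emit one share line per filesystem under t1qc/img_prod and append them
# to the dfstab used by the cluster NFS resource
zfs list -H -o mountpoint -r t1qc/img_prod | while read mp; do
  echo "share -F nfs -o rw -d \"t1qc\" $mp"
done >> /t1qc/img_prod/SUNW.nfs/dfstab.improdqc1-nfs-resource
Then bounce the NFS resource (or disable/enable monitoring with scswitch -n -M / -e -M as above) so the new entries are picked up, and remount on the clients as you already found.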

Similar Messages

  • Failover on zone cluster configured for Apache on a ZFS filesystem takes 30 minutes

    Hi all
    I have configured a zone cluster for the Apache service, using a ZFS file system as highly available storage.
    The failover takes around 30 minutes, which is not acceptable. My configuration steps are outlined below:
    1) configured a 2-node physical cluster
    2) configured a quorum server
    3) configured a zone cluster
    4) created a resource group in the zone cluster
    5) created a resource for the logical hostname and added it to the above resource group
    6) created a resource for highly available storage (ZFS here) and added it to the above resource group
    7) created a resource for Apache and added it to the above resource group
    The failover takes 30 minutes and shows "pending offline/online" most of the time.
    I reduced the number of retries to 1, but to no avail.
    Any help will be appreciated
    Thanks in advance
    Sid
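    One quick check (my own sketch, not from the cluster docs, and using a hypothetical pool name "apachepool"): time the pool export/import by hand on the nodes, outside of any failover, to see whether the delay is in the storage layer or in the cluster framework:
    # time zpool export apachepool
    # time zpool import apachepool
    If the import alone accounts for most of the 30 minutes (for example because the pool contains a very large number of filesystems or snapshots), the problem is the zpool import itself rather than the resource configuration.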

    Sorry guys for the late reply.
    I tried switching the owner of the RG between the two nodes, which takes a reasonable amount of time. But the failover for a dry run is taking 30 minutes.
    The same setup with SVM works fine, but I want to have ZFS in my zone cluster.
    Thanks in advance
    Sid

  • Hi, I've changed my password for the console of a ZFS 7120 and I've forgotten it. How do I recover or reset the password? I've reset the ILOM password, but the issue is still there. Great thanks.


    Hi.
    1. In case you have access to the 7120 BUI as root but lost the password to the ILOM:
       Try changing the password on the ZFS storage via the BUI interface. Generally the ZFS Appliance synchronizes the BUI and ILOM passwords.
    2. In case you have the ILOM password but lost the password to the BUI:
       Read the doc "Sun Storage 7000 Unified Storage System: How to Recover from a Lost Root Password (Doc ID 1547912.1)" on support.oracle.com.
    3. You lost all passwords:
       support.oracle.com has the doc:
       Sun Storage 7000 Unified Storage System: How to recover from lost ILOM password (Doc ID 1548188.1)
       But it is not very useful for a general user.
       The 7120 array is based on the Sun Fire X4270 M2 server; you can try to find information about clearing the ILOM password for that system.
    Regards.

  • Does file share preview support NFS for mounting in linux?

    I've been experimenting with the file share preview and realized that CIFS doesn't really support a true file share allowing proper permissions.
    Is it possible to use the file share with NFS?
    thanks
    Ricardo

    RicardoK,
    No, you can't mount an Azure file share via NFS. Azure file shares only support CIFS (SMB version 2.1). Although it doesn't support NFS, you can still mount it on a Linux system via CIFS. Install the "cifs-utils" package ("apt-get install cifs-utils" on Ubuntu). You can then mount it manually like this:
    $ mount -t cifs \\\\mystorage.blob.core.windows.net\\mydata /mnt/mydata -o vers=2.1,dir_mode=0777,file_mode=0777,username=mystorageaccount,password=<apikeygoeshere>
    Or you can add it to your /etc/fstab to have it mounted automatically at boot. Add the following line to your /etc/fstab file:
    //mystorage.blob.core.windows.net/mydata /mnt/mydata cifs vers=2.1,dir_mode=0777,file_mode=0777,username=mystorageaccount,password=<apikeygoeshere>
    It's not as good as having a real NFS export, but it's as good as you can get using Azure Storage at the moment. If you truly want NFS storage in Azure, the best approach is to create a Linux VM that you configure as an NFS file server and create NFS exports that can be mounted on all of your Linux servers.
    -Robert  
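    A small addition to Robert's reply (my own sketch, not something he posted): mount.cifs also accepts a credentials file, which keeps the storage key out of the world-readable /etc/fstab. Assuming a hypothetical file /etc/azure-credentials with mode 0600 containing:
    username=mystorageaccount
    password=<apikeygoeshere>
    the fstab entry becomes:
    //mystorage.blob.core.windows.net/mydata /mnt/mydata cifs vers=2.1,dir_mode=0777,file_mode=0777,credentials=/etc/azure-credentials 0 0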

  • ZFS Snapshots/ZFS Clones of Database on sun/solaris

    Our production database is on Sun/Solaris 10 (SunOS odin 5.10 Generic_127127-11 sun4u sparc SUNW,SPARC-Enterprise) with Oracle 10.1.0. It is about 1 TB in size. We have also created our MOCK and DEVELOPMENT databases from the production database. To save disk space, we created these databases as ZFS snapshots/ZFS clones at the OS level; being clones, they are each using less than 10 GB as of now.
    Now I want to upgrade the production database from Oracle 10.1 to 11.2, but I don't want to upgrade the MOCK and DEVELOPMENT databases for the time being and want them to continue to run as clones on 10.1. After the upgrade, Prod will run from an 11g Oracle tree on one machine and MOCK/DEVL from a 10g tree on another machine. Will the upgrade of Production from 10.1 to 11.2 INVALIDATE the cloned MOCK and DEVELOPMENT databases? There might be data types/features in 11g which do not exist in 10g.
    Below are the links to the documentation we used to create the snapshots.
    http://docs.huihoo.com/opensolaris/solaris.../html/ch06.html
    http://docs.huihoo.com/opensolaris/solaris...ml/ch06s02.html
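    For readers who can't reach those links, the mechanism being described is roughly this (a generic sketch with hypothetical dataset names, not the poster's actual layout):
    # zfs snapshot dbpool/proddata@mock_base
    # zfs clone dbpool/proddata@mock_base dbpool/mockdata
    # zfs list -r dbpool
    The snapshot is a read-only point-in-time image that initially consumes no extra space, and the clone is a writable filesystem backed by that snapshot, which is why the MOCK/DEVL copies only use the space by which they have diverged.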

    Hi,
    The links mentioned in the post are not working.
    I would suggest you raise an official SR with http://support.oracle.com prior to upgrading your database.
    Also, you can try this out with a 10g DB installation on a TEST machine: create databases as ZFS snapshots/ZFS clones at the OS level for MOCK, then upgrade the 10g database and test it.
    Refer:
    429825.1 -- Complete Checklist for Manual Upgrades to 11gR1
    837570.1 -- Complete Checklist for Manual Upgrades to 11gR2
    Regards,
    X A H E E R

  • How do I get an actor to wait for its nested actors to stop running before stopping itself?

    I'm developing a series of projects that are based on the Actor Framework and my actor dependency hierarchy is starting to get some depth. One of the issues I'm facing now is making sure that each actor will only stop running (and signal this via a Last Ack to a higher-level actor or via a notification to non-AF code that launches it) once all of its nested actors have stopped running.
    For instance, say I have a type of actor that handles communication with a microcontroller over USB - a USB Controller Actor. I might want to have an application-specific actor that launches USB Controller Actor to issue commands to the microcontroller. When shutting down, I want this top-level actor to send a Stop Msg to USB Controller Actor and then wait to receive a Last Ack back before sending a notification within a provided notifier to the non-AF application code, which can then finish shutting down completely.
    I'm sure that having actors wait for all nested actors to shutdown before shutting down themselves is an extremely common requirement and I'm confident National Instruments have made it possible to handle that in a simple, elegant manner. I'm just struggling to figure out what that is.
    The approaches I've experimented with are:
    Creating a pseudo "Stop" message for an actor that won't actually stop it running straight away, but instruct it to stop all nested actors, wait for their Last Acks and then shut itself down by sending Stop Msg to itself. This isn't elegant because it means the client will be forced to fire off these pseudo stop messages instead of Stop Msg to certain actors.
    Instantiating an internally-used notifier and overriding Stop Core.vi and Handle Last Ack Core.vi. The idea is that within Stop Core.vi, I send Stop Msg to each of the nested actors and then make it wait indefinitely on the internal notifier. Within Handle Last Ack Core.vi, I make it send a notification using this notifier which allows Stop Core.vi to continue and perform deinitialisation for the top-level actor itself. Figures 1 & 2 below show this approach. I wasn't confident that this would work since it assumed that it was possible for Stop Core.vi to execute and then Handle Last Ack Core.vi to concurrently execute some time after. These assumptions didn't hold and it didn't work. It would have been messy even if it had.
    Overriding Stop Core.vi, making it send Stop Msg to each nested actor and then waiting for an arbitrarily long period of time (100ms, 200ms, etc.). What if a nested actor takes only 10ms to shut down? What if it takes 400ms?
    The figures below show how I implemented the second approach. Ignore the broken object wires - they only appear that way in the snippets.
    Figure 1 - Stop Core.vi from the second approach
    Figure 2 - Handle Last Ack Core.vi from the second approach

    tst wrote:
    It wasn't that hard to find - https://decibel.ni.com/content/thread/27138?tstart=0
    But with dozens of posts, I have no intention of rereading it now.
    Also, when crossposting, it's considered polite to add a link, so that people can see if the other thread has relevant replies.
    Thanks. I've only read the first page for now but I find this interesting (I'm not sure how to format non-reply quotes, sorry):
     "AristosQueue wrote:
    CaseyLamers1 wrote:
    I think that this would be a nice addition. I think how a program stops is just as important as how it starts.
    I think everyone who has worked on AF design agrees with this. Indeed, managing "Stop" was *the* thing that led to the creation of the Actor Framework in the first place. The other issues (deadlock/hang avoidance and resource management) were secondary to just trying to get a clean shutdown.
    CaseyLamers1 wrote:
    I find the current code a bit lacking.
    My concern would be that the mixing of a verified stop and a regular stop could create confusion and lead to people having trouble during editing and testing, with the project ending up locked (due to VIs left running which did not shut down).
    Your concern is to some degree why no verified Stop exists in the AF already. We looked at each actor as an independent entity, and left it open to the programmer to add an additional managment layer for those applications that needed it. But over time, I have seen that particular management layer come up more often, which is why I am exploring the option."
    So that gives one of the reasons why this hasn't already been implemented but also points out that it's something quite a lot of people want.
    > Also, when crossposting, it's considered polite to add a link, so that people can see if the other thread has relevant replies.
    Noted. Here's the discussion: https://decibel.ni.com/content/message/104983#104983
    Edit: since there doesn't seem to be any NI-provided way of doing this yet, I suppose I'll try rolling my own at some point. The ideas posted in the discussion you linked seem pretty useful.

  • NFS for clustering

    Hi
    There are two methods of using NFS for Oracle datafiles in RAC:
    1. creating ordinary datafiles in the NFS shared path
    2. creating zero-filled files in the NFS shared path and then using them as disks for ASM
    My question is: which method is better in terms of performance for an operational environment?
    thanks
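    To illustrate option 2 (my own sketch with hypothetical paths, not from the original post): the "zero-filled files" are just large pre-allocated files on the NFS mount that are then presented to ASM as candidate disks, e.g.:
    # dd if=/dev/zero of=/oranfs/asmdisk1 bs=1024k count=10240
    # chown oracle:dba /oranfs/asmdisk1
    # chmod 660 /oranfs/asmdisk1
    (a 10 GB file, owned so that the ASM instance can open it)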

    TakhteJamshid wrote:
    There are two methods of using NFS for Oracle datafiles in RAC:
    1. creating ordinary datafiles in the NFS shared path
    2. creating zero-filled files in the NFS shared path and then using them as disks for ASM
    My question is: which method is better in terms of performance for an operational environment?
    IMO, not using NFS at all.
    The optimal and most performant architecture is the one with the minimal number of moving parts to achieve the desired result.
    In the case of RAC, it means using ASM to manage the shared storage devices for you. It means that there should ideally be no unnecessary software layers between ASM and these shared storage devices. ASM should use the shared storage directly as block devices, and these block devices should be the LUNs visible via the multipath/powerpath driver software that "publishes" these (fibre channel/InfiniBand/etc.) storage LUNs for the OS to use.
    Throwing an IP layer and other software layers in between affects both robustness and performance.

  • Using NFS for RAC

    Hi, I am planning to use NFS for RAC but I am not able to find the certified NAS devices. Where can I get the list?
    Thanks

    NAS is NFS.
    See:
    The following NFS storage vendors are supported: EMC, Fujitsu, HP, IBM, NetApp, Pillar Data, Sun, Hitachi.
    NFS file servers do not require RAC certification. The NFS file server must be supported by the system and storage vendors. 
    Currently, only NFS protocol version 3 (NFSv3) is supported.
    Hemant K Chitale
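    As an aside (not part of Hemant's answer, and worth verifying against the current Oracle documentation for your platform): Oracle datafiles over NFSv3 are normally mounted hard, over TCP, and with attribute caching disabled. A commonly cited Linux fstab example, with a hypothetical server and path, looks like:
    nfs-server:/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,tcp,vers=3,rsize=32768,wsize=32768,timeo=600,actimeo=0  0 0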

  • ZFS tries to mount SAN volume before ISCSI is running

    I am running Solaris 10 x86 U7. It is actually a VMware guest (ESX4) on a Sun X4170 server, although I do not believe that is relevant. I have a Sun 2510 iSCSI SAN appliance. I have an iSCSI volume with a ZFS pool that is mounted on the server. All was fine until yesterday when I installed the following patches:
    142934-02 SunOS 5.10_x86: failsafe patch
    142910-17 SunOS 5.10_x86: kernel patch
    144489-02 SunOS 5.10_x86: kernel patch
    142912-01 (as a dependency requirement for one of the others.)
    I had installed the patches in run level 1, then switched to run level S to allow the patch install to finish.
    Now, when I restart, the ZFS volume on the SAN is marked as offline. /var/adm/messages shows the following:
    Nov 7 00:26:30 hostname iscsi: [ID 114404 kern.notice] NOTICE: iscsi discovery failure - SendTargets (ip.ad.dr.ess)
    I can mount the SAN ZFS pool with:
    # zpool clear ZFSPOOL1
    # zfs mount -a
    For iSCSI device discovery, I am using SendTargets (not static or iSNS). I am not using CHAP authentication.
    It seems to me this may merely be a timing issue between services and not fundamentally an iSCSI issue. Can I tell the OS to wait for a minute after starting the iSCSI service before continuing with ZFS mounts and autofs shares? Can I tell the OS to delay mounting non-OS ZFS pools?
    Thanks
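    One crude workaround (my own sketch, not an official fix, reusing the two commands that already work by hand): a legacy rc script that runs late in boot, gives iSCSI discovery some time, and then brings the pool back, e.g. a hypothetical /etc/rc3.d/S99zfsiscsi:
    #!/bin/sh
    # wait for iSCSI SendTargets discovery to complete, then recover the pool
    sleep 60
    zpool clear ZFSPOOL1
    zfs mount -a
    A cleaner fix would be an SMF dependency so that the mount waits on the iSCSI initiator service, but the script above at least confirms whether the race is the only problem.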

    Here is what I tried. Installed Batchmod and Xupport on each of internal system disk, backup internal system disk and external system disk. Batchmod could not find the folders automount or Network.
    Booting from external disk, I made hidden files visible using Xupport, then deleted automount > Servers, automount > Static on internal disk and backup disk. The folder Network had no files or folder named "Server". Booting from internal disk, the desktop tried to mount server volumes. Examining the internal disk automount folder showed aliases for "Servers" and "static". Get Info said they pointed to originals "Servers" and "static" in folder /automount but these items do not appear in the Finder.
    Sometimes icons, not aliases, for "Network", "Servers", and "static" appear on all three desktops on login. Trying to eject these icons by dragging to Trash or highlighting and clicking File > Eject has no effect. Examining Users > Username > Desktop does not show these items. Sometimes ".DS_Store" appears on desktop and in folder Users > Username > Desktop.
    Next I deleted user accounts so that all system disks are single user. Booted up on External disk and deleted automount > Servers, automount > Static on internal disk and internal backup disk or their aliases, whichever appeared in Finder. Booting up on internal disk results in... desktop trying to mount server volumes.
    Will try an archive and install on internal disk.

  • ZFS file system mount in solaris 11

    Create a ZFS file system for the package repository in the root pool:
    # zfs create rpool/export/repoSolaris11
    # zfs list
    The atime property controls whether the access time for files is updated when the files are read.
    Turning this property off avoids producing write traffic when reading files.
    # zfs set atime=off rpool/export/repoSolaris11
    Create the required pkg repository infrastructure so that you can copy the repository
    # pkgrepo create /export/repoSolaris11
    # cat sol-11-1111-repo-full.iso-a sol-11-1111-repo-full.iso-b > \
    sol-11-1111-repo-full.iso
    # mount -F hsfs /export/repoSolaris11/sol-11-1111-repo-full.iso /mnt
    # ls /mnt
    # df -k /mnt
    Using the tar command as shown in the following example can be a faster way to move the
    repository from the mounted file system to the repository ZFS file system.
    # cd /mnt/repo; tar cf - . | (cd /export/repoSolaris11; tar xfp -)
    # cd /export/repoSolaris11
    # ls /export/repoSolaris11
       pkg5.repository README
       publisher sol-11-1111-repo-full.iso
    # df -k /export/repoSolaris11
    # umount /mnt
    # pkgrepo -s /export/repoSolaris11 refresh
    =============================================
    # zfs create -o mountpoint=/export/repoSolaris11 rpool/repoSolaris11
    ==============================================
    I am trying to reconfigure the package repository with the above steps. When I reached the step below:
    # zfs create -o mountpoint=/export/repoSolaris11 rpool/repoSolaris11
    it created the mount point but did not mount it, giving the error message:
    cannot mount, directory not empty
    When I restarted the box, it threw the service admin screen with the error message:
    not able to mount all points
    Please advise. Thanks in advance.

    Hi.
    Don't mix up the content of the directory used as a mountpoint with what you see after the FS is mounted.
    On the other ZFS filesystems the mountpoint directory is also empty; what you see is the content of the ZFS file system.
    To check, you can unmount any other ZFS filesystem and see that its mountpoint directory is also empty.
    Regards.
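    In concrete terms (a sketch based on the advice above, using the dataset name from the question): make sure the underlying /export/repoSolaris11 directory is empty before the dataset is mounted over it:
    # zfs unmount rpool/repoSolaris11
    # ls -a /export/repoSolaris11
    # mkdir /var/tmp/repo-leftovers
    # mv /export/repoSolaris11/* /var/tmp/repo-leftovers/
    # zfs mount rpool/repoSolaris11
    (the mv destination is just a hypothetical holding place; whatever ls shows there lives on the parent filesystem, not in the dataset)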

  • Solaris 10 (sparc) + ZFS boot + ZFS zonepath + liveupgrade

    I would like to set up a system like this:
    1. Boot device on 2 internal disks in ZFS mirrored pool (rpool)
    2. Non-global zones on external storage array in individual ZFS pools e.g.
    zone alpha has zonepath=/zones/alpha where /zones/alpha is mountpoint for ZFS dataset alpha-pool/root
    zone bravo has zonepath=/zones/bravo where /zones/bravo is mountpoint for ZFS dataset bravo-pool/root
    3. Ability to use liveupgrade
    I need the zones to be separated on external storage because the intent is to use them in failover data services within Sun Cluster (er, Solaris Cluster).
    With Solaris 10 10/08, it looks like I can do 1 & 2 but not 3 or I can do 1 & 3 but not 2 (using UFS instead of ZFS).
    Am I missing something that would allow me to do 1, 2, and 3? If not is such a configuration planned to be supported? Any guess at when?
    --Frank

    Nope, that is still work in progress. Quite frankly, I wonder if you would even want such a feature, considering the way the filesystem works. It is possible to recover if your OS doesn't boot anymore by forcing your rescue environment to import the ZFS pool, but it's less elegant than merely mounting a specific slice.
    I think ZFS is ideal for data and data-like places (/opt, /export/home, /opt/local), but I somewhat question the advantages of moving slices like / or /var into it. It's too early to draw conclusions since the product isn't ready yet, but at this moment I can only think of disadvantages.

  • [SOLVED] Installing on ZFS root: "ZFS: cannot find bootfs" on boot.

    I have been experimenting with ZFS filesystems on external HDDs for some time now to get more comfortable with using ZFS in the hopes of one day reinstalling my system on a ZFS root.
    Today, I tried installing a system on an USB external HDD, as my first attempt to install on ZFS (I wanted to try in a safe, disposable environment before I try this on my main system).
    My partition configuration (from gdisk):
    Command (? for help): p
    Disk /dev/sdb: 3907024896 sectors, 1.8 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): 2FAE5B61-CCEF-4E1E-A81F-97C8406A07BB
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 3907024862
    Partitions will be aligned on 8-sector boundaries
    Total free space is 0 sectors (0 bytes)
    Number Start (sector) End (sector) Size Code Name
    1 34 2047 1007.0 KiB EF02 BIOS boot partition
    2 2048 264191 128.0 MiB 8300 Linux filesystem
    3 264192 3902828543 1.8 TiB BF00 Solaris root
    4 3902828544 3907024862 2.0 GiB 8300 Linux filesystem
    Partition #1 is for grub, obviously. Partition #2 is an ext2 partition that I mount on /boot in the new system. Partition #3 is where I make my ZFS pool.
    Partition #4 is an ext4 filesystem containing another minimal Arch system for recovery and setup purposes. GRUB is installed on the other system on partition #4, not in the new ZFS system.
    I let grub-mkconfig generate a config file from the system on partition #4 to boot that. Then, I manually edited the generated grub.cfg file to add this menu entry for my ZFS system:
    menuentry 'ZFS BOOT' --class arch --class gnu-linux --class gnu --class os {
    load_video
    set gfxpayload=keep
    insmod gzio
    insmod part_gpt
    insmod ext2
    set root='hd0,gpt2'
    echo 'Loading Linux core repo kernel ...'
    linux /vmlinuz-linux zfs=bootfs zfs_force=1 rw quiet
    echo 'Loading initial ramdisk ...'
    initrd /initramfs-linux.img
    }
    My ZFS configuration:
    # zpool list
    NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
    External2TB 1.81T 6.06G 1.81T 0% 1.00x ONLINE -
    # zpool status :(
    pool: External2TB
    state: ONLINE
    scan: none requested
    config:
    NAME STATE READ WRITE CKSUM
    External2TB ONLINE 0 0 0
    usb-WD_Elements_1048_575836314135334C32383131-0:0-part3 ONLINE 0 0 0
    errors: No known data errors
    # zpool get bootfs
    NAME PROPERTY VALUE SOURCE
    External2TB bootfs External2TB/ArchSystemMain local
    # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    External2TB 14.6G 1.77T 30K none
    External2TB/ArchSystemMain 293M 1.77T 293M /
    External2TB/PacmanCache 5.77G 1.77T 5.77G /var/cache/pacman/pkg
    External2TB/Swap 8.50G 1.78T 20K -
    The reason for the above configuration is that after I get this system to work, I want to install a second system in the same zpool on a different dataset, and have them share a pacman cache.
    GRUB "boots" successfully, in that it loads the kernel and the initramfs as expected from the 2nd GPT partition. The problem is that the kernel does not load the ZFS:
    ERROR: device '' not found. Skipping fsck.
    ZFS: Cannot find bootfs.
    ERROR: Failed to mount the real root device.
    Bailing out, you are on your own. Good luck.
    and I am left in busybox in the initramfs.
    What am I doing wrong?
    Also, here is my /etc/fstab in the new system:
    # External2TB/ArchSystemMain
    #External2TB/ArchSystemMain / zfs rw,relatime,xattr 0 0
    # External2TB/PacmanCache
    #External2TB/PacmanCache /var/cache/pacman/pkg zfs rw,relatime,xattr 0 0
    UUID=8b7639e2-c858-4ff6-b1d4-7db9a393578f /boot ext4 rw,relatime 0 2
    UUID=7a37363e-9adf-4b4c-adfc-621402456c55 none swap defaults 0 0
    I also tried to boot using "zfs=External2TB/ArchSystemMain" in the kernel options, since that was the more logical way to approach my intention of having multiple systems on different datasets. It would allow me to simply create separate grub menu entries for each, with different boot datasets in the kernel parameters. I also tried setting the mount points to "legacy" and uncommenting the zfs entries in my fstab above. That didn't work either and produced the same results, and that was why I decided to try to use "bootfs" (and maybe have a script for switching between the systems by changing the ZFS bootfs and mountpoints before reboot, reusing the same grub menuentry).
    Thanks in advance for any help.
    Last edited by tajjada (2013-12-30 20:03:09)

    Sounds like a zpool.cache issue. I'm guessing your zpool.cache inside your arch-chroot is not up to date. So on boot the ZFS hook cannot find the bootfs. At least, that's what I assume the issue is, because of this line:
    ERROR: device '' not found. Skipping fsck.
    If your zpool.cache was populated, it would spit out something other than an empty string.
    Some assumptions:
    - You're using the ZFS packages provided by demizer (repository or AUR).
    - You're using the Arch Live ISO or some version of it.
    On cursory glance your configuration looks good. But verify anyway. Here are the steps you should follow to make sure your zpool.cache is correct and up to date:
    Outside arch-chroot:
    - Import pools (not using '-R') and verify the mountpoints.
    - Make a copy of the /etc/zfs/zpool.cache before you export any pools. Again, make a copy of the /etc/zfs/zpool.cache before you export any pools. The reason for this is once you export a pool the /etc/zfs/zpool.cache gets updated and removes any reference to the exported pool. This is likely the cause of your issue, as you would have an empty zpool.cache.
    - Import the pool containing your root filesystem using the '-R' flag, and mount /boot within.
    - Make sure to copy your updated zpool.cache to your arch-chroot environment.
    Inside arch-chroot:
    - Make sure your bootloader is configured properly (i.e. read 'mkinitcpio -H zfs').
    - Use the 'udev' hook and not the 'systemd' one in your mkinitcpio.conf. The zfs-utils package does not have a ported hook (as of 0.6.2_3.12.6-1).
    - Update your initramfs.
    Outside arch-chroot:
    - Unmount filesystems.
    - Export pools.
    - Reboot.
    Inside new system:
    - Make sure to update the hostid then rebuild your initramfs. Then you can drop the 'zfs_force=1'.
    Good luck. I enjoy root on ZFS myself. However, I wouldn't recommend swap on ZFS. Despite what the ZoL tracker says, I still ran into deadlocks on occasion (as of a month ago). I cannot say definitively what caused the issue, but it resolved when I moved swap off ZFS to a dedicated partition.
    Last edited by NVS (2013-12-29 14:56:44)
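    Condensing the above into commands (my own sketch of the same idea; it uses the pool's cachefile property explicitly rather than NVS's copy-before-export sequence, and the mount paths are assumptions based on the partition layout in the question):
    # zpool import -R /mnt External2TB
    # zpool set cachefile=/etc/zfs/zpool.cache External2TB
    # cp /etc/zfs/zpool.cache /mnt/etc/zfs/zpool.cache
    # mount /dev/sdb2 /mnt/boot
    # arch-chroot /mnt
    # mkinitcpio -p linux
    # exit
    # umount /mnt/boot
    # zpool export External2TB
    (run mkinitcpio after switching the 'systemd' hook to 'udev' in mkinitcpio.conf, as NVS notes). Either way, the point is the same: the initramfs has to carry a zpool.cache that actually contains External2TB, otherwise the zfs hook comes up with the empty device string you saw.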

  • NFS for portable home directories not working

    I just recently tried to move our PHDs over to NFS instead of AFP to allow for fast user switching and some other reasons.
    However it doesn't work at all... The automount seems to work fine, as I can browse to /Network/Servers/servername/Users/ fine, but when the user tries to sync, a dialog pops up:
    The sync could not complete because your network home at "nfs://servername/Users" is currently unavailable.
    Try again later when it is available.
    and then in the console it shows:
    com.apple.SystemUIServer.agent[14236] mount_nfs: /Volumes/Users: Operation not permitted
    HomeSync[14369] HomeSync.syncNow: Unable to mount server URL at 'nfs://servername/Users', status = 65.
    com.apple.SystemUIServer.agent[14236] HomeSync[14369:903] HomeSync.syncNow: Unable to mount server URL at 'nfs://servername/Users', status = 65.
    It seems like it's trying to mount it at /Volumes/Users, but it can't (because a normal user can't mount NFS volumes? ...as far as I know), and furthermore I don't know why it needs to mount it at /Volumes/Users when it's already automounted at /Network/Servers/servername/Users.

    I just managed to get my first sync to work.
    My server exports /opt/home/<user> but not /opt/home since each user has a separate lvm volume. What worked was the following:
    dscl . -delete /Users/<user> dsAttrTypeStandard:OriginalHomeDirectory
    dscl . -append /Users/<user> dsAttrTypeStandard:OriginalHomeDirectory "<homedir><url>nfs://find/opt/home/<user></url><path></path></homedir>"
    This is similar to what I saw on http://managingosx.wordpress.com/2009/02/19/leopard-mobileaccounts-and-nfs-homes/ except putting the user's name (in place of <user>) as part of the URL instead of part of the path.
    The value for dsAttrTypeStandard:OriginalHomeDirectory was formerly /Network/Servers/<server>/opt/home/<user> which is a perfectly good directory, but not a url. I don't know why it wouldn't use the directory and manufactured a url instead.
    By changing the value back to default and making my server export /opt/home, I'm still able to sync. Naturally I like this solution much better.
    Doesn't seem like this solution will help you much if a given user can sync on some machines and not others, unless maybe you have different export rules to different machines in your network.
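    For reference, and assuming a Linux NFS server (the poster never says what the server runs), exporting the whole /opt/home tree instead of each per-user volume would look something like this in /etc/exports, with a hypothetical client network:
    /opt/home  192.168.1.0/24(rw,sync,no_subtree_check,crossmnt)
    followed by exportfs -ra; crossmnt lets clients descend into the separate per-user LVM volumes mounted under /opt/home.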

  • Solution other than NFS for centralized backup location on Unix?

    Good afternoon,
    I am trying to set up RMAN and am currently in discussions with our Unix team. They have expressed a certain uneasiness about using NFS mounts on multiple machines to achieve the same results as creating a share on W2K. Is there an alternative to using NFS that will accomplish the same results?
    Thanks for any help/advice you can give me.
    Sebastian DiFelice

    Q: An alternative to NFS ?
    A: That would be Samba / CIFS.
    Really, though, you need to ask the Unix team why they object to NFS, i.e., is this a throughput/network traffic problem or something else?
    If you're planning to centralise your backup solution through an existing network you'll run into this problem all the time.

  • Write permission over nfs for transmission files

    Hi all,
    I have a strange problem and I hope you could help me.
    Context :
    I have a server with :
    transmission-daemon to remotely download torrents
    pyload to remotely download DDL files
    an NFS server to provide write access over the LAN for Linux clients
    a Samba server to provide write access over the LAN for Windows clients
    And of course, a client which connects to the NFS share to manage files.
    Problem :
    NFS clients have write permission on files created by pyload and by basic users of the server, but not on transmission files.
    Otherwise, Samba clients and local basic users have write access to all files (even transmission's).
    Configuration :
    On the server, I have storage mounted at /mnt/data and bind-mounted to /srv/nfs4/.
    Transmission saves files with umask 022 in:
    /mnt/data/downloads/
    Pyload, with umask 022, in:
    /mnt/data/downloads/ddl/
    NFS gives access to /srv/nfs4 with write permission:
    /srv/nfs4/ 192.168.100.0/24(rw,no_subtree_check)
    Infos :
    From what I know and have tested, I can tell you:
    I have 770 (not 700) permissions on /mnt/data for local users on the server.
    File permissions should not be the cause: nfs and smbd run as root, so logically both of them can write.
    SMB clients can write to files with any permissions (777, 770, 700).
    NFS clients seem to get the permissions of "other": no write with 700 and 770, but write access with 777.
    pyload files are owned by pyload:pyload
    transmission files are owned by transmission:transmission
    basic users' files are owned by [the_user]:users
    So the problem exists only for transmission-owned files (not pyload's), and only from NFS clients (not Samba), and the files have the same permissions... I don't understand.
    I will try a more verbose NFS server log, but I'm open to any idea.
    Thanks you.
    Last edited by xp-1000 (2015-05-01 12:46:00)
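    One diagnostic worth running (my own suggestion, not a confirmed fix; the file and mount paths below are placeholders): classic NFS matches numeric uids/gids, so compare the numeric ownership of a transmission file and a pyload file on the server with the ids the client user actually has:
    # ls -ln /mnt/data/downloads/some_transmission_file /mnt/data/downloads/ddl/some_pyload_file
    $ id
    $ ls -ln /path/to/nfs/mount/downloads
    If the client's uid/gid doesn't overlap with transmission:transmission, the client falls into the "other" class for those files, and with a 022 umask (644/755) "other" never gets the write bit, which would match what you're seeing; relaxing transmission's umask or putting the client user into a shared group with transmission would then be the direction to test.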

    The problem could be related to Mavericks permission problems. You can try the Bridge fix:
    Open Finder > (go) Computer > Macintosh HD (for me) > Users > Click once on the Home icon (main admin) > Get Info >
    click on lock and authorise bottom right > click on cog wheel drop down > click on 'Apply to enclosed items' > OK
