ZFS mount points and zones

folks,
a little history: we've been running cluster 3.2.x with failover zones (using the containers data service) where the zone root is installed on a failover zpool (using HAStoragePlus). it's worked ok, but the real problem is the lack of agents that work in this config (we're mostly an oracle shop). we've been using the joost manifests inside the zones, which are ok and have worked, but we wouldn't mind giving the oracle data services a go - and avoiding the more than a little painful patching process of the current setup...
we've started to look at failover applications amongst zones on the nodes, so we'd have something like node1:zone and node2:zone as potential nodes, with the apps failing over between them on node failure and switchover. this way we'd actually be able to use the agents for oracle (DB, AS and EBS).
with the current cluster we create various ZFS file systems within the pool (such as oradata) and, through the zone boot resource, have them mounted where we want inside the zone (in this case $ORACLE_BASE/oradata), with the global zone having the mount point /export/zfs/<instance>/oradata.
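for context, the ZFS side of that is nothing fancy - roughly the following (dataset name is just an example matching the layout above), with the zone boot resource then loopback-mounting the global-zone path to $ORACLE_BASE/oradata inside the zone:
zfs create Zbob/oradata
zfs set mountpoint=/export/zfs/bob/oradata Zbob/oradata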
is there a way of achieving something like this with failover apps inside static zones? i know we can set the dataset mountpoint to whatever we want, but we rather like having the various oracle zones all sharing a similar layout (/app/oracle etc.).
we haven't looked at zone clusters at this stage if for no other reason than time....
or is there a better way?
thanks muchly,
nelson

i must be missing something...any ideas what and where?
nelson
devsun012~> zpool import Zbob
devsun012~> zfs list|grep bob
Zbob 56.9G 15.5G 21K /export/zfs/bob
Zbob/oracle 56.8G 15.5G 56.8G /export/zfs/bob/oracle
Zbob/oratab 1.54M 15.5G 1.54M /export/zfs/bob/oratab
devsun012~> zpool export Zbob
devsun012~> zoneadm -z bob list -v
ID NAME STATUS PATH BRAND IP
1 bob running /opt/zones/bob native shared
devsun013~> zoneadm -z bob list -v
ID NAME STATUS PATH BRAND IP
16 bob running /opt/zones/bob native shared
devsun012~> clrt list|egrep 'oracle_|HA'
SUNW.HAStoragePlus:6
SUNW.oracle_server:6
SUNW.oracle_listener:5
devsun012~> clrg create -n devsun012:bob,devsun013:bob bob-rg
devsun012~> clrslh create -g bob-rg -h bob bob-lh-rs
devsun012~> clrs create -g bob-rg -t SUNW.HAStoragePlus \
root@devsun012 > -p FileSystemMountPoints=/app/oracle:/export/zfs/bob/oracle \
root@devsun012 > bob-has-rs
clrs: devsun013:bob - Entry for file system mount point /export/zfs/bob/oracle is absent from global zone /etc/vfstab.
clrs: (C189917) VALIDATE on resource bob-has-rs, resource group bob-rg, exited with non-zero exit status.
clrs: (C720144) Validation of resource bob-has-rs in resource group bob-rg on node devsun013:bob failed.
clrs: (C891200) Failed to create resource "bob-has-rs".
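for reference, the ZFS-native way with HAStoragePlus would presumably be its Zpools extension property rather than FileSystemMountPoints, something like:
devsun012~> clrs create -g bob-rg -t SUNW.HAStoragePlus \
root@devsun012 > -p Zpools=Zbob bob-has-rs
but as far as i understand, the datasets then just mount at their ZFS mountpoints (loopback-mounted into the zone for node:zone groups), which is exactly the /app/oracle versus /export/zfs question above - so i'm not sure it buys us anything, but corrections welcome.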

Similar Messages

  • ZFS mount point - problem

    Hi,
    We are using ZFS to take snapshots on our Solaris 10 servers. I have a problem when using the ZFS mount options.
    The Solaris server we are using is:
    SunOS emch-mp89-sunfire 5.10 Generic_127127-11 sun4u sparc SUNW,Sun-Fire-V440
    System = SunOS
    Steps:
    1. I have created a zfs pool named lmspool.
    2. Then created the file system lmsfs.
    3. Now I want to set the mountpoint for this ZFS file system (lmsfs) to the "/opt/database" directory (which has some .sh files).
    4. Then need to take the snapshot of the lmsfs filesystem.
    5. To set the mountpoint, I tried two ways.
    1. zfs set mountpoint=/opt/database lmspool/lmsfs
    it returns the message "cannot mount '/opt/database/': directory is not empty
    property may be set but unable to remount filesystem".
    If I run the same command a second time, the mount point is set properly, and I then took a snapshot of the ZFS filesystem (lmsfs). After making some modifications in the database directory (deleting some files), I rolled back the snapshot, but the original database directory was not recovered. :-(
    2. For the second way, I used the "legacy" option for mounting.
    # zfs set mountpoint=legacy lmspool/lmsfs
    # mount -F zfs lmspool/lmsfs /opt/database
    After running this command, I can't see the files of the database directory inside /opt, so I can't modify anything inside the /opt/database directory.
    Could someone please suggest a solution to this problem, or another way to take a ZFS snapshot with the mount point on a UFS file system?
    Thanks,
    Muthukrishnan G

    You'll have to explain the problem more clearly. What exactly is the problem? What is "the original database directory" - the thing with the .sh files? Why are you trying to mount onto a directory that already has files in it in the first place?
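    In case it helps: the usual way to avoid the "directory is not empty" situation is to populate the dataset first and only move the mountpoint afterwards. A rough sketch using the names from your post (untested against your exact setup):
    # mount the dataset somewhere temporary and copy the existing data in
    zfs set mountpoint=/mnt/lmsfs lmspool/lmsfs
    ( cd /opt/database && tar cf - . ) | ( cd /mnt/lmsfs && tar xpf - )
    # move the old directory aside, then point the dataset at the real location
    mv /opt/database /opt/database.orig
    zfs set mountpoint=/opt/database lmspool/lmsfs
    zfs snapshot lmspool/lmsfs@baseline
    Snapshots and rollbacks only affect data that actually lives in the dataset, which is presumably why the rollback never brought back files that were still sitting in the underlying UFS directory.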

  • New zone and inherited file system mount point error

    Hi - would anyone be able to help with the following error, please? I've tried to create a new zone that has the following inherited file system:
    inherit-pkg-dir:
    dir: /usr/local/var/lib/sudo
    But when I try to install it fails with:
    root@tdukunxtest03:~ 532$ zoneadm -z tdukwbprepz01 install
    A ZFS file system has been created for this zone.
    Preparing to install zone <tdukwbprepz01>.
    ERROR: cannot create zone <tdukwbprepz01> inherited file system mount point </export/zones/tdukwbprepz01/root/usr/local/var/lib>
    ERROR: cannot setup zone <tdukwbprepz01> inherited and configured file systems
    ERROR: cannot setup zone <tdukwbprepz01> file systems inherited and configured from the global zone
    ERROR: cannot create zone boot environment <tdukwbprepz01>
    I added this because, unknown to me, sudo from sunfreeware (which I installed in the global zone) requires access to /usr/local/var/lib/sudo - sudo itself installs under /usr/local. And when I try to run any sudo command in the new zone it gives this:
    sudo ls
    Password:
    sudo: Can't open /usr/local/var/lib/sudo/tdgrunj/8: Read-only file system
    Thanks - Julian.

    Think I've just found the answer to my problem: I'd already inherited /usr ..... and as sudo from sunfreeware installs under /usr/local, I guess this is never going to work. I can only think to try the sudo version from the Solaris Companion DVD, or whatever it's called.
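    Another option sometimes used with sparse zones is to give the zone a writable /usr/local by loopback-mounting a global-zone directory over the inherited, read-only /usr. A sketch only - the backing path is made up and I haven't tested it against this exact zone:
    # in the global zone
    mkdir -p /export/zonedata/tdukwbprepz01/usr-local
    zonecfg -z tdukwbprepz01
    zonecfg:tdukwbprepz01> remove inherit-pkg-dir dir=/usr/local/var/lib/sudo
    zonecfg:tdukwbprepz01> add fs
    zonecfg:tdukwbprepz01:fs> set dir=/usr/local
    zonecfg:tdukwbprepz01:fs> set special=/export/zonedata/tdukwbprepz01/usr-local
    zonecfg:tdukwbprepz01:fs> set type=lofs
    zonecfg:tdukwbprepz01:fs> end
    zonecfg:tdukwbprepz01> commit
    sudo's var directory then lives on the writable lofs mount instead of the read-only inherited /usr.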

  • SQL 2014 cluster installation on mounting point disks error: Updating permission setting for file

    Dear all,
    I am attempting to install a SQL 2014 failover cluster on a physical Windows 2012 R2 server which has mount point disks.
    At the finish of setup I get the following error message.
    Could you please help me on this?
    Thanks
    The following error has occurred:
    Updating permission setting for file 'E:\Sysdata!System Volume Information\.....................................' failed. The file permission setting were supposed to be set to 'D:P(A;OICI;FA;;;BA)(A;OICI;FA;;;SY)(A;OICI;FA;;;CO)(A;OICI;FA;;;S-1-5-80-3880718306-3832830129-1677859214-2598158968-1052248003)'.
    Click 'Retry' to retry the failed action, or click 'Cancel' to cancel this action and continue setup.
    For help, click: go.microsoft.com/fwlink
    I am using an administrator account which has all the security rights (checked in the secpol.msc MMC) needed for the installation.

    Hi Marco_Ben_IT,
    Do not install SQL Server to the root directory of a mount point; setup fails when the root of a mounted volume is chosen for the data file directory. We must always specify a subdirectory for all files. This has to do with how permissions are granted. If you must put files in the root of the mount point, you must manually manage the ACLs/permissions. We need to create a subfolder under the root of the mount point and install there.
    More related information:
    More related information:
    Using Mount Points with SQL Server
    http://blogs.msdn.com/b/cindygross/archive/2011/07/05/using-mount-points-with-sql-server.aspx
    I’m glad to be of help to you!
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • Question about changing zonepath from one mount point to another

    Hi all
    A local zone is currently running with its zonepath, say /export/home/myzone, mounted on a Veritas volume. Is it possible to change the zonepath to a different mount point, say /zone/myzone, which is mounted on the same volume, without re-installing the entire zone?
    Regards
    Sunny

    Yes..
    You can use zonecfg to reconfigure the zone, which is Sun's supported way.
    I just usually edit /etc/zones/zone-name.xml.
    There are several ways to move a zone, but this has always worked for me:
    - stop the zone
    - tar the zone
    - move the zone
    - edit /etc/zones/zone-name.xml (to reflect the new path)
    - detach the zone
    - attach the zone (so it regenerates the hash)
    - boot the zone
    hth
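    A rough shell sketch of the supported route (using 'myzone' as a placeholder zone name; since both mount points sit on the same Veritas volume, no copy is needed - the volume is just remounted at the new path):
    zoneadm -z myzone halt
    zoneadm -z myzone detach
    # remount the Veritas volume at /zone/myzone instead of /export/home/myzone
    zonecfg -z myzone 'set zonepath=/zone/myzone'
    zoneadm -z myzone attach
    zoneadm -z myzone boot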

  • Clone A Database Instance on a different mount point.

    Hello Gurus,
    I need your help: I need to clone an 11.1.0.7 database instance to a NEW mount point on the same host. The host is an HP-UX box, and my question is: do I need to install the Oracle database software on this new mount point and then clone, or will cloning to the NEW MOUNT point itself create all the necessary software? Please provide any documents that will be helpful for the process.
    Thanks In Advance.

    882065 wrote:
    Hello Gurus,
    my question is do I need to install oracle database software in this new mount point and then clone??
    No.
    or will cloning to the NEW MOUNT point itself create all the necessary software?
    No: cloning a database on the same host means cloning database files; it does not mean cloning Oracle executables. You don't need to clone ORACLE_HOME on the same host.
    Please provide me any documents that will be helpful for the process.
    Try to use: http://www.oracle-base.com/articles/11g/DuplicateDatabaseUsingRMAN_11gR2.php
    Thanks In Advance.
    Edited by: P. Forstmann on 29 nov. 2011 19:53
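    For completeness, a very rough sketch of the RMAN duplicate described in that article, with made-up connect strings and paths - it is not a complete procedure (an auxiliary instance, password file and init parameters still need to be prepared first):
    rman target sys/secret@PROD auxiliary sys/secret@CLONE <<EOF
    DUPLICATE TARGET DATABASE TO CLONE
      FROM ACTIVE DATABASE
      DB_FILE_NAME_CONVERT '/u01/oradata/','/u02/oradata/';
    EOF
    The DB_FILE_NAME_CONVERT pair is what relocates the datafiles to the new mount point; the existing ORACLE_HOME stays where it is.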

  • NFS Mounted Directory And Files Quit Responding

    I mounted a remote directory using NFS and I can access the mount point and all of its sub-directories and files. After a while, all of the sub-directories and files no longer respond when clicked; in column view there is no longer an icon nor any statistics for those files. If I go back and click on Network->Servers->myserver->its_subdirectories, it will eventually respond again.
    I have found no messages in the system log. And nfsstat shows no errors.
    I am using these mount parameters in the Directory Utility->Mounts tab:
    ro net -P -T -3
    Any idea why the NFS mounted directories and files quit responding?
    Thanks.

    I may have found an answer to my own question.
    It looks like automount will automatically unmount a file system if it has not been accessed in 10 minutes. This time-out can be changed using the automount command. I am going to try increasing this time-out value.
    Here is part of the man page:
    SYNOPSIS
    automount [-v] [-c] [-t timeout]
    -t timeout
    Set to timeout seconds the time after which an automounted file
    system will be unmounted if it hasn't been referred to within
    that period of time. The default is 10 minutes (600 seconds).
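    So, per that synopsis, bumping the timeout to an hour should be something like the following (a sketch; I haven't checked how to make it persistent across reboots):
    sudo automount -v -t 3600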

  • Powershell- Associate Mount Point to Physical Disk

    I need to be able to associate mount points (Get-WmiObject -Class Win32_MountPoint) with the physical drives on which they reside.
    Scenario: I have physical disks (SAN LUNs) mounted as folders on an E: drive (also a SAN LUN) of a server.  I need to be able to, via a PowerShell script, associate the "folder" name to the physical disk (i.e., Harddisk4 or PhysicalDrive4).
    I can get Mount Point associated with the Volume, etc., but can't make the link to the physical disk.
    Any help is appreciated.

    Unfortunately there isn't an association class between mount points and physical disks like there is between logical and physical disks. I did a blog post about finding partition alignment which required using the Win32_LogicalDiskToPartition class. One of the comments suggested using Sysinternals diskext as a workaround. See the comments here:
    http://sev17.com/2009/02/disk-alignment-partitioning-the-good-the-bad-the-ok-and-the-not-so-ugly/

  • Changing Oracle Mount Point After Installation Is Completed.

    How easy is it to change an Oracle mount point once an SAP installation has been completed? The reason I am asking is that another Basis person believes it's easier to wipe the partition and reinstall.

    Hello,
    Yes, this is OK (for SAP/Oracle) and easy for the Unix team too. Shut down SAP and Oracle completely and the Unix expert can change the file system. But be careful that the exact mount point 'name' remains intact. For example, '/oracle/SID' is a mount point; if you want to change the file system or mount this particular directory onto a different filesystem/partition, then the mount point name, i.e. '/oracle/SID', should remain the same, as required by Oracle/SAP to run.
    Thanks

  • Hidden shares/mount points

    Apologies for double-posting this, but I just realized this forum is more suited for this question.
    I'm having a very strange problem in Lion running on a Mac Mini. I'm using Automount Maker to mount 8 different network shares at login and 2 of them are exhibiting weird behavior. All the shares except those 2 show up in the Finder. When I look in /Volumes using the Finder, the 2 mount points are not visible, but if I use Terminal to ls /Volumes, the mount points are visible in the list in Terminal. I can cd to those mount points and navigate the files, so the shares are definitely mounted, just invisible to the Finder. As a result, some scripts that run on the computer that reference /Volumes/sharename are failing.
    I've tried changing the visibility of these mount points with "chflags hidden" but that didn't fix it.
    I tested mounting the share without using Automount Maker (just the normal Apple+K method) and the share still mounts invisible. This time however, when the Finder pops open to show me the share after mounting it, the icon for the share in the Finder is dimmed. I've never seen this behavior before.

    You don't need Timbuktu to control OS X Server; screen sharing is now built in.
    The most secure way to do what you want is to use ssh tunneling. You don't need a VPN: with ssh you can run screen sharing and Apple file sharing through the ssh tunnel.
    By default, remote login is enabled in OS X Server. Being a server, if configured correctly it will have a static IP address.
    So configure your office network's router to forward TCP port 22 from the internet to the IP address of your server.
    Then you need an account on the server; maybe you are already the admin account.
    Open your Terminal; the following commands will set up a tunnel for file sharing and screen sharing.
    screen sharing tunnel
    ssh user@server -L 5901:localhost:5900
    Replace server with the actual public IP address of your office internet connection, and replace user with the actual account name on the server.
    Once logged in, open the Screen Sharing app and use a host name of localhost:5901
    For file sharing use:
    ssh user@server -L 5548:localhost:548
    Once logged in, press Command+K and in the server address use localhost:5548.
    That is it.

  • ZFS 7320c and T4-2 server mount points for NFS

    Hi All,
    We have an Oracle ZFS 7320c and T4-2 servers. Apart from the on-board 1 GbE, we also have 10 GbE connectivity between the servers and the storage, configured as a 10.0.0.0/16 network.
    We have created a few NFS shares but are unable to mount them automatically after a reboot inside Oracle VM Server for SPARC guest domains.
    The following document helped us with the configuration:
    Configure and Mount NFS shares from SUN ZFS Storage 7320 for SPARC SuperCluster [ID 1503867.1]
    However, we can mount the file systems manually after reaching run level 3.
    The NFS mount points are /orabackup and /stage and the entries in /etc/vfstab are as follows:
    10.0.0.50:/export/orabackup - /orabackup nfs - yes rw,bg,hard,nointr,rsize=131072,wsize=131072,proto=tcp,vers=3
    10.0.0.50:/export/stage - /stage nfs - yes rw,bg,hard,nointr,rsize=131072,wsize=131072,proto=tcp,vers=3
    On the ZFS storage, the following are the properties for shares:
    zfsctrl1:shares> select nfs_prj1
    zfsctrl1:shares nfs_prj1> show
    Properties:
    aclinherit = restricted
    aclmode = discard
    atime = true
    checksum = fletcher4
    compression = off
    dedup = false
    compressratio = 100
    copies = 1
    creation = Sun Jan 27 2013 11:17:17 GMT+0000 (UTC)
    logbias = latency
    mountpoint = /export
    quota = 0
    readonly = false
    recordsize = 128K
    reservation = 0
    rstchown = true
    secondarycache = all
    nbmand = false
    sharesmb = off
    sharenfs = on
    snapdir = hidden
    vscan = false
    sharedav = off
    shareftp = off
    sharesftp = off
    sharetftp =
    pool = oocep_pool
    canonical_name = oocep_pool/local/nfs_prj1
    default_group = other
    default_permissions = 700
    default_sparse = false
    default_user = nobody
    default_volblocksize = 8K
    default_volsize = 0
    exported = true
    nodestroy = false
    space_data = 43.2G
    space_unused_res = 0
    space_unused_res_shares = 0
    space_snapshots = 0
    space_available = 3.97T
    space_total = 43.2G
    origin =
    Shares:
    Filesystems:
    NAME SIZE MOUNTPOINT
    orabackup 31K /export/orabackup
    stage 43.2G /export/stage
    Children:
    groups => View per-group usage and manage group
    quotas
    replication => Manage remote replication
    snapshots => Manage snapshots
    users => View per-user usage and manage user quotas
    zfsctrl1:shares nfs_prj1> select orabackup
    zfsctrl1:shares nfs_prj1/orabackup> show
    Properties:
    aclinherit = restricted (inherited)
    aclmode = discard (inherited)
    atime = true (inherited)
    casesensitivity = mixed
    checksum = fletcher4 (inherited)
    compression = off (inherited)
    dedup = false (inherited)
    compressratio = 100
    copies = 1 (inherited)
    creation = Sun Jan 27 2013 11:17:46 GMT+0000 (UTC)
    logbias = latency (inherited)
    mountpoint = /export/orabackup (inherited)
    normalization = none
    quota = 200G
    quota_snap = true
    readonly = false (inherited)
    recordsize = 128K (inherited)
    reservation = 0
    reservation_snap = true
    rstchown = true (inherited)
    secondarycache = all (inherited)
    shadow = none
    nbmand = false (inherited)
    sharesmb = off (inherited)
    sharenfs = sec=sys,rw,[email protected]/16:@10.0.0.218/16:@10.0.0.215/16:@10.0.0.212/16:@10.0.0.209/16:@10.0.0.206/16:@10.0.0.13/16:@10.0.0.200/16:@10.0.0.203/16
    snapdir = hidden (inherited)
    utf8only = true
    vscan = false (inherited)
    sharedav = off (inherited)
    shareftp = off (inherited)
    sharesftp = off (inherited)
    sharetftp = (inherited)
    pool = oocep_pool
    canonical_name = oocep_pool/local/nfs_prj1/orabackup
    exported = true (inherited)
    nodestroy = false
    space_data = 31K
    space_unused_res = 0
    space_snapshots = 0
    space_available = 200G
    space_total = 31K
    root_group = other
    root_permissions = 700
    root_user = nobody
    origin =
    zfsctrl1:shares nfs_prj1> select stage
    zfsctrl1:shares nfs_prj1/stage> show
    Properties:
    aclinherit = restricted (inherited)
    aclmode = discard (inherited)
    atime = true (inherited)
    casesensitivity = mixed
    checksum = fletcher4 (inherited)
    compression = off (inherited)
    dedup = false (inherited)
    compressratio = 100
    copies = 1 (inherited)
    creation = Tue Feb 12 2013 11:28:27 GMT+0000 (UTC)
    logbias = latency (inherited)
    mountpoint = /export/stage (inherited)
    normalization = none
    quota = 100G
    quota_snap = true
    readonly = false (inherited)
    recordsize = 128K (inherited)
    reservation = 0
    reservation_snap = true
    rstchown = true (inherited)
    secondarycache = all (inherited)
    shadow = none
    nbmand = false (inherited)
    sharesmb = off (inherited)
    sharenfs = sec=sys,rw,[email protected]/16:@10.0.0.218/16:@10.0.0.215/16:@10.0.0.212/16:@10.0.0.209/16:@10.0.0.206/16:@10.0.0.203/16:@10.0.0.200/16
    snapdir = hidden (inherited)
    utf8only = true
    vscan = false (inherited)
    sharedav = off (inherited)
    shareftp = off (inherited)
    sharesftp = off (inherited)
    sharetftp = (inherited)
    pool = oocep_pool
    canonical_name = oocep_pool/local/nfs_prj1/stage
    exported = true (inherited)
    nodestroy = false
    space_data = 43.2G
    space_unused_res = 0
    space_snapshots = 0
    space_available = 56.8G
    space_total = 43.2G
    root_group = root
    root_permissions = 755
    root_user = root
    origin =
    Can anybody please help?
    Regards.

    try this:
    svcadm enable nfs/client
    cheers
    bjoern
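    (As far as I understand, that service is what honours the "mount at boot" column in vfstab for NFS entries; after the next reboot this can be checked with something like:
    svcs nfs/client
    mount | egrep 'orabackup|stage'
    to confirm the service is online and both shares came up.)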

  • Live upgrade, zones and separate mount points

    Hi,
    We have a quite large zone environment based on Solaris zones located on VxVM/VxFS. I know this is a doubtful configuration, but the choice was made before I got here and now we need to upgrade the environment. The Veritas guides say it's fine to locate zones on Veritas, but I am not sure Sun would approve.
    Anyway, since all zones are located on a separate volume I want to create a new one for every zonepath, something like:
    lucreate -n upgrade -m /:/dev/dsk/c2t1d0s0:ufs -m /zones/zone01:/dev/vx/dsk/zone01/zone01_root02:ufs
    This works fine for a while after the integration of 6620317 in 121430-23, but when the new environment is to be activated I get errors, see below [1]. If I look at the commands executed by lucreate I see that the global root is mounted, but my zone root does not seem to have been mounted before the call to zoneadmd [2]. While this might not be a supported configuration, VxVM seems to be supported and I think there are a few people out there with zonepaths on separate disks. Live Upgrade probably has no issues with the files moved from the VxFS filesystem - that part has been done - but the new filesystems do not seem to get mounted correctly.
    Anyone tried something similar or has any idea on how to solve this?
    The system is a s10s_u4 with kernel 127111-10 and Live upgrade patches 121430-25, 121428-10.
    1:
    Integrity check OK.
    Populating contents of mount point </>.
    Populating contents of mount point </zones/zone01>.
    Copying.
    Creating shared file system mount points.
    Copying root of zone <zone01>.
    Creating compare databases for boot environment <upgrade>.
    Creating compare database for file system </zones/zone01>.
    Creating compare database for file system </>.
    Updating compare databases on boot environment <upgrade>.
    Making boot environment <upgrade> bootable.
    ERROR: unable to mount zones:
    zoneadm: zone 'zone01': can't stat /.alt.upgrade/zones/zone01/root: No such file or directory
    zoneadm: zone 'zone01': call to zoneadmd failed
    ERROR: unable to mount zone <zone01> in </.alt.upgrade>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: umount: warning: /dev/dsk/c2t1d0s0 not in mnttab
    umount: /dev/dsk/c2t1d0s0 not mounted
    ERROR: cannot unmount </dev/dsk/c2t1d0s0>
    ERROR: cannot mount boot environment by name <upgrade>
    ERROR: Unable to determine the configuration of the target boot environment <upgrade>.
    ERROR: Update of loader failed.
    ERROR: Unable to umount ABE <upgrade>: cannot make ABE bootable.
    Making the ABE <upgrade> bootable FAILED.
    ERROR: Unable to make boot environment <upgrade> bootable.
    ERROR: Unable to populate file systems on boot environment <upgrade>.
    ERROR: Cannot make file systems for boot environment <upgrade>.
    2:
    0 21191 21113 /usr/lib/lu/lumount -f upgrade
    0 21192 21191 /etc/lib/lu/plugins/lupi_bebasic plugin
    0 21193 21191 /etc/lib/lu/plugins/lupi_svmio plugin
    0 21194 21191 /etc/lib/lu/plugins/lupi_zones plugin
    0 21195 21192 mount /dev/dsk/c2t1d0s0 /.alt.upgrade
    0 21195 21192 mount /dev/dsk/c2t1d0s0 /.alt.upgrade
    0 21196 21192 mount -F tmpfs swap /.alt.upgrade/var/run
    0 21196 21192 mount swap /.alt.upgrade/var/run
    0 21197 21192 mount -F tmpfs swap /.alt.upgrade/tmp
    0 21197 21192 mount swap /.alt.upgrade/tmp
    0 21198 21192 /bin/sh /usr/lib/lu/lumount_zones -- /.alt.upgrade
    0 21199 21198 /bin/expr 2 - 1
    0 21200 21198 egrep -v ^(#|global:) /.alt.upgrade/etc/zones/index
    0 21201 21198 /usr/sbin/zonecfg -R /.alt.upgrade -z test exit
    0 21202 21198 false
    0 21205 21204 /usr/sbin/zoneadm -R /.alt.upgrade list -i -p
    0 21206 21204 sed s/\([^\]\)::/\1:-:/
    0 21207 21203 zoneadm -R /.alt.upgrade -z zone01 mount
    0 21208 21207 zoneadmd -z zone01 -R /.alt.upgrade
    0 21210 21203 false
    0 21211 21203 gettext unable to mount zone <%s> in <%s>
    0 21212 21203 /etc/lib/lu/luprintf -Eelp2 unable to mount zone <%s> in <%s> zone01 /.alt.up
    Edited by: henrikj_ on Sep 8, 2008 11:55 AM Added Solaris release and patch information.

    I updated my manual pages and got a reminder of the zonename field for the -m option of lucreate. But I still have no success: if I have the root filesystem for the zone in vfstab, it tries to mount the current zone root into the alternate BE:
    # lucreate -n upgrade -m /:/dev/dsk/c2t1d0s0:ufs -m /:/dev/vx/dsk/zone01/zone01_rootvol02:ufs:zone01
    <snip>
    Creating file systems on boot environment <upgrade>.
    Creating <ufs> file system for </> in zone <global> on </dev/dsk/c2t1d0s0>.
    Creating <ufs> file system for </> in zone <zone01> on </dev/vx/dsk/zone01/zone01_rootvol02>.
    Mounting file systems for boot environment <upgrade>.
    ERROR: UX:vxfs mount: ERROR: V-3-21264: /dev/vx/dsk/zone01/zone01_rootvol is already mounted, /.alt.tmp.b-gQg.mnt/zones/zone01 is busy,
    allowable number of mount points exceeded
    ERROR: cannot mount mount point </.alt.tmp.b-gQg.mnt/zones/zone01> device </dev/vx/dsk/zone01/zone01_rootvol>
    ERROR: failed to mount file system </dev/vx/dsk/zone01/zone01_rootvol> on </.alt.tmp.b-gQg.mnt/zones/zone01>
    ERROR: unmounting partially mounted boot environment file systems
    If I try to do the same but with the filesystem removed from vfstab, then I get another error:
    <snip>
    Creating boot environment <upgrade>.
    Creating file systems on boot environment <upgrade>.
    Creating <ufs> file system for </> in zone <global> on </dev/dsk/c2t1d0s0>.
    Creating <ufs> file system for </> in zone <zone01> on </dev/vx/dsk/zone01/zone01_upgrade>.
    Mounting file systems for boot environment <upgrade>.
    Calculating required sizes of file systems for boot environment <upgrade>.
    Populating file systems on boot environment <upgrade>.
    Checking selection integrity.
    Integrity check OK.
    Populating contents of mount point </>.
    Populating contents of mount point </zones/zone01>.
    <snip>
    Making the ABE <upgrade> bootable FAILED.
    ERROR: Unable to make boot environment <upgrade> bootable.
    ERROR: Unable to populate file systems on boot environment <upgrade>.
    ERROR: Cannot make file systems for boot environment <upgrade>.
    If I let lucreate copy the zonepath to the same slice as the OS, the creation of the BE works fine:
    # lucreate -n upgrade -m /:/dev/dsk/c2t1d0s0:ufs

  • EBS with ZFS and Zones

    I will post this one again in desperation; I have had a SUN support call open on this subject for some time now, but with no results.
    If I can't get a straight answer soon, I will be forced to port the application over to Windows, a desperate measure.
    Has anyone managed to recover a server and a zone that uses ZFS filesystems for the data partitions?
    I attempted a restore of the server and then the client zone, but it appears to corrupt my ZFS file systems.
    The steps I have taken are listed below:
    Built a server and created a zone, added a ZFS fileystem to this zone and installed the EBS 7.4 client software into the zone making the host server the EBS server.
    Completed a backup.
    Destroyed the zone and host server.
    Installed the OS and re-created a zone with the same configuration.
    Added the ZFS filesystem and made this available within the zone.
    Installed EBS and carried out a complete restore.
    Logged into the zone and installed the EBS client software then carried out a complete restore.
    After a server reload this leaves the ZFS filesystem corrupt.
    status: One or more devices could not be used because the label is missing
    or invalid. There are insufficient replicas for the pool to continue
    functioning.
    action: Destroy and re-create the pool from a backup source.
    see: http://www.sun.com/msg/ZFS-8000-5E
    scrub: none requested
    config:
    NAME STATE READ WRITE CKSUM
    p_1 UNAVAIL 0 0 0 insufficient replicas
    mirror UNAVAIL 0 0 0 insufficient replicas
    c0t8d0 FAULTED 0 0 0 corrupted data
    c2t1d0 FAULTED 0 0 0 corrupted data

    I finally got a solution to the issue, thanks to a SUN tech guy rather than a member of the EBS support team.
    The whole issue revolves around the file /etc/zfs/zpool.cache, which needs to be backed up prior to carrying out a restore.
    Below is a full set of steps to recover a server that has zones installed and uses ZFS, using EBS 7.4:
    Instructions On How To Restore A Server With A Zone Installed
    Using the server's control guide, re-install the OS from CD, configuring the system disk to the original sizes; do not patch at this stage.
    Create the zpools and the zfs file systems that existed for both the global and non-global zones.
    Carry out a restore using:
    If you don't have a bootstrap printout, read the backup tape to get the backup indexes.
    cd /usr/sbin/nsr
    Use scanner -B -im <device> to get the ssid number and record number:
    scanner -B -im /dev/rmt/0hbn
    cd /usr/sbin/nsr
    Enter: ./mmrecov
    You will be prompted for the SSID number followed by the file and record number.
    All of this information is on the Bootstrap report.
    After the index has been recovered:
    Stop the backup daemons with: /etc/rc2.d/S95networker stop
    Copy the original res file to res.org and then copy res.R to res.
    Start the backup daemons with: /etc/rc2.d/S95networker start
    Now run nsrck -L7 to reconstruct the indexes.
    You should now have your backup indexes intact and be able to perform standard restores.
    If the system is using ZFS:
    cp /etc/zfs/zpool.cache /etc/zfs/zpool.cache.org
    To restore the whole system:
    Shutdown any sub zones
    cd /
    Run '/usr/sbin/nsr/nsrmm -m' to mount the tape
    Enter 'recover'
    At the Recover prompt enter: 'force'
    Now enter: 'add *' (to restore the complete server; this will now list out all the files in the backup library selected for restore)
    Now enter: 'recover' to start the whole system recovery, and ensure the backup tape is loaded into the server.
    If the system is using ZFS:
    cp /etc/zfs/zpool.cache.org /etc/zfs/zpool.cache
    Reboot the server
    The non-global zone should now be bootable; use zoneadm -z <zonename> boot
    Start an X session onto the non-global zone and carry out a selective restore of all the ZFS file systems.

  • Sparse Root Zone Mount Points

    In my zone xml files I have configured filesystems to be mounted in the non global zones.
    The filesystems exists on separate SAN storage.
    filesystem special=/dev/dsk/controllerdisknumber raw=/dev/dsk/controllerdisknumber directory=/somemountpoint
    The problem we are experiencing is that occasionally our path to the SAN goes down. Yes these are multipathed but we still experience the outage.
    What happens is that the filesystems that are mounted in the NGZs go away, so the zone is left running but the mountpoints are not mounted any longer.
    When the path to the SAN comes back, the global zone sees that the devices are back, but the NGZs still do not mount their filesystems as specified in their zone xml.
    The only way for the mount points to come back is to 1) reboot the zone or 2) manually mount the filesystems from the global zone, specifying that they be mounted in the NGZ.
    Is this working as it should? Should the zone not attempt to remount its filesystems if they are lost?
    Instead of specifying a filesystem in the zone config, should we just be giving the devices to the zone and then using the NGZ's /etc/vfstab to mount?
    Thank you for the help.

    I've seen this issue also, but I'm not aware of a good fix.
    One possibility that comes to mind, but which I haven't actually tested, is to do an NFS export from the global zone and automount it in the local zone.
    The automounter should deal with mountpoint reliability issues OK.
    It's kind of ugly and isn't likely to be as efficient as a loopback mount, though.
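    For reference, the other alternative raised in the question - handing the devices to the zone and mounting them from the NGZ's own /etc/vfstab - would look roughly like this (zone name and device are placeholders; /somemountpoint is from the zone config above):
    # global zone
    zonecfg -z myzone
    zonecfg:myzone> add device
    zonecfg:myzone:device> set match=/dev/dsk/c4t0d0s0
    zonecfg:myzone:device> end
    zonecfg:myzone> add device
    zonecfg:myzone:device> set match=/dev/rdsk/c4t0d0s0
    zonecfg:myzone:device> end
    zonecfg:myzone> commit
    # inside the zone, /etc/vfstab:
    # /dev/dsk/c4t0d0s0  /dev/rdsk/c4t0d0s0  /somemountpoint  ufs  2  yes  -
    The zone can then mount and remount the filesystem itself, though whether it recovers any more gracefully from a SAN path outage would need testing.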

  • Sharing ZFS via NFS and mounting on OSX

    I need some help sharing and mounting a ZFS filesystem over NFS.
    I have a pool set up and an SMB share all working. The owner of the ZFS files is user "nas", group "nas".
    I have tried: zfs set sharenfs=on pool/share
    I am trying to mount this on an OS X client and it appears to mount; however, when I even try to cd into the mount point it says "Permission denied". Can anyone point me in the right direction?

    After re-reading your post, it sounds like a permissions thing, actually... the ID on the client needs to have execute permission on the directory in order to cd into it. Is it world +x? If not, does your uid on the client side match something that can execute a cd into that directory on the server side? To test, you could chmod 777 the directory on the server just long enough to see if you're able to cd into it from the client.
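    A quick server-side check (a sketch; /pool/share is assumed to be the dataset's mountpoint):
    ls -ldn /pool/share      # numeric uid/gid - compare with the output of 'id' on the OS X client
    chmod 755 /pool/share    # world execute/search on the directory, if that's acceptable
    If the numeric uids don't line up between the Solaris box and the OS X client, fixing the ownership or the client-side uid is the longer-term answer rather than opening up the permissions.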
