Question on cluster 3.x and NFS shares

I'm going to describe a situation that worked just fine under SC 2.x but isn't working so well now.
Let's attach some trivial names to machines just for grins -
I'm on dbserver1 and I want to share a filesystem to a server called appserver1 over a private (IPMP'd) network, i.e. not over the interface my nodename resolves to:
dbserver1 has routable ip address of 10.0.0.1
appserver1 has routable ip address of 10.0.0.2
dbserver1-qfe1 is using ip address of 192.168.0.1
appserver1-qfe1 is using ip address of 192.168.0.2
All entries are in each server's local /etc/inet/hosts file, and the nodename of each system resolves to its address on the 10 net.
If I wanted to share /usr/local over the public network, I'd run this on dbserver1:
share -F nfs -o rw=appserver1 -d "comment" /usr/local
on appserver1 -
mount -F nfs dbserver1:/usr/local /mnt
What I actually want, though, is to share the filesystem so that it's only visible via the 192 subnet:
share -F nfs -o rw=appserver1-qfe1 -d "comment" /usr/local
on appserver1 -
mount -F nfs dbserver1-qfe1:/usr/local /mnt
Currently, mounting over the "public" network works, but mounting over the private network returns "permission denied".
Interesting twist...
If I do this
share -F nfs -o rw -d "comment" /usr/local
and then try
mount -F nfs dbserver1-qfe1:/usr/local /mnt
it works...
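For what it's worth, one variation I plan to try (untested; based on the access-list syntax in share_nfs(1M)) is restricting the export by network rather than by hostname, in case the problem is how the client's private address resolves on the server:
share -F nfs -o rw=@192.168.0 -d "comment" /usr/local
and then, on appserver1:
mount -F nfs dbserver1-qfe1:/usr/local /mnt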
I know I've described something fairly generic, but I'm just trying to understand what SC 3.x does differently from SC 2.x with respect to NFS exports.
thanks in advance,
Jeff

anything, anybody?
Just for additional clarification, this is a Solaris 9 cluster running Sun Cluster 3.1...
Thanks again,

Similar Messages

  • Slow ZFS-share performance (both CIFS and NFS)

    Hello,
    After upgrading my OpenSolaris file server (newest version) to Solaris 11 Express, the read (and write) performance on my CIFS and NFS shares dropped from 40-60 MB/s to a few kB/s. I upgraded the ZFS filesystems to the most recent version as well.
    dmesg and /var/log/syslog don't list anything abnormal as far as I can see. I'm not running any scrubs on the zpools, and they are listed as online. top doesn't reveal any process using more than 0.07% of the CPU.
    The problem is probably not on the client side, as the clients' configuration is 100% untouched.
    Where should I start looking for errors (logs etc.)? Any recommended diagnostic tools?
    Best regards,
    KL

    Hi!
    Check the link speed:
    dladm show-dev
    Check for collisions and bad network packets:
    netstat -ia
    netstat -ia 2 10 (while a file is being transferred)
    Check for lost packets:
    ping -s <IP client> (wait for more than a minute)
    Check for retransmits and response latency:
    snoop -P -td <IP client> (while a file is being transferred)
    Also try replacing the network cable.
    Regards.

  • Zone cluster and NFS

    Hiya folks.
    The setup is two global nodes running Sun Cluster 3.3 with a zone cluster configured across them, and an NFS share from a NetApp filer that can be mounted on both global zones.
    I'm not aware of a way to present this NFS share to the zone cluster.
    This is a failover cluster setup and there won't be any parallel I/O from the other cluster node.
    I've heard the path I should follow is a loopback filesystem (lofs). I'd appreciate your advice. Thanks in advance.
    Cheers
    osp

    Hi,
    I had been confused by the docs and needed confirmation before replying.
    You have to issue the clnas command from the global zone but can use the -Z <zoneclustername> option to work in the zonecluster itself. E.g.
    # clnas add -t netapp -p userid=nasadmin -f <passwd-file> -Z <zc> <appliance>
    # clnas add-dir -Z <zc> -d <dir> <appliance>
    Your proposal (note that the command must be clnas, not clns):
    clns -t netapp -u nasadmin -f /home/nasadmin/passwd.txt -Z zc1 netapp_nfs_vfiler1
    clns add-dir -d /nfs_share1 netapp_nfs_vfiler1
    is not quite correct.
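    Rewritten with the syntax above (an untested sketch, keeping your vfiler and zone cluster names), it would look roughly like this:
    # clnas add -t netapp -p userid=nasadmin -f /home/nasadmin/passwd.txt -Z zc1 netapp_nfs_vfiler1
    # clnas add-dir -Z zc1 -d /nfs_share1 netapp_nfs_vfiler1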
    A few concerns here: should the -u user and the password be vfiler users, or are they Unix users? They are the vfiler user.
    Where does the share get presented in the zone cluster? Good question. Just give it a try.
    Let us know whether that worked.
    Hartmut

  • I downloaded Mountain Lion for my MacBook Pro and I want to install it on my sister's MacBook Air. My question is: how many times can I share my purchase?


    Association of Associated Devices is subject to the following terms:
    "You may auto-download Eligible Content or download previously-purchased Eligible Content from an Account on up to 10 Associated Devices, provided no more than 5 are iTunes-authorized computers."
    That information is available here >   iTUNES STORE - MAC APP STORE - TERMS AND CONDITIONS
    If you re-download Mountain Lion using your Apple ID on her Mac, your sister will need to use your Apple ID and password to install and update apps.

  • New files and folders on a Linux client mounting a Windows 2012 Server for NFS share do not inherit Owner and Group when SetGID bit set

    Problem statement
    When I mount a Windows Server for NFS share using UUUA, set the owner and group, and set the SetGID bit on the parent folder of a hierarchy, new files and folders created inside and underneath that parent do not inherit the owner and group of the parent.
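    For reference, the client-side steps I'm describing boil down to something like this (Linux client; the directory, user and group names are just placeholders):
    mount -t nfs fs4:/raid-6-array /mnt/raid6
    chown appuser:appgroup /mnt/raid6/project
    chmod 2775 /mnt/raid6/project      # set the SetGID bit on the parent folder
    touch /mnt/raid6/project/newfile
    ls -l /mnt/raid6/project/newfile   # group is not "appgroup" as I would expect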
    I am given to understand from this Microsoft Knowledge Base article (http://support.microsoft.com/kb/951716/en-gb) that the problem is due to the Windows implementation of Services for NFS not supporting the Solaris System V or BSD grpid "semantics".
    However, the article says the same functionality can be achieved by using ACE inheritance in conjunction with changing the registry setting "KeepInheritance" to enable inheritance propagation of the permissions by the Windows NFS Services.
    1. The precise location of the "KeepInheritance" DWORD value appears to have "moved" in Windows Server 2012 from a Services path to a Software path. Is this documented somewhere? And after enabling it (or creating it in the previous location) the feature seems non-functional. Is there a way to file a bug with Microsoft for this feature?
    2. All of the references demonstrating how to set an ACE to achieve the same result "currently" either lead to broken links on Microsoft technical websites, or are not explicit; they are vague or circular. There are no plain examples. Can an example be provided?
    3. Is UUUA compatible with the method of setting an ACE to achieve this result, or must the Linux client mount be "mapped" using an authentication source? And could that be done with the new flat-file passwd and group files in c:\windows\system32\drivers\etc, and is there an example available?
    Scenario:
    Windows Server 2012 Standard
    File Server (Role)
    +- Server for NFS (Role) << -- installed
    General --
    Folder path: F:\Shares\raid-6-array
    Remote path: fs4:/raid-6-array
    Protocol: NFS
    Authentication --
    No server authentication
    +- No server authentication (AUTH_SYS)
    ++- Enable unmapped user access
    +++- Allow unmapped user access by UID/GID
    Share Permissions --
    Name: linux_nfs_client.host.edu
    Permissions: Read/Write
    Root Access: Allowed
    Encoding: ANSI
    NTFS Permissions --
    Type: Allow
    Principal: BUILTIN\Administrators
    Access: Full Control
    Applies to: This folder only
    Type: Allow
    Principal: NT AUTHORITY\SYSTEM
    Access: Full Control
    Applies to: This folder only
    -- John Willis, Facebook: John-Willis, Skype: john.willis7416

    I'm making some "major" progress on this problem.
    1. Apparently the "semantics" issue of honoring SGID or grpid in NFS, on the server side or the client side, has been debated for some time. It also existed as of 2009 between Solaris NFS servers and Linux NFS clients. The Linux community defaulted to declaring it a "server-side" issue to avoid race conditions between simultaneous users and the local file system daemons: the client would have to check for the SGID bit and reformulate its CREATE request to specify the secondary group it had "noticed", by which time the bit could have changed on the server. Sun declined to fix it, even though there were reports that it did not behave the same between NFSv3 and NFSv4 daemons, which might be because NFSv4 servers have local ACL or ACE entries to process and a new local/NFS "inheritance" scheme to honor; that could place it in conflict with remote access and push the responsibility outwards to the NFS client, introducing a race condition and necessitating locking semantics.
    This article covers that discovery and no resolution - http://thr3ads.net/zfs-discuss/2009/10/569334-CR6894234-improved-sgid-directory-compatibility-with-non-Solaris-NFS-clients
    2. A much older Microsoft Knowledge Base article had explicit examples of using Windows ACEs and inheritance to "mitigate" the issue: basically, the NFS client cannot update an ACE to make it inheritable, but a Windows-side admin or Windows user can update or promote an existing ACE to be inheritable.
    Here are the pertinent statements -
    "In Windows Services for UNIX 2.3, you can use the KeepInheritance registry value to set inheritable ACEs and to make sure that these ACEs apply to newly created files and folders on NFS shares."
    "Note About the Permissions That Are Set by NFS Clients
    The KeepInheritance option only applies ACEs that have inheritance enabled. Any permissions that are set by an NFS client will
    only apply to that file or folder, so the resulting ACEs created by an NFS client will
    not have inheritance set."
    "So
    If you want a folder's permissions to be inherited to new subfolders and files, you must set its permissions from the Windows NFS server because the permissions that are set by NFS clients only apply to the folder itself."
    http://support.microsoft.com/default.aspx?scid=kb;en-us;321049
    3. I have set up a Windows 2008 R2 NFS server and mounted it from a Red Hat Enterprise Linux 5.10 x86_64 server [Oct 31, 2013], and so far this does appear to be the case.
    4. In order to mount and then switch to a non-root user to create subdirectories and files, I had to mount the NFS share after enabling anonymous AUTH_SYS mapping. That is not a good thing, but it was necessary because I have been using UUUA (Unmapped Unix User Access) mapping, which makes no attempt to map a Unix UID/GID set by the NFS client to a Windows user account.
    To verify the inheritance of additional ACEs on new subdirectories and files created by a non-root Unix user, on the Windows NFS server I used right-click Properties, the Security tab, then Advanced to list all the ACEs, and looked at the far column showing whether each applied to [This folder only], [This folder and subdirectories], or [This folder, subdirectories and files].
    5. All new subdirectories and files created by the non-root user had a non-inheriting ACE created for them.
    6. I turned a non-inheriting ACE into an inheriting ACE by selecting it, clicking [Edit], and using the drop-down to select [This folder, subdirs and files] (a rough command-line equivalent is sketched after point 8 below). Then I went back to the NFS client and created more subdirs and files, went back to the Windows NFS server, and checked the new subdirs and folders: they did inherit the Windows NFS server ACE! However, the UID/GID of the subdirs and folders remained unchanged; they did not reflect the new "effective" ownership or group membership.
    7. I "believe" because I was using UUUA and working "behind" the UID/GID presentation layer for the NFS client, it did not update that presentation layer. It might do that "if" I were using a Mapping mechanism and mapped UID/GID to Windows User SIDs and
    Group SIDs. Windows 2008r2 no longer has a "simple" Mapping server, it does not accept flat text files and requires a Schema extension to Active Directory just to MAP a windows account to a UID/GID.. a lot of overhead. Windows Server 2012 accepts flat text
    files like /etc/passwd and /etc/group to perform this function and is next on my list of things to see if that will update the UID/GID based on the Windows ACE entries. Since the Local ACE take precedence "over" Inherited ACEs there could be a problem. The
    Inheritance appears to be intended [only] to retain Administrative rights over user created subdirs and files by adding an additional ACE at the time of creation.
    8. I did verify from the NFS client side on Linux that even though the UID/GID suggest the local non-root user should not be able to traverse or create new files, the "phantom" NFS server ACEs are in place and do permit it. Reconciling the "view" with "reality" appears problematic, unless user mapping will update the "effective" rights and ownership in the view.
    -- John Willis, Facebook: John-Willis, Skype: john.willis7416

  • NFS shares and df problem

    I have two nfs shares mounted on bootup, "10.0.0.118:/warehouse" and "10.0.0.118:/lataukset". However when I use df, it always displays something like this:
    udev 10M 0 10M 0% /dev
    run 10M 260K 9,8M 3% /run
    /dev/sda1 30G 5,7G 23G 21% /
    10.0.0.118:/warehouse 5,4T 3,3T 1,9T 65% /mnt/warehouse
    10.0.0.118:/lataukset 908G 84G 778G 10% /mnt/lataukset
    10.0.0.118:/lataukset 908G 84G 778G 10% /mnt/lataukset
    10.0.0.118:/warehouse 5,4T 3,3T 1,9T 65% /mnt/warehouse
    [... the same two NFS entries repeated many more times ...]
    So the same entries are shown multiple times.
    /etc/fstab
    10.0.0.118:/lataukset /mnt/lataukset nfs4 defaults,user,noauto,comment=systemd.automount 0 0
    10.0.0.118:/warehouse /mnt/warehouse nfs4 defaults,user,noauto,comment=systemd.automount 0 0
    What could be the reason for this and is there a fix for it?
    Thanks.

    My best guess is "comment=systemd.automount". Also, do you have any reason to use "noauto"?
    I always mount NFSv4 shares with "defaults,async,user,proto=tcp,soft,intr" (I'm not really sure what is and isn't included in "defaults", so I just put those options there; they haven't bitten me so far). You may want to think twice about the async option; no problems here in about a year, but I don't think it was recommended, at least for NFSv3.
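    Applied to your fstab entries, that suggestion would look roughly like this (untested, and drop the comment=systemd.automount part entirely):
    10.0.0.118:/warehouse /mnt/warehouse nfs4 defaults,async,user,proto=tcp,soft,intr 0 0
    10.0.0.118:/lataukset /mnt/lataukset nfs4 defaults,async,user,proto=tcp,soft,intr 0 0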
    Well, that's it. Good luck.

  • Nautilus NFS shares listed under both Devices and Network

    I have several NFS shares from my home server that I mount using fstab.  I am running a fully up to date installation of Gnome 3.12.  However, as my subject indicates, my NFS shares appear under both Devices and Network in Nautilus.  I've done some searching and not found much about this.  It is redundant and most importantly space wasting to have two lists of the same entries.  Is there a way I can eliminate this duplication?  Thanks.
    Here is what I'm using in fstab to mount the shares:
    nfs noauto,x-systemd.automount,x-systemd.device-timeout=10,rw,rsize=32768,wsize=32768,timeo=14,hard,intr,user 0 0
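    (The server and mount-point fields in front of those options are specific to my setup; with hypothetical names, a full line would read: server:/share /mnt/share nfs noauto,x-systemd.automount,x-systemd.device-timeout=10,rw,rsize=32768,wsize=32768,timeo=14,hard,intr,user 0 0)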

    I indicated that I was trying to mount several NFS shares off my home server. I should have mentioned that these are NOT NFSv4 shares. Originally, I had the mount points inside my home folder. I've since moved them into the /mnt directory and no longer have duplicate entries; in fact, I don't have any entries under either Network or Devices, and I've made links to the directories where the NFS shares are mounted. I was more curious whether this was a bug in Nautilus or my own error, since having the mount points in my home directory with the same fstab mount options did not yield duplicate entries in other file browsers such as Dolphin in KDE.
    I guess the long and short of it is that if you are using Nautilus, have your NFS mount points outside your home folder.
    Thanks.

  • Adding NFS Share to Mountain Lion Server

    Alright, here goes.
    The company I work for has been using SL Server for years and wanted to test a possible upgrade to ML Server for NFSv4. I downloaded ML and ML Server onto a test machine. I mounted the share via the server connector (nfs://blah blah.. you get the idea); the volume that I want to share comes up on the desktop but will not show up in the Server app. I also have access to POSIX permissions, but I need full access to ACLs as well.
    I'm trying to figure out how to access this NFS share so that I can share it from the server but cannot seem to get it working. To those who know OSX servers I probably sound like a moron, but I'm just an I.T. guy tasked with setting up a test server even though I'm not a "server guy". Any information would be greatly appreciated.
    One more question. Would a migration from SL to ML server also bring along this volume? Thanks again.

    I upgraded today and had the same issue. I took the following steps to fix my computer:
    Boot into Recovery Partition (Hold Option Button while booting)
    Open Terminal.
    Type resetpassword
    Select your hard drive
    Select the user account (Administrator)
    Enter a new password for the user
    Reenter password
    Save
    Restart
    Boot normally, log in as Administrator with the new password, and add "Admin" permission to your account.
    Restart
    Everything should be working as expected

  • Launching xcode from nfs share (Ensure that Xcode.app is installed on a volume with ownership enabled)

    Hi!
    We have a Mac mini (Yosemite) and an NFS server running Ubuntu 14.04.
    We also have Xcode residing on an NFS share that is mounted on the Mac.
    Problem: when I try to launch Xcode from the NFS share, I get this error message:
    NSLocalizedRecoverySuggestion=Ensure that Xcode.app is installed on a volume with ownership enabled
    You can see full error here:
    https://gist.github.com/keferoff/fcfd3ea6c13f6ba481fa
    The question is:
    How can I launch Xcode from an NFS share? For various reasons I can't use xcode-select or store several copies of Xcode locally.
    Thanks in advance!

    That's a cool answer, but I need a solution for accomplishing my task. Maybe I can use an iSCSI or NBD device, or maybe there are some NFS mount options that would help?

  • Testing ha-nfs in two node cluster (cannot statvfs /global/nfs: I/O error )

    Hi all,
    I am testing HA-NFS (failover) on a two-node cluster. I have a Sun Fire V240, an E250, and Netra st A1000/D1000 storage. I have installed Solaris 10 update 6 and the cluster packages on both nodes.
    I have created one global file system (/dev/did/dsk/d4s7) and mounted it as /global/nfs. This file system is accessible from both nodes. I have configured HA-NFS according to the document Sun Cluster Data Service for NFS Guide for Solaris, using the command line interface.
    The logical host pings from the NFS client, and I have mounted the share there using the logical hostname. For testing purposes I brought one machine down. After this step the file system gives an I/O error (on both server and client), and when I run df it shows:
    df: cannot statvfs /global/nfs: I/O error.
    I configured it with the following commands:
    #clnode status
    # mkdir -p /global/nfs
    # clresourcegroup create -n test1,test2 -p Pathprefix=/global/nfs rg-nfs
    I have added the logical hostname and IP address to /etc/hosts.
    I have commented out the hosts and rpc lines in /etc/nsswitch.conf.
    # clreslogicalhostname create -g rg-nfs -h ha-host-1 -N sc_ipmp0@test1,sc_ipmp0@test2 ha-host-1
    # mkdir /global/nfs/SUNW.nfs
    I created a file called dfstab.user-home in /global/nfs/SUNW.nfs, and that file contains the following line:
    share -F nfs -o rw /global/nfs
    # clresourcetype register SUNW.nfs
    # clresource create -g rg-nfs -t SUNW.nfs user-home
    # clresourcegroup online -M rg-nfs
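    To verify, I check status with the standard commands:
    # clresourcegroup status
    # clresource status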
    Where did I go wrong? Can anyone point me to documentation on this?
    Any help?
    Thanks in advance.

    test1#  tail -20 /var/adm/messages
    Feb 28 22:28:54 testlab5 Cluster.SMF.DR: [ID 344672 daemon.error] Unable to open door descriptor /var/run/rgmd_receptionist_door
    Feb 28 22:28:54 testlab5 Cluster.SMF.DR: [ID 801855 daemon.error]
    Feb 28 22:28:54 testlab5 Error in scha_cluster_get
    Feb 28 22:28:54 testlab5 Cluster.scdpmd: [ID 489913 daemon.notice] The state of the path to device: /dev/did/rdsk/d5s0 has changed to OK
    Feb 28 22:28:54 testlab5 Cluster.scdpmd: [ID 489913 daemon.notice] The state of the path to device: /dev/did/rdsk/d6s0 has changed to OK
    Feb 28 22:28:58 testlab5 svc.startd[8]: [ID 652011 daemon.warning] svc:/system/cluster/scsymon-srv:default: Method "/usr/cluster/lib/svc/method/svc_scsymon_srv start" failed with exit status 96.
    Feb 28 22:28:58 testlab5 svc.startd[8]: [ID 748625 daemon.error] system/cluster/scsymon-srv:default misconfigured: transitioned to maintenance (see 'svcs -xv' for details)
    Feb 28 22:29:23 testlab5 Cluster.RGM.rgmd: [ID 537175 daemon.notice] CMM: Node e250 (nodeid: 1, incarnation #: 1235752006) has become reachable.
    Feb 28 22:29:23 testlab5 Cluster.RGM.rgmd: [ID 525628 daemon.notice] CMM: Cluster has reached quorum.
    Feb 28 22:29:23 testlab5 Cluster.RGM.rgmd: [ID 377347 daemon.notice] CMM: Node e250 (nodeid = 1) is up; new incarnation number = 1235752006.
    Feb 28 22:29:23 testlab5 Cluster.RGM.rgmd: [ID 377347 daemon.notice] CMM: Node testlab5 (nodeid = 2) is up; new incarnation number = 1235840337.
    Feb 28 22:37:15 testlab5 Cluster.CCR: [ID 499775 daemon.notice] resource group rg-nfs added.
    Feb 28 22:39:05 testlab5 Cluster.RGM.rgmd: [ID 375444 daemon.notice] 8 fe_rpc_command: cmd_type(enum):<5>:cmd=<null>:tag=<>: Calling security_clnt_connect(..., host=<testlab5>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Feb 28 22:39:05 testlab5 Cluster.CCR: [ID 491081 daemon.notice] resource ha-host-1 removed.
    Feb 28 22:39:17 testlab5 Cluster.RGM.rgmd: [ID 375444 daemon.notice] 8 fe_rpc_command: cmd_type(enum):<5>:cmd=<null>:tag=<>: Calling security_clnt_connect(..., host=<testlab5>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Feb 28 22:39:17 testlab5 Cluster.CCR: [ID 254131 daemon.notice] resource group nfs-rg removed.
    Feb 28 22:39:30 testlab5 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <hafoip_validate> for resource <ha-host-1>, resource group <rg-nfs>, node <testlab5>, timeout <300> seconds
    Feb 28 22:39:30 testlab5 Cluster.RGM.rgmd: [ID 375444 daemon.notice] 8 fe_rpc_command: cmd_type(enum):<1>:cmd=</usr/cluster/lib/rgm/rt/hafoip/hafoip_validate>:tag=<rg-nfs.ha-host-1.2>: Calling security_clnt_connect(..., host=<testlab5>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Feb 28 22:39:30 testlab5 Cluster.RGM.rgmd: [ID 515159 daemon.notice] method <hafoip_validate> completed successfully for resource <ha-host-1>, resource group <rg-nfs>, node <testlab5>, time used: 0% of timeout <300 seconds>
    Feb 28 22:39:30 testlab5 Cluster.CCR: [ID 973933 daemon.notice] resource ha-host-1 added.

  • How to create an nfs share using the method "CreateShare" of class "MSFT_NfsServerTasks"

    I need to use the "CreateShare" method of the WMI class "MSFT_NfsServerTasks" to create an NFS share.
    I am new to WMI. Could somebody please guide me on how to achieve this? Thanks in advance.

    Let me rephrase my question. I need to call the MSFT_NfsServerTasks CreateShare method to create the NFS share.
    The syntax of this method is:
    uint32 CreateShare(
    [in] string Name,
    [in] string Path,
    [in] string NetworkName,
    [in] string Authentication[],
    [in] boolean UnmappedUserAccess,
    [in] boolean AnonymousAccess,
    [in] sint32 AnonymousUid,
    [in] sint32 AnonymousGid,
    [in] string LanguageEncoding,
    [in] boolean AllowRootAccess,
    [in] string Permission,
    [in] MSFT_NfsSharePermission ClientPermission[],
    [out] MSFT_NfsShare Share
    );
    To run this method I will be using IWbemServices::ExecMethod, which takes an input parameter object (IWbemClassObject). I managed to add the string parameters to the input parameter object like this:
    IWbemClassObject* pInInst = NULL; /* obtained earlier from the method's in-parameters class */
    VARIANT var;
    VariantInit(&var);
    var.vt = VT_BSTR;
    var.bstrVal = SysAllocString(L"C:\\share1");
    hRes = pInInst->Put(L"Path", 0, &var, 0);  // set the string "Path" parameter
    VariantClear(&var);
    But how can I set the other parameters, e.g. string[] and MSFT_NfsSharePermission ClientPermission[], etc.?
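    In case it helps someone else, here is what I ended up sketching for the string-array case (untested; the "Auth_Sys" value is just a placeholder): a string[] parameter is passed as a SAFEARRAY of BSTRs wrapped in a VT_ARRAY | VT_BSTR VARIANT.
    // Build a one-element string array for the "Authentication" parameter
    SAFEARRAYBOUND bound = { 1, 0 };                       // 1 element, lower bound 0
    SAFEARRAY* psa = SafeArrayCreate(VT_BSTR, 1, &bound);
    LONG idx = 0;
    BSTR auth = SysAllocString(L"Auth_Sys");               // placeholder value
    SafeArrayPutElement(psa, &idx, auth);                  // copies the BSTR into the array
    SysFreeString(auth);
    VARIANT varAuth;
    VariantInit(&varAuth);
    varAuth.vt = VT_ARRAY | VT_BSTR;
    varAuth.parray = psa;
    hRes = pInInst->Put(L"Authentication", 0, &varAuth, 0);
    VariantClear(&varAuth);                                // also frees the SAFEARRAY
    For the MSFT_NfsSharePermission[] parameter, my understanding is that each element is an embedded object: spawn an instance of the MSFT_NfsSharePermission class, fill in its properties, query its IUnknown, and put the elements into a SAFEARRAY of VT_UNKNOWN (so the VARIANT type is VT_ARRAY | VT_UNKNOWN) before calling Put. Corrections welcome.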

  • Configuring an NFS Share for CDM (SGD 4.6 on Linux)

    hello all,
    I'm not able to access the local drive from apps because I didn't create /smb on the server, as specified in doc 821-1926.pdf.
    My question is:
    - is it mandatory to have an NFS share on the SGD server?
    Users just need to load local files into published apps, not to access remote files.
    Thanks in advance for help,
    gerard

    Yes, it is mandatory.
    Note: The /smb share must be configured on the application server and not the SGD server (unless the server is performing both roles).
    You must also install the SGD enhancement module on the application server and, after configuring the NFS share, you need to start the CDM component of the Enhancement Module as described in the doc.
    The CDM component of the enhancement module presents the NFS share as an SMB share to the SGD server, and without this you will not be able to access local drives.

  • What is proper way of defining NFS shares?

    I have two servers: serverA is Solaris 9 and serverB is Solaris 10.
    On serverA, I define this in /etc/dfs/dfstab:
    share -F nfs -o root=serverB /dirB
    Then from serverB I do:
    mount -F nfs serverA:/dirB /xdirB
    When I then run (from serverB):
    cd /xdirB
    find . -print -mount -depth | cpio -pvdm /destination
    I get permission denied errors on some directories. I found that if I first do
    chmod -R 777 /dirB
    I do not get the permission denied errors, which is understandable.
    The question is: how does one properly define NFS shares so that I don't have to make all dirs/files world readable?

    If you are running DNS, it is likely that the IP address of your client does not resolve to 'serverB' but to something like 'serverB.company.com'. Those two strings do not match, and you are probably not granting root access to the client.
    On the client, touch a file that doesn't exist. When it's created, who is the owner? If it's 'nobody', then that's almost certainly your problem.
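    If that turns out to be the cause, one fix would be to grant root access to the name the server actually sees, along these lines (the domain is just the example from above; check what the client's IP reverse-resolves to first):
    share -F nfs -o root=serverB.company.com /dirB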
    Darren

  • OS X Lion, NFS shares, Time Machine?

    A few years ago I bought a Mac mini server. I wanted to use it as a storage server using attached drives. I got everything up and running but kept running into the same problem: the NFS server daemon would stop or crash for no apparent reason when I was writing large files to the server over NFS.
    I spent some time troubleshooting the issue but never resolved it, and eventually resorted to wiping out OS X on the Mac mini server and installing CentOS Linux instead. The NFS daemon there is rock solid and I have used it ever since.
    Fast forward to today, when my Time Capsule died due to the usual power supply failure (thanks to Apple for a crappy design). I then realized that I could use my Mac mini server as a host for Time Machine; I'd just need to get it running OS X Server again. I would still want to serve writable NFS shares, so this leads to my question:
    Does anyone know for sure whether OS X Mountain Lion has had any improvements to the NFS daemon over previous versions? I'd be quite happy to buy the new OS, but I'd prefer to be a little more confident that all the work would pay off (in the form of a stable NFS service).

    The Server.app and Server Admin utilities for Lion no longer let you configure NFS. However the NFS software is still there and in fact was significantly upgraded and now supports NFS v4.
    See http://support.apple.com/kb/HT4695

  • AlwaysOn Cluster reboot due to file share witness unavailability

    Hi Team,
    Has anyone come across this scenario with a two-node AlwaysOn Availability Group? The file share witness times out, RHS terminates, and that causes the cluster node to reboot. The file share witness is there for continuous failover; if the resource is unavailable, my expectation was that it should simply go offline and not impact the server or SQL Server. Instead it reboots the cluster node to rectify the issue.
    Configuration
    Windows Server 2012 R2 (VMs) - two node, file share witness (nfs)
    Sql Server 2012 SP2
    Errors
    A component on the server did not respond in a timely fashion. This caused the cluster resource 'File Share Witness' (resource type 'File Share Witness', DLL 'clusres2.dll') to exceed its time-out threshold. As part of cluster health detection, recovery actions will be taken. The cluster will try to automatically recover by terminating and restarting the Resource Hosting Subsystem (RHS) process that is running this resource. Verify that the underlying infrastructure (such as storage, networking, or services) that are associated with the resource are functioning correctly.
    The cluster Resource Hosting Subsystem (RHS) process was terminated and will be restarted. This is typically associated with cluster health detection and recovery of a resource. Refer to the System event log to determine which resource and resource DLL is causing the issue.
    Thanks,
    -SreejitG

    Thanks Elden, we were using a DFS name for the file share! We switched to the actual file share name and it looks good now.
    A few interesting facts: the failures happened exactly within the 12:30 PM to 1:30 AM window, and nothing specific to DFS was ever recorded in the errors! I'm not sure whether there is any daily maintenance or task that runs during that window involving DFS.
    + DFS is not supported or recommended by Microsoft for the file share witness:
    Do not use a file share that is part of a Distributed File System (DFS) Namespace.
    https://technet.microsoft.com/en-us/library/cc770620%28v=ws.10%29.aspx?f=255&MSPPError=-2147217396
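    For anyone hitting the same thing: re-pointing the witness at the real share path (rather than the DFS namespace path) can be done with the failover clustering PowerShell cmdlet, for example (the share path here is just a placeholder):
    Set-ClusterQuorum -NodeAndFileShareMajority \\fileserver1\ClusterWitness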
    Thanks,
    -SreejitG
