NAC and Allowing NFS Mounts

When a Linux system boots, root processes typically attempt to mount network drives via NFS before any user logs in to the system. In NAC, is it sufficient to allow TCP/UDP ports 111 and 2049 under User Roles->Traffic Control->Unauthenticated Role for the NFS mount process to work? We're using NAC v4.7.1.
Thanks,
Doug

Doug,
If those two ports are what your NFS uses, then yes, opening them in the Unauthenticated role should be sufficient!
HTH,
Faisal
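For readers wiring this up on a host firewall instead of NAC: the rule set implied by the answer can be sketched with plain iptables. This is a hedged sketch, not NAC role syntax; it only echoes the rules so you can inspect them before applying (drop the `echo` and run as root to install them). Note that classic NFSv3 setups may also need the mountd/statd/lockd ports unless those are pinned to fixed values; NFSv4 needs only 2049.

```shell
# Echo (not apply) allow rules for the two ports a boot-time NFS mount
# needs: 111 (rpcbind/portmapper) and 2049 (nfsd), over both protocols.
for proto in tcp udp; do
  for port in 111 2049; do
    echo iptables -A INPUT -p "$proto" --dport "$port" -j ACCEPT
  done
done
```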

Similar Messages

  • Application to list connected Drives and allow to mount

    Does anyone out there know of an app that would put an icon or a drop-down menu in the menu bar that, when clicked, would list all of the USB devices connected to your Mac, and then allow you to click on one of the drives and mount it?
    I know that you can go into Disk Utility, select a drive from the list, and instruct it to mount. I am wondering if someone might have written an app that puts this in the menu bar for quick access.
    Anyone out there ever see something like that?

    No, I am looking for something like the picture below (I tried to draw it as best as I could)

  • Are nfs mounts allowed in vfstab after a flash archive restore?

    I am restoring a server using a flash archive. The vfstab on the original server contains an entry for an NFS mount. It looks like during the push of the archive to the new server, the vfstab gets saved to vfstab.orig and the NFS mount is removed from vfstab. So I added a finish script to add the NFS mount back to vfstab, and now the server hangs at S73nfs.client during reboot.
    Are nfs mounts not allowed in vfstab after a flash archive restore?

    One option is to secure the OBP with a password.
    At the OK prompt, set security-mode to "command" and then set a password. Be careful not to lose this password. With this set, "boot" is the only command that can be run without supplying the firmware password.
    Commands:
    # halt
    ok setenv security-mode command
    ok setenv security-password <enterpasswd>
    ok reset-all

  • Nfs mount created with Netinfo not shown by Directory Utility in Leopard

    On Tiger I used to dynamically mount a few directories using NFS. To do so, I used NetInfo.
    I have upgraded to Leopard and the mounted directories are still working, although NetInfo is not present anymore.
    I was expecting to see these mount points and modify them using Directory Utility, which has replaced NetInfo. But they are not even shown in the Mounts panel of Directory Utility.
    Is there a way to see and modify NFS mount points previously created by NetInfo with the new Directory Utility?

    Thank you very much! I was able to recreate the static automount that I had previously had. I just had to create the "mounts" directory in /var/db/dslocal/nodes/Default/ and then I saved the following text as a .plist file within "mounts".
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
    <key>dir</key>
    <array>
    <string>/Network/Backups</string>
    </array>
    <key>generateduid</key>
    <array>
    <string>0000000-0000-0000-0000-000000000000</string>
    </array>
    <key>name</key>
    <array>
    <string>server:/Backups</string>
    </array>
    <key>opts</key>
    <array>
    <string>url==afp://;AUTH=NO%20USER%[email protected]/Backups</string>
    </array>
    <key>vfstype</key>
    <array>
    <string>url</string>
    </array>
    </dict>
    </plist>
    I don't think the specific name of the .plist file matters, or the value for "generateduid". I'm listing all this info assuming that someone out there might care.
    I assume this would work for SMB shares also... if SMB worked, which it hasn't on my system since I installed Leopard.

  • Nfs mount point does not allow file creations via java.io.File

    Folks,
    I have mounted an nfs drive to iFS on a Solaris server:
    mount -F nfs nfs://server:port/ifsfolder /unixfolder
    I can mkdir and touch files no problem. They appear in iFS as I'd expect. However, if I write to the NFS mount via a JVM using java.io.File, I encounter the following problems:
    Only directories are created, unless I include the user that started the JVM in the oinstall unix group with the oracle user, because it's the oracle user that writes to iFS, not the user creating the files!
    I'm trying to create several files in a single directory via java.io.File, BUT only the first file is created. I've tried putting waits in the code to see if it is a timing issue, but it doesn't appear to be. Writing via java.io.File to either a native directory or a native NFS mount point works OK, i.e. a JUnit test against the native file system works, but not against an iFS mount point. Curiously, the same unit tests running on a PC with a Windows drive mapping to iFS work OK!! So why not via a unix NFS mapping?
    many thanks in advance.
    C

    Hi Diep,
    I have done as requested via Oracle TAR #3308936.995. As it happens, the problem is resolved. The resolution was not to create the file via java.io.File.createNewFile() before adding content via an OutputStream. If the file creation is left until the content is added, as shown below, the problem is resolved.
    Another quick question: is link creation via 'ln -fs' and 'ln -f' supported against an NFS mount point to iFS? (At the operating system level, rather than adding a folder path relationship via the Java API.)
    many thanks in advance.
    public void createFile(String p_absolutePath, InputStream p_inputStream) throws Exception
    {
        File file = new File(p_absolutePath);
        // Oracle TAR Number: 3308936.995
        // Uncommenting the line below causes: java.io.IOException: Operation not supported on transport endpoint
        //     at java.io.UnixFileSystem.createFileExclusively(Native Method)
        //     at java.io.File.createNewFile(File.java:828)
        //     at com.unisys.ors.filesystemdata.OracleTARTest.createFile(OracleTARTest.java:43)
        //     at com.unisys.ors.filesystemdata.OracleTARTest.main(OracleTARTest.java:79)
        //file.createNewFile();
        FileOutputStream fos = new FileOutputStream(file);
        byte[] buffer = new byte[1024];
        int noOfBytesRead;
        while ((noOfBytesRead = p_inputStream.read(buffer, 0, buffer.length)) != -1)
        {
            fos.write(buffer, 0, noOfBytesRead);
        }
        p_inputStream.close();
        fos.flush();
        fos.close();
    }

  • Windows server 2008 R2 and NFS mounted subdirectories.

    I am mounting from a Windows Server 2008 R2 box to a RHEL 6.3 machine, and I am able to see the folders; however, I am unable to see the sub-folders via the mapped NFS mount in Windows. Any ideas as to why? Some additional facts are below.
    1. We are using an NFS mount from Windows to RHEL (mount -o \\RHELBOX\ops\resources R:). We NFS mount from RHEL to a storage device - the fstab entries look like this:
    10.9.9.9:/vol/afpres1/psf_prod    /ops/resources/prod    nfs    _netdev,defaults    0 0
    10.9.9.9:/vol/afpres1/psf_test    /ops/resources/test    nfs    _netdev,defaults    0 0
    10.9.9.9:/vol/afpres1/baselib     /ops/resources/psf     nfs    _netdev,defaults    0 0
    The RHEL export file looks like this.
    /ops/resources 10.4.4.4(rw,sync,no_all_squash,insecure,nohide)  (the 10.4.4.4 IP address is that of the 2008 R2 server)
    /ops/resources 10.4.4.5(rw,sync,no_all_squash,insecure,nohide)
    /ops 10.4.11.66(rw,sync,no_all_squash,insecure,nohide)
    #/ops/resources *(rw,sync,no_all_squash,insecure,nohide)
    /opspool *(rw,sync)
    2. We are not using Samba or CIFS

    Hello,
    The TechNet Sandbox forum is designed for users to try out the new forums functionality. Please be respectful of others, and do not expect replies to questions asked here.
    As it's off-topic here, I am moving the question to the "Where is the forum for..." forum.
    Karl

  • NFS Mounted Directory And Files Quit Responding

    I mounted a remote directory using NFS and I can access the mount point and all of its sub-directories and files. After a while, all of the sub-directories and files no longer respond when clicked; in column view there is no longer an icon nor any statistics for those files. If I go back and click on Network->Servers->myserver->its_subdirectories, it will eventually respond again.
    I have found no messages in the system log. And nfsstat shows no errors.
    I am using these mount parameters in the Directory Utility->Mounts tab:
    ro net -P -T -3
    Any idea why the NFS mounted directories and files quit responding?
    Thanks.

    I may have found an answer to my own question.
    It looks like automount will automatically unmount a file system if it has not been accessed in 10 minutes. This time-out can be changed using the automount command. I am going to try increasing this time-out value.
    Here is part of the man page:
    SYNOPSIS
    automount [-v] [-c] [-t timeout]
    -t timeout
    Set to timeout seconds the time after which an automounted file
    system will be unmounted if it hasn't been referred to within
    that period of time. The default is 10 minutes (600 seconds).

  • Systemd and nfs .mount

    Hi,
    I'm trying to set up a systemd unit for my NFS mounts. I don't want to use /etc/fstab because I want an openvpn.service dependency. However, for now, I'm omitting that dependency while debugging. Here is the current unit file I have:
    # cat host\@.mount
    [Unit]
    Description=%i mount
    DefaultDependencies=no
    Requires=local-fs.target network.target rpc-statd.service
    Conflicts=umount.target
    [Mount]
    What=host:/%i
    Where=/%i
    Type=nfs
    Options=user,async,atime,exec,rw,wsize=32768,rsize=32768
    DirectoryMode=0755
    TimeoutSec=20
    [Install]
    WantedBy=multi-user.target
    When I try to enable this I get the famously vague error:
    # systemctl enable ./host\@mnt.mount
    Failed to issue method call: Invalid argument
    Any ideas on how to fix this? Thanks!

    The solution is NOT to create this file at all.  Apparently, mounting exports from the server does not require it.  If I remove it and reboot the server, I am able to connect from my workstation with no issues.  For reference:
    $ ls -l /etc/systemd/system/multi-user.target.wants/
    total 0
    lrwxrwxrwx 1 root root 40 May 10 10:58 cpupower.service -> /usr/lib/systemd/system/cpupower.service
    lrwxrwxrwx 1 root root 38 May 10 10:58 cronie.service -> /usr/lib/systemd/system/cronie.service
    lrwxrwxrwx 1 root root 40 May 10 12:10 exportfs.service -> /usr/lib/systemd/system/exportfs.service
    lrwxrwxrwx 1 root root 42 May 10 10:59 lm_sensors.service -> /usr/lib/systemd/system/lm_sensors.service
    lrwxrwxrwx 1 root root 35 Apr 30 15:15 network.service -> /etc/systemd/system/network.service
    lrwxrwxrwx 1 root root 36 May 10 10:59 ntpd.service -> /usr/lib/systemd/system/ntpd.service
    lrwxrwxrwx 1 root root 36 May 10 11:33 rc-local.service -> /etc/systemd/system/rc-local.service
    lrwxrwxrwx 1 root root 40 May 2 22:37 remote-fs.target -> /usr/lib/systemd/system/remote-fs.target
    lrwxrwxrwx 1 root root 39 May 10 10:58 rpcbind.service -> /usr/lib/systemd/system/rpcbind.service
    lrwxrwxrwx 1 root root 42 May 10 12:10 rpc-mountd.service -> /usr/lib/systemd/system/rpc-mountd.service
    lrwxrwxrwx 1 root root 41 May 10 12:10 rpc-statd.service -> /usr/lib/systemd/system/rpc-statd.service
    lrwxrwxrwx 1 root root 43 May 10 10:58 sshdgenkeys.service -> /usr/lib/systemd/system/sshdgenkeys.service
    lrwxrwxrwx 1 root root 36 May 10 10:58 sshd.service -> /usr/lib/systemd/system/sshd.service
    lrwxrwxrwx 1 root root 41 May 10 11:06 syslog-ng.service -> /usr/lib/systemd/system/syslog-ng.service
    lrwxrwxrwx 1 root root 35 May 10 10:57 ufw.service -> /usr/lib/systemd/system/ufw.service
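    A side note on the original "Invalid argument" error: systemd requires a mount unit's file name to be the escaped form of its Where= path, so a file named host@mnt.mount cannot back Where=/mnt. The real tool for this is `systemd-escape -p --suffix=mount <path>`; the sketch below reproduces the naming rule only for simple paths (no characters needing \xNN escaping) to show what name would have been expected:

```shell
# Derive the unit file name systemd expects for a given mount point.
# Simplified: handles plain paths like /mnt/host; anything outside
# [A-Za-z0-9_.-] needs the \xNN escaping that systemd-escape performs.
mount_unit_name() {
  printf '%s.mount\n' "$(printf '%s' "${1#/}" | tr '/' '-')"
}
mount_unit_name /mnt/host   # prints: mnt-host.mount
```

    So a unit mounting /mnt/host must live in a file named mnt-host.mount before `systemctl enable` will accept it.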

  • Solaris Zones and NFS mounts

    Hi all,
    Got a customer who wants to separate his web environments on the same node. The releases of Apache, Java and PHP are different, so it kind of makes sense. Seems a perfect opportunity to implement zoning. It seems quite straightforward to set up (I'm sure I'll find out it's not). The only concern I have is that all zones will need access to a single NFS mount from a NAS storage array that we have. Is this going to be a problem to configure, and how would I get them to mount automatically on boot?
    Cheers

    Not necessarily. You can create (from the global zone) a /zone/zonename/etc/dfs/dfstab (NOT a /zone/zonename/root/etc/dfs/dfstab - notice you don't use the root dir), and from global do a shareall and the zone will start serving. Check your multi-level ports and make sure they are correct. You will run into some problems if you are running Trusted Extensions or the NFS share is ZFS, but they can be overcome rather easily.
    EDIT: I believe you have to be running TX for this to work. I'll double check.

  • NFS Mount and WRT160NL

    I'd like to ask a question:
    Is it possible to add an NFS mount to the WRT160NL?
    If it isn't possible, I request that the WRT160NL support team add an NFS option to the storage setup in a future firmware.

    No, NFS mounting is not possible on the WRT160NL.

  • Anyone else having problems with NFS mounts not showing up?

    Since upgrading to Lion, I cannot see NFS shares anymore.  The folder that had them is still there, but the share will not mount.  It worked fine in 10.6.
    nfs://192.168.1.234/volume1/video
    /Volumes/DiskStation/video
    resvport,nolocks,locallocks,intr,soft,wsize=32768,rsize=3276
    Any ideas?
    Thanks

    Since the NFS mounts show up in the Terminal app, go to the local mount directory (i.e. the mount location in NFS Mounts using Disk Utility) and do the following:
    First, create a link file:
    sudo ln -s full_local_path link_path_name
    sudo ln -s /Volumes/linux/projects/ linuxProjects
    Next, create a new directory, say in the root of the host drive (i.e. Macintosh HDD):
    sudo mkdir new_link_storage_directory
    sudo mkdir /Volumes/Macintosh\ HDD/Links
    Move the above link file to the new directory:
    sudo mv link_path_name new_link_storage_directory
    sudo mv linuxProjects /Volumes/Macintosh\ HDD/Links/
    Then, in Finder, locate the NEW_LINK_STORAGE_DIRECTORY; the link files should allow opening of these NFS mount points.
    Finally, after all links have been created and placed into the new directory, place it in the left sidebar. Now it works just like before.
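    The steps above can be condensed into one small helper. This is a sketch under the assumption that the goal is simply a directory of symlinks to the NFS mount points; the helper name is made up for illustration, and the example paths come from the post:

```shell
# link_for_mount SRC LINKS_DIR: place a symlink to SRC inside LINKS_DIR,
# creating LINKS_DIR first if needed (the "new link storage directory").
link_for_mount() {
  src="$1"; links_dir="$2"
  mkdir -p "$links_dir"
  ln -sf "$src" "$links_dir/$(basename "$src")"
}
# e.g. (as root): link_for_mount /Volumes/linux/projects "/Volumes/Macintosh HDD/Links"
```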

  • Cannot write to NFS mount with finder

    hi folks
    I have moved from a G4 Cube running 10.4.9 to a new iMac running OS X 10.5.7 (including upgrading from the old machine).
    The NFS mounts I used to have with my G4 Cube are not working properly; new mounts I've created aren't working properly either.
    I can read/open files OK, but when I try to drag & drop files onto the NFS mount using the Finder I get errors:
    "You may need to enter the name and password for an administrator on this computer to change the item named test.jpg" [ stop ] [ continue ]
    Clicking "continue" I get:
    "The item test.jpg contains one or more items you do not have permission to read. Do you want to copy the items you are allowed to read?" [ stop ] [ continue ]
    Choosing continue again results in the file appearing in the NFS directory, but with 0 size and a time stamp of 1970.
    If I try to copy the same file using the Terminal, it works fine - so it is not a simple NFS permissions problem; it is something particular to the Finder.
    I am able to create a folder inside the NFS directory using the Finder.
    I thought at first it might be related to the .DS_Store and similar files being written, so I tried turning off that behaviour:
    defaults write com.apple.desktopservices DSDontWriteNetworkStores true
    But that hasn't fixed the problem.
    There are no obvious messages in any of the logs.
    Any suggestions or pointers on how to fix this?

    Thanks for the reply.
    These articles appear to relate to sharing a Mac filesystem via NFS: exporting the data.
    I am referring to mounting an NFS filesystem from another server onto the Mac (Leopard) client.
    The mounting works fine: it's just the Finder which isn't behaving. The Finder worked in Tiger; it isn't working in Leopard.

  • NFS Mounts Empty After Upgrade

    Hello, I'm having some issues with my file-sharing setup this morning. Can anyone shed any light? I didn't notice any relevant pacman warnings I needed to watch out for.
    After running pacman -Syu on my laptop (NFS client) and trying to mount shares from my file server, I get "successful" mounts with no errors that I can see, but the directories show up empty. The files are there on the server. My NFS server was not updated. The NFS configuration has not changed (except whatever pacman -Syu might have changed). I have rebooted since updating.
    Also, when I mount as a user, I am prompted for my root password to unmount; this should not happen.
    # from fstab
    nas1.lan2.demurgatroid.net:/Misc        /mnt/Misc        nfs4    rw,user,soft,intr,noauto,rsize=8192,wsize=8192    0 0
    # from exports on server
    /srv/nfs/Misc        10.144.15.100(rw,no_subtree_check)
    # Example
    [jamie@ada ~]$ mount | grep Misc
    [jamie@ada ~]$ mount /mnt/Misc
    [jamie@ada ~]$ ls -al /mnt/Misc
    total 0
    drwxr-xr-x 1 root root 0 Dec 2 15:19 .
    drwxr-xr-x 1 root root 108 Dec 2 15:22 ..
    [jamie@ada ~]$ umount /mnt/Misc
    umount: /mnt/Misc: umount failed: Operation not permitted
    [jamie@ada ~]$ sudo !!
    sudo umount /mnt/Misc
    [sudo] password for jamie:
    Thanks, let me know if you need more data.
    --Jamie

    I believe this is all my error.  My lately-unreliable wifi dropped while I was putting out fires at work, so I plugged in an ethernet cable and disabled my wifi, which changed my IP - so I was not allowed per my own rules.  In the process I learned something new:
    Changing my fstab like so enables convenient automounting:
    nas1.lan2.demurgatroid.net:/Misc        /mnt/Misc        nfs4    rw,user,soft,intr,noauto,x-systemd.automount,x-systemd.device-timeout=10,timeo=14,rsize=8192,wsize=8192    0 0
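    To make the additions to the original fstab entry easier to spot, the options field can be split out one per line; the x-systemd.automount, x-systemd.device-timeout=10 and timeo=14 entries are what changed:

```shell
# Print the mount options from the new fstab line one per line so the
# systemd automount additions stand out.
opts='rw,user,soft,intr,noauto,x-systemd.automount,x-systemd.device-timeout=10,timeo=14,rsize=8192,wsize=8192'
printf '%s\n' "$opts" | tr ',' '\n'
```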

  • Can you re-export an nfs mount as an nfs share

    If so, what is the downside?
    I'm asking because we currently have an iSCSI SAN and a recent upgrade severely degraded iSCSI connectivity; consequently I can't mount my iSCSI volumes.
    Thanks,
    db

    Originally Posted by David Brown
    The filer/SAN NFS functionality is working normally. I can't access some of the iSCSI LUNs. Thinking of just using NFS as the backend.
    Which would be a better sub forum?
    Thank you,
    db
    Depending on which Novell OS you are running... this subforum is for NetWare, but I suspect you are using OES Linux.
    I've never tried creating an NCP share on OES for a remote NFS mount on the server. My first guess would be that it is not allowed, and also not a good practice. You could, however, if you are running an OES2 or OES 11 Linux server, try configuring an NFS mount on the OES server and then configuring an NCP share on that using Remote Manager on the server.
    What I would recommend, however, is to see if the iSCSI issue cannot be fixed or worked around.
    Could you describe a bit more of the situation there/what happened and what is not working on that end?
    -Willem

  • Permissions issue with NFS mounted MyCloud

    First off, let me say I think this is a Linux issue and not a WDMyCloud issue, but I'm not sure, so here goes... I can rsync my stuff off the Linux systems on my LAN to back them up into a WDMyCloud share, no problem.  But then I can't get at some stuff with limited file permissions when I mount the WDMyCloud via NFS.
    Here's the problem.  Let's say my directory tree on the WDMyCloud looks like "/shares/Stuff/L1/L2/L3", where the permissions on "L3" look like this:
    drwx------+ 3 fred share 4096 Jul 1 01:03 L3
    It's readable/writeable only by "fred".  I want to preserve the permissions and ownerships on everything, so if I have to do a restore and "rsync" it back onto another machine they'll go back the way they were when I backed them up.  I can see the contents of "L3" if I "ssh" into the MyCloud, *but* I *cannot* see the contents of "L3" if I try to look at it via the NFS mount - I get "ls: L3: permission denied".  If I change the permissions on it, e.g.
    drwxrwxrwx+ 3 fred share 4096 Jul 1 01:03 L3
    then I can see the contents of "L3" just fine.  So it's just the fact that via the NFS mount the MyCloud NFS server (or something?) won't give me access to it unless the permissions are open, even logged in as "root" on the machine where the MyCloud is NFS mounted.
    I tried creating "fred" as a user on the MyCloud *and* made sure the numerical UIDs and GIDs were the same on the Linux machine and the MyCloud - no dice.  I haven't tried everything in the world yet (I haven't tried rebooting the MyCloud to see if some server hasn't picked up the changes to "/etc/passwd" or whatever; there's something called "idmapd" that I guess I should look into... etc.)  But I thought maybe somebody here might have run into this or have a bright idea?

    I had some problems, and I modified the /etc/exports file to better meet my needs. This is the content of it:
    /nfs/jffs *(rw,sync,no_root_squash,no_subtree_check)
    /nfs *(rw,all_squash,sync,no_subtree_check,insecure,crossmnt,anonuid=999,anongid=1000)
    The first line/share allows you to change the permissions and uids below it when you mount it on your machine. In the second one I mapped the anonuid to the uid my user has on the WD (999), so when I mount that share everything is written with that user id.
    If you modify the /etc/exports file, remember to run the command "exportfs -a" after editing to refresh the changes.
    Hope this helps with your problem.
