Solaris Zones and NFS mounts

Hi all,
Got a customer who wants to separate his web environments on the same node. The releases of Apache, Java and PHP are different, so it kind of makes sense. Seems a perfect opportunity to implement zoning. It looks quite straightforward to set up (I'm sure I'll find out it's not). The only concern I have is that all zones will need access to a single NFS mount from a NAS storage array that we have. Is this going to be a problem to configure, and how would I get them to mount automatically on boot?
Cheers

Not necessarily. You can create (from the global zone) a /zone/zonename/etc/dfs/dfstab (NOT /zone/zonename/root/etc/dfs/dfstab; notice you don't use the root dir) and from global do a shareall, and the zone will start serving. Check your multi-level ports and make sure they are correct. You will run into some problems if you are running Trusted Extensions or the NFS share is ZFS, but they can be overcome rather easily.
EDIT: I believe you have to be running TX for this to work. I'll double check.
Message was edited by: AdamRichards
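
A minimal sketch of the steps described in the reply above, plus the boot-time client mount the original question asks about (zone name, share path and NAS host are all hypothetical):

# From the global zone: create the zone's dfstab and share it out.
cat > /zone/webzone1/etc/dfs/dfstab <<'EOF'
share -F nfs -o rw -d "web data" /export/webdata
EOF
shareall

# For mounting the NAS share automatically at zone boot, an ordinary NFS
# client entry in the zone's own /etc/vfstab should do (yes = mount at boot):
# device            fsck  mount  FS   pass  at-boot  options
nas1:/vol/webshare  -     /data  nfs  -     yes      rw,hard,bg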

Similar Messages

  • Strange delete behavior in Solaris 10 with NFS mounts

    We are using the apache commons-io framework to delete a directory in a Solaris 10 environment. The code works well on our dev and QA boxes, but when we load it into our production environment we get intermittent failures: some files in a directory are not deleted, and the subsequent attempt to delete the directory itself then fails.
    We suspect this may be some kind of NFS issue in Solaris, where deleting a file takes longer than it would on a local drive, so the code reaches the deletedir call before the OS has actually removed the files, and the directory delete fails because files are still present.
    Has anyone seen this in an NFS environment with Solaris? We are on Java 1.4.2_15 and we are using apache commons-io 1.3.1.

    The apache commons-io framework contains a method to delete a directory by recursively deleting all files and subdirectories. Intermittently, some of the files in a subdirectory remain, and when delete is then called to remove the directory (from within the commons-io deletedir method) we get an IOException. This only occurs on an NFS-mounted file system on our production system. Our dev and QA systems are also on NFS, but it is a different server and appears to be loaded differently, and there the behavior consistently works as expected.
    It appears to be some kind of latency issue related to the way Java deletes files on NFS, but we have no conclusive evidence so far.
    We have not tried this with a newer version of Java since we are presently constrained to 1.4 :-(
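
    A hedged aside, not from the thread: one classic cause of exactly this symptom is NFS "silly rename". If any process still holds a file open when it is unlinked over NFS, the client renames it to a .nfsXXXX placeholder instead of removing it, so the directory is not empty when the recursive delete reaches it. A quick check on the production mount (path hypothetical):

    # after a failed directory delete, look for silly-renamed placeholders
    ls -a /mnt/prod/targetdir | grep '^\.nfs'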

  • Live upgrade, zones and separate mount points

    Hi,
    We have a quite large zone environment based on Solaris zones located on VxVM/VxFS. I know this is a questionable configuration, but the choice was made before I got here and now we need to upgrade the environment. The Veritas guides say it's fine to locate zones on Veritas, but I am not sure Sun would approve.
    Anyway, since all zones are located on a separate volume I want to create a new one for every zonepath, something like:
    lucreate -n upgrade -m /:/dev/dsk/c2t1d0s0:ufs -m /zones/zone01:/dev/vx/dsk/zone01/zone01_root02:ufs
    This works fine for a while after the integration of 6620317 in 121430-23, but when the new environment is to be activated I get errors, see below [1]. If I look at the commands executed by lucreate I see that the global root is mounted, but the zone root does not seem to have been mounted before the call to zoneadmd [2]. While this might not be a supported configuration, VxVM seems to be supported, and I think there are a few people out there with zonepaths on separate disks. Live Upgrade probably has no issue with the files moved from the VxFS filesystem, that part has been done, but the new filesystems do not seem to get mounted correctly.
    Anyone tried something similar, or any idea how to solve this?
    The system is s10s_u4 with kernel 127111-10 and Live Upgrade patches 121430-25, 121428-10.
    1:
    Integrity check OK.
    Populating contents of mount point </>.
    Populating contents of mount point </zones/zone01>.
    Copying.
    Creating shared file system mount points.
    Copying root of zone <zone01>.
    Creating compare databases for boot environment <upgrade>.
    Creating compare database for file system </zones/zone01>.
    Creating compare database for file system </>.
    Updating compare databases on boot environment <upgrade>.
    Making boot environment <upgrade> bootable.
    ERROR: unable to mount zones:
    zoneadm: zone 'zone01': can't stat /.alt.upgrade/zones/zone01/root: No such file or directory
    zoneadm: zone 'zone01': call to zoneadmd failed
    ERROR: unable to mount zone <zone01> in </.alt.upgrade>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: umount: warning: /dev/dsk/c2t1d0s0 not in mnttab
    umount: /dev/dsk/c2t1d0s0 not mounted
    ERROR: cannot unmount </dev/dsk/c2t1d0s0>
    ERROR: cannot mount boot environment by name <upgrade>
    ERROR: Unable to determine the configuration of the target boot environment <upgrade>.
    ERROR: Update of loader failed.
    ERROR: Unable to umount ABE <upgrade>: cannot make ABE bootable.
    Making the ABE <upgrade> bootable FAILED.
    ERROR: Unable to make boot environment <upgrade> bootable.
    ERROR: Unable to populate file systems on boot environment <upgrade>.
    ERROR: Cannot make file systems for boot environment <upgrade>.
    2:
    0 21191 21113 /usr/lib/lu/lumount -f upgrade
    0 21192 21191 /etc/lib/lu/plugins/lupi_bebasic plugin
    0 21193 21191 /etc/lib/lu/plugins/lupi_svmio plugin
    0 21194 21191 /etc/lib/lu/plugins/lupi_zones plugin
    0 21195 21192 mount /dev/dsk/c2t1d0s0 /.alt.upgrade
    0 21195 21192 mount /dev/dsk/c2t1d0s0 /.alt.upgrade
    0 21196 21192 mount -F tmpfs swap /.alt.upgrade/var/run
    0 21196 21192 mount swap /.alt.upgrade/var/run
    0 21197 21192 mount -F tmpfs swap /.alt.upgrade/tmp
    0 21197 21192 mount swap /.alt.upgrade/tmp
    0 21198 21192 /bin/sh /usr/lib/lu/lumount_zones -- /.alt.upgrade
    0 21199 21198 /bin/expr 2 - 1
    0 21200 21198 egrep -v ^(#|global:) /.alt.upgrade/etc/zones/index
    0 21201 21198 /usr/sbin/zonecfg -R /.alt.upgrade -z test exit
    0 21202 21198 false
    0 21205 21204 /usr/sbin/zoneadm -R /.alt.upgrade list -i -p
    0 21206 21204 sed s/\([^\]\)::/\1:-:/
    0 21207 21203 zoneadm -R /.alt.upgrade -z zone01 mount
    0 21208 21207 zoneadmd -z zone01 -R /.alt.upgrade
    0 21210 21203 false
    0 21211 21203 gettext unable to mount zone <%s> in <%s>
    0 21212 21203 /etc/lib/lu/luprintf -Eelp2 unable to mount zone <%s> in <%s> zone01 /.alt.up
    Edited by: henrikj_ on Sep 8, 2008 11:55 AM Added Solaris release and patch information.

    I updated my manual pages and got a reminder of the zonename field for the -m option of lucreate. But I still have no success. If I have the root filesystem for the zone in vfstab, it tries to mount the current root into the alternate BE:
    # lucreate -n upgrade -m /:/dev/dsk/c2t1d0s0:ufs -m /:/dev/vx/dsk/zone01/zone01_rootvol02:ufs:zone01
    <snip>
    Creating file systems on boot environment <upgrade>.
    Creating <ufs> file system for </> in zone <global> on </dev/dsk/c2t1d0s0>.
    Creating <ufs> file system for </> in zone <zone01> on </dev/vx/dsk/zone01/zone01_rootvol02>.
    Mounting file systems for boot environment <upgrade>.
    ERROR: UX:vxfs mount: ERROR: V-3-21264: /dev/vx/dsk/zone01/zone01_rootvol is already mounted, /.alt.tmp.b-gQg.mnt/zones/zone01 is busy,
    allowable number of mount points exceeded
    ERROR: cannot mount mount point </.alt.tmp.b-gQg.mnt/zones/zone01> device </dev/vx/dsk/zone01/zone01_rootvol>
    ERROR: failed to mount file system </dev/vx/dsk/zone01/zone01_rootvol> on </.alt.tmp.b-gQg.mnt/zones/zone01>
    ERROR: unmounting partially mounted boot environment file systems
    If I try to do the same but with the filesystem removed from vfstab, I get another error:
    <snip>
    Creating boot environment <upgrade>.
    Creating file systems on boot environment <upgrade>.
    Creating <ufs> file system for </> in zone <global> on </dev/dsk/c2t1d0s0>.
    Creating <ufs> file system for </> in zone <zone01> on </dev/vx/dsk/zone01/zone01_upgrade>.
    Mounting file systems for boot environment <upgrade>.
    Calculating required sizes of file systems for boot environment <upgrade>.
    Populating file systems on boot environment <upgrade>.
    Checking selection integrity.
    Integrity check OK.
    Populating contents of mount point </>.
    Populating contents of mount point </zones/zone01>. FAILED.
    ERROR: Unable to make boot environment <upgrade> bootable.
    ERROR: Unable to populate file systems on boot environment <upgrade>.
    ERROR: Cannot make file systems for boot environment <upgrade>.
    If I let lucreate copy the zonepath to the same slice as the OS, the creation of the BE works fine:
    # lucreate -n upgrade -m /:/dev/dsk/c2t1d0s0:ufs

  • Windows server 2008 R2 and NFS mounted subdirectories.

    I am mounting from a Windows Server 2008 R2 box to a RHEL 6.3 machine and I am able to see the folders; however, I am unable to see the sub-folders via the mapped NFS mount in Windows. Any ideas as to why? Some additional facts are below.
    1. We are using an NFS mount from Windows to RHEL (mount -o \\RHELBOX\ops\resources R:). We NFS mount from RHEL to a storage device; the fstab entries look like this:
    10.9.9.9:/vol/afpres1/psf_prod   /ops/resources/prod   nfs   _netdev,defaults   0 0
    10.9.9.9:/vol/afpres1/psf_test   /ops/resources/test   nfs   _netdev,defaults   0 0
    10.9.9.9:/vol/afpres1/baselib    /ops/resources/psf    nfs   _netdev,defaults   0 0
    The RHEL exports file looks like this (10.4.4.4 is the address of the 2008 R2 server; see the aside after item 2):
    /ops/resources 10.4.4.4(rw,sync,no_all_squash,insecure,nohide)
    /ops/resources 10.4.4.5(rw,sync,no_all_squash,insecure,nohide)
    /ops 10.4.11.66(rw,sync,no_all_squash,insecure,nohide)
    #/ops/resources *(rw,sync,no_all_squash,insecure,nohide)
    /opspool *(rw,sync)
    2. We are not using Samba or CIFS
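
    A hedged aside, not from the thread: sub-directories that are themselves separate mounts on the server are not traversed by a client unless the parent export carries crossmnt (or each sub-mount is exported explicitly), and the kernel NFS server in RHEL 6 generally could not re-export paths that were themselves NFS mounts from another host. A sketch of the crossmnt variant, reusing the paths from the post:

    # /etc/exports on the RHEL box; crossmnt lets the client cross into
    # sub-mounts, but NFS-to-NFS re-export may still not work on RHEL 6
    /ops/resources 10.4.4.4(rw,sync,no_all_squash,insecure,crossmnt)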

    Hello,
    The TechNet Sandbox forum is designed for users to try out the new forums functionality. Please be respectful of others, and do not expect replies to questions asked here.
    As it's off-topic here, I am moving the question to the "Where is the forum for...?" forum.
    Karl

  • Systemd and nfs .mount

    Hi,
    I'm trying to set up a systemd unit for my NFS mounts. I don't want to use /etc/fstab because I want an openvpn.service dependency; for now, though, I'm omitting that dependency while I debug. Here is the current unit file I have:
    # cat host\@.mount
    [Unit]
    Description=%i mount
    DefaultDependencies=no
    Requires=local-fs.target network.target rpc-statd.service
    Conflicts=umount.target
    [Mount]
    What=host:/%i
    Where=/%i
    Type=nfs
    Options=user,async,atime,exec,rw,wsize=32768,rsize=32768
    DirectoryMode=0755
    TimeoutSec=20
    [Install]
    WantedBy=multi-user.target
    When I try to enable this I get the famously vague error:
    # systemctl enable ./host\@mnt.mount
    Failed to issue method call: Invalid argument
    Any ideas on how to fix this? Thanks!
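
    A hedged aside: a mount unit's file name must be the systemd-escaped form of its Where= path, and a template instance like host@mnt.mount only works if the instance name expands to exactly that escaped path. The expected unit name can be generated with systemd-escape (path hypothetical):

    $ systemd-escape -p --suffix=mount /mnt/data
    mnt-data.mount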

    The solution is NOT to create this file at all. Apparently, exports from the server do not require it. If I remove it and reboot the server, I am able to connect from my workstation with no issues. For reference:
    $ ls -l /etc/systemd/system/multi-user.target.wants/
    total 0
    lrwxrwxrwx 1 root root 40 May 10 10:58 cpupower.service -> /usr/lib/systemd/system/cpupower.service
    lrwxrwxrwx 1 root root 38 May 10 10:58 cronie.service -> /usr/lib/systemd/system/cronie.service
    lrwxrwxrwx 1 root root 40 May 10 12:10 exportfs.service -> /usr/lib/systemd/system/exportfs.service
    lrwxrwxrwx 1 root root 42 May 10 10:59 lm_sensors.service -> /usr/lib/systemd/system/lm_sensors.service
    lrwxrwxrwx 1 root root 35 Apr 30 15:15 network.service -> /etc/systemd/system/network.service
    lrwxrwxrwx 1 root root 36 May 10 10:59 ntpd.service -> /usr/lib/systemd/system/ntpd.service
    lrwxrwxrwx 1 root root 36 May 10 11:33 rc-local.service -> /etc/systemd/system/rc-local.service
    lrwxrwxrwx 1 root root 40 May 2 22:37 remote-fs.target -> /usr/lib/systemd/system/remote-fs.target
    lrwxrwxrwx 1 root root 39 May 10 10:58 rpcbind.service -> /usr/lib/systemd/system/rpcbind.service
    lrwxrwxrwx 1 root root 42 May 10 12:10 rpc-mountd.service -> /usr/lib/systemd/system/rpc-mountd.service
    lrwxrwxrwx 1 root root 41 May 10 12:10 rpc-statd.service -> /usr/lib/systemd/system/rpc-statd.service
    lrwxrwxrwx 1 root root 43 May 10 10:58 sshdgenkeys.service -> /usr/lib/systemd/system/sshdgenkeys.service
    lrwxrwxrwx 1 root root 36 May 10 10:58 sshd.service -> /usr/lib/systemd/system/sshd.service
    lrwxrwxrwx 1 root root 41 May 10 11:06 syslog-ng.service -> /usr/lib/systemd/system/syslog-ng.service
    lrwxrwxrwx 1 root root 35 May 10 10:57 ufw.service -> /usr/lib/systemd/system/ufw.service

  • LDAP and NFS mounts/setup OSX Lion iMac with Mac Mini Lion Server

    Hello all,
    I have a local account on my iMac (Lion), and I also have a Mac Mini (Lion Server), and I want to use LDAP and NFS to mount the /Users directory, but am having trouble.
    We have a combination of Linux (Ubuntu), Windows 7 and Macs on this network using LDAP and NFS, except the Windows computers.
    We have created users in Workgroup Manager on the server, and we have it working on a few Macs already, but I wasn't there to see that process.
    Is there a way to keep my local account separate, and still have NFS access to /Users on the server and LDAP for authentication?
    Thanks,
    -Matt

    It would make a great server. A bonus over Apple TV, for example, is that you have access via both wired ethernet and wireless. Plus, if you load tools from XBMC, Firecore and others, you have a significant media server. The cost is right too.
    Many people are doing this; google "mac mini media server" or similar for more info.
    The total downside to any Windows-based system: dealing with constant anti-virus, major security hassles, lack of true media integration, and a PITA to update, etc.
    You should be aware that Lion Server is not ready for prime time; it still has significant issues if you are migrating from Snow Leopard 10.6.8. If you buy an Apple-fresh Lion Server mac mini you should have no problems.
    You'll probably be pleased.

  • LDOMs, Solaris zones and Live Migration

    Hi all,
    If you are planning to use Solaris zones inside an LDOM, with an external zpool as the Solaris zone disk, wouldn't this break one of the requirements for being able to do a Live Migration? If so, do you have any ideas on how to use Solaris zones inside an LDOM and at the same time be able to do a Live Migration, or is it impossible? I know this may sound like a bad idea, but I would very much like to know if it is doable.

    Thanks,
    By external pool I mean the way you probably are doing it: separate LUNs, mirrored in a zpool for the zones, coming from two separate IO/service domains. So even if this zpool exists inside the LDOM as zone storage, it will not prevent LM? That's good news. The requirement "no zpool if Live Migration" must then only apply to the LDOM storage itself and not to storage attached to the running LDOM. I am also worried about a possible performance penalty from introducing an extra layer of virtualisation. Have you done any tests regarding this?

  • Word 2008 for Mac and NFS mounted home directories "Save File" issues

    Greetings everyone,
    (Long time lurker, first time poster here)
    I admin a small network (under 20 workstations) with a centralized NFS server, with user home directories mounted via NFS upon login. Users are authenticated via LDAP. This is all working fine; there is no problem here. The problem arises when my users use Microsoft Word 2008 for Mac. When they attempt to save a file to their Desktop (or Documents, or any folder under their home dir) they are met with the following message:
    (dialog box popup)
    "Word cannot save or create this file.  The disk maybe be full or write-protected.  Try one or more of the following: * Free more memory. * Make sure the disk you want to save the file on is not full, write-protected or damaged. (document-name.ext)"
    This happens regardless of file format (Doc, Docx, Txt) and regardless of saved location under the network mounted dir.  I've noticed that when saving Word creates a .tmp file in the target directory, which only further confuses me to the underlying cause of the issue.
    When users logon to a local machine account and attempt the save, there is no issue.
    I have found many posts in other commuity forums, including this one, indicating that the issue is a .TempoaryItems folder in the root of the mounted directory.  This folder already exists and is populated with entries such as "folder.2112" (where 2112 is the uid of the LDAP user).  I find other posts indicating that this is an issue with Word:2008 and OSX10.8, with finger pointing in either direction, but no real solution.
    I have installed all Office for Mac updates from Microsoft (latest version 12.3.6).
    I have verified permissions of the user's home dir.
    I have also ensured that this issue effects ONLY Microsoft Office 2008 for Mac apps, LibreOffice and other applications have no issue.
    Does *ANYONE* have a solution or workaround for this issue?  While we're trying to phase Microsoft products out, getting users to ditch Word and Excel is difficult without removing them from systems completely.  So any pointers or help would be greatly appreciated.
    Thanks.
    ~k

    I can't tell you how to fix bugs in an obsolete version of Office, but a possible workaround is to use mobile home directories under OS X Server. The home directories are hosted locally and synced with the server.

  • Solaris 10 and NFS/Automount into Solaris 8 Env.

    Hello fellow Administrators,
    I have recently upgraded my own station to Solaris 10 6/06 (update 2) in a Solaris 8 NIS/automount environment. However, I have noticed that my station from time to time 'freezes' and becomes unresponsive. It's as if the system hangs due to some sort of conversation/handshake with my NIS server. I have several automounts in my vfstab to my local NIS server.
    Question: has anyone else had similar problems/experiences?
    Grateful for any answer that can solve this mystery of mine.
    Regards,
    Pierre

    Hi Robert,
    Seems to be working better now, and I tried it on a user as well; there are no complaints yet. In the end I applied the latest Sun patch cluster for Sol 10 (not many were applied to the 6/06 update 2) and your small 'hack'.
    Thanks a bunch!
    Pierre

  • Solaris zone and IBM DB2

    We have a container on a T3-1 in which IBM DB2 is running. Recently we migrated the container to a T4-1 server. The container is up and running, but we are unable to start DB2. The container configuration is the same as on the T3-1. Has anyone faced a similar issue while running DB2 on a T4-1 server?

    You can refer to:
    App Server 9.0 developer guide:
    http://docs.sun.com/app/docs/doc/819-3659
    Making driver .jar files accessible:
    http://docs.sun.com/app/docs/doc/819-3659/6n5s6m5bk?a=view#beamn
    IBM DB2 8.2 datasource configuration:
    http://docs.sun.com/app/docs/doc/819-3658/6n5s5nklk?a=view#beanc
    If you are still not able to set it up, can you post:
    1) the connection pool configuration from domain.xml
    2) the error message that you get in domains/<domainname>/logs/server.log
    Thanks,
    -Jagadish

  • Nfs mount point does not allow file creations via java.io.File

    Folks,
    I have mounted an NFS drive to iFS on a Solaris server:
    mount -F nfs nfs://server:port/ifsfolder /unixfolder
    I can mkdir and touch files, no problem, and they appear in iFS as I'd expect. However, if I write to the NFS mount via a JVM using java.io.File, I encounter the following problems:
    Only directories are created, unless I include the user that started the JVM in the oinstall unix group with the oracle user, because it's the oracle user that writes to iFS, not the user creating the files!
    I'm trying to create several files in a single directory via java.io.File, BUT only the first file is created. I've tried putting waits in the code to see if it is a timing issue, but it doesn't appear to be. Writing via java.io.File to either a native directory or a native NFS mountpoint works OK, i.e. a JUnit test against the native file system works, but not against an iFS mount point. Curiously, the same unit tests running on a PC with a Windows drive mapping to iFS work OK!! So why not via a unix NFS mapping?
    many thanks in advance.
    C

    Hi Diep,
    I have done as requested via Oracle TAR #3308936.995. As it happens, the problem is resolved. The resolution has been to not create the file via java.io.File.createNewFile() before adding content via an OutputStream. If the file creation is left until the content is added, as shown below, the problem is resolved.
    Another quick question: is link creation via 'ln -fs' and 'ln -f' supported against an NFS mount point to iFS (at the operating-system level, rather than adding a folder path relationship via the Java API)?
    many thanks in advance.
    public void createFile(String p_absolutePath, InputStream p_inputStream) throws Exception
    {
        File file = new File(p_absolutePath);
        // Oracle TAR Number: 3308936.995
        // Uncommenting the line below causes: java.io.IOException: Operation not supported on transport endpoint
        //     at java.io.UnixFileSystem.createFileExclusively(Native Method)
        //     at java.io.File.createNewFile(File.java:828)
        //     at com.unisys.ors.filesystemdata.OracleTARTest.createFile(OracleTARTest.java:43)
        //     at com.unisys.ors.filesystemdata.OracleTARTest.main(OracleTARTest.java:79)
        // file.createNewFile();
        // Let FileOutputStream create the file implicitly instead, which
        // works on the iFS NFS mount where createNewFile() failed.
        FileOutputStream fos = new FileOutputStream(file);
        byte[] buffer = new byte[1024];
        int noOfBytesRead = 0;
        while ((noOfBytesRead = p_inputStream.read(buffer, 0, buffer.length)) != -1)
        {
            fos.write(buffer, 0, noOfBytesRead);
        }
        p_inputStream.close();
        fos.flush();
        fos.close();
    }

  • Recommendations for Solaris Zones for NW2004S?

    I'm new to Solaris zones and would appreciate any recommendations from you regarding the setup of zones to run NW2004S. I did a scan of SapNet and SDN, but found nothing.
    So, for example, would you insist on running a Prod instance in the global zone?  For an SAP that runs in a zone, which file-systems would you share from the global zone?
    Any other tips/traps you have would be greatly appreciated.

    We have already consolidated 7 systems:
    root@consbig / >zoneadm list -vi
      ID NAME             STATUS         PATH                         
       0 global           running        /                            
      22 srmtest          running        /zone/srmtest                
      23 nwdi_ext_1       running        /zone/nwdi_ext_1             
      32 bbbcpy           running        /zone/bbbcpy                 
      35 icht             running        /zone/icht                   
      40 osiris           running        /zone/osiris                 
      42 bi_oracle_test   running        /zone/biorat                 
      43 hpvm             running        /zone/hpvm      
    This is an HP DL585 with 4 CPUs and 48 GB RAM (Opteron, not SPARC).

  • Parameters of NFS in Solaris 10 and Oracle Linux 6 with ZFS Storage 7420 in cluster without database

    Hello,
    I have a ZFS 7420 in a cluster, and OSes Solaris 10 and Oracle Linux 6 without a database, and I need to mount NFS shares on these OSes but do not know which parameters are best for this.
    Which are the best parameters to mount an NFS share on Solaris 10 or Oracle Linux 6?
    Thanks
    Best regards.

    Hi Pascal,
    I ask because when we mount NFS shares on some servers, for example Exadata Database Machine or SuperCluster, for best performance we need to mount those shares with specific parameters, for example:
    Exadata
    192.168.36.200:/export/dbname/backup1 /zfssa/dbname/backup1 nfs rw,bg,hard,nointr,rsize=131072,wsize=1048576,tcp,nfsvers=3,timeo=600 0 0
    Super Cluster
    sscsn1-stor:/export/ssc-shares/share1      -       /export/share1     nfs     -       yes     rw,bg,hard,nointr,rsize=131072,wsize=131072,proto=tcp,vers=3
    Now:
    My network is 10GbE.
    What happens with normal servers running only the OS (Solaris and Linux)?
    Which parameters do I need to use for best performance, or are specific parameters not necessary?
    Thanks.
    Best regards.
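
    A hedged sketch, not from the thread: generic starting points for a 10GbE network, mirroring the option sets already quoted above (server name and paths hypothetical; starting values, not tuned advice):

    # Solaris 10, /etc/vfstab (one line):
    nas:/export/share - /mnt/share nfs - yes rw,bg,hard,rsize=131072,wsize=131072,proto=tcp,vers=3
    # Oracle Linux 6, /etc/fstab (one line):
    nas:/export/share /mnt/share nfs rw,bg,hard,nointr,rsize=131072,wsize=131072,tcp,nfsvers=3,timeo=600 0 0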

  • NFS4: Problem mounting NFS mount onto a Solaris 10 Client

    Hi,
    I am having problems mounting an NFS mount point from a Linux server onto a Solaris 10 client.
    In the following:
    Server IP = ..*.120
    Client IP = ..*.100
    Commands run on Client:
    ==================
    # mount -o vers=3 -F nfs 172.25.30.120:/scratch/pvfs2 /scratch/pvfs2
    nfs mount: 172.25.30.120: : RPC: Rpcbind failure - RPC: Unable to receive
    nfs mount: retrying: /scratch/pvfs2
    nfs mount: 172.25.30.120: : RPC: Rpcbind failure - RPC: Unable to receive
    nfs mount: 172.25.30.120: : RPC: Rpcbind failure - RPC: Unable to receive
    # mount -o vers=4 -F nfs 172.25.30.120:/scratch/pvfs2 /scratch/pvfs2
    nfs mount: 172.25.30.120:/scratch/pvfs2: No such file or directory
    # rpcinfo -p
    program vers proto port service
    100000 4 tcp 111 rpcbind
    100000 3 tcp 111 rpcbind
    100000 2 tcp 111 rpcbind
    100000 4 udp 111 rpcbind
    100000 3 udp 111 rpcbind
    100000 2 udp 111 rpcbind
    1073741824 1 tcp 36084
    100024 1 udp 42835 status
    100024 1 tcp 36086 status
    100133 1 udp 42835
    100133 1 tcp 36086
    100001 2 udp 42836 rstatd
    100001 3 udp 42836 rstatd
    100001 4 udp 42836 rstatd
    100002 2 tcp 36087 rusersd
    100002 3 tcp 36087 rusersd
    100002 2 udp 42838 rusersd
    100002 3 udp 42838 rusersd
    100011 1 udp 42840 rquotad
    100021 1 udp 4045 nlockmgr
    100021 2 udp 4045 nlockmgr
    100021 3 udp 4045 nlockmgr
    100021 4 udp 4045 nlockmgr
    100021 1 tcp 4045 nlockmgr
    100021 2 tcp 4045 nlockmgr
    100021 3 tcp 4045 nlockmgr
    100021 4 tcp 4045 nlockmgr
    # showmount -e 172.25.30.120 (Server)
    showmount: 172.25.30.120: RPC: Rpcbind failure - RPC: Unable to receive
    Commands on Server:
    ================
    program vers proto port
    100000 2 tcp 111 portmapper
    100000 2 udp 111 portmapper
    100021 1 tcp 49927 nlockmgr
    100021 3 tcp 49927 nlockmgr
    100021 4 tcp 49927 nlockmgr
    100021 1 udp 32772 nlockmgr
    100021 3 udp 32772 nlockmgr
    100021 4 udp 32772 nlockmgr
    100011 1 udp 796 rquotad
    100011 2 udp 796 rquotad
    100011 1 tcp 799 rquotad
    100011 2 tcp 799 rquotad
    100003 2 udp 2049 nfs
    100003 3 udp 2049 nfs
    100003 4 udp 2049 nfs
    100003 2 tcp 2049 nfs
    100003 3 tcp 2049 nfs
    100003 4 tcp 2049 nfs
    100005 1 udp 809 mountd
    100005 1 tcp 812 mountd
    100005 2 udp 809 mountd
    100005 2 tcp 812 mountd
    100005 3 udp 809 mountd
    100005 3 tcp 812 mountd
    100024 1 udp 854 status
    100024 1 tcp 857 status
    # showmount -e 172.25.30.120
    Export list for 172.25.30.120:
    /scratch/nfs 172.25.30.100,172.25.24.0/4
    /scratch/pvfs2 172.25.30.100,172.25.24.0/4
    Thank you, ~al

    I also tried to run snoop on the client and Wireshark on the server, and the following is what I see.
    On Server, upon issuing the mount command on the client:
    # tshark -i eth1
    Running as user "root" and group "root". This could be dangerous.
    Capturing on eth1
    0.000000 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    0.205570 172.25.30.100 -> 172.25.30.120 Portmap V2 GETPORT Call MOUNT(100005) V:3 UDP
    0.205586 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    0.207863 172.25.30.100 -> 172.25.30.120 Portmap V2 GETPORT Call MOUNT(100005) V:3 UDP
    0.207869 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    2.005314 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    4.011005 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    5.206109 Dell_70:ad:29 -> SunMicro_70:ff:17 ARP Who has 172.25.30.100? Tell 172.25.30.120
    5.206277 SunMicro_70:ff:17 -> Dell_70:ad:29 ARP 172.25.30.100 is at 00:14:4f:70:ff:17
    5.216157 172.25.30.100 -> 172.25.30.120 Portmap V2 GETPORT Call MOUNT(100005) V:3 UDP
    5.216170 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    On Client, upon issuing the mount command on the client:
    # snoop -d bge1
    Using device /dev/bge1 (promiscuous mode)
    ? -> * ETHER Type=9000 (Loopback), size = 60 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    atlas-pvfs2 -> pvfs2-io-0-3 PORTMAP C GETPORT prog=100005 (MOUNT) vers=3 proto=UDP
    pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (UDP port 111 unreachable)
    atlas-pvfs2 -> pvfs2-io-0-3 PORTMAP C GETPORT prog=100005 (MOUNT) vers=3 proto=UDP
    pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (UDP port 111 unreachable)
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> * ETHER Type=9000 (Loopback), size = 60 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    pvfs2-io-0-3 -> * ARP C Who is 172.25.30.100, atlas-pvfs2 ?
    atlas-pvfs2 -> pvfs2-io-0-3 ARP R 172.25.30.100, atlas-pvfs2 is 0:14:4f:70:ff:17
    atlas-pvfs2 -> pvfs2-io-0-3 PORTMAP C GETPORT prog=100005 (MOUNT) vers=3 proto=UDP
    pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (UDP port 111 unreachable)
    Also I see the following on Client:
    # rpcinfo -p pvfs2-io-0-3
    rpcinfo: can't contact portmapper: RPC: Rpcbind failure - RPC: Failed (unspecified error)
    When I try the above rpcinfo command, the client snoop and server Wireshark (Ethereal) outputs are as follows:
    Client # snoop -d bge1
    Using device /dev/bge1 (promiscuous mode)
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    atlas-pvfs2 -> pvfs2-io-0-3 TCP D=111 S=872 Syn Seq=2065245538 Len=0 Win=49640 Options=<mss 1460,nop,wscale 0,nop,nop,sackOK>
    pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (TCP port 111 unreachable)
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=2004 (Unknown), size = 48 bytes
    ? -> (multicast) ETHER Type=0003 (LLC/802.3), size = 90 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> * ETHER Type=9000 (Loopback), size = 60 bytes
    pvfs2-io-0-3 -> * ARP C Who is 172.25.30.100, atlas-pvfs2 ?
    atlas-pvfs2 -> pvfs2-io-0-3 ARP R 172.25.30.100, atlas-pvfs2 is 0:14:4f:70:ff:17
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    atlas-pvfs2 -> pvfs2-io-0-3 TCP D=111 S=874 Syn Seq=2068043912 Len=0 Win=49640 Options=<mss 1460,nop,wscale 0,nop,nop,sackOK>
    pvfs2-io-0-3 -> atlas-pvfs2 ICMP Destination unreachable (TCP port 111 unreachable)
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> (multicast) ETHER Type=0000 (LLC/802.3), size = 52 bytes
    ? -> * ETHER Type=9000 (Loopback), size = 60 bytes
    Server # tshark -i eth1
    Running as user "root" and group "root". This could be dangerous.
    Capturing on eth1
    0.000000 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    0.313739 Cisco_3d:68:10 -> CDP/VTP/DTP/PAgP/UDLD CDP Device ID: MILEVA Port ID: GigabitEthernet1/0/16
    2.006422 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    3.483733 172.25.30.100 -> 172.25.30.120 TCP 865 > sunrpc [SYN] Seq=0 Win=49640 Len=0 MSS=1460 WS=0
    3.483752 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    4.009741 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    6.014524 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    6.551356 Cisco_3d:68:10 -> Cisco_3d:68:10 LOOP Reply
    8.019386 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    8.484344 Dell_70:ad:29 -> SunMicro_70:ff:17 ARP Who has 172.25.30.100? Tell 172.25.30.120
    8.484569 SunMicro_70:ff:17 -> Dell_70:ad:29 ARP 172.25.30.100 is at 00:14:4f:70:ff:17
    10.024411 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    12.030956 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    12.901333 Cisco_3d:68:10 -> CDP/VTP/DTP/PAgP/UDLD DTP Dynamic Trunking Protocol
    12.901421 Cisco_3d:68:10 -> CDP/VTP/DTP/PAgP/UDLD DTP Dynamic Trunking Protocol
    14.034193 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00  Cost = 0  Port = 0x8010
    15.691119 172.25.30.100 -> 172.25.30.120 TCP 866 > sunrpc [SYN] Seq=0 Win=49640 Len=0 MSS=1460 WS=0
    15.691138 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    16.038944 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    16.550760 Cisco_3d:68:10 -> Cisco_3d:68:10 LOOP Reply
    18.043886 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    20.050243 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    21.487689 172.25.30.100 -> 172.25.30.120 TCP 867 > sunrpc [SYN] Seq=0 Win=49640 Len=0 MSS=1460 WS=0
    21.487700 172.25.30.120 -> 172.25.30.100 ICMP Destination unreachable (Port unreachable)
    22.053784 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    24.058680 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    26.063406 Cisco_3d:68:10 -> Spanning-tree-(for-bridges)_00 STP Conf. Root = 32770/00:0a:b8:3d:68:00 Cost = 0 Port = 0x8010
    26.558307 Cisco_3d:68:10 -> Cisco_3d:68:10 LOOP Reply
    ~thank you for any help you can provide!!!
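
    A hedged reading of the traces above, not a confirmed fix: every portmap/sunrpc request to port 111 (UDP and TCP) is answered with an ICMP "port unreachable", which usually means a firewall on the Linux server is rejecting rpcbind traffic rather than NFS itself being broken. Something like the following on the server would confirm it and, if so, open the port (rule placement hypothetical):

    # check for REJECT rules covering port 111
    iptables -L -n | grep 111
    # permit portmapper traffic from the client network
    iptables -I INPUT -p tcp --dport 111 -j ACCEPT
    iptables -I INPUT -p udp --dport 111 -j ACCEPT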

  • SAP Java and Solaris Zones SolMan 4.0

    May require Solaris Zones experience to answer.
    I have three SAP database instances/central instances running in three sparse Solaris zones with no problem.
    I have created a new sparse zone for a new SAP installation (Solution Manager 4.0) and started the installation. SAP requires a 1.4.2 SDK even though Java 1.5 comes with Solaris 10. The 1.4.2 SDK is in /usr/j2se. The installation in the sparse zone errors out because it can't get "write" rights to /usr/j2se/jre/lib/security/local_policy.jar while trying to install a security encryption JCE component.
    I have thought about creating a /usr/j2se_zonename file system, copying the contents of /usr/j2se into it, and then mounting /usr/j2se_zonename in the zone as a lofs under the name /usr/j2se (a sketch of this follows below). However, when I do the copy of /usr/j2se I get some recursion errors.
    Any thoughts about how to add a writable /usr/j2se into the sparse zone with the least amount of effort? Otherwise plan B would be to create a "large" zone with a writable /usr directory.
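
    A minimal sketch of that lofs idea, run from the global zone (zone name hypothetical; paths as described above):

    # after populating /usr/j2se_zonename with a copy of /usr/j2se
    zonecfg -z solmanzone
    zonecfg:solmanzone> add fs
    zonecfg:solmanzone:fs> set dir=/usr/j2se
    zonecfg:solmanzone:fs> set special=/usr/j2se_zonename
    zonecfg:solmanzone:fs> set type=lofs
    zonecfg:solmanzone:fs> end
    zonecfg:solmanzone> commit
    zonecfg:solmanzone> exit
    # the lofs mount takes effect at the next boot of the zone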
    Received a great answer that, while it may not be architecturally "pure", may get the job done:
    You might just download the relevant JDK tarball and unpack that
    somewhere in your zone (anywhere you like), and point SAP at it...
    http://java.sun.com/j2se/1.4.2/download.html
    Get the one called "self extracting file"-- you can unpack that anywhere
    you want.
    Message was edited by: Atis Purins

    Hi Russ,
    No, you only have to generate two RFCs to your R/3 and assign them in SMSY for system monitoring.
    Then you need a Solution, assign your R/3 to the Solution, and set up the system monitoring.
    Regards,
    uDo
