Global NFS Mount & Cluster

Dear All,
My development server is in the LAN environment, while the other systems, QAS and PRD, are in the SZ2. For the transport management configuration we need to set up global NFS mounting, but as per my company policy this raises a security issue.
The second issue is that if we mount /usr/sap/trans globally and also make it part of the NFS setup, the cluster startup will fail. Please suggest whether this directory should be part of the cluster or not.
Regards
Vimal Pathak
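
For illustration only, a minimal sketch of what a global /usr/sap/trans NFS setup usually looks like (dev-host, qas-host and prd-host are placeholder names, and the Linux-style /etc/exports syntax is just an example; the actual export and mount commands depend on your OS and security policy):

# /etc/exports on the transport domain controller (usually DEV) - restrict the export to the named hosts
/usr/sap/trans  qas-host(rw,sync) prd-host(rw,sync)
# on QAS and PRD, mount it at the same path
mount -t nfs dev-host:/usr/sap/trans /usr/sap/trans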

Tiffany wrote:
          > We need to store information (objects) that are global to a cluster.
          The only way you can do this is to store the information in a
          database.
          > It's my understanding that anything stored in the servlet context is
          > visible to all servers,
          No. This is not true.
          > but it resides on a network drive. Wouldn't
          > each read of this servlet context info involve a directory read hit
          > with all its implied performance degradation?
          > How about WebLogic Workspaces? Is this information replicated across
          > clusters? Does it live on a network drive as a file as well?
          > Hoping someone can help us out here.
          Workspaces are not replicated.
          >
          > Thanks for any help,
          >
          > Tiffany
          Cheers
          - Prasad
          

Similar Messages

  • Testing ha-nfs in two node cluster (cannot statvfs /global/nfs: I/O error )

    Hi all,
    I am testing HA-NFS (failover) on a two-node cluster. I have a Sun Fire V240, an E250, and Netra st A1000/D1000 storage. I have installed Solaris 10 update 6 and the cluster packages on both nodes.
    I have created one global file system (/dev/did/dsk/d4s7) and mounted it as /global/nfs. This file system is accessible from both nodes. I have configured HA-NFS according to the document "Sun Cluster Data Service for NFS Guide for Solaris", using the command-line interface.
    The logical host pings from the NFS client, and I have mounted the share there using the logical hostname. For testing purposes I took one machine down. After this step the file system gives an I/O error (on both server and client), and when I run the df command it shows
    df: cannot statvfs /global/nfs: I/O error.
    I configured it with the following commands:
    #clnode status
    # mkdir -p /global/nfs
    # clresourcegroup create -n test1,test2 -p Pathprefix=/global/nfs rg-nfs
    I added the logical hostname and IP address to /etc/hosts.
    I commented out the hosts and rpc lines in /etc/nsswitch.conf.
    # clreslogicalhostname create -g rg-nfs -h ha-host-1 -N sc_ipmp0@test1,sc_ipmp0@test2 ha-host-1
    # mkdir /global/nfs/SUNW.nfs
    I created a file called dfstab.user-home in /global/nfs/SUNW.nfs; that file contains the following line:
    share -F nfs -o rw /global/nfs
    # clresourcetype register SUNW.nfs
    # clresource create -g rg-nfs -t SUNW.nfs user-home
    # clresourcegroup online -M rg-nfs
    Where did I go wrong? Can anyone point me to a document on this?
    Any help?
    Thanks in advance.
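
    For reference, a minimal sketch of the naming convention the SUNW.nfs steps above rely on (an illustration only, assuming the resource is meant to be called user-home so that it matches dfstab.user-home; adjust the names to your setup): the share definitions live in <Pathprefix>/SUNW.nfs/dfstab.<resource-name>, so with
    share -F nfs -o rw /global/nfs
    in /global/nfs/SUNW.nfs/dfstab.user-home, the matching resource would be created as
    # clresource create -g rg-nfs -t SUNW.nfs user-home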

    test1#  tail -20 /var/adm/messages
    Feb 28 22:28:54 testlab5 Cluster.SMF.DR: [ID 344672 daemon.error] Unable to open door descriptor /var/run/rgmd_receptionist_door
    Feb 28 22:28:54 testlab5 Cluster.SMF.DR: [ID 801855 daemon.error]
    Feb 28 22:28:54 testlab5 Error in scha_cluster_get
    Feb 28 22:28:54 testlab5 Cluster.scdpmd: [ID 489913 daemon.notice] The state of the path to device: /dev/did/rdsk/d5s0 has changed to OK
    Feb 28 22:28:54 testlab5 Cluster.scdpmd: [ID 489913 daemon.notice] The state of the path to device: /dev/did/rdsk/d6s0 has changed to OK
    Feb 28 22:28:58 testlab5 svc.startd[8]: [ID 652011 daemon.warning] svc:/system/cluster/scsymon-srv:default: Method "/usr/cluster/lib/svc/method/svc_scsymon_srv start" failed with exit status 96.
    Feb 28 22:28:58 testlab5 svc.startd[8]: [ID 748625 daemon.error] system/cluster/scsymon-srv:default misconfigured: transitioned to maintenance (see 'svcs -xv' for details)
    Feb 28 22:29:23 testlab5 Cluster.RGM.rgmd: [ID 537175 daemon.notice] CMM: Node e250 (nodeid: 1, incarnation #: 1235752006) has become reachable.
    Feb 28 22:29:23 testlab5 Cluster.RGM.rgmd: [ID 525628 daemon.notice] CMM: Cluster has reached quorum.
    Feb 28 22:29:23 testlab5 Cluster.RGM.rgmd: [ID 377347 daemon.notice] CMM: Node e250 (nodeid = 1) is up; new incarnation number = 1235752006.
    Feb 28 22:29:23 testlab5 Cluster.RGM.rgmd: [ID 377347 daemon.notice] CMM: Node testlab5 (nodeid = 2) is up; new incarnation number = 1235840337.
    Feb 28 22:37:15 testlab5 Cluster.CCR: [ID 499775 daemon.notice] resource group rg-nfs added.
    Feb 28 22:39:05 testlab5 Cluster.RGM.rgmd: [ID 375444 daemon.notice] 8 fe_rpc_command: cmd_type(enum):<5>:cmd=<null>:tag=<>: Calling security_clnt_connect(..., host=<testlab5>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Feb 28 22:39:05 testlab5 Cluster.CCR: [ID 491081 daemon.notice] resource ha-host-1 removed.
    Feb 28 22:39:17 testlab5 Cluster.RGM.rgmd: [ID 375444 daemon.notice] 8 fe_rpc_command: cmd_type(enum):<5>:cmd=<null>:tag=<>: Calling security_clnt_connect(..., host=<testlab5>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Feb 28 22:39:17 testlab5 Cluster.CCR: [ID 254131 daemon.notice] resource group nfs-rg removed.
    Feb 28 22:39:30 testlab5 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <hafoip_validate> for resource <ha-host-1>, resource group <rg-nfs>, node <testlab5>, timeout <300> seconds
    Feb 28 22:39:30 testlab5 Cluster.RGM.rgmd: [ID 375444 daemon.notice] 8 fe_rpc_command: cmd_type(enum):<1>:cmd=</usr/cluster/lib/rgm/rt/hafoip/hafoip_validate>:tag=<rg-nfs.ha-host-1.2>: Calling security_clnt_connect(..., host=<testlab5>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Feb 28 22:39:30 testlab5 Cluster.RGM.rgmd: [ID 515159 daemon.notice] method <hafoip_validate> completed successfully for resource <ha-host-1>, resource group <rg-nfs>, node <testlab5>, time used: 0% of timeout <300 seconds>
    Feb 28 22:39:30 testlab5 Cluster.CCR: [ID 973933 daemon.notice] resource ha-host-1 added.

  • Some shared paths in file /global/nfs/SUNW.nfs/dfstab.nfs-res are invalid

    Hello,
    I am trying to configure an NFS service in an SC 3.1 cluster environment, but this is impossible, as the resource does not exist on the secondary node.
    I'm sure this is some kind of mis-configuration. I have read the docs, but still don't know why.
    /opt/ppark/gateway/logs should be the NFS share, and a defined RG (ppark-rg) contains the NFS share.
    Another requirement: the NFS resource should be a member of this RG (ppark-rg).
    My steps were as follows:
    1. define the RG.
    # scrgadm -a -g ppark-rg -h pixtest1,pixtest2
    2. define the LogicalHostname -
    # scrgadm -a -L -l ppark-lh -g ppark-rg
    1. register the agent -
    # scrgadm -a -t SUNW.nfs
    2. add HAS+
    # scrgadm -a -t SUNW.HAStoragePlus
    3. add the HAS+ Resource
    # scrgadm -a -j ppark-stor-res -t SUNW.HAStoragePlus -g ppark-rg -x FilesystemMountpoints=/opt/ppark -x AffinityOn=true
    4. now, the RG can be brought online -
    # scswitch -Z -g ppark-rg
    Here is the NFS part:
    1. making an administrative directory on both nodes -
    # mkdir -p /global/nfs/SUNW.nfs
    2. define the share in /global/nfs/SUNW.nfs/dfstab.nfs-res on both nodes -
    # share -F nfs -o rw,anon=0 /opt/ppark/gateway/logs
    3. define the NFS resource
    # scrgadm -a -g ppark-rg -j nfs-res -t nfs -y Network_resources_used=ppark-lh
    pixtest2 - Some shared paths in file /global/nfs/SUNW.nfs/dfstab.nfs-res are invalid.
    VALIDATE on resource nfs-res, resource group ppark-rg, exited with non-zero exit status.
    Validation of resource nfs-res in resource group ppark-rg on node pixtest2 failed.
    And of course the resource /opt/ppark/gateway/logs is invalid!
    According to the syslog on the secondary node the share does not exist - but how could it? It can only be mounted on one node at a time.
    Jun 24 13:39:31 pixtest2 Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <nfs_validate> for resource <nfs-res>, resource group <ppark-rg>, timeout <300> seconds
    Jun 24 13:39:31 pixtest2 SC[SUNW.nfs:3.1,ppark-rg,nfs-res,nfs_validate]: [ID 382252 daemon.error] Share path /opt/ppark/gateway/logs: file system /opt/ppark is not mounted.
    Jun 24 13:39:31 pixtest2 SC[SUNW.nfs:3.1,ppark-rg,nfs-res,nfs_validate]: [ID 792295 daemon.error] Some shared paths in file /global/nfs/SUNW.nfs/dfstab.nfs-res are invalid.
    Jun 24 13:39:31 pixtest2 Cluster.RGM.rgmd: [ID 699104 daemon.error] VALIDATE failed on resource <nfs-res>, resource group <ppark-rg>, time used: 0% of timeout <300 seconds>
    How do I prevent this?
    Many thanks for some input.
    -- Nick

    You need to explicitly set the resource dependency to the SUNW.HAStoragePlus resource for the SUNW.nfs resource, like:
    scrgadm -a -g ppark-rg -j nfs-res -t nfs -y Network_resources_used=ppark-lh -y Resource_dependencies=ppark-stor-res
    Greets
    Thorsten

  • Since update to 10.4.10 NFS-mounts stopped mounting

    Since the update to 10.4.10 (with or without the security update, it didn't matter) my NFS mounts from a Linux machine via a WLAN router and AirPort to my MacBook Pro have stopped working.
    It did work, although (Apple-typically) somewhat unreliably, until the 10.4.10 update.
    rpcinfo -p machine shows:
    program vers proto port
    100000 2 tcp 111 portmapper
    100000 2 udp 111 portmapper
    100005 1 udp 797 mountd
    100005 1 tcp 800 mountd
    100005 2 udp 797 mountd
    100005 2 tcp 800 mountd
    100005 3 udp 797 mountd
    100005 3 tcp 800 mountd
    showmount -e machine:
    Exports list on dream:
    /var/mnt/hdd/Bilder 192.168.233.9/255.255.255.0
    /var/mnt/hdd/Tools 192.168.233.9/255.255.255.0
    /var/mnt/hdd/movie 192.168.233.9/255.255.255.0
    /var/mnt/hdd/Musik 192.168.233.9/255.255.255.0
    /hdd/usbstick 192.168.233.9/255.255.255.0
    showmount -d machine:
    Directories on dream:
    /var/mnt/hdd/Bilder
    /var/mnt/hdd/Musik
    /var/mnt/hdd/movie
    192.168.233.9/255.255.255.0
    Therefore everything looks OK on the Linux side; another Linux machine is able to mount the shares without problems.
    The console.log shows:
    NFS Portmap: RPC: Program not registered
    NFS Portmap: RPC: Program not registered
    Jul 15 18:11:50 alu automount[280]: Attempt to mount /automount/Servers/dream/hdd/Musik returned 1 (Operation not permitted)
    Jul 15 18:11:50 alu automount[280]: Attempt to mount /automount/Servers/dream/hdd/Bilder returned 1 (Operation not permitted)
    Jul 15 18:11:50 alu automount[280]: Attempt to mount /automount/Servers/dream/hdd/Tools returned 1 (Operation not permitted)
    Jul 15 18:11:50 alu automount[280]: Attempt to mount /automount/Servers/dream/hdd/movie returned 1 (Operation not permitted)
    NFS Portmap: RPC: Program not registered
    NFS Portmap: RPC: Program not registered
    [...Masses of those lines]
    NFS Portmap: RPC: Program not registered
    2007-07-15 18:14:55.992 NetInfo Manager[499] * -[NSCFString substringFromIndex:]: Range or index out of bounds
    NFS Portmap: RPC: Program not registered
    NFS Portmap: RPC: Program not registered
    NFS Portmap: RPC: Program not registered
    NFS Portmap: RPC: Program not registered
    [.... masses of those lines]
    I tried with NFS Manager - no luck.
    I checked the Netinfo-DB - no luck.
    I rebooted the MacBook Pro - no change.
    I rebooted the Linuxmachine - no change.
    I typed "mount -t nfs ..." into a terminal - no luck.
    It is working with Linux - what prevents Apple from getting it to work, too?

    This error appeared once and never again since.
    But NFS mounts still fail. A friend's Titanium, still on 10.4.9, has the usual problems with AirPort connection stability and supposed server-connection losses, but it connects like a charm to the very server the Aluminium doesn't like.
    I reinstalled 10.4.10, rebooted several times, repaired permissions, and searched system.log and console.log, but couldn't find a message pointing to a problem (with the exception of a message about a failing startup of my Flying Buttress firewall).
    Here is the latest system.log (some lines deleted for security reasons only):
    Jul 16 16:35:22 alu SystemStarter[1322]: authentication service (1332) did not complete successfully
    Jul 16 16:35:22 alu SystemStarter[1322]: Printing Services (1325) did not complete successfully
    Jul 16 16:35:24 alu Parallels: Unloading Network module...
    Jul 16 16:35:24 alu Parallels: Unloading ConnectUSB module...
    Jul 16 16:35:24 alu Parallels: Unloading Monitor module...
    Jul 16 16:35:28 alu SystemStarter[1322]: BrickHouse Firewall (1338) did not complete successfully
    Jul 16 16:35:29 alu SystemStarter[1322]: The following StartupItems failed to properly start:
    Jul 16 16:35:29 alu SystemStarter[1322]: /System/Library/StartupItems/AuthServer
    Jul 16 16:35:29 alu SystemStarter[1322]: - execution of Startup script failed
    Jul 16 16:35:29 alu SystemStarter[1322]: /System/Library/StartupItems/PrintingServices
    Jul 16 16:39:57 localhost kernel[0]: hi mem tramps at 0xffe00000
    Jul 16 16:39:58 localhost kernel[0]: PAE enabled
    Jul 16 16:39:58 localhost kernel[0]: standard timeslicing quantum is 10000 us
    Jul 16 16:39:58 localhost kernel[0]: vmpagebootstrap: 254317 free pages
    Jul 16 16:39:58 localhost kernel[0]: migtable_maxdispl = 71
    Jul 16 16:39:58 localhost kernel[0]: Enabling XMM register save/restore and SSE/SSE2 opcodes
    Jul 16 16:39:58 localhost kernel[0]: 89 prelinked modules
    Jul 16 16:39:58 localhost kernel[0]: ACPI CA 20060421
    Jul 16 16:39:58 localhost kernel[0]: AppleIntelCPUPowerManagement: ready
    Jul 16 16:39:58 localhost kernel[0]: AppleACPICPU: ProcessorApicId=0 LocalApicId=0 Enabled
    Jul 16 16:39:58 localhost kernel[0]: AppleACPICPU: ProcessorApicId=1 LocalApicId=1 Enabled
    Jul 16 16:39:58 localhost kernel[0]: Copyright (c) 1982, 1986, 1989, 1991, 1993
    Jul 16 16:39:58 localhost kernel[0]: The Regents of the University of California. All rights reserved.
    Jul 16 16:39:58 localhost kernel[0]: using 5242 buffer headers and 4096 cluster IO buffer headers
    Jul 16 16:39:58 localhost kernel[0]: Enabling XMM register save/restore and SSE/SSE2 opcodes
    Jul 16 16:39:58 localhost kernel[0]: Started CPU 01
    Jul 16 16:39:58 localhost kernel[0]: IOAPIC: Version 0x20 Vectors 64:87
    Jul 16 16:39:58 localhost kernel[0]: ACPI: System State [S0 S3 S4 S5] (S3)
    Jul 16 16:39:58 localhost kernel[0]: Security auditing service present
    Jul 16 16:39:58 localhost kernel[0]: BSM auditing present
    Jul 16 16:39:58 localhost kernel[0]: disabled
    Jul 16 16:39:58 localhost kernel[0]: rooting via boot-uuid from /chosen: 4EF96DEE-9FCF-4476-AD53-58BEA0AA953E
    Jul 16 16:39:58 localhost kernel[0]: Waiting on <dict ID="0"><key>IOProviderClass</key><string ID="1">IOResources</string><key>IOResourceMatch</key><string ID="2">boot-uuid-media</string></dict>
    Jul 16 16:39:58 localhost kernel[0]: USB caused wake event (EHCI)
    Jul 16 16:39:58 localhost kernel[0]: FireWire (OHCI) Lucent ID 5811 PCI now active, GUID 0016cbfffe66af32; max speed s400.
    Jul 16 16:39:58 localhost kernel[0]: Got boot device = IOService:/AppleACPIPlatformExpert/PCI0@0/AppleACPIPCI/SATA@1F,2/AppleAHCI/PRT2@2/IOAHCIDevice@0/AppleAHCIDiskDriver/IOAHCIBlockStorageDevice/IOBlockStorageDriver/FUJITSU MHV2100BH Media/IOGUIDPartitionScheme/Customer@2
    Jul 16 16:39:58 localhost kernel[0]: BSD root: disk0s2, major 14, minor 2
    Jul 16 16:39:59 localhost kernel[0]: CSRHIDTransitionDriver::probe:
    Jul 16 16:39:59 localhost kernel[0]: CSRHIDTransitionDriver::start before command
    Jul 16 16:39:59 localhost kernel[0]: CSRHIDTransitionDriver::stop
    Jul 16 16:39:59 localhost kernel[0]: IOBluetoothHCIController::start Idle Timer Stopped
    Jul 16 16:39:59 localhost kernel[0]: Jettisoning kernel linker.
    Jul 16 16:39:59 localhost kernel[0]: Resetting IOCatalogue.
    Jul 16 16:39:59 localhost kernel[0]: display: family specific matching fails
    Jul 16 16:39:59 localhost kernel[0]: Matching service count = 0
    Jul 16 16:39:59 localhost kernel[0]: Matching service count = 21
    Jul 16 16:39:59 localhost kernel[0]: Matching service count = 21
    Jul 16 16:39:59 localhost kernel[0]: Matching service count = 21
    Jul 16 16:39:59 localhost kernel[0]: Matching service count = 21
    Jul 16 16:39:59 localhost kernel[0]: Matching service count = 21
    Jul 16 16:39:59 localhost kernel[0]: display: family specific matching fails
    Jul 16 16:39:59 localhost kernel[0]: Previous Shutdown Cause: 0
    Jul 16 16:39:59 localhost kernel[0]: ath_attach: devid 0x1c
    Jul 16 16:39:59 localhost kernel[0]: mac 10.3 phy 6.1 radio 10.2
    Jul 16 16:39:59 localhost kernel[0]: IPv6 packet filtering initialized, default to accept, logging disabled
    Jul 16 16:40:00 localhost lookupd[47]: lookupd (version 369.6) starting - Mon Jul 16 16:40:00 2007
    Jul 16 16:40:03 localhost DirectoryService[55]: Launched version 2.1 (v353.6)
    Jul 16 16:40:05 localhost diskarbitrationd[45]: disk0s2 hfs 9AC36BC8-2C3E-3282-B08D-9C22EC354E35 Alu HD /
    Jul 16 16:40:07 localhost kernel[0]: yukonosx: Ethernet address 00:xx:xx:xx:xx - deleted
    Jul 16 16:40:07 localhost mDNSResponder: Couldn't read user-specified Computer Name; using default “Macintosh-00.........” instead
    Jul 16 16:40:07 localhost kernel[0]: AirPort_Athr5424ab: Ethernet address 00:xx:xx:xx:xx - deleted
    Jul 16 16:40:07 localhost mDNSResponder: Couldn't read user-specified local hostname; using default “Macintosh-00..........” instead
    Jul 16 16:40:08 localhost mDNSResponder: Adding browse domain local.
    Jul 16 16:40:09 localhost lookupd[70]: lookupd (version 369.6) starting - Mon Jul 16 16:40:09 2007
    Jul 16 16:40:09 localhost configd[43]: AppleTalk startup
    Jul 16 16:40:09 alu configd[43]: setting hostname to "alu.local"
    Jul 16 16:40:13 alu kernel[0]: Registering For 802.11 Events
    Jul 16 16:40:13 alu kernel[0]: [HCIController][setupHardware] AFH Is Supported
    Jul 16 16:40:15 alu configd[43]: AppleTalk startup complete
    Jul 16 16:40:15 alu configd[43]: AppleTalk shutdown
    Jul 16 16:40:15 alu configd[43]: AppleTalk shutdown complete
    Jul 16 16:40:18 alu configd[43]: AppleTalk startup
    Jul 16 16:40:20 alu mDNSResponder: getifaddrs ifa_netmask for fw0(7) Flags 8863 Family 2 169.254.113.87 has different family: 0
    Jul 16 16:40:20 alu mDNSResponder: SetupAddr invalid sa_family 0
    Jul 16 16:40:23 alu SystemStarter[51]: BrickHouse Firewall (102) did not complete successfully
    Jul 16 16:40:27 alu configd[43]: AppleTalk startup complete
    Jul 16 16:40:29 alu configd[43]: executing /System/Library/SystemConfiguration/Kicker.bundle/Contents/Resources/enable-net work
    Jul 16 16:40:29 alu configd[43]: posting notification com.apple.system.config.network_change
    Jul 16 16:40:29 alu lookupd[169]: lookupd (version 369.6) starting - Mon Jul 16 16:40:29 2007
    Jul 16 16:40:31 alu mDNSResponder: getifaddrs ifa_netmask for fw0(7) Flags 8863 Family 2 169.254.113.87 has different family: 0
    Jul 16 16:40:31 alu mDNSResponder: SetupAddr invalid sa_family 0
    Jul 16 16:40:31 alu /System/Library/CoreServices/loginwindow.app/Contents/MacOS/loginwindow: Login Window Application Started
    Jul 16 16:40:31 alu SystemStarter[51]: The following StartupItems failed to properly start:
    Jul 16 16:40:31 alu SystemStarter[51]: /Library/StartupItems/Firewall
    Jul 16 16:40:31 alu SystemStarter[51]: - execution of Startup script failed
    Jul 16 16:40:33 alu loginwindow[182]: Login Window Started Security Agent
    Jul 16 16:40:48 alu configd[43]: target=enable-network: disabled
    Jul 16 16:44:14 alu /System/Library/PrivateFrameworks/Apple80211.framework/Resources/airport: Currently connected to network WZ
    Jul 16 16:45:40 alu automount[222]: Can't get NFS_V3/TCP port for dream
    Jul 16 16:45:40 alu automount[222]: Can't get NFS_V2/TCP port for dream
    Jul 16 16:45:40 alu automount[222]: Attempt to mount /automount/static/mnt returned 1 (Operation not permitted)

  • Solaris Zones and NFS mounts

    Hi all,
    Got a customer who wants to separate his web environments on the same node. The releases of Apache, Java and PHP are different, so it kind of makes sense. It seems a perfect opportunity to implement zoning, and it looks quite straightforward to set up (I'm sure I'll find out it's not). The only concern I have is that all zones will need access to a single NFS mount from a NAS storage array that we have. Is this going to be a problem to configure, and how would I get them to mount automatically on boot?
    Cheers

    Not necessarily. You can create (from the global zone) a /zone/zonename/etc/dfs/dfstab (NOT a /zone/zonename/root/etc/dfs/dfstab - notice you don't use the root dir), and from the global zone do a shareall, and the zone will start serving. Check your multi-level ports and make sure they are correct. You will run into some problems if you are running Trusted Extensions or the NFS share is ZFS, but they can be overcome rather easily.
    EDIT: I believe you have to be running TX for this to work. I'll double check.
    Message was edited by:
    AdamRichards
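
    For what it's worth, a minimal sketch of the approach described above (the zone name webzone, the /zone/webzone zonepath and the share path /export/webdata are made-up placeholders):
    # from the global zone - note there is no /root component in the path
    echo "share -F nfs -o rw /export/webdata" >> /zone/webzone/etc/dfs/dfstab
    # then publish everything listed in the dfstab files
    shareall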

  • To use NFS mount as shared storage for calendar

    hi all,
    Colocated IM deployment: "To ensure high availability, Oracle Calendar Server is placed on a Cold Failover Cluster. A Cold Failover Cluster installation requires shared storage for ORACLE_HOME and oraInventory."
    Q: Can an NFS mount be used as the shared storage? Has anyone tried it? Thanks.

    Hi Arnaud!
    This is of course a test environment on my laptop. I WOULD NEVER do this in production or even mention this to a customer :-)
    In this environment I do not care for performance but it is not slow.
    cu
    Andreas

  • Starting global heartbeat for cluster "d8aed2bb8ab9381d": Failed

    When I try to start o2cb it can't find the heartbeat region.
    The heartbeat region is on /dev/mapper/360060e8006d044000000d0440000012b, and as you can see it has "Bad magic number in inode".
    So what can I do about that? Can I clear this disk and create a new one? Any suggestions?
    # /etc/init.d/o2cb start
    Setting cluster stack "o2cb": OK
    Registering O2CB cluster "d8aed2bb8ab9381d": OK
    Setting O2CB cluster timeouts : OK
    Starting global heartbeat for cluster "d8aed2bb8ab9381d": Failed
    o2cb: Heartbeat region could not be found 0004FB000005000018DD9ACEA2B4B918
    Stopping global heartbeat on cluster "d8aed2bb8ab9381d": OK
    [root@u139gw poolfsmnt]# mounted.ocfs2 -f
    Device Stack Cluster F Nodes
    /dev/sdb o2cb d8aed2bb8ab9381d G Unknown: Bad magic number in inode
    /dev/sdc o2cb d8aed2bb8ab9381d G u139gw
    /dev/sdd o2cb d8aed2bb8ab9381d G Unknown: Bad magic number in inode
    /dev/sde o2cb d8aed2bb8ab9381d G u139gw
    /dev/sdg o2cb d8aed2bb8ab9381d G u139gw
    /dev/mapper/360060e8006d044000000d0440000012b o2cb d8aed2bb8ab9381d G Unknown: Bad magic number in inode
    /dev/mapper/360060e8006d044000000d0440000008a o2cb d8aed2bb8ab9381d G u139gw
    /dev/mapper/360060e8016503600000150360000000d o2cb d8aed2bb8ab9381d G u139gw
    /dev/sdf o2cb d8aed2bb8ab9381d G u139gw

    Magnus wrote:
    The heartbeat regions is on /dev/mapper/360060e8006d044000000d0440000012b, And as you can see it has "Bad magic number in inode",
    Soo What can I do about that? Can I clear this disk and create a new one? Any suggestions?
    Open an SR with Oracle Support. You could try dismounting that disk on all servers and running fsck.ocfs2 to see if it fixes the inode issue for you as well.
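
    A rough sketch of that suggestion (unmount the OCFS2 volume on every node first; the device name is the one reported above):
    # from one node, after the volume is unmounted everywhere
    fsck.ocfs2 /dev/mapper/360060e8006d044000000d0440000012b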

  • NFS mount unmounting automatically

    Hi There,
    I have an issue with an NFS mount on my client. Here, the NFS server and client are configured on the same server. After a period of time, for some reason, this mount point disappears or unmounts. Could anyone enlighten me on why this occurs?
    Thanks for any help in advance.

    Hi Cindy,
    Here, the NFS server and client are the same.
    root@GTRFO005 #df -kh /smb
    Filesystem             size   used  avail capacity  Mounted on
    localhost:/smb         1.0G   512M   512M    50%    /smb
    root@GTRFO005 #share
    -               /smb   rw   ""
    root@GTRFO005 #mount
    / on /dev/md/dsk/d0 read/write/setuid/devices/rstchown/intr/largefiles/logging/xattr/onerror=panic/dev=1540000 on Sun Apr 19 01:54:40 2015
    /devices on /devices read/write/setuid/devices/rstchown/dev=5940000 on Sun Apr 19 01:53:47 2015
    /system/contract on ctfs read/write/setuid/devices/rstchown/dev=5980001 on Sun Apr 19 01:53:47 2015
    /proc on proc read/write/setuid/devices/rstchown/dev=59c0000 on Sun Apr 19 01:53:47 2015
    /etc/mnttab on mnttab read/write/setuid/devices/rstchown/dev=5a00001 on Sun Apr 19 01:53:47 2015
    /etc/svc/volatile on swap read/write/setuid/devices/rstchown/xattr/dev=5a40001 on Sun Apr 19 01:53:47 2015
    /system/object on objfs read/write/setuid/devices/rstchown/dev=5a80001 on Sun Apr 19 01:53:47 2015
    /etc/dfs/sharetab on sharefs read/write/setuid/devices/rstchown/dev=5ac0001 on Sun Apr 19 01:53:47 2015
    /platform/sun4u-us3/lib/libc_psr.so.1 on /platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1 read/write/setuid/devices/rstchown/dev=1540000 on Sun Apr 19 01:54:37 2015
    /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1 on /platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1 read/write/setuid/devices/rstchown/dev=1540000 on Sun Apr 19 01:54:38 2015
    /dev/fd on fd read/write/setuid/devices/rstchown/dev=5c40001 on Sun Apr 19 01:54:40 2015
    /var on /dev/md/dsk/d2 read/write/setuid/devices/rstchown/intr/largefiles/logging/xattr/onerror=panic/dev=1540002 on Sun Apr 19 01:54:46 2015
    /tmp on swap read/write/setuid/devices/rstchown/xattr/dev=5a40002 on Sun Apr 19 01:54:46 2015
    /var/run on swap read/write/setuid/devices/rstchown/xattr/dev=5a40003 on Sun Apr 19 01:54:46 2015
    /auditlog5 on /dev/md/dsk/d154 read/write/setuid/devices/rstchown/intr/largefiles/logging/xattr/onerror=panic/dev=154009a on Sun Apr 19 01:55:04 2015
    /opt on /dev/md/dsk/d3 read/write/setuid/devices/rstchown/intr/largefiles/logging/xattr/onerror=panic/dev=1540003 on Sun Apr 19 01:55:04 2015
    /global/.devices/node@1 on /dev/md/dsk/d80 read/write/setuid/devices/rstchown/intr/largefiles/logging/noquota/global/xattr/nodfratime/onerror=panic/dev=1540050 on Sun Apr 19 01:56:17 2015
    /global/.devices/node@3 on /dev/md/dsk/d83 read/write/setuid/devices/rstchown/intr/largefiles/logging/noquota/global/xattr/nodfratime/onerror=panic/dev=1540053 on Sun Apr 19 01:56:17 2015
    /global/.devices/node@4 on /dev/md/dsk/d92 read/write/setuid/devices/rstchown/intr/largefiles/logging/noquota/global/xattr/nodfratime/onerror=panic/dev=154005c on Sun Apr 19 01:56:17 2015
    /apps_cl on /dev/md/sgd-mset/dsk/d120 read/write/setuid/devices/rstchown/intr/largefiles/logging/xattr/onerror=panic/dev=1546078 on Sun Apr 19 02:01:13 2015
    /smb on localhost:/smb remote/read/write/setuid/devices/rstchown/proto=udp/vers=2/port=4242/xattr/dev=5cc0001 on Sun Apr 19 02:03:51 2015
    /global/.devices/node@2 on /dev/md/dsk/d89 read/write/setuid/devices/rstchown/intr/largefiles/logging/noquota/global/xattr/nodfratime/onerror=panic/dev=1540059 on Sun Apr 19 05:05:41 2015
    /net/gtrfo005 on / read/write/nosetuid/nodevices/rstchown/dev=1540000 on Sun Apr 19 09:29:54 2015
    root@GTRFO005 #

  • Hardware for RAC using NFS mounts

    Hi,
    At a recent Oracle/NetApp seminar we heard that certain RAC configurations can make use of NFS mounts to access shared storage, leaving SCSI/fibre/FC switches out of the picture.
    We're currently looking for a budget cluster configuration that, ideally, is not limited to 2 nodes. The NFS option looks promising; however, our NAS hardware may not be NetApp.
    Has anybody used this kind of setup? For example, several cheap x86 blade servers mounting shared storage via NFS in a NAS.
    Thanks,
    Ivan.

    NAS is NFS.
    See
    Following NFS storage vendors are supported: EMC, Fujitsu, HP, IBM, NetApp, Pillar Data, Sun, Hitachi. 
    NFS file servers do not require RAC certification. The NFS file server must be supported by the system and storage vendors. 
    Currently, only NFS protocol version 3 (NFSv3) is supported.
    Hemant K Chitale
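
    As an illustration only (not taken from this thread), database files over NFSv3 on Linux are commonly mounted with options along these lines; nas01 and the paths are placeholders, and the exact options should come from your storage vendor's and Oracle's documentation:
    mount -t nfs -o rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 nas01:/oradata /u02/oradata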

  • Vi error on nfs mount; E212: Can't open file for writing

    Hi all,
    I've set up a umask of 0 for testing on both the NFS client (CentOS 5.2) and the NFS server (OS X 10.5.5 Server).
    I can create files as one user and edit/save them as another user without issue when directly logged into the server via ARD.
    However, when I attempt the same from an NFS mount on a client machine, even as root I get the following error using vi:
    "file" E212: Can't open file for writing
    Looking at the system.log file on the server, I see;
    kernel[0]: add_fsevent: no name hard-link! dropping the event. (event 2 vp == 0xa5db510 (-UNKNOWN-FILE)).
    This baffles me. My umask is 0, meaning files I create and then attempt to edit as other users are 777, but I cannot save edits unless I do a :wq! in vi. At that point, the owner of the file changes to whoever did the vi.
    This isn't just a vi issue as it happens using any editor, but I like to use vi.
    Any help is greatly appreciated. Hey, beer is on me!

    Hi all,
    Thanks for the replies
    I've narrowed it down to a CentOS client issue.
    Everything works fine using other Linux-based OSes as clients.
    Since we have such a huge investment in CentOS, I must figure out a workaround. Apple support wasn't much help, as usual; however, they were very nice.
    Their usual response is "it's unsupported".
    If Apple really wants to play in the enterprise or business space, they really need to change their philosophy. I mean, telling me that I shouldn't mount home directories via NFS is completely ridiculous.
    What am I supposed to use then, Samba or AFP? No, I don't think so. No offense to Microsoft, but why would I use a Windows-based file-sharing protocol to mount network shares in a *nix environment?

  • Accessing NFS mounted share in Finder no longer works in 10.5.3+

    I previously set up an automounted NFS share with Leopard against a RHEL 5 server at the office. I had to jump through a few hoops to punch a hole through the appfirewall to get the share accessible in the Finder.
    A few months later, when I returned to the office after a consultancy stint and upgrades to 10.5.3 and 10.5.4, the NFS mount no longer works. I investigated it today and I can't get it to work even with the appfirewall disabled.
    I've been doing some troubleshooting, and the interaction between statd, lockd and perhaps portmap seems a bit fishy, even with the appfirewall disabled. Both statd and lockd complain that they cannot register; lockd once and statd indefinitely.
    Jul 2 15:17:10 ySubmarine com.apple.statd[521]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
    Jul 2 15:17:10 ySubmarine com.apple.launchd[1] (com.apple.statd[521]): Exited with exit code: 1
    Jul 2 15:17:10 ySubmarine com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
    ... and rpcinfo -p gets connection refused unless I start portmap using the launchctl utility.
    This may be a bit obscure, and I'm not exactly an expert on NFS, so I wonder if someone else has stumbled across this and can point me in the right direction?
    Johan

    Sorry for my late response, but I have finally got around to some trial and error. I can mount the share using mount_nfs (but need to use sudo), and it shows up as a mounted disk in the Finder. However, when I start to browse a directory on the share that I can write to, I end up with the lockd and statd failures.
    $ mount_nfs -o resvport xxxx:/home /Users/yyyy/xxxx-home
    mount_nfs: /Users/yyyy/xxxx-home: Permission denied
    $ sudo mount_nfs -o resvport xxxx:/home /Users/yyyy/xxxx-home
    Jul 7 10:37:34 zzzz com.apple.statd[253]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
    Jul 7 10:37:34 zzzz com.apple.launchd[1] (com.apple.statd[253]): Exited with exit code: 1
    Jul 7 10:37:34 zzzz com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
    Jul 7 10:37:44 zzzz com.apple.statd[254]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
    Jul 7 10:37:44 zzzz com.apple.launchd[1] (com.apple.statd[254]): Exited with exit code: 1
    Jul 7 10:37:44 zzzz com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
    Jul 7 10:37:54 zzzz com.apple.statd[255]: rpc.statd: unable to register (SM_PROG, SM_VERS, UDP)
    Jul 7 10:37:54 zzzz com.apple.launchd[1] (com.apple.statd[255]): Exited with exit code: 1
    Jul 7 10:37:54 zzzz com.apple.launchd[1] (com.apple.statd): Throttling respawn: Will start in 10 seconds
    Jul 7 10:37:58 zzzz loginwindow[25]: 1 server now unresponsive
    Jul 7 10:37:59 zzzz KernelEventAgent[26]: tid 00000000 unmounting 1 filesystems
    Jul 7 10:38:02 zzzz com.apple.autofsd[40]: automount: /net updated
    Jul 7 10:38:02 zzzz com.apple.autofsd[40]: automount: /home updated
    Jul 7 10:38:02 zzzz com.apple.autofsd[40]: automount: no unmounts
    Jul 7 10:38:02 zzzz loginwindow[25]: No servers unresponsive
    ... and firewall wide open.
    I guess that the Finder somehow triggers file locking over NFS.
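    If that guess is right, one thing that might be worth trying (an assumption on my part, not something confirmed in this thread) is mounting with client-side NFS locking disabled, so that lockd/statd are not involved:
    $ sudo mount_nfs -o resvport,nolocks xxxx:/home /Users/yyyy/xxxx-home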

  • Cannot access external NFS mounts under Snow Leopard

    I was previously running Leopard (10.5.x) and automounted an Ubuntu (9.04 Jaunty) Linux NFS mount from my iMac. I had set this up with Directory Utility, it was instantly functional, and I never had any issues. After upgrading to Snow Leopard, I set up the same mount point on the same machine (using Disk Utility now), without changing any of the export settings, and Disk Utility stated that the external server had responded and appeared to be working correctly.
    However, when attempting to access the share, I get an "Operation not permitted" error. I also cannot manually create the NFS mount using mount or mount_nfs. I get a similar error if I try to cd into /net/<remote-machine>/<share>. I can see the shared folder in /net/<remote-machine>, but I cannot access it (cd, ls, etc).
    I can see on the Linux machine that the iMac has mounted the share (showmount -a), so the problem appears to be solely in the permissions. But I have not changed any of the permissions on the remote machine, and even then, they are blown wide open (777), so I'm not sure what is causing the issue. I have tried everything as both a regular user and as root. Any thoughts?
    On the Linux NFS server:
    % cat /etc/exports
    /share 192.168.1.0/24(rw,sync,nosubtree_check,no_rootsquash)
    % showmount -a
    All mount points on <server>:
    192.168.1.100:/share <-- <server> address
    192.168.1.101:/share <-- iMac address
    On the iMac:
    % rpcinfo -t 192.168.1.100 nfs
    program 100003 version 2 ready and waiting
    program 100003 version 3 ready and waiting
    program 100003 version 4 ready and waiting
    % mount
    trigger on /net/<server>/share (autofs, automounted, nobrowse)
    % mount -t nfs 192.168.1.100:/share /Volumes/share1
    mount_nfs: /Volumes/share1: Operation not permitted

    My guess is that the Linux server is refusing NFS requests coming from a non-reserved (<1024) source port. If that's the case, adding "insecure" to the Linux export options should get it working. (Note: requiring the use of reserved ports doesn't actually make things any more secure on most networks, so the name of the option is a bit misleading.)
    If you were previously able to mount that same export from a Mac, you must have been specifying the "-o resvport" option and doing the mounts as root (via sudo or automount which happens to run as root). So that may be another fix.
    HTH
    --macko
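
    To make that concrete, a sketch based on the export shown above (the "insecure" option is the suggested assumption, not something verified here):
    # on the Linux server, in /etc/exports:
    /share 192.168.1.0/24(rw,sync,insecure)
    # then re-export the list:
    exportfs -ra
    # or, on the Mac, mount from a reserved port as root:
    sudo mount -t nfs -o resvport 192.168.1.100:/share /Volumes/share1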

  • Expdp fails to create .dmp files in NFS mount point in solaris 10,Oracle10g

    Dear folks,
    I am facing a weird issue while doing expdp to an NFS mount point. Kindly help me with this.
    ===============
    expdp system/manager directory=exp_dumps dumpfile=u2dw.dmp schemas=u2dw
    Export: Release 10.2.0.4.0 - 64bit Production on Wednesday, 31 October, 2012 17:06:04
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORA-39001: invalid argument value
    ORA-39000: bad dump file specification
    ORA-31641: unable to create dump file "/backup_db/dumps/u2dw.dmp"
    ORA-27040: file create error, unable to create file
    SVR4 Error: 122: Operation not supported on transport endpoint
    I have mounted like this:
    mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 -F nfs 172.20.2.204:/exthdd /backup_db
    NFS=172.20.2.204:/exthdd
    Read and write grants on the directory have been given to public as well as to the specific user.

    782011 wrote:
    Hi sb92075,
    Thanks for your reply. Please find the details below. I am able to touch files there; while exporting, the log file is also created, but I get the error message I showed in the previous post.
    # su - oracle
    Sun Microsystems Inc. SunOS 5.10 Generic January 2005
    You have new mail.
    oracle 201> touch /backup_db/dumps/u2dw.dmp.test
    oracle 202>
    I contend that Oracle is too dumb to lie & does not mis-report reality:
    27040, 00000, "file create error, unable to create file"
    // *Cause:  create system call returned an error, unable to create file
    // *Action: verify filename, and permissions

  • Anyone else having problems with NFS mounts not showing up?

    Since Lion, I cannot see NFS shares anymore. The folder that had them is still there, but the share will not mount. It worked fine in 10.6.
    nfs://192.168.1.234/volume1/video
    /Volumes/DiskStation/video
    resvport,nolocks,locallocks,intr,soft,wsize=32768,rsize=3276
    Any ideas?
    Thanks

    Since the NFS points show up in the terminal app, go to the local mount directory (i.e. the mount location in the NFS Mounts using Disk Utility) and do the following:
    First Create a Link file
    sudo ln -s full_local_path link_path_name
    sudo ln -s /Volumes/linux/projects/ linuxProjects
    Next Create a new directory say in the Root of the Host Drive (i.e. Macintosh HDD)
    sudo mkdir new_link_storage_directory
    sudo mkdir /Volumes/Macintosh\ HDD/Links
    Move the above link file to the new directory
    sudo mv link_path_name new_link_storage_directory
    sudo mv linuxProjects /Volumes/Macintosh\ HDD/Links/
    Then in Finder locate the NEW_LINK_STORAGE_DIRECTORY; the link file should allow opening of these NFS mount points.
    Finally, after all links have been created and placed into the new link storage directory, place it in the left sidebar. Now it works just like before.

  • Accessing NFS mounts in Finder

    I currently have trouble accessing NFS mounts with the Finder. The mount is OK; I can access the directories on the NFS server in Terminal. However, in the Finder, when I click on the mount, instead of seeing the contents of the NFS mount I only see the "Alias" icon. The logs show nothing.
    I am not sure when it worked the last time. It could well be that the problem only started after one of the latest Snow Leopard updates. I know it worked when I upgraded to Snow Leopard.
    Any ideas?

    Hello gvde,
    Two weeks ago I bought a NAS device that touted NFS as one of the features. As I am a fan of Unix boxes I chose an NAS that would support that protocol. I was disappointed to find out that my Macbook would not connect to it. As mentioned in previous posts (by others) on this forum, I could see my NFS share via the command line, but not when using Finder. I was getting pretty upset and racking my brain trying to figure it out. I called the NAS manufacturer which was no help. I used a Ubuntu LiveCD (which connected fine). I was about ready to give up. Then, in another forum someone had mentioned the NFS manager App.
    After I installed the app and attempted to configure my NFS shares, the app stated something along the lines of (paraphrasing) "default permissions were incorrect". It then asked me if I would authenticate to have the NFS manager fix the problem. I was at my wits end so I thought why not. Long story short, this app saved me! My shares survive a reboot, Finder is quick and snappy with displaying the network shares, and all is right with the world. Maybe in 10.6.3 Apple will have fixed the default permissions issue. Try the app. It's donationware. I hope this post helps someone else.
    http://www.macupdate.com/info.php/id/5984/nfs-manager
