Sun Cluster 3.2 - Global File Systems

Sun Cluster has a Global File System (GFS) that supports read-only access throughout the cluster; however, only one node has write access.
In Linux, a GFS file system can be mounted by multiple nodes for simultaneous read/write access. Shouldn't this be the same for Solaris as well?
From the documentation that I have read,
"The global file system works on the same principle as the global device feature. That is, only one node at a time is the primary and actually communicates with the underlying file system. All other nodes use normal file semantics but actually communicate with the primary node over the same cluster transport. The primary node for the file system is always the same as the primary node for the device on which it is built"
The GFS is also known as Cluster File System or Proxy File system.
Our client believes that they can have their application "scaled" and that all nodes in the cluster can write to the globally mounted file system. My belief was that the only way this can occur is when the application has failed over, and then the "write" would occur from the "primary" node that is mastering the application at that time. Any input or clarification would be greatly appreciated. Thanks in advance.
Ryan
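
For reference, a cluster (global) file system is declared with the global mount option in /etc/vfstab on every node; the device and mount point below are purely illustrative:
/dev/md/webdg/dsk/d100 /dev/md/webdg/rdsk/d100 /global/web ufs 2 yes global,logging
All nodes mount the same file system and can open it read/write; under the covers, though, I/O issued on non-primary nodes is shipped over the cluster transport to the primary node described in the quote above.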

Thank you very much, this helped :)
And how seamless is the remounting of the block device LUN if one server dies? Should some clustered services (FS clients such as app servers) be restarted when the master node changes due to failover? Or is it truly seamless, as in a bit of latency added for the duration of mounting the block device on another node, with no fatal interruptions sent to the clients?
And is it true that this solution is gratis, i.e. may legally be used for free unless the customer wants support from Sun (authorized partners)? ;)
//Jim

Similar Messages

  • Problems mounting global file system

    Hello all.
    I have set up a cluster using two Ultra10 machines called medusa & ultra10 (not very original, I know) using Sun Cluster 3.1 with a cluster patch bundle installed.
    When one of the Ultra10 machines boots, it complains about being unable to mount the global file system and for some reason tries to mount the node@1 file system when it is actually node 2.
    On booting, I receive the following message on the machine ultra10:
    Type control-d to proceed with normal startup,
    (or give root password for system maintenance): resuming boot
    If I use control D to continue then the following happens:
    ultra10:
    ultra10:/ $ cat /etc/cluster/nodeid
    2
    ultra10:/ $ grep global /etc/vfstab
    /dev/md/dsk/d32 /dev/md/rdsk/d32 /global/.devices/node@2 ufs 2 no global
    ultra10:/ $ df -k | grep global
    /dev/md/dsk/d32 493527 4803 439372 2% /global/.devices/node@1
    medusa:
    medusa:/ $ cat /etc/cluster/nodeid
    1
    medusa:/ $ grep global /etc/vfstab
    /dev/md/dsk/d32 /dev/md/rdsk/d32 /global/.devices/node@1 ufs 2 no global
    medusa:/ $ df -k | grep global
    /dev/md/dsk/d32 493527 4803 439372 2% /global/.devices/node@1
    Does anyone have any idea why the machine called ultra10, with node ID 2, is trying to mount the node ID 1 global file system when the correct entry is in the /etc/vfstab file?
    Many thanks for any assistance.

    Hmm, so for argument's sake, if I tried to mount both /dev/md/dsk/d50 devices to the same point in the file system on both nodes, it would mount OK?
    I assumed the problem was that the device being used has the same name, which was confusing the Solaris OS when both nodes tried to mount it. Maybe some examples will help...
    My cluster consists of two nodes, Helene and Dione. There is fibre-attached storage used for quorum, and website content. The output from scdidadm -L is:
    1 helene:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
    2 helene:/dev/rdsk/c0t1d0 /dev/did/rdsk/d2
    3 helene:/dev/rdsk/c4t50002AC0001202D9d0 /dev/did/rdsk/d3
    3 dione:/dev/rdsk/c4t50002AC0001202D9d0 /dev/did/rdsk/d3
    4 dione:/dev/rdsk/c0t0d0 /dev/did/rdsk/d4
    5 dione:/dev/rdsk/c0t1d0 /dev/did/rdsk/d5
    This allows me to have identical entries in both hosts' /etc/vfstab files. There are also shared devices under /dev/global that can be accessed by both nodes. But the RAID devices are not referenced by anything from these directories (i.e. there's no /dev/global/md/dsk/d50). I just thought it would make sense to have the option of global metadevices, but maybe that's just me!
    Thanks again Tim! :D
    Pete
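
    To illustrate the point about identical entries: because a DID name such as d3 above is the same on both hosts, a global file system built directly on the shared LUN could be listed identically in each node's /etc/vfstab, along these lines (the slice and mount point are only examples):
    /dev/did/dsk/d3s0 /dev/did/rdsk/d3s0 /global/web ufs 2 yes global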

  • Unable to remount the global file system

    Hello All,
    I am facing a problem when remounting the global file system on one of the nodes in the cluster.
    Here are my system details:
    OS: SunOS sf44buce02 5.10 Generic_141414-01 sun4u sparc SUNW,Sun-Fire-V440
    Sun Cluster version: 3.2
    The problem details:
    The following entry i have in my /etc/vfstab file
    dev/md/cfsdg/dsk/d10 /dev/md/cfsdg/rdsk/d10 /global/TspFt ufs 2 yes global,logging
    Now I want to add the "nosuid" option to the global file system. I used the following command, but could not succeed:
    # mount -o nosuid,remount /global/TspFt
    I am getting the following error:
    mount: Operation not supported
    mount: Cannot mount /dev/md/cfsdg/dsk/d10
    Can anyone tell me how to remount the global file system without a reboot?
    Thanks in advance.
    Regards,
    Rajeshwar

    Hi,
    Thank you very much for the reply. Please see below the details that you asked for:
    -> The volume manager I am using is "SUN" (Solaris Volume Manager).
    -> In my previous post I missed the "/" while pasting the vfstab entry. Please have a look at the vfstab entry below:
    /dev/md/cfsdg/dsk/d10 /dev/md/cfsdg/rdsk/d10 /global/TspFt ufs 2 yes global,logging,nosuid,noxattr
    - Output of ls -al /dev/md/
    root@sf44buce02> ls -al /dev/md/
    total 34
    drwxr-xr-x 4 root root 512 Jun 24 16:37 .
    drwxr-xr-x 21 root sys 7168 Jun 24 16:38 ..
    lrwxrwxrwx 1 root root 31 Jun 3 20:19 admin -> ../../devices/pseudo/md@0:admin
    lrwxrwxrwx 1 root root 8 Jun 24 16:37 arch1dg -> shared/2
    lrwxrwxrwx 1 root other 8 Jun 3 22:26 arch2dg -> shared/4
    lrwxrwxrwx 1 root root 8 Jun 24 16:37 cfsdg -> shared/1
    drwxr-xr-x 2 root root 1024 Jun 3 22:41 dsk
    lrwxrwxrwx 1 root other 8 Jun 3 22:27 oradg -> shared/5
    drwxr-xr-x 2 root root 1024 Jun 3 22:41 rdsk
    lrwxrwxrwx 1 root root 8 Jun 24 16:37 redodg -> shared/3
    lrwxrwxrwx 1 root root 42 Jun 3 22:02 shared -> ../../global/.devices/node@2/dev/md/shared
    - output of ls -al /dev/md/cfsdg/
    root@sf44buce02> ls -al /dev/md/cfsdg/
    total 8
    drwxr-xr-x 4 root root 512 Jun 3 22:29 .
    drwxrwxr-x 7 root root 512 Jun 3 22:29 ..
    drwxr-xr-x 2 root root 512 Jun 24 16:37 dsk
    drwxr-xr-x 2 root root 512 Jun 24 16:37 rdsk
    - output of ls -la /dev/md/cfsdg/dsk/.
    root@sf44buce02> ls -al /dev/md/cfsdg/dsk
    total 16
    drwxr-xr-x 2 root root 512 Jun 24 16:37 .
    drwxr-xr-x 4 root root 512 Jun 3 22:29 ..
    lrwxrwxrwx 1 root root 42 Jun 24 16:37 d0 -> ../../../../../devices/pseudo/md@0:1,0,blk
    lrwxrwxrwx 1 root root 42 Jun 24 16:37 d1 -> ../../../../../devices/pseudo/md@0:1,1,blk
    lrwxrwxrwx 1 root root 43 Jun 24 16:37 d10 -> ../../../../../devices/pseudo/md@0:1,10,blk
    lrwxrwxrwx 1 root root 43 Jun 24 16:37 d11 -> ../../../../../devices/pseudo/md@0:1,11,blk
    lrwxrwxrwx 1 root root 42 Jun 24 16:37 d2 -> ../../../../../devices/pseudo/md@0:1,2,blk
    lrwxrwxrwx 1 root root 43 Jun 24 16:37 d20 -> ../../../../../devices/pseudo/md@0:1,20,blk
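
    The thread does not show a resolution. One generic approach, assuming a brief cluster-wide outage of the mount is acceptable and that the nosuid option is already in the vfstab entry on every node (as above), would be to unmount and mount rather than remount:
    # umount /global/TspFt
    # mount /global/TspFt
    The plain mount command picks its options up from /etc/vfstab, so the file system should come back with nosuid applied.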

  • Installing TREX global file system standalone

    I am installing TREX 7.1.23 as a distributed system with central storage (SAN). However, according to multiple SAP documents, 7.1 does not yet allow you to install the global file system separately from the TREX system installation. Refer to TREX 7.1 central note 1003900 [https://service.sap.com/sap/support/notes/1003900].
    SAP does indicate there is an interim solution with note 1258694 "TREX 7.1:Install TREX with Global File System (Windows). " [https://service.sap.com/sap/support/notes/1258694].
    In this note it says to execute "install.cmd --action=install_cfs --target=<UNC path_to_NAS> --sid=<SAPSID>", but I have downloaded the standalone TREX 7.1 installation and there is no "install.cmd" file to be found!?
    Has anyone seen this note and know how this is done or where this file can be found?
    thanks!
    John

    I will be redesigning this installation, so I am closing this thread.

  • Cluster with global file system

    Hi
    I set up Cluster 3.2 and everything is working fine.
    I followed the Sun doc for creating a global file system (1. newfs ... 2. mount under /global/foo, etc.). However, I cannot mount under /global.
    For example:
    mount /dev/global/dsk/d3s1 /global/foo (it says "no such file or directory")
    But I can mount on, say, "/a".
    It can only be mounted one layer beneath /.
    Why ?
    Appreciate any hints
    Thanks in advance
    Brian

    Well, assuming you are doing:
    # mount -g /dev/global/dsk/d3s1 /global/foo
    You'll need /global/foo to exist on all cluster nodes before you issue the mount command. The mount is an 'all or nothing' mount. It can't succeed on just a subset of cluster nodes, assuming they are all up. It must succeed on all.
    Tim
    ---
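
    A minimal sketch of that advice, using the device and path from the post above:
    # mkdir -p /global/foo                          (run this on every cluster node first)
    # mount -g /dev/global/dsk/d3s1 /global/foo     (then mount from one node)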

  • Does oracle clusterware and oracle RAC require sun cluster

    Hi,
    I have to set up Oracle RAC on Solaris 10 SPARC. Is it necessary to install Sun Cluster 3.2 and the QFS file system on Solaris?
    I have two Sun SPARC servers with Solaris 10 installed and a shared LUN setup (SAN disk, RAID 5 partitions).
    I need a two-node setup for RAC load balancing.
    Regards
    Prakash

    Hi Prakash,
    very interesting point:
    "As per oracle clusterware documents the cluster manager support is only for windows and linux. In case of solaris SPARC will the cluster manager get configured ???"
    The term "Cluster Manager" refers to a "cluster manager" that Oracle used in 9i times and this one was indeed only available on Linux / Windows.
    Therefore, please let me ask you something: which version of Oracle RAC do you plan to use?
    For 9i RAC, you would need Sun Cluster or Veritas Cluster on Solaris. The answers given here stating that Sun Cluster would not be required assume 10g RAC or higher.
    Now, you might see other dependencies which can be resolved by Sun Cluster. I cannot comment on those.
    For the RAW setup: having RAW disks (not raw logical volumes) will be fine without Veritas and ASM on top.
    Hope that helps. Thanks,
    Markus

  • File System Sharing using Sun Cluster 3.1

    Hi,
    I need help on how to set up and configure the system to share a remote file system, created on a SAN disk (SAN LUN), between two Sun Solaris 10 servers.
    The files in the remote file system should be readable/writable from both Solaris servers concurrently.
    As a security policy, NFS mounts are not allowed. Someone suggested it can be done by using Sun Cluster 3.1 agents on both servers. Any details on how I can do this using Sun Cluster 3.1 are really appreciated.
    thanks
    Suresh

    You could do this by installing Sun Cluster on both systems and then creating a global file system on the shared LUN. However, if there were significant write activity on both nodes, then the performance would not necessarily be what you need.
    What is wrong with the security of NFS? If it is set up properly I don't think this should be a problem.
    The other option would be to use shared QFS, but without Sun Cluster.
    Regards,
    Tim
    ---
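
    If you go the global file system route, the rough sequence looks like this (the DID number, slice, and mount point are made up for illustration; check scdidadm -L for the real shared LUN):
    # newfs /dev/did/rdsk/d4s0          (once, from one node)
    # mkdir -p /global/share            (on both nodes)
    Then add the same line to /etc/vfstab on both nodes:
    /dev/did/dsk/d4s0 /dev/did/rdsk/d4s0 /global/share ufs 2 yes global,logging
    # mount /global/share               (from one node; the mount then appears on both)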

  • Cluster File system local to Global

    I need to convert a local highly available file system to a global file system. The client needs to share data within the cluster, and this is the solution I offered.
    Please let me know if there is a better way to do this. The servers are running two failover NFS resource groups sharing file systems to clients. Currently the file systems are configured as HAStoragePlus file systems.
    Thanks

    Tim, thanks very much for your reply. I will be doing this as a global file system. Currently, the HA file systems are shared out from only one node, and I intend to keep it that way. The only difference is that I will make the local HA file systems global.
    I was referring to the Sun Cluster concepts guide, which mentions
    http://docs.sun.com/app/docs/doc/820-2554/cachcgee?l=en&a=view
    "For a cluster file system to be highly available, the underlying disk storage must be connected to more than one node. Therefore, a local file system (a file system that is stored on a node's local disk) that is made into a cluster file system is not highly available."
    I assume I need to remove the file systems from HAStoragePlus and make them global? Please let me know if my understanding is correct...
    Thanks again.
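
    A hedged sketch of the vfstab side of that change (device paths and mount point are placeholders; the HAStoragePlus resource also has to stop managing the mount, e.g. via its file-system mount point property, before the switch):
    Failover (HAStoragePlus) style entry on the nodes that can host the resource group:
    /dev/md/nfsdg/dsk/d100 /dev/md/nfsdg/rdsk/d100 /global/nfsdata ufs 2 no logging
    Cluster (global) file system entry on every node:
    /dev/md/nfsdg/dsk/d100 /dev/md/nfsdg/rdsk/d100 /global/nfsdata ufs 2 yes global,logging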

  • How to install a SUN cluster in a non-global zone?

    How do I install Sun Cluster in a non-global zone? If it is the same as installing Sun Cluster in the global zone, then how do I access the global zone's CD-ROM from a non-global zone?
    Please point me to some docs or urls if there are any. Thanks in advance!

    You don't really install the cluster software on zones. You have to set up a zone cluster once you have configured the Global Zone cluster (or in other words, clustered the physical systems + OS).
    http://blogs.sun.com/SC/entry/zone_clusters
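
    For orientation, the usual verbs of the zone cluster tooling look roughly like this (the zone cluster name zc1 is made up; the configure step is an interactive, zonecfg-like session whose properties are covered in the blog entry above and in the clzonecluster man page):
    # clzonecluster configure zc1
    # clzonecluster install zc1
    # clzonecluster boot zc1
    # clzonecluster status zc1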

  • Is there native file system for solaris cluster?

    Like GFS for Red Hat Linux and GPFS for AIX, is there any native file system for Solaris Cluster? Thanks for any information!

    What do you mean by 'native'?
    Oracle Solaris Cluster has a feature called the Global File System (or sometimes the Proxy File System). It allows UFS or Symantec VxFS to be mounted globally (on all cluster nodes simultaneously) in read/write mode. There is also a shared QFS option. All these are described in the documentation and in the book "Oracle Solaris Cluster Essentials".
    Tim
    ---

  • Regarding shared file system requirement in endeca server cluster

    Hi,
    Our solution involves running a single data domain in an Endeca Server cluster.
    As per the documentation, an Endeca Server cluster requires a shared file system for keeping the index file for follower nodes to read.
    My questions are:
    Can I run the Endeca cluster without a shared file system by having the index file on each node of the Endeca Server cluster?
    Can the dependency on a shared file system be a single point of failure, and if so, how can it be avoided?
    I really appreciate your feedback on these questions.
    thanks,
    rp

    Hi rp,
    The requirement for a shared file system in the Endeca Server cluster is a must. As this diagram shows, the shared file system maintains the index and also maintains the state of the Cluster Coordinator, which ensures cluster services (automatic leader election, propagation of the latest index version to all nodes in the data domain). A dependency on a shared file system can be a single point of failure and requires running backups; this is a standard IT approach, that is, it is not specific to the Endeca Server cluster in particular.
    See the section on Cluster Behavior for info on how the shared file system is used (the topic "How updates are processed") and on how increased availability is achieved.
    HTH,
    Julia

  • Installing sun cluster on system based on SUNWcrnet

    Hi,
    I'm planning to test-drive the Sun Cluster software. All our systems run the SUNWcrnet install cluster plus additionally required packages. I read that for Sun Cluster 3.1 at least SUNWCuser is necessary. Did anybody succeed in running the Sun Cluster software on a system based on SUNWcrnet? If yes, which additional packages are needed?
    Thanks,
    mark

    I wouldn't bother. You will need to install way too many packages. SUNWcrnet by default does not even give you the showrev command. I'd rather install everything and then disable what I don't need.

  • Testing ha-nfs in two node cluster (cannot statvfs /global/nfs: I/O error )

    Hi all,
    I am testing HA-NFS (failover) on a two-node cluster. I have a Sun Fire V240, an E250, and Netra st A1000/D1000 storage. I have installed Solaris 10 update 6 and the cluster packages on both nodes.
    I have created one global file system (/dev/did/dsk/d4s7) and mounted it as /global/nfs. This file system is accessible from both nodes. I have configured HA-NFS according to the document Sun Cluster Data Service for NFS Guide for Solaris, using the command line interface.
    The logical host pings from the NFS client, and I have mounted the share there using the logical hostname. For testing purposes I brought one machine down. After this step the file system gives an I/O error (server and client), and when I run the df command it shows
    df: cannot statvfs /global/nfs: I/O error.
    I have configured with following commands.
    #clnode status
    # mkdir -p /global/nfs
    # clresourcegroup create -n test1,test2 -p Pathprefix=/global/nfs rg-nfs
    I have added logical hostname,ip address in /etc/hosts
    I have commented hosts and rpc lines in /etc/nsswitch.conf
    # clreslogicalhostname create -g rg-nfs -h ha-host-1 -N sc_ipmp0@test1, sc_ipmp0@test2 ha-host-1
    # mkdir /global/nfs/SUNW.nfs
    Created one file called dfstab.user-home in /global/nfs/SUNW.nfs, and that file contains the following line:
    share -F nfs –o rw /global/nfs
    # clresourcetype register SUNW.nfs
    # clresource create -g rg-nfs -t SUNW.nfs ; user-home
    # clresourcegroup online -M rg-nfs
    Where did I go wrong? Can anyone provide a document on this?
    Any help..?
    Thanks in advance.

    test1#  tail -20 /var/adm/messages
    Feb 28 22:28:54 testlab5 Cluster.SMF.DR: [ID 344672 daemon.error] Unable to open door descriptor /var/run/rgmd_receptionist_door
    Feb 28 22:28:54 testlab5 Cluster.SMF.DR: [ID 801855 daemon.error]
    Feb 28 22:28:54 testlab5 Error in scha_cluster_get
    Feb 28 22:28:54 testlab5 Cluster.scdpmd: [ID 489913 daemon.notice] The state of the path to device: /dev/did/rdsk/d5s0 has changed to OK
    Feb 28 22:28:54 testlab5 Cluster.scdpmd: [ID 489913 daemon.notice] The state of the path to device: /dev/did/rdsk/d6s0 has changed to OK
    Feb 28 22:28:58 testlab5 svc.startd[8]: [ID 652011 daemon.warning] svc:/system/cluster/scsymon-srv:default: Method "/usr/cluster/lib/svc/method/svc_scsymon_srv start" failed with exit status 96.
    Feb 28 22:28:58 testlab5 svc.startd[8]: [ID 748625 daemon.error] system/cluster/scsymon-srv:default misconfigured: transitioned to maintenance (see 'svcs -xv' for details)
    Feb 28 22:29:23 testlab5 Cluster.RGM.rgmd: [ID 537175 daemon.notice] CMM: Node e250 (nodeid: 1, incarnation #: 1235752006) has become reachable.
    Feb 28 22:29:23 testlab5 Cluster.RGM.rgmd: [ID 525628 daemon.notice] CMM: Cluster has reached quorum.
    Feb 28 22:29:23 testlab5 Cluster.RGM.rgmd: [ID 377347 daemon.notice] CMM: Node e250 (nodeid = 1) is up; new incarnation number = 1235752006.
    Feb 28 22:29:23 testlab5 Cluster.RGM.rgmd: [ID 377347 daemon.notice] CMM: Node testlab5 (nodeid = 2) is up; new incarnation number = 1235840337.
    Feb 28 22:37:15 testlab5 Cluster.CCR: [ID 499775 daemon.notice] resource group rg-nfs added.
    Feb 28 22:39:05 testlab5 Cluster.RGM.rgmd: [ID 375444 daemon.notice] 8 fe_rpc_command: cmd_type(enum):<5>:cmd=<null>:tag=<>: Calling security_clnt_connect(..., host=<testlab5>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Feb 28 22:39:05 testlab5 Cluster.CCR: [ID 491081 daemon.notice] resource ha-host-1 removed.
    Feb 28 22:39:17 testlab5 Cluster.RGM.rgmd: [ID 375444 daemon.notice] 8 fe_rpc_command: cmd_type(enum):<5>:cmd=<null>:tag=<>: Calling security_clnt_connect(..., host=<testlab5>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Feb 28 22:39:17 testlab5 Cluster.CCR: [ID 254131 daemon.notice] resource group nfs-rg removed.
    Feb 28 22:39:30 testlab5 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <hafoip_validate> for resource <ha-host-1>, resource group <rg-nfs>, node <testlab5>, timeout <300> seconds
    Feb 28 22:39:30 testlab5 Cluster.RGM.rgmd: [ID 375444 daemon.notice] 8 fe_rpc_command: cmd_type(enum):<1>:cmd=</usr/cluster/lib/rgm/rt/hafoip/hafoip_validate>:tag=<rg-nfs.ha-host-1.2>: Calling security_clnt_connect(..., host=<testlab5>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Feb 28 22:39:30 testlab5 Cluster.RGM.rgmd: [ID 515159 daemon.notice] method <hafoip_validate> completed successfully for resource <ha-host-1>, resource group <rg-nfs>, node <testlab5>, time used: 0% of timeout <300 seconds>
    Feb 28 22:39:30 testlab5 Cluster.CCR: [ID 973933 daemon.notice] resource ha-host-1 added.

  • Oracle RAC binaries on vxfs shared file system

    Hi,
    Is it possible to install the Oracle binaries on a VxFS cluster file system for Oracle RAC under Sun Cluster? As far as I know, we cannot use a VxFS cluster file system for our Oracle data files.
    TIA

    The above post is incorrect. You can have a cluster (global) file system using VxVM and VxFS. You do not need to have VxVM/CVM for this. A cluster file system using VxVM+VxFS can be used for Oracle binaries but cannot be used for Oracle RAC data files where they are updated from both nodes simultaneously.
    If further clarification is needed, please post.
    Tim
    ---
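
    As an illustration of such a cluster file system (the disk group, volume, and mount point names are invented), each node's /etc/vfstab could carry a line like:
    /dev/vx/dsk/oradg/orabin /dev/vx/rdsk/oradg/orabin /global/oracle vxfs 2 yes global
    The binaries installed under /global/oracle are then visible from both nodes, while the RAC data files themselves live on storage that is supported for simultaneous updates, as noted above.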

  • Regarding installation and configuring sun cluster 3.2

    Can anyone provide me with the correct hardware and software configuration required for a two-node cluster for achieving failover?
    We have A1000 hardware configured with RAID 5. Does this suit our requirement?
    I want some clarification on:
    1) Configuring Solaris Volume Manager to create a diskset or register a disk group. Since this requires state database replicas, should these be created on local disks or on shared disks?
    2) Creating a cluster file system. What is the difference between the global namespace and a cluster file system?
    Can I get clarification on these?

    Hi,
    Both nodes should have at least:
    - 4 network interfaces:
    --- 2 for the public network
    --- 2 for the interconnects (each a dedicated cable or VLAN!)
    - 2 disks to mirror the operating system
    - Some shared storage (I have no experience with the A1000; we use SAN, but I guess it should be OK)
    - Configure Solaris Volume Manager (SVM) to have its metadbs on both disks for the local set (set 0, the mirrored root disk), and set the mirrored_root_flag in /etc/system (a command sketch follows this post)
    - SVM itself takes care of the metadbs for the disk sets; they are placed on slice 7 of the member (shared) disks
    The global namespace means you have a unique identifier (DID) for each disk, CD, and tape device in the cluster. This avoids problems like a SAN disk being connected to controller 3 on one node and controller 5 on the other.
    Global devices provide simultaneous access to all storage devices from all nodes, regardless of where the storage is physically attached.
    The global file system makes file systems simultaneously available on all nodes, regardless of their physical location.
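
    A command-level sketch of the SVM steps just mentioned (the device names, node names, and set name are placeholders):
    # metadb -a -f -c 3 c0t0d0s7 c0t1d0s7              (local replicas on both boot disks, run on each node)
    # metaset -s webds -a -h node1 node2               (create the shared diskset with both nodes as hosts)
    # metaset -s webds -a /dev/did/rdsk/d4             (add a shared DID disk; its replica goes to slice 7)
    # metainit -s webds d100 1 1 /dev/did/rdsk/d4s0    (build a metadevice inside the set)
    The mirrored_root_flag goes into /etc/system, typically as: set md:mirrored_root_flag=1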
