Sun Cluster 3.2 without shared storage (Sun StorageTek Availability Suite)

Hi all.
I have a two-node Sun Cluster.
I have installed and configured AVS (Remote Mirror replication) on these nodes.
AVS is working fine, but I don't understand how to integrate it into the cluster.
Here is what I did:
Created a remote mirror with AVS.
v210-node1# sndradm -P
/dev/rdsk/c1t1d0s1      ->      v210-node0:/dev/rdsk/c1t1d0s1
autosync: on, max q writes: 4096, max q fbas: 16384, async threads: 2, mode: sync, group: AVS_TEST_GRP, state: replicating
v210-node1# 
v210-node0# sndradm -P
/dev/rdsk/c1t1d0s1      <-      v210-node1:/dev/rdsk/c1t1d0s1
autosync: on, max q writes: 4096, max q fbas: 16384, async threads: 2, mode: sync, group: AVS_TEST_GRP, state: replicating
v210-node0#
Created a resource group in Sun Cluster:
v210-node0# clrg status avs_test_rg
=== Cluster Resource Groups ===
Group Name       Node Name       Suspended      Status
avs_test_rg      v210-node0      No             Offline
                 v210-node1      No             Online
v210-node0#
Created a SUNW.HAStoragePlus resource with the AVS device:
v210-node0# cat /etc/vfstab  | grep avs
/dev/global/dsk/d11s1 /dev/global/rdsk/d11s1 /zones/avs_test ufs 2 no logging
v210-node0#
v210-node0# clrs show avs_test_hastorageplus_rs
=== Resources ===
Resource:                                       avs_test_hastorageplus_rs
  Type:                                            SUNW.HAStoragePlus:6
  Type_version:                                    6
  Group:                                           avs_test_rg
  R_description:
  Resource_project_name:                           default
  Enabled{v210-node0}:                             True
  Enabled{v210-node1}:                             True
  Monitored{v210-node0}:                           True
  Monitored{v210-node1}:                           True
v210-node0#
By default everything works fine.
But if I need to switch the RG to the other node, I have a problem.
v210-node0# clrs status avs_test_hastorageplus_rs
=== Cluster Resources ===
Resource Name               Node Name    State     Status Message
avs_test_hastorageplus_rs   v210-node0   Offline   Offline
                            v210-node1   Online    Online
v210-node0# 
v210-node0# clrg switch -n v210-node0 avs_test_rg
clrg:  (C748634) Resource group avs_test_rg failed to start on chosen node and might fail over to other node(s)
v210-node0#
If I put the remote mirror into logging mode, everything works:
v210-node0# sndradm -C local -l
Put Remote Mirror into logging mode? (Y/N) [N]: Y
v210-node0# clrg switch -n v210-node0 avs_test_rg
v210-node0# clrs status avs_test_hastorageplus_rs
=== Cluster Resources ===
Resource Name               Node Name    State     Status Message
avs_test_hastorageplus_rs   v210-node0   Online    Online
                            v210-node1   Offline   Offline
v210-node0#
How can I do this without creating an SC agent for it?
Anatoly S. Zimin

Normally you use AVS to replicate data from one Solaris Cluster to another. Can you just clarify whether you are replicating to another cluster or trying to do it between a single cluster's nodes? If it is the latter, then this is not something that Sun officially supports (IIRC) - rather, it is something that has been developed in the open source community. As such it will not be documented in the main Sun SC documentation set. Furthermore, support and/or questions for it should be directed to the author of the module.
Regards,
Tim
---

Similar Messages

  • Information about Sun Cluster 3.1 2005Q4 and Storage Foundation 4.1

    Hi,
    I have two Sun Fire V440 servers running Solaris 9, the latest release (9/05) with the latest cluster patches, QLogic fibre HBA cards, and seven disks shared from an EMC Clariion CX500. I have installed and configured Sun Cluster 3.1 and Veritas Storage Foundation 4.1 MP1. My problem is that when I run the format command on each node, I see the disks in a different order, and Veritas SF 4.1 also picks up the disks in a different order.
    1. Is Storage Foundation 4.1 compatible with Sun Cluster 3.1 2005Q4?
    2. Do you have a how-to or other procedure for Storage Foundation 4.1 with Sun Cluster 3.1?
    I'm very confused by Veritas Storage Foundation.
    Thanks!
    J-F Aubin

    This combination does not work today, but it will be available later.
    Since Sun and Veritas are two separate companies, it takes more
    time than expected to synchronize releases. Products supported by
    Sun for Sun Cluster installation undergo extensive testing, which also
    takes time.
    -- richard

  • Multiple Oracle databases in Sun cluster 3.2 (without RAC setup)

    There are two Sun SPARC servers (Sun Fire T2000) with the Solaris 10 (05/09) OS and Sun Cluster 3.2 software. We need two different Oracle databases and instances (Oracle 10g R2 without RAC) for an application. The first database is for Production; it needs to be configured on the first node and the shared storage disk and needs high availability. This database should run from the second node if the first node fails. The second database is for Quality/Test, and it is preferred that it runs on the second node for better load distribution. This DB doesn't require any failover.
    The shared storage is a Sun SE 3510 FC array, and multiple LUNs can be created for the different databases.
    Is it possible to configure two different resource groups (one for Quality and the other for Production) and make the first node primary for the Production RG and the second node primary for the Quality RG, thus distributing the load across the 2 servers? If so, what special configuration is required on the Solaris OS and Cluster side?
    I would appreciate it if you could give some configuration procedures/documents for this multi-master cluster setup.

    You can configure two resource groups, such as:
    # clrg create -n node-a,node-b prod-rg
    # clrg create -n node-b qa-rg
    and you configure the required resources (disk groups / file systems, logical host, oracle listener, oracle server) as described within
    [http://docs.sun.com/app/docs/doc/819-2980?l=en&a=expand|http://docs.sun.com/app/docs/doc/819-2980?l=en&a=expand]
    Note that this is not really called a "multi-master" configuration - that term has a specific meaning for a resource group (see [http://docs.sun.com/app/docs/doc/820-4682/babefcja?l=en&a=view|http://docs.sun.com/app/docs/doc/820-4682/babefcja?l=en&a=view] for details).
    With Solaris Cluster, all nodes that are part of the cluster are considered active and can host resource groups. You can have any number of resource groups running, where one subset runs on one node and another subset on other nodes. The nodelist property of the resource group defines where it can run; the first node in the list is the preferred primary.
    You can even define resource group dependencies or affinities between the resource groups. For example, you could define a negative affinity between qa-rg and prod-rg, so that if prod-rg needs to fail over to node-b (because, say, node-a died), it would take qa-rg offline. Details of those kinds of possibilities are described at [http://docs.sun.com/app/docs/doc/820-4682/ch14_resources_admin-35?l=en&a=view|http://docs.sun.com/app/docs/doc/820-4682/ch14_resources_admin-35?l=en&a=view].
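    Such an affinity can be set on the resource groups with the clrg command; a minimal sketch, reusing the group names from the commands above (the "--" prefix requests a strong negative affinity - just one possible choice):
    # clrg set -p RG_affinities=--prod-rg qa-rg
    # clrg show -p RG_affinities qa-rg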
    Regards
    Thorsten

  • Migrate Sun Cluster (+RAC) disks to new hardware running Sun Cluster (+RAC)

    Hello,
    We have old hardware (V490s) running Sun Cluster 3.2 + Oracle RAC 10.2.0.4.0 connected to a SAN. We need to move to T4 servers. Oracle advised against including the new hardware in the existing cluster, so we are planning to build a new cluster on the T4s with the same software (Solaris 10, Sun Cluster 3.2, RAC 10.2.0.4.0).
    When ready, we plan to shut down the existing cluster, zone the new cluster to the existing disks, and bring everything up on the new hardware (simply stated).
    Will it work?
    Any gotchas - like needing to clear disk IDs, or Sun Cluster panicking? RAC panicking? Any reference docs out there?
    Thanks
    user12961096

    Do we absolutely need that in our new setup or could we forgo that additional layer? Would Sun Cluster give us anything that the OS + RAC doesn't give us?
    Yes, Oracle Solaris Cluster does make things a lot easier. It looks after your device space and gives you consistent DID devices for CRS/RAC. It gives you the choice of using sQFS, raw metasets, or ASM. It has clprivnet, which is a lot easier and performs better than an IPMP solution. The node failure detection time is <= 10 seconds, which is quicker than CRS on its own, and it uses SCSI fencing instead of a STONITH approach. Finally, you have all the off-the-shelf agents that Solaris Cluster offers.
    However, if you are only doing RAC, you just want ASM, you don't need the last few seconds of failure detection that OSC gives you, and you think STONITH is good enough for your fencing purposes, then CRS on its own is perfect. There are many, many deployments both with and without OSC; it's not a simple yes/no answer.
    Having worked for the Solaris Cluster group, I'm still slightly biased toward including it rather than going without. Others have the alternate view! :-)
    Hope that helps,
    Tim
    ---

  • Sun Cluster 3.1  RIP or /etc/defaultroute

    I have just built a Sun Cluster 3.1 cluster without specifying a static route. Is it possible to specify a gateway in the file /etc/defaultroute? Could problems arise in the future?

    This is more or less your choice. Both options are possible: either you use a routing protocol and the Solaris Cluster nodes consume that information (which has the advantage of a centralized configuration), or the Solaris Cluster nodes get a static configuration. The correct config file is /etc/defaultrouter.
    More information on how to configure routing on Solaris can be found at [http://docs.sun.com/app/docs/doc/816-4554/gcvjj?l=en&a=view|http://docs.sun.com/app/docs/doc/816-4554/gcvjj?l=en&a=view].
    In any case, the router infrastructure used by the Solaris Cluster nodes should be highly available. Also be aware that a cluster node must not itself be configured as a router.
    Note that simply adding something to /etc/defaultrouter does not make the configuration active yet. You would either need to also invoke the route command manually to add the default router, or you would have to reboot (which you should do at some point anyway, to test whether your change really works).
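    For example, a minimal sketch (the gateway address here is only a placeholder, not something from your setup):
    # echo "192.168.10.1" > /etc/defaultrouter   # placeholder gateway address
    # route add default 192.168.10.1
    # netstat -rn | grep default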
    Regards
    Thorsten

  • Sun Cluster

    Hi Experts,
    I am new to Sun Cluster and am trying to build a Sun Cluster on Solaris 10 x86 boxes. Is there a document that will walk me through configuring Sun Cluster step by step?
    Thanks in Advance.
    Sunil.

    Sunil,
    You can download the software and the documentation for free from the Sun web site. Try reading the manuals. They are very detailed and contain many examples of how to install the software.
    Cheers
    Andreas

  • Sun Cluster dataservice for WebSphere MQ

    Is the Sun Cluster dataservice for WebSphere MQ available for Sun Cluster 3.0?
    Should a Sun Cluster 3.0 installation be upgraded to 3.1 in order to install the WebSphere MQ dataservice?
    Thanks in advance.

    The Sun Cluster dataservice for WebSphere MQ is available from SC3.0 update 3 onwards.
    However, the agent is only distributed on the SC3.1 Agents CD; please contact Sun for the CD or agent.
    Updating to SC3.1 is not required. MQ 5.2/5.3 are supported, although MQ 5.2 only on Solaris 8, and MQ 5.3 on either Solaris 8 or 9.
    Regards
    Neil

  • Cannot import a disk group after sun cluster 3.1 installation

    We installed Sun Cluster 3.1u3 on nodes with Veritas VxVM running and disk groups in use. After cluster configuration and reboot, we can no longer import our disk groups. VxVM displays the message: Disk group dg1: import failed: No valid disk found containing disk group.
    Did anyone run into the same problem?
    The dump of the private region for every single disk in the VM returns the following error:
    # /usr/lib/vxvm/diag.d/vxprivutil dumpconfig /dev/did/rdsk/d22s2
    VxVM vxprivutil ERROR V-5-1-1735 scan operation failed:
    Format error in disk private region
    Any help or suggestion would be greatly appreciated
    Thx
    Max

    If I understand correctly, you had VxVM configured before you installed Sun Cluster - correct? And now that Sun Cluster is installed, you can no longer import your disk groups.
    The first thing you need to know is that you have to register the disk groups with Sun Cluster - this happens automatically with Solaris Volume Manager but is a manual process with VxVM. Note that you will also have to resynchronize the configuration after any changes to the disk group, e.g. permission changes, volume creation, etc.
    You need to use the scsetup menu to achieve this, though it can also be done from the command line with an scconf command, as sketched below.
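    A minimal sketch with scconf (the disk group name dg1 comes from your output; the node names are placeholders). The first command registers the existing VxVM disk group as a cluster device group, the second resynchronizes it after later changes:
    # scconf -a -D type=vxvm,name=dg1,nodelist=phys-node1:phys-node2   # node names are placeholders
    # scconf -c -D name=dg1,sync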
    Having said that, I'm still confused by the error. See if the above solves the problem first.
    Regards,
    Tim
    ---

  • Sun Cluster 3.3 Mirror 2 SAN storages (storagetek) with SVM

    Hello all,
    I would like to know if you have any best practices for mirroring two storage systems with SVM on Sun Cluster without corrupting/losing data on the storages.
    I currently have multipathing enabled on the FC side (stmsboot); after that I configured the cluster and created the SVM mirror with the DID devices.
    I have some points that I want to check for possible problems:
    a) 4 quorum votes. As I have two (2) nodes and 2 storages (and I need to know which of them is up), I have 4 votes, so the cluster needs 3 votes in order to start. Is there any solution to this, like cldevice combine?
    b) The mirror is at the SVM level, so when a failover happens the metasets move to the other node. Is there any chance the mirror starts from the second SAN instead of the first and causes any kind of corruption? Is there some way to better protect the storage?
    c) The StorageTek has an option for snapshots; is there a good way of using this feature or not?
    d) Is there any problem with failing over global file systems (the global mount option)? The only thing that may write to this file system is the application itself, which belongs to the same resource group, so when it needs to fail over it will stop all the processes accessing this file system and it will be OK to unmount it.
    Best regards to all of you,
    PiT

    Thank you very much for your answers Tim, they are really very helpful. I only have some comments on them so that they are fully answered.
    a) It's all answered for me. I think I will add the vote from only one storage, and if that storage goes down I will tell the customer to check the quorum status and add the second storage as the QD. The quorum server is not a bad idea, but if the network is down for some reason I think bad things will happen, so I don't want to rely on that.
    b) I think you are clear enough.
    c) I think you are clear enough! (Just as I thought would happen with the snapshots...)
    d) Finally, if this file system is on a metadevice that is started from the first node, and the second node is proxying to the first node for the metaset disks, is there any chance of it locking the file system/metaset group and not being able to take it over?
    Thanks in advance,
    Pit
    (I will also look at the document you mention, many thanks)

  • Sun Cluster 3.1 Failover Resource without Logical Hostname

    Maybe it could sound strange, but I'd need to create a failover service without any network resource in use (or at least with a dependency on a logical hostname created in a different resource-group).
    Does anybody know how to do that?

    Well, you don't really NEED a LogicalHostname in an RG. So I guess I am not understanding the question.
    Is there an application agent which demands a network resource in the RG? Sometimes the VALIDATE method of such agents refuses to work if there is no network resource in the RG.
    If so, tell us a bit more about the application. Is it GDS based and generated by the Sun Cluster Agent Builder? The Agent Builder has an option of "non Network Aware"; if you select that while building your app, it ought to work without a network resource in the RG (see the sketch below).
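    A rough sketch of a non-network-aware GDS resource on SC 3.1 (the resource/group names and start/probe scripts are placeholders, not from your setup):
    # scrgadm -a -t SUNW.gds
    # scrgadm -a -g app-rg
    # scrgadm -a -j app-rs -g app-rg -t SUNW.gds -x Start_command="/opt/myapp/bin/start.sh" -x Probe_command="/opt/myapp/bin/probe.sh" -x Network_aware=false   # placeholder names and paths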
    But maybe I should back up and ask the more basic question of exactly what is REQUIRING you to create a LogicalHostname?
    HTH,
    -ashu

  • Any experience with NFS failover in Sun Cluster?

    Hello,
    I am planning to install a dual-node Sun Cluster for an NFS failover configuration. The SAN storage is shared between the nodes via Fibre Channel. The NFS shares will be manually assigned to nodes and should fail over / fail back between the nodes.
    Is this setup well tested? How do the NFS clients survive the failover (without "stale NFS handle" errors)? Does it work smoothly for Solaris, Linux, and FreeBSD clients?
    Please share your experience.
    TIA,
    -- Leon

    My 3-year-old Linux installation on my laptop, which is my NFS client most of the time, uses UDP by default (kernel 2.4.19).
    Anyway, the key is that the NFS client - or better, the RPC implementation on the client - is intelligent enough to detect a failed TCP connection and to try to re-establish it with the same IP address. Once the cluster has failed over the logical IP, the reconnect will be successful and NFS traffic continues as if nothing bad had happened. This only(!) works if the NFS mount was done with the "hard" option. Only this makes the client retry the connection.
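    For example, a client-side mount along these lines keeps retrying across a failover (the logical host name and paths are only placeholders):
    # mount -F nfs -o hard,intr,proto=tcp,vers=3 nfs-lh:/export/data /mnt/data   # Solaris client; use the equivalent options on Linux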
    Other "dumb" TCP based applications might not retry and thus would need manual intervention.
    Regarding UFS or PxFS, it does not make a difference. NFS does not know the difference. It shares a mount point.
    Hope that helped.

  • File System Sharing using Sun Cluster 3.1

    Hi,
    I need help on how to set up and configure the system to share a remote file system created on a SAN disk (SAN LUN) between two Sun Solaris 10 servers.
    The files in the remote file system should be readable/writable from both Solaris servers concurrently.
    As a security policy, NFS mounts are not allowed. Someone suggested it can be done by using Sun Cluster 3.1 agents on both servers. Any details on how I can do this using Sun Cluster 3.1 are really appreciated.
    thanks
    Suresh

    You could do this by installing Sun Cluster on both systems and then creating a global file system on the shared LUN. However, if there is significant write activity on both nodes, the performance will not necessarily be what you need.
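    As a rough illustration, a global (PxFS) UFS mount is just a vfstab entry with the global option, present on both nodes (the device and mount point below are placeholders):
    # placeholder device and mount point:
    /dev/global/dsk/d10s0 /dev/global/rdsk/d10s0 /global/shared ufs 2 yes global,logging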
    What is wrong with the security of NFS? If it is set up properly I don't think this should be a problem.
    The other option would be to use shared QFS, but without Sun Cluster.
    Regards,
    Tim
    ---

  • Sun Cluster.. Why?

    What are the advantages of installing RAC 10.2.0.3 on a Sun Cluster? Are there any benefits?

    From Oracle 10g onward there is no burning requirement for Sun Cluster (or any third-party cluster) as long as you are using Oracle technologies throughout for your Oracle RAC database. You would use Oracle RAC with ASM for shared storage, and that does not require any third-party cluster. Bear in mind that you may need to install Sun Cluster in the following scenarios:
    1) If there is an application running within the cluster, alongside the Oracle RAC database, that you want to configure for HA, and you want Sun Cluster to provide the (easy to use) cluster resources to manage and monitor the application. This can be achieved with Oracle Clusterware, but you will have to write your own cluster resource for that.
    2) If you want to install a cluster file system such as QFS, then you will need to install Sun Cluster. If the cluster is only running the Oracle RAC database, then you can rely on Oracle technologies such as ASM or raw devices without installing Sun Cluster.
    3) Any certification conflicts.
    Any corrections are welcome.
    -Harish Kumar Kalra

  • Shared Tuxedo 8.0 Binaries on a SUN Cluster 3.0

    I know perfectly well that in every installation document BEA strongly advises not to try to share executables across remote file systems (NFS etc.). Still, I need to ask whether any of you have experience with a setup in a Solaris 8 / Sun Cluster 3.0 environment where 2 or more nodes share disks through the same Sun Cluster 3.0. The basic idea is to have the Tux8 binaries installed only once, and then to separate all the "dynamic" files (tmconfig, tlog devices etc.) into their own respective directories (/node1, /node2, etc.), while they still remain on the clustered disks.
    Thank you for a quick response.
    Best of regards
    Raoul

    We have the same problem with 2 Sun E420s and a D1000 storage array.
    The problem is related to the settings in the file /etc/system added by the cluster installation:
    set rpcmod:svc_default_stksize=0x4000
    set ge:ge_intr_mode=0x833
    The second line tries to set a Gigabit Ethernet interface that does not exist.
    We commented the two lines out and everything works fine.
    I'm interested to know what you think about Sun Cluster 3.0 and about your experience.
    email : [email protected]
    Stefano

  • Sun Cluster question

    Hello everyone
    I've inherited an Oracle Solaris system holding Sybase ASE databases. The system consists of two nodes in a Sun Cluster. Each of the nodes hosts 2 Sybase database instances, where one of the nodes is active and the other is standing by. The scenario at hand is that when any of the databases on one node fails for whatever reason, the whole system gets shifted to the second node to keep the environment going. That works fine.
    My intended scenario:
    Each node holds 2 database instances, and both nodes ARE working at the same time, so that each one is serving one instance of the database. In the event of a failure on one node, the other one should assume the role of BOTH database instances till the first one gets fixed.
    The question is: is that possible? And if it is, does it require breaking the whole cluster and rebuilding it, or can this be done online without bringing down the system?
    Thanks a lot in advance

    What you propose will not work either. E.g. there is no logic implemented to fence the underlying zpool from one node to the other in such a configuration.
    Also, the current SUNW.HAStoragePlus(5) manpage documents:
            Note - SUNW.HAStoragePlus does not support file systems created on ZFS volumes.
                   You cannot use SUNW.HAStoragePlus to manage a ZFS storage pool that contains a file system for which the ZFS mountpoint property is set to legacy or none. [...]
    Greets
    Thorsten
