Question about cluster node majority voting

We've been having problems with a DB instance crashing regularly. This weekend when it crashed, it seems to have taken down the node it was on with it, or perhaps this was a separate incident.
Right now I have 3 nodes in the cluster. Two nodes are running the 3 instances (2 on one node). The 3rd node is in a state where the OS is mostly unusable and the Cluster service will not start.
Event Log:
"The failover cluster database could not be unloaded. If restarting the cluster service does not fix the problem, please restart the machine."
Cluster Log from that machine:
00003768.000067a0::2014/01/06-03:28:05.393 INFO  -----------------------------+ LOG BEGIN +-----------------------------
00003768.000067a0::2014/01/06-03:28:05.393 INFO  [CS] Starting clussvc as a service
00003768.000067a0::2014/01/06-03:28:05.394 INFO  [CS] cluster service logging level is 2
00003768.00004c30::2014/01/06-03:28:05.521 DBG   [NETFTAPI] received NsiInitialNotification
00003768.00004c30::2014/01/06-03:28:05.523 DBG   [NETFTAPI] received NsiInitialNotification
00003768.000031f4::2014/01/06-03:28:05.588 DBG   [NETFTAPI] received NsiAddInstance  for 169.254.3.47
00003768.00004eb4::2014/01/06-03:28:05.590 ERR   [DM] Error while restoring (refreshing) the hive: STATUS_INVALID_PARAMETER(c000000d
00003768.00004eb4::2014/01/06-03:28:05.592 ERR   [DM] mscs::DmAgent::Start: STATUS_INVALID_PARAMETER(c000000d' because of 'Load(NOTHROW(), securityAttributes, discardError )'
00003768.00004eb4::2014/01/06-03:28:05.592 ERR   [DM] Node 3: failed to unload cluster hive, error 87.
00003768.00004eb4::2014/01/06-03:28:05.592 ERR   Hive unload failed (status = 87)
00003768.00004eb4::2014/01/06-03:28:05.592 ERR   FatalError is Calling Exit Process.
This is a 3-node cluster set to Node Majority; I don't have an available drive letter for a witness disk. Since the cluster service won't start on that node, I'm not certain how the cluster is still running, but I'm thankful that it is.
A reboot might fix everything, but I'm very worried that if I reboot the server and the cluster service still fails to start, it may prevent the entire cluster from starting, and we won't be able to run the instances on the other 2 nodes.
Does the 3rd server still count toward the odd number of votes, even if the cluster service won't start? If I reboot and the cluster service still fails to start, will the cluster itself be able to stay in an UP state and run the DB instances on the other nodes?
I already need to open a MS Support incident on the DB instance crashing, so I'd rather not have to open a 2nd one just to answer this hopefully simple question.
Thanks in advance!
Mark

I'll answer it here, since it matters fundamentally to SQL High Availability.
There are a couple of entities you are conflating here, leading to much confusion.  There is a difference between the Cluster and the cluster service.
The cluster service will run on a node once the Failover Cluster feature is installed on that node. The cluster service will run even if a cluster has not been created. It may generate errors and not participate in a cluster if it cannot talk to the other nodes, but it will not shut down.
The Cluster itself requires a quorum, that is, a majority of votes, in order to operate. With three nodes, you should choose the Node Majority quorum model, which sounds like what you have. Any two votes form a majority, so the third node being offline does not matter. You can safely restart the cluster service on the failed node, and even restart the node itself. Note that with the third node down, you have no redundancy. (Windows 2012 and 2012 R2 have dynamic quorum, which adjusts the quorum count based on the last "settled" quorum vote, but that doesn't apply here.)
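If you want to double-check before doing anything, the quorum model and node states can be read from PowerShell. A minimal sketch, assuming the FailoverClusters module that ships with the feature (run it on one of the two healthy nodes):

Import-Module FailoverClusters

# Show the quorum model currently in effect (e.g. NodeMajority)
Get-ClusterQuorum

# Show each node's state; with Node Majority, any 2 of 3 Up nodes keep quorum
Get-ClusterNode | Format-Table Name, State -AutoSize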
I am concerned by your statement that you are out of drive letters. With three instances, you should have plenty of drive letters left. I suggest investigating Mount Points; you only need one drive letter per instance when using Mount Points. A quick sketch follows.
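A minimal mount point sketch from an elevated prompt (the volume GUID below is a placeholder you would copy from mountvol's own listing, and on a cluster the mounted volume must be a clustered disk with a dependency on the disk that owns the root folder):

# List volume GUIDs and where they are currently mounted
mountvol

# Hypothetical example: surface a data volume as a folder instead of a letter
mkdir D:\MountPoints\SQLData3
mountvol D:\MountPoints\SQLData3 \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\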
Geoff N. Hiten Principal Consultant Microsoft SQL Server MVP

Similar Messages

  • Question about cluster node NodeWeight property

    Hi,
    I have a three-node (A/B/C) Windows 2008 R2 SP1 cluster, testCluster, and installed KB2494036 on all three nodes. Suppose node A is the active node.
    I configured node C's NodeWeight property to 0, and nodes A and B keep the default (NodeWeight=1). I also added a shared disk Q for the cluster quorum.
    So I want to know: if node C and node B are down, does the cluster testCluster go down from loss of quorum, or does it stay up?
    At first I thought testCluster should stay up, because the cluster has 2 votes (node A and the quorum disk) and node C doesn't join the voting. But after testing, testCluster went down from loss of quorum.
    Does anybody know the reason? Thanks.

    Hello mark.gao,
    Let me see if I understand your steps correctly. If you created your cluster with three nodes at the beginning, your quorum model should be "Node Majority"; then you have three votes, one per node.
    Then the vote for node "C" was removed and a disk was added as witness for the cluster quorum; at this point we have two out of three votes from the original "Node Majority" configuration.
    Question: at some point did you change the quorum model to "Node and Disk Majority"?
    Maybe this is the issue: you are still on "Node Majority", and when nodes "B" and "C" are down we have only one vote, from node "A", therefore there is no quorum to keep the service online.
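    For reference, a sketch of how a vote is typically removed on 2008 R2 with the hotfix you mention (node name taken from your question):
    # Requires KB2494036 on Windows Server 2008 R2
    Import-Module FailoverClusters
    (Get-ClusterNode -Name "C").NodeWeight = 0
    Get-ClusterNode | Format-Table Name, NodeWeight -AutoSize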
    On 2012 we have the awesome option to configure a Dynamic Quorum:
    Dynamic quorum management
    In Windows Server 2012, as an advanced quorum configuration option, you can choose to enable dynamic quorum management by cluster. When this option is enabled, the cluster dynamically manages
    the vote assignment to nodes, based on the state of each node. Votes are automatically removed from nodes that leave active cluster membership, and a vote is automatically assigned when a node rejoins the cluster. By default, dynamic quorum management is enabled.
    Note
    With dynamic quorum management, the cluster quorum majority is determined by the set of nodes that are active members of the cluster at any time. This is an important distinction from the cluster quorum in Windows Server 2008 R2, where the quorum
    majority is fixed, based on the initial cluster configuration.
    With dynamic quorum management, it is also possible for a cluster to run on the last surviving cluster node. By dynamically adjusting the quorum majority requirement, the cluster can sustain
    sequential node shutdowns to a single node.
    The cluster-assigned dynamic vote of a node can be verified with the DynamicWeight common property of the cluster node by using the Get-ClusterNode Windows PowerShell cmdlet. A value of 0 indicates that the node does not have a quorum vote. A value of 1 indicates that the node has a quorum vote.
    The vote assignment for all cluster nodes can be verified by using the Validate Cluster Quorum validation test.
    Additional considerations
    Dynamic quorum management does not allow the cluster to sustain a simultaneous failure of a majority of voting members. To continue running, the cluster must always have a quorum majority at the time of a node shutdown or failure.
    If you have explicitly removed the vote of a node, the cluster cannot dynamically add or remove that vote. 
    Configure and Manage the Quorum in a Windows Server 2012 Failover Cluster
    https://technet.microsoft.com/en-us/library/jj612870.aspx#BKMK_dynamic
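    As a quick sketch of the check described above (Windows Server 2012 or later; DynamicWeight is the cluster-assigned dynamic vote, NodeWeight the administrator-assigned one):
    # Compare assigned votes with the dynamic votes the cluster is using now
    Get-ClusterNode | Format-Table Name, State, NodeWeight, DynamicWeight -AutoSize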
    Hope this info helps you reach your goal. :D
    5ALU2 !

  • A question about cluster

    Hi all,
    I have a question about the usage of clusters. I created a cluster which includes three indicators on the front panel, but I can't use the Unbundle function on the diagram. If I change the indicators to controls, it works. I didn't find in the manual that the Unbundle function only accepts clusters of controls as input.
    Thanks for your help!
    Regards,
    Tao

    This is because of the way dataflow programming works. The manual probably doesn't mention it explicitly.
    LabVIEW has data sources (controls, inputs) and data sinks (indicators, outputs). Unbundling accepts a cluster as input and outputs the individual pieces of data. Similarly, bundling accepts individual pieces of data as input and outputs a cluster. So logically your cluster must be a control (input) for unbundling, and an indicator (output) for bundling.
    If I'm misunderstanding the question or oversimplifying the problem, email me or post again.
    -joey
    "tsong" wrote in message news:b11154$qmq$[email protected]..
    > Hi all,
    >
    > I have a question about the usage of cluster. I create a cluaster
    > which includes three indicators on the front panel, but I can'
    t use
    > unbundle function on the diagram. If I change the indicator to control,
    > it works. I didn't find on the manual that the unbundle function only
    > accept cluster of control as input.
    >
    > Thanks for your help!
    >
    > Regards,
    >
    > Tao
    >
    >
    >

  • Question about Cluster/DSync/Load Balance

    According to the iPlanet admin doc, the primary server is the "manager" for data sync. Is there any impact on load balancing when the iAS runs as primary or backup?
    Will the primary kxs get the request first and do the dispatching?
    Thanks.
    Heng

    First of all, let's discuss load balancing.
    The type of load balancing you are using determines which process manages it. If you are using response time (per-server or per-component response time) or round robin (regular or weighted), the web connector does the load balancing. If you are using user-defined (iAS-based) load balancing, then the kxs process becomes involved, since the "Load Balancing System" is part of the kxs process.
    Now for Dsync and how it impacts load balancing.
    When a server is in a sync primary or sync backup role, it is doing more work. For the sync primary, the extra work is making sure the backup has the latest Dsync data and processing requests from the other servers in the cluster about the distributed data. All state/session information is updated/created/deleted on the sync primary; when this happens, the sync primary immediately updates the sync backup(s) with the new information. As you can guess, managing the Dsync information and making the updates to the sync backups causes extra processing on the sync primary, so it will impact the overall performance of the machine (whether in server load or response time). All lookups of state/session information are done on the sync primary only, so the more lookups/updates you have, the more impact on the server.
    The sync backup(s) also have the extra work of managing their copy of the Dsync data, which impacts server performance, but to a lesser degree than on the sync primary.
    Ultimately, the extra overhead involved does have an impact on load balancing, due to the extra load on the sync primary and the sync backups.
    Hope that helps,
    Chris Buzzetta

  • Questions about multiple nodes and licenses

    Nico, what would be the right forum to ask the licensing questions I stated above? Jimit

    I can't and won't discuss license issues, but for the last question (mixing nodes with different operating systems) there's a fairly easy answer why this can lead to all sorts of trouble. Windows and Linux/Unix usually work with different code pages, making it hard to interchange flat files. Text lines in Windows flat files are usually terminated by the two characters Carriage Return followed by Line Feed (0x0D followed by 0x0A), whereas under Linux/Unix text lines are always terminated by Line Feed characters (0x0A) only. Not all Windows and Linux/Unix programs can handle this difference without trouble. Very often Windows and Linux/Unix machines run with different 8-bit code pages for the nodes themselves, which means that all sorts of diacritics (such as the German umlauts ä, ö, ü, ß) may be processed by programs on the two platforms in different ways. Shell scripts, batch files, and operating system commands in general are very much incompatible between these systems, making it almost impossible (ok, just extremely difficult) to write scripts that run in both worlds. Last but not least, Integration Services in a server grid must be of the same operating system and run with the same code page, meaning that these two nodes will never be able to be part of the same server grid. That's just what came to my mind within a few seconds of thinking.
    Regards,
    Nico

  • A question about cluster of indicators

    Hi,
    Here is what I want to achieve:
    Three indicators, using a cluster to change the displayed number.
    Here is what I have done:
    1) Created three indicators on the front panel
    2) Put them in a cluster
    3) Created a local variable and changed its attribute to read
    4) Unbundled the cluster local variable
    5) Now I can't wire any value to the output elements
    of the unbundle function. (It seems all indicators have become
    data sources.)
    How can I solve this problem?
    Thanks a lot for your help!
    Regards,
    Tao

    The issue is that a read local variable IS a data source. If you want to write to a control programmatically (promise me you are only going to do this in your user interface code), you have to use a write local variable. In your case, you need to bundle the three control values before writing the output of the bundler to the local variable.
    Mike...
    Certified Professional Instructor
    Certified LabVIEW Architect
    LabVIEW Champion
    "... after all, He's not a tame lion..."
    Be thinking ahead and mark your dance card for NI Week 2015 now: TS 6139 - Object Oriented First Steps

  • Question about context node filling

    Hi all gurus; I'm struggling with a simple task and hope that someone can help.
    In short: I defined a data structure as follows:
    DATA: BEGIN OF ls_struct,
            vendor  TYPE bbp_bp_orga,
            cptable TYPE zebp_contpers_t, "this is a Table Type!
          END OF ls_struct,
          lt_struct LIKE TABLE OF ls_struct.
    So there's a table lt_struct whose lines are built up from:
    - a "flat" field;
    - an internal table (made up of some fields).
    I need to TRANSPORT this information from one view to another using a common Context Node shared via the ComponentController.
    I then tried to create such a node in the Context but there's something I must have done wrong:
    Here's my sketch:
    CP_FOR_BIDDERS has no Dictionary structure; here's the subnode CPTABLE:
    I tried then to store the values in these nodes in my methods:
    IF lt_struct IS NOT INITIAL.
      DATA lo_nd_cp_for_bidders TYPE REF TO if_wd_context_node.
      lo_nd_cp_for_bidders = wd_context->get_child_node( name = wd_this->wdctx_cp_for_bidders ).
      CALL METHOD lo_nd_cp_for_bidders->bind_table
        EXPORTING
          new_items = lt_struct.
    *     set_initial_elements = ABAP_TRUE
    *     index                =
    ENDIF.
    However, if I then try to GET the values from the node, I can see only the VENDOR values, while the associated internal table is always blank.
    What am I missing?
    Thanks in advance

    Hi Matteo,
    You also need to bind the CPTABLE node for each CP_FOR_BIDDERS element. One example is below; you can also loop through the table of elements of node CP_FOR_BIDDERS, fetch each element's CPTABLE node, and bind the CPTABLE data to each one.
        lo_nd_cptable = wd_context->path_get_node( path = `CP_FOR_BIDDERS.CPTABLE` ).
        CALL METHOD lo_nd_cptable->bind_table
          EXPORTING
            new_items            = lt_cptable_data
            set_initial_elements = abap_true.
    Cheers,
    Amy

  • Cluster windows 2008 NODE MAJORITY

    Hello, I have a Windows 2008 cluster with three nodes (A-B-C) in NODE MAJORITY (the Windows default). I have installed Oracle 10g Release 2. Node A is active with DB1, node B is active with DB2, and node C is passive for node A or B.
    I have installed Fail Safe and, apparently, the resources move from node A to C, and from node B to C.
    But when I open the Fail Safe panel, if the OracleMSCService service is not on the node holding the node majority, the panel does not open.
    I want to configure the service to move with the cluster node majority. How can I do that?
    Or am I obliged to create a cluster with NODE MAJORITY and a DISK QUORUM?
    Or am I obliged to reduce the cluster to two nodes with a disk quorum?
    This is very urgent; the cluster goes to production on Friday.

    DUPLICATE POSTING
    {message:id=4173094}
    This is not acceptable usage of OTN. Please stop posting the exact same message in multiple OTN forums.

  • Question about adding an Extra Node to SOFS cluster

    Hi, I have a fully functioning SOFS cluster with two nodes. It uses SAN FC storage, not SAS JBODs, and is running about 100 VMs in production at the moment.
    Both my nodes currently sit on one blade chassis, but for resiliency I want to add another node from a blade chassis in our secondary, smaller on-site DC.
    I've done plenty of cluster node upgrades before on SQL and Hyper-V, but never with a SOFS cluster.
    I have the third node fully prepared: it can see the disks and FC LUNs on the SAN (using PowerPath and Disk Manager), and all the roles are installed.
    So in theory I can just add this node in Cluster Manager and it should all be good. My question is: has anyone else done this, and is there anything else I should be aware of? What's the best way to check that the new node will function and can take over the file role without issues? I know I can run a validation when adding the node; I presume this is the best option?
    I cannot find much information on the web about expanding a SOFS cluster.
    Any advice or information would be gratefully received!
    cheers
    Mark

    Hi Mark,
    Sorry for the delay in replying.
    As you said, there is not much information available about adding a node to a SOFS cluster.
    The only ones I could find is related to System Center (VMM):
    How to Add a Node to a Scale-Out File Server in VMM
    http://technet.microsoft.com/en-us/library/dn466530.aspx
    However, adding a node to a SOFS cluster should be as simple as the preparation you have just done. You can give it a try and see the result; a sketch of the PowerShell equivalent follows.
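    A minimal sketch of the validate-then-add sequence (the node and cluster names here are made up for illustration; Test-Cluster and Add-ClusterNode are the cmdlets behind the GUI wizard):
    # Validate the current members together with the candidate node
    Test-Cluster -Node SOFS-N1, SOFS-N2, SOFS-N3

    # If validation is clean, join the new node, then confirm it is Up
    Add-ClusterNode -Cluster SOFS01 -Name SOFS-N3
    Get-ClusterNode -Cluster SOFS01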
    If you have any feedback on our support, please send to [email protected]

  • Question about DBCA generate script o create RAC database 2 node cluster

    A question about creating a two-node RAC database on 11g after installing and configuring 11g Clusterware: I used DBCA to generate a script to create a RAC database. I set the environment variable ORACLE_SID=RAC, and the creation script creates the instances RAC1 and RAC2. My understanding is that each instance will run on its own node, but that there should only be one database with the name 'RAC'. Please advise.

    You are getting your terminology mixed up.
    You only have one database. Take a look: there is one set of datafiles on shared storage.
    You have 2 instances which access the one database.
    The database name is RAC. The instance names are RAC1, RAC2, etc.
    Also, if you look at the listener configuration, and if your tnsnames is set up properly, then connecting to RAC will connect you to either one of the instances, whereas connecting to RAC1 will connect you to that specific instance.

  • BizTalk Enterprise Passive Cluster Node Licensing Question

    Can anyone confirm whether you need to license a passive BizTalk Enterprise cluster node if no BizTalk components are running? If so, is there an official reference I can refer to? The PUR has a section on Running Instances which states that components must be in memory to require a license, but no BizTalk components would be in memory if all services are stopped. All the references I have read about passive nodes state they must be licensed, though.
    Nikolai Blackie Adaptiv Integration

    The reason I am asking is that there is a site with a small cluster with all hosts and SSO under cluster management; on the effectively passive node there are no actually running instances of any BizTalk components. Not strictly HA, but certainly quicker than DR. Personally I would have just used VM failover, but that design decision was made a long time ago.
    http://msdn.microsoft.com/en-us/library/aa578057.aspx
    This is a relatively grey area in terms of licensing the configuration, and depending on how you interpret the PUR, unlicensed passive nodes appear to be valid under the documented terms.
    It would just be great if there were something somewhere that said outright that all BizTalk servers in a cluster must have an assigned server license, or that cluster nodes with no running components do not need a license =)
    Nikolai Blackie Adaptiv Integration

  • A question about Job schdueling in cluster

    hi all
    I have a WebLogic cluster and want to use the built-in commonj support to do some scheduling work. The PDF version of the document "Timer and Work Manager API (CommonJ) Programmer's Guide" says on page 7: "The Timer Listener class must be present in the server system classpath". Does that mean I should not put it in WEB-INF/classes, and should instead jar it and put the jar somewhere inside wls_home/server/lib or ext?
    thanks a lot :-]

    hi mchellap
    here is another question about timers in the cluster:
    1) I implemented a serializable TimerListener which I want to make cluster-aware
    2) I put the JNDI entry "timer/MyTimer" in web.xml, mapped to commonj.timers.TimerManager
    3) I created a datasource on the cluster in the console, with the tables created in the DB
    After the cluster is started, the job (printing out "new Date()" to the console every 40 seconds) works very well.
    I was expecting something in the DB table, but there is nothing, not even an exception. Anything wrong here?
    thanks a lot

  • What are the preferred methods for backing up a cluster node bootdisk?

    Hi,
    I would like to use flarcreate to back up the boot disks of each of the nodes in my cluster, but I cannot see this method mentioned in any cluster documentation.
    Has anybody used flash backups for cluster nodes before (and, more importantly, successfully restored a cluster node from a flash image)?
    Thanks very much,
    Trevor

    Hi, some background on this: I need to patch some production cluster nodes, and obviously would like to back up the root disk of each node before doing so.
    What I really need is some advice about the best method to back up and patch my cluster nodes (with a recovery method as well).
    The Sun documentation for this says to use ufsdump, which I have used in the past, but will FLAR do the same job? Has anyone had experience using FLAR to restore a cluster node?
    Or does someone have other solutions for patching the nodes? Maybe offline my root mirror (SVM), patch the root disk, and, barring any major problems, online the mirror again?
    Cheers, Trevor

  • The question about the HA installation on ECC6.0

    Hi Experts,
    We are about to implement a project with an HA environment on ECC6.0 in the near future, covering just the ABAP stack. After reading the Installation Guide, I still have several questions about the procedure of an HA installation.
    In the guide document, I found the following steps for realizing HA on ECC6.0:
    1. Run SAPinst to install the central services instance (ASCS) using the virtual host name on the primary cluster node, host A.
    2. Prepare the standby node, host B, making sure that it meets the hardware and software requirements and has all the necessary file systems, mount points, and (if required) Network File System (NFS), as described in Preparing for Switchover.
    3. Set up the user environment on the standby node, host B. For more information, see Creating Operating System Users and Groups Manually. Make sure that you use the same user and group IDs as on the primary node. Create the home directories of the users and copy all files from the home directory of the primary node.
    4. Configure the switchover software and test that switchover functions correctly.
    5. Install the database instance on the primary node, host A.
    6. Install the central instance with SAPinst on the primary node, host A.
    7. If required, install additional dialog instances with SAPinst to replicate the SAP system services that are not a SPOF. These nodes do not need to be part of the cluster.
    My question is: does the standby node (host B in the above context) need to have the ASCS, database instance, and central instance installed?
    If host B does not need the database instance, what happens to the whole system when the primary cluster node (host A in the above context) totally crashes, for example from a power failure?

    Hi Rong,
    I'll try to explain it in simple words...
    My question is: does the standby node (host B in the above context) need to have the ASCS,
    database instance, and central instance installed?
    If host B does not need the database instance, what happens to the whole system when
    the primary cluster node (host A in the above context) totally crashes, for example from a power failure?
    1. You don't need to install ASCS on node B. You install it using a VIRTUAL HOSTNAME, which represents the cluster, not an individual node. The VIRTUAL HOSTNAME is assigned to the cluster package, so whichever node owns the package will have the VIRTUAL HOSTNAME (it switches with a cluster switchover).
    2. It is actually cluster package configuration magic. When node A is active, the cluster package owner is node A, so all mount points (which are on SAN disk) are mounted on node A. When you switch over the cluster, those packages are mounted on node B.
    Sometimes a single cluster package is used (which includes the mount points for the SAP instance + database directories). You can also use 2 cluster packages, separating the SAP and database directory structures.
    Only OS-related directories should be on the server's local disk. All other application-related mount points should be on SAN disk configured in the "Cluster Package" (for example /sapmnt, /usr/sap, /oracle etc.).
    You only need identical users and their environment settings on both nodes.
    In simple words: when the primary node fails or crashes, only the users and their environment settings are lost. On the second node, because of the identical users and their profiles, the same settings are available to bring up the SAP system. All your SAP and database data is intact, as it is on the SAN disk.
    I hope your confusion is cleared up now...
    Regards.
    Rajesh Narkhede

  • How to use SVM metadevices with cluster - sync metadb between cluster nodes

    Hi guys,
    I feel like I've searched the whole internet regarding this matter but found nothing, so hopefully someone here can help me.
    Situation:
    I have a running server with Sol10 U2. SAN storage is attached to the server, but without any virtualization in the SAN network.
    The virtualization is done by Solaris Volume Manager.
    The customer has decided to extend the environment with a second server to build a cluster. According to our standards we have to use Symantec Veritas Cluster, but I think that regarding my question it doesn't matter which cluster software is used.
    The SVM configuration is nothing special. The internal disks are configured with mirroring; the SAN LUNs are partitioned via format, and each slice is a metadevice.
    d100 p 4.0GB d6
    d6 m 44GB d20 d21
    d20 s 44GB c1t0d0s6
    d21 s 44GB c1t1d0s6
    d4 m 4.0GB d16 d17
    d16 s 4.0GB c1t0d0s4
    d17 s 4.0GB c1t1d0s4
    d3 m 4.0GB d14 d15
    d14 s 4.0GB c1t0d0s3
    d15 s 4.0GB c1t1d0s3
    d2 m 32GB d12 d13
    d12 s 32GB c1t0d0s1
    d13 s 32GB c1t1d0s1
    d1 m 12GB d10 d11
    d10 s 12GB c1t0d0s0
    d11 s 12GB c1t1d0s0
    d5 m 6.0GB d18 d19
    d18 s 6.0GB c1t0d0s5
    d19 s 6.0GB c1t1d0s5
    d1034 s 21GB /dev/dsk/c4t600508B4001064300001C00004930000d0s5
    d1033 s 6.0GB /dev/dsk/c4t600508B4001064300001C00004930000d0s4
    d1032 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004930000d0s3
    d1031 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004930000d0s1
    d1030 s 5.0GB /dev/dsk/c4t600508B4001064300001C00004930000d0s0
    d1024 s 31GB /dev/dsk/c4t600508B4001064300001C00004870000d0s5
    d1023 s 512MB /dev/dsk/c4t600508B4001064300001C00004870000d0s4
    d1022 s 2.0GB /dev/dsk/c4t600508B4001064300001C00004870000d0s3
    d1021 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004870000d0s1
    d1020 s 5.0GB /dev/dsk/c4t600508B4001064300001C00004870000d0s0
    d1014 s 8.0GB /dev/dsk/c4t600508B4001064300001C00004750000d0s5
    d1013 s 1.7GB /dev/dsk/c4t600508B4001064300001C00004750000d0s4
    d1012 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004750000d0s3
    d1011 s 256MB /dev/dsk/c4t600508B4001064300001C00004750000d0s1
    d1010 s 4.0GB /dev/dsk/c4t600508B4001064300001C00004750000d0s0
    d1004 s 46GB /dev/dsk/c4t600508B4001064300001C00004690000d0s5
    d1003 s 6.0GB /dev/dsk/c4t600508B4001064300001C00004690000d0s4
    d1002 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004690000d0s3
    d1001 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004690000d0s1
    d1000 s 5.0GB /dev/dsk/c4t600508B4001064300001C00004690000d0s0
    The problem is the following:
    The SVM configuration on the second server (cluster node 2) must be the same for the devices d1000-d1034.
    Generally speaking, the metadb needs to be in sync.
    - How can I manage this?
    - Do I have to use disk sets?
    - Will a copy of md.cf/md.tab and an initialization with metainit do it?
    It would be great to have several options for how one can manage this.
    Thanks and regards,
    Markus

    Dear Tim,
    Thank you for your answer.
    I can confirm that Veritas Cluster doesn't support SVM by default. Of course they want to sell their own volume manager ;o).
    But that wouldn't be the big problem. With SVM I expect the same behaviour as with VxVM if I do or have to use disk sets, and for that I can write a custom agent.
    My problem is not the cluster implementation. It's more a fundamental problem of syncing the SVM config for a set of metadevices between two hosts. I'm far from implementing the devices in the cluster config as long as I don't know how to let both nodes know about the devices.
    Currently only the host that initialized the volumes knows about them. The second node doesn't know anything about the devices d1000-d1034.
    What I need to know at this stage is:
    - How can I "register" the already initialized metadevices d1000-d1034 on the second cluster node?
    - Do I have to use disk sets?
    - Can I just copy and paste the appropriate lines of md.cf/md.tab?
    - Generally speaking: how can one configure SVM so that different hosts see the same metadevices?
    Hope that someone can help me!
    Thanks,
    Markus
