Understanding Quorum Disk

We've got a simple 3-node Hyper-V cluster which initially passed the validation tool 100%. We created it, added the drives from the SAN, and everything works great (including planned/unplanned failovers), BUT... when we now re-validate the cluster we get a quorum warning.
I understand that we can designate one of the disks on the SAN as a quorum disk, and if we do, the warning goes away. But if we do that, does it mean we cannot use that disk for actual storage of VMs?

With a 3-node cluster you can use "Node Majority", so everything should work fine without a quorum disk.
Right: you don't store any data on a quorum disk. Keep it for system (cluster voting) use.
For your reference see:
Understanding Quorum
http://technet.microsoft.com/en-us/library/cc731739.aspx
Hope this helped a bit :)
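The node-majority arithmetic behind that advice can be sketched in a few lines (a hypothetical illustration, not actual cluster code):

```python
# Node-majority quorum in a nutshell: the cluster keeps running only
# while a strict majority of the votes is still present.
def has_quorum(votes_present: int, total_votes: int) -> bool:
    return votes_present > total_votes // 2

# A 3-node cluster under Node Majority has 3 votes (no disk witness needed):
print(has_quorum(3, 3))  # all nodes up -> quorate
print(has_quorum(2, 3))  # one node down -> still quorate
print(has_quorum(1, 3))  # two nodes down -> cluster stops
```

With an odd number of nodes there is always a clear majority, which is why node majority is the usual recommendation for a 3-node cluster.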

Similar Messages

  • Cluster Quorum Disk failing inside Guest cluster VMs in Hyper-V Cluster using Virtual Disk Sharing Windows Server 2012 R2

    Hi, I'm having a problem in a VM guest cluster using Windows Server 2012 R2 with virtual disk sharing enabled.
    It's a SQL 2012 cluster, which has around 10 VHDX disks shared this way. All the VHDX files are inside LUNs on a SAN. These LUNs are presented to all clustered members of the Windows Server 2012 R2 Hyper-V cluster via Cluster Shared Volumes.
    Yesterday a very strange problem happened: both the Quorum disk and the DTC disk had their contents completely erased. The VHDX files themselves were there, but the data inside was gone.
    The SQL admin had to recreate both disks, but now we don't know whether this issue was related to the virtualization platform or to another event inside the cluster itself.
    Right now I'm seeing these errors on one of the VM guests:
     Log Name:      System
    Source:        Microsoft-Windows-FailoverClustering
    Date:          3/4/2014 11:54:55 AM
    Event ID:      1069
    Task Category: Resource Control Manager
    Level:         Error
    Keywords:      
    User:          SYSTEM
    Computer:      ServerDB02.domain.com
    Description:
    Cluster resource 'Quorum-HDD' of type 'Physical Disk' in clustered role 'Cluster Group' failed.
    Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it.  Check the resource and group state using Failover Cluster
    Manager or the Get-ClusterResource Windows PowerShell cmdlet.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Microsoft-Windows-FailoverClustering" Guid="{BAF908EA-3421-4CA9-9B84-6689B8C6F85F}" />
        <EventID>1069</EventID>
        <Version>1</Version>
        <Level>2</Level>
        <Task>3</Task>
        <Opcode>0</Opcode>
        <Keywords>0x8000000000000000</Keywords>
        <TimeCreated SystemTime="2014-03-04T17:54:55.498842300Z" />
        <EventRecordID>14140</EventRecordID>
        <Correlation />
        <Execution ProcessID="1684" ThreadID="2180" />
        <Channel>System</Channel>
        <Computer>ServerDB02.domain.com</Computer>
        <Security UserID="S-1-5-18" />
      </System>
      <EventData>
        <Data Name="ResourceName">Quorum-HDD</Data>
        <Data Name="ResourceGroup">Cluster Group</Data>
        <Data Name="ResTypeDll">Physical Disk</Data>
      </EventData>
    </Event>
    Log Name:      System
    Source:        Microsoft-Windows-FailoverClustering
    Date:          3/4/2014 11:54:55 AM
    Event ID:      1558
    Task Category: Quorum Manager
    Level:         Warning
    Keywords:      
    User:          SYSTEM
    Computer:      ServerDB02.domain.com
    Description:
    The cluster service detected a problem with the witness resource. The witness resource will be failed over to another node within the cluster in an attempt to reestablish access to cluster configuration data.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Microsoft-Windows-FailoverClustering" Guid="{BAF908EA-3421-4CA9-9B84-6689B8C6F85F}" />
        <EventID>1558</EventID>
        <Version>0</Version>
        <Level>3</Level>
        <Task>42</Task>
        <Opcode>0</Opcode>
        <Keywords>0x8000000000000000</Keywords>
        <TimeCreated SystemTime="2014-03-04T17:54:55.498842300Z" />
        <EventRecordID>14139</EventRecordID>
        <Correlation />
        <Execution ProcessID="1684" ThreadID="2180" />
        <Channel>System</Channel>
        <Computer>ServerDB02.domain.com</Computer>
        <Security UserID="S-1-5-18" />
      </System>
      <EventData>
        <Data Name="NodeName">ServerDB02</Data>
      </EventData>
    </Event>
    We don't know if this can happen again; what if it happens on a disk with data? We don't know if this is related to the virtual disk sharing technology or anything related to virtualization, but I'm asking here to find out if it is a possibility.
    Any ideas are appreciated.
    Thanks.
    Eduardo Rojas

    Hi,
    Please refer to the following link:
    http://blogs.technet.com/b/keithmayer/archive/2013/03/21/virtual-machine-guest-clustering-with-windows-server-2012-become-a-virtualization-expert-in-20-days-part-14-of-20.aspx#.Ux172HnxtNA
    Best Regards,
    Vincent Wu

  • Failover Cluster Quorum Disk is fallen off the shared volume

    Hi, we had a cluster that was holding 40+ VMs and was originally set up with the built-in 1 GbE adapters. Yesterday we installed QLogic 10 GbE NICs with teaming on both nodes and reconfigured the network on both of them. However, the quorum disk is now no longer part of the Cluster Shared Volumes. How can I add that disk back to the shared volume, please?

    Hi Riaz,
    Adding a quorum disk is easy, so please let us know if any specific error occurs during the steps provided in the following thread:
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/0566ede4-55bb-4694-a134-104fac2a7052/replace-quorum-disk-on-failover-cluster-on-different-lun?forum=winserverClustering
    If you have any feedback on our support, please send to [email protected]

  • Can't start cluster, 2 node 3.3 cluster lost 2 quorum disks

    Hi,
    I have a 2-node cluster with one iSCSI quorum disk. I was in the middle of migrating the quorum device to another iSCSI disk when the servers lost contact with the disks (an iSCSI target problem), so the 2 cluster nodes were left with no quorum: because of the 2 quorum devices, 3 votes are needed, and I only have the 2 votes from the 2 cluster nodes.
    The iSCSI disks are back online, but the cluster/quorum isn't able to get hold of them.
    May 11 11:21:59 vmcluster1 genunix: [ID 965873 kern.notice] NOTICE: CMM: Node vmcluster2 (nodeid = 1) with votecount = 1 added.
    May 11 11:21:59 vmcluster1 genunix: [ID 965873 kern.notice] NOTICE: CMM: Node vmcluster1 (nodeid = 2) with votecount = 1 added.
    May 11 11:22:04 vmcluster1 genunix: [ID 832830 kern.warning] WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d1s2 with error 2.
    May 11 11:22:10 vmcluster1 genunix: [ID 832830 kern.warning] WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d2s2 with error 2.
    May 11 11:22:14 vmcluster1 genunix: [ID 884114 kern.notice] NOTICE: clcomm: Adapter e1000g2 constructed
    May 11 11:22:15 vmcluster1 genunix: [ID 884114 kern.notice] NOTICE: clcomm: Adapter e1000g1 constructed
    May 11 11:22:15 vmcluster1 genunix: [ID 843983 kern.notice] NOTICE: CMM: Node vmcluster1: attempting to join cluster.
    May 11 11:22:15 vmcluster1 e1000g: [ID 801725 kern.info] NOTICE: pci8086,100e - e1000g[2] : link up, 1000 Mbps, full duplex
    May 11 11:22:16 vmcluster1 e1000g: [ID 801725 kern.info] NOTICE: pci8086,100e - e1000g[1] : link up, 1000 Mbps, full duplex
    May 11 11:23:20 vmcluster1 genunix: [ID 832830 kern.warning] WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d1s2 with error 2.
    May 11 11:23:25 vmcluster1 genunix: [ID 832830 kern.warning] WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d2s2 with error 2.
    May 11 11:23:25 vmcluster1 genunix: [ID 980942 kern.notice] NOTICE: CMM: Cluster doesn't have operational quorum yet; waiting for quorum.
    Looks like the server thinks the IDs of the disks have changed:
    [root@vmcluster1:/]# scdidadm -L (05-11 11:27)
    1 vmcluster1:/dev/rdsk/c3t5d0 /dev/did/rdsk/d1
    1 vmcluster2:/dev/rdsk/c3t5d0 /dev/did/rdsk/d1
    2 vmcluster1:/dev/rdsk/c3t4d0 /dev/did/rdsk/d2
    2 vmcluster2:/dev/rdsk/c3t4d0 /dev/did/rdsk/d2
    3 vmcluster2:/dev/rdsk/c1t0d0 /dev/did/rdsk/d3
    4 vmcluster1:/dev/rdsk/c1t0d0 /dev/did/rdsk/d4
    5 vmcluster2:/dev/rdsk/c3t6d0 /dev/did/rdsk/d5
    5 vmcluster1:/dev/rdsk/c3t6d0 /dev/did/rdsk/d5
    6 vmcluster2:/dev/rdsk/c1t1d0 /dev/did/rdsk/d6
    7 vmcluster1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d7
    [root@vmcluster1:/]# scdidadm -r (05-11 11:27)
    scdidadm: Device ID "vmcluster1:/dev/rdsk/c3t5d0" does not match physical device ID for "d1".
    Warning: Device "vmcluster1:/dev/rdsk/c3t5d0" might have been replaced.
    scdidadm: Device ID "vmcluster1:/dev/rdsk/c3t4d0" does not match physical device ID for "d2".
    Warning: Device "vmcluster1:/dev/rdsk/c3t4d0" might have been replaced.
    scdidadm: Device ID "vmcluster1:/dev/rdsk/c3t6d0" does not match physical device ID for "d5".
    Warning: Device "vmcluster1:/dev/rdsk/c3t6d0" might have been replaced.
    scdidadm: Could not save DID instance list to file.
    scdidadm: File /etc/cluster/ccr/global/did_instances exists.
    The disks are OK and accessible from format:
    [root@vmcluster1:/]# echo | format (05-11 11:28)
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c1t0d0 <DEFAULT cyl 8351 alt 2 hd 255 sec 63>
    /pci@0,0/pci8086,2829@d/disk@0,0
    1. c1t1d0 <DEFAULT cyl 1020 alt 2 hd 64 sec 32>
    /pci@0,0/pci8086,2829@d/disk@1,0
    2. c3t4d0 <IET-VIRTUAL-DISK-0-1.00GB>
    /iscsi/[email protected]%3Astorage.lun10001,0
    3. c3t5d0 <DEFAULT cyl 497 alt 2 hd 64 sec 32>
    /iscsi/[email protected]%3Astorage.lun20001,1
    4. c3t6d0 <DEFAULT cyl 496 alt 2 hd 64 sec 32>
    /iscsi/[email protected]%3Astorage.lun30001,2
    Is there a way to remove a quorum device without the cluster being online?
    Or is there another alternative, e.g. trying to fix the DID problem?
    Thanks!

    This is the primary reason that you have one and only one quorum device. There are many failure modes that result in your cluster not starting. Looks like your only option is to hand edit the CCR. If this is a production cluster, please log a service desk ticket for the full procedure. If it's just a development cluster and you are happy to take a risk, the basic outline is (IIRC):
    1. Boot nodes into non-cluster mode
    2. Edit /etc/cluster/ccr/global/infrastructure and either remove the cluster.quorum_devices.* entries or set the votecount to 0
    3. cd /etc/cluster/ccr/global
    4. Run /usr/cluster/lib/sc/ccradm replace -i infrastructure infrastructure
    5. Reboot back into cluster mode
    6. Add one new quorum disk
    You may need to run one or more of:
    # cldev refresh
    # cldev check
    # cldev clean
    # cldev populate
    to get the right DID entries between steps 5 and 6.
    Tim
    ---

  • Quorum disk question

    What is the best practice for quorum disk assignment in a dual-node cluster?
    1. Is there any benefit to having a dedicated quorum disk, and if yes, what size should it be?
    2. The manual says: "Quorum devices can contain user data". Does that mean they can contain the NFS shared data in an NFS cluster? Is there a problem if the quorum device in this case is under volume manager (SVM or VxVM) control?
    TIA

    Best practice is to use a disk that is actively used within the cluster as the quorum disk. Because data is then frequently read from and written to the disk, any problems with the disk will be highlighted very quickly. That way a new QD can be nominated before the old disk fails and, should one node then go down, takes the entire cluster with it. (This would happen because the remaining node would not be able to gain majority.)
    A QD can be any shared disk, with data on it, under SVM or VxVM control, or just on its own.

  • Sun cluster quorum disk

    Hi,
    I just want to know how to assign a quorum disk under Sun Cluster. Can I use a LUN that is shared between both nodes as the quorum disk, and do I need to bring the disk under VxVM control first before using it as a quorum disk? I'd appreciate any response/advice.
    Thanks.

    No, you don't need to bring the disk under VxVM control.
    First run scdidadm -L from either node. This will give you a list of shared disk devices. Find one that is shared between the nodes and note its DID, e.g. d21:
    scconf -a -q globaldev=d21
    Once you have added a quorum disk you can set install mode to off.
    scconf -c -q installmodeoff
    I would also recommend reading this:
    http://docs.sun.com/app/docs/doc/816-3384/6m9lu6fig?q=sun+cluster+add+quorum+disk&a=view
    Then reset your quorum count.
    scconf -c -q reset

  • Adding quorum disk causing wasted space?

    Hi,
    Any idea whether this is a bug or an expected behavior?
    Seeing this with ASM 11.2.0.4 and 12.1.0.4
    Have a Normal redundancy disk group (FLASHDG22G below) with two disks of equal size. With no data on the disk group the Usable_file_MB is equal to the size of one disk, as expected.
    But if I add a small quorum disk to the disk group, the Usable_file_MB decreases to 1/2 of the disk size. So, half of the capacity is lost.
    Thoughts?
    [grid@symcrac3 ~]$ asmcmd lsdsk -k
    Total_MB  Free_MB  OS_MB  Name        Failgroup  Failgroup_Type  Library  Label  UDID  Product  Redund   Path
       20980    20878  20980  SYMCRAC3_A  FG1        REGULAR         System                         UNKNOWN  /dev/symcrac3-a-22G
         953      951    953  QUORUMDISK  FGQ        QUORUM          System                         UNKNOWN  /dev/symcrac3-a-quorum
       20980    20878  20980  SYMCRAC4_A  FG2        REGULAR         System                         UNKNOWN  /dev/symcrac4-a-22G
    [grid@symcrac3 ~]$ asmcmd lsdg
    State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
    MOUNTED  NORMAL  N         512   4096  1048576     42913    42707            20980           10388              0             N  FLASHDG22G/

    There are two separate issues:
    1) ASMCMD silently fails to add quorum failure groups: it adds them, but as regular failure groups.
    2) Even if a quorum failure group is added with SQL*Plus, the space is still lost (I have just confirmed it). And it doesn't matter whether I add a quorum disk to an existing group or create a new group with a quorum disk.
    For #2, here is the likely source of the problem: Usable_File_MB = (FREE_MB - REQUIRED_MIRROR_FREE_MB) / 2.
    REQUIRED_MIRROR_FREE_MB is computed as follows (per ASM 12.1 user guide):
    –Normal redundancy disk group with more than two failure groups
       The value is the total raw space for all of the disks in the largest failure group. The largest failure group is the one with the largest total raw capacity. For example, if each disk is in its own failure group, then the value would be the size of the largest capacity disk.
    Instead, it should be "with more than two regular failure groups".
    With just two failure groups it is not possible to restore full redundancy after one of them fails. So, REQUIRED_MIRROR_FREE_MB = 0 in this case.
    Also REQUIRED_MIRROR_FREE_MB should remain 0 even when there are three failure groups if one of them is a quorum failure group. But the logic seems to be broken here.
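As a back-of-the-envelope check, the lsdg numbers above are consistent with the poster's formula once the quorum disk's free space is left out of FREE_MB (that exclusion is an assumption on my part, but it makes the arithmetic line up):

```python
# Values taken from the asmcmd lsdk/lsdg output above.
free_regular = 20878 + 20878      # Free_MB of the two REGULAR disks only
req_mirror_free = 20980           # Req_mir_free_MB (largest failgroup's raw space)

# Normal redundancy stores two copies of each extent, hence the division by 2:
usable_file_mb = (free_regular - req_mirror_free) // 2
print(usable_file_mb)  # 10388, matching the reported Usable_file_MB
```

Which supports the poster's point: once REQUIRED_MIRROR_FREE_MB is charged for a failure group that only holds quorum data, half the real capacity disappears from Usable_file_MB.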

  • ASM quorum disk question

    Hi,
    The documentation says that QUORUM disks do not contain any user data and that QUORUM disks are added as failure groups to disk groups. I didn't find much on Google either.
    Can anyone please confirm:
    1. Are quorum disks created to contain the voting files or OCR (RAC)?
    2. What data do quorum disks contain, if they don't hold any user data?
    3. What is the use of quorum disks?
    Thanks,
    Vishnu P

    855370 wrote:
    Hi,
    Documentation says that the QUORUM disks does not contain any user data and the QUORUM disk are added as the failure groups to the disk groups? didnt find much on google as well.
    Can any one please confirm
    1. are the quorum disks created to contain the VOTING or OCRs (RAC)?
    2. what data does the quorum disks contain. if it is not holding any user data?
    3. use of quorum disks?
    Thanks,
    Vishnu P
    Back in the 1980s, Digital Equipment Corporation (DEC) first created the VAX and then the VAX Cluster.
    When you had an even number of systems, you did not want each half to think it was a valid cluster (a split brain).
    To avoid this problem, DEC added a quorum disk: a valid cluster was 50% of the votes + 1.
    In a two-node RAC, you likewise don't want each system to independently decide that it is the remaining cluster member and stop sharing changes with the other system.

  • Understanding Virtual Disks and Sparse Allocation

    1. From the doc here, I understand that allocation is fast when "sparse allocation" is used while creating a virtual disk, and slow when using non-sparse allocation. But does this mean we are talking about allocation alone, i.e. a one-time activity at creation?
    2. If my repository has 1 TB of space, I see that I can create n virtual disks each larger than 1 TB (say, 3-4 vdisks of 1 TB each), whereas with non-sparse allocation the sum of the disk sizes has to be less than 1 TB.
    FYI: I get an error if I allocate more than the space available: "OVMRU_002032E Excl-BigRepos - Repository size not large enough for new virtual disk of size: 999,999,999"
    3. When I install OVS on a RAID-1 partition (with two virtual disks, say 100 GB and 400 GB), OVS picks up the 100 GB disk by default and installs there, and this 100 GB disk is not 'visible' from then on. Why?

    1. Correct. Sparse allocation means the disk is expanded as the data is written. You're therefore doing two operations, as opposed to one with non-sparse allocation. Thus, when writing to a virtual disk with sparse allocation, the write can take longer if the disk doesn't already have the blocks allocated.
    2. To a degree, yes. It depends on what storage you're using. On some storage, with compression and deduplication, this is not entirely accurate. But this is independent of anything to do with Oracle VM; many storage environments now use "virtual allocation".
    3. The default LUN used by OVS cannot be used for anything else; it is reserved, and you will not be able to use any free space on it. Oracle VM Manager expects raw disks/LUNs/shares to be used as storage, and such storage must be free of any filesystem, partitions, etc. The pools and repositories needed to run Oracle VM use an OCFS2 filesystem.

  • Quorum/Witness Disk Offline every 15 minutes error - Event ID 1069 - Event ID 1558

    2-node active/passive Windows Server 2008 failover cluster. I keep getting the following error events:
    The cluster service detected a problem with the witness resource. The witness resource will be failed over to another node within the cluster in an attempt to reestablish access to cluster configuration data. Event ID 1558
    Cluster resource 'Cluster Disk 1' in clustered service or application 'Cluster Group' failed. Event ID 1069
    However, my quorum disk is online from what I can see. One thing I did notice is that the quorum disk no longer has any files in it; it is just an empty drive.
    I also noticed these errors are being logged every 15 minutes exactly.
    I monitored the quorum disk, and it is going offline momentarily every 15 minutes.
    Experts Exchange posts indicate this is a known issue that is waiting on Microsoft to deliver SP2? Really?
    http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Server/Windows_Server_2008/Q_24751092.html
    Any thoughts?

    support.microsoft.com/kb/2750820 does not resolve your issue, because it applies only to the Node and File Share Majority quorum model.
    The issue is caused by cluster configuration corruption: the cluster checks every 15 minutes, sees the quorum disk in a failed state (it is not actually failed), and is not able to update the current state of the quorum disk.
    Check for the entries below in the cluster logs:
    00000d84.00000f58::2014/11/18-19:04:06.805 INFO  [QUORUM] Node 1: Witness Failed Gum Handler [QUORUM] Node 1
    00000d84.00000f58::2014/11/18-19:04:06.805 INFO  [QUORUM] Node 1: witness attach failed. next restart will happen at 2014/11/18-13:19:06.797
    00000d84.00000ef8::2014/11/18-19:04:06.805 INFO  [QUORUM] Node 1: quorum is not owned by anyone
    Try re-forming the cluster; it should resolve the issue:
    Change the quorum model to Node Majority and then back to Node and Disk Majority.
    Stop the cluster service on all the nodes, and then start it again once it has stopped on all of them.
    Thanks
    Himanshu Swarnkar (MSFT)

  • Understanding disk permissions

    I need a bit of help understanding some disk permissions results. I'm running a Pro Tools studio here and I've been having some slight instability with the software (PT 8.0.1, the latest version) running on Leopard 10.5.8. I'm in the habit of repairing disk permissions if I see some instability or install software, as this is what Digidesign/Avid recommends. Every time I repair disk permissions I get a long list of repairs such as "permissions differ on... etc.", and they seem to be the same ones each time. Here you can see the list after running DP:
    http://farm3.static.flickr.com/2653/41385639615cec372faam.jpg
    and here is the Log window:
    http://farm3.static.flickr.com/2534/4138563977274e321ec4m.jpg
    How do I take this info and make a repair so I don't keep getting these? Or, is it normal to keep getting these same permissions?

    Hi-
    Leopard and permissions are an enigma.
    One can repair them over and over, and they are still in need of repair.
    Besides that, it takes an inordinate amount of time compared to previous OS X versions.
    Many would say to ignore the report, and I concur.
    I usually don't bother with permissions repair, or let it go after one pass.
    Cache problems are the first suspect......

  • Question on Quorum

    Hi Experts,
    A couple of questions on Windows clusters. I am not a cluster expert and am looking for some help understanding the concepts.
    1. In a Windows 2003 cluster we have a quorum disk and the "Disk only" quorum model. If this quorum model is used and the quorum disk is down, then the cluster is down. So my question is: what does the quorum disk contain? All I know is that it contains cluster-related information. But what kind of info is stored on the quorum disk that is so critical?
    2. Secondly, from Windows 2008 onward we have new quorum models (Node Majority, Node and Disk Majority, ...). In these new quorum models, do we still have a quorum disk? And where is the cluster information stored? I've heard that the cluster configuration is stored on, or replicated to, all nodes, with each node holding its own copy. Is this true or not? Please correct me if I am wrong. If it is correct, how does the cluster configuration replication happen? Can anybody explain in simple terms?
    Any links/MSDN blogs or resources would be a great help for reference.
    Thanks in advance.

    Hi Samantha v,
    For the 2003 cluster, before explaining what a cluster quorum is, it is important to understand the background of Windows clustering technologies.
    Starting with Microsoft Windows NT 4.0 Enterprise Edition, Microsoft introduced the idea of a cluster, which is simply a group of servers presented as one virtual server. For example, you can configure two servers, server A and server B, in a cluster and present them to the outside world as server C (a virtual server). If, say, server A dies, server B is used to ensure that the virtual server (server C) and the services it offered are still available to clients, thereby providing transparent access to the user. You can refer to the following related articles for more detail:
    Background (Server Clusters: Quorum Options - Windows Server 2003)
    http://technet.microsoft.com/en-us/library/cc780689(v=ws.10).aspx
    Quorum Drive Configuration Information
    http://support.microsoft.com/kb/280345
    Frequently Asked Questions (Server Clusters: Quorum Options - Windows Server 2003)
    http://technet.microsoft.com/en-us/library/cc737067(v=ws.10).aspx
    For 2008 or later cluster you can refer the following article:
    New Cluster Quorum Models in Windows 2008
    http://blogs.msdn.com/b/saponsqlserver/archive/2010/06/30/new-cluster-quorum-models-in-windows-2008.aspx
    I'm glad to be of help to you!

  • Windows 2008 Cluster question on using a new cluster drive source from shrinking existing disk

    I have a two-node Windows 2008 R2 Enterprise SP1 cluster. It has a basic cluster setup with one quorum disk (Q:) and a data disk (E:) which is 2.7 TB in size. The cluster is connected to a shared Dell disk array.
    My question is: can I safely shrink the 2.7 TB drive down, carve out a 500 GB disk from the same disk, and use it as a new cluster disk resource? We want to install Globalscape SFTP software on this new disk for use as a cluster resource.
    Will this work without crashing the cluster?
    Thanks,
    Gonzolean

    Hi ,
    Thank you for posting your issue in the forum.
    I am trying to involve someone familiar with this topic to further look at this issue. There might be some time delay. Appreciate your patience.
    Thank you for your understanding and support.
    Best Regards,
    Andy Qi
    TechNet Community Support

  • Quorum votes

    I have a three-node cluster with a quorum disk attached to two of them. This makes the number of quorum votes 4. It is my understanding that there should be an odd number of votes in a cluster, so I would like to increase the number of votes assigned to the QD to 2. Is this possible? If not, what are the alternatives?

    Even if this sounds mean, you should try to come up with a list of node-failure scenarios and an appropriate (reconfiguration) action for each. E.g. in your scenario with 3 nodes and 1 QD connected to only 2 of them, what are the scenarios you want to cover? If one node fails, everything is fine. If 2 nodes fail and they are the ones connected to the storage, you lose your data, so no need to worry about the 3rd node. If 2 nodes fail and one of them is not connected to the storage, bad luck: only 2 votes out of 4. But as nodes rarely fail in parallel, you could set the first failed node into maintenance mode, which would lower your total vote count to 3.
    An external quorum server would solve all of this.
    The customer engineer I worked with a while back came back with a 20-page document describing all the various alternatives for vote settings, and in the end an engineer needed to come up with the one and only solution for their very specific problem. If I were to change votes on the fly, I can guarantee I would shoot myself in the foot.

  • Questions on asm disk discovery:

    1) What is the relationship between asm_diskstring in the init.ora and DiscoveryString in the GPnP profile.xml?
    2) Which of the two ultimately drives the disk discovery process?
    3) We know that ASMLib disks are self-describing via the disk header. This overcomes the disk name/path persistency issue, as we no longer need to rely on the path to discover the ASM disks: by setting asm_diskstring='ORCL:*', the ASM instance will identify the right disks automatically. However, I am not sure that asm_diskstring='ORCL:*' is the most economical way to do the discovery, as Oracle may have to probe every disk on the OS to determine the right ones. If Oracle has to screen all the disks this way, then setting asm_diskstring='<path_to_asmlib_disk>' would be much faster, although it would be open to the persistency problem. Is my understanding correct?
    Thanks.

    From my understanding, all the disks you see in /dev/oracleasm/disks are the disks in your system that were discovered by ASMLib at the discovery stage.
    Currently, due to bug 13465545, the ASM instance will discover disks from both locations, ASM_DISKSTRING and the GPnP profile, which can cause some mess in the disk representation for ASM. You can check the settings using the asmcmd command dsget, and set them to be the same using dsset.
    I think it's more secure to set ASM_DISKSTRING to only the disks used by the ASM instance.
    ASMCMD> dsget
    Regards
    Ed
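To illustrate the question about 'ORCL:*' vs. a path: a discovery string behaves like a glob pattern matched against candidate device names (a simplified sketch with made-up names, not the actual ASM scan):

```python
import fnmatch

# Hypothetical candidates an instance might see during discovery.
candidates = [
    "ORCL:DATA1",
    "ORCL:DATA2",
    "/dev/sdb1",
    "/dev/oracleasm/disks/DATA1",
]

def discover(diskstring: str):
    """Return the candidates matching the discovery string (glob semantics)."""
    return [d for d in candidates if fnmatch.fnmatchcase(d, diskstring)]

print(discover("ORCL:*"))                  # only the ASMLib-labelled disks
print(discover("/dev/oracleasm/disks/*"))  # path-based discovery
```

A narrower pattern shrinks the set of devices that have to be opened and probed, which is the intuition behind setting ASM_DISKSTRING to just the disks the instance actually uses.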
