Cluster vs VMware vMotion

Hi
We are evaluating cluster options, mainly Sun Cluster since VCS is expensive, and we still have another option: VMware vMotion. My experience is more with VCS and SC, and I would like to know if somebody has experience with this product and what advantages and disadvantages it presents against Sun Cluster 3.2.
I'm totally new to this kind of "VMware virtual cluster", if I can call it that, but any comments or suggestions are really appreciated.
The applications that we need to implement in HA are in the midsize segment: not high I/O and not demanding high CPU/memory performance. We are also considering using iSCSI disks for them (using 2 independent iSCSI storage boxes and mirroring at the volume manager level, along the lines of the sketch below), but with this we have a restriction in SC 3.2, since it does not support iSCSI storage, so iSCSI may be discarded.
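For reference, the iSCSI-plus-mirror setup itself is straightforward on Solaris 10; a rough sketch (the IP addresses and device names are placeholders, and whether it is supportable under SC 3.2 is exactly the open question):

# discover LUNs on both independent iSCSI boxes (addresses are placeholders)
iscsiadm add discovery-address 192.168.10.1:3260
iscsiadm add discovery-address 192.168.20.1:3260
iscsiadm modify discovery --sendtargets enable
devfsadm -i iscsi                            # create the device nodes
# mirror one LUN from each box at the volume manager (ZFS) level
zpool create apppool mirror c2t1d0 c3t1d0    # device names are placeholders; check 'format' for the real ones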
Thanks.

There is some coverage of this in my Blueprint on Virtualisation and Solaris Cluster [Virtualisation and Solaris Cluster|http://wikis.sun.com/display/BluePrints/Using+Solaris+Cluster+and+Sun+Cluster+Geographic+Edition], IIRC. (This paper is due an update, though!)
Hopefully others will chime in with their experiences of the products.
Tim

Similar Messages

  • OES2 Cluster on VMware ESX

    Has anyone installed an OES2 cluster on VmWare ESX successfully.

    Originally Posted by rllangford
    Has anyone installed an OES2 cluster on VmWare ESX successfully.
    We have set up two ESX hosts (ESX 3.5 U4) and have two clustered SLES10 64-bit VMs with OES2 installed. We have two RDMs set up with a split-brain partition (quorum drive). We have no problems setting all of this up, and when it works it works well. What we have problems with is when we do maintenance on the ESX hosts.
    We get the dreaded "Unable to power up machine due to a file lock" on the second clustered VM when we power it up after doing maintenance on the ESX host. The first clustered VM is powered up and OK, but we are unable to clear the lock as ESX does not see it. VMware does not support this and Novell seems to point us to VMware. The only way we have cleared this up is by shutting down the ESX host for about 10 minutes and then powering it back up. This clears the lock sometimes. In the most extreme case we had to power down the SAN to release the lock.
    There seems to be nothing specific on the web about our problem, only similar cases. Any help or advice would be appreciated.
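    Not an answer, but a hedged troubleshooting sketch that can sometimes identify which host actually owns the lock before you resort to powering hosts off (the path is a placeholder for the locked file named in the error):
    # on the ESX 3.5 service console of the host reporting the lock
    vmkfstools -D /vmfs/volumes/<datastore>/<vmname>/<vmname>.vmdk
    tail -20 /var/log/vmkernel    # the lock owner's MAC address is logged here after the command above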

  • Solaris Cluster and VMware Disk and SCSI Sharing Questions

    Hello folks,
    Does Solaris Clustering support the following?
    Oracle Solaris 11 Cluster support shared disk using VMware 5.1 RDM?
    Oracle  Solaris 11 Cluster using VMware SCSI Bus Sharing? (although VMware doesn’t support this)
    Best,
    Dave

  • Migrate old NW cluster to VMware and iSCSI

    That's the job to be done; the question is just which way to go.
    The setup today:
    2x single-CPU Xeon servers, both connected
    to 2x Promise 15-HD RAID cabinets over 160 MB/s SCSI.
    This install is approx. 5 years old and is getting slow and full.
    The idea is to buy new, fast 6-core Intel servers and run VMware, which makes
    use of the power and makes our life easier....
    One question I'm wondering about, and have a guess regarding the answer
    to, is:
    iSCSI-based storage for these servers: should it be assigned and
    supplied through VMware, or should the virtual NetWare server running under
    VMware connect directly to the iSCSI storage (see the sketch at the end of this post)?
    Usually, having an additional step on the way would mean overhead, but
    with regard to support for load-balancing NICs, memory, etc., my
    guess is that the opposite could be true here, meaning that the VMware
    box should handle all storage, with NetWare getting its storage through
    VMware. But again, just my guess... any input?
    Another question of course is NetWare vs. Linux OES.
    We have long-time experience with NetWare and find our way around it well;
    the Linux part is still something we really don't feel that at home
    with. The installs of OES on Linux we've done for labs, tests, etc.
    have always felt rather unfinished. One could think that, being from
    the Novell/NetWare camp, the logical step would be Linux/OES, but
    even following the online docs to do a basic test setup works out
    quite badly: too many manual fixes to get stuff working, too much
    hassle getting registration and updates to work....
    Still, going virtual might be a way to make switching to
    OES/Linux easier, since it'll be easier having a backup/image each time one
    tries to update, fix, etc.
    In the end, the needs are basic: one GroupWise server and one
    file server. Going virtual enables us to migrate other
    resources over time....
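    For what it's worth, the "connect the virtual server directly to the iSCSI storage" option is just standard open-iscsi inside a SLES10/OES2 guest; a rough sketch (the IP and IQN below are made up):
    # discover and log in to the target from inside the guest
    iscsiadm -m discovery -t sendtargets -p 192.168.50.10:3260
    iscsiadm -m node -T iqn.2010-11.com.example:gwvol -p 192.168.50.10:3260 --login
    # make the session persistent across reboots
    iscsiadm -m node -T iqn.2010-11.com.example:gwvol -p 192.168.50.10:3260 --op update -n node.startup -v automatic
    The alternative (letting VMware handle the storage) means creating the iSCSI datastore or RDM on the ESX host and presenting it to the guest as an ordinary disk, with no guest-side iSCSI configuration.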

    Thanks for the quick reply Massimo.
    Well, iSCSI or not, there's the other part of the question.
    The time probably IS here to replace NetWare, that much is obvious.
    With the existing setup, we've got working backups;
    with our W2k/03/08 we've got working backups and disaster recovery plans.
    Moving forward, using OES, for us at least, seems a more difficult path,
    while any move from NetWare today would probably give us better
    throughput. Using VMware seems like a manageable solution, since
    updates/upgrades and backups could be done easily. Having an image to
    revert to if an update goes wrong is much easier/faster than a
    re-install each time....
    On Mon, 22 Nov 2010 08:54:15 GMT, Massimo Rosen
    <[email protected]> wrote:
    >Hi,
    >
    >[email protected] wrote:
    >>
    >> That's the job to be done,
    >> question is just; which way to go;
    >>
    >> Setup today is;
    >> 2x Single-CPU XeonServers, both connected
    >> to 2xPromise 15HD/Raidkabinetts over 160mb SCSI.
    >>
    >> This install beeing approx 5 years old and is getting slow and full.
    >
    >Hmmmm....
    >
    >> Idea is to buy new, fast 6core Intel servers, run VmWare which makes
    >> use of the power and make our life easier....
    >> One question Im wondering about and have a guess regarding the answer
    >> to is;
    >>
    >> iSCSI based storage for these servers, should it be assigned and
    >> supplied through VmWare or should the virtual NWserver running under
    >> VmWare connect directly to the iSCSI storage ?
    >
    >If your current setup is slow, you shouldn't be using iSCSI at all. It
    >won't be much faster, and iSCSI is becoming increasingly stale
    >currently, unless you have 10GBE. And even then it isn't clear if iSCSI
    >over 10GBE is really much faster than when using 1GB. TCP/IP needs a lot
    >of tuning to achieve that speed.
    >
    >>
    >> In the end, the needs are basic, one Groupwise server and one
    >> FileServer. Going virtual enables us to over time migrate other
    >> resources to....
    >
    >I would *NEVER* put a Groupwise Volume into a VMDK. That said, my
    >suggestion would be OES2, and a RDM at the very least for Groupwise.
    >
    >CU,

  • Getting error starting a VIP in a 3-node RAC cluster on VMware

    hi
    Can someone please help me figure out why VIPCA is failing to start the VIP on RAC node 3? It gives the errors CRS-1006 and CRS-0215 (no more members). The network configuration is:
    /etc/hosts
    127.0.0.1 localhost.localdomain localhost
    #Public IP
    192.168.2.131 rac1.sun.com rac1
    192.168.2.132 rac2.sun.com rac2
    192.168.2.133 rac3.sun.com rac3
    #Private IP
    10.10.10.31 rac1-priv rac1-priv
    10.10.10.32 rac2-priv rac2-priv
    10.10.10.33 rac3-priv rac3-priv
    #Virtual IP
    192.168.2.131 rac1-vip.sun.com rac1-vip
    192.168.2.132 rac2-vip.sun.com rac2-vip
    192.168.2.133 rac3-vip.sun.com rac3-vip
    /etc/sysconfig/network
    NETWORKING=yes
    HOSTNAME=rac1.sun.com
    GATEWAY=192.168.2.1
    Thanks in Advance

    You need to use other, new IPs for the VIPs: in your /etc/hosts the virtual IPs are identical to the public IPs, which is why VIPCA cannot bring the VIPs online.
    Please change the VIP IPs and try again.
    192.168.2.131 rac1-vip.sun.com rac1-vip
    192.168.2.132 rac2-vip.sun.com rac2-vip
    192.168.2.133 rac3-vip.sun.com rac3-vip
    Change these to other IPs that are not used by any machine.
    Sample /etc/hosts file (a two-node example):
    127.0.0.1 localhost.localdomain localhost
    # Public
    10.1.10.201 rac1.localdomain rac1
    10.1.10.202 rac2.localdomain rac2
    #Private
    10.1.9.201 rac1-priv.localdomain rac1-priv
    10.1.9.202 rac2-priv.localdomain rac2-priv
    #Virtual
    10.1.10.203 rac1-vip.localdomain rac1-vip
    10.1.10.204 rac2-vip.localdomain rac2-vip
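    For the original three-node layout, a corrected /etc/hosts could look like this (the .141-.143 VIP addresses are only placeholders; any unused addresses in the public subnet will do):
    127.0.0.1 localhost.localdomain localhost
    #Public IP
    192.168.2.131 rac1.sun.com rac1
    192.168.2.132 rac2.sun.com rac2
    192.168.2.133 rac3.sun.com rac3
    #Private IP
    10.10.10.31 rac1-priv rac1-priv
    10.10.10.32 rac2-priv rac2-priv
    10.10.10.33 rac3-priv rac3-priv
    #Virtual IP (must be different from the public IPs)
    192.168.2.141 rac1-vip.sun.com rac1-vip
    192.168.2.142 rac2-vip.sun.com rac2-vip
    192.168.2.143 rac3-vip.sun.com rac3-vip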

  • Solaris Cluster 3.3 on VMware ESX 4.1

    Hi there,
    I am trying to set up Solaris Cluster 3.3 on VMware ESX 4.1.
    My first question is: has anyone out there set up Solaris Cluster on VMware across boxes?
    My tools:
    Solaris 10 U9 x64
    Solaris Cluster 3.3
    Vmware ESX 4.1
    HP DL 380 G7
    HP P2000 Fibre Channel Storage
    When I try to set up the cluster, just next-next-next, it completes successfully. It reboots the second node first and then itself.
    After the second node comes up to the login screen, ping stops after 5 seconds. Same on either node!
    I am trying to understand why it does that. I tried every possibility to complete this job. I set up the quorum as an RDM from VMware, so Solaris has direct access to the quorum disk now.
    I am new to Solaris and I am having the errors below. If someone would like to help me it would be much appreciated!
    Please explain in more detail, I am a newbie in Solaris :) Thanks!
    I especially need help with the error: /proc fails to mount periodically during reboots.
    Here are the error messages. Is there anyone out there who has set up Solaris Cluster on ESX 4.1?
    * cluster check (ver 1.0)
    Report Date: 2011.02.28 at 16.04.46 EET
    2011.02.28 at 14.04.46 GMT
    Command run on host:
    39bc6e2d- sun1
    Checks run on nodes:
    sun1
    Unique Checks: 5
    ===========================================================================
    * Summary of Single Node Check Results for sun1
    ===========================================================================
    Checks Considered: 5
    Results by Status
    Violated : 0
    Insufficient Data : 0
    Execution Error : 0
    Unknown Status : 0
    Information Only : 0
    Not Applicable : 2
    Passed : 3
    Violations by Severity
    Critical : 0
    High : 0
    Moderate : 0
    Low : 0
    * Details for 2 Not Applicable Checks on sun1
    * Check ID: S6708606 ***
    * Severity: Moderate
    * Problem Statement: Multiple network interfaces on a single subnet have the same MAC address.
    * Applicability: Scan output of '/usr/sbin/ifconfig -a' for more than one interface with an 'ether' line. Check does not apply if zero or only one ether line.
    * Check ID: S6708496 ***
    * Severity: Moderate
    * Problem Statement: Cluster node (3.1 or later) OpenBoot Prom (OBP) has local-mac-address? variable set to 'false'.
    * Applicability: Applicable to SPARC architecture only.
    * Details for 3 Passed Checks on sun1
    * Check ID: S6708605 ***
    * Severity: Critical
    * Problem Statement: The /dev/rmt directory is missing.
    * Check ID: S6708638 ***
    * Severity: Moderate
    * Problem Statement: Node has insufficient physical memory.
    * Check ID: S6708642 ***
    * Severity: Critical
    * Problem Statement: /proc fails to mount periodically during reboots.
    ===========================================================================
    * End of Report 2011.02.28 at 16.04.46 EET
    ===========================================================================
    Edited by: user13603929 on 28-Feb-2011 22:22
    Edited by: user13603929 on 28-Feb-2011 22:24
    Note: Please ignore the memory error; I have installed 5 GB of memory and it says it requires a minimum of 1 GB! I think it is a bug!
    Edited by: user13603929 on 28-Feb-2011 22:25

    @TimRead
    Hi, thanks for the reply.
    I have already followed the steps in your links, but no joy on this.
    What I noticed here is that the cluster check seems to be buggy, because I tried to install Cluster 3.3 on physical hardware and it gave me the exact same error messages! Interesting, isn't it?
    Please see the errors below, which I got both on top of VMware and on a physical-hardware Solaris installation:
    ERROR1:
    Comment: I have tried installing different amounts of memory. It keeps saying that silly error.
    problem_statement : *Node has insufficient physical memory.
    <analysis>5120 MB of memory is installed on this node.The current release of Solaris Cluster requires a minimum of 1024 MB of physical memory in each node. Additional memory required for various Data Services.</analysis>
    <recommendations>Add enough memory to this node to bring its physical memory up to the minimum required level.
    ERROR2
    Comment: Despite the /dev/rmt directory being there, I got the error below on cluster check
    <problem_statement>The /dev/rmt directory is missing.
    <analysis>The /dev/rmt directory is missing on this Solaris Cluster node. The current implementation of scdidadm(1M) relies on the existence of /dev/rmt to successfully execute 'scdidadm -r'. The /dev/rmt directory is created by Solaris regardless of the existence of the actual underlying devices. The expectation is that the user will never delete this directory. During a reconfiguration reboot to add new disk devices, if /dev/rmt is missing scdidadm will not create the new devices and will exit with the following error: 'ERR in discover_paths : Cannot walk /dev/rmt' The absence of /dev/rmt might prevent a failover to this node and result in a cluster outage. See BugIDs 4368956 and 4783135 for more details.</analysis>
    ERROR3
    Comment: All NICs have different MAC addresses though, and I have also done what it suggests. No joy here as well!
    <problem_statement>Cluster node (3.1 or later) OpenBoot Prom (OBP) has local-mac-address? variable set to 'false'.
    <analysis>The local-mac-address? variable must be set to 'true.' Proper operation of the public networks depends on each interface having a different MAC address.</analysis>
    <recommendations>Change the local-mac-address? variable to true: 1) From the OBP (ok> prompt): ok> setenv local-mac-address? true ok> reset 2) Or as root: # /usr/sbin/eeprom local-mac-address?=true # init 0 ok> reset</recommendations>
    ERROR4
    Comment: No comment on this; I have done what it says, no joy...
    <problem_statement>/proc fails to mount periodically during reboots.
    <analysis>Something is trying to access /proc before it is normally mounted during the boot process. This can cause /proc not to mount. If /proc isn't mounted, some Solaris Cluster daemons might fail on startup, which can cause the node to panic. The following lines were found:</analysis>
    Thanks!
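    For reference, a couple of quick sanity checks for ERROR3 and ERROR4 on a Solaris 10 x86 node (just a sketch, not a fix):
    grep -w /proc /etc/vfstab         # /proc should have its normal proc entry here
    /usr/sbin/mount -p | grep proc    # confirm /proc is actually mounted right now
    ifconfig -a | grep ether          # run as root; each public NIC should show a unique MAC
    Note that the local-mac-address? OBP check (ERROR3) is reported as applicable to SPARC only, so on x86 under VMware it can safely be ignored.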

  • Unicast Flooding on Nexus 5020 with ESXi 5 vMotion

    We recently began testing VMware ESXi 5.0 on our production network. After observing some heavy discards (3-10 million at times) on the 10G uplinks FROM our core 6509s TO the Nexus 5Ks, we began some investigation. We started by capturing traffic on vPCs from the Nexus 5K to the 6509s. We found a tremendous amount of unicast vMotion traffic transmitting from the 6509s to the Nexus 5Ks. Unicast vMotion traffic should never touch the 6509 core switches since it is layer-two traffic. We found that our problem was twofold.
    Problem number one was the fact that on the ESXi 5 test cluster we had vMotion and the management VM kernel NICs in the same subnet. This is a known issue in which ESXi replies back using the management virtual MAC address instead of the vMotion virtual MAC address. Therefore the switch never learns the vMotion virtual MAC address, thus flooding all of the vMotion traffic. We fixed problem number one by creating a new subnet for the vMotion VM kernel NICs, and we also created a new isolated VLAN across the Nexus 5Ks that does not extend to the cores, modifying the vDistributed switch port group as necessary.
    To verify that the vMotion traffic was no longer flooding, we captured traffic locally on the N5K, not using SPAN but simply eavesdropping on the vMotion VLAN as an access port. The testing procedure involved watching the CAM table on the 5K, waiting for the vMotion MAC addresses to age out, then starting a vMotion from one host to another. Doing this we were able to consistently capture flooded vMotion traffic on our spectator host doing the captures. The difference from problem one was that the flooding did not include all of the vMotion conversation as before, but when vMotioning 1-2 servers we saw anywhere from 10 ms to 1 full second of flooding, then it would stop. The amount of flooding varied but greatly depended on whether the traffic traversed the vPC between the 5Ks or not. We were able to make the flooding much worse by forcing the traffic across the vPC between the N5Ks.
    Has anyone else observed this behavior with N5Ks or VMware on another switching platform?
    We were able to eliminate the vMotion flooding by pinging both vMotion hosts before beginning the vMotion. It seems that if VMware would set up a ping to verify connectivity between the vMotion hosts before starting the vMotion, it would eliminate the flooding.
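    For reference, that workaround is just a vmkernel ping from each host to its peer's vMotion address right before the migration, e.g. on ESXi 5.x (the vmk interface name and the IP are placeholders):
    vmkping -I vmk1 10.10.20.12    # source the ping from the vMotion vmkernel port so the switches relearn its MAC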
    A brief description of the network..
    Two 6509 core switches with layer 2 down to two Nexus 5020 running NX-OS version 5.0(3)N2(2b) using 2232PP FEX for top-of-rack.  For testing purposes each ESXi host is dual-homed with one 10G link (CNA) to each N5K through the FEX.  VMware is using vDistributed switch with a test port-group defined for the ESXi 5 boxes.
    For curiosity's sake we also looked at packet captures from ESX 4.1, where we saw similar unicast flooding, although it was nowhere near as many packets as in ESXi 5.
    We have a case open with TAC and VMware to track down the issue but were curious if anyone else has observed similar behavior or had any thoughts.
    Thanks
    Cody

    Essentially the fix was to (a) turn off MAC aging on the vMotion VLAN on the 5K, (b) remove the L3 addressing from the vMotion VLAN by not extending it to the 6K, and for good measure we (c) dedicated 2x10G ports per server just for multi-NIC vMotion. These three measures did the trick.

  • Moving ESX hosts to a new cluster?

    I have 6 ESX 3.5.0 systems currently divided into three 2-node clusters.
    Cluster 1 is HP DL380 G6 with Xeon E5540 CPU's and is not licensed for vMotion
    Cluster 2 is HP DL380 G7 with Xeon X5650 CPU's and is licensed for vMotion
    Cluster 3 is HP DL380 G6 with Xeon E5520 CPU's and is licensed for vMotion
    All nodes are live and in production.  Recently while enabling hyperthreading on nodes of cluster 2, the manual failovers didn't go well and we had a lot of downtime while systems fought for resources.  HA has admission control enabled and 1 host failover capacity so I'm not sure why we had such extreme problems.  Based on that issue, I'd like to create a larger cluster if possible but that brings a series of questions:
    1)  What nodes are hardware compatible to create a larger cluster?  It looks like the nodes from 1 and 3 can be combined into one four node cluster but the CPU's are too different in cluster 2 to join them.  Is that correct?
    2)  What implications are there of moving nodes to a new cluster?  These are running VM's in a production environment, unplanned downtime is not tolerated (but planned is usually OK).  It looks like I can just put a node into maintenance mode, remove it from a cluster, and then re-add it to a new cluster?  If I license the nodes on cluster 1 for vMotion before this work, can that be done with no guest downtime?
    Any gotchas?
    Everything is scheduled to be replaced later this year (new servers, VMware, and SAN) but with the failovers not working smoothly on cluster 2, I'd like to get this fixed as much as possible as soon as possible, with of course as little cash outlay as possible.
    Thanks for any insight you can provide!

    Hi,
    As you have new servers, I recommend you:
    - Install OEL 5 on these new servers (OEL 6 is not supported)
    - Install Oracle Clusterware 11.2.0.3
    - Install Oracle Database 11.1.0.7
    - Configure temporary virtual IPs (at the moment of migration you can reconfigure these virtual IPs with the same ones used on the old cluster)
    - Create a new database with the same name as the actual PROD env using DBCA
    - Configure services and network
    - Then remove only the database files (spfile, controlfiles, datafiles, online logs) from this newly created database
    At the moment of migration of the servers:
    - Shut down the old cluster
    - Configure the virtual IPs of the old cluster on the new cluster / you will also need to reconfigure the listeners and tnsnames.ora
    - Map the LUNs of the database files on the new cluster and mount your filesystem
    - Start the database and all dependent resources (as sketched below).
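    A minimal sketch of that last step, assuming the new database is registered as PROD (the name is a placeholder):
    srvctl start database -d PROD      # starts the instances and their dependent resources
    srvctl status database -d PROD     # confirm every instance came up
    crsctl check crs                   # confirm the clusterware stack itself is healthy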
    I recommend you use Grid Infrastructure (Clusterware) 11.2, because if in the future you want to upgrade your database to 11.2, you will only need to install Oracle RAC 11.2 and upgrade your database from 11.1.
    If you install Clusterware 11.1 you will need to upgrade both (Clusterware and RAC). That is much more work, and you can run into much more trouble upgrading Clusterware than performing a fresh installation.
    Oracle Clusterware 11.2 supports Oracle RAC 9i/10g/11g.
    If any problem occurs during these steps at the moment of migration, you will still have the old cluster.
    Open an SR with MOS to validate this new structure and migration. (This will help with possible problems that may occur.)
    Regards,
    Levi Pereira
    Edited by: Levi Pereira on Jan 25, 2012 1:09 PM

  • Vmware clusters and L2 Spanning

    Server virtualization and cloud computing are the driving forces behind the need to span a network's L2 domain. This is where TRILL comes into play. You get horizontal scale-out in a flattened network, with multiple L2 forwarding paths powered by TRILL.
    The question I have is just how far across the data center the L2 domain really needs to span. Let's say this is a VMware environment. The latest version of vSphere allows up to 32 hosts per cluster. Furthermore, it is only within that 32-host cluster that a vMotion or DRS or FT action can take place. Theoretically, you can configure the VMware cluster to span the entire server farm, end-to-end. But in reality, how are VMware clusters configured? I would think that the hosts in a cluster are largely configured in adjacent cabinets, not across the entire data center. So, do you REALLY need to span the L2 domain across the entire data center server farm?

    Hi Ayodeji,
    Is this type of deployment supported?
    Also, an off-topic question: can I have a top-level domain (example.com) and a sub-domain (staging.example.com) form a peer relationship in IM & Presence servers?
    As per Cisco, two sub-domains that are part of the same top-level domain can form a peer relationship.
    Thanks.

  • DSC, SQL Server 2012 Enterprise sp2 x64, SQL Server Failover Cluster Install not succeeding

    Summary: DSC fails to fully install the SQL Server 2012 failover cluster, but the identical code snippet below, run in PowerShell ISE with administrator credentials, works perfectly, as does running the SQL Server install interface.
    In order to develop DSC configurations, I have set up a Windows Server 2012 R2 failover cluster in VMware Workstation v10 consisting of 3 nodes.  All have the same Windows Server 2012 version and have been fully patched via Microsoft Updates. 
    The cluster properly fails over on command and the cluster validates. PowerShell 4.0 is being used, as installed in Windows.
    PDC
    Node1
    Node2
    The DSC script builds up the parameters to setup.exe for SQL Server.  Here is the cmd that gets built...
    $cmd2 = "C:\SOFTWARE\SQL\Setup.exe /Q /ACTION=InstallFailoverCluster /INSTANCENAME=MSSQLSERVER /INSTANCEID=MSSQLSERVER /IACCEPTSQLSERVERLICENSETERMS /UpdateEnabled=false /IndicateProgress=false /FEATURES=SQLEngine,FullText,SSMS,ADV_SSMS,BIDS,IS,BC,CONN,BOL /SECURITYMODE=SQL /SAPWD=password#1 /SQLSVCACCOUNT=SAASLAB1\sql_services /SQLSVCPASSWORD=password#1 /SQLSYSADMINACCOUNTS=`"SAASLAB1\sql_admin`" `"SAASLAB1\sql_services`" `"SAASLAB1\cubara01`" /AGTSVCACCOUNT=SAASLAB1\sql_services /AGTSVCPASSWORD=password#1 /ISSVCACCOUNT=SAASLAB1\sql_services /ISSVCPASSWORD=password#1 /ISSVCSTARTUPTYPE=Automatic /FAILOVERCLUSTERDISKS=MountRoot /FAILOVERCLUSTERGROUP='SQL Server (MSSQLSERVER)' /FAILOVERCLUSTERNETWORKNAME=SQLClusterLab1 /FAILOVERCLUSTERIPADDRESSES=`"IPv4;192.168.100.15;LAN;255.255.255.0`" /INSTALLSQLDATADIR=M:\SAN\SQLData\MSSQLSERVER /SQLUSERDBDIR=M:\SAN\SQLData\MSSQLSERVER /SQLUSERDBLOGDIR=M:\SAN\SQLLogs\MSSQLSERVER /SQLTEMPDBDIR=M:\SAN\SQLTempDB\MSSQLSERVER /SQLTEMPDBLOGDIR=M:\SAN\SQLTempDB\MSSQLSERVER /SQLBACKUPDIR=M:\SAN\Backups\MSSQLSERVER > C:\Logs\sqlInstall-log.txt "
    Invoke-Expression $cmd2
    When I run this specific command in PowerShell ISE running as administrator, logged in as a domain account that is in Node1's administrators group and has domain administrative authority, it works perfectly fine and sets up the initial node properly.
    When I use the EXACT SAME code above pasted into my custom DSC resource, as a test with a known successful install, run with the same user as above, it does NOT completely install the cluster properly. It still installs 17 applications related to SQL Server and seems to properly configure everything except the cluster. The Failover Cluster Manager shows that the SQL Server role will not come online and the SQL Server Agent role is not created.
    The code is run on Node1 so the setup folder is local to Node1.
    The ConfigurationFile.ini files for the two types of installs are identical.
    Summary.txt does show issues:
    Feature:                       Database Engine Services
      Status:                        Failed: see logs for details
      Reason for failure:            An error occurred during the setup process of the feature.
      Next Step:                     Use the following information to resolve the error, uninstall this feature, and then run the setup process again.
      Component name:                SQL Server Database Engine Services Instance Features
      Component error code:          0x86D8003A
      Error description:             The cluster resource 'SQL Server' could not be brought online.  Error: There was a failure to call cluster code from a provider. Exception message: Generic
    failure . Status code: 5023. Description: The group or resource is not in the correct state to perform the requested operation.  .
    It feels like this is a security issue with DSC or an issue with the SQL Server setup, but please note I have granted the administrators group and domain administrators authority. The nodes were built with the same login. The Windows firewall is completely disabled.
    Please let me know if any more detail is required.

    Hi Lydia,
    Thanks for your interest and help.
    I tried "Option 3 (recommended)" and that did not help.
    The issue I encounter with the failover cluster only occurs when trying to install with DSC!
    Using the SQL Server install wizard, the command prompt, and even PowerShell invoking setup.exe all work perfectly.
    So, to reiterate, this issue only occurs while running in the context of DSC.
    I am using the same domain login with domain admin rights, and locally the account is in the Administrators group. The SQL Server service account also has Administrators group credentials.

  • Should one use MPIO and/or CSV in a Windows 2012 R2 guest cluster?

    Should one use MPIO and/or CSV in a Windows 2012 R2 guest cluster using VMware ESXi 5.5-presented fibre channel LUN RDMs?
    If MPIO were implemented, is there a preference for the hardware manufacturer's DSM vs. the Microsoft DSM in a guest cluster?
    What partition size/offset is recommended for the MSR partition (currently set to 1000 MB)? Unfortunately we are seeing a storage validation error with a failing block write at block 2048 (which in turn may be related to the VMware ESXi 5.5 disk partition layout).
    The current setup works without using MPIO (the question is whether it would help overcome the current failing persistent SCSI-3 reservation warning).
    What would be the benefit of using CSV, if any, in a guest cluster? The LUNs in scope would eventually hold SQL data and log files.
    Thanks for your input already.
    Sassan Karai

    Hi,
    Regardless of the type of shared storage, a failover cluster has to use shared storage: it lets a failed node's workload and data be taken over by the other nodes, which is how the failover cluster achieves high availability.
    From the error you described, you may have chosen a storage configuration that VMware does not support with failover clustering; please refer to the following official VMware KB and reconfirm that your topology design is supported.
    Third party KB:
    VMware vSphere support for Microsoft clustering solutions on VMware products
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1037959
    The related KB:
    Failover Clustering Hardware Requirements and Storage Options
    http://technet.microsoft.com/zh-cn/library/jj612869.aspx
    More information:
    Validate Storage Spaces Persistent Reservation Test Results with Warning
    http://blogs.msdn.com/b/clustering/archive/2013/05/24/10421247.aspx
    Understanding Cluster Validation Tests: Storage
    http://technet.microsoft.com/en-us/library/cc771259.aspx#PersistentReservation
    Shared storage for Windows Failover Cluster with MPIO
    http://blogs.technet.com/b/storageserver/archive/2011/05/31/shared-storage-for-windows-failover-cluster-with-mpio.aspx
    Hope this helps.

  • Adding a new node on VMware

    hello
    I created a 2-node cluster setup on VMware Server on my laptop. I use Linux 4 (as the VM OS), Oracle 10.2.0.2, and Windows Server 2003 as my laptop OS. Is it now possible to add a new node to this setup?
    thanks

    user629298 wrote:
    thanks sir
    sir any doc related to adding node on vmware.
    thanks
    VMware is nothing but the virtual setup that you have; adding/deleting nodes from the cluster is not going to be a different procedure. You would be required to set up a third machine and add it to the current cluster. Other than that, you can remove one of the nodes from your 2-node cluster, making it a single-node cluster, and then reattach it. This would give you a good idea about both things.
    HTH
    Aman....
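    For reference, the add-node flow on a 10.2 cluster is driven by addNode.sh from the OUI directory of each home; a rough sketch, once the third VM has the OS, users, and shared storage prepared (the home paths are placeholders):
    # run from an existing node
    cd $CRS_HOME/oui/bin && ./addNode.sh        # extend the clusterware to the new node
    cd $ORACLE_HOME/oui/bin && ./addNode.sh     # then extend the RDBMS home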

  • Regarding Building of Sun Cluster

    I want to learn Sun Cluster in VMware, which is installed on top of Windows XP 32-bit. I want to practice for the Sun Cluster certification. Unfortunately I don't have SPARC. Is it possible to install a two-node cluster in VMware? I have shared iSCSI storage (Openfiler). Could the admins please guide me on which Solaris version, Sun Cluster version, and other prerequisites are needed to proceed with a VMware Sun Cluster configuration?
    Hardware: x86, Intel Dual Core 2.5 GHz (not 64-bit emulated), 4 GB RAM

    VirtualBox allows you to run a 64-bit Solaris even on a 32-bit operating system, as long as your hardware is 64-bit. (Yes, you need 64-bit HARDWARE, and the starter of this thread clearly stated that his hardware is NOT 64-bit!)
    See:
    http://download.virtualbox.org/virtualbox/2.2.4/UserManual.pdf
    Pages 17-18...
    BUT: you can officially use the OpenSolaris versions.
    Yes, they do have 32-bit versions!
    With OpenSolaris 2009.06 it is as easy as this:
    Just install OpenSolaris 2009.06 and add the packages from pkg.sun.com/opensolaris/ha-cluster
    See:
    http://docs.sun.com/app/docs/prod/open.ha.cluster?l=en&a=view
    and more specifically:
    http://docs.sun.com/app/docs/doc/820-7821/gikpv?l=en&a=view
    It's even easier than Solaris 10! There's no real technical difference between SC 3.2u3 on Solaris 10 and OHAC for OpenSolaris. So I highly recommend using OHAC on OpenSolaris!
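    For reference, adding the HA Cluster packages mentioned above might look roughly like this on OpenSolaris 2009.06 (the publisher name and the ha-cluster-full package name are my assumption, so check the repository for the exact names):
    pkg set-publisher -O http://pkg.sun.com/opensolaris/ha-cluster ha-cluster
    pkg install ha-cluster-full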
    Matthias

  • SC 3.1u4 on VMWARE

    Hi, as I understand from previous postings, there is no support for SC 3.1u4 on 32-bit x86. Thus I wonder whether there is any experience running SC 3.1u4 on top of Solaris 10 U1 in a 64-bit VMware virtual machine. This would then be a perfect environment for testing.
    Thanks, Marc

    I had limited success with a Solaris 9 09/04 x86-based 2-node cluster (with the Oracle server agent running)
    under VMware Server 1.0, where SCSI-2 is supported for quorum. But bear in mind, it can never be
    used for anything serious, absolutely not for production, for three reasons:
    (1) clock delay is a constant annoyance, and even generates pm_tick delays that bring the cluster down in a panic.
    (2) the pcn0 public interface automatically goes into the sc_ipmp0 group and constantly fails; you have to tweak /etc/default/mpathd to survive (see the sketch below).
    (3) you have to be patient enough to endure the pain and suffering of numerous reboots to get the cluster running,
    and a dozen times more effort to fail over the Oracle server successfully.
    If you can gracefully resolve those three issues, you will be an asset in building Solaris Cluster under VMware.
    Your experience will be valuable to share.
    P.S. I have not tried VMware Workstation 5.0. But the 4.0 (likely 4.5.2?) Workstation does not support SCSI disks.
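    Regarding point (2), the /etc/default/mpathd tweak essentially relaxes in.mpathd's failure detection so the virtual NIC is not declared dead all the time; a sketch (the values are only an illustration):
    # /etc/default/mpathd
    FAILURE_DETECTION_TIME=120000            # default is 10000 ms; slow the probes down under VMware
    TRACK_INTERFACES_ONLY_WITH_GROUPS=yes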

  • VNIC failover policy for vMotion network

    Hi everyone,
    I'm wondering what the recommended design/configuration is for a VMware vMotion network on UCS-B Series v2.0?
    1. Create a single vNIC with A-B failover for the vSwitch
    2. Create 2 vNICs with A-B failover and make them both active for the vSwitch
    3. Create 2 vNICs, one on A and one on B, both with no failover, and make them both active for the vSwitch
    4. Create 2 vNICs, one on A and one on B, both with no failover, and make one active and one standby for the vSwitch
    The intention is to keep the vMotion traffic from leaving the FIs, whether within a chassis or across chassis.
    Which of the above, if any, would satisfy that?
    What are some folks experiences and/or best practices in the real world?
    Thanks in advance for any responses.
    -Brian

    Hello Brian,
    Avoid fabric failover (A-B) when using ESXi teaming, and choose option #4.
    old discussion on this topic
    https://supportforums.cisco.com/thread/2061189
    Padma
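    For reference, option #4 on the ESXi side (one active and one standby uplink for the vMotion port group) could look like this with a standard vSwitch on ESXi 5.x (the port group and vmnic names are placeholders):
    esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion --active-uplinks=vmnic2 --standby-uplinks=vmnic3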
