OC4J Instances in Cold Failover Cluster
I'm running OAS 10.1.2.2.0 on a Windows 2003 Server box in a cold failover clustered environment and was wondering: is it recommended to have each web application deployed in its own separate instance? For example, webapp1 deployed to instance1 and webapp2 deployed to instance2? Or would it be better to have multiple web applications deployed to one instance?
Thanks for any thoughts!
In reply to user7575753:
I can say your configuration is OK for single-instance failover. If you want to cluster and load balance, OAS has managed and non-managed clusters.
For a managed cluster, you must set up either Oracle Web Cache or an F5 BIG-IP in front. A non-managed cluster, by contrast, requires nothing to be shared.
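If you do go with one application per OC4J instance, the instances can be created and the applications deployed from the command line. This is only a sketch against the OAS 10.1.2 dcmctl/opmnctl tooling; the instance and application names are made up for illustration, so check the exact syntax against your own installation:

```
dcmctl createComponent -ct oc4j -co OC4J_webapp1
dcmctl deployApplication -co OC4J_webapp1 -a webapp1 -f webapp1.ear
opmnctl startproc process-type=OC4J_webapp1
```

Repeating the same pattern for OC4J_webapp2 keeps each application's JVM, logs, and restarts isolated from the other.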
Similar Messages
-
SSAS 2012 (SP2) - Connecting to a Named Instance in a Failover Cluster
I posted this question some months ago and never got a resolution...still having the same problem. (http://social.msdn.microsoft.com/Forums/sqlserver/en-US/4178ba62-87e2-4672-a4ef-acd970ac1011/ssas-2012-sp1-connecting-to-a-named-instance-in-a-failover-cluster?forum=sqlanalysisservices)
I have a 3 node failover cluster installation (active-passive-passive) configured as follows:
Node1: DB Engine 1 (named instance DB01)
Node2: DB Engine 2 (named instance DB02)
Node3: Analysis Services (named instance DBAS)
Obviously, the node indicated is merely the default active node for each service, with each service able to fail from node to node as required.
Strangely, when I attempt to connect to the SSAS node using the cluster NetBIOS "alias" (I'm not sure what else it would be called, so I apologize if I am mixing terminology), I am only able to do so by specifying the alias _without_ the
named instance. If I issue a connection request using an external program, or even SSMS, with Node3\DBAS or Node3.domain\DBAS, it appears that the SQL Server Browser is offering up a bogus TCP port for the named instance (in my case TCP/58554), when
in reality the SSAS service is running on TCP/2383 (confirmed with netstat), which, if I understand correctly after much, much reading on the subject, is the only port that can be used in a failover cluster. In any case, I'm puzzled beyond words. As I think
through it, I believe I've seen this issue in the past, but never worried about it since it wasn't necessary to specify the named instance when I had SSAS requirements. It's only a showstopper now because I'm finalizing my implementation of SCVMM/SCOM 2012
R2, and for some strange reason the PRO configuration in VMM insists on being given a named instance...
Thank you much for reading. I appreciate any help to get this resolved.
POSSIBLY NOT RELEVANT...?
I've properly configured the SPNs for the SSAS service (MSOLAPSvc.3) and the SQL Browser (MSOLAPDisco.3), with the former mapped to the SSAS service account and the latter to the cluster "alias" (since it runs as "NT AUTHORITY\LOCALSERVICE"
as is customary) and have permitted delegation on the service and machine accounts as required. So, I'm not getting any kerberos issues with the service...any more, that is... ;) I'm not sure that's important, but I wanted to be forthcoming with details to
help solve the issue.
When connecting to SSAS in a cluster, you do not specify an instance name. In your case, you would use the network name (or IP address) of the SSAS cluster resource to connect.
See:
http://msdn.microsoft.com/en-us/library/dn141153.aspx
For servers deployed in a failover cluster, connect using the network name of the SSAS cluster. This name is specified during SQL Server setup, as
SQL Server Network Name. Note that if you installed SSAS as a named instance onto a Windows Server Failover Cluster (WSFC), you never add the instance name on the connection. This practice is unique to SSAS; in contrast, a named
instance of a clustered relational database engine does include the instance name. For example, if you installed both SSAS and the database engine as named instance (Contoso-Accounting) with a SQL Server Network Name of SQL-CLU, you would connect to SSAS using
"SQL-CLU" and to the database engine as "SQL-CLU\Contoso-Accounting". See
How to Cluster SQL Server Analysis Services for more information and examples. -
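The naming rule from the quoted documentation can be condensed into a tiny helper. This is a hypothetical function for illustration, not a Microsoft API; the `SQL-CLU` names come from the example above:

```python
def connection_target(network_name, instance_name, service):
    """Build the server name for a connection string to a clustered
    SQL Server service, per the rule quoted above."""
    if service == "ssas":
        # A clustered SSAS named instance always listens on the SQL Server
        # Network Name alone (fixed port 2383); never append the instance name.
        return network_name
    # A clustered relational engine named instance keeps the instance suffix.
    if instance_name:
        return f"{network_name}\\{instance_name}"
    return network_name

print(connection_target("SQL-CLU", "Contoso-Accounting", "ssas"))    # SQL-CLU
print(connection_target("SQL-CLU", "Contoso-Accounting", "engine"))  # SQL-CLU\Contoso-Accounting
```

Which is exactly why `Node3\DBAS` fails while the bare network name works.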
Oracle 10gR2 Installation on Cold Failover Cluster
Installation document recuired for Oracle 10g R2 on Cold Failover Cluster in Redhat Linux 4 platform.
Message was edited by: user491196
user12974178 wrote:
checking network configuration requirements
check complete: The overall result of this check is : Not Executed
Recommendation: Oracle supports installation on systems with DHCP - assigned public IP addresses.
However , the primary network Interface on the system should be configured with a static IP address
in order for the Oracle Software to function properly. See the Installation Guide for more details on installing the
software on systems configured with DHCP
Could anyone help me in this regard
Thanks in Advance
Your problem is:
"However , the primary network Interface on the system should be configured with a static IP address in order for the Oracle Software to function properly"
Your solution is
"See the Installation Guide for more details on installing the software on systems configured with DHCP"
That book is at http://tahiti.oracle.com and the steps you need are in Chapter 2 of the Installation Guide for Linux.
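As a sketch of what the installation guide asks for on Linux (the hostname and addresses below are placeholders, not from this thread): either give the primary interface a static IP, or at least pin the installation hostname to a fixed address in /etc/hosts so the Oracle software resolves consistently even under DHCP:

```
# /etc/hosts - map the installation hostname to a static address
127.0.0.1    localhost.localdomain localhost
192.0.2.10   dbhost.example.com    dbhost
```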
Now, I could assume this is Oracle 11gR2 and you are using Oracle Enterprise Linux 5 Update 4, but I won't. Assuming things gets me into trouble. -
OracleAS cold failover cluster
I am installing an OracleAS cold failover cluster, but during installation I am getting an error in Database Configuration
Assistant. Can anybody help me?
This is as simple as "two heads think better than one": you have two (or more) points of access to your applications, the processing is split between servers, and if one node fails there are others to back it up. The administration is also centralized: with a good configuration you make a change once and all nodes are notified of it. Note, though, that with a cold failover cluster only 50% of your potential workload is in use; the passive server is like a portrait that is only good to look at and does nothing else until a failover occurs. You don't need benchmarks or notes; it really is as simple as it sounds.
Greetings. -
Oracle Infrastructure in Cold Failover Cluster
Hello,
I have browsed through the Oracle docs for High Availability, but I cannot find any information abourt achieving that CFC HA for OAS Infrastructura using RedHat EL AS and HP Serviceguard...
Can anyone help me?
thanks in advance, and best regards -
10g in a Cold Failover Cluster
Hello,
I have some doubts about configuration of a single instance in a two node cluster for a cold failover solution.
I mean,
1 - in listener.ora I should put the floating-ip hostname?
2 - to create a dbconsole, do I have to use ORACLE_HOSTNAME variable?
thanks in advance
Even though Metalink note 362524.1 says a "failing over" DB Console needs a shared home,
I succeeded in creating a hostname-independent one in a few steps:
1 - Setting ORACLE_HOSTNAME=clustername (and resolving the clustername to the floating IP in /etc/hosts on the server and client machines)
2 - Creating the DB Console with its repository on the active node, and without a repository on the passive one.
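The listener side of this setup can be sketched as follows, assuming the floating hostname is clusternode-vip (the name is illustrative, not from the thread); binding to the floating hostname lets the listener follow the virtual IP after a failover:

```
# listener.ora - bind to the floating hostname, not the physical node name
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = clusternode-vip)(PORT = 1521))
    )
  )
```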
That's all,
Bye -
Using an instance in a failover cluster as a witness server
Firstly, I absolutely hate the idea of doing this with a passion... and then some. In my mind a witness server should have as few (if not zero) dependencies on any other SQL Server HA technology as possible; it should be a standalone SQL Server instance.
However, technically, would you be able to use an instance which is already in a failover cluster as a witness for a mirrored pair of databases?
Yes, you can use a clustered instance as a witness.
https://msdn.microsoft.com/en-IN/library/ms191309.aspx -
Cold failover cluster for repository ?
Hi,
We have Box A (infra and mid) and B(mid),
we want to use Box B for a cold failover cluster
Any suggestion on how to do this?
TIA
A similar question was asked recently in our forum:
http://social.msdn.microsoft.com/Forums/en-US/18239da7-74f2-45a7-b984-15f1b3f27535/biztalk-clustering?forum=biztalkgeneral#4074a082-8459-420f-8e99-8bab19c8fba2
This white paper from Microsoft provides step-by-step guidance on failover clustering for BizTalk 2010.
http://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=2290
Also refer this blog where authors in series of his post guides with the steps required:
Part 1: BizTalk High Availability Server Environment – Preparations
Part 2: BizTalk High Availability Server Environment–Domain Controller Installation
Part 3: BizTalk High Availability Server Environment – SQL & BizTalk Active Directory Accounts
Part 4: BizTalk High Availability Server Environment – Prepping our SQL & BizTalk Failover Clusters
Part 5: BizTalk High Availability Server Environment – SQL Server 2008r2 Failover Cluster
Part 6: BizTalk High Availability Server Environment–BizTalk 2010 Failover Cluster Creation
Part 7: BizTalk High Availability Server Environment – BizTalk 2010 Installation, Configuration and Clustering
Part 8: Adding Network Load Balancing to our High Availability Environment
-
How to deploy oracle 11g under cold failover cluster
Hi,
I am trying to install Oracle 11g in a failover cluster environment. I am trying to find some Oracle documentation about it but am not able to find any. Can anyone help me? I need it badly.
We have two Sun Solaris 10 servers, on which a failover cluster is installed; one is active while one is passive. How can I install Oracle 11g on this failover cluster? Do I have to install Oracle on the shared SAN? What extra configuration do I have to do? Please help.
Also, is it wise to install the 11g warehouse with cold failover, or to use RAC?
regards
Nick
Edited by: Nick Naughty on Feb 1, 2009 4:02 AM
Thanks a lot for the reply.
I am of the same point of view, but the machine I am working on is Sun Solaris 10 with two partitions, finance and accounts, protected by a failover machine with the same two partitions; I think you are right that these are the same shared storage folders, mounted on the active machine as well as on the passive machine.
Now my question is: if I install Oracle 11g on one machine without Oracle Clusterware, will Sun Cluster be able to mount the database in case the primary machine fails?
I have tried hard to get some documentation on this topic but have found nothing so far. Please help.
regards
Nick -
Hi Folks
OEL 5.4 64bit, 11gR2
SCAN
I'm planning to install a two-node cold failover cluster. I'm not sure whether it makes sense to use a SCAN listener or to do it the old-fashioned way (a VIP for each DB, which relocates with the DB in case the node goes down).
what's your opinion about this?
ASM for voting&ocr
I'm also planning to create one diskgroup in asm for 3xvoting and ocr, normal redundancy and 3 failgroups. Do I have to create the diskgroup with the QUORUM Keyword for each failgroup or not? as described here http://download.oracle.com/docs/cd/E11882_01/server.112/e10500/asmdiskgrps.htm#CHDBGAFB. I assume not, as they cannot store ocr files (http://download.oracle.com/docs/cd/E11882_01/server.112/e10592/statements_5008.htm#SQLRF01114).
what is your way to store the ocr and voting files?
is there a 11gR2 Version of this paper available (http://www.oracle.com/technology/products/database/clusterware/pdf/SI_DB_Failover_11g.pdf) ?
thanks
regards oviwanSCAN
I'm planning to install a two node cold failover cluster. I'm not sure whether it makes sense to use a SCAN listener or to do it the old fashion way (for each db a vip, which relocates with the db in the case the node goes down).
1 - You cannot install the Grid Infrastructure without the SCAN, even if you will not use it.
2 - The old fashion you described is still in use, independently of SCAN: you still have a VIP which relocates in case the node goes down.
Where you use the SCAN is when configuring the REMOTE_LISTENER and aliases for clients. For those purposes you can still use the «old fashion» if you don't want the SCAN, but I believe you should use it, because it makes things easier...
To Set Up Cold Failover in an ESP Cluster
Hi,
I created two nodes (node1 and node2) and started them on different machines, node1 on vewin764SA016 (1st machine) and node2 on vewin764SA046 (2nd machine) (I have attached both files).
I created an ESP project which has an input window attached to the XML Input Adapter.
I set up the cluster configuration in the CCR file as follows (I have attached the CCR file also; since I could not attach a .ccr, I changed the extension to .txt):
<?xml version="1.0" encoding="UTF-8"?>
<Configuration xmlns="http://www.sybase.com/esp/project_config/2010/08/">
<Runtime>
<Clusters>
<Cluster name="esp://vewin764SA016:19011" type="remote">
<Auth>user</Auth>
<Username encrypted="false">sybase</Username>
<Password encrypted="false">sybase</Password>
<Rsakeyfile></Rsakeyfile>
<Managers>
<Manager>vewin764SA016:19001</Manager>
<Manager>vewin764SA046:19002</Manager>
</Managers>
</Cluster>
</Clusters>
<Bindings>
<Binding name="Battery_Life_Input_Adapter_window">
<Cluster>esp://vewin764SA016:19011</Cluster>
<Workspace>test</Workspace>
<Project>test</Project>
<BindingName>b1</BindingName>
<RemoteStream>remote1</RemoteStream>
<Output>false</Output>
</Binding>
</Bindings>
<AdaptersPropertySet>
<PropertySet name="Atom Feed Input">
<Property name="URL"></Property>
</PropertySet>
</AdaptersPropertySet>
</Runtime>
<Deployment>
<Project ha="false">
<Options>
<Option name="time-granularity" value="5"/>
<Option name="debug-level" value="4"/>
</Options>
<Instances>
<Instance>
<Failover enable="true">
<FailureInterval>120</FailureInterval>
<FailuresPerInterval>5</FailuresPerInterval>
</Failover>
<Affinities>
<Affinity charge="positive" strength="strong" type="controller" value="node1"/>
<Affinity charge="positive" strength="weak" type="controller" value="node2"/>
</Affinities>
<Affinities/>
</Instance>
</Instances>
</Project>
</Deployment>
</Configuration>
I added the project to node1 using the following command and started the project:
esp_cluster_admin --uri=esp://vewin764SA016:19011 --username=sybase --password=sybase --add_project --project-name=test --workspace-name=test --ccx=test.ccx --ccr=test.ccr
Project started in both machines and both getting data.
As per ESP document: "If cold failover is enabled, a failover occurs when a failed project switches to another server to continue processing."
But in my case project is running in both the nodes and both getting data.
I tried all possible combinations for Affinity attributes charge and strength(positive/negative, strong/weak). But it did not work out.
How can I set up the cluster such that when I add a project, it starts running on one node, and when that node goes down, the second node is brought up and the project runs there?
Thanks
Shashi
Hi Shashi,
Reviewing the configuration of your ccr file, you've got
<Affinities>
<Affinity charge="positive" strength="strong" type="controller" value="node1"/>
<Affinity charge="positive" strength="weak" type="controller" value="node2"/>
</Affinities>
<Affinities/>
Just note that in your setup you've set strong and positive for node1. This means the project will only run on node1, so if node1 is down, the project will not start up on node2. Is that what you are trying to accomplish?
"<Affinities/>" - that fifth line there is the extra one. I don't think it is harming anything, though.
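If the goal is "prefer node1, but fail over to node2", one way to sketch the affinity section is with weak charges only (node names as in your thread; this is an illustrative fragment, not a tested configuration):

```xml
<Affinities>
  <!-- weak positive: prefer node1 when it is available -->
  <Affinity charge="positive" strength="weak" type="controller" value="node1"/>
  <!-- no strong pin anywhere, so the project may start on node2 after a node1 failure -->
</Affinities>
```

A strong positive affinity is a hard pin; only weak affinities express a preference that failover can override.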
I am assuming that you are using Studio to see the that the "Project started in both machines and both getting data".
There is an outstanding New Feature Request CR for Studio to support HA and failover. Studio is really not compatible with HA. Studio makes it look like there are multiple instances of the project running when in fact there is only one instance running. Studio will start getting errors if the project crashes and is failed over to the secondary node. I can’t really recommend using Studio at all when dealing with HA:
CR 732974 - Request for Studio support of HA and failover.
To confirm that there is only one instance of the project running, I look at esp_cluster_admin:
$ esp_cluster_admin --uri esp://bryantpark:19011 --username=sybase
Password:
> get managers
Manager[0]: node1@http://bryantpark.sybase.com:19011
Manager[1]: node2@http://bryantpark.sybase.com:19012
> get controllers
Controller[0]: node1@http://bryantpark.sybase.com:19011
Controller[1]: node2@http://bryantpark.sybase.com:19012
> get projects
=============================
Workspace: test
Project: test
Instance Count: 1
----------- Instance Details -----------
Instance Name: default
Controller Name: node1
Current Status: started-running
Requested Status: started-running
Failure Interval: 120
Failures Per Interval: 5
-------------- Affinities --------------
Affinity Type: CONTROLLER
Affinity Charge: POSITIVE
Affinity Strength: STRONG
Affinity Subject: node1
Affinity Type: CONTROLLER
Affinity Charge: POSITIVE
Affinity Strength: WEAK
Affinity Subject: node2
>
The output from "get projects" shows that there are two active managers and controllers, and that the single instance is running from the controller named node1.
Next, I run the "get project" command, and it shows me the Pid:
> get project test/test
Workspace: test
Project: test
------------------ Instance : 0 ------------------
Instance Id: Id_2000003_1393600620713
Command Host: bryantpark.sybase.com
Command Port: 44905
Gateway Host: bryantpark.sybase.com
Gateway Port: 41162
Sql Port: 47244
SSL Enabled: false
Big Endian: false
Address Size: 8
Date Size: 8
Money Precision: 4
Pid: 20637
Topology Ignored: false
Timer Interval: 5
Active-Active: false
I run these Linux commands that also confirm to me that I have only one instance running:
$ ps -ef | grep esp_server
alices 19963 19961 1 09:45 pts/0 00:00:27 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-node node1.xml --cluster-log-properties node1.log.properties
alices 20243 20241 1 09:59 pts/1 00:00:15 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-node node2.xml --cluster-log-properties node2.log.properties
alices 20637 19963 0 10:16 ? 00:00:00 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-container --cluster-container-id Id_2000003_1393600620713 --log-to 4
alices 20729 18201 0 10:18 pts/2 00:00:00 grep esp_server
From here, I see that pid 20637's parent pid is 19963, which is the pid for node1 server.
Next, I kill the active manager and the project to simulate a catastrophic failure:
$ kill -9 20637 19963
Due to the affinities, no failover happens to node2:
$ ps -ef | grep esp_server
alices 20243 20241 1 09:59 pts/1 00:00:17 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-node node2.xml --cluster-log-properties node2.log.properties
alices 20764 18201 0 10:21 pts/2 00:00:00 grep esp_server
Next, I use a different ccr file that does not have affinities configured.
From esp_cluster_admin, I stop and remove the test/test project and then re-run using the modified ccr that has no affinities.
$ esp_cluster_admin --uri esp://bryantpark:19012\;bryantpark:19011 --username=sybase
Password:
> remove project test/test
[done]
> add project test/test bin/test.ccx test-without-affins.ccr
[done]
> start project test/test
[done]
> get projects
=============================
Workspace: test
Project: test
Instance Count: 1
----------- Instance Details -----------
Instance Name: default
Controller Name: node2
Current Status: started-running
Requested Status: started-running
Failure Interval: 120
Failures Per Interval: 5
This time, the project started on node2.
> get project test/test
Workspace: test
Project: test
------------------ Instance : 0 ------------------
Instance Id: Id_1000003_1393601774031
Command Host: bryantpark.sybase.com
Command Port: 46151
Gateway Host: bryantpark.sybase.com
Gateway Port: 36432
Sql Port: 50482
SSL Enabled: false
Big Endian: false
Address Size: 8
Date Size: 8
Money Precision: 4
Pid: 21422
Topology Ignored: false
Timer Interval: 5
Active-Active: false
>
The pid is 21422.
$ ps -ef | grep esp_server
alices 21213 21211 4 10:33 pts/0 00:00:07 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-node node1.xml --cluster-log-properties node1.log.properties
alices 21282 21280 4 10:33 pts/1 00:00:07 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-node node2.xml --cluster-log-properties node2.log.properties
alices 21422 21282 1 10:36 ? 00:00:00 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-container --cluster-container-id Id_1000003_1393601774031 --log-to 4
alices 21500 18201 0 10:36 pts/2 00:00:00 grep esp_server
Pid 21422's parent pid is 21282, which is the pid for node2 server.
Kill the project and node2:
$ kill -9 21422 21282
$ ps -ef | grep esp_server
alices 21213 21211 4 10:33 pts/0 00:00:08 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-node node1.xml --cluster-log-properties node1.log.properties
alices 21505 21213 12 10:36 ? 00:00:00 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-container --cluster-container-id Id_2_1393601813749 --log-to 4 --clus
alices 21578 18201 0 10:36 pts/2 00:00:00 grep esp_server
The project automatically fails over now to node1.
> get managers
Manager[0]: node1@http://bryantpark.sybase.com:19011
> get controllers
Controller[0]: node1@http://bryantpark.sybase.com:19011
> get projects
=============================
Workspace: test
Project: test
Instance Count: 1
----------- Instance Details -----------
Instance Name: default
Controller Name: node1
Current Status: started-running
Requested Status: started-running
Failure Interval: 120
Failures Per Interval: 5
Thanks,
Alice -
Multiple OC4J instances missing in the Enterprise Manager for a cluster
We have a cluster of 4 OAS instances installed on two Solaris 10 servers. When I log in to Oracle Enterprise Manager, I see only the Administration OC4J instance. We had this issue earlier and it resolved by itself, but this time it has persisted for quite a few days. We need some help.
Thanks.
As I told you, this is normal behavior, so there is no workaround; it should start working again later, but if you don't see any change, tell me...
How did you configure your Cluster?
In a command-line window, run the following on any node of your cluster:
opmnctl @cluster status
It should list all of your components, and it has to work on any of the nodes.
Also check the <topology> tags in the first lines of opmn.xml; if you don't have them on the secondary nodes, the cluster may not be working.
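For reference, the topology section being described sits under the notification-server element of opmn.xml and looks roughly like this (hostnames and ports here are placeholders, so check them against your own install):

```xml
<notification-server>
  <port local="6100" remote="6200" request="6003"/>
  <topology>
    <!-- static discovery list naming every OPMN node in the cluster -->
    <discover list="node1.example.com:6200,node2.example.com:6200"/>
  </topology>
</notification-server>
```

If this list is missing or wrong on a secondary node, that node never learns about the rest of the cluster.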
Regards -
Transactional replication from a failover cluster instance to a SQL Server Express DB
Hello,
I have been poking around on Google trying to understand if there are any gotchas in configuring transactional replication from an instance DB of a failover cluster to a SQL Server Express DB. Also, this client would like to replicate a set of tables between two instance DBs which both reside on nodes of the cluster.
Everything I've read suggests there is no problem using transactional replication on clustered instance as long as you use a shared snapshot folder. I still have some concerns:
1) Should the distributor need to live on a separate instance?
2) What happens in the event of an automatic, or manual failover of a publisher, especially if the distributor does not need to live on a separate instance? I know that when a failover occurs, all jobs in progress are stopped and this seems like a recipe for
inconsistency between the publisher and subscriber.
There is a paramount concern, that this particular client won't have staff on hand to troubleshoot replication if there are problems, hence my hesitancy to implement a solution that relies on it.
Thanks in advance.
1) Should the distributor live on a separate instance?
Answer: It is recommended to configure the distributor on a different server, but it can also be configured on the publisher/subscriber server. (The subscriber is not an option in our case, as it is an Express edition.)
2) What happens in the event of an automatic, or manual failover of a publisher, especially if the distributor does not need to live on a separate instance? I know that when a failover occurs, all jobs in progress are stopped and this seems like a recipe for
inconsistency between the publisher and subscriber. There is a paramount concern, that this particular client won't have staff on hand to troubleshoot replication if there are problems, hence my hesitancy to implement a solution that relies on it.
Answer: If you configure both publisher and distributor on the same server and the SQL instance fails over, data synchronization/replication is suspended until the instance comes back online.
Once the instance is up, all the replication jobs will start again and continue to synchronize data to the subscriber. No manual intervention is required.
Exploring AlwaysOn Failover Cluster Instances in SQL Server 2014 Tecnet Labs
I like the Labs and have found them very useful. I have a problem with Exploring AlwaysOn Failover Cluster Instances in SQL Server 2014: the lab starts but hangs on opening SQLONE, and the manual never loads. I have tried coming out and starting again but get returned to the same failing session. I thought I would mention it here as there is no way to feed back a response; the appraisal only happens once the lab is completed, i.e. there is no apparent link on the lab site. The session ends when the HOL is reported as not responding.
Hi,
I'd try this first:
https://vlabs.holsystems.com/vlabs/SystemRequirements.aspx
If that's no help, try the 'Help and Support' link on this page:
https://vlabs.holsystems.com/vlabs/technet?eng=VLabs&auth=none&src=vlabs&altadd=true&labid=12693#
-
Failover Cluster Instance always on
We have a two-node active-passive cluster with shared storage, and I have set up AlwaysOn as DR on a standalone instance.
Every morning I am getting the following critical events in the failover cluster events.
In the Windows log I couldn't see much information; please help me.
Can you provide the version of Windows Server that you are using as cluster nodes? Behavior will be different between Windows Server 2008, 2012 and 2012 R2. While in the past changing the NodeWeight property of a cluster node may have been an appropriate solution,
Windows Server 2012 R2 introduced dynamic witness, where the witness is configured to either have a vote or not depending on the state of the cluster and the quorum. Combined with dynamic quorum (introduced in Windows Server 2012), which dynamically adds or removes a node's vote depending on its status, the behavior will differ depending on which version of Windows Server you are running. A more appropriate approach (if you are running Windows Server 2012 or 2012 R2) is to configure the LowerQuorumPriorityNodeID property of the cluster.
http://technet.microsoft.com/en-ca/library/dn265972.aspx
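On Windows Server 2012 R2, that property can be set with the failover-clustering PowerShell module; the node ID below is an example value, so list your nodes first and substitute the right one:

```
# List node IDs, then mark the DR/secondary node as lowest quorum priority
Get-ClusterNode | Format-Table Name, Id
(Get-Cluster).LowerQuorumPriorityNodeID = 2
```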
Edwin Sarmiento SQL Server MVP | Microsoft Certified Master