10g in a Cold Failover Cluster

Hello,
I have some questions about configuring a single instance in a two-node cluster as a cold failover solution. Specifically:
1 - Should I put the floating-IP hostname in listener.ora?
2 - To create a dbconsole, do I have to use the ORACLE_HOSTNAME variable?
thanks in advance

Even though Metalink note 362524.1 says that a "failing over" DB Console needs a shared home, I succeeded in creating a hostname-independent one in a few steps:
1 - Set ORACLE_HOSTNAME=clustername (and resolve the clustername to the floating IP in /etc/hosts on both the server and the client machines).
2 - Create the DB Console with the repository on the active node and without the repository on the passive one.
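As a sketch, the two steps above could look like this (the cluster name `dbclu`, the floating IP, and the paths are made-up examples, not from the note):

```
# /etc/hosts on both nodes and on the clients (floating IP -> clustername)
10.0.0.50   dbclu

# listener.ora: listen on the floating hostname, not the physical one
LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbclu)(PORT = 1521)))

# On the active node, create the DB Console against the floating hostname
export ORACLE_HOSTNAME=dbclu
emca -config dbcontrol db -repos create
```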
That's all,
Bye

Similar Messages

  • Oracle 10gR2 Installation on Cold Failover Cluster

    Installation document required for Oracle 10g R2 on a Cold Failover Cluster on the Red Hat Linux 4 platform.

    user12974178 wrote:
    checking network configuration requirements
    check complete: The overall result of this check is : Not Executed
    Recommendation: Oracle supports installation on systems with DHCP - assigned public IP addresses.
    However , the primary network Interface on the system should be configured with a static IP address
    in order for the Oracle Software to function properly. See the Installation Guide for more details on installing the
    software on systems configured with DHCP
    Could anyone help me in this regard?
    Thanks in advance

    Your problem is:
    "However , the primary network Interface on the system should be configured with a static IP address in order for the Oracle Software to function properly"
    Your solution is
    "See the Installation Guide for more details on installing the software on systems configured with DHCP"
    That book is at http://tahiti.oracle.com and the steps you need are in Chapter 2 of the Installation Guide for Linux.
    Now, I could assume this is Oracle 11gR2 and you are using Oracle Enterprise Linux 5 Update 4, but I won't. Assuming things gets me into trouble.

  • OracleAS cold failover cluster

    I am installing OracleAS cold failover cluster, but during installation I am getting an error in Database Configuration Assistant. Can anybody help me?

    This is as simple as "two heads think better than one": you have two (or more) points of access to your applications, the processes are split between servers, and if one of the nodes fails, another is there to back it up. That is essentially what a cold failover cluster does. Administration is also centralized: with a good configuration you only need to configure once and all nodes will pick it up. Finally, keep in mind that with a cold failover cluster only 50% of your potential workload is in use; one server is just a portrait, good to look at but doing nothing else. You don't need benchmarks or notes; it really is as simple as it sounds.
    Greetings.

  • OC4J Instances in Cold Failover Cluster

    I'm running OAS 10.1.2.2.0 on a windows 2003 server under a cold failover clustered environment and was wondering... Is it recommended to have one web application deployed in it's own separate instance? For example, webapp1 deployed to instance1 and webapp2 deployed to instance2? Or would it be better to have multiple web applications deployed to one instance?
    Thanks for any thoughts!

    user7575753 wrote:
    I'm running OAS 10.1.2.2.0 on a windows 2003 server under a cold failover clustered environment and was wondering... Is it recommended to have one web application deployed in it's own separate instance? For example, webapp1 deployed to instance1 and webapp2 deployed to instance2? Or would it be better to have multiple web applications deployed to one instance?
    Thanks for any thoughts!

    I can say your configuration is OK for single-instance failover. If you want clustering and load balancing, OAS has managed and non-managed clusters.
    For a managed cluster, you must set up either Oracle Web Cache or F5 BIG-IP. A non-managed cluster, by contrast, requires nothing to be shared.

  • Cold failover cluster for repository ?

    Hi,
    We have Box A (infra and mid) and Box B (mid),
    and we want to use Box B for a cold failover cluster.
    Any suggestion on how to do this?
    TIA

    Similar question was asked recently in our forum:
    http://social.msdn.microsoft.com/Forums/en-US/18239da7-74f2-45a7-b984-15f1b3f27535/biztalk-clustering?forum=biztalkgeneral#4074a082-8459-420f-8e99-8bab19c8fba2
    A white paper from Microsoft on this topic will provide you with step-by-step guidance on failover clustering for BizTalk 2010:
    http://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=2290
    Also refer this blog where authors in series of his post guides with the steps required:
    Part 1: BizTalk High Availability Server Environment – Preparations
    Part 2: BizTalk High Availability Server Environment–Domain Controller Installation
    Part 3: BizTalk High Availability Server Environment – SQL & BizTalk Active Directory Accounts
    Part 4: BizTalk High Availability Server Environment – Prepping our SQL & BizTalk Failover Clusters
    Part 5: BizTalk High Availability Server Environment – SQL Server 2008r2 Failover Cluster
    Part 6: BizTalk High Availability Server Environment–BizTalk 2010 Failover Cluster Creation
    Part 7 – BizTalk High Availability Server Environment –BizTalk 2010 Installation, Configuration
    and Clustering
    Part 8 - Adding Network Load Balancing to our High Availability Environment

  • Oracle Infrastructure in Cold Failover Cluster

    Hello,
    I have browsed through the Oracle docs for High Availability, but I cannot find any information about achieving CFC HA for the OAS Infrastructure using Red Hat EL AS and HP Serviceguard...
    Can anyone help me?
    thanks in advance, and best regards


  • How to deploy oracle 11g under cold failover cluster

    Hi,
    I am trying to install Oracle 11g in a failover cluster environment. I have tried to find some Oracle documentation about it, but have not been able to find any. Can anyone help me? I need it badly.
    We have two Sun Solaris 10 servers on which a failover cluster is installed; one is active while the other is passive. How can I install Oracle 11g on this failover cluster? Do I have to install Oracle on the shared SAN? What extra configuration do I have to do? Please help.
    Also, is it wise to install the 11g warehouse with failover, or to use RAC?
    regards
    Nick

    Thanks a lot for the reply.
    I am of the same point of view, but the machine I am working on is Sun Solaris 10 with two partitions, finance and accounts, which are protected by a failover machine with the same two partitions, and I think you are right: these are the same shared storage folders, mounted on the active machine as well as on the passive machine.
    Now my question is: if I install Oracle 11g on one machine without Oracle Clusterware, should Sun Cluster be able to mount the database in case the primary machine fails?
    I have tried hard to get some documentation on this topic, but nothing so far. Please help.
    regards
    Nick

  • SCAN & Cold Failover

    Hi Folks
    OEL 5.4 64bit, 11gR2
    SCAN
    I'm planning to install a two-node cold failover cluster. I'm not sure whether it makes sense to use a SCAN listener or to do it the old-fashioned way (a VIP for each DB, which relocates with the DB in case the node goes down).
    what's your opinion about this?
    ASM for voting & OCR
    I'm also planning to create one diskgroup in ASM for 3x voting and OCR, normal redundancy, with 3 failgroups. Do I have to create the diskgroup with the QUORUM keyword for each failgroup or not, as described here: http://download.oracle.com/docs/cd/E11882_01/server.112/e10500/asmdiskgrps.htm#CHDBGAFB? I assume not, as quorum failgroups cannot store OCR files (http://download.oracle.com/docs/cd/E11882_01/server.112/e10592/statements_5008.htm#SQLRF01114).
    what is your way to store the ocr and voting files?
    is there a 11gR2 Version of this paper available (http://www.oracle.com/technology/products/database/clusterware/pdf/SI_DB_Failover_11g.pdf) ?
    thanks
    regards oviwan

    SCAN
    I'm planning to install a two-node cold failover cluster. I'm not sure whether it makes sense to use a SCAN listener or to do it the old-fashioned way (a VIP for each DB, which relocates with the DB in case the node goes down).
    1 - You cannot install the Grid Infrastructure without the SCAN, even if you will not use it.
    2 - The old fashion you described is still what is used, independently of the SCAN: you still have a VIP which relocates in case the node goes down.
    Where you use the SCAN is when configuring the REMOTE_LISTENER and the aliases for clients. For those purposes you can still use the "old fashion" if you don't want the SCAN, but I believe you should use it, because it makes things easier...
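To illustrate the difference on the client side, here is a tnsnames.ora sketch (the names `clu-scan`, `orcl-vip`, and the service name are assumptions):

```
# With SCAN: clients use one stable cluster-wide name
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = clu-scan)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl)))

# "Old fashion": clients point at the per-database VIP that relocates with the DB
ORCL_OLD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = orcl-vip)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl)))
```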

  • TO setup Cold Failover in ESP Cluster

    Hi,
    I created two nodes (node1 and node2) and started them on different machines, node1 on vewin764SA016 (1st machine) and node2 on vewin764SA046 (2nd machine) (I have attached both files).
    I created an ESP project which has an input window attached to an XML Input Adapter.
    I set up the cluster configuration in the CCR file as follows (I have attached the CCR file as well; I was not able to attach .ccr, so I changed the extension to .txt):
    <?xml version="1.0" encoding="UTF-8"?>
    <Configuration xmlns="http://www.sybase.com/esp/project_config/2010/08/">
      <Runtime>
      <Clusters>
          <Cluster name="esp://vewin764SA016:19011" type="remote">
            <Auth>user</Auth>
            <Username encrypted="false">sybase</Username>
            <Password encrypted="false">sybase</Password>
            <Rsakeyfile></Rsakeyfile>
      <Managers>
      <Manager>vewin764SA016:19001</Manager>
      <Manager>vewin764SA046:19002</Manager>
      </Managers>
          </Cluster>
        </Clusters>
        <Bindings>
          <Binding name="Battery_Life_Input_Adapter_window">
            <Cluster>esp://vewin764SA016:19011</Cluster>
            <Workspace>test</Workspace>
            <Project>test</Project>
            <BindingName>b1</BindingName>
            <RemoteStream>remote1</RemoteStream>
            <Output>false</Output>
          </Binding>
        </Bindings>
        <AdaptersPropertySet>
          <PropertySet name="Atom Feed Input">
            <Property name="URL"></Property>
          </PropertySet>
        </AdaptersPropertySet>
      </Runtime>
      <Deployment>
        <Project ha="false">
          <Options>
            <Option name="time-granularity" value="5"/>
            <Option name="debug-level" value="4"/>
          </Options>
          <Instances>
            <Instance>
              <Failover enable="true">
                <FailureInterval>120</FailureInterval>
                <FailuresPerInterval>5</FailuresPerInterval>
              </Failover>
              <Affinities>
                <Affinity charge="positive" strength="strong" type="controller" value="node1"/>
                <Affinity charge="positive" strength="weak" type="controller" value="node2"/>
              </Affinities>
              <Affinities/>
            </Instance>
          </Instances>
        </Project>
      </Deployment>
    </Configuration>
    I added the project to node1 using the following command and started the project:
    esp_cluster_admin --uri=esp://vewin764SA016:19011 --username=sybase --password=sybase --add_project --project-name=test --workspace-name=test --ccx=test.ccx --ccr=test.ccr
    Project started in both machines and both getting data.
    As per the ESP documentation: "If cold failover is enabled, a failover occurs when a failed project switches to another server to continue processing."
    But in my case the project is running on both nodes and both are getting data.
    I tried all possible combinations of the Affinity attributes charge and strength (positive/negative, strong/weak), but it did not work out.
    How can I set up the cluster such that when I add a project, it starts running on one node, and when that node goes down, the second node is brought up and the project runs there?
    Thanks
    Shashi

    Hi Shashi,
    Reviewing the configuration of your ccr file, you've got
              <Affinities>
                <Affinity charge="positive" strength="strong" type="controller" value="node1"/>
                <Affinity charge="positive" strength="weak" type="controller" value="node2"/>
              </Affinities>
              <Affinities/>
    Just note that in your set-up you've set strong and positive for node1. This means the project will only run on node1, so if node1 is down, the project will not start up on node2. Is that what you are trying to accomplish?
    The "<Affinities/>" on the fifth line is extra. I don't think it is harming anything, though.
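If the goal is "prefer node1, but fail over to node2 when node1 dies", a sketch of the instance block with only a weak affinity might look like this (assuming ESP treats weak affinities as preferences rather than requirements, which is what the strong/weak semantics suggest):

```
<Instance>
  <Failover enable="true">
    <FailureInterval>120</FailureInterval>
    <FailuresPerInterval>5</FailuresPerInterval>
  </Failover>
  <Affinities>
    <!-- weak = preferred node only; the project may still start on node2 -->
    <Affinity charge="positive" strength="weak" type="controller" value="node1"/>
  </Affinities>
</Instance>
```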
    I am assuming that you are using Studio to see the that the "Project started in both machines and both getting data".
    There is an outstanding New Feature Request CR for Studio to support HA and failover. Studio is really not compatible with HA.  Studio makes it look like there are multiple instances of the project running when in fact there is only one instance running. Studio will start getting errors if the project crashes and is failed over to the secondary node.  I can’t really recommend using Studio at all when dealing with HA:
    CR 732974 - Request for Studio support of HA and failover.
    To confirm that there is only one instance of the project running, I look at esp_cluster_admin:
    $ esp_cluster_admin --uri esp://bryantpark:19011 --username=sybase
    Password:
    > get managers
    Manager[0]:     node1@http://bryantpark.sybase.com:19011
    Manager[1]:     node2@http://bryantpark.sybase.com:19012
    > get controllers
    Controller[0]:  node1@http://bryantpark.sybase.com:19011
    Controller[1]:  node2@http://bryantpark.sybase.com:19012
    > get projects
    =============================
    Workspace:                    test
    Project:                      test
    Instance Count:               1
        ----------- Instance Details -----------
        Instance Name:                default
        Controller Name:              node1
        Current Status:               started-running
        Requested Status:             started-running
        Failure Interval:             120
        Failures Per Interval:        5
            -------------- Affinities --------------
            Affinity Type:                CONTROLLER
            Affinity Charge:              POSITIVE
            Affinity Strength:            STRONG
            Affinity Subject:             node1
            Affinity Type:                CONTROLLER
            Affinity Charge:              POSITIVE
            Affinity Strength:            WEAK
            Affinity Subject:             node2
    >
    The output from "get projects" shows that there are two active managers and controllers, and that the single instance is running from the controller named node1.
    Next, I run the "get project" command, and it shows me the Pid:
    > get project test/test
    Workspace:                    test
    Project:                      test
        ------------------ Instance : 0 ------------------
        Instance Id:                  Id_2000003_1393600620713
        Command Host:                 bryantpark.sybase.com
        Command Port:                 44905
        Gateway Host:                 bryantpark.sybase.com
        Gateway Port:                 41162
        Sql Port:                     47244
        SSL Enabled:                  false
        Big Endian:                   false
        Address Size:                 8
        Date Size:                    8
        Money Precision:              4
        Pid:                          20637
        Topology Ignored:             false
        Timer Interval:               5
        Active-Active:                false
    I run these Linux commands that also confirm to me that I have only one instance running:
    $ ps -ef | grep esp_server
    alices   19963 19961  1 09:45 pts/0    00:00:27 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-node node1.xml --cluster-log-properties node1.log.properties
    alices   20243 20241  1 09:59 pts/1    00:00:15 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-node node2.xml --cluster-log-properties node2.log.properties
    alices   20637 19963  0 10:16 ?        00:00:00 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-container --cluster-container-id Id_2000003_1393600620713 --log-to 4
    alices   20729 18201  0 10:18 pts/2    00:00:00 grep esp_server
    From here, I see that pid 20637's parent pid is 19963, which is the pid for node1 server.
    Next, I kill the active manager and the project to simulate a catastrophic failure:
    $ kill -9  20637 19963
    Due to the affinities, no failover happens to node2:
    $ ps -ef | grep esp_server
    alices   20243 20241  1 09:59 pts/1    00:00:17 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-node node2.xml --cluster-log-properties node2.log.properties
    alices   20764 18201  0 10:21 pts/2    00:00:00 grep esp_server
    Next, I use a different ccr file that does not have affinities configured.
    From esp_cluster_admin, I stop and remove the test/test project and then re-run using the modified ccr that has no affinities.
    $ esp_cluster_admin --uri esp://bryantpark:19012\;bryantpark:19011 --username=sybase
    Password:
    > remove project test/test
    [done]
    > add project test/test bin/test.ccx test-without-affins.ccr
    [done]
    > start project test/test
    [done]
    > get projects
    =============================
    Workspace:                    test
    Project:                      test
    Instance Count:               1
        ----------- Instance Details -----------
        Instance Name:                default
        Controller Name:              node2
        Current Status:               started-running
        Requested Status:             started-running
        Failure Interval:             120
        Failures Per Interval:        5
    This time, the project started on node2.
    > get project test/test
    Workspace:                    test
    Project:                      test
        ------------------ Instance : 0 ------------------
        Instance Id:                  Id_1000003_1393601774031
        Command Host:                 bryantpark.sybase.com
        Command Port:                 46151
        Gateway Host:                 bryantpark.sybase.com
        Gateway Port:                 36432
        Sql Port:                     50482
        SSL Enabled:                  false
        Big Endian:                   false
        Address Size:                 8
        Date Size:                    8
        Money Precision:              4
        Pid:                          21422
        Topology Ignored:             false
        Timer Interval:               5
        Active-Active:                false
    >
    The pid is 21422.
    $ ps -ef | grep esp_server
    alices   21213 21211  4 10:33 pts/0    00:00:07 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-node node1.xml --cluster-log-properties node1.log.properties
    alices   21282 21280  4 10:33 pts/1    00:00:07 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-node node2.xml --cluster-log-properties node2.log.properties
    alices   21422 21282  1 10:36 ?        00:00:00 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-container --cluster-container-id Id_1000003_1393601774031 --log-to 4
    alices   21500 18201  0 10:36 pts/2    00:00:00 grep esp_server
    Pid 21422's parent pid is 21282, which is the pid for node2 server.
    Kill the project and node2:
    $ kill -9  21422 21282
    $ ps -ef | grep esp_server
    alices   21213 21211  4 10:33 pts/0    00:00:08 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-node node1.xml --cluster-log-properties node1.log.properties
    alices   21505 21213 12 10:36 ?        00:00:00 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-container --cluster-container-id Id_2_1393601813749 --log-to 4 --clus
    alices   21578 18201  0 10:36 pts/2    00:00:00 grep esp_server
    The project automatically fails over now to node1.
    > get managers
    Manager[0]:     node1@http://bryantpark.sybase.com:19011
    > get controllers
    Controller[0]:  node1@http://bryantpark.sybase.com:19011
    > get projects
    =============================
    Workspace:                    test
    Project:                      test
    Instance Count:               1
        ----------- Instance Details -----------
        Instance Name:                default
        Controller Name:              node1
        Current Status:               started-running
        Requested Status:             started-running
        Failure Interval:             120
        Failures Per Interval:        5
    Thanks,
    Alice

  • Data Guard Broker and Cold Failover clusters

    Hi,
    I wanted to use Data Guard Broker to control primary/standby systems on AIX clusters, but found that without Oracle Clusterware it is not supported on failover of a database to another node (unless you drop and recreate the configuration, which in my opinion is no support at all!).
    The 11g documentation states that DG Broker offers support for single instance databases configured for HA using Oracle Clusterware and "cold failover clusters".
    Does anyone know whether this support for cold failover clusters in 11g means "proper support", i.e. the configuration detects that the hostname has changed and automatically renames it so it continues to work? Or is the support in 11g the same as that in 10g?
    Thanks,
    Andy

    Hi Jan,
    We already use virtual hostnames.
    When I set up the configuration, the hostnames for the databases default to the server name. The only way I know of changing this is to disable the configuration and then use the "edit database ... set property" command, but when I enable the configuration again it reverts to its previous value.
    regards,
    Andy
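For reference, the sequence Andy describes looks something like this in DGMGRL (the database name `stby` and the connect identifier are made up; in 11g the relevant per-database property is StaticConnectIdentifier, and whether an edited value survives re-enabling the configuration is exactly the issue discussed here):

```
DGMGRL> DISABLE CONFIGURATION;
DGMGRL> EDIT DATABASE 'stby' SET PROPERTY StaticConnectIdentifier =
        '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=stby-vip)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=stby)))';
DGMGRL> ENABLE CONFIGURATION;
```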

  • Cold-Failover with Oracle Weblogic Server 10 R3

    Hi ALL,
    Is it possible to configure cold failover using Oracle WebLogic Server?
    - 2 instances with the same IP; if one fails, the second will run
    - No communication between nodes, unlike in a normal cluster
    Thanks

    Hello All,
    Has anybody configured automatic server migration for cold failover?
    I have a few questions regarding this setup, any help is much appreciated.
    1. For the floating IPs to work, do I need 2 NICs and two cables on each server?
    2. Do I have to configure 2 different IPs on one interface, to be used as the node manager listen addresses for each of the 2 machines?
    3. What will be the listen address of each of these servers (primary server & failover server)? Is it the floating IP address, or empty (all local addresses)?
    4. Can we start both the primary server & the failover server, so that when the primary goes down the failover will be quick?
    Thanks,
    Chandra

  • Active Failover Cluster of Identity Managment

    Would it be possible to create an AFC of IM using Web Cache (WC) as the load balancer?
    Or do we need to acquire an LBR from the third-party tested list?
    http://www.oracle.com/technology/products/ias/hi_av/Tested_LBR_FW_SSLAccel.html
    If the answer to the first question is YES, could someone redirect me to some notes, please?
    We are talking about a complete IM (SSO, DAS, OCA, OID).
    I guess OID could be a problem... :( since WC can't do LDAP balancing.
    tx for your help.
    diego

    When I started this, we had two different vmware servers set up in two isolated datacenters across the factory from one another. The two nodes of our RAC system (mounted on Netapp) follow the same topology. The tech doing the vmware tested a forced failover using a manual method (we didn't need it active at that point), and I didn't even see any downtime. I have no clue how this really works, but it seems fine. The vmware disks are mounted on a SAN which has striping between the two datacenters.
    Since then, they've started creating a vmware cluster which should fail over automatically (apparently), but I haven't seen it tested.
    Your (cold failover) solution sounds fine to me. Your only problem is that the SSOs will stop working while you're doing the failover.
    I once saw a solution somewhere (either at a client or in metalink) that had two OIDs running at the same time behind a hw load balancer with the MR RAC-mounted. I suspect that only one was "active" at a time. I think the solution is difficult because of the way that OID has cached data. I think that as well, some applications keep a persistent connection to OID (Forms runtime?) and these would break on failover.
    Maybe what you could do is have your standby "up" like this and somehow set up an automatic reload of the data on the standby when it becomes primary...

  • Reporting Services as a generic service in a failover cluster group?

    There is some confusion on whether or not Microsoft will support a Reporting Services deployment on a failover cluster using scale-out, adding the Reporting Services service as a generic service in a cluster group to achieve active-passive high availability.
    A deployment like this is described by Lukasz Pawlowski (Program Manager on the Reporting Services team) in this blog article
    http://blogs.msdn.com/b/lukaszp/archive/2009/10/28/high-availability-frequently-asked-questions-about-failover-clustering-and-reporting-services.aspx. There it is stated that it can be done, and what needs to be considered when doing such a deployment.
    This article (http://technet.microsoft.com/en-us/library/bb630402.aspx), on the other hand, states: "Failover clustering is supported only for the report server database; you cannot run the Report Server service as part of a failover cluster."
    This is somewhat confusing to me. Can I expect to receive support from Microsoft for a setup like this?
    Best Regards,
    Peter Wretmo

    Hi Peter,
    Thanks for your posting.
    As Lukasz said in the blog, failover clustering with SSRS is possible. However, during the failover there is some time during which users will receive errors when accessing SSRS, since the network names will resolve to a computer where the SSRS service is still in the process of starting.
    Besides, there are several considerations and manual steps involved on your part before configuring failover clustering with the SSRS service:
    Impact on other applications that share the SQL Server. One common idea is to put SSRS in the same cluster group as SQL Server. If SQL Server is hosting multiple application databases, other than just the SSRS databases, a failure in SSRS may cause a significant failover impact to the entire environment.
    SSRS fails over independently of SQL Server.
    If SSRS is running, it is going to do work on behalf of the overall deployment, so it will be active. The way to make SSRS passive is to stop the SSRS service on all passive cluster nodes.
    So, SSRS is designed to achieve high availability through the scale-out deployment. Though a failover-clustered SSRS deployment is achievable, it is not the best option for achieving high availability with Reporting Services.
    Regards,
    Mike Yin

  • Difference between scalable and failover cluster

    What is the difference between a scalable and a failover cluster?

    A scalable cluster is usually associated with HPC clusters, though some might argue that Oracle RAC is also this type of cluster: the workload can be divided up and sent to many compute nodes. Usually used for a vectored workload.
    A failover cluster is one where a standby system (or systems) is available to take the workload when needed. Usually used for scalar workloads.

  • Install Guide - SQL Server 2014, Failover Cluster, Windows 2012 R2 Server Core

    I am looking for anyone who has a guide, with notes, for installing a two-node, multi-subnet failover cluster for SQL Server 2014 on the Server Core edition.

    Hi KamarasJaranger,
    According to your description, you want to configure a SQL Server 2014 multi-subnet failover cluster on Windows Server 2012 R2. Below are the overall steps for the configuration. For the detailed steps, please download and refer to the PDF file.
    1.Add Required Windows Features (.NET Framework 3.5 Features, Failover Clustering and Multipath I/O).
    2.Discover target portals.
    3.Connect targets and configuring Multipathing.
    4.Initialize and format the Disks.
    5.Verify the Storage Replication Process.
    6.Run the Failover Cluster Validation Wizard.
    7.Create the Windows Server 2012 R2 Multi-Subnet Cluster.
    8.Tune Cluster Heartbeat Settings.
    9.Install SQL Server 2014 on a Multi-Subnet Failover Cluster.
    10.Add a Node on a SQL Server 2014 Multi-Subnet Cluster.
    11.Tune the SQL Server 2014 Failover Clustered Instance DNS Settings.
    12.Test application connectivity.
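As an example of step 1 on Server Core (where everything is done from PowerShell), the feature installation might look like this; the media path `D:\sources\sxs` for the .NET 3.5 payload is an assumption:

```
# Run on each node; .NET 3.5 needs the install media as a side-by-side source
Install-WindowsFeature NET-Framework-Core -Source D:\sources\sxs
Install-WindowsFeature Failover-Clustering -IncludeManagementTools
Install-WindowsFeature Multipath-IO
```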
    Regards,
    Michelle Li
