Oracle Infrastructure in Cold Failover Cluster

Hello,
I have browsed through the Oracle High Availability docs, but I cannot find any information about achieving cold failover cluster (CFC) HA for the OAS Infrastructure using Red Hat EL AS and HP Serviceguard...
Can anyone help me?
thanks in advance, and best regards

Similar Messages

  • How to deploy Oracle 11g under a cold failover cluster

    Hi,
    I am trying to install Oracle 11g in a failover cluster environment, and I am trying to find some Oracle documentation about it, but I am not able to find any. Can anyone help me? I need it badly.
    We have two Sun Solaris 10 servers on which a failover cluster is installed; one is active while the other is passive. How can I install Oracle 11g on this failover cluster? Do I have to install Oracle on the shared SAN? What extra configuration do I have to do? Please help.
    Also, is it wise to install the 11g warehouse with failover, or to use RAC?
    regards
    Nick
    Edited by: Nick Naughty on Feb 1, 2009 4:02 AM

    Thanks a lot for the reply.
    I am of the same point of view, but the machine I am working on is Sun Solaris 10 with two partitions, finance and accounts, which are protected by a failover machine with the same two partitions, and I think you are right: these are the same shared storage folders, mounted on the active machine as well as on the passive machine.
    Now my question is: if I install Oracle 11g on one machine without Oracle clusterware, will Sun Cluster be able to mount the database in case the primary machine fails?
    I have tried hard to get some documentation on this topic but have found nothing so far. Please help.
    regards
    Nick

  • Oracle 10gR2 Installation on Cold Failover Cluster

    An installation document is required for Oracle 10gR2 on a cold failover cluster on the Red Hat Linux 4 platform.
    Message was edited by:
    user491196

    user12974178 wrote:
    checking network configuration requirements
    check complete: The overall result of this check is : Not Executed
    Recommendation: Oracle supports installation on systems with DHCP-assigned public IP addresses.
    However , the primary network Interface on the system should be configured with a static IP address
    in order for the Oracle Software to function properly. See the Installation Guide for more details on installing the
    software on systems configured with DHCP
    Could anyone help me in this regard?
    Thanks in advance.

    Your problem is:
    "However , the primary network Interface on the system should be configured with a static IP address in order for the Oracle Software to function properly"
    Your solution is
    "See the Installation Guide for more details on installing the software on systems configured with DHCP"
    That book is at http://tahiti.oracle.com and the steps you need are in Chapter 2 of the Installation Guide for Linux.
    Now, I could assume this is Oracle 11gR2 and you are using Oracle Enterprise Linux 5 Update 4, but I won't. Assuming things gets me into trouble.
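    In practice the fix comes down to giving the primary interface a static address. A minimal sketch for a RHEL/OEL-style system (interface name, addresses, and hostname are hypothetical; the Installation Guide steps are authoritative):
    # /etc/sysconfig/network-scripts/ifcfg-eth0 -- switch the NIC from DHCP to a fixed address
    DEVICE=eth0
    BOOTPROTO=static
    IPADDR=192.168.1.50
    NETMASK=255.255.255.0
    ONBOOT=yes
    # /etc/hosts -- make the hostname resolve to that static address
    192.168.1.50   orahost.example.com   orahost
    # apply the change
    service network restart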

  • OracleAS cold failover cluster

    I am installing OracleAS Cold Failover Cluster, but during installation I am getting an error in Database Configuration
    Assistant. Can anybody help me?

    This is as simple as "two heads think better than one": you get two (or more) points of access to your applications, the processes are split between servers, and if one of the nodes fails, there is another to back it up. Administration is also centralized; with a nice configuration you make a change only once and all nodes will pick it up. Finally, note that with a cold failover cluster you have only 50% of your potential workload in use: one server sits there like a portrait, only good to look at, doing nothing else. You don't need benchmarks or notes; it is really as simple as it sounds.
    Greetings.

  • OC4J Instances in Cold Failover Cluster

    I'm running OAS 10.1.2.2.0 on a Windows 2003 server in a cold failover clustered environment and was wondering... Is it recommended to have each web application deployed in its own separate instance? For example, webapp1 deployed to instance1 and webapp2 deployed to instance2? Or would it be better to have multiple web applications deployed to one instance?
    Thanks for any thoughts!

    user7575753 wrote:
    I'm running OAS 10.1.2.2.0 on a Windows 2003 server in a cold failover clustered environment and was wondering... Is it recommended to have each web application deployed in its own separate instance? For example, webapp1 deployed to instance1 and webapp2 deployed to instance2? Or would it be better to have multiple web applications deployed to one instance?
    Thanks for any thoughts!

    I can say your configuration is OK for single-instance failover. If you want clustering and load balancing, OAS has managed and non-managed clusters.
    For a managed cluster, you must set up either Oracle Web Cache or an F5 BIG-IP. A non-managed cluster, by contrast, requires nothing to be shared.

  • Cold failover cluster for repository?

    Hi,
    We have Box A (infra and mid) and Box B (mid);
    we want to use Box B for a cold failover cluster.
    Any suggestions on how to do this?
    TIA

    Similar question was asked recently in our forum:
    http://social.msdn.microsoft.com/Forums/en-US/18239da7-74f2-45a7-b984-15f1b3f27535/biztalk-clustering?forum=biztalkgeneral#4074a082-8459-420f-8e99-8bab19c8fba2
    The white paper from Microsoft on this topic will give you step-by-step guidance on a failover cluster for BizTalk 2010:
    http://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=2290
    Also refer this blog where authors in series of his post guides with the steps required:
    Part 1: BizTalk High Availability Server Environment – Preparations
    Part 2: BizTalk High Availability Server Environment–Domain Controller Installation
    Part 3: BizTalk High Availability Server Environment – SQL & BizTalk Active Directory Accounts
    Part 4: BizTalk High Availability Server Environment – Prepping our SQL & BizTalk Failover Clusters
    Part 5: BizTalk High Availability Server Environment – SQL Server 2008r2 Failover Cluster
    Part 6: BizTalk High Availability Server Environment–BizTalk 2010 Failover Cluster Creation
    Part 7: BizTalk High Availability Server Environment – BizTalk 2010 Installation, Configuration and Clustering
    Part 8: Adding Network Load Balancing to our High Availability Environment
    If this answers your question please mark it accordingly. If this post is helpful, please vote as helpful by clicking the upward arrow mark next to my reply.

  • 10g in a Cold Failover Cluster

    Hello,
    I have some doubts about configuration of a single instance in a two node cluster for a cold failover solution.
    I mean,
    1 - Should I put the floating-IP hostname in listener.ora?
    2 - To create a DB Console, do I have to use the ORACLE_HOSTNAME variable?
    thanks in advance

    Even though, per Metalink note 362524.1,
    a "failing over" DB Console needs a shared home,
    I succeeded in creating a hostname-independent one in a few steps:
    1 - Setting ORACLE_HOSTNAME=clustername (and resolving the cluster name to the floating IP in /etc/hosts on the server and client machines)
    2 - Creating the DB Console with its repository on the active node, and without the repository on the passive one.
    That's all,
    Bye
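    For anyone reproducing this, the steps look roughly like the following sketch (cluster name, floating IP, and database details are hypothetical; emca is the standard 10g DB Console configuration tool):
    # /etc/hosts on both nodes and on clients: resolve the cluster name to the floating IP
    192.168.1.100   mycluster
    # active node: create DB Console together with its repository, against the cluster name
    export ORACLE_HOSTNAME=mycluster
    emca -config dbcontrol db -repos create
    # passive node: configure DB Console only, without recreating the repository
    export ORACLE_HOSTNAME=mycluster
    emca -config dbcontrol db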

  • SCAN & Cold Failover

    Hi Folks
    OEL 5.4 64bit, 11gR2
    SCAN
    I'm planning to install a two-node cold failover cluster. I'm not sure whether it makes sense to use a SCAN listener or to do it the old-fashioned way (a VIP for each DB, which relocates with the DB in case the node goes down).
    What's your opinion on this?
    ASM for voting & OCR
    I'm also planning to create one diskgroup in ASM for the 3x voting files and the OCR: normal redundancy with 3 failgroups (a sketch follows this post). Do I have to create the diskgroup with the QUORUM keyword for each failgroup or not, as described here: http://download.oracle.com/docs/cd/E11882_01/server.112/e10500/asmdiskgrps.htm#CHDBGAFB? I assume not, as quorum failgroups cannot store OCR files (http://download.oracle.com/docs/cd/E11882_01/server.112/e10592/statements_5008.htm#SQLRF01114).
    What is your way of storing the OCR and voting files?
    Is there an 11gR2 version of this paper available (http://www.oracle.com/technology/products/database/clusterware/pdf/SI_DB_Failover_11g.pdf)?
    thanks
    regards oviwan
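    For concreteness, the diskgroup described above, following the assumption in the question, would look something like this (disk paths are hypothetical):
    -- one diskgroup for the OCR plus 3 voting files: normal redundancy, 3 regular failgroups
    -- (no QUORUM keyword, so any failgroup can also hold the OCR; QUORUM failgroups carry voting files only)
    CREATE DISKGROUP ocrvote NORMAL REDUNDANCY
      FAILGROUP fg1 DISK '/dev/asmdisk1'
      FAILGROUP fg2 DISK '/dev/asmdisk2'
      FAILGROUP fg3 DISK '/dev/asmdisk3'
      ATTRIBUTE 'compatible.asm' = '11.2';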

    SCAN
    I'm planning to install a two-node cold failover cluster. I'm not sure whether it makes sense to use a SCAN listener or to do it the old-fashioned way (a VIP for each DB, which relocates with the DB in case the node goes down).
    1 - You cannot install the Grid Infrastructure without the SCAN, even if you will not use it.
    2 - The old-fashioned approach you described is still in use, independently of SCAN: you still have a VIP which relocates in case the node goes down.
    Where you use the SCAN is when configuring the REMOTE_LISTENER and the connection alias for clients. For those purposes you can stick with the «old fashion» if you don't want to use the SCAN, but I believe you should use it because it makes things easier.
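    To make the REMOTE_LISTENER / client-alias point concrete, a sketch of the two styles in tnsnames.ora (hostnames, port, and service name are hypothetical):
    # SCAN style: a single name that DNS resolves to the SCAN listeners
    MYDB =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = mycluster-scan)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = mydb)))
    # "old fashion" style: list each node VIP explicitly
    MYDB_VIP =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521)))
        (CONNECT_DATA = (SERVICE_NAME = mydb)))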

  • Data Guard Broker and Cold Failover clusters

    Hi,
    I wanted to use Data Guard Broker to control primary/standby systems on AIX clusters, but found that without Oracle Clusterware it does not support failover of a database to another node (unless you drop and recreate the configuration, which in my opinion is no support at all!).
    The 11g documentation states that DG Broker offers support for single-instance databases configured for HA using Oracle Clusterware and "cold failover clusters".
    Does anyone know whether this support for cold failover clusters in 11g means "proper support", i.e. the configuration detects that the hostname has changed and automatically renames it so it continues to work? Or is the support in 11g the same as in 10g?
    Thanks,
    Andy

    Hi Jan,
    We already use virtual hostnames.
    When I set up the configuration, the hostname for the databases defaults to the server name. The only way I know of changing this is to disable the configuration and then use the "edit database ... set property" command, but when I enable the configuration again it reverts to its previous value.
    regards,
    Andy
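    For what it's worth, on 11g the hostname lives inside the broker's StaticConnectIdentifier property, so the cycle described above looks roughly like this in DGMGRL (database name and virtual hostname are hypothetical; whether the value survives re-enabling is exactly the problem reported here):
    DGMGRL> DISABLE CONFIGURATION;
    DGMGRL> EDIT DATABASE 'stby' SET PROPERTY StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=virtual-host)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=stby_DGMGRL)))';
    DGMGRL> ENABLE CONFIGURATION;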

  • Cold Failover with Oracle WebLogic Server 10gR3

    Hi ALL,
    Is it possible to configure cold failover using Oracle WebLogic Server?
    - Two instances with the same IP; if one fails, the second will run
    - No communication between nodes, as in a normal cluster
    Thanks

    Hello All,
    Has anybody configured automatic server migration for cold failover?
    I have a few questions regarding this setup; any help is much appreciated.
    1. For the floating IPs to work, do I need 2 NICs and two cables on each server?
    2. Do I have to configure 2 different IPs on one interface, to be used as the Node Manager listen addresses for each of the 2 machines?
    3. What will be the listen address of each of these servers (primary server & failover server)? Is it the floating IP address, or empty (all local addresses)?
    4. Can we start both the primary server & the failover server, so that when the primary goes down the failover will be quick?
    Thanks,
    Chandra
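    Not a definitive answer, but with automatic whole server migration the floating IP is normally plumbed up and down by Node Manager, so the usual sketch (interface name and netmask are hypothetical; check the WebLogic server migration docs for your release) is:
    # nodemanager.properties on each machine -- used by the migration scripts (wlsifconfig.sh) to move the floating IP
    Interface=eth0
    NetMask=255.255.255.0
    UseMACBroadcast=true
    The migratable server's listen address would then be the floating IP itself, while each Node Manager listens on its own machine's fixed address.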

  • Installation guide for Oracle Cloud Control agents on a Windows failover cluster

    Hello Experts,
    I have to install the Oracle Cloud Control agent on a Windows failover cluster, and I could not find any document on the Oracle support site or the internet.
    Here is the requirement:
    I have two hosts (host1 and host2) in a Windows cluster environment (active/passive), meaning one node holds the cluster service/resource at a time.
    If the primary node (host1) goes down, Microsoft clusterware fails the cluster resource over to host2 automatically.
    Has anyone deployed 12c agents in a Windows failover clustered environment? If yes, please provide hints or the procedure.
    Thanks
    pavan

    Hi Pavan,
    Please find the following document:
    How to Configure Grid Control Agents in Windows HA - Failover Cluster Environments (Doc ID 464191.1)
    For the 12c agent install, you can proceed with a silent install, passing a virtual hostname, using the following document:
    How to Install EM 12c Agent using Silent Install Method with Response File (Doc ID 1360083.1)
    Best Regards,
    Venkat
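    Roughly, the silent install Venkat describes looks like this (paths, ports, and the virtual hostname are hypothetical; the two notes above have the authoritative parameter list):
    # run on whichever cluster node currently owns the virtual hostname
    ./agentDeploy.sh AGENT_BASE_DIR=/u01/app/agent12c RESPONSE_FILE=/u01/stage/agent.rsp ORACLE_HOSTNAME=virtual-host.example.com
    # agent.rsp -- minimal response file
    OMS_HOST=oms.example.com
    EM_UPLOAD_PORT=4889
    AGENT_REGISTRATION_PASSWORD=welcome1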

  • To set up Cold Failover in an ESP Cluster

    Hi,
    I created two nodes (node1 and node2) and started them on different machines, node1 on vewin764SA016 (1st machine) and node2 on vewin764SA046 (2nd machine) (I have attached both files).
    I created an ESP project which has an input window attached to an XML Input Adapter.
    I set up the cluster configuration in the CCR file as follows (I have attached the CCR file also; I was not able to attach .ccr, so I changed the extension to .txt):
    <?xml version="1.0" encoding="UTF-8"?>
    <Configuration xmlns="http://www.sybase.com/esp/project_config/2010/08/">
      <Runtime>
      <Clusters>
          <Cluster name="esp://vewin764SA016:19011" type="remote">
            <Auth>user</Auth>
            <Username encrypted="false">sybase</Username>
            <Password encrypted="false">sybase</Password>
            <Rsakeyfile></Rsakeyfile>
            <Managers>
              <Manager>vewin764SA016:19001</Manager>
              <Manager>vewin764SA046:19002</Manager>
            </Managers>
          </Cluster>
        </Clusters>
        <Bindings>
          <Binding name="Battery_Life_Input_Adapter_window">
            <Cluster>esp://vewin764SA016:19011</Cluster>
            <Workspace>test</Workspace>
            <Project>test</Project>
            <BindingName>b1</BindingName>
            <RemoteStream>remote1</RemoteStream>
            <Output>false</Output>
          </Binding>
        </Bindings>
        <AdaptersPropertySet>
          <PropertySet name="Atom Feed Input">
            <Property name="URL"></Property>
          </PropertySet>
        </AdaptersPropertySet>
      </Runtime>
      <Deployment>
        <Project ha="false">
          <Options>
            <Option name="time-granularity" value="5"/>
            <Option name="debug-level" value="4"/>
          </Options>
          <Instances>
            <Instance>
              <Failover enable="true">
                <FailureInterval>120</FailureInterval>
                <FailuresPerInterval>5</FailuresPerInterval>
              </Failover>
              <Affinities>
                <Affinity charge="positive" strength="strong" type="controller" value="node1"/>
                <Affinity charge="positive" strength="weak" type="controller" value="node2"/>
              </Affinities>
              <Affinities/>
            </Instance>
          </Instances>
        </Project>
      </Deployment>
    </Configuration>
    I added the project to node1 using the following command and started the project:
    esp_cluster_admin --uri=esp://vewin764SA016:19011 --username=sybase --password=sybase --add_project --project-name=test --workspace-name=test --ccx=test.ccx --ccr=test.ccr
    The project started on both machines and both are getting data.
    As per ESP document: "If cold failover is enabled, a failover occurs when a failed project switches to another server to continue processing."
    But in my case the project is running on both nodes and both are getting data.
    I tried all possible combinations of the affinity attributes charge and strength (positive/negative, strong/weak), but it did not work out.
    How can I set up a cluster such that when I add a project it starts running on one node, and when that node goes down, the second node is brought up and the project runs there?
    Thanks
    Shashi

    Hi Shashi,
    Reviewing the configuration of your ccr file, you've got
              <Affinities>
                <Affinity charge="positive" strength="strong" type="controller" value="node1"/>
                <Affinity charge="positive" strength="weak" type="controller" value="node2"/>
              </Affinities>
              <Affinities/>
    Just note that in your setup you've set strong and positive for node1. This means it will only run on node1, so if node1 is down, the project will not start up on node2. Is that what you are trying to accomplish?
    The "<Affinities/>" on the fifth line there is extra. I don't think it is harming anything, though.
    I am assuming that you are using Studio to see that "the project started in both machines and both getting data".
    There is an outstanding new-feature-request CR for Studio to support HA and failover. Studio is really not compatible with HA. Studio makes it look like there are multiple instances of the project running when in fact there is only one instance running. Studio will start getting errors if the project crashes and is failed over to the secondary node. I can't really recommend using Studio at all when dealing with HA:
    CR 732974 - Request for Studio support of HA and failover.
    To confirm that there is only one instance of the project running, I look at esp_cluster_admin:
    $ esp_cluster_admin --uri esp://bryantpark:19011 --username=sybase
    Password:
    > get managers
    Manager[0]:     node1@http://bryantpark.sybase.com:19011
    Manager[1]:     node2@http://bryantpark.sybase.com:19012
    > get controllers
    Controller[0]:  node1@http://bryantpark.sybase.com:19011
    Controller[1]:  node2@http://bryantpark.sybase.com:19012
    > get projects
    =============================
    Workspace:                    test
    Project:                      test
    Instance Count:               1
        ----------- Instance Details -----------
        Instance Name:                default
        Controller Name:              node1
        Current Status:               started-running
        Requested Status:             started-running
        Failure Interval:             120
        Failures Per Interval:        5
            -------------- Affinities --------------
            Affinity Type:                CONTROLLER
            Affinity Charge:              POSITIVE
            Affinity Strength:            STRONG
            Affinity Subject:             node1
            Affinity Type:                CONTROLLER
            Affinity Charge:              POSITIVE
            Affinity Strength:            WEAK
            Affinity Subject:             node2
    >
    The output from "get projects" shows that there are two active managers and controllers, and that the single instance is running from the controller named node1.
    Next, I run the "get project" command, and it shows me the Pid:
    > get project test/test
    Workspace:                    test
    Project:                      test
        ------------------ Instance : 0 ------------------
        Instance Id:                  Id_2000003_1393600620713
        Command Host:                 bryantpark.sybase.com
        Command Port:                 44905
        Gateway Host:                 bryantpark.sybase.com
        Gateway Port:                 41162
        Sql Port:                     47244
        SSL Enabled:                  false
        Big Endian:                   false
        Address Size:                 8
        Date Size:                    8
        Money Precision:              4
        Pid:                          20637
        Topology Ignored:             false
        Timer Interval:               5
        Active-Active:                false
    I run these Linux commands that also confirm to me that I have only one instance running:
    $ ps -ef | grep esp_server
    alices   19963 19961  1 09:45 pts/0    00:00:27 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-node node1.xml --cluster-log-properties node1.log.properties
    alices   20243 20241  1 09:59 pts/1    00:00:15 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-node node2.xml --cluster-log-properties node2.log.properties
    alices   20637 19963  0 10:16 ?        00:00:00 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-container --cluster-container-id Id_2000003_1393600620713 --log-to 4
    alices   20729 18201  0 10:18 pts/2    00:00:00 grep esp_server
    From here, I see that pid 20637's parent pid is 19963, which is the pid for node1 server.
    Next, I kill the active manager and the project to simulate a catastrophic failure:
    $ kill -9  20637 19963
    Due to the affinities, no failover happens to node2:
    $ ps -ef | grep esp_server
    alices   20243 20241  1 09:59 pts/1    00:00:17 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-node node2.xml --cluster-log-properties node2.log.properties
    alices   20764 18201  0 10:21 pts/2    00:00:00 grep esp_server
    Next, I use a different ccr file that does not have affinities configured.
    From esp_cluster_admin, I stop and remove the test/test project and then re-run using the modified ccr that has no affinities.
    $ esp_cluster_admin --uri esp://bryantpark:19012\;bryantpark:19011 --username=sybase
    Password:
    > remove project test/test
    [done]
    > add project test/test bin/test.ccx test-without-affins.ccr
    [done]
    > start project test/test
    [done]
    > get projects
    =============================
    Workspace:                    test
    Project:                      test
    Instance Count:               1
        ----------- Instance Details -----------
        Instance Name:                default
        Controller Name:              node2
        Current Status:               started-running
        Requested Status:             started-running
        Failure Interval:             120
        Failures Per Interval:        5
    This time, the project started on node2.
    > get project test/test
    Workspace:                    test
    Project:                      test
        ------------------ Instance : 0 ------------------
        Instance Id:                  Id_1000003_1393601774031
        Command Host:                 bryantpark.sybase.com
        Command Port:                 46151
        Gateway Host:                 bryantpark.sybase.com
        Gateway Port:                 36432
        Sql Port:                     50482
        SSL Enabled:                  false
        Big Endian:                   false
        Address Size:                 8
        Date Size:                    8
        Money Precision:              4
        Pid:                          21422
        Topology Ignored:             false
        Timer Interval:               5
        Active-Active:                false
    >
    The pid is 21422.
    $ ps -ef | grep esp_server
    alices   21213 21211  4 10:33 pts/0    00:00:07 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-node node1.xml --cluster-log-properties node1.log.properties
    alices   21282 21280  4 10:33 pts/1    00:00:07 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-node node2.xml --cluster-log-properties node2.log.properties
    alices   21422 21282  1 10:36 ?        00:00:00 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-container --cluster-container-id Id_1000003_1393601774031 --log-to 4
    alices   21500 18201  0 10:36 pts/2    00:00:00 grep esp_server
    Pid 21422's parent pid is 21282, which is the pid for node2 server.
    Kill the project and node2:
    $ kill -9  21422 21282
    $ ps -ef | grep esp_server
    alices   21213 21211  4 10:33 pts/0    00:00:08 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-node node1.xml --cluster-log-properties node1.log.properties
    alices   21505 21213 12 10:36 ?        00:00:00 /work/alices/esp514/ESP-5_1/bin/esp_server --cluster-container --cluster-container-id Id_2_1393601813749 --log-to 4 --clus
    alices   21578 18201  0 10:36 pts/2    00:00:00 grep esp_server
    The project automatically fails over now to node1.
    > get managers
    Manager[0]:     node1@http://bryantpark.sybase.com:19011
    > get controllers
    Controller[0]:  node1@http://bryantpark.sybase.com:19011
    > get projects
    =============================
    Workspace:                    test
    Project:                      test
    Instance Count:               1
        ----------- Instance Details -----------
        Instance Name:                default
        Controller Name:              node1
        Current Status:               started-running
        Requested Status:             started-running
        Failure Interval:             120
        Failures Per Interval:        5
    Thanks,
    Alice

  • Add GNS and Configure Oracle Infrastructure for Cluster failed

    Hi,
    I tried to install Oracle Grid Infrastructure 11g on Oracle Linux Enterprise Edition, but I got the following errors while running root.sh:
    *Add GNS 192.168.138.131 -d nisha.......failed
    *Configure Oracle Infrastructure for Cluster failed.
    Please help me out. Thanks in advance.

    Hi,
    thanks for the reply.
    If I do not give a GNS name then I cannot install it; without GNS I am not able to install.
    Is there any way to install without GNS?
    And if I want to do a Typical install, it also asks for a SCAN name,
    so please guide me: what is a SCAN name and how do I configure it?
    @Levi Pereira
    Without that I am not able to install the software, so if I am wrong then
    please help me out: how do I install without GNS?
    Edited by: nisha on May 17, 2011 4:56 AM

  • Failover Cluster Hyper-V Storage Choice

    I am trying to deploy a 2-node Hyper-V failover cluster in a closed environment. My current setup is 2 servers as hypervisors and 1 server as AD DC + storage server. All 3 are running Windows Server 2012 R2.
    Since everything is running on Ethernet, my choice of storage is between iSCSI and SMB 3.0.
    I am more inclined to use SMB 3.0, and I did find some instructions online about setting up a Hyper-V cluster connecting to an SMB 3.0 file server cluster. However, I only have budget for 1 storage server. Is it a good idea to choose SMB over iSCSI
    in this scenario (where there is only 1 storage server for the Hyper-V cluster)?
    What do I need to pay attention to in this setup, apart from some unavoidable single points of failure?
    In the SMB 3.0 file server cluster scenario that I mentioned above, they had to use SAS drives for the file server cluster (CSV). I am guessing that in my scenario SATA drives should be fine, right?

    "I suspect that Starwind solution achieves FT by running shadow copies of VMs on the partner Hypervisor"
    No, it does not run shadow VMs on the partner hypervisor. Starwind is a product in a family known as 'software defined storage'. There are a number of solutions on the market. They all provide a similar service in that they allow for the use of local storage, also known as Direct Attached Storage (DAS), instead of external shared storage for clustering. Each of these products provides some method to mirror or 'RAID' the storage among the nodes in the software-defined storage.
    So, yes, there is some overhead to ensure data redundancy, but none of this class of product will 'shadow' VMs on another node. Products like Starwind, Datacore, and others are nice entry points to HA without the expense of purchasing an external storage
    shelf/array of some sort, because DAS is used instead.
    1) "Software Defined Storage" is a VERY wide term. Many companies use it for solutions that DO require actual hardware to run on. Say Nexenta claims they do SDS and they need a separate physical servers running Solaris and their (Nexenta) storage app. Microsoft
    we all love so much because they give us infrastructure we use to make our living also has Clustered Storage Spaces MSFT tells is a "Software Defined Storage" but they need physical SAS JBODs, SAS controllers and fabric to operate. These are hybrid software-hardware
    solutions. More pure ones don't need any hardware but they still share actual server hardware with hypervisor (HP VSA, VMware Virtual SAN, oh, BTW, it does require flash to operate so it's again not pure software thing). 
    2) Yes there are number of solutions but devil is in details. Technically all virtualization world is sliding away from ancient way of VM-running storage virtualization stacks to ones being part of a hypervisor (VMware Virtual Storage Appliance replaced
    with VMware Virtual SAN is an excellent example). So talking about Hyper-V there are not so many companies who have implemented VM-less solutions. Except the ones you've named it's also SteelEye and that's all probably (Double-Take cannot replicate running
    VMs effectively so cannot be counted). Running storage virtualization stack as part of a Hyper-V has many benefits compared to VM-running stuff:
    - Performance. Obviously kernel-space running DMA engines (StarWind) and polling driver model (DataCore) are faster in terms of latency and IOPS compared to VM-running I/O all routed over VMBus and emulated storage and network hardware.
    - Simplicity. With native apps it's click and install. With VMs it's UNIX management burden (BTW, who will update forked-out Solaris VSA is running on top of? Sun? Out of business. Oracle? You did not get your ZFS VSA from Oracle. Who?) and always "hen and
    chicken" issue. Cluster starts, it needs access to shared storage to spawn a VMs but VMs are inside a VM VSA that need to be spawned. So first you start storage VMs, then make them sync (few terabytes, maybe couple of hours to check access bitmaps for volumes)
    and only after that you can start your other production VMs. Very nice!
    - Scenario limitations. You want to implement a SCV for Scale-Out File Servers? You canont use HP VSA or StorMagic because SoFS and Hyper-V roles cannot mix on the same hardware. To surf SMB 3.0 tide you need native apps or physical hardware behind. 
    That's why current virtualization leader VMware had clearly pointed where these types of things need to run - side-by-side with hypervisor kernel.
    3) DAS is not only cheaper but also faster than SAN and NAS (obviously). Sure, there's no "one size fits all", but unless somebody needs a) very high LUN density (Oracle or a huge SQL database, or maybe SAP) and b) very strict SLAs (a friendly telecom company
    we provide Tier 2 infrastructure for runs cell phone stats on EMC, $1M for a few terabytes; the reason is that the EMC people keep FOUR units like that marked as "spare" and have a requirement to replace a failed one in less than 15 minutes), there's no point in deploying a hardware
    SAN / NAS for shared storage. SAN / NAS is a sustaining innovation and Virtual SAN is disruptive. The disruptive comes to replace the sustaining for 80-90% of business cases, allowing the sustaining to live on in niche deployments. Clayton Christensen's "Innovator's Dilemma".
    Classic. More here:
    Disruptive Innovation
    http://en.wikipedia.org/wiki/Disruptive_innovation
    So I would not consider Software Defined Storage a poor man's HA, or usable for test & development only. The thing has been ready for prime time for a long time. Talk to hardware SAN VARs if you have connections: how many stand-alone units did they sell to SMB
    & ROBO deployments last year?
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Active Failover Cluster of Identity Management

    Could it be possible to create an AFC of IM using Web Cache (WC) as the load balancer?
    Or do we need to acquire an LBR from the tested third-party list?
    http://www.oracle.com/technology/products/ias/hi_av/Tested_LBR_FW_SSLAccel.html
    If the first answer is YES, could someone point me to some notes, please?
    We are talking about a complete IM (SSO, DAS, OCA, OID).
    I guess OID could be a problem... :( since WC can't do LDAP balancing.
    Thanks for your help.
    diego

    When I started this, we had two different vmware servers set up in two isolated datacenters across the factory from one another. The two nodes of our RAC system (mounted on Netapp) follow the same topology. The tech doing the vmware tested a forced failover using a manual method (we didn't need it active at that point), and I didn't even see any downtime. I have no clue how this really works, but it seems fine. The vmware disks are mounted on a SAN which has striping between the two datacenters.
    Since then, they've started creating a vmware cluster which should fail over automatically (apparently), but I haven't seen it tested.
    Your (cold failover) solution sounds fine to me. Your only problem is that the SSOs will stop working while you're doing the failover.
    I once saw a solution somewhere (either at a client or in metalink) that had two OIDs running at the same time behind a hw load balancer with the MR RAC-mounted. I suspect that only one was "active" at a time. I think the solution is difficult because of the way that OID has cached data. I think that as well, some applications keep a persistent connection to OID (Forms runtime?) and these would break on failover.
    Maybe what you could do is have your standby "up" like this and somehow set up an automatic reload of the data on the standby when it becomes primary...
