VM high availability in a cloud

If a cloud is created with a SAN storage pool, stand-alone (non-clustered) Hyper-V hosts, and network fabrics, will the VMs migrate to another Hyper-V host if one of the Hyper-V hosts fails? Based on the characteristics of a cloud, it seems they should.
Please help clarify.
  

A cloud in VMM is an abstraction of the underlying resources; it is not a technical feature on its own, but depends on the underlying configuration in the fabric.
A cloud can contain several host groups, and each host group can contain a mix of stand-alone hosts and clusters. For VMs to run on a cluster, they must be marked as highly available. This can be set on the template that the VM deployment is based on, or you can force the cloud to deploy them to a cluster.
No matter what you are trying to achieve, high availability for Hyper-V would require the virtualization hosts to be members of a Windows Failover Cluster.
And regarding placement: when you deploy to a cloud in VMM, you don't specify a host. Intelligent placement (in VMM) determines the best-suited host for the configured virtual machine. If you deploy to a host group, you can pick the desired host, although intelligent placement will recommend a host in the wizard.
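To make the idea of placement ratings concrete, here is a purely illustrative sketch in Python. This is not VMM's actual algorithm (its ratings engine weighs many more factors); the host data and scoring formula are invented for the example. It only shows the two behaviours described above: HA-marked VMs are restricted to cluster nodes, and eligible hosts are ranked by free capacity.

```python
# Illustrative only: rank candidate hosts by free capacity, and restrict
# HA-marked VMs to clustered hosts, in the spirit of intelligent placement.

def rate_hosts(hosts, vm_cpu, vm_mem_gb, require_ha=False):
    """Return candidate host names sorted best-first by a simple headroom score."""
    candidates = []
    for h in hosts:
        if require_ha and not h["clustered"]:
            continue  # an HA-marked VM can only land on a cluster node
        free_cpu = h["cpu_cores"] - h["cpu_used"]
        free_mem = h["mem_gb"] - h["mem_used_gb"]
        if free_cpu < vm_cpu or free_mem < vm_mem_gb:
            continue  # host cannot fit the VM at all
        # More remaining headroom after placement -> higher score (invented formula).
        score = (free_cpu - vm_cpu) + (free_mem - vm_mem_gb) / 4.0
        candidates.append((score, h["name"]))
    return [name for score, name in sorted(candidates, reverse=True)]

hosts = [
    {"name": "hv1", "clustered": True,  "cpu_cores": 16, "cpu_used": 10, "mem_gb": 64, "mem_used_gb": 48},
    {"name": "hv2", "clustered": True,  "cpu_cores": 16, "cpu_used": 4,  "mem_gb": 64, "mem_used_gb": 20},
    {"name": "hv3", "clustered": False, "cpu_cores": 8,  "cpu_used": 2,  "mem_gb": 32, "mem_used_gb": 8},
]

print(rate_hosts(hosts, vm_cpu=2, vm_mem_gb=8, require_ha=True))   # clustered hosts only
print(rate_hosts(hosts, vm_cpu=2, vm_mem_gb=8, require_ha=False))  # stand-alone host also eligible
```

With `require_ha=True` the stand-alone host hv3 is filtered out even though it has plenty of headroom, which mirrors why non-clustered hosts in a cloud cannot give you automatic VM failover.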
-kn
Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )

Similar Messages

  • Install Cloud Control 12c High Availability

Good day,
I am installing Cloud Control high availability, but after I install the third-party certificate, the services are not available when I check over HTTP.
I followed these guidelines:
Deploying a Highly Available Enterprise Manager 12c Cloud Control:
http://www.oracle.com/technetwork/oem/framework-infra/wp-em12c-building-ha-level3-1631423.pdf
And for configuring the third-party certificate, the note:
NOTE:1399293.1 - EM 12c Cloud Control How to Create a Wallet With Third Party Trusted Certificate that Can Be Imported into the OMS For SSL Comunication?
I have a couple of questions:
How can I change the CA in the OMS?
How do I export the certificate and key?
Can anyone help me, please?
    Best Regards,
    Alexandra Granados

Hi,
Please follow the procedure below:
    EM11g / EM12c : Using ORAPKI Utility to Create a Wallet with Third Party Trusted Certificate and Import into OMS [ID 1367988.1]

  • Advice Requested - High Availability WITHOUT Failover Clustering

    We're creating an entirely new Hyper-V virtualized environment on Server 2012 R2.  My question is:  Can we accomplish high availability WITHOUT using failover clustering?
    So, I don't really have anything AGAINST failover clustering, and we will happily use it if it's the right solution for us, but to be honest, we really don't want ANYTHING to happen automatically when it comes to failover.  Here's what I mean:
    In this new environment, we have architected 2 identical, very capable Hyper-V physical hosts, each of which will run several VMs comprising the equivalent of a scaled-back version of our entire environment.  In other words, there is at least a domain
    controller, multiple web servers, and a (mirrored/HA/AlwaysOn) SQL Server 2012 VM running on each host, along with a few other miscellaneous one-off worker-bee VMs doing things like system monitoring.  The SQL Server VM on each host has about 75% of the
    physical memory resources dedicated to it (for performance reasons).  We need pretty much the full horsepower of both machines up and going at all times under normal conditions.
    So now, to high availability.  The standard approach is to use failover clustering, but I am concerned that if these hosts are clustered, we'll have the equivalent of just 50% hardware capacity going at all times, with full failover in place of course
    (we are using an iSCSI SAN for storage).
    BUT, if these hosts are NOT clustered, and one of them is suddenly switched off, experiences some kind of catastrophic failure, or simply needs to be rebooted while applying WSUS patches, the SQL Server HA will fail over (so all databases will remain up
    and going on the surviving VM), and the environment would continue functioning at somewhat reduced capacity until the failed host is restarted.  With this approach, it seems to me that we would be running at 100% for the most part, and running at 50%
    or so only in the event of a major failure, rather than running at 50% ALL the time.
    Of course, in the event of a catastrophic failure, I'm also thinking that the one-off worker-bee VMs could be replicated to the alternate host so they could be started on the surviving host if needed during a long-term outage.
    So basically, I am very interested in the thoughts of others with experience regarding taking this approach to Hyper-V architecture, as it seems as if failover clustering is almost a given when it comes to best practices and high availability.  I guess
    I'm looking for validation on my thinking.
    So what do you think?  What am I missing or forgetting?  What will we LOSE if we go with a NON-clustered high-availability environment as I've described it?
    Thanks in advance for your thoughts!
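The capacity trade-off described above can be put into rough numbers. This is back-of-the-envelope arithmetic only, using the two-host scenario from the post; the helper names are invented for illustration.

```python
# Back-of-the-envelope capacity math for the scenario above. A failover
# cluster sized for full failover must reserve enough headroom on the
# surviving hosts to absorb a failed host's entire load.

def usable_fraction_clustered(n_hosts, tolerated_failures=1):
    """Fraction of total capacity usable in normal operation if the cluster
    must survive `tolerated_failures` host failures with full failover."""
    return (n_hosts - tolerated_failures) / n_hosts

def usable_fraction_unclustered(n_hosts):
    # No reserved failover headroom: all capacity is usable in normal
    # operation, at the cost of running degraded (and restarting VMs by
    # hand) after a failure.
    return 1.0

print(f"clustered, 2 hosts:   {usable_fraction_clustered(2):.0%} usable")   # 50%
print(f"unclustered, 2 hosts: {usable_fraction_unclustered(2):.0%} usable") # 100%
print(f"clustered, 4 hosts:   {usable_fraction_clustered(4):.0%} usable")   # 75%
```

The same arithmetic shows why reserving failover headroom hurts less as the cluster grows: with four nodes sized for one failure, 75% of the hardware is usable instead of 50%.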

    Udo -
    Yes your responses are very helpful.
    Can we use the built-in Server 2012 iSCSI Target Server role to convert the local RAID disks into an iSCSI LUN that the VMs could access?  Or can that not run on the same physical box as the Hyper-V host?  I guess if the physical box goes down
    the LUN would go down anyway, huh?  Or can I cluster that role (iSCSI target) as well?  If not, do you have any other specific product suggestions I can research, or do I just end up wasting this 12TB of local disk storage?
    - Morgan
That's a bad idea. First of all, the Microsoft iSCSI target is slow (it's non-cached on the server side). So if you really have decided to use dedicated hardware for storage (maybe you have a reason I don't know about...), and if you're fine with your storage being a single point of failure (OK, maybe your RTOs and RPOs are fair enough), then at least use an SMB share. SMB at least caches I/O on both the client and server sides, and you can also use Storage Spaces as a back end for it (non-clustered), so read "write-back flash cache for cheap". See:
    What's new in iSCSI target with Windows Server 2012 R2
    http://technet.microsoft.com/en-us/library/dn305893.aspx
Improved optimization to allow disk-level caching (updated):
iSCSI Target Server now sets the disk cache bypass flag on a hosting disk I/O, through Force Unit Access (FUA), only when the issuing initiator explicitly requests it. This change can potentially improve performance. Previously, iSCSI Target Server would always set the disk cache bypass flag on all I/Os. System cache bypass functionality remains unchanged in iSCSI Target Server; for instance, the file system cache on the target server is always bypassed.
Yes, you can cluster the Microsoft iSCSI target, but a) it would be slow, as there is only an active-passive I/O model (no real benefit from MPIO between multiple hosts), and b) it would require shared storage for the Windows cluster. What for? The scenario was useful when a) there was no virtual FC, so guest VM clusters could not use FC LUNs, and b) there was no shared VHDX, so SAS could not be used for guest VM clusters either. Now both are present, so the scenario is pointless: just export your existing shared storage without any Microsoft iSCSI target and you'll be happy. For references see:
    MSFT iSCSI Target in HA mode
    http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
    Cluster MSFT iSCSI Target with SAS back end
    http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
Guest VM Cluster Storage Options
http://technet.microsoft.com/en-us/library/dn440540.aspx
Storage options
The following table lists the storage types that you can use to provide shared storage for a guest cluster:
• Shared virtual hard disk - New in Windows Server 2012 R2, you can configure multiple virtual machines to connect to and use a single virtual hard disk (.vhdx) file. Each virtual machine can access the virtual hard disk just like servers would connect to the same LUN in a storage area network (SAN). For more information, see Deploy a Guest Cluster Using a Shared Virtual Hard Disk.
• Virtual Fibre Channel - Introduced in Windows Server 2012, virtual Fibre Channel enables you to connect virtual machines to LUNs on a Fibre Channel SAN. For more information, see Hyper-V Virtual Fibre Channel Overview.
• iSCSI - The iSCSI initiator inside a virtual machine enables you to connect over the network to an iSCSI target. For more information, see iSCSI Target Block Storage Overview and the blog post Introduction of iSCSI Target in Windows Server 2012.
Storage requirements depend on the clustered roles that run on the cluster. Most clustered roles use clustered storage, where the storage is available on any cluster node that runs a clustered role. Examples of clustered storage include Physical Disk resources and Cluster Shared Volumes (CSV). Some roles do not require storage that is managed by the cluster. For example, you can configure Microsoft SQL Server to use availability groups that replicate the data between nodes. Other clustered roles may use Server Message Block (SMB) shares or Network File System (NFS) shares as data stores that any cluster node can access.
    Sure you can use third-party software to replicate 12TB of your storage between just a pair of nodes to create a fully fault-tolerant cluster. See (there's also a free offering):
    StarWind VSAN [Virtual SAN] for Hyper-V
    http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
The product is similar to what VMware has just released for ESXi, except it has been on the market for about 2 years, so it's mature :)
There are other vendors doing this, such as DataCore (more focused on Windows-based FC) and SteelEye (more about geo-clustering and replication). You may want to give them a try.
    Hope this helped a bit :) 
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Hyper-V 2012 High Availability using Windows Server 2012 File Server Storage

    Hi Guys,
Need your expertise regarding Hyper-V high availability. We set up 2 Hyper-V 2012 hosts in our infrastructure for our domain consolidation project. Unfortunately, we don't have the hardware storage that is said to be a requirement for creating a failover cluster of the Hyper-V hosts to implement HA. Here's the setup:
    Host1
    HP Proliant L380 G7
    Windows Server 2012 Std
    Hyper-V role, Failover Cluster Manager and File and Storage Services installed
    Host2
    Dell PowerEdge 2950
    Windows Server 2012 Std
    Hyper-V role, Failover Cluster Manager and File and Storage Services installed
    Storage
    Dell PowerEdge 6800
    Windows Server 2012 Std
    File and Storage Services installed
I was able to configure the new Shared Nothing Live Migration feature; I can move VMs back and forth between my hosts without shared storage. But this is a planned, proactive approach. My concern is to make my Hyper-V hosts highly available in the event of system failure. If host1 dies, the VMs should move to host2, and vice versa. To set this up, I believe I need to enable failover clustering between my Hyper-V hosts, which I already did, but validation says "No disks were found on which to perform cluster validation tests." Is it possible to cluster them using just a regular Windows file server? I've read about SMB 3.0 and configured it as well; I'm able to save VMs on my file server, but I don't think my Hyper-V hosts are highly available yet.
Any feedback, suggestions, or recommendations are highly appreciated. Thanks in advance!

Your shared storage is a single point of failure in this scenario, so I would not consider the whole setup a production configuration... The setup is also both slow (all I/O travels down the wire to the storage server; running VMs from DAS is much faster) and expensive (a third server plus an extra Windows license). I would think twice about what you are doing and either deploy the built-in VM replication technology (Hyper-V Replica) and applications' built-in clustering features that do not require shared storage (SQL Server and Database Mirroring, for example; BTW, what workload do you run?), or use third-party software to create fault-tolerant shared storage from DAS, or invest in physical shared storage hardware (an HA model, of course).
Hi VR38DETT,
Thanks for responding. For the moment, the hosts will run a domain controller (on each host), web filtering software (Websense), anti-virus (McAfee ePO), WSUS, and an audit server. Does Hyper-V Replica somehow give "high availability" to VMs or Hyper-V hosts? Also, is a cluster required to implement it? I haven't tried it, but it's worth a try.

  • How do I connect two VMs in the same cloud service?

I want to set up a DB cluster with two VMs in a single cloud service. My DB (Couchbase) requires being able to reference the other nodes by IP address or hostname. Since VMs change their IP every time they restart, I can't use that, and as far as I can tell the hostname is the same for both VMs in a cloud service. But the Azure docs mention Azure-provided name resolution (here: https://msdn.microsoft.com/en-us/library/azure/jj156088.aspx). I can't find any information on how this service works, or how to find the permanent hostnames it assigns to my VMs within a cloud service. Is there any way to find one VM from another within a cloud service that persists across restarts?

    Hi CanuckAlex;
    Thank you for your post.
    Virtual machines must be in a cloud service, which acts as a container and provides a unique public DNS name, a public IP address, and a set of endpoints to access the virtual machine over the Internet. The cloud service can optionally be in a virtual network.
    If a cloud service is not in a virtual network, the virtual machines in that cloud service can only communicate with other virtual machines through the use of the other virtual machines’ public DNS names, and that traffic would travel over the Internet.
    If a cloud service is in a virtual network, the virtual machines in that cloud service can communicate with all other virtual machines in the virtual network without sending any traffic over the Internet.
    If you place your virtual machines in the same standalone cloud service, you can take advantage of load balancing and availability sets. For details, see
    Load balancing virtual machines and
Manage the availability of virtual machines. However, you cannot organize the virtual machines on subnets or connect a standalone cloud service to your on-premises network.
    If you place your virtual machines in a virtual network, you can decide how many cloud services you want to use to take advantage of load balancing and availability sets. Additionally, you can organize the virtual machines on subnets in the same way as your
on-premises network and connect the virtual network to your on-premises network.
    Virtual networks are the recommended way to connect virtual machines in Azure. The best practice is to configure each tier of your application in a separate cloud service. This enables advanced user rights delegation through Role Based Access Control (RBAC).
    For more information, see
    Role Based Access Control in Azure Preview Portal. However, you may need to combine some virtual machines from different application tiers into the same cloud service to remain within the maximum of 200 cloud services per subscription.
    To connect virtual machines in a virtual network:
    Create the virtual network in the Azure Management Portal. For more information, see
    Virtual Network Configuration Tasks.
    Create the set of cloud services for your deployment in the Azure Management Portal to reflect your design for availability sets and load balancing with
    New > Compute > Cloud Service > Custom Create.
    When you create each new virtual machine, specify the correct cloud service and your virtual network. If the cloud service has been previously associated with your virtual network, its name will already be selected for you.
    Here is an example using the Azure Management Portal.
    To connect virtual machines in a standalone cloud service:
    Create the cloud service for your deployment in the Azure Management Portal with
    New > Compute > Cloud Service > Custom Create.
    When you create the virtual machine, specify the name of cloud service created in step 1. Alternately, you can create the cloud service for your deployment when you create your first virtual machine.
    Here is an example using the Azure Management Portal for the existing cloud service named EndpointTest.
PS: You can assign a static IP address to a VM on Azure; you could refer to this
link here.
    Hope this helps.
    Warm Regards;
    Prasant
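Azure-provided name resolution, as discussed in this thread, can be exercised from inside a VM with a short script. The sketch below is generic Python; `peer-vm` is a placeholder for the actual name of the other VM in your cloud service, and such a name will only resolve from inside an environment where it exists (the demonstration uses `localhost`, which resolves anywhere).

```python
import socket

def resolve_peer(hostname):
    """Resolve a peer VM's current IP by name instead of hard-coding the IP,
    which can change across restarts. Returns None if the name doesn't resolve."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

# Inside an Azure cloud service / virtual network you would resolve the peer
# by its VM name, e.g. resolve_peer("peer-vm") -- "peer-vm" is a placeholder.
# Demonstrated here with a name that resolves everywhere:
print(resolve_peer("localhost"))  # e.g. 127.0.0.1
```

Resolving by name at connection time, rather than caching an IP, is what makes the setup survive VM restarts.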

  • Oracle EBS R12.1.3 High Availability with RAC 11g on VMWARE

    Dear All,
Our customer's requirement is high availability and good utilization of hardware for an EBS R12.1.3 implementation. As per our strategy, we want to use Oracle VM (I think it doesn't require any licensing), RAC 11gR2, and apps-tier load balancing. We have only two servers to achieve all this. On one node there will be two VMs, one for the db tier and one for the apps tier, and the same on the second server. This way we will have four virtual machines on two physical nodes. For all these we will use OEL Linux as the operating system.
Please share your experiences and whether the above deployment model is correct or needs further enhancements.
    Regards.

Regarding Oracle VM licensing: please contact your Oracle Sales representative for license questions.
As for whether the proposed deployment model (RAC 11gR2 with apps-tier load balancing, and four VMs on two physical servers running OEL) is correct or needs further enhancements, please see these docs:
    Using Oracle VM Templates for Oracle E-Business Suite [ID 975734.1]
    Using Oracle VM with Oracle E-Business Suite Release 11i or Release 12 [ID 465915.1]
    Oracle E-Business Suite High Availability on Oracle VM [ID 986690.1]
    Certified Software on Oracle VM [ID 464754.1]
    Oracle VM General Frequently Asked Questions [ID 464756.1]
    Thanks,
    Hussein

  • Problems With the High Availability

    Hi all,
First of all, I'm sorry for my bad English, but I will try to explain my problem as clearly as I can.
I'm using VDI 3.1.1 in a "High Availability with bundled MySQL" configuration, with a primary server and 2 secondary servers.
I'm trying to test HA for the VDI core and failover for VirtualBox. HA works, but with a timeout (about 2 minutes) that I want to reduce.
I found some trouble with the VirtualBox failover.
Here is more explanation:
These tests are done with the VRDP mode.
First test, after installation and configuration: the failover works fine after the crash of server A; the desktops migrate and restart on the other server (server B).
Second test: I reboot server A and test again by shutting down server B (crash). The desktops migrate to server A and are in the "running" state, but SRS can't redirect the display (it loops on the authentication screen)... The VM is running and I can access it with Remote Desktop Connection.
When I looked at the logs, I found that SRS keeps looking for the VM on the server that crashed (server B). I ran the vda-client command for the user concerned, and it shows that the host is the crashed server (server B) with the old port. I opened the MySQL vda database and looked at the data in the t_desktop table: the ip-or-dns and port fields were not updated when the VMs migrated. I updated the two fields with an UPDATE statement with the correct host (server A) and the new port where the VM had migrated, and the display came up in 1 second.
So the VMs migrate fine, but the data in the MySQL database is not updated.
For the WRDP mode, everything works fine, because SRS uses the IP of the VM to redirect the display, not the host and port as in VRDP mode.
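The manual fix described above (repointing the stale host/port row after a failover) can be sketched as follows. This is purely illustrative: the real VDI data store is MySQL, and the table and column names (`t_desktop`, `ip_or_dns`, `port`, taken from the post) may not match your installation exactly, so treat them as assumptions. The sketch uses SQLite only so the example is self-contained.

```python
import sqlite3

# Stand-in for the VDI 'vda' database: a t_desktop table recording which
# VirtualBox host and VRDP port each desktop currently lives on.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t_desktop (desktop_id TEXT, ip_or_dns TEXT, port INTEGER)")
db.execute("INSERT INTO t_desktop VALUES ('desktop-42', 'server-b', 3391)")

# server-b crashed and the VM migrated to server-a on a new port, but the
# row still points at the dead host, so SRS keeps contacting server-b.
# The manual workaround from the post: repoint the row at the live host/port.
db.execute(
    "UPDATE t_desktop SET ip_or_dns = ?, port = ? WHERE desktop_id = ?",
    ("server-a", 3390, "desktop-42"),
)
db.commit()

row = db.execute(
    "SELECT ip_or_dns, port FROM t_desktop WHERE desktop_id = 'desktop-42'"
).fetchone()
print(row)  # ('server-a', 3390)
```

Once the row points at the live host and port, SRS can find the migrated VM, which matches the observation that the display came up immediately after the manual UPDATE.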
I tested this with different configurations:
* HA with bundled MySQL: a primary server and two secondary servers. The two secondary servers contain the VDI core and the VirtualBoxes.
* HA with bundled MySQL: the 3 VDI cores are virtualized on one server, plus two servers with only the VirtualBoxes.
* One server with the VDI core, and two servers with only the VirtualBoxes, to test only the VirtualBox failover.
All of these tests gave the same results.
My questions:
1 - After a crash of a server and after a reboot, is there anything to do? (I checked all the vda services "database, webservices ..." and everything is OK after the reboot.)
2 - Is there a bug in HA with the VRDP mode?
3 - Has anybody had the same problems? If yes, what is the solution?
4 - How can I reduce the timeout for the HA of the VDI core? Is there any process for that?
Thank you all for your help, and again sorry for my bad English; I know I have to improve it :)

I am using VDI version 3.2. The virtualization layer is VirtualBox.
The setup is 1 VDI server, 2 VirtualBox hosts, a remote MySQL database, and a shared storage back end, i.e. NAS.
The VirtualBox session/VM does not fail over, period!
When I pull the plug on the first VirtualBox host, all the desktop sessions on the downed server remain on that server and do not fail over to the other one.
Seriously, has anyone managed to get the HA working? Anyone?
See below for the CLI output.
Started 1 desktop:
root@vdihost-a:~[598]# vda provider-list-hosts virtualbox
NAME           STATUS  ENABLED      CPU_USAGE     MEMORY_USAGE  DESKTOPS
vdi1               OK  Enabled    0% (20 MHz)     10% (1.8 GB)         0
vdi2               OK  Enabled   1% (310 MHz)     16% (2.6 GB)         1
Started 2nd desktop:
root@vdihost-a:~[599]# vda provider-list-hosts virtualbox
NAME           STATUS  ENABLED      CPU_USAGE     MEMORY_USAGE  DESKTOPS
vdi1               OK  Enabled   1% (319 MHz)     17% (2.9 GB)         1
vdi2               OK  Enabled   1% (327 MHz)     16% (2.6 GB)         1
After a power cut to vdi2, the second desktop did not fail over to vdi1; vdi1's desktop count is still 1, instead of 2:
root@vdihost-a:~[600]# vda provider-list-hosts virtualbox
NAME           STATUS  ENABLED      CPU_USAGE     MEMORY_USAGE  DESKTOPS
vdi1               OK  Enabled   0% (140 MHz)     17% (2.9 GB)         1
vdi2     Unresponsive  Enabled     0% (0 MHz)        0% (0 KB)         1
I really want to know if anyone has managed to get this working at all.

  • SSAS High Availability in Azure (not SQL DBE)

    Hello,
    I was wondering what would be my options for having an SSAS environment with High Availability in Azure?
    Please refrain from mentioning HA solutions for the SQL Database Engine itself - that is
    not the issue at hand.
The issue is that, as far as I know (and please correct me if I am wrong), the only way to provide SSAS HA is through clustering that involves shared storage -- which is not available in Azure.
So I was wondering if anyone has come across such a requirement before and if there are any sort of novel/original solutions (thinking outside the box).
    Regards,
    P.

Hi Pmdci,
According to your description, you need to implement an SSAS environment with high availability in Microsoft Azure; the problem is that you need shared storage for this environment, which is not available in Azure, right?
Based on my research, a clustered implementation of SSAS can be configured in Microsoft Azure.
A clustered implementation of SSAS can also span from on-premises to Azure (or any qualified cloud) if networking is properly configured so that the nodes on premises and in the cloud can see each other, talk to Active Directory, and access storage.
http://azure.microsoft.com/en-in/documentation/articles/storage-dotnet-how-to-use-files/
Regards,
Charlie Liao
TechNet Community Support

  • Hardware requirements to setup high available SOA system on production env

    Hi All,
Can anyone tell me the recommended hardware requirements to set up a highly available SOA system on a production machine?
    Thanks
    Jahangir

We have 2 managed servers set up in 2 different VMs (SuSE Linux).
The minimum recommended hardware and system configuration for each VM on which the Oracle SOA Suite, Oracle Service Bus, and Oracle BAM components are installed is:
•     4 CPU cores, each with a clock speed of 1.5 GHz or greater
•     8 GB RAM (physical memory)
•     Disk space: 50 GB of SAN disk space on each VM
Please note we have a separate domain for all three components. Typically the JVM size is 1 GB for the admin server and 2 GB for the managed servers. With this configuration, memory usage is always around 90%. If you want to reduce the memory usage percentage, you may have to increase the RAM size.
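The ~90% figure quoted above is easy to sanity-check with rough arithmetic. The heap sizes are the ones stated in this reply; the JVM non-heap overhead factor and the OS allowance are guesses for illustration only, as is the assumption that one VM carries the admin server plus two managed-server JVMs.

```python
# Rough memory budget for one 8 GB VM, using the heap sizes from the post.
# The overhead factor and OS allowance are assumptions for illustration.
ram_gb = 8
admin_heap_gb = 1          # admin server JVM heap
managed_heap_gb = 2        # per managed-server JVM heap
managed_servers = 2        # assumed: two managed JVMs on this VM
jvm_overhead_factor = 1.3  # assumed non-heap JVM footprint (metaspace, threads, ...)
os_gb = 1.0                # assumed OS + other processes

jvm_gb = (admin_heap_gb + managed_heap_gb * managed_servers) * jvm_overhead_factor
total_gb = jvm_gb + os_gb
print(f"estimated usage: {total_gb:.1f} GB of {ram_gb} GB ({total_gb / ram_gb:.0%})")
```

The estimate lands in the same region as the observed ~90%, which is why adding RAM, rather than shrinking heaps, is the suggested way to lower the percentage.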

  • VSphere High Availability with local storage

I am building a small virtualization system using vSphere 6.0. The system will only run about 10 VMs. I plan to use two ESXi hosts with Essentials Plus.
Unfortunately, high availability is a requirement. Since the system is small, I'd like to avoid a SAN if possible and use local storage. I know I could use vSphere Replication, but if the host that is running vCenter fails, my understanding is that you're unable to restore the replica.
Is there any way to achieve HA without using a SAN or shared storage?
    Thanks!   
    This topic first appeared in the Spiceworks Community

In that field, no IP is configured at all.
We have other sites where HA is working perfectly and checked them; they also have this field empty.
EDIT: I inserted the IP address, but HA still times out.

  • Possible to use Azure SQL as database for RD Connection broker high availability?

I am looking into setting up a highly available Remote Desktop Services deployment on Azure, and I note that high availability for the Connection Broker service needs a full SQL database (rather than the LocalDB used for a single instance). Is it possible to use an Azure SQL database for this purpose (i.e., as opposed to VMs running SQL Server)?
Has anyone had any experience with this? It would be nice, as I am not looking at thousands of users, and Azure SQL would provide high availability for the database without creating multiple (expensive) VMs running SQL Server.

So far there's no such feature in Azure SQL; you can vote for it:
    http://feedback.azure.com/forums/217321-sql-database/suggestions/423275-enable-sql-service-broker-in-sql-azure

  • Grid Control 12c and high availability databases

    Hello,
In the context of high-availability databases, my configuration is as follows:
2 AIX servers on 2 sites with shared storage -> site1 and site2.
During normal operations the databases are on site1. In case of disaster, the databases can fail over to site2.
On my Grid Control, 2 agents are installed: one for AIX on site1 and one for AIX on site2.
The databases have been discovered on AIX site1.
The listener is a virtual address and can also fail over.
What are the possibilities on the Grid side?
Do I also have to discover the databases on site2?
Currently, the listener is seen as down on the site that does not have the databases.
Is there a way to configure only the listener's virtual address in Grid?
Regards

Hi,
Basically you have to relocate the targets to the new agent (the one installed on site2). I haven't done that, but I expect it to work fairly easily and smoothly. Refer to the following section of the Cloud Control 12c manual:
Configuring Targets for Failover in Active/Passive Environments
Regards,
Sve

  • BizTalk 2013 Azure VM (IaaS) High availability

    Hi,
We have a 2 BizTalk, 1 SQL configuration for the BizTalk environment: BizTalk 2013 VMs and a SQL 2012 VM provisioned from the gallery. Since there is a limitation on having a highly available SQL tier (for BizTalk) on Azure, we are evaluating possibly replacing the Azure setup with an on-premises setup, which would guarantee a highly available environment (as the SQL tier would be highly available too, along with the SSO service). I wanted to know your thoughts on the topics below.
1) Is BizTalk 2013 on Azure (IaaS) widely used in production scenarios today (given the limitation for HA)? If yes, what could be the mitigating solution?
2) Is the Microsoft roadmap (BizTalk Server related) going to address this scenario?
Appreciate your inputs in this regard, which would help us make an informed decision on switching back to on-premises.
    Thanks,
    Ujjwal

    Since SQL Server AlwaysOn and database mirroring are not supported with BizTalk, and SQL Server Azure virtual machines do not support Failover Cluster Instances, a clustered SQL tier is not an option in Azure.
    However, by selecting an "Availability Set", your virtual machines will be deployed to different fault domains and update domains, which keeps your application available during network failures, hardware failures, and planned downtime. This greatly increases server/service availability, so the chance of the server being unavailable in Azure is very low.
    If you cannot tolerate any chance of failure and still want to implement Failover Cluster Instances, the simple answer is that SQL Server Azure virtual machines do not support them. If you ask about a mitigation plan, then ensure you select an "Availability Set" while provisioning the servers.
    To answer to you couple of specific questions:
    1) Is BizTalk 2013 on Azure (IaaS) used widely in production scenarios today (given the limitation for HA)? If yes, what could be the mitigating solution? - I can't say how widely BizTalk as IaaS is used; that question has to be asked to Microsoft directly. Regarding the mitigation plan, as mentioned, provisioning your servers across fault domains and update domains minimizes downtime and increases high availability.
    2) Is the roadmap for Microsoft (BizTalk Server related) going to address this scenario? - As of now, SQL Server AlwaysOn and database mirroring are not supported for BizTalk, and the roadmap remains the same.
    If this answers your question please mark it accordingly. If this post is helpful, please vote as helpful by clicking the upward arrow mark next to my reply.
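    As a side note, the availability-set placement described above can also be scripted; a sketch with the current cross-platform Azure CLI (resource group, VM, and image names are placeholders):

    ```shell
    # Create an availability set spanning 2 fault domains and 5 update domains
    az vm availability-set create \
      --resource-group BizTalkRG \
      --name BizTalkAvSet \
      --platform-fault-domain-count 2 \
      --platform-update-domain-count 5

    # Provision each BizTalk VM into that availability set
    az vm create \
      --resource-group BizTalkRG \
      --name BizTalkVM1 \
      --image Win2012Datacenter \
      --availability-set BizTalkAvSet \
      --admin-username azureuser
    ```

    VMs placed in the same availability set are spread across fault and update domains automatically, which is the mitigation discussed in this thread.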

  • SharePoint Foundation 2013 - Can we use the foundation for intranet portal with high availability ( medium farm)

    Today I received a requirement where we have to use SharePoint Foundation 2013 (the free version) to build an intranet portal (basic announcements, calendar, department sites, and document management with only check-in/check-out and versioning).
    Please help me regarding the licensing and size limitations. (I know the feature comparison of Standard/Enterprise; I just want to know about the installation process and licensing.)
    6 servers - 2 app / 2 web / 2 DB cluster (so in total: 6 Windows OS licenses, 2 SQL Server licenses, and I guess no SharePoint licenses)

    Thanks Trevor,
    Is load balance service also comes in free license... So, in that case I can use SharePoint Foundation 2013 version for building a simple Intranet & DMS ( with limited functionality).  And for Workflow and content management we have to write code.
    Windows Network Load Balancing (the NLB feature) is included as part of Windows Server and offers high availability for traffic bound to the SharePoint servers. WNLB can only be associated with up to 4 servers.
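    For reference, a minimal sketch of setting up WNLB with the built-in PowerShell cmdlets; the interface, cluster, IP, and host names here are placeholders, not values from this thread:

    ```powershell
    # Install the NLB feature on each SharePoint web server
    Install-WindowsFeature NLB -IncludeManagementTools

    # On the first node: create the cluster on the chosen NIC
    New-NlbCluster -InterfaceName "Ethernet" `
        -ClusterName "SPWebCluster" `
        -ClusterPrimaryIP 192.168.1.100

    # Join the second web server to the cluster
    Add-NlbClusterNode -NewNodeName "SPWEB02" `
        -NewNodeInterface "Ethernet"
    ```

    The cluster IP then becomes the address users browse to, and NLB distributes requests across the joined web servers.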
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • 11.1.2 High Availability for HSS LCM

    Hello All,
    Has anyone out there successfully exported an LCM artifact (Native Users) through the Shared Services console to the OS file system (<oracle_home>\user_projects\epmsystem1\import_export) when Shared Services is highly available behind a load balancer?
    My current configuration is two load-balanced Shared Services servers using the same HSS repository. Everything works perfectly everywhere except when I try to export anything through LCM from the HSS web console. I have shared the import_export folder on server 1, and I have edited the <oracle_home>\user_projects\epmsystem1\config\FoundationServices1\migration.properties file on the second server to point to the share on the first server. Here is a copy of the migration.properties file from server 2.
    grouping.size=100
    grouping.size_unknown_artifact_count=10000
    grouping.group_by_type=Y
    report.enabled=Y
    report.folder_path=<epm_oracle_instance>/diagnostics/logs/migration/reports
    fileSystem.friendlyNames=false
    msr.queue.size=200
    msr.queue.waittime=60
    group.count=10000
    double-encoding=true
    export.group.count = 30
    import.group.count = 10000
    filesystem.artifact.path=\\server1\import_export
    When I perform an export of just the native users to the file system, the export fails, and I find errors in the log like the following.
    [2011-03-29T20:20:45.405-07:00] [EPMLCM] [NOTIFICATION] [EPMLCM-11001] [oracle.EPMLCM] [tid: 10] [ecid: 0000Iw4ll2fApIOpqg4EyY1D^e6D000000,0] [arg: /server1/import_export/msr/PKG_34.xml] Executing package file - /server1/import_export/msr/PKG_34.xml
    [2011-03-29T20:20:45.530-07:00] [EPMLCM] [ERROR] [EPMLCM-12097] [oracle.EPMLCM] [tid: 10] [ecid: 0000Iw4ll2fApIOpqg4EyY1D^e6D000000,0] [SRC_CLASS: com.hyperion.lcm.clu.CLUProcessor] [SRC_METHOD: execute:?] Cannot find the migration definition file in the specified file path - /server1/import_export/msr/PKG_34.xml.
    [2011-03-29T20:20:45.546-07:00] [EPMLCM] [NOTIFICATION] [EPMLCM-11000] [oracle.EPMLCM] [tid: 10] [ecid: 0000Iw4ll2fApIOpqg4EyY1D^e6D000000,0] [arg: Completed with failures.] Migration Status - Completed with failures.
    I went and looked for the path it is searching for, "/server1/import_export/msr/PKG_34.xml", and found that this file does exist; it was in fact created by the export process, so I know the export is able to find the correct location, but it then says it can't find the file after the fact. I have tried creating mapped drives and editing the migration.properties file to reference the mapped drive, but it gives me the same errors, just with the new path. I also tried the article below, which I found in support, with no result. Please advise, thank you.
    Bug 11683769: LCM MIGRATIONS FAIL WHEN HSS IS SET UP FOR HA AND HSS IS STARTED AS A SERVICE
    Bug Attributes
    Type: B - Defect                        Fixed in Product Version: 11.1.2.0.000PSE
    Severity: 3 - Minimal Loss of Service   Product Version: 11.1.2.0.00
    Status: 90 - Closed, Verified by Filer  Platform: 233 - Microsoft Windows x64 (64-bit)
    Created: 25-Jan-2011                    Platform Version: 2003 R2
    Updated: 24-Feb-2011                    Base Bug: 11696634
    Database Version: 2005                  Affects Platforms: Generic
    Product Source: Oracle
    Related Products
    Line: -    Family: -    Area: -    Product: 4482 - Hyperion Lifecycle Management
    Hdr: 11683769 2005 SHRDSRVC 11.1.2.0.00 PRODID-4482 PORTID-233 11696634
    Abstract: LCM MIGRATIONS FAIL WHEN HSS IS SET UP FOR HA AND HSS IS STARTED AS A SERVICE
    Hyperion Foundation Services is set up for high availability. Lifecycle Management migrations fail when running Hyperion Foundation Services - Managed Server as a Windows service. We applied the following configuration change: set up a shared disk, then modified migration.properties with filesystem.artifact.path=\\<servername>\LCM\import_export. We are able to get the migrations to work if we set filesystem.artifact.path=V:\import_export (a mapped drive to the shared disk) and run Hyperion Foundation Services in console mode.

    It is not possible to implement HA between two different appliances (a 3315 and a 3395).
    A node in HA can have all 3 personas.
    Suppose Node A has Admin (primary), Monitoring (primary), and PSN,
    and Node B has Admin (secondary), Monitoring (secondary), and PSN.
    If Node A becomes unavailable, you will have to promote the Admin role on Node B to primary manually.
    The better layout, though, is:
    Node A: Admin (primary), Monitoring (secondary), and PSN
    Node B: Admin (secondary), Monitoring (primary), and PSN
    Please rate if helpful, and mark as correct if it answers the question.
    Regards,

Maybe you are looking for

  • Enabling 5.1 Audio with Soundblaster 24

    Hello everyone. I'm not sure how many times this question has been asked in the past, but I searched the board for it with no luck. Anyway, I have the Sound Blaster Live! 5.1 24-bit, and the question I have is how do I actually enable the 5.1 sound? I kn

  • Firefox 4.0 does not open website from email link

    I installed Firefox version 4.0 today. Ever since doing so I am unable to open a website from my Windows Mail email program. Version 3.6 did not have this problem. Firefox is my default browser.

  • Connecting Soundblaster Audigy 2zs to Home Theater Amplif

    Can I connect my Sound Blaster Audigy 2 ZS directly to an amplifier and still get a 5.1, 6.1, or 7.1 speaker setup? I feel it's kind of redundant to use a receiver when the Audigy has all the capabilities of a home theater receiver. And it outputs sound at 92k

  • iPad does not support WhatsApp

    iPad does not support WhatsApp

  • Applying template to imported topics creates book folder

    Applying the template creates a book folder for each topic (not visible in the Project view, but it is visible in Windows Explorer) and moves the .htm file into the folder. No other files are in the folder. I am generating WebHelp .htm files, and providin