Advice Requested - High Availability WITHOUT Failover Clustering

We're creating an entirely new Hyper-V virtualized environment on Server 2012 R2.  My question is:  Can we accomplish high availability WITHOUT using failover clustering?
So, I don't really have anything AGAINST failover clustering, and we will happily use it if it's the right solution for us, but to be honest, we really don't want ANYTHING to happen automatically when it comes to failover.  Here's what I mean:
In this new environment, we have architected 2 identical, very capable Hyper-V physical hosts, each of which will run several VMs comprising the equivalent of a scaled-back version of our entire environment.  In other words, there is at least a domain
controller, multiple web servers, and a (mirrored/HA/AlwaysOn) SQL Server 2012 VM running on each host, along with a few other miscellaneous one-off worker-bee VMs doing things like system monitoring.  The SQL Server VM on each host has about 75% of the
physical memory resources dedicated to it (for performance reasons).  We need pretty much the full horsepower of both machines up and going at all times under normal conditions.
So now, to high availability.  The standard approach is to use failover clustering, but I am concerned that if these hosts are clustered, we'll have the equivalent of just 50% hardware capacity going at all times, with full failover in place of course
(we are using an iSCSI SAN for storage).
BUT, if these hosts are NOT clustered, and one of them is suddenly switched off, experiences some kind of catastrophic failure, or simply needs to be rebooted while applying WSUS patches, the SQL Server HA will fail over (so all databases will remain up
and going on the surviving VM), and the environment would continue functioning at somewhat reduced capacity until the failed host is restarted.  With this approach, it seems to me that we would be running at 100% for the most part, and running at 50%
or so only in the event of a major failure, rather than running at 50% ALL the time.
Of course, in the event of a catastrophic failure, I'm also thinking that the one-off worker-bee VMs could be replicated to the alternate host so they could be started on the surviving host if needed during a long-term outage.
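For what it's worth, what I have in mind for those worker-bee VMs is Hyper-V Replica, roughly along these lines - this is only a sketch, and the host and VM names below are just placeholders:

# On the alternate host (e.g. HOST2): allow it to receive replicas over Kerberos/HTTP
Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "D:\Replica"
Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTP Listener (TCP-In)"

# On the primary host (e.g. HOST1): replicate a worker-bee VM and kick off the initial copy
Enable-VMReplication -VMName "Monitor01" -ReplicaServerName "HOST2" -ReplicaServerPort 80 `
    -AuthenticationType Kerberos -ReplicationFrequencySec 300
Start-VMInitialReplication -VMName "Monitor01"

During a long-term outage the replicas could then be brought up manually with Start-VMFailover on the surviving host.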
So basically, I am very interested in the thoughts of others with experience in taking this approach to Hyper-V architecture, since failover clustering seems to be almost a given when it comes to best practices and high availability.  I guess
I'm looking for validation of my thinking.
So what do you think?  What am I missing or forgetting?  What will we LOSE if we go with a NON-clustered high-availability environment as I've described it?
Thanks in advance for your thoughts!

Udo -
Yes, your responses are very helpful.
Can we use the built-in Server 2012 iSCSI Target Server role to convert the local RAID disks into an iSCSI LUN that the VMs could access?  Or can that not run on the same physical box as the Hyper-V host?  I guess if the physical box goes down
the LUN would go down anyway, huh?  Or can I cluster that role (iSCSI target) as well?  If not, do you have any other specific product suggestions I can research, or do I just end up wasting this 12TB of local disk storage?
- Morgan
That's a bad idea. First of all, the Microsoft iSCSI target is slow (it is not cached on the server side). So if you have really decided to use dedicated hardware for storage (maybe you have a reason I don't know about...), and if you're fine with your storage being a single
point of failure (OK, maybe your RTOs and RPOs allow it), then at least use an SMB share. SMB does cache I/O on both the client and server sides, and you can use Storage Spaces as its back end (non-clustered), which reads as "write-back flash cache
for cheap" (a rough sketch follows the excerpt below). See:
What's new in iSCSI target with Windows Server 2012 R2
http://technet.microsoft.com/en-us/library/dn305893.aspx
Improved optimization to allow disk-level caching
iSCSI Target Server now sets the disk cache bypass flag on a hosting disk I/O, through Force Unit Access (FUA), only when the issuing initiator explicitly requests it. This change can potentially improve performance.
Previously, iSCSI Target Server would always set the disk cache bypass flag on all I/O’s. System cache bypass functionality remains unchanged in iSCSI Target Server; for instance, the file system cache on the target server is always bypassed.
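Roughly what I mean by "Storage Spaces behind an SMB share", as a sketch only - the pool, volume, and share names below are made up, and the Hyper-V host computer accounts would need full access to the share and the NTFS folder:

# Pool the spare local disks and carve a mirrored space out of them
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "LocalPool" -PhysicalDisks $disks `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem -FriendlyName "*Storage Spaces*").FriendlyName
New-VirtualDisk -StoragePoolFriendlyName "LocalPool" -FriendlyName "VMData" `
    -ResiliencySettingName Mirror -UseMaximumSize   # add -WriteCacheSize if SSDs are in the pool
Get-VirtualDisk -FriendlyName "VMData" | Get-Disk | Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter E -UseMaximumSize | Format-Volume -FileSystem NTFS -Confirm:$false

# Publish it over SMB for the Hyper-V hosts (grant the host computer accounts)
New-Item -Path "E:\VMs" -ItemType Directory | Out-Null
New-SmbShare -Name "VMs" -Path "E:\VMs" -FullAccess 'DOMAIN\HOST1$', 'DOMAIN\HOST2$'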
Yes, you can cluster the Microsoft iSCSI target, but a) it would be SLOW, as there is only an active-passive I/O model (so there is no real benefit from MPIO across multiple hosts), and b) it would itself require shared storage for the Windows cluster. What for? The scenario was
useful when a) there was no virtual FC, so a guest VM cluster could not use FC LUNs, and b) there was no shared VHDX, so SAS could not be used for a guest VM cluster either. Now both are available, so the scenario is pointless: just export your existing shared storage without
any Microsoft iSCSI target in between and you'll be happy. For references see:
MSFT iSCSI Target in HA mode
http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
Cluster MSFT iSCSI Target with SAS back end
http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
Guest VM Cluster Storage Options
http://technet.microsoft.com/en-us/library/dn440540.aspx
Storage options
The following table lists the storage types that you can use to provide shared storage for a guest cluster.
Shared virtual hard disk: New in Windows Server 2012 R2, you can configure multiple virtual machines to connect to and use a single virtual hard disk (.vhdx) file. Each virtual machine can access the virtual hard disk just like servers would connect to the same LUN in a storage area network (SAN). For more information, see Deploy a Guest Cluster Using a Shared Virtual Hard Disk.
Virtual Fibre Channel: Introduced in Windows Server 2012, virtual Fibre Channel enables you to connect virtual machines to LUNs on a Fibre Channel SAN. For more information, see Hyper-V Virtual Fibre Channel Overview.
iSCSI: The iSCSI initiator inside a virtual machine enables you to connect over the network to an iSCSI target. For more information, see iSCSI Target Block Storage Overview and the blog post Introduction of iSCSI Target in Windows Server 2012.
Storage requirements depend on the clustered roles that run on the cluster. Most clustered roles use clustered storage, where the storage is available on any cluster node that runs a clustered
role. Examples of clustered storage include Physical Disk resources and Cluster Shared Volumes (CSV). Some roles do not require storage that is managed by the cluster. For example, you can configure Microsoft SQL Server to use availability groups that replicate
the data between nodes. Other clustered roles may use Server Message Block (SMB) shares or Network File System (NFS) shares as data stores that any cluster node can access.
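For the shared VHDX option from the table above, attaching the same disk to both guest-cluster nodes is one line per VM - a sketch, where the VM names and the path are examples and the .vhdx must sit on a CSV or on a Scale-Out File Server share:

# 2012 R2: -SupportPersistentReservations is what the GUI calls "Enable virtual hard disk sharing"
Add-VMHardDiskDrive -VMName "SQLNODE1" -ControllerType SCSI `
    -Path "C:\ClusterStorage\Volume1\SQLData.vhdx" -SupportPersistentReservations
Add-VMHardDiskDrive -VMName "SQLNODE2" -ControllerType SCSI `
    -Path "C:\ClusterStorage\Volume1\SQLData.vhdx" -SupportPersistentReservations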
Sure, you can use third-party software to replicate the 12TB of your storage between just a pair of nodes to create a fully fault-tolerant cluster. See (there's also a free offering):
StarWind VSAN [Virtual SAN] for Hyper-V
http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
The product is similar to what VMware has just released for ESXi, except it has been on the market for about two years, so it is mature :)
There are other vendors doing this as well, say DataCore (more focused on Windows-based FC) and SteelEye (more about geo-clustering & replication). You may want to give them a try too.
Hope this helped a bit :) 
StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

Similar Messages

  • High availability without RAC DB on windows cluster and multiple DB instanc

    Is it possible to achieve high availability with 2 Oracle databases on Windows 2003 cluster servers and a shared SAN server, without installing and configuring RAC and ASM?
    Can we use Veritas, Symantec, or any other tool to get it done?
    What are the options available to achieve this?
    Appreciate the response.
    Thanks
    Noor

    Please, no double posting; it will not get you answers faster...
    For the answer, see here:
    HA Oracle DB Clustering requirement without RAC DB on shared SAN Server?

  • How to enable high availability on SQL Server 2005 with Windows Server 2008 Enterprise R2

    Dear Folks,
    I would like to ask you about this. I'm working in the IT department of a bank in Myanmar. Our bank has up to 96 branches across all of Myanmar, including H.O. We are using Microsoft SQL Server 2005 with Windows Server 2008 for our banking
    information system. My main problem is having to back up and restore the database backup files every time the servers in the branches go down, for whatever reason. I want to deploy high availability and failover clustering using Windows Server 2008
    and SQL Server 2005. Our branches have 2 servers: one is the primary and the other is the backup. What I want is to make the backup server take over as the primary server whenever the primary server goes down, for whatever reason. All the working data and databases
    from the primary should be replicated immediately to the backup server, along with all the IP information of the primary server. Please give me a step-by-step guide for this process.

    Try the checklist below:
    http://blogs.msdn.com/b/cindygross/archive/2009/10/23/checklist-for-installing-sql-server-2005-as-a-clustered-instance.aspx
    I would also recommend upgrading SQL Server to a newer version, for support as well as flexibility.
    Regards,
    Vishal Patel
    Blog: http://vspatel.co.uk
    Site: http://lehrity.com

  • HANA High Availability System Vs Storage Vs VIP failover

    Dear Experts,
    Hope you're all doing great. I would like to seek your expertise on HANA high availability best practice. We have decided to use TDI for BW on HANA. The next big question for us is how to make it at least 99.99% available.
    I have gone through multiple documents, SDN forums, etc., but I would like to see how the experts are handling this in real deployments.
    My view -
    Virtual IP failover is a common HA practice that has been used to fail over CI/DB hosts on failure or for maintenance. In this case, both nodes can be used to run app servers.
    System replication - HANA-based; requires a secondary standby node, which does not accept user requests but replicates the database from the primary using logs after an initial data snapshot, either synchronously or asynchronously (can be used for HA, or for DR if the servers are in different data centers).
    Storage replication - HANA-based; requires a secondary standby node and replicates the SAN for HA/DR.
    Could you please share the method you followed for HANA HA, and the pros, cons, and challenges that you have faced or are facing?
    Thanks
    Yoga

    Thanks forbrich
    Do you know of any specific doc that describes the installation and configuration steps for 10g RAC on NAS? If possible, can you provide a link that I could use to perform this task?
    I have done RAC installations on SAN without any problems, and it's something I'm fairly experienced with. With NAS I am not really comfortable, since I can't seem to find any documentation that describes a step-by-step installation procedure, or guidelines for that matter.
    Thank you for your input
    Best Regards
    Abbas

  • 2012 R2 - Clustering High Availability File Server

    Hi All
    What's the difference between creating a High Availability Virtual Machine to use as a File Server and creating a 'File Server for general use' in the High Availability Wizard?
    Thanks in advance.

    What's your goal? If you want a file server with no service interruption, then you need a generic SMB file server built on top of a guest VM cluster. An HA VM is not going to work for you (there is a service interruption on failover), and SoFS is not going to work for you (workloads
    other than SQL Server and/or Hyper-V are not supported). So... tell us what you want to do, rather than what you're doing now :)
    For fault-tolerant file server scenarios, see the following links (make sure you replace StarWind with your specific shared storage; I'd suggest using a shared VHDX). A minimal sketch of the clustered role follows the links. See:
    http://technet.microsoft.com/en-us/library/cc731844(v=ws.10).aspx
    http://technet.microsoft.com/en-us/library/cc753969.aspx
    http://social.technet.microsoft.com/Forums/en-US/bc4a1d88-116c-4f2b-9fda-9470abe873fa/fail-over-clustering-file-servers
    http://www.starwindsoftware.com/configuring-ha-file-server-on-windows-server-2012-for-smb-nas
    http://www.starwindsoftware.com/configuring-ha-file-server-for-smb-nas
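    If you do end up with a guest VM cluster, the general-use role itself comes down to something like this - a sketch only; the role name, IP address, disk, and path are examples:
    # On both guest cluster nodes
    Install-WindowsFeature FS-FileServer, Failover-Clustering -IncludeManagementTools
    # On the formed cluster (run on the node that owns the disk): clustered role plus a scoped share
    Add-ClusterFileServerRole -Name "FS01" -Storage "Cluster Disk 1" -StaticAddress 192.168.1.50
    New-SmbShare -Name "Data" -Path "F:\Shares\Data" -ScopeName "FS01" -FullAccess "DOMAIN\Domain Users"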
    Hope this helped :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
    The goal is a file server that's as resilient as possible. I still need to use a DFS namespace, though. It's on a Dell VRTX server with built-in shared storage.
    I'm having issues getting the High Availability wizard for the General Purpose File Server role to see local drives :(
    So I'm just trying to understand the key differences between creating one of these and a Hyper-V virtual machine with file services installed.

  • WebLogic Server clusters not high available!!

    I'm working with WebLogic Server 6.0. I am trying to connect to a cluster using explicit IP addresses.
    In this case, I first cluster servers A, B, and C, and then the client looks up the home interface at server A.
    After server A returns the home reference, the client creates the EJB object and calls methods using that reference.
    If server A fails, the existing client keeps working with the other servers, B and C.
    But if a new client tries to look up the home reference at server A, it can't.
    Does this mean that the WebLogic Server cluster is not highly available, or did I do something wrong in the configuration?
    And if I missed something, how can I fix it?
    Thanks for your attention.

    Hi King,
    If you do a lookup against a non-existent IP/port, it will not respond. That is expected behavior.
    You should not be using an explicit IP like that. At the very least, use DNS round robin.
    Peace,
    Cameron Purdy
    Tangosol Inc.
    Tangosol Coherence: Clustered Coherent Cache for J2EE
    Information at http://www.tangosol.com/
              "KING TEAM" <[email protected]> wrote in message
              news:3c449200$[email protected]..
              >
              > I'm working with WebLogic Server 6.0.
              > I try to connect to cluster using explicit IP addresses.
              >
              > For this case,first ,I cluster server A,B and C ,then client try to look
              up home
              > interface at server A.
              >
              > After server A return home reference ,client creates EJB object and calls
              method
              > using that reference .
              >
              > If server A fail, it still working with others server ,B and C.
              >
              > But if new client try to find home reference at server A ,he can't.
              >
              > This situation means that WebLogic Server cluster is not high availability
              or
              > I did something wrong with configuration?
              >
              > And if I missed something ,how can I fix it?
              >
              > Thanks for your attention.
              >
              >
              

  • High Availability Options Without Enterprise Edition

    I think I've found the answer to this already but I just wanted to confirm that I'm getting this right. In SQL Server 2012, is there no way to implement a high-availability solution which doesn't require shared storage or use of a deprecated feature without
    having Enterprise Edition?
    At the moment we have databases hosted on a primary server running Windows Server 2012 Standard and SQL Server 2012 Standard.  We have a second, identical server, and the databases are mirrored between the two servers using Database Mirroring. 
    This works perfectly and meets all of our requirements.  The only possible problem is that we're looking at storing documents using FILESTREAM, which isn't available in Mirrored databases.
    From what I've read, Database Mirroring is deprecated in SQL Server 2012.  AlwaysOn Availability Groups, which sound great, are only available in the Enterprise Edition.  This feels like a real kick in the teeth from Microsoft.  We're stuck
    with either using a deprecated feature (Mirroring), which is only postponing the problem until the end of 2012's life cycle, or laying out significant amounts of money (in the tens of thousands of pounds GBP) to upgrade to Enterprise Edition, which we just
    don't have the budget for.
    Why couldn't Microsoft continue to offer a high availability option for businesses without deep pockets?  Do I have any options I'm not thinking of?  Shared storage is not an option (too costly, single point of failure, geographically separated
    datacentres).

    Thanks for all the feedback.
    I was forgetting that even data stored as FILESTREAM would need to be backed up using SQL backups, so I guess that's an issue either way; thanks for reminding me, Sebastian.
    Geoff, I agree that replication isn't a viable HA solution.  FCI has lots going for it, but from what I can make out (and I am just reading up on this now, so I could be wrong) it requires either shared storage or a third-party tool to move the data from
    the Primary server's local storage to the local storage of the Secondaries.  It all seems overly complicated when Mirroring does exactly what I need it to do, merrily writing transactions to the mirrors at the same time as the primary databases.
    Additionally, whilst you might be right about making the case for Enterprise Edition, it's difficult to make a clear case to non-technical people when we already have working HA and document management.  Trying to explain the nuanced advantages
    of SQL Server full-text indexing (at present we use a third-party indexing product which stores the index discretely) and of simplified querying when the documents are in the same database as their associated data, as justification for spending tens of thousands
    of pounds, is a challenge!
    David, that's useful to know, about the disadvantages of using FILESTREAM and how just storing the documents in a VARBINARY(MAX) column might actually give better performance, thank you.

  • NAC High Availability: Users getting disconnected during failover

    Hi,
    We have a pair of CAS in in-band virtual-gateway mode configured for high availability.
    We are still running some tests, but we have noticed that the clients lose connectivity during the failover.
    * The service IP is always active (it never stops responding to pings).
    * The standby CAS becomes active immediately after we shut down the primary; we see it on the CAM.
    * The client, however, loses connectivity with the internal network for almost two minutes.
    I'm guessing this isn't normal, but I would like to know what the expected behaviour is here.
    Thanks and regards,

    We configured another pair today and we are noticing the same behaviour; however, it seems random... sometimes the user barely loses the connection, other times it takes 2-5 minutes for it to come back.
    We are only using eth2 for the failover link, since we only have one serial port.
    When we test, we make sure both servers are up and then we reboot the primary. The secondary becomes active immediately. When both are up again, we repeat the process.
    Any other ideas? Something we should check?
    Thanks!

  • SAP NetWeaver 7.30 High-Availability installation without MSCS

    Hi all,
    Do we have any option to install an SAP NetWeaver 7.30 High-Availability System on a third-party cluster solution, other than MS Cluster Service?
    I tried to install the first cluster node without an MSCS configuration; however, sapinst.exe checks whether MSCS is already set up on this node, and the installer aborted.
    I'd like to know how to disable this check (or a way to make the installer behave as if the MSCS cluster configuration were already done).
    If that is not possible, I would have to install MSCS first, then SAP NetWeaver, and then uninstall MSCS and install our cluster software instead;
    that makes no sense to me.
    Best regards,

    Hi Peter,
    Thanks for your quick response.
    It is disappointing to me that the HA installation is so tightly coupled to MSCS.
    Do we have a way to migrate from another installation scenario to High-Availability?
    For example, could we install the ASCS instance as a distributed system first, and then migrate
    it to High-Availability usage by means of profile updates and so on?
    I'm not sure how the High-Availability installation differs from the other installation
    scenarios, except for the installation path and the hostname used for connections.
    Best regards,

  • I'm looking for Failover/High available solutions for IPS 4200 Series

    Hi all,
    I am trying to find failover/high-availability solutions for the IPS 4200 series, but I didn't see any failover solutions in the IPS guide documentation. Can anybody help me?

    I do not know if this is documented anywhere, but I can tell you what I do. As long as the IPS 4200 has power, with the right software settings, the unit can fail such that it will still pass traffic. Should the unit lose power, it does stop all traffic. I run a patch cable in parallel with the inline IPS unit, in the same VLAN, with a higher STP cost. Thus all traffic will traverse the IPS unit when possible, but should something happen to it, a $10 patch cable takes over.
    Mike

  • Sql Server High availability failover trigger

    Hello,
    We are implementing SQL Server 2012 availability groups (AG). Our secondary databases are not accessible, in order to save licenses.
    We have a lot of issues concerning monitoring, backup, and SSIS. They all come down to the fact that these tools want basic information from the secondary, which is not accessible. We are implementing SSIS, which is supported on AG, but the SSISDB is encrypted.
    Backup problem:
    The secondary instance does not know anything about the backups made on the primary instance. After a failover, differential backups fail.
    SSIS problem:
    There is a blog (http://blogs.msdn.com/b/mattm/archive/2012/09/19/ssis-with-alwayson.aspx) that suggests creating a job that checks whether the status has changed from secondary to primary. If so, you can decrypt and re-encrypt SSISDB.
    This job has to be executed every minute, which is way too much effort for an event that happens only once in a while.  There are a few other problems with this solution: the phrase "use ssisdb" has to be included in a job step, and that
    job step fails because the secondary is not accessible.
    Monitoring problems:
    We use Microsoft tooling for monitoring: SCOM. SCOM does not recognize a non-readable secondary and tries to log in continuously.
    There are a few solutions that I can think of:
    -  a built-in SQL Server failover trigger
    -  a special status for the secondary database
    Failover trigger:
    We would like a built-in failover trigger, instead of a time-based job, that starts a few standard maintenance actions at the time of (or directly after) a failover. Because right now our HA cluster is not really highly available until:
    - SSISDB works and is accessible after failover
    - backup information is synchronised
    - SCOM monitoring skips the secondary database (SCOM produces loads of login failures)
    Does anyone have any suggestions for how to fix this?

    No built-in trigger can achieve your requirement.
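    If you keep the polling job from the blog post, the role check it runs boils down to something like this - a sketch using the SQL PowerShell module, with placeholder instance and AG names. As far as I know you can also attach a SQL Agent alert to error 1480 (the availability group role-change error) so that the same job fires only when a failover actually happens, instead of every minute.
    # Run the maintenance steps only on the replica that is currently PRIMARY
    Import-Module SQLPS -DisableNameChecking
    $query = "SELECT rs.role_desc FROM sys.dm_hadr_availability_replica_states rs " +
             "JOIN sys.availability_groups ag ON ag.group_id = rs.group_id " +
             "WHERE rs.is_local = 1 AND ag.name = 'AG1';"
    $role = Invoke-Sqlcmd -ServerInstance "SQLNODE1" -Query $query
    if ($role.role_desc -eq 'PRIMARY') {
        # re-encrypt SSISDB, sync backup history, re-point SCOM, etc. here
    }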

  • Highly Available BIP Clusters

    Thread for HA and clustering questions from the blog entry here:
    http://blogs.oracle.com/xmlpublisher/2007/09/07#a504

    Easy: create a 2-node cluster, nested virtual or physical, it doesn't matter. If virtual, set affinity (or rather the lack of it) to keep the nodes on separate hosts; also build a SQL cluster as well and keep its nodes on separate hosts too (the anti-affinity bit is sketched after the links below). Follow the articles.
    I found no fewer than 10 step-by-step guides when searching, back when I built the HA SCVMM / SQL environment for my company.
    Microsoft articles and third party; as you can see, they are pretty straightforward to follow.  Best of luck.
    http://technet.microsoft.com/en-us/library/gg610678.aspx
    http://www.thomasmaurer.ch/2013/08/how-to-install-a-highly-available-scvmm-management-server/
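    The anti-affinity part, if you go virtual, is just a property on the cluster groups of the Hyper-V cluster hosting the VMs - a sketch; the group names are examples:
    # Tag both SCVMM guest VMs with the same anti-affinity class so the cluster
    # prefers to place them on separate nodes
    $class = New-Object System.Collections.Specialized.StringCollection
    $class.Add("SCVMM") | Out-Null
    (Get-ClusterGroup -Name "SCVMM01").AntiAffinityClassNames = $class
    (Get-ClusterGroup -Name "SCVMM02").AntiAffinityClassNames = $class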
    Brian

  • Hyper-v Failover clustering for SQL Server lab

    Hi,
    After 20 years of IT software experience, I am now realizing that I don't know much about Hyper-V clustering. :) I am trying to set up a lab and badly need help configuring the network adapters
    so that the machines can reach each other. Please see the details below:
    I have a laptop with 8 GB of RAM running Windows 8.1 Hyper-V. I created 4 VMs with Windows 2008 R2 on it, and on top of that I intend to set up SQL Server failover clustering. The VM details are as follows:
    DC - Domain Controller
    SanStorage - MS iSCSI target
    Win1 - Primary node in the cluster that will have SQL Server instance
    Win2 - Secondary node in the cluster that will have SQL Server instance.
    The laptop has 1 Qualcomm network adapter (generally used for the LAN cable) and a Broadcom (Wi-Fi) adapter. I intend to set up three virtual adapters:
    First for regular communication
    Second for data (SQL Server data to be stored on the iSCSI target)
    Third for the heartbeat
    Normally (on my laptop) I connect to the internet using the Broadcom (Wi-Fi) adapter, and I want to ensure that, whenever needed:
    I can run this lab even when neither LAN nor Wi-Fi is connected. If one of them is mandatory, then even that could be an option.
    I should be able to use the internet within all VMs as well, though this is a nice-to-have requirement.
    The internet needs to be available on the laptop at least. Even if it is unavailable in the VMs, that is OK.
    Now the actual problem.
    I DON'T KNOW how many virtual internal/external/private switches should be
    defined and assigned to the individual VMs. What IPs, subnets, DNS servers, and gateways should be defined on them?
    I was able to set things up to a certain extent, and then it stopped working for unknown reasons. I have the VMs ready. What should I do next? Steps:
    Set up network adapters in each VM
    Set up DNS on the DC
    Add the remaining 3 VMs to the domain
    Define Win1 and Win2 as a cluster
    Configure the iSCSI target
    Configure the initiators and access the target
    Install SQL Server
    Is the above correct?
    My queries probably demand a separate article, but I have already referred to many articles and couldn't figure it out, so I'm posting here. I sincerely request your help!
    PS: Even after disabling the firewall (with and without DNS set up), I am unable to ping the VMs from each other.
    Regards,
    ThingToLearn
    Thanks, Pravin

    Hi,
    You need to create another guest VM to act as a NAT device, attach its external adapter to the Wi-Fi adapter or the cable adapter, and then you can put all your other VMs on an internal network
    vSwitch (a sketch of the switch setup follows the article link below).
    The related third-party article:
    Using Wireless with Hyper-V
    http://sqlblog.com/blogs/john_paul_cook/archive/2008/03/23/using-wireless-with-hyper-v.aspx
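    For the switch itself, something along these lines - a sketch; the switch and VM names are examples, and the guests still need static IPs in one subnet with DNS pointed at the DC:
    # On the Windows 8.1 host: one internal switch for the whole lab
    New-VMSwitch -Name "LabInternal" -SwitchType Internal
    # Attach every lab VM to it (add extra adapters on Win1/Win2 if you want dedicated
    # iSCSI and heartbeat networks on separate internal switches)
    "DC", "SanStorage", "Win1", "Win2" | ForEach-Object {
        Add-VMNetworkAdapter -VMName $_ -SwitchName "LabInternal"
    }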
    Hope this helps.

  • Hyper-V 2012 High Availability using Windows Server 2012 File Server Storage

    Hi Guys,
    Need your expertise regarding Hyper-V high availability. We set up 2 Hyper-V 2012 hosts in our infrastructure for our domain consolidation project. Unfortunately, we don't have the hardware storage that is said to be a requirement for creating a failover cluster
    of Hyper-V hosts to implement HA. Here's the setup:
    Host1
    HP Proliant L380 G7
    Windows Server 2012 Std
    Hyper-V role, Failover Cluster Manager and File and Storage Services installed
    Host2
    Dell PowerEdge 2950
    Windows Server 2012 Std
    Hyper-V role, Failover Cluster Manager and File and Storage Services installed
    Storage
    Dell PowerEdge 6800
    Windows Server 2012 Std
    File and Storage Services installed
    I'm able to configure the new Shared Nothing Live Migration feature - I can move VMs back and forth between my hosts without shared storage. But that is a planned, proactive approach. My concern is to have my Hyper-V hosts become highly available in
    the event of a system failure. If host1 dies, the VMs should move to host2, and vice versa. In setting this up, I believe I need to enable failover clustering between my Hyper-V hosts, which I already did, but upon validation it says "No disks
    were found on which to perform cluster validation tests." Is it possible to cluster them using just a regular Windows file server? I've read about SMB 3.0 and I've configured it as well; I'm able to save VMs on my file server, but I don't think that my Hyper-V
    hosts are highly available yet.
    Any feedback, suggestions, or recommendations are highly appreciated. Thanks in advance!

    Your shared storage is a single point of failure in this scenario, so I would not consider the whole setup a production configuration... The setup is also both slow (all I/O travels down the wire to the storage server; running VMs from DAS is ages faster)
    and expensive (a third server + an extra Windows license). I would think twice about what you're doing and either deploy the built-in VM replication technology (Hyper-V Replica) and the applications' own clustering features that do not require shared storage (SQL Server with
    Database Mirroring, for example; BTW, what workload do you run?), or use third-party software that creates fault-tolerant shared storage from DAS, or invest in physical shared storage hardware (an HA unit, of course).
    Hi VR38DETT,
    Thanks for responding. At the moment the hosts will carry a domain controller (one on each host), web filtering software (Websense), anti-virus (McAfee ePO), WSUS, and an audit server. Does Hyper-V Replica give "high availability" to the VMs or to the Hyper-V
    hosts? Also, is a cluster required in order to implement it? Haven't tried that, but it's worth a try.

  • High Availability File Adapter in OSB

    If you use the JCA FileAdapter in OSB, it is necessary to use the eis/HAFileAdapter version, to ensure that only one instance of the adapter picks up a file; you must then configure a coordinator, by setting the
    controlDir, inboundDataSource, outboundDataSource, outboundDataSourceLocal, outboundLockTypeForWrite
    parameters.
    controlDir refers to the filesystem, the others to the DB
    This document http://www.oracle.com/technetwork/database/features/availability/maa-soa-assesment-194432.pdf says
    "Database-based mutex and locks are used to coordinate these operations in a File Adapter clustered topology. Other coordinators are available but Oracle recommends using the Oracle Database."
    Using an Oracle Database as the coordinator means using RAC; otherwise there is no HA.
    I wonder if anybody has been successful setting up HAFileAdapter without using a DB?
    If a DB is required, I am considering using the good old "native" OSB File Poller, since it doesn't require a complicated setup to run in a cluster... but I don't want to use MFL; I would rather use the XSD-based Native Format. Here comes the second question:
    Is it possible to use the nXSD translator with the OSB Native File Poller - instead of the JCA Adapter?
    Thank you so much for your help - it will be rewarded with "helpful/answered" points.
    pierre

    I wonder if anybody has been successful setting up HAFileAdapter without using a DB?
    I have not tried it, but I think there are several options available, including writing your own custom mutex. Please find the details in the Oracle File and FTP Adapters High Availability Configuration section at this link:
    http://download.oracle.com/docs/cd/E14571_01/core.1111/e10106/ha_soa.htm#sthref434
    Is it possible to use the nXSD translator using the OSB Native File Poller - instead of the JCA Adapter?
    When you create a JCA Adapter based proxy service to read the files, the nXSD translation happens before the proxy service is invoked. The JCA engine first reads the data, translates it using nXSD, and then invokes the proxy with the translated content. (You can verify this easily by creating a JCA-based file read service and opening the test console for it in sbconsole; it will show you an XML request instead of native content.)
    So you cannot read the text content using the File Transport of OSB and then call nXSD directly, or call an nXSD-based proxy service.
    HOWEVER, you certainly can use File Transport and nXSD in combination if that's what you want:
    1. Create a Synchronous Read File Adapter with an nXSD created for it
    2. Create a Business Service for that Synchronous Read JCA in OSB
    3. Create a File Transport based proxy service in OSB which will read the content of the file and then call the Business Service to read the content again (which will include the translation, using the nXSD defined in step one, to convert the content to XML).
    So basically you will need to read the file twice! Once using the File Transport proxy service (which will take care of polling in a cluster) and then using the Sync Read JCA based business service (which will do the nXSD translation). To reduce the impact of reading the file twice, you can use trigger files: a File Transport proxy reads the trigger file and invokes the JCA business service to read the actual file for that trigger.
    Another alternative is to create a class similar to the one presented here (http://blogs.oracle.com/adapters/entry/command_line_tool_for_testing), but instead of writing a file it would just return the translated content. Call this class with the native content from the File Transport proxy, using a Java Callout to do the translation.
