AGPM High Availability / Fault Tolerance

I’m looking for any information about Advanced Group Policy Management (AGPM) High Availability / Fault Tolerance.
What is most interesting is that the introduction to the AGPM Planning Guide describes a section called
"Planning for AGPM high availability and improved fault tolerance", but the content does not exist.
Microsoft MVP: Virtual Machine | http://blog.porowski.pro |
http://VirtualizationGeek.PL

Hi,
You may type the keywords into the Bing search on the TechNet website to find the information you need. According to my search,
the following two links include detailed information regarding AGPM, but little information regarding AGPM High Availability / Fault Tolerance can be found on the TechNet website.
Advanced Group Policy Management 4.0 Documents
 http://www.microsoft.com/downloads/en/details.aspx?FamilyID=58FD6CDA-0210-4DA0-91D2-8C3CC2817DF8&displaylang=en 
Advanced Group Policy Management
 http://technet.microsoft.com/en-us/library/dd420466.aspx 
Based on the current situation, I will also submit feedback to Microsoft through our internal channel and request that more content regarding
AGPM High Availability / Fault Tolerance be published. While we cannot guarantee that this content will be added, we would like to assure you that we will make every effort to see that it is.
Arthur Li
TechNet Subscriber Support in Forum
If you have any feedback on our support, please contact [email protected].
Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.

Similar Messages

  • UOO sequencing along with WLS high availability cluster and fault tolerance

    Hi WebLogic gurus.
    My customer is currently using the following Oracle products to integrate Siebel Order Mgmt to Oracle BRM:
    * WebLogic Server 10.3.1
    * Oracle OSB 11g
    They use the path service feature of a WebLogic clustered environment.
    They have configured EAI to use the UOO (Unit of Order) WebLogic 10.3.1 feature to preserve the natural order of subsequent modifications to the same entity.
    They are going to apply UOO to a distributed queue for high availability.
    They have the following questions:
    1) If, during the processing of messages having the same UOO, the endpoint becomes unavailable and another node is available for migration, there is a chance that UOO messages still exist on the failed endpoint.
    2) During the migration of the initial endpoint, are these messages persisted? By persisted we mean: when other messages with the same UOO arrive at the migrated endpoint, does the migrated resource also contain the messages that existed before the migration?
    3) During the migration of endpoints, does the client receive error messages?
    I've found an entry in the WLS cluster documentation regarding the fault tolerance of such a solution.
    Special Considerations For Targeting a Path Service
    When the path service for a cluster is targeted to a migratable target, as a best practice, the path
    service and its custom store should be the only users of that migratable target.
    When a path service is targeted to a migratable target it provides enhanced storage of message
    unit-of-order (UOO) information for JMS distributed destinations, since the UOO information
    will be based on the entire migratable target instead of being based only on the server instance
    hosting the distributed destination's member.
    Do you have any feedback to that?
    My customer is worried about losing UOO sequencing during the migration of endpoints!
    best regards & thanks,
    Marco

    First, if using a distributed queue the Forward Delay attribute controls the number of seconds WebLogic JMS will wait before trying to forward the messages. By default, the value is set to −1, which means that forwarding is disabled. Setting a Forward Delay is incompatible with strictly ordered message processing, including the Unit-of-Order feature.
    When using unit-of-order with distributed destinations, you should always send the messages to the distributed destination rather than to one of its members. If you are not careful, sending messages directly to a member destination may result in messages for the same unit-of-order going to more than one member destination and cause you to lose your message ordering.
    When unit-of-order messages are processed, they will be processed in strict order. While the current unit-of-order message is being processed by a message consumer, the next message in the unit-of-order will not be delivered unless it is to the same transaction or session. If no message associated with a particular unit-of-order is processing, then a message associated with that unit-of-order may go to any session that’s consuming from the message’s destination. This guarantees that all messages will be processed one at a time and in order, and any rollback or recover will not prevent ordered processing of the messages.
    The path service uses a persistent store to save the state of which member destination a particular unit-of-order is currently using. When a Path Service receives the first message for a particular unit-of-order bound for a distributed destination, it uses the normal JMS load balancing heuristics to select which member destination will handle the unit and writes that information into its persistent store. The Path Service ensures that a new UOO, or an old UOO that has no messages currently on any destination, can be enqueued anywhere in the cluster. Adding and removing member destinations will not disrupt any existing unit-of-order because the routing decision is made dynamically and those decisions are persistent.
    If the Path Service is unavailable, any requests to create new units-of-order will throw the JMSOrderException until the Path Service is available. Information about existing units-of-order are cached in the connection factory and destination servers so the Path Service availability typically will not prevent existing unit-of-order messages from being sent or processed.
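    For illustration, here is a minimal sketch (an editorial example, not code from this thread) of sending messages with a WebLogic unit-of-order via the weblogic.jms.extensions API; the JNDI names and provider URL are hypothetical placeholders:

    import java.util.Hashtable;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Destination;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import weblogic.jms.extensions.WLMessageProducer;

    public class UooSender {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
            env.put(Context.PROVIDER_URL, "t3://wlshost:7001"); // hypothetical cluster URL
            Context ctx = new InitialContext(env);

            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
            // Send to the distributed destination itself, never to one of its members,
            // or messages for the same unit-of-order may be split across members.
            Destination dest = (Destination) ctx.lookup("jms/MyDistributedQueue");

            Connection con = cf.createConnection();
            Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(dest);

            // WebLogic-specific extension: name the unit-of-order after the entity key,
            // so all updates to that entity are delivered and processed in strict order.
            ((WLMessageProducer) producer).setUnitOfOrder("order-12345");

            TextMessage msg = session.createTextMessage("update #1 for order 12345");
            producer.send(msg);
            con.close();
            ctx.close();
        }
    }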
    Hope this helps.

  • Fault tolerant, highly available BOSE XI R2 questions

    Post Author: waynemr
    CA Forum: Deployment
    I am designing a set of BOSE XI R2 deployment proposals for a customer, and I had a couple of questions about clustering. I understand that I can use traditional Windows clustering to set up an active/passive cluster for the input/output file repositories, so that if one server goes down, the other can seamlessly pick up where it left off. On this Windows-based active/passive cluster, can I install other BOSE services, and will they be redundant, or will they also be active/passive? For example: server A is active and has the input/output file repository services and the Page Server. Server B is passive and also has the input/output file repository services and the Page Server. Can the Page Server on B be actively used as a redundant Page Server for the entire BOSE deployment? (Probably not, but I am trying to check just to make sure.) If I wanted to make the most fault-tolerant deployment possible, I think I would need to:
    - Set up two hardware load-balanced web front-end servers
    - Set up two servers for a clustered CMS
    - Set up two web application servers (hardware load-balanced, or can BOSE do that load-balancing?)
    - Set up two Windows-clustered servers for the input/output file repositories
    - Set up two servers to provide pairs of all of the remaining BOSE services (job servers, page servers, webi, etc.)
    - Set up the CMS, auditing, and report databases on a cluster of some form (MS SQL or Oracle)
    So 10 servers: 2 Windows 2003 Enterprise and 8 Windows 2003 Standard boxes, not including the database environment. Thanks!

    Post Author: jsanzone
    CA Forum: Deployment
    Wayne,
    I hate to beat the old drum, and no I don't work for BusinessObjects education services, but all of your questions and notions of a concept of operations in regards to redundancy/load balancing are easily answered by digesting the special BO course "SA310R2" (BusinessObjects Enterprise XI R1/R2 Administering Servers - Windows).  This course fully covers the topics of master/slave operations, BO's own load balancing operations within its application, and pitfalls to avoid.  Without attending this course, I for one would not have properly understood the BusinessObjects approach and would've been headed on a collision course with disaster in setting up a multi-server environment.
    Best wishes-- John.

  • Advice Requested - High Availability WITHOUT Failover Clustering

    We're creating an entirely new Hyper-V virtualized environment on Server 2012 R2.  My question is:  Can we accomplish high availability WITHOUT using failover clustering?
    So, I don't really have anything AGAINST failover clustering, and we will happily use it if it's the right solution for us, but to be honest, we really don't want ANYTHING to happen automatically when it comes to failover.  Here's what I mean:
    In this new environment, we have architected 2 identical, very capable Hyper-V physical hosts, each of which will run several VMs comprising the equivalent of a scaled-back version of our entire environment.  In other words, there is at least a domain
    controller, multiple web servers, and a (mirrored/HA/AlwaysOn) SQL Server 2012 VM running on each host, along with a few other miscellaneous one-off worker-bee VMs doing things like system monitoring.  The SQL Server VM on each host has about 75% of the
    physical memory resources dedicated to it (for performance reasons).  We need pretty much the full horsepower of both machines up and going at all times under normal conditions.
    So now, to high availability.  The standard approach is to use failover clustering, but I am concerned that if these hosts are clustered, we'll have the equivalent of just 50% hardware capacity going at all times, with full failover in place of course
    (we are using an iSCSI SAN for storage).
    BUT, if these hosts are NOT clustered, and one of them is suddenly switched off, experiences some kind of catastrophic failure, or simply needs to be rebooted while applying WSUS patches, the SQL Server HA will fail over (so all databases will remain up
    and going on the surviving VM), and the environment would continue functioning at somewhat reduced capacity until the failed host is restarted.  With this approach, it seems to me that we would be running at 100% for the most part, and running at 50%
    or so only in the event of a major failure, rather than running at 50% ALL the time.
    Of course, in the event of a catastrophic failure, I'm also thinking that the one-off worker-bee VMs could be replicated to the alternate host so they could be started on the surviving host if needed during a long-term outage.
    So basically, I am very interested in the thoughts of others with experience regarding taking this approach to Hyper-V architecture, as it seems as if failover clustering is almost a given when it comes to best practices and high availability.  I guess
    I'm looking for validation on my thinking.
    So what do you think?  What am I missing or forgetting?  What will we LOSE if we go with a NON-clustered high-availability environment as I've described it?
    Thanks in advance for your thoughts!

    Udo -
    Yes your responses are very helpful.
    Can we use the built-in Server 2012 iSCSI Target Server role to convert the local RAID disks into an iSCSI LUN that the VMs could access?  Or can that not run on the same physical box as the Hyper-V host?  I guess if the physical box goes down
    the LUN would go down anyway, huh?  Or can I cluster that role (iSCSI target) as well?  If not, do you have any other specific product suggestions I can research, or do I just end up wasting this 12TB of local disk storage?
    - Morgan
    That's a bad idea. First of all, the Microsoft iSCSI target is slow (it's non-cached on the server side). So if you really decided to use dedicated hardware for storage (maybe you have a reason I don't know...) and if you're fine with your storage being a single point of failure (OK, maybe your RTOs and RPOs are fair enough), then at least use an SMB share. SMB at least caches I/O on both the client and server sides, and you can also use Storage Spaces as a back end for it (non-clustered), so read "write-back flash cache for cheap". See:
    What's new in iSCSI target with Windows Server 2012 R2
    http://technet.microsoft.com/en-us/library/dn305893.aspx
    Improved optimization to allow disk-level caching
    iSCSI Target Server now sets the disk cache bypass flag on a hosting disk I/O, through Force Unit Access (FUA), only when the issuing initiator explicitly requests it. This change can potentially improve performance.
    Previously, iSCSI Target Server would always set the disk cache bypass flag on all I/O’s. System cache bypass functionality remains unchanged in iSCSI Target Server; for instance, the file system cache on the target server is always bypassed.
    Yes, you can cluster the Microsoft iSCSI target, but a) it would be SLOW, as there is only an active-passive I/O model (no real benefit from MPIO between multiple hosts), and b) it would require shared storage for the Windows cluster. What for? The scenario was useful when a) there was no virtual FC, so a guest VM cluster could not use FC LUNs, and b) there was no shared VHDX, so SAS could not be used for a guest VM cluster either. Now both are present, so the scenario is useless: just export your existing shared storage without any Microsoft iSCSI target and you'll be happy. For references see:
    MSFT iSCSI Target in HA mode
    http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
    Cluster MSFT iSCSI Target with SAS back end
    http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
    Guest VM Cluster Storage Options
    http://technet.microsoft.com/en-us/library/dn440540.aspx
    Storage options
    The following table lists the storage types that you can use to provide shared storage for a guest cluster.
    - Shared virtual hard disk: New in Windows Server 2012 R2, you can configure multiple virtual machines to connect to and use a single virtual hard disk (.vhdx) file. Each virtual machine can access the virtual hard disk just like servers would connect to the same LUN in a storage area network (SAN). For more information, see Deploy a Guest Cluster Using a Shared Virtual Hard Disk.
    - Virtual Fibre Channel: Introduced in Windows Server 2012, virtual Fibre Channel enables you to connect virtual machines to LUNs on a Fibre Channel SAN. For more information, see Hyper-V Virtual Fibre Channel Overview.
    - iSCSI: The iSCSI initiator inside a virtual machine enables you to connect over the network to an iSCSI target. For more information, see iSCSI Target Block Storage Overview and the blog post Introduction of iSCSI Target in Windows Server 2012.
    Storage requirements depend on the clustered roles that run on the cluster. Most clustered roles use clustered storage, where the storage is available on any cluster node that runs a clustered
    role. Examples of clustered storage include Physical Disk resources and Cluster Shared Volumes (CSV). Some roles do not require storage that is managed by the cluster. For example, you can configure Microsoft SQL Server to use availability groups that replicate
    the data between nodes. Other clustered roles may use Server Message Block (SMB) shares or Network File System (NFS) shares as data stores that any cluster node can access.
    Sure you can use third-party software to replicate 12TB of your storage between just a pair of nodes to create a fully fault-tolerant cluster. See (there's also a free offering):
    StarWind VSAN [Virtual SAN] for Hyper-V
    http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
    The product is similar to what VMware has just released for ESXi, except it has been selling for ~2 years, so it is mature :)
    There are other vendors doing this, say DataCore (more aimed at Windows-based FC) and SteelEye (more about geo-clustering & replication). You may want to give them a try.
    Hope this helped a bit :) 
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Hyper-V 2012 High Availability using Windows Server 2012 File Server Storage

    Hi Guys,
    Need your expertise regarding Hyper-V high availability. We set up 2 Hyper-V 2012 hosts in our infra for our domain consolidation project. Unfortunately, we don't have the hardware storage that is said to be a requirement for creating a failover cluster for the Hyper-V hosts to implement HA. Here's the setup:
    Host1
    HP Proliant L380 G7
    Windows Server 2012 Std
    Hyper-V role, Failover Cluster Manager and File and Storage Services installed
    Host2
    Dell PowerEdge 2950
    Windows Server 2012 Std
    Hyper-V role, Failover Cluster Manager and File and Storage Services installed
    Storage
    Dell PowerEdge 6800
    Windows Server 2012 Std
    File and Storage Services installed
    I'm able to configure the new Shared Nothing Live Migration feature - I'm able to move VMs back and forth between my hosts without shared storage. But this is a planned and proactive approach. My concern is making my Hyper-V hosts highly available in the event of a system failure. If my Host1 dies, the VMs should go/move to Host2, and vice versa. In setting this up, I believe I need to enable failover clustering between my Hyper-V hosts, which I already did, but upon validation it says "No disks were found on which to perform cluster validation tests." Is it possible to cluster them using just a regular Windows file server? I've read about SMB 3.0 and I've configured it as well - I'm able to save VMs on my file server, but I don't think that my Hyper-V hosts are highly available yet.
    Any feedback, suggestions or recommendations are highly appreciated. Thanks in advance!

    Your shared storage is a single point of failure in this scenario, so I would not consider the whole setup a production configuration... The setup is also both slow (all I/O travels down the wire to the storage server; running VMs from DAS is far faster) and expensive (a third server + an extra Windows license). I would think twice about what you're doing and either deploy the built-in VM replication technology (Hyper-V Replica) and applications' built-in clustering features that do not require shared storage (SQL Server with Database Mirroring, for example; BTW, what workload do you run?), or use third-party software to create fault-tolerant shared storage from DAS, or invest in physical shared storage hardware (an HA one, of course).
    Hi VR38DETT,
    Thanks for responding. The hosts will host a domain controller (one on each host), web filtering software (Websense), anti-virus (McAfee ePO), WSUS and an audit server at the moment. Does Hyper-V Replica somehow give "high availability" to the VMs or the Hyper-V hosts? Also, is a cluster required in order to implement it? Haven't tried it, but it's worth a try.

  • Deterministic Fault Tolerant Load Balancing

    The USA has an unfortunate penchant for granting patents that arguably do not merit patent protection. Some of these are things that are blindingly obvious. Others are just not sufficiently inventive.
    Anyway, since I have no funds for patent searches, nor patent applications, and there are some other complications, I've decided to post this to establish prior-art for an algorithm. I don't claim that the algorithm is clever, nor novel, nor even that it violates no existing patents. This posting is simply to ensure that to the extent that someone might be granted a patent on it, they can't, because it has already been published.
    The Java connection is that I've done a fair amount of the work required to turn this into a real system in Java.
    Suppose you have a set of processors, p0 through p(n-1), and each piece of work to be performed by a processor has some number k associated with it. The problem is to allocate the work roughly equally across the subset of processors that are actually functioning. Further, over a period of time, a series of related pieces of work may arrive with the same k. To the maximum possible extent you want each of the related pieces of work to be handled by the same processor. If a processor fails, you want its work to be distributed across the remaining processors, but still maintaining the property that pieces of work with a given value for k are handled by the same processor. In general we assume that the k values are randomly spread through a large number space.
    The motivation for these requirements is that for a given k the processor may be caching information that improves performance. Or it may be enforcing some invariant, such as in a lock manager where each request for a given lock must go to the same processor, or it clearly won't function.
    To achieve this, construct a list of integers of size n. Element i contains i if processor i is functional, and -1 otherwise.
    Calculate k mod n, and use the result as an index into the list. If the value contained there is non-negative, then it is the number of the processor to use. If it is -1, remove the element from the list, decrement the value of n and repeat. Continue until a processor number is found.
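    For concreteness, here is a minimal Java sketch of that lookup (an editorial illustration of the algorithm as described above, not the poster's actual code), including the master/slave extension described further below:

    import java.util.ArrayList;
    import java.util.List;

    public class DeterministicBalancer {
        /** Returns the processor for key k; up[i] is true if processor i is functioning. */
        public static int processorFor(long k, boolean[] up) {
            // Element i contains i if processor i is functional, and -1 otherwise.
            List<Integer> table = new ArrayList<>();
            for (int i = 0; i < up.length; i++) {
                table.add(up[i] ? i : -1);
            }
            int n = table.size();
            while (n > 0) {
                int idx = (int) Math.floorMod(k, (long) n);
                int p = table.get(idx);
                if (p >= 0) {
                    return p;      // a functioning processor: use it
                }
                table.remove(idx); // remove the failed slot, decrement n, repeat
                n--;
            }
            throw new IllegalStateException("no processor is functioning");
        }

        /** Slave for k: determined by treating the master as if it were not functioning. */
        public static int slaveFor(long k, boolean[] up) {
            boolean[] withoutMaster = up.clone();
            withoutMaster[processorFor(k, up)] = false;
            return processorFor(k, withoutMaster);
        }

        public static void main(String[] args) {
            boolean[] up = {true, true, false, true};  // processor 2 has failed
            System.out.println(processorFor(42L, up)); // the same k always maps to the same live processor
            System.out.println(slaveFor(42L, up));     // its slave
        }
    }

    Note that a processor failure only reroutes keys that would have landed on the failed slot, which is exactly the locality property claimed below.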
    This scheme is fault tolerant to a degree, in that the resulting system has a high level of availability.
    It also has the property that the failure of a processor only impacts on the allocation of pieces of work that would have been allocated to the failed processor. It does not result in a complete rearrangement of the work allocations. This makes things a lot simpler when dealing with things like distributed lock managers.
    The fault tolerance can be improved by an extension of the algorithm that allows a distributed master/slave arrangement, where the master number for a given k is determined as above, and a slave number is obtained by treating the master as if it were not functioning. Each processor is a master for some subset of the k values, and is a slave for another subset. For any given master, each of the other processors is a slave for a roughly equal portion of the given master's subset of the k values.
    There are some boring details that I've not discussed, such as how an entity wanting work to be done determines which processors are functioning, and the stuff related to the exact sequence of steps that must be performed when a processor breaks, or is repaired. I don't believe anyone could patent them because once you start thinking about it, the steps are pretty obvious.

    I wouldn't be so sure that a simple post to the Java forums is all you need to prove 'prior art', is it?
    Don't you need to actually use it? Or have you seen a lawyer and this was their advice? Even if you have no money for it, I'm sure there are free legal services, even universities, you could contact.
    > I don't believe anyone could patent them because once you start thinking about it, the steps are pretty obvious.
    The steps of anything are generally simple; it's the putting-them-together that you can patent :)

  • Windows Event Collector - Built-in options for load balancing and high availability ?

    Hello,
    I have a working collector. The config is source-initiated and pushed by GPO.
    I would like to deploy a second collector for high availability and load balancing. What are the available options? I have not found any guidance in TechNet articles.
    As a low-cost option, is it fine to simply start using DNS round robin with a common alias for both servers, pushed as the collector name through GPO?
    In my GPO policy, if I declare both servers individually, events are forwarded twice, once to each server. Indeed that covers high availability, but it is not really optimized.
    Thanks for your help.

    Hi,
    >>As a low-cost option, is it fine to simply start using DNS round robin with a common alias for both servers, pushed as the collector name through GPO?
    Based on the description, we can utilize DNS round robin to distribute workloads and increase fault tolerance. By default, DNS uses round robin to rotate the order of RR data returned in query answers where multiple RRs of the same type exist for a queried
    DNS domain name. This feature provides a simple method for load balancing client use of Web servers and other frequently queried multihomed computers. Besides, by default, DNS will perform round-robin rotation for all RR types.
    Regarding DNS round robin, the following article can be referred to for more information.
    Configuring round robin
    http://technet.microsoft.com/en-us/library/cc787484(v=ws.10).aspx
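    As a quick client-side check (an editorial sketch; the alias name is a hypothetical placeholder), resolving the alias returns all A records, and with round robin enabled the order rotates between queries. Note that the JVM caches lookups (see the networkaddress.cache.ttl security property), so the rotation may only be visible across separate runs:

    import java.net.InetAddress;

    public class RoundRobinCheck {
        public static void main(String[] args) throws Exception {
            // Hypothetical common alias pushed through GPO as the collector name.
            String alias = args.length > 0 ? args[0] : "wec-collector.example.com";
            // Prints every A record for the alias, in the order the DNS server returned them.
            for (InetAddress a : InetAddress.getAllByName(alias)) {
                System.out.println(a.getHostAddress());
            }
        }
    }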
    TechNet Subscriber Support
    If you are a TechNet Subscription user and have any feedback on our support quality, please send your feedback here.
    Best regards,
    Frank Shen

  • Fault tolerance for network printing

    What options are available to provide fault tolerance for network printing? Currently, all our shared printers are on ServerA. Is it possible to have similar shared printers on ServerB and have the clients automatically switch to ServerB if ServerA becomes unavailable? We don't care about load balancing as much as fault tolerance. Thanks to all who post!

    A failover cluster node? (Not NLB: servers in a Network Load Balancing (NLB) cluster cannot be used as print servers in Windows Server 2008.)
    For environments that require high availability, you can use a failover cluster as a print server.  If a node in the cluster fails, all print functionality will fail over to the next node in the cluster.  To improve failover times, we recommend
    that the administrator for the cluster force failover to each node when new print drivers are installed on the server.  During a failover, the driver installation is forced to occur on the active node.  The installation of the driver on each node
    can require several minutes. Forcing this installation process during maintenance will make sure that any unplanned failovers during usual operation will be very quick, because the drivers will already be installed on each node.
    Regards, Philippe
    Don't forget to mark as answer or vote as helpful to help identify good information. (A LinkedIn endorsement never hurts either :o))
    Answered an interesting question? Create a wiki article about it!

  • Fault-Tolerant Networks

    Hi,
    For testing, I want to implement one simple fault-tolerant network technique in a virtual environment.
    Can you help me? Thanks

    Hi Arash_89,
    Configuring networks in failover clusters involves many aspects; for example, identifying single points of failure and configuring redundancy at every point in the network is critical to maintaining high availability. You can refer to the following article for more detail:
    Configuring Windows Failover Cluster Networks
    http://blogs.technet.com/b/askcore/archive/2014/02/20/configuring-windows-failover-cluster-networks.aspx
    I’m glad to be of help to you!

  • Fault tolerance hardware with vSphere 5.5

    Hi,
    I'm trying to test vSphere 5.5.
    So I have 5 physical machines: 2 ESXi hosts, 1 domain controller, 1 vCenter, and a FreeNAS box for iSCSI.
    ESXi01: Intel Core i3 540 with 8 GB RAM and 120 GB SSD, 2 NICs
    ESXi02: Intel Core i3 4150 with 8 GB RAM and 250 GB SATA, 2 NICs
    Hosts an OpenBSD VM (128 MB RAM, 1 vCPU)
    I tested:
    - vMotion: works
    - High Availability: works (with EVC enabled).
    Now I have tried Fault Tolerance: no success.
    For FT, do I absolutely need the same CPU on both ESXi hosts? Any advice?
    Thank you very much for your reply.

    Hi,
    Thank you very much.
    I tried the SiteSurvey plugin; now I know why it doesn't work:
    These ESX hosts are not compatible with FT, but may contain VMs that are:
    esxi02.vclass.local
    CPU type Intel(R) Core(TM) i3-4150 CPU @ 3.50GHz is not supported by FT.

  • High availability for file server and exchange with 2 physical servers

    Dear Experts,
    I have 2 physical servers with local disks only. I want to set up the below on them with high availability; please advise the best possible options. We will be using Windows Server 2012 R2.
    1. Domain controller
    2. Exchange 2013
    As of now I am thinking of setting up below:
    1. Install Hyper-v on both and create 3 VM on each as
    - On Host A: 1 VM for DC, 1 VM for file server with DFS namespace and replication for file server HA, and 1 VM for Exchange 2013 CAS/MBX with DAG and DNS RR for Exchange HA
    - On Host B: 1 VM for ADC, 1 VM for file server (DFS member for the above), and 1 VM for Exchange 2013 CAS/MBX as DAG member
    I have read on the internet about the new Scale-Out File Server (SoFS) feature in Windows Server 2012 but am not sure whether it is preferred for file sharing.
    Any advice will be highly appreciated.
    Thanks for the help in advance..
    Best regards,

    DFS is by far not the best way to implement any sort of file server, because a) failover is not fully transparent and does not always happen (say, not on a copy), b) DFS cannot replicate open files, so if you edit a big file and have the node rebooted you're going to lose ALL the transactions/updates you've applied, and c) it actually slows down the config. See:
    DFS for failover
    http://help.globalscape.com/help/wafs3/using_microsoft_dfs_for_failover.htm
    DFS FAQ
    http://technet.microsoft.com/library/cc773238(WS.10).aspx
    (check "open files" point here)
    DFS Performance
    http://blogs.technet.com/b/filecab/archive/2009/08/22/windows-server-dfs-namespaces-performance-and-scalability.aspx
    SoFS a) requires shared storage to run, and you don't have one, b) does not support generic workloads (only Hyper-V and SQL Server), and c) technically makes sense for exposing a SAS JBOD or an existing FC SAN to numerous Hyper-V clients over 10 GbE without needing to invest money in SAS switches and HBAs, FC HBAs, and newly licensed FC ports. Making a long story short: SoFS is NOT YOUR CASE.
    SoFS Overview
    http://technet.microsoft.com/en-us/library/hh831349.aspx
    http://www.aidanfinn.com/?p=12786
    For now you need to find some shared storage to be a back end for your hypervisor config (a SAS JBOD from the supported list, or a virtual SAN from multiple vendors, for example StarWind, see below; make sure you review ALL the vendors) and then create a failover SMB 3.0 share for your file server workload. See:
    Clustered Storage Spaces over SAS JBOD
    http://technet.microsoft.com/en-us/library/jj822937.aspx
    Virtual SAN from inexpensive SATA and no SAS or FC
    http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
    Failover SMB File Server in Windows Server 2012 R2
    http://www.starwindsoftware.com/configuring-ha-file-server-on-windows-server-2012-for-smb-nas
    Fault-tolerant file server on just a pair of nodes
    http://www.starwindsoftware.com/ns-configuring-ha-file-server-for-smb-nas
    For Exchange, use the SMB share from above as the file share witness and use a DAG. See:
    Exchange DAG
    Good luck! Hope this helped :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Fault tolerant server on SLES?

    Hi,
    How would you go about setting up a fault-tolerant SUSE file server? I'd like to mirror eDirectory and the primary NSS shares to another server. Is there a SUSE equivalent of Windows Server Active Directory replication and Distributed File System? Currently running SLES 10 SP3.
    Thanks

    On 20/03/2014 23:16, ataubman wrote:
    > You say SLES but you've posted in OES ... which is it?
    I suspect OES, since eDirectory and NSS are also mentioned. Perhaps at0mic
    can post the output from "cat /etc/*release" so we know.
    > But as a general answer clustering is probably what you're looking for.
    If OES then look at Novell Cluster Services but if SLES then High
    Availability Extension.
    HTH.
    Simon
    Novell Knowledge Partner
    If you find this post helpful and are logged into the web interface,
    please show your appreciation and click on the star below. Thanks.

  • What is the difference between 1 Lync Standard Edition using VMware Fault Tolerance and 2 Lync Enterprise Edition in a cluster

    Hi
    I am planning to set up Lync in a virtual environment, regardless of whether it is going to be the Standard or Enterprise edition.
    I am wondering: if we use 1 Lync Standard Edition FE server with Fault Tolerance enabled, would it be as good as having 2 Lync Enterprise Edition servers in a cluster?
    Thanks

    Hi there,
    The main difference between Lync Enterprise and Lync Standard is the high availability and scalability feature.
    You will get a fault-tolerance setup with one Lync Standard Edition server running on whatever Hyper-V or VMware platform; however, this will not be an optimal highly available solution, for the simple reason that upon a host server failure the image will move to another available host server, and users will lose their active sessions during the move.
    On the other hand, what you gain with the Enterprise edition is a unique identity to which all Lync clients connect: the Lync pool identity, which is handled in the background by multiple Front End servers, AV conferencing pools, mediation pools, and so on.
    In addition, when you have multiple Front Ends in place, they do not work in active/passive mode as in a regular cluster; on the contrary, all the servers are active and handling the workload.
    Hope I made it a bit clearer; if you need more info I am ready.
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread
    Thanks for the response. Being a small environment of 300 users, Standard Edition would be more than enough for me, but HA is very critical at this stage. That is why I am exploring the option of using VMware FT.
    I don't quite get what you mean about users having to move to another image, as my understanding of FT is that in the event of a host failure (not a VM failure like a bluescreen, etc.), the VM will fail over to another host without so much as a lost ping.
    So, in theory, the Lync Server VM would never know that the parent ESX host had failed and that it needed to fail over to another host. Hope my understanding is correct.
    Thanks

  • 2012 R2 - Clustering High Availability File Server

    Hi All
    What's the difference between creating a High Availability Virtual Machine to use as a File Server and creating a 'File Server for general use' in the High Availability Wizard?
    Thanks in advance.

    What's your goal? If you want a file server with no service interruption then you need a generic SMB file server built on top of a guest VM cluster. An HA VM is not going to work for you (service interruption), and SoFS is not going to work for you (workloads other than SQL Server and/or Hyper-V are not supported). So... tell us what you want to do, not what you're doing now :)
    For fault-tolerant file server scenarios see the following links (make sure you replace StarWind with your specific shared storage; I'd suggest using shared VHDX):
    http://technet.microsoft.com/en-us/library/cc731844(v=ws.10).aspx
    http://technet.microsoft.com/en-us/library/cc753969.aspx
    http://social.technet.microsoft.com/Forums/en-US/bc4a1d88-116c-4f2b-9fda-9470abe873fa/fail-over-clustering-file-servers
    http://www.starwindsoftware.com/configuring-ha-file-server-on-windows-server-2012-for-smb-nas
    http://www.starwindsoftware.com/configuring-ha-file-server-for-smb-nas
    Hope this helped :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
    The goal is a file server that's as resilient as possible. I still need to use a DFS namespace, though. It's on a Dell VRTX server with built-in shared storage.
    I'm having issues getting the High Availability wizard for a General Purpose File Server to see local drives :(
    So I'm just trying to understand the key differences between creating one of these and a Hyper-V server with file services installed.

  • High availability for web front-end server

    Dear All
    I am unable to understand the high availability model for a web front-end server.
    I am currently working on MOSS 2007/IIS 7, but I think it will remain the same for all versions.
    I am now running a single WFE server, and my installation mode allows for adding extra servers to the farm.
    Now, when I add an extra server, what will happen next? Should I add extra web applications and site collections? Will load balancing include lists, library items and workflows?
    How will this stuff be stored in one database?
    It's too vague to me, so extra explanation will be appreciated.
    Regards

    To get a fault tolerant environment you need to do two things.
    1) Add a second server that is running the Microsoft SharePoint Foundation Web Service.  That's the service that responds to calls for the web site, making the server a WFE.
    2) Implement some form of load balancing to distribute HTTP requests for SharePoint web sites to the two servers.  This can be done with something as simple as Windows Network Load Balancing or with dedicated hardware like an F5 Load balancer.  
    Once you've added a load balancer and a second server you don't need to do anything else.
    Paul Stork SharePoint Server MVP
    Principal Architect: Blue Chip Consulting Group
    Blog: http://dontpapanic.com/blog
    Twitter: Follow @pstork
    Please remember to mark your question as "answered" if this solves your problem.
