RD Web Access high availability

I'm looking for information on how to deploy highly available Windows Server 2012 RD Web Access servers; there seems to be little or no information online. Does anyone have experience setting this up? I would like to run two RD Web Access servers in an active/active model. Also, can anyone give any insight into whether this role is happy to be installed on the HA RDCB servers too?
Many thanks

Hi,
Thank you for posting in Windows Server Forum.
We can install the RD Connection Broker role to maintain the RDS farm for RD Web Access. To my knowledge, the RD Web Access role should be installed on a separate server (as with RD Session Host), because this server needs to be reachable from outside the environment. RD Web Access and RD Gateway are typically placed at the edge so that users can open the RD Web Access page and launch RemoteApp programs from the internet; the connection then goes to the RD Connection Broker, which redirects it to an RD Session Host server, with all traffic passing through the RD Gateway.
Sorry to disappoint you, but there is no official documentation for RD Web Access HA. You can use DNS round robin or another load balancing technique in front of the two servers (a minimal sketch follows the links below). Please check the articles below for reference.
RD Connection Broker High Availability in Windows Server 2012
http://blogs.msdn.com/b/rds/archive/2012/06/27/rd-connection-broker-high-availability-in-windows-server-2012.aspx
Step by Step Windows 2012 R2 Remote Desktop Services – Part 3
http://msfreaks.wordpress.com/2013/12/26/windows-2012-r2-remote-desktop-services-part-3/
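For the DNS round robin approach mentioned above, here is a minimal sketch using the DnsServer PowerShell module; the zone name, host name, and IP addresses are hypothetical placeholders for your own environment, and both RD Web Access servers should be configured the same way and present a certificate valid for the shared name.

# Publish one name (rdweb.contoso.com) that resolves to both RD Web Access servers,
# so client requests are distributed by DNS round robin.
Import-Module DnsServer
Add-DnsServerResourceRecordA -ZoneName "contoso.com" -Name "rdweb" -IPv4Address "10.0.0.21"
Add-DnsServerResourceRecordA -ZoneName "contoso.com" -Name "rdweb" -IPv4Address "10.0.0.22"

# Verify that both A records exist for the round-robin name
Get-DnsServerResourceRecord -ZoneName "contoso.com" -Name "rdweb" -RRType A

Note that DNS round robin does not detect a failed server; a hardware or software load balancer in front of the two servers gives true active/active behavior with health checks.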
Hope it helps!
Thanks.
Dharmesh Solanki
TechNet Community Support

Similar Messages

  • Topology.svc - Endpoints - Web Services High Availability

    Hi,
    I was recently performing some simple DRP tests before going to production and I faced some issues I had never encountered before.
    (I followed
    http://blogs.msdn.com/b/besidethepoint/archive/2011/02/19/how-i-learned-to-stop-worrying-and-love-the-sharepoint-topology-service.aspx for useful commands related to endpoints.)
    My farm:
    SP 2013 - CU August 2013
    2 WFE (WFE1, WFE2)
    2 App (App1, App2) : Most services started on both servers (UPSS on APP1, UPS on both) - Central Admin on Both.
    SQL Cluster
    At normal state the command : (Get-SPTopologyServiceApplicationProxy).ApplicationProxies | Format-List *
    returns > ServiceEndpointUri :
    https://app1:32844/Topology/topology.svc
    (If I'm not wrong, this topology.svc can run on only one server at a time.)
    I stopped WFE1: no problem, the NLB appliance did its job.
    Then I stopped App1 and started to have some issues (most endpoints were not rebalanced to App2).
    I run the job "Application Addresses Refresh Job"
    Or launch PS command : Start-SPTimerJob job-spconnectedserviceapplication-addressesrefresh
    Waited 20 seconds.
    A few endpoints are now on APP2 (MMS, Search); it seems to work, and I reached my web page.
    I asked a colleague to try and he got "Sorry, we encountered an error..." > Can't load user profile.
    I refreshed my browser and got the same error.
    Reviewing the ULS logs, I can see that some service requests (mostly user profile / ProfileDBCacheService) are still going to APP1!
    A failure was reported when trying to invoke a service application: EndpointFailure Process Name: w3wp Process ID: 6784 AppDomain Name: /LM/W3SVC/93617642/ROOT-1-130445830237578923 AppDomain ID: 2 Service Application Uri: urn:schemas-microsoft-com:sharepoint:service:e8315f8e5d7d4b1b90876e3b0043a4ae#authority=urn:uuid:164efb17f28c4d2d9702ce3e86f0c0e8&authority=https://app1:32844/Topology/topology.svc
    Active Endpoints: 1 Failed Endpoints:1 Affected Endpoint:
    http://app1:32843/e8315f8e5d7d4b1b90876e3b0043a4ae/ProfileService.svc
    The command (Get-SPTopologyServiceApplicationProxy).ApplicationProxies | Format-List *
    still returns that my topology.svc is on
    https://app1:32844/Topology/topology.svc
    But App1 is down!
    If my understanding is correct: normally, the internal round robin load balancer (the Application Discovery and Load Balancer Service, started on all servers, not configurable) should manage this.
    The Application Addresses Refresh Job runs every 15 minutes and refreshes the available endpoints using topology.svc.
    But the topology.svc being called is always the one on APP1, which is down!
    So far I haven't found out why SharePoint does not detect that APP1 is down and does not automatically stand up the topology service on another available server.
    If you have any idea...your help is welcome :)
    Regards,
    O.P

    Hi  ,
    To achieve web service high availability, you need to make sure each service application has more than one server running its service instance; this increases back-end resiliency when a SharePoint server drops off the network.
    You can refer to the blog:
    http://blogs.msdn.com/b/sambetts/archive/2013/12/05/increasing-service-application-redundancy-high-availability-sharepoint.aspx
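    As a quick way to see which servers actually host a given service instance (and therefore whether you have any redundancy for it), something like the following can be run from the SharePoint 2013 Management Shell; the "User Profile Service" filter is only an example, change it to the service you are checking.
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    # List every server that hosts the User Profile Service instance, with its status
    Get-SPServiceInstance |
        Where-Object { $_.TypeName -eq "User Profile Service" } |
        Select-Object Server, TypeName, Status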
    Thanks,
    Eric
    Forum Support
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support,
    contact [email protected]
    Eric Tao
    TechNet Community Support

  • Wireless access high availability

    Hi,
    I've this scenario:
    - two WLC 4404 located in different sites (site A and site B) with sw release 5.x
    - a wireless mesh area that is linked to these two sites
    - site A and site B are connected via L3 wired link
    - routers that act as a default gateway for the client are located in site A and site B
    Reading some whitepapers on the Cisco website, I understand that I have to configure WLC-A as the primary controller and WLC-B as a secondary backup controller with different mobility groups, but I don't understand where to configure the default gateway IP address for my wireless clients.
    What does a high availability configuration look like for clients connected to the wireless mesh area in this scenario?
    Where should the default gateway IP address for my wireless clients be configured?
    Thanks
    antonio

    is it possible to have one wlc in a city and the other wlc in another city??
    Nothing wrong with this setup.  It all boils down to your routing.  
    then both WLCs management interfaces should be on the same vlan
    Not necessary.  I have seen setups where they use the same VLANs, and I've seen setups where they use different VLANs.  Again, it all boils down to your routing.

  • SCOM 2012 SP1 Web Console High Availability Configuration

    We are a large children's hospital, with many ancillary units that we are trying to get to help monitor their servers/devices/applications through the SCOM web console.  Due to the number of connections we have two standalone web console servers. The
    entire environment is running on Windows Server 2012 Standard Edition servers, and we are running SCOM 2012 SP1 UR3.  When accessing the web console directly via the FQDN of the server, everything works great. We have an HLB virtual IP and DNS entry
    (SCOM.domain.com), which works flawlessly when accessing the default IIS website. However, when accessing the web console URL (http://SCOM.domain.com/OperationsManager) authentication fails (Negotiate: Kerberos, NTLM, Negotiate).  I believe the issue lies in the fact that
    the Application Pools are running under the ApplicationPoolIdentity identity, and the HTTP SPN's are using the machine account.  We have tried changing the identity of the application pools to a domain account (the SDK service account), and modified the
    SPN's so the HTTP SPNs are assigned to the domain account.  We also created SPNs for the HLB VIP DNS alias.  But to no avail.  While we realize that Microsoft does not support load balancing of the web console, there has to be a way to make
    it work.  We will have more than 50 connections, so using one web console server is not an option, and asking our users to use two different URLs is not very user friendly.  If anyone has any suggestions or had any success, please let me know. 
    Any help is greatly appreciated!

    Is switching to forms-based authentication an option? The extra login might be considered a pain, but forms plus SSL is the only load-balanced configuration we have had success with. We use F5s.
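    If you decide to stay on Windows authentication instead, the SPN work described above would look roughly like the sketch below; the service account and domain names are hypothetical, and per the thread this remains an unsupported load-balanced configuration.
    # Register HTTP SPNs for the load-balanced name on the domain account that runs
    # the web console application pools (run elevated, with rights to modify SPNs)
    setspn -S HTTP/SCOM.domain.com CONTOSO\svc-scomweb
    setspn -S HTTP/SCOM CONTOSO\svc-scomweb
    # List the SPNs now registered on the account to confirm
    setspn -L CONTOSO\svc-scomweb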

  • Sorry, we couldn't open your file using this feature. Visio Web Access is not available on this site.

    We recently installed Service Pack 1 in our SharePoint Server 2013 farm; post-upgrade we are experiencing an issue when opening Visio documents.
    I am trying to open a .vsdx (Visio 2013) file but encounter the following error:
    Sorry, we couldn't open your file using this feature. Visio Web Access is not available on this site.
    Under Document library --> Library settings --> Advanced Settings nothing has changed, yet I still can't open the file in the browser as we always used to. Unfortunately we don't have Visio Services in the farm.
    Can you share your experiences regarding this issue post-SP1 on SharePoint Server 2013?
    Thank You

    Hi Octopus,
    Based on the error message, it seems that the Visio Graphics Service is not started or the Enterprise feature is not enabled.
    I recommend checking the things below:
    1. Go to Central Administration > System Settings > Manage services on server and check whether the Visio Graphics Service is started; then go to Application Management > Manage service applications and check whether a Visio Graphics Service application has been created.
    2. Go to the root site settings page of the site where you got this error and click Site collection features to check whether the SharePoint Server Enterprise Site Collection Features are enabled.
    3. Go to the site settings page of the site where you got this error and click Manage site features to check whether the SharePoint Server Enterprise Site Features are enabled.
    More information about the Visio Graphics Service:
    http://tutorial.programming4.us/windows_server/microsoft-sharepoint-2013---looking-at-visio-services-(part-3)---visio-graphics-service-service-application.aspx
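    If you prefer PowerShell over Central Administration for these checks, a rough equivalent is below; PremiumSite and PremiumWeb are, to the best of my knowledge, the internal names of the Enterprise site collection and site features, and the site URL is a placeholder.
    # Check whether the Visio Graphics Service instance is online anywhere in the farm
    Get-SPServiceInstance | Where-Object { $_.TypeName -like "Visio*" } | Select-Object Server, Status
    # Enable the Enterprise features on the affected site collection and site
    Enable-SPFeature -Identity "PremiumSite" -Url "http://sharepoint/sites/reports"
    Enable-SPFeature -Identity "PremiumWeb" -Url "http://sharepoint/sites/reports"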
    Best regards.
    Thanks
    Victoria Xia
    TechNet Community Support

  • Office Web Apps farm - how to make it high available

    Hi there,
    I have deployed Office Web Apps for Lync 2013 with two servers in one farm. When I was testing the case where one server is down, I found the following error:
    Log Name:      Microsoft Office Web Apps
    Source:        Office Web Apps
    Date:          17/03/2014 22:59:54
    Event ID:      8111
    Level:         Error
    Computer:      OWASrv02.domain.com
    Description:
    A Word or PowerPoint front end failed to communicate with backend machine
    http://OWASrv01:809/pptc/Viewing.svc
    Does that mean that Office Web Apps cannot be set up as highly available when using standalone Office Web Apps servers only? The above error appeared while the server OWASrv01 was rebooting.
    Petri

    Hi Petri,
    For your scenario, you have achieved high performance for your Office Web Apps farm but not high availability.
    To build a highly available Office Web Apps farm, you can refer to the blog:
    http://blogs.technet.com/b/meamcs/archive/2013/03/27/office-web-apps-2013-multi-servers-nlb-installation-and-deployment-for-sharepoint-2013-step-by-step-guide.aspx 
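    For reference, a highly available farm needs its URLs to point at a load-balanced name rather than an individual server; a hedged sketch (host names and certificate friendly name are placeholders):
    # On the first Office Web Apps server: create the farm using the load-balanced FQDN
    New-OfficeWebAppsFarm -InternalUrl "https://owa.domain.com" -ExternalUrl "https://owa.domain.com" -CertificateName "OWA SAN Certificate"
    # On the second server: join it to the existing farm
    New-OfficeWebAppsMachine -MachineToJoin "OWASrv01.domain.com"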
    Thanks,
    Eric
    Forum Support
    Please remember to mark the replies as answers
    if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]
    Eric Tao
    TechNet Community Support

  • SAP Web Dispatcher in a high availability environment

    Hello, guys
    We are working in a CRM 7.0 implementation Project. Our system landscape is the following:
       - Two hosts (host1 & host2) in an MSCS cluster (Windows 2008) with SQL Server and ASCS in high availability. Additionally, this MSCS cluster hosts an instance of SAP Web Dispatcher.
       - On these two hosts we've installed a CI & DI instance, outside of the high availability scope
       - Two additional hosts (host3 & host4) with one dialog instance in every host
    We have severe problems with communication between SAP Web Dispatcher and the ICM components. Our configuration schema is the following:
       - ASCS (MSCS_virtual_hostname):
    ms/server_port_0 = PROT=HTTP,PORT=8141
    SAPLOCALHOSTFULL = <MSCS_virtual_hostname>.<domain>
       - IC (host1)
    icm/server_port_0 = PROT=HTTP,PORT=8040,TIMEOUT=90,PROCTIMEOUT=600
    icm/host_name_full = <host1>.<domain>
       - ID1 (host2)
    icm/server_port_0 = PROT=HTTP,PORT=8044,TIMEOUT=90,PROCTIMEOUT=600
    icm/host_name_full = <host2>.<domain>
       - ID3 (host3)
    icm/server_port_0 = PROT=HTTP,PORT=8045,TIMEOUT=90,PROCTIMEOUT=600
    icm/host_name_full = <host3>.<domain>
       - ID4 (host4)
    icm/server_port_0 = PROT=HTTP,PORT=8046,TIMEOUT=90,PROCTIMEOUT=600
    icm/host_name_full = <host4>.<domain>
       - SAP Web Dispatcher (MSCS_virtual_hostname):
    SAPGLOBALHOST = <MSCS_virtual_hostname>
    SAPLOCALHOSTFULL = <MSCS_virtual_hostname>.<domain>
    SAPLOCALHOST = <MSCS_virtual_hostname>
    ms/http_port = 8141
    icm/server_port_0 = PROT=HTTP, PORT=8042,TIMEOUT=30,PROCTIMEOUT=600
    wdisp/add_xforwardedfor_header = TRUE
    In the SAP Web Dispatcher log we've found the following error messages:
    Fri Jan 28 15:45:22 2011
    ***LOG Q0I=> NiPConnect2: connect (10061: WSAECONNREFUSED: Connection refused)
    *** ERROR => NiPConnect2: SiPeekPendConn failed for hdl 6 / sock 130060
        (SI_ECONN_REFUSE/10061; I4; ST; 192.168.6.182:8044)
    *** ERROR => Connection request to host: , service: 8044 failed (NIECONN_REFUSED)
    SAP Web Dispatcher is trying to connect to the dialog instances through , which is incorrect (ports 8044, 8045 & 8046 are open on the dialog instances, not on the virtual instance). I think it should try the real hostnames (host1, host2, host3 & host4).
    Please help! Thanks in advance.

    Hello, Karthi,
    Our Web Dispatcher profile looks as follows:
    Instance specific parameters
    Maybe some of these parameters are needless
    SAPSYSTEMNAME = <CRM SID>
    INSTANCE_NAME = <WD SID>
    SAPSYSTEM = <WD System number>
    SAPGLOBALHOST = <virtual hostname of WD>
    SAPLOCALHOSTFULL = <FQDN of virtual hostname of WD>
    SAPLOCALHOST = <virtual hostname of WD>
    Directories
    DIR_INSTANCE = R:\usr\sap\wd
    DIR_INSTALL = R:\usr\sap\wd
    DIR_CT_RUN = $(DIR_EXE_ROOT)\$(OS_UNICODE)\NTAMD64
    DIR_EXECUTABLE = R:\usr\sap\wd
    DIR_PROFILE = R:\usr\sap\wd
    DIR_HOME = R:\usr\sap\wd
    DIR_ICMAN_ROOT = $(DIR_INSTANCE)\icmanroot
    R:\usr\sap\wd\global\security\data
    Message Server accessibility
    rdisp/mshost = <virtual hostname of CRM Message Server>
    ms/http_port = <HTTP port of CRM Message Server>
    HTTP Settings
    Standard HTTP access port
    icm/server_port_0 = PROT=HTTP, PORT=8042,TIMEOUT=30,PROCTIMEOUT=600
    These parameters define load balancing weights
    #wdisp/server_00 = NAME=<hostname_SID_SYSNR>, LB=4, ACTIVE=0
    #wdisp/server_01 = NAME=<hostname_SID_SYSNR>, LB=10, ACTIVE=1
    #wdisp/server_02 = NAME=<hostname_SID_SYSNR>, LB=20, ACTIVE=1
    #wdisp/server_03 = NAME=<hostname_SID_SYSNR>, LB=20, ACTIVE=1
    Administration web interface access port
    icm/HTTP/admin_0 = PREFIX=/sap/admin, DOCROOT=$(DIR_ICMAN_ROOT)/admin, AUTHFILE=$(DIR_INSTANCE)\sec\icmauth.txt
    SAP Web Dispatcher cache activation
    icm/HTTP/server_cache_0/http_cache_control = true
    icm/HTTP/server_cache_0 = PREFIX=/, CACHEDIR=$(DIR_INSTANCE)\cache
    Security log file
    icm/security_log = LOGFILE=$(DIR_HOME)\log\security_%y%m%d.log, SWITCHTF=day, MAXSIZEKB=1024, FILEWRAP=off
    icm/HTTP/logging_0 = PREFIX=/, LOGFILE=$(DIR_HOME)\log\wd_log_%y%m%d.log, SWITCHTF=day, MAXSIZEKB=1024, FILEWRAP=off
    icm/log_level = 1
    Dispatcher Configuration
    wdisp/add_xforwardedfor_header = FALSE
    Memory parameterization
    Sizing data used as the starting point
    #users = 1800 users (900 concurrent)
    #req_per_dialog_step = 6 HTTP requests per dialog step
    #thinktime_per_diastep_sec = 10 sec of "think time"
    #conn_keepalive_sec = 30 sec keeping the connection to ICM open
    #icm/max_conn = users * req_per_dialog_step * conn_keepalive_sec / thinktime_per_diastep_sec
    icm/max_conn = 16200
    wdisp/HTTP/max_pooled_con = icm/max_conn
    wdisp/HTTP/max_pooled_con = 16200
    icm/max_sockets = at least the sum of icm/max_conn and wdisp/HTTP/max_pooled_con
    icm/max_sockets = 32400
    mpi/buffer_size = 64K = 64 * 1024 = 65536
    mpi/buffer_size = 65536
    mpi/total_size_MB = icm/max_conn * mpi/buffer_size (mpi/buffer_size must be converted to MB)
    mpi/total_size_MB = 1024
    icm/req_queue_len = icm/max_conn / 2
    icm/req_queue_len = 8100
    icm/min_threads = icm/max_conn / ~50
    icm/min_threads = 512
    icm/max_threads = icm/max_conn / ~20
    icm/max_threads = 1024
    Security parameterization
    Avoid sending technical messages to the end user
    is/HTTP/show_detailed_errors = FALSE
    #icm/HTTP/error_templ_path
    And ICM parameters are:
    - SAPLOCALHOSTFULL= <FQDN of every application server>
    - icm/server_port_0 = PROT=HTTP,PORT=8080,TIMEOUT=90,PROCTIMEOUT=600:
    - icm/host_name_full = <FQDN of every application server>  ## This parameter is ignored if SAPLOCALHOSTFULL is defined
    I hope it helps you.
    Best regards,
    Sergio Su00E1nchez

  • Access Services 2013 high available on SQL 2012?

    Hi,
    Does anyone know how we can make the randomly created databases from Access Services 2013 highly available on our SQL 2012 cluster?
    For the SharePoint content databases, we use the AlwaysOn functionality of SQL 2012. This functionality is not supported for Access Services 2013.
    Thanks in advance,
    Johan

    Hi Edwin,
    Thanks again for your answer.
    Failover clustering is a perfect solution for high availability.
    A last additional question:
    Right now we are using AlwaysOn availability groups. The two instances are located at different physical locations, so with that configuration we have both a high availability solution and a disaster recovery solution.
    If we go for a failover cluster we have a high availability solution, but we no longer have a disaster recovery solution, because everything is located at the same physical location.
    We can configure SQL mirroring, but then we have the same behavior as with the AlwaysOn groups: it is not possible to delete databases from SharePoint, and newly created Access Services 2013 databases are not automatically mirrored.
    Is there any solution/configuration to accomplish this?
    Thanks,
    Johan

  • Detail on High Availability options for Web Apps

    Hi,
    I really struggle to locate concrete information on Azure availability offerings and capabilities; as an Infrastructure Architect this has bugged me for years with Azure offerings!
    I'm after something that simply states the availability within any single datacenter and the options for true HA across two or more datacenters.
    We are moving away from Web Roles to Web Apps for our clients' solutions. I understand the principles of fault domains etc. with the Web Role requirement for a minimum of two instances to (mostly) avoid disruption from Microsoft updates within a single datacenter, but I cannot locate similar information with regard to Web Apps.
    I'd really appreciate it if someone could point me to some appropriate detail, as I've failed to find it.
    (Also, I cannot find anything on DocumentDB.)
    Many Thanks,
    Lee

    Hi,
    High availability of a running service always comes with a cost, and priorities will be app-specific. If it's the web tier, then you may indeed want to consider hosting in multiple geos. If it's a backend processing tier, sometimes it's "ok" to let a service go offline, as long as requests are queued up. If it's the storage system (preventing queueing of messages), perhaps an alternate queue in a different data center could be available for redundancy purposes.
    I would request you to check this article:
    https://msdn.microsoft.com/en-us/library/azure/dn251004.aspx
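    For the multi-datacenter case, the usual approach at the time was to deploy the web app in two regions and put Azure Traffic Manager in front of them. A rough sketch with the classic (Service Management) Azure PowerShell module follows; the profile name, domain names, and regions are hypothetical, and the cmdlets and parameters should be checked against your module version.
    # Create a Traffic Manager profile that fails over between two regional web apps
    $tmProfile = New-AzureTrafficManagerProfile -Name "myapp-tm" -DomainName "myapp.trafficmanager.net" -LoadBalancingMethod "Failover" -Ttl 30 -MonitorProtocol "Http" -MonitorPort 80 -MonitorRelativePath "/"
    # Add one endpoint per regional web app, then persist the changes
    $tmProfile = Add-AzureTrafficManagerEndpoint -TrafficManagerProfile $tmProfile -DomainName "myapp-westeurope.azurewebsites.net" -Status "Enabled" -Type "AzureWebsite"
    $tmProfile = Add-AzureTrafficManagerEndpoint -TrafficManagerProfile $tmProfile -DomainName "myapp-northeurope.azurewebsites.net" -Status "Enabled" -Type "AzureWebsite"
    Set-AzureTrafficManagerProfile -TrafficManagerProfile $tmProfile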
    Hope this information helps.
    Regards,
    Azam khan

  • High availability for Lync 2013 persistent chat server and office web app server

    I have 1500 users and need HA in the primary data center plus DR. I'm looking for an HA and DR solution for the Persistent Chat server and the Office Web Apps server.
    Is the below correct?
    1. Two Persistent Chat servers in a pool in the primary data center and one in DR. Can this be reduced, or are any changes needed?
    2. Two Office Web Apps servers in a pool in the primary data center and one in DR. Can this be reduced, or are any changes needed?
    Also, do I need an HLB for both roles?

    1) In Lync Server 2013, there are improvements in both high availability and disaster recovery:
    High availability improvements: SQL Server mirroring is used to provide high availability for the Persistent Chat Server content database and Persistent Chat compliance database within a data center (in-site).
    Disaster recovery improvements: Persistent Chat Server supports a stretched pool architecture that enables a single Persistent Chat Server pool to be stretched across two sites (that is, a single logical pool in the topology, with servers in the pool physically
    located across two sites). SQL Server Log Shipping is used for cross-site disaster recovery.
    For more information about high availability and disaster recovery, see
    Configuring Persistent Chat Server for High Availability and Disaster Recovery in the Deployment documentation.
    2) For HA and DR, you can deploy two Office Web Apps servers in a pool in the primary data center and one in DR, and you will need an HLB for the Office Web Apps servers:
    http://blogs.technet.com/b/meamcs/archive/2013/03/27/office-web-apps-2013-multi-servers-nlb-installation-and-deployment-for-sharepoint-2013-step-by-step-guide.aspx
    Please remember, if you see a post that helped you please click "Vote As Helpful" and if it answered your question, please click "Mark As Answer"
    Mai Ali | My blog: Technical | Twitter:
    Mai Ali

  • High availability for Web front END server

    Dear All
    I am unable to understand the high availability model for web front-end servers.
    I am currently working on MOSS 2007 / IIS 7, but I think it is the same for all versions.
    I am now running a single WFE server, and my installation mode allows for adding extra servers to the farm.
    When I add an extra server, what happens next? Should I add extra web applications and site collections? Will load balancing include lists, library items and workflows?
    How will all of this be stored in one database?
    It's too vague to me, so extra explanation would be appreciated.
    Regards

    To get a fault tolerant environment you need to do two things.
    1) Add a second server that is running the Microsoft SharePoint Foundation Web Service.  That's the service that responds to calls for the web site, making the server a WFE.
    2) Implement some form of load balancing to distribute HTTP requests for SharePoint web sites to the two servers.  This can be done with something as simple as Windows Network Load Balancing or with dedicated hardware like an F5 Load balancer.  
    Once you've added a load balancer and a second server you don't need to do anything else.
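    If you go the Windows NLB route from step 2, a rough sketch of setting it up from PowerShell is below (available on Windows Server 2008 R2 and later; on older versions use the NLB Manager GUI). The interface name, cluster name, node name, and IP are placeholders.
    Import-Module NetworkLoadBalancingClusters
    # On the first WFE: create the NLB cluster with the shared virtual IP
    New-NlbCluster -InterfaceName "Ethernet" -ClusterName "spweb" -ClusterPrimaryIP 192.168.1.50 -OperationMode Multicast
    # Add the second WFE to the cluster
    Get-NlbCluster | Add-NlbClusterNode -NewNodeName "WFE2" -NewNodeInterface "Ethernet"
    Point the load-balanced DNS name for your web application at the cluster IP once both nodes have converged.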
    Paul Stork SharePoint Server MVP
    Principal Architect: Blue Chip Consulting Group
    Blog: http://dontpapanic.com/blog
    Twitter: Follow @pstork
    Please remember to mark your question as "answered" if this solves your problem.

  • Advice Requested - High Availability WITHOUT Failover Clustering

    We're creating an entirely new Hyper-V virtualized environment on Server 2012 R2.  My question is:  Can we accomplish high availability WITHOUT using failover clustering?
    So, I don't really have anything AGAINST failover clustering, and we will happily use it if it's the right solution for us, but to be honest, we really don't want ANYTHING to happen automatically when it comes to failover.  Here's what I mean:
    In this new environment, we have architected 2 identical, very capable Hyper-V physical hosts, each of which will run several VMs comprising the equivalent of a scaled-back version of our entire environment.  In other words, there is at least a domain
    controller, multiple web servers, and a (mirrored/HA/AlwaysOn) SQL Server 2012 VM running on each host, along with a few other miscellaneous one-off worker-bee VMs doing things like system monitoring.  The SQL Server VM on each host has about 75% of the
    physical memory resources dedicated to it (for performance reasons).  We need pretty much the full horsepower of both machines up and going at all times under normal conditions.
    So now, to high availability.  The standard approach is to use failover clustering, but I am concerned that if these hosts are clustered, we'll have the equivalent of just 50% hardware capacity going at all times, with full failover in place of course
    (we are using an iSCSI SAN for storage).
    BUT, if these hosts are NOT clustered, and one of them is suddenly switched off, experiences some kind of catastrophic failure, or simply needs to be rebooted while applying WSUS patches, the SQL Server HA will fail over (so all databases will remain up
    and going on the surviving VM), and the environment would continue functioning at somewhat reduced capacity until the failed host is restarted.  With this approach, it seems to me that we would be running at 100% for the most part, and running at 50%
    or so only in the event of a major failure, rather than running at 50% ALL the time.
    Of course, in the event of a catastrophic failure, I'm also thinking that the one-off worker-bee VMs could be replicated to the alternate host so they could be started on the surviving host if needed during a long-term outage.
    So basically, I am very interested in the thoughts of others with experience regarding taking this approach to Hyper-V architecture, as it seems as if failover clustering is almost a given when it comes to best practices and high availability.  I guess
    I'm looking for validation on my thinking.
    So what do you think?  What am I missing or forgetting?  What will we LOSE if we go with a NON-clustered high-availability environment as I've described it?
    Thanks in advance for your thoughts!
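    For the worker-bee VMs mentioned above, Hyper-V Replica would presumably cover that without a cluster; a minimal sketch of what I have in mind (host and VM names are hypothetical):
    # On the intended replica host (HV02): allow it to receive replication over Kerberos/HTTP
    Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "D:\Replica"
    # On the primary host (HV01): enable replication for a worker VM and start the initial copy
    Enable-VMReplication -VMName "Monitor01" -ReplicaServerName "HV02.domain.com" -ReplicaServerPort 80 -AuthenticationType Kerberos
    Start-VMInitialReplication -VMName "Monitor01"
    Failover is manual (Start-VMFailover), which fits our preference that nothing happens automatically.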

    Udo -
    Yes your responses are very helpful.
    Can we use the built-in Server 2012 iSCSI Target Server role to convert the local RAID disks into an iSCSI LUN that the VMs could access?  Or can that not run on the same physical box as the Hyper-V host?  I guess if the physical box goes down
    the LUN would go down anyway, huh?  Or can I cluster that role (iSCSI target) as well?  If not, do you have any other specific product suggestions I can research, or do I just end up wasting this 12TB of local disk storage?
    - Morgan
    That's a bad idea. First of all Microsoft iSCSI target is slow (it's non-cached @ server side). So if you really decided to use dedicated hardware for storage (maybe you do have a reason I don't know...) and if you're fine with your storage being a single
    point of failure (OK, maybe your RTOs and RPOs are fair enough) then at least use SMB share. SMB at least does cache I/O on both client and server sides and also you can use Storage Spaces as a back end of it (non-clustered) so read "write back flash cache
    for cheap". See:
    What's new in iSCSI target with Windows Server 2012 R2
    http://technet.microsoft.com/en-us/library/dn305893.aspx
    Improved optimization to allow disk-level caching (updated):
    iSCSI Target Server now sets the disk cache bypass flag on a hosting disk I/O, through Force Unit Access (FUA), only when the issuing initiator explicitly requests it. This change can potentially improve performance.
    Previously, iSCSI Target Server would always set the disk cache bypass flag on all I/O’s. System cache bypass functionality remains unchanged in iSCSI Target Server; for instance, the file system cache on the target server is always bypassed.
    Yes, you can cluster the Microsoft iSCSI target, but a) it would be SLOW, since there is only an active-passive I/O model (no real benefit from MPIO between multiple hosts), and b) it would require shared storage for the Windows cluster. What for? The scenario used to be worthwhile when a) there was no virtual Fibre Channel, so a guest VM cluster could not use FC LUNs, and b) there was no shared VHDX, so SAS could not be used for a guest VM cluster either. Now both are available, so the scenario is pointless: just export your existing shared storage without any Microsoft iSCSI target and you'll be happy. For references see:
    MSFT iSCSI Target in HA mode
    http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
    Cluster MSFT iSCSI Target with SAS back end
    http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
    Guest VM Cluster Storage Options
    http://technet.microsoft.com/en-us/library/dn440540.aspx
    Storage options
    The following list shows the storage types that you can use to provide shared storage for a guest cluster.
    - Shared virtual hard disk: New in Windows Server 2012 R2, you can configure multiple virtual machines to connect to and use a single virtual hard disk (.vhdx) file. Each virtual machine can access the virtual hard disk just like servers would connect to the same LUN in a storage area network (SAN). For more information, see Deploy a Guest Cluster Using a Shared Virtual Hard Disk. (A short sketch follows below.)
    - Virtual Fibre Channel: Introduced in Windows Server 2012, virtual Fibre Channel enables you to connect virtual machines to LUNs on a Fibre Channel SAN. For more information, see Hyper-V Virtual Fibre Channel Overview.
    - iSCSI: The iSCSI initiator inside a virtual machine enables you to connect over the network to an iSCSI target. For more information, see iSCSI Target Block Storage Overview and the blog post Introduction of iSCSI Target in Windows Server 2012.
    Storage requirements depend on the clustered roles that run on the cluster. Most clustered roles use clustered storage, where the storage is available on any cluster node that runs a clustered
    role. Examples of clustered storage include Physical Disk resources and Cluster Shared Volumes (CSV). Some roles do not require storage that is managed by the cluster. For example, you can configure Microsoft SQL Server to use availability groups that replicate
    the data between nodes. Other clustered roles may use Server Message Block (SMB) shares or Network File System (NFS) shares as data stores that any cluster node can access.
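    As an illustration of the shared virtual hard disk option above (VM names and the path are hypothetical), attaching the same .vhdx to both guest-cluster nodes on Windows Server 2012 R2 looks roughly like this:
    # Attach one shared .vhdx to two guest-cluster VMs (the disk must sit on a CSV or SMB 3.0 share)
    Add-VMHardDiskDrive -VMName "SQLNODE1" -Path "C:\ClusterStorage\Volume1\SharedData.vhdx" -SupportPersistentReservations
    Add-VMHardDiskDrive -VMName "SQLNODE2" -Path "C:\ClusterStorage\Volume1\SharedData.vhdx" -SupportPersistentReservations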
    Sure you can use third-party software to replicate 12TB of your storage between just a pair of nodes to create a fully fault-tolerant cluster. See (there's also a free offering):
    StarWind VSAN [Virtual SAN] for Hyper-V
    http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
    The product is similar to what VMware has just released for ESXi, except it has been selling for ~2 years, so it is mature :)
    There are other vendors doing this, such as DataCore (more focused on Windows-based FC) and SteelEye (more about geo-clustering & replication). You may want to give them a try.
    Hope this helped a bit :) 
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • SharePoint 2013 High Availability - BI Semantic

    SharePoint 2013 High Availability
    Hi all,
    Setup:
    Two sites connected via a tunnel, no networking issues – single farm
    SQL 2012, two DBs with mirroring
    SharePoint 2013, two servers in the farm, each server is both APP & WEB
    Issue: this has been designed to survive a full site failure (e.g. losing the connection to the other site, which means losing one SQL server and one SharePoint server).
    When users try to view some reports from the BI page, they can't. We have confirmed that the SQL mirrors fail over correctly; we think the issue is with access to the SharePoint Reporting Services databases, which we have mirrored as well. Any help will be great.
    Regards

    Okay, if you've validated that you have the 1Gbps and 1ms over 10 minutes average, then you're in a supported scenario. How did you configure the SSRS databases to use mirroring within SharePoint? With SSRS databases, you'll need to use PowerShell:
    # Find the Reporting Services service application database by name
    $database = Get-SPDatabase | where {$_.Name -eq "ReportingServiceDbName"}
    # Register the mirror (failover) SQL Server instance for that database
    $database.AddFailoverServiceInstance("InstanceName")
    # Persist the change
    $database.Update()
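    A quick way to confirm the failover partner was registered is to read it back from the same database object:
    # Should now return the mirror SQL Server instance added above
    $database.FailoverServer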
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • HTTP Server High Availability

    Hello All.
    I have a question regarding OC4J and HTTP server High Availability.
    I want to do something like the Figure 3-1 of the Oracle Application Server High Availability Guide 10.1.2. See this link
    http://download-east.oracle.com/docs/cd/B14099_11/core.1012/b14003/midtierdesc.htm#CIHCEDFC
    What I have now is the following:
    Three hosts
    Two of them are an OAS 10.1.2 which we already configured the Cluster and deployed our applications (used this tutorial: http://www.oracle.com/technology/obe/obe_as_1012/j2ee/deploy/j2eecluster/farmcluster.htm)
    Let's say this nodes are:
    - host1
    - host2
    The other one is the Oracle WebCache stand alone (will act as Load Balancer). We will call this
    - hostwc3
    We already configured the WebCache as Load Balancer and is working just fine. We also configured the session replication successful and work great with our applications.
    What we have not clear is the following:
    When a client visits http://hostwc3/application/, the load balancer routes him to, let's say, http://host1/application/, and the browser's URL no longer shows the virtual server (the WebCache host) but the address of the actual Apache instance (host1) that is serving him. If we kill the ENTIRE host1 (Apache, OC4J, etc.), the clients WILL notice the outage, and if they press F5 they will try to reach an Apache that is no longer up and running. The expected behavior is that the browser NEVER shows the actual Apache URL, so that when an Apache instance goes down the client does not get disconnected (as already happens for an OC4J failure) and always works against the "virtual web server".
    I came up with some ideas, but I'd like your advice:
    - In WebCache, do not load-balance to Apache, and route to the OC4J instances directly (is this possible?)
    - Configure an HTTP Server cluster; this means we need a "virtual name" for the two Apaches. Is this possible? How?
    - Use Apache's rewrite module. Is this a good idea?
    - Any other idea how to fix the Apache single point of failure?
    According to figure 3-1 (link above) we can have the HTTP Server in a cluster, but I have no idea how to manage or configure it.
    Thanks in advance any help!

    You cannot point Outlook Anywhere to your DAG cluster IP address. It must be pointed to the actual IP address of either server.
    For no extra cost DNS round robin is the best you will get, but it does have some drawbacks as it may give the IP address of a server you have taken down for maintenance or the server has an issue.
    You could look to implement a load balancer but again if you are doing this for high availability then you want more than one load balancer in the cluster - otherwise you've just moved your single point of failure.
    Having your existing NAT and just remembering to update it to point to the other server during maintenance may suit your needs for now.
    If you can go into more detail about what the high availability your business is looking to achieve and the budget we can suggest the best method to meet those needs for the price point.
    Have a great day
    Oliver
    Oliver Moazzezi | Exchange MVP, MCSA:M, MCITP:Exchange 2010,Exchange 2013, BA (Hons) Anim | http://www.exchange2010.com | http://www.cobweb.com | http://twitter.com/OliverMoazzezi

  • How to configure high availability and disaster recovery? And user authenticate

    We are in the process of rolling out our online help which was created using Robohelp.   In our initial rollout we will provide access to the files via our Client Portal which requires authentication.  We are also planning for our next version where we intend to implement Robohelp server functionality.
    Our IT team is looking at options for how to configure for High Availability and Disaster Recovery.  It seems that RoboHelp doesn't have any built-in functionality in this area.  In addition, we require that our users authenticate.  The options for the server version seem to be more internally focused, and we would need to solve authentication using a third party.
    Would anyone be willing to share their approach in these areas?  Would you be willing to participate in a conference call with our IT Professionals?

    Hello again
    I see my good friend Peter replied to your LinkedIn post where you cross-posted the same question. For those here who have no clue what Peter stated, here it is:
    What are you seeking to recover? Your projects? Your outputs? This sounds like a question more appropriate to Disaster Recovery consultants and far wider reaching than RoboHelp. To me it seems like a question your IT people should be asking direct to such consultants who would expect a fee for their advice.
    I would agree with Peter's reply.
    I'll also go further and ask what exactly is being done in this realm for the application? Help files generally are there to support an application on the server. So whatever you are doing for the application should also be usable for the WebHelp, FlashHelp or web-based AIR Help files, no?
    Cheers... Rick
