WebCat high availability options?

Calling out to experienced BI architects...
What is the best practice/recommended approach to high availability of the web catalog?
Background:
We just realized that there is a flaw in the design of our clustered, load balanced BI architecture, which consists of the following:
ServerA= OBI Server1 + WebCatalog on local shared Drive
ServerB= OBI Server2
ServerC= JavaHost1/OBI Presentation Server1 pointing to WebCat on ServerA
Server D= JavaHost2/OBI Presentation Server2 pointing to WebCat on ServerA
The flaw is obvious: if ServerA goes down, we have redundancy with ServerB for the OBI Server, but we've lost the web catalog. Having the WebCat on a shared local drive on ServerA is a single point of failure.
We are considering creating a small 2 node cluster and having the webcat on a clustered drive resource.
Are there any alternative approaches that others have implemented successfully?
Is there an Oracle option that allows 2 separate web catalogs which are kept in sync through replication?
Thanks in advance for sharing any ideas on this.

Thanks Deepak... do you know anything about replication using the sawrepaj utility? It seems a bit difficult to set up, maintain and automate: lots of manual setup in config files. Has anyone had good/bad experiences using sawrepaj?
(Deepak: tried to give you some "helpful" points, but for some reason, I'm not able to award any points...)
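In the absence of built-in replication, some teams fall back on scheduled one-way file sync from the active catalog to a warm standby. The sketch below is a naive, unsupported illustration of that idea only (paths and schedule would be site-specific; it ignores catalog locking, in-flight saves, and deletions, so it is not a substitute for sawrepaj or a clustered drive):

```python
import shutil
from pathlib import Path

def sync_catalog(src: str, dst: str) -> int:
    """One-way mirror: copy files from src that are missing or newer in dst.
    Returns the number of files copied. Deletions are NOT propagated."""
    src_root, dst_root = Path(src), Path(dst)
    copied = 0
    for src_file in src_root.rglob("*"):
        if src_file.is_dir():
            continue
        rel = src_file.relative_to(src_root)
        dst_file = dst_root / rel
        if (not dst_file.exists()
                or src_file.stat().st_mtime > dst_file.stat().st_mtime):
            dst_file.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src_file, dst_file)  # preserves timestamps
            copied += 1
    return copied
```

Run from cron/Task Scheduler, this gives a crash-consistent standby copy at the cost of losing changes made since the last sync.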

Similar Messages

  • High Availability Options Without Enterprise Edition

    I think I've found the answer to this already but I just wanted to confirm that I'm getting this right. In SQL Server 2012, is there no way to implement a high-availability solution which doesn't require shared storage or use of a deprecated feature without
    having Enterprise Edition?
    At the moment we have databases hosted on a primary server running Windows Server 2012 Standard and SQL Server 2012 Standard.  We have a second, identical server, and the databases are mirrored between the two servers using Database Mirroring. 
    This works perfectly and meets all of our requirements.  The only possible problem is that we're looking at storing documents using FILESTREAM, which isn't available in Mirrored databases.
    From what I've read, Database Mirroring is deprecated in SQL Server 2012.  High Availability Groups, which sound great, are only available in the Enterprise Edition.  This feels like a real kick in the teeth from Microsoft.  We're stuck
    with either using a deprecated feature (Mirroring), which is only postponing the problem until the end of 2012's life cycle, or laying out significant amounts of money (in the tens of thousands of pounds GBP) to upgrade to Enterprise Edition, which we just
    don't have the budget for.
    Why couldn't Microsoft continue to offer a high availability option for businesses without deep pockets?  Do I have any options I'm not thinking of?  Shared storage is not an option (too costly, single point of failure, geographically separated
    datacentres).

    Thanks for all the feedback.
    I was forgetting that even data stored as FILESTREAM would need to be backed up using SQL backups, so I guess that's an issue either way; thanks for reminding me, Sebastian.
    Geoff, I agree that replication isn't a viable HA solution.  FCI has lots going for it, but from what I can make out (and I am just reading up on this now, so I could be wrong) it requires either shared storage or a third-party tool to move the data from the primary server's local storage to the local storage of the secondaries.  It all seems overly complicated when Mirroring does exactly what I need it to do, merrily writing transactions to the mirrors at the same time as to the primary databases.
    Additionally, whilst you might be right about making the case for Enterprise Edition, it's difficult to make a clear case to non-technical people when we already have a working HA solution and document management.  Trying to explain the nuanced advantages
    of SQL Server full-text indexing (at present we use a third-party indexing product which stores the index discretely) and simplified querying when the documents are in the same database as their associated data as justification for spending tens of thousands
    of pounds is a challenge!
    David, that's useful to know, about the disadvantages of using FILESTREAM and how just storing the documents in a VARBINARY(MAX) column might actually give better performance, thank you.

  • Detail on High Availability options for Web Apps

    Hi,
    I do really struggle to locate actual information on Azure Availability offerings / capabilities...as an Infrastructure Architect it has bugged me for years now with Azure offerings!
    Something that simply states the availability within any local DC and options for true HA over 2 or more DC's.
    We are moving away from using Web Roles to Web Apps for solutions for our clients. I understand the principles of fault domains, etc. with Web Role requirements for min. of 2 to (mostly) avoid MS update disruption within a single DataCenter, but cannot locate
    similar info. with regard to Web Apps.
    Really appreciate if someone could point me to some appropriate detail as I've failed....
    (Also, cannot find anything on DocumentDB....)
    Many Thanks,
    Lee

    Hi,
    High Availability of a running service always comes with a cost, and priorities will be app-specific. If it's the web tier, then you may indeed want to consider hosting in multiple geo's. If it's a backend processing tier, sometimes it's "ok" to
    let a service go offline, as long as requests are queued up. If it's the storage system (preventing queueing of messages), perhaps an alternate queue in a different data center could be available for redundancy purposes.
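    The "alternate queue in a different data center" idea can be sketched generically: try the primary endpoint first and fall back to a secondary on failure. This is a minimal illustration with in-memory stand-ins, not the Azure SDK; the names and the failure model are hypothetical:

```python
from collections import deque

class QueueUnavailable(Exception):
    pass

def enqueue_with_failover(message, queues):
    """Try each (name, queue) endpoint in order; return the name of the
    one that accepted the message, or raise if all are down."""
    for name, q in queues:
        try:
            if q is None:                 # stand-in for a dead endpoint
                raise QueueUnavailable(name)
            q.append(message)
            return name
        except QueueUnavailable:
            continue
    raise QueueUnavailable("all endpoints down")
```

    With a real queue service, the same shape applies: catch the transport error from the primary region's client and retry against the secondary region's queue.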
    Please check this article:
    https://msdn.microsoft.com/en-us/library/azure/dn251004.aspx
    Hope this information helps.
    Regards,
    Azam khan

  • ESP - MQ High Availability options

    Hi there,
    My organisation (DHS Australia) have recently upgraded our Websphere Message Broker environment from v7 to v8.
    As part of this upgrade, our Production environment now runs 4 queue managers; the idea behind this is that it helps to provide a higher level of availability (i.e. basically, if one queue manager goes down, the others can pick up its load).
    The ESP MQ adapter currently only allows for connection to the one queue manager, so in our ESP-MQ implementation we are exploring possibilities of how we can possibly get the output from ESP shared across the 4 queue managers. We are looking at a number of options within MQ/Broker itself (eg. clustered queue definitions, clustered queue manager etc), and currently doing some testing to see how/if this works with the supplied ESP MQ adapters.
    However, my question for the ESP community/tech-team is whether there has been any thought to implementing something similar to how SAP PI can connect to MQ in a High Availability sense? For example, see this blog post:
    http://scn.sap.com/people/anandan.krishnamoorthy/blog/2010/06/08/high-availability-in-jms-adapter
    Has there been any thought given to whether the ESP MQ adapter can be enhanced or configured to use an MQ Client Channel Definition Table as an alternative connection configuration method?
    Cheers,
    Jason.

    Jason - this isn't something that is currently on the ESP roadmap, but perhaps it should be. I'll be interested to see if others in the community have other perspectives on this, or experience doing something similar.

  • Installing SAP ERP on DB2 with high availability

    Hey Gurus,
    Currently we're installing SAP ERP 6 EHP 6 based on DB2 for UNIX and Windows. This is done using the high availability option on AIX 7.1 and IBM's PowerHA.
    We have the following architecture:
              Node A:
                        1-A host for ASCS, ERS and CI.
                        2-A host for the DB instance.
    The same with node B replacing CI with DI.
    Please advise on how to proceed with installation.
    Also, I need a bit of clarification regarding how to share (or export) the file systems between the hosts of the same node so that the CI and DB can be connected, since as far as I know the DB installation will ask for the profile directory during the installation, while the CI will ask to see the DB file systems as well as the DB instance.
    Thanks in advance

    Hi Ahmed,
    For your query
    Also, I need a bit of clarification regarding how to share (or export) the file systems between the hosts of the same node so that the CI and DB can be connected, since as far as I know the DB installation will ask for the profile directory during the installation, while the CI will ask to see the DB file systems as well as the DB instance.
    Please refer installation guide sections related to file system planning.
    Here you can see the file system directory structure as well as information on which filesystems to be shared.
    Attached are some screenshots for reference.
    Hope this helps.
    Regards,
    Deepak Kori

  • Cisco Prime LMS High Availability

    Hi,
    I am trying to set up Prime LMS 4.2 with a pair of soft appliances. I understand that HA is possible with the use of Veritas/VMware for Windows/Solaris; I was wondering what high availability options are available with a pair of Prime LMS appliances? Can they form an active/secondary pair with data synchronization/data redundancy of the LMS, on top of the traditional backup/restore of the LMS?
    Any input is appreciated.
    Thanks

    As iceman said, in VMware you don't need a pair of host machines to configure HA; pairs are managed using third-party HA services like Veritas.
    In VMware's HA concept, all host machines are pooled into one cluster, and in case of a host failure the affected VMs are restarted on another host; vMotion can also move a running VM to another host.
    That covers the case where the host on which the VM resides fails. In case of failure of the VM itself, HA can be set to take various actions, like an automatic restart when a hardware or OS failure is detected, or restarting the VM on a backup host in another cluster when failure is detected.
    You need to check the available HA options in VMware, and you can also consider HA via third-party applications like Veritas.
    -Thanks
    Vinod
    **Support Contributors. Rate them. **

  • Oracle Identity Federation - High Availability

    Hello,
    We are trying to figure out the high availability options supported by the Oracle Identity Federation. While reading the documentation we find it a bit confusing. We read the OIF Administrator Guide here: http://download.oracle.com/docs/cd/E10773_01/doc/oim.1014/b25355/advtopics.htm#CHDBCDFG
    In Section "9.4 High Availability" it says that "Oracle Identity Federation supports the Cold Failover Cluster (CFC) or active-passive high availability configuration". The Application Server 10g guide says the same, and explicitly states that an active-active configuration is not supported for OIF.
    Then Section "9.5 Setting Up a Load Balancer with Oracle Identity Federation" explains how to set up a load balancer for OIF. It says that we can have several instances of OIF on different machines, configured behind a load balancer, with all instances sharing the same transient database where the sessions are stored.
    What is the difference between this load-balancer-based configuration and an active-active high availability configuration? If one node of the load-balanced configuration goes down, are the sessions it administered lost? Is that the difference?
    Thanks!
    Leonardo

    Hi
    I am not completely sure about the high availability configuration, but for the load balancer, as mentioned in the document, you have to have both instances sharing the transient database where sessions will be stored.
    If the OIF instances do not share the transient database and you have a load balancer sharing the load, it will not work, because sessions will be stored in memory: sessions from one OIF instance will not be known or available to the other instance of OIF.
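    The point about the shared transient database can be shown with a toy model: two instances behind a load balancer only survive a node loss if sessions live in a shared store rather than in each instance's memory. The class and names below are illustrative, not OIF APIs:

```python
class Instance:
    """Toy federation server: sessions go to a shared store if one is
    supplied, otherwise to this instance's private memory."""
    def __init__(self, shared_store=None):
        self.sessions = shared_store if shared_store is not None else {}

    def login(self, sid, user):
        self.sessions[sid] = user

    def lookup(self, sid):
        return self.sessions.get(sid)

# Private in-memory stores: failing over from A to B loses the session.
a, b = Instance(), Instance()
a.login("s1", "leonardo")
assert b.lookup("s1") is None

# Shared transient store: B can serve the session after A dies.
shared = {}
a2, b2 = Instance(shared), Instance(shared)
a2.login("s1", "leonardo")
assert b2.lookup("s1") == "leonardo"
```

    That is the difference the document is driving at: load balancing distributes requests, while the shared session store is what keeps a node failure from invalidating the sessions that node was serving.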
    Thanks
    Kiran Thakkar

  • High availability - minimum requirements

    Hi,
    We currently have TFS2012 installed with SQL2008 on a single server. After a recent outage that required a server rebuild I'm looking to lower the risk of further outages and I've been reading about high availability options for TFS. I'm quite interested
    in the SQL AlwaysOn feature which allows you to run 2 mirrors of your TFS SQL database across 2 servers. My question is can I have TFS on the same servers? I.e.:
    Node A: TFS and SQL
    Node B: TFS and SQL
    I could then NLB the TFS front ends and have SQL in a HA AlwaysOn availability group. Is this possible? I would use the opportunity to upgrade to SQL2012/2014 and TFS 2013.
    Most articles I've read talk about separating SQL out onto its own hardware; this would require a minimum of 2 additional servers to run SQL and most likely a 3rd to have an additional TFS front end.
    Other options would be to virtualise the above (perhaps in the future) or to use TFS online - although we have customised our TFS quite a bit so online may not be an option for us.
    Thanks in advance.

    Hi Ceefla, 
    Thanks for your post. 
    According to the information in this document, you need to install SQL Server 2012 for the TFS server if you want to use the SQL Server AlwaysOn Availability Groups feature.
    What does "have TFS on the same servers" mean? Installing TFS and SQL Server on the same server machine? We suggest you install TFS and SQL Server on different server machines.

  • Windows Event Collector - Built-in options for load balancing and high availability ?

    Hello,
    I have a working collector. config is source initiated, and pushed by GPO.
    I would like to deploy a second collector for high availability and load balancing. What are the available options ? I have not found any guidance on TechNet articles.
    As a low cost option, is it fine to simply start using DNS round-robin with a common alias for both servers pushed as a collector name through GPO ?
    In my GPO policy, if I individually declare both servers, events are forwarded twice, once to each server. That does cover high availability, but it's not really optimized.
    Thanks for your help.

    Hi,
    >>As a low cost option, is it fine to simply start using DNS round-robin with a common alias for both servers pushed as a collector name through GPO ?
    Based on the description, we can utilize DNS round robin to distribute workloads and increase fault tolerance. By default, DNS uses round robin to rotate the order of RR data returned in query answers when multiple RRs of the same type exist for a queried DNS domain name, and it performs this rotation for all RR types. This feature provides a simple method for load balancing client use of web servers and other frequently queried multihomed computers.
    Regarding DNS round robin, the following article can be referred to for more information.
    Configuring round robin
    http://technet.microsoft.com/en-us/library/cc787484(v=ws.10).aspx
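    The rotation behaviour described above is easy to picture with a toy model: the name server returns the full RR set but rotates its order on each answer, so successive clients (taking the first address) land on different collectors. Addresses here are hypothetical:

```python
from collections import deque

class RoundRobinName:
    """Toy DNS name with multiple A records, rotated on each query."""
    def __init__(self, addresses):
        self._records = deque(addresses)

    def resolve(self):
        answer = list(self._records)   # full RR set in current order
        self._records.rotate(-1)       # rotate for the next query
        return answer

collector = RoundRobinName(["10.0.0.11", "10.0.0.12"])
```

    Note the caveat implied by the original question: round robin distributes new lookups but has no health checking, so a dead collector keeps receiving its share of clients until its record is removed.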
    Best regards,
    Frank Shen

  • SQL Server 2005 High Availability and Disaster Recovery options

    Hi, We are are working on a High Availability & Disaster Recovery Planning solution for an application database which is on SQL Server 2005. What different options have we got to implement this for SQL Server 2005 and after we have everything setup how
    do we test the failover is working?
    Thanks in advance.........
    Ione

    DR: Disaster recovery is the best option for the business to minimize data loss and downtime. SQL Server has a number of native options, but everything depends upon your recovery time objective (RTO) and recovery point objective (RPO).
    1. Data center disaster: Geo-clustering
    2. Server (host)/drive (except shared drive) disaster: Clustering
    3. Database/drive disaster: Database mirroring, Log shipping, Replication
    Log shipping
    Log shipping is the process of automating full database backups and transaction log backups on a production server and then automatically restoring them onto a secondary (standby) server.
    Log shipping works with either the Full or Bulk-Logged recovery model.
    You can also configure log shipping within a single SQL instance.
    The standby database can be either restoring or read-only (standby).
    A manual failover is required to bring the database online.
    Some data can be lost (for example, the last 15 minutes).
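    The "15 minutes" above is really the backup/copy/restore schedule: with log shipping, worst-case data loss is roughly the time since the last log backup that reached the standby. A quick sanity check of an RPO against a schedule (the intervals are illustrative, not a recommendation):

```python
def worst_case_rpo_minutes(backup_interval_min, copy_interval_min,
                           restore_interval_min):
    """Rough upper bound on log-shipping data loss: a transaction committed
    just after a log backup waits a full backup interval, and that backup
    may then wait a full copy and restore cycle before it is safe."""
    return backup_interval_min + copy_interval_min + restore_interval_min

def meets_rpo(rpo_minutes, *intervals):
    return worst_case_rpo_minutes(*intervals) <= rpo_minutes
```

    So a 15/15/15-minute schedule can expose up to about 45 minutes of work, which is why the schedule has to be derived from the RPO rather than the other way round.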
    Peer-to-Peer Transactional Replication
    Peer-to-peer transactional replication is designed for applications that might read or modify the data in any database that participates in replication. Additionally, if any server that hosts a database becomes unavailable, you can modify the application to route traffic to the remaining servers, which contain identical copies of the data.
    Clustering
    Clustering is a combination of two or more servers that automatically allows one physical server to take over the tasks of another physical server that has failed. It's not a real disaster recovery solution, because if the shared drive is unavailable we cannot bring the database online.
    Geo-clustering is the best option for minimal downtime (around 5 minutes) and minimal data loss in case of a data center or server failure.
    Clustering needs extra hardware/servers and is more expensive.
    Database mirroring
    Database mirroring was introduced in SQL Server 2005. It maintains an exact copy of a database on a different server, has an automatic failover option, and mainly helps to increase database availability.
    Database mirroring only works with the FULL recovery model.
    It needs two instances.
    The mirror database is always in a restoring state.
    http://msdn.microsoft.com/en-us/library/ms151196%28v=sql.90%29.aspx
    http://blogs.technet.com/b/wbaer/archive/2008/04/19/high-availability-and-disaster-recovery-with-microsoft-sql-server-2005-database-mirroring-and-microsoft-sql-server-2005-log-shipping-for-microsoft-sharepoint-products-and-technologies.aspx
    http://www.slideshare.net/rajib_kundu/disaster-recovery-in-sql-server
    HADR Considerations
    Understand the business motivations and regulatory requirements that are driving the customer's HA/DR requirements, and how the customer categorizes workloads from an HA/DR perspective; there is likely to be an alignment between the needs and the categorization.
    Check both the recovery time objective (RTO) and the recovery point objective (RPO) for different workload categories, for both a failure within a data center (local high availability) and a total data center failure (disaster recovery). While RPO and RTO vary for different workloads because of business, cost, or technological considerations, customers may prefer a single technical solution for ease of operations. However, a single technical solution may require trade-offs that need to be discussed with customers so that their expectations are set appropriately.
    Check whether there is an organizational preference for a particular HA/DR technology. Customers may have a preference because of previous experiences, established operational procedures, or simply the desire for uniformity across databases from different vendors. Understand the motives behind a preference: a customer's HA/DR preference may not be driven by the functions and features of the HA/DR technology. For example, a customer may adopt a third-party solution for DR to maintain a single operational procedure; for this reason, using HA/DR technology provided by a SAN vendor (such as EMC SRDF) is a popular approach.
    To design and adopt an HA/DR solution, it is also important to understand the implications of applying maintenance to both hardware and software (including Windows security patching). Database mirroring is often adopted to minimize the service disruption involved in achieving this objective.
    HADR Options :
    Failover clustering for HA and database mirroring for DR.
    Synchronous database mirroring for HA/DR and log shipping for additional DR.
    Geo-cluster for HA/DR and log shipping for additional DR.
    Failover clustering for HA and storage area network (SAN)-based replication for DR.
    Peer-to-peer replication for HA and DR (and reporting).
    Backup & Restore (DR)
    Keep your server DB backups in a network location (DR).
    Always keep your SQL Server 2005 up to date; if you are no longer getting official support from Microsoft, you will have to take care of any critical issues yourself.
    Raju Rasagounder Sr MSSQL DBA

  • NCS Appliance Ver 1.1.1.24 VM High availability disk options

       Hello,
    I currently am running an NCS appliance running version 1.1.1.24.  I am looking to set up a VM server as a redundant High Availability backup server. Per the config guides the VM server needs to be set up with identical physical specifications as the appliance. That means I will need to set a vm up with the following specs:
    CPU: 2 x Intel Xeon Processor (2.4-GHz 12-MB Cache)
    •Memory: 16-GB (1×2-GB, 2Rx8) PC3-10600 CL9 ECC DDR3
    •Network Interface Cards: 2 x 10/100/1000 Gigabit
    The config guide for the VM recommends that I start with a minimum of 400 GB of disk space. Does anyone have experience deploying a VM as a backup for an appliance?  Does this sound correct?
    Thanks,


  • Advice Requested - High Availability WITHOUT Failover Clustering

    We're creating an entirely new Hyper-V virtualized environment on Server 2012 R2.  My question is:  Can we accomplish high availability WITHOUT using failover clustering?
    So, I don't really have anything AGAINST failover clustering, and we will happily use it if it's the right solution for us, but to be honest, we really don't want ANYTHING to happen automatically when it comes to failover.  Here's what I mean:
    In this new environment, we have architected 2 identical, very capable Hyper-V physical hosts, each of which will run several VMs comprising the equivalent of a scaled-back version of our entire environment.  In other words, there is at least a domain
    controller, multiple web servers, and a (mirrored/HA/AlwaysOn) SQL Server 2012 VM running on each host, along with a few other miscellaneous one-off worker-bee VMs doing things like system monitoring.  The SQL Server VM on each host has about 75% of the
    physical memory resources dedicated to it (for performance reasons).  We need pretty much the full horsepower of both machines up and going at all times under normal conditions.
    So now, to high availability.  The standard approach is to use failover clustering, but I am concerned that if these hosts are clustered, we'll have the equivalent of just 50% hardware capacity going at all times, with full failover in place of course
    (we are using an iSCSI SAN for storage).
    BUT, if these hosts are NOT clustered, and one of them is suddenly switched off, experiences some kind of catastrophic failure, or simply needs to be rebooted while applying WSUS patches, the SQL Server HA will fail over (so all databases will remain up
    and going on the surviving VM), and the environment would continue functioning at somewhat reduced capacity until the failed host is restarted.  With this approach, it seems to me that we would be running at 100% for the most part, and running at 50%
    or so only in the event of a major failure, rather than running at 50% ALL the time.
    Of course, in the event of a catastrophic failure, I'm also thinking that the one-off worker-bee VMs could be replicated to the alternate host so they could be started on the surviving host if needed during a long-term outage.
    So basically, I am very interested in the thoughts of others with experience regarding taking this approach to Hyper-V architecture, as it seems as if failover clustering is almost a given when it comes to best practices and high availability.  I guess
    I'm looking for validation on my thinking.
    So what do you think?  What am I missing or forgetting?  What will we LOSE if we go with a NON-clustered high-availability environment as I've described it?
    Thanks in advance for your thoughts!
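    The capacity trade-off in the post above comes down to simple arithmetic: a cluster that reserves full failover headroom caps normal-state utilization, while the non-clustered approach runs near 100% normally and degrades only during an outage. A small worked example (the numbers are illustrative):

```python
def usable_capacity(nodes, reserve_failover_headroom):
    """Fraction of total hardware usable in normal operation for a pool of
    identical nodes, if enough headroom is reserved for every workload to
    fail over onto the surviving nodes."""
    if reserve_failover_headroom:
        return (nodes - 1) / nodes   # keep one node's worth of capacity free
    return 1.0

# 2-node cluster with failover headroom: 50% usable at all times.
# Non-clustered pair: 100% normally, roughly 50% only while one host is down.
```

    Note the headroom penalty shrinks as the pool grows: with 4 nodes it is 75% usable, which is one reason larger clusters make the reserved-capacity model easier to swallow than a 2-node pair does.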

    Udo -
    Yes your responses are very helpful.
    Can we use the built-in Server 2012 iSCSI Target Server role to convert the local RAID disks into an iSCSI LUN that the VMs could access?  Or can that not run on the same physical box as the Hyper-V host?  I guess if the physical box goes down
    the LUN would go down anyway, huh?  Or can I cluster that role (iSCSI target) as well?  If not, do you have any other specific product suggestions I can research, or do I just end up wasting this 12TB of local disk storage?
    - Morgan
    That's a bad idea. First of all, the Microsoft iSCSI target is slow (it's non-cached on the server side). So if you really decided to use dedicated hardware for storage (maybe you have a reason I don't know...) and if you're fine with your storage being a single point of failure (OK, maybe your RTOs and RPOs are fair enough), then at least use an SMB share. SMB at least caches I/O on both the client and server sides, and you can also use Storage Spaces as a back end for it (non-clustered), so read "write-back flash cache
    for cheap". See:
    What's new in iSCSI target with Windows Server 2012 R2
    http://technet.microsoft.com/en-us/library/dn305893.aspx
    Improved optimization to allow disk-level caching
    Updated
    iSCSI Target Server now sets the disk cache bypass flag on a hosting disk I/O, through Force Unit Access (FUA), only when the issuing initiator explicitly requests it. This change can potentially improve performance.
    Previously, iSCSI Target Server would always set the disk cache bypass flag on all I/O’s. System cache bypass functionality remains unchanged in iSCSI Target Server; for instance, the file system cache on the target server is always bypassed.
    Yes, you can cluster the Microsoft iSCSI target, but a) it would be SLOW, as there would be only an active-passive I/O model (no real use of MPIO between multiple hosts), and b) it would require shared storage for the Windows cluster. What for? The scenario was usable when a) there was no virtual Fibre Channel, so guest VM clusters could not use FC LUNs, and b) there was no shared VHDX, so SAS could not be used for guest VM clusters either. Now both are present, so the scenario is pointless: just export your existing shared storage without any Microsoft iSCSI target and you'll be happy. For references see:
    MSFT iSCSI Target in HA mode
    http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
    Cluster MSFT iSCSI Target with SAS back end
    http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
    Guest VM Cluster Storage Options
    http://technet.microsoft.com/en-us/library/dn440540.aspx
    Storage options
    The following table lists the storage types that you can use to provide shared storage for a guest cluster:
    Shared virtual hard disk: New in Windows Server 2012 R2, you can configure multiple virtual machines to connect to and use a single virtual hard disk (.vhdx) file. Each virtual machine can access the virtual hard disk just like servers would connect to the same LUN in a storage area network (SAN). For more information, see Deploy a Guest Cluster Using a Shared Virtual Hard Disk.
    Virtual Fibre Channel: Introduced in Windows Server 2012, virtual Fibre Channel enables you to connect virtual machines to LUNs on a Fibre Channel SAN. For more information, see Hyper-V Virtual Fibre Channel Overview.
    iSCSI: The iSCSI initiator inside a virtual machine enables you to connect over the network to an iSCSI target. For more information, see iSCSI Target Block Storage Overview and the blog post Introduction of iSCSI Target in Windows Server 2012.
    Storage requirements depend on the clustered roles that run on the cluster. Most clustered roles use clustered storage, where the storage is available on any cluster node that runs a clustered
    role. Examples of clustered storage include Physical Disk resources and Cluster Shared Volumes (CSV). Some roles do not require storage that is managed by the cluster. For example, you can configure Microsoft SQL Server to use availability groups that replicate
    the data between nodes. Other clustered roles may use Server Message Block (SMB) shares or Network File System (NFS) shares as data stores that any cluster node can access.
    Sure you can use third-party software to replicate 12TB of your storage between just a pair of nodes to create a fully fault-tolerant cluster. See (there's also a free offering):
    StarWind VSAN [Virtual SAN] for Hyper-V
    http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
    Product is similar to what VMware had just released for ESXi except it's selling for ~2 years so is mature :)
    There are other guys doing this, say DataCore (more focused on Windows-based FC) and SteelEye (more about geo-clustering & replication). You may want to give them a try.
    Hope this helped a bit :) 
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Windows 2012 RDS - Session Host servers High Availability

    Hello Windows/Terminal server Champs,
    I am in the middle of implementing an RDS environment for one of my customers; hope you can help me out.
    My customer has asked for HA for the RDS session hosts where applications are published, and I have prepared the plan below from a server point of view:
    2 Session Host servers, 1 Web Access, 1 License/Connection Broker & 1 Gateway (DMZ).
    In the first phase, we are planning to target internal users who connect to the session host HA; these 2 servers will have the applications installed, and internal users will use RDP to access them.
    In the second phase we will deal with external parties who connect from an external network, where we are planning to integrate NetIQ => Gateway => Web Access/Session Host.
    I have successfully installed and configured 2 Session Hosts, 1 License/Broker, 1 Web Access & 1 Gateway. But my main concern is to make the session hosts highly available, as they host the applications and most internal users are going to use them. To configure this I am following http://technet.microsoft.com/en-us/library/cc753891.aspx
    However, most of the architecture has changed in RDS 2012. Can you please help me set up the session host HA?
    Note: we can have only 1 Connection Broker/Licensing server, 1 Web Access server & 1 Gateway server; we cannot add more servers due to cost factors.
    Thanks in advance.

    Yes, there is absolutely no problem in using just one Connection Broker in your environment, as long as your customer understands the SPOF.
    The Session Hosts, however, aren't really what you would class as HA - but to set them up so you have redundancy, you would use either Windows NLB, an external NLB device, or Windows DNS round robin. My preferred option when using the Connection Broker is DNS round
    robin - where you give each server in the farm the same farm-name DNS entry; the Connection Broker then decides which server to allocate the session to.
    You must ensure your Session Host servers are identical in terms of software, though - the same software installed in the same paths on all the Session Host servers.
    If you use the 2012 deployment wizard through Server Manager roles, the majority of the config is done for you.
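    The DNS round-robin setup described above can be sketched as a zone-file fragment; the farm name and addresses are hypothetical:

```
; Zone-file fragment: both Session Hosts share one farm name.
; The DNS server rotates the order of the answers, and the Connection
; Broker then redirects each session to the appropriate host.
rdsfarm    IN  A  10.0.0.11   ; Session Host 1 (hypothetical address)
rdsfarm    IN  A  10.0.0.12   ; Session Host 2 (hypothetical address)
```

    Clients connect to rdsfarm.yourdomain and land on either host; the Connection Broker reconnects a user with an existing session to the right server.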
    Regards,
    Denis Cooper
    MCITP EA - MCT
    Help keep the forums tidy, if this has helped please mark it as an answer
    My Blog
    LinkedIn:

  • Load balancing and High Availability topology

    Our Forms 6i client-server application currently runs on a Citrix farm of 20 Windows 2000 boxes (IBM blade servers, 2 CPUs and 2 GB of memory).
    The application supports 2000 users.
    We are moving to AS 10g R2 and Forms 10g, and the goal is to use the same hardware, 20 Windows boxes (or fewer), for intranet web deployment.
    What would be our best choices for application load balancing and high availability?
    A hardware load balancer, Web Cache, mod_oc4j? Combinations?
    Any suggestions, best practices, or experiences?

    Gerd, I understand that you are running 10g web forms through the browser, but using Citrix for deployment. This means that in addition to the Application Server and Forms runtime sessions, a separate browser session will be opened for each user. What is the advantage of this configuration?
    Michael, we are aware that Citrix is not supported by Oracle as a deployment platform. That only means that, prior to contacting Oracle Support, we have to reproduce the problem in a standard environment - and it has never been a problem to reproduce a problem :) We used Citrix as a deployment platform for Forms 6i client/server for 4 years, but now we are forced to upgrade to 10g.
    We are familiar with the various load balancing options available. The question is which option is the most "workable" in our case.

  • High Availability Server in Unix Environment - Checking

    Hi experts - I would like to know how to check whether any SAP CI server is configured for a high availability option.
    We run Unix (Solaris) at my site. Is there any Unix command that would show whether HA is configured?
    Would appreciate your help.
    Thanks,
    Raj

    Hi Raj,
    Try executing the command cmviewcl -v at the OS level, as the root user.
    It will show the cluster packages available and the node on which each package runs, if any.
    Regards,
    Varadhu
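    Note that cmviewcl is specific to HP Serviceguard; on Solaris, Sun Cluster's scstat (or clrg status on newer releases) is the likely equivalent. A minimal shell sketch that probes for the status commands of several common HA stacks - the command list is illustrative, not exhaustive:

```shell
#!/bin/sh
# Probe for the status commands of common Unix HA stacks; a hit tells you
# which cluster software (if any) is installed on this node:
#   cmviewcl -> HP Serviceguard      scstat/clrg -> Sun/Solaris Cluster
#   hastatus -> Veritas Cluster      crm_mon     -> Pacemaker/Heartbeat
for cmd in cmviewcl scstat clrg hastatus crm_mon; do
    if command -v "$cmd" >/dev/null 2>&1; then
        echo "found:  $cmd"
    else
        echo "absent: $cmd"
    fi
done
```

    Run it as root (or a cluster admin role) so the commands are on PATH; a "found" hit only suggests the stack is installed - follow up with that stack's own status command to see whether a cluster is actually configured and running.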
