Failover High Availability recovery

When a node in a failover cluster fails:
How does it handle the recovery?
Are objects created by the failed node reused (if a single file system was being used), or are they recreated?

Hi,
Could you give more detail about the issue? A backup or a node crash can involve many different circumstances, and there are many methods for recovering a failed node. You can refer to the following KB for the repair.
More information:
Understanding Backup and Recovery Basics for a Failover Cluster
http://technet.microsoft.com/en-us/library/cc771973.aspx
Hope this helps.

Similar Messages

  • I'm looking for Failover/High available solutions for IPS 4200 Series

    Hi all,
    I tried to find failover/high availability solutions for the IPS 4200 series, but I didn't see any failover solutions in the IPS guide. Can anybody help me?

    I do not know if this is documented anywhere, but I can tell you what I do. As long as the IPS 4200 has power, with the right software settings, the unit can fail such that it still passes traffic. Should the unit lose power, it does stop all traffic. I run a patch cable in parallel with the inline IPS unit, in the same VLAN, with a higher STP cost. Thus all traffic traverses the IPS unit when possible, but should something happen to it, a $10 patch cable takes over.
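    A minimal sketch of that idea, with hypothetical port costs: spanning tree forwards over the lowest-cost live path, so the higher-cost patch cable stays blocked until the IPS link disappears.

```python
# Sketch: STP-style path selection between the inline IPS and the patch cable.
def active_path(paths):
    """Return the lowest-cost path that is still up (the one STP forwards on)."""
    live = [p for p in paths if p["up"]]
    return min(live, key=lambda p: p["cost"]) if live else None

paths = [
    {"name": "inline IPS 4200", "cost": 10, "up": True},
    {"name": "patch cable",     "cost": 50, "up": True},  # higher STP cost
]

print(active_path(paths)["name"])   # inline IPS 4200
paths[0]["up"] = False              # IPS loses power and stops passing traffic
print(active_path(paths)["name"])   # patch cable takes over
```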
    Mike

  • SQL Server 2005 High Availability and Disaster Recovery options

    Hi, We are working on a high availability and disaster recovery plan for an application database on SQL Server 2005. What options do we have to implement this for SQL Server 2005, and once we have everything set up, how do we test that failover is working?
    Thanks in advance.
    Ione

    DR: Disaster recovery is the best option for the business to minimize data loss and downtime. SQL Server has a number of native options, but everything depends on your recovery time objective (RTO) and recovery point objective (RPO).
    1. Data center disaster: geo-clustering
    2. Server (host) or drive (except shared drive) disaster: clustering
    3. Database or drive disaster: database mirroring, log shipping, replication
    Log shipping
    Log shipping is the process of automating full database and transaction log backups on a production server and automatically restoring them onto a secondary (standby) server.
    Log shipping works with either the Full or Bulk-logged recovery model.
    You can also configure log shipping within a single SQL Server instance.
    The standby database can be either restoring or read-only (standby).
    A manual failover is required to bring the database online.
    Some data can be lost (typically up to the log backup interval, e.g. 15 minutes).
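    Below is a minimal sketch of what one log-shipping cycle automates, assuming pyodbc and Windows authentication; the server names, share path, database name, and schedule are hypothetical.

```python
# Minimal sketch of one log-shipping cycle (hypothetical names throughout).
import pyodbc

PRIMARY = ("DRIVER={ODBC Driver 17 for SQL Server};"
           "SERVER=PROD01;DATABASE=master;Trusted_Connection=yes")
SECONDARY = ("DRIVER={ODBC Driver 17 for SQL Server};"
             "SERVER=DR01;DATABASE=master;Trusted_Connection=yes")
LOG_BACKUP = r"\\backupshare\logs\AppDB.trn"

def ship_log_once():
    # 1. Back up the transaction log on the production server.
    with pyodbc.connect(PRIMARY, autocommit=True) as conn:
        conn.execute(f"BACKUP LOG AppDB TO DISK = '{LOG_BACKUP}' WITH INIT")
    # 2. Restore it on the standby. WITH STANDBY keeps the database readable
    #    between restores; WITH NORECOVERY would keep it restoring-only.
    with pyodbc.connect(SECONDARY, autocommit=True) as conn:
        conn.execute(f"RESTORE LOG AppDB FROM DISK = '{LOG_BACKUP}' "
                     r"WITH STANDBY = 'D:\standby\AppDB_undo.tuf'")

# A log-shipping agent job runs this on a schedule, e.g. every 15 minutes,
# which is where the "up to 15 minutes of data loss" figure comes from.
ship_log_once()
```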
    Peer-to-Peer Transactional Replication
    Peer-to-peer transactional replication is designed for applications that might read or modify the data in any database that participates in replication. Additionally, if any server that hosts a database becomes unavailable, you can modify the application to route traffic to the remaining servers, which contain identical copies of the data.
    Clustering
    Clustering is a combination of two or more servers in which one physical server automatically takes over the tasks of another physical server that has failed. It is not a real disaster recovery solution on its own, because if the shared drive is unavailable we cannot bring the database online.
    Clustering is a good option in that it provides minimal downtime (around 5 minutes) and minimal data loss in case of a server failure, or a data center failure with geo-clustering.
    Clustering needs extra hardware/servers, and it's more expensive.
    Database mirroring
    Database mirroring was introduced in SQL Server 2005. It maintains an exact copy of a database on a different server, has an automatic failover option, and mainly helps to increase database availability.
    Database mirroring works only with the FULL recovery model.
    It needs two instances.
    The mirror database is always in a restoring state.
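    A hedged sketch of the core setup statements, again via pyodbc; the instance names, endpoint URLs, and database name are hypothetical, and it assumes mirroring endpoints already exist on both instances.

```python
# Sketch of the core database-mirroring setup (hypothetical names; assumes
# mirroring endpoints already exist on both instances).
import pyodbc

steps = [
    # On the mirror instance: restore a full backup WITH NORECOVERY so the
    # database stays in the restoring state described above.
    ("DR01", r"RESTORE DATABASE AppDB FROM DISK = '\\backupshare\AppDB.bak' WITH NORECOVERY"),
    # Point each side at its partner endpoint: mirror first, then principal.
    ("DR01", "ALTER DATABASE AppDB SET PARTNER = 'TCP://prod01.example.com:5022'"),
    ("PROD01", "ALTER DATABASE AppDB SET PARTNER = 'TCP://dr01.example.com:5022'"),
]
for server, sql in steps:
    cs = (f"DRIVER={{ODBC Driver 17 for SQL Server}};"
          f"SERVER={server};DATABASE=master;Trusted_Connection=yes")
    with pyodbc.connect(cs, autocommit=True) as conn:
        conn.execute(sql)
```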
    http://msdn.microsoft.com/en-us/library/ms151196%28v=sql.90%29.aspx
    http://blogs.technet.com/b/wbaer/archive/2008/04/19/high-availability-and-disaster-recovery-with-microsoft-sql-server-2005-database-mirroring-and-microsoft-sql-server-2005-log-shipping-for-microsoft-sharepoint-products-and-technologies.aspx
    http://www.slideshare.net/rajib_kundu/disaster-recovery-in-sql-server
    HADR Considerations
    Understand the business motivations and regulatory requirements that are driving the customer's HA/DR requirements, and how the customer categorizes the workload from an HA/DR perspective; the needs and the categorization are likely to align.
    Check both the recovery time objective (RTO) and the recovery point objective (RPO) for the different workload categories, for both a failure within a data center (local high availability) and a total data center failure (disaster recovery). While RPO and RTO vary for different workloads because of business, cost, or technological considerations, customers may prefer a single technical solution for ease of operations. However, a single technical solution may require trade-offs that need to be discussed with customers so that their expectations are set appropriately.
    Check whether there is an organizational preference for a particular HA/DR technology. Customers may have a preference because of previous experiences, established operational procedures, or simply the desire for uniformity across databases from different vendors. Understand the motives behind a preference: a customer's preference for an HA/DR approach may not be because of the functions and features of the HA/DR technology. For example, a customer may adopt a third-party solution for DR to maintain a single operational procedure. For this reason, using HA/DR technology provided by a SAN vendor (such as EMC SRDF) is a popular approach.
    To design and adopt an HA/DR solution it is also important to understand the implications of applying maintenance to both hardware and software (including Windows security patching). Database mirroring is often adopted to minimize the service disruption involved in achieving this objective.
    HADR options:
    Failover clustering for HA and database mirroring for DR.
    Synchronous database mirroring for HA/DR and log shipping for additional DR.
    Geo-cluster for HA/DR and log shipping for additional DR.
    Failover clustering for HA and storage area network (SAN)-based replication for DR.
    Peer-to-peer replication for HA and DR (and reporting).
    Backup and restore (DR)
    Keep your server database backups in a network location (DR).
    Always keep your SQL Server 2005 installation up to date; if you are no longer getting official support from Microsoft, you will have to take care of any critical issues yourself.
    Raju Rasagounder Sr MSSQL DBA

  • Advice Requested - High Availability WITHOUT Failover Clustering

    We're creating an entirely new Hyper-V virtualized environment on Server 2012 R2.  My question is:  Can we accomplish high availability WITHOUT using failover clustering?
    So, I don't really have anything AGAINST failover clustering, and we will happily use it if it's the right solution for us, but to be honest, we really don't want ANYTHING to happen automatically when it comes to failover.  Here's what I mean:
    In this new environment, we have architected 2 identical, very capable Hyper-V physical hosts, each of which will run several VMs comprising the equivalent of a scaled-back version of our entire environment.  In other words, there is at least a domain
    controller, multiple web servers, and a (mirrored/HA/AlwaysOn) SQL Server 2012 VM running on each host, along with a few other miscellaneous one-off worker-bee VMs doing things like system monitoring.  The SQL Server VM on each host has about 75% of the
    physical memory resources dedicated to it (for performance reasons).  We need pretty much the full horsepower of both machines up and going at all times under normal conditions.
    So now, to high availability.  The standard approach is to use failover clustering, but I am concerned that if these hosts are clustered, we'll have the equivalent of just 50% hardware capacity going at all times, with full failover in place of course
    (we are using an iSCSI SAN for storage).
    BUT, if these hosts are NOT clustered, and one of them is suddenly switched off, experiences some kind of catastrophic failure, or simply needs to be rebooted while applying WSUS patches, the SQL Server HA will fail over (so all databases will remain up
    and going on the surviving VM), and the environment would continue functioning at somewhat reduced capacity until the failed host is restarted.  With this approach, it seems to me that we would be running at 100% for the most part, and running at 50%
    or so only in the event of a major failure, rather than running at 50% ALL the time.
    Of course, in the event of a catastrophic failure, I'm also thinking that the one-off worker-bee VMs could be replicated to the alternate host so they could be started on the surviving host if needed during a long-term outage.
    So basically, I am very interested in the thoughts of others with experience regarding taking this approach to Hyper-V architecture, as it seems as if failover clustering is almost a given when it comes to best practices and high availability.  I guess
    I'm looking for validation on my thinking.
    So what do you think?  What am I missing or forgetting?  What will we LOSE if we go with a NON-clustered high-availability environment as I've described it?
    Thanks in advance for your thoughts!

    Udo -
    Yes your responses are very helpful.
    Can we use the built-in Server 2012 iSCSI Target Server role to convert the local RAID disks into an iSCSI LUN that the VMs could access?  Or can that not run on the same physical box as the Hyper-V host?  I guess if the physical box goes down
    the LUN would go down anyway, huh?  Or can I cluster that role (iSCSI target) as well?  If not, do you have any other specific product suggestions I can research, or do I just end up wasting this 12TB of local disk storage?
    - Morgan
    That's a bad idea. First of all, the Microsoft iSCSI target is slow (it's non-cached on the server side). So if you have really decided to use dedicated hardware for storage (maybe you do have a reason I don't know...), and if you're fine with your storage being a single point of failure (OK, maybe your RTOs and RPOs allow it), then at least use an SMB share. SMB at least caches I/O on both the client and server sides, and you can also use Storage Spaces as a back end for it (non-clustered), so read "write-back flash cache for cheap". See:
    What's new in iSCSI target with Windows Server 2012 R2
    http://technet.microsoft.com/en-us/library/dn305893.aspx
    Improved optimization to allow disk-level caching (updated): iSCSI Target Server now sets the disk cache bypass flag on hosting disk I/O, through Force Unit Access (FUA), only when the issuing initiator explicitly requests it. This change can potentially improve performance. Previously, iSCSI Target Server would always set the disk cache bypass flag on all I/Os. System cache bypass functionality remains unchanged in iSCSI Target Server; for instance, the file system cache on the target server is always bypassed.
    Yes, you can cluster the Microsoft iSCSI target, but a) it would be SLOW, as there is only an active-passive I/O model (no real benefit from MPIO between multiple hosts), and b) it would require shared storage for the Windows cluster. What for? The scenario was useful when a) there was no virtual Fibre Channel, so guest VM clusters could not use FC LUNs, and b) there was no shared VHDX, so SAS could not be used for guest VM clusters either. Now both exist, so the scenario is pointless: just export your existing shared storage without any Microsoft iSCSI target and you'll be happy. For references see:
    MSFT iSCSI Target in HA mode
    http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
    Cluster MSFT iSCSI Target with SAS back end
    http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
    Guest VM Cluster Storage Options
    http://technet.microsoft.com/en-us/library/dn440540.aspx
    Storage options
    The following table lists the storage types that you can use to provide shared storage for a guest cluster.
    Shared virtual hard disk: New in Windows Server 2012 R2, you can configure multiple virtual machines to connect to and use a single virtual hard disk (.vhdx) file. Each virtual machine can access the virtual hard disk just like servers would connect to the same LUN in a storage area network (SAN). For more information, see Deploy a Guest Cluster Using a Shared Virtual Hard Disk.
    Virtual Fibre Channel: Introduced in Windows Server 2012, virtual Fibre Channel enables you to connect virtual machines to LUNs on a Fibre Channel SAN. For more information, see Hyper-V Virtual Fibre Channel Overview.
    iSCSI: The iSCSI initiator inside a virtual machine enables you to connect over the network to an iSCSI target. For more information, see iSCSI Target Block Storage Overview and the blog post Introduction of iSCSI Target in Windows Server 2012.
    Storage requirements depend on the clustered roles that run on the cluster. Most clustered roles use clustered storage, where the storage is available on any cluster node that runs a clustered
    role. Examples of clustered storage include Physical Disk resources and Cluster Shared Volumes (CSV). Some roles do not require storage that is managed by the cluster. For example, you can configure Microsoft SQL Server to use availability groups that replicate
    the data between nodes. Other clustered roles may use Server Message Block (SMB) shares or Network File System (NFS) shares as data stores that any cluster node can access.
    Sure you can use third-party software to replicate 12TB of your storage between just a pair of nodes to create a fully fault-tolerant cluster. See (there's also a free offering):
    StarWind VSAN [Virtual SAN] for Hyper-V
    http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
    The product is similar to what VMware has just released for ESXi, except it has been selling for about two years, so it's mature :)
    There are other players doing this, such as DataCore (more focused on Windows-based FC) and SteelEye (more about geo-clustering and replication). You may want to give them a try.
    Hope this helped a bit :) 
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • NAC High Availability: Users getting disconnected during failover

    Hi,
    We have a pair of CAS in in-band virtual-gateway mode in high availability mode.
    We are still running some tests but we have noticed that the clients are losing connectivity during the failover.
    * The service IP is always active (it never stops responding to pings).
    * The standby CAS becomes active immediately after we shut down the primary; we see it on the CAM.
    * The client, however, loses connectivity with the internal network for almost two minutes.
    I'm guessing this isn't normal, but I would like to know what the expected behaviour is.
    Thanks and regards,

    We configured another pair today and we are noticing the same behaviour, but it seems random... sometimes the user barely loses the connection, other times it takes 2-5 minutes for it to come back.
    We are only using eth2 for the failover link, since we only have one serial port.
    When we test, we make sure both servers are up and then we reboot the primary. The secondary becomes active immediately. When both are up again we repeat the process.
    Any other ideas? Something we should check?
    Thanks!

  • HANA High Availability System Vs Storage Vs VIP failover

    Dear Experts,
    Hope you're all doing great. I would like to seek your expertise on HANA high availability best practices. We have decided to use TDI for BW on HANA. The next big question for us is how to make it available at least 99.99% of the time.
    I have gone through multiple documents, SDN forums, etc., but would like to see how experts do this in real deployments.
    My view -
    Virtual IP failover is a common HA practice that has been used to fail over CI/DB hosts in case of failure or maintenance. In this case, both nodes can be used to run app servers.
    System replication: HANA-based; requires a secondary standby node that doesn't accept user requests but replicates the database from the primary using logs, after an initial data snapshot, either synchronously or asynchronously. (Can be used for HA, or for DR if the servers are in different data centers.)
    Storage replication: HANA-based; requires a secondary standby node and replicates the SAN for HA/DR.
    Could you please share the method you followed for HANA HA, and the pros, cons, and challenges you have faced or are facing?
    Thanks
    Yoga

    Thanks forbrich
    Do you know of any specific doc that describes the installation and configuration steps for 10g RAC on NAS? If possible, can you provide a link that I could use to perform this task?
    I have done RAC installations on SAN without any problems and it's something I'm fairly experienced with. With NAS I am not really comfortable, since I can't seem to find any documentation that describes a step-by-step installation procedure, or guidelines for that matter.
    Thank you for your input
    Best Regards
    Abbas

  • Slides explaining EM High Availability & Disaster Recovery

    http://www.oracle.com/technology/deploy/availability/pdf/oracle-openworld-2008/298477.pdf

    Check the following note, it should be helpful.
    Note: 216212.1 - Business Continuity for Oracle Applications Release 11i, Database Releases 9i and 10g
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=216212.1

  • How to configure high availability and disaster recovery? And user authenticate

    We are in the process of rolling out our online help which was created using Robohelp.   In our initial rollout we will provide access to the files via our Client Portal which requires authentication.  We are also planning for our next version where we intend to implement Robohelp server functionality.
    Our IT team is looking at options for configuring high availability and disaster recovery. It seems that RoboHelp doesn't have any built-in functionality in this area. In addition, we require that our users authenticate. The options for the server version seem to be more internally focused, and we would need to handle authentication using a third party.
    Would anyone be willing to share their approach in these areas?  Would you be willing to participate in a conference call with our IT Professionals?

    Hello again
    I see my good friend Peter replied to your LinkedIn post where you cross-posted the same question. For those here who have no clue what Peter stated, here it is:
    What are you seeking to recover? Your projects? Your outputs? This sounds like a question more appropriate to Disaster Recovery consultants and far wider reaching than RoboHelp. To me it seems like a question your IT people should be asking direct to such consultants who would expect a fee for their advice.
    I would agree with Peter's reply.
    I'll also go further and ask what exactly is being done in this realm for the application? Help files are generally there to support an application on the server. So whatever you are doing for the application should also work for the WebHelp, FlashHelp or web-based AIR Help files, no?
    Cheers... Rick

  • Sql Server High availability failover trigger

    Hello,
    We are implementing sql server 2012 availability groups (AG). Our secondary databases are not accessible in order to save licenses.
    We have a lot of issues concerning monitoring, backup and SSIS. They all come down to the fact that these tools want basic information from the secondary, which is not accessible. We are implementing SSIS, which is supported on AGs, but the SSISDB is encrypted.
    Backup problem
    The secondary instance does not know anything about the backups made in the primary instance. After a failover differential backups fail.
    SSIS problem:
    There is a blog (http://blogs.msdn.com/b/mattm/archive/2012/09/19/ssis-with-alwayson.aspx) that suggests making a job that checks whether the status has changed from secondary to primary. If so, you can decrypt and encrypt again.
    This job has to be executed every minute, which is way too much effort for an event that happens only once in a while. There are a few other problems with this solution: the phrase "use ssisdb" has to be included in a job step, and the job step fails because the secondary is not accessible.
    Monitoring problems:
    We use Microsoft tooling for monitoring: SCOM. SCOM does not recognize a non-readable secondary and tries to log in continuously.
    There are a few solutions that I can think of:
    - a built-in SQL Server failover trigger
    - a special status for the secondary database
    Failover trigger:
    We would like a built-in failover trigger, instead of a time-based job, that starts a few standard maintenance actions only at the time of (or directly after) a failover. Because right now our HA cluster is not really highly available until:
    - SSISDB works and is accessible after failover
    - backup information is synchronised
    - SCOM monitoring skips the secondary database (SCOM produces loads of login failures)
    Does anyone have any suggestions on how to fix this?

    No built-in trigger can achieve your requirement.
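    For what it's worth, a minimal sketch of the polling workaround from the blog mentioned above, assuming pyodbc; the connection string and the maintenance actions are hypothetical placeholders. It watches the local replica's role in sys.dm_hadr_availability_replica_states and reacts once the role flips to PRIMARY.

```python
# Sketch: poll the local replica role and run post-failover maintenance once.
import time
import pyodbc

CS = ("DRIVER={ODBC Driver 17 for SQL Server};"
      "SERVER=AGNODE1;DATABASE=master;Trusted_Connection=yes")

ROLE_SQL = """
SELECT ars.role_desc
FROM sys.dm_hadr_availability_replica_states AS ars
WHERE ars.is_local = 1
"""

def current_role():
    with pyodbc.connect(CS) as conn:
        return conn.execute(ROLE_SQL).fetchone()[0]

last = current_role()
while True:
    role = current_role()
    if last != "PRIMARY" and role == "PRIMARY":
        # Failover just happened: re-encrypt SSISDB, sync backup history,
        # re-enable SCOM checks, etc. (placeholders for the actions above).
        print("Became primary - running post-failover maintenance")
    last = role
    time.sleep(60)  # the blog's job polls once a minute
```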

  • Question on replication/high availability designs

    We're currently trying to work out a design for a high-availability system using Oracle 9i release 2. Having gone through some of the Oracle whitepapers, it appears that the ideal architecture involves setting up 2 RAC sites using Dataguard to synchronize the data. However, due to time and financial constraints, we are only allowed to have 2 servers for hosting the databases, which are geographically separate from each other in prevention of natural disasters. Our app servers will use JDBC pools to connect to the databases.
    Our goal is to have both databases be the mirror image of each other at any given time, and the database must be working 24/7. We do have a primary and a secondary distinction between the two, so if the primary fails, we would like the secondary database to take over the tasks as needed.
    The ability to query existing data is mission critical. The ability to write/update the database is less important, however we do need the secondary to be able to process data input/updates when primary is down for a prolonged period of time, and have the ability to synchronize back with the primary site when it is back up again.
    My question now is which replication technology should we try to implement? I've looked into both Oracle Advanced Replication and Dataguard, each seems to have its own advantages and drawbacks:
    Replication - can easily switch between the two databases using a multimaster implementation; however, data recovery/synchronization may be difficult in case of failure, and data may be lost (depending on the implementation). There have been a few posts in this forum suggesting that replication should not really be considered an option for high availability; why is that?
    Dataguard - zero data loss in failover/switchover; however, manual intervention is required to initiate failover/switchover. Once the primary site fails over to the standby, the standby becomes the primary until the DBA manually goes back in and switches the roles. In Oracle 10g release 2, it seems that automatic failover is achieved through the use of an extra observer piece. There does not seem to be any way to do this in Oracle 9i release 2.
    Being new to the implementation of high-availability systems, I am at somewhat of a loss at this point. Both implementations seem to be a possible candidate, but we will need to sacrifice some efforts for both of them also. Would anyone shine some light on this, maybe point out my misconceptions with Advanced Replication and Dataguard, and/or suggest a better architecture/technology to use? Any input is greatly appreciated, thanks in advance.
    Sincerely,
    Peter Tung

    Hi,
    It sounds as if you're talking about the DB_TXN_NOSYNC flag, rather than DB_NOSYNC.
    You mention that in general, you lose uncommitted transactions on system failure. I think what you mean is that you may lose some committed transactions on system failure. This is correct.
    It is also correct that if you use replication you can arrange to have clients have a copy of all committed transactions, so that if the master fails (and enough clients do not fail, of course) then the clients still have the transaction data, even when using DB_TXN_NOSYNC.
    This is a very common usage scenario for Berkeley DB replication/HA, used to achieve high throughput. You will want to pay attention to the configured ack policy, group size setting, setting of the 2SITE_STRICT option (if group size == 2).
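    A sketch of those knobs through the bsddb3 Python bindings for Berkeley DB; whether these wrappers and constants exist depends on your bsddb3/Berkeley DB build, and the home directory, address, and group size here are hypothetical.

```python
# Sketch of a replicated master with DB_TXN_NOSYNC plus an ack policy
# (bsddb3 bindings; exact API availability depends on the BDB build).
from bsddb3 import db

env = db.DBEnv()
# Commits no longer force the log to disk, so this node alone could lose
# recently committed transactions on a crash...
env.set_flags(db.DB_TXN_NOSYNC, 1)
# ...so make durability come from replication instead: require a quorum of
# clients to acknowledge each transaction before commit returns.
env.repmgr_set_ack_policy(db.DB_REPMGR_ACKS_QUORUM)
env.rep_set_nsites(2)                                   # replication group size
env.rep_set_config(db.DB_REPMGR_CONF_2SITE_STRICT, 1)   # safer with nsites == 2

env.open("/var/lib/bdb-master",
         db.DB_CREATE | db.DB_INIT_TXN | db.DB_INIT_LOCK | db.DB_INIT_LOG |
         db.DB_INIT_MPOOL | db.DB_INIT_REP | db.DB_THREAD)
env.repmgr_set_local_site("10.0.0.1", 5001)
env.repmgr_start(3, db.DB_REP_MASTER)   # 3 worker threads, start as master
```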

  • Access Services 2013 high available on SQL 2012?

    Hi,
    Does anyone know how to make the randomly created databases from Access Services 2013 highly available on our SQL 2012 cluster?
    For the SharePoint content databases, we use the 'AlwaysOn' functionality of SQL 2012. This functionality is not supported for Access Services 2013.
    Thanks in advance,
    Johan

    Hi Edwin,
    Thanks again for your answer.
    Failover clustering is a perfect solution for high availability.
    A last additional question:
    Right now we are using AlwaysOn availability groups. The two instances are located at different physical locations. With that configuration, we have a high-availability solution and a disaster recovery solution.
    If we go for a failover cluster, we have a high-availability solution, but we no longer have a disaster recovery solution, because everything is located at the same physical location.
    We can configure SQL mirroring, but then we have the same behavior as the AlwaysOn groups: it is not possible to delete databases from SharePoint, and new Access Services 2013 databases are not automatically mirrored.
    Is there any solution/configuration to accomplish this?
    Thanks,
    Johan

  • Timesten high availability question

    I have a case presented here and wanted to know if it is actually possible to implement.
    Let us consider four nodes with TimesTen (11.2) installed on all of them. A datastore with the same name is created on each of the four servers. Two of these servers are in location A and the other two in location B. The servers in location A have replication defined between them, and similarly the servers in B have replication defined between them. But note that there is no replication defined between any server in location A and any server in location B.
    The basic idea of this entire setup is to maintain high availability of TimesTen at any point in time (in case of natural disasters, etc.), irrespective of the location of the servers.
    Now, we have oracle software installed in four other systems. Two of these servers are in location A and the other two are in location B. Note that they are not installed on the same box as Timesten.
    Scenario 1:
    Question: TimesTen in location A goes down; how is high availability taken care of?
    Answer: TimesTen on the other server in location A should come up, and because of the replication process, this solves the problem.
    Is this correct? I think it is.
    Scenario 2:
    Question: TimesTen installed on both of the nodes at location A goes down; how is high availability taken care of?
    Answer: ?
    Please remember from above that TimesTen does not have a replication policy defined between any server in location A and any server in location B. The requirement says that we should be able to recover all the latest data that the nodes at location A had, by pulling it from the Oracle DB at location A and loading it into the TimesTen server in location B. I would like to know if it is possible to do this.

    Hello,
    Your approach is correct for designing a disaster recovery architecture for TimesTen and the Oracle Database. TimesTen supports an active-standby pair topology that integrates well with the Oracle database within a particular site. However, as for any geographically distributed replication, it is recommended to configure replication across the WAN using the Oracle Database GoldenGate or Streams technologies in ASYNC mode for better throughput and efficiency. It is also recommended to compress replication traffic across the WAN between the Oracle databases.
    So while using the Oracle database to replicate transactions across the WAN is the right thing to do (using Streams replication or GoldenGate between the two Oracle databases, assuming an Oracle RAC 2-node cluster in each site), you will not be able to guarantee that every transaction in site A has made it to site B. The GoldenGate and Streams technologies can replicate data bi-directionally. What this means is that when site A recovers, transactions that had been trapped there (either between TimesTen and the Oracle DB or in the Oracle DB transaction logs) will attempt to replicate again to site B, so it is important to set up a conflict detection/resolution approach, which is possible in either GoldenGate or Streams.
    Note that Oracle Data Guard replication is not supported with TimesTen in such a configuration across the WAN where TimesTen datastores need to be maintained in both sites.
    To fully answer the question, however, we should get into the details of the type of cache group tables that you intend to use in TimesTen. If you are using TimesTen as just a read-only cache while all inserts/updates happen in the Oracle database, then the Oracle DB would be regarded as the database of record; it would handle all changes while data changes get auto-refreshed from the Oracle databases in each site into the respective TimesTen tables.
    If the application will be looking to take advantage of fast writes into TimesTen using AWT (asynchronous writethrough) cache group tables, then it is recommended to configure those tables as DYNAMIC AWT tables, so that if a failover to site B takes place and the data is not in TimesTen (but it is in the Oracle database), it will be automatically loaded on demand as needed from the Oracle database in site B. Note, however, that there are restrictions on DYNAMIC load-on-demand cache groups that you need to look into to find out whether they would work in your application's case (in particular, load on demand works only if the WHERE clause includes an equality predicate on the primary keys, foreign keys, or unique indexes, etc.).
    To fully answer your question on avoiding data loss across geographies, you would have to use synchronous replication between TimesTen and Oracle, using synchronous writethrough cache groups and SYNC replication in Streams, for example, between the two geographies. Neither of those configurations is used in the field, to my knowledge, because they are far from optimal and carry a huge response-time expense, which slows down replication considerably and affects application response times.
    My assumption also is that the need for the Oracle database is because all data does not fit into memory. If the data does fit into memory, then you could also consider a pure timesten replication across the two sites using an active-standby pair on site A and a read-only subscriber on site B that would be made ACTIVE only in the case of a disaster on site A. Once site B takes over, you can also create an active-standby pair in site B based on the newly elected ACTIVE datastore in that site. In all these cases, it is recommended to use SYNC 2-SAFE replication between TimesTen datastores in the same site and ASYNC replication between the two sites.
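    To make the DYNAMIC AWT suggestion concrete, here is a hedged sketch of such a cache group created over the TimesTen ODBC driver with pyodbc; the DSN, schema, table, and columns are all hypothetical.

```python
# Sketch: create a dynamic AWT cache group so rows load on demand from Oracle
# (hypothetical DSN/schema; assumes cache admin users are already configured).
import pyodbc

ddl = """
CREATE DYNAMIC ASYNCHRONOUS WRITETHROUGH CACHE GROUP app.cg_orders
FROM app.orders (
    order_id NUMBER NOT NULL,
    customer VARCHAR2(64),
    amount   NUMBER,
    PRIMARY KEY (order_id)
)
"""

with pyodbc.connect("DSN=tt_siteB", autocommit=True) as conn:
    conn.execute(ddl)
    # Load on demand needs an equality predicate on the primary key: this
    # SELECT pulls the row from Oracle into TimesTen if it is not cached yet.
    row = conn.execute("SELECT amount FROM app.orders WHERE order_id = ?",
                       12345).fetchone()
```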

  • SSRS 2014 Highly Available

    The objective is to set up a production environment where SSRS is highly available. According to this article, SSRS should be installed on the failover server. This is counter-intuitive to me. Is the article correct? If so, can someone explain the rationale behind this topology?
    Link to article: 
    http://msdn.microsoft.com/en-us/library/bb522745.aspx
    Thank you for your assistance in this matter.

    Bryan,
    The article is a little confusing in that it doesn't provide a more robust justification.
    Do you need to put SSRS on a failover server to provide higher availability? Of course not.
    If your configuration for your primary should include SSRS to support your users then definitely follow your instincts.
    What the article fails to mention is that with 8 replicas now available, some organizations are strategically locating failovers (usually disaster recovery servers) to perform double duty, as reporting servers.
    For example, configure a replica in another part of the country or the world. Perhaps your data center is located on the USA east coast and you create a replica in another office in Singapore. The replica receives continuous updates from the primary and works as a DR solution, but... it also provides "local" reporting for users. The performance for them is much faster (obviously).
    Also included in SQL Server 2014 is the ability for SSRS to continue providing reporting services on a replica EVEN WHEN the connection to the primary is lost. That way your folks in Singapore still work.
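    As an illustration, a hedged sketch of pointing a report query at such a replica through the availability group listener; the listener name, database, and query are hypothetical. ApplicationIntent=ReadOnly is what lets read-only routing send the connection to a readable secondary.

```python
# Sketch: read-only connection routed to a readable secondary via the listener.
import pyodbc

cs = ("DRIVER={ODBC Driver 17 for SQL Server};"
      "SERVER=ag-listener.example.com;"
      "DATABASE=ReportData;"
      "ApplicationIntent=ReadOnly;"   # routes to a readable replica
      "Trusted_Connection=yes")
with pyodbc.connect(cs) as conn:
    rows = conn.execute("SELECT TOP 5 name FROM sys.tables").fetchall()
```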
    Hope that helps...

  • DB high availability

    What would be the best approach for high availability of the DB if the DB server or its hardware crashes?
    Is RAC a good option, or is there some other good approach?

    Also checkout Oracle's own paper on Maximum Availability Architecture:
    http://otn.oracle.com/deploy/availability/pdf/MAA_WP.pdf
    In a nutshell, if you have a 2-node RAC system and the first node fails, your application can potentially keep running without interruption. In the worst case, your users would simply have to re-connect to continue using the application. With Data Guard the likelihood is that all users would be disconnected, and would not be able to re-connect until you have completed failover, which requires manual intervention. Depending on how you configure Data Guard, there is also potential for some data loss on failover. Data Guard is really intended for disaster recovery - when you've lost your primary site completely.
    Depending on the nature of your application, you can also make use of both nodes in the RAC configuration rather than having one node idle whilst the other does all the work. Changes may be required to your application for this to work well though, so I'd give it careful thought and thorough testing before exploiting the load balancing capabilities.
    If you read the above paper, you'll see that Oracle recommends RAC AND Data Guard for maximum availability, but the paper does highlight that there are other considerations if you want true high availability - namely, eliminating all possible single points of failure in your hardware and network.
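    To illustrate the "users simply re-connect" point, a minimal sketch using the python-oracledb driver with a retry loop; the service name, credentials, and back-off policy are hypothetical.

```python
# Sketch: re-connect to a surviving RAC node after a failure (hypothetical DSN).
import time
import oracledb

DSN = "rac-scan.example.com:1521/app_service"  # service spanning both nodes

def query_with_reconnect(sql, retries=3):
    for attempt in range(retries):
        try:
            with oracledb.connect(user="app", password="secret", dsn=DSN) as conn:
                return conn.cursor().execute(sql).fetchall()
        except oracledb.DatabaseError:
            time.sleep(2 ** attempt)  # surviving node accepts new connections
    raise RuntimeError("database unavailable after retries")

rows = query_with_reconnect("SELECT sysdate FROM dual")
```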

  • Hyper-V 2012 High Availability using Windows Server 2012 File Server Storage

    Hi Guys,
    Need your expertise regarding Hyper-V high availability. We set up two Hyper-V 2012 hosts in our infrastructure for our domain consolidation project. Unfortunately, we don't have the hardware storage that is said to be a requirement for creating a failover cluster of Hyper-V hosts to implement HA. Here's the setup:
    Host1
    HP Proliant L380 G7
    Windows Server 2012 Std
    Hyper-V role, Failover Cluster Manager and File and Storage Services installed
    Host2
    Dell PowerEdge 2950
    Windows Server 2012 Std
    Hyper-V role, Failover Cluster Manager and File and Storage Services installed
    Storage
    Dell PowerEdge 6800
    Windows Server 2012 Std
    File and Storage Services installed
    I'm able to configure the new Shared Nothing Live Migration feature - I'm able to move VMs back and forth between my hosts without shared storage. But this is a planned, proactive approach. My concern is to make my Hyper-V hosts highly available in the event of a system failure. If host1 dies, the VMs should move to host2, and vice versa. In setting this up, I believe I need to enable failover clustering between my Hyper-V hosts, which I already did, but upon validation it says "No disks were found on which to perform cluster validation tests." Is it possible to cluster them using just a regular Windows file server? I've read about SMB 3.0 and I've configured it as well; I'm able to save VMs on my file server, but I don't think that my Hyper-V hosts are highly available yet.
    Any feedback, suggestions or recommendations are highly appreciated. Thanks in advance!

    Your shared storage is a single point of failure in this scenario, so I would not consider the whole setup a production configuration... The setup is also both slow (all I/O travels down the wire to the storage server; running VMs from DAS is much faster) and expensive (a third server plus an extra Windows license). I would think twice about what you do, and either deploy the built-in VM replication technology (Hyper-V Replica) and applications' built-in clustering features that do not require shared storage (SQL Server with Database Mirroring, for example; by the way, what workload do you run?), or use third-party software that creates fault-tolerant shared storage from DAS, or invest in physical shared storage hardware (an HA model, of course).
    Hi VR38DETT,
    Thanks for responding. The hosts will run a domain controller (one on each host), web-filtering software (Websense), anti-virus (McAfee ePO), WSUS and an audit server at the moment. Does Hyper-V Replica give "high availability" to the VMs or to the Hyper-V hosts? Also, is a cluster required in order to implement it? I haven't tried that, but it's worth a try.
