Cisco disaster recovery design

Dear all,
I need help designing disaster recovery between two sites. The customer wants two Cisco Catalyst 6500 Series switches as core devices at each site.
How should the physical and logical connections be designed between the two 6500 switches at each site? (solution 1)
How should the physical and logical connections be designed between the two sites? (solution 2)
At which sites do we need VSS, and on which switches do we need EtherChannel?
Thanks

Yes, full recovery, but active/active. For example, one server is active in site 2, one service or application is active in site 1, and the other services are active in site 2.
The routers and ASAs, however, are active/standby.

Similar Messages

  • Cisco Expressway Core & Edge Cluster, or Disaster Recovery Setup

    Hi All,
    Dear experts,
    I have two sites, HQ & Branch. Below are my questions for the Expressway Core & Edge cluster:
    1. Can I create one cluster of 6 servers for Expressway Core and distribute 3 servers in HQ & 3 servers in Branch for HA?
    2. Can I create one cluster of 6 servers for Expressway Edge and distribute 3 servers in HQ & 3 servers in Branch for HA?
    3. How would the DNS SRV records and call flow work if the main-site Expressway Edge in HQ goes down? How does the Branch-side Expressway Edge become active for B2B calls?
    4. What is the best design for disaster recovery of Expressway Core and Expressway Edge between the sites (HQ & Branch)?
    Regards,
    Irf

    Regarding 1 and 2, as long as the round-trip delay is no more than 30 ms, you should be fine. I presume the HQ and Branch offices share an internal network for the Expressway-Core, since the Core sits inside the corporate network, while the Edge goes in the DMZ or outside the network for public access.
    For 3, as long as your SRV records are set up correctly, you can set the priority of which office gets used first, and which second, in case of a failure.
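    As a rough illustration only (the record name, domain and hosts below are placeholders, not taken from your deployment), you can check from any recent Windows machine which Edge your published SRV priorities currently prefer:
    # Hypothetical example: list an SRV record set and its priorities.
    # The entry with the lowest Priority value is tried first (e.g. HQ);
    # the higher value (Branch) is only used when the preferred target fails.
    Resolve-DnsName -Name "_sips._tcp.example.com" -Type SRV |
        Where-Object { $_.Type -eq "SRV" } |
        Sort-Object Priority |
        Select-Object NameTarget, Priority, Weight, Port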
    I don't have an answer for 4 unfortunately.

  • Full featured disaster recovery

    Hi,
    Is there a way to restore or import the backup history when creating a new instance? And is there a way to back it up?
    This would be very handy for disaster recovery (machine completely dead; restore from tape/backup media). It would then be possible to do a point-in-time recovery without having to look up each of the individual log backups that are needed.
    bye
    Chris

    Hi Christian,
    For disaster recovery the necessary steps are simple:
    - create a new instance on a new server
    - recover from the backup medium (of course not from the backup history)
    When you recover log backups from a medium, you just provide the start number from which DBMGUI should begin recovering. It will then increment the number until no more log backups can be found.
    - After that you can start the database, and with that you're done with the disaster recovery.
    Keep in mind: disaster recovery is an action that should produce the latest state of the database available through the backups. It's not designed to make point-in-time recoveries extremely easy (although you can always specify the "recover until" option).
    KR Lars

  • ADFS Disaster Recovery

    Hi,
    I'm looking at designing an ADFS solution to accommodate 4,500 users and provide SSO with a web application.
    I'm thinking of using two 2012 R2 ADFS servers on my internal network and two 2012 R2 Web Application Proxies in my DMZ, then load balancing the connections using an F5.
    What I'm not sure about is how to achieve DR in another data centre. For example, is active/active across two different data centres supported? I was thinking of replicating my ADFS servers across the data centres and simply performing a failover in the event of a disaster, but I'm not sure how well that would work in reality.
    It would be great to hear from someone who's already done this.
    Thanks
    IT Support/Everything

    Hi,
    To provide Office 365 Single Sign-On with integration of on-premises AD and Windows Azure, it is recommended to use the on-premises environment for active use and Azure for business continuity. In case of a disaster, the failover between the on-premises infrastructure and the hosted infrastructure is a manual operation.
    It is not recommended to set up a cross-premises, high-availability (active/active) configuration.
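    If you do keep a second on-premises farm and treat failover as a manual step, a minimal sketch of promoting a WID-based AD FS 2012 R2 secondary in the DR data centre might look like the following (standard AD FS cmdlets; the server name is a placeholder, and none of this applies if your farm configuration lives in SQL Server):
    # Run on the secondary federation server in the DR data centre.
    Import-Module ADFS
    # Confirm this node is currently a secondary and check its last sync.
    Get-AdfsSyncProperties
    # Promote this node so it holds the writable configuration copy.
    Set-AdfsSyncProperties -Role PrimaryComputer
    # On each remaining secondary, point it at the new primary (placeholder name):
    # Set-AdfsSyncProperties -Role SecondaryComputer -PrimaryComputerName "dr-adfs01.corp.example.com"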
    For more detailed information please refer to these articles below:
    White paper: Office 365 Adapter - Deploying Office 365 single sign-on using Azure Virtual Machines
    http://technet.microsoft.com/library/dn509539.aspx
    Deployment scenario: Directory integration components in Azure for disaster recovery
    http://technet.microsoft.com/en-US/library/dn509536.aspx
    To get more efficient assistance, I suggest you refer to Azure and ADFS forums below:
    Azure Active Directory Forum
    https://social.msdn.microsoft.com/forums/azure/en-US/home?forum=WindowsAzureAD
    Claims based access platform (CBA), code-named Geneva Forum
    http://social.msdn.microsoft.com/Forums/vstudio/en-US/home?forum=Geneva
    Best Regards,
    Amy

  • Concurrent Transactions,Disaster Recovery

    Hello friends,
    I have an application on SQL Server which is used for online reservations. Could you give me a roadmap for handling concurrent transactions and a disaster recovery plan? I am using SQL Server 2008 R2.
    Thanks in advance.

    Hi,
    I suggest you read this article:
    High Availability and Disaster Recovery (OLTP) - a Technical Reference Guide for Designing Mission-Critical OLTP Solutions
    http://msdn.microsoft.com/en-us/library/ms190202.aspx
    If one high-availability option cannot meet the requirement, you may consider combining several of the high-availability options.
    The white paper Proven SQL Server Architectures for High Availability and Disaster Recovery shows the details of five commonly used architectures:
    ◦ Failover clustering for HA and database mirroring for DR.
    ◦ Synchronous database mirroring for HA/DR and log shipping for additional DR.
    ◦ Geo-cluster for HA/DR and log shipping for additional DR.
    ◦ Failover clustering for HA and storage area network (SAN)-based replication for DR.
    ◦ Peer-to-peer replication for HA and DR (and reporting).
    The white paper also presents case studies that illustrate how real-life customers have deployed these architectures to meet their business requirements. You can download it here:
    Proven SQL Server Architectures for High Availability and Disaster Recovery
    Thanks.
    Tracy Cai
    TechNet Community Support

  • ACTIVE/ACTIVE Disaster Recovery configuration

    If I have two separate 10.2.0 RAC databases in two separate geographical locations and each database is receiving updates and sending the changes to the other database via Streams, how would you configure a disaster recovery solution for this?
    In this scenario, each database is intended to be a copy of the other. It is an ACTIVE/ACTIVE type of setup.
    Do you also need a Data Guard database for each of these databases to support disaster recovery?
    Thanks for your feedback.

    Hello Sergio,
    To get to the IronPort documentation, please do the following:
    1) Go to www.cisco.com
    2) Log in with your CCO ID and password
    3) Select Support
    4) On the resulting page, under Product Name, select Security
    5) You should see the "Email Security" and "Web Security" options there, which will bring you to the documents.
    For WSA the doc guides are here http://www.cisco.com/en/US/customer/products/ps10164/products_user_guide_list.html
    For The ESA the doc guides are here http://www.cisco.com/en/US/customer/products/ps10154/products_user_guide_list.html
    Regards,
    Eric

  • SQL Server 2005 High Availability and Disaster Recovery options

    Hi, We are working on a High Availability & Disaster Recovery planning solution for an application database which is on SQL Server 2005. What different options have we got to implement this for SQL Server 2005, and after we have everything set up, how do we test that failover is working?
    Thanks in advance.........
    Ione

    DR: Disaster recovery is how the business minimizes its data loss and downtime. SQL Server has a number of native options, but everything depends on your recovery time objective (RTO) and recovery point objective (RPO).
    1. Data center disaster: geo-clustering
    2. Server (host)/drive (except shared drive) disaster: clustering
    3. Database/drive disaster: database mirroring, log shipping, replication
    Log shipping
    Log shipping automatically backs up the database and transaction log on a production server and then restores them onto a secondary (standby) server.
    Log shipping works with either the Full or the Bulk-Logged recovery model.
    You can even configure log shipping within a single SQL instance.
    The standby database can be either in the restoring state or read-only (standby).
    A manual failover is required to bring the database online (sketched below).
    Some data can be lost (typically up to the log backup interval, for example 15 minutes).
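    A minimal sketch of that manual failover, assuming a machine with the SQL PowerShell module available (instance name, database name and share path are placeholders):
    Import-Module SqlServer   # or the older SQLPS module
    $standby = "DRSQL01"          # hypothetical secondary instance
    $db      = "ReservationDB"    # hypothetical log-shipped database
    # Apply the last copied log backup, leaving the database ready for more logs.
    Invoke-Sqlcmd -ServerInstance $standby -Query "RESTORE LOG [$db] FROM DISK = N'\\drshare\logship\ReservationDB_last.trn' WITH NORECOVERY;"
    # No more logs to apply: recover the database so applications can connect.
    Invoke-Sqlcmd -ServerInstance $standby -Query "RESTORE DATABASE [$db] WITH RECOVERY;"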
    Peer-to-Peer Transactional Replication
    Peer-to-peer transactional replication is designed for applications that might read or modify the data in any database that participates in replication. Additionally, if any server that hosts one of the databases is unavailable, you can modify the application to route traffic to the remaining servers. The remaining servers contain identical copies of the data.
    Clustering
    Clustering combines two or more servers so that one physical server automatically takes over the tasks of another physical server that has failed. By itself it is not a real disaster recovery solution, because if the shared storage is unavailable we cannot bring the database online.
    Clustering (or geo-clustering across data centers) is the best option for minimal downtime (around 5 minutes) and minimal data loss in case of a data center or server failure.
    Clustering needs extra hardware/servers and is more expensive.
    Database mirroring
    Database mirroring was introduced in SQL Server 2005. It maintains an exact copy of a database on a different server, has an automatic failover option, and mainly helps to increase database availability.
    Database mirroring only works with the FULL recovery model.
    It needs two instances.
    The mirror database is always in the restoring state. A quick monitoring and manual-failover sketch follows the links below.
    http://msdn.microsoft.com/en-us/library/ms151196%28v=sql.90%29.aspx
    http://blogs.technet.com/b/wbaer/archive/2008/04/19/high-availability-and-disaster-recovery-with-microsoft-sql-server-2005-database-mirroring-and-microsoft-sql-server-2005-log-shipping-for-microsoft-sharepoint-products-and-technologies.aspx
    http://www.slideshare.net/rajib_kundu/disaster-recovery-in-sql-server
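    A minimal sketch for checking the mirroring state and performing a planned manual failover (instance and database names are placeholders; the failover must be run against the current principal and requires synchronous mirroring, i.e. SAFETY FULL):
    Import-Module SqlServer   # or the older SQLPS module
    $principal = "SQLNODE1"       # hypothetical principal instance
    $db        = "ReservationDB"  # hypothetical mirrored database
    # Check the current mirroring role, state and safety level.
    Invoke-Sqlcmd -ServerInstance $principal -Query "SELECT DB_NAME(database_id) AS db, mirroring_role_desc, mirroring_state_desc, mirroring_safety_level_desc FROM sys.database_mirroring WHERE mirroring_guid IS NOT NULL;"
    # Planned manual failover: principal and mirror swap roles.
    Invoke-Sqlcmd -ServerInstance $principal -Query "ALTER DATABASE [$db] SET PARTNER FAILOVER;"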
    HADR Considerations
    You need to understand the business motivations and regulatory requirements that are driving the customer's HA/DR requirements, and how the customer categorizes the workload from an HA/DR perspective. There is likely to be an alignment between the needs and the categorization.
    Check for both the recovery time objective (RTO) and the recovery point objective (RPO) for different workload categories, for both a failure within a data center (local high availability) and a total data center failure (disaster recovery). While RPO and
    RTO vary for different workloads because of business, cost, or technological considerations, customers may prefer a single technical solution for ease in operations. However, a single technical solution may require trade-offs that need to be discussed with
    customers so that their expectations are set appropriately.
    Check and understand whether there is an organizational preference for a particular HA/DR technology. Customers may have a preference because of previous experiences, established operational procedures, or simply the desire for uniformity across databases from different vendors. Understand the motives behind a preference: a customer's preference for HA/DR may not be because of the functions and features of the HA/DR technology. For example, a customer may decide to adopt a third-party solution for DR to maintain a single operational procedure. For this reason, using HA/DR technology provided by a SAN vendor (such as EMC SRDF) is a popular approach.
    To design and adopt an HA/DR solution it is also important to understand the implications of applying maintenance to both hardware and software (including Windows security patching). Database mirroring is often adopted to minimize the service disruption
    to achieve this objective.
    HADR Options:
    Failover clustering for HA and database mirroring for DR.
    Synchronous database mirroring for HA/DR and log shipping for additional DR.
    Geo-cluster for HA/DR and log shipping for additional DR.
    Failover clustering for HA and storage area network (SAN)-based replication for DR.
    Peer-to-peer replication for HA and DR (and reporting).
    Backup & Restore (DR)
    Keep your server DB backups in a network location (DR).
    Always keep your SQL Server 2005 up to date; if you are no longer getting official support from MS, you will have to take care of any critical issues yourself.
    Raju Rasagounder Sr MSSQL DBA

  • What to have for disaster recovery? Full backup?

    What do we need to have? In case we lose everything, what do we back up, and from where? There is a BART system, I think, in 4.1 - is it enough for full system recovery? Or do we also need to export users from BAT -> export? Why are they separate? (P.S. I am talking about 4.1.)
    thx

    Hi Okan,
    The utility you are referring to is called BARS (Backup and Restore), and it captures the data you need for disaster recovery/CCM rebuilds. You will not need the BAT export, as all the phone details etc. are captured in a .tar file during the BARS backup.
    The following list shows the data that is backed up and restored for the Cisco CallManager publisher database:
    Hosts and LMhosts files
    Latest Cisco CallManager publisher database
    DC Directory LDAP directory
    For Cisco CallManager 3.3.x DirectoryConfiguration.ini from C:\dcdsrvr.
    For Cisco CallManager 4.0(x) or later UMDirectoryConfiguration.ini.
    Directory schema files avvid_schemaV*.txt
    Publisher and subscriber configuration information to replication.ini file.
    Cisco CallManager version to version.ini file.
    If the option to back up CDR is chosen, the CDR database and CDR/CMR flat files from the local CDR path
    TFTP files from C:\Program Files\Cisco\TFTPPath (the default path)
    TFTP files from alternate file locations
    Cisco Bulk Administration Tool (BAT) files-templates from C:\CiscoWebs\BAT, CSV files from C:\BAT and the BATversion.asp file.
    HKLM\Software\Cisco Systems, Inc. (registry keys)
    Cisco CallManager DSN
    Security files (certificates) from under C:\Program Files\Cisco\Certificates. This applies only to Cisco CallManager 4.x or later.
    LDAPConfig.ini for IPMA configuration.
    Here is a good doc that has all the info for doing backups and then restoring on the new box:
    http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/bars/4_0_11/ag-4011.html
    Hope this helps!
    Rob

  • Is anyone doing disaster recovery for a J2EE application?

    We generally use database log shipping to maintain a standby database for our ABAP instances.  We can successfully fail over our production application to our disaster recovery site with no real issues.  With the J2EE instances (EP, ESS/MSS, BI, etc), we have a few concerns:
    The hostname cannot change without going through a system copy procedure, so we would have to keep the hostnames in DR the same (for example, see OSS note 757692 - changing the hostname is not supported).
    Fully qualified domain name - from what I understand, there are potential issues with changing the FQDN, for example with SSO certificates, BSPs, XI, etc.
    We can't keep both the hostname and the FQDN the same between DR and production, or we could never do a DR test.
    Has anyone implemented disaster recovery for any SAP J2EE application that has run into these concerns and addressed them?  Input would be greatly appreciated regarding how you addressed these issues, or how you architected your disaster recovery implementation.
    Regards,
    David Hull
    The Walt Disney Company

    I haven't done this personally, but I do have some experience with these issues in different HA environments.
    To your first point: You can change the hostname; Note 757692 tells you exactly how to do it. However, as the note says, "Changing the name of a host server in a production system is not automatically supported by SAP." When it says "supported by SAP" I think it means SAP the company, not SAP's software. So I would contact SAP to see if this configuration would be covered under your service agreement. Then you have to think about whether you want to do something that isn't "officially supported" by SAP. Also, I'm sure you'll need some kind of additional licensing for the DR systems, as their hardware keys will be different.
    To your second point: As for SSO certs (SAP Logon Tickets), I think they should still work as long as the SID and client number of the issuing system remain the same. I don't think they are hostname- or FQDN-dependent. For BSPs, I would think they would still work as long as they use relative paths rather than absolute paths. And for XI... I have no idea what kind of issues may arise; I'm not an XI guy.
    Again, I haven't done what you're describing myself.  This is just based on my HA experiences.
    Hope this helps a little,
    Glenn

  • SharePoint 2010 backup and restore to test SharePoint environment - testing Disaster recovery

    We have a production SharePoint 2010 environment with one Web/App server and one SQL server.   
    We have a test SharePoint 2010 environment with one server (SharePoint server and SQL server) and one AD (the domain is different from the prod environment).
    Servers are Windows 2008 R2 and SQL 2008 R2.
    Versions are the same on prod and test servers.  
    We need to set up a test environment with exactly the same setup as production - we want to test disaster recovery.
    We performed a backup of the farm from PROD and wanted to restore it on our new server in the test environment. The backup completed successfully with no errors.
    We tried to restore the whole farm from that backup in the test environment using Central Administration, but we got the message that the restore failed with errors.
    We chose the NEW CONFIGURATION option during the restore, and we set new database names...
    Some of the errors are:
    FatalError: Object User Profile Service Application failed in event OnPreRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
    SPException: The specified user or domain group was not found.
    Warning: Cannot restore object User Profile Service Application because it failed on backup.
    FatalError: Object User Profile Service Application failed in event OnPreRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
    Verbose: Starting object: WSS_Content_IT_Portal.
    Warning: [WSS_Content_IT_Portal] A content database with the same ID already exists on the farm. The site collections may not be accessible.
    FatalError: Object WSS_Content_IT_Portal failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
    SPException: The specified component exists. You must specify a name that does not exist.
    Warning: [WSS_Content_Portal] The operation did not proceed far enough to allow RESTART. Reissue the statement without the RESTART qualifier.
    RESTORE DATABASE is terminating abnormally.
    FatalError: Object Portal - 80 failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
    ArgumentException: The IIS Web Site you have selected is in use by SharePoint.  You must select another port or hostname.
    FatalError: Object Access Services failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
    SPException: Object parent could not be found.  The restore operation cannot continue.
    FatalError: Object Secure Store Service failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
    SPException: The specified component exists. You must specify a name that does not exist.
    FatalError: Object PerformancePoint Service Application failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
    SPException: Object parent could not be found.  The restore operation cannot continue.
    FatalError: Object Search_Service_Application_DB_88e1980b96084de984de48fad8fa12c5 failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
    Aborted due to error in another component.
    Could you please help us to resolve these issues?  

    I'd totally agree with this. Full-fledged functionality isn't the aim of DR; the aim is getting the functional parts of your platform back up before too much time and money is lost.
    Anything I can add would be a repeat of what Jesper has wisely said, but I would very much encourage you to look at these two resources:
    DR & back-up book by John Ferringer for SharePoint 2010
    John's back-up PowerShell Script in the TechNet Gallery
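    For the farm backup/restore step itself, a minimal sketch with the standard cmdlets (the share path is a placeholder, and note that service applications such as User Profile often have to be recreated rather than restored, as the errors above suggest):
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    # On production: full farm backup to a share both farms can reach.
    Backup-SPFarm -Directory "\\backupsrv\spbackup" -BackupMethod Full
    # On the test/DR farm: restore as a new configuration so databases and
    # web applications can be given new names/URLs in the target farm.
    Restore-SPFarm -Directory "\\backupsrv\spbackup" -RestoreMethod New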
    Steven Andrews
    SharePoint Business Analyst: LiveNation Entertainment
    Blog: baron72.wordpress.com
    Twitter: Follow @backpackerd00d
    My Wiki Articles:
    CodePlex Corner Series
    Please remember to mark your question as "answered" if this solves (or helps) your problem.

  • SharePoint 2013 Search - Disaster Recovery Restore

    Hello,
    We are setting up a new SharePoint 2013 with a separate Disaster Recovery farm as a hot-standby.  In a DR scenario, we want to restore all content and service app databases to the new farm, then fix any configuration issues that might arise due to changes
    in server names, etc...
    The issue we're running into is the search service components are still pointing to the production servers even though they're in the new farm with completely different server names.  This is expected, so we're preparing a PowerShell script to remove
    then re-create the search components as needed.  The problem is that all the commands used to apply the new search topology won't function because they can't access the administration component (very frustrating).  It appears we're in a chicken &
    egg scenario - we can't change the search topology because we don't have a working admin component, but we can't fix the admin component because we can't change the search topology.
    The scripts below are just some of the things we've tried to fix the issue:
    $sa = Get-SPEnterpriseSearchServiceApplication "Search Service Application";
    $local = Get-SPEnterpriseSearchServiceInstance -Local;
    $topology = New-SPEnterpriseSearchTopology -SearchApplication $sa;
    New-SPEnterpriseSearchAdminComponent -SearchTopology $topology -SearchServiceInstance $local;
    New-SPEnterpriseSearchQueryProcessingComponent -SearchTopology $topology -SearchServiceInstance $local;
    New-SPEnterpriseSearchCrawlComponent -SearchTopology $topology -SearchServiceInstance $local;
    New-SPEnterpriseSearchContentProcessingComponent -SearchTopology $topology -SearchServiceInstance $local;
    New-SPEnterpriseSearchAnalyticsProcessingComponent -SearchTopology $topology -SearchServiceInstance $local;
    New-SPEnterpriseSearchIndexComponent -SearchTopology $topology -SearchServiceInstance $local -IndexPartition 0 -RootDirectory "D:\SP_Index\Index";
    $topology.Activate();
    We get this message:
    Exception calling "Activate" with "0" argument(s): "The search service is not able to connect to the machine that 
    hosts the administration component. Verify that the administration component '764c17a1-4c29-4393-aacc-de01119aba0a' 
    in search application 'Search Service Application' is in a good state and try again."
    At line:11 char:1
    + $topology.Activate();
    + ~~~~~~~~~~~~~~~~~~~~
        + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
        + FullyQualifiedErrorId : InvalidOperationException
    Also, same as above with
    $topology.BeginActivate()
    We get no errors but the new topology is never activated.  Attempting to call $topology.Activate() within the next few minutes will result in an error saying that "No modifications to the search topology can be made because previous changes are
    being rolled back due to an error during a previous activation".
    Next I found a few methods in the object model that looked like they might do some good:
    $sa = Get-SPEnterpriseSearchServiceApplication "Search Service Application";
    $topology = Get-SPEnterpriseSearchTopology -SearchApplication $sa -Active;
    $admin = $topology.GetComponents() | ? { $_.Name -like "admin*" }
    $topology.RecoverAdminComponent($admin,"server1");
    This one really looked like it worked.  It took a few seconds to run and came back with no errors.  I can even get the active list of components and it shows that the Admin component is running on the right server:
    Name ServerName
    AdminComponent1 server1
    ContentProcessingComponent1
    QueryProcessingComponent1
    IndexComponent1
    QueryProcessingComponent3
    CrawlComponent0
    QueryProcessingComponent2
    IndexComponent2
    AnalyticsProcessingComponent1
    IndexComponent3
    However, I'm still unable to make further changes to the topology (getting the same error as above when calling $topology.Activate()), and the service application in central administration shows an error saying it can't connect to the admin component:
    The search service is not able to connect to the machine that hosts the administration component. Verify that the administration component '764c17a1-4c29-4393-aacc-de01119aba0a' in search application 'Search Service Application' is in a good state and try again.
    Lastly, I tried to move the admin component directly:
    $sa.AdminComponent.Move($instance, "d:\sp_index")
    But again I get an error:
    Exception calling "Move" with "2" argument(s): "Admin component was moved to another server."
    At line:1 char:1
    + $sa.AdminComponent.Move($instance, "d:\sp_index")
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
        + FullyQualifiedErrorId : OperationCanceledException
    I've checked all the most common issues - the service instance is online, the search host controller service is running on the machine, etc...  but I can't seem to get this database restored to a different farm.
    Any help would be appreciated!

    Thanks for the response Bhavik,
    I did ensure the instance was started:
    Get-SPEnterpriseSearchServiceInstance -Local
    TypeName : SharePoint Server Search
    Description : Index content and serve search queries
    Id : e9fd15e5-839a-40bf-9607-6e1779e4d22c
    Server : SPServer Name=ROYALS
    Service : SearchService Name=OSearch15
    Role : None
    Status : Online
    But after attempting to set the admin component I got the results below.
    Before setting the admin component:
    Get-SPEnterpriseSearchAdministrationComponent -SearchApplication $sa
    IndexLocation : E:\sp_index\Office Server\Applications
    Initialized : True
    ServerName : prodServer1
    Standalone :
    After setting the admin component:
    Get-SPEnterpriseSearchAdministrationComponent -SearchApplication $sa
    IndexLocation :
    Initialized : False
    ServerName :
    Standalone :
    It's shown this status for a few hours now so I don't believe it's still provisioning.  Also, the search service administration is still showing the same error:
    The search service is not able to connect to the machine that hosts the administration component. Verify that the administration
    component '764c17a1-4c29-4393-aacc-de01119aba0a' in search application 'Search Service Application' is in a good state and try again.

  • Disaster Recovery set-up for SharePoint 2013

    Hi,
    We are migrating our SP2010 application to SP2013. It would be an on-premises setup using a virtual environment.
    To handle a disaster recovery situation, it has been planned to have two identical farms (active, passive) hosted in two different data centers.
    I have prior knowledge of disaster recovery only at the content DB level.
    My question is: how do we make the two farms identical, and how do we keep the databases of both farms always in sync?
    Also, if a custom solution is pushed into one of the farms, how does it replicate to the other farm?
    Can someone please help me understand this DR situation?
    Thanks,
    Rahul

    Metalogix Replicator will replicate content, but nothing below the Web Application level (you'd still have to configure the service applications, Central Admin settings, deploy solutions, etc.).
    While AlwaysOn is a good choice, do remember that asynchronous AlwaysOn is not supported for all of SharePoint's databases (see http://technet.microsoft.com/en-us/library/jj841106.aspx). Log shipping is a good choice, especially with content databases, as they can be placed in read-only mode on the DR farm so that an active crawl can be completed against them.
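    A minimal sketch of that pattern, assuming log shipping is already copying .trn files to the DR side (instance, database, share and web application URL are placeholders):
    Import-Module SqlServer
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    # On the DR SQL instance: restore the shipped log WITH STANDBY so the
    # content database stays readable between restores.
    Invoke-Sqlcmd -ServerInstance "DRSQL01" -Query "RESTORE LOG [WSS_Content_Portal] FROM DISK = N'\\drshare\logship\WSS_Content_Portal_0800.trn' WITH STANDBY = N'\\drshare\logship\WSS_Content_Portal_undo.dat';"
    # On the DR SharePoint farm: attach the read-only copy so it can be crawled.
    Mount-SPContentDatabase -Name "WSS_Content_Portal" -DatabaseServer "DRSQL01" -WebApplication "http://portal-dr.example.com"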
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Recovering lost data from a very old backup (disaster recovery)

    Hi all,
    I am trying to restore and recover data from an old DAT-72 cassette. All I know is the date when the backup was taken, that is back in November 2006. I do not know the DBID or anything else except for the date.
    To recover this, I bought an internal SCSI HP c7438a DAT-72 tape drive on eBay and installed it on a machine running Windows 2003 Server SP2. I made a fresh Oracle 11g Enterprise Edition installation. HP tape drivers have been installed and Windows sees the tape drive without problem. To act as a Media Manager, I have installed Oracle Secure Backup. Oracle Secure Backup sees the HP tape drive without problems as well.
    I have to admit my knowledge of Oracle is not very deep. I have read quite a lot of documents, but the more I read the more confused I become. The closest thing I can find to my situation is the following guide about disaster recovery:
    http://download.oracle.com/docs/cd/B10500_01/server.920/a96566/rcmrecov.htm#1007948
    I tried the suggestions in this document without success (details below).
    My questions are:
    1. Is it possible to retrieve data without knowing the DBID?
    2. If not, is it possible to figure out the DBID from the tape? I tried to use dd in cygwin, also booted with Knoppix/Debian and Ubuntu CDs to dump the contents of the tape with dd but all of them failed to see the tape device. If there is any way to dump the raw contents of the tape on Windows, I would also welcome input.
    3. Is there any way at all to recover this data from the tape given all the unknowns?
    Thanks very much in advance,
    C:\Program Files>rman target orcl
    Recovery Manager: Release 11.2.0.1.0 - Production on Sat Mar 19 15:01:28 2011
    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
    target database Password:
    connected to target database: ORCL (not mounted)
    RMAN> SET DBID 676549873;
    executing command: SET DBID
    RMAN> STARTUP FORCE NOMOUNT; # rman starts instance with dummy parameter file
    Oracle instance started
    Total System Global Area 778387456 bytes
    Fixed Size 1374808 bytes
    Variable Size 268436904 bytes
    Database Buffers 503316480 bytes
    Redo Buffers 5259264 bytes
    RMAN> RUN
    2> {
    3> ALLOCATE CHANNEL t1 DEVICE TYPE sbt;
    4> RESTORE SPFILE TO 'C:\SPFILE.TMP' FROM AUTOBACKUP MAXDAYS 7 UNTIL TIME 'SYS
    DATE-1575';
    5> }
    using target database control file instead of recovery catalog
    allocated channel: t1
    channel t1: SID=63 device type=SBT_TAPE
    channel t1: Oracle Secure Backup
    Starting restore at 19-MAR-11
    channel t1: looking for AUTOBACKUP on day: 20061125
    channel t1: looking for AUTOBACKUP on day: 20061124
    channel t1: looking for AUTOBACKUP on day: 20061123
    channel t1: looking for AUTOBACKUP on day: 20061122
    channel t1: looking for AUTOBACKUP on day: 20061121
    channel t1: looking for AUTOBACKUP on day: 20061120
    channel t1: looking for AUTOBACKUP on day: 20061119
    channel t1: no AUTOBACKUP in 7 days found
    released channel: t1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of restore command at 03/19/2011 15:03:26
    RMAN-06172: no AUTOBACKUP found or specified handle is not a valid copy or piece
    RMAN>
    RMAN> RUN
    2> {
    3> ALLOCATE CHANNEL t1 DEVICE TYPE sbt
    4> PARMS 'SBT_LIBRARY=C:\WINDOWS\SYSTEM32\ORASBT.DLL';
    5> RESTORE SPFILE TO 'C:\SPFILE.TMP' FROM AUTOBACKUP MAXDAYS 7 UNTIL TIME 'SYS
    DATE-1575';
    6> }
    allocated channel: t1
    channel t1: SID=63 device type=SBT_TAPE
    channel t1: Oracle Secure Backup
    Starting restore at 19-MAR-11
    channel t1: looking for AUTOBACKUP on day: 20061125
    channel t1: looking for AUTOBACKUP on day: 20061124
    channel t1: looking for AUTOBACKUP on day: 20061123
    channel t1: looking for AUTOBACKUP on day: 20061122
    channel t1: looking for AUTOBACKUP on day: 20061121
    channel t1: looking for AUTOBACKUP on day: 20061120
    channel t1: looking for AUTOBACKUP on day: 20061119
    channel t1: no AUTOBACKUP in 7 days found
    released channel: t1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of restore command at 03/19/2011 15:04:56
    RMAN-06172: no AUTOBACKUP found or specified handle is not a valid copy or piece
    RMAN>
    -----------------------------------

    Hi 845725,
    If the backups were created with OSB, you might be able to query the tape with obtool:
    http://www.stanford.edu/dept/itss/docs/oracle/10gR2/backup.102/b14236/obref_oba.htm
    To list pieces you could use <lspiece> within obtool:
    http://www.stanford.edu/dept/itss/docs/oracle/10gR2/backup.102/b14236/obref_oba.htm#BHBBIFFE
    If this works you should be able to identify the controlfile autobackup if it has the standard naming <c-dbid-date-xx>, and you then know the DBID, or you can restore a controlfile from a backup piece in the output list.
    You might also have to install 9i or 10g RDBMS software, as 11g was not released until a year later, in 2007.
    Anyway, good luck.
    Regards,
    Tycho

  • Welcome to the SQL Server Disaster Recovery and Availability Forum

    (Edited 8/14/2009 to correct links - Paul)
    Hello everyone and welcome to the SQL Server Disaster Recovery and Availability forum. The goal of this Forum is to offer a gathering place for SQL Server users to discuss:
    Using backup and restore
    Using DBCC, including interpreting output from CHECKDB and related commands
    Diagnosing and recovering from hardware issues
    Planning/executing a disaster recovery and/or high-availability strategy, including choosing technologies to use
    The forum will have Microsoft experts in all these areas and so we should be able to answer any question. Hopefully everyone on the forum will contribute not only questions, but opinions and answers as well. I’m looking forward to seeing this becoming a vibrant forum.
    This post has information to help you understand what questions to post here, and where to post questions about other technologies as well as some tips to help you find answers to your questions more quickly and how to ask a good question. See you in the group!
    Paul Randal
    Lead Program Manager, SQL Storage Engine and SQL Express
    Be a good citizen of the Forum
    When an answer resolves your problem, please mark the thread as Answered. This makes it easier for others to find the solution to this problem when they search for it later. If you find a post particularly helpful, click the link indicating that it was helpful.
    What to post in this forum
    It seems obvious, but this forum is for discussion and questions around disaster recovery and availability using SQL Server. When you want to discuss something that is specific to those areas, this is the place to be. There are several other forums related to specific technologies you may be interested in, so if your question falls into one of these areas where there is a better batch of experts to answer your question, we’ll just move your post to that Forum so those experts can answer. Any alerts you set up will move with the post, so you’ll still get notification. Here are a few of the other forums that you might find interesting:
    SQL Server Setup & Upgrade – This is where to ask all your setup and upgrade related questions. (http://social.msdn.microsoft.com/Forums/en-US/sqlsetupandupgrade/threads)
    Database Mirroring – This is the best place to ask Database Mirroring how-to questions. (http://social.msdn.microsoft.com/Forums/en-US/sqldatabasemirroring/threads)
    SQL Server Replication – If you’ve already decided to use Replication, check out this forum. (http://social.msdn.microsoft.com/Forums/en-US/sqlreplication/threads)
    SQL Server Database Engine – Great forum for general information about engine issues such as performance, FTS, etc. (http://social.msdn.microsoft.com/Forums/en-US/sqldatabaseengine/threads)
    How to find your answer faster
    There is a wealth of information already available to help you answer your questions. Finding an answer via a few quick searches is much quicker than posting a question and waiting for an answer. Here are some great places to start your research:
    SQL Server 2005 Books Online
    Search it online at http://msdn2.microsoft.com
    Download the full version of the BOL from here
    Microsoft Support Knowledge Base:
    Search it online at http://support.microsoft.com
    Search the SQL Storage Engine PM Team Blog:
    The blog is located at https://blogs.msdn.com/sqlserverstorageengine/default.aspx
    Search other SQL Forums and Web Sites:
    MSN Search: http://www.bing.com/
    Or use your favorite search engine
    How to ask a good question
    Make sure to give all the pertinent information that people will need to answer your question. Questions like “I got an IO error, any ideas?” or “What’s the best technology for me to use?” will likely go unanswered, or at best just result in a request for more information. Here are some ideas of what to include:
    For the “I got an IO error, any ideas?” scenario:
    The exact error message. (The SQL Errorlog and Windows Event Logs can be a rich source of information. See the section on error logs below.)
    What were you doing when you got the error message?
    When did this start happening?
    Any troubleshooting you’ve already done. (e.g. “I’ve already checked all the firmware and it’s up-to-date” or "I've run SQLIOStress and everything looks OK" or "I ran DBCC CHECKDB and the output is <blah>")
    Any unusual occurrences before the error occurred (e.g. someone tripped the power switch, a disk in a RAID5 array died)
    If relevant, the output from ‘DBCC CHECKDB (yourdbname) WITH ALL_ERRORMSGS, NO_INFOMSGS’
    The SQL Server version and service pack level
    For the “What’s the best technology for me to use?” scenario:
    What exactly are you trying to do? Enable local hardware redundancy? Geo-clustering? Instance-level failover? Minimize downtime during recovery from IO errors with a single-system?
    What are the SLAs (Service Level Agreements) you must meet? (e.g. an uptime percentage requirement, a minimum data-loss in the event of a disaster requirement, a maximum downtime in the event of a disaster requirement)
    What hardware restrictions do you have? (e.g. “I’m limited to a single system” or “I have several worldwide mirror sites but the size of the pipe between them is limited to X Mbps”)
    What kind of workload does your application have? (Or is it a mixture of applications consolidated on a single server, each with different SLAs?) How much transaction log volume is generated?
    What kind of regular maintenance does your workload demand that you perform (e.g. “the update pattern of my main table is such that fragmentation increases in the clustered index, slowing down the most common queries so there’s a need to perform some fragmentation removal regularly”)
    Finding the Logs
    You will often find more information about an error by looking in the Error and Event logs. There are two sets of logs that are interesting:
    SQL Error Log: default location: C:\Program Files\Microsoft SQL Server\MSSQL.#\MSSQL\LOG (Note: The # changes depending on the ID number for the installed instance. This is 1 for the first installation of SQL Server, but if you have multiple instances, you will need to determine the ID number you're working with. See the BOL for more information about instance ID numbers.)
    Windows Event Log: Go to the Event Viewer in the Administrative Tools section of the Start Menu. The System event log will show details of IO subsystem problems. The Application event log will show details of SQL Server problems.

    Hi, I have a question on SQL database high availability. I have tried using database mirroring with SQL Standard Edition, where synchronous mode is the only option available, and it has been causing problems, like SQL timeout errors in my applications, ever since I put database mirroring in. As asynchronous mirroring is only available in Enterprise Edition, are there any suggestions on this? Thanks, Vijay

  • Advice on best way to setup Disaster Recovery for SOA Suite 10.1.3.4

    Hi Everyone,
    I need some advice on the best way to set up disaster recovery for a SOA Suite 10.1.3.4 install deploying JSF/ADF OC4J applications.
    The way we are trying to do it at the moment is to manually copy the "applications" and "applications-deployments" folders for the OC4J application on the production server, then compress and ship the files across to the DR application server nightly. (We don't require high availability.)
    In the event of a disaster we then extract the files and copy them to the OC4J instance (pre-created and configured) on the DR server. Unfortunately, to date we haven't been able to reliably set up a DR application (we seem to mostly get 404 errors, etc.), even though the OC4J application has its connection pool resolved to the DR database and is showing as "up" in the ASConsole.
    My question is: is there a more "native" way to do what we are trying to do? We do not have the Enterprise version of SOA Suite or an 11g database, so advanced recovery features are not an option. The setup is also standalone, i.e. we are not using clustering or RAC.
    Any ideas would be really helpful.
    Thanks,
    Leigh.
    PS: we are also running the production apps server with Oracle Application Server 10g 10.1.2.3.0 as the HTTP Apache server (with Forms, Reports and Discoverer deployed), and the SOA Suite 10.1.3 applications use the 10.1.2 HTTP server via the HTTP-to-AJP bridge. So the 10.1.3 OC4J instance is configured to use AJP on the port range 12501 - 12600.

    For enterprise solutions, AS Guard would work.
    http://download.oracle.com/docs/cd/B25221_04/core.1013/b15977/disasrecov.htm#sthref303
    However, since advanced recovery options are not available (as you said), what you are doing should not be too bad.
    AMN
