Setting up Disaster Recovery
Hi,
We are planning to set up Disaster Recovery for our entire landscape.
We have also implemented EP (Enterprise Portal).
I just want to know all the paths at OS level where all the data, including customizations, is stored apart from the database, so that we can take a file-system backup of those directories.
Thanks and Regards,
Naresh Sadu
Hi Naresh,
http://help.sap.com/saphelp_nw04s/helpdata/en/72/96d34c435c3c4ca5894749ef1435ac/frameset.htm
SAP on Oracle
If you're talking about how to take a backup that you can roll back to in emergency cases:
1. Make a copy of your SBO installation directory (the whole C:\Program Files\SAP directory)
2. Make a copy of SBO-Common
You can take the database backup after installing the new patch, because you have to take a backup anyway
before upgrading (SAP will block the update without a backup).
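The directory-copy steps above can be scripted so they are repeatable. A minimal sketch in Python, assuming hypothetical source and destination paths that you would replace with your own installation and backup locations:

```python
import shutil
from pathlib import Path

# Hypothetical paths -- adjust to your own installation and backup target.
SRC_DIRS = [
    Path(r"C:\Program Files\SAP"),   # SBO installation directory
    Path(r"D:\SBO-Common"),          # wherever your SBO-Common files live
]
DEST = Path(r"E:\DR_Backup")

def backup_dirs(sources, dest):
    """Copy each existing source tree into dest, preserving the top-level name."""
    copied = []
    for src in sources:
        if src.exists():
            target = dest / src.name
            shutil.copytree(src, target, dirs_exist_ok=True)
            copied.append(target)
    return copied
```

This only covers the file-system side; the database backup itself still has to be done with your database tools, as described above.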
Thanks
Suresh
Similar Messages
-
Hi,
Can anybody please guide who will play the major role in a "Disaster Recovery Test" - functional / Basis / ABAP etc.?

Hello Mahesh,
Everybody has to play a major role: first Basis has to take action, then the ABAPers, and then the Functional people are required to put in their effort for testing. Here is a brief excerpt from an article regarding "Disaster recovery for SAP".
It will give all of us an idea about disaster recovery.
Having your SAP system installed does not mean you have a disaster recovery solution.
"SAP has standard methodologies for doing backups and restoring the SAP environment, but there's nothing built into their application that specifically targets disaster recovery,"
In other words, SAP tells you very explicitly what you need to protect, but you're on your own in figuring out how to make it happen. It is common practice among third-party solution providers to ask about disaster recovery, but if you're doing your own thing it is important to be aware of the need for a disaster recovery solution.
Outsourcing vs. building a secondary site:
There are two ways to go about setting up your disaster recovery solution: Outsource or build your own secondary site. Outsourcing may be more convenient and less expensive, especially for smaller companies on a tight budget. Simply approach the outsourcing company with your needs, and they will pretty much take it from there. Graap likens it to an insurance policy, where you pay a premium on an ongoing basis for the security.
If you decide to outsource, ask colleagues for recommendations and spend some time researching prices, which can vary a lot. But make sure the outsourcer can step up to the plate in the unlikely event that you need their services.
Building your own secondary site requires a larger investment up front, but it leaves you in full control of your contingency plans rather than at the mercy of an outsourcing company. If your outsourcing provider falls through for some reason, such as being in the same disaster zone as your main office during an earthquake, you're in trouble. When building your own site, you can prepare for more scenarios and place it far enough away from your main office.
High availability vs. cost:
Specialists say one of the most important questions to consider is availability and how quickly you need to get your systems back online. The difference between getting back online in 10 minutes or three days could be millions of dollars, so you want to make sure you get just the right solution for your company.
Around-the-clock availability will require mirroring content across two sites in real-time. This enables you to do an instant failover with little or no downtime, rather than force you to physically move from the office to a backup site with a stack of tapes.
Regardless of whether you outsource or set up your own site, a high availability solution is expensive.
"But if that is what it takes to keep your business from going under, it's worth every penny of it".
An added benefit of having a high availability solution is that you can avoid maintenance downtime by working on one server while letting the other handle all traffic. In theory, this leaves a window of risk, but most maintenance tasks, such as backups, can be cancelled if need be.
One consideration for mirroring data is the bandwidth to the secondary site. Replicating data in real-time requires enough capacity to handle it without hitches. Also, a secondary site will require the same disk space as your regular servers. You can probably get away with a smaller and cheaper system, but you still need enough storage space to match your primary servers.
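The bandwidth point above can be made concrete with some back-of-the-envelope arithmetic. The sketch below is only illustrative: the daily change volume and the peak factor are assumptions you must measure for your own workload, not universal values.

```python
def required_mbps(gb_changed_per_day, peak_factor=3.0):
    """Very rough link sizing for real-time replication to a DR site.

    gb_changed_per_day: estimated daily change volume (measure this yourself).
    peak_factor: how much busier the busiest hour is than the daily average.
    """
    avg_mbps = gb_changed_per_day * 8 * 1024 / 86400  # GB/day -> Mbit/s average
    return avg_mbps * peak_factor  # size the link for the busiest hours

# Example: 50 GB of changes per day with 3x peak bursts needs a link of
# roughly required_mbps(50) Mbit/s, i.e. somewhere in the mid-teens.
```

The point is not the exact number but that an average-based estimate understates what you need: the link has to absorb the bursts, not the daily average.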
Whatever the choice for disaster recovery, it is vital that both the technology and the business departments know about the plan ahead of time.
Testing your solution:
Ok, so you have a disaster recovery solution in place. Great, you're home free, right? Not quite. It must be tested continuously to make sure it works in real life. Sometimes management can be reluctant to spend the money for a real test, or perhaps there are pressing deadlines to keep, but it should be tested once or twice a year.
Many people who build good plans let them sit collecting dust for years, at which point half the key people in the plan have left or changed positions. Update the names, phone numbers and other vital information frequently and test them, he said. It is for the same reason you do fire drills: when the real thing strikes, there's no room for error.
In testing, consider different scenarios and the physical steps needed to get the data center up and running. For example, many disaster recovery solutions require at least parts of a staff to get on a plane and physically move to the secondary location. But September 11 showed how that is not easy when all planes are grounded.
Costly but vital:
Disaster recovery is not cheap, and it requires lots of testing to stay current, but it could save your critical data.
"Any customer who makes an investment in SAP is purchasing an enterprise-class application, and as such really should have this level of protection for their business". "I can't imagine why anybody would not have an interest in disaster recovery."
Regards
Yogesh -
Disaster Recovery set-up for SharePoint 2013
Hi,
We are migrating our SP2010 application to SP2013. It would be on-premise setup using virtual environment.
To handle disaster recovery situation, it has been planned to have two identical farms (active, passive) hosted in two different datacenters.
I have prior knowledge of disaster recovery only at the content DB level.
My question is: how do we make the two farms identical, and how do we keep the databases of both farms always in sync?
Also, if a custom solution is pushed into one of the farms, how does it replicate to the other farm?
Can someone please help me in understanding this D/R situation.
Thanks,
Rahul

Metalogix Replicator will replicate content, but nothing below the Web Application level (you'd still have to configure the SAs, Central Admin settings, deploy solutions, etc.).
While AlwaysOn is a good choice, do remember that asynchronous AlwaysOn is not supported across all of SharePoint's databases (see
http://technet.microsoft.com/en-us/library/jj841106.aspx). Log shipping is a good choice, especially with content databases, as they can be placed in read-only mode on the DR farm for an
active crawl to be completed against them.
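At its core, log shipping is "back up the log on the primary, copy it to the DR side, restore it there". The copy step can be sketched generically; the directory layout and the .trn suffix below are assumptions, and in practice SQL Server's built-in log shipping jobs would do this for you:

```python
import shutil
from pathlib import Path

def ship_new_backups(primary_dir, dr_dir, suffix=".trn"):
    """Copy transaction-log backup files that are not yet on the DR side."""
    primary_dir, dr_dir = Path(primary_dir), Path(dr_dir)
    shipped = []
    for f in sorted(primary_dir.glob("*" + suffix)):
        target = dr_dir / f.name
        if not target.exists():
            shutil.copy2(f, target)  # copy2 preserves timestamps
            shipped.append(f.name)
    return shipped
```

Run on a schedule, this keeps the DR copy of the log backups current; the restore job on the DR farm then applies them in order.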
Trevor Seward
Follow or contact me at...
This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs. -
SharePoint 2010 backup and restore to test SharePoint environment - testing Disaster recovery
We have a production SharePoint 2010 environment with one Web/App server and one SQL server.
We have a test SharePoint 2010 environment with one server (Sharepoint server and SQL server) and one AD (domain is different from prod environment).
Servers are Windows 2008 R2 and SQL 2008 R2.
Versions are the same on prod and test servers.
We need to setup a test environment with the exact setup as production - we want to try Disaster recovery.
We have performed a backup of the farm from PROD and wanted to restore it on our new server in the test environment. The backup completed successfully with no errors.
We have tried to restore the whole farm from that backup on test environment using Central administration, but we got message - restore failed with errors.
We chose the NEW CONFIGURATION option during restore, and we set new database names...
Some of the errors are:
FatalError: Object User Profile Service Application failed in event OnPreRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
SPException: The specified user or domain group was not found.
Warning: Cannot restore object User Profile Service Application because it failed on backup.
FatalError: Object User Profile Service Application failed in event OnPreRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
Verbose: Starting object: WSS_Content_IT_Portal.
Warning: [WSS_Content_IT_Portal] A content database with the same ID already exists on the farm. The site collections may not be accessible.
FatalError: Object WSS_Content_IT_Portal failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
SPException: The specified component exists. You must specify a name that does not exist.
Warning: [WSS_Content_Portal] The operation did not proceed far enough to allow RESTART. Reissue the statement without the RESTART qualifier.
RESTORE DATABASE is terminating abnormally.
FatalError: Object Portal - 80 failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
ArgumentException: The IIS Web Site you have selected is in use by SharePoint. You must select another port or hostname.
FatalError: Object Access Services failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
SPException: Object parent could not be found. The restore operation cannot continue.
FatalError: Object Secure Store Service failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
SPException: The specified component exists. You must specify a name that does not exist.
FatalError: Object PerformancePoint Service Application failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
SPException: Object parent could not be found. The restore operation cannot continue.
FatalError: Object Search_Service_Application_DB_88e1980b96084de984de48fad8fa12c5 failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
Aborted due to error in another component.
Could you please help us to resolve these issues?

I'd totally agree with this. Full-fledged functionality isn't the aim of DR; the aim is getting the functional parts of your platform back up before too much time and money is lost.
Anything I can add would be a repeat of what Jesper has wisely said, but I would very much encourage you to look at these two resources:
DR & back-up book by John Ferringer for SharePoint 2010
John's back-up PowerShell Script in the TechNet Gallery
Steven Andrews
SharePoint Business Analyst: LiveNation Entertainment
Blog: baron72.wordpress.com
Twitter: Follow @backpackerd00d
My Wiki Articles:
CodePlex Corner Series
Please remember to mark your question as "answered" if this solves (or helps) your problem. -
SharePoint 2013 Search - Disaster Recovery Restore
Hello,
We are setting up a new SharePoint 2013 with a separate Disaster Recovery farm as a hot-standby. In a DR scenario, we want to restore all content and service app databases to the new farm, then fix any configuration issues that might arise due to changes
in server names, etc...
The issue we're running into is the search service components are still pointing to the production servers even though they're in the new farm with completely different server names. This is expected, so we're preparing a PowerShell script to remove
then re-create the search components as needed. The problem is that all the commands used to apply the new search topology won't function because they can't access the administration component (very frustrating). It appears we're in a chicken &
egg scenario - we can't change the search topology because we don't have a working admin component, but we can't fix the admin component because we can't change the search topology.
The scripts below are just some of the things we've tried to fix the issue:
$sa = Get-SPEnterpriseSearchServiceApplication "Search Service Application";
$local = Get-SPEnterpriseSearchServiceInstance -Local;
$topology = New-SPEnterpriseSearchTopology -SearchApplication $sa;
New-SPEnterpriseSearchAdminComponent -SearchTopology $topology -SearchServiceInstance $local;
New-SPEnterpriseSearchQueryProcessingComponent -SearchTopology $topology -SearchServiceInstance $local;
New-SPEnterpriseSearchCrawlComponent -SearchTopology $topology -SearchServiceInstance $local;
New-SPEnterpriseSearchContentProcessingComponent -SearchTopology $topology -SearchServiceInstance $local;
New-SPEnterpriseSearchAnalyticsProcessingComponent -SearchTopology $topology -SearchServiceInstance $local;
New-SPEnterpriseSearchIndexComponent -SearchTopology $topology -SearchServiceInstance $local -IndexPartition 0 -RootDirectory "D:\SP_Index\Index";
$topology.Activate();
We get this message:
Exception calling "Activate" with "0" argument(s): "The search service is not able to connect to the machine that
hosts the administration component. Verify that the administration component '764c17a1-4c29-4393-aacc-de01119aba0a'
in search application 'Search Service Application' is in a good state and try again."
At line:11 char:1
+ $topology.Activate();
+ ~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : InvalidOperationException
Also, same as above with
$topology.BeginActivate()
We get no errors but the new topology is never activated. Attempting to call $topology.Activate() within the next few minutes will result in an error saying that "No modifications to the search topology can be made because previous changes are
being rolled back due to an error during a previous activation".
Next I found a few methods in the object model that looked like they might do some good:
$sa = Get-SPEnterpriseSearchServiceApplication "Search Service Application";
$topology = Get-SPEnterpriseSearchTopology -SearchApplication $sa -Active;
$admin = $topology.GetComponents() | ? { $_.Name -like "admin*" }
$topology.RecoverAdminComponent($admin,"server1");
This one really looked like it worked. It took a few seconds to run and came back with no errors. I can even get the active list of components and it shows that the Admin component is running on the right server:
Name ServerName
AdminComponent1 server1
ContentProcessingComponent1
QueryProcessingComponent1
IndexComponent1
QueryProcessingComponent3
CrawlComponent0
QueryProcessingComponent2
IndexComponent2
AnalyticsProcessingComponent1
IndexComponent3
However, I'm still unable to make further changes to the topology (getting the same error as above when calling $topology.Activate()), and the service application in central administration shows an error saying it can't connect to the admin component:
The search service is not able to connect to the machine that hosts the administration component. Verify that the administration component '764c17a1-4c29-4393-aacc-de01119aba0a' in search application 'Search Service Application' is in a good state and try again.
Lastly, I tried to move the admin component directly:
$sa.AdminComponent.Move($instance, "d:\sp_index")
But again I get an error:
Exception calling "Move" with "2" argument(s): "Admin component was moved to another server."
At line:1 char:1
+ $sa.AdminComponent.Move($instance, "d:\sp_index")
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : OperationCanceledException
I've checked all the most common issues - the service instance is online, the search host controller service is running on the machine, etc... but I can't seem to get this database restored to a different farm.
Any help would be appreciated!

Thanks for the response Bhavik,
I did ensure the instance was started:
Get-SPEnterpriseSearchServiceInstance -Local
TypeName : SharePoint Server Search
Description : Index content and serve search queries
Id : e9fd15e5-839a-40bf-9607-6e1779e4d22c
Server : SPServer Name=ROYALS
Service : SearchService Name=OSearch15
Role : None
Status : Online
But after attempting to set the admin component I got the results below.
Before setting the admin component:
Get-SPEnterpriseSearchAdministrationComponent -SearchApplication $sa
IndexLocation : E:\sp_index\Office Server\Applications
Initialized : True
ServerName : prodServer1
Standalone :
After setting the admin component:
Get-SPEnterpriseSearchAdministrationComponent -SearchApplication $sa
IndexLocation :
Initialized : False
ServerName :
Standalone :
It's shown this status for a few hours now so I don't believe it's still provisioning. Also, the search service administration is still showing the same error:
The search service is not able to connect to the machine that hosts the administration component. Verify that the administration
component '764c17a1-4c29-4393-aacc-de01119aba0a' in search application 'Search Service Application' is in a good state and try again. -
Recovering lost data from a very old backup (disaster recovery)
Hi all,
I am trying to restore and recover data from an old DAT-72 cassette. All I know is the date when the backup was taken, that is back in November 2006. I do not know the DBID or anything else except for the date.
To recover this, I bought an internal SCSI HP c7438a DAT-72 tape drive on eBay and installed it on a machine running Windows 2003 Server SP2. I made a fresh Oracle 11g Enterprise Edition installation. HP tape drivers have been installed and Windows sees the tape drive without problem. To act as a Media Manager, I have installed Oracle Secure Backup. Oracle Secure Backup sees the HP tape drive without problems as well.
I have to admit my information about Oracle is not very in-depth. I read quite a lot of documents, but the more I read the more confused I become. The closest thing I can find to my situation is the following guide about disaster recovery:
http://download.oracle.com/docs/cd/B10500_01/server.920/a96566/rcmrecov.htm#1007948
I tried the suggestions in this document without success (details below).
My questions are:
1. Is it possible to retrieve data without knowing the DBID?
2. If not, is it possible to figure out the DBID from the tape? I tried to use dd in cygwin, also booted with Knoppix/Debian and Ubuntu CDs to dump the contents of the tape with dd but all of them failed to see the tape device. If there is any way to dump the raw contents of the tape on Windows, I would also welcome input.
3. Is there any way at all to recover this data from the tape given all the unknowns?
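If you do manage to get a raw dump of the tape onto disk, one best-effort way to hunt for the DBID is to scan the dump for RMAN's default controlfile autobackup handle naming, c-<DBID>-<YYYYMMDD>-<seq>. This is a sketch under that assumption, not a guaranteed recovery method:

```python
import re

# Default RMAN controlfile autobackup handles look like c-<DBID>-<YYYYMMDD>-<seq>.
# If the raw tape contents can be dumped to a file (e.g. with dd), scanning the
# dump for that pattern may reveal the DBID.
HANDLE = re.compile(rb"c-(\d{6,10})-(\d{8})-[0-9a-fA-F]{2}")

def find_dbids(dump_path, chunk_size=1 << 20):
    """Scan a raw dump file for autobackup handles; return the unique DBIDs found."""
    dbids = set()
    with open(dump_path, "rb") as f:
        tail = b""
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            for m in HANDLE.finditer(tail + chunk):
                dbids.add(int(m.group(1)))
            tail = chunk[-32:]  # carry a tail so handles split across chunks still match
    return sorted(dbids)
```

This only works if the handle strings survive on the medium in readable form, which depends on how the backup was written.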
Thanks very much in advance,
C:\Program Files>rman target orcl
Recovery Manager: Release 11.2.0.1.0 - Production on Sat Mar 19 15:01:28 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
target database Password:
connected to target database: ORCL (not mounted)
RMAN> SET DBID 676549873;
executing command: SET DBID
RMAN> STARTUP FORCE NOMOUNT; # rman starts instance with dummy parameter file
Oracle instance started
Total System Global Area 778387456 bytes
Fixed Size 1374808 bytes
Variable Size 268436904 bytes
Database Buffers 503316480 bytes
Redo Buffers 5259264 bytes
RMAN> RUN
2> {
3> ALLOCATE CHANNEL t1 DEVICE TYPE sbt;
4> RESTORE SPFILE TO 'C:\SPFILE.TMP' FROM AUTOBACKUP MAXDAYS 7 UNTIL TIME 'SYSDATE-1575';
5> }
using target database control file instead of recovery catalog
allocated channel: t1
channel t1: SID=63 device type=SBT_TAPE
channel t1: Oracle Secure Backup
Starting restore at 19-MAR-11
channel t1: looking for AUTOBACKUP on day: 20061125
channel t1: looking for AUTOBACKUP on day: 20061124
channel t1: looking for AUTOBACKUP on day: 20061123
channel t1: looking for AUTOBACKUP on day: 20061122
channel t1: looking for AUTOBACKUP on day: 20061121
channel t1: looking for AUTOBACKUP on day: 20061120
channel t1: looking for AUTOBACKUP on day: 20061119
channel t1: no AUTOBACKUP in 7 days found
released channel: t1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 03/19/2011 15:03:26
RMAN-06172: no AUTOBACKUP found or specified handle is not a valid copy or piece
RMAN>
RMAN> RUN
2> {
3> ALLOCATE CHANNEL t1 DEVICE TYPE sbt
4> PARMS 'SBT_LIBRARY=C:\WINDOWS\SYSTEM32\ORASBT.DLL';
5> RESTORE SPFILE TO 'C:\SPFILE.TMP' FROM AUTOBACKUP MAXDAYS 7 UNTIL TIME 'SYSDATE-1575';
6> }
allocated channel: t1
channel t1: SID=63 device type=SBT_TAPE
channel t1: Oracle Secure Backup
Starting restore at 19-MAR-11
channel t1: looking for AUTOBACKUP on day: 20061125
channel t1: looking for AUTOBACKUP on day: 20061124
channel t1: looking for AUTOBACKUP on day: 20061123
channel t1: looking for AUTOBACKUP on day: 20061122
channel t1: looking for AUTOBACKUP on day: 20061121
channel t1: looking for AUTOBACKUP on day: 20061120
channel t1: looking for AUTOBACKUP on day: 20061119
channel t1: no AUTOBACKUP in 7 days found
released channel: t1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 03/19/2011 15:04:56
RMAN-06172: no AUTOBACKUP found or specified handle is not a valid copy or piece
RMAN>
Hi 845725,
If the backups were created with OSB, you might be able to query the tape with obtool:
http://www.stanford.edu/dept/itss/docs/oracle/10gR2/backup.102/b14236/obref_oba.htm
To list pieces you could use lspiece within obtool:
http://www.stanford.edu/dept/itss/docs/oracle/10gR2/backup.102/b14236/obref_oba.htm#BHBBIFFE
If this works, you should be able to identify the controlfile autobackup if it has the standard naming c-dbid-date-xx, in which case you then know the DBID, or you can restore a controlfile from a backup piece in the output list.
You might have to install 9i or 10g RDBMS software, as 11g was only released a year later, in 2007.
Anyway, good luck.
Regards,
Tycho -
Welcome to the SQL Server Disaster Recovery and Availability Forum
(Edited 8/14/2009 to correct links - Paul)
Hello everyone and welcome to the SQL Server Disaster Recovery and Availability forum. The goal of this Forum is to offer a gathering place for SQL Server users to discuss:
Using backup and restore
Using DBCC, including interpreting output from CHECKDB and related commands
Diagnosing and recovering from hardware issues
Planning/executing a disaster recovery and/or high-availability strategy, including choosing technologies to use
The forum will have Microsoft experts in all these areas and so we should be able to answer any question. Hopefully everyone on the forum will contribute not only questions, but opinions and answers as well. I’m looking forward to seeing this becoming a vibrant forum.
This post has information to help you understand what questions to post here, and where to post questions about other technologies as well as some tips to help you find answers to your questions more quickly and how to ask a good question. See you in the group!
Paul Randal
Lead Program Manager, SQL Storage Engine and SQL Express
Be a good citizen of the Forum
When an answer resolves your problem, please mark the thread as Answered. This makes it easier for others to find the solution to this problem when they search for it later. If you find a post particularly helpful, click the link indicating that it was helpful
What to post in this forum
It seems obvious, but this forum is for discussion and questions around disaster recovery and availability using SQL Server. When you want to discuss something that is specific to those areas, this is the place to be. There are several other forums related to specific technologies you may be interested in, so if your question falls into one of these areas where there is a better batch of experts to answer your question, we’ll just move your post to that Forum so those experts can answer. Any alerts you set up will move with the post, so you’ll still get notification. Here are a few of the other forums that you might find interesting:
SQL Server Setup & Upgrade – This is where to ask all your setup and upgrade related questions. (http://social.msdn.microsoft.com/Forums/en-US/sqlsetupandupgrade/threads)
Database Mirroring – This is the best place to ask Database Mirroring how-to questions. (http://social.msdn.microsoft.com/Forums/en-US/sqldatabasemirroring/threads)
SQL Server Replication – If you’ve already decided to use Replication, check out this forum. (http://social.msdn.microsoft.com/Forums/en-US/sqlreplication/threads)
SQL Server Database Engine – Great forum for general information about engine issues such as performance, FTS, etc. (http://social.msdn.microsoft.com/Forums/en-US/sqldatabaseengine/threads)
How to find your answer faster
There is a wealth of information already available to help you answer your questions. Finding an answer via a few quick searches is much quicker than posting a question and waiting for an answer. Here are some great places to start your research:
SQL Server 2005 Books Online
Search it online at http://msdn2.microsoft.com
Download the full version of the BOL from here
Microsoft Support Knowledge Base:
Search it online at http://support.microsoft.com
Search the SQL Storage Engine PM Team Blog:
The blog is located at https://blogs.msdn.com/sqlserverstorageengine/default.aspx
Search other SQL Forums and Web Sites:
MSN Search: http://www.bing.com/
Or use your favorite search engine
How to ask a good question
Make sure to give all the pertinent information that people will need to answer your question. Questions like “I got an IO error, any ideas?” or “What’s the best technology for me to use?” will likely go unanswered, or at best just result in a request for more information. Here are some ideas of what to include:
For the “I got an IO error, any ideas?” scenario:
The exact error message. (The SQL Errorlog and Windows Event Logs can be a rich source of information. See the section on error logs below.)
What were you doing when you got the error message?
When did this start happening?
Any troubleshooting you’ve already done. (e.g. “I’ve already checked all the firmware and it’s up-to-date” or "I've run SQLIOStress and everything looks OK" or "I ran DBCC CHECKDB and the output is <blah>")
Any unusual occurrences before the error occurred (e.g. someone tripped the power switch, a disk in a RAID5 array died)
If relevant, the output from ‘DBCC CHECKDB (yourdbname) WITH ALL_ERRORMSGS, NO_INFOMSGS’
The SQL Server version and service pack level
For the “What’s the best technology for me to use?” scenario:
What exactly are you trying to do? Enable local hardware redundancy? Geo-clustering? Instance-level failover? Minimize downtime during recovery from IO errors with a single-system?
What are the SLAs (Service Level Agreements) you must meet? (e.g. an uptime percentage requirement, a minimum data-loss in the event of a disaster requirement, a maximum downtime in the event of a disaster requirement)
What hardware restrictions do you have? (e.g. “I’m limited to a single system” or “I have several worldwide mirror sites but the size of the pipe between them is limited to X Mbps”)
What kind of workload does your application have? (Or is it a mixture of applications consolidated on a single server, each with different SLAs?) How much transaction log volume is generated?
What kind of regular maintenance does your workload demand that you perform (e.g. “the update pattern of my main table is such that fragmentation increases in the clustered index, slowing down the most common queries so there’s a need to perform some fragmentation removal regularly”)
Finding the Logs
You will often find more information about an error by looking in the Error and Event logs. There are two sets of logs that are interesting:
SQL Error Log: default location: C:\Program Files\Microsoft SQL Server\MSSQL.#\MSSQL\LOG (Note: The # changes depending on the ID number for the installed Instance. This is 1 for the first installation of SQL Server, but if you have multiple instances, you will need to determine the ID number you’re working with. See the BOL for more information about Instance ID numbers.)
Windows Event Log: Go to the Event Viewer in the Administrative Tools section of the Start Menu. The System event log will show details of IO subsystem problems. The Application event log will show details of SQL Server problems.

Hi, I have a question on SQL database high availability. I have tried using database mirroring on SQL Standard Edition, where synchronous mode is the only option available, and it is causing problems like SQL timeout errors on my applications since I put database mirroring in place. As asynchronous mirroring is only available in Enterprise Edition, are there any suggestions on this? Thanks --- vijay
-
RMAN backup and restore for Disaster Recovery
Hi Guys,
I am very new to Oracle and have a question about the RMAN backup and restore feature. I am simulating a disaster recovery scenario by having two VMs running Oracle 11g, say hosta and hostb; hosta is sort of a production db and the other a disaster recovery db (one that will be made primary when disaster occurs). My goal is to back up the production db using RMAN and restore it on the other machine. For some reason, when I restore the db on hostb, the command restores the previous backup and not the most recent one, e.g. I took a backup yesterday (09/20) and applied it to hostb, which worked fine, but when I try to apply a fresh backup from today (09/21) it always picks up the old backup. Here's a dump of the screen:
Starting restore at 21-SEP-11
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /oracle/app/dev/oradata/forums/system01.dbf
channel ORA_DISK_1: restoring datafile 00002 to /oracle/app/dev/oradata/forums/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00003 to /oracle/app/dev/oradata/forums/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00004 to /oracle/app/dev/oradata/forums/users01.dbf
channel ORA_DISK_1: reading from backup piece /oracle/app/dev/flash_recovery_area/FORUMS/backupset/o1_mf_nnnd0_TAG20110920T040950_77jx3zk7_.bkp
channel ORA_DISK_1: piece handle=/oracle/app/dev/flash_recovery_area/FORUMS/backupset/o1_mf_nnnd0_TAG20110920T040950_77jx3zk7_.bkp tag=TAG20110920T040950
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:01:05
Finished restore at 21-SEP-11
Notice that it reads the backup piece from this location: /oracle/app/dev/flash_recovery_area/FORUMS/backupset/
whereas my latest backups are stored at a different location.
I am executing following steps at the RMAN prompt on hosta and hostb:
hosta (production site)
backup as compressed backupset database plus archivelog
backup current controlfile format'/oracle/oracle_bkup/ctl_%U'
backup spfile
hostb (Disaster site)
set dbid=13732232063
startup force nomount
restore spfile to '/oracle/app/dev/product/11.2.0/dbhome_1/dbs/spfileforums.ora' from '/export/home/dev/restore_db/backupset/2011_09_21/o1_mf_nnsnf_TAG20110921T114945_77ndg9ys_.bkp'
shutdown immediate
/* create a init<db_name>.ora file with SPFILE= /oracle/app/dev/product/11.2.0/dbhome_1/dbs/spfileforums.ora */
startup force pfile='/export/home/dev/restore_db/initforums.ora' nomount
restore controlfile from '/export/home/dev/restore_db/backupset/2011_09_21/ctl_1hmn3mic_1_1'
/* restart rman here */
quit
alter database mount;
catalog start with '/export/home/dev/restore_db/backupset/2011_09_21' noprompt;
/* call the next two commands on run */
restore database;
recover database;
alter database open resetlogs;
quit
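Before running the restore on hostb, it can help to check which backup sets the mounted control file actually knows about and which pieces RMAN would choose; RESTORE ... PREVIEW reports this without restoring anything. A quick sketch:

```rman
RMAN> list backup of database summary;
RMAN> restore database preview summary;
```

If the preview lists the old piece from the flash recovery area, the newer pieces were not registered in the restored control file.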
Any help will be greatly appreciated.
Thanks,
Rajesh

Thanks guys, I really appreciate all your help here. I redid everything all over again to gather all the information you asked for. Since more eyes are now looking into this, I am going to reiterate my steps one more time, followed by specific answers to your questions. My first backup on Host B is located under .../restore_db/backupset, whereas the subsequent one is under .../restore_db/backupset/backupset2.
I take backup on Host A using:
rman target /
backup as compressed backupset database plus archivelog;
backup spfile;
quit;
I restore the backup on Host B using:
set dbid=13732232063;
startup force nomount;
restore spfile to '/oracle/app/dev/product/11.2.0/dbhome_1/dbs/spfileforums.ora' from '/export/home/dev/restore_db/backupset/o1_mf_nnsnf_TAG20110928T171830_787gbpxh_.bkp'
shutdown immediate;
startup force nomount;
restore controlfile from '/export/home/dev/restore_db/backupset/o1_mf_ncsnf_TAG20110928T171638_787gbkxn_.bkp'
quit;
/* restart rman here */
alter database mount;
catalog start with '/export/home/dev/restore_db/backupset' noprompt;
restore database;
recover database;
alter database open resetlogs;
quit;
I take another backup on Host A using (notice no spfile backup this time):
backup as compressed backupset database plus archivelog;
quit;
I restore the database on Host B using:
alter database mount;
catalog start with '/export/home/dev/restore_db/backupset/backupset2' noprompt;
recover database;
alter database open;
quit;
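RMAN restores from the backup with the most recent completion time that the mounted control file knows about, so if the old piece keeps being chosen, the fresh pieces were probably never registered. A quick check, reusing the backupset2 path above:

```rman
RMAN> catalog start with '/export/home/dev/restore_db/backupset/backupset2' noprompt;
RMAN> list backup of database completed after 'sysdate-1';
```

Only pieces listed here are candidates for the next restore/recover.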
Output of "list backup of database" (run after the second recovery; note that it refers to backupset2, which is where my second backup is stored):
RMAN> list backup of database;
using target database control file instead of recovery catalog
List of Backup Sets
===================
BS Key Type LV Size Device Type Elapsed Time Completion Time
89 Full 261.87M DISK 00:01:37 28-SEP-11
BP Key: 91 Status: AVAILABLE Compressed: YES Tag: TAG20110928T171638
Piece Name: /export/home/dev/restore_db/backupset/o1_mf_nnndf_TAG20110928T171638_787g77rx_.bkp
List of Datafiles in backup set 89
File LV Type Ckp SCN Ckp Time Name
1 Full 1813849 28-SEP-11 /oracle/app/dev/oradata/forums/system01.dbf
2 Full 1813849 28-SEP-11 /oracle/app/dev/oradata/forums/sysaux01.dbf
3 Full 1813849 28-SEP-11 /oracle/app/dev/oradata/forums/undotbs01.dbf
4 Full 1813849 28-SEP-11 /oracle/app/dev/oradata/forums/users01.dbf
BS Key Type LV Size Device Type Elapsed Time Completion Time
97 Full 259.16M DISK 00:00:00 28-SEP-11
BP Key: 100 Status: AVAILABLE Compressed: YES Tag: TAG20110928T183527
Piece Name: /export/home/dev/restore_db/backupset/backupset2/o1_mf_nnndf_TAG20110928T183527_787lv0nb_.bkp
List of Datafiles in backup set 97
File LV Type Ckp SCN Ckp Time Name
1 Full 1816853 28-SEP-11 /oracle/app/dev/oradata/forums/system01.dbf
2 Full 1816853 28-SEP-11 /oracle/app/dev/oradata/forums/sysaux01.dbf
3 Full 1816853 28-SEP-11 /oracle/app/dev/oradata/forums/undotbs01.dbf
4 Full 1816853 28-SEP-11 /oracle/app/dev/oradata/forums/users01.dbf
Output of "list backup" (run after restoring the control file):
RMAN> list backup;
BS Key Size Device Type Elapsed Time Completion Time
87 89.20M DISK 00:00:26 28-SEP-11
BP Key: 87 Status: AVAILABLE Compressed: YES Tag: TAG20110928T171526
Piece Name: /oracle/app/dev/flash_recovery_area/FORUMS/backupset/2011_09_28/o1_mf_annnn_TAG20110928T171526_787g50bm_.bkp
List of Archived Logs in backup set 87
Thrd Seq Low SCN Low Time Next SCN Next Time
1 34 1302253 20-SEP-11 1306439 20-SEP-11
1 35 1306439 20-SEP-11 1307647 20-SEP-11
1 36 1307647 20-SEP-11 1307701 20-SEP-11
1 37 1307701 20-SEP-11 1311393 20-SEP-11
1 38 1311393 20-SEP-11 1311511 20-SEP-11
1 39 1311511 20-SEP-11 1332479 20-SEP-11
1 40 1332479 20-SEP-11 1344418 20-SEP-11
1 41 1344418 20-SEP-11 1350409 20-SEP-11
1 42 1350409 20-SEP-11 1350449 20-SEP-11
1 43 1350449 20-SEP-11 1350854 21-SEP-11
1 44 1350854 21-SEP-11 1350895 21-SEP-11
1 45 1350895 21-SEP-11 1353114 21-SEP-11
1 46 1353114 21-SEP-11 1353254 21-SEP-11
1 47 1353254 21-SEP-11 1353865 21-SEP-11
1 48 1353865 21-SEP-11 1353988 21-SEP-11
1 49 1353988 21-SEP-11 1375403 21-SEP-11
1 50 1375403 21-SEP-11 1376149 21-SEP-11
1 51 1376149 21-SEP-11 1376206 21-SEP-11
1 52 1376206 21-SEP-11 1376246 21-SEP-11
1 53 1376246 21-SEP-11 1379990 21-SEP-11
1 54 1379990 21-SEP-11 1380229 21-SEP-11
1 55 1380229 21-SEP-11 1380266 21-SEP-11
1 56 1380266 21-SEP-11 1380528 21-SEP-11
1 57 1380528 21-SEP-11 1380724 21-SEP-11
1 58 1380724 21-SEP-11 1380861 21-SEP-11
1 59 1380861 21-SEP-11 1381033 21-SEP-11
1 60 1381033 21-SEP-11 1381077 21-SEP-11
1 61 1381077 21-SEP-11 1402243 22-SEP-11
1 62 1402243 22-SEP-11 1423341 22-SEP-11
1 63 1423341 22-SEP-11 1435456 22-SEP-11
1 64 1435456 22-SEP-11 1454415 23-SEP-11
1 65 1454415 23-SEP-11 1490903 23-SEP-11
1 66 1490903 23-SEP-11 1491266 23-SEP-11
1 67 1491266 23-SEP-11 1491347 23-SEP-11
1 68 1491347 23-SEP-11 1492761 23-SEP-11
1 69 1492761 23-SEP-11 1492891 23-SEP-11
1 70 1492891 23-SEP-11 1493678 23-SEP-11
1 71 1493678 23-SEP-11 1493704 23-SEP-11
1 72 1493704 23-SEP-11 1494741 23-SEP-11
1 73 1494741 23-SEP-11 1494790 23-SEP-11
1 74 1494790 23-SEP-11 1510154 23-SEP-11
1 75 1510154 23-SEP-11 1514286 23-SEP-11
1 76 1514286 23-SEP-11 1531967 24-SEP-11
1 77 1531967 24-SEP-11 1543266 24-SEP-11
1 78 1543266 24-SEP-11 1558427 24-SEP-11
1 79 1558427 24-SEP-11 1566924 24-SEP-11
1 80 1566924 24-SEP-11 1578292 24-SEP-11
1 81 1578292 24-SEP-11 1596894 25-SEP-11
BS Key Size Device Type Elapsed Time Completion Time
88 84.03M DISK 00:00:30 28-SEP-11
BP Key: 88 Status: AVAILABLE Compressed: YES Tag: TAG20110928T171526
Piece Name: /oracle/app/dev/flash_recovery_area/FORUMS/backupset/2011_09_28/o1_mf_annnn_TAG20110928T171526_787g63s9_.bkp
List of Archived Logs in backup set 88
Thrd Seq Low SCN Low Time Next SCN Next Time
1 82 1596894 25-SEP-11 1609028 25-SEP-11
1 83 1609028 25-SEP-11 1622303 25-SEP-11
1 84 1622303 25-SEP-11 1626430 25-SEP-11
1 85 1626430 25-SEP-11 1634486 25-SEP-11
1 86 1634486 25-SEP-11 1648398 25-SEP-11
1 87 1648398 25-SEP-11 1669259 26-SEP-11
1 88 1669259 26-SEP-11 1686820 26-SEP-11
1 89 1686820 26-SEP-11 1686959 26-SEP-11
1 90 1686959 26-SEP-11 1689168 26-SEP-11
1 91 1689168 26-SEP-11 1704759 26-SEP-11
1 92 1704759 26-SEP-11 1719597 27-SEP-11
1 93 1719597 27-SEP-11 1740407 27-SEP-11
1 94 1740407 27-SEP-11 1750125 27-SEP-11
1 95 1750125 27-SEP-11 1765592 27-SEP-11
1 96 1765592 27-SEP-11 1781498 28-SEP-11
1 97 1781498 28-SEP-11 1802311 28-SEP-11
1 98 1802311 28-SEP-11 1811009 28-SEP-11
1 99 1811009 28-SEP-11 1813811 28-SEP-11
BS Key Type LV Size Device Type Elapsed Time Completion Time
89 Full 261.87M DISK 00:01:37 28-SEP-11
BP Key: 89 Status: AVAILABLE Compressed: YES Tag: TAG20110928T171638
Piece Name: /oracle/app/dev/flash_recovery_area/FORUMS/backupset/2011_09_28/o1_mf_nnndf_TAG20110928T171638_787g77rx_.bkp
List of Datafiles in backup set 89
File LV Type Ckp SCN Ckp Time Name
1 Full 1813849 28-SEP-11 /oracle/app/dev/oradata/forums/system01.dbf
2 Full 1813849 28-SEP-11 /oracle/app/dev/oradata/forums/sysaux01.dbf
3 Full 1813849 28-SEP-11 /oracle/app/dev/oradata/forums/undotbs01.dbf
4 Full 1813849 28-SEP-11 /oracle/app/dev/oradata/forums/users01.dbf -
Disaster recovery on a new host. Oracle 11.1.0.6
Oracle 11.1.0.6, windows server 2003 R2 Service Pack 2 x64)
I need to test disaster recovery on a new host(with diferent name).
It looks like I am able to complete recovery stage but dbcontrol doesn't work for me.
I was told that if you change the host name or IP address you should delete the repository and recreate it.
Do I need to import emkey.ora file on a new host as well?
I used this post: 11g Disaster recovery on a new host.
restored the database on the new host
copied emkey.ora to the new host: c:\oracle\product\11.1.0\db_1\sysman\config
set ORACLE_SID=ORA2
run cmd
C:\oracle\product\11.1.0\db_1\BIN>emctl config emkey -emkeyfile C:\oracle\product\11.1.0\db_1\sysman\config\emkey.ora -force
Oracle Enterprise Manager 11g Database Control Release 11.1.0.6.0
Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
Please enter repository password:
The Em Key has been configured successfully.
C:\oracle\product\11.1.0\db_1\BIN>emctl config emkey -copy_to_repos
Oracle Enterprise Manager 11g Database Control Release 11.1.0.6.0
Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
Please enter repository password:
The Em Key has been copied to the Management Repository. This operation will cause the Em Key to become unsecure.
After the required operation has been completed, secure the Em Key by running "emctl config emkey -remove_from_repos".
emctl.bat stop dbconsole
emctl config emkey -repos -sysman_pwd password
emctl secure dbconsole -sysman_pwd password
emctl.bat start dbconsole
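Dropping and recreating the DB Control repository after a host rename is normally done with emca rather than emctl; a minimal sketch, assuming ORACLE_SID is already set and default ports (emca will prompt interactively for the SYS and SYSMAN passwords):

```shell
rem drop the old repository that still references the previous host name
emca -deconfig dbcontrol db -repos drop

rem recreate the repository and reconfigure DB Control for the current host
emca -config dbcontrol db -repos create
```

After the recreate step, the emkey import shown above should no longer be necessary, since a fresh repository gets a fresh key.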
Now I am able to open
https://oratest05.domainname.com:1158/em/
but unable to log on:
Your username and/or password are invalid. -
Disaster Recovery from tape library!
Hi,
I am using Oracle 10g and Backup Exec 12 as the media management software. How do I perform the disaster recovery? My entire scenario is like this:
Server A has the DB (Database).
Configured Backup Exec with RMAN and took full DB backup along with autobackup of controlfile to the tape.
Server A crashes and is unusable.
Installs Oracle on server B and needs to restore the DB on this new server.
I know the DBID of the DB but how to configure RMAN to work with Backup Exec, so that I can restore the db from the tape library?
When you configure the RMAN integration with Backup Exec, it finds the database running on the server and will perform the integration. But in my case, since the database does not exist, how will the Backup Exec agent configure the integration?
How do we go about the restoration to Server B in this case?
Thanks!

Hi!
I am still not able to restore the db. The scenario is:
Took a full database backup to tape from server A with controlfile autobackup
Trying to restore the db on server B. The following is the script used to restore:
connect target /
set dbid=152417844;
startup nomount;
run
{
allocate channel sbt1 type 'sbt_tape';
SEND 'BSA_SERVICE_HOST=ipaddress,NBBSA_TOTAL_STREAMS=1';
set controlfile autobackup format for device type 'sbt_tape' to 'cf_%F';
restore spfile from autobackup;
restore controlfile from autobackup;
alter database mount;
restore database;
recover database;
release channel sbt1;
}
alter database open resetlogs;
When I try to restore, the output is:
channel sbt1: looking for autobackup on day: 20090624
channel sbt1: looking for autobackup on day: 20090623
channel sbt1: looking for autobackup on day: 20090622
channel sbt1: looking for autobackup on day: 20090621
channel sbt1: looking for autobackup on day: 20090620
channel sbt1: looking for autobackup on day: 20090619
channel sbt1: looking for autobackup on day: 20090618
channel sbt1: no autobackup in 7 days found
released channel: sbt1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 06/24/2009 11:19:28
RMAN-06172: no autobackup found or specified handle is not a valid copy or piece
Please advise!
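The autobackup search above stops after the default window of 7 days; if the autobackup on tape is older than that, the window can be widened with MAXDAYS. A sketch reusing the same channel settings (note the autobackup format string must match the one used at backup time):

```rman
run
{
allocate channel sbt1 type 'sbt_tape';
SEND 'BSA_SERVICE_HOST=ipaddress,NBBSA_TOTAL_STREAMS=1';
set controlfile autobackup format for device type 'sbt_tape' to 'cf_%F';
# widen the search window from the default 7 days
restore controlfile from autobackup maxdays 30;
release channel sbt1;
}
```

If the format string does not match what was written to tape, no MAXDAYS value will find the piece.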
Edited by: Libra DBA on Jun 24, 2009 1:39 PM -
Disaster Recovery: Active/Passive Mail Flow
Hello. I am planning to deploy an active/passive server environment for Exchange 2013. Here's what I plan to achieve:
EXCHANGE-A - on production site, active (all mail flow goes through here), member of DAG
EXCHANGE-B - on disaster recovery site, passive (mail flow goes through here ONLY when activated), member of DAG
I have tried disabling all receive connectors on EXCHANGE-B, however that causes OWA to delay sending messages for around 20 seconds (messages stuck in Drafts folder before being sent out). I have also tried disabling Exchange Transport services on EXCHANGE-B
but to no avail.
Is it possible to achieve an active/passive mail flow server environment? What is the best way to achieve this? I appreciate any leads on this matter.

Hi,
Shadow redundancy is configured on the transport config; any change to the transport config parameters applies to all transport servers in the organisation and is not specific to a single server.
Command to disable the shadow redundancy feature:
Set-TransportConfig -ShadowRedundancyEnabled $false
Note: the above command will disable the shadow redundancy feature for your entire organisation.
Thanks & Regards S.Nithyanandham -
Hi Everyone,
If the Production system becomes unavailable due to destruction such as fire or flood, we want to avoid system downtime of several days or weeks, so we are planning to go for a Disaster Recovery solution.
We are planning to go with this solution for our production systems like EHP4 for SAP ERP 6.0/NW 7.01, Solution Manager 7.0 EHP1 systems.
Our landscape is configured as distributed systems (CI+DB), OS: Win2003 and DB: MS SQL Server 2005.
My doubts are as follows.
1.) I am not able to find any document for the DR solution. Can anyone guide me to where I can find documents related to the DR solution?
2.) Do the DR systems also need to connect to SAP, i.e. for remote support, RFC connections, etc.?
3.) Where can I find the Best Practices for DR? I have gone through the Best Practices but could not find anything specific to DR.
4.) Can anyone guide me through the correct procedure or steps to follow for the DR solution?
I'm planning to proceed like this: OS installation, DB installation, then SAP installation, then restore the backup of the Production system to the backup system (the DR solution system).
Thanks and Regards
Pavan

Hello Pavan,
I have found the following documentation about Disaster recovery:
http://help.sap.com/saphelp_erp2004/helpdata/en/6b/bd91381182d910e10000009b38f8cf/frameset.htm
http://help.sap.com/saphelp_erp2004/helpdata/en/f2/31ad5c810c11d288ec0000e8200722/frameset.htm
Also the following notes:
437160 : MS Disaster Recovery Articles for MS SQL Server
965908 : SQL Server Database Mirroring and SAP Applications
741289 : Profile parameters of the J2EE Engine are lost
(This note indicates parameters which must be maintained in the instance profile)
193816 : Restore with SQL Server
799058 : Setting Up Microsoft SQL Server 2005
I hope this information helps you.
Regards,
Blanca -
Disaster Recovery with different ASM diskgroups
Hi@all,
Actually I'm trying to test a disaster recovery scenario. On an Oracle Linux 6 server with Grid Infrastructure 12c and Oracle Database 11.2.0.4 installed (there is also a 12.1.0.2 database instance), I'm attempting a disaster recovery, but I'm going crazy restoring and recovering the database. The problem is that the ASM diskgroup name has changed. As you may have noticed, I also switched the physical server, but I don't think that should be a problem.
On the old server I have two ASM diskgroups, "+DATA" and "+FRA_1"; on the new one they're called "+DATA_SRVNAME" and "+FRA_SRVNAME". I've already changed the startup parameter in the spfile, but after restoring the controlfile, RMAN still points to the old diskgroup:
RMAN> report schema;
using target database control file instead of recovery catalog
RMAN-06139: WARNING: control file is not current for REPORT SCHEMA
Report of database schema for database with db_unique_name SID
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
1 0 SYSTEM *** +DATA/SID/datafile/system.438.816606399
So I've tried three ways. The first was to rename the datafile:
ALTER DATABASE RENAME FILE '+DATA/SID/datafile/system.438.816606399' TO '+DATA_SRVNAME/SID/datafile/system.438.816606399';
The second was to set the newname in RMAN:
set newname for datafile 16 to '+DATA_SRVNAME/SID/datafile/mms_basic_tab.455.816617697';
And the third was to recreate the controlfile with
CREATE CONTROLFILE REUSE DATABASE "SID" RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 254
MAXINSTANCES 1
MAXLOGHISTORY 1815
LOGFILE
GROUP 1 ('+DATA_SRVNAME', '+FRA_SRVNAME') SIZE 1024M,
DATAFILE
'+DATA_SRVNAME/SID/datafile/system.438.816606399',
CHARACTER SET AL32UTF8;
But all three methods gave me the error, that the datafile at the new location isn't available (example):
ORA-15012: ASM file '+DATA_QUM169/cogn11/datafile/system.438.816606399' does not exist
So now my question to you: How am I able to tell the controlfile or database to use the other ASM diskgroup?
I know the easiest way would be to create a diskgroup +DATA and do the restore/recover, but on the new server I have no more storage to assign to a new diskgroup, and because other instances are running there, I wouldn't want to change the ASM diskgroups.
Do you have any solution?
Thanks a lot and Regards,
Dave

Please try this:
RMAN> run
{
set newname for datafile 1 to '+DATA_SRVNAME';
restore datafile 1;
switch datafile 1;
recover datafile 1;
}
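If every datafile should move to the new diskgroup (not just file 1), 11.2 also accepts a database-wide newname; a sketch along the same lines, assuming the database is mounted from the restored controlfile (online logs and tempfiles still need handling separately):

```rman
RMAN> run
{
# OMF: each file is restored into the new diskgroup
set newname for database to '+DATA_SRVNAME';
restore database;
# update the controlfile to point at the restored copies
switch datafile all;
recover database;
}
RMAN> alter database open resetlogs;
```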
Did you create the +DATA_SRVNAME diskgroup? -
Your forum was recommended for my question/issue
We have two semi-annual disaster recovery tests annually. The pre-test process is composed of restoring the Disaster Recovery Client from the Production client, and allowing one set of users to perform their process procedures on the DRS server, while the other users continue normal business operations on the PRD server.
Earlier this year we upgraded from SAP 4.0b --> 4.7c.
During our first disaster recovery test this year users received a message 'change and transport system not configured' . They were able to continue master data updates, even though they received the message.
We would like to have a real disaster recovery test where we switch from the production system to the disaster recovery system (DRS) and continue our business operations from that server (DRS)(located in another state).
<u><b>What must be done to perform this switchover/switchback process between the two clients once the test is completed</b></u>?

If I interpreted you correctly: you want to bring up a DR copy of the PRD system on another server, then let a limited number of users test it while all other users continue normal operation on the PRD server.
Be aware that it is pretty dangerous to have both servers running at the same time.
Your main concern will be to isolate the DR system from the PRD system and interfacing systems!
The result will not be 100% correct:
- it will not be possible to test all interfaces - thus not all business functionality...
the main obstacle with failover systems is interfaces.
- you will not be able to see if the DR system can handle the load
you will not see if the server and/or infrastructure is sufficient
P.S. Remember that the interfacing goes both ways.
- You want someone to reach the system - as such you will need to open access into the DR system...
but - you will want "only a limited number of users" to do it.. as such you must play around with SAP logon and/or DNS/IP addresses.
- You do NOT want the DR system to update the PRD system (or send out info to any other partners)
as such - you must restrict outbound traffic from the DR system.
If you do not know what/where to isolate... then rip out the network cable and place the users next to the server -
Disaster Recovery on another Sun Server
I have a question regarding disaster recovery of Sun servers. We have a client hosting applications on the Sun T platform, which utilizes an LDOM configuration. These boxes host around five guests.
Currently we use software called EMC HomeBase (http://www.emc.com/backup-and-recovery/homebase/homebase.htm) to enhance our DR recovery by profiling the Sun box and transmitting that profile to our DR vendor. The DR vendor, in turn, with help from EMC, will install the OS and patch level and apply that profile to set up the system with NIC information, file systems and other components, so we would only need to restore our data from NetBackup and not do any system recovery.
Unfortunately, EMC does not support LDOMs or containers. Now my question is: does Sun have a way around this? Please note that the servers will be different models at DR, for example T5220 to T5440, or V890 to V490.

It'd be rather easy to manually install the same LDOM configuration on the DR site... all EMC seems to be doing is getting the profile, and then someone creates the setup manually...
Anyways... here you go:
Backup & Recovery of guest domain
Archive and backup the guest domain configuration:
ldm list-constraints -x <ldom> > <ldom>.xml
Backup the guest domains over the network with backup software as regular server
Restore the guest domain configuration, assuming that your VSW and VDS resources are available:
ldm add-domain -i <ldom>.xml
Make sure that you have enough VCPU, MAU, and Memory to boot your guest domain.
Jumpstart guest domain and install backup client. Restore data from backup.
Backup LDOM configuration
You can take a backup of the configuration using the "ldm add-spconfig saveconfiguration" command, which will store it in the SP of the server.
To take a backup of an individual domain, use "ldm ls-constraints >> filename.xml"; it gives XML output, just redirect it to a file.
While creating the domain again, use "ldm create ldomname -f filename.xml".
You cannot create the control domain by the above procedure; we have to configure it manually. Even if your control domain crashed: install a new OS, install the LDOM Manager in the base OS, create the /etc/hostname.vsw0 and vsw1 files, power off the server from the SP, and set bootmode config=saveconfiguration.
How to backup the primary domain
Backup the LDM db files in: /var/opt/SUNWldm
Archive and backup the primary domain configuration:
ldm list-constraints -x primary > primary.xml
Backup the primary domains over the network with backup software as regular server.
How to recover primary domain
Update firmware
Install backup client & LDM software
Restore /var/opt/SUNWldm
Start up LDM services
If all goes well, the LDM config will be read and you will be in initial configuration mode. Save the config to the hypervisor and
reboot. If not, parse the primary.xml file to recreate the primary domain configuration, save config, and reboot.
Restore guest domain configs and data. Then bring up guest domains.
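The backup steps above can be consolidated into a small script; a sketch, assuming a single guest named ldom1 and an SP config name of initial-config (both hypothetical, substitute your own):

```shell
# export constraints for the primary and one guest domain
ldm list-constraints -x primary > primary.xml
ldm list-constraints -x ldom1 > ldom1.xml

# save the current configuration to the service processor
ldm add-spconfig initial-config

# archive the LDM database together with the XML exports
tar cf ldm-backup.tar primary.xml ldom1.xml /var/opt/SUNWldm
```

The resulting tar archive plus a regular file-level backup of each guest is what gets shipped to the DR site.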