Good Recovery Strategy

Hi,
What would be a good recovery strategy for SAP systems, including the portal? (I mean considering the frequency of backup and restore of the test system, etc.)
Regards,
Arpit
Edited by: Arpit1983 on Jul 16, 2010 9:42 PM

Hi Eduardo,
I have gone through this note, but it did not suggest any restore or recovery strategy.
I have been through https://websmp106.sap-ag.de/atg, but it is all general information. Does SAP suggest any recovery strategy, the way it does a backup strategy?
Regards,
Arpit.

Similar Messages

  • Recovery Strategy

    Hi,
    If anyone has a good document on the recovery strategy of an SAP system, please mail it to [email protected]
    Thanks in advance,
    Sailesh K

    Hi,
    eb92ab84-05bc-4d48-9e7b-419efb8b29dd wrote:
    Hi Frank,
    I was also thinking along the same lines, but maintaining a separate table just to remember where it got stuck raises a maintainability question.
    I want to do something in the package itself; can you please suggest an idea? Thanks in advance.
    What exactly is the maintenance problem?
    Oracle was designed to keep tables.  You probably have several tables already; why would 1 more be a problem?  It may be appropriate to store the value in some existing table; ask someone who knows more about the application.
    I can't think of any way to store a value in the package between sessions.  The only things I know of in a package that persist from one session to another are hard-coded values.
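
    To illustrate the table-based approach described above, here is a minimal sketch run through SQL*Plus from the shell. All names in it (batch_checkpoint, nightly_load) and the credentials are invented for illustration only; in practice the MERGE would be called from inside the package after each committed chunk of work.

    #!/bin/bash
    # Minimal sketch: keep the restart point of a batch job in a small table instead
    # of a package variable. Object names and credentials are placeholders.
    sqlplus -s app_user/app_pass <<'SQL'
    CREATE TABLE batch_checkpoint (
      job_name     VARCHAR2(30) PRIMARY KEY,
      last_done_id NUMBER NOT NULL
    );

    -- record how far the job got (call this from the package after each chunk)
    MERGE INTO batch_checkpoint c
    USING (SELECT 'nightly_load' AS job_name, 42 AS last_done_id FROM dual) s
    ON (c.job_name = s.job_name)
    WHEN MATCHED THEN UPDATE SET c.last_done_id = s.last_done_id
    WHEN NOT MATCHED THEN INSERT (job_name, last_done_id)
                          VALUES (s.job_name, s.last_done_id);
    COMMIT;

    -- on the next run, read where to resume
    SELECT last_done_id FROM batch_checkpoint WHERE job_name = 'nightly_load';
    SQL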

  • What is a good recovery program

    Hi, I am looking for a good recovery program. Someone has mentioned Disk Drill; can anyone recommend it, or are there better options? I've mistakenly deleted an image folder and hopefully I shall be able to recover it, as the images are quite important.
    Regards
    Richard

    General File Recovery
    If you stop using the drive, it's possible to recover deleted files that have not been overwritten by using recovery software such as Data Rescue II, File Salvage or TechTool Pro. Each of the preceding comes on a bootable CD to enable usage without risk of writing more data to the hard drive. Two free alternatives are Disk Drill and TestDisk. Look for them and demos at MacUpdate or CNET Downloads. Recovery software usually provides a trial version that lets you determine whether the software would help before actually paying for it. Beyond this, or if the drive has completely failed, you would need to send the drive to a recovery service, such as Data Recovery by DriveSavers, which is very expensive.
    The longer the hard drive remains in use and data are written to it, the greater the risk your deleted files will be overwritten.
    Also visit The XLab FAQs and read the FAQ on Data Recovery.

  • Backup Recovery strategy : URGENT!!

    Hi all.
    I'm working in a big company that has 2 Oracle databases and one standby database. In a few days, I'll be responsible for these databases. I want to know which problems I may face and how I can solve them.
    For example, corruption of database datafiles. How should I set up my backup and recovery strategy? With a standby database, I think that I must take a cold backup every week. Is this true?
    Finally, I ask the Oracle gurus to please tell me which problems I may face and how I should handle them. Thank you very much.

    "I want to know which problems I may face and how I can solve them." No one can tell in advance; it is out of the scope of this forum to discuss every possibility. The more you tell about your database, the better advice you will get.
    "How should I set up my backup and recovery strategy? With a standby database, I think that I must take a cold backup every week. Is this true?" Well, you must configure your database to run in ARCHIVELOG mode, regardless of whether you are using a standby or not (a sketch follows below). If you run in NOARCHIVELOG mode, you are willing to lose transactions at some point in time.
    How to recover a corrupt datafile again depends on the type of corruption. Sometimes you are forced to recreate the datafile, sometimes you need to restore the whole database, and sometimes not.
    hare krishna
    Alok
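
    As an illustration of the ARCHIVELOG advice above, here is a minimal sketch of switching a database into ARCHIVELOG mode from the shell. It assumes OS authentication as SYSDBA, a short restart window, and an archive destination (or fast recovery area) that is already configured; adjust for your environment.

    #!/bin/bash
    # Minimal sketch: enable ARCHIVELOG mode (needs a short shutdown/startup window).
    # Assumes OS authentication as SYSDBA and an archive destination already set up.
    sqlplus -s "/ as sysdba" <<'SQL'
    SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    ALTER DATABASE ARCHIVELOG;
    ALTER DATABASE OPEN;
    ARCHIVE LOG LIST;
    SQL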

  • Good backup strategy for a mailserver?

    Hi,
    What would be a good backup strategy for a mail server, and what software would you use? I've got Retrospect right now ...
    Thanks a lot.
    Jerome

    Yeah, ptero, I know. I'm using mailbfr and it seems to work quite well.
    I didn't express myself clearly in this thread, I was in fact looking for the best way to backup a whole MacOS X Server whose primary function is being a mailserver.
    I'm backing up the mailbfr via Retrospect on another server, and I'm using Carbon Copy Cloner from time to time to make an image of the whole server. But is that a good idea? I've got a neat program on my Windows servers which makes 'continuous' incremental images of the whole disk while the servers are being used. Is there anything like that for Mac? Or how do you back up the system configuration of your Mac servers?
    But perhaps this is the wrong subforum to ask questions like this?
    Thanks a lot anyway!

  • Backup and Recovery Strategy?

    Hi,
    I have been asked to provide/formulate a backup and recovery strategy for the new Oracle environment being built. Currently, all of this is being done by a third party, and they won't share anything with me :-) The client will take over the servers in the near future.
    Here is some info about the environment:
    Env: There are four production databases. All are 10gR2 (10.2.0.5) on Red Hat Enterprise Linux 5.8. All transactions on the databases come through various batch jobs, and the data is then used by some search engines/applications.
    Current backup info: A level-1 incremental backup is done daily and a level-0 weekly. In addition, archived logs are backed up every hour. After each successful level-1 backup, the archived logs backed up that day are deleted; I guess the same is the case with level-0 backups. The retention is 30 days.
    New env: I have been told that backups will NOT go to tape. Instead, they will be stored on disk. So I assume no special backup software (NetBackup, CommVault, etc.) will be used. I have tried, but no one is confirming this assumption.
    I believe the above approach can be reused, but if I prepare a "Backup and Recovery Strategy" document with only this detail, it will be a one-page document (a sketch of the policy as an actual RMAN script appears at the end of this thread). I am not sure that is sufficient information for such a document.
    Please advise what else must be considered, asked and documented.
    Best regards

    user130038 wrote:
    Hi,
    I have been asked to provide/formulate a backup and recovery strategy for the new Oracle environment being built. Currently, all of this is being done by a third party, and they won't share anything with me :-) The client will take over the servers in the near future.
    Here is some info about the environment:
    Env: There are four production databases. All are 10gR2 (10.2.0.5) on Red Hat Enterprise Linux 5.8. All transactions on the databases come through various batch jobs, and the data is then used by some search engines/applications.
    <snip>
    A brand new application using an already obsolete, barely supported (sustaining support, requiring a special support contract) version of Oracle?
    People who won't talk to each other?
    I see systemic organizational problems ....
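
    To make that one-page document more concrete, the policy described in the question could be written down as an actual RMAN script. The following is only a sketch under the assumptions stated there (level 0 weekly, level 1 daily, hourly archived-log backups handled by a separate job, 30-day retention, disk only); the backup path is a placeholder.

    #!/bin/bash
    # Minimal sketch of a disk-based incremental backup matching the policy above.
    # LEVEL is 0 on the weekly run and 1 on daily runs; /backup/rman is a placeholder.
    LEVEL=${1:-1}

    rman target / <<EOF
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 30 DAYS;
    CONFIGURE DEFAULT DEVICE TYPE TO DISK;
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/backup/rman/%d_%T_%U.bkp';

    RUN {
      BACKUP INCREMENTAL LEVEL ${LEVEL} DATABASE;
      BACKUP ARCHIVELOG ALL DELETE INPUT;
      DELETE NOPROMPT OBSOLETE;
    }
    EOF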

  • How to find a good AI strategy for a survival game?

    Hi, I'm a college student here developing a tank survival game for the finals. I have had very little experience in AI strategies, so I wonder if anyone here can give me a hint or two to start.
    Thanks.
    The game's description is below:
    Goal:
    Develop a good AI strategy for survival.
    ----------------------------------------
    Game Name: Tank Survival
    How the game starts: All tanks are placed on the board in random positions.
    How the game ends: When only one tank is left surviving, that tank is the winner.
    When does a tank die? When a tank's energy reaches zero, it dies.
    Game environment: A 15x15 closed environment, like a chess board
    Participants: 5 teams, 3 tanks per team
    ----------------------------------------
    Game Basic Rule
    For each tank, only 1 action is allowed each turn.
    A tank can move with its shield on, but the shield costs energy.
    When one tank destroys another, it takes the remaining energy of the victim tank
    (bonus from kill = 1000 + (1/2) * enemy energy before kill).
    The tank's weapon can only shoot in a straight line.
    ----------------------------------------
    Facts about the tank
    Initial tank energy = 10000
    Life support energy cost = 50
    Missile yield multiplier = 3
    Shield multiplier = 5
    Radar cost = [(distance * accuracy)^1.3] * 3 + 100
    Missile cost = yield + accuracy * distance * 7
    Movement cost = (distance^1.4) * 50
    Shield cost = multiplier * energy shielded
    Maximum movement distance = 5 (any direction)
    Bonus from kill = 1000 + (1/2) * enemy energy before kill
    Missile accuracy = not disclosed
    Radar accuracy = not disclosed
    ----------------------------------------
    Key problems
    • I only have 3 tanks...
    • I have to survive among the other 15 tanks.
    • None of the actions (missile, radar, or shield) is guaranteed to work; there is always a chance they fail.

    I think the first stage is to work out what constitutes a "state" and a "move." A state is the current board positions with all the tank energies and other parameters. A move is a transition from one state to another. Can you move around? Can you fire the gun and move the tank at the same time? If you fire the gun, does it miss sometimes?
    You then have to have a way to evaluate a state. For instance, if you move closer to an enemy tank, does that put you in a better position or a worse position?
    You can then draw a Moves Tree. This starts from where you are, shows all the moves you could make and the states you could end up in after one move. You can attach a value to each state.
    Then you can extend the state tree by adding all the counter moves your enemy could make, and the states in which those moves would leave the game. Thus the transition from your state now, to your state after the enemy has moved, will show a gain or a loss, depending on whether you or the enemy gained more or lost less.
    The strategy that wins the game will be the one that takes you from the start state to the goal state in such a way that your enemy cannot stop you. There are two possible strategies for choosing moves, Minimax and Maximin. Minimax means minimise your maximum loss; Maximin means maximise your minimum gain.
    Try it first with a simple game like Noughts and Crosses. The state tree grows very rapidly in the number of moves, so you may need some algorithm for pruning it or for generating only part of it at a time, as modern chess-playing algorithms do.
    Write in a computer language that is good at handling state trees. I used to use Prolog but some programmers swear by Lisp. You won't be able to solve the problem in a crap language like Basic, Pascal or anything like those.
    You will need to read around the subject. If Alan Bundy has written anything recently, look at that first: he is an excellent writer and a very skilful practitioner as well.

  • Recovery Strategy - Need Some Information - Critical

    Hello all,
    I need help from you all to understand the following scenario:
    1. We have a production database which is running in ARCHIVELOG mode.
    2. This database is supported by a failover mechanism.
    3. The backup strategy is as follows:
    3.1 Sunday: Level 0
    3.2 Monday, Tuesday: Level 2
    3.3 Wednesday: Level 1
    3.4 Thursday: Level 2
    3.5 Friday: Level 1
    3.6 Saturday: Level 2
    Now follows my question:
    1. If I need to delete some of the old archived log files, can I delete them?
    2. If I can delete them, up to which date can I delete them?
    3. Suppose Sunday is the 16th, my database crashes on the 20th (Thursday), and I have deleted the archived log files up to the 15th (Saturday). In this situation, can I recover the database completely?
    If you need any clarification, please revert.
    This is a very critical issue for me.
    Your help will be appreciated.
    Himanshu

    Hi again Himanshu,
    In fact, you don't even need to restore the archived redo logs manually if you back them up!
    For example, in your backup scripts you can include BACKUP ARCHIVELOG ALL DELETE INPUT; this will include the archived logs in your backup sets, so they'll be covered by your backup retention policy.
    Then, if it crashes on Monday, you just RESTORE UNTIL "sunday" and RECOVER UNTIL "monday sometime"; RMAN will find the necessary backup sets, including those for the archived redo logs, and do it for you (a sketch with real syntax follows at the end of this thread).
    Or you can RESTORE ARCHIVELOG as well; refer to the RMAN guide for the syntax.
    HTH,
    Yoann.
    Message was edited by:
    Yoann Mainguy
    "sunday" and "monday sometime"are not RMAN keywords! (just to get it straight :) )

  • Is this a good backup strategy for a drive with Logic projects on it ?

    Let's say you build on an LP9 project: that could be editing regions in the arrange window, adding or deleting plugins, and saving the project under the same name. Those edits are stored in the project file, and now the session is different from the one on the backup disk. Is it a good idea to have Carbon Copy Cloner archive the older project into a separate folder and copy the newly edited version into the same project folder, replacing the older version? Or should I re-save the edited project (which I do by habit) on the work disk under a new name to reflect that it is a new version, so that this new version is cloned to the backup?
    But what if you delete digital audio off the work disk, either on purpose or by accident: should the backup task also delete that audio from the backup drive, or just move it to an archive folder, just in case, down the road?
    I am trying to design a new scheme where every day when I leave my studio, a MacBook Pro and my Logic work disk go with me, leaving behind the daily incremental backup. Two events are my concern. First, while away from the studio I might edit or add to a Logic project and want that work to be automatically backed up when I get back to the studio and dock the laptop. Second, if the studio burns down while I'm gone, I will have lost no data because the work drive is in my possession.
    I think CCC is a good choice to manage my new backup scheme. Anybody have comments or suggestions? Thanks, much appreciated.

    One way of making it more robust would be to make a copy on DVD or Blu-ray, as I have read that several professionals do for long-term archiving, but this might be seen as too much of a *** for a small/non-professional catalogue.
    Is one of the hard drives in a RAID setup? This would increase the safety factor and mean that it would take a three-fold failure for pictures to be lost. Additionally, the use of professional-quality hard drives built for a hard life in NASs etc., like the WD Red Pro, would give a meaningful increase in life expectancy over consumer-level disks.

  • Backup & Recovery Strategy

    Hi Gurus,
    Can anyone share their experience of backup & recovery using BCVs (Business Continuance Volumes)?
    I need to integrate my backup & recovery with BCVs. If you have any document or white paper which can help me, please send it to me.
    Are BCVs more efficient, consistent, and reliable than RMAN?
    Please advise.
    Regards,
    MB

    I have used BCVs for backup and cloning. Works quite well.
    I think you are not clear on what is a BCV. Simply, a BCV is a third (or fourth or fifth...) mirror in a mirrored RAID array. If you are referring to some sort of file system replication, where the SAN server copies changed data blocks to a second set of volumes, as a BCV, I suspect that someone may be playing fast and loose with the definition of BCV -- at least I have never heard of these replicated volumes as true BCVs. They may be useful for file systems that house flat files, but for database backup and recovery, they are useless -- I sure would not bet my job on them.
    A simple Oracle backup scenario using BCVs would be something like the following (a command-level sketch of the database-side steps appears after the list):
    0) demount the BCVs from alternate mount points (if so mounted)
    1) resilver the BCVs
    2) put the database in hot backup mode
    3) switch logfile
    4) quiesce database writing
    5) split the BCVs
    6) un-quiesce database writing
    7) take the database out of hot backup mode
    8) switch logfile
    9) back up the control file to a named file (not to trace), plus the init.ora file and password file (if used).
    a) backup the archived redo logs.
    b) mount the BCVs to alternate volume names or to a different server
    c) backup the BCVs to tape or ...
    At the moment you split the BCVs, you have created an instant hot backup, so treat it as such. -- Remember you DO NOT want the online redo log files.
    If you split the BCVs without putting the database in hot backup mode you will have what is essentially a copy of a crashed database. So if you use it to restore for DR, you will be starting up in crash recovery mode.
    A (fourth) BCV can be useful for cloning a large production database for testing.
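
    Here is a minimal sketch of the database-side commands for steps 2-9 above. It assumes Oracle 10g or later (so ALTER DATABASE BEGIN/END BACKUP can be used database-wide) and that the BCV resilver/split itself is done by separate, vendor-specific storage tooling at the point marked in the comments; the control file path is a placeholder.

    #!/bin/bash
    # Minimal sketch of the database-side part of a BCV split backup (steps 2-9 above).
    # The actual BCV resilver/split commands are vendor-specific and only marked here.
    sqlplus -s "/ as sysdba" <<'SQL'
    -- steps 2/3: put the database in hot backup mode and switch the log
    ALTER DATABASE BEGIN BACKUP;
    ALTER SYSTEM ARCHIVE LOG CURRENT;
    SQL

    # steps 4-6: quiesce writes, split the BCV mirrors (vendor command), un-quiesce

    sqlplus -s "/ as sysdba" <<'SQL'
    -- steps 7/8: leave hot backup mode and archive the current log again
    ALTER DATABASE END BACKUP;
    ALTER SYSTEM ARCHIVE LOG CURRENT;
    -- step 9: keep a usable control file copy alongside the split
    ALTER DATABASE BACKUP CONTROLFILE TO '/backup/bcv/control_split.ctl' REUSE;
    SQL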

  • Disk crash, need recovery strategy help.

    We had a two-drive failure on our AS400 and had to reinstall the operating system. Can we install A7.3 from scratch, copy in our libraries from our latest backup, and have JDE function properly? Any ideas would help!

    Well, it may be that I am not correctly understanding your disaster situation. I am assuming that you have the same IBM CPU and same IBM hardware serial number. It sounds like you had to replace a disk drive and a tape drive, but not the actual CPU itself. Now it does sound like your operating system backup was eaten up by the tape drive and lost, and thus you are saying you had to reinstall the IBM operating system? Would you not have a slightly older backup tape that you could recover from? Even if you did not and had to reinstall the operating system, JDE World is fairly independent of the OS/400 itself (other than needing to run on it). So I am thinking that you do NOT need to reinstall JD Edwards. You should be able to restore the JD Edwards libraries, though you would need to have the QGPL library as part of that restore process, and run okay.
    If my assumptions are wrong, then what I said may not apply. Certainly if you restore onto a new CPU, with a different serial number, you will have to contact JDE support to get a new license key. When doing a hot-site test, I restore the operating system, then restore all the user libraries, get new license keys, and I am up and running. Note that in a hot-site test I did not have to reinstall JDE. But the difference may be that in a hot-site test I am restoring the operating system from backup tape (so not reinstalling OS/400). So I have not personally been involved in the exact situation that you may be facing (I doubt many folks have such experience).
    You probably want to be in touch with JDE support and look to them for guidance on what to do, giving them your exact restore situation that you are facing.
    Hope this helps a little bit. I'm a bit surprised that these days a disk failure would cause a system crash (I have not seen that since the 1980s); most systems have disk mirroring or RAID of some kind now, so the system can still run while waiting for the disk drive to be replaced (performance may suffer, but better that than losing the whole system).
    Good luck.
    John Dickey

  • DATABASE RECOVERY STRATEGY

    Hello,
    I want to plan a strategy to minimize data loss. For that I need the following information:
    1. Is there any tool/utility in Oracle by which I can raise alerts for tablespaces that are reaching their limit (an automated tool or utility)?
    2. Is there a utility by which I can get information about the free space available on the hard disk?

    Enterprise Manager can be configured for both checks; it can also alert you by mail/SMS. A quick command-line alternative is sketched below.
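
    If Enterprise Manager is not available, a minimal command-line sketch such as the following covers both checks: tablespace usage from the data dictionary and file-system free space from the OS. The 90 percent threshold is an arbitrary example value.

    #!/bin/bash
    # Minimal sketch: report tablespaces above a usage threshold, then show OS disk space.
    # The threshold of 90 percent is an arbitrary example.
    sqlplus -s "/ as sysdba" <<'SQL'
    SET PAGESIZE 100 LINESIZE 120
    COLUMN tablespace_name FORMAT A30
    SELECT tablespace_name,
           ROUND(used_percent, 1) AS used_pct
    FROM   dba_tablespace_usage_metrics
    WHERE  used_percent > 90
    ORDER  BY used_percent DESC;
    SQL

    # Free space on the file systems holding the datafiles
    df -h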

  • First major disaster :) recovery strategy

    Hi,
    After 15 years of using Linux and Unix, I have had my first major blunder: starting an rm -rf /usr/lib command when I just wanted to erase a temporary .so file I had put there for some purpose. I stopped the obnoxious command, but too late, and some files had already been deleted. Pacman and yaourt are broken (they need dependencies like libfetch.so). This is not that big an issue though, since my personal stuff is intact (and backed up anyway).
    Now I have to think about the strategy that will allow me to fix my system:
    Can I fix the system at this stage, or do I need to reinstall?
    Can I generate a list of installed packages (repository and AUR) even if pacman is broken (a la pacman -Qa and pacman -Qma, but another way) and my package cache is empty? (See the sketch at the end of this thread.)
    In case I need to repair pacman, I was considering using an Archboot image and copying the lib files pacman needs into /usr/lib. Would that work?
    If you have further recommendations, please let me know what is best. Thanking you in advance

    chemicalfan wrote:
    There's a script somewhere in the Arch wiki that will rebuild a package list if you lose pacman, but I don't remember whereabouts it is, sorry!
    Edit: Try this - https://bbs.archlinux.org/viewtopic.php?pid=670876
    The particular script you reference is useful when /var/lib/pacman/local is compromised. The link to Xyne's script is far more useful in this scenario. Neat.
    edit: So, after looking at Xyne's script more closely, he's trying to do something similar to what the lost-db script does. Still not quite useful here. I've taken his idea and created the following snippet which compares the local DB's filelists to the filesystem. You can specify a tolerance and only return packages which are below this tolerance, e.g. a tolerance of 90 means only show packages where <=90% of the files are present. You may get some false positives when this isn't run as root, but you can start with a tolerance of 100 and work your way down from there.
    #!/bin/bash
    declare curpkg= pkg= file=
    declare -i tolerance=$1 filecount= hitcount=

    # print a line for the current package if it is below the tolerance
    summary() {
      local ratio=
      # boring
      if (( hitcount == filecount )); then
        return
      fi
      # under tolerance
      ratio=$(( hitcount * 100 / filecount % 100 ))
      if (( ratio < tolerance )); then
        printf '%-30s %-10s (%s%%)\n' "$pkg" "$hitcount/$filecount" "$ratio"
      fi
    }

    # walk the file list of every installed package and count which files still exist
    pacman -Ql | while read pkg file; do
      if [[ -z $curpkg || $pkg != "$curpkg" ]]; then
        summary "$pkg" "$hitcount" "$filecount"
        curpkg=$pkg
        hitcount=0
        filecount=0
      fi
      # skip directories
      [[ $file = */ ]] && continue
      (( ++filecount ))
      [[ -e "$file" || -L "$file" ]] && (( ++hitcount ))
    done
    Example output...
    $ filecheck 100
    feh 78/80 (97%)
    linux-rampage-headers 1170/1171 (99%)
    namcap 46/49 (93%)
    Here, feh and namcap are false positives because I'm not root. I really did delete a file from my kernel package so I can account for that.
    Last edited by falconindy (2011-08-18 15:57:07)
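
    Regarding the earlier question about generating a package list even if pacman itself were unusable: the local database under /var/lib/pacman/local is just a set of directories, so a minimal sketch like the one below can rebuild a name list without running pacman at all (assuming that directory survived the deletion).

    #!/bin/bash
    # Minimal sketch: list installed packages straight from pacman's local database
    # directory, without invoking pacman. Each entry is a "name-version-release" dir.
    for d in /var/lib/pacman/local/*/; do
        basename "$d"
    done | sed -E 's/-[^-]+-[^-]+$//' | sort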

  • Fast Recovery Strategy

    Hi @all,
    I have a customer with a difficult problem. For the development of his own product he needs an SAP ERP 6.0 system. His software is available in at least 4 versions plus many sub-versions, so it is not possible to create a separate SAP system for every version.
    What he needs now is a way of "switching" between the versions, and this very fast. What I know is that MaxDB 7.7 can handle several snapshots, but when I revert to a snapshot, newer snapshots are deleted.
    Does anyone have an idea?
    Kind regards,
    Martin

    Hi Martin,
    what exactly is the limiting factor for your customer that prevents having a distinct ERP installation for each product version?
    Is it CPU, memory, or disk space?
    What parts of the ERP installation does your customer's product actually use?
    Maybe he can install a virtual machine for each product version and start/stop (suspend/resume) them as he needs them?
    As disks are cheap and keep getting cheaper, this might be an easy option that avoids any interference between the different ERP setups.
    regards,
    Lars

  • Data recovery of trashed files

    I recently lost some data on a hard drive by inadvertently tossing a folder into the trash and emptying it (there is no software for carelessness). My backup software just mirrored the affected drive and erased the data from the backup drive. I have turned the drive off and am now seeking a good utility to recover trashed data. I have used the demo version of Data Rescue II by Prosoft, which indicated there might be some data to recover, but I want more information on data recovery software before I buy.
    I am also looking for a good data recovery strategy in the event I find myself in this predicament again. Any thoughts out there?
    G5   Mac OS X (10.3.9)  

    I find myself in the same predicament after emptying the trash and realising I had placed files in there by accident. Have you solved your problem yet?
    I came across some free Quick Recovery for Mac software at http://3d2f.com/programs/30-815-quick-recovery-for-macintosh-download.shtml. Has anyone used this?
