Best Practices for Backup & Recovery on Windows

Hi All,
Could you please let me know the following.
1. Best practices to be followed for backup & recovery on Windows Server 2008
2. How to plan for DR on Windows
Regards
Mohammed Abdul Muqeet

Mohammed Abdul Muqeet wrote:
Hi All,
Could you please let me know the following.
1. Best practices to be followed for backup & recovery on Windows Server 2008
As far as the Oracle database is concerned, best practices on Windows Server 2008 are exactly the same as for any other OS.
http://docs.oracle.com/cd/E11882_01/backup.112/e10642/toc.htm
(And my own prejudice, developed over 30+ years in this industry and working with Oracle on Windows beginning with Oracle 7.3 on Windows 3.11, is that best practice is to stay as far away from Windows as possible.)
2. How to plan for DR on Windows
Again, as far as the Oracle database is concerned, this is exactly the same as for any other OS.
Backup everything. Get the backups off site. Perform regular DR drills. Document everything. Keep a copy of the documentation off site. (It doesn't do any good to have the DR manual located such that it gets destroyed in the same disaster that takes out your data center.) Have an off-site DR site.
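For the database piece, a minimal RMAN sketch of such a baseline (a generic illustration, not a site-specific policy; the 7-day recovery window is a placeholder to adapt):
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
RMAN> DELETE NOPROMPT OBSOLETE;
Pair whatever schedule you choose with copies shipped off site and, above all, regular test restores; an untested backup is not a backup.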
While it isn't in your lane as a DBA, you need to keep reminding management that DR is about more than just the database, and even more than just the data center itself. A physical disaster (ask the people in Oklahoma City about that ... over the years, multiple F5 tornadoes plus a truck bomb) raises all kinds of infrastructure and documentation issues. What about important records (like the DR manual) that exist only in someone's desk drawer? Where are the business units going to set up shop, and how are they going to connect to the restored database at the DR site?
I've seen DR manuals that got down to the detail of temporary housing for employees and how to get coffee at the DR site.
Regards
Mohammed Abdul Muqeet
Edited by: EdStevens on May 23, 2013 7:44 AM

Similar Messages

  • Best Practice for rebooting ISE Nodes?

    Hello Community,
    I administer an ISE installation with two nodes (I am not an ISE specialist; my job is just to manage the users/MAC addresses), but now I have to move my ISE nodes from one VMware cluster to another VMware cluster.
    (Both VMware environments are connected to our enterprise network, but are different environments. vMotion is not possible.)
    I would shut down ISE02, move it to our new VMware environment and start it again.
    Then I would do the same with our ISE01 node...
    Are there any best practices for doing this (shut down the application first, stop replication, etc.)?
    Can I really simply reboot an ISE node, or do I have to consider something before doing this? After doing this?
    Any tasks after reboot?
    Thank you for any answer!
    ISE01: Administration, Monitoring, Policy Service; PRI(A), SEC(M)
    ISE02: Administration, Monitoring, Policy Service; SEC(A), PRI(M)

    There is a lot to consider here.  If changing environments means changing IP addresses and IP scopes, then your policies, profiles, and dACLs would also have to change, among other things.  If this is the case, create a new ISE VM in the new environment using the built-in evaluation license and recreate the deployment from the old environment using the addressing scheme of the new environment.  Then spin up a new Secondary node and register it on the Primary.  Once this is done, you can re-host the license from your old environment onto your new environment.  You can use this tool to re-host:
    https://tools.cisco.com/SWIFT/LicensingUI/loadDemoLicensee?FormId=3999
    If IP Addressing is to remain the same, it gets simpler. 
    First, and always, perform a configuration and operational backup.
    If downtime is not an issue, or if you have a maintenance window of an hour or so: Simply shut down both nodes.  Transfer them to the New Environment and turn them on, Primary Node first, of course.
    If downtime is an issue, shut down the Secondary Node and transfer it to the New Environment.  Start the Secondary Node and when it is up, shut down the Primary Node.  Once services on the primary node have stopped, promote the Secondary Node to Primary Node.
    Transfer the OLD Primary Node to the New Environment and turn it on.  It should assume the role of Secondary Node.  If it does not, assign that role through the GUI.
    Remember, the correct way to shut down an ISE node is:
    application stop ise
    halt
    By using these commands, the risk of database corruption is greatly reduced (and remember to always back up first).
    Please Rate Helpful posts and mark this question as answered if, in fact, this does answer your question.  Otherwise, feel free to post follow-up questions.
    Charles Moreton

  • What is the best practice for setting the dirty flag of a page/view?

    For a page/view, normally there are two things to do for dirty data:
    1. When it's clean, the Save button is disabled; when it's dirty, the Save button is enabled.
    2. When it's dirty and the window is closed, a popup says "you have unsaved data, closing will lose the data".
    My thought is: it must be handled on the client side, because not every value change is auto-submitted. E.g., when you type the first letter of a string into an input box, the server side does not know it, but the Save button should be enabled immediately.
    Is it possible to capture all valueChange events in a page or a view on the client side?
    I'm not sure what the best practice for setting the dirty flag is. Is there a better solution? Does ADF provide a facility for this?

    import javax.faces.event.ActionEvent;

    // Reconstructed sketch: validate on save; Save is enabled only while dirty.
    public class PageBean {
        private boolean condition1;  // placeholder for "Check Condition 1"
        private boolean condition2;  // placeholder for "Check Condition 2"

        public void save(ActionEvent event) {
            if (isFormValid()) {
                // persist the changes and clear the dirty flag here
            }
        }

        private boolean isFormValid() {
            boolean valid = true;
            if (condition1) {
                valid = false;
                showErrorMessage1();
            }
            if (condition2) {
                valid = false;
                showErrorMessage1();
            }
            return valid;
        }

        private void showErrorMessage1() {
            // e.g. queue a FacesMessage; when the page is dirty and the window is
            // closed, warn: "you have unsaved data, close will lose the data"
        }
    }

  • Best Practice for connecting to an Ethernet-based device

    Hi,
    I have inherited a system where we have a cDAQ-9181 controlling a vehicle access barrier, with a LabVIEW application on a PC talking to it via Ethernet.
    (The application is very simple - press a button > send a value to the 9181 unit > opens the barrier.)
    All works fine most of the time.
    (We occasionally get network-related errors. The LabVIEW application sometimes thinks another PC has reserved the unit, or gives “error 89130 - device not available for routing”.)
    The users would now like to be able to easily run the application from a second PC (not at the same time), but this seems to be a problem. If I exit the application on PC “A” and run it on PC “B” it struggles to reserve the chassis and throws the “89130” error, and I have to restart the unit via MAX.
    While I’m a “veteran” control programmer, I’m new to LabVIEW, and would be very grateful for any pointers on “best practice” for talking to devices via Ethernet, or any specific suggestions for handling multiple PCs talking to a single device.
    Thank You.
    Tim.

    Hi Tim,
    Thank you for your post and welcome to the NI forums.
    There are lots of knowledgebase articles on our website and you should be able to find documentation for most of our hardware.
    There is a good troubleshooting guide for cDAQ Ethernet here (http://ae.natinst.com/public.nsf/web/searchinternal/e67b4e4749f378ff862577270059bd4b?OpenDocument) - it outlines the steps to take to ensure you have as stable a connection as possible. You may have already seen it, but the quick-start guide for your specific device may also be worth consulting for best practices. Are these helpful?
    As for using more than one PC - this shouldn't be too much of an issue. I would expect that the resource isn't being closed correctly - when you exit the App on PC 'A', how are you closing off the resource?
    Best regards,
    Eden S
    Applications Engineer
    National Instruments UK & Ireland

  • SAP Best Practices for Logistics Modules

    Dear Reader,
    Does anyone know how to find SAP Best Practices for NetWeaver 2004s logistics scenarios (SD, MM)?
    The old NetWeaver 2004 scenarios are less helpful, especially in inventory management.
    Thanks

    Check this link, this is the latest I guess.
    [SAP Best Practices|http://help.sap.com/bp_bw370/html/index.htm]
    Cheers,
    Neel.

  • Best practices for replication

    Hi,
    I want to know the best practice for the duration of database replication between two Cisco ACS servers.
    Regards,
    Atif.

    Hi Atif,
    The replication time interval should be kept long (i.e., replicate infrequently).
    Reason: every time you replicate the data, the ACS services must restart, so doing this frequently may affect your production environment.
    However, if you want to replicate internal users' passwords, there is an option to replicate password changes right away without a full replication.  You can enable this option under System Configuration -> Local Password Management.  With this enabled you could potentially set the replications to a larger interval.
    It also depends on how often you make changes in your ACS. If the rate of change is typical, I would say set it to every Sunday at 12:00 PM.
    This is how replication happens:
    The primary ACS stops its authentication and creates a copy of the ACS internal database components that it is configured to replicate. During this step, if AAA clients are configured properly, those that usually use the primary ACS fail over to another ACS. The primary ACS then resumes its authentication service.
    After the preceding events on the primary ACS, the database replication process continues on the secondary ACS. The secondary ACS stops its authentication service and replaces its database components with the database components that it received from the primary ACS. During this step, if AAA clients are configured properly, those that usually use the secondary ACS fail over to another ACS. The secondary ACS then resumes its authentication service.
    HTH
    Regards,
    JK
    Please rate helpful posts.

  • Best Practices for doing Master Scheduling using SNP

    Hello Gurus,
    Can you please suggest the best practices for doing master scheduling using SNP: which engine to use, what that would mean, etc.
    Regards,
    Nick

    APC Back-UPS XS 1300.  $169.99 at Best Buy.
    Our power outages here are usually only a few seconds; this should give my server about 20 or 25 minutes run-time.
    I'm setting up the PowerChute software now to shut down the computer when 5 minutes of power is left.  The load with the monitor sleeping is 171 watts.
    This has surge protection and other nice features as well.
    -Noel

  • ECC 6.0 upgrade to Best Practices for chemical industry implementation

    Dear All.,
    Please tell me which EHP3 components must be updated in SAP ERP ECC 6.0. I have already updated SAP_APPL 60304.
    Tell me which EHP3 components are required next, before I start Best Practices in SAINT.
    Thanks & Regards,
    Manju

    Dear Sir,
    I have installed SAP ECC 6.0 for a chemical industry client, and I am now planning to implement Best Practices to reduce implementation time.
    1) As per the SAP quick guide I have downloaded Best Practices for the chemical industry (IS) and implemented it through SAINT, and the prerequisites were maintained as per SAP Notes 774615, 1301301, and 1064635.
    2) The above-mentioned notes specify a minimum of Stack 11, with SAPKA=18, SAPKB=18, SAPBW=20, PI_BASIS-2005_1_700=20, and SAP_AP=15, to update EHP3 for SAP_APPL 603.
    3) After updating to all the above-mentioned patch levels, I implemented Best Practices in SAINT, and that was successful.
    But the problem still persists in the activation of BP via tcode /n/smb/bbi: only 70% has completed, and activation stopped with ABAP program error ASSIGN_SUBSTRING_NOT_ALLOWED; the relevant note is 1295083.
    Please, can anyone tell me: is it required to update all the EHP3 components, since I have maintained only SAP_APPL 60304?
    Which EHP3 components should be updated to get a smooth activation of BP for the chemical industry?

  • What's the best practice for performance?

    Hi all,
    In my outline I have 15 dimensions, and for one dimension I have 39,000 members. So what is the best practice for performance? If we have more dimensions and more members, is there any problem for performance?
    So what is the best practice for dimensions and members?
    Thanks in advance!

    If it is an ASO application, it is not a problem.
    If it is a BSO application, it will surely hit performance.
    More dimensions will create performance issues.
    If the said 39,000-member dimension is a flat dimension, that will be another issue.
    If BSO is the obvious choice, try to split the outline into two models.
    Create intermediate groupings for the flat dimension.

  • How to install Best Practices for HCM

    Hi all:
        Could you please tell me how to install Best Practices for HCM, as there is no autorun file.
        Thank you very much!!!
    Best regards
    Frank

    Hi,
    Did you check these links:
    http://help.sap.com/content/bestpractices/overview/index.htm
    http://help.sap.com/bp_hcmv1600/HCM_US/HTML/index.htm
    Regards
    CSM Reddy

  • Kernel: PANIC! -- best practice for backup and recovery when modifying system?

    I installed NVidia drivers on my OL6.6 system at home and something went bad with one of the libraries. On reboot, the kernel would panic and I couldn't get back into the system to fix anything. I ended up re-installing the OS to recover my system.
    What would be some best practices for backing up the system when making a change and then recovering if this happens again?
    Would LVM snapshots be a good option?  Can I restore a snapshot from a rescue boot?
    EX: File system snapshots with LVM | Ars Technica -- scroll down to the section discussing LVM.
    Any pointers to documentation would be welcome as well.  I'm just not sure what to do to revert the kernel or the system when installing something goes bad like this.
    Thanks for your attention.

    There is often a common misconception: A snapshot is not a backup. A snapshot and the original it was taken from initially share the same data blocks. LVM snapshot is a general purpose solution which can be used, for example, to quickly create a snapshot prior to a system upgrade, then if you are satisfied with the result, you would delete the snapshot.
    The advantage of a snapshot is that it can be used for a live filesystem or volume while changes are written to the snapshot volume. Hence it's called "copy on write" (COW), or copy on change if you want. This is necessary for system integrity, to have a consistent status of all data at a certain point in time while allowing changes to happen, for example to perform a filesystem backup. A snapshot is no substitute for disaster recovery in case you lose your storage media. A snapshot only takes seconds, and initially does not copy or back up any data, unless data changes. It is therefore important to delete the snapshot if it is no longer required, in order to prevent duplication of data and restore filesystem performance.
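    To make that concrete, here is a minimal sketch of the snapshot workflow for a driver upgrade (the volume group vg0, volume root, and snapshot size are made-up placeholders; the volume group needs enough free extents to absorb the changes):
    # take a snapshot before the risky change
    lvcreate --size 5G --snapshot --name root_pre_nvidia /dev/vg0/root
    # ... install the driver, reboot, test ...
    # if all is well, drop the snapshot to reclaim space and performance
    lvremove /dev/vg0/root_pre_nvidia
    # if the system is broken, merge the snapshot back to revert the volume;
    # for an in-use origin the merge completes on the next activation,
    # e.g. after booting rescue media and activating the volume group
    lvconvert --merge /dev/vg0/root_pre_nvidia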
    LVM was never a great thing under Linux and can cause serious I/O performance bottlenecks. If snapshot or COW technology suits your purpose, I suggest you look into Btrfs, which is a modern filesystem built into the latest Oracle UEK kernel. Btrfs employs the idea of subvolumes and is much more efficient than LVM, because it can operate on files or directories while LVM operates on the whole logical volume.
    Keep in mind, however, that you cannot use LVM or Btrfs with the boot partition, because the Grub boot loader, which loads the Linux kernel, cannot deal with LVM or Btrfs before loading the Linux kernel (catch-22).
    I think the following is an interesting and fun to read introduction explaining basic concepts:
    http://events.linuxfoundation.org/sites/events/files/slides/Btrfs_1.pdf

  • Understanding report need backup with a recovery window

    Hello,
    I have a big database which is backed up over several days, at night. The retention policy is "recovery window of 3 days". I've tried to analyze the results of
    report need backup;
    But I noticed that this command returns just the list of datafiles whose latest backup is older than 3 days. I've tried to read the docs again:
    >
    Reports data files for which there are not sufficient backups to satisfy a recovery window-based retention policy for the specified number of days, that is, data files without sufficient backups for point-in-time recovery to any point back to the time SYSDATE - integer.
    >
    Please correct me if I'm wrong, but I understand that the ability to recover to any point in time within the last 3 days requires a backup taken earlier than 3 days ago, plus the archivelogs from the time that backup was taken up to the recovery PIT. Am I right? If so, how should I interpret the results of report need backup? I can't find a detailed explanation of this in the docs :(
    Thanks in advance,
    Constantine

    You are correct in your understanding.
    This report is about the need for datafile backups; it does not take archivelog backups into account.
    It does not mean that listed files are in an unrecoverable situation.
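    For illustration, a minimal sketch of the relevant RMAN commands (the 3-day window matches the retention policy discussed above):
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 3 DAYS;
    REPORT NEED BACKUP;                # datafiles lacking backups that satisfy the policy
    REPORT OBSOLETE;                   # backups the policy no longer requires
    RESTORE DATABASE PREVIEW SUMMARY;  # lists the backups, including archivelogs, a restore would use
    RESTORE DATABASE PREVIEW is one way to check recoverability end to end, since REPORT NEED BACKUP alone ignores archivelogs.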
    Regards.

  • Best practice for SAP users who leave the company

    Hi
    Could anyone recommend a best practice document, or give advice on how to deal with SAP user IDs when employees/contractors/consultants leave? I am the basis admin just starting an SAP implementation, and we have no dedicated authorisation team at the moment, so I have been asked to look into this:
    Currently we set the validity date in SU01 to the termination date.
    We check there are no background jobs scheduled under that user ID; if there are, we change the job owner to a valid user (we try to run all background jobs under an admin account).
    We do not delete the user as, from an audit point of view, I believe it restricts the information you can report on, and there are implications for change documents etc., so it is best to lock the ID with validity dates.
    Can anyone advise further?
    We are running SAP ECC 5.0 on Windows 2003 64 Bit/MS SQL 2000.
    Thanks for any help.

    Hi,
    Different people will tell you different versions of what they believe is best practice, but in my opinion you are already doing reasonably well.
    What I prefer is
    1. Lock ID & set validity date.
    2. Assign the user to a user group LEAVER or EXPIRED or something similar (helps with reporting out of SUIM/S_BCE* reports).
    3. Delete role assignment (should you need it, the role assignment will be in the change history docs anyway).
    4. Check background jobs & act accordingly.
    For ease of getting info I prefer not to delete the ID, though plenty of people do.

  • Best practices for installing BPEL

    We are planning to start using BPEL, but we are having problems finding any documents on best practices for installing the product.
    1. Should we install on Linux or Windows?
    2. In production we will need fault tolerance; can BPEL handle this well?
    3. Does the BPEL monitoring work well?
    We want to install this right the first time, rather than just install from the CDs and then try to fix wrong design decisions later; we have been burnt by doing it that way before.
    Regards
    Sean Bell

    Hi,
    1) Install it on the platform you know best, because
    2) you can only get fault tolerance or high availability on a platform you know how to administer.
    For HA, look at
    http://www.oracle.com/technology/products/ias/bpel/pdf/bpel-admin-webinar.pdf
    3) BAM is the word you are searching for:
    http://www.oracle.com/technology/products/integration/bam/index.html

  • Best practice for managing a Windows 7 deployment with both 32-bit and 64-bit?

    What is the best practice for creating and organizing deployment shares in MDT for a Windows 7 deployment that has mostly 32-bit computers, but a few 64-bit computers as well? Is it better to create a single deployment share for Windows 7 and include both versions, or is it better to create two separate deployment shares? And what about 32-bit and 64-bit versions of applications?
    I'm currently leaning towards creating two separate deployment shares, just so that I don't have to keep typing (x86) and (x64) for every application I import, as well as making it easier when choosing applications in the Lite Touch installation. But I know each deployment share has the option to create both an x86 and x64 boot image, so that's why I am confused.

    Supporting two task sequences is way easier than supporting two shares. Two shares means two boot media, or maintaining a method of directing the user to one or the other. Everything needs to be imported or configured twice, not to mention doubling storage space. MDT is designed to have multiple task sequences; why wouldn't you use them?
    Supporting multiple task sequences can be a pain, but it's not bad once you get a system. Supporting app installs intelligently is a large part of that. We have one folder per app install, with a wrapper vbscript that handles OS detection, as sketched below. If there are separate binaries, they are placed in x86 and x64 subfolders. Everything runs from one folder via the same command, "cscript install.vbs". So: import once, assign once, and forget it. It's the same install package we use for Altiris, and we'll be using a PowerShell version of it when we fully migrate to SCCM.
    Others handle x86 and x64 apps separately, and use the MDT app details to select which platform the app is meant for. I've done that, but we have a template for the vbscript wrapper and it's a standard process, so I believe ours is easier. YMMV.
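    For illustration, a minimal sketch of such a wrapper (the x86/x64 subfolder layout follows the description above; the setup.exe name and its /quiet switch are made-up placeholders):
    ' install.vbs - run the x86 or x64 binaries depending on the OS architecture
    Option Explicit
    Dim shell, wow64, subDir
    Set shell = CreateObject("WScript.Shell")
    ' PROCESSOR_ARCHITEW6432 only exists when a 32-bit process runs on 64-bit Windows;
    ' ExpandEnvironmentStrings returns the raw string when the variable is undefined
    wow64 = shell.ExpandEnvironmentStrings("%PROCESSOR_ARCHITEW6432%")
    If LCase(shell.ExpandEnvironmentStrings("%PROCESSOR_ARCHITECTURE%")) = "amd64" Or wow64 <> "%PROCESSOR_ARCHITEW6432%" Then
        subDir = "x64"
    Else
        subDir = "x86"
    End If
    ' launch the platform-specific installer and wait for it to finish
    shell.Run """" & subDir & "\setup.exe"" /quiet", 1, True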
    Once you get your apps into MDT, create bundles: core build bundle, core deploy bundle, laptop deploy bundle, etcetera. Now you don't have to assign twenty apps to both task sequences, just one bundle. When you replace one app in the bundle, all TSes are updated automatically. It's kind of the same mentality as Active Directory: users, groups and resources = apps, bundles and task sequences.
    If you have separate build and deploy shares in your lab, great. If not, separate your apps into build and deploy folders in your lab MDT share. Use a selection profile to upload only your deploy side to production. In fact I separate everything (except drivers) into build and deploy folders on my lab server. Don't mix build and deploy, and don't mix Lab/QA and production. I also keep a "Retired" folder. When I replace an app, TS, OS, etcetera, I move it to the retired folder and append "RETIRED - " to the front of it so I can instantly spot it if it happens to show up somewhere it shouldn't.
    To me, the biggest "weakness" of MDT is its flexibility. There are literally a dozen different ways to do everything, and there are no fences to keep you on the path. If you don't create some sort of organization for yourself, it's very easy to get lost as things get complicated. Tossing everything into one giant bucket will have you pulling your hair out.
