RMAN Best Practices

Hi RMAN guys,
I'm thinking about the BEST strategy for making a full RMAN backup WITHOUT the catalog.
I would like anyone to comment on my experiences:
1. I had a recovery case (more than 10 months ago, and I don't remember the details, sorry) where I lost nearly everything.
In that case I needed the DBID for the recovery, so I think it's good to know which DBID you have:
=>SELECT dbid,name FROM v$database;
2. RMAN needs the database to be at least in NOMOUNT, so you should have an spfile copy outside of your RMAN backup:
=> sql "CREATE PFILE=''c:\bak\init.ora'' FROM spfile";
3. A checkpoint at the beginning of your backup is always good:
=> sql "Alter system checkpoint";
Or would you say let's make a log switch instead?
=> sql "alter system switch logfile";
4. If you lose everything INCLUDING all controlfile copies, you should know which datafiles/controlfiles/redo logs you had:
=>
SELECT file_name,tablespace_name FROM dba_data_files;
SELECT member FROM v$logfile;
SELECT name FROM v$controlfile;
Then you could use the dbms_backup_restore.restoreDataFileTo package procedure, as sketched below.
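A minimal sketch of that last-resort approach, assuming the instance is started NOMOUNT; the file number, target path and backup piece name are placeholders (dbms_backup_restore is an internal package, so test this before relying on it):
=>
DECLARE
  devtype VARCHAR2(256);
  done    BOOLEAN;
BEGIN
  -- allocate a disk device for the restore conversation
  devtype := sys.dbms_backup_restore.deviceAllocate(type => NULL, ident => 't1');
  sys.dbms_backup_restore.restoreSetDatafile;
  -- restore datafile 1 to a path of your choice
  sys.dbms_backup_restore.restoreDataFileTo(dfnumber => 1, toname => 'c:\oradata\system01.dbf');
  -- read the datafile out of a known backup piece
  sys.dbms_backup_restore.restoreBackupPiece(done => done, handle => 'c:\bak\backup_piece1.bck', params => NULL);
  sys.dbms_backup_restore.deviceDeallocate;
END;
/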
5. Make a full backup of the database.
But which special parameters are important?
+ MAXCORRUPT?
+ FORMAT '....' or the default?
+ PLUS ARCHIVELOG DELETE INPUT?
+ ???
5a. Also make copies of:
listener.ora, tnsnames.ora, sqlnet.ora
pwd<sid>.ora
Windows: the Oracle registry tree
UNIX: oratab
+ ???
6. Validate the backup
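As far as I know you can let RMAN read the backup pieces without actually restoring anything, e.g.:
=> RESTORE DATABASE VALIDATE;
=> RESTORE ARCHIVELOG ALL VALIDATE;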
7. DELETE OBSOLETE backups.
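Something like this should work (the recovery window is just an example value):
=> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
=> REPORT OBSOLETE;
=> DELETE NOPROMPT OBSOLETE;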
8. Send yourself an email with the RMAN log, e.g. via utl_mail.
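A rough sketch, assuming UTL_MAIL is installed (utlmail.sql/prvtmail.plb) and smtp_out_server is set; the addresses and message text are placeholders:
=>
BEGIN
  UTL_MAIL.SEND(
    sender     => '[email protected]',     -- placeholder address
    recipients => '[email protected]',        -- placeholder address
    subject    => 'RMAN backup log ' || TO_CHAR(SYSDATE, 'YYYY-MM-DD'),
    message    => '... contents of the RMAN log ...');
END;
/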
9.x -Y.x ???
Any comments?
Thanks
Marco

1. OK
2. OK: spfile or pfile
3. OK, but not mandatory; see the thread: Re: alter system checkpoint necessary?
4. OK. Or use the CATALOG command to 'register' them in the control file.
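For example (10g and newer; using the backup directory from my script below):
RMAN> CATALOG START WITH '/db/ARON/BACKUP/RMAN/';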
5. All parameters can be important. I suggest noting the FORMAT ... so that you know where the backups are. With the following script you back up everything possible ;-)
RMAN> run {
allocate channel ch1 type disk format '/db/ARON/BACKUP/RMAN/backup_%d_%t_%s_%p_%U.bck';
allocate channel ch2 type disk format '/db/ARON/BACKUP/RMAN/backup_%d_%t_%s_%p_%U.bck';
backup
incremental level 0
database
plus archivelog delete input;
backup current controlfile;
backup spfile;
release channel ch1;
release channel ch2;
}
5a. That is always good. Don't forget ORACLE_HOME.
6. Always good.
7. To save space that is good.
Bye, Aron

Similar Messages

  • Setting up RMAN -- Best Practice

    On the readings I've done regarding RMAN, I've seen recommendations that RMAN should have a recovery catalog configured, even though that's optional. Further, I've seen recommendations that RMAN's recovery catalog should have its own database.
    I've got one machine that I want to run RMAN on, with the backups residing on that machine's disks. I then plan on archiving the backup files that RMAN produces to another set of cheaper, though less efficient, disks.
    What I thought I'd do in configuring an RMAN recovery catalog is create a separate tablespace for it, and store the RMAN schema there instead of creating a separate database on the same machine.
    It seems to me that unless you have an environment where you have one RMAN backing up multiple oracle instances, a separate RMAN database isn't justified.
    Comments?
    === Al

    hi,
    > What I thought I'd do in configuring an RMAN recovery catalog is create a separate tablespace for it, and store the RMAN schema there instead of creating a separate database on the same machine.
    However, if you just create a separate tablespace and you then need to recover that database, you will have major problems. If you are going to use the recovery catalog option, then create a separate database, ideally on a separate machine (a minimal sketch follows this reply).
    > It seems to me that unless you have an environment where you have one RMAN backing up multiple oracle instances, a separate RMAN database isn't justified.
    If you have just a couple of databases to manage, then just use RMAN with the control file as its repository.
    regards
    Alan
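    A minimal sketch of setting up such a catalog database (tablespace, user and password are only examples):
    SQL> CREATE TABLESPACE rcat_ts DATAFILE '/u01/oradata/rcat/rcat_ts01.dbf' SIZE 100M;
    SQL> CREATE USER rcat IDENTIFIED BY rcat DEFAULT TABLESPACE rcat_ts QUOTA UNLIMITED ON rcat_ts;
    SQL> GRANT recovery_catalog_owner TO rcat;
    $ rman TARGET / CATALOG rcat/rcat@rcatdb
    RMAN> CREATE CATALOG;
    RMAN> REGISTER DATABASE;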

  • Best practice for database migration in 11g

    Hello,
    Database migration is required due to an OS change. I have two database instances, say A and B, on the old server, where RDBMS_VERSION is 11.1.0.7.0. They need to be migrated to a new server where Oracle has been installed at version 11.2.0.2.0.
    Since all data and objects need to be migrated to the new server, I want to know what the best practice is and how to do it. Thanks in advance for your guidance.
    Thanks and Regards,
    Prosenjit

    Hi Prosenjit,
    you have some options.
    1. RMAN Restore: you can restore your database via RMAN to the new host, and then upgrade it.
        Please follow the instructions in MOS Note: RMAN Restore of Backups as Part of a Database Upgrade (Doc ID 790559.1)
    2. Data Guard: check the MOS Note: Mixed Oracle Version support with Data Guard Redo Transport Services (Doc ID 785347.1)
    3. Full Export / Import (DataPump)
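    A rough outline of option 1 as I understand that note (paths and file names are placeholders; double-check every step against Doc ID 790559.1):
    -- with the 11.2.0.2 binaries on the new server and a copied pfile/spfile:
    RMAN> STARTUP NOMOUNT;
    RMAN> RESTORE CONTROLFILE FROM '/backup/ctl_autobackup.bck';
    RMAN> ALTER DATABASE MOUNT;
    RMAN> RESTORE DATABASE;
    RMAN> RECOVER DATABASE;
    SQL> ALTER DATABASE OPEN RESETLOGS UPGRADE;
    SQL> @?/rdbms/admin/catupgrd.sql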
    Borys

  • Backup validation best practice  11GR2 on Windows

    Hi all
    I am just reading through some guides on checking for various types of corruption in my database. It seems that having DB_BLOCK_CHECKSUM set to TYPICAL takes care of much of the physical corruption and will alert you when any has occurred. Furthermore, RMAN by default does its own physical block checking. Logical corruption, on the other hand, does not seem to be checked automatically unless CHECK LOGICAL is added to the RMAN command. There are also various VALIDATE commands that could be run on various objects.
    My question is really: what is best practice for checking for block corruption? Do people even bother regularly checking this and just allow Oracle to manage itself? Is it best practice to have the CHECK LOGICAL clause in RMAN (even though it's not added by default when configuring backup jobs through OEM), or do people schedule jobs and output reports from a VALIDATE command on a regular basis?
    Many thanks

    Using the CHECK LOGICAL clause is considered best practice, at least by Oracle Support, according to
    NOTE 388422.1 - Top 10 Backup and Recovery best practices
    (referenced in http://blogs.oracle.com/db/entry/master_note_for_oracle_recovery_manager_rman_doc_id_11164841).
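    For reference, the kind of command that note recommends (my reading; shown here as a disk-based check of the whole database plus archived logs):
    RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE ARCHIVELOG ALL;
    Any corruption found is recorded in V$DATABASE_BLOCK_CORRUPTION rather than failing the job.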

  • Best practice for E-business suite 11i or R12 Application backup

    Hi,
    I'm taking RMAN backups of the database. What would be the best practice for an E-Business Suite 11i or R12 application backup?
    Right now I'm taking file-level backups. Please suggest alternatives, if any.
    Thanks

    Please review the following thread, it should be helpful.
    Reommended backup and recovery startegy for EBS

  • ASM on SAN datafile size best practice for performance?

    Is there a 'best practice' for datafile size, for performance?
    In our current production we have 25GB datafiles for all of our tablespaces in ASM on 10gR1, but I was wondering what the difference would be if I used, say, 50GB datafiles. Is 25GB a kind of midpoint so the data can be striped across multiple datafiles for better performance?

    We will be using Red Hat Linux AS 4 update u on 64-bit AMD Opterons. The complete database will be on ASM... not the binaries, though. All of the datafiles we currently have in our production system are 25GB files. We will be using RMAN-->Veritas tape backup and RMAN-->disk backup. I just didn't know if anybody out there was using smallfile tablespaces with 50GB datafiles or not. I can see that one of our tablespaces will probably be close to 4TB.

  • What is the best practice to perform DB Backup on Sun Cluster using OSB

    I have a query on OSB 10.4.
    I want to configure OSB 10.4 on a 2-node Sun Cluster where the Oracle database is running.
    When I'm performing a DB backup, the backup job should not fail if node1 fails. What is the best practice to achieve this?

    Hi,
    Each host that participates in an OSB administrative domain must also have some pre-configured way to resolve a host name to an IP address; use DNS, NIS, etc. for this.
    Specify the cluster IP in OSB, so that OSB always looks for the cluster IP instead of the physical IPs of the individual nodes.
    Explanation:
    Whether it is a 2-node or a 4-node cluster, when the cluster software is installed on the nodes we configure a cluster IP, so that when one node fails the cluster IP automatically moves to another node.
    We have to specify this cluster IP whether it is an RMAN backup or an application JDBC connection; failing over to the second/another node is the job of the cluster IP. So wherever we have a cluster configuration, specify the CLUSTER IP in all the failover-sensitive places, as in the sketch below.
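    A hypothetical tnsnames.ora entry using the cluster's virtual address (host name, port and service name are made up):
    ORCL =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = cluster-vip.example.com)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = orcl))
      )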
    Hope it helps..
    Thanks
    LaserSoft

  • Best practices for development / production environments

    Our current scenario:
    We have one production database server containing the APEX development install, plus all production data.
    We have one development server that is cloned nightly (via RMAN duplicate) from production. It therefore also contains a full APEX development environment, and all our production data, albeit 1 day old.
    Our desired scenario:
    We want to convert the production database to a runtime only environment.
    We want to be able to develop in the test environment, but since this is an RMAN-duplicated database, every night the runtime-only APEX installation and the production versions of the apps will overlay it. However, we still want to have up-to-date data against which to develop.
    Questions: What is best practice for this sort of thing? We've considered a couple options:
    1.) Find a way to clone the database (RMAN or something else), that will leave the existing APEX environment intact? If such is doable, we can modify our nightly refresh procedure to refresh the data, but not APEX.
    2.) Move apex (in both prod and dev environments) to a separate database containing only APEX, and use DBLINKS to point to the data in both cases. The nightly refresh would only refresh the data and the APEX database would be unaffected. This would require rewriting all apps to use DBLINKS though, as well as requiring a change to the code when moving to production (i.e. modify the DBLINK to the production value)
    3.) Require the developers to export their apps when done for the day, and reimport the following morning. This would leave the RMAN duplication process unchanged, and would add a manual step which the developers loathe.
    We basically have two mutually exclusive requirements - refresh the database nightly for the sake of fresh data, but don't refresh the database ever for the sake of the APEX environment.
    Again, any suggestions on best practices would be helpful.
    Thanks,
    Bill Johnson

    Bill,
    To clarify, you do have the ability to export/import, happily, at the application level. The issue is that if you have an application that consists of more than a couple of pages, you will find yourself in a situation where changes to page 3 are tested and ready, but changes to pages 2, 5 and 6 are still in various stages of development. Suppose you need to get the change for page 5 in to resolve a critical production issue: how do you do that without also shipping pages 2 and 6 in their current state, if you have to move the application all at once? The point is that you absolutely are going to need to version control at the page level, not at the application level.
    Moreover, the only supported way of exporting items is via the GUI. While practically everyone doing serious APEX development has gone on to either PL/SQL or utility hacks, Oracle still will not release a supported method for doing this. I have no idea why that is... maybe one of the developers would care to comment on the matter. Obviously, if you want to automate, you will have to accept this caveat.
    As for which backend source control tool you use, the short answer is that it really doesn't matter. As far as the VC system is concerned, your APEX exports are simply files. Some versioning systems allow promotion of code through various SDLC stages. I am not sure about Git in particular but, if it doesn't support this directly, you could always mimic the behavior with multiple repositories. That is, create a development repository that you automatically update via exports every night. Whenever particular changes are promoted to production, you can at that time export from the development repository and import into the production one. You could, of course, create as many of these "stops" as necessary to mirror your shop's SDLC stages, e.g. dev, qa, integration, staging, production, etc.
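    As an illustration of the utility route (the APEXExport class ships under apex/utilities; the exact flags vary by APEX version, and the connection details here are placeholders):
    $ java -cp $ORACLE_HOME/jdbc/lib/ojdbc6.jar:. oracle.apex.APEXExport \
        -db dbhost:1521:orcl -user apex_dev -password secret -applicationid 100
    The resulting fNNN.sql files are plain text and can be committed to Git like any other source.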
    -Joe
    Edited by: Joe Upshaw on Feb 5, 2013 10:31 AM

  • E-business backup best practices

    What are E-Business Suite backup best practices, tools, or techniques?
    For example, what we do now is take a copy of the Oracle folder on the D partition every two days,
    but it takes a long time, and I believe there is a better way.
    We are on Windows Server 2003 and E-Business Suite 11i.

    user11969921 wrote:
    > What are E-Business Suite backup best practices, tools, or techniques?
    > For example, what we do now is take a copy of the Oracle folder on the D partition every two days,
    > but it takes a long time, and I believe there is a better way.
    > We are on Windows Server 2003 and E-Business Suite 11i.
    Please see previous threads for the same topic/discussion -- https://forums.oracle.com/search.jspa?view=content&resultTypes=&dateRange=all&q=backup+EBS&rankBy=relevance
    Please also see RMAN manuals (incremental backup, hot backup, ..etc) -- http://www.oracle.com/pls/db112/portal.portal_db?selected=4&frame=#backup_and_recovery
    Thanks,
    Hussein

  • Best practices for an oracle application upgrade

    Hello,
    We have an enterprise application deployed on Oracle Weblogic and connecting to an Oracle database (11g).
    The archive is versioned and we are using Weblogic's feature to upgrade to new versions and retire old versions.
    In a case of emergency when we need to rollback an upgrade, the job is really easy on Weblogic but not the same on Oracle DB.
    For most of our releases, the release package is an ear plus some database scripts.
    Releases are deployed with minimum downtime, so while we are releasing our clients are still writing to the DB.
    In case a rollback is needed, we need to make sure the changes we made to the DB structure (views, stored procedures, tables...) are reverted, but data inserted by clients stays intact.
    Correct me if I am wrong, but Flashback and RMAN TSPITR are not good options here.
    What other people usually do in similar cases? What are best practices and deployment plans for our case?
    Guides and direction are welcomed.
    Thanks!

    Hi Magnus
    I guess you have to install again to ensure there are no problems. BP installation also involves ensuring correct SP levels (they cannot be higher) for all software components.
    Best regards
    Ramki

  • Best practices for deploying EMGrid Control

    Can I use one DB for the OEM and RMAN repositories? I'm looking for best practices for deploying EM Grid Control in our environment. I have experience working with EM Grid Control and it was very slow; how can I make it fast? I enjoy the speed of EM DB Control...

    DBA2008 wrote:
    > Is this a good idea, to put the RMAN recovery catalog & OID schema in the OEM repository DB? I am thinking of just consolidating all these schemas in one DB.
    Unless you are really starved for resources, I would not recommend storing the OID and OEM repositories in the same database. Both of these repositories support different products, and you risk creating unnecessary dependencies when patching or upgrading. As a completely fictitious example: what if your OID installation has a critical issue that requires a repository database upgrade to version 10.2.0.6, and the Grid Control repository database is only certified for version 10.2.0.5?
    Regards,
    John P.
    http://only4left.jpiwowar.com

  • Request info on Archive log mode Best Practices

    Hi,
    Could anyone, from their personal experience, share with me the best practices for maintaining archiving on any version of Oracle? Please tell me:
    1) Whether to place archive logs and redo log files on the same disks?
    2) How many LGWR processes to use?
    3) Checkpoint frequency?
    4) How to maintain the speed of a server run in ARCHIVELOG mode?
    5) Errors to look for?
    Thanks,

    1. Use a separate mount point for archive logs, like /archv.
    2. Start with 1 and check the performance.
    3. This depends on the redo log file size. Size your redo log files so that at most 5-8 log switches happen per hour; try to keep it below 5 log switches per hour.
    4. Check the redo log file size.
    5. Watch the space on the archive log mount point. Back up the archive logs with RMAN and delete the backed-up archive logs from the archive destination.
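    For example, one way to do point 5:
    RMAN> BACKUP ARCHIVELOG ALL DELETE INPUT;
    or, on 11g, let RMAN enforce it with a deletion policy:
    RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DEVICE TYPE DISK;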
    Regards
    Asif Kabir

  • Database Administration - Best Practices

    Hello Gurus,
    I would like to know various best practices for managing and administering Oracle databases. To give you all an example of what I am thinking about: if you joined a new company and wanted to see whether all the databases conform to some kind of standard/best practices, what would you look for? For instance: are the control files multiplexed, is there more than one member in each redo log group, is the temp tablespace using a TEMPFILE, or otherwise... something of that nature.
    Do you guys have some thing in place which you use on a regular basis. If yes, I would like to get your thoughts and insights on this.
    Appreciate your time and help with this.
    Thanks
    SS

    I have a template that I use to gather preliminary information so that I can at least get a glimmer of what is going on. I have posted the text below... it looks better as a spreadsheet.
    System Name:
    System Description:
    Contacts (Name / Phone / Pager):
      System Administrator:
      Security Administrator:
      Backup Administrator:
    --- Below this line, filled out for each server in the system ---
    Server Name:
    Description (Application, Database, Infrastructure, ...):
    ORACLE version/patch level:          CSI:
    Accounts (Login / Next Pwd Exp):
      Server Login:
      Application Schema Owner:
      SYS:
      SYSTEM:
    Locations:
      ORACLE_HOME:
      ORACLE_BASE:
      Oracle User Home:
      Oracle SQL scripts:
      Oracle RMAN/backup scripts:
      Oracle BIN scripts:
      Oracle backup logs:
      Oracle audit logs:
      Oracle backup storage:
      Control File 1:
      Control File 2:
      Control File 3:
      Archive Log Destination 1:
      Archive Log Destination 2:
      Datafiles Base Directory:
    Backup schedule (Backup Type / Day / Time / Est. Time to Complete / Approx. Size):
      archive log:
      full backup:
      incremental backup:
    As for "best" practices, well, I think you know the basics from your posting, but a lot of it will also depend on the individual system and how it is integrated overall.
    Some thoughts I have for best practices:
    Backups ---
    1) Nightly if possible
    2) Tapes stored off site
    3) Archives backed up throughout the day
    4) To disk, then to tape, and leave the backup on disk until the next backup (see the sketch below)
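    For point 4, RMAN can do the disk-then-tape staging itself; a sketch (the SBT channel assumes your media manager is already configured):
    RMAN> BACKUP DEVICE TYPE DISK DATABASE PLUS ARCHIVELOG;
    RMAN> BACKUP DEVICE TYPE SBT BACKUPSET ALL;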
    Datafiles ---
    1) Depending on hardware used.
    a) separate datafiles from indexes
    b) separate high-I/O datafiles/indexes on dedicated disks/LUNs/trays
    2) file names representative of usage (similar to its tablespace name)
    3) Keep them of reasonable size < 2 GB (again system architecture dependent)
    Security ---
    At least meet DOD - DISA standards where/when possible
    http://iase.disa.mil/stigs/stig/database-stig-v7r2.pdf
    Hope that gives you a start
    Regards
    tim

  • Recovery catalog best practice

    Hi there,
    I want to ask you about your recovery catalog best practices:
    should it be a separate server, how should I back it up, should I have a standby recovery catalog, etc.?
    We have a DC and an RDC and about 20 databases; we do backups to tape and we use a recovery catalog (in the DC, 11.2.0.3) on a separate LPAR on AIX 7.1.
    I mostly wonder how to protect it.
    regards!

    For the recovery catalog database you also do an RMAN backup and put it on tape or disk (it is a small database), so that at the time of a server failure you can get all the information back. Also make a schedule to regularly try restoring the recovery catalog backup on a different server, to check its credibility.
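    For example (the schema and file names are placeholders):
    RMAN> BACKUP DATABASE PLUS ARCHIVELOG;  -- ordinary backup of the catalog database, with controlfile autobackup ON
    $ expdp system/... DIRECTORY=DATA_PUMP_DIR DUMPFILE=rcat.dmp SCHEMAS=RCAT
    The Data Pump export gives you a second, logical copy of the catalog schema that you can import on any other server.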

  • Logical level in Fact tables - best practice

    Hi all,
    I am currently working on a complex OBIEE project/solution where I am going straight to the production tables, so the fact (and dimension) tables are pretty complex, since I am using multiple sources in the logical tables to increase performance. Anyway, what I often struggle with is the Logical Levels (in the Content tab), where the level of each dimension has to be set. In a star schema (one-to-many) this is pretty straightforward and easy to set up, but when the Business Model (and physical model) gets more complex I sometimes struggle with the aggregates, to get them to work/appear with different dimensions. (Using the menu "More" - "Get Levels" does not always give the best solution... far from it.) I have some combinations of left and right outer joins as well, making it even more complicated for the BI Server.
    For instance, I have about 10-12 different dimensions: should all of them always be connected to each fact table, either on the Detail or the Total level? I can see the use of the logical levels when using aggregate fact tables (on quarter, month, etc.), but is it better just to skip the logical level setup when no aggregate tables are used? Sometimes that seems like the easiest approach...
    Does anyone have a best practice concerning this issue? I have googled for this but I haven't found anything good yet. Any ideas/articles are highly appreciated.

    Hi User,
    > For instance - I have about 10-12 different dimensions - should all of them always be connected to each fact table? Either on Detail or Total level.
    It is not necessary to connect to all dimensions; it depends entirely on the report that you are creating. But as a best practice we should maintain all of them at the Detail level when you mention any join conditions in the physical layer.
    For example, for the sales table: if you want to report at the ProductDimension.ProductName level, then you should use the Detail level; otherwise the Total level (at the Product/Employee level).
    > Get Levels. (Available only for fact tables) Changes aggregation content. If joins do not exist between fact table sources and dimension table sources (for example, if the same physical table is in both sources), the aggregation content determined by the administration tool will not include the aggregation content of this dimension.
    Source: admin guide (Get Levels definition)
    thanks,
    Saichand.v
