Backups for large databases

I am wondering how people do restores of very large DBs. Ours is not that large yet, but it will grow to the point where exports and imports are not feasible. The data only changes periodically, and as it is a web application, cold backups are not really an option. We don't run in archived log mode because of the static nature of the data. Any suggestions?

Put the read-only tables in a read-only tablespace, the slowly changing tables in another tablespace, and the most frequently changing tables in a third.
Take a transportable tablespace export of the frequently changing tablespace daily, and of the slowly changing one 2-3 times a week (depending on your site specifics). This involves nothing but a metadata export of the data dictionary info for the exported tablespaces and an OS-level copy of their datafiles.
This is the best way for you to back up and recover; a sketch of the daily cycle follows. Check out the Oracle documentation or this website for transportable tablespaces.
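A minimal sketch of that daily cycle, assuming a tablespace named DAILY_TS and purely illustrative paths (the tablespace must stay read-only while its files are copied):
SQL> ALTER TABLESPACE daily_ts READ ONLY;
$ exp "sys/password as sysdba" transport_tablespace=y tablespaces=daily_ts file=daily_ts_meta.dmp
$ cp /u01/oradata/db/daily_ts01.dbf /backup/daily_ts01.dbf
SQL> ALTER TABLESPACE daily_ts READ WRITE;
On restore, copy the datafiles back and run imp with transport_tablespace=y to plug the tablespace back in.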
I guess it comes to a point where you have to make a tradeoff between performance and recoverability. In my opinion, always take recoverability over performance.
If the periodic change of data is nothing but a bulk data load, then take a backup of the database after the load. Having multiple recovery scenarios is the best way to ensure recovery.

Similar Messages

  • RMAN Tips for Large Databases

    Hi Friends,
    I'm starting to administer a large 10.2.0.1.0 database on Windows Server.
    Do you have tips or docs on best practices for large databases? I mean large as in 2 TB of data.
    I'm good at administering small and medium DBs, but some of them just got bigger and bigger!!!
    Tks a lot

    I would like to mention the links below:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/partconc.htm
    http://download.oracle.com/docs/cd/B28359_01/server.111/b32024/vldb_backup.htm
    And for a couple of good pieces of advice and considerations on RMAN for VLDBs:
    http://sosdba.wordpress.com/2011/02/10/oracle-backup-and-recovery-for-a-vldb-very-large-database/
    Google "vldb AND RMAN in oracle"
    Regards
    Girish Sharma

  • HOT Backups for smaller databases

    Should we schedule hot backups for small databases? The database size is around 1.5 GB. We have already scheduled a daily FULL DB export.
    The database is Oracle 8i.

    RMAN> run {
    2> allocate channel ch1 type disk format 'e:\rman_backup\backup%d_DB_%u_%p';
    3> backup database;
    4> backup archivelog all;
    5> release channel ch1;
    6> }
    RMAN-03022: compiling command: allocate
    RMAN-03026: error recovery releasing channel resources
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure during compilation of command
    RMAN-03013: command type: allocate
    RMAN-06172: not connected to recovery catalog database
    RMAN>
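    That RMAN-06172 usually means the client was started expecting a recovery catalog. On 8i you typically must say explicitly that you are running without one, e.g. (a hedged note; adjust the connect string to your environment):
    rman target / nocatalog
    Started that way, the run block above should compile and use the controlfile as the repository.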

  • RMAN backup for two databases with similar DBID

    We have two databases with SIDs 'uiivc' and 'uiivc1' running on an HP-UX 11.11 PA-RISC server. I configured RMAN backups for both databases, but it turns out both have the same DBID [connected to target database: UIIVC (DBID=3005194057) for UIIVC, and connected to target database: UIIVC1 (DBID=3005194057) for UIIVC1]. The backups configured via crontab run fine, with a full backup at the weekend and daily archivelog backups for both databases. Now my query is: will the identical DBID create any issues at restore time? Does RMAN use the DBID to distinguish databases?
    Kindly help clarify.
    Regards
    Manoj Thakkan

    Hi Tycho,
    I configured RMAN to store controlfile backups in different folders on each database.
    UIIVC database:
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/hot_backups/rmanbkp/uiivc/cf%F';
    UIIVC1 database:
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/hot_backups/rmanbkp/uiivc1/cf%F';
    Server output:
    uii-79:uiivc1 > ls -lrt /hot_backups/rmanbkp/uiivc
    total 43904816
    -rw-r----- 1 oracle dba 21144928256 Jul 15 10:58 8rkk68p5_1_1
    -rw-r----- 1 oracle dba 287110144 Jul 15 13:41 8tkk6sfv_1_1
    -rw-r----- 1 oracle dba 862700544 Jul 16 08:00 90kk8sel_1_1
    -rw-r----- 1 oracle dba 152350720 Jul 16 08:09 92kk8tgg_1_1
    -rw-r----- 1 oracle dba 32145408 Jul 16 08:09 cfc-3005194057-20090716-01
    uii-79:uiivc1 > ls -lrt /hot_backups/rmanbkp/uiivc1
    total 24186112
    -rw-r----- 1 oracle dba 6373965824 Jul 13 13:10 83kk1ftf_1_1
    -rw-r----- 1 oracle dba 784803840 Jul 13 13:46 85kk1j5e_1_1
    -rw-r----- 1 oracle dba 1022674944 Jul 13 23:07 87kk2kjs_1_1
    -rw-r----- 1 oracle dba 1814230016 Jul 14 23:46 89kk591i_1_1
    -rw-r----- 1 oracle dba 1211042816 Jul 15 23:06 8bkk7tc0_1_1
    -rw-r----- 1 oracle dba 1165975552 Jul 15 23:10 8ckk7tnr_1_1
    -rw-r----- 1 oracle dba 10534912 Jul 15 23:10 cfc-3005194057-20090715-00
    And I saw the backups being placed in the respective folders too. So with these settings, can I safely assume there are no issues with RMAN backup and restore, even though the two databases on the same server share one DBID?
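    One caveat worth noting: RMAN does identify databases by DBID, so at restore time point it at the right autobackup piece explicitly rather than letting it search by DBID. A hedged sketch using the listing above (the piece name is taken from that listing):
    RMAN> SET DBID 3005194057;
    RMAN> RESTORE CONTROLFILE FROM '/hot_backups/rmanbkp/uiivc/cfc-3005194057-20090716-01';
    Keeping the per-database format directories separate, as you have done, is what keeps this unambiguous.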
    Regards
    Manoj Thakkan

  • SAP EHP Update for Large Database

    Dear Experts,
    We are planning for the SAP EHP7 update for our system. Please find the system details below
    Source system: SAP ERP6.0
    OS: AIX
    DB: Oracle 11.2.0.3
    Target System: SAP ERP6.0 EHP7
    OS: AIX
    DB: 11.2.0.3
    RAM: 32 GB
    The main concern here is the DB size: it is approximately 3 TB. I have already gone through forums and notes, and it is mentioned that the DB size does not have any impact on an SAP EHP update using SUM. However, I am still thinking it will have an impact in the downtime phase.
    Please advise on this.
    Regards,
    Raja. G

    Hi Raja,
    The main concern here is the DB size: it is approximately 3 TB. I have already gone through forums and notes, and it is mentioned that the DB size does not have any impact on an SAP EHP update using SUM. However, I am still thinking it will have an impact in the downtime phase.
    Although a 3 TB DB size may not have a direct impact on the upgrade process itself, the downtime of the system may vary with a larger database size.
    Points to consider
    1) A DB backup before entering the downtime phase.
    2) The number of programs and tables stored in the database: ICNV table conversions and XPRA execution will depend on these.
    Hope this helps.
    Regards,
    Deepak Kori

  • DPM is Only Allowing Express Full Backups For a Database Set to Full Recovery Model

    I have just transitioned my SQL backups from a server running SCDPM 2012 SP1 to a different server running 2012 R2. All backups are working as expected except for one. The database in question is supposed to be backed up with a daily express full and hourly incremental schedule. Although the database is set to the full recovery model, the new DPM server says that recovery points will be created for that database based on the express full backup schedule only. I checked the logs on the old DPM server and the transaction log backups were working just fine up until I stopped protecting the data source. The SQL server is 2008 R2 SP2. Other databases on the same server that are set to the full recovery model are working just fine. If we switch the recovery model of a database that isn't protected by DPM and then start the wizard to add it to the protection group, it properly sees the difference when we flip the recovery model back and forth. We also tried switching the recovery model on the failing database from full to simple and then back again, but to no avail. Both the SQL server and the DPM server have been rebooted. We have successfully set up transaction log backups in a SQL maintenance plan as a test, so we know the database is really using the full recovery model.
    Is there anything that someone knows about that can trigger a false positive for recovery-model-to-backup-type mismatches?

    I was having this same problem and appear to have found a solution. I wanted hourly recovery points for all my SQL databases. I was getting hourly for some but not for others; the others were only getting a recovery point for the express full backup. I noted that some of the databases were in simple recovery mode, so I changed them to full recovery mode, but that did not solve my problem. I was still not getting the hourly recovery points.
    I found an article that seemed to indicate that SCDPM does not recognize any change in the recovery model once protection has started. My database was in simple recovery mode when I added it (auto) to protection, so even though I changed it to full recovery mode, SCDPM continued to treat it as simple.
    I tested this by 1) verifying my db is set to full recovery, 2) backing it up and restoring it with a new name, 3) allowing SCDPM to automatically add it to protection overnight, 4) verifying the next day that I am getting hourly recovery points on the copy of the db.
    It worked. The original db was still only getting express full recovery points, and the copy was getting hourly. I suppose that if I don't want to restore a production db with an alternate name, I will have to remove the db from protection, verify that it is set to full, and then add it back to protection. I have not tested this yet.
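    For anyone reproducing this, a quick T-SQL check of what the database actually reports (the database name is a placeholder):
    SELECT name, recovery_model_desc FROM sys.databases WHERE name = N'YourDb';
    ALTER DATABASE YourDb SET RECOVERY FULL;  -- DPM appears to read this only when protection is first configured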
    This is the article I read: 
    Article I read

  • DR strategy for large database

    Hi All,
    We have a 30 TB database for which we need to design a backup strategy (Oracle 11gR1 SE, 2-node RAC with ASM).
    The client needs a DR site for the database, and from the DR site we will be running tape backups.
    The main constraints we are facing are the size of the DB, which will grow to 50 TB in future, and that we are running Oracle Standard Edition.
    Taking a full RMAN backup to a SAN box takes around one week for a DB size of 30 TB.
    Options for us:
    1. Create a manual standby and apply archive logs (we can't use Data Guard as we are on SE).
    2. Storage-level replication (using HP Continuous Access).
    3. Use third-party tools such as SharePlex, GoldenGate, DBvisit, etc.
    Which will be the best option here with respect to cost and time, or do we have any option better than these?
    We can't upgrade to Oracle EE for now, since we need to meet the project deadline for the client. We are migrating legacy data to production, and this would be interrupted if we went for an upgrade.
    Thanks in advance.
    Arun
    Edited by: user12107367 on Feb 26, 2011 7:47 AM
    Modified the heading from Backup to DR

    Arun,
    Yes, this limitation around BCT is problematic in SE, but after all, if everything were included in SE, who would pay for the EE licence? :)
    The only good thing about running without BCT is that RMAN checks the whole database for corruption even when the backup is an incremental one. There is no miraculous "full Oracle" solution if your backups are that slow, but as you mentioned, a manual standby with delayed periodic application of the archives is possible; a sketch follows. It's up to you to evaluate whether it works in your case, though: how many archive log files will you generate daily, and how long will it take to apply them in your environment?
    (A note about GoldenGate: it is no longer a third-party tool; it is now an Oracle product, and it is clearly positioned as the recommended replacement for Streams.)
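    A minimal sketch of the periodic apply on such a manual standby (assuming the standby was built from a restored backup and the archived logs are shipped over by script; the exact commands are illustrative):
    SQL> STARTUP MOUNT
    SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
    -- supply each shipped archive log when prompted, then type CANCEL
    SQL> SHUTDOWN IMMEDIATE
    The standby stays unopened between apply cycles; opening it with RESETLOGS ends its life as a standby.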
    Best regards
    Phil

  • SQL Server Migration Assistant (SSMA) for Oracle okay for large database migrations?

    All:
    We don't have much experience with the SSMA (Oracle) tool and need some advice from those of you familiar with it. We must migrate an Oracle 11.2.0.3.0 database to SQL Server 2014. The Oracle database consists of approximately 25,000 tables and 30,000 views plus related indexes, and is approximately 2.3 TB in size.
    Is this do-able using the latest version of SSMA for Oracle? If so, how much horsepower would you throw at this to get it done?
    Any other gotchas and advice appreciated.
    Kindest Regards,
    Bill
    Bill Davidson

    Hi Bill,
    SSMA supports migrating large Oracle databases. To migrate an Oracle database to SQL Server 2014, you could use the latest version, Microsoft SQL Server Migration Assistant v6.0 for Oracle. Before the migration, you should pay attention to the points below.
    1. The account that is used to connect to the Oracle database must have at least CONNECT permissions. This enables SSMA to obtain metadata from schemas owned by the connecting user. To obtain metadata for objects in other schemas and then convert objects in those schemas, the account must have the following permissions (see the grant sketch after this list): CREATE ANY PROCEDURE, EXECUTE ANY PROCEDURE, SELECT ANY TABLE, SELECT ANY SEQUENCE, CREATE ANY TYPE, CREATE ANY TRIGGER, SELECT ANY DICTIONARY.
    2. Metadata about the Oracle database is not automatically refreshed. The metadata in Oracle Metadata Explorer is a snapshot of the metadata when you first connected, or the last time that you manually refreshed it. You can manually update metadata for all schemas, a single schema, or individual database objects. For more information, please refer to the similar article:
    https://msdn.microsoft.com/en-us/library/hh313203(v=sql.110)
    3. The account that is used to connect to SQL Server requires different permissions depending on the actions that the account performs:
     • To convert Oracle objects to Transact-SQL syntax, to update metadata from SQL Server, or to save converted syntax to scripts, the account must have permission to log on to the instance of SQL Server.
     • To load database objects into SQL Server, the account must be a member of the sysadmin server role. This is required to install CLR assemblies.
     • To migrate data to SQL Server, the account must be a member of the sysadmin server role. This is required to run the SQL Server Agent data migration packages.
     • To run the code that is generated by SSMA, the account must have Execute permissions for all user-defined functions in the ssma_oracle schema of the target database. These functions provide functionality equivalent to Oracle system functions and are used by converted objects.
     • If one account is to perform all migration tasks, it must be a member of the sysadmin server role.
    For more information, please refer to the similar article:
    https://msdn.microsoft.com/en-us/library/hh313158(v=sql.110)
    4. Metadata about SQL Server databases is not automatically updated. The metadata in SQL Server Metadata Explorer is a snapshot of the metadata when you first connected to SQL Server, or the last time that you manually updated it. You can manually update metadata for all databases, or for any single database or database object.
    5. If the engine being used is the Server Side Data Migration Engine, then, before you can migrate data, you must install the SSMA for Oracle Extension Pack and the Oracle providers on the computer that is running SSMA. The SQL Server Agent service must also be running. For more information about how to install the extension pack, see Installing Server Components (OracleToSQL). When SQL Server Express edition is used as the target database, only client-side data migration is allowed; server-side data migration is not supported. For more information, please refer to the similar article:
    https://msdn.microsoft.com/en-us/library/hh313202(v=sql.110)
    For how to migrate Oracle databases to SQL Server, please refer to the similar article:
    https://msdn.microsoft.com/en-us/library/hh313159(v=sql.110).aspx
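    A sketch of the Oracle-side grants from point 1 (the user name ssma_user is an assumption for illustration; run as a DBA):
    GRANT CONNECT TO ssma_user;
    GRANT CREATE ANY PROCEDURE, EXECUTE ANY PROCEDURE, SELECT ANY TABLE,
          SELECT ANY SEQUENCE, CREATE ANY TYPE, CREATE ANY TRIGGER,
          SELECT ANY DICTIONARY TO ssma_user;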
    Regards,
    Michelle Li

  • BRTOOLS with tape continuation for large database

    Hello,
    I have an R/3 database of 2 TB which needs to be copied onto a new staging server for an upgrade.
    The problem I am facing is that I don't have any space in the SAN (storage) for taking the backup to disk, so the only option I have is to take backups on tape.
    I even tried a backup in compress mode on tape, but ended up with a CPIO error: as per note 20577, CPIO cannot handle files larger than 2 GB, and some of the datafiles, like *DATA_1, range between 6 GB and 10 GB.
    So I had to change the parameter tape_copy_cmd = dd in init<sid>.sap.
    But dd will end with an error message once the end of the tape is reached, thereby failing my backup, and since the database size is 2 TB and the tapes I have hold 800 GB, I would have to use multiple tapes.
    Please help me get out of this situation.
    Regards,
    Guru

    Hi,
    Please check the 'Sequential backup' section in the backup guide. If it's not possible to use a tape with a big enough capacity, you can use this method instead.
    You would need to add or modify the following parameters in init<sid>.sap:
    1. volume_backup
    2. tape_address
    3. tape_address_rew
    4. exec_parallel
    You'll find more info about these parameters at www.help.sap.com and in the backup guide itself; a sketch follows.
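    A sketch of what those entries might look like for a sequential backup over three volumes (device names and volume labels are assumptions for illustration):
    volume_backup = (PRDB01, PRDB02, PRDB03)
    tape_address = /dev/rmt/0mn
    tape_address_rew = /dev/rmt/0m
    exec_parallel = 0
    BRBACKUP then asks for (or a loader mounts) the next volume when one fills up.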
    Br,
    Javier

  • Hot Backup for oracle database?

    Dear all,
    I want to change from Cold Backup to Hot Backup. Does anyone know how to do a Hot Backup, and is there some simple document I can follow? If the database is running in ARCHIVELOG mode, does the size grow very fast, or are there other effects to watch for?
    Please advise,
    Amy

    I want to change Cold Backup to Hot Backup. Does anyone know how to do a Hot Backup, and is there some simple document I can follow?
    An online/hot backup does not require shutting down the database: we put the database in backup mode and then take the backup even while users' read/write activity goes on. This strategy is useful if your database runs 24x7.
    For an online/hot backup the database should be in archive mode.
    I hope you know how to turn on archiving. Before turning it on, check whether it is already enabled:
    SQL> archive log list
    Database log mode              Archive Mode
    Automatic archival             Enabled
    Archive destination            USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence     1
    Next log sequence to archive   2
    Current log sequence           2
    SQL>
    You may also check it by connecting to the database as SYS:
    SQL> select log_mode from v$database;
    LOG_MODE
    ARCHIVELOG
    SQL>
    If your database is not in archive mode, then first enable archiving; please take a cold backup before turning it on.
    SQL>shutdown immediate
    SQL>startup mount
    SQL>alter database archivelog;
    SQL>alter database open;
    SQL> archive log list
    Now you are able to take hot/online backups. It's up to you whether you use user-managed backup and recovery or RMAN, Oracle's own free tool for backup and recovery; my recommendation would be RMAN. A small user-managed sketch follows the links below.
    http://download-uk.oracle.com/docs/cd/A97630_01/server.920/a96572/toc.htm
    http://www.oracle.com/technology/deploy/availability/htdocs/rman_overview.htm
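    If you go the user-managed route, a minimal sketch for one tablespace (tablespace name and paths are assumptions; repeat per tablespace, or let RMAN handle it without backup mode):
    SQL> ALTER TABLESPACE users BEGIN BACKUP;
    $ cp /u01/oradata/db/users01.dbf /backup/users01.dbf
    SQL> ALTER TABLESPACE users END BACKUP;
    SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;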
    If the database is running in ARCHIVELOG mode, does the size grow very fast, or are there other effects?
    How fast the archive logs grow depends entirely on the activity of your database operations. You will have to observe it yourself after turning on archiving; it varies from database to database.
    Khurram

  • Oracle for large database + configuration

    Hi
    I have some historical data for stocks and options that I want to save into an Oracle database. Currently I have about 190 GB of data and expect it to grow by about 5 GB per month. I have not completely thought about how to organize the tables; it is possible that there might be just one table, which might be larger than any single hard disk I have.
    I am planning to put this on a Dell box running Windows 2000. Here is the configuration:
    Intel Xeon 2.4 GHz, 2 GB SDRAM, with 3 x 146 GB SCSI hard drives on a PERC3 SCSI controller. This machine costs roughly $7000.
    Is there any reason this won't work? Will Oracle be able to organize one database across multiple disks? How about tables? Can tables span multiple disks?
    All this data is going to be read only.
    My other, cheaper choice is an Intel box running a P3 with 2 GB RAM and 2 x 200 GB IDE drives. Will this configuration work?
    Also, for this kind of a database, what kind of total disk space should I budget for?
    thanks
    Venkat

    Server Manager was deprecated in 9i; instead of using it, you have to use SQL*Plus. Do you have another JRE installed?
    How to create a database manually in 9i
    Administrator's Guide for Windows Contents / Search / Index / PDF
    http://download-east.oracle.com/docs/cd/B10501_01/win.920/a95491.pdf
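    On the multi-disk question asked above: yes, a tablespace can be built from datafiles on different disks, so a single table can span them. A hedged sketch (drive letters and sizes are illustrative assumptions):
    SQL> CREATE TABLESPACE stock_data
         DATAFILE 'D:\oradata\stocks01.dbf' SIZE 4000M,
                  'E:\oradata\stocks02.dbf' SIZE 4000M;
    Tables created in stock_data can then allocate extents in both files.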
    Joel Pérez

  • Selection Interface for large database

    I am looking for a working example of a CF selection field that fills or builds a name list as you type. The database has about 600,000 names, with 400 new people being added each day. I am looking for a smart tool that watches you type and brings down a name list as you go. In the list the user needs to see the name and other identifying information, like DOB and phone number. The user clicks the row and the person's record is located. I think I have a good understanding of the CFC side of this.
    If you type fast, the tool should wait for a second. "Sounds like" support would also be nice.
    Thanks for any ideas!

    You mean AutoSuggest? See this link:
    http://forta.com/blog/index.cfm/2007/5/31/ColdFusion-Ajax-Tutorial-1-AutoSuggest
    You might want to adjust the code to work with the official version, because the example was built for a beta version.

  • 11gR1 backup for 11gR2 database clone.

    Hi all,
    Our OS is 64-bit RHEL 5.7. On this we have EBS R12.1.3 with DB version 11.1. Recently I installed EBS 12.1.3 on a VM (actually installed 12.1.1 and upgraded to 12.1.3) and upgraded its 11.1.0.7 DB to 11.2.0.3. Now, as I want to test the CPU patches on this instance, I decided to clone the production instance here. Since this is a higher-version Oracle home (11.2.0), I will have to do some new steps here, but I am not sure what they are. I have restored the controlfile, restored and recovered the database, and opened the database with the STARTUP UPGRADE command. What steps should I perform next?
    Regards,
    Vinod

    I believe this is what you need to do:
    - Clone your application tier node only
    Cloning Oracle Applications Release 12 with Rapid Clone [ID 406982.1]
    Rapid Clone Documentation Resources For Release 11i and 12 [ID 799735.1]
    - Re-apply the application patches you applied for 11.2.0.3 upgrade using "nodatabaseportion"
    Interoperability Notes EBS 12.0 and 12.1 with Database 11gR2 [ID 1058763.1]
    AD Command Line Options for Release R12 [ID 1078973.1]
    - Apply CPU patches
    Oracle E-Business Suite Releases 11i and 12 Critical Patch Update Knowledge Document (April 2013) [ID 1530756.1]
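    On the database side, since the restored database is open in UPGRADE mode under the 11.2.0.3 home, the usual manual-upgrade steps would come next (a hedged sketch; confirm the exact sequence against the interoperability note above):
    SQL> @?/rdbms/admin/catupgrd.sql
    SQL> STARTUP
    SQL> @?/rdbms/admin/utlrp.sql
    followed by the utlu112s.sql post-upgrade status check.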
    Thanks,
    Hussein

  • Brbackup for large database

    Dear All,
    In our PRD server the DB size is more than 800 GB. We are currently using BR*Tools for backup. The tape size is 400/800 GB Ultrium 3. In future the DB size may increase. How should we take the backup? Is there any way to split the backup across tapes?
    Kindly Give the solution.
    Regards
    guna

    Hello,
    if your backup is too big for one tape, you may do one of the following:
    Use more than one tape drive; you may specify them in your init<SID>.sap file (see the sketch below).
    Use one drive, and manually insert another tape as soon as the first is full. You probably don't want to, though...
    Use one drive and a loader that will automatically change tapes.
    So most probably you will have to pay for additional hardware.
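    For the first option, the relevant init<SID>.sap entries might look like this (device names are assumptions for illustration):
    tape_address = (/dev/rmt/0mn, /dev/rmt/1mn)
    tape_address_rew = (/dev/rmt/0m, /dev/rmt/1m)
    BRBACKUP can then write to both drives in parallel and continue across volumes as tapes fill.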
    regards

  • Remove fragmentation for large database

    I have two databases, each with a page file size close to 80 GB and an index file around 4 GB. The average clustering ratio on them is close to 0.50. I am in a dilemma over how to defragment these databases. I have two options:
    1) Export level 0 data, clear all data (using reset), re-import the level 0 data, and fire a CALC ALL.
    2) Export all data, clear all data (using reset), and re-import all data.
    Here is the situation:
    -> Export all data runs for 19 hours, hence I could not continue with option 2.
    -> Option 1 works fine, but when I fire the calc after loading the level 0 data, the average clustering ratio goes back to 0.50. So the database is fragmented again, and I am back to the point where I started.
    How do you guys suggest handling this situation?
    This is Essbase version 7 (yeah, it is old).

    The below old thread seems to be worth reading:
    [Thread: Essbase Defragmentation|http://forums.oracle.com/forums/thread.jspa?threadID=713717&tstart=0]
    Cheers,
    -Natesh
