DR strategy for a large database

Hi All,
We have a 30 TB database for which we need to design a backup strategy (Oracle 11gR1 SE, 2-node RAC with ASM).
The client needs a DR site for the database, and from the DR site we will run the tape backups.
The main constraints we are facing are the size of the DB, which will grow to 50 TB in the future, and the fact that we are running Oracle Standard Edition.
Taking a full RMAN backup of the 30 TB database to a SAN box takes us around one week.
Options for us:
1. Create a manual standby and apply archive logs (we can't use Data Guard since we are on SE)
2. Storage-level replication (using HP Continuous Access)
3. Use third-party tools such as SharePlex, GoldenGate, Dbvisit, etc.
Which one would be the best option here with respect to cost and time, or is there a better option than these?
We can't upgrade to Oracle EE for now, since we need to meet the project deadline for the client. We are migrating legacy data into production, and this would be interrupted if we went for an upgrade.
Thanks in advance.
Arun
Edited by: user12107367 on Feb 26, 2011 7:47 AM
Modified the heading from Backup to DR

Arun,
Yes, this limitation around BCT is problematic in SE, but then again, if everything were included in SE, who would pay for the EE licence? :)
The only good thing when BCT is not in use is that RMAN checks the whole database for corruption even when the backup is an incremental one. There is no miraculous "full Oracle" solution if your backups are that slow, but as you mentioned, a manual standby with delayed, periodic application of the archives is possible. It's up to you to evaluate whether it works in your case, though: how many archive log files will you generate daily, and how long will it take to apply them in your environment?
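To give a rough idea, the manual standby approach in SE usually boils down to shipping the archived logs yourself and applying them on a standby built from a standby controlfile. A minimal sketch, assuming the logs land in the same directory on both sides (host names and paths are placeholders only):
On the primary, ship the new archived logs to the DR host:
rsync -av /u01/arch/PROD/ drhost:/u01/arch/PROD/
On the standby, mounted from the standby controlfile, apply whatever has arrived:
SQL> STARTUP NOMOUNT
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
SQL> RECOVER AUTOMATIC STANDBY DATABASE;
(answer CANCEL when it asks for a log that has not been shipped yet)
Both steps are normally wrapped in scheduled jobs, and the gap between them is your apply delay.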
(Note that GoldenGate is no longer a third-party tool: it is now an Oracle product and is clearly positioned as the recommended replacement for Streams.)
Best regards
Phil

Similar Messages

  • RMAN Tips for Large Databases

    Hi Friends,
    I'm starting to administer a large 10.2.0.1.0 database on Windows Server.
    Do you have any tips or docs on best practices for large databases? I mean large as in 2 TB of data.
    I'm comfortable administering small and medium DBs, but some of them just keep getting bigger and bigger!
    Tks a lot

    I would like to mention the links below:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/partconc.htm
    http://download.oracle.com/docs/cd/B28359_01/server.111/b32024/vldb_backup.htm
    For a couple of good pieces of advice and considerations for RMAN with a VLDB:
    http://sosdba.wordpress.com/2011/02/10/oracle-backup-and-recovery-for-a-vldb-very-large-database/
    Google "vldb AND RMAN in oracle"
    Regards
    Girish Sharma

  • SAP EHP Update for Large Database

    Dear Experts,
    We are planning for the SAP EHP7 update for our system. Please find the system details below
    Source system: SAP ERP6.0
    OS: AIX
    DB: Oracle 11.2.0.3
    Target System: SAP ERP6.0 EHP7
    OS: AIX
    DB: 11.2.0.3
    RAM: 32 GB
    The main concern here is the DB size: it is approximately 3 TB. I have already gone through forums and notes, and they say the DB size does not have any impact on an SAP EHP update using SUM. However, I still think it will have an impact during the downtime phase.
    Please advise on this.
    Regards,
    Raja. G

    Hi Raja,
    Although a 3 TB DB size may not have a direct impact on the upgrade process itself, the downtime of the system can vary with a larger database.
    Points to consider
    1) Take a DB backup before entering the downtime phase.
    2) The number of programs and tables stored in the database: ICNV table conversions and XPRA execution will depend on these.
    Hope this helps.
    Regards,
    Deepak Kori

  • Sql Server Management Assistant (SSMA) Oracle okay for large database migrations?

    All:
    We don't have much experience with the SSMA (Oracle) tool and need some advice from those of you familiar with it.  We must migrate an Oracle 11.2.0.3.0 database to SQL Server 2014.  The Oracle database consists of approximately 25,000 tables and 30,000
    views and related indices.  The database is approximately 2.3 TB in size.
    Is this doable using the latest version of SSMA for Oracle? If so, how much horsepower would you throw at this to get it done?
    Any other gotchas and advice appreciated.
    Kindest Regards,
    Bill
    Bill Davidson

    Hi Bill,
    SSMA supports migrating large Oracle databases. To migrate an Oracle database to SQL Server 2014, you can use the latest version, Microsoft SQL Server Migration Assistant v6.0 for Oracle. Before the migration, you should pay attention to the points below.
    1. The account that is used to connect to the Oracle database must have at least CONNECT permissions. This enables SSMA to obtain metadata from schemas owned by the connecting user. To obtain metadata for objects in other schemas and then convert objects in those schemas, the account must have the following permissions: CREATE ANY PROCEDURE, EXECUTE ANY PROCEDURE, SELECT ANY TABLE, SELECT ANY SEQUENCE, CREATE ANY TYPE, CREATE ANY TRIGGER, SELECT ANY DICTIONARY (a rough GRANT sketch follows this list).
    2.Metadata about the Oracle database is not automatically refreshed. The metadata in Oracle Metadata Explorer is a snapshot of the metadata when you first connected, or the last time that you manually refreshed metadata. You can manually update metadata
    for all schemas, a single schema, or individual database objects. For more information about the process, please refer to the similar article: 
    https://msdn.microsoft.com/en-us/library/hh313203(v=sql.110).
    3.The account that is used to connect to SQL Server requires different permissions depending on the actions that the account performs as the following:
     • To convert Oracle objects to Transact-SQL syntax, to update metadata from SQL Server, or to save converted syntax to scripts, the account must have permission to log on to the instance of SQL Server.
     • To load database objects into SQL Server, the account must be a member of the sysadmin server role. This is required to install CLR assemblies.
     • To migrate data to SQL Server, the account must be a member of the sysadmin server role. This is required to run the SQL Server Agent data migration packages.
     • To run the code that is generated by SSMA, the account must have Execute permissions for all user-defined functions in the ssma_oracle schema of the target database. These functions provide equivalent functionality of Oracle system functions, and
    are used by converted objects.
     • If the account that is used to connect to SQL Server is to perform all migration tasks, the account must be a member of the sysadmin server role.
    For more information about the process, please refer to the  similar article: 
    https://msdn.microsoft.com/en-us/library/hh313158(v=sql.110)
    4.Metadata about SQL Server databases is not automatically updated. The metadata in SQL Server Metadata Explorer is a snapshot of the metadata when you first connected to SQL Server, or the last time that you manually updated metadata. You can manually update
    metadata for all databases, or for any single database or database object.
    5.If the engine being used is Server Side Data Migration Engine, then, before you can migrate data, you must install the SSMA for Oracle Extension Pack and the Oracle providers on the computer that is running SSMA. The SQL Server Agent service must also
    be running. For more information about how to install the extension pack, see Installing Server Components (OracleToSQL). And when SQL Express edition is used as the target database, only client side data migration is allowed and server side data migration
    is not supported. For more information about the process, please refer to the  similar article: 
    https://msdn.microsoft.com/en-us/library/hh313202(v=sql.110)
    For how to migrate Oracle Databases to SQL Server, please refer to the  similar article: 
    https://msdn.microsoft.com/en-us/library/hh313159(v=sql.110).aspx
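    As a rough illustration of the Oracle-side permissions from point 1 above (ssma_conn is only a placeholder user name; grant no more than your migration actually needs):
    GRANT CONNECT TO ssma_conn;
    GRANT CREATE ANY PROCEDURE, EXECUTE ANY PROCEDURE,
          SELECT ANY TABLE, SELECT ANY SEQUENCE,
          CREATE ANY TYPE, CREATE ANY TRIGGER,
          SELECT ANY DICTIONARY
      TO ssma_conn;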
    Regards,
    Michelle Li

  • BRTOOLS with tape continuation for large database

    Hello,
    I have an R/3 database of 2 TB which needs to be copied onto a new staging server for an upgrade.
    The problem I am facing is that I don't have any space in the SAN (storage) for taking the backup on disk.
    So the only option I have is to take backups on tape.
    I even tried a backup in compress mode on tape, but ended up with a CPIO error for handling files larger than 2 GB (note 20577).
    And since the database size is 2 TB and the tapes I have hold 800 GB, I would have to use multiple tapes.
    Also, since some of the files in the database, like *DATA_1, range between 6 GB and 10 GB, CPIO cannot handle them (it cannot cope with files larger than 2 GB, as per note 20577).
    So I had to change the parameter tape_copy_cmd = dd in init<sid>.sap.
    But dd will stop with an error message once the end of the tape is reached, thereby failing my backup.
    Please help me get out of this situation.
    Please help me get out of this situation.
    Regards,
    Guru

    Hi,
    Please check the 'Sequential backup' section in the backup guide. If it's not possible to use a tape with a big enough capacity, you can use this method instead.
    You would need to add/modify the following parameters in init<SID>.sap:
    1. volume_backup
    2. tape_address
    3. tape_address_rew
    4. exec_parallel
    You'll find more info about these parameters at help.sap.com and in the backup guide itself.
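    For example, a sequential backup over several volumes on a single drive might look roughly like this in init<SID>.sap (device names, volume labels and the tape size are placeholders; take the real values from your tape setup and the backup guide):
    backup_dev_type  = tape
    tape_copy_cmd    = dd
    tape_size        = 800G
    volume_backup    = (SIDB01, SIDB02, SIDB03)
    tape_address     = /dev/rmt/0.1
    tape_address_rew = /dev/rmt/0
    exec_parallel    = 0
    BRBACKUP then asks for (or, with an autoloader, mounts) the next volume when the current tape is full.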
    Br,
    Javier

  • Oracle for large database + configuration

    Hi
    I have some historical data for stocks and options that I want to save into an Oracle database. Currently I have about 190 GB of data and expect it to grow by about 5 GB per month. I have not completely thought about how to organize the tables. It is possible that there might be just one table, which could be larger than the hard disk I have.
    I am planning to put this on a Dell box running Windows 2000. Here is the configuration:
    Intel Xeon 2.4 GHz, 2 GB SDRAM, with 3 x 146 GB SCSI hard drives and a PERC3 SCSI controller. This machine costs roughly $7,000.
    Is there any reason this won't work? Will Oracle be able to organize one database across multiple disks? How about tables? Can tables span multiple disks?
    All this data is going to be read-only.
    My other, cheaper choice is:
    An Intel box running a P3, 2 GB RAM, 2 x 200 GB IDE drives. Will this configuration work?
    Also, for this kind of database, what kind of total disk space should I budget for?
    thanks
    Venkat

    Server Manager was deprecated in 9i; instead of using it, you have to use SQL*Plus. Do you have another JRE installed?
    How to create a database manually in 9i
    Administrator's Guide for Windows Contents / Search / Index / PDF
    http://download-east.oracle.com/docs/cd/B10501_01/win.920/a95491.pdf
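    On the question of whether tables can span disks: a tablespace can be built from datafiles on different drives, and any table stored in it can then have extents in either file, so a single table can be larger than any one disk. A minimal sketch (tablespace name, paths and sizes are placeholders):
    CREATE TABLESPACE stock_data
      DATAFILE 'D:\oradata\stk\stock01.dbf' SIZE 30000M,
               'E:\oradata\stk\stock02.dbf' SIZE 30000M;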
    Joel Pérez

  • Backups for large databases

    I am wondering how people do restores of very large DBs. Ours is not that large yet, but it will grow to the point where exports and imports are not feasible. The data only changes periodically, and as it is a web application, cold backups are not really an option. We don't run in archive log mode because of the static nature of the data. Any suggestions?

    Put the read-only tables in a read-only tablespace and the slowly changing tables in another tablespace, with the most frequently changing ones in a third tablespace.
    Take a transportable tablespace export of the frequently changing tablespace daily, and of the slowly changing one two to three times a week (depending on your site specifics). This involves nothing but a metadata export of the data dictionary information for the exported tablespaces and an OS-level copy of the datafiles of those tablespaces.
    This is the best way for you to back up and recover; check the Oracle documentation or this website for transportable tablespaces.
    I guess it comes to a point where you have to make a trade-off between performance and recoverability. In my opinion, always take recoverability over performance.
    If the periodic change of data is nothing but a bulk data load, then take a backup of the database after the data load. Having multiple recovery scenarios is the best approach to recovery.
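    The transportable tablespace cycle described above is essentially: make the tablespace read-only, export its metadata, copy its datafiles at the OS level, then make it read-write again. A rough sketch (tablespace, file and path names are placeholders):
    SQL> ALTER TABLESPACE daily_data READ ONLY;
    exp \'sys/<password> as sysdba\' transport_tablespace=y tablespaces=daily_data file=daily_data.dmp
    cp /u01/oradata/prod/daily_data01.dbf /backup/tts/
    SQL> ALTER TABLESPACE daily_data READ WRITE;
    To restore, copy the datafile back and plug the tablespace in with imp transport_tablespace=y.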

  • Selection Interface for large database

    I am looking for a working example of a CF selection field that fills or builds a name list as you type. The database has about 600,000 names, with 400 new people being added each day. I am looking for a smart tool that watches you type and brings up a name list as you go. In the list, the user needs to see the name and other identifying information like DOB and phone number. The user clicks the row and the person's record is located. I think I have a good understanding of the CFC side of this.
    If you type fast, the tool should wait for a second. "Sounds like" support would also be nice.
    Thanks for any ideas!

    You mean AutoSuggest? See this link:
    http://forta.com/blog/index.cfm/2007/5/31/ColdFusion-Ajax-Tutorial-1-AutoSuggest
    You might want to adjust the code to work with the official version, because the example was built for a beta version.

  • Remove fragmentation for large database

    I have two databases, each with a page file size close to 80 GB and an index of 4 GB. The average clustering ratio on them is close to 0.50. I am in a dilemma about how to defragment these databases. I have two options:
    1> Export level-0 data, clear all data (using reset), re-import the level-0 data, and fire a CALC ALL.
    2> Export all data, clear all data (using reset), and re-import all data.
    Here is the situation:
    -> Export all data runs for 19 hours, hence I could not continue with option 2.
    -> Option 1 works fine, but when I fire the calc after loading the level-0 data, the average clustering ratio goes back to 0.50. So the database is fragmented again, and I am back to the point where I started.
    How do you guys suggest handling this situation?
    This is Essbase version 7 (yeah, it is old).

    The below old thread seems to be worth reading:
    [Thread: Essbase Defragmentation|http://forums.oracle.com/forums/thread.jspa?threadID=713717&tstart=0]
    Cheers,
    -Natesh

  • Brbackup for large database

    Dear All,
    On our PRD server the DB size is more than 800 GB. We are currently using BR*Tools for backups. The tape size is 400/800 GB Ultrium 3, and the DB size will keep increasing. How should we take the backup? Is there any way to split the backup across tapes?
    Kindly suggest a solution.
    Regards
    guna

    Hello,
    if your backup is too big for one tape, you may do one of the following:
    Use more than one tape drive. You can specify them in your init<SID>.sap file.
    Use one drive and manually insert another tape as soon as the first is full. You probably don't want to do that, though...
    Use one drive and a tape loader that will automatically change tapes.
    So most probably you will have to pay for additional hardware.
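    For example, with two drives attached, the relevant init<SID>.sap entries might look roughly like this (device names, volume labels and the tape size are placeholders for your actual hardware):
    tape_address     = (/dev/rmt/0.1, /dev/rmt/1.1)
    tape_address_rew = (/dev/rmt/0, /dev/rmt/1)
    volume_backup    = (PRDB01, PRDB02)
    tape_size        = 400G
    BRBACKUP can then spread the backup across both drives and volumes.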
    regards

  • How can we suggest a new DBA OCE certification for very large databases?

    How can we suggest a new DBA OCE certification for very large databases?
    What web site, or what phone number can we call to suggest creating a VLDB OCE certification.
    The largest databases that I have ever worked with were barely over 1 trillion bytes.
    Some people told me that the work of a DBA changes completely when you have a VERY LARGE DATABASE.
    I would guess that maybe some of the following configuration topics might be on it:
    * Partitioning
    * parallel
    * bigger block size - DSS vs OLTP
    * etc
    Where could I send in a recommendation?
    Thanks Roger

    I wish there were some details about the OCE data warehousing.
    Look at the topics for 1Z0-515. Assume that the 'lightweight' topics will go (like Best Practices) and that there will be more technical topics added.
    Oracle Database 11g Data Warehousing Essentials | Oracle Certification Exam
    Overview of Data Warehousing
      Describe the benefits of a data warehouse
      Describe the technical characteristics of a data warehouse
      Describe the Oracle Database structures used primarily by a data warehouse
      Explain the use of materialized views
      Implement Database Resource Manager to control resource usage
      Identify and explain the benefits provided by standard Oracle Database 11g enhancements for a data warehouse
    Parallelism
      Explain how the Oracle optimizer determines the degree of parallelism
      Configure parallelism
      Explain how parallelism and partitioning work together
    Partitioning
      Describe types of partitioning
      Describe the benefits of partitioning
      Implement partition-wise joins
    Result Cache
      Describe how the SQL Result Cache operates
      Identify the scenarios which benefit the most from Result Set Caching
    OLAP
      Explain how Oracle OLAP delivers high performance
      Describe how applications can access data stored in Oracle OLAP cubes
    Advanced Compression
      Explain the benefits provided by Advanced Compression
      Explain how Advanced Compression operates
      Describe how Advanced Compression interacts with other Oracle options and utilities
    Data integration
      Explain Oracle's overall approach to data integration
      Describe the benefits provided by ODI
      Differentiate the components of ODI
      Create integration data flows with ODI
      Ensure data quality with OWB
      Explain the concept and use of real-time data integration
      Describe the architecture of Oracle's data integration solutions
    Data mining and analysis
      Describe the components of Oracle's Data Mining option
      Describe the analytical functions provided by Oracle Data Mining
      Identify use cases that can benefit from Oracle Data Mining
      Identify which Oracle products use Oracle Data Mining
    Sizing
      Properly size all resources to be used in a data warehouse configuration
    Exadata
      Describe the architecture of the Sun Oracle Database Machine
      Describe configuration options for an Exadata Storage Server
      Explain the advantages provided by the Exadata Storage Server
    Best practices for performance
      Employ best practices to load incremental data into a data warehouse
      Employ best practices for using Oracle features to implement high performance data warehouses

  • Startup restrict for export of large database ?

    Hello,
    the Oracle Admin guide suggests that one possible use for the "restricted" mode of an Oracle database is to perform a consistent export of a large database.
    But is this necessary, given that the CONSISTENT=Y option exists in the exp tool? I understand that using CONSISTENT=Y may need a lot of undo space on a large database, but could there be any other reason than this to do an export in restricted mode rather than using the CONSISTENT=Y parameter?

    Hello,
    I believe the primary reason is, as you mentioned, that CONSISTENT=Y is going to need a lot of undo space for a busy, large database. Depending on your situation, it may or may not be feasible to allocate that much undo space.
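    For illustration, the two approaches look roughly like this (credentials and file names are placeholders):
    With the database open, relying on undo for a consistent image:
    exp system/<password> full=y consistent=y file=full.dmp log=full_exp.log
    Or quiesce the database instead and skip CONSISTENT=Y:
    SQL> STARTUP RESTRICT
    exp system/<password> full=y file=full.dmp log=full_exp.log
    SQL> ALTER SYSTEM DISABLE RESTRICTED SESSION;
    The second form avoids the undo pressure but locks normal users out for the duration of the export.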

  • Kernel parameter for large concurrent connections database.

    Hi,
    Does anyone have suggestions about the kernel parameters we have to modify for a database with a large number of concurrent connections (about 20,000)? We would like to use a Sun M8000 as our database server. The Oracle installation guide lists many parameters and values, but that seems aimed at a general database, not the one we need. Please, somebody, help us. Thanks!

    I certainly don't have any idea where the 150 GB memory usage for 20,000 concurrent users comes from. That's 7.5 MB per user session.
    Session memory usage depends on the Oracle version, OS, database configuration, application behaviour, and so on.
    From the following query we can tell that average session memory usage is around 1.5 MB in normal usage on a 10g Linux platform.
    It could go up to 20-30 MB if a user session is doing large data manipulation.
    SYS@test >  l
      1   select value, n.name|| '('||s.statistic#||')' , sid
      2    from v$sesstat s , v$statname n
      3    where s.statistic# = n.statistic#
      4    and n.name like '%ga memory%'
      5*   order by sid
    SYS@azerity > /
         VALUE N.NAME||'('||S.STATISTIC#||')'        SID
        486792 session pga memory max(26)             66
        486792 session pga memory(25)                 66
        225184 session uga memory(20)                 66
        225184 session uga memory max(21)             66
        225184 session uga memory max(21)             70
        486792 session pga memory max(26)             70
        486792 session pga memory(25)                 70
        225184 session uga memory(20)                 70
        225184 session uga memory(20)                 72
        486792 session pga memory max(26)             72
        225184 session uga memory max(21)             72
        486792 session pga memory(25)                 72
        486792 session pga memory max(26)             74
        225184 session uga memory(20)                 74
        486792 session pga memory(25)                 74
        225184 session uga memory max(21)             74
    ......Aggregate by SID
      1   select sum(value),sid
      2    from v$sesstat s , v$statname n
      3    where s.statistic# = n.statistic#
      4    and n.name like '%ga memory%'
      5    group by sid
      6*   order by sid
    SYS@test > /
    SUM(VALUE)        SID
       1423952         66
       1423952         70
       1423952         72
       1423952         74
       1423952         75
       1423952         77
       1423952         87
       1423952        101
       2733648        104
       1423952        207
      23723248        209
      23723248        210
       1293136        215
       7370320        216
    .........

  • Problem with  large databases.

    Lightroom doesn't seem to like large databases.
    I am playing catch-up using Lightroom to enter keywords to all my past photos. I have about 150K photos spread over four drives.
    Even placing a separate database on each hard drive is causing problems.
    The program crashes when importing large numbers of photos from several folders. (I do not ask it to render previews.) If I relaunch the program, and try the import again, Lightroom adds about 500 more photos and then crashes, or freezes again.
    I may have to go back and import them one folder at a time, or use iView instead.
    This is a deal-breaker for me.
    I also note that it takes several minutes after opening a database before the HD activity light stops flashing.
    I am using XP on a dual-core machine with 3 GB of RAM.
    Anyone else finding this?
    What is you work-around?

    Christopher,
    True, but given the number of posts where users have had similar problems ingesting images into LR, where LR runs without crashes and further trouble once the images are in, the evidence points to some LR problem with ingesting large numbers.
    It may also be that users are attempting to use LR for editing during the ingestion of large numbers. I found that I simply could not do that without a crash occurring. When I limited it to 2k at a time, leaving my hands off the keyboard while the import occurred, everything went without a hitch.
    However, as previously pointed out, it shouldn't require that; none of my other DAMs using SQLite behave this way, and I can multitask while they are ingesting.
    But you are right: multiple single causes, and complexly interrelated multiple causes, could account for it on a given configuration.

  • Move large database to other server using RMAN in less downtime

    Hi,
    We have a large database, around 20 TB. We want to migrate (move) the database from one server to another. We do not want to use a standby option.
    1) How can we move the database using RMAN with minimal downtime?
    2) Other than RMAN, is there any other option available to move the database to the new server?
    For option 1 (restore using RMAN), are the steps below valid? If they are, how should we implement them?
    a) Take a full backup from the source (source DB is up)
    b) Restore the full backup on the target (source DB is up)
    c) Take an incremental backup from the source (source DB is up)
    d) Apply the incremental backup on the target (source DB is up)
    e) Repeat steps c and d until the downtime window (source DB is up)
    f) Shut down and mount the source DB, and take a final incremental backup (source DB is down)
    g) Apply the last incremental backup and start the target database (target is up and the application accesses the new DB)
    database version: 10.2.0.4
    OS: SUN solaris 10
    Edited by: Rajak on Jan 18, 2012 4:56 AM

    Simple:
    I do this all the time to relocate file-system files, but the principle is the same. You can do this in iterations so you do not need to do it all at once:
    Starting at 8 AM, move less-used files, and move the more active files in the afternoon, using the following backup method.
    SCRIPT-1
    RMAN> BACKUP AS COPY
          DATAFILE 4   # "/some/orcl/datafile/usersdbf"
          FORMAT "+USERDATA";
    Do as many files as you think you can handle during your downtime window.
    During your downtime window: stop all applications so there is no contention in the database
    SCRIPT-2
    ALTER DATABASE DATAFILE 4 offline;
    SWITCH DATAFILE 4 TO COPY;
    RECOVER DATAFILE 4;
    ALTER DATABASE DATAFILE 4 online;
    I then execute the delete of the original file at some point later, after we make sure everything has recovered and been successfully brought back online.
    SCRIPT-3
    DELETE DATAFILECOPY "/some/orcl/datafile/usersdbf"
    For datafiles/tablespaces that are really busy, I typically copy them later in the afternoon, as there are fewer archive logs to go through in order to make them consistent. The ones copied in the morning have more logs to go through, but there is less likelihood of there being anything to do.
    Using this method, we have moved upwards of 600 GB at a time, and the actual downtime for the switchover is < 2 hrs. YMMV. As I said, this can be done in stages to minimize overall downtime.
    If you need some documentation support see:
    http://docs.oracle.com/cd/E11882_01/server.112/e18951/asm_rman.htm#CHDBDJJG
    And before you do ANYTHING... TEST TEST TEST TEST TEST. Create a dummy tablespace on QFS and use this procedure to move it to ASM to ensure you understand how it works.
    Good luck! (hint: scripts to generate these scripts can be your friend.)
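    For the incremental restore/recover variant listed in the original post (steps a to g), the RMAN side might look roughly like this (paths and the tag are placeholders; the controlfile/spfile restore on the new server is omitted for brevity):
    On the source (open), take the initial level 0 backup to a staging area:
    RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'move' FORMAT '/stage/lvl0_%U';
    Transfer /stage to the new server, mount the database there, and:
    RMAN> RESTORE DATABASE;
    Then, as often as needed before the cutover, on the source:
    RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'move' FORMAT '/stage/lvl1_%U';
    and on the target, still mounted:
    RMAN> CATALOG START WITH '/stage/';
    RMAN> RECOVER DATABASE NOREDO;
    After the final incremental (taken with the source mounted so no further changes occur), apply it the same way and open the target with RESETLOGS.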
