Database Recovery Scripts

Does anyone have a set of database recovery scripts for various scenarios for 8i and 9i databases running on Windows 2000 and 2003?
Cheers,
Derek.

>>
We do a cold backup each night and have archiving on. The scenarios are any that may occur, e.g. media failure, dropped tables, lost control files, etc.
>>
Hey Derek, the problem with a cold backup is that when media recovery is required, you can't simply restore just the datafile(s) that have problems; you need to restore the complete database.
It's also difficult to do incomplete recovery or point-in-time recovery with only cold backups.
I strongly recommend you start thinking about online backups. You need to assess your business requirements: how much data can you afford to lose?
jaffar
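As a starting point, here is a minimal sketch of a user-managed online (hot) backup, assuming the database is running in ARCHIVELOG mode (as Derek describes); the tablespace name and paths are placeholders:
ALTER TABLESPACE users BEGIN BACKUP;
-- copy the datafiles of this tablespace at the OS level (e.g. with ocopy on Windows)
ALTER TABLESPACE users END BACKUP;
-- force a log switch so the redo generated during the backup is archived
ALTER SYSTEM ARCHIVE LOG CURRENT;
-- back up the control file as well
ALTER DATABASE BACKUP CONTROLFILE TO 'D:\backup\control.bkp';
With backups like this in place, a damaged datafile can be restored and recovered individually (RECOVER DATAFILE) instead of restoring the whole database.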

Similar Messages

  • Pacman local database recovery

    Hi,
    So my pacman database @ /var/lib/pacman/local has been killed, it's probably my fault but pacman-cage certainly didn't help
    Anyway, if I use the recovery method outlined here: https://wiki.archlinux.org/index.php/Pa … l_database will I be screwed if the reconstructed database contains newer versions than what I actually have installed? For example, my database broke a few days ago, and quite a bit of newer software has hit the mirrors since then. As far as I can tell, the recovery scripts don't (or can't) account for that.
    Maybe it would be easier for me to start from scratch and reinstall (without pacman-cage this time...)?
    Thanks

    I've never run the recovery technique you've linked to, but from what it describes, it reads the pacman log file on your machine to generate a list of packages.  I really don't see how the list it builds could be anything other than the versions that are actually installed on your machine.
    I would go ahead and attempt the procedure.  If that fails, then go ahead and re-install.

  • Questions About Database Recovery (-30975)

    Hello,
    In Berkeley DB 4.5.20, we are seeing the following error sporadically, but more frequently than we'd like (which is to say, not at all): "BerkeleyDbErrno=-30975 - DbEnv::open: DB_RUNRECOVERY: Fatal error, run database recovery"
    This exception is being thrown mostly, if not exclusively, during the environment open call. Still investigating.
    I will post my environment below, but first some questions.
    1. How often should a database become corrupt?
    2. What are the causes of this corruption? Can they be caused by "chance?" (I.e. app is properly coded.) Can they be caused by improper coding? If so, is there a list of common things to check?
    3. Does Oracle expect application developers to create their own recovery handlers, especially for apps that require 100% uptime? E.g. using DB_ENV->set_event_notify or filtering on DB_RUNRECOVERY.
    Our environment:
    Windows Server 2003 SP2
    Berkeley DB 4.5.20
    set_verbose(DB_VERB_WAITSFOR, 1);
    set_cachesize(0, 65536 * 1024, 1);
    set_lg_max(10000000);
    set_lk_detect(DB_LOCK_YOUNGEST);
    set_timeout(60000000, DB_SET_LOCK_TIMEOUT);
    set_timeout(60000000, DB_SET_TXN_TIMEOUT);
    set_tx_max(100000);
    set_flags(DB_TXN_NOSYNC, 1);
    set_flags(DB_LOG_AUTOREMOVE, 1);
    set_lk_max_lockers(10000);
    set_lk_max_locks(10000);
    set_lk_max_objects(10000);
    open(sPath, DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_MPOOL | DB_THREAD | DB_INIT_TXN | DB_RECOVER, 0);
    set_pagesize(4096);
    u_int32_t dbOpenFlags = DB_CREATE | DB_AUTO_COMMIT;
    pDbPrimary->open(NULL, strFile, NULL, DB_HASH, dbOpenFlags, 0);
    We also have a number of secondary databases.
    One additional piece of information that might be relevant is that the databases where this happens (we have 8 in total managed by our process) seem to be the two specific databases that at times aren't opened until well after the process is up and running, due to the nature of their data. That is to say, 6 of the other databases are normally opened during startup of our service. We are still investigating to see whether this is consistently true.

    Here is the output from the error logs (we didn't have this properly set up until now) when this error opening the environment happens:
    12/17/2007 17:18:12 (e64/518) 1024: Berkeley Error: CDbBerkeley MapViewOfFile: Not enough storage is available to process this command.
    12/17/2007 17:18:12 (e64/518) 1024: Berkeley Error: CDbBerkeley PANIC: Not enough space
    12/17/2007 17:18:12 (e64/518) 1024: Berkeley Error: CDbBerkeley DeleteFile: C:\xxxxxxxx\Database\xxxJOB_OAT\__db.003: Access is denied.
    12/17/2007 17:18:12 (e64/518) 1024: Berkeley Error: CDbBerkeley MapViewOfFile: Not enough storage is available to process this command.
    12/17/2007 17:18:12 (e64/518) 1024: Berkeley Error: CDbBerkeley PANIC: Not enough space
    12/17/2007 17:18:12 (e64/518) 1024: Berkeley Error: CDbBerkeley PANIC: DB_RUNRECOVERY: Fatal error, run database recovery
    12/17/2007 17:18:30 (e64/518) 1024: Berkeley Error: CDbBerkeley unable to join the environment
    12/17/2007 17:18:30 (e64/518) 1024: Berkeley Error: CDbBerkeley DeleteFile: C:\xxxxxxxx\Database\xxxJOB_OAT\__db.003.del.0547204268: Access is denied.
    12/17/2007 17:18:30 (e64/518) 1024: Berkeley Error: CDbBerkeley DeleteFile: C:\xxxxxxxx\Database\xxxJOB_OAT\__db.003: Access is denied.
    12/17/2007 17:19:18 (e64/518) 1024: Database EInitialize failed. (C:\xxxxxxxx\Database\xxxJOB_OAT: BerkeleyDbErrno=-30975 - DbEnv::open: DB_RUNRECOVERY: Fatal error, run database recovery)
    The last line is generated by a DbException and was all we were seeing up until now.
    I also set_verbose(DB_VERB_RECOVERY, 1) and set_msgcall to the same log file. We get verbose messages on the 1st 7 database files that open successfully, but none from the last one, I assume because they output to set_errcall instead.
    There is 67GB of free space on this disk, by the way, so I'm not sure what "Not enough space" means.
    Thanks again for your help.

  • Problem in performing multiple Point-In-Time Database Recovery using RMAN

    Hello Experts,
    I am getting an error while performing database point in time recovery multiple times using RMAN. Details are as follows :-
    Environment:
    Oracle 11g, ASM,
    Database DiskGroups: DG_DATA (data files), DG_ARCH (archive logs), DG_REDO (redo logs + control file).
    Snapshot DiskGroups :
    Snapshot1 (taken at 9 am): SNAP1_DATA, SNAP1_ARCH, +SNAP1_REDO
    Snapshot2 (taken at 10 am): SNAP2_DATA, SNAP2_ARCH, +SNAP2_REDO
    Steps performed for point in time recovery:
    1. Restore control file from snapshot 2.
         RMAN> RESTORE CONTROLFILE from '+SNAP2_REDO/orcl/CONTROLFILE/Current.256.777398261';
    2. For 2nd recovery, reset incarnation of database to snapshot 2 incarnation (Say 2).
    3. Catalog data files from snapshot 1.
    4. Catalog archive logs from snapshot 2.
    5. Perform point in time recovery till given time.
         STARTUP MOUNT;
         RUN {
              SQL "ALTER SESSION SET NLS_DATE_FORMAT = ''dd-mon-yyyy hh24:mi:ss''";
              SET UNTIL TIME "06-mar-2013 09:30:00";
              RESTORE DATABASE;
              RECOVER DATABASE;
              ALTER DATABASE OPEN RESETLOGS;
         }
    Results:
    Recovery 1: At 10:30 am, I performed the first point-in-time recovery till 9:30 am; it was successful. The database incarnation was raised from *2* to *3*.
    Recovery 2: At 11:10 am, I performed another point-in-time recovery till 9:45 am; while doing it I reset the incarnation of the DB to *2*, and it failed with the following error:
    Starting recover at 28-FEB-13
    using channel ORA_DISK_1
    starting media recovery
    media recovery failed
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 03/06/2013 11:10:57
    ORA-00283: recovery session canceled due to errors
    RMAN-11003: failure during parse/execution of SQL statement: alter database recover if needed
    start until time 'MAR 06 2013 09:45:00'
    ORA-00283: recovery session canceled due to errors
    ORA-00313: open failed for members of log group 1 of thread 1
    ORA-00312: online log 1 thread 1: '+DG_REDO/orcl/onlinelog/group_1.257.807150859'
    ORA-17503: ksfdopn:2 Failed to open file +DG_REDO/orcl/onlinelog/group_1.257.807150859
    ORA-15012: ASM file '+DG_REDO/orcl/onlinelog/group_1.257.807150859' does not exist
    Doubts:
    1. Why did the recovery fail the 2nd time but not the 1st time, and why is RMAN looking for online redo log group_1.257.807150859 in the 2nd recovery?
    3. I tried restoring the control file from autobackup; in that case both the 1st and 2nd recovery succeeded.
    However, for this to work, I always need to keep the autobackup feature enabled.
    How reliable is control file autobackup? Is there any alternative to using autobackup; can I restore the control file from the snapshot backup only?
    4. If I restore the control file from autobackup, then from what point in time/SCN does RMAN restore the control file?
    Please help me out in this issue.
    Thanks.

    992748 wrote:
    Hello experts,
    I'm a bit of a newbie to RMAN recovery. Please help me with these doubts:
    1. If I have backups of the datafiles, archive logs and control file, but the current online REDO logs are lost, can I perform incomplete database recovery?
    Yes, if you have backups of everything else.
    2. Up to what maximum time/SCN can incomplete database recovery be performed?
    Assuming the only thing lost is the redo logs, you can recover to the last SCN in the last archivelog.
    3. What is the role of online REDO logs in incomplete database recovery?
    They provide the final redo changes - the ones that have not been written to archivelogs.
    Are they required for incomplete recovery?
    It depends on how much incomplete recovery you need to do.
    Think of all of your changes as a constant stream of redo information. As a redo log fills, it is copied to archive, then (eventually) reused. Over time, your redo stream is in archivelog_1, continuing into archivelog_2, then to 3, and eventually, when you get to the last archivelog, into the online redo. A recovery will start with the oldest necessary point in the redo stream and continue forward. Whether or not you need the online redo for a PIT recovery depends on how far forward you need to recover.
    But you should take every precaution to prevent loss of online redo logs .. starting with having multiple members in each redo group ... and keeping those multiple members on physically separate disks.
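    As a concrete illustration of the incomplete recovery described above, here is a minimal RMAN sketch, assuming backups of the datafiles, archived logs and control file are already restored/cataloged; the timestamp is a placeholder:
    RMAN> STARTUP MOUNT;
    RMAN> RUN {
            SET UNTIL TIME "TO_DATE('06-MAR-2013 09:45:00','DD-MON-YYYY HH24:MI:SS')";
            RESTORE DATABASE;
            RECOVER DATABASE;
          }
    RMAN> ALTER DATABASE OPEN RESETLOGS;
    Everything in the redo stream after the UNTIL TIME, including anything present only in lost online redo logs, is discarded by the RESETLOGS.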

  • Build new database through scripts that must understand Spanish character sets.

    Hello Gurus,
    I need some simple advice, a good chance for some quick points for you.
    I have never built a database to understand any character set other than American English. I now have to build a database that will be used for Spanish characters, keyboards, etc., but I will be using English for the 11g software install. I only wish to be able to show Spanish characters in the data for customers' names.
    I will be creating the database with scripts I have made to make the standard template for database files, control files, etc.
    Then I will be importing from a dump I have done that was made with American English character sets.
    System is 11g (11.2.0.3.0) on Linux Enterprise Server 5.8.
    I was thinking to use the AL32UTF8 character set, but I am unsure where to use it.
    My original test did not show Spanish characters in customers' names, like the 'tilde' or 'sueano' (pardon my spelling). But in this case I did not make the exception for Spanish; I only used the standard American English build (no changes in the init.ora file or the initial database build script).
    How can I adjust my parameter file for the initial creation of the database template so that it understands the Spanish character set, and still be able to import my dump file without error?
    EXAMPLE of a build script:
    CREATE DATABASE mynewdb
    USER SYS IDENTIFIED BY sys_password
    USER SYSTEM IDENTIFIED BY system_password
    LOGFILE GROUP 1 ('/u01/app/oracle/oradata/mynewdb/redo01.log') SIZE 100M,
    GROUP 2 ('/u01/app/oracle/oradata/mynewdb/redo02.log') SIZE 100M,
    GROUP 3 ('/u01/app/oracle/oradata/mynewdb/redo03.log') SIZE 100M
    MAXLOGFILES 5
    MAXLOGMEMBERS 5
    MAXLOGHISTORY 1
    MAXDATAFILES 100
    CHARACTER SET US7ASCII
    NATIONAL CHARACTER SET AL16UTF16
    If I replace NATIONAL CHARACTER SET AL16UTF16 with AL32UTF8, will it work to show Spanish characters?
    Sorry for the long winded question, any advice will be great.
    Thankfully,
    Shawn

    Hello,
    The national character set is for column types like NVARCHAR, not for normal VARCHAR data types, so if your dump file contains such column types you will also need to set it. The database character set is for the normal column types like VARCHAR. The use of Unicode is best practice if you use multiple languages, but keep in mind that a multibyte character set can be a problem during the import, because VARCHAR2(10) means 10 bytes and not 10 characters, so errors like "identifier too long" can occur during import.
    You can create the database.
    Check this documentation:
    http://docs.oracle.com/cd/B28359_01/server.111/b28298/ch2charset.htm
    You can use a character set like WE8MSWIN1252, which also covers Spanish (as far as I know) and is a superset of US7ASCII.
    regards
    Peter
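    To make the distinction concrete, a minimal sketch of the two relevant clauses in Shawn's build script (only these lines change; everything else stays as in the original script), assuming AL32UTF8 is chosen as the database character set:
    CHARACTER SET AL32UTF8            -- database character set, used by CHAR/VARCHAR2/CLOB columns
    NATIONAL CHARACTER SET AL16UTF16  -- national character set, used only by NCHAR/NVARCHAR2/NCLOB columns
    Note that importing the US7ASCII dump into an AL32UTF8 database converts the data, and VARCHAR2 columns sized in bytes may need to be widened first, as Peter points out.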

  • "Fatal error, run database recovery " when there are no txns to recover.

    Hi, all.
    I have a DB file containing multiple databases. Without using DBEnvironments, I can open it to get the dbnames. I can open the databases RDONLY,
    and see that their contents are correct. I can open them RW, and everything works.
    But when I try to create a new one, I get this:
    D = bsddb3.db.DB()
    D.open('test.db', dbname='test', dbtype=B.DB_BTREE, flags=B.DB_CREATE)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    bsddb3.db.DBRunRecoveryError: (-30974, 'DB_RUNRECOVERY: Fatal error, run database recovery -- PANIC: fatal region error detected; run recovery')
    Note that this is in the non-transactional case. There is no Env, and there are no logfiles or __db files. So the error code mystifies me.
    Strace shows that the file is opened RW, and read through.
    >>> B.DB_VERSION_STRING
    'Berkeley DB 4.8.24: (August 14, 2009)'
    So, where to proceed? Many thanks for any and all help.

    Hmm. Other thing to note:
    [tradedesk@vader 2010-05-06.test]$ /usr/local/BerkeleyDB.4.8/bin/db_verify foo.db
    db_verify: Subdatabase entry references page 266 of invalid type 13
    db_verify: Page 0: non-invalid page 40 on free list
    db_verify: trading.db: DB_VERIFY_BAD: Database verification failed
    Not sure how that came about or how to prevent it, but it might have to do with this issue.

  • Can we get database creation script using any packages?

    Hi Friends,
    We can get a table creation script using the DBMS_METADATA.GET_DDL package. In the same way, is there any way to get the database creation script? I know that we can add some lines to the control file trace to convert it into a database creation script, but I would like to know whether it is possible through packages.
    thanks in advance.

    I think there's no package you can use for getting the database creation script. But anyway, you can search for it in [Oracle Database PL/SQL Packages and Types Reference|http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/toc.htm]
    Kamran Agayev A. (10g OCP)
    http://kamranagayev.wordpress.com
    [Step by Step install Oracle on Linux and Automate the installation using Shell Script |http://kamranagayev.wordpress.com/2009/05/01/step-by-step-installing-oracle-database-10g-release-2-on-linux-centos-and-automate-the-installation-using-linux-shell-script/]
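    For reference, a minimal sketch of the two approaches mentioned in this thread; the table and schema names are placeholders:
    -- object-level DDL through the package
    SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMP', 'SCOTT') FROM dual;
    -- closest built-in route to a database creation script: write the control file
    -- to a trace file and edit the resulting CREATE CONTROLFILE script by hand
    ALTER DATABASE BACKUP CONTROLFILE TO TRACE;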

  • SQLStateMapping.java:70 Error When Loading Database Capture Script Output

    I'm running "Migration->Third Party Database Offline Capture->Load Database Capture Script Output" (Sybase 12) (SQLDeveloper 1.5.5)
    After Tables are loaded (16000+ tables), I'm getting the following error in Migration Log:
    Error ocurred during capture: In Columns for <column_name>
    oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
    I could not find any hits that match this. What's the best method to troubleshoot this?

    Log a service request with the offline scripts and we can check them out and forward them also to Development if needed.

  • Failure during database recovery on Homogeneous System Copy

    Dear all,
    I am trying to do a system copy, and it fails after the execution step: database recovery.
    MaxDB: 7.6.5.15
    SAP Netweaver 7 Ehp 1
    Apparently this has something to do with LOAD_SYSTAB.
    I could run load_systab [-u <sysdba_user>,<sysdba_user_password>] manually, but the log file of SAPinst shows the following:
    WARNING[E] 2009-09-28 17:17:57.328
               CJSlibModule::writeError_impl()
    The dbmcli call for action LOAD_SYSTAB failed. SOLUTION: Check the logfile XCMDOUT.LOG.
    TRACE      2009-09-28 17:17:57.546 [iaxxejsbas.hpp:408]
               handleException<ESAPinstJSError>()
    Converting exception into JS Exception EJSException.
    TRACE      2009-09-28 17:17:57.562
    Function setMessageIdOfExceptionMessage: dbmodada.actorext.dbmcliCallFailed
    WARNING[E] 2009-09-28 17:17:57.562
               CJSlibModule::writeError_impl()
    The dbmcli call for action LOAD_SYSTAB failed. SOLUTION: Check the logfile XCMDOUT.LOG.
    TRACE      2009-09-28 17:17:57.562 [iaxxejsbas.hpp:483]
               EJS_Base::dispatchFunctionCall()
    JS Callback has thrown unknown exception. Rethrowing.
    ERROR      2009-09-28 17:17:57.781 [sixxcstepexecute.cpp:950]
    FCO-00011  The step sdb_instance_load_systables with step key |NW_ABAP_OneHost|ind|ind|ind|ind|0|0|NW_Onehost_System|ind|ind|ind|ind|1|0|NW_CreateDBandLoad|ind|ind|ind|ind|10|0|NW_CreateDB|ind|ind|ind|ind|0|0|NW_ADA_DB|ind|ind|ind|ind|6|0|SdbPreInstanceDialogs|ind|ind|ind|ind|4|0|SdbInstanceDialogs|ind|ind|ind|ind|1|0|SDB_INSTANCE_CREATE|ind|ind|ind|ind|0|0|sdb_instance_load_systables was executed with status ERROR .
    TRACE      2009-09-28 17:17:58.93 [iaxxgenimp.cpp:752]
                CGuiEngineImp::showMessageBox
    <html> <head> </head> <body> <p> An error occurred while processing option SAP NetWeaver 7.0 including Enhancement Package 1 Support Release 1 > Software Life-Cycle Options > System Copy > MaxDB > Target System Installation > Central System > Based on AS ABAP > Central System. You can now: </p> <ul> <li> Choose <i>Retry</i> to repeat the current step. </li> <li> Choose <i>View Log</i> to get more information about the error. </li> <li> Stop the option and continue with it later. </li> </ul> <p> Log files are written to C:\Program Files/sapinst_instdir/NW701/LM/COPY/ADA/SYSTEM/CENTRAL/AS-ABAP/. </p> </body></html>
    TRACE      2009-09-28 17:17:58.109 [iaxxgenimp.cpp:1255]
               CGuiEngineImp::acceptAnswerForBlockingRequest
    Waiting for an answer from GUI
    XCMDOUT.LOG shows only the SAP users data from the source system, and not for the target system which is having the error.
    Could somebody please advise me what to do?
    Thank you,
    Mariana

    Dear Christian,
    yes, I solved this LOAD_SYSTAB problem.
    This is what I did:
    1. check XCMDOUT.LOG
    2. However, in my case I did not see any clue there, so I read this link about LOAD_SYSTAB: http://maxdb.sap.com/doc/7_7/45/11cbd6459d7201e10000000a155369/content.htm
    I tried it manually, and it worked: dbmcli -d <DB_ID> -u DBMUser,password1 load_systab -u superdba,password2
    From there, I knew that I had entered the wrong SYSADM user (superdba) password; in my case this password was the same as the SAPinst Master Password.
    According to https://websmp130.sap-ag.de/sap(bD1kZSZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=25591
    for a new installation of a MaxDB database the default credential for SYSADM is: "superdba,admin"
    So, accordingly, the solution is:
    change the SYSADM password for the <DB_ID> in DBMGUI (D7D - Configuration - Database User area) to exactly the SAPinst Master Password.
    Hope this helps.
    Regards,
    Mariana

  • New database create script using DBCA

    Hi,
    I'm trying to generate a database creation script using DBCA. I have another database running on the same physical server (HP-UX and Oracle 10g R2). When I run DBCA, it creates scripts for Clone DB and Clone RMAN restore. Why is it not generating a script to create a new database instead of cloning the DB?
    Thanks

    Rock2 wrote:
    Hi,
    I'm trying to generate a database creation script using DBCA. I have another database running on the same physical server (HP-UX and Oracle 10g R2). When I run DBCA, it creates scripts for Clone DB and Clone RMAN restore. Why is it not generating a script to create a new database instead of cloning the DB?
    Thanks
    When you launch DBCA, you need to select to create a 'custom' database, not one of the pre-canned templates. The templates will all result in a script that does an RMAN restore from a backup that comes with the product. Selecting a 'custom' database will result in scripts built around the CREATE DATABASE SQL statement.
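    As a rough sketch of what the 'custom' route generates (scripts built around a CREATE DATABASE statement, followed by the data dictionary scripts), with placeholder names and paths:
    CREATE DATABASE mydb
      USER SYS IDENTIFIED BY sys_pwd
      USER SYSTEM IDENTIFIED BY system_pwd
      LOGFILE GROUP 1 ('/u01/oradata/mydb/redo01.log') SIZE 100M,
              GROUP 2 ('/u01/oradata/mydb/redo02.log') SIZE 100M
      CHARACTER SET AL32UTF8;
    -- the generated scripts then run the data dictionary scripts, roughly:
    @?/rdbms/admin/catalog.sql
    @?/rdbms/admin/catproc.sql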

  • Database creation scripts from a running RAC

    Hi all
    We have a two node Oracle 10g R2 RAC running on SLES 10 SP2 Itanium systems.
    We have 4 database instances running on the two nodes.
    I need to create database creation scripts for one of the databases, e.g. the script should contain all of the current configuration of the database (i.e. I don't want the script that would have been created at the time the database was created).
    Is this possible to achieve?
    thanks

    Hi,
    You can follow these steps to create your scripts using the notes below:
    Create Database Manually
    *How to create a RAC database using DBCA generated scripts from templates [ID 856783.1]*
    Create Database Service Manually
    *How To Configure Server Side Transparent Application Failover [ID 460982.1]*
    *10g & 11g :Configuration of TAF(Transparent Application Failover) and Load Balancing [ID 453293.1]*
    Regards,
    Levi Pereira
    Edited by: Levi Pereira on Mar 18, 2011 11:36 AM
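    In addition to the notes above, a minimal sketch of capturing the current configuration of a running database rather than its creation-time settings; file names are placeholders:
    -- current initialization parameters
    CREATE PFILE='/tmp/init_MYDB.ora' FROM SPFILE;
    -- current physical structure, written as an editable re-creation script
    ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/tmp/recreate_MYDB.sql';
    -- current tablespace definitions
    SELECT DBMS_METADATA.GET_DDL('TABLESPACE', tablespace_name) FROM dba_tablespaces;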

  • Database recovery (online redolog ?)

    hi all,
    It's been a while since I touched an Oracle DB. I have been reading around, and the emphasis for recovery is always on the backup and the archive logs, but I think that's wrong.
    can i check ->
    q1) For full database recovery, do I need the online redo logs as well?
    q2) If the answer to q1) is yes, how do I duplicate the online redo logs to the standby site? (I don't think rsync will work, as it cannot ensure consistency in the redo logs.)
    Will Oracle Data Guard sync the online redo logs as well?
    q3) For archive logs, besides manual rsyncing, there is LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1'.
    Do I need the Enterprise Edition for the above?
    Regards,
    Alan

    q1) For a complete recovery, yes, you need the online redo logs as well. Without the online redo logs it is still considered incomplete recovery, since you lose the data that resides in the online redo logs.
    q2) You do not need to sync the online redo logs manually. Once the backup is restored to the DR Data Guard site and the MRP process is initiated, Oracle will sync the online redo/archive logs automatically, based on the protection mode specified.
    q3) Oracle Data Guard applies to Enterprise Edition only. Without Enterprise Edition, you can configure log shipping (the manual way).
    Regards,
    Ilan
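    For q3), a minimal sketch of the archive log shipping setup referenced in the question; the service name standby1 comes from the question, and as Ilan notes, the Data Guard form of this transport requires Enterprise Edition:
    ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=standby1' SCOPE=BOTH;
    ALTER SYSTEM SET log_archive_dest_state_2 = 'ENABLE' SCOPE=BOTH;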

  • DB_RUNRECOVERY: Fatal error, run database recovery

    I am getting this error when trying to add data to QUEUE. But after I restart my app, this error does not happen anymore.
    2009-08-16 10:27:12.558990 [ERR] mod_cdr_bdb.c:370 Unable to add cdr to Queue. Error=DB_RUNRECOVERY: Fatal error, run database recovery
    Does anyone know what could be the cause of the error?

    Hi,
    Do you know the steps that lead up to this error? Can you reproduce it?
    Were there any error messages sent to the error log file? Can you confirm that you have verbose error messages turned on, by always initializing one of the error callback interfaces in your environment? These will provide verbose error messages:
    DB_ENV->set_errcall, DB_ENV->set_errfile, DB_ENV->set_errpfx, and DB_ENV->set_verbose.
    What flags are you using when opening the environment and the database?
    The procedure you have to follow when you receive this error is described here: [DB_RUNRECOVERY|http://www.oracle.com/technology/documentation/berkeley-db/db/ref/program/errorret.html#DB_RUNRECOVERY]
    DB_RUNRECOVERY:
    There exists a class of errors that Berkeley DB considers fatal to an entire Berkeley DB environment. An example of this type of error is a corrupted database page. The only way to recover from these failures is to have all threads of control exit the Berkeley DB environment, run recovery of the environment, and re-enter Berkeley DB. (It is not strictly necessary that the processes exit, although that is the only way to recover system resources, such as file descriptors and memory, allocated by Berkeley DB.)
    When this type of error is encountered, the error value DB_RUNRECOVERY is returned. This error can be returned by any Berkeley DB interface. Once DB_RUNRECOVERY is returned by any interface, it will be returned from all subsequent Berkeley DB calls made by any threads of control participating in the environment.
    Applications can handle such fatal errors in one of two ways: first, by checking for DB_RUNRECOVERY as part of their normal Berkeley DB error return checking, similarly to DB_LOCK_DEADLOCK or any other error. Alternatively, applications can specify a fatal-error callback function using the DB_ENV->set_event_notify method. Applications with no cleanup processing of their own should simply exit from the callback function.
    Thanks,
    Bogdan Coman

  • Object Level Recovery or Whole Database recovery

    I'm hoping someone may know how to advise me on the following;
    On a data warehouse DB (10.2.0.1.0) a team member removed records from three tables, and I have since attempted flashback recovery without success. The database is in ARCHIVELOG mode, with Flashback enabled but no flashback logging enabled. The rows were removed on Friday afternoon (it is now Monday). I attempted to get flashback logging enabled by ticking the "Enable Flashback Database" option in the Flash Recovery region of the Recovery settings and restarting the database. The database when restarted went into mount state, and subsequently on restarting (from mount; I did not dismount the DB) it still has flashback logging disabled. I attempted the flashback again, but the team member states the records still aren't there. EM however had given the message 'The select tables...X X....have been flashed back'. However, I can also see that EM says flashback logging is still disabled.
    I now consider I might be better off performing a 'Whole Database Recovery', as I simply want to get the tables recovered. I'm not sure if this will mean re-keying though. Can anyone advise? Thanks in advance. DW
    Message was edited by:
    David_W

    The first step you should try is flashback query, because with flashback query your database remains intact; you don't lose anything from your database. Of course, most likely it's too late for you now - just for future reference.
    Flashback Database is only available after you have configured the Flash Recovery Area and turned flashback logging on. It sounds like that applies here as well. Remember, even if you could successfully flash back your database to the point before the deletion, you would lose all data changes after that point. Flashback Database only buys you some time, because you don't need to restore datafiles from backup.
    The third option would be to restore from your last backup (the latest one before the deletion happened) and do incomplete recovery to the point in time right before the incident.
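    For future reference, a minimal sketch of the flashback query / FLASHBACK TABLE route described above; the table name and timestamp are placeholders, and the undo data covering the delete must still be available:
    -- look at the rows as they were before the delete
    SELECT * FROM dw.sales_detail
      AS OF TIMESTAMP TO_TIMESTAMP('2007-10-05 14:00:00', 'YYYY-MM-DD HH24:MI:SS');
    -- or put them back in place (requires row movement on the table)
    ALTER TABLE dw.sales_detail ENABLE ROW MOVEMENT;
    FLASHBACK TABLE dw.sales_detail TO TIMESTAMP TO_TIMESTAMP('2007-10-05 14:00:00', 'YYYY-MM-DD HH24:MI:SS');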

  • VISION demo database creation scripts

    Does anyone have a copy of the VISION demo database creation scripts that they could give me? I have to try and add this to an already installed EBS system.
    rgds
    Alan

    The VISION demo database is a fully populated EBS installation delivered by rapid install. You would just use rapidwiz and select the Vision Demo, and not fresh production. You need the VISION data in order to be able to run through the various process flows. You do not want to import Vision data into a non-VISION EBS.
