Database recovery status

Hi:
Last night my database broke down. It had 20 GB in memory at crash time.
I started the database, and it took more than 20 minutes to load the database into memory, but I could not see how fast, or at what rate, it was recovering into memory.
Is there any way to monitor the rate at which the database is recovering?
Is there any way to accelerate this operation?
Regards.

Hi Simon:
It is nice to hear from you.
Thanks for the recommendation; we are in the process of implementing it.
Regarding Chris's recommendation, I installed TimesTen on a different machine with a newer Linux version (we actually have Linux 5.4 in production); I tested on Linux 5.7.
These are my metrics.
According to the logs, after my server crashed and my database recovered:
TimesTen 7.0.5 / Linux 5.4
Checkpoint file read status: 427.5 mb read in 10 sec (42.8 mb/sec); 36436.5 mb remain, estimate completion in 852 sec
Checkpoint file read status: 810.2 mb read in 20 sec (40.5 mb/sec); 36053.7 mb remain, estimate completion in 889 sec
Checkpoint file read status: 1262.0 mb read in 30 sec (42.1 mb/sec); 35602.0 mb remain, estimate completion in 846 sec
Checkpoint file read status: 1753.8 mb read in 40 sec (43.8 mb/sec); 35110.2 mb remain, estimate completion in 800 sec
Checkpoint file read status: 2262.0 mb read in 50 sec (45.2 mb/sec); 34602.0 mb remain, estimate completion in 764 sec
Checkpoint file read status: 2761.8 mb read in 60 sec (46.0 mb/sec); 34102.2 mb remain, estimate completion in 740 sec
Fibre Channel link speed from disk: 2 Gbit/s
We tested on TimesTen 7.0.5 / Linux 5.7 (simulating a server crash):
2013-05-15 17:32:35.83 Info: : 9991: 9994/0xf4b2010: Reading checkpoint file 0; data segment = 18528.4 mb (note: may be less than permsize)
2013-05-15 17:32:45.00 Info: : 9991: 9994/0xf4b2010: Checkpoint file read status: 2133.5 mb read in 10 sec (213.3 mb/sec); 16394.9 mb remain, estimate completion in 76 sec
2013-05-15 17:32:55.00 Info: : 9991: 9994/0xf4b2010: Checkpoint file read status: 4354.2 mb read in 20 sec (217.7 mb/sec); 14174.2 mb remain, estimate completion in 65 sec
2013-05-15 17:33:05.00 Info: : 9991: 9994/0xf4b2010: Checkpoint file read status: 6683.2 mb read in 30 sec (222.8 mb/sec); 11845.2 mb remain, estimate completion in 53 sec
2013-05-15 17:33:15.00 Info: : 9991: 9994/0xf4b2010: Checkpoint file read status: 8893.8 mb read in 40 sec (222.3 mb/sec); 9634.7 mb remain, estimate completion in 43 sec
2013-05-15 17:33:25.00 Info: : 9991: 9994/0xf4b2010: Checkpoint file read status: 11425.2 mb read in 50 sec (228.5 mb/sec); 7103.2 mb remain, estimate completion in 31 sec
2013-05-15 17:33:35.00 Info: : 9991: 9994/0xf4b2010: Checkpoint file read status: 13626.5 mb read in 60 sec (227.1 mb/sec); 4901.9 mb remain, estimate completion in 21 sec
2013-05-15 17:33:45.00 Info: : 9991: 9994/0xf4b2010: Checkpoint file read status: 15854.0 mb read in 70 sec (226.5 mb/sec); 2674.4 mb remain, estimate completion in 11 sec
2013-05-15 17:33:55.00 Info: : 9991: 9994/0xf4b2010: Checkpoint file read status: 18188.8 mb read in 80 sec (227.4 mb/sec); 339.7 mb remain, estimate completion in 1 sec
Fibre Channel link speed from disk: 8 Gbit/s
This shows a considerable improvement in performance.
Regards.

Similar Messages

  • Failure during database recovery on Homogeneous System Copy

    Dear all,
    I am trying to do a system copy, and it fails after the execution step: database recovery.
    MaxDB: 7.6.5.15
    SAP NetWeaver 7 EHP 1
    Apparently this has something to do with LOAD_SYSTAB.
    I could run load_systab [-u <sysdba_user>,<sysdba_user_password>] manually, but the log file of SAPinst shows the following:
    WARNING[E] 2009-09-28 17:17:57.328
               CJSlibModule::writeError_impl()
    The dbmcli call for action LOAD_SYSTAB failed. SOLUTION: Check the logfile XCMDOUT.LOG.
    TRACE      2009-09-28 17:17:57.546 [iaxxejsbas.hpp:408]
               handleException<ESAPinstJSError>()
    Converting exception into JS Exception EJSException.
    TRACE      2009-09-28 17:17:57.562
    Function setMessageIdOfExceptionMessage: dbmodada.actorext.dbmcliCallFailed
    WARNING[E] 2009-09-28 17:17:57.562
               CJSlibModule::writeError_impl()
    The dbmcli call for action LOAD_SYSTAB failed. SOLUTION: Check the logfile XCMDOUT.LOG.
    TRACE      2009-09-28 17:17:57.562 [iaxxejsbas.hpp:483]
               EJS_Base::dispatchFunctionCall()
    JS Callback has thrown unknown exception. Rethrowing.
    ERROR      2009-09-28 17:17:57.781 [sixxcstepexecute.cpp:950]
    FCO-00011  The step sdb_instance_load_systables with step key |NW_ABAP_OneHost|ind|ind|ind|ind|0|0|NW_Onehost_System|ind|ind|ind|ind|1|0|NW_CreateDBandLoad|ind|ind|ind|ind|10|0|NW_CreateDB|ind|ind|ind|ind|0|0|NW_ADA_DB|ind|ind|ind|ind|6|0|SdbPreInstanceDialogs|ind|ind|ind|ind|4|0|SdbInstanceDialogs|ind|ind|ind|ind|1|0|SDB_INSTANCE_CREATE|ind|ind|ind|ind|0|0|sdb_instance_load_systables was executed with status ERROR .
    TRACE      2009-09-28 17:17:58.93 [iaxxgenimp.cpp:752]
                CGuiEngineImp::showMessageBox
    <html> <head> </head> <body> <p> An error occurred while processing option SAP NetWeaver 7.0 including Enhancement Package 1 Support Release 1 > Software Life-Cycle Options > System Copy > MaxDB > Target System Installation > Central System > Based on AS ABAP > Central System. You can now: </p> <ul> <li> Choose <i>Retry</i> to repeat the current step. </li> <li> Choose <i>View Log</i> to get more information about the error. </li> <li> Stop the option and continue with it later. </li> </ul> <p> Log files are written to C:\Program Files/sapinst_instdir/NW701/LM/COPY/ADA/SYSTEM/CENTRAL/AS-ABAP/. </p> </body></html>
    TRACE      2009-09-28 17:17:58.109 [iaxxgenimp.cpp:1255]
               CGuiEngineImp::acceptAnswerForBlockingRequest
    Waiting for an answer from GUI
    XCMDOUT.LOG shows only the SAP user data from the source system, and not from the target system, which is the one having the error.
    Could somebody please advise me what to do?
    Thank you,
    Mariana

    Dear Christian,
    Yes, I solved this LOAD_SYSTAB problem.
    This is what I did:
    1. Check XCMDOUT.LOG.
    2. However, in my case I did not see any clue there, so I read this page about LOAD_SYSTAB: http://maxdb.sap.com/doc/7_7/45/11cbd6459d7201e10000000a155369/content.htm
    I tried it manually, and it worked: dbmcli -d <DB_ID> -u DBMUser,password1 load_systab -u superdba,password2
    From there, I knew that I had entered the wrong SYSADM user (superdba) password; in my case this password was the same as the SAPinst Master Password.
    According to https://websmp130.sap-ag.de/sap(bD1kZSZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=25591
    in a new installation of a MaxDB database, the default credential for SYSADM is "superdba,admin".
    So, accordingly, the solution is:
    change the SYSADM password for the <DB_ID> in DBMGUI (D7D - Configuration - Database User area) to match the SAPinst Master Password exactly.
    Hope this helps.
    Regards,
    Mariana
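    If you need to script this step, here is a minimal sketch in Python that wraps the dbmcli call quoted above; the database name and passwords are placeholders.

        # Minimal sketch: run MaxDB's load_systab via dbmcli and fail loudly on
        # error. The arguments mirror the command line quoted above; all values
        # below are placeholders.
        import subprocess

        def load_systab(db_id, dbm_user, dbm_pwd, sysdba_user, sysdba_pwd):
            cmd = ["dbmcli",
                   "-d", db_id,
                   "-u", "%s,%s" % (dbm_user, dbm_pwd),
                   "load_systab",
                   "-u", "%s,%s" % (sysdba_user, sysdba_pwd)]
            result = subprocess.run(cmd, capture_output=True, text=True)
            # dbmcli conventionally answers OK / ERR on stdout
            if result.returncode != 0 or "ERR" in result.stdout:
                raise RuntimeError("load_systab failed:\n"
                                   + result.stdout + result.stderr)
            return result.stdout

        # Hypothetical values, matching the example above:
        # load_systab("D7D", "DBMUser", "password1", "superdba", "password2")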

  • Questions About Database Recovery (-30975)

    Hello,
    In Berkeley 4.5.20, we are seeing the following error sporadically, but more frequently than we'd like (which is, to say, not at all): "BerkeleyDbErrno=-30975 - DbEnv::open: DB_RUNRECOVERY: Fatal error, run database recovery"
    This exception is being thrown mostly, if not exclusively, during the environment open call. Still investigating.
    I will post my environment below, but first some questions.
    1. How often should a database become corrupt?
    2. What are the causes of this corruption? Can they be caused by "chance"? (I.e., the app is properly coded.) Can they be caused by improper coding? If so, is there a list of common things to check?
    3. Does Oracle expect application developers to create their own recovery handlers, especially for apps that require 100% uptime? E.g. using DB_ENV->set_event_notify or filtering on DB_RUNRECOVERY.
    Our environment:
    Windows Server 2003 SP2
    Berkeley DB 4.5.20
    set_verbose(DB_VERB_WAITSFOR, 1);
    set_cachesize(0, 65536 * 1024, 1);
    set_lg_max(10000000);
    set_lk_detect(DB_LOCK_YOUNGEST);
    set_timeout(60000000, DB_SET_LOCK_TIMEOUT);
    set_timeout(60000000, DB_SET_TXN_TIMEOUT);
    set_tx_max(100000);
    set_flags(DB_TXN_NOSYNC, 1);
    set_flags(DB_LOG_AUTOREMOVE, 1);
    set_lk_max_lockers(10000);
    set_lk_max_locks(10000);
    set_lk_max_objects(10000);
    open(sPath, DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_MPOOL | DB_THREAD | DB_INIT_TXN | DB_RECOVER, 0);
    set_pagesize(4096);
    u_int32_t dbOpenFlags = DB_CREATE | DB_AUTO_COMMIT;
    pDbPrimary->open(NULL, strFile, NULL, DB_HASH, dbOpenFlags, 0);
    We also have a number of secondary databases.
    One additional piece of information that might be relevant is that the databases where this happens (we have 8 in total managed by our process) seem to be the two specific databases that at times aren't opened until well after the process is up and running, due to the nature of their data. That is to say, 6 of the other databases are normally opened during startup of our service. We are still investigating whether this is consistently true.

    Here is the output from the error logs (we didn't have this properly set up until now) when this error opening the environment happens:
    12/17/2007 17:18:12 (e64/518) 1024: Berkeley Error: CDbBerkeley MapViewOfFile: Not enough storage is available to process this command.
    12/17/2007 17:18:12 (e64/518) 1024: Berkeley Error: CDbBerkeley PANIC: Not enough space
    12/17/2007 17:18:12 (e64/518) 1024: Berkeley Error: CDbBerkeley DeleteFile: C:\xxxxxxxx\Database\xxxJOB_OAT\__db.003: Access is denied.
    12/17/2007 17:18:12 (e64/518) 1024: Berkeley Error: CDbBerkeley MapViewOfFile: Not enough storage is available to process this command.
    12/17/2007 17:18:12 (e64/518) 1024: Berkeley Error: CDbBerkeley PANIC: Not enough space
    12/17/2007 17:18:12 (e64/518) 1024: Berkeley Error: CDbBerkeley PANIC: DB_RUNRECOVERY: Fatal error, run database recovery
    12/17/2007 17:18:30 (e64/518) 1024: Berkeley Error: CDbBerkeley unable to join the environment
    12/17/2007 17:18:30 (e64/518) 1024: Berkeley Error: CDbBerkeley DeleteFile: C:\xxxxxxxx\Database\xxxJOB_OAT\__db.003.del.0547204268: Access is denied.
    12/17/2007 17:18:30 (e64/518) 1024: Berkeley Error: CDbBerkeley DeleteFile: C:\xxxxxxxx\Database\xxxJOB_OAT\__db.003: Access is denied.
    12/17/2007 17:19:18 (e64/518) 1024: Database EInitialize failed. (C:\xxxxxxxx\Database\xxxJOB_OAT: BerkeleyDbErrno=-30975 - DbEnv::open: DB_RUNRECOVERY: Fatal error, run database recovery)
    The last line is generated by a DbException and was all we were seeing up until now.
    I also set set_verbose(DB_VERB_RECOVERY, 1) and set_msgcall to the same log file. We get verbose messages on the first 7 database files that open successfully, but none from the last one; I assume that is because they go to set_errcall instead.
    There is 67 GB of free space on this disk, by the way, so I am not sure what "Not enough space" means.
    Thanks again for your help.

  • Problem in performing multiple Point-In-Time Database Recovery using RMAN

    Hello Experts,
    I am getting an error while performing database point-in-time recovery multiple times using RMAN. Details are as follows:
    Environment:
    Oracle 11g, ASM
    Database disk groups: +DG_DATA (data files), +DG_ARCH (archive logs), +DG_REDO (redo logs, control file)
    Snapshot disk groups:
    Snapshot 1 (taken at 9 am): +SNAP1_DATA, +SNAP1_ARCH, +SNAP1_REDO
    Snapshot 2 (taken at 10 am): +SNAP2_DATA, +SNAP2_ARCH, +SNAP2_REDO
    Steps performed for point in time recovery:
    1. Restore control file from snapshot 2.
         RMAN> RESTORE CONTROLFILE from '+SNAP2_REDO/orcl/CONTROLFILE/Current.256.777398261';
    2. For the 2nd recovery, reset the incarnation of the database to the snapshot 2 incarnation (say, 2).
    3. Catalog data files from snapshot 1.
    4. Catalog archive logs from snapshot 2.
    5. Perform point-in-time recovery up to the given time:
         STARTUP MOUNT;
         RUN {
              SQL "ALTER SESSION SET NLS_DATE_FORMAT = ''dd-mon-yyyy hh24:mi:ss''";
              SET UNTIL TIME "06-mar-2013 09:30:00";
              RESTORE DATABASE;
              RECOVER DATABASE;
              ALTER DATABASE OPEN RESETLOGS;
         }
    Results:
    Recovery 1: At 10:30 am, I performed the first point-in-time recovery, up to 9:30 am; it was successful. The database incarnation was raised from 2 to 3.
    Recovery 2: At 11:10 am, I performed another point-in-time recovery, up to 9:45 am; while doing it I reset the incarnation of the DB to 2, and it failed with the following error:
    Starting recover at 28-FEB-13
    using channel ORA_DISK_1
    starting media recovery
    media recovery failed
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 03/06/2013 11:10:57
    ORA-00283: recovery session canceled due to errors
    RMAN-11003: failure during parse/execution of SQL statement: alter database recover if needed
    start until time 'MAR 06 2013 09:45:00'
    ORA-00283: recovery session canceled due to errors
    ORA-00313: open failed for members of log group 1 of thread 1
    ORA-00312: online log 1 thread 1: '+DG_REDO/orcl/onlinelog/group_1.257.807150859'
    ORA-17503: ksfdopn:2 Failed to open file +DG_REDO/orcl/onlinelog/group_1.257.807150859
    ORA-15012: ASM file '+DG_REDO/orcl/onlinelog/group_1.257.807150859' does not exist
    Doubts:
    1. Why did the recovery fail the 2nd time but not the 1st, and why is RMAN looking for online redo log group_1.257.807150859 in the 2nd recovery?
    3. I tried restoring the control file from autobackup; in that case both the 1st and the 2nd recovery succeeded.
    However, for this to work I always need to keep the autobackup feature enabled.
    How reliable is control file autobackup? Is there any alternative to using autobackup; can I restore the control file from the snapshot backup only?
    4. If I restore the control file from autobackup, from what point in time/SCN does RMAN restore the control file?
    Please help me out with this issue.
    Thanks.

    992748 wrote:
    Hello experts,
    I'm a bit of a newbie to RMAN recovery. Please help me with these doubts:
    1. If I have backups of the datafiles, archive logs, and control file, but the current online redo logs are lost, can I perform incomplete database recovery?
    Yes, if you have backups of everything else.
    2. Up to what maximum time/SCN can an incomplete database recovery be performed?
    Assuming the only thing lost is the redo logs, you can recover to the last SCN in the last archivelog.
    3. What is the role of the online redo logs in incomplete database recovery?
    They provide the final redo changes - the ones that have not yet been written to archivelogs.
    Are they required for incomplete recovery?
    It depends on how much incomplete recovery you need to do.
    Think of all of your changes as a constant stream of redo information. As a redo log fills, it is copied to archive, then (eventually) reused. Over time, your redo stream is in archivelog_1, continuing into archivelog_2, then into 3, and eventually, when you get to the last archivelog, into the online redo. A recovery will start at the oldest necessary point in the redo stream and continue forward. Whether or not you need the online redo for a PIT recovery depends on how far forward you need to recover.
    But you should take every precaution to prevent loss of the online redo logs, starting with having multiple members in each redo group, and keeping those members on physically separate disks.

  • How can I determine what sites are being referenced within Central Admin Upgrade and Migration Manage Databases Upgrade Status?

    When I go to Central Admin > Upgrade and Migration  > Manage Databases Upgrade Status, I have 2 content databases which have the status:
    Database is up to date, but some sites are not completely upgraded.
    How can I determine which sites are not completely upgraded?

    Manage Databases Upgrade Status provides details of all active and offline DBs; you can get the same result
    using the PowerShell cmdlets below.
    Get-SPDatabase and Get-SPContentDatabase list all active databases/content DBs in the farm, including service application DBs and the Central Admin DB.
    Get-SPDatabase | Format-Table Name, ID
    Coming back to your question: if you find that some sites are not completely upgraded, run the command below to understand the cause of the issue on the specific DB.
    Test-SPContentDatabase WSS_ContentDB_Name
    If you find any missing-file issues in the DB, resolve them in order to upgrade the content database.
    (It verifies that all customizations referenced within the content database are also installed in the web application. This cmdlet can be issued
    against a content database currently attached to the farm, or against a content database that is not connected to the farm.)
    Use the Upgrade-SPContentDatabase cmdlet
    to resume a failed database upgrade or begin a build-to-build database upgrade against a SharePoint content database:
    Upgrade-SPContentDatabase WSS_Content
    reference:
    http://technet.microsoft.com/en-us/library/ff607813(v=office.15).aspx
    http://technet.microsoft.com/en-us/library/ff607941(v=office.15).aspx
    Thanks, ShankarSingh (MCP)

  • "Fatal error, run database recovery " when there are no txns to recover.

    Hi, all.
    I have a DB file containing multiple databases. Without using DBEnvironments, I can open it to get the dbnames. I can open the databases RDONLY,
    and see that their contents are correct. I can open them RW, and everything works.
    But when I try to create a new one, I get this:
    >>> D = bsddb3.db.DB()
    >>> D.open('test.db', dbname='test', dbtype=B.DB_BTREE, flags=B.DB_CREATE)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    bsddb3.db.DBRunRecoveryError: (-30974, 'DB_RUNRECOVERY: Fatal error, run database recovery -- PANIC: fatal region error detected; run recovery')
    Note that this is in the non-transactional case. There is no Env, and there are no logfiles or __db files. So the error code mystifies me.
    Strace shows that the file is opened RW, and read through.
    >>> B.DB_VERSION_STRING
    'Berkeley DB 4.8.24: (August 14, 2009)'
    So, where to proceed? Many thanks for any and all help.

    Hmm. Another thing to note:
    [tradedesk@vader 2010-05-06.test]$ /usr/local/BerkeleyDB.4.8/bin/db_verify foo.db
    db_verify: Subdatabase entry references page 266 of invalid type 13
    db_verify: Page 0: non-invalid page 40 on free list
    db_verify: trading.db: DB_VERIFY_BAD: Database verification failed
    Not sure how that came about or how to prevent it, but it might have to do with this issue.
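    To automate that check, here is a minimal sketch in Python that shells out to the same db_verify utility; the binary path and file name are placeholders.

        # Minimal sketch: run db_verify against a database file and report
        # whether verification passed. Paths are placeholders.
        import subprocess

        DB_VERIFY = "/usr/local/BerkeleyDB.4.8/bin/db_verify"

        def verify(db_file):
            result = subprocess.run([DB_VERIFY, db_file],
                                    capture_output=True, text=True)
            # db_verify exits non-zero (and prints DB_VERIFY_BAD) on failure
            return result.returncode == 0, result.stdout + result.stderr

        ok, report = verify("foo.db")
        print("OK" if ok else "CORRUPT:\n" + report)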

  • Database recovery (online redolog ?)

    hi all,
    It's been a while since I touched an Oracle DB. I have been reading around, and the emphasis for recovery is always on the backup and the archive logs, but I think that's wrong.
    Can I check:
    q1) For a full database recovery, do I need the online redo logs as well?
    q2) If the answer to q1 is yes, how do I duplicate the online redo logs to the standby site? (I don't think rsync will work, as it cannot ensure consistency in the redo log.)
    Will Oracle Data Guard sync the online redo logs as well?
    q3) For archive logs, besides manual rsyncing, there is LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1'.
    Do I need Enterprise Edition for the above?
    Regards,
    Alan

    q1) For a complete recovery, yes, you need the online redo logs as well. Without the online redo logs it is still considered an incomplete recovery, since you lose the data residing in the online redo logs.
    q2) You do not need to sync the online redo logs manually. Once the backup is restored to the DR Data Guard site and the MRP process is initiated, Oracle will sync the online redo logs/archive logs automatically, based on the protection mode specified.
    q3) Oracle Data Guard applies to Enterprise Edition only. Without Enterprise Edition, you can configure log shipping (the manual way).
    Regards,
    Ilan

  • DB_RUNRECOVERY: Fatal error, run database recovery

    I am getting this error when trying to add data to a QUEUE, but after I restart my app the error does not happen anymore.
    2009-08-16 10:27:12.558990 [ERR] mod_cdr_bdb.c:370 Unable to add cdr to Queue. Error=DB_RUNRECOVERY: Fatal error, run database recovery
    Does anyone know what could be the cause of the error?

    Hi,
    Do you know the steps that lead up to this error? Can you reproduce it?
    Were there any error messages sent to the error log file? Can you confirm that you have verbose error messages turned on, by always initializing one of the error callback interfaces in your environment? These will provide verbose error messages:
    DB_ENV->set_errcall, DB_ENV->set_errfile, DB_ENV->set_errpfx, and DB_ENV->set_verbose.
    What flags are you using when opening the environment and the database?
    The procedure you have to follow when you receive this error is described here: http://www.oracle.com/technology/documentation/berkeley-db/db/ref/program/errorret.html#DB_RUNRECOVERY
    DB_RUNRECOVERY:
    There exists a class of errors that Berkeley DB considers fatal to an entire Berkeley DB environment. An example of this type of error is a corrupted database page. The only way to recover from these failures is to have all threads of control exit the Berkeley DB environment, run recovery of the environment, and re-enter Berkeley DB. (It is not strictly necessary that the processes exit, although that is the only way to recover system resources, such as file descriptors and memory, allocated by Berkeley DB.)
    When this type of error is encountered, the error value DB_RUNRECOVERY is returned. This error can be returned by any Berkeley DB interface. Once DB_RUNRECOVERY is returned by any interface, it will be returned from all subsequent Berkeley DB calls made by any threads of control participating in the environment.
    Applications can handle such fatal errors in one of two ways: first, by checking for DB_RUNRECOVERY as part of their normal Berkeley DB error return checking, similarly to DB_LOCK_DEADLOCK or any other error. Alternatively, applications can specify a fatal-error callback function using the DB_ENV->set_event_notify method. Applications with no cleanup processing of their own should simply exit from the callback function.
    Thanks,
    Bogdan Coman
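    To make the documented procedure above concrete, here is a minimal sketch using the Python bsddb3 binding that appears elsewhere on this page. It illustrates the pattern (exit the environment, re-open with DB_RECOVER), not the poster's actual code; the environment path and flag set are assumptions.

        # Minimal sketch: treat DB_RUNRECOVERY as fatal, leave the environment,
        # then re-open it with DB_RECOVER so recovery runs before re-entering.
        from bsddb3 import db

        ENV_DIR = "/path/to/env"  # placeholder

        def open_env(run_recovery=False):
            env = db.DBEnv()
            flags = (db.DB_CREATE | db.DB_INIT_LOCK | db.DB_INIT_LOG |
                     db.DB_INIT_MPOOL | db.DB_INIT_TXN | db.DB_THREAD)
            if run_recovery:
                flags |= db.DB_RECOVER  # run normal recovery on open
            env.open(ENV_DIR, flags)
            return env

        env = open_env()
        try:
            pass  # ... normal queue/database work against env ...
        except db.DBRunRecoveryError:
            # Fatal environment error: all threads of control must exit the
            # environment, then recovery must be run before re-entering.
            env.close()
            env = open_env(run_recovery=True)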

  • Object Level Recovery or Whole Database recovery

    I'm hoping someone may know how to advise me on the following:
    On a data warehouse DB (10.2.0.1.0) a team member removed records from three tables, and I have since attempted flashback recovery without success. The database is in archivelog mode, with Flashback enabled, but no flashback logging enabled. The rows were removed on Friday afternoon (it is now Monday). I attempted to get flashback logging enabled by ticking the "Enable Flashback Database" option in the Flash Recovery region of the Recovery settings, and restarting the database. The database, when restarted, went into the mount state; subsequently, on restarting (from mount - I did not dismount the DB), it still has flashback logging disabled. I attempted the flashback again, but the team member states the records still aren't there. EM, however, had given the message 'The select tables...X X....have been flashed back'. However, I can also see that EM says flashback logging is still disabled.
    I now think I might be better off performing a 'Whole Database Recovery', as I simply want to get the tables recovered. I'm not sure if this will mean re-keying, though. Can anyone advise? Thanks in advance. DW

    The first thing you should try is a flashback query, because with a flashback query your database remains intact; you don't lose anything from your database. Of course, most likely it's too late for you now - just for future reference.
    Flashback Database is only available after you have configured the Flash Recovery Area and turned it on, which sounds like it applies here as well. Remember that even if you could successfully flash back your database to the point before the deletion, you would lose all data changes after that point. Flashback Database only buys you some time, because you don't need to restore datafiles from backup.
    The third option would be to restore from your last backup (the latest one before the deletion happened) and do an incomplete recovery to the point in time right before the incident.

  • Database connection status indicator

    I want to put something on the front panel of my program that will show the database connection status. I'm thinking of asking it to list tables (the result doesn't have to be displayed); if the connection is no longer good, it will show an error. I tried it and I think it works okay, but I don't know whether this is the proper way to do it; maybe there are better ways. Could someone point them out for me?
    Best regards,

    You definitely do not want to open and close the connection. Maybe the open/close will fail while the "real" connection is fine, or the real one failed but this separate open and close works. This could happen if the server has reached its maximum number of connections.
    Does your connection just go away?
    A simple way to check the connection is to just have a loop that periodically executes a meaningless statement.
    =====================
    LabVIEW 2012
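    The reply above is about LabVIEW, but the keepalive pattern is language-independent. Here is a minimal sketch in Python; the pyodbc driver, DSN, and interval are assumptions.

        # Minimal sketch: periodically execute a meaningless statement on the
        # existing connection and expose the result as a status flag.
        import threading
        import time
        import pyodbc  # assumed driver; any DB-API module works the same way

        class ConnectionMonitor:
            def __init__(self, conn, interval_s=30):
                self.conn = conn
                self.interval_s = interval_s
                self.connected = True
                threading.Thread(target=self._poll, daemon=True).start()

            def _poll(self):
                while True:
                    try:
                        # Meaningless but cheap: forces a server round trip.
                        self.conn.cursor().execute("SELECT 1").fetchone()
                        self.connected = True
                    except pyodbc.Error:
                        self.connected = False
                    time.sleep(self.interval_s)

        conn = pyodbc.connect("DSN=mydb")  # hypothetical DSN
        monitor = ConnectionMonitor(conn)
        # The front panel / UI reads monitor.connected.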

  • CUCM Database Replication Status MIB

    Hi Guys,
    Can you please tell me whether we can monitor the CUCM database replication status through an SNMP MIB or OID?
    Also, please guide me on how to configure SNMP traps for the MIBs.
    Regards,
    Indrajith PC

    Hi,
    please take a look at the CUCM serviceability guide:
    http://www.cisco.com/c/en/us/td/docs/voice_ip_comm/cucm/service/9_0/admin/CUCM_BK_C136FE37_00_cisco-unified-serviceability-administration-90/CUCM_BK_C136FE37_00_cisco-unified-serviceability-administration-guide_preface_00.html
    I'm afraid there's no trap generated when DB replication issues occur, so you might want to do periodic polling.
    The referenced document contains some basic information on which MIBs may be interesting to you and how to get them.
    Personally, I would download and register them all, and then just walk the whole tree and cherry-pick ;-)
    G.
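    Following the advice above, a minimal periodic-polling sketch in Python with pysnmp might look like this. The OID is a placeholder (sysDescr.0); substitute the replication-status object from the Cisco MIBs once you have downloaded and registered them.

        # Minimal sketch: poll a CUCM node over SNMP on a timer, since no trap
        # is generated for DB replication issues. OID and host are placeholders.
        import time
        from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                                  ContextData, ObjectType, ObjectIdentity, getCmd)

        HOST = "cucm.example.com"                          # placeholder
        OID = ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)  # placeholder OID

        def poll_once():
            err_ind, err_status, _, var_binds = next(getCmd(
                SnmpEngine(),
                CommunityData("public"),           # assumed community string
                UdpTransportTarget((HOST, 161)),
                ContextData(),
                ObjectType(OID)))
            if err_ind or err_status:
                return None  # treat any SNMP error as "status unknown"
            return var_binds[0][1]

        while True:
            print(poll_once())
            time.sleep(300)  # poll every 5 minutes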

  • Database Recovery Scripts

    Does anyone have a set of database recovery scripts for various scenarios for 8i and 9i databases running on Windows 2000 and 2003?
    Cheers,
    Derek.

    >>
    We do a cold backup each night and have archiving on. The scenarios are any that may occur, e.g. media failure, dropped tables, lost control files, etc.
    >>
    Hey Derek, the problem with cold backups is that when media recovery is required, you can't simply restore the datafile(s) that have problems; you need to restore the complete database.
    It is difficult to do an incomplete recovery or a point-in-time recovery with cold backups.
    I strongly recommend you start thinking about online backups. You need to assess your business requirements: how much can you afford to lose?
    jaffar

  • Database Recovery Time

    Hi
    How to calculate Database recovery time in 10g? On what factors does it depend on?
    Regards
    JIL

    JIL wrote:
    Hi
    How to calculate Database recovery time in 10g? On what factors does it depend on?
    Regards
    JIL
    It depends on:
    (1) Your backup strategy.
    (2) How much work you need to do during recovery, e.g. the number of archive log files/incremental backups needed for the recovery.
    (3) Whether the required archive log files are on disk or on tape.
    (4) Whether you store the backup as a file copy or as a backup set.
    (5) And, most important, your expertise in backup and recovery.

  • Need help on Database recovery

    Hi All,
    My development system crashed and I want to do a database recovery. I tried, but it didn't work; some of the data files got corrupted. Can anyone help me out with this?
    Thanks,
    Swetha.

    thx

  • Incomplete Database Recovery

    Can anyone help me with how to do an incomplete database recovery (change-, time-, or cancel-based)?
    I tried by looking at the documentation, but I always end up with an error.
    Many Thanks
    -Prabhu

    Hi,
    Would you please post what you did and what error you got?
    Michel
