Recovery LOG Script

Hi, we have copied our PRD system onto a new box.
The new system is not a standby system of the PRD system, just a copy.
We have built a script that sends log backups to tape and to this new system for recovery.
Is there a way to recover the log on the copied system with a script without user interaction?
I have tried this but with no success:
dbmcli -d PRD -u superdba,******* recover_start Autologbackup LOG
-24991,ERR_NODBSESSION: no database session available
Any help?
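The -24991 ERR_NODBSESSION error usually just means that recover_start was issued without an open database session. As a minimal sketch (password masked as in the post), mirroring what the generated import script further down does, open a dbmcli session, connect, switch to admin mode and only then start the recovery:
dbmcli -d PRD -u superdba,*******
db_connect
db_admin
recover_start Autologbackup LOG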

To make the Windows script more complete and automated, I have made some amendments so that each time it is run it generates a recovery file using the logs needed to bring the database from its current log page number up to the last available log backup page number, applies it, and then shuts the database back down, ready to be put back into admin mode for more log applications at a later point. It also queries the live host to find out which backup page that system is up to.
This needs to be run as a batch file rather than from the command line, due to the difference in the way Windows interprets the %% variables between the two modes.
It can then be scheduled to run at set times and keeps the standby up to date. The script is in three sections, which gets around the issue of the import starting before the recovery script file has finished being written.
The scripts need to be saved separately, and called from the central batch file on the standby server.
To adapt this script, the database name needs changing from CSP, and the password needs setting where xxxxx appears after SUPERDBA. The LiveDB variable also needs setting to the FQDN of the live host server, e.g. ADCB-EF-1.domain. Obviously, if the locations of your dbmcli.exe and the other paths are different, they need to be amended as well.
The command that searches for the logs (`dir E:\Backup\CSP\Log\Auto\au*.* /b /o:e`) needs the file/device path of the log backups and the first couple of letters of the file name, as does the name of the defined backup medium, in this case Auto_log_Backup.
Script 1 – Top level script to call the others and delete any previous import script file.
<Standby.bat> -
del D:\Batch\Recovery_Script\import_script.txt /f /q
call d:\batch\Recovery_Builder.bat
sleep 5
call d:\batch\Recovery_Apply.bat
Script 2 – Sets DB to Admin mode and then builds the import list and exports to script file.
<Recovery_Builder.bat>
rem %%a = Logfile Name
rem %%b = log file number
rem %%j = Page number
Set LiveDB=xxxxxx
rem Put the standby copy into admin mode
D:\sapdb\programs\pgm\dbmcli.exe -n localhost -d CSP -u SUPERDBA,xxxxx db_admin
rem Read the standby's current log page from db_restartinfo
for /f "usebackq tokens=1,2,3,4 delims= " %%a in (`"D:\sapdb\programs\pgm\dbmcli.exe -n localhost -d CSP -u SUPERDBA,xxxxx db_restartinfo |findstr /c:"Used LOG Page""`) do (set current_page=%%d)
rem For each log backup file, ask the live host for its last log page and look for the first file to apply
for /f "usebackq tokens=1,2 delims=." %%a in (`"dir E:\Backup\CSP\Log\Auto\au*.* /b /o:e"`) do (
for /f "usebackq tokens=1,2,3,4 delims= " %%g in (`"D:\sapdb\programs\pgm\dbmcli.exe -n %LiveDB% -d CSP -u SUPERDBA,xxxxx medium_label Auto_log_Backup %%b |findstr /c:"Last LOG Page""`) do (
call :find_backup_page %%j %%a %%b))
:exit_loop
rem Add a recover_replace line for every log backup after the first one
for /f "usebackq tokens=1,2 delims=." %%a in (`"dir E:\Backup\CSP\Log\Auto\au*.* /b /o:e"`) do (
if %%b GTR %first_file% (
echo recover_replace Auto_log_Backup "E:\Backup\CSP\Log\Auto\Autolog" %%b >> D:\Batch\Recovery_Script\import_script.txt))
goto end
:find_backup_page
set backup_page=%1
if %current_page% EQU %1 (
echo db_connect >> D:\Batch\Recovery_Script\import_script.txt
echo db_admin >> D:\Batch\Recovery_Script\import_script.txt
echo recover_start Auto_log_Backup LOG %3 >> D:\Batch\Recovery_Script\import_script.txt
set first_file=%3)
if %current_page% GTR %1 goto exit_loop
:end
Script 3 – Imports the generated script and then sets the DB back to offline.
<Recovery_Apply.bat>
D:\sapdb\programs\pgm\dbmcli.exe -n localhost -d CSP -u SUPERDBA,xxxxx -i "D:\Batch\Recovery_Script\import_script.txt"
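For illustration, the generated import_script.txt ends up looking something like this (the log backup version numbers here are only examples):
db_connect
db_admin
recover_start Auto_log_Backup LOG 046
recover_replace Auto_log_Backup "E:\Backup\CSP\Log\Auto\Autolog" 047
recover_replace Auto_log_Backup "E:\Backup\CSP\Log\Auto\Autolog" 048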

Similar Messages

  • Media Recovery Log    DATAGUARD

    Hi all,
    I have installed 10.2.0.2 databases and configured Data Guard. Everything works fine but I have one problem.
    Imagine that server 1 is the primary.
    When I check on server 1 (primary DB):
    SQL> archive log list;
    Database log mode Archive Mode
    Automatic archival Enabled
    Archive destination            /u01/app/oracle/archive/SID
    Oldest online log sequence 215
    Next log sequence to archive 218
    Current log sequence 218
    server 2 ( Standby server )
    SQL> archive log list;
    Database log mode Archive Mode
    Automatic archival Enabled
    Archive destination            dgsby_Stan
    Oldest online log sequence 216
    Next log sequence to archive 0
    Current log sequence 218
    In the alert log of the standby server, it locates the archived logs from the primary in /u01/app/oracle/product/10g/dbs/, not in /u01/app/oracle/archive/SID:
    Primary database is in MAXIMUM AVAILABILITY mode
    Standby controlfile consistent with primary
    RFS[6]: Successfully opened standby log 5: '/u02/app/oracle/oradata/SID/stdby_log01_1.log'
    Mon Dec 19 13:16:28 2011
    Media Recovery Log /u01/app/oracle/product/10g/dbs/dgsby_Stan1_217_767463156.dbf
    Media Recovery Waiting for thread 1 sequence 218 (in transit)
    Mon Dec 19 13:16:31 2011
    Recovery of Online Redo Log: Thread 1 Group 5 Seq 218 Reading mem 0
    Mem# 0 errs 0: /u02/app/oracle/oradata/SID/stdby_log01_1.log
    If I do a switchover I face the same situation in reverse (the old primary acts like the old standby and the old standby acts like the old primary).
    I have already set log_archive_dest_1 on both servers.
    I want both the standby and primary databases to locate the archive logs in /u01/app/oracle/archive/SID, not in /u01/app/oracle/product/10g/dbs/.
    What do I have to do to correct this situation?
    thanks

    here is the output from standby ;
    SQL>
    SQL> show parameter standby_archive_dest
    NAME TYPE VALUE
    standby_archive_dest string /u01/app/oracle/archive/TELE
    SQL>
    SQL> show parameter log_archive_dest_1
    NAME TYPE VALUE
    log_archive_dest_1 string location="/u01/app/oracle/arch
    ive/SID", valid_for=(ONLINE_L
    OGFILE,ALL_ROLES)
    log_archive_dest_10 string
    SQL> show parameter log_archive_dest
    NAME TYPE VALUE
    log_archive_dest string
    log_archive_dest_1 string location="/u01/app/oracle/arch
    ive/SID", valid_for=(ONLINE_L
    OGFILE,ALL_ROLES)
    log_archive_dest_10 string
    log_archive_dest_2 string location="dgsby_Stan", valid_
    for=(STANDBY_LOGFILE,STANDBY_R
    OLE)
    log_archive_dest_3 string
    log_archive_dest_4 string
    log_archive_dest_5 string
    NAME TYPE VALUE
    log_archive_dest_6 string
    log_archive_dest_7 string
    log_archive_dest_8 string
    log_archive_dest_9 string
    log_archive_dest_state_1 string ENABLE
    log_archive_dest_state_10 string enable
    log_archive_dest_state_2 string ENABLE
    log_archive_dest_state_3 string ENABLE
    log_archive_dest_state_4 string enable
    log_archive_dest_state_5 string enable
    log_archive_dest_state_6 string enable
    NAME TYPE VALUE
    log_archive_dest_state_7 string enable
    log_archive_dest_state_8 string enable
    log_archive_dest_state_9 string enable
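    One thing worth checking (an observation, not confirmed in the thread): log_archive_dest_1 is restricted with valid_for=(ONLINE_LOGFILE,ALL_ROLES), so archived logs arriving from the primary are not eligible for that destination on the standby. Widening the clause on both servers (and double-checking standby_archive_dest) is the usual way to keep everything in the archive directory, for example:
    SQL> alter system set log_archive_dest_1='location=/u01/app/oracle/archive/SID valid_for=(ALL_LOGFILES,ALL_ROLES)' scope=both;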

  • What is Media Recovery Log ..? Physical Standby Database

    Hello All,
    In my physical standby database alert log I can see the message below. I'm not sure what "Media Recovery Log" is and why it is writing into the ORACLE_HOME directory.
    Could anyone please help with this? I need to housekeep the ORACLE_HOME directory.
    Oracle 11gR2 / RHEL5
    RFS[164]: No standby redo logfiles available for thread 2
    RFS[164]: Opened log for thread 2 sequence 1995 dbid 287450012 branch 760028574
    Fri Nov 11 19:39:05 2011
    Media Recovery Log /u01/app/oracle/product/11.2.0/db_1/dbs/arch2_1992_760028574.dbf
    Media Recovery Log /u01/app/oracle/product/11.2.0/db_1/dbs/arch2_1993_760028574.dbf
    Media Recovery Waiting for thread 1 sequence 5568 (in transit)
    Fri Nov 11 19:39:53 2011
    Archived Log entry 948 added for thread 2 sequence 1994 rlc 760028574 ID 0x1122a1
    FYI : The primary and standby database are in sync.

    Hello;
    The alert log is providing you with information. If you want more, run this query:
    select process, status, client_process, sequence# from v$managed_standby;
    You should see processes like ARCH, RFS and MRP0.
    MRP will tell you if it's applying log(s), waiting for a log, or if it thinks there's a gap.
    PROCESS   STATUS       CLIENT_P  SEQUENCE#
    ARCH      CONNECTED    ARCH              0
    ARCH      CONNECTED    ARCH              0
    ARCH      CONNECTED    ARCH              0
    ARCH      CONNECTED    ARCH              0
    ARCH      CONNECTED    ARCH              0
    ARCH      CONNECTED    ARCH              0
    ARCH      CONNECTED    ARCH              0
    ARCH      CONNECTED    ARCH              0
    MRP0      APPLYING_LOG N/A            5051
    RFS       IDLE         N/A               0
    RFS       IDLE         UNKNOWN           0
    There's a tiny amount of information about this under "9.3.1 Adding a Datafile or Creating a Tablespace" in Oracle document E10700-02.
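    If MRP thinks there is a gap, a standard follow-up query (not shown in the reply above) is:
    SQL> select thread#, low_sequence#, high_sequence# from v$archive_gap;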
    If you find this helpful please click the "helpful" button.
    Best Regards
    mseberg

  • Ideas requested for logging script activities

    I have a form developed in LC with considerable scripting. I need to strip out all the debugging alerts and send the form out to users but I would like to figure out some way to debug any future problems.
    My thought is to somehow log script activities so that when a failing form is returned to me I can see what was going on to produce the problem.
    One idea is to use a hidden listbox to record script activity - results of IF statements, FOR loops, index values, etc. I could use addItem to make a history of the last n activities and hopefully see what led to the problem.
    I'd like to make the list circular, that is after n entries go back to the top and overwrite the oldest entries. Not sure how to accomplish this...
    Any ideas are most welcome! Thanks!

    Added a number box called RecNo as hidden
    Added a listbox called LogBox as hidden along with some logic to unhide it and hide it again
    I created a common function:
    function LogMsg(msg) {
         LogBox.addItem(worksheet.RecNo.rawValue + msg); // add record number and message
         RecNo.rawValue++; // increment
         if (RecNo.rawValue > 20) { // size limit for listbox, pick a favorite number
              LogBox.deleteItem(0); // delete top (oldest) entry
         }
    }
    From places in code I want to track if there is a failure:
         Script_object.LogMsg("any message");

  • While Recovery  log files are not displayed in recovery mode

    Hi Folks,
    These days i am performing backup recovery testing for maxdb.
    Maxdb 7.7
    I am having one doubt please help me out to clear that.
    Scenario is like this:
    I have taken a complete data backup.
    After that, some data was inserted and one log backup was created. Some transactions are still in the log area (online log) and have not been written to a log backup file (offline logs).
    When I perform a recovery using the Recovery mode, it does not show the log files that are required to be restored at the time of the recovery.
    [recovery_mode|http://www.freeuploadimages.org/images/fdsnjrndfbr9ftfnco9.jpg]
    After the recovery, it shows me the data as it was there before the recovery.
    When I use the Recovery with Initialization option, it does show the logs that will be restored while performing the recovery.
    [recovery_with_initilization|http://www.freeuploadimages.org/images/emupbqyhfunqe9q00yd3.jpg]
    After the recovery with initialization, it recovered up to the point where it found the data in the logs.
    My doubt is: why does Recovery mode not show the log files that are required to be recovered, as it does in the case of Recovery with Initialization?
    In Recovery mode, is it not recovering from the offline log files?
    If it is, then from where is it restoring the data inserted into the DB after the complete backup?
    As per my understanding, in Recovery mode it will perform recovery in the steps below (I am not taking incremental backups):
    1. Complete data recovery.
    2. All the log backup files will be applied (offline logs).
    3. Logs available in the log area will be applied (online log).
    Regards,
    Sahil Garg
    Edited by: sahil garg on Feb 26, 2011 9:03 PM

    Hello Sahil,
    please check the documentation and the expert sessions before trying out this stuff.
    The reason for MaxDB not asking for any log backups after you've modified a few records is that we don't throw away the log data when a log backup is made.
    Instead we keep track of which log information is still available for recovery in the log area and use this during recovery, instead of trying to get the data from the log backup.
    As soon as the log data is lost (because it's overwritten or the log area was deleted as it is the case for a recovery with initialization) MaxDB of course has to ask for the log backups - and it does so.
    But really - go and check the documentation and the expert sessions (maybe also my own blogs on this topic) about this.
    regards,
    Lars

  • Who has a check alert log script?

    Hi,
    Can anyone provide me with a good Linux script that will read my alert.log
    file and report any ORA- errors through email daily?
    Thank you

    I use this script to monitor my instances. I set up a cron job to run it every hour:
    5 * * * * /home/oracle/bin/check_alert <SID> 2
    It's ksh:
    #!/bin/ksh
    # PROGRAM       check_alert.ksh
    # FUNCTION      Checks ORACLE Alert logs and pages in case of
    #               any new errors. SID is Oracle database identifier.
    # CALLED BY     cron
    SID=$1                          # Oracle database identifier
    PAGEMESSAGES=$2                 # Maximum number of new messages that get paged
    PARAM=$#
    TMP=/tmp                        # Temporary directory
    MAILX=/bin/mailx            # UNIX Mail Program
    LIBDIR=/home/oracle/bin    # Directory where useful information is saved
    ALERTDIR=/home/oracle/admin/${SID}/bdump # Directory where
                                                # Oracle alert file resides
    FILE=alert_${SID}.log           # Oracle alert file name
    MAILDBA=<dba email>                     # DBA email address
    PAGEDBA=<pagermail>         # Page only DBA staff
    PAGEOTHER=NULL                  # Do not page OTHER staff members
    export PAGEMESSAGES
    export PARAM
    export TMP
    export MAILX
    export LIBDIR
    export ALERTDIR
    export FILE
    export SID
    export MAILDBA
    export PAGEDBA
    export PAGEOTHER
    checkParameters()
    {
       if [ $PARAM -ne 2 ]
       then
          echo "**USAGE** : $0 <SID> <Count>"
          exit 1
       fi
    }
    LASTCOUNT=`cat $LIBDIR/.oraErrCount_${SID}` # Count of ORA- errors
                                                # detected during last program run
    export LASTCOUNT
    sendAlertMessage()
    {
          MESSAGE="**ALARM**:${SID}:`grep "ORA-" $ALERTDIR/$FILE | tail  -${count} | head -1`"
          echo $MESSAGE | $MAILX ${PAGEDBA}
          echo $MESSAGE | $MAILX -s"`uname -n`:${SID}:ORACLE Trace file Alert" ${MAILDBA}
          echo "$NAME:$MESSAGE:`date`"
    probeAlertLog()
    {
       #set -x
       # Count all Oracle errors - search for string "ORA-"
       CheckError=`grep "ORA-" $ALERTDIR/$FILE | wc -l`
       # keep a count of current errors present in the Alert file
       echo $CheckError > $LIBDIR/.oraErrCount_${SID}
       count=1
       # If new errors are detected (same alert log)
       if [ $CheckError -gt $LASTCOUNT ]
       then
          while [ $LASTCOUNT -lt $CheckError ]
          do
             sendAlertMessage;
             if [ $count -eq $PAGEMESSAGES ]
             then
                break;
             fi
             ((count=$count+1))
             ((LASTCOUNT=$LASTCOUNT+1))
          done
       else
          # Looks like alert log file has been switched!
          if [ $CheckError -lt $LASTCOUNT ]
          then
             while [ $count -le $CheckError ]
             do
                sendAlertMessage;
                if [ $count -eq $PAGEMESSAGES ]
                then
                   break;
                fi
                ((count=$count+1))
             done
          fi
       fi
    }
    checkParameters;
    probeAlertLog;

  • Event log script help

    1) What do the contents of aserverlist.txt look like? It should be one server name per line and no blank lines. 2) Correct, don't use Format-Table inside the script. It messes things up.
    $computers = Get-Content "\\server\IT\!Utilities\powershellcommands\aserverlist.txt";
    foreach($computer in $computers) {
        Get-EventLog -ComputerName $computer -LogName System -EntryType "Warning" -After (Get-Date).AddDays(-1) | Select-Object MachineName, Index, TimeGenerated, EntryType, Source, InstanceID, Message
    }

    Re your query:
    1. Good - that's helpful.
    2. Exactly what did you try? Please provide the exact syntax you used. What errors, if any, did you get?
    3. So do a bit of debugging. Run the first line and look at what is in the $computers array, then work through the loop computer by computer, seeing what Get-EventLog produces. I'd cut out the pipe to Format-Table initially, just to get things working.
    You have not provided enough information to give you a clearer answer. Also, next time you post code, please use the tool in the toolbar. It would have made your code look like this:
    $computers = Get-Content "\\server\IT\!Utilities\powershellcommands\aserverlist.txt";
    foreach($computer in $computers) {
        Get-EventLog -ComputerName $computer -LogName System -EntryType "Warning" -After (Get-Date).AddDays(-1) | Format-Table -Wrap -Property ...
    }

  • Creation of a liveCache Standby system with automatic recovery

    Hello folks with MaxDB affinity
    I'm currently studying the documentation about creating a standby system for a liveCache.
    I would like to realize the same scenario as we've already done in our MSSQL environments: for every clustered MSSQL Server, we've created a standby instance, which is fed via a log shipping job on the cluster side. The standby system recovers the logs automatically with an MSSQL job, too.
    So, finally, my question: since I found the Word document (Standby_DB_MaxDB_eng.doc), which describes these steps manually, I would like to know whether there's a chance to do this automatically.
    Our liveCache is a Windows 2003 Server (standby system, too).
    Thank you in advance!
    Peter

    Hi Peter,
    just search for "standby + logshipping" in this forum...
    Re: Recovery LOG Script
    regards,
    Lars

  • Oracle 9i request wrong archive log name during recovery

    Hello all,
    I am facing a problem with a database 9i Enterprise Edition running on Windows 2003 Enterprise Edition environment. According to the customer, when he takes the database offline using MSCS to take an offline backup, the database crashes when he tries to put it back online... But here a strange thing happens... when the command RECOVER DATABASE is issued, it asks for the wrong archive log... for example:
    The actual sequence is 14322, but Oracle asks for the archive log SID_4322... (in this case we have to rename the original file SID_14322 to SID_4322). I don't understand why it asks for the archive incorrectly... I issued the command ARCHIVE LOG LIST and it says that the current sequence is 14322 (which is correct)... every new archive is created with the correct name (SID_143XX)... so why is it asking for an archive without the leading 1 in this case? Renaming all the files to recover a database is not a very pleasant job... creating a script to do so is not a solution in my opinion. Thanks for your help in advance.
    Best regards,
    Bruno Carvalho
    Edited by: Bruno Carvalho on Jul 5, 2010 10:08 AM

    Hello Damorgan,
    MSCS = Microsoft Cluster Services (it's an MMC console). The backup strategy is very old, and they use BR*Tools script to create a job on ArcServe Brigstor, and then the backup is performed via backint. (It's an SAP ERP system, which should be backed up using BR*Tools). I checked the alert log and I think that the problem is the length of the archive log name (in the alert log it gives me the error below):
    ARCH: Warning. Log sequence in archive filename wrapped
    to fix length as indicated by %S in LOG_ARCHIVE_FORMAT.
    Old log archive with same name might be overwritten.
    Media Recovery Log E:\ORACLE\SID\SAPARCH\SIDARCH\SIDARCH_43454.DBF <- this should be SIDARCH_143454.DBF
    Now I'm looking for a parameter where I can set the length of the archives, your help would be very appreciated (and please let me know if I'm not going the right way).
    Best regards,
    Bruno Carvalho
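    The warning itself points at the likely fix (a sketch, not from this thread): the fixed width comes from the %S element, so switching LOG_ARCHIVE_FORMAT to the variable-length %s stops the wrapping. The parameter is static, so it needs scope=spfile and an instance restart; the exact format string below is only an example:
    SQL> show parameter log_archive_format
    SQL> alter system set log_archive_format='SIDARCH_%s.DBF' scope=spfile;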

  • DB recovery crashes when executed from script, finishes OK manually

    Hello everybody,
    I have a DB restore script which I used/tested a lot of times. On the last run it crashed in the archivelog recovery phase, but I was able to open the instance when applying the remaining commands manually.
    DB is a 10g2 on Solaris 10, I have a full backup made with EMC NetWorker. After taking the backup, the machine was re-installed from scratch (OS, additional 3rd party software, Oracle software and backup client). A new, empty DB instance was also created.
    Restore script performs the following:
    - checks/creates directory structure needed by Oracle instance;
    - stops the listener;
    - shuts down the new instance;
    - startup mount exclusive, enable restricted session;
    - drops database instance via RMAN
    - recovers the orapw, spfile, tnsnames.ora and listener.ora files from the backupset;
    - starts the instance in nomount;
    - replicates the controlfile from a saved copy in the backupset;
    - mounts the instance;
    - starts the following RMAN sequence:
    run {
    set until $LAST_SCN_IN_BACKUPSET;
    allocate channel c3 type 'sbt_tape' parms 'ENV=(....)';
    restore database check readonly;
    recover database check readonly;
    release channel c3;
    sql 'alter database open resetlogs';
    }
    - starts the listener.
    Almost everything runs OK until the "recover database check readonly" command, where the recovery crashes with the following messages:
    Starting recover at 18-FEB-09
    starting media recovery
    channel c3: starting archive log restore to default destination
    channel c3: restoring archive log
    archive log thread=1 sequence=17
    channel c3: reading from backup piece 09k7hb2h_1_1
    channel c3: restored backup piece 1
    piece handle=09k7hb2h_1_1 tag=TAG20090216T181752
    channel c3: restore complete, elapsed time: 00:00:36
    archive log filename=<...>/oracle/oradata/SNM/arch/arch_1_17_678643049.arc thread=1 sequence=17
    released channel: c3
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 02/18/2009 19:58:19
    ORA-00283: recovery session canceled due to errors
    RMAN-11003: failure during parse/execution of SQL statement: alter database recover logfile '<...>/oracle/oradata/SNM/arch/arch_1_17_678643049.arc'
    ORA-00283: recovery session canceled due to errors
    ORA-19755: could not open change tracking file
    ORA-19750: change tracking file: '/alcatel/oracle/oradata/SNM/bct.dbf'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory
    Additional information: 3
    Recovery Manager complete.
    The curious thing is that when I ran the following commands manually (from RMAN), the recovery was OK:
    run {
    set until scn $LAST_SCN_IN_BACKUPSET;
    recover database check readonly;
    }
    (finished in a couple of minutes)
    sql 'alter database open resetlogs';
    (finished after a few minutes)
    lsnrctl start (from shell) - OK.
    Could somebody point me to the possible causes of this behavior?
    Thank you all for your time,
    Adrian

    Hi Werner,
    Thank you for your reply.
    Yes, I'm using the block change tracking file. As the DB instance was re-created when the machine was installed, the bct must have been created also. However, in my script I dropped the DB instance, and the bct was, most likely, deleted, as were all the other datafiles.
    After the restore script crashed, I had no bct file in the expected location (which is the location of all the datafiles).
    I'm wondering why the bct could not have been created during the run of the restore script.
    Looking through the alert log, I realise I tried an "alter database open resetlogs" before running the recovery sequence, and at that point, the bct was created.
    Then, the second attempt to open the instance was successful - please see below the messages from the alert log:
    Thu Feb 19 09:43:47 2009
    alter database open
    Thu Feb 19 09:43:47 2009
    CHANGE TRACKING is enabled for this database, but the
    change tracking file can not be found. Recreating the file.
    Change tracking file recreated.
    Block change tracking file is current.
    ORA-1589 signalled during: alter database open...
    Thu Feb 19 09:43:54 2009
    alter database open resetlogs
    ORA-1196 signalled during: alter database open resetlogs...
    Thu Feb 19 09:48:04 2009
    alter database recover datafile list clear
    Thu Feb 19 09:48:04 2009
    Completed: alter database recover datafile list clear
    Thu Feb 19 09:48:04 2009
    alter database recover datafile list
    1 , 2 , 3 , 4 , 5 , 6 , 7 , 8
    Completed: alter database recover datafile list
    1 , 2 , 3 , 4 , 5 , 6 , 7 , 8
    Thu Feb 19 09:48:04 2009
    alter database recover if needed
    start until change 7033941 using backup controlfile
    Media Recovery Start
    parallel recovery started with 7 processes
    ORA-279 signalled during: alter database recover if needed
    start until change 7033941 using backup controlfile
    Thu Feb 19 09:48:05 2009
    alter database recover logfile '/alcatel/oracle/oradata/SNM/arch/arch_1_17_678643049.arc'
    Thu Feb 19 09:48:05 2009
    Media Recovery Log /alcatel/oracle/oradata/SNM/arch/arch_1_17_678643049.arc
    Thu Feb 19 09:48:43 2009
    Incomplete Recovery applied until change 7033941
    Thu Feb 19 09:48:43 2009
    Media Recovery Complete (SNM)
    Completed: alter database recover logfile '/alcatel/oracle/oradata/SNM/arch/arch_1_17_678643049.arc'
    Thu Feb 19 09:49:10 2009
    alter database open resetlogs
    <....>
    Thu Feb 19 09:50:47 2009
    LOGSTDBY: Validation complete
    Starting control autobackup
    Control autobackup written to DISK device
    handle '/alcatel/oracle/oradata/SNM/flash_recovery_area/SNM/autobackup/2009_02_19/o1_mf_s_679225849_4st3tswj_.bkp'
    Completed: alter database open resetlogs
    So, to recap:
    - bct file not created at first try (restore script);
    - bct file created during the failed attempt to open the db (manual command);
    - second attempt to open the db successful (manual command).
    The behavior doesn't seem to be systematic, as I don't think this happens every time. In this case, I'm starting to wonder if it wouldn't be a good idea to disable the bct before starting the restore, and then to enable it back when the db is opened.
    Thank you again for your idea,
    Adrian
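    A minimal sketch of that disable/re-enable approach (the file path is the one from the alert log above; this is an assumption, not something tested in the thread):
    SQL> alter database disable block change tracking;
    -- run the restore/recover script and open the database, then:
    SQL> alter database enable block change tracking using file '/alcatel/oracle/oradata/SNM/bct.dbf';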

  • Recovery is repairing media corrupt block x of file x in standby alert log

    Hi,
    oracle version:8.1.7.0.0
    os version :solaris  5.9
    We have an Oracle 8i primary and standby database. I am getting this error in the alert log file:
    Thu Aug 28 22:48:12 2008
    Media Recovery Log /oratranslog/arch_1_1827391.arc
    Thu Aug 28 22:50:42 2008
    Media Recovery Log /oratranslog/arch_1_1827392.arc
    bash-2.05$ tail -f alert_pindb.log
    Recovery is repairing media corrupt block 991886 of file 179
    Recovery is repairing media corrupt block 70257 of file 184
    Recovery is repairing media corrupt block 70258 of file 184
    Recovery is repairing media corrupt block 70259 of file 184
    Recovery is repairing media corrupt block 70260 of file 184
    Recovery is repairing media corrupt block 70261 of file 184
    Thu Aug 28 22:48:12 2008
    Media Recovery Log /oratranslog/arch_1_1827391.arc
    Thu Aug 28 22:50:42 2008
    Media Recovery Log /oratranslog/arch_1_1827392.arc
    Recovery is repairing media corrupt block 500027 of file 181
    Recovery is repairing media corrupt block 500028 of file 181
    Recovery is repairing media corrupt block 500029 of file 181
    Recovery is repairing media corrupt block 500030 of file 181
    Recovery is repairing media corrupt block 500031 of file 181
    Recovery is repairing media corrupt block 991837 of file 179
    Recovery is repairing media corrupt block 991838 of file 179
    How can I resolve this?
    Thanks
    Prakash
    Edited by: user612485 on Aug 28, 2008 10:53 AM

    Dear Satish Kandi,
    Recently we created an index for one table with the NOLOGGING option; I think that is the reason I am getting that error.
    If I run the dbv utility on the files shown in the alert log file, I get the following results.
    bash-2.05$ dbv file=/oracle15/oradata/pindb/pinx055.dbf blocksize=4096
    DBVERIFY: Release 8.1.7.0.0 - Production on Fri Aug 29 12:18:27 2008
    (c) Copyright 2000 Oracle Corporation. All rights reserved.
    DBVERIFY - Verification starting : FILE = /oracle15/oradata/pindb/pinx053.dbf
    Block Checking: DBA = 751593895, Block Type =
    Found block already marked corrupted
    Block Checking: DBA = 751593896, Block Type =
    .DBVERIFY - Verification complete
    Total Pages Examined : 1048576
    Total Pages Processed (Data) : 0
    Total Pages Failing (Data) : 0
    Total Pages Processed (Index): 1036952
    Total Pages Failing (Index): 0
    Total Pages Processed (Other): 7342
    Total Pages Empty : 4282
    Total Pages Marked Corrupt : 0
    Total Pages Influx : 0
    bash-2.05$ dbv file=/oracle15/oradata/pindb/pinx053.dbf blocksize=4096
    DBVERIFY: Release 8.1.7.0.0 - Production on Fri Aug 29 12:23:12 2008
    (c) Copyright 2000 Oracle Corporation. All rights reserved.
    DBVERIFY - Verification starting : FILE = /oracle15/oradata/pindb/pinx054.dbf
    Block Checking: DBA = 759492966, Block Type =
    Found block already marked corrupted
    Block Checking: DBA = 759492967, Block Type =
    Found block already marked corrupted
    Block Checking: DBA = 759492968, Block Type =
    .DBVERIFY - Verification complete
    Total Pages Examined : 1048576
    Total Pages Processed (Data) : 0
    Total Pages Failing (Data) : 0
    Total Pages Processed (Index): 585068
    Total Pages Failing (Index): 0
    Total Pages Processed (Other): 8709
    Total Pages Empty : 454799
    Total Pages Marked Corrupt : 0
    Total Pages Influx : 0
    bash-2.05$ dbv file=/oracle15/oradata/pindb/pinx054.dbf blocksize=4096
    DBVERIFY: Release 8.1.7.0.0 - Production on Fri Aug 29 12:32:28 2008
    (c) Copyright 2000 Oracle Corporation. All rights reserved.
    DBVERIFY - Verification starting : FILE = /oracle15/oradata/pindb/pinx055.dbf
    Block Checking: DBA = 771822208, Block Type =
    Found block already marked corrupted
    Block Checking: DBA = 771822209, Block Type =
    Found block already marked corrupted
    Block Checking: DBA = 771822210, Block Type =
    .DBVERIFY - Verification complete
    Total Pages Examined : 1048576
    Total Pages Processed (Data) : 0
    Total Pages Failing (Data) : 0
    Total Pages Processed (Index): 157125
    Total Pages Failing (Index): 0
    Total Pages Processed (Other): 4203
    Total Pages Empty : 887248
    Total Pages Marked Corrupt : 0
    Total Pages Influx : 0
    My doubts are:
    1. If I drop the index and recreate it with the LOGGING option, will this error stop appearing in the alert log file?
    2. In future, if I activate the standby database, will the database open without any errors?
    Thanks
    Prakash
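    A hedged sketch of the usual follow-up for NOLOGGING-induced corruption (not stated in this thread): rebuild the affected indexes on the primary with LOGGING so that future changes generate redo, and then refresh the affected standby datafiles from a backup taken after the rebuild, because blocks already marked corrupt on the standby are not repaired by newly shipped redo. The owner and index name below are placeholders:
    SQL> alter index <owner>.<index_name> rebuild logging;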

  • Restore a log backup on to database whose recovery model is SIMPLE ?

    Hi All,
    Today I came across something new while testing log shipping.
    The question is: can we restore a LOG BACKUP on a database whose recovery model is SIMPLE?
    Brief background:
    I set up log shipping in SQL Server 2012 SP1 Enterprise Edition between 2 instances. Log shipping is in standby/read-only mode, and everything was working fine.
    Then I changed the recovery model of the secondary database on the 2nd instance from FULL to SIMPLE, and the restores are still happening fine. So, is this normal behavior? The other thing is that once the restore job is complete, the recovery model is changed back to
    FULL automatically on the secondary DB; I verify it before and after the restore job using the query below:
    select name,recovery_model_desc from sys.databases
    Based on my knowledge, when a database is in the SIMPLE recovery model we can't perform any log backups on it, and since there are no log backups, we can't perform point-in-time recovery.
    Please clarify whether what I am seeing is normal. Also, I would like to know what is happening behind the scenes.
    Thanks and appreciate your help.
    Sam

    Hi,
    >>Question is can we restore a LOG BACKUP on a database whose recovery model is SIMPLE?
    Yes, you can restore a log backup on a database which is in simple recovery.
    You must understand what the recovery model is for. The recovery model affects logging and recovery of a database; it does not put restrictions on what can be restored onto it. Of course you cannot take a log backup of a database in simple recovery, but you can restore one.
    The reason is that in simple recovery the log is truncated after a checkpoint (unless something is holding the log, such as a long-running transaction). The recovery model also controls the extent of logging done for a transaction, and hence the recovery you can do.
    PS: Don't play with log shipping like this when it is configured on a prod environment.
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Wiki Article
    MVP
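    As a small illustration of the restore side (database name and paths below are made up, not from the thread), this is the kind of statement log shipping keeps running against the secondary regardless of the recovery model it displays:
    RESTORE LOG [MyDB]
    FROM DISK = N'D:\LogShip\MyDB_20130101.trn'
    WITH STANDBY = N'D:\LogShip\MyDB_undo.dat';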

  • Recovery Task - SCOM, execute large Powershell scripts

    Hi,
    As a standard we use PowerShell for scripting. Now we want to do this in SCOM too.
    But on a recovery task I can only add one line of PowerShell command line, or a VBScript. This is really strange for a monitoring tool that has so many PowerShell cmdlets.
    As a workaround I can execute PowerShell from the VBS code. But then I need to place my script on the server where the alert came from in order to resolve it.
    Now we want to send an SMS as a recovery task, so that the engineer knows there is an issue outside working hours. The problem with the workaround is that I now need to give all application servers an 'accept' on the firewall to our SMS provider, instead
    of just the 3 management servers.
    Has anybody got a good solution to start a PowerShell task as a recovery on the management server, for an application server, to solve an issue or to send an SMS?
    Kind regards,
    André

    Hi
    I have two questions:
    1. Which module did you use for your recovery task? You said that you can only add one line of PowerShell command line
    or a VBScript; if you use the regular recovery task module you can run a long PowerShell or VBScript script.
    2. Do you want to run that recovery task script on the RMS or on the agents? If you want to run the script on the RMS it's easy: you just create
    a normal VBScript recovery task module and call your SMS provider from it, or in the VBScript call a PowerShell script located on the RMS.
    Another way is to create a PowerShell recovery task module and call your SMS provider from it.
    If you answer these questions I can help you further.
    regards
    Alireza

  • Rman hotbackup script log help needed

    Hi I am trying this script to do a hot backup of the db. The db is 10g R2.
    The script runs and does the backup. But the log file is not created and there is a file called rman_.log which is getting appended with
    ps -ef | egrep pmon_$ORACLE_SID | grep -v grep
    date +%d-%m-%y
    Also I am unable to log the RMAN job. Can anyone advise what I am doing wrong? Thanks in advance.
    # ~~~~~ set the variables ~~~~~ #
    ORACLE_HOME=/u01/app/oraprd/product/10.2.0/db_1
    ORACLE_BASE=/u01/app/oraprd
    ORACLE_SID=testdb
    DATE_TODAY='date +%d-%m-%y'
    export ORACLE_HOME ORACLE_BASE ORACLE_SID
    export DATE_TODAY
    LOG_FILE=/scripts/rman_$ORACLE_SID_$DATE_TODAY_hot_bkp.log
    export LOG_FILE
    db_status="CLOSED"
    # ~~~~~ Check if output file exists ~~~~~ #
    if [ ! -e $LOG_FILE ] ; then
    touch /scripts/rman_$ORACLE_SID_$DATE_TODAY_hot_bkp.log
    chmod 755 /scripts/rman_$ORACLE_SID_$DATE_TODAY_hot_bkp.log
    fi
    # ~~~~~ Check status of database ~~~~~ #
    pmon='ps -ef | egrep pmon_$ORACLE_SID | grep -v grep'
    echo $pmon >> $LOG_FILE
    echo $DATE_TODAY >> $LOG_FILE
    if [ "$pmon" = "" ]; then
    db_status="CLOSED"
    echo "The db was closed; now starting to take backup" >> $LOG_FILE
    else
    db_status=sqlplus -s '/ as sysdba' <<EOF
    startup;
    exit
    EOF
    fi
    if [ $db_status = "MOUNTED" -o $db_status = "OPEN" ]; then
    echo "The db was open; now starting to take backup" >> $LOG_FILE
    exit
    EOF
    fi
    rman target / nocatalog log=$/scripts/rman_scripts/rman_${ORACLE_SID}_${DATE_TODAY}_hot_bkp.log <<EOF
    run {
    sql "alter system archive log current";
    backup current controlfile;
    backup database plus archivelog;
    delete noprompt obsolete;
    exit;
    eof

    Hi,
    Now the script is logging, but I am unable to run it as a cron job.
    $chmod 755 script
    verified the owner
    $ crontab -l
    00 02 * * * /scripts/rman_scripts/rman_prod_hot_bkp.sh >> /scripts/logs/rman_prod.hot_bkp.log 2>&1
    But it does not work. When I run it manually:
    $ /scripts/rman_scripts/rman_prod_hot_bkp.sh
    it works and the DB is backed up. I know it is a Unix thing, but I just can't seem to figure it out. Any ideas?
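    One common culprit when a script works interactively but not from cron (an assumption, since the thread doesn't show the resolution) is cron's minimal environment: PATH does not include $ORACLE_HOME/bin, so bare rman and sqlplus calls fail. Exporting it near the top of the script usually fixes that, for example:
    # near the top of rman_prod_hot_bkp.sh, after ORACLE_HOME is set
    PATH=$ORACLE_HOME/bin:/usr/bin:/bin
    export PATH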

  • Media Recovery Waiting for thread 1 sequence (in transit)

    I have rebuilt our standby database using an rman duplicate since it was missing many archive logs.
    Following the duplicate, the standby is now almost in sync with the primary. Logs are shipping across but are not being applied in a timely manner. How long should it take for an archive log from the primary to be applied to the standby?
    I need to know this so that a proper script can be set up to check the primary and standby. At the moment they are never exactly in sync - always one sequence number behind the primary.
    Why is the standby is not applying in a timely manner?
    From the alert log:
    Media Recovery Waiting for thread 1 sequence 11278 (in transit)
    The log seems to be "in transit" for a long time
    PRIMARY:
    SQL> select max (sequence#) current_seq from v$log;
    CURRENT_SEQ
    11278
    SB:
    SQL> select MAX (SEQUENCE#), APPLIED FROM V$ARCHIVED_LOG where APPLIED ='YES' GROUP BY APPLIED;
    MAX(SEQUENCE#) APP
    11277 YES
    ALERT LOG:
    RFS[2]: Archived Log: '/backup/prod/log_1_11277_704816194.dbf'
    Primary database is in MAXIMUM PERFORMANCE mode
    Mon Nov 1 15:22:01 2010
    Media Recovery Log /backup/prod/log_1_11272_704816194.dbf
    Mon Nov 1 15:26:49 2010
    Media Recovery Log /backup/prod/log_1_11273_704816194.dbf
    Mon Nov 1 15:29:54 2010
    Media Recovery Log /backup/prod/log_1_11274_704816194.dbf
    Mon Nov 1 15:34:18 2010
    Media Recovery Log /backup/prod/log_1_11275_704816194.dbf
    Mon Nov 1 15:36:42 2010
    Media Recovery Log /backup/prod/log_1_11276_704816194.dbf
    Mon Nov 1 15:39:43 2010
    Media Recovery Log /backup/prod/log_1_11277_704816194.dbf
    Mon Nov 1 15:42:34 2010
    Media Recovery Waiting for thread 1 sequence 11278 (in transit)
    I should add that I understand that for the primary and standby to be out by one log is not cause for concern (they are applying). It's just that I wanted to script a check that would compare them both, and at the moment they are never equal, when I understand that they should be and that the logs should be applied almost immediately.
    Edited by: Dan A on Nov 1, 2010 4:36 PM

    How long should it take for an archive log from the primary to be applied to the standby? It depends on network speed also.
    Make sure the archives are shipped to the standby location.
    PRIMARY:
    SQL> select max (sequence#) current_seq from v$log;
    CURRENT_SEQ
    11278 (this is the online redo log sequence, not an archived log)
    SB:
    SQL> select MAX (SEQUENCE#), APPLIED FROM V$ARCHIVED_LOG where APPLIED ='YES' GROUP BY APPLIED;
    MAX(SEQUENCE#) APP
    11277 YES
    Hi, check whether MRP is started or not.
    On the primary database you need not check the current sequence; check the last generated (archived) sequence, not the current sequence.
    The current sequence is the redo log which has not yet been archived.
    I think everything is perfect here, no issues.
    Hope you understood; let me know if anything is not clear. Thanks.
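    To turn that into the check the original poster wanted (standard views, not quoted in the reply), compare the last archived sequence on the primary with the last applied sequence on the standby:
    -- on the primary
    SQL> select max(sequence#) last_archived from v$archived_log;
    -- on the standby
    SQL> select max(sequence#) last_applied from v$archived_log where applied = 'YES';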
