SCN command

What does the scn command mean in a PDF file? Example: 0.99608 0 0 scn
Thanks,
-Lars

You can find the full details in the ISO 32000-1 document, but the short answer is "Set Color N": scn sets the colour used for non-stroking (fill) operations in the colour space most recently selected with the cs operator, while the uppercase SCN operator does the same for stroking operations. In your example the current colour space takes three components, so 0.99608 0 0 is, in a typical RGB-type space, essentially pure red.

Similar Messages

  • RMAN restore using set until scn

    Hi guys.
    Quick question about using set until scn
    When I do LIST BACKUP, the output shows multiple SCNs for one full backup (I allocated 4 tape channels for the backup, so there are 4 SCNs in the output).
    Which one do I specify in the SET UNTIL SCN command? Below there are ...163, ...164, ...165 and ...166.
    Here is the backup I want to use:
    BS Key Type LV Size Device Type Elapsed Time Completion Time
    12526 Incr 0 19G SBT_TAPE 00:27:22 Dec 31 2009 10:25:30
    BP Key: 12526 Status: AVAILABLE Tag: TAG20091231T095808
    Piece Name: rcworaprd-vprd2-full<12920:707047088:1>.dbf
    List of Datafiles in backup set 12526
    File LV Type Ckp SCN Ckp Time Name
    1 0 Incr 1697159163 Dec 31 2009 09:58:08 /u02/oradata/vprd2/system01.dbf
    11 0 Incr 1697159163 Dec 31 2009 09:58:08 /u02/oradata/vprd2/ppa_data01.dbf
    12 0 Incr 1697159163 Dec 31 2009 09:58:08 /u02/oradata/vprd2/ppa_index01.dbf
    13 0 Incr 1697159163 Dec 31 2009 09:58:08 /u02/oradata/vprd2/itd_index01.dbf
    16 0 Incr 1697159163 Dec 31 2009 09:58:08 /u02/oradata/vprd2/rcl_data01.dbf
    BS Key Type LV Size Device Type Elapsed Time Completion Time
    12527 Incr 0 21G SBT_TAPE 00:29:19 Dec 31 2009 10:27:27
    BP Key: 12527 Status: AVAILABLE Tag: TAG20091231T095808
    Piece Name: rcworaprd-vprd2-full<12921:707047088:1>.dbf
    List of Datafiles in backup set 12527
    File LV Type Ckp SCN Ckp Time Name
    4 0 Incr 1697159164 Dec 31 2009 09:58:08 /u02/oradata/vprd2/tools01.dbf
    7 0 Incr 1697159164 Dec 31 2009 09:58:08 /u02/oradata/vprd2/xdb01.dbf
    8 0 Incr 1697159164 Dec 31 2009 09:58:08 /u02/oradata/vprd2/user_index02.dbf
    9 0 Incr 1697159164 Dec 31 2009 09:58:08 /u02/oradata/vprd2/una_data01.dbf
    14 0 Incr 1697159164 Dec 31 2009 09:58:08
    BS Key Type LV Size Device Type Elapsed Time Completion Time
    12528 Incr 0 29G SBT_TAPE 00:33:48 Dec 31 2009 10:31:57
    BP Key: 12528 Status: AVAILABLE Tag: TAG20091231T095808
    Piece Name: rcworaprd-vprd2-full<12923:707047089:1>.dbf
    List of Datafiles in backup set 12528
    File LV Type Ckp SCN Ckp Time Name
    6 0 Incr 1697159166 Dec 31 2009 09:58:09 /u02/oradata/vprd2/user_index01.dbf
    17 0 Incr 1697159166 Dec 31 2009 09:58:09 /u02/oradata/vprd2/rcl_index01.dbf
    BS Key Type LV Size Device Type Elapsed Time Completion Time
    12529 Incr 0 23G SBT_TAPE 00:38:09 Dec 31 2009 10:36:17
    BP Key: 12529 Status: AVAILABLE Tag: TAG20091231T095808
    Piece Name: rcworaprd-vprd2-full<12922:707047088:1>.dbf
    List of Datafiles in backup set 12529
    File LV Type Ckp SCN Ckp Time Name
    2 0 Incr 1697159165 Dec 31 2009 09:58:09 /u02/oradata/vprd2/drsys01.dbf
    3 0 Incr 1697159165 Dec 31 2009 09:58:09 /u02/oradata/vprd2/eng_data01.dbf
    5 0 Incr 1697159165 Dec 31 2009 09:58:09 /u02/oradata/vprd2/user_data01.dbf
    10 0 Incr 1697159165 Dec 31 2009 09:58:09 /u02/oradata/vprd2/una_index01.dbf
    15 0 Incr 1697159165 Dec 31 2009 09:58:09 /u02/oradata/vprd2/eng_index01.dbf
    Or would it be better to use SET UNTIL TIME, using the completion time of backup set 12529 (Dec 31 2009 10:36:17)?
    All input is appreciated.
    Thanks
    Jamie

    Note that an online backup is an inconsistent backup:
    >
    Any backup taken when the database has not been shut down normally is an inconsistent backup. When a database is restored from an inconsistent backup, Oracle must perform media recovery before the database can be opened, applying any pending changes from the redo logs.
    As long as your database is running in ARCHIVELOG mode, and you back up your archived redo log files as well as your datafiles, inconsistent backups can be the foundation for a sound backup and recovery strategy. Inconsistent backups are an important part of the backup strategy for most databases, because they offer superior availability. For example, backups taken while the database is still open are inconsistent backups
    >
    If you need to restore your database from the listed backups, you need to apply archived redo logs up to an SCN that is greater than every checkpoint SCN in the datafile backup sets; otherwise Oracle won't open the database, because the datafiles would still have different checkpoint SCNs.
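    To make that concrete, here is a minimal RMAN sketch, assuming you restore from the backup sets listed above (the SCN value is illustrative: UNTIL SCN is exclusive, so it is set one higher than the largest checkpoint SCN, ...166, in the listing; in practice you would usually recover further forward with your archived logs):
    run {
      set until scn 1697159167;
      restore database;
      recover database;
    }
    sql 'alter database open resetlogs';
    The UNTIL value must be greater than the highest checkpoint SCN shown for the backup (here ...166); otherwise RMAN cannot bring every datafile in the set to a consistent point.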

  • BACKUP INCREMENTAL FROM SCN

    Hi,
    I have a 10gR1 database with a standby database. For some reason I get an ORA-00326 error on the standby. After some work, the only thing left to do for the standby seems to be to recreate it, but this document shows that that should not be necessary:
    http://www.stanford.edu/dept/itss/docs/oracle/10gR2/backup.102/b14191/rcmdupdb008.htm
    However, I could not execute the "backup incremental from scn" command on 10gR1. Is there any way to do this or not?
    Thanks.

    Hi,
    You mentioned you have a 10gR1 database, and the feature was introduced in 10.2.0.1.
    The complete scenario is here: http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/scenarios.htm#CIHEGFEG
    but you won't find it in 10gR1.
    If you often have unresolvable gaps, I think you should focus on that problem, though.
    Best regards
    Phil
    Edited by: Philippe Florent on Feb 7, 2011 5:03 PM -- I am too slow :)
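    For reference, a hedged outline of the 10gR2 roll-forward scenario from that link (the SCN, path and tag are illustrative, and the step of refreshing the standby control file is omitted here); on 10gR1 these commands simply do not exist, which is Phil's point:
    -- On the primary, using the SCN the standby has stopped at (e.g. CURRENT_SCN from V$DATABASE on the standby):
    BACKUP INCREMENTAL FROM SCN 1234567 DATABASE FORMAT '/tmp/standby_roll_%U' TAG 'ROLL_STANDBY';
    -- Transfer the pieces to the standby host, then on the standby:
    CATALOG START WITH '/tmp/standby_roll_';
    RECOVER DATABASE NOREDO;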

  • What generates new SCN ? A command or a transaction ?

    Hello,
    Can you please explain: when is a new SCN generated? Only per transaction, or per single SQL statement? Sometimes I read that SCNs are issued per transaction, but I'm not sure whether that's correct.
    If one issues, say, 5 DML statements (inserts, deletes) within one transaction, how many unique SCNs will be generated? One for the whole transaction, or 5?
    And don't SELECT statements also need an SCN in order to get a read-consistent image?

    When is a new SCN generated? The SCN keeps ticking regardless of transactions or other database activity.
    Following is the SCN generation on my personal database where no user activity is happening.
    SQL> set time on
    18:41:54 SQL> select DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER from dual;
    GET_SYSTEM_CHANGE_NUMBER
                     1405509
    18:42:02 SQL> /
    GET_SYSTEM_CHANGE_NUMBER
                     1405510
    18:42:05 SQL> /
    GET_SYSTEM_CHANGE_NUMBER
                     1405512
    18:42:10 SQL> /
    GET_SYSTEM_CHANGE_NUMBER
                     1405514
    As Paul said, all five of those statements fall under one SCN: the SCN is assigned at COMMIT, and that commit SCN is then associated with all the changes made by the transaction it commits.
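    A small sketch that makes the commit-SCN point visible (10g or later; the table name is made up, and ORA_ROWSCN is only an approximate, block-level value unless the table was created with ROWDEPENDENCIES):
    SQL> create table scn_demo (id number);
    SQL> insert into scn_demo values (1);
    SQL> insert into scn_demo values (2);
    SQL> insert into scn_demo values (3);
    SQL> commit;
    SQL> select id, ora_rowscn from scn_demo;
    All three rows report the same ORA_ROWSCN, because the SCN they are associated with is the one assigned when the single COMMIT was issued, not one per INSERT.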

  • TRANSLATE command in ABAP

    Dear all,
    I am an ABAP student; please solve the problem below:
    accept a string like 'emax technologies' and change all occurrences of 'e' to 'g'.
    As a student, you should be doing lots of searching and lots of reading of the documentation; only after that should you post a question here. Please read the rules of this forum to avoid trouble.
    Edited by: kishan P on Aug 30, 2010 2:27 PM

    Hi Sandeep,
    Use the REPLACE statement to achieve this.
    Press F1 on REPLACE for its documentation.
    And please do search SCN before posting such basic questions.
    Regards
    Abhii

  • TABLE ILLEGAL STATEMENT  error with MODIFY command

    Hi gurus,
    I would like to understand the "table illegal statement" error. The error occurs when I use MODIFY as below:
    loop at itab.
       select .......
             where xxx eq itab-xxxx.
           MODIFY itab.
      endselect.
    endloop.
    I know that I have to pass sy-tabix as the INDEX parameter to the MODIFY statement, but I want to know why I have to do this.
    When I debug and watch the sy-tabix field, it does not change inside the SELECT ... ENDSELECT.
    Could the reason for the error be that the cursor of the SELECT affects the MODIFY statement?
    Or what else?
    Thx,

    Hello,
    I guess this is because your MODIFY statement is inside the SELECT ... ENDSELECT and not directly inside the LOOP ... ENDLOOP.
    The SAP documentation says:
    Within a LOOP, the INDEX addition can be omitted. In that case the current table line of the LOOP is changed.
    You have to change the coding:
    DATA: v_index TYPE i.
    loop at itab.
      " sy-tabix (not sy-index) holds the current row number of the internal table inside LOOP AT
      v_index = sy-tabix.
      select .......
            where xxx eq itab-xxxx.
        MODIFY itab INDEX v_index.
      endselect.
    endloop.
    BR,
    Suhas
    PS: The coding practice followed is not very performance-oriented either. Maybe you should have a look around at some blogs and wikis on SCN and change the code accordingly.
    Edited by: Suhas Saha on Nov 19, 2009 9:41 AM

  • How can I determine the minimum SCN number I need to restore up to?

    Say I have a full database backup and I know there is file inconsistency; I want to know the minimum time or SCN number I need to roll forward to in order to be able to open the database.
    For example: I do a database restore.
    restore database ;
    RMAN> sql 'alter database open read only';
    sql statement: alter database open read only
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03009: failure of sql command on default channel at 03/16/2009 15:00:04
    RMAN-11003: failure during parse/execution of SQL statement: alter database open read only
    ORA-16004: backup database requires recovery
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/u01/oradata/p1/system01.dbf'
    I need to apply archived log files. All references I find for ORA-01194 state that the solution is to "apply more logs until the file is consistent". But how many logs, or more appropriately, up to what time or SCN? How does one determine what time or SCN is required to get all files consistent?
    I thought this query might provide the answer, but it doesn't
    select max(checkpoint_change#)
    from v$datafile_header
    MAX(CHECKPOINT_CHANGE#)
    7985876903
    --It applies a bit more redo, but not enough to make my datafiles consistent.
    recover database until SCN=7985876903 ;
    Starting recover at 03/16/09 15:04:54
    using channel ORA_DISK_1
    using channel ORA_DISK_2
    using channel ORA_DISK_3
    using channel ORA_DISK_4
    using channel ORA_DISK_5
    using channel ORA_DISK_6
    using channel ORA_DISK_7
    using channel ORA_DISK_8
    starting media recovery
    channel ORA_DISK_1: starting archive log restore to default destination
    channel ORA_DISK_1: restoring archive log
    archive log thread=1 sequence=18436
    channel ORA_DISK_1: reading from backup piece /temp-oracle/backup/hot/p1/20090315/hourly.arch_P1_47353_681538638_1
    channel ORA_DISK_1: restored backup piece 1
    piece handle=/temp-oracle/backup/hot/p1/20090315/hourly.arch_P1_47353_681538638_1 tag=TAG20090315T041716
    channel ORA_DISK_1: restore complete, elapsed time: 00:02:26
    archive log filename=/u01/app/oracle/flash_recovery_area/P1/archivelog/2009_03_16/o1_mf_1_18436_4vxd81yc_.arc thread=1 se quence=18436
    Oracle Error:
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/u01/oradata/p1/system01.dbf'
    I've discovered I need to apply archived logs until this query reports all datafiles as FUZZY=NO, but that only works by guessing at some time period to roll forward to, checking the FUZZY column, and trying again. Is there a way to know that I have to roll forward to a specific SCN in order for all my datafiles to be consistent?
    select file#
         , status
         , checkpoint_change#
         , checkpoint_time
         , FUZZY
         , RECOVER
    ,LAST_DEALLOC_SCN
    from v$datafile_header
    order by checkpoint_time
    Thanks,
    Jason

    The minimum point in time is the time when the last backup piece for datafiles in that backup was completed.
    Your alert.log should show the redo log sequence number at that time.
    You can query V$ARCHIVED_LOG and get the FIRST_CHANGE# of the first archived log generated after that backup piece completed.
    A LIST BACKUP; in RMAN should also show you the SCNs at the time of the backups.
    You can also query TIMESTAMP_TO_SCN -- e.g.
    select timestamp_to_scn(to_timestamp('15-MAR-09 09:24:01','DD-MON-RR HH24:MI:SS')) from dual;
    will return an approximation of the SCN.
    Hemant K Chitale
    http://hemantoracledba.blogspot.com
    Edited by: Hemant K Chitale on Mar 17, 2009 9:41 AM
    added the LIST BACKUP command from RMAN.
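    A hedged sketch of those checks in SQL (the comparison between archived-log times and backup-piece completion time is my own way of expressing "the first archived log generated after that backup piece completed"; adjust it to your environment):
    -- Highest checkpoint SCN among the restored datafile headers, and any files still fuzzy:
    select max(checkpoint_change#) from v$datafile_header;
    select file#, fuzzy, checkpoint_change# from v$datafile_header where fuzzy = 'YES';
    -- FIRST_CHANGE# of the first archived log whose redo started after the last backup piece completed:
    select min(first_change#)
    from   v$archived_log
    where  first_time > (select max(completion_time) from v$backup_piece);
    Recovering with UNTIL SCN set to at least that value should apply all the redo generated while the backup was running, which is what the fuzzy datafiles are waiting for.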

  • Getting error while running duplicate command

    RMAN> duplicate target database to JEFFDUP;
    Starting Duplicate Db at 18-JAN-11
    using target database control file instead of recovery catalog
    allocated channel: ORA_AUX_DISK_1
    channel ORA_AUX_DISK_1: sid=156 devtype=DISK
    contents of Memory Script:
    set until scn 1188531;
    set newname for datafile 1 to
    "C:\ORACLE\SYSTEM01.DBF";
    set newname for datafile 2 to
    "C:\ORACLE\UNDOTBS01.DBF";
    set newname for datafile 3 to
    "C:\ORACLE\SYSAUX01.DBF";
    set newname for datafile 4 to
    "C:\ORACLE\USERS01.DBF";
    set newname for datafile 5 to
    "C:\ORACLE\EXAMPLE01.DBF";
    restore
    check readonly
    clone database
    executing Memory Script
    executing command: SET until clause
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    Starting restore at 18-JAN-11
    using channel ORA_AUX_DISK_1
    channel ORA_AUX_DISK_1: starting datafile backupset restore
    channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
    restoring datafile 00001 to C:\ORACLE\SYSTEM01.DBF
    restoring datafile 00002 to C:\ORACLE\UNDOTBS01.DBF
    restoring datafile 00003 to C:\ORACLE\SYSAUX01.DBF
    restoring datafile 00004 to C:\ORACLE\USERS01.DBF
    restoring datafile 00005 to C:\ORACLE\EXAMPLE01.DBF
    channel ORA_AUX_DISK_1: reading from backup piece C:\ORACLE\PRODUCT\10.2.0\DB_2\FLASH_RECOVERY_AREA\ORCL4\BACKUPSET\2011_01_10\O1_MF_NNNDF_TAG20110110T001815_6L
    QJM5_.BKP
    channel ORA_AUX_DISK_1: restored backup piece 1
    piece handle=C:\ORACLE\PRODUCT\10.2.0\DB_2\FLASH_RECOVERY_AREA\ORCL4\BACKUPSET\2011_01_10\O1_MF_NNNDF_TAG20110110T001815_6LN0QJM5_.BKP tag=TAG20110110T001815
    channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:01:06
    Finished restore at 18-JAN-11
    sql statement: CREATE CONTROLFILE REUSE SET DATABASE "JEFFDUP" RESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
    LOGFILE
    GROUP 1 ( 'C:\ORACLE\REDO01.LOG' ) SIZE 50 M REUSE,
    GROUP 2 ( 'C:\ORACLE\REDO02.LOG' ) SIZE 50 M REUSE,
    GROUP 3 ( 'C:\ORACLE\REDO03.LOG' ) SIZE 50 M REUSE
    DATAFILE
    'C:\ORACLE\SYSTEM01.DBF'
    CHARACTER SET WE8MSWIN1252
    **RMAN-00571: ===========================================================**
    **RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============**
    **RMAN-00571: ===========================================================**
    **RMAN-03002: failure of Duplicate Db command at 01/18/2011 19:10:32**
    **RMAN-06136: ORACLE error from auxiliary database: ORA-01503: CREATE CONTROLFILE failed**
    **ORA-01130: database file version 10.2.0.3.0 incompatible with ORACLE version 10.2.0.0.0**
    **ORA-01110: data file 1: 'C:\ORACLE\SYSTEM01.DBF'**
    Please help me in this..
    Thanks...

    >
    01504, 00000, "database name '%s' does not match parameter db_name '%s'"
    // *Cause:  The name in a database create or mount does not match the name
    // given in the INIT.ORA parameter db_name.
    // *Action: correct or omit one of the two names.
    >
    I think you have to edit the pfile of the auxiliary db.
    see:
    http://www.google.de/url?sa=t&source=web&cd=1&ved=0CBsQFjAA&url=http%3A%2F%2Fblogs.oracle.com%2FAlejandroVargas%2Fgems%2FRMANDUPDBPRACTICE.pdf&ei=PLQ1TYigMIqUOtyI_bYC&usg=AFQjCNHwggp30FDq-l3Wq5Pu4gk48X3Xhw
    Regards,
    - wiZ
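    The error actually reported above is ORA-01130 (datafile version 10.2.0.3.0 vs ORACLE version 10.2.0.0.0). A hedged suggestion, since this often means the auxiliary instance was started with a lower COMPATIBLE setting (or an unpatched home) than the source database: set COMPATIBLE in the auxiliary pfile to match the target, restart the auxiliary in NOMOUNT, and rerun the DUPLICATE. The pfile path below is illustrative.
    -- in the auxiliary pfile (e.g. initJEFFDUP.ora), alongside db_name=JEFFDUP:
    --   compatible='10.2.0.3.0'
    SQL> shutdown abort
    SQL> startup nomount pfile='C:\oracle\initJEFFDUP.ora'
    If the auxiliary ORACLE_HOME really is an older patch level than 10.2.0.3, it needs to be patched to match instead.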

  • How to create a single SCN [Inbound Delivery] 'VL31N' for multiple PO's

    How can I create a single SCN (inbound delivery, transaction VL31N) for multiple purchase orders with the help of a BAPI or BDC recording on release 4.6B?
    Manually it is possible, but how is it possible in the background, i.e. through a BDC recording or BAPIs?
    [As we do not have an option of creating one SCN (inbound delivery) for multiple purchase orders in VL31N.]
    Please provide the needed information.

    These mpeg2 clips do not have audio.
    I simply want to create a script that can read in the files, append to each other and then export to an .mov format.
    I want the process to be called from a command line that will open QT, run the script, read in the files, append and export.
    Thanks.
    G5 and Mac Pro Mac OS X (10.4.8) PC's and Windows

  • "ORA-01203 - wrong creation SCN" got during copy of a db on another machine

    Hello colleagues,
    I copy a database from one machine to a second one using this procedure:
    I set each tablespace (data and temp) in backup mode
    I copy the datafiles (data and temp)
    I copy the control file
    I copy archived redo logs
    On the second machine I try to start up the database and apply the redo, but I get the following errors:
    SQL> @/usr/Systems/1359HA_9.0.0_Master/HA_EOMS_1_9.0.0_Master/tmp/oracle/CACH
    E/apply_redo.sql;
    ORACLE instance started.
    Total System Global Area 423624704 bytes
    Fixed Size 2044552 bytes
    Variable Size 209718648 bytes
    Database Buffers 209715200 bytes
    Redo Buffers 2146304 bytes
    Database mounted.
    alter database recover automatic from '/usr/Systems/1359HA_9.0.0_Master/HA_EOMS_1_9.0.0_Master/data/warm_rep
    l/WarmArchive/CACHE' database until cancel using backup controlfile
    ERROR at line 1:
    ORA-00283: recovery session canceled due to errors
    ORA-01110: data file 1: '/cache/db/db01/system_1.dbf'
    ORA-01122: database file 1 failed verification check
    ORA-01110: data file 1: '/cache/db/db01/system_1.dbf'
    ORA-01203: wrong incarnation of this file - wrong creation SCN
    Above you can see the recovery command and the error it produced.
    What can I do to troubleshoot the problem?
    thanks for the support
    Enrico
    The complete copy procedure is the following:
    #!/bin/ksh
    # Step 2 -- Verifying the DBMS ARCHIVELOG mode
    $ORACLE_HOME/bin/sqlplus /nolog << EOF
    connect / as sysdba
    spool ${ORACLE_TMP_DIR}/archive.log
    archive log list;
    spool off
    EOF
    grep NOARCHIVELOG ${ORACLE_TMP_DIR}/archive.log >/dev/null 2>&1
    # Step 3 -- Creating DB_filenames.conf / DB_controfile.conf fles
    [ -f ${ORACLE_TMP_DIR}/DB_filenames.conf ] && rm -f ${ORACLE_TMP_DIR}/DB_filenames.conf
    [ -f ${ORACLE_TMP_DIR}/DB_controfile.conf ] && rm -f ${ORACLE_TMP_DIR}/DB_controfile.conf
    [ -f ${ORACLE_TMP_DIR}/DB_TEMP_filenames.conf ] && rm -f ${ORACLE_TMP_DIR}/DB_TEMP_filenames.conf
    $ORACLE_HOME/bin/sqlplus /nolog << EOF
    connect / as sysdba
    set linesize 600;
    spool ${ORACLE_TMP_DIR}/DB_filenames.conf
    select 'TABLESPACE=',tablespace_name from sys.dba_data_files;
    select 'FILENAME=',file_name from sys.dba_data_files;
    select 'LOGFILE=',MEMBER from v\$logfile;
    spool off
    EOF
    $ORACLE_HOME/bin/sqlplus /nolog << EOF
    connect / as sysdba
    set linesize 600;
    spool ${ORACLE_TMP_DIR}/DB_controfile.conf
    select name from v\$controlfile;
    spool off
    EOF
    $ORACLE_HOME/bin/sqlplus /nolog << EOF
    connect / as sysdba
    set linesize 600;
    spool ${ORACLE_TMP_DIR}/DB_TEMP_filenames.conf
    select 'TABLESPACE=',tablespace_name from sys.dba_temp_files;
    select 'FILENAME=',file_name from sys.dba_temp_files;
    spool off
    EOF
    note "Executing cp ${ORACLE_TMP_DIR}/DB_filenames.conf ${INSTANCE_DATA_DIR}/DB_filenames.conf ..."
    cp ${ORACLE_TMP_DIR}/DB_filenames.conf ${INSTANCE_DATA_DIR}/DB_filenames.conf
    [ $? -ne 0 ] && error "Error executing cp ${ORACLE_TMP_DIR}/DB_filenames.conf ${INSTANCE_DATA_DIR}/DB_filenames.conf!"\
         && LocalExit 1
    chmod ug+x ${INSTANCE_DATA_DIR}/DB_filenames.conf
    note "Executing cp ${ORACLE_TMP_DIR}/DB_controfile.conf ${INSTANCE_DATA_DIR}/DB_controfile.conf ..."
    cp ${ORACLE_TMP_DIR}/DB_controfile.conf ${INSTANCE_DATA_DIR}/DB_controfile.conf
    [ $? -ne 0 ] && error "Error executing cp ${ORACLE_TMP_DIR}/DB_controfile.conf ${INSTANCE_DATA_DIR}/DB_controfile.conf!"\
         && LocalExit 1
    chmod ug+x ${INSTANCE_DATA_DIR}/DB_controfile.conf
    note "Executing cp ${ORACLE_TMP_DIR}/DB_TEMP_filenames.conf ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf ..."
    cp ${ORACLE_TMP_DIR}/DB_TEMP_filenames.conf ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf
    [ $? -ne 0 ] && error "Error executing cp ${ORACLE_TMP_DIR}/DB_TEMP_filenames.conf ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf!"\
         && LocalExit 1
    chmod ug+x ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf
    set -a
    set -A arr_tablespace `grep "^TABLESPACE=" ${INSTANCE_DATA_DIR}/DB_filenames.conf | awk '{ print \$2 }'`
    index=`grep "^TABLESPACE" ${INSTANCE_DATA_DIR}/DB_filenames.conf | wc -l`
    backup_status=0
    i=0
    while [ $i -lt $index ]
    do
    note "tablespace=${arr_tablespace[$i]}"
    $ORACLE_HOME/bin/sqlplus /nolog << EOF
    connect / as sysdba
    set linesize 600;
    spool ${ORACLE_TMP_DIR}/tablespace.log
    select 'FILENAME=',file_name from sys.dba_data_files where tablespace_name='${arr_tablespace[$i]}';
    spool off
    alter tablespace ${arr_tablespace[$i]} end backup;
    spool ${ORACLE_TMP_DIR}/backup_tablespace.log
    alter tablespace ${arr_tablespace[$i]} begin backup;
    spool off
    EOF
    set -A arr_filename `grep "^FILENAME=" ${ORACLE_TMP_DIR}/tablespace.log | awk '{ print \$2 }'`
    index1=`grep "^FILENAME" ${ORACLE_TMP_DIR}/tablespace.log | wc -l`
    h=0
    while [ $h -lt $index1 ]
    do
    name=`basename ${arr_filename[$h]}`
    note "filename = ${arr_filename[$h]}"
    $ORACLE_HOME/bin/sqlplus /nolog << EOF
    connect / as sysdba
    host compress -c ${arr_filename[$h]} > ${BACKUP_AREA}/$name.Z
    EOF
    h=`expr $h + 1`
    done
    $ORACLE_HOME/bin/sqlplus /nolog << EOF
    connect / as sysdba
    spool ${ORACLE_TMP_DIR}/backup_tablespace.log
    alter tablespace ${arr_tablespace[$i]} end backup;
    spool off
    EOF
    i=`expr $i + 1`
    done
    [ $backup_status -eq 1 ] && LocalExit 1
    set -a
    set -A arr_tablespace `grep "^TABLESPACE=" ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf | awk '{ print \$2 }'`
    index=`grep "^TABLESPACE" ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf | wc -l`
    i=0
    while [ $i -lt $index ]
    do
    note "tablespace=${arr_tablespace[$i]}"
    $ORACLE_HOME/bin/sqlplus /nolog << EOF
    connect / as sysdba
    set linesize 600;
    spool ${ORACLE_TMP_DIR}/tablespace.log
    select 'FILENAME=',file_name from sys.dba_temp_files where tablespace_name='${arr_tablespace[$i]}';
    spool off
    EOF
    set -A arr_filename `grep "^FILENAME=" ${ORACLE_TMP_DIR}/tablespace.log | awk '{ print \$2 }'`
    index1=`grep "^FILENAME" ${ORACLE_TMP_DIR}/tablespace.log | wc -l`
    h=0
    while [ $h -lt $index1 ]
    do
    name=`basename ${arr_filename[$h]}`
    note "filename = ${arr_filename[$h]}"
    $ORACLE_HOME/bin/sqlplus /nolog << EOF
    connect / as sysdba
    host compress -c ${arr_filename[$h]} > ${BACKUP_AREA}/$name.Z
    EOF
    h=`expr $h + 1`
    done
    i=`expr $i + 1`
    done
    # "log switch & controlfile backup"
    $ORACLE_HOME/bin/sqlplus /nolog << EOF
    connect / as sysdba
    spool ${ORACLE_TMP_DIR}/backup_controlfile.log
    alter database backup controlfile to '${BACKUP_AREA}/ctrl_pm.ctl' reuse;
    host chmod a+rw ${BACKUP_AREA}/ctrl_pm.ctl
    alter system archive log current;
    spool off
    spool ${ORACLE_TMP_DIR}/archive_info.log
    archive log list;
    spool off
    EOF
    # Step 5 -- Copying the DBMS on the companion node
    note "transferring archived redo log files from ACT to SBY host"
    name=`grep 'Archive destination' ${ORACLE_TMP_DIR}/archive_info.log| awk '{ print \$3 }'`
    set -A vett_logfiles `grep "^LOGFILE=" ${INSTANCE_DATA_DIR}/DB_filenames.conf | awk '{ print \$2 }'`
    index=`grep "^LOGFILE" ${INSTANCE_DATA_DIR}/DB_filenames.conf | wc -l`
    i=0
    while [ $index -gt 0 ]
    do
    name=`basename ${vett_logfiles[$i]}`
    ###MOD001
    $ORACLE_HOME/bin/sqlplus /nolog << EOF
    connect / as sysdba
    host cp ${vett_logfiles[$i]} ${BACKUP_AREA}/$name
    host chmod a+rw ${BACKUP_AREA}/$name
    EOF
    if [ $? -ne 0 ]; then
    error "Error copying logfile on LOCAL_BACKUP_AREA"
    LocalExit 1
    fi
    note "log_file=${vett_logfiles[$i]}"
    index=`expr $index - 1`
    i=`expr $i + 1`
    done
    note "Executing RemoteCopy ${COMPANION_HOSTNAME} ${INSTANCE_DATA_DIR}/DB_filenames.conf ${INSTANCE_DATA_DIR}/DB_filenames.conf 0 -k -ret 2 ..."
    RemoteCopy ${COMPANION_HOSTNAME} ${INSTANCE_DATA_DIR}/DB_filenames.conf ${INSTANCE_DATA_DIR}/DB_filenames.conf 0 -k -ret 2
    if [ $? -ne 0 ]; then
    error "Error executing RemoteCopy ${COMPANION_HOSTNAME} ${INSTANCE_DATA_DIR}/DB_filenames.conf ${INSTANCE_DATA_DIR}/DB_filenames.conf 0 -ret 2!"
    LocalExit 1
    fi
    note "Executing RemoteCopy ${COMPANION_HOSTNAME} ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf 0 -k -ret 2 ..."
    RemoteCopy ${COMPANION_HOSTNAME} ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf 0 -k -ret 2
    if [ $? -ne 0 ]; then
    error "Error executing RemoteCopy ${COMPANION_HOSTNAME} ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf 0 -ret 2!"
    LocalExit 1
    fi
    note "Executing RemoteCopy ${COMPANION_HOSTNAME} ${INSTANCE_DATA_DIR}/DB_controfile.conf ${INSTANCE_DATA_DIR}/DB_controfile.conf 0 -k -ret 2 ..."
    RemoteCopy ${COMPANION_HOSTNAME} ${INSTANCE_DATA_DIR}/DB_controfile.conf ${INSTANCE_DATA_DIR}/DB_controfile.conf 0 -k -ret 2
    if [ $? -ne 0 ]; then
    error "Error executing RemoteCopy ${COMPANION_HOSTNAME} ${INSTANCE_DATA_DIR}/DB_controfile.conf ${INSTANCE_DATA_DIR}/DB_controfile.conf 0 -k -ret 2!"
    LocalExit 1
    fi
    note "Executing RemoteCopy ${COMPANION_HOSTNAME} ${BACKUP_AREA} ${RECOVER_AREA} 0 -k -ret 2 ..."
    RemoteCopy ${COMPANION_HOSTNAME} ${BACKUP_AREA} ${RECOVER_AREA} 0 -k -ret 2

    If the operating system is the same:
    Working machine
    ================
    Shut down the database and copy everything:
    Copy the init.ora
    Copy the dbf, ctl and log files
    Copy the bdump, udump etc.
    On the second machine
    ==================
    Copy your files to the same path as the original, i.e.
    C:\oracle..<dbname>\system.dbf
    Start the database
    If the paths on the second machine do not match the original, update this thread again
    Michael
    http://mikegeorgiou.blogspot.com
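    For reference, a minimal sketch of the per-tablespace hot-backup sequence that the long script above automates (tablespace name and paths are illustrative). Two details worth noting with this approach: the control file copied to the target should be produced by ALTER DATABASE BACKUP CONTROLFILE rather than an OS copy of the live control file, and temp files do not need to be copied at all, since the temporary tablespace can simply be recreated on the target.
    SQL> alter tablespace users begin backup;
    -- copy the USERS datafiles at OS level while the tablespace is in backup mode
    SQL> alter tablespace users end backup;
    SQL> alter system archive log current;
    SQL> alter database backup controlfile to '/backup/ctrl_pm.ctl' reuse;
    -- on the target, after restoring the datafiles, the backup control file and the archived logs:
    SQL> startup mount
    SQL> recover automatic database using backup controlfile until cancel;
    SQL> alter database open resetlogs;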

  • In 10g, can we dump the control file and get SCN info?

    Hi,
    I have a question; can anyone help me with this?
    In 9.2.0.5 we could dump the control file and get the SCN using the command:
    alter session set events 'immediate trace name CONTROLF level 10';
    In 10.1.0.4.0 the output of the dump has changed and we do not get the SCN, or any of the following information:
    DATABASE ENTRY
    CHECKPOINT PROGRESS RECORDS
    EXTENDED DATABASE ENTRY
    REDO THREAD RECORDS
    LOG FILE RECORDS
    DATA FILE RECORDS
    RMAN CONFIGURATION RECORDS
    LOG FILE HISTORY RECORDS
    OFFLINE RANGE RECORDS
    ARCHIVED LOG RECORDS
    BACKUP SET RECORDS
    BACKUP PIECE RECORDS
    BACKUP DATAFILE RECORDS
    Can I get similar output in 10g as 9i ?
    with regards,
    Dilip.

    Hi.
    What are you trying to achieve here? If you just want the current SCN, you can get it using one of these:
    SQL> select current_scn from v$database;
    CURRENT_SCN
    8058824527
    1 row selected.
    or
    SQL> select dbms_flashback.get_system_change_number from dual;
    GET_SYSTEM_CHANGE_NUMBER
                  8079317404
    1 row selected.
    Cheers
    Tim...
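    If it is specifically the record types listed above (datafile, archived log, backup set entries and so on) that you were reading from the 9i dump, most of the same control-file records are exposed through V$ views; a few illustrative queries, not a complete mapping:
    SQL> select checkpoint_change# from v$database;                          -- database entry
    SQL> select file#, checkpoint_change# from v$datafile;                   -- data file records
    SQL> select sequence#, first_change#, next_change# from v$archived_log;  -- archived log records
    SQL> select recid, set_stamp, set_count, completion_time from v$backup_set;  -- backup set records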

  • How to find the timestamp and SCN in the standby database?

    Hi,
    I have Oracle 9.2.0.4 RAC with 2 nodes in production. The logs generated on these servers are manually moved to my standby database and applied. To find the maximum log sequence applied on the standby database, I am using the query below on the standby:
    Select thread#, max(sequence#) from v$log_history group by thread#
    In general I use the "recover standby database until cancel" command and then check the database with the query above to see whether all the logs have been applied or not.
    If I use time-based or SCN-based recovery on the standby, i.e. "recover standby database until time <time>" or "recover standby database until change <scn number>", then after the recovery completes, apart from the "Media recovery complete" message or looking at the alert log, is there any way to query the standby database so that I can identify the time or SCN up to which the archived redo logs were applied?

    Hi Sridhar,
    There should be some view that has the applied-SCN information. There is one more option I can suggest: create a heartbeat table in production with two columns, SCN and timestamp, and update it every minute. From the standby you can query this table and get a fair idea of the applied SCN and timestamp.
    When exporting, you can export using FLASHBACK_SCN, taking the value from the heartbeat table on the standby.
    This heartbeat table approach is very common in Streams environments. Just see if this helps you.
    hth,
    http://borndba.com
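    A hedged sketch of that heartbeat idea (all names are made up; refresh the row from a job on the primary every minute, and query it on the standby after it has been opened read only or after an apply pass):
    -- on the primary
    create table heartbeat (id number primary key, scn number, ts date);
    merge into heartbeat h
    using (select 1 as id from dual) d
    on (h.id = d.id)
    when matched then update set
         h.scn = dbms_flashback.get_system_change_number,
         h.ts  = sysdate
    when not matched then insert (id, scn, ts)
         values (1, dbms_flashback.get_system_change_number, sysdate);
    commit;
    -- on the standby: shows (approximately) how far the applied redo has caught up
    select scn, ts from heartbeat;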

  • RMAN-05556: not all datafiles have backups that can be recovered to SCN

    Oracle 11.2.0.2 SE-One
    Oracle Linux 5.6 x86-64
    Weekly refresh of a test db from prod, using rman DUPLICATE DATABASE, failed with “RMAN-05556: not all datafiles have backups that can be recovered to SCN”
    Background Summary:
    Weekly inc 0 backup of production starts on Sunday at 0100, normally completes around 1050.  Includes backups of archivelogs
    Another backup of just archivelogs runs on Sunday at 1200, normally completes NLT 1201.
    On the test server, the refresh job starts on Sunday at 1325.  In the past this script used a set until time \"to_date('`date +%Y-%m-%d` 11:55:00','YYYY-MM-DD hh24:mi:ss')\"; -- hard-coded for ‘today at 11:55’.
    For a variety of reasons I decided to replace this semi-hard-coded UNTIL with a value determined by querying the RMAN catalog for the completion time of the inc 0 backup. This tested out just fine in my vbox lab, even when I deliberately drove some updates and log switches while the backup was running. But the first time it ran live I got the error reported above.
    Details:
    The key part of the inc 0 backup is this (run from a shell script)
    export BACKUP_LOC=/u01/backup/dbprod
    $ORACLE_HOME/bin/rman target=/ catalog rman/***@rmcat<<EOF
    configure backup optimization on;
    configure default device type to disk;
    configure retention policy to recovery window of 2 days;
    crosscheck backup;
    crosscheck archivelog all;
    delete noprompt force obsolete;
    delete noprompt force expired backup;
    delete noprompt force expired archivelog all;
    configure controlfile autobackup on;
    configure controlfile autobackup format for device type disk to '$BACKUP_LOC/%d_%F_ctl.backup';
    CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT   '$BACKUP_LOC/%U.rman' MAXPIECESIZE 4096 M;
    sql "alter system archive log current";
    show all;
    backup as compressed backupset archivelog all delete all input format "$BACKUP_LOC/%U.alog";
    backup as compressed backupset incremental level 0 database tag tag_dbprod;
    sql "alter system archive log current";
    backup as compressed backupset archivelog all delete all input format "$BACKUP_LOC/%U.alog";
    list recoverable backup;
    EOF
    The archivelog-only backup (runs at noon) looks like this:
    export BACKUP_LOC=/u01/backup/dbprod
    $ORACLE_HOME/bin/rman target=/ catalog rman/***@rmcat<<EOF
    configure backup optimization on;
    configure default device type to disk;
    configure retention policy to recovery window of 2 days;
    crosscheck backup;
    crosscheck archivelog all;
    delete noprompt force obsolete;
    delete noprompt force expired backup;
    delete noprompt force expired archivelog all;
    configure controlfile autobackup on;
    configure controlfile autobackup format for device type disk to '$BACKUP_LOC/%d_%F_ctl.backup';
    CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT   '$BACKUP_LOC/%U.rman' MAXPIECESIZE 4096 M;
    sql "alter system archive log current";
    show all;
    backup as compressed backupset archivelog all delete all input format "$BACKUP_LOC/%U.alog";
    list recoverable backup;
    EOF
    And the original refresh looked like this:
    >> a step to ftp the backups from the prod server to the test server, and some other housekeeping  <<, then
    cd /backup/dbtest
    echo "connect catalog rman/***@rmcat" >  /backup/dbtest/dbtest_refresh.rman
    echo "connect target sys/*******@dbprod" >> /backup/dbtest/dbtest_refresh.rman
    echo "connect auxiliary /"             >> /backup/dbtest/dbtest_refresh.rman
    echo "run {"                           >> /backup/dbtest/dbtest_refresh.rman
    echo "set until time \"to_date('`date +%Y-%m-%d` 11:55:00','YYYY-MM-DD hh24:mi:ss')\";"  >> /backup/dbtest/dbtest_refresh.rman
    echo "duplicate target database to DBTEST;"  >> /backup/dbtest/dbtest_refresh.rman
    echo "}" >> /backup/dbtest/dbtest_refresh.rman
    So, my mod to the refresh was
    bkup_point=`sqlplus -s rman/***@rmcat <<EOF1
    set echo off verify off feedback off head off pages 0 trimsp on
    select to_char(max(completion_time),'yyyy-mm-dd hh24:mi:ss')
    from rc_backup_set_details
    where db_name='DBPROD'
    and backup_type='D'
    and incremental_level=0
    exit
    EOF1`
    cd /backup/dbtest
    echo "connect catalog rman/***@rmcat"     > /backup/dbtest/dbtest_refresh.rman
    echo "connect target sys/*******@dbprod"    >> /backup/dbtest/dbtest_refresh.rman
    echo "connect auxiliary /"                >> /backup/dbtest/dbtest_refresh.rman
    echo "run {"                              >> /backup/dbtest/dbtest_refresh.rman
    echo "set until time \"to_date('${bkup_point}','YYYY-MM-DD hh24:mi:ss')\";"  >> /backup/dbtest/dbtest_refresh.rman
    echo "duplicate target database to DBTEST;" >> /backup/dbtest/dbtest_refresh.rman
    echo "}"                                  >> /backup/dbtest/dbtest_refresh.rman
    Now the fun begins.
    First, an echo in the refresh script confirmed the ‘bkup_point’:
    =======================================================
    We will restore to 2013-08-25 10:41:38
    =======================================================
    Internally, rman reset the ‘until’ as follows:
    executing command: SET until clause
    Starting Duplicate Db at 25-Aug-2013 15:35:44
    allocated channel: ORA_AUX_DISK_1
    channel ORA_AUX_DISK_1: SID=162 device type=DISK
    contents of Memory Script:
       set until scn  45633141350;
    Examining the result of LIST BACKUP (the last step of all of my rman scripts) the full backup shows this:
    BS Key  Type LV Size       Device Type Elapsed Time Completion Time    
    5506664 Full 61.89M     DISK        00:00:03     25-Aug-2013 02:11:32
            BP Key: 5506678   Status: AVAILABLE  Compressed: NO  Tag: TAG20130825T021129
    Piece Name: /u01/backup/dbprod/DBPROD_c-3960114099-20130825-00_ctl.backup
      SPFILE Included: Modification time: 24-Aug-2013 22:33:08
      SPFILE db_unique_name: DBPROD
      Control File Included: Ckp SCN: 45628880455   Ckp time: 25-Aug-2013 02:11:29
    BS Key Type LV Size       Device Type Elapsed Time Completion Time    
    5507388 Incr 0 206.03G    DISK        08:30:00     25-Aug-2013 10:41:30
      List of Datafiles in backup set 5507388
      File LV Type Ckp SCN    Ckp Time             Name
      1    0 Incr 45628880495 25-Aug-2013 02:11:38 +SMALL/dbprod/datafile/system.258.713574775
      >>>>>>>>> snip lengthy list <<<<<<<<<
      74   0 Incr 45628880495 25-Aug-2013 02:11:38 +SMALL/dbprod/event_i2.dbf
      Backup Set Copy #1 of backup set 5507388
      Device Type Elapsed Time Completion Time      Compressed Tag
      DISK        08:30:00     25-Aug-2013 10:41:36 YES        TAG_DBPROD
        List of Backup Pieces for backup set 5507388 Copy #1
        BP Key  Pc# Status      Piece Name
        5507391 1   AVAILABLE   /u01/backup/dbprod/eeoi55iq_1_1.rman
        >>>>>>>>>>>>> snip lengthy list <<<<<<<<<<<
        5507442 52  AVAILABLE   /u01/backup/dbprod/eeoi55iq_52_1.rman
    Notice the slight difference in time between what is reported in the LIST BACKUP and what was reported by my query to the catalog.
    Continuing with the backup list, the second archivelog  backup in the script generated six backupsets.  The fifth set  showed:
    BS Key Size       Device Type Elapsed Time Completion Time    
    5507687 650.19M DISK        00:02:18     25-Aug-2013 10:54:53
            BP Key: 5507694   Status: AVAILABLE  Compressed: YES  Tag: TAG20130825T104156
    Piece Name: /u01/backup/dbprod/ekoi643j_1_1.alog
      List of Archived Logs in backup set 5507687
      Thrd Seq     Low SCN    Low Time             Next SCN   Next Time
      1    1338518 45632944587 25-Aug-2013 05:58:18 45632947563 25-Aug-2013 05:58:20
        >>>>>>>>>>>>> snip lengthy list <<<<<<<<<<<
      1    1338572 45633135750 25-Aug-2013 10:08:21 45633140240 25-Aug-2013 10:08:24
      1    1338573 45633140240 25-Aug-2013 10:08:24 45633141350 25-Aug-2013 10:30:06
      1    1338574 45633141350 25-Aug-2013 10:30:06 45633141705 25-Aug-2013 10:41:51
      1    1338575 45633141705 25-Aug-2013 10:41:51 45633141725 25-Aug-2013 10:41:55
    Notice the availability of the archivelogs including the referenced scn.
    Investigation of the ftp portion of the refresh script confirmed that all backup pieces were copied from the prod server.
    So what am I overlooking?  Having reverted back to the original script to get the refresh completed,
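    One hedged alternative (the SCN is illustrative, lifted from the listing above): drive the refresh by SCN rather than by a timestamp, picking a value at or beyond the end of the level 0 backup -- for example the Next SCN, 45633141705, of the archived log sealed at 10:41:51, just after the backup completed -- so that the timestamp-to-SCN conversion can never land short of what the datafile backups need:
    run {
      # SCN taken from the archived-log listing, past the end of the level 0 backup
      set until scn 45633141705;
      duplicate target database to DBTEST;
    }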

    HemantKChitale wrote:
    So, technically, you only need the database and archivelogs backed up by the database script and not the noon run of the archivelog backup.
    backup as compressed backupset archivelog all delete all input format "$BACKUP_LOC/%U.alog";
    backup as compressed backupset incremental level 0 database tag tag_dbprod;
    sql "alter system archive log current";
    backup as compressed backupset archivelog all delete all input format "$BACKUP_LOC/%U.alog";
    Yet, why does backup set 5 of the noon archivelog backup show archivelogs from 10:30 to 10:40 if they had been deleted by the database backup script, which has a DELETE INPUT? It is as if the database backup script did NOT delete the archivelogs and the noon run was the one to back up the archivelogs (again?)
    No, that is from the morning full backup. Note the 'Completion Time' of 25-Aug-2013 10:54:53.
    However, the error message seems to point to a datafile. Why would reverting the recovery point to 11:55 make a difference, I wonder.
    As do I.
    Also puzzling to me are the times associated with the completion of the backups.  I don't recall ever having to scrutinize a backup listing this closely so I'm sure it's just a matter of filling in some gaps in my understanding, but I noticed this.  The backup report (list backup;) shows this for the inc 0 backup:
    BS Key  Type LV Size     Device Type Elapsed Time Completion Time
    5507388 Incr 0  206.03G  DISK        08:30:00     25-Aug-2013 10:41:30   ------- NOTE THE COMPLETION TIME ----
      List of Datafiles in backup set 5507388
      File LV Type Ckp SCN     Ckp Time             Name
      1    0  Incr 45628880495 25-Aug-2013 02:11:38 +SMALL/dbprod/datafile/system.258.713574775
      ------ SNIP ------
      74   0  Incr 45628880495 25-Aug-2013 02:11:38 +SMALL/dbprod/event_i2.dbf
      Backup Set Copy #1 of backup set 5507388
      Device Type Elapsed Time Completion Time      Compressed Tag
      DISK        08:30:00     25-Aug-2013 10:41:36 YES        TAG_DBPROD   ------- NOTE THE COMPLETION TIME ----
        List of Backup Pieces for backup set 5507388 Copy #1
        BP Key  Pc# Status      Piece Name
        5507391 1   AVAILABLE   /u01/backup/dbprod/eeoi55iq_1_1.rman
        ------ SNIP ------
        5507442 52  AVAILABLE   /u01/backup/dbprod/eeoi55iq_52_1.rman
    Then the autobackup of the control file immediately following:
    BS Key  Type LV Size    Device Type Elapsed Time Completion Time
    5507523 Full    61.89M  DISK        00:00:03     25-Aug-2013 10:41:47   ------- NOTE THE COMPLETION TIME ----
            BP Key: 5507587   Status: AVAILABLE  Compressed: NO  Tag: TAG20130825T104144
    Piece Name: /u01/backup/dbprod/DBPROD_c-3960114099-20130825-01_ctl.backup
      SPFILE Included: Modification time: 25-Aug-2013 05:57:15
      SPFILE db_unique_name: DBPROD
      Control File Included: Ckp SCN: 45633141671   Ckp time: 25-Aug-2013 10:41:44
    Then the archivelog backup immediately following (remember, this created a total of five backup sets; I'm showing number 4):
    BS Key  Size     Device Type Elapsed Time Completion Time
    5507687 650.19M  DISK        00:02:18     25-Aug-2013 10:54:53   ------- NOTE THE COMPLETION TIME ----
            BP Key: 5507694   Status: AVAILABLE  Compressed: YES  Tag: TAG20130825T104156
    Piece Name: /u01/backup/dbprod/ekoi643j_1_1.alog
      List of Archived Logs in backup set 5507687
      Thrd Seq     Low SCN     Low Time             Next SCN    Next Time
      1    1338518 45632944587 25-Aug-2013 05:58:18 45632947563 25-Aug-2013 05:58:20
      ------ SNIP ------
      1    1338572 45633135750 25-Aug-2013 10:08:21 45633140240 25-Aug-2013 10:08:24
      1    1338573 45633140240 25-Aug-2013 10:08:24 45633141350 25-Aug-2013 10:30:06
      1    1338574 45633141350 25-Aug-2013 10:30:06 45633141705 25-Aug-2013 10:41:51
      1    1338575 45633141705 25-Aug-2013 10:41:51 45633141725 25-Aug-2013 10:41:55
    and the controlfile autobackup immediately following:
    BS Key  Type LV Size    Device Type Elapsed Time Completion Time
    5507984 Full    61.89M  DISK        00:00:03     25-Aug-2013 10:55:07   ------- NOTE THE COMPLETION TIME ----
            BP Key: 5508043   Status: AVAILABLE  Compressed: NO  Tag: TAG20130825T105504
    Piece Name: /u01/backup/dbprod/DBPROD_c-3960114099-20130825-02_ctl.backup
      SPFILE Included: Modification time: 25-Aug-2013 05:57:15
      SPFILE db_unique_name: DBPROD
      Control File Included: Ckp SCN: 45633142131   Ckp time: 25-Aug-2013 10:55:04
    and yet, querying the rman catalog
    SQL> select to_char(max(completion_time),'yyyy-mm-dd hh24:mi:ss')
      2  from rc_backup_set_details
      3  where db_name='DBPROD'
      4  and backup_type='D'
      5  and incremental_level=0
      6  ;
    TO_CHAR(MAX(COMPLET
    2013-08-25 10:41:38
    SQL>
    which doesn't match (to the second) the completion time of either the full backup or the associated controlfile autobackup.
    Hemant K Chitale
    I hope this posts in a readable, understandable manner. I really struggled with the 'enhanced editor', which I normally use. When I pasted in blocks from the rman report, it kept trying to make some sort of table structure out of it .... guess I'll have to follow that up with a question in the Community forum ....

  • OS command syntax to run RPG program (FTP Adapter)

    Hi All,
    I am trying to run an RPG program from the File Adapter's OS command, but I do not know the correct OS syntax.
    The RPG program will create sales orders in JD; PI and JD run on IBM i5/OS.
    I am able to run OS commands like "mkdir", but I do not know the correct syntax to run an RPG program.
    I tried a few OS commands that do not work, like:
    CALL PGM(Library Name/Program Name) PARM(xx,yyy,zzz)
    CALL Library Name.Progran Name(param)
    With regards
    Sunil

    Hi,
    Please check this link:
    https://www.sdn.sap.com/irj/scn/wiki?path=/display/xi/morewiththeFileAdapter
    Hope it throws some light on your solution.
    Thanks.
    Regards,
    Vineetha.

  • Hide command window from transaction iViews in SAP Portal

    Dear experts,
    I want to hide the command window in transaction iViews in SAP Portal when I am using SAP GUI for Windows. I already know the settings for SAP GUI for HTML, so is there any possibility of hiding the command window in transaction iViews when I use SAP GUI for Windows?
    Please help.
    Edited by: mousam jaini on Nov 1, 2010 12:11 PM

    Hi Mousam,
    You can find a solution in SAP Note 1010519.
    If your system does not meet the prerequisites, you can set the parameter below in transaction SICF:
    ~webgui_simple_toolbar=1
    Please go through the article below on displaying an SAP transaction as SAP GUI:
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/e046cb5c-711a-2a10-95a9-81b365901b95
    Thanks
    Keshari

    Hello!  I have Adobe Reader version 11.0.3 and Windows 7 operating system.  I am trying to open an Acrobat file with the ".xfdf" file extension.  The problem is, when i try to open the file nothing happens.  It should open up a secure webpage as a pd