Database refresh using dump and archive logs

Hi all,
I have a full Data Pump dump taken from an Oracle 11g R2 database (PROD) running on an HP-UX server. The dump was taken on March 1, 2012.
I also have all the archive logs up to today (March 13, 2012).
I want to clone it to a new database (TEST) on a Windows machine using this dump, and then refresh (recover) TEST with these archive logs so that it is in sync with PROD up to today.
I need your suggestions.

raoofdba wrote:
Hi all,
I have a full Data Pump dump taken from an Oracle 11g R2 database (PROD) running on an HP-UX server. The dump was taken on March 1, 2012.
I also have all the archive logs up to today (March 13, 2012).
I want to clone it to a new database (TEST) on a Windows machine using this dump, and then refresh (recover) TEST with these archive logs so that it is in sync with PROD up to today.
I need your suggestions.
I suggest you use the transportable tablespace method. Below is the link:
http://neeraj-dba.blogspot.in/2012/01/cross-platform-transportable.html
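Archived redo can only be applied to a physical copy of the database (matching datafiles and SCNs), not on top of a Data Pump import, so the dump plus the March 1-13 archive logs will not by themselves give you a TEST database in sync with PROD; the cross-platform transportable tablespace method linked above moves the datafiles physically instead. A minimal sketch of those steps, assuming a tablespace named USERS, a 32-bit Windows target platform, and an existing directory object DPUMP_DIR (all three are placeholders for illustration):

-- on PROD: check endian formats and make the tablespace read only
SQL> select platform_name, endian_format from v$transportable_platform;
SQL> alter tablespace users read only;

-- export the transportable metadata with Data Pump
$ expdp system DIRECTORY=dpump_dir TRANSPORT_TABLESPACES=users DUMPFILE=users_tts.dmp

-- HP-UX is big-endian and Windows is little-endian, so convert the datafiles before copying them
RMAN> convert tablespace users
      to platform 'Microsoft Windows IA (32-bit)'
      format '/stage/%U';

Even then, the transported data is frozen as of the moment the tablespaces were made read only; rolling a copy forward with archive logs is only possible with an RMAN-based physical clone or a standby database on a compatible platform.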

Similar Messages

  • Urgent: Huge diff in total redo log size and archive log size

    Dear DBAs
    I have a concern regarding the size of the redo and archive logs generated.
    Is the equation below correct?
    total size of redo generated by all sessions = total size of archive log files generated
    I am experiencing a situation where, when I look at the total size of redo generated by all the sessions and the size of the archive logs generated, there is a huge difference.
    My total redo size across all sessions is 780 MB, whereas my archive log directory has consumed 23 GB.
    Before I started measuring, I cleared the archive directory and began monitoring from a specific point in time.
    Environment: Oracle 9i Release 2
    How I tracked the sizing information is below.
    Log on as the SYS user and run the following statements:
    DROP TABLE REDOSTAT CASCADE CONSTRAINTS;
    CREATE TABLE REDOSTAT (
    AUDSID NUMBER,
    SID NUMBER,
    SERIAL# NUMBER,
    SESSION_ID CHAR(27 BYTE),
    STATUS VARCHAR2(8 BYTE),
    DB_USERNAME VARCHAR2(30 BYTE),
    SCHEMANAME VARCHAR2(30 BYTE),
    OSUSER VARCHAR2(30 BYTE),
    PROCESS VARCHAR2(12 BYTE),
    MACHINE VARCHAR2(64 BYTE),
    TERMINAL VARCHAR2(16 BYTE),
    PROGRAM VARCHAR2(64 BYTE),
    DBCONN_TYPE VARCHAR2(10 BYTE),
    LOGON_TIME DATE,
    LOGOUT_TIME DATE,
    REDO_SIZE NUMBER
    )
    TABLESPACE SYSTEM
    NOLOGGING
    NOCOMPRESS
    NOCACHE
    NOPARALLEL
    MONITORING;
    GRANT SELECT ON REDOSTAT TO PUBLIC;
    CREATE OR REPLACE TRIGGER TR_SESS_LOGOFF
    BEFORE LOGOFF
    ON DATABASE
    DECLARE
    PRAGMA AUTONOMOUS_TRANSACTION;
    BEGIN
    INSERT INTO SYS.REDOSTAT
    (AUDSID, SID, SERIAL#, SESSION_ID, STATUS, DB_USERNAME, SCHEMANAME, OSUSER, PROCESS, MACHINE, TERMINAL, PROGRAM, DBCONN_TYPE, LOGON_TIME, LOGOUT_TIME, REDO_SIZE)
    SELECT A.AUDSID, A.SID, A.SERIAL#, SYS_CONTEXT ('USERENV', 'SESSIONID'), A.STATUS, USERNAME DB_USERNAME, SCHEMANAME, OSUSER, PROCESS, MACHINE, TERMINAL, PROGRAM, TYPE DBCONN_TYPE,
    LOGON_TIME, SYSDATE LOGOUT_TIME, B.VALUE REDO_SIZE
    FROM V$SESSION A, V$MYSTAT B, V$STATNAME C
    WHERE
    A.SID = B.SID
    AND
    B.STATISTIC# = C.STATISTIC#
    AND
    C.NAME = 'redo size'
    AND
    A.AUDSID = sys_context ('USERENV', 'SESSIONID');
    COMMIT;
    END TR_SESS_LOGOFF;
    Now, the total sum of REDO_SIZE (B.VALUE) is far less than the archive log size, and this is at a time when no other user is logged in except myself.
    Is there anything wrong with the query used to collect the redo information, or are there hidden processes that do not report redo information on a session basis?
    I have seen a similar implementation at many sites.
    Kindly suggest a mechanism by which I can trace how much redo (or archive log) each user generates on a session basis. I want to track which users/processes are causing the high redo generation.
    If I don't find a solution I will raise an SR with Oracle.
    Thanks
    [V]

    You can query v$sess_io, column BLOCK_CHANGES, to find out which sessions are generating the most redo.
    The following query gives you the session redo statistics:
    select a.sid, b.name, sum(a.value) from v$sesstat a, v$statname b
    where a.statistic# = b.statistic#
    and b.name like '%redo%'
    and a.value > 0
    group by a.sid, b.name;
    If you want, you can restrict it to just the redo size for all current sessions.
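    For the v$sess_io suggestion above, a minimal sketch of that query (BLOCK_CHANGES is a proxy for redo activity, not an exact redo byte count):
    select s.sid, s.username, s.program, i.block_changes
    from v$session s, v$sess_io i
    where s.sid = i.sid
    order by i.block_changes desc;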
    Jaffar

  • RMAN BACKUPS AND ARCHIVED LOG ISSUES

    Product: RMAN
    Date written: 2004-02-17
    RMAN BACKUPS AND ARCHIVED LOG ISSUES
    =====================================
    Scenario #1:
    1) RMAN fails when deleting all archived logs.
    The database writes archive files to two archive destinations.
    The following script is run to delete the archived redo logfiles after the backup:
    run {
    allocate channel c1 type 'sbt_tape';
    backup database;
    backup archivelog all delete input;
    }
    When CROSSCHECK is run to verify whether the archived redo logfiles were deleted, the following
    message appears:
    RMAN> change archivelog all crosscheck;
    RMAN-03022: compiling command: change
    RMAN-06158: validation succeeded for archived log
    RMAN-08514: archivelog filename=
    /oracle/arch/dest2/arcr_1_964.arc recid=19 stamp=368726072
    2) Cause
    This is not an error. RMAN deletes only the archived files in one of the archive
    directories, so the archived log files in the remaining directories are left behind undeleted.
    3) Solution
    To force RMAN to delete the archived log files in all directories, allocate multiple
    channels and have each channel back up and delete the archived files in its own
    archive destination.
    This can be implemented as follows:
    run {
    allocate channel t1 type 'sbt_tape';
    allocate channel t2 type 'sbt_tape';
    backup
    archivelog like '/oracle/arch/dest1/%' channel t1 delete input
    archivelog like '/oracle/arch/dest2/%' channel t2 delete input;
    }
    Scenario #2:
    1) The backup fails because RMAN cannot find an archived log.
    In this scenario, assume the database is backed up with incremental backups.
    Because RMAN can use an incremental backup instead of archived redo logs during recovery,
    an OS utility is used to delete all archived redo logs after the backup.
    However, the next backup fails with the following error:
    RMAN-6089: archive log NAME not found or out of sync with catalog
    2) Cause
    This problem occurs when archived logs are deleted with an OS command; RMAN does not know
    that the archived logs have been removed. RMAN-6089 is raised when RMAN tries to back up
    archived logs that were deleted by the OS command but that it still believes exist.
    3) Solution
    The easiest solution is to use the DELETE INPUT option when backing up the archived logs.
    For example:
    run {
    allocate channel c1 type 'sbt_tape';
    backup archivelog all delete input;
    }
    The next easiest solution is to run the following commands at the RMAN prompt after
    deleting the archived logs with an OS utility:
    RMAN>allocate channel for maintenance type disk;
    RMAN>change archivelog all crosscheck;
    Oracle 8.0:
         RMAN> change archivelog '/disk/path/archivelog_name' validate;
    Oracle 8i:
    RMAN> change archivelog all crosscheck ;
    Oracle 9i:
    RMAN> crosscheck archivelog all ;
    If the catalog COMPATIBLE parameter is set to 8.1.5 or lower, RMAN sets the status of every
    archived log it cannot find to "DELETED". If COMPATIBLE is set to 8.1.6 or higher,
    RMAN deletes the record from the repository.

    Very strange: I issued the following command in RMAN on both the primary and the standby machine, but they do not delete 1_55_758646076.dbf, and I can see in v$archived_log that "/home/oracle/app/oracle/dataguard/1_55_758646076.dbf" has already been applied.
    RMAN> connect target /
    RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
    old RMAN configuration parameters:
    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
    new RMAN configuration parameters:
    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
    new RMAN configuration parameters are successfully stored
    RMAN>
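    One thing worth checking (a hedged note, not specific to this configuration): the deletion policy only marks applied logs as eligible; it does not remove them by itself. Files in the fast recovery area are purged automatically under space pressure, or they can be removed explicitly, for example:
    RMAN> crosscheck archivelog all;
    RMAN> delete noprompt archivelog all;
    With the APPLIED ON ALL STANDBY policy configured, the DELETE command should skip any log that has not yet been applied on every standby unless FORCE is specified.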
    ----------------------------------------------------------------------------------

  • Redo and archive log association

    Hi,
    I am curious about the redo file and archive log file association, is it one to one, or one to many? That is, does one arc log file hold data from just one redo file or many redo files?
    Or is the association redo group to arc file, as opposed to redo file to arc file?
    The size of the arc log files on my machine, sometimes far exceeds the size of a single redo file and sometimes goes well under the size.
    Thanks.

    I am curious about the redo file and archive log file association, is it one to one, or one to many?
    One archive log file represents the contents of one redo log file of a group. You can have multiple logfile members in a group; all the members hold the same data.
    The size of the arc log files on my machine, sometimes far exceeds the size of a single redo file and sometimes goes well under the size.
    The archive log can be smaller than the redo logfile in the following scenarios:
    1. Manual log switch.
    2. Setting the archive_lag_target parameter.
    As for it far exceeding the size of a single redo file, I am not sure about that; I haven't seen an archived log larger than the redo logfile.
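    A quick way to compare the two sizes yourself (a sketch; v$archived_log reports sizes in blocks, so multiply by the block size):
    select group#, thread#, sequence#, bytes/1024/1024 mb from v$log;
    select sequence#, blocks*block_size/1024/1024 mb from v$archived_log order by sequence#;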
    Anand

  • How to recover the database when some of the archive log files get deleted.

    I am facing a problem with an Oracle database, related to archivelogs.
    Our development database is running in archivelog mode, but we don't have backups scheduled and have no recovery catalog.
    While the database was running, the disk got full, so some archivelogs were deleted manually.
    After this the DB was restarted, and now it is not coming up. The errors are as follows:
    SQL> startup
    ORACLE instance started.
    Total System Global Area 1444383504 bytes
    Fixed Size 731920 bytes
    Variable Size 486539264 bytes
    Database Buffers 956301312 bytes
    Redo Buffers 811008 bytes
    Database mounted.
    ORA-01589: must use RESETLOGS or NORESETLOGS option for database open
    SQL> alter database open resetlogs;
    alter database open resetlogs
    ERROR at line 1:
    ORA-01113: file 1 needs media recovery
    ORA-01110: data file 1: '/export/home/oracle/dev/ADVFRW/ADVFRW.system'
    SQL> recover datafile '/export/home/oracle/dev/ADVFRW/ADVFRW.system'
    ORA-00283: recovery session canceled due to errors
    ORA-01610: recovery using the BACKUP CONTROLFILE option must be done
    SQL> recover database using backup controlfile;
    ORA-00279: change 215548705 generated at 09/02/2008 17:06:10 needed for thread
    1
    ORA-00289: suggestion :
    /export/home/oracle/dev/ADVFRW/ADVFRW.archivelog1/LOG_ADVFRW_1107_1.ARC
    ORA-00280: change 215548705 for thread 1 is in sequence #1107
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    /export/home/oracle/dev/ADVFRW/ADVFRW.archivelog1/LOG_ADVFRW_1107_1.ARC
    ORA-00308: cannot open archived log
    '/export/home/oracle/dev/ADVFRW/ADVFRW.archivelog1/LOG_ADVFRW_1107_1.ARC'
    ORA-27037: unable to obtain file status
    SVR4 Error: 2: No such file or directory
    Additional information: 3
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    CANCEL
    Media recovery cancelled.
    SQL>
    1. How to recover the database and bring it online
    Any help will be highly appreciated.
    With Regards
    Hemant Joshi

    Hi,
    Archive log files are copies of redo log files. As redo log files are circularly overwritten, Oracle generates an archive log file for each redo logfile before it is overwritten. So if you have a backup that dates back to 10 am and your database crashed at 3 pm, you cannot use the redo log files alone, as they hold incomplete information. To completely recover the database up to 3 pm, you need the archive log files generated between 10 am and 3 pm. In your case, since you are missing one archive log file, you cannot perform complete recovery and hence will suffer data loss.
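    In this situation the usual path is an incomplete recovery up to the last archive log that still exists, followed by OPEN RESETLOGS; everything after that log is lost. A minimal sketch (assuming the surviving logs are still in the normal archive destination):
    SQL> recover database using backup controlfile until cancel;
    -- apply each suggested archived log that still exists; type CANCEL when the missing one is requested
    SQL> alter database open resetlogs;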

  • Full Backups, Level 0 Backups, and Archived Logs

    We have an active Oracle server and a standby Oracle server. We keep the standby database up to date with a cron script. The script tells the active database to do 'alter system switch logfile;'. We then rsync the archived logs to our standby server and have rman apply them.
    This works everyday except Monday (of course!) and it only recently started failing on Mondays. The only change was that our Sunday backups used to be 'Full' backups but are now 'level 0' backups. Ever since that change, the first attempt to apply the archived logs to the standby server after the level 0 is taken on the active server gives us something like this:
    ORA-00308: cannot open archived log
    '/opt/oracle/flash_recovery_area/ORCL/archivelog/2012_04_16/o1_mf_1_60519_%u_.arc'
    ORA-27037: unable to obtain file status
    Of course, the file is not there and doesn't exist on the active server either. And of course, the nightly level 1 backups do not give us problems applying archived logs to the standby database the rest of the week.
    The only way I know to recover from this is to apply the level 0 backup or take a new level 0 and apply it. After that, all subsequent archive logs just work. Any idea why changing from Full to Level 0 would break this? The Oracle docs insist that a Level 0 is identical to a Full except that level 1s can reference them as parents. This simply cannot be true based on what I'm seeing! I really want to keep the level 0 backups in play if possible. Level 1 cumulatives won't be useful without them.

    Here are the RMAN settings:
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
    CONFIGURE BACKUP OPTIMIZATION OFF; # default
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
    CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/opt/oracle/102/dbs/snapcf_ORCL.f'; # default
    I'm not sure how changing ARCHIVELOG BACKUP COPIES would help. Can you give me a little more information about how that setting comes into play in this situation?
    I actually don't want an archive deletion policy here. We have this done in a script three days after the needed archive logs have been applied. Is it possible that we're deleting archivelogs too soon? Would we ever need to reach back in time to previously applied archive logs in order to apply new ones?
    The %u does resolve, but this message isn't showing it. Here is that same log entry plus a few previous entries that show it does resolve.
    ORA-00279: change 1284618956 generated at 04/13/2012 15:30:05 needed for thread
    1
    ORA-00289: suggestion :
    /opt/oracle/flash_recovery_area/ORCL/archivelog/2012_04_16/o1_mf_1_60518_%u_.arc
    ORA-00280: change 1284618956 for thread 1 is in sequence #60518
    ORA-00278: log file
    '/opt/oracle/flash_recovery_area/ORCL/archivelog/2012_04_13/o1_mf_1_60517_7rjzox
    0l_.arc' no longer needed for this recovery
    ORA-00279: change 1284618958 generated at 04/13/2012 15:30:05 needed for thread
    1
    ORA-00289: suggestion :
    /opt/oracle/flash_recovery_area/ORCL/archivelog/2012_04_16/o1_mf_1_60519_%u_.arc
    ORA-00280: change 1284618958 for thread 1 is in sequence #60519
    ORA-00278: log file
    '/opt/oracle/flash_recovery_area/ORCL/archivelog/2012_04_13/o1_mf_1_60518_7rjzox
    0x_.arc' no longer needed for this recovery
    ORA-00308: cannot open archived log
    '/opt/oracle/flash_recovery_area/ORCL/archivelog/2012_04_16/o1_mf_1_60519_%u_.ar
    c'
    ORA-27037: unable to obtain file status
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 3
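    A quick way to check on the active server whether that sequence was ever archived, or has since been deleted, is to query v$archived_log (a sketch; the sequence numbers come from the errors above):
    select sequence#, name, completion_time, applied, deleted
    from v$archived_log
    where sequence# between 60517 and 60520;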

  • "recover database until cancel" asks for archive log file that do not exist

    Hello,
    Oracle Release : Oracle 10.2.0.2.0
    Last week we performed a restore and then an Oracle recovery using the recover database until cancel command (we didn't use backup control files). It worked fine and we were able to restart the SAP instances. However, I still have questions about Oracle's behaviour with this command.
    First we restored an online backup.
    We tried to restart the database, but got ORA-01113, ORA-01110 errors:
    sr3usr.data1 needed media recovery.
    Then we performed the recovery.
    According to the Oracle documentation, "recover database until cancel" proceeds by prompting you with the suggested filenames of archived redo log files.
    The problem is that it prompts for an archive log file that does not exist.
    As you can see below, it asked for SMAarch1_10420_610186861.dbf, which has never been created. Therefore, I cancelled the recovery manually and restarted the database. We never got the message "media recovery complete".
    ORA-279 signalled during: ALTER DATABASE RECOVER    LOGFILE '/oracle/SMA/oraarch/SMAarch1_10417_61018686
    Fri Sep  7 14:09:45 2007
    ALTER DATABASE RECOVER    LOGFILE '/oracle/SMA/oraarch/SMAarch1_10418_610186861.dbf'
    Fri Sep  7 14:09:45 2007
    Media Recovery Log /oracle/SMA/oraarch/SMAarch1_10418_610186861.dbf
    ORA-279 signalled during: ALTER DATABASE RECOVER    LOGFILE '/oracle/SMA/oraarch/SMAarch1_10418_61018686
    Fri Sep  7 14:10:03 2007
    ALTER DATABASE RECOVER    LOGFILE '/oracle/SMA/oraarch/SMAarch1_10419_610186861.dbf'
    Fri Sep  7 14:10:03 2007
    Media Recovery Log /oracle/SMA/oraarch/SMAarch1_10419_610186861.dbf
    ORA-279 signalled during: ALTER DATABASE RECOVER    LOGFILE '/oracle/SMA/oraarch/SMAarch1_10419_61018686
    Fri Sep  7 14:10:13 2007
    ALTER DATABASE RECOVER    LOGFILE '/oracle/SMA/oraarch/SMAarch1_10420_610186861.dbf'
    Fri Sep  7 14:10:13 2007
    Media Recovery Log /oracle/SMA/oraarch/SMAarch1_10420_610186861.dbf
    Errors with log /oracle/SMA/oraarch/SMAarch1_10420_610186861.dbf
    ORA-308 signalled during: ALTER DATABASE RECOVER    LOGFILE '/oracle/SMA/oraarch/SMAarch1_10420_61018686
    Fri Sep  7 14:15:19 2007
    ALTER DATABASE RECOVER CANCEL
    Fri Sep  7 14:15:20 2007
    ORA-1013 signalled during: ALTER DATABASE RECOVER CANCEL ...
    Fri Sep  7 14:15:40 2007
    Shutting down instance: further logons disabled
    When restarting the database we could see that a recovery of the online redo log was performed automatically. Is this the normal behaviour of a recovery using the "recover database until cancel" command?
    Started redo application at
    Thread 1: logseq 10416, block 482
    Fri Sep  7 14:24:55 2007
    Recovery of Online Redo Log: Thread 1 Group 4 Seq 10416 Reading mem 0
      Mem# 0 errs 0: /oracle/SMA/origlogB/log_g14m1.dbf
      Mem# 1 errs 0: /oracle/SMA/mirrlogB/log_g14m2.dbf
    Fri Sep  7 14:24:55 2007
    Completed redo application
    Fri Sep  7 14:24:55 2007
    Completed crash recovery at
    Thread 1: logseq 10416, block 525, scn 105140074
    0 data blocks read, 0 data blocks written, 43 redo blocks read
    Thank you very much for your help.
    Frod.

    Hi,
    Let me answer your query.
    =======================
    Your question: while performing the recovery, is it possible to locate which online redolog is needed, and then to apply the changes in those logs?
    1. When you have the current controlfile and need complete recovery (no data loss), do not go for until cancel recovery.
    2. Oracle will apply all the redologs (including the current redolog) while the recovery process is on.
    3. During the recovery you need all the redologs listed in the view V$RECOVERY_LOG, plus all the unarchived and current redologs. By querying V$RECOVERY_LOG you can find out which redologs are required.
    4. If a required sequence is not in the archive destination and the recovery process asks for it, query V$LOG to see whether the requested sequence is still part of the online redologs. If yes, you can supply the path of the online redolog member to complete the recovery (see the query sketch below).
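    A sketch of those two lookups (run them in the mounted instance performing the recovery):
    select thread#, sequence#, archive_name from v$recovery_log;
    select l.thread#, l.sequence#, l.status, f.member
    from v$log l, v$logfile f
    where l.group# = f.group#;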
    Hope this information helps.
    Regards,
    Madhukar

  • Recover database but don't have archive log file

    Hi
    I used an old backup set on tape and restored all datafiles successfully, but I cannot recover; RMAN shows the error below:
    RMAN> recover database;
    Starting recover at 08-SEP-09
    using channel ORA_DISK_1
    starting media recovery
    unable to find archive log
    archive log thread=1 sequence=29166
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 09/08/2009 21:49:36
    RMAN-06054: media recovery requesting unknown log: thread 1 seq 29166 lowscn 1648727512
    But the backup set already includes the archive logs, and it does not contain seq 29166; the last seq it has is 29165. When I try to recover in SQL*Plus it shows the error below:
    SQL> recover database using backup controlfile;
    ORA-00279: change 1648727512 generated at 09/05/2009 00:02:07 needed for thread
    1
    ORA-00289: suggestion : /oradata/archive/hrprd/1_29166_671345511.arc
    ORA-00280: change 1648727512 for thread 1 is in sequence #29166
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    ORA-00308: cannot open archived log
    '/oradata/archive/hrprd/1_29166_671345511.arc'
    ORA-27037: unable to obtain file status
    IBM AIX RISC System/6000 Error: 2: No such file or directory
    Additional information: 3
    And when I tried to open with resetlogs it showed the error below:
    SQL> alter database open resetlogs;
    alter database open resetlogs
    ERROR at line 1:
    ORA-01113: file 1 needs media recovery
    ORA-01110: data file 1: '/oradata/data/hrprd/system01.dbf'
    What can I do to open the database?
    Taohiko.

    taohiko wrote:
    Hi Werner
    I tried it and it shows the error below:
    SQL> recover database using backup controlfile until cancel;
    ORA-00279: change 1648727512 generated at 09/05/2009 00:02:07 needed for thread
    1
    ORA-00289: suggestion : /oradata/archive/hrprd/1_29166_671345511.arc
    ORA-00280: change 1648727512 for thread 1 is in sequence #29166
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    CANCEL
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01152: file 1 was not restored from a sufficiently old backup
    ORA-01110: data file 1: '/oradata/data/hrprd/system01.dbf'
    ORA-01112: media recovery not started
    What is the next step?
    Taohiko
    Hi Taohiko,
    Have you made a backup of all archived redo log files? If so, why did this one disappear?
    Have you applied all the archived redo log files generated before that required archived redo log file?
    Please try to recover the database using the following commands:
    RMAN> RESTORE DATABASE;
    RMAN> RECOVER DATABASE UNTIL SEQUENCE 29166 THREAD 1;
    RMAN> ALTER DATABASE OPEN RESETLOGS;

  • Hangup transport archive log in primary and archive log apply

    Hi
    I am building Dataguard from 3-node primary cluster to 3-node standby cluster
    Oracle Version:10.2.0.4
    Operating system : LInux 64 bit
    After I restored the standby database, I configured the Data Guard broker with a wrong unique_name parameter on the standby cluster using Grid Control.
    After I corrected the mistake, I disabled the Data Guard broker parameters, deleted the Data Guard broker files and rebooted the standby cluster, but did not reboot the primary cluster because it is a production environment.
    I have a problem with the following symptoms:
    - Archive log transport hangs while the standby database is recovering, so an archivelog gap is produced.
    - I copy and register all the gap archivelogs on the standby, but the archive logs are not applied.
    - The archives applied manually on the standby are not registered as applied in v$archived_log on the primary.
    - The RMAN command "backup as COMPRESSED BACKUPSET tag 'Backup Full Disk' archivelog all not backed up delete all input;"
    does not delete the applied archive logs on the primary because of the message "archive log is necessary".
    I think it is necessary to reboot the primary cluster.
    Please help me.

    Post the results of these queries; it is difficult to understand the situation without them.
    Post from the primary:
    SQL> select thread#, max(sequence#) from v$archived_log group by thread#;
    SQL> select ds.dest_id id,
                ad.status,
                ds.database_mode db_mode,
                ad.archiver type,
                ds.recovery_mode,
                ds.protection_mode,
                ds.standby_logfile_count "SRLs",
                ds.standby_logfile_active active,
                ds.archived_seq#
         from   v$archive_dest_status ds,
                v$archive_dest ad
         where  ds.dest_id = ad.dest_id
         and    ad.status != 'INACTIVE'
         order by ds.dest_id;
    Post from the standby:
    SQL> select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;
    select * from v$managed_standby;

  • RMAN and archived log

    I have 2 identical databases from day 1 (1 Prod and 1 Dev). The Prod db runs in archive mode,
    which produces a number of archived logs after a few hours.
    Can I use RMAN to apply these archived logs to the Dev instance to get a copy of the Prod DB? I've copied all *.arc files from
    Prod back to the Oracle\oradata\devdb directory.
    1. rman target internal/<passwd> nocatalog
    2. shutdown normal;
    3. startup mount pfile=init.ora
    4. run {
    allocate channel ch1 type disk;
    restore archivelog from logseq 11 until logseq 13;
    recover database;
    }
    I am getting an error saying
    RMAN 06050: archivelog thread 1 sequence 11 is already on disk ....
    RMAN 06177: restore not done, all files readonly, offline or already restored?
    Thanks

    Hi!
    The quick answer is no. But there are several things to know.
    1. The production database is in archive mode.
    2. The development database is in noarchive mode?
    You cannot apply the redo logs of (1) to (2) because the SCNs (system change numbers) won't be the same in the two databases. What's more, unless you clone the databases, the control files will be different; remember that redo logs are binaries, not just a clear-text script.
    Another thing to know is that in development you can modify, delete or append data/tables/datafiles/tablespaces... how would you synchronize that with production? (The same issue exists in the other direction, production vs development.)
    If you have a standby database you can do something like what you want, but not identical (tell me and I will explain my little knowledge about it ;)
    So you must first decide what you want to keep in the development database. If you just want a daily mirror and can afford to lose the development data, the best option is to back up the production database with RMAN (full, incremental...) and use that backup to replicate another instance every day. Well, I have had some trouble replicating with RMAN, but people here say it's possible ;)
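    A minimal sketch of that RMAN replication idea, using the DUPLICATE command (the net service names prod and devdb are placeholders, and the auxiliary Dev instance must already be started NOMOUNT with a suitable init.ora):
    $ rman target sys@prod auxiliary sys@devdb
    RMAN> duplicate target database to devdb;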
    If you want to keep the same data, you can use an incremental data export (exp utility) and import the modified data into development.
    A10!
    P.S: Sorry for this loooooong text ;)

  • FRA and archive logs

    Hello all,
    I have DBs from 10.2.0.4 to 11.2.0.3; my question is with regard to the FRA. We do not use the full-blown FRA, but use it ONLY to store our archive logs. This is still in a testing phase for us...
    Our current method is to back up the archive logs and blow them away (a script run from cron using RMAN)...
    Now, from what I understand about the FRA, if my archive logs are sitting in the FRA then once it gets to 80% (or whatever that value is), it will delete them for me as long as I have backed up my archive logs... So my question is: is there an actual threshold or not? Reading the doc below, it looks like this is unpredictable... but is there a hidden parameter or some setting I can set so that once the FRA is, say, 50% full, it deletes the files (as long as they are backed up first)?
    http://docs.oracle.com/cd/B19306_01/backup.102/b14192/setup005.htm
    section -- 3.5.6.1 When Files are Eligible for Deletion from the Flash Recovery Area
    If there is no such setting, how can we set up managing archive logs (besides using RMAN to back up and delete)?

    Hello;
    It is not unpredictable. Oracle decides. The document is a bit vague to say the least.
    Just another reason to check your alert logs often.
    As a general rule I set db_recovery_file_dest_size on the high side.
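    On the 11g databases there is a deletion policy that makes backed-up archive logs eligible for automatic purging when the FRA needs space; there is no user-settable percentage at which the purge starts. A hedged sketch (DEVICE TYPE DISK is an assumption about where your archivelog backups go, and the 200G figure is only an illustration):
    RMAN> configure archivelog deletion policy to backed up 1 times to device type disk;
    SQL> alter system set db_recovery_file_dest_size = 200G scope=both;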
    We kicked this around the Data Guard forum a few weeks back :
    FRA - Flashback logs usage varies on Primary against Standby Database
    Best Regards
    mseberg

  • RMAN,Data Guard and Archive log deletion

    Our DG environment is running Oracle 11g R2
    we have a 3 node DG environment with
    A being the Primary
    B and C being Active Data Guard Standbys
    Backups are taken off of B and go directly to tape.
    Standby Redo Logs and Fast Recovery Area are being used
    Following the recommendation from "Using Recovery Manager with Oracle Data Guard in Oracle Database 10g":
    RMAN Setting on Primary ("A")
    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY
    RMAN Setting on Standby ("B") where Backup is done
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE
    RMAN Setting on other Standby ("C")
    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY
    How can we know which archive logs are eligible to be deleted from "A" and "C"?
    When does the deletion take place?
    How can we tell when the archive logs are being deleted from "A" and "C"?

    Dear user10260925,
    The documentation that you have read is reliable but insufficient.
    Oracle can manage the archivelog directory and knows which logs are eligible for deletion. The settings you have posted here are taken from the online documentation; they are supported and work when Oracle itself knows about and manages the archivelogs. That is simply called the flash recovery area. Please read up on the FRA.
    Under normal circumstances, people in the industry use scripts to achieve archivelog deletion on the standby system.
    Here is a useful example for you;
    # Remove old archivelogs
    00,30 * * * * /home/oracle/scripts/delete_applied_redo_logs_OPTSTBY.sh
    vals3:/home/oracle#cat /home/oracle/scripts/delete_applied_redo_logs_OPTSTBY.sh
    export ORACLE_SID=optstby
    export ORACLE_HOME=/oracle/product/10.2.0/db_1
    cd /db/optima/archive/OPTPROD/archivelog
    /oracle/product/10.2.0/db_1/bin/sqlplus "/ as sysdba" @delete_applied_redo_logs.sql
    grep arc delete_applied_redo_logs.lst > delete_applied_redo_logs_1.sh
    chmod 755 delete_applied_redo_logs_1.sh
    sh delete_applied_redo_logs_1.sh
    rm delete_applied_redo_logs_1.sh
    rm delete_applied_redo_logs.lst
    vals3:/home/oracle#cd /db/optima/archive/OPTPROD/archivelog
    vals3:/db/optima/archive/OPTPROD/archivelog#cat delete_applied_redo_logs.sql
    set echo off
    set heading off
    spool /db/optima/archive/OPTPROD/archivelog/delete_applied_redo_logs.lst
    select 'rm -f ' || name from v$archived_log where applied = 'YES';
    spool off
    exit
    vals3:/db/optima/archive/OPTPROD/archivelog#
    Hope That Helps.
    Ogan

  • RAC online and archive logs question

    Hello All,
    I setup a RAC database instances prod1 and prod2 (10.2.0.4). Datafiles and onlinelogs are on ASM.
    Do these results look good, queried from the two instances? I am somewhat concerned about group 3 having the same name for both of its members.
    Also, the archived logs are going to ASM; is this a good practice? I was reading an Oracle RMAN book and it mentioned archived logs going to local disk.
    Is it possible to archive to local disk for online logs that are on ASM? Please advise. An early reply is appreciated. Thanks, San~
    PROD1 Instance
    SQL> select member from v$logfile;
    MEMBER
    +DATA/prod/onlinelog/group_2.264.706892209
    +FLASH/prod/onlinelog/group_2.259.706892211
    +DATA/prod/onlinelog/group_1.261.706892209
    +FLASH/prod/onlinelog/group_1.260.706892209
    +DATA/prod/onlinelog/group_3.258.706892235
    +FLASH/prod/onlinelog/group_3.258.706892235
    +DATA/prod/onlinelog/group_4.256.706892237
    +FLASH/prod/onlinelog/group_4.257.706892237
    8 rows selected.
    PROD2 Instance
    SQL> select member from v$logfile;
    MEMBER
    +DATA/prod/onlinelog/group_2.264.706892209
    +FLASH/prod/onlinelog/group_2.259.706892211
    +DATA/prod/onlinelog/group_1.261.706892209
    +FLASH/prod/onlinelog/group_1.260.706892209
    +DATA/prod/onlinelog/group_3.258.706892235
    +FLASH/prod/onlinelog/group_3.258.706892235
    +DATA/prod/onlinelog/group_4.256.706892237
    +FLASH/prod/onlinelog/group_4.257.706892237
    8 rows selected.
    ===
    SQL> archive log list
    Database log mode Archive Mode
    Automatic archival Enabled
    Archive destination USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence 3
    Next log sequence to archive 4
    Current log sequence 4
    ====
    Thanks
    San

    Hi San,
    sannidhi wrote:
    Also archived logs are going to the ASM, is this a good practice. I was reading Oracle RMAN book and it mentioned archived logs go to local disk.
    Is it possible to archive to local disk for online that are on ASM? Please advice. Early reply appreciated.. Thanks San~
    It is recommended to store archived log files on ASM / shared disk. Check your archive log format, which is supposed to guarantee uniqueness across all instances.
    Yes, technically it is possible to archive to local disk, but it is not recommended: if you lose a local disk there will be gaps in the archived log files, and it also increases the administration effort.
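    If you do add a local-disk destination alongside the FRA, a sketch of the relevant settings (the path /u01/arch is only an example; %t, %s and %r keep the names unique per thread, sequence and incarnation, and log_archive_format requires an instance restart):
    SQL> alter system set log_archive_dest_2='LOCATION=/u01/arch' sid='*';
    SQL> alter system set log_archive_format='%t_%s_%r.arc' sid='*' scope=spfile;
    Note that when archiving to USE_DB_RECOVERY_FILE_DEST the files are Oracle-managed and log_archive_format is ignored; it only applies to non-FRA destinations such as the one above.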
    Regards,
    Thota

  • Import performance and archive logs

    We are working with Oracle 10g R2 on Solaris.
    During import (impdp) it generates a huge volume of archive logs.
    Our database size is in terabytes.
    How can we stop archive log generation during the import, or at least minimize it?

    Hello,
    If you can restart your database then you may set your database in NOARCHIVELOG mode.
    Then, after the import is finished, you'll have to set your database back to ARCHIVELOG mode (you'll need to restart the database again).
    Afterwards, you'll have to Backup your database.
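    A minimal sketch of that mode switch (it requires downtime, and the database is unprotected against media failure until you take the fresh backup):
    SQL> shutdown immediate
    SQL> startup mount
    SQL> alter database noarchivelog;
    SQL> alter database open;
    -- run the impdp job, then reverse the change:
    SQL> shutdown immediate
    SQL> startup mount
    SQL> alter database archivelog;
    SQL> alter database open;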
    Otherwise, without changing the archive mode of the database, you can back up and compress your archived logs.
    For instance, with RMAN:
    connect target /
    backup
      as compressed backupset
      device type disk
      tag 'BKP_ARCHIVE'
      archivelog all not backed up
      delete all input;
    exit;
    That way you'll save space on disk.
    Hope this help.
    Best regards,
    Jean-Valentin

  • Database locking using JSP and Oracle database

    Dear All,
    I am reading about how to do database locking in general and I want to implement these mechanisms using JSP pages and an Oracle database, but I have the following questions:
    1. If I write a "select for update" query in the JSP page, will it lock the record? Or will it not lock the record, because the connection between the JSP pages and the server will be stateless in most online systems?
    2. If I write all my Java code in a transaction, something like this:
    • Begin transaction
    • Commit or
    • Rollback
    then should I be worried about the locking issues, or will the database manager handle the locking mechanisms to ensure data integrity (and what is the default mechanism, if any, that the Oracle database manager uses to do the locking)?
    3. If the answer to question 2 is no, then how can I handle optimistic and pessimistic locking using JSP pages?
    BR

    One way to solve this issue is as follows:
    * You add a new column to each database table called 'version', which is of int type.
    * Each time you alter any field in a record, you increment the version number.
    * When you read a record and display it, you store the version number in your code.
    * When you go to update the record, you write your SQL something like this:
    update person set firstName = ?, version = version + 1 where personId = ? and version = ?
    where the version is whatever you stored locally. If someone altered the record in the database while your
    end user was looking at it, the version numbers will not match and the SQL statement will
    return zero as the number of records it altered. If it's zero, inform the end user that someone altered the record
    while he was looking at it and ask whether or not he wants to proceed.
    The chances of two people altering the same record in a table while both are logged in and viewing the same set of data are small, so such collisions will be few.
    You only need transactions if you are updating more than one record at a time (in the same table or multiple tables).
    You don't need them for reading records if you use a single SQL statement to read (for example: joining multiple tables).
    In general, you get a (pooled) connection, use it, and close it as quickly as possible in a try/catch/finally block. You don't hold onto it for the duration of the user's session. A book on JDBC should help clarify this.
