Redo log in case of NOARCHIVELOG Mode.

This question is about Oracle architecture.
A database requires a minimum of two redo log files to guarantee that one is always available for writing while another is being archived. That sounds perfect when the DB is running in ARCHIVELOG mode, but it also forces the database to have two redo log files even when the DB is running in NOARCHIVELOG mode. Is there any particular reason?
I am looking for the reasons, not for answers about what the redo log is and what information it holds.

Similar Messages

  • Redo log files in case of NOARCHIVELOG Mode.

    This question is about Oracle architecture.
    A database requires a minimum of two redo log files to guarantee that one is always available for writing while another is being archived. That sounds perfect when the DB is running in ARCHIVELOG mode, but it also forces the database to have two redo log files even when the DB is running in NOARCHIVELOG mode. Is there any particular reason?
    I am looking for the reasons, not for answers about what the redo log is and what information it holds.

    pgoel wrote:
    If you had only one file, all further changes would have to stop until all changed data blocks had been written to disc. By insisting on a minimum of two log files, Oracle can allow the log writer to fill the second log file while the database writer writes out the dirty blocks covered by changes described in the first log file.
    What about having one big redo log file instead of two, with the checkpoint initiated when the redo log file is half filled?
    I mean, I understand the logic: two is better, even best... but I am still not convinced. I am just trying to think: can Oracle not work with one group with one big file, especially in NOARCHIVELOG mode?
    No, you still didn't understand, and I am not sure how else we can say it. Okay, think of the log groups as two buckets used to hold the redo content. The LGWR fills one bucket at a time until it can't accept any more content. Once it is filled, rather than spilling the content out, LGWR jumps over to the second bucket. Now, taking your statement of a big-sized redo log, Pgoel: it doesn't matter how big a bucket you bring in, eventually it will fill up. It's not possible that you won't be able to fill it; it would just take longer than usual. That's all. So in any case you need the second bucket. And I am not sure why you are stuck on the archivelog mode. I hope you understand that it is an optional mode: it may or may not be enabled. If it is, that's a good thing, because before flushing the redo content Oracle can save it in the archived log file (think about the name: its very meaning is to preserve, to archive). If not, the redo content would be lost and there would be no record of those transactions, leaving you to re-enter them. The point you are stuck on, archivelog versus noarchivelog mode, doesn't matter here: either way, Oracle still needs a minimum of two log groups.
    HTH
    Aman....
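    One way to see the two-group minimum enforced directly, whatever the log mode (a hypothetical session; the group number is illustrative):
    SQL> select group#, members, status from v$log;
    SQL> alter database drop logfile group 2;
    -- on a database with only two groups this is expected to fail with
    -- ORA-01567 (dropping the log would leave fewer than 2 log files for
    -- the instance); the same check fires in NOARCHIVELOG mode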

  • Datafile recovery in case of NOARCHIVELOG mode

    My database is in NOARCHIVELOG mode, there is no backup available, and one of my datafiles is corrupted. How can I recover the datafile?
    Thanks in advance.

    saugat chatterjee wrote:
    My database is in NOARCHIVELOG mode, there is no backup available, and one of my datafiles is corrupted. How can I recover the datafile? Thanks in advance.
    Whatever the version: to recover a datafile, archiving must be enabled. No archivelog, no recovery...

  • Redo log content in NOARCHIVELOG mode

    I have several servers running Oracle 9i databases in NOARCHIVELOG mode. I know this means that the online redo logs are not archived. I have been tracing the Oracle log writer with truss, and only see I/O going to the Oracle control files. I see no I/O going to the online redo logs. Can someone point me to the Oracle documentation that discusses exactly what gets written to the online redo log files while in NOARCHIVELOG mode? Thanks in advance for any assistance.

    Try Oracle Concepts on
    http://download-uk.oracle.com/docs/cd/B10501_01/server.920/a96524.pdf
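    Documentation aside, it is easy to confirm from SQL that redo generation does not depend on the log mode; a minimal sketch (assumes a scratch table t you can insert into):
    SQL> select value from v$sysstat where name = 'redo size';
    SQL> insert into t values (1);
    SQL> commit;
    SQL> select value from v$sysstat where name = 'redo size';
    -- the 'redo size' counter grows even in NOARCHIVELOG mode: LGWR still
    -- writes every change vector to the online redo logs; the archiving
    -- mode only decides whether filled logs are copied before being reused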

  • Online Redo Log groups

    Dear All,
    How do I check the health of a redo log file? We have a 200 MB undo tablespace on our production server; is that enough for huge transactions? Can I check how many times my redo log file data has been overwritten?
    Further, in which situations should we add online redo log groups, and in which situations should we add log members?
    My rollback segment is using the SYSTEM tablespace; is that recommended?
    What is the recommendation: one redo log member per group, or multiple redo log members per group?

    Thanks, Mr. Nicolas, for your informative guidance.
    Can I check how many times my redo log file data has been overwritten?
    Check v$loghist.
    We have 218 records in v$loghist; it means the data has been overwritten 218 times, and I think that's not good. Can you guide me on how to rectify this?
    In which situations should we add online redo log groups?
    In case of "checkpoint not complete" reported in the alert.log.
    How do I find the checkpoint entry in the alert.log?
    In which situations should we add log members?
    This is redo log multiplexing: at least two members for each redo log group.
    OK, can we do multiplexing for members, or just for groups?
    My rollback segment is using the SYSTEM tablespace; is that recommended?
    No.
    OK, can we change the rollback segments' tablespace?
    One redo log member per group, or multiple redo log members per group?
    A minimum of two redo log groups with two members each.
    Beyond that, it depends on your db activity.
    We have just one member for each group and three groups, so what is your recommendation? Should we add one member to each group?
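    On the multiplexing point, adding a second member to each existing group is one statement per group; a minimal sketch (the file paths and group numbers are illustrative, not from the thread):
    SQL> alter database add logfile member '/u02/oradata/PROD/redo01b.log' to group 1;
    SQL> alter database add logfile member '/u02/oradata/PROD/redo02b.log' to group 2;
    SQL> alter database add logfile member '/u02/oradata/PROD/redo03b.log' to group 3;
    SQL> select group#, member from v$logfile order by group#;
    And the overwrite (switch) frequency can be read from the log history rather than guessed from the record count:
    SQL> select trunc(first_time) day, count(*) switches
      2  from v$log_history
      3  group by trunc(first_time) order by 1;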

  • Redo Log Switch results...

    Environment: 8.1.7.3.0 (NOARCHIVELOG mode)
    log_checkpoint_timeout = 0
    log_checkpoint_interval = 999999999
    redo log size = 200M
    With these settings, it appears checkpoints can only occur on a log switch.
    Probably because transaction volume is low, a log switch happens about once every 30 hours.
    What I did: while the fourth log below was CURRENT, I ran ALTER SYSTEM CHECKPOINT, waited a little, and then ran ALTER SYSTEM SWITCH LOGFILE, so the first log became CURRENT.
    As of 14:00 on March 16 the fourth log is still in ACTIVE status...
    1. Has something gone wrong??? Please help...
    2. In NOARCHIVELOG mode, does shortening the log switch interval help with recovery?
    ===========================================
    STATUS  , FIRST_CHANGE#, FIRST_TIME
    CURRENT , 8846777646687, 2007-03-15 16:57:55
    INACTIVE, 8846777587798, 2007-03-14 10:34:40
    INACTIVE, 8846777609448, 2007-03-14 17:17:38
    ACTIVE  , 8846777643690, 2007-03-15 16:01:22

    Are you hoping for complete recovery in NOARCHIVELOG mode?
    I think that is a flawed policy.
    In NOARCHIVELOG mode, if the smallest FIRST_CHANGE# in v$log is greater than or equal to the CHANGE# in v$recover_file, recovery is impossible.
    If even a batch job makes the log switches cycle once through all the groups, the earlier backup can no longer be used for recovery. Switch to ARCHIVELOG mode right away.
    LOG_CHECKPOINT_TIMEOUT specifies a timeout value for checkpoints.
    LOG_CHECKPOINT_INTERVAL specifies the frequency of checkpoints in terms of the number of redo log file blocks that can exist between an incremental checkpoint and the last block written to the redo log. This number refers to physical operating system blocks, not database blocks.
    As you know, a checkpoint synchronizes the SCN across the datafiles, redo logs, and control file; the main part of it is the DBWR process writing to the datafiles.
    Of course, checkpoints are related to instance recovery. With a reasonable checkpoint timeout, instance recovery is faster and the DB opens sooner. With your settings, if the DB is brought down with ABORT and then opened, instance recovery will need much more time.
    Moreover, if transactions trigger a log switch in less than 30 hours, the timeout would make no difference anyway: when a redo log fills up, a log switch happens automatically, and a checkpoint is issued before the switch.
    That said, checkpoints and physical/logical recovery are different concepts. A checkpoint relates to instance recovery as described above, while in physical/logical recovery the deciding factors are whether the archived log files exist and whether the current redo log exists.
    As for the ACTIVE status: by the documented definition, in ARCHIVELOG mode it can mean the log is still being archived, and it also means the log still holds information needed when applying redo during complete recovery.
    Building a recovery policy on NOARCHIVELOG mode is a dangerous idea. Of course, DSS systems sometimes adopt NOARCHIVELOG mode as policy and take an offline backup every weekend. But a DSS system can see more than 300 log switches a day, so no matter how good the backup is, complete recovery is impossible; you can only recover up to the time of the offline backup.
    V$LOG
    This view contains log file information from the control files.
    Column     Datatype      Description
    GROUP#     NUMBER        Log group number
    THREAD#    NUMBER        Log thread number
    SEQUENCE#  NUMBER        Log sequence number
    BYTES      NUMBER        Size of the log (in bytes)
    MEMBERS    NUMBER        Number of members in the log group
    ARCHIVED   VARCHAR2(3)   Archive status (YES | NO)
    STATUS     VARCHAR2(16)  Log status:
    UNUSED - Online redo log has never been written to. This is the state of a redo log that was just added, or just after a RESETLOGS, when it is not the current redo log.
    CURRENT - Current redo log. This implies that the redo log is active. The redo log could be open or closed.
    ACTIVE - Log is active but is not the current log. It is needed for crash recovery. It may be in use for block recovery. It might or might not be archived.
    CLEARING - Log is being re-created as an empty log after an ALTER DATABASE CLEAR LOGFILE statement. After the log is cleared, the status changes to UNUSED.
    CLEARING_CURRENT - Current log is being cleared of a closed thread. The log can stay in this status if there is some failure in the switch such as an I/O error writing the new log header.
    INACTIVE - Log is no longer needed for instance recovery. It may be in use for media recovery. It might or might not be archived.
    There are various methods for recovering even in NOARCHIVELOG mode; for example, the documentation does cover things like recovering when the current redo log is corrupted. But you will find it hard to locate a method for restoring a backup and recovering in NOARCHIVELOG mode. As mentioned above, in NOARCHIVELOG mode a datafile is recoverable if the CHANGE# in v$recover_file is greater than the minimum FIRST_CHANGE# in v$log; if CHANGE# <= the minimum FIRST_CHANGE#, recovery is impossible. That is why documents on restoring a backup and then recovering are so hard to find; only material on advanced methods mentions things like using adjust_scn.
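    The rule above can be checked with one query; a minimal sketch (standard views as named in the post; the layout is illustrative):
    SQL> select f.file#, f.change#,
      2         (select min(first_change#) from v$log) min_first_change#
      3  from v$recover_file f;
    -- the datafile is recoverable from the online logs only while
    -- change# > min_first_change#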
    Post edited by: Mincheonsa (Min Yeon-hong)
    I must have been half asleep when I wrote this; INTERVAL and TIMEOUT are plainly different, yet I confused the two. For the record:
    LOG_CHECKPOINT_INTERVAL specifies the frequency of checkpoints in terms of the number of redo log file blocks that can exist between an incremental checkpoint and the last block written to the redo log. This number refers to physical operating system blocks, not database blocks.
    LOG_CHECKPOINT_TIMEOUT specifies (in seconds) the amount of time that has passed since the incremental checkpoint at the position where the last write to the redo log (sometimes called the tail of the log) occurred. This parameter also signifies that no buffer will remain dirty (in the cache) for more than integer seconds.

  • Trying to archive more often: archive_lag_target vs downsizing redo logs

    Hi,
    I have a small production 10.2.0.1 database (small: all the files are 17GB total) on Linux which is used in Agile PLM. Ever since we went live, it produces only 2-4 archive logs per day. This was OK when our company was only concerned with being able to restore from the nightly cold backup. Now we want to make sure we can recover to the last hour of work, so I need the database to spit out more logs.
    It has 4 log groups (with 2 members each), each 50M in size. I am going to add 2 more log groups to have 6, since that is what we did for our 11i instance based on a consultant's recommendation. That could prevent some problems, but won't cause it to spit out more logs. I did some research (here included) and found that setting archive_lag_target=3600 will FORCE the db to spit out logs every hour, and in my testing this works very nicely. The archive logs are only about 1.2M when they do get spit out every hour, but that is fine.
    The question is: is it OK to turn on archive_lag_target while keeping the size of the logs at 50M, and have mostly "small" logs being spit out? Or should I reduce the size of the logs to, say, 20M (by dropping the old ones and creating new ones)? I actually tried 20M and then during a busy time it spit out 2 of them within 10 minutes, but then I saw it did the same thing with the 50M size, so I figured why not keep the 50M redo log size in the first place? It would also make my go-live plan easier, as I would just add 2 log groups at the current size in prod and not have to drop and recreate a bunch of logs.
    I think this is a good plan -- my only worry is that, since traditionally the way to increase the frequency of the logs was to reduce their size, I feel like I am "cheating" by using the archive_lag_target parameter to do this. I also do not want to change too many things in production at once. Thanks in advance. Marv

    user11965205 wrote:
    Is it OK to turn on archive_lag_target while keeping the size of the logs at 50M, and have mostly "small" logs being spit out?
    Yes, it is OK: you should keep "large" redo logs in case your database instance sometimes has much more write activity, to avoid "checkpoint not complete" issues.
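    A minimal sketch of the change under discussion (standard parameter; the value is the one proposed in the question):
    SQL> alter system set archive_lag_target=3600 scope=both;
    SQL> select group#, bytes/1024/1024 mb, status from v$log;
    -- the instance now forces a log switch (and hence an archived log) at
    -- least every hour, independent of the 50M log size, so the logs can
    -- stay large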

  • Restore in NOARCHIVELOG mode (redo log files have been dropped)

    1, A full backup taken using RMAN is available on disk.
    2, The current control files were NOT damaged and do not need to be restored.
    3, All data files are damaged .
    4, The database is in NOARCHIVELOG mode.
    I restore the database:
    1. RMAN> STARTUP MOUNT
    2. RMAN> RESTORE DATABASE;
    3. RMAN> RECOVER DATABASE;
    At this step I got a message about needing the redo log files, but the redo log files have been dropped. What should I do?
    Also, I want to know whether the command 'recover database using backup control file' exists in RMAN or not.
    Thanks

    Possibly loss of data (because information in online redo logs is lost):
    recover database until cancel;
    (cancel immediately)
    alter database open resetlogs; (to build a new set of redo logs)
    It's not necessary to use a backup controlfile here.
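    Spelled out as a session, the suggested sequence looks roughly like this (a sketch; prompts abbreviated):
    RMAN> startup mount
    RMAN> restore database;
    SQL> recover database until cancel;
    -- answer CANCEL at the first prompt: there are no archived logs to apply
    SQL> alter database open resetlogs;
    -- RESETLOGS builds a new set of online redo logs; anything done after
    -- the backup is lost, which is the expected trade-off of NOARCHIVELOG mode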

  • ORA-00258: manual archiving in NOARCHIVELOG mode must identify log

    Hi, I am new to Oracle Streams. I am trying to set up one-way replication from one database to another using Oracle 10g (10.2.0.1.0) on Windows XP SP3 (32-bit).
    I ran the following proc as the streams admin schema:
    begin
    dbms_streams_adm.maintain_schemas(
    schema_names => 'XXCOW',
    source_directory_object => 'repl_exp_dir',
    destination_directory_object => 'repl_imp_dir',
    source_database => 'PWBSD',
    destination_database => 'PDVSD',
    perform_actions => true,
    dump_file_name => 'exp_app23.dmp',
    capture_queue_table => 'rep_capt_table',
    capture_queue_name => 'rep_capt_queue',
    capture_queue_user => NULL,
    apply_queue_table => 'rep_dest_table',
    apply_queue_name => 'rep_dest_queue',
    apply_queue_user => NULL,
    capture_name => 'capture_pubs',
    propagation_name => 'prop_pubs',
    apply_name => 'apply_pubs',
    log_file => 'exp_app23.log',
    bi_directional => false,
    include_ddl => true,
    instantiation => dbms_streams_adm.instantiation_schema);
    end;
    The script failed the first time because I forgot to configure the source database in archive log mode.
    The steps I followed to change to archivelog mode:
    SQL> select name from v$database;
    NAME
    PWBSD
    SQL> alter system set LOG_ARCHIVE_DEST = 'D:\data\oracle\oradata\PWBSD\archive' scope=both;
    System altered.
    SQL> conn sys/sys@pwbsd as sysdba
    Connected.
    SQL> shutdown immediate
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup mount
    ORACLE instance started.
    Total System Global Area 612368384 bytes
    Fixed Size 1250428 bytes
    Variable Size 197135236 bytes
    Database Buffers 406847488 bytes
    Redo Buffers 7135232 bytes
    Database mounted.
    SQL> alter database archivelog;
    Database altered.
    SQL> alter database open;
    Database altered.
    SQL>
    I configured it in archive log mode and ran the proc above again.
    I got the following output this time:
    job finished
    begin
    ERROR at line 1:
    ORA-23616: Failure in executing block 90 for script
    959ECF1D1159402A8C16687AE5E3B5CD
    ORA-06512: at "SYS.DBMS_RECOVERABLE_SCRIPT", line 457
    ORA-06512: at "SYS.DBMS_STREAMS_MT", line 2201
    ORA-06512: at "SYS.DBMS_STREAMS_MT", line 7486
    ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 2624
    ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 2685
    ORA-06512: at line 2
    I ran the following to check the error:
    select * from dba_recoverable_script_errors;
    The output is:
    SCRIPT ID: 959ECF1D1159402A8C16687AE5E3B5CD
    BLOCK NUM: 90
    ERROR_NUMBER: -258
    ERROR_MESSAGE: ORA-00258: manual archiving in NOARCHIVELOG mode must identify log
    ORA-06512: at "SYS.DBMS_RECO_SCRIPT_INVOK", line 129
    ORA-06512: at "SYS.DBMS_STREAMS_RPC", line 358
    It seemed like it was still complaining about archive log mode. I verified that the PWBSD db is in archivelog mode by running the following:
    select name, log_mode from v$database;
    NAME: PWBSD
    LOG_MODE: ARCHIVELOG
    What could be the problem, and how do I proceed to fix it?

    Hi Parthiv,
    The steps given by you are not clear.
    Please try to follow the steps given in the link below; it may help you set up schema-level streams:
    http://gssdba.wordpress.com/2011/04/20/steps-to-implement-schema-level-oracle-streams/
    Thanks and Regards,
    Satish.G.S
    gssdba.wordpress.com
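    One thing the thread never shows is the log mode of the destination database. ORA-00258 is raised when a manual ALTER SYSTEM ARCHIVE LOG command runs against a NOARCHIVELOG database, and the error stack (DBMS_STREAMS_RPC) suggests the failing block may have executed remotely, so it is plausible (an assumption, not confirmed by the thread) that PDVSD is still in NOARCHIVELOG mode. A quick check worth running on both PWBSD and PDVSD:
    SQL> select name, log_mode from v$database;
    SQL> archive log list
    -- if PDVSD reports NOARCHIVELOG, switch it to ARCHIVELOG the same way
    -- the source was changed before re-running maintain_schemas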

  • Can we use online redo log to recover lost datafile in NOARCHIVE mode?

    I am working on the OCA exam and confused by these two sample questions (similar questions with totally different answers).
    Please give me a hint about the difference between these two questions.
    ** If the database is in NOARCHIVELOG mode, and one of the datafiles for tablespace USERS is lost, what kind of recovery is possible? (answer: B)
    A. All transactions except those in the USERS tablespace are recoverable up to the loss of the datafile.
    B. Recovery is possible only up to the point in time of the last full database backup.
    C. The USERS tablespace is recoverable from the online redo log file as long as none of the redo log files have been reused since the last backup.
    D. Tablespace point in time recovery is available as long as a full backup of the USERS tablespace exists.
    ** The database of your company is running in the NOARCHIVELOG mode. You perform a complete backup of the database every night. On Monday morning, you lose the USER1.dbf file belonging to the USERS tablespace. Your database has four redo log groups, and there have been two log switches since Sunday night's backup.
    Which is true (answer: B)
    A. The database cannot be recovered.
    B. The database can be recovered up to the last commit.
    C. The database can be recovered only up to the last completed backup.
    D. The database can be recovered by performing an incomplete recovery.
    E. The database can be recovered by restoring only the USER1.dbf datafile from the most recent backup.

    I think Gaurav is correct, you can recover to the last commit even in NOARCHIVELOG, as long as all the changes in the redo logs have not been overwritten. So answer should be B for question 2.
    Here is my test:
    SQL> select log_mode from v$database;
    LOG_MODE
    NOARCHIVELOG
    SQL> select tablespace_name, file_name from dba_data_files;
    TABLESPACE_NAME
    FILE_NAME
    USERS
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\USERS01.DBF
    SYSAUX
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\SYSAUX01.DBF
    UNDOTBS1
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\UNDOTBS01.DBF
    SYSTEM
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\SYSTEM01.DBF
    DATA
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\DATA01.DBF
    SQL> create table names
    2 ( name varchar(16))
    3 tablespace users;
    Table created.
    so this segment 'names' is created in the datafile users01.
    At this point I shut down and mount the DB, then:
    RMAN> backup database;
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:29
    Finished backup at 06-OCT-07
    SQL>alter database open
    SQL> insert into names values ('pippo');
    1 row created.
    SQL> commit;
    Commit complete.
    SQL>shutdown immediate;
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    At this point I delete datafile users01 and restart:
    SQL> startup
    ORACLE instance started.
    Total System Global Area 167772160 bytes
    Fixed Size 1247900 bytes
    Variable Size 67110244 bytes
    Database Buffers 96468992 bytes
    Redo Buffers 2945024 bytes
    Database mounted.
    ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
    ORA-01110: data file 4: 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\USERS01.DBF'
    restoring the backup taken before inserting the value 'pippo' in table names:
    RMAN> restore database;
    Starting restore at 06-OCT-07
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting datafile backupset restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    restoring datafile 00001 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\SYSTEM01.DBF
    restoring datafile 00002 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\UNDOTBS01.DBF
    restoring datafile 00003 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\SYSAUX01.DBF
    restoring datafile 00004 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\USERS01.DBF
    restoring datafile 00005 to C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORA101RC\DATA01.DBF
    channel ORA_DISK_1: reading from backup piece C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\0AITR52K_1_1
    channel ORA_DISK_1: restored backup piece 1
    piece handle=C:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\0AITR52K_1_1 tag=TAG20071006T181337
    channel ORA_DISK_1: restore complete, elapsed time: 00:02:07
    Finished restore at 06-OCT-07
    RMAN> recover database;
    Starting recover at 06-OCT-07
    using channel ORA_DISK_1
    starting media recovery
    media recovery complete, elapsed time: 00:00:05
    Finished recover at 06-OCT-07
    SQL> alter database open;
    Database altered.
    SQL> select * from names;
    NAME
    pippo
    SQL>
    enrico

  • Db restore in non-archive mode, lost redo log files... restore from controlfile trace

    I have an 11g db. I had taken a non-archive (cold) backup but failed to back up the redo log files...
    So when I restored the db after formatting the machine, the Oracle instance won't start.
    I created a controlfile trace, but when I run it I get errors.
    Since I don't have the old log files, how do I get around this issue?
    Thanks
    Following is a sample of the controlfile trace. Note I cannot create the redo log files, since the db won't be mounted; at most it will be in NOMOUNT mode.
    And below is my created controlfile ....
    CREATE CONTROLFILE REUSE DATABASE "XE" NORESETLOGS NOARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
    LOGFILE
    GROUP 1
    'C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_1_80L7C259_.LOG'
    SIZE 50M BLOCKSIZE 512,
    GROUP 2
    'C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_2_80L7C375_.LOG'
    SIZE 50M BLOCKSIZE 512
    -- STANDBY LOGFILE
    DATAFILE
    'C:\ORACLEXE\APP\ORACLE\ORADATA\XE\SYSTEM.DBF',
    'C:\ORACLEXE\APP\ORACLE\ORADATA\XE\UNDOTBS1.DBF',
    'C:\ORACLEXE\APP\ORACLE\ORADATA\XE\SYSAUX.DBF',
    'C:\ORACLEXE\APP\ORACLE\ORADATA\XE\USERS.DBF'
    CHARACTER SET AL32UTF8
    I don't have these 2 files; what do I do to get around this situation?
    'C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_1_80L7C259_.LOG'
    'C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_2_80L7C375_.LOG'

    If you have a cold backup (database shutdown properly) without the redo logs, change this:
    CREATE CONTROLFILE REUSE DATABASE "XE" NORESETLOGS NOARCHIVELOG
    to
    CREATE CONTROLFILE REUSE DATABASE "XE" RESETLOGS NOARCHIVELOG
    You have to change the NORESETLOGS to RESETLOGS for Oracle to recreate the online redo logs.
    Hemant K Chitale
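    A minimal sketch of how the edited script would then be used (assumes a clean cold backup of all datafiles; the script file name is hypothetical):
    SQL> startup nomount
    SQL> @create_controlfile.sql
    SQL> alter database open resetlogs;
    -- if Oracle insists on recovery first, run
    -- RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL, answer CANCEL,
    -- then retry the OPEN RESETLOGS; RESETLOGS recreates the two online log
    -- files named in the script, so the missing logs stop mattering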

  • Multiplexing Redo Log and maximum protection mode.

    Assume that the instance crashes while it is writing into the redo logs. As a result, the members of the active redo group are not synchronized; some of them hold more data. How will Oracle handle this when the instance starts? And there can be a case where, at startup time, some members that had more redo before the crash are lost.
    Now assume that we have a standby database in maximum protection mode. The primary instance crashes after LGWR has written to the local redo logs but before writing to the standby redo logs. In this case the standby site has lost the last transaction.
    Is that correct? Thanks.

    Assume that the instance crashes while it is writing into the redo logs. As a result, the members of the active redo group are not synchronized; some of them hold more data. How will Oracle handle this when the instance starts? And there can be a case where, at startup time, some members that had more redo before the crash are lost.
    Members of a particular group are written concurrently by LGWR, so all members of a log group will have the same data. If any member of a particular group is lost or not reachable, Oracle will read from an available log member during instance recovery.
    Multiplexing Redo Log Files
    http://docs.oracle.com/cd/B19306_01/server.102/b14231/onlineredo.htm#i1006249
    To answer your second question: in this mode no transaction commits on the primary unless the redo is also written to at least one standby database; otherwise the primary will go down. So if the primary crashes before the redo reaches the standby, the commit never completed, and no committed transaction is lost.
    Check below
    Maximum Protection
    http://docs.oracle.com/cd/B28359_01/server.111/b28294/protection.htm#CHDHFHJI
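    For reference, a minimal sketch of putting a configuration into this mode (standard syntax; the destination settings are illustrative and the standby must already be reachable):
    SQL> alter system set log_archive_dest_2='service=stby SYNC AFFIRM' scope=both;
    SQL> alter database set standby database to maximize protection;
    -- note: switching to maximum protection is done with the primary mounted;
    -- it requires at least one SYNC AFFIRM standby destination, and the
    -- primary shuts down rather than commit redo that has not reached a standby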

  • Asynch Hot Log mode does not use hot (online) redo logs

    Version 10.2
    We have just set up a test of the Asynch Hot Log replication according to Chap 16 of the Data Warehousing guide.
    We can see data put into the change table. However, it seems that data gets written to the change table ONLY after a log switch. This would suggest that the capture process is not reading the online logs, but is only reading the archived logs.
    I don't think this can be correct behavior because the docs indicate that Oracle "seamlessly switches" between the online and the archived redo logs.
    Is there a flag or something to set to cause the online logs to be available to the capture process? Or is this a bug? Has anyone else observed this behavior?
    Thanks for any insight.
    -- Chris Curzon

    According to the 10g Data Guard docs, section 2.5.1:
    "Physical standby databases do not use an online redo log, because physical standby databases are not opened for read/write I/O."
    Yes, the online redo logs are used when the database is open. You should not perform any changes on the standby. Even if those online redo log files exist, what difficulty have you seen? They will be used whenever you perform a switchover/failover, so there is nothing to worry about here.
    Is this a case of the standby needing at least a notion of where the redo logs will need to be should a failover occur, and, if the files are already there, the standby database controlfile holding onto them, as they are not doing any harm anyway?
    If you think of it that way, what is the harm? They are simply not used while the database is running as a standby.
    STANDBY_FILE_MANAGEMENT: for example, if you add a datafile, that information goes into the archived/redo logs; once they are applied on the standby, the datafile is added there automatically when the parameter is set to AUTO. If it is MANUAL, Oracle creates an unnamed file in the $ORACLE_HOME/dbs location, and later you have to rename that file and perform recovery.
    check this http://docs.oracle.com/cd/B14117_01/server.101/b10755/initparams206.htm
    HTH.
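    A minimal sketch of the parameter just described (standard dynamic parameter; run on the standby):
    SQL> alter system set standby_file_management=AUTO scope=both;
    -- datafiles added on the primary are then created on the standby
    -- automatically as the redo that adds them is applied, instead of
    -- appearing as unnamed files under $ORACLE_HOME/dbs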

  • Database in archive log mode but a redo log file showing not archived

    Hello,
    I have a database running in archive log mode (recently changed). I have 5 redo log groups, and one of them (the current one) shows ARC: NO in the v$log view, meaning not archived; all the other redo logs show ARC: YES.
    What does it mean?
    Am I going to have problems with this redo log file?
    Thanks

    If you do describe on v$log, you'll find that the full column name is Archived (meaning is it archived yet?).
    You could try alter system switch logfile and then check v$log again a few times after.
    Use the documentation to find out more about v$ views and so on:
    http://www.oracle.com/pls/db102/print_hit_summary?search_string=v%24log
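    A minimal sketch of that check (standard commands; output varies):
    SQL> select group#, status, archived from v$log;
    SQL> alter system switch logfile;
    SQL> select group#, status, archived from v$log;
    -- the CURRENT group always shows ARCHIVED = NO until it is switched out
    -- and the archiver finishes with it; seeing one NO is normal and harmless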

  • Expdp Scheduler objects Open mode Redo logs

    Hi,
    i) What are the advanced options of expdp that permit exporting database scheduler objects and prevent exp from exporting database scheduler objects?
    ii)If the database has to be in Open mode, does RESTRICTED or NON-RESTRICTED mode have any significance here?
    AskTom says that DBMS_SCHEDULER commits when scheduling a job. Hence I guess redo logs are generated each time a job is scheduled using DBMS_SCHEDULER. Hence I guess export cannot be done in read-only mode.
    Many thanks

    Hi,
    Here is the scenario mentioned by my DBA. We are using Oracle 10g R2. Maybe we will upgrade to Oracle 11g. My implementation in Oracle 10g is based on DBMS_SCHEDULER.
    i)Are you always using Data Pump Utility for Export? Kindly provide me the exact command or statement used if you are executing from command-line. Do you specify EXCLUDE parameter if you want to exclude any of the tables or objects? Do you use MetaData filters available in Data Pump?
    ii)Regarding the database mode during export or backup, do you use read-only mode or open mode restricted or quiesced mode?
    iii)Kindly specify the exact error message displayed during export.
    iv)Based on the database mode used and the error displayed, is it related to redo logs or changes in tables during export using expdp?
    i)Expdp parfile=expdp_xx.par
    Where expdp_xx.par contains
    Directory=directory_name
    Dumpfile=expdp_xx.dmp
    Logfile=expdp_xx.log
    Schemas=SCHEMA_NAME
    parallel=4 <- we tried with or without this parameter as well.
    ii) Database is running in regular open mode (open for business for all users) while we are running the exp or expdp process.
    iii). No error displayed. It just keeps running with no end in sight.
    iv). No.. Oracle tracing saw a repeated SQL running on a table created by the expdp process for RULE/RULE SET objects.
    Kindly suggest the steps required to handle the creation of RULE/RULE SET Objects during expdp process. These RULE/RULE SET Objects are created during the creation of CHAIN_RULE objects. My implementation is verifying multi-threading in DBMS_SCHEDULER. We want to check the basic functionalities provided by DBMS_SCHEDULER first. Hence EVENT_CONDITION and QUEUE_SPEC are ruled out. Instead of EVENT_CONDITION, we are using condition attribute of DEFINE_CHAIN_STEP. So jobs are created every second verifying whether the condition attribute of DEFINE_CHAIN_STEP is satisfied. The jobs create chains and hence rules and rule set objects are created.
    Given the above scenario, kindly indicate to me how to complete expdp process and how to avoid the RULE/RULE SET Objects creation hampering the expdp process.
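    On point (i), one commonly cited approach is a Data Pump metadata filter on the procedural-object type, which covers scheduler jobs, programs, and chains (an assumption to verify against your version's documentation); e.g. added to the parfile shown above:
    EXCLUDE=PROCOBJ
    Conversely, INCLUDE=PROCOBJ would export only those objects. Note that expdp itself needs the database open read/write, since Data Pump creates a master table in the exporting schema.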
