Questions on redo

Hi all,
Does LGWR only write COMMITTED transactions from the redo log buffer to the redo logs, or does it write uncommitted transactions to the redo logs as well?
Please advise.
Thanks

OK, now let's clarify some things...
The redo log groups can contain some uncommitted changes... but those changes become permanent only if they are committed... you can read that at the links below:
http://www.dbasupport.com/oracle/ora9i/background_process01.shtml
or here: http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14220/process.htm
Sometimes, if more buffer space is needed, LGWR writes redo log entries before a transaction is committed. These entries become permanent only if the transaction is later committed.
and now to your questions:
Not the modified value itself, but the old value kept by undo management (which itself was committed by Oracle)... OK, a little example:
You modify a value from 0 to 1, but you don't commit this transaction. An undo entry is generated in the undo tablespace (if you are using automatic undo) for that modification. Then a checkpoint occurs in your database (for example, you issued it by hand). So DBWR writes the modified value "1" to the table in the datafiles, even though you have not committed the statement.
Now your instance crashes... how is Oracle supposed to know what the old value was?
Yes, Oracle has to roll the uncommitted change back to "0" using that undo record.
Let's make the situation even worse... maybe during the crash you lost some datafiles from the undo tablespace... so your undo information is not consistent.
And now? Oracle also has the changes that wrote those undo records protected, "committed", in the redo log files... so it can rebuild the undo. In other words, writing the old value 0 into the undo tablespace is itself a committed change in the redo log files for the undo tablespace...
How does Oracle know which records in the redo logs are committed and which are not, i.e. which records to roll back rather than roll forward during recovery? Hmm, I think you should read some documentation about those recovery phases first... every committed transaction is marked with an SCN...
Regards
Stefan
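
To see this behaviour for yourself, here is a minimal sketch (the table name t, column names and values are hypothetical, and 'redo size' in v$mystat only counts redo generated by your own session):

-- snapshot the session's redo statistic
select value from v$mystat
 where statistic# = (select statistic# from v$statname where name = 'redo size');

-- make an uncommitted change (hypothetical table t)
update t set val = 1 where id = 42;

-- snapshot again: the value has already grown even though nothing was committed,
-- and LGWR is free to flush those buffer entries to the online redo logs
select value from v$mystat
 where statistic# = (select statistic# from v$statname where name = 'redo size');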

Similar Messages

  • Fundamental questions on redo logs and rollbacks

    Hi all,
    Some basic questions, I really want to understand it very clearly.
    Suppose that we have updated a few records in a table. We know that the blocks to be updated will be fetched into the buffer cache, updated with the new values and eventually committed. The questions I have are:
    1) What exact information goes to the redo log? Is it a copy of the block before the change and a copy of the block after the change?
    2) What exactly goes to the rollback segment? Is it a copy of the block before the change (for an update), just the rowid for an inserted row, and a copy of the block for a deleted row?
    3) Whatever we do, is it the whole block that goes to redo or rollback? I mean, if there are 10 rows in the block and we update one of them, does the whole block still go to redo or rollback?
    4) If we roll back, what goes where? Is there anything that goes to redo if we roll back?
    Please explain.
    Thanks.

    Redo stores changes made in the database, and undo/rollback stores the reverse of those changes. Data blocks may be changed prior to a commit, and recorded in both locations.
    So, when a database is recovered, redo is applied to the backup datafiles, rolling every change forward, and then undo is applied to reverse any uncommitted transactions.
    Undo/rollback can also be used simply to roll back a transaction in an active instance. Redo is only used during recovery (instance or media recovery), not during normal operation.
    I don't know if this is tracked via the storage of block images, or if it just stores the change itself.
    -cf
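
    As a rough way to explore questions 3 and 4 yourself, here is a sketch that watches the 'redo size' statistic around an update and its rollback; the table, column names and row count are made up for illustration:

    -- hypothetical test table
    create table redo_demo (id number primary key, val number);
    insert into redo_demo select rownum, 0 from dual connect by level <= 1000;
    commit;

    -- snapshot redo size, change one row, snapshot again
    select value from v$mystat
     where statistic# = (select statistic# from v$statname where name = 'redo size');
    update redo_demo set val = 1 where id = 500;
    select value from v$mystat
     where statistic# = (select statistic# from v$statname where name = 'redo size');

    -- the rollback itself also generates a little redo, because applying
    -- undo changes blocks too
    rollback;
    select value from v$mystat
     where statistic# = (select statistic# from v$statname where name = 'redo size');

    The deltas for a single-row update are typically a few hundred bytes rather than a full block, which suggests Oracle logs change vectors, not whole block images (full block logging does happen in special cases such as hot backup mode).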

  • HT5312 how do i reset my security questions and redo

    I can't remember my answers to my security questions. How can I reset them? I have a new laptop
    and cannot download anything without answering questions that I cannot remember. Please help.

    Read the steps half-way down the page that you posted from; they tell you how to reset them. That is, if you have a rescue email address (which is not the same thing as an alternate email address) set up on your account, then steps 1 to 5 half-way down that page should let you reset them.
    If you don't have a rescue email address (you won't be able to add one until you can answer 2 of your questions) then you will need to contact iTunes Support / Apple to get the questions reset.
    Contacting Apple about account security : http://support.apple.com/kb/HT5699
    When they've been reset (and if you don't already have a rescue email address) you can then use the steps half-way down the HT5312 page that you posted from to add a rescue email address for potential future use

  • Question about redo generation

    select * from v$version;
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    PL/SQL Release 11.2.0.1.0 - Production
    "CORE     11.2.0.1.0     Production"
    TNS for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production

    Setup for test:
    create table parent_1 (id number(12) NOT NULL);
    alter table parent_1 add constraint parent_1_pk primary key (id);
    create table parent_2 (id number(12) NOT NULL);
    alter table parent_2 add constraint parent_2_pk primary key (id);
    create table child_table (ref_id number(12) NOT NULL,ref_id2 number(12) NOT NULL, created_at timestamp(6));
    alter table child_table add constraint child_table_pk primary key (ref_id, ref_id2);
    alter table child_table add constraint child_table_fk1 foreign key (ref_id) references parent_1(id);
    alter table child_table add constraint child_table_fk2 foreign key (ref_id2) references parent_2(id);
    insert into parent_1 select rownum from all_objects;
    insert into parent_2 values (1);
    insert into parent_2 values (2);
    insert into child_table (select id, 1, systimestamp from parent_1);
    insert into child_table (select id, 2, systimestamp from parent_1);
    commit;

    Code version 1:
    declare
       type t_ids is table of NUMBER(12);
       v_ids t_ids;
       start_redo NUMBER;
       end_redo NUMBER;
      cursor c_data is SELECT id FROM parent_1;
    begin
       select value into start_redo from v$mystat where statistic# = (select statistic# from v$statname where name like 'redo size');
       open c_data;
       LOOP
        FETCH c_data
        BULK COLLECT INTO v_ids LIMIT 1000;
        exit;
       end loop;
      CLOSE c_data;
        for pos in v_ids.first..v_ids.last LOOP
      BEGIN
        insert into child_table values (v_ids(pos), 2, systimestamp);
        EXCEPTION
          WHEN DUP_VAL_ON_INDEX THEN
            update child_table set created_at = systimestamp where ref_id = v_ids(pos) and ref_id2 = 2;
      END;
      END LOOP;
      select value into end_redo from v$mystat where statistic# = (select statistic# from v$statname where name like 'redo size');
      dbms_output.put_line('Created redo : ' || (end_redo-start_redo));
    end;
    /

    Version 2:
    declare
       type t_ids is table of NUMBER(12);
       v_ids t_ids;
       start_redo NUMBER;
       end_redo NUMBER;
      cursor c_data is SELECT id FROM parent_1;
      ex_dml_errors EXCEPTION;
      PRAGMA EXCEPTION_INIT(ex_dml_errors, -24381);
      pos NUMBER;
      l_error_count NUMBER;
    begin
       select value into start_redo from v$mystat where statistic# = (select statistic# from v$statname where name like 'redo size');
       open c_data;
       LOOP
        FETCH c_data
        BULK COLLECT INTO v_ids LIMIT 1000;
        exit;
       end loop;
      CLOSE c_data;
      BEGIN
        FORALL i IN v_ids.first .. v_ids.last SAVE EXCEPTIONS
        insert into child_table values (v_ids(i), 2, systimestamp);
      EXCEPTION
        WHEN ex_dml_errors THEN
          l_error_count := SQL%BULK_EXCEPTIONS.count;
          FOR i IN 1 .. l_error_count LOOP
            pos := SQL%BULK_EXCEPTIONS(i).error_index;
            update child_table set created_at = systimestamp where ref_id = v_ids(pos) and ref_id2 = 2;
          END LOOP;
      END;
       select value into end_redo from v$mystat where statistic# = (select statistic# from v$statname where name like 'redo size');
      dbms_output.put_line('Created redo : ' || (end_redo-start_redo));
    end;
    /

    Version 1 output:
    Created redo : 682644
    Version 2 output:
    Created redo : 7499364
    Why is version 2 generating significantly more redo?

    Both pieces of code erroneously replace non-procedural (set-based) code with procedural code, ignoring the power of an RDBMS to process sets; they are examples of slow-by-slow programming.
    Both are therefore undesirable, so the difference in redo generation doesn't really matter.
    Sybrand Bakker
    Senior Oracle DBA
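
    For reference, a set-based alternative along the lines Sybrand is suggesting could look roughly like this; it is only a sketch, reusing the tables from the test case above and collapsing the insert-or-update logic into a single MERGE:

    merge into child_table c
    using (select id from parent_1) p
       on (c.ref_id = p.id and c.ref_id2 = 2)
     when matched then
       update set c.created_at = systimestamp
     when not matched then
       insert (ref_id, ref_id2, created_at)
       values (p.id, 2, systimestamp);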

  • Question on redo log files at the standby

    Oracle version: 10.2.0.5
    Platform : AIX
    We have 2 node RAC primary with 2 node RAC standby
    Primary Instance1 named as cmapcp1
    Primary Instance2 named as cmapcp2
    Standby Instance1 named as cmapcp3
    Standby Instance2 named as cmapcp4

    At the standby side:
    SQL> show parameter log_file_name_convert
    NAME                 TYPE                 VALUE
    log_file_name_conver string               cmapcp1, cmapcp3, cmapcp2, cmapcp4
    Despite the value set for log_file_name_convert, I don't see any change in names of Online and Standby redo logs at the Standby site.
    -- From primary
    SQL> select member,type from v$logfile;
    MEMBER                                             TYPE
    +CMAPCP_DATA01/cmapcp/cmapcp_log01.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log02.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log03.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log04.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log05.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log06.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log11.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log12.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log13.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log14.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log15.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log16.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log17.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log18.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log19.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log20.dbf             STANDBY
    16 rows selected.

    -- From standby
    SQL> select member,type from v$logfile;
    MEMBER                                             TYPE
    +CMAPCP_DATA01/cmapcp/cmapcp_log01.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log02.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log03.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log04.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log05.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log06.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log11.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log12.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log13.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log14.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log15.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log16.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log17.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log18.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log19.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log20.dbf             STANDBY
    16 rows selected.

    Another thing I noticed: v$log doesn't list standby redo logs. This is expected behaviour, I guess.
    Below is the output from Primary and Standby (it is the same)
    set linesize 200
    set pagesize 50
    col member for a50
    break on INST SKIP PAGE on GROUP# SKIP 1
    select l.thread# inst, l.group#,lf.member, lf.type
        from v$log l , v$logfile lf
        where l.group# = lf.group#
        order by 1,2 ;
          INST     GROUP# MEMBER                                             TYPE
             1          1 +CMAPCP_DATA01/cmapcp/cmapcp_log01.dbf             ONLINE
                        2 +CMAPCP_DATA01/cmapcp/cmapcp_log02.dbf             ONLINE
                        3 +CMAPCP_DATA01/cmapcp/cmapcp_log03.dbf             ONLINE
          INST     GROUP# MEMBER                                             TYPE
             2          4 +CMAPCP_DATA01/cmapcp/cmapcp_log04.dbf             ONLINE
                        5 +CMAPCP_DATA01/cmapcp/cmapcp_log05.dbf             ONLINE
                        6 +CMAPCP_DATA01/cmapcp/cmapcp_log06.dbf             ONLINE

    John_75 wrote:
    Thank you ckpt, mseberg.
    I think log_file_name_convert is set wrongly, as you've mentioned. But if I don't want any change to the names of online or standby redo log files on the standby, I don't have to set log_file_name_convert at all. Right?

    From the same link:
    If you specify an odd number of strings (the last string has no corresponding replacement string), an error is signalled during startup. If the filename being converted matches more than one pattern in the pattern/replace string list, the first matched pattern takes effect. There is no limit on the number of pairs that you can specify in this parameter (other than the hard limit of the maximum length of multivalue parameters).
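
    For completeness, log_file_name_convert takes pattern/replacement pairs, so a correctly formed setting (only needed when the primary and standby directory structures actually differ; the paths below are hypothetical) would look something like the sketch below. Since the posted paths are identical on both sites, simply leaving the parameter unset is the easier option.

    -- on the standby; the parameter is static, so a restart is required
    ALTER SYSTEM SET log_file_name_convert =
      '+CMAPCP_DATA01/cmapcp_prim/', '+CMAPCP_DATA01/cmapcp_stby/'
      SCOPE=SPFILE SID='*';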

  • Question on Redo logs in RAC

    DB version:11.2
    Platform : Solaris 10
    We create RAC DBs manually. Below is a log of the DB creation from Node1 . Instance in Node2 is not yet created (only binary is installed in Node2).
    SQL> conn / as sysdba
    Connected to an idle instance.
    SQL> startup nomount pfile=/u03/oracle/11.2/db_1/dbs/initnehprd1.ora
    ORACLE instance started.
    Total System Global Area 1252643278 bytes                                      
    Fixed Size                  2219208 bytes                                      
    Variable Size             771752760 bytes                                      
    Database Buffers          469762048 bytes                                      
    Redo Buffers                8929280 bytes  
    SQL> CREATE DATABASE nehprd MAXINSTANCES 8 MAXLOGFILES 16 MAXLOGMEMBERS 4 MAXDATAFILES 1024
      2  CHARACTER SET AL32UTF8 NATIONAL CHARACTER SET AL16UTF16
      3  DATAFILE '+DG_DATA01/nehprd/nehprd_system01.dbf' SIZE 1000m EXTENT MANAGEMENT LOCAL
      4  SYSAUX DATAFILE '+DG_DATA01/nehprd/nehprd_sysaux01.dbf' SIZE 600m
      5  DEFAULT TEMPORARY TABLESPACE temp
      6  TEMPFILE '+DG_DATA01/nehprd/nehprd_temp01.dbf' SIZE 2000m EXTENT MANAGEMENT LOCAL UNIFORM SIZE 5m
      7  UNDO TABLESPACE undotbs11 DATAFILE '+DG_DATA01/nehprd/nehprd_undotbs1101.dbf' SIZE 700m
      8  LOGFILE
      9          GROUP 1 ('+DG_DATA01/nehprd/nehprd_log01.dbf') SIZE 150m,
    10          GROUP 2 ('+DG_DATA01/nehprd/nehprd_log02.dbf') SIZE 150m,
    11          GROUP 3 ('+DG_DATA01/nehprd/nehprd_log03.dbf') SIZE 150m
    12  /
    Database created.
    Elapsed: 00:00:18.95
    SQL> CREATE UNDO TABLESPACE undotbs12 DATAFILE '+DG_DATA01/nehprd/nehprd_undotbs1201.dbf' SIZE 700m;
    Tablespace created.
    Elapsed: 00:00:01.30
    SQL> ALTER DATABASE ADD LOGFILE thread 2 GROUP 4 '+DG_DATA01/nehprd/nehprd_log04.dbf' SIZE 150m;
    Database altered.
    Elapsed: 00:00:00.25
    SQL> ALTER DATABASE ADD LOGFILE thread 2 GROUP 5 '+DG_DATA01/nehprd/nehprd_log05.dbf' SIZE 150m;
    Database altered.
    Elapsed: 00:00:00.43
    SQL> ALTER DATABASE ADD LOGFILE thread 2 GROUP 6 '+DG_DATA01/nehprd/nehprd_log06.dbf' SIZE 150m;
    Database altered.

    But after the above activity, the following log files are created in the DB.
    6 log groups for each instance, and they are all in the same location, +DG_DATA01/nehprd!
    INST_ID     GROUP# STATUS  TYPE             MEMBER                                   IS_
             1          1         ONLINE  +DG_DATA01/nehprd/nehprd_log01.dbf             NO
             1          2         ONLINE  +DG_DATA01/nehprd/nehprd_log02.dbf             NO
             1          3         ONLINE  +DG_DATA01/nehprd/nehprd_log03.dbf             NO
             1          4         ONLINE  +DG_DATA01/nehprd/nehprd_log04.dbf             NO
             1          5         ONLINE  +DG_DATA01/nehprd/nehprd_log05.dbf             NO
             1          6         ONLINE  +DG_DATA01/nehprd/nehprd_log06.dbf             NO
             2          1         ONLINE  +DG_DATA01/nehprd/nehprd_log01.dbf             NO
             2          2         ONLINE  +DG_DATA01/nehprd/nehprd_log02.dbf             NO
             2          3         ONLINE  +DG_DATA01/nehprd/nehprd_log03.dbf             NO
             2          4         ONLINE  +DG_DATA01/nehprd/nehprd_log04.dbf             NO
             2          5         ONLINE  +DG_DATA01/nehprd/nehprd_log05.dbf             NO
             2          6         ONLINE  +DG_DATA01/nehprd/nehprd_log06.dbf             NO

    How were redo log groups 4, 5, 6 created for thread 1, and how were redo log groups 1, 2, 3 created for thread 2?

    Hi,
    To make things worse, when you query v$logfile it will show 6 redo logfiles belonging to 6 redo groups for each instance.

    The fact that it shows all redo groups does not mean they all belong to that instance. By that logic, querying v$database or v$datafile would mean the database/datafiles belong to only one instance; of course not.

    Isn't this a bit of a bug?

    Of course not. It's the concept.
    To understand it you need to understand the difference between an instance and a database. A database (i.e. its files) can be opened by many instances.
    An Oracle database server consists of a database and at least one database instance (commonly referred to as simply an instance). Because an instance and a database are so closely connected, the term Oracle database is sometimes used to refer to both instance and database. In the strictest sense the terms have the following meanings:
    Database
    A database is a set of files, located on disk, that store data. These files can exist independently of a database instance.
    Database instance
    An instance is a set of memory structures that manage database files. The instance consists of a shared memory area, called the system global area (SGA), and a set of background processes. An instance can exist independently of database files.
    Database: (v$database)
    CONTROLFILE (v$controlfile)
    DATAFILE (v$datafile)
    ONLINELOG (v$logfile,v$log)
    ARCHIVELOG (v$archivelog)
    SPFILE
    The views above will show the same values in every instance, because if a file (the database) is changed, the change is visible to all instances. That means you don't need to use gv$ for them, because the information is the same in all instances; you also don't need to connect to each instance and query these v$ views, because the information is independent of the instance.
    Instances: (v$instance)
    PARAMETERS (v$parameter)
    MEMORY STRUCTURE (e.g v$session)
    The view v$session will show information about sessions from that instance only. In RAC each instance has its own session information, so you need to query gv$session to get session information from the other instances.
    The fact that each instance is assigned its own redo thread and undo tablespace does not mean they are part of the instance; redo and undo are part of the database. They are written by the assigned instance and can be read by all instances (that's all).
    It's not a bug: when you query v$datafile, v$logfile or v$controlfile in any instance you will get the same result, because they describe the DATABASE. (A database, i.e. the files, can be opened by many instances.)
    Levi Pereira
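
    To see which thread each redo group really belongs to (which answers the original question), a query along these lines helps; the mapping lives in the controlfile, so the result is the same from either instance:

    select l.thread#, l.group#, lf.member, l.status
      from v$log l join v$logfile lf on lf.group# = l.group#
     order by l.thread#, l.group#;

    Based on the creation log above, groups 1-3 belong to thread 1 (created by CREATE DATABASE) and groups 4-6 belong to thread 2 (added with ALTER DATABASE ADD LOGFILE THREAD 2); the gv$ output simply shows all of them from every instance.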

  • Online redo logs on a physical standby?

    A question on REDO logs on physical standby databases. (10.2.0.4 db on Windows 32bit)
    My PRIMARY has 3 ONLINE REDO groups, 2 members each, in ..ORADATA\LOCP10G
    My PHYSICAL STANDBY has 4 STANDBY REDO groups, 2 members each, in ..ORADATA\SBY10G
    I have shipping occurring from the primary in LGWR, ASYNC mode - max availability
    However I notice the STANDBY also has ONLINE REDO logs, same as the PRIMARY, in the ..ORADATA\SBY10G folder
    According to the 10g Dataguard docs, section 2.5.1:
    "Physical standby databases do not use an online redo log, because physical standby databases are not opened for read/write I/O."
    I have tried to drop these on the STANDBY when not in apply mode, but I get the following:
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    Database altered.
    SQL> ALTER DATABASE DROP LOGFILE GROUP 3;
    ALTER DATABASE DROP LOGFILE GROUP 3
    ERROR at line 1:
    ORA-01275: Operation DROP LOGFILE is not allowed if standby file management is
    automatic.
    I also deleted them while the STANDBY instance was idle, but it recreated them when moved to MOUNT mode.
    So my question is: why is my PHYSICAL standby recreating and using these, if the docs say they shouldn't?
    I saw the same error mentioned here: prob. with DataGuard
    Is this a case of the STANDBY needing at least a notion of where the REDO logs will need to be should a failover occur, and if the files are already there, the standby database CONTROLFILE will hold onto them, as they are not doing any harm anyway?
    Or, is this a product of having standby_file_management=AUTO - i.e. the database will create these 'automatically'?
    Ta
    bt

    According to the 10g Dataguard docs, section 2.5.1:
    "Physical standby databases do not use an online redo log, because physical standby databases are not opened for read/write I/O."

    Yes, those are only used when the database is open for read/write.
    You should not perform any structural changes on the standby. Even if those online redo log files exist, what difficulty have you seen from them?
    They will be used whenever you perform a switchover/failover, so there is nothing to worry about here.

    Is this a case of the STANDBY needing at least a notion of where the REDO logs will need to be should a failover occur, and if the files are already there, the standby database CONTROLFILE will hold onto them, as they are not doing any harm anyway?

    If you think of it that way, you would be calling Oracle's own functionality harmful. Since they are not used until the database is opened, what harm do they do?
    standby_file_management --> for example, if you add a datafile on the primary, that information is carried in the redo/archived logs; once they are applied on the standby, the datafile is added automatically when the parameter is set to AUTO. If it is MANUAL, an unnamed file is created in the $ORACLE_HOME/dbs location, which you later have to rename, after which recovery needs to continue.
    check this http://docs.oracle.com/cd/B14117_01/server.101/b10755/initparams206.htm
    HTH.
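
    If you really did want to drop those groups (which, as noted, is generally unnecessary), a common workaround for ORA-01275 is to switch standby file management to MANUAL for the duration; a sketch, untested against this exact configuration:

    -- on the standby, with managed recovery already cancelled
    ALTER SYSTEM SET standby_file_management = 'MANUAL';
    ALTER DATABASE DROP LOGFILE GROUP 3;   -- may still fail if the group is in use
    ALTER SYSTEM SET standby_file_management = 'AUTO';
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

    Even then, the standby may recreate or clear the online logs later, which is expected behaviour rather than a problem.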

  • Redo Logs Groups and Members

    Hi -
    I have a few questions regarding redo log groups and naming conventions I was hoping someone could address or point me to some docs.
    I am multiplexing my control file and redo logs across HDDs for an XE installation.
    The original logs created at install have the naming form of:
    O1_MF_1_462H1GK7_.LOG.
    1. What is behind the naming scheme (specifically the _462H1GK7_ section)?
    2. Is there a generally recognized naming scheme for adding new group members in XE?
    3. I noticed that with any XE install I have done, the redo log groups default to Group 1 and Group 3, with no Group 2 to be found. Is this normal/required? If not, is it best to add group 2 and then remove group 3? I'm not sure if it has much bearing here, but the 10gR2 docs state that skipping group numbers will consume space in the control files.
    Thanks in advance for any assistance,
    Scott

    The odd-looking filename is from using Oracle Managed Files (OMF). You can override the naming scheme or create your own groups and members. Very common to include "redo" in the file name along with group and member identifiers. An example would be:
    <path>/redo01a.log
    <path>/redo01b.log
    <path>/redo02a.log
    etc.
    You can see group 01 has two members, a and b. You can also include the SID in the file name, but that can be identified via the path. 462H1GK7 is a unique identifier generated by Oracle; it has no meaning.
    I don't know about XE not creating a group 2. Were there group 2 file(s) left over from a previous install (although the OMF probably would have ignored the existing files)? If creating the files manually, you can use "reuse" to use existing files.
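
    If you prefer explicitly named members over OMF, the statements would look roughly like this; the paths and sizes are purely illustrative:

    -- add a new group with two multiplexed members on different drives
    ALTER DATABASE ADD LOGFILE GROUP 2
      ('C:\oraclexe\oradata\XE\redo02a.log', 'D:\oraredo\XE\redo02b.log') SIZE 50M;

    -- add a second member to an existing group
    ALTER DATABASE ADD LOGFILE MEMBER 'D:\oraredo\XE\redo01b.log' TO GROUP 1;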

  • Redo Log - Storage Consideration

    I have a question about the redo log storage guidelines that I read in an article on Metalink.
    That article recommends placing redo logs on a RAID level 1 device (*mirroring*),
    and recommends NOT placing them on RAID 10 or RAID 5.
    If you know, please explain in detail why that is.
    Scientia potentia est

    I haven't seen a raid 0, raid 1, or raid 5 filesystem in the last 5 years. Most companies now use SAN or NAS.
    That is not entirely true, is it, Robert? Most default SAN installations are set up as RAID 5 and are presented to the users as filesystem mounts.
    I agree entirely re the benefits of using ASM.
    The reason redo logs are recommended to be on RAID 1 (or 1+0/10) and not RAID 5 is that redo logs are written differently from all other Oracle files: they are written sequentially by the LGWR process. RAID 5 involves writing parity data to another disk and therefore adds extra writes to what can already be a very intensive single-streamed process.
    John
    www.jhdba.wordpress.com

  • Help! I have a question about LogMiner and want to ask everybody for help.

    Question: the archived redo logs do not record DML statements, only DDL statements.
    Below are the detailed steps:
    *1. Install Logminer*
    C:\Workspace\GetFiles\bin sqlplus /nolog
    SQL*Plus: Release 10.2.0.3.0 - Production on Tue Jul 12 17:11:49 2011
    Copyright (c) 1982, 2006, Oracle. All Rights Reserved.
    SQL> conn / as sysdba
    Connected.
    SQL> @C:\Oracle\product\10.2.0\db_1\RDBMS\ADMIN\dbmslm.sql
    Package created.
    Grant succeeded.
    SQL> @C:\Oracle\product\10.2.0\db_1\RDBMS\ADMIN\dbmslmd.sql
    Package created.
    *2. Set parameter*
    SQL> ALTER SYSTEM SET UTL_FILE_DIR='C:\Oracle\product\10.2.0\db_1\sjjhDict' SCOPE=SPFILE;
    System altered.
    *3. Start archivelog*
    SQL> alter database archivelog;
    Database altered.
    *4. View archive mode*
    SQL> archive log list
    Database log mode              Archive Mode
    Automatic archival             Enabled
    Archive destination            USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence     10
    Next log sequence to archive   12
    Current log sequence           12
    *5. Create database dict*
    SQL> exec sys.dbms_logmnr_d.build(dictionary_filename=>'C:\Oracle\product\10.2.0\db_1\sjjhDict\sjjhDict.ora',dictionary_location=>'C:\Oracle\product\10.2.0\db_1\sjjhDict');
    PL/SQL procedure successfully completed
    *6. Create table*
    CREATE TABLE "NEWS_B" (
    "NID" NUMBER,
    "NTITLE" VARCHAR2(200),
    "NTIME" DATE,
    "NAUTHOR" NUMBER,
    "NCONTENT" VARCHAR2(4000),
    PRIMARY KEY ("NID")
    );
    *7. Insert data to table*
    insert into news_b (NID, NTITLE, NTIME, NAUTHOR, NCONTENT, ROWID)
    values (1, '1', to_date('12-07-2011', 'dd-mm-yyyy'), 1, '1', 'AAAMoiAAEAAAABdAAA');
    insert into news_b (NID, NTITLE, NTIME, NAUTHOR, NCONTENT, ROWID)
    values (2, '2', to_date('12-07-2011', 'dd-mm-yyyy'), 2, '2', 'AAAMoiAAEAAAABdAAB');
    insert into news_b (NID, NTITLE, NTIME, NAUTHOR, NCONTENT, ROWID)
    values (3, '3', to_date('12-07-2011', 'dd-mm-yyyy'), 3, '3', 'AAAMoiAAEAAAABdAAC');
    commit;
    *8. Start logminer*
    Exec sys.dbms_logmnr.add_logfile('C:\Oracle\product\10.2.0\oradata\hux\REDO01.LOG',sys.dbms_logmnr.NEW);
    Exec sys.dbms_logmnr.add_logfile('C:\Oracle\product\10.2.0\oradata\hux\REDO02.LOG',sys.dbms_logmnr.ADDFILE);
    Exec sys.dbms_logmnr.add_logfile('C:\Oracle\product\10.2.0\oradata\hux\REDO03.LOG',sys.dbms_logmnr.ADDFILE);
    Exec sys.dbms_logmnr.start_logmnr(OPTIONS=>SYS.DBMS_LOGMNR.COMMITTED_DATA_ONLY,DictFileName=>'C:\Oracle\product\10.2.0\db_1\sjjhDict\sjjhDict.ora');
    SQL> SELECT SQL_REDO,SQL_UNDO,OPERATION,TIMESTAMP,ROW_ID,SEG_OWNER,SEG_NAME FROM SYS.V_$LOGMNR_CONTENTS WHERE SEG_OWNER='HUX' AND SEG_NAME='NEWS_B';
    SQL_REDO SQL_UNDO OPERATION TIMESTAMP ROW_ID SEG_OWNER SEG_NAME
    DDL 2011/7/12 1 AAAAAAAAAAAAAAAAAB HUX NEWS_B
    CREATE TABLE "NEWS_B"
    "NID" NUMBER,
    "NTITLE" VARCHAR2(200),
    "NTIME" DATE,
    "NAUTHOR" NUMBER,
    "NCONTENT" VARCHAR2(4000),
    PRIMARY KEY ("NID")
    SQL>
    *9. Execute DDL statements*
    SQL> TRUNCATE TABLE NEWS_B;
    Table truncated
    *10. Again*
    SQL> SELECT SQL_REDO,SQL_UNDO,OPERATION,TIMESTAMP,ROW_ID,SEG_OWNER,SEG_NAME FROM SYS.V_$LOGMNR_CONTENTS WHERE SEG_OWNER='HUX' AND SEG_NAME='NEWS_B';
    SQL_REDO SQL_UNDO OPERATION TIMESTAMP ROW_ID SEG_OWNER SEG_NAME
    DDL 2011/7/12 1 AAAAAAAAAAAAAAAAAB HUX NEWS_B
    CREATE TABLE "NEWS_B"
    "NID" NUMBER,
    "NTITLE" VARCHAR2(200),
    "NTIME" DATE,
    "NAUTHOR" NUMBER,
    "NCONTENT" VARCHAR2(4000),
    PRIMARY KEY ("NID")
    DDL 2011/7/12 1 AAAAAAAAAAAAAAAAAB HUX NEWS_B
    TRUNCATE TABLE NEWS_B
    SQL>
    *11. LOGGING status*
    SQL> select logging from dba_tables t where t.table_name=upper('NEWS_B');
    LOGGING
    YES
    SQL>
    The archived redo logs record no DML statements, only DDL statements. Why?
    Please help me!

    The archived redo logs record no DML statements, only DDL statements. Why?

    This simply isn't true. This may be a bug in your unsupported 10.2.0.3, which needs to be upgraded anyway, but mining DML has always worked.
    Also, one usually doesn't mine the currently open online redo logs.
    Sybrand Bakker
    Senior Oracle DBA
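
    Following that hint, a sketch of mining an archived log instead of the live online redo logs might look like this; the archived log name is a placeholder, and you would take the real one from v$archived_log:

    -- force the current redo to be archived, then find the newest archived log
    ALTER SYSTEM ARCHIVE LOG CURRENT;
    SELECT name FROM v$archived_log ORDER BY sequence# DESC;

    -- feed that archived log (placeholder path) to LogMiner
    EXEC sys.dbms_logmnr.add_logfile('C:\path\to\ARC00012_0123456789.001', sys.dbms_logmnr.NEW);
    EXEC sys.dbms_logmnr.start_logmnr(options => sys.dbms_logmnr.COMMITTED_DATA_ONLY, dictfilename => 'C:\Oracle\product\10.2.0\db_1\sjjhDict\sjjhDict.ora');
    SELECT sql_redo, operation, seg_owner, seg_name
      FROM v$logmnr_contents
     WHERE seg_owner = 'HUX' AND seg_name = 'NEWS_B';
    EXEC sys.dbms_logmnr.end_logmnr;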

  • RAC Redo Log Internal

    Hi all
    I want to ask some questions about redo log generation in RAC.
    1. Does Oracle guarantee that, for each committed transaction, all redo and undo information from begin-transaction to commit resides in the redo log (thread) of a single node? In other words, is it possible that Oracle puts the begin-transaction record in the redo log of node A and the commit record in the redo log of another node B?

    Reup this thread :)
    Another questions:
    What about how the RAC broadcast-on-commit works? I mean, is it true that when node A commits a transaction T, it broadcasts this information to all other nodes, and would Oracle then write this information about T to node B's online redo log? For example, would node B's redo log contain a redo record with opcode 5.4, T's transaction id and node A's thread number?
    But in my analysis (Oracle 10.2.0.1.0 RAC on an NFS share), neither the redo log dump file nor the binary redo log contains such a redo record.
    So I wonder what Oracle actually does.
    Black Thought
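
    For anyone repeating this kind of analysis, the usual way to look for specific redo records is a logfile dump into the session trace file; a sketch, where the file path is illustrative and the LAYER/OPCODE filter (layer 5, opcode 4 for commit markers) is a widely used but not formally documented option:

    -- dump one online (or archived) log into the current session's trace file
    ALTER SYSTEM DUMP LOGFILE '/u01/oradata/rac1/redo01.log';

    -- or dump only transaction-control records (commit/rollback markers)
    ALTER SYSTEM DUMP LOGFILE '/u01/oradata/rac1/redo01.log' LAYER 5 OPCODE 4;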

  • About automatic disappearance of Redo log file

    I had a free Oracle9i Release 1 (9.0.1) CD and installed Oracle9i on my PC.
    When using the Database Configuration Assistant to create a database (I chose not to create one during the 9i installation), after the clone-database step it tried to start the database and an error came up.
    It says: error writing to the redo01.log file. I checked that file: it existed in the relevant folder, but then it disappeared. At one point I had redo01, 02 and 03 log files. It confused me.
    Can anyone shed some light on that?
    Thanks.

    Well, seriously you need to read basic oracle documents.
    To give short answers to your question.
    Redo logs are required for instance and crash recovery of your system.
    You need a minimum of two redo groups, with at least one member in each group. They are written in a circular fashion, i.e. one after another. If you run your database in archivelog mode, a filled redo group is archived by the ARCn process to an archive log file before it is reused; those archive logs can then be used for database recovery.
    Every database requires at least two redo groups, and you can't drop below that minimum.
    Jaffar
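
    A quick way to check what the instance currently expects, using the standard v$ views (the clear-logfile command at the end is only an option for an INACTIVE group whose file is genuinely lost, so treat it as a hedged suggestion, not a routine step):

    -- list groups, their state, and the file each member points to
    select l.group#, l.members, l.status, lf.member
      from v$log l join v$logfile lf on lf.group# = l.group#
     order by l.group#;

    -- alter database clear logfile group 1;   -- reinitialises a lost, inactive group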

  • Logical standby database issue?

    Hi,
    I created a logical standby database on the same server as primary database.
    Then I switched over: the old primary DB became the standby DB, and the old standby DB became the primary DB.
    Then I ran "alter system switch logfile" on the new primary DB.
    execute sql in new standby DB:
    SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;
    APPLIED_SCN NEWEST_SCN
              0
    Question:
    The redo logs cannot be applied to the new standby DB.
    How do I solve this?
    thanks
    DB release:9i

    Hi,
    Can you upload the output of:
    sqlplus> show parameter arc
    from both instances, or post the init.ora parameters from both. I would like to verify your arc related parameters.
    Also, did you check the alert log in the primary and standby for errors?
    Thanks,
    Idan.
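
    One thing worth ruling out first (an assumption on my part, not a confirmed diagnosis) is whether SQL Apply is actually running on the new logical standby; in 9i that looks roughly like:

    -- on the new (logical) standby
    SELECT * FROM v$logstdby;                      -- no rows usually means apply is not running
    ALTER DATABASE START LOGICAL STANDBY APPLY;

    -- then watch progress
    SELECT applied_scn, newest_scn FROM dba_logstdby_progress;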

  • Calculations not working

    Hi All,
    getting frustrated. I've done forms with calculations (invoice sheets etc.) and they worked well in the past. I can't seem to get the calculations to work in my latest form when they're used by the end user in Reader (8 or 9). It works well in Acrobat though. Is there something I can do to fix this? I've tried replacing the form boxes in question and redoing the sum and/or product formulas. It always tests well and does nothing in Reader. Help appreciated.
    Thanks,
    Elliott

    Still need help... Here's what George tried Part 1:
    Elliot,
    Thanks for the file, it looks great. When I changed the field names of
    some of your fields (see attached file), the calculations started
    working. I thought to change the field names because of the "$" that
    was in the field names and thought that might be causing the trouble.
    I would have to investigate this a bit more to understand the exact
    nature of the problem, but for now it might be best to limit your
    field name characters to letters, numbers, decimal point, and
    underscore. I'll let you know if I find anything more about this.
    If this works for you, you might want to update your forum posting if
    you think this could help someone else.
    Good luck,
    George

  • ORA-00314, ORA-00312 during restore

    Hi.
    I have (I should say I had) a database in archivelog mode.
    Arch logs multiplexed on c:, e:.
    Online redo logs multiplexed on c:, e:.
    Controlfile multiplexed on c:, e:.
    I also had a copy of spfile on e:
    I also made a backup on e:
    configure default device type to disk;
    configure retention policy to redundancy 1;
    configure channel device type disk format 'e:\backup\ora_040506_%t_s%s_s%p';
    configure controlfile autobackup on;
    configure controlfile autobackup format for device type disk to 'e:\backup\sp%F';
    configure backup optimization on;
    backup database format 'e:\backup\ora_040506_%t_s%s_s%p';
    Then I had a problem on c: (lost local datafiles, controlfiles), and wanted to restore.
    So I stopped db, copied controlfile, spfile, archive logs and redo logs from e: to c:
    I ran:
    rman nocatalog target / @restoredb.sql
    and restoredb holds:
    restore database;
    recover database;
    exit;
    In alert log I can read:
    Mon Dec 06 11:23:18 2010
    Full restore complete of datafile 2 C:\ORACLEXE\ORADATA\XE\UNDO.DBF. Elapsed time: 0:00:15
    checkpoint is 1853242
    Full restore complete of datafile 4 C:\ORACLEXE\ORADATA\XE\USERS.DBF. Elapsed time: 0:00:19
    checkpoint is 1853242
    Mon Dec 06 11:23:50 2010
    Full restore complete of datafile 1 C:\ORACLEXE\ORADATA\XE\SYSTEM.DBF. Elapsed time: 0:00:47
    checkpoint is 1853242
    Full restore complete of datafile 3 C:\ORACLEXE\ORADATA\XE\SYSAUX.DBF. Elapsed time: 0:00:55
    checkpoint is 1853242
    Mon Dec 06 11:24:05 2010
    alter database recover datafile list clear
    Mon Dec 06 11:24:05 2010
    Completed: alter database recover datafile list clear
    Mon Dec 06 11:24:05 2010
    alter database recover datafile list
    1 , 2 , 3 , 4
    Completed: alter database recover datafile list
    1 , 2 , 3 , 4
    Mon Dec 06 11:24:05 2010
    alter database recover if needed
    start
    Media Recovery Start
    Mon Dec 06 11:24:06 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_3860.trc:
    ORA-00314: log 1 of thread 1, expected sequence# 78 doesn't match 58
    ORA-00312: online log 1 thread 1: 'C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_1_5F60S79R_.LOG'
    Mon Dec 06 11:24:08 2010
    Media Recovery failed with error 314
    ORA-283 signalled during: alter database recover if needed
    start
    What happens?
    I can't recover a database that was in archivelog mode!

    I am going to be more complete:
    XE on windows
    I have a laptop.
    Install XE, as default.
    HDD on c:, sd card on e:
    I have
    * put db in archivelog,
    * created online redo log groups #3,4,5, with members on c: and e:
    * dropped default online redo log groups 1 and 2
    * set up 2 controlfile multiplexing on c:, e:
    * set up 2 archive log dest (one on c: and one on e:)
    * copied spfile to e:
    * set up a backup to e: every 2 weeks.
    The laptop has worked one year then laptop crashed (hdd down), but I still have My sd card to restore datas.
    => To restore, I installed a new laptop with the default XE install, stopped the db, and overwrote from e: -> c: :
    * the controlfile
    * the spfile
    * archivelog and redolog
    then startup mount.
    select * from v$logfile;
    => Show only groups 1 and 2 (and 1 is stale)
    select * from v$log;
    => sequences are 78,79 ( and 79 is current)
    The archive logs I have go from 77->106.
    +++++++++++++++++++++++++++++++++++++++++++++
    1st issue (or bug?): redo log groups 3,4,5 are declared in control file but not taken into account when restoring controlfile!
    I stopped new db, copied controlfile from e: (with redo logs 3,4,5 declared) to the one on c:, then restarted db.
    Can someone tell me if I can do this without "guessing" which were the redo log member names and typing ALTER DATABASE ADD LOGFILE GROUP..., then repeat (alter system switch logfile; alter database drop logfile; and drop groups 1 and 2) (and repeating this several times while group 1 or 2 still exists).
    +++++++++++++++++++++++++++++++++++++++++++++
    =>rman target /
    restore database;
    => OK, no error, but v$logfile still shows only groups 1 and 2 (and 1 is stale)
    recover database;
    => error on screen:
    starting media recovery
    media recovery failed
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 12/23/2010 17:13:49
    ORA-00283: recovery session canceled due to errors
    RMAN-11003: failure during parse/execution of SQL statement: alter database recover if needed start
    ORA-00283: recovery session canceled due to errors
    ORA-00314: log 1 of thread 1, expected sequence# 78 doesn't match 42
    ORA-00312: online log 1 thread 1: 'C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_1_5F60S79R_.LOG'
    Recovery Manager complete.
    and in alert_log:
    Full restore complete of datafile 3 C:\ORACLEXE\ORADATA\XE\SYSAUX.DBF. Elapsed time: 0:00:56
    checkpoint is 1853242
    Thu Dec 23 17:13:45 2010
    alter database recover datafile list clear
    Thu Dec 23 17:13:45 2010
    Completed: alter database recover datafile list clear
    Thu Dec 23 17:13:45 2010
    alter database recover datafile list
    1 , 2 , 3 , 4
    Completed: alter database recover datafile list
    1 , 2 , 3 , 4
    Thu Dec 23 17:13:45 2010
    alter database recover if needed
    start
    Media Recovery Start
    Thu Dec 23 17:13:46 2010
    Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_2080.trc:
    ORA-00314: log 1 of thread 1, expected sequence# 78 doesn't match 42
    ORA-00312: online log 1 thread 1: 'C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_1_5F60S79R_.LOG'
    Thu Dec 23 17:13:48 2010
    Media Recovery failed with error 314
    ORA-283 signalled during: alter database recover if needed
    start
    => The problem is not seq 78.
    Where does # 42 come from?
    Is it possible to dump the controlfile?
    I looked into the archive logs:
    in # 77 in beginning I have: Seq# 0000000077, SCN 0x0000001bf6b3-0x0000001c473a
    in # 78 in beginning I have: Seq# 0000000078, SCN 0x0000001c473a-0x0000001c96eb
    in # 79 in beginning I have: Seq# 0000000079, SCN 0x0000001cb0bf-0x0000001cffc4
    in # 80 in beginning I have: Seq# 0000000080, SCN 0x0000001cffc4-0x0000001d5a56
    Furthermore:
    If I look at the dates, archive logs 77 & 78 are newer than 79 and the following ones. Probably they got clobbered due to the issue above (redo log groups)?
    Regards,
    Alain
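
    For what it's worth, when the online redo logs RMAN is asking for are no longer usable, the usual way out is an incomplete (point-in-time) recovery followed by OPEN RESETLOGS; a sketch, assuming the last usable archived sequence really is 106 as listed above, and not a guaranteed fix for the controlfile/redo-group mismatch described earlier:

    run {
      set until sequence 107 thread 1;   -- one past the last available archived log
      restore database;
      recover database;
    }
    sql 'alter database open resetlogs';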

Maybe you are looking for

  • Error while creating AW

    Hi I got following error while creating AW via Wizard. Preparing Creating analytic workspace... Processing cube REFCOST Processing Creating Dimension : REFPRODUCT Processing Creating Dimension : REFTIME Processing Defining Load for Dimension: REFPROD

  • Upgraded iTunes: now cannot see the Podcasts on my iPod nano

    I know it sounds odd but I am genuinely sad that I have not been able to use my iPod for two days: I bought an iPod nano when they first came out and have been very happy with it. Two days ago I upgraded iTunes for the first time (to 7.6.2.9). Since

  • Cannot shut down or restart

    why cant i restart or shut down my new imac

  • BT Broadband Usage Monitor

    I'm trying to keep a daily check on my Broadband usage; reason I overshot my limit last month. But looking at the BT BBand Monitor it seems that it is not updated consistently. At times the amount used increases but the dates of usage do not change.

  • Java 6 - 64 bit only

    does Java 6 only run on 64 bit systems...... From the downloads page, I clicked on system requirements and was taken here: http://java.sun.com/javase/6/webnotes/install/index.html....