Archive log / nologging / direct path insert

Could you please confirm whether the following are true, or correct me if my understanding is wrong:
1) Archive log mode and LOGGING are needed for media recovery; they are not needed for instance recovery.
2) If the insert is a conventional (no APPEND) insert, redo is generated even if the table is in NOLOGGING mode and the database is in NOARCHIVELOG mode. This redo is needed for instance recovery.
3) A direct-path insert skips undo generation and may skip redo generation if the object is in NOLOGGING mode.
Thanks.
In case it is relevant, I am using Oracle 11.2.0.3.

1) Yes, archive logs are needed for media recovery.
2 and 3) Even if the table is in NOLOGGING mode, a direct-path insert still generates a small amount of redo for index maintenance and dictionary changes. Upon a restart after a failure, Oracle reads the online redo logs and replays any transactions it finds there. That is the "roll forward" phase. The binary redo information is used to replay everything that did not get written to the datafiles. This replay includes regenerating the UNDO information (UNDO is protected by redo).
After the redo has been applied, the database is typically available for use, and the rollback phase begins. Any transaction that was still in flight when the instance failed must have its changes undone, i.e. rolled back. We do that by processing the undo for all uncommitted transactions.
The database is now fully recovered.
Also read the following links:
http://docs.oracle.com/cd/B19306_01/server.102/b14220/startup.htm
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:5280714813869
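A quick way to check which of these settings apply in your own environment (a minimal SQL*Plus sketch; the table name T is just a placeholder):
select log_mode, force_logging from v$database;          -- ARCHIVELOG/NOARCHIVELOG, and whether FORCE LOGGING overrides NOLOGGING
select logging from user_tables where table_name = 'T';  -- YES = LOGGING, NO = NOLOGGING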

Similar Messages

  • Nologging direct-path insert into an indexed table

    Hello,
    Does anyone have an idea how I can suppress generation of undo logs for direct-path insert into an indexed table on 11.2.0.1.0:
    CREATE TABLE TBL(ID NUMBER) NOLOGGING;
    CREATE INDEX IDX ON TBL(ID) NOLOGGING;
    INSERT /*+ APPEND */ INTO TBL SELECT /*+ APPEND */ ROWNUM FROM ...; -- Source table has 400,000,000+ rows
    Regards,
    Angel Tsankov

    Please do not post duplicates - "Why does Oracle not use direct-path insert when instructed to do so" - please continue the discussion in your original thread.
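    For what it's worth, the remaining undo in a case like this usually comes from index maintenance; one common workaround (a sketch only, reusing the poster's TBL/IDX names and a small dummy row source in place of the 400,000,000-row table) is to make the index unusable for the load and rebuild it afterwards:
    ALTER SESSION SET skip_unusable_indexes = TRUE;
    ALTER INDEX IDX UNUSABLE;
    INSERT /*+ APPEND */ INTO TBL SELECT ROWNUM FROM dual CONNECT BY LEVEL <= 1000;
    COMMIT;
    ALTER INDEX IDX REBUILD NOLOGGING;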

  • Direct load insert  vs direct path insert vs nologging

    Hello. I am trying to load data from table A (only 4 columns) into table B. Table B is new. I have 25 million records in table A. I have been debating between direct-load insert, direct-path insert and NOLOGGING. What is the difference between the three methods of data load? What is the best approach?

    Hello,
    The fastest way to move data from Table A to Table B is by using direct path insert with no-logging option turned on table B. Meaning this will be produce minimum logging and in case of DR you might not be able to recover data in table B. Now Direct path insert is equivalence of loading data from flat using direct load method. Generally using conventional method it's six phases to move your data from source (table, flat file) to target (table). But with direct path/load it will cut down to 3, and if in addition you will use PARALLEL hint on select and insert you might have faster result.
    INSERT /*+ APPEND */ INTO TABLE_B SELECT * from TABLE_A;Regards
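    If you also want to parallelise the load, the session has to enable parallel DML first; a minimal sketch along the lines described above (the degree of 4 is an arbitrary choice, and TABLE_A/TABLE_B are the poster's names):
    ALTER TABLE TABLE_B NOLOGGING;
    ALTER SESSION ENABLE PARALLEL DML;
    INSERT /*+ APPEND PARALLEL(b, 4) */ INTO TABLE_B b
    SELECT /*+ PARALLEL(a, 4) */ * FROM TABLE_A a;
    COMMIT;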

  • Direct Path Insert on Forms

    Hi,
    I am using the following insert statement in my Forms application to do a direct-path insert.
    insert into /*+append*/ abc select * from xyz;
    Redo should not be generated by a direct-path insert, but in the above case redo is being generated for this insert statement.
    I then created a procedure test_insert as below
    create procedure test_insert is
    begin
    insert into /*+append*/ abc select * from xyz;
    end;
    I then replaced the insert statement on the form with the above procedure, and after that redo for the insert statement stopped being generated.
    I was wondering whether we can use the APPEND hint for a direct-path insert from Forms?
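    One thing worth noting about the statement as posted (an observation, not part of the original thread): the APPEND hint is only recognised when it immediately follows the INSERT keyword, not the INTO keyword, so the direct-path form would be written as:
    insert /*+ append */ into abc select * from xyz;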

    My point, though, is that if you're doing your hourly load into a separate table and building the index(es) on that table before moving the partition into the partitioned table, there is no risk of doing anything to screw up the partitioned table. It's also a very handy way to ensure that your load process doesn't interfere with your production data.
    If I have understood your point right, what about the space required (tablespaces and indexes)? Trust me, my source table gets populated with over 230 million records per day. If I use a temp table, I will have to maintain hourly partitions/indexes/tablespaces for the temp table to hold that much data, won't I? And the data I move (say, today's data) will be used by the user only after 4 days (because until then it is available in the source). We have provisioned our user interface to connect to 2 databases: one with 5 days of data (incl. sysdate) and the other an archive DB that holds 45 days' worth of data.
    I wouldn't expect that a direct path load into a single partition of a partitioned table would invalidate local indexes on other partitions, but since I'd never do a direct path load into a single partition of a partitioned table, I've not tested this to make sure.
    No problem, if I try this out I will share it in the forum. Thanks

  • Insert /*+ Append */ and direct-path INSERT

    Hi Guys
    Does the insert /*+ APPEND */ hint cause Oracle 10g to use a direct-path INSERT?
    And if it does, is an insert with /*+ APPEND */ subject to the same restrictions as direct-path INSERT, such as "The target table cannot have any triggers or referential integrity constraints defined on it"?
    Thanks

    Dear,
    Here below is a simple example showing the effect of an existing trigger on the APPEND hint.
    mhouri@mhouri> select * from v$version where rownum=1;
    BANNER                                                                         
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production         
    mhouri@mhouri> create table b as select * from all_objects where 1 = 2;
    Table created.
    mhouri@mhouri> insert /*+ append */ into b
      2  select * from all_objects;
    70986 rows created.
    mhouri@mhouri> select * from b;
    select * from b
    ERROR at line 1:
    ORA-12838: cannot read/modify an object after modifying it in parallel
    mhouri@mhouri> rollback;
    Rollback complete.
    The direct-path insert took place, as shown by the fact that I cannot select from the table before committing.
    mhouri@mhouri> create trigger b_trg before insert on b
      2  for each row
      3  begin
      4  null;
      5  end;
      6  /
    Trigger created.
    mhouri@mhouri> insert /*+ append */ into b
      2  select * from all_objects;
    70987 rows created.
    424 rows selected.
    mhouri@mhouri> select count(1) from b;
      COUNT(1)                                                                     
    70987
    While a trigger is present on the table, the APPEND hint is silently ignored by Oracle. The fact that I can select from the table immediately after the insert has finished indicates that the rows were not inserted using a direct-path load.
    Best Regards
    Mohamed Houri

  • Direct path insert

    Hi Friends,
    If I use a direct-path insert from a (LAN) remote table with 5 million rows using:
    sql> insert /*+ append */ into EMP select * from EMP@dblink1;
    1) Do I need to create a big rollback segment?
    *Note that with direct-path SQL*Loader, no big rollback segment is needed.
    2) Can I still undo or roll back the 5 million rows?
    If I can, then it surely holds or needs a big rollback segment.
    Thanks a lot

    From the same link, see the following points from Tom:
    b) a direct path load always loads above the high water mark, since it is formatting and writing blocks directly to disk - it cannot reuse any existing space. Think about this - if you direct pathed an insert of a 100 byte row that loaded say just two rows - and you did that 1,000 times, you would be using at least 1,000 blocks (never reuse any existing space) - each with two rows. Now, if you did that using a conventional path insert - you would get about 70/80 rows per block in an 8k block database. You would use about 15 blocks. Which would you prefer?
    c) you cannot query a table after direct pathing into it until you commit.
    See point c. Because the new rows are written above the HWM and nothing has been committed yet, a rollback simply leaves the table's HWM where it originally was - the freshly formatted blocks never become part of the table, so hardly any undo is needed.
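    To make question 2 concrete: the load can still be rolled back, but because the rows go in above the HWM, very little undo is consumed for the table data itself (index maintenance is another matter). A rough way to see the undo used by the current transaction (a sketch; EMP and the dblink are the poster's names, the views are standard v$transaction/v$session):
    insert /*+ append */ into EMP select * from EMP@dblink1;
    select t.used_ublk, t.used_urec
    from v$transaction t, v$session s
    where t.addr = s.taddr
    and s.sid = sys_context('userenv', 'sid');
    rollback;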

  • Direct Path Inserts = By DBWR or by Shadow Process

    Hello guys,
    if I use the APPEND hint (direct-path insert) - which process writes the data to the datafile?
    In the documentation of oracle:
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96524/c21dlins.htm#10778
    "Oracle appends the inserted data after existing data in the table. Data is written directly into datafiles, bypassing the buffer cache."
    OK, that is clear - but which process actually writes the data down? The shadow (server) process that is handling the insert statement, or is the data handed to DBWR, which then flushes it directly to the datafile without putting it into the buffer cache?
    And another question regarding direct path inserts:
    http://sai-oracle.blogspot.com/2006/03/parallel-query-is-it-good-or-bad.html
    "If you want to do direct path insert in parallel, you would block all other non direct path insert operations on that table. This is because direct path insert would append above highwater mark, and no other process is allowed to update HWM until your operation is done."
    If I insert data serially with a direct-path insert, I also write data behind the High-HWM... so why is the table only locked in parallel mode?
    And why does Oracle not check whether there is enough space in some other blocks below the High-HWM and use those blocks, as it does for "normal" inserts?
    Regards
    Stefan

    It's the server process that's responsible for writing direct path blocks to disk.
    DBWR only ever flushes out of the buffer cache.
    You do NOT write "data behind the HWM" in direct path mode, ever. Direct path works so fast because all you do is slam whole blocks of data onto disk ABOVE the high water mark. You can slam whole blocks down without worrying about what you're over-writing precisely because the blocks are above the HWM and thus cannot possibly be in use by anyone for anything. Under the HWM and you have to start worrying about locking and contention and stuff like that.
    Parallel operations (or simultaneous serial operations by two different users) inevitably have to block each other for this sort of thing: if I am busy writing to blocks above the HWM, great. But if you want to do it too... well, what's to stop us from over-writing each other's blocks?! Nothing, actually. So Oracle simply has to lock the entire table from any other direct path operation to make the situation manageable.
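    A simple way to see which mechanism a session used is the wait interface: a session doing direct-path writes accumulates 'direct path write' waits itself, whereas conventional-path changes are written out later by DBWR. A minimal sketch against the session's own events (standard v$session_event columns):
    select event, total_waits
    from v$session_event
    where sid = sys_context('userenv', 'sid')
    and event like 'direct path%';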

  • Forcing DIRECT PATH INSERT to go CONVENTIONAL.

    According to Oracle, a statement falls back from DIRECT-PATH insert to conventional insert when it violates one of the following restrictions:
    Direct-path INSERT is subject to a number of restrictions. If any of these restrictions is violated, then Oracle Database executes conventional INSERT serially without returning any message, unless otherwise noted:
        *     You can have multiple direct-path INSERT statements in a single transaction, with or without other DML statements. However, after one DML statement alters a particular table, partition, or index, no other DML statement in the transaction can access that table, partition, or index.
        *      Queries that access the same table, partition, or index are allowed before the direct-path INSERT statement, but not after it.
        *      If any serial or parallel statement attempts to access a table that has already been modified by a direct-path INSERT in the same transaction, then the database returns an error and rejects the statement.
        *      The target table cannot be part of a cluster.
        *      The target table cannot contain object type columns.
        *      Direct-path INSERT is not supported for an index-organized table (IOT) if it is not partitioned, if it has a mapping table, or if it is referenced by a materialized view.
        *      Direct-path INSERT into a single partition of an index-organized table (IOT), or into a partitioned IOT with only one partition, will be done serially, even if the IOT was created in parallel mode or you specify the APPEND hint. However, direct-path INSERT operations into a partitioned IOT will honor parallel mode as long as the partition-extended name is not used and the IOT has more than one partition.
        *      The target table cannot have any triggers or referential integrity constraints defined on it.
        *      The target table cannot be replicated.
        *      A transaction containing a direct-path INSERT statement cannot be or become distributed.
    Are there any others that are not documented here? We have a vendor-based app and want to avoid DIRECT PATH INSERT and have it go CONVENTIONAL. We tried the TRIGGER approach, but that did not help at all.

    Why are you wanting to force conventional ?
    Are you sure the application uses direct path ?

  • Direct path read caused by direct path insert?

    Oracle 9.2
    ========
    Consider this INSERT statement
    INSERT /*+ APPEND USE_HASH*/ INTO schema.tablename (...)
    SELECT * FROM view_tablename;
    This statement takes about 30 minutes to complete.
    If I look at v$session_wait, I can see that the session is waiting on db file scattered read (which is understandable). However, at the very end of the 30-minute run, v$session_wait shows that it is waiting on direct path read. I want to understand how a direct-path INSERT (or, for that matter, a simple INSERT statement) can cause a direct path read wait. Can you show me some doc which explains this in more detail than the Oracle doc? Thanks in advance.

    Hmm... OK. Well, first, there's a problem with the hint specification.
    The USE_HASH hint makes no sense as a hint to the insert statement. It only makes sense in the context of a select statement. Also, it should specify a table alias. So, you could have something like:
    insert /*+ APPEND */ into some_tab
    select /*+ use_hash(tab_alias) */ col1,col2,col3 from .....
    Now, as to the question of direct path read:
    The direct path read event indicates a disk sort. So, it's likely the source of the direct path read wait was due to sorting in the processing of the select statement. I can't imagine how the insert would cause direct path read.
    Hope that helps,
    -Mark
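    As a follow-up check (not from the original thread): if the direct path read really comes from a disk sort in the SELECT part, the session's sort statistics should reflect it. A minimal sketch using the standard v$sesstat/v$statname views:
    select sn.name, st.value
    from v$statname sn, v$sesstat st
    where sn.statistic# = st.statistic#
    and st.sid = (select sid from v$mystat where rownum = 1)
    and sn.name in ('sorts (memory)', 'sorts (disk)');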

  • How can I tell if direct-path insert is really being used?

    I have a number of INSERT statements with /*+ APPEND */ hints, that I suspect are not using direct-path insert processing. How can I tell (via the plan, via Enterprise Manager, whatever) if direct-path is being used?
    If it is not being used, then I can research why that might be, and try to resolve the obstacles there. But if it is being used, then I want to know that so I can focus elsewhere.
    Thanks,
    Mike

    mtefft wrote:
    The question of whether direct-path is possible with multi-table inserts has been on my mind...
    Moreover, in another forum post, we have eyewitnesses that it has happened at least once:
    APPEND Hint in Multi-table insert
    So I think that post (giving examples of Explain plans with multi-table inserts successfully using direct-path) answers my question.
    Mike,
    Thanks for that link. When I checked my example again I realised that it had a foreign key constraint between the two tables I was inserting into. (It was a demonstration of how to normalise an incoming denormalised address table, so converted a flat table into address/address lines). When I removed the constraint I got the multi-table insert.
    However, I followed this up with a check on the execution plans, and for the 10.2.0.3 instance I was running I noted the following:
    Plan reported after running the insert and select from dbms_xplan.display_cursor()
    | Id  | Operation           | Name | Rows  | Bytes | Cost  |
    |   0 | INSERT STATEMENT    |      |       |       |    30 |
    |   1 |  MULTI-TABLE INSERT |      |       |       |       |
    |   2 |   TABLE ACCESS FULL | T3   | 10000 |  1240K|    30 |
    ------------------------------------------------------------
    Plan reported after explain plan / select from dbms_xplan.display()
    | Id  | Operation           | Name | Rows  | Bytes | Cost  |
    |   0 | INSERT STATEMENT    |      | 10000 |  1240K|    30 |
    |   1 |  MULTI-TABLE INSERT |      |       |       |       |
    |   2 |   DIRECT LOAD INTO  | T1   |       |       |       |
    |   3 |   DIRECT LOAD INTO  | T2   |       |       |       |
    |   4 |    TABLE ACCESS FULL| T3   | 10000 |  1240K|    30 |
    ------------------------------------------------------------
    If you try to check for direct path inserts by looking at the in-memory plans, you might not see the direct load that is really happening. (Perhaps a check that both / all the target tables are locked with TM mode 6 may give you a clue.)
    Regards
    Jonathan Lewis
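    Two practical checks that follow from this discussion (a sketch; T1 and the owner SCOTT are placeholders): in the loading session itself, a select against the table fails with ORA-12838 until you commit if the insert really was direct-path, and each direct-path target table is locked with a TM lock in mode 6 (exclusive), which another session can see in v$lock:
    select count(*) from t1;   -- ORA-12838 here means the insert was direct-path
    select l.sid, l.lmode
    from v$lock l
    where l.type = 'TM'
    and l.id1 = (select object_id from dba_objects
                 where owner = 'SCOTT' and object_name = 'T1');   -- lmode = 6 is exclusive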

  • Direct Path Insert into temporary table

    Hi,
    I made the following experiment (temp_idtable is a temp table with a single column id, V_PstSigLink is a complex View):
    14:24:28 SQL> insert into temp_idtable select id from V_PstSigLink;
    17084 rows created.
    Statistics
    24 recursive calls
    52718 db block gets
    38305 consistent gets
    16 physical reads
    4860544 redo size
    629 bytes sent via SQL*Net to client
    553 bytes received via SQL*Net from client
    3 SQL*Net roundtrips to/from client
    4 sorts (memory)
    0 sorts (disk)
    17084 rows processed
    14:24:41 SQL> commit;
    Commit complete.
    14:25:29 SQL> insert /*+ APPEND */ into temp_idtable select id from V_PstSigLink;
    17084 rows created.
    Statistics
    1778 recursive calls
    775 db block gets
    38847 consistent gets
    40 physical reads
    427408 redo size
    613 bytes sent via SQL*Net to client
    565 bytes received via SQL*Net from client
    3 SQL*Net roundtrips to/from client
    27 sorts (memory)
    0 sorts (disk)
    17084 rows processed
    As you can see, with /*+ APPEND*/ there will be much less REDO generated.
    How can it be?
    1. The /*+ APPEND */ hint on its own doesn't mean that no REDO should be generated, does it? For that, one also has to declare the table with NOLOGGING.
    2. In the doc there is a statement that for temp. tables there is no REDO log generated, just UNDO (and REDO only for UNDO).
    Where is the difference coming from?
    If I use direct path (APPEND) for a temp table, where will the table be populated - in the buffer cache or directly in a temporary segment on disk?
    Balazs

    Hi John,
    I'm a little confused about your statement "Oracle does not cache data blocks until they are read from a table." I wanted to do a little experiment with the KEEP buffer cache to see whether I can 'pin' the content of a temp table. But before getting to pinning, I noticed an interesting behavior. Please see this SQL*Plus log:
    SQL> create global temporary table temp_ins(id NUMBER);
    Table created.
    SQL> insert into temp_ins values (1);
    1 row created.
    SQL> insert into temp_ins values (2);
    1 row created.
    SQL> insert into temp_ins values (3);
    1 row created.
    SQL> set autotrace on explain statistics;
    SQL> select * from temp_ins;
    ID
    1
    2
    3
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE
    1 0 TABLE ACCESS (FULL) OF 'TEMP_INS'
    Statistics
    0 recursive calls
    0 db block gets
    4 consistent gets
    0 physical reads
    0 redo size
    416 bytes sent via SQL*Net to client
    503 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    3 rows processed
    SQL>
    I observed exactly the opposite of your statement! Oracle apparently cached the content of the temp table before it read from it!! Could you please explain why? Now I'm going to try it with a normal table...
    Balazs

  • Direct-Path Inserts

    If the data bypasses the buffer cache and is written to the data files directly, does that mean the insert commits implicitly? Also, would it be safe to say that this would be a means of doing an insert without having to drop or disable constraints?
    Matt

    There is no implicit commit done.
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/statements_9014.htm#SQLRF01604
    Will show you the restrictions, most notable (for your question).
    "The target table cannot have any triggers or referential integrity constraints defined on it."

  • Concept and usage of DIRECT PATH LOAD

    Product: ORACLE SERVER
    Date written: 1998-11-27
    When a very large amount of data has to be loaded in a short time, direct path load can be used. This note explains the concept of direct path load in detail, how to use it, and what to consider when using it.
    1. Conventional path load
    The usual SQL*Loader method inserts the data from the datafile into an existing table using the SQL INSERT command. Because SQL commands are used, an insert command has to be generated and parsed for each row; the data to be inserted is first placed in the bind array buffer (data block buffer) and then written to disk.
    Conventional path load should be used in the following cases:
    --- When the table must be accessed through an index during the load.
    During a direct load the indexes go into 'direct load state' and cannot be used.
    --- When the table must be updated or inserted into during the load, without using an index.
    During a direct load an exclusive write (X) lock is held on the table.
    --- When the load must be performed over SQL*Net.
    --- When loading into a clustered table.
    --- When loading a small number of rows into a large indexed table.
    --- When loading into a large table that has referential or check integrity constraints defined.
    --- When SQL functions must be applied to the data fields during the load.
    2. How direct path load works
    Direct path load has the following characteristics, which make it the appropriate choice when a very large amount of data must be loaded in a short time.
    (1) It does not generate and execute SQL INSERT statements.
    (2) Instead of using the bind array buffer in memory, it builds data blocks in memory in the same format as database blocks, fills them with data, and writes them to disk as they are. (The format of block buffers in memory differs from that of blocks on disk.)
    (3) A lock is taken on the table when the load starts and released when the load ends.
    (4) Data is loaded into blocks above the table's HWM (high water mark). The HWM keeps growing as data is inserted into the table and can only be lowered by TRUNCATE, so data is always loaded into newly allocated, completely empty blocks.
    (5) Redo log files are not needed for instance recovery, even if an instance failure occurs.
    (6) No UNDO information is generated; that is, rollback segments are not used.
    (7) If the OS supports asynchronous I/O, data can be read into multiple buffers and written from buffer to disk concurrently.
    (8) Performance can be improved further with the parallel option.
    3. How to use direct path load, and its options
    The views required for direct path load are contained in the following script, which must have been run as the SYS user beforehand. The script is included in catalog.sql, so it has already been run when the database was created:
    @$ORACLE_HOME/rdbms/admin/catldr.sql
    To use direct path load, simply add DIRECT=TRUE to the usual SQL*Loader command line, for example:
    sqlload username/password control=loadtest.ctl direct=true
    The following additional options and control-file clauses are worth considering when using direct path load.
    (1) ROWS = n
    In a conventional path load the default for ROWS is 64, and a commit occurs each time the specified number of rows has been loaded. Similarly, in a direct path load the ROWS option controls data saves: when a data save occurs, the data loaded so far becomes part of the table and is not lost. Note, however, that in a direct path load the indexes are built only after all the data has been loaded, so even after a data save the indexes remain in direct load state and cannot be used.
    In a direct path load the default for ROWS is unlimited, and if the specified value does not fill a database block it is rounded up so that no partial blocks are created.
    (2) PIECED clause
    This is written in the control file as column_spec datatype_spec PIECED and is valid only for direct path load. It is used when a single value, such as a LONG, is larger than the maximum buffer size, so that the value is loaded in several pieces. The option can be applied only to the last field of the table and cannot be used on an index column. If a problem occurs with the data during the load, only the currently loaded (truncated) piece of the value is written to the bad file, because the earlier pieces have already been written to the datafile and are no longer in the buffer.
    (3) READBUFFERS = n (default is 4)
    If a very large value is not the last field, or is part of an index, the PIECED option cannot be used. In that case the buffer size must be increased, which is done with the READBUFFERS option. The default number of buffers is 4; if the message ORA-2374 (No more slots for read buffer queue) appears during the load, there are not enough buffers and the value should be increased. In general, however, raising this value beyond what is needed only adds system overhead and little performance improvement can be expected.
    4. Index handling with direct path load
    The procedure for building indexes during a direct path load is as follows:
    (1) The data is loaded into the table.
    (2) The key portions of the loaded data are copied to a temporary segment and sorted.
    (3) The pre-existing index is merged with the keys sorted in step (2).
    (4) A new index is built from the result of step (3).
    The pre-existing index, the temporary segment, and the new index all exist until the merge is completely finished.
    (5) The old index and the temporary segment are dropped.
    By contrast, with a conventional path load each row is added to the index as it is inserted. No temporary storage space is needed, but index creation is slower than with direct path load and the index tree ends up less well balanced.
    The temporary space needed for index creation can be estimated with the following formula:
    1.3 * key_storage
    key_storage = (number_of_rows) * (10 + sum_of_column_sizes + number_of_columns)
    Here 1.3 represents the extra space needed for an average sort; use 2 if the data is in completely reverse order and 1 if the data is already fully sorted.
    --- SINGLEROW clause
    Because index creation during a direct path load can use a lot of space, the SINGLEROW option can be used when resources are limited. It is written in the control file in the following form and is valid only for direct path load:
    into tables table_name [sorted indexes...] singlerow
    With this option the index is not built after all the data has been loaded; instead, each row is added to the index as it is loaded.
    The purpose of the option is to avoid the extra space needed for the merge when an index already exists, so it is intended for use with APPEND rather than INSERT. It is recommended when the existing table is at least 20 times larger than the data being loaded.
    A direct path load normally records no rollback information, but with the SINGLEROW option undo information for the inserted index entries is written to the rollback segment.
    However, if an instance failure occurs part-way through, the data is preserved up to the last data save, but the index is still left in direct load state and cannot be used.
    --- Direct Load State
    If a direct path load does not finish successfully, the index is left in direct load state.
    Querying through such an index raises the following error:
    ORA-01502 : index 'SCOTT.DEPT_PK' is in direct load state.
    The specific reasons an index ends up in direct load state are:
    (1) Running out of space while the index is being created.
    (2) The SORTED INDEXES clause was used but the data was not actually sorted.
    In this case all the data is loaded and only the index is left in direct load state.
    (3) An instance failure occurred while the index was being created.
    (4) Duplicate data was loaded into a column with a unique index.
    To check whether a particular index is in direct load state:
    select index_name, status
    from user_indexes
    where table_name = 'TABLE_NAME';
    If an index shows up in direct load state, it must be dropped and recreated before it can be used. Note that during a direct load all indexes are placed in direct load state and are automatically set back to valid once the load finishes successfully.
    --- Pre-sorting (SORTED INDEXES)
    To reduce the time spent sorting for index creation during a direct load, the data can be pre-sorted on the index columns before loading. In that case the SORTED INDEXES option is specified in the control file as follows. The option is valid only for direct path load and can be specified for multiple indexes:
    into table table_name SORTED INDEXES (index_names_with_blank)
    If an index already exists, enough temporary storage is needed to hold the new keys temporarily; if there is no existing index, no such temporary space is needed.
    Because index maintenance during a direct path load takes extra space when loading into a table that already contains data, and the indexes must be recreated if the load does not finish completely successfully, the usual practice is to drop the table's indexes before the direct path load and recreate them after the load has finished.
    5. Recovery
    A direct load does not insert data into the middle of an existing segment; it allocates completely new blocks and makes them part of the segment only after they have been fully written, so redo log information is not required for instance recovery. By default, however, a direct load does write the loaded data to the redo log, for the purpose of media recovery. If the database is not in archive log mode, the redo generated by the direct load is therefore useless, so in NOARCHIVELOG mode the UNRECOVERABLE option should always be used in the control file so that no redo entries are written to the redo log.
    While the data is protected up to the last data save in the event of an instance failure even without redo information, the indexes always end up in direct load state and must be recreated. Extents that were allocated to the target table after the last data save hold data that is not visible to users, but those extents are not released back to free space either.
    6. Integrity Constraints & Triggers
    During a direct path load, NOT NULL, UNIQUE, and PRIMARY KEY constraints remain enabled. NOT NULL is checked at insert time; UNIQUE is checked when the index is built after the load.
    CHECK constraints and referential constraints, however, are disabled when the load starts. To re-enable these disabled constraints after all the data has been loaded, the REENABLE option must be specified in the control file. REENABLE cannot be specified per constraint; once it appears in the control file it applies to all integrity/check constraints. If data that violates a constraint is found while re-enabling, that constraint cannot be enabled and remains in disabled status; to identify the offending rows, add the EXCEPTIONS option to the REENABLE clause:
    reenable [exceptions table_name]
    Here table_name is created by copying $ORACLE_HOME/rdbms/admin/utlexcpt.sql to another directory, changing the table name to something other than exceptions, and running it.
    Insert triggers, like integrity/check constraints, are disabled when the direct load starts and are automatically re-enabled when the load ends. Note, however, that after being re-enabled the triggers do not fire for the data that was loaded.
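    Putting several of these clauses together, a control file for such a load might look roughly like the sketch below (the file, table, index and column names are made up, and DIRECT=TRUE is still given on the sqlldr command line):
    UNRECOVERABLE
    LOAD DATA
    INFILE 'emp.dat'
    APPEND
    INTO TABLE emp
    SORTED INDEXES (emp_pk)
    FIELDS TERMINATED BY ','
    (empno, ename, sal)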

  • Direct-Path with (No)Archivelog and recovery

    Hi,
    In a data warehouse environment, our database is in NOARCHIVELOG mode. When we do an INSERT /*+ APPEND */ on a table with the LOGGING attribute, minimal redo is generated.
    We want to switch the database to ARCHIVELOG mode and change the attribute of the table to NOLOGGING.
    My question is: will instance recovery still work?
    I made a matrix of the effect of a crash during a direct-path insert operation:
    Database mode                   Table mode               Instance recovery                      Media recovery
    NOARCHIVELOG                     LOGGING                        OK                                     NOT OK
    ARCHIVELOG                       LOGGING                        OK                                     OK
    ARCHIVELOG                       NOLOGGING                      OK                                     NOT OK
    Do you agree with this matrix ?
    Regards

    Yes, instance recovery will work regardless of direct-path and nologging usage.
    Here is an example with Oracle XE on Windows (database runs in ARCHIVELOG mode and FORCE_LOGGING is not set):
    c:\tmp>rman target /
    Recovery Manager: Release 11.2.0.2.0 - Production on Jeu. Janv. 12 20:05:32 2012
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    connected to target database: XE (DBID=2642463371)
    RMAN> report unrecoverable;
    using target database control file instead of recovery catalog
    Report of files that need backup due to unrecoverable operations
    File Type of Backup Required Name
    RMAN> exit
    Recovery Manager complete.
    c:\tmp>sqlplus xxx/xxx @nolog.sql
    SQL*Plus: Release 11.2.0.2.0 Production on Jeu. Janv. 12 20:05:48 2012
    Copyright (c) 1982, 2010, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Express Edition Release 11.2.0.2.0 - Production
    SQL> select log_mode, force_logging from v$database;
    LOG_MODE     FOR
    ARCHIVELOG   NO
    SQL> drop table tnl purge;
    Table dropped.
    SQL> create table tnl(x int) nologging tablespace users;
    Table created.
    SQL> insert /*+ APPEND */ into tnl select object_id from all_objects;
    17971 rows created.
    SQL> commit;
    Commit complete.
    SQL> connect / as sysdba
    Connected.
    SQL> startup force
    ORACLE instance started.
    Total System Global Area 1071333376 bytes
    Fixed Size                  1388352 bytes
    Variable Size             658505920 bytes
    Database Buffers          406847488 bytes
    Redo Buffers                4591616 bytes
    Database mounted.
    Database opened.
    SQL> exit
    Disconnected from Oracle Database 11g Express Edition Release 11.2.0.2.0 - Production
    c:\tmp>rman target /
    Recovery Manager: Release 11.2.0.2.0 - Production on Jeu. Janv. 12 20:07:34 2012
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    connected to target database: XE (DBID=2642463371)
    RMAN> report unrecoverable;
    using target database control file instead of recovery catalog
    Report of files that need backup due to unrecoverable operations
    File Type of Backup Required Name
    4    full or incremental     C:\ORACLEXE\APP\ORACLE\ORADATA\XE\USERS.DBF
    RMAN> exit
    Recovery Manager complete.
    c:\tmp>
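    A related check: the datafiles touched by nologging (unrecoverable) operations - the same information RMAN's REPORT UNRECOVERABLE is based on - can also be listed directly from v$datafile (a sketch using standard columns):
    select file#, unrecoverable_change#, unrecoverable_time
    from v$datafile
    where unrecoverable_change# > 0;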

  • APPEND (direct path) - redo size

    I am sure I might be missing something obvious. I am under the impression that DIRECT PATH loads (such as inserts with the APPEND hint) would generate less redo. But I am not sure why I am seeing this...
    Regular Insert (Not direct path):
    ====================
    SQL> insert into c2 Select * from dba_objects where rownum < 301;
    300 rows created.
    Statistics
    10 recursive calls
    74 db block gets
    353 consistent gets
    1 physical reads
    31044 redo size
    821 bytes sent via SQL*Net to client
    752 bytes received via SQL*Net from client
    3 SQL*Net roundtrips to/from client
    2 sorts (memory)
    0 sorts (disk)
    300 rows processed
    Direct Path Insert
    ===========
    SQL> insert /*+ APPEND */ into c2 Select * from dba_objects where rownum < 301;
    300 rows created.
    Statistics
    8 recursive calls
    13 db block gets
    346 consistent gets
    1 physical reads
    39048 redo size
    809 bytes sent via SQL*Net to client
    770 bytes received via SQL*Net from client
    3 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    300 rows processed
    Not sure why I am seeing more redo generated with DIRECT PATH... either I am missing something obvious or I have got DIRECT PATH loads completely wrong. I would really appreciate any help with this.

    Hello,
    Check out this thread:
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:3224814814761
    (In short: when the table is in LOGGING mode and the database is in ARCHIVELOG mode, a direct-path insert logs complete block images, so for a small number of rows it can generate more redo than a conventional insert.)
    Additionally, if you were to switch off logging at the table level before the INSERT, you should see a (further) reduction in redo:
    ALTER TABLE your_table NOLOGGING;
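    If you want to measure the redo from the session itself rather than relying on autotrace, the session's cumulative 'redo size' statistic can be sampled before and after each insert (a sketch; c2 is the poster's table):
    select st.value as redo_size
    from v$statname sn, v$mystat st
    where sn.statistic# = st.statistic#
    and sn.name = 'redo size';
    Run this once before and once after the INSERT (and COMMIT); the difference is the redo generated by that statement.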
