External Table and Direct path load

Hi,
I was just playing with the Oracle SQL*Loader and external table features. A few things I observed are that data loading through the direct path method of SQL*Loader is much faster and takes much less hard disk space than the external table method. Here are the stats I found while loading data:
For Direct Path:
# OF RECORDS     TIME           SOURCE FILE SIZE     DATAFILE SIZE (.dbf)
478849           00:00:43.53    108,638 KB           142,088 KB
957697           00:01:08.81    217,365 KB           258,568 KB
1915393          00:02:54.43    434,729 KB           509,448 KB
For External Table:
# OF RECORDS     TIME           SOURCE FILE SIZE     DATAFILE SIZE (.dbf)
478849           00:02:51.03    108,638 KB           966,408 KB
957697           00:08:05.32    217,365 KB           1,930,248 KB
1915393          00:17:16.31    434,729 KB           3,860,488 KB
1915393          00:23:17.05    434,729 KB           3,927,048 KB    (with PARALLEL)
I used the same files for testing, and all other conditions were the same as well. In my case the datafile is autoextendable, so its size increases automatically as required and hard disk space is consumed accordingly. The question is: is this expected behaviour, and why does the external table method use so much more hard disk space than the direct path method? Performance of the external table load is also very poor compared to the direct path load.
One more thing: while using the external table load with the PARALLEL option it should, ideally, take less time. But what I actually get is more time than without the PARALLEL option.
In both cases I am loading data from the same file into the same table (once using direct path and once using the external table). Before every fresh load I truncate the internal table into which the data was loaded.
Any views?
Deep

Thanks to all for your suggestions.
John, my scripts are as follows:
For the external table:
CREATE TABLE LOG_TBL_LOAD
(COL1 CHAR(20), COL2 CHAR(2), COL3 CHAR(20), COL4 CHAR(400),
 COL5 CHAR(20), COL6 CHAR(400), COL7 CHAR(20), COL8 CHAR(20),
 COL9 CHAR(400), COL10 CHAR(400))
ORGANIZATION EXTERNAL
(TYPE ORACLE_LOADER
 DEFAULT DIRECTORY EXT_TAB_DIR
 ACCESS PARAMETERS
 (RECORDS DELIMITED BY NEWLINE
  FIELDS TERMINATED BY WHITESPACE OPTIONALLY ENCLOSED BY '"' MISSING FIELD VALUES ARE NULL)
 LOCATION ('LOGZ3.DAT'))
REJECT LIMIT 10;
For loading I did:
INSERT INTO LOG_TBL (COL1, COL2, COL3, COL4, COL5, COL6,
COL7, COL8, COL9, COL10)
(SELECT COL1, COL2, COL3, COL4, COL5, COL6, COL7, COL8,
COL9, COL10 FROM LOG_TBL_LOAD);
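(Side note, not from the original post: to get a direct-path, parallel load out of an external table, the usual pattern is an APPEND hint plus parallel DML. A minimal sketch using the table names above:)
-- Sketch only: direct-path, parallel insert from the external table.
-- Assumes the session is allowed to run parallel DML.
ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ APPEND PARALLEL(t, 4) */ INTO LOG_TBL t
SELECT /*+ PARALLEL(s, 4) */ COL1, COL2, COL3, COL4, COL5, COL6, COL7, COL8, COL9, COL10
FROM LOG_TBL_LOAD s;
COMMIT;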
For direct path my control file is like this:
OPTIONS (DIRECT=TRUE)
LOAD DATA
INFILE 'F:\DATAFILES\LOGZ3.DAT' "str '\n'"
INTO TABLE LOG_TBL
APPEND
FIELDS TERMINATED BY WHITESPACE OPTIONALLY ENCLOSED BY '"'
(COL1 CHAR(20),
COL2 CHAR(2),
COL3 CHAR(20),
COL4 CHAR(400),
COL5 CHAR(20),
COL6 CHAR(400),
COL7 CHAR(20),
COL8 CHAR(20),
COL9 CHAR(400),
COL10 CHAR(400))
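(Side note, not from the original post: with this control file the load would typically be started along these lines; the user name, password and file names below are placeholders.)
sqlldr userid=scott/tiger control=logz3.ctl log=logz3.log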
And yes, I have used the same table in both situations. After each load I truncate my table, LOG_TBL. I used the same source file, LOGZ3.DAT.
My tablespace USERS is locally managed.
thanks

Similar Messages

  • Direct Path Loading Issues with Global Temporary Tables - OCI & OCILib

    I am writing some code to import data into a warehouse from a CPU grid which computes risk data. Due to the fact a computing grid is used there will be many clients which can load the data concurrently and at any point in time.
    Currently the import uses Binding in OCCI and chunking with a prepared statement to import the data into a global temporary table in a staging area after which a stored procedure is called within the same session which will process the data and load the data into a star schema.
    The GTT has the advantage that if any clients have issues no dirty data will be left and each client only sees their own instance of the data.
    I have been looking at using direct path loading to increase the performance of the load and have written some OCI code to perform the same task. I have managed to import the data into a regular heap-based table using the OCI direct path APIs. However, when I try to use the same code to import against a global temporary table I get an OCI error (ORA-00600: internal error code, arguments: [6979], [16], [1], [1318528], [], [], [], [], [], [], [], [])
    I get the error when the function OCIDirPathPrepare is executed. The same issue occurs in both OCI and OCILib.
    Is it not possible to use direct path loading against a global temporary table? You can use the /*+ APPEND */ hint and load global temporary tables this way from tools like SQL Developer / Toad, which is surely telling the SQL engine to use direct path?
    Looking at the view USER_OBJECTS I can see that for a global temporary table the DATA_OBJECT_ID is null. Does this mean that it is impossible to use a direct path load into global temporary tables?
    Any ideas / suggestions would be really appreciated. If this means redesigning the application then I would appreciate suggestions that would allow many clients to write quickly in parallel. If this means creating a new partition in a heap table for each writer and direct-path loading into that table, then so be it.
    Thanks
    H

    Replying to my own message in case anyone else is interested.
    I have now managed to successfully load data using direct path into a global temporary table with OCI. There appears to be no reason why this approach will not work.
    I loaded data into the temporary table and then issued a select count(*) on the table from within the session and from a new session. The results were as expected.
    The reason for the ORA-00600 error was that I had enabled table-level parallel loading,
    i.e.
    OCIAttrSet((dvoid *) context, (ub4) OCI_HTYPE_DIRPATH_CTX, (ub1) 1, (ub4) 0, (ub4) OCI_ATTR_DIRPATH_PARALLEL, errhp)
    When loading a global temporary table, the OCI_ATTR_DIRPATH_PARALLEL attribute needs to be zero.
    This makes sense: the temporary table does not have any partitions, so it would not be possible to write in parallel to multiple partitions.

  • Regarding Direct Load or Direct Path Load

    Why are direct load and direct path load faster?
    In which situations are direct load and direct path load faster?

    >
    Why are direct load and direct path load faster?
    In which situations are direct load and direct path load faster?
    >
    See About Direct-Path INSERT in the DBA doc
    http://docs.oracle.com/cd/E11882_01/server.112/e17120/tables004.htm#i1009100
    >
    About Direct-Path INSERT
    Oracle Database inserts data into a table in one of two ways:
    •During conventional INSERT operations, the database reuses free space in the table, interleaving newly inserted data with existing data. During such operations, the database also maintains referential integrity constraints.
    •During direct-path INSERT operations, the database appends the inserted data after existing data in the table. Data is written directly into datafiles, bypassing the buffer cache. Free space in the table is not reused, and referential integrity constraints are ignored. Direct-path INSERT can perform significantly better than conventional insert.
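    (To make the contrast concrete, here is a minimal sketch, not part of the quoted documentation; the table names are made up. The APPEND hint requests a direct-path insert, and the loaded segment cannot be queried in the same transaction until you commit.)
    -- Conventional insert: goes through the buffer cache and reuses free space in the table
    INSERT INTO sales_hist SELECT * FROM sales_stage;
    -- Direct-path insert: blocks are formatted in memory and written above the high-water mark
    INSERT /*+ APPEND */ INTO sales_hist SELECT * FROM sales_stage;
    COMMIT;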

  • Using functions with direct path load

    When loading data with sqlldr, is it possible to use functions (to alter the data) when loading in a direct path? Thanks

    >
    When loading data with sqlldr, is it possible to use functions (to alter the data) when loading in a direct path?
    >
    Did you try it? Which version do you use?
    How can I speed up the load?
    Did you study chapter 11, Conventional and Direct Path Loads, in the Utilities guide?
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28319/ldr_modes.htm#i1011553

  • Concept and Usage of DIRECT PATH LOAD

    Product: ORACLE SERVER
    Date written: 1998-11-27
    When a very large amount of data must be loaded in a short time, a direct path load can be used. This note explains the concept of the direct path load in detail, how to use it, and what to consider when using it.
    1. Conventional path load
    The usual SQL*Loader method inserts the data from the datafile into an existing table using the SQL INSERT command. Because SQL commands are used, an INSERT statement must be generated and parsed for each piece of data; the data to be inserted is first placed in a bind array buffer (data block buffer) and then written to disk.
    A conventional path load should be used in the following cases:
    --- When the table must be accessed through an index during the load
    (during a direct load the indexes are in 'direct load state' and cannot be used);
    --- When the table must be updated or inserted into during the load without using an index
    (during a direct load an exclusive write (X) lock is taken on the table);
    --- When the load must be performed over SQL*NET;
    --- When loading into a clustered table;
    --- When loading a small number of rows into a large indexed table;
    --- When loading into a large table that has referential or check integrity constraints defined;
    --- When SQL functions must be applied to data fields during the load.
    2. How a direct path load works
    Direct path loads have the following characteristics, which make them the right choice when a very large amount of data must be loaded quickly.
    (1) No SQL INSERT statements are generated or executed.
    (2) Instead of using the bind array buffer in memory, data blocks with the same format as database blocks are built in memory, filled with data, and written to disk as they are. (The format of block buffers in memory differs from that of blocks on disk.)
    (3) A lock is taken on the table when the load starts and released when it finishes.
    (4) Data is loaded into blocks above the table's HWM (high water mark). The HWM keeps growing as data is inserted into the table and can only be lowered by a truncate, so data is always written into completely empty, newly allocated blocks.
    (5) Redo log files are not needed for recovery from an instance failure.
    (6) No UNDO information is generated; that is, rollback segments are not used.
    (7) If the OS supports asynchronous I/O, multiple buffers can be used so that data is read into one buffer while another is written to disk.
    (8) Performance can be improved further with the parallel option.
    3. direct path load의 사용방법 및 options
    direct path load를 사용하기 위한 view들은 다음 script에 포함어 있으며,
    미리 sys user로 수행되어야 한다. 단 이 script는 catalog.sql에 포함되어 있어,
    db 구성 시에 이미 수행되어진다.
    @$ORACLE_HOME/rdbms/admin/catldr.sql
    direct path load를 사용하기 위해서는 일반적인 sqlload 명령문에 DIRECT=TRUE를
    포함시키기만 하면 된다. 다음과 같이 기술하면 된다.
    sqlload username/password control=loadtest.ctl direct=true
    이 direct path load를 사용 시에 고려할 만한 추가적인 option 및 control file
    내에 기술 가능한 clause들을 살펴본다.
    (1) ROWS = n
    conventional path load에서 rows는 default가 64이며, rows에 지정된 갯수
    만큼의 row가 load되면 commit이 발생한다. 이와 비슷하게 direct load
    path에서는 rows option을 이용하여 data save를 이루며, data save가 발생하면
    data는 기존 table에 포함되어 입력된 data를 잃지 않게 된다.
    단 이 때 direct path load는 모든 data가 load된 다음에야 index가
    구성되므로 data save가 발생하여도 index는 여전히 direct load state로
    사용하지 못하게 된다.
    direct path load에서 이 rows의 default값은 unlimited이며, 지정된 값이
    database block을 채우지 못하면 block을 완전히 채우는 값으로 올림하여,
    partial block이 생성되지 않도록 한다.
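    (A hypothetical command line illustrating the ROWS option for data saves; the user, control file name and value are placeholders, not from the original note.)
    sqlload scott/tiger control=loadtest.ctl direct=true rows=100000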
    (2) PIECED clause
    This is written in the control file as column_spec datatype_spec PIECED and is valid only for direct path loads. It is used when a single value, such as a LONG, is larger than the maximum buffer size, so that one value is loaded in several pieces. The option can only be applied to the last field of the table and cannot be used on an indexed column.
    If a problem is found in the data during the load, only the truncated portion of the value currently being loaded is written to the bad file, because the earlier pieces have already been written to the datafile and are no longer in the buffer.
    (3) READBUFFERS = n (default is 4)
    If a very large value is not the last field, or is part of an index, the PIECED option cannot be used. In that case the buffer size must be increased, which is done with the READBUFFERS option. The default number of buffers is 4; if the message ORA-2374 (No more slots for read buffer queue) appears during the load, there are not enough buffers and the value should be increased.
    In general, though, raising this value beyond what is needed only adds system overhead and brings little performance improvement.
    4. Index handling in a direct path load
    The procedure for building indexes during a direct path load is as follows:
    (1) The data is loaded into the table.
    (2) The key portions of the loaded data are copied to a temporary segment and sorted.
    (3) The existing index is merged with the keys sorted in (2).
    (4) A new index is created from (3). The existing index, the temporary segment and the new index all exist until the merge is completely finished.
    (5) The old index and the temporary segment are dropped.
    By contrast, in a conventional path load a row is added to the index each time a row is inserted. No temporary storage space is needed, but compared with a direct path load index creation is slower and the index tree ends up less well balanced.
    The temporary space needed to build an index can be estimated with the following formula:
    1.3 * key_storage
    key_storage = (number_of_rows) * (10 + sum_of_column_sizes + number_of_columns)
    Here 1.3 is the average extra space needed for the sort; use 2 if the data is completely in reverse order and 1 if it is already fully sorted.
    --- SINGLEROW clause
    Because index creation during a direct path load can take a lot of space, the SINGLEROW option can be used when resources are scarce. It is written in the control file in the following form and is available only for direct path loads:
    into table table_name [sorted indexes ...] singlerow
    With this option the index is not built after all the data has been loaded; instead each row is added to the index as it is loaded.
    Since the point of this option is to avoid the extra space needed for the merge when an index already exists, it is meant to be used with APPEND rather than INSERT. It is recommended when the existing table is more than about 20 times larger than the data being loaded.
    A direct path load does not record rollback information, but when the SINGLEROW option is used, undo information for the inserted index entries is written to the rollback segments.
    Even so, if an instance failure occurs in the middle, the data is preserved up to the last data save but the index is still left in direct load state and cannot be used.
    --- Direct load state
    If a direct path load does not finish successfully, the index is left in direct load state. Trying to query through such an index raises:
    ORA-01502 : index 'SCOTT.DEPT_PK' is in direct load state.
    In detail, an index ends up in direct load state when:
    (1) space runs out while the index is being built;
    (2) the SORTED INDEXES clause was used but the data was not actually sorted
    (in this case all the data is loaded and only the index is left in direct load state);
    (3) an instance failure occurs while the index is being built;
    (4) duplicate data is loaded into a column that has a unique index.
    To check whether a particular index is in direct load state:
    select index_name, status
    from user_indexes
    where table_name = 'TABLE_NAME';
    If an index shows up as being in direct load state, it must be dropped and recreated before it can be used. Note that during a direct load all indexes go into direct load state and are automatically set back to valid when the load finishes successfully.
    --- Pre-sorting (SORTED INDEXES)
    To cut the time spent sorting for index creation during a direct load, the data can be loaded already sorted on the index columns. In that case the SORTED INDEXES option is specified in the control file as shown below. It is valid only for direct path loads and can name more than one index:
    into table table_name SORTED INDEXES (index_names_with_blank)
    If an index already exists, enough temporary storage is needed to hold the new keys temporarily; if there is no existing index, even that temporary space is unnecessary.
    Because building indexes during a direct path load into a table that already contains data takes extra space, and because the indexes must be recreated if the load does not complete successfully, it is common practice to drop the table's indexes before the direct path load and recreate them after the load has finished.
    5. Recovery
    A direct load does not insert data into the middle of existing segments; it allocates completely new blocks and adds them to the segment only after they have been fully written, so redo log information is not needed in case of an instance failure. By default, however, a direct load does record the loaded data in the redo log, for the sake of media recovery. If the database is not in archive log mode this redo is useless, so in NOARCHIVELOG mode you should always specify the UNRECOVERABLE option in the control file so that no redo entries are written for the load.
    Whereas the data is protected up to the last data save even without redo information after an instance failure, the indexes always end up in direct load state and must be recreated. Also, the extents allocated to the table after the last data save are not released as free space, even though the data loaded into them is not visible to users.
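    (A minimal control file sketch showing where the UNRECOVERABLE keyword goes; the file, table and column names below are placeholders, not part of the original note.)
    UNRECOVERABLE
    LOAD DATA
    INFILE 'loadtest.dat'
    APPEND
    INTO TABLE emp_load
    FIELDS TERMINATED BY ','
    (empno, ename, deptno)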
    6. Integrity Constraints & Triggers
    During a direct path load the NOT NULL, UNIQUE and PRIMARY KEY constraints stay enabled. NOT NULL is checked at insert time; UNIQUE is checked when the index is built after the load.
    CHECK constraints and referential constraints, however, are disabled when the load starts. To re-enable these constraints after all the data has been loaded, the REENABLE option must be given in the control file. REENABLE cannot be specified per constraint; once specified in the control file it applies to all integrity and check constraints. If data violating a constraint is found while it is being re-enabled, that constraint cannot be enabled and remains in disabled status; to identify the offending data, add the EXCEPTIONS option to the REENABLE clause as follows:
    reenable [exceptions table_name]
    Here table_name is created by copying $ORACLE_HOME/rdbms/admin/utlexcpt.sql to another directory, changing the table name to something other than EXCEPTIONS, and running it.
    Insert triggers, like the integrity and check constraints, are disabled when the direct load starts and automatically re-enabled when it finishes. Note that even after they are re-enabled, the triggers do not fire for the data that was loaded.

  • SQL Loader direct path loads and unusable indexes

    Sorry about all the questions. I am researching several issues. I am reading the Utilities document. It says that in certain circumstances indexes will become unusable. I have some questions about my scenario.
    1. tables partitioned by range
    2. local indexes
    3. all tables have 1 primary key and other indexes are non-unique
    4. all sql loads will go into the most recent partition
    5. users will be querying tables while sql loader is occurring
    6. only one sql loader session will run per table
    7. no foreign keys, triggers, or other constrants other than primary keys.
    The docs are not clear. Do I have a concern about unusable indexes with direct path loads? Will indexes function while the SQL*Loader direct path load is occurring? (This I can't test since I have small data files now and they load fast, but I will have larger ones in production.)
    My understanding is that external tables using INSERT APPEND are exactly the same as a SQL*Loader direct path load. Is this true?

    If you don't have anything productive to say, how about you don't post at all? You have made ignorant posts like this for years.
    As far as reading the docs, what do you think "the docs are not clear" means? By the docs I am referring to the Utilities document.
    As far as the version number, it's 10.2 and I forgot that. However, it does not appear that SQL*Loader has really changed all that much over the last few versions.
    Finally, I plan on testing it out and it's more than a 2-minute test. I wanted to make sure I don't miss anything in my tests.
    don't respond to any threads or posts I make from now on.

  • Request: PL/SQL, External Table and SQL Loader

    I see lately Questions asked in SQL and PL/SQL forum regarding SQL Loader and External Table are being moved to {forum:id=732}.
    Being a PL/SQL developer for some time now, I feel External Table (and, if I may, SQL Loader and DBMS_DATAPUMP) are very much an integral part of PL/SQL development, and questions related to these topics are well suited to the SQL and PL/SQL forum. Even in the SQL and PL/SQL FAQ we have exclusive content that discusses these topics {message:id=9360007}
    So i would like to request the moderators to consider not moving such questions out of the SQL and PL/SQL forum.
    Thanks,
    Karthick.

    Karthick_Arp wrote:
    I see lately Questions asked in SQL and PL/SQL forum regarding SQL Loader and External Table are being moved to {forum:id=732}.
    Being a PL/SQL developer for some time now, I feel External Table (and, if I may, SQL Loader and DBMS_DATAPUMP) are very much an integral part of PL/SQL development, and questions related to these topics are well suited to the SQL and PL/SQL forum. Even in the SQL and PL/SQL FAQ we have exclusive content that discusses these topics {message:id=9360007}
    So i would like to request the moderators to consider not moving such questions out of the SQL and PL/SQL forum.
    Thanks,
    Karthick.
    Not sure which moderators are moving them... cos it ain't me. I'm quite happy to leave those there.

  • SQL LOADER, EXTERNAL TABLE and ODBC DATA SOURCE

    Hello,
    Can anybody help with loading data from a dBase file (.dbt) to an Oracle 10g table?
    I tried yesterday with SQL*Loader, an external table, and an ODBC data source.
    Why are all of these utilities still failing to solve my problem?
    Is there an efficient way to reach this goal?
    Thanks in advance

    Export the dBase data file to a text file;
    then you have the choice of using either SQL*Loader or the external table option.
    Regards
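    (A minimal SQL*Loader sketch for such an exported text file; the file, table and column names are purely illustrative, not from this thread.)
    -- export.ctl (illustrative only), assuming a comma-separated text export
    LOAD DATA
    INFILE 'export.txt'
    INTO TABLE customers
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    (cust_id, cust_name, city)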

  • Insert /*+ Append */ and direct-path INSERT

    Hi Guys
    Does the insert /*+ APPEND */ hint cause Oracle 10g to use a direct-path INSERT?
    And if it does, is insert /*+ APPEND */ subject to the same restrictions as direct path, such as "The target table cannot have any triggers or referential integrity constraints defined on it"?
    Thanks

    Dear,
    Below is a simple example showing the effect of an existing trigger on the APPEND hint.
    mhouri@mhouri> select * from v$version where rownum=1;
    BANNER                                                                         
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production         
    mhouri@mhouri> create table b as select * from all_objects where 1 = 2;
    Table created.
    mhouri@mhouri> insert /*+ append */ into b
      2  select * from all_objects;
    70986 rows created.
    mhouri@mhouri> select * from b;
    select * from b
    ERROR at line 1:
    ORA-12838: cannot read/modify an object after modifying it in parallel
    mhouri@mhouri> rollback;
    Rollback complete.
    The direct path insert did take place, since I cannot select from the table before I commit.
    mhouri@mhouri> create trigger b_trg before insert on b
      2  for each row
      3  begin
      4  null;
      5  end;
      6  /
    Trigger created.
    mhouri@mhouri> insert /*+ append */ into b
      2  select * from all_objects;
    70987 rows created.
    mhouri@mhouri> select count(1) from b;
      COUNT(1)                                                                     
         70987
    While in the presence of this trigger on the table, the APPEND hint has been silently ignored by Oracle. The fact that I can select from the table immediately after the insert has finished is an indication that the table was not loaded using a direct path insert.
    Best Regards
    Mohamed Houri

  • A lot of messages "direct path write temp" and "direct path read temp"

    Hello, all
    Please help me understand what is going on in my system (DB: Oracle Database 11.2.0.3, OS: Windows 2008 R2).
    In an AWR report (1 hour) I see the following:
    Foreground Wait Events
                                                                 Avg               
                                            %Time Total Wait    wait    Waits   % DB
    Event                             Waits -outs   Time (s)    (ms)     /txn   time
    direct path write temp          132,627     0      1,056       8      0.8   21.7
    direct path read temp           308,969     0        565       2      2.0   11.6
    log file sync                    19,228     0        241      13      0.1    5.0
    direct path write                17,698     0        135       8      0.1    2.8
    db file sequential read          21,149     0         94       4      0.1    1.9
    SQL*Net message from dblin           59     0          5      86      0.0     .1
    Segments by Direct Physical Reads         DB/Inst: SGRE/sgre  Snaps: 1039-1040
    -> Total Direct Physical Reads:         392,273
    -> Captured Segments account for   94.7% of Total
               Tablespace                      Subobject  Obj.        Direct       
    Owner         Name    Object Name            Name     Type         Reads  %Total
    ** MISSING TEMP       ** TRANSIENT: 437734 MISSING ** UNDEF       38,290    9.76
    DBSNMP     TEMP       MGMT_TEMPT_SQL                  TABLE       38,242    9.75
    ** MISSING TEMP       ** TRANSIENT: 438784 MISSING ** UNDEF       37,790    9.63
    ** MISSING TEMP       ** TRANSIENT: 437312 MISSING ** UNDEF       37,661    9.60
    ** MISSING TEMP       ** TRANSIENT: 439257 MISSING ** UNDEF       37,477    9.55
    Some selects:
    SELECT   S.sid,
             T.blocks * TBS.block_size / 1024 / 1024 mb_used, T.tablespace, T.SEGTYPE
    FROM     v$sort_usage T, v$session S, v$sqlarea Q, dba_tablespaces TBS
    WHERE    T.session_addr = S.saddr
    AND      T.sqladdr = Q.address (+)
    AND      T.tablespace = TBS.tablespace_name
    AND      S.sid = 732
    ORDER BY S.username, S.sid;
    SID     MB_USED     TABLESPACE     SEGTYPE
    732     2          TEMP          LOB_DATA
    732     1          TEMP          INDEX
    732     1          TEMP          DATA
    732     1          TEMP          INDEX
    732     1          TEMP          DATA
    732     1          TEMP          INDEX
    732     1          TEMP          DATA
    732     1          TEMP          INDEX
    732     1          TEMP          INDEX
    732     1          TEMP          DATA
    732     1          TEMP          INDEX
    732     1          TEMP          INDEX
    732     1          TEMP          DATA
    732     1          TEMP          INDEX
    732     1          TEMP          INDEX
    732     1          TEMP          DATA
    732     1          TEMP          INDEX
    732     1          TEMP          INDEX
    732     1          TEMP          DATA
    732     1          TEMP          INDEX
    732     1          TEMP          INDEX
    732     1          TEMP          DATA
    732     1          TEMP          INDEX
    732     1          TEMP          INDEX
    732     1          TEMP          DATA
    732     1          TEMP          INDEX
    732     1          TEMP          INDEX
    732     1          TEMP          DATA
    732     1          TEMP          INDEX
    732     1          TEMP          LOB_INDEX
    select st.sid, sn.name, st.VALUE
    from V$statname sn, v$sesstat st
    where st.STATISTIC# = sn.STATISTIC#
    and (sn.name like 'sorts%')
    and st.sid in (select sid from v$session_wait where event like '%direct path write temp%')
    order by st.sid
    SID     NAME          VALUE
    732     sorts (memory)     591408
    732     sorts (rows)     102126
    732     sorts (disk)     0
    Why do I not see any disk sorts? If TEMP is not being used for sort operations, then for which operations is it being used?
    How can I see that? How can I decrease direct path write temp without tuning the SQL?
    Additional information:
    PGA_AGGREGATE_TARGET is set to a big value - 6GB (3GB would be enough according to the advisor recommendation)
    Please, help.
    Regards, user12103911.

    user12103911 wrote:
    SELECT optimal_count, round(optimal_count*100/total, 2) optimal_perc,
    onepass_count, round(onepass_count*100/total, 2) onepass_perc,
    multipass_count, round(multipass_count*100/total, 2) multipass_perc
    FROM
    (SELECT decode(sum(total_executions), 0, 1, sum(total_executions)) total,
    sum(OPTIMAL_EXECUTIONS) optimal_count,
    sum(ONEPASS_EXECUTIONS) onepass_count,
    sum(MULTIPASSES_EXECUTIONS) multipass_count
    FROM   v$sql_workarea_histogram); 
    OPTIMAL_COUNT OPTIMAL_PERC  ONEPASS_COUNT ONEPASS_PERC  MULTIPASS_COUNT MULTIPASS_PERC
    13150685      100           0             0             0               0
    No n-pass executions.
    That's a pretty convincing display - given that your AWR manages to NAME an object that is in the TEMP tablespace, an obvious point to follow up is global temporary tables. If you have any declared as "on commit preserve" (duration = 'SYS$SESSION') then you could have code which does "insert /*+ append */ into gtt", and if you then query this in parallel (or - since you're on 11.2.0.3 - the data volume is large enough) Oracle could choose to do direct path writes to load the GTT and direct path reads to read it back.
    Regards
    Jonathan Lewis
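    (A minimal sketch of the pattern Jonathan describes, with made-up object names, in case it helps anyone reproduce it:)
    -- Hypothetical illustration only; gtt_stage and some_source are invented names.
    CREATE GLOBAL TEMPORARY TABLE gtt_stage
    ON COMMIT PRESERVE ROWS
    AS SELECT * FROM some_source WHERE 1 = 0;
    INSERT /*+ APPEND */ INTO gtt_stage
    SELECT * FROM some_source;      -- direct path writes go to the TEMP tablespace
    COMMIT;                         -- required before the same session can query the segment
    SELECT /*+ PARALLEL(g, 4) */ COUNT(*)
    FROM gtt_stage g;               -- may come back as direct path reads from TEMP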

  • SQLLoader, direct path load issue

    Hi all,
    I have a sqlldr control file which has couple of columns like,
    my_column_1 ,
    my_column_2 "decode(:my_column_1,'ONE','AAA','TWO','BBB', :my_column_1)"
    The table I am loading to is in user X and I am running the load from user Y. Everything works fine with conventional path load (not direct path) as grants are made for the table to user Y.
    When I load the data with same control file with direct path, I get an error ,
    01031 - insufficient privileges
    Does this have anything to do with the syntax I have used in the control file, or is it a privilege issue? If it is a privilege issue, which privilege is that?
    I did following tests,
    1) Load is run with conventional path load, from user Y and the decode statement is in control file - Load works
    2) Load is run with direct path load, from user Y and decode statement is in control file - Load fails with above mentioned error
    3) Load is run with direct path load, from user Y and decode IS REMOVED from the control file - Load works (!!!)
    What can be the conclusion? Way out ?
    Thanks and Regards

    >
    What can be the conclusion? Way out?
    >
    Adi is right ... in conventional mode, sqlldr will go through the SQL engine to insert data in your table (and the SQL engine knows what DECODE is); in direct path mode, sqlldr formats database blocks directly ... so it cannot do SQL functions.
    A way out ... load my_column_1 into the table the way you do and put a view on top having the extra my_column_2 with the decode on my_column_1.
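    (A rough sketch of that workaround, using the column names from the post and a made-up table and view name:)
    -- Hypothetical names; only my_column_1 is physically loaded.
    CREATE OR REPLACE VIEW my_table_v AS
    SELECT my_column_1,
           DECODE(my_column_1, 'ONE', 'AAA', 'TWO', 'BBB', my_column_1) AS my_column_2
    FROM   my_table;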

  • Direct Path Loading

    Sir,
    I tried the Oracle utility SQLLDR. I created a table named
    TEST1 (id NUMBER PRIMARY KEY, var VARCHAR2(50)).
    I correctly loaded data into TEST1 using a normal load (not direct); the bad file entries are filled with the duplicate records.
    When a direct path load is used, all data is loaded into TEST1, violating the PRIMARY KEY constraint.
    After this I tried to insert data into TEST1 and got an error showing that the table is in an unusable state.
    When direct path loading is used again, it also shows the same error as above.
    Please give me valuable information about the working of Direct Path Loading and data insertion.
    Regards Manojvilayil

    OCI_ATTR_DATA_SIZE in OCI Programmer's guide 9.2:
    Steps used in OCI defining - example: OCIAttrGet() into <undefined>
    Attributes of Type Attributes: ub4
    Attributes of Collection Types: ub2
    Attributes belonging to Columns of Tables or Views: ub2
    Attributes belonging to Arguments/Results: ub2
    Retrieving Column Data Types For a Table - example: OCIAttrGet() into ub4
    Describing with Character Length Semantics - example: OCIAttrGet() into ub4
    Creating a Parameter Descriptor for OCIType Calls: ub4
    Handle and Descriptor Attributes, Example on page A-72: OCIAttrSet from <undefined> with length 0
    Handle and Descriptor Attributes, Direct Path Column Parameter Attributes, Attribute Data Type: "ub2 */ub2 *".
    OCI_ATTR_DATA_SIZE in OCI Programmer's guide 12c R1:
    Implicit describe of a result - example: OCIAttrGet() into ub2
    Steps used in OCI defining - example: OCIAttrGet() into <undefined>
    Attributes of Type Attributes: ub2
    Attributes of Collection Types: ub2
    Attributes of Columns of Tables or Views: ub2
    Attributes of Arguments and Results: ub2
    Example 6–2: OCIAttrGet() into ub2
    Example 6–6: OCIAttrGet() into ub2
    Creating a Parameter Descriptor for OCIType Calls: ub2
    Handle and Descriptor Attributes, Example on page A-78: OCIAttrSet from <undefined> with length 0
    Handle and Descriptor Attributes, Direct Path Column Parameter Attributes, Attribute Data Type: "ub4 */ub4 *".
    Apparently the 12c manual is already adapted to ub4.
    Maybe it is a good idea to be careful with all length definitions?

  • SQL*Loader problem with direct path load

    Hi all,
    Its on Oracle 9.2
    I have a sqlldr control file which has couple of columns like,
    my_column_1 ,
    my_column_2 "decode(:my_column_1,'ONE','AAA','TWO','BBB', :my_column_1)"
    The table I am loading to is in user X and I am running the load from
    user Y. Everything works fine with conventional path load (not direct
    path) as grants are made for the table to user Y.
    When I load the data with same control file with direct path, I get an
    error ,
    01031 - insufficient privileges
    Is this anything to do with the syntax I have used in the control file,
    or is it a privilege issue? If it is a privilege issue, which privilege is
    that?
    I did following tests,
    1) Load is run with conventional path load, from user Y and the decode
    statement is in control file - Load works
    2) Load is run with direct path load, from user Y and decode statement
    is in control file - Load fails with above mentioned error
    3) Load is run with direct path load, from user Y and decode IS REMOVED
    from the control file - Load works (!!!)
    What can be the conclusion? Way out ?
    Thanks and Regards

    You need to grant
    grant lock any table to userY;
    For more information see MetaLink Note 1082550.6

  • Direct path load question

    I don't understand why during a direct path load insert an exclusive lock on the table is required.
    Suppose I have a table T without any indexes or constraints, why can't I update the table in a session and bulk load it in another session above the HWM ?
    Thanks in advance

    I don't understand why during a direct path load insert an exclusive lock on the table is required.
    direct path write
    During Direct Path operations, the data is asynchronously written to the database files. At some stage the session needs to make sure that all outstanding asynchronous I/O have been completed to disk. This can also happen if, during a direct write, no more slots are available to store outstanding load requests (a load request could consist of multiple I/Os).
    http://docs.oracle.com/cd/B28359_01/server.111/b28320/waitevents003.htm
    If you wish to get more details then check the link below, where Tanel describes it beautifully. Tanel is talking about the "asynch descriptor resize" wait event in Oracle 11gR2, but since we are discussing direct path write, I think this may also be relevant and of interest to you.
    http://blog.tanelpoder.com/2010/11/23/asynch-descriptor-resize-wait-event-in-oracle/
    Regards
    Girish Sharma

  • Cannot get Oracle sequence to be used with SQL Loader direct path load.

    I attempted to load approximately 3 million rows from a flat file into a 10g database using a SQL*Loader direct path load. In order to generate a primary key for each of these rows I used a simple SQL expression which gets the next number of an Oracle sequence and assigned that to my primary key column inside the control file. The strange thing is that no primary key was generated for any of these rows, as I can see the sequence was not advanced. Furthermore, when I try to run a query to find the count of those rows which have this column NOT NULL, I get the total of all the rows even though on inspection this column is empty in each of them.
    As far as I know, in the control file for a direct path load I am able to use a SQL expression that returns simple scalar data, which a NEXTVAL on a sequence does. (O'Reilly, Oracle SQL*Loader: The Definitive Guide, p. 184)
    Does anybody know why my primary key was not generated, and furthermore why does this column appear in a query to be NOT NULL when in fact it has nothing in it?

    Daniel,
    I appreciate your response a lot.
    Now a follow-on. You say that SQL*Loader is the fastest way to get data into Oracle Spatial even though it uses the conventional path. How does SQL*Loader do this? Doesn't it use OCI?
    I just can't help but think that converting my data to SQL*Loader format, and then having it parse it and send it to the database in some form, must be slower than me just sending whatever SQL*Loader sends to the DB myself using OCI. Am I missing something?
    The SQL*Loader documentation seems to suggest that SQL*Loader just turns the input into a lot of INSERT statements. If so, can't I just do this?
    My performance with OCI and INSERT statements isn't very good, but I believe I am doing one transaction per insert. Might I be as well off to concentrate on fine-tuning my OCI-based code using INSERTs?
    I will actually do some time tests myself, but I would appreciate your opinion.
    Once again thanks for the great info you have provided.
