Sqlldr unrecoverable option?

hi there-
in documentation around the web for sqlldr, i see mention of an "unrecoverable" option that seems to turn off logging to speed the load.
here is a sample web page: [http://www.oracleutilities.com/OSUtil/sqlldr.html]
this is what it says:
bq. Use unrecoverable. The unrecoverable option (unrecoverable load data) disables the writing of the data to the redo logs. This option is available for direct path loads only.
However, the actual documentation for sqlldr doesn't seem to mention that option, for example:
http://download.oracle.com/docs/cd/B10501_01/server.920/a96652/ch04.htm#1004683
I would like to use the option. anybody know how? (i realize that i could 'alter table nologging', but would prefer to do it as part of the sqlldr command, if possible).
thanks,
steve
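
For what it's worth, the answer that emerges later in this thread is that UNRECOVERABLE is a control-file keyword, not a command-line parameter: it goes before LOAD DATA and only has an effect together with direct=true. A minimal sketch, with made-up table and file names:

```
OPTIONS (DIRECT=TRUE)
UNRECOVERABLE
LOAD DATA
INFILE 'mydata.csv'
APPEND
INTO TABLE my_table
FIELDS TERMINATED BY ','
(col1, col2)
```

Invoked as usual, e.g. sqlldr userid=scott/tiger control=load.ctl.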

yes, thank you, i know about direct=true.
do i also need to turn off logging, or is it effectively already turned off in this mode?
by default, this mode does one commit at the end. if i insert a million rows, will it have made a huge log file?
thanks,
steve

Similar Messages

  • FORCE_LOGGING=YES and Unrecoverable option of SQL LOADER

    Hi All,
    I have a small query regarding FORCE_LOGGING=YES and the Unrecoverable option of SQL*Loader.
    Our database is set with the FORCE_LOGGING=YES option.
    We have a huge amount of transactions happening through SQL*Loader, for which we have set the options direct=true and unrecoverable.
    The Unrecoverable option shouldn't generate redo logs while loading, but we have FORCE_LOGGING=YES.
    Which option will take effect? Will it still generate redo logs?
    Thanks for your help,
    Manoj

    Yes - it will still generate redo. Like I wrote, setting FORCE_LOGGING = TRUE for the database utterly guarantees there will be no nologging operations.
    In fact, only direct path inserts have the option of being nologging in the first place. Non direct path (aka conventional) inserts always generate redo anyway.
    There is a lot of confusion around this area of Oracle, and I don't think it's documented as well as it could be. If you're still a little unsure, asktom.oracle.com is an excellent website where you can find further explanation (search for 'nologging').
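
    The answer above can be verified directly from the data dictionary; a hedged sketch (MY_TABLE is a placeholder for your target table):

```sql
-- Does the database force redo generation regardless of NOLOGGING/UNRECOVERABLE?
SELECT force_logging FROM v$database;

-- Is the target table itself set to NOLOGGING?
SELECT table_name, logging FROM user_tables WHERE table_name = 'MY_TABLE';
```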

  • What is the alternative to RECOVERABLE/UNRECOVERABLE option in Oracle 9i ?

    In control files for SQL*Loader, we have the option of specifying RECOVERABLE/UNRECOVERABLE for quicker loading of data. Is this option still available in Oracle 9i? We get an error when we execute the SQL*Loader script in Oracle 9i with this option set.
    Thanks
    Balaji

    You could use a BitArray in conjunction with static fields to provide labels for the bits:
    using System.Collections;
    static class Flags
    {
        public static int WorkProperly = 0;
        public static int CompileFaster = 1;
        public static int AutoImproveCodeQuality = 2;
    }
    class FlagTest
    {
        static void Main(string[] args)
        {
            BitArray bits = new BitArray(100); // > 64
            bits[Flags.AutoImproveCodeQuality] = true;
        }
    }
    Or, with an enum, but you'd have to cast the value every time:
    enum Flags
    {
        Never = 0,
        MostOfTheTime,
        Sometimes,
        OddThursdays,
        WhenPigsFly
    }
    BitArray bits = new BitArray(1000); // lots of bits
    bits[(int)Flags.Sometimes] = true;

  • SQL Loader Control File Recoverable Option in Oracle 9i

    We are migrating from Oracle 8 to Oracle 9i and are running some of our SQL*Loader scripts. The control file uses the "OPTIONS (UNRECOVERABLE)" option, which works fine with Oracle 8 but not with Oracle 9i. Have the "RECOVERABLE/UNRECOVERABLE" options been removed from Oracle 9i? If yes, why? Or is there an alternative option for the same.
    Thanks in advance.
    Balaji

    974647 wrote:
    my requirement is that for the remaining 2 columns, i want to insert text 'COLUMN_DROPPED' in backend table. Rest 8 columns should be populated as usual.
    First of all, it is only possible if the remaining 2 columns' datatype is string (char/nchar, varchar2/nvarchar2, clob/nclob) and at least 14 characters long. The rest is easy. Just modify the last 2 fields in the control file:
    column9  constant 'COLUMN_DROPPED',
    column10 constant 'COLUMN_DROPPED'
    SY.

  • Sqlloader unrecoverable??

    Hi all,
    At Burleson Consulting I read something about an "unrecoverable" option for SQL*Loader, which I cannot find in the Oracle documentation.
    Is it just a description of the "direct=true" option? Or is it a separate setting?
    Thanks in advance,
    Xenofon

    See Utilities 10.2 relevant section http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/ldr_modes.htm#sthref1548.

  • Sqlldr non-zero return code

    We recently switched from Oracle 7.3.4 to Oracle 8.1.7. The input to a sqlldr has duplicate records which causes unique constraint errors. That's not a problem.
    The table updates are going fine on the new machine. I'm just curious as to why the script that was used to execute sqlldr on Oracle 7.3.4 returned a 0 (zero) return code when there were unique constraint warnings, while the identical script and sqlldr code returns a 2 return code on 8.1.7.
    I'm concerned that the non-zero return code may be pointing to something other than the unique constraint issue that we're not seeing.
    The only discernible difference in the log file between the 7.3.4 version and the 8.1.7 version is the following:
    Space allocated for bind array: 2560 bytes(64 rows)     (old machine)
    Space allocated for memory besides bind array: 55256 bytes
    Space allocated for bind array: 2560 bytes(64 rows)     (new machine)
    Space allocated for memory besides bind array: 0 bytes
    Does the above "Space allocated" statement mean anything?
    SQLLDR Code:
    OPTIONS (SILENT=(FEEDBACK))
    LOAD DATA
    INFILE 'IOR_out.csv'
    APPEND
    INTO TABLE sap_so_mstr
    (
    div_ln position(1:4) CHAR,
    acct_so_iwo_ln position(7:16) CHAR,
    rec_type position(42:43) CHAR,
    SAP_CHG_NBR position(46:53) CHAR,
    task_ln constant '0000000'
    )
    Sample input to SQLLDR (in real life these values are lined up in columns):
    27 , 009911AAAA, 19991126, 20991231, C, IO, 9911AAAA
    27 , 009912AAAA, 19991126, 20991231, C, IO, 9912AAAA
    27 , 0099H2HRCM, 19991126, 20991231, C, IO, 99H2HRCM
    Code within the script calling above SQLLDR code:
    sqlldr userid=${DB_USER}/${DB_PASSWORD}, control=${ctlfile} errors=100000
    Any ideas? Thanks.
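
    Regarding the return code itself: sqlldr's documented exit codes on UNIX are 0 (EX_SUCC), 1 (EX_FAIL), 2 (EX_WARN, e.g. rejected rows) and 3 (EX_FTL), so a 2 with unique-constraint rejects is expected on 8.1.7. A small wrapper sketch (the sqlldr call is commented out; adapt it to your environment):

```shell
describe_sqlldr_rc() {
  # Map sqlldr's documented UNIX exit codes to a message.
  case "$1" in
    0) echo "EX_SUCC: all rows loaded successfully" ;;
    1) echo "EX_FAIL: command-line or syntax error" ;;
    2) echo "EX_WARN: load completed with warnings (e.g. rejected rows)" ;;
    3) echo "EX_FTL: fatal error" ;;
    *) echo "unknown return code: $1" ;;
  esac
}

# sqlldr userid=${DB_USER}/${DB_PASSWORD} control=${ctlfile} errors=100000
# rc=$?
rc=2   # the value the poster observed on 8.1.7
describe_sqlldr_rc "$rc"
```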

    Also you can add this in CustomSettings.ini (located in \\<SERVER>\DeploymentShare$\Control):
    ;Logging
    SLShare=\\<SERVER>\DeploymentShare$\Logs
    SLShareDynamicLogging=\\<SERVER>\DeploymentShare$\Logs\%ComputerName%
    Then, during OSD, a dynamically updated BDD.log will be generated in the appropriate %ComputerName% folder on <SERVER>. After OSD is finished, the rest of the logs will appear in that folder.
    You can inspect them with trace32 from the System Center Configuration Manager 2007 Toolkit V2.

  • Sql loader not able to load more than 4.2 billion rows

    Hi,
    I am facing a problem with the SQL*Loader utility.
    The SQL*Loader process stops after loading 342 GB of data, i.e. it seems to load 4.294 billion rows out of 6.9 billion rows, and the loader process completes without throwing any error.
    Is there any limit on the volume of data SQL*Loader can load?
    Also, how can I identify that the SQL*Loader process has not loaded all the data from the file, given that it throws no error?
    Thanks in Advance.

    It may be a problem with the db config, not with SQL*Loader...
    Check all the parameters.
    Maximizing SQL*Loader Performance
    SQL*Loader is flexible and offers many options that should be considered to maximize the speed of data loads. These include:
    1. Use Direct Path Loads - The conventional path loader essentially loads the data by using standard insert statements. The direct path loader (direct=true) loads directly into the Oracle data files and creates blocks in Oracle database block format. The fact that SQL is not being issued makes the entire process much less taxing on the database. There are certain cases, however, in which direct path loads cannot be used (clustered tables). To prepare the database for direct path loads, the script $ORACLE_HOME/rdbms/admin/catldr.sql must be executed.
    2. Disable Indexes and Constraints. For conventional data loads only, the disabling of indexes and constraints can greatly enhance the performance of SQL*Loader.
    3. Use a Larger Bind Array. For conventional data loads only, larger bind arrays limit the number of calls to the database and increase performance. The size of the bind array is specified using the bindsize parameter. The bind array's size is equivalent to the number of rows it contains (rows=) times the maximum length of each row.
    4. Use ROWS=n to Commit Less Frequently. For conventional data loads only, the rows parameter specifies the number of rows per commit. Issuing fewer commits will enhance performance.
    5. Use Parallel Loads. Available with direct path data loads only, this option allows multiple SQL*Loader jobs to execute concurrently.
    $ sqlldr control=first.ctl parallel=true direct=true
    $ sqlldr control=second.ctl parallel=true direct=true
    6. Use Fixed Width Data. Fixed width data format saves Oracle some processing when parsing the data. The savings can be tremendous, depending on the type of data and number of rows.
    7. Disable Archiving During Load. While this may not be feasible in certain environments, disabling database archiving can increase performance considerably.
    8. Use unrecoverable. The unrecoverable option (unrecoverable load data) disables the writing of the data to the redo logs. This option is available for direct path loads only.
    Edited by: 879090 on 18-Aug-2011 00:23
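
    Tips 3 and 4 above can be combined with a quick back-of-the-envelope calculation: the bind array needs roughly rows-per-commit times the maximum row length in bytes. A sketch with made-up example numbers:

```shell
# Bind array sizing per tips 3-4: size ~ rows-per-commit * max row length.
# Both numbers below are made-up example values.
rows_per_commit=5000
max_row_len=100   # bytes
bindsize=$(( rows_per_commit * max_row_len ))
echo "suggested: sqlldr ... rows=$rows_per_commit bindsize=$bindsize"
```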

  • Concept and Usage of DIRECT PATH LOAD

    Product: ORACLE SERVER
    Date written: 1998-11-27
    When a very large volume of data must be loaded quickly, a direct path load can be used. This note explains the concept of direct path loads in detail, how to use them, and what to consider when doing so.
    1. Conventional path load
    The ordinary SQL*Loader method inserts the data in the datafile into an existing table using SQL INSERT commands. Because SQL commands are used, an insert command must be generated and parsed for each piece of data; the data to be inserted is first placed into a bind array buffer (data block buffer) and then written to disk.
    A conventional path load should be used in the following cases:
    --- When the table must be accessed through an index during the load.
    During a direct load, indexes go into 'direct load state' and cannot be used.
    --- When updates or inserts must be performed on the table during the load without using an index.
    During a direct load, an exclusive write (X) lock is taken on the table.
    --- When the load must be performed over SQL*NET.
    --- When loading into a clustered table.
    --- When loading a small number of rows into a large table that has indexes.
    --- When loading into a large table that has referential or check integrity constraints defined.
    --- When SQL functions must be applied to data fields during the load.
    2. How a direct path load works
    Direct path loads have the following characteristics, which make them the method of choice when a very large amount of data must be loaded in a short time:
    (1) SQL INSERT statements are not generated and executed.
    (2) Instead of using the bind array buffer in memory, data blocks in the same format as database blocks are built in memory, filled with data, and written to disk as-is. (The in-memory block buffer and the on-disk block normally differ in format.)
    (3) A lock is taken on the table when the load starts and released when the load finishes.
    (4) Data is loaded into blocks above the table's HWM (High Water Mark). The HWM keeps growing as data is inserted into the table and cannot be lowered except by truncate. Therefore data is always written into freshly allocated, completely empty blocks.
    (5) Redo log files are not needed in case of instance failure.
    (6) No UNDO information is generated; that is, rollback segments are not used.
    (7) If the OS supports asynchronous I/O, multiple buffers can be used so that data is read into one buffer while another buffer is written to disk.
    (8) Performance can be improved further with the parallel option.
    3. Using direct path loads and their options
    The views needed for direct path loads are contained in the following script, which must be run beforehand as the sys user. This script is included in catalog.sql, so it has already been run when the database was created:
    @$ORACLE_HOME/rdbms/admin/catldr.sql
    To use a direct path load, simply add DIRECT=TRUE to the ordinary sqlload command line, like this:
    sqlload username/password control=loadtest.ctl direct=true
    The following are additional options and control-file clauses worth considering when using direct path loads.
    (1) ROWS = n
    In a conventional path load, rows defaults to 64, and a commit occurs each time the specified number of rows has been loaded. Similarly, in a direct path load the rows option triggers a data save; once a data save has occurred, the data is part of the table and the rows loaded so far will not be lost.
    Note, however, that in a direct path load the indexes are built only after all the data has been loaded, so even after a data save the indexes remain in direct load state and cannot be used.
    In a direct path load the default for rows is unlimited; if the specified value does not fill a database block, it is rounded up to a value that fills the block completely, so that no partial blocks are created.
    (2) PIECED clause
    Specified in the control file as column_spec datatype_spec PIECED; valid for direct path loads only. When a single datum is larger than the maximum buffer size, as with the LONG type, it is loaded in several pieces. This option can be applied only to the very last field of the table and cannot be used on an index column.
    If a problem occurs in the data during the load, only the truncated part of the datum currently being loaded is written to the bad file, because the earlier pieces have already been written to the datafile and no longer remain in the buffer.
    (3) READBUFFERS = n (default is 4)
    If a very large datum is not the last field, or is part of an index, the PIECED option cannot be used. In that case the buffer size must be increased, which is done with the readbuffers option. The default number of buffers is 4; if the message ORA-2374 (No more slots for read buffer queue) appears during the load, there are not enough buffers, so increase the count. In general, though, raising this value beyond what is needed only increases system overhead and brings little performance improvement.
    4. Index handling in direct path loads
    The procedure for building indexes during a direct path load is as follows:
    (1) The data is loaded into the table.
    (2) The key portion of the loaded data is copied to a temporary segment and sorted.
    (3) The pre-existing index is merged with the keys sorted in (2).
    (4) A new index results from (3). The pre-existing index, the temporary segment, and the newly built index all exist until the merge is completely finished.
    (5) The old index and the temporary segment are dropped.
    In contrast to this procedure, a conventional path load adds each row to the index as it is inserted. No temporary storage space is needed, but index creation is slower than with a direct path load and the index tree ends up less well balanced.
    The temporary space needed to build an index can be estimated with the following formula:
    1.3 * key_storage
    key_storage = (number_of_rows) * (10 + sum_of_column_sizes +
    number_of_columns)
    Here 1.3 is the factor for the extra space needed by an average sort; use 2 if the data is completely in reverse order, and 1 if it is already fully sorted.
    --- SINGLEROW clause
    Because index creation during a direct path load consumes so much space, the SINGLEROW option can be used when resources are scarce. It is specified in the control file in the following form and can be used only with direct path loads:
    into table table_name [sorted indexes...] singlerow
    With this option the index is not built after all the data has been loaded; instead, each row is added to the index as it is loaded.
    Since the purpose of this option is to avoid the additional space needed for the merge when an index already exists, it is meant to be used with APPEND, not with INSERT. It is recommended when the existing table is at least 20 times larger than the data being loaded.
    A direct path load records no rollback information, but with the singlerow option, undo information for the inserted index entries is written to the rollback segments.
    However, if an instance failure occurs midway, the data is preserved up to the last data save, while the index is still left in direct load state and cannot be used.
    --- Direct Load State
    If a direct path load does not finish successfully, the indexes are left in direct load state.
    Querying through such an index raises the following error:
    ORA-01502 : index 'SCOTT.DEPT_PK' is in direct load state.
    Specifically, an index goes into direct load state for the following reasons:
    (1) Space runs out while the index is being created.
    (2) The SORTED INDEXES clause was used, but the data was not actually sorted.
    In this case all the data is loaded; only the index goes into direct load state.
    (3) An instance failure occurs while the index is being created.
    (4) Duplicate data is loaded into a column that has a unique index.
    To check whether a particular index is in direct load state:
    select index_name, status
    from user_indexes
    where table_name = 'TABLE_NAME';
    If an index shows up as being in direct load state, it must be dropped and recreated before it can be used. Note that during a direct load all indexes go into direct load state and are automatically changed back to valid when the load finishes successfully.
    --- Pre-sorting (SORTED INDEXES)
    To reduce the time spent sorting for index creation during a direct load, the data can be pre-sorted on the index columns before loading. In that case the SORTED INDEXES option is defined in the control file as follows. It is valid only for direct path loads and can be specified for multiple indexes:
    into table table_name SORTED INDEXES (index_names_with_blank)
    If an index already exists, enough temporary storage is needed to hold the new keys temporarily; if no index existed beforehand, even this temporary space is unnecessary.
    Because building indexes during a direct path load requires extra space when loading into a table that already contains data, and the indexes must be recreated if the load does not finish completely successfully, the usual practice is to drop the table's indexes before the direct path load and recreate them after the load has finished.
    5. Recovery
    A direct load does not insert data into the middle of existing segments; it allocates completely new blocks, and only after they have been written correctly are they made part of the segment, so no redo log information is needed in case of instance failure. By default, however, a direct load does record the loaded data in the redo log, for the sake of media recovery. If the database is not in archive log mode, the redo information generated by a direct load is therefore useless, so in NOARCHIVELOG mode always use the UNRECOVERABLE option in the control file so that no redo entries are written to the redo log.
    Whereas the data is protected up to the last data save on instance failure even without redo log information, the indexes unconditionally go into direct load state and must be recreated. Also, extents that were allocated to the target table after the last data save are not released as free space, even though the data loaded into them is not visible to users.
    6. Integrity Constraints & Triggers
    During a direct path load, NOT NULL, UNIQUE, and PRIMARY KEY constraints remain enabled. NOT NULL is checked at insert time, and UNIQUE is checked when the index is built after the load.
    CHECK and referential constraints, however, are disabled when the load starts. To re-enable these disabled constraints after all the data has been loaded, the REENABLE option must be specified in the control file. The reenable option cannot be specified per constraint; once specified in the control file, it affects all integrity/check constraints. If data violating a constraint is found while the constraints are being re-enabled, the constraint in question cannot be enabled and remains in disabled status; to identify the violating data, add the exceptions option to the reenable clause as follows:
    reenable [exceptions table_name]
    Here, table_name is created by copying $ORACLE_HOME/rdbms/admin/utlexcpt.sql to another directory, changing the table name to something other than exceptions, and running the script.
    Insert triggers, like the integrity/check constraints, are disabled when the direct load starts and automatically re-enabled when the load finishes. Note that even after they are re-enabled, the triggers do not fire for the data inserted by the load.
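
    The temporary-space estimate in section 4 is easy to evaluate; a sketch using made-up example values for an 8-million-row key:

```shell
# temp space ~ 1.3 * key_storage, where
# key_storage = number_of_rows * (10 + sum_of_column_sizes + number_of_columns)
number_of_rows=8000000
sum_of_column_sizes=20   # bytes; made-up example value
number_of_columns=3
key_storage=$(( number_of_rows * (10 + sum_of_column_sizes + number_of_columns) ))
temp_bytes=$(( key_storage * 13 / 10 ))   # the 1.3 sort factor, in integer math
echo "estimated temp space: $temp_bytes bytes"
```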

  • Redo log wait event

    Hi,
    in my top evens i've:
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~
    Event                      Waits      Time (s)  Avg wait (ms)  %Total Call Time  Wait Class
    CPU time                              1,894                    36.1
    log file sync              36,862     1,008     27             19.2              Commit
    db file scattered read     165,508    970       6              18.5              User I/O
    db file sequential read    196,596    857       4              16.3              User I/O
    log file parallel write    35,847     565       16             10.8              System I/O
    Log files are on separate disks with no other activity, only 1 redo member per group, and 4 groups.
    I think that 27 ms for log file sync is high.
    I raised the commit interval in sqlloader by putting rows=100000 instead of 30000, but it's still high.
    Which checks can I perform?
    I'm on AIX 5.3 and the database is 10.2.0.4.4.

    Log File Sync
    The “log file sync” wait event is triggered when a user session issues a commit (or a rollback). The user session will signal or post the LGWR to write the log buffer to the redo log file. When the LGWR has finished writing, it will post the user session. The wait is entirely dependent on LGWR to write out the necessary redo blocks and send confirmation of its completion back to the user session. The wait time includes the writing of the log buffer and the post, and is sometimes called “commit latency”.
    The P1 parameter in <View:V$SESSION_WAIT> is defined as follows for this wait event:
    P1 = buffer#
    All changes up to this buffer number (in the log buffer) must be flushed to disk and the writes confirmed to ensure that the transaction is committed and will be kept on an instance crash. The wait is for LGWR to flush up to this buffer#.
    Reducing Waits / Wait times:
    If a SQL statement is encountering a significant amount of total time for this event, the average wait time should be examined. If the average wait time is low, but the number of waits is high, then the application might be committing after every row, rather than batching COMMITs. Applications can reduce this wait by committing after “n” rows so there are fewer distinct COMMIT operations. Each commit has to be confirmed to make sure the relevant REDO is on disk. Although commits can be "piggybacked" by Oracle, reducing the overall number of commits by batching transactions can be very beneficial.
    If the SQL statement is a SELECT statement, review the Oracle Auditing settings. If Auditing is enabled for SELECT statements, Oracle could be spending time writing and commit data to the AUDIT$ table.
    If the average time waited is high, then examine the other log related waits for the session, to see where the session is spending most of its time. If a session continues to wait on the same buffer# then the SEQ# column of V$SESSION_WAIT should increment every second. If not then the local session has a problem with wait event timeouts. If the SEQ# column is incrementing then the blocking process is the LGWR process. Check to see what LGWR is waiting on as it may be stuck. If the waits are because of slow I/O, then try the following:
    Reduce other I/O activity on the disks containing the redo logs, or use dedicated disks.
    Try to reduce resource contention. Check the number of transactions (commits + rollbacks) each second, from V$SYSSTAT.
    Alternate redo logs on different disks to minimize the effect of the archiver on the log writer.
    Move the redo logs to faster disks or a faster I/O subsystem (for example, switch from RAID 5 to RAID 1).
    Consider using raw devices (or simulated raw devices provided by disk vendors) to speed up the writes.
    See if any activity can safely be done with NOLOGGING / UNRECOVERABLE options in order to reduce the amount of redo being written.
    See if any of the processing can use the COMMIT NOWAIT option (be sure to understand the semantics of this before using it).
    Check the size of the log buffer as it may be so large that LGWR is writing too many blocks at one time. 
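
    One of the checks suggested above, the transaction rate from V$SYSSTAT, can be sketched as follows; the statistic names are as they appear in 10g:

```sql
-- Commits + rollbacks since instance startup; sample twice and
-- take the difference to get a per-second rate.
SELECT name, value
FROM v$sysstat
WHERE name IN ('user commits', 'user rollbacks');
```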

  • Creating Local partitioned index on Range-Partitioned table.

    Hi All,
    Database Version: Oracle 8i
    OS Platform: Solaris
    I need to create a local partitioned index on a column of a range-partitioned table having 8 million records. Is there any way to perform this as fast as possible?
    I think we can use the Nologging, Parallel, and Unrecoverable options.
    But considering undo and redo, and especially the time required to perform this activity, which is the best method?
    Please guide me on performing it in the fastest way, and also online!!!
    -Yasser

    YasserRACDBA wrote:
    3. CREATE INDEX CSB_CLIENT_CODE ON CS_BILLING (CLIENT_CODE) LOCAL
    NOLOGGING PARALLEL (DEGREE 14) online;
    4. Analyze the table with cascade option.
    Do you think this is the only method to perform the operation in the fastest way? As the table contains 8 million records and it's a production database.
    Yasser,
    if all partitions should go to the same tablespace then you don't need to specify it for each partition.
    In addition you could use the "COMPUTE STATISTICS" clause then you don't need to analyze, if you want to do it only because of the added index.
    If you want to do it separately, then analyze only the index. Of course, if you want to analyze the table, too, your approach is fine.
    So this is how the statement could look like:
    CREATE INDEX CSB_CLIENT_CODE ON CS_BILLING (CLIENT_CODE) TABLESPACE CS_BILLING LOCAL NOLOGGING PARALLEL (DEGREE 14) ONLINE COMPUTE STATISTICS;
    If this operation exceeds a particular time window, can I kill the process? What's the worst that will happen if I kill it?
    Killing an ONLINE operation is a bit of a mess... You're already quite on the edge (parallel, online, possibly compute statistics) with this statement. The ONLINE operation creates an IOT table to record the changes to the underlying table during the build operation. All these things need to be cleaned up if the operation fails or the process dies/gets killed. This cleanup is supposed to be performed by the SMON process, if I remember correctly. I remember that I once ran into trouble in 8i after such an operation failed; I may even have gotten an ORA-00600 when I tried to access the table afterwards.
    It's not unlikely that your 8.1.7.2 will give you trouble with this kind of statement, so be prepared.
    How much time may it take? (Just to be on the safer side.)
    The time it takes to scan the whole table (if the information can't be read from another index), plus the sorting operation, plus writing the segment, plus any wait time due to concurrent DML / locks, plus the time to process the table that holds the changes that were done to the table while building the index.
    You can try to run an EXPLAIN PLAN on your create index statement which will give you a cost indication if you're using the cost based optimizer.
    Please suggest any other way that exists to perform this in the fastest way.
    Since you will need to sort 8 million rows, if you have sufficient memory you could bump up the SORT_AREA_SIZE for your session temporarily to sort as much as possible in RAM.
    -- Use e.g. 100000000 to allow a 100M SORT_AREA_SIZE
    ALTER SESSION SET SORT_AREA_SIZE = <something_large>;
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • ORA-01409 NOSORT option may not be used

    Hi all,
    We've got a range partitioned table, each partition resides in a different tablespace (locally managed) thus :
    create tablespace abcblast_hit_test_data1
    datafile '/export/data/oracle/HLDDEV05/abcblast_hit_test_data1.dbf'
    size 2600m extent management local uniform size 2500m;
    create tablespace abcblast_hit_test_data2
    datafile '/export/data/oracle/HLDDEV05/abcblast_hit_test_data2.dbf'
    size 2600m extent management local uniform size 2500m;
    Two large SORTED files of data are sql*loaded into each partition, taking up ONE extent in each partition
    I then try to create a non-unique index on the table using NOSORT and get the error ORA-01409 NOSORT option may not be used; rows are not in ascending order
    ... with the following reasoning :
    For non-unique indexes the ROWID is considered part of the index key. This means that two rows that appear to be stored in ascending order may not be. If you create an index NOSORT, and two of the rows in the table have the same index values, but get split across two extents, the data block address of the first block in the second extent can be less than the data block address of the last block in the first extent. If these addresses are not in ascending order, the ROWIDs are not either. Since these ROWIDs are considered part of the index key, the index key is not in ascending order, and the create index NOSORT fails.
    BUT the data for each partition DOES reside in one extent :
    select partition_name, tablespace_name, extent_id, bytes
    from dba_extents
    where segment_name = 'ABCBLAST_HIT'
    and segment_type = 'TABLE PARTITION';
    PARTITION_NAME     TABLESPACE_NAME          EXTENT_ID     BYTES
    PART_01          ABCBLAST_HIT_TEST_DATA1     0          2621440000
    PART_02          ABCBLAST_HIT_TEST_DATA2     0          2621440000
    (Oracle 9.0.1 on Linux)
    HELP !!!! Does this mean we can't use NOSORT when building indexes on partitioned tables ?!
    (Note : if NOSORT is not used then a sort is performed which we are trying to avoid - final table will contain 1.6 billion rows and will consist of 50 partitions)

    Hi,
    I am still facing the same error. Can anybody help me?
    The following index(es) on table KA31CVLA.CITY were processed:
    index KA31CVLA.CITY_PK was made unusable due to:
    ORA-01409: NOSORT option may not be used; rows are not in ascending order
    index KA31CVLA.CITY_UQ01 loaded successfully with 29761 keys
    i have create one table CITY in user KA31CVLA.
    CREATE TABLE CITY
    (
    CNTRY_CD VARCHAR2(2 BYTE) NOT NULL,
    CITY_NM VARCHAR2(40 BYTE) NOT NULL,
    SEQ_NBR NUMBER(10) NOT NULL,
    POSTL_STT_EQNT_CD VARCHAR2(9 BYTE),
    LATITUDE NUMBER(9,5) NOT NULL,
    LONGITUDE NUMBER(9,5) NOT NULL
    )
    and then added primary constraints
    ALTER TABLE CITY ADD CONSTRAINT KA31CVLA.CITY_PK PRIMARY KEY (CNTRY_CD, CITY_NM, SEQ_NBR)
    after this default unique index created as below
    CREATE UNIQUE INDEX KA31CVLA.CITY_PK ON KA31CVLA.CITY (CNTRY_CD, CITY_NM, SEQ_NBR)
    after that i have added one more constraints is
    CREATE UNIQUE INDEX KA31CVLA.CITY_UQ01 ON KA31CVLA.CITY (CITY_NM, POSTL_STT_EQNT_CD, CNTRY_CD, SEQ_NBR)
    Now I am trying to load data into the KA31CVLA.CITY table through SQLLDR.
    Here is the command which I am executing to insert:
    SQLLDR CONTROL=D:\GMVS\city.ctl, DATA=D:\GMVS\sorted_city.dat log=D:\GMVS\sorted_log.log USERID="KA31CVLA/KA31CVLA123" DIRECT=Y
    Here is the control file details
    options (direct=true)
    unrecoverable
    load data
    truncate
    into table KA31CVLA.city
    sorted indexes (city_PK)
    reenable disabled_constraints
    fields terminated by "^"
    trailing nullcols
    (cntry_cd char,
    city_nm char,
    seq_nbr integer external,
    postl_stt_eqnt_cd Char,
    latitude integer external,
    longitude integer external)
    All the records were inserted into the table, but I am getting the above error in the log files.
    The following index(es) on table KA31CVLA.CITY were processed:
    index KA31CVLA.CITY_PK was made unusable due to:
    ORA-01409: NOSORT option may not be used; rows are not in ascending order
    index KA31CVLA.CITY_UQ01 loaded successfully with 29761 keys
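
    Since the load left CITY_PK unusable (SORTED INDEXES was specified but the data evidently wasn't in primary-key order), the index has to be rebuilt, or the data pre-sorted on the key before the load. A hedged sketch, using the names from the log above:

```sql
-- Confirm which index was left unusable by the direct load
SELECT index_name, status FROM all_indexes WHERE table_name = 'CITY';

-- Rebuild it (or drop and recreate; either clears the unusable state)
ALTER INDEX KA31CVLA.CITY_PK REBUILD;
```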

  • How to use trace on sqlldr?

    Hi, can anybody please give me a link about implementing trace on sqlldr? I need to know the information about the connection time to the database and the whole series of steps.

    I read in the docs that performance increases with direct path. I tried that by running catldr.sql in the 'sys' schema, used direct=true in the sqlldr command string, and used 'UNRECOVERABLE' before the load in the .ctl file. But it is taking the same time. Are there any environment variables to be set up?
    For your information,
    following is the control file, in which column names and the table name are modified:
    load DATA
    append
    INTO TABLE table_name
    FIELDS TERMINATED BY '|' OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    ( Com_num                "ltrim(rtrim(:Com_num))",
    CON_NUM                "ltrim(rtrim(:CON_NUM))",
    Fun_sts                    "ltrim(rtrim(:Fun_sts))",
    cs_num           "ltrim(rtrim(:cs_num))",
    cs_nm                         "ltrim(rtrim(:cs_nm))",
    TR                         ,
    I_DATE                         "to_date(lpad(ltrim(rtrim(:I_DATE)),8,'0'),'MMDDYYYY')",
    STAT                         "ltrim(rtrim(:STAT))",
    PO_NUM                     "ltrim(rtrim(:PO_NUM))",
    ORDER_NUM                     "ltrim(rtrim(:ORDER_NUM))",
    RETURN_REF_ORDER_NUM               "ltrim(rtrim(:RETURN_REF_ORDER_NUM))",
    TAG                     "ltrim(rtrim(:TAG))",
    QUANTITY ,
    CREDIT_QTY ,
    Tie_num                         ,
    SKU                     "ltrim(rtrim(:SKU))",
    SKU_DESC                         "ltrim(rtrim(:SKU_DESC))",
    PRICE_PER_UNIT,
    PRICE,
    TYPE_COST                "upper(ltrim(rtrim(:TYPE_COST)))",
    lease_rate_factor ,
    X_TOTAL               ,
    Y_TOTAL               ,
    Z_TOTAL               ,
    ex_fee ,
    TOTAL_AMOUNT,
    sales_tax ,
    SHIPPING_COST                    ,
    ORDER_TOTAL                    ,     
    HARDWARE_RENT                    ,
    SOFT_COST_RENT               ,
    ex_rent               ,
    TOTAL_RENT,
    upfront_tax_rent ,
    SHIP_RENT                    ,
    SHIP_FACTOR               ,
    TOT_RENT                    ,
    RES_DOLLARS               ,
    RES_PERCENT               ,
    Int_rt_days          ,
    Int_rt               ,
    misc_bill_total      ,
    misc_bill_rent      ,
    ppt_mmf                     ,
    Certfied_data_destruct          ,
    Return_logistics           ,
    Insurance           ,
    user_def1val                    "ltrim(rtrim(:user_def1val))",
    user_def2Val                    "ltrim(rtrim(:user_def2val))",
    user_def3val                    "ltrim(rtrim(:user_def3val))",
    user_def4val                    "ltrim(rtrim(:user_def4val))",
    user_def5val                    "ltrim(rtrim(:user_def5val))",
    user_def6val                    "ltrim(rtrim(:user_def6val))",
    user_def7val                    "ltrim(rtrim(:user_def7val))",
    user_def8Val                    "ltrim(rtrim(:user_def8val))",
    user_def9val                    "ltrim(rtrim(:user_def9val))",
    user_def10val                    "ltrim(rtrim(:user_def10val))",
    user_def11val                    "ltrim(rtrim(:user_def11val))",
    user_def12val                    "ltrim(rtrim(:user_def12val))",
    user_def13Val                    "ltrim(rtrim(:user_def13val))",
    user_def14val                    "ltrim(rtrim(:user_def14val))",
    user_def15val                    "ltrim(rtrim(:user_def15val))",
    user_def16val                    "ltrim(rtrim(:user_def16val))",
    address_1 "ltrim(rtrim(:address_1))",
    address_2 "ltrim(rtrim(:address_2))",
    city "Upper(ltrim(rtrim(:city)))",
    county "upper(ltrim(rtrim(:county)))",
    state "upper(ltrim(rtrim(:state)))",
    zip_code "lpad(ltrim(rtrim(:zip_code)),5,'0')",
    zipplus4 "ltrim(rtrim(:zipplus4))",
    country_code "upper(ltrim(rtrim(:country_code)))",
    st_tax_exempt "upper(ltrim(rtrim(:st_tax_exempt)))",
    cty_tax_exempt1 "upper(ltrim(rtrim(:cty_tax_exempt1)))",
    cty_tax_exempt2 "upper(ltrim(rtrim(:cty_tax_exempt2)))",
    cty_tax_exempt3 "upper(ltrim(rtrim(:cty_tax_exempt3)))",
    cty_t_tax_exempt "upper(ltrim(rtrim(:cty_t_tax_exempt)))",
    ctr_ref               "ltrim(rtrim(:ctr_ref))",
    department                    "ltrim(rtrim(:department))",
    asset_owner                    "ltrim(rtrim(:asset_owner))",
    clin                         "ltrim(rtrim(:clin))",
    asset_level                    "ltrim(rtrim(:asset_level))",
    customer_acct_code               "ltrim(rtrim(:customer_acct_code))",
    asset_status                    "ltrim(rtrim(:asset_status))",
    EXPORT_INTERNAL_KEY               ,     
    Order_Id                    ,
    Invoice_line_Id               ,
    is_base                         "upper(ltrim(rtrim(:is_base)))",
    FILE_NAME           "ltrim(rtrim(:file_name))",
    date_created sysdate          ,
    created_by "nvl(:created_by,user)",
    date_modified sysdate          ,
    modified_by "nvl(:created_by,user)",
    id "IMPORT_S.nextval",
    import_type "nvl(:import_type,'SCHEDULING')"
    )
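There is no sqlldr command-line switch that turns on tracing, but the database session that sqlldr opens can be traced with a logon trigger for the load account. This is only a sketch: LOADUSER is a hypothetical user name, and on 10g and later DBMS_MONITOR.SESSION_TRACE_ENABLE could be used instead of the raw 10046 event.

```sql
-- Sketch only: enable extended SQL trace (event 10046, level 12: binds and
-- waits) for every session that the hypothetical load account LOADUSER opens.
-- Run as a DBA; drop the trigger when the load is done.
CREATE OR REPLACE TRIGGER trace_sqlldr_logon
AFTER LOGON ON DATABASE
WHEN (USER = 'LOADUSER')
BEGIN
  EXECUTE IMMEDIATE
    'ALTER SESSION SET events ''10046 trace name context forever, level 12''';
END;
/
```

The trace file appears in user_dump_dest; connection timing and each step of the load can then be read with tkprof.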

  • Sqlldr direct load got ORA-00054: resource busy and acquire with NOWAIT specified

    I have a multi-threaded application that kicks off multiple sqlldr sessions, each trying to insert 200K rows of data into the same table. I am using direct path with parallel enabled. The target table has no index, not even a PK, but I still got this ORA-00054 error.
    Sample control file template:
    OPTIONS (SKIP=1, DIRECT=TRUE, PARALLEL=TRUE, SILENT=ALL, MULTITHREADING=TRUE, SKIP_INDEX_MAINTENANCE=TRUE, SKIP_UNUSABLE_INDEXES=TRUE)
    UNRECOVERABLE
    LOAD DATA
    INFILE '&DATA_FILE_NAME'
    BADFILE '&BAD_FILE_NAME'
    DISCARDFILE '&DISCARD_FILE_NAME'
    INTO TABLE TARGET_TABLE
    APPEND
    FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS
    ( SESSION_ID CONSTANT &SESSION_ID,
    FIELD00,
    FIELD01,
    FIELD02,
    FIELD03,
    FIELD04,
    FIELD05,
    FIELD06 )
    The definition of TARGET_TABLE:
    CREATE TABLE TARGET_TABLE (
    SESSION_ID NUMBER(12),
    FIELD00 VARCHAR2(4000 BYTE),
    FIELD01 VARCHAR2(4000 BYTE),
    FIELD02 VARCHAR2(4000 BYTE),
    FIELD03 VARCHAR2(4000 BYTE),
    FIELD04 VARCHAR2(4000 BYTE),
    FIELD05 VARCHAR2(4000 BYTE),
    FIELD06 VARCHAR2(4000 BYTE),
    FIELD07 VARCHAR2(4000 BYTE),
    FIELD08 VARCHAR2(4000 BYTE),
    FIELD09 VARCHAR2(4000 BYTE),
    FIELD10 VARCHAR2(4000 BYTE),
    FIELD11 VARCHAR2(4000 BYTE) )
    I want to WAIT if there's any race. How can I make it WAIT? Most of the time, WAIT should be the default, but somehow, it acts differently here.
    Any help will be highly appreciated.
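Independent of the locking question, the &VAR placeholders in a template like the one above have to be substituted before each sqlldr session starts. A small shell sketch (the file names, column list, and session id are hypothetical):

```shell
# Instantiate a control-file template for one loader session by replacing
# the &VAR placeholders with sed. Template and values here are hypothetical.
SESSION_ID=42
cat > load_template.ctl <<'EOF'
OPTIONS (SKIP=1, DIRECT=TRUE, PARALLEL=TRUE)
LOAD DATA
INFILE '&DATA_FILE_NAME'
INTO TABLE TARGET_TABLE
APPEND
( SESSION_ID CONSTANT &SESSION_ID,
  FIELD00 )
EOF

# The quoted 'EOF' keeps the &VARs literal; sed fills them in per session.
sed -e "s/&SESSION_ID/${SESSION_ID}/g" \
    -e "s|&DATA_FILE_NAME|/tmp/session_${SESSION_ID}.dat|g" \
    load_template.ctl > "session_${SESSION_ID}.ctl"

grep "CONSTANT 42" "session_${SESSION_ID}.ctl"
```

Each worker then runs sqlldr with its own generated control file.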

    Looking at the same manual, you can see here that you need exclusive access to the table:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/ldr_modes.htm#sthref1449
    And here, you can see that if other DML is happening on a table, Oracle says you need to do conventional path load:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/ldr_modes.htm#sthref1432
    In the case of parallelization, yes, you must set parallel=true. This allows SQL*Loader to manage the parallel inserts. If you were to try to do multiple, concurrent direct path loads yourself by running multiple instances of SQL*Loader, you'd run into the same TM enqueue problem.
    The question as to why the TM enqueue is taken in exclusive mode, has to do with how direct path load works. Oracle loads data into previously unformatted, completely empty data blocks from above the HWM. When the load is complete, the HWM is adjusted, and the data is available. Well, Oracle can't allow for multiple concurrent direct loads, all allocating space from above the HWM, and all messing w/ the HWM. This would cause a bit of a problem. And really, you don't want non-direct load DML going on either. So, Oracle disallows it, by taking the TM enqueue in exclusive mode. (Normal DML, non-direct load, takes the TM enqueue in a sharable mode, allowing for other concurrent DML.)
    Hope that's clear,
    -Mark
    Message was edited by:
    Mark J. Bobak
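Mark's point about the exclusive TM enqueue can be observed directly while a load is running. A hedged SQL sketch (the TARGET_TABLE filter is an assumption; run it from another session as a DBA):

```sql
-- Sketch: during a direct path load, the loading session holds the TM
-- enqueue on the table in exclusive mode (LMODE = 6); conventional DML
-- takes it in row-exclusive mode (LMODE = 3), which is why normal inserts
-- can run concurrently but a second direct load cannot.
SELECT l.sid, l.type, l.lmode, o.object_name
  FROM v$lock l
  JOIN dba_objects o ON o.object_id = l.id1
 WHERE l.type = 'TM'
   AND o.object_name = 'TARGET_TABLE';
```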

  • Sqlldr segmentation fault

    Below I have included the environment settings and the control file. Any ideas on why I get a segmentation fault when I run sqlldr?
    My Linux run-time environment is:
    bash-2.05$ set
    BASH=/bin/bash
    BASH_VERSINFO=([0]="2" [1]="05" [2]="8" [3]="1" [4]="release" [5]="i386-redhat-linux-gnu")
    BASH_VERSION=$'2.05.8(1)-release'
    COLORS=/etc/DIR_COLORS
    COLUMNS=80
    DIRSTACK=()
    EUID=506
    GROUPS=()
    HISTFILE=/home/rbarsnes/.bash_history
    HISTFILESIZE=1000
    HISTSIZE=1000
    HOME=/home/rbarsnes
    HOSTNAME=<omitted>
    HOSTTYPE=i386
    IFS=$' \t\n'
    INPUTRC=/etc/inputrc
    LAMHELPFILE=/etc/lam/lam-helpfile
    LANG=en_US
    LD_LIBRARY_PATH=/usr01/app/oracle/product/9.0.1/lib:/lib:/usr/lib:/usr/local/lib
    LESSOPEN=$'|/usr/bin/lesspipe.sh %s'
    LIBPATH=/usr01/app/oracle/product/9.0.1/lib
    LINES=24
    LOGNAME=rbarsnes
    LS_COLORS=$'no=00:fi=00:di=01;34:ln=01;36:pi=40;33:so=01;35:bd=40;33;01:cd=40;33;01:or=01;05;37;41:mi=01;05;37;41:ex=01;32:*.cmd=01;32:*.exe=01;32:*.com=01;32:*.btm=01;32:*.bat=01;32:*.sh=01;32:*.csh=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.gz=01;31:*.bz2=01;31:*.bz=01;31:*.tz=01;31:*.rpm=01;31:*.cpio=01;31:*.jpg=01;35:*.gif=01;35:*.bmp=01;35:*.xbm=01;35:*.xpm=01;35:*.png=01;35:*.tif=01;35:'
    MACHTYPE=i386-redhat-linux-gnu
    MAIL=/var/spool/mail/rbarsnes
    MAILCHECK=60
    OPTERR=1
    OPTIND=1
    ORACLE_BASE=/usr01/app/oracle
    ORACLE_HOME=/usr01/app/oracle/product/9.0.1
    ORACLE_OWNER=oracle
    ORACLE_SID=<omitted>
    ORA_NLS33=/usr01/app/oracle/product/9.0.1/ocommon/nls/admin/data
    OSTYPE=linux-gnu
    PATH=/home/rbarsnes/bin:/usr01/app/oracle/product/9.0.1/bin:/opt/bin:/bin:/usr/bin:/usr/local/bin:/usr/sbin:/usr/X11R6/bin:/usr/local/java/bin:.:
    PIPESTATUS=([0]="0")
    PPID=4089
    PS1=$'\\s-\\v\\$ '
    PS2=$'> '
    PS4=$'+ '
    PVM_ROOT=/usr/share/pvm3
    PVM_RSH=/usr/bin/rsh
    PWD=/home/rbarsnes
    SHELL=/bin/bash
    SHELLOPTS=braceexpand:hashall:histexpand:monitor:history:interactive-comments:emacs
    SHLVL=1
    SSH_CLIENT=$'172.17.36.54 4662 22'
    SSH_TTY=/dev/pts/1
    SUPPORTED=en_US:en
    TERM=xterm
    TNS_ADMIN=/usr01/app/oracle/product/9.0.1/network/admin
    UID=506
    USER=rbarsnes
    XPVM_ROOT=/usr/share/pvm3/xpvm
    _=clear
    langfile=/home/rbarsnes/.i18n
    root=/opt/IBMJava2-131
    bash-2.05$
    End Linux run-time environment
    My control file is:
    OPTIONS (DIRECT=true) -- Use direct path to increase performance
    -- Load the catalog pricing data
    UNRECOVERABLE -- Do not generate redo entries to increase performance
    LOAD
    -- "str X'0d0a'" instructs that records are delimited by \r\n instead of \n.
    INFILE='/home/rbarsnes/pricing/catgprc_all.txt'
    BADFILE='/home/rbarsnes/pricing/catgprc_all.bad'
    DISCARDFILE='/home/rbarsnes/pricing/catgprc_all.dsc'
    TRUNCATE
    INTO TABLE PROD_SQLLDR_CATA_PRICE
    FIELDS TERMINATED BY WHITESPACE
    ( SKU, PRICE )
    End my control file

    but opatch doesn't work.
    Could you be more specific? Some error message?
    can you give me a link from which i can download it again.
    You already have the link... metalink...
    Please reply as this is production system.
    As your database is in production state and you have access to the metalink site, you may consider asking Oracle support for help via a "Service request".

  • Can you use schema name in "INTO TABLE" in sqlldr?

    Hi All,
    I have a simple question.
    My Oracle userid is SBHAT.
    SBHAT has insert,delete, select,update privileges on a Table in another schema XYZ.
    I want to SQL*Load data in Table EMPLOYEE in XYZ schema, using my userid. Something like ....
    sqlldr userid=SBHAT/password control=test.ctl data=test.txt
    I tried to use the following in my test.ctl file but it does not work.
    load data
    append
    into table "XYZ.EMPLOYEE"
    fields terminated by ',' optionally enclosed by '"'
    trailing nullcols
    Can someone give me the proper syntax for 'into table' that uses the schema.table_name construct.
    Thanks,
    Suresh

    Please post your exact OS and database versions. What do you get when you execute SQL*Loader with the syntax you have identified so far?
    http://docs.oracle.com/cd/E11882_01/server.112/e22490/ldr_control_file.htm#i1005623
    HTH
    Srini
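For what it's worth, the usual fix is to drop the double quotes: quoting "XYZ.EMPLOYEE" as a single string makes SQL*Loader treat the period as part of one table name. A sketch of the control file under that assumption (the column names are hypothetical):

```
-- Sketch only: schema-qualified target table, unquoted (or quote each part
-- separately as "XYZ"."EMPLOYEE"; quoting the whole "XYZ.EMPLOYEE" makes
-- the period part of the table name). Column names below are hypothetical.
load data
append
into table XYZ.EMPLOYEE
fields terminated by ',' optionally enclosed by '"'
trailing nullcols
(
  empno  integer external,
  ename  char
)
```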
