SQL Loader choosing conventional path when direct path is requested

We have a mystery: SQL*Loader is choosing to load via the conventional path even though the direct path is requested.
We have one control file that produces direct-path loads and one that does not, and the difference between them does not seem to account for the difference in behavior.
The following control file does not give us direct-path:
OPTIONS (
     SKIP=0,
     ERRORS=0,
     DIRECT=TRUE,
     NOLOGGING
)
LOAD DATA
INFILE "[file path]" "STR x'0A'"
BADFILE "[file path].bad"
DISCARDFILE "[file path].dsc"
DISCARDMAX 0
INSERT
INTO TABLE [schema name].[table name]
FIELDS TERMINATED BY X'2C'
OPTIONALLY ENCLOSED BY '?'
TRAILING NULLCOLS
(
     C1_ACD_LINE_CD     CHAR(2000),
[column specifications continue]
)

When running with this control file, the log shows:
Number to load: ALL
Number to skip: 0
Errors allowed: 0
Bind array:     64 rows, maximum of 256000 bytes
Continuation:    none specified
Path used:      Conventional
Table [schema name].[table name], loaded from every logical record.
Insert option in effect for this table: INSERT
TRAILING NULLCOLS option in effect

If we use a control file that is modified as follows:
OPTIONS (
     SKIP=0,
     ERRORS=0,
     DIRECT=TRUE,
     PARALLEL=TRUE,
     NOLOGGING
     )

Then we do get a direct-path load:
Number to load: ALL
Number to skip: 0
Errors allowed: 0
Continuation:    none specified
Path used:      Direct
Table [schema name].[table name], loaded from every logical record.
Insert option in effect for this table: INSERT
TRAILING NULLCOLS option in effect

So there is nothing about the table (constraints, triggers, etc.) that is preventing direct-path loads.
Now, we stumbled into this PARALLEL thing by accident - we are not really trying to do parallel loads.
In my reading of the Utilities guide (http://docs.oracle.com/cd/E11882_01/server.112/e22490/ldr_modes.htm#autoId64 ), the PARALLEL option lets SQL*Loader tolerate multiple sessions loading into the same segment at once, but does not perform parallel processing itself. So, is it possible that some other lock on the table (from a previous SQL*Loader direct-path load, perhaps) is causing SQL*Loader to block direct-path loads unless the PARALLEL option is invoked? If so, how do we recognize that state, and how do we resolve it?
Version information:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE     11.2.0.3.0     Production
TNS for Solaris: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
Any thoughts or suggestions would be appreciated.
Thanks,
Mike

From the same link
>
To use a direct path load (except for parallel loads), SQL*Loader must have exclusive write access to the table and exclusive read/write access to any indexes.
>
So I suspect that when using only DIRECT=TRUE, Oracle is not able to get an exclusive lock on the required objects, so it uses the conventional mode.
From a later section
>
- Segments to be loaded do not have any active transactions pending.
To check for this condition, use the Oracle Enterprise Manager command MONITOR TABLE to find the object ID for the tables you want to load. Then use the command MONITOR LOCK to see if there are any locks on the tables.
>
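MONITOR TABLE and MONITOR LOCK are old Enterprise Manager commands; a rough dictionary-view equivalent (a sketch only - V$LOCKED_OBJECTS and DBA_OBJECTS are standard views; substitute your own owner and table name) would be:

-- sessions currently holding locks on the target table
SELECT lo.session_id, lo.oracle_username, lo.locked_mode,
       o.owner, o.object_name
FROM   v$locked_objects lo
       JOIN dba_objects o ON o.object_id = lo.object_id
WHERE  o.owner = '[SCHEMA NAME]'
AND    o.object_name = '[TABLE NAME]';

If this returns rows while your load is starting, something else has a transaction open against the segment, which would fit the behavior you describe.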
Would be interested in knowing what you find
HTH
Srini

Similar Messages

  • What is the diff b/w Conventional Path and Direct Path?

    While doing exp/imp, which one is best in performance, Conventional or Direct?
    Consider my Oracle is 9i (9.2.0) and OS is Solaris 9.
    Could you please clarify.
    Thanks

    http://download.oracle.com/docs/cd/B10501_01/server.920/a96652/ch01.htm#1005685
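    For export, direct path is just a command-line switch; a minimal sketch (credentials, file, and table are placeholders):

    exp scott/tiger FILE=emp.dmp TABLES=emp DIRECT=Y

    Direct path export bypasses the SQL command-processing layer, so it is usually faster than conventional; note that import has no direct mode.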

  • Option in dialogue NOT to cut off the ends of existing paths when adjusting paths

    More often than not, when I draw with the pencil tool to adjust a path's course, it deletes one of the ends, either the longer or shorter side. This is not what I want; I want to simply adjust the course of the path along its journey without losing either of the ends. It takes tedious trial and error to get it right and not lose the path, and often I just end up deleting the whole path and redrawing it from scratch, which defeats the reason I like the path-adjustment option of the pencil tool.
    I like to sketch out my paths to lay them down, then refine them with the pencil tool's path-adjustment option, working back and forth in this manner.
    If I draw over one part of the path, continue its journey like a river in another direction, and cross back over that path, I want the middle section between those two points overlapping the original path to be redrawn, but the ends left intact as they were.
    So if I choose "edit selected paths", wherever I draw with the pencil tool over the selected existing path (in yellow below), the adjusting path is green, and the original crossover points are circled in orange.
    Shown here, even if I subsequently draw over that path several times, wherever the adjusting path (green) last crosses over the underlying selected path (yellow) will be where the adjustment ends. Ideally you wouldn't have to worry about how close you "start" the adjusting path, as long as it crosses over that path.
    So the end result here, while not accurately displaying the adjustment path above, still gets the idea across.
    This would be a lot simpler than having to get within "x" pixels of the line, and would totally avoid the lost-ends-of-the-original-path syndrome.
    But if that is what you want, then as currently available: where the green path crosses over the original yellow path, the yellow path continues from that point in the direction the green path is being drawn; then, where the green line last crosses over the yellow path, there would be the option to continue its line versus completing the yellow line's original end journey. So, effectively, a tickbox option in the dialogue like "continue adjustment path if last crossover beyond x pixels or points" on the HUD display I suggested, with the result below.
    The thing with this method is that it avoids the annoying and frustrating event of losing either the long or short side of the existing path, and it allows quicker, easier path editing without having to pinpoint the pencil tool on the existing path or get it extremely close. That's all right when you're pen-tooling, but I use the pencil tool more often in the beginning, as I do animals, flowers, faces, and figures, which are more organic and sketchy, and then refine with the pen and warp tools after.

    Where you cross an existing path with the adjustment path at only one point, the previous line will question which of its sides from that point you want to keep: the start of its line to the adjusting path's crossover point, or the crossover point to the end of the line being adjusted.
    A radio-circle option in the HUD display could allow "keep long" or "keep short", as in keep the longer or the shorter side of the line, measured from the crossover point to either end of the line.
    I'll post a video next week to illustrate what I'm trying to say here.

  • Sql Loader Direct Path Upload

    I am trying a DIRECT path upload to insert data from a flat file into an Oracle table which has some indexes. The table is used by a procedure for DML operations on another table.
    Previously, we used a conventional path upload with the PARALLEL=TRUE option.
    I read that a DIRECT upload affects indexes, and since I use the table being populated by the direct upload for some other DML operation, I don't want any performance degradation to other operations in the procedure.
    How do I inspect the impact of a DIRECT upload on my tables and procedure execution?
    Also, are there any other side effects of Direct Path Upload?
    Any ideas?

    > since I use the table being populated by direct upload for some other dml operation
    One of the restrictions for Direct Path Loads is that the tables to be loaded do not have any active transactions pending.
    For other restrictions on using direct path loads, and conditions causing a direct path load to leave an index in an unusable state, see
    http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10825/ldr_modes.htm#i1008815

  • Sql Loader - Parallel Direct Path Loading

    Hi,
    I want to load a few million records into a table. I read on the OTN site that we can make use of the Parallel Direct Loading option. As described there, I split my source file into two and tried loading the files from two different sqlldr sessions simultaneously, using a separate control file for each session. The session started first completes successfully, but the session started second gives the following error.
    Will anyone please help to sort out the problem?
    Error
    "SQL*Loader-951: Error calling once/load initialization
    ORA-00604: error occurred at recursive SQL level 1
    ORA-00054: resource busy and acquire with NOWAIT specified "
    I'm using Oracle 9.0.1.1.1.
    The options I tried with both sessions are:
    DIRECT=TRUE PARALLEL=TRUE
    The loading method is APPEND.
    Thanks in Advance,
    Tom

    I've got a similar problem.
    I'm running Informatica which runs SQL*Loader via OCI.
    With direct & parallel, my indexes give me ORA-26002 - quite understandable.
    As it's all wrapped up, I cannot specify SKIP_INDEX_MAINTENANCE directly.
    1) Is there a way to give SQL*Loader some default parameters? I read something about a file shrept.lst but I cannot find a reference in the Oracle documentation. Anything useful down this road?
    2) When I omit parallel, what am I losing? I load one file into one partition of one table. I read the SQL*Loader documentation but didn't get the message of the 'Intrasegment Parallel Loading' paragraph. Does it perform parallel processing itself? Or does it just enable parallelism (if there will be a second SQL*Loader)? If it's all about 'enabling', I can easily omit it - I know there won't be a second SQL*Loader for this partition.
    regards,
    Kathrin
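    For what it's worth, "Intrasegment Parallel Loading" just means several sqlldr sessions appending into the same segment at the same time; PARALLEL=TRUE only permits that concurrency, it does not spawn workers itself. A minimal sketch (credentials and control files are placeholders; Unix shell assumed):

    sqlldr userid=scott/tiger control=part1.ctl direct=true parallel=true &
    sqlldr userid=scott/tiger control=part2.ctl direct=true parallel=true &
    wait

    So if there will never be a second session loading that partition, omitting PARALLEL should cost you nothing.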

  • Direct Path Loading

    Sir,
    I tried the Oracle utility SQLLDR. I created a table named
    TEST1 (id NUMBER PRIMARY KEY, var VARCHAR2(50)).
    I correctly load data into TEST1 using a normal (not direct) load; the bad file is filled with the duplicate records.
    When a direct path load is used, all data are loaded into TEST1, violating the PRIMARY KEY constraint.
    After this I tried to insert data into TEST1; an error shows that the index is in an unusable state.
    When direct path loading is used again, it also shows the same error as above.
    Please give me valuable information about the working of direct path loading and data insertion.
    Regards Manojvilayil
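    What is described is expected direct-path behaviour: unique constraints are not verified row by row during a direct path load; they are checked when the index is rebuilt at the end, so the duplicates get in and the PRIMARY KEY index is left UNUSABLE. A recovery sketch (the index name is hypothetical - check USER_INDEXES; the DELETE keeps one row per key):

    -- find the duplicate keys
    SELECT id, COUNT(*) FROM test1 GROUP BY id HAVING COUNT(*) > 1;

    -- remove all but the first row of each duplicate key
    DELETE FROM test1 t
    WHERE t.rowid > (SELECT MIN(t2.rowid) FROM test1 t2 WHERE t2.id = t.id);

    -- rebuild the unusable primary key index
    ALTER INDEX test1_pk REBUILD;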

    OCI_ATTR_DATA_SIZE in OCI Programmer's guide 9.2:
    Steps used in OCI defining - example: OCIAttrGet() into <undefined>
    Attributes of Type Attributes: ub4
    Attributes of Collection Types: ub2
    Attributes belonging to Columns of Tables or Views: ub2
    Attributes belonging to Arguments/Results: ub2
    Retrieving Column Data Types For a Table - example: OCIAttrGet() into ub4
    Describing with Character Length Semantics - example: OCIAttrGet() into ub4
    Creating a Parameter Descriptor for OCIType Calls: ub4
    Handle and Descriptor Attributes, Example on page A-72: OCIAttrSet from <undefined> with length 0
    Handle and Descriptor Attributes, Direct Path Column Parameter Attributes, Attribute Data Type: "ub2 */ub2 *".
    OCI_ATTR_DATA_SIZE in OCI Programmer's guide 12c R1:
    Implicit describe of a result - example: OCIAttrGet() into ub2
    Steps used in OCI defining - example: OCIAttrGet() into <undefined>
    Attributes of Type Attributes: ub2
    Attributes of Collection Types: ub2
    Attributes of Columns of Tables or Views: ub2
    Attributes of Arguments and Results: ub2
    Example 6–2: OCIAttrGet() into ub2
    Example 6–6: OCIAttrGet() into ub2
    Creating a Parameter Descriptor for OCIType Calls: ub2
    Handle and Descriptor Attributes, Example on page A-78: OCIAttrSet from <undefined> with length 0
    Handle and Descriptor Attributes, Direct Path Column Parameter Attributes, Attribute Data Type: "ub4 */ub4 *".
    Apparently the 12c manual is already adapted to ub4.
    Maybe it is a good idea to be careful with all length definitions?

  • Direct Path & Timestamps

    I have already checked Google & the forum's search, couldn't find any related topics, sorry if there is any re-posted related material.
    I am trying to convert to direct path loading rather than conventional and I am running into an issue. This error occurs for my timestamp conversions...
    SQL*Loader-951: Error calling once/load initialization
    ORA-26052: Unsupported type 180 for SQL expression on column TIME_STAMP.

    This is an example of my control file:
    load data
      append into table p_job
    fields terminated by '\t'
    trailing nullcols
    ( application_id, app_name, avg_task_duration, batch_id,
    broker_id, dept_name, description,
    time_stamp "to_timestamp(:time_stamp, 'YYYY-MM-DD HH24:MI:SS.FF')" )

    So obviously it does not like my conversion... I have seen bits and pieces over the Web that it might have problems with timestamps?
    Can anyone clarify and possibly provide an idea for a solution? I have already seen one solution: create a "pre"-staging table with the timestamp as text and then convert to timestamp and insert into the primary staging table. I do not particularly like this idea as it might not really save me time performance-wise, although I have not conducted any tests yet (dinner time), so if I am wrong please correct me.
    Thanks,
    -Tim
    Sorry almost forgot...
    Oracle 10g
    Windows 2000

    The problem is SQL functions cannot be used in direct path loads.
    In your case, you just need to set the environment variable NLS_TIMESTAMP_FORMAT to tell SQL*Loader what format to expect the timestamp to be in:
    set NLS_TIMESTAMP_FORMAT=YYYY-MM-DD HH24:MI:SS.FF
    ..and remove the reference to the to_timestamp function completely from the controlfile.
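    Putting that together, a minimal sketch (Windows syntax, since the poster is on Windows 2000; credentials and file names are placeholders):

    set NLS_TIMESTAMP_FORMAT=YYYY-MM-DD HH24:MI:SS.FF
    sqlldr userid=scott/tiger control=p_job.ctl direct=true

    with the column left plain in the control file:

    ( application_id, app_name, avg_task_duration, batch_id,
      broker_id, dept_name, description,
      time_stamp )

    An alternative that should also work in direct path is giving the field an explicit datetime type with a mask, e.g. time_stamp TIMESTAMP "YYYY-MM-DD HH24:MI:SS.FF", which avoids depending on the client environment.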

  • Bulkloading vs Direct Path

    Having read this article
    http://www.oracle.com/technology/oramag/oracle/06-mar/o26performance.html
    and being a data warehouse person who firmly believes that target tables should have row-level error processing - which is also clearly noted in Oracle's documentation (direct path DML loading with an error table is a significant feature, which allows SET inserts & updates with an error bucket) - are people steering away from bulk-loaded cursors for the new direct path? It puts the failed rows in an error bucket, and the article states it is much faster.
    Too many times I have been on data warehouse projects where a tool or just plain laziness has led to bulk SET-logic inserts into target tables (which is all-or-none processing), where eventually the job fails and some poor individual has the task of going through 100,000s of rows to find the culprit. Now, with this new direct path DML loading, you can do a SET-logic process to a target table and have an error bucket for failed rows; this is huge for data warehousing. I admit to not having played with this yet, but I am looking for anyone who has used this feature in a data warehouse and for their feedback on it.
    Thanks

    DML error logging is something to performance-test for a data warehouse; especially if you are not doing a direct path load, it can be costly.
    http://www.oracle-developer.net/display.php?id=330
    http://tkyte.blogspot.com/2005/07/how-cool-is-this-part-ii.html
    http://www.rittmanmead.com/2005/12/04/performance-issues-with-dml-error-logging-and-conventional-path-inserts/
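    For reference, a minimal sketch of the feature under discussion (table names are placeholders; DBMS_ERRLOG.CREATE_ERROR_LOG creates the ERR$_ table):

    -- one-time setup: create the error "bucket" table
    EXEC DBMS_ERRLOG.CREATE_ERROR_LOG('TARGET_TABLE')

    -- set-based direct path insert; failing rows land in ERR$_TARGET_TABLE
    INSERT /*+ APPEND */ INTO target_table
    SELECT * FROM staging_table
    LOG ERRORS INTO err$_target_table ('batch_1') REJECT LIMIT UNLIMITED;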

  • WAIT = 'direct path read temp' in session

    Hi;
    select * from v$version
    Oracle Database 11g Release 11.2.0.1.0 - 64bit Production
    In our development system, a query ("insert into X") using a couple of session global temporary tables, which used to run in under 5 minutes, is now taking 40 minutes!
    SQL Developer's Sessions view shows a wait: "direct path read temp"
    Any hints on what might cause this, and possible solutions?
    Trace file of the session looks like this:
    *** 2013-03-18 15:56:22.871
    WAIT #20: nam='direct path read temp' ela= 106 file number=201 first dba=46685 block cnt=31 obj#=61321 tim=1363614982871399
    *** 2013-03-18 15:56:25.354
    WAIT #20: nam='direct path read temp' ela= 90 file number=201 first dba=46336 block cnt=31 obj#=61321 tim=1363614985354148
    *** 2013-03-18 15:56:28.098
    WAIT #20: nam='direct path read temp' ela= 86 file number=201 first dba=46367 block cnt=31 obj#=61321 tim=1363614988098575
    *** 2013-03-18 15:56:32.302
    WAIT #20: nam='direct path read temp' ela= 112 file number=201 first dba=69438 block cnt=31 obj#=61321 tim=1363614992302296
    WAIT #20: nam='direct path read temp' ela= 93 file number=201 first dba=69469 block cnt=31 obj#=61321 tim=1363614992302484
    WAIT #20: nam='direct path read temp' ela= 95 file number=201 first dba=68030 block cnt=31 obj#=61321 tim=1363614992302888
    WAIT #20: nam='direct path read temp' ela= 93 file number=201 first dba=66719 block cnt=31 obj#=61321 tim=1363614992303265
    WAIT #20: nam='direct path read temp' ela= 107 file number=201 first dba=65726 block cnt=31 obj#=61321 tim=1363614992303657
    WAIT #20: nam='direct path read temp' ela= 94 file number=201 first dba=64702 block cnt=31 obj#=61321 tim=1363614992304037
    WAIT #20: nam='direct path read temp' ela= 97 file number=201 first dba=63709 block cnt=31 obj#=61321 tim=1363614992304421
    WAIT #20: nam='direct path read temp' ela= 94 file number=201 first dba=62623 block cnt=31 obj#=61321 tim=1363614992304820
    WAIT #20: nam='direct path read temp' ela= 111 file number=201 first dba=61471 block cnt=31 obj#=61321 tim=1363614992305227
    WAIT #20: nam='direct path read temp' ela= 121 file number=201 first dba=60606 block cnt=31 obj#=61321 tim=1363614992305764
    WAIT #20: nam='direct path read temp' ela= 100 file number=201 first dba=59392 block cnt=31 obj#=61321 tim=1363614992306175
    WAIT #20: nam='direct path read temp' ela= 101 file number=201 first dba=58589 block cnt=31 obj#=61321 tim=1363614992306579
    WAIT #20: nam='direct path read temp' ela= 98 file number=201 first dba=57503 block cnt=31 obj#=61321 tim=1363614992306965
    WAIT #20: nam='direct path read temp' ela= 93 file number=201 first dba=56510 block cnt=31 obj#=61321 tim=1363614992307342
    WAIT #20: nam='direct path read temp' ela= 94 file number=201 first dba=55296 block cnt=31 obj#=61321 tim=1363614992307742
    WAIT #20: nam='direct path read temp' ela= 96 file number=201 first dba=54272 block cnt=31 obj#=61321 tim=1363614992308149
    WAIT #20: nam='direct path read temp' ela= 131 file number=201 first dba=53407 block cnt=31 obj#=61321 tim=1363614992308651
    WAIT #20: nam='direct path read temp' ela= 108 file number=201 first dba=52480 block cnt=31 obj#=61321 tim=1363614992309129
    WAIT #20: nam='direct path read temp' ela= 99 file number=201 first dba=52511 block cnt=31 obj#=61321 tim=1363614992309273

    Tkprof output... notice the big counts for the direct path write temp and direct path read temp events!
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call      count       cpu    elapsed       disk      query    current       rows
    -------  ------  -------- ---------- ---------- ---------- ---------- ----------
    Parse         2      0.17       0.18          0          0          0          0
    Execute       2      0.00       0.00          0          0          0          0
    Fetch         4    206.24     207.66      18960        755         21          8
    -------  ------  -------- ---------- ---------- ---------- ---------- ----------
    total         8    206.42     207.84      18960        755         21          8
    Misses in library cache during parse: 2
    Elapsed times include waiting on the following events:
    Event waited on                        Times Waited  Max. Wait  Total Waited
    -------------------------------------  ------------  ---------  ------------
    SQL*Net message to client                         5       0.00          0.00
    SQL*Net message from client                       5      11.01         19.27
    db file sequential read                           6       0.00          0.00
    Disk file operations I/O                         15       0.00          0.00
    asynch descriptor resize                         84       0.00          0.00
    direct path write temp                         1264       0.19          1.25
    direct path read temp                          1264       0.00          0.04
    control file sequential read                     42       0.00          0.00
    db file single write                              3       0.00          0.00
    control file parallel write                       9       0.00          0.00
    rdbms ipc reply                                   2       0.00          0.00
    local write wait                                 12       0.00          0.00
    log file sync                                     2       0.00          0.00
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call      count       cpu    elapsed       disk      query    current       rows
    -------  ------  -------- ---------- ---------- ---------- ---------- ----------
    Parse        85      0.04       0.04          0          0          0          0
    Execute     899      0.18       0.24         22         47          7          4
    Fetch      1393      0.04       0.69        204       3200          0       7002
    -------  ------  -------- ---------- ---------- ---------- ---------- ----------
    total      2377      0.28       0.98        226       3247          7       7006
    Misses in library cache during parse: 55
    Misses in library cache during execute: 51
    Elapsed times include waiting on the following events:
    Event waited on                        Times Waited  Max. Wait  Total Waited
    -------------------------------------  ------------  ---------  ------------
    db file sequential read                         206       0.03          0.65
    latch: shared pool                                4       0.00          0.00
    Disk file operations I/O                          2       0.00          0.00
    db file scattered read                            3       0.03          0.04
    5 user SQL statements in session.
    896 internal SQL statements in session.
    901 SQL statements in session.
    Trace file: SXDB_ora_25080.trc
    Trace file compatibility: 11.1.0.7
    Sort options: default
    1 session in tracefile.
    5 user SQL statements in trace file.
    896 internal SQL statements in trace file.
    901 SQL statements in trace file.
    41 unique SQL statements in trace file.
    21517 lines in trace file.
    217 elapsed seconds in trace file.

  • SQL*LOADER Conditional using conventional method

    I need to load only the records that have a certain value, using SQL*Loader with the conventional method.
    Thanks

    Have you looked at the WHEN clause?
    e.g.
    INTO TABLE dept
    WHEN recid = 1
    ( recid ... etc
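    Filled out, a conditional conventional-path load looks something like this sketch (file, table, and field names are hypothetical; rows failing the WHEN clause go to the discard file):

    LOAD DATA
    INFILE 'dept.dat'
    APPEND
    INTO TABLE dept
    WHEN recid = '1'
    FIELDS TERMINATED BY ','
    ( recid, deptno, dname, loc )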

  • SQL*LOADER, the WHEN clause and WILDCARDS

    Has anybody ever used wildcards in a WHEN clause in the SQL*Loader control file?
    With WHEN string_2_load = 'GOOD', all 'good' rows load.
    With WHEN string_2_load = 'GO%', all rows fail the WHEN clause and end up in the discard file.
    Thanks in advance - if I don't go crazy first
    burt

    I have also faced a similar problem like this; the WHEN clause only supports simple equality/inequality comparisons against a literal, not wildcards. Try this control file:
    LOAD DATA
    INFILE 'DATA.dat'
    BADFILE 'MLIMA.bad'
    INTO TABLE Brok_Gl_Interface
    APPEND
    WHEN record_type = '10'
    FIELDS TERMINATED BY ","
    OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    ( record_type CHAR,
      currency CHAR,
      entity CHAR,
      cost_centre CHAR,
      usd_account CHAR,
      amount CHAR
    )
    INTO TABLE Brok_Gl_Interface
    WHEN record_type = '99'
    FIELDS TERMINATED BY ","
    OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    ( record_type POSITION(1) CHAR,
      record_count CHAR
    )
    INTO TABLE Brok_Gl_Interface
    WHEN record_type = '00'
    FIELDS TERMINATED BY ","
    OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    ( record_type POSITION(1) CHAR,
      run_date CHAR,
      effective_date CHAR
    )

    Regards,
    Mohana

  • Sql -loader and indexes

    Hi
    I'm working with SQL*Loader in Oracle 8. I'm loading a table with one index. Can I have any problem with that index - an unusable index or something similar?
    Thanks in advance

    If an index is in a direct load state, it usually means that a direct path SQL*Loader (sqlldr) load was run against the underlying table,
    and the indexes were defined at the time of the load.
    If the indexes were left in a direct load state, it means the direct path load failed while re-creating/merging the indexes.
    Any attempt to use such an index will result in ORA-01502:
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96525/e1500.htm#1000531
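    A quick way to check for and recover from that state (a sketch; the index name is hypothetical, the views are standard):

    -- list indexes that are not usable
    SELECT index_name, status
    FROM user_indexes
    WHERE status <> 'VALID';

    -- rebuild an affected index
    ALTER INDEX my_index REBUILD;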

  • SQL Loader and foreign characters in the data file problem

    Hello,
    I have run into an issue which I can't find an answer for. When I run SQL Loader, one of my control files is used to get file content (LOBFILE) and one of the fields in the data file has a path to that file. The control file looks like:
    LOAD DATA
    INFILE 'PLACE_HOLDER.dat'
    INTO TABLE iceberg.rpt_document_core APPEND
    FIELDS TERMINATED BY ','
    ( doc_core_id "iceberg.seq_rpt_document_core.nextval",
    -- created_date POSITION(1) date "yyyy-mm-dd:hh24:mi:ss",
    created_date date "yyyy-mm-dd:hh24:mi:ss",
    document_size,
    hash,
    body_format,
    is_generic_doc,
    is_legacy_doc,
    external_filename FILLER char(275) ENCLOSED by '"',
    body LOBFILE(external_filename) terminated by EOF
    )
    A sample data file looks like:
    0,2012-10-22:10:09:35,21,BB51344DD2127002118E286A197ECD4A,text,N,N,"E:\tmp\misc_files\index_testers\foreign\شیمیایی.txt"
    0,2012-10-22:10:09:35,17,CF85BE76B1E20704180534E19D363CF8,text,N,N,"E:\tmp\misc_files\index_testers\foreign\ลอบวางระเบิด.txt"
    0,2012-10-22:10:09:35,23552,47DB382558D69F170227AA18179FD0F0,binary,N,N,"E:\tmp\misc_files\index_testers\foreign\leesburgis_á_ñ_é_í_ó_ú_¿_¡_ü_99.doc"
    0,2012-10-22:10:09:35,17,83FCA0377445B60CE422DE8994900A79,binary,N,N,"E:\tmp\misc_files\index_testers\foreign\làm thế nào bạn làm ngày hôm nay"
    The problem is that when I run this, SQL Loader throws an error that it can't find the file. It appears that it can't interpret the foreign characters in a way that allows it to find that path. I have tried adding a CHARACTERSET (using AL32UTF8 or UTF8) value in the control file, but that only has some success with Western languages, not the ones listed above. Also, there is no fixed set of languages that could be found in the data file; it essentially could be any language.
    Does anyone know if there is a way to somehow get SQL Loader to "understand" the file system paths when a folder and/or file name could be in some other language?
    Thanks for any thoughts - Peter

    Thanks for the reply Harry. If I try to open the file in various text editors like Wordpad, Notepad, GVIM, and Textpad, they all display the foreign characters differently. Only Notepad comes close to displaying the characters properly. I have a C# app that will read the file and display the contents, and it renders it fine. If you look at the directory of files in Windows Explorer, they are all displayed properly. So it seems things like .NET and Windows have some mechanism to understand the characters in order to render them properly. Other applications, again like Wordpad, do not know how to render them properly. It would seem that whatever SQL Loader is using to "read" the data files is also not rendering the characters properly, which prevents it from finding the directory path to the file. If I add "CHARACTERSET AL32UTF8" in the control file, all is fine when dealing with Western languages (e.g. German, Spanish) but not for the Eastern languages (e.g. Thai, Chinese). So... telling SQL Loader to use a character set seems to work, but not in all cases. AL32UTF8 is the character set that the Oracle database was created with. I have not had any luck if I try to set the CHARACTERSET to whatever the Thai character set is, for example. The problem there, though, is that even if that did work, I can't target specific languages because the data could come from anywhere. It's like I need some sort of global "superset" character set to use. It seems like CHARACTERSET is the right track to follow, but I am not sure, and even if it is, is there a way to handle all languages?
    Thanks - Peter
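    For anyone trying the same thing, the clause goes immediately after LOAD DATA (a sketch based on the control file above; CHARACTERSET governs how sqlldr interprets the data file, while the client's NLS_LANG setting governs the control file itself):

    LOAD DATA
    CHARACTERSET AL32UTF8
    INFILE 'PLACE_HOLDER.dat'
    INTO TABLE iceberg.rpt_document_core APPEND
    ...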

  • Installing SQL LOADER

    hi there,
    I need to install only SQL*Loader from the Oracle 9i Enterprise Edition software. How is that possible?
    Please tell me the steps.

    SQL*Loader is implicitly installed when you perform the Enterprise Edition installation.
    It is not even required to be Enterprise Edition for it to be installed; it can be installed from any Oracle edition (rdbms install), as well as from the client.
    If you are not able to find the SQL*Loader executable:
    # Make sure your PATH environment variable includes ORACLE_HOME\bin, where ORACLE_HOME is the directory where Oracle was installed.
    # The executable is sqlldr.exe.
    Madrid.
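    A quick way to verify (a sketch for a Windows client, which the .exe suffix suggests; sqlldr invoked with no arguments just prints its version and usage banner):

    echo %ORACLE_HOME%
    %ORACLE_HOME%\bin\sqlldr.exe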

  • SQL Loader Parallel Mode

    Hi,
    I have a similar issue: I have a requirement to load 270 million records per day into a single table (having no constraints & indexes), where every CTL file contains 37000 records.
    I have a machine with 16 CPUs and 2 threads per CPU. I am using the PARALLEL=TRUE, MULTITHREADING=TRUE, DIRECT=TRUE options in SQL*Loader.
    E.g.: OPTIONS( ERRORS=100000, SILENT=all, MULTITHREADING=TRUE, DIRECT=TRUE, PARALLEL=TRUE, SKIP_INDEX_MAINTENANCE=TRUE, streamsize=1048576, readsize=1048576, columnarrayrows=8000 )
    I also enabled a PARALLEL degree and set the value to 32.
    When I run 4 sqlldr sessions with the above configuration, it takes 4-5 seconds in total to load 4 CTL files of 37000 records each. For the first 50 million records SQL*Loader behaved normally, loading the 4 CTL files in 4-5 seconds, but once the table passed 50 million records, the time taken to process the 4 CTL files gradually increased to 40-70 seconds, and it kept increasing as the number of records in the table grew.
    I don't know why sqlldr behaves like this after 50 million records are in the table.
    Below are the parallel parameters set on the machine:
    SQL> show parameter parallel;
    NAME                                 TYPE        VALUE
    fast_start_parallel_rollback         string      LOW
    parallel_adaptive_multi_user         boolean     TRUE
    parallel_automatic_tuning            boolean     FALSE
    parallel_degree_limit                string      CPU
    parallel_degree_policy               string      MANUAL
    parallel_execution_message_size      integer     16384
    parallel_force_local                 boolean     FALSE
    parallel_instance_group              string
    parallel_io_cap_enabled              boolean     FALSE
    parallel_max_servers                 integer     80
    parallel_min_percent                 integer     0
    parallel_min_servers                 integer     0
    parallel_min_time_threshold          string      AUTO
    parallel_server                      boolean     FALSE
    parallel_server_instances            integer     1
    parallel_servers_target              integer     32
    parallel_threads_per_cpu             integer     2
    recovery_parallelism                 integer     0
    Kindly reply to the above query.

