SQL*Loader Performance

Hi,
I have a question about SQL*Loader that I could not answer from Google or the documentation: is there any way to check whether SQL*Loader is inserting records via the direct path or the conventional path? As we know, there are restrictions on direct loads; direct-path inserts do not support everything that conventional inserts do, and their functionality is restricted. If the database engine is not able to execute a direct-path insert, the operation is silently converted into a conventional insert. I have instructed SQL*Loader to insert using direct=true, and parallel as well, but it takes 15 minutes to load 4 million records into the table, and I have observed that the transfer rate is a bit slow. I have Oracle 11gR2 on Windows Server 2008 with 40 GB RAM and SAN storage. How can I verify, during execution, whether SQL*Loader is loading via the direct path or has silently converted to the conventional path? Here are my sample command line and control file. Will the functions used in the control file convert the direct path load to a conventional one?
/c sqlldr userid='MSNV5Star/Aa123456@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=srv01)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=orcl)))' control='C:\ControlFile.txt' log='C:\Final Data.log' bad='C:\Final Data.bad' direct=true PARALLEL=TRUE skip=1 Errors=5000000
LOAD DATA 
INFILE 'C:\adeel loading\in\A06052010.txt' APPEND
INTO TABLE GN_FILE_DATA_TABLE
FIELDS TERMINATED BY "     " TRAILING NULLCOLS
(
Operational_Date "to_date(:Operational_Date, 'YYYYMMDD')" ,
Store_Code "TRIM(:Store_Code)" ,
Txn_Void_Flag "TRIM(:Txn_Void_Flag)" ,
Txn_Staff_Flag "TRIM(:Txn_Staff_Flag)" ,
Txn_Aborted_Flag "TRIM(:Txn_Aborted_Flag)"
)

Please do not post duplicates - SQL*Loader Performance
Srini
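On the verification question: the SQL*Loader log file states the path explicitly; look for the "Path used: Direct" or "Path used: Conventional" line near the top of the log (you can see this line in the logs quoted below). As far as I recall, SQL expressions on fields such as TRIM and to_date have been supported in direct path loads since 9i, so on 11gR2 they alone should not silently force a conventional load; the log line is the authoritative answer. While the load is running, you can also infer the path from the table lock, since a direct path load takes the table in exclusive mode. A sketch, assuming the table name from the control file above; reading the lock mode this way is experience rather than documented behaviour:

-- Run from a second session while sqlldr is working.
-- LMODE 6 (exclusive) points to a direct path load;
-- LMODE 3 (row exclusive) points to a conventional load.
SELECT s.sid, s.program, l.lmode
FROM   v$session s, v$lock l, dba_objects o
WHERE  l.sid = s.sid
AND    l.type = 'TM'
AND    o.object_id = l.id1
AND    o.object_name = 'GN_FILE_DATA_TABLE';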

Similar Messages

  • Sql loader performance problem with xml

    Hi,
    I have to load a 400 MB XML file into my local machine's free Oracle DB.
    I have tested a one-record XML and was able to load it successfully, but the 400 MB file has been running for half an hour and does not even seem to have started.
    Is that normal? Is there any chance it will eventually load if I just wait?
    Is there any faster solution?
    I have created the table below:
    CREATE TABLE test_xml (
    COL_ID VARCHAR2(1000),
    IN_FILE XMLTYPE
    )
    XMLTYPE IN_FILE STORE AS CLOB
    and the control file below:
    LOAD DATA
    CHARACTERSET UTF8
    INFILE 'test.xml'
    APPEND
    INTO TABLE product_xml
    (
    col_id filler CHAR (1000),
    in_file LOBFILE(CONSTANT "test.xml") TERMINATED BY EOF
    )
    Am I doing anything wrong? Thanks for any advice.

    SQL*Loader: Release 11.2.0.2.0 - Production on H. Febr. 11 18:57:09 2013
    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
    Control File: prodxml.ctl
    Character Set UTF8 specified for all input.
    Data File: test.xml
    Bad File: test.bad
    Discard File: none specified
    (Allow all discards)
    Number to load: ALL
    Number to skip: 0
    Errors allowed: 5000
    Bind array: 64 rows, maximum of 256000 bytes
    Continuation: none specified
    Path used: Conventional
    Table PRODUCT_XML, loaded from every logical record.
    Insert option in effect for this table: APPEND
    Column Name Position Len Term Encl Datatype
    COL_ID FIRST 1000 CHARACTER
    (FILLER FIELD)
    IN_FILE DERIVED * EOF CHARACTER
    Static LOBFILE. Filename is bv_test.xml
    Character Set UTF8 specified for all input.
    SQL*Loader-605: Non-data dependent ORACLE error occurred -- load discontinued.
    ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
    Table PRODUCT_XML:
    0 Rows successfully loaded.
    0 Rows not loaded due to data errors.
    0 Rows not loaded because all WHEN clauses were failed.
    0 Rows not loaded because all fields were null.
    Space allocated for bind array: 256 bytes(64 rows)
    Read buffer bytes: 1048576
    Total logical records skipped: 0
    Total logical records rejected: 0
    Total logical records discarded: 0
    Run began on H. Febr. 11 18:57:09 2013
    Run ended on H. Febr. 11 19:20:54 2013
    Elapsed time was: 00:23:45.76
    CPU time was: 00:05:05.50
    This is the log.
    I have truncated everything; I cannot understand why I am not able to load 400 MB into 4 GB.
    This is on Windows, not licensed, 32-bit.
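    The ORA-01652 above means the TEMP tablespace filled up while the 400 MB document was being parsed and transformed, so the load died before inserting anything. One option is simply to give TEMP more room before retrying; a sketch, where the file path and sizes are assumptions to adjust for your installation (note also that the free Express Edition caps total user data, which may matter with files this size):
    ALTER TABLESPACE temp
      ADD TEMPFILE 'C:\ORACLE\ORADATA\XE\TEMP02.DBF'
      SIZE 1G AUTOEXTEND ON NEXT 256M MAXSIZE 4G;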

  • Loading huge file with Sql-Loader from Java

    Hi,
    I have a csv file with approx. 3.5 million records.
    I load this data with sqlldr from within java like this:
            String command = "sqlldr userid=" + user + "/" + pass
                    + "@" + service + " control='" + ctlFile + "'";
            System.out.println(command);
            if (System.getProperty("os.name").contains("Windows")) {
                p = Runtime.getRuntime().exec(new String[] {"cmd", "/C", command});
            } else {
                // pass the command as a single argument: exec(String) splits
                // "sh -c ..." on every space, so sh would only see the first token
                p = Runtime.getRuntime().exec(new String[] {"sh", "-c", command});
            }
    It does what I want (loads the data into the table), BUT it takes too much time. Is there a faster way to load data into an Oracle DB from within Java?
    Thanks, any advice is very welcome

    Have your DBA work on this issue - they can monitor and check performance of SQL*Loader
    SQL*Loader performance tips          [Document 28631.1]
    SQL*LOADER SLOW PERFORMANCE          [Document 1026145.6]
    Master Note for SQL*Loader          [Document 1264730.1]
    HTH
    Srini

  • SQL*Loader and multiple files

    Hello, I am tasked with loading tables with 21+ million rows. Will SQL*Loader perform better with one large file or with many smaller files? Is there an ideal file size for optimal performance? Thank you
    David

    Don, when I tried to load a 21M row table using direct path, I got the following messages:
    Record 2373: Rejected - Error on table STAGE_CUSTOMER.
    ORA-03113: end-of-file on communication channel
    SQL*Loader-926: OCI error while uldlfca:OCIDirPathColArrayLoadStream for table STAGE_CUSTOMER
    SQL*Loader-2026: the load was aborted because SQL Loader cannot continue.
    SQL*Loader-925: Error while uldlgs: OCIStmtExecute (ptc_hp)
    ORA-03114: not connected to ORACLE
    SQL*Loader-925: Error while uldlgs: OCIStmtFetch (ptc_hp)
    ORA-24338: statement handle not executed
    Here is the SQL*Loader log:
    SQL*Loader: Release 9.2.0.1.0 - Production on Thu Apr 26 15:38:29 2007
    Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
    Control File: stage_customer.ctl
    Character Set UTF8 specified for all input.
    First primary datafile stage_Customer_20070301.csv has a
    utf8 byte order mark in it.
    Data File: stage_Customer_20070301.csv
    Bad File: stage_Customer_20070301.bad
    Discard File: none specified
    (Allow all discards)
    Number to load: ALL
    Number to skip: 0
    Errors allowed: 1000
    Continuation: none specified
    Path used: Direct
    Silent options: FEEDBACK
    Table STAGE_CUSTOMER, loaded from every logical record.
    Insert option in effect for this table: APPEND
    TRAILING NULLCOLS option in effect
    Column Name Position Len Term Encl Datatype
    ROW_ID SEQUENCE (MAX, 1)
    CUSTOMER_ACCT_NUM FIRST * , O(") CHARACTER
    IS_DELETED NEXT * , O(") CHARACTER
    SQL string for column : "CASE WHEN UPPER(:Is_Deleted) IN ('FALSE','0') THEN 0 ELSE 1 END"
    NAME_PREFIX NEXT * , O(") CHARACTER
    FIRST_NAME NEXT * , O(") CHARACTER
    MIDDLE_NAME NEXT * , O(") CHARACTER
    LAST_NAME NEXT * , O(") CHARACTER
    NAME_SUFFIX NEXT * , O(") CHARACTER
    NICK_NAME NEXT * , O(") CHARACTER
    ALT_FIRST_NAME NEXT * , O(") CHARACTER
    ALT_LAST_NAME NEXT * , O(") CHARACTER
    MARKETING_SOURCE_ID NEXT * , O(") CHARACTER
    HOME_PHONE NEXT * , O(") CHARACTER
    WORK_PHONE NEXT * , O(") CHARACTER
    MOBILE_PHONE NEXT * , O(") CHARACTER
    ALTERNATE_PHONE NEXT * , O(") CHARACTER
    EMAIL_ADDR NEXT * , O(") CHARACTER
    ALT_EMAIL_ADDR NEXT * , O(") CHARACTER
    BIRTH_DATE NEXT * , O(") CHARACTER
    SQL string for column : "TRUNC(TO_DATE(:Birth_Date, 'MM/DD/YYYY HH24:MI:SS'))"
    SALES_CHANNEL_ID NEXT * , O(") CHARACTER
    SQL string for column : "decode(:Sales_Channel_id,NULL,NULL,NULL)"
    ASSOCIATE_NUMBER NEXT * , O(") CHARACTER
    ALT_ASSOCIATE_NUMBER NEXT * , O(") CHARACTER
    UPDATE_LOCATION_CD NEXT * , O(") CHARACTER
    UPDATE_DATE NEXT * , O(") CHARACTER
    SQL string for column : "TRUNC(TO_DATE(:Update_Date, 'MM/DD/YYYY HH24:MI:SS'))"
    CUSTOMER_LOGIN_NAME NEXT * , O(") CHARACTER
    DISCOUNT_CD NEXT * , O(") CHARACTER
    DISCOUNT_PERCENT NEXT * , O(") CHARACTER
    BUSINESS_NAME NEXT * , O(") CHARACTER
    POS_TAX_FLAG NEXT * , O(") CHARACTER
    POS_TAX_PROMPT NEXT * , O(") CHARACTER
    SQL string for column : "CASE WHEN UPPER(:POS_TAX_PROMPT) IN ('FALSE','0') THEN 0 ELSE 1 END"
    POS_DEFAULT_TAX_ID NEXT * , O(") CHARACTER
    POS_TAX_ID_EXPIRATION_DATE NEXT * , O(") CHARACTER
    POS_AUTHORIZED_USER_FLAG NEXT * , O(") CHARACTER
    SQL string for column : "CASE WHEN UPPER(:POS_AUTHORIZED_USER_FLAG) IN ('FALSE','0') THEN 0 ELSE 1 END"
    POS_ALLOW_PURCHASE_ORDER_FLAG NEXT * , O(") CHARACTER
    SQL string for column : "CASE WHEN UPPER(:POS_ALLOW_PURCHASE_ORDER_FLAG) IN ('FALSE','0') THEN 0 ELSE 1 END"
    ADDRESS1 NEXT * , O(") CHARACTER
    ADDRESS2 NEXT * , O(") CHARACTER
    ADDRESS3 NEXT * , O(") CHARACTER
    CITY NEXT * , O(") CHARACTER
    STATE_CD NEXT * , O(") CHARACTER
    POSTAL_CD NEXT * , O(") CHARACTER
    COUNTRY_CD NEXT * , O(") CHARACTER
    ALLOW_UPDATE NEXT * , O(") CHARACTER
    SQL string for column : "CASE WHEN UPPER(:ALLOW_UPDATE) IN ('FALSE','0') THEN 0 ELSE 1 END"
    ACTION_CODE NEXT * , O(") CHARACTER
    SQL string for column : "CASE WHEN UPPER(:action_code) IN ('FALSE','0') THEN 0 ELSE 1 END"
    ACCOUNT_TYPE_ID NEXT * , O(") CHARACTER
    LOCALE_CD NEXT * , O(") CHARACTER
    SQL string for column : "CASE WHEN :Locale_CD IS NOT NULL AND :Locale_CD LIKE '__-__' THEN :Locale_CD ELSE 'en-US' END"
    IS_READY_FOR_PROCESSING CONSTANT
    Value is '1'
    IS_BUSINESS CONSTANT
    Value is '0'
    HAD_ERRORS CONSTANT
    Value is '0'
    Record 2373: Rejected - Error on table STAGE_CUSTOMER.
    ORA-03113: end-of-file on communication channel
    SQL*Loader-926: OCI error while uldlfca:OCIDirPathColArrayLoadStream for table STAGE_CUSTOMER
    SQL*Loader-2026: the load was aborted because SQL Loader cannot continue.
    SQL*Loader-925: Error while uldlgs: OCIStmtExecute (ptc_hp)
    ORA-03114: not connected to ORACLE
    SQL*Loader-925: Error while uldlgs: OCIStmtFetch (ptc_hp)
    ORA-24338: statement handle not executed
    Table STAGE_CUSTOMER:
    0 Rows successfully loaded.
    1 Row not loaded due to data errors.
    0 Rows not loaded because all WHEN clauses were failed.
    0 Rows not loaded because all fields were null.
    Bind array size not used in direct path.
    Column array rows : 5000
    Stream buffer bytes: 256000
    Read buffer bytes: 1048576
    Total logical records skipped: 0
    Total logical records read: 3469
    Total logical records rejected: 1
    Total logical records discarded: 0
    Direct path multithreading optimization is disabled
    Run began on Thu Apr 26 15:38:29 2007
    Run ended on Thu Apr 26 15:38:30 2007
    Elapsed time was: 00:00:01.18
    CPU time was: 00:00:00.32

  • Sql loader not able to load more than 4.2 billion rows

    Hi,
    I am facing a problem with the SQL*Loader utility.
    The SQL*Loader process stops after loading 342 GB of data, i.e. it seems to load 4.294 billion rows out of 6.9 billion, and the loader process completes without throwing any error.
    Is there any limit on the volume of data SQL*Loader can load?
    Also, how can I identify that the SQL*Loader process has not loaded the whole file, given that it does not throw any error?
    Thanks in advance.

    It may be a problem with the DB configuration rather than with SQL*Loader itself;
    check all the parameters. It is also worth noting that 4.294 billion is almost exactly 2^32 (4,294,967,296), which suggests a 32-bit record counter is wrapping somewhere in the load; splitting the input into smaller files and comparing the table's row count against the source line counts would confirm whether everything arrived.
    Maximizing SQL*Loader Performance
    SQL*Loader is flexible and offers many options that should be considered to maximize the speed of data loads. These include:
    1. Use Direct Path Loads - The conventional path loader essentially loads the data by using standard insert statements. The direct path loader (direct=true) loads directly into the Oracle data files and creates blocks in Oracle database block format. The fact that SQL is not being issued makes the entire process much less taxing on the database. There are certain cases, however, in which direct path loads cannot be used (for example, clustered tables). To prepare the database for direct path loads, the script $ORACLE_HOME/rdbms/admin/catldr.sql must be executed.
    2. Disable Indexes and Constraints. For conventional data loads only, disabling indexes and constraints can greatly enhance the performance of SQL*Loader (see the sketch after this list).
    3. Use a Larger Bind Array. For conventional data loads only, larger bind arrays limit the number of calls to the database and increase performance. The size of the bind array is specified using the bindsize parameter. The bind array's size is equivalent to the number of rows it contains (rows=) times the maximum length of each row.
    4. Use ROWS=n to Commit Less Frequently. For conventional data loads only, the rows parameter specifies the number of rows per commit. Issuing fewer commits will enhance performance.
    5. Use Parallel Loads. Available with direct path data loads only, this option allows multiple SQL*Loader jobs to execute concurrently.
    $ sqlldr control=first.ctl parallel=true direct=true
    $ sqlldr control=second.ctl parallel=true direct=true
    6. Use Fixed Width Data. Fixed width data format saves Oracle some processing when parsing the data. The savings can be tremendous, depending on the type of data and number of rows.
    7. Disable Archiving During Load. While this may not be feasible in certain environments, disabling database archiving can increase performance considerably.
    8. Use unrecoverable. The unrecoverable option (unrecoverable load data) disables the writing of the data to the redo logs. This option is available for direct path loads only.
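    A sketch of item 2 for a conventional load, with an assumed index name; unique indexes cannot be skipped this way, and, as far as I recall, from 10g onwards the skip_unusable_indexes parameter defaults to TRUE at the instance level, so the loading session can insert while the index is unusable:
    -- before the load: stop index maintenance
    ALTER INDEX big_table_ix UNUSABLE;
    -- ... run the conventional path load ...
    -- after the load: rebuild without generating redo
    ALTER INDEX big_table_ix REBUILD NOLOGGING;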

  • SQL Loader, direct path and indexes on partition tables

    Hi,
    I have a big partitioned table and I have two indexes on it.
    When I use SQL*Loader to upload data to my table using OPTIONS (DIRECT=TRUE) UNRECOVERABLE, it takes almost half an hour to load just one record!!
    When I remove OPTIONS (DIRECT=TRUE), it takes just two seconds to upload the same one record.
    Am I missing anything? Can I use direct path load on indexed partitioned tables and still get a reasonable load time?
    A scheduled external job loads almost 100,000 records into this table every hour, and I am trying to make the SQL*Loader run as fast as possible.
    Any help would be appreciated,
    Alan

    Hi Alan,
    How big is this table and what sort of indexes are they?
    An index update using SQL*Loader unrecoverable direct path load is achieved by an isolated sort followed by a nologging merge of the old index and the new mini-index into a new index segment (this according to seminar notes by Jonathan Lewis). This will conceivably take a long time for a large table / large index.
    Performance improvements? Are you loading all records into a new partition?
    Cheers,
    Colin
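    If each hourly batch maps cleanly onto one new partition, which is what the "loading all records into a new partition?" question above is getting at, a pattern worth testing is to direct-path load into an empty staging table of the same shape, index it, and then swap it in. A sketch with assumed table and partition names:
    -- staging_table has the same columns and matching index structure
    ALTER TABLE big_part_table
      EXCHANGE PARTITION p_current WITH TABLE staging_table
      INCLUDING INDEXES WITHOUT VALIDATION;
    The exchange is a dictionary operation, so the large index merge described above is avoided; it does not apply when a batch appends to an already-populated partition.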

  • Possibility of capturing data file name in SQL * Loader

    Hi,
    I have a requirement to capture the data file name in the staging table; is there a way to capture it in SQL*Loader, or any other way of doing it?
    Need experts suggestion please.
    Thanks,
    Genoo

    Hi Genoo.
    how do we capture the file name and stores in the temporary table
    You may use the above command mentioned in my previous post (if Linux) to populate the Test.csv file with the available file name in the directory, i.e:
    ls /some/path/*.dat | xargs -n1 basename  > /home/oracle/Test.csv
    1. Ensure the Test.csv file is populated first, e.g.:
    1,aaa
    2,bbb
    3,ccc
    2. Create a control file to load these records into temporary tables,for eg:
    load data
    infile '/home/oracle/Test.csv'
    into table file_name_upload
    fields terminated by ","
    ( id, file_name )
    3. Create the respective table in the database:
    create table file_name_upload (
      id number,
      file_name varchar2(20)
    )
    4. Load the data into temporary table
    sqlldr test/test control=/home/oracle/sqlldr_test.ctl
    Please refer notes:
    SQL*Loader - How To Load A Date Column With Fractions Of Second (Doc ID 1276259.1)
    Script To Generate SQL*Loader Control File (Doc ID 1019523.6)
    SQL*Loader performance tips (Doc ID 28631.1)
    How To use the Sequence Function of SQL*Loader (Doc ID 1058895.6)
    How to Get Data from Existing Table to Flat File Usable by SQL*Loader (Doc ID 123852.1)
    Also see link:
    10 Oracle SQLLDR Command Examples (Oracle SQL*Loader Tutorial)
    Thanks &
    Best regards,

  • SQL Loader and Insert Into Performance Difference

    Hello All,
    I am in a situation where I need to measure the performance difference between SQL*Loader and INSERT INTO. Say there are 10,000 records in a flat file and I want to load them into a staging table.
    I know that if I use PL/SQL UTL_FILE to do this job, performance will degrade (don't ask me why I am going for UTL_FILE instead of SQL*Loader). But I don't know by how much. Can anybody tell me the performance difference in % (like a 20% decrease) for 10,000 records?
    Thanks,
    Kannan.

    Kannan B wrote:
    Do not confuse the topic; as I told you, I am not going to use external tables. This post is to discuss the performance difference between SQL*Loader and a simple insert statement.
    I don't think people are confusing the topic.
    External tables are a superior means of reading a file as it doesn't require any command line calls or external control files to be set up. All that is needed is a single external table definition created in a similar way to creating any other table (just with the additional external table information obviously). It also eliminates the need to have a 'staging' table on the database to load the data into as the data can just be queried as needed directly from the file, and if the file changes, so does the data seen through the external table automatically without the need to re-run any SQL*Loader process again.
    Who told you not to use External Tables? Do they know what they are talking about? Can they give a valid reason why external tables are not to be used?
    IMO, if you're considering SQL*Loader, you should be considering External tables as a better alternative.
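    For comparison, a minimal external table over a flat file might look like the sketch below; the directory object, column list and file name are assumptions:
    CREATE TABLE staging_ext (
      id        NUMBER,
      file_name VARCHAR2(20)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY data_dir      -- created via CREATE DIRECTORY data_dir AS '/some/path'
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('test.csv')
    );
    -- the file can then be queried, or bulk-inserted, like any table:
    -- INSERT /*+ APPEND */ INTO staging SELECT * FROM staging_ext;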

  • Need suggestions for improving data load performance via SQL Loader

    Hi,
    Our requirement is to load 512 files (1 GB each) into an Oracle database.
    We are using SQL*Loader to load the files into the DB (a partitioned table) and have tried almost all the possible options that come with SQL*Loader (direct path load, parallel=true, multithreading=true, unrecoverable).
    As the table grows bigger, each file's load time increases (it started at 5 minutes per file and has reached 2 hours per 3 files, and it keeps increasing with every batch; note we are loading 3 files concurrently into the target table using the parallel=true option of SQL*Loader).
    Questions 1:
    My problem is that somehow multithreading is not working for us (we have multi CPU server and have enabled multithreading=true). Could it be something to do with DB setting which might be hindering the data load to be done in multiple threads?
    Question 2:
    Would gathering stats on the target table and it's partitions help improve load performance ? I'm not sure if stats improve DML's, they would definitely improve sql queries. Any thoughts?
    Question 3:
    What would be the best strategy to gather stats on this table (which would end up having 512 GB data) ?
    Question 4:
    Do you think insertions in a partitioned table (with growing sizes) would have poor performance as compared to a non-partitioned table ?
    Any other suggestions to improve performace are most welcome !!
    Thanks,
    Sachin

    2 hours to load just 3 GB of data seems unreasonable regardless of the SQL Loader settings. It seems likely to me that the problem is not with SQL Loader but somewhere else.
    Have you generated a Statspack/ AWR/ ASH report to see where all that time is being spent? Are there triggers on the table? Are there bitmap indexes?
    Is your table partitioned in a way that is designed to improve the efficiency of loads so that all the data from one file goes into one partition? Or is data from each file getting inserted into many different partitions.
    Justin
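    On questions 2 and 3: statistics will not speed up the load itself, but stale statistics can hurt the queries that run afterwards, and gathering at partition granularity avoids rescanning all 512 GB after every batch. A sketch, with the table and partition names assumed:
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname     => USER,
        tabname     => 'BIG_PART_TABLE',   -- assumed name
        partname    => 'P_LATEST',         -- the partition just loaded
        granularity => 'PARTITION',
        degree      => 4);
    END;
    /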

  • Performance Problem..DBMS_XMLStore or SQL Loader

    Hi All,
    I have implemented the insertion of data using the following method but it's taking very long.
    Once XML and XSL files are there in mapped 'Directory_Path'....
    DECLARE
    v_directory_name all_directories.directory_name%TYPE;
    v_context DBMS_XMLSTORE.ctxType;
    v_rows NUMBER;
    xmldata1 XMLTYPE;
    xmldata2 XMLTYPE;
    BEGIN
    SELECT directory_name INTO v_directory_name
    FROM all_directories
    WHERE owner = 'SYS'
    AND directory_name = 'Directory_Path';
    xmldata1 := XMLTYPE(BFILENAME(v_directory_name, 'temp_xml.xml'), nls_charset_id('UTF8'));
    xmldata2 := XMLTYPE(BFILENAME(v_directory_name, 'temp_xml.xsl'), nls_charset_id('UTF8'));
    --Open a new context, required for these procedures
    v_context := DBMS_XMLSTORE.newContext('TEMP_XML');
    v_rows := DBMS_XMLStore.insertXML(
    v_context,
    XMLType.transform(xmldata1, xmldata2));
    -- Close the context
    DBMS_XMLStore.closeContext(v_context);
    END;
    Maybe it is because I am converting data from BFILE to XMLType, but XMLType.transform(xmldata1, xmldata2) accepts XMLType data only.
    Or maybe it is this approach, where I am not giving the column names explicitly.
    Earlier I was loading the transformed XML file (with ROWSET/ROW format) into the 'Directory_Path' and loading the data as:
    DECLARE
    insCtx DBMS_XMLSTORE.ctxType;
    rows NUMBER;
    v_directory_name VARCHAR2(300);
    src_file BFILE ;
    BEGIN
    BEGIN
    SELECT directory_name INTO v_directory_name
    FROM all_directories
    WHERE owner = 'SYS'
    AND directory_name = 'Directory_Path';
    END;
    src_file := BFILENAME(v_directory_name, 'XMLFILE.xml');
    insCtx := DBMS_XMLSTORE.newContext( 'TABLE_NAME'); -- Get saved context
    DBMS_XMLSTORE.clearUpdateColumnList(insCtx); -- Clear the update settings
    -- Set the columns to be updated as a list of values
    DBMS_XMLSTORE.setUpdateColumn(insCtx, 'COLUMN1');
    DBMS_XMLSTORE.setUpdateColumn(insCtx, 'COLUMN2');
    -- Insert the doc.
    rows := DBMS_XMLSTORE.insertXML(insCtx, xmltype(src_file,nls_charset_id('AL32UTF8')));
    DBMS_OUTPUT.put_line(rows || ' rows inserted.');
    -- Close the context
    DBMS_XMLSTORE.closeContext(insCtx);
    END;
    This was happening in a fraction of a second.
    But for thousands of XMLs we can't manually generate the Transformed XMLs even if there is a single type of XSL for all those.
    AM I MISSING SOMETHING VERY ELEMENTARY OVER HERE???
    (I didn't meant to shout by putting in CAPS but highlight it so that it's not missed..Apologies )
    The new suggestion being given is PARSE XML FILE into PIPE DELIMITED FORMAT FILE and LOAD that data through SQL LOADER
    Personally I don't think it would be better when Oracle is giving us an inbuilt feature.
    PLEASE HELP....
    How can I improve this. I really think I am taking some wrong step somewhere.....
    Regards.....

    Thanks A_NON,
    But I actually did load within 4 seconds using the below approach without registering as mentioned here.
    xmldata1 := XMLTYPE(BFILENAME(v_directory_name, p_xml_file_nme),
                        NLS_CHARSET_ID('UTF8'));
    xmldata2 := XMLTYPE(BFILENAME(v_directory_name, p_xsl_file_nme),
                        NLS_CHARSET_ID('UTF8'));
    SELECT XMLTRANSFORM(xmldata1, xmldata2)
    INTO temp2
    FROM DUAL;
    And then
    insCtx := DBMS_XMLSTORE.newContext( 'TABLE_NAME'); -- Get saved context
    DBMS_XMLSTORE.clearUpdateColumnList(insCtx); -- Clear the update settings
    -- Set the columns to be updated as a list of values
    DBMS_XMLSTORE.setUpdateColumn(insCtx, 'COLUMN1');
    DBMS_XMLSTORE.setUpdateColumn(insCtx, 'COLUMN2');
    I will try to optimise it more now. But 4 seconds is beautiful you see. :-)
    Thanks for All the help A_NON and I think I have learnt some Oracle XML DB over here.
    Hope to answer some basic questions too in future.. :-)
    Regards....
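    For anyone assembling the fragments above, they amount to roughly the block below; the directory, file, table and column names are placeholders, not the poster's real ones:
    DECLARE
      insCtx   DBMS_XMLSTORE.ctxType;
      v_rows   NUMBER;
      xmldata1 XMLTYPE;
      xmldata2 XMLTYPE;
    BEGIN
      xmldata1 := XMLTYPE(BFILENAME('XML_DIR', 'input.xml'), NLS_CHARSET_ID('UTF8'));
      xmldata2 := XMLTYPE(BFILENAME('XML_DIR', 'transform.xsl'), NLS_CHARSET_ID('UTF8'));
      insCtx := DBMS_XMLSTORE.newContext('TABLE_NAME');   -- target table
      DBMS_XMLSTORE.clearUpdateColumnList(insCtx);        -- clear the update settings
      DBMS_XMLSTORE.setUpdateColumn(insCtx, 'COLUMN1');   -- columns to populate
      DBMS_XMLSTORE.setUpdateColumn(insCtx, 'COLUMN2');
      -- transform to ROWSET/ROW form, then insert in one call
      v_rows := DBMS_XMLSTORE.insertXML(insCtx, xmldata1.transform(xmldata2));
      DBMS_OUTPUT.put_line(v_rows || ' rows inserted.');
      DBMS_XMLSTORE.closeContext(insCtx);
    END;
    /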

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What are the query performance issues we need to take care of? Please explain and let me know the T-codes. It is urgent.
    What are the data loading performance issues we need to take care of? Please explain and let me know the T-codes. It is urgent.
    Will reward full points.
    Regards,
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using ABAP Dictionary to improve better selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • How can I load this XML with sql loader ?

    Hi,
    I'm trying to load an XML file into an XMLType table:
    Create table TEST of XMLTYPE;
    XML file is something like this:
    <!DOCTYPE list SYSTEM "myfile.dtd">
    <list>
    <item id="1"> lots of tags and info here </item>
    <item id="9000"> lots of tags and info here </item>
    </list>
    I don't know exactly how to create an appropriate control file for my XML. All the examples I saw in the documentation are similar to this one:
    this is an example control file
    LOAD DATA
    INFILE *
    INTO TABLE test
    APPEND
    XMLTYPE (xmldata)
    FIELDS
    I don't know how to complete this control file. Can anyone help me?
    Thank you!

    Well I found this working code
    LOAD DATA
    INFILE *
    INTO TABLE test APPEND
    xmltype(xmldata)
    FIELDS
    (
    ext_fname filler char(30),
    xmldata LOBFILE (ext_fname) TERMINATED BY EOF
    )
    BEGINDATA
    mifile.xml
    mifile2.xml
    mifile3.xml
    The main difference I found is that when I use:
    Insert into test values (XMLType(bfilename('XMLDIR', 'swd_uC000025127.xml'),nls_charset_id('AL32UTF8')));
    the tag <!DOCTYPE list SYSTEM "myfile.dtd"> is expanded
    (I explain this in "DTD insertion and errors with long DTD entities").
    Using SQL*Loader, the tag is not modified.
    I don't know if there is any difference between the two cases; did I lose performance?

  • SQL*LOADER and decode

    I'm setting up a SQL*Loader script and trying to use the DECODE function as referred to in 'Applying SQL Operators to Fields'. I'm getting the error message 'Token longer than max allowable length of 258 chars'. Is there a limit to the size of the DECODE statement within SQL*Loader, or is it better to use a table trigger to handle this on insert? I ran the DECODE statement as a SELECT through SQL*Plus and it works fine there. Oracle 8.0 Utilities shows an example of DECODE in Ch. 5, but Oracle 9i Utilities Ch. 6 does not. Has anyone done this, and what is the impact on the performance of the load if I can get it to work? See my example below:
    LOAD DATA
    INFILE 'e2e_prod_cust_profile.csv'
    APPEND
    INTO TABLE APPS.RA_CUSTOMER_PROFILES_INTERFACE
    FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
    (Insert_update_flag CHAR(1),
    Orig_system_customer_ref CHAR(240),
    customer_profile_class_name CHAR(30) NULLIF customer_profile_class=BLANKS
    "decode(customer_profile_class_name,
    'NORTHLAND Default','(MIA) Default',
    'NORTHLAND Non Consolidated','(MIA) Non Cons',
    'NORTHLAND Consolidated A','(MIA) Cons A',
    'NORTHLAND Consolidated B','(MIA) Cons B',
    'NORTHLAND Consolidated C','(MIA) Cons C',
    'NORTHLAND Consolidated D','(MIA) Cons D',
    'NORTHLAND Cons A NonZS','(MIA) Cons A NonZS',
    'NORTHLAND Cons B NonZS','(MIA) Cons B NonZS',
    'NORTHLAND Cons C NonZS','(MIA) Cons C NonZS',
    'NORTHLAND Cons D NonZS','(MIA) Cons D NonZS',
    'NORTHLAND International Billing','(MIA) International Billing',
    customer_profile_class_name)",
    credit_hold CHAR(1),
    overall_credit_limit INTERGER EXTERNAL,
    "e2e_cust_profile.ctl" 49 lines, 1855 characters
    SQL*Loader-350: Syntax error at line 15.
    Token longer than max allowable length of 258 chars
    'NORTHLAND Consolidated D','(MIA) Cons D',
    ^

    Your controlfile is incomplete and has some typos, but you could try something like:
    create or replace function decode_profile_class_name (p_longname IN VARCHAR2)
    return VARCHAR2
    is
    begin
      CASE p_longname
        WHEN 'NORTHLAND Default'               THEN RETURN '(MIA) Default';
        WHEN 'NORTHLAND Non Consolidated'      THEN RETURN '(MIA) Non Cons';
        WHEN 'NORTHLAND Consolidated A'        THEN RETURN '(MIA) Cons A';
        WHEN 'NORTHLAND Consolidated B'        THEN RETURN '(MIA) Cons B';
        WHEN 'NORTHLAND Consolidated C'        THEN RETURN '(MIA) Cons C';
        WHEN 'NORTHLAND Consolidated D'        THEN RETURN '(MIA) Cons D';
        WHEN 'NORTHLAND Cons A NonZS'          THEN RETURN '(MIA) Cons A NonZS';
        WHEN 'NORTHLAND Cons B NonZS'          THEN RETURN '(MIA) Cons B NonZS';
        WHEN 'NORTHLAND Cons C NonZS'          THEN RETURN '(MIA) Cons C NonZS';
        WHEN 'NORTHLAND Cons D NonZS'          THEN RETURN '(MIA) Cons D NonZS';
        WHEN 'NORTHLAND International Billing' THEN RETURN '(MIA) International Billing';
        ELSE RETURN p_longname;
      END CASE;
    end;
    LOAD DATA
    INFILE 'e2e_prod_cust_profile.csv'
    APPEND
    INTO TABLE APPS.RA_CUSTOMER_PROFILES_INTERFACE
    FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
    (
    Insert_update_flag          CHAR(1),
    Orig_system_customer_ref    CHAR(240),
    customer_profile_class_name CHAR(30) NULLIF customer_profile_class_name=BLANKS "decode_profile_class_name(:customer_profile_class_name)",
    credit_hold                 CHAR(1),
    overall_credit_limit        INTEGER EXTERNAL
    )

  • SQL*Loader and parallel

    Hello,
    I'm trying to use SQL*Loader with the parallel option.
    I have loaded data with and without parallel, but it takes the same time; I do not see any better performance with parallel.
    In the call to sqlldr I'm adding PARALLEL=TRUE. That is the only change I have made.
    Do I have to run any 'alter table' on my Oracle table?
    Any advice will be greatly appreciated.
    Nauj

    The PARALLEL=TRUE parameter will not, by itself, result in any performance gain at all. It is really just a flag which tells SQL*Loader to allow multiple sessions to perform direct loads at the same time. It is up to you to actually spawn multiple SQL*Loader sessions on different portions of the data (e.g. you might divide your source data into multiple files and run distinct SQL*Loader sessions on each file concurrently)
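    While the concurrent sessions are running, you can check from another session that more than one loader is actually active. A sketch; the assumption here is that the client program name recorded in v$session starts with 'sqlldr', which can vary by platform and version:
    SELECT sid, status, program
    FROM v$session
    WHERE program LIKE 'sqlldr%';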

  • Calling SQL Loader in Java

    I am writing a Java application running on Sun Solaris to perform data feeds from a number of CSV files to Oracle. I am to use SQL*Loader to perform the loading. What I need is to kick off SQL*Loader from my code. My question is how to invoke SQL*Loader from Java.
    One option I am thinking of is using Runtime.exec(). Is the following code workable?
    runtime.exec( "sqlldr CONTROL=foo.ctl, LOG=bar.log, USERID=scott/tiger" );
    Alternatively, should I write an Oracle stored procedure to call SQL*Loader and have Java call the stored procedure?
    Any thoughts and directions are welcome!
    Regards,

    Hi,
    Can't you achieve the same thing using this
    Class.forName("Driver");
    Connection ocon=DriverManager.getConnection("url");
    Once you have ocon then you can execute any query.
    If this is not what you want, can you explain what SQL*Loader means here?
    Regards
    Vicky
