Loading a CSV file with Umlaut characters (àáä)

Hi,
We are uploading a CSV file through a custom JSP page built on the Oracle JTF framework.
The JSP page loads the data into the FND_LOBS table using the JTF object oracle.apps.jtf.amv.ServletUploader.
The CSV file is stored correctly in the FND_LOBS table, with the umlaut characters intact.
The JSP page then invokes a Java object to read and parse the data. We select the data into a BLOB object first and then use an InputStreamReader to get at the characters.
Here is the sample code:
oraclepreparedstatement = (OraclePreparedStatement)oracleconnection.prepareStatement("SELECT FILE_DATA FROM FND_LOBS WHERE FILE_ID = :1");
oraclepreparedstatement.defineColumnType(1, 2004); // 2004 = java.sql.Types.BLOB
oraclepreparedstatement.setLong(1, <file id>);
oracleresultset = (OracleResultSet)oraclepreparedstatement.executeQuery();
oracleresultset.next(); // position on the single row before reading it
blob = (BLOB)oracleresultset.getObject(1);
InputStreamReader inputstreamreader = new InputStreamReader(blob.getBinaryStream());
lineReader = new LineNumberReader(inputstreamreader);
lCSVLine = lineReader.readLine();
I tried printing the character set used by the InputStreamReader and it came back as ASCII.
I then tried setting different character sets to read the umlaut (German) characters, but nothing has worked, e.g.:
InputStreamReader inputstreamreader = new InputStreamReader(blob.getBinaryStream(),"UTF-8");
Can someone please let me know where and how to set the Character Set to accept the Umlaut characters like àáä?
Thanks,
Anji
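A minimal, JDK-only sketch of the underlying failure mode (the class name is illustrative): the one-argument InputStreamReader constructor silently falls back to the JVM default charset, which is why it printed as ASCII above, and decoding bytes with a charset other than the one they were written with is exactly what mangles the umlauts.

import java.nio.charset.Charset;

public class CharsetDemo {
    public static void main(String[] args) throws Exception {
        // The default can be changed JVM-wide (-Dfile.encoding=...), but the
        // reliable fix is the two-argument InputStreamReader constructor.
        System.out.println("JVM default charset: " + Charset.defaultCharset());

        byte[] latin1Bytes = "àáä".getBytes("ISO-8859-1");
        // Decoding with the wrong charset destroys the umlauts:
        System.out.println(new String(latin1Bytes, "UTF-8"));      // replacement characters
        // Decoding with the charset the bytes were written in round-trips them:
        System.out.println(new String(latin1Bytes, "ISO-8859-1")); // àáä
    }
}

Which charset is right for FND_LOBS depends on how the upload encoded the file, so the first step is to find out what bytes actually landed in the BLOB.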

Thank you for the quick response.
Requirement:
I need to retrieve the BLOB object with umlaut characters from the database, parse the data on the comma delimiter into strings, and store them in the database.
I am viewing the umlaut data in the database table using the TOAD utility.
I tried the same code example provided above, but it is not working as expected: the umlaut characters are translated to 'ýýý'.
CODE EXAMPLE:
Input:
CREATE TABLE test_umlaut (sno NUMBER, col1 VARCHAR2(100), col3 BLOB);
INSERT INTO test_umlaut(sno, col3) VALUES (200, utl_raw.cast_to_raw('äöüÄÖÜ'));
Note: Verified that the database shows the umlaut characters when selecting col3 and spooling it to a flat file.
--- code
OraclePreparedStatement oraclepreparedstatement10 = null;
OracleResultSet rs = null;
oraclepreparedstatement10 = (OraclePreparedStatement)oracleconnection.prepareStatement("SELECT col3 FROM test_umlaut WHERE sno = 200");
oraclepreparedstatement10.defineColumnType(1, 2004); // 2004 = java.sql.Types.BLOB
rs = (OracleResultSet)oraclepreparedstatement10.executeQuery();
while (rs.next()) {
    BLOB b = (BLOB)rs.getObject(1);
    InputStream is = b.getBinaryStream();
    InputStreamReader r = new InputStreamReader(is, "UTF-8");
    BufferedReader br = new BufferedReader(r);
    String line;
    while ((line = br.readLine()) != null) {
        System.out.println(line);
        OraclePreparedStatement oraclepreparedstatement12 =
            (OraclePreparedStatement)oracleconnection.prepareStatement("INSERT INTO test_umlaut(sno, col1) VALUES (300, ?)");
        oraclepreparedstatement12.setString(1, line);
        oraclepreparedstatement12.executeUpdate();
        oraclepreparedstatement12.close(); // close each statement inside the loop
    }
    br.close();
    r.close();
    is.close();
}
Output: Verified the rows inserted by the loop above directly in the database table:
select col1 from test_umlaut where sno=300
ýýý
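A likely culprit here: utl_raw.cast_to_raw() produces bytes in the database character set, so unless that character set is AL32UTF8 the BLOB does not contain UTF-8 at all, and reading it with "UTF-8" yields replacement characters. A sketch using plain JDBC (the connection details and the single charset mapping are illustrative assumptions) that decodes the BLOB with a charset derived from the database's NLS_CHARACTERSET:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.sql.*;

public class BlobCharsetCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; replace with your own.
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");

        // 1) Ask the database which character set it stores VARCHAR2/RAW data in.
        String dbCharset;
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET'")) {
            rs.next();
            dbCharset = rs.getString(1);
        }
        // Map the Oracle name to a Java charset name (only one mapping shown).
        String javaCharset = "WE8ISO8859P1".equals(dbCharset) ? "ISO-8859-1" : "UTF-8";

        // 2) Decode the BLOB with that charset instead of a hard-coded "UTF-8".
        try (PreparedStatement ps = conn.prepareStatement(
                 "SELECT col3 FROM test_umlaut WHERE sno = 200");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                Blob b = rs.getBlob(1);
                try (BufferedReader br = new BufferedReader(
                        new InputStreamReader(b.getBinaryStream(), javaCharset))) {
                    for (String line; (line = br.readLine()) != null; ) {
                        System.out.println(line); // umlauts survive when charsets match
                    }
                }
            }
        }
        conn.close();
    }
}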

Similar Messages

  • Error importing CSV files with 'hidden' characters using External Table

    Hi Folks
    Bit of a strange one here.
    We're well used to using the External Table method of loading data from CSV files into the database but a recent event has presented us with a problem.
    We have received some CSV files that 'look' like regular CSV files but Oracle will not load them.
    When we examined the CSV file using VIM on a UNIX box, we saw the following 'hidden' character between every regular character in the file: ^@
    So a string that looks like this when opened in Excel/Wordpad etc.:
    "TEST","TEXT"
    looks like this when examined with VIM:
    ^@"^@T^@E^@S^@T^@"^@,^@"^@T^@E^@X^@T^@"
    Has anyone come across this before?
    Many thanks
    Simon Gadd
    Oracle 11g 11.2.0.1.0

    Hi Simon,
    ^@ represents the NUL character (0x00).
    So, most likely, you've got a Unicode-encoded file.
    You'll have to specify the character set in the record specification (and, if necessary, the byte order mark), for instance:
    CREATE TABLE ext_table (
      col1 VARCHAR2(10),
      col2 VARCHAR2(10)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY dump_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE CHARACTERSET 'UTF16'
        FIELDS TERMINATED BY ','
      )
      LOCATION ('dump.csv')
    )
    REJECT LIMIT UNLIMITED;
    http://download.oracle.com/docs/cd/E11882_01/server.112/e16536/et_params.htm#i1009499
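    For what it's worth, a small JDK-only sketch (the class name is illustrative) of why the ^@ pattern appears: an ASCII-only string saved as UTF-16 pairs every character with a NUL byte, which VIM renders as ^@.

    public class NulBytesDemo {
        public static void main(String[] args) throws Exception {
            // "TEST","TEXT" encoded as big-endian UTF-16 without a BOM.
            byte[] utf16 = "\"TEST\",\"TEXT\"".getBytes("UTF-16BE");
            StringBuilder sb = new StringBuilder();
            for (byte b : utf16) {
                sb.append(b == 0 ? "^@" : String.valueOf((char) (b & 0xFF)));
            }
            // Prints: ^@"^@T^@E^@S^@T^@"^@,^@"^@T^@E^@X^@T^@"
            System.out.println(sb);
        }
    }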

  • Parsing BLOB (CSV file with special characters) into table

    Hello everyone,
    In my application, a user uploads a CSV file (it is stored as a BLOB), which is later read and parsed into a table. The parsing engine is shown below.
    The problem is that it won't read national characters such as Ö, Ü etc.; they simply disappear.
    Is there any CSV parser that supports national characters? Or, said in other words: is it possible to read a BLOB character by character (where the characters can be Ö, Ü etc.)?
    Regards,
    Adam
      /*-----------------------------------------------
      |
      | helper function for csv parsing
      |
      +-----------------------------------------------*/
      FUNCTION hex_to_decimal(p_hex_str in varchar2) return number
      --this function is based on one by Connor McDonald
        --http://www.jlcomp.demon.co.uk/faq/base_convert.html
       is
        v_dec number;
        v_hex varchar2(16) := '0123456789ABCDEF';
      begin
        v_dec := 0;
        for indx in 1 .. length(p_hex_str) loop
          v_dec := v_dec * 16 + instr(v_hex, upper(substr(p_hex_str, indx, 1))) - 1;
        end loop;
        return v_dec;
      end hex_to_decimal;
      /*-----------------------------------------------
      |
      | csv parsing
      |
      +-----------------------------------------------*/
      FUNCTION parse_csv_to_imp_table(in_import_id in number) RETURN boolean IS
        PRAGMA autonomous_transaction;
        v_blob_data   BLOB;
        n_blob_len    NUMBER;
        v_entity_name VARCHAR2(100);
        n_skip_rows   INTEGER;
        n_columns     INTEGER;
        n_col         INTEGER := 0;
        n_position    NUMBER;
        v_raw_chunk   RAW(10000);
        v_char        CHAR(1);
        c_chunk_len   number := 1;
        v_line        VARCHAR2(32767) := NULL;
        n_rows        number := 0;
        n_temp        number;
      BEGIN
        -- shortened
        n_blob_len := dbms_lob.getlength(v_blob_data);
        n_position := 1;
        -- Read and convert binary to char
        WHILE (n_position <= n_blob_len) LOOP
          v_raw_chunk := dbms_lob.substr(v_blob_data, c_chunk_len, n_position);
          v_char      := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
          n_temp      := ascii(v_char);
          n_position  := n_position + c_chunk_len;
          -- When a whole line is retrieved
          IF v_char = CHR(10) THEN
            n_rows := n_rows + 1;
            if n_rows > n_skip_rows then
              -- Shortened
              -- Perform some action with the line (store into table etc.)
            end if;
            -- Clear out
            v_line := NULL;
            n_col := 0;
          ELSIF v_char != chr(10) and v_char != chr(13) THEN
            v_line := v_line || v_char;
            if v_char = ';' then
              n_col := n_col+1;
            end if;
          END IF;
        END LOOP;
        COMMIT;
        return true;
      EXCEPTION
         -- some exception handling
      END;

    Uploading CSV files into LOB columns and then reading them in PL/SQL is a frequently asked question; see, for instance, http://forums.oracle.com/forums/thread.jspa?messageID=3454184 and the threads "Re: Reading a Blob (CSV file) and displaying the contents", "Re: Associative Array and Blob" and "Number of rows in a clob", doncha know.
    Anyway, it would help if you gave us some basic information: database version and NLS settings would seem particularly relevant here.
    Cheers, APC
    blog: http://radiofreetooting.blogspot.com
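    As a side note on why the parser loses the national characters: in a UTF-8 encoded upload a character such as Ö occupies two bytes, and the chr(hex_to_decimal(...)) loop above converts one byte at a time, so it can never reassemble it. A JDK-only sketch (class name illustrative):

    public class MultiByteDemo {
        public static void main(String[] args) throws Exception {
            byte[] utf8 = "Ö".getBytes("UTF-8");
            System.out.println("bytes for Ö: " + utf8.length); // 2, not 1
            for (byte b : utf8) {
                System.out.printf("0x%02X ", b & 0xFF);        // 0xC3 0x96
            }
            System.out.println();
        }
    }

    Letting the database do the conversion (for instance with dbms_lob.converttoclob and then parsing the CLOB) sidesteps the problem entirely.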

  • Unable to Load CSV file with comma inside the column(Sql Server 2008)

    Hi,
    I am not able to load a CSV file with a comma inside a column.
    Here is sample File:(First row contain the column names)
    _id,qp,c
    "1","[ ""0"", ""0"", ""0"" ]","helloworld"
    "1","[ ""0"", ""0"", ""0"" ]","helloworld"
    When I specify the text qualifier as " (double quote), it works in SQL Server 2012, whereas it fails in SQL Server 2008 with the error:
    TITLE: Microsoft Visual Studio
    The preview sample contains embedded text qualifiers ("). The flat file parser does not support embedding text qualifiers in data. Parsing columns that contain data with text qualifiers will fail at run time.
    BUTTONS:
    OK
    I need to do this in SQL Server 2008 R2 with Service Pack 2; my build version is 10.50.1600.1.
    Can someone let me know if it is possible to handle this the same way it is handled in SQL Server 2012?
    Or was it resolved in any later cumulative update after 10.50.1600.1?
    Regards, Harsh

    Hello,
    If you have a CSV with double quotes inside double quotes and with SSIS 2008, I suggest:
    In your data flow, use a script transformation component as Source, then define the output columns, e.g. Output 0 output columns:
    Id - four-byte signed integer
    gp - unicode string
    cc - unicode string
    Do not use a flat file connection; instead use a user variable in which you store the name of the flat file (this could be inside a for-each-file loop).
    The user variable is supplied as a read-only variable argument to the script component in your data flow.
    In the script component's CreateNewOutputRows, use a System.IO.StreamReader with the user variable name to read the file and use the output buffer's AddRow method to add each line to the output of the script component.
    Between the ReadLine call and the AddRow call you have to add your code to extract the 3 column values.
    public override void CreateNewOutputRows()
    {
        // Add rows by calling the AddRow method on the member variable named "<Output Name>Buffer".
        // For example, call MyOutputBuffer.AddRow() if your output was named "MyOutput".
        string line;
        System.IO.StreamReader file = new System.IO.StreamReader(Variables.CsvFilename);
        while ((line = file.ReadLine()) != null)
        {
            // System.Windows.Forms.MessageBox.Show(line); // debugging only
            if (!line.StartsWith("_id,qp,c")) // skip the header line
            {
                Output0Buffer.AddRow();
                string[] mydata = Yourlineconversionher(line); // your own parsing helper
                Output0Buffer.Id = Convert.ToInt32(mydata[0]);
                Output0Buffer.gp = mydata[1];
                Output0Buffer.cc = mydata[2];
            }
        }
        file.Close();
    }
    Jan D'Hondt - SQL server BI development
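    The Yourlineconversionher(line) call above is the part left to the reader. A sketch, in Java rather than the script component's C#, of one way such a conversion can honour the sample data's quoting (fields wrapped in double quotes, embedded quotes doubled):

    import java.util.ArrayList;
    import java.util.List;

    public class QuotedCsvLineParser {
        // Splits one line into fields, honouring quoted fields with embedded
        // commas and doubled ("") escape quotes.
        static List<String> parse(String line) {
            List<String> fields = new ArrayList<>();
            StringBuilder cur = new StringBuilder();
            boolean inQuotes = false;
            for (int i = 0; i < line.length(); i++) {
                char c = line.charAt(i);
                if (inQuotes) {
                    if (c == '"') {
                        if (i + 1 < line.length() && line.charAt(i + 1) == '"') {
                            cur.append('"'); i++;  // doubled quote -> literal quote
                        } else {
                            inQuotes = false;      // closing quote
                        }
                    } else {
                        cur.append(c);
                    }
                } else if (c == '"') {
                    inQuotes = true;
                } else if (c == ',') {
                    fields.add(cur.toString());
                    cur.setLength(0);
                } else {
                    cur.append(c);
                }
            }
            fields.add(cur.toString());
            return fields;
        }

        public static void main(String[] args) {
            // The sample row from the question:
            System.out.println(parse("\"1\",\"[ \"\"0\"\", \"\"0\"\", \"\"0\"\" ]\",\"helloworld\""));
            // -> [1, [ "0", "0", "0" ], helloworld]
        }
    }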

  • Loading a CSV file to Sql Server

    Hi,
    How can I load a CSV file, which is in a shared location, into SQL Server using BODS?
    Where can I create the data source pointing to the CSV file?

    Hi,
    Use a file format to load the CSV file into SQL Server; the details are below.
    A file format defines a connection to a file. Therefore, you use a file format to connect to source or target data when the data is stored in a file rather than a database table. The object library stores file format templates that you use to define specific file formats as sources and targets in data flows.
    To work with file formats, perform the following tasks:
    • Create a file format template that defines the structure for a file.
    • Create a specific source or target file format in a data flow. The source or target file format is based on a template and specifies connection information such as the file name.
    File format objects can describe files of the following types:
    • Delimited: Characters such as commas or tabs separate each field.
    • Fixed width: You specify the column width.
    • SAP transport: Use to define data transport objects in SAP application data flows.
    • Unstructured text: Use to read one or more files of unstructured text from a directory.
    • Unstructured binary: Use to read one or more binary documents from a directory.
    More details, refer the designer guide
    http://help.sap.com/businessobject/product_guides/sbods42/en/ds_42_designer_en.pdf

  • How can we load a flat file with very, very long lines into a table?

    Hello:
    We have to load a flat file with OWB. The problem is that each of the lines in the file might be up to 30,000 characters long (up to 1,000 units of information in each line, 30 characters each).
    Of course, our mapping should insert these units of information as independent rows in a table (1,000 rows, in our example).
    We do not know how to go about it. We usually load flat files using table functions, but we are not sure that they will be able to cope with these huge lines. And how should we pivot those lines? Will the Pivot operator do the trick? Or maybe we should pivot those lines outside the database before loading them?
    We are a bit lost. Any suggestion would be appreciated.
    Regards

    Yes, well, we could define a 1,000-column external table, and then map those 1,000 columns to the Pivot operator… perhaps it would work. But we have been investigating a little bit, and we think that we have found a better solution: there is a unix utility called "fold". This utility can split our 30,000-character lines into 1,000 lines of 30 characters each: just what we needed. Then we can load the resulting file using an external table.
    We think this is a much better solution than handling 1,000 columns in the external table and in the Pivot operator.
    Thanks for your help.
    Regards
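    A rough Java equivalent of what fold -w 30 does here (the file names are illustrative): each 30,000-character line is cut into 30-character rows, which the external table can then load one unit per row.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.PrintWriter;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class FoldLikeSplitter {
        public static void main(String[] args) throws IOException {
            try (BufferedReader in = Files.newBufferedReader(Paths.get("wide_file.dat"));
                 PrintWriter out = new PrintWriter("folded_file.dat")) {
                String line;
                while ((line = in.readLine()) != null) {
                    // Emit the line in 30-character slices, one per output row.
                    for (int i = 0; i < line.length(); i += 30) {
                        out.println(line.substring(i, Math.min(i + 30, line.length())));
                    }
                }
            }
        }
    }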

  • Error while loading a CSV file

    Hi,
    While loading a CSV file in OpenScript I am facing the following error:
    Error loading the CSV file. Caused by: java.io.SocketException.
    Can anyone please help me sort it out?
    Thank You

    Hi,
    Are you creating a table and loading it in OpenScript? If so, can you show a screenshot?
    One more piece of information: you can also convert the Excel sheet into a CSV file and add it as a databank in the tree view of OpenScript.
    Thanks and Regards,
    Nishanth Soundararajan.

  • Loading a CSV file and accessing the variables

    Hi guys,
    I'm new to AS3 and dealt with AS2 before (I was just getting the grasp of it when they changed it).
    Is it possible in AS3 to load an excel .csv file into Flash using the URLLoader (or ???) and the data as variables?
    I can get the .csv to load and trace the values (cell1,cell2,cell3....) but I'm not sure how to collect the data and place it into variables.
    Can I just create an array and access it like so.... myArray[0], myArray[1]? If so, I'm not sure why it's not working.
    I must be on the completely wrong path. Here's what I have so far....
    var loader:URLLoader = new URLLoader();
    loader.dataFormat = URLLoaderDataFormat.VARIABLES;
    loader.addEventListener(Event.COMPLETE, dataLoaded);
    var request:URLRequest = new URLRequest("population.csv");
    loader.load(request);
    function dataLoaded(evt:Event):void {
        var myData:Array = new Array(loader.data);
        trace(myData[0]);
    }
    Thanks for any help,
    Sky

    just load your csv file (with loader.dataFormat set to URLLoaderDataFormat.TEXT, so the data arrives as a plain string) and use the flash string methods to allocate those values to an array:
    var myData:Array = loader.data.split(",");

  • Issue while loading a csv file using sql*loader...

    Hi,
    I am loading a csv file using sql*loader.
    On the number columns where there is data populated in them (decimal numbers/integers), the row errors out with the error:
    ORA-01722: invalid number
    I tried checking the value picked up from the Excel sheet,
    and found chr(13), chr(32) and chr(10) characters in the value.
    ex: select length('0.21') from dual, with the value pasted straight from the file, gives a value of 7 instead of 4.
    When I checked each character, as in
    select ascii(substr('0.21',5,1)) from dual, it returns a value of 9 (a tab), etc.
    I tried the following command....
    "to_number(trim(replace(replace(replace(replace(:std_cost_price_scala,chr(9),''),chr(32),''),chr(13),''),chr(10),'')))",
    to remove all the non-number special characters. But still facing the error.
    Please let me know, any solution for this error.
    Thanks in advance.
    Kiran

    control file:
    OPTIONS (ROWS=1, ERRORS=10000)
    LOAD DATA
    CHARACTERSET WE8ISO8859P1
    INFILE '$Xx_TOP/bin/ITEMS.csv'
    APPEND INTO TABLE XXINF.ITEMS_STAGE
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS
    ItemNum                    "trim(replace(replace(:ItemNum,chr(9),''),chr(13),''))",
    cross_ref_old_item_num               "trim(replace(replace(:cross_ref_old_item_num,chr(9),''),chr(13),''))",
    Mas_description               "trim(replace(replace(:Mas_description,chr(9),''),chr(13),''))",
    Mas_long_description               "trim(replace(replace(:Mas_long_description,chr(9),''),chr(13),''))",
    Org_description               "trim(replace(replace(:Org_description,chr(9),''),chr(13),''))",
    Org_long_description               "trim(replace(replace(:Org_long_description,chr(9),''),chr(13),''))",
    user_item_type                    "trim(replace(replace(:user_item_type,chr(9),''),chr(13),''))",
    organization_code               "trim(replace(replace(:organization_code,chr(9),''),chr(13),''))",
    primary_uom_code               "trim(replace(replace(:primary_uom_code,chr(9),''),chr(13),''))",
    inv_default_item_status          "trim(replace(replace(:inv_default_item_status,chr(9),''),chr(13),''))",
    inventory_item_flag               "trim(replace(replace(:inventory_item_flag,chr(9),''),chr(13),''))",
    stock_enabled_flag               "trim(replace(replace(:stock_enabled_flag,chr(9),''),chr(13),''))",
    mtl_transactions_enabled_flag          "trim(replace(replace(:mtl_transactions_enabled_flag,chr(9),''),chr(13),''))",
    revision_qty_control_code          "trim(replace(replace(:revision_qty_control_code,chr(9),''),chr(13),''))",
    reservable_type               "trim(replace(replace(:reservable_type,chr(9),''),chr(13),''))",
    check_shortages_flag               "trim(replace(replace(:check_shortages_flag,chr(9),''),chr(13),''))",
    shelf_life_code               "trim(replace(replace(replace(replace(:shelf_life_code,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
    shelf_life_days               "trim(replace(replace(replace(replace(:shelf_life_days,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
    lot_control_code               "trim(replace(replace(:lot_control_code,chr(9),''),chr(13),''))",
    auto_lot_alpha_prefix               "trim(replace(replace(:auto_lot_alpha_prefix,chr(9),''),chr(13),''))",
    start_auto_lot_number               "trim(replace(replace(:start_auto_lot_number,chr(9),''),chr(13),''))",
    negative_measurement_error          "trim(replace(replace(replace(replace(:negative_measurement_error,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
    positive_measurement_error          "trim(replace(replace(replace(replace(:positive_measurement_error,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
    serial_number_control_code          "trim(replace(replace(:serial_number_control_code,chr(9),''),chr(13),''))",
    auto_serial_alpha_prefix          "trim(replace(replace(:auto_serial_alpha_prefix,chr(9),''),chr(13),''))",
    start_auto_serial_number          "trim(replace(replace(:start_auto_serial_number,chr(9),''),chr(13),''))",
    location_control_code               "trim(replace(replace(:location_control_code,chr(9),''),chr(13),''))",
    restrict_subinventories_code          "trim(replace(replace(:restrict_subinventories_code,chr(9),''),chr(13),''))",
    restrict_locators_code               "trim(replace(replace(:restrict_locators_code,chr(9),''),chr(13),''))",
    bom_enabled_flag               "trim(replace(replace(:bom_enabled_flag,chr(9),''),chr(13),''))",
    costing_enabled_flag               "trim(replace(replace(:costing_enabled_flag,chr(9),''),chr(13),''))",
    inventory_asset_flag               "trim(replace(replace(:inventory_asset_flag,chr(9),''),chr(13),''))",
    default_include_in_rollup_flag          "trim(replace(replace(:default_include_in_rollup_flag,chr(9),''),chr(13),''))",
    cost_of_goods_sold_account          "trim(replace(replace(:cost_of_goods_sold_account,chr(9),''),chr(13),''))",
    std_lot_size                    "trim(replace(replace(replace(replace(:std_lot_size,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
    sales_account                    "trim(replace(replace(:sales_account,chr(9),''),chr(13),''))",
    purchasing_item_flag               "trim(replace(replace(:purchasing_item_flag,chr(9),''),chr(13),''))",
    purchasing_enabled_flag          "trim(replace(replace(:purchasing_enabled_flag,chr(9),''),chr(13),''))",
    must_use_approved_vendor_flag          "trim(replace(replace(:must_use_approved_vendor_flag,chr(9),''),chr(13),''))",
    allow_item_desc_update_flag          "trim(replace(replace(:allow_item_desc_update_flag,chr(9),''),chr(13),''))",
    rfq_required_flag               "trim(replace(replace(:rfq_required_flag,chr(9),''),chr(13),''))",
    buyer_name                    "trim(replace(replace(:buyer_name,chr(9),''),chr(13),''))",
    list_price_per_unit               "trim(replace(replace(replace(replace(:list_price_per_unit,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
    taxable_flag                    "trim(replace(replace(:taxable_flag,chr(9),''),chr(13),''))",
    purchasing_tax_code               "trim(replace(replace(:purchasing_tax_code,chr(9),''),chr(13),''))",
    receipt_required_flag               "trim(replace(replace(:receipt_required_flag,chr(9),''),chr(13),''))",
    inspection_required_flag          "trim(replace(replace(:inspection_required_flag,chr(9),''),chr(13),''))",
    price_tolerance_percent          "trim(replace(replace(replace(replace(:price_tolerance_percent,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
    expense_account               "trim(replace(replace(:expense_account,chr(9),''),chr(13),''))",
    allow_substitute_receipts_flag          "trim(replace(replace(:allow_substitute_receipts_flag,chr(9),''),chr(13),''))",
    allow_unordered_receipts_flag          "trim(replace(replace(:allow_unordered_receipts_flag,chr(9),''),chr(13),''))",
    receiving_routing_code               "trim(replace(replace(:receiving_routing_code,chr(9),''),chr(13),''))",
    inventory_planning_code          "trim(replace(replace(:inventory_planning_code,chr(9),''),chr(13),''))",
    min_minmax_quantity               "trim(replace(replace(replace(replace(:min_minmax_quantity,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
    max_minmax_quantity               "trim(replace(replace(replace(replace(:max_minmax_quantity,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
    planning_make_buy_code               "trim(replace(replace(:planning_make_buy_code,chr(9),''),chr(13),''))",
    source_type                    "trim(replace(replace(:source_type,chr(9),''),chr(13),''))",
    mrp_safety_stock_code               "trim(replace(replace(:mrp_safety_stock_code,chr(9),''),chr(13),''))",
    material_cost                    "trim(replace(replace(:material_cost,chr(9),''),chr(13),''))",
    mrp_planning_code               "trim(replace(replace(:mrp_planning_code,chr(9),''),chr(13),''))",
    customer_order_enabled_flag          "trim(replace(replace(:customer_order_enabled_flag,chr(9),''),chr(13),''))",
    customer_order_flag               "trim(replace(replace(:customer_order_flag,chr(9),''),chr(13),''))",
    shippable_item_flag               "trim(replace(replace(:shippable_item_flag,chr(9),''),chr(13),''))",
    internal_order_flag               "trim(replace(replace(:internal_order_flag,chr(9),''),chr(13),''))",
    internal_order_enabled_flag          "trim(replace(replace(:internal_order_enabled_flag,chr(9),''),chr(13),''))",
    invoice_enabled_flag               "trim(replace(replace(:invoice_enabled_flag,chr(9),''),chr(13),''))",
    invoiceable_item_flag               "trim(replace(replace(:invoiceable_item_flag,chr(9),''),chr(13),''))",
    cross_ref_ean_code               "trim(replace(replace(:cross_ref_ean_code,chr(9),''),chr(13),''))",
    category_set_intrastat               "trim(replace(replace(:category_set_intrastat,chr(9),''),chr(13),''))",
    CustomCode                    "trim(replace(replace(:CustomCode,chr(9),''),chr(13),''))",
    net_weight                    "trim(replace(replace(replace(replace(:net_weight,chr(9),''),chr(13),''),chr(32),''),chr(10),''))",
    production_speed               "trim(replace(replace(:production_speed,chr(9),''),chr(13),''))",
    LABEL                         "trim(replace(replace(:LABEL,chr(9),''),chr(13),''))",
    comment1_org_level               "trim(replace(replace(:comment1_org_level,chr(9),''),chr(13),''))",
    comment2_org_level               "trim(replace(replace(:comment2_org_level,chr(9),''),chr(13),''))",
    std_cost_price_scala               "to_number(trim(replace(replace(replace(replace(:std_cost_price_scala,chr(9),''),chr(32),''),chr(13),''),chr(10),'')))",
    supply_type                    "trim(replace(replace(:supply_type,chr(9),''),chr(13),''))",
    subinventory_code               "trim(replace(replace(:subinventory_code,chr(9),''),chr(13),''))",
    preprocessing_lead_time          "trim(replace(replace(replace(replace(:preprocessing_lead_time,chr(9),''),chr(32),''),chr(13),''),chr(10),''))",
    processing_lead_time                "trim(replace(replace(replace(replace(:processing_lead_time,chr(9),''),chr(32),''),chr(13),''),chr(10),''))",
    wip_supply_locator               "trim(replace(replace(:wip_supply_locator,chr(9),''),chr(13),''))"
    Sample data from csv file.
    "9901-0001-35","390000","JMKL16 Pipe bend 16 mm","","JMKL16 Putkikaari 16 mm","","AI","FJE","Ea","","","","","","","","","","","","","","","","","","","","","","","","","21-21100-22200-00000-00000-00-00000-00000","0","21-11100-22110-00000-00000-00-00000-00000","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","0.1","Pull","AFTER PROD","","","Locator for Production"
    The load errors out especially on two columns:
    1) std_cost_price_scala
    2) list_price_per_unit
    Both are number columns.
    When there is data provided in them, it errors out; but if they hold null values, the records go through fine.
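    A quick way to see exactly which invisible characters a field carries, mirroring the ascii(substr(...)) checks above (JDK-only; the sample value is illustrative):

    public class HiddenCharDump {
        // Prints every character of a suspect field with its code point.
        static void dump(String field) {
            for (char c : field.toCharArray()) {
                System.out.printf("%-4s U+%04X%n",
                        Character.isISOControl(c) ? "CTRL" : String.valueOf(c), (int) c);
            }
        }

        public static void main(String[] args) {
            dump("0.21\t\r\n"); // displays as 0.21 but has length 7
        }
    }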

  • Using SQL*Loader to load a .csv file having multiple CLOBs

    Oracle 8.1.5 on Solaris 2.6
    I want to use SQL*Loader to load a .CSV file that has 4 inline CLOB columns. I shall attempt to give some information about the problem:
    1. The CLOBs are not delimited at the field level and could themselves contain commas.
    2. I cannot get the data file in any other format.
    Can anybody help me out with this? While loading LOB in predetermined size fields, is there a limit on the size?
    TIA.
    -Murali

    Thanks for the article link. The article states "...the loader can load only XMLType tables, not columns." Is this still the case with 10g R2? If so, what is the best way to workaround this problem? I am migrating data from a Sybase table that contains a TEXT column (among others) to an Oracle table that contains an XMLType column. How do you recommend I accomplish this task?
    - Ron

  • Error loading local CSV file into external table

    Hello,
    I am trying to load a CSV file located on my C:\ drive on a Windows XP system into an 'external table'. Everything used to work correctly when using Oracle XE (also installed locally on my Windows system).
    However, when trying to load the same file into an Oracle 11g R2 database on UNIX, I get the following error:
    ORA-29913: Error in executing ODCIEXTTABLEOPEN callout
    ORA-29400: data cartridge error
    error opening file ...\...\...nnnnn.log
    Please let me know if I can achieve the same functionality I had with Oracle XE.
    (Note: I cannot use SQL*Loader approach, I am invoking Oracle stored procedures from VB.Net that are attempting to load data into external tables).
    Regards,
    M.R.

    user7047382 wrote: (question quoted in full above)
    So your database is on a unix box, but the file is on your Windows desktop machine? So... how is it you are making your file (on your desktop) visible to your database (on a unix box)?

  • How to load unicode data files with fixed records lengths?

    Hi!
    To load unicode data files with fixed record lengths (in terms of characters, not bytes!) using SQL*Loader manually, I found two ways:
    Alternative 1: one record per row
    SQL*Loader control file example (without POSITION, since POSITION always refers to bytes!):
    LOAD DATA
    CHARACTERSET UTF8
    LENGTH SEMANTICS CHAR
    INFILE unicode.dat
    INTO TABLE STG_UNICODE
    TRUNCATE
    (
      A CHAR(2),
      B CHAR(6),
      C CHAR(2),
      D CHAR(1),
      E CHAR(4)
    )
    Datafile:
    001111112234444
    01NormalDExZWEI
    02ÄÜÖßêÊûÛxöööö
    03ÄÜÖßêÊûÛxöööö
    04üüüüüüÖÄxµôÔµ
    Alternative 2: variable-length records
    LOAD DATA
    CHARACTERSET UTF8
    LENGTH SEMANTICS CHAR
    INFILE unicode_var.dat "VAR 4"
    INTO TABLE STG_UNICODE
    TRUNCATE
    (
      A CHAR(2),
      B CHAR(6),
      C CHAR(2),
      D CHAR(1),
      E CHAR(4)
    )
    Datafile:
    001501NormalDExZWEI002702ÄÜÖßêÊûÛxöööö002604üuüüüüÖÄxµôÔµ
    Problems:
    Implementing these two alternatives in OWB, I encounter the following problems:
    * How to specify LENGTH SEMANTICS CHAR?
    * How to suppress the POSITION definition?
    * How to define a flat file with variable length and how to specify the number of bytes containing the length definition?
    Or is there another way that can be implemented using OWB?
    Any help is appreciated!
    Thanks,
    Carsten.

    Hi Carsten
    If you need to support the LENGTH SEMANTICS CHAR clause in an external table then one option is to use the unbound external table and capture the access parameters manually. To create an unbound external table you can skip the selection of a base file in the external table wizard. Then when the external table is edited you will get an Access Parameters tab where you can define the parameters. In 11gR2 the File to Oracle external table can also add this clause via an option.
    Cheers
    David

  • Is there a way to open CSV files with more than 255 columns?

    I have a CSV file with more than 255 columns of data.  It's a fairly standard export of social media data that shows volume of posts by day for the past year, from which I can analyze the data and publish customized charts. Very easy in Excel but I'm hitting the Numbers limit of 255 columns per table. Is there a way to work around the limitation? Perhaps splitting the CSV in two? The data shows up in the CSV file when I open via TextEdit, so it's there. Just can't access it in Numbers. And it's not very usable/useful for me in TextEdit.
    Regards,
    Tim

    You might be better off with Excel. Even if you could find a way to easily split the CSV file into two tables, it would be two tables when you want only one.  You said you want to make charts from this data.  While a series on a chart can be constructed from data in two different tables, to do so takes a few extra steps for each series on the chart.
    For a test to see if you want to proceed, make two small tables with data spanning the tables and make a chart from that data.  Make the chart the normal way using the data in the first table then repeat the following steps for each series
    Select the series in the chart
    Go to Format sidebar
    Click in the "Value" box
    Add a comma then select the data for this series from the second table
    Press Return
    If there is an easier way to do this, maybe someone else will chime in with that info.
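    If you do decide to try splitting, a rough sketch of the idea from the question (plain comma separation assumed, i.e. no quoted commas; the file names are illustrative): columns 1-255 go to part1.csv and the rest to part2.csv, keeping each part within the 255-column limit. In practice you would repeat a key column (the date, say) in both parts so the rows can be matched up.

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Arrays;
    import java.util.List;

    public class CsvColumnSplitter {
        public static void main(String[] args) throws IOException {
            List<String> lines = Files.readAllLines(Paths.get("social_media.csv"));
            try (PrintWriter p1 = new PrintWriter("part1.csv");
                 PrintWriter p2 = new PrintWriter("part2.csv")) {
                for (String line : lines) {
                    String[] cols = line.split(",", -1);
                    int cut = Math.min(255, cols.length);
                    p1.println(String.join(",", Arrays.copyOfRange(cols, 0, cut)));
                    p2.println(String.join(",", Arrays.copyOfRange(cols, cut, cols.length)));
                }
            }
        }
    }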

  • I also have a .csv file with the name of a jpeg in one column and a text description of each jpeg in a second column. Is there a way to automatically insert one jpeg (photo) and its corresponding text, each pair on one page, into a Indesign document?


    I would also recommend writing the description into the metadata. That would allow you to place a text frame above the image, and it is possible to add the meta information and file name automatically together with the image when you place it, or even in a prepared template.
    Metadata can be written easily in Bridge in the Meta File Workspace.

  • How to load a flat file with lot of records

    Hi,
    I am trying to load a flat file with hundreds of records into an apps table. When I create the process and deploy it onto the console, it asks for an input in an HTML form. Why does it ask for an input when I have specified the input file directory in my process? Is there any way around this where it just reads all the records from the flat file directly? Are custom queues in any way related to what I am about to do? Any documents on this process would be greatly appreciated. If anyone can help me on this it would be great. Thank you, guys.

    After deploying it, do you see that it is active and the status is on in the BPEL console's BPEL Process tab? It should not come up and ask for input unless you are clicking it from the Dashboard tab. Do not click it from the Dashboard. Instead you should put some files into the input directory. Wait a few seconds and you should see instances of the BPEL process created, starting to process the files asynchronously.
