Error loading local CSV file into external table

Hello,
I am trying to load a CSV file located on the C:\ drive of my Windows XP system into an 'external table'. Everything used to work correctly when using Oracle XE (also installed locally on my Windows system).
However, now that I am trying to load the same file into an Oracle 11g R2 database on UNIX, I get the following error:
ORA-29913: Error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
error opening file ...\...\...nnnnn.log
Please let me know if I can achieve the same functionality I had with Oracle XE.
(Note: I cannot use the SQL*Loader approach; I am invoking Oracle stored procedures from VB.Net that attempt to load data into external tables.)
Regards,
M.R.

user7047382 wrote:
Hello,
I am trying to load a CSV file located on the C:\ drive of my Windows XP system into an 'external table'. Everything used to work correctly when using Oracle XE (also installed locally on my Windows system).
However, now that I am trying to load the same file into an Oracle 11g R2 database on UNIX, I get the following error:
ORA-29913: Error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
error opening file ...\...\...nnnnn.log
Please let me know if I can achieve the same functionality I had with Oracle XE.
(Note: I cannot use the SQL*Loader approach; I am invoking Oracle stored procedures from VB.Net that attempt to load data into external tables.)
Regards,
M.R.

So your database is on a UNIX box, but the file is on your Windows desktop machine? So... how is it that you are making your file (on your desktop) visible to your database (on a UNIX box)?
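
For anyone hitting the same ORA-29913/ORA-29400 combination: an external table is read by the database server process itself, so the CSV (and the log/bad files) must live on the database host, in a directory the oracle OS user can read and write. A minimal sketch with hypothetical paths and names; the file would first have to be transferred to the UNIX box (e.g. via SFTP from the VB.Net client):

-- run as a privileged user; the path is on the UNIX database server
create or replace directory ext_data_dir as '/u01/app/oracle/ext_data';
grant read, write on directory ext_data_dir to my_app_user;

-- hypothetical external table over the uploaded CSV
create table emp_ext (
  empno number,
  ename varchar2(30)
)
organization external (
  type oracle_loader
  default directory ext_data_dir
  access parameters (
    records delimited by newline
    badfile ext_data_dir:'emp.bad'
    logfile ext_data_dir:'emp.log'
    fields terminated by ',' optionally enclosed by '"'
  )
  location ('emp.csv')
)
reject limit unlimited;

The "error opening file ...nnnnn.log" line in the stack usually means the server could not even write its log file, i.e. the directory path does not exist on the server or the OS permissions are wrong.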

Similar Messages

  • Loading a CSV file into a table

    HTML DB Data Workshop has a feature to load a CSV file and create a table based on it.
    I use a simplified version of this feature in my applications.
    See a quick demo at
    http://htmldb.oracle.com/pls/otn/f?p=38131:1
    Create a small csv file like
    col1,col2,col3
    varchar2(10),number,"number(10,2)"
    cat,2,3.2
    dog,99,10.4
    The first line in the file contains the column names;
    the second line contains their datatypes.
    Save the file as c:\test.csv
    Click the Browse button, search for this file.
    Click the Submit button to upload and parse the file.
    You can browse the file contents.
    If you like what you see, type in a table name and click the Create Table button to create a table.
    Let me know if anyone is interested and I can post the code.
    Hope this helps.
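    For the curious, the create-table step of such a loader can be as small as the sketch below (not the actual application code; it assumes the two header lines have already been split into matching name/type collections):
      declare
        type t_vc is table of varchar2(100);
        l_names t_vc := t_vc('col1', 'col2', 'col3');
        l_types t_vc := t_vc('varchar2(10)', 'number', 'number(10,2)');
        l_ddl   varchar2(4000) := 'create table test_csv (';
      begin
        for i in 1 .. l_names.count loop
          -- append "name type", comma-separated
          l_ddl := l_ddl || case when i > 1 then ', ' end
                         || l_names(i) || ' ' || l_types(i);
        end loop;
        l_ddl := l_ddl || ')';
        execute immediate l_ddl;
      end;
      /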
    Message was edited by:
    Vikas

    Hi Vikas,
    I tried to import your COOL application into html db at http://htmldb.oracle.com and got the error:
    "Expecting p_company or wwv_flow_company cookie to contain security group id of application owner.
    Error ERR-7621 Could not determine workspace for application (:) on application accept.
    OK "
    I get the SAME error when trying to import Scott's Photo Catalog at http://htmldb.oracle.com/pls/otn/f?p=18326:7:6703988766288646534::::P7_ID:2242.
    What should I change in the exported script, in addition to changing your workspace schema VIKASA (or Scott's STUDIO_SHOWCASE) to my NICED68?
    Thanks!
    DC

  • Loading a CSV file into a table as in Data Workshop

    Data Workshop has a feature to load a CSV file and create a table based on it; similarly, I want to offer this in my application.
    I have gone through the forum thread http://forums.oracle.com/forums/thread.jspa?threadID=334988&start=60&tstart=0
    but could not download all the files (application, HTMLDB_TOOLS package and the PAGE_SENTRY function); I couldn't find the PAGE_SENTRY function.
    And when I open this link http://apex.oracle.com/pls/apex/f?p=27746
    I couldn't run the application. I provided a CSV file and when I clicked SUBMIT, I got the error:
    ORA-06550: line 1, column 7: PLS-00201: identifier 'HTMLDB_TOOLS.PARSE_FILE' must be declared
    I tried it on the apex.oracle.com host as given in the previous post.
    Any help please.
    Is there any other method to load data into tables (similar to Data Workshop)?

    Hi Jari,
    I'm using HTMLDB_TOOLS to parse my CSV files. It works well for creating a report, but if I want to create a new table and upload the data, it gives the error "missing right parenthesis".
    I've been looking through the package body and I think there is some problem in the code:
            IF (p_table_name is not null)
            THEN
              BEGIN
                execute immediate 'drop table '||p_table_name;
              EXCEPTION
                WHEN OTHERS THEN NULL;
              END;
              l_ddl := 'create table '||p_table_name||' '||v(p_ddl_item);
              htmldb_util.set_session_state('P149_DEBUG',l_ddl);
              execute immediate l_ddl;
              l_ddl := 'insert into '||p_table_name||' '||
                       'select '||v(p_columns_item)||' '||
                       'from htmldb_collections '||
                       'where seq_id > 1 and collection_name='''||p_collection_name||'''';
              htmldb_util.set_session_state('P149_DEBUG',v('P149_DEBUG')||'/'||l_ddl);
              execute immediate l_ddl;
              RETURN;
        END IF;it is droping table but not creating new table.
    P149_DEBUG contains create table EMP_D (Emp ID number(10),Name varchar2(20),Type varchar2(20),Join Date varchar2(20),Location varchar2(20))and if i comment the
              BEGIN
                execute immediate 'drop table '||p_table_name;
              EXCEPTION
                WHEN OTHERS THEN NULL;
              END;
              l_ddl := 'create table '||p_table_name||' '||v(p_ddl_item);
              htmldb_util.set_session_state('P149_DEBUG',l_ddl);
              execute immediate l_ddl;then it is working well, i.e i enter the exsisting table name then i works fine, inserting the rows.
    but I am unable to create a new table.
    Can you please help to fix it?
    Regards,
    Little Foot
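
    A plausible cause, though it is not confirmed in this thread: the "missing right parenthesis" error comes from the generated DDL itself, because the column names Emp ID and Join Date contain spaces and are not quoted. Unquoted identifiers with spaces do not parse; quoting them (or replacing spaces with underscores before building the DDL) gives a statement Oracle accepts:
      create table EMP_D (
        "Emp ID"    number(10),
        "Name"      varchar2(20),
        "Type"      varchar2(20),
        "Join Date" varchar2(20),
        "Location"  varchar2(20)
      );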

  • Loading multiple .csv files into a table using SSIS

    Hi,
    I have a requirement where I have 200+ CSV files to be loaded into a Netezza table using SSIS.
    The issue I am facing is that the files have different numbers of columns; for example, file 1 has columns A,B,C and file 2 has columns C,D,E. My target table has all columns from A to E in it.
    But when I use a For Each Loop container, only the file whose filepath+filename I have specified in the loop variable gets loaded. No data is loaded from any of the other files, yet the package executes successfully.
    Any help is appreciated.
    Regards,
    VT

    If you want to iterate through files, then all files should be in the same folder and you should use the file enumerator type within the ForEach loop. Then, inside the loop, you might need a script task to generate the data flow on the fly, based on the available input columns from the file, and do the mapping in the destination. I assume you put NULLs (or some default value) for the columns missing from a file.
    http://blog.quasarinc.com/ssis/best-solution-to-load-dynamically-change-csv-file-in-ssis-etl-package/

  • Loading data from .csv file into existing table

    Hi,
    I have taken a look at several threads which talk about loading data from a .csv file into an existing/new table. I also checked out Vikas's application regarding the same. I will try to explain my requirement with an example.
    I have a .csv file and I want the data to be loaded into an existing table. The timesheet table columns are -
    timesheet_entry_id,time_worked,timesheet_date,project_key .
    The csv columns are :
    project,utilization,project_key,timesheet_category,employee,timesheet_date , hours_worked etc.
    What I need to know is: before the CSV data is loaded into the timesheet table, is there any way of validating the project_key (which is the primary key of the projects table) against the projects table? I need to perform similar validations with other columns, like customer_id from the customers table. Basically, the loading should be done only after validating that the data exists in the parent table. Has anyone done this kind of loading through the APEX data-load utility? Or is there another method of accomplishing the same?
    Does Vikas's application do what the utility does (I am assuming that, the code being from 2005, the utility was not incorporated in APEX at that time)? Any helpful advice is greatly appreciated.
    Thanks,
    Anjali

    Hi Anjali,
    Take a look at these threads which might outline different ways to do it -
    File Browse, File Upload
    Loading CSV file using external table
    Loading a CSV file into a table
    You can create hidden items on the page to validate records before inserting the data; see the sketch below.
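    A minimal server-side version of that validation (a sketch; csv_stage and timesheet_entry_seq are hypothetical names, the other names come from the question):
      -- load only rows whose project_key exists in PROJECTS
      insert into timesheet (timesheet_entry_id, time_worked, timesheet_date, project_key)
      select timesheet_entry_seq.nextval, s.hours_worked, s.timesheet_date, s.project_key
      from   csv_stage s
      where  exists (select 1 from projects p where p.project_key = s.project_key);
    The same pattern extends to customer_id against the customers table, and rows failing the EXISTS test can be routed to a rejects table for review.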
    Hope this helps,
    M Tajuddin
    http://tajuddin.whitepagesbd.com

  • Loading data from .csv file into Oracle Table

    Hi,
    I have a requirement where I need to populate data from a .csv file into an Oracle table.
    Is there a mechanism I can follow for this?
    Any help will be fruitful.
    Thanks and regards

    You can use SQL*Loader or external tables for your requirement; for example:
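    A minimal SQL*Loader control file sketch (file, table and column names are placeholders):
      -- load_data.ctl
      LOAD DATA
      INFILE 'data.csv'
      APPEND INTO TABLE target_table
      FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
      TRAILING NULLCOLS
      (col1, col2, col3 DATE "DD-MON-YYYY")
    It would be run from the command line with something like sqlldr userid=scott control=load_data.ctl log=load_data.log. An external table achieves the same from inside the database (see the external-table example earlier on this page).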
    Missed Karthick's post ... already there :)
    Edited by: Rajneesh Kumar on Dec 4, 2008 10:54 AM

  • How do I import my local CSV files into a HANA database as temporary tables using a Java program?

    I want to import my local CSV file into the database so that I can then join it with other tables programmatically in Java.

    Hi Vivek,
    Please go through the following blogs and video to resolve your issue.
    Loading large CSV files to SAP HANA
    HANA Academy - Importing Data using CTL Method - YouTube
    Hope it helps.
    Regards
    Kumar

  • How to load an XML file into a table

    Hi,
    I've been working with Oracle for many years, but for the first time I have been asked to load an XML file into a table.
    As an example, I found this on the web, but it doesn't work.
    Can someone tell me why? I hoped this example could help me.
    the file acct.xml is this:
    <?xml version="1.0"?>
    <ACCOUNT_HEADER_ACK>
    <HEADER>
    <STATUS_CODE>100</STATUS_CODE>
    <STATUS_REMARKS>check</STATUS_REMARKS>
    </HEADER>
    <DETAILS>
    <DETAIL>
    <SEGMENT_NUMBER>2</SEGMENT_NUMBER>
    <REMARKS>rp polytechnic</REMARKS>
    </DETAIL>
    <DETAIL>
    <SEGMENT_NUMBER>3</SEGMENT_NUMBER>
    <REMARKS>rp polytechnic administration</REMARKS>
    </DETAIL>
    <DETAIL>
    <SEGMENT_NUMBER>4</SEGMENT_NUMBER>
    <REMARKS>rp polytechnic finance</REMARKS>
    </DETAIL>
    <DETAIL>
    <SEGMENT_NUMBER>5</SEGMENT_NUMBER>
    <REMARKS>rp polytechnic logistics</REMARKS>
    </DETAIL>
    </DETAILS>
    <HEADER>
    <STATUS_CODE>500</STATUS_CODE>
    <STATUS_REMARKS>process exception</STATUS_REMARKS>
    </HEADER>
    <DETAILS>
    <DETAIL>
    <SEGMENT_NUMBER>20</SEGMENT_NUMBER>
    <REMARKS> base polytechnic</REMARKS>
    </DETAIL>
    <DETAIL>
    <SEGMENT_NUMBER>30</SEGMENT_NUMBER>
    </DETAIL>
    <DETAIL>
    <SEGMENT_NUMBER>40</SEGMENT_NUMBER>
    <REMARKS> base polytechnic finance</REMARKS>
    </DETAIL>
    <DETAIL>
    <SEGMENT_NUMBER>50</SEGMENT_NUMBER>
    <REMARKS> base polytechnic logistics</REMARKS>
    </DETAIL>
    </DETAILS>
    </ACCOUNT_HEADER_ACK>
    For the two tags HEADER and DETAILS I have the table:
    create table xxrp_acct_details(
    status_code number,
    status_remarks varchar2(100),
    segment_number number,
    remarks varchar2(100)
    );
    Beforehand, I had created a
    create directory test_dir as 'c:\esterno'; -- where I have my acct.xml
    And now, can you give me a script for loading the data by using XMLTABLE?
    I've tried this but it doesn't work:
    DECLARE
    acct_doc xmltype := xmltype( bfilename('TEST_DIR','acct.xml'), nls_charset_id('AL32UTF8') );
    BEGIN
    insert into xxrp_acct_details (status_code, status_remarks, segment_number, remarks)
    select x1.status_code,
            x1.status_remarks,
            x2.segment_number,
            x2.remarks
    from xmltable(
      '/ACCOUNT_HEADER_ACK/HEADER'
      passing acct_doc
      columns header_no      for ordinality,
              status_code    number        path 'STATUS_CODE',
              status_remarks varchar2(100) path 'STATUS_REMARKS'
    ) x1,
    xmltable(
      '$d/ACCOUNT_HEADER_ACK/DETAILS[$hn]/DETAIL'
      passing acct_doc as "d",
              x1.header_no as "hn"
      columns segment_number number        path 'SEGMENT_NUMBER',
              remarks        varchar2(100) path 'REMARKS'
    ) x2;
    END;
    This should allow me to get something like this:
    select * from xxrp_acct_details;
    STATUS_CODE  STATUS_REMARKS     SEGMENT_NUMBER  REMARKS
    100 check 2 rp polytechnic
    100 check 3 rp polytechnic administration
    100 check 4 rp polytechnic finance
    100 check 5 rp polytechnic logistics
    500 process exception 20 base polytechnic
    500 process exception 30
    500 process exception 40 base polytechnic finance
    500 process exception 50 base polytechnic logistics
    but I get:
    Error report:
    ORA-06550: line 19, column 11:
    PL/SQL: ORA-00932: inconsistent datatypes: expected - got NUMBER
    ORA-06550: line 4, column 2:
    PL/SQL: SQL Statement ignored
    06550. 00000 -  "line %s, column %s:\n%s"
    *Cause:    Usually a PL/SQL compilation error.
    and if I try to change the script without using the column HEADER_NO to keep track of the header rank inside the document:
    DECLARE
    acct_doc xmltype := xmltype( bfilename('TEST_DIR','acct.xml'), nls_charset_id('AL32UTF8') );
    BEGIN
    insert into xxrp_acct_details (status_code, status_remarks, segment_number, remarks)
    select x1.status_code,
            x1.status_remarks,
            x2.segment_number,
            x2.remarks
    from xmltable(
      '/ACCOUNT_HEADER_ACK/HEADER'
      passing acct_doc
      columns status_code    number        path 'STATUS_CODE',
              status_remarks varchar2(100) path 'STATUS_REMARKS'
    ) x1,
    xmltable(
      '/ACCOUNT_HEADER_ACK/DETAILS'
      passing acct_doc
      columns segment_number number        path 'SEGMENT_NUMBER',
              remarks        varchar2(100) path 'REMARKS'
    ) x2;
    END;
    I get this message:
    Error report:
    ORA-19114: error during parsing the XQuery expression: 
    ORA-06550: line 1, column 13:
    PLS-00201: identifier 'SYS.DBMS_XQUERYINT' must be declared
    ORA-06550: line 1, column 7:
    PL/SQL: Statement ignored
    ORA-06512: at line 4
    19114. 00000 -  "error during parsing the XQuery expression: %s"
    *Cause:    An error occurred during the parsing of the XQuery expression.
    *Action:   Check the detailed error message for the possible causes.
    My Oracle version is 10gR2 Express Edition.
    I need a script for loading XML files into a table as soon as possible. Please give me a simple example that is easy to understand and that works on 10gR2 Express Edition.
    Thanks in advance!

    The reason your first SQL statement
    select x1.status_code,
            x1.status_remarks,
            x2.segment_number,
            x2.remarks
    from xmltable(
      '/ACCOUNT_HEADER_ACK/HEADER'
      passing acct_doc
      columns header_no      for ordinality,
              status_code    number        path 'STATUS_CODE',
              status_remarks varchar2(100) path 'STATUS_REMARKS'
    ) x1,
    xmltable(
      '$d/ACCOUNT_HEADER_ACK/DETAILS[$hn]/DETAIL'
      passing acct_doc as "d",
              x1.header_no as "hn"
      columns segment_number number        path 'SEGMENT_NUMBER',
              remarks        varchar2(100) path 'REMARKS'
    ) x2
    returns the error you noticed
    PL/SQL: ORA-00932: inconsistent datatypes: expected - got NUMBER
    is because Oracle is expecting XML to be passed in.  At the moment I forget if it requires a certain format or not, but it is simply expecting the value to be wrapped in simple XML.
    Your query actually runs as is on 11.1 as Oracle changed how that functionality worked when 11.1 was released.  Your query runs slowly, but it does run.
    As you are dealing with groups, is there any way the input XML can be modified to be like
    <ACCOUNT_HEADER_ACK>
    <ACCOUNT_GROUP>
    <HEADER>....</HEADER>
    <DETAILS>....</DETAILS>
    </ACCOUNT_GROUP>
      <ACCOUNT_GROUP>
      <HEADER>....</HEADER>
      <DETAILS>....</DETAILS>
      </ACCOUNT_GROUP>
    </ACCOUNT_HEADER_ACK>
    so that it is easier to associate a HEADER/DETAILS combination?  If so, it would make parsing the XML much easier.
    Assuming the answer is no, here is one hack to accomplish your goal
    select x1.status_code,
            x1.status_remarks,
            x3.segment_number,
            x3.remarks
    from xmltable(
      '/ACCOUNT_HEADER_ACK/HEADER'
      passing acct_doc
      columns header_no      for ordinality,
              status_code    number        path 'STATUS_CODE',
              status_remarks varchar2(100) path 'STATUS_REMARKS'
    ) x1,
    xmltable(
      '$d/ACCOUNT_HEADER_ACK/DETAILS'
      passing acct_doc as "d",
      columns detail_no      for ordinality,
              detail_xml     xmltype       path 'DETAIL'
    ) x2,
    xmltable(
      'DETAIL'
      passing x2.detail_xml
      columns segment_number number        path 'SEGMENT_NUMBER',
              remarks        varchar2(100) path 'REMARKS') x3
    WHERE x1.header_no = x2.detail_no;
    This follows the approach you started with.  Table x1 creates a row for each HEADER node and table x2 creates a row for each DETAILS node.  It assumes there is always a one and only one association between the two.  I use table x3, which is joined to x2, to parse the many DETAIL nodes.  The WHERE clause then joins each header row to the corresponding details row and produces the eight rows you are seeking.
    There is another approach that I know of, and that would be using XQuery within the XMLTable.  It should require using only one XMLTable but I would have to spend some time coming up with that solution and I can't recall whether restrictions exist in 10gR2 Express Edition compared to what can run in 10.2 Enterprise Edition for XQuery.
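    For reference, that single-XMLTable XQuery variant could look roughly like the following (an untested sketch; on 10gR2 Express Edition it may fail with the same DBMS_XQUERYINT error reported above, in which case the three-XMLTable join is the safer route):
      select x.status_code, x.status_remarks, x.segment_number, x.remarks
      from xmltable(
        'for $h at $i in $d/ACCOUNT_HEADER_ACK/HEADER,
             $dt in $d/ACCOUNT_HEADER_ACK/DETAILS[$i]/DETAIL
         return element r {
           $h/STATUS_CODE, $h/STATUS_REMARKS,
           $dt/SEGMENT_NUMBER, $dt/REMARKS
         }'
        passing acct_doc as "d"
        columns status_code    number        path 'STATUS_CODE',
                status_remarks varchar2(100) path 'STATUS_REMARKS',
                segment_number number        path 'SEGMENT_NUMBER',
                remarks        varchar2(100) path 'REMARKS'
      ) x;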

  • Question about reading csv file into internal table

    Someone in this forum (thanks, nice guys!) suggested that I use FM KCD_CSV_FILE_TO_INTERN_CONVERT to read a CSV file into an internal table. However, it can be used to read a local file only.
    I would like to ask how I can read a CSV file from the application server into an internal table.
    I cannot simply use SPLIT, as there may be commas in the content, e.g.
    "abc","aaa,ab",10,"bbc"
    My expected output:
    abc
    aaa,ab
    10
    bbc
    Thanks again for your help.

    Hi Gundam,
    Try this code. I have made a custom parser to read the details in the record and split them accordingly. I have also tested it with your provided test cases and it works fine.
    OPEN DATASET dsn FOR INPUT IN TEXT MODE ENCODING DEFAULT.
    DO.
      READ DATASET dsn INTO record.
      IF sy-subrc <> 0.
        EXIT. " stop at end of file
      ENDIF.
      PERFORM parser USING record.
    ENDDO.
    CLOSE DATASET dsn.
    *DATA str(32) VALUE '"abc",10,"aaa,ab","bbc"'.
    *DATA str(32) VALUE '"abc","aaa,ab",10,"bbc"'.
    *DATA str(32) VALUE '"a,bc","aaaab",10,"bbc"'.
    *DATA str(32) VALUE '"abc","aaa,ab",10,"b,bc"'.
    *DATA str(32) VALUE '"abc","aaaab",10,"bbc"'.
    FORM parser USING str.
    DATA field(12).
    DATA field1(12).
    DATA field2(12).
    DATA field3(12).
    DATA field4(12).
    DATA cnt TYPE i.
    DATA len TYPE i.
    DATA temp TYPE i.
    DATA start TYPE i.
    DATA quote TYPE i.
    DATA rec_cnt TYPE i.
    len = strlen( str ).
    cnt = 0.
    temp = 0.
    rec_cnt = 0.
    DO.
    *  Start at the beginning
      IF start EQ 0.
        "string just ENDED start new one.
        start = 1.
        quote = 0.
        CLEAR field.
      ENDIF.
      IF str+cnt(1) EQ '"'.  "Check for quotes
        "Check if the quote flag is already set
        IF quote = 1.
          "Already quotes set
          "Start new field
          start = 0.
          quote = 0.
          CONCATENATE field '"' INTO field.
          IF field IS NOT INITIAL.
            rec_cnt = rec_cnt + 1.
            CONDENSE field.
            IF rec_cnt EQ 1.
              field1 = field.
            ELSEIF rec_cnt EQ 2.
              field2 = field.
            ELSEIF rec_cnt EQ 3.
              field3 = field.
            ELSEIF rec_cnt EQ 4.
              field4 = field.
            ENDIF.
          ENDIF.
    *      WRITE field.
        ELSE.
          "This is the start of quotes
          quote = 1.
        ENDIF.
      ENDIF.
      IF str+cnt(1) EQ ','. "Check end of field
        IF quote EQ 0. "This is not inside quote end of field
          start = 0.
          quote = 0.
          CONDENSE field.
    *      WRITE field.
          IF field IS NOT INITIAL.
            rec_cnt = rec_cnt + 1.
            IF rec_cnt EQ 1.
              field1 = field.
            ELSEIF rec_cnt EQ 2.
              field2 = field.
            ELSEIF rec_cnt EQ 3.
              field3 = field.
            ELSEIF rec_cnt EQ 4.
              field4 = field.
            ENDIF.
          ENDIF.
        ENDIF.
      ENDIF.
      CONCATENATE field str+cnt(1) INTO field.
      cnt = cnt + 1.
      IF cnt GE len.
        EXIT.
      ENDIF.
    ENDDO.
    WRITE: field1, field2, field3, field4.
    ENDFORM.
    Regards,
    Wenceslaus.

  • Getting an issue while uploading a CSV file into an internal table

    Hi,
    The CSV file data format is as below:
    a            b            c           d            e      f
    2.01E14      29-Sep-08    13:44:19    2.01E14      SELL   T+1
    The actual value of column A is 201000000000000,
    and that of column D is 201000000035690.
    I am uploading the above CSV file into an internal table using
    the code below:
    TYPES: BEGIN OF TY_INTERN.
            INCLUDE STRUCTURE  KCDE_CELLS.
    TYPES: END OF TY_INTERN.
    CALL FUNCTION 'KCD_CSV_FILE_TO_INTERN_CONVERT'
        EXPORTING
          I_FILENAME      = P_FILE
          I_SEPARATOR     = ','
        TABLES
          E_INTERN        = T_INTERN
        EXCEPTIONS
          UPLOAD_CSV      = 1
          UPLOAD_FILETYPE = 2
          OTHERS          = 3.
      IF SY-SUBRC <> 0.
        MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
                WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      ENDIF.
    I am getting all the columns' data into the internal table; the problem is with columns A and D: for both I get 2.01E+14. How can I get the actual values without modifying the CSV file format?
    Waiting for your reply...
    thanks & regards,
    abhi

    Hi Saurabh,
    Thanks for your reply.
    Even so, I can't double-click on those columns,
    because the program needs to be executed in the background; there can be lots of CSV files in one folder, and there must be no manual interaction on those CSV files.
    regards,
    abhi

  • Example of loading a CSV file into DIAdem from a LabVIEW application

    Hi everyone, I'm using LabVIEW 8.2 and DIAdem 10.1.
    I've been searching in the NI Example Finder but I have had no luck so far.
    I have already downloaded the LabVIEW connectivity VIs.
    Can anyone provide an example that can help me load a CSV file into DIAdem from a LabVIEW application?
    Thanks

    Hi Alexandre.
    I attach an example for you.
    Best Regards.
    Message edited by R_Duval on 01-15-2008 02:44 PM
    Romain D.
    National Instruments France
    Attachments:
    Classeur1.csv ‏1 KB
    Load CSV to Diadem.vi ‏15 KB

  • Uploading CSV file into internal table

    Hi,
    I want to upload a CSV file into an internal table. The flat file has values as below:
    'AAAAA','2003-10-11 07:52:37','167','Argentina',NULL,NULL,NULL,NULL,NULL,'MX1',NULL,NULL,'AAAA BBBB',NULL,NULL,NULL,'1',NULL,NULL,'AR ',NULL,NULL,NULL,'ARGENT','M1V','MX1',NULL,NULL,'F','F','F','F','F',NULL,'1',NULL,'MX','MMI ',NULL
    'jklhg','2004-06-25 08:01:57','456','hjllajsdk','MANAGUA   ',NULL,NULL,'265-5139','266-5136 al 38','MX1',NULL,NULL,'hjgkid GRÖBER','sdfsdf dfs asdfsdf 380 ad ased,','200 as ads, sfd sfd abajao y 50 m al sdf',NULL,'1',NULL,NULL,'NI ',NULL,NULL,NULL,'sdfdfg','M1V','dds',NULL,NULL,
    Here I cannot even split at ',' because some of the values are NULL and some values contain commas themselves.
    The delimiter is a quote and the separator is a comma here.
    Can anyone help on this?
    Thanks.
    Edited by: Ginger on Jun 29, 2009 9:08 AM

    As long as there can be a comma in a text literal, you are right that the SPLIT command doesn't help. However, there is one way to attack this, under two assumptions:
    - A comma outside a text delimiter is always considered a separator
    - A comma inside a text delimiter is always considered a comma that is part of the text
    You have to read your file line by line, then travel along the line string character by character, setting a flag for the text delimiters:
    e.g.
    "Text","Text1, Text2",NULL,NULL,"Text"
    String Index  1: EQ " => lv_delimiter = 'X'
    String Index  2: EQ T => text literal (because lv_delimiter = 'X')
    String Index  3: EQ e => text literal (because lv_delimiter = 'X')
    String Index  4: EQ x => text literal (because lv_delimiter = 'X')
    String Index  5: EQ t => text literal (because lv_delimiter = 'X')
    String Index  6: EQ " => lv_delimiter = ' ' (because it was 'X' before)
    String Index  7: EQ , => This is a separator because lv_delimiter = ' '
    String Index  8: EQ " => lv_delimiter = 'X' (because it was ' ' before)
    String Index  9: EQ T => text literal (because lv_delimiter = 'X')
    String Index 10: EQ e => text literal (because lv_delimiter = 'X')
    String Index 11: EQ x => text literal (because lv_delimiter = 'X')
    String Index 12: EQ t => text literal (because lv_delimiter = 'X')
    String Index 13: EQ 1 => text literal (because lv_delimiter = 'X')
    String Index 14: EQ , => text literal (because lv_delimiter = 'X')
    String Index 15: EQ T => text literal (because lv_delimiter = 'X')
    Whenever you hit a 'real' separator (lv_delimiter = ' '), you pass the substring from the previous separator up to that point into the next structure field.
    This is not an easy way to do it, but if you can have commas in your text literals, plus NULL values, I guess it is probably the only way to go.
    Hope that helps,
    Michael
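
    The delimiter-flag walk described above is language-agnostic; since most of this page is Oracle-flavoured, here is the same idea as a compact PL/SQL sketch (an illustration of the algorithm, not ABAP):
      declare
        l_line  varchar2(4000) := '"Text","Text1, Text2",NULL,NULL,"Text"';
        l_field varchar2(4000);
        l_char  varchar2(1 char);
        l_inq   boolean := false;            -- the text-delimiter flag
      begin
        for i in 1 .. length(l_line) loop
          l_char := substr(l_line, i, 1);
          if l_char = '"' then
            l_inq := not l_inq;              -- toggle on every delimiter
          elsif l_char = ',' and not l_inq then
            dbms_output.put_line(l_field);   -- real separator: field done
            l_field := null;
          else
            l_field := l_field || l_char;    -- character belongs to the field
          end if;
        end loop;
        dbms_output.put_line(l_field);       -- emit the last field
      end;
      /
    It prints Text / Text1, Text2 / NULL / NULL / Text, one field per line.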

  • Loading an XML file into a table without creating a directory

    Hi,
    I want to load an XML file into a table column, but I should not create a directory on the server, place the XML file there, and give the path in the insert query. Can anybody help me here?
    Thanks in advance.

    You could write a Java stored procedure that reads the file into a CLOB. Wrap that in a function call and use it in your insert statement.
    This solution requires read privileges granted by SYS and is therefore only feasible if the top-level directory/directories are known or you get read access to everything.
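    The PL/SQL side of that suggestion is a call specification, roughly like this (FileUtil.readAsClob is a hypothetical Java method that you would still have to write, load with loadjava, and authorise via dbms_java.grant_permission):
      create or replace function read_file_as_clob (p_path in varchar2)
        return clob
      as language java
        name 'FileUtil.readAsClob(java.lang.String) return java.sql.Clob';
      /
      -- then, for example:
      -- insert into my_xml_tab (doc) values (xmltype(read_file_as_clob('/tmp/acct.xml')));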

  • Need to merge a CSV file into a main table using external tables

    Hi,
    I have a CSV file which contains a date (with time stamp), column1 (number), column2 (number) and column3 (number). I am using the external tables concept to load the data from the CSV into the external table and then merge it into the main table. The problem here is that the CSV file is system generated and nothing in it can be edited. Our aim is to automate this process of loading data from the CSV into the table. In this CSV the date time stamp is not in the proper format: the date is not visible and only minutes and seconds are visible. By changing the format in the CSV manually this can be overcome, but we do not want any manual intervention.
    How can I overcome this problem? Please help me...
    The Excel data looks like:
    (PDH-TSV 4.0) (India Standard Time)(-330)     \\DISAPPSER01\Processor(_Total)\% Privileged Time     \\DISAPPSER01\Processor(_Total)\% Processor Time     \\DISAPPSER01\Web Service(_Total)\Current Connections
    56:59.0               47
    57:09.0     0.72379582     4.204561281     46
    57:19.0     0.916548537     4.006179927     44
    57:29.0     0.663034771     3.674662541     43
    57:39.0     0.750789844     4.093933999     42
    57:49.0     0.721538487     2.650858026     40
    57:59.0     0.594781604     3.333393703     40

    Please format your sample data, giving headers to the columns, so that we can make sense of the values. Also, since only minutes and seconds are given, what date is to be considered for the records to be moved to the master table: sysdate, or will the date be passed as a parameter?

  • Load CSV file into a table when a button is clicked by the user

    Hello,
    Can anyone please help me out with this issue? I have a form where a user uploads a CSV file and clicks a button; when the button is clicked, it should load the CSV file data into the database table, into the corresponding columns.
    Can anyone please suggest a possible solution or an approach?
    Thanks,
    Orton
    Edited by: orton607 on May 5, 2010 2:00 PM

    Thanks for replying.
    I have tried your changes but it's not working. One more question: I have one column which contains commas, and when I tried to load the file it failed. I think it's a problem with the commas. So I changed the code to use the REPLACE function for that column, but it's still not working. Can anyone please suggest a possible approach? Below is my source code for your reference.
    DECLARE
    v_blob_data BLOB;
    v_blob_len NUMBER;
    v_position NUMBER;
    v_raw_chunk RAW(10000);
    v_char CHAR(1);
    c_chunk_len NUMBER := 1;
    v_line VARCHAR2 (32767):= NULL;
    v_data_array wwv_flow_global.vc_arr2;
    v_rows NUMBER;
    v_sr_no NUMBER := 1;
    l_cnt BINARY_INTEGER := 0;
    l_stepid NUMBER := 10;
    BEGIN
    delete from sample_tbl;
    -- Read data from wwv_flow_files
    select blob_content into v_blob_data from wwv_flow_files
    where last_updated = (select max(last_updated) from wwv_flow_files where UPDATED_BY = :APP_USER)
    and id = (select max(id) from wwv_flow_files where updated_by = :APP_USER);
    v_blob_len := dbms_lob.getlength(v_blob_data);
    v_position := 1;
    -- Read and convert binary to char
    WHILE ( v_position <= v_blob_len ) LOOP
    v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);
    v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
    v_line := v_line || v_char;
    v_position := v_position + c_chunk_len;
    -- When a whole line is retrieved
    IF v_char = CHR(10) THEN
    -- Convert comma to : to use wwv_flow_utilities
    v_line := REPLACE (v_line, ',', ':');
    -- Convert each column separated by : into array of data
    v_data_array := wwv_flow_utilities.string_to_table (v_line);
    -- Insert data into target table
    EXECUTE IMMEDIATE 'insert into sample_tbl(col1..col12)
    values (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12)'
    USING
    v_sr_no,
    v_data_array(1),
    v_data_array(2),
    v_data_array(3),
    v_data_array(4),
    v_data_array(5),
    v_data_array(6),
    v_data_array(7),
    v_data_array(8),
    REPLACE(v_data_array(9), ':', ','),
    to_date(v_data_array(10),'MM/DD/YYYY'),
    v_data_array(11);
    -- Clear out
    v_line := NULL;
    v_sr_no := v_sr_no + 1;
    l_cnt := l_cnt + SQL%ROWCOUNT;
    END IF;
    END LOOP;
    COMMIT;
    l_stepid := 20;
    IF l_cnt = 0 THEN
    apex_application.g_print_success_message := apex_application.g_print_success_message || ' Please select a file to upload ' ;
    ELSE
    apex_application.g_print_success_message := apex_application.g_print_success_message || 'File uploaded and processed ' || l_cnt || ' record(s) successfully.';
    END IF;
    l_stepid := 30;
    EXCEPTION WHEN OTHERS THEN
    ROLLBACK;
    apex_application.g_print_success_message := apex_application.g_print_success_message || 'Failed to upload the file. '||REGEXP_REPLACE(SQLERRM,'[('')(<)(>)(,)(;)(:)(")('')]{1,}', '') ;
    END;
    Below is the function which I am using to convert hex to decimal
    create or replace function hex_to_decimal( p_hex_str in varchar2 ) return number
    is
    v_dec number;
    v_hex varchar2(16) := '0123456789ABCDEF';
    begin
    v_dec := 0;
    for indx in 1 .. length(p_hex_str)
    loop
    v_dec := v_dec * 16 + instr(v_hex,upper(substr(p_hex_str,indx,1)))-1;
    end loop;
    return v_dec;
    end hex_to_decimal;
    thanks,
    Orton
