Processing Several Records in a CSV File

Hello Experts!
I'm currently using XI to process an incoming CSV file containing Accounts Payable information. The data from the CSV file is used to call a BAPI in ECC (BAPI_ACC_DOCUMENT_POST), and the return messages are written to a text file. I'm also using BPM. So far, I've been able to get everything to work: financial documents are successfully created in ECC, and I receive the success message in my return text file.
I am, however, having one small problem... No matter how many records are in the CSV file, XI only processes the very first record. So my question is this: why isn't XI processing all the records? Do I need a loop in my BPM? Are there occurrence settings that I'm missing? I kinda figured XI would simply process each record in the file.
Also, are there some good examples out there that show me how this is done?
Thanks a lot, I more than appreciate any help!

Matthew,
First, let me explain the BPM steps:
Receive ---> Transformation1 ---> For-Each Block ---> Transformation2 ---> Synchronous Call ---> Container (to append the response from the BAPI) ---> Transformation3 ---> Send
Transformation3 and Send must be outside the Block.
Transformation1
Here, the source and target must be the same. I assume you already know how to split messages; if not, see the example below.
Source
<MT_Input>
<Records>
<Field1>Value1</Field1>
<Field2>Value1</Field2>
</Records>
<Records>
<Field1>Value2</Field1>
<Field2>Value2</Field2>
</Records>
<Records>
<Field1>Value3</Field1>
<Field3>Value3</Field3>
</Records>
</MT_Input>
Now I need to split the message, one for each Records node. How can I do that?
In Message Mapping, choose the same structure as source and target, and in the Messages tab set the target occurrence to 0..unbounded.
Now, if you go to the Mapping tab, you can see a Messages tag added to your structure, and the <MT_Input> occurrence changed to 0..unbounded.
Here is the logic:
Map Records to MT_Input.
Map a Constant (empty) to Records.
Map the rest of the fields directly. Your output now looks like:
<Messages>
<Message1>
<MT_Input>
<Records>
<Field1>Value1</Field1>
<Field2>Value1</Field2>
</Records>
</MT_Input>
<MT_Input>
<Records>
<Field1>Value2</Field1>
<Field2>Value2</Field2>
</Records>
</MT_Input>
<MT_Input>
<Records>
<Field1>Value3</Field1>
<Field3>Value3</Field3>
</Records>
</MT_Input>
</Message1>
</Messages>
raj.

Similar Messages

  • Split records into Multiple csv files using a Threshold percentage

    Hi Gurus,
    I have a requirement to split the data from a table into two csv files using a threshold value (in percentage).
    Assume that my interface's source select query fetches 2000 records and I provide a threshold value of 20%.
    I need to generate csv1 with 400 records (20% of 2000) and write the rest of the records to csv2.
    For implementing this I am trying to use the following process.
    1) Create a procedure with the select query to get the count of records.
    Total Records count: select count(1) from source_table <Joins> <Lookups> <Conditions>;
    2) Calculate the record count for the first CSV using the threshold value.
    CSV1_Count = Total_records_count * threshold_value / 100
    3) Create a view that fetches the CSV1_Count(400) records for CSV1 as follows.
    Create view CSV1_view as select Col1,Col2,Col3 from source_table <Joins> <Lookups> <Conditions>
    Where rownum<=CSV1_Count;
    4) Generate CSV1 file using View 'CSV1_View'
    5) Generate CSV2 File using the Interface with same select statement (with columns ) to generate a CSV.
    select Col1,Col2,Col3 from source_table ST <Joins> <Lookups> <Conditions>
    Left outer join (Select Col1 from CSV1_View ) CS on CS.Col1=ST.Col1 where CS.Col1 is null;
    This gives the total records minus the CSV1_View records.
    The above process seems a bit complex rather than simple, and if anything changes in my interface I also need to change the procedure (which counts the number of records).
    Please provide your comments and feedback about this; I am looking for your inputs on any new, simpler approach, or on fine-tuning the above approach.
    Thanks,
    Arjun

    Arjun,
    These are my thoughts. Let's do it in 3 steps.
    Step 1.  ODI Procedure
    Drop table Temp_20 ;
    Create table Temp_20 as select * from table where rownum <= ( SELECT TRUNC( COUNT(1) / 5 ) FROM table );
    [ ** This way I am fetching approximately 20% of the table data and loading it into a temp table. 1/5th is 20%, so I divide the count by 5.
    I don't believe a view will help you here, especially with ROWNUM: if you run the same query with rownum < N twice, the row order might differ, so a temp table is better. ]
    Step 2. Use OdiSqlUnload with select columns from Temp_20.
    Step 3. Use OdiSqlUnload again with select columns from table where ( uk_keys ) not in ( select uk_keys from Temp_20 ).
    [ ** This way you pick the remaining 80%, and the data will not repeat itself across the 20% and 80% files, as might happen with a view. ]
    what do you think ?

  • Search and Delete a specific record from a CSV file

    Hi All,
    I am new to Java. I want to search for records in a CSV file and delete the matching row from the file.
    Below is my Sample .csv
    100||a100||1b100
    200||b200||dc300
    200||bg430||ef850
    400||f344||ce888
    Now I need some help in below requirements.
    1. How to delete a record having the values 200 and b200?
    2. If a record already exists, how to update it with new values?
    Please share your ideas or give me a code snippet.
    Thanks in Advance

    > In that case, do I need to write the entire contents of my file to a hash table (something like that) and modify the second row, in my case, with the new values? Is it possible?
    I would have done it like this (though there may be better methods):
    1- create a class representing the record.
    class Record {
        String field1;
        String field2;
        String field3;
        // and so on....
        // setters
        public void setField1(String str) {
            field1 = str;
        }
        // and so on....
        // getters
        public String getField1() {
            return field1;
        }
        // and so on....
        public String toString() {
            return (field1 + "||" + field2 + "||" + field3);
        }
    } // end class
    2- then create an ArrayList meant to hold objects of this class (generics).
    3- read from the file, create a new Record object for each line, and add it to the ArrayList
    4- perform operations on the ArrayList (you can add new records, delete records, update......)
    5- write the records back to the file using the 'toString()' method.
    > is there any sample code available for this
    Don't know, but you rarely get full code on forums..... the outline given can be followed.
    Thanks!
    Edit: It appears that 'r035198x' and I have the same point. This shows that this methodology is almost a standard way (if we ignore the random access files.....).
    Edited by: T.B.M on Jan 13, 2009 2:39 PM
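    To make the outline concrete, here is a minimal sketch of steps 3-5 (a sketch only: the file name "sample.csv" is made up, and it assumes the Record class above also has setField2/setField3 and getField2 filled in):
    import java.io.IOException;
    import java.io.PrintWriter;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.ArrayList;
    import java.util.List;
    public class CsvEditor {
        public static void main(String[] args) throws IOException {
            Path csv = Paths.get("sample.csv"); // hypothetical file name
            List<Record> records = new ArrayList<>();
            // step 3: read the file and turn each line into a Record
            for (String line : Files.readAllLines(csv)) {
                String[] f = line.split("\\|\\|"); // '|' is a regex metacharacter, so escape it
                Record r = new Record();
                r.setField1(f[0]);
                r.setField2(f[1]);
                r.setField3(f[2]);
                records.add(r);
            }
            // step 4: delete the record whose first two fields are 200 and b200
            records.removeIf(r -> r.getField1().equals("200") && r.getField2().equals("b200"));
            // step 5: write the remaining records back using toString()
            try (PrintWriter out = new PrintWriter(Files.newBufferedWriter(csv))) {
                for (Record r : records) {
                    out.println(r);
                }
            }
        }
    }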

  • Sort records in a csv file in the descending order of time created attribute

    I have an Excel (.csv) file with the following column headers:
    Client, SourceNetworkAddress, TimeCreated, LogonType, User, Message
    Values like: ABC, 10.5.22.27, 11/23/2014 9:02:21 PM, 10, testuser
    The file is a combination of reports generated every day using multiple scripts, with the data appended each day. Therefore I would like to sort the final output file in descending order of TimeCreated (a combination of date and time) fetched from the events,
    i.e. the record with the latest date and time should be at the top of the list.
    I tried using the following command, however I get a list sorted by date but not by time. The command does not consider the AM/PM in the time and instead simply sorts the values as text:
    Import-Csv "C:\Users\a\Desktop\report.csv" | sort Timecreated -Descending | Export-csv "C:\Users\a\Desktop\report_sorted.csv" -force -NoTypeInformation
    So if I have a record with 9:02:21 PM (later) and a record with 10:44:10 AM on the same date, the command sorts the list with the 10:44:10 AM record first and then the 9:02:21 PM record, while descending order requires the opposite.
    Kindly help !!

    Hi jrv,
    Thanks for your response. However, I get errors while I run this command in Powershell :-
    Import-Csv <file> | Select Client,SourceNetworkAddress,LogonType,User,Message,@{N='TimeCreated';E={[datetime]($_.TimeCreated)} | Sort TimeCreated -Descending | Export-csv <file> -force -NoTypeInformation
    Missing expression after ','.
    At line:1 char:150
    Unexpected token 'LogonType' in expression or statement.
    At line:1 char:151
    Unexpected token ',' in expression or statement.
    At line:1 char:160
    Unexpected token 'User' in expression or statement.
    At line:1 char:161
    Unexpected token ',' in expression or statement.
    At line:1 char:165
    Unexpected token 'Message' in expression or statement.
    At line:1 char:166
    The hash literal was incomplete.
    At line:1 char:174
    Please help!
    You are missing a second curly brace - 
    Import-Csv <file> | Select Client,SourceNetworkAddress,LogonType,User,Message,@{N='TimeCreated';E={[datetime]($_.TimeCreated)}} | Sort TimeCreated -Descending | Export-csv <file> -force -NoTypeInformation

  • SQLLDR: (CTL file) How to ONLY load 1 record of the CSV file

    Hello,
    We are in 11g, and we get CSV data.
    I'd like to know if there is a way, in the CTL file, to specify that I only want to load the first row.
    I know how to do it if there is a common unique value in the first row (WHEN myColumn = 'value1'),
    BUT in this case the first row doesn't hold any specific value, so I think I have to tell the loader to take only the first row.
    I hope it is clear.
    Here is the CTL for the case where we can get a specific value for the first row:
    LOAD DATA
    CHARACTERSET WE8ISO8859P1
    APPEND
    CONTINUEIF LAST != ";"
    INTO TABLE IMPORT_FIRST_LINES
    WHEN COL_3 = 'firstRowValue'
       FIELDS TERMINATED BY ';' OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    ( COL_1         CHAR
    , COL_2         CHAR
    , COL_3         CHAR)
    So, I think I need to change the WHEN clause.
    I hope it is clear enough for you to understand.
    Thanks in advance,
    Olivier

    just change the control file like this,
    LOAD DATA
    CHARACTERSET WE8ISO8859P1
    APPEND
    CONTINUEIF LAST != ";"
    INTO TABLE IMPORT_FIRST_LINES
    WHEN COL_3 = 'firstRowValue'
    FIELDS TERMINATED BY ';' OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    ( COL_1 CHAR )
    -- keep only COL_1 if you want to load just the 1st column of the csv file into the first column of the table.
    If the goal is instead to load only the first record, note that SQL*Loader's OPTIONS (LOAD=1) clause limits a run to the first logical record.

  • How to skip first records while reading a csv file

    Hi,
    How can I skip the first 5 records while reading a csv file?
    I have a file whose first 5 records are dummy records; I want to skip those and load the rest into the RDBMS.
    How to achieve this?
    Thanks,
    Naveen Suram

    Hi Guru,
    Actually, I have converted an Excel file to CSV format, which generates the first 4 rows as dummy records. The 5th row is my header record, but in the header record two column names are the same, which is why reversing gives an error.
    With zero in the number of columns in the header, I get the column names as C1, C2...
    If not during reversing, can we reject the first records while loading into the RDBMS table using an interface?
    Thanks,
    Naveen Suram

  • IDOC-File - records in Idoc to file based on some condition

    Hello experts,
    I have an IDoc to file scenario. The incoming IDoc can have multiple records in it, and I have to map these records to a csv file. Now, not all records have to be mapped to the file. Based on the value of a particular field (which is not the root), I need to decide whether the entire record is processed or not. The IDoc structure is, say:
    IDOC
       E1WPA01            0....9999
          E1WPA04         0...99
              KONDART     0..1
    Now, for every valid E1WPA01 where the value of KONDART equals some specified (known) value, there has to be one record in the target csv file.
    How can this be done at the root level?
    One way is to write empty values to the target file where the KONDART condition is not fulfilled: if there are 10 E1WPA01 records and only 4 satisfy the condition, we would have 10 records in the output file, but only 4 with values and the remaining 6 blank, like (,,,,,,,). But I don't want this; I want only 4 records in the output file.
    I tried mapping like this:
    E1WPA01 ----> Advanced ----> Root
    E1WPA04-KONDART ----> UDF
    The problem I am facing here is that if 4 records fulfill the condition, the first 4 are taken, not the relevant 4.
    Please help.
    Regards,
    Yash

    Hi Chirag,
    I wrote the following code in the UDF:
    for (int i = 0; i < a.length; i++) {
        if (a[i].equals("Specified Value"))
            result.addValue(a[i]);
        else
            result.addValue(ResultList.SUPPRESS);
    }
    And it is working partially. I mean the queue of the UDF looks like:
    1. AAAA                      SUPPRESS
    2. <Specified value>     <Specified value>     
    3. AAA                        SUPPRESS
    4. AAA                        SUPPRESS
    5. AAA                        SUPPRESS
    6. AAA                        SUPPRESS
    7. <Specified value>     <Specified value>
    8. AAA                        SUPPRESS
    9. AAA                        SUPPRESS
    and it creates 2 ROOT nodes, but the root nodes are created for line 2 and line 8 instead of line 7.
    What can the problem be??
    Yash

  • Submit quiz results to one single .csv file

    How can I submit quiz results (over 200 people will be taking
    my captivate quiz) to a single .csv file?
    Right now, the quizzes are submitted to my email address and
    attached to the email as a POSTDATA.ATT file. I have to manually go
    into Outlook and save the attachment as "FnameLname.csv". So
    each quiz taker will have an individual .csv file, and I will have
    over 100 emails and over 100 .csv files!
    How can I make the quiz results submit to a single
    Quiz_Results.csv file on my web server instead?

    The way I would do this is to submit the scores into a
    database. In between Captivate and the database you'll need
    middleware (.asp, asp.net, ColdFusion, etc.). This middleware
    receives your data from Captivate and processes it - submitting it
    into the database. You can then write another middleware page that
    produces a report (a web page table, or an exported .csv file) with the
    data stored in the database.
    Another possibility is to use Captivate's built-in SCORM
    functionality and submit user scores into an LMS, then run reports
    and export .csv files from your LMS.
    Sorry - I don't think this functionality is built into
    Captivate to join multiple records into one .csv file.

  • Creating a CSV file from a pl/sql procedure

    Hi Everyone,
    I would like to know how to write a procedure in PL/SQL that checks whether there are any records in a table, say "Table A".
    If there are any records in "Table A", then we need to write those records to the CSV file.
    If there are no records, then we need to insert a record into the CSV file saying "No records are found in Table A".
    Could anybody please help?
    Thanks in advance

    see this
    ops$tkyte@8i> create or replace procedure dump_table_to_csv( p_tname in varchar2,
                                                                 p_dir in varchar2,
                                                                 p_filename in varchar2 )
    is
        l_output      utl_file.file_type;
        l_theCursor   integer default dbms_sql.open_cursor;
        l_columnValue varchar2(4000);
        l_status      integer;
        l_query       varchar2(1000) default 'select * from ' || p_tname;
        l_colCnt      number := 0;
        l_separator   varchar2(1);
        l_descTbl     dbms_sql.desc_tab;
    begin
        l_output := utl_file.fopen( p_dir, p_filename, 'w' );
        execute immediate 'alter session set nls_date_format=''dd-mon-yyyy hh24:mi:ss'' ';
        dbms_sql.parse( l_theCursor, l_query, dbms_sql.native );
        dbms_sql.describe_columns( l_theCursor, l_colCnt, l_descTbl );
        -- header row: quoted column names
        for i in 1 .. l_colCnt loop
            utl_file.put( l_output, l_separator || '"' || l_descTbl(i).col_name || '"' );
            dbms_sql.define_column( l_theCursor, i, l_columnValue, 4000 );
            l_separator := ',';
        end loop;
        utl_file.new_line( l_output );
        l_status := dbms_sql.execute(l_theCursor);
        -- data rows: comma-separated column values
        while ( dbms_sql.fetch_rows(l_theCursor) > 0 ) loop
            l_separator := '';
            for i in 1 .. l_colCnt loop
                dbms_sql.column_value( l_theCursor, i, l_columnValue );
                utl_file.put( l_output, l_separator || l_columnValue );
                l_separator := ',';
            end loop;
            utl_file.new_line( l_output );
        end loop;
        dbms_sql.close_cursor(l_theCursor);
        utl_file.fclose( l_output );
        execute immediate 'alter session set nls_date_format=''dd-MON-yy'' ';
    exception
        when others then
            execute immediate 'alter session set nls_date_format=''dd-MON-yy'' ';
            raise;
    end;
    /
    Procedure created.
    ops$tkyte@8i> exec dump_table_to_csv( 'emp', '/tmp', 'tkyte.emp' );
    PL/SQL procedure successfully completed.

  • How to read/write .CSV file into CLOB column in a table of Oracle 10g

    I have a requirement involving a table with two columns:
    create table emp_data (empid number, report clob)
    Here the REPORT column is of CLOB data type and is loaded from the .csv file.
    The requirements are:
    1) How to load data from the .CSV file into the CLOB column (along with the empid) using the DBMS_LOB utility.
    2) How to read the report column so that it returns all the columns present in the .CSV file (dynamically, because every csv file may have a different number of columns), along with the primary key empid.
    eg: empid report_field1 report_field2
    1 x y
    Any help would be appreciated.

    If I understand you right, you want each row in your table to contain an emp_id and the complete text of a multi-record .csv file.
    It's not clear how you relate emp_id to the appropriate file to be read. Is the emp_id stored in the csv file?
    To read the file, you can use functions from [UTL_FILE|http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/u_file.htm#BABGGEDF] (as long as the file is in a directory accessible to the Oracle server):
    declare
        lt_report_clob CLOB;
        l_max_line_length integer := 1024;   -- set as high as the longest line in your file
        l_infile UTL_FILE.file_type;
        l_buffer varchar2(1024);
        l_emp_id report_table.emp_id%type := 123; -- not clear where emp_id comes from
        l_filename varchar2(200) := 'my_file_name.csv';   -- get this from somewhere
    begin
       -- open the file; we assume an Oracle directory has already been created
        l_infile := utl_file.fopen('CSV_DIRECTORY', l_filename, 'r', l_max_line_length);
        -- initialise the empty clob
        dbms_lob.createtemporary(lt_report_clob, TRUE, DBMS_LOB.session);
        loop
          begin
             utl_file.get_line(l_infile, l_buffer);
             dbms_lob.append(lt_report_clob, l_buffer);
          exception
             when no_data_found then
                 exit;
          end;
        end loop;
        insert into report_table (emp_id, report)
        values (l_emp_id, lt_report_clob);
        -- free the temporary lob
        dbms_lob.freetemporary(lt_report_clob);
       -- close the file
       UTL_FILE.fclose(l_infile);
    end;
    This simple line-by-line approach is easy to understand, and gives you an opportunity (if you want) to take each line in the file and transform it (for example, you could transform it into a nested table, or into XML). However it can be rather slow if there are many records in the csv file - the lob_append operation is not particularly efficient. I was able to improve the efficiency by caching the lines in a VARCHAR2 up to a maximum cache size, and only then appending to the LOB - see [three posts on my blog|http://preferisco.blogspot.com/search/label/lob].
    There is at least one other possibility:
    - you could use [DBMS_LOB.loadclobfromfile|http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_lob.htm#i998978]. I've not tried this before myself, but I think the procedure is described [here in the 9i docs|http://download.oracle.com/docs/cd/B10501_01/appdev.920/a96591/adl12bfl.htm#879711]. This is likely to be faster than UTL_FILE (because it is all happening in the underlying DBMS_LOB package, possibly in a native way).
    That's all for now. I haven't yet answered your question on how to report data back out of the CLOB. I would like to know how you associate employees with files; what happens if there is > 1 file per employee, etc.
    HTH
    Regards Nigel
    Edited by: nthomas on Mar 2, 2009 11:22 AM - don't forget to fclose the file...

  • Data formatting and reading a CSV file without using Sqlloader

    I am reading a csv file into an Oracle table called sps_dataload. The table is structured based on the record type at the beginning of
    each record in the csv file. But the first two lines of the file are not going to be loaded to the table due to their format.
    Question # 1:
    How can I skip reading the first two lines from my csv file?
    Question # 2:
    There are more fields in the csv file than there are columns in my table. I know I can add FILLER as an option, but there are
    about 150-odd comma-separated fields in the file and my table has 8 columns to load from the file. So, do I really have to use FILLER
    140 times in my script, or is there a better way to do this?
    Question # 3:
    This is more of an extension of my question above. The csv file has quoted fields - I know this could be handled in SQL*Loader with OPTIONALLY ENCLOSED BY '"'.
    But can this be done in the INSERT as written in the code below?
    I am trying to find the "wrap code" button in my post, but do not see it.
    Here's my file layout -
    PROSPACE SCHEMATIC FILE
    ; Version 2007.7.1
    Project,abc xyz Project,,1,,7,1.5,1.5,1,1,0,,0,1,0,0,0,0,3,1,1,0,1,0,0,0,0,2,3,1,0,1,0,0,0,0,3,3,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
    Subproject,25580-1005303.pst,,102,192,42,12632256,1,1,102,192,42,1,12632256,0,6,1,0,32896,1,0,0,0,0,,,-1,-1,-1,-1,0,0,0,-1,-1,-1,-1,-1,-1,0,0,0,-1,-1,-1,-1,-1,-1,0,0,0,-1,-1,0,1,1,,,,,,1
    Segment, , , 0, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, , , , , , , , , , , 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, , , 1
    Product,00093097000459,26007,2X4 MF SF SD SOLR,,28.25,9.5,52.3, 8421504,,0,,xyz INC.,SOLAR,,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,,0,0,0,0,1,000000.60,0,0,0,0,00,-1,0
    Product,00093097000329,75556,"22""X22"" BZ CM DD 1548",,27,7,27, 8421504,,0,,xyz INC.,SOLAR,,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,,0,0,0,0,1,000000.20,0,0,0,0,0,0,0,,345.32
    Product,00093097000336,75557,"22""X46"" BZ CM XD 48133",,27,7.5,51, 8421504,,0,,xyz INC.,SOLAR,,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,,0,0,0,0,1,000000.20,0,0,0,0,0,0,0,0
    Product,00093097134833,75621,"22""X22"" BZ CM/YT DD 12828",,27,9,27, 8421504,,0,,xyz INC.,SOLAR,,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,,0,0,0,0,1,000000.20,0,0,0,0,0,0,0,,1
    This is my table structure -
    desc sps_dataload;
    File_Name     Varchar2 (50) Not Null,
    Record_Layer Varchar2 (20) Not Null,     
    Level_Id     Varchar2 (20),
    Desc1          Varchar2 (50),
    Desc2          Varchar2 (50),
    Desc3          Varchar2 (50),
    Desc4          Varchar2 (50)
    Here's my code to do this -
    create or replace procedure insert_spsdataloader(p_filepath IN varchar2,
    p_filename IN varchar2,
    p_Totalinserted IN OUT number) as
    v_filename varchar2(30) := p_filename;
    v_filehandle UTL_FILE.FILE_TYPE;
    v_startPos number; --starting position of a field
    v_Pos number; --position of string
    v_lenstring number; --length of string
    v_record_layer varchar2(20);
    v_level_id varchar2(20) := 0;
    v_desc1 varchar2(50);
    v_desc2 varchar2(50);
    v_desc3 varchar2(50);
    v_desc4 varchar2(50);
    v_input_buffer varchar2(1200);
    v_delChar varchar2(1) := ',';
    v_str varchar2(255);
    BEGIN
    v_Filehandle :=utl_file.fopen(p_filepath, p_filename, 'r');
    p_Totalinserted := 0;
    LOOP
    BEGIN
    UTL_FILE.GET_LINE(v_filehandle,v_input_buffer);
    EXCEPTION
    WHEN NO_DATA_FOUND THEN
    EXIT;
    END;
    -- this will read the 1st field from the file --
    v_Pos := instr(v_input_buffer,v_delChar,1,1);
    v_lenString := v_Pos - 1;
    v_record_layer := substr(v_input_buffer,1,v_lenString);
    v_startPos := v_Pos + 1;
    -- this will read the 2nd field from the file --
    v_Pos := instr(v_input_buffer,v_delChar,1,2);
    v_lenString := v_Pos - v_startPos;
    v_desc1 := substr(v_input_buffer,v_startPos,v_lenString);
    v_startPos := v_Pos + 1;
    -- this will read the 3rd field from the file --
    v_Pos := instr(v_input_buffer,v_delChar,1,3);
    v_lenString := v_Pos - v_startPos;
    v_desc2 := substr(v_input_buffer,v_startPos,v_lenString);
    v_startPos := v_Pos + 1;
    -- this will read the 4th field from the file --
    v_Pos := instr(v_input_buffer,v_delChar,1,4);
    v_lenString := v_Pos - v_startPos;
    v_desc3 := substr(v_input_buffer,v_startPos,v_lenString);
    v_startPos := v_Pos + 1;
    -- this will read the 5th field from the file --
    v_Pos := instr(v_input_buffer,v_delChar,1,5);
    v_lenString := v_Pos - v_startPos;
    v_desc4 := substr(v_input_buffer,v_startPos,v_lenString);
    v_startPos := v_Pos + 1;
    v_str := 'insert into sps_dataload values ('''||v_filename||''','''||v_record_layer||''','''||v_level_id||''','''||v_desc1||''','''||v_desc2||''','''||v_desc3||''','''||v_desc4||''')';
    Execute immediate v_str;
    p_Totalinserted := p_Totalinserted + 1;
    commit;
    END LOOP;
    UTL_FILE.FCLOSE(v_filehandle);
    EXCEPTION
    WHEN UTL_FILE.INVALID_OPERATION THEN
    UTL_FILE.FCLOSE(v_FileHandle);
    RAISE_APPLICATION_ERROR(-20051, 'sps_dataload: Invalid Operation');
    WHEN UTL_FILE.INVALID_FILEHANDLE THEN
    UTL_FILE.FCLOSE(v_FileHandle);
    RAISE_APPLICATION_ERROR(-20052, 'sps_dataload: Invalid File Handle');
    WHEN UTL_FILE.READ_ERROR THEN
    UTL_FILE.FCLOSE(v_FileHandle);
    RAISE_APPLICATION_ERROR(-20053, 'sps_dataload: Read Error');
    WHEN UTL_FILE.INVALID_PATH THEN
    UTL_FILE.FCLOSE(v_FileHandle);
    RAISE_APPLICATION_ERROR(-20054, 'sps_dataload: Invalid Path');
    WHEN UTL_FILE.INVALID_MODE THEN
    UTL_FILE.FCLOSE(v_FileHandle);
    RAISE_APPLICATION_ERROR(-20055, 'sps_dataload: Invalid Mode');
    WHEN UTL_FILE.INTERNAL_ERROR THEN
    UTL_FILE.FCLOSE(v_FileHandle);
    RAISE_APPLICATION_ERROR(-20056, 'sps_dataload: Internal Error');
    WHEN VALUE_ERROR THEN
    UTL_FILE.FCLOSE(v_FileHandle);
    RAISE_APPLICATION_ERROR(-20057, 'sps_dataload: Value Error');
    WHEN OTHERS THEN
    UTL_FILE.FCLOSE(v_FileHandle);
    RAISE;
    END insert_spsdataloader;
    /

    Justin, thanks. I did happen to change my PL/SQL procedure using utl_file.get_line and modifying the instr calls based on the position of ',' in the file, but my procedure is getting really big and too complex to debug. So I got motivated to use external tables or SQL*Loader as plan B.
    As I was reading more about external tables as the efficient approach, I believe I can build an external table with my varying selection from the file. But I am still unclear whether I can construct the external table by choosing different fields in a record based on a record-identifier string value (which is the first field of any record). I guess I can, but I am looking for the construct: how do I select the fields from the file while creating the table?
    PROSPACE SCHEMATIC FILE
    ; Version 2007.7.1
    Project,abc xyz Project,,1,,7,1.5,1.5,1,1,0,,0,1,0,0,0,0,3,1,1,0,1,0,0,0,0,2,3,1,0,1,0,0,0,0,3,3,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
    Subproject,25580-1005303.pst,,102,192,42,12632256,1,1,102,192,42,1,12632256,0,6,1,0,32896,1,0,0,0,0,,,-1,-1,-1,-1,0,0,0,-1,-1,-1,-1,-1,-1,0,0,0,-1,-1,-1,-1,-1,-1,0,0,0,-1,-1,0,1,1,,,,,,1
    Segment, , , 0, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, , , , , , , , , , , 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 0, , , 1
    Product,00093097000459,26007,2X4 MF SF SD SOLR,,28.25,9.5,52.3, 8421504,,0,,xyz INC.,SOLAR,,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,,0,0,0,0,1,000000.60,0,0,0,0,00,-1,0
    Product,00093097000329,75556,"22""X22"" BZ CM DD 1548",,27,7,27, 8421504,,0,,xyz INC.,SOLAR,,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,,0,0,0,0,1,000000.20,0,0,0,0,0,0,0,,345.32
    Product,00093097000336,75557,"22""X46"" BZ CM XD 48133",,27,7.5,51, 8421504,,0,,xyz INC.,SOLAR,,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,,0,0,0,0,1,000000.20,0,0,0,0,0,0,0,0
    Product,00093097134833,75621,"22""X22"" BZ CM/YT DD 12828",,27,9,27, 8421504,,0,,xyz INC.,SOLAR,,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,,0,0,0,0,1,000000.20,0,0,0,0,0,0,0,,1
    For example, if I want to create an external table like this -
    CREATE TABLE extern_sps_dataload
    ( record_layer            VARCHAR2(20),
      attr1                   VARCHAR2(20),
      attr2                   VARCHAR2(20),
      attr3                   VARCHAR2(20),
      attr4                   VARCHAR2(20)
    )
    ORGANIZATION EXTERNAL
    ( TYPE ORACLE_LOADER
      DEFAULT DIRECTORY dataload
      ACCESS PARAMETERS
      ( RECORDS DELIMITED BY NEWLINE
        BADFILE     dataload:'sps_dataload.bad'
        LOGFILE     dataload:'sps_dataload.log'
        DISCARDFILE dataload:'sps_dataload.dis'
        SKIP 2
        VARIABLE 2 FIELDS TERMINATED BY ',' 
        OPTIONALLY ENCLOSED BY '"' LRTRIM
        MISSING FIELD VALUES ARE NULL
        +LOAD WHEN RECORD_LAYER = 'PROJECT' (FIELD2, FIELD3,FIELD7,FIELD9)+
        +LOAD WHEN RECORD_LAYER= 'PRODUCT' (FIELD3,FIELD4,FIELD8,FIELD9)+
        +LOAD WHEN RECORD_LAYER= 'SEGMENT' (FIELD1,FIELD2,FIELD4,FIELD5)+
      )
      LOCATION ('sps_dataload.csv')
    )
    REJECT LIMIT UNLIMITED;
    While I was reading the external table documentation, I thought I could achieve something similar using the position_spec option, but I cannot get behind its parameters. I have highlighted in italics the part of the code above (from LOAD WHEN ... FIELDS ...) that I think I am going to use, but I am not sure of its construct.
    Thank you for your help!! Appreciate your thoughts on this..
    Sanders.

  • How to write the JTables Content into the CSV File.

    Hi Friends
    I managed to write database records into a CSV file. Now I would like to add a JTable's content to the CSV file as well.
    Below is the code I used to write the database records into the CSV file:
    void exportApi() throws Exception
    {
        try
        {
            PrintWriter writing = new PrintWriter(new FileWriter("Report.csv"));
            System.out.println("Connected");
            stexport = conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_UPDATABLE);
            rsexport = stexport.executeQuery("Select * from IssuedBook");
            ResultSetMetaData md = rsexport.getMetaData();
            int columns = md.getColumnCount();
            String fieldNames[] = {"No","Name","Author","Date","Id","Issued","Return"};
            // write field names
            String rec = "";
            for (int i = 0; i < fieldNames.length; i++)
            {
                rec += '\"' + fieldNames[i] + '\"';
                rec += ",";
            }
            if (rec.endsWith(",")) rec = rec.substring(0, (rec.length() - 1));
            writing.println(rec);
            // write values from result set to file
            rsexport.beforeFirst();
            while (rsexport.next())
            {
                rec = "";
                for (int i = 1; i < (columns + 1); i++)
                {
                    try
                    {
                        rec += "\"" + rsexport.getString(i) + "\","; // getString works for any column type here
                    }
                    catch (SQLException sqle)
                    {
                        // I would add this: System.out.println("Exception in retrieval in for loop:\n" + sqle);
                    }
                }
                if (rec.endsWith(",")) rec = rec.substring(0, (rec.length() - 1));
                writing.println(rec);
            }
            writing.close();
        } // (catch/finally omitted in the original post)
    }
    With this same code, how do I write the JTable content into the CSV file?
    Please tell me how to implement this.
    Thank you for your Service
    Jofin

    Hi Friends
    I just modified my code and tried according to your suggestion, but it does not print the records inside the CSV file. When I use a ResultSet, it prints the records inside the CSV. Now I want to export only the JTable content.
    I am posting my code here. Please run this code and find the Report.csv file in your current directory, and please help me to come out of this problem.
    import javax.swing.*;
    import java.util.*;
    import java.io.*;
    import java.awt.*;
    import java.awt.event.*;
    import javax.swing.table.*;
    public class Exporting extends JDialog implements ActionListener
    {
         private JRadioButton rby, rbn, rbr, rbnore, rbnorest;
         private ButtonGroup bg;
         private JPanel exportpanel;
         private JButton btnExpots;
         FileReader reading = null;
         FileWriter writing = null;
         JTable table;
         JScrollPane scroll;
         public Exporting() throws Exception
         {
              setSize(550, 450);
              setTitle("Export Results");
              this.setLocation(100, 100);
              String Heading[] = {"BOOK ID", "NAME", "AUTHOR", "PRICE"};
              String records[][] = {{"B0201","JAVA PROGRAMING","JAMES","1234.00"},
                               {"B0202","SERVLET PROGRAMING","GOSLIN","1425.00"},
                               {"B0203","PHP DEVELOPMENT","SUNITHA","123"},
                               {"B0204","PRIAM","SELVI","1354"},
                               {"B0205","JAVA PROGRAMING","JAMES","1234.00"},
                               {"B0206","SERVLET PROGRAMING","GOSLIN","1425.00"},
                               {"B0207","PHP DEVELOPMENT","SUNITHA","123"},
                               {"B0208","PRIAM","SELVI","1354"}};
              btnExpots = new JButton("Export");
              btnExpots.addActionListener(this);
              btnExpots.setBounds(140, 200, 60, 25);
              table = new JTable();
              scroll = new JScrollPane(table);
              ((DefaultTableModel) table.getModel()).setDataVector(records, Heading);
              System.out.println(table.getModel());
              exportpanel = new JPanel();
              exportpanel.add(btnExpots, BorderLayout.SOUTH);
              exportpanel.add(scroll);
              getContentPane().add(exportpanel);
              setVisible(true);
         }
         public void actionPerformed(ActionEvent ae)
         {
              Object obj = ae.getSource();
              try {
                   PrintWriter writing = new PrintWriter(new FileWriter("Report.csv"));
                   if (obj == btnExpots)
                   {
                        for (int row = 0; row < table.getRowCount(); ++row)
                        {
                             for (int col = 0; col < table.getColumnCount(); ++col)
                             {
                                  Object ob = table.getValueAt(row, col);
                                  //exportApi(ob);
                                  System.out.println(ob);
                                  System.out.println("Connected");
                                  String fieldNames[] = {"BOOK ID", "NAME", "AUTHOR", "PRICE"};
                                  String rec = "";
                                  for (int i = 0; i < fieldNames.length; i++)
                                  {
                                       rec += '\"' + fieldNames[i] + '\"';
                                       rec += ",";
                                  }
                                  if (rec.endsWith(",")) rec = rec.substring(0, (rec.length() - 1));
                                  writing.println(rec);
                                  // write values from result set to file
                                  rec += "\"" + ob + "\",";
                                  if (rec.endsWith(",")) rec = rec.substring(0, (rec.length() - 1));
                                  writing.println(rec);
                                  writing.close();
                             }
                        }
                   }
              }
              catch (Exception ex)
              {
                   ex.printStackTrace();
              }
         }
         public static void main(String arg[]) throws Exception
         {
              Exporting ex = new Exporting();
         }
    }
    Could anyone please modify my code and help me out?
    Thank you for your service
    Cheers
    Jofin
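    Since no corrected version was posted in this thread, here is a minimal sketch of how actionPerformed could be restructured (my reading of the intent, not the original poster's final code): write the header once, emit one CSV line per table row, and close the writer only after both loops finish.
    public void actionPerformed(ActionEvent ae)
    {
        if (ae.getSource() != btnExpots) return;
        try (PrintWriter out = new PrintWriter(new FileWriter("Report.csv")))
        {
            // header row, written exactly once
            StringBuilder header = new StringBuilder();
            for (int col = 0; col < table.getColumnCount(); col++) {
                if (col > 0) header.append(',');
                header.append('"').append(table.getColumnName(col)).append('"');
            }
            out.println(header);
            // one CSV line per table row
            for (int row = 0; row < table.getRowCount(); row++) {
                StringBuilder rec = new StringBuilder();
                for (int col = 0; col < table.getColumnCount(); col++) {
                    if (col > 0) rec.append(',');
                    rec.append('"').append(table.getValueAt(row, col)).append('"');
                }
                out.println(rec);
            }
        } // try-with-resources closes the file exactly once, after all rows are written
        catch (IOException ex)
        {
            ex.printStackTrace();
        }
    }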

  • CSV file reading

    Hi all,
    I got this CSV parser from the net. It is giving a runtime error, "IO FILE Exception".
    There are actually 3 files included in it.
    CSVFile
    import java.util.ArrayList;
    import java.io.BufferedReader;
    import java.io.FileReader;
    * holds the file object of records
    public class CSVFile
    * arraylist of records, each one containing a single record
    private ArrayList records = new ArrayList();
    * What to replace a row delimiter with, on output.
    private String replacementForRowDelimiterInTextField = " "; // Change if needed.
         * debug, > 0 for output.
        public int debug = 5;
        private boolean debugLoading = true; //true when debugging load cycle
    *Return the required record
    *@param index the index of the required record
    *@return a CSVRecord, see #CSVRecord
    public CSVRecord getRecord (int index)
        if (this.debug > 3 && !debugLoading) {
         System.err.println("CSVFile getRecord ["+index+"]"+ ((CSVRecord)this.records.get(index)).getFields(3));
         return (CSVRecord)this.records.get(index);
    *Get the number of records in the file
    *@return 1 based count of records.
    public int count()
         return this.records.size();
         // ----- Constructor -----
    *Constructor; create a file object
    *@param details  a propertyFile object, see #propertyFile
    *@param csvFile filename of csv file
         public CSVFile(propertyFile details, String csvFile)
             try{
              BufferedReader reader = new BufferedReader (new FileReader (csvFile));
              //StringBuilder sbBuffer = new StringBuilder( reader.ReadToEnd() );
              StringBuffer buf=new StringBuffer();
              String text;
              try {
                  while ((text=reader.readLine()) != null)
                   buf.append(text + "\n");
                  reader.close();
              }catch (java.io.IOException e) {
                  System.err.println("Unable to read from csv file "+ csvFile);
                  System.exit(2);
              String buffer;
              buffer = buf.toString();
              buffer = buffer.replaceAll("&", "&amp;"); // escape characters that are special in XML output
              buffer = buffer.replaceAll("<", "&lt;");
              boolean inQuote = false;
              String savedRecord = "";
              String curRecord = "";
              if (debug > 2) {
                  System.err.println("csvFile: setup");
                  System.err.println("Read int from src CSV file");
              //Split entire input file into array records, using row delim.
              String records[] =  buffer.split( details.rowDelimiter() );
              //Iterate over each split, looking for incomplete quoted strings.
              for (int rec=0; rec <records.length; rec++)
                   curRecord = savedRecord + records[rec];
                   if (debug > 4) {
                       System.out.println("csvFile: saved rec" + savedRecord);
                       System.out.println("csvFile: current rec " + curRecord);
                       System.out.println("csvFile: currRecLth: " + curRecord.length());
                   for (int i = 0; i < curRecord.length(); i ++ )
                        char ch = curRecord.charAt(i);
                        char prev = ( i != 0? curRecord.charAt(i-1): ' ');
                        char nxt = ( i < (curRecord.length()-2)? curRecord.charAt(i+1): ' ');
                        if ( !inQuote && ch == '"' )
                            inQuote = true;
                        else
                            if ( inQuote && ch == '"' )
                             if ( i + 1 < curRecord.length() )
                                 inQuote = (nxt == '"')
                                  || (prev == '"');
                             else
                                 inQuote = false;
                   if ( inQuote )
                        // A space is currently used to replace the row delimiter
                        //when found within a text field
                        savedRecord = curRecord + replacementForRowDelimiterInTextField;
                        inQuote = false;
                   else
                        this.records.add( new CSVRecord(details, curRecord) );
                        savedRecord = "";
              catch (java.io.FileNotFoundException e) {
                  System.out.println("Unable to read CSV file, quitting");
                  System.exit(2);
         // ----- Private Methods -----
         private String[] SplitText(String textIn, String splitString)
              String [] arrText = textIn.split(splitString);
              return arrText;
    *Get all records in the csvfile
    *@return array of CSVRecords, see #CSVRecord
    public CSVRecord[] GetAllRecords()
    CSVRecord[] allRecords = new CSVRecord[ this.records.size() ];
    for (int i = 0; i < this.records.size(); i++ )
         allRecords[i] = (CSVRecord)this.records.get(i);
    return allRecords;
      public static void main(String args[])
         propertyFile path=new propertyFile("C:\\bea\\jdk142_05\\bin");
        CSVFile  a=new CSVFile(path,"C:\\bea\\jdk142_05\\bin\\xxx.csv");
    CSVRecord
    import  java.util.ArrayList;
    *Represents a single record of a CSV file
    public class CSVRecord
         *Debug
        private int debug = 0;
         * Arraylist of fields of the record
        private ArrayList fields = new ArrayList();
         *get the field with index index
         *@param index of field required
         *@return String value of that field
        public String getFields (int index)
         if ( index < fields.size())
         return (String)this.fields.get(index);
         else return ("");
         *get the number of fields
         *@return int number of fields in this file
        public int count()
         return this.fields.size();
         *Create a csv record from the input String, using the propertyfile.
         *@param  details , the property file
         *@see <a href="propertyFile.html">propertyFile</a>
         *@param  recordText , the record to be added to the arraylist of records
        public  CSVRecord(propertyFile details, String recordText)
          * true if within a quote
         boolean inQuote = false;
          * temp saved field value
         String savedField = "";
          * current field value
         String curField = "";
          * field being built
         String field = "";
          * array of records.
          * split it according to the field delimiter.
          * The default String.split() is not accurate according to the M$ view.
         String records[] =  recordText.split( details.fieldDelimiter() );
         for (int rec=0; rec <records.length; rec++)
              field = records[rec];
              //Add this field to currently saved field.
              curField = savedField + field;
              //Iterate over current field.
              for (int i = 0; i < curField.length(); i ++ ){
                   char ch = curField.charAt(i); //current char
                   char nxt = ((i==
                             curField.length() -1)
                            ? ' ' : curField.charAt(i+1)); //next char
                   char prev = (i==0? ' ': curField.charAt(i-1)); //prev char
                   if ( !inQuote && ch == '"' )
                       inQuote = true;
                   else
                       if ( inQuote && ch == '"' )
                        if ( (i + 1) < curField.length() )
                            inQuote = (nxt == '"') || (prev == '"');
                        else
                            inQuote = (prev == '"');
              }//end of current field
              if ( inQuote )
                   savedField = curField + details.fieldDelimiter() + " ";
                   inQuote = false;
              else if (!inQuote && curField.length() > 0)
                   char ch = curField.charAt(0); //current char
                   char lst = curField.charAt(curField.length()-1);
                   if (ch   == '"' &&
                       lst == '"')
                        //Strip leading and trailing quotes
                        curField = curField.substring(1,curField.length()-2);
                        //curField = curField.Replace( "\"\"", "\"" );
                        curField =curField.replaceAll("\"\"", "\"");
                   this.fields.add( curField );
                   savedField = "";
              else if(curField.length() == 0){
                  this.fields.add("");
              if (debug > 2)
                  System.out.println("csvRec  Added:" + curField);
             }//   end of for each record
    propertyFile
    import java.util.ArrayList;
    import java.io.BufferedReader;
    import java.io.FileReader;
    * This class holds the data from a Property file.
    public class propertyFile
        // ----- Private Fields -----
         *Comments from the file
        private String comment;
         * Delimiter for individual fields
        private String fieldDelimiter; // was char
         *   Delimiter for each row
        private String rowDelimiter;
         * Root element to use for output XML
        private String xmlRootName;
         * Element to use for each row
        private String recordName;
         *How many fields are there -  Note: This is 1 based, not zero based.
        private int fieldCount;
         * array of fields
        private ArrayList fields = new ArrayList(88);
         *Set to int > 0 for debug output
        private int  debug=0;
    /** A single instance of this will hold all the relavant details for ONE PropertyFile.
        *@param filePath String name of the property file.
        public  propertyFile(String filePath)
         //StreamReader reader = new StreamReader( filePath );
         try {
         BufferedReader reader = new BufferedReader (new FileReader (filePath));
         String line = null;
         while ( (line = reader.readLine()) != null )
              if ( line.length() != 0 )   //was != ""
                   if (debug> 0)
                       System.err.println("String is: " + line + "lth: " + line.length());
                   if ( line.charAt(0) != '[' && !( line.startsWith("//") ) )
                        String propertyValue = line.split("=")[1];
                        // Assign Comment
                        if ( line.toUpperCase().startsWith("COMMENT=") )
                            this.comment = propertyValue;
                        // Assign Field Delimter
                        if ( line.toUpperCase().startsWith("FIELDDELIMITER") )
                            this.fieldDelimiter = propertyValue.substring(0);
                        // Assign Row Delimiter
                        if ( line.toUpperCase().startsWith("ROWDELIMITER") )
                         if ( propertyValue.length() > 1 && propertyValue.charAt(0) == '\\'
                              && propertyValue.toUpperCase().charAt(1) == 'N' )
                                 this.rowDelimiter = "\r\n";
                             else
                                 this.rowDelimiter = propertyValue;
                        // Assign Root Document Name
                        if ( line.toUpperCase().startsWith("ROOTNAME") )
                            this.xmlRootName = propertyValue;
                        // Assign Record Name
                        if ( line.toUpperCase().startsWith("RECORDNAME") )
                            this.recordName = propertyValue;
                        // Assign Field Count
                        if ( line.toUpperCase().startsWith("FIELDS") )
                            this.fieldCount =  Integer.parseInt(propertyValue);
                   else
                        if ( line.toUpperCase().startsWith("[FIELDS]") )
                             while ( (line = reader.readLine()) != null )
                                  if ( line.length() == 0)
                                      break;
                                  else{
                                      if (debug > 0)
                                       System.err.println("Adding: "+line.split("=")[1]);
                                      this.fields.add( line.split("=")[1] );
                             break;
         reader.close();
         } catch (java.io.IOException e) {
             System.out.println("**** IO Error on input file. Quitting");
             System.exit(2);
         * Return the comment int the property file
         *@return String, the comment value, if any
        public String comment ()
         return this.comment;
         * The delimiter to be used for each field, often comma.
         *@return String, the character(s)
        public String fieldDelimiter()
         return this.fieldDelimiter;
         * Row Delimiter - often '\n'
         *@return String, the character(s)
        public String rowDelimiter ()
         return this.rowDelimiter;
        * The XML document root node.
        * @return String, the element name
        public String XMLRootName()
         return this.xmlRootName;
        /** <summary>
        ** The node name for each record
        public String recordName()
         return this.recordName;
        ** Number of Fields per record/node
        *@return integer count of number of fields, 1 based.
        public int fields()
         return this.fieldCount;
         // ----- Public Methods -----
         ** The value of the nth field, 0 based.
         ** @param index Which field to return
         * @return String the field value
        public String fieldNames(int index)
         if (index <this.fields.size())
             return (String)this.fields.get(index); //was .toString()
         else
              System.err.println("PropertyFile: Trying to get idx of :"
                           + index
                           + "\n when only "
                           //+ (this.fields.size() -  1)
                           + this.fieldCount
                           + " available"
              System.exit(2);
         return "";
         *Test entry point, this class
         *@param argv  cmd line arg of property file
        public static void main (String argv[]) {
              if ( argv.length != 1) {
               System.out.println ("Md5 <file>") ;
               System.exit (1) ;
        propertyFile p = new propertyFile(argv[0]);
    Please help, as I am a novice in file handling, especially csv files.

    > **** IO Error on input file. Quitting
    Press any key to continue . . .
    OK, no compiler error, but it seems that the property file named by the filePath String isn't there.
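    For what it's worth, the reply above points at CSVFile.main: it passes a directory ("C:\\bea\\jdk142_05\\bin") to the propertyFile constructor, which then tries to open that directory with FileReader and fails. A sketch of a corrected entry point (the .properties file name is a hypothetical placeholder):
    public static void main(String args[])
    {
        // pass the property FILE itself, not the directory that contains it
        propertyFile path = new propertyFile("C:\\bea\\jdk142_05\\bin\\csv.properties"); // hypothetical name
        CSVFile a = new CSVFile(path, "C:\\bea\\jdk142_05\\bin\\xxx.csv");
        System.out.println("Loaded " + a.count() + " records");
    }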

  • Loading a CSV file

    I have a file of comma-separated values from another handheld. I think the fields in each record are not in the order that the Z22 expects. Can someone tell me what order the Z22 expects?
    This is not exactly a hotsync question, but that seems like the nearest subject listed.
    Post relates to: Palm Z22

    Hello bridge01944 and welcome to the Palm forums.
    What I would suggest is to create a new record in Palm Desktop for the Mac. In each of the fields, type the field name. Then export that record as a .csv file. Then open that .csv file in a spreadsheet application, like Excel, and also open the .csv export file from your old device. Then reorder the columns in the old file to match the order of the new file. Once you have done that, you should be able to import the data into Palm Desktop for the Mac.
    Alan G
    Post relates to: Treo 755p (Sprint)

  • Splitting csv file

    Hi,
    I have a procedure which stores my retrieved records in a csv file using the MIME type. I want to split the resulting file into 2 depending on the records retrieved. Is this possible? Please reply asap.
    Thanks in advance
    Bharat

    > the point being there are 2 delimiters to split, the first being the comma, the second being the line break, thus creating an array of single digits
    > I have solved the issue by using replaceAll to replace all commas with spaces and then split on the space
    Still makes no sense. A line break doesn't contain spaces, so how does replacing commas with spaces allow you to split on the line break?
    If you are appending each line to a string (to build one long string), then append the data with a comma, not a space.
    > I really get the idea you guys are not enjoying my code?
    It's your requirements we don't understand. You obviously aren't explaining them correctly.
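    If the requirement really is a two-level split (line breaks into rows, then commas into fields), there is no need to replace commas with spaces at all. A minimal sketch (the input string is a made-up example):
    String data = "1,2,3\n4,5,6\n7,8,9";   // made-up example input
    String[] rows = data.split("\r?\n");   // first delimiter: the line break
    String[][] digits = new String[rows.length][];
    for (int i = 0; i < rows.length; i++) {
        digits[i] = rows[i].split(",");    // second delimiter: the comma
    }
    // digits[1][2] is now "6"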
