Related to CSV file

Hi,
   How do we read a CSV file using the OO (ABAP Objects) approach? And at the same time, how can we upload it? Are there any special methods to do that? Please reply, urgent.
Thanks,
Sindhu.

Here is a slightly more dynamic approach.
report zrich_0001.

types: begin of ttab,
         fld1(20) type c,
         fld2(20) type c,
         fld3(20) type c,
       end of ttab.

data: itab type table of ttab.
data: wa type ttab.
data: iup type table of string.
data: xup type string.
data: isplit type table of string with header line.

field-symbols: <fs>.

* Upload the raw CSV lines from the frontend PC.
call method cl_gui_frontend_services=>gui_upload
  exporting
    filename = 'C:\test.csv'
  changing
    data_tab = iup.

* Split each line at the commas, then move the pieces into the
* components of the work area by index, so no field names are hardcoded.
loop at iup into xup.
  clear wa.
  clear isplit.  refresh isplit.
  split xup at ',' into table isplit.
  loop at isplit.
    assign component sy-tabix of structure wa to <fs>.
    if sy-subrc <> 0.
      exit.
    endif.
    <fs> = isplit.
  endloop.
  append wa to itab.
endloop.

loop at itab into wa.
  write:/ wa-fld1, wa-fld2, wa-fld3.
endloop.
Regards,
Rich Heilman

Similar Messages

  • Related to CSV file data upload

    Hi all,
    I am new to Oracle technology.
    While uploading data from a csv file I got this error:
    "SQL*Loader-350: Syntax error at line 4.
    Expecting keyword TABLE, found "xxgfs_gen_text_lookups".
    APPEND INTO xxgfs_gen_text_lookups"
    My csv file data is:
    Invoice Match Options,,,Invoice,,
    Invoice Match Options,,,Receipt,,
    Invoice Match Options,,,Purchase Order,,X
    Invoice Type,A 00,Advance,Standard,Standard invoice,
    Invoice Type,B 00,Expense,Standard,Standard invoice,
    Invoice Type,2 00,Debit Memo,Credit Memo,Credit Memo,
    Invoice Type,2 20,EDI Debit Memo,Credit Memo,Credit Memo,
    Invoice Type,1 00,Invoice,Standard,Standard invoice,
    Invoice Type,1 01,Arrow Credit,Standard,Standard invoice,,
    Invoice Type,1 10,Recurring Payments,Standard,Standard invoice,,
    Invoice Type,1 20,EDI Invoices,Standard,Standard invoice,,
    Invoice Type,1 21,Imaged Invoice,Standard,Standard invoice,X,Default when entering from form
    Invoice Tax Code,XMT,DO NOT USE,,,,
    Invoice Tax Code,STE,DO NOT USE,,,,
    Invoice Tax Code,,,,,,
    Invoice Tax Code,,,,,,
    Invoice Tax Code,,,,,,
    Invoice Tax Code,,,,,,
    Invoice Tax Code,,,,,,
    Invoice Tax Code,,,,,,
    Invoice Tax Code,,,,,,
    Invoice Tax Code,,,,,,
    Invoice Tax Code
    Even if I remove the blank rows, it gives the same problem.
    If anybody has faced the same problem, please help me out.
    Thanks in advance to you all for your help.
    -Rajnish

    This is the Oracle Application Express (formerly known as HTML DB) forum. SQL*Loader-related questions should be asked in the PL/SQL or General Database Discussions forums, but being a nice guy familiar with SQL*Loader I will "give it a go".
    The error message you are getting indicates that you have a syntax error in your control file. The syntax for the APPEND keyword is APPEND INTO TABLE table_name. So change "APPEND INTO xxgfs_gen_text_lookups" to "APPEND INTO TABLE xxgfs_gen_text_lookups".
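    A minimal control file along those lines might look like the sketch below. The column list is an assumption, since I don't know your table definition; TRAILING NULLCOLS is worth adding because many of your rows have fewer fields than the longest ones:
    LOAD DATA
    INFILE 'xxgfs_gen_text_lookups.csv'
    APPEND
    INTO TABLE xxgfs_gen_text_lookups
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    (lookup_type, lookup_code, description, doc_type, doc_description, flag)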
    Mike

  • Hi, related to CSV file output

    hi all,
          I am taking output in ALV and .CSV format (in a Unix path), and the ALV output is fine.
    In one of the material descriptions the text is like this:
    my name is (Activ.Foil+S,sr)
    so in the CSV the description is taken like this: my name is (Activ.Foil+S
    and the remaining text goes into the next field,
    i.e., for material group it gets: sr)
    and all the values are shifted by one field.
    So the BW team is facing a problem while extracting the data.
    Can anyone please help me out with how to solve this problem?
    Thanks and Regards.
    C.Shamsundher.

    Hi Prem,
    In such a case you need to change the delimiter, or else remove the comma from the material description once you retrieve the data from the database into an internal table:
    REPLACE ALL OCCURRENCES OF ',' IN WA_TAB-DESC WITH ' '.
    Then move the material description to the CSV file.
    Best regards,
    Prashant

  • CSV File Creation

    I have 2 queries related to CSV file creation using ODI.
    1) I need to generate a CSV file out of data contained in an Oracle DB table. When the data gets extracted into the CSV file, it needs to be sorted on one particular column (e.g. Account_Number). Is this possible?
    2) When the data gets extracted in CSV format, the DATE is shown as '2008-06-06 00:00:00.0'. How can I get the date to be stored in just 'DDDD-MM-YY' format?
    Thanks
    KS

    Check out the OdiUnloadSql component. I have used it to create a csv file, but have not gone into date format and sorting details; you might have to look into that.
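    Both requirements can usually be handled in the SELECT statement that feeds the unload. A minimal sketch, with table and column names assumed and the format mask set to whichever date picture you need ('YYYY-MM-DD' shown here):
    SELECT account_number,
           TO_CHAR(invoice_date, 'YYYY-MM-DD') AS invoice_date
    FROM   account_data
    ORDER  BY account_number;
    TO_CHAR controls the date format and ORDER BY provides the sort, so the rows arrive in the file already ordered and formatted.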

  • Csv file to database tables, including foreign-key-related columns, directly

    I have created dimension tables in SSIS and I need to load the data into those tables from my given csv files. I also have foreign key columns in the fact table for which data needs to be loaded.

    We definitely have primary key relations. The tables contain primary keys and foreign keys; I have created nearly 20 tables in SQL Server, some of which are dimensions and some facts. I have csv files of data, and I need to load the data into those tables using an SSIS package.
    My idea was to take one data flow task in the control flow for each and every single table, with one OLE DB destination per table and a csv file as the source for each; by connecting the two we can load the data.
    But I need to load data into all 20 tables with one data flow task. How is that possible? Any solution, or different ways to load data from csv files into the tables with an SSIS package?

  • CSV file load issue: giving a short dump

    Hi Experts,
    In BPS I have created a 'load csv' planning function which loads data from a csv file. While executing the function it shows the success message 'data records generated', but when I press the save button it gives a dump: 'Exception condition "ILLEGAL_INPUT" raised.' The same planning level and package work fine with manual planning, and the load function modules also work fine for some other planning levels. Please suggest what could be the reason for this exception.
    Thanks,
    Saikat.

    I had the same problem.
    There is something wrong in the CSV file. What I mean is that sometimes a ',' or a '.' or some other wrong character ends up in a characteristic that cannot accept it (one that only takes numbers, for instance), or something that to you looks like a blank but to SAP is some special character.
    Finding it is very boring! First of all, be sure that when you load you are not also trying to save the HEADER row with the titles from the CSV file. Anyway, that was not my problem.
    Here is what happened to me last time:
    "One characteristic accepts the values '1', '2', '3' and so on up to '9' - just numbers. In one of the records of the file it was written as '1.', and just that '.' caused this big problem."
    To solve it, I simply generated the file again, and without changing the ABAP code it went OK.
    Anyway, try checking all your levels and the ABAP code again from the beginning.
    I know what you mean; this error causes big headaches. I lost 3 hours solving it.
    Hope you are lucky.

  • How to read/write .CSV file into CLOB column in a table of Oracle 10g

    I have a requirement involving a table with two columns:
    create table emp_data (empid number, report clob)
    Here the REPORT column is of CLOB data type and is used to hold the data loaded from the .csv file.
    The requirement here is:
    1) How to load data from the .CSV file into the CLOB column, along with empid, using the DBMS_LOB utility.
    2) How to read the report column so that it returns all the columns present in the .CSV file (dynamically, because every csv file may have a different number of columns) along with the primary key empid.
    eg: empid report_field1 report_field2
    1 x y
    Any help would be appreciated.

    If I understand you right, you want each row in your table to contain an emp_id and the complete text of a multi-record .csv file.
    It's not clear how you relate emp_id to the appropriate file to be read. Is the emp_id stored in the csv file?
    To read the file, you can use functions from [UTL_FILE|http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/u_file.htm#BABGGEDF] (as long as the file is in a directory accessible to the Oracle server):
    declare
        lt_report_clob CLOB;
        l_max_line_length integer := 1024;   -- set as high as the longest line in your file
        l_infile UTL_FILE.file_type;
        l_buffer varchar2(1024);
        l_emp_id report_table.emp_id%type := 123; -- not clear where emp_id comes from
        l_filename varchar2(200) := 'my_file_name.csv';   -- get this from somewhere
    begin
       -- open the file; we assume an Oracle directory has already been created
        l_infile := utl_file.fopen('CSV_DIRECTORY', l_filename, 'r', l_max_line_length);
        -- initialise the empty clob
        dbms_lob.createtemporary(lt_report_clob, TRUE, DBMS_LOB.session);
        loop
          begin
             utl_file.get_line(l_infile, l_buffer);
             dbms_lob.append(lt_report_clob, l_buffer);
          exception
             when no_data_found then
                 exit;
          end;
        end loop;
        insert into report_table (emp_id, report)
        values (l_emp_id, lt_report_clob);
        -- free the temporary lob
        dbms_lob.freetemporary(lt_report_clob);
       -- close the file
       UTL_FILE.fclose(l_infile);
    end;
    This simple line-by-line approach is easy to understand, and gives you an opportunity (if you want) to take each line in the file and transform it (for example, you could transform it into a nested table, or into XML). However it can be rather slow if there are many records in the csv file - the lob append operation is not particularly efficient. I was able to improve the efficiency by caching the lines in a VARCHAR2 up to a maximum cache size, and only then appending to the LOB - see [three posts on my blog|http://preferisco.blogspot.com/search/label/lob].
    There is at least one other possibility:
    - you could use [DBMS_LOB.loadclobfromfile|http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_lob.htm#i998978]. I've not tried this before myself, but I think the procedure is described [here in the 9i docs|http://download.oracle.com/docs/cd/B10501_01/appdev.920/a96591/adl12bfl.htm#879711]. This is likely to be faster than UTL_FILE (because it is all happening in the underlying DBMS_LOB package, possibly in a native way).
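    For reference, here is a minimal sketch of that route, reusing the directory, file name and table from the block above (untested, as noted - do check the parameters against the docs):
    declare
        lt_report_clob CLOB;
        l_bfile        BFILE := bfilename('CSV_DIRECTORY', 'my_file_name.csv');
        l_dest_offset  integer := 1;
        l_src_offset   integer := 1;
        l_lang_context integer := dbms_lob.default_lang_ctx;
        l_warning      integer;
    begin
        dbms_lob.createtemporary(lt_report_clob, TRUE, DBMS_LOB.session);
        dbms_lob.fileopen(l_bfile, dbms_lob.file_readonly);
        -- load the whole file into the clob in a single call
        dbms_lob.loadclobfromfile(
            dest_lob     => lt_report_clob,
            src_bfile    => l_bfile,
            amount       => dbms_lob.getlength(l_bfile),
            dest_offset  => l_dest_offset,
            src_offset   => l_src_offset,
            bfile_csid   => dbms_lob.default_csid,
            lang_context => l_lang_context,
            warning      => l_warning);
        dbms_lob.fileclose(l_bfile);
        insert into report_table (emp_id, report)
        values (123, lt_report_clob);   -- emp_id hardcoded, as in the example above
        dbms_lob.freetemporary(lt_report_clob);
    end;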
    That's all for now. I haven't yet answered your question on how to report data back out of the CLOB. I would like to know how you associate employees with files; what happens if there is > 1 file per employee, etc.
    HTH
    Regards Nigel
    Edited by: nthomas on Mar 2, 2009 11:22 AM - don't forget to fclose the file...

  • How to read the names of .csv files in a particular folder using Oracle

    Gurus,
    I have a folder called 'data_dir' on the Oracle server, and it contains 10 different .csv files; the name of each file is suffixed with the date and time (24-hr format).
    First I need to read all the file names, then I have to process those files with UTL_FILE to load the data into a relational table.
    Is there any mechanism available in Oracle to read the file names?
    (In this case, all 10 different csv file names.)
    If so, then please help me accomplish this.
    Thanks in advance.
    Regards,
    Venugopal.K

    Is there any mechanism available in Oracle to read the file names?
    Sounds to me like you need to use External Tables (*not* utl_file).
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/et_concepts.htm
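    As a rough sketch of what such an external table can look like (the directory object, file name and column list here are all assumptions):
    CREATE TABLE csv_ext (
        col1 VARCHAR2(100),
        col2 VARCHAR2(100)
    )
    ORGANIZATION EXTERNAL (
        TYPE ORACLE_LOADER
        DEFAULT DIRECTORY data_dir
        ACCESS PARAMETERS (
            RECORDS DELIMITED BY NEWLINE
            FIELDS TERMINATED BY ','
            MISSING FIELD VALUES ARE NULL
        )
        LOCATION ('file_20081201_1330.csv')
    );
    The LOCATION clause can be repointed at another file with ALTER TABLE csv_ext LOCATION ('...'), so one definition can serve all ten files in turn; getting the list of file names themselves still has to happen outside pure SQL.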

  • Error while writing to a .CSV file

    hi,
    I am writing output from a PL/SQL block to a .CSV file.
    CREATE OR REPLACE PROCEDURE wr_csv
    AS
    BEGIN
    DECLARE
    abc VARCHAR2(10);
    file_name VARCHAR2(30);
    output_file utl_file.file_type;
    BEGIN
    SELECT ename INTO abc FROM EMP;
    file_name :=('Testwriting.CSV');
    output_file :=utl_file.fopen('d:\test',Testwriting.CSV,'W');
    utl_file.put_line(output_file,'NAME: '||abc );
    utl_file.fclose(output_file);
    EXCEPTION
    WHEN OTHERS THEN
    dbms_output.put_line(SQLCODE||SQLERRM);
    END;
    END wr_csv;
    I am getting an error message:
    LINE/COL ERROR
    11/1 PL/SQL: Statement ignored
    11/40 PLS-00201: identifier 'TESTWRITING.CSV' must be declared
    Is it related to changes in the init.ora file?
    Please suggest.

    EXCEPTION
    WHEN OTHERS THEN
    dbms_output.put_line(SQLCODE||SQLERRM);
    END
    Don't use it like this...
    You never know if an error happens. For example:
    ERROR at line 1:
    ORA-01422: exact fetch returns more than requested number of rows
    ORA-06512: at "SCOTT.WR_CSV", line 7
    ORA-06512: at line 2
    Message was edited by:
    jeneesh
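    For reference, a corrected sketch of the procedure (assuming a directory object DATA_DIR has been created for d:\test; the WHEN OTHERS handler is dropped, per the point above, and the multi-row SELECT is replaced by a cursor loop):
    CREATE OR REPLACE PROCEDURE wr_csv
    AS
        output_file utl_file.file_type;
    BEGIN
        -- the second argument must be a quoted string, not a bare identifier,
        -- which is what caused PLS-00201: identifier 'TESTWRITING.CSV'
        output_file := utl_file.fopen('DATA_DIR', 'Testwriting.CSV', 'W');
        -- EMP has many rows, so SELECT ename INTO a scalar raises ORA-01422
        FOR rec IN (SELECT ename FROM emp) LOOP
            utl_file.put_line(output_file, 'NAME: ' || rec.ename);
        END LOOP;
        utl_file.fclose(output_file);
    END wr_csv;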

  • How to assign an output value in the CSV file under a specific header

    Hi,
    I am using OpenScript 9.10. The problem I am facing is that when writing the output value captured from the application, using the "Append String to File" method, I am unable to write/assign the output value below a specific header. Can anyone please look into this and post the methods for writing output values into CSV files under specific headers?
    Thanks in Advance.
    Thanks.,
    Siva

    Hi Alex,
    Thanks for your reply. I need to write the output value under a specified parameter name.
    For example, after creating the sales order, the order number has to be written in the CSV file under the OrderNumber column. I tried with the appendStringToFile() method; with this I am able to write the value under the first column, but I need to write the output value under a specified column name (meaning a header name in the CSV file). Can you please give a reply for the above problem?
    Note: from that CSV file I am taking the input values, and I also need to write the output values under the specified column (header).
    Thanks,
    Siva Thota.

  • Blank page shown instead of Open/Save As dialog box - CSV file download in IE

    I am migrating an application hosted earlier on Oracle 9iAS (9.0.3.1 BP1) to OracleAS 10g (10.1.3) with Struts 1.2.9. While the below JSP code worked for the earlier s/w stack when accessed using IE 5.5 SP2 or above, a blank page is displayed with the new s/w stack. What I mean by "worked earlier" is this. A "Open/Save As" dialog box would appear. Trying to "Open" would open a new IE window launching MS Excel and displaying correct data. Trying to "Save As" would display correct filename and extension in the dialog box.
    <%@page autoFlush="false" contentType="application/x-filler"%>
    <%
    try {
        String fileName = "ABC.CSV";
        String strData = "A,B,C";
        response.setContentLength(strData.length());
        response.setHeader("Content-Type","application/octet-stream");
        response.setHeader("Content-Disposition","inline;filename="+fileName);
        ServletOutputStream ouputStream = response.getOutputStream();
        ouputStream.write(strData.getBytes(), 0, strData.getBytes().length);
        ouputStream.flush();
        ouputStream.close();
    } catch(Exception e) {
        // the original post swallows the exception here
    }
    %>
    I tried some changes listed below:
    1. Deliberately introduced compilation errors in the JSP code.
    2. Changed contentType="application/x-filler" to various MIME types like application/x-download, application/vnd.ms-excel, removing it altogether.
    3. Commented out response.setHeader("Content-Type","application/octet-stream") and also tried other MIME types related to CSV/Excel.
    4. Changed Content-Disposition from inline to attachment.
    Each of the above resulted in an unacceptable behaviour compared with the earlier state:
    1. A blank page was shown, as if nothing was happening, even in the case of deliberately incorrect JSP syntax.
    2. The dialog box did show up with some combinations, but on trying to "Open" or "Save As", the dialog box shows up a second time. Then, on trying to "Open", the contents get opened either inline or in MS Excel, but with some additional garbage data like the source .jsp file name etc. On trying to "Save As", the file name and extension take on the pattern <URL_Pattern>.htm, where URL_Pattern would be "DOWNLOAD_ACTION.do".
    Some other information:
    1. This code works when accessed from desktop local container (Win 2000) but not when deployed on Unix env (HP-UX B.11.11.0109).
    2. Latest patchsets are installed for IE.
    3. The existing application is designed to work for ONLY IE 5.5 SP2 or above, so I haven't tried with other browsers. I have tried both IE 5.5 SP2 and 6.0 SP1.
    Any help in this regard will be highly appreciated.
    Thanks a lot for your valuable time..

    I have made it to work with the below changes:
    1. Changed the URL pattern for <filter-mapping> for *.jsp to /*.jsp
    2. Added <dispatcher>REQUEST</dispatcher> and <dispatcher>FORWARD</dispatcher>
    3. Everything else (code-wise) is as before.
    Because of these changes, the filter, wherein some headers are being added to the response, is being invoked appropriately now.
    Thanks for your time and suggestion though...

  • How to dynamically and selectively update DSO based on values in a csv file

    Hi,
    I'm loading a csv file into a DSO. When loading the flat file in FULL mode I need to do a pseudo delete of records that were previously loaded but are not in the new flat file.
    Is it possible to dynamically determine the unique set of records (say Pk1, Pk2, Pk3) in the csv file and then set all the corresponding DSO records' quantities to 0 - maybe in a start routine? After that, I can load the csv file with the correct quantities (effectively updates and inserts). The net result should be that the change log is only updated through to the next DSO.
    Example: Load 10 records yesterday. Today reload 9 records. 10th records must have quantity set to 0. Other 9 records will have quantity values set to those in today's csv file -  some will be the same & some will be different. The net change log of all 10 records must be loaded into the next DSO.
    Any suggestions on how to do this logic?
    Thanks!

    Hi Gregg,
    You can create one transformation from the DataStore to itself. In the "Technical" rules group, set 0RECORDMODE = 'X' (before image) or 'R' (reverse). Therefore, when you execute its corresponding DTP, all existing records should be set to zero.
    Then, as a second step, you can execute the DTP which is related to the transformation between the DataStore and the DataSource, thus loading the new records.
    I hope this helps you.
    Regards,
    Maximiliano

  • IDoc to csv file with required fields

    All,
    I have a source IDoc going to XI. For example, it contains source fields SF1 (required), SF2 (required), SF3 (optional) and SF4 (optional).
    I want to produce a target file using file content conversion, with target fields TF1 (required), TF2 (required), TF3 (optional) and TF4 (optional). In my mapping SF1 maps to TF1, SF2 maps to TF2, etc.
    I want to produce a comma-separated file. I am using file content conversion with NameA.fieldSeparator (set to a comma) in my conversion parameters. I have no problem when all 4 source fields are populated: if the values of my source fields are ABC, 123, XYZ and 789, then I get a flat file with the result:
    ABC,123,XYZ,789
    The problem is that when my optional fields are blank, I currently get the following in my csv file:
    ABC,123
    When instead I want:
    ABC,123,,
    I know I've seen threads relating to this issue but I haven't had any success locating them.  Any insight is appreciated.

    Hi Shaun,
    Source: SF1 (required), SF2 (required), SF3 (optional) and SF4 (optional).
    Target: TF1 (required), TF2 (required), TF3 (optional) and TF4 (optional).
    For the optional fields in the mapping: if the source optional field is not there, then assign a blank constant to the target. You will get the output below, because at the target side the element will be there with a blank value, and FCC will process it:
    ABC,123,,
    Otherwise, try NameA.fieldFixedLengths in FCC.
    Regards,
    Prasanna

  • HELP PLEASE!! Acrobat will not recognise my csv file for a script

    Hi,
    I have the script below, which I need to use to split a 300-page document into 50 documents and name them from a csv file provided. I have an Excel spreadsheet with a column headed "filename" containing 50 file names.
    When I select the data file to import, after being asked to enter text at the start of the filename etc., I see the message 'No filenames found - using "file-XX.pdf". Press Escape after continuing to cancel.'
    and then the error -
    RaiseError: The file may be read-only, or another user may have it open. Please save the document with a different name or in a different folder.
    Doc.extractPages:83:Console undefined:Exec
    ===> The file may be read-only, or another user may have it open. Please save the document with a different name or in a different folder.
    file-0
    I have checked the number of rows in the csv against the one from which I created the merge in InDesign, and it is the same. I have also tried using a Google Docs spreadsheet, but Acrobat doesn't recognise the URL.
    Thank you.
    script I am using -
    var CSV = function (data, delimiter) {
        var _data = CSVToArray(data, delimiter);
        var _head = _data.shift();
        return {
            length: function () {return _data.length;},
            adjustedLength: function () {return _data.length - 1;},
            getRow: function (row) {return _data[row];},
            getRowAndColumn: function (row, col) {
                if (typeof col !== "string") {
                    return _data[row][col];
                } else {
                    col = col.toLowerCase();
                    for (var i in _head) {
                        if (_head[i].toLowerCase() === col) {
                            return _data[row][i];
                        }
                    }
                }
            }
        };
    };
    function CSVToArray( strData, strDelimiter ){
        strDelimiter = (strDelimiter || ",");
        var objPattern = new RegExp(
                // Delimiters.
                "(\\" + strDelimiter + "|\\r?\\n|\\r|^)" +
                // Quoted fields.
                "(?:\"([^\"]*(?:\"\"[^\"]*)*)\"|" +
                // Standard fields.
                "([^\"\\" + strDelimiter + "\\r\\n]*))",
            "gi"
        );
        var arrData = [[]];
        var arrMatches = null;
        while (arrMatches = objPattern.exec( strData )){
            var strMatchedDelimiter = arrMatches[ 1 ];
            if (
                strMatchedDelimiter.length &&
                (strMatchedDelimiter != strDelimiter)
            ){
                // a row delimiter was matched: start a new row
                arrData.push( [] );
            }
            var strMatchedValue;
            if (arrMatches[ 2 ]){
                // quoted field: unescape doubled quotes
                strMatchedValue = arrMatches[ 2 ].replace(
                    new RegExp( "\"\"", "g" ), "\"" );
            } else {
                strMatchedValue = arrMatches[ 3 ];
            }
            arrData[ arrData.length - 1 ].push( strMatchedValue );
        }
        return( arrData );
    }
    function isInt(n) {
        return typeof n === "number" && n % 1 == 0;
    }
    var prepend = app.response("Enter any text to go at the START of each filename:");
    var append = app.response("Enter any text to go at the END of each filename:");
    var pathStr = app.response("If the PDFs should be saved in a sub folder, enter the relative path here:", "", "pdf/");
    this.importDataObject("CSV Data");
    var dataObject = this.getDataObjectContents("CSV Data");
    var csvData = new CSV(util.stringFromStream(dataObject, 'utf-8'), ',');
    var pagesPerRecord = this.numPages / csvData.length();
    if (isInt(pagesPerRecord)) {
        // one extracted document per csv row
        for (var i = 0; i < csvData.length(); i++) {
            var pageStart = i*pagesPerRecord;
            var pageEnd = (i+1)*pagesPerRecord - 1;
            var filename = csvData.getRowAndColumn(i, "filename");
            if (!filename) {
                app.alert('No filenames found - using "file-XX.pdf". Press Escape after continuing to cancel.');
                filename = "file-" + i;
            }
            var settings = {nStart: pageStart, nEnd: pageEnd, cPath: pathStr+prepend+filename+append+'.pdf'};
            this.extractPages(settings);
        }
    } else {
        var message = "The number of pages per row is not an integer (" + pagesPerRecord;
        message += ", " + this.numPages + " pages, " + csvData.length() + " rows).";
        app.alert(message);
    }
    (which I found on another forum, but no one has answered me and I am desperate)
    Thank you in advance.


  • Address Book can't read .csv files exported from Numbers (Solved)

    Hello,
    I can hardly believe this myself, but please try. Numbers separates fields with semicolons when exporting my address data to a .csv file, which seems completely reasonable to me. When trying to import this into Address Book, the application moans that it can't import this file because it is not valid.
    Do programmers from separate groups at Apple ever test whether their apps work together? I just expect them to do so, and I can't imagine a simpler case than this. This is really poor. The reason why I put this complaint in the Numbers discussion is that there is no forum for Address Book (whereas iCal got its own - why?).
    And the reason why I pick up this topic again in a new thread is that the other thread I found related to transferring data from Numbers to Address Book took a weird twist when they resorted to writing a complicated and elaborate AppleScript for this basic task. The script provided there messed up valuable address data in its first version, so I am not going to try it here.
    I just used the search-and-replace functionality of my text editor to solve it for me. But I still believe that this should work without headaches right out of the box. So Address Book programmers, please hurry up, go, fix it. It is just too distressing. Imagine, I did this in public with Windows guys around. I don't want to experience that again.
    Yours, Christian Völker

    Hello Yvan,
    Yes, I agree that Numbers is doing fine, and thanks to your explanation I even know why it does things the way it does, but Address Book remains broken.
    Regarding your script: I did not even bother to read it, and I just don't trust it, because I can't sue you in case something goes wrong. I prefer to make my own mistakes, and in this case it was an even better and faster solution for me to use my text editor with search and replace.
    To make the story complete: after the data was accepted, Address Book wasn't able to recognize the field labeled "Email" as an email address, or "Street" as a street name. After telling Address Book about the meaning of the fields, it consistently crashed while importing. No, this is no good. A script won't heal these flaws.
