Stored Proc to create .csv file from table

Hi all,
I have a table with 4 columns and 10 rows. Now I need a stored proc to create a .csv file from the table data. Here are the scripts to create the table and insert the rows.
Create table emp(emp_id number(10), emp_name varchar2(20), department varchar2(20), salary number(10));
Insert into emp values (1, 'AAAAA', 'SALES', 10000);
Insert into emp values (2, 'BBBBB', 'MARKETING', 8000);
Insert into emp values (3, 'CCCCC', 'SALES', 12000);
Insert into emp values (4, 'DDDDD', 'FINANCE', 10000);
Insert into emp values (5, 'EEEEE', 'SALES', 11000);
Insert into emp values (6, 'FFFFF', 'MANAGER', 90000);
Insert into emp values (7, 'GGGGG', 'SALES', 12000);
Insert into emp values (8, 'HHHHH', 'FINANCE', 14000);
Insert into emp values (9, 'IIIII', 'SALES', 20000);
Insert into emp values (10, 'JJJJJ', 'FINANCE', 21000);
commit;
Now I need a stored proc to create a .csv file in my local location. Please let me know if you need any other details.

Some pointers:
http://www.oracle-base.com/articles/9i/GeneratingCSVFiles.php
http://tkyte.blogspot.com/2009/10/httpasktomoraclecomtkyteflat.html
also, doing a search on this forum or http://asktom.oracle.com will give you many clues.
You say ".csv file in my local location". What is your 'local location'?
A client machine? The database server machine?
What database version are you using?
(the result of: select * from v$version; )
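If the 'local location' turns out to be a directory on the database server, a minimal UTL_FILE sketch would do. This assumes an Oracle directory object (here named CSV_DIR, an assumed name) that your user can write to; the procedure name emp_to_csv is likewise made up:

CREATE OR REPLACE PROCEDURE emp_to_csv AS
  -- Assumes: CREATE DIRECTORY csv_dir AS '/some/server/path';
  --          GRANT READ, WRITE ON DIRECTORY csv_dir TO <your_user>;
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  l_file := UTL_FILE.FOPEN('CSV_DIR', 'emp.csv', 'w');
  -- Header row, then one comma-separated line per employee
  UTL_FILE.PUT_LINE(l_file, 'EMP_ID,EMP_NAME,DEPARTMENT,SALARY');
  FOR r IN (SELECT emp_id, emp_name, department, salary
              FROM emp
             ORDER BY emp_id) LOOP
    UTL_FILE.PUT_LINE(l_file, r.emp_id || ',' || r.emp_name || ',' ||
                              r.department || ',' || r.salary);
  END LOOP;
  UTL_FILE.FCLOSE(l_file);
END emp_to_csv;
/

Note that UTL_FILE always writes on the database server, never on a client PC. If 'local location' means your desktop, you need a client-side tool instead (e.g. SQL*Plus SPOOL, as in the oracle-base link above).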

Similar Messages

  • How to create .csv file from ABAP report

    Hi
We have a requirement to generate a .csv file from an ABAP report.
Currently the user saves the report data to a spreadsheet (.xls format) on the desktop, then opens the Excel file and saves it as .csv format. We need an option to save directly in .csv format instead of .xls format.
Please let me know if there is any standard function module available to create a .csv file.
    Regards
    Uma

I tried with your code but it goes to a short dump. Here is the corrected version (the dump came from the declaration of the converted table):
REPORT ztemp101 MESSAGE-ID 00.

TABLES: lfa1.

TYPES: BEGIN OF t_lfa1,
         lifnr LIKE lfa1-lifnr,
         name1 LIKE lfa1-name1,
       END OF t_lfa1.

DATA: i_lfa1  TYPE STANDARD TABLE OF t_lfa1,
      wa_lfa1 TYPE t_lfa1.

* TRUXS_T_TEXT_DATA is itself a table type ("occurs 0"), so the data
* object must be declared TYPE truxs_t_text_data, not TYPE TABLE OF it.
* Declaring a table of tables is what caused the short dump.
TYPES: truxs_t_text_data(4096) TYPE c OCCURS 0.
DATA: csv_converted_table TYPE truxs_t_text_data.

SELECT-OPTIONS: s_lifnr FOR lfa1-lifnr.

SELECT lifnr name1 FROM lfa1 INTO TABLE i_lfa1
       WHERE lifnr IN s_lifnr.

* Optional parameters left empty in the original call are omitted here;
* empty "param =" lines are a syntax error.
CALL FUNCTION 'SAP_CONVERT_TO_CSV_FORMAT'
  EXPORTING
    i_field_seperator    = ';'
  TABLES
    i_tab_sap_data       = i_lfa1
  CHANGING
    i_tab_converted_data = csv_converted_table
  EXCEPTIONS
    conversion_failed    = 1
    OTHERS               = 2.
IF sy-subrc <> 0.
  MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
          WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.

CALL FUNCTION 'WS_DOWNLOAD'
  EXPORTING
    filename                = 'C:\Documents and Settings\ps12\Desktop\Test folder\exl.csv'
    filetype                = 'DAT'
  TABLES
    data_tab                = csv_converted_table
  EXCEPTIONS
    file_open_error         = 1
    file_write_error        = 2
    invalid_filesize        = 3
    invalid_type            = 4
    no_batch                = 5
    unknown_error           = 6
    invalid_table_width     = 7
    gui_refuse_filetransfer = 8
    customer_error          = 9
    OTHERS                  = 10.
IF sy-subrc <> 0.
  MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
          WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
    my version is 4.6c

  • Create XML file from table data

    Dear All,
With Data Services 4.0, I want to create an XML file from table data.
The table has a single column but multiple records, for example:
    0001000488;100;EUR;
    0001000489;200;EUR;
    0001000450;300;EUR;
    My desired XML output:
    <Data>
      0001000488;100;GBP;
      0001000489;200;EUR;
      0001000450;300;EUR;
    </Data>
I tried with a sample query, but the system writes only the last record to the XML file:
    <Data>
      0001000450;300;EUR;
    </Data>
Can anyone help me?
Thanks in advance.
    Simone

    Hello
That is a very simple (also odd) XML document structure, and as such doesn't require use of the XML target. It can easily be achieved by writing a normal file with a header and footer, using a Row_Generation transform and a query to generate the hard-coded open and close tags.
    Michael

  • Creating text file from table

    Hi all
I have a table LFA1 with fields LIFNR, MANDT, NAME1, NAME2, .... The table contains data, and I need to create a text file that contains all the headers together with all the data, where each field is separated by a TAB.
    thanks for your help.

Here is a program for KNA1; the same approach works for LFA1.
    *"Table declarations...................................................
    tables:
      kna1.                              " General Data in Customer Master
    *"Selection screen elements............................................
    select-options:
      s_kunnr for kna1-kunnr.            " Customer Number 1
    *" Type declarations...................................................
    types:
      begin of type_s_customer_details,
        name        like kna1-kunnr,
        address     like kna1-adrnr,
        title       like kna1-anred,
        createdon   like kna1-erdat,
        createdby   like kna1-ernam,
      end of type_s_customer_details.
    Internal table to hold General Data in Customer Master data         *
    data:
      t_customer_details type table
                           of type_s_customer_details
                         with header line.
                          START-OF-SELECTION EVENT                      *
    start-of-selection.
      perform select.
    *&      Form  select
    This subroutine selects information from database and exports the    *
    data into presentation layer                                         *
    There are no interface parameters to be passed to this subroutine.  *
    form select .
      select kunnr
              adrnr
              anred
              erdat
              ernam
         into table t_customer_details
         from kna1
        where kunnr in s_kunnr.
      if sy-subrc ne 0.
        write : / 'DATABASE SELECTION FAILED'.
      else.
        call function 'GUI_DOWNLOAD'
          exporting
      BIN_FILESIZE                    =
           filename                        = 'd:\customer_details.txt'
           filetype                        = 'ASC'
      APPEND                          = ' '
      WRITE_FIELD_SEPARATOR           = ' '
      HEADER                          = '00'
      TRUNC_TRAILING_BLANKS           = ' '
      WRITE_LF                        = 'X'
      COL_SELECT                      = ' '
      COL_SELECT_MASK                 = ' '
      DAT_MODE                        = ' '
      CONFIRM_OVERWRITE               = ' '
      NO_AUTH_CHECK                   = ' '
      CODEPAGE                        = ' '
      IGNORE_CERR                     = ABAP_TRUE
      REPLACEMENT                     = '#'
      WRITE_BOM                       = ' '
      TRUNC_TRAILING_BLANKS_EOL       = 'X'
    IMPORTING
      FILELENGTH                      =
          tables
            data_tab                        = t_customer_details
      FIELDNAMES                      =
         exceptions
           file_write_error                = 1
           no_batch                        = 2
           gui_refuse_filetransfer         = 3
           invalid_type                    = 4
           no_authority                    = 5
           unknown_error                   = 6
           header_not_allowed              = 7
           separator_not_allowed           = 8
           filesize_not_allowed            = 9
           header_too_long                 = 10
           dp_error_create                 = 11
           dp_error_send                   = 12
           dp_error_write                  = 13
           unknown_dp_error                = 14
           access_denied                   = 15
           dp_out_of_memory                = 16
           disk_full                       = 17
           dp_timeout                      = 18
           file_not_found                  = 19
           dataprovider_exception          = 20
           control_flush_error             = 21
           others                          = 22
        if sy-subrc <> 0.
          write : / 'Upload Failed' , sy-subrc.
        else.
          write : / 'Upload Completed'.
        endif.
      endif.
    endform.                               " SELECT

  • Seeking simple example pl/sql to create text file from table data

    hello,
I am hoping someone can provide a very simple example of creating a file on my local hard drive using a PL/SQL program. The basic steps are as follows:
    First, I store some text in a varchar2 variable like this:
    1. select sometext into otextvar from mytable where recordid = 1;
    Second, I want this text to become a file in my data directory:
    2. c:\data\sometext.txt
    The second step is where I need help.
    Any suggestions are greatly appreciated.

Use this procedure.
It will create a file in your /home/oracle directory, named after the current date, and write all table names into it.
CREATE OR REPLACE PROCEDURE my_proc AS
  CURSOR cursor1 IS
    SELECT table_name FROM all_tables;
  rec1              cursor1%ROWTYPE;
  created_file_name VARCHAR2(100);
  file_handle       utl_file.file_type;
BEGIN
  -- Use an explicit format mask rather than the original's implicit
  -- DATE-to-VARCHAR2 conversion (SYSDATE cannot be referenced as a
  -- record field name, so the second cursor over DUAL is unnecessary).
  created_file_name := TO_CHAR(SYSDATE, 'YYYYMMDD_HH24MISS');
  -- '/home/oracle' must be listed in UTL_FILE_DIR, or replace it with
  -- a directory object name.
  file_handle := utl_file.fopen('/home/oracle', created_file_name, 'w');
  OPEN cursor1;
  LOOP
    FETCH cursor1 INTO rec1;
    EXIT WHEN cursor1%NOTFOUND;
    utl_file.putf(file_handle, '%s\n', rec1.table_name);
  END LOOP;
  CLOSE cursor1;   -- the original never closed this cursor
  utl_file.fclose(file_handle);
END my_proc;
/
SQL> exec my_proc;

  • How to create xml file from original

I have this complicated XML and I want to search through it and make a new XML with only the elements that match. What is the best way to do this? The file is about 6 MB and I want to read the whole file easily. Should I load it into a database table and create a file from the table? I have had issues reading the file.
<ArrayOfJobClass xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<JobClass>
<Triggers>
<TriggerClass>
<ExpireActionType>Delete</ExpireActionType>
<ExpireType>DateTime</ExpireType>
<TriggerType>TimeType</TriggerType>
<TTime>
<TimeTriggerType>Custom</TimeTriggerType>
<IntervalType>Daily</IntervalType>
<SpecificType>Days</SpecificType>
<FirstLastType>First</FirstLastType>
<IntervalValue>1</IntervalValue>
<FirstLastWeekDay>Monday</FirstLastWeekDay>
<InitDate>2009-05-04T14:17:05.40625-07:00</InitDate>
<IntervalDays>
    <boolean>false</boolean>
    <boolean>false</boolean>
    <boolean>false</boolean>
    <boolean>false</boolean>
    <boolean>false</boolean>
    <boolean>false</boolean>
    <boolean>false</boolean>
    <boolean>false</boolean>
    <boolean>false</boolean>
    <boolean>false</boolean>
    <boolean>false</boolean>
    <boolean>false</boolean>
    <boolean>false</boolean>
    <boolean>false</boolean>
    <boolean>false</boolean>
    <boolean>false</boolean>
    <boolean>false</boolean>
    <boolean>false</boolean>
    <boolean>false</boolean>

    You can load it in the database in an XMLType table (Object-Relational or Binary XML storage), use XQuery on the XML content and then write back the result to a file.
    Or, you might just use an external processor (XSLT or XQuery).
    If you want to stay in the Oracle world, you can use :
    - For XSLT : $ORACLE_HOME/bin/oraxsl (it's a wrapper for the java XSLT engine)
    - For XQuery : the java XQuery API
Personally, I sometimes use the Saxon XSLT and XQuery processor, which is quite efficient.
Here's a simplistic example with the oraxsl utility:
    emp.xml
    <?xml version="1.0" encoding="UTF-8"?>
    <emps>
      <emp id="7369">
        <name>SMITH</name>
        <job>CLERK</job>
        <salary>800</salary>
      </emp>
      <emp id="7499">
        <name>ALLEN</name>
        <job>SALESMAN</job>
        <salary>1600</salary>
      </emp>
      <emp id="7521">
        <name>WARD</name>
        <job>SALESMAN</job>
        <salary>1250</salary>
      </emp>
      <emp id="7566">
        <name>JONES</name>
        <job>MANAGER</job>
        <salary>2975</salary>
      </emp>
      <emp id="7654">
        <name>MARTIN</name>
        <job>SALESMAN</job>
        <salary>1250</salary>
      </emp>
    </emps>
    emp.xsl
    <?xml version="1.0" encoding="UTF-8"?>
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:output method="xml"/>
    <xsl:template match="text()"/>
    <xsl:template match="emps/emp[job='SALESMAN']">
      <xsl:copy-of select="."/>
    </xsl:template>
</xsl:stylesheet>
Applying the transformation to search and extract all "salesmen":
    D:\ORACLE\test>%ORACLE_HOME%\bin\oraxsl emp.xml emp.xsl result.xml
    D:\ORACLE\test>type result.xml
    <?xml version = '1.0' encoding = 'UTF-8'?>
    <emp id="7499">
        <name>ALLEN</name>
        <job>SALESMAN</job>
        <salary>1600</salary>
      </emp><emp id="7521">
        <name>WARD</name>
        <job>SALESMAN</job>
        <salary>1250</salary>
      </emp><emp id="7654">
        <name>MARTIN</name>
        <job>SALESMAN</job>
        <salary>1250</salary>
      </emp>

  • Inserting Record into CSV file from BizTalk Orchestration

Scenario:
1. Receive a file from the source system via a receive pipeline.
2. In the Orchestration, extract some values like ENO, Ename, Salary etc. These values are to be added to a CSV file from an Expression Shape. How do I append emp records to the CSV without overwriting the existing rows?
Ex: If we submit 10 files then the CSV file should contain 10 rows.
Let me know how to create a CSV file from an Orchestration and how to add rows to it.
Regards BizTalkWorship

Simple.
Receive the message through a Receive Port/Location.
Create a flat-file schema representing the CSV file structure. Ensure each row is delimited by “{CR}{LF}”.
This flat-file schema should only contain the elements which you want to see in the destination CSV file, like ENO, Ename, Salary etc.
Have a map where the source schema is the one representing the received file and the destination schema is the flat-file schema created above.
Map the source schema to the destination schema, mapping the fields ENO, Ename, Salary etc.
Have a custom send pipeline with a flat-file assembler component in it. Use this send pipeline in the send port.
In the send port, configure the send filter like “BTS.ReceivePortName == YourReceivePortName”. Configure the send port’s “Outbound Maps” to the map created in the step above.
Key point: in your send port, set the “Copy Mode” property to “Append” instead of the default “Create New”.
With the send port’s “Copy Mode” property configured to “Append”, each output is appended to the existing file. Since each record in your flat-file schema is delimited by “{CR}{LF}”, the records line up one per row. So if 10 files are received, instead of 10 output files you will have 1 CSV file with 10 rows.
If you prefer to construct the message in the Orchestration (applying the map there) rather than via the outbound map on the send port, you can still do that.

  • SSIS 2008 – Read roughly 50 CSV files from a folder, create SQL table from them dynamically, and dump data.

    Hello everyone,
    I’ve been assigned one requirement wherein I would like to read around 50 CSV files from a specified folder.
In step 1 I would like to create the schema for these files, meaning take the CSV files one by one and create a SQL table for each, if it does not exist at the destination.
In step 2 I would like to append the data of these 50 CSV files into the respective tables.
In step 3 I would like to purge data older than a given date.
Please note, the data in these CSV files will be very bulky; I would like to know the best way to insert bulky data into a SQL table.
    Also, in some of the CSV files, there will be 4 rows at the top of the file which have the header details/header rows.
According to my knowledge I will be asked to implement this on SSIS 2008, but I'm not 100% sure of it.
So please feel free to provide multiple approaches if we can achieve these requirements more elegantly in newer versions like SSIS 2012.
    Any help would be much appreciated.
    Thanks,
    Ankit
Thanks, Ankit Shah | Inkey Solutions, India | Microsoft Certified Business Management Solutions Professional | http://ankit.inkeysolutions.com

    Hello Harry and Aamir,
    Thank you for the responses.
@Aamir, thank you for sharing the link. Yes, I'm going to use a Script Task to read the header columns of the CSV files and prepare one SSIS variable holding the SQL script that creates the required table (with an IF EXISTS check) inside the Script Task itself.
I will have an "Execute SQL Task" following the Script Task, and this will create the actual table for a CSV.
Both these components will sit inside a Foreach Loop container and process all 50 CSV files one by one.
    Some points to be clarified,
1. In the bunch of these 50 CSV files there will be some exceptions for which we first need to purge the tables and then insert the data. Meaning for 2 files out of 50 we need to first clean the tables and then perform the data insert, while the remaining 48 files should be appended to on a daily basis.
    Can you please advise what is the best way to achieve this requirement? Where should we configure such exceptional cases for the package?
2. For some of the CSV files we will have more than one file with the same name. For example, the 2nd file out of 50 is divided into 10 different CSV files, so in total we have 60 files, of which 10 have repeated file names. How can we manage this within the same loop? Do we need another Foreach loop inside the parent one? What is the best way to achieve this requirement?
3. There will be another package, used to purge data from the SQL tables. Unlike the above package, this one will not run on a daily basis. At some point we would like these 50 tables to be purged with an older-than criterion, say remove data older than 1st Jan 2015. What is the best way to achieve this requirement?
    Please know, I'm very new in SSIS world and would like to develop these packages for client using best package development practices.
    Any help would be greatly appreciated.
Thanks, Ankit Shah | Inkey Solutions, India | Microsoft Certified Business Management Solutions Professional | http://ankit.inkeysolutions.com
    1. In the bunch of these 50 CSV files there will be some exception for which we first need to purge the tables and then insert the data. Meaning for 2 files out of 50, we need to first clean the tables and then perform
    data insert, while for the rest 48 files, they should be appended on daily basis.
    Can you please advise what is the best way to achieve this requirement? Where should we configure such exceptional cases for the package?
How can you identify these files? Is it based on the file name, or is there some info in the file which indicates that it requires a purge? If so, you can pick this information up during the file name or file data parsing step and set a boolean variable. Then in the control flow have a conditional precedence constraint which checks the boolean variable and, if it is set, executes an Execute SQL Task to do the purge (you can use TRUNCATE TABLE or DELETE FROM TableName statements).
    2. For some of the CSV files we would be having more than one file with the same name. Like out of 50 the 2nd file is divided into 10 different CSV files. so in total we're having 60 files wherein the 10 out of 60 have
    repeated file names. How can we manage this criteria within the same loop, do we need to do one more for each looping inside the parent one, what is the best way to achieve this requirement?
The best way to achieve this is to append a sequential value (maybe a timestamp) to the filename and then process the files in sequence. This can be done prior to the main loop so that you can use the same loop to process these duplicate filenames as well. The best thing would be to use the file-creation-date attribute so that the files get processed in the right sequence. You can use a script task to get this for each file, as below:
    http://microsoft-ssis.blogspot.com/2011/03/get-file-properties-with-ssis.html
    3. There will be another package, which will be used to purge data for the SQL tables. Meaning unlike the above package, this package will not run on daily basis. At some point we would like these 50 tables to be purged
    with older than criteria, say remove data older than 1st Jan 2015. what is the best way to achieve this requirement?
You can use a SQL script for this. Just call a SQL procedure with a single parameter called @CutOffDate and then write logic like below:
    CREATE PROC PurgeTableData
    @CutOffDate datetime
    AS
    DELETE FROM Table1 WHERE DateField < @CutOffDate;
    DELETE FROM Table2 WHERE DateField < @CutOffDate;
    DELETE FROM Table3 WHERE DateField < @CutOffDate;
    GO
where @CutOffDate denotes the date before which data has to be purged.
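For example, to purge everything older than 1st Jan 2015 (the table and column names above are placeholders):
EXEC PurgeTableData @CutOffDate = '20150101';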
    You can then schedule this SP in a sql agent job to get executed based on your required frequency
    Visakh

  • Create csv file for data in tables

    Hi All,
    I need to "export" data for about 60 tables in one of my databases to a csv file format using a "|" as a separator.
    I know I can do this using a query like:
    select col1 || '|' || col2 || '|' || col3 from table;
    Some of my tables have more than 50 columns so I'm guessing there is an easier way to do this than to construct select SQL statements for each table?
    Thanks in advance.

I would point out that the OP did not identify the target for the files, so it could be a non-Oracle database or Excel, in which case external tables would not work, since Oracle only writes to external tables in datapump format. If the target is another Oracle database then external tables would be an option. If external tables are an option, then insert/select over a database link would be a potential alternate approach to using csv files.
    I use a SQL script to generate the Select statement to create my csv files.
    set echo off
    rem
    rem SQL*Plus script to create comma delimited output file from table
    rem
    rem 20000614  Mark D Powell   Automate commonly done task
    rem
    set pagesize  0
    set verify    off
    set feedback  off
    set linesize  999
    set trimspool on
    accept owner    prompt 'Enter table owner => '
    accept tblname  prompt 'Enter table name => '
    spool csv2.sql
    select 'select ' from sys.dual;
    select decode(column_id,1,column_name,
                 '||''|''||'||column_name)
    from   sys.dba_tab_columns
    where  table_name = upper('&&tblname')
    and    owner      = upper('&&owner')
    order by column_id;
    select 'from &&owner..&&tblname;'
    from   sys.dual;
    spool off
    undefine owner
    undefine tblname
    HTH -- Mark D Powell --
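To produce the actual CSV file you would then spool the output of the generated statement, along these lines (the generator script is assumed to be saved as gen_csv.sql; it writes the SELECT into csv2.sql as shown above):

SQL> @gen_csv.sql
Enter table owner => SCOTT
Enter table name => EMP
SQL> spool emp.csv
SQL> @csv2.sql
SQL> spool off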

  • How to create csv file in stored procedure

    I want the output of my stored procedure in a csv file. How can I do that?

1) Define the file path.
2) Write the output comma-separated to the file.
Simple :)
Hope you are comfortable with file operations.
    URL for your reference
    PL/SQL output to .CSV file
    Regards,
    Bhushan

  • Reading a CSV file from server

    Hi All,
I am reading a CSV file from the server, and my internal table has only one field, with length 200. The input CSV file has more than one column, and after splitting each input record my internal table should have the same number of rows as the record has columns.
But when I do that, the last field in the internal table is appended with '#'.
Can somebody tell me the solution for this?
You can see my code below.
data: begin of itab_infile occurs 0,
         input(3000),
      end of itab_infile.
data: begin of itab_rec occurs 0,
         record(200),
      end of itab_rec.
data: c_sep(1) value ','.

open dataset p_ipath for input in text mode encoding default.
if sy-subrc <> 0.
  write: /, 'FILE NOT FOUND'.
  exit.
endif.

do.
  read dataset p_ipath into itab_infile-input.
  if sy-subrc <> 0.
    exit.                 " end of file reached
  endif.
* Strip a trailing carriage return (shown as '#') left over from a
* Windows-created file before splitting (see the reply below).
  replace all occurrences of cl_abap_char_utilities=>cr_lf(1)
          in itab_infile-input with ''.
  split itab_infile-input at c_sep into table itab_rec.
enddo.
    Thanks in advance.
    Sunil

Sunil,
You do not mention the platform on which the CSV file was created or the platform on which it is read.
A common problem with CSV files created on MS/Windows platforms and read on Unix is the end-of-record (EOR) characters.
MS/Windows uses <CR><LF> as the EOR.
Unix uses just <LF>.
If on Unix, open the file using vi in a telnet session to confirm the EOR type.
The fix options:
1) Before opening the file in your ABAP program, run the Unix command dos2unix.
2) Transfer the file from the MS/Windows platform to Unix using FTP in ascii mode, not bin. This does the dos2unix conversion on the fly.
3) Install SAMBA and share the load directory to the Windows platforms. SAMBA also handles the dos2unix and unix2dos conversions on the fly.
    Hope this helps
    David Cooper

  • Problem in creating csv file

I have a problem creating a csv file.
If a column has a single-line value, it comes out in a single cell. But if the column value contains several lines (carriage returns entered while keying the data into the table), I cannot create the csv file properly: that one column value takes up more than one cell in the csv.
    For example the column "Issue" has following value:
    "Dear
    I hereby updated the Human Resources: New User Registration Form Request.
    And sending the request for your action.
    Regards
    Karthik".
    If i try to create the csv file that particular record is coming as follows:
    0608001,AEGIS USERID,SINGAPORE,Dear
    I hereby updated the Human Resources: New User Registration Form Request.
    And sending the request for your action.
    Regards
    Karthik,closed.
If we try to load the data into a table it gives an error, since that one record spans more than one line. How can I store that value on a single line in the csv file?
    Pls help.

I have tried using chr(10) and chr(13) like this ... still it is not solved.
    select REQNO ,
    '"'||replace(SUBJECT,chr(13),' ')||'"' subject,
    AREA ,
    REQUESTOR ,
    DEPT ,
    COUNTRY ,
    ASSIGN_TO ,
    to_Date(START_DT) ,
    START_TIME ,
    PRIORITY ,
    '"'||replace(issues, chr(13), ' ')||'"' issues,
    '"'||replace(SOLUTIONS,chr(13),' ')||'"' SOLUTIONS ,
    '"'||replace(REMARKS,chr(13),' ')||'"' REMARKS ,
    to_date(CLOSED_DT) ,
    CLOSED_TIME ,
    MAN_DAYS ,
    MAN_HRS ,
    CLOSED_BY ,
    STATUS from asg_log_mstr
    Pls help.
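For what it's worth, the query above only strips chr(13); any chr(10) line feeds survive and still break the record across lines. A sketch of the nested replace, using the same table and two of the affected columns:

select reqno,
       '"'||replace(replace(issues,    chr(13), ' '), chr(10), ' ')||'"' issues,
       '"'||replace(replace(solutions, chr(13), ' '), chr(10), ' ')||'"' solutions
from   asg_log_mstr;

The same double replace applies to SUBJECT and REMARKS.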

  • Delete a .csv file from desktop system

    Hi All,
My requirement is to read a .csv file from a desktop system's shared folder and delete the file after it has been read successfully.
I can read the .csv file from that location using the function RFC_REMOTE_FILE and update its content into an internal table.
But I can't delete the file from the presentation server (desktop system).
Can anyone tell me how to delete a .csv file from a desktop system at a different location?
    Note:
    I followed this link to read file:
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/9831750a-0801-0010-1d9e-f8c64efb2bd2&overridelayout=true

    Hi Rob,
Thanks. I solved this problem myself.
The solution to delete the file from the remote system is:
concatenate 'DEL' i_filename i_dirname into v_bkfile separated by space.
call function 'RFC_REMOTE_EXEC'
  destination c_dest
  exporting
    command               = v_bkfile
  exceptions
    system_failure        = 1 message v_ermsg
    communication_failure = 2 message v_ermsg.

  • Can we create .CSV files through documaker

    Hi,
Is it possible to create .csv files through Documaker, apart from PDFs?
    Thanks

    With apologies, it is still unclear (to me) what you are asking.
    It is possible to create files using DAL, but to offer more specific guidance requires a better understanding of what it is that you want.
A CSV is a Comma-Separated Value file. A PDF file is a printed document file containing formatted text content, graphics, lines, boxes, etc. - in other words, a fully-composed document. I'm not exactly sure how a PDF and a CSV file could contain the same data.
    Are you perhaps asking to get an index of the created PDF files written as a CSV file? Something that contains a row of key transaction data along with the file name of the generated PDF? Perhaps something akin to the batch (BCH) files that can be produced per transaction recipient?
    Are you asking for an export file of all the field data that went into the creation of the document and included in the PDF? Normally a CSV has a header that describes the columns and then each row of data would be consistent with that header. Without knowing that every PDF file you create will have exactly the same number of fields defined, it is difficult to imagine that a single CSV file would be sufficient. And depending upon the number of fields and the size of the data assigned to each, this could be quite a long record per PDF in the resulting CSV file.
Or perhaps you had in mind to get a name/value pair written for each field, with the data written on separate lines? This would not be a true CSV file, but it could have the names and values separated by commas if that is what you require.
    You have something specific in mind and yet, we are not in your mind.  We need more specific information on exactly what you are trying to accomplish.

  • How to load csv file to table using ssis

    Hi,
I have loaded a CSV file into a table using SSIS. It is working fine, but the records are mismatched.
    source data :
    col1 col2 col3 col4
    aa, bb, this is sample,data for the details, this is 
column delimiter - comma
col3 contains a comma, so the part of the col3 value after the comma ends up in col4.
I need the col3 record with the comma intact.
So what column delimiter should I use?
    Regards,
    abdul Khadir

Hi Abdul,
Based on your description, you want to distinguish whether a comma is used as the column separator or is part of a column's value in the text file, and then load the data from the text file into a SQL Server table.
As per my understanding, if you can replace the comma column separator with another delimiter like semicolon (;), just do it. Then we can select Semicolon {;} as the Column delimiter for the Flat File Connection Manager. Or ensure all columns are enclosed in double quotes ("). Then we can set the double quote (") as the Text qualifier for the Flat File Connection Manager, and the commas will be loaded as part of the string fields.
If you can't have that done then, because computers don't know the context of the data, you would have to come up with some kind of rules that decide when a comma represents a delimiter and when it is just part of the text. I think a custom script component would be necessary to pre-process the data, identify where a comma is part of a value, and then treat that as one field.
    Thanks,
Katherine Xiong
    TechNet Community Support
