Export query results to flat file with dynamic filename

Hi
Can anybody point me to how to dynamically export a query result set to, for example, a txt file using process flows in OWB?
Let's say I have a simple select query:
select * from table1 where daterange >= sysdate -1 and daterange < sysdate
so the query results will be different every day because the date range will be different. I would also like to name the txt file dynamically,
e.g. results_20090601.txt, results_20090602.txt, results_20090603.txt
I can't see any activity in the process editor to enter a custom SQL statement, as there is in MSSQL 2000 or 2005.
thanks in advance

You can call existing procedures from a process flow, and the procedure can create the file with whatever name you desire. OWB maps with a file as target can also create a file with a dynamic name defined by an expression (see here).
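If you go down the procedure route, here is a minimal sketch of what such a procedure could look like (untested; it assumes a database directory object called EXPORT_DIR pointing at the target folder, and col1/col2 are placeholders for the real columns of table1):
CREATE OR REPLACE PROCEDURE export_daily_results IS
   v_file UTL_FILE.file_type;
BEGIN
   -- file name carries the run date, e.g. results_20090601.txt
   v_file := UTL_FILE.fopen('EXPORT_DIR',
                            'results_' || TO_CHAR(SYSDATE, 'YYYYMMDD') || '.txt',
                            'W');
   -- same predicate as in your question; col1/col2 are placeholders
   FOR r IN (SELECT col1, col2
               FROM table1
              WHERE daterange >= SYSDATE - 1
                AND daterange <  SYSDATE)
   LOOP
      UTL_FILE.put_line(v_file, r.col1 || ',' || r.col2);
   END LOOP;
   UTL_FILE.fclose(v_file);
END export_daily_results;
/
The procedure can then be called from the process flow as described above.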
Cheers
David

Similar Messages

  • Importing From Flat File with Dynamic Columns

Hi,
I am using SSIS 2008. I have a folder containing four ".txt" files, each with 2 columns (ID, NAME). I loaded the 4
files into one destination, but today I received one more ".txt" file that has 3 columns (ID, NAME, JOB). How can I get a message that a new column has arrived in the source, and how can I create the extra column in my destination table dynamically? Please help me.

    Hi Sasidhar,
You need a Script Task to read the names and number of columns in the first row of the flat file each time and store them in a variable, then create a staging table dynamically based on this variable and modify the destination table definition if one or more
new columns need to be added, and then use the staging table to load the destination table. I am afraid there is no available working script for your scenario, and you need some .NET coding experience to achieve your goal. Here is an example you can refer
to:
    http://www.citagus.com/citagus/blog/importing-from-flat-file-with-dynamic-columns/ 
    Regards,
    Mike Yin
    TechNet Community Support

  • Export query result to txt file

    Hello,
I'm trying to export a query result to a txt file but I'm facing some problems.
I'm using the command below:
    set echo off newpage 0 space 0 pagesize 0 feed off head off trimspool on
    set colsep ,
    spool C:\estados.txt
    select id_estado,cod_estado,nme_estado from tb_estado;
    spool off
First problem: my SELECT statement is being written to estados.txt.
Second problem: the results are being written with a lot of blank spaces at the start of each line; for example, instead of writing "1,AC,Acre" the line is written as " 1,AC,Acre".
Third problem: the "spool off" statement is being written to estados.txt.
    How could I solve it?
    Thanks

    1. Works here - what version of SQL*PLUS are you using?
    2. If you want to prevent spaces, you have to write your SELECT like this:
select id_estado || ',' || cod_estado || ',' || nme_estado from tb_estado;
Note: not tested.
    3. See 1.
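A complete script along those lines might look like this (a sketch only, not tested; the script file name estados.sql is just an example). Running the commands from a script file with echo off, instead of typing them interactively, also keeps the SELECT statement and the "spool off" line out of the output file, which should take care of problems 1 and 3:
-- estados.sql  (run from SQL*Plus as: @estados.sql)
set echo off verify off feedback off heading off pagesize 0 trimspool on
spool C:\estados.txt
select id_estado || ',' || cod_estado || ',' || nme_estado from tb_estado;
spool off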
    C.

  • Export query results into .csv file?

Hello, I have a T-SQL script that gets row counts for a specified date range and then needs to loop (by incrementing +1 day to get the next day's counts) over a large date range.  I'm aiming to output and append each day's query result counts
into a .csv file via a SQL Agent job, since this will take quite a while to complete.
    Would using the following as an example template...
    INSERT INTO OPENROWSET('Microsoft.ACE.OLEDB.12.0','Text;Database=D:\;HDR=YES;FMT=Delimited','SELECT * FROM [FileName.csv]')
    SELECT Field1, Field2, Field3
    FROM DatabaseName
    ...be the method or something else?
If this is a good approach, I've tried running it but get the following error:
    Msg 7357, Level 16, State 2, Line 76
    Cannot process the object "SELECT * FROM [FileName.csv]". The OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "(null)" indicates that either the object has no columns or the current user does not have permissions
    on that object.
    Thanks in advance.

    Hi Techresearch7777777,
    The error in your post says that the file FileName.csv has to be created with the column names in the first row. Like:
    Field1,Field2,Field3
Alternatively, you can create a schema.ini file in the same folder:
     [FileName.csv]
     Format=CSVDelimited
     ColNameHeader=False
     Col1=Field1 [DataType]
     Col2=Field2 [DataType]
     Col3=Field3 [DataType]
For the [DataType], you can refer to Schema.ini File (Text File Driver).
    If you have any question, feel free to let me know.
    Eric Zhang
    TechNet Community Support

  • Export SQL View to Flat File with UTF-8 Encoding

I've set up a package in SSIS to export a SQL view to a flat file and it's working fine.  I now need to make that flat file UTF-8 encoded.  The package executes but still shows the files as ANSI encoded.
    My package consists of a Source (SQL View) -> Derived Column (casts the fields to DT_WSTR) -> Destination Flat File (Set to output UTF-8 file).
    I don't get any errors to help me troubleshoot further.  I'm running SQL Server 2005 SP2.

    Unless there is a Byte-Order-Marker (BOM - hex file prefix: EF BB BF) at the beginning of the file, and unless your data contains non-ASCII characters, I'm unsure there is a technical difference in the files, Paul.
    That is, even if the file is "encoded" UTF-8, if your data is only ASCII values (decimal values 0-127, hex 00-7F), UTF-8 doesn't really serve a purpose over ANSI encoding.  Now if you're looking for UTF-8 with specifically the BOM included, and your data is all standard ASCII, the Flat File Connection Manager can't do that, it seems.
    What the flat file connection manager is doing correctly though, is encoding values that are over decimal 127/hex 7F in UTF-8 when the encoding of the connection manager is set to 65001 (UTF-8).
    Example:
    Input data built with a script component as a source (code at the bottom of this post) and with only one WSTR output column hooked to a flat file destination component:
a string containing only the character with decimal value 225 (Latin small letter a with acute, á)
Encoding set to ANSI 1252 looks like:
E1 0D 0A (which is the ANSI encoding of the character with decimal value 225 (E1) plus a CR-LF (0D 0A))
Encoding set to UTF-8 65001 looks like:
C3 A1 0D 0A (which is the UTF-8 encoding of the character with decimal value 225 (C3 A1) plus a CR-LF (0D 0A))
    Note that for values over decimal 127, UTF-8 takes at least two bytes and up to four for the remaining values available.
    So, I'm comfortable now, after sitting down and going through this, that the flat file connection manager is working correctly, unless you need a BOM.
Imports System
Imports System.Data
Imports System.Math
Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper
Imports Microsoft.SqlServer.Dts.Runtime.Wrapper

Public Class ScriptMain
    Inherits UserComponent

    Public Overrides Sub CreateNewOutputRows()
        Output0Buffer.AddRow()
        Output0Buffer.col1 = ChrW(225)
    End Sub

End Class
    Phil

  • Help Export TABLE Records into Flat File with INSERTs - error

    Hi,
When I try to run this procedure I get this error:
    ORA-00932:inconsistent datatypes: expected - got -
    Can anybody tell me why?
    Thanks
    CREATE OR REPLACE PROCEDURE generate_stmt(prm_table_name IN VARCHAR2,
    prm_where_clause IN VARCHAR2,
    prm_output_folder IN VARCHAR2,
    prm_output_file IN VARCHAR2) IS
    TYPE ref_cols IS REF CURSOR;
    mmy_ref_cols ref_cols;
    mmy_column_name VARCHAR2(100);
    mmy_column_data_type VARCHAR2(1);
    mmy_col_string VARCHAR2(32767);
    mmy_query_col_string VARCHAR2(32767);
    V_FILE_HNDL UTL_FILE.file_type;
    begin
    OPEN mmy_ref_cols FOR
    SELECT LOWER(column_name) column_name
    FROM user_tab_columns
    WHERE table_name = UPPER(prm_table_name)
    ORDER BY column_id;
    LOOP
    FETCH mmy_ref_cols
    INTO mmy_column_name;
    EXIT WHEN mmy_ref_cols%NOTFOUND;
    mmy_col_string := mmy_col_string || mmy_column_name || ', ';
    mmy_query_col_string := mmy_query_col_string || ' ' || mmy_column_name || ',';
    END LOOP;
    CLOSE mmy_ref_cols;
    V_FILE_HNDL := UTL_FILE.FOPEN('TEST','TESST.TXT', 'W');
    mmy_col_string := 'INSERT INTO ' || LOWER(prm_table_name) || ' (' ||
    CHR(10) || CHR(9) || CHR(9) || mmy_col_string;
    mmy_col_string := RTRIM(mmy_col_string, ', ');
    mmy_col_string := mmy_col_string || ')' || CHR(10) || 'VALUES ( ' ||
    CHR(9);
    mmy_query_col_string := RTRIM(mmy_query_col_string,
    ' || ' || '''' || ',' || '''' || ' || ');
    dbms_output.put_line(mmy_column_name);
    OPEN mmy_ref_cols
    FOR ' SELECT ' || mmy_query_col_string ||
    ' FROM ' || prm_table_name ||
    ' ' || prm_where_clause;
    loop
    FETCH mmy_ref_cols
    INTO mmy_query_col_string;
    EXIT WHEN mmy_ref_cols%NOTFOUND;
    mmy_query_col_string := mmy_query_col_string || ');';
    UTL_FILE.PUT_LINE(V_FILE_HNDL, mmy_col_string);
    UTL_FILE.PUT_LINE(V_FILE_HNDL, mmy_query_col_string);
    end loop;
    end;

    Buddy,
    Try this..
    CREATE OR REPLACE PROCEDURE generate_stmt
    (prm_table_name IN VARCHAR2,
    prm_where_clause IN VARCHAR2,
    prm_output_folder IN VARCHAR2,
    prm_output_file IN VARCHAR2) IS
    TYPE ref_cols IS REF CURSOR;
    mmy_ref_cols ref_cols;
    mmy_column_name VARCHAR2(100);
    mmy_column_data_type VARCHAR2(1);
    mmy_col_string VARCHAR2(32767);
    mmy_query_col_string VARCHAR2(32767);
    V_FILE_HNDL UTL_FILE.file_type;
    begin
    OPEN mmy_ref_cols FOR
    SELECT LOWER(column_name) column_name
    FROM user_tab_columns
    WHERE table_name = UPPER(prm_table_name)
    ORDER BY column_id;
    LOOP
    FETCH mmy_ref_cols
    INTO mmy_column_name;
    EXIT WHEN mmy_ref_cols%NOTFOUND;
    mmy_col_string := mmy_col_string || mmy_column_name || ', ';
    mmy_query_col_string := mmy_query_col_string || ' ' || mmy_column_name || ',';
    END LOOP;
    CLOSE mmy_ref_cols;
    mmy_col_string := 'INSERT INTO ' || LOWER(prm_table_name) || ' (' ||CHR(10) || CHR(9) || CHR(9) || mmy_col_string;
    mmy_col_string := RTRIM(mmy_col_string, ', ');
    mmy_col_string := mmy_col_string || ')' || CHR(10) || 'VALUES ( ' ||CHR(9);
    mmy_query_col_string := RTRIM(mmy_query_col_string,' || ' || '''' || ',' || '''' || ' || ');
    V_FILE_HNDL := UTL_FILE.FOPEN('TEST','TESST.TXT', 'W');
    OPEN mmy_ref_cols FOR 'SELECT ' || mmy_query_col_string ||' FROM ' || prm_table_name ||' ' || prm_where_clause;
    loop
    FETCH mmy_ref_cols INTO mmy_query_col_string;
    EXIT WHEN mmy_ref_cols%NOTFOUND;
    mmy_query_col_string := mmy_query_col_string || ');';
    UTL_FILE.PUT_LINE(V_FILE_HNDL, mmy_col_string);
    UTL_FILE.PUT_LINE(V_FILE_HNDL, mmy_query_col_string);
    end loop;
    UTL_FILE.FCLOSE(V_FILE_HNDL);
    END;
This would work only for a table with a single column.
Look at the line below:
FETCH mmy_ref_cols INTO mmy_query_col_string;
mmy_query_col_string is declared as a string, so it can hold a single value only. That's why, when you try this block on a table with more than one column, mmy_query_col_string would have to hold a full row of data, which it cannot.
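One way around that limitation, as a rough and untested sketch, is to build the select list as a single concatenated expression so the dynamic query always returns exactly one VARCHAR2 per row. It keeps the TEST directory and TESST.TXT file name from the original post, assumes Oracle 11gR2 or later for LISTAGG, and writes plain comma-separated values instead of full INSERT statements:
CREATE OR REPLACE PROCEDURE dump_table_rows (
   prm_table_name   IN VARCHAR2,
   prm_where_clause IN VARCHAR2 DEFAULT NULL) IS
   v_select_list VARCHAR2(32767);
   v_line        VARCHAR2(32767);
   v_file        UTL_FILE.file_type;
   v_cur         SYS_REFCURSOR;
BEGIN
   -- build "col1 || ',' || col2 || ..." so every fetched row is one string
   SELECT LISTAGG(LOWER(column_name), ' || '','' || ')
             WITHIN GROUP (ORDER BY column_id)
     INTO v_select_list
     FROM user_tab_columns
    WHERE table_name = UPPER(prm_table_name);
   v_file := UTL_FILE.fopen('TEST', 'TESST.TXT', 'W');
   OPEN v_cur FOR 'SELECT ' || v_select_list
               || ' FROM ' || prm_table_name
               || ' ' || prm_where_clause;
   LOOP
      FETCH v_cur INTO v_line;
      EXIT WHEN v_cur%NOTFOUND;
      UTL_FILE.put_line(v_file, v_line);
   END LOOP;
   CLOSE v_cur;
   UTL_FILE.fclose(v_file);
END dump_table_rows;
/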
    Good luck!!
    Bhagat

• Using SSIS to import multiple flat files with different mappings

I have a scenario for importing files:
20 different flat file sources, with an XML mapping document stored in a table.
Get the files from that path (I have the path and file name extension in a table).
I have to map each file dynamically based on its file name extension,
and create 20 different staging tables dynamically based on the mapping.
Kindly help with this.

    Hi Karthick,
As Arthur said, if you don't want to hard code the data flow, you need to read the XML from the SQL Server table and use a Script Task to implement the dynamic column mapping. To read XML from a SQL table, you can refer to:
    http://blog.sqlauthority.com/2009/02/13/sql-server-simple-example-of-reading-xml-file-using-t-sql/ 
In the Script Task, we need to parse the first row of the flat file to get the column information and save it to a .NET variable. Then, the variable will be used to create the query that creates the SQL table. For more information, please see:
    http://www.citagus.com/citagus/blog/importing-from-flat-file-with-dynamic-columns/ 
    Regards,
    Mike Yin
    TechNet Community Support

  • SSIS project - read multiple flat files with different formats

Hi all,
I need to import multiple flat files with different formats into different tables of a SQL Server database and am not able to figure out the best way to do so in SSIS.
Please advise on the possible methods in SSIS and, if possible, a process that can be dynamic, as file names or columns might change in the future.

    Hi AK1987,
To import flat files with dynamic columns, we can use a Script Task inside a Foreach Loop Container to parse the first row of the flat file to get the column names and save them into a .NET variable. Then we can create a “Create Table” script based on this
variable and store the script in an SSIS package variable. After that, we create a staging table based on the package variable and load the flat file data into the staging table. Eventually, we load data from the staging table into the destination table. For
the detailed steps, please walk through the following blog:
    http://www.citagus.com/citagus/blog/importing-from-flat-file-with-dynamic-columns/ 
    Regards,
    Mike Yin
    TechNet Community Support

  • Too many flat files with different formats.

    Hi gurus,
I have a headache of a problem, which is as follows:
We will upload data from flat files, but there are about 100 .xls files, each with a different format,
and as you know the datasource of the flat file must correspond to every field of the flat file.
Since there are 100 flat files, if we do not optimise anything we will have to create about 100 transformations and 100 datasources to meet our requirements,
but that seems impossible.
Is there any good idea to decrease the number of transformations?

    Hi AK1987,
To import flat files with dynamic columns, we can use a Script Task inside a Foreach Loop Container to parse the first row of the flat file to get the column names and save them into a .NET variable. Then we can create a “Create Table” script based on this
variable and store the script in an SSIS package variable. After that, we create a staging table based on the package variable and load the flat file data into the staging table. Eventually, we load data from the staging table into the destination table. For
the detailed steps, please walk through the following blog:
    http://www.citagus.com/citagus/blog/importing-from-flat-file-with-dynamic-columns/ 
    Regards,
    Mike Yin
    TechNet Community Support

  • How to export data from DB to flat file with ODI?

We want to export DB2 tables to flat files.
Can anyone tell me how to do this with ODI?
Please give me a simple example or the steps involved.
    Thank you.

There are two ways:
either use IKM SQL to File Append
or
OdiSqlUnload.
For OdiSqlUnload you can use this technique to provide the connection parameters: http://odiexperts.com/?p=1985

  • Export Report to flat file with spaces

I have a report that is several columns wide. It queries data from our SQL-based accounting package to create a flat file padded with spaces (it must be accurate). When we go to export the file we don't get all of the columns. Any ideas?

    Hi,
    Are you exporting to CSV or Excel File?
    Export to Excel.
    Bashir Awan

  • Csv-export with dynamic filename ?

    I'm using the report template "export: CSV" to export data to a csv-file. It works fine with a static file name. Now, I would like to have a dynamic file name for the csv-file, e.g. "ExportData_20070209.csv".
    Any ideas ?

    Ran,
This is a procedure which will export the result of any query into a flat file in your directory:
    PROCEDURE export_rep_to_bfile (
          p_query       IN   VARCHAR2,
          p_directory   IN   VARCHAR2,
          p_filename    IN   VARCHAR2 DEFAULT 'export_csv.csv',
          p_delimiter   IN   VARCHAR2 DEFAULT ';',
      p_coll_name   IN   VARCHAR2 DEFAULT 'MY_EXP_COLL')
   IS
          v_file            UTL_FILE.file_type;
          v_column_list     DBMS_SQL.desc_tab;
          v_column_cursor   PLS_INTEGER;
          v_number_of_col   PLS_INTEGER;
          v_col_names       VARCHAR2 (4000);
          v_line            VARCHAR2 (4000);
          v_delimiter       VARCHAR2 (100);
          v_error           VARCHAR2 (4000);
          v_code            VARCHAR2 (4000);
       BEGIN
          -- Get the Source Table Columns
          v_column_cursor := DBMS_SQL.open_cursor;
          DBMS_SQL.parse (v_column_cursor, p_query, DBMS_SQL.native);
          DBMS_SQL.describe_columns (v_column_cursor,
                                     v_number_of_col,
                                     v_column_list);
          DBMS_SQL.close_cursor (v_column_cursor);
          FOR i IN v_column_list.FIRST .. v_column_list.LAST
          LOOP
             v_col_names :=
                       v_col_names || p_delimiter
                       || (v_column_list (i).col_name);
          END LOOP;
          v_col_names := LTRIM (v_col_names, ';');
          v_file := UTL_FILE.fopen (p_directory, p_filename, 'W');
          UTL_FILE.put_line (v_file, v_col_names);
          --for ApEx when used outside of ApEx
          IF htmldb_collection.collection_exists (p_collection_name      => p_coll_name)
          THEN
             htmldb_collection.delete_collection
                                                (p_collection_name      => p_coll_name);
          END IF;
          htmldb_collection.create_collection_from_query
                                                (p_collection_name      => p_coll_name,
                                             p_query                => p_query);
          COMMIT;
          v_number_of_col := 50 - v_number_of_col;
          FOR i IN 1 .. v_number_of_col
          LOOP
             v_delimiter := v_delimiter || p_delimiter;
          END LOOP;
          FOR c IN (SELECT c001, c002, c003, c004, c005, c006, c007, c008, c009,
                           c010, c011, c012, c013, c014, c015, c016, c017, c018,
                           c019, c020, c021, c022, c023, c024, c025, c026, c027,
                           c028, c029, c030, c031, c032, c033, c034, c035, c036,
                           c037, c038, c039, c040, c041, c042, c043, c044, c045,
                           c046, c047, c048, c049, c050
                      FROM htmldb_collections
                     WHERE collection_name = p_coll_name)
          LOOP
             v_line :=
                RTRIM (   c.c001
                       || p_delimiter
                       || c.c002
                       || p_delimiter
                       || c.c003
                       || p_delimiter
                       || c.c004
                       || p_delimiter
                       || c.c005
                       || p_delimiter
                       || c.c006
                       || p_delimiter
                       || c.c007
                       || p_delimiter
                       || c.c008
                       || p_delimiter
                       || c.c009
                       || p_delimiter
                       || c.c010
                       || p_delimiter
                       || c.c011
                       || p_delimiter
                       || c.c012
                       || p_delimiter
                       || c.c013
                       || p_delimiter
                       || c.c014
                       || p_delimiter
                       || c.c015
                       || p_delimiter
                       || c.c016
                       || p_delimiter
                       || c.c017
                       || p_delimiter
                       || c.c018
                       || p_delimiter
                       || c.c019
                       || p_delimiter
                       || c.c020
                       || p_delimiter
                       || c.c021
                       || p_delimiter
                       || c.c022
                       || p_delimiter
                       || c.c023
                       || p_delimiter
                       || c.c024
                       || p_delimiter
                       || c.c025
                       || p_delimiter
                       || c.c026
                       || p_delimiter
                       || c.c027
                       || p_delimiter
                       || c.c028
                       || p_delimiter
                       || c.c029
                       || p_delimiter
                       || c.c030
                       || p_delimiter
                       || c.c031
                       || p_delimiter
                       || c.c032
                       || p_delimiter
                       || c.c033
                       || p_delimiter
                       || c.c034
                       || p_delimiter
                       || c.c035
                       || p_delimiter
                       || c.c036
                       || p_delimiter
                       || c.c037
                       || p_delimiter
                       || c.c038
                       || p_delimiter
                       || c.c039
                       || p_delimiter
                       || c.c040
                       || p_delimiter
                       || c.c041
                       || p_delimiter
                       || c.c042
                       || p_delimiter
                       || c.c043
                       || p_delimiter
                       || c.c044
                       || p_delimiter
                       || c.c045
                       || p_delimiter
                       || c.c046
                       || p_delimiter
                       || c.c047
                       || p_delimiter
                       || c.c048
                       || p_delimiter
                       || c.c049
                       || p_delimiter
                       || c.c050,
                       v_delimiter);
             UTL_FILE.put_line (v_file, v_line);
          END LOOP;
          UTL_FILE.fclose (v_file);
          IF htmldb_collection.collection_exists (p_collection_name      => p_coll_name)
          THEN
             htmldb_collection.delete_collection
                                                (p_collection_name      => p_coll_name);
          END IF;
          COMMIT;
       EXCEPTION
          WHEN OTHERS
          THEN
             IF DBMS_SQL.is_open (v_column_cursor)
             THEN
                DBMS_SQL.close_cursor (v_column_cursor);
             END IF;
             v_error := SQLERRM;
             v_code := SQLCODE;
             raise_application_error (-20001,
                                         'Your query is invalid!'
                                      || CHR (10)
                                      || 'SQL_ERROR: '
                                      || v_error
                                      || CHR (10)
                                      || 'SQL_CODE: '
                                      || v_code
                                      || CHR (10)
                                      || 'Please correct and try again.'
                                      || CHR (10));
   END export_rep_to_bfile;
You may adjust this to get a report query by doing the following:
    DECLARE
       x   VARCHAR2 (4000);
    BEGIN
       SELECT region_source
         INTO x
         FROM apex_application_page_regions
        WHERE region_id = TO_NUMBER (LTRIM (p_region, 'R'))
          AND page_id = p_page_id
          AND application_id = p_app_id;
END;
...which will get your region query.
    ...and there you go.
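To get the dynamic file name asked about, the name can simply be built at call time. A minimal, untested call, assuming a directory object named EXPORT_DIR and using an illustrative query:
BEGIN
   export_rep_to_bfile (
      p_query     => 'SELECT * FROM emp',
      p_directory => 'EXPORT_DIR',
      p_filename  => 'ExportData_' || TO_CHAR (SYSDATE, 'YYYYMMDD') || '.csv');
END;
/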
    Denes Kubicek

  • Exporting R3 tables into Flat Files for BPC

    Dear BPC experts,
    I understand currently the way for BPC to extract data from R3 is via flat files. I have the following queries:
    1) What exactly are the T codes and steps required to export R3 tables into flat files (without going through the OpenHub in BI7)? Can this process be automated?
    2) Is Data Manager of BPC equivalent to SSIS (Integration Services) of SQL Server?
    Please advise. Thanks!!
    SJ

    Hi Soong Jeng,
I would take a look at the existing BI Extractors for the answer to Q1; I am working on finishing up an HTG regarding this. Look for it very soon.
    Here is the code to dump out data from a BI Extractor directly from ERP.
    You need dev permissions in your ERP system and access to an app server folder. ...Good Luck
    *& Report  Z_EXTRACTOR_TO_FILE                                         *
    report  z_extractor_to_file                     .
    type-pools:
      rsaot.
    parameters:
      p_osrce type roosource-oltpsource,
      p_filnm type rlgrap-filename,
      p_maxsz type rsiodynp4-maxsize default 100,
      p_maxfc type rsiodynp4-calls default 10,
      p_updmd type rsiodynp4-updmode default 'F'.
    data:
      l_lines_read type sy-tabix,
      ls_select type rsselect,
      lt_select type table of rsselect,
      ls_field type rsfieldsel,
      lt_field type table of rsfieldsel,
      l_quiet type rois-genflag value 'X',
      l_readonly type rsiodynp4-readonly value 'X',
      l_desc_type type char1 value 'M',
      lr_data type ref to data,
      lt_oltpsource type rsaot_s_osource,
      l_req_no type rsiodynp4-requnr,
      l_debugmode type rsiodynp4-debugmode,
      l_genmode type rois-genflag,
      l_columns type i,
      l_temp_char type char40,
      l_filename    like rlgrap-filename,
      wa_x030l      like x030l,
      tb_dfies      type standard table of dfies,
      wa_dfies      type dfies,
      begin of tb_flditab occurs 0,
    *   field description
        fldname(40)  type c,
      end of tb_flditab,
      ls_flditab like line of tb_flditab,
      l_file type string.
    field-symbols:
      <lt_data> type standard table,
      <ls_data> type any,
      <ls_field> type any.
    call function 'RSA1_SINGLE_OLTPSOURCE_GET'
      exporting
        i_oltpsource   = p_osrce
      importing
        e_s_oltpsource = lt_oltpsource
      exceptions
        no_authority   = 1
        not_exist      = 2
        inconsistent   = 3
        others         = 4.
    if sy-subrc <> 0.
    * ERROR
    endif.
    create data lr_data type standard table of (lt_oltpsource-exstruct).
    assign lr_data->* to <lt_data>.
    call function 'RSFH_GET_DATA_SIMPLE'
      exporting
        i_requnr                     = l_req_no
        i_osource                    = p_osrce
        i_maxsize                    = p_maxsz
        i_maxfetch                   = p_maxfc
        i_updmode                    = p_updmd
        i_debugmode                  = l_debugmode
        i_abapmemory                 = l_genmode
        i_quiet                      = l_quiet
        i_read_only                  = l_readonly
      importing
        e_lines_read                 = l_lines_read
      tables
        i_t_select                   = lt_select
        i_t_field                    = lt_field
        e_t_data                     = <lt_data>
      exceptions
        generation_error             = 1
        interface_table_error        = 2
        metadata_error               = 3
        error_passed_to_mess_handler = 4
        no_authority                 = 5
        others                       = 6.
    * get table/structure field info
    call function 'GET_FIELDTAB'
      exporting
        langu               = sy-langu
        only                = space
        tabname             = lt_oltpsource-exstruct
        withtext            = 'X'
      importing
        header              = wa_x030l
      tables
        fieldtab            = tb_dfies
      exceptions
        internal_error      = 01
        no_texts_found      = 02
        table_has_no_fields = 03
        table_not_activ     = 04.
    * check result
    case sy-subrc.
      when 0.
    *      copy fieldnames
        loop at tb_dfies into wa_dfies.
          case l_desc_type.
            when 'F'.
              tb_flditab-fldname = wa_dfies-fieldname.
            when 'S'.
              tb_flditab-fldname = wa_dfies-scrtext_s.
            when 'M'.
              tb_flditab-fldname = wa_dfies-scrtext_m.
            when 'L'.
              tb_flditab-fldname = wa_dfies-scrtext_l.
            when others.
    *         use fieldname
              tb_flditab-fldname = wa_dfies-fieldname.
          endcase.
          append tb_flditab.
    *        clear variables
          clear: wa_dfies.
        endloop.
      when others.
        message id sy-msgid type sy-msgty number sy-msgno
        with  sy-subrc raising error_get_dictionary_info.
    endcase.
    describe table tb_flditab lines l_columns.
    " MOVE DATA TO THE APPLICATION SERVER
    open dataset p_filnm for output in text mode encoding utf-8
      with windows linefeed.
    data i type i.
    loop at <lt_data> assigning <ls_data>.
      loop at tb_flditab into ls_flditab.
        i = sy-tabix.
        assign component i of structure <ls_data> to <ls_field>.
        l_temp_char = <ls_field>.
        if i eq 1.
          l_file = l_temp_char.
        else.
          concatenate l_file ',' l_temp_char  into l_file.
        endif.
      endloop.
      transfer l_file to p_filnm.
      clear l_file.
    endloop.
    close dataset p_filnm.
    Cheers,
    Scott

  • Exporting Query Result to OpenOffice

    Hello,
We would like to export a query result from a Web Application to a file which can be opened in OpenOffice.
Exporting data in XLS format doesn't work because the XLS file is created with MS VML markup, which can't be read by OpenOffice.
Has someone already dealt with this kind of issue?
How did you overcome this problem?
    Thanks in advance for your answer
    Laurent

    Hi Laurent,
Can you try saving the file as .csv
and opening it in OpenOffice?
Hope this helps
    PV

  • How to create flat file with fixed lenght records

I need help to export an Oracle table to a flat file with fixed-length records and without column separators.
The fixed length is the most important requirement.
My table has 50 columns of varchar, date and number types.
Date and number columns may be empty, null or populated.
    [email protected]

    Hi,
    You can use this trick:
    SQL>desc t
    Name                                      Null?    Type
    NAME                                               VARCHAR2(20)
    SEX                                                VARCHAR2(1)
    SQL>SELECT LENGTH(LPAD(NAME,20,' ')||LPAD(SEX,1,' ')), LPAD(NAME,20,' ')||LPAD(SEX,1,' ') FROM T;
    LENGTH(LPAD(NAME,20,'')||LPAD(SEX,1,'')) LPAD(NAME,20,'')||LPA
                                          21                    aF
                                          21                    BM
                                          21                    CF
                                          21                    DM
    4 rows selected.
    SQL>SELECT *  FROM t;
    NAME                 S
    a                    F
    B                    M
    C                    F
    D                    M
4 rows selected.
Regards
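Building on that trick, here is a sketch for mixed column types (untested; the employees table and its columns are made-up names). RPAD left-justifies character data, while NVL and TO_CHAR keep NULL dates and numbers at the same fixed width:
SELECT RPAD(NVL(name, ' '), 20)
    || RPAD(NVL(TO_CHAR(hire_date, 'YYYYMMDD'), ' '), 8)
    || LPAD(NVL(TO_CHAR(salary, 'FM9999990.00'), ' '), 12) AS fixed_line
  FROM employees;
Spooled with heading off and pagesize 0, every line in the file then has the same length, with no column separator.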
