Process CSV files dynamically

Hi all,
I need to process CSV files dynamically, i.e. not split each line's values into variables in a fixed order, but determine the mapping dynamically from a header line in the CSV file.
For instance, if the column order in the CSV file changes, the program should fill the corresponding variables (ls_upload-dcu, ls_upload-item, ls_load-subitem) dynamically, without the code having to change.
If anyone has a solution, please let me know.

Try to use the variants concept.

Similar Messages

  • How to update csv file dynamically  from Catalog manager

    Hi - In OBIEE we generate report statistics into CSV files from Catalog Manager, but this should be an automatic or dynamic process. That is, whenever the catalog changes, that information should be captured into a CSV file automatically. To achieve this we would need a script or batch job, etc.
    Your help is highly appreciated.

    Yes we can achieve this by enabling Usage Tracking.
    Create a report with all the columns you need and schedule it using Agents with some time frame, say end of day every day. There you can see all the changes made to the catalog.
    Do let me know if you need any help to achieve this.
    Mark if helps,

  • Error while processing csv file

    I get these messages while processing a CSV file to load its content into the database:
    [SSIS.Pipeline] Warning: Warning: Could not open global shared memory to communicate with performance DLL;
     data flow performance counters are not available.  To resolve, run this package as an administrator, or on the system's console.
    [SSIS.Pipeline] Warning: The output column "Copy of Column 13" (842) on output "Data Conversion Output" (798) and component
     "Data Conversion" (796) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.
    Can someone please help me with this?
    With Regards
    litu Here

    Hello Litu,
    Are you using OLEDB Source and Destination?
    Can you change the source and destination provider to ADO.NET and check whether the issue still persists?
    Mark this post as "Answered" if this addresses your question. 
    Regards, Don Rohan [MSFT]

  • External Table which can handle appending multiple csv files dynamic

    I need an external table which can handle appending multiple csv files' values.
    But the problem I am having is that the number of CSV files is not fixed.
    I can have between 2 and 6-7 files, suffixed with the current date. Let's say they will be like my_file1_aug_08_1.csv, my_file1_aug_08_2.csv, my_file1_aug_08_3.csv and so on.
    I could do it by hardcoding, as below, if I knew the number of files, but unfortunately the number is not fixed, so I need something dynamic, such as a wildcard search on the file pattern.
    CREATE TABLE my_et_tbl
    ( my_field1 varchar2(4000),
      my_field2 varchar2(4000) )
    ORGANIZATION EXTERNAL
    ( TYPE ORACLE_LOADER
      DEFAULT DIRECTORY my_et_dir
      ACCESS PARAMETERS ( RECORDS DELIMITED BY NEWLINE
                          FIELDS TERMINATED BY ',' )
      LOCATION (UTL_DIR:'my_file2_5_aug_08.csv','my_file2_5_aug_08.csv') )
    NOMONITORING;
    Please advise me with your ideas. Thanks.

    Well, you could do it dynamically by constructing location value:
    SQL> CREATE TABLE emp_load
      2      (
      3       employee_number      CHAR(5),
      4       employee_dob         CHAR(20),
      5       employee_last_name   CHAR(20),
      6       employee_first_name  CHAR(15),
      7       employee_middle_name CHAR(15),
      8       employee_hire_date   DATE
      9      )
     10      ORGANIZATION EXTERNAL
     11      (
     12       TYPE ORACLE_LOADER
     13       DEFAULT DIRECTORY tmp
     14       ACCESS PARAMETERS
     15         (
     16          RECORDS DELIMITED BY NEWLINE
     17          FIELDS (
     18                  employee_number      CHAR(2),
     19                  employee_dob         CHAR(20),
     20                  employee_last_name   CHAR(18),
     21                  employee_first_name  CHAR(11),
     22                  employee_middle_name CHAR(11),
     23                  employee_hire_date   CHAR(10) date_format DATE mask "mm/dd/yyyy"
     24                 )
     25         )
     26       LOCATION ('info*.dat')
     27      )
     28  /
    Table created.
    SQL> select * from emp_load;
    select * from emp_load
    ERROR at line 1:
    ORA-29913: error in executing ODCIEXTTABLEOPEN callout
    SQL> set serveroutput on
    SQL> declare
      2      v_exists      boolean;
      3      v_file_length number;
      4      v_blocksize   number;
      5      v_stmt        varchar2(1000) := 'alter table emp_load location(';
      6      i             number := 1;
      7  begin
      8      loop
      9        utl_file.fgetattr(
    10                          'TMP',
    11                          'info' || i || '.dat',
    12                          v_exists,
    13                          v_file_length,
    14                          v_blocksize
    15                         );
    16        exit when not v_exists;
    17        v_stmt := v_stmt || '''info' || i || '.dat'',';
    18        i := i + 1;
    19      end loop;
    20      v_stmt := rtrim(v_stmt,',') || ')';
    21      dbms_output.put_line(v_stmt);
    22      execute immediate v_stmt;
    23  end;
    24  /
    alter table emp_load location('info1.dat','info2.dat')
    PL/SQL procedure successfully completed.
    SQL> select * from emp_load;
    56    november, 15, 1980   baker                mary            alice     0
    87    december, 20, 1970   roper                lisa            marie     0
    SQL>
    SY.
    P.S. Keep in mind that changing location will affect all sessions referencing external table.

  • Issue with processing .csv file

    I have a simple csv file (multiple rows) which needs to be picked up by the PI File adapter and then processed into a BAPi.
    I created a Data type 'Record' which has the column names. Then there is a message type using this particular data type, MT_SourceOrder. This message type is mapped to BAPI_SALESORDER_CREATEFROMDAT2. I have done the configuration in the receiver file adapter as well to accept the .csv file.
    Document Name: MT_SourceOrder
    Doc Namespace:....
    Recordset Name: Recordset
    Recordset Structure: Row,*
    Recordset Sequence: Ascending
    Recordsets per message: 1000
    Key Field Type: Ascending
    In the parameter section, the following information has been provided:
    Row.fieldNames     MerchantID,OrderNumber,OrderDate,Description,VendorSKU,MerchantSKU,Size,UnitPrice,UnitCost,ItemCount,ItemLineNumber,Quantity,Name,Address1,Address2,Address3,City,State,Country,Zip,Phone,ShipMethod,ServiceType,GiftMessage,Tax,Accountno,CustomerPO
    Row.fieldSeparator     ,
    Row.processConfiguration     FromConfiguration
    However, the mapping is still not working correctly.
    Can anyone please help?
    This is very urgent and all help is very much appreciated.
    Anuradha SenGupta.

    Hi Santosh,
    I have verified the content in the source payload in SXMB_MONI.
    <?xml version="1.0" encoding="utf-8"?>
    <ns:MT_SourceOrder xmlns:ns="">
              <Description>&quot;Callaway 60&quot;&quot; Women&apos;s Umbrella&quot;</Description>
              <Address1>15519 LOYALIST PKY</Address1>
              <Phone>(613)399-5615 x5615</Phone>
              <GiftMessage>PI Holding this cost-</GiftMessage>
    It looks as above. However, in the message mapping it doesn't work - when I do Display Queue on the source field it keeps showing the value as NULL.
    Please advise.

  • Process csv files in obiee

    Can OBIEE accept data directly from a CSV file for inclusion in the business layer?

    Create a DSN and point it to the location of the CSV file, then import it as a regular database.
    Each sheet shows up as a separate table.
    Mark if this helps.

  • How to read/write .CSV file into CLOB column in a table of Oracle 10g

    I have a requirement involving a table with two columns:
    create table emp_data (empid number, report clob)
    Here the REPORT column is of CLOB data type and is used to hold the data loaded from the .csv file.
    The requirements here are:
    1) How to load data from the .CSV file into the CLOB column, along with empid, using the DBMS_LOB utility.
    2) How to read the report column so that it returns all the columns present in the .CSV file (dynamically, because every CSV file may have a different number of columns) along with the primary key empid.
    eg: empid report_field1 report_field2
    1 x y
    Any help would be appreciated.

    If I understand you right, you want each row in your table to contain an emp_id and the complete text of a multi-record .csv file.
    It's not clear how you relate emp_id to the appropriate file to be read. Is the emp_id stored in the csv file?
    To read the file, you can use functions from UTL_FILE (as long as the file is in a directory accessible to the Oracle server):
    declare
        lt_report_clob    CLOB;
        l_max_line_length integer := 1024;   -- set as high as the longest line in your file
        l_infile          UTL_FILE.file_type;
        l_buffer          varchar2(1024);
        l_emp_id          report_table.emp_id%type := 123; -- not clear where emp_id comes from
        l_filename        varchar2(200) := 'my_file_name.csv';   -- get this from somewhere
    begin
       -- open the file; we assume an Oracle directory has already been created
        l_infile := utl_file.fopen('CSV_DIRECTORY', l_filename, 'r', l_max_line_length);
        -- initialise the empty clob
        dbms_lob.createtemporary(lt_report_clob, TRUE, DBMS_LOB.session);
        -- read the file line by line and append each line to the clob
        loop
            begin
                utl_file.get_line(l_infile, l_buffer);
                dbms_lob.append(lt_report_clob, l_buffer);
            exception
                when no_data_found then
                    exit;   -- end of file reached
            end;
        end loop;
        insert into report_table (emp_id, report)
        values (l_emp_id, lt_report_clob);
        -- free the temporary lob
        dbms_lob.freetemporary(lt_report_clob);
       -- close the file
        utl_file.fclose(l_infile);
    end;
    /
    This simple line-by-line approach is easy to understand, and gives you an opportunity (if you want) to take each line in the file and transform it (for example, you could transform it into a nested table, or into XML). However it can be rather slow if there are many records in the csv file - the lob append operation is not particularly efficient. I was able to improve the efficiency by caching the lines in a VARCHAR2 up to a maximum cache size, and only then appending to the LOB - see the three posts on my blog.
    There is at least one other possibility:
    - you could use DBMS_LOB.loadclobfromfile. I've not tried this before myself, but I think the procedure is described in the 9i documentation. This is likely to be faster than UTL_FILE (because it is all happening in the underlying DBMS_LOB package, possibly in a native way).
    That's all for now. I haven't yet answered your question on how to report data back out of the CLOB. I would like to know how you associate employees with files; what happens if there is > 1 file per employee, etc.
    Regards Nigel
    Edited by: nthomas on Mar 2, 2009 11:22 AM - don't forget to fclose the file...
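    For what it's worth, here is a minimal, untested sketch of that DBMS_LOB.LOADCLOBFROMFILE route, reusing the CSV_DIRECTORY directory object, the report_table table and the placeholder emp_id 123 from the snippet above (character-set handling is simply left at the package defaults):
    declare
        l_bfile       bfile := bfilename('CSV_DIRECTORY', 'my_file_name.csv');
        l_clob        clob;
        l_dest_offset integer := 1;
        l_src_offset  integer := 1;
        l_lang_ctx    integer := dbms_lob.default_lang_ctx;
        l_warning     integer;
    begin
        dbms_lob.createtemporary(l_clob, TRUE, DBMS_LOB.session);
        dbms_lob.fileopen(l_bfile, dbms_lob.file_readonly);
        -- copy the whole file into the CLOB in a single call
        dbms_lob.loadclobfromfile(
            dest_lob     => l_clob,
            src_bfile    => l_bfile,
            amount       => dbms_lob.lobmaxsize,
            dest_offset  => l_dest_offset,
            src_offset   => l_src_offset,
            bfile_csid   => dbms_lob.default_csid,
            lang_context => l_lang_ctx,
            warning      => l_warning);
        dbms_lob.fileclose(l_bfile);
        insert into report_table (emp_id, report)
        values (123, l_clob);   -- placeholder emp_id, as in the UTL_FILE example
        dbms_lob.freetemporary(l_clob);
    end;
    /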

  • Use .CSV files to UPDATE

    I have a requirement to process .CSV files and use the info to UPDATE an existing table. One line from the file matches a row in the existing table by a unique key. There is no need to keep the data from the .CSV files afterwards.
    I was planning to use a temporary table with an INSERT trigger that will perform the UPDATE, with ON COMMIT DELETE ROWS, and have sqlldr load the data in this temporary table.
    But I found out that sqlldr cannot load into temporary tables (SQL*Loader-280).
    What would be other options? The .CSV files are retrieved periodically (every 15 min), have all the same structure, their number can vary, and their filename is unique.
    Thank you in advance.

    SQL*Loader-280 "table %s is a temporary table"
    *Cause: The sqlldr utility does not load temporary tables. Note that if sqlldr did allow loading of temporary tables, the data would disappear after the load completed.
    *Action: Load the data into a non-temporary table.
    Can't you load the data into a non-temporary table and drop it after the update? A rough sketch of that approach is below.
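    For example - just a sketch, with made-up table and column names since the original post does not give the real structure - each file could be loaded by sqlldr into a permanent staging table, merged into the existing table by the unique key, and the staging table emptied afterwards:
    -- permanent staging table matching the CSV layout (names are illustrative)
    CREATE TABLE stg_csv_load (
        unique_key VARCHAR2(50),
        col1       VARCHAR2(100),
        col2       NUMBER
    );
    -- after each sqlldr run into stg_csv_load:
    MERGE INTO target_table t
    USING stg_csv_load s
    ON (t.unique_key = s.unique_key)
    WHEN MATCHED THEN UPDATE SET
        t.col1 = s.col1,
        t.col2 = s.col2;
    -- the CSV data is not needed afterwards, so empty the staging table for the next run
    TRUNCATE TABLE stg_csv_load;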

  • Dynamically create csv files and zip them

    Apex version - 4.2
    I have some data in the database. I want to generate multiple CSV files dynamically from that data, then zip the CSV files together to provide them to the end user.
    Can anybody suggest an approach that could be followed?
    Thanks in advance!

    Find some code to generate the CSV, for instance "Export and save result set in CSV format using UTL_FILE".
    Find some code to zip the files/BLOBs.
    Combine the two pieces; a rough sketch of the CSV-generation half follows below.
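    As an illustrative sketch of the first step only (EXPORT_DIR and the emp table are made up for the example), the CSV could be written with UTL_FILE like this; the zip step would then bundle the generated files with a PL/SQL zip package:
    declare
        l_file utl_file.file_type;
    begin
        -- EXPORT_DIR is an assumed directory object; emp is only a placeholder table
        l_file := utl_file.fopen('EXPORT_DIR', 'emp.csv', 'w', 32767);
        utl_file.put_line(l_file, 'EMPNO,ENAME,SAL');   -- header line
        for r in (select empno, ename, sal from emp) loop
            utl_file.put_line(l_file, r.empno || ',' || r.ename || ',' || r.sal);
        end loop;
        utl_file.fclose(l_file);
    end;
    /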

  • Dynamically creating oracle table with csv file as source

    We have a requirement to create a dynamic external table. Whenever the data or the number of columns changes in the CSV file, the table should be replaced with the current data and the current number of columns. As we are not very experienced in Oracle, please give us a clear solution. We have already tried some code, but are getting errors. The code is given below.
    Thank you.
    We executed this code after changing the schema name and table name; everything else remained the same.
    Assume the following:
    - Oracle User and Schema name is ALLEXPERTS
    - Database name is EXPERTS
    - The directory object is file_dir
    - CSV file directory is /export/home/log
    - The csv file name is ALLEXPERTS_CSV.log
    - The table name is all_experts_tbl
    1. Create a directory object in Oracle. The directory will point to the directory where the file located.
    conn sys/{password}@EXPERTS as sysdba;
    CREATE OR REPLACE DIRECTORY file_dir AS '/export/home/log';
    2. Grant the directory privilege to the user
    3. Create the table
    Connect as ALLEXPERTS user
    create table ALLEXPERTS.all_experts_tbl
    (txt_line varchar2(512))
    organization external
    (default directory file_dir
     access parameters (records delimited by newline
                        fields (txt_line char(512)))
     location ('ALLEXPERTS_CSV.log'));
    This will create a table that links the data to a file. Now you can treat this file as a regular table and use SELECT statements to retrieve the data.
    PL/SQL to create the data:
    DECLARE
      -- Setup the cursor over the raw external table
      CURSOR c_main IS
        SELECT txt_line
        FROM allexperts.all_experts_tbl;
      -- Declare variables
      l_delimiter_count  NUMBER;
      l_temp_counter     NUMBER := 1;
      l_current_row      VARCHAR2(4000);
      l_create_statement VARCHAR2(4000);
      l_insert_statement VARCHAR2(4000);
    BEGIN
      -- Get the first row and count the delimiters in it
      SELECT txt_line INTO l_current_row
      FROM allexperts.all_experts_tbl
      WHERE ROWNUM = 1;
      l_delimiter_count := LENGTH(l_current_row) - LENGTH(REPLACE(l_current_row, ',', ''));
      -- Create the table with the right number of columns (delimiters + 1)
      l_create_statement := 'CREATE TABLE csv_table (';
      WHILE l_temp_counter <= l_delimiter_count + 1 LOOP
        l_create_statement := l_create_statement || 'COL' || l_temp_counter || ' VARCHAR2(100)';
        IF l_temp_counter <= l_delimiter_count THEN
          l_create_statement := l_create_statement || ',';
        END IF;
        l_temp_counter := l_temp_counter + 1;
      END LOOP;
      l_create_statement := l_create_statement || ')';
      EXECUTE IMMEDIATE l_create_statement;
      -- Loop through all the records, parse them and insert into the table created above
      -- (assumes simple comma-separated values, no embedded commas or quotes, same field count per row)
      FOR rec IN c_main LOOP
        l_insert_statement := 'INSERT INTO csv_table VALUES (''' ||
                              REPLACE(rec.txt_line, ',', ''',''') || ''')';
        EXECUTE IMMEDIATE l_insert_statement;
      END LOOP;
    END;
    /

    The initial table creation shows errors and the procedure was created with compilation errors.
    After executing the CREATE TABLE I am getting the following errors:
    ERROR at line 1:
    ORA-29913: error in executing ODCIEXTTABLEOPEN callout
    ORA-29400: data cartridge error
    KUP-00554: error encountered while parsing access parameters
    KUP-01005: syntax error: found "identifier": expecting one of: "badfile,
    byteordermark, characterset, column, data, delimited, discardfile,
    disable_directory_link_check, exit, fields, fixed, load, logfile, language,
    nodiscardfile, nobadfile, nologfile, date_cache, processing, readsize, string,
    skip, territory, varia"
    KUP-01008: the bad identifier was: deli
    KUP-01007: at line 1 column 9
    ORA-06512: at "SYS.ORACLE_LOADER", line 19

  • How to call a SP with dynamic columns and output results into a .csv file via SSIS

    Hi folks, I have a challenging question here. I've created an SP called dbo.ResultsWithDynamicColumns that takes one parameter of CONVERT(DATE,GETDATE()). The uniqueness of this SP is that the result does not have fixed columns, as it is based on sales from previous
    days. For example, on the previous day customers purchased 20 products, but today 30 products have been purchased.
    Right now, in SSMS, I am able to execute this SP when supplying a parameter. What I want to achieve here is to automate this process, send the result as a .csv file, and SFTP it to a server.
    The SFTP part is kinda easy, as I can call WinSCP with a proper script to handle it. How do I export the result of a dynamic SP to a .CSV file?
    I've tried
    EXEC xp_cmdshell ' BCP " EXEC xxxx.[dbo].[ResultsWithDynamicColumns ]  @dateFrom = ''2014-01-21''"   queryout  "c:\path\xxxx.dat" -T -c'
    SSMS gives the following error as Error = [Microsoft][SQL Server Native Client 10.0]BCP host-files must contain at least one column
    any ideas?
    --Currently using Reporting Service 2000; Visual Studio .NET 2003; Visual Source Safe SSIS 2008 SSAS 2008, SVN --

    Hey Jakub, thanks - and I did see the #temp table issue in our 2008R2. I finally figured it out in a different way... I managed to modify this dynamic SP to output its results into
    a physical table. This table is dropped and recreated every time the SP gets executed. After that, I used an SSIS pkg to output this table
    to a .csv file destination.
    The downside is that if this table structure ever gets changed, this SSIS pkg will fail or not fully reflect the whole table. However, this won't happen often
    and I can live with that at this moment.
    --Currently using Reporting Service 2000; Visual Studio .NET 2003; Visual Source Safe SSIS 2008 SSAS 2008, SVN --

  • Dynamic CSV file name in target (Multiple workflow calling same dataflow with new global variable value)

    Hi there,
    I have multiple data flows where 90% of the processing is the same. The differences are in the source query WHERE clause and the target flat file.
    I used global variables to dynamically change the query WHERE clause easily, but I need help in dynamically changing the target flat file (CSV file).
    What I want to do is have multiple workflows, each of which first sets the global variable to a new value in a script box and then calls the same data flow.
    Please let me know if you have any solution or any idea which might point me in the direction of a solution.
    thank you,

    Hi Raj - in your content conversion for the line item, read the additional attribute as well.
    Change your source structure and add the additional field company_code.
    As you are already sending the filename for each line item using the UDF, you just need to modify your UDF to take another input, i.e. the Company Code.
    If your PI version is > 7.1, use a graphical variable to hold the filename,
    and while sending the company code information for every item just use the concat function to append the filename and company code.

  • Read a CSV file and dynamically generate the insert

    I have a requirement where there are multiple CSVs which need to be exported to a SQL table. So far, I am able to read the CSV file and generate the INSERT statement dynamically for selected columns; however, when the INSERT statement is passed as a parameter
    to $cmd.CommandText,
    the values are not evaluated.
    How do I evaluate the string in PowerShell?
    Import-Csv -Path $FileName.FullName | % {
        # Insert statement.
        $insert = "INSERT INTO $Tablename ($ReqColumns) Values ('"
        $lists = $ReqColumns.split(",")
        foreach ($l in $lists) {
            $valCols = $valCols + '$($_.' + $l + ')'',''' 
        }
        # Generate the values statement
        $insertStr = @("INSERT INTO $Tablename ($ReqColumns) Values ('$($DataCols))")
        # The above statement generates the following insert statement
        $cmd.CommandText = $insertStr   # does not evaluate the values
        # If the same statement is passed as below then it executes successfully
        # Execute Query
        $cmd.ExecuteNonQuery() | Out-Null
    }

    Hi Jyeragi,
    To convert the data to the SQL table format, please try this function out-sql:
    out-sql Powershell function - export pipeline contents to a new SQL Server table
    If I have any misunderstanding, please let me know.
    If you have any feedback on our support, please click here.
    Best Regards,
    TechNet Community Support

  • Save a csv file on local host, executing a DTP Open hub in a Process chain?

    Hi All,
    my client has asked me to check whether there is a way to save to the local host C:\ drive a .CSV file generated from an Open Hub executed by a DTP in a process chain.
    When I execute my DTP manually it works correctly and saves the .csv file to my C:\ directory.
    My client doesn't want to give the users authorization on RSA1 and wants to know whether there is a way to put the DTP in a process chain
    so that, when executing the process chain, every user gets the file in his own local C:\ directory.
    I tried this solution but it doesn't work; I get an error when executing the process chain:
    Runtime Errors         OBJECTS_OBJREF_NOT_ASSIGNED
    Except.                     CX_SY_REF_IS_INITIAL
    Error analysis
        An exception occurred that is explained in detail below.
        The exception, which is assigned to class 'CX_SY_REF_IS_INITIAL', was not
         caught in
        procedure "FILE_DELETE" "(METHOD)", nor was it propagated by a RAISING clause.
        Since the caller of the procedure could not have anticipated that the
        exception would occur, the current program is terminated.
        The reason for the exception is:
        You attempted to use a 'NULL' object reference (points to 'nothing')
        access a component (variable: "CL_GUI_FRONTEND_SERVICES=>HANDLE").
        An object reference must point to an object (an instance of a class)
        before it can be used to access components.
        Either the reference was never set or it was set to 'NULL' using the
        CLEAR statement.
    Can you give me some advice, please?
    Thanks to all.

    Hi Bilal,
    Unfortunately, DTPs belonging to Open Hubs which are targeted to a local workstation file can't be executed through a process chain.
    The only way of including such DTP in a process chain is changing the Open Hub so that it writes the output file in the application server.  Then, you can retrieve the file -through FTP or any other means- from the application server to the local workstation.
    Hope this helps.
    Kind regards,

  • Uploading & Processing of CSV file  fails in clustered env.

    We have a csv file which is uploaded to the weblogic application server, written
    to a temporary directory and then manipulated before being written to the database.
    This process works correctly in a single server environment but fails in a clustered
    environment.
    The file gets uploaded to the server into the temporary directory without problem.
    The processing starts. When running in a cluster the csv file is replicated
    to a temporary directory on the secondary server as well as the primary.
    The manipulation process is running but never finishes and the browser times out
    with the following message:
    Message from the NSAPI plugin:
    No backend server available for connection: timed out after 30 seconds.
    Build date/time: Jun 3 2002 12:27:28
    The server which is loading the file and processing it writes this to the log:
    17-Jul-03 15:13:12 BST> <Warning> <HTTP> <WebAppServletContext(1883426,fulfilment,/fulfilment)
    One of the getParameter family of methods called after reading from the ServletInputStream,
    not merging post parameters>.

    It doesn't make sense. Who is replicating the file? How long does it
    take to process the file? Which plugin are you using as the proxy?
    -- Prasad
