Possibilities to download TCURR table in BW and send it as CSV file to third party

Hi all,
I want to know how to accomplish the following:
I need to download the table TCURR with 2 exchange rate types and format it as a CSV file. Then I need to send it to a third-party tool using Open Hub.
Do I need to create an ODS to store the table download or is there a much simpler way to accomplish this?
Thanks,
Savita

Hi Savita,
I suggest you create the .csv file and store it on the application server (or an FTP server). The third party can then create a program to fetch the data from the server to their local system.
* Open the target file on the application server for writing
OPEN DATASET v_filename FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
* Write each pre-formatted CSV line to the file
LOOP AT IT_OUTPUT.
  TRANSFER IT_OUTPUT TO v_filename.
ENDLOOP.
CLOSE DATASET v_filename.
You can use that code to write the data to the server.
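For the TCURR extract itself, here is a minimal sketch (the two rate types 'M' and 'EURX', the field list and the target path are assumptions - adjust them to your case):
* Sketch: read two exchange rate types from TCURR and write them as
* CSV to the application server. Rate types and path are examples.
DATA: gt_tcurr TYPE STANDARD TABLE OF tcurr,
      gs_tcurr TYPE tcurr,
      gv_line  TYPE string,
      gv_rate  TYPE string,
      gv_file  TYPE string VALUE '/usr/sap/trans/tcurr.csv'.

SELECT * FROM tcurr INTO TABLE gt_tcurr
  WHERE kurst IN ('M', 'EURX').

OPEN DATASET gv_file FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
LOOP AT gt_tcurr INTO gs_tcurr.
  gv_rate = gs_tcurr-ukurs.           "convert the packed rate to text
  CONCATENATE gs_tcurr-kurst gs_tcurr-fcurr gs_tcurr-tcurr
              gs_tcurr-gdatu gv_rate
              INTO gv_line SEPARATED BY ','.
  TRANSFER gv_line TO gv_file.
ENDLOOP.
CLOSE DATASET gv_file.
Note that GDATU is stored as an inverted date (99999999 - YYYYMMDD); if the third party needs a readable date, convert it with CONVERT INVERTED-DATE before building the line.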
Hopefully it can help you a lot.
Regards,
Niel.

Similar Messages

  • Error while creating table from csv file

    I am getting an error while creating a table using 'Import Data' button for a csv file containing 22 columns and 8 rows. For primary key, I am using an existing column 'Line' and 'Not generated' options.
    ORA-20001: Excel load run ddl error: drop table "RESTORE" ORA-00942: table or view does not exist ORA-20001: Excel load run ddl error: create table "RESTORE" ( "LINE" NUMBER, "PHASE" VARCHAR2(30), "RDC_MEDIA_ID" VARCHAR2(30), "CLIENT_MEDIA_LABEL" VARCHAR2(30), "MEDIA_TYPE" VARCHAR2(30), "SIZE_GB" NUMBER, "RDC_IMG_HD_A" NUMBER, "START_TECH" VARCHAR2(30), "CREATE_DATE" VARCHAR2(30), "RDC_MEDIA_DEST" VARCHAR2(30), "POD" NUMBER, "TAPE" NUMBER, "ERRORS_YN" VA
    Any idea?


  • Read a csv file, fill a dynamic table and insert into a standard table

    Hi everybody,
    I have a problem here and I need your help:
I have to read a csv file and insert its data into a standard table.
1 - On the parameter screen I have to indicate the standard table and the csv file.
2 - I need to create a dynamic table, of the same type as the one I choose on the parameter screen.
    3 - Then I need to read the csv and put the data into this dynamic table.
    4 - Later I need to insert the data from the dynamic table into the standard table (the one on the parameter screen).
    How do I do this job? Do you have an example? Thanks.

Here is an example program which shows how to upload a csv file from the frontend into a dynamic internal table.  You can of course modify this to update your database table.
    report zrich_0002.
    type-pools: slis.
    field-symbols: <dyn_table> type standard table,
                   <dyn_wa>,
                   <dyn_field>.
    data: it_fldcat type lvc_t_fcat,
          wa_it_fldcat type lvc_s_fcat.
    type-pools : abap.
    data: new_table type ref to data,
          new_line  type ref to data.
    data: xcel type table of alsmex_tabline with header line.
selection-screen begin of block b1 with frame title text-001.
parameters: p_file type rlgrap-filename default 'c:\test.csv'.
    parameters: p_flds type i.
    selection-screen end of block b1.
    start-of-selection.
* Add X number of fields to the dynamic itab catalog
      do p_flds times.
        clear wa_it_fldcat.
        wa_it_fldcat-fieldname = sy-index.
        wa_it_fldcat-datatype = 'C'.
        wa_it_fldcat-inttype = 'C'.
        wa_it_fldcat-intlen = 10.
        append wa_it_fldcat to it_fldcat .
      enddo.
    * Create dynamic internal table and assign to FS
      call method cl_alv_table_create=>create_dynamic_table
                   exporting
                      it_fieldcatalog = it_fldcat
                   importing
                      ep_table        = new_table.
      assign new_table->* to <dyn_table>.
    * Create dynamic work area and assign to FS
      create data new_line like line of <dyn_table>.
      assign new_line->* to <dyn_wa>.
    * Upload the excel
      call function 'ALSM_EXCEL_TO_INTERNAL_TABLE'
           exporting
                filename                = p_file
                i_begin_col             = '1'
                i_begin_row             = '1'
                i_end_col               = '200'
                i_end_row               = '5000'
           tables
                intern                  = xcel
           exceptions
                inconsistent_parameters = 1
                upload_ole              = 2
                others                  = 3.
* Reformat into the dynamic internal table
      loop at xcel.
        assign component xcel-col of structure <dyn_wa> to <dyn_field>.
        if sy-subrc = 0.
         <dyn_field> = xcel-value.
        endif.
        at end of row.
          append <dyn_wa> to <dyn_table>.
          clear <dyn_wa>.
        endat.
      endloop.
    * Write out data from table.
      loop at <dyn_table> into <dyn_wa>.
        do.
          assign component  sy-index  of structure <dyn_wa> to <dyn_field>.
          if sy-subrc <> 0.
            exit.
          endif.
          if sy-index = 1.
            write:/ <dyn_field>.
          else.
            write: <dyn_field>.
          endif.
        enddo.
      endloop.
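For step 4 of the original question, the filled dynamic table can be written to the database table named on the selection screen with a dynamic INSERT. A sketch, assuming a parameter p_table holds the table name and the dynamic table was built with the target table's real field types rather than the generic CHAR(10) fields above:
* Sketch: dynamic insert into the table named at runtime. p_table is
* an assumed parameter; the structures must match for this to work.
  insert (p_table) from table <dyn_table> accepting duplicate keys.
  if sy-subrc = 0.
    commit work.
  else.
    rollback work.
  endif.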
Regards,
Rich Heilman

  • SSIS 2008 – Read roughly 50 CSV files from a folder, create SQL table from them dynamically, and dump data.

    Hello everyone,
    I’ve been assigned one requirement wherein I would like to read around 50 CSV files from a specified folder.
In step 1 I would like to create the schema for these files, meaning take the CSV files one by one and create a SQL table for each, if it does not exist at the destination.
    In step 2 I would like to append the data of these 50 CSV files into respective table.
    In step 3 I would like to purge data older than a given date.
    Please note, the data in these CSV files would be very bulky, I would like to know the best way to insert bulky data into SQL table.
    Also, in some of the CSV files, there will be 4 rows at the top of the file which have the header details/header rows.
According to my knowledge I will be asked to implement this on SSIS 2008, but I'm not 100% sure of it.
    So, please feel free to provide multiple approaches if we can achieve these requirements elegantly in newer versions like SSIS 2012.
    Any help would be much appreciated.
    Thanks,
    Ankit

    Hello Harry and Aamir,
    Thank you for the responses.
@Aamir, thank you for sharing the link. Yes, I'm going to use a Script Task to read the header columns of the CSV files, preparing one SSIS variable which will hold the SQL script that creates the required table (with an IF EXISTS check) inside the Script Task itself.
I will have an "Execute SQL Task" following the Script Task, and this will create the actual table for a CSV.
Both these components will be inside a Foreach Loop container and will execute all 50 CSV files one by one.
    Some points to be clarified,
1. In the bunch of these 50 CSV files there will be some exceptions for which we first need to purge the tables and then insert the data. Meaning for 2 files out of 50, we need to first clean the tables and then perform the data insert, while the rest of the 48 files should be appended on a daily basis.
    Can you please advise what is the best way to achieve this requirement? Where should we configure such exceptional cases for the package?
2. For some of the CSV files we would be having more than one file with the same name. Like out of 50, the 2nd file is divided into 10 different CSV files, so in total we're having 60 files wherein 10 out of 60 have repeated file names. How can we manage this criterion within the same loop? Do we need one more Foreach loop inside the parent one? What is the best way to achieve this requirement?
3. There will be another package, which will be used to purge data from the SQL tables. Unlike the above package, this package will not run on a daily basis. At some point we would like these 50 tables to be purged with an older-than criterion, say remove data older than 1st Jan 2015. What is the best way to achieve this requirement?
    Please know, I'm very new in SSIS world and would like to develop these packages for client using best package development practices.
    Any help would be greatly appreciated.
1. In the bunch of these 50 CSV files there will be some exceptions for which we first need to purge the tables and then insert the data. Meaning for 2 files out of 50, we need to first clean the tables and then perform the data insert, while the rest of the 48 files should be appended on a daily basis.
    Can you please advise what is the best way to achieve this requirement? Where should we configure such exceptional cases for the package?
How can you identify these files? Is it based on the file name, or is there some info in the file which indicates that it requires a purge? If yes, you can pick this information up during the file name or file data parsing step and set a boolean variable. Then in the control flow have a conditional precedence constraint which checks the boolean variable and, if set, executes an Execute SQL Task to do the purge (you can use TRUNCATE TABLE or DELETE FROM TableName statements).
2. For some of the CSV files we would be having more than one file with the same name. Like out of 50, the 2nd file is divided into 10 different CSV files, so in total we're having 60 files wherein 10 out of 60 have repeated file names. How can we manage this criterion within the same loop, do we need one more Foreach loop inside the parent one, what is the best way to achieve this requirement?
The best way to achieve this is to append a sequential value to the filename (maybe a timestamp) and then process the files in sequence. This can be done prior to the main loop, so that you can use the same loop to process these duplicate filenames as well. The best option would be to use the file creation date attribute value, so that the files get processed in the right sequence. You can use a script task to get this for each file, as below:
http://microsoft-ssis.blogspot.com/2011/03/get-file-properties-with-ssis.html
3. There will be another package, which will be used to purge data from the SQL tables. Unlike the above package, this package will not run on a daily basis. At some point we would like these 50 tables to be purged with an older-than criterion, say remove data older than 1st Jan 2015. What is the best way to achieve this requirement?
You can use a SQL script for this. Just call a stored procedure with a single parameter called @CutOffDate and write logic like below:
    CREATE PROC PurgeTableData
    @CutOffDate datetime
    AS
    DELETE FROM Table1 WHERE DateField < @CutOffDate;
    DELETE FROM Table2 WHERE DateField < @CutOffDate;
    DELETE FROM Table3 WHERE DateField < @CutOffDate;
    GO
Here @CutOffDate denotes the date from which older data has to be purged.
You can then schedule this SP in a SQL Agent job to be executed at your required frequency.
Visakh

  • Loading data into csv file based on the count of data in table in ODI

    Hi,
I have a requirement like: I have data in a table which needs to be loaded into a csv file.
If the count of the data in the table exceeds the csv data limit, then I need to call the same procedure again to generate one more csv file and load the remaining data into it. The data in the table may vary all the time.
I am new to ODI. Can anyone tell me any way to do this in ODI, or any logic to do this?
    Thanks in advance..

What limit on the csv file are you talking about? There isn't a limit (unless you are on Unix, where the file size limit may be 2GB). Are you talking about opening it in earlier versions of Excel, where the display limit is 65,536 rows? That is irrelevant for your operation.

  • Exporting table data to .csv file

How can I export the table data (this table has 18 million records) to a .csv file? Using SQL Developer and PL/SQL Developer it takes too long to complete. Please let me know if there is a faster way to complete this task.
    Thanks in advance

Also bear in mind that SQL Developer and PL/SQL Developer are running on your local client, so all 18 million rows are transferred to the client before being formatted and written out to a file. If you had some PL/SQL code export the data on the database server, it would run a lot faster, as you wouldn't have the network traffic delay to deal with. Your file would then reside on the server rather than the client, which isn't necessarily a bad thing: a lot of companies would be concerned about the security of their data, especially if it ends up stored on local machines rather than on a secure server.
    As already asked, why on earth do you need to store 18 million rows in a CSV file? The database is the best place to keep the data.

  • Select Query on TCURR table

    Hello All,
I am writing a select query on the TCURR table, but I am unable to get values from the table even though I can see entries in the table for the very condition I provided in the program. Is there a peculiar reason for this?
My query is as follows:
SELECT * FROM tcurr INTO TABLE g_t_tcurr
  WHERE kurst = 'M'
    AND fcurr = g_f_waers
    AND tcurr = 'AUD'
    AND gdatu LE l_f_datum.   "l_f_datum is filled from MKPF-BUDAT
    Any help is appreciated .
    Thanks

    Hi,
    Please check the datatype and try again...
DATA: datc(8)      TYPE c,
      gdatu_old(8) TYPE c.
*exc_type = 'M'.
SELECT * FROM tcurf INTO TABLE g_t_tcurr
  CLIENT SPECIFIED
  WHERE kurst = exc_type
    AND fcurr = for_curr
    AND tcurr = loc_curr
    AND gdatu GE datc
    AND mandt = sy-mandt
  ORDER BY PRIMARY KEY.
FM CONVERT_TO_LOCAL_CURRENCY_N can be used to convert an amount from foreign currency to local currency.
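One more thing worth checking, a common cause of exactly this symptom (an observation added here, not stated in the original reply): TCURR-GDATU is stored as an inverted date (99999999 - YYYYMMDD), so comparing it directly against a normal date finds nothing. A minimal sketch of the conversion, reusing the names from the question:
* GDATU holds the date inverted, so convert the selection date first;
* the comparison also flips: LE on real dates becomes GE on GDATU.
DATA: g_t_tcurr TYPE STANDARD TABLE OF tcurr,
      lv_datum  TYPE sy-datum,
      lv_gdatu  TYPE tcurr-gdatu.

lv_datum = mkpf-budat.
CONVERT DATE lv_datum INTO INVERTED-DATE lv_gdatu.

SELECT * FROM tcurr INTO TABLE g_t_tcurr
  WHERE kurst = 'M'
    AND fcurr = g_f_waers
    AND tcurr = 'AUD'
    AND gdatu GE lv_gdatu.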
    Regds
    Parvathi

  • How can I make an easy *.CSV file to load into database table

    Hi All,
I have a huge excel sheet having columns item#, description and qty. The description column may contain a one-word name, a two-word name separated by a space, or a comma-separated name. I want to write PL/SQL code which will read this file and load it into a database table. Now the *.CSV file is either comma-delimited or tab-text-delimited, and neither solves my issue. Is there any better solution which can avoid manual editing of the *.CSV file and let me easily load it into the table?
    Your help is appreciated,
    Thanks
    Zahir

    SQL*Loader is probably the fastest method, but since you specifically asked for a PL/SQL method:
    http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:464420312302

  • Problem in converting table data into CSV file

    Hi All,
In my process I need to convert my error table data into a csv file. My data is converted to a csv file by using the OdiSqlUnload function, but the column headers are not converted. I use another procedure for converting the column headers, but I am getting the error below...
    com.sunopsis.sql.SnpsMissingParametersException: Missing parameter string.find, string.find
    SQL: import string import java.sql as sql import java.lang as lang import re sourceConnection = odiRef.getJDBCConnection("SRC") output_write=open('C:/Oracle/Middleware/Oracle_ODI2/oracledi/pro/PRO.txt','r+') myStmt = sourceConnection.createStatement() my_query = "select * FROM E$_LOCAL_F0911Z1" my_query=my_query.upper() if string.find(my_query, '*') > 0: myRs = myStmt.executeQuery(my_query) md=myRs.getMetaData() collect=[] i=1 while (i <= md.getColumnCount()): collect.append(md.getColumnName(i)) i += 1 header=','.join(map(string.strip, collect)) elif string.find(my_query,'||') > 0: header = my_query[7:string.find(my_query, 'FROM')].replace("||','||",',') else: header = my_query[7:string.find(my_query, 'FROM')] print header old=output_write.read() output_write.seek(0) output_write.write (header+'\n'+old) sourceConnection.close() output_write.close()
And I used the code below for converting:
import string
import java.sql as sql
import java.lang as lang
import re
sourceConnection = odiRef.getJDBCConnection("SRC")
output_write = open('C:/Oracle/Middleware/Oracle_ODI2/oracledi/pro/PRO.txt', 'r+')
myStmt = sourceConnection.createStatement()
my_query = "select * FROM E$_COMPANY"
my_query = my_query.upper()
if string.find(my_query, '*') > 0:
    myRs = myStmt.executeQuery(my_query)
    md = myRs.getMetaData()
    collect = []
    i = 1
    while (i <= md.getColumnCount()):
        collect.append(md.getColumnName(i))
        i += 1
    header = ','.join(map(string.strip, collect))
elif string.find(my_query, '||') > 0:
    header = my_query[7:string.find(my_query, 'FROM')].replace("||','||", ',')
else:
    header = my_query[7:string.find(my_query, 'FROM')]
print header
old = output_write.read()
output_write.seek(0)
output_write.write(header + '\n' + old)
sourceConnection.close()
output_write.close()
Can anyone help regarding this?

This may not be an option for you, but in a pinch you may want to consider outputting your data to an MS Excel spreadsheet, then saving it as a CSV. It's somewhat of a cumbersome process, but it will get you by for now.
    You will need to change your content type to application/vnd.ms-excel.
    <% response.setContentType("application/vnd.ms-excel"); %>

  • How can we export table data to a CSV file??

    Hi,
I have the following requirement. Initially the business agreed upon exporting the table data to an Excel file. But now they would like to export the table data to a CSV file, which is not supported by the af:exportCollectionActionListener component: when I opened the exported CSV file, I could see the exported data surrounded with HTML tags. Hence the issue.
Does someone have a solution for this, i.e. how can we export the table data to csv format so that it works similarly to exporting the data to an excel sheet?
For your reference, here is the code which I have used to export the table data.
<f:facet name="menus">
  <af:menu text="Menu" id="m1">
    <af:commandMenuItem text="Print" id="cmi1">
      <af:exportCollectionActionListener exportedId="t1"
          title="CommunicationDistributionList"
          filename="CommunicationDistributionList"
          type="excelHTML"/> ---- I tried removing the value of this attribute; with no value it did not work at all.
    </af:commandMenuItem>
  </af:menu>
</f:facet>
    Thanks & Regards,
    Kiran Konjeti

    Hi Alex,
I have already visited that post and it works only in 10g, not in 11g.
I got the solution for this. The solution is:
    Use the following code in jsff
    ==================
<af:commandButton text="Export Data" id="ctb1">
  <af:fileDownloadActionListener contentType="text/csv; charset=utf-8"
      filename="test.csv"
      method="#{pageFlowScope.pageFlowScopeDemoAppMB.test}"/>
</af:commandButton>
OR
<af:commandButton text="Export Data" id="ctb1">
  <af:fileDownloadActionListener contentType="application/vnd.ms-excel; charset=utf-8"
      filename="test.csv"
      method="#{pageFlowScope.pageFlowScopeDemoAppMB.test}"/>
</af:commandButton>
    And place this code in ManagedBean
    ======================
public void test(FacesContext facesContext, OutputStream outputStream) throws IOException {
    DCBindingContainer dcBindings = (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
    DCIteratorBinding itrBinding = (DCIteratorBinding) dcBindings.get("fetchDataIterator");
    tableRows = itrBinding.getAllRowsInRange();
    // preparing column headers
    PrintWriter out = new PrintWriter(outputStream);
    out.print(" ID");
    out.print(",");
    out.print("Name");
    out.print(",");
    out.print("Designation");
    out.print(",");
    out.print("Salary");
    out.println();
    // preparing column data
    for (Row row : tableRows) {
        DCDataRow dataRow = (DCDataRow) row;
        DataLoaderDTO dto = (DataLoaderDTO) dataRow.getDataProvider();
        out.print(dto.getId());
        out.print(",");
        out.print(dto.getName());
        out.print(",");
        out.print(dto.getDesgntn());
        out.print(",");
        out.print(dto.getSalary());
        out.println();
    }
    out.flush();
    out.close();
}
And apply the following browser settings (optional), only in case the file is being blocked by IE:
    ==================================================================
    http://ais-ss.usc.edu/helpdoc/main/browser/bris004b.html
    This resolves implementation of exporting table data to CSV file in 11g.
    Thanks & Regards,
    Kiran Konjeti

  • How to read/write .CSV file into CLOB column in a table of Oracle 10g

I have a requirement which is nothing but a table with two columns:
create table emp_data (empid number, report clob)
Here the REPORT column is of CLOB data type, used to hold the data loaded from the .csv file.
The requirements are:
1) How to load data from a .CSV file into the CLOB column, along with empid, using the DBMS_LOB utility.
2) How to read the report column so that it returns all the columns present in the .CSV file (dynamically, because every csv file may have a different number of columns), along with the primary key empid.
eg: empid  report_field1  report_field2
    1      x              y
Any help would be appreciated.

    If I understand you right, you want each row in your table to contain an emp_id and the complete text of a multi-record .csv file.
    It's not clear how you relate emp_id to the appropriate file to be read. Is the emp_id stored in the csv file?
    To read the file, you can use functions from [UTL_FILE|http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/u_file.htm#BABGGEDF] (as long as the file is in a directory accessible to the Oracle server):
    declare
        lt_report_clob CLOB;
        l_max_line_length integer := 1024;   -- set as high as the longest line in your file
        l_infile UTL_FILE.file_type;
        l_buffer varchar2(1024);
        l_emp_id report_table.emp_id%type := 123; -- not clear where emp_id comes from
        l_filename varchar2(200) := 'my_file_name.csv';   -- get this from somewhere
    begin
       -- open the file; we assume an Oracle directory has already been created
        l_infile := utl_file.fopen('CSV_DIRECTORY', l_filename, 'r', l_max_line_length);
        -- initialise the empty clob
        dbms_lob.createtemporary(lt_report_clob, TRUE, DBMS_LOB.session);
        loop
          begin
             utl_file.get_line(l_infile, l_buffer);
             dbms_lob.append(lt_report_clob, l_buffer);
          exception
             when no_data_found then
                 exit;
          end;
        end loop;
        insert into report_table (emp_id, report)
        values (l_emp_id, lt_report_clob);
        -- free the temporary lob
        dbms_lob.freetemporary(lt_report_clob);
       -- close the file
       UTL_FILE.fclose(l_infile);
end;
This simple line-by-line approach is easy to understand, and gives you an opportunity (if you want) to take each line in the file and transform it (for example, you could transform it into a nested table, or into XML). However it can be rather slow if there are many records in the csv file - the lob_append operation is not particularly efficient. I was able to improve the efficiency by caching the lines in a VARCHAR2 up to a maximum cache size, and only then appending to the LOB - see [three posts on my blog|http://preferisco.blogspot.com/search/label/lob].
    There is at least one other possibility:
    - you could use [DBMS_LOB.loadclobfromfile|http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_lob.htm#i998978]. I've not tried this before myself, but I think the procedure is described [here in the 9i docs|http://download.oracle.com/docs/cd/B10501_01/appdev.920/a96591/adl12bfl.htm#879711]. This is likely to be faster than UTL_FILE (because it is all happening in the underlying DBMS_LOB package, possibly in a native way).
    That's all for now. I haven't yet answered your question on how to report data back out of the CLOB. I would like to know how you associate employees with files; what happens if there is > 1 file per employee, etc.
    HTH
    Regards Nigel
    Edited by: nthomas on Mar 2, 2009 11:22 AM - don't forget to fclose the file...

  • How to create a table with datatype blob and insert a pdf file (ravi)

    how to create a table with datatype blob and insert a pdf file,
    give me the explain asap
    1.create the table?
    2.insert the pdffiles into tables?
    3.how to view the files?
    Thanks & Regards
    ravikumar.k

    895044 wrote:
    how to create a table with datatype blob and insert a pdf file,
give me the explain asap
Perhaps you should read...
    {message:id=9360002}
    especially point 2.
    We're not just sitting here waiting to answer your question as quickly as possible for you.

  • Loading data from .csv file into Oracle Table

    Hi,
I have a requirement where I need to populate data from a .csv file into an Oracle table.
Is there any mechanism I can follow to do this?
    Any help will be fruitful.
    Thanks and regards

You can use SQL*Loader or external tables for your requirement.
Missed Karthick's post ... already there :)

  • Loading data from .csv file into existing table

    Hi,
I have taken a look at several threads which talk about loading data from a .csv file into an existing/new table. Also checked out Vikas's application regarding the same. I am trying to explain my requirement with an example.
    I have a .csv file and I want the data to be loaded into an existing table. The timesheet table columns are -
    timesheet_entry_id,time_worked,timesheet_date,project_key .
    The csv columns are :
    project,utilization,project_key,timesheet_category,employee,timesheet_date , hours_worked etc.
What I needed to know is: before the csv data is loaded into the timesheet table, is there any way of validating the project key (which is the primary key of the projects table) against the projects table? I need to perform similar validations on other columns, like customer_id from the customers table. Basically the loading should be done after validating that the data exists in the parent table. Has anyone done this kind of loading through the APEX data load utility? Or is there another method of accomplishing the same?
Does Vikas's application do what the utility does (I am assuming that, the code being from 2005, the utility was not incorporated in APEX at that time)? Any helpful advice is greatly appreciated.
    Thanks,
    Anjali

    Hi Anjali,
    Take a look at these threads which might outline different ways to do it -
    File Browse, File Upload
    Loading CSV file using external table
    Loading a CSV file into a table
You can create hidden items in the page to validate previous records before inserting data.
    Hope this helps,
    M Tajuddin
    http://tajuddin.whitepagesbd.com

  • Loading records from .csv file to SAP table via SAP Program

    Hi,
    I have a .csv file with 132,869 records and I am trying to load it to an SAP table with a customized SAP program.
    After executing the program, only 99,999 records are being loaded into the table.
    Is there some setting to define how many records can be loaded into a table? Or what else could be the problem?
Please advise.
    Thanks!!!

Hi Arun,
A datasource needs an extract structure to fetch data. It is nothing but a temp table to hold data.
First you need to create a table in SE11 with fields matching those coming from the CSV file.
Then you need to write a report program to read your CSV file and populate your table in BW, as sketched below.
Then you can create a datasource on top of this table.
After that, replicate and load the data into the PSA and on through the upper flow.
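A minimal sketch of such a report program (the table name ZMY_TABLE, its fields and the file path are placeholders, not from this thread; also note there is no general 99,999-row limit on a transparent table, so the cap you saw most likely comes from the custom program's own logic):
REPORT zload_csv.

* Sketch: read a CSV file from the frontend and insert its rows into
* a custom table. ZMY_TABLE and FIELD1..FIELD3 are assumptions.
TYPES: BEGIN OF ty_raw,
         line TYPE string,
       END OF ty_raw.

DATA: gt_raw  TYPE STANDARD TABLE OF ty_raw,
      gs_raw  TYPE ty_raw,
      gs_rec  TYPE zmy_table,
      gv_file TYPE string VALUE 'C:\data\input.csv'.

* Read the file, one internal table row per line
CALL FUNCTION 'GUI_UPLOAD'
  EXPORTING
    filename = gv_file
    filetype = 'ASC'
  TABLES
    data_tab = gt_raw
  EXCEPTIONS
    OTHERS   = 1.

LOOP AT gt_raw INTO gs_raw.
* Split each comma-separated line into the target structure's fields
  SPLIT gs_raw-line AT ',' INTO gs_rec-field1 gs_rec-field2 gs_rec-field3.
  INSERT zmy_table FROM gs_rec.
ENDLOOP.
COMMIT WORK.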
    Regards,
    Jaya Tiwari
