Store the header and/or footer from a flat file into an Oracle table by customizing an LKM.

I would like to customize my LKM by adding an extra option and an extra step, so that I can store the header and footer of the source file (the header is normally the first line of the file, the footer the last line) in a separate Oracle table.
The file format would be:
Header row
data
data
data
data
footer row
I would like to store the header row and footer row in a table.
Please suggest how the LKM can be customized to do this.
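
For what it's worth, a minimal sketch of such an extra LKM step, assuming the LKM already stages the file in a C$ work table that carries a line-number column. Every table, column and option name below is illustrative, not something ODI generates for you, and '#FILE_NAME' just stands in for however you pass the file name (an ODI variable or a KM option):

    -- Extra LKM step "Store header and footer", run on the target after the
    -- file has been staged; guard it with the new KM option you add.
    -- C$_0SRC_FILE, REC_NUM and LINE_TEXT are assumed names, and
    -- FILE_HEADER_FOOTER is an audit table you would create up front.
    INSERT INTO file_header_footer (file_name, line_type, line_text, load_date)
    SELECT '#FILE_NAME', 'HEADER', line_text, SYSDATE
      FROM c$_0src_file
     WHERE rec_num = (SELECT MIN(rec_num) FROM c$_0src_file)
    UNION ALL
    SELECT '#FILE_NAME', 'FOOTER', line_text, SYSDATE
      FROM c$_0src_file
     WHERE rec_num = (SELECT MAX(rec_num) FROM c$_0src_file);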

Answered at
Re: Uploading files to server

Similar Messages

  • How to remove Header and Footer from Flat File

    Hi,
    In a scenario, we are going to receive a flat file which will contain a header, a footer and data. We have to load the data into Oracle tables and remove the header and footer.
    Once the data load is complete, we have to delete the flat file from the source directory.
    Please help me in solving this issue.
    Thanks,
    Kamlesh

    You have a header parameter in the file datastore. You can provide any number there to skip rows from the top of the file. For the footer you can create a filter in the staging area.
    The filter could be something like this:
    where source_alias.source_column not like '%FOOTER_OR_ANY_VALUE_AS_PER_YOUR_REQ'
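
    Put together, the staging load could look something like this minimal sketch (the work table and column names are purely illustrative):

        -- Header rows are already skipped via the datastore's header setting;
        -- the footer row is dropped by the staging filter.
        INSERT INTO target_table (col1, col2)
        SELECT src.col1, src.col2
          FROM c$_0src_file src
         WHERE src.col1 NOT LIKE '%FOOTER_OR_ANY_VALUE_AS_PER_YOUR_REQ';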

  • How to copy only some columns and rows from Excel files to an Oracle table

    Dear all,
    I have the following problem:
    How can I copy only some wanted columns (i.e. DATETIME (format: 17/09/2007 00:00:01) and TIME (format: 00:00:01)) from Excel files to an Oracle table?
    Hope to get help from you guys!
    Sincerely,
    Monojsharmaluitel

    How about Heterogeneous Services, so that the Excel sheet acts as an external database:
    1- Go to Control Panel > Administrative Tools > Data Sources (ODBC) > System DSN and create a data source with the appropriate driver. Name it EXCL.
    2- In the %ORACLE_HOME%\Network\Admin\tnsnames.ora file add the entry:
    EXCL =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = 10.12.0.24)(PORT = 1521))
        )
        (CONNECT_DATA =
          (SID = EXCL)
        )
        (HS = OK)
      )
    Here SID is the name of the data source that you have just created.
    3- In %ORACLE_HOME%\Network\Admin\Listener.ora file add:
    (SID_DESC =
      (PROGRAM = hsodbc)
      (SID_NAME = <hs_sid>)
      (ORACLE_HOME = <oracle home>)
    )
    under SID_LIST_LISTENER, like:
    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = PLSExtProc)
          (ORACLE_HOME = d:\ORA9DB)
          (PROGRAM = extproc)
        )
        (SID_DESC =
          (GLOBAL_DBNAME = ORA9DB)
          (ORACLE_HOME = d:\ORA9DB)
          (SID_NAME = ORA9DB)
        )
        (SID_DESC =
          (PROGRAM = hsodbc)
          (SID_NAME = EXCL)
          (ORACLE_HOME = D:\ora9db)
        )
      )
    Don't forget to reload the listener:
    c:\> lsnrctl reload
    4- In %ORACLE_HOME%\hs\admin create init<HS_SID>.ora. For our SID EXCL we create the file initexcl.ora.
    In this file set following two parameters:
    HS_FDS_CONNECT_INFO = excl
    HS_FDS_TRACE_LEVEL = 0
    5- Now connect to Oracle database and create database link with following command:
    SQL> CREATE DATABASE LINK excl
    2 USING 'excl'
    3 /
    Database link created.
    Now you can run queries against this database link just as you would against any remote database.
    SQL> SELECT table_name FROM all_tables@excl;
    TABLE_NAME
    DEPT
    EMP
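
    With the link in place, the original question (copying only some columns) becomes a plain INSERT ... SELECT. A sketch, assuming the data sits on a worksheet named Sheet1 with the column headings from the question; the Excel ODBC driver typically exposes a worksheet as a table called "Sheet1$", and your sheet, column and target names will differ:

        -- Column and sheet names depend entirely on the workbook
        INSERT INTO my_oracle_table (datetime_col, time_col)
        SELECT "DATETIME", "TIME"
          FROM "Sheet1$"@excl;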

  • How to parse and retrieve records from xml files into columns in Table

    Hi
    I have attached what I tried.
    Table to hold the XML COntent:
    create table xmlfile(xml_con sys.xmltype);
    Inserting Xml file content into the Above table:
    insert into xmlfile values(sys.xmltype.CreateXml('<Root><name>RAM</name><age>23</age></Root>'));
    SQL> select * from xmlfile;
    XML_CON
    <Root>
    <name>RAM</name>
    <age>23</age>
    </Root>
    SQL> select extractValue(xml_con, '/Root/name') content from xmlfile;
    CONTENT
    RAM
    This one works fine
    But if the file content is as below (contains multiple records):
    insert into xmlfile values(sys.xmltype.CreateXml('<Root><Record><name>RAM</name><age>23</age></Record><Record><name>SAM</name><age>23</age></Record></Root>'));
    SQL> select extractValue(xml_con, '/Root/Record/name') content from xmlfile;
    ERROR at line 1:
    ORA-19025: EXTRACTVALUE returns value of only one node
    Can anyone help me with this issue: how can I extract multiple records from the XML file in this manner (from PL/SQL, without using Java)?
    Or, if there is any other way to do this, please tell me.

    SQL> SELECT EXTRACTVALUE (COLUMN_VALUE, '//name') name,
                EXTRACTVALUE (COLUMN_VALUE, '//age') age
           FROM TABLE
                   (XMLSEQUENCE
                       (EXTRACT
                           (XMLTYPE
                               ('<Root>
                                   <Record>
                                     <name>RAM</name>
                                     <age>23</age>
                                   </Record>
                                   <Record>
                                     <name>SAM</name>
                                     <age>23</age>
                                   </Record>
                                 </Root>'),
                            '/Root/Record')));
    NAME       AGE      
    RAM        23       
    SAM        23       
    2 rows selected.
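
    On Oracle 10g and later, XMLTABLE is the more idiomatic way to shred repeating nodes; a short sketch against the same xmlfile table:

        -- One relational row per /Root/Record node
        SELECT x.name, x.age
          FROM xmlfile f,
               XMLTABLE ('/Root/Record' PASSING f.xml_con
                         COLUMNS name VARCHAR2(30) PATH 'name',
                                 age  NUMBER       PATH 'age') x;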

  • Regarding master data ,transactional data loading from flat file

    Hi friends,
    Please tell me how to load master data and transactional data from a flat file.
    Thanks in advance ,
    Regards,
    ramnaresh.

    Hi,
    Please use the 'Search Forum' functionality and search the BI forum with, say, 'flat file loading'. You will get plenty of links to previous threads.
    BR/
    Mathew.

  • How to Add Header and Footer in Flat file

    HI All,
    I am using ODI 11g.
    My interface populates data by joining two tables through a SQL query and loads it into a flat file.
    But I need to add a header and footer to the flat file.
    Of the 3 lines below,
    the 1st line is the header, the 2nd line is data, and the 3rd line is the footer.
    Any ideas how to work on this?
    HSB9KT.KTPRTO.EBSDATA.D0000000 20120626020636
    CFFE00263 SOLGSR014991340000527894536058173880207SOLNE06242012072420120000009048382650000247MRS
    T00000000000000000000
    Thanks,
    Lony

    Hi Drieux,
    I am able to add a header and footer with this.
    But I want the header to be like HSB9KT.KTPRTO.EBSDATA.D0000000 || SYSDATE (YYYYMMDDHHMMSS).
    In my output I only see the header as HSB9KT.KTPRTO.EBSDATA.D0000000.
    I have added a variable that uses the query SELECT to_char(sysdate,'YYYYMMDDHHMMSS') FROM DUAL.
    create header (HSB9KT.KTPRTO.EBSDATA.D0000000 #v_DATE)
    /*$$SNPS_START_KEYSNP$CRDWG_TABLESNP$CRTABLE_NAME=FFT_SOLSNP$CRLOAD_FILE=/home/tarak/FFT_SOL.datSNP$CRFILE_FORMAT=FSNP$CRFILE_SEP_FIELD=0x0009SNP$CRFILE_SEP_LINE=0x000ASNP$CRFILE_FIRST_ROW=0SNP$CRFILE_ENC_FIELD=SNP$CRFILE_DEC_SEP=SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=descrSNP$CRTYPE_NAME=STRINGSNP$CRLINE_OFFSET=1SNP$CRLENGTH=994SNP$CRPRECISION=994SNP$CR$$SNPS_END_KEY*/
    Any idea why this sysdate part is not showing in the header?

  • How to store, retrieve and edit data from an XML file

    I am new to XML. I am creating a phone book in JSP with an XML file as the backend. How can I store, retrieve and edit data from a JSP file?
    Please provide me with examples. I don't want to use a bean.

    You will have to do some leg-work yourself. You will probably want to use DOM to parse and manipulate your XML data. Xerces (xml.apache.org) is a good open-source parser. There are extensive examples on their site.
    - Saish

  • Loading a field of length (greater than 60 and up to 2048) from flat file to BW

    Hello,
    Can you please suggest ways by which we can load a field from a flat file which is of length greater than 60 and up to 2048 characters into BW.
    Facts:
    We cannot have more than 59 characters in an InfoObject.
    Observations:
    The idea of using multiple InfoObjects (i.e. for a 2048 maximum length it would be 35 InfoObjects) is not acceptable to my client.
    So, can anyone suggest other ways in BW 3.5 for loading a text of length up to 2048 from a flat file into BW?

    Hi,
    If your source field is in DD/MM/YYYY format, please try this code in the field routine.
    DATA: lv_date(8) TYPE c.
    lv_date+0(4) = <Source Field Name>+6(4).
    lv_date+4(2) = <Source Field Name>+3(2).
    lv_date+6(2) = <Source Field Name>+0(2).
    RESULT = lv_date.
    Thanks,
    Saru
    Edited by: P. Saravana Kumar on Apr 19, 2009 2:38 PM

  • How to load Material from Flat File and convert to SAP Format

    Hi
    I am loading 0MATERIAL values from a flat file for mapping purposes. The format of the material in the flat file is "7704132". Within the system, I need to compare this value with the 0MATERIAL values in the incoming data and update the corresponding 0MATERIAL in the records. For this purpose, I created dummy materials using 0MATERIAL as a template and tried to load the data. I am getting the error "Version '7704132' is not valid" (message RSDMD 194). Can anyone please let me know how to overcome this issue? Should I include a routine at the DataSource or rules level? I am on BI 7.0.
    Thanks.

    Hi,
    Use the function module CONVERSION_EXIT_ALPHA_INPUT to convert the value into the internal format: it left-pads numeric values with zeros to the field's full length, so '7704132' becomes '000000000007704132' for an 18-character 0MATERIAL. Use this FM in the transformation (field mapping).
    Search the forum for CONVERSION_EXIT_ALPHA_INPUT for more information on this.
    Regards,
    Anil Kumar Sharma .P

  • Need a script to import the data from flat file

    Hi Friends,
    Does anyone have any scripts to import data from flat files into an Oracle database (Linux OS)? I have to automate the script to run every 30 minutes, check for any flat files in the incoming directory, and process them without user interaction.
    Thanks.
    Srini

    Here is my init.ora file
    # $Header: init.ora 06-aug-98.10:24:40 atsukerm Exp $
    # Copyright (c) 1991, 1997, 1998 by Oracle Corporation
    # NAME
    # init.ora
    # FUNCTION
    # NOTES
    # MODIFIED
    # atsukerm 08/06/98 - fix for 8.1.
    # hpiao 06/05/97 - fix for 803
    # glavash 05/12/97 - add oracle_trace_enable comment
    # hpiao 04/22/97 - remove ifile=, events=, etc.
    # alingelb 09/19/94 - remove vms-specific stuff
    # dpawson 07/07/93 - add more comments regarded archive start
    # maporter 10/29/92 - Add vms_sga_use_gblpagfile=TRUE
    # jloaiza 03/07/92 - change ALPHA to BETA
    # danderso 02/26/92 - change db_block_cache_protect to dbblock_cache_p
    # ghallmar 02/03/92 - db_directory -> db_domain
    # maporter 01/12/92 - merge changes from branch 1.8.308.1
    # maporter 12/21/91 - bug 76493: Add control_files parameter
    # wbridge 12/03/91 - use of %c in archive format is discouraged
    # ghallmar 12/02/91 - add global_names=true, db_directory=us.acme.com
    # thayes 11/27/91 - Change default for cache_clone
    # jloaiza 08/13/91 - merge changes from branch 1.7.100.1
    # jloaiza 07/31/91 - add debug stuff
    # rlim 04/29/91 - removal of char_is_varchar2
    # Bridge 03/12/91 - log_allocation no longer exists
    # Wijaya 02/05/91 - remove obsolete parameters
    # Example INIT.ORA file
    # This file is provided by Oracle Corporation to help you customize
    # your RDBMS installation for your site. Important system parameters
    # are discussed, and example settings given.
    # Some parameter settings are generic to any size installation.
    # For parameters that require different values in different size
    # installations, three scenarios have been provided: SMALL, MEDIUM
    # and LARGE. Any parameter that needs to be tuned according to
    # installation size will have three settings, each one commented
    # according to installation size.
    # Use the following table to approximate the SGA size needed for the
    # three scenarious provided in this file:
    # -------Installation/Database Size------
    # SMALL MEDIUM LARGE
    # Block 2K 4500K 6800K 17000K
    # Size 4K 5500K 8800K 21000K
    # To set up a database that multiple instances will be using, place
    # all instance-specific parameters in one file, and then have all
    # of these files point to a master file using the IFILE command.
    # This way, when you change a public
    # parameter, it will automatically change on all instances. This is
    # necessary, since all instances must run with the same value for many
    # parameters. For example, if you choose to use private rollback segments,
    # these must be specified in different files, but since all gc_*
    # parameters must be the same on all instances, they should be in one file.
    # INSTRUCTIONS: Edit this file and the other INIT files it calls for
    # your site, either by using the values provided here or by providing
    # your own. Then place an IFILE= line into each instance-specific
    # INIT file that points at this file.
    # NOTE: Parameter values suggested in this file are based on conservative
    # estimates for computer memory availability. You should adjust values upward
    # for modern machines.
    # You may also consider using Database Configuration Assistant tool (DBCA)
    # to create INIT file and to size your initial set of tablespaces based
    # on the user input.
    # replace DEFAULT with your database name
    db_name=DEFAULT
    db_files = 80 # SMALL
    # db_files = 400 # MEDIUM
    # db_files = 1500 # LARGE
    db_file_multiblock_read_count = 8 # SMALL
    # db_file_multiblock_read_count = 16 # MEDIUM
    # db_file_multiblock_read_count = 32 # LARGE
    db_block_buffers = 100 # SMALL
    # db_block_buffers = 550 # MEDIUM
    # db_block_buffers = 3200 # LARGE
    shared_pool_size = 3500000 # SMALL
    # shared_pool_size = 5000000 # MEDIUM
    # shared_pool_size = 9000000 # LARGE
    log_checkpoint_interval = 10000
    processes = 50 # SMALL
    # processes = 100 # MEDIUM
    # processes = 200 # LARGE
    parallel_max_servers = 5 # SMALL
    # parallel_max_servers = 4 x (number of CPUs) # MEDIUM
    # parallel_max_servers = 4 x (number of CPUs) # LARGE
    log_buffer = 32768 # SMALL
    # log_buffer = 32768 # MEDIUM
    # log_buffer = 163840 # LARGE
    # audit_trail = true # if you want auditing
    # timed_statistics = true # if you want timed statistics
    max_dump_file_size = 10240 # limit trace file size to 5 Meg each
    # Uncommenting the line below will cause automatic archiving if archiving has
    # been enabled using ALTER DATABASE ARCHIVELOG.
    # log_archive_start = true
    # log_archive_dest = disk$rdbms:[oracle.archive]
    # log_archive_format = "T%TS%S.ARC"
    # If using private rollback segments, place lines of the following
    # form in each of your instance-specific init.ora files:
    # rollback_segments = (name1, name2)
    # If using public rollback segments, define how many
    # rollback segments each instance will pick up, using the formula
    # # of rollback segments = transactions / transactions_per_rollback_segment
    # In this example each instance will grab 40/5 = 8:
    # transactions = 40
    # transactions_per_rollback_segment = 5
    # Global Naming -- enforce that a dblink has same name as the db it connects to
    global_names = TRUE
    # Edit and uncomment the following line to provide the suffix that will be
    # appended to the db_name parameter (separated with a dot) and stored as the
    # global database name when a database is created. If your site uses
    # Internet Domain names for e-mail, then the part of your e-mail address after
    # the '@' is a good candidate for this parameter value.
    # db_domain = us.acme.com      # global database name is db_name.db_domain
    # FOR DEVELOPMENT ONLY, ALWAYS TRY TO USE SYSTEM BACKING STORE
    # vms_sga_use_gblpagfil = TRUE
    # FOR BETA RELEASE ONLY. Enable debugging modes. Note that these can
    # adversely affect performance. On some non-VMS ports the db_block_cache_*
    # debugging modes have a severe effect on performance.
    #_db_block_cache_protect = true # memory protect buffers
    #event = "10210 trace name context forever, level 2" # data block checking
    #event = "10211 trace name context forever, level 2" # index block checking
    #event = "10235 trace name context forever, level 1" # memory heap checking
    #event = "10049 trace name context forever, level 2" # memory protect cursors
    # define parallel server (multi-instance) parameters
    #ifile = ora_system:initps.ora
    # define two control files by default
    control_files = (ora_control1, ora_control2)
    # Uncomment the following line if you wish to enable the Oracle Trace product
    # to trace server activity. This enables scheduling of server collections
    # from the Oracle Enterprise Manager Console.
    # Also, if the oracle_trace_collection_name parameter is non-null,
    # every session will write to the named collection, as well as enabling you
    # to schedule future collections from the console.
    # oracle_trace_enable = TRUE
    # Uncomment the following line, if you want to use some of the new 8.1
    # features. Please remember that using them may require some downgrade
    # actions if you later decide to move back to 8.0.
    #compatible = 8.1.0
    Thanks.
    Srini
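
    For what it's worth, the original ask (a hands-off load every 30 minutes) is usually handled with an external table plus a DBMS_SCHEDULER job rather than anything in init.ora. A minimal sketch, with the directory path, file name and all object names purely illustrative:

        -- Directory object pointing at the incoming folder (path is an assumption)
        CREATE DIRECTORY incoming_dir AS '/u01/app/incoming';

        -- External table over the flat file; the file name is fixed here, so
        -- picking up arbitrary file names would need a preprocessor or rename step
        CREATE TABLE ext_incoming (
          col1 VARCHAR2(100),
          col2 VARCHAR2(100)
        )
        ORGANIZATION EXTERNAL (
          TYPE ORACLE_LOADER
          DEFAULT DIRECTORY incoming_dir
          ACCESS PARAMETERS (
            RECORDS DELIMITED BY NEWLINE
            FIELDS TERMINATED BY ','
          )
          LOCATION ('incoming.txt')
        )
        REJECT LIMIT UNLIMITED;

        -- Job that copies whatever the file holds into the target every 30 minutes
        BEGIN
          DBMS_SCHEDULER.create_job (
            job_name        => 'LOAD_INCOMING_FILES',
            job_type        => 'PLSQL_BLOCK',
            job_action      => 'BEGIN INSERT INTO target_table SELECT * FROM ext_incoming; COMMIT; END;',
            repeat_interval => 'FREQ=MINUTELY; INTERVAL=30',
            enabled         => TRUE);
        END;
        /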

  • Upload PO data into SRM system from flat file

    Hi all,
    I need to create a conversion program to upload open purchase order data from a flat file into the system.
    I am trying to create the PO using the BAPI BAPI_POEC_CREATE, but I am getting an error.
    Could someone give the details of the parameters that need to be passed to the BAPI?
    Thanks in advance
    Sharad

    Sharad,
    Not the very best piece of code, but it should be helpful.
    REPORT  zkb_po_create.
    DATA: ls_po_header   TYPE bapi_po_header_c.
    DATA: ls_e_po_header TYPE bapi_po_header_d.
    DATA: ls_po_items    TYPE bapi_po_item_c.
    DATA: ls_po_accass   TYPE bapi_acc_c.
    DATA: ls_po_partner  TYPE bapi_bup_c.
    DATA: ls_po_orgdata  TYPE bapi_org_c.
    DATA: ls_return      TYPE bapiret2.
    DATA: lt_po_items   TYPE TABLE OF bapi_po_item_c.
    DATA: lt_po_accass  TYPE TABLE OF bapi_acc_c.
    DATA: lt_po_partner TYPE TABLE OF bapi_bup_c.
    DATA: lt_po_orgdata TYPE TABLE OF bapi_org_c.
    DATA: lt_return     TYPE TABLE OF bapiret2.
    * Header Details
    ls_po_header-businessprocess = 1.
    CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
      EXPORTING
        input  = ls_po_header-businessprocess
      IMPORTING
        output = ls_po_header-businessprocess.
    ls_po_header-process_type = 'EC'.
    ls_po_header-doc_date = sy-datum.
    ls_po_header-description = 'Test for BAPI_POEC_CREATE'.
    ls_po_header-logsys_fi = 'Backend'.
    ls_po_header-co_code = '1000'.
    ls_po_header-currency = 'GBP'.
    * Item Details
    ls_po_items-item_guid    = 2.
    CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
      EXPORTING
        input  = ls_po_items-item_guid
      IMPORTING
        output = ls_po_items-item_guid.
    ls_po_items-parent        = ls_po_header-businessprocess.
    ls_po_items-product_guid  = '4678E74FFFC380AD000000000A8E035B'.
    ls_po_items-product_id    = '400030'.
    ls_po_items-product_type  = '01'.
    ls_po_items-category_guid = '4627B461073F40FC000000000A8E035B'.
    ls_po_items-category_id   = '1.04.0500'.
    ls_po_items-quantity      = 10.
    ls_po_items-deliv_date   = sy-datum + 10.
    ls_po_items-price = '25'.
    APPEND ls_po_items TO lt_po_items.
    * Account Assignment
    ls_po_accass-guid = 3.
    CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
      EXPORTING
        input  = ls_po_accass-guid
      IMPORTING
        output = ls_po_accass-guid.
    ls_po_accass-parent_guid = ls_po_items-item_guid.
    ls_po_accass-distr_perc  = 100.
    ls_po_accass-g_l_acct    = '<gl acc>'.
    ls_po_accass-cost_ctr    = '<cost centre>'.
    ls_po_accass-co_area     = '<Ctrl area>'.
    APPEND ls_po_accass TO lt_po_accass.
    * Partner Functions
    ls_po_partner-partner_fct  = '00000019'.
    ls_po_partner-partner      = 'Vendor'.
    ls_po_partner-parent_guid  = ls_po_items-item_guid.
    APPEND ls_po_partner TO lt_po_partner.
    ls_po_partner-partner_fct  = '00000016'.
    ls_po_partner-partner      = 'Requester'.
    ls_po_partner-parent_guid  = ls_po_items-item_guid.
    APPEND ls_po_partner TO lt_po_partner.
    ls_po_partner-partner_fct  = '00000020'.
    ls_po_partner-partner      = 'Recipient'.
    ls_po_partner-parent_guid  = ls_po_items-item_guid.
    APPEND ls_po_partner TO lt_po_partner.
    ls_po_partner-partner_fct  = '00000075'.
    ls_po_partner-partner      = 'Location'.
    ls_po_partner-parent_guid  = ls_po_items-item_guid.
    APPEND ls_po_partner TO lt_po_partner.
    ls_po_orgdata-proc_org_ot = 'O'.
    ls_po_orgdata-proc_org_id = 'Pur Org'.
    ls_po_orgdata-proc_group_ot = 'O'.
    ls_po_orgdata-proc_group_id = 'Pur Group'.
    ls_po_orgdata-parent_guid = ls_po_items-item_guid.
    APPEND ls_po_orgdata TO lt_po_orgdata.
    CALL FUNCTION 'BAPI_POEC_CREATE'
      EXPORTING
        i_po_header  = ls_po_header
      IMPORTING
        e_po_header  = ls_e_po_header
      TABLES
        i_po_items   = lt_po_items
        i_po_accass  = lt_po_accass
        i_po_partner = lt_po_partner
        i_po_orgdata = lt_po_orgdata
        return       = lt_return.
    READ TABLE lt_return INTO ls_return WITH KEY type = 'E'.
    IF sy-subrc NE 0.
      COMMIT WORK AND WAIT.
      WRITE:/ ls_e_po_header-doc_number, ': created successfully'.
    ELSE.
      WRITE:/ 'The below errors occurs during PO creation.'.
      LOOP AT lt_return INTO ls_return.
        WRITE:/ ls_return-message.
      ENDLOOP.
    ENDIF.
    Regards, Kathirvel

  • Importing From Flat File with Dynamic Columns

    HI
    I am using SSIS 2008. I have a folder with four ".txt" files, each with 2 columns (ID, NAME). I loaded the 4
    files into one destination, but today I received one more ".txt" file with 3 columns (ID, NAME, JOB). How can I get a message that a new column has arrived in the source? And how can I create the extra column in my destination table dynamically? Please help me.

    Hi Sasidhar,
    You need a Script Task to read the names and number of columns in the first row of the flat file each time and store them in a variable, then create a staging table dynamically based on this variable, modify the destination table definition if one or more new columns need to be added, and then use the staging table to load the destination table. I am afraid there is no ready-made script for your scenario, and you will need some .NET coding experience to achieve your goal. Here is an example you can refer to:
    http://www.citagus.com/citagus/blog/importing-from-flat-file-with-dynamic-columns/
    Regards,
    Mike Yin
    TechNet Community Support

  • MBGMCR02 inbound posting from Flat file

    Hello friends,
    I am posting the MBGMCR02 inbound IDoc for oil materials.
    It is giving error status 51, "Material does not exist or not active".
    I am trying to post it in WE19 and through my program as well.
    It is an inbound IDoc program processed from a flat file.
    I am trying to post the goods receipts against a PO.
    Can anyone suggest a solution?
    thanks
    Anant

    1. Read the file using OPEN DATASET, fill up the segment data and the EDIDC control header,
    and then call the function:
    CALL FUNCTION 'INBOUND_IDOC_PROCESS'
        TABLES
          IDOC_CONTROL       =  i_edidc
          IDOC_DATA          =  i_edid4.

  • Creating BOM using BDC: how to display no. of records from flat file under

    Hi,
    How can I display the records from the flat file under one alternative BOM vertically?
    When I execute, the records are replacing one another.
    Here is my coding:
    report ZBOM1
           no standard page heading line-size 255.
    *include bdcrecx1.
    DATA: BEGIN OF bdc OCCURS 0,
           matnr(18),
           werks(4),
           stlan(1),
          END OF BDC.
    DATA: BEGIN OF BDC1 OCCURS 0,
           idnrk(18),
           MENGE(18),
           MEINS(3),
           postp(1),
          END OF bdc1.
    DATA: BEGIN OF BDCDATA OCCURS 0,
             matnr(18),
             werks(4),
             stlan(1),
             idnrk(18),
             MENGE(18),
             MEINS(3),
             postp(1),
             posnr(4),
          END OF BDCDATA.
    data: ibdcdata type  standard table of bdcdata WITH HEADER LINE.
    *start-of-selection.
    CALL FUNCTION 'GUI_UPLOAD'
      EXPORTING
        FILENAME                      = 'C:\Documents and Settings\dilipkumar.b\Desktop\soft.txt'
        FILETYPE                      = 'ASC'
        HAS_FIELD_SEPARATOR           = 'X'  "expects 'X' (tab-separated), not the separator character itself
        HEADER_LENGTH                 = 0
        READ_BY_LINE                  = 'X'
        DAT_MODE                      = ' '
        CODEPAGE                      = ' '
        IGNORE_CERR                   = ABAP_TRUE
        REPLACEMENT                   = '#'
        CHECK_BOM                     = ' '
    *   VIRUS_SCAN_PROFILE            =
        NO_AUTH_CHECK                 = ' '
    * IMPORTING
    *   FILELENGTH                    =
    *   HEADER                        =
      TABLES
        DATA_TAB                      = BDCDATA
      EXCEPTIONS
        FILE_OPEN_ERROR               = 1
        FILE_READ_ERROR               = 2
        NO_BATCH                      = 3
        GUI_REFUSE_FILETRANSFER       = 4
        INVALID_TYPE                  = 5
        NO_AUTHORITY                  = 6
        UNKNOWN_ERROR                 = 7
        BAD_DATA_FORMAT               = 8
        HEADER_NOT_ALLOWED            = 9
        SEPARATOR_NOT_ALLOWED         = 10
        HEADER_TOO_LONG               = 11
        UNKNOWN_DP_ERROR              = 12
        ACCESS_DENIED                 = 13
        DP_OUT_OF_MEMORY              = 14
        DISK_FULL                     = 15
        DP_TIMEOUT                    = 16
        OTHERS                        = 17.
    IF SY-SUBRC <> 0.
      MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
              WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
    ENDIF.
    *perform open_group.
    loop at bdcdata.
    perform bdc_dynpro      using 'SAPLCSDI' '0100'.
    perform bdc_field       using 'BDC_CURSOR'
                                  'RC29N-STLAN'.
    perform bdc_field       using 'BDC_OKCODE'
                                  '/00'.
    perform bdc_field       using 'RC29N-MATNR'
                                  'SOFTDRINKS'.
    perform bdc_field       using 'RC29N-WERKS'
                                  'WIND'.
    perform bdc_field       using 'RC29N-STLAN'
                                  '1'.
    perform bdc_field       using 'RC29N-DATUV'
                                  '16.09.2008'.
    perform bdc_dynpro      using 'SAPLCSDI' '0110'.
    perform bdc_field       using 'BDC_OKCODE'
                                  '/00'.
    perform bdc_field       using 'RC29K-BMENG'
                                  '1'.
    perform bdc_field       using 'RC29K-STLST'
                                  '1'.
    perform bdc_field       using 'BDC_CURSOR'
                                  'RC29K-EXSTL'.
    perform bdc_dynpro      using 'SAPLCSDI' '0111'.
    perform bdc_field       using 'BDC_CURSOR'
                                  'RC29K-LABOR'.
    perform bdc_field       using 'BDC_OKCODE'
                                  '/00'.
    perform bdc_dynpro      using 'SAPLCSDI' '0140'.
    perform bdc_field       using 'BDC_CURSOR'
                                  'RC29P-POSTP(01)'.
    perform bdc_field       using 'BDC_OKCODE'
                                  '=FCBU'.
    perform bdc_field       using 'RC29P-IDNRK(01)'
                                  BDCDATA-IDNRK.
    perform bdc_field       using 'RC29P-MENGE(01)'
                                  BDCDATA-MENGE.
    perform bdc_field       using 'RC29P-MEINS(01)'
                                  BDCDATA-MEINS.
    perform bdc_field       using 'RC29P-POSTP(01)'
                                  BDCDATA-POSTP.
    perform bdc_dynpro      using 'SAPLCSDI' '0130'.
    perform bdc_field       using 'BDC_OKCODE'
                                  '/00'.
    perform bdc_field       using 'BDC_CURSOR'
                                  'RC29P-POSNR'.
    perform bdc_field       using 'RC29P-POSNR'
                                   BDCDATA-POSNR.            "'0010'.
    perform bdc_field       using 'RC29P-IDNRK'
                                  BDCDATA-IDNRK.             "'15'.
    perform bdc_field       using 'RC29P-MENGE'
                                  BDCDATA-MENGE.             "'1'.
    perform bdc_field       using 'RC29P-MEINS'
                                  BDCDATA-MEINS.             "'ml'.
    perform bdc_dynpro      using 'SAPLCSDI' '0131'.
    perform bdc_field       using 'BDC_OKCODE'
                                  '/00'.
    perform bdc_field       using 'BDC_CURSOR'
                                  'RC29P-POTX1'.
    perform bdc_field       using 'RC29P-SANKA'
                                  'X'.
    *perform bdc_transaction using 'CS01'.
    *perform close_group.
    CALL TRANSACTION 'CS01' USING IBDCDATA MODE 'A' UPDATE 'S'.
    REFRESH IBDCDATA.
    endloop.
    *       Start new screen                                              *
    FORM BDC_DYNPRO USING PROGRAM DYNPRO.
      CLEAR iBDCDATA.
      iBDCDATA-PROGRAM  = PROGRAM.
      iBDCDATA-DYNPRO   = DYNPRO.
      iBDCDATA-DYNBEGIN = 'X'.
      APPEND ibDCDATA .
    ENDFORM.
    *       Insert field                                                  *
    FORM BDC_FIELD USING FNAM FVAL.
      CONSTANTS: NODATA VALUE '/'. "normally supplied by include bdcrecx1, which is commented out above
      IF FVAL <> NODATA.
        CLEAR iBDCDATA.
        iBDCDATA-FNAM = FNAM.
        iBDCDATA-FVAL = FVAL.
        APPEND iBDCDATA.
      ENDIF.
    ENDFORM.

    Hi,
    the BDCDATA structure must contain the fields PROGRAM, DYNPRO, DYNBEGIN, FNAM and FVAL.
    You have to declare your internal table with this structure and pass it in your CALL TRANSACTION statement.
    Please give some other name to the table you declared as BDCDATA (it hides the standard structure), and also to IBDCDATA.

  • Data loading from flat file to cube using bw3.5

    Hi Experts,
    Kindly give me the detailed steps, with screenshots, for loading data from a flat file to a cube using BW 3.5.
    Please.

    Hi ,
    Procedure
    You are in the Data Warehousing Workbench in the DataSource tree.
           1.      Select the application components in which you want to create the DataSource and choose Create DataSource.
           2.      On the next screen, enter a technical name for the DataSource, select the type of DataSource and choose Copy.
    The DataSource maintenance screen appears.
           3.      Go to the General tab page.
                                a.      Enter descriptions for the DataSource (short, medium, long).
                                b.      As required, specify whether the DataSource builds an initial non-cumulative and can return duplicate data records within a request.
                                c.      Specify whether you want to generate the PSA for the DataSource in the character format. If the PSA is not typed it is not generated in a typed structure but is generated with character-like fields of type CHAR only.
    Use this option if conversion during loading causes problems, for example, because there is no appropriate conversion routine, or if the source cannot guarantee that data is loaded with the correct data type.
    In this case, after you have activated the DataSource you can load data into the PSA and correct it there.
           4.      Go to the Extraction tab page.
                                a.      Define the delta process for the DataSource.
                                b.      Specify whether you want the DataSource to support direct access to data.
                                c.      Real-time data acquisition is not supported for data transfer from files.
                                d.      Select the adapter for the data transfer. You can load text files or binary files from your local work station or from the application server.
    Text-type files only contain characters that can be displayed and read as text. CSV and ASCII files are examples of text files. For CSV files you have to specify a character that separates the individual field values. In BI, you have to specify this separator character and an escape character which specifies this character as a component of the value if required. After specifying these characters, you have to use them in the file. ASCII files contain data in a specified length. The defined field length in the file must be the same as the assigned field in BI.
    Binary files contain data in the form of Bytes. A file of this type can contain any type of Byte value, including Bytes that cannot be displayed or read as text. In this case, the field values in the file have to be the same as the internal format of the assigned field in BI.
    Choose Properties if you want to display the general adapter properties.
                                e.      Select the path to the file that you want to load or enter the name of the file directly, for example C:/Daten/US/Kosten97.csv.
    You can also create a routine that determines the name of your file. If you do not create a routine to determine the name of the file, the system reads the file name directly from the File Name field.
                                  f.      Depending on the adapter and the file to be loaded, make further settings.
    ■       For binary files:
    Specify the character record settings for the data that you want to transfer.
    ■       Text-type files:
    Specify how many rows in your file are header rows and can therefore be ignored when the data is transferred.
    Specify the character record settings for the data that you want to transfer.
    For ASCII files:
    If you are loading data from an ASCII file, the data is requested with a fixed data record length.
    For CSV files:
    If you are loading data from an Excel CSV file, specify the data separator and the escape character.
    Specify the separator that your file uses to divide the fields in the Data Separator field.
    If the data separator character is part of the value, the file indicates this by enclosing the value in particular start and end characters. Enter these start and end characters in the Escape Characters field.
    Say you chose the ; character as the data separator, but your file contains the value 12;45 for a field. If you set " as the escape character, the value in the file must be "12;45" so that 12;45 is loaded into BI. The complete value that you want to transfer has to be enclosed by the escape characters.
    If the escape characters do not enclose the value but are used within it, the system interprets them as a normal part of the value. If you have specified " as the escape character, the value 12"45 is transferred as 12"45 and 12"45" is transferred as 12"45".
    In a text editor (for example, Notepad) check the data separator and the escape character currently being used in the file. These depend on the country version of the file you used.
    Note that if you do not specify an escape character, the space character is interpreted as the escape character. We recommend that you use a different character as the escape character.
    If you select the Hex indicator, you can specify the data separator and the escape character in hexadecimal format. When you enter a character for the data separator and the escape character, these are displayed as hexadecimal code after the entries have been checked. A two character entry for a data separator or an escape sign is always interpreted as a hexadecimal entry.
                                g.      Make the settings for the number format (thousand separator and character used to represent a decimal point), as required.
                                h.      Make the settings for currency conversion, as required.
                                  i.      Make any further settings that are dependent on your selection, as required.
           5.      Go to the Proposal tab page.
    This tab page is only relevant for CSV files. For files in different formats, define the field list on the Fields tab page.
    Here you create a proposal for the field list of the DataSource based on the sample data from your CSV file.
                                a.      Specify the number of data records that you want to load and choose Upload Sample Data.
    The data is displayed in the upper area of the tab page in the format of your file.
    The system displays the proposal for the field list in the lower area of the tab page.
                                b.      In the table of proposed fields, use Copy to Field List to select the fields you want to copy to the field list of the DataSource. All fields are selected by default.
           6.      Go to the Fields tab page.
    Here you edit the fields that you transferred to the field list of the DataSource from the Proposal tab page. If you did not transfer the field list from a proposal, you can define the fields of the DataSource here.
                                a.      To define a field, choose Insert Row and specify a field name.
                                b.      Under Transfer, specify the decision-relevant DataSource fields that you want to be available for extraction and transferred to BI.
                                c.      Instead of generating a proposal for the field list, you can enter InfoObjects to define the fields of the DataSource. Under Template InfoObject, specify InfoObjects for the fields in BI. This allows you to transfer the technical properties of the InfoObjects into the DataSource field.
    Entering InfoObjects here does not equate to assigning them to DataSource fields. Assignments are made in the transformation. When you define the transformation, the system proposes the InfoObjects you entered here as InfoObjects that you might want to assign to a field.
                                d.      Change the data type of the field if required.
                                e.      Specify the key fields of the DataSource.
    These fields are generated as a secondary index in the PSA. This is important in ensuring good performance for data transfer process selections, in particular with semantic grouping.
                                  f.      Specify whether lowercase is supported.
                                g.      Specify whether the source provides the data in the internal or external format.
                                h.      If you choose the external format, ensure that the output length of the field (external length) is correct. Change the entries, as required.
                                  i.      If required, specify a conversion routine that converts data from an external format into an internal format.
                                  j.      Select the fields that you want to be able to set selection criteria for when scheduling a data request using an InfoPackage. Data for this type of field is transferred in accordance with the selection criteria specified in the InfoPackage.
                                k.      Choose the selection options (such as EQ, BT) that you want to be available for selection in the InfoPackage.
                                  l.      Under Field Type, specify whether the data to be selected is language-dependent or time-dependent, as required.
           7.      Check, save and activate the DataSource.
           8.      Go to the Preview tab page.
    If you select Read Preview Data, the number of data records you specified in your field selection is displayed in a preview.
    This function allows you to check whether the data formats and data are correct.
    For More Info:  http://help.sap.com/saphelp_nw70/helpdata/EN/43/01ed2fe3811a77e10000000a422035/content.htm
