Program to load a flat file in a Hierarchy

Hello Experts,
I have a huge hierarchy, about 10 levels deep, which we are trying to load through a flat file. All the readings and guides here are just not helping; creating this file by hand with the Node ID, Next ID, Parent ID, and so on will probably take forever.
Does anyone have code that can read an Excel file and create the table for loading a hierarchy from a flat file?
Thanks.
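(For illustration only: the transformation being asked about, turning level-per-column rows into a NodeID/ParentID table, can be sketched outside ABAP. This is a minimal Python sketch with made-up column values, assuming node names are unique across the hierarchy, not a ready BW loader.)

```python
# Build (node_id, node_name, parent_id) rows from level-based records.
# Each input row lists the node path from the root, one column per level.
def build_hierarchy(rows):
    nodes = {}      # node name -> assigned node id
    out = []        # (node_id, node_name, parent_id) tuples
    next_id = 1
    for path in rows:
        parent_id = 0           # 0 = root
        for name in path:
            if not name:        # stop at the first empty level
                break
            if name not in nodes:
                nodes[name] = next_id
                out.append((next_id, name, parent_id))
                next_id += 1
            parent_id = nodes[name]
    return out

# Hypothetical sample data: segment -> sub-unit -> business unit
rows = [
    ["SEG1", "SPU1", "BU1"],
    ["SEG1", "SPU1", "BU2"],
    ["SEG1", "SPU2", "BU3"],
]
for node_id, name, parent_id in build_hierarchy(rows):
    print(node_id, name, parent_id)
```

Each node gets an id when first seen, and its parent id is whatever id the previous level's node received, which is exactly the NodeID/ParentID pairing the BW hierarchy file needs.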

Hi,
I have code that creates a BW hierarchy file. This code is for a 4-level hierarchy; you need to modify it as per your requirement.
report zbiw_glacc_hier line-size 250.
data :
     begin of itab occurs 0,
      seg(5),
      segname(30),
      spu(5),
      spuname(30),
      bu(5),
      buname(30),
      pu(15),
      puname(30),
*      cust(8),
*      custname(60),
      end of itab.
data : itab_all like itab occurs 0 with header line,
         itab_root like itab occurs 0 with header line.
data : begin of i_hier occurs 0.    " BW structure to hold the hierarchy
*        INCLUDE STRUCTURE E1RSHND.
data:   nodeid like e1rshnd-nodeid,           "NODE ID
        infoobject like e1rshnd-infoobject,   "infoobjectNAME
        nodename like e1rshnd-nodename,       "nodename year
        link(5), "   LIKE E1RSHND-LINK,              "linkname
        parentid like e1rshnd-parentid.       "parent id
data:   leafto(32),                           "Interval - to
        leaffrom(32).                         "Interval - from
        include structure e1rsrlt.
        include structure rstxtsml.
*DATA : PARENT LIKE E1RSHND-NODENAME.
data : end of i_hier.
data : sr(8) type n value '00000000'.
data : begin of itab1 occurs 0, " Level 1 -> level 2 mapping
       z_ah_level1 like itab-seg,
       z_ah_level2 like itab-spu,
       z_ah_desc_level2 like itab-spuname,
       end of itab1.
data : begin of itab2 occurs 0, " Level 2 -> level 3 mapping
       z_ah_level2 like itab-spu,
       z_ah_level3 like itab-bu,
       z_ah_desc_level3 like itab-buname,
       end of itab2.
data : begin of itab3 occurs 0,   " Level 3 -> level 4 mapping
       z_ah_level3 like itab-bu,
       z_ah_level4 like itab-pu,
       z_ah_desc_level4 like itab-puname,
       end of itab3.
data wa like itab.
data : begin of itab_first occurs 0, " Level 1 (root) nodes
       z_ah_level1 like itab-seg,
       end of itab_first.
data : begin of itab_sec occurs 0, " Level 2 nodes
       z_ah_level2 like itab-spu,
       end of itab_sec.
data : begin of itab_third occurs 0, " Level 3 nodes
       z_ah_level3 like itab-bu,
       end of itab_third.
data : begin of i_download occurs 0,
       record(250),
       end of i_download.
data : v_pid like e1rshnd-nodeid,
       v_line(250).
start-of-selection.
CALL FUNCTION 'UPLOAD'
EXPORTING
*   CODEPAGE                      = ' '
   FILENAME                      = 'C:/'   " adjust to your file's full path
   FILETYPE                      = 'DAT'
  TABLES
    DATA_TAB                      = itab .
IF SY-SUBRC <> 0.
* MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
*         WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
itab_all[] = itab[].
  itab_root[] = itab_all[].
  sort itab_all by seg spu bu pu .
  sort itab_root by seg.
  delete adjacent duplicates from itab_root comparing seg.
* Appending Root level of node
*  sr = 00000001.
*  move : sr to i_hier-nodeid,
*        '0HIER_NODE' to i_hier-infoobject,
*        'SHA HIERARCHY' to i_hier-nodename,
*        '00000000' to i_hier-parentid,
*        'EN' to i_hier-langu,
*        'SHA HIERARCHY' to i_hier-txtmd.
*  append i_hier.
*  clear i_hier.
v_pid  = '00000000'.
loop at itab_root.
if itab_root-seg = itab_root-spu.
  delete itab_root.
endif.
endloop.
  loop at itab_root.
* Appending first level of the node
    sr = sr + 1.
    move : sr to i_hier-nodeid,
          '0HIER_NODE' to i_hier-infoobject,
           itab_root-seg to i_hier-nodename,
           v_pid to i_hier-parentid,
          'EN' to i_hier-langu,
          itab_root-segname to i_hier-txtsh,
          itab_root-segname to i_hier-txtmd,
          itab_root-segname to i_hier-TXTLG.
    append i_hier.
    clear i_hier.
  endloop.
  loop at itab_all.
    move itab_all to wa.
* Separation of level 2 of the node
    at new spu.
      if not wa-spu is initial.
        move : wa-seg to itab1-z_ah_level1,
               wa-spu to itab1-z_ah_level2,
               wa-spuname to itab1-z_ah_desc_level2.
        append itab1.
        clear itab1.
      endif.
    endat.
* Separation of level 3 of the node
    at new bu.
      if not wa-bu is initial.
        move : wa-spu to itab2-z_ah_level2,
               wa-bu to itab2-z_ah_level3,
               wa-buname to itab2-z_ah_desc_level3.
        append itab2.
        clear itab2.
      endif.
    endat.
* Separation of level 4 of the node
    at new pu.
      if not wa-pu  is initial.
        move : wa-bu to itab3-z_ah_level3,
               wa-pu to itab3-z_ah_level4,
               wa-puname to itab3-z_ah_desc_level4.
        append itab3.
        clear itab3.
      endif.
    endat.
    clear wa.
  endloop.
  sort itab1 by z_ah_level1 z_ah_level2.
  delete adjacent duplicates from itab1 comparing z_ah_level1
                                              z_ah_level2.
  sort itab2 by z_ah_level2 z_ah_level3.
  delete adjacent duplicates from itab2 comparing z_ah_level2
                                                  z_ah_level3.
  sort itab3 by z_ah_level3 z_ah_level4.
  delete adjacent duplicates from itab3 comparing z_ah_level3
                                                  z_ah_level4.
loop at itab_root.
move itab_root-SEG to itab_first-z_ah_level1.
append itab_first.
clear itab_first.
endloop.
itab_sec[] = itab2[].
itab_third[] = itab3[].
sort itab_first by z_ah_level1.
delete adjacent duplicates from itab_first.
sort itab_sec by z_ah_level2.
delete adjacent duplicates from itab_sec.
sort itab_third by z_ah_level3.
delete adjacent duplicates from itab_third.
data : indx like sy-tabix.
clear indx.
loop at itab_first.
indx = sy-tabix.
read table itab_sec with key z_ah_level2 = itab_first-z_ah_level1
                        binary search.
if sy-subrc eq 0.
delete itab_first index indx.
endif.
endloop.
clear indx.
loop at itab_sec.
indx = sy-tabix.
read table itab_third with  key z_ah_level3 = itab_sec-z_ah_level2
                        binary search.
if sy-subrc eq 0.
delete itab_sec index indx.
endif.
endloop.
clear indx.
loop at itab1.
If itab1-z_ah_level1 = itab1-z_ah_level2.
  delete itab1.
endif.
endloop.
loop at itab2.
If itab2-z_ah_level2 = itab2-z_ah_level3.
  delete itab2.
endif.
endloop.
loop at itab3.
If itab3-z_ah_level3 = itab3-z_ah_level4.
  delete itab3.
endif.
endloop.
* Appending First level node to Hierarchy table
  loop at itab_root.
    loop at itab1 where z_ah_level1 = itab_root-SEG.
      read table i_hier with key nodename = itab1-z_ah_level2.
       if sy-subrc ne 0.
      read table i_hier with key nodename = itab1-z_ah_level1.
      if sy-subrc eq 0.
      move i_hier-nodeid to i_hier-parentid.
* Appending First level node to Hierarchy table
      sr = sr + 1.
      move  : sr to i_hier-nodeid,
              '0HIER_NODE' to i_hier-infoobject,
              itab1-z_ah_level2 to i_hier-nodename,
              'EN' to i_hier-langu,
              itab1-z_ah_desc_level2 to i_hier-txtsh,
              itab1-z_ah_desc_level2 to i_hier-txtmd,
              itab1-z_ah_desc_level2 to i_hier-txtlg.
      append i_hier.
      clear i_hier.
      endif.
      endif.
    endloop.
  endloop.
* Appending second level node to Hierarchy table
  loop at itab_sec.
    loop at itab2 where z_ah_level2 = itab_sec-z_ah_level2.
      read table i_hier with key nodename = itab2-z_ah_level3.
       if sy-subrc ne 0.
      read table i_hier with key nodename = itab2-z_ah_level2.
       if sy-subrc eq 0.
      move i_hier-nodeid to i_hier-parentid.
* Appending second level node to Hierarchy table
      sr = sr + 1.
      move : sr to i_hier-nodeid,
            '0HIER_NODE' to i_hier-infoobject,
            itab2-z_ah_level3 to i_hier-nodename,
            'EN' to i_hier-langu,
            itab2-z_ah_desc_level3  to i_hier-txtsh,
            itab2-z_ah_desc_level3 to i_hier-txtmd,
            itab2-z_ah_desc_level3 to i_hier-txtlg.
      append i_hier.
      clear i_hier.
     endif.
     endif.
    endloop.
  endloop.
* Appending third level node to Hierarchy table
  loop at itab_third.
    loop at itab3 where z_ah_level3 = itab_third-z_ah_level3.
      read table i_hier with key nodename = itab3-z_ah_level4.
    if sy-subrc ne 0.
      read table i_hier with key nodename = itab3-z_ah_level3.
       if sy-subrc eq 0.
      move i_hier-nodeid to i_hier-parentid.
* Appending third level node to Hierarchy table
      sr = sr + 1.
      move : sr to i_hier-nodeid,
*            'ZGL_AC_CD' to i_hier-infoobject,
            '0GL_ACCOUNT' to i_hier-infoobject,
            itab3-z_ah_level4 to i_hier-nodename,
            'EN' to i_hier-langu,
            itab3-z_ah_desc_level4 to i_hier-txtsh,
            itab3-z_ah_desc_level4 to i_hier-txtmd,
            itab3-z_ah_desc_level4 to i_hier-txtlg.
      append i_hier.
      clear i_hier.
      endif.
        endif.
    endloop.
  endloop.
  loop at i_hier.
    concatenate i_hier-nodeid i_hier-infoobject i_hier-nodename
                i_hier-link i_hier-parentid i_hier-leafto
                i_hier-leaffrom i_hier-langu i_hier-txtsh
                i_hier-txtmd i_hier-txtlg into v_line
                separated by ','.
    move  v_line to i_download-record.
    append i_download.
    write : / i_hier.
  endloop.
  perform f_download.
*&---------------------------------------------------------------------*
*&      Form  f_download
*&---------------------------------------------------------------------*
form f_download.
  call function 'DOWNLOAD'
   exporting
*   BIN_FILESIZE                  = ' '
*   CODEPAGE                      = ' '
*   FILENAME                      = ' '
      filetype                      = 'DAT'
*   ITEM                          = ' '
*   MODE                          = ' '
*   WK1_N_FORMAT                  = ' '
*   WK1_N_SIZE                    = ' '
*   WK1_T_FORMAT                  = ' '
*   WK1_T_SIZE                    = ' '
*   FILEMASK_MASK                 = ' '
*   FILEMASK_TEXT                 = ' '
*   FILETYPE_NO_CHANGE            = ' '
*   FILEMASK_ALL                  = ' '
*   FILETYPE_NO_SHOW              = ' '
*   SILENT                        = 'S'
*   COL_SELECT                    = ' '
*   COL_SELECTMASK                = ' '
*   NO_AUTH_CHECK                 = ' '
* IMPORTING
*   ACT_FILENAME                  =
*   ACT_FILETYPE                  =
*   FILESIZE                      =
*   CANCEL                        =
    tables
      data_tab                      = i_download
*   FIELDNAMES                    =
* EXCEPTIONS
*   INVALID_FILESIZE              = 1
*   INVALID_TABLE_WIDTH           = 2
*   INVALID_TYPE                  = 3
*   NO_BATCH                      = 4
*   UNKNOWN_ERROR                 = 5
*   GUI_REFUSE_FILETRANSFER       = 6
*   CUSTOMER_ERROR                = 7
*   OTHERS                        = 8
  if sy-subrc <> 0.
* MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
*         WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
  endif.
endform.                    " f_download
Hope this will help. Please assign points if it helps you.
Cheers,
Balaji

Similar Messages

  • Loading Multiple Flat File around 80+

    Hi,
Loading multiple flat files into a BI system.
My issue: I have a scenario of loading multiple (80+) flat files into the BI system every month, and my client wants it done through a process chain or automation, without manual intervention.
Can anyone suggest how I can achieve this?
    Regards,
    Prabhakar.

    Hi All,
We have developed a logic to upload multiple flat files (.CSV) at a time. Please find the logic of that routine below.
program filename_routine
    Global code
$$ begin of global - insert your declaration only below this line  -
* Enter here global variables and type declarations
* as well as additional form routines, which you may call from the
* main routine COMPUTE_FLAT_FILE_FILENAME below
    data : v_filename type string.
    data : v_foldername type string.
    data : v_uploadfile type string.
    data:  v_download_file type string.
    data : v_ext(4) type c.
    data : v_FILE_EXISTS type c.
    data : v_file type DXFILE-FILENAME.
    data : v_count(3) type c.
    data : v_length type i.
    TYPES: BEGIN OF  TY_file,
            LINE(900),
          END OF  TY_file.
    DATA: GIT_file TYPE TABLE OF TY_file.
    $$ end of global - insert your declaration only before this line   -
    form compute_flat_file_filename
      using    p_infopackage  type rslogdpid
               p_datasource   type rsoltpsourcer
               p_logsys       type rsslogsys
      changing p_filename     type RSFILENM
               p_subrc        like sy-subrc.
$$ begin of routine - insert your code only below this line        -
* This routine will be called by the adapter,
* when the infopackage is executed.
      v_count = 1.
    *---- As per the folder location of the files & system we have to change
    *the path fo the folder in the below variable i.e., 'c:\............
      v_foldername =
      'C:\Documents and Settings\Prabhakar-HP\Desktop\test\'.
    *---- As per the Prefix of the files like 'CSI_01_..' we have change the
    *prefix if required in the below variable i.e., 'CSI_01_..', IF REQUIRED
    *OR IF WANT TO CHANGE THE PREFIX TO SOME OTHER.
      v_filename = 'CSI_01_'.
      v_ext = '.csv'.
      condense v_count.
      concatenate   v_foldername
                    v_filename
                    v_count
                    v_ext       into v_uploadfile.
      v_file = v_uploadfile.
      CALL FUNCTION 'DX_FILE_EXISTENCE_CHECK'
        EXPORTING
          FILENAME       = v_file
          PC             = 'X'
        IMPORTING
          FILE_EXISTS    = v_FILE_EXISTS
        EXCEPTIONS
          RFC_ERROR      = 1
          FRONTEND_ERROR = 2
          NO_AUTHORITY   = 3
          OTHERS         = 4.
      IF SY-SUBRC <> 0.
        MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
        WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      ENDIF.
      while v_FILE_EXISTS = 'X'.
        v_file = v_uploadfile.
        CALL FUNCTION 'DX_FILE_EXISTENCE_CHECK'
          EXPORTING
            FILENAME       = v_file
            PC             = 'X'
          IMPORTING
            FILE_EXISTS    = v_FILE_EXISTS
          EXCEPTIONS
            RFC_ERROR      = 1
            FRONTEND_ERROR = 2
            NO_AUTHORITY   = 3
            OTHERS         = 4.
        IF SY-SUBRC <> 0.
          MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
          WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
        ENDIF.
        if v_file_exists = 'X'.
          CALL FUNCTION 'GUI_UPLOAD'
            EXPORTING
              FILENAME                = v_uploadfile
              FILETYPE                = 'ASC'
              HAS_FIELD_SEPARATOR     = 'X'
            TABLES
              DATA_TAB                = GIT_file
            EXCEPTIONS
              FILE_OPEN_ERROR         = 1
              FILE_READ_ERROR         = 2
              NO_BATCH                = 3
              GUI_REFUSE_FILETRANSFER = 4
              INVALID_TYPE            = 5
              NO_AUTHORITY            = 6
              UNKNOWN_ERROR           = 7
              BAD_DATA_FORMAT         = 8
              HEADER_NOT_ALLOWED      = 9
              SEPARATOR_NOT_ALLOWED   = 10
              HEADER_TOO_LONG         = 11
              UNKNOWN_DP_ERROR        = 12
              ACCESS_DENIED           = 13
              DP_OUT_OF_MEMORY        = 14
              DISK_FULL               = 15
              DP_TIMEOUT              = 16
              OTHERS                  = 17.
          IF SY-SUBRC <> 0.
           MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
                   WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
          elseif sy-subrc = 0.
**----- As per the requirement, if you want to change the download file
*path, change the path in the below variable i.e.,
*'c:\.......
    v_download_file =
    'C:\Documents and Settings\Prabhakar-HP\Desktop\test\satheesh.CSV'.
            CALL FUNCTION 'GUI_DOWNLOAD'
              EXPORTING
                FILENAME                = v_download_file
                FILETYPE                = 'ASC'
                APPEND                  = 'X'
              TABLES
                DATA_TAB                = git_file
              EXCEPTIONS
                FILE_WRITE_ERROR        = 1
                NO_BATCH                = 2
                GUI_REFUSE_FILETRANSFER = 3
                INVALID_TYPE            = 4
                NO_AUTHORITY            = 5
                UNKNOWN_ERROR           = 6
                HEADER_NOT_ALLOWED      = 7
                SEPARATOR_NOT_ALLOWED   = 8
                FILESIZE_NOT_ALLOWED    = 9
                HEADER_TOO_LONG         = 10
                DP_ERROR_CREATE         = 11
                DP_ERROR_SEND           = 12
                DP_ERROR_WRITE          = 13
                UNKNOWN_DP_ERROR        = 14
                ACCESS_DENIED           = 15
                DP_OUT_OF_MEMORY        = 16
                DISK_FULL               = 17
                DP_TIMEOUT              = 18
                FILE_NOT_FOUND          = 19
                DATAPROVIDER_EXCEPTION  = 20
                CONTROL_FLUSH_ERROR     = 21
                OTHERS                  = 22.
            IF SY-SUBRC <> 0.
    MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
            WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
            ENDIF.
            refresh git_file.
          ENDIF.
        endif.
        if v_file_exists = 'X'.
          if p_subrc = 0.
            v_count = v_count + 1.
            condense v_count.
            concatenate
                 v_foldername
                 v_filename
                 v_count
                 v_ext into v_uploadfile.
          endif.
        else.
        endif.
      endwhile.
      v_uploadfile = v_download_file.
      p_filename = v_uploadfile.
      p_subrc = 0.
    $$ end of routine - insert your code only before this line         -
endform.
Regards,
    Prabhakar.

  • Error loading a flat file.

    Hi,
    I am getting the following error while loading a flat file (in Preview):
    Error 8 when compiling the upload program: row 227, message: Data type /BIC/CCHBZFI_PREV_YR was found in a newe
    Please help.

Hi,
Check whether the file is open; close the file and try to pull it once again.
Do the fields match the DataSource? Check that once; it should be in sync with the metadata fields.
The problem may also be with the data itself: characters outside the permitted set, or lower case, in the flat file. That can be solved via RSKC.

  • Loading of flat file (csv) into PSA – no data loaded

    Hi BW-gurus,
We have an issue loading a flat file (csv) into the PSA using an infopackage (BI 7.0).
The infopackage has been used for a while. Previously, consultants with the SAP_ALL profile ran the infopackage. Now we want a few super users to run it.
    We have created a role for the super users, including authorization objects:
    Data Warehousing objects: S_RS_ADMWB
    Activity: 03, 16, 23, 63, 66
    Data Warehousing Workbench obj: INFOAREA, INFOOBJECT, INFOPACKAG, MONITOR, SOURCESYS, WORKBENCH
Data Warehousing Workbench - datasource (version > BW 3.x): S_RS_DS
    Activity: All
    Datasource: All
    Subobject for New DataSource: All
    Sourcesystem: FILE
Data Warehousing Workbench - infosource (flex update): S_RS_ISOUR
    Activity: Display, Maintain, Request
    Application Component: All
    InfoSource: All
    InfoSource Subobject: All values
    As mentioned, the infopackage in question has been used by consultants with the SAP_ALL profile for some time and has been working just fine. When the super users with the new role execute the infopackage, the records are found but not loaded into the PSA. The load seems to be stuck, but no error message occurs. The file we are trying to load contains only 15 records.
    Details monitor:
    Overall status: Missing messages or warnings (yellow)
    Requests (messages): Everything ok (green)
      ->  Data request arranged (green)
      ->  Confirmed with: OK (green)
    Extraction (messages): Errors occurred (yellow)
      ->  Data request received (green)
      -> Data selection scheduled (green)
      -> 15 Records sent (0 Records received) (yellow)
      -> Data selection ended (green)
    Transfer (IDocs and TRFC): Missing messages (yellow)
    Processing (data packet):  Warnings received (yellow)
      -> Data package 1 (? Records): Missing messages (yellow)
         -> Inbound processing (0 records): Missing messages (yellow)
         -> Update PSA (0 Records posted): Missing messages (yellow)
         -> Processing end: Missing messages (yellow)
    Have we forgotten something? Any assistance will be highly appreciated!
    Cheers,
    Anne Therese S. Johannessen

    Hi,
    Try transaction ST01 to trace the authorizations used during the upload with SAP_ALL.
    Then enhance the profile for the super user accordingly.
    Best regards
    Matthias

  • Load a flat file into BW-BPS using SAP GUI

    Hi,
    We are using BW-BPS version 3.5. I implemented the how-to guide "How to load a flat file into BW-BPS using SAP GUI" successfully, without any errors.
    I included three infoobjects in the text file: cost element, posting period and amount. I included the same three infoobjects in the file structure in the global data, as specified in the how-to document.
    The flat file format is like this
    Costelmnt      Postingperiod         Amount   
    XXXXX             #      
    XXXXX             1                          100
    XXXXX             2                           800
    XXXXX             3                           700
    XXXXX             4                           500
    XXXXX             5                           300
    XXXXX             6                           200
    XXXXX             7                           270
    XXXXX             8                           120
    XXXXX             9                           145 
    XXXXX            10                           340
    XXXXX            11                           147
    XXXXX            12                           900 
    I successfully loaded the above flat file into the BPS cube, and it is displayed in the layout as well.
    But users are requesting to load the flat file in the below format:
    Costelmnt        Annual(PP=#)   Jan(PP=1)   Feb(PP=2) ........................................Dec(PP=12)
    XXXXX              Blank                       100           800                                                   900
    Is it possible to load a flat file like this? They want to load a single row instead of 13 rows for each cost element.
    How can this be done? Please suggest if anybody has come across this requirement.
    In the infocube we have only one infoobject 0FISCPER3 (posting period) and one 0AMOUNT (amount).
    Do we need 13 infoobjects, one per posting period and amount?
    Is there any possibility to implement a user exit, like the ones we use in BEx queries?
    Please share your ideas on this.
    Thanks in advance
    Best regards
    SS

    Hi,
    There are 2 ways to do this.
    One is to change the structure of the cube to have 12 key figures for the 12 posting periods.
    Another way is to write an ABAP function module to fetch the values from each record based on the posting period and store them in the cube for the corresponding characteristic. This way, you don't have to change the structure of the cube.
    If this particular cube is not used anywhere else, I would suggest changing the structure itself.
    Hope this helps.
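    The second approach, collapsing the 13 rows per cost element into one record with a value per posting period, can be sketched like this (a Python sketch with hypothetical values, just to show the pivot logic, not the ABAP function module itself):

    ```python
    # Pivot (cost_element, posting_period, amount) rows into one record per
    # cost element, with one slot per posting period 1..12.
    from collections import defaultdict

    def pivot(rows, periods=range(1, 13)):
        by_elem = defaultdict(dict)
        for elem, period, amount in rows:
            by_elem[elem][period] = amount
        # Missing periods stay None (like the blank annual value in the example).
        return {e: [vals.get(p) for p in periods] for e, vals in by_elem.items()}

    rows = [("400100", 1, 100), ("400100", 2, 800), ("400100", 12, 900)]
    print(pivot(rows))
    ```

    Reading the wide file back in is the reverse of this loop: one input record fans out into up to 12 (cost element, period, amount) rows.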

  • How can we load a flat file with very, very long lines into a table?

    Hello:
    We have to load a flat file with OWB. The problem is that each line in the file might be up to 30,000 characters long (up to 1,000 units of information per line, 30 characters each).
    Of course, our mapping should insert these units of information as independent rows in a table (1,000 rows, in our example).
    We do not know how to go about it. We usually load flat files using table functions, but we are not sure that they will be able to cope with these huge lines. And how should we pivot those lines? Will the Pivot operator do the trick? Or maybe we should pivot those lines outside the database before loading them?
    We are a bit lost. Any suggestion would be appreciated.
    Regards
    Edited by: [email protected] on Oct 29, 2008 8:43 AM
    Edited by: [email protected] on Oct 29, 2008 8:44 AM

    Yes, well, we could define a 1,000-column external table and then map those 1,000 columns to the Pivot operator; perhaps it would work. But we have been investigating a little, and we think we have found a better solution: there is a unix utility called "fold". This utility can split our 30,000-character lines into 1,000 lines of 30 characters each: just what we needed. Then we can load the resulting file using an external table.
    We think this is a much better solution than handling 1,000 columns in the external table and in the Pivot operator.
    Thanks for your help.
    Regards
    Edited by: [email protected] on Oct 29, 2008 10:35 AM
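    What "fold -w 30" does to each line can be sketched in a few lines of Python (an illustration of the fixed-width split, assuming every unit really is exactly 30 characters):

    ```python
    # Split one long fixed-width record into its 30-character units,
    # one output line per unit, as the unix "fold -w 30" utility would.
    def fold_line(line, width=30):
        return [line[i:i + width] for i in range(0, len(line), width)]

    long_line = "A" * 30 + "B" * 30 + "C" * 30   # three 30-char units
    print(fold_line(long_line))
    ```

    Applied to every line of the file, a 30,000-character line becomes 1,000 loadable rows.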

  • How to load a flat file with lot of records

    Hi,
    I am trying to load a flat file with hundreds of records into an apps table. When I create the process and deploy it onto the console, it asks for an input in an HTML form. Why does it ask for an input when I have specified the input file directory in my process? Is there any way around this, where it just reads all the records from the flat file directly? Are custom queues in any way related to what I am about to do? Any documents on this process will be greatly appreciated. If anyone can help me on this it will be great. Thank you.

    After deploying it, do you see that it is active and the status is on in the BPEL console's BPEL Process tab? It should not ask for input unless you are clicking it from the Dashboard tab. Do not click it from the Dashboard. Instead, put some files into the input directory. Wait a few seconds and you should see instances of the BPEL process created, which start to process the files asynchronously.

  • How to load a flat file with utf8 format in odi as source file?

    Hi All,
    Does anybody know how we can load a flat file in UTF-8 format into ODI as a source file? Please guide me.
    Regards,
    Sahar

    Could you explain which problem you are facing?
    Francesco

  • Loading Multiple Flat File in to BI System with one InfoPackage

    Greetings,
    I have developed a routine to load multiple flat files in one InfoPackage, but it doesn't work. Only the last file is loaded. Who can help me?
    Loading 'R:\Verzeichnis \ing_wan_b_002.fall.csv' to 'R:\Verzeichnis \ing_wan_b_240.fall.csv'
    Routine:
    DATA: VerzUndDateiname TYPE string value 'R:\Verzeichnis \ing_wan_b_000.fall.csv',
          VerzUndDateinameKomplett TYPE string,
          count TYPE i value 2,
          countN(3) TYPE n.
    WHILE count <= 240.
    countN = count.
    VerzUndDateinameKomplett = VerzUndDateiname.
    REPLACE '000' WITH countN
                  INTO VerzUndDateinameKomplett.
    p_filename = VerzUndDateinameKomplett.
    count = count + 1.
    ENDWHILE.
    Best Regards
    Jens
    Edited by: JB6493 on May 18, 2009 1:03 PM
    Edited by: JB6493 on May 18, 2009 1:07 PM

    Hello Jens,
    you have to process the InfoPackage 239 times. Your routine is executed once per processing of the InfoPackage; it runs the whole WHILE loop and ends up with the last filename after 239 iterations.
    Either concatenate the files you want to load (if possible) and load them in one go, or use a process chain to run the InfoPackage 239 times.
    Kind regards,
    Christoph
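    To see why a single run picks up only the last file, the same substitution loop can be sketched in Python (mirroring the template from the question): the loop produces 239 names, but a filename routine returns only the final value assigned to p_filename.

    ```python
    # Generate every file name the routine's WHILE loop walks through, by
    # substituting a zero-padded counter into the '000' placeholder
    # (the same effect as the ABAP REPLACE).
    template = r"R:\Verzeichnis \ing_wan_b_000.fall.csv"

    filenames = [template.replace("000", f"{n:03d}") for n in range(2, 241)]
    print(len(filenames), filenames[0], filenames[-1])
    ```

    The routine overwrites its result on every iteration, so only filenames[-1] survives; hence the advice to run the InfoPackage once per file, or to merge the files first.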

  • Error when loading a flat file

    Hello,
    I am trying to load a flat file into our BW for the first time.  I keep getting the error:
    Error 1 when loading external data
    Message no. RSAR234
    I have searched SDN and looked at the OSS notes, and have tried several suggestions, but I still cannot get the file to load. We are currently on version 3.5.
    My transfer structure and flat file do match.
    My transfer structure for IO_MAT_ATTR has a transfer method of PSA and is laid out as follows:
    IO_MAT        Material Number    IO_MAT
    IO_MATNM      Material Name      IO_MATNM
    IO_MAT is: CHAR 15
    IO_MATNM is: CHAR 30
    My flat file is saved as a CSV and is as follows:
    MATONE,TEA
    MATTWO,COFFEE
    My file is saved on my local PC and is not open when I try to load it.  When I attempt to preview the file in my infopackage, I get the same error there as well.
    Any suggestions would be greatly appreciated.
    Thanks
    Charla

    Hi Charla,
    Error 1 means that the system is unable to access the file. Make sure that the path is correct and also check if the data separator is correctly defined in the "external tab" of the infopackage.
    I guess the settings in the "external tab" of the infopackage have some issues.
    Bye
    Dinesh

  • Error while loading CSV Flat file

    Hi,
    I am getting the following errors while loading a flat file into the DataSource:
    1. Error 'Unit CS is not created in language EN...' at conversion exit CONVERSION_EXIT_CUNIT_INPUT (field UNIT record 1, value CS)
    2. Error 'The argument '9,018.00' cannot be interpreted as a number' on assignment field /BIC/ZTVSA record 1 value 9,018.00
    What can I do to correct this?
    Thanks

    I am taking the following (Headers) fields in this order in a CSV file :
    0LOGSYS     ,0MATERIAL,0UNIT,0BASE_UOM,0COMP_CODE,0PLANT,0FISCPER,0FISCVARNT,
    0CURRENCY,0PRICE_TYPE,ZTVSAKF,ZVOTVSKF,0PRICE_CTRL,ZPRICE_STKF,0PRICE_BASE,
    0VAL_CLASS,0CURTYPE,0VALUATION
    I have added all these fields in the DataSource as well. T-Code CUNIT has CS as a Unit. 
    I deleted and recreated the DataSource and am getting this error now:
    Error 'Unit CS is not created in language EN...' at conversion exit CONVERSION_EXIT_CUNIT_INPUT (field UNIT record 1, value CS)
    Error 'The argument '1,398.00' cannot be interpreted as a number' on assignment field /BIC/ZTVSAKF record 1 value 1,398.00
    Error 'The argument '8,10.00' cannot be interpreted as a number' on assignment field /BIC/ZTVSKF record 1 value 8,10.00
    Error 'Unit 0 is not created in language EN...' at conversion exit CONVERSION_EXIT_CUNIT_INPUT (field BASE_UOM record 1, value )
    Error 'Unit CS is not created in language EN...' at conversion exit CONVERSION_EXIT_CUNIT_INPUT (field UNIT record 1, value CS)
    Error 'The argument '5,50.00' cannot be interpreted as a number' on assignment field /BIC/ZTVSAKF record 1 value 5,50.00
    Does it matter what order the columns are in?
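    The "cannot be interpreted as a number" errors come from the thousands separators in the values. A Python sketch of the kind of cleanup that avoids them (just to illustrate; the real fix is to export the CSV without number formatting):

    ```python
    # Strip thousands separators so values like '9,018.00' convert cleanly;
    # the load fails because the conversion routine does not accept commas.
    def to_number(value):
        return float(value.replace(",", ""))

    print(to_number("9,018.00"), to_number("1,398.00"))
    ```

    Column order matters too: the fields in the file must appear in the same order as in the DataSource.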

  • Unable to load the flat file  from client  work station

    Hi,
    I am trying to load a flat file (.CSV) from my desktop (client workstation) and am getting the following error.
    An upload from the client workstation in the background is not possible
    Message no. RSM860
    Diagnosis
    You cannot load data from the client workstation in the background.
    Procedure
    Transfer your data to the application server and load it from there.
    I received an .XLS file and converted it to a .CSV file, which I saved on my desktop and am trying to load.
    Please help me with how to go about this.
    Thanks in advance.
    Christy.

    Hi All,
    Again, I have tried to load the flat file from the client workstation with direct loading, and I got the following errors.
    Error 1:
    Record 990: Contents '50,000' from field /BIC/ZPLQTY_B not convertible in type QUAN -> Long Text
    I received many errors like this.
    Error 2:
    Error in an arithmetic operation in record 259
    Please help me load the flat file successfully.
    If I have to save the flat file on the application server, how do I do that? Please provide step-by-step instructions.
    Thanks
    Christy
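    The two errors above ('50,000' not convertible in type QUAN, arithmetic error in record 259) are easier to fix if you first find every offending row yourself. A hedged sketch of such a pre-check, assuming a semicolon-delimited file and hypothetical column positions:

```python
def find_bad_records(lines, numeric_cols, delimiter=';'):
    """Report (record number, column index, value) for every field in the
    given columns that cannot be parsed as a plain number -- the same kind
    of value ('50,000') that the load rejected in type QUAN."""
    bad = []
    for recno, line in enumerate(lines, start=1):
        fields = line.rstrip('\n').split(delimiter)
        for col in numeric_cols:
            try:
                float(fields[col])
            except (ValueError, IndexError):
                bad.append((recno, col, fields[col] if col < len(fields) else None))
    return bad

rows = ["P1;10.5", "P2;50,000", "P3;7"]
print(find_bad_records(rows, numeric_cols=[1]))  # flags record 2
```

    Once the file is clean, you can place it on the application server (your Basis team or an OS-level share can tell you the path visible in AL11) and point the InfoPackage at the server-side path instead of the client workstation.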

  • Difference b/w Loading a Flat File made from EXCEL and NOTEPAD

    Hi
    Is there any difference in loading a flat file made from MS Excel versus one made from Notepad? I mean performance, errors, etc.
    GSR

    S R G,
    There's not much difference, except that when loading from Excel we save the file in CSV format, while in Notepad we type the separators (such as commas) ourselves and write one row at a time.
    Everything else is the same.
    That's why it is preferable to load from Excel: it is easier to create and read the file there.

  • Cube creation & Data loading from Flat file

    Hi All,
    I am new to BI 7 and am trying to create a cube and load data into it from a flat file.
    I successfully created the InfoSource and the cube (I used the InfoSource as a template for the cube),
    but got stuck at that point.
    I need help on how to create transfer rules/update rules and then load data into the cube.
    Thanks,
    Praveen.

    Hi
    Right-click on the InfoSource -> Additional Functions -> Create Transfer Rules.
    In the window that opens, insert the fields you want to load from the flat file, then activate.
    Now right-click on the cube -> Additional Functions -> Create Update Rules, and activate them.
    Click on the small arrow on the left and navigate down to the last node (the DataSource).
    Right-click on it -> Create InfoPackage -> External Data tab -> give your flat file path and select CSV format -> Schedule tab -> click Start.
    Hope it helps.
    cheers.

  • Tracking history while loading from flat file

    Dear Experts
    I have a scenario wherein I have to load data from a flat file at regular intervals, and I also want to track the changes made to the data.
    The data looks like this and keeps changing; the key is P1,A1,C1:
    DAY 1 -- P1,A1,C1,100,120,100
    DAY 2 -- P1,A1,C1,125,123,190
    DAY 3 -- P1,A1,C1,134,111,135
    DAY 4 -- P1,A1,C1,888,234,129
    I am planning to load the data into an ODS and then into an InfoCube for reporting. What will the result be, and how can I track history, e.g. if I want to see both the Day 1 data and the current data for P1,A1,C1?
    Just to mention: since I am loading from a flat file, I am not mapping RECORDMODE in the ODS from the flat file.
    Thanks and regards
    Neel

    Hi
    You don't mention your BI release level, so I will assume you are on the current release, SAP NetWeaver BI 2004s.
    Consider loading to a write-optimized DataStore object to store the data. That way, you automatically have a unique technical key for each record, and the data is kept for historical purposes. In addition, load to a standard DataStore object, which tracks the changes in its change log. Then load the cube from the change log (this avoids your summarization concern), as the changes arrive as updates (after images) in the standard DataStore object.
    Thanks for any points you choose to assign
    Best Regards -
    Ron Silberstein
    SAP
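    The change-log mechanism described above can be illustrated outside SAP. This is only a hedged sketch of the idea, with invented key-figure names: on activation, the standard DataStore writes a before image (old values with reversed sign) and an after image (new values), so adding both rows to the cube applies exactly the net change.

```python
def change_log_delta(old, new):
    """Simulate one change-log entry for an additive cube update: the
    before image carries the old key figures with reversed sign, the
    after image carries the new values; summing both gives the net delta
    posted to the cube (sketch of the mechanism, not SAP code)."""
    before = {k: -v for k, v in old.items()}   # before image, signs reversed
    after = dict(new)                          # after image
    net = {k: before[k] + after[k] for k in after}
    return before, after, net

# Key P1,A1,C1: Day 1 values, then Day 2 values from the example above.
day1 = {'kf1': 100, 'kf2': 120, 'kf3': 100}
day2 = {'kf1': 125, 'kf2': 123, 'kf3': 190}
before, after, net = change_log_delta(day1, day2)
print(net)  # net change applied to the cube: {'kf1': 25, 'kf2': 3, 'kf3': 90}
```

    The write-optimized DataStore, by contrast, simply keeps every loaded record under its technical key, which is what lets you still query the Day 1 state later.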
