Loading different flat files and consolidating them in one BEx report

I want to know the best way in SAP BW to load different flat files into a consolidated InfoCube or DSO and show the data in a BEx report. The files all have the same structure.

You need to create a DataSource with the exact structure of your files. This DataSource is then linked to your cube via a transformation, in which you define the characteristics, key figures and time characteristics.
Load each file into the DataSource and then into the cube; after that the data is available for reporting.

Similar Messages

  • Loading Multiple Flat Files into a BI System with one InfoPackage

    Greetings,
    I have developed a routine to load multiple flat files in one InfoPackage, but it doesn't work: only the last file is loaded. Can anyone help?
    Loading 'R:\Verzeichnis \ing_wan_b_002.fall.csv' through 'R:\Verzeichnis \ing_wan_b_240.fall.csv'
    Routine:
    DATA: VerzUndDateiname         TYPE string VALUE 'R:\Verzeichnis \ing_wan_b_000.fall.csv',
          VerzUndDateinameKomplett TYPE string,
          count                    TYPE i VALUE 2,
          countN(3)                TYPE n.
    * Build the filenames ..._002 to ..._240 by replacing the '000'
    * placeholder with the current counter value.
    WHILE count <= 240.
      countN = count.
      VerzUndDateinameKomplett = VerzUndDateiname.
      REPLACE '000' WITH countN
                    INTO VerzUndDateinameKomplett.
      p_filename = VerzUndDateinameKomplett.
      count = count + 1.
    ENDWHILE.
    Best Regards
    Jens

    Hello Jens,
    you have to process the InfoPackage 239 times. Your routine is executed only once per InfoPackage run; it runs the WHILE loop to completion and therefore always ends up with the last filename after 239 iterations.
    Either try to concatenate the files you want to load (if possible) and load them in one go. Alternatively you could try to use a process chain to run the InfoPackage 239 times.
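    If you keep the routine approach instead, the counter has to survive between InfoPackage runs, so that each run picks up the next file. A minimal sketch of such a filename routine, assuming a hypothetical one-row counter table ZFILE_COUNT with a numeric field ZCOUNT:
    * Sketch only: ZFILE_COUNT/ZCOUNT are made-up names for a custom
    * table that persists the file counter between InfoPackage runs.
    DATA: lv_count     TYPE i,
          lv_countn(3) TYPE n,
          lv_fname     TYPE string VALUE 'R:\Verzeichnis \ing_wan_b_000.fall.csv'.
    SELECT SINGLE zcount FROM zfile_count INTO lv_count.
    lv_count = lv_count + 1.                    " next file for this run
    lv_countn = lv_count.
    REPLACE '000' WITH lv_countn INTO lv_fname. " build the current filename
    p_filename = lv_fname.
    UPDATE zfile_count SET zcount = lv_count.   " remember it for the next run
    p_subrc = 0.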
    Kind regards,
    Christoph

  • Difference between Loading a Flat File made from Excel and Notepad

    Hi
    Is there any difference in loading a flat file created with MS Excel versus Notepad? I mean performance, errors, etc.
    GSR

    S R G,
    There's not much difference, except that from Excel we save the sheet in CSV format, while in Notepad we type the separators (for example, commas) ourselves and write one row at a time.
    Everything else is the same.
    That is why loading from Excel is preferable: the file is easier to create and maintain.
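    For example, the same record ends up as the identical CSV line whichever tool you create it in (made-up values):
    4711,Office Chair,120.50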

  • How to load 2 fields out of 3 from flat file to Oracle in one interface?

    Hi,
    Is there any way to selectively load a flat file into Oracle within one interface? For example, if my flat file has this structure:
    test_cid
    test_ind
    test_data
    and my table is:
    test_code ( correspond to test_cid above)
    test_data (correspond to test_data above)
    That can be done with two interfaces (the first loads the file into an Oracle staging table, the second loads the target Oracle table from the staging table). Is there a way to do it in one interface?

    That works for small tables. For big tables you need to accommodate extra space, as you need at least twice as much space for the working/integration tables, and that is what we are trying to avoid.
    In fact, I was able to modify the LKM to load the file into Oracle; however, my integration step fails and I cannot make it work. To make the LKM work, one needs to change getColList to getSrcColList in the "create working table" and "generate ctl file" steps. That creates the working table based on the source file structure rather than the target table structure.
    That doesn't work for the IKM, however. Any ideas would be appreciated.
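    For reference, in a hand-written SQL*Loader control file the unwanted source field can simply be declared FILLER, so it is read from the file but never loaded. A sketch using the field names above (file and table names made up, assuming ;-separated data):
    LOAD DATA
    INFILE 'test_file.dat'
    APPEND
    INTO TABLE test_table
    FIELDS TERMINATED BY ';'
    ( test_code,          -- filled from the first file field (test_cid)
      test_ind   FILLER,  -- second file field is read but discarded
      test_data )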
    Thank You.

  • Loading Flat File and Temporary Table question

    I am new to ODI. Is there a method to load a flat file directly into the target table without having ODI create a temporary table? I am asking in order to find out how efficient ODI is at bulk loading a very large set of data.

    You can always modify the KM to skip the loading of the temporary table and load the target table directly. Another option would be to use the SUNOPSIS MEMORY ENGINE as your staging database. But, this option has some drawbacks, as it relies on available memory for the JVM and is not suitable for large datasets (which you mention you have).
    http://odiexperts.com/11g-oracle-data-integrator-%E2%80%93-part-711g-%E2%80%93-sunopsis-memory-engine/
    Regards,
    Michael Rainey

  • Load a flat file into BW-BPS using SAP GUI

    Hi,
    We are using BW BPS version 3.5. I implemented the how-to guide "How to load a flat file into BW-BPS using SAP GUI" successfully, without any errors.
    I included three InfoObjects in the text file: cost element, posting period, and amount. I included the same three InfoObjects in the file structure in the global data, as specified in the how-to document.
    The flat file format is like this
    Costelmnt      Postingperiod         Amount   
    XXXXX             #      
    XXXXX             1                          100
    XXXXX             2                           800
    XXXXX             3                           700
    XXXXX             4                           500
    XXXXX             5                           300
    XXXXX             6                           200
    XXXXX             7                           270
    XXXXX             8                           120
    XXXXX             9                           145 
    XXXXX            10                           340
    XXXXX            11                           147
    XXXXX            12                           900 
    I successfully loaded the above flat file into the BPS cube, and it is displayed in the layout as well.
    But the users are requesting to load a flat file in the format below:
    Costelmnt        Annual(PP=#)   Jan(PP=1)   Feb(PP=2) ........................................Dec(PP=12)  
    XXXXX              Blank                       100           800                                                   900
    Is it possible to load a flat file like this?
    They want to load a single row instead of 13 rows for each cost element.
    How can this be done? Please advise if anybody has come across this requirement.
    In the InfoCube we have only one InfoObject 0FISCPER3 (posting period) and one 0AMOUNT (amount).
    Do we need 13 InfoObjects each for posting period and amount?
    Or is it possible to implement a user exit, like the ones we use in BEx queries?
    Please share your ideas on this.
    Thanks in advance
    Best regards
    SS

    Hi,
    There are two ways to do this.
    One is to change the structure of the cube to have 12 key figures for the 12 posting periods.
    The other is to write an ABAP function module that fetches the values from each record based on the posting period and stores them in the cube under the corresponding characteristic value. This way, you don't have to change the structure of the cube.
    If this particular cube is not used anywhere else, I would suggest changing the structure itself.
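    As an illustration of the second option, here is a minimal sketch of the pivoting logic (all structure and field names are made up; only three period columns are shown, extend the pattern to all 13 columns):
    * Sketch only: turns one wide row (cost element + amount columns)
    * into one cube record per posting period.
    TYPES: BEGIN OF ty_wide,
             costelmnt TYPE c LENGTH 10,
             amt01     TYPE p LENGTH 9 DECIMALS 2,   " Jan (PP = 001)
             amt02     TYPE p LENGTH 9 DECIMALS 2,   " Feb (PP = 002)
             amt03     TYPE p LENGTH 9 DECIMALS 2,   " Mar (PP = 003)
           END OF ty_wide,
           BEGIN OF ty_cube,
             costelmnt TYPE c LENGTH 10,
             fiscper3  TYPE n LENGTH 3,              " 0FISCPER3
             amount    TYPE p LENGTH 9 DECIMALS 2,   " 0AMOUNT
           END OF ty_cube.
    DATA: lt_wide TYPE STANDARD TABLE OF ty_wide,
          ls_wide TYPE ty_wide,
          lt_cube TYPE STANDARD TABLE OF ty_cube,
          ls_cube TYPE ty_cube.
    FIELD-SYMBOLS <lv_amt> TYPE p.
    LOOP AT lt_wide INTO ls_wide.
      DO 3 TIMES.                      " DO 13 TIMES for annual + 12 periods
    *   Components 2..4 of ty_wide are the amount columns.
        ASSIGN COMPONENT sy-index + 1 OF STRUCTURE ls_wide TO <lv_amt>.
        IF sy-subrc <> 0 OR <lv_amt> IS INITIAL.
          CONTINUE.
        ENDIF.
        ls_cube-costelmnt = ls_wide-costelmnt.
        ls_cube-fiscper3  = sy-index.  " posting period 001..003
        ls_cube-amount    = <lv_amt>.
        APPEND ls_cube TO lt_cube.
      ENDDO.
    ENDLOOP.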
    Hope this helps.

  • How to load a flat file with a lot of records

    Hi,
    I am trying to load a flat file with hundreds of records into an apps table. When I create the process and deploy it to the console, it asks for an input in an HTML form. Why does it ask for input when I have specified the input file directory in my process? Is there any way around this so that it just reads all the records from the flat file directly? Are custom queues in any way related to what I am trying to do? Any documents on this process would be greatly appreciated. If anyone can help me with this, it would be great. Thank you.

    After deploying it, do you see that it is active and its status is On, on the BPEL Processes tab of the BPEL Console? It should not ask for input unless you are invoking it from the Dashboard tab. Do not start it from the Dashboard; instead, put some files into the input directory. Wait a few seconds and you should see instances of the BPEL process being created, which start processing the files asynchronously.

  • Loading Multiple Flat Files (around 80+)

    Hi,
    Loading multiple flat files into a BI system.
    My issue: I have a scenario where 80+ flat files have to be loaded into the BI system every month, and my client wants this done through a process chain or other automation, without manual intervention.
    Can anyone suggest how I can achieve this?
    Regards,
    Prabhakar.

    Hi All,
    We have developed a logic to upload multiple flat files (.CSV) in one run. Below is the routine.
    program filename_routine.
    * Global code
    *$*$ begin of global - insert your declaration only below this line -
    * Global variables and type declarations for the main routine
    * COMPUTE_FLAT_FILE_FILENAME below.
    DATA: v_filename      TYPE string,
          v_foldername    TYPE string,
          v_uploadfile    TYPE string,
          v_download_file TYPE string,
          v_ext(4)        TYPE c,
          v_file_exists   TYPE c,
          v_file          TYPE dxfile-filename,
          v_count(3)      TYPE c.
    TYPES: BEGIN OF ty_file,
             line(900),
           END OF ty_file.
    DATA: git_file TYPE TABLE OF ty_file.
    *$*$ end of global - insert your declaration only before this line -
    FORM compute_flat_file_filename
      USING    p_infopackage TYPE rslogdpid
               p_datasource  TYPE rsoltpsourcer
               p_logsys      TYPE rsslogsys
      CHANGING p_filename    TYPE rsfilenm
               p_subrc       LIKE sy-subrc.
    *$*$ begin of routine - insert your code only below this line -
    * This routine is called by the adapter when the InfoPackage is
    * executed. It appends all existing files <prefix>1.csv, <prefix>2.csv,
    * ... into one consolidated file and hands that file to the
    * InfoPackage. Note: GUI_UPLOAD and GUI_DOWNLOAD run on the frontend
    * PC, so this works in dialog processing only (in background jobs
    * they fail with NO_BATCH).
      v_count = 1.
    * Adjust the folder path below to the location of the files.
      v_foldername =
        'C:\Documents and Settings\Prabhakar-HP\Desktop\test\'.
    * Adjust the filename prefix below if required.
      v_filename = 'CSI_01_'.
      v_ext = '.csv'.
      CONDENSE v_count.
      CONCATENATE v_foldername
                  v_filename
                  v_count
                  v_ext INTO v_uploadfile.
    * Check whether the first file exists.
      v_file = v_uploadfile.
      CALL FUNCTION 'DX_FILE_EXISTENCE_CHECK'
        EXPORTING
          filename       = v_file
          pc             = 'X'
        IMPORTING
          file_exists    = v_file_exists
        EXCEPTIONS
          rfc_error      = 1
          frontend_error = 2
          no_authority   = 3
          OTHERS         = 4.
      IF sy-subrc <> 0.
        MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      ENDIF.
    * Process files until the next filename no longer exists.
      WHILE v_file_exists = 'X'.
        v_file = v_uploadfile.
        CALL FUNCTION 'DX_FILE_EXISTENCE_CHECK'
          EXPORTING
            filename       = v_file
            pc             = 'X'
          IMPORTING
            file_exists    = v_file_exists
          EXCEPTIONS
            rfc_error      = 1
            frontend_error = 2
            no_authority   = 3
            OTHERS         = 4.
        IF sy-subrc <> 0.
          MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                  WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
        ENDIF.
        IF v_file_exists = 'X'.
    *     Read the current file from the frontend ...
          CALL FUNCTION 'GUI_UPLOAD'
            EXPORTING
              filename                = v_uploadfile
              filetype                = 'ASC'
              has_field_separator     = 'X'
            TABLES
              data_tab                = git_file
            EXCEPTIONS
              file_open_error         = 1
              file_read_error         = 2
              no_batch                = 3
              gui_refuse_filetransfer = 4
              invalid_type            = 5
              no_authority            = 6
              unknown_error           = 7
              bad_data_format         = 8
              header_not_allowed      = 9
              separator_not_allowed   = 10
              header_too_long         = 11
              unknown_dp_error        = 12
              access_denied           = 13
              dp_out_of_memory        = 14
              disk_full               = 15
              dp_timeout              = 16
              OTHERS                  = 17.
          IF sy-subrc <> 0.
            MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                    WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
          ELSE.
    *       ... and append its contents to the consolidated file.
    *       Adjust the path of the consolidated file below if required.
            v_download_file =
              'C:\Documents and Settings\Prabhakar-HP\Desktop\test\satheesh.CSV'.
            CALL FUNCTION 'GUI_DOWNLOAD'
              EXPORTING
                filename                = v_download_file
                filetype                = 'ASC'
                append                  = 'X'
              TABLES
                data_tab                = git_file
              EXCEPTIONS
                file_write_error        = 1
                no_batch                = 2
                gui_refuse_filetransfer = 3
                invalid_type            = 4
                no_authority            = 5
                unknown_error           = 6
                header_not_allowed      = 7
                separator_not_allowed   = 8
                filesize_not_allowed    = 9
                header_too_long         = 10
                dp_error_create         = 11
                dp_error_send           = 12
                dp_error_write          = 13
                unknown_dp_error        = 14
                access_denied           = 15
                dp_out_of_memory        = 16
                disk_full               = 17
                dp_timeout              = 18
                file_not_found          = 19
                dataprovider_exception  = 20
                control_flush_error     = 21
                OTHERS                  = 22.
            IF sy-subrc <> 0.
              MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                      WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
            ENDIF.
            REFRESH git_file.
          ENDIF.
        ENDIF.
    *   Build the name of the next file.
        IF v_file_exists = 'X' AND p_subrc = 0.
          v_count = v_count + 1.
          CONDENSE v_count.
          CONCATENATE v_foldername
                      v_filename
                      v_count
                      v_ext INTO v_uploadfile.
        ENDIF.
      ENDWHILE.
    * Hand the consolidated file to the InfoPackage.
      p_filename = v_download_file.
      p_subrc = 0.
    *$*$ end of routine - insert your code only before this line -
    ENDFORM.
    Regards,
    Prabhakar.

  • Part 2: Flat files and Business Contents: Any issues with this scenario?

    I will appreciate some clarification on some of the points made in response to my previous post "Flat files and Business Contents: Any issues with this scenario?"
    1.
    " ...you’d better analyze those cubes for data redundancy and presence of data you’ll never use. " I will appreciate some clarification on the type of analysis you are referring to. Examples will help.
    2.
    "If you want to combine several found IOs in your custom dataprovider, then again you must know (or figure out) relationships between these IOs." I will appreciate some clarification on the type of relationship you are referring to. Examples will help.
    3.
    I am a bit confused by "..include into ODS structure ALL fields required for the cube", because you also noted "...except navigational attributes and chars and KFs that are going to be determined in TRs or URs."
    If you exclude ALL these, haven't you excluded all the fields you included in the ODS structure?
    4.
    "Consider carefully the ODS’ key fields selection. Their combination should not allow data aggregation that you don’t need."
    I may be missing the point here. I understand that under Key Fields you select the fields which form the unique ID of the records in the ODS (please correct me if I am wrong about the purpose of the key fields), but I don't understand the discussion of "aggregation" in this context.
    Thanks in advance

    Hallo,
    I will try to give some explanation based on the previous answer.
    1. Data redundancy: make sure you do not store the same information twice. Redundant data across your data warehouse makes no sense and has a cost. Store each piece of information only once, if that gives you everything you need.
    2. Whatever InfoObjects your DataProvider consists of, you need to know what kind of relationship (1:1, 1:n, n:m and so on) exists between them. That will help you when you model your InfoProvider. For example, I would never put InfoObjects with an n:m relationship into the same dimension if you expect high cardinality. Sometimes an InfoObject can be an attribute of another one, depending on the relationship. For example:
    a business partner and his address. Usually this is a 1:1 relationship, so the address is an attribute of the business partner and is stored in the master data instead of the DataProvider.
    3. Sometimes, when you load from an ODS into a cube, you can fill some InfoObjects (which are in the InfoCube but not in the ODS) through an ABAP routine in the transfer rules or in the start routine of the update rules. It makes no sense to include these InfoObjects in the ODS, as they would stay NULL or blank (the default value) there. This happens when, for example, you first load price and quantity into the ODS and calculate the sell price later (price * quantity). Of course, that could also be done in BEx; it depends on other factors (performance, loading, sizing).
    4. Make sure that the key definition of the ODS matches the data; otherwise the data will be aggregated, and if you need the detail later, it is gone.
    For example: customer, product, distribution channel, sell price.
    If each customer can buy each product through any distribution channel, then customer, product and distribution channel must all be key fields when you build your ODS; otherwise (with only customer and product as keys, for example) you will lose the detail for distribution channel.
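    To illustrate point 4 with made-up rows: suppose the ODS key is only Customer + Product and these two records arrive:
    Customer   Product   DistrChan   SellPrice
    C100       P200      DC01        100
    C100       P200      DC02        150
    Both records have the same key (C100, P200), so the second one overwrites (or is aggregated with) the first and the per-channel detail is lost. With Customer + Product + DistrChan as the key, both records are kept.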
    I hope everything is clear.
    Regards
    Mike

  • Errors when trying to load a flat file

    When I try to load a flat file (I also manually created a test .csv file with one record), I am able to load it to the PSA, but when I try to load it to the object I get the following error message (below). How can I fix this? Thanks
    The test record is:
    1460;MAMO;UK - Banbury;MAMO;Other;FCJ
    Diagnosis
    Data record 1 & with the key '1460MAMO &' is invalid in value 'Other &' of the attribute/characteristic WINVCAT &.
    System response
    The system has recognized that the value mentioned above is invalid, and has processed this general error message. A subsequent message may give you more information on the error. This message refers to the same value, even though it does not state this explicitly.
    Procedure
    If this message appears during a data load, maintain the attribute in the PSA maintenance screens. If this message appears in the master data maintenance screens, leave the transaction and call it again. This allows you to maintain your master data.

    Hi,
    Go to transaction RSKC and check whether '&' is among the permitted characters; if it is not, add it and execute.
    Also go to the PSA and correct the record (delete the space before the '&'), save, and reload.
    Thanks,
    Teja

  • Data loading from flat file to cube using BW 3.5

    Hi Experts,
    Kindly give me the detailed steps, with screenshots, for loading data from a flat file into a cube using BW 3.5.
    Please.

    Hi,
    Procedure
    You are in the Data Warehousing Workbench in the DataSource tree.
    1. Select the application component in which you want to create the DataSource and choose Create DataSource.
    2. On the next screen, enter a technical name for the DataSource, select the type of DataSource and choose Copy.
    The DataSource maintenance screen appears.
    3. Go to the General tab page.
       a. Enter descriptions for the DataSource (short, medium, long).
       b. As required, specify whether the DataSource builds an initial non-cumulative and can return duplicate data records within a request.
       c. Specify whether you want to generate the PSA for the DataSource in character format. If the PSA is not typed, it is not generated in a typed structure but with character-like fields of type CHAR only.
    Use this option if conversion during loading causes problems, for example because there is no appropriate conversion routine, or if the source cannot guarantee that data is loaded with the correct data type.
    In this case, after you have activated the DataSource you can load data into the PSA and correct it there.
    4. Go to the Extraction tab page.
       a. Define the delta process for the DataSource.
       b. Specify whether you want the DataSource to support direct access to data.
       c. Real-time data acquisition is not supported for data transfer from files.
       d. Select the adapter for the data transfer. You can load text files or binary files from your local workstation or from the application server.
    Text-type files only contain characters that can be displayed and read as text; CSV and ASCII files are examples. For CSV files you have to specify a character that separates the individual field values. In BI you have to specify this separator character and, if required, an escape character that marks the separator as part of a value. After specifying these characters, you have to use them in the file. ASCII files contain data of a fixed length; the defined field length in the file must be the same as that of the assigned field in BI.
    Binary files contain data in the form of bytes. A file of this type can contain any byte values, including bytes that cannot be displayed or read as text. In this case, the field values in the file have to match the internal format of the assigned field in BI.
    Choose Properties if you want to display the general adapter properties.
       e. Select the path to the file that you want to load, or enter the name of the file directly, for example C:/Daten/US/Kosten97.csv.
    You can also create a routine that determines the name of your file. If you do not create such a routine, the system reads the file name directly from the File Name field.
       f. Depending on the adapter and the file to be loaded, make further settings.
    For binary files:
    Specify the character record settings for the data that you want to transfer.
    For text-type files:
    Specify how many rows in your file are header rows that can be ignored when the data is transferred.
    Specify the character record settings for the data that you want to transfer.
    For ASCII files:
    If you are loading data from an ASCII file, the data is requested with a fixed data record length.
    For CSV files:
    If you are loading data from an Excel CSV file, specify the data separator and the escape character.
    Specify the separator that your file uses to divide the fields in the Data Separator field.
    If the data separator character is part of a value, the file indicates this by enclosing the value in particular start and end characters. Enter these start and end characters in the Escape Characters field.
    Example: you chose ; as the data separator, but your file contains the value 12;45 for a field. If you set " as the escape character, the value in the file must be "12;45" so that 12;45 is loaded into BI. The complete value that you want to transfer has to be enclosed by the escape characters.
    If the escape characters do not enclose the value but are used within it, the system interprets them as a normal part of the value. If you have specified " as the escape character, the value 12"45 is transferred as 12"45, and 12"45" is transferred as 12"45". (See the sample lines after this procedure.)
    In a text editor (for example, Notepad) check the data separator and the escape character currently used in the file; they depend on the country version of the file.
    Note that if you do not specify an escape character, the space character is interpreted as the escape character. We recommend that you use a different character as the escape character.
    If you select the Hex indicator, you can specify the data separator and the escape character in hexadecimal format. When you enter a character for the data separator or the escape character, it is displayed as hexadecimal code after the entries have been checked. A two-character entry for a data separator or an escape character is always interpreted as a hexadecimal entry.
       g. Make the settings for the number format (thousands separator and decimal point character), as required.
       h. Make the settings for currency conversion, as required.
       i. Make any further settings that depend on your selection, as required.
    5. Go to the Proposal tab page.
    This tab page is only relevant for CSV files. For files in other formats, define the field list on the Fields tab page.
    Here you create a proposal for the field list of the DataSource, based on sample data from your CSV file.
       a. Specify the number of data records that you want to load and choose Upload Sample Data.
    The data is displayed in the upper area of the tab page, in the format of your file.
    The system displays the proposed field list in the lower area of the tab page.
       b. In the table of proposed fields, use Copy to Field List to select the fields you want to copy to the field list of the DataSource. All fields are selected by default.
    6. Go to the Fields tab page.
    Here you edit the fields that you transferred to the field list of the DataSource from the Proposal tab page. If you did not transfer the field list from a proposal, you can define the fields of the DataSource here.
       a. To define a field, choose Insert Row and specify a field name.
       b. Under Transfer, specify the decision-relevant DataSource fields that you want to be available for extraction and transferred to BI.
       c. Instead of generating a proposal for the field list, you can enter InfoObjects to define the fields of the DataSource. Under Template InfoObject, specify InfoObjects for the fields in BI. This allows you to transfer the technical properties of the InfoObjects into the DataSource fields.
    Entering InfoObjects here does not equate to assigning them to DataSource fields; assignments are made in the transformation. When you define the transformation, the system proposes the InfoObjects you entered here as candidates for assignment to the fields.
       d. Change the data type of a field, if required.
       e. Specify the key fields of the DataSource.
    These fields are generated as a secondary index in the PSA. This is important for good performance of data transfer process selections, in particular with semantic grouping.
       f. Specify whether lowercase is supported.
       g. Specify whether the source provides the data in the internal or external format.
       h. If you choose the external format, ensure that the output length of the field (external length) is correct. Change the entries, as required.
       i. If required, specify a conversion routine that converts data from an external format into an internal format.
       j. Select the fields for which you want to be able to set selection criteria when scheduling a data request using an InfoPackage. Data for such fields is transferred according to the selection criteria specified in the InfoPackage.
       k. Choose the selection options (such as EQ, BT) that you want to be available for selection in the InfoPackage.
       l. Under Field Type, specify whether the data to be selected is language-dependent or time-dependent, as required.
    7. Check, save and activate the DataSource.
    8. Go to the Preview tab page.
    If you select Read Preview Data, the number of data records you specified in your field selection is displayed in a preview.
    This function allows you to check whether the data formats and data are correct.
    For more info: http://help.sap.com/saphelp_nw70/helpdata/EN/43/01ed2fe3811a77e10000000a422035/content.htm
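    To make the separator and escape rules concrete, here are two made-up lines for a CSV file that uses ; as the data separator and " as the escape character:
    1000;"12;45";EUR      (the second field is loaded as 12;45)
    1000;12"45;EUR        (the second field is loaded as 12"45)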

  • What is the difference between a flat file and a legacy system?

    Hi everyone,
    When we say we are working on a FILE-to-FILE scenario, which file format are we usually working with? And what is the difference between a flat file and a legacy system?
    Thanks

    Hi,
    "When we say we are working on a FILE-to-FILE scenario, which file format are we usually working with?"
    >>> Often it will be a flat file in CSV format, tab-delimited format, or with fixed-length fields.
    "What is the difference between a flat file and a legacy system?"
    >>> You cannot differentiate them like that.
    A flat file may come from any system; it may be a live system or a legacy system.
    A legacy system is something old, from the past. If you talk about SAP, then older versions of SAP can be called legacy systems.
    So a legacy system may be a file system, or any system on an old version; that does not mean it is not in use.
    Regards,
    Moorthy

  • Loading of flat file (csv) into PSA – no data loaded

    Hi BW-gurus,
    We have an issue loading a flat file (CSV) into the PSA using an InfoPackage (BI 7.0).
    The InfoPackage has been in use for a while. Previously, consultants with the SAP_ALL profile ran it. Now we want a few super users to run it.
    We have created a role for the super users, including authorization objects:
    Data Warehousing objects: S_RS_ADMWB
    Activity: 03, 16, 23, 63, 66
    Data Warehousing Workbench obj: INFOAREA, INFOOBJECT, INFOPACKAG, MONITOR, SOURCESYS, WORKBENCH
    Data Warehousing Workbench - DataSource (version > BW 3.x): S_RS_DS
    Activity: All
    Datasource: All
    Subobject for New DataSource: All
    Sourcesystem: FILE
    Data Warehousing Workbench - InfoSource (flexible update): S_RS_ISOUR
    Activity: Display, Maintain, Request
    Application Component: All
    InfoSource: All
    InfoSource Subobject: All values
    As mentioned, the InfoPackage in question has been used by consultants with the SAP_ALL profile for some time and has been working just fine. When the super users with the new role execute the InfoPackage, the records are found but not loaded into the PSA. The load seems to be stuck, but no error message occurs. The file we are trying to load contains only 15 records.
    Details monitor:
    Overall status: Missing messages or warnings (yellow)
    Requests (messages): Everything ok (green)
      ->  Data request arranged (green)
      ->  Confirmed with: OK (green)
    Extraction (messages): Errors occurred (yellow)
      ->  Data request received (green)
      -> Data selection scheduled (green)
      -> 15 Records sent (0 Records received) (yellow)
      -> Data selection ended (green)
    Transfer (IDocs and TRFC): Missing messages (yellow)
    Processing (data packet):  Warnings received (yellow)
      -> Data package 1 (? Records): Missing messages (yellow)
         -> Inbound processing (0 records): Missing messages (yellow)
         -> Update PSA (0 Records posted): Missing messages (yellow)
         -> Processing end: Missing messages (yellow)
    Have we forgotten something? Any assistance will be highly appreciated!
    Cheers,
    Anne Therese S. Johannessen

    Hi,
    Try using transaction ST01 to trace the authorization checks during an upload with SAP_ALL,
    and then enhance the profile of the super users accordingly.
    Best regards
    Matthias

  • How can we load a flat file with very, very long lines into a table?

    Hello:
    We have to load a flat file with OWB. The problem is that each line in the file might be up to 30,000 characters long (up to 1,000 units of information per line, each 30 characters long).
    Of course, our mapping should insert these units of information as independent rows in a table (1,000 rows, in our example).
    We do not know how to go about it. We usually load flat files using table functions, but we are not sure they will be able to cope with these huge lines. And how should we pivot those lines? Will the Pivot operator do the trick? Or should we pivot the lines outside the database before loading them?
    We are a bit lost. Any suggestion would be appreciated.
    Regards

    Yes, well, we could define a 1,000-column external table and then map those 1,000 columns to the Pivot operator; perhaps it would work. But we have been investigating a little, and we think we have found a better solution: there is a Unix utility called "fold". This utility can split our 30,000-character lines into 1,000 lines of 30 characters each: just what we needed. Then we can load the resulting file using an external table.
    We think this is a much better solution than handling 1,000 columns in the external table and in the Pivot operator.
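    For reference, the invocation for our case would be along these lines (file names made up):
    fold -w 30 wide_lines.dat > thirty_char_lines.dat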
    Thanks for your help.
    Regards

  • Error when loading a flat file

    Hello,
    I am trying to load a flat file into our BW for the first time. I keep getting the error:
    Error 1 when loading external data
    Message no. RSAR234
    I have searched SDN and looked at the OSS notes, and I have tried several suggestions, but I still cannot get the file to load. We are currently on version 3.5.
    My transfer structure and flat file do match.
    My transfer structure for IO_MAT_ATTR has a transfer method of PSA and is laid out as follows:
    IO_MAT                      Material Number          IO_MAT
    IO_MATNM     Material Name          IO_MATNM
    IO_MAT is: CHAR 15
    IO_MATNM is: CHAR 30
    My flat file is saved as a CSV and is as follows:
    MATONE,TEA
    MATTWO,COFFEE
    My file is saved on my local PC and is not open when I try to load it. When I attempt to preview the file in my InfoPackage, I get the same error there as well.
    Any suggestions would be greatly appreciated.
    Thanks
    Charla

    Hi Charla,
    Error 1 means that the system is unable to access the file. Make sure that the path is correct, and also check whether the data separator is correctly defined on the External Data tab of the InfoPackage.
    I guess the settings on the External Data tab of the InfoPackage have some issues.
    Bye
    Dinesh
