What is a flat file?

What is a flat file and how do we create one? Is it the same as a TXT, CSV, or Excel file?

Hi,
A flat file is a static document, spreadsheet, or textual record that typically contains data with no structural relationships between the records. Flat files are called "flat" because little can be done with the information they contain other than reading, storing, and sending it.
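A plain TXT or CSV file is a typical example of a flat file; an Excel workbook carries its own internal structure, although a single sheet saved as CSV becomes flat again. Purely for illustration, a small made-up customer flat file in CSV format could look like this:

CUSTOMER_ID,NAME,CITY
1000,Acme Ltd,Berlin
1001,Globex Corp,Sydney

You create such a file with any text editor, with a spreadsheet's "Save as CSV" option, or with a program that writes the records line by line.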
Differences between BDC and LSMW:
Batch Data Communication (BDC) is the oldest batch interfacing technique that SAP has provided since the early versions of R/3. BDC is not a typical integration tool in the sense that it can only be used for uploading data into R/3, so it is not bi-directional.
BDC works on the principle of simulating user input on transaction screens via an ABAP program. Typically the input comes in the form of a flat file. The ABAP program reads this file and formats the input data screen by screen into an internal table (BDCDATA). The transaction is then started using this internal table as the input and executed in the background.
With Call Transaction, the transactions are triggered at the time of processing itself, so the ABAP program must do the error handling. It can also be used for real-time interfaces and custom error handling and logging features. With Batch Input Sessions, on the other hand, the ABAP program creates a session with all the transactional data, and this session can be viewed, scheduled and processed (using transaction SM35) at a later time. The latter technique also has a built-in error processing mechanism.
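Purely for illustration, here is a minimal, hypothetical ABAP sketch of the Call Transaction technique described above; the transaction code, module pool, screen number and field names are placeholders that would normally come from a recording made with transaction SHDB:

DATA: lt_bdcdata  TYPE TABLE OF bdcdata,
      ls_bdcdata  TYPE bdcdata,
      lt_messages TYPE TABLE OF bdcmsgcoll.

" First screen of the target transaction (placeholder program/screen)
CLEAR ls_bdcdata.
ls_bdcdata-program  = 'SAPMZDEMO'.
ls_bdcdata-dynpro   = '0100'.
ls_bdcdata-dynbegin = 'X'.
APPEND ls_bdcdata TO lt_bdcdata.

" One field value on that screen (placeholder field name)
CLEAR ls_bdcdata.
ls_bdcdata-fnam = 'ZDEMO-MATNR'.
ls_bdcdata-fval = 'MAT-0001'.
APPEND ls_bdcdata TO lt_bdcdata.

" Run the transaction without screen display; messages are collected for error handling
CALL TRANSACTION 'ZDEMO' USING lt_bdcdata
     MODE   'N'
     UPDATE 'S'
     MESSAGES INTO lt_messages.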
Batch Input (BI) programs still use the classical BDC approach but do not require an ABAP program to be written to format the BDCDATA. The user has to format the data using predefined structures and store it in a flat file. The BI program then reads this file and invokes the transaction mentioned in the header record of the file.
Direct Input (DI) programs work in much the same way as BI programs. The only difference is that instead of processing screens they validate fields and load the data directly into tables using standard function modules. For this reason, DI programs are much faster than their BDC counterparts (RMDATIND, the material master DI program, works at least 5 times faster) and are therefore ideally suited to loading large volumes of data. DI programs are not available for all application areas.
LSMW is an encapsulated data transfer tool. It can provide the same functionality as BDC, in fact much more, but from a technical perspective most of the parameters are encapsulated. To list some of the differences:
LSMW is basically designed for a functional consultant who does not do much coding but needs to explore the functionality, while BDC is designed for a technical consultant.
LSMW offers different techniques for migrating data: direct input, BAPI, IDoc, and batch input recording. BDC basically uses recording.
In LSMW the mapping is supported by SAP, while in BDC we have to do it explicitly.
LSMW is basically for standard SAP applications, while BDC is basically for customized applications.
Coding can be done more flexibly in BDC than in LSMW.
Please reward points if helpful.

Similar Messages

  • How the IE works for flat files

    Hi all,
         As we all know, when the IE gets an IDoc's service name from the SLD, it uses it together with the IDoc's message type and IDoc type to do receiver determination. What about a flat file? How can we know its service name and interface name if there is only a flat file on FTP? How does the IE work for flat files?
         Thank you very much!

    Hi,
    For any IDoc scenarios you would use business systems rather than business services; these are stored in the SLD, so the IE fetches them from the SLD at runtime.
    For file-based scenarios you can also create a business system of type Third-Party and use that.
    Does that answer your question?
    Regards
    Krish

  • I can't build an xsd for a flat file (txt) to handle repeating records

    Hi - I have looked at many posts about flat file schemas and they don't seem to address my question.
    I have a flat file that is \n delimited
    the pattern of the data is simple:
    record 1 - 90 characters
    record 2 - 20 characters
    records 3 to n - 248 characters each; each of these records is parsed into children by the positional method
    record n+1 - 10 characters
    record n+2 - 20 characters
    So I used the flat file schema generator to generate the schema and built a map mapping the flat file schema to another XML schema. The schema looks OK - record 1, record 2, record n+1 and record n+2 are child elements of the root, and the repeating-record section shows up as a node with the parsed children.
    The transform only maps the children of the repeating records. When I test the map, only the first repeating record gets parsed; no repeating happens (the actual flat file has 400+ repeating records). When I run the map in debug mode, the input XML shows that record 1 is read in correctly, record 2 is read in correctly, record 3 is read in and parsed, record 4 is treated like record n+1, record 5 is treated like record n+2, and the map thinks it's all finished.
    The repeating part of the schema is shown below. You can see that I set minOccurs="1" and maxOccurs="unbounded" on the node (INVOICE) and on the complexType, but this is not effective syntax. I have looked at how the EDI X12 schemas handle looping and it is a lot different from what the Flat File Schema Wizard is doing. Is there a good set of rules published that would guide me through this? Otherwise I will basically have to read in the lines from the file and parse them out with functoids - which seems so inelegant. Thanks in advance.
    <xs:element minOccurs="1" maxOccurs="unbounded" name="INVOICE">
      <xs:annotation>
        <xs:appinfo>
          <b:recordInfo structure="positional" sequence_number="3" preserve_delimiter_for_empty_data="true" suppress_trailing_delimiters="false" />
        </xs:appinfo>
      </xs:annotation>
      <xs:complexType>
        <xs:sequence minOccurs="1" maxOccurs="unbounded">
          <xs:annotation>
            <xs:appinfo>
              <groupInfo sequence_number="0" xmlns="http://schemas.microsoft.com/BizTalk/2003" />
            </xs:appinfo>
          </xs:annotation>
          <xs:element name="SegmentType" type="xs:string">
            <xs:annotation>
              <xs:appinfo>
                <b:fieldInfo justification="left" pos_offset="0" pos_length="2" sequence_number="1" />
              </xs:appinfo>
            </xs:annotation>
          </xs:element>
          ... more child elements
    Harold Rosenkrans

    Thanks for responding
    I gave up trying to parse the repeating record into fields. Instead I just loop through the repeating-record section with an <xs:for-each> block in the XSL and use functoids to grab the fields.
    So that works for having the two shorter header records (the structure is positional) before the section of repeating records. Now I just have to figure out how to get the schema to handle the two shorter trailer (or footer, whichever you prefer) records after the section of repeating records.
    The error I get in VS when I test the map is below (BTW, I changed the element names in the schema, which is why you don't see INVOICE in the error).
    When I declare the last element as positional with a character length of 10, I get the error:
    Error 18 Native Parsing Error: Unexpected end of stream while looking for:
    '\r\n'
    The current definition being parsed is SAPARData. The stream offset where the error occurred is 1359. The line number where the error occurred is 9. The column where the error occurred is 0.
    So the first record is 77 characters long, the second is 16, the repeating records (5 in the file) are 248 each, and the last record is 10 characters.
    An offset of 1359 puts it 16 characters beyond the last record - so the stream reader is looking for the next repeating record.
    If I try to declare the last element as delimited, I get the error:
    Error 14 Native Parsing Error: Unexpected data found while looking for:
    '\r\n'
    The current definition being parsed is SAPARData. The stream offset where the error occurred is 597. The line number where the error occurred is 5. The column where the error occurred is 0.
    So the first record is 77 characters long, the second is 16, and the repeating records are 248 each.
    A stream offset of 597 puts me 8 characters into the third repeating record - at this point I have declared only one trailer record in the schema, 10 characters long.
    Why is the stream reader stopping at such a weird spot?
    The bottom line is that I still haven't discovered the correct schema to handle the trailer records. Even if I set maxOccurs="4" for the repeat-record declaration, it still gets the first error. How does it find an unexpected end of stream looking for \r\n when the maxOccurs of the repeat-record declaration should leave the stream pointer in the 5th repeat record?
    I unfortunately don't have any options concerning the file structure.
    I have read a lot of posts concerning the trailer issue and have seen a couple that looked interesting; I guess I'll just have to give them a try. The other option is to create a custom pipeline that only accepts file lines of 248 characters.
    That's just disgusting!
    Harold Rosenkrans

  • Flat File in POS After Sales

    Hi Friends,
    What flat files get generated at POS?
    Thanks in advance.
    Regards
    Vijai Jain

    It sounds like you're not a technical person, and that is understandable. I don't understand why so many software developers don't realize this.
    Anyway, try ChikPOS if you want what is nowadays called "decision-oriented reports". It means it doesn't give you a whole heap of useless data; it "data mines" the important stuff for you!
    I've had NO problems with it whatsoever. It's a Jeremy Shum Invent, so it's a quality Aussie product too - helping the economy. The features are also endless... multi-language support, managerial decision-making reports, not locked to hardware, fully multi-touch (like the iPhone), external monitor support, XBRL compliant, auto-generation of an online store, the ability to advertise "related products", corporate chat support, showing time/date/news on an external screen... it's just top stuff. AND it's Windows 7 compatible!

  • Export SQL View to Flat File with UTF-8 Encoding

    I've set up a package in SSIS to export a SQL view to a flat file and it's working fine.  I now need to make that flat file UTF-8 encoded.  The package executes, but the files still show as ANSI encoded.
    My package consists of a Source (SQL View) -> Derived Column (casts the fields to DT_WSTR) -> Destination Flat File (Set to output UTF-8 file).
    I don't get any errors to help me troubleshoot further.  I'm running SQL Server 2005 SP2.

    Unless there is a Byte-Order-Marker (BOM - hex file prefix: EF BB BF) at the beginning of the file, and unless your data contains non-ASCII characters, I'm unsure there is a technical difference in the files, Paul.
    That is, even if the file is "encoded" UTF-8, if your data is only ASCII values (decimal values 0-127, hex 00-7F), UTF-8 doesn't really serve a purpose over ANSI encoding.  Now if you're looking for UTF-8 with specifically the BOM included, and your data is all standard ASCII, the Flat File Connection Manager can't do that, it seems.
    What the flat file connection manager is doing correctly though, is encoding values that are over decimal 127/hex 7F in UTF-8 when the encoding of the connection manager is set to 65001 (UTF-8).
    Example:
    Input data built with a script component as a source (code at the bottom of this post) and with only one WSTR output column hooked to a flat file destination component:
    a string containing only the character with decimal value 225 (Latin small letter a with acute - á)
    Encoding set to ANSI 1252 looks like:
    E1 0D 0A (which is the ANSI encoding of the decimal character value 225 (E1) and a CR-LF (0D 0A))
    Encoding set to UTF-8 65001 looks like:
    C3 A1 0D 0A (which is the UTF-8 encoding of the decimal character value 225 (C3 A1) and a CR-LF (0D 0A))
    Note that for values over decimal 127, UTF-8 takes at least two bytes and up to four for the remaining values available.
    So, I'm comfortable now, after sitting down and going through this, that the flat file connection manager is working correctly, unless you need a BOM.
    Imports System
    Imports System.Data
    Imports System.Math
    Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper
    Imports Microsoft.SqlServer.Dts.Runtime.Wrapper

    Public Class ScriptMain
        Inherits UserComponent

        Public Overrides Sub CreateNewOutputRows()
            Output0Buffer.AddRow()
            Output0Buffer.col1 = ChrW(225)
        End Sub

    End Class
    Phil

  • Multiple languages in flat file

    Hi,
    I am trying to load data through a flat file. How can I load multiple languages into BW - I mean, splitting them in BW?
    While loading data from R/3, a checkbox (multiple languages) is enabled on the BI side; what about flat files?

    Hi,
    I am not asking where I have to maintain the language in the flat file. I am asking: while extracting data we can see a multiple-language option, but it is in disabled mode for flat files. I have given EN and GE in the flat file and it has to be split on the BW side, for Unicode testing...

  • LSMW - How to view the flat file on App Server

    Hi All,
    I'm trying to take a look at BC420_DOC_1_HEAD_POS.LEG which is the file for LSMW training BC420. However, this file is stored on the application (NT) server to which I have no access. Can I browse this file using R/3 utilities?
    I just want to see what a flat file for the training course looks like.
    Thanks so much!
    Roman

    Hi Roman,
    In general, users will not have direct access to the directories on the application server at the OS level; you will have to go through an SAP program or transaction.
    Look at transaction AL11. If the file that you are talking about resides in any of the directories listed there, you will be able to navigate to it. Alternatively, a small ABAP report can read the file directly, as sketched below.
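    Purely as an illustration (the directory path below is a placeholder; use the actual path shown in AL11), a small report can print the file contents line by line:

    REPORT zread_appserver_file.

    DATA: lv_file TYPE string,
          lv_line TYPE string.

    lv_file = '/usr/sap/trans/data/BC420_DOC_1_HEAD_POS.LEG'.   "placeholder path

    OPEN DATASET lv_file FOR INPUT IN TEXT MODE ENCODING DEFAULT.
    IF sy-subrc = 0.
      DO.
        READ DATASET lv_file INTO lv_line.
        IF sy-subrc <> 0.
          EXIT.
        ENDIF.
        WRITE: / lv_line.                    "show each line of the flat file
      ENDDO.
      CLOSE DATASET lv_file.
    ELSE.
      WRITE: / 'Could not open the file (check path and authorizations)'.
    ENDIF.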
    Regards,
    Anand Mandalika.

  • What is the difference between a flat file and a legacy system?

    Hi everyone,
           When we say we are working on a FILE-to-FILE scenario, which file format are we usually working with? And what is the difference between a flat file and a legacy system?
    Thanks

    Hi,
    > When we say we are working on a FILE-to-FILE scenario, which file format are we usually working with?
    Many times it will be a flat file in CSV format, tab-delimited format, or with fixed-length fields.
    > What is the difference between a flat file and a legacy system?
    We cannot differentiate them like this.
    A flat file may come from any system; it may be a live system or a legacy system.
    A legacy system is something old, from the past. If you talk about SAP, then older versions of SAP can be called legacy systems.
    So it may be a file system or any system on an old version, but that does not mean it is not in use.
    Regards,
    Moorthy

  • What is the best way to load and convert data from a flat file?

    Hi,
    I want to load data from a flat file, convert dates, numbers and some fields with custom logic (e.g. 0,1 into N,Y) to the correct format.
    The rows where all to_number, to_date and custom conversions succeed should go into table STG_OK. If some conversion fails (due to an illegal format in the flat file), those rows (where the conversion raises some exception) should go into table STG_ERR.
    What is the best and easiest way to achieve this?
    Thanks,
    Carsten.

    Hi,
    thanks for your answers so far!
    I gave them a thought and came up with two different alternatives:
    Alternative 1
    I load the data from the flat file into a staging table using sqlldr. I convert the data to the target format using sqlldr expressions.
    The columns of the staging table have the target format (date, number).
    The rows that cannot be loaded go into a bad file. I manually load the data from the bad file (without any conversion) into the error table.
    Alternative 2
    The columns of the staging table are all of type varchar2 regardless of the target format.
    I define data rules for all columns that require a later conversion.
    I load the data from the flat file into the staging table using external table or sqlldr without any data conversion.
    The rows that cannot be loaded go automatically into the error table.
    When I read the data from the staging table, I can safely convert it since it is already checked by the rules.
    What I dislike in alternative 1 is that I manually have to create a second file and a second mapping (ok, I can automate this using OMB*Plus).
    Further, I would prefer using expressions in the mapping for converting the data.
    What I dislike in alternative 2 is that I have to create a data rule and a conversion expression and then keep the data rule and the conversion expression in sync (in case of changes of the file format).
    I also would prefer to have the data in the staging table in the target format. Well, I might load it into a second staging table with columns having the target format. But that's another mapping and a lot of i/o.
    As far as I know I need the data quality option for using data rules, is that true?
    Is there another alternative without any of these drawbacks?
    Otherwise I think I will go for alternative 1.
    Thanks,
    Carsten.

  • What are the settings for datasource and infopackage for flat file loading

    Hi,
    I'm trying to load data from a flat file to a DSO. Can anyone tell me what the settings for the DataSource and InfoPackage are for flat file loading?
    Please let me know.
    regards
    kumar

    Loading of transaction data in BI 7.0: a step-by-step guide on how to load data from a flat file into the BI 7 system.
    Uploading of transaction data:
    Log on to your SAP system.
    Transaction code RSA1 takes you to the Modelling view.
    1. Creation of Info Objects
    • In left panel select info object
    • Create info area
    • Create info object catalog ( characteristics & Key figures ) by right clicking the created info area
    • Create new characteristics and key figures under respective catalogs according to the project requirement
    • Create required info objects and Activate.
    2. Creation of Data Source
    • In the left panel select data sources
    • Create application component(AC)
    • Right click AC and create datasource
    • Specify data source name, source system, and data type ( Transaction data )
    • In general tab give short, medium, and long description.
    • In the Extraction tab, specify the file path, the number of header rows to be ignored, the data format (CSV) and the data separator (,).
    • In the Proposal tab, load example data and verify it.
    • In the Fields tab you can give the technical names of the InfoObjects in the template, so you do not have to map them during the transformation; the system will map them automatically. If you do not fill them in this tab, you have to map them manually during the transformation in the InfoProvider.
    • Activate the DataSource and read the preview data under the Preview tab.
    • Create an InfoPackage by right-clicking the DataSource, and in the Schedule tab click Start to load the data to the PSA (make sure the flat file is closed during loading).
    3. Creation of data targets
    • In left panel select info provider
    • Select created info area and right click to create ODS( Data store object ) or Cube.
    • Specify a name for the ODS or cube and click Create.
    • From the template window select the required characteristics and key figures and drag and drop it into the DATA FIELD and KEY FIELDS
    • Click Activate.
    • Right click on ODS or Cube and select create transformation.
    • In the source of the transformation, select the object type (DataSource) and specify its name and source system. Note: the source system will be a temporary folder or package into which the data is stored.
    • Activate the created transformation.
    • Create a data transfer process (DTP) by right-clicking the data target.
    • In the Extraction tab, specify the extraction mode (Full).
    • In the Update tab, specify the error handling (request green).
    • Activate the DTP and in the Execute tab click the Execute button to load the data into the data targets.
    4. Monitor
    Right-click the data target, select Manage, and in the Contents tab choose Contents to view the loaded data. There are two tables in an ODS, the new data table and the active table; to move data from the new table to the active table, you have to activate the request after the data has been loaded. Alternatively, the Monitor icon can be used.
    Loading of master data in BI 7.0:
    For uploading master data in BI 7.0:
    Log on to your SAP system.
    Transaction code RSA1 takes you to the Modelling view.
    1. Creation of Info Objects
    • In left panel select info object
    • Create info area
    • Create info object catalog ( characteristics & Key figures ) by right clicking the created info area
    • Create new characteristics and key figures under respective catalogs according to the project requirement
    • Create required info objects and Activate.
    2. Creation of Data Source
    • In the left panel select data sources
    • Create application component(AC)
    • Right click AC and create datasource
    • Specify data source name, source system, and data type ( master data attributes, text, hierarchies)
    • In general tab give short, medium, and long description.
    • In the Extraction tab, specify the file path, the number of header rows to be ignored, the data format (CSV) and the data separator (,).
    • In the Proposal tab, load example data and verify it.
    • In the Fields tab you can give the technical names of the InfoObjects in the template, so you do not have to map them during the transformation; the system will map them automatically. If you do not fill them in this tab, you have to map them manually during the transformation in the InfoProvider.
    • Activate the DataSource and read the preview data under the Preview tab.
    • Create an InfoPackage by right-clicking the DataSource, and in the Schedule tab click Start to load the data to the PSA (make sure the flat file is closed during loading).
    3. Creation of data targets
    • In left panel select info provider
    • Select the created InfoArea and right-click to choose Insert Characteristic as InfoProvider.
    • Select required info object ( Ex : Employee ID)
    • Under that info object select attributes
    • Right click on attributes and select create transformation.
    • In the source of the transformation, select the object type (DataSource) and specify its name and source system. Note: the source system will be a temporary folder or package into which the data is stored.
    • Activate the created transformation.
    • Create a data transfer process (DTP) by right-clicking the master data attributes.
    • In the Extraction tab, specify the extraction mode (Full).
    • In the Update tab, specify the error handling (request green).
    • Activate the DTP and in the Execute tab click the Execute button to load the data into the data targets.

  • What are the key steps & order to follow: changes in my flat file structure

    HI,
    I have a Cube which sits on ODS.
    In the ODS, there are 6 characteristics: Char1, Char2, ..., Char6, and 3 key figures: KF1, KF2, and KF3.
    The ODS is loaded through a flat file.
    Through an update rule and a start routine, the ODS updates the cube.
    Now I have a new requirement to add 2 new characteristics (Char10, Char20) and 2 new key figures (KF55, KF66), i.e. the flat file will now come in with these new fields.
    I have an idea, but since this is the first time I really have to implement it, I need to be sure.
    What are the key steps that I need to go through, and in what order?
    Thanks

    What version are you running?
    Will you need to load history data for these new fields, or is it just going forward?
    Is it all one-to-one mapping for the new fields?
    1. Make sure you know to which dimensions you need to add the new characteristics - or a new dimension for these two?
    2. Are the new characteristics to be made key fields?
    3. Is the update type for the key figures Overwrite or Additive?
    Then enhance the Cube and DSO, and change the TRFN/TR/UR accordingly.

  • What is the best way to export the data out of BW into a flat file on the S

    Hi All,
    We are BW 7.01 (EHP 1, Service Pack Level 7).
    As part of our BW project scope for our current release, we will be developing certain reports in BW, and for certain reports, the existing legacy reporting system based out of MS Access and the old version of Business Objects Release 2 would be used, with the needed data supplied from the BW system.
    What is the best way to export the data out of BW into a flat file on the Server on regular intervals using a process chain?
    Thanks in advance,
    - Shashi

    Hello Shashi,
    some comments:
    1) An "open hub license" is required for all processes that extract data from BW to a non-SAP system (including APD). Please check with your SAP Account Executive for details.
    2) The limitation of 16 key fields is only valid when using open hub for extracting to a DB table. There's no such limitation when writing files.
    3) Open hub is the recommended solution since it's the easiest to implement, no programming is required, and you don't have to worry much about scaling with higher data volumes (APD and CRM BAPI are quite different in all of these aspects).
    For completeness, here's the most recent documentation which also lists other options:
    http://help.sap.com/saphelp_nw73/helpdata/en/0a/0212b4335542a5ae2ecf9a51fbfc96/frameset.htm
    Regards,
    Marc
    SAP Customer Solution Adoption (CSA)

  • What is a Flat file adapter?

    What is a Flat file adapter?
    What is a Planning adapter?
    What are all the adapters required to load the data from Excel to Planning application?

    I agree with Gary in his previous post that users should make some effort to search before posting here. This forum is meant for posting and answering problems and difficulties that developers face and that are NOT normally covered in manuals or references; it is not for defining a keyword or explaining a process that is covered in the documentation.
    Please refer to your documentation, Google your question, or use en.wikipedia.org before posting in this forum.
    Thank you.

  • After flat file what next - beginning with BI

    Hi
    I am sure my question might sound silly, but I am just beginning my journey with SAP BI.
    I have access to a system and I can do whatever I want in it - just without messing things up too much. As a first step, I went through the flat file upload.
    I would like to start practising data extraction from R/3, but as far as I know I might mess up too much in the system.
    So my questions are:
    What impact can my messing around have on the system in the worst case?
    Can you suggest something I can practise on my own after the flat file upload?

    Hi,
    You can try creating generic extractors based on some 'Z' tables and then load data from them into BW. In generic extraction you can try different methods of extraction, i.e. view, function module, etc.
    After extraction you can try reporting scenarios based on VirtualProviders, InfoProviders, etc. For practice you can create everything of your own, so it will not mess with the existing objects in the system.
    Regards,
    Durgesh.

  • What are the advantages of IDocs compared to flat files? How is data secure?

    What are the advantages of IDocs compared to flat files? How is data more secure in IDocs compared to flat files?

    Hi Ramana,
    In simple words, the main advantage of an IDoc over a flat file is security.
    Let me explain with a scenario. Say you have a flat file with all the data. As long as you have that flat file you can modify the data in it, or anyone who can access the file can modify it. That is the situation when the file is maintained on the presentation server.
    One level of higher security is maintaining the flat file on the application server. Even though most people do not have access to that file, a superuser who does have access can still modify or delete the data.
    So at both of those levels you do not have 100% security.
    That is where IDocs come into the picture. IDocs are simply data carriers; they are generated by a program, not manually, and the data is divided into a number of segments based on your program. So it is not easy to modify the data in these IDocs manually. If any changes have to be made to the data, you have to change it in the application and then update the IDoc, or generate a new IDoc with the corresponding data. So in this case not even a superuser can manipulate the data directly in the IDoc.
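    Purely as an illustration (the message type, IDoc type, partner and segment names below are placeholders, not taken from this thread), an outbound IDoc is typically built and dispatched by a program roughly like this:

    DATA: ls_control TYPE edidc,                 "IDoc control record
          lt_control TYPE TABLE OF edidc,
          ls_data    TYPE edidd,                 "one data segment
          lt_data    TYPE TABLE OF edidd.

    ls_control-mestyp = 'ZMESTYP'.               "placeholder message type
    ls_control-idoctp = 'ZIDOCTYPE'.             "placeholder basic IDoc type
    ls_control-rcvprt = 'LS'.                    "receiver partner type: logical system
    ls_control-rcvprn = 'RCVSYSTEM'.             "placeholder receiver partner number

    ls_data-segnam = 'Z1SEGMENT'.                "placeholder segment name
    ls_data-sdata  = 'field values packed into the segment structure'.
    APPEND ls_data TO lt_data.

    CALL FUNCTION 'MASTER_IDOC_DISTRIBUTE'
      EXPORTING
        master_idoc_control        = ls_control
      TABLES
        communication_idoc_control = lt_control
        master_idoc_data           = lt_data
      EXCEPTIONS
        error_in_idoc_control          = 1
        error_writing_idoc_status      = 2
        error_in_idoc_data             = 3
        sending_logical_system_unknown = 4
        OTHERS                         = 5.

    COMMIT WORK.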
    I think you get my point.
    If you find it useful, please mark the points.
    ~~Guduri
