Decimal in PSA based on Flat File

Hi,
I have a datasource based on Flat File.
In the preview it shows the decimal correctly (e.g. 23870.20) against the Amount field, but when I load the data into the PSA the output is 2387020.
What could be the issue?
Thanks & regards,
Shilpi Gupt

Hi Gupta,
Please try the steps below:
Go to RSA1 > System > User Profile > Own Data > Defaults tab, and set the Decimal Notation to the second option (1234567.89).
Hopefully that helps.
Also try changing the Field Separator in the DataSource Extraction tab (Data Separator) to , (comma) and the Escape Sign to ".
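For what it's worth, the mix-up is easy to reproduce outside BW. A minimal Python sketch (an illustration only, not BW code) of what happens when the load treats "." as a thousands separator instead of the decimal point:
# Illustration: with the wrong notation, "." is stripped as a thousands
# separator instead of being read as the decimal point.
def parse_amount(raw, thousands_sep=".", decimal_sep=","):
    return float(raw.replace(thousands_sep, "").replace(decimal_sep, "."))

print(parse_amount("23870.20"))                                     # 2387020.0 -- the value seen in the PSA
print(parse_amount("23870.20", thousands_sep=",", decimal_sep="."))  # 23870.2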
Regards,
Syed

Similar Messages

  • Decimal places in currencies from flat files

    Hi experts,
    I am facing a problem in the way BI manages decimal places when I extract them from text files. I have read a lot of posts and made a lot of changes, and the problem persists. Here is my problem: I am uploading data from text files. These files come from 6 different source systems. All developments are already in production, so changing the way they send the data would be the most expensive solution at this point.
    All the currency ratios I receive in the file can contain these 3 kinds of values:
    5.45
    3.2
    4
    The decimal indicator is "." but it is not always present, and sometimes it is followed by 1 or 2 decimal places, which is logical to expect depending on the nature of the value. I haven't been able to make it work for all the cases.
    I have set "." as the decimal separator in RSCUSTV1. I tried number format "direct entry" in the Extraction tab of the DataSource. I have tried all possible combinations in the Fields tab with data types CURR, DEC, even CHAR, and Internal or External format, and every configuration I have made works only for some cases and not for the others. I don't know what else I can look at. Your help will be highly appreciated and rewarded with points.
    Regards,
    Raimundo Alvarez

    Hi guys, and thanks for your help.
    I could solve my problem. Some issues were already fixed (a combination of the external property in the Fields tab, the configuration of the decimal separator, and a problem with the currency). The main one giving me the headache was that I am importing a currency amount but was not receiving the currency from the source, because it is always the same (Colombian Pesos). That currency has a special configuration in table TCURX which was making the decimal places come out wrong. I solved it by requesting the currency from the source even though it is a constant value.
    Thanks for your help,
    Rai.

  • Decimal point separator in flat file load

    Hi all
    In BI 7.0 I'm stuck trying to load a CSV flat file via DTP (without extracting from PSA). The file I have to load has a euro amount field in this format:
    143565,56
    but the load fails. The only way to load it is to open the CSV with Notepad and "find and replace" the commas with dots before starting the load, in order to have:
    143565.56
    I tried to modify the SU01 settings for the decimal separator, but without success, and I can't find any setting in the DTP. Is there a way to load amounts with the comma as the decimal separator, avoiding the "find & replace" on the file?
    Thank you in advance
    Francesco

    Sorry, but your hints don't work for me.
    I checked the DataSource Extraction tab and specified the comma as the decimal separator in User Select Entry, but without success (I tried both extracting from PSA and not).
    I tried the SU01 user setting for the decimal separator, but it was already right (comma); I think SU01 only controls the visualization of amounts, for example in the BEx.
    Before the upgrade to 7.0 the right decimal separator was the comma, and we have a lot of Excel macros that create CSVs with that decimal separator, so it's becoming a serious problem...
    Just a question: is there a transaction code where I can check the BW system setting for decimal and thousand separators (not the user-specific ones)?
    Thank you all
    Francesco
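    Until the system-side separator question is settled, the "find & replace" workaround can at least be scripted instead of done in Notepad. A minimal Python sketch, assuming the fields are separated by semicolons (as Excel typically writes them when the comma is the decimal separator); the file names are placeholders:
    import csv

    # Rewrite amounts like 143565,56 as 143565.56 before the load.
    # With ";" as the field separator, a "," can only be a decimal separator.
    with open("input.csv", newline="") as src, open("output.csv", "w", newline="") as dst:
        reader = csv.reader(src, delimiter=";")
        writer = csv.writer(dst, delimiter=";")
        for row in reader:
            writer.writerow([field.replace(",", ".") for field in row])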

  • Include Flat file name as a field in the PSA / DSO.

    Hi,
    I have a requirement to also include the flat file name in a DSO that is loaded from a flat file source. I have created a new InfoObject for the file name in the DSO. Can you guide me on which table I can get this information from?
    I was thinking of reading this information, in a routine in the transformation to the DSO, from the table which stores the file name.
    I came across the table RSLDPSEL; it stores the InfoPackage name and file name. But this does not work when I am using the same InfoPackage to load multiple files. Thanks in advance.
    Regards,
    Hari.

    Hi,
    The main challenge is to get the technical name of the InfoPackage in the transformation (field routine) from the PSA (DataSource) to the DSO for the file name InfoObject.
    Is there any table which stores the InfoPackage and the corresponding data target information in BI 7.0, so that I could use a field routine in the transformation to get the InfoPackage ID based on the data target (DSO)?
    Once I have the technical name of the InfoPackage, my logic is to query the RSLDPSEL table and find the file name based on the InfoPackage ID.
    Please share your thoughts on how this scenario could be realised.
    Thanks,
    Regards,
    Hari

  • Extract PSA Data to a Flat File

    Hello,
    I would like to download PSA data into a flat file, so I can load it into SQL Server and run SQL statements on it for analysis purposes. I have tried creating a PSA export DataSource; however, I can't find a way to cleanly get the data into a flat structure for analysis and/or download.
    Can anyone suggest options for doing this?
    Thanks in advance for your help. 
    Sincerely,
    Sonya

    Hi Sonya,
    In the PSA screen try pressing Shift and F8. If this does not bring up the file types, then you can try the following: Settings > Change Display Variants > View tab > Microsoft Excel > click SAP_OM.xls and then the green check mark. The data will be displayed in Excel format, which you can save to your PC.
    Hope this helps...

  • Loading of flat file (csv) into PSA – no data loaded

    Hi BW-gurus,
    We have an issue loading a flat file (CSV) into the PSA using an InfoPackage (BI 7.0).
    The InfoPackage has been used for a while. Previously, consultants with the SAP_ALL profile ran the InfoPackage. Now we want a few super users to run it.
    We have created a role for the super users, including authorization objects:
    Data Warehousing objects: S_RS_ADMWB
    Activity: 03, 16, 23, 63, 66
    Data Warehousing Workbench obj: INFOAREA, INFOOBJECT, INFOPACKAG, MONITOR, SOURCESYS, WORKBENCH
    Data Warehousing Workbench - datasource (version > BW 3.x): S_RS_DS
    Activity: All
    Datasource: All
    Subobject for New DataSource: All
    Sourcesystem: FILE
    Data Warehousing Workbench - infosource (flex update): S_RS_ISOUR
    Activity: Display, Maintain, Request
    Application Component: All
    InfoSource: All
    InfoSource Subobject: All values
    As mentioned, the InfoPackage in question has been used by consultants with the SAP_ALL profile for some time and has been working just fine. When the super users with the new role execute the InfoPackage, the records are found but not loaded into the PSA. The load seems to be stuck, but no error message occurs. The file we are trying to load contains only 15 records.
    Details monitor:
    Overall status: Missing messages or warnings (yellow)
    Requests (messages): Everything ok (green)
      ->  Data request arranged (green)
      ->  Confirmed with: OK (green)
    Extraction (messages): Errors occurred (yellow)
      ->  Data request received (green)
      -> Data selection scheduled (green)
      -> 15 Records sent (0 Records received) (yellow)
      -> Data selection ended (green)
    Transfer (IDocs and TRFC): Missing messages (yellow)
    Processing (data packet):  Warnings received (yellow)
      -> Data package 1 (? Records): Missing messages (yellow)
         -> Inbound processing (0 records): Missing messages (yellow)
         -> Update PSA (0 Records posted): Missing messages (yellow)
         -> Processing end: Missing messages (yellow)
    Have we forgotten something? Any assistance will be highly appreciated!
    Cheers,
    Anne Therese S. Johannessen

    Hi,
    Try using transaction ST01 to trace the authorizations of the upload under SAP_ALL,
    and then enhance the profile for your super user accordingly.
    Best regards
    Matthias

  • Add new line in the Flat file based on the field value

    Hi,
    Following is my Flat File -
    Customer   X   Y
    1001       1   2
    1002       0   1
    Based on the X and Y values I need to add new lines in the flat file: if X>0, add a new line repeating the row, and if Y>0, add another new line repeating the row. If X or Y = 0, there is no need to add any repeating line.
    So, here for the above example I need output as-
    Customer   X   Y
    1001       1   2
    1001       1   2
    1001       1   2
    1002       0   1
    1002       0   1
    How can we achieve this?
    Regards,
    Tridib Konwar 

    Hi Tridib,
    I tried your scenario, and you will have to use a custom XSLT to get the expected result.
    Please find below the XSLT code which you can use in your map.
    <?xml version="1.0" encoding="utf-16" ?>
    <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt" xmlns:var="http://schemas.microsoft.com/BizTalk/2003/var" exclude-result-prefixes="msxsl var" version="1.0" xmlns:ns0="http://PracticeAtul.XYFlatFileSchema">
      <xsl:output omit-xml-declaration="yes" method="xml" version="1.0" />
      <xsl:template match="/">
        <xsl:apply-templates select="/ns0:XYComp" />
      </xsl:template>
      <xsl:template match="/ns0:XYComp">
        <ns0:XYComp>
          <XYComp_Child1>
            <XYComp_Child1_Child1>
              <xsl:value-of select="XYComp_Child1/XYComp_Child1_Child1/text()" />
            </XYComp_Child1_Child1>
            <XYComp_Child1_Child2>
              <xsl:value-of select="XYComp_Child1/XYComp_Child1_Child2/text()" />
            </XYComp_Child1_Child2>
            <XYComp_Child1_Child3>
              <xsl:value-of select="XYComp_Child1/XYComp_Child1_Child3/text()" />
            </XYComp_Child1_Child3>
            <xsl:value-of select="XYComp_Child1/text()" />
          </XYComp_Child1>
          <xsl:for-each select="XYComp_Child2">
            <XYComp_Child2>
              <XYComp_Child2_Child1>
                <xsl:value-of select="XYComp_Child2_Child1/text()" />
              </XYComp_Child2_Child1>
              <XYComp_Child2_Child2>
                <xsl:value-of select="XYComp_Child2_Child2/text()" />
              </XYComp_Child2_Child2>
              <XYComp_Child2_Child3>
                <xsl:value-of select="XYComp_Child2_Child3/text()" />
              </XYComp_Child2_Child3>
            </XYComp_Child2>
            <xsl:if test="XYComp_Child2_Child2/text()!=0">
              <XYComp_Child2>
                <XYComp_Child2_Child1>
                  <xsl:value-of select="XYComp_Child2_Child1/text()" />
                </XYComp_Child2_Child1>
                <XYComp_Child2_Child2>
                  <xsl:value-of select="XYComp_Child2_Child2/text()" />
                </XYComp_Child2_Child2>
                <XYComp_Child2_Child3>
                  <xsl:value-of select="XYComp_Child2_Child3/text()" />
                </XYComp_Child2_Child3>
              </XYComp_Child2>
            </xsl:if>
            <xsl:if test="XYComp_Child2_Child3/text()!=0">
              <XYComp_Child2>
                <XYComp_Child2_Child1>
                  <xsl:value-of select="XYComp_Child2_Child1/text()" />
                </XYComp_Child2_Child1>
                <XYComp_Child2_Child2>
                  <xsl:value-of select="XYComp_Child2_Child2/text()" />
                </XYComp_Child2_Child2>
                <XYComp_Child2_Child3>
                  <xsl:value-of select="XYComp_Child2_Child3/text()" />
                </XYComp_Child2_Child3>
              </XYComp_Child2>
            </xsl:if>
          </xsl:for-each>
        </ns0:XYComp>
      </xsl:template>
    </xsl:stylesheet>
    Atul Toke

  • Errors during Loading TD from Flat file to PSA

    Hi Guys,
    I got some errors while loading the flat file transaction data into PSA using infopackage.
    Could you guys help me solve the errors below? It would be much appreciated if you could also send some material which explains these types of errors in more detail.
    Points will be awarded.
    Field /BIC/ZIO_PRIC ( Position 4 ): External length specification will be ignored
    Message no. RSDS101
    Diagnosis
    The value 23 is specified as output length in Field /BIC/ZIO_PRIC ( Position 4 ). It is also specified that the data is shipped in internal format.
    System Response
    The length specification for output length is ignored. If the data in the source actually have the specified lengths, this results in conversion errors.
    Procedure
    Select the external format if required.
    Field /BIC/ZIO_PRIC ( Position 4 ): Amount field in internal format; check settings
    Message no. RSDS099
    Diagnosis
    Setting made for Field /BIC/ZIO_PRIC ( Position 4 ) states that the data from the source is available in the internal format.
    This is not usually the case for this source system type.
    System Response
    If you do not change this setting, the data has to be available in the internal SAP format.
    Example:
    For the Japanese Yen this means: 1000 JPY has to be delivered from the source as 10.00 JPY.
    Procedure
    Check the data format of the source and where necessary, change the setting to External.
    Field /BIC/ZIO_PRIC ( Position 4 ): Missing reference to currency field/unit field
    Message no. RSDS151
    Diagnosis
    If an amount or quantity field is not delivered with values in internal format, an associated currency field or unit field must be specified so that the currency or units can be converted during data import.
    System Response
    The DataSource can not be activated.
    Procedure
    Specify a reference field if the data is delivered in external format.
    Specify a reference field even if the currency or unit is constant. Fields with fixed values are not part of the data source; they are filled by the system when loaded.

    Hi Juergen,
    The order of the fields is the same as in the DataSource. I didn't create a field for the currency in the flat file, since the key figure was created with the fixed currency USD. I don't think we have to create such a field unless we use 0CURRENCY as the unit for the price instead of a fixed currency.
    Though we got the above error, it still allows the DataSource to be activated, but 10.00 USD shows up as 1000 USD.
    Cheers,
    Shrinu
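    The 10.00 USD showing up as 1000 USD is exactly the internal/external shift that message RSDS099 describes. A minimal Python sketch of that shift, using the JPY example from the message text; the decimals table mimics what TCURX configures, and the helper itself is hypothetical, not SAP code:
    # Internal SAP amounts always carry two decimal places; currencies with a
    # different number of decimals (configured in TCURX) are shifted to match.
    DECIMALS = {"USD": 2, "JPY": 0}  # illustrative TCURX-style entries

    def to_internal(external_amount, currency):
        return external_amount * 10 ** (DECIMALS[currency] - 2)

    print(to_internal(1000, "JPY"))   # 10.0 -> 1000 JPY is stored as 10.00
    print(to_internal(10.00, "USD"))  # 10.0 -> USD amounts are unchanged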

  • Merging of flat files based on some value in the file.

    Hi all,
    I have a requirement in which I need to pick up some flat files (say N files).
    The third row in each file needs to be read, and all files having the same value in the 3rd row need to be merged into a single file.
    There should be as many output files as there are distinct values in row 3 across all files.
    All the files sharing a 3rd-row value will have a sequence number in the 4th row. While merging, we need to take care of the sequence number and merge based on it.
    All the files need to be placed at some FTP location
    Thanks
    Jai

    Thanks for your response, Raj.
    My input files don't have any fixed length or structure, so it is not possible to use FCC. Can you please explain your point in detail?
    My approach is to store the files at some intermediate location and go with two interfaces.
    If you have any other idea please let me know.
    Thanks
    Jai
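    If an intermediate store is acceptable, the grouping logic itself is small. A minimal Python sketch of the requirement as stated (the directory, file pattern and output names are placeholders, and the final FTP upload step is left out; rows are counted from 1, so rows 3 and 4 are indices 2 and 3):
    import glob
    from collections import defaultdict

    groups = defaultdict(list)
    for path in glob.glob("incoming/*.txt"):  # placeholder location
        with open(path) as f:
            lines = f.read().splitlines()
        # 3rd row: grouping value, 4th row: sequence number
        groups[lines[2]].append((int(lines[3]), lines))

    # One output file per distinct 3rd-row value, merged in sequence order.
    for key, members in groups.items():
        with open("merged_%s.txt" % key, "w") as out:
            for _, lines in sorted(members):
                out.write("\n".join(lines) + "\n")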

  • Import flat file to multiple tables based on identifier column

    Hello,
    I am trying to set up a package that will import one pipe-delimited flat file (a utility bill) into multiple data tables based on the value of the first column. I have been told it is similar in format to an EDI file, but there are some differences.
    The number of columns is consistent where the first columns are the same. That is, a record that has '00' in the first column will always have 10 columns; a record that has '01' in the first column will always have 9 columns; etc.
    Each value in the first column represents a separate destination data table. That is, a record that has '00' in the first column should be output to table '00'; a record that has '01' in the first column should be output to table '01'; etc. All destination tables reside on the same SQL Server.
    Identifier columns can repeat multiple times throughout the flat file. That is, a record that starts with '01' may be repeated multiple times in the same file.
    Sample Data:
    00|XXXXXXXX|XXX|XXXXXXXX|XXXXXX|XXXX|X|XXXXXXXXXX|XX|XXXXX
    01|XXXXXXXXXXX|XXX|XXXXXXXX|XXXXX|XXXXXXXXXXXXXXXXXXXX|XXXXXXXXXX|XXXXXXX|XXXXXXXXXXXXXX
    02|XXXXXXXXXXX|XXXXXXXX|XXXXXXX|XXXXX|XXXXX|XXXXX|XXXXX|XXXXX|XXXXX|XXXXX|XXXXX
    04|XXXXXXXXXXX|XXXXXXXXXXXXX|XXX|XXXXXXXX
    05|XXXXXXXXXXX|XXXXXXXXXXXXX|XXX|XXXXXXXX|XXXX
    07|XXXXXXXXXXXXX|X|XXXXXXXXXXXXXXX|XXX|XXXXXXXX|XXXX|XXXXXXX|XXXXXXXXXXX
    07|XXXXXXXXXXXXX|X|XXXXXXXXXXXXXXX|XXX|XXXXXXXX|XXXX|XXXXXXX|XXXXXXXXXXX
    07|XXXXXXXXXXXXX|X|XXXXXXXXXXXXXXX|XXX|XXXXXXXX|XXXX|XXXXXXX|XXXXXXXXXXX
    07|XXXXXXXXXXXXX|X|XXXXXXXXXXXXXXX|XXX|XXXXXXXX|XXXX|XXXXXXX|XXXXXXXXXXX
    01|XXXXXXXXXXX|XXX|XXXXXXXX|XXXXX|XXXXXXXXXXXXXXXXXXXX|XXXXXXXXXX|XXXXXXX|XXXXXXXXXXXXXX
    02|XXXXXXXXXXX|XXXXXXXX|XXXXXXX|XXXXX|XXXXX|XXXXX|XXXXX|XXXXX|XXXXX|XXXXX|XXXXX
    04|XXXXXXXXXXX|XXXXXXXXXXXXX|XXX|XXXXXXXX
    Any help would be appreciated.

    Hi koldar.308,
    If there are few distinct values in the first column, we can use a Flat File Source to connect to that flat file, then use a Conditional Split Transformation to split on the first column, and then load the data into multiple tables with OLE DB Destinations based on the outputs of the Conditional Split.
    After testing the issue in my environment, please refer to the following steps to achieve this requirement:
    Drag a Flat File Source and connect it to that flat file with a Flat File Connection Manager.
    Drag a Conditional Split Transformation and connect it to the Flat File Source.
    Double-click the Conditional Split Transformation and add several outputs based on the first-column values.
    Drag the same number of OLE DB Destinations as there are outputs of the Conditional Split, and connect each to the Conditional Split with one case output.
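    For reference, the routing logic of the Conditional Split can also be expressed outside SSIS. A minimal Python sketch (the input file name and the per-identifier staging files are placeholders standing in for the destination tables):
    # Route each record by its identifier column, as the Conditional Split does.
    outputs = {}
    with open("utility_bill.txt") as src:  # placeholder input file
        for line in src:
            identifier = line.split("|", 1)[0]
            if identifier not in outputs:
                # One destination per identifier, standing in for table '00', '01', ...
                outputs[identifier] = open("table_%s.txt" % identifier, "w")
            outputs[identifier].write(line)
    for handle in outputs.values():
        handle.close()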
    If there are any other questions, please feel free to ask.
    Thanks,
    Katherine Xiong

  • No PSA for Infosource X and source system Y. Here Y is the flat file source

    Hi,
    We are facing an issue while transporting from the dev system to quality: "There is no PSA for InfoSource X and source system Y", where Y is the flat file source system. We are getting this issue while transporting transfer rules. We checked OSS Note 518426; it is not helpful.
    Thanks
    Manjula

    Hi
    Try RS_TRANSTRU_ACTIVATE_ALL (SE38).
    You may also check SAP Note 861890 (ODS tables disappear during the upgrade),
    activate all ODS objects before the upgrade, and run the program RSDG_ODSO_ACTIVATE.
    Note also that with Stack 14 an extended version of this program is available, together with the
    program RSUPGRCHECK. If errors occur, consult also SAP Note 518426 and run transaction
    RSSGPCLA to regenerate the RSDRO_* objects.
    Recheck whether you exported with the correct parameters or not; try to re-export the same with all necessary objects.
    Try Notes 524554 and 325525 also.
    Hope it helps.

  • How to transport a PSA that loads from a flat file

    Hi All,
    I created a PSA which is populated from a flat file. I need to transport this PSA. When I open the transport connection, the PSA is not there.
    Kind regards,
    Cheryl Adamonis

    Hi Cheryl,
    Check whether your DataSource is collected in the transport request. If it is, then you can transport it; the PSA will automatically be available along with the DataSource.
    Hope this helps!
    Regards,
    Pavan

  • Data loading from flat file to cube using BW 3.5

    Hi Experts,
    Kindly give me the detailed steps, with screens, for data loading from a flat file to a cube using BW 3.5. Please.

    Hi ,
    Procedure
    You are in the Data Warehousing Workbench in the DataSource tree.
           1.      Select the application components in which you want to create the DataSource and choose Create DataSource.
           2.      On the next screen, enter a technical name for the DataSource, select the type of DataSource and choose Copy.
    The DataSource maintenance screen appears.
           3.      Go to the General tab page.
                                a.      Enter descriptions for the DataSource (short, medium, long).
                                b.      As required, specify whether the DataSource builds an initial non-cumulative and can return duplicate data records within a request.
                                c.      Specify whether you want to generate the PSA for the DataSource in the character format. If the PSA is not typed it is not generated in a typed structure but is generated with character-like fields of type CHAR only.
    Use this option if conversion during loading causes problems, for example, because there is no appropriate conversion routine, or if the source cannot guarantee that data is loaded with the correct data type.
    In this case, after you have activated the DataSource you can load data into the PSA and correct it there.
           4.      Go to the Extraction tab page.
                                a.      Define the delta process for the DataSource.
                                b.      Specify whether you want the DataSource to support direct access to data.
                                c.      Real-time data acquisition is not supported for data transfer from files.
                                d.      Select the adapter for the data transfer. You can load text files or binary files from your local work station or from the application server.
    Text-type files only contain characters that can be displayed and read as text. CSV and ASCII files are examples of text files. For CSV files you have to specify a character that separates the individual field values. In BI, you have to specify this separator character and an escape character which specifies this character as a component of the value if required. After specifying these characters, you have to use them in the file. ASCII files contain data in a specified length. The defined field length in the file must be the same as the assigned field in BI.
    Binary files contain data in the form of Bytes. A file of this type can contain any type of Byte value, including Bytes that cannot be displayed or read as text. In this case, the field values in the file have to be the same as the internal format of the assigned field in BI.
    Choose Properties if you want to display the general adapter properties.
                                e.      Select the path to the file that you want to load or enter the name of the file directly, for example C:/Daten/US/Kosten97.csv.
    You can also create a routine that determines the name of your file. If you do not create a routine to determine the name of the file, the system reads the file name directly from the File Name field.
                                  f.      Depending on the adapter and the file to be loaded, make further settings.
    ■       For binary files:
    Specify the character record settings for the data that you want to transfer.
    ■       Text-type files:
    Specify how many rows in your file are header rows and can therefore be ignored when the data is transferred.
    Specify the character record settings for the data that you want to transfer.
    For ASCII files:
    If you are loading data from an ASCII file, the data is requested with a fixed data record length.
    For CSV files:
    If you are loading data from an Excel CSV file, specify the data separator and the escape character.
    Specify the separator that your file uses to divide the fields in the Data Separator field.
    If the data separator character is a part of the value, the file indicates this by enclosing the value in particular start and end characters. Enter these start and end characters in the Escape Characters field.
    You chose the ; character as the data separator. However, your file contains the value 12;45 for a field. If you set " as the escape character, the value in the file must be "12;45" so that 12;45 is loaded into BI. The complete value that you want to transfer has to be enclosed by the escape characters (a small illustration follows at the end of this procedure).
    If the escape characters do not enclose the value but are used within the value, the system interprets the escape characters as a normal part of the value. If you have specified " as the escape character, the value 12"45 is transferred as 12"45, and 12"45" is transferred as 12"45".
    In a text editor (for example, Notepad) check the data separator and the escape character currently being used in the file. These depend on the country version of the file you used.
    Note that if you do not specify an escape character, the space character is interpreted as the escape character. We recommend that you use a different character as the escape character.
    If you select the Hex indicator, you can specify the data separator and the escape character in hexadecimal format. When you enter a character for the data separator and the escape character, these are displayed as hexadecimal code after the entries have been checked. A two character entry for a data separator or an escape sign is always interpreted as a hexadecimal entry.
                                g.      Make the settings for the number format (thousand separator and character used to represent a decimal point), as required.
                                h.      Make the settings for currency conversion, as required.
                                  i.      Make any further settings that are dependent on your selection, as required.
           5.      Go to the Proposal tab page.
    This tab page is only relevant for CSV files. For files in different formats, define the field list on the Fields tab page.
    Here you create a proposal for the field list of the DataSource based on the sample data from your CSV file.
                                a.      Specify the number of data records that you want to load and choose Upload Sample Data.
    The data is displayed in the upper area of the tab page in the format of your file.
    The system displays the proposal for the field list in the lower area of the tab page.
                                b.      In the table of proposed fields, use Copy to Field List to select the fields you want to copy to the field list of the DataSource. All fields are selected by default.
           6.      Go to the Fields tab page.
    Here you edit the fields that you transferred to the field list of the DataSource from the Proposal tab page. If you did not transfer the field list from a proposal, you can define the fields of the DataSource here.
                                a.      To define a field, choose Insert Row and specify a field name.
                                b.      Under Transfer, specify the decision-relevant DataSource fields that you want to be available for extraction and transferred to BI.
                                c.      Instead of generating a proposal for the field list, you can enter InfoObjects to define the fields of the DataSource. Under Template InfoObject, specify InfoObjects for the fields in BI. This allows you to transfer the technical properties of the InfoObjects into the DataSource field.
    Entering InfoObjects here does not equate to assigning them to DataSource fields. Assignments are made in the transformation. When you define the transformation, the system proposes the InfoObjects you entered here as InfoObjects that you might want to assign to a field.
                                d.      Change the data type of the field if required.
                                e.      Specify the key fields of the DataSource.
    These fields are generated as a secondary index in the PSA. This is important in ensuring good performance for data transfer process selections, in particular with semantic grouping.
                                  f.      Specify whether lowercase is supported.
                                g.      Specify whether the source provides the data in the internal or external format.
                                h.      If you choose the external format, ensure that the output length of the field (external length) is correct. Change the entries, as required.
                                  i.      If required, specify a conversion routine that converts data from an external format into an internal format.
                                  j.      Select the fields that you want to be able to set selection criteria for when scheduling a data request using an InfoPackage. Data for this type of field is transferred in accordance with the selection criteria specified in the InfoPackage.
                                k.      Choose the selection options (such as EQ, BT) that you want to be available for selection in the InfoPackage.
                                  l.      Under Field Type, specify whether the data to be selected is language-dependent or time-dependent, as required.
           7.      Check, save and activate the DataSource.
           8.      Go to the Preview tab page.
    If you select Read Preview Data, the number of data records you specified in your field selection is displayed in a preview.
    This function allows you to check whether the data formats and data are correct.
    For More Info:  http://help.sap.com/saphelp_nw70/helpdata/EN/43/01ed2fe3811a77e10000000a422035/content.htm
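    As promised in step 4f above, here is a small illustration of the data separator / escape character rules; they map directly onto ordinary CSV quoting. A minimal Python sketch of the 12;45 example (illustration only, not BW code):
    import csv, io

    # ";" as the data separator, '"' as the escape character:
    # the enclosed "12;45" survives as a single field value.
    sample = 'name;amount\nwidget;"12;45"\n'
    for row in csv.reader(io.StringIO(sample), delimiter=";", quotechar='"'):
        print(row)
    # ['name', 'amount']
    # ['widget', '12;45']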

  • Flat File automation process - limitations

    Hello Everyone,
    I would really appreciate any insight on the process improvement suggestions.
    Background:
    Currently we have around 12 territories providing a new flat file with new data on a daily basis, depending on business activity. This also means that on a given day with no activity, no flat file is provided to BI for the loading process.
    The flat files provided need to be loaded into the BI system (PSA - DSO - InfoCube).
    The flat file loading process has been automated for the daily file by implementing the logical file name for each territory.
    1. The process variant in the process chain is to ensure if the flat file is available on the App server (Custom ABAP program).
    2. 12 InfoPackages have been created to pick the data from the flat file on the app server and load the data over into the PSA.
    3. All the InfoPackages merge into an "AND" event in the process chain before the DTP load into the DSO kicks off.
    4. DSO Activation
    5. Recon between the flat file and the DSO to ensure all the data from flat file has been loaded into the DSO.
    6. DTP loads into the InfoCube.
    7. Recon between the InfoCube and the DSO itself.
    8. Moving the flat file from one folder into another.
    All the above processes are automatically performed without any issues if the flat file is available on the server.
    Problem / Issue:
    One of the major limitations of the above design is that the flat file absolutely needs to be available on the app server for the whole data flow in the process chain to continue without any breakpoints.
    Current workaround / process improvement in place:
    Based on the above limitation, and upon further research, I was able to apply the OSS Note that gives us the option of maintaining multiple DTPs for the same data target with different filter values.
    However, even with an individual data stream for each territory and a separate DTP, the issue still remains: the process variant (the ABAP program that checks whether the file exists), or the InfoPackage load if the ABAP program is removed, will fail.
    Because of this, the support team is alerted about the process chain failure.
    Question / Suggestions required:
    The main question, and any suggestions are welcome: is there an approach where the flat file check program doesn't have to give a hard failure in the process chain, so that the rest of the process chain can continue with the loading process? (For the rest of the process chain to continue, the only options we have are Error, Success, Always.)
    I have also looked into the Decision process variant available in the process chain, but based on the options available within it, I cannot utilize it for the loading process.
    Error can be raised by generating an error message in the ABAP program, but that in turn causes an alert to be sent even if the rest of the process chain finishes.
    Success would mean the flat file needs to be available. Always cannot be used in this case, as it will cause a failure at the InfoPackage level.
    If the InfoPackage load can be skipped without a hard error being generated, the process chain does not have to remain in a failed state, which in turn will not trigger any alert to the support team.
    Please do let me know if you need more details about the above process improvement question.
    Thanks
    Dharma.

    The main issue with this, as you mentioned, is that the file has to be available for sure.
    We had a similar issue: a very critical data load had to happen every day, and failure to load the file would mean that the reports for the day would be delayed.
    We were running on UNIX, and we implemented a simple UNIX loop that would not complete until the file was available on the app server.
    Something like this (the path is a placeholder):
    while [ ! -f /path/to/daily_file.csv ]
    do
      sleep 15
    done
    You come out of the while loop only when the file becomes available.
    You can write a similar ABAP program to check file availability if required and put it into your process chain.
    We also had a failover process: if the file did not arrive after a certain number of tries, we created a zero-byte file with the same name, so the PSA would load zero records and the data load would continue.

  • Mapping with both source and destination as flat files???

    Hi, I have two flat files (large data), for example A and B.
    Let us say:
    A has records of format (characters of size 5, numbers of size 6, characters of size 5)
    B has records of format (characters of size 5, numbers of size 6)
    I have to map these flat files so that the number fields in both files are added wherever the character fields in both files are the same, and output a flat file C.
    i.e
    A(aaaaa111111bbbbb222222ccccc111111
    bbbbb111111fffff666666ddddd333333)
    B (aaaaa222222)
    output should be(aaaaa333333)
    I have created the flat file module and was able to sample A and B.
    I have also created an external table based on A and B, but the data is not shown in the external table. How do I map this?
    Please guide me.
    Sorry for the long post.
    Thanks for your time.

    Sounds like your datatypes/settings are incorrect.
    To process a file (let's call it stuff.txt) with fixed length records such as the following...
    aaaaa111111bbbbb222222ccccc111111bbbbb111111fffff666666ddddd
    Here is an example Tcl script. There are some variables you have to set up for the flat file module, the Oracle module, the file location and the project name, all of which should exist before running. It will create the flat file, the external table, and a simple mapping from the external table to a flat file defined by the Tcl variable target_file (in the same directory as LOC_SRC_FILES; you can change this, it's just for demo purposes, and it will write a comma-separated file). Hopefully this will get you up and going with your problem...
    # Create the modules etc and set the values below, then run
    set project MY_PROJECT
    set ff_module FF
    set ff_location LOC_SRC_FILES
    set ora_module MM
    set target_file my_target_file
    OMBCC '/$OMB_CURRENT_PROJECT'
    OMBDCC
    OMBCC '$ff_module'
    OMBCREATE FLAT_FILE 'FSTUFF' SET PROPERTIES (DATA_FILE_NAME,IS_DELIMITED, RECORD_LENGTH) VALUES ('stuff.txt',0, '16') ADD RECORD 'FSTUFF'
    OMBALTER FLAT_FILE 'FSTUFF' MODIFY RECORD 'FSTUFF' ADD FIELD 'FIELDA' SET PROPERTIES (DATATYPE,START_POSITION,END_POSITION,MAXIMUM_LENGTH) VALUES ('CHAR',1,5,5)
    OMBALTER FLAT_FILE 'FSTUFF' MODIFY RECORD 'FSTUFF' ADD FIELD 'FIELDB' SET PROPERTIES (DATATYPE,START_POSITION,END_POSITION,MAXIMUM_LENGTH) VALUES ('DECIMAL EXTERNAL',6,11,6)
    OMBALTER FLAT_FILE 'FSTUFF' MODIFY RECORD 'FSTUFF' ADD FIELD 'FIELDC' SET PROPERTIES (DATATYPE,START_POSITION,END_POSITION,MAXIMUM_LENGTH) VALUES ('CHAR',12,16,5)
    OMBCC '../$ora_module'
    OMBCREATE EXTERNAL_TABLE 'FSTUFF_EXT' SET PROPERTIES(LOAD_NULLS_WHEN_MISSING_VALUES,TRIM) VALUES (1, 'RIGHT') SET REFERENCE RECORD 'FSTUFF' OF FLAT_FILE '../$ff_module/FSTUFF' DEFAULT_LOCATION '$ff_location'
    OMBCREATE MAPPING 'FILE_TO_FILE'
    OMBALTER MAPPING 'FILE_TO_FILE' ADD EXTERNAL_TABLE OPERATOR 'SOURCE_STUFF' BOUND TO EXTERNAL_TABLE 'FSTUFF_EXT'
    OMBALTER MAPPING 'FILE_TO_FILE' ADD FLAT_FILE OPERATOR 'TARGET_FILE'
    OMBALTER MAPPING 'FILE_TO_FILE' ADD CONNECTION FROM GROUP 'OUTGRP1' OF OPERATOR 'SOURCE_STUFF' TO GROUP 'INOUTGRP1' OF OPERATOR 'TARGET_FILE'
    OMBALTER MAPPING 'FILE_TO_FILE' SET PROPERTIES (GENERATION_LANGUAGE) VALUES ('PLSQL')
    OMBALTER MAPPING 'FILE_TO_FILE' MODIFY OPERATOR 'TARGET_FILE' SET PROPERTIES (TARGET_DATA_FILE_NAME) VALUES ('$target_file')
    OMBALTER MAPPING 'FILE_TO_FILE' MODIFY OPERATOR 'TARGET_FILE' SET PROPERTIES (TARGET_DATA_FILE_LOCATION) VALUES ('$ff_location')
    You can do all this in the UI, just thought it would be useful as a script for you.
    Cheers
    David
