Flat file mapping issue

Hello All,
I am trying to do an extract using the flat file method in BI 7.0. I have some 150 fields in my CSV file, but I only want about 10 of them, which are scattered around the CSV file. When I use Read Preview Data in the Preview tab I see incorrect data there, and all of it under one field; basically everything lands in a single column, even though in the Extraction tab I have ; as the data separator (I also tried checking the HEX box) and " as the escape sign (tried HEX for this as well). For rows to ignore I have 1.
One thing I would like to know is how the BI InfoObject will know the position of the flat file field in the CSV file and where it will be mapped. I know it can be mapped in the transformation, but that is from the flat file DataSource; I am asking about the CSV file itself.
Please help me out and tell me what I am doing incorrectly.
Thanks for your help. Points will be assigned.

Hi,
Use , and ; as the escape signs.
The system takes care of it when you specify the path and name of the file and the format as CSV.
The system always checks the one-to-one mapping of the flat file fields against the InfoObjects in the DataSource.
Options for you:
1. Arrange the necessary fields in the flat file so that they map exactly to the InfoObjects, then start loading (see the sketch after this list).
2. Keep the file as it is, load it with the scattered fields, and in the transformation map only the required fields.
The second option unnecessarily consumes more memory.
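If you go with option 1, you do not have to rebuild the file by hand: a small script can pull out just the wanted columns, in InfoObject order. A minimal Python sketch; the file names and column positions are made-up examples, and the ; separator matches the setting above:

import csv

# Hypothetical 0-based positions of the ~10 wanted fields in the 150-column
# file, listed in the exact order of the InfoObjects in the DataSource.
WANTED = [0, 4, 17, 23, 42, 57, 88, 101, 120, 149]

with open("full_extract.csv", newline="", encoding="utf-8") as src, \
     open("bw_upload.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.reader(src, delimiter=";", quotechar='"')
    writer = csv.writer(dst, delimiter=";", quotechar='"')
    for row in reader:
        writer.writerow([row[i] for i in WANTED])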
For BI 7.0, here are the basic steps to load data from flat files (Excel/CSV files); follow the step-by-step directions below.
Uploading of master data
Log on to your SAP system.
Transaction code RSA1 takes you to the Modelling view.
1. Creation of Info Objects
• In the left panel select InfoObjects
• Create an InfoArea
• Create InfoObject catalogs (characteristics & key figures) by right-clicking the created InfoArea
• Create new characteristics and key figures under the respective catalogs according to the project requirement
• Create the required InfoObjects and activate them.
2. Creation of the Data Source
• In the left panel select DataSources
• Create an application component (AC)
• Right-click the AC and create a DataSource
• Specify the DataSource name, source system, and data type (master data attributes, texts, or hierarchies)
• In the General tab give the short, medium, and long descriptions.
• In the Extraction tab specify the file path, the number of header rows to be ignored, the data format (CSV) and the data separator (,)
• In the Proposal tab load example data and verify it.
• In the Fields tab you can give the technical names of the InfoObjects in the template; then you do not have to map during the transformation, as the system will map them automatically. If you do not map here, you have to map manually during the transformation in the InfoProvider.
• Activate the DataSource and read the preview data under the Preview tab.
• Create an InfoPackage by right-clicking the DataSource, and in the Schedule tab click Start to load the data to the PSA. (Make sure the flat file is closed during loading.)
3. Creation of data targets
• In the left panel select InfoProviders
• Select the created InfoArea and right-click to select Insert Characteristic as InfoProvider
• Select the required InfoObject (e.g. Employee ID)
• Under that InfoObject select Attributes
• Right-click Attributes and select Create Transformation.
• In the source of the transformation, select the object type (DataSource) and specify its name and source system. Note: the source system will be a temporary folder or package into which the data is stored
• Activate the created transformation
• Create a data transfer process (DTP) by right-clicking the master data attributes
• In the Extraction tab specify the extraction mode (Full)
• In the Update tab specify the error handling (e.g. request green)
• Activate the DTP, and in the Execute tab click the Execute button to load the data into the data targets.
4. Monitor
Right-click the data target, select Manage, and in the Contents tab choose Contents to view the loaded data. Alternatively, the monitor icon can be used.
BW 7.0
Uploading of Transaction data
Log on to your SAP system.
Transaction code RSA1 takes you to the Modelling view.
5. Creation of Info Objects
• In the left panel select InfoObjects
• Create an InfoArea
• Create InfoObject catalogs (characteristics & key figures) by right-clicking the created InfoArea
• Create new characteristics and key figures under the respective catalogs according to the project requirement
• Create the required InfoObjects and activate them.
6. Creation of the Data Source
• In the left panel select DataSources
• Create an application component (AC)
• Right-click the AC and create a DataSource
• Specify the DataSource name, source system, and data type (transaction data)
• In the General tab give the short, medium, and long descriptions.
• In the Extraction tab specify the file path, the number of header rows to be ignored, the data format (CSV) and the data separator (,)
• In the Proposal tab load example data and verify it.
• In the Fields tab you can give the technical names of the InfoObjects in the template; then you do not have to map during the transformation, as the system will map them automatically. If you do not map here, you have to map manually during the transformation in the InfoProvider.
• Activate the DataSource and read the preview data under the Preview tab.
• Create an InfoPackage by right-clicking the DataSource, and in the Schedule tab click Start to load the data to the PSA. (Make sure the flat file is closed during loading.)
7. Creation of data targets
• In the left panel select InfoProviders
• Select the created InfoArea and right-click to create an ODS (DataStore object) or a cube.
• Specify a name for the ODS or cube and click Create
• From the template window select the required characteristics and key figures and drag and drop them into the data fields and key fields
• Click Activate.
• Right-click the ODS or cube and select Create Transformation.
• In the source of the transformation, select the object type (DataSource) and specify its name and source system. Note: the source system will be a temporary folder or package into which the data is stored
• Activate the created transformation
• Create a data transfer process (DTP) by right-clicking the ODS or cube
• In the Extraction tab specify the extraction mode (Full)
• In the Update tab specify the error handling (e.g. request green)
• Activate the DTP, and in the Execute tab click the Execute button to load the data into the data targets.
8. Monitor
Right-click the data target, select Manage, and in the Contents tab choose Contents to view the loaded data. An ODS has two tables, the new data table and the active table; to move data from the new table to the active table, you have to activate the request after loading. Alternatively, the monitor icon can be used.
Ramesh

Similar Messages

  • Flat file truncation issue

    I am attempting to perform a fairly standard operation: extracting a table to a flat file.
    I have set all the schemas, models and interfaces, and the file is produced how I want it, apart from one thing. In the source, one field is 100 characters long, and in the output, it needs to be 25.
    I have set the destination model to have a column physical and logical length of 25.
    Looking at the documentation presented at http://docs.oracle.com/cd/E25054_01/integrate.1111/e12644/files.htm - this suggests that setting the file driver up to truncate fields should solve the issue.
    However, building a new file driver using the string 'jdbc:snps:dbfile?TRUNC_FIXED_STRINGS=TRUE&TRUNC_DEL_STRINGS=TRUE' does not appear to truncate the output.
    I noticed a discrepancy in the documentation - the page above notes 'Truncates strings to the field size for fixed files'. The help tooltip in ODI notes 'Truncates the strings from the fixed files to the field size'. Which might explain the observed lack of truncation.
    My question is - what is the way to enforce field sizes in a flat file output?
    I could truncate the fields separately in each of the mapping statements using substr, but that seems counter-intuitive and loses the benefits of the tool.
    Using ODI Version:
    Standalone Edition Version 11.1.1 - Build ODI_11.1.1.5.0_GENERIC_110422.1001
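    In the meantime, the substr-style truncation can also be applied to the file itself, outside ODI. A minimal Python sketch, assuming a comma-delimited output file and a made-up 25-character cap on the first field:

import csv

MAX_LEN = {0: 25}  # hypothetical: 0-based field index -> maximum width

with open("extract.dat", newline="") as src, \
     open("extract_trunc.dat", "w", newline="") as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    for row in reader:
        # Truncate only the fields listed in MAX_LEN; pass the rest through.
        writer.writerow([v[:MAX_LEN[i]] if i in MAX_LEN else v
                         for i, v in enumerate(row)])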

    bump
    If this is an elementary issue, please let me know what I've missed in the manual.

  • Error in flat file mapping

    I am using a flat file as the source, and when I try to validate the mapping I get the following error:
    No data file specified
    I tried to configure the mapping, but I did not find any option asking for the source file name.
    Can you please help me?
    Kiran

    Hi Kiran
    As far as I remember, when you configure the map, the map configuration property inspector has a tree on the left-hand side; if you navigate to the flat file operator and right-click on that node, you should be able to add a data file. The properties for the data file are shown on the right-hand side when it is selected in the tree.
    Cheers
    David

  • Urgent: Flat file load issue

    Hi Gurus,
    We are loading into an ODS data target via a flat file. The problem is that the flat file shows the date field values correctly with 8 characters, but when we preview, it shows 7 characters, and the load does not go through.
    Does anyone know where the problem is, i.e. why the preview screen shows 7 characters of the date when the flat file has 8?
    Thanks
    MK

    Hi Bhanu,
    How do I check whether a conversion is specified? And another thing: it is not just one field. We have 6 date fields, and all of them show 7 characters in the PSA after loading, whereas the flat file has 8 characters in all 6 of them.
    In the PSA I checked the error message; it shows 2 error messages:
    First error message:
    The value '1# ' from field /BIC/ZACTDAYS is not convertible into the DDIC data type QUAN of the InfoObject in data record 7. The field content could not be transferred into the communication structure format.
    System Response
    The data to be loaded has a data error, or field /BIC/ZACTDAYS in the transfer structure is not mapped to a suitable InfoObject. The conversion of the transfer structure to the communication structure was terminated. The processing of data records with errors was continued in accordance with the settings in error handling for the InfoPackage (tab page: Update Parameters).
    Procedure
    Check the data conformity of your data source in the source system. On the 'Assignment IOBJ - Field' tab page in the transfer rules, check the InfoObject-field assignment for transfer-structure field /BIC/ZACTDAYS. If the data was temporarily saved in the PSA, you can also check the received data for consistency in the PSA. If the data is available and is correct in the source system and in the PSA, you can activate debugging in the transfer rules by using update simulation on the Detail tab page in the monitor. This enables you to perform an error analysis for the transfer rules. You need experience of ABAP to do this.
    Second error message:
    Diagnosis
    The transfer program attempted to execute an invalid arithmetic operation.
    System Response
    Processing was terminated and indicated as incorrect. You can carry out the update again at any time you choose.
    Procedure
    1. Check that the data is accurate.
       - In particular, check that the data types and lengths of the transferred data agree with the definitions of the InfoSource, and that the fixed values and routines defined in the transfer rules are consistent with them.
       - The error may have arisen because you didn't specify the existing headers.
    2. If necessary, correct the data error and start processing again.
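    Before reloading, it can help to scan the file for values like '1# ' that will not convert to QUAN. A hedged Python sketch: the file name, the separator and the 0-based positions of the six fields are assumptions to adapt:

import csv

CHECK_COLS = [6, 7, 8, 9, 10, 11]  # hypothetical positions of the six fields

def is_numeric(value):
    try:
        float(value.strip() or "0")
        return True
    except ValueError:
        return False

with open("upload.csv", newline="") as f:
    for lineno, row in enumerate(csv.reader(f, delimiter=","), start=1):
        for col in CHECK_COLS:
            if col < len(row) and not is_numeric(row[col]):
                print(f"line {lineno}, column {col + 1}: bad value {row[col]!r}")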
    Thanks
    MK

  • OWB FLAT FILE MAPPING NOT ABLE TO DEPLOY

    Hi,
    I've recently started studying data warehousing and exploring Data Warehouse Builder version 3i.
    I've created a simple mapping from a CSV file to a database table, i.e. my source is a flat file and my target is a database table. The mapping validates successfully and the ctl script also gets generated. But the 'Deploy' and 'Run' buttons on the 'Transformations' tab of the 'Generation Results' window are disabled (greyed out), and hence I cannot deploy the generated script.
    Please do suggest the cause and solution for the same.
    Regards,
    Barkha

    Many Thanks to Winfred Deering from Oracle who replied:
    "This is the expected behavior for flat file mappings. You can register the tcl script with Oracle Enterprise Manager and run it from there. Also you can save the control file and execute via sql loader. "
    Thanks and Regards,
    Barkha

  • Flat file mapping problem.

    Hi,
    I've just created a file mapping, and I'm trying to split a flat file field into 3 subfields for an XML record, but I get the following message:
    Before running the process I tested the message mapping and the interface mapping, and the log said that the mapping ended successfully. Have you ever seen this problem before?
    I appreciate your help, thanks.
    Marzolla.

    Hi Jorge,
    Check your XML input. Try putting the same XML instance into the message mapping transformation and testing it there.
    Regards,
    Dhana

  • File Mapping Issues

    Hi all,
    I am desperately seeking some insight regarding the OSX file mapping state.
    Just recently in August, my near-8 year old G3 Wallstreet running OS 8.1 pooped-out, so I ended up buying a 1.5ghz 12" PowerBook running 10.4.0. Lovely hardware. Mmmmm.
    During the first month or so, I ran into several unfamiliar issues (as I currently need to run 'Classic' as well) that I was able to work out on my own and since have learned a lot about this new OS to eventually move as many apps as I can to OSX.
    But I keep getting sidelined every now and then with this d@mn file mapping thing re: .EPS files.
    The problem arises when I have Adobe Illustrator EPS files and Photoshop EPS files on the same hard drive. With extensions necessary (due to the cross-platform exchanges required in my biz), confusion arises. I assign a Photoshop eps file its application, then when I go to do the same with the Illustrator eps, it defaults to Photoshop, and I can't change it for the life of me. I use OnyX to 'Reset the Application Links' and start over, usually in a different sequence. Sometimes that has worked. Not always.
    It appears no one thought of this when developing OS X, short of the pithy "Internet" control panel in OS 9 that proclaims that OSX takes its cue from entries here. (Really?) At first I thought it was because I am using a mixture of 'Classic' apps and native ones. I find out from my peers this happens with them too. In OS 7/8/9 this was NEVER EVER EVER an issue.
    As a working professional who bills by the hour, OS X is making my life twice as difficult now. Every time I have to deal with this I lose 2-3 hours of billable time trying to troubleshoot this 'oversight'. This has happened 4 or 5 times now since I bought this PowerBook in August '05. I hate to say it, but even WinXP Pro is more forgiving in this area (and only in this area).
    Does anyone out there know of a sure-fire pref panel, or anything like that, where you can go in and assign file type, creator and extensions, and it actually works, rock solid? I've tried the DefaultApp pane to no avail, and everything else wants me to drag-n-drop folders to re-assign. Other tools I use for various things are OnyX, Drive Genius, MainMenu and File Adopter. (For the record, I am aware of the Cmd-i > Choose > 'Always open with' > select application routine; sometimes it works great, other times not.)
    It's a pain not knowing what TYPE of EPS was just sent to me. Vector or Bitmap. When it comes from the PC world, who knows. I used to be able to tell in OS 8.1.
    Regardless, I'd like to stop tearing my hair out and just get back to work on this lovely little piece of hardware.
    Thanx in advance!
    12 G4 PowerBook/1.5 & PB 3400c/200   Mac OS X (10.4.2)  

    I suspect the confusion is due to the things that are sent to you from Windows machines. OSX can assign things with the same extension to different applications, provided the resource fork is there and shows the creator of the file. Thus, I have a pair of .eps files created on Macs: one is an Illustrator file, sports the Illustrator icon, and is set to open by default in Illustrator; the other is a Photoshop eps file, has a custom Photoshop icon, and its default is to open in Photoshop. All well and good.
    But if there is no resource fork, then that too is a "type", and the Mac will open it with whatever default application you assign to files that have an extension of eps and no creator code. This will be the case for files received from a Windows machine. Unfortunately in this case the "type" of eps with no creator code includes two possible applications, and you can only pick one or the other as the default. Without the creator code there is no way for the Mac to tell which way to go. There may be in the future, since the actual creator may be included in the metadata, which may also be a part of Windows Adobe files (I don't have any to test, so I don't know), and in the future this may be readable by the system. That's a lot of "maybes" and doesn't help right now.
    If those who send you files tell you whether they used Illustrator or Photoshop, I believe you can batch assign a creator code using SuperGetInfo from BareBones.
    Francine Schwieder

  • Idoc to flat file mapping using XSLT

    Hi,
    I am using XSLT mapping. My requirement is a mapping between an IDoc and a flat file (XML to text). As I do not want to use FCC, I have opted for XSLT mapping. Please let me know of any article which would be helpful for this.
    regards,
    Meenakshi

    Hi Meenakshi,
    Two things:
    1. Achieving this functionality using XSLT is very difficult.
    2. You may not be able to find a document on SDN that directly covers converting IDoc-XML to a flat file using XSLT. Try Google.
    I found one link like that; maybe you can get some ideas from it:
    http://www.stylusstudio.com/SSDN/default.asp?action=9&read=6453&fid=48
    Also, if you have an XSLT editor like XMLSpy or Stylus Studio, creating your specific XSLT will be much simpler.
    Regards
    Suraj
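    For what it's worth, if XSLT proves too painful, the same XML-to-text flattening is only a few lines in a scripting language run outside XI. A sketch with a made-up segment name, just to show the shape of the conversion; it is not a PI mapping:

import xml.etree.ElementTree as ET

tree = ET.parse("idoc.xml")

with open("idoc.txt", "w", encoding="utf-8") as out:
    # E1EDKA1 is a placeholder; use the real segment names of your IDoc type.
    for segment in tree.iter("E1EDKA1"):
        fields = [(child.text or "").strip() for child in segment]
        out.write(";".join(fields) + "\n")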

  • XML to Flat File design issue

    Hi,
    I am a newbie to SSIS, but I was able to create an SSIS package which extracts XML data from one of the SQL Server columns using an Execute SQL Task and passes it to a Foreach Loop container, which contains an XML Task applying a transform to each input XML and appending it to a flat file (additionally using a Script Task within the Foreach container).
    All good so far, but now I want to apply conditional splitting to the Execute SQL Task above and additionally pipe it to an exactly similar process (a Foreach container doing the XML Task as described above) for a different kind of flat file.
    If I alter the design to use the data flow approach and apply the out-of-the-box Conditional Split, I run into not knowing how to connect and execute more than one Foreach container with the embedded XML and Script Tasks (data flow to control flow connection).
    It is easy to put everything in a Sequence container and repeat the Execute SQL Task, but to me that is very inefficient.
    Any creative ideas or pointers to some internet content that explains how this can be done most efficiently?
    Hope my question makes sense; let me know if you need more clarification.
    Thanks in advance.
    SM

    As I understand it, you're asking for a way to create conditional branches that do a different type of processing for each subset of data. You can do it like this:
    1. Add a set of tasks for the additional processing of each type of flat file and link them to the Execute SQL Task.
    2. Set the precedence constraint option to Expression and Constraint for each of them. Make the constraint OnSuccess and the expression based on your condition. You may need to create SSIS variables to capture the values of the fields used in the expressions.
    Visakh

  • JCA flat file read issue

    Hi,
    We need to read a flat file and transform it to the destination XML format, then send it to the destination file location.
    Steps we have done:
    1. Create the JCA adapter and configure the flat file schema
    2. Create a proxy based on the JCA
    3. Create the transformation from the source to the target schema (this has no namespace)
    4. Create a business service (BS) for sending the output
    Everything works as expected when testing from the OSB test console. But when the file is placed in the source folder, the output XML has the namespace xmlns:soap-env="http://schemas.xmlsoap.org/soap/envelope/" on the root node.
    e.g,
    <Root xmlns:soap-env="http://schemas.xmlsoap.org/soap/envelope/" >
    <Child>
    <Child1/>
    <Child2/>
    <Child3/>
    </Child>
    </Root>
    But expected output is
    <Root>
    <Child>
    <Child1/>
    <Child2/>
    <Child3/>
    </Child>
    </Root>
    We tried converting the XML to a string and then replacing the xmlns:soap-env="http://schemas.xmlsoap.org/soap/envelope/" value with blank.
    We also tried a hardcoded XML using an Assign action instead of the transformation; even the hardcoded XML ends up with the namespace on the root node.
    But the end system is failing because of this namespace value.
    Please help me resolve the issue.
    Thanks,
    Vinoth

    Ideally your end system should not fail if you specify any number of namespace identifiers in the XML, unless you are actually using them in elements or attributes within the XML; they should try to resolve this on their side.
    But to see what's going on in OSB, can you please log the $body variable before the Publish action and paste the content here? Or, if possible, send the sbconfig of the proxy and business service to me at the email mentioned in my profile.
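    If the end system really cannot ignore it, the string replacement the thread already tried can be narrowed to just the declaration. A Python sketch outside OSB, for illustration; it is only safe because nothing in the payload uses the soap-env prefix:

import re

xml_text = ('<Root xmlns:soap-env="http://schemas.xmlsoap.org/soap/envelope/">'
            '<Child><Child1/><Child2/><Child3/></Child></Root>')

# Drop the unused namespace declaration from the root element.
cleaned = re.sub(r'\s+xmlns:soap-env="[^"]*"', '', xml_text)
print(cleaned)  # <Root><Child><Child1/><Child2/><Child3/></Child></Root>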

  • Flat file load issue-Urgent

    Hello
    I'm trying to load a master data InfoObject from a flat file and getting errors.
    The InfoObject ZCOLLECT_DATE has two fields: ZQUARTER and ZCOLLECTDATE.
    ZCOLLECT_DATE is of data type NUMC, length 5, conversion routine PERI5, and output length 6.
    I created the flat file in Excel the following way:
    Left the first row blank
    Second column: 1/2008 (for ZQUARTER)
    Third column: 01/11/2008 (for ZCOLLECTDATE)
    I saved it as a CSV (Save As -> CSV).
    In the InfoPackage, I selected this file in the External Data tab, with file type CSV, data separator ; and escape sign ".
    When I try to load the InfoObject, I get the following error:
    "Error in conversion exit CONVERSION_EXIT_PERI5_INPUT"
    I thought this was because the length in my InfoObject is 5, whereas ZQUARTER has length 6 in my flat file (e.g. 1/2008) and ZCOLLECTDATE is in format mm/dd/yyyy.
    Any thoughts on what the issue is?
    Thanks
    Alex

    Hi Hemant
    I changed it to 12008, 22008 etc., but I still get the error "Error in conversion exit CONVERSION_EXIT_PERI5_INPUT".
    Also, when I save the xls file as CSV, I get two pop-ups saying some features are not compatible and only the compatible features will be saved. Are these normal warnings, or could they be causing the issue?
    Also, since the InfoObject only consists of dates, do I need to specify conversion routine PERI5 in the InfoObject properties/maintenance?
    Thanks
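    If the file rather than the InfoObject is to be fixed, both columns can be reformatted before upload. A Python sketch assuming the conversion exit wants the period without the slash (12008, as Hemant suggested) and the date as YYYYMMDD; both target formats are assumptions to verify:

import csv

def to_period(value):         # "1/2008" -> "12008" (assumed internal format)
    period, year = value.split("/")
    return period + year

def to_internal_date(value):  # "01/11/2008" (mm/dd/yyyy) -> "20080111"
    month, day, year = value.split("/")
    return year + month.zfill(2) + day.zfill(2)

with open("master.csv", newline="") as src, \
     open("master_fixed.csv", "w", newline="") as dst:
    reader = csv.reader(src, delimiter=";")
    writer = csv.writer(dst, delimiter=";")
    for rowno, row in enumerate(reader, start=1):
        if rowno > 1 and len(row) > 2:    # first row is the blank header row
            row[1] = to_period(row[1])          # second column: ZQUARTER
            row[2] = to_internal_date(row[2])   # third column: ZCOLLECTDATE
        writer.writerow(row)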

  • Flat File Load Issue - Cannot convert character sets for one or more characters

    We have recently upgraded our production BW system to SPS 17 (BW SP19), and we have issues loading flat files (containing Chinese, Japanese and Korean characters) which worked fine before the upgrade. The Asian-language text appears as invalid (garbled) characters, as we can see in the PSA. The character set setting was previously Default Setting and worked fine until the upgrade. We referred to note 1130965, and with code page 1100 the load went through (without the "Cannot convert character sets for one or more characters" error), but the Asian-language characters still appear as invalid characters. We tried all the code pages suggested in the note, e.g. 4102, 4103 etc., on the InfoPackages, but it did not work. Note that the flat files are encoded in UTF-8.

    I checked the lowercase option for all InfoObjects.
    When I checked the PSA failure log, the number of records processed is "0 of 0". My question is: without processing a single record, why is the system throwing this error message?
    When I load the same file from the local workstation there is no error message.
    I am thinking that when I FTP the file from AS/400 to the BI application server (Linux), some invalid characters are being added, but how can we track those invalid characters?
    Gurus, please share your thoughts on this; I will assign full points.
    Thanks,
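    One way to track the invalid characters is to scan the file on the BI server for bytes that are not valid UTF-8; a classic culprit is FTPing in ASCII mode instead of binary, which rewrites bytes in transit. A minimal Python sketch (the file name is a placeholder):

# Report lines that are no longer valid UTF-8 after the FTP transfer.
with open("upload.csv", "rb") as f:
    for lineno, raw in enumerate(f, start=1):
        try:
            raw.decode("utf-8")
        except UnicodeDecodeError as err:
            bad = raw[err.start:err.start + 4]
            print(f"line {lineno}: invalid byte at offset {err.start}: {bad!r}")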

  • Flat file conversion issue

    Hi all,
    I'm doing an IDoc-to-flat-file scenario. After the mapping and FCC I have output like this:
    SOLD_TO,SHIP_TO,BILL_TO,0000300623,0000300622,0000500569
    But I want it in this way:
    SOLD_TO,0000300623,SHIP_TO,0000300622,BILL_TO,0000500569
    What should I do in FCC to get the desired output?
    Thanks,
    Srinivas

    Suman,
    The structure is:
    MT
      DT
        ADDR_TYPE
        EXT_ADDR_ID
    These two fields I have duplicated twice each in the mapping:
    MT
      DT
        ADDR_TYPE
          SOLD_TO
        ADDR_TYPE
          SHIP_TO
        ADDR_TYPE
          BILL_TO
        EXT_ADDR_ID
          0000300623
        EXT_ADDR_ID
          0000300622
        EXT_ADDR_ID
          0000500569
    Srini
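    To make the desired pairing explicit, here is the interleaving written as plain Python outside XI; it only illustrates the target layout, not the FCC settings:

addr_types = ["SOLD_TO", "SHIP_TO", "BILL_TO"]
addr_ids = ["0000300623", "0000300622", "0000500569"]

# Pair each ADDR_TYPE with its EXT_ADDR_ID instead of writing all types first.
fields = [field for pair in zip(addr_types, addr_ids) for field in pair]
print(",".join(fields))  # SOLD_TO,0000300623,SHIP_TO,0000300622,BILL_TO,0000500569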

  • Flat File Load- issue

    Hi All,
    I am facing a problem while doing a data load through a flat file; it's a .csv file.
    The problem arises when I have records like £1,910.00; the embedded comma creates a separate column.
    I have lots of such amount fields.
    Is there any breakthrough for this issue? Your answer would be highly appreciated.
    Regards,
    Kironmoy Banerjee

    Hi,
    As Satish has mentioned, it is better to maintain two separate columns, as below:
    1. Currency: GBP
    2. Amount: 1910
    (GBP indicates British Pound.)
    When you maintain the amount as 1,910.00 it will spill into the next column, because in the InfoPackage's Extraction tab the data separator will be , (comma).
    Regards
    Surendra
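    Surendra's two-column suggestion can be automated when the file is large. A Python sketch that splits '£1,910.00' into a currency column and an unformatted amount; the column positions and the £ -> GBP mapping are assumptions:

import csv

def clean_amount(value):
    # "£1,910.00" -> ("GBP", "1910.00"); extend the mapping for other symbols.
    value = value.strip()
    currency = "GBP" if value.startswith("£") else ""
    return currency, value.lstrip("£").replace(",", "")

# Assumes the amount field is quoted in the raw file (Excel quotes fields that
# contain commas when exporting CSV); column positions are made-up examples.
with open("raw.csv", newline="", encoding="utf-8") as src, \
     open("clean.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    for row in reader:
        currency, amount = clean_amount(row[1])
        writer.writerow([row[0], currency, amount])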

  • Flat file mapping question

    Hello
    I'm very new to XI. I have a flat file which contains Order No, Product and Value, and looks something like this:
    A,A,1
    A,B,1
    A,C,1
    B,A,1
    B,B,2
    B,C,3
    I need to be able to map it to a target structure that has a header for Order No and detail for the Product and Value fields, i.e. it looks a bit like this:
    <Order No>
    A
    <Product> A </Product>
    <Value> 1 </Value>
    <Product> B </Product>
    <Value> 1 </Value>
    <Product> C </Product>
    <Value> 1 </Value>
    </Order No>
    <Order No>
    B
    <Product> A </Product>
    <Value> 1 </Value>
    <Product> B </Product>
    <Value> 1 </Value>
    <Product> C </Product>
    <Value> 1 </Value>
    </Order No>
    In my attempts with various contexts etc. I either get just one output message for order A, or a message for every product within an order (so 6 messages). Amateur stuff, I know.....
    Thanks for your help
    Regards
    Robert

    Hi,
    Please do this mapping:
    For the node Order
    OrderNumber --> removeContext --> splitByValue:ValueChanged --> collapseContext --> Order
    For the node Ordernumber
    OrderNumber --> removeContext --> splitByValue:ValueChanged --> collapseContext --> splitByValue:EachValue --> Ordernumber
    For the node OrderLines
    OrderNumber --> removeContext --> splitByValue:ValueChanged --> OrderLines
    For the node Product
    Product --> Product
    For the node Value
    Value --> Value
    Hope this helps,
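    In case it helps the intuition: the context handling above amounts to an ordinary group-by on the first column. The same logic as a plain Python sketch (not an XI mapping, just the target grouping):

import csv
from itertools import groupby

with open("orders.csv", newline="") as f:
    rows = list(csv.reader(f))  # rows like ["A", "A", "1"]

# splitByValue:ValueChanged corresponds to grouping consecutive rows by Order No.
for order_no, lines in groupby(rows, key=lambda r: r[0]):
    print(f"<OrderNo>{order_no}")
    for _, product, value in lines:
        print(f"  <Product>{product}</Product><Value>{value}</Value>")
    print("</OrderNo>")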
