EJB + flat file read options

Hi Forum,
I know it's really against the policy to read a flat file in an EJB environment. But in a classic scenario, when one has to use a 3rd party tool that reads and writes flat files, complications arise. How does one cope with this?
With regards,
Karthik

Hi Forum,
Yes, I did post this some days ago but got no results, hence I am re-posting it.
I am working on NLP (Natural Language Processing) string sequences and would like to use them along with EJBs.
There is a free 3rd party package (JWNL) available on the web which relies on flat file reads in its environment.
For JWNL to work, it has to read some flat files for processing synonyms, coordinate terms, and so on.
If this is the classic scenario to use within an EJB, how should it be handled?
Please help.
With regards
Karthik
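
One workaround that is often suggested for this kind of requirement (a minimal sketch only, assuming an EJB 3.1+ container; the bean and resource names below are hypothetical, and the exact JWNL initialisation call depends on the JWNL version) is to package the dictionary flat files inside the deployment archive and hand the library an InputStream obtained through the class loader, so that no bean code opens a java.io.File directly:

import java.io.IOException;
import java.io.InputStream;
import javax.annotation.PostConstruct;
import javax.ejb.Singleton;
import javax.ejb.Startup;

// Loads the NLP dictionary data once, at deployment time, from the classpath
// instead of from the file system. The resource name is a hypothetical example.
@Singleton
@Startup
public class DictionaryLoaderBean {

    @PostConstruct
    public void init() {
        try (InputStream props = Thread.currentThread()
                .getContextClassLoader()
                .getResourceAsStream("jwnl/file_properties.xml")) {
            if (props == null) {
                throw new IllegalStateException("JWNL properties not found on classpath");
            }
            // e.g. JWNL.initialize(props);  -- verify the exact call against your JWNL version
        } catch (IOException e) {
            throw new RuntimeException("Failed to load dictionary resources", e);
        }
    }
}

If the files are too large to bundle, the usual alternative is to keep all file access out of the EJBs, for instance behind a JCA resource adapter or a helper that runs outside the container, and let the beans call that instead.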

Similar Messages

  • How to use 'flat file extraction' option in Infospoke

    Hello Gurus,
    We are trying to extract BW data to a 3rd party ETL tool (DataStage) using an Infospoke. On the current BW 3.5 release the customer is on SP11, and we are not able to extract any data using the 3rd party system option in the Infospoke. We found out from SAP that, since we are getting the error '3rd party system is not set' in the Infospoke, we have to upgrade the SP level from 11 to either 16 or 17. The customer we are dealing with is not going to do that until July 2007.
    So now we have a second option: load a 'flat file' on the BW application server and FTP it to DataStage. Is this a workable option?
    When using that option within the Infospoke destination tab, we can see two options. One is to use the File option and load the data into the standard DIR_HOME directory. I tried that, but I can't see the file and the entire data within the standard directory. Can anyone explain why, and what procedure to follow?
    The second option is to use a logical file. I have never done that before. Can anyone explain the step-by-step process or provide a detailed document and/or link?
    Since we are at a critical point in the timeframe, I would appreciate a fast response from the gurus.
    I will make sure to give 'reward points' to each and every helpful answer(s).
    thanks in advance,
    Maulesh

    Hi Shashi,
    First, let me clarify one thing here! I am not saying '3rd party tool' is an option in the Infospoke. I know the Infospoke provides only two options (DB table and file). But when you use the DB table option, you check '3rd party destination', and that is where you define an RFC destination for the 3rd party tool. So when you execute the Infospoke, it starts extracting data and loads it into the defined destination. Since we are on SP11 of the BW 3.5 release, we get the error '3rd party destination is not set' while executing the Infospoke with the DB table option. When I went to the Marketplace, I found an OSS note saying that we need to upgrade the SP level (up to 17).
    As I mentioned earlier, the client doesn't want to upgrade until next year, and we are at a critical stage where we have to decide whether to hold this project until the SP upgrade happens or find an alternate approach to extract the data from BW (somehow) and load it into a Teradata database via DataStage - BW Pack (3rd party ETL).
    So we might be able to use the second option in the Infospoke destination, which is the 'flat file' load.
    Now, when I use the File option with 'App. server' checked, the data loads into the default server directory on BW, which is DIR_HOME. I can see the files extracted and loaded into that directory under transaction AL11, but I can't see any data. Do you think I have to request an increase in the size limit? Once I can see all the data, how can I FTP it from the BW server? Do I have to work with SAP Basis or create a UNIX script?
    When I use a logical file name, I guess I have to follow what you described in your response, correct? Can I use an existing logical path and file name? Do you have a 'how to' document on creating the YEAR and PER FM? Is it possible for Basis to do this work for us? What happens after we assign a logical path and logical file name in the destination tab of the Infospoke? What is the use of a logical file versus a file? Overall, I am looking for the process of extracting the data from the Infospoke, using either a file or a logical file, all the way to loading it into the 3rd party database.
    Looking forward to your response!
    thanks,
    Maulesh

  • JCA flat file read issue

    Hi,
    We need to read a flat file, transform it to the destination XML format, and then send it to the destination file location.
    Steps we have done:
    1. Create a JCA adapter and configure the flat file schema
    2. Create a proxy based on the JCA
    3. Create a transformation from the source to the target schema (this has no namespace)
    4. Create a BS for sending the output
    Everything works as expected when testing from the OSB test console. But when the file is placed in the source folder, the output XML has the namespace xmlns:soap-env="http://schemas.xmlsoap.org/soap/envelope/" on the root node.
    e.g.,
    <Root xmlns:soap-env="http://schemas.xmlsoap.org/soap/envelope/" >
    <Child>
    <Child1/>
    <Child2/>
    <Child3/>
    </Child>
    </Root>
    But expected output is
    <Root>
    <Child>
    <Child1/>
    <Child2/>
    <Child3/>
    </Child>
    </Root>
    We tried converting the XML to a string and then replacing the xmlns:soap-env="http://schemas.xmlsoap.org/soap/envelope/" value with blank.
    We also tried a hardcoded XML using an assign action instead of the transformation; even the hardcoded XML has the namespace on the root node.
    But the end system is failing due to this namespace value.
    Please help me resolve the issue.
    Thanks,
    Vinoth
    Edited by: Vinoth on Jun 8, 2011 10:12 PM

    Ideally your end system should not fail if you specify any number of namespace declarations in the XML, unless you actually use them in elements or attributes within the XML. They should try to resolve this on their side.
    But to see what is going on in OSB, can you please log the $body variable before the publish action and paste the content here? Or, if possible, send the sbconfig of the proxy and business service to the email mentioned in my profile.
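
    If the end system really cannot be changed, one workaround sometimes used is a small Java callout that strips the unused namespace declaration from the root element before the message is published. A minimal sketch with standard DOM (the class and method names are only illustrative, not an OSB API):

    import java.io.StringReader;
    import java.io.StringWriter;
    import javax.xml.XMLConstants;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.xml.sax.InputSource;

    public class NamespaceCleaner {

        // Removes the xmlns:soap-env declaration from the root element if it is unused.
        public static String stripSoapEnvDeclaration(String xml) throws Exception {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            dbf.setNamespaceAware(true);
            Document doc = dbf.newDocumentBuilder().parse(new InputSource(new StringReader(xml)));

            Element root = doc.getDocumentElement();
            // xmlns:* declarations live in the http://www.w3.org/2000/xmlns/ namespace
            root.removeAttributeNS(XMLConstants.XMLNS_ATTRIBUTE_NS_URI, "soap-env");

            StringWriter out = new StringWriter();
            TransformerFactory.newInstance().newTransformer()
                    .transform(new DOMSource(doc), new StreamResult(out));
            return out.toString();
        }
    }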

  • Multiple Header Lines in Flat-File read by FileSenderAdapter

    Hello XI and File Sender Adapter specialists,
    We have the following file with fixed-length fields:
    #H1 F1
    #H2 F1
    #H3 F1
    #H4 F1
    #H5 F1
    #H6 F1
    #D1 Field1 - Fieldn
    #D2 Field1 - Fieldm
    #D2 Field1 - Fieldm
    #D1 Field1 - Fieldn
    #D2 Field1 - Fieldm
    #D2 Field1 - Fieldm
    #F F1
    Concrete example as follows:
    #H1 150
    #H2 ECH150_20070709_026745152.dat
    #H3 20070709_1600
    #H4 9.0
    #H5 8712423010208
    #H6 8712423009202
    #MDDTD3 146307732 146202845 871687940006178374E70871687910000219120200707090032B 040235031             Noordkant               28         SINT ANTHONIS           5845EW                                   8716879000004871242300920287124230091962007070909550113533533                 8716948000010501000L
    #MDDTD4 59664          E10
    #MDDTD4 30180          E11
    #MDDTD3 146309776 146202839 871694840030212726E70871694830000000309200707090031B 0411   8              Flierakkers             21         VROOMSHOOP              7681XV                                   8716948000003871242300920287124230091962007070909590113533515                 8716948000010503000L
    #MDDTD4 3562           E10
    #MDDTD4 2422           E11
    #F6
    This file should be read via the File Sender Adapter and afterwards mapped into the following IDoc structure:
    IDOC-Type
    ZUECH_010
      Header Segment: H1, H2, H3, H4 , H5 ,H6
      Detail1Segment: Field1, Field2 …. Fieldn
      Detail2Segment: Field1, Field2 …. Fieldm
      Detail2Segment: Field1, Field2 …. Fieldm
      Detail1Segment: Field1, Field2 …. Fieldn
      Detail2Segment: Field1, Field2 …. Fieldm
      Detail2Segment: Field1, Field2 …. Fieldm
      FooterSegment: Field1
    Now my questions:
    1.     As far as I know, it is not possible to configure two different record sets in the File Sender adapter. We need two record sets: one for the header lines, which occur once per file, and one for the detail lines. Does anybody know if this is possible?
    2.     Any other ideas for a simple solution?
    Thanks for a quick answer.
    Regards Marlies

    Thanks very much to all of you for your answers.
    The hint from Praveen was very helpful.
    If possible we need a solution with the graphical mapping tool and, as far as possible, without a UDF, because at the moment there is no Java developer available.
    The file adapter now produces the following xml structure:
    <ZUECH_0150>
      <recordset>
        <H1>
          <KH1>#H1</KH1>
          <H1>150</H1>
        </H1>
        <H2>
          <KH2>#H2</KH2>
          <H2>ECH150_20070709_026745152.dat</H2>
        </H2>
        <H3>
          <KH3>#H3</KH3>
          <H3>20070709_1600</H3>
        </H3>
        <H4>
          <KH4>#H4</KH4>
          <H4>9.0</H4>
        </H4>
        <H5>
          <KH5>#H5</KH5>
          <H5>8712423010208</H5>
        </H5>
        <H6>
          <KH6>#H6</KH6>
          <H6>8712423009202</H6>
        </H6>
        <MDDTD3>
          <KMDDTD3>#MDDTD3</KMDDTD3>
          <MDDTD3>146307732 146202845 </MDDTD3>
        </MDDTD3>
        <MDDTD4>
          <KMDDTD4>#MDDTD4</KMDDTD4>
          <MDDTD4>59664          E10</MDDTD4>
        </MDDTD4>
        <MDDTD4>
          <KMDDTD4>#MDDTD4</KMDDTD4>
          <MDDTD4>30180          E11</MDDTD4>
        </MDDTD4>
      </recordset>
    </ZUECH_0150>
    I would prefer the following structure, because it fits the structure of the IDoc exactly.
    That means the mapping would be very simple:
    <ZUECH_0150>
      <HEADER>
        <H1>150</H1>
        <H2>ECH150_20070709_026745152.dat</H2>
      </HEADER>
      <MDDTD3>
        <KMDDTD3></KMDDTD3>
        <MDDTD3>146307732 146202845</MDDTD3>
        <MDDTD4>
          <KMDDTD4>#MDDTD4</KMDDTD4>
          <MDDTD4>59664          E10</MDDTD4>
        </MDDTD4>
        <MDDTD4>
          <KMDDTD4>#MDDTD4</KMDDTD4>
          <MDDTD4>30180          E11</MDDTD4>
        </MDDTD4>
      </MDDTD3>
    </ZUECH_0150>
    Now my new questions:
    1.     Is it possible to configure the file adapter to produce an XML structure which can afterwards be mapped to the IDoc structure with a simple graphical mapping?
    (I can live with Praveen's suggestion that the header information is in each recordset but only has content in the first one.)
    2.     What about the MDDTD4? It is a substructure of MDDTD3. Is it possible to configure this in the file adapter?
    Thanks a lot for your help and a quick answer.
    Regards Marlies
    Message was edited by:
            Marlies Nowotka
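
    If a small Java mapping ever becomes an option later, the grouping logic itself is simple: collect the #H* lines into one header, start a new group on every #MDDTD3 line and attach the #MDDTD4 lines that follow it to that group. A rough sketch (class and field names are invented purely for illustration):

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative only: groups detail-2 lines (#MDDTD4) under the preceding
    // detail-1 line (#MDDTD3), with the #H* lines kept as a single header.
    public class RecordGrouper {

        public static class Detail {
            public final String d3Line;
            public final List<String> d4Lines = new ArrayList<>();
            Detail(String d3Line) { this.d3Line = d3Line; }
        }

        public static class Result {
            public final List<String> headerLines = new ArrayList<>();
            public final List<Detail> details = new ArrayList<>();
            public String footerLine;
        }

        public static Result group(List<String> lines) {
            Result result = new Result();
            Detail current = null;
            for (String line : lines) {
                if (line.startsWith("#H")) {
                    result.headerLines.add(line);
                } else if (line.startsWith("#MDDTD3")) {
                    current = new Detail(line);          // new detail-1 group
                    result.details.add(current);
                } else if (line.startsWith("#MDDTD4") && current != null) {
                    current.d4Lines.add(line);           // belongs to the last detail-1
                } else if (line.startsWith("#F")) {
                    result.footerLine = line;
                }
            }
            return result;
        }
    }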

  • Dynamic name Flat file reading.

    Hi all,
    my client wants to schedule a BDC program to run daily.
    But the file name changes daily, even though the path is fixed.
    Can this be done?
    Can we read a file with a dynamic name from the local system?
    thanks in advance
    Vikash.

    Yes. In the INITIALIZATION event you can set the parameter that takes the file path to a dynamic file name, as below:
    PARAMETERS: p_filename TYPE rsvar-filepath.
    INITIALIZATION.
    * Build the name as <path>_DDMMYYYY from the system date
      CONCATENATE '/....SAP Forex Upload_' sy-datum+6(2) sy-datum+4(2) sy-datum+0(4) INTO p_filename.

  • Open adobe reader file - read option - instead of open

    When I try to open an Adobe Reader file by right-clicking it, I get "Read" instead of "Open".
    Please see the attached file for an example of what I mean.
    In the attached screenshot it says "Open" - this is OK.
    But on my other PC I have only "Read", not "Open".
    your help would be appreciated
    Thanks
    Omer

    If I press "Read", the file opens fine with no problem.
    The problem is that if I try to open the same file from a third-party program, it does not open.
    The file looks like it knows which Windows extension it is supposed to have.
    I will try to post a screenshot from the other PC.
    thanks

  • Problem Loading Microsoft Sql Serer table data to flat file

    Hi Experts,
    I am trying to load data from a SQL Server table to a flat file, but it errors out.
    I have selected a staging area different from the target (I am using SQL Server as my staging area).
    Knowledge module used: IKM SQL to File Append.
    I receive the following error:
    ODI-1217: Session table to file (124001) fails with return code 7000.
    ODI-1226: Step table to file fails after 1 attempt(s).
    ODI-1240: Flow table to file fails while performing a Integration operation. This flow loads target table test.
    ODI-1227: Task table to file (Integration) fails on the source MICROSOFT_SQL_SERVER connection POS_XSTORE.
    Caused By: java.sql.SQLException: Could not read heading rows from file
         at com.sunopsis.jdbc.driver.file.FileResultSet.<init>(FileResultSet.java:164)
    Please help!!!
    Thanks,
    Prateek

    Have you defined the File Datastore correctly, with the appropriate delimiter and file properties?
    Although the example is for Oracle to file, see if it helps you in any way: http://odiexperts.com/oracle-to-flat-file.

  • Flat file loading... how to automate the loading of multiple files

    Hi All,
    I'm wondering if it's possible to create a routine that automates the loading of several flat files into a PSA table.
    I only want a single info package, and I would like this info package to load file 1, then load file 2, file 3 etc etc....
    The alternative is to create info packages for each flat file, but this seems clunky and inefficient.
    Is this possible?
    Regards,
    Frederick

    Hi,
    It is not possible with a single InfoPackage; you have to create different InfoPackages. But you can put them in a process chain, in sequence, to load the different flat files.

  • Creating flat file on other machine.

    Hi All,
    I am working on ODI 11g (11.1.1.6).
    I want to create a flat file on another machine.
    Let's say I have two machines, A and B.
    Machine A contains ODI 11g; machine B contains the source (Oracle) and the flat file location, which is the target.
    How do I create the flat file on machine B as the target?
    Please help.
    Regards
    Prashant

    Thanks for the reply.
    On machine A, ODI 11g is installed.
    The source and target are on machine B.
    In the physical architecture I am able to map the source (Oracle) on machine B using its IP,
    but for the file I am not able to map the target machine B on which the flat file should be generated.

  • FM to create a flat file in given IDOC type format

    Hi,
    I need to create a flat file in IDoc format.
    I have the data in some other source file.
    Which function module can be used for this?
    Could you please provide some sample code?
    Appreciate the help.
    Thanks.
    -Shreyas

    My exact requirement is:
    I have some data in a source file.
    Now I want to create a flat file which should be in a standard IDoc format (WMMBID02), so that I can process it later using IDoc processing, say EDI_DATA_INCOMING or any other appropriate FM which will process the IDoc later.
    I don't want the IDoc to be created in the system right now. I just need to reformat my source file into a standard IDoc-format flat file.
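
    Whichever FM you end up using, the reformatting itself is only fixed-width record building: each line of the output file is one IDoc record whose fields are padded to the lengths defined for the segment (take the real field order and lengths for WMMBID02 and the EDI_DC40 control record from the IDoc documentation, e.g. transaction WE60). A language-neutral sketch of the padding idea, shown here in Java with purely hypothetical field names and widths:

    // Illustrative sketch of building one fixed-width record line.
    // Field names and widths below are hypothetical; take the real segment
    // layout (field order and lengths) from the IDoc documentation.
    public class FixedWidthRecord {

        // Pads or truncates a value to exactly 'width' characters.
        static String field(String value, int width) {
            String v = value == null ? "" : value;
            if (v.length() > width) {
                return v.substring(0, width);
            }
            return String.format("%-" + width + "s", v);
        }

        public static void main(String[] args) {
            // Example: a data record made of a segment name, client and payload field
            String line = field("Z1SEGMENT", 30)   // segment name (hypothetical width)
                        + field("100", 3)          // client       (hypothetical width)
                        + field("4711", 16);       // payload      (hypothetical width)
            System.out.println("[" + line + "]");
        }
    }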

  • Reading huge flat file in OSB 11gR1

    Hi,
    I want to read a flat file in OSB. The size of the flat file may be large, up to 1 MB.
    As far as I know, OSB provides the following approaches to read a flat file:
    1. JCA (creating a file adapter in JDeveloper and importing the artifacts into OSB)
    2. MFL transformation
    3. Java callout
    Please let me know which is the best way to read the flat file. Also, is there any other way to do the same?
    Thanks in advance.
    Regards,
    Seemant
    Edited by: Seemant Srivastava on Feb 18, 2011 1:47 PM

    Which option is best for converting a flat file to XML - File Adapter or MFL? Well, it's a topic of debate and it usually depends on your choice. Manoj has explained clearly above why one may prefer the File Adapter over MFL. It also depends on your familiarity with the product. If you are an Oracle developer dealing mostly with BPEL/Mediator, then you will prefer the File Adapter in this situation, even with OSB; but if you are an OSB developer (since the time it was known as ALSB) you will prefer MFL over the adapter.
    It's just a matter of choice and your comfort. Remember, in different cases the two solutions may perform differently, so better to test them from a performance perspective and then choose.
    "Such flexibility of optional tags can only be handled by MFL." - I don't think so. The File Adapter should also be able to handle this use case. Have you checked this -
    http://download.oracle.com/docs/cd/E17904_01/integration.1111/e10231/nfb.htm#CHDDHEAI
    "Also, up to what size of input file is supported by MFL?" - It's an unanswerable question. It totally depends upon your system structure. I personally don't recommend translating huge files in OSB because it hurts OSB's performance.
    "but I think such feature will not be supported when the file adapter artifacts are imported and used in OSB 11.1.1.3/11.1.1.4" - Correct. From the OSB Dev guide -
    25.2.1.1 Oracle JCA Adapter Limitations
    Following are limitations when using some JCA adapters with Oracle Service Bus:
    •Oracle JCA Adapter for AQ – Streamed payload is not supported with Oracle Service Bus.
    •Oracle JCA Adapter for FTP and Files – Attachments (large payload support), pre- and post-processing of files, using a re-entrant valve for processing ZIP files, content streaming, and file chunked read are not supported with Oracle Service Bus.
    http://download.oracle.com/docs/cd/E17904_01/doc.1111/e15866/jca.htm#BABBICIA
    Regards,
    Anuj
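
    For completeness, option 3 (a Java callout) can be as small as a static method that takes the file content already read by the proxy and returns XML as a string; a hedged sketch, where the delimiter and element names are assumptions and not part of any OSB API:

    // Hedged sketch of a Java-callout style converter: takes the flat file
    // content (already read by the proxy) and produces a simple XML string.
    // The delimiter and element names are illustrative assumptions.
    public class FlatFileToXml {

        public static String toXml(String fileContent) {
            StringBuilder xml = new StringBuilder("<Rows>");
            for (String line : fileContent.split("\\r?\\n")) {
                if (line.isEmpty()) {
                    continue;
                }
                xml.append("<Row>");
                String[] fields = line.split(";");        // assumed delimiter
                for (int i = 0; i < fields.length; i++) {
                    xml.append("<Field").append(i + 1).append(">")
                       .append(escape(fields[i]))
                       .append("</Field").append(i + 1).append(">");
                }
                xml.append("</Row>");
            }
            return xml.append("</Rows>").toString();
        }

        private static String escape(String s) {
            return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
        }
    }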

  • Reading flat files with variable names in SSIS

    I have an SSIS package that reads a flat file from a network drive (using a flat file connection manager) and loads the data into SQL Server tables. I have this working with a fixed file name; however, in reality the file name will not be the same from run to run.
    Essentially, I need to check a folder each day and, if there is a file there with a certain prefix (for example, 'datafile'), I need to process that file (or files). In other words, if there is a file in the folder called datafilexyz, my process needs to read it in and process it. If there are files named datafileabc, datafiledef, and testfile123, I need to read in and process the datafileabc and datafiledef flat files.
    I'm not sure how to make this work. I haven't had any SSIS training, just what I can find on the internet and from looking at existing SSIS packages (I haven't found any that do what I'm trying to do here), so I'm kind of lost.
    Any help is appreciated.

    This is working well. How can I, after loading each flat file, move the flat file to an archive folder? I'm trying to use a file system task, but I'm doing something wrong. I created and referenced an archive folder connection manager in the destination connection, and referenced the original folder connection manager for the source connection, but I get 'sourcepath is not valid on operation movefile'. I think a file system task is what is needed (within the foreach loop, after the data load for each flat file), but I'm not doing something correctly.
    Sounds fine except for one thing. Did you assign the filename variable to the connection string property of the source file connection manager used by the file system task?
    Also, you should choose the 'existing file' option for the source connection and 'existing folder' for the destination.
    Visakh

  • How to read a flat file from the batch

    Hi,
    We have a requirement to read a flat file in a custom Java batch in ETM 2.2. The file is at a specific location.
    Can someone let me know what options we have to read a flat file?
    Thanks,
    Sharath Kumar G.

    Is this referring to some "Process X" batch?
    And why should MPL/XAI, with or without Configuration Tools, not be used for this, so that a custom Java batch is required?
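
    Leaving the MPL/XAI question aside, plain java.nio is usually all a custom Java batch needs for this; a minimal sketch, assuming the file path arrives as a batch parameter (the class name and path handling below are placeholders):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    // Minimal flat-file read loop for a custom Java batch; the path would
    // normally come from a batch/job parameter rather than being hard-coded.
    public class FlatFileBatchReader {

        public static void process(String filePath) throws IOException {
            try (BufferedReader reader = Files.newBufferedReader(Paths.get(filePath))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    // parse the record and hand it to the batch's business logic
                    handleRecord(line);
                }
            }
        }

        private static void handleRecord(String record) {
            System.out.println("read: " + record);
        }
    }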

  • Problem Reading Flat File from Application server

    Hi All,
    I want to upload a flat file which has a line length of 3000.
    While uploading it to the application server, only up to a length of 555 is getting uploaded.
    I am using this code:
    DATA: BEGIN OF TB_DATA OCCURS 0,
            LINE(3000),
          END OF TB_DATA.
    *----Uploading the flat file from the Local drive using FM "UPLOAD".
    OPEN DATASET TB_FILENAM-DATA FOR OUTPUT IN TEXT MODE." ENCODING DEFAULT.
    IF SY-SUBRC NE 0.
      WRITE:/ 'Unable to open file:', TB_FILENAM-DATA.
    ELSE.
      LOOP AT TB_DATA.
        TRANSFER TB_DATA TO TB_FILENAM-DATA.
      ENDLOOP.
    ENDIF.
    CLOSE DATASET TB_FILENAM-DATA.
    What could be the problem?
    Could it be due to any Configuration Problem?
    Waiting for your replies.
    Thanks and Regards.
    Suki

    Your code looks OK, but you may have trouble displaying the full width. Try:
    WRITE: /001 TB_FILENAM-DATA+555.
    (And don't forget to APPEND to your internal table after each read.)
    Rob

  • To read flat file from a unix server

    We need to read a flat file from a Unix server, where our database is located.
    The location gets created correctly.
    But while we are trying to import files from the location in Design Center, we get the error "directory does not exist", although the directory has all the permissions.
    Can someone please suggest how we should create the location so that it can read the files.
    Please reply ASAP.

    We have started Design Center on a local (Windows) machine with the user as repository owner of the server.
    In Design Center we cannot sample the file until we import it;
    can you please tell us how to sample the file without importing it?
    Also, a location pointing to the server location is easily created in Design Center, and the file module points to that location only, but when we try to import the file through that location it says the directory does not exist, although the oracle user has all read/write permissions on the directory.
    Please help!
