If CSV file contains multiple structure...

If a CSV file contains multiple structures...then how should the values be set in File Content Conversion?
Please mention any links regarding File Content Conversion.
Thanks in advance.
Ramesh

Hi,
You need to work with Recordsets here. Here are some scenarios and walkthroughs:
http://help.sap.com/saphelp_nw04/helpdata/en/2c/181077dd7d6b4ea6a8029b20bf7e55/frameset.htm
/people/venkat.donela/blog/2005/03/03/introduction-to-simple-file-xi-filescenario-and-complete-walk-through-for-starterspart2
/people/prateek.shah/blog/2005/06/08/introduction-to-idoc-xi-file-scenario-and-complete-walk-through-for-starters - IDoc to File
/people/ravikumar.allampallam/blog/2005/03/14/abap-proxies-in-xiclient-proxy - ABAP Proxy to File
/people/sap.user72/blog/2005/06/01/file-to-jdbc-adapter-using-sap-xi-30 - File to JDBC
/people/prateek.shah/blog/2005/06/14/file-to-r3-via-abap-proxy - File to ABAP Proxy
/people/venkat.donela/blog/2005/03/02/introduction-to-simplefile-xi-filescenario-and-complete-walk-through-for-starterspart1 - File to File Part 1
/people/arpit.seth/blog/2005/06/27/rfc-scenario-using-bpm--starter-kit - File to RFC
https://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/wlg/1685 [original link is broken] - File to Mail
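If the CSV really carries several record types, the usual approach in the sender file adapter's content conversion is to list every structure in the Recordset Structure parameter and distinguish the rows by a key field. A minimal sketch, assuming hypothetical structure names Header and Item and a key field called RECTYPE in the first column (adapt the names, occurrences and field lists to your own data types):
Recordset Structure: Header,*,Item,*
keyFieldName: RECTYPE
Header.fieldNames: RECTYPE,OrderNo,OrderDate
Header.fieldSeparator: ,
Header.keyFieldValue: H
Header.endSeparator: 'nl'
Item.fieldNames: RECTYPE,Material,Quantity
Item.fieldSeparator: ,
Item.keyFieldValue: I
Item.endSeparator: 'nl'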
Regards,
Wojciech

Similar Messages

  • ESB/File Adapter - XML files containing multiple messages

    Hi,
    In all the examples on file adapters I read, if files contain multiple messages, it always concerns non-XML files, such as CSV files.
    In our case, we have an XML file containing multiple messages, which we want to process separately (not in a batch). We selected "Files contain Multiple Messages" and set "Publish Messages in Batches of" to 1.
    However, the OC4J log files show the following error:
    ORABPEL-12505
    Payload Record Element is not DOM source.
    The Resource Adapter sent a Message to the Adapter Framework which could not be converted to a org.w3c.dom.Element.
    Does anyone know whether it's possible to do this for XML files?
    Regards, Ronald

    Maybe I need to give a little bit more background info.
    Ideally, one would only read/pick-up small XML documents in which every XML document forms a single message. In that way they can be processed individually.
    However, in our case an external party supplies multiple messages in a single batch file, which is in XML format. I want to "work" on individual messages as soon as possible and not put a huge batch file through our ESB and BPEL processes. Unfortunately we cannot influence the way the XML file is supplied, since we are not the only subscriber to it.
    So yes, we can use XPath to extract all individual messages from the XML batch file and start an ESB process instance for each individual message. But that would require the creation of another ESB or BPEL process whose only task is to "chop up" the batch file and start the original ESB process for each message (a rough sketch of that splitting step follows below).
    I was hoping that the batch option in the File adapter could also do this for XML content and not only for e.g. CSV content. That way it will not require an additional process and manual coding.
    Can anyone confirm this is not supported in ESB?
    Regards,
    Ronald
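    For reference, a minimal sketch of the "chop up" step mentioned above, done outside the adapter in plain Java DOM (element and file names such as batch.xml are hypothetical): it parses the batch file, treats every child element of the root as one message, and writes each one to its own file so it can be fed to the original process individually.

        import java.io.File;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.dom.DOMSource;
        import javax.xml.transform.stream.StreamResult;
        import org.w3c.dom.Document;
        import org.w3c.dom.Node;
        import org.w3c.dom.NodeList;

        public class BatchSplitter {
            public static void main(String[] args) throws Exception {
                // Parse the incoming batch file (hypothetical name)
                Document batch = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().parse(new File("batch.xml"));
                NodeList messages = batch.getDocumentElement().getChildNodes();
                TransformerFactory tf = TransformerFactory.newInstance();
                int count = 0;
                for (int i = 0; i < messages.getLength(); i++) {
                    Node node = messages.item(i);
                    if (node.getNodeType() != Node.ELEMENT_NODE) {
                        continue; // skip whitespace/text nodes between messages
                    }
                    // Copy the message element into its own document
                    Document single = DocumentBuilderFactory.newInstance()
                            .newDocumentBuilder().newDocument();
                    single.appendChild(single.importNode(node, true));
                    // Write it out as an individual file, e.g. message_1.xml
                    tf.newTransformer().transform(new DOMSource(single),
                            new StreamResult(new File("message_" + (++count) + ".xml")));
                }
            }
        }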

  • How to import csv file with multiple tables into sql server

    I have multiple csv files; each has one sheet but contains 130 headers, with each header having different data.
    I'd like to import each of these header sections, with its data, into its own file in SQL Server.
    I know very basic SSIS but am not familiar with the scripting in it, which is what I assume I'd have to use.
    Each header in the csv file is structured as such (also see example pic):
    first header would be this:
          ITEM = ORG_V
          DATE = 2013-07-22 10:00 ~ 2013-07-22 10:15
    column names
    data
    second header would be this:
          ITEM = TER_V
          DATE = 2013-07-22 10:00 ~ 2013-07-22 10:15
    column names
    data
    The headers can be at any random row number, and the data size in each file differs, but each section starts with "ITEM ="
    and then "DATE =" in the next row.
    I could also convert these to excel files if it makes this process easier. 

    Why don't you put a filter on D3, filter out the blanks, copy/paste to a new CSV file, save it, and import it.
    There's no way you're going to get SQL to do that kind of thing for you.  The language is for set-based operations, not for complex data manipulation tasks.
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.
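
    If the file does get pre-split before the import (outside of SQL Server, as suggested above), the logic is just to start a new output file whenever a line beginning with "ITEM =" appears. A minimal sketch of that idea in Java, with hypothetical file names:

        import java.io.IOException;
        import java.io.PrintWriter;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class SectionSplitter {
            public static void main(String[] args) throws IOException {
                Path input = Paths.get("report.csv");   // hypothetical input file
                PrintWriter out = null;
                int section = 0;
                for (String line : Files.readAllLines(input)) {
                    if (line.trim().startsWith("ITEM =")) {
                        // A new header starts here: close the previous section and open a new file
                        if (out != null) {
                            out.close();
                        }
                        section++;
                        out = new PrintWriter(Files.newBufferedWriter(Paths.get("section_" + section + ".csv")));
                    }
                    if (out != null && !line.trim().isEmpty()) {
                        out.println(line);
                    }
                }
                if (out != null) {
                    out.close();
                }
            }
        }

    Each resulting section file then has a single header block and can be imported on its own.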

  • Split delimited file into multiple structures

    Hi experts,
    I have a delimited text file that has multiple row types - each row contains only one structure, but there could be 10 different row types (structures) in the file. I can figure out which structure each row belongs to; I just need a way to dynamically split each line.
    I am trying to stay away from:
    case 'row type'
      when 'structure A'
        split into A1, A2, A3...
      when 'structure B'
        split into B1, B2, B3...
    end with.
    Rather, I'd like to create a routine that accepts a file line and a structure (or structure name) and returns the structure with its data populated.
    Any suggestions?
    Thanks,
    Hyun Kang

    This may give you some ideas. 
    report zrich_0001.
    data: begin of itab1 occurs 0,
          fld1(10) type c,
          end of itab1.
    data: begin of itab2 occurs 0,
          fld1(10) type c,
          fld2(10) type c,
          end of itab2.
    data: begin of itab3 occurs 0,
          fld1(10) type c,
          fld2(10) type c,
          fld3(10) type c,
          end of itab3.
    data: tab_name type string.
    data: istr type table of string with header line.
    data: isplit type table of string with header line.
    field-symbols: <dyn_tab> type table,
                   <dyn_wa>,
                   <fs>.
    start-of-selection.
      " Upload the delimited file into a table of strings
      call function 'GUI_UPLOAD'
           exporting
                filename = 'C:\test.txt'
           tables
                data_tab = istr.
      loop at istr.
        " Split the line at commas; the first token names the target internal table
        split istr at ',' into table isplit.
        read table isplit index 1.
        concatenate isplit '[]' into tab_name.
        " Assign the table body and its header line (work area) dynamically by name
        assign (tab_name) to <dyn_tab>.
        assign (isplit) to <dyn_wa>.
        delete isplit index 1.
        " Move the remaining tokens into the components of the work area
        loop at isplit.
          assign component sy-tabix of structure <dyn_wa> to <fs>.
          if sy-subrc <> 0.
            exit.
          endif.
          <fs> = isplit.
        endloop.
        append <dyn_wa> to <dyn_tab>.
      endloop.
      loop at itab1.
        write:/ itab1-fld1.
      endloop.
      loop at itab2.
        write:/ itab2-fld1, itab2-fld2.
      endloop.
      loop at itab3.
        write:/ itab3-fld1, itab3-fld2, itab3-fld3.
      endloop.
    My file looks like this.
    ITAB1,Value1
    ITAB1,Value2
    ITAB2,ValueA,ValueB,
    ITAB2,ValueC,ValueD,
    ITAB3,ValueR,ValueS,ValueT
    ITAB3,ValueU,ValueV,ValueW
    You can see in this program, that the first column drives what internal table the data is written to for that line.
    Regards,
    Rich Heilman

  • Impossible to send back a CSV file containing more than 8000 chars

    Hello,
    I work on an application using Spring, and I am trying to write a String of around 19000 chars to a CSV file that I send back to the client with the write method of the PrintWriter obtained from the HttpServletResponse (response.getWriter().write). Here is the code:
    public ModelAndView getViewdomainReport(HttpServletRequest request, HttpServletResponse response) throws Exception {
        // domainString is a string containing around 19000 chars
        String domainString = reportManagementService.getViewdomainReport(domainRows, domain);
        String filename = domain.getDomainNameWithoutBlank() + ".csv";
        response.setContentType("text/csv");
        response.getWriter().write(domainString);
        response.setHeader("Content-Disposition", "attachment; filename=\"" + filename + "\"");
        return null;
    }
    The following thing is strange: when the String domainString has fewer than 8000 chars, the CSV file is sent back to the client with the appropriate name and I can open it with Excel. When I exceed 8000 chars, I no longer get the response as a CSV attachment; instead the response (domainString) is printed directly in the browser page at the URL /getViewdomainReport.htm. So it seems I exceed the capacity of a file sent with the write method. I think the CSV format itself is not the issue; it would be the same with an HTML format or anything else. It is surely the size of the response that is limited. What can I do? Is it a buffer matter? How can I get my 19000 chars into my CSV file so I can open it with Excel?
    If you can help me, thank you in advance
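
    For comparison, the 8000-character boundary matches the usual default servlet response buffer size of 8 KB: once the buffer fills, the response is committed and any header set afterwards is silently dropped, which is why the Content-Disposition header only takes effect for small strings. A minimal sketch of the usual fix, reusing the identifiers from the snippet above: set the content type and headers before writing anything.

        public ModelAndView getViewdomainReport(HttpServletRequest request, HttpServletResponse response) throws Exception {
            String domainString = reportManagementService.getViewdomainReport(domainRows, domain);
            String filename = domain.getDomainNameWithoutBlank() + ".csv";
            // Set content type and headers BEFORE writing to the response;
            // once the response buffer (commonly 8 KB) flushes, headers set afterwards are ignored.
            response.setContentType("text/csv");
            response.setHeader("Content-Disposition", "attachment; filename=\"" + filename + "\"");
            response.getWriter().write(domainString);
            return null;
        }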

    Please don't cross post. As your thread in New to Java has the largest number of replies, please continue in that thread.
    [http://forums.sun.com/thread.jspa?threadID=5338772]
    I'm locking this thread and the one you posted in Java Servlet.
    db

  • Read and write a .CSV file containing Cyrillic characters issue

    Hi guys,
    I am a developer of a web application project which uses Oracle Fusion Middleware technologies. We use JDeveloper 11.1.1.4.0 as development IDE.
    I have a requirement to get a .csv file from WLS to the machine running the application. I used a downloadActionListener in the front-end .jspx in order to do that.
    I use OpenCSV library to read and write .csv files.
    Here is my code for read and write the .csv file,
    public void dwdFile(FacesContext facesContext, OutputStream out) {
        System.out.println("started");
        String[] nextLine;
        try {
            FileInputStream fstream1 = new FileInputStream("Downloads/filetoberead.CSV");
            DataInputStream in = new DataInputStream(fstream1);
            BufferedReader br = new BufferedReader(new InputStreamReader(in, "UTF-8"));
            CSVReader reader = new CSVReader(br, '\n');
            //CSVReader reader = new CSVReader(new FileReader("Downloads/ACTIVITY_LOG_22-JAN-13.csv"), '\n');
            List<String> list = new ArrayList<String>();
            while ((nextLine = reader.readNext()) != null) {
                for (String s : nextLine) {
                    list.add(s);
                }
            }
            System.out.println("list size ; " + list.size());
            OutputStreamWriter w = new OutputStreamWriter(out, "UTF-8");
            CSVWriter writer = new CSVWriter(w, ',', '\u0000');
            for (int i = 0; i < list.size(); i++) {
                System.out.println("list items " + list.get(i));
                String[] entries = list.get(i).split(",");
                writer.writeNext(entries);
                //System.out.println("list items : " + list.get(i));
            }
            writer.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    say the filetoberead.CSV contains following data,
    0,22012013,E,E,ASG,,O-0000,O,0000,100
    1,111211,LI,0,TABO,B,M002500003593,,,К /БЭ60072715/,КАРТЕНБАЙ
    2,07,Balance Free,3
    1,383708,LI,0,BDSC,B,НЭ63041374,,,Т /НЭ63041374/,ОТГОНБААТАР
    2,07,Balance Free,161
    It reads and writes the numbers and English characters correctly. All Cyrillic characters are printed as "?", as follows:
    0,22012013,E,E,ASG,,O-0000,O,0000,100
    1,111211,LI,0,TABO,B,M002500003593,,,? /??60072715/,?????????
    2,07,Balance Free,3
    1,383708,LI,0,BDSC,B,??63041374,,,? /??63041374/,???????????
    2,07,Balance Free,161
    Can someone please help me to resolve this problem?
    Regards!
    Sameera

    Are you sure that the input file (e.g. "Downloads/filetoberead.CSV") is in UTF-8 character set? You can also check it using some text editor having a view in hex mode. If each Cyrillic character in your input file occupies a single byte (instead of two), then the file is not in UTF-8. Most probably it is in Cyrillic for Windows (CP1251).
    If this is the case, you should modify the line
    BufferedReader br = new BufferedReader(new InputStreamReader(in, "UTF-8"));
    to
    BufferedReader br = new BufferedReader(new InputStreamReader(in, "windows-1251"));
    Dimitar
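
    If a hex-capable editor isn't at hand, a tiny sketch like the following (using the file name from the post above) prints the first bytes so you can see whether each Cyrillic character occupies one byte (a single-byte code page such as windows-1251) or two (UTF-8, where Cyrillic letters appear as D0/D1 byte pairs):

        import java.io.FileInputStream;
        import java.io.IOException;

        public class HexPeek {
            public static void main(String[] args) throws IOException {
                try (FileInputStream in = new FileInputStream("Downloads/filetoberead.CSV")) {
                    byte[] buf = new byte[256];
                    int n = in.read(buf);
                    for (int i = 0; i < n; i++) {
                        // Print each byte as two hex digits
                        System.out.printf("%02X ", buf[i] & 0xFF);
                    }
                }
            }
        }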

  • Using SQL*Loader to load a .csv file having multiple CLOBs

    Oracle 8.1.5 on Solaris 2.6
    I want to use SQL*Loader to load a .CSV file that has 4 inline CLOB columns. I shall attempt to give some information about the problem:
    1. The CLOBs are not delimited at the field level and could themselves contain commas.
    2. I cannot get the data file in any other format.
    Can anybody help me out with this? While loading LOB in predetermined size fields, is there a limit on the size?
    TIA.
    -Murali

    Thanks for the article link. The article states "...the loader can load only XMLType tables, not columns." Is this still the case with 10g R2? If so, what is the best way to workaround this problem? I am migrating data from a Sybase table that contains a TEXT column (among others) to an Oracle table that contains an XMLType column. How do you recommend I accomplish this task?
    - Ron

  • How do I split a Large CSV file into Multiple CSV's Using Powershell

    I am a novice at PowerShell but this looks to be the best tool for this task. I have a csv file that looks like this:
    Date,Policy,Application
    10/13/2014,No,None
    10/13/2014,No,None
    10/13/2014,No,None
    10/13/2014,No,None
    10/13/2014,No,None
    11/14/2013,Yes,AppBiz
    11/14/2013,Yes,AppBiz
    11/14/2013,Yes,AppBiz
    07/04/2013,No,PeopleBiz
    07/04/2013,No,PeopleBiz
    07/04/2013,No,PeopleBiz
    07/04/2013,No,PeopleBiz
    Is it possible to split this CSV into multiple CSVs based on "Application"?
    Let's say the output might look like:
    None.csv
    10/13/2014,No,None
    10/13/2014,No,None
    10/13/2014,No,None
    10/13/2014,No,None
    10/13/2014,No,None
    AppBiz.csv
    11/14/2013,Yes,AppBiz
    11/14/2013,Yes,AppBiz
    11/14/2013,Yes,AppBiz
    PeopleBiz.csv
    07/04/2013,No,PeopleBiz
    07/04/2013,No,PeopleBiz
    07/04/2013,No,PeopleBiz
    07/04/2013,No,PeopleBiz
    Any help would be greatly appreciated

    I think this might be what you want:
    Import-Csv applications.csv |
    Group Application |
    foreach {
        $_.Group | Export-Csv "$($_.Name).csv" -NoTypeInformation
    }
    Very nice! 4x faster..
    I doubt the OP will get what you just did there..
    Sam Boutros, Senior Consultant, Software Logic, KOP, PA http://superwidgets.wordpress.com (Please take a moment to Vote as Helpful and/or Mark as Answer, where applicable)
    Powershell: Learn it before it's an emergency http://technet.microsoft.com/en-us/scriptcenter/powershell.aspx http://technet.microsoft.com/en-us/scriptcenter/dd793612.aspx

  • Disconnected Application Cache .CSV file contains error Code 4

    Dear All,
    Recently we have been facing an issue in our BI application. We have implemented the Disconnected Application Cache via iBots in our application. We use preprocessed sync, and according to the process it saves the data in .csv files on the server, creating folders for the disconnected users.
    Currently, we found an issue where a particular report does not contain data in disconnected mode. When we backtraced the issue we found that the underlying table for the report is empty, and when we investigated further we found that the .csv file kept on the server under the user's folder contains an error code, which is:
    <error><errorCode>4</errorCode></error>
    For this reason there is no data in the csv files, and the disconnected report is also showing empty columns.
    Can anybody please help me solve this issue? I am not sure why this error happens and what this kind of error code signifies.
    As it is a production issue, it is very urgent to solve. Can anybody please give me some idea how to resolve this one?
    Thanks and Regards,
    TG


  • Splitting of a CSV File with Multiple Records into Multiple XML Files

    Dear All,
    I am doing a scenario of CSV to XML files. I am using BPM for the same. My incoming CSV file has multiple records. I want to break these multiple records into multiple XML files, one record each.
    Can someone suggest how I can break, rather split, this into multiple XML files?
    Is multi-mapping absolutely necessary for this? Can't we do this without multi-mapping? Can we have some workaround in the FCC parameters that we use in the Integration Directory?
    Kindly reply ASAP. Thanks a lot to all in anticipation.
    Please help.
    Best Regards
    Chakra and Somnath

    Dear All,
    I am trying to do the multi-mapping, and have the occurrence set to 0..unbounded as well. Somehow it is not working.
    Smitha, please tell me one thing: does assigning Recordsets per Message = 1 mean that it will write multiple XML files, as I want?
    Also, I am using "Set to Read-Only". So once the file is read it becomes RA from A. Then will it write the other records?
    I have to use a BPM because there are certain dependencies that are there for the entire Process Flow. I cannot do without a BPM.
    Awaiting a reply. Thanks a lot in anticipation.
    Best Regards
    Chakra and Somnath

  • BEA 9.2 Portal issue: downloaded CSV file contains embedded html code

    We have a J2EE application using the BEA 9.2 Portal framework, and one of its pages has a feature to generate a report (in a pop-up window) in CSV file format. According to the history from the previous developer, BEA 8.1 didn't have this issue, but after migrating to 9.2 they started getting a file download error (incomplete contents). To overcome that, they commented out setting the content length on the HttpServletResponse, as attached below, but this now causes the HTML source of the parent page (where the submit button is clicked to generate the CSV report) to be rendered along with the actual report in the downloaded CSV file. Has anyone had this sort of issue? If so, can you please share your thoughts, or any thoughts in general?
    BEA 9.2 with Portal framework, JDK 15, JSP, Beehive NetUI, Sun Microsystem Solaris server
    Here is the source code that avoids setting content length and reasoning behind it..
    private static void setResponseHeadersForCSVFile(HttpServletResponse response, String filename, int contentLength) {
        String mimeType = mimeTypes.getContentType(filename);
        response.reset();
        response.setContentType(mimeType);
        // DON'T explicitly set the content length, since the length of the String or StringBuffer that contains
        // the contents of the CSV file will be character encoded when it is actually written to the output stream, based
        // upon the character encoding of this JVM App Server's settings. So let the JVM App Server framework apply the character
        // encoding AND set the final and truly correct content length header at the time the contents of the String or StringBuffer are truly
        // streamed back to the user.
        //response.setContentLength(contentLength);
        response.addHeader("Content-Disposition", "attachment; filename=\"" + filename + "\"; size=" + contentLength);
    }

    1) Yes, the old content.tld is available as part of a web library module as a taglib.tld file. The module is: wlp-services-web-lib.war, which can be found in your bea/weblogic92/portal/lib/modules directory.
    2) The new API is accessible from the ContentManagerFactory class. This provides access to INodeManager, ITypeManager, ISearchManager, etc. The new API is contained within the com.bea.content.federated package. The 8.1.x API in the com.bea.content.manager package including RepositoryManager, NodeOps, SearchOps, etc. has been deprecated with 9.2.
    3) Yes, via the new I*Manager implementations. The entitlement support is for application-scoped visitor roles. Make sure you're using the ISearchManager when performing search operations. This will ensure secure results are returned.

  • Trying to import from different csv files into multiple JTables

    This might seem like something that isn't commonly done, but I'll see if anyone has done this before anyways.
    I currently have a JTable (tab #1) and it successfully reads in the data from a CSV file to populate the cells. Now, I am looking into making 3 more JTables. I can tab between the four different tables, but I can't seem to figure out how to get the data to go to the correct JTable. Tab #1 = table, Tab #2 = table2, Tab #3 = table3, Tab #4 = table4.
    Thanks.
        try {
            FileInputStream fileInput = new FileInputStream("upcoming.csv");
            BufferedReader InputCSV = new BufferedReader(new InputStreamReader(fileInput));
            line = InputCSV.readLine(); //start reading into the records.
            //Create data for the table.
            //the row variable = starting row
            int row = 0;
            while (line != null) {
                tmp = line.split(",");
                for (int col = 0; col < columnNames.length; ++col) {
                    data[row][col] = tmp[col]; //store cells' data to the 2-dimensional array.
                }
                row++;
                line = InputCSV.readLine();
            } //end of WHILE-statement.
        } //END of TRY-statement.
        catch (Exception e) {
            System.out.println(e);
        }

    I have been successful in my quest to get 4 different datasets into 4 different JTables. The previous poster's suggestion to use the table model led me to this page:
    http://java.sun.com/j2se/1.5.0/docs/api/javax/swing/table/TableModel.html
        TableModel myData = new MyTableModel();
        JTable table = new JTable(myData);
    I had some very similar code in my program. I use the DefaultTableModel and the SortFilterModel before having 4 different JTables.
        DefaultTableModel model = new DefaultTableModel(data, columnNames);
        DefaultTableModel model2 = new DefaultTableModel(data2, columnNames);
        DefaultTableModel model3 = new DefaultTableModel(data3, columnNames);
        DefaultTableModel model4 = new DefaultTableModel(data4, columnNames);
    I use the code in the first post 4 times, and only change the filename and the data#.
    I also have some code that counts the number of lines that will be needed.
    Thanks.
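
    Since the read loop from the first post is repeated four times with only the file name and the data array changing, it could also be factored into one helper that builds a DefaultTableModel per file; here is a minimal sketch under that assumption (the column names and the second file name are placeholders):

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import javax.swing.JTable;
        import javax.swing.table.DefaultTableModel;

        public class CsvTableLoader {
            // Read a comma-delimited file into a DefaultTableModel, one row per line
            static DefaultTableModel loadCsvModel(String filename, String[] columnNames) throws IOException {
                DefaultTableModel model = new DefaultTableModel(columnNames, 0);
                try (BufferedReader reader = new BufferedReader(new FileReader(filename))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        model.addRow(line.split(","));
                    }
                }
                return model;
            }

            public static void main(String[] args) throws IOException {
                String[] columnNames = {"Title", "Date", "Location"};  // placeholder columns
                JTable table  = new JTable(loadCsvModel("upcoming.csv", columnNames));
                JTable table2 = new JTable(loadCsvModel("past.csv", columnNames));  // hypothetical second file
                // ...one call per tab; add each table to its own scroll pane/tab as before
            }
        }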

  • Create a pdf file from a Word 2004 file containing multiple sections.

    I just got a MacBook Pro and installed Office 2004 and Acrobat Pro 8.1. However, when I want to print a Word file with multiple sections in it - using the PDF 8.9 printer driver - I cannot get the full document in the pdf file, only the last section.
    Some sections are portraits and others are landscapes.
    Any idea what is happening? Tried to play with various set-up but nothing works...... Help!

    Thanks; however, I tried and it does not work. Also, to correct myself, the printer driver is 8.0, not 8.9. I tried both ways you mentioned, but I get the pdf files in multiple sections (one file per section); I cannot get a full file with all sections at once.

  • Correlation for files containing multiple records

    I was learning correlation from a nice blog below.
    /people/milan.thaker/blog/2008/07/23/correlation-150-runtime-behavior-of-bpm
    I have one question though. In the case where the correlation is done on a key field, say PsNo:
    if two files have multiple records, will the correlation compare the PsNo of each record, or is the comparison only on the first PsNo of each file?
    Thanks,
    Minhaj.

    Hi Minhaj,
    1) Suppose you want to merge 2 files into a single file based on a key field (PsNo) and you are opting for correlation in BPM.
    In your correlation editor, you should specify the XPath expression for the key field (PsNo) for the involved messages.
    2) Only if the values of the key field (PsNo) in the 2 files are equal can you merge those 2 files into a single file.
    So the first PsNo of each file is compared with that of the other file.

  • Difficulty with SF2 files containing multiple sounds; only one sound will play

    Here's my difficulty. I know exactly how to install SF2 soundfonts on my computer, and I have used them in several compositions, but there's one limitation I haven't been able to overcome.
    Some soundfonts, as you probably know, contain multiple sounds. My problem is that when I have a multi-sound SF2 selected, I can only get GarageBand to play one sound -- presumably the first one in the pack. There are no options in the DLS Music Player window for selecting other sounds within the soundfont; only the soundfont itself is listed. I have a couple of soundfont packs where the different sounds are all split into individual SF2s within a folder, and that displays as a submenu properly... but I can't get these others to work properly. Any ideas?

    I don't think you can get it to work with the current version of GB. There used to be a plugin called SoundFontSynth that handled soundfonts with sound banks, but it has been discontinued and the author never replied to emails.
