One to many XML files with file adapter

Hi,
I have a scenario HCM-ABAPProxy--XI-File where, for one structure, I need to generate multiple XML files with 100 records per file.
This is my input message:
MT_in
  Node
     PositionIDs
        description
        job
        IsActive
On the ABAP side I build 100 PositionIDs sub-nodes for every Node, so each Node should become a separate file.
my output structure is
PositionIDs
    PositionID
      pid
      description
      job
So there should be one PositionIDs node per file, containing 100 PositionID entries.
I've tried a multi-message mapping without BPM, but that did not work out. Just wondering if anyone has come across the same scenario.
thanks,
Joe

You cannot create multiple files without BPM. You can certainly use a multi-mapping to achieve the split itself, but to write each resulting message to a file you have to call the file adapter once per split message, which you cannot do without BPM. In a BPM, for each message produced by the split you can use a send step inside a ForEach loop, which gives you the functionality you require (a rough sketch of such a split mapping follows below).
Award if helpful,
Sarath.
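
For reference, here is a rough sketch of the split part written as a Java mapping instead of a graphical multi-mapping; it is only an illustration, not Joe's actual mapping. It assumes the element names from the post, that the ABAP side already groups 100 PositionID entries under each Node, and that the target message is declared with occurrence 0..unbounded so that PI expects the ns0:Messages/ns0:Message1 multi-mapping wrapper. The pid field has no obvious source field in the post, so it is left out.

  import java.io.InputStream;
  import javax.xml.parsers.DocumentBuilderFactory;
  import javax.xml.transform.TransformerFactory;
  import javax.xml.transform.dom.DOMSource;
  import javax.xml.transform.stream.StreamResult;
  import org.w3c.dom.Document;
  import org.w3c.dom.Element;
  import org.w3c.dom.NodeList;
  import com.sap.aii.mapping.api.AbstractTransformation;
  import com.sap.aii.mapping.api.StreamTransformationException;
  import com.sap.aii.mapping.api.TransformationInput;
  import com.sap.aii.mapping.api.TransformationOutput;

  public class PositionSplitMapping extends AbstractTransformation {

      private static final String SPLIT_NS = "http://sap.com/xi/XI/SplitAndMerge";

      public void transform(TransformationInput in, TransformationOutput out)
              throws StreamTransformationException {
          try {
              InputStream is = in.getInputPayload().getInputStream();
              Document src = DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(is);
              Document tgt = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();

              // Multi-mapping wrapper: every child of Message1 becomes one target message.
              Element messages = tgt.createElementNS(SPLIT_NS, "ns0:Messages");
              Element message1 = tgt.createElementNS(SPLIT_NS, "ns0:Message1");
              tgt.appendChild(messages);
              messages.appendChild(message1);

              // One target PositionIDs (= one file) per source Node.
              NodeList nodes = src.getElementsByTagName("Node");
              for (int i = 0; i < nodes.getLength(); i++) {
                  Element node = (Element) nodes.item(i);
                  Element tgtRoot = tgt.createElement("PositionIDs");
                  message1.appendChild(tgtRoot);

                  NodeList positions = node.getElementsByTagName("PositionIDs");
                  for (int j = 0; j < positions.getLength(); j++) {
                      Element srcPos = (Element) positions.item(j);
                      Element tgtPos = tgt.createElement("PositionID");
                      tgtRoot.appendChild(tgtPos);
                      copyField(tgt, srcPos, tgtPos, "description");
                      copyField(tgt, srcPos, tgtPos, "job");
                  }
              }

              TransformerFactory.newInstance().newTransformer().transform(
                      new DOMSource(tgt), new StreamResult(out.getOutputPayload().getOutputStream()));
          } catch (Exception e) {
              throw new StreamTransformationException("Split mapping failed: " + e.getMessage());
          }
      }

      // Copies the text of the first child element with the given name (empty string if missing).
      private static void copyField(Document tgt, Element from, Element to, String name) {
          NodeList hits = from.getElementsByTagName(name);
          Element field = tgt.createElement(name);
          field.setTextContent(hits.getLength() > 0 ? hits.item(0).getTextContent() : "");
          to.appendChild(field);
      }
  }

The mapping only produces one PositionIDs payload per Node; getting each of them written by the file adapter is then handled as Sarath describes, with a send step per split message inside the BPM loop.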

Similar Messages

  • SQL Developer Version 4.1.0.19: cannot open SQL files with file type .pck

    Hi
    I can't open sql files with file type .pck. They are opened as a package. The icon is a package too. (Worked fine in 4.0)
    My settings:
    Thanks for any help

    Thanks, but that doesn't help
    To specify it more precisely: I can open such a file, but it is not opened as a SQL file. The icons you see above left are the icons of the files (not of the packages).
    Greetings
    Ovi

  • Importing text file (with file names) into Automator.. is it possible?

    Hello all,
    I have been working with Windows batch files for my line of work. I have a couple of file names in a text file (a column), which I want to copy from one folder on one HDD to another folder on a different HDD. I have been trying to do this kind of work with a Mac. I already know how you copy and rename files in Automator (which isn't difficult, of course), but you have to 'select' the files in the Finder first (with Get Specified Items).
    But the only way I see that you can specify items is by selecting them... is there a way to import a text file with all the file names instead of selecting all the file names manually?
    Or is there an AppleScript alternative that I can use to import the text file (or just paste into AppleScript) and run before the copy and rename steps? I am kind of new to Apple programming.
    The text file looks like this:
    image1.jpg
    image2.jpg
    etc..
    so there has to be a command to: 'goto' a specific folder as well.
    Thanks in advance!

    You can import text files, but if they are just names you will need an additional action to add the source folder path. A *Run AppleScript* action can be used, for example:
    Tested workflow:
    1) *Ask for Finder Items* {Type: files } -- choose the text file containing the names
    2) *Combine Text Files* -- this gets the text file contents
    3) *Filter Paragraphs* { return paragraphs that are not empty } -- skip blank lines
    4) *Run AppleScript* -- copy and paste the following script:
    on run {input, parameters} -- add folder path
        -- add the specified folder path to a list of file names
        -- input: a list of text items (the file names)
        -- output: a list of file paths (aliases)
        set output to {}
        set SkippedItems to {} -- this will be a list of skipped items (errors)
        set SourceFolder to (choose folder with prompt "Choose the folder containing the file names") as text -- the folder containing the named files
        repeat with AnItem in the input -- step through each name in the input
            try
                set AnItem to SourceFolder & AnItem -- add the prefix
                set the end of the output to (AnItem as alias) -- test
            on error number ErrorNumber -- oops
                set ErrorNumber to ("  (" & ErrorNumber as text) & ")" -- add the specific error number
                set the end of SkippedItems to (AnItem as text) & ErrorNumber
            end try
        end repeat
        ShowSkippedAlert for SkippedItems
        return the output -- pass the result(s) to the next action
    end run

    to ShowSkippedAlert for SkippedItems
        -- show an alert dialog for any items skipped, with the option to cancel the workflow
        -- parameters - SkippedItems [list]: the items skipped
        -- returns nothing
        if SkippedItems is not {} then
            set {AlertText, TheCount} to {"Error with AppleScript action", count SkippedItems}
            if TheCount is greater than 1 then
                set theMessage to (TheCount as text) & " items were skipped:"
            else
                set theMessage to "1 item was skipped:"
            end if
            set {TempTID, AppleScript's text item delimiters} to {AppleScript's text item delimiters, return}
            set {SkippedItems, AppleScript's text item delimiters} to {SkippedItems as text, TempTID}
            if button returned of (display alert AlertText message (theMessage & return & SkippedItems) ¬
                buttons {"Cancel", "OK"} default button "OK") is "Cancel" then error number -128
        end if
        return
    end ShowSkippedAlert
    5) *Copy Finder Items* { To: _your external drive_ }

  • How to process large input CSV file with File adapter

    Hi,
    could someone recommend the right BPEL way to process a large input CSV file (4 MB or more, with at least 5000 rows) with the File Adapter?
    My idea is to receive the data from the file (poll the UX directory for a new input file), transform it and then export it to one output CSV file (input for another system).
    I developed my process that consists of:
    - File adapter partnerlink for read data
    - Receive activity with checked box to create instance
    - Transform activity
    - Invoke activity for writing to output CSV.
    I tried this with a small input file and everything was OK, but now when I try to use the complete input file, the process doesn't start and automatically goes to OFF state in the BPEL console.
    Could I use the MaxTransactionSize parameter as in the DB adapter, should I batch the input file, or is there another way that could help?
    Any hint from you? I have to solve this problem by this Thursday.
    Thanks,
    Milan K.

    This is a known issue. Martin Kleinman has posted several issues on the forum here, with a similar scenario using ESB. This can only be solved by completely tuning the BPEL application itself, and throwing in big hardware.
    Also, switching to the latest 10.1.3.3 version of the SOA Suite (assuming you haven't already) will show some improvements.
    HTH,
    Bas

  • Not sure where to ask this- printing multiple files with file names

    Hi,
    Apologies if there was a better place to ask this question; I'm not sure which program I should be using to achieve what I'd like to do and I'm not sure if it's even possible. I'm currently working for a publishing house preparing images that will eventually be printed inside books. Files come to me in a wide variety of formats. I save them as either TIFF or EPS. These files (the TIFFs and the EPSs) will eventually be sent along to someone who is doing layout work for the book and then to the printers. Before that happens, however, I need to provide the editor with hardcopies of the images that will be published so that they can review them. Having the editors look at digital versions of the files is not currently an option. Our current workflow (not designed by me) entails entering all the images for any given book into an InDesign template. Then in InDesign I have to enter some information about the images (their numerical designation, their file format). Then we print the .indd file for editorial review. I find this process incredibly time-consuming and tedious and I'm looking for a way to avoid it altogether.
    What I would like to be able to do is select a folder of images (TIFFs and EPSs only) and print them at 100% (no resizing, one image per page) with the file name at the bottom of the page. Due to the number of images I'm responsible for, this could save me several hours of tedious work a week and would allow me to devote more time to producing a quality product. I thought I had found a solution using the 'Contact Sheet II' tool in Adobe Bridge. However, this rasterizes the EPS files and makes them look unprofessional. An hour and a half of googling this issue has taken me to numerous dead ends. I wonder if there might be some way to do this quickly in InDesign (at which I am a complete novice) or possibly via an automated Illustrator function.
    Thanks in advance for any help you may be able to offer.
    Please do let me know if there is anything unclear in my description. For what it's worth, I am using Creative Suite 6.

    Hi Mylenium,
    I appreciate the reply. I'm not sure that I will be able to convince the powers that be to drop the EPS format. I'm fairly certain that the printer, which the press has a long-standing relationship with, expects TIFF and EPS files exclusively. In addition, I'm not sure if I'm misreading what you're saying, or if you misunderstood something I said. Typically, the EPS format looks great and prints fine through Illustrator or via InDesign (in other words, it doesn't usually look unprofessional). I use TIFF for photos or illustrations and EPS for charts or graphs (basically, in any instance I have to set type). The problem is that the only way I've found to speed up the production of hardcopies for editorial review ('Contact Sheet II') runs through Bridge and Photoshop, ruining the appearance of the EPS files.
    I wonder if there's a way to run a similar batch process (may not be the correct wording) through illustrator such that both the TIFF and EPS files look clear, print at 100%, print one image per page, and the file name is included on the printout. Basically, a one image per page contact sheet. To do this now, I have to place every single file individually into InDesign and manually enter information about the files.
    Thanks

  • Binary file with File adapter

    Hi gurus,
    I am getting a binary file as base64Binary (not a text/XML file) in a request XML tag. I need to write this file to a directory with the file adapter, after retrieving it from the request message via a message/Java mapping, and use another field in the XML as the file name.
    I followed the blog `how to send binary data through PI`, but there the request message is passed on unchanged. Mine requires a mapping to be executed, and in this case I cannot use dummy names for message types/interfaces, as Enterprise Repository development is required.
    How can I achieve this?
    Thanks
    Gokhan

    Just solved the problem, actually.
    As I was trying to write a binary file directly with the receiver file adapter, I wasn't sure how to define a data type / message type for it. I developed a Java mapping that decodes the base64 to binary data and writes it to the output stream, and used a dummy message type / data type for the target service interface in the Operation Mapping.
    And it worked!
    Regards,
    Gökhan
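
    For anyone landing on this thread later, here is a minimal sketch of that kind of Java mapping. It assumes a request shaped like <Request><FileName>...</FileName><Content>...base64...</Content></Request>; both element names are placeholders rather than the real message, and the receiver channel needs the adapter-specific attribute FileName enabled for the dynamic file name to take effect.

      import javax.xml.parsers.DocumentBuilderFactory;
      import org.w3c.dom.Document;
      import com.sap.aii.mapping.api.AbstractTransformation;
      import com.sap.aii.mapping.api.DynamicConfiguration;
      import com.sap.aii.mapping.api.DynamicConfigurationKey;
      import com.sap.aii.mapping.api.StreamTransformationException;
      import com.sap.aii.mapping.api.TransformationInput;
      import com.sap.aii.mapping.api.TransformationOutput;

      public class Base64ToFileMapping extends AbstractTransformation {

          public void transform(TransformationInput in, TransformationOutput out)
                  throws StreamTransformationException {
              try {
                  Document doc = DocumentBuilderFactory.newInstance()
                          .newDocumentBuilder().parse(in.getInputPayload().getInputStream());

                  // Hypothetical field names - replace with the fields of the real request message.
                  String fileName = doc.getElementsByTagName("FileName").item(0).getTextContent();
                  String base64 = doc.getElementsByTagName("Content").item(0).getTextContent();

                  // Pass the file name to the receiver channel (ASMA "FileName" must be checked there).
                  DynamicConfiguration conf = in.getDynamicConfiguration();
                  conf.put(DynamicConfigurationKey.create("http://sap.com/xi/XI/System/File", "FileName"),
                          fileName);

                  // The decoded bytes become the complete target payload, written as-is by the adapter.
                  byte[] raw = javax.xml.bind.DatatypeConverter.parseBase64Binary(base64.trim());
                  out.getOutputPayload().getOutputStream().write(raw);
              } catch (Exception e) {
                  throw new StreamTransformationException("Base64 decode mapping failed: " + e.getMessage());
              }
          }
      }

    The dummy message type on the target side is only there to satisfy the Operation Mapping signature, exactly as Gökhan describes; the adapter simply writes whatever bytes the mapping produces.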

  • Idoc to file with file name as ABCD timestamp and count

    Hi,
    Greetings!
    I have a requirement for an IDoc-to-file scenario where the output file name should be ABCDYYMMDDXX (e.g. ABCD14091701), on PI 7.1,
    where,
    Default = ABCD
    YY = year
    MM = month
    DD = date
    XX = sequence no. on the same day; e.g. first batch on the same day = 01, second batch on the same day = 02
    I tried checking many blogs, but none of them meets this requirement.
    Kindly please help me out in completing this interface.
    Regards,
    Vinoth

    Hi Vinoth,
    Below is the code; use it in conjunction with your dynamic configuration code.
      import java.io.File;
      import java.io.FilenameFilter;
      import java.text.SimpleDateFormat;
      import java.util.Date;

      String targetDir = "D:\\Receiver"; // this is your target receiver folder
      String dateStr = new SimpleDateFormat("yyMMdd").format(new Date()); // YYMMDD, independent of locale
      final String fileNamePattern = "ABCD" + dateStr; // fixed prefix + date

      File dir = new File(targetDir);
      File[] listFiles = dir.listFiles(new FilenameFilter() {
          @Override
          public boolean accept(File d, String name) {
              return name.contains(fileNamePattern);
          }
      });

      // The number of files already written today determines the next two-digit counter.
      int counter = (listFiles == null ? 0 : listFiles.length) + 1;
      String fileName;
      if (counter < 10)
          fileName = fileNamePattern + "0" + counter;
      else
          fileName = fileNamePattern + counter;
      System.out.println("Name of the file will be: " + fileName);
    Hope it helps!
    Best Regards,
    Anand Patil

  • Pick up a specific file with File Sender Adapter.

    Hi guys,
    I would like to know how I can pick up a specific file from a file pool (folder), choosing it by name, like FileA or FileB, etc.
    I'm asking this because I have an asynchronous file scenario (BPM) with a receiver adapter that puts the file into a folder with a specific name (variable substitution).
    And I would like to do something like this:
    In another asynchronous scenario (BPM), a file sender adapter picks up this specific file (using the name). The correlation is made through an IDoc that XI receives before picking up the file; this IDoc has a payload field with the name of the file to be picked.
    Is it possible to receive the IDoc, read the field with the name of the file to be picked, and choose this specific file? In a sender file adapter, how can I do something like the variable substitution that the receiver adapter does?
    Thanks in advance,
    Ricardo.

    Hi,
    "Is it possible to receive the IDoc, read the field with the name of the file to be picked, and choose this specific file? In a sender file adapter, how can I do something like the variable substitution that the receiver adapter does?"
    No, this is not possible. The only dynamic thing you can do is use wildcard characters like *.
    So maybe you can pick up files with a name pattern such as AA* and so on...
    Regards,
    Bhavesh

  • Reading fixed position files with File Adapter

    Hi !
    I'm trying to use the file adapter to read a file with data in fixed positions.
    I cannot get it to work; I'm getting:
    [Line=3, Col=133] Expected "${eol}" at the specified position in the native data, while trying to read the data for "element with name Hours", using "style" as "array" and "cellSeparatedBy" as "${eol}", but not found.
    Ensure that "${eol}", exists at the specified position in the native data.
    and I don't understand what the problem is.

    Sorry I didn't reply sooner.
    If you are only interested in the first 133 characters then, as you stated, you need to put a dummy filler field at the end.
    The file adapter is not as smart as you think. It reads the data as a stream: it takes an instruction from the schema and reads the stream until that instruction is satisfied, then it moves on to the next instruction.
    In your situation you said read positions 1-133, thinking it would ignore the other characters in the line. In reality it reads the first 133 characters, and since it was not told to look for an end of line, it would then read the next 133 characters. Those would include the characters you want to ignore, so your pattern fails.
    A good way to think about it is to apply the rules as if you were reading the file yourself. In your mind you say: read these characters, then ignore all the others in the row. You have to tell the file adapter to do exactly that.
    Hope this explains it.
    cheers
    James

  • One-to-many relationship: problem with several tables on the one side...

    Hello
    I'm having problems developing a database for a content management system. Apart from details, I've got one main table, that holds the tree structure of the content ("resources") and several other tables that contain data of a particular datatype ("documents", "images", etc.). Now, there's one-to-many relationship between "resources" table and all the datatype tables - that is, in the "resources" table there's "resource_id" column, being a foreign key referenced to the "id" columns in the datatype tables.
    The problem is that this design is deficient. I can't tell from the "resource_id" column which datatype table to get the data from. It seems to me that a one-to-many relationship only works with two tables. If the data on the one side of the relationship is spread across several tables, problems arise.
    Does anybody know a solution? I would be obliged.
    Regards
    Havocado

    Hi;
    A simple way may be to create a view on the referenced tables:
    Connected to Oracle Database 10g Express Edition Release 10.2.0.1.0
    Connected as hr
    SQL>
    SQL> drop table resources;
    Table dropped
    SQL> create table resources(id number, name varchar2(12));
    Table created
    SQL> insert into resources values(1,'Doc....');
    1 row inserted
    SQL> insert into resources values(2,'Img....');
    1 row inserted
    SQL> drop table documents;
    Table dropped
    SQL> create table documents(id number, resource_id number,type varchar2(12));
    Table created
    SQL> insert into documents values(1,1,'txt');
    1 row inserted
    SQL> drop table images;
    Table dropped
    SQL> create table images(id number, resource_id number,path varchar2(24));
    Table created
    SQL> insert into images values(1,2,'/data01/images/img01.jpg');
    1 row inserted
    SQL> create or replace view vw_resource_ref as
      2    select id, resource_id, type, null as path from documents
      3      union
      4     select id, resource_id, null as type, path from images;
    View created
    SQL> select * from resources r inner join vw_resource_ref rv on r.id = rv.resource_id;
            ID NAME                 ID RESOURCE_ID TYPE         PATH
             1 Doc....               1           1 txt         
             2 Img....               1           2              /data01/images/img01.jpg
    Regards....

  • How to split big xml-messages with file inbound adapter

    Hello,
    we have big XML messages in our file system, which are processed by XI 3.0 (SP11). We are using the file inbound adapter. Now we want to split these big XML messages into smaller messages.
    Is there a function corresponding to "xml.recordsetsPerMessage" that works with XML files?
    Thanks!
    Regards
    Stefan

    Hi,
    maybe you can split the message in the BPM with a 1:N mapping?
    I don't think anything like "recordsetsPerMessage" is possible for XML messages.
    Regards,
    michal

  • Convert XML to flat file with File adapter

    Hi all.
    Trying to configure a file adapter according to the following link.
    http://help.sap.com/erp2005_ehp_04/helpdata/EN/d2/bab440c97f3716e10000000a155106/frameset.htm
    What i want it to do is the following.
    I have the following incoming message:
    <header>
         <h_field1></h_field1>
         <h_field2></h_field2>
    </header>
    <data>
         <d_field1></d_field1>
         <d_field2></d_field2>
         <infotext>
              <i_field1></i_field1>
              <i_field2></i_field2>
         </infotext>
    </data>
    I want this to become a flat file that looks as follows:
    h_field1¤h_field2
    d_field1¤d_field2
    i_field1¤i_field2
    Every field in each segment should be concatenated with the ¤ char as separator.
    I've done the following:
    Content Conversion Parameters
    Record Structure: header,data,infotext
    header.fieldSeparator   ¤
    data.fieldSeparator    ¤
    infotext.fieldSeparator   ¤
    The adapter just gets into wait mode.
    Does anybody know why?
    BR
    Kalle

    Hello Kalle,
    As per your source structure, if you perform FCC on it you should have a two-level hierarchy: your Recordset will be the root node,
    and the Recordset Structure is: header,1,data,1
    Since the node infotext is nested inside data, you cannot refer to it, so this structure is not appropriate:
    <header>
        <h_field1/>
        <h_field2/>
    </header>
    <data>
        <d_field1/>
        <d_field2/>
        <infotext>
            <i_field1/>
            <i_field2/>
        </infotext>
    </data>
    Change the above structure to this:
    <header>
        <h_field1/>
        <h_field2/>
    </header>
    <data>
        <d_field1/>
        <d_field2/>
    </data>
    <infotext>
        <i_field1/>
        <i_field2/>
    </infotext>
    Your Recordset will then be the root node,
    and the Recordset Structure is: header,1,data,1,infotext,1
    This resolves your issue.
    Regards,
    Prasanna

  • Processing an empty file with file adapter

    Hi,
    We have a scenario where we are merging multiple files via a BPM. One of the files that we read is a delta file, which at times can be blank/empty. The multiple receive steps in the BPM are in one fork, and the end condition is set up so that all the messages are received prior to the transformation step.
    When we attempt to read an empty file by activating the adapter, the adapter monitor shows that the message was processed successfully, but no message is created in XI. Is this correct? If yes, is there any way we can create a message for an empty file without using a module to change the contents of the file?
    Any suggestions or help is greatly appreciated.
    Thanks in advance.
    Best Regards,
    Duke

    Hi Duke,
    One suggestion:
    You can construct a message with some fields and use the exists() function in the mapping.
    try this link:
    http://help.sap.com/saphelp_nw04/helpdata/en/db/83f7b88528424c9113b15d5e0fb516/content.htm
    Regards
    Priyanka

  • How do I get PSE 12 to save a TIFF file with file extension ".tiff" instead of ".tif"?

    I am using PSE 12 as an external editor for Aperture.  When I ask to use an external photo editor, Aperture creates a .tiff file in my Aperture folder, launches PSE 12 and tells it to edit that .tiff file.  This all works great.  When I am done editing in PSE 12, I ask it to save the file and it saves it back to the original Aperture folder with a .tif extension.  I can instead say "save as" and then it suggests the file name with a .tif extension.  When I correct the extension to .tiff it warns me that I'm going to overwrite the file (exactly what I want!).  I say yes and then I find out that PSE did not overwrite the file, but wrote it as a .tif file anyway.  I have to go to the folder, delete the .tiff file and rename the .tif file to .tiff and then everything is fine - but what a hassle.  Has anyone solved this problem?

    That's not quite where the problem lies, which is good, because you can't change that. The real solution is that you don't ever want to see the save as window in PSE if you are using PSE as external editor for any program. When you see it you aren't creating a version for your asset management program. Go to the PSE preferences>saving files>on first save and choose to save over existing file. That should do what you want.

  • How to ignore columns in CSV File with File Sender Adapter

    Hi,
    I have a CSV file that I need to load with XI. The file contains 10 columns but I only need the data from 3 of them, let's say columns 1, 4 and 7. Can I configure the XI File Sender Adapter so that it only loads the data for the required columns and ignores the rest?
    Thanks in advance.
    Alex

    Alex,
    Don't think so. Why not create dummy fieldNames for the columns you want to ignore?
    Regards
    Bhavesh
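
    To illustrate Bhavesh's suggestion with hypothetical names (a single record type called Row, comma-separated fields): give every column a field name in the sender content conversion and simply never map the dummy ones, for example:
      Recordset Structure: Row,*
      Row.fieldSeparator: ,
      Row.fieldNames: col1,dummy2,dummy3,col4,dummy5,dummy6,col7,dummy8,dummy9,dummy10
    Columns 1, 4 and 7 then arrive as col1, col4 and col7 in the source XML, and the dummy elements are simply ignored in the message mapping.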
