File copy scenario

Hi,
Is it possible to decide the destination folder (where the file needs to be copied) based on the sender file name (no mapping exists, only ID settings)? As shown in the example below, based on the text "100" in the source file name, the file should be copied under folder 100.
If not, what would be the best approach to achieve this?
sender                       ->         receiver
src/PI/folder100.txt     ->       dest/PI/100/folder100.txt
src/PI/folder200.txt     ->       dest/PI/200/folder200.txt
Thanks in advance ,
Raj

Hi
Message Mapping : create a UDF with the code below and map it to the root tag of the target structure.
Java Mapping : include the same piece of code in your Java mapping.
http://wiki.sdn.sap.com/wiki/pages/viewpage.action?pageId=95093307
DynamicConfiguration conf = (DynamicConfiguration) container.getTransformationParameters().get(StreamTransformationConstants.DYNAMIC_CONFIGURATION);
DynamicConfigurationKey key = DynamicConfigurationKey.create("http://sap.com/xi/XI/System/File", "FileName");
DynamicConfigurationKey key1 = DynamicConfigurationKey.create("http://sap.com/xi/XI/System/File", "Directory");
String fileName = conf.get(key);
String fileDir = conf.get(key1);
// Keep only the numeric part of the file name as the subfolder,
// e.g. "folder100.txt" -> directory "dest/PI/100/"
String newDir = fileDir + "/" + fileName.replaceAll("[^0-9]", "") + "/";
conf.put(key1, newDir);
return "";
Regards
Ramg
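Outside XI, the directory derivation the UDF performs is plain string manipulation. A minimal standalone sketch (the class name `DestDir` and the digits-only regex are my assumptions about the naming convention, not from the thread):

```java
public class DestDir {
    // Derives the target directory from the base directory and the source
    // file name, per the example in the question:
    //   ("dest/PI", "folder100.txt") -> "dest/PI/100/"
    public static String destinationDir(String baseDir, String fileName) {
        String numeric = fileName.replaceAll("[^0-9]", ""); // "folder100.txt" -> "100"
        return baseDir + "/" + numeric + "/";
    }
}
```

In the real UDF the result is then written back to the `Directory` attribute so the receiver file adapter picks it up.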

Similar Messages

  • Multithreaded file copy takes 1.5 times more time than single thread.

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.nio.channels.FileChannel;
    public class TestMulti implements Runnable {
         public static Thread Th1;
         public static Thread Th2;
         String str = null;
         // Shared counter; note it is updated from both threads without synchronization.
         static int seqNumber = 1000000000;

         public static void main(String args[]) {
              Th1 = new Thread(new TestMulti("1_1"));
              Th2 = new Thread(new TestMulti("1_2"));
              Th1.start();
              Th2.start();
              try {
                   Th1.join();
                   Th2.join();
              } catch (Exception e) {
                   e.printStackTrace();
              }
         }

         public TestMulti(String str) {
              this.str = str;
         }

         public void run() {
              File f = new File("C:/Songs2/" + str);
              File files[] = f.listFiles();
              String fileName = "";
              String seqName = "";
              String seq = "";
              int sequenceNo = 0;
              try {
                   for (int j = 0; j < files.length; j++) {
                        File musicFiles[] = files[j].listFiles();
                        for (int k = 0; k < musicFiles.length; k++) {
                             seq = "18072006";
                             seqName = seq + seqNumber;
                             sequenceNo = 10000 + seqNumber % 100;
                             seqNumber = seqNumber + 1;
                             fileName = musicFiles[k].getName();
                             String fileExt = fileName.substring(fileName.length() - 3, fileName.length());
                             String targetFile = "C:/Songs1/" + sequenceNo;
                             File fi = new File(targetFile);
                             if (!fi.exists()) { fi.mkdir(); }
                             targetFile = "C:/Songs1/" + sequenceNo + "/" + seqName + "." + fileExt;
                             FileInputStream fin = new FileInputStream(musicFiles[k]);
                             FileChannel fcin = fin.getChannel();
                             FileOutputStream fout = new FileOutputStream(targetFile);
                             FileChannel fcout = fout.getChannel();
                             fcin.transferTo(0, fcin.size(), fcout);
                             fcout.close();
                             fcin.close();
                             fout.close();
                             fin.close();
                        }
                   }
              } catch (Exception e) {
                   e.printStackTrace();
              }
         }
    }
    This multithreaded file copy takes 1.5 times more time than a single thread.
    Is there any issue with this code? Please help me.

    If all of your threads are doing CPU-intensive work, or all are doing I/O to the same interface (for example, writing to the same physical disk), then multithreading would not be expected to help you.
    Multithreading does not magically make your CPU able to do more work per unit time than it could otherwise.
    Multithreading does not magically make your network interface or disk controller able to pump more bytes through than it could otherwise.
    Where multithreading helps (some or all of this has already been mentioned):
    * When you have multiple, independent CPU-bound tasks AND multiple CPUs available on which to execute them.
    * When you have tasks that involve a mix of CPU-bound and I/O-bound work. The CPU-bound stuff can crank while the I/O-bound stuff waits for bytes to be written or read, thus making use of what would otherwise be CPU "dead time."
    What you're doing does not fit either of those scenarios. Copying a file is pure I/O. If the source and destination file are on the same physical disk or controller, adding threads only adds overhead with no real possibility to do more work per unit time.
    If your source and destination are on different disks or controllers, then it's possible that you could get some benefit from multithreading. While one thread is waiting for bytes to be written to the target disk, the other thread can be reading from the source disk.
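    When the source and destination really are on different disks, that overlap can be made explicit: one thread reads chunks into a bounded queue while another writes them out. A minimal sketch of the idea (the class name, queue depth, and buffer size are arbitrary choices of mine, not from the post):

```java
import java.io.*;
import java.util.Arrays;
import java.util.concurrent.*;

public class PipelinedCopy {
    private static final byte[] POISON = new byte[0]; // end-of-stream marker

    // Copies src to dst using one reader thread and one writer thread,
    // so a read from the source disk can overlap a write to the target disk.
    public static void copy(File src, File dst) throws Exception {
        BlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(16);

        Thread reader = new Thread(() -> {
            try (InputStream in = new FileInputStream(src)) {
                byte[] buf = new byte[64 * 1024];
                int n;
                while ((n = in.read(buf)) > 0) {
                    queue.put(Arrays.copyOf(buf, n)); // blocks when the queue is full
                }
                queue.put(POISON);
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        });

        Thread writer = new Thread(() -> {
            try (OutputStream out = new FileOutputStream(dst)) {
                byte[] chunk;
                while ((chunk = queue.take()) != POISON) {
                    out.write(chunk);
                }
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        });

        reader.start();
        writer.start();
        reader.join();
        writer.join();
    }
}
```

    Even then, the gain is bounded by the slower of the two devices; on a single disk this sketch will, as said above, only add overhead.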

  • Existing files on drive connected to latest Airport Extreme do not show up, however files copied over the air to the drive do. Any ideas?

    I have the latest AirPort Extreme with a Seagate drive connected. I copied files to the drive while it was connected to my MacBook Pro. The files do not show up when the drive is connected to the AirPort Extreme, but they appear when it is connected to the MacBook. Files copied over the network to the drive do appear. The drive is formatted HFS+.

         After playing with this for the last few days, I found a solution. Under the "Disks" tab in AirPort Utility, the "Secure Shared Disks" drop-down needs to be set to "With device password". Mine was set to "With accounts", and with that setting none of the files would appear, even after correct authentication.
         Another interesting note: the files that I could see, the ones I copied over the network, do not appear now. I can copy over new files and they show up, but the ones copied prior to changing the "Secure Shared Disks" option do not.

  • File Copy times

    My newsreader is acting funny and dropping posted messages, so I
    apologize if this shows up twice.
    My comment on the file speeds is that the times posted by others just go
    to show how difficult it sometimes is to make good timing measurements.
    I suspect that the wide variations being posted are due in large part to
    disk caching. To measure this, you should either flush the caches each
    time or run the tests multiple times to make sure that the cache affects
    them more equally.
    Here is what I'd expect. The LV file I/O is a thin layer built upon the
    OS file I/O. Any program using file I/O will see that smaller writes
    have somewhat more overhead than a few large writes. However, at some
    size, either LV or the OS will break the larger writes into smaller
    ones. The file I/O functions will in general be slower to read and
    write contents than making a file copy using the copy node or move node.
    Sorry I can't be more specific, but if you have a task that
    seems way too slow, please send it to technical support and report a
    performance problem. If we can find a better implementation, we will
    try to integrate it.
    Greg McKaskle

    Maybe this is because of the write buffer?
    Try mounting the media using the -o sync option to have data written immediately.

  • How to measure progress of file copy?

    One of the requirements for my AIR application is to copy
    large files from one location to another on the client computer.
    I've been using the copyToAsync() method in AIR, but unfortunately
    this does not trigger progress events. Copying gigabyte files that
    take several seconds becomes pretty uncomfortable for users, when
    they can't see the progress.
    Does anyone know of an alternative or workaround? All I need
    to do is copy a file from one location to another, and measure the
    progress while that is happening.
    Since I control the client machines (it is a closed
    environment), I have the option to setup file servers on every
    machine and use the file download control with Flash instead of
    file copy. However, when I investigate this option it seems that
    the file download control will always trigger user interaction to
    select the target path (which I don't want -- I want to
    programmatically select the source files and the target directory).
    If someone knows a solution to that problem I would happily take it
    and move on :)

    Hi,
    Use 2 FileStream objects in async mode (openAsync). One to
    read and another to write. When you read in async mode, you get
    progress events. When they fire, you can read bytesAvailable
    amounts of data and write that to the output filestream.
    The output filestream can listen for output_progress events
    to know whether new data can be written to it yet.
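    The same chunked-copy-with-callback idea is not specific to AIR. As a sketch in Java (the class name, chunk size, and callback shape are my own choices, not an AIR API): copy in fixed-size chunks and report progress after each one.

```java
import java.io.*;
import java.util.function.BiConsumer;

public class ProgressCopy {
    // Copies src to dst in fixed-size chunks and invokes the callback with
    // (bytesCopied, totalBytes) after each chunk -- the same idea as reading
    // bytesAvailable on each async progress event and writing it out.
    public static void copy(File src, File dst,
                            BiConsumer<Long, Long> progress) throws IOException {
        long total = src.length();
        long copied = 0;
        byte[] buf = new byte[256 * 1024];
        try (InputStream in = new FileInputStream(src);
             OutputStream out = new FileOutputStream(dst)) {
            int n;
            while ((n = in.read(buf)) > 0) {
                out.write(buf, 0, n);
                copied += n;
                progress.accept(copied, total); // e.g. update a progress bar here
            }
        }
    }
}
```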

  • Network speed affected by large file copy operations. Also, why intermittent network outages?

    Hi
    I have a couple of issues on our company network.
    The first is that a single large file copy impacts the entire network and dramatically reduces network speed; the second is that there are periodic outages where file open/close/save operations may appear to hang, and where programs that rely on
    network connectivity, e.g. email, appear to hang. It is as though the PC loses its connection to the network, but the status of the network icon does not change. For the second issue, if we wait, the program will respond, but the wait period can be up to 1 min.
    The downside is that this affects Access databases on our server, so that when an 'outage' occurs the Access client cannot recover and hangs permanently.
    We have a Windows Active Directory domain that comprises Windows 2003 R2 (soon to be decommissioned), Windows Server 2008 Standard and Windows Server 2012 R2 Standard domain controllers. There are two member servers: A file server running Windows 2008 Storage
    Server and a remote access server (which also runs WSUS) running Windows Server 2012 Standard. The clients comprise about 35 Win7 PC's and 1 Vista PC.
    When I copy or move a large file from the 2008 Storage Server to my Win7 client other staff experience massive slowdowns when accessing the network. Recently I was moving several files from the Storage Server to my local drive. The files comprised pairs
    (e.g. folo76t5.pmm and folo76t5.pmi), one of which is less than 1MB and the other varies between 1.5 - 1.9GB. I was moving two files at a time so the total file size for each operation was just under 2GB.
    While the file move operation was taking place a colleague was trying to open a 36k Excel file. After waiting 3mins he asked me for help. I did some tests and noticed that when I was not copying large files he could open the Excel file immediately. When
    I started copying more data from the Storage Server to my local drive it took several minutes before his PC could open the Excel file.
    I also noticed on my Win7 client that our email client (Pegasus Mail), which was the only application I had open at the time would hang when the move operation was started and it would take at least a minute for it to start responding.
    Ordinarily we work with many files
    Anyone have any suggestions, please? This is something that is affecting all clients. I can't carry out file maintenance on large files during normal work hours if network speed is going to be so badly impacted.
    I'm still working on the intermittent network outages (the second issue), but if anyone has any suggestions about what may be causing this I would be grateful if you could share them.
    Thanks

    What have you checked for resource usage during one of these copies of a large file?
    At a minimum I would check Task Manager>Resource Monitor.  In particular check the disk and network usage.  Also, look at RAM and CPU while the copy is taking place.
    What RAID level is there on the file server?
    There are many possible areas that could be causing your problem(s).  And it could be more than one thing.  Start by checking these things.  And go from there.
    Hi, JohnB352
    Thanks for the suggestions. I have monitored the server and can see that the memory is nearly maxed out with a lot of hard faults (varies between several hundred to several thousand), recorded during normal usage. The Disk and CPU seem normal.
    I'm going to replace the RAM and double it up to 12GB.
    Thanks! This may help with some other issues we are having. I'll post back after it has been done.
    [Edit]
    Forgot to mention: there are 6 drives in the server. 2 for the OS (Mirrored RAID 1) and 4 for the data (Striped RAID 5).

  • Does the last updated backup in Time Machine have all my files for "standard" file copy? (i want to copy and paste files to another computer that doesn't have TM)

    My personal Macbook pro died, but I did have a Time Machine backup. I have a new iMac that I would like to use as a family computer. I would like to transfer some files to the new computer (and set-up a new Time Machine backup), but archive the remainder on a non-TM drive. I was planning to:
    1. File copy my last backup folder from my old TM drive to the new computer
    2. Copy (and then delete) the files I want to archive on another HD drive
    3. Wipe clean my current TM drive
    4. Set-up TM on my new computer
    Is this a good way to proceed?
    Is copying the last backup folder (vs. all backup folders) enough to move my files over?
    Any better approaches appreciated!

    I suggest you visit MicroCenter, go to their Apple section, and ask one of their guys if they are still offering the 2-terabyte backup drive for the iMac. Mine was only $100.00.
    So the iMac has Time Machine plus the USB 2-terabyte backup for less than your $130.00.

  • File copying problems

    Copying some files using the Finder results in a dialogue box error message: "The Finder cannot complete the operation because some data in (file name) could not be read or written. (Error code -36)". The same files copy without error when using the SilverKeeper (v2.0.2) backup program.
    I am also experiencing a problem when quitting SilverKeeper. A second or so after selecting Quit, and after the program has actually quit, a system dialogue box appears saying that the program has unexpectedly quit and offering choices of Reopen, Report to Apple or Close.
    The system also seems somewhat sluggish.
    Can anyone offer suggestions?

    Hi John, At this point I think you should get Applejack...
    http://www.versiontracker.com/dyn/moreinfo/macosx/19596
    After installing, reboot holding down CMD+S, then when the DOS-like prompt shows, type in...
    applejack AUTO
    Then let it do all of its things.
    At least it'll eliminate some questions if it doesn't fix it.
    The six things it does are...
    Correct any Disk problems.
    Repair Permissions.
    Clear out Cache Files.
    Repair/check several plist files.
    Dump the VM files for a fresh start.
    Trash old Log files.
    First reboot will be slower, sometimes 2 or 3 restarts will be required for full benefit... my guess is files relying upon other files relying upon other files!
    Disconnect the USB cable from any UPS so the system doesn't shut down in the middle of the process.

  • Is it possible to have media files copied to a new location as you import them into a project?

    I have roughly 800GB of footage I've taken over the past 2 years of traveling, and I am making a single video montage... The files are located on an external drive, making them slower to work with. I have a 1TB SSD but not enough free space to copy all the files to it for sorting. What I want to do is have the files copy over to the SSD as I add them to the project, so that only the footage I'm going to use gets copied over to the SSD. The only way I can determine whether I am going to use the footage or not is to scrub through and mark Ins and Outs in Premiere...
    So I guess my question, in the simplest terms I can ask, is: is there a setting in Premiere, like the project manager tool which allows you to consolidate used media files to one destination, but one that will consolidate files as you add them to the project? Or could I do this manually by repeatedly using the project manager tool to consolidate files periodically?
    Hope that makes sense...
    Oh, and yes, I have considered other methods to my madness, like reviewing the footage outside of Premiere before deciding if I will use it or not. Frankly, I have over 4000 pieces of footage and it's just much faster to scrub through them in Premiere than to open each one in VLC to review. It's also much more convenient to mark my Ins and Outs as I review the footage, so Premiere is my only option.
    Thanks!

    You can easily (and better, IMHO) do what you want in Adobe Prelude.  It's part of the CC subscription, and may have been included with CS6.
    Open the Ingest panel, navigate to the external drive where you have your source clips, make the thumbnails in the Ingest panel comfortably large, click a video clip to enable scrubbing, and use the J-K-L keys to navigate playback through the clips.  Put a check mark on the clips you want and be sure to select and set up the Transfer option on the right side of the panel before ingesting.  Don't select the Transcode option.
    Cheers,
    Jeff

  • File copy speeds to CSV vs non-CSV

    I'm working on bringing up a 2012 R2 cluster and doing a basic test.  In this cluster, I have two adapters for iSCSI traffic, one for network traffic, and one for the heartbeat.  Cluster node has all the current updates on it.  Everything
    is set up correctly as far as I can see.  I'm taking a folder with 1GB of random files in it and copying it from the C: drive of a node to an iSCSI LUN.  If I have the LUN set up as a non-CSV disk, the copy happens about three times faster than if
    I have it set up as a CSV disk.  All I'm doing is using FCM to change the disk from CSV to non-CSV (right-click, Remove from CSV; right-click, Add to CSV).  I can swap it back and forth, and each time the copy process is about three times slower when
    it's a CSV.  Am I missing something here?  I've been through all the usual stuff with regard to the iSCSI adapters, MPIO, drivers, etc.  But I don't think that would have anything to do with this anyway.  The disk is accessed the same with
    regard to all that whether it's CSV or not, unless I'm missing something.  Right now, I only have a single node configured in the cluster, so it's definitely not anything to do with the CSV being in redirected mode.
    I'm not trying to establish any particular transfer speed, I know file transfers are different than actual workloads and performance tools like iometer when it comes to actual numbers.  But it seems to me like the transfers should be close
    to the same whether the disk is a CSV or not, since I'm not changing anything else. 

    Which system owns the CSV?  If the system from which you are copying does not own the CSV then all the metadata updates have to go across the network to be handled by the node that does own the CSV.  If you are copying a lot of little
    files, there is more metadata.
    Actually, metadata updates always happen in redirected IO from what I'm reading, that has been the part that I was missing.  This explains it. 
    https://technet.microsoft.com/en-us/library/jj612868.aspx?f=255&MSPPError=-2147217396 "When certain small changes occur in the file system on a CSV volume, this metadata must be synchronized on each of the physical nodes that access the
    LUN, not only on the single coordinator node... These metadata update operations occur in parallel across the cluster networks by using SMB 3.0. "
    So a file copy, even when done on a coordinator node, does the metadata updates in redirected mode.  Other articles seem to say the same thing, though not always clearly.  So it's still accurate to say that a file copy isn't the best way to measure
    CSV performance, but there doesn't seem to be a lot of pointing to the (I think) important distinction regarding how the metadata updates work.  From what I can see, that distinction is probably trumping anything else such as who is the
    coordinator node, CSV cache, etc.  For me anyway, it makes a 3X performance difference, so I think that's pretty significant.  

  • Files copied (backed up) on an external hard drive can NOT be opened

    MacBook Pro (2.5GHz, intel Core i5)
    OS X Yosemite, 10.10.2
    I was running low on storage space on my MacBook Pro. I copied all items on the Desktop to my external hard drive. After all files were copied from the MacBook Pro to the external hard drive, "Get Info" suggested that the amount of space taken was the same between when all items were on the Desktop and when they were on the external hard drive. The number of items copied over was also identical. I should have been more careful, but I then deleted all items from the Desktop. It wasn't until a few days later (today) that I noticed the majority of the files copied onto the external disk can NOT be opened now. The error message was "The alias “<filename>” can’t be opened because the original item can’t be found."
    I really don't want to lose these files, but where are they? They have the correct file sizes but just cannot be opened. I tried copying these files back to the Desktop on my MacBook Pro; that's not helpful. I tried accessing the files on a PC; still can't open the files. I tried repairing the disk of the external hard drive (under Disk Utility); still can't open the files. I tried repairing disk permissions of my MacBook Pro hard drive; still can't open the files.
    One last thing I tried in Terminal (I am not terminal savvy at all, just googled the "The alias..." error message and copied the instruction):  /usr/bin/SetFile -a a /Volumes/MAC  This did not solve the problem, either.
    Thank you for your help in advance!

    Hi, eddie from pataskala ohio.
    We love pro-active thinking here at the Apple Discussions and you're off to a great start.
    Even though upgrading iTunes shouldn't affect your Library, one of the funny things about computers is that sometime, somehow, somewhere when you least expect it, they can do some not so funny things, like losing data that they're not supposed to. Which is why it's a good idea to have a backup copy of any data (such as your music) that you value in reserve ... just in case.
    The most straightforward way, is to go to the iTunes Edit menu > Preferences > Advanced tab > General sub-tab and make sure your iTunes Music folder is assigned to its default location: C:\Documents and Settings\User Name\My Documents\My Music\iTunes .
    Next make sure all your music is stored in that folder by using the iTunes Advanced menu > Consolidate Library option.
    Now you can copy the entire iTunes folder - the iTunes Music folder and the iTunes library files in particular - to your external drive.
    If anything does go wrong with the upgrade, you'll be all set to recreate your Library.

  • Has anyone else had Mountain Lion 10.8.2 file copying trouble?

    Has anyone had any trouble with file copying to external disks?
    I have used a third party programme called "Synchronize! Pro X" (SyncPX) for many years with OS X up to 10.5.8 and it has never given any trouble at all.  In July I bought a new MacBook Pro 15", upgraded to 10.8.1 and bought the latest update for SyncPX (6.5.1).  This worked perfectly as far as I know until I downloaded the 10.8.2 Mountain Lion upgrade about a week ago.
    The symptoms are:
    1) When files are copied in backup mode (running as a single user, updating only files that have changed) to an external disk using Firewire, a high percentage of them do not have their file information copied correctly.  The "Last modified" date is often set to the time that the copy was made rather than being replaced with the  "Last modified" date from the source file.  Also, "further information" is sometimes not copied.
    2)  When files are copied in "Full bootable backup" mode (SyncPX runs "as root", copies files from ALL users and tries to set owner and group to be same as source file), the same symptoms as in (1) occur, but also SyncPX hangs right at the end of copying.  Sometimes it says "Updating dyld cache…", sometimes "Updating aliases…".  When this happens it is impossible to halt SyncPX except by forcing a quit, and then the external disc is not released so that it cannot be ejected except by forcing a shutdown.  In that case the external disk usually needs repairing.  The errors are always "incorrect number of extended attributes".
    I have tried lots of things.
    a) Restart
    b) Reformatting the external disk and starting over.
    c) Reinstalling the system (after checking and repairing the main system disc), followed by (b)
    d) Turning off indexing for the external disc
    e) Repeating the test with a different external disk (to make sure it's not down to a faulty external disk)
    f) I have looked at Console logs but they do not indicate anything odd going on.
    None of this makes any difference.  By contrast, a straight drag and drop copy from Finder works with no errors.
    At this point you probably think the errors are SyncPX's fault.  But:
    i) The same version of SyncPX worked until 10.8.2 was released
    ii) If the destination disk for a SyncPX backup is a network disk, everything works OK.
    I started by taking the fault up with SyncPX's developer, Qdea.  One other user had noted a fault similar to (2).  Qdea maintain that there's nothing wrong with their code, and blame Apple's disc drivers.
    On balance I think Qdea could be right, or at least Apple have changed some specifications but not told the developer world.
    I haven't yet tried to take this up directly with Apple (through AppleCare) as I think they will probably wash their hands of it and blame the developer.  The point of this post is to see if anyone else has had a similar problem, perhaps with different software, and if there is anyone with sufficient experience to give advice on how to take this forward.  Is the case for an OS bug convincing or do others feel it's a fault with SyncPX alone?  If so how does one explain its perfect behaviour until 10.8.2?
    What steps can I take to pin this down further?  All advice and comments will be gratefully received..

    RobertDHarding wrote:
    At this point you probably think the errors are SyncPX's fault. 
    And you would be right.
    I started by taking the fault up with SyncPX's developer, Qdea.  One other user had noted a fault similar to (2).  Qdea maintain that there's nothing wrong with their code, and blame Apple's disc drivers.
    What steps can I take to pin this down further?  All advice and comments will be gratefully received..
    Dump Qdea for someone who knows how to write Mac software.
    1. Qdea displays a pre-Mac OS X logo on its site.
    2. Qdea displays a review from MacFixIt. MacFixIt had a good reputation, if you are old enough to remember when it used to exist.
    3. Qdea provides download links to Synchronize! X and versions in French, German, and Japanese. Apparently they don't know that Mac software can be localized into a single executable.
    I'm sure there are any number of similar utilities available, all of them cheaper than SynchronizeX's $99.99 price.

  • Slow Files Copy File Server DFS Namespace

    I have two file servers running on VM both servers are on different physical servers.
    Both connect with dfs namespace.
    The problem is that the two servers never have the same copy speed.
    Sometimes file copies are very slow, about 1MBps, on FS01 and fast, 12MBps, on FS02.
    Sometimes it is fast on FS01 and slow on FS02.
    Sometimes both of them are slow.
    So, as usual, I rebooted the servers. That didn't work.
    Then I rebooted DC01; that also didn't work. There is another brother DC, DC02.
    After I rebooted DC02, one of the FSs became normal and the other FS was still slow.
    It is FS01 and FS02 randomly; they never get the faster speed together.
    Users never complain about a slow FS because 1MBps is acceptable for them to open Word, Excel, etc.
    The HUGE problem is that I don't have a backup from the days when an FS is slow.
    The problem has been going on for two weeks; I'm giving up fixing it myself and need help from you expert guys.
    Thanks!
    DC01, DC02, FS01, FS02 (Win 2012 and All VMs)

    Hi,
    Since the slow copy also occurred when you tried a direct copy from both shared folders, you could enable the disk write cache on the destination server to check the results.
    HOW TO: Manually Enable/Disable Disk Write Caching
    http://support.microsoft.com/kb/259716
    Windows 2008 R2 - large file copy uses all available memory and then tranfer rate decreases dramatically (20x)
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/3f8a80fd-914b-4fe7-8c93-b06787b03662/windows-2008-r2-large-file-copy-uses-all-available-memory-and-then-tranfer-rate-decreases?forum=winservergen
    You could also refer to the FAQ article to troubleshoot the slow copy issue:
    [Forum FAQ] Troubleshooting Network File Copy Slowness
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/7bd9978c-69b4-42bf-90cd-fc7541ccb663/forum-faq-troubleshooting-network-file-copy-slowness?forum=winserverPN
    Best Regards,
    Mandy 

  • File-to-File/RFC scenario with reading filename

    Hi,
    I have a File-to-File/RFC scenario which causes some problems in designing it correctly. Maybe some of you have an idea how to do this.
    Scenario:
    - A file is picked up by a File-Adapter. The files are different: pdf, doc, tiff, jpg, txt, ...
    - The file must now go through a business process (not necessary the file, but i need the filename in the business process).
    - The process has to contact several backend systems (SAP R/3) to collect some data. To achieve this the filename has to be send to this systems.
    - The collected data are send via SOAP to a receiver system
    - The file itself has to be stored in a directory via File-Adapter.
    Here's my problem:
    - Is it possible to transport the binary file content within a message which contains other elements (e.g. filename)?
    - Is it possible to do graphical mappings with such a payload (only 1 to 1)? Or must I use Java mappings only?
    - How to generate a message from the sender File Adapter which contains binary file content AND the filename? Is this possible with a module?
    - Is it better to create 2 messages with an adapter module, one with the image and the other with the filename? Or is it better to split them later in a mapping?
    Thanks in advance,
    ms

    If all that you need is the file name, use adapter-specific settings in the sender file adapter. Then you can access the file name at mapping runtime in UDFs. If you want to have the content of the pdf, jpg, etc. images, I do not think there are ready modules available except for reading pdfs (you might have to research this).
    For lookups etc. with R/3 systems, you can use the file name that you got from the adapter and store it in mapping fields.
    VJ

  • N:1 file merging scenario in bpm

    Hi friends,
    Can anybody send me a link to a blog about a file merging scenario in BPM? I have two files and I have to merge them into one single file. I am new to XI, please help me.
    Thanks in advance,
    Shweta

    Hi,
    Check this blogs...
    /people/pooja.pandey/blog/2005/07/27/idocs-multiple-types-collection-in-bpm
    /people/narendra.jain/blog/2005/12/30/various-multi-mappings-and-optimizing-their-implementation-in-integration-processes-bpm-in-xi
    http://help.sap.com/saphelp_nw04/helpdata/en/08/16163ff8519a06e10000000a114084/content.htm
    Merging using correlation -
    /people/sravya.talanki2/blog/2005/08/24/do-you-like-to-understand-147correlation148-in-xi
    Example using correlation:
    http://help.sap.com/saphelp_nw04/helpdata/en/08/16163ff8519a06e10000000a114084/content.htm
    Please reward points if it helps
    Thanks
    Vikranth
