Sender File Adapter: stop processing all files at once

Hello all,
By default, the file adapter picks up all the files in the directory. If a large number of files are in the directory, this can slow down PI processing. Is there any way to process only one file per polling interval?
Regards

Ralf Zimmerningkat wrote:
> Hello all,
> By default, the file adapter picks up all the files in the directory. If a large number of files are in the directory, this can slow down PI processing. Is there any way to process only one file per polling interval?
>
> Regards
I have no problem if you have got your answer, but that blog only explains how to exclude the other files from the same folder.
Your case: your sender CC wants to pick up the file ABC.txt from the /xyz directory. Now suppose there are ten thousand files matching the same name pattern in the directory and you want them picked up one by one. How is that blog going to help? Can you please explain, to me and to others too?
@Sachin, maybe you can throw some light on this... maybe I am missing something.

Similar Messages

  • Sender File adapter to process each file with time gap

    Hi All,
    I have a sender file adapter that picks up 2 files from a folder. I want to delay the processing of each file by 5 minutes. For example: PI should process the first file, then wait 5 minutes, then process the next, and so on. Is this achievable?
    Is there a way to create 2 sender agreements if I create 2 separate communication channels, one for each file?
    I am using PI 7.0.

    Hi Sarvesh,
    You are correct... but I am not wrong... just joking.
    Yes, you are correct in normal cases, but this interface also involves a BPM, and Dev didn't describe the whole problem clearly.
    The problem is as follows: when he runs the sender communication channel and there are multiple files, every file initiates a BPM instance, and in turn every BPM instance waits for a message from a proxy. The proxy messages coming from ECC are not able to reach the exact BPM instance; e.g. 2 proxy messages go to 1 BPM instance, etc.
    FYI, there is no correlation defined on the messages.
    Babu

  • File Adapter Not Processing Large File

    Hi Everyone,
    We are experiencing a problem in an interface that includes a file adapter. The scenario is as follows:
    1. A .csv file containing multiple records needs to be processed; for each line of the .csv file, an XML file should be generated. We are using the 'Recordsets Per Message' field in the Content Conversion parameters to achieve this (a parameter sketch follows at the end of this thread). After the source .csv file is processed, it is deleted.
    We were testing with small input files of 15-30 records each. To test scalability, we increased the number of records in a file to nearly 300, but the file adapter did not pick up the file. The communication channel in RWB is showing green, the MDT contains no entries, and SXMB_MONI also shows no messages. What can be the problem? Is there any limit on the size of file that can be converted in this way?
    Awaiting your replies,
    Regards,
    Amitabha

    Amitabha,
    300 records should not be a problem at all. If you are not getting any error message in communication channel monitoring, then it is time to take a look into the Visual Administrator logs.
    Ref: /people/michal.krawczyk2/blog/2005/09/07/xi-why-dont-start-searching-for-all-errors-from-one-place
    Regards,
    Jai Shankar
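    For reference, the 'Recordsets Per Message' setup described above typically looks like the sketch below in the sender channel's Content Conversion parameters. This is a minimal sketch assuming one record type per CSV line; the names MT_Records, Recordset and Row and the field list are illustrative, not taken from the post:
    Document Name: MT_Records
    Recordset Name: Recordset
    Recordset Structure: Row,1
    Recordsets per Message: 1
    Row.fieldSeparator: ,
    Row.endSeparator: 'nl'
    Row.fieldNames: field1,field2,field3
    With one Row per recordset and Recordsets per Message set to 1, the adapter emits one XML message per CSV line, which is the behavior the scenario above relies on.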

  • HA File Adapter still processing same file twice

    We are using the eis/HAFileAdapterMSSQL to watch for ZIP files that get dropped into a directory. We are running SOA Suite 11.1.1.6 on WebLogic 10.3 with a 2-server cluster. We set our PollingFrequency to 120 seconds.
    When our ZIP files are small enough that our BPEL process (which does some unzipping and merging of the files therein) completes within those 120 seconds, everything is fine: one, and only one, node in our cluster processes the file.
    But when the ZIP files are large and our BPEL process takes 30 minutes to complete (i.e. longer than the 120-second polling frequency), we see a new instance of the BPEL process start on soa_server1 as expected, and then a second process instance start on our soa_server2 instance 120 seconds later.
    It's almost as if, when the entire BPEL process kicked off by the HAFileAdapterMSSQL does not complete within our polling frequency, the file is not blocked from being picked up and processed by the other servers in our cluster. Is there a way to "lock" the file to the first instance of the HAFileAdapterMSSQL that picks it up, until it is completely finished with it?
    Has anyone encountered this?
    Thanks,
    Michael

    You need to define the below property in the composite.xml file of your BPEL process. This allows only one adapter instance to be active, with the other passive, to pick up the file.
    <activationAgents>
      <activationAgent className="oracle.tip.adapter.fw.agent.jca.JCAActivationAgent" partnerLink="PickupFile/FTPConsumer">
        <property name="clusterGroupId">BPELProcessNameCluster</property>
        <property name="portType">FTP/FileConsumer_Message_ptt</property>
      </activationAgent>
    </activationAgents>
    Thanks,
    Vijay

  • Sender File adapter not picking up the ABCD.PRN file

    Hi
    I am doing a File-to-ABAP-Proxy scenario. My source is a text file with 7 fixed-length fields.
    The source file is generated by a third-party machine with a name like EEE150809.PRN. In my scenario the file adapter should pick up the file and update the data in ECC via a proxy.
    I have configured the sender file adapter with the message protocol 'File Content Conversion', but the file adapter is not picking up the file. I have checked the communication channel and its status is fine, and the same communication channel works for a .XML file.
    What parameters do I have to consider for the .PRN file extension when using the file adapter?
    Thanks.
    S.

    Hi Swarna,
    You don't need to worry about the extension when picking up the file. You can try using EEE* so that it picks up all files starting with EEE; if the name is fixed, you can try EEE150809.*. That said, EEE150809.PRN should not be an issue either. Check the sender communication channel for any errors; there might be network or authorization issues. If you are reading via NFS, ask the administrators to grant the necessary permissions; if you are reading via FTP, check the user ID and password you are using.
    Regards,
    ---Satish

  • Sender File Adapter picking the same file twice

    We are facing a weird issue with the file sender adapter.
    We are using the PI file adapter (NFS) to read files from an NFS folder and process them in PI. Normally this works fine, but in one scenario we notice that it sometimes processes the same file twice before archiving, thus duplicating the financial postings.
    What we have is:
    File sender adapter - NFS, polling interval 60 secs, processing mode Archive. The file name includes wildcards: JE_Upload*.txt
    What we are noticing is that when the adapter picks up a file, it immediately polls again to check for another file; sometimes the file is not yet archived, so it picks up and reprocesses the same file.
    If you look at the messages below, both belong to the same file, and the adapter picked up the same file again 12 seconds after processing it the first time:
       Successful 02.11.2009 15:01:00 02.11.2009 15:01:01   APMANUAL     urn:bl:i2g:003:100
    SI_SKF_FIDOC_OB XI Message
       Successful 02.11.2009 15:00:49 02.11.2009 15:00:50   APMANUAL     urn:bl:i2g:003:100
    SI_SKF_FIDOC_OB XI Message
    Anyone seen this behavior before?

    Hi,
    Please check the script that creates the files in the source NFS folder. There is a possibility that the script modifies the file while PI is picking it up.
    When PI picks the file up the first time, it creates one message ID in the system. After that, if the script makes any change to the file without changing the file name (this need not be a data change; a change in length is enough), PI treats it as a new file and generates a new message ID for the same file.
    A related error occurs when the file adapter is not able to archive the file successfully, e.g. because a file with the same name already exists in the archive folder. If the file has the same name and is not modified, the adapter does not generate a new message ID, but keeps throwing the error until you remove the file with the same name from the archive directory.
    -Warm Regards,
    Gouri
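    On the producing side, the usual fix for the file-still-being-modified problem described above is write-then-rename: the script writes the file under a temporary name that the channel's wildcard will not match, then renames it in one atomic step once it is complete. A minimal sketch in Java under that assumption (the class, paths and names are hypothetical, not from this thread):
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    public class AtomicDrop {
        // Write under a .tmp name that the JE_Upload*.txt wildcard will not match,
        // then rename in one step so PI never sees a half-written or changing file.
        public static void drop(byte[] content, String dir, String finalName) throws IOException {
            Path tmp = Paths.get(dir, finalName + ".tmp");
            Path dst = Paths.get(dir, finalName);
            Files.write(tmp, content);
            Files.move(tmp, dst, StandardCopyOption.ATOMIC_MOVE);
        }
    }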

  • Aim to process all files in folders on desktop to run through photoshop and save in multiple locations

    Part one:
    Gather information from the desktop to get the brand names and week numbers from the folders, excluding folders on the desktop beginning with "2" or "Hot".
    I am not sure about getting the list of folders, but I have got this bit to work with:
    set folderPath to "Hal 9000:Users:matthew:Desktop:DIVA_WK30_PSD" --<<this would be gained from the items on the desktop
    set {oldTID, my text item delimiters} to {my text item delimiters, ":"}
    set folderName to last text item of folderPath
    set my text item delimiters to "_WK"
    set FolderEndName to last text item of folderName
    set brandName to first text item of folderName
    set my text item delimiters to "_PSD"
    set weekNumber to first text item of FolderEndName
    set my text item delimiters to oldTID
    After running this I have enough information to create folders in multiple locations (I need to know where they are so that Photoshop can later save into those multiple locations).
    So I need the following folders created
    Locally
    Hal 9000:Users:matthew:Pictures:2011-2012:"WK" + weekNumber
    Hal 9000:Users:matthew:Pictures:2011-2012:"WK" + weekNumber: brandName
    Hal 9000:Users:matthew:Pictures:2011-2012:"WK" + weekNumber: brandName: brandName + "_WK" + weekNumber + "_LR" --(Set path for Later)PathA
    Hal 9000:Users:matthew:Pictures:2011-2012:"WK" + weekNumber: brandName: brandName + "_WK" + weekNumber + "_HR"--(Set path for Later)PathB
    Network
    Volumes:GEN:Brands:Zoom:Brands - Zoom:Upload Photos:2012:"Week" + weekNumber
    Volumes:GEN:Brands:Zoom:Brands - Zoom:Upload Photos:2012:"Week" + weekNumber:brandName + "_WK" + weekNumber + "_LR"  --(Set path for Later)PathC
    Volumes:GEN:Website_Images --(no need to create folder just set path)PathD
    FTP (still mounted as a normal volume, so just like another network location)
    Volumes:impulse:"Week" + weekNumber
    Volumes:impulse:"Week" + weekNumber:Brand
    Volumes:impulse:"Week" + weekNumber:Brand:brandName + "_WK" + weekNumber + "_LR"  --(Set path for Later)PathE
    Volumes:impulse:"Week" + weekNumber:Brand:brandName + "_WK" + weekNumber + "_HR"  --(Set path for Later)PathF
    I like to think that is the end of Part 1.
    Part 2
    Take the images (PSDs) from the folders relevant to each brand, then possibly run more AppleScript that opens, flattens and then saves each image in the locations above.
    For example...
    An image in folder DIVA_WK30_PSD will then run an AppleScript in Photoshop, let's call it DivaProcessImages, within which we save to PathA, PathB, PathC, PathD, PathE and PathF; the folder path for C should therefore look like this:
    Volumes:GEN:Brands:Zoom:Brands - Zoom:Upload Photos:2012:Week30:DIVA_WK30_LR, and of course the image is saved under its original filename.
    Then from the next folder:
    An image in folder Free_WK30_PSD will then run an AppleScript in Photoshop, let's call it FreeProcessImages, within which we save to PathA, PathB, PathC, PathD, PathE and PathF; the folder path for C should therefore look like this:
    Volumes:GEN:Brands:Zoom:Brands - Zoom:Upload Photos:2012:Week30:Free_WK30_LR, and of course the image is saved under its original filename.
    The Photoshop AppleScript will, I hope, be easier, as it should be a clearer step-by-step process without any ifs and buts.
    Now for the coffee!!

    Hi,
    MattJayC wrote:
    Now to the other part: where each folder was created (including those that already existed), how do I set them as variables?
    For example:
    set localBrandFolder_High_Res to my getFolderPath(brandName & "_WK" & weekNumber & "_HR", localBrandFolder)
    This line was used to create more than one folder as it ran through the folders on the desktop. The next part is that I will need to reference them in order to save files into them.
    You can use a list of records.
    Examples:
    If you want the path of localBrandFolder_High_Res for "Diva", and "Diva" is the second folder of the Desktop, you get the path with: localBrandFolder_High_Res of record 2 of myRecords
    If you want the path of localWeekFolder in the first folder of the Desktop, you get the path with: localWeekFolder of record 1 of myRecords
    Here is the script
    set myRecords to {}
    set dtF to paragraphs of (do shell script "ls -F ~/Desktop | grep '/' | cut -d'/' -f1")
    repeat with i from 1 to number of items in dtF
        set this_item to item i of dtF
        if this_item does not start with "2_" and this_item does not start with "Hot" then
            try
                set folderPath to this_item
                set {oldTID, my text item delimiters} to {my text item delimiters, ":"}
                set folderName to last text item of folderPath
                set my text item delimiters to "_WK"
                set FolderEndName to last text item of folderName
                set brandName to first text item of folderName
                set my text item delimiters to "_PSD"
                set weekNumber to first text item of FolderEndName
                set my text item delimiters to oldTID
            end try
            try
                set this_local_folder to "Hal 9000:Users:matthew:Pictures:2011-2012"
                set var1 to my getFolderPath("WK" & weekNumber, this_local_folder)
                set var2 to my getFolderPath(brandName, var1)
                set var3 to my getFolderPath(brandName & "_WK" & weekNumber & "_LR", var2)
                set var4 to my getFolderPath(brandName & "_WK" & weekNumber & "_HR", var2)
                --set up names for destination folders and create them over the network, including an already existing folder
                set this_Network_folder to "DCKGEN:Brands:Zoom:Brand - Zoom:Upload Photos:2012:"
                set var5 to my getFolderPath("WK" & weekNumber, this_Network_folder)
                set var6 to my getFolderPath(brandName, var5)
                set var7 to my getFolderPath(brandName & "_WK" & weekNumber & "_LR", var6)
                set website_images to "DCKGEN:Website_Images:"
                --set up names for destination folders and create them over the network for FTP collection (based on a mounted drive)
                set this_ftp_folder to "Impulse:"
                set var8 to my getFolderPath("Week" & weekNumber, this_ftp_folder)
                set var9 to my getFolderPath(brandName, var8)
                set var10 to my getFolderPath(brandName & "_WK" & weekNumber & "_LR", var9)
                set var11 to my getFolderPath(brandName & "_WK" & weekNumber & "_HR", var9)
                set end of myRecords to ¬
      {localWeekFolder:var1, localBrandFolder:var2, localBrandFolder_Low_Res:var3, localBrandFolder_High_Res:var4, networkWeekFolder:var5, networkBrandFolder:var6, networkBrandFolder_Low_Res:var7, ftpWeekFolder:var8, ftpBrandFolder:var9, ftpBrandFolder_Low_Res:var10, ftpBrandFolder_High_Res:var11}
            end try
        end if
    end repeat
    localBrandFolder_High_Res of record 2 of myRecords -- get full path of localBrandFolder_High_Res in the second folder of Desktop
    on getFolderPath(tName, folderPath)
        tell application "Finder" to tell folder folderPath
            if not (exists folder tName) then
                return (make new folder at it with properties {name:tName}) as string
            else
                return (folder tName) as string
            end if
        end tell
    end getFolderPath

  • File Adapter BPEL Process getting switched off

    The file adapter BPEL process reads a CSV file, which has a series of records in it, from /xfer/chroot/data/aramex/accountUpdate/files. In between reading the files, the BPEL process gets switched off. The snippet below is the error we found in domain.log. Can anybody please suggest what to do?
    <2010-11-25 16:22:28,025> <WARN> <PreActivation.collaxa.cube.ws> <File Adapter::Outbound>
    java.io.FileNotFoundException: /xfer/chroot/data/aramex/accountUpdate/files/VFQ-251120101_1000.csv (No such file or directory)
    at java.io.FileInputStream.open(Native Method)
    at java.io.FileInputStream.<init>(FileInputStream.java:106)
    at oracle.tip.adapter.file.FileUtil.copyFile(FileUtil.java:947)
    at oracle.tip.adapter.file.inbound.ProcessWork.defaultArchive(ProcessWork.java:2341)
    at oracle.tip.adapter.file.inbound.ProcessWork.doneProcessing(ProcessWork.java:614)
    at oracle.tip.adapter.file.inbound.ProcessWork.processMessages(ProcessWork.java:445)
    at oracle.tip.adapter.file.inbound.ProcessWork.run(ProcessWork.java:227)
    at oracle.tip.adapter.fw.jca.work.WorkerJob.go(WorkerJob.java:51)
    at oracle.tip.adapter.fw.common.ThreadPool.run(ThreadPool.java:280)
    at java.lang.Thread.run(Thread.java:619)
    <2010-11-25 16:22:28,025> <INFO> <PreActivation.collaxa.cube.ws> <File Adapter::Outbound> Processer thread calling onFatalError with exception /xfer/chroot/data/aramex/accountUpdate/files/VFQ-251120101_1000.csv (No such file or directory)
    <2010-11-25 16:22:28,025> <FATAL> <PreActivation.collaxa.cube.activation> <AdapterFramework::Inbound> [Read_ptt::Read(root)]Resource Adapter requested Process shutdown!
    <2010-11-25 16:22:28,025> <INFO> <PreActivation.collaxa.cube.activation> <AdapterFramework::Inbound> Adapter Framework instance: OraBPEL - performing endpointDeactivation for portType=Read_ptt, operation=Read
    <2010-11-25 16:22:28,025> <INFO> <PreActivation.collaxa.cube.ws> <File Adapter::Outbound> Endpoint De-activation called in adapter for endpoint : /xfer/chroot/data/aramex/accountUpdate/files/
    <2010-11-25 16:22:28,095> <WARN> <PreActivation.collaxa.cube.ws> <File Adapter::Outbound> ProcessWork::Delete failed, the operation will be retried for max of [2] times
    <2010-11-25 16:22:28,095> <WARN> <PreActivation.collaxa.cube.ws> <File Adapter::Outbound>
    ORABPEL-11042
    File deletion failed.
    File : /xfer/chroot/data/aramex/accountUpdate/files/VFQ-251120101_1000.csv as it does not exist. could not be deleted.
    Delete the file and restart server. Contact oracle support if error is not fixable.
    at oracle.tip.adapter.file.FileUtil.deleteFile(FileUtil.java:279)
    at oracle.tip.adapter.file.FileUtil.deleteFile(FileUtil.java:177)
    at oracle.tip.adapter.file.FileAgent.deleteFile(FileAgent.java:223)
    at oracle.tip.adapter.file.inbound.FileSource.deleteFile(FileSource.java:245)
    at oracle.tip.adapter.file.inbound.ProcessWork.doneProcessing(ProcessWork.java:655)
    at oracle.tip.adapter.file.inbound.ProcessWork.processMessages(ProcessWork.java:445)
    at oracle.tip.adapter.file.inbound.ProcessWork.run(ProcessWork.java:227)
    at oracle.tip.adapter.fw.jca.work.WorkerJob.go(WorkerJob.java:51)
    at oracle.tip.adapter.fw.common.ThreadPool.run(ThreadPool.java:280)
    at java.lang.Thread.run(Thread.java:619)
    <2010-11-25 16:22:28,315> <ERROR> <PreActivation.collaxa.cube> <BaseCubeSessionBean::logError> Error while invoking bean "cube delivery": Process state off.
    The process class "BulkAccountUpdateFileConsumer" (revision "1.0" ) has not been turned on. No operations on the process or any instances belonging to the process may be performed if the process is off.
    Please consult your administrator if this process has been turned off inadvertently.

    This patch is not for 10.1.3.1.
    I have provided a response on the following post:
    BPEL Process Going into Dead State Automatically.
    cheers
    James

  • File Adapter to read Zip file and send it as input to another webservice

    Hi,
    I have the below requirement:
    1. A service will generate 3 attachments and place them in a particular directory.
    2. The SOA service has to pick up those 3 files and send them as input to another custom application, which will email them.
    Design:
    1. First, SOA will create an archive file of those 3 attachments; then the file adapter will poll for that zip file in that location and send the file as a whole to the custom application.
    Query:
    Now my question: is the above design feasible? If so, how do I configure the file adapter to pass the file as input to that custom application?
    Kindly do the needful
    Thanks,
    Priya

    You can accomplish this via a Java embedding activity. Create a Java embedding that builds the zip file; this Java code is easy to implement (a sketch follows below).
    You can also do away with the unnecessary polling file adapter and use the 'Synchronous File Read' operation of the file adapter instead. For a synchronous read, you'll have to pass the zip file name, which you can easily fetch from the Java embedding activity.
    Let me know if this doesn't work.
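    A minimal sketch of the zipping step, assuming Java 7+ for try-with-resources (the class and method names, file names and paths are hypothetical; inside an actual Java embedding activity you would inline this logic rather than wrap it in a class):
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipOutputStream;

    public class AttachmentZipper {
        // Bundle the three generated attachments into one archive for the file adapter to read.
        public static void zipAttachments(String[] sourceFiles, String zipPath) throws IOException {
            byte[] buffer = new byte[4096];
            try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(zipPath))) {
                for (String source : sourceFiles) {
                    try (FileInputStream in = new FileInputStream(source)) {
                        // One zip entry per attachment, named after its source file.
                        zos.putNextEntry(new ZipEntry(new File(source).getName()));
                        int len;
                        while ((len = in.read(buffer)) > 0) {
                            zos.write(buffer, 0, len);
                        }
                        zos.closeEntry();
                    }
                }
            }
        }
    }
    The zip path produced here is the file name you would then pass to the Synchronous File Read operation.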

  • JDBC Adapter and File Adapter not processing messages

    Hi
    I noticed that messages are being delivered to the Adapter Engine and are visible in the Runtime Workbench with the status 'to be delivered', but the JDBC adapter and file adapter are not processing these messages.
    Any idea where I can find the problem?
    I was able to re-deliver the messages successfully via the JDBC adapter using the MessagingSystem GUI with the XISUPER user.
    Regards
    Chandu

    Hi,
    1. Status: TO_BE_DELIVERED
    This means the message was successfully delivered from the Integration Server's point of view and has been handed over to the Messaging System.
    TO_BE_DELIVERED is set while the message is being put into the Messaging System receive queue.
    Regards
    Agasthuri Doss

  • File adapter picking up partial files

    Hi All,
    We are facing a weird problem with the file adapter.
    We are getting XML files from the source system, and PI is supposed to pick them up and process them. Sometimes the file adapter picks up empty or partial files, i.e. it reads a file before it has been completely written to the FTP folder.
    We tried increasing the polling interval, but this did not help. Since we are taking the files from FTP, the 'Msecs' option in Additional Parameters will also not work. As we are on PI 7.0 SP13, we searched for the 'Empty-File Handling' option, with which we could make the file adapter skip 0 KB files, but we are not able to find that option anywhere.
    Please help us find a solution to this problem.
    Thanks,
    Hari.

    Hari,
    In the sender communication channel, click on Advanced. There you will see the parameter 'Msecs to Wait Before Modification Check'. Add a value of 3000; then you should be good.
    Regards,
    ---Satish

  • File Adapter not picking up the files

    Hi All,
    We have a process wherein the file adapter picks up a file from a particular location, and the file is processed thereafter. We get the files once every month. We noticed that if the file being dropped has the same name, i.e. the same nomenclature as the file dropped the previous month, the adapter does not pick it up; only after renaming the file does the adapter pick it up and process it.
    Any idea why files dropped with the same name are not being picked up?
    Thanks in Advance...!!

    Hi,
    While configuring the file adapter, cross-verify the name of the file being put in the directory location against the 'Include files with name pattern' and 'Exclude files with name pattern' settings.
    For example, if you keep *.txt, the adapter picks up .txt files with any name. Also check whether you have enabled the 'delete file once read' option for files picked up from that location (it all depends on your requirement).
    Also cross-check the schema element of the file pattern.

  • File Adapter is not polling the file

    Hi All,
    I am using the file adapter to poll for a file. When I restart the server, it picks the file up; after that, when I drop a new file, the file adapter does not pick it up.
    Can anyone tell me what the problem is?
    I am using SOA SUITE 10.1.3.4
    Thanks in Advance.

    First, clusterGroupId is a unique value. Do not use the same clusterGroupId across different adapters, even in different processes.
    Once clusterGroupId is defined, multiple activations of the same adapter activation agent are detected implicitly and automatically by all the instances of the adapter framework active in that cluster. The cluster allows only one node's activation to start reading the files/messages. The adapter framework instances choose one node among the many, at random, to assume the primary activation responsibility. The other activations (instances) in the cluster initialize to a hot-standby state, without actually invoking EndpointActivation on the JCA resource adapter.
    If a primary activation at some point becomes unresponsive, is deactivated manually, or crashes/exits, then any one of the remaining adapter framework members of the cluster group will immediately detect this and reassign the primary activation responsibility to one of the activation agents standing by.
    This feature uses JGroups underneath, hence the clusterGroupId property. It requires cluster and JGroups configuration in collaxa-config.xml and jgroups-protocol.xml: all BPEL instances participating in the cluster must share the same cluster name in collaxa-config.xml, and the same multicast address and port in jgroups-protocol.xml.
    As for this particular issue, I face the same thing across different environments and am still searching for the exact reason. I believe it happens when the primary activation agent crashes and the hand-over to the other node does not happen properly through JGroups.
    Nirav

  • File adapter reading while the file is still being written....

    Hello BPEL Gurus,
    I had a quick question about the BPEL/ESB file adapter. Does the BPEL file adapter start reading a huge file while it is still being written, or does it wait until the writing process is completed and the file is complete?
    Any response is highly appreciated.
    Thanks.
    SM

    It goes like this: at every polling interval, the adapter looks into the directory for files with the specified pattern (e.g. *.csv, MYCOMPANY*.txt) that meet the specified conditions, e.g. minimum file age. This means that if 2 files are available with matching criteria, both will be picked up and processed simultaneously in two different BPEL instances, with no specific order of execution; you will, however, see the instances appear in the BPEL Console with a small delay, depending on file size.
    Perhaps you can elaborate on your scenario further. Do you know the names of the files that are to be picked up from the folder? If so, you may use the synchronous read option: if you are using version 10.1.3.4, you can specify the file name before the file adapter makes a synchronous read into the given directory.

  • NFS File adapter doesn't generate log file

    Hello!
    We have a problem with a file adapter: when the adapter picks up a file, it does not archive the file into the archive directory.
    We have:
    Processing mode: Archive.
    Archive directory : /XIcom/INT181_GECAT/LOG
    This directory was created correctly.
    Can someone help me? Thanks.
    Best regards.

    Hi,
    >>> NFS File adapter doesn't generate log file
    Do you mean that the processed files are not archived only when the file adapter is set to the NFS file system? Did you try the same thing with the file adapter set to FTP?
    If you face the same issue with the file adapter in FTP mode as well, then there is some issue with access to the folders. Please check this.
    Regards,
    Nanda
