HAFileAdapter-based OSB service processing the first file twice in a 2-node cluster environment

Hi,
I am trying out OSB with the HA File Adapter and came across a strange issue: an OSB service designed to pick up "*.csv" files and process their records one at a time, recursively, is processing the same file on both nodes. Details are below.
The reason I say the file is processed by the 2 cluster nodes independently is that I can see 2 separate threads picking up the same file at the same time in the log files, one on each node. I had enabled the debugging properties on the managed servers.
----------------------------------------------------- Start Log Snippet -----------------------------------------------------
Node 1 : ExecuteThread: '51'
####<Oct 17, 2012 3:21:35 PM EST> <Debug> <AlsbJcaFrameworkAdapter> <pcalin03> <WLS_OSB2> <[ACTIVE] ExecuteThread: '51' for queue: 'weblogic.kernel.Default (self-tuning)'> <oracle> <> <70d9e8b790dff210:-480a1b4a:13a6c31d21b:-8000-000000000023fd0a> <1350447695272> <BEA-000000> <Translated inbound batch index "1" of file {CCISU_500.csv} sucessfully.>
####<Oct 17, 2012 3:21:35 PM EST> <Debug> <AlsbJcaFrameworkAdapter> <pcalin03> <WLS_OSB2> <[ACTIVE] ExecuteThread: '51' for queue: 'weblogic.kernel.Default (self-tuning)'> <oracle> <> <70d9e8b790dff210:-480a1b4a:13a6c31d21b:-8000-000000000023fd0a> <1350447695272> <BEA-000000> <Publisher::publishMessage called with primaryKey=[Qz6n0UOYTQ1oJ2QeawJThlv-jKbiLeH1Bkko5wHl1Lg.]>
####<Oct 17, 2012 3:21:35 PM EST> <Debug> <AlsbJcaFrameworkAdapter> <pcalin03> <WLS_OSB2> <[ACTIVE] ExecuteThread: '51' for queue: 'weblogic.kernel.Default (self-tuning)'> <oracle> <> <70d9e8b790dff210:-480a1b4a:13a6c31d21b:-8000-000000000023fd0a> <1350447695272> <BEA-000000> <Sending batch to Adapter Framework for posting to BPEL engine: {
batchId=_-H9OvOxmH30ZcxGkXWAK8mVTcajl3SHsnnVKxfsEBU., batchIndex=1, publishSize=1
}>
####<Oct 17, 2012 3:21:39 PM EST> <Debug> <AlsbJcaFrameworkAdapter> <pcalin01> <WLS_OSB1> <[ACTIVE] ExecuteThread: '53' for queue: 'weblogic.kernel.Default (self-tuning)'> <oracle> <> <7dca6b7cd27159da:58554710:13a6c24a0f9:-8000-000000000001b722> <1350447699920> <BEA-000000> <Copying file :/FileAdapters/Input/CCISU_500.csv to user-defined archive directory for processed files :/FileAdapters/Archive>
####<Oct 17, 2012 3:21:39 PM EST> <Debug> <AlsbJcaFrameworkAdapter> <pcalin01> <WLS_OSB1> <[ACTIVE] ExecuteThread: '53' for queue: 'weblogic.kernel.Default (self-tuning)'> <oracle> <> <7dca6b7cd27159da:58554710:13a6c24a0f9:-8000-000000000001b722> <1350447699928> <BEA-000000> <Deleting/Soft deleting file : /FileAdapters/Input/CCISU_500.csv>
####<Oct 17, 2012 3:21:39 PM EST> <Debug> <AlsbJcaFrameworkAdapter> <pcalin01> <WLS_OSB1> <[ACTIVE] ExecuteThread: '53' for queue: 'weblogic.kernel.Default (self-tuning)'> <oracle> <> <7dca6b7cd27159da:58554710:13a6c24a0f9:-8000-000000000001b722> <1350447699928> <BEA-000000> <Deleting file : CCISU_500.csv>
####<Oct 17, 2012 3:21:39 PM EST> <Debug> <AlsbJcaFrameworkAdapter> <pcalin01> <WLS_OSB1> <[ACTIVE] ExecuteThread: '53' for queue: 'weblogic.kernel.Default (self-tuning)'> <oracle> <> <7dca6b7cd27159da:58554710:13a6c24a0f9:-8000-000000000001b722> <1350447699929> <BEA-000000> <Deleted file : true>
####<Oct 17, 2012 3:21:39 PM EST> <Debug> <AlsbJcaFrameworkAdapter> <pcalin01> <WLS_OSB1> <[ACTIVE] ExecuteThread: '53' for queue: 'weblogic.kernel.Default (self-tuning)'> <oracle> <> <7dca6b7cd27159da:58554710:13a6c24a0f9:-8000-000000000001b722> <1350447699929> <BEA-000000> <Poller has reached max raise size, will not raise anymore>
Node 2 : ExecuteThread: '53'
####<Oct 17, 2012 3:21:35 PM EST> <Debug> <AlsbJcaFrameworkAdapter> <pcalin01> <WLS_OSB1> <[ACTIVE] ExecuteThread: '53' for queue: 'weblogic.kernel.Default (self-tuning)'> <oracle> <> <7dca6b7cd27159da:58554710:13a6c24a0f9:-8000-000000000001b722> <1350447695186> <BEA-000000> <Translated inbound batch index "1" of file {CCISU_500.csv} sucessfully.>
####<Oct 17, 2012 3:21:35 PM EST> <Debug> <AlsbJcaFrameworkAdapter> <pcalin01> <WLS_OSB1> <[ACTIVE] ExecuteThread: '53' for queue: 'weblogic.kernel.Default (self-tuning)'> <oracle> <> <7dca6b7cd27159da:58554710:13a6c24a0f9:-8000-000000000001b722> <1350447695186> <BEA-000000> <Publisher::publishMessage called with primaryKey=[ODfzwSUr85GeHkzscyhXrh2URW1FEAposlikUkdTZL8.]>
####<Oct 17, 2012 3:21:35 PM EST> <Debug> <AlsbJcaFrameworkAdapter> <pcalin01> <WLS_OSB1> <[ACTIVE] ExecuteThread: '53' for queue: 'weblogic.kernel.Default (self-tuning)'> <oracle> <> <7dca6b7cd27159da:58554710:13a6c24a0f9:-8000-000000000001b722> <1350447695186> <BEA-000000> <Sending batch to Adapter Framework for posting to BPEL engine: {
batchId=ZkZ5HZA43QtsPZFqU2prPmxVS83NBMHcZa5n76yKC-Q., batchIndex=1, publishSize=1
}>
####<Oct 17, 2012 3:21:40 PM EST> <Info> <JCA_FRAMEWORK_AND_ADAPTER> <pcalin03> <WLS_OSB2> <[ACTIVE] ExecuteThread: '51' for queue: 'weblogic.kernel.Default (self-tuning)'> <oracle> <> <70d9e8b790dff210:-480a1b4a:13a6c31d21b:-8000-000000000023fd0a> <1350447700939> <BEA-000000> <Since file could not be copied to specified archive directory, file : CCISU_500.csv_CbOXCcnMsRq44Slx2jiJrbM5vvgER0M_cYJsXlmvzWU= is being copied to a default archive directory :/oracle/ofmw3u/admin/ofmw3u_domain/mserver/ofmw3u_domain/fileftp/defaultArchive/>
####<Oct 17, 2012 3:21:40 PM EST> <Error> <JCA_FRAMEWORK_AND_ADAPTER> <pcalin03> <WLS_OSB2> <[ACTIVE] ExecuteThread: '51' for queue: 'weblogic.kernel.Default (self-tuning)'> <oracle> <> <70d9e8b790dff210:-480a1b4a:13a6c31d21b:-8000-000000000023fd0a> <1350447700940> <BEA-000000> <Endpoint will shutdown process to prevent data loss as file :CCISU_500.csv could not be being copied to default archive : /oracle/ofmw3u/admin/ofmw3u_domain/mserver/ofmw3u_domain/fileftp/defaultArchive/. Please ensure that there is enough free space available to copy this file.
java.io.FileNotFoundException: /FileAdapters/Input/CCISU_500.csv (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:120)
at oracle.tip.adapter.file.FileUtil.copyFile(FileUtil.java:1024)
at oracle.tip.adapter.file.inbound.PostProcessor.defaultArchive(PostProcessor.java:300)
at oracle.tip.adapter.file.inbound.PostProcessor.performArchival(PostProcessor.java:229)
at oracle.tip.adapter.file.inbound.ProcessorDelegate.process(ProcessorDelegate.java:178)
at oracle.tip.adapter.file.inbound.PollWork.publishFile(PollWork.java:1574)
at oracle.tip.adapter.file.inbound.PollWork.processFilesInSameThread(PollWork.java:1001)
at oracle.tip.adapter.file.inbound.PollWork.run(PollWork.java:335)
at weblogic.work.ContextWrap.run(ContextWrap.java:41)
at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
>
####<Oct 17, 2012 3:21:40 PM EST> <Error> <JCA_FRAMEWORK_AND_ADAPTER> <pcalin03> <WLS_OSB2> <[ACTIVE] ExecuteThread: '51' for queue: 'weblogic.kernel.Default (self-tuning)'> <oracle> <> <70d9e8b790dff210:-480a1b4a:13a6c31d21b:-8000-000000000023fd0a> <1350447700940> <BEA-000000> <ProcessWork::run Fatal exception caught
java.io.FileNotFoundException: /FileAdapters/Input/CCISU_500.csv (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:120)
at oracle.tip.adapter.file.FileUtil.copyFile(FileUtil.java:1024)
at oracle.tip.adapter.file.inbound.PostProcessor.defaultArchive(PostProcessor.java:300)
at oracle.tip.adapter.file.inbound.PostProcessor.performArchival(PostProcessor.java:229)
at oracle.tip.adapter.file.inbound.ProcessorDelegate.process(ProcessorDelegate.java:178)
at oracle.tip.adapter.file.inbound.PollWork.publishFile(PollWork.java:1574)
at oracle.tip.adapter.file.inbound.PollWork.processFilesInSameThread(PollWork.java:1001)
at oracle.tip.adapter.file.inbound.PollWork.run(PollWork.java:335)
at weblogic.work.ContextWrap.run(ContextWrap.java:41)
at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
>
####<Oct 17, 2012 3:21:40 PM EST> <Info> <JCA_FRAMEWORK_AND_ADAPTER> <pcalin03> <WLS_OSB2> <[ACTIVE] ExecuteThread: '51' for queue: 'weblogic.kernel.Default (self-tuning)'> <oracle> <> <70d9e8b790dff210:-480a1b4a:13a6c31d21b:-8000-000000000023fd0a> <1350447700941> <BEA-000000> <Processer thread calling onFatalError with exception [java.io.FileNotFoundException: /FileAdapters/Input/CCISU_500.csv (No such file or directory)]>
####<Oct 17, 2012 3:21:40 PM EST> <Info> <JCA_FRAMEWORK_AND_ADAPTER> <pcalin03> <WLS_OSB2> <[ACTIVE] ExecuteThread: '51' for queue: 'weblogic.kernel.Default (self-tuning)'> <oracle> <> <70d9e8b790dff210:-480a1b4a:13a6c31d21b:-8000-000000000023fd0a> <1350447700941> <BEA-000000> <Unable to call onFatalError since the adapter could not find an endpoint>
####<Oct 17, 2012 3:21:40 PM EST> <Warning> <JCA_FRAMEWORK_AND_ADAPTER> <pcalin03> <WLS_OSB2> <[ACTIVE] ExecuteThread: '51' for queue: 'weblogic.kernel.Default (self-tuning)'> <oracle> <> <70d9e8b790dff210:-480a1b4a:13a6c31d21b:-8000-000000000023fd0a> <1350447700941> <BEA-000000> <Poller::processFilesInSameThread will abort as the endpoint has been released>
----------------------------------------------------- End Log Snippet -----------------------------------------------------
----------------------------------------------------- Start Questions to the Gurus Out There -----------------------------------------------------
1. Is there any property to make sure that only one activation agent is active at any time in a cluster environment?
2. Does the JCA property <property name="SingleThreadModel" value="true"/> hold good when used in OSB, or is it only supported in a composite?
3. Do both nodes of the cluster poll independently? I have seen in the logs that node 1 polls and complains about the time-to-live factor while the other node picks the file up before node 1 completes another polling cycle.
4. In the log above you can see that node 1 processed all the records 1 sec earlier than node 2 in a race condition, and as a result node 2 complained about a file-not-found exception.
Any help will be appreciated.
----------------------------------------------------- End Questions to the Gurus Out There -----------------------------------------------------
----------------------------------------------------- Start Details -----------------------------------------------------
Protocol : jca
Endpoint URI : jca://eis/FileAdapter
Get All Headers : No Headers
JCA:
<adapter-config name="RequestHandler" adapter="File Adapter" wsdlLocation="RequestHandler.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
  <connection-factory location="eis/HAFileAdapter" UIincludeWildcard="*.csv"/>
  <endpoint-activation portType="Read_ptt" operation="Read">
    <activation-spec className="oracle.tip.adapter.file.inbound.ScalableFileActivationSpec">
      <property name="DeleteFile" value="true"/>
      <property name="MinimumAge" value="120"/>
      <property name="PhysicalDirectory" value="/FileAdapters/Input"/>
      <property name="PhysicalArchiveDirectory" value="/FileAdapters/Archive"/>
      <property name="PhysicalErrorArchiveDirectory" value="/FileAdapters/Error"/>
      <property name="Recursive" value="true"/>
      <property name="PublishSize" value="1"/>
      <property name="PollingFrequency" value="300"/>
      <property name="MaxRaiseSize" value="1"/>
      <property name="SingleThreadModel" value="true"/>
      <property name="IncludeFiles" value=".*\.csv"/>
      <property name="UseHeaders" value="false"/>
    </activation-spec>
  </endpoint-activation>
</adapter-config>
----------------------------------------------------- End Details -----------------------------------------------------

You need to define the property below in the composite.xml file of your BPEL. This allows only one adapter to be active, with the other passive, for picking up the file.
<activationAgents>
  <activationAgent className="oracle.tip.adapter.fw.agent.jca.JCAActivationAgent" partnerLink="PickupFile/FTPConsumer">
    <property name="clusterGroupId">BPELProcessNameCluster</property>
    <property name="portType">FTP/FileConsumer_Message_ptt</property>
  </activationAgent>
</activationAgents>
Thanks,
Vijay
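
In OSB there is no composite.xml, so the activationAgent/clusterGroupId approach above does not directly apply. Based on the Oracle File Adapter HA documentation (an assumption on my part, not something confirmed in this thread), the OSB-side equivalent is to make both nodes coordinate through the eis/HAFileAdapter connection factory of the FileAdapter deployment: point controlDir at a directory on shared storage and set the inbound data source so the pollers serialize through the FILEADAPTER_MUTEX table. A minimal weblogic-ra.xml fragment as a sketch; the control path and data source JNDI name are illustrative:

<connection-instance>
    <jndi-name>eis/HAFileAdapter</jndi-name>
    <connection-properties>
        <properties>
            <!-- control files must live on storage visible to every node -->
            <property>
                <name>controlDir</name>
                <value>/FileAdapters/control</value>
            </property>
            <!-- data source backing the FILEADAPTER_MUTEX coordination table -->
            <property>
                <name>inboundDataSource</name>
                <value>jdbc/SOADataSource</value>
            </property>
        </properties>
    </connection-properties>
</connection-instance>

Note also that the proxy above reports Endpoint URI jca://eis/FileAdapter while the .jca file says location="eis/HAFileAdapter"; if the endpoint actually binds to the plain eis/FileAdapter factory, each server polls on its own and the race shown in the log is exactly what one would expect.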

Similar Messages

  • HA File Adapter still processing same file twice

    We are using the eis/HAFileAdapterMSSQL connection factory to watch for ZIP files dropped into a directory. We are running SOA Suite 11.1.1.6 on WebLogic 10.3 with a 2-server cluster, and we set our PollingFrequency to 120 seconds.
    When our ZIP files are small enough that our BPEL process (which does some unzipping and merging of the files therein) completes within those 120 seconds, everything is fine: one, and only one, node in our cluster processes the file.
    But when the ZIP files are large and our BPEL process takes 30 minutes to complete (i.e. longer than the 120-second polling frequency), we see a new instance of the BPEL process start on soa_server1 as expected, then a second process instance start on soa_server2 120 seconds later.
    It's almost as if, when the entire BPEL process kicked off by the HAFileAdapterMSSQL does not complete within our polling frequency, the file is not blocked from being picked up and processed by other servers in our cluster. Is there a way to "lock" the file to the first instance of the HAFileAdapterMSSQL that picks it up until it's completely finished with it?
    Has anyone encountered this?
    Thanks,
    Michael

    You need to define the property below in the composite.xml file of your BPEL. This allows only one adapter to be active, with the other passive, for picking up the file.
    <activationAgents>
      <activationAgent className="oracle.tip.adapter.fw.agent.jca.JCAActivationAgent" partnerLink="PickupFile/FTPConsumer">
        <property name="clusterGroupId">BPELProcessNameCluster</property>
        <property name="portType">FTP/FileConsumer_Message_ptt</property>
      </activationAgent>
    </activationAgents>
    Thanks,
    Vijay

  • XI file sender: filename validation to stop processing a file twice

    Hello folks,
    I have a file sender adapter for a text file whose naming convention includes a date, e.g. orders_YYYYMMDD.txt. We have a business requirement to ensure that we don't post the same file more than once. The file can be huge, so it's not really an option to pick up the file contents in one message.
    Options I have tried:
    - I do not know of a setting in the file adapter to achieve this.
    - Write a Perl script to read the filename and translate [hex-encode] it into a message GUID, then post the file via the HTTP adapter.
      - - If the file name is not too long (a GUID is 32 hex chars), this method works well for small files that are in XML format.
      - - Text files need too much Perl coding to translate to XML, and large files that need to be split will fail on the 2nd chunk.
    Options I don't want to use:
    - Use BPM to call an RFC/proxy that validates the filename - this would force me to read the whole file, or I would have to implement a 'pipe' in BPM to ensure EOIO processing. (We have this elsewhere, but it's not good for performance.)
    - Actually, I don't want to manage this in ABAP/Z-tables at all if possible.
    I am about to start work on a module processor to mangle the GUID in the file adapter, similar to the HTTP method above (I have no idea yet whether this will be possible)...
    Can anyone recommend another method to achieve this?

    Hi Derek,
    When the file is first picked up, archive it to another directory.
    That way the same file will not be processed twice.
    Rgds,
    Kumar

  • FTP Adapter duplicate processing of files in Active-Active cluster env

    Hi All,
    We have a single active-active cluster environment and we have applied all the HA configurations for inbound and outbound operations (except outboundDataSourceLocal). But we faced a situation where the FTP adapter processed a file twice. This happened when there was a failover of the Oracle database in which the SOA-INFRA schema is created.
    Please let me know if anyone faced similar issue and what would be the resolution for this duplication issue.
    Thanks in Advance.

    Hi Vernetto,
    Thanks for your reply.
    I too had opened a ticket with oracle and they are looking into the issue.
    Can you please let me know which inbound and outbound operation configurations you specified in the outbound connection pools? Also, did you set any other specific property in the .jca file?
    One more thing: we configured outboundLockTypeForWrite to 'oracle' since we are using an Oracle database, but I couldn't see any data in the FILEADAPTER_IN or FILEADAPTER_MUTEX tables. You said you could see data in the FILEADAPTER_IN table; is there any specific property that needs to be set which I am missing?
    please suggest.
    Thanks.
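
    For reference, a sketch of the eis/HAFileAdapter outbound connection pool properties usually involved here (the property names come from the Oracle documentation; the values shown are illustrative, not this poster's actual configuration):

        <property><name>controlDir</name><value>/shared/control</value></property>
        <property><name>inboundDataSource</name><value>jdbc/SOADataSource</value></property>
        <property><name>outboundDataSource</name><value>jdbc/SOADataSource</value></property>
        <property><name>outboundLockTypeForWrite</name><value>oracle</value></property>

    As I understand it, FILEADAPTER_IN is only populated by inbound polling when the HA connection factory is actually the one bound to the endpoint, so an empty table may itself be a hint that the endpoint is still using the non-HA factory.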

  • How OSB archive the file name

    How does OSB name the archive file?
    I have used the JCA adapter to archive the file while reading it in OSB, and OSB does archive the file, but under a different, generated name that is not easily readable. Is it possible to give my own defined name when archiving the file(s)?
    Please help me !
    -Thanks

    I don't think it is possible to change the names of the archive files once OSB processes the file. I haven't found any reference to it in the JCA adapter documentation either.
    You can check with Oracle support if this is feasible.
    Thanks,
    Patrick.

  • Where I can get visio file of SAML claims-based authentication process

    Hello
    I saw the SAML claims-based authentication process flow diagram on http://msdn.microsoft.com/en-us/library/office/hh394901(v=office.14).aspx. Please let me know where I can get the Visio file for this diagram.
    Regards
    Avian

    What makes you think that such a file is available (to the public)?
    Use the graphic, or make your own.
    Scott Brickey
    MCTS, MCPD, MCITP
    www.sbrickey.com
    Strategic Data Systems - for all your SharePoint needs

  • OSB 10.3g file proxy and how to process files sequentially

    Hi all,
    I have a folder that contains some files that I need to process sequentially with a file proxy:
    /tmp/osb/msg > ls -l
    -rw-r--r-- 1 weblogic weblogic 904 Jul 22 12:02 yyy_C0001279792949924_20100722_120229_BU.yyy.CH_ASS_xxx.xml
    -rw-r--r-- 1 weblogic weblogic 917 Jul 22 12:02 yyy_C0001279792949955_20100722_120229_BU.yyy.CH_ASS_xxx.xml
    -rw-r--r-- 1 weblogic weblogic 932 Jul 22 12:02 yyy_C0001279792949986_20100722_120229_BU.yyy.CH_ASS_xxx.xml
    -rw-r--r-- 1 weblogic weblogic 858 Jul 22 12:02 yyy_C0001279792950017_20100722_120230_BU.yyy.CH_ASS_xxx.xml
    -rw-r--r-- 1 weblogic weblogic 890 Jul 22 12:02 yyy_C0001279792950033_20100722_120230_BU.yyy.CH_ASS_xxx.xml
    -rw-r--r-- 1 weblogic weblogic 790 Jul 22 12:02 yyy_C0001279792950064_20100722_120230_BU.yyy.CH_ASS_xxx.xml
    -rw-r--r-- 1 weblogic weblogic 844 Jul 22 12:02 yyy_C0001279792950096_20100722_120230_BU.yyy.CH_ASS_xxx.xml
    -rw-r--r-- 1 weblogic weblogic 937 Jul 22 12:02 yyy_C0001279792950111_20100722_120230_BU.yyy.CH_ASS_xxx.xml
    -rw-r--r-- 1 weblogic weblogic 813 Jul 22 12:02 yyy_C0001279792950142_20100722_120230_BU.yyy.CH_ASS_xxx.xml
    -rw-r--r-- 1 weblogic weblogic 963 Jul 22 12:02 yyy_C0001279792950158_20100722_120230_BU.yyy.CH_ASS_xxx.xml
    This is the list of files sorted alphabetically, which is also the order of creation time (each filename after yyy_C contains a timestamp as well as a date).
    The file proxy is configured as follows:
    Service Type : Any XML Service
    Protocol : file
    Endpoint URI : file:///tmp/osb/msg
    Get All Headers : No
    File Mask : yyy*.xml
    Polling Interval: 30
    Read Limit : 0
    Sort By Arrival : true
    Scan SubDirectories: false
    Pass By Reference : false
    Post Read Action : archive
    Stage Directory : /tmp/osb/stage
    Archive Directory : /tmp/osb/archive
    Error Directory : /tmp/osb/error
    Request encoding : utf-8
    Content Streaming : Disabled
    I used "Sort By Arrival : true" because from doc (http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/httppollertransport/index.html) I read:
    "This parameter indicates the sequence of events raised in the order of the arrival of files. The default value for this parameter is False"
    It is not clear if 'order of the arrival' means by time of the arrival or by 'alphabetical order'.
    I did a test using the above settings but I see from OSB log file that the files are processed neither by time order nor by alphabetical order:
    ####<Jul 22, 2010 12:33:24 PM CEST> <... ExecuteThread: '4' ... <BEA1-07857BB52A53B9043886> <> <1279794804450> ...########## ready to process filename: .../tmp/osb/msg/yyy_C0001279792949955_20100722_120229_BU.yyy.CH_ASS_xxx.xml
    ####<Jul 22, 2010 12:33:24 PM CEST> <... ExecuteThread: '0' ... <BEA1-078F7BB52A53B9043886> <> <1279794804454> ...########## ready to process filename: .../tmp/osb/msg/yyy_C0001279792949924_20100722_120229_BU.yyy.CH_ASS_xxx.xml
    ####<Jul 22, 2010 12:33:24 PM CEST> <... ExecuteThread: '1' ... <BEA1-07907BB52A53B9043886> <> <1279794804457> ...########## ready to process filename: .../tmp/osb/msg/yyy_C0001279792949986_20100722_120229_BU.yyy.CH_ASS_xxx.xml
    ####<Jul 22, 2010 12:33:24 PM CEST> <... ExecuteThread: '0' ... <BEA1-07917BB52A53B9043886> <> <1279794804462> ...########## ready to process filename: .../tmp/osb/msg/yyy_C0001279792950064_20100722_120230_BU.yyy.CH_ASS_xxx.xml
    ####<Jul 22, 2010 12:33:24 PM CEST> <... ExecuteThread: '1' ... <BEA1-07927BB52A53B9043886> <> <1279794804467> ...########## ready to process filename: .../tmp/osb/msg/yyy_C0001279792950017_20100722_120230_BU.yyy.CH_ASS_xxx.xml
    ####<Jul 22, 2010 12:33:24 PM CEST> <... ExecuteThread: '0' ... <BEA1-07937BB52A53B9043886> <> <1279794804607> ...########## ready to process filename: .../tmp/osb/msg/yyy_C0001279792950142_20100722_120230_BU.yyy.CH_ASS_xxx.xml
    ####<Jul 22, 2010 12:33:24 PM CEST> <... ExecuteThread: '4' ... <BEA1-07947BB52A53B9043886> <> <1279794804611> ...########## ready to process filename: .../tmp/osb/msg/yyy_C0001279792950111_20100722_120230_BU.yyy.CH_ASS_xxx.xml
    ####<Jul 22, 2010 12:33:24 PM CEST> <... ExecuteThread: '4' ... <BEA1-07957BB52A53B9043886> <> <1279794804615> ...########## ready to process filename: .../tmp/osb/msg/yyy_C0001279792950096_20100722_120230_BU.yyy.CH_ASS_xxx.xml
    ####<Jul 22, 2010 12:33:24 PM CEST> <... ExecuteThread: '4' ... <BEA1-07967BB52A53B9043886> <> <1279794804618> ...########## ready to process filename: .../tmp/osb/msg/yyy_C0001279792950033_20100722_120230_BU.yyy.CH_ASS_xxx.xml
    ####<Jul 22, 2010 12:33:24 PM CEST> <... ExecuteThread: '1' ... <BEA1-07977BB52A53B9043886> <> <1279794804630> ...########## ready to process filename: .../tmp/osb/msg/yyy_C0001279792950158_20100722_120230_BU.yyy.CH_ASS_xxx.xml
    How can I process the files sequentially?
    I think using Read Limit = 1 and Polling Interval = 1 sec doesn't work either, because:
    - if the proxy instance P1 reading file A takes more than 1 sec,
    - after 1 sec a second proxy instance P2 starts to process file B,
    - and P2 can complete its processing before P1 does...
    What do you suggest?
    Thanks in advance
    ferp

    Hi Anuj,
    even trying with
    - Read Limit 1
    - Polling Interval 1
    - Sort By Arrival true
    I see from OSB log file that the files are processed neither by time order nor by alphabetical order:
    ####<Jul 22, 2010 2:35:56 PM CEST> ... ExecuteThread: '0' ...<BEA1-07F27BB52A53B9043886> <> <1279802156519> .../tmp/osb/msg/yyy_C0001279792949955_20100722_120229_BU.yyy.CH_ASS_xxx.xml
    ####<Jul 22, 2010 2:35:58 PM CEST> ... ExecuteThread: '0' ...<BEA1-07F47BB52A53B9043886> <> <1279802158508> .../tmp/osb/msg/yyy_C0001279792949924_20100722_120229_BU.yyy.CH_ASS_xxx.xml
    ####<Jul 22, 2010 2:36:00 PM CEST> ... ExecuteThread: '1' ...<BEA1-07F67BB52A53B9043886> <> <1279802160525> .../tmp/osb/msg/yyy_C0001279792949986_20100722_120229_BU.yyy.CH_ASS_xxx.xml
    ####<Jul 22, 2010 2:36:02 PM CEST> ... ExecuteThread: '4' ...<BEA1-07F87BB52A53B9043886> <> <1279802162544> .../tmp/osb/msg/yyy_C0001279792950064_20100722_120230_BU.yyy.CH_ASS_xxx.xml
    ####<Jul 22, 2010 2:36:04 PM CEST> ... ExecuteThread: '0' ...<BEA1-07FA7BB52A53B9043886> <> <1279802164564> .../tmp/osb/msg/yyy_C0001279792950017_20100722_120230_BU.yyy.CH_ASS_xxx.xml
    ####<Jul 22, 2010 2:36:06 PM CEST> ... ExecuteThread: '1' ...<BEA1-07FC7BB52A53B9043886> <> <1279802166584> .../tmp/osb/msg/yyy_C0001279792950142_20100722_120230_BU.yyy.CH_ASS_xxx.xml
    ####<Jul 22, 2010 2:36:08 PM CEST> ... ExecuteThread: '0' ...<BEA1-07FE7BB52A53B9043886> <> <1279802168606> .../tmp/osb/msg/yyy_C0001279792950111_20100722_120230_BU.yyy.CH_ASS_xxx.xml
    ####<Jul 22, 2010 2:36:10 PM CEST> ... ExecuteThread: '4' ...<BEA1-08007BB52A53B9043886> <> <1279802170624> .../tmp/osb/msg/yyy_C0001279792950096_20100722_120230_BU.yyy.CH_ASS_xxx.xml
    ####<Jul 22, 2010 2:36:12 PM CEST> ... ExecuteThread: '0' ...<BEA1-08027BB52A53B9043886> <> <1279802172644> .../tmp/osb/msg/yyy_C0001279792950033_20100722_120230_BU.yyy.CH_ASS_xxx.xml
    ####<Jul 22, 2010 2:36:14 PM CEST> ... ExecuteThread: '4' ...<BEA1-08047BB52A53B9043886> <> <1279802174664> .../tmp/osb/msg/yyy_C0001279792950158_20100722_120230_BU.yyy.CH_ASS_xxx.xml
    As said, I think that using Read Limit = 1 and Polling Interval = 1 sec doesn't work, because:
    - if the proxy instance P1 reading file A takes more than 1 sec to process the file,
    - after 1 sec a second proxy instance P2 starts to process file B,
    - and P2 can complete its processing before P1 does...
    Do you know whether OSB, when polling, takes a file only once it recognizes that the file has been written completely? I mean, what happens if OSB wakes up while the file producer is still writing the file?
    Regards
    ferp
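
    One workaround sometimes suggested for this (my suggestion, not something from this thread) is to attach a Work Manager with a max-threads constraint of 1 as the proxy's dispatch policy, so only one file is processed at a time; note that this serializes processing but does not by itself guarantee strict filename order. A sketch of the config.xml self-tuning stanza, assuming a target server named WLS_OSB1 and illustrative names:

        <self-tuning>
            <max-threads-constraint>
                <name>MaxThreadsOne</name>
                <target>WLS_OSB1</target>
                <count>1</count>
            </max-threads-constraint>
            <work-manager>
                <name>SingleFileWM</name>
                <target>WLS_OSB1</target>
                <!-- the constraint is referenced by name -->
                <max-threads-constraint>MaxThreadsOne</max-threads-constraint>
            </work-manager>
        </self-tuning>

    The Work Manager would then be selected as the dispatch policy in the file proxy's transport configuration.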

  • MDM adpter picking up file twice.

    Hi,
    My scenario is MDM->PI->R3. When the XML files are generated into the MDM ready folder, PI picks these files up twice.
    For example, material no. A20 gets picked up twice. (This results in two messages in SXMB_MONI and two IDocs created in R3.)
    The 1st message ID is PQR,
    the 2nd message ID is ABC.
    When I check Message Monitoring -> Adapter Engine -> Message ID (PQR & ABC), both messages show me the same file name, XY.xml.
    Why is the same file picked up twice? As we know, whenever PI picks up a file, the file gets transferred into the Archive folder so that PI won't pick it up a second time.
    Please suggest.

    I never configured the MDM PI adapter, but I do think you have an option to delete the file after picking it up; if that option is there, use it so you can avoid this problem.
    Also check whether a processing-mode option is available in the MDM adapter.
    Regards,
    Raj

  • Working with WebLogic81sp2 and getting error while processing a file.

    Hi,
    I am getting an error while working with WebLogic 8.1 SP2 and a Java-based application. The application processes a file; the file gets processed, but running 'ps -ef' shows the process still running. The exception log is like:
    <26-Mar-2008 10:10:48 o'clock GMT> <Warning> <WebLogicServer> <BEA-000337> <ExecuteThread: '21' for queue: 'weblogic.kernel.Default' has been busy for "645" seconds working on the request "Http Request: /shield/xml", which is more than the configured time (StuckThreadMaxTime) of "600" seconds.>
    <01-Apr-2008 06:50:00 o'clock BST> <Error> <HTTP> <BEA-101017> <[ServletContext(id=197633402,name=/shield,context-path=/shield)] Root cause of ServletException.
    javax.servlet.jsp.JspException: Can't insert page '/jsp/layouts/mainLayout.jsp' : Broken pipe
    at org.apache.struts.taglib.tiles.InsertTag$InsertHandler.processException(Ljava.lang.Throwable;Ljava.lang.String;)V(InsertTag.java:956)
    at org.apache.struts.taglib.tiles.InsertTag$InsertHandler.doEndTag()I(InsertTag.java:884)
    at org.apache.struts.taglib.tiles.InsertTag.doEndTag()I(InsertTag.java:473)
    at jsp_servlet._jsp.__main._jspService(Ljavax.servlet.http.HttpServletRequest;Ljavax.servlet.http.HttpServletResponse;)V(main.jsp:2)
    at weblogic.servlet.jsp.JspBase.service(Ljavax.servlet.ServletRequest;Ljavax.servlet.ServletResponse;)V(JspBase.java:33)
    at weblogic.servlet.internal.ServletStubImpl$ServletInvocationAction.run()Ljava.lang.Object;(ServletStubImpl.java:971)
    at weblogic.servlet.internal.ServletStubImpl.invokeServlet(Ljavax.servlet.ServletRequest;Ljavax.servlet.ServletResponse;Lweblogic.servlet.internal.FilterChainImpl;)V(ServletStubImpl.java:402)
    at weblogic.servlet.internal.ServletStubImpl.invokeServlet(Ljavax.servlet.ServletRequest;Ljavax.servlet.ServletResponse;)V(ServletStubImpl.java:305)
    at weblogic.servlet.internal.RequestDispatcherImpl.forward(Ljavax.servlet.ServletRequest;Ljavax.servlet.ServletResponse;)V(RequestDispatcherImpl.java:301)
    at org.apache.struts.action.RequestProcessor.doForward(Ljava.lang.String;Ljavax.servlet.http.HttpServletRequest;Ljavax.servlet.http.HttpServletResponse;)V(RequestProcessor.java:1069)
    at org.apache.struts.tiles.TilesRequestProcessor.doForward(Ljava.lang.String;Ljavax.servlet.http.HttpServletRequest;Ljavax.servlet.http.HttpServletResponse;)V(TilesRequestProcessor.java:274)
    at org.apache.struts.action.RequestProcessor.processForwardConfig(Ljavax.servlet.http.HttpServletRequest;Ljavax.servlet.http.HttpServletResponse;Lorg.apache.struts.config.ForwardConfig;)V(RequestProcessor.java:455)
    at org.apache.struts.tiles.TilesRequestProcessor.processForwardConfig(Ljavax.servlet.http.HttpServletRequest;Ljavax.servlet.http.HttpServletResponse;Lorg.apache.struts.config.ForwardConfig;)V(TilesRequestProcessor.java:320)
    at org.apache.struts.action.RequestProcessor.process(Ljavax.servlet.http.HttpServletRequest;Ljavax.servlet.http.HttpServletResponse;)V(RequestProcessor.java:279)
    at org.apache.struts.action.ActionServlet.process(Ljavax.servlet.http.HttpServletRequest;Ljavax.servlet.http.HttpServletResponse;)V(ActionServlet.java:1482)
    at org.apache.struts.action.ActionServlet.doGet(Ljavax.servlet.http.HttpServletRequest;Ljavax.servlet.http.HttpServletResponse;)V(ActionServlet.java:507)
    Please help me!

    Not sure why you want to replace, since the response of the proxy would retain the request body by default.
    If you have stored the opaque element in a variable ($var_opaque), then you can do the following.
    XPath : .
    In variable : body
    Expression : $var_opaque
    Check - "Replace node content"

  • A failure occurred while the server was processing report file

    Hi All,
    I keep getting this error while opening a report in InfoView: "A failure occurred while the server was processing report file".
    I checked the Event Viewer; there I have a lot of errors with the same message, and the source is "BusinessObject_crproc".
    Please advise me what I have to do to resolve the issue.
    Environment: BOXI3.1, CR2008, Windows Server 2003
    Thanks
    Sudharsan.

    We got this error message on a report that had 5 subreports, 3 of which were based on stored procedures. The report was running fine in our Dev environment and in the CR developer, but not when we published it to another environment. The problem was caused by the stored procedures having been changed in Dev (so that they ran correctly) without those changes being released to the other environment. Once the scripts were run to update the stored procedures, the report ran successfully. So it appears the problem was that the stored procedures the subreports were using were failing, but we only got the RCIRAS0546 error message.

  • Aim to process all files in folders on desktop to run through photoshop and save in multiple locations

    The aim is to process all files in folders on the desktop, run them through Photoshop, and save the results in multiple locations.
    Part one:
    Gather information from the desktop to get brand names and week numbers from the folders, excluding folders on the desktop beginning with "2" or "Hot".
    I am not sure about getting the list of folders, but I have got this bit to work with:
    set folderPath to "Hal 9000:Users:matthew:Desktop:DIVA_WK30_PSD" --<<this would be gained from the items on the desktop
    set {oldTID, my text item delimiters} to {my text item delimiters, ":"}
    set folderName to last text item of folderPath
    set my text item delimiters to "_WK"
    set FolderEndName to last text item of folderName
    set brandName to first text item of folderName
    set my text item delimiters to "_PSD"
    set weekNumber to first text item of FolderEndName
    set my text item delimiters to oldTID
    After running this I have enough information to create folders in multiple locations. (I need to know where they are so that Photoshop can later save files into those multiple locations.)
    So I need the following folders created
    Locally
    Hal 9000:Users:matthew:Pictures:2011-2012:"WK" + weekNumber
    Hal 9000:Users:matthew:Pictures:2011-2012:"WK" + weekNumber: brandName
    Hal 9000:Users:matthew:Pictures:2011-2012:"WK" + weekNumber: brandName: brandName + "_WK" + weekNumber + "_LR" --(Set path for Later)PathA
    Hal 9000:Users:matthew:Pictures:2011-2012:"WK" + weekNumber: brandName: brandName + "_WK" + weekNumber + "_HR"--(Set path for Later)PathB
    Network
    Volumes:GEN:Brands:Zoom:Brands - Zoom:Upload Photos:2012:"Week" + weekNumber
    Volumes:GEN:Brands:Zoom:Brands - Zoom:Upload Photos:2012:"Week" + weekNumber:brandName + "_WK" + weekNumber + "_LR"  --(Set path for Later)PathC
    Volumes:GEN:Website_Images --(no need to create folder just set path)PathD
    FTP (Still as a normal Volume) So like another Network
    Volumes:impulse:"Week" + weekNumber
    Volumes:impulse:"Week" + weekNumber:Brand
    Volumes:impulse:"Week" + weekNumber:Brand:brandName + "_WK" + weekNumber + "_LR"  --(Set path for Later)PathE
    Volumes:impulse:"Week" + weekNumber:Brand:brandName + "_WK" + weekNumber + "_HR"  --(Set path for Later)PathF
    I like to think that is the end of Part 1.
    Part 2
    Take the images (PSDs) from the folders relevant to each brand, then possibly run more AppleScript that opens, flattens, and then saves each image in the locations above.
    For example….
    An image in folder DIVA_WK30_PSD will then run an AppleScript in Photoshop, let's call it DivaProcessImages, within which we then save to PathA, PathB, PathC, PathD, PathE, PathF; the folder path of C should therefore look like this:
    Volumes:GEN:Brands:Zoom:Brands - Zoom:Upload Photos:2012:Week30:DIVA_WK30_LR, and of course save the image under its original filename.
    Then from the next folder:
    An image in folder Free_WK30_PSD will then run an AppleScript in Photoshop, let's call it FreeProcessImages, within which we then save to PathA, PathB, PathC, PathD, PathE, PathF; the folder path of C should therefore look like this:
    Volumes:GEN:Brands:Zoom:Brands - Zoom:Upload Photos:2012:Week30:Free_WK30_LR, and of course save the image under its original filename.
    The photoshop applescript i'm hoping will be easier as it should be a clearer step by step process without any if's and but's
    Now for the coffee!!

    Hi,
    MattJayC wrote:
    Now to the other part: where each folder was created (and those that already existed), how do I set them as variables?
    For example,
    set localBrandFolder_High_Res to my getFolderPath(brandName & "_WK" & weekNumber & "_HR", localBrandFolder)
    This line was used to create more than one folder as it ran through the folders on the desktop. The next part is that I will need to reference them to save files to them.
    You can use records.
    Examples:
    If you want the path of localBrandFolder_High_Res of "Diva", and "Diva" is the second folder of the Desktop, you get the path with this: localBrandFolder_High_Res of record 2 of myRecords
    If you want the path of localWeekFolder in the first folder of the Desktop, you get the path with this: localWeekFolder of record 1 of myRecords
    Here is the script:
    set myRecords to {}
    set dtF to paragraphs of (do shell script "ls -F ~/Desktop | grep '/' | cut -d'/' -f1")
    repeat with i from 1 to number of items in dtF
        set this_item to item i of dtF
        if this_item does not start with "2_" and this_item does not start with "Hot" then
            try
                set folderPath to this_item
                set {oldTID, my text item delimiters} to {my text item delimiters, ":"}
                set folderName to last text item of folderPath
                set my text item delimiters to "_WK"
                set FolderEndName to last text item of folderName
                set brandName to first text item of folderName
                set my text item delimiters to "_PSD"
                set weekNumber to first text item of FolderEndName
                set my text item delimiters to oldTID
            end try
            try
                set this_local_folder to "Hal 9000:Users:matthew:Pictures:2011-2012"
                set var1 to my getFolderPath("WK" & weekNumber, this_local_folder)
                set var2 to my getFolderPath(brandName, var1)
                set var3 to my getFolderPath(brandName & "_WK" & weekNumber & "_LR", var2)
                set var4 to my getFolderPath(brandName & "_WK" & weekNumber & "_HR", var2)
                --set up names for destination folders and create them over the network, including an already existing folder
                set this_Network_folder to "DCKGEN:Brands:Zoom:Brand - Zoom:Upload Photos:2012:"
                set var5 to my getFolderPath("WK" & weekNumber, this_Network_folder)
                set var6 to my getFolderPath(brandName, var5)
                set var7 to my getFolderPath(brandName & "_WK" & weekNumber & "_LR", var6)
                set website_images to "DCKGEN:Website_Images:"
                --set up names for destination folders and create them over the network for FTP collection (based on a mounted drive)
                set this_ftp_folder to "Impulse:"
                set var8 to my getFolderPath("Week" & weekNumber, this_ftp_folder)
                set var9 to my getFolderPath(brandName, var8)
                set var10 to my getFolderPath(brandName & "_WK" & weekNumber & "_LR", var9)
                set var11 to my getFolderPath(brandName & "_WK" & weekNumber & "_HR", var9)
                set end of myRecords to ¬
      {localWeekFolder:var1, localBrandFolder:var2, localBrandFolder_Low_Res:var3, localBrandFolder_High_Res:var4, networkWeekFolder:var5, networkBrandFolder:var6, networkBrandFolder_Low_Res:var7, ftpWeekFolder:var8, ftpBrandFolder:var9, ftpBrandFolder_Low_Res:var10, ftpBrandFolder_High_Res:var11}
            end try
        end if
    end repeat
    localBrandFolder_High_Res of record 2 of myRecords -- get full path of localBrandFolder_High_Res in the second folder of Desktop
    on getFolderPath(tName, folderPath)
        tell application "Finder" to tell folder folderPath
            if not (exists folder tName) then
                return (make new folder at it with properties {name:tName}) as string
            else
                return (folder tName) as string
            end if
        end tell
    end getFolderPath

  • Sender File Adapter picking the same file twice

    We are facing a weird issue with the file sender adapter.
    We are using the PI File Adapter (NFS) to read files from an NFS folder and process them in PI. Normally it works fine, but for one scenario we notice that it sometimes processes the same file twice before archiving, thus duplicating the financial postings.
    What we have is:
    File sender adapter - NFS, polling interval 60 secs, and processing mode Archive. The file name includes wildcards - JE_Upload*.txt
    What we are noticing is that when it picks up a file, it immediately polls again to check for another file, and sometimes the file is not yet archived, so it picks up and reprocesses the same file.
    If you see the messages below, both belong to the same file, and it picked up the same file again 12 secs after processing it the first time:
       Successful 02.11.2009 15:01:00 02.11.2009 15:01:01   APMANUAL     urn:bl:i2g:003:100
    SI_SKF_FIDOC_OB XI Message
       Successful 02.11.2009 15:00:49 02.11.2009 15:00:50   APMANUAL     urn:bl:i2g:003:100
    SI_SKF_FIDOC_OB XI Message
    Anyone seen this behavior before?

    Hi,
    Please check the script which creates the files in the source NFS folders. There is a possibility that the script is changing the file while PI is picking it up.
    When PI picks up the file the first time, it creates one message ID in the system. After that, if the script makes any change to the file without a file-name change (this need not be a data change), PI treats it as a new file and generates a new message for the same file.
    This error also typically occurs when the file adapter is not able to archive the file successfully, e.g. when a file with the same name already exists in the archive folder.
    The file adapter generates a new message ID when the file is modified (e.g. a change in its length or data) even though the file name is the same; when the file changes, the XI file adapter treats it as a new file and generates a new message ID for it.
    If the file has the same name and is not modified, the XI adapter will not generate a new message ID and will keep throwing the error until you remove the file with the same name from the archive directory.
    -Warm Regards,
    Gouri

  • Need a suggestion on how to process XLS files

    Hello.
    I need to process XLS files: compare 2 files and add lines from the second one into the first one. If some rows match, I just need to add the amounts (summarize the numbers), and if some rows in the second file are not in the first one, I need to append them to the first one.
    I need a tool suggestion. First I thought it would be easily done with Python, and I prefer Python, but after some searching I'm not so sure anymore; it still looks rather foggy how to do it.
    Please suggest something. The XLS files contain tabs (sheets), and I need to be able to read/write/modify the contents of different tabs.

    graysky wrote: If this is a one-off
    It's not, and those documents will have several thousand items each. So even one time I couldn't do it manually, plus I'll have to do it weekly, probably even more than once a week.
    The idea is this.
    Suppose we have "1.xls" with contects:
    +----+-------------+-----------+
    | ID | Name        | Amount    |
    +----+-------------+-----------+
    | 1  | apple       | 2         |
    | 2  | potato      | 3         |
    | 3  | cherry      | 10        |
    | 4  | water-melon | 1         |
    +----+-------------+-----------+
    Then we have "2.xls":
    +----+-------------+-----------+
    | ID | Name | Amount |
    +----+-------------+-----------+
    | 4 | water-melon | 1 |
    | 5 | pumpkin | 1 |
    +----+-------------+-----------+
    And the script must sum them up based on ID, so the result must look like:
    +----+-------------+-----------+
    | ID | Name        | Amount    |
    +----+-------------+-----------+
    | 1  | apple       | 2         |
    | 2  | potato      | 3         |
    | 3  | cherry      | 10        |
    | 4  | water-melon | 2         |
    | 5  | pumpkin     | 1         |
    +----+-------------+-----------+
    We had 1 new element with ID 5, and it's now at the bottom; and we had 1 matching row, with ID 4, which was summed up to amount == 2.
    All this is XLS and with tabs.
    Can I do it with PHP? "R" is just too new to me at the moment and this job will need to be done soon.

  • Processing Large Files using Chunk Mode with ICO

    Hi All,
    I am trying to process large files using an ICO. I am on PI 7.3 and I am using the new PI 7.3 feature to split the input file into chunks.
    I know that we cannot use mapping while using chunk mode.
    While trying, I noticed the points below:
    1) I created a Data Type, Message Type, and interfaces in the ESR and used them in my scenario (no mapping was defined); the sender and receiver DTs were the same.
    Result: the scenario did not work. It created only one chunk file (a .tmp file) and terminated.
    2) I used a dummy interface in my scenario and it worked fine.
    So please confirm whether we should always use dummy interfaces in a scenario when using chunk mode in PI 7.3, or is there something I am missing?
    Thanks in Advance,
    - Pooja.

    Hello,
    While trying, I noticed the points below:
    1) I created a Data Type, Message Type, and interfaces in the ESR and used them in my scenario (no mapping was defined); the sender and receiver DTs were the same.
    Result: the scenario did not work. It created only one chunk file (a .tmp file) and terminated.
    2) I used a dummy interface in my scenario and it worked fine.
    So please confirm whether we should always use dummy interfaces in a scenario when using chunk mode in PI 7.3, or is there something I am missing?
    According to this blog:
    File/FTP Adapter - Large File Transfer (Chunk Mode)
    The following limitations apply to the chunk mode in File Adapter
    As per the above screenshots, the split never considers the payload. It's just a binary split. So the following limitations would apply:
    Only for File Sender to File Receiver
    No Mapping
    No Content Based Routing
    No Content Conversion
    No Custom Modules
    You are probably doing content conversion; that is why it is not working.
    Hope this helps,
    Mark

  • Processing Multiple Files for more than 100 Receive Location - File Size - 25 MB each file, file type DML

    Hi Everybody
    Please suggest.
    For one of our BizTalk interfaces, we have around 120 receive locations. We are just moving (*.dml) files from the source to the destination without doing any processing.
    We receive lots of files in different receive locations, and in a few cases the file size is around 25 MB. While moving these large files, the CPU usage varies between 10% and 90+%, and the time consumed for a single huge file is around 10 to 20 minutes. This solution was already in place and was designed by the previous vendor for moving the files to their clients. Each client has 2 receive locations and there are around 60 clients. Is there any better solution for implementing this within BizTalk or outside BizTalk? Please suggest.
    I am also looking at how to control the number of files that get picked up from a BizTalk receive location. For example, if we have, say, 1000 files in the receive location and we want to pick up only 50 files at a time (a batch of 50), is that possible? Currently it picks up all the files available in the source location, and one of the processes drops thousands of files into the source location, so we want to control the number of files picked up (or even the number of KBs). Please guide us on how we can control the number of files.

    Hi Rajeev,
    25 MB per file, 1000 files. Certainly you ought to revisit the reasons for choosing BizTalk.
    "The time consumed for this single huge file is around 10 to 20 minutes" - this is a problem.
    You could consider other file transfer options like XCopy or RoboCopy etc. if you want to transfer to another local/shared drive. Or you can consider using SSIS, which comes with many adapters to send to destination systems depending on their transfer protocol.
    But in your case, you have some of the advantages that come with BizTalk. For your scenario, you have many source systems (many receive locations), and with BizTalk it's always easier to manage these configurations: you can easily enable and disable them when a need arises, easily configure tracking, configure host instances based on load, etc. So you can consider the following design for your requirement. This design would suit you well since you're not processing the message, just passing it through from source to destination:
    1. Use a custom pipeline component in the receive locations which receives the large file.
    2. Store the received file to disk and create a small XML metadata message that contains the information about where the large file is stored.
    3. The small XML metadata message is then published into the message box DB instead of the large file. Let the metadata message also carry the same context properties as the received file.
    4. In the send port, use another custom pipeline component that processes the metadata XML file, retrieves the location on disk where the file is stored, accesses the file, and sends it to the destination.
    Read the following article on this design:
    http://www.codeproject.com/Articles/180333/Transfer-Large-Files-using-BizTalk-Send-Side
    This way you don't need to publish the whole message into the message box DB, which considerably reduces processing time and lets the host instances process more files. You still get the advantages of BizTalk and can still process large files.
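
    A hypothetical shape for that small metadata message (every element name here is illustrative; it is not a BizTalk-defined schema):

        <FileMetadata xmlns="http://example.org/file-metadata">
            <OriginalFileName>client042_orders.dml</OriginalFileName>
            <StoredPath>\\fileshare\biztalk\staging\client042_orders.dml</StoredPath>
            <FileSizeBytes>26214400</FileSizeBytes>
            <ReceivedUtc>2014-01-01T10:00:00Z</ReceivedUtc>
        </FileMetadata>

    The send-side pipeline component reads StoredPath and streams the file straight to the destination, so the 25 MB payload itself never passes through the MessageBox.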
    And regarding your question about restricting the receive location to handle a certain number of files: no, it's not possible.
    Regards,
    M.R.Ashwin Prabhu

Maybe you are looking for

  • Is there a better way to add aliases to an interface?

  • Email a Report to Customer in AR

  • XSLT Transformation ABAP to XML special treatment of empty elements

  • Importing RSS with HTML in AS3

  • How to use wildcards in REST filter for subscription items