Process large file using BPEL

My project has a requirement to process a large file (10 MB) all at once. In the project, the file adapter reads the file, then calls 5 other BPEL processes to do 10 different validations before delivering to the Oracle database. I can't use the debatching feature of the adapter because of the header and detail record validation requirement. I did some performance tuning (e.g. audit level to minimum, logging level to error, JVM size to 2 GB, etc.) as specified in the Oracle BPEL user guide. We are using a 4-CPU, 4 GB RAM IBM AIX 5L server. I observed that the Receive activity at the beginning of each process is taking a lot of time, while the other transient activities perform as expected.
Following are the statistics for the receive activity per BPEL process:
500KB: 40 Sec
3MB: 1 Hour
Because we have 5 BPEL processes, a lot of time is wasted in receive activities.
I haven't tried 10 MB so far, because of the poor performance figures for the 3 MB file.
Does anyone have any idea how to improve the performance of the initial receive activity of a BPEL process?
Thanks
-Simanchal

I believe the limit in SOA Suite is 7 MB if you want to use the full payload and perform some kind of orchestration. Otherwise you need to do some kind of debatching, which you stated will not work.
SOA Suite is not really designed for your kind of use case, as it needs to process the file in memory, and any transformation can increase the message size by 3-10 times. If you are writing to a database, why can't you read the rows one by one?
If you want to perform this kind of action, have a look at ODI (Oracle Data Integrator). I also believe that OSB (AquaLogic) can handle files up to 200 MB, so this can be an option as well, but it may require debatching.
cheers
James

Similar Messages

  • Processing Large Files using Chunk Mode with ICO

    Hi All,
    I am trying to process large files using ICO. I am on PI 7.3 and I am using a new feature of PI 7.3 to split the input file into chunks.
    And I know that we cannot use mapping while using chunk mode.
    While trying I noticed the points below:
    1) I created a Data Type, Message Type and Interfaces in ESR and used them in my scenario (no mapping was defined); sender and receiver data types were the same.
    Result: the scenario did not work. It created only one chunk file (.tmp file) and terminated.
    2) I used a dummy interface in my scenario and it worked fine.
    So, please confirm whether we should always use dummy interfaces in the scenario while using chunk mode in PI 7.3, or is there something that I am missing?
    Thanks in Advance,
    - Pooja.

    Hello,
    According to this blog:
    File/FTP Adapter - Large File Transfer (Chunk Mode)
    The following limitations apply to chunk mode in the File Adapter.
    As per the screenshots in the blog, the split never considers the payload; it is just a binary split. So the following limitations apply:
    Only for File Sender to File Receiver
    No Mapping
    No Content Based Routing
    No Content Conversion
    No Custom Modules
    You are probably doing content conversion, which is why it is not working.
    Hope this helps,
    Mark
    Edited by: Mark Dihiansan on Mar 5, 2012 12:58 PM

  • Read Large Files Using BPEL File Adapter

    Hi,
    I have a scenario where large files in text format are to be read and sent to a 3rd party. An MTOM policy has to be attached. Files of size 3 MB or less are polled, but files greater than 3 MB are not retrieved. How can I resolve the issue? Do I have to break the file up and send the data? If so, how?
    Thanks
    Ranga

    You have to use the streaming feature of the JCA file adapter to handle huge files.
    You can go through the following link:
      http://docs.oracle.com/cd/E23943_01/integration.1111/e10231/adptr_file.htm#CIAHHEBF

  • Processing large file using Debatching - SAX Exception

    Hi,
    I have a large XML file (about 20 MB) to be processed. I implemented the debatching feature, and in the file adapter I defined "publish messages in batches" as 500.
    When I run the process, I expected to see several instances in the console. But I see one instance, not in the Instances page but under Perform Manual Recovery, and nothing seems to be happening.
    Do I need to do anything here? Can anyone please help me?
    Thanks
    -Prapoorna
    Edited by: p123 on Jun 29, 2009 3:07 PM

    The file is 20 MB.
    A sample XML file is shown below. I have several attendance_row tags inside time_and_attendance.
    <time_and_attendance>
    <attendance_row><oracle_person_id>110758</oracle_person_id>
    <absence_reason>Work Abroad</absence_reason>
    <action_type>A</action_type>
    <date>01/04/2009</date>
    <total_hours>8.6</total_hours>
    <last_update_date>16/06/2009 12:35:47</last_update_date>
    </attendance_row>
    <attendance_row><oracle_person_id>110758</oracle_person_id>
    <absence_reason></absence_reason>
    <action_type>W</action_type>
    <date>01/04/2009</date>
    <total_hours>0</total_hours>
    <last_update_date>16/06/2009 12:35:47</last_update_date>
    </attendance_row>
    <attendance_row><oracle_person_id>110758</oracle_person_id>
    <absence_reason>Work Abroad</absence_reason>
    <action_type>A</action_type>
    <date>02/04/2009</date>
    <total_hours>8.6</total_hours>
    <last_update_date>16/06/2009 12:35:47</last_update_date>
    </attendance_row>
    </time_and_attendance>
    Here is the schema file:
    <?xml version="1.0" encoding="UTF-8" ?>
    <!--This schema has been generated from a DTD. A target namespace has been added to the schema.-->
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="http://TargetNamespace.com/ReadFile"
               xmlns="http://TargetNamespace.com/ReadFile" nxsd:version="DTD"
               xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd">
      <xs:element name="oracle_person_id" type="xs:string"/>
      <xs:element name="total_hours" type="xs:string"/>
      <xs:element name="action_type" type="xs:string"/>
      <xs:element name="last_update_date" type="xs:string"/>
      <xs:element name="absence_reason" type="xs:string"/>
      <xs:element name="attendance_row">
        <xs:complexType>
          <xs:sequence>
            <xs:element ref="oracle_person_id"/>
            <xs:element ref="absence_reason"/>
            <xs:element ref="action_type"/>
            <xs:element ref="date"/>
            <xs:element ref="total_hours"/>
            <xs:element ref="last_update_date"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
      <xs:element name="date" type="xs:string"/>
      <xs:element name="time_and_attendance">
        <xs:complexType>
          <xs:sequence>
            <xs:element maxOccurs="unbounded" ref="attendance_row"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>
    Thanks
    -Prapoorna
    Edited by: p123 on Jun 29, 2009 3:49 PM
    Edited by: p123 on Jun 29, 2009 7:23 PM
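    For reference, a minimal sketch of how inbound debatching is wired up: the "publish messages in batches" setting from the wizard ends up as a batch-size property (PublishSize) on the file adapter's activation spec. The excerpt below is a hypothetical 10.1.3-style WSDL fragment; directory, file pattern and polling values are placeholders, so compare against the activation spec your wizard actually generated.
    <!-- Hypothetical activation spec excerpt: PublishSize="500" raises 500
         records per message; PhysicalDirectory and IncludeFiles are examples. -->
    <jca:operation ActivationSpec="oracle.tip.adapter.file.inbound.FileActivationSpec"
                   PhysicalDirectory="/tmp/in"
                   IncludeFiles=".*\.xml"
                   PollingFrequency="60"
                   PublishSize="500"/>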

  • Processing large files on Mac OS X Lion

    Hi All,
    I need to process large files (a few GB) from a measurement. The data files contain lists of measured events. I process them event by event, and the result is relatively small and does not occupy much memory. The problem I am facing is that Lion "thinks" that I want to use the large data files again later and puts them into the cache (inactive memory). The inactive memory grows while reading the data files up to the point where the whole memory is full (8 GB on a MacBook Pro mid 2010) and it starts swapping a lot. That of course slows down the computer considerably, including the process that reads the data.
    If I run the "purge" command in Terminal, the inactive memory is cleared and it becomes more responsive again. The question is: is there any way to prevent Lion from pushing running programs from memory into swap for the sake of a useless hard drive cache?
    Thanks for suggestions.

    It's been a while, but I recall using the "dd" command ("man dd" for info) to copy specific portions of data from one disk, device or file to another (in 512-byte increments). You might be able to use it in a script to fetch parts of your larger file as you need them, and dd can read from and/or write to standard input/output, so it's easy to get data and store it in a temporary container like a file or even a variable.
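    For example (a sketch; block size, offsets and file names are hypothetical), this reads a 1 MB slice starting 500 MB into a file:
    dd if=bigdata.bin of=slice.bin bs=1m skip=500 count=1
    BSD dd on Mac OS X accepts bs=1m for a 1 MB block size; skip counts input blocks, so skip=500 starts 500 MB in. Omit of= to pipe the slice straight into your processing program on standard output.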
    Otherwise, if you can afford it, and you might with 8 GB of RAM, you could try to disable swapping (paging to disk) altogether and see if that helps...
    To disable paging, run the following command (in one line) in Terminal and reboot:
    sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.dynamic_pager.plist
    To re-enable paging, run the following command (in one line) in Terminal:
    sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.dynamic_pager.plist
    Hope this helps!

  • No error generated if batch process load file uses wrong naming convention

    Another interesting one...
    When using the batch processing functionality of FDM, which can be executed by either:
    - FDM Workbench (manually)
    - Hyperion FDM Task Manager (scheduled)
    - upsShell.exe (scheduled and executed from a batch script)
    ..., one has to name the data file (to be loaded) using a specific file naming convention (e.g. the "A~LOCATION~CATEGORY~PERIOD~RA.csv" format). However, if one does not name the file correctly and then tries to process the file using the batch processing functionality via any of the above three methods, FDM happily moves the file out of the OpenBatch folder and into a new folder, but the file is not loaded, as FDM does not know where to map it to (as expected). However, there are no errors in Outbox\Logs\<username>.err to inform the user, so one is none the wiser that anything has gone wrong!
    When using FDM Workbench, an error is displayed on the screen (POV - "Batch Completed With Errors, ( 1 ) Files Contained Errors"), but this is the only indication of any error. And normally, one would be scheduling the load using upsShell.exe or Hyperion FDM Task Manager anyway...
    Has anyone else noticed this, or am I doing something wrong here? :-)

    Yes, as per my original post the only feedback on any POV errors appears to be when using the FDM Workbench Batch Processing GUI.
    Regarding the "Batch Process Report in FDM" you mentioned, are you referring to Analysis | Timeline accessible via FDM web client? Unfortunately this does not appear to provide much in the way of detail or errors, only general events that occurred. I cannot locate any batch process report, other than the log output I defined when calling upsShell.exe. However, this contains no POV errors...

  • Problem while processing large files

    Hi
    I am facing a problem while processing large files.
    I have a file which is around 72 MB. It has more than 100,000 (1 lakh) records. XI is able to pick up the file if it has 30,000 records. If the file has more than 30,000 records, XI picks up the file (once it picks it up, it deletes the file) but I don't see any information under SXMB_MONI - neither error nor successful nor processing. It simply picks up and ignores the file. If I process these records separately, it works.
    How do I process this file? Why is it simply ignoring the file? How do I solve this problem?
    Thanks & Regards
    Sowmya.

    Hi,
    XI picks up the file based on the maximum processing limit as well as the memory and resource consumption of the XI server.
    Processing a 72 MB file is on the higher side. It increases the memory utilization of the XI server, which may fail to process at the maximum point.
    You should divide the file into small chunks and allow multiple instances to run. It will be faster and will not create any problems.
    Refer
    SAP Network Blog: Night Mare-Processing huge files in SAP XI
    /people/sravya.talanki2/blog/2005/11/29/night-mare-processing-huge-files-in-sap-xi
    /people/michal.krawczyk2/blog/2005/11/10/xi-the-same-filename-from-a-sender-to-a-receiver-file-adapter--sp14
    Processing huge file loads through XI
    File Limit -- please refer to SAP note: 821267 chapter 14
    File Limit
    Thanks
    swarup
    Edited by: Swarup Sawant on Jun 26, 2008 7:02 AM

  • How to invoke Bpel process  from java using 'bpel process WSDL'

    I want to call a BPEL process from Java using the BPEL WSDL.
    Could anyone point me to a URL/sample?
    Thanks
    Nagajyothy
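    For reference, a minimal client-side sketch of this idea without a generated proxy, using plain JAX-WS Dispatch (all QNames, namespaces and the endpoint URL below are hypothetical placeholders; take the real values from your deployed process's WSDL):

    import java.io.StringReader;
    import javax.xml.namespace.QName;
    import javax.xml.transform.Source;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.ws.Dispatch;
    import javax.xml.ws.Service;
    import javax.xml.ws.soap.SOAPBinding;

    public class BpelClient {
        public static void main(String[] args) {
            // Placeholder names; read the service/port QNames from the BPEL WSDL.
            QName service = new QName("http://xmlns.oracle.com/MyProcess", "myprocess_client_ep");
            QName port = new QName("http://xmlns.oracle.com/MyProcess", "MyProcess_pt");
            String url = "http://host:port/orabpel/default/MyProcess";

            Service svc = Service.create(service);
            svc.addPort(port, SOAPBinding.SOAP11HTTP_BINDING, url);
            Dispatch<Source> dispatch = svc.createDispatch(port, Source.class, Service.Mode.PAYLOAD);

            // The payload must match the process's input message schema.
            String payload = "<process xmlns=\"http://xmlns.oracle.com/MyProcess\">"
                           + "<input>hello</input></process>";
            Source response = dispatch.invoke(new StreamSource(new StringReader(payload)));
            System.out.println("Response received: " + response);
        }
    }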

    Hi Seshagiri,
    Thanks for providing links and initial steps to create a web service proxy (using JDeveloper 11g).
    I created a web service proxy and provided the needed inputs.
    When I ran the client app, the BPEL process (which has a human task) got invoked but faulted with the exception below:
    Operation 'initiateTask' failed with exception 'EJB Exception: : java.lang.ExceptionInInitializerError[[
         at oracle.tip.pc.services.common.ServiceFactory.getAuthorizationServiceInstance(ServiceFactory.java:147)
         at oracle.bpel.services.workflow.task.impl.TaskService.initiateTask(TaskService.java:1159)
         at oracle.bpel.services.workflow.task.impl.TaskService.initiateTask(TaskService.java:502)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
    Please help me solve the above problem.
    Thanks
    Nagajyothy

  • Problem processing large message using dbadapter.

    I have a process which is initiated by a dbadapter fetch from a table.
    It works fine when there are few records, but when the number of records
    is more than 6000 (more than 4 MB) I get the errors below.
    The process goes to the off state after these errors.
    Does anybody have any suggestions on how to process large messages?
    <2006-08-02 11:55:25,172> <ERROR> <default.collaxa.cube> <BaseCubeSessionBean::logError> Error while invoking bean "cube delivery": Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
    <2006-08-02 11:55:36,473> <ERROR> <default.collaxa.cube> <BaseCubeSessionBean::logError> Error while invoking bean "delivery": Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
    <2006-08-02 11:55:42,689> <ERROR> <default.collaxa.cube.activation> <AdapterFramework::Inbound> [OracleDB_ptt::receive(HccIauHdrCollection)] - JCA Activation Agent was unable to perform delivery of inbound message to BPEL Process 'bpel://localhost/default/IAUProcess~1.0/' due to: Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
    <2006-08-02 11:56:22,573> <ERROR> <default.collaxa.cube.activation> <AdapterFramework::Inbound>
    com.oracle.bpel.client.ServerException: Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
         at com.collaxa.cube.engine.delivery.DeliveryHandler.initialPostAnyType(DeliveryHandler.java:327)
         at com.collaxa.cube.engine.delivery.DeliveryHandler.initialPost(DeliveryHandler.java:218)
         at com.collaxa.cube.engine.delivery.DeliveryHandler.post(DeliveryHandler.java:82)
         at com.collaxa.cube.ejb.impl.DeliveryBean.post(DeliveryBean.java:181)
         at IDeliveryBean_StatelessSessionBeanWrapper22.post(IDeliveryBean_StatelessSessionBeanWrapper22.java:1052)
         at com.oracle.bpel.client.delivery.DeliveryService.post(DeliveryService.java:161)
         at oracle.tip.adapter.fw.AdapterFrameworkListenerBase$DeliveryServiceMonitor.send(AdapterFrameworkListenerBase.java:2358)
         at oracle.tip.adapter.fw.AdapterFrameworkListenerBase.executeDeliveryServiceSend(AdapterFrameworkListenerBase.java:487)
         at oracle.tip.adapter.fw.AdapterFrameworkListenerBase.deliveryServiceSend(AdapterFrameworkListenerBase.java:545)
         at oracle.tip.adapter.fw.jca.AdapterFrameworkListenerImpl.performSingleActivation(AdapterFrameworkListenerImpl.java:746)
         at oracle.tip.adapter.fw.jca.AdapterFrameworkListenerImpl.onMessage(AdapterFrameworkListenerImpl.java:614)
         at oracle.tip.adapter.fw.jca.messageinflow.MessageEndpointImpl.onMessage(MessageEndpointImpl.java:121)
         at oracle.tip.adapter.db.InboundWork.onMessageImpl(InboundWork.java:370)
         at oracle.tip.adapter.db.InboundWork.onMessage(InboundWork.java:332)
         at oracle.tip.adapter.db.InboundWork.transactionalUnit(InboundWork.java:301)
         at oracle.tip.adapter.db.InboundWork.runOnce(InboundWork.java:255)
         at oracle.tip.adapter.db.InboundWork.run(InboundWork.java:189)
         at oracle.tip.adapter.fw.jca.work.WorkerJob.go(WorkerJob.java:51)
         at oracle.tip.adapter.fw.common.ThreadPool.run(ThreadPool.java:267)
         at java.lang.Thread.run(Thread.java:534)
    <2006-08-02 11:57:52,341> <ERROR> <default.collaxa.cube.ws> <Database Adapter::Outbound> <oracle.tip.adapter.db.InboundWork runOnce> Non retriable exception during polling of the database ORABPEL-11624 DBActivationSpec Polling Exception.
    Query name: [OracleDB], Descriptor name: [IAUProcess.HccIauHdr]. Polling the database for events failed on this iteration.
    If the cause is something like a database being down successful polling will resume once conditions change. Caused by javax.resource.ResourceException: ORABPEL-12509 Unable to post inbound message to BPEL business process.
    The JCA Activation Agent of the Adapter Framework was unsuccessful in delivering an inbound message from the endpoint [OracleDB_ptt::receive(HccIauHdrCollection)] - due to the following reason: com.oracle.bpel.client.ServerException: Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
    Please examine the log file for any reasons. Make sure the inbound XML messages sent by the Resource Adapter comply to the XML schema definition of the corresponding inbound WSDL message element.
         at oracle.tip.adapter.fw.jca.AdapterFrameworkListenerImpl.onMessage(AdapterFrameworkListenerImpl.java:684)
         at oracle.tip.adapter.fw.jca.messageinflow.MessageEndpointImpl.onMessage(MessageEndpointImpl.java:121)
         at oracle.tip.adapter.db.InboundWork.onMessageImpl(InboundWork.java:370)
         at oracle.tip.adapter.db.InboundWork.onMessage(InboundWork.java:332)
         at oracle.tip.adapter.db.InboundWork.transactionalUnit(InboundWork.java:301)
         at oracle.tip.adapter.db.InboundWork.runOnce(InboundWork.java:255)
         at oracle.tip.adapter.db.InboundWork.run(InboundWork.java:189)
         at oracle.tip.adapter.fw.jca.work.WorkerJob.go(WorkerJob.java:51)
         at oracle.tip.adapter.fw.common.ThreadPool.run(ThreadPool.java:267)
         at java.lang.Thread.run(Thread.java:534)
    Caused by: ORABPEL-12509
    Unable to post inbound message to BPEL business process.
    The JCA Activation Agent of the Adapter Framework was unsuccessful in delivering an inbound message from the endpoint [OracleDB_ptt::receive(HccIauHdrCollection)] - due to the following reason: com.oracle.bpel.client.ServerException: Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
    Please examine the log file for any reasons. Make sure the inbound XML messages sent by the Resource Adapter comply to the XML schema definition of the corresponding inbound WSDL message element.
         at oracle.tip.adapter.fw.jca.AdapterFrameworkListenerImpl.onMessage(AdapterFrameworkListenerImpl.java:628)
         ... 9 more
    Caused by: com.oracle.bpel.client.ServerException: Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
         at com.collaxa.cube.engine.delivery.DeliveryHandler.initialPostAnyType(DeliveryHandler.java:327)
         at com.collaxa.cube.engine.delivery.DeliveryHandler.initialPost(DeliveryHandler.java:218)
         at com.collaxa.cube.engine.delivery.DeliveryHandler.post(DeliveryHandler.java:82)
         at com.collaxa.cube.ejb.impl.DeliveryBean.post(DeliveryBean.java:181)
         at IDeliveryBean_StatelessSessionBeanWrapper22.post(IDeliveryBean_StatelessSessionBeanWrapper22.java:1052)
         at com.oracle.bpel.client.delivery.DeliveryService.post(DeliveryService.java:161)
         at oracle.tip.adapter.fw.AdapterFrameworkListenerBase$DeliveryServiceMonitor.send(AdapterFrameworkListenerBase.java:2358)
         at oracle.tip.adapter.fw.AdapterFrameworkListenerBase.executeDeliveryServiceSend(AdapterFrameworkListenerBase.java:487)
         at oracle.tip.adapter.fw.AdapterFrameworkListenerBase.deliveryServiceSend(AdapterFrameworkListenerBase.java:545)
         at oracle.tip.adapter.fw.jca.AdapterFrameworkListenerImpl.performSingleActivation(AdapterFrameworkListenerImpl.java:746)
         at oracle.tip.adapter.fw.jca.AdapterFrameworkListenerImpl.onMessage(AdapterFrameworkListenerImpl.java:614)
         ... 9 more
    .

    Processing 6000 messages in one shot is not best practice in BPEL. For that kind of volume you should look at data-warehouse-style concepts instead.
    But you might want to process them in batch mode. So think about using the batch option in the DB adapter and try to define MaxRaiseSize and MaxTransactionSize for your DB adapter. Further explanation is here:
    http://download-west.oracle.com/docs/cd/B14099_19/integrate.1012/b25307/adptr_db.htm#CHDHAIHA
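    As an illustration, a hypothetical 10.1.2-style activation spec excerpt showing where the two properties live (the descriptor and query names are taken from the error log above; the other values are examples, so compare against the WSDL your wizard generated):
    <!-- Raise at most 10 rows per message to BPEL, and process at most
         100 rows in one polling transaction. Values here are examples. -->
    <jca:operation ActivationSpec="oracle.tip.adapter.db.DBActivationSpec"
                   DescriptorName="IAUProcess.HccIauHdr"
                   QueryName="OracleDB"
                   MaxRaiseSize="10"
                   MaxTransactionSize="100"
                   PollingInterval="60"/>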

  • How to Expire Large Files using File Server Resource Manager

    Is there a way to expire large files over 2 GB that have not been accessed in 2 years?
    I see under the file expiration options that I can expire files that have not been created, modified, or accessed for a certain amount of time.
    Thanks,
    Eddie

    Hi Eddie,
    FSRM can help report large files and can also help move old files to a folder, but I have not found a way to combine the two in a single process.
    Instead, how about using Robocopy?
    You can run robocopy /min:xxx /minlad:xxx <source> <target>.
    /MIN:n :: MINimum file size - exclude files smaller than n bytes.
    /MINLAD:n :: MINimum Last Access Date - exclude files used since n.
    (If n < 1900 then n = n days, else n = YYYYMMDD date).
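    For example (a sketch with hypothetical paths), files over 2 GB that have not been accessed in roughly 2 years could be archived with:
    robocopy D:\Shares\Data E:\Archive /MOV /MIN:2147483648 /MINLAD:730
    Here 2147483648 bytes is 2 GB; because 730 < 1900, /MINLAD:730 means "not used in the last 730 days", and /MOV moves the files instead of copying them (omit it to copy only).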

  • Reading large files -- use FileChannel or BufferedReader?

    Question --
    I need to read files and get their content. The issue is that I have no idea how big the files will be. My best guess is that most are less than 5 KB but some will be huge.
    I have it set up using a BufferedReader, which is working fine. It's not the fastest thing (using readLine() and StringBuffer.append()), but so far it's usable. However, I'm worried that if I need to deal with large files, such as a PDF or other binary, BufferedReader won't be so efficient if I do it line by line. (And will I run into issues trying to put a binary file into a String?)
    I found a post that recommended FileChannel and ByteBuffer, but I'm running into a java.lang.UnsupportedOperationException when trying to get the byte[] from the ByteBuffer.
    File f = new File(binFileName);
    FileInputStream fis = new FileInputStream(f);
    FileChannel fc = fis.getChannel();
    // Get the file's size and then map it into memory
    int sz = (int) fc.size();
    MappedByteBuffer bb = fc.map(FileChannel.MapMode.READ_ONLY, 0, sz);
    fc.close();
    // This is the line that blows up: a MappedByteBuffer is not backed by an
    // accessible byte[], so bb.array() throws UnsupportedOperationException.
    // Copying the bytes out of the buffer works instead:
    byte[] bytes = new byte[sz];
    bb.get(bytes);
    String contents = new String(bytes);
    Thanks in advance.

    If all you are doing is reading data, I don't think you're going to get much faster than InfoFetcher.
    You are welcome to use and modify this class, but please don't change the package or take credit for it as your own work.
    InfoFetcher.java
    ==============
    package tjacobs.io;

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.ArrayList;
    import java.util.Iterator;

    // Note: TimeOut, IOUtils, InputStreamListener, InputStreamEvent and
    // PartialReadException are companion classes in the tjacobs.io package;
    // they are not shown in this post.

    /**
     * InfoFetcher is a generic way to read data from an input stream (file, socket, etc).
     * InfoFetcher can be set up with a thread so that it reads from an input stream
     * and reports to registered listeners as it gets more information. This vastly
     * simplifies the process of always re-writing the same code for reading from an
     * input stream.
     * <p>
     * I use this all over
     */
    public class InfoFetcher implements Runnable {
        public byte[] buf;
        public InputStream in;
        public int waitTime;
        private ArrayList mListeners;
        public int got = 0;
        protected boolean mClearBufferFlag = false;

        public InfoFetcher(InputStream in, byte[] buf, int waitTime) {
            this.buf = buf;
            this.in = in;
            this.waitTime = waitTime;
        }

        public void addInputStreamListener(InputStreamListener fll) {
            if (mListeners == null) {
                mListeners = new ArrayList(2);
            }
            if (!mListeners.contains(fll)) {
                mListeners.add(fll);
            }
        }

        public void removeInputStreamListener(InputStreamListener fll) {
            if (mListeners == null) {
                return;
            }
            mListeners.remove(fll);
        }

        public byte[] readCompletely() {
            run();
            return buf;
        }

        public int got() {
            return got;
        }

        public void run() {
            if (waitTime > 0) {
                TimeOut to = new TimeOut(waitTime);
                Thread t = new Thread(to);
                t.start();
            }
            int b;
            try {
                while ((b = in.read()) != -1) {
                    if (got + 1 > buf.length) {
                        buf = IOUtils.expandBuf(buf);
                    }
                    int start = got;
                    buf[got++] = (byte) b;
                    // Grab whatever else is already available in one bulk read.
                    int available = in.available();
                    if (got + available > buf.length) {
                        buf = IOUtils.expandBuf(buf, Math.max(got + available, buf.length * 2));
                    }
                    got += in.read(buf, got, available);
                    signalListeners(false, start);
                    if (mClearBufferFlag) {
                        mClearBufferFlag = false;
                        got = 0;
                    }
                }
            } catch (IOException iox) {
                throw new PartialReadException(got, buf.length);
            } finally {
                buf = IOUtils.trimBuf(buf, got);
                signalListeners(true);
            }
        }

        private void setClearBufferFlag(boolean status) {
            mClearBufferFlag = status;
        }

        public void clearBuffer() {
            setClearBufferFlag(true);
        }

        private void signalListeners(boolean over) {
            signalListeners(over, 0);
        }

        private void signalListeners(boolean over, int start) {
            if (mListeners != null) {
                Iterator i = mListeners.iterator();
                InputStreamEvent ev = new InputStreamEvent(got, buf, start);
                while (i.hasNext()) {
                    InputStreamListener fll = (InputStreamListener) i.next();
                    if (over) {
                        fll.gotAll(ev);
                    } else {
                        fll.gotMore(ev);
                    }
                }
            }
        }
    }
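    A minimal usage sketch (the file name is hypothetical; assumes the companion classes mentioned above are on the classpath):

    import java.io.FileInputStream;
    import java.io.InputStream;
    import tjacobs.io.InfoFetcher;

    public class FetchDemo {
        public static void main(String[] args) throws Exception {
            try (InputStream in = new FileInputStream("big.pdf")) {
                // 64 KB initial buffer; waitTime = 0 means no timeout thread.
                InfoFetcher fetcher = new InfoFetcher(in, new byte[64 * 1024], 0);
                byte[] data = fetcher.readCompletely();
                System.out.println("Read " + data.length + " bytes");
            }
        }
    }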

COMPUTER CRASHES WHEN PROCESSING LARGE FILES

    As far as basic operations go, my G4 is running smoothly.
    However, whenever I need it to process significant files, such as exporting a 30-minute video from FCP as a QuickTime movie, or using Compressor to encode an .M2V file, the computer crashes. Basically, if any task is going to take longer than fifteen minutes to complete, I know my computer won't make it.
    Thus far I've done a fresh install of the system software, reinstalled all applications, trashed prefs, run the pro application updates, run Disk Utility, played with my workflow (internal vs. external drives), etc.
    I wonder if perhaps my processor is failing, if I need more memory (though my 768 MB exceeds the applications' minimum requirements), or if perhaps these G4s just aren't adequately equipped to run the newer pro application versions.
    Thanks in advance for any advice.

    I can't pull the DIMM out, due to the fact that I need a certain amount of memory installed to be able to run the software in the first place.
    I've run the hardware test disc that came with my computer and it has not detected any problems.
    I don't think heat is the issue, as, according to the Temperature Monitor utility I downloaded, my computer remains consistent at around 58 degrees, even when performing difficult processes.
    According to my Activity Monitor, when I'm processing one of these larger files I'm using as much as 130% of the CPU, but it can also remain as low as 10% for extended periods. Both seem odd.
    Any thoughts?

  • Mac OSX desktop dropping connection with multiple copy processes & large files

    The servers are 6.5 SP3 running NFAP; the Mac OSX is 10.4.2 updated. The volume the Macs are using is part of a cluster. The users mount the volumes on their Macs and everything is for the most part fine. If they grab a bunch of files and copy them from desktop to server it's fine, as long as it's only a single copy process. The users are part of the hi-res department and the files can be 1 GB or larger. If they drag one or more large files, and then while that's copying they drag some more files, so both copy processes are running at once... quite often the volume will dismount from the desktop and you will get "unable to copy because some resource is unavailable". Sometimes the Finder crashes, sometimes not. Often the files that were partially copied get locked and the user needs to reboot their Mac in order to delete them. I'm getting pretty desperate here; anyone have an idea what's going on? I don't know if this is a Tiger thing or a large file thing or a multiple copy stream thing, a NetWare thing or a Mac thing... we have hundreds of other users running OSX 10.3 and earlier who are not reporting this problem, but they also don't copy files that size. Someone please tell me they have seen this before... thanks very much. Oh, before going to 6.5 and NFAP the servers were 5.1 with Prosoft server and they never had the problem.
    Jake

    Thanks for your help. I have incidents open now with Apple and Novell; I hope one of them can provide something for us. We tried applying 6.5 SP4 to a test server... the problem still happened but was "better": the copy operations still quit, but with SP4 applied the volume did not dismount... or if it did, it remounted automatically because it was still connected after OKing through the copy errors.
    "Jeffrey D Sessler" <[email protected]> wrote in message
    news:[email protected]...
    >I tried two 2GB files. No problems at all but I'm in a 100% end-to-end
    >Gigabit environment. My server storage is also a very-fast SAN.
    >
    > Best,
    > Jeff
    >
    >
    > "Jacob Shorr" <[email protected]> wrote in message
    > news:[email protected]...
    >> Jeffrey,
    >>
    >> Have you tried the exact same test, dragging say two 500MB files in
    >> seperate
    >> copy operations? I hear what you're saying about the 10/100 link, but we
    >> don't run gigabit to the desktops, and we're not going to anytime soon.
    >> Even if that could resolve the issue we need something kind of other fix
    >> for
    >> our infrastructure. I will look into any errors on the switch.
    >>
    >> "Jeffrey D Sessler" <[email protected]> wrote in message
    >> news:[email protected]...
    >>> Well, considering that I'm not seeing the issue on my 10.4.2 machines
    >>> against my 6.5Sp3 servers, I'm not sure what you should do at this
    >>> point.
    >>> Since you say that the 10.3 machines don't have an issue, it makes it
    >> sound
    >>> to me like this is an Apple issue.
    >>>
    >>> The logs point at a communication issue... Is there anyway to get that
    >>> Mac
    >>> on to a Gigabit connection to see if you can duplicate it?
    >>>
    >>> The other option is to wait for 10.4.3 to be released and see if the
    >> problem
    >>> goes away.
    >>>
    >>> Again, on only a 10/100 link, one copy of a large file _will_ saturate
    >>> the
    >>> link.Perhaps 10.4.2 has an issue with this?
    >>>
    >>> Also, when you're doing the copy, what to the error counters in the
    >> switches
    >>> say?
    >>>
    >>> Jeff
    >>>
    >>> "Jacob Shorr" <[email protected]> wrote in message
    >>> news:[email protected]...
    >>> > There are definately no mis-matches. This has been checked and
    >> re-checked
    >>> > a
    >>> > dozen times. It's only on 10.4......we can replicate it on every 10.4
    >>> > machine, and we cannot replicate it on any machine that is 10.3. What
    >>> > should I do to go about getting this fixed, should I be contacting
    >>> > Apple
    >>> > or
    >>> > Novell? The speed is always good until it actually decides to drop
    >>> > and
    >>> > cut
    >>> > off.
    >>> >
    >>> >
    >>> > "Jeffrey D Sessler" <[email protected]> wrote in message
    >>> > news:7jj%[email protected]...
    >>> >> Looks like communication between the Mac and the Netware server is
    >>> > dropping.
    >>> >> AFP in 10.3 and 10.4 support auto-reconnection but I'm sure that it
    >> will
    >>> >> fail the copy process.
    >>> >>
    >>> >> I'd first check to make sure that there are not any mis-matches on
    >>> >> the
    >>> >> switch e.g. the Mac is set to Auto (as it should be) but someone has
    >> set
    >>> > the
    >>> >> switch to a forced mode. Both should be auto. A duplex miss-match
    >>> >> could
    >>> >> cause the Mac not to see the heart beat back from the Novell server.
    >>> >>
    >>> >> Like I said, if the workstation is only on 10/100, a single copy
    >> process
    >>> > on
    >>> >> a G5 Mac will saturate that link. Adding more concurrent copies will
    >> only
    >>> >> result in everything slowing down and taking longer, or you'll get
    >>> >> the
    >>> >> dropped connections.
    >>> >>
    >>> >> Best,
    >>> >> Jeff
    >>> >>
    >>> >>
    >>> >> "Jacob Shorr" <[email protected]> wrote in message
    >>> >> news:Ybc%[email protected]...
    >>> >> > Take a look at the last entries in the system log right after it
    >>> > happened,
    >>> >> > let me know if it means anything to you. Thanks.
    >>> >> >
    >>> >> > Sep 29 13:26:10 yapostolides kernel[0]: AFP_VFS afpfs_mount:
    >>> >> > /Volumes/FP04SYS11, pid 210
    >>> >> > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect:
    >> doing
    >>> >> > reconnect on /Volumes/FP04SYS11
    >>> >> > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect:
    >>> > connect
    >>> >> > to
    >>> >> > the server /Volumes/FP04SYS11
    >>> >> > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect:
    >>> > Opening
    >>> >> > session /Volumes/FP04SYS11
    >>> >> > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect:
    >>> > Logging
    >>> >> > in
    >>> >> > with uam 2 /Volumes/FP04SYS11
    >>> >> > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect:
    >>> >> > Restoring
    >>> >> > session /Volumes/FP04SYS11
    >>> >> > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS
    >>> >> > afpfs_MountAFPVolume:
    >>> >> > GetVolParms failed 0x16
    >>> >> > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect:
    >>> >> > afpfs_MountAFPVolume failed 22 /Volumes/FP04SYS11
    >>> >> > Sep 29 13:31:13 yapostolides KernelEventAgent[43]: tid 00000000
    >>> >> > received
    >>> >> > VQ_DEAD event (32)
    >>> >> > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect:
    >>> > posting
    >>> >> > to
    >>> >> > KEA to unmount /Volumes/FP04SYS11
    >>> >> > Sep 29 13:31:13 yapostolides KernelEventAgent[43]: tid 00000000
    >>> >> > type
    >>> >> > 'afpfs', mounted on '/Volumes/FP04SYS11', from
    >>> >> > 'afp_0TQCV10QsPgy0TShVK000000-4340.2c000006', dead
    >>> >> > Sep 29 13:31:13 yapostolides KernelEventAgent[43]: tid 00000000
    >>> >> > found
    >> 1
    >>> >> > filesystem(s) with problem(s)
    >>> >> > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_unmount:
    >>> >> > /Volumes/FP04SYS11, flags 524288, pid 43
    >>> >> >
    >>> >> >
    >>> >> >
    >>> >> >
    >>> >> > "Jeffrey D Sessler" <[email protected]> wrote in message
    >>> >> > news:GH%[email protected]...
    >>> >> >> We move large files all the time under SP3 with no issues however,
    >>> > there
    >>> >> > are
    >>> >> >> several finder/copy/afp issues in Tiger that are do to be fixed in
    >>> >> >> 10.4.3.
    >>> >> >>
    >>> >> >> Also, if you have any type of network issue such as duplex
    >> mis-matches
    >>> > or
    >>> >> >> are running say, only a 10/100 network, a single Mac can not only
    >>> >> >> transfer
    >>> >> >> more than 10MB/sec (filling the network pipe) or generate so many
    >>> >> > collisions
    >>> >> >> (duplex mis-match) that you could drop communication to the
    >>> >> >> server.
    >>> >> >>
    >>> >> >> What type of server (speed, disks, raid level, NIC speed) and what
    >>> >> >> type
    >>> >> >> of
    >>> >> >> network (switched gigabit, switched 10/100, shared 10/100, etc.)
    >>> >> >>
    >>> >> >> How long does it take to copy that single 1GB file to the server?
    >>> >> >>
    >>> >> >> Does a single copy process always work?
    >>> >> >>
    >>> >> >> Jeff
    >>> >> >>
    >>> >> >> "Jacob Shorr" <[email protected]> wrote in message
    >>> >> >> news:[email protected]...
    >>> >> >> > The servers are 6.5 SP3 running NFAP, the MAC OSX is 10.4.2
    >> updated.
    >>> >> > The
    >>> >> >> > volume the macs are using is part of a cluster. The users mount
    >> the
    >>> >> >> > volumes
    >>> >> >> > on their macs and everying is for the most part fine. If they
    >> grab
    >>> >> >> > a
    >>> >> >> > bunch
    >>> >> >> > of files and copy them from desktop to server it's fine as long
    >>> >> >> > as
    >>> > it's
    >>> >> >> > only
    >>> >> >> > a single copy process. The users are part of the hi-res
    >> department
    >>> > and
    >>> >> >> > the
    >>> >> >> > files can be 1GB or larger. If they drag one or more large
    >>> >> >> > files,
    >>> > and
    >>> >> >> > then
    >>> >> >> > while that's copying they drag some more files, so both copy
    >>> > processes
    >>> >> > are
    >>> >> >> > running at once....quite often the volume will dismount from the
    >>> >> >> > desktop
    >>> >> >> > and
    >>> >> >> > you will get unable to copy because some resource is
    >>> >> >> > unavailable.
    >>> >> >> > Sometimes
    >>> >> >> > the finder crashes, sometimes not. Often the files that were
    >>> > partially
    >>> >> >> > copied get locked and the users needs to reboot their Mac in
    >>> >> >> > order
    >>> >> >> > to
    >>> >> >> > delete
    >>> >> >> > them. I'm getting pretty desperate hear, anyone have an idea
    >> what's
    >>> >> > going
    >>> >> >> > on. I don't know if this is a Tiger thing or a large file thing
    >> or
    >>> >> >> > a
    >>> >> >> > multiple copy stream thing, a netware thing or a mac
    >>> >> >> > thing.....we
    >>> > have
    >>> >> >> > hundreds of other users running OSX 10.3 and earlier who are not
    >>> >> > reporting
    >>> >> >> > this problem, but they also don't copy files that size. Someone
    >>> > please
    >>> >> >> > tell
    >>> >> >> > me they have seen this before....thanks very much. Oh, before
    >> going
    >>> > to
    >>> >> >> > 6.5
    >>> >> >> > and NFAP the servers were 5.1 with Prosoft server and they never
    >> had
    >>> >> >> > the
    >>> >> >> > problem.
    >>> >> >> >
    >>> >> >> > Jake
    >>> >> >> >
    >>> >> >> >
    >>> >> >>
    >>> >> >>
    >>> >> >
    >>> >> >
    >>> >>
    >>> >>
    >>> >
    >>> >
    >>>
    >>>
    >>
    >>
    >
    >

  • Issue while processing large message using schema exposed as WCFservice

    I am using BizTalk 2010 and am facing an issue.
    I have a schema exposed as a WCF service. I used the WCF publishing wizard for this.
    When we post a large message using this WCF service, it throws an HTTP 400 error.
    We have isolated the problem to the web server.
    Is there a way to update the MEX bindings to set the request message size?
    If anyone has faced a similar problem before, I'd appreciate your help.

    Hi,
    The following settings in the web.config file of the WCF service may help:
    <bindings>
      <basicHttpBinding>
        <binding name="basicHttp"
                 allowCookies="true"
                 maxReceivedMessageSize="10000000"
                 maxBufferSize="10000000"
                 maxBufferPoolSize="10000000">
          <readerQuotas maxDepth="32"
                        maxArrayLength="100000000"
                        maxStringContentLength="100000000"/>
        </binding>
      </basicHttpBinding>
    </bindings>
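    If the binding above still doesn't take effect, check that the endpoint actually references it via bindingConfiguration; a sketch (service and contract names below are hypothetical placeholders):
    <services>
      <service name="YourService">
        <endpoint address=""
                  binding="basicHttpBinding"
                  bindingConfiguration="basicHttp"
                  contract="YourContract" />
      </service>
    </services>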
    HTH
    Sumit

  • Trying to transfer a large file using new ipod touch.

    I am trying to transfer a large file (17 GB) using my new iPod touch. I got a new laptop and I am trying to get some things onto it. I was able to do this last time with my old iPod classic; it would let me copy/paste the file or drag it. But with the iPod touch I cannot do either of those. Any help will be appreciated. Thank you very much.
    Message was edited by: usagisailormoon

    Ah really. OK, thank you very much.
