Processing Large Files using Chunk Mode with ICO

Hi All,
I am trying to process large files using an ICO. I am on PI 7.3 and I am using the new PI 7.3 feature to split the input file into chunks.
I also know that we cannot use mapping while using chunk mode.
While trying this, I noticed the following:
1) I created a Data Type, Message Type and interfaces in the ESR and used them in my scenario (no mapping was defined). Sender and receiver DTs were the same.
Result: the scenario did not work. It created only one chunk file (a .tmp file) and terminated.
2) I used a dummy interface in my scenario and it worked fine.
So please confirm whether we should always use dummy interfaces in the scenario while using chunk mode in PI 7.3, or is there something I am missing?
Thanks in Advance,
- Pooja.

Hello,
According to this blog:
File/FTP Adapter - Large File Transfer (Chunk Mode)
The following limitations apply to chunk mode in the File Adapter. As the screenshots in that blog show, the split never considers the payload - it is just a binary split - so these restrictions apply:
Only for File Sender to File Receiver
No Mapping
No Content Based Routing
No Content Conversion
No Custom Modules
You are probably doing content conversion; that is why it is not working.
Hope this helps,
Mark

Similar Messages

  • Process large file using BPEL

    My project has a requirement to process a large file (10 MB) all at once. In the project, the file adapter reads the file, then calls 5 other BPEL processes to do 10 different validations before delivering to an Oracle database. I can't use the debatch feature of the adapter because of the header and detail record validation requirement. I did some performance tuning (e.g. audit level to minimum, logging level to error, JVM size to 2GB, etc.) as specified in the Oracle BPEL user guide. We are using a 4-CPU, 4GB RAM IBM AIX 5L server. I observed that the Receive activity at the beginning of each process is taking a lot of time, while the other transient activities perform as expected.
    Following are statistics for receive activity per BPEL process:
    500KB: 40 Sec
    3MB: 1 Hour
    Because we have 5 BPEL processes, a lot of time is wasted in the receive activities.
    I didn't try 10 MB so far, because of the poor performance figures for the 3 MB file.
    Does anyone have an idea how to improve the performance of the initial receive activity of a BPEL process?
    Thanks
    -Simanchal

    I believe the limit in SOA Suite is 7MB if you want to use the full payload and perform some kind of orchestration. Otherwise you need to do some kind of debatching, which you stated will not work.
    SOA Suite is not really designed for your kind of use case, as it needs to process this file in memory; any transformation can increase the message size by a factor of 3 to 10. If you are writing to a database, why can't you read the rows one by one?
    If you want to perform this kind of action, have a look at ODI (Oracle Data Integrator). I also believe that OSB (AquaLogic) can handle files up to 200MB, so that can be an option as well, but it may require debatching.
    cheers
    James

  • Downloading via XHR and writing large files using WinJS fails with message "Not enough storage is available to complete this operation"

    Hello,
    I have an issue that some users are experiencing but I can't reproduce it myself on my laptop. What I am trying to do is grab a file (a zip file) via XHR. The file can be quite big, like 500MB. Then I want to write it to the user's storage.
    Here is the code I use:
    DownloadOperation.prototype.onXHRResult = function (file, result) {
        var status = result.srcElement.status;
        if (status == 200) {
            var bytes = null;
            try {
                bytes = new Uint8Array(result.srcElement.response, 0, result.srcElement.response.byteLength);
            } catch (e) {
                try {
                    Utils.logError(e);
                    var message = "Error while extracting the file " + this.fileName + ". Try emptying your windows bin.";
                    if (e && e.message) {
                        message += " Error message: " + e.message;
                    }
                    var popup = new Windows.UI.Popups.MessageDialog(message);
                    popup.showAsync();
                } catch (e2) { /* ignore failures from the error dialog itself */ }
                this.onWriteFileError(e);
                return;
            }
            Windows.Storage.FileIO.writeBytesAsync(file, bytes).then(
                this.onWriteFileComplete.bind(this, file),
                this.onWriteFileError.bind(this)
            );
        } else if (status > 400) {
            this.error(null);
        }
    };
    The error happens at this line:
    bytes = new Uint8Array(result.srcElement.response, 0, result.srcElement.response.byteLength);
    With the description "Not enough storage is available to complete this operation". The user has only a C drive with plenty of space available, so I believe the error message given by IE might be a little wrong. Maybe in some situations Uint8Array can't handle such a large file? The program fails on an ASUSTek T100TA but not on my laptop (a standard one).
    Can somebody help me with that? Is there a better way to write a downloaded binary file to disk without passing through a Uint8Array?
    Thanks a lot,
    Fabien

    Hi Fabien,
    If Uint8Array works fine on the other computer, it should not be a problem with the API; instead it could be a setting or configuration issue in IE.
    Actually, using XHR for a 500MB zip file is not recommended. Based on the documentation (How to download a file), XHR wraps an XMLHttpRequest call in a promise, which is not a good approach for downloading big items; please use Background Transfer instead, which is designed to receive big items.
    A simple search on the Internet suggests that the "not enough storage" error is a known issue when using XMLHttpRequest:
    http://forums.asp.net/p/1985921/5692494.aspx?PRB+XMLHttpRequest+returns+error+Not+enough+storage+is+available+to+complete+this+operation, however I'm not familiar with how to solve XMLHttpRequest issues.
    --James

  • Processing large file using Debatching - SAX Exception

    Hi,
    I have a large XML file (about 20 MB) to be processed. I implemented the debatching feature, and in the file adapter I set Publish Messages in Batches to 500.
    When I run the process, I expected to see several instances in the console. But I see one instance - not on the Instances page but under Perform Manual Recovery - and nothing seems to be happening.
    Do I need to do anything else here? Can anyone please help?
    Thanks
    -Prapoorna

    The file is 20 MB.
    A sample XML file is shown below. I have several attendance_row tags inside time_and_attendance.
    <time_and_attendance>
    <attendance_row><oracle_person_id>110758</oracle_person_id>
    <absence_reason>Work Abroad</absence_reason>
    <action_type>A</action_type>
    <date>01/04/2009</date>
    <total_hours>8.6</total_hours>
    <last_update_date>16/06/2009 12:35:47</last_update_date>
    </attendance_row>
    <attendance_row><oracle_person_id>110758</oracle_person_id>
    <absence_reason></absence_reason>
    <action_type>W</action_type>
    <date>01/04/2009</date>
    <total_hours>0</total_hours>
    <last_update_date>16/06/2009 12:35:47</last_update_date>
    </attendance_row>
    <attendance_row><oracle_person_id>110758</oracle_person_id>
    <absence_reason>Work Abroad</absence_reason>
    <action_type>A</action_type>
    <date>02/04/2009</date>
    <total_hours>8.6</total_hours>
    <last_update_date>16/06/2009 12:35:47</last_update_date>
    </attendance_row>
    </time_and_attendance>
    Here is the schema file
    <?xml version="1.0" encoding="UTF-8" ?>
    <!--This Schema has been generated from a DTD. A target namespace has been added to the schema.-->
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" targetNamespace="http://TargetNamespace.com/ReadFile" xmlns="http://TargetNamespace.com/ReadFile" nxsd:version="DTD" xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd">
    <xs:element name="oracle_person_id" type="xs:string"/>
    <xs:element name="total_hours" type="xs:string"/>
    <xs:element name="action_type" type="xs:string"/>
    <xs:element name="last_update_date" type="xs:string"/>
    <xs:element name="absence_reason" type="xs:string"/>
    <xs:element name="attendance_row">
    <xs:complexType>
    <xs:sequence>
    <xs:element ref="oracle_person_id"/>
    <xs:element ref="absence_reason"/>
    <xs:element ref="action_type"/>
    <xs:element ref="date"/>
    <xs:element ref="total_hours"/>
    <xs:element ref="last_update_date"/>
    </xs:sequence>
    </xs:complexType>
    </xs:element>
    <xs:element name="date" type="xs:string"/>
    <xs:element name="time_and_attendance">
    <xs:complexType>
    <xs:sequence>
    <xs:element maxOccurs="unbounded" ref="attendance_row"/>
    </xs:sequence>
    </xs:complexType>
    </xs:element>
    </xs:schema>
    Thanks
    -Prapoorna

  • Processing large files on Mac OS X Lion

    Hi All,
    I need to process large files (a few GB) from a measurement. The data files contain lists of measured events. I process them event by event, and the result is relatively small and does not occupy much memory. The problem I am facing is that Lion "thinks" I will want to use the large data files again later and puts them into the cache (inactive memory). The inactive memory grows while reading the data files, up to the point where the whole memory is full (8GB on a MacBook Pro mid 2010) and it starts swapping a lot. That of course slows down the computer considerably, including the process that reads the data.
    If I run the "purge" command in Terminal, the inactive memory is cleared and the machine becomes more responsive again. The question is: is there any way to prevent Lion from pushing running programs out of memory into swap at the cost of a useless hard-drive cache?
    Thanks for suggestions.

    It's been a while, but I recall using the "dd" command ("man dd" for info) to copy specific portions of data from one disk, device or file to another (in 512-byte increments). You might be able to use it in a script to fetch parts of your larger file as you need them, and dd can read from and/or write to standard input/output, so it's easy to get data and store it in a temporary container like a file or even a variable.
    Otherwise, if you can afford it - and you might with 8 GB of RAM - you could try to disable swapping (paging to disk) altogether and see if that helps...
    To disable paging, run the following command (in one line) in Terminal and reboot:
    sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.dynamic_pager.plist
    To re-enable paging, run the following command (in one line) in Terminal:
    sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.dynamic_pager.plist
    Hope this helps!

  • No error generated if batch process load file uses wrong naming convention

    Another interesting one...
    When using the batch processing functionality of FDM, which can be executed by any of the following:
    - FDM Workbench (manually)
    - Hyperion FDM Task Manager (scheduled)
    - upsShell.exe (scheduled and executed from a batch script)
    ..., one has to name the data file (to be loaded) using a specific file naming convention (e.g. the "A~LOCATION~CATEGORY~PERIOD~RA.csv" format). However, if one does not name the file correctly and then tries to process it using the batch processing functionality via any of the above three methods, FDM happily moves the file out of the OpenBatch folder and into a new folder, but the file is not loaded, as FDM does not know where to map it to (as expected). However, there are no errors in Outbox\Logs\<username>.err to inform the user, so one is none the wiser that anything has gone wrong!
    When using FDM Workbench, an error is displayed on the screen (POV - "Batch Completed With Errors, ( 1 ) Files Contained Errors"), but this is the only indication of any error. And normally one would be scheduling the load using upsShell.exe or Hyperion FDM Task Manager anyway...
    Has anyone else noticed this, or am I doing something wrong here? :-)

    Yes, as per my original post the only feedback on any POV errors appears to be when using the FDM Workbench Batch Processing GUI.
    Regarding the "Batch Process Report in FDM" you mentioned, are you referring to Analysis | Timeline accessible via FDM web client? Unfortunately this does not appear to provide much in the way of detail or errors, only general events that occurred. I cannot locate any batch process report, other than the log output I defined when calling upsShell.exe. However, this contains no POV errors...

  • Digest a large file using SHA1

    Hi,
    Is it possible to hash a file of size 1 GB with the MessageDigest class? All the update methods of the MessageDigest class take only byte[] as an argument. I guess reading the entire file (of size 1GB), creating a byte array, and updating that into the MessageDigest class is not a good idea. Instead, is there any other way/API support for feeding a FileInputStream to a MessageDigest? It would be good if it took care of intelligent hashing of large files.
    I tried multiple updates (of the MessageDigest class) and it takes 80 seconds to digest a single file of size 1.4GB. I believe that is a huge amount of time to get the hash of a file. In my system I have billions of files that need to be hashed. Please suggest any other mechanism that can do it very quickly.

    Dude, your worry is well-founded - but it isn't a Java problem. What people are trying to tell you is, you can't do what you want, because you don't have enough computing power. Let's do the math, shall we? You give a figure of "100 terabytes daily". Let's assume you want to limit that to "in eight hours," so you can run your backup and still leave time to actually deal with the files.
    So - serially, you would have to process 100,000,000,000,000 bytes / (8*60*60) s, which is roughly 3.5 GB/s. On a 3GHz machine, that leaves you less than one instruction cycle per byte (!!!) to load from tape, process the SHA1 algorithm, and write the results.
    What's the maximum streaming capacity, in MB/sec, of your tape system?
    I don't think there's ANY reasonable way of doing this - certainly not serially. You could manage it with a processing farm - with 100GB tapes, assign one tape to a single CPU, and then your "terabytes" of storage are handleable. You just need 100 machines to get the job done (each machine then only has to sustain about 1 TB per eight hours, i.e. roughly 35 MB/s).
    In any event - your speed issue is almost certainly NOT Java's problem.
    Grant
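    On the narrower Java question of feeding a stream into MessageDigest without building a giant byte[]: java.security.DigestInputStream wraps any InputStream and updates the digest as bytes are read, so the file never has to fit in memory. A minimal sketch (the 64 KB buffer size is an arbitrary choice):
    import java.io.BufferedInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.security.DigestInputStream;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    // Streams a file through SHA-1 in fixed-size blocks; the digest is
    // updated as a side effect of reading, so memory use stays constant.
    public class FileDigest {
        public static byte[] sha1(String path) throws IOException, NoSuchAlgorithmException {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            try (InputStream in = new DigestInputStream(
                    new BufferedInputStream(new FileInputStream(path)), md)) {
                byte[] buf = new byte[64 * 1024];
                while (in.read(buf) != -1) {
                    // reading is enough; DigestInputStream feeds md as bytes pass through
                }
            }
            return md.digest();
        }
    }
    This removes the giant-array problem, but not the raw throughput limit discussed above - the disk or tape, not Java, sets the floor.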

  • Problem while processing large files

    Hi
    I am facing a problem while processing large files.
    I have a file which is around 72MB, with more than 100,000 (1 lakh) records. XI is able to pick up the file if it has 30,000 records. If the file has more than 30,000 records, XI picks up the file (and once it picks it up, it deletes it), but I don't see any information under SXMB_MONI - no error, no success, no processing. It simply picks up and ignores the file. If I process these records separately, it works.
    How can I process this file? Why is XI simply ignoring it? How can I solve this problem?
    Thanks & Regards
    Sowmya.

    Hi,
    XI picks up the file subject to its maximum processing limit, as well as the memory and resource consumption of the XI server.
    Processing a 72 MB file is on the higher side. It increases the memory utilization of the XI server, which may fail to process it at the peak point.
    You should divide the file into small chunks and allow multiple instances to run. It will be faster and will not create any problem; see the sketch after the references below.
    Refer
    SAP Network Blog: Night Mare-Processing huge files in SAP XI
    /people/sravya.talanki2/blog/2005/11/29/night-mare-processing-huge-files-in-sap-xi
    /people/michal.krawczyk2/blog/2005/11/10/xi-the-same-filename-from-a-sender-to-a-receiver-file-adapter--sp14
    Processing huge file loads through XI
    File limit - please refer to SAP Note 821267, chapter 14 ("File Limit").
    Thanks
    swarup
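    A minimal sketch of the pre-splitting idea mentioned above - a plain binary split into fixed-size chunks. The file names and the 5 MB chunk size are illustrative assumptions; for record-oriented files you would split on record boundaries instead, so that no record straddles two chunks:
    import java.io.*;

    // Illustrative pre-processing step: split one large flat file into
    // fixed-size pieces before the integration server picks them up.
    public class FileSplitter {
        public static void main(String[] args) throws IOException {
            File source = new File("large_input.dat"); // assumed input name
            int chunkSize = 5 * 1024 * 1024;           // 5 MB per chunk - tune as needed
            byte[] buffer = new byte[chunkSize];
            int part = 0;
            try (InputStream in = new BufferedInputStream(new FileInputStream(source))) {
                int read;
                // read() may return less than chunkSize, so chunks can vary in size;
                // that is acceptable for a sketch like this
                while ((read = in.read(buffer)) > 0) {
                    File chunk = new File("chunk_" + (part++) + ".dat");
                    try (OutputStream out = new FileOutputStream(chunk)) {
                        out.write(buffer, 0, read);
                    }
                }
            }
        }
    }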

  • Possible to use inline mode with Port Channel

    Hi,
    Just wondering if anyone has used inline mode with a Port Channel configuration, placing the WAE device between router and switch. Any tips or gotchas to be concerned about? We currently have inline mode running at this location, but the site would want redundancy built in through a port channel.
    kind regards,
    Nigel

    Hi Nigel,
    This is not supported; taken from here:
    http://www.cisco.com/c/en/us/td/docs/app_ntwk_services/waas/waas/v531/command/reference/cmdr/glob_cfg.html#wp1532575
    Best regards
    Finn Poulsen

  • Mac OSX desktop dropping connection with multiple copy processes & large files

    The servers are 6.5 SP3 running NFAP; the Mac OSX is 10.4.2, updated. The volume the Macs are using is part of a cluster. The users mount the volumes on their Macs and everything is for the most part fine. If they grab a bunch of files and copy them from desktop to server it's fine, as long as it's only a single copy process. The users are part of the hi-res department and the files can be 1GB or larger. If they drag one or more large files, and then while that's copying they drag some more files, so both copy processes are running at once... quite often the volume will dismount from the desktop and you will get "unable to copy because some resource is unavailable". Sometimes the Finder crashes, sometimes not. Often the files that were partially copied get locked and the users need to reboot their Mac in order to delete them. I'm getting pretty desperate here; anyone have an idea what's going on? I don't know if this is a Tiger thing or a large file thing or a multiple copy stream thing, a NetWare thing or a Mac thing... we have hundreds of other users running OSX 10.3 and earlier who are not reporting this problem, but they also don't copy files that size. Someone please tell me they have seen this before... thanks very much. Oh, before going to 6.5 and NFAP the servers were 5.1 with Prosoft server and they never had the problem.
    Jake

    Thanks for your help, I have incidents open now with Apple and Novell; I hope one of them can provide something for us. We tried applying 6.5 SP4 to a test server... the problem still happened but was "better": the copy operations still quit, but with SP4 applied the volume did not dismount... or if it did, it remounted automatically, because it was still connected after OKing through the copy errors.
    The quoted exchange, oldest first:
    Jeffrey D Sessler wrote:
    > We move large files all the time under SP3 with no issues; however, there are several Finder/copy/AFP issues in Tiger that are due to be fixed in 10.4.3.
    > Also, if you have any type of network issue such as duplex mis-matches, or are running, say, only a 10/100 network, a single Mac can not only transfer more than 10MB/sec (filling the network pipe) but also generate so many collisions (duplex mis-match) that you could drop communication to the server.
    > What type of server (speed, disks, RAID level, NIC speed) and what type of network (switched gigabit, switched 10/100, shared 10/100, etc.)?
    > How long does it take to copy that single 1GB file to the server?
    > Does a single copy process always work?
    Jacob Shorr wrote:
    > Take a look at the last entries in the system log right after it happened, let me know if it means anything to you. Thanks.
    > Sep 29 13:26:10 yapostolides kernel[0]: AFP_VFS afpfs_mount: /Volumes/FP04SYS11, pid 210
    > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect: doing reconnect on /Volumes/FP04SYS11
    > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect: connect to the server /Volumes/FP04SYS11
    > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect: Opening session /Volumes/FP04SYS11
    > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect: Logging in with uam 2 /Volumes/FP04SYS11
    > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect: Restoring session /Volumes/FP04SYS11
    > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_MountAFPVolume: GetVolParms failed 0x16
    > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect: afpfs_MountAFPVolume failed 22 /Volumes/FP04SYS11
    > Sep 29 13:31:13 yapostolides KernelEventAgent[43]: tid 00000000 received VQ_DEAD event (32)
    > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect: posting to KEA to unmount /Volumes/FP04SYS11
    > Sep 29 13:31:13 yapostolides KernelEventAgent[43]: tid 00000000 type 'afpfs', mounted on '/Volumes/FP04SYS11', from 'afp_0TQCV10QsPgy0TShVK000000-4340.2c000006', dead
    > Sep 29 13:31:13 yapostolides KernelEventAgent[43]: tid 00000000 found 1 filesystem(s) with problem(s)
    > Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_unmount: /Volumes/FP04SYS11, flags 524288, pid 43
    Jeffrey D Sessler wrote:
    > Looks like communication between the Mac and the NetWare server is dropping. AFP in 10.3 and 10.4 supports auto-reconnection, but I'm sure that it will fail the copy process.
    > I'd first check to make sure that there are not any mis-matches on the switch, e.g. the Mac is set to Auto (as it should be) but someone has set the switch to a forced mode. Both should be auto. A duplex mis-match could cause the Mac not to see the heartbeat back from the Novell server.
    > Like I said, if the workstation is only on 10/100, a single copy process on a G5 Mac will saturate that link. Adding more concurrent copies will only result in everything slowing down and taking longer, or you'll get the dropped connections.
    Jacob Shorr wrote:
    > There are definitely no mis-matches. This has been checked and re-checked a dozen times. It's only on 10.4... we can replicate it on every 10.4 machine, and we cannot replicate it on any machine that is 10.3. What should I do to go about getting this fixed - should I be contacting Apple or Novell? The speed is always good until it actually decides to drop and cut off.
    Jeffrey D Sessler wrote:
    > Well, considering that I'm not seeing the issue on my 10.4.2 machines against my 6.5 SP3 servers, I'm not sure what you should do at this point. Since you say that the 10.3 machines don't have an issue, it makes it sound to me like this is an Apple issue.
    > The logs point at a communication issue... Is there any way to get that Mac onto a Gigabit connection to see if you can duplicate it?
    > The other option is to wait for 10.4.3 to be released and see if the problem goes away.
    > Again, on only a 10/100 link, one copy of a large file _will_ saturate the link. Perhaps 10.4.2 has an issue with this?
    > Also, when you're doing the copy, what do the error counters in the switches say?
    Jacob Shorr wrote:
    > Have you tried the exact same test, dragging say two 500MB files in separate copy operations? I hear what you're saying about the 10/100 link, but we don't run gigabit to the desktops, and we're not going to anytime soon. Even if that could resolve the issue, we need some other kind of fix for our infrastructure. I will look into any errors on the switch.
    Jeffrey D Sessler wrote:
    > I tried two 2GB files. No problems at all, but I'm in a 100% end-to-end Gigabit environment. My server storage is also a very fast SAN.

  • Play Quicktime file on extended mode with 2 monitors using fullscreen

    I have content that plays in QuickTime that was designed to run on 2 displays using extended mode. Whenever I go full screen, it defaults to using only one of the displays.

    Here, this might help you. I was having problems copying to my LaCie external hard drive, too. I contacted LaCie and this was the response I received - which I followed, and I have now been able to move all my iMovie, iDVD, QuickTime, etc. files to the LaCie:
    The default formatting on these drives is FAT32, compatible with both Mac and PC. However, FAT32 has some limitations - it will not hold any single file larger than 4GB, and it does not like filenames with any characters other than A-Z, a-z, 0-9, periods and underscores. Mac OS 10.1.x and 10.2.x will not mount large FAT32 volumes.
    The Mac Extended format does not have any of these limitations, as it is the Mac-native format. If you will not be sharing this drive with a PC at all, reinitialize the drive as follows. You will need to copy off any data you need temporarily, as this will erase the drive.
    OS X - Initializing with Disk Utility (will erase the drive)
    1. Open Disk Utility, found in the Utilities folder.
    2. On the left, select the drive (not the volume below it).
    3. On the right, select the Partition tab.
    4. Under Volume Scheme, set the number of partitions desired (usually one).
    5. Set the format you desire (usually Mac OS Extended).
    6. It is not necessary to check the OS 9 Drivers check box.
    7. Once you have the drive set up how you desire, click on the Partition button in the lower right.
    8. It should only take a few moments to complete, and when done, the drive will mount on the desktop.
    Answers to most common questions can be found in the manual on the CD that came with your product or in our FAQs:
    http://www.lacie.com/support/faq/

  • Reading large files -- use FileChannel or BufferedReader?

    Question --
    I need to read files and get their content. The issue is that I have no idea how big the files will be. My best guess is that most are less than 5KB, but some will be huge.
    I have it set up using a BufferedReader, which is working fine. It's not the fastest thing (using readLine() and StringBuffer.append()), but so far it's usable. However, I'm worried that if I need to deal with large files, such as a PDF or other binary, BufferedReader won't be so efficient if I do it line by line. (And will I run into issues trying to put a binary file into a String?)
    I found a post that recommended FileChannel and ByteBuffer, but I'm running into a java.lang.UnsupportedOperationException when trying to get the byte[] from ByteBuffer.
    File f = new File(binFileName);
    FileInputStream fis = new FileInputStream(f);
    FileChannel fc = fis.getChannel();
    // Get the file's size and then map it into memory
    int sz = (int)fc.size();
    MappedByteBuffer bb = fc.map(FileChannel.MapMode.READ_ONLY, 0, sz);
    fc.close();
    String contents = new String(bb.array()); //code blows up
    Thanks in advance.

    If all you are doing is reading data, I don't think you're going to get much faster than InfoFetcher.
    You are welcome to use and modify this class, but please don't change the package or take credit for it as your own work.
    InfoFetcher.java
    ==============
    package tjacobs.io;

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.ArrayList;
    import java.util.Iterator;

    // Note: relies on TimeOut, IOUtils, InputStreamListener, InputStreamEvent
    // and PartialReadException from the same tjacobs.io package (not shown in the post).

    /**
     * InfoFetcher is a generic way to read data from an input stream (file, socket, etc).
     * InfoFetcher can be set up with a thread so that it reads from an input stream
     * and reports to registered listeners as it gets more information. This vastly
     * simplifies the process of always re-writing the same code for reading from an
     * input stream.
     * <p>
     * I use this all over.
     */
    public class InfoFetcher implements Runnable {
        public byte[] buf;
        public InputStream in;
        public int waitTime;
        private ArrayList mListeners;
        public int got = 0;
        protected boolean mClearBufferFlag = false;

        public InfoFetcher(InputStream in, byte[] buf, int waitTime) {
            this.buf = buf;
            this.in = in;
            this.waitTime = waitTime;
        }

        public void addInputStreamListener(InputStreamListener fll) {
            if (mListeners == null) {
                mListeners = new ArrayList(2);
            }
            if (!mListeners.contains(fll)) {
                mListeners.add(fll);
            }
        }

        public void removeInputStreamListener(InputStreamListener fll) {
            if (mListeners == null) {
                return;
            }
            mListeners.remove(fll);
        }

        public byte[] readCompletely() {
            run();
            return buf;
        }

        public int got() {
            return got;
        }

        public void run() {
            // optional watchdog thread, armed when a wait time is given
            if (waitTime > 0) {
                TimeOut to = new TimeOut(waitTime);
                Thread t = new Thread(to);
                t.start();
            }
            int b;
            try {
                while ((b = in.read()) != -1) {
                    if (got + 1 > buf.length) {
                        buf = IOUtils.expandBuf(buf);
                    }
                    int start = got;
                    buf[got++] = (byte) b;
                    // drain whatever else is already available in one bulk read
                    int available = in.available();
                    if (got + available > buf.length) {
                        buf = IOUtils.expandBuf(buf, Math.max(got + available, buf.length * 2));
                    }
                    got += in.read(buf, got, available);
                    signalListeners(false, start);
                    if (mClearBufferFlag) {
                        mClearBufferFlag = false;
                        got = 0;
                    }
                }
            } catch (IOException iox) {
                throw new PartialReadException(got, buf.length);
            } finally {
                buf = IOUtils.trimBuf(buf, got);
                signalListeners(true);
            }
        }

        private void setClearBufferFlag(boolean status) {
            mClearBufferFlag = status;
        }

        public void clearBuffer() {
            setClearBufferFlag(true);
        }

        private void signalListeners(boolean over) {
            signalListeners(over, 0);
        }

        private void signalListeners(boolean over, int start) {
            if (mListeners != null) {
                Iterator i = mListeners.iterator();
                InputStreamEvent ev = new InputStreamEvent(got, buf, start);
                while (i.hasNext()) {
                    InputStreamListener fll = (InputStreamListener) i.next();
                    if (over) {
                        fll.gotAll(ev);
                    } else {
                        fll.gotMore(ev);
                    }
                }
            }
        }
    }
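    On the java.lang.UnsupportedOperationException in the original post: the MappedByteBuffer returned by FileChannel.map() is a direct buffer with no accessible backing array, so bb.array() throws. A minimal sketch of copying the bytes out of the mapped buffer instead - the charset is an assumption (pick whatever the file actually uses), and the int cast assumes the file is under 2GB:
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.charset.StandardCharsets;

    public class MappedFileRead {
        public static String readAll(String path) throws IOException {
            try (FileInputStream fis = new FileInputStream(path);
                 FileChannel fc = fis.getChannel()) {
                int sz = (int) fc.size(); // map() views are limited to int-sized ranges
                MappedByteBuffer bb = fc.map(FileChannel.MapMode.READ_ONLY, 0, sz);
                byte[] bytes = new byte[sz];
                bb.get(bytes); // direct buffers have no backing array, so copy explicitly
                return new String(bytes, StandardCharsets.UTF_8); // charset is an assumption
            }
        }
    }
    For binary formats like PDF, skip the String conversion entirely and work with the byte[] - stuffing arbitrary bytes into a String will corrupt them.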

  • COMPUTER CRASHES WHEN PROCESSING LARGE FILES

    As far as basic operations, my G4 is running smoothly.
    However, whenever I need it to process significant files, such as exporting a 30-minute video from FCP as a QuickTime movie, or using Compressor to encode an .M2V file, the computer crashes. Basically, if any task is going to take longer than fifteen minutes to complete, I know my computer won't make it.
    Thus far I've done a fresh install of the system software, reinstalled all applications, trashed prefs, run the pro application updates, run Disk Utility, played with my workflow (internal vs. external drives), etc.
    I wonder if perhaps my processor is failing, if I need more memory (though my 768MB exceeds the applications' minimum requirements), or if perhaps these G4s just aren't adequately equipped to run the newer pro application versions.
    Thanks in advance for any advice.

    I can't pull the DIMM out, due to the fact that I need a certain amount of memory installed to be able to run the software in the first place.
    I've run the hardware test disc that came with my computer and it has not detected any problems.
    I don't think heat is the issue, as, according to the Temperature Monitor utility I downloaded, my computer remains consistent at around 58 degrees, even when performing difficult processes.
    According to my Activity Monitor, when I'm processing one of these larger files I'm using as much as 130% of the CPU, but it can also remain as low as 10% for extended periods. Both seem odd.
    Any thoughts?

  • Problem processing large message using dbadapter.

    I have a process which is initiated by a dbadapter fetch from a table.
    It works fine when there are fewer records, but when the number of records
    is more than 6000 (more than 4MB) I get the errors below.
    The process goes to the "off" state after these errors.
    Does anybody have any suggestions on how to process large messages?
    <2006-08-02 11:55:25,172> <ERROR> <default.collaxa.cube> <BaseCubeSessionBean::logError> Error while invoking bean "cube delivery": Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
    <2006-08-02 11:55:36,473> <ERROR> <default.collaxa.cube> <BaseCubeSessionBean::logError> Error while invoking bean "delivery": Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
    <2006-08-02 11:55:42,689> <ERROR> <default.collaxa.cube.activation> <AdapterFramework::Inbound> [OracleDB_ptt::receive(HccIauHdrCollection)] - JCA Activation Agent was unable to perform delivery of inbound message to BPEL Process 'bpel://localhost/default/IAUProcess~1.0/' due to: Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
    <2006-08-02 11:56:22,573> <ERROR> <default.collaxa.cube.activation> <AdapterFramework::Inbound>
    com.oracle.bpel.client.ServerException: Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
         at com.collaxa.cube.engine.delivery.DeliveryHandler.initialPostAnyType(DeliveryHandler.java:327)
         at com.collaxa.cube.engine.delivery.DeliveryHandler.initialPost(DeliveryHandler.java:218)
         at com.collaxa.cube.engine.delivery.DeliveryHandler.post(DeliveryHandler.java:82)
         at com.collaxa.cube.ejb.impl.DeliveryBean.post(DeliveryBean.java:181)
         at IDeliveryBean_StatelessSessionBeanWrapper22.post(IDeliveryBean_StatelessSessionBeanWrapper22.java:1052)
         at com.oracle.bpel.client.delivery.DeliveryService.post(DeliveryService.java:161)
         at oracle.tip.adapter.fw.AdapterFrameworkListenerBase$DeliveryServiceMonitor.send(AdapterFrameworkListenerBase.java:2358)
         at oracle.tip.adapter.fw.AdapterFrameworkListenerBase.executeDeliveryServiceSend(AdapterFrameworkListenerBase.java:487)
         at oracle.tip.adapter.fw.AdapterFrameworkListenerBase.deliveryServiceSend(AdapterFrameworkListenerBase.java:545)
         at oracle.tip.adapter.fw.jca.AdapterFrameworkListenerImpl.performSingleActivation(AdapterFrameworkListenerImpl.java:746)
         at oracle.tip.adapter.fw.jca.AdapterFrameworkListenerImpl.onMessage(AdapterFrameworkListenerImpl.java:614)
         at oracle.tip.adapter.fw.jca.messageinflow.MessageEndpointImpl.onMessage(MessageEndpointImpl.java:121)
         at oracle.tip.adapter.db.InboundWork.onMessageImpl(InboundWork.java:370)
         at oracle.tip.adapter.db.InboundWork.onMessage(InboundWork.java:332)
         at oracle.tip.adapter.db.InboundWork.transactionalUnit(InboundWork.java:301)
         at oracle.tip.adapter.db.InboundWork.runOnce(InboundWork.java:255)
         at oracle.tip.adapter.db.InboundWork.run(InboundWork.java:189)
         at oracle.tip.adapter.fw.jca.work.WorkerJob.go(WorkerJob.java:51)
         at oracle.tip.adapter.fw.common.ThreadPool.run(ThreadPool.java:267)
         at java.lang.Thread.run(Thread.java:534)
    <2006-08-02 11:57:52,341> <ERROR> <default.collaxa.cube.ws> <Database Adapter::Outbound> <oracle.tip.adapter.db.InboundWork runOnce> Non retriable exception during polling of the database ORABPEL-11624 DBActivationSpec Polling Exception.
    Query name: [OracleDB], Descriptor name: [IAUProcess.HccIauHdr]. Polling the database for events failed on this iteration.
    If the cause is something like a database being down successful polling will resume once conditions change. Caused by javax.resource.ResourceException: ORABPEL-12509 Unable to post inbound message to BPEL business process.
    The JCA Activation Agent of the Adapter Framework was unsuccessful in delivering an inbound message from the endpoint [OracleDB_ptt::receive(HccIauHdrCollection)] - due to the following reason: com.oracle.bpel.client.ServerException: Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
    Please examine the log file for any reasons. Make sure the inbound XML messages sent by the Resource Adapter comply to the XML schema definition of the corresponding inbound WSDL message element.
         at oracle.tip.adapter.fw.jca.AdapterFrameworkListenerImpl.onMessage(AdapterFrameworkListenerImpl.java:684)
         at oracle.tip.adapter.fw.jca.messageinflow.MessageEndpointImpl.onMessage(MessageEndpointImpl.java:121)
         at oracle.tip.adapter.db.InboundWork.onMessageImpl(InboundWork.java:370)
         at oracle.tip.adapter.db.InboundWork.onMessage(InboundWork.java:332)
         at oracle.tip.adapter.db.InboundWork.transactionalUnit(InboundWork.java:301)
         at oracle.tip.adapter.db.InboundWork.runOnce(InboundWork.java:255)
         at oracle.tip.adapter.db.InboundWork.run(InboundWork.java:189)
         at oracle.tip.adapter.fw.jca.work.WorkerJob.go(WorkerJob.java:51)
         at oracle.tip.adapter.fw.common.ThreadPool.run(ThreadPool.java:267)
         at java.lang.Thread.run(Thread.java:534)
    Caused by: ORABPEL-12509
    Unable to post inbound message to BPEL business process.
    The JCA Activation Agent of the Adapter Framework was unsuccessful in delivering an inbound message from the endpoint [OracleDB_ptt::receive(HccIauHdrCollection)] - due to the following reason: com.oracle.bpel.client.ServerException: Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
    Please examine the log file for any reasons. Make sure the inbound XML messages sent by the Resource Adapter comply to the XML schema definition of the corresponding inbound WSDL message element.
         at oracle.tip.adapter.fw.jca.AdapterFrameworkListenerImpl.onMessage(AdapterFrameworkListenerImpl.java:628)
         ... 9 more
    Caused by: com.oracle.bpel.client.ServerException: Delivery callback message serialization failed.
    An attempt to serialize the delivery callback messages for conversation "LocalGUID:75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6d", message "75f32d7727f922f9:1712b3a:10ccf9e4cf4:-7f6c" to binary format has failed. The exception reported is:
         at com.collaxa.cube.engine.delivery.DeliveryHandler.initialPostAnyType(DeliveryHandler.java:327)
         at com.collaxa.cube.engine.delivery.DeliveryHandler.initialPost(DeliveryHandler.java:218)
         at com.collaxa.cube.engine.delivery.DeliveryHandler.post(DeliveryHandler.java:82)
         at com.collaxa.cube.ejb.impl.DeliveryBean.post(DeliveryBean.java:181)
         at IDeliveryBean_StatelessSessionBeanWrapper22.post(IDeliveryBean_StatelessSessionBeanWrapper22.java:1052)
         at com.oracle.bpel.client.delivery.DeliveryService.post(DeliveryService.java:161)
         at oracle.tip.adapter.fw.AdapterFrameworkListenerBase$DeliveryServiceMonitor.send(AdapterFrameworkListenerBase.java:2358)
         at oracle.tip.adapter.fw.AdapterFrameworkListenerBase.executeDeliveryServiceSend(AdapterFrameworkListenerBase.java:487)
         at oracle.tip.adapter.fw.AdapterFrameworkListenerBase.deliveryServiceSend(AdapterFrameworkListenerBase.java:545)
         at oracle.tip.adapter.fw.jca.AdapterFrameworkListenerImpl.performSingleActivation(AdapterFrameworkListenerImpl.java:746)
         at oracle.tip.adapter.fw.jca.AdapterFrameworkListenerImpl.onMessage(AdapterFrameworkListenerImpl.java:614)
         ... 9 more

    Processing 6000 messages in one shot is not best practice in BPEL; for that you would have to look at concepts like a data warehouse or similar.
    But you might want to process them in batch mode. So think of using the batch option in the DB adapter, and try to define MaxRaiseSize and MaxTransactionSize for your DB adapter. Further explanation is here:
    http://download-west.oracle.com/docs/cd/B14099_19/integrate.1012/b25307/adptr_db.htm#CHDHAIHA

  • Processing Large Files in Adobe Premiere Elements 12

    Greetings,
    I am trying to process video files that are approximately 500 to 750MB in Adobe Premiere Elements 12. I am running on Windows 7 Home Premium, with an Intel i5-2300 processor, 8GB DDR3 and an NVIDIA GeForce GT520 video card. It is choking so badly I cannot edit my videos at all. Reliable technical support here says that it is a limitation of the software and that upgrading my hardware would be next to useless. Can you please clarify? Unfortunately Premiere Pro is out of my budget.

    Lucidity2014
    I have gone over your thread again. I do not believe that project settings are at the core of file size being a limitation in your workflow. And, at face value, the format of AVCHD.mov (1920 x 1080 @ 24 progressive frames per second) should be supported by Premiere Elements 12 on Windows 7 64-bit (and you have the latest version of QuickTime installed on your computer along with Premiere Elements 12).
    This is what I would like you to do to demonstrate whether or not, in your situation, the project settings are at the core of your file size limitation issue:
    1. Open a new Premiere Elements 12 project to the Expert workspace. Go to File Menu/New/Project and Change Settings.
    2. In the Change Settings dialog, make sure that the project preset is set to
    NTSC
    DSLR
    1080p
    DSLR 1080p24
    Before you close out of there, make sure that you have a check mark next to "Force Selected Project Setting on This Project" in the new project dialog which is the last dialog you should see as you exit that area.
    3. Then back in the Expert workspace of the project, import your AVCHD.mov (500 to 750 MB/4 minutes 49 seconds) using the program's Add Media/Files and Folders.
    a. Do you see an orange line over the Timeline content when it is first dragged to the Timeline?
    b. Do the problems exist as before?
    Click on the Start button. In the Search field above the Start button, type System Information. In System Information, please tell us what you see for:
    Total Physical Memory
    Available Physical Memory
    Total Virtual Memory
    Available Virtual Memory
    Page File Space
    From your initial report, your installed RAM is supposed to be 8 GB.
    Previously you wrote:
    The filepath is; C:\Users\Brian Ellis\Desktop\GBC Video for the Web\Video Footage
    I assumed the free hard drive location would be the same properties as the C: Drive (1.07 TB), but please advise if I should move off the desktop?
    Just in case, after you have ruled out the project settings factor, please change the file path so that the file is saved to Libraries/Documents or Libraries/Videos. Then start a new project, go through setting the project preset manually, and then in the project use Add Media/Files and Folders from the new hard drive save location (Documents or Videos). Moving forward, do you have an external hard drive for video storage?
    We will be watching for your progress.
    Thank you.
    ATR
    Add-on comment... To give others a view of the project settings that you have been using for the problem situation, please go to Edit Menu/Project Settings/General and tell us what is there for Editing Mode, Timebase, Frame Size, and Pixel Aspect Ratio - even if the fields appear grayed out. That should answer everyone's questions.
