File copy of large file fails.

I am trying to copy a large file, say "file.txt", from an external drive to my hard drive. The size of the file is 140 GB. I have tried twice, and both times the copy operation has stopped (after a couple of hours and 135 GB) with the message: "Operation cannot be completed. File already exists." The only file named file.txt that exists at that time is the partial one that was copied.
What's going on and how can I perform the file copy?
Thanks!

Can you tell me whether the external drive (for any of the respondents) was NOT formatted with a Mac partition scheme, but instead with something like NTFS or FAT?
The file-transfer failure issue has been a recurring frustration for a lot of users across many iterations of the Finder, whether with single large file transfers or with multiple files (thousands) that are large in total size.
I am speculating that the issue arises from files being transferred between storage locations of differing formats or, in the case of internet-based storage, from varying file translations. In part this speculation exists because I do not seem to experience this issue if the drives are all Mac OS Extended (Journaled).
Just looking to confirm your configuration.
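
A quick way to confirm what each volume is actually formatted as: in Terminal, "diskutil info /Volumes/YourDrive" reports the file system personality. The same check can be done programmatically; below is a minimal sketch in Java (assuming Java 7+; the volume path is a placeholder):

import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Paths;

public class VolumeFormat {
    public static void main(String[] args) throws Exception {
        // Prints the reported filesystem type (e.g. "hfs", "msdos", "ntfs") for a mounted volume.
        FileStore store = Files.getFileStore(Paths.get("/Volumes/YourDrive"));
        System.out.println(store.name() + " : " + store.type());
    }
}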

Similar Messages

  • Unable to copy very large file to eSATA external HDD

    I am trying to copy a VMWare Fusion virtual machine, 57 GB, from my Macbook Pro's laptop hard drive to an external eSATA hard drive, which is attached through an ExpressPort adapter. VMWare Fusion is not running and the external drive has lots of room. Disk Utility finds no problems with either drive. I have excluded both the external disk and the folder on my laptop hard drive that contains my virtual machine from my Time Machine backups. At about the 42 GB mark, an error message appears:
    The Finder cannot complete the operation because some data in "Windows1-Snapshot6.vmem" could not be read or written. (Error code -36)
    After I press OK to remove the dialog, the copy does not continue, and I cannot cancel the copy. I have to force-quit the Finder to make the copy dialog go away before I can attempt the copy again. I've tried rebooting between attempts, still no luck. I have tried a total of 4 times now, exact same result at the exact same place, 42 GB / 57 GB.
    Any ideas?

    Still no breakthrough from Apple. They're telling me to terminate the VMWare processes before attempting the copy, but had they actually read my description of the problem first, they would have known that I already tried this. Hopefully they'll continue to investigate.
    From a correspondence with Tim, a support representative at Apple:
    Hi Tim,
    Thank you for getting back to me, I got your message. Although it is true that at the time I ran the Capture Data program there were some VMWare-related processes running (PIDs 105, 106, 107 and 108), this was not the case when the issue occurred earlier. After initially experiencing the problem, this possibility had occurred to me, so I took the time to terminate all VMWare processes using the Activity Monitor before again attempting to copy the files, including the processes mentioned by your engineering department. I documented this in my posting to Apple's forum as follows: (quote is from my post of Feb 19, 2008, 1:28pm, to the thread "Unable to copy very large file to eSATA external HDD", relevant section in >bold print<)
    Thanks for the suggestions. I have since tried this operation with 3 different drives through two different interface types. Two of the drives are identical - 3.5" 7200 RPM 1TB Western Digital WD10EACS (WD Caviar SE16) in external hard drive enclosures, and the other is a smaller USB2 100GB Western Digital WD1200U0170-001 external drive. I tried the two 1TB drives through eSATA - ExpressPort and also over USB2. I have tried the 100GB drive only over USB2 since that is the only interface on the drive. In all cases the result is the same. All 3 drives are formatted Mac OS Extended (Journaled).
    I know the files work on my laptop's hard drive. They are a VMWare virtual machine that works just fine when I use it every day. >Before attempting the copy, I shut down VMWare and terminated all VMWare processes using the Activity Monitor for good measure.< I have tried the copy operation both through the Finder and through the Unix command prompt using the drive's mount point of /Volumes/jfinney-ext-3.
    Any more ideas?
    Furthermore, to prove that there were no file locks present on the affected files, I moved them to a different location on my laptop's HDD and renamed them, which would not have been possible if there had been interference from vmware-related processes. So, that's not it.
    Your suggested workaround, to compress the files before copying them to the external drive, may serve as a temporary workaround but it is not a solution. This VM will grow over time to the point where even the compressed version is larger than the 42GB maximum, and compressing and uncompressing the files will take me a lot of time for files of this size. Could you please continue to pursue this issue and identify the underlying cause?
    Thank you,
    - Jeremy
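
    Since the failure lands at the same 42 GB offset on every attempt and on every target drive, it is worth testing whether a specific region of the source file is unreadable, which is what error -36 often indicates. Below is a minimal diagnostic sketch, not from the thread itself, assuming Java 7+ is installed; both paths are placeholders:

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class ChunkedCopy {
        public static void main(String[] args) throws IOException {
            Path src = Paths.get("/path/to/Windows1-Snapshot6.vmem");                 // placeholder
            Path dst = Paths.get("/Volumes/jfinney-ext-3/Windows1-Snapshot6.vmem");   // placeholder
            ByteBuffer buf = ByteBuffer.allocateDirect(8 * 1024 * 1024); // copy in 8 MB chunks
            long offset = 0;
            try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
                 FileChannel out = FileChannel.open(dst, StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
                int n;
                while ((n = in.read(buf)) != -1) {
                    buf.flip();
                    while (buf.hasRemaining()) {
                        out.write(buf);
                    }
                    buf.clear();
                    offset += n;
                }
            } catch (IOException e) {
                // The running offset narrows down which part of the source cannot be read.
                System.err.println("I/O error near byte offset " + offset);
                throw e;
            }
        }
    }

    If the copy dies at the same offset here too, the source file (or the disk sectors under it) is the problem rather than the Finder.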

  • E1000 wifi dead whenever I copy a large file between 2 PCs

    Whenever I try to copy a large file (e.g. 200 MB) from one PC to another (both connected wirelessly to the Linksys E1000 router), the wifi on the router dies. I can see the wifi indicator light on the router go off, and my PCs lose their connection (of course).
    I understand that the E1000 is a low-end router, so if it copies really slowly I can accept that. But how can the wifi just die out like that? Is there something I didn't set up properly?
    btw, this issue is present even after I upgraded to the latest firmware (2.1.02 build 5).
    Would greatly appreciate any advice.

    Hi,
    Thank you for your reply.
    1. The problem is not due to the firmware upgrade. The problem existed before the firmware upgrade and persists after the upgrade. But yes, I did power my router off and on again.
    2. I followed some instructions on this forum to change the following settings:
    Channel: 11
    MTU: 1340
    Beacon: 75
    Fragmentation Threshold: 2304
    RTS: 2304
    And strangely enough, it works now!
    3. This morning, upon seeing your reply, I decided to do some investigation to see which setting did the trick. I modified each setting back to the default, one by one, and tested the large file copy each time I reverted something back to default.
    Surprisingly, the file copy operation was successful throughout the tests, even upon reverting all settings back to default.
    So, what is the "problem" with this router? I had problems for 1 month with the default settings, and then suddenly all the problems disappear?
    Wai Kee

  • Mac OSX desktop dropping connection with multiple copy processes & large files

    The servers are 6.5 SP3 running NFAP; the Mac OS X machines are on 10.4.2, updated. The volume the Macs are using is part of a cluster. The users mount the volumes on their Macs and everything is for the most part fine. If they grab a bunch of files and copy them from desktop to server it's fine, as long as it's only a single copy process. The users are part of the hi-res department and the files can be 1GB or larger. If they drag one or more large files, and then while that's copying they drag some more files, so both copy processes are running at once, quite often the volume will dismount from the desktop and you will get "unable to copy because some resource is unavailable". Sometimes the Finder crashes, sometimes not. Often the files that were partially copied get locked, and the users need to reboot their Mac in order to delete them. I'm getting pretty desperate here; anyone have an idea what's going on? I don't know if this is a Tiger thing, or a large-file thing, or a multiple-copy-stream thing, a NetWare thing or a Mac thing. We have hundreds of other users running OS X 10.3 and earlier who are not reporting this problem, but they also don't copy files that size. Someone please tell me they have seen this before... thanks very much. Oh, before going to 6.5 and NFAP the servers were 5.1 with Prosoft server and they never had the problem.
    Jake

    Thanks for your help. I have incidents open now with Apple and Novell; I hope one of them can provide something for us. We tried applying 6.5 SP4 to a test server: the problem still happened but was "better". The copy operations still quit, but with SP4 applied the volume did not dismount, or if it did, it remounted automatically because it was still connected after OKing through the copy errors.

    Jeffrey D Sessler wrote:
    I tried two 2GB files. No problems at all, but I'm in a 100% end-to-end Gigabit environment. My server storage is also a very fast SAN.
    Best,
    Jeff

    Jacob Shorr wrote:
    Jeffrey,
    Have you tried the exact same test, dragging say two 500MB files in separate copy operations? I hear what you're saying about the 10/100 link, but we don't run gigabit to the desktops, and we're not going to anytime soon. Even if that could resolve the issue, we need some other kind of fix for our infrastructure. I will look into any errors on the switch.

    Jeffrey D Sessler wrote:
    Well, considering that I'm not seeing the issue on my 10.4.2 machines against my 6.5 SP3 servers, I'm not sure what you should do at this point. Since you say that the 10.3 machines don't have an issue, it makes it sound to me like this is an Apple issue.
    The logs point at a communication issue... Is there any way to get that Mac on to a Gigabit connection to see if you can duplicate it?
    The other option is to wait for 10.4.3 to be released and see if the problem goes away.
    Again, on only a 10/100 link, one copy of a large file _will_ saturate the link. Perhaps 10.4.2 has an issue with this?
    Also, when you're doing the copy, what do the error counters in the switches say?
    Jeff

    Jacob Shorr wrote:
    There are definitely no mis-matches. This has been checked and re-checked a dozen times. It's only on 10.4: we can replicate it on every 10.4 machine, and we cannot replicate it on any machine that is 10.3. What should I do to go about getting this fixed; should I be contacting Apple or Novell? The speed is always good until it actually decides to drop and cut off.

    Jeffrey D Sessler wrote:
    Looks like communication between the Mac and the NetWare server is dropping. AFP in 10.3 and 10.4 supports auto-reconnection, but I'm sure that it will fail the copy process.
    I'd first check to make sure that there are not any mis-matches on the switch, e.g. the Mac is set to Auto (as it should be) but someone has set the switch to a forced mode. Both should be auto. A duplex mis-match could cause the Mac not to see the heartbeat back from the Novell server.
    Like I said, if the workstation is only on 10/100, a single copy process on a G5 Mac will saturate that link. Adding more concurrent copies will only result in everything slowing down and taking longer, or you'll get the dropped connections.
    Best,
    Jeff

    Jacob Shorr wrote:
    Take a look at the last entries in the system log right after it happened; let me know if it means anything to you. Thanks.
    Sep 29 13:26:10 yapostolides kernel[0]: AFP_VFS afpfs_mount: /Volumes/FP04SYS11, pid 210
    Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect: doing reconnect on /Volumes/FP04SYS11
    Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect: connect to the server /Volumes/FP04SYS11
    Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect: Opening session /Volumes/FP04SYS11
    Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect: Logging in with uam 2 /Volumes/FP04SYS11
    Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect: Restoring session /Volumes/FP04SYS11
    Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_MountAFPVolume: GetVolParms failed 0x16
    Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect: afpfs_MountAFPVolume failed 22 /Volumes/FP04SYS11
    Sep 29 13:31:13 yapostolides KernelEventAgent[43]: tid 00000000 received VQ_DEAD event (32)
    Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_Reconnect: posting to KEA to unmount /Volumes/FP04SYS11
    Sep 29 13:31:13 yapostolides KernelEventAgent[43]: tid 00000000 type 'afpfs', mounted on '/Volumes/FP04SYS11', from 'afp_0TQCV10QsPgy0TShVK000000-4340.2c000006', dead
    Sep 29 13:31:13 yapostolides KernelEventAgent[43]: tid 00000000 found 1 filesystem(s) with problem(s)
    Sep 29 13:31:13 yapostolides kernel[0]: AFP_VFS afpfs_unmount: /Volumes/FP04SYS11, flags 524288, pid 43

    Jeffrey D Sessler wrote:
    We move large files all the time under SP3 with no issues; however, there are several finder/copy/AFP issues in Tiger that are due to be fixed in 10.4.3.
    Also, if you have any type of network issue such as duplex mis-matches, or are running, say, only a 10/100 network, a single Mac can not only fill the network pipe (transferring more than 10MB/sec) but also generate so many collisions (from a duplex mis-match) that you could drop communication to the server.
    What type of server (speed, disks, RAID level, NIC speed) and what type of network (switched gigabit, switched 10/100, shared 10/100, etc.)?
    How long does it take to copy that single 1GB file to the server?
    Does a single copy process always work?
    Jeff

  • File upload script not getting the file name for larger files

    Hi
    I have the following code (see extract below) and find that when the size of the file to upload is larger than about 300 KB the code does not grab the file name. Consequently the upload fails. The code works fine when the file size is smaller.
    The code in the form page is:
    <form
    action="UploadAttachment.asp?SubjectName=<%=Request.QueryString("SubjectName")%>&VersionNumber=<%=Request.QueryString("VersionNumber")%>&QualificationName=<%=Request.QueryString("QualificationName")%>"
    method="post" enctype="multipart/form-data" name="form1">
    <input name="file" type="file" size="100">
    <input name="Upload" type="submit" id="Upload" value="Upload">
    </form>
    The code in the UploadAttachment.asp page is:
    <%
    'Grab the file name
    Dim objUpload, strPath, SQLString
    Set objUpload = New clsUpload
    'There is a problem: this next line doesn't grab the file name if the file is too large.
    strFileName = objUpload.Fields("file").FileName
    etc.
    %>
    If you have any idea how to resolve this I'd be grateful.
    Neil
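
    Not from the original thread, but the ~300 KB threshold is suspicious: classic ASP caps the request entity size, and on IIS 6 the metabase property AspMaxRequestEntityAllowed defaults to 204,800 bytes (200 KB). If the upload component reads the whole request, anything past that limit is truncated and the multipart fields, including the file name, are lost. If that is the cause here (an assumption, since clsUpload's internals aren't shown), raising the limit, for example with "cscript adsutil.vbs SET w3svc/ASPMaxRequestEntityAllowed 10485760" from the AdminScripts folder, should move or remove the failure point.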


  • GPP File Copy overwrite newer file

    I don't know if I'm doing something wrong, or if this is a feature, but does File Copy from GPP overwrite the destination file even if the destination has a newer timestamp?
    I actually want this to happen, but it is not happening. Hence, I am assuming that if the destination file is newer, then it does not copy.
    Thanks,
    DM

    > Does GPP file copy run with SYSTEM or USER credentials, when you do not
    > enable "Run in logged-on user's security context"?
    SYSTEM
    > Tried to copy a file to C:\Windows area (with a non-admin account), and
    > seemed to fail.
    Some files (as well as some registry keys) are owned by TrustedInstaller - even SYSTEM lacks write access to them. And of course, you cannot overwrite a file with open handles :)
    Martin

  • File Splitting for Large File processing in XI using EOIO QoS.

    Hi
    I am currently working on a scenario to split a large file (700MB) using the sender file adapter's "Recordset Structure" property (e.g. Row, 5000). As the files are split and mapped, they are appended to a destination file. In an example scenario, a file of 700MB comes in (say with 20000 records) and the destination file should have 20000 records.
    To ensure no records are missed during the process through XI, EOIO QoS is used. A trigger record is appended to the incoming file (the trigger record structure is the same as the main payload recordset) using a UNIX shell script before it is read by the sender file adapter.
    XPATH conditions are evaluated in the receiver determination to either append the record to the main destination file or create a trigger file with only the trigger record in it.
    The problem we are faced with is that the "Recordset Structure" (e.g. Row, 5000) splits in chunks of 5000, and when the remaining records of the main payload are fewer than 5000 (say 1300), those remaining 1300 lines get grouped up with the trigger record and written to the trigger file instead of the actual destination file.
    For the sake of this forum I have listed a sample scenario xml file representing the inbound file, with the last record with duns = "9999" as the trigger record that will be used to mark the end of the file after splitting and appending.
    <?xml version="1.0" encoding="utf-8"?>
    <ns:File xmlns:ns="somenamespace">
    <Data>
         <Row>
              <Duns>"001001924"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001925"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001926"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001927"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001928"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001929"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"9999"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
    </Data>
    </ns:File>
    In the sender file adapter I have, for test purposes, set the "Recordset structure" to "Row,5" for the sample xml inbound file above.
    I have two XPATH expressions in the receiver determination to take the last recordset with Duns = "9999" and send it to the receiver (communication channel) that creates the trigger file.
    In my test case the first 5 records get appended to the correct destination file. But the last two records (the 6th and 7th) get sent to the receiver channel that is only supposed to take the trigger record (the last record with Duns = "9999").
    Destination file (this is where all the records with Duns NE "9999" are supposed to get appended):
    <?xml version="1.0" encoding="UTF-8"?>
    <R3File>
         <R3Row>
              <Duns>"001001924"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001925"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001926"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
               <Extract_Code>"A"</Extract_Code>
         </R3Row>
              <R3Row>
              <Duns>"001001927"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
              <R3Row>
              <Duns>"001001928"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
    </R3File>
    Trigger File:
    <?xml version="1.0" encoding="UTF-8"?>
    <R3File>
              <R3Row>
              <Duns>"001001929"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Ccr_Extract_Code>"A"</Ccr_Extract_Code>
         </R3Row>
              <R3Row>
              <Duns>"9999"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Ccr_Extract_Code>"A"</Ccr_Extract_Code>
         </R3Row>
    </R3File>
    I've tested the XPATH condition in XML Spy and that works fine. My doubts are about the property "Recordset structure" set as "Row,5".
    Any suggestions on this will be very helpful.
    Thanks,
    Mujtaba
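
    A worked example suggests the likely mechanism (this is an inference from the scenario above, not a confirmed diagnosis): with "Row,5" and the 7-record sample file, the adapter emits two messages, one holding rows 1-5 and one holding rows 6-7. Receiver determination is evaluated per message, not per row, so the condition "a Row with Duns = "9999" exists" is true for the entire second message, and both row 6 and the trigger row are routed to the trigger channel. That matches the output shown exactly; separating the trigger at row level would have to happen in a mapping split rather than in the receiver-determination XPATH.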

    Hi Debnilay,
    We do have a 64-bit architecture and we still have the file processing problem. Currently we are splitting the file into smaller chunks and processing them, but we want to process the file as a whole.
    Thanks
    Steve

  • Web Service to transmit a file (a very large file)

    Greetings,
    Is it possible to use a web service to transmit a file, sometimes a very large file? If so, what are the security implications of doing such a thing?
    Thanks,
    Charles

    Quick answer: It depends... Did you try using Google to look-up examples of doing this? Here is a url to try: http://www.google.com/#sclient=psy&hl=en&source=hp&q=web+service+transfer+file&pbx=1&oq=web+service+transfer+file&aq=f&aqi=g1g-j3g-b1&aql=&gs_sm=e&gs_upl=1984731l1997518l0l1997874l25l19l0l1l1l0l304l3879l0.7.10.1l18l0&bav=on.2,or.r_gc.r_pw.&fp=7caaed5eb0414d97&biw=1280&bih=871
    Thank you,
    Tony Miller
    Webster, TX
    Never Surrender Dreams!
    JMS
    If this question is answered, please mark the thread as closed and assign points where earned.
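
    To make the question concrete: the usual pattern is to stream the file rather than buffer it in memory, and to cover the security side by using HTTPS plus authentication on the endpoint. A minimal client-side sketch in Java (the PUT endpoint is a placeholder, not a real service):

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class StreamUpload {
        public static void main(String[] args) throws IOException {
            Path file = Paths.get("bigfile.bin");                  // placeholder
            URL url = new URL("https://example.com/upload");       // placeholder; https keeps the transfer encrypted
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setDoOutput(true);
            conn.setRequestMethod("PUT");
            // Chunked streaming keeps client memory flat: the file is never buffered whole.
            conn.setChunkedStreamingMode(64 * 1024);
            conn.setRequestProperty("Content-Type", "application/octet-stream");
            try (InputStream in = Files.newInputStream(file);
                 OutputStream out = conn.getOutputStream()) {
                byte[] buf = new byte[64 * 1024];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            }
            System.out.println("Server responded: " + conn.getResponseCode());
        }
    }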

  • Files copied as .lnk files to USB do not show up in Windows

    When I copy files from the MBP to a USB disk, instead of copying the files as is, the Mac converts the files into a .LNK file, which then shows up as "shortcuts" rather than the copied file in Windows. Why is this?

    same here.. it happened after installing Lion and updating iTunes
    My nano does not appear on iTunes or desktop, however it charges.
    iPod nano 2nd Gen
    Mac OS X Lion 10.7
    iTunes 10.4
    Please don't tell me that Apple added my lovely nano to a blacklist!!

  • Files.copy java.nio.file.AccessDeniedException

    Hello everybody,
    I hope you are well. Me, not so much.
    I use this method to unzip:
    public void deziper(final Path zipFile, final Path destDir) throws IOException {
        // Create a FileSystem backed by the zip archive:
        try (FileSystem zfs = FileSystems.newFileSystem(zipFile, null)) {
            // Walk every root of the archive:
            for (Path root : zfs.getRootDirectories()) {
                // And traverse each of their trees:
                Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
                    private Path unzippedPath(Path path) {
                        return Paths.get(destDir.toString(), path.toString()).normalize();
                    }
                    @Override
                    public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {
                        // Create each intermediate directory:
                        Files.createDirectories(unzippedPath(dir));
                        return FileVisitResult.CONTINUE;
                    }
                    @Override
                    public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
                        // And copy each file:
                        Files.copy(file, unzippedPath(file), StandardCopyOption.COPY_ATTRIBUTES, StandardCopyOption.REPLACE_EXISTING);
                        return FileVisitResult.CONTINUE;
                    }
                });
            }
        }
    }
    Replacing all the files works well, but when the application tries to replace an executable jar file in the same directory, this error is thrown:
    09oct.2012 13:52:12,484 - 0 [AWT-EventQueue-0] WARN barakahfx.ModuleUpdater - Exception
    java.nio.file.AccessDeniedException: G:\Program Files\Djindo\Imanis\Djindo.jar
         at sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
         at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
         at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
         at sun.nio.fs.WindowsFileSystemProvider.implDelete(Unknown Source)
         at sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(Unknown Source)
         at java.nio.file.Files.deleteIfExists(Unknown Source)
         at java.nio.file.CopyMoveHelper.copyToForeignTarget(Unknown Source)
         at java.nio.file.Files.copy(Unknown Source)
         at lib.GererFichier$3.visitFile(GererFichier.java:167)
         at lib.GererFichier$3.visitFile(GererFichier.java:150)
         at java.nio.file.FileTreeWalker.walk(Unknown Source)
         at java.nio.file.FileTreeWalker.walk(Unknown Source)
         at java.nio.file.FileTreeWalker.walk(Unknown Source)
         at java.nio.file.Files.walkFileTree(Unknown Source)
         at java.nio.file.Files.walkFileTree(Unknown Source)
         at lib.GererFichier.deziper(GererFichier.java:150)
         at barakahfx.ModuleUpdater.demarrerUpdate(ModuleUpdater.java:98)
         at barakahfx.AppliCtrl.demarrer(AppliCtrl.java:92)
         at barakahfx.BarakahFx$1.actionPerformed(BarakahFx.java:56)
         at javax.swing.Timer.fireActionPerformed(Unknown Source)
         at javax.swing.Timer$DoPostEvent.run(Unknown Source)
         at java.awt.event.InvocationEvent.dispatch(Unknown Source)
         at java.awt.EventQueue.dispatchEventImpl(Unknown Source)
         at java.awt.EventQueue.access$200(Unknown Source)
         at java.awt.EventQueue$3.run(Unknown Source)
         at java.awt.EventQueue$3.run(Unknown Source)
         at java.security.AccessController.doPrivileged(Native Method)
         at java.security.ProtectionDomain$1.doIntersectionPrivilege(Unknown Source)
         at java.awt.EventQueue.dispatchEvent(Unknown Source)
         at java.awt.EventDispatchThread.pumpOneEventForFilters(Unknown Source)
         at java.awt.EventDispatchThread.pumpEventsForFilter(Unknown Source)
         at java.awt.EventDispatchThread.pumpEventsForHierarchy(Unknown Source)
         at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
         at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
         at java.awt.EventDispatchThread.run(Unknown Source)
    Thank you very much

    Looks like you are trying to move the jar file you are executing and Windows has placed a lock on it. As far as I am aware there is no simple way round this. Why do you think you need to do this?
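
    One common workaround, sketched below under stated assumptions: have the updater stage the download under a different name (Djindo.jar.new here is hypothetical), and let a tiny separate launcher jar, which Windows has not locked, perform the swap before the application jar is ever loaded:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    // Hypothetical launcher: it runs from its own jar, so the application jar is not locked yet.
    public class Launcher {
        public static void main(String[] args) throws IOException, InterruptedException {
            Path current = Paths.get("Djindo.jar");     // the jar that was locked in the stack trace
            Path update  = Paths.get("Djindo.jar.new"); // staged by the updater (assumed name)
            if (Files.exists(update)) {
                Files.move(update, current, StandardCopyOption.REPLACE_EXISTING);
            }
            // Start the real application in a separate JVM.
            new ProcessBuilder("java", "-jar", current.toString())
                    .inheritIO()
                    .start()
                    .waitFor();
        }
    }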

  • Put File of a large file through FTP not working correctly

    I put a lot of large files on my web server through Dreamweaver, and when I put files larger than about 45 MB, the file itself is put to the server fine. But then Dreamweaver waits for a server response to make sure it got the file, and after about 30 seconds or so it says the server timed out, and it starts over with the FTP, and the first thing it does on the start-over is to delete the file.
    The only way I have been able to put my large files on the server is to sit and wait for it to finish, and then cancel the check for whether the file is there or not.
    I have my FTP server timeout set to 300 seconds, but that does not help either; it still times out after about 30 seconds.
    Is there a fix for this so I don't have to sit and watch my files being transferred to the server?

    I changed it to passive FTP, but that did not help; sending a 90 MB file did the same thing. The problem is that the Dreamweaver FTP wants to verify that the file got there, and when you send a file that large, the server wants to make sure it is a clean file, so it takes a while to put it in the correct location. And while the server is doing this, Dreamweaver says that there is no response from the server and causes a reconnect. And on the reconnect it starts over, and the first thing it does is make sure the file is deleted. What it should do is verify that the file is there and has the same date as the one you sent up. That would eliminate this problem. I was hoping there was a setting in Dreamweaver to tell the FTP not to verify the send.
    Are there any Dreamweaver developers that visit this forum who could tell me how to bypass the verify on put? It seems silly to have a tool that costs this much that cannot do a task that a free piece of software can do.

  • Error code 0 when copying a large file to a Mac OS Extended formatted drive

    I always back up my photos manually by dragging a folder containing many folders containing many photos onto an external drive that I formatted as Mac OS Extended (Journaled) the day I bought it 2 years ago. I now run into an Error Code 0 when I do this. The "mother" folder is 240 GB in size. As I stated, this drive has been formatted for the Mac, so I don't understand why it's (apparently) running into a problem peculiar to FAT32-formatted drives. My iMac has 16 GB of RAM, of which 10 GB was free the last time I tried to do this.

    Well, you did mention...
    "so I don't understand why it's (apparently) running into a problem peculiar to FAT32-formatted drives"
    I kind of had to assume at that point that the source drive was the one you formatted as Mac OS Extended two years ago, and the target drive was FAT32. If not, then I'm not sure why you mentioned FAT32 at all.
    In that same thinking, error code 0 is directly related to FAT-formatted drives: FAT32 cannot hold a single file of 4 GB or larger, and the Finder reports that limit as error code 0.

  • File adapter and large file

    Hi experts, I have a problem:
    I have configured a sender file adapter to pick up a 4MB file from an FTP server, but when I upload the file to FTP, the file adapter doesn't pick it up; the file just stays in the folder.
    I have tried uploading a small file, about 16KB, and it works fine, without any errors.
    I checked the communication channel log in RWB; there are no errors, all LEDs are green.
    So I don't know how to get the 4MB file processed. I also checked all rights and permissions for the file and user; they all have admin rights - "777".
    Can anybody give me a suggestion or solution?
    Thanks all for any reply.

    Hi,
    Try increasing your server parameters as below; then you should be able to process large data.
    • UME parameters: we may need to look into the pool size and pool max wait parameters - UME recommended parameters (like: poolmaxsize=50, poolmaxwait=60000)
    • Tuning parameters: we may need to look at/define the message size limit (like: EO_MSG_SIZE_LIMIT = 0000100) under the tuning category
    • ICM parameters: we may need to consider the ICM parameters (e.g. icm/conn_timeout = 900000, icm/HTTP/max_request_size_KB = 2097152)
    Regards,
    Naveen

  • Large file copy fails through 4240 sensor

    A customer attempts to copy a large file from a server in an IPS-protected vlan to a host in an IPS-unprotected vlan, and the copy fails if the file is greater than about 2 GB in size. If the server is moved to the unprotected vlan, the copy succeeds. There are no events on the IPS suggesting any blocking or other actions.

    The CPU does occasionally peak at 100% when transferring a large file, but the copy often fails when the CPU is significantly lower. I know a 4240 has 300 Mbit/s of throughput, but as I understood it, traffic beyond that would still be serviced, just bypassing the inspection process; maybe a transition from inspection to non-inspection causes the copy to fail, like a TCP reset. I may try a sniffer.
    I do have TAC involved, but I like to tap the knowledge of other expert users like yourself to try to rectify issues. Thanks for your help. If you have any other comments, please let me know; I will certainly post my findings if you are interested.

  • Network speed affected by large file copy operations. Also, why intermittent network outages?

    Hi
    I have a couple of issues on our company network.
    The first is that a single large file copy impacts the entire network and dramatically reduces network speed, and the second is that there are periodic outages where file open/close/save operations may appear to hang, and also where programs that rely on network connectivity, e.g. email, appear to hang. It is as though the PC loses its connection to the network, but the status of the network icon does not change. For the second issue, if we wait, the program will respond, but the wait period can be up to 1 min. The downside is that this affects Access databases on our server, so that when an 'outage' occurs the Access client cannot recover and hangs permanently.
    We have a Windows Active Directory domain that comprises Windows 2003 R2 (soon to be decommissioned), Windows Server 2008 Standard and Windows Server 2012 R2 Standard domain controllers. There are two member servers: a file server running Windows 2008 Storage Server and a remote access server (which also runs WSUS) running Windows Server 2012 Standard. The clients comprise about 35 Win7 PCs and 1 Vista PC.
    When I copy or move a large file from the 2008 Storage Server to my Win7 client, other staff experience massive slowdowns when accessing the network. Recently I was moving several files from the Storage Server to my local drive. The files comprised pairs (e.g. folo76t5.pmm and folo76t5.pmi), one of which is less than 1MB and the other varies between 1.5 - 1.9GB. I was moving two files at a time, so the total file size for each operation was just under 2GB.
    While the file move operation was taking place, a colleague was trying to open a 36k Excel file. After waiting 3 mins he asked me for help. I did some tests and noticed that when I was not copying large files he could open the Excel file immediately. When I started copying more data from the Storage Server to my local drive, it took several minutes before his PC could open the Excel file.
    I also noticed on my Win7 client that our email client (Pegasus Mail), which was the only application I had open at the time, would hang when the move operation was started, and it would take at least a minute for it to start responding. Ordinarily we work with many files.
    Anyone have any suggestions, please? This is something that is affecting all clients. I can't carry out file maintenance on large files during normal work hours if network speed is going to be so badly impacted.
    I'm still working on the intermittent network outages (the second issue), but if anyone has any suggestions about what may be causing this I would be grateful if you could share them.
    Thanks

    What have you checked for resource usage during one of these copies of a large file?
    At a minimum I would check Task Manager>Resource Monitor.  In particular check the disk and network usage.  Also, look at RAM and CPU while the copy is taking place.
    What RAID level is there on the file server?
    There are many possible areas that could be causing your problem(s).  And it could be more than one thing.  Start by checking these things.  And go from there.
    Hi, JohnB352
    Thanks for the suggestions. I have monitored the server and can see that the memory is nearly maxed out, with a lot of hard faults (varying between several hundred and several thousand) recorded during normal usage. The disk and CPU seem normal.
    I'm going to replace the RAM and double it up to 12GB.
    Thanks! This may help with some other issues we are having. I'll post back after it has been done.
    [Edit]
    Forgot to mention: there are 6 drives in the server. 2 for the OS (Mirrored RAID 1) and 4 for the data (Striped RAID 5).
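
    Until the root cause is found, one stopgap for work-hours maintenance is to throttle the copy on the client so it cannot saturate the link. A rough sketch (assuming Java is available on the client; the 5 MB/s cap is an arbitrary example):

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class ThrottledCopy {
        // Copies src to dst at roughly maxBytesPerSec, leaving headroom on the LAN for other users.
        static void copy(Path src, Path dst, long maxBytesPerSec) throws IOException, InterruptedException {
            byte[] buf = new byte[256 * 1024];
            long windowStart = System.nanoTime();
            long bytesThisWindow = 0;
            try (InputStream in = Files.newInputStream(src);
                 OutputStream out = Files.newOutputStream(dst)) {
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                    bytesThisWindow += n;
                    if (bytesThisWindow >= maxBytesPerSec) {
                        // Sleep out the rest of the one-second window, then reset the counter.
                        long elapsedMs = (System.nanoTime() - windowStart) / 1_000_000;
                        if (elapsedMs < 1000) Thread.sleep(1000 - elapsedMs);
                        windowStart = System.nanoTime();
                        bytesThisWindow = 0;
                    }
                }
            }
        }

        public static void main(String[] args) throws Exception {
            // Example: cap at 5 MB/s (about 40 Mbit/s) so a 100 Mbit/s link keeps headroom free.
            copy(Paths.get(args[0]), Paths.get(args[1]), 5L * 1024 * 1024);
        }
    }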
