CFContent failing on large files since moving to ColdFusion 9

I keep getting "The website cannot display the page". It is OK when the file is 50MB, but at 300MB or above it fails.
It worked on CF8. We migrated to CF9 over the weekend and all the settings are the same.
I found that several CF users have run into a similar issue - so I know it's not only me - like Dan, Tom, etc. (http://www.mail-archive.com/[email protected]/msg348532.html)
Thanks,
Pat.

This has caused me a bit of a nightmare. When you upgrade versions, you expect things to at least improve. This has to be a priority fix - it's a howling problem!
Anyhow, for the benefit of others in a similar situation, I eventually got around this by creating a read-only FTP account (literally read-only - the user can't even list files/directories) on our web server and then I use a cflocation redirect instead of the cfcontent approach to serve up the file. I use UUIDs as folder names, so it's a pretty safe way of stopping people from downloading content they shouldn't.
So, instead of this...
<cfset Path = "#pathToMyFile#\#myFileName#">
<cfset FileInfo = GetFileInfo(Path)>
<cfset FileSize = FileInfo.size>
<cfset MimeType = getPageContext().getServletContext().getMimeType(Path)> <!--- Returns null for unknown types, leaving MimeType undefined, so we check with IsDefined() below --->
<cfheader name="Content-Disposition" value="attachment; filename=""#myFileName#""">
<cfheader name="Expires" value="#GetHttpTimeString(Now())#"> <!--- HTTP dates must be in RFC 1123 format, so format Now() rather than emitting it raw --->
<cfheader name="Content-Length" value="#FileSize#">
<cfif IsDefined("MimeType")>
     <cfcontent type="#MimeType#" file="#Path#" deletefile="No">
<cfelse>
     <cfcontent type="application/octet-stream" file="#Path#" deletefile="No">
</cfif>
I now have this...
<cfset FTPDownloadLink = "ftp://[email protected]/myFTPPath/#myFileName#">
<cflocation url="#FTPDownloadLink#" addtoken="no">
I suspect that this is actually a better way of doing things anyway, since it offloads the responsibility for serving the file from the CF server to the FTP server, which is, after all, what it is designed to do.
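If you'd rather keep serving the file over HTTP, the underlying cure is to stream it to the client in fixed-size chunks instead of letting the server buffer the whole thing in memory, which appears to be what cfcontent is doing here in CF9. Here's a minimal Java servlet sketch of that idea - the directory, file name and buffer size are placeholder assumptions, not anything from our actual setup:

import java.io.*;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Minimal sketch: serve a large file by copying it to the response in
// fixed-size chunks, so the full file is never held in memory at once.
// The directory and file name below are hypothetical placeholders.
public class LargeFileServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        File file = new File("/data/downloads", "bigfile.zip"); // assumed location
        resp.setContentType("application/octet-stream");
        resp.setHeader("Content-Disposition",
                "attachment; filename=\"" + file.getName() + "\"");
        resp.setHeader("Content-Length", String.valueOf(file.length()));
        byte[] buf = new byte[8192]; // one reusable 8 KB buffer
        try (InputStream in = new FileInputStream(file);
             OutputStream out = resp.getOutputStream()) {
            int n;
            while ((n = in.read(buf)) > 0) { // copy chunk by chunk
                out.write(buf, 0, n);
            }
        }
    }
}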

Similar Messages

  • Wpg_docload fails with "large" files

    Hi people,
    I have an application that allows the user to query and download files stored in an external application server that exposes its functionality via webservices. There's a lot of overhead involved:
    1. The user queries the file from the application and gets a link that allows her to download the file. She clicks on it.
    2. Oracle submits a request to the webservice and gets a XML response back. One of the elements of the XML response is an embedded XML document itself, and one of its elements is the file, encoded in base64.
    3. The embedded XML document is extracted from the response, and the contents of the file are stored into a CLOB.
    4. The CLOB is converted into a BLOB.
    5. The BLOB is pushed to the client.
    Problem is, it only works with "small" files, less than 50 KB. With "large" files (more than 50 KB), the user clicks on the download link and about one second later gets a
    "The requested URL /apex/SCHEMA.GET_FILE was not found on this server" error. When I run the webservice outside Oracle, it works fine. I suppose it has to do with PGA/SGA tuning.
    It looks a lot like the problem described at this Ask Tom question.
    Here's my slightly modified code (XMLRPC_API is based on Jason Straub's excellent [Flexible Web Service API|http://jastraub.blogspot.com/2008/06/flexible-web-service-api.html]):
    CREATE OR REPLACE PROCEDURE get_file ( p_file_id IN NUMBER )
    IS
        l_url                  VARCHAR2( 255 );
        l_envelope             CLOB;
        l_xml                  XMLTYPE;
        l_xml_cooked           XMLTYPE;
        l_val                  CLOB;
        l_length               NUMBER;
        l_filename             VARCHAR2( 2000 );
        l_filename_with_path   VARCHAR2( 2000 );
        l_file_blob            BLOB;
    BEGIN
        SELECT FILENAME, FILENAME_WITH_PATH
          INTO l_filename, l_filename_with_path
          FROM MY_FILES
         WHERE FILE_ID = p_file_id;
        l_envelope := q'!<?xml version="1.0"?>!';
        l_envelope := l_envelope || '<methodCall>';
        l_envelope := l_envelope || '<methodName>getfile</methodName>';
        l_envelope := l_envelope || '<params>';
        l_envelope := l_envelope || '<param>';
        l_envelope := l_envelope || '<value><string>' || l_filename_with_path || '</string></value>';
        l_envelope := l_envelope || '</param>';
        l_envelope := l_envelope || '</params>';
        l_envelope := l_envelope || '</methodCall>';
        l_url := 'http://127.0.0.1/ws/xmlrpc_server.php';
        -- Download XML response from webservice. The file content is in an embedded XML document encoded in base64
        l_xml := XMLRPC_API.make_request( p_url      => l_url,
                                          p_envelope => l_envelope );
        -- Extract the embedded XML document from the XML response into a CLOB
        l_val := DBMS_XMLGEN.convert( l_xml.extract('/methodResponse/params/param/value/string/text()').getclobval(), 1 );
        -- Make a XML document out of the extracted CLOB
        l_xml := xmltype.createxml( l_val );
        -- Get the actual content of the file from the XML
        l_val := DBMS_XMLGEN.convert( l_xml.extract('/downloadResult/contents/text()').getclobval(), 1 );
        -- Convert from CLOB to BLOB
        l_file_blob := XMLRPC_API.clobbase642blob( l_val );
        -- Figure out how big the file is
        l_length    := DBMS_LOB.getlength( l_file_blob );
        -- Push the file to the client
    owa_util.mime_header( 'application/octet-stream', FALSE );
        htp.p( 'Content-length: ' || l_length );
        htp.p( 'Content-Disposition: attachment;filename="' || l_filename || '"' );
        owa_util.http_header_close;
        wpg_docload.download_file( l_file_blob );
    END get_file;
    /
    I'm running XE, PGA is 200 MB, SGA is 800 MB. Any ideas?
    Regards,
    Georger

    Script: http://www.indesignsecrets.com/downloads/MultiPageImporter2.5JJB.jsx.zip
    It works great for files up to ~400 pages; when a file has more pages than that, I get the crash at around page 332.
    Thanks

  • Uploading Very Large Files via HTTP

    I am developing some classes that must upload files to a web server via HTTP using multipart/form-data. I am using Apache's Commons FileUpload library (commons-fileupload-1.0.jar) on the server side. My code fails on large files, or on large quantities of small files, because of the memory restriction of the VM. For example, when uploading a 429 MB file I get this exception:
    java.lang.OutOfMemoryError
    Exception in thread "main"
    I have never been successful in uploading, regardless of the server-side component, more than ~30 MB.
    In a production environment I cannot alter the clients VM memory setting, so I must code my client classes to handle such cases.
    How can this be done in Java? This is the method that reads in a selected file and immediately writes it to the output stream of the web resource, referenced by bufferedOutputStream:
    private void write(File file) throws IOException {
      byte[] buffer = new byte[bufferSize];
      BufferedInputStream fileInputStream = new BufferedInputStream(new FileInputStream(file));
      // read in the file
      if (file.isFile()) {
        System.out.print("----- " + file.getName() + " -----");
        while (fileInputStream.available() > 0) {
          if (fileInputStream.available() >= 0 &&
              fileInputStream.available() < bufferSize) {
            buffer = new byte[fileInputStream.available()];
          }
          fileInputStream.read(buffer, 0, buffer.length);
          bufferedOutputStream.write(buffer);
          bufferedOutputStream.flush();
        }
        // close the file's input stream
        try {
          fileInputStream.close();
        } catch (IOException ignored) {
          fileInputStream = null;
        }
      } else {
        // do nothing for now
      }
    }
    The problem is, the entire file, and any subsequent files being read in, are all being packed onto the output stream and don't begin actually moving until close() is called. Eventually the VM gives way.
    I require my client code to behave no differently than the typical web browser when uploading or downloading a file via HTTP. I know of several commercial applets that can do this; why can't I? Can someone please educate me, or at least point me to a useful resource?
    Thank you,
    Henryiv

    Are you guys suggesting that the failures I'm
    experiencing in my client code is a direct result of
    the web resource's (servlet) caching of my request
    (files)? Because the exception that I am catching is
    on the client machine and is not generated by the web
    server.
    trumpetinc, your last statement intrigues me. It
    sounds as if you are suggesting having the client code
    and the servlet code open sockets and talk directly
    with one another. I don't think our customers would
    like that too much.

    Answering your first question:
    Your original post made it sound like the server is running out of memory. Is the out of memory error happening in your client code???
    If so, then the code you provided is a bit confusing - you don't tell us where you are getting the bufferedOutputStream - I guess I'll just assume that it is a properly configured member variable.
    OK - so now, on to what is actually causing your problem:
    You are sending the stream in a very odd way. I highly suspect that your call to
    buffer = new byte[fileInputStream.available()];
    is resulting in a massive buffer (fileInputStream.available() probably just returns the size of the file).
    This is what is causing your out of memory problem.
    The proper way to send a stream is as follows:
     static public void sendStream(InputStream is, OutputStream os, int bufsize)
                 throws IOException {
          byte[] buf = new byte[bufsize];
          int n;
          while ((n = is.read(buf)) > 0) {
               os.write(buf, 0, n);
          }
     }
     static public void sendStream(InputStream is, OutputStream os)
                 throws IOException {
          sendStream(is, os, 2048);
     }
     The simple implementation with the hard-coded 2048 buffer size is fine for almost any situation.
    Note that in your code, you are allocating a new buffer every time through your loop. The purpose of a buffer is to have a block of memory allocated that you then move data into and out of.
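    For illustration, here's how the original write() method might collapse once it uses sendStream - a hypothetical sketch, assuming bufferedOutputStream is the same member variable as in the original post and that this method sits in the same class as sendStream:

    // Hypothetical rewrite of write() using the sendStream above:
    // one fixed-size buffer inside sendStream, no available()-sized allocations.
    private void write(File file) throws IOException {
        if (!file.isFile()) {
            return; // nothing to do for directories
        }
        System.out.print("----- " + file.getName() + " -----");
        BufferedInputStream in = new BufferedInputStream(new FileInputStream(file));
        try {
            sendStream(in, bufferedOutputStream, 2048);
            bufferedOutputStream.flush();
        } finally {
            in.close(); // always release the file handle
        }
    }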
    Answering your second question:
    No - actually, I'm suggesting that you use an HttpURLConnection to connect to your servlet directly - no need for special sockets or ports, or even custom protocols.
    Just emulate what your browser does, but do it in the applet instead.
    There's nothing that says you can't send a large payload to an HTTP servlet without multipart MIME encoding it. It's just that that is what browsers do when uploading a file using a standard HTML form tag.
    I can't see that a customer would have anything to say on the matter at all - you are using standard ports and standard communication protocols... Unless you are not in control of the server side implementation, and they've already dictated that you will mime-encode the upload. If that is the case, and they are really supporting uploads of huge files like this, then their architect should be encouraged to think of a more efficient upload mechanism (like the one I describe) that does NOT mime encode the file contents.
    - K
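    For reference, a minimal sketch of the kind of upload K describes - a plain HttpURLConnection POST straight to the servlet, streamed in fixed-size chunks. The URL, content type and buffer size are assumptions for illustration; setChunkedStreamingMode (Java 5+) is what keeps the client from buffering the entire request body:

    import java.io.*;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Minimal sketch: POST a large file straight to a servlet without
    // multipart encoding. The URL below is a hypothetical placeholder.
    public class LargeFileUploader {
        public static void upload(File file, String servletUrl) throws IOException {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(servletUrl).openConnection();
            conn.setDoOutput(true);
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/octet-stream");
            conn.setChunkedStreamingMode(8192); // stream in 8 KB chunks, never buffer the whole body
            byte[] buf = new byte[8192];        // one reusable buffer
            try (InputStream in = new FileInputStream(file);
                 OutputStream out = conn.getOutputStream()) {
                int n;
                while ((n = in.read(buf)) > 0) {
                    out.write(buf, 0, n);
                }
            }
            int status = conn.getResponseCode(); // force the exchange to complete
            System.out.println(file.getName() + " -> HTTP " + status);
            conn.disconnect();
        }

        public static void main(String[] args) throws IOException {
            upload(new File(args[0]), "http://localhost:8080/upload"); // assumed URL
        }
    }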

  • How can I get to read a large file and not break it up into bits

    Hi,
    How do I read a large file without having to cut the file into bits, each with its own beginning and ending?
    like
    1.aaa
    2.aaa
    3.aaa
    4....
    10.bbb
    11.bbb
    12.bbb
    13.bbb
    if the file was read up to line 11 and I wanted to read at line 3 and then read again at line 10,
    how do I specify the byte position in the large file, since the read function has the signature read(byte b[], index, bytes to read)
    and the index only applies to the array of bytes itself?
    Thanks
    San Htat

    tjacobs01 wrote:
    Peter__Lawrey wrote:
    Try RandomAccessFile.
    Not only do I hate RandomAccessFiles because of their inefficiency and limited use in today's computing world...

    The one dominated by small devices with SSD? Or the one dominated by large database servers and b-trees?

    tjacobs01 also wrote:
    I would also like to hate on the name 'RandomAccessFile': almost always, there's nothing 'random' about the access. I tend to think of the tens of thousands of databases users were found to have created on local drives in one previous employer's audit. Where's the company's mission-critical software? It's in some random Access file. Couldn't someone have come up with a better name, like NonlinearAccessFile? I guess the same goes for RAM too...

    Non-linear would imply access times other than O(N), but typically not constant, whereas RAM is nominally O(1), except that it is highly optimised for consecutive access, as are spinning disk files - except RAM is fast in either direction.
    [one of these things is not like the other|http://www.tbray.org/ongoing/When/200x/2008/11/20/2008-Disk-Performance#p-11] - silicon disks are much better at random access than rust disks - and [Machine architecture|http://video.google.com/videoplay?docid=-4714369049736584770] at about 1:40 - RAM is much worse at random access than sequential.
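    Coming back to the original question, a minimal sketch of reading from an arbitrary byte offset with RandomAccessFile.seek() - the file name and offsets are made up for illustration:

    import java.io.IOException;
    import java.io.RandomAccessFile;

    // Minimal sketch: jump to arbitrary byte offsets in a large file instead
    // of reading it front to back. File name and offsets are illustrative only.
    public class SeekDemo {
        public static void main(String[] args) throws IOException {
            RandomAccessFile raf = new RandomAccessFile("large.dat", "r");
            try {
                byte[] buf = new byte[4096];
                raf.seek(12345L); // position the file pointer at byte 12345
                int n = raf.read(buf, 0, buf.length); // offset/length here index
                                                      // into buf, not into the file
                System.out.println("read " + n + " bytes at offset 12345");
                raf.seek(99L); // seeking backwards works too
                n = raf.read(buf);
                System.out.println("read " + n + " bytes at offset 99");
            } finally {
                raf.close();
            }
        }
    }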

  • Large file copy fails through 4240 sensor

    Customer attempts to copy a large file from a server in an IPS protected vlan to a host in an IPS un-protected vlan and the copy fails if file is greater than about 2Gbytes in size. If the server is moved to the un-protected vlan the copy succeeds. There are no events on the IPS suggesting any blocking or other actions.

    The CPU does occasionally peak at 100% when transferring a large file, but the copy often fails when the CPU is significantly lower. I know a 4240 has 300Mbit/s throughput, but as I understood it, traffic exceeding that would still be serviced, just bypassing the inspection process; maybe a transition from inspection to non-inspection causes the copy to fail, like a TCP reset. I may try a sniffer.
    I do have TAC involved, but I like to try to utilise the knowledge of other expert users like yourself to rectify issues. Thanks for your help. If you have any other comments please let me know; I will certainly post my findings if you are interested.

  • Failing to print very large files

    I've got an iMac running the newest Leopard, printing to a network printer. Printing usually works without incident. However, when printing large documents -- say a 20-40MB PowerPoint presentation with lots of color -- I receive the following message after 10-20 minutes of waiting:
    /usr/libexec/cups/backend/socket failed
    The printer is an HP Color LaserJet 8550DN with 96MB RAM and a 3.1GB internal HD.
    When printing large documents to this printer, printing seems to proceed correctly (if slowly) before reaching the inevitable error message.
    Possible causes would seem to be
    --too large a print job for the printer (though lpq suggests that many of the jobs that failed were <50MB). There was no other network printing taking place on this printer when the jobs failed.
    --some kind of timeout issue. How is the timeout value set? It doesn't appear to be in /etc/cups/cupsd.conf, at least at the moment.
    Any other suggestions appreciated.
    Thanks.

    This info relates directly only to an Epson 9800. However, it may have applications elsewhere.
    Generally, the print spooler in the 9800 with Photoshop will not print large files (>100 MB or so) unless you first convert them to the printer profile, shut down and restart the Mac, and then print the converted image. This is true for Tiger and Leopard. In addition, Leopard takes about twice as long to spool, and since I print lots of 500 MB files, I have given up and gone back to Tiger (for lots of other reasons also).

  • BT Cloud - large file ( ~95MB) uploads failing

    I am consistently getting upload failures for any files over approximately 95MB in size.  This happens with both the Web interface, and the PC client.  
    With the Web interface the file upload gets to a percentage that would be around the 95MB amount, then fails showing a red icon with a exclamation mark.  
    With the PC client the file gets to the same percentage equating to approximately 95MB, then resets to 0%, and repeats this continuously.  I left my PC running 24/7 for 5 days, and this resulted in around 60GB of upload bandwidth being used just trying to upload a single 100MB file.
    I've verified this on two PCs (Win XP, SP3), one laptop (Win 7, 64 bit), and also my work PC (Win 7, 64 bit).  I've also verified it with multiple different types and sizes of files.  Everything from 1KB to ~95MB upload perfectly, but anything above this size ( I've tried 100MB, 120MB, 180MB, 250MB, 400MB) fails every time.
    I've completely uninstalled the PC Client, done a Windows "roll-back", reinstalled, but this has had no effect.  I also tried completely wiping the cloud account (deleting all files and disconnecting all devices), and starting from scratch a couple of times, but no improvement.
    I phoned technical support yesterday and had a BT support rep remote control my PC, but he was completely unfamiliar with the application and after fumbling around for over two hours, he had no suggestion other than trying to wait for longer to see if the failure would clear itself !!!!!
    Basically I suspect my Cloud account is just corrupted in some way and needs to be deleted and recreated from scratch by BT.  However I'm not sure how to get them to do this as calling technical support was futile.
    Any suggestions?
    Thanks,
    Elinor.

    Hi,
    I too have been having problems uploading a large file (362MB) for many weeks now, and as this topic is marked as SOLVED I wanted to let BT know that it isn't solved for me.
    All I want to do is share a video with a friend and thought that BT cloud would be perfect!  Oh, if only that were the case :-(
    I first tried web upload (as I didn't want to use the PC client's Backup facility) - it failed.
    I then tried the PC client Backup.... after about 4 hrs of "progress" it reached 100% and an icon appeared.  I selected it and tried to Share it by email, only to have the share fail and no link.   Cloud backup thinks it's there but there are no files in my Cloud storage!
    I too spent a long time on the phone to Cloud support during which the tech took over my PC.  When he began trying to do completely inappropriate and irrelevant  things such as cleaning up my temporary internet files and cookies I stopped him.
    We did together successfully upload a small file and sharing that was successful - trouble is, it's not that file I want to share!
    Finally he said he would escalate the problem to next level of support.
    After a couple of weeks of hearing nothing, I called again and went through the same farce again with a different tech.  After which he assured me it was already escalated.  I demanded that someone give me some kind of update on the problem and he assured me I would hear from BT within a week.  I did - they rang to ask if the problem was fixed!  Needless to say it isn't.
    A couple of weeks later now and I've still heard nothing and it still doesn't work.
    Why can't Cloud support at least send me an email to let me know they exist and are working on this problem.
    I despair of ever being able to share this file with BT Cloud.
    C'mon BT Cloud surely you can do it - many other organisations can!

  • Transfer of large files to and from Egnyte failing...

    One of my clients uses Egnyte for file management. For a typical job I will usually be required to download 5GB of files and upload 1.5GB.
    However, when at home, transfer of large files to and from Egnyte will often fail. (On download, Chrome gives the error message: "connection failed". Uploading, Egnyte's error message is: "HTTP error").
    I have three machines at home. Two Macs (running Yosemite and Lion) and a PC running Windows 7. I've had no luck with any of them on any browser but when using other people's broadband I have no problem at all (using my MacBook).
    I have no firewalls running. Yes, I've turned everything on-and-off-again. So that leaves me to think that the problem lies with my BT Homehub 4 router. But why would my router be botching the transfer of large files? I've switched the router's firewall off, tried adding my Mac to DMZ (whatever that is) but that seems to be the most I can do. Ethernet is no different to wireless.
    I've not noticed this problem when using other file transfer sites (like WeTransfer).
    What's going on?
    Please help!

    From my own experience (I admin a few gaming servers and often get disconnections from them in the middle of monitoring operations) and based on other users' experiences here on the forums, I suspect BT have been having some core infrastructure issues which can lead to A) intermittent packet loss and B) extended packet delay - both of which can cause servers to assume a 'failure' and disconnect or suspend the upload/download.
    I don't know what package you are on from BT (I'm on Infinity 2), and as it's Hogmanay I'm the one that's drawn the short straw to keep cheaters off our servers, so I'm a bit intoxicated and may not make total sense atm.
    https://community.bt.com/t5/BT-Infinity-Speed-Connection/BT-Infinity-issues-for-the-last-few-days/td...
    ^^ this thread illustrates issues that people have been having over the last few weeks.
    This probably won't help - but it might make you aware that you aren't alone in ONGOING issues.
    Happy New Year !

  • FTP and HTTP large file (>300MB) uploads failing

    We have two IronPort Web S370 proxy servers in a WCCP transparent proxy cluster. We are experiencing problems with users who upload large video files, where the upload will not complete. Some of the files are 2GB in size, but most are hundreds of megabytes. Files smaller than a hundred megabytes or so seem to work just fine. Some users are using the FTP proxy and some are using HTTP methods, such as YouSendIt.com. We have tried explicit proxy settings with some improvement, but it varies by situation.
    Is anyone else having problems with users uploading large files and having them fail?  If so, any advice?
    Thanks,
       Chris

    Have you got any maximum sizes set in the IronPort Data Security Policies section?
    Under Web Security Manager.
    Thanks
    Chris

  • I have over 200 hours of HD video on 5 different TB Thunderbolt GRaid hard drives. I need to reorganize my projects, moving large files from one drive to another. Advice?


    Do some testing to get your method working right with some less than important footage.
    Copy/paste the files where you want them.
    Use the FCE Reconnect feature to tell FCE where the newly copied files reside.
    Make sure the new location and files are working as expected with your Projects.
    Delete the original files if no longer required.
    Al

  • Large File Copy fails in KDEmod 4.2.2

    I'm using KDEmod 4.2.2 and I often have to transfer large files on my computer. Files are usually 1-2GB and I have to transfer about 150GB of them. The files are stored on one external 500GB HD and are being transferred to another identical 500GB HD over FireWire. I also have another external FireWire drive from which I transfer about 20-30GB of the same type of 1-2GB files to the internal HD of my laptop. If I try to drag and drop in Dolphin, it gets through a few hundred MB of the transfer and then fails. But if I use cp in the terminal, the transfer is fine. When I was still distro hopping and using Fedora 10 with KDE 4.2.0, I had this same problem; when I use Gnome, the problem is non-existent. I do this often for work, so it is a very important function to me. All drives are FAT32 and there is no option to change them, as they are used on several different machines/OSs before all is said and done, and the only file system that all of the machines will read is FAT32 (thanks to one machine, of course). In many cases time is very important on the transfer, and that is why I prefer to do the transfer in a desktop environment, so I can see progress and ETA. This is a huge deal breaker for KDE and I would like to fix it. Any help is greatly appreciated, and please don't reply "just use Gnome".

    You can use any other file manager under KDE that works, you know? Just disable the option that lets Nautilus take command of your desktop and you should be fine with it.
    AFAIR the display of the remaining time for a transfer comes at the cost of even more transfer time. And wouldn't some file synchronisation tool work for this task too? (Someone with more knowledge please tell me if this would be a bad idea.)

  • SFTP MGET of large files fails - connection closed - problem with spool file

    I have a new SFTP job to get files from an FTP server. The files are large (80MB, 150MB). I can get smaller files from the FTP site with no issue, but when attempting the larger files the job completes abnormally after 2 min 1 sec each time. I can see the file is created on our local file system with 0 bytes; then, when the FTP job fails, the 0-byte file is deleted.
    Is there a limit to how large an ftp file can be in Tidal?  How long an ftp job can run?
    The error in the job audit is Problem with spool file for job XXXX_SFTPGet and an exit code of 127 (whatever that is).
    In the log, the error is that the connection was closed.  I have checked with the ftp host and their logs show that we are disconnecting unexpectedly also.
    Below is an excerpt from the log
    DEBUG [SFTPMessage] 6 Feb 2015 14:17:33.055 : Send : Name=SSH_FXP_STAT,Type=17,RequestID=12
    DEBUG [SSH2Channel] 6 Feb 2015 14:17:33.055 : Transmit 44 bytes
    DEBUG [ChannelDataWindow] 6 Feb 2015 14:17:33.055 : Remote window size decreased to 130808
    DEBUG [PlainSocket] 6 Feb 2015 14:17:33.071 : RepeatCallback received 84 bytes
    DEBUG [SSH2Connection] 6 Feb 2015 14:17:33.071 : ProcessPacket pt=SSH_MSG_CHANNEL_DATA
    DEBUG [SFTPMessageFactory] 6 Feb 2015 14:17:33.071 : Received message (type=105,len=37)
    DEBUG [SFTPMessageStore] 6 Feb 2015 14:17:33.071 : AddMessage(12) - added to store
    DEBUG [SFTPMessage] 6 Feb 2015 14:17:33.071 : Reply : Name=SSH_FXP_ATTRS,Type=105,RequestID=12
    DEBUG [SFTPMessage] 6 Feb 2015 14:17:33.071 : Send : Name=SSH_FXP_OPEN,Type=3,RequestID=13
    DEBUG [SSH2Channel] 6 Feb 2015 14:17:33.071 : Transmit 56 bytes
    DEBUG [ChannelDataWindow] 6 Feb 2015 14:17:33.071 : Remote window size decreased to 130752
    DEBUG [PlainSocket] 6 Feb 2015 14:17:33.087 : RepeatCallback received 52 bytes
    DEBUG [SSH2Connection] 6 Feb 2015 14:17:33.087 : ProcessPacket pt=SSH_MSG_CHANNEL_DATA
    DEBUG [SFTPMessageFactory] 6 Feb 2015 14:17:33.087 : Received message (type=102,len=10)
    DEBUG [SFTPMessageStore] 6 Feb 2015 14:17:33.087 : AddMessage(13) - added to store
    DEBUG [SFTPMessage] 6 Feb 2015 14:17:33.087 : Reply : Name=SSH_FXP_HANDLE,Type=102,RequestID=13
    DEBUG [SFTPMessage] 6 Feb 2015 14:17:33.087 : Send : Name=SSH_FXP_READ,Type=5,RequestID=14
    DEBUG [SSH2Channel] 6 Feb 2015 14:17:33.087 : Transmit 26 bytes
    DEBUG [ChannelDataWindow] 6 Feb 2015 14:17:33.087 : Remote window size decreased to 130726
    DEBUG [PlainSocket] 6 Feb 2015 14:17:33.118 : RepeatCallback received 0 bytes
    DEBUG [SFTPChannelReceiver] 6 Feb 2015 14:17:33.118 : Connection closed:  (code=0)
    ERROR [SFTPMessageStore] 6 Feb 2015 14:17:33.118 : Disconnected unexpectedly ( [errorcode=0])
    ERROR [SFTPMessageStore] 6 Feb 2015 14:17:33.118 : EnterpriseDT.Net.Ftp.Ssh.SFTPException:  [errorcode=0]
    ERROR [SFTPMessageStore] 6 Feb 2015 14:17:33.118 :    at EnterpriseDT.Net.Ftp.Ssh.SFTPMessageStore.CheckState()
    ERROR [SFTPMessageStore] 6 Feb 2015 14:17:33.118 :    at EnterpriseDT.Net.Ftp.Ssh.SFTPMessageStore.GetMessage(Int32 requestId)

    I believe there is a limitation on FTP, and what you are seeing is a timeout built into the 3rd-party application that Tidal uses (I feel like it was hardcoded and would be a big deal to change, but this was before Cisco purchased Tidal). There may have been a tagent.ini setting that tweaks that, but I can't find any details.
    We wound up purchasing our own FTP software (Ipswitch MOVEit Central & DMZ) because we also had the need to host as well as get/put to other FTP sites. It now does all our FTP and internal file delivery activity (we use its API and call it from Tidal if we need to trigger inside a workflow).

  • Since I updated my Creative Cloud desktop App to its last version, my files are no longer synchronized. I received the "fail to synch files" and "server error" messages.


    Hi, Jeff.
    I'm not on a network and I didn't change anything in my security setup, so I got in touch with customer support. They checked my computer and found nothing wrong, so they uploaded some log files to analyse the case. I'm waiting for an answer.
    Thanks for the tips.

  • Large file copy fails

    I'm trying to move a 60GB folder from a NAS to a local USB drive, and regardless of how many times I try to do this, it fails within the first few minutes.
    I'm on a managed Cisco Gigabit Ethernet switch in a commercial building, and I have hundreds of users having no problems with OS 10.6, Windows XP and Windows 7, but my Yosemite system is not able to do this unless I boot into my OS 10.6 partition.
    Reconfiguring the switch is not a viable option: I can't change things on a switch in a way that would jeopardize hundreds of users just to fix one thing on a Mac that is testing the legitimacy of OS 10.10 in a corporate setting.


  • Can't Upload Large Files (Upload Fails using Internet Explorer but works with Google Chrome)

    I've been experiencing an issue uploading large (75MB and greater) PDF files to a SharePoint 2010 document library. Using normal upload procedures in Internet Explorer 7 (our company standard for the time being), the upload fails. No error message is thrown:
    the upload screen goes away, the page refreshes, and the document isn't there. I tried the multiple-upload option and it throws a failure error after a while.
    Using Google Chrome I made an attempt just to see what it did, and the file uploaded in seconds using the "Add a document" option. I can't figure out why one browser works and the other doesn't. We are getting sporadic inquiries about the same issue.
    We have previously set up large file support in the appropriate areas, and large files are uploaded to the sites successfully. Any thoughts?

    The maximum upload file size has to be configured at the server farm level. Your administrator most likely set up a limit on the size of files that can be uploaded. This limit can be increased, and you would then be able to upload your documents.
