Azure Cloud service fails when sent large amount of data

This is the error:
Exception in AZURE Call: An error occurred while receiving the HTTP response to http://xxxx.cloudapp.net/Service1.svc. This could be due to the service endpoint binding not using the HTTP protocol. This could also be due to an HTTP request context being aborted by the server (possibly due to the service shutting down). See server logs for more details.
Calls with smaller amounts of data work fine; large amounts of data cause this error.
How can I fix this?

In the web.config file, find the <binding> used by your service and increase the parameters that limit the maximum message size, such as maxReceivedMessageSize:
http://msdn.microsoft.com/en-us/library/system.servicemodel.basichttpbinding.maxreceivedmessagesize(v=vs.100).aspx
Make sure you specify a size large enough to accommodate the amount of data you are sending (the default is 64 KB).
Note that even if you set a very large value here, you won't be able to go beyond the maximum request length configured in IIS. If I recall correctly, the default limit in IIS is 8 megabytes.
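For example, with basicHttpBinding the relevant settings might look like the sketch below. The binding name and sizes are illustrative, not taken from the original post, and the same limits usually have to be raised in the client's config as well, since the error above is reported on the receiving side:

```xml
<!-- Sketch only: binding name and sizes are illustrative. -->
<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <!-- Raise the 64 KB default to 10 MB -->
      <binding name="largeMessageBinding"
               maxReceivedMessageSize="10485760"
               maxBufferSize="10485760">
        <readerQuotas maxStringContentLength="10485760"
                      maxArrayLength="10485760" />
      </binding>
    </basicHttpBinding>
  </bindings>
</system.serviceModel>

<!-- The ASP.NET/IIS request limit is separate; maxRequestLength is in KB. -->
<system.web>
  <httpRuntime maxRequestLength="10240" />
</system.web>
```

For these limits to apply, the service endpoint must reference the binding via bindingConfiguration="largeMessageBinding".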

Similar Messages

  • DSS problems when publishing large amount of data fast

    Has anyone experienced problems when sending large amounts of data using the DSS? I have approximately 130 to 150 items that I send through the DSS to communicate between different parts of my application.
    There are several loops publishing data: one publishes approximately 50 items at a 50ms rate, another about 40 items at a 100ms publishing rate.
    I send a command to a subprogram (125ms) that reads and publishes the answer on a DSS URL (approx. 125ms), so that is one item on the DSS for about 250ms. But this data is not seen in my main GUI window that reads the DSS URL.
    My questions are:
    1. Is there any limit in speed (frequency) for data publishing in DSS?
    2. Can DSS become unstable if loaded too much?
    3. Can I lose/miss data in any situation?
    4. In the DSS Manager I have doubled MaxItems and MaxConnections. How will this affect my system?
    5. When I run my full application I have experienced the following error: Fatal Internal Error: "memory.ccp", line 638. Can this be a result of my large application and the heavy load on the DSS? (see attached picture)
    Regards
    Idriz Zogaj
    Idriz "Minnet" Zogaj, M.Sc. Engineering Physics
    Memory Professional
    direct: +46 (0) - 734 32 00 10
    http://www.zogaj.se

    LuI wrote:
    >
    > Hi all,
    >
    > I am frustrated with VISA serial comm. It looks so neat and it's
    > fantastic what it's supposed to do for a developer, but sometimes one
    > runs into trouble very deep.
    > I have an app where I have to read large amounts of data streamed by
    > 13 µCs at 230 kBaud. (They do not necessarily need to stream all at the
    > same time.)
    > I use either a Moxa multiport adapter C320 with 16 serial ports or -
    > for test purposes - a Keyspan serial-2-USB adapter with 4 serial
    > ports.
    Does it work better if you use the serial port(s) on your motherboard?
    If so, then get a better serial adapter. If not, look more closely at
    VISA.
    Some programs have issues on serial adapters but run fine on a
    regular serial port. We've had that problem recently.
    Best, Mark

  • Freeze when writing large amount of data to iPod through USB

    I used to back up my PowerBook to my 60 GB iPod video. Backups are taken with tar in Terminal, written directly to the mounted iPod volume.
    Now, every time I try to write a large amount of data to the iPod (from a MacBook Pro), the whole system freezes (the mouse cursor moves, but nothing else can be done). When the USB cable is pulled out, the system recovers and acts as it should. This happens every time a large amount of data is written to the iPod.
    The same iPod works perfectly (when backing up) with the PowerBook, and small amounts of data can easily be written to it from the MacBook Pro without problems.
    Does anyone else have the same problem? Any ideas why this happens and how to resolve the issue?
    MacBook Pro, 2.0 GHz, 100 GB 7200 RPM, 1 GB RAM, Mac OS X (10.4.5), iPod video 60 GB connected through USB

    Ex-PC user... never had a problem.
    Got a MacBook Pro last week... having the same issues, and this is now with an exchanged machine!
    I've read elsewhere that it's something to do with the USB timing out, and that if you get a new USB port and attach it (separately powered), it should work. Kind of a bummer, but those who tried it say it works.
    Me, I can upload to the iPod piecemeal, manually... but even then, it sometimes freezes.
    The good news is that once the iPod is loaded, the problem shouldn't happen; it's the large amounts of data.
    Apple should DEFINITELY fix this, though. Unbelievable.
    MacBook Pro 2.0, Mac OS X (10.4.6)
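    One workaround for freezes on a single huge write, offered only as a sketch: stream the tar backup through split so the iPod volume receives many moderate writes instead of one monolithic one. The paths below are demo temp directories; in practice the source would be the folder being backed up and the destination the mounted iPod volume (e.g. /Volumes/IPOD, a hypothetical name):

```shell
# Sketch: write the backup as a series of smaller pieces.
# Demo uses temp dirs so it is self-contained; substitute your
# real source folder and iPod mount point.
SRC=$(mktemp -d); DEST=$(mktemp -d)
echo "sample" > "$SRC/file.txt"
# In real use, -b 512m gives ~512 MB pieces; 512k keeps the demo tiny.
tar -C "$SRC" -cf - . | split -b 512k - "$DEST/backup.tar.part_"
ls "$DEST"
# Restore: cat "$DEST"/backup.tar.part_* | tar -C <target> -xf -
```

    Rerunning with a larger piece size only changes how often the iPod gets a pause between writes; the restore step concatenates the pieces back into one tar stream.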

  • Error when exporting large amount of data to Excel from Apex4

    Hi,
    I'm trying to export over 30,000 lines of data from a report in Apex 4 to an Excel spreadsheet, this is not using a csv file.
    It appears to be working and then I get 'No Response from Application Web Server'. The report works fine when exporting smaller amounts of data.
    We have just upgraded the application to Apex 4 from Apex 3, where it worked without any problem.
    Has anyone else had this problem? We were wondering if it was a parameter in Apex4 that needs to be set.
    We are using Application Express 4.1.1.00.23 on Oracle 11g.
    Any help would be appreciated.
    Thanks
    Sue

    Hi,
    >
    I'm trying to export over 30,000 lines of data from a report in Apex 4 to an Excel spreadsheet, this is not using a csv file.
    >
    How? Application Builder > Data Workshop? Apex Page Process? (Packaged) procedure?
    >
    It appears to be working and then I get 'No Response from Application Web Server'. The report works fine when exporting smaller amounts of data.
    We have just upgraded the application to Apex 4 from Apex 3, where it worked without any problem.
    >
    Have you changed your webserver in the process? Say moved from OHS to ApexListener?
    >
    Has anyone else had this problem? We were wondering if it was a parameter in Apex4 that needs to be set.
    We are using Application Express 4.1.1.00.23 on Oracle 11g.
    Any help would be appreciated.

  • Power BI performance issue when load large amount of data from database

    I need to load a data set from my database, which contains a large amount of data; it takes a long time to initialize the data before I can build a report. Is there a good way to process large amounts of data for Power BI? Since many people analyze data with Power BI, are there any suggestions for loading large amounts of data from a database?
    Thanks a lot for the help

    Hi Ruixue,
    We have made significant performance improvements to Data Load in the February update for the Power BI Designer:
    http://blogs.msdn.com/b/powerbi/archive/2015/02/19/6-new-updates-for-the-power-bi-preview-february-2015.aspx
    Would you be able to try again and let us know if it's still slow? With the latest improvements, it should take between one half and one third of the time it used to.
    Thanks,
    M.

  • Airport Extreme Intermittent Network Interruption when Downloading Large Amounts of Data.

    I've had an AirPort Extreme Base Station for about 2.5 years and had no problems until the last 6 months. I have my iMac and a PC connected directly through Ethernet, and another PC connected wirelessly. I occasionally need to download very large data files that max out my download connection speed at about 2.5 Mb/s. During these downloads, my entire network loses its internet connection intermittently, for between 2 and 8 seconds at a time, with around 20-30 seconds between connection losses. This includes the hard-wired machines. I've tested a download with a direct connection to my cable modem without incident; the base station is causing the problem. I've reset the base station with good results, but the problem simply returns after a while. I've updated the firmware to the latest version with no change.
    Can anyone help me with the cause of the connection loss and a method of preventing it? THIS IS NOT A WIRELESS PROBLEM. I believe it has to do with the massive amount of data being handled. Any help would be appreciated.

    OK, did some more sniffing around and found this thread:
    https://discussions.apple.com/thread/2508959?start=0&tstart=0
    It seems the AEBS has had a serious flaw for the last 6 years that Apple has been unable to address adequately. Here is a portion of the log file; it simply repeats the same entries over and over.
    Mar 07 21:25:17  Severity:5  Associated with station 58:55:ca:c7:c2:ae
    Mar 07 21:25:17  Severity:5  Installed unicast CCMP key for supplicant 58:55:ca:c7:c2:ae
    Mar 07 21:26:17  Severity:5  Disassociated with station 58:55:ca:c7:c2:ae
    Mar 07 21:26:17  Severity:5  Rotated CCMP group key.
    Mar 07 21:30:43  Severity:5  Rotated CCMP group key.
    Mar 07 21:36:41  Severity:5  Clock synchronized to network time server time.apple.com (adjusted +0 seconds).
    Mar 07 21:55:08  Severity:5  Associated with station 58:55:ca:c7:c2:ae
    Mar 07 21:55:08  Severity:5  Installed unicast CCMP key for supplicant 58:55:ca:c7:c2:ae
    Mar 07 21:55:32  Severity:5  Disassociated with station 58:55:ca:c7:c2:ae
    Mar 07 21:55:33  Severity:5  Rotated CCMP group key.
    Mar 07 21:59:47  Severity:5  Rotated CCMP group key.
    Mar 07 22:24:53  Severity:5  Associated with station 58:55:ca:c7:c2:ae
    Mar 07 22:24:53  Severity:5  Installed unicast CCMP key for supplicant 58:55:ca:c7:c2:ae
    Mar 07 22:25:18  Severity:5  Disassociated with station 58:55:ca:c7:c2:ae
    Mar 07 22:25:18  Severity:5  Rotated CCMP group key.
    Mar 07 22:30:43  Severity:5  Rotated CCMP group key.
    Mar 07 22:36:42  Severity:5  Clock synchronized to network time server time.apple.com (adjusted -1 seconds).
    Mar 07 22:54:37  Severity:5  Associated with station 58:55:ca:c7:c2:ae
    Mar 07 22:54:37  Severity:5  Installed unicast CCMP key for supplicant 58:55:ca:c7:c2:ae
    Anyone have any ideas why this is happening?

  • Osx server crashes when copying large amount of data

    OK, I have set up a Mac OS X Server on a G4 Dual 867, configured as a standalone server. The only services running are VPN, AFP, and DNS (I am pretty sure the DNS is set up correctly). I have about 3 FireWire drives and 2 USB 2.0 drives hooked up to it.
    When I try to copy roughly 230 GB from one drive to another, it either just stops in the middle or CRASHES the server! I can't see anything out of the ordinary in the logs, though I am a newbie.
    I am stumped. Could this be hardware related? I just did a complete fresh install of OS X Server!

    This could be most anything: a disk error, a non-compliant device, a FireWire error (I've had FireWire drivers tip over Mac OS X with a kernel panic; if the cable falls out at an inopportune moment when recording in GarageBand, toes-up it all goes), a memory error, or a software error. It could also be a FireWire device (or devices) simply drawing too much power.
    Try different combinations of drives, and replace one or more of them with another; start a sequence of elimination targeting the drives.
    Here's what Apple lists about kernel panics as an intro; it's details from the panic log that'll most probably be interesting...
    http://docs.info.apple.com/article.html?artnum=106228
    With some idea of which code is failing, it might be feasible to find a related discussion.
    A recent study out of CERN found three hard disk errors per terabyte of storage, so a clean install is becoming more a game of moving the errors around than actually fixing anything. FWIW.

  • NMH305 dies when copying large amounts of data to it

    I have an NMH305 still set up with the single original 500GB drive.
    I have an old 10/100 3Com rackmount switch (the old white one) uplinked to my Netgear WGR614v7 wireless router. I had the NAS plugged into the 3Com switch and everything worked flawlessly; the only problem was that it was only running at 100 Mb.
    I recently purchased a TRENDnet TEG-S80g 10/100/1000 'green' switch and basically replaced the 3Com with it. To test the 1 Gb speeds, I tried a simple drag & drop of about 4 GB worth of pics to the NAS on a mapped drive. After about 2-3 seconds, the NAS dropped off and Explorer said it was no longer accessible. I could ping it, but the Flash UI was stalled.
    If I waited several minutes, I could access it again. I logged into the Flash UI and upgraded to the latest firmware, but had the same problem.
    I plugged the NAS directly into the Netgear router and transferred files across the wireless without issue. I plugged it back into the green switch and it dropped after about 6-10 pics transferred.
    I totally bypassed the switch and plugged it directly into my computer. I verified I could ping and log in to the Flash UI, then tried to copy files and it died again.
    It seems to only happen when running at 1 Gb link speeds. The max transfer I was able to get was about 10 Mbps, but I'm assuming that's limited by the drive write speeds and controllers.
    Anyone ran into this before?
    TIA!

    Hi cougar694u,
    You may check this review; it is a thorough review of the Media Hub's write and read throughput vs. file size on a 1000 Mbps LAN.
    Cheers

  • Finder issues when copying large amount of files to external drive

    When copying a large amount of data over FireWire 800, the Finder gives me an error that a file is in use and locks the drive up. I have to force-eject. When I reopen the drive, there are a bunch of 0 KB files sitting in the directory that did not get copied over. This happens on multiple drives. I've attached a screenshot of what things look like when I reopen the drive after forcing an eject. Sometimes I have to relaunch the Finder to get back up and running correctly. I've repaired permissions, for what it's worth.
    10.6.8, by the way; 2.93 GHz 12-core, 48 GB of RAM, fully up to date. This has been happening for a long time; I'm just now trying to find a solution.

    Scott Oliphant wrote:
    Iomega, LaCie, 500GB, 1TB, etc.; it seems to be drive-independent. I've formatted and started over with several of the drives, same thing. If I copy the files over in smaller chunks (say, 70GB) as opposed to 600GB, the problem does not happen. It's like the Finder is holding on to some of the info when it puts its "ghost" on the destination drive before it's copied over, and keeping the file locked when it tries to write over it.
    This may be a stretch since I have no experience with iomega and no recent experience with LaCie drives, but the different results if transfers are large or small may be a tip-off.
    I ran into something similar with Seagate GoFlex drives and the problem was heat. Virtually none of these drives are ventilated properly (i.e, no fans and not much, if any, air flow) and with extended use, they get really hot and start to generate errors. Seagate's solution is to shut the drive down when not actually in use, which doesn't always play nice with Macs. Your drives may use a different technique for temperature control, or maybe none at all. Relatively small data transfers will allow the drives to recover; very large transfers won't, and to make things worse, as the drive heats up, the transfer rate will often slow down because of the errors. That can be seen if you leave Activity Monitor open and watch the transfer rate over time (a method which Seagate tech support said was worthless because Activity Monitor was unreliable and GoFlex drives had no heat problem).
    If that's what's wrong, there really isn't any solution except using the smaller chunks of data which you've found works.

  • "Failed to debug the Windows Azure Cloud Service project. The Output directory .... does not exist" - Looking for Solution Config Name Folder?

    Good evening,
    I've been working with a VS2013 Update 2 / Azure SDK 2.3 cloud service project for a while now and never had a problem debugging it (setting the .ccproj project as the startup project), but at the moment I cannot debug it anymore. I always get the following error message:
    Failed to debug the Windows Azure Cloud Service project. The output directory 'D:\Workspace\Development\Sources\AzureBackend\csx\Backend - Debug' does not exist.
    Now what's odd here is the last part: "Backend - Debug" is the solution configuration name, and ALL projects in that particular solution configuration are set to the Debug configuration. The .ccproj file also only specifies Debug|Any CPU (and Release|Any CPU, respectively) as its output folder(s). Why is the solution config appearing up there?
    And more importantly.. why is this happening and what can I do?!
    Thanks,
    -Jörg
    P.S.: there seems to be a related Connect bug, and these sorts of issues do appear around the forums, but none contains a solution (neither reinstalling the Azure SDK nor cloaking the workspace and re-retrieving and rebuilding everything worked).

    Good morning Jambor,
    I already tried uninstalling everything Azure-tooling related, including the Azure SDK, restarting my machine, and re-installing the SDK.
    Same result. I can build the .ccproj perfectly fine and the cspack file IS generated perfectly fine; only debugging does not work, and there's NO information in the VS output window (again, all projects build successfully).
    I tried explicitly running VS as Administrator; no change. I removed all IIS Express sites (as the .ccproj has one web worker role) and remapped my local TFS workspace. Nothing helped.
    As building works and deploying to the Azure Cloud Service (manually and via Publish inside VS) works perfectly, I am pretty sure this IS a bug and I'd LOVE to help get it fixed. As I said, currently I cannot debug and/or run and test my work, hence I cannot do ANY work.

  • My phone is using large amounts of data; System Services shows Mapping Services is causing it. What are Mapping Services and how do I switch them off? I really need help.

    My phone is using large amounts of data. When I go to System Services, it's my Mapping Services that's causing it. What are Mapping Services and how do I switch them off? I really need help.

    I have the same problem. I switched off Location Services, Maps under cellular data, and whatever else Maps could be involved in, and then just last night it chewed through 100 MB. I'm also on Vodacom, so I'm seeing a pattern here somehow. Siri was switched on, however, so I've switched it off now and will see what happens. But I'm going to go to both Apple and Vodacom this afternoon, because this must be sorted out; it's a serious issue we have on our hands, and some uproar needs to be made against those responsible!

  • ERROR MESSAGE WHEN RETRIEVING AND DISPLAYING LARGE AMOUNT OF DATA

    Hello,
    Am querying my database(mysql) and displaying my data in a
    DataGrid (Note that am using Flex 2.0)
    It works fine when the amount of data populating the grid is
    not much. But when I have large amount of data I get the following
    error message and the grid is not populated.
    ERROR 1
    faultCode:Server.Acknowledge.Failed
    faultString:'Didn't receive an acknowledge message'
    faultDetail: 'Was expecting
    mx.messaging.messages.AcknowledgeMessage, but receive Null'
    ERROR 2
    faultCode:Client.Error.DeliveryInDoubt
    faultString:'Channel disconnected'
    faultDetail: 'Channel disconnected before and acknowledge was
    received'
    Note that my datagrid is populated when I run the query on my
    Server but does not works on my client pcs.
    Your help would br greatly appreciated here.
    Awaiting a reply.
    Regards

    Hello,
    I am using Remote Object services,
    using a ColdFusion component as the destination.

  • Exe on Cloud Service crashing for slightly larger data

    Hi,
    I have launched a C++ exe in a cloud service. It works fine for smaller data but crashes when we provide slightly larger data, even though that data takes hardly 5 minutes to process when running locally.
    Can anyone suggest what the issue could be?
    Thank you.

    Hi ,
    It seems this is an executionTimeout error. Please try changing the "executionTimeout" value:
    <system.web>
      <httpRuntime executionTimeout="600" />
    </system.web>
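    If the crash correlates with payload size rather than elapsed time, the ASP.NET request-size limit may also need raising alongside the timeout; the values below are illustrative, not taken from the original reply:

```xml
<system.web>
  <!-- executionTimeout is in seconds; maxRequestLength is in KB -->
  <httpRuntime executionTimeout="600" maxRequestLength="102400" />
</system.web>
```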
    >>Also, can you explain the parallel approach that we can use to handle data on azure role. I am using a single web role in my application.
    I am not familiar with C++, but I think you could use concurrency or multiple threads in your project:
    http://stackoverflow.com/questions/218786/concurrent-programming-c
    I also found a resource about how to deploy an exe on Azure; please refer to it:
    http://www.codeproject.com/Articles/331425/Running-an-EXE-in-a-WebRole-on-Windows-Azure
    Alternatively, you could use a startup script to start and run the EXE file; see this blog:
    http://blogs.msdn.com/b/mwasham/archive/2011/03/30/migrating-a-windows-service-to-windows-azure.aspx
    Regards,
    Will
    We are trying to better understand customer views on social support experience, so your participation in this interview project would be greatly appreciated if you have time. Thanks for helping make community forums a great place.
    Click HERE to participate in the survey.

  • In azure cloud service with Location West Europe, why the IP address shows the server is locate in United States?

    We have 9 projects in Azure cloud services, all using the West Europe location but with different subscriptions. We found that one of those cloud services has an IP address located in the United States (why?), while all the others are in Amsterdam (which is correct).
    Can someone explain why? The server whose IP is located in the United States is very slow. BTW, I'm in Amsterdam.

    Hi LH,
    I have seen the same problem in Brazil. There are some comments about this issue you can refer to:
    Microsoft owns large ranges of IP addresses which are typically registered in Redmond, so Azure IP addresses around the world usually show up as being physically located in Redmond when using these types of tools.
    It's more or less an issue with the way our IPs are registered: they all "belong" to Microsoft in the US and Brazil.
    IP-locator tools like whatismyip are sometimes incorrect. Some will give the real location, some will give the location of the ISP, etc.
    IP geo-lookup tools typically rely on a static database of IP address range registrations.
    It's a Microsoft issue in the sense that we could publish the correct location for our datacenters (e.g. Amsterdam should be located in Europe and not in Redmond), but it's also a third-party tools issue.
    If you want to be sure, use:
    http://msdn.microsoft.com/en-us/library/windowsazure/dn175718.aspx
    You could also see this blog about the issue:
    http://azure.microsoft.com/blog/2014/06/11/windows-azures-use-of-non-us-ipv4-address-space-in-us-regions/
    Regards,
    Will
