Exe on Cloud Service crashing for slightly larger data

Hi,
I have launched a C++ exe in a cloud service. It works fine for smaller data, but it crashes when we provide slightly larger data, even though that data takes barely five minutes to solve when run locally.
Can anyone suggest what the issue could be?
Thank you.

Hi,
It seems that this is an executionTimeout error. Please try increasing the "executionTimeout" value:
<system.web>
  <httpRuntime executionTimeout="600" />
</system.web>
>>Also, can you explain the parallel approach that we can use to handle data on an Azure role? I am using a single web role in my application.
I am not familiar with C++, but I think you could use concurrency or multiple threads in your project:
http://stackoverflow.com/questions/218786/concurrent-programming-c
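As a minimal illustration of that pattern (written in Java here only because one concrete language is needed; the same idea is available in C++ via std::thread or std::async), you can split the input into chunks and process the chunks on a fixed-size thread pool. The record list, chunk size and processChunk work below are placeholders, not your actual data:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelChunks {

    // Placeholder for the real per-chunk work (parsing, solving, etc.).
    static long processChunk(List<String> chunk) {
        return chunk.size();
    }

    public static void main(String[] args) throws Exception {
        List<String> records = List.of("r1", "r2", "r3", "r4", "r5", "r6"); // placeholder input
        int workers = Runtime.getRuntime().availableProcessors();
        int chunkSize = (records.size() + workers - 1) / workers;

        ExecutorService pool = Executors.newFixedThreadPool(workers);
        List<Future<Long>> results = new ArrayList<>();

        // One task per chunk, so the chunks are processed concurrently.
        for (int i = 0; i < records.size(); i += chunkSize) {
            List<String> chunk = records.subList(i, Math.min(i + chunkSize, records.size()));
            results.add(pool.submit(() -> processChunk(chunk)));
        }

        long total = 0;
        for (Future<Long> f : results) {
            total += f.get(); // blocks until that chunk is done
        }
        pool.shutdown();
        System.out.println("Processed " + total + " records");
    }
}

The fixed-size pool bounds concurrency and memory use; the chunk size is the main tuning knob. Note that this only helps if the crash is caused by long-running CPU-bound work rather than by a timeout or memory limit on the role.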
I also found some resources about how to deploy an exe on Azure; please refer to them:
http://www.codeproject.com/Articles/331425/Running-an-EXE-in-a-WebRole-on-Windows-Azure
Alternatively, you could use a startup script to start and run the EXE file; see this blog:
http://blogs.msdn.com/b/mwasham/archive/2011/03/30/migrating-a-windows-service-to-windows-azure.aspx
Regards,
Will

Similar Messages

  • Service design for handling large datasets

    As an overnight process we need to invoke 2 services against every record in our database (over 1 million records). Specifically, the process flow should be as follows:
    - For each record in the database invoke service A.
    - For each record use the return value from service A as a parameter to invoke service B.
    If we were to process each record one at a time in a synchronous fashion, the time needed to process all records would be too great. I was wondering if there is a better way to implement this? I have considered batching and making asynchronous calls using a duplex channel, but I am unclear about which option would be superior (see the sketch at the end of this message).

    DataSets with DataTables (the "salad bowl") are too slow for Service Oriented Architecture.
    http://www.hanselman.com/blog/ReturningDataSetsFromWebServicesIsTheSpawnOfSatanAndRepresentsAllThatIsTrulyEvilInTheWorld.aspx
    DataTables use boxing and unboxing, which makes them slow.
    http://www.csharphelp.com/2010/02/c-best-practices-to-write-high-performance-code/
    You should be using DTOs and a List of DTOs:
    http://lauteikkehn.blogspot.com/2012/03/datatable-vs-list.html
    http://en.wikipedia.org/wiki/Data_transfer_object
    http://www.mindscapehq.com/documentation/lightspeed/Building-Distributed-Applications-/Building-WCF-Services-using-Data-Transfer-Objects
    On the other hand, if you are using SQL Server, you may want to look into MS SQL Server Service Broker too.
    https://technet.microsoft.com/en-us/library/ms166104(v=sql.105).aspx
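    If you do go with batching and asynchronous calls, here is a minimal sketch of the idea (in Java, purely for illustration): ServiceAClient and ServiceBClient are hypothetical stand-ins for the two service proxies, the pool and batch sizes are arbitrary, and the real job would page record IDs out of the database in batches. The point is simply to overlap the A-then-B calls for many records on a bounded thread pool instead of processing them one at a time.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class BatchPipelineSketch {

        // Hypothetical client proxies standing in for the two real services.
        interface ServiceAClient { String callA(String recordId); }
        interface ServiceBClient { void callB(String recordId, String resultFromA); }

        // Process one batch: for each record, call A and then feed A's result into B,
        // with both calls running off the calling thread on a bounded pool.
        static void processBatch(List<String> recordIds, ServiceAClient a,
                                 ServiceBClient b, ExecutorService pool) {
            List<CompletableFuture<Void>> tasks = new ArrayList<>();
            for (String id : recordIds) {
                tasks.add(CompletableFuture
                        .supplyAsync(() -> a.callA(id), pool)         // step 1: service A
                        .thenAcceptAsync(r -> b.callB(id, r), pool)); // step 2: service B
            }
            // Wait for the whole batch before moving on to the next page of records.
            CompletableFuture.allOf(tasks.toArray(new CompletableFuture[0])).join();
        }

        public static void main(String[] args) {
            ExecutorService pool = Executors.newFixedThreadPool(16); // bounded concurrency
            ServiceAClient a = id -> "A(" + id + ")";                // stand-ins for real proxies
            ServiceBClient b = (id, r) -> System.out.println(id + " -> " + r);

            // In the real job you would page through the million records in batches.
            processBatch(List.of("1", "2", "3"), a, b, pool);
            pool.shutdown();
        }
    }

    Waiting for each batch with allOf keeps memory bounded; the batch size and pool size are the knobs to tune against what the two services can actually handle.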

  • Safari keeps crashing when loading large data on web pages

    I have owned my iPad 3 for more than a year now and I have never encountered this problem, but ever since I updated to the latest iOS version (7.1.1), Safari keeps crashing on my community site, where threads can have over 100 comments in them.
    Now whenever I open a thread page in Safari, it loads for a few seconds and then crashes. To make things worse, once in a while when it crashes it reboots my iPad. I have seen several similar questions from other people about Safari crashing when loading large amounts of data, and the replies say there is something wrong with the CSS or something, but I never got any idea of how to resolve this bug.
    Can anyone tell me how to prevent Safari from crashing every time I open a thread? Any solutions? Because I am getting really ******.
    (PS: I have already tried other browsing apps such as Google Search and Chrome, and they still crash. I even tried letting the page load halfway and then stopping it, but that isn't going well.)

    I have restored my iOS devices a number of times in order to resolve issues and it has gone very smoothly every time. I am not going to lie, it takes a fair amount of time to back up, restore the iOS and then restore from the backup, but it could very well resolve the issue.
    On the other hand, it may not help at all. Restoring the software is a standard troubleshooting measure, and that is why it is recommended when other suggestions aren't working. But before you restore, there are a couple of other things that you could try.
    Reset all settings: Settings > General > Reset > Reset All Settings. You will not lose any data when you do this, but it does take some time to enter all of the device settings again, so be aware of that.
    Another thing to try is to erase the device and start over. This is different from restoring to factory settings. Read this for more information:
    iOS: How to back up your data and set up your device as a new device

  • Azure Cloud service fails when sent large amount of data

    This is the error:
    Exception in AZURE Call: An error occurred while receiving the HTTP response to http://xxxx.cloudapp.net/Service1.svc. This could be due to the service endpoint binding not using the HTTP protocol. This could also be due to an HTTP request context being aborted by the server (possibly due to the service shutting down). See server logs for more details.
    Calls with smaller amounts of data work fine. Large amounts of data cause this error.
    How can I fix this?

    Go to the web.config file, look for the <binding> that is being used by your service, and adjust the various parameters that limit the maximum length of the messages, such as maxReceivedMessageSize.
    http://msdn.microsoft.com/en-us/library/system.servicemodel.basichttpbinding.maxreceivedmessagesize(v=vs.100).aspx
    Make sure that you specify a size that is large enough to accommodate the amount of data that you are sending (the default is 64 KB).
    Note that even if you set a very large value here, you won't be able to go beyond the maximum request length configured for IIS/ASP.NET (maxRequestLength, which defaults to 4 MB), so you may need to raise that as well.

  • Enterprise Service Required for - Credit Limit Data

    Hi,
    Can anyone tell me whether there is a web service available for the Credit Limit details shown in the FD32 transaction (T-Code)?
    Thanks,
    Sekhar.J

    https://www.sdn.sap.com/irj/sdn/wiki?path=/display/espackages/credit%2bmanagement
    https://www.sdn.sap.com/irj/sdn/wiki?path=/display/espackages/creditManagementEnterprise+Services
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/db5a9890-0201-0010-5091-a3c483bbea7d
    Hope these links help.
    Thanks,
    Parvathy

  • Are Analytic Workspaces suitable for very large data sets?

    Hi all,
    I have made many different tests with analytic workspaces and I have used the different features (compression, composites, ...). The results, especially for maintenance, are disappointing.
    I have a star schema with 6 dimensions. The fact table has 730 million rows, the first dimension has 2.9 million rows, and the other 5 dimensions have between 25 and 300 rows each.
    My conclusion is that Analytic Workspaces don't help in situations like mine. The time for maintenance is very bad, not to mention the time for aggregations. I even tried to populate the cube in parts (90 million rows for the first population) but nothing changed. And there are some other problems with storage and tablespaces (I always get the message "unable to extend TEMP tablespace"; its size is 54 GB).
    Is there something I am missing? Does anyone have a similar problem or a different opinion?
    Thank you,
    Ilias

    A few other tips to add to Keith's excellent advice:
    - How many CPUs does your server have? The answer to this may help you decide the optimal level to partition at (in my experience DAY is too low and can cause different problems). What other levels does your time dimension have? Are you loading your cubes in parallel?
    - To speed up your load, partition your underlying fact table with the same granularity as your cubes and place an index on the field mapped to the partition dimension.
    - Are you using 10.2.0.3? If so, be very careful with the storage data type you choose when creating your cubes. The default in 10.2.0.3 is NUMBER, which can store data to 38 significant figures. This usually exceeds what is required for most datasets. If your dataset allows you to use storage of 15 significant figures, then you should create your cubes using the DECIMAL data type instead. This will use about one third of the storage space and significantly increase your build speeds (in my experience, more than 3 times faster).
    - Make sure you have preallocated enough permanent and temporary tablespaces for your build. Autoextending can be very time consuming.
    - Consider reducing the amount of aggregation you do in batch. It should not be necessary to pre-aggregate everything in order to get good query performance.
    Generally, I would say that the volume should not be a problem. A single dimension with 2.9 million values is fairly big and can be slow (in OLAP terms) to query but that should not be an obstacle to building it in the first place.
    Good luck!
    Stuart

  • XML Publisher: issues with large data and a data template

    There is a proposal to build a data template with a query consisting of 350 columns, and the query is expected to fetch around 10,000 rows. The data template will have all 350 columns as elements.
    In the report, the user can select and show whichever columns they want.
    Apart from tuning the SQL query, what other issues are involved in the above case?
    Thanks and regards

    I guess it is going to take time to get the data into XML from the DB.
    And of course, if the template has a lot of grouping, which I suppose will be the case given the number of columns you have, it will be heavy on the output-generation part; if the template is only a little complex, then it won't be a problem :)
    What kind of issues have you faced so far?

  • "Failed to debug the Windows Azure Cloud Service project. The Output directory .... does not exist" - Looking for Solution Config Name Folder?

    Good evening,
    I've been working on and with a VS2013 Update 2 / Azure SDK 2.3 Cloud Service project for a while now and never had a problem debugging it (setting the .ccproj project as the startup project), but at the moment I cannot debug it anymore. I always get the following error message:
    Failed to debug the Windows Azure Cloud Service project.  The output directory 'D:\Workspace\Development\Sources\AzureBackend\csx\Backend - Debug' does not exist.
    Now what's odd here is the last part: "Backend - Debug" is the solution configuration name, and ALL projects in that particular solution configuration are set to the Debug configuration. The .ccproj file also only specifies Debug|Any CPU (and Release|Any CPU respectively) as its output folder(s). Why is the solution config appearing up there?
    And more importantly, why is this happening and what can I do?
    Thanks,
    -Jörg
    PS: there seems to be a related Connect bug, and these sorts of issues do appear around the forums, but none contains a solution (neither reinstalling the Azure SDK nor cloaking the workspace and re-retrieving & building everything worked).

    Good morning Jambor,
    I already tried uninstalling everything Azure-tooling related, including the Azure SDK, restarting my machine and re-installing the SDK.
    Same result. I can build the .ccproj perfectly fine and the cspack file IS generated perfectly fine; only debugging does not work, and there's NO information in the VS output window (again, all projects build successfully).
    I tried explicitly running VS as Administrator, no change. I removed all IIS Express sites (as the ccproj has one web worker role) and remapped my local TFS workspace; nothing helped.
    As building works and deploying to the Azure Cloud Service (manually and via Publish inside VS) works perfectly, I am pretty sure this IS a bug and I'd LOVE to help get it fixed. As I said, currently I cannot debug and/or run & test my work, hence I cannot do ANY work.

  • Cloud services for SAP Utilities

    Hi,
    I am interested to know where I can find knowledge and documents related to the cloud services offered by SAP for the utility industry.

    Hi Vijay,
    Currently the only place to get these types of materials is the Service Marketplace (service.sap.com) and the Business Center.
    If you have authorization, you can get the detailed material on cloud services and SAP Business ByDesign.
    I hope the above information helps you.
    Thanks & Regards
    Sreenivas Pachva

  • Diagnostics Data for Virtual Machine and Cloud service

    Hi,
    We have an Azure cloud account that contains a Windows virtual machine, a worker role and one web role. In this setup, we need to collect diagnostics data for the virtual machine and the cloud service. How do we enable diagnostics data in our setup, and how do we retrieve the data?
    I have read the link below, which says diagnostics data is stored in the WAD tables. How do we read the data from the table? Is there a query available?
    Or can we get this data from the REST API? We need the performance data every 5 minutes. Please help me solve this.
    http://msdn.microsoft.com/en-us/library/azure/hh411534.aspx
    Thanks & Regards
    Rathidevi

    Hi,
    You can configure Azure diagnostics by using Visual Studio. Azure diagnostics captures system data and logging data on the virtual machines and virtual machine instances that run your cloud service and transfers that data into your storage account.
    References :-
    http://msdn.microsoft.com/en-us/library/azure/dn186185.aspx
    http://azure.microsoft.com/en-us/documentation/articles/cloud-services-dotnet-diagnostics/
    Regards,
    Mekh.

  • Alerts broken after redeploy of Cloud Service?

    I've been experimenting with Alerts since they were released into Preview and really like the idea. I have, however, had enough trouble with them that I am losing faith in their effectiveness. It seems like each time I go into the Alerts section of the Azure Portal I find my alerts are broken yet again in some way. Right now all 10 of the alerts I have defined are showing errors when I view the dashboard page. These are alerts I deleted and re-created a few weeks ago because they were broken back then for some unknown reason. Now when I go into the dashboard for any alert I see this error at the top:
    Error: Could not find the resource that is defined for this alert rule. Verify that the resource still exists.
    If I try to edit the alert I see this at the top of the edit popup:
    Could not retrieve the metrics for (microsoft/compute/hostedservices/{mydeploymentname}/deployments/{mydeploymentid}) because the resource was not found. Verify the resource exists and then try again.
    I suspect the problem is that we are deploying new builds of our Cloud Service every few weeks. We use the VIP swap approach, so we get new deployment IDs each time. It would be completely unworkable for me to have to re-create alerts each time we redeploy.
    Can anyone on the Alerts team speak to this issue? I am sufficiently frustrated with alerts now that I just may stop trying to use them.

    I ask that you all try to expedite this. I am having to look at third-party tools to accomplish this when the ability is there in the Portal; it's just not working as it should for VIP swaps.
    Also, it may be worth noting that I recently got several alerts that a cloud service no longer had monitoring data available and could not autoscale. It makes me wonder if that could also have been due to monitoring being tied to the deployment ID (but maybe not).
    I have heard that this will likely be resolved in the next 2 months, though I know Microsoft cannot commit to that. I do ask that you do everything possible to tie this to the slot rather than the deployment ID, which can change.
    Thanks!

  • Storing iPad Photos in cloud services to free up space

    I am moving photos from my iPad to Mega and Amazon cloud storage, but it is not freeing up any room on my iPad. The usage still shows the same amount. What do I have to do to actually have the photos and videos stored in the cloud services?

    For photos in Photo Stream, you can move them to your PC by using the instructions in iCloud: My Photo Stream FAQ.
    Note in particular the sections titled "What do I need to use My Photo Stream" and "How do I turn on My Photo Stream".
    If you have photos in the Camera Roll that are not in Photo Stream, you can import them to your PC; see iOS: Import personal photos and videos from iOS devices to your computer.

  • WebOS Cloud Services to end January 15, 2015 alternatives?

    What happens with this: "Cloud services support for webOS devices will be ending on January 15, 2015"?

    It means your data is going to be lost if you do not move it now. You will not be able to back up or restore the device.
    Your apps will be lost if you ever have to reset the device.
    If you have a TouchPad and want to continue using it, install Android on it now.
    HP is calling this an orderly "end of life" process. If your data is valuable to you, I would take HP at its word here and consider getting your data onto a device that is current and supported.
    To the best of my knowledge there is currently no alternative for backing up your device. You should still be able to install Preware and apps through webOS Internals.
    smkranz
    I am a volunteer, and not an HP employee.
    Palm OS ∙ webOS ∙ Android

  • PaaS - Cloud Service OS maintenance

    How exactly are Windows Update patches handled by Azure PaaS (Cloud Services)?
    If the Operating System Version is set to Automatic, does the Fabric Controller apply updates as required, using the "Update Domain" to stagger the updates across the role instances?
    Do these updates occur on a set schedule, e.g. Patch Tuesday?

    Hi Michael,
    Thanks for posting!
    First, I suggest you refer to this helpful blog: http://blogs.technet.com/b/markrussinovich/archive/2012/08/22/3515679.aspx
    >>How exactly are Windows Update patches handled by Azure PaaS (Cloud Services)?
    For this issue, see the 'PaaS Update Orchestration' section of the link above.
    >>If the Operating System Version is set to Automatic, does the Fabric Controller apply updates as required, using the "Update Domain" to stagger the updates across the role instances?
    You can find more information about Update Domains, Fault Domains and Availability Sets in Windows Azure conference sessions, recordings of which you can find on Mark's Webcasts page here.
    The Windows Azure MSDN documentation describes host OS updates here, and the service definition schema for Update Domains here.
    >>Do these updates occur on a set schedule, e.g. Patch Tuesday?
    For this issue, you could view Kevin's blog at this link.
    Hope this helps.
    Will

  • Reading Large data files from client machine ( urgent)

    Hi,
    I want to know the best way to read a large data file of about 75 MB from the client machine and insert its contents into the database.
    Can anybody provide sample code for it?
    Loading the file should be done on the client machine, and inserting into the database should be done on the server side.
    How should I load the file?
    How should I transfer this file or its data to the server?
    How should I insert it into the database?
    Thanks in advance.
    regards
    Kalyan

    >>Like I said before, you should be using your application server to serve files from the server off the filesystem. The database should not store files this big and should instead just have a reference to this file.
    I think you have not understood the problem correctly.
    I will make it clear. The requirement is as follows:
    This is a J2EE-based application.
    The application server is Oracle Application Server.
    The database is Oracle9i.
    It is a thick client (Swing-based application).
    The user enters a data source like c:\turkey.data.
    This turkey.data file contains data such as
    1@1@20050131@1@4306286000113@D00@32000002005511069941@@P@10@0@1@0@0@0@DK@70059420@4330654016574@1@51881100@51881100@@99@D@40235@0@0@1@430441800000000@@11@D@42389@20050201@28483@15@@@[email protected]@@20050208@20050307@0@@@@@@@@@0@@0@0@0@430443400000800@0@0@@0@@@29@0@@@EUR
    and likewise it may have more than 3 lakh (300,000) rows.
    We need to read this file and transfer it to the application server, where the EJBs run. There, each row in the file becomes one row in a database table, so we need to insert about 300,000 records into the database.
    We can use JDBC to insert the data, which is not a problem. The only problem is how to transfer this data to the server.
    I can do it one way (this is only an example): read all the data into a StringBuffer and pass it to the server; there I get the data back out of the StringBuffer and insert it into the database using JDBC.
    If you do it this way, it is a performance problem and takes a long time to insert into the database; it may even throw an out-of-memory exception.
    I am just looking for a better way of doing this that gives good performance.
    I hope you have understood the problem.
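    Since sample code was requested, here is a minimal sketch of the server-side insert, assuming the file has already been transferred to the server and saved to a temporary path (for example by streaming it over HTTP or sending it in fixed-size chunks, rather than building one huge StringBuffer). The table and column names are made up for illustration; in practice each '@'-separated field would map to its own column. The idea is to read the file line by line and use JDBC batch inserts inside a single transaction.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class TurkeyDataLoader {

        public static void load(String path, String jdbcUrl, String user, String pwd) throws Exception {
            final int BATCH_SIZE = 1000; // send rows to the database 1000 at a time

            try (Connection con = DriverManager.getConnection(jdbcUrl, user, pwd);
                 BufferedReader in = new BufferedReader(new FileReader(path))) {

                con.setAutoCommit(false); // one transaction for the whole load
                try (PreparedStatement ps = con.prepareStatement(
                        // Hypothetical table/column; in practice map each '@'-separated field to a column.
                        "INSERT INTO TURKEY_DATA (RAW_LINE) VALUES (?)")) {

                    String line;
                    int count = 0;
                    while ((line = in.readLine()) != null) { // stream the file, never hold all 75 MB in memory
                        ps.setString(1, line);
                        ps.addBatch();
                        if (++count % BATCH_SIZE == 0) {
                            ps.executeBatch();               // one round trip per 1000 rows
                        }
                    }
                    ps.executeBatch();                       // flush the final partial batch
                }
                con.commit();
            }
        }
    }

    Streaming plus batching keeps memory flat on both client and server and cuts the number of database round trips for 300,000 rows from 300,000 down to a few hundred.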
