Changing cloud services auto scale weekends

Hi,
I want to change the default weekend days in Azure autoscale (our weekend is Friday to Saturday).
Can this be done using the portal, or in any other way?

Hi,
>> (our weekend is from Friday to Saturday)?
Based on my experience, if your weekend is still two days you may get close by selecting the appropriate time zone, but customizing which days count as the weekend is difficult in the Azure portal.
As an alternative, I suggest you autoscale the cloud service on a custom schedule using the Windows Azure Monitoring Services Management Library. This code sample shows how to use the library to create an autoscale rule: http://blogs.msdn.com/b/cie/archive/2014/02/20/how-to-use-windows-azure-monitoring-services-management-library-to-create-an-autoscale-rule.aspx. See also this document: http://msdn.microsoft.com/en-us/library/hh680945(v=pandp.50).aspx.
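If you go the custom-schedule route, the core of it is just a Friday/Saturday check in your own time zone. The snippet below is a rough sketch only (the time zone id and instance counts are placeholders); the actual scale call through the Monitoring Services Management Library is left to the linked sample:

using System;

// Sketch: decide whether "now" falls in a Friday-Saturday weekend for a given
// time zone and pick a target instance count. Applying that count (via the
// Monitoring Services Management Library or the Service Management API) is
// left to the linked blog sample.
static class WeekendSchedule
{
    // "Arab Standard Time" is only an example id; use the zone your business runs on.
    static readonly TimeZoneInfo Zone =
        TimeZoneInfo.FindSystemTimeZoneById("Arab Standard Time");

    public static bool IsCustomWeekend(DateTimeOffset utcNow)
    {
        var local = TimeZoneInfo.ConvertTime(utcNow, Zone);
        return local.DayOfWeek == DayOfWeek.Friday
            || local.DayOfWeek == DayOfWeek.Saturday;
    }

    public static int TargetInstanceCount(DateTimeOffset utcNow)
    {
        // Placeholder counts: run 2 instances on the Friday-Saturday weekend, 4 otherwise.
        return IsCustomWeekend(utcNow) ? 2 : 4;
    }
}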
Hope this helps.
Will

Similar Messages

  • Cloud service auto scale delay

    Hi team, 
    We have a cloud service with 10 worker roles. To let the cloud service scale up and down automatically, we configured autoscale so that a new worker role instance is created when the average CPU stays at or above 70% for 30 minutes.
    We also configured an email notification that fires when the worker role average CPU stays at or above 75% for 30 minutes. We received the notification at 2015/1/26 0:55, but the instances were only scaled up about an hour later. Does that mean autoscale was delayed by roughly one hour? Is there a best practice to avoid this kind of problem and have the instances scale up immediately?
    Thanks in advance.

    Hi,
    Please have a look at this article:
    http://azure.microsoft.com/en-us/documentation/articles/cloud-services-how-to-scale/, here is a snippet.
    All instances are included when calculating the average percentage of CPU usage, and the average is based on use over the previous hour. Depending on the number of instances that your application is using, it can take longer than the specified wait time for the scale action to occur if the wait time is set very low. The minimum time between scaling actions is five minutes. Scaling actions cannot occur if any of the instances are in a transitioning state.
    In other words, because the rule evaluates an average over the previous hour, a sustained CPU spike can take close to an hour to pull that hourly average above your threshold, which is consistent with the delay you observed.
    Hope this helps.
    Best Regards,
    Jambor

  • Change basicHttpBinding to wsHttpBinding in azure cloud service web role

    So I created a cloud service project in Visual Studio and added a WCF service web role to the cloud service.
    By default, the WCF service web role binding is set to basicHttpBinding.
    Currently my web.config looks like this:
    <?xml version="1.0"?>
    <configuration>
      <system.diagnostics>
        <trace>
          <listeners>
            <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=2.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
                 name="AzureDiagnostics">
              <filter type="" />
            </add>
          </listeners>
        </trace>
      </system.diagnostics>
      <system.web>
        <compilation debug="true" targetFramework="4.0" />
      </system.web>
      <system.serviceModel>
        <behaviors>
          <serviceBehaviors>
            <behavior>
              <!-- To avoid disclosing metadata information, set the value below to false before deployment -->
              <serviceMetadata httpGetEnabled="true"/>
              <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information -->
              <serviceDebug includeExceptionDetailInFaults="false"/>
            </behavior>
          </serviceBehaviors>
        </behaviors>
        <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
      </system.serviceModel>
      <system.webServer>
        <modules runAllManagedModulesForAllRequests="true"/>
        <directoryBrowse enabled="true"/>
      </system.webServer>
    </configuration>
    How do I change my config to use wsHttpBinding?

    Hi,
    Please refer to
    http://msdn.microsoft.com/en-us/library/ms733099.aspx. From my experience, in Windows Azure we cannot use Windows authentication (the default configuration for wsHttpBinding) unless we use WAAD and have the cloud server join the local domain. So you need to either turn off authentication or use an alternative authentication mechanism (such as username and password).
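    For reference, here is a minimal sketch of the web.config changes; the service and contract names below are placeholders for whatever your WCF web role actually uses, and security is switched off as described above (use message security with username/password if you need authentication):
    <system.serviceModel>
      <bindings>
        <wsHttpBinding>
          <!-- Windows auth is not available here, so turn security off or use an
               alternative mechanism such as username/password message security. -->
          <binding name="NoWindowsAuth">
            <security mode="None" />
          </binding>
        </wsHttpBinding>
      </bindings>
      <services>
        <!-- Placeholder names: replace with your actual service and contract types. -->
        <service name="WCFServiceWebRole1.Service1">
          <endpoint address="" binding="wsHttpBinding" bindingConfiguration="NoWindowsAuth"
                    contract="WCFServiceWebRole1.IService1" />
          <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
        </service>
      </services>
      <!-- Keep your existing <behaviors> and <serviceHostingEnvironment> elements. -->
    </system.serviceModel>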
    Best Regards,
    Ming Xu

  • Is there a way to restart my Java cloud service? What are "non-dynamic configuration changes"?

    I am trying to deploy an ADF application to my trial instance of Java cloud, I am getting the following error:
    2013-10-15 08:24:18 CDT: Deploy Application started
    2013-10-15 08:24:19 CDT: weblogic.management.DeploymentException: [Deployer:149189]An attempt was made to execute the 'deploy' operation on an application named 'xyzAPP' that is not currently available. The application may have been created after non-dynamic configuration changes were activated. If so, the operation can not be performed until server is restarted so that the application will be available.
    2013-10-15 08:24:19 CDT: WL action state: failed
    2013-10-15 08:24:19 CDT: Action FAILED with WL action state: failed
    2013-10-15 08:24:19 CDT: Check the server log of your Java cloud service for more info about the failure.
    There are tutorials available for deploying your first ADF app to the cloud, which I followed carefully. Except for one thing, the ADF app was originally developed in Jdeveloper 11.1.2.4 and then migrated to 11.1.1.6. Could that be what is causing the problem? How can I try to troubleshoot this?

    Hi,
    You will have to raise an SR (service request) with the hosting team to have the restart performed.
    Kind regards,
    Flori

  • Track cloud service configuration changes

    I have a cloud service. If you go to the Azure portal, click Cloud Services -> choose the service -> click Configure, you see the settings that were defined in the Visual Studio cloud service (MVC) project: in the cloud project you click the specific role, open the Configure/Settings/Endpoints screens, and enter the settings as key/value pairs on the Settings view.
    Those settings are reflected on the portal's configuration screen for the cloud service once it is deployed.
    A user can change these configuration values in the portal, and I want to track that. In the management services I can see all the changes to the Azure cloud service (caller ID, operation ID, etc., and there is an option to see details, but this is not sufficient): I want to see which property was changed and its old value.
    1. Is there an event that is raised when a user changes this configuration in the portal? If so, which event, and how do I get this data?
    2. How can I find which field was changed and what the old value was?
    3. In the management service I can see the operation ID; can I somehow get additional data from it?
    I have read the following (and more about diagnostics) but did not find how to do this:
    http://msdn.microsoft.com/library/azure/dn186185.aspx
    I guess I need to do it in code, so any example would be very helpful!

    I found that when I change the configuration I see the operation name
    ChangeDeploymentConfigurationBySlot
    in the operation log, but I don't understand how I should use it. I'm new to Azure.
    http://msdn.microsoft.com/en-us/library/azure/ee460809.aspx
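    For what it's worth on question 1: inside the role itself, the RoleEnvironment.Changing / RoleEnvironment.Changed events (Microsoft.WindowsAzure.ServiceRuntime) fire when someone edits the configuration in the portal, and during Changing the old value is still readable. A rough sketch only, with the logging/storage left as a placeholder:
    using System.Linq;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WorkerRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // Fires BEFORE the new configuration is applied, so reading the
            // setting here still returns the OLD value.
            RoleEnvironment.Changing += (sender, e) =>
            {
                foreach (var change in e.Changes.OfType<RoleEnvironmentConfigurationSettingChange>())
                {
                    string oldValue = RoleEnvironment.GetConfigurationSettingValue(change.ConfigurationSettingName);
                    // Record the setting name and old value somewhere of your choice.
                }
                // Leaving e.Cancel == false applies the change without recycling the role.
            };

            // Fires AFTER the new configuration is applied; the same call now returns the NEW value.
            RoleEnvironment.Changed += (sender, e) =>
            {
                foreach (var change in e.Changes.OfType<RoleEnvironmentConfigurationSettingChange>())
                {
                    string newValue = RoleEnvironment.GetConfigurationSettingValue(change.ConfigurationSettingName);
                    // Compare with the stored old value and record the difference.
                }
            };

            return base.OnStart();
        }
    }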

  • Need advice for starting a Managed Cloud Service for Small Businesses

    I hope this is in the right forum.  I have done a lot of research and searching but haven't found anything that specifically answers, in total, what I am wanting to accomplish.  I live in a small town and want to start a Managed Cloud Service for
    small to small-medium business in my area (2-30 users for each business).  I want to market this to have businesses replace their in-house server(s) to virtual ones I would host in a local Data Center with my own equipment that I would maintain.  I
    am just starting off so I don't have any clients I do this for currently, but I get asked about this frequently.  I want to run a 2012 R2 Domain Controller and a Hyper-V 2012 R2 server.  The virtual servers I will host are going to be for AD, RDS,
    FTP, and files.  Software examples that people are going to be using these virtual servers for are Quickbooks, Sage Accounting, Remote Desktop or RemoteApp, custom CRM or small database software, Office 2013, etc.  No Exchange currently but will
    probably configure something for that in the future (maybe run 1-3 virtually for now if someone asks, but will only do it if the user base is fairly small ~under 10 users).  I only have 1 static IP to work with over a 100Mbps connection up and down.
    For hardware, I am figuring something along the lines of this:
    (1) 1U, single CPU w/2-4 cores, 8GB, 2x73GB SAS 10k RAID 1, Dual PSU, running Windows Server 2012 R2
    Domain Controller
    (1) 2U, 2x 8-core Xeon ~2.6Ghz, 80GB RAM, 8x600GB SAS 15k in Raid 10 for Storage (VHDX files, etc), RAID 1 small Basic drives (or USB stick) for OS, Dual PSU, Quad GB Nic which I can use for load balancing/teaming, Hyper-V
    2012 R2
    Hyper-V Virtual Server
    (1) GB Unmanaged Network Switch & (1) Cisco 5510 Firewall
    Most of my questions are about the best way to configure this.  I am planning on managing my Hyper-V from the physical Domain Controller server.  Each virtual server will have RDS & (possibly) AD services on a single server.
    1) I want to replicate the physical Domain Controller.  Should I get another server or just virtualize the replica in Hyper-V?  I understand that if the Hyper-V goes down, so does my DC replica.
    2) Should I use my Domain Controller to manage ALL users on each virtual Server, by creating separate Organizational Units for each business?
    3) Should I set up my domain controller with Hyper-V management and then make each virtual server I set up a separate domain (e.g. mydomain.local, business1.local, business2.local, etc.)?  Each one would have no connection to any other, completely separate.  Or should I do subdomains (business1.mydomain.local)?
    4) What I have read is that subdomains are a pain to manage with user rights, etc.  I want to keep each server completely separated from the others over a network connection; I suppose the VLAN options in Hyper-V do this?  I don't want wandering users to stumble upon another business's files (I know they would probably be prompted with a login for that business/domain).
    5) For each virtual server, I want to create and have an HTTP subdomain point to that server from my domain name. (Ex: business1.mydomain.com, business2.mydomain.com, etc.)  I want them to be able to have access to
    only their RemoteApps or be able to type that address in their Remote Desktop program as the host name.  This would be for viewing the RemoteApp login page and RemoteApps for that business over HTTP/S through a browser.
    6) If I do not have separate DC's in each virtual and my main DC manages each one, is their a way to connect up each companies RemoteApps using a single site that only shows what they are assigned to based upon their login?
    (Ex. http://login.mydomain.com which then shows that user what they are assigned on their own virtual server)
    7) Since each business will use the same ports for RemoteApp (443) & RDC (3389 unless I change it), how would I set up the subdomains to point to their correct server and not overlap or interfere with any of the other servers, since it's all over 1 static WAN IP for all servers?  That's why I figured setting up IIS subdomains would solve this.
    8) For backups or Hyper-V replication, is it better to have software that backs up the ENTIRE Hyper-V server (Acronis Advanced Backup for Hyper-V) as well as replication or just backups?  Or should I do separate file
    backup on each virtual with a replica?  Can a replica be a slower server since its just a backup? (Ex. 1x 8 core, 80GB, 8x600GB 10k SAS)
    9) For the servers that will be using FTP, can I again rely on the subdomains to determine which server to connect to on port 21 without changing each FTP servers ports?  I just want each business/person to type in
    the subdomain for their business and it connect up to their assigned FTP directory over port 21.
    10) If the physical DC manages DNS for all Virtual servers, can I forward sub domain requests to the proper virtual server so they connect to the correct RemoteApp screen etc.  Again all I have is 1 IP.
    I hope all of these questions make sense.  I just want every business to be independent of each other on the Hyper-V, each on their own virtual server, all without changing default ports on each server, each server running RDS, (possibly) AD, (a few) FTP,
    and all over a common single WAN IP.  Hoping subdomains (possibly managed through IIS on the physical DC) will redirect users to their appropriate virtual server.

    If you really want to run your own multi-tenant service provider cloud, Microsoft has defined the whole setup needed.
    They call it Infrastructure as a Service Product Line Architecture.  You can find the full documentation here -
    http://blogs.technet.com/b/yuridiogenes/archive/2014/04/17/infrastructure-as-a-service-product-line-architecture.aspx
    There are several different ways of configuring and installing it.  Here is a document I authored that provides step-by-step instructions for deploying into a Cisco UCS and EMC VSPEX environment -
    http://www.cisco.com/c/dam/en/us/td/docs/unified_computing/ucs/UCS_CVDs/ucs_mspc_fasttrack40_phase1.pdf
    This document contains the basic infrastructure required to manage a private cloud.  I will soon be publishing a document that adds the Windows Azure Pack components onto the above configuration.  That is what would more easily provide a multi-tenant experience with an Azure look and feel.  It is not Azure, but the Azure Pack is a series of applications, some of which came from Azure, that provide Azure-like capabilities in a service provider type of environment.
    Whether you use my document or not (it has actually corrected errors found in the Microsoft documentation), you should take a look at it to see what it takes to put something like this up, if you are really serious about it.  It is not a small undertaking.  It requires a lot of moving pieces to be coordinated.  Yes, my document is designed to scale to a large environment, but you need the components that are there.  No need to re-invent the wheel.  Microsoft's documentation is based on a lot of real hands-on experience from their consulting organization, which has been doing this for customers for years.  This one is also known as Fast Track 4.  I've done 2 (2008 R2) and 3 (2012) as well, and it just keeps getting more complicated based on customer demands and expectations.
    Good luck!
    . : | : . : | : . tim

  • Azure Cloud Service Scaling - do I have to configure a Load Balancer?

    I'm a little bit confused by how scaling in Azure works. I'm using a Cloud Service and have 2 web roles running a PHP application. I can RDP on both machines and both applications run great on each machine. Also I don't have any problems calling the staging
    URL.
    But I can't figure out if I configure scaling so that 2 machines run always, if I have to configure a load balancer somehow. Or is this already done for me?
    In Azure VM's I had to create a load-balanced set endpoint for an endpoint, but what about cloud services?
    And how is this done in the XML configuration file for my service? What if I don't do it?

    Hi,
    Scaling is affected by core usage. Larger role instances or Virtual Machines use more cores. You can only scale an application within the limit of cores for your subscription. For example, if your subscription has a limit of twenty cores and you run an application with two medium sized Virtual Machines (a total of four cores), you can only scale up other cloud service deployments in your subscription by sixteen cores. All Virtual Machines in an availability set that are used in scaling an application must be the same size.
    Windows Azure provides load balancing for cloud services and standard websites automatically; you just need to set the instance count to more than 1 to enable it. For Virtual Machines, load balancing needs to be set up manually.
    Please refer to this link for load balancing a Virtual Machine:
    http://www.windowsazure.com/en-us/manage/windows/common-tasks/how-to-load-balance-virtual-machines/
    Autoscale lets you set scaling limits and scheduling goals to ensure you are always getting optimal performance.
    Please refer to this link on scaling for Cloud Services:
    http://azure.microsoft.com/en-us/services/cloud-services/
    and this link on scaling an application:
    http://azure.microsoft.com/en-us/documentation/articles/cloud-services-how-to-scale/
    XML configuration: Azure (load-balanced) endpoints can only be used for TCP/UDP based services. Please check
    https://techlib.barracuda.com/display/BNGv54/How+to+Configure+a+High+Availability+Cluster+in+Azure/printable for more details.
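    As a concrete (hedged) illustration of "set the instance count to more than 1": in the cloud service's ServiceConfiguration (.cscfg) the count sits on the role element, and the platform load-balances the web role instances for you. Service and role names below are placeholders:
    <ServiceConfiguration serviceName="MyCloudService"
        xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
      <Role name="PhpWebRole">
        <!-- Two instances behind the cloud service's built-in load balancer;
             no separate load-balancer configuration is needed. -->
        <Instances count="2" />
      </Role>
    </ServiceConfiguration>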
    Hope this helps.
    Regards,
    Shirisha Paderu.

  • Error when trying to deploy ADF application in Java Cloud Service - SaaS Extension

    Hello guys,
    I'm trying to deploy a simple ADF application to "Oracle Java Cloud Service - SaaS Extension" and I keep getting the error below.
    The job fails at the "Deploy Application" step:
    Has anyone already run into this error?
    Thanks in advance
    Sid
    2014-12-10 11:06:20 PST: Starting action "Deploy Application"
    2014-12-10 11:06:20 PST: Deploy Application started
    2014-12-10 11:06:28 PST: weblogic.application.ModuleException: weblogic.application.ModuleException:
      at weblogic.servlet.internal.WebAppModule.startContexts(WebAppModule.java:1531)
      at weblogic.servlet.internal.WebAppModule.start(WebAppModule.java:488)
      at weblogic.application.internal.flow.ModuleStateDriver$3.next(ModuleStateDriver.java:425)
      at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:52)
      at weblogic.application.internal.flow.ModuleStateDriver.start(ModuleStateDriver.java:119)
      at weblogic.application.internal.flow.ScopedModuleDriver.start(ScopedModuleDriver.java:200)
      at weblogic.application.internal.flow.ModuleListenerInvoker.start(ModuleListenerInvoker.java:247)
      at weblogic.application.internal.flow.ModuleStateDriver$3.next(ModuleStateDriver.java:425)
      at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:52)
      at weblogic.application.internal.flow.ModuleStateDriver.start(ModuleStateDriver.java:119)
      at weblogic.application.internal.flow.StartModulesFlow.activate(StartModulesFlow.java:27)
      at weblogic.application.internal.BaseDeployment$2.next(BaseDeployment.java:671)
      at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:52)
      at weblogic.application.internal.BaseDeployment.activate(BaseDeployment.java:212)
      at weblogic.application.internal.SingleModuleDeployment.activate(SingleModuleDeployment.java:44)
      at weblogic.application.internal.DeploymentStateChecker.activate(DeploymentStateChecker.java:161)
      at weblogic.deploy.internal.targetserver.AppContainerInvoker.activate(AppContainerInvoker.java:80)
      at weblogic.deploy.internal.targetserver.operations.AbstractOperation.activate(AbstractOperation.java:573)
      at weblogic.deploy.internal.targetserver.operations.ActivateOperation.activateDeployment(ActivateOperation.java:150)
      at weblogic.deploy.internal.targetserver.operations.ActivateOperation.doCommit(ActivateOperation.java:116)
      at weblogic.deploy.internal.targetserver.operations.AbstractOperation.commit(AbstractOperation.java:327)
      at weblogic.deploy.internal.targetserver.DeploymentManager.handleDeploymentCommit(DeploymentManager.java:844)
      at weblogic.deploy.internal.targetserver.DeploymentManager.activateDeploymentList(DeploymentManager.java:1253)
      at weblogic.deploy.internal.targetserver.DeploymentManager.handleCommit(DeploymentManager.java:440)
      at weblogic.deploy.internal.targetserver.DeploymentServiceDispatcher.commit(DeploymentServiceDispatcher.java:163)
      at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.doCommitCallback(DeploymentReceiverCallbackDeliverer.java:195)
      at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.access$100(DeploymentReceiverCallbackDeliverer.java:13)
      at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer$2.run(DeploymentReceiverCallbackDeliverer.java:68)
      at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:545)
      at weblogic.work.ExecuteThread.execute(ExecuteThread.java:256)
      at weblogic.work.ExecuteThread.run(ExecuteThread.java:221)
    Caused by: java.security.AccessControlException: access denied ("java.net.NetPermission" "specifyStreamHandler")
      at java.security.AccessControlContext.checkPermission(AccessControlContext.java:372)
      at java.security.AccessController.checkPermission(AccessController.java:559)
      at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
      at java.net.URL.checkSpecifyHandler(URL.java:649)
      at java.net.URL.<init>(URL.java:373)
      at weblogic.application.io.MergedDescriptorFinder.getSource(MergedDescriptorFinder.java:46)
      at weblogic.application.io.DescriptorFinder.getSource(DescriptorFinder.java:44)
      at weblogic.utils.classloaders.MultiClassFinder.getSource(MultiClassFinder.java:67)
      at weblogic.application.utils.CompositeWebAppFinder.getSource(CompositeWebAppFinder.java:71)
      at weblogic.servlet.internal.War$ResourceFinder.getSource(War.java:1213)
      at weblogic.servlet.internal.War$ResourceFinder.getSource(War.java:1203)
      at weblogic.servlet.internal.War.getResourceAsSource(War.java:512)
      at weblogic.servlet.internal.WebAppServletContext.getResourceAsSource(WebAppServletContext.java:3436)
      at weblogic.servlet.internal.WebAppServletContext.getResourceAsSource(WebAppServletContext.java:3427)
      at weblogic.servlet.internal.WebAppServletContext.getResourceAsStream(WebAppServletContext.java:872)
      at com.sun.faces.config.ConfigureListener$WebXmlProcessor.scanForFacesServlet(ConfigureListener.java:805)
      at com.sun.faces.config.ConfigureListener$WebXmlProcessor.<init>(ConfigureListener.java:768)
      at com.sun.faces.config.ConfigureListener.contextInitialized(ConfigureListener.java:178)
      at weblogic.servlet.internal.EventsManager$FireContextListenerAction.run(EventsManager.java:481)
      at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
      at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120)
      at weblogic.servlet.internal.EventsManager.notifyContextCreatedEvent(EventsManager.java:181)
      at weblogic.servlet.internal.WebAppServletContext.preloadResources(WebAppServletContext.java:1871)
      at weblogic.servlet.internal.WebAppServletContext.start(WebAppServletContext.java:3173)
      at weblogic.servlet.internal.WebAppModule.startContexts(WebAppModule.java:1529)
    2014-12-10 11:06:28 PST: WL action state: failed
    2014-12-10 11:06:28 PST: Action FAILED with WL action state: failed
    2014-12-10 11:06:28 PST: Check the server log of your Java cloud service for more info about the failure.
    2014-12-10 11:06:28 PST: Application deployment failed.
    2014-12-10 11:06:28 PST: "Deploy Application" complete: status FAILED

    The application does nothing; it's just a simple page (the button does nothing either). The aim is to deploy a JSF page with ADF forms.
    The deployment log is in the first message.
    I'm using JDeveloper, and I notice that JDeveloper inserts some servlets into the web.xml file. Is it possible that the problem is related to this? (web.xml below)
    Sid
    <?xml version = '1.0' encoding = 'windows-1252'?>
    <web-app xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
             version="2.5">
      <servlet>
        <servlet-name>Faces Servlet</servlet-name>
        <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
      </servlet>
      <servlet>
        <servlet-name>resources</servlet-name>
        <servlet-class>org.apache.myfaces.trinidad.webapp.ResourceServlet</servlet-class>
      </servlet>
      <servlet>
        <servlet-name>BIGRAPHSERVLET</servlet-name>
        <servlet-class>oracle.adf.view.faces.bi.webapp.GraphServlet</servlet-class>
      </servlet>
      <servlet>
        <servlet-name>BIGAUGESERVLET</servlet-name>
        <servlet-class>oracle.adf.view.faces.bi.webapp.GaugeServlet</servlet-class>
      </servlet>
      <servlet>
        <servlet-name>MapProxyServlet</servlet-name>
        <servlet-class>oracle.adf.view.faces.bi.webapp.MapProxyServlet</servlet-class>
      </servlet>
      <servlet-mapping>
        <servlet-name>Faces Servlet</servlet-name>
        <url-pattern>/faces/*</url-pattern>
      </servlet-mapping>
      <servlet-mapping>
        <servlet-name>resources</servlet-name>
        <url-pattern>/adf/*</url-pattern>
      </servlet-mapping>
      <servlet-mapping>
        <servlet-name>resources</servlet-name>
        <url-pattern>/afr/*</url-pattern>
      </servlet-mapping>
      <servlet-mapping>
        <servlet-name>BIGRAPHSERVLET</servlet-name>
        <url-pattern>/servlet/GraphServlet/*</url-pattern>
      </servlet-mapping>
      <servlet-mapping>
        <servlet-name>BIGAUGESERVLET</servlet-name>
        <url-pattern>/servlet/GaugeServlet/*</url-pattern>
      </servlet-mapping>
      <servlet-mapping>
        <servlet-name>MapProxyServlet</servlet-name>
        <url-pattern>/servlet/mapproxy/*</url-pattern>
      </servlet-mapping>
      <servlet-mapping>
        <servlet-name>resources</servlet-name>
        <url-pattern>/bi/*</url-pattern>
      </servlet-mapping>
      <filter>
        <filter-name>trinidad</filter-name>
        <filter-class>org.apache.myfaces.trinidad.webapp.TrinidadFilter</filter-class>
      </filter>
      <filter>
        <filter-name>ServletADFFilter</filter-name>
        <filter-class>oracle.adf.share.http.ServletADFFilter</filter-class>
      </filter>
      <filter-mapping>
        <filter-name>trinidad</filter-name>
        <servlet-name>Faces Servlet</servlet-name>
        <dispatcher>FORWARD</dispatcher>
        <dispatcher>REQUEST</dispatcher>
        <dispatcher>ERROR</dispatcher>
      </filter-mapping>
      <filter-mapping>
        <filter-name>ServletADFFilter</filter-name>
        <servlet-name>Faces Servlet</servlet-name>
        <dispatcher>FORWARD</dispatcher>
        <dispatcher>REQUEST</dispatcher>
      </filter-mapping>
      <context-param>
        <param-name>javax.faces.STATE_SAVING_METHOD</param-name>
        <param-value>client</param-value>
      </context-param>
      <context-param>
        <param-name>javax.faces.PARTIAL_STATE_SAVING</param-name>
        <param-value>false</param-value>
      </context-param>
      <context-param>
        <description>If this parameter is true, there will be an automatic check of the modification date of your JSPs, and saved state will be discarded when JSP's change. It will also automatically check if your skinning css files have changed without you having to restart the server. This makes development easier, but adds overhead. For this reason this parameter should be set to false when your application is deployed.</description>
        <param-name>org.apache.myfaces.trinidad.CHECK_FILE_MODIFICATION</param-name>
        <param-value>false</param-value>
      </context-param>
      <context-param>
        <description>Whether the 'Generated by...' comment at the bottom of ADF Faces HTML pages should contain version number information.</description>
        <param-name>oracle.adf.view.rich.versionString.HIDDEN</param-name>
        <param-value>true</param-value>
      </context-param>
      <context-param>
        <description>Security precaution to prevent clickjacking: bust frames if the ancestor window domain(protocol, host, and port) and the frame domain are different. Another options for this parameter are always and never.</description>
        <param-name>org.apache.myfaces.trinidad.security.FRAME_BUSTING</param-name>
        <param-value>differentOrigin</param-value>
      </context-param>
      <context-param>
        <param-name>javax.faces.VALIDATE_EMPTY_FIELDS</param-name>
        <param-value>true</param-value>
      </context-param>
      <context-param>
        <param-name>oracle.adf.view.rich.geometry.DEFAULT_DIMENSIONS</param-name>
        <param-value>auto</param-value>
      </context-param>
      <context-param>
        <param-name>oracle.adf.view.rich.SYNCROWS</param-name>
        <param-value>enable</param-value>
      </context-param>
      <context-param>
        <param-name>javax.faces.FACELETS_SKIP_COMMENTS</param-name>
        <param-value>true</param-value>
      </context-param>
      <context-param>
        <param-name>javax.faces.FACELETS_DECORATORS</param-name>
        <param-value>oracle.adfinternal.view.faces.facelets.rich.AdfTagDecorator</param-value>
      </context-param>
      <context-param>
        <param-name>javax.faces.FACELETS_RESOURCE_RESOLVER</param-name>
        <param-value>oracle.adfinternal.view.faces.facelets.rich.AdfFaceletsResourceResolver</param-value>
      </context-param>
      <context-param>
        <param-name>javax.faces.FACELETS_VIEW_MAPPINGS</param-name>
        <param-value>*.jsf;*.xhtml</param-value>
      </context-param>
      <mime-mapping>
        <extension>swf</extension>
        <mime-type>application/x-shockwave-flash</mime-type>
      </mime-mapping>
      <mime-mapping>
        <extension>amf</extension>
        <mime-type>application/x-amf</mime-type>
      </mime-mapping>
      <listener>
        <listener-class>oracle.adf.mbean.share.config.ADFConfigLifeCycleCallBack</listener-class>
      </listener>
      <login-config />
    </web-app>

  • Unable to connect to VM's in new cloud service via express route

    We have changed our ExpressRoute setup: initially we had an ExpressRoute circuit via London, but we added a second one via Amsterdam and removed the one via London. All existing and new VMs in the different vnets have connectivity to our local datacenter, but as soon as we create VMs in a new cloud service the published routes don't seem to be picked up and the machines are only reachable in their local vnet on Azure.
    Does anyone have an idea where to look? It looks like route publishing is not working correctly, but it is strange that new VMs in existing cloud services do work correctly. BGP peering and the vnets have been granted access via the ExpressRoute and all have status provisioned.

    Hi Syed,
    When I try to connect to a new VM via RDP, or do a tracert to the machine (with the firewall turned off on the VM), I don't get a response (traffic is routed via the ExpressRoute correctly). If I do a tracert from the VM in question to an IP on the on-premise network, the trace is directed to the internet instead of to the on-premise network via the ExpressRoute.
    The new cloud services were created in the same region as the working cloud services, and the VMs are also in the same vnet/subnet as the working VMs. If I delete a VM (keeping the disks) from a new cloud service and redeploy it in an existing cloud service, I can reach it again via the internal IP.
    We have checked the route publishing and the correct routes are published to the ExpressRoute/vnet.
    When I check the provisioning of the vnets via Get-AzureDedicatedCircuitLink, all the vnets in question are listed as provisioned.
    I'll try to remove the BGP routing for the original ExpressRoute this evening to see if that helps.
    kind regards
    Xander

  • How do I tell my clients to configure the connectionstrings for a cloud service?

    I have an application that exists in two forms
    A Windows Service
    A Cloud Service with a Web Worker Role
    Both applications have an encrypted connection string in the app.config;
    for clients using the Windows Service I know how to tell them to change the config file.
    For a cloud service is it possible to edit the configuration file?
    I read something about Azure Settings, but I can't find any good information about that, is that the preferred method for setting environment settings in a Cloud Service?
    Can you remote in to a VM or whatever hosts the Cloud Service?
    Thank you for any help. I am writing the documentation about how to setup the Azure environment and I realized I don't know myself, I only know how to publish with Visual Studio to a cloud service with the values already set. That works, but I can't
    tell a client to use Visual Studio.

    Hi,
    For a cloud service, although it is possible to access the instance VMs over RDP and change their file systems, it is not recommended, because you will lose your changes whenever role instance VMs are restarted.
    If you want to keep certain settings configurable and shared by all your role instances, the best way is to use the cloud service configuration: you put these settings in the .cscfg file, and you can also edit them in the Azure management portal.
    You can also access them from your code:
    string settingValue = CloudConfigurationManager.GetSetting("SettingString");
    Read more about it here - http://msdn.microsoft.com/en-us/library/azure/ee405486.aspx
    http://haishibai.blogspot.in/2012/09/windows-azure-cloud-service.html
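    As a small sketch of where such a setting lives in the .cscfg (service, role, and setting names are placeholders), this is the value the client can later edit on the portal's Configure page without redeploying; note the setting must also be declared in the role's ServiceDefinition.csdef (the role's Settings tab in Visual Studio does both for you):
    <ServiceConfiguration serviceName="MyCloudService"
        xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
      <Role name="WebWorkerRole1">
        <Instances count="1" />
        <ConfigurationSettings>
          <!-- Editable later in the portal; read in code via CloudConfigurationManager.GetSetting. -->
          <Setting name="SettingString" value="encrypted-connection-string-goes-here" />
        </ConfigurationSettings>
      </Role>
    </ServiceConfiguration>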
    Bhushan

  • Cisco Jabber for Telepresence 4.6.3 Setting for "Webex Telepresence" Cloud Service

    I am looking for the server setting that can be installed into the Cisco Jabber for Telepresence client (MOVI) that will allow it to connect to the "Webex Telepresence" cloud service.  I cannot find a download anywhere that has the information pre-configured for the "Webex Telepresence" service.
    Thanks for your help in advance
    Tim
    Well, after a few hours of searching I have the answer.
    The Client can be downloaded from: http://download.telepresence.webex.com/MCX/4.6.3.17194/WIN/JabberVideo_4.6.3GA.exe
    This will come with the following setting pre-configured: Sign-in Setting > Internal Server: https://boot.telepresence.webex.com/tmsagent/api/rest/devices/movi/provisioning
    If you have the Jabber Video client installed and just want to change where it gets provisioned from (VCS or Webex Telepresence), you should be able to use your current client and just change the internal server setting and clear the external setting.
    Then you will need to use your Webex Telepresence Login ID and Password.
    Hope this helps Someone.
    Tim

    A1: Because it can use either (Webex or CUPS), there are two deployment modes; it's all in the documentation if you haven't gone through it.
    You type in what it's asking, nothing more. Choose one, and then use the CUPS hostname. DNS is basically mandatory; the PC needs name resolution for everything you have configured (VM/CUCM/etc.).
    A2: No, it can be IM&P, full UC, or phone mode. No, full UC and phone mode use the PC for media termination; deskphone control uses the phone for media. Did you change the parameters during install for phone mode? If not, it's not in phone mode.
    A3: No, that can be many things: DNS, connectivity, credentials, etc.
    You mention DNS is not used, so most likely that's the problem.
    I'd strongly recommend you review the whole deployment guide for Jabber for Windows.
    HTH
    java

  • A question regarding using a JDBC class to connect with cloud service database

    Hello,
    I am currently working on a small-scale cloud service report, and the company I chose is obviously Oracle. My question concerns the cloud service in the following way: I was doing my report with the free trial when it occurred to me, why not write a small one-class program in NetBeans or Eclipse that uses JDBC? But I am not sure what username, password, and URL to use in the connection to retrieve, manipulate, and store values. Can somebody please tell me whether this is possible or not?
    edit: Anyone, please? I have a deadline on 15.8. and could create something great by then if I get the answer in a few days.

    To clarify my question: I already have the Oracle account, and I created the cloud service trial account with the database and Java sections.

  • Error while deploying adf application on oracle cloud service

    Hello, I have registered the Oracle Cloud Service for Java and Database. I have created a simple ADF application with just 2 JSF pages linked together. I am using Oracle JDeveloper 11gR2, so I created an EAR file for deployment to the cloud and deployed the ADF application using the Java console. But after uploading the application, the deployment failed. I tried this 3 times, and the result was the same. I checked the logs, where I got 3 warnings in the whitelist log and an error in the deploy log. They are as follows:
    Warnings in   whitelist log:
    2013-04-14 06:57:11 CDT: Starting action "API Whitelist"
    2013-04-14 06:57:11 CDT: API Whitelist started
    2013-04-14 06:57:12 CDT: WARNING - There are 3 warnings(s) found for Testapp.ear.
    2013-04-14 06:57:12 CDT: WARNING - Path:Testapp.ear (3 Warnings)
    2013-04-14 06:57:12 CDT: WARNING - Path:Testapp.ear (3 Warnings)
    2013-04-14 06:57:12 CDT: WARNING - Path:Test_ViewController_webapp.war (3 Warnings)
    2013-04-14 06:57:12 CDT: WARNING - Path:WEB-INF**** (1 Warning)
    2013-04-14 06:57:12 CDT: WARNING - 1:Recommended child element "login-config" missing under element /
    javaee:web-app.
    If you want to make your application public, you can have empty
    <login-config/> in your web.xml. If you need authentication then you must
    have <login-config> and its child <auth-method> element in web.xml.
    Without this element(<login-config>), users may be challenged by SSO, but
    the application code will be executed as anonymous user only. Line No:4.
    2013-04-14 06:57:12 CDT: WARNING - Path:WEB-INF**** (2 Warnings)
    2013-04-14 06:57:12 CDT: WARNING - 1:Recommended child element "jsp-descriptor" missing under element /
    orcl-weblogic:weblogic-web-app.
    If you have a JSP file that is not pre-compiled, The compilation errors
    could be shown on the browser. It is recommended to include
    <jsp-descriptor><verbose>false<****><****-descriptor> in weblogic.xml.
    Line No:2.
    2013-04-14 06:57:12 CDT: WARNING - 2:Recommended child element "session-descriptor" missing under element /
    orcl-weblogic:weblogic-web-app.
    You will be required to have distinct cookie-path, if multiple
    applications are accessed with in the same SSO session or if you have
    multiple applications with different auth-method(CLIENT-CERT, FORM, BASIC)
    in the same service instance.
    Line No:2.
    2013-04-14 06:57:12 CDT: WARNING - Testapp.ear had 3 warning(s).
    2013-04-14 06:57:12 CDT: INFO - Whitelist validation has completed with 0 error(s) and 3 warning(s).
    2013-04-14 06:57:12 CDT: Whitelist validation passed.
    2013-04-14 06:57:12 CDT: "API Whitelist" complete: status SUCCESS
    and Error in deploy log:
    2013-04-14 06:57:12 CDT: Starting action "Deploy Application"
    2013-04-14 06:57:12 CDT: Deploy Application started
    2013-04-14 06:57:15 CDT: weblogic.application.ModuleException: Failed to load webapp: Test-ViewController-context-root because of DeploymentException: java.lang.ClassNotFoundException: oracle.adf.view.faces.bi.webapp.MapProxyServlet
    2013-04-14 06:57:15 CDT: WL action state: failed
    2013-04-14 06:57:15 CDT: Action FAILED with WL action state: failed
    2013-04-14 06:57:15 CDT: Check the server log of your Java cloud service for more info about the failure.
    2013-04-14 06:57:16 CDT: Application deployment failed.
    2013-04-14 06:57:16 CDT: "Deploy Application" complete: status FAILED
    I am using JDeveloper 11gR2, so please don't tell me to use JDeveloper 11gR1: I have already developed an application for my final-year B.Tech and I can't migrate to a previous release. So the only way for me is to generate the EAR file and deploy from the console.
    So, I don't understand what the problem is or what the solution would be.
    What should I do? What changes are required?
    Please help me get out of this problem!

    Well, I guess you have a problem here. Check http://multikoop.blogspot.de/2012/12/deploying-adf-applications-into-oracle.html and from this
    >
    Note: In its current stage Oracle Java Cloud Service runs WebLogic Server 10.3.6 with the appropriate Runtime ADF 11.1.1.6. Deployment of ADF 11gR2 Applications is currently not supported. Beside this limitation some ADF Features are not supported on the Oracle Cloud. According to the Oracle Cloud Documentation it is not supported to use the following ADF features
    ADF Desktop Integration
    ADF mBean
    ADF MDS (Seeded customizations or cross-session personalization)
    ADF Mobile
    ADF Active Data Services (=> No real-time ADF Web Apps in Oracles Cloud)
    ADF Business Components services interfaces (web services) or events
    ADF Data Controls for BI, Essbase, BAM, and JMX
    Further there are some restrictions which are good to know I think
    No Java Mail API (=>Sending Mails is prohibited)
    No File system access by deployed applications (=>Writing files is prohibited)
    No Direct use of Oracle JDBC Driver APIs
    No Java Message Service (JMS)
    Max Size for deployment archive 95MB
    >
    I hope for your sake that the information from the blog has changed in the meantime (the blog is from the end of last year). Check the current documentation for the cloud ...
    Timo

  • What are Azure limitations for Websockets in Cloud Services (web and worker role)?

    A WebSocket server is to be built on the Azure platform with on-premises connections, and we have questions regarding limitations for WebSockets in Azure Cloud Services (web and worker roles).
    WebSockets can be configured for Web Sites and those limitations are understood, but Azure Websites is not an option.
    Instead it is planned to run a web service (without UI, no web site) as a cloud service with secure WebSocket (WSS) connections to on-premises machines. The WebSocket protocol is enabled for IIS8 on cloud service web and worker roles. Azure Service Bus Relay is not an option.
    Questions:
    1) Are Websockets supported for Azure Cloud services web and worker roles? we assume yes
    2) What are potential limitations from Azure side to support concurrent Websocket connections? We are aware that CPU, memory etc are limitations, but are there additional limitations from MS Azure side? 
     

    Hi,
    As far as I know, Azure cloud service web and worker roles support WebSockets; clients connect to the role through the endpoint you declare for it. If we use an Azure cloud service, we can monitor metrics such as CPU, memory, etc., and scale the cloud service on those metrics to keep the WebSocket connections working. Refer to
    http://azure.microsoft.com/en-us/documentation/articles/cloud-services-how-to-scale/ for more information about how to scale a cloud service.
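    As a hedged sketch of that endpoint declaration (service, role, site, endpoint, and certificate names are placeholders): a web role exposing WSS simply declares an HTTPS input endpoint in ServiceDefinition.csdef, and the WebSocket upgrade then rides over that endpoint through the built-in load balancer:
    <ServiceDefinition name="WsCloudService"
        xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
      <WebRole name="WsHostWebRole" vmsize="Small">
        <Sites>
          <Site name="Web">
            <Bindings>
              <Binding name="HttpsIn" endpointName="HttpsIn" />
            </Bindings>
          </Site>
        </Sites>
        <Endpoints>
          <!-- WSS connections arrive on the standard HTTPS endpoint. -->
          <InputEndpoint name="HttpsIn" protocol="https" port="443" certificate="SslCert" />
        </Endpoints>
        <Certificates>
          <Certificate name="SslCert" storeLocation="LocalMachine" storeName="My" />
        </Certificates>
      </WebRole>
    </ServiceDefinition>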
    Regards

  • "Failed to debug the Windows Azure Cloud Service project. The Output directory .... does not exist" - Looking for Solution Config Name Folder?

    Good evening,
    I've been working on and with a VS2013 Update 2 / Azure SDK 2.3 Cloud Service project for a while now and never had a problem debugging it (setting the .ccproj Project as Startup Project) but at the moment I cannot Debug it anymore - I always get the following
    error message:
    Failed to debug the Windows Azure Cloud Service project.  The output directory 'D:\Workspace\Development\Sources\AzureBackend\csx\Backend - Debug' does not exist.
    Now what's odd here is the last part: "Backend - Debug" is the solution configuration name, and ALL projects in that particular solution configuration are set to the Debug configuration. The .ccproj file also only specifies Debug|Any CPU (and Release|Any CPU respectively) as its output folder(s). Why is the solution config appearing up there?
    And more importantly, why is this happening and what can I do?
    Thanks,
    -Jörg
    PS: there seems to be a related Connect bug, and these sorts of issues do appear around the forums, but none contains a solution (neither reinstalling the Azure SDK nor cloaking the workspace and re-retrieving & building everything worked).

    Good morning Jambor,
    I already tried uninstalling everything Azure-tooling related, including the Azure SDK, restarting my machine, and re-installing the SDK.
    Same result. I can build the .ccproj perfectly fine and the cspack file IS generated perfectly fine; only debugging does not work, and there's NO information in the VS output window (again, all projects build successfully).
    I tried explicitly running VS as Administrator, no change. I removed all IIS Express sites (as the ccproj has one web worker role) and remapped my local TFS workspace; nothing helped.
    Since building works and deploying to the Azure cloud service (manually and via Publish inside VS) works perfectly, I am pretty sure this IS a bug and I'd LOVE to help get it fixed. As I said, currently I cannot debug and/or run & test my work, hence I cannot do ANY work.
