Best practice for using slots on Azure Websites

Hi Team,
I want to understand the website slot feature a little more deeply. Can anyone suggest how we should plan when we have testing, staging & production environments on Azure? (Development might be done in the Visual Studio Azure emulator.)
Take an example: we have ToDoApp. How should we use Azure Websites and the slot feature to make it available for testing, staging & production?
Q1) Should we create two different sites (one for testing and staging, a second for production), or should we put everything in one site with 3 slots?
Q2) If we create two different sites as in Q1, how should we move the package from staging to production once it is done with staging?
Q3) If we have one single site instead of two, can we have role-based control, e.g. whoever swaps the slot from testing to staging can act up to that level only, while no one except the product admin can swap from staging to production?
Q4) If none of the above is recommended, then what is, and how should we use the swap feature there to move the package to production?
Regards, Brijesh Shah

Hi Brij,
Deployment slots were designed with the whole development lifecycle in mind. A good practice is to have a source control branch per environment stage so you know what the changes are. There are generally two types of development:
One branch - you always fix forward, and code goes to production at least once per day. A build might get deployed to the staging environment (slot) to run full functional validation before taking live traffic, and is then deployed to production. With Azure Websites this can be as simple as swapping the staging slot with the production slot. After the swap, the (new) staging slot is ready to take the next build and run another round of validation. So here it is one site with many slots.
Multiple branches - when deploying to production is slower, you need multiple branches and slots, and a stabilization phase. You can have a Dev branch holding vNext, mapped to a vNext slot (you can set up continuous deployment here so every check-in is deployed); a Main branch, where only selected changes are integrated when it is time for a major release or hotfix, pointing to a Stage slot; and a Release branch pointing to a PreRelease slot, which is later swapped with Production after deployment to avoid a cold start. Better to stay with one site and many slots here as well, but the current limit is 5 slots.
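To make that concrete, here is a minimal sketch of creating one site with a slot per stage, using the current Az PowerShell module (the classic Azure module of this era had equivalent commands). All resource names are made up, and the App Service plan is assumed to exist already:

# One site on an existing Standard-or-higher plan (slots require Standard tier or above)
New-AzWebApp -ResourceGroupName "todoapp-rg" -Name "todoapp" -Location "East US" -AppServicePlan "todoapp-plan"

# One deployment slot per lifecycle stage
foreach ($slot in "vnext", "stage", "prerelease") {
    New-AzWebAppSlot -ResourceGroupName "todoapp-rg" -Name "todoapp" -Slot $slot
}
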
That said, swap is the easiest way to move code through stages :). There are several features that can be used:
Slot-sticky settings - during a swap, some settings stay with the slot while others move with the code (configurable only in PowerShell and the SDK at the moment; see the sketch after this list).
Auto swap - to avoid a cold start, you can deploy to the PreRelease slot and auto-swap with the Production slot right after the deployment finishes (this is available only in PowerShell and the SDK at the moment).
TiP (Testing in Production) - redirect only a small percentage of the traffic to the PreRelease slot (vNext, or your choice) to expose, say, 1% of customers to the new code base.
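For example, here is roughly how slot-sticky settings, auto swap, and a manual swap look with the current Az PowerShell cmdlets (a sketch only; the names are made up, and the cmdlet names differed in the classic module of this era):

# Mark an app setting as slot-sticky: it stays with the slot during a swap
Set-AzWebAppSlotConfigName -ResourceGroupName "todoapp-rg" -Name "todoapp" -AppSettingNames "ENVIRONMENT_NAME"

# Auto swap: promote PreRelease to Production right after a deployment finishes
Set-AzWebAppSlot -ResourceGroupName "todoapp-rg" -Name "todoapp" -Slot "prerelease" -AutoSwapSlotName "production"

# Manual swap: exchange the stage and production slots
Switch-AzWebAppSlot -ResourceGroupName "todoapp-rg" -Name "todoapp" -SourceSlotName "stage" -DestinationSlotName "production"
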
On Q3: RBAC (role-based access control) allows you to do exactly this: only admins have access to the Production slot, and separate groups can have access to the other slots. Thus only an admin can swap with production.
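For instance, a slot-scoped role assignment might look like this (a sketch; the group name, resource group, and subscription ID placeholder are all made up):

# Testers get rights on the stage slot only; with no assignment on the
# production site itself, only admins can swap into production
$group = Get-AzADGroup -DisplayName "ToDoApp Testers"
New-AzRoleAssignment -ObjectId $group.Id -RoleDefinitionName "Website Contributor" -Scope "/subscriptions/<subscription-id>/resourceGroups/todoapp-rg/providers/Microsoft.Web/sites/todoapp/slots/stage"
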
Hope that helps,
Galin Iliev [MCSD.NET, MCPD], http://www.galcho.com

Similar Messages

  • What is the best practice for using the Calendar control with the Dispatcher?

    It seems as if the Dispatcher is restricting access to the Query Builder (/bin/querybuilder.json) as a security best practice. However, the Calendar relies on this endpoint to build the events for the calendar. On author/publish this works fine, but once we place the Dispatcher in front, the Calendar no longer works. We've noticed the same behavior on the Geometrixx site.
    What is the best practice for using the Calendar control with the Dispatcher?
    Thanks in advance.
    Scott

    Not sure what exactly you are asking but Muse handles the different orientations nicely without having to do anything.
    Example: http://www.cariboowoodshop.com/wood-shop.html

  • Best practices for using the knowledge directory

    Does anyone know when it is best to store docs in the Knowledge Directory versus Collab? They are both searchable, but I guess you can publish from the Publisher to the KD. Does anyone have best practices for using the KD, or for setting up taxonomies in the KD?

  • Best practices for using the 'cost details' fields

    Hi
    Please could you advise us on the best practices for using the 'cost details' field within Pricing. Currently I cannot find a way to surface the individual Cost Details fields within the Next Generation UI, even with the 'display both cost and price' box ticked. They seem to be surfaced when the Next Generation UI is turned off, but I cannot find them when it is turned on. We can see the 'Pricing Summary' field, but this does not fulfill our needs, as some of our services have both recurring and one-off costs.
    Attached are some screenshots to further explain the situation.
    Many thanks,
    Richard Thornton

    Hi Richard,
    If you need to configure dynamic pricing that may vary by tenant and/or if you want to set up cost drivers that are service item attributes, you should configure Billing Tables in the Demand Management module in 10.0. 
    The cost detail functionality in 9.4 will likely be merged with the new pricing feature in 10.0. The current plan is not to bring cost detail into the Service Catalog module.

  • What are the best practices for using the enhancement framework?

    Hello enhancement framework experts,
    Recently, my company upgraded to SAP NW 7.1 EhP6.  This presents us with the capability to use the enhancement framework.
    A couple of senior programmers were asked to deliver a guideline for use of the framework.  They published the following statement:
    "SAP does not guarantee the validity of the enhancement points in future releases/versions. As a result, any implemented enhancement points may require significant work during upgrades. So, enhancement points should essentially be used as an alternative to core modifications, which is a rare scenario.".
    I am looking for confirmation or contradiction of the statement "SAP does not guarantee the validity of enhancement points in future releases/versions...". Is this a true statement for both implicit and explicit enhancement points?
    Is the impact of activated explicit and implicit enhancements on an SAP upgrade much greater than that of BAdIs and user exits?
    Are there any SAP-published guidelines/best practices for use of the enhancement framework?
    Thank you,
    Kimberly

    Found an article that answers this question quite well:
    [How to Get the Most From the Enhancement and Switch Framework as a Customer or Partner - Tips from the Experts|http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/c0f0373e-a915-2e10-6e88-d4de0c725ab3]
    Thank you Thomas Weiss!

  • Best Practice while configuring Traffic Manager for Azure Website

    Hi Team,
    I want to understand what the best practice is when we configure Traffic Manager for an Azure website.
    To give you the background, let me explain my requirement: I have one website whose target audience would be 40% East US, 40% UK, and the remaining 20% Asia-Pacific.
    Now, what I want is a failover + performance based Traffic Manager configuration.
    My thinking:
    1) We need to create one website with 2 instances in each region (East US, East Asia, West US, for example), so 3 deployments of the website in total (and give each a region-based URL).
    2) Create a Traffic Manager profile based on performance and add those 3 instances; that would become website-tmonperformance.
    3) Create a Traffic Manager profile based on failover and add those 3 instances; that would become website-tmonfailover.
    4) Create one more Traffic Manager profile and ?? I don't know the criterion, but add both of the above Traffic Manager profiles here and take its URL as the final one for end users.
    I am not sure (1) whether this is the right approach, and (2) if it is, which criterion we should select when creating the final Traffic Manager profile in step 4: round-robin, performance, or failover?
    And after all this, if a user tries to access the site from the US, will Traffic Manager divert them to a US data centre, or will it wait for failover, so that until then they are served from East Asia if East Asia is the first instance in my configuration?
    Regards, Brijesh Shah

    Hi Jonathan,
    Thanks for your quick reply. Actually my question is a bit different; let me explain it another way.
    I was asking for a recommendation from the Azure Traffic Manager team on whether my understanding is correct. We want performance with failover.
    So we have one Azure website, say todoapp, which I deployed in 3 different regions. Now I want performance-based routing as well as failover-based routing, but obviously I can't give two URLs to my end users, so on top of those I will require one more Traffic Manager profile. So:
    Step 1: I will create one Traffic Manager profile with the performance criterion, named TMForPerformance.trafficmanager.com, where I will add all 3 instances (all in different regions, so that won't create any issue).
    Step 2: I will create one more Traffic Manager profile with the failover criterion, named TMForFailover.trafficmanager.com, where I will add all 3 instances (all in different regions, so that won't create any issue).
    Step 3: I will create one final Traffic Manager profile, named todoapp.trafficmanager.com, where I will add these two Traffic Manager profiles instead of the 3 regional websites (see the sketch below).
    Question 1) Is this the correct structure if we want to achieve performance with failover, or is there a better solution?
    Question 2) In step 3, what criterion should we select: performance, round robin, or failover?
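    To make the structure concrete, steps 1-3 would look roughly like this with the current Az.TrafficManager PowerShell cmdlets (a sketch only: the resource group name is made up, the relative DNS names must be globally unique, and adding the three regional website endpoints inside each child profile is omitted):

    # Step 1: child profile with performance routing
    $perf = New-AzTrafficManagerProfile -Name "TMForPerformance" -ResourceGroupName "todoapp-rg" `
        -TrafficRoutingMethod Performance -RelativeDnsName "tmforperformance" -Ttl 30 `
        -MonitorProtocol HTTP -MonitorPort 80 -MonitorPath "/"

    # Step 2: child profile with failover routing (called Priority in the cmdlets)
    $failover = New-AzTrafficManagerProfile -Name "TMForFailover" -ResourceGroupName "todoapp-rg" `
        -TrafficRoutingMethod Priority -RelativeDnsName "tmforfailover" -Ttl 30 `
        -MonitorProtocol HTTP -MonitorPort 80 -MonitorPath "/"

    # Step 3: the parent profile that end users see; which routing method it
    # should use is exactly Question 2, so Priority here is only a placeholder
    # (a Performance parent would also need -EndpointLocation on each child)
    New-AzTrafficManagerProfile -Name "todoapp" -ResourceGroupName "todoapp-rg" `
        -TrafficRoutingMethod Priority -RelativeDnsName "todoapp" -Ttl 30 `
        -MonitorProtocol HTTP -MonitorPort 80 -MonitorPath "/"

    # Nest both child profiles under the parent instead of the regional sites
    New-AzTrafficManagerEndpoint -Name "perf-child" -ProfileName "todoapp" -ResourceGroupName "todoapp-rg" `
        -Type NestedEndpoints -TargetResourceId $perf.Id -EndpointStatus Enabled -MinChildEndpoints 1 -Priority 1
    New-AzTrafficManagerEndpoint -Name "failover-child" -ProfileName "todoapp" -ResourceGroupName "todoapp-rg" `
        -Type NestedEndpoints -TargetResourceId $failover.Id -EndpointStatus Enabled -MinChildEndpoints 1 -Priority 2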
    Regards, Brijesh Shah

  • Best Practice on using and refreshing the Data Provider

    I have a 'users' page that lists all the users in a table - let's call it the master page. One can click on the first column of the master page, and it takes them to the 'detail' page, where one can view and update the user detail.
    Master and detail use two different data providers based on two different CachedRowSets.
    Master CachedRowSet (session scope): SELECT * FROM Users
    Detail CachedRowSet (session scope): SELECT * FROM Users WHERE User_ID=?
    I want the master to be updated whenever the detail page is updated. There are various options to choose from:
    1. I could call masterDataProvider.refresh() after I call detailDataProvider.commitChanges() - which happens on the Save button of the detail page. The problem with this approach is that the master page will not be refreshed across all user sessions, but only for the one saving the detail page.
    2. I could call masterDataProvider.refresh() in the preRender() event of the master page. The problem with this approach is that refresh() will be called every single time someone views the master page. Furthermore, if someone goes to the next page (using the built-in pagination on the master page table), clicks on a user to view its detail, and then closes the detail page, the table does not keep track of the pagination (which page the user was on when he/she clicked on a record to view its detail).
    I can find some workaround to resolve this problem, but I think this is a fairly common usage (two-page CRUD with master-detail). If we can discuss and document some best practices for doing this, it will help all developers.
    Discussion:
    1. What is the best practice for setting the scope of the data providers and CachedRowSets? I noticed that in the tutorial examples, they used page/request scope for the data provider but session scope for the associated CachedRowSet.
    2. What is the best practice for refreshing the master data provider when a record/row is updated in the detail page?
    3. How do we keep track of pagination (which page the user was on when he/she clicked on the first column in the master page table), so that upon updating the detail page we can provide the user with a 'Close' button to take them back to whatever page number he/she was on?
    Thanks

    Thanks. I think this is useful information for all. Do we even need two data providers and associated row sets? Can't we just use TableRowDataProvider, like this:
    TableRowDataProvider rowData = (TableRowDataProvider) getBean("currentRow");
    If so, I am trying to figure out how to pass this from the master page to the detail page. Essentially the detail page uses a row from the master data provider. Then I need the user to be able to change the detail (row) and save the changes (in the table). This is a fairly common issue in most data-driven web apps. I need to design it right, versus just coding.

  • Best Practice regarding using and implementing the pref.txt file

    Hi All,
    I would like to start a post regarding what is best practice in using and implementing the pref.txt file. We have reached the stage where we are about to go live with Discoverer Viewer, and I am interested to know what others have encountered or done with their pref.txt file and Viewer look and feel.
    Have any of you been able to add additional lines to the file? If so, please share ;-)
    Look forward to your replies.
    Lance

    Hi Lance
    Wow, what a question, and the simple answer is - it depends. It depends on whether you want the query predictor, whether you want to increase the timeouts for users and lists of values, whether you want the Plus available items and selected items panes displayed by default, and so on.
    Typically, most organizations go with the defaults, with the exception that you might want to consider turning off the query predictor. That predictor is usually a pain in the neck, and most companies turn it off, thus increasing query performance.
    Do you have a copy of my Discoverer 10g Handbook? If so, take a look at pages 785 to 799 where I discuss in detail all of the preferences and their impact.
    I hope this helps
    Best wishes
    Michael Armstrong-Smith
    URL: http://learndiscoverer.com
    Blog: http://learndiscoverer.blogspot.com

  • Does anyone know the best practices to use Captivates on an Elearning course, please...

    I need to know the best practices for using Captivate in an eLearning course, such as how much information it should contain, etc.

    Hello There,
    Adobe Captivate has multiple workflows which can help you create eLearning courses. It can create various types of learning content, and I suggest you visit the following links.
    Product Info: www.adobe.com/products/captivate/
    OnDemand Seminars to get more info on what captivate can do: http://www.adobe.com/cfusion/event/index.cfm?event=list&type=ondemand_seminar&loc=en_us
    Register for Trainings and Webinars: http://www.adobe.com/cfusion/event/index.cfm?event=list&loc=en_us&type=&product=Captivate&interest=&audience=&monthyear=
    If you have specific scenarios to discuss, you can mail me at [email protected] or tweet me at @vish_adobe
    Thanks,
    Vish
    @vish_adobe

  • Best practices for making the end result web help printable

    Hi all, using TCS3 Win 7 64 bit.  All patched and up to date.
    I was wondering what the best practices are for the following scenario:
    I am authoring in Frame, link by reference into RH.
    I use Frame to generate PDFs and RH to generate webhelp.
    I have tons of conditional text which ultimately produces four separate versions of the PDFs as well as of the online help - I handle these condition codes in FM and pull them into RH.
    I use a CSS on all pages of my RH project to make it 'look' right.
    We now need to add the ability for end users to print the webhelp - beyond just CTRL+P, because a) that cuts off the larger images and b) it doesn't show the header, footer, logo, date, etc. (the stuff that is in the master pages in FM).
    My thought is to do the following:
    Add four sentences (one for each condition) on the first page of the FM book. Each one would be coded for audience A, B, C, or D (each of which requires a separate PDF) as well as coded ONLINE so that they don't show up in the printed PDFs that I generate out of Frame. Once the PDFs are generated, I would manually add a hyperlink in RH to each sentence and link the associated PDF (this seems to add the PDF file to the baggage files in RH). Then, when I generate my RH webhelp, it would show the link, with the PDF, correctly based on the condition of the user looking at the help.
    My questions are as follows:
    1- This seems more complicated than it needs to be. Is it?
    2- I would have to manually update every single hyperlink each time I update my FM book, because I am single-sourcing out of Frame and I am unable (as far as I can tell) to link a PDF within the Frame doc. I update the entire book (over 1500 pages) once every 6 weeks, so while this wouldn't be a common occurrence, it would happen regularly, and it would be manual (as far as I can tell).
    3- Eventually I would have countless PDFs inside RH. I assume this will eventually impact performance, so this also doesn't seem ideal.
    If anyone has thoughts/suggestions on a simpler or better way to do this, I'd certainly appreciate it. I have watched the Adobe TV tutorial on adding a master page, but that seems to remove the ability to use a CSS across all my topics, and it also requires the manual addition of a hyperlink to the PDF file, which is what I am proposing above anyway (so I'm not sure of the benefit).
    Thanks in advance,
    Adriana

    Anything other than CTRL+P is going to create a lot of work, so perhaps I can comment on what you see as the drawbacks to that:
    a)that cuts off the larger images and b)it doesn't show header, footer,
    logo, date, etc. (stuff that is in the master pages in FM).
    Larger images.
    I simply make a point of keeping my image sizes down to a size that works. It's not a problem for me but that doesn't mean it will work for you. Here all I am doing is suggesting you review how big a problem that would be.
    Master Page Details
    I have to preface this with the statement that I don't work with FM. The details you refer to print when they are in RoboHelp master pages. Perhaps one of the FM users here can comment on how to get FM master pages to come through.
    See www.grainge.org for RoboHelp and Authoring tips
    @petergrainge

  • Best Practices for Using Photoshop (and Computing in General)

    I've been seeing some threads that lead me to realize that not everyone knows the best practices for doing Photoshop work on a computer, or for conscientious computing in general. I thought it might be a good idea for those of us with some experience to contribute and discuss best practices for making the Photoshop and computing experience more reliable and enjoyable.
    It'd be great if everyone would contribute their ideas, and especially their personal experience.
    Here are some of my thoughts on data integrity (this shouldn't be the only subject of this thread):
    Consider paying more for good hardware. Computers have almost become commodities, and price shopping abounds, but there are some areas where spending a few dollars more can be beneficial.  For example, the difference in price between a top-of-the-line high performance enterprise class hard drive and the cheapest model around with, say, a 1 TB capacity is less than a hundred bucks!  Disk drives do fail!  They're not all created equal.  What would it cost you in aggravation and time to lose your data?  Imagine it happening at the worst possible time, because that's exactly when failures occur.
    Use an Uninterruptable Power Supply (UPS).  Unexpected power outages are TERRIBLE for both computer software and hardware.  Lost files and burned out hardware are a possibility.  A UPS that will power the computer and monitor can be found at the local high tech store and doesn't cost much.  The modern ones will even communicate with the computer via USB to perform an orderly shutdown if the power failure goes on too long for the batteries to keep going.  Again, how much is it worth to you to have a computer outage and loss of data?
    Work locally, copy files elsewhere.  Photoshop likes to be run on files on the local hard drive(s).  If you are working in an environment where you have networking, rather than opening a file right off the network, then saving it back there, consider copying the file to your local hard drive then working on it there.  This way an unexpected network outage or error won't cause you to lose work.
    Never save over your original files.  You may have a library of original images you have captured with your camera or created.  Sometimes these are in formats that can be re-saved.  If you're going to work on one of those files (e.g., to prepare it for some use, such as printing), and it's a file type that can be overwritten (e.g., JPEG), as soon as you open the file save the document in another location, e.g., in Photoshop .psd format.
    Save your master files in several places.  While you are working in Photoshop, especially if you've done a lot of work on one document, remember to save your work regularly, and you may want to save it in several different places (or copy the file after you have saved it to a backup folder, or save it in a version management system).  Things can go wrong and it's nice to be able to go back to a prior saved version without losing too much work.
    Make Backups.  Back up your computer files, including your Photoshop work, ideally to external media.  Windows now ships with a quite good backup system, and external USB drives with surprisingly high capacity (e.g., Western Digital MyBook) are very inexpensive.  The external drives aren't that fast, but a backup you've set up to run late at night can finish by morning and will be there if/when you have a failure or loss of data.  And if you're really concerned about backup integrity, you can unplug an external drive and take it to another location.
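    As a bare-bones example of that kind of scheduled copy, a robocopy one-liner (run from PowerShell or Task Scheduler) will mirror a working folder to an external drive. The paths here are made up, and note that /MIR mirrors deletions too, so point it at a dedicated backup folder:

    # Mirror the Photoshop working folder; retry locked files briefly, keep a log
    robocopy "D:\PhotoshopWork" "E:\Backups\PhotoshopWork" /MIR /R:2 /W:5 /LOG:"E:\Backups\robocopy.log"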
    This stuff is kind of "motherhood and apple pie" but it's worth getting the word out I think.
    Your ideas?
    -Noel

    APC Back-UPS XS 1300.  $169.99 at Best Buy.
    Our power outages here are usually only a few seconds; this should give my server about 20 or 25 minutes run-time.
    I'm setting up the PowerChute software now to shut down the computer when 5 minutes of power is left.  The load with the monitor sleeping is 171 watts.
    This has surge protection and other nice features as well.
    -Noel

  • Best practice to use MediaPlayer?

    Is it best practice to use attributes of MediaPlayer (such as playing), or to request the PlayTrait and read its playState attribute?
    (This question comes from a customer.  Reposting here for the OSMF team to respond and for the benefit of the whole group.)
    Sumner Paine
    osmf product manager

    For playback use cases, I'd recommend you stick with MediaPlayer, as it's simpler to use and manages all of the trait event registration.

  • Best practice for using messaging in medium to large cluster

    What is the best practice for using messaging in a medium to large cluster, in a system where all the clients need to receive all the messages and some of the messages can be really big (a few megabytes and maybe more)?
    I will be glad to hear any suggestion or to learn from others experience.
    Shimi

    publish/subscribe, right?
    lots of subscribers, big messages == lots of network traffic.
    it's a wide open question, no?
    %

  • Best practice for using common VIs

    Hi,
    I have some projects which use some common VIs (like open/close file format and similar). What is the best practice for using these common VIs? Right now I copy them into the project folders, but then I have multiple copies of some VIs, which is difficult to maintain if I need to modify something in these common files.
    What should I use? Still copy, or link the common folder into my project, or use a packed project library, or something else?
    Thanks

    I don't know if it is the best practice, but with every new project I create a new virtual folder in my project (right-click on My Computer -> New -> Virtual Folder) and link that virtual folder to the folder that contains all my common code (VIs/controls/classes) by right-clicking on the newly created virtual folder and choosing "Convert to auto-populating folder".
    One downside is that if you modify one of those VIs for your new application, it might break one of your older projects.
    I also use source code control (svn / git / mercurial / etc..) so if that happens, I can always go back to a previous working version.

  • Best Practice for using multiple models

    Hi Buddies,
    Can you tell me the best practices for using multiple models in a single WD application?
    What I mean is: I am using 3 RFCs in a single application for my function. Each time, I import that RFC model under WD -> Models, and I bind each model separately to the component controller. Is this the right way to implement multiple models in a single application?

    It very much depends on your design, but one RFC per model is definitely a no-no.
    Refer to this document to understand how you should use the model in the most efficient way.
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/705f2b2e-e77d-2b10-de8a-95f37f4c7022?quicklink=events&overridelayout=true
    Thanks
    Prashant
