Best practice for maintaining URLs between Dev, Test, Production servers

We sometimes send order confirmations which include links to other services in RequestCenter.
For example, we might use the link <a href="http://#Site.URL#/myservices/navigate.do?query=orderform&sid=54">Also see these services</a>
However, the service ID (sid=54) changes between our dev, test, and production environments, so we have to go through the notifications manually and update every link when we deploy between servers.
Any best practices out there?

Your best practice in this instance depends a bit on how much work you want to put in up front and how tied you are to the idea of a direct link to a service.
If your team uses a decent build sheet and migration checklist, then updating the various URLs can just be part of the process. This is cumbersome, but it's the least "technical" solution if you want to continue using direct links.
A more technical solution would be to replace your direct links with links to a "broker page". It's relatively simple to create an ASP page that accepts the name of the service as a parameter, executes a SQL query against the database to return the ServiceID, constructs the appropriate link, and passes the user through; a sketch of the idea follows.
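The reply above suggests an ASP page; purely to keep the sketch self-contained, here is the same broker logic expressed as a Java servlet. The table and column names (DirService, ServiceName, ServiceID) and the jdbcUrl init parameter are assumptions, not confirmed RequestCenter schema names, so substitute whatever your database actually uses:

    // BrokerServlet.java -- resolves a service name to this environment's service ID,
    // then redirects, so notification links never embed a hard-coded sid.
    import java.io.IOException;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class BrokerServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            String serviceName = req.getParameter("service");
            if (serviceName == null || serviceName.isEmpty()) {
                resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "Missing service parameter");
                return;
            }
            // jdbcUrl is an assumed servlet init-param pointing at this environment's DB.
            String jdbcUrl = getServletConfig().getInitParameter("jdbcUrl");
            String sql = "SELECT ServiceID FROM DirService WHERE ServiceName = ?";
            try (Connection conn = DriverManager.getConnection(jdbcUrl);
                 PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, serviceName);
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        // The sid differs per environment; the lookup hides that from the link.
                        resp.sendRedirect("/RequestCenter/myservices/navigate.do?query=orderform&sid="
                                + rs.getInt("ServiceID"));
                    } else {
                        resp.sendError(HttpServletResponse.SC_NOT_FOUND,
                                "Unknown service: " + serviceName);
                    }
                }
            } catch (SQLException e) {
                throw new ServletException("Service lookup failed", e);
            }
        }
    }

A notification would then link to something like http://#Site.URL#/broker?service=Order%20New%20Desktop%20or%20Laptop, which stays identical across dev, test, and production.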
A less precise, but typically viable, option would be to use links that take advantage of the built-in search query functionality. Your link might return more results than just the one service, but you can typically tailor your search query to narrow it down. For example:
If you have a service called Order New Laptop or Desktop and you want to provide a link that will get the user to that service you could use: http://#Site.URL#/RequestCenter/myservices/navigate.do?query=searchresult&&searchPattern=Order%20New%20Desktop%20or%20Laptop
The above would open the site and present the same results as if the user had searched for "Order New Desktop or Laptop" manually. It's not as exact as providing a direct link, but it's quick to implement, requires no special technical expertise, and is "environment agnostic".

Similar Messages

  • Best Practice for Migrating code from Dev to a fresh Test ODI instance

    Dear All,
    This is Priya.
    We are using ODI version 11.1.1.6.
    In my ODI project we have separate installations for Dev, Test and Prod, i.e. the master repositories are not common between the three. My code is now ready in Dev. The Test environment has just been installed with ODI, and its Master and Work repositories have been created. That's it.
    Now I need to understand the simplest and best way to export the code from Dev and migrate it to the Test environment. Can someone describe it as a step-by-step procedure in 5-6 lines?
    Some questions on current state.
    1. Do the IDs of the master and work repositories in Dev and Test need to be the same?
    2. I usually see in the export file a repository ID of 999 and fail to understand what it is exactly. None of my master or work repositories has that ID.
    3. Logical Architecture objects and contexts do not have an export option. What is the suitable alternative for this?
    Thanks,
    Priya
    Edited by: 948115 on Jul 23, 2012 6:19 AM

    948115 wrote:
    > Now I need to understand the simplest and best way to export the code from Dev and migrate it to the Test environment. Can someone describe it as a step-by-step procedure in 5-6 lines?
    If this is the first time you are moving to QA, it is better to export/import the complete work repositories. If it is not the first time, create scenarios of the specific packages and export/import those to QA. With scenarios you need not bother about models/datastores. Keep in mind that the logical schema names in QA should be the same as those used in DEV.
    > 1. Do the IDs of the master and work repositories in Dev and Test need to be the same?
    They should be different.
    > 2. I usually see in the export file a repository ID of 999 and fail to understand what it is exactly.
    It is there to ensure object uniqueness across several work repositories. For more detail you can refer to:
    http://docs.oracle.com/cd/E14571_01/integrate.1111/e12643/export_import.htm
    http://odiexperts.com/odi-internal-id/
    > 3. Logical Architecture objects and contexts do not have an export option. What is the suitable alternative for this?
    If you export the topology, you will get the logical connection and context details. If you do not export the topology, you will need to create the contexts and the physical/logical connections manually.

  • Best Practices for Maintaining SSAS Projects

    We started using SSAS recently and we maintain one project that we deploy to both the DEV and PROD instances by changing the deployment properties. However, this gets messy when we introduce new fact tables into the DEV data warehouse (ones that are not yet promoted to the production data warehouse). While we work on adding new measure groups and calculations (based on the new fact tables in DEV), we are unable to make any changes to the production cube (such as changes to calculations, formatting, etc.) requested by business users. Sorry for the long question, but is there a best practice for managing these projects and migrations? Thanks.

     While we work on adding new measure groups and calculations (based on new fact tables in DEV) we are unable to make any changes to production cube (such as changes to calculations, formatting etc) requested by business users.
    Hi Sbc_wisc,
    You can create a new project by importing the metadata from the production cube on the server, using the Import from Server (Multidimensional and Data Mining) Project template in SQL Server Data Tools (SSDT). Then make the production-only changes in that project and redeploy it to the production server.
    Reference:
    Import a Data Mining Project using the Analysis Services Import Wizard
    Regards,
    Charlie Liao
    TechNet Community Support

  • Best practice for moving images between projects?

    Hey all,
    I have a project that has an album inside of it. I want to move the album, containing all of the photographs to a new project. If I drag the photos individually to the new project it moves them successfully although they now don't belong to an album in their new project. If I drag the album itself it moves the album and photographs but leaves the photos in the original project as well.
    Does anyone have some best practice ideas for this scenario?
    Thanks in advance for any help!

    As you have discovered if the drop target is the project the images move projects. If the drop target is an album the images show up in the album but do not actually move anywhere. Moving albums does nothing to move masters. So...
    Select all of the images in the album. Drag them to the new project and then drag the album to the new project. Simple enough.
    RB

  • Best practices for realtime communication between background tasks and main app

    I am developing (in fact, porting to a WinRT Universal App) an application connecting to Bluetooth medical devices. In order to support background connectivity, it seems best to use background tasks triggered by a device connection. However, some of these devices provide a stream of data which has to be passed to the main app in real time when it is active, e.g. to show an ECG on the screen. So my task should ideally receive and store data all the time (both background and foreground) and additionally let the main app receive it live when it is in the foreground.
    My question is: how do I make the background task pass real-time data to the app when it is active? The documentation talks about using storage, but that does not seem optimal for real-time messaging. Looking for best practices and advice. Platform is Windows 8.1 and Windows Phone 8.1.

    Hi Michael,
    Windows Phone apps have resource quotas. To prevent these quotas from interfering with real-time communication functionality, background tasks using the ControlChannelTrigger and PushNotificationTrigger receive guaranteed resource quotas for every running task. You can find more information at
    https://msdn.microsoft.com/en-us/library/windows/apps/xaml/Hh977056(v=win.10).aspx (see the "Background task resource guarantees for real-time communication" section). ControlChannelTrigger is not supported on Windows Phone, so have a look at the PushNotificationTrigger class instead:
    https://msdn.microsoft.com/en-us/library/windows/apps/xaml/windows.applicationmodel.background.pushnotificationtrigger.aspx
    Regards,

  • Best Practice for Maintaining a web scoped feature

    I am in the process of creating a web-scoped feature that creates a few lists and some event receivers for those lists.
    My plan was to have the lists created on feature activation. This way, users can simply activate the feature if they want to take advantage of this functionality on a site-by-site basis.
    While trying to flesh out this idea, I realized that if we ever need to make changes to this feature in the future, we'll likely have to uninstall and reinstall the solution. After uninstalling/reinstalling the solution, the feature will
    no longer be active on any site by default. This means that all of the users who were using the functionality will have to go in and re-activate the feature to continue using the event receivers.
    What I'd like is that the sites that had this feature activated still have it activated after I uninstall and reinstall the solution, and that the sites which didn't have the feature active still don't.
    I know that there is the option to update a solution, which will do what I want, but updating is limited in that I cannot add additional items to the solution. Which means eventually I'll have to uninstall and reinstall.
    Is there a good way to go about this?

    If you are modifying or expanding the features, then your best option is a Feature upgrade; a sketch of the idea follows below.
    http://www.sharepointnutsandbolts.com/2010/06/feature-upgrade-part-1-fundamentals.html
    >> I know that there is the option to update a solution, which will do what I want, but updating is limited in that I cannot add additional items to the solution. Which means eventually I'll have to uninstall and reinstall.
    Yes, this is correct. Updating a solution works only for the same set of files and features in the existing solution. If there is any other change, you have to retract and deploy a new solution.
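    As a minimal sketch of what the Feature upgrade approach looks like in Feature.xml, assuming the feature moves from version 1.0.0.0 to 2.0.0.0 and the change is one new element manifest (the GUID, title, paths and version numbers here are placeholders):

        <Feature xmlns="http://schemas.microsoft.com/sharepoint/"
                 Id="00000000-0000-0000-0000-000000000000"
                 Title="List Provisioning Feature"
                 Scope="Web"
                 Version="2.0.0.0">
          <UpgradeActions>
            <!-- Applies to sites still running any 1.x version of the feature -->
            <VersionRange BeginVersion="1.0.0.0" EndVersion="2.0.0.0">
              <!-- Provisions the newly added lists/receivers without deactivating the feature -->
              <ApplyElementManifests>
                <ElementManifest Location="NewLists\Elements.xml" />
              </ApplyElementManifests>
            </VersionRange>
          </UpgradeActions>
        </Feature>

    Sites that already had the feature activated stay activated; after deploying the updated solution you trigger the upgrade programmatically (for example by calling SPFeature.Upgrade on the features returned by QueryFeatures) instead of uninstalling and reinstalling.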
    My Blog: http://www.sharepoint-journey.com

  • Best practice for @EJB injection in JUnit test (out-of-container)?

    Hi all,
    I'd like to run a JUnit test for a Stateless bean A which has another bean B injected via the @EJB annotation. Both beans are pure EJB 3 POJOs.
    The JUnit test should run out-of-container and is not meant to be an EJB client, either.
    What is the easiest/suggested way of getting this injection to happen without explicitly having to instantiate bean B in my test setup?

    You can deal with entity beans without the container-managed scenario: obtain an instance of EntityManager from an EntityManagerFactory, providing a persistence.xml file that names the provider (TopLink, Hibernate, ...); then you can use your entities as plain, unmanaged classes. A sketch follows.
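    A minimal sketch of that approach, assuming a persistence unit named "test-pu" in META-INF/persistence.xml. The nested stand-in beans and the setter are illustrative only: nothing processes @EJB outside the container, so the usual out-of-container workaround is to instantiate bean B yourself and wire it by hand.

        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Persistence;
        import org.junit.After;
        import org.junit.Before;
        import org.junit.Test;
        import static org.junit.Assert.assertEquals;

        public class BeanATest {
            // Illustrative stand-ins for the question's beans A and B.
            static class BeanB { String greet() { return "hello"; } }
            static class BeanA {
                private BeanB beanB;
                void setBeanB(BeanB beanB) { this.beanB = beanB; } // manual stand-in for @EJB
                String delegate() { return beanB.greet(); }
            }

            private EntityManagerFactory emf;
            private EntityManager em;

            @Before
            public void setUp() {
                // Bootstraps JPA from persistence.xml; the provider is whatever that
                // file declares (TopLink, Hibernate, ...). Only needed if the test
                // actually touches entities.
                emf = Persistence.createEntityManagerFactory("test-pu");
                em = emf.createEntityManager();
            }

            @Test
            public void beanBIsWiredByHand() {
                BeanA a = new BeanA();
                a.setBeanB(new BeanB()); // replaces container injection in the test
                assertEquals("hello", a.delegate());
            }

            @After
            public void tearDown() {
                if (em != null) em.close();
                if (emf != null) emf.close();
            }
        }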

  • Execute SQL Tasks Failing for Duplicate Syntax Between DEV and Production DB

    Newbie here...be patient with me!
    I added tasks to refresh two tables (delete from, insert into, update) to an SSIS project, and I have them running from the WinXP scheduler. The issue:
    In dev the tasks integrate and execute successfully from the scheduler.
    In prod I can right-click and execute each of the three tasks without any problem, but the same tasks cause my project to fail when executed from the scheduler.
    Questions:
    Any ideas about what I am failing to see?
    How do I get meaningful log messages from the tasks that are failing?
    Thanks for your ideas...
    Installed Edition: IDE Standard
    SQL Server Analysis Services  
    Microsoft SQL Server Analysis Services Designer
    Version 9.00.1399.00
    SQL Server Integration Services  
    Microsoft SQL Server Integration Services Designer
    Version 9.00.1399.00
    SQL Server Reporting Services  
    Microsoft SQL Server Reporting Services Designers
    Version 9.00.1399.00
    Microsoft Visual Studio 2005
    Version 8.0.50727.42  (RTM.050727-4200)
    Microsoft .NET Framework
    Version 2.0.50727

    If I understand correctly, you are trying to run an SSIS package from the Windows XP Task Scheduler. The reason for the failure may be permissions on the resources the package needs when it runs under the scheduler's account; however, it is difficult to say without looking at the error message.
    Scheduled Tasks maintains a log file (Schedlgu.txt) in the C:\Windows folder. You can view the log from the Scheduled Tasks window by clicking
    View Log on the Advanced menu.
    The log file size is 32 kilobytes (KB); when the file reaches its maximum size, it automatically starts recording new information at the beginning of the log file, writing over the old information.
    Refer to http://support.microsoft.com/kb/308558 and
    http://msdn.microsoft.com/en-us/library/windows/desktop/aa383604(v=vs.85).aspx
    For errors from the package itself (rather than the scheduler's own start/stop log), see the dtexec sketch below.
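    One way to capture meaningful, package-level log messages is to have the scheduled task run a small batch file that invokes the package through dtexec and redirects its console reporting to a file. A minimal sketch; the paths are hypothetical, and /Rep E restricts reporting to errors (use V for verbose):

        :: refresh_tables.cmd -- invoked by the scheduled task instead of the package directly
        dtexec /F "C:\Packages\RefreshTables.dtsx" /Rep E > C:\Logs\RefreshTables.log 2>&1

    If the package runs fine interactively but fails from the scheduler, comparing this log with an interactive run usually points straight at the permission or path that differs between the two accounts.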
    Regards, RSingh

  • SAP Best Practices for Payroll

    Hi Gurus,
    We have SAP ECC 6.0 installed at our client site. My client is developing software for SAP Payroll. They want to run Payroll in their SAP system to figure out a few things. SAP has recommended installing SAP Best Practices for HCM to get some test data. My questions to you all are:
    1) SAP Best Practices do provide preconfigured settings and some amount of test data. How do I install SAP Best Practices for HCM on my client's system? Please explain in detail, because I have no knowledge of how to proceed; I am an SAP HCM functional consultant and we do not have any Basis consultant in our team.
    2) How long can it take to install SAP Best Practices for HCM, and what is the complete procedure for doing it?
    3) Is it true that the test data provided by SAP BP and the preconfigured base settings will be sufficient to run Payroll? Mind you, my client does not want to run payroll to their own requirements; they just want to see how it is run in SAP with the SAP test data.
    I would really appreciate any knowledge you can share on this matter.
    Thanks a ton
    Sanchit Gupta

    Friend,
    You should visit www.service.sap.com (you need a login and password) and search under EUROPE,
    or try www.help.sap.com.
    Regards
    Renu

  • What is the Best practice for ceramic industry?

    Dear All;
    i would like to ask two questions:
    1- Which manufacturing category (process or discrete) fits the ceramic industry?
    2- What is the best practice for the ceramic industry?
    Please note, from the link below
    [https://websmp103.sap-ag.de/~form/sapnet?_FRAME=CONTAINER&_OBJECT=011000358700000409682008E ]
    I see that the ceramic industry falls under a category called building materials, which in turn falls under mill products and mining,
    but there are no best practices for building materials or even mill products; only fabricated metals and mining best practices are available.
    thanks in advance

    Hi,
    I understand that you refer to the production of ceramic tiles. The PP solution was process manufacturing, with these steps: raw materials preparation (glazes and frits), dry pressing (I don't know the extrusion process), glazing, firing (single fire), sorting and packing. In Spain these are usually all-in-one solutions (R/3 or ECC). The production of decors may add fast firing and additional processes.
    In my opinion, the interesting part is batch determination in SD, which you must carry out in the sales order because builders want the order to be homogeneous in tone and caliber, and they may split the order into different deliveries. Think of a batch as a tone (different colours from firing and so on) and a caliber.
    I hope this helps you
    Regards,
    Eduardo

  • Best practice for the test environment & DBA plan activities documents

    Dears,,
    In our company we have done hardware sizing.
    We have three environments (Test/Development, Training, Production).
    But the test environment has fewer servers than the production environment.
    My question is:
    What is the best practice for the test environment?
    (Are there any recommendations from Oracle related to this? Any PDF files that could help me?)
    Also, can I have a detailed document regarding the DBA plan activities?
    I appreciate your help and advice.
    Thanks
    Edited by: user4520487 on Mar 3, 2009 11:08 PM

    Follow your build document for the same steps you used to build production.
    You should know where all your code is. You can use the deployment manager to export your configurations, and export customized files from MDS. Just follow the process again and you will have a clean instance that contains no production data.
    It only takes a lot of time if your client is lacking documentation or if you're not familiar with all the parts of the environment. What's 2-3 hours compared to all the issues you will run into if you copy databases or import/export schemas?
    -Kevin

  • Best practice for test reports location -multiple installers

    Hi,
    What is the recommended best practice for saving test reports with multiple installers of different applications?
    For example, if I have 3 different TestStand installers: Installer1, Installer2 and Installer3, and I want to save the test reports of each installer at:
    1. C:\Reports\Installer1\TestReportfilename
    2. C:\Reports\Installer2\TestReportfilename
    3. C:\Reports\Installer3\TestReportfilename
    How could I do this programmatically so that all reports end up in the proper folder when the TestStand installers are deployed to a test PC?
    Thanks,
    Frank

    There's no single recommended best practice for what you're suggesting. The example here shows how to programmatically modify a report path, and this Knowledge Base article describes how you can change a report's file path based on test results.
    -Mike 
    Applications Engineer
    National Instruments

  • URL category best practices for ESA 8.5.6-074

    In the new version 8.5.6-074 of the ESA C170, what are the best practices for applying the new URL category feature?
    Is it possible to create filters that quarantine mail based on URL filtering? If so, could you post a sample script (for example, one that quarantines emails that have adult links in the body)?

    You should be able to do it with a content filter; there are conditions based on URLs and URL categories. A rough sketch follows.
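    As a rough sketch of the message-filter flavour of this, assuming the URL-filtering feature is enabled and a policy quarantine named "Policy" exists; the rule and category names here are assumptions and should be verified against the AsyncOS 8.5.6 documentation before use:

        quarantine_adult_urls:
        if (url-category (['Adult']))
        {
            quarantine("Policy");
        }

    The equivalent content filter can be built in the GUI with a URL category condition and a quarantine action.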

  • Best Practices for NCS/PI Server and Application Monitoring question

    Hello,
    I am deploying a virtual instance of Cisco Prime Infrastructure 1.2 (1.2.1.012) on an ESX infrastructure, in an enterprise environment. I have questions about the best practices for monitoring this appliance. I am looking to monitor application failures (services down, DB issues) and "hardware" (I understand this is a virtual machine, but statistics on the filesystem and CPU/memory are good to have).
    Firstly, I have enabled the snmp-server via the CLI and set the SNMP trap host destination. I have created a notification receiver for the SNMP traps inside the NCS GUI and enabled the "System" alarm type, which includes alarms like NCS_DOWN and "PI database is down". I am trying to understand the difference between enabling SNMP-SERVER HOST via the CLI and setting the notification destination in the GUI. Also, how can I generate an NCS_DOWN alarm in my lab? Running 'ncs stop' does not generate any alarms, and I have not been able to find much information on how to generate one as a test.
    Secondly, how and which processes should I be monitoring from the management station? I cannot easily identify the main NCS processes in the output of ps -ef when logged into the shell as root.
    Thanks guys!

    Amihan_Zerrudo wrote:
    > 1.) What is the cost of having the scope in a <jsp:useBean> tag set to 'session'? I am aware that there is a list of scopes like page, application, etc., and that if I use 'session' my variable will live for as long as that session is alive. (Did I get this right?)
    You should look to the functional requirements rather than the costs. If the bean needs to be session scoped (e.g. it holds the logged-in user), then do so. If it only needs to be request scoped (e.g. single-page form data), then keep it request scoped.
    > 2.) If the JSP page where I use that <useBean> is to be accessed hundreds of times a day, will it strain my server resources? Right now I am using the Sun GlassFish server.
    It will certainly consume resources; just supply enough CPU speed and memory to the server. You cannot expect a webserver running on a Pentium 500MHz with 256MB of memory to flawlessly serve 100 simultaneous users in the same second, but you may expect it to serve 100 users per 24 hours.
    > 3.) Can you suggest best practice in memory management given the architecture I described above?
    Just write code so that it doesn't unnecessarily eat memory, and only allocate memory when your application needs to. You should let the hardware depend on the application requirements, not the application on the hardware specs.
    > 4.) Also, I have implemented connection pooling in my architecture, but my application is to be used by thousands of clients every day. Can the Sun GlassFish server take care of that, or will I have to purchase a powerful server?
    GlassFish is just application server software, not server hardware. Your concerns here are hardware related.

  • Networking "best practice" for setting up a farm

    Hi all.
    We would like to set up an OracleVM farm, and I have a question about "best practice" for
    configuring the network. Some background:
    - The hardware I have is comprised of machines with 4 gig-eth NICs each.
    - The storage will be coming primarily from a backend NAS appliance (Netapp, FWIW).
    - We have already allocated a separate VLAN for management.
    - We would like to have HA capable VMs using OCFS2 (on top of NFS.)
    I'm trying to decide between 2 possible configurations. The first would keep physical separation
    between the mgt/storage networks and the DomU networks. The second would just trunk
    everything together across all 4 NICs, something like:
    Config 1:
    - eth0 - management/cluster-interconnect
    - eth1 - storage
    - eth2/eth3 => bond0 - 8021q trunked, bonded interfaces for DomUs
    Config 2:
    - eth0/1/2/3 => bond0
    Do people have experience or recommendation about the best configuration?
    I'm attracted to the first option (perhaps naively) because CI/storage would benefit
    from dedicated bandwidth and this configuration might also be more secure.
    Regards,
    Robert.

    user1070509 wrote:
    > Option #4 (802.3ad) looks promising, but I don't know if this can be made to work across separate switches.
    It can, if your switches support cross-switch trunking. Essentially, 802.3ad (also known as LACP, or EtherChannel on Cisco devices) requires your switch to be properly configured to allow trunking across the interfaces used for the bond. I know that the high-end Cisco and Juniper switches do support LACP across multiple switches; in the Cisco world this is called MEC (Multichassis EtherChannel).
    If you're using low-end commodity-grade gear, you'll probably need to use active/passive bonds if you want to span switches. Alternatively, you could use one of the balance algorithms for some bandwidth increase. You'd have to run your own testing to determine which algorithm is best suited for your workload; a sample bond configuration is sketched below.
    The Linux Foundation's Net:Bonding article has some great information on bonding in general, particularly on the various bonding methods for high availability:
    http://www.linuxfoundation.org/en/Net:Bonding
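    As a minimal sketch of what Config 1's bonded DomU pair could look like with 802.3ad on a RHEL-style distribution (interface names and option values are assumptions, and the switch ports must be configured for LACP):

        # /etc/sysconfig/network-scripts/ifcfg-bond0
        DEVICE=bond0
        BONDING_OPTS="mode=802.3ad miimon=100"
        ONBOOT=yes
        BOOTPROTO=none

        # /etc/sysconfig/network-scripts/ifcfg-eth2 (ifcfg-eth3 identical apart from DEVICE)
        DEVICE=eth2
        MASTER=bond0
        SLAVE=yes
        ONBOOT=yes
        BOOTPROTO=none

    The 8021q trunking toward the DomUs would then hang VLAN interfaces (e.g. bond0.100) off the bond.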
