Deployment-specific configuration - best practice

I'm trying to figure out the best way to set up deployment-specific configurations.
From what I've seen, I can configure things like session timeouts and datasources. What I'd like to configure is a set of programmatically accessible parameters. We're connecting to a BPM server and we need to configure the URL, username and password, and make these available to our operating environment so we can set up the connection.
Can we set up the parameters via a deployment descriptor?
What about Foreign JNDI? Can I create a simple JNDI provider (from a file, perhaps?) and access the values?
Failing these, I'm looking at stuffing the configuration into the database and pulling it from there.
Thanks

Which version of the product are you using?
Putting the configs in web.xml context params, as in this example:
https://codesamples.samplecode.oracle.com/servlets/tracking/remcurreport/true/template/ViewIssue.vm/id/S461/nbrresults/103
will allow you to change the values per deployment easily with a deployment plan.
Another alternative, in 10.3.2 and later, is a feature that allows resources such as ordinary properties files to be overridden by placing them in the plan directory. I don't have the link to this one right now.
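
As a sketch of the web.xml approach (the parameter names here are hypothetical, not from the thread), the BPM connection settings could be declared as context parameters:

```xml
<!-- web.xml: hypothetical BPM connection parameters -->
<context-param>
    <param-name>bpm.server.url</param-name>
    <param-value>http://bpm-dev.example.com</param-value>
</context-param>
<context-param>
    <param-name>bpm.username</param-name>
    <param-value>dev_user</param-value>
</context-param>
```

A deployment plan can then supply a different param-value per environment, and the application reads the values with ServletContext.getInitParameter("bpm.server.url"). This is only a sketch; for the password, a credential store is preferable to a plain-text descriptor where one is available.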

Similar Messages

  • RAID Level Configuration Best Practices

    Hi Guys ,
       We are building new Virtual environment for SQL Server and have to define RAID level configuration for SQL Server setup.
    Please share your thoughts on RAID configuration for SQL data, log, tempdb and backup files.
    Files  RAID Level 
    SQL Data File -->
    SQL Log Files-->
    Tempdb Data-->
    Tempdb log-->
    Backup files--> .
    Any other configuration best practices are more than welcome,
    like memory settings at the OS level, LUN settings,
    and best practices for configuring SQL Server in Hyper-V with clustering.
    Thank you

    Hi,
    If you can spend some extra money you should go for RAID 10 for all files. Also, as a best practice, keeping database log and data files on different physical drives gives optimum performance. Tempdb can be placed with the data files or on a different drive depending on usage. It's always good to use a dedicated drive for tempdb.
    For memory settings, please refer to this link for setting max server memory.
    You should monitor SQL Server memory usage using the counters below, taken from this link:
    SQLServer:Buffer Manager--Buffer Cache Hit Ratio (BCHR): If your BCHR is high (90 to 100) it points to the fact that you don't have memory pressure. Keep in mind that if somebody runs a query which requests a large number of pages, BCHR might momentarily drop to 60 or 70, maybe less, but that does not mean memory pressure; it means the query requires a lot of memory and will take it. After that query completes you will see BCHR rising again.
    SQLServer:Buffer Manager--Page Life Expectancy (PLE): PLE shows how long a page remains in the buffer pool; the longer it stays, the better. It's a common misconception to take 300 as a baseline for PLE, but it is not. I read in Jonathan Kehayias' book (Troubleshooting SQL Server) that this value was a baseline when SQL Server 2000 was current and the most RAM one would see was 4-6 GB. Now, with 200 GB of RAM in the picture, this value is not correct. He also gave a (tentative) formula for calculating it: take the base counter value of 300 presented by most resources, then determine a multiple of this value based on the configured buffer cache size, which is the 'max server memory' sp_configure option in SQL Server, divided by 4 GB.
    So, for a server with 32 GB allocated to the buffer pool, the PLE value should be at least (32/4)*300 = 2400. So far this has worked well for me, so I would recommend you use it.
    SQLServer:Buffer Manager--Checkpoint Pages/sec: The checkpoint pages/sec counter is important for spotting memory pressure, because if the buffer cache is low then lots of new pages need to be brought in and flushed out of the buffer pool; under load the checkpoint's work increases and it starts flushing out dirty pages very frequently. If this counter is high then your SQL Server buffer pool is not able to cope with incoming requests, and you need to increase it by increasing buffer pool memory or by adding physical RAM and then adjusting the buffer pool size accordingly. Technically this value should stay low; if you are looking at a line graph in perfmon, it should stay near the baseline on a stable system.
    SQLServer:Buffer Manager--Free Pages: you always want to see a high value here; it should not drop low.
    SQLServer:Memory Manager--Memory Grants Pending: If you see memory grants pending, your server is facing a SQL Server memory crunch and increasing memory would be a good idea. For memory grants please read this article:
    http://blogs.msdn.com/b/sqlqueryprocessing/archive/2010/02/16/understanding-sql-server-memory-grant.aspx
    SQLServer:Memory Manager--Target Server Memory: This is the amount of memory SQL Server is trying to acquire.
    SQLServer:Memory Manager--Total Server Memory: This is the amount of memory SQL Server has currently acquired.
    For other settings I would suggest you talk to your vendor; storage questions, IMO, should be directed to the vendor.
    Below would surely be a good read
    SAN storage best practice For SQL Server
    SQLCAT best practice for SQL Server storage
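
    The (tentative) PLE formula above is simple to encode; a minimal sketch (class and method names are mine, the 300-per-4-GB rule is the one quoted above):

    ```java
    public class PleBaseline {
        // Tentative baseline from the formula above:
        // 300 seconds of Page Life Expectancy per 4 GB of 'max server memory'.
        static long pleBaseline(long maxServerMemoryGb) {
            return (maxServerMemoryGb / 4) * 300;
        }

        public static void main(String[] args) {
            // 32 GB buffer pool -> (32/4) * 300 = 2400 seconds
            System.out.println(pleBaseline(32));
        }
    }
    ```

    For a 200 GB buffer pool the same rule gives (200/4)*300 = 15000 seconds, which illustrates why 300 is far too low a baseline on modern hardware.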

  • SRST Configuration - Best Practices

    We are starting a new Unified Communication deployment and will have an SRST at each remote location. I am wondering if there are any best practices in regards to the configuration of the SRST.
    For example, does it matter which interface is specified as the source address? I have seen some say it needs to be the LAN address and others say it needs to be a loopback address. Since the phones themselves will be attached to a VLAN on a switch that is connected to the router, is there a benefit either way? Are there any considerations, not really covered in the base configuration, that should be treated as best practice?
    I am sure I will have more questions as we progress so thanks for the patience in advance...
    Brent                    

    Hi Brent,
    The loopback is used because it is an interface that remains up regardless of the physical layer, so provided that appropriate routing is in place, the lo address will be reachable through the physical interfaces.
    Best practices off the top of my head include: look at the release notes for the software version you're using; check the network requirements and compatibility matrix, interworking and caveats; and reserve time for testing.
    I'm sure you'll be just fine
    hth

  • DNS Configured-Best Practice on Snow Leopard Server?

    How many of you configure and run DNS on your Snow Leopard server as a best practice, even if that server is not the primary DNS server on the network, and you are not using Open Directory? Is configuring DNS a best practice if your server has a FQDN name? Does it run better?
    I had an Apple engineer once tell me (this is back in the Tiger Server days) that the servers just run better when DNS is configured correctly, even if all you are doing is file sharing. Is there some truth to that?
    I'd like to hear from you either way, whether you're an advocate for configuring DNS in such an environment, or if you're not.
    Thanks.

    Ok, local DNS services (unicast DNS) are typically straightforward to set up, very useful to have, and can be necessary for various modern network services, so I'm unsure why this is even particularly an open question. Which leads me to wonder what other factors might be under consideration here, or what I'm missing.
    The Bonjour mDNS stuff is certainly very nice, too.  But not everything around supports Bonjour, unfortunately.
    As for being authoritative, the self-hosted out-of-the-box DNS server is authoritative for its own zone.  That's how DNS works for this stuff.
    And as for querying other DNS servers from that local DNS server (or, if you decide to reconfigure it and deploy and start using DNS services on your LAN), then that's how DNS servers work.
    And yes, caching of DNS responses both within the DNS clients and within the local DNS server is typical. This also means there is no need for references to ISP or other DNS servers on your LAN for frequent translations; no additional caching or forwarding servers are required.

  • Deploy to production (best practices)

    I am wondering if there are some best practices published somewhere in regard to deploying an app from your dev environment to a production environment?
    I currently do the following:
    1) export/import the application
    2) generate a sync script (Toad) to get the schema objects in sync - maybe it's better to do an export/import (expdp) of the schema here
    3) import images/files needed by the application

    Hi,
    Have a look at:
    Book: Pro Oracle Application Express
    Author: John Edward Scott
    Author: Scott Spendolini
    Publisher: apress
    Year: 2008
    It has a good section on migrating between environments.
    Hope this helps.
    Cheers,
    Patrick Cimolini

  • IP over Infiniband network configuration best practices

    Hi EEC Team,
    A question I've been asked a few times, do we have any best practices or ideas on how best to implement the IPoIB network?
    Should it be Class B or C?
    Also, what are your thoughts regarding the netmask? If we use /24 it doesn't give us the ability to visually separate two different racks (i.e. Exalogic / Exadata), whereas with netmask /23 we can do something like:
    Exalogic : 192.168.10.0
    Exadata : 192.168.11.0
    while still being on the same subnet.
    Your thoughts?
    Gavin

    I think it depends on a couple of factors, such as the following:
    a) How many racks will be connected together on the same IPoIB fabric
    b) What rack configuration do you have today, and do you foresee any expansion in the future - it is possible that you will move from a purely physical environment to a virtual environment, and you should consider the number of virtual hosts and their IP requirements when choosing a subnet mask.
    Class C (/24) with 256 IP values is a good start. However, you may want to choose a mask of length 23 or even 22 to ensure that you have enough IPs for running the required number of WLS, OHS, Coherence Server instances on two or more compute nodes assigned to a department for running its application.
    In general, when setting a net mask, it is always important that you consider such growth projections and possibilities.
    By the way, in my view, Exalogic and Exadata need not be in the same IP subnet, especially if you want to separate application traffic from database traffic. Of course, they can be separated by VLANs too.
    Hope this helps.
    Thanks
    Guru
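
    The /23 point above can be checked numerically: two addresses share a subnet when their masked prefixes are equal. A small sketch (class and method names are mine; the addresses are the ones from the post):

    ```java
    public class SubnetCheck {
        // Returns true if both IPv4 addresses fall in the same subnet
        // for the given prefix length.
        static boolean sameSubnet(String a, String b, int prefix) {
            long mask = prefix == 0 ? 0 : (0xFFFFFFFFL << (32 - prefix)) & 0xFFFFFFFFL;
            return (toLong(a) & mask) == (toLong(b) & mask);
        }

        // Converts dotted-quad notation to a 32-bit value held in a long.
        static long toLong(String ip) {
            long v = 0;
            for (String part : ip.split("\\.")) v = (v << 8) | Integer.parseInt(part);
            return v;
        }

        public static void main(String[] args) {
            // /23 keeps Exalogic (192.168.10.x) and Exadata (192.168.11.x) together
            System.out.println(sameSubnet("192.168.10.1", "192.168.11.1", 23)); // true
            System.out.println(sameSubnet("192.168.10.1", "192.168.11.1", 24)); // false
        }
    }
    ```

    With a /23 mask the two racks land in one subnet, while a /24 mask splits them, which is exactly the visual-separation trade-off Gavin describes.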

  • UDDI and deployed Web Services Best Practice

    Which would be considered a best practice?
    1. To run the UDDI Registry in its own OC4J container with Web Services deployed in another container
    2. To run the UDDI Registry in the same OC4J container as the deployed Web Services

    The reason you don't see your services in the drop-down is that CE does lazy initialization of EJB components (this gives you a faster startup time for the server itself), but your services are still available to you. You do not need to redeploy each time you start the server. One thing you could do is create a logical destination (in NWA) for each service and use the "search by logical destination" button. You should always see your logical names in that drop-down, and you can use them to invoke your services. Hope it helps.
    Rao

  • Deploying Branding Files Best Practice

    Question about best practice (if exists) for deployment method of branding files.
    Background:
    I created 2 different projects for a public-facing SP 2010 site.
    First project contains 1 feature and is responsible for deploying branding files: contains my custom columns, layouts, masterpage, css, etc...
    Second project is my web template. Contains 3 features, contenttypebinding, webtemp default page, and the web temp files (onet.xml).
    I deploy my branding project, then my template files.
    Do you deploy branding as a farm solution or a sandboxed solution? And how do you update your branding files after deployment?
    1. You don't, you deploy and forget about solution doing everything in SP Designer from this point on.
    2. Do step 1, then copy everything back to the VS project and save to TFS server.
    3. Do all design in VS and then update solution.
    I like the idea of having a full completed project, but don't like the idea of having to go back to VS, package, and re-deploy every time I have a minor change to my masterpage when I can just open up Designer and edit.
    Do you deploy and forget about the branding files, using SP Designer to update master pages, layouts, etc., or do you work and deploy via the VS project?

    Hi,
    Many times we use sandboxed solutions for branding projects; that way it minimizes the dependency on SharePoint farm administration. Though there are advantages to using a farm solution, a sandboxed solution is simple to use, essentially limited to the set of available sandboxed-solution APIs.
    On the SP Designer side, many of the clients I have worked with were not comfortable with the idea of enabling SharePoint Designer access to their portal. SPD, though powerful, sometimes brings more issues when untrained business users start customizing the site.
    Another issue with SPD is that you will not be able to track changes (you can still use versioning), and retracting changes is not easy. As you said, you simply connect to the site using SPD and make changes to the master pages, but those changes have to be documented, and you must always maintain copies of the master pages so you can retract the changes.
    Think about a situation where you make changes to the live site using SPD, the master page gets messed up, and your users cannot access the site. We cannot follow a trial-and-error methodology on the production server. This raises more questions: what is your contingency plan for restoring the site, what is your SLA for restoring it, and how critical is your business data?
    Although the SPD model can be useful for a small SharePoint setup with a limited tech budget, it is not generally advisable.
    I would always favor a VS-based solution, which gives us more control over the design and planning for deployment.
    We have a solution deployment window; as per our governance we categorize solution deployments based on criticality, and for important changes we plan for weekends to avoid unavailability of the site.
    Hope this helps!
    Ram - SharePoint Architect
    Blog - SharePointDeveloper.in

  • Server 2008 R2 RDS HA Licensing configuration best practices

    Hello
    What is the best practice for setting up an HA licensing environment for RDS? I'm using a mixture of RDS CALs for my internal/AD users and an External Connector license for my external/Internet users.
    Daddio

    Hi,
    To ensure high availability you want a fallback license server in your environment. The recommended method for configuring Terminal Services Licensing servers for high availability is to install at least two of them in Enterprise Mode with available Terminal Services CALs. Each server will then advertise itself in Active Directory as an enterprise license server at the following Lightweight Directory Access Protocol (LDAP) path: LDAP://CN=TS-Enterprise-License-Server,CN=site name,CN=sites,CN=configuration-container.
    For more details on setting up your license server environment for redundancy and fallback, see the "Configuring License Servers for High Availability" section in the Windows Server 2003 Terminal Server Licensing whitepaper.
    Regards,
    Dollar Wang
    Forum Support

  • RDM Configuration (Best Practices...)

    Folks, when attaching multiple RDMs to a VM, would you add one RDM per vSCSI adapter, or would you add multiple RDMs to the same vSCSI adapter?
    Also, when creating the RDM, would you keep it with the virtual machine, or store it on a separate datastore (RDM pointers)?
    Just looking for some best practices.

    Stuarty1874 wrote:
    Folks, when attaching multiple RDMs to a VM, would you add one RDM per vSCSI adapter, or would you add multiple RDMs to the same vSCSI adapter?
    Multiple.
    Also, when creating the RDM, would you keep it with the virtual machine, or store it on a separate datastore (RDM pointers)?
    Keep it with the virtual machine.
    Just looking for some best practices.
    Also, you might find this useful:
    http://pubs.vmware.com/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-50-storage-guide.pdf
    starting at p135.

  • WS application configuration - best practice advice

    Hi there,
    I'm a newbie on web services.
    I'm currently developing a Web Service using JAX-WS 2.1 and deploying on Tomcat 6.
    I'm wondering where to put my application's configuration. If I was developing a standalone java application I would put it in a Java Properties file or perhaps use the Java Preferences API.
    But this is not a standalone java application, it is something that runs in a container, in this case Tomcat.
    I have a feeling that web.xml is the intended place to put this information, but I've also read posts discouraging this. I'm also in the dark as to how to access the information in web.xml (the context parameters section) from within my web service.
    Basically I would like to use a solution which is as standard as possible, i.e. not tied to any specific container.
    Thanks.

    What is the model number of the switch?
    Normally a switch that cannot be changed does not have an IP address.  So if your switch has an address (you said it was 192.168.2.12)  I would assume that it can be changed and that it must support either a gui or have some way to set or reset the switch.
    Since Router1 is using the 192.168.1.x  subnet , then the switch would need to have a 192.168.1.x  address (assuming that it even has an IP address), otherwise Router1 will not be able to access the switch.
    I would suggest that initially, you setup your two routers without the switch, and make sure they are working properly, then add the switch.  Normally you should not need to change any settings in your routers when you add the switch.
    To setup your two routers, see my post at this URL:
    http://forums.linksys.com/linksys/board/message?board.id=Wireless_Routers&message.id=108928
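
    On the original JAX-WS question: inside the endpoint you can inject a javax.xml.ws.WebServiceContext with @Resource, obtain the ServletContext via MessageContext.SERVLET_CONTEXT, and call getInitParameter(...) to read web.xml context parameters. For a solution as container-neutral as the poster wants, a properties file on the classpath also works; a minimal sketch (file name and keys are hypothetical, not from the thread):

    ```java
    import java.io.ByteArrayInputStream;
    import java.io.InputStream;
    import java.util.Properties;

    public class AppConfig {
        // Loads configuration from a properties stream. In a web app the file
        // (e.g. WEB-INF/classes/bpm.properties, name hypothetical) can be read
        // with getClass().getResourceAsStream("/bpm.properties"), which works
        // the same way in Tomcat or any other container.
        static Properties load(InputStream in) throws Exception {
            Properties props = new Properties();
            props.load(in);
            return props;
        }

        public static void main(String[] args) throws Exception {
            // Hypothetical file contents, inlined here for illustration
            String sample = "bpm.url=http://bpm.example.com\nbpm.user=svc_bpm";
            Properties p = load(new ByteArrayInputStream(sample.getBytes()));
            System.out.println(p.getProperty("bpm.url"));
        }
    }
    ```

    Because the file travels inside the war, the lookup code stays identical across containers; only the packaged file changes per environment.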

  • RMAN/TSM Configuration - Best Practices?

    Wondering how best to configure RMAN with TSM from a network perspective. Dual HBAs? Suggestions?
    Thanks.

    * The OWB client software will reside on each user's windows desktop.
    Yes.
    * Should there be one repository that will deploy to the runtime environments of TEST, USER ACCEPTANCE and PRODUCTION environments? or should there be three repositories, one for each environment?
    One.
    * If there is only a single repository that will deploy to all three environments, should it reside on TEST, USER ACCEPTANCE or PRODUCTION server?
    Test, but you need a repository on the other environments too.
    * Does OWB have sufficient version control capabilities to drive three environments simultaneously?
    No, no version control at all. You can buy a third-party tool like http://www.scm4all.com
    * Is it possible to have an additional development repository on a laptop to do remote development of source/target definition, ETL etc and then deploy it to a centralized development repository? Perhaps it is possible to generate OMB+ scripts from the laptop repository and then run them against the centralized repository on the shared server.
    Use export/import from OWB via MDL files.
    Regards
    Detlef

  • Configuration best practice

    My final deployed solution is going to consist of several PC-Fieldpoint pairs all located on the company intranet. Each PC talks to its own Fieldpoint and not any other. I have the sequences and Users.ini stored on the server.
    Now I'm trying to come up with the best place to store the FieldPoint IP address on the station. I know I could create a property loader and an associated XML file, and will do that if that's the preferred method. But it seems like overkill to store one IP address.
    Is there a better place to store information like this? I'm a beginner, so no suggestion is too obvious for me.
    I'm trying to make a system that will be maintainable by a real TestStand expert - either me in the future or my replacement<g>.

    Hi jtdavies,
    It sounds like the use of Station Globals would be the ideal route for your application. Attached is an easy example showing the use of Station Globals.
    Nestor
    Attachments:
    stationGlobals_access2.zip ‏38 KB

  • Problem deploying BPM 11g Best Practices to WebLogic

    Hi everybody,
    I am following the BPM 11g Best Practices instructions step by step. When I try to deploy SalesProcesses to the application server, in the "select SOA servers" step of the deploy wizard there isn't any server I can select. I logged in to the WebLogic admin server console and saw that soa_server1 and bam_server1 are in state SHUTDOWN. I found two ways to start them:
    1. from the console: first start the node manager and then start the servers
    2. from the command line: startManagedWebLogic.cmd soa_server1
    I tried both ways, but both have problems. When I look at the log files, I finally see the exception below:
    <Jul 10, 2011 10:15:20 AM GMT+03:30> <Critical> <WebLogicServer> <BEA-000362> <Server failed. Reason:
    There are 1 nested errors:
    weblogic.management.ManagementException: Booting as admin server, but servername, soa_server1, does not match the admin server name, AdminServer
         at weblogic.management.provider.internal.RuntimeAccessService.start(RuntimeAccessService.java:67)
         at weblogic.t3.srvr.ServerServicesManager.startService(ServerServicesManager.java:461)
         at weblogic.t3.srvr.ServerServicesManager.startInStandbyState(ServerServicesManager.java:166)
         at weblogic.t3.srvr.T3Srvr.initializeStandby(T3Srvr.java:802)
         at weblogic.t3.srvr.T3Srvr.startup(T3Srvr.java:489)
         at weblogic.t3.srvr.T3Srvr.run(T3Srvr.java:446)
         at weblogic.Server.main(Server.java:67)
    <Jul 10, 2011 10:15:20 AM GMT+03:30> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to FAILED>
    <Jul 10, 2011 10:15:20 AM GMT+03:30> <Error> <WebLogicServer> <BEA-000383> <A critical service failed. The server will shut itself down>
    <Jul 10, 2011 10:15:20 AM GMT+03:30> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to FORCE_SHUTTING_DOWN>
    What should I do? Please help me :-(
    Regards.

    Try starting it as: startManagedWebLogic.cmd <managed server name> <admin server URL>
    e.g. startManagedWebLogic.cmd soa_server1 t3://localhost:7001

  • WLC Configuration Best practices - no updates since 2008?

    There has been no updates to this doc for almost 4 years.
    http://www.cisco.com/en/US/partner/tech/tk722/tk809/technologies_tech_note09186a0080810880.shtml
    That's a long time for wireless, especially since it still references release 5.2, and we're now at 7.0. Plus quite a few new AP families have been announced: 802.11n, CleanAir, etc. I think this document is overdue for an update. Have there not been any lessons learned since 2008? Can anyone from Cisco comment on this?

    Guys:
    I agree with you; many docs are old, pretty old.
    You can use the Feedback button at the bottom of the doc page and send your feedback to Cisco.
    Most of the time they will reply to you, and you can discuss your opinion that the doc is out of date.
    I've done this with more than one doc and config example that described the configuration with images from version 3.x. They updated some of the docs to reflect later releases (6.x and 7.x).
    They have no problem updating the docs; they have a good team that creates and updates them. Just be positive, hit that "Feedback" button and tell them, and they'll surely help. (If not, please tell me - I have a kind of personal contact with the wireless docs manager.)
    HTH,
    Amjad
