Constants or properties file - best practice question

Hi,
My application has a number of values that will be used in different classes throughout my project. For example, I perform a check against the max allowed length of an ID in numerous places in my code.
Therefore, it makes sense to set this value in a central location, and refer to it as a variable where it is required in my code.
I see in other projects that a public static final member in a Constants class is used to set these types of values. Is this recommended or best practice?
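For illustration, the approach I mean looks something like this (the class and field names are just examples):
public final class Constants {
    // Maximum allowed length of an ID, referenced wherever the check is performed
    public static final int MAX_ID_LENGTH = 32;
    private Constants() {
        // prevent instantiation
    }
}
// usage elsewhere in the code
if (id.length() > Constants.MAX_ID_LENGTH) {
    throw new IllegalArgumentException("ID exceeds maximum length");
}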
The only alternative I can think of would be to use a properties file, and inject the value using Spring etc.
What is considered best practice or the neatest way for doing this?
Thanks

user10340197 wrote:
Thanks. I'm using Spring anyways, so it would provide me with the PropertyPlaceholderConfigurer for injecting these.
And the name of that class provides a clue. As Kayaman said, constants are constants. Math.PI does not and will not change, EVER. Neither will Integer.MAX_VALUE.
Configuration parameters, on the other hand, might change. If your MAX_ID_LENGTH is ever likely to change, and could do so without causing widespread chaos, then it probably should be a property (i.e., a configuration) value.
If not, it should probably be a constant (with appropriate 60-point documentation warning people what might happen if they DO change it).
Winston
PS: There is nothing particularly terrible about having a Properties class (except that you'll want to call it something different) that initializes its values from configuration files; but if there are gazillions of them, you might want to:
(a) Split them up into "themes".
(b) Re-think your design.
Winston
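For completeness, a minimal sketch of the configuration-value alternative with Spring (the property name, file and class below are invented for illustration; it assumes a property placeholder, e.g. PropertyPlaceholderConfigurer or <context:property-placeholder/>, is configured to load the file):
# app.properties
id.max-length=32
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
@Component
public class IdValidator {
    // Spring resolves ${id.max-length} from the configured property sources
    @Value("${id.max-length}")
    private int maxIdLength;
    public boolean isValidLength(String id) {
        return id != null && id.length() <= maxIdLength;
    }
}
The point of the property route is that changing the value is then a configuration change rather than a recompile.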

Similar Messages

  • SAP Adapter Best Practice Question for Deployment to Clustered Environment

    I have a best practices question on the iway Adapters around deployment into a clustered environment.
    According to the documentation, you are supposed to run the installer on both nodes in the cluster but configure on just the first node. See below:
    Install Oracle Application Adapters 11g Release 1 (11.1.1.3.0) on both machines.
    Configure a J2CA configuration as a database repository on the first machine.
    Perform the required changes to the ra.xml and weblogic-ra.xml files before deployment.
    This makes sense to me because once you deploy the adapter rar in the next step, the appropriate rar will get staged and deployed on both nodes in the cluster.
    What is the best practice for the 3rdParty adapter directory on the second node? The installer lays it down with the adapter rar and all. Since we only configure the adapter on node 1, the directory on node 2 will remain with the default installation files/values, not the configured ones. Is it best practice to copy node 1's 3rdParty directory to node 2 once configured? If we leave node 2 with the default files/values, I suspect this will lead to confusion for someone later on who is troubleshooting, because it will appear it was never configured correctly.
    What do folks typically do in this situation? Obviously everything works if we leave it as is, but it seems strange to have the two nodes differ.

    What is the version of the operating system? If you are on any OS version lower than Windows 2012, then you need to add one more voter for quorum.
    Balmukund Lakhani

  • SAP Adapter Best Practice Question for Migration of Channels

    I have a best practice question on the SAP adapter when migrating an OSB project from one environment (DEV) to another (QA).
    If my project includes an adapter channel (e.g., an inbound SAP proxy listening on a channel), how do I migrate that project to another environment if the channel in the target environment is different?
    I tried using the search and replace mechanism in the sbconsole, but it doesn't find the channel name in the jca and wsdl files.
    What is the recommended way to migrate from one environment to the other when the channel name changes?


  • Best Practices Question: How to send error message to SSHR web page.

    Best Practices Question: How to send an error message to an SSHR web page from a custom PL/SQL procedure called by an SSHR workflow.
    For the Manager Self-Service application we've copied various workflows, which were modified to meet business needs. Part of this exercise was creating custom PL/SQL package procedures that gather details on the workflow and use them in custom notifications sent by the workflow.
    What I'm looking for is: if/when the PL/SQL procedure errors, how does one send a failure message back and display it on the SS page?
    Writing information into a log or table at the database level works for troubleshooting, but we're looking for something that will provide the end user with an intelligent message that the workflow has failed.
    Thanks ahead of time for your responses.
    Rich

    We implemented the same kind of requirement a while back.
    We defined our PL/SQL procedures with two OUT parameters:
    1) Result Type (S: Success, E: Error)
    2) Result Message
    In the PL/SQL procedure we always use the construct below when we want to raise a message:
    hr_utility.set_message(APPL_NO, 'FND_MESSAGE_NAME');
    hr_utility.raise_error;
    In the exception block we write the following (in the successful case we just set p_result_flag := 'S'):
    EXCEPTION
        WHEN APP_EXCEPTION.APPLICATION_EXCEPTION THEN
            p_result_flag := 'E';
            p_result_message := hr_utility.get_message;
        WHEN OTHERS THEN
            p_result_flag := 'E';
            p_result_message := hr_utility.get_message;
            fnd_message.set_name('PER', 'FFU10_GENERAL_ORACLE_ERROR');
            fnd_message.set_token('2', substr(sqlerrm, 1, 200));
            fnd_msg_pub.add;
            p_result_message := fnd_msg_pub.get_detail;
    After executing the PL/SQL, in Java we have written something similar to:
    orclStmt.execute();
    OAExceptionUtils.checkErrors(txn);
    String resultFlag = orclStmt.getString(/* result-flag bind position */);
    if ("E".equalsIgnoreCase(resultFlag)) {
        String resultMessage = orclStmt.getString(/* result-message bind position */);
        orclStmt.close();
        throw new OAException(resultMessage, OAException.ERROR);
    }
    It safely shows the message to the user with all the data in the page.
    We have been using this construct for a long time for all our projects. They are all working as expected.
    Regards,
    Peddi.

  • Best practice question -- copy container, assemble it, build execution plan

    So, this is a design / best practice question:
    I usually copy containers as instructed by docs
    I then set the source system parameters
    I then generate needed parameters / assemble the copied container for ALL subject areas present in the container
    I then build an execution plan JUST FOR THE 4 SUBJECT AREAS I need and set whatever is needed before running it.
    QUESTION - When I copy the container, should I delete all unneeded subject areas out of it, or is it best to do this when building the execution plan? I am basically trying to simplify the container for my own sake and have it contain just a few subject areas, rather than wait till I build the execution plan and then focus on a few subject areas.
    Your thoughts / clarifications are appreciated.
    Regards,

    Hi,
    I would suggest that you leave the subject areas and then just don't include them in the execution plan. Otherwise you have the possibility of running into the situation where you need to include another subject area in the future and you will have to go through the hassle of recreating it in your SSC.
    Regards,
    Matt

  • Import data from excel file - best practice in the CQ?

    Hi,
    I have a question about importing data from an Excel file and creating a table from that data on a CQ page. Is there an OOTB component inside CQ which provides this kind of functionality? Has somebody implemented this kind of functionality, or is there a best practice for doing it?
    Thanks in advance for any answer,
    Regards
    kasq

    You can check a working example package [1] (use your Adobe ID to log in)
    After installing it, go to [2] for immediate example.
    Unfortunately it only supports the old OLE-2 Excel format (.xls and not .xlsx)
    [1] - http://dev.day.com/content/packageshare/packages/public/day/cq540/demo/xlstable.html
    [2] - http://localhost:4502/cf#/content/geometrixx/en/company/news/pressreleases/my_personal_bests.html
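    If you end up rolling your own import instead, reading an old-format .xls with Apache POI's HSSF API is roughly this (a sketch only; the file name is made up and error handling is omitted):
    import java.io.FileInputStream;
    import org.apache.poi.hssf.usermodel.HSSFWorkbook;
    import org.apache.poi.ss.usermodel.Cell;
    import org.apache.poi.ss.usermodel.Row;
    import org.apache.poi.ss.usermodel.Sheet;
    import org.apache.poi.ss.usermodel.Workbook;
    public class XlsTableReader {
        public static void main(String[] args) throws Exception {
            FileInputStream in = new FileInputStream("table.xls");
            try {
                Workbook workbook = new HSSFWorkbook(in); // OLE-2 .xls only
                Sheet sheet = workbook.getSheetAt(0);
                for (Row row : sheet) {
                    for (Cell cell : row) {
                        // naive conversion; real code would switch on the cell type
                        System.out.print(cell.toString() + "\t");
                    }
                    System.out.println();
                }
            } finally {
                in.close();
            }
        }
    }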

  • Deploying Branding Files Best Practice

    Question about best practice (if one exists) for the deployment method of branding files.
    Background:
    I created 2 different projects for a public-facing SP 2010 site.
    First project contains 1 feature and is responsible for deploying branding files: contains my custom columns, layouts, masterpage, css, etc...
    Second project is my web template. Contains 3 features, contenttypebinding, webtemp default page, and the web temp files (onet.xml).
    I deploy my branding project, then my template files.
    Do you deploy branding as a Farm solution or a Sandboxed solution? And how do you update your branding files at that point?
    1. You don't, you deploy and forget about solution doing everything in SP Designer from this point on.
    2. Do step 1, then copy everything back to the VS project and save to TFS server.
    3. Do all design in VS and then update solution.
    I like the idea of having a full completed project, but don't like the idea of having to go back to VS, package, and re-deploy every time I have a minor change to my masterpage when I can just open up Designer and edit.
    Do you deploy and forget about the branding files, using SP Designer to update master pages, layouts, etc., or do you work and deploy via the VS project?

    Hi,
    Many times we use Sandboxed solutions for branding projects; that way we minimize the dependency on SharePoint farm administration. Though there are advantages to using a farm solution, a Sandboxed solution is simpler to use, though it is essentially limited to the set of APIs available to Sandboxed solutions.
    On the SP Designer side, many of the clients I have worked with were not comfortable with the idea of enabling SharePoint Designer access to their portal. SP Designer, though powerful, sometimes brings more issues when untrained business users start customizing the site.
    Another issue with SPD is that you will not be able to track changes (you can still use versioning) and retracting changes is not easy. As you said, you can simply connect to the site using SPD and make changes to the master pages, but those changes have to be documented, and you always need to maintain copies of the master pages so you can retract the changes.
    Think about a situation where you make changes to the live site using SPD, the master page gets messed up, and your users cannot access the site. We cannot follow a trial-and-error methodology on the production server. This raises further questions: what is your contingency plan for restoring the site, what is your SLA for restoring it, and how critical is your business data?
    Although this SPD model can be useful for a small SharePoint setup with a limited tech budget, it is not advisable.
    I would always favour a VS-based solution, which gives us more control over the design and planning for deployment.
    We do have a solution deployment window; as per our governance we categorize solution deployments based on criticality, and for important changes we plan for the weekends to avoid unavailability of the site.
    Hope this helps!
    Ram - SharePoint Architect
    Blog - SharePointDeveloper.in

  • HyperV 2012 best practice question (storage of guests)

    I was reading this post:
    http://blogs.technet.com/b/askpfeplat/archive/2013/03/10/windows-server-2012-hyper-v-best-practices-in-easy-checklist-form.aspx
    and I saw two things: one where it says fibre channel is not supported, and one where it says loopback is not supported.
    So my question is: as a best practice, would local storage (say a locally attached RAID 10) be considered best practice, or a shared
    SAN LUN? And if a SAN, would virtualized storage like IBM's SVC be supported?

    Using a SoFS with SMB3 as storage for Hyper-V and SQL Server/clusters is Microsoft's number one recommendation, and you will see more and more push for that design in every coming version.
    The top features for me in that combo: you get 100% of the features on day one when a new version comes out, there are some really nice things within Microsoft SMB 3.x, and you can use all the standard Ethernet gear you already have and have known for years. If you need super speed you can do that with some RDMA cards, still Ethernet :-) and if you do a live migration between any combination of clusters and single nodes you never need to move the data and config files :-) they always stay on
    \\SoFS\VMs :-)
    All the information you need is on http://smb3.info ; Jose, my Mr. SoFS, collects and blogs about all the features and step-by-step guides to test and prove it, even on a single notebook, very very cool.
    As a backend for the SoFS you have many choices, from the new wave of JBODs together with Storage Spaces up to the old-style SAN boxes with or without special features. Always check the Windows Server Catalog for supported configs, especially on the JBOD side; make sure you get one with R2 support and also with SES support (SCSI Enclosure Services). Jose also blogs about those topics very well.
    https://blogs.technet.com/b/storageserver/archive/2013/10/19/storage-spaces-jbods-and-failover-clustering-a-recipe-for-cost-effective-highly-available-storage.aspx
    http://www.windowsservercatalog.com/results.aspx?&chtext=&cstext=&csttext=&chbtext=&bCatID=1642&cpID=0&avc=10&ava=0&avq=0&OR=1&PGS=25&ready=0
    A Hyper-V cluster scales up to 64 nodes and a SoFS up to 8 nodes, depending on your scaling needs, and having storage split over separate data centers should be a good start :-)
    Just let me know if you need some advice after doing a design session.
    Udo

  • Best Practice Question

    I have 3 areas for my DWH.
    The first area is staging, then validation, and core.
    Staging is just to load data from the source systems.
    Validation is to validate the data (every city has to have a country, ...).
    Core is my DWH schema.
    The first step in ETL is to load the data from core to validation; let's say my GEO_DIM dimension goes to Countries, Cities and Regions in core. Additionally, I build a CRC sum when I download from core to validation and store the CRC checksum in a staging table.
    The second step is to load the data from the source systems to staging, but only those records that are not equal to the previously downloaded CRC checksum, so only changed or new data goes to staging.
    The third step is to load that new/changed data from staging to core and check some dependencies. It's just validation.
    My question is: what is the best practice to bring three tables (Countries, Cities and Regions) into one dimension?
    thanks and regards
    Andreas

    Andreas,
    I guess the correct answer is "it depends"... Joking aside, are you planning to use a flat star table for this dimension? If that is the case, you would be joining the sources together and loading the result into the table.
    Now this sounds way too simple, so I guess there is something more to the question...
    Jean-Pierre

  • In the Beginning it's Flat Files - Best Practice for Getting Flat File Data

    I probably should have posed this question here before I delved into writing Java to get data for reports, but better late than never.
    Our ERP is written in COBOL. We have a third party ODBC which allows us to access data using a version of SQL. I have several Java sources compiled in my database that access the data and return something relevant. The Java sources are written in a procedural style rather than taking advantage of object oriented programming with attributes and methods.
    Now that I am becoming more comfortable with the Java language, I would greatly appreciate any feedback as to best practices for incorporating Java into my database.
    My guess is that it would be helpful to model the ERP "tables" with Java classes that would have attributes, which correspond to the fields, and methods to return the attributes in an appropriate way. Does that sound reasonable? If so, is there a way to automate the task of modeling the tables? If not reasonable, what would you recommend?
    Thanks,
    Gregory

    Brother wrote:
    I probably should have posed this question here before I delved into writing Java to get data for reports, but better late than never.
    Our ERP is written in COBOL. We have a third party ODBC which allows us to access data using a version of SQL. I have several Java sources compiled in my database that access the data and return something relevant. The Java sources are written in a procedural style rather than taking advantage of object oriented programming with attributes and methods.
    OO is a choice not a mandate. Using Java in a procedural way is certainly not ideal but given that it is existing code I would look more into whether is well written procedural code rather than looking at the lack of OO.
    Now that I am becoming more comfortable with the Java language, I would greatly appreciate any feedback as to best practices for incorporating Java into my database.
    My guess is that it would be helpful to model the ERP "tables" with Java classes that would have attributes, which correspond to the fields, and methods to return the attributes in an appropriate way. Does that sound reasonable? If so, is there a way to automate the task of modeling the tables? If not reasonable, what would you recommend?
    Normally you create a data model driven by business need. You then implement using whatever means seem expedient in terms of other business constraints to closely model that data model.
    It is often the case that there is a strong correlation between data models and tables but certainly in my experience it is rare when there are not other needs driven by the data model (such as how foreign keys and link tables are implemented and used.)
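    As a rough sketch of the kind of class the question describes (the table, columns and names here are invented; it assumes plain JDBC over the third-party ODBC driver), one class per ERP "table" with fields for the columns and behaviour that belongs with the data:
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    public class Customer {
        private final String customerId;
        private final String name;
        private final String creditStatus;
        public Customer(String customerId, String name, String creditStatus) {
            this.customerId = customerId;
            this.name = name;
            this.creditStatus = creditStatus;
        }
        public String getCustomerId() { return customerId; }
        public String getName() { return name; }
        // behaviour lives with the data instead of being repeated in report code
        public boolean isOnCreditHold() {
            return "H".equals(creditStatus);
        }
        public static Customer findById(Connection conn, String id) throws SQLException {
            PreparedStatement ps = conn.prepareStatement(
                "SELECT CUST_ID, CUST_NAME, CREDIT_STATUS FROM CUSTOMER WHERE CUST_ID = ?");
            try {
                ps.setString(1, id);
                ResultSet rs = ps.executeQuery();
                return rs.next()
                    ? new Customer(rs.getString(1), rs.getString(2), rs.getString(3))
                    : null;
            } finally {
                ps.close();
            }
        }
    }
    Whether generating such classes automatically is worthwhile depends on how stable the ERP schema and the underlying data model are.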

  • Best Practice Question - Business Logic in Value Objects?

    Just wondering what people's thoughts are on best practices for setting properties of Value Objects.
    For instance, I have several getter/setters in one of my Value Objects with logic in the setter that uses the value to set values of other properties.
    For example, I have a Value Object that has the following properties:
    category (of type Category, which is another Value Object with properties "name" and "id")
    categoryId (of type int)
    categoryUpdated (of type Boolean)
    I have a collection of Category objects in the Model. When I set the categoryId of this class, I set "categoryUpdated" to true and dispatch a CairngormEvent that finds the "category" with the specified "categoryId" and sets the "category" property to that item.
    So what is the best practice? To simply make the "categoryId" a public variable, and create a new Event/Command to perform all this logic? Or is it ok to do it all in the Value Object setter?
    Thanks.

    Hi Eric,
    I can't speak for best practices, but the only logic I've ever added to a VO were getters: an example was a set of getters on an airline flight VO to get overall flight departure/arrival times/cities from an array of flight segments in a property of the VO.
    I feel uneasy (in the nicest possible way) about your VO for two reasons: firstly it has a strong dependency on bits of the Cairngorm framework to look up the category (and VOs normally don't need to depend on anything), and secondly the intent of Commands in Cairngorm is more to represent user gestures than to wire up VO properties (I often see people shoehorning stuff into Commands that might be better off as plain old business utility classes). I would rather see an UpdateCategoryCommand (that's what the user is trying to do?) that updates categoryId and hits a delegate to populate the category property.
    That said you might have a very good reason for doing this. Could you tell us where the code is that's setting categoryId, and if anything is using categoryUpdated?
    Cheers,
    Robin
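    To make the distinction concrete in Java terms (the same idea carries over to ActionScript; all names here are invented), a value object with dumb setters, a derived getter, and no framework dependencies might look like:
    class Category {
        private int id;
        private String name;
        public int getId() { return id; }
        public void setId(int id) { this.id = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }
    public class ProductVO {
        private int categoryId;
        private Category category;
        // dumb setter: stores the value, no lookups, no events dispatched
        public void setCategoryId(int categoryId) { this.categoryId = categoryId; }
        public int getCategoryId() { return categoryId; }
        public void setCategory(Category category) { this.category = category; }
        public Category getCategory() { return category; }
        // a derived getter is fine: it only reads the object's own state
        public String getCategoryName() {
            return category != null ? category.getName() : null;
        }
    }
    // the lookup of a Category by categoryId would then live in a command or
    // service, not inside the value object's setter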

  • Informatica and Essbase Best Practice questions

    We now have the Informatica adapter for Essbase installed and working. We have been able to get Informatica to upload data successfully. Now I have a few questions that I have not been able to find answers to in any documentation or forums for Informatica or Essbase. I have submitted these same questions to the Informatica Support but thought I would also post the questions here to see if many folks are using Informatica against Essbase.
    We are using:
    Informatica 8.6.1 (Linux)
    Essbase 11.1.1.3 (Windows 2003)
    1) I can see in Informatica that when we load data to Essbase (Target) it gives me the option to run a calc script AFTER it loads the data. However, if I need to run a calc script BEFORE the load to Essbase (Target), what is the best practice? The workaround I have found is to add the same session twice and, for the 1st instance, select the option to 'ONLY RUN THE CALC SCRIPT' on the mapping tab. The problem with this is that the log shows it will still run the query against the Source tables. This will impact run times and double the querying against the Source database. What is the best practice and proper way to build the workflow to run a calc script BEFORE the load?
    2) Since you do not see the list of calc scripts for Essbase in Informatica (you have to manually type the calc name), if I want to run the 'Default' calc for Essbase, what is the syntax to run the 'Default' calc script? I tried 'Default' but it didn't seem to work.
    3) I have other tasks in Essbase I want to do before actually having Informatica load the data. I would like to run the MAXL commands via a Command task. What is the best practice for doing this, and what is the syntax to run MAXL commands in a Command task in Informatica? I previously had shell scripts built on the Informatica server that would be kicked off within Informatica, but we are trying to move away from shell scripts and instead have the scripting code IN the workflows/sessions, to make it easier to review the code and follow the logic rather than having to find the scripts and open each of them.
    Any assistance you can offer on getting the two products working together would be GREATLY appreciated!
    Robert

    As I know, addUser(User user) { ... } is much more useful, for several reasons:
    1. It's object-oriented.
    2. It's easier to write, because if the object has many parameters it is very painful to write a method with comma-separated parameters.
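    A small illustration of that point (the class and fields are invented):
    class User {
        String firstName;
        String lastName;
        String email;
    }
    class UserService {
        // painful variant: every new field means another positional parameter
        void addUser(String firstName, String lastName, String email) {
            // ...
        }
        // object-oriented variant: one parameter carries all the data
        void addUser(User user) {
            // ...
        }
    }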

  • Best Practice Question: Portal Sync Rules

    Hi,
    Is there any benefit in combining Inbound and Outbound Flows in a single Sync Rule?
    Or does it not matter if Inbound flows have their own Sync Rule and Outbound flows have their own Sync Rules?
    Does either option generate more/less EREs/DREs?
    Is either option better for performance reasons?
    look forward to your comments, thank you
    sk

    1:
    Only the OSRs create EREs, so if you have 4 outbound and 3 inbound (assuming a user gets all 4 outbound SRs) the user should have 4 EREs. If the rules were combined into 3 outbound/inbound and 1 outbound, the user still gets 4 EREs.
    2:
    If my source was AD, I would create a new MV attribute called inAD. Then import a constant "true" from the target, AD for example. Then from my source, HR for example, I'll flow in a constant "false". Lastly, update the MV attribute precedence for the inAD attribute to be AD then HR. From there I'd create a custom attribute in the portal so I can use the flag for set criteria.
    This might be a long-winded way of getting it done, but say you have 10k users in AD and used DREs: it means 10k more objects in the MV and in the portal.
    Disclaimer: Not saying this is best practice, it's just what I do :) 

  • Best Practice Question on Heartbeat Network

    After running 3.0.3 for a few weeks in production, we are wondering if we set up our heartbeat/servers correctly.
    We have 2 servers in our Production Server pool. Our LAN, a 192.168.x.x network, has the Virtual IP of the Cluster (heartbeat), the 2 main IP addresses of the servers, and a NIC assigned to each guest. All of this has been configured on the same network. Over the weekend, I wanted to separate the Heartbeat onto a new network, but when trying to add to the pool I received:
    Cannot add server: ovsx.mydomain.com, to pool: mypool. Server Mgt IP address: 192.168.x.x, is not on same subnet as pool VIP: 192.168.y.y
    Currently, I only have one router that translate our WAN to our LAN of 192.168.x.x. I thought the heartbeat would strictly be internal and would not need to be routed anywhere and just set up as a separate VLAN and this is why I created 192.168.y.y. I know that the servers can have multiple IP addresses, and I have 3 networks added to my OVM servers. 192.168.x.x, 192.168.y.y and 192.168.z.z. y and z are not pingable from anything but the servers themselves or one of the guests that I have assigned that network to. I can not ping them directly from our office network, even through the VPN which only gives us access to 192.168.x.x.
    I guess I can change my Server Mgt IP away from 192.168.x.x to 192.168.y.y, but can I do that without reinstalling the VM server? How have others structured their networks, especially relating to the heartbeat?
    Is there any documentation/guides that would describe how to set up the networks properly relating to the heartbeat?
    Thanks for any help!!

    Hello user,
    In order to change your environment, what you could do is go to the Hardware tab -> Network. Within here you can create new networks and also change via the Edit this Network pencil icon what networks should manage what roles (i.e. Virtual Machine, Cluster Heartbeat, etc). In my past experience, I've had issues changing the cluster heartbeat once it has been set. If you have issues changing it, via the OVM Manager, one thing you could do is change it manually via the /etc/ocfs2/cluster.conf file. Also, if it successfully lets you change it via the OVM Manager, verify it within the cluster.conf to ensure it actually did your change. This is where that is being set. However, doing it manually can be tricky because OVM has a tendency to like to revert it's changes back to its original state say after a reboot. Of course I'm not even sure if they support you manually making that change. Ideally, when setting up an OVM environment, best practice would be to separate your networks as much as possible i.e. (Public network, private network, management network, clusterhb network, and live migration network if you do a lot of live migrating, otherwise you can probably place it with say the management network).
    Hope that helps,
    Roger

  • Best practice question for Bounded task flows

    We are new to JDeveloper/ADF and I was wondering if we should always try to use a bounded task flow for our applications... Is this considered the best way to develop an app, even the small single-page ones?
    Thanks in advance.

    Hi,
    let me turn the question around: how many tools do you have at home besides a hammer? In other words, it's the problem you need to solve (and use cases are problems) that should determine the use of bounded task flows and their granularity. Note that each use case makes a good candidate for being delivered by a bounded task flow. That doesn't mean that every single page needs to go into its own task flow. For general bounded task flow best practices, have a look here:
    http://www.oracle.com/technetwork/developer-tools/jdev/adf-task-flow-design-132904.pdf
    Frank
