Code best practice question LV2013

I am still fairly new to LabVIEW programming and I have been learning from the online self-paced classes. I started in June of 2012.
I am currently working on a new program front panel (host) VI and I am trying to determine a better way to implement a certain section of my code so that it works the way I would like it to. This VI is being designed for a touchscreen interface, so all user input is touch input.
What I want the code to do: when the user presses the command button on the front panel VI, an information pop-up window opens explaining the parameters for input. The user presses the OK button (which closes the info pop-up) and a sub-VI opens that is a numeric keypad. If the user input is within the accepted parameters, pressing 'enter' closes the sub-VI and writes the value to a numeric indicator. If the user enters an invalid value, the keypad sub-VI closes and another info window pops up telling the user 'incorrect value - re-enter value'; the user presses the OK button, which closes the info box and reopens the keypad sub-VI. This cycle should continue until the operator enters a valid value.
What the code does as written: the first attempt works exactly as described above, including the 'incorrect value - re-enter value' pop-up and the reopened keypad after an invalid entry. But if the user enters an invalid value again on the second attempt and presses 'enter', the keypad sub-VI closes and I have had to default the output to '0' (or some minimum allowed value).
So the question I am trying to figure out is this: is there a better way to implement this section of code so it works the way I want, maybe using shift registers or something, and if so, where would they go?
I am using the most current version of LabVIEW 2013.
The code as pictured works great except that I have to stop it after two attempts; otherwise the code would have to be repeated for each additional failure, and it would end up looking like staring off into oblivion between two mirrors.
Thanks,
Attachments:
code question.jpg (429 KB)

Gearmiester wrote:
What I want the code to do: when the user presses the command button on the front panel VI, an information pop-up window opens explaining the parameters for input. The user presses the OK button (which closes the info pop-up) and a sub-VI opens that is a numeric keypad. If the user input is within the accepted parameters, pressing 'enter' closes the sub-VI and writes the value to a numeric indicator. If the user enters an invalid value, the keypad sub-VI closes and another info window pops up telling the user 'incorrect value - re-enter value'; the user presses the OK button, which closes the info box and reopens the keypad sub-VI. This cycle should continue until the operator enters a valid value.
Why do you check for a valid entry outside the sub-VI that allows the operator to input a value?  You could simply monitor each keystroke (every character entered) and have a small algorithm that checks for a correct entry on the fly (while it is being entered).  You could change the color of the text to red and make a brief description appear below the string control that describes valid entries.  Keep it simple.
Gearmiester wrote:
What the code does as written: the first attempt works exactly as described above, including the 'incorrect value - re-enter value' pop-up and the reopened keypad after an invalid entry. But if the user enters an invalid value again on the second attempt and presses 'enter', the keypad sub-VI closes and I have had to default the output to '0' (or some minimum allowed value).
Why is the 2nd attempt different from the others?  It does not matter how many attempts are made, since it is not an automated process but human interaction.  Surely the operators are trained and would have an idea of what should be entered.  Additional instructions can be provided within the documentation that accompanies the application.
Gearmiester wrote:
So the question I am trying to figure out is this: is there a better way to implement this section of code so it works the way I want, maybe using shift registers or something, and if so, where would they go?
I am using the most current version of LabVIEW 2013.
The code as pictured works great except that I have to stop it after two attempts; otherwise the code would have to be repeated for each additional failure, and it would end up looking like staring off into oblivion between two mirrors.
Do you absolutely want it to work as described in your first paragraph?  There are always "better ways" to do things; that's a personal preference.  Maybe you should ask the people who will be using the application (the operators) to get their feedback.  That's what I usually do.
Using the latest LabVIEW version is fine, but has little or no impact on the implementation decisions. 
If all you really want is to fix the code to prevent it from stopping after 2 attempts, then show us a code snippet of what you have done.
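In the meantime, here is a rough text-only sketch of the retry loop you describe in your first paragraph. LabVIEW is graphical, so this is just an illustration in Java (using standard Swing dialogs): the while loop plays the role of a LabVIEW while loop around your keypad sub-VI, the 'value' variable plays the role of a shift register carrying the last entry, and MIN/MAX are placeholders for your accepted range.

import javax.swing.JOptionPane;

public class RetryUntilValid {
    static final double MIN = 0.0;    // placeholder lower limit
    static final double MAX = 100.0;  // placeholder upper limit

    public static void main(String[] args) {
        // Initial info pop-up explaining the parameters for input.
        JOptionPane.showMessageDialog(null, "Enter a value between " + MIN + " and " + MAX);
        double value;
        while (true) {
            // Stands in for the numeric keypad sub-VI.
            String entry = JOptionPane.showInputDialog("Value:");
            try {
                value = Double.parseDouble(entry);
                if (value >= MIN && value <= MAX) {
                    break;  // valid entry: leave the loop
                }
            } catch (NumberFormatException | NullPointerException e) {
                // fall through to the error pop-up below
            }
            JOptionPane.showMessageDialog(null, "Incorrect value - re-enter value");
        }
        // Stands in for writing the accepted value to the numeric indicator.
        System.out.println("Accepted value: " + value);
    }
}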

Similar Messages

  • Best Practices Question: How to send error message to SSHR web page.

    Best Practices Question: How to send an error message to an SSHR web page from a custom PL/SQL procedure called by an SSHR workflow.
    For the Manager Self-Service application we've copied various workflows, which were modified to meet business needs. Part of this exercise was creating custom PL/SQL package procedures that gather details on the workflow and use them in custom notifications sent by the workflow.
    What I'm looking for is this: if/when the PL/SQL procedure errors, how does one send a failure message back and display it on the self-service page?
    Writing information into a log or table at the database level works for troubleshooting, but we're looking for something that will provide the end-user with a meaningful message that the workflow has failed.
    Thanks ahead of time for your responses.
    Rich

    We have implemented the same kind of requirement long back.
    We have defined our PL/SQL procedures with two OUT parameters
    1) Result Type (S:Success, E:Error)
    2) Result Message
    In the PL/SQL procedure we always use the construct below when we want to raise a message:
    hr_utility.set_message(APPL_NO, 'FND_MESSAGE_NAME');
    hr_utility.raise_error;
    In the exception block we write the following (in the successful case we just set p_result_flag := 'S'):
    EXCEPTION
      WHEN APP_EXCEPTION.APPLICATION_EXCEPTION THEN
        p_result_flag := 'E';
        p_result_message := hr_utility.get_message;
      WHEN OTHERS THEN
        p_result_flag := 'E';
        p_result_message := hr_utility.get_message;
        fnd_message.set_name('PER', 'FFU10_GENERAL_ORACLE_ERROR');
        fnd_message.set_token('2', substr(sqlerrm, 1, 200));
        fnd_msg_pub.add;
        p_result_message := fnd_msg_pub.get_detail;
    After executing the PL/SQL, on the Java side we have written something similar to:
    orclStmt.execute();
    OAExceptionUtils.checkErrors(txn);
    String resultFlag = orclStmt.getString(/* result flag bind position */);
    if ("E".equalsIgnoreCase(resultFlag)) {
        String resultMessage = orclStmt.getString(/* result message bind position */);
        orclStmt.close();
        throw new OAException(resultMessage, OAException.ERROR);
    }
    It safely shows the message to the user with all the data in the page.
    We have been using this construct for a long time for all our projects. They are all working as expected.
    Regards,
    Peddi.

  • Best practice question -- copy container, assemble it, build execution plan

    So, this is a design / best practice question:
    I usually copy containers as instructed by the docs.
    I then set the source system parameters.
    I then generate the needed parameters / assemble the copied container for ALL subject areas present in the container.
    I then build an execution plan JUST FOR THE 4 SUBJECT AREAS I need and set whatever is needed before running it.
    QUESTION - When I copy the container, should I delete all of the subject areas I don't need, or is it best to do this when building the execution plan? I am basically trying to simplify the container for my own sake and have the container contain just a few subject areas, rather than wait until I build the execution plan and then focus on a few subject areas.
    Your thoughts / clarifications are appreciated.
    Regards,

    Hi,
    I would suggest that you leave the subject areas and then just don't include them in the execution plan. Otherwise you have the possibility of running into the situation where you need to include another subject area in the future and you will have to go through the hassle of recreating it in your SSC.
    Regards,
    Matt

  • SAP Adapter Best Practice Question for Deployment to Clustered Environment

    I have a best practices question on the iway Adapters around deployment into a clustered environment.
    According to the documentation, you are supposed to run the installer on both nodes in the cluster but configure on just the first node. See below:
    Install Oracle Application Adapters 11g Release 1 (11.1.1.3.0) on both machines.
    Configure a J2CA configuration as a database repository on the first machine.
    Perform the required changes to the ra.xml and weblogic-ra.xml files before deployment.
    This makes sense to me because once you deploy the adapter rar in the next step, the appropriate rar will get staged and deployed on both nodes in the cluster.
    What is the best practice for the 3rdParty adapter directory on the second node? The installer lays it down with the adapter rar and all. Since we only configure the adapter on node 1, the directory on node 2 will remain with the default installation files/values not the configured ones. Is it best practice to copy node 1's 3rdParty directory to node 2 once configured? If we leave node 2 with the default files/values, I suspect this will lead to confusion to someone later on who is troubleshooting because it will appear it was never configured correctly.
    What do folks typically do in this situation? Obviously everything works to leave it as is, but it seems strange to have the two nodes differ.

    What is the version of the operating system? If you are on any OS version lower than Windows 2012, then you need to add one more voter for quorum.
    Balmukund Lakhani

  • SAP Adapter Best Practice Question for Migration of Channels

    I have a best practice question on the SAP adapter when migrating an OSB project from one environment (DEV) to another (QA).
    If my project includes an adapter channel (e.g., an inbound SAP proxy listening on a channel), how do I migrate that project to another environment if the channel in the target environment is different?
    I tried using the search and replace mechanism in the sbconsole, but it doesn't find the channel name in the jca and wsdl files.
    What is the recommended way to migrate from one environment to the other when the channel name changes?


  • Best Practice Question - Activate Company Code - Open client or transport?

    Hello.
    When activating Company Codes in a newly productive system, is it best practice to do it directly in an open client, or to change the setting via transport?
    Thanks and Regards,
    D Flores

    What do you mean by activating the company code?
    Is it the productive check box in OBY6 that you are talking about?
    If so, open the client for manual changes, make the setting, and then put the client settings back.

  • Informatica and Essbase Best Practice questions

    We now have the Informatica adapter for Essbase installed and working. We have been able to get Informatica to upload data successfully. Now I have a few questions that I have not been able to find answers to in any documentation or forums for Informatica or Essbase. I have submitted these same questions to the Informatica Support but thought I would also post the questions here to see if many folks are using Informatica against Essbase.
    We are using:
    Informatica 8.6.1 (Linux)
    Essbase 11.1.1.3 (Windows 2003)
    1) I can see in Informatica that when we load data to Essbase (target) it gives me the option to run a calc script AFTER it loads the data. However, if I need to run a calc script BEFORE the load to Essbase (target), what is the best practice? The workaround I have found is to add the same session twice and, for the first instance, select the option to 'ONLY RUN THE CALC SCRIPT' on the mapping tab. The problem with this is that the log shows it will still run the query against the source tables. This will impact run times and double the querying against the source database. What is the best practice and proper way to build the workflow to run a calc script BEFORE the load?
    2) Since you do not see the list of calc scripts for Essbase in Informatica (you have to manually type the calc name), if I want to run the 'Default' calc for Essbase, what is the syntax to run the 'Default' calc script? I tried 'Default' but it didn't seem to work.
    3) I have other tasks in Essbase I want to do before actually having Informatica load the data. I would like to run the MAXL commands via a Command task. What is the best practice for doing this, and what is the syntax to run MAXL commands in a Command task in Informatica? I previously had shell scripts built on the Informatica server that would be kicked off from within Informatica, but we are trying to move away from shell scripts and instead have the scripting code IN the workflows/sessions, to make it easier to review the code and follow the logic rather than having to find the scripts and open each of them.
    Any assistance with getting the two products working together would be GREATLY appreciated!
    Robert

    As I know, addUser(User user) { ... } is much more useful, for several reasons:
    1. It's object oriented.
    2. It's easy to write, because if an object has many parameters it is very painful to write a method with comma-separated parameters.
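    A minimal illustration of that point, with a made-up User class and service:

    class User {
        String name;
        String email;
        String phone;
    }

    class UserService {
        // Object-oriented: adding a field to User later does not break callers.
        void addUser(User user) {
            // persist the user here
        }

        // Painful as the parameter list grows, and easy to mix up the argument order.
        void addUser(String name, String email, String phone) {
            // persist the individual fields here
        }
    }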

  • Bean best practice question

    Simple questions (hopefully), I know how to code this but just want some advice on the best way to do the following:
    1. User enters data into HTML form and submits
    2. Some Java at the backend grabs these details and emails them off somewhere
    I am thinking of doing the following, but what’s the best way?
    1. HTML form submits and data is sent directly to a JavaBean (FormBean.java)
    2. FormBean.java contains standard getters/setters but also contains a method called sendMail(), is this bad practice? Do I need a second Bean sendMail.java? Or is this completely the wrong way to do things i.e. should I do this entirely in a servlet with only 1 bean to grab the data (FormBean) and then access from servlet?
    Just a bit confused on what’s best practice for this stuff?
    Thanks!

    "2. FormBean.java contains standard getters/setters but also contains a method called sendMail(), is this bad practice? Do I need a second Bean sendMail.java?"
    A better approach is this:
    a) Have all the form data in the form bean.
    b) Write sendMail() in an altogether different class, as an action.
    c) Pass the form bean as a parameter to sendMail() for processing and sending the email.
    This way your sendMail() becomes a kind of service. Tomorrow you might have some other data which you need to send in an email; in that case you just reuse the sendMail() method. If instead you have sendMail() in the form bean itself, then if there are many form beans you would have to write sendMail() in every form bean, which is bad practice. One principle of OOAD is to separate out functionality that is redundant across your classes and make it a separate module. If there are changes to the sendMail() functionality then, by having it in one module, you only have to change it in one place.
    "Or is this completely the wrong way to do things, i.e. should I do this entirely in a servlet with only 1 bean to grab the data (FormBean) and then access from servlet?"
    You can have a servlet which acts as a controller: it receives the request parameters, constructs the form bean, and invokes the appropriate action (in your case sendMail()). This is the same as an MVC framework. Instead of re-inventing the wheel to create a servlet controller, form bean, action, and so on, you could use one of the several MVC frameworks available, such as Struts or Spring MVC.
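    As a rough sketch of that separation (class and field names are made up, and the actual mail transport is left as a stub since it depends on your mail library):

    // Plain form bean: just data and getters/setters.
    class FormBean {
        private String name;
        private String email;
        private String message;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public String getEmail() { return email; }
        public void setEmail(String email) { this.email = email; }
        public String getMessage() { return message; }
        public void setMessage(String message) { this.message = message; }
    }

    // Separate mail "service": reusable for any bean whose data you need to email.
    class MailService {
        public void sendMail(FormBean form) {
            String body = "From: " + form.getName() + " <" + form.getEmail() + ">\n" + form.getMessage();
            // Hand 'body' to your mail API of choice (JavaMail, etc.) here.
            System.out.println("Sending:\n" + body);
        }
    }

    // Controller (in a real app, a servlet's doPost): builds the bean and invokes the action.
    class SendMailController {
        public void handle(java.util.Map<String, String> requestParams) {
            FormBean form = new FormBean();
            form.setName(requestParams.get("name"));
            form.setEmail(requestParams.get("email"));
            form.setMessage(requestParams.get("message"));
            new MailService().sendMail(form);
        }
    }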

  • Best Practices: Question about Passing DataSet to Crystal through C#

    I have used the tutorial provided by Business Objects and have successfully passed a dataset from C# code to Crystal Reports.
    I have a few questions about "best practices" though.
    It appears that when you pass a dataset to Crystal you no longer have the ability to put SQL Expressions in the report; otherwise errors will occur.
    So, I'm trying to come up with a way to have a custom field in the SELECT statement and have it show up on the report as a field. In the example below I created a custom SQL field called TIMES7 in the query:
    "Select CLM_ID, CLM, PAID_111X, (PAID111X * 7) As TIMES7 From WIKI.MULCRICKET WHERE CLM_ID < 5"
    If I create a formula field in Crystal and set its value to {MULCRICKET.TIMES7} then it works like a hybrid SQL Expression at run-time.
    This does work, but it has issues because the database field doesn't really exist in the design environment, which causes an error when the formula is saved. But it does work at run-time.
    I was wondering if there was a best practice for this?

    So why are you using a formula that doesn't work? If it errors in the designer it's telling you it's not supported.
    Drop the formula into a field and hide it if you don't want it displayed; this way it gets into the record selection formula.
    "Best Practices" is don't use formulae that won't verify in the Designer. "If it doesn't work in the designer it won't work in code"
    If you need to add an "unknown" field at run time then use RAS to insert the field.

  • Best Practice question - null or empty object?

    Given a collection of objects where each object in the collection is an aggregation, is it better to leave references in the object as null or to instantiate an empty object? Now I'll clarify this a bit more.....
    I have an object, MyCollection, that extends Collection and implements Serializable(work requirement). MyCollection is sent as a return from an EJB search method. The search method looks up data in a database and creates MyItem objects for each row in the database. If there are 10 rows, MyCollection would contain 10 MyItem objects (references, of course).
    MyItem has three attributes:
    public class MyItem implements Serializable {
        String name;
        String description;
        MyItemDetail detail;
    }
    When creating MyItem, let's say that this item didn't have any details, so there is no reason to create a MyItemDetail. Is it better to leave detail as a null reference, or should an empty MyItemDetail object be created? I know this sounds like a specific app requirement, but I'm looking for a best practice - what most people do in this case. There are reasons for both approaches. Obviously, a bunch of empty objects going over RMI is a strain on resources, whereas a bunch of null references is not. But on the receiving end, you have to account for the MyItemDetail reference possibly being null - is that a hassle or not?
    I looked for this at Java Practices (http://www.javapractices.com) but found nothing.

    "I know this sounds like a specific app requirement, but I'm looking for a best practice - what most people do in this case."
    It depends, but in general I use null.
    "Stupid."
    Thanks for that insightful comment.
    "I do a lot of database work though. And for that null means something specific."
    Sure, return null if you have a context where null means something, like for example that you got no result at all. But as I said before, it's best to keep the nulls at the perimeter of your design. Don't let nulls slip through.
    As I said, I do a lot of database work, and there null does mean something specific. Thus (in conclusion) that means that, in "general", I use null most of the time.
    Exactly what part of that didn't you follow?
    And exactly what sort of value do you use for a Date when it is undefined? What non-null value do you use such that your users do not have to write exactly the same code that they would to check for null anyway?
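    To make the trade-off concrete, here is a small self-contained sketch that reuses the MyItem names from the question (fields simplified); only the null handling differs between the two conventions:

    import java.util.Arrays;
    import java.util.List;

    class MyItemDetail {
        String text = "";
        boolean isEmpty() { return text.isEmpty(); }
    }

    class MyItem {
        String name;
        MyItemDetail detail;  // convention 1: may be null; convention 2: always present
        MyItem(String name, MyItemDetail detail) { this.name = name; this.detail = detail; }
    }

    public class NullVsEmpty {
        public static void main(String[] args) {
            // Convention 1: leave detail null when there is nothing to report.
            List<MyItem> withNulls = Arrays.asList(new MyItem("a", null), new MyItem("b", new MyItemDetail()));
            for (MyItem item : withNulls) {
                if (item.detail != null) {  // every caller must null-check
                    System.out.println(item.name + ": " + item.detail.text);
                }
            }
            // Convention 2: always create a detail object, even an empty one.
            // No null checks, but an extra object per item over RMI and an explicit notion of "empty".
            List<MyItem> withEmpties = Arrays.asList(new MyItem("a", new MyItemDetail()), new MyItem("b", new MyItemDetail()));
            for (MyItem item : withEmpties) {
                if (!item.detail.isEmpty()) {
                    System.out.println(item.name + ": " + item.detail.text);
                }
            }
        }
    }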

  • Best Practice Question

    I have 3 areas for my DWH.
    The first area is Staging, then Validation and Core.
    Staging is just to load data from the source systems.
    Validation is to validate the data (every city has to have a country, ...).
    Core is my DWH schema.
    The first step in ETL is to load the data from Core to Validation; let's say my GEO_DIM dimension goes to Countries, Cities and Regions in Core. Additionally, I build a CRC sum when I download from Core to Validation and store the CRC checksum in a staging table.
    The second step is to load from the source systems to Staging, but only those records whose CRC checksum differs from the previously downloaded one, so only changed or new data goes to Staging.
    The third step is to load that new/changed data from Staging to Core and check some dependencies. It's just validation.
    My question is, what is the best practice to bring three tables (Countries, Cities and Regions) into one dimension?
    thanks and regards
    Andreas

    Andreas,
    I guess the correct answer is "it depends"... Without kidding, are you planning to use a flat star table for this dimension? If that is the case you would be joining the sources together and loading the result into the table.
    Now this sounds way too simple, so I guess there is something more to the question...
    Jean-Pierre

  • Best Practice Question - Business Logic in Value Objects?

    Just wondering what people's thoughts are on best practices for setting properties of Value Objects.
    For instance, I have several getter/setters in one of my Value Objects with logic in the setter that uses the value to set values of other properties.
    For example, I have a Value Object that has the following properties:
    category (of type Category, which is another Value Object with properties "name" and "id")
    categoryId (of type int)
    categoryUpdated (of type Boolean)
    I have a collection of Category objects in the Model. When I set the categoryId of this class, I set "categoryUpdated" to true and dispatch a CairngormEvent that finds the "category" with the specified "categoryId" and sets the "category" property to that item.
    So what is the best practice? To simply make the "categoryId" a public variable, and create a new Event/Command to perform all this logic? Or is it ok to do it all in the Value Object setter?
    Thanks.

    Hi Eric,
    I can't speak for best practices, but the only logic I've ever added to a VO were getters: an example was a set of getters on an airline flight VO to get overall flight departure/arrival times/cities from an array of flight segments in a property of the VO.
    I feel uneasy (in the nicest possible way) about your VO for two reasons: firstly, it has a strong dependency on bits of the Cairngorm framework to look up the category (and VOs normally don't need to depend on anything), and secondly, the intent of Commands in Cairngorm is more to represent user gestures than to wire up VO properties (I often see people shoehorning stuff into Commands that might be better off as plain old business utility classes). I would rather see an UpdateCategoryCommand (that's what the user is trying to do?) that updates categoryId and hits a delegate to populate the category property.
    That said you might have a very good reason for doing this. Could you tell us where the code is that's setting categoryId, and if anything is using categoryUpdated?
    Cheers,
    Robin
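    The thread is about Flex/Cairngorm, but the shape of Robin's suggestion is framework-neutral; here is a rough Java-flavoured sketch with made-up names (in the real app this would be an ActionScript command and delegate):

    import java.util.HashMap;
    import java.util.Map;

    class Category {
        final int id;
        final String name;
        Category(int id, String name) { this.id = id; this.name = name; }
    }

    // The value object stays dumb: no lookups, no framework dependencies.
    class ItemVO {
        int categoryId;
        Category category;
    }

    // The "command" owns the user gesture: update the id, then populate the category.
    class UpdateCategoryCommand {
        private final Map<Integer, Category> categoriesById;  // stands in for the model/delegate

        UpdateCategoryCommand(Map<Integer, Category> categoriesById) {
            this.categoriesById = categoriesById;
        }

        void execute(ItemVO item, int newCategoryId) {
            item.categoryId = newCategoryId;
            item.category = categoriesById.get(newCategoryId);  // the lookup lives here, not in a VO setter
        }
    }

    public class UpdateCategoryDemo {
        public static void main(String[] args) {
            Map<Integer, Category> model = new HashMap<>();
            model.put(1, new Category(1, "Hardware"));
            ItemVO item = new ItemVO();
            new UpdateCategoryCommand(model).execute(item, 1);
            System.out.println(item.category.name);  // prints "Hardware"
        }
    }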

  • Where to put java code - Best Practice

    Hello. I am working with the Jdeveloper 11.2.2. I am trying to figure out the best practice for where to put code. After reviewing http://docs.oracle.com/cd/E26098_01/web.1112/e16182.pdf it seemed like the application module was the preferred spot (although many of the examples in the pdf are in main methods). After coding a while though, I noticed that there were quite a few libraries imported, and wondered whether this would impact performance.
    I reviewed postings on the forum, especially Re: Access service method (client interface) programmatically . This link mentions accessing code from a backing bean -- and the gist of the recommendations seems to be to use the data control to drag it to the JSF, or use the bindings to access code.
    My interest lies in where to put Java code in the first place: in the view object, entity object, application module, backing bean... other?
    I can outline several best guesses about where to put code and the pros and cons:
    1. In the application module
    Pros: Centralized location for code makes development and support more simple as there are not multiple access points. Much like a data control centralizes services, the application module can act as a conduit for different pieces of code you have in objects in your model.
    Cons: Everything in one place means the application module becomes bloated. I am not sure how memory works in java -- if the app module has tons of different libraries are they all called when even a simple query re-execute method is called? Memory hog?
    2. Write code in the objects it affects. If you are writing code that accesses a view object, write it in the view object. Then make it visible to the client.
    Pros: The code is accessed via fewer conduits (for example, I would expect that if you call the application module from a JSF backing bean, and the application module then calls the view object, you have three different pieces of code involved).
    Cons: The code gets spread out, is harder to locate, etc.
    I would greatly appreciate your thoughts on the matter.
    Regards,
    Stuart

    First point here is when you say "where to put the java code" and you're referring to ADF BC, the point is you put "business logic java code" in the ADF Business Components. It's fine of course to have Java code in the ViewController layer that deals with the UI layer. Just don't put business logic in the UI layer, and don't put UI logic in the model layer. In your 2 examples you seem to be considering the ADF BC layer only, so I'll assume you mean business logic java code only.
    Meanwhile I'm not keen on the term best practice as people follow best practices without thinking, typically best practices come with conditions and people forget to apply them. Luckily you're not doing that here as you've thought through the pros and cons of each (nice work).
    Anyway, back on topic and off my soap box, as for where to put your code, my thoughts:
    1) If you only have 1 or 2 methods put it in the AppModuleImpl
    2) If you have hundreds of methods, or there's a chance #1 above will morph into #2, split the code up between the AppModuleImpl, ViewImpl and ViewRowImpls. Why? Because your AM will become overloaded with hundreds of methods making it unreadable. Instead put the code where it should logically go. Methods that work on a specific VO row go into the associated ViewRowImpl, methods that work across rows in a VO go into the ViewImpl, and methods that work across VOs in the associated AppModuleImpl.
    To be honest, whichever option you choose, one thing I do recommend as a best practice is to be consistent and document the standard so your other programmers know.
    Btw there isn't an issue about loading lots of libraries/imports into a class, it has no runtime cost. However if your methods require lots of class variables, then yes this will have a memory cost.
    On a side note if you're interested in more ideas around how to build ADF apps correctly think about joining the "ADF EMG", a free online forum which discusses ADF architecture, best practices (cough), deployment architectures and more.
    Regards,
    CM.

  • Best practice question for Bounded task flows

    We are new to JDeveloper/ADF and I was wondering if we should always try to use a bounded task flow for our applications... is this considered the best way to develop an app, even the small single-page ones?
    Thanks in advance.

    Hi,
    let me turn the question around: how many tools do you have at home besides a hammer? In other words, it's the problem you need to solve (and use cases are problems) that should determine the use of bounded task flows and their granularity. Note that each use case makes the bounded task flow a good candidate for delivering it. That doesn't mean every single page needs to go into its own task flow. For general bounded task flow best practices, have a look here:
    http://www.oracle.com/technetwork/developer-tools/jdev/adf-task-flow-design-132904.pdf
    Frank

  • Constants or properties file - best practice question

    Hi,
    My application has a number of values that will be used in different classes throughout my project. For example, I perform a check against the max allowed length of an ID in numerous places in my code.
    Therefore, it makes sense to set this value in a central location, and refer to it as a variable where it is required in my code.
    I see in other projects that using a public static final member in a Constants class is used to set these types of values. Is this recommended or best practice?
    The only alternative I can think of would be to use a properties file, and inject the value using Spring etc.
    What is considered best practice or the neatest way for doing this?
    Thanks

    user10340197 wrote:
    Thanks. I'm using Spring anyway, so it would provide me with the PropertyPlaceholderConfigurer for injecting these.
    And the name of that class provides a clue. As Kayaman said, constants are constants. Math.PI does not and will not change, EVER. Neither will Integer.MAX_VALUE.
    Configuration parameters, on the other hand, might change. If your MAX_ID_LENGTH is ever likely to change, and could do so without causing widespread chaos, then it probably should be a property (i.e., a configuration value).
    If not, it should probably be a constant (with appropriate 60-point documentation warning people what might happen if they DO change it).
    Winston
    PS: There is nothing particularly terrible about having a Properties class (except that you'll want to call it something different) that initializes its values from configuration files; except that if there are gazillions of them, you might want to:
    (a) Split them up into "themes".
    (b) Re-think your design.
    Winston
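    For reference, a minimal sketch of the two approaches discussed above (MAX_ID_LENGTH, the property key, and app.properties are made-up placeholders):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    public class IdRules {
        // A true constant: callers reference IdRules.MAX_ID_LENGTH directly.
        public static final int MAX_ID_LENGTH = 32;

        // The same value treated as configuration, loaded from a properties file at startup.
        public static int loadMaxIdLength(String propertiesFile) throws IOException {
            Properties props = new Properties();
            try (FileInputStream in = new FileInputStream(propertiesFile)) {
                props.load(in);
            }
            return Integer.parseInt(props.getProperty("id.max.length", "32"));
        }

        public static void main(String[] args) throws IOException {
            System.out.println("Constant:   " + MAX_ID_LENGTH);
            System.out.println("Configured: " + loadMaxIdLength("app.properties"));
        }
    }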
