Best practice using regular properties

What is considered best practice when it comes to using properties? For example, hostname and port number when connecting to an external resource.
Should property files be used, or is this considered bad practice? Should deployment descriptors be used, and if so, how does one update these properties when they change?
Are there any utility classes that make it easy to access these kinds of properties?
---- Trond

It depends on the properties. Many properties, such as hostname, can be retrieved using different API calls, for example via the request object or other portal-specific objects.
Properties that your applications need, and that might change, can be stored in a properties file. I use a singleton to retrieve them, and have a reload method on the singleton that I can call if I need to reload the properties once the server has started.
Kunal
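A minimal sketch of the singleton approach described above. This is an illustration, not Kunal's actual code; the class name and the file name "app.properties" are assumptions:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Loads a properties file once, exposes lookups, and offers reload()
// so the file can be re-read without restarting the server.
public final class AppProperties {
    private static final AppProperties INSTANCE = new AppProperties();
    private volatile Properties props = new Properties();

    private AppProperties() {
        reload();
    }

    public static AppProperties getInstance() {
        return INSTANCE;
    }

    public String get(String key, String defaultValue) {
        return props.getProperty(key, defaultValue);
    }

    // Re-reads the file and swaps the Properties reference atomically,
    // so concurrent readers never see a half-loaded state.
    public synchronized void reload() {
        Properties fresh = new Properties();
        try (InputStream in = new FileInputStream("app.properties")) {
            fresh.load(in);
        } catch (IOException e) {
            // Keep the old values if the file is missing or unreadable.
            return;
        }
        props = fresh;
    }
}
```

Callers would then use something like AppProperties.getInstance().get("db.hostname", "localhost"), and an admin hook could invoke reload() after the file is edited.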

Similar Messages

  • Favorite / Best Practice / Useful MTE's ??!?!

    Hi Everyone,
    I'm setting up monitoring on a multi-system architecture; I've got all of the agents working and reporting to my CEN. I've even got the auto-response email all set up. Great!
    I've been looking around for any best practice on which MTEs (out of the hundreds there!) I should be setting up virtual and rule-based nodes for.
    So come on, everyone: what are your favorite MTEs? Of course I'm looking at Dialog Response Times and Filesystem %s, but any other tips? Hints? Tricks? Any neat MTEs out there that you love to check?
    A best practice / useful guide would be brilliant!
    Thanks in advance.
    Nic Doodson


  • What is the BEST practice - use BO or Java Object in process as webservice

    Hi All,
    I have my BP published as a web service. I have defined my process input & output as BOs. My BP talks to the DB through a DAO layer (written in Java) which has Java objects. So I have BOs as well as Java objects. Since I am collecting user input in a BO, I have to assign the individual values contained in the BO to the Java object's fields.
    I want to reduce this extra headache and use either the BO or the Java object. I want to know what the best practice is: use a BO or a Java object as process input? If it is a BO, how can I reuse BOs in Java?
    Thanks in advance.
    Thanks,
    Sujata P. Galinde

    Hi Mark,
    Thanks for your response. I also wanted to use a Java object only. When I use a Java object as the process input argument, it is fine. But when I try to create the process web service, I am getting a compilation error: "data type not supported".
    To get rid of this error, I tried to use inheritance (a BO inheriting from the Java class). But while invoking the process as a web service, it is not asking for the fields that are inherited from the Java class.
    Then I created a Business Object with a field of type Java class. This also is not working; while sending the request, it gives an error that the field types for fields from the Java class are not found.
    Conclusion: I am not able to use a Java object as the input argument of a process exposed as a web service.
    What is the best and feasible way to accomplish the task: a process using a DAO in Java, exposed as a web service?
    Thanks & Regards,
    Sujata

  • Looking for best practices using Linux

    I use the Linux platform for all the Hyperion tools. We have been having problems with Analyzer V7.0.1; the server hangs up randomly.
    I'm looking for Linux best practices for using Analyzer, Essbase, EAS, etc.
    I'll appreciate any good or bad comments related to Hyperion on Linux OS.
    Thanks in advance.
    Mario Guerrero
    Mexico

    Hi,
    Did you search for patches? It can be a known problem. I use all the Hyperion tools on Windows without any big problems.
    Hope this helps,
    Grofaty

  • Best Practices Used in CMS

    Hi,
    Can anyone share the best practices used in CMS transports?
    Basically, why I need this is that we want to keep track of all the transports that are done in QA/Prod.
    Regards,
    Sreenivas

    Hi
    You can try checking this document for info about CMS
    (How To… Transport XI Content Using CMS)
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/f85ff411-0d01-0010-0096-ba14e5db6306
    Also...
    How to configure the CMS for XI?
    Business System Groups - CMS

  • Best Practices - Update Explorer Properties of BW Objects

    What are some best practices of using the process chain type "Update Explorer Properties of BW Objects"?
    We have the option of updating Conversion Indexes, Hierarchy Indexes, Authorization Indexes, and RKF/CKF Indexes.
    When should we run each update process?
    Here are some options we're considering:
    Conversion Indexes - Run this within InfoCube load process chains that contain currency conversions within explorer objects.
    Hierarchy Indexes - When would this need to be run? Does this need to be run for PartProviders and/or Snapshots? Do ACRs handle this update? Should this be run within InfoCube load chains, or after ACRs?
    Authorization - We plan to run this a couple times a day for all explorer objects.
    RKF/CKF Indexes - Do these need to run after InfoCube loads? With PartProvider and/or Snapshot indexes? After transports have completed?
    Thanks,
    Cote

    Does anyone productively use explorer and this process type for process chains?

  • Best practice using keyword in subject for encryption

    I have set up an encryption content filter on my appliance to encrypt messages that are outbound and have a subject header that begins with the keyword Secure inside brackets: [Secure]. My intent was to eliminate some false positives by including the brackets. What ended up happening was that, for some reason, ALL outbound messages were being encrypted.
    After some more testing, it seems like the content filter is ignoring the brackets as a requirement, but I can't seem to find anything in the online help to back this up.
    Can someone assist with verification of the requirements around using a keyword in the subject to allow users to encrypt outbound messages voluntarily?
    Best practices around achieving this, if anyone has them, would also be greatly appreciated.
    Thanks,
    Chris

    Hi Chris,
    While I would have to see the syntax of the filter in question to be 100% sure, it sounds as if your conditional variable is being treated literally. The content filters accept regular expressions, so something like [Secure] could be seen as: look for anything that contains an S, an e, a c, a u, or an r.
    Since you said "begins with", we would be looking for anything in the subject that begins with one of these letters. This is because the brackets [ ] are special characters in regular expressions, and thus they need to be escaped. That being said, [Secure] would become \\[Secure\\]; we use the \\ to escape the brackets. In content filters, however, you can get by with using just one escape, as the filter is smart enough to go ahead and enter the additional escape for you. So in the filter it would be something like: begins with \[Secure\]
    Once you enter this in the condition for your filter, it will display like the following:
    subject == "^\\[Secure\\]"
    I hope that helps.
    Christopher C Smith
    CSE
    Cisco IronPort Customer Support  
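The escaping behaviour described above can be reproduced with an ordinary regex engine. This is an illustration in Java, not the appliance's own filter syntax: the unescaped pattern treats [Secure] as a character class (any one of S, e, c, u, r), which is why subjects merely starting with one of those letters matched.

```java
import java.util.regex.Pattern;

public class SubjectFilterDemo {
    public static void main(String[] args) {
        // Unescaped: "[Secure]" is a character class, not a literal tag.
        Pattern unescaped = Pattern.compile("^[Secure]");
        // Escaped: matches the literal string "[Secure]" at the start.
        Pattern escaped = Pattern.compile("^\\[Secure\\]");

        String tagged = "[Secure] Quarterly report";
        String plain  = "Status update"; // begins with 'S'

        System.out.println(unescaped.matcher(tagged).find()); // false: '[' is not in the class
        System.out.println(unescaped.matcher(plain).find());  // true: 'S' is in the class
        System.out.println(escaped.matcher(tagged).find());   // true: literal match
        System.out.println(escaped.matcher(plain).find());    // false
    }
}
```

This matches the explanation above: any outbound subject beginning with S, e, c, u, or r (such as "Status update") satisfied the unescaped filter, producing the blanket encryption.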

  • 2nd Mac - best practices using iPhoto on both?

    Hi -
    I just got a new MacBook and have an iMac that is still the "hub" of my photo library. It is, in fact, about a 180 GB iPhoto library. I know that I can't sync libraries between Macs (a shame; someone should come up with a way to do that, assuming they haven't already!), so I'm just looking for any best practices.
    I got the MacBook to be able to work on some photos while on the road; I can at least work on post-processing in Photoshop, etc. I'm thinking now that my best strategy is possibly to work with the images on my MacBook, importing them into its iPhoto library if desired, and then use my photo sharing service, Phanfare, to "sync" them. It requires me to download them on the other side and pull them again into the iPhoto library on the iMac.
    I don't use the MobileMe Gallery, but I suppose that would be another way to have access to them on the alternate computer?
    Any other best practices or suggestions?
    Thx!

    > So, if there are times when I'm not home to access my external drive, then going with the two libraries is the best solution, yes?
    Perhaps, but you can get very small and portable external HDs these days.
    > I'm not sure though if I should really make both a 180 GB iPhoto library, do you? It is a backup, true, but it seems like a chunk to move.
    But you only do it once. The first time. Thereafter you're simply updating the other with the changes.
    > At least maybe I could split off the pictures from 2009-2010 and have that library for both my iMac and the MacBook. I very rarely access anything before then (only if I need something specific), so I could access that via the iMac exclusively?
    That would be viable.
    I would maintain a +full Library+ on the Desktop, the mobile versions a smaller subset.
    > I'm sort of ruling out the one-library-on-the-external solution because it eliminates the possibility of being remote -
    As I said above, you can get tiny portable drives...
    > unless there is some swanky Login to My Computer or something that works with a Mac that can go remotely to my computer and then to my external drive.
    *_This_* might help.
    Regards
    TD

  • Idoc processing best practices - use of RBDAPP01 and RBDMANI2

    We are having performance problems in the processing of inbound idocs.  The message type is SHPCON, and transaction volume is very high.  I am a functional consultant, not an ABAP developer, but will try my best to explain our current setup.
    1)     We have a number of message variants for the inbound SHPCON message, almost all of which are set to trigger immediately upon receipt under the Processing by Function Module setting.
    2)      For messages that fail to process on the first try, we have a batch job running frequently using RBDMANI2.
    We have instances almost every day of the RBDMANI2 job getting stuck and running for a very long period of time.  We frequently have multiple SHPCON idocs coming in containing the same material number, and idocs frequently fail because the material in the idoc has become locked.  Once the stuck batch job is cancelled and the job starts running again normally, the materials unlock and the failed idocs begin processing.  The variant for the RBDMANI2 batch job is currently set with a packet size of 1 and without parallel processing enabled.
    I am trying to determine the best practice for processing inbound idocs such as this for maximum performance in a very high volume system.  I know that RBDAPP01 processes idocs in status 64 and 66, and RBDMANI2 is used to reprocess idocs in all statuses.  I have been told that setting the messages to trigger immediately in WE20 can result in poor performance.  So I am wondering if the best practice is to:
    1)     Set messages in WE20 to Trigger by background program
    2)     Have a batch job running RBDAPP01 to process inbound idocs waiting in status 64
    3)     Have a periodic batch job running RBDMANI2 to try and clean up any failed messages that can be processed
    I would be grateful if somebody more knowledgeable than myself on this can confirm the best practice for this process and comment on the correct packet size in the program variant and whether or not parallel processing is desirable.  Because of the material locking issue, I felt that parallel processing was not desirable and may actually increase the material locking problem.  I would welcome any comments.
    This appeared to be the correct area for this discussion based upon other discussions.  If this is not the correct area for this discussion, then I would be grateful if the moderator could re-assign this discussion to the correct area (if possible) or let me know the best place to post it.  Thank you for your help.

    Hi Bob,
    Not sure if there is an official best practice, but note 1333417 (Performance problems when processing IDocs immediately) does state that for high volumes, immediate processing is not a good option.
    I'm hoping that for SHPCON there is no dependency in the IDoc processing (i.e. it's not important if they're processed in the same sequence or not), otherwise it'd add another complexity level.
    In the past for the high volume IDoc processing we scheduled a background job with RBDAPP01 (with parallel processing) and RBDMANIN as a second step in the same job to re-process the IDocs with errors due to locking issues. RBDMANI2 has a parallel processing option, but it was not needed in our case (actually we specifically wouldn't want to parallel-process the errors to avoid running into a lock issue again). In short, your steps 1-3 are correct but 2 and 3 should rather be in the same job.
    Also I believe we had a designated server for the background jobs, which helped with the resource availability.
    As a side note, you might want to confirm that the performance issues are caused only by the high volume. An ABAPer or a Basis admin should be able to run a performance trace. There might be an inefficiency in the process that could be adding to the performance issue as well.
    Hope this helps.

  • Best practice using clusters to create queue/notifier/bundles ?

    I have in a block diagram a queue, notifier and several instances of bundle cluster
    that all use the same data structure.   There is a cluster typedef for the data structure.
    Of course, each of these objects (define queue, define notifier, bundle)
    need to know how the cluster is defined.
    What is considered best practice?
    1) create a dummy instance of the cluster everywhere the data structure
        definition is needed (and hide them all on the FP)
    2) create only one instance and wire it to all the places it is needed
       But there is no data flow on this wire -- it's only the cluster *definition*
       that is being used, so this would seem to clutter up the BD unnecessarily.
    3) Create only one control instance of the cluster and use local variables
        everywhere else the cluster definition is needed.  It's _value_ is never
        assigned or read so there's no issue with race conditions.
    4) Some other way?
    If you had to clean up someone else's code, how would you hope to
    see this handled?
    It occurred to me in the course of writing this that where I have
    "unbundle... code ... bundle" I could wire the original bundle to
    both "unbundle" and "bundle" -- but would this be too confusing
    and clutter the BD with an unnecessary wire?
    Thanks and Best Regards,
    -- J.
    Message Edited by JeffBuckles on 11-06-2008 03:17 PM

    Discovered a rather unpleasant side-effect of using the typedef constant cluster this way. Since I'm using the typedef cluster constant only to communicate the definition of the cluster--not for dataflow--I resize it to the same size and shape as a control terminal.  This is shown in the first image, clean_cluster.jpg.   I was also careful to set "Autosizing->None" on all instances of the constant.   However, when I updated the typedef, all instances of the constant resized!  That's dozens of instances across many different VIs.   Why was "Autosizing->None" not respected?  Is there a better way to do this so that my BD doesn't get messed up if I have to update a typedef such as this one?
    Thanks and Best Regards,
    J. 
    Attachments:
    clean_cluster.png 2 KB
    Cluster_Updated.png 3 KB

  • Best practice: Using break statement inside for loop

    Hi All,
    Is using a break statement inside a FOR loop a best practice or not?
    I have given some sample code:
    1. With a break statement:

        for (int i = 0; i < 10; i++) {
            if (i == 5) {
                break;
            }
        }

    2. With a boolean variable that decides whether to come out of the loop or not:

        boolean breakForLoop = false;
        for (int i = 0; i < 10 && !breakForLoop; i++) {
            if (i == 5) {
                breakForLoop = true;
            }
        }

    The example may be a stupid one, but I want to know which one is good?
    Thanks and Regards,
    Ashok kumar B.

    Actually, it's bad practice to use break anywhere other than in conjunction with a switch statement. Presumably, if you favour:

        boolean test = true;
        while (test) {
            test = foo && bar;
            if (test) {
                // ...
            }
        }

    over

        for (;;) {
            if (!(foo && bar)) break;
            // ...
        }

    then you also favour

        boolean test = foo && bar;
        if (test) {
            // ...
        }

    over

        if (foo && bar) {
            // ...
        }

    Or can you justify your statement with any example which doesn't cause more complexity, more variables in scope, and multiple assignments and tests?
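For what it's worth, a small runnable check (an illustrative sketch, not from the thread) confirms that the two variants from the question behave identically, both abandoning the loop once i reaches 5:

```java
public class BreakDemo {
    public static void main(String[] args) {
        // Variant 1: break out directly.
        int lastWithBreak = -1;
        for (int i = 0; i < 10; i++) {
            if (i == 5) {
                break;
            }
            lastWithBreak = i;
        }

        // Variant 2: boolean flag checked in the loop condition.
        int lastWithFlag = -1;
        boolean breakForLoop = false;
        for (int i = 0; i < 10 && !breakForLoop; i++) {
            if (i == 5) {
                breakForLoop = true;
            } else {
                lastWithFlag = i;
            }
        }

        System.out.println(lastWithBreak); // 4
        System.out.println(lastWithFlag);  // 4
    }
}
```

The trade-off the answer points at is visible here: the flag version needs an extra variable and an extra condition, while the break version ends the loop at the point where the decision is made.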

  • Tabular Model Best Practice - use of views

    Hi,
    I've read on some sites that using views to get the model data is a best practice.
    Is this related to fact tables only? Friendly names can be configured in the model, so views can be used to restrict data volume, but besides that, what are the other advantages?
    The model needs to know all the relations between tables, so using a single view that combines joins to present one big view with all the data isn't useful.
    Best regards

    Yes, I think most people would agree that it isn't helpful to "denormalise" multiple tables into a single view. The model understands the relationships between tables and queries are more efficient with the multiple smaller related tables.
    Views can be helpful in giving a thin layer of independence from the data. You might want to change data types (char() to date etc), split first/last names, trim irrelevant columns or simply isolate the model from future physical table changes.
    In my view, there aren't any hard and fast rules. Do what is pragmatic and cleanest.
    Hope that helps,
    Richard

  • Best Practices using trace?

    I've been using trace() in all my functions so that when I debug I can read which functions are running and in what order. I like it, but is there a more efficient way, or a best practice, for tracing? I googled "Best Practices Trace() Flex" but found nothing.

    Trace() is supposed to be used for debug-time diagnostics.  It gets optimized out for release builds as the production players don’t output to a console for performance reasons.  Flex has a logging subsystem via mx.logging.* package that can be used in production.  Third parties have created things like this as well.  These subsystems require “annotation”, putting in code that adds logging statements.  If you forget one or are using a library that doesn’t support it, you don’t get information from it.
    Flash.trace.Trace is different from the trace() method.  It is an “undocumented” feature of the player that calls you back for every function call and/or line of code.  Annotation is therefore not required, but it still won’t work for libraries that don’t have debug information in them.
    We are looking into subsystems that don’t require annotation and work even in production.

  • Best Practices using Pre-configurator

    Hi
    Is anybody aware of doing QM configuration as per Best Practices through the Pre-configurator?

    Hi
    By using the SAP library you can configure the QM module. Basically, you have to know the Quality Management process.
    Regards
    M B raju

  • Command Authorization Config best practice using ACS

    Hi
    Are there any best practices for configuring command authorization (for router/switch/ASA) in CS-ACS? To be specific, are there any best practices for configuring authorization for the set of commands allowed for L1, L2, and L3 support levels?
    Regards
    V Vinodh.

    Vinodh,
    The main thing here is to ensure that we have a backup/fall-back method configured for command authorization, in order to avoid a lockout situation, or to do wr mem once you are sure the configs are working fine.
    Please check this link,
    http://www.cisco.com/en/US/products/sw/secursw/ps2086/products_configuration_example09186a00808d9138.shtml
    Regards,
    ~JG
    Do rate helpful posts
