Logging Best Practices in J2EE

Hi,
I've been struggling with Apache Commons Logging and Log4J class loading problems between module deployments in the Sun App Server. I've also had the same problems with other app servers.
What is the best practice for logging in J2EE?
I think it may be java.util.logging. But what is the best practice for providing a different logging config (i.e. levels for classes and output) for each deployed module, and how would you structure that in the EAR?
Thanks in advance.
Graham

I find that java.util.logging works fine. For configuring the log levels I use a LifeCycle module that sets up all my levels and handlers. That way I can set up server.policy to allow only the LifeCycle module jar to configure logging (with a codebase grant), while no other normal module can.
The LifeCycle module gets its properties as event data with the INIT event and configures the logging on the STARTUP event.
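For illustration, here is a rough sketch of what such a LifeCycle module can look like. This assumes the com.sun.appserv.server lifecycle API; the class name and the property format (logger name = level) are made up, and handler setup is elided:

import java.util.Enumeration;
import java.util.Properties;
import java.util.logging.Level;
import java.util.logging.Logger;
import com.sun.appserv.server.LifecycleEvent;
import com.sun.appserv.server.LifecycleListener;
import com.sun.appserv.server.ServerLifecycleException;

public class LoggingConfigModule implements LifecycleListener {

    private Properties levels;

    public void handleEvent(LifecycleEvent event) throws ServerLifecycleException {
        if (event.getEventType() == LifecycleEvent.INIT_EVENT) {
            // the module's properties (e.g. "com.example.myapp=FINE") arrive as event data
            levels = (Properties) event.getData();
        } else if (event.getEventType() == LifecycleEvent.STARTUP_EVENT) {
            // apply a level to each configured logger; handlers would be added here too
            for (Enumeration e = levels.propertyNames(); e.hasMoreElements();) {
                String loggerName = (String) e.nextElement();
                Logger.getLogger(loggerName).setLevel(Level.parse(levels.getProperty(loggerName)));
            }
        }
    }
}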
Hope this helps.

Similar Messages

  • Looking for best practice on J2EE development environment

    Hi,
    We are starting to develop with J2EE and are looking for best practices for a J2EE development environment. Our concern is mainly code sharing and deployment.
    Thanks, Charles

    To support "code sharing" you need an integrated source code control system. Several options are out there, but CVS (https://www.cvshome.org/) is a nice choice; it's completely free and it runs on Windows, Linux, and most UNIX variants.
    Your next decision is on the IDE and application server. These usually come from a single "source". For instance, you can choose Oracle's JDeveloper and deploy to Oracle Application Server; or go with the free NetBeans IDE and Jakarta Tomcat; or IBM's WebSphere Studio and their application server. Selection of IDE and app server will likely result in heated debates.

  • Real-time logging: best practices and questions?

    I have 4 pairs of DS 5.2p6 servers in MMR mode on Windows 2003.
    Each server is configured with the default setting of "nsslapd-accesslog-logbuffering" enabled, and the log files are stored on a local file system, then later centrally archived thanks to a log sender daemon.
    I've now a requirement from a monitoring tool (used to establish correlations/links/events between applications) to provide the directory
    server access logs in real time.
    At first glance, each directory generates about 1.1 MB of access log per second.
    1)
    I'd like to know if there are known best practices / experiences in such a case.
    2)
    Also, should I upgrade my DS servers to benefit from any log-management-related features? Should I think about using an external disk sub-system (SAN, NAS ...)?
    3)
    In DS 5.2, what's the default access log buffering policy: is there a maximum buffer size and/or time limit before flushing to disk? Is it configurable?

    Usually log buffering should be enabled. I don't know of any customers who turn it off. Even if you do, I guess it should be after careful evaluation in your environment. AFAIK, there is no configurable limit for buffer size or time limit before it is committed to disk.
    Regarding faster disks, I had the bright idea that you could create a ramdisk and set the logs to go there instead of disk. Let's say the ramdisk is 2 GB max in size and you receive about 1 MB/sec in writes. Say max-log-size is 30 MB. You can schedule a job to run every minute that copies over the newly rotated file(s) from ramdisk to your filesystem and then sends them over to logs HQ. If the server does crash, you'll lose up to a minute of logs. Of course, the data disappears after reboot, so you'll need to manage that as well. Sounds like fun to try but may not be practical.
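    If anyone wants to try that, here is a rough sketch of the one-minute copy job in Java. The paths are made up, and I'm assuming the live access log is simply named "access" while rotated files get timestamped names; a cron'd shell script would do equally well:

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.nio.channels.FileChannel;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class LogShipper {

        // renameTo() cannot cross filesystems, so copy the bytes and delete the source
        static void moveOffRamdisk(File src, File dst) throws Exception {
            FileChannel in = new FileInputStream(src).getChannel();
            FileChannel out = new FileOutputStream(dst).getChannel();
            try {
                in.transferTo(0, in.size(), out);
            } finally {
                in.close();
                out.close();
            }
            src.delete();
        }

        public static void main(String[] args) {
            final File ramdisk = new File("/ramdisk/logs");   // hypothetical paths
            final File archive = new File("/export/logs");
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(new Runnable() {
                public void run() {
                    File[] files = ramdisk.listFiles();
                    if (files == null) return;
                    for (File f : files) {
                        // the server keeps the live log open; only ship rotated files
                        if (!f.getName().equals("access")) {
                            try {
                                moveOffRamdisk(f, new File(archive, f.getName()));
                            } catch (Exception ex) {
                                ex.printStackTrace();   // leave the file for the next run
                            }
                        }
                    }
                }
            }, 1, 1, TimeUnit.MINUTES);
        }
    }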
    Ramdisk on Windows
    [http://msdn.microsoft.com/en-us/library/dd163312.aspx]
    Ramdisk on Solaris
    [http://wikis.sun.com/display/BigAdmin/Talking+about+RAM+disks+in+the+Solaris+OS]
    [http://docs.sun.com/app/docs/doc/816-5166/ramdiskadm-1m?a=view]
    I should ask, how realtime should this log correlation be?
    Edited by: etst123 on Jul 23, 2009 1:04 PM

  • Redo log best practice for performance - ASM 11g

    Hi All,
    I am new to the ASM world.... can you please give your valuable suggestions on the topic below?
    What is the best practice for online redo log files? I read somewhere that Oracle recommends having only two disk groups: one for all the DB files, control files and online redo log files, and the other disk group for recovery files, like multiplexed redo files, archive log files, etc.
    Will there be any performance improvement in making a different diskgroup for the online redo logs (separate from datafiles, control files, etc.)?
    I am looking for an Oracle document on the best practice for redo logs (performance) on ASM.
    Please share your valuable views on this.
    Regards.

    ASM is only a filesystem.
    What really counts for I/O performance is the storage design, i.e. the RAID level used, array sharing, hard disk RPM, and so on.
    ASM itself is only a layer that reads and writes ASM disks, which means that if your storage design is OK, ASM will be OK. (Of course there are best practices for ASM, but storage must come first.)
    E.g., is there any performance improvement in making a different diskgroup?
    It depends... and this leads to another question.
    Are the LUNs on the same array?
    If yes, the performance will end up at the same point, no matter whether you have one diskgroup or many.
    Comment added:
    Comparing ASM to Filesystem in benchmarks (Doc ID 1153664.1)
    Your main concern should be how many IOPS, and how much latency and throughput, the storage can give you. Based on these values you will know if your redo logs will be fine.

  • Hotfix Management | Best Practices | WCS | J2EE environment

    Hi All,
    Trying to explore some best practices around hotfix management in a J2EE environment. After some struggle, we managed to handle the tracking of individual hotfixes using one of our home-grown tools. However, the issue remains of how to manage an 'automated' build of these hotfixes, rather than doing the same manually, as we currently do.
    Suppose we need to hotfix a particular jar file on a production environment, I would need to understand how to build 'just' that particular jar. I understand we can label the related code (which in this case could be just a few java files). Suppose this jar contains 10 files out of which 2 files need to be hotfixed. The challenge is to come up with a build script which builds -
    - ONLY this jar
    - the jar with 8 old files and 2 new files.
    - the jar using whatever dependent jars are required.
    - the hotfix build script needs to be generic enough to handle the hotfix build of any jar in the system.
    Pointers, especially ones in line with a WCS environment, would be very much appreciated!
    Regards,
    Mrinal Mukherjee

    Moderator Action:
    This post has been moved from the SysAdmin Build & Release Engineering forum
    To the Java EE SDK forum, hopefully for closer topic alignment.
    To the O.P.:
    I don't think device driver build/release engineering is what you were intending.
    Additionally, your partial post that was accidentally created as a duplicate to this one
    has been removed before it confuses anyone.

  • Console logging best practices?

    All,
    I just had a guest "LDom" crash. There was no system crash dump recovered and nothing in /var/adm/messages. Is there a best practice for sending virtual console output to some sort of log server?
    Any insight would be appreciated.
    Regards,
    Gene

    Could you use something like this: on the control domain, start screen (the terminal multiplexer) with the following options and arguments: screen -d -m sh -c "telnet <ldom's console> | tee -a <path to the log file>". That should create a screen session holding a telnet session to the ldom's console, which writes its output to a file.
    For example: screen -d -m sh -c "telnet localhost:5001 | tee -a /var/tmp/ldom1_console.log"
    Remember to check running screens with the screen -ls command.

  • ICM Trace Log best practices?

    Hello,
    Partner is asking if we have best practice guidelines/documentation for setting Trace Log file sizes on ICM 7.2.X
    thanks!

    Actually we did open a TAC case. Cisco was not able to make a recommendation because the issue is intermittent and we cannot leave tracing on indefinitely. Because of this they instead recommended installing a packet sniffer. However, our network support team came back with a similar response: they cannot leave packet sniffing turned on indefinitely.
    This is a difficult situation: we cannot reproduce the issue and we don't know when it will happen again, yet we cannot take any proactive action to ensure that we capture the logs on the next occurrence.
    So, has anyone else been through something like this?
    Thanks!
    Joe

  • Best Practice: A J2EE Blue-Print for a Typical Web App

    Consider a typical synchronous Struts-based web application which does a simple DB search and post. What are some of the main patterns and components that should be used if following the "industry best practices"?
    Does the following flow seem accurate?
    A Struts Action creates a TransferObject and passes it to a Business Delegate. The Delegate finds the appropriate BusinessObject, the BusinessObject uses the Data Access Object... the CRUD operation happens and the result is sent back to the Action in the same TransferObject.
    Which of these components need an interface?
    What's the best way for these components to interact with each other (factory, etc.)?
    Message was edited by:
    kmkiani

    There are 3 tiers in a Java EE application (Presentation, Business, Integration).
    The BusinessDelegate in this scenario would be a Presentation-tier business delegate. This guy would interact with a Session Facade who lives on the Business-tier. The SessionFacade is the abstraction on the Business-tier and the Business Delegate is the abstraction on the Presentation-tier. It is these guys that have direct communication. This design enables low coupling between the actual implementations of each area. If done properly, you could go from EJB to Web Service to POJO business models without ever having to change anything in the Presentation-tier.
    These object-oriented design patterns are primarily for Enterprise applications with extensive Quality-of-Service requirements.
    In your scenario, the Presentation-tier would contain a MVC-based web application, i.e. Struts. The business model and business/domain requirements would be implemented in the Business-tier.
    Presentation Tier - Struts Web Application
    Business Tier - (EJB | POJO | WEB SERVICES) Application
    Integration Tier - (Relational Database | File System | XML Database | EIS)
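    To make the delegate/facade split concrete, here is a minimal, self-contained sketch. All class names are hypothetical, and the facade is a plain POJO here purely for illustration:

    import java.io.Serializable;

    public class DelegateExample {

        // Transfer Object: carries data between the tiers
        static class CustomerTO implements Serializable {
            String id;
            String name;
        }

        // Business-tier abstraction: the Session Facade contract
        interface CustomerFacade {
            CustomerTO findCustomer(String id);
        }

        // one implementation; could equally be an EJB or a web-service client
        static class PojoCustomerFacade implements CustomerFacade {
            public CustomerTO findCustomer(String id) {
                CustomerTO to = new CustomerTO();   // a real facade would call the DAO here
                to.id = id;
                to.name = "Example";
                return to;
            }
        }

        // Presentation-tier abstraction: the Business Delegate
        static class CustomerDelegate {
            private final CustomerFacade facade;

            CustomerDelegate() {
                // a factory or service locator would pick the implementation, so
                // swapping EJB/POJO/web service never touches the Struts Action
                this.facade = new PojoCustomerFacade();
            }

            CustomerTO findCustomer(String id) {
                return facade.findCustomer(id);
            }
        }

        // what a Struts Action's execute() would do with it
        public static void main(String[] args) {
            CustomerTO to = new CustomerDelegate().findCustomer("42");
            System.out.println(to.name);
        }
    }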

  • Logging best practices

    Hi!
    I have a few Java apps and need to implement some common logging functionality. I need all the applications to log to the same destination and save the info for monitoring/analysis.
    I considered using Log4J and Apache Commons Logging, but I need to be able to pass the name of the application as a parameter to the logging device so that you can easily track the origin. Some other parameters are required as well, such as user, timestamp, etc.
    Which is the best way to go?
    Thanks,
    Iggy

    well, of course every application will/could have its own log file but for ease of maintenance we need a single place where to save the information

    That is a bad idea. You do know that your logging statements might be interleaved in the file? E.g. first you get a few characters from one statement (application) and then a few characters from another statement.
    You instead have to have a logging server and let the applications log to that server (using TCP or UDP). The server should then write to the log file.
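    For what it's worth, java.util.logging can do the client half of this out of the box: a SocketHandler ships each record over TCP to a central collector, and the logger name can carry the application name. A sketch, with a made-up host and port:

    import java.io.IOException;
    import java.util.logging.Logger;
    import java.util.logging.SimpleFormatter;
    import java.util.logging.SocketHandler;

    public class CentralLogging {

        static Logger getAppLogger(String appName) throws IOException {
            // one TCP connection from this JVM to the central log server
            SocketHandler handler = new SocketHandler("loghost.example.com", 9999);
            handler.setFormatter(new SimpleFormatter());
            // the logger name carries the application name, so the collector
            // can tell the origin of each record apart
            Logger logger = Logger.getLogger(appName);
            logger.addHandler(handler);
            return logger;
        }

        public static void main(String[] args) throws IOException {
            Logger log = getAppLogger("billing-app");
            log.info("user=iggy action=login");   // extra fields go in the message or a custom Formatter
        }
    }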
    Kaj
    Message was edited by:
    kajbj

  • Weblogic 8.1 Logging Best Practices?

    I have an application that uses WLI. We have a set of interfaces and we would like each interface to output to its own log file. In case you want to know how we define an interface: basically an interface has a 1-to-1 relationship to a jpd file. I am really looking for the way to implement this that will be easiest to scale. Thanks for any help you can provide. Any code examples would be very helpful!!!
    Message was edited by manderj at Dec 6, 2004 5:35 AM

    Does anyone see value in the logging APIs supporting this more performant logging style?
    curt
    public class MyClass {
        // class-level shared logger instead of one new'ed for each method call
        protected static NonCatalogLogger myLogger = new NonCatalogLogger("MyApplication");

        public void myMethod() {
            if (myLogger.isInfoLevel()) { // needed method !!
                // the expense of String creation and other costly operations is avoided
                myLogger.info("Application started. Foo=" + foo.getExpensiveOperation());
            }
        }
    }
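    For comparison, java.util.logging already exposes exactly that kind of guard via Logger.isLoggable() (and Log4J has isInfoEnabled() for the same purpose). A small sketch:

    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class GuardedLogging {

        private static final Logger LOG = Logger.getLogger("MyApplication");

        public void myMethod() {
            // the guard means string concatenation and the expensive call
            // only happen when INFO is actually enabled
            if (LOG.isLoggable(Level.INFO)) {
                LOG.info("Application started. Foo=" + expensiveOperation());
            }
        }

        private String expensiveOperation() {
            return "bar";   // stands in for a costly computation
        }
    }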

  • Redo / Archive Log Best Practices?

    I am a newb when it comes to Oracle administration. The problem is that our "DBA" knows even less about it.
    I'd like to get some advice/recommendations on redo and archive logs.
    We are currently running:
    Windows 2000 Server
    Oracle 8.1.7
    Oracle DB is ~50gb
    ~250 users
    Database is under fairly heavy load, as it is used to run the company's primary accounting software.
    Recently when reviewing back up procedures, I realized that our "DBA" did not have archive logging turned on. This obviously is a problem. Our DBA was relying solely on a data dump every night that was then backed up to tape. I was forced to take care of this, as the "DBA" didn't have any knowledge on this subject. I got archive logging turned on, changed the init file, etc. etc.
    Where the problem comes in, and where my questions come from... The database was writing archive logs every ~2-3 mins, sometimes more often depending on the database load. Oracle was configured to use 3 redo logs @ ~105mb each. The server was getting "Archive process error: Oracle instance xxxx - Cannot allocate log, archival required." I changed the redo logs to run 5 logs at ~200mb each. I also added a SCSI drive to the server for the sole purpose of storing the archive logs. The log buffer was set at 64k; I upped this to 1mb.
    My specific questions are:
    - How fast should logs be written?
    - Should I up the number of redo logfiles, or up the size of each?
    - Should I be writing the redo logs to multiple destinations?
    - Should I archive to multiple destinations? If so, would archiving to a network drive lag the archive process, and kill the bandwidth to the server/database since it would be writing 200mb+ files to the network every few minutes?
    - What are some recommended file size minimums / maximums under the current environment listed above?
    - Other tips/suggestions?
    Any help would be appreciated.
    Thanks.

    hi,
    have you configured LOG_ARCHIVE_START = TRUE?
    How fast should logs be written? Should I up the number of redo logfiles, or up the size of each?
    - Should I be writing the redo logs to multiple destinations?
    - Should I archive to multiple destinations? If so, would archiving to a network drive lag the archive process, and kill the bandwidth to the server/database since it would be writing 200mb+ files to the network every few minutes?
    - What are some recommended file size minimums / maximums under the current environment listed above?
    If you want to keep the time between failures to a minimum, keep your redo log files smaller; but generally you should make them good and big. In your situation I think it should be:
    LOG_BUFFER = 104857600 -- in init.ora (100MB)
    5 redo log files at multiple locations, each of size 400 MB.
    It is recommended that you don't keep your archives on a networked location, as it will definitely overload network traffic as well as slow down archival speed.
    Regards
    Muhammad Umar Liaquat

  • Best Practice to Atomic Read and Write a Field In Database

    I come from a Java desktop application background. May I know what the best practice is in J2EE to atomically read and write a field in a database? Currently, here is what I do:
    // In Servlet.
    synchronized (private_static_final_object) {
        int counter = read_counter_from_database();
        counter++;
        write_counter_back_to_database(counter);
    }
    However, I suspect the above method will not work all the time.
    My observation is that, if I have several web requests at the same time, I am executing code within a single instance of the servlet, using different threads. The above method shall work, as the different request threads are all referring to the same "private_static_final_object".
    However, my guess is that a "single instance of servlet" is not guaranteed. After some time span, the previous instance of the servlet may be destroyed, with another new instance of the servlet being created.
    I also came across [http://code.google.com/appengine/docs/java/datastore/transactions.html] in JDO. I am not sure whether it is going to solve the problem.
    // In Servlet.
    Transaction tx = pm.currentTransaction();
    tx.begin();
    int counter = read_counter_from_database();   // Line 1
    counter++;                                    // Line 2
    write_counter_back_to_database(counter);      // Line 3
    tx.commit();
    Does this code guarantee that only after Thread A finishes executing Line 1 through Line 3 atomically can Thread B continue to execute Line 1 through Line 3 atomically?
    As I do not wish the following situation happen.
    (1) Thread A read counter from Database as 0
    (2) Thread A increment counter to 1
    (3) Thread B read counter from Database as 0
    (4) Thread A write counter as 1 to database
    (5) Thread B increment counter to 1
    (6) Thread B write counter as 1 to database
    What I wish is
    (1) Thread A read counter from Database as 0
    (2) Thread A increment counter to 1
    (4) Thread A write counter as 1 to database
    (3) Thread B read counter from Database as 1
    (5) Thread B increment counter to 2
    (6) Thread B write counter as 2 to database
    Thank you.
    Edited by: yccheok on Oct 27, 2009 8:39 PM

    This is my understanding of the issue (you should research it further on your own to get a better understanding):
    I suggest you use local variables (i.e., defined within a function) and keep away from static variables. Local variables are thread safe. If you call functions within functions, it's still thread safe. If you read or write one record in a database using SQL, it's thread safe (you don't need a transaction). If you read/write multiple tables and/or records, you probably need a transaction. Servlets are thread safe.
    You usually don't need the 'synchronized' keyword anywhere unless you have a function updating/reading a static variable and therefore want to ensure only one user is accessing the static variable at a time. Do the same if you are writing to some shared resource (such as a file, a variable in application scope or session scope, or a resource that everyone uses, such as an email server): you don't want more than one person at a time to write to it. Note the database is one of those resources that is handled by transactions rather than the synchronized keyword (the synchronized keyword applies to your application only, not other applications someone is running, whereas the transaction ensures all applications are locked out while you update those records in the database).
    By the way, if you have a static variable, you should have one and only one (synchronized) function that updates it, which everyone uses. If you have more than one synchronized function that updates it, it's probably not thread safe.
    An example of a static variable you would use is a DataSource object (to obtain your database connections). You only need one connection pool in your application and you access it via the DataSource variable.
    If you're unsure your code is thread safe, you can create two separate threads that call the same block of functions repeatedly to ensure it works as expected.
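    To add to that: for the counter in the original question, the usual approach is to push the whole read-modify-write into the database, so no servlet-side synchronization is needed at all. A minimal JDBC sketch (the counters table and its columns are made up):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    public class CounterDao {

        private final DataSource ds;   // one pool for the whole application

        public CounterDao(DataSource ds) {
            this.ds = ds;
        }

        // the database serializes the read-modify-write, so this is safe across
        // threads, servlet instances, and even multiple JVMs
        public int increment(String counterName) throws SQLException {
            Connection con = ds.getConnection();
            try {
                con.setAutoCommit(false);
                PreparedStatement up = con.prepareStatement(
                        "UPDATE counters SET value = value + 1 WHERE name = ?");
                up.setString(1, counterName);
                up.executeUpdate();   // the row stays locked until commit
                PreparedStatement sel = con.prepareStatement(
                        "SELECT value FROM counters WHERE name = ?");
                sel.setString(1, counterName);
                ResultSet rs = sel.executeQuery();
                rs.next();
                int value = rs.getInt(1);
                con.commit();
                return value;
            } catch (SQLException e) {
                con.rollback();
                throw e;
            } finally {
                con.close();
            }
        }
    }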

  • Best practice for distributing/releasing J2EE applications.

    Hi All,
    We are developing a J2EE application and would like some information on the best
    practices to be followed for distributing/releasing J2EE applications, in general.
    In particular, the dilemma we have is centered around the generation of stub, skeleton
    and additional classes for the application.
    Most app servers can generate the required classes while deploying the EJBs in the application, i.e. at install time, while some (BEA WebLogic and IBM WebSphere are two that we are aware of) allow these classes to be generated before installation time, in which case the .ear file containing the additional classes is the one that is uploaded.
    For instance, say we have assembled the application "myapp.ear". There are two ways
    in which the classes can be generated. The first is using 'ejbc' ( assume we are
    using BEA Weblogic ), which generates the stub, skeleton and additional classes for
    the application and returns the file, say, "Deployable_myapp.ear" containing all
    the necessary classes and files. This file is the one that is then installed. The
    other option is to install the file "myapp.ear" and let the Weblogic App. server
    itself, generate the required classes at the installation time.
    If the first way, of 'pre-generating' the stubs, is followed, does it require us to separately generate the stubs for each version of the App Server that we support? I.e., if we generate a deployable file having the required classes using the 'ejbc' of WebLogic Ver5.1, can the same file be installed on WebLogic Ver6.1, or do we have to generate a separate file?
    If the second method, of 'install-time generation' of stubs, is used, what is the nature/magnitude of the risk that we are taking in terms of the failure of the installation?
    Any links to useful resources as well as comments/suggestions will be appreciated.
    TIA
    Regards,
    Aasif

    It's much easier to distribute schema/data from an older version to a newer one than the other way around. Nearly all SQL Server deployment features support database version upgrade, and these include the "Copy Database" wizard, BACKUP/RESTORE, detach/attach, script generation, Microsoft Sync Framework, and a few others.
    EVEN if you just want to distribute schemas, you may want to distribute the entire database, and then truncate the tables to purge data.
    Backing up and restoring your database is by far the most RELIABLE method of distributing it, but it may not be practical in some cases because you'll need to generate a new backup every time a schema change occurs - though not if you already have an automated backup/maintenance routine in your environment.
    As an alternative, you can use the Copy Database functionality in SSMS, although it may prove unstable in some situations, especially if you are distributing across multiple subnets and/or domains. It will also require you to purge data if/when applicable.
    Another option is to detach your database, copy its files, and then attach them in both the source and destination instances. It will generate downtime for your detached databases, so there are better methods for distribution available.
    And then there is the previously mentioned method of generating scripts for the schema, and then using an INSERT statement or the import data wizard available in SSMS (which is very practical and implements an SSIS package internally that can be saved for repeated executions). It works fine, and while not as practical as the other options, it is the best way for distributing databases when their version is being downgraded.
    With all this said, there is no single "best practice" for this. There are multiple features, each offering their own advantages and downfalls, which allows them to align to different business requirements.

  • What are best practices for packaging and deploying J2EE apps to iAS?

    We've been running a set of J2EE applications on a pair of iAS SP1b for about a year and it has been quite stable.
    Recently, however, we have had a number of LDAP issues, particularly when registering and unregistering applications (registering ear files sometimes fails the 1st time but may work the 2nd time). Also, we've noticed very occasionally that old versions of classes sometimes find their way onto our machines.
    What is considered to be best practice in terms of packaging and deployment, specifically:
    1) Packaging - using the deployTool that comes with iAS6 SP1b to package is a big manual task, especially when you have 200+ jsp files. Are people out there using this or are they scripting it with a build tool such as Ant?
    2) Deploying an existing application to multiple iAS instances. Are you guys unregistering the old application and then re-registering the new application? Are you shutting down iAS whilst doing the deployment?
    3) Deploying ear files can take 5 to 10 mins, is this normal?
    4) In a clustered scenario where HTTPSession is shared what are the consequences of doing deployments to data stored in session?
    thanks in advance for your replies
    Owen

    You may want to consider upgrading your application server environment to a newer service pack. There are numerous enhancements involving the deployment tool and the run-time layout of your application that make clear where your application is loading its files from.
    If you have a long-running application server environment, with lots of deployments under your belt, you might start to notice slowdowns in deployment and kjs start time. Generally this is due to garbage accumulating in your iAS registry.
    You can do several things to resolve this. The most complete solution is to reinstall the application server. This will guarantee a clean LDAP registry. Of course you've got to re-establish your configurations and redeploy your applications. When done, back up your application server install space with the application server and directory server off. You can use this backup to return to a known configuration at some future time.
    For the second method: BE CAREFUL - BACKUP FIRST
    There is a more exhaustive solution that involves examining your deployed components to determine the active GUIDs. You then search the NameTrans section of the registry for Applogic Servlet * and Bean * entries that represent your previously deployed components but are not represented in the set of active GUIDs. Record these older GUIDs, remove them from ClassImp and ClassDef. Finally remove the older entries from NameTrans.
    Best practices for deployment depend on your particular environmental needs. Many people utilize Ant as a build tool. In later versions of the application server, complete Ant scripts are included that address compiling, assembly and deployment. Ant 1.4 includes iAS-specific targets and general J2EE targets. There are iAS-specific targets that can be utilized with the 1.3 version. Specialized build targets are not required, however, to deploy to iAS.
    Newer versions of the deployment tool allow you to specify that JSPs are not to be registered automatically. This can be significant if deployment times lag. Registered JSPs, however, benefit more fully from the services that iAS offers.
    2) In general it is better to undeploy and then redeploy. However, if you know that you're not changing GUIDs, recreating an existing application with new GUIDs, or removing registered components, you may skip the undeploy phase.
    If you shut down the KJS processes during deployment you can eliminate some additional workload on the LDAP server, which really gets pounded during deployment. This is because the KJS processes detect changes and do registry loads to repopulate their caches. This can happen many times during a deployment and does not provide any benefit.
    3) Deploying can be a lengthy process. There have been improvements in that performance from service pack to service pack, but unfortunately you won't see dramatic drops in deployment times.
    One thing you can do to reduce deployment times is to understand the type of deployment. If you have not manipulated your deployment descriptors in any way, then there is no need to deploy. Simply drop your newer bits into the run-time space of the application server. In later service packs this means exploding the package (ear, war, or jar) into the appropriate subdirectory of the APPS directory.
    4) If you've changed the classes of objects that have been placed in HTTPSession, you may find that you can no longer utilize those objects. For that reason, it is suggested that objects placed in session be kept as simple as possible in order to minimize this effect. In general, however, it is not a good idea to change a web application during the life span of a session.

  • Best practice for OSB logging: report handling or Java code using publish

    Hi all,
    I want to do common error handling in OSB. I did two implementations, described below, and just want to know which one is the best practice.
    1. Using a custom report handler: whenever we want to log, we use the report action of OSB, which calls a custom Java class that logs the data into the DB.
    2. Using a plain Java class: creating a Java class, published to the proxy, which calls this Java class and does the logging.
    Which is the best practice, and what are the pros and cons?
    Thanks
    Phani

    Hi Anuj,
    Thanks for the links, they have been helpful.
    I understand now that OSR is only meant to contain Proxy services. The synch facility exists between OSR and OSB so that, in case you are not using OER, you can publish Proxy services to OSR from OSB. What I didn't understand was why there was an option to publish a Proxy service back to OSB and why it ended up as a Business service. From the link you provided, it is mentioned that this case is for multi-domain OSBs, where one OSB wants to use the other OSB's service. It is clear now.
    Some more questions:
    1) In the design-time, in OER no Endpoints are generated for Proxy services. Then how do we publish our design-time services to OSR for testing purposes? What is the correct way of doing this?
    Thanks,
    Umar
