WebLogic 8.1 Logging Best Practices?

I have an application that uses WLI. We have a set of interfaces, and we would like each interface to write to its own log file. In case you are wondering how we define an interface: an interface maps one-to-one to a JPD file. I am looking for the cleanest way to implement this that will also be easy to scale. Thanks for any help you can provide; code examples would be very welcome!

Does anyone see value in the logging APIs supporting this more performant logging style?
curt
import weblogic.logging.NonCatalogLogger;

public class MyClass {
    // Class-level shared logger instead of new'ing one for each method call.
    protected static NonCatalogLogger myLogger = new NonCatalogLogger("MyApplication");

    public void myMethod() {
        if (myLogger.isInfoLevel()) {   // needed method -- it does not exist today!
            // The expense of string creation and other costly operations is avoided
            // when INFO logging is disabled. (foo is whatever expensive-to-render
            // object you want to log.)
            myLogger.info("Application started. Foo=" + foo.getExpensiveOperation());
        }
    }
}
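
Not WLI-specific, but one straightforward way to give each interface its own file, given the 1:1 mapping to a JPD, is to key a java.util.logging Logger and FileHandler off the interface name. The class below is only a hedged sketch of that idea (the class name, the "logs/" directory, and the logger namespace are made up); log4j with one FileAppender per logger would work the same way, and new interfaces scale by simply passing a new name.

import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public final class InterfaceLoggers {

    private InterfaceLoggers() { }

    // Returns a logger dedicated to one interface (one JPD). Each name gets its
    // own file, e.g. logs/OrderInterface.log, assuming the logs/ directory exists.
    public static synchronized Logger forInterface(String interfaceName) {
        Logger logger = Logger.getLogger("interfaces." + interfaceName);
        if (logger.getHandlers().length == 0) {          // configure only once per name
            try {
                FileHandler handler = new FileHandler("logs/" + interfaceName + ".log", true);
                handler.setFormatter(new SimpleFormatter());
                logger.addHandler(handler);
                logger.setUseParentHandlers(false);      // keep entries out of the server log
            } catch (IOException e) {
                throw new RuntimeException("Could not open log file for " + interfaceName, e);
            }
        }
        return logger;
    }
}

A JPD would then just call InterfaceLoggers.forInterface("OrderProcessing").info("message received"), and every interface writes to its own file without further configuration.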

Similar Messages

  • Real time logging: best practices and questions ?

    I have 4 pairs of DS 5.2p6 servers running in MMR mode on Windows 2003.
    Each server is configured with the default setting of "nsslapd-accesslog-logbuffering" enabled, and the log files are stored on a local file system, then later centrally archived thanks to a log sender daemon.
    I've now a requirement from a monitoring tool (used to establish correlations/links/events between applications) to provide the directory
    server access logs in real time.
    At first glance, each directory server generates about 1.1 MB of access log per second.
    1)
    I'd like to know if there are known best practices / experiences in such a case.
    2)
    Also, should I upgrade my DS servers to benefit from any log-management-related features? Should I think about using an external disk subsystem (SAN, NAS, ...)?
    3)
    In DS 5.2, what's the default access logbuffering policy : is there a maximum buffer size and/or time limit before flushing to disk ? Is it configurable ?

    Usually log-buffering should be enabled. I don't know of any customers who turn it off. Even if you do, I guess it should be after careful evaluation in your environment. AFAIK, there is no configurable limit for buffer size or time limit before it is committed to disk.
    Regarding faster disks, I had the bright idea that you could create a RAM disk and set the logs to go there instead of disk. Let's say the RAM disk is 2 GB max in size and you receive about 1 MB/sec in writes, and say max-log-size is 30 MB. You can schedule a job to run every minute that copies the newly rotated file(s) from the RAM disk to your filesystem and then sends them over to logs HQ; a rough sketch of that copy loop follows below. If the server does crash, you'll lose up to a minute of logs. Of course, the data disappears after reboot, so you'll need to manage that as well. Sounds like fun to try but may not be practical.
    Ramdisk on windows
    [http://msdn.microsoft.com/en-us/library/dd163312.aspx]
    Ramdisk on solaris
    [http://wikis.sun.com/display/BigAdmin/Talking+about+RAM+disks+in+the+Solaris+OS]
    [http://docs.sun.com/app/docs/doc/816-5166/ramdiskadm-1m?a=view]
    I should ask, how realtime should this log correlation be?
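    A rough sketch of that per-minute copy job, assuming the rotated access logs sit in a RAM-disk directory and just need to be moved to persistent storage (the paths and file naming are made up; in practice a one-line cron job with mv would do the same thing):

    import java.io.File;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class RotatedLogShipper {

        public static void main(String[] args) {
            final Path ramDisk = Paths.get("/ramdisk/ds/logs");      // hypothetical paths
            final Path archive = Paths.get("/var/ds/log-archive");

            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(new Runnable() {
                public void run() {
                    File[] files = ramDisk.toFile().listFiles();
                    if (files == null) {
                        return;
                    }
                    for (File f : files) {
                        // Skip the file the server is still writing to; only move
                        // rotated files (here assumed to carry a timestamp suffix).
                        if (f.getName().equals("access")) {
                            continue;
                        }
                        try {
                            Files.move(f.toPath(), archive.resolve(f.getName()),
                                       StandardCopyOption.REPLACE_EXISTING);
                        } catch (IOException e) {
                            e.printStackTrace();   // a real job would alert on this
                        }
                    }
                }
            }, 1, 1, TimeUnit.MINUTES);
        }
    }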

  • Logging Best Practices in J2EE

    Hi,
    I've been struggling with Apache Commons Logging and Log4J Class Loading problems between module deployments in the Sun App Server. I've also had the same problems with other App servers.
    What is the best practice for Logging in J2EE.
    i.e. I think it may be java.util.logging. But what is the best practice for providing different logging config (i.e. levels for classes and output) for each deployed module, and how would you structure that in the EAR?
    Thanks in advance.
    Graham

    I find that java.util.logging works fine. For configuration of the log levels I use a LifeCycle module that sets up all my levels and handlers. That way I can set up the server.policy to allow only the LifeCycle module jar to configure logging (with a codebase grant), while no other normal module can.
    The LifeCycle module gets its properties as event data with the INIT event and configures the logging on the STARTUP event.
    Hope this helps.
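
    For reference, a hedged sketch of such a LifeCycle module, assuming the com.sun.appserv.server lifecycle API shipped with the Sun App Server / GlassFish; the property names used for the logger configuration are made up:

    import java.util.Properties;
    import java.util.logging.Level;
    import java.util.logging.Logger;
    import com.sun.appserv.server.LifecycleEvent;
    import com.sun.appserv.server.LifecycleListener;
    import com.sun.appserv.server.ServerLifecycleException;

    public class LoggingConfigLifecycleModule implements LifecycleListener {

        private Properties props;

        public void handleEvent(LifecycleEvent event) throws ServerLifecycleException {
            switch (event.getEventType()) {
                case LifecycleEvent.INIT_EVENT:
                    // The properties configured for the module arrive as event data.
                    props = (Properties) event.getData();
                    break;
                case LifecycleEvent.STARTUP_EVENT: {
                    // e.g. hypothetical properties logger.name / logger.level
                    String name = props.getProperty("logger.name", "com.example");
                    String level = props.getProperty("logger.level", "INFO");
                    Logger.getLogger(name).setLevel(Level.parse(level));
                    break;
                }
                default:
                    break;
            }
        }
    }

    The jar is registered as a lifecycle module in the admin console, and the server.policy codebase grant mentioned above (java.util.logging.LoggingPermission "control") then restricts level changes to that jar.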

  • Redo log best practice for performance - ASM 11g

    Hi All,
    I am new to the ASM world.... Can you please give your valuable suggestions on the topic below?
    What is the best practice for online redo log files? I read somewhere that Oracle recommends having only two disk groups: one for all the DB files, control files, and online redo log files, and another disk group for recovery files, like multiplexed redo files, archive log files, etc.
    Will there be any performance improvement from making a different diskgroup for online redo logs (separate from datafiles, control files, etc.)?
    I am looking for an Oracle document on best practices for redo logs (performance) on ASM.
    Please share your valuable views on this.
    Regards.

    ASM is only a filesystem.
    What really counts for I/O performance is the storage design, i.e. the RAID level used, array sharing, hard disk RPM, and so on.
    ASM itself is only a layer for reading and writing on ASM disks. In other words, if your storage design is OK, ASM will be OK. (Of course there are ASM best practices, but storage must come first.)
    e.g. Is there any performance improvement from making different diskgroups?
    It depends, and that leads to another question.
    Are the LUNs on the same array?
    If yes, the performance will end up at the same point, no matter whether you have one diskgroup or many.
    Comment added:
    Comparing ASM to Filesystem in benchmarks (Doc ID 1153664.1)
    Your main concern should be how many IOPS, and what latency and throughput, the storage can give you. Based on these values you will know whether your redo logs will be fine.

  • Console logging best practices?

    All,
    I just had a guest "LDom" crash. There was no system crash dump recovered and nothing in /var/adm/messages. Is there a best practice for sending virtual console output to some sort of log server?
    Any insight would be appreciated.
    Regards,
    Gene

    Could you use something like this? On the control domain, start screen (the terminal multiplexer) with the following options and arguments: screen -d -m sh -c "telnet <ldom's console> | tee -a <path to the log file>". That should create a screen session holding a telnet connection to the ldom's console, with its output written to a file.
    For example: screen -d -m sh -c "telnet localhost:5001 | tee -a /var/tmp/ldom1_console.log"
    Remember to check running screens with the screen -ls command.

  • ICM Trace Log best practices?

    Hello,
    Partner is asking if we have best practice guidelines/documentation for setting Trace Log file sizes on ICM 7.2.X
    thanks!

    Actually we did open a TAC case. Cisco was not able to make a recommendation because the issue is intermittent and we cannot leave tracing on indefinitely. Because of this they instead recommended installing a packet sniffer. However, our network support team came back with a similar response: we cannot leave packet sniffing turned on indefinitely.
    This is a difficult situation: we cannot reproduce the issue, we don't know when it will happen again, and we cannot take any proactive action to ensure that we capture the logs on the next occurrence.
    So, has anyone else been through something like this?
    Thanks!
    Joe

  • Logging best practices

    Hi!
    I have a few Java apps and need to implement some common logging functionality. I need all the applications to log to the same destination and save the info for monitoring/analysis.
    I considered using Log4j and Apache Commons Logging, but I need to be able to pass the name of the application as a parameter to the logging destination so that you can easily track the origin. Some other parameters are required as well, such as user, timestamp, etc.
    Which is the best way to go?
    Thanks,
    Iggy

    > well, of course every application will/could have its own log file, but for ease of maintenance we need a single place to save the information
    That is a bad idea. You do know that your logging statements might be interleaved in the file? E.g. first you get a few characters from one statement (application) and then a few characters from another statement.
    Instead, have a logging server and let the applications log to that server (using TCP or UDP). The server should then write to the log file.
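    One hedged sketch of the client side, assuming log4j 1.x is on the classpath (the host, port, and MDC key names are made up). Each application attaches a SocketAppender pointing at the central log server and puts its own name and the current user into the MDC so the origin of every event can be tracked:

    import org.apache.log4j.Logger;
    import org.apache.log4j.MDC;
    import org.apache.log4j.net.SocketAppender;

    public class CentralLogging {

        // Call once at application startup.
        public static void init(String applicationName) {
            // Ship events to the central log server (hypothetical host/port).
            SocketAppender remote = new SocketAppender("loghost.example.com", 4712);
            Logger.getRootLogger().addAppender(remote);

            // Context that travels with every event, so the server can tell origins apart.
            MDC.put("application", applicationName);
            MDC.put("user", System.getProperty("user.name"));
        }
    }

    On the server side, log4j's SimpleSocketServer (or a small custom receiver) can write everything to one file, and a PatternLayout such as %X{application} %X{user} %d %p %m%n pulls the MDC values into each line.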
    Kaj

  • Redo / Archive Log Best Practices?

    I am a newb when it comes to Oracle administration. The problem is that our "DBA" knows even less about it.
    I'd like to get some advice/recommendations on redo and archive logs.
    We are currently running:
    Windows 2000 Server
    Oracle 8.1.7
    Oracle DB is ~50gb
    ~250 users
    Database is under fairly heavy load, as it is used to run the company's primary accounting software.
    Recently when reviewing back up procedures, I realized that our "DBA" did not have archive logging turned on. This obviously is a problem. Our DBA was relying solely on a data dump every night that was then backed up to tape. I was forced to take care of this, as the "DBA" didn't have any knowledge on this subject. I got archive logging turned on, changed the init file, etc. etc.
    Where the problem comes in, and where my questions come from: the database was writing archive logs every ~2-3 minutes, sometimes more often depending on the database load. Oracle was configured to use 3 redo logs at ~105 MB each. The server was getting "Archive process error: Oracle instance xxxx - Cannot allocate log, archival required." I changed the redo logs to run 5 logs at ~200 MB each. I also added a SCSI drive to the server for the sole purpose of storing the archive logs. The log buffer was set at 64 KB; I upped this to 1 MB.
    My specific questions are:
    - How fast should logs be written?
    - Should I up the number of redo logfiles, or up the size of each?
    - Should I be writing the redo logs to multiple destinations?
    - Should I archive to multiple destinations? If so, would archiving to a network drive lag the archive process, and kill the bandwidth to the server/database since it would be writing 200mb+ files to the network every few minutes?
    - What are some recommended file size minimums / maximums under the current environment listed above?
    - Other tips/suggestions?
    Any help would be appreciated.
    Thanks.

    Hi,
    Have you configured LOG_ARCHIVE_START = TRUE?
    > How fast should logs be written? Should I up the number of redo log files, or up the size of each?
    > Should I be writing the redo logs to multiple destinations?
    > Should I archive to multiple destinations? If so, would archiving to a network drive lag the archive process, and kill the bandwidth to the server/database, since it would be writing 200 MB+ files to the network every few minutes?
    > What are some recommended file size minimums / maximums under the current environment listed above?
    If you want to keep the time between failures to a minimum, keep your redo log file sizes smaller; but in general you should use a good, large size. In your situation I think it should be:
    LOG_BUFFER = 104857600 -- in init.ora (100 MB)
    5 redo log files, multiplexed to multiple locations, each 400 MB.
    I recommend that you do not write your archives to a network location, as it will definitely overload network traffic as well as slow down archival.
    Regards
    Muhammad Umar Liaquat

  • Best practice for application debug logging?

    I am building a WebCenter portal using Oracle WebCenter 11.1.1.1.5 and deploying on Oracle WebLogic Server 11g. Please suggest the best practice to use for application debug logging. Should I use log4j or the Apache Commons library? Is it possible to manage the loggers (enable/disable/change severity) from the WLS admin console?

    You might want to read the chapter about AM Granularity in the ADF Developer Guide:
    http://download.oracle.com/docs/html/B25947_01/bcservices009.htm#sm0229

  • Best practice of OSB logging Report handling or java code using publish

    Hi all,
    I want to do common error handling in OSB. I did two implementations, as below, and just want to know which one is the best practice.
    1. Using a custom report handler: whenever we want to log, we use the report action of OSB, which calls a custom Java class that
    logs the data into the DB.
    2. Using a plain Java class: create a Java class and call it from the proxy (via a Java callout) to do the logging.
    Which is the best practice, and what are the pros and cons?
    Thanks
    Phani
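    For what it's worth, option 2 usually means a class exposing a static method, since OSB's Java Callout action invokes static methods. A hedged sketch of such a class (the datasource JNDI name, table, and method signature are all made up):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public final class OsbErrorLogger {

        private OsbErrorLogger() { }

        // Static so it can be wired in with a Java Callout action in the proxy.
        public static void logError(String serviceName, String errorCode, String payload) {
            try {
                // Hypothetical datasource JNDI name and table.
                DataSource ds = (DataSource) new InitialContext().lookup("jdbc/ErrorLogDS");
                Connection con = ds.getConnection();
                try {
                    PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO OSB_ERROR_LOG (SERVICE_NAME, ERROR_CODE, PAYLOAD, LOGGED_AT) "
                        + "VALUES (?, ?, ?, SYSTIMESTAMP)");
                    ps.setString(1, serviceName);
                    ps.setString(2, errorCode);
                    ps.setString(3, payload);
                    ps.executeUpdate();
                    ps.close();
                } finally {
                    con.close();
                }
            } catch (Exception e) {
                // Never let a logging failure break the message flow.
                e.printStackTrace();
            }
        }
    }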

    Hi Anuj,
    Thanks for the links, they have been helpful.
    I understand now that OSR is only meant to contain Proxy services. The sync facility is between OSR and OSB so that, when you are not using OER, you can publish Proxy services to OSR from OSB. What I didn't understand was why there was an option to publish a Proxy service back to OSB, and why it ended up as a Business service. From the link you provided, it mentioned that this case is for multi-domain OSBs, where one OSB wants to use another OSB's service. It is clear now.
    Some more questions:
    1) At design time, no endpoints are generated in OER for Proxy services. So how do we publish our design-time services to OSR for testing purposes? What is the correct way of doing this?
    Thanks,
    Umar

  • Best practice for CM log and Out

    Hi,
    I've following architecture:
    DB SERVER = DBSERVER1
    APPS SERVER =APP1 AND APP2
    LB= Netscaler
    PCP configured.
    What is the best practice for CM log and out files? Do I need to keep these files on the DB server and mount them to APP1 and APP2?
    Please advise.
    Thanks

    Hi,
    See, if you want to preserve the log files of the other CM node when a crash happens, then why would you want to share APPLCSF? If the node hosting the shared APPLCSF crashes, then ALL the log files (of both nodes) are gone. Keep the same APPLCSF directories and paths on both nodes, so that CM node A keeps its log files locally in its own directories and CM node B keeps its log files locally in its own directories.
    That said, everything I said above is just my own thinking, so follow it or not as you wish; always follow what Oracle says, and the poster should also check with Oracle.
    Regards

  • Best practice for Error logging and alert emails

    I have SQL Server 2012 SSIS. I have Excel files that are imported with an Excel Source and an OLE DB Destination. Scheduled jobs run the SSIS packages every night.
    I'm looking for advice on what is best practice for a production environment. The requirements are the following:
    1) If error occurs with tasks, email is sent to admin
    2) If error occurs with tasks, we have log in flat file or DB
    Kenny_I

    Are you asking about the difference between using standard logging and event handlers? I prefer the latter, as standard logging will not always log data in the way we desire. So we've developed a framework to add the necessary functionality inside event handlers
    and log the required data, in the required format, to a set of tables that we maintain internally.

  • Database Log File becomes very big, What's the best practice to handle it?

    The log of my production database is getting very big and the hard disk is almost full. I am pretty new to SAP, but familiar with SQL Server. Can anybody give me advice on the best practice to handle this issue?
    Should I Shrink the Database?
    I know increase hard disk is need for long term .
    Thanks in advance.

    Hi Finke,
    Usually the log file fills up and grows huge due to not having regular transaction log backups. If your database is in FULL recovery mode, every transaction is logged in the transaction log file, and it gets cleared when you take a log backup. If it is a production system and you don't have regular transaction log backups, the problem is just sitting there waiting to explode when you need a point-in-time restore. Please check your backup/restore strategy.
    Follow these steps to get the transaction log file back in normal shape:
    1.) Take a transaction log backup.
    2.) Shrink the log file: DBCC SHRINKFILE('logfilename', 10240)
          The above command will shrink the file to 10 GB (a recommended size for high-transaction systems).
    Finke Xie wrote:
    > Should I Shrink the Database?
    "NEVER SHRINK DATA FILES"; shrink only the log file.
    3.) Schedule log backups every 15 minutes.
    Thanks
    Mush

  • Best practice to modify OIM webApp.war in Weblogic (10g - 9102)

    A very generic question for OIM on WebLogic:
    If I make any change (minor or major) to any of the JSP files in the OIM xlWebApp.war, what is the best practice for moving it to production?
    Thanks,

    Hi,
    Here are the steps you should follow:
    - Unpack the xlWebApp.war file (using the jar command).
    - Modify/add the JSPs.
    - War it up again.
    - Copy/replace it at the XEL_HOME/webapp location.
    - If you modified an xml or properties file, copy that to DDTemplate/webapp.
    - Run the patch_weblogic command.
    - Restart the server.
    You can also write a build script which will take the OOTB xlWebApp.war and the custom JSPs and generate a new xlWebApp.war.
    Once it is tested successfully in the dev/QA env, you can check the code and the new xlWebApp.war file into version control (e.g. Subversion).
    In production you can directly use this xlWebApp.war file and run the patch command.
    Also look at this:
    Re: Help: OIM 10G Server With Weblogic
    Cheer$
    A..

  • Best practice for weblogic portal

    While creating a portal application, what is the best approach: use JSPs or JSR-168 portlets with a minimum of proprietary tags from WebLogic, or use all those custom controls and other WebLogic-specific features?

    Hi,
    There are no such best practices given by BEA Systems. But if you are creating a WebLogic Portal application and you want portlets rendered on the portal, the page flow portlet is the best way to render your web application on the portal. The advantages of page flows are:
    (1) Easy development of web applications
    (2) Graphical view of your navigation
    (3) J2EE resources can be accessed easily with the use of controls
    (4) You can use Beehive NetUI tags, which provide lots of tags that can be used without writing complex logic
    (5) Complex J2EE logic can be written in the form of annotations, which helps the developer write complex J2EE code
    Thx
