Logging based on EAR

Our app currently logs using Log4j at the application level. There are at least 8 EAR files deployed when the server runs, and all 8 EARs log into the File1.log file. I want to separate one EAR: 7 EAR files should continue logging to File1.log, while the 8th, say ABCD.EAR, should log into ABCD.LOG.
I am able to log based on class hierarchy, but there are a number of classes that don't fall under a specific hierarchy yet still belong to ABCD.EAR. How can I configure my log4j.xml based on EAR? My current settings for ABCD logging are:
<appender name="ABCDAppender" class="org.apache.log4j.RollingFileAppender">
  <param name="File" value="./logs/abcd.log" />
  <param name="Append" value="true" />
  <param name="MaxFileSize" value="100MB" />
  <param name="MaxBackupIndex" value="10" />
  <layout class="provision.services.logging.PSPatternLayout">
    <param name="conversionPattern" value="%d %D %S %-5p {%P} %#: [%-17c{5} %T] %m%n" />
  </layout>
</appender>
<logger name="project.abcd">
  <level value="DEBUG"/>
  <appender-ref ref="ABCDAppender"/>
</logger>
This currently logs everything for the classes under project.abcd, but there are other classes in abcd.ear that are not being logged.

What version of log4j are you using? I think you will need 1.2.8 in order for log4j to find log4j.xml on the classpath.
You can try specifying the location explicitly (note the system property is log4j.configuration):
JAVA_OPTIONS=-Dlog4j.configuration=C:\bea\user_projects\domains\....\WEB-INF\classes\log4j.xml
As a last resort, get your application to write a file and then read it back. If this works, you can search your file system for that file and place your log4j.xml in the same location. No, don't laugh :-)
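Building on the snippet in the question: if the remaining ABCD.EAR classes live under other package roots, one option is simply to attach the same appender to each root. A minimal sketch, where the package names project.util and project.web are hypothetical placeholders for the real top-level packages of that EAR; additivity="false" keeps those messages out of File1.log:

```xml
<!-- Hypothetical package roots; replace with the real top-level
     packages bundled in ABCD.EAR -->
<logger name="project.util" additivity="false">
  <level value="DEBUG"/>
  <appender-ref ref="ABCDAppender"/>
</logger>
<logger name="project.web" additivity="false">
  <level value="DEBUG"/>
  <appender-ref ref="ABCDAppender"/>
</logger>
```

Alternatively, if each EAR is loaded by its own classloader, bundling a separate log4j.xml inside ABCD.EAR (so that EAR's classloader finds its own copy first) lets the eighth EAR configure its own root logger and log file without touching the shared configuration used by the other seven.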

Similar Messages

  • Powershell Get-Eventlog to export logs based on target machine

    Is there a way to export Windows event logs based on target machine with PowerShell?
    I want to use this code to filter or classify System Center related logs according to a specific agent on a hostname.

    Get-EventLog -ComputerName seimi-nb -LogName 'Windows PowerShell' | Export-Csv C:\EventLog.csv
    Seidl Michael | http://www.techguy.at |
    twitter.com/techguyat | facebook.com/techguyat

  • Implement log based change data capture

    Hi,
    I am trying to get log-based change data capture to work. My ODI version is 11.1.1.5. I guess for log-based CDC there are two ways:
    1) use streams
    2) use log miner tool
    My database is Oracle 11g Express Edition. Streams, I know, is possible only in the Enterprise Edition of Oracle. So can anyone tell me how to implement log-based CDC, given that the LogMiner tool is not preferred in 11g?

    Hi,
    Thanks for your reply...
    I received an error while creating the change table:
    ORA-29540: class oracle/CDC/PublishApi does not exist
    ORA-06512: at "SYS.DBMS_CDC_PUBLISH", line 298
    Can you please help me fix this?
    by,
    Nagaa

  • Clone from standby  ended in ARC: Cannot archive online log based on backup

    Hi
    I'm in a scenario where my production DB is in one data center and the standby is in another.
    Both are geographically separated. I have to get a copy of prod to the standby data center.
    Sending data over the network takes a long time, whether with duplicate from active database or with taking a backup, copying it to the standby side, and restoring it.
    So I thought of duplicating from the standby DB, which is in the same data center, using the 11g RMAN duplicate-from-active-standby command.
    I have simulated the scenario as below:
    Oracle version 11.2.0.1
    OS version RHEL 5.4
    My procedure & parameters are as below.
    On the standby side from which I am copying (TARGET):
    1) on standby:
    alter database recover managed standby database cancel;
    2) alter database convert to snapshot standby;
    which gave me:
    /u01/data/DGSTD/archive/1_152_750425930.dbf
    /u01/data/DGSTD/archive/1_153_750425930.dbf
    */u01/data/DGSTD/archive/1_1_752604441.dbf*
    */u01/data/DGSTD/archive/1_2_752604441.dbf*
    3) alter database open;
    4) alter system switch logfile;
    Now from RMAN:
    RMAN> connect target sys/system@DGSTD
    connect auxiliary sys/system@GGR
    connected to target database: DGPRM (DBID=578436102)
    RMAN>
    connected to auxiliary database: NOTREAL (not mounted)
    RMAN>
    run{
    allocate channel prmy1 type disk;
    allocate channel prmy2 type disk;
    allocate channel prmy3 type disk;
    allocate channel prmy4 type disk;
    allocate channel prmy5 type disk;
    allocate auxiliary channel stby1 type disk;
    duplicate target database to ggr from active database
    spfile
    parameter_value_convert='DGSTD','GGR','/u01/data/DGSTD/','/u01/data/ggr/'
    set db_file_name_convert='/u01/oradata/DGSTD/','/u01/data/ggr/'
    set log_file_name_convert='/u01/oradata/DGSTD/','/u01/data/ggr/'
    set 'db_unique_name'='ggr'
    set 'audit_file_dest'='/u00/app/oracle/admin/ggr/adump'
    set 'sga_max_size'='140m'
    set 'pga_aggregate_target'='28940697'
    nofilenamecheck;
    }
    And when the RMAN output reaches the point below:
    Starting backup at 31-MAY-11
    channel prmy1: starting datafile copy
    input datafile file number=00001 name=/u01/data/DGSTD/datafile/system01.dbf
    channel prmy2: starting datafile copy
    input datafile file number=00002 name=/u01/data/DGSTD/datafile/sysaux01.dbf
    in the alert log of the clone DB it gives massive numbers of errors saying:
    ARC3: Cannot archive online log based on backup controlfile
    ARC2: Cannot archive online log based on backup controlfile
    ARC3: Cannot archive online log based on backup controlfile
    ARC2: Cannot archive online log based on backup controlfile
    and it fills up the whole filesystem, and finally the duplicate command throws an error.
    I'm not sure what I'm missing inside the duplicate command, or whether it is even valid to duplicate a database from a snapshot standby.
    Can somebody shed some light on this, please?

    duplicate target database to ggr from active database
    spfile
    parameter_value_convert='DGSTD','GGR','/u01/data/DGSTD/','/u01/data/ggr/'
    set db_file_name_convert='/u01/oradata/DGSTD/','/u01/data/ggr/'
    set log_file_name_convert='/u01/oradata/DGSTD/','/u01/data/ggr/'
    set 'db_unique_name'='ggr'
    set 'audit_file_dest'='/u00/app/oracle/admin/ggr/adump'
    set 'sga_max_size'='140m'
    set 'pga_aggregate_target'='28940697'
    nofilenamecheck;
    }
    I think you should use the STANDBY clause, as in:
    DUPLICATE TARGET DATABASE TO dup1 FOR STANDBY FROM ACTIVE DATABASE;

  • VB Scripting to monitor application event log based on specific words.

    Hi All,
    I have written a VB script to monitor the application event log based on a specific word in the message. When I include the script in a monitor and run it at a specific time once a day, I get a run-time error on the server where it is supposed to run. Could you please check the command I have highlighted in the script below?
    Dim VarSize
    Dim objMOMAPI
    Dim objBag
    Set objMOMAPI = CreateObject("MOM.ScriptAPI")
    Set objBag = objMOMAPI.CreateTypedPropertyBag(StateDataType)
    Set objFSO = CreateObject("Scripting.FileSystemObject")
    Const CONVERT_TO_LOCAL_TIME = True
    Set dtmStartDate = CreateObject("WbemScripting.SWbemDateTime")
    dtmStartDate.SetVarDate dateadd("n", -1440, now)' CONVERT_TO_LOCAL_TIME
    strComputer = "."
    Set objWMIService = GetObject("winmgmts:" _
     & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
    Set colLoggedEvents = objWMIService.ExecQuery _
     ("SELECT * FROM Win32_NTLogEvent WHERE Logfile = 'Application' AND " _
     & "EventCode = '100'")
    For Each objEvent in colLoggedEvents
    ' Read the current event (objEvent), not the collection, and
    ' lower-case both sides so the substring match can succeed
    If InStr(LCase(objEvent.Message), LCase("Message :Application A3 has been successfully processed for today")) > 0 Then
    X = "Success"
    End If
    Next
    if X="Success" then
    call objBag.AddValue("State","GOOD")
    call objMOMAPI.Return(objBag)
    wscript.quit()
    Else
    call objBag.AddValue("State","BAD")
    call objMOMAPI.Return(objBag)
    wscript.quit()
    End If

    The use of a variable's value to detect its Boolean state has been standard programming practice for as long as I can remember.
    Cast your mind back to strongly typed languages, e.g. Pascal.
    I'll cast back to the very early days of the "C" language, where all variables could be treated as "bool" without a cast. There is no more strongly typed language than "C". "C" practically invented the standards for all modern languages.
    When I was writing machine language we also used zero as false, but many machines only tested the high bit for truthiness. The HP machines and Intel allowed a test to aggregate to the sign bit. Adding that flag to the test allowed true for any numeric value that was non-zero. A bool test was also used for a negative switch. If you study micro language implementation you will find that this hardware design and the companion compiler design is... well... by design. It is a way of improving the completeness and usefulness of an instruction set.
    Other languages may require further decoration due to some mistaken desire to be better than perfect. That is like trying to change number theory by renaming addition to "gunking" and forcing everyone to use multiplication when adding the same number more than once. A Boolean test is a test of the flag bit, with or without aggregation. Even if we test a bit in a word we still mask and aggregate. It is always the most primitive operation. It is also the most useful operation when you finally realize that it is like an identity in math.
    Use the language features that are designed in. They can help to make code much more flexible and logical.
    By the way, Pascal also treats everything as Boolean when asked to.
    ¯\_(ツ)_/¯

  • How to update the log based on verifyAttributes status

    Hi All
    I am new to OTS and I am using version 12.2. Here is the question:
    I am writing some info like PASSED/FAILED to the log based on my assertion. This can be achieved in two ways.
    1. Using the exists() method: if the object is present we can log "Passed"; similarly, if the object does not exist we log "Failed".
    2. If I use the verifyAttribute method, I want to log "Passed" if verifyAttribute passes, and vice versa (i.e. log "Failed" if verifyAttribute fails).
    From the help doc I have seen that verifyAttribute/verifyAttributes returns void.
    Is there any way to handle this? Let me know.
    Thanks in advance.

    Hi,
    You'll have to work with exists() if you have to custom-print your own log, as assert/verify are void methods. If a custom log isn't necessary, I would suggest you use an Object Test in OpenScript; it logs the pass/fail of the object tests you create on the results page. To create an object test, click Add Object Test > give the test a name > select the element you want to test > select the attribute you want to verify > save the test.

  • Log based capture vs. synchronous capture

    Hi. Log-based capture scans the redo logs, and the capture process can re-scan redo/archive logs if the DB crashes at some point, to make sure that all required changes were captured and sent to the destination DB. What about synchronous capture? Is there any mechanism that can be used to re-discover changes made to the database and make sure those changes will be sent to the destination DB?

    Log-based capture can be both synchronous and asynchronous. It can read from the online redo logs but not from archived ones.

  • I need to start and stop logging based on a digital input event (or analog if necessary), log data for several seconds prior to the event, and have the data file close at the end of the event and increment the filename for the next logging event.

    I don't know if this can be done with VI Logger or whether I need to use LabVIEW 7.1.

    After browsing through the VI Logger User Manual, it looks like the triggering that you are hoping to accomplish is possible. However, incrementing the filename for the next logging event is not going to be possible. VI Logger does exactly what its name says: it logs data. I don't think the automation that you are hoping for is possible.
    For help with setting up your application, if you do choose to stay with VI Logger, make sure to check out the Getting Started with VI Logger manual.
    Best of luck.
    Jared A

  • Console log-based question

    I recently noticed a folder "Akamai" in my Applications folder and uninstalled it (with CleanApp). However, I've since noticed several repeated entries in my console log indicating the need for Akamai. I'm not sure what it is for or how to get rid of it. Can someone advise? Thanks.
    I also recently installed a couple of Autodesk programs that I later decided I did not want (and uninstalled). I don't know if this is related.
    Here's the log.
    5/24/10 10:39:54 AM com.apple.launchd.peruser.501[413] (com.akamai.client.plist[21599]) Bug: launchdcorelogic.c:4143 (24003):13
    5/24/10 10:39:54 AM com.apple.launchd.peruser.501[413] (com.akamai.client.plist[21599]) posix_spawn("/Applications/Akamai/loader.pl", ...): No such file or directory
    5/24/10 10:39:54 AM com.apple.launchd.peruser.501[413] (com.akamai.client.plist[21599]) Exited with exit code: 1
    5/24/10 10:39:54 AM com.apple.launchd.peruser.501[413] (com.akamai.client.plist) Throttling respawn: Will start in 10 seconds
    5/24/10 10:40:04 AM com.apple.launchd.peruser.501[413] (com.akamai.client.plist[21600]) Bug: launchdcorelogic.c:4143 (24003):13
    5/24/10 10:40:04 AM com.apple.launchd.peruser.501[413] (com.akamai.client.plist[21600]) posix_spawn("/Applications/Akamai/loader.pl", ...): No such file or directory
    5/24/10 10:40:04 AM com.apple.launchd.peruser.501[413] (com.akamai.client.plist[21600]) Exited with exit code: 1
    5/24/10 10:40:04 AM com.apple.launchd.peruser.501[413] (com.akamai.client.plist) Throttling respawn: Will start in 10 seconds
    5/24/10 10:40:14 AM com.apple.launchd.peruser.501[413] (com.akamai.client.plist[21601]) Bug: launchdcorelogic.c:4143 (24003):13
    5/24/10 10:40:14 AM com.apple.launchd.peruser.501[413] (com.akamai.client.plist[21601]) posix_spawn("/Applications/Akamai/loader.pl", ...): No such file or directory
    5/24/10 10:40:14 AM com.apple.launchd.peruser.501[413] (com.akamai.client.plist[21601]) Exited with exit code: 1
    5/24/10 10:40:14 AM com.apple.launchd.peruser.501[413] (com.akamai.client.plist) Throttling respawn: Will start in 10 seconds
    5/24/10 10:40:20 AM osascript[21605] Error loading /Library/ScriptingAdditions/Adobe Unit Types.osax/Contents/MacOS/Adobe Unit Types: dlopen(/Library/ScriptingAdditions/Adobe Unit Types.osax/Contents/MacOS/Adobe Unit Types, 262): no suitable image found. Did find:
    /Library/ScriptingAdditions/Adobe Unit Types.osax/Contents/MacOS/Adobe Unit Types: no matching architecture in universal wrapper

    Yes, it's a program that gets installed when you install Adobe or Autodesk products.
    Do you have a folder "Akamai" in your applications folder?
    If you do, do not uninstall or delete it.
    Instead:
    Open terminal
    Type in: cd /Applications/Akamai
    Type in: ./AdminTool uninstall -force
    And it will be removed from your system

  • Separating logs in OSB based on project - is this possible?

    Hi,
    I'm using OSB 11.1.1.3.0 and I'd like log messages generated from the log action to go to a different log file based on OSB project. I've found messages on the forum that discuss this, for example:
    alsb logging
    These posts only explain how to log messages from ALL OSB projects in a domain to a SINGLE log file, which I already have working using a LogFilter, but I'd like to take it a step further and log messages to a different log file depending on the specific project where the log action was invoked.
    I want to use a method that involves LogFilters rather than explicit OSB Reporting or File business services, to keep it unobtrusive from the perspective of the application developer.
    I've checked the output of the LogRecord methods and I don't see anything that will let me get a handle on the specific project a log originates from, but I've noticed that at the start of each log message information is logged inside [], for example:
    [raiseEvent Operation, raiseEvent Operation_request, stage1, REQUEST]
    This looks promising but unfortunately it doesn't always contain the OSB project name (sometimes it does, but not in this example).
    So my questions are:
    1. Has anyone successfully split logs based on OSB project using the WLS/JDK logging system?
    2. Is there a way to specify what is logged in the [] at the start of each log message?
    Any help is much appreciated.
    Thanks,
    Jason

    It was done for a previous client by the then-BEA in a different manner. Instead of using any Java callouts or log4j code within OSB, that task was shifted to a WebLogic startup class. This is how it works:
    At OSB
    Use the Log action to record log messages. Put an appropriate annotation based on the project so the log messages can be categorized into different files later.
    Example: Log: "Request Message Processed successfully." Annotation: OSB_LOG: <Project_Name>. Severity: Info. Note the annotation has two parts: a hard-coded OSB_LOG and a variable <Project_Name> part.
    This log message gets recorded in the managed server log file as: Info: OSB_LOG:<Project_Name> Request Message Processed successfully
    At WLS
    You have to configure the logging settings of the server so that this log message reaches the domain logger. For example, the server should be configured to broadcast to the domain logger at severity Info and above if you want to receive the above message.
    Startup class
    This startup class is deployed to the admin server. In its startup method it gets the domain logger and registers a custom handler on it. The custom handler inspects each LogRecord it receives from the domain logger for the presence of the hard-coded string (OSB_LOG). If the record has this string, the handler knows the log record came from an OSB application and not from a WLS subsystem, and hence that the message matters to it. The next step is to determine the logger name by extracting the <Project_Name> from the LogRecord. It creates a new logger with the same name as the Project_Name and logs the message with the same severity as the original message.
    One key advantage of this is that if your OSB is deployed to a cluster, you get the log messages from all managed servers for a particular project (or proxy service) in one location.
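    As a rough illustration only (the actual BEA startup class is not shown in this thread), the handler logic described above might look like the following with java.util.logging; the class name, the marker-parsing details, and the per-project FileHandler wiring are all assumptions:

```java
import java.util.logging.Handler;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// Hypothetical sketch of the custom handler described above: it watches
// for the hard-coded OSB_LOG marker, extracts the project name that
// follows it, and re-logs the record under a per-project logger.
public class OsbLogRoutingHandler extends Handler {
    private static final String MARKER = "OSB_LOG:";

    @Override
    public void publish(LogRecord record) {
        String msg = record.getMessage();
        if (msg == null) return;
        int at = msg.indexOf(MARKER);
        if (at < 0) return; // not an OSB application message
        // The project name is the token immediately after the marker
        String rest = msg.substring(at + MARKER.length()).trim();
        int end = rest.indexOf(' ');
        String project = (end < 0) ? rest : rest.substring(0, end);
        // Re-log at the original severity; each per-project logger can
        // have its own FileHandler attached during server startup.
        Logger.getLogger(project).log(record.getLevel(), msg);
    }

    @Override public void flush() { }
    @Override public void close() { }
}
```

    Registering an instance of this handler on the domain logger in the startup class, and attaching one FileHandler per project logger, would give each project its own file, as the reply describes.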

  • Logging option on file adapter based JNDI

    Hello,
    While setting up a file-adapter-based JNDI, I noticed under the outbound connection a logging option where we can rotate logs based on size/time. Could someone please help me understand what this refers to and how we can use it?
    It's under Deployment -> File adapter -> Outbound connection -> "Select a JNDI created by you" -> Logging.
    My JNDI points to a location where I am writing some specific log content in a text file. Can we handle (rotate) that based on this?
    Please advise.

    Hi,
    you need to create 40 channels if you have users,
    as with the FTP adapter you can only create the following dynamically:
    File Name
    Directory
    File Type
    File Encoding
    Temporary Name Scheme for Target File Name
    Regards,
    michal
    <a href="/people/michal.krawczyk2/blog/2005/06/28/xipi-faq-frequently-asked-questions"><b>XI / PI FAQ - Frequently Asked Questions</b></a>

  • Log NC based on data collection

    Is it possible to trigger the logging of an NC based on a data collection value being outside the acceptable range?
    i.e. the acceptable range for the data collection is a number less than 6; if the user enters 7, I would like to log an NC that says the data collection is out of range.

    To summarize:
    What I'm taking away from this is that it is the best practice to have only one parameter per DC group if you intend to trigger the automatic logging of an NC when that group "fails." The one parameter in the DC group MUST have a min/max value assigned and a fail is triggered when the operator enters a value outside of that range.  The NC is logged using the value assigned to the LOGNC_ID_ON_GROUP_FAILURE parameter in activity maintenance.
    If there are multiple parameters in the DC group, they all have to have a min/max value assigned and ALL of the responses have to be out of range in order to fail the SFC.
    I cannot have a DC group that contains parameters of multiple types and expect an NC to be logged based on an incorrect answer (for one question or multiple.)
    I cannot expect an NC to be logged based on an incorrect answer of one question, if the rest of the questions in the DC group are answered "correctly."
    Sound correct?

  • OC4J : Log4JLogger does not implement Log

    Hi All,
    I would like some information regarding the "does not implement Log" problem which occurs when I deploy my EAR application. I use OC4J version 10.1.3.0, and when I try to deploy my EAR with the "search-local-classes-first" option set to "true", I get the exception listed at the bottom of this message.
    I saw the link http://wiki.apache.org/jakarta-commons/Logging/FrequentlyAskedQuestions,
    tried to resolve the problem through class loaders, but in vain.
    I am stuck on this; can anybody please help? Also, please send me any diagnostic code that would help me identify the classloader that tries to load these Log-based classes.
    Thanks
    Raj
    ....................... Exception .......................
    07/04/24 15:58:35 SEVERE: CoreRemoteMBeanServer.getEvents Could not retrieve remote events: Error deserializing return-value: org.apache.commons.logging.LogConfigurationException; nested exception is:
    java.lang.ClassNotFoundException: org.apache.commons.logging.LogConfigurationExceptionjava.rmi.UnmarshalException: Error deserializing return-value: org.apache.commons.logging.LogConfigurationException; nested exception is:
    java.lang.ClassNotFoundException: org.apache.commons.logging.LogConfigurationException
    at com.evermind.server.rmi.RMICall.EXCEPTION_ORIGINATES_FROM_THE_REMOTE_SERVER(RMICall.java:110)
    at com.evermind.server.rmi.RMICall.throwRecordedException(RMICall.java:128)
    at com.evermind.server.rmi.RMIClientConnection.obtainRemoteMethodResponse(RMIClientConnection.java:472)
    at com.evermind.server.rmi.RMIClientConnection.invokeMethod(RMIClientConnection.java:416)
    at com.evermind.server.rmi.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:63)
    at com.evermind.server.rmi.RecoverableRemoteInvocationHandler.invoke(RecoverableRemoteInvocationHandler.java:28)
    at com.evermind.server.ejb.StatefulSessionRemoteInvocationHandler.invoke(StatefulSessionRemoteInvocationHandler.java:31)
    at __Proxy6.getEvents(Unknown Source)
    at oracle.oc4j.admin.jmx.client.MBeanServerEjbRemoteSynchronizer.getEvents(MBeanServerEjbRemoteSynchronizer.java:530)
    at oracle.oc4j.admin.jmx.client.CoreRemoteMBeanServer.getEvents(CoreRemoteMBeanServer.java:311)
    at oracle.oc4j.admin.jmx.client.EventManager.run(EventManager.java:199)
    at oracle.oc4j.admin.jmx.client.ThreadPool$ConfigurableThreadImpl.run(ThreadPool.java:295)
    Caused by: java.lang.ClassNotFoundException: org.apache.commons.logging.LogConfigurationException
    at com.evermind.server.rmi.RMIClassLoader.findClass(RMIClassLoader.java:54)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
    at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:319)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:242)
    at com.evermind.io.ClassLoaderObjectInputStream.resolveClass(ClassLoaderObjectInputStream.java:33)
    at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1544)
    at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1466)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1699)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1908)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1832)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1719)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1908)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1832)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1719)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1908)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1832)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1719)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1908)
    at java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:479)
    at javax.management.Notification.readObject(Notification.java:350)
    at sun.reflect.GeneratedMethodAccessor100.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:946)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1809)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1719)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1908)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1832)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1719)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
    at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1634)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1299)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1908)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1832)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1719)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
    at com.evermind.server.rmi.RMIProtocol$Version.unmarshallParameterDirectly(RMIProtocol.java:402)
    at com.evermind.server.rmi.RMIProtocol$Version_1_0.unmarshallParameter(RMIProtocol.java:471)
    at com.evermind.server.rmi.RMIProtocol.readObject(RMIProtocol.java:80)
    at com.evermind.server.rmi.RMIProtocol.readValue(RMIProtocol.java:161)
    at com.evermind.server.rmi.RMIClientConnection.handleMethodInvocationResponse(RMIClientConnection.java:794)
    at com.evermind.server.rmi.RMIClientConnection.handleOrmiCommandResponse(RMIClientConnection.java:242)
    at com.evermind.server.rmi.RMIClientConnection.dispatchResponse(RMIClientConnection.java:197)
    at com.evermind.server.rmi.RMIClientConnection.processReceivedCommand(RMIClientConnection.java:179)
    at com.evermind.server.rmi.RMIConnection.handleCommand(RMIConnection.java:154)
    at com.evermind.server.rmi.RMIConnection.listenForOrmiCommands(RMIConnection.java:126)
    at com.evermind.server.rmi.RMIConnection.run(RMIConnection.java:105)
    at EDU.oswego.cs.dl.util.concurrent.PooledExecutor$Worker.run(PooledExecutor.java:819)
    at java.lang.Thread.run(Thread.java:595)
    07/04/24 15:58:35 WARNING: ApplicationUnDeployer.removeFiles WARNING: Unable to remove appDir C:\product\10.1.3\OracleAS_1\j2ee\home\applications\epcis : Unable to remove C:\product\10.1.3\OracleAS_1\j2ee\home\applications\epcisjava.io.IOException: Unable to remove C:\product\10.1.3\OracleAS_1\j2ee\home\applications\epcis
    at oracle.oc4j.util.FileUtils.recursiveRemove(FileUtils.java:249)
    at oracle.oc4j.admin.internal.ApplicationUnDeployer.removeFiles(ApplicationUnDeployer.java:146)
    at oracle.oc4j.admin.internal.ApplicationUnDeployer.doUndeploy(ApplicationUnDeployer.java:117)
    at oracle.oc4j.admin.internal.UnDeployerBase.execute(UnDeployerBase.java:91)
    at oracle.oc4j.admin.internal.UnDeployerBase.execute(UnDeployerBase.java:72)
    at oracle.oc4j.admin.internal.ApplicationDeployer.undo(ApplicationDeployer.java:222)
    at oracle.oc4j.admin.internal.DeployerBase.execute(DeployerBase.java:138)
    at oracle.oc4j.admin.jmx.server.mbeans.deploy.OC4JDeployerRunnable.doRun(OC4JDeployerRunnable.java:52)
    at oracle.oc4j.admin.jmx.server.mbeans.deploy.DeployerRunnable.run(DeployerRunnable.java:81)
    at EDU.oswego.cs.dl.util.concurrent.PooledExecutor$Worker.run(PooledExecutor.java:819)
    at java.lang.Thread.run(Thread.java:595)
    07/04/24 15:58:36 WARNING: DeployerRunnable.run java.lang.ExceptionInInitializerErrororacle.oc4j.admin.internal.DeployerException: java.lang.ExceptionInInitializerError
    at oracle.oc4j.admin.internal.DeployerBase.execute(DeployerBase.java:139)
    at oracle.oc4j.admin.jmx.server.mbeans.deploy.OC4JDeployerRunnable.doRun(OC4JDeployerRunnable.java:52)
    at oracle.oc4j.admin.jmx.server.mbeans.deploy.DeployerRunnable.run(DeployerRunnable.java:81)
    at EDU.oswego.cs.dl.util.concurrent.PooledExecutor$Worker.run(PooledExecutor.java:819)
    at java.lang.Thread.run(Thread.java:595)
    Caused by: java.lang.ExceptionInInitializerError
    at com.sun.faces.config.ConfigureListener.contextInitialized(ConfigureListener.java:232)
    at com.evermind.server.http.HttpApplication.initDynamic(HttpApplication.java:1015)
    at com.evermind.server.http.HttpApplication.<init>(HttpApplication.java:649)
    at com.evermind.server.ApplicationStateRunning.getHttpApplication(ApplicationStateRunning.java:428)
    at com.evermind.server.Application.getHttpApplication(Application.java:512)
    at com.evermind.server.http.HttpSite$HttpApplicationRunTimeReference.createHttpApplicationFromReference(HttpSite.java:1975)
    at com.evermind.server.http.HttpSite$HttpApplicationRunTimeReference.<init>(HttpSite.java:1894)
    at com.evermind.server.http.HttpSite.addHttpApplication(HttpSite.java:1591)
    at oracle.oc4j.admin.internal.WebApplicationBinder.bindWebApp(WebApplicationBinder.java:206)
    at oracle.oc4j.admin.internal.WebApplicationBinder.bindWebApp(WebApplicationBinder.java:96)
    at oracle.oc4j.admin.internal.ApplicationDeployer.bindWebApp(ApplicationDeployer.java:541)
    at oracle.oc4j.admin.internal.ApplicationDeployer.doDeploy(ApplicationDeployer.java:197)
    at oracle.oc4j.admin.internal.DeployerBase.execute(DeployerBase.java:93)
    ... 4 more
    Caused by: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: Class org.apache.commons.logging.impl.Log4JLogger does not implement Log
    at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:532)
    at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:272)
    at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:246)
    at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:395)
    at com.sun.faces.config.beans.FacesConfigBean.<clinit>(FacesConfigBean.java:28)
    ... 17 more
    Caused by: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: Class org.apache.commons.logging.impl.Log4JLogger does not implement Log
    at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogFactoryImpl.java:416)
    at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:525)
    ... 21 more
    Caused by: org.apache.commons.logging.LogConfigurationException: Class org.apache.commons.logging.impl.Log4JLogger does not implement Log
    at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogFactoryImpl.java:412)
    ... 22 more

    Raj,
    your post is a bit confusing.
    1. search-local-classes-first is a setting for WAR applications.
    2. The exception seems to come from the EJB layer.
    During deployment you could try using the commons-logging shared library that ships with Oracle AS/OC4J to solve this. Another approach would be to package the library in the archive you're deploying and reference it from META-INF/Manifest.mf.
    --olaf
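One hedged way to take the shared-library route is to import the container-provided commons-logging in the application's orion-application.xml so the EAR doesn't bundle a conflicting copy. The library name below is the one usually registered in OC4J 10.1.3; verify it against your installation's shared-library list:

```xml
<!-- orion-application.xml (sketch): import the container's commons-logging
     instead of packaging a second, conflicting copy inside the EAR -->
<orion-application>
  <imported-shared-libraries>
    <import-shared-library name="apache.commons.logging"/>
  </imported-shared-libraries>
</orion-application>
```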

  • OSB Logging: Case study

    Hi
    I want to separate the OSB logs based on proxy service names. I went through the following threads:
    1. alsb logging
    2. https://kr.forums.oracle.com/forums/thread.jspa?threadID=1556555
    I was wondering if I can change the DEFAULT BEHAVIOUR OF THE 'LOG' action in OSB.
    I mean, when we use the 'LOG' action, the logs go to a single file named Servername.log (ManagedServer1.log and ManagedServer2.log in my case, with 2 managed servers).
    What I want is this: when I use the 'LOG' action, the logged XQuery expression should go to a file whose name is the same as that of the proxy service where the action was used.
    I also want the logs to be in a single location irrespective of the number of managed servers.
    I played with creating log filters in WebLogic but it did not help.
    Any ideas?
    1. Using WebLogic logging filters
    2. Using WebLogic logging API
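I haven't found a supported way to change the Log action itself either. One hedged workaround is to bypass it: invoke your own logger from a Java Callout in the proxy, keeping one file handler per proxy service name. A minimal sketch using java.util.logging (the class name, paths, and service names are all illustrative, and the ./logs directory must already exist; pointing it at a shared directory would give a single location across managed servers):

```java
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.logging.FileHandler;
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class PerServiceLogger {
    // One logger (and one log file) per proxy service name, shared across calls.
    private static final Map<String, Logger> LOGGERS = new ConcurrentHashMap<>();

    // Called from a Java Callout: log(proxyServiceName, messageText)
    public static void log(String proxyService, String message) {
        LOGGERS.computeIfAbsent(proxyService, PerServiceLogger::create)
               .info(message);
    }

    private static Logger create(String proxyService) {
        try {
            Logger logger = Logger.getLogger("osb." + proxyService);
            // e.g. ./logs/MyProxyService.log -- the directory must exist;
            // 'true' appends instead of truncating on restart.
            FileHandler handler = new FileHandler("./logs/" + proxyService + ".log", true);
            handler.setFormatter(new SimpleFormatter());
            logger.addHandler(handler);
            logger.setUseParentHandlers(false); // keep it out of ManagedServerN.log
            logger.setLevel(Level.ALL);
            return logger;
        } catch (IOException e) {
            throw new RuntimeException("Could not open log file for " + proxyService, e);
        }
    }
}
```

FileHandler flushes on each publish, so records appear in the file immediately; the trade-off is that every proxy service opens its own file handle on each managed server.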

    Ah! Changing the behavior of the Log action must be a tedious task; so far I could not find any illustrations.
    Some sample code was available on dev2devbea.com, but since the forum is archived, the sample is no longer available for download :-(

  • What order are Archive logs restored in when RMAN recover database issued

    Ok, you have a run block that has restored your level-0 RMAN backup.
    Your base datafiles are down on disc.
    You are about to start recovery to a point in time, let's say until this morning at 07:00am.
    run {
      set until time "TO_DATE('2010/06/08_07:00:00','YYYY/MM/DD_HH24:MI:SS')";
      allocate channel d1 type disk;
      allocate channel d2 type disk;
      allocate channel d3 type disk;
      allocate channel d4 type disk;
      recover database;
    }
    So the above runs; it analyses the earliest SCN required for recovery, checks for incremental backups (none here), works out the archive log range
    required, and starts to restore the archive logs. All as expected, and it works.
    My question: is there a particular order in which RMAN will restore the archive logs, and is the restore / recover process implemented as per the run block?
    i.e. will all archive logs required by the run block be restored first and the database then rolled forward, or does RMAN restore some archive logs, roll forward, then restore some more?
    When we were doing this, the order of the archive logs coming back seemed random, though obviously constrained by the run block. Is this an area we need to tune to get faster recoveries in situations where incrementals are not available?
    Any inputs on experience welcome. I am now drilling into the documentation for any references there.
    Thanks
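    For what it's worth, one hedged way to make the archive log restore step explicit (and to experiment with channel counts separately from the roll-forward) is to restore the log range yourself before recovering. This is only a sketch, with illustrative dates and channel counts:

    ```
    run {
      set until time "TO_DATE('2010/06/08_07:00:00','YYYY/MM/DD_HH24:MI:SS')";
      allocate channel d1 type disk;
      allocate channel d2 type disk;
      -- restore the needed logs up front, in parallel across the channels
      restore archivelog from time "TO_DATE('2010/06/08_00:00:00','YYYY/MM/DD_HH24:MI:SS')";
      recover database;
      release channel d1;
      release channel d2;
    }
    ```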

    Hi there, thanks for the response. I checked this and here are the numbers / timestamps from an example:
    This is from interpreting the list backup of archivelog commands.
    Backupset = 122672
    ==============
    Archive log sequence 120688 low time: 25th May 15:53:07 next time: 25th May 15:57:54
    Piece1 pieceNumber=123368 9th June 04:10:38 <-- catalogued by us.
    Piece2 pieceNumber=122673 25th May 16:05:18 <-- Original backup on production.
    Backupset = 122677
    ==============
    Archive log sequence 120683 low time: 25th May 15:27:50 Next time 25th May 15:32:24 <-- lower sequence number restored after above.
    Piece1 PieceNumber=123372 9th June 04:11:34 <-- Catalogued by us.
    Piece2 PieceNumber=122678 25th May 16:08:45 <-- Original backup on Production.
    So the above shows that with the catalogue command you can influence the piece numbering, and therefore the restore order if, as you say, piece number is the key. I will need to review production to see why they were backed up in a different order there. I would have thought they would use the backup set numbering and then the piece within the set / availability.
    Question: you mention archive logs are restored, applied, and deleted in batches if the volume of archive logs is large enough to be spread over multiple backup sets. What determines the batches in terms of size / number?
    Thanks for the inputs. That answers some of my questions.
