Log mining utility
Hi experts,
Is there any log-mining utility with which we can output data to flat files for an application?
Regards,
SKP
Hi,
Check these out:
http://download.oracle.com/docs/cd/B10500_01/server.920/a96521/logminer.htm
http://www.oracle.com/technology/deploy/availability/htdocs/LogMinerOverview.htm
But I still did not get your question correctly. What exactly do you mean by "output data to flat files for application"?
- Pavan Kumar N
Similar Messages
-
Hi All,
Can anyone tell me why the log mining server is used? And where is it located?
If possible, with an example.
Thanks a lot.
Hi;
Please review:
Master Note for LogMiner [ID 1264738.1]
LogMiner - Frequently Asked Questions (FAQ) [ID 174504.1]
How to Setup LogMiner [ID 111886.1]
The LogMiner Utility [ID 62508.1]
Also see:
http://download.oracle.com/docs/cd/A87860_01/doc/server.817/a76956/archredo.htm#8951
PS: Please don't forget to change the thread status to Answered when you believe your thread has been answered; leaving it open wastes the time of other forum users who are searching for open, unanswered questions. Thanks for understanding.
Regards,
Helios -
Log mining is taking too much time in logical standby database
Dear DBAs,
Today I found a gap between the production database and the logical standby database, and I found that log mining is taking more than 1 hour to complete one archived log (size: 500M).
Note that MAX_SGA is 1500M and MAX_SERVERS=45.
The databases are 10gR2 (10.2.0.5.0) running on a Linux machine (RHEL 4).
Please help.
Thanks in advance,
Elie
Hi,
Can you check MetaLink note [ID 241512.1]?
Thanks -
Hi
How do I use the LogMiner utility to retrieve information from the generated archived logs?
Imran
Find the output below.
SQL> EXECUTE DBMS_LOGMNR_D.BUILD( -
DICTIONARY_FILENAME => 'dictionary.ora', -
DICTIONARY_LOCATION => '/oracle/utl/');
PL/SQL procedure successfully completed.
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/oracle/oradata/arch/1_61_715193506.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
PL/SQL procedure successfully completed.
SQL> BEGIN
DBMS_LOGMNR.start_logmnr (
options => Dbms_Logmnr.DDL_Dict_Tracking);
END;
SELECT OPERATION, SQL_REDO, SQL_UNDO
FROM V$LOGMNR_CONTENTS
WHERE SEG_OWNER='SIFON';
no rows selected
I have deleted a few rows and generated archived logs manually, but the query does not show any rows. Did I miss any steps? Please help me. -
BOMM XI 3.1 - Logs Cleanup Utility - failure
BOMM XI 3.1 on Windows Server 2003
BusinessObjects Enterprise XI 3.1 SP2
I do not succeed in running the Logs Cleanup Utility. This is the error log. I have not modified the MM.properties file.
[C-3027231] 2010-01-06 13:32:15.723 Metadata Management build version 12.1.0.1.
[I-3026012] 2010-01-06 13:32:15.739 Starting utility com.bobj.mm.utility.MMLogsCleanupUtility.
[I-3027129] 2010-01-06 13:32:15.739 Process id = 7824 on machine VWAC106.post.bpgnet.net.
[C-3027100] 2010-01-06 13:32:15.739 Extracting arguments from C:\Program Files\Business Objects\MetadataManagement\MM\bin\..\config\MM.properties.
[E-3026103] 2010-01-06 13:32:15.754 java.lang.NullPointerException
[C-0000000] 2010-01-06 13:32:15.754 java.lang.NullPointerException
at com.bobj.mm.boe.plugin.desktop.configuration.internal.UtilityConfigurationImpl.getPropertiesInBag(UtilityConfigurationImpl.java:171)
at com.bobj.mm.boe.plugin.desktop.configuration.internal.UtilityConfigurationImpl.getUtilityProperties(UtilityConfigurationImpl.java:151)
at com.bobj.mm.core.ProgramConfiguration.parseConfigurationFromInfoStore(ProgramConfiguration.java:573)
at com.bobj.mm.core.ProgramConfiguration.setArgs(ProgramConfiguration.java:366)
at com.bobj.mm.core.Main.handleArgs(Main.java:1222)
at com.bobj.mm.core.Main.run(Main.java:1145)
at com.bobj.mm.core.Main.main(Main.java:229)
[E-3026267] 2010-01-06 13:32:15.754 Error reading configuration with configuration_id = 84653.
[E-3026022] 2010-01-06 13:32:15.770 Utility failed with code 1.
[I-3026014] 2010-01-06 13:32:15.770 Completed execution on 2010-01-06 13:32:15.
[I-3026105] 2010-01-06 13:32:15.770 Elapsed time is 0:00.063 [min:sec.millisec].
Did you upgrade from MM 12.0 to MM 12.1, or is this a new installation of MM 3.1?
If you upgraded from 12.0, then this might be a bug: the MM Cleanup Utility object in the CMS repository didn't get updated during the upgrade. You can try the following workaround:
Go to the <BOE Installation>\BusinessObjects Enterprise 12.0\Packages folder.
Open BusinessObjects_MM_Utility_Log_Cleanup_dfo.xml in a text editor (Notepad).
Change the following:
<propertybag name="SI_MM_UTILITY_PROPERTIES" type="Bag" flags="0">
to
<propertybag name="SI_MM_UTILITY_PROPERTIES" type="Bag" flags="0" version="3000100">
Restart the CMS server. -
How to configure logging for Utils buildFacesMessage
hi
Please consider this example application created using JDeveloper 11.1.1.3.0
at http://www.consideringred.com/files/oracle/2010/UtilsBuildFacesMessageLoggingApp-v0.01.zip
(note, there is no ErrorHandlerClass configured in DataBindings.cpx)
Running the trySomeFailingMethod.jspx page and entering a value starting with "err" will result in an exception being thrown and a message shown.
For such scenario the "IntegratedWebLogicServer - Log" panel shows
<Utils><buildFacesMessage> ADF: Adding the following JSF error message: unable to work with [err not ok value] as pFirstParam
utilsbuildfacesmessageloggingapp.model.exception.MyRuntimeException: unable to work with [err not ok value] as pFirstParam
at utilsbuildfacesmessageloggingapp.model.MyServiceImpl.someFailingMethod(MyServiceImpl.java:27)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at oracle.adf.model.binding.DCInvokeMethod.invokeMethod(DCInvokeMethod.java:567)
<Utils><buildFacesMessage> ADF: Adding the following JSF error message: unable to work with [err not ok value] as pFirstParam
utilsbuildfacesmessageloggingapp.model.exception.MyRuntimeException: unable to work with [err not ok value] as pFirstParam
at utilsbuildfacesmessageloggingapp.model.MyServiceImpl.someFailingMethod(MyServiceImpl.java:27)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at oracle.adf.model.binding.DCInvokeMethod.invokeMethod(DCInvokeMethod.java:567)
question:
- (q1) How do I configure this "Utils buildFacesMessage" to avoid (or change) the logging of the exception (stacktrace) on the "IntegratedWebLogicServer - Log" panel?
many thanks
Jan Vervecken
Thanks for your reply Timo.
Yes, indeed, that is basically "logger" configuration in
[...]\JDeveloper\system11.1.1.3.37.56.60\DefaultDomain\config\fmwconfig\servers\DefaultServer\logging.xml
But an important part of my question (q1), which may not have been very clear, is how to determine what to configure, in this case "oracle.adf.controller.faces.lifecycle.Utils".
But reviewing the log_handler configuration in logging.xml pointed me to ...
[...]\JDeveloper\system11.1.1.3.37.56.60\DefaultDomain\servers\DefaultServer\logs\DefaultServer-diagnostic.log
... which has more detailed information about which code is logging the message
[DefaultServer] [WARNING] [] [oracle.adf.controller.faces.lifecycle.Utils] [tid: [ACTIVE].ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: <anonymous>] [ecid: 0000IjULgG87y0YjLpyGOA1Ckc_900002a,0] [APP: UtilsBuildFacesMessageLoggingApp] [dcid: 4a36ca961f645018:-3ebf9ae5:12bd7e7fe26:-8000-0000000000000174] ADF: Adding the following JSF error message: unable to work with [err not ok value] as pFirstParam[[
utilsbuildfacesmessageloggingapp.model.exception.MyRuntimeException: unable to work with [err not ok value] as pFirstParam
at utilsbuildfacesmessageloggingapp.model.MyServiceImpl.someFailingMethod(MyServiceImpl.java:27)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[...]
So, configuring a logger for "oracle.adf.controller.faces.lifecycle.Utils" works, but there does not seem to be a level that logs just the exception message without the stack trace. It would have been convenient to have a better overview of the (other) log messages without them being cluttered with large stack traces.
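As an aside, since ODL is layered on java.util.logging, the logger identified in the diagnostic log can also be quieted programmatically. A minimal sketch (the level chosen here is an assumption; pick whatever suits your needs):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class QuietAdfUtils {
    public static void main(String[] args) {
        // Raise the threshold on the logger named in the diagnostic log,
        // so its WARNING-level messages (with stack traces) are suppressed.
        Logger utils = Logger.getLogger("oracle.adf.controller.faces.lifecycle.Utils");
        utils.setLevel(Level.SEVERE);
        System.out.println(utils.getLevel()); // prints SEVERE
    }
}
```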
regards
Jan Vervecken -
Hello,
We have a database on which auditing has been activated. Some fraudulent activities have been carried out on that database. The database was not running in archivelog mode. Now management wants information about those activities to be retrieved. The database is 9iR2.
1- Is it possible to get a trace of those fraudulent activities given the situation I described?
2- Some people suggest we exploit the content of the redo logs. Can anyone explain the techniques that we can use to mine the redo logs?
Thanks in advance.
Damorga, Erka, thanks for your replies. The team that was managing the databases wasn't a team of DBAs; they were rather system/network administrators, and we cannot blame them.
New structural changes are being made, and the new team of DBAs was set up about 2 months ago. It's only at that moment that we realized the situation I described above. We turned auditing and archiving on. The problem which brought me to post here happened long before we were hired. That means the content of the redo logs was overwritten every time they were full, but management is putting pressure on us to try and get a trace of those activities. I wanted to confirm that we cannot get those traces in the situation I presented.
Now I would like to know: how far in the past can LogMiner go? I know it works best in archivelog mode, but if we apply LogMiner to archives that are 4 months or more old, can we get significant information about everything that happened during that period?
Edited by: [email protected] on 7 févr. 2009 02:41 -
Getting no output from java.util.logging.FileHandler
I am new to Java as is the company I work for, but we have just landed a contract that specifies J2EE as the platform, so here we are. :-) Please bear with me.
I have been charged with determining our logging architecture and it looks like what is available in java.util.logging will do well (though we may use log4j). However, at this point I am just trying to get anything to work and not having much luck.
We are using JSF on the front end, and I have created a very simple JSF page to test logging. The relevant code is below and I hope it will be self-explanatory. (This code is not meant to be efficient or anything; it is just a proof of concept.)
public String button1_action() {
    // User event code here...
    try {
        Logger l = java.util.logging.Logger.getLogger(Page1.class.getName());
        l.entering(Page1.class.getName(), "button1_action");
        l.info(this.textField1.getValue().toString());
        l.exiting(Page1.class.getName(), "button1_action");
        java.util.logging.Handler h = l.getHandlers()[0];
        h.flush();
    } catch (Exception ex) {
        // I have tested this and we aren't catching any errors.
        System.err.println(ex);
    }
    return "";
}
My logger.properties file looks like this:
handlers= java.util.logging.FileHandler
.level= FINEST
java.util.logging.FileHandler.pattern = c:/sun/logs/test-%u.log
java.util.logging.FileHandler.limit = 50000
java.util.logging.FileHandler.count = 1
java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter
I have developed and tested this in Sun Studio Creator 2004Q2. What is happening is that I am getting three log files in c:/sun/logs:
test-0.log, test-1.log and test-2.log. The first two contain output from various Sun components (sun.rmi.transport, for example). The third log remains empty (zero length).
I have also deployed the test app to a Tomcat 5.0.28 server and get similar results. The only difference is that I get only two log files, and the second one remains empty.
Any assistance or suggestions as to what direction I should be taking would be appreciated.
--Ken
Do not use the default handler via getHandlers()[0]; instead create your own java.util.logging.FileHandler, add it to your logger, and use it to log your button actions. Then you do not need to mess with logger.properties either.
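A minimal sketch of that suggestion (the file name and logger name here are made up for illustration):

```java
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class ButtonLogDemo {
    public static void main(String[] args) throws Exception {
        Logger logger = Logger.getLogger("demo.page1");
        logger.setUseParentHandlers(false);                 // don't also write to the root handlers
        FileHandler fh = new FileHandler("button-actions.log", true); // append mode
        fh.setFormatter(new SimpleFormatter());             // plain text rather than the default XML
        logger.addHandler(fh);
        logger.info("button1_action invoked");
        fh.close();                                         // flush and release the file
    }
}
```

Because the handler is created and attached in code, no entry in logger.properties is needed at all.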
-
How do I log just plain text, not XML, while using java.util.logging?
I tried the following code to log simple messages in my app. However, it logs the messages as XML. I don't want that; plain text will be enough. What should I do? E.g. use some formatter?
import java.util.logging.*;
class LoggerTest {
    public static Logger logger = Logger.getLogger("demo.test");
    public static void main(String args[]) throws Exception {
        FileHandler fh = new FileHandler("log.xml", true);
        logger.addHandler(fh);
        for (int i = 0; i < 100; i++) {
            logger.log(java.util.logging.Level.INFO, "appended");
        }
    }
}
thx!
This post
http://forum.java.sun.com/thread.jsp?thread=319861&forum=31
shows that if you make a properties file (you can name it logging.properties or whatever) with at least the line
java.util.logging.FileHandler.formatter= java.util.logging.SimpleFormatter
and then run with
java -Djava.util.logging.config.file=logging.properties LoggerTest
you will get what you want.
You can set this property programmatically (actually StringBufferInputStream is deprecated, but I am too lazy to write the StringReader version right here):
String my_prop = "java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter";
Properties props = new Properties(System.getProperties());
props.load(new StringBufferInputStream(my_prop));
System.setProperties(props);
but you would need to set the property before you create the formatter.
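A simpler sketch that avoids both the deprecated stream class and the config file entirely: set the formatter directly on the handler before attaching it (the file name here is made up):

```java
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class PlainTextLog {
    public static void main(String[] args) throws Exception {
        Logger logger = Logger.getLogger("demo.test");
        FileHandler fh = new FileHandler("log.txt", true);
        fh.setFormatter(new SimpleFormatter()); // plain text instead of the default XMLFormatter
        logger.addHandler(fh);
        logger.info("appended");
        fh.close();
    }
}
```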
Enabling java.util.logging for Toplink
JDeveloper 10.1.3.0.4 Build 3673 running Embedded OC4J (jdk1.5.0_02).
I'm trying to get Toplink to log to a custom handler (a la java.util.logging) while running Embedded OC4J, but nothing I've tried works.
If I use the java.util.logging.LogManager to loop through and print out all of the existing loggers, none of the 145 listed in the Embedded OC4J environment mention anything about toplink. So presumably Toplink is not running in "java" logging mode. I'm not using toplink workbench which seems to have a UI checkbox to enable java logging, and I haven't found a way to turn it on using JDeveloper. I've tried using the system property -Dtoplink.log.destination=JAVA (no effect) and adding my handler to j2ee-logging.xml (couldn't find my custom logging handler class, and won't help anyway if toplink isn't using java.util.logging), and various attempts at hacking sessions.xml with using log-type and java-log elements (I couldn't get them right--xml parse errors).
What do I need to do to get Toplink to log to a custom handler?
TIA,
Clark
Hi Clark,
You can use either the java log or the server log when running an application in OC4J.
1 When you run a CMP application, use the system property (e.g. -Dtoplink.log.destination=JAVA).
2 When you run a non-CMP server application, use the logging tag in sessions.xml (e.g. <logging xsi:type="server-log"/>).
You have the following options to get TopLink to log to a custom handler:
1 If you want to do the entire configuration from logging.properties, remove all handler/logger declarations from j2ee-logging.xml; in that case all configuration will come from logging.properties.
2 Use a combination of j2ee-logging.xml and logging.properties. You can define certain attributes in j2ee-logging.xml: "name", "class", "level", "errorManager", "filter", "formatter" and "encoding", which correspond to the attributes of java.util.logging.Handler. Attributes in subclasses of Handler are not supported. All other properties for the handler are defined in logging.properties. j2ee-logging.xml is processed on top of logging.properties, which means j2ee-logging.xml takes precedence for those attributes/properties. Take FileHandler as an example:
j2ee-logging.xml:
<log_handlers>
<log_handler name='my-handler' class='java.util.logging.FileHandler'
formatter='java.util.logging.SimpleFormatter' level='INFO'/>
</log_handlers>
logging.properties:
java.util.logging.FileHandler.pattern = %h/java%u.log
java.util.logging.FileHandler.limit = 10000
java.util.logging.FileHandler.count = 2
If there are no properties defined in logging.properties, it will use its default values, which are documented in the FileHandler javadoc.
Shannon -
Please Help! Console Log Attached:
I've been having this problem since I purchased my machine in Oct 07. I've reinstalled my OS (10.4.11) and downloaded all of Apple's updates. Once all of my software was updated, I proceeded to install Final Cut Studio 2. Let me begin by saying that, historically, my problem is related to any given application suddenly quitting. Additionally, I've purchased a new hard drive and installed another copy of my OS without Final Cut Studio, which does not report any problems or errors according to the Console log.
However, the current startup disk I'm working with did in fact report Syndication Error 306 "BestCalendarDateInString", whatever that means. I deleted my "Database 3" file from user/library/syndication and reinstalled Safari 3.1.2. I did this because my research into the topic described the syndication error as a potential problem with Safari collecting and updating RSS feeds. After my Safari reinstall, several restarts, and disk permission repairs, all error messages from the Console log disappeared.
Which brings me to the present time: after installing Final Cut Studio 2, updating all software, and repairing disk permissions, I'm still getting (different now) error messages at startup, as reported by my Console log:
Mac OS X Version 10.4.11 (Build 8S2167)
2008-07-24 23:16:32 -0400
2008-07-24 23:16:32.827 SystemUIServer[88] lang is:en
2008-07-24 23:16:37.666 qmasterd[181] [CDOHostInfoFactory localPortName]: gethostbyname() failed.
gethostbyname: Unknown host
Jul 24 23:16:39 john-rogins-computer crashdump[182]: qmasterd crashed
qmasterd: terminated due to signal().
Jul 24 23:16:39 john-rogins-computer crashdump[182]: crash report written to: /Library/Logs/CrashReporter/qmasterd.crash.log
THIS IS FROM ANOTHER RESTART
Mac OS X Version 10.4.11 (Build 8S2167)
2008-07-24 23:06:55 -0400
2008-07-24 23:06:55.491 SystemUIServer[90] lang is:en
2008-07-24 23:07:00.175 qmasterd[175] [CDOHostInfoFactory localPortName]: gethostbyname() failed.
gethostbyname: Unknown host
Jul 24 23:07:00 john-rogins-computer ntpdate[141]: the NTP socket is in use, exiting
Jul 24 23:07:00 john-rogins-computer ntpd[181]: bind() fd 5, family 2, port 123, addr 0.0.0.0, in_classd=0 flags=8 fails: Address already in use
Jul 24 23:07:00 john-rogins-computer ntpd[181]: bind() fd 5, family 30, port 123, addr ::, in6is_addrmulticast=0 flags=0 fails: Address already in use
Jul 24 23:07:04 john-rogins-computer ntpdate[137]: no server suitable for synchronization found
Jul 24 23:07:04 john-rogins-computer ntpd[202]: bind() fd 7, family 2, port 123, addr 127.0.0.1, in_classd=0 flags=0 fails: Address already in use
Jul 24 23:07:04 john-rogins-computer ntpd[202]: bind() fd 7, family 30, port 123, addr ::1, in6is_addrmulticast=0 flags=0 fails: Address already in use
Jul 24 23:07:04 john-rogins-computer ntpd[202]: bind() fd 7, family 30, port 123, addr fe80:1::1, in6is_addrmulticast=0 flags=0 fails: Address already in use
Jul 24 23:07:04 john-rogins-computer ntpd[202]: bind() fd 7, family 30, port 123, addr fe80:4::217:f2ff:fe0e:4078, in6is_addrmulticast=0 flags=0 fails: Address already in use
Jul 24 23:07:04 john-rogins-computer ntpd[202]: bind() fd 7, family 2, port 123, addr 67.149.67.30, in_classd=0 flags=8 fails: Address already in use
Jul 24 23:07:05 john-rogins-computer ntpd[202]: sendto(17.151.16.21): Bad file descriptor
Disk Utility started.
Repairing permissions for “Untitled”
Determining correct file permissions.
parent directory ./Users/Shared/SC Info does not exist
Permissions repair complete
The privileges have been verified or repaired on the selected volume
Repairing permissions for “Untitled”
Determining correct file permissions.
parent directory ./Users/Shared/SC Info does not exist
Permissions repair complete
The privileges have been verified or repaired on the selected volume
qmasterd: terminated due to signal().
qmasterd: supervisor exiting, shutting down daemon 175.
Jul 24 23:12:16 john-rogins-computer crashdump[287]: qmasterprefs crashed
Jul 24 23:12:17 john-rogins-computer crashdump[287]: crash report written to: /Library/Logs/CrashReporter/qmasterprefs.crash.log
Disk Utility started.
Repairing permissions for “Untitled”
Determining correct file permissions.
parent directory ./Users/Shared/SC Info does not exist
Permissions differ on ./Library, should be drwxrwxr-t , they are drwxrwxr-x
Owner and group corrected on ./Library
Permissions corrected on ./Library
Permissions repair complete
The privileges have been verified or repaired on the selected volume
Repairing permissions for “Untitled”
Determining correct file permissions.
parent directory ./Users/Shared/SC Info does not exist
Permissions repair complete
The privileges have been verified or repaired on the selected volume
Repairing permissions for “Untitled”
Determining correct file permissions.
parent directory ./Users/Shared/SC Info does not exist
Permissions repair complete
The privileges have been verified or repaired on the selected volume
Verifying volume “CHIRON”
Checking HFS Plus volume.
Checking Extents Overflow file.
Checking Catalog file.
Checking multi-linked files.
Checking Catalog hierarchy.
Checking Extended Attributes file.
Checking volume bitmap.
Checking volume information.
The volume CHIRON appears to be OK.
Mounting Disk
1 HFS volume checked
Volume passed verification
Verifying volume “CHIRON”
Checking HFS Plus volume.
Checking Extents Overflow file.
Checking Catalog file.
Checking multi-linked files.
Checking Catalog hierarchy.
Checking Extended Attributes file.
Checking volume bitmap.
Checking volume information.
The volume CHIRON appears to be OK.
Mounting Disk
1 HFS volume checked
Volume passed verification
This worked for me:
http://forums.macrumors.com/showthread.php?t=431587
I did have a Shared folder, but I may have added it manually or something, so I moved it to the desktop and pasted the commands into Terminal (I then typed exit to get out of there).
Good Luck! -
Logging with jdk1.4 - how to add a handler using configuration file
Hi, all
I am playing around with java.util.logging in jdk1.4. In particular, I am using a properties file for configuration. However, one thing I couldn't do is to assign a handler, such as the ConsoleHandler, to the com.xyz.foo logger. Everything for the root logger works just fine. Here's the file I use
handlers= java.util.logging.FileHandler
.level= INFO
java.util.logging.FileHandler.pattern = jdk14.log
java.util.logging.FileHandler.limit = 50000
java.util.logging.FileHandler.count = 1
java.util.logging.FileHandler.formatter = java.util.logging.XMLFormatter
java.util.logging.ConsoleHandler.level = INFO
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
com.xyz.foo.level = WARNING
com.xyz.foo.handlers = java.util.logging.ConsoleHandler
Nothing comes out on the console and everything from the com.xyz.foo logger is logged to jdk14.log file.
Can any one tell me why the last line has no effect?
Thanks much!Logger configuration files are grossly misunderstood due in large part to extremely poor documentation (some of the worst I have ever seen for the Java platform). The LogManager class uses logger configuration files to do three things:
1. Load porperties into a private Properties object that application programmers can subsequently access using the getProperty(String name) method in LogManager.
2. Those properties (or else the documented defaults) are then used to configure the root logger as well as the "global" handlers that are used by the root logger
3. Finally, whenever a logger is created the Properties object is checked to see if a key exists for the logger name + ".limit". If so, then the logger is assigned that level.
Notice that nowhere in here does it say that a programmatically created logger is configured. In your case, you must invoke getProperty("com.xyz.foo.handlers"), parse the property value (which is a bit tricky if there is more than one handler class name), load and instantiate the handler class, and invoke addHandler. Great huh? I'm in the middle of a indepth study of logger configuration, and I can tell you for sure the static configuration using configuration files is an order of magnitude harder than dynamic configuration. It offers the advantage of field service engineers being able to change the logger configuration, but at a very significant cost. -
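That manual wiring can be sketched roughly like this (the helper name is made up; error handling and multi-class edge cases are simplified):

```java
import java.util.logging.Handler;
import java.util.logging.LogManager;
import java.util.logging.Logger;

public class HandlerWiring {
    // Instantiate each handler class listed in a "<logger>.handlers"
    // property value and attach it to the given logger.
    static void wireHandlers(Logger logger, String propValue) throws Exception {
        if (propValue == null) return;
        for (String className : propValue.trim().split("[,\\s]+")) {
            logger.addHandler((Handler) Class.forName(className).newInstance());
        }
    }

    public static void main(String[] args) throws Exception {
        Logger logger = Logger.getLogger("com.xyz.foo");
        // Normally the value comes from the configuration file:
        String value = LogManager.getLogManager().getProperty("com.xyz.foo.handlers");
        if (value == null) value = "java.util.logging.ConsoleHandler"; // fallback for the demo
        wireHandlers(logger, value);
        System.out.println(logger.getHandlers().length);
    }
}
```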
Logging API configuration trouble
Here is my Java code:
private static final String LOGGER_CONFIG_FILE = "java.util.logging.config.file";
private static final String CONFIG_FILE = "c:/logging.properties";
public static synchronized void init() {
    System.setProperty(LOGGER_CONFIG_FILE, CONFIG_FILE);
    try {
        LogManager.getLogManager().readConfiguration();
    } catch (IOException E) {
        System.err.println("Error while reading logging configuration file \"" + CONFIG_FILE + "\": " + E);
    }
}
Here is my logging.properties:
handlers=java.util.logging.ConsoleHandler, java.util.logging.FileHandler
.level=INFO
com.mypackage.level=ALL
java.util.logging.ConsoleHandler.level = WARNING
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
java.util.logging.FileHandler.level = ALL
java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter
java.util.logging.FileHandler.pattern=c:/java%u.log
java.util.logging.FileHandler.limit=50000
java.util.logging.FileHandler.count=1
Now here is my trouble:
When I run the init() method, all is OK.
When I try to do something like MyClass.getLogger().severe(msg); there are error messages in my console:
Can't set level for java.util.logging.ConsoleHandler
Can't set level for java.util.logging.FileHandler
But the fact is that my severe message is actually displayed in the console and in my java0.log file :(
And the second problem: all FileHandler output is in XMLFormatter format, even though the config file specifies SimpleFormatter.
What are the problems, guys? Please help!
Thanks
Help somebody, please!
-
I've been working on archiving a ton of video to DVD. I'm using Dual-Layer discs to minimize the amount of physical storage space needed for the finished project. I burned several discs in iDVD in July and regularly received errors. Then the drive suddenly stopped loading any type of media. Anything I inserted would spin briefly and then eject. Repaired permissions and followed steps outlined here and here. Took the Mac back to the store where I bought it (sadly, not an Apple Store - nearest one is 12 hrs away) where naturally it performed flawlessly for the store rep.
So I took it home and resumed burning, now saving as a disc image (ignoring iDVD's warning about the performance of DVDs burned this way) and using Disk Utility, since iDVD didn't seem to be reliable. Burned about 25 discs, error-free, in this fashion, then three days ago, got the error: "Unable to burn “Disc 19.img”. (The device failed to respond properly, unable to recover or retry.)"
Retried the same image, and it worked. Then burned two more in the same fashion with no errors. Today I got the same error again. Logs are identical for each failed burn, except for the time span between certain lines.
Price of DL media being what it is, I'd hate to keep ruining discs if there's something I can do to avoid it. I suppose it could be bad media, but it seems odd that I'd get two bum discs that close together after burning so many without issue. Checked the site and the forums, and there doesn't seem to be any answers. Does anyone have any ideas? (Other than "try a different brand of media." I've got a buttload of these discs and it's too late to return them now.) Even if it just helps determine where the issue lies. Warranty will be up soon, so if it's hardware related, I need to get it taken care of quick.
Media: Philips DVD+R DL 8x
*Drive info:*
HL-DT-ST DVDRW GS21N:
Firmware Revision: SA15
Interconnect: ATAPI
Burn Support: Yes (Apple Shipping Drive)
Cache: 2048 KB
Reads DVD: Yes
CD-Write: -R, -RW
DVD-Write: -R, -R DL, -RW, +R, +R DL, +RW
Write Strategies: CD-TAO, CD-SAO, CD-Raw, DVD-DAO
Media: Insert media and refresh to show available burn speeds
DiscUtility.log:
2009-10-03 08:54:01 -0600: Burning Image “Disc 22.img”
2009-10-03 08:55:15 -0600: Image name: “Disc 22.img”
2009-10-03 08:55:15 -0600: Burn disc in: “HL-DT-ST DVDRW GS21N”
2009-10-03 08:55:15 -0600: Erase disc before burning: No
2009-10-03 08:55:15 -0600: Leave disc appendable: No
2009-10-03 08:55:15 -0600: Verify burned data after burning: Yes
2009-10-03 08:55:15 -0600: Eject disc after burning
2009-10-03 08:55:15 -0600:
2009-10-03 08:55:15 -0600: Preparing data for burn
2009-10-03 08:56:30 -0600: Opening session
2009-10-03 08:56:35 -0600: Opening track
2009-10-03 08:56:35 -0600: Writing track
2009-10-03 09:01:37 -0600: Finishing burn
2009-10-03 09:02:37 -0600: Closing session
2009-10-03 09:02:38 -0600: Finishing burn
2009-10-03 09:38:37 -0600: Burn failed
2009-10-03 09:38:37 -0600: The device failed to respond properly, unable to recover or retry.
2009-10-03 09:38:37 -0600: Additional information can be found in the ~/Library/Logs/DiscRecording.log log file.
2009-10-03 09:38:37 -0600: Unable to burn “Disc 22.img”. (The device failed to respond properly, unable to recover or retry.)
DiscRecording.log:
Disk Utility: Burn started, Sat Oct 3 08:55:15 2009
Disk Utility: Burning to DVD+R DL (CMC MAG D03) media with DAO strategy in HL-DT-ST DVDRW GS21N SA15 via ATAPI.
Disk Utility: Requested burn speed was 47x, actual burn speed is 4x.
Disk Utility: Burn failed, Sat Oct 3 09:37:15 2009
Disk Utility: Burn sense: 4/44/90 Hardware Error,
Disk Utility: Burn error: 0x80020022 The device failed to respond properly, unable to recover or retry.
Message was edited by: Weasel42
I have encountered the same problem. I have a MacBook Pro from 2007, and it has gone through 3 DVD writers so far, all of which have ruined DVD+R DL disks. I have occasionally been able to burn one, but usually during verify I get this error message:
Communication to the disk drive failed: 0x8002022
And Disk Utility fails with an "Invalid B-tree node size" error.
Oddly, when I try to burn a disk near capacity (6.5 GB or more) it is more likely to fail than if I burn a smaller disk.
I've been using Memorex DVD+R DL disks; I have used other brands in the past, but they have failed too. Out of about 50 disks so far, I have had success with about 5.
Logical standby apply won't apply logs
RDBMS Version: Oracle 10.2.0.2
Operating System and Version: Red Hat Enterprise Linux ES release 4
Error Number (if applicable):
Product (i.e. SQL*Loader, Import, etc.): Oracle Dataguard (Logical Standby)
Product Version:
Hi!
I have a problem: the logical standby apply won't apply logs.
SQL> SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;
TYPE         HIGH_SCN  STATUS
COORDINATOR  288810    ORA-16116: no work available
READER       288810    ORA-16240: Waiting for logfile (thread# 1, sequence# 68)
BUILDER      288805    ORA-16116: no work available
PREPARER     288804    ORA-16116: no work available
ANALYZER     288805    ORA-16116: no work available
APPLIER      288805    ORA-16116: no work available
APPLIER                ORA-16116: no work available
APPLIER                ORA-16116: no work available
APPLIER                ORA-16116: no work available
APPLIER                ORA-16116: no work available
10 rows selected.
SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME, DICT_BEGIN, DICT_END FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;
SEQUENCE#  FIRST_TIM  NEXT_TIME  DIC  DIC
       66  11-JAN-07  11-JAN-07  YES  YES
       67  11-JAN-07  11-JAN-07  NO   NO
SQL> SELECT NAME, VALUE FROM V$LOGSTDBY_STATS WHERE NAME = 'coordinator state';
NAME               VALUE
coordinator state  IDLE
SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;
APPLIED_SCN NEWEST_SCN
288803 288809
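The APPLIED_SCN (288803) trailing the NEWEST_SCN (288809) while the reader sits on ORA-16240 suggests SQL Apply is stalled waiting for sequence 68. A minimal sketch of how to quantify the gap and bounce SQL Apply once the missing log is available (statements assume a 10g logical standby; not verified against this system):

```sql
-- Quantify the apply gap on the logical standby
SELECT applied_scn, newest_scn,
       newest_scn - applied_scn AS scn_gap
  FROM dba_logstdby_progress;

-- Once the missing archived log is back in place, restart SQL Apply
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;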
INITPRIMARY.ORA
DB_NAME=primary
DB_UNIQUE_NAME=primary
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
service_names=primary
instance_name=primary
UNDO_RETENTION=3600
LOG_ARCHIVE_CONFIG='DG_CONFIG=(primary,standy)'
LOG_ARCHIVE_DEST_1=
'LOCATION=/home/oracle/primary/arch1/
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=primary'
LOG_ARCHIVE_DEST_2=
'SERVICE=standy LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=standy'
LOG_ARCHIVE_DEST_3=
'LOCATION=/home/oracle/primary/arch2/
VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)
DB_UNIQUE_NAME=primary'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_DEST_STATE_3=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=standy
FAL_CLIENT=primary
DB_FILE_NAME_CONVERT='standy','primary'
LOG_FILE_NAME_CONVERT=
'/home/oracle/standy/oradata','home/oracle/primary/oradata'
STANDBY_FILE_MANAGEMENT=AUTO
INITSTANDY.ORA
db_name='standy'
DB_UNIQUE_NAME='standy'
REMOTE_LOGIN_PASSWORDFILE='EXCLUSIVE'
SERVICE_NAMES='standy'
LOG_ARCHIVE_CONFIG='DG_CONFIG=(primary,standy)'
DB_FILE_NAME_CONVERT='/home/oracle/primary/oradata','/home/oracle/standy/oradata'
LOG_FILE_NAME_CONVERT=
'/home/oracle/primary/oradata','/home/oracle/standy/oradata'
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_DEST_1=
'LOCATION=/home/oracle/standy/arc/
VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=standy'
LOG_ARCHIVE_DEST_2=
'SERVICE=primary LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=primary'
LOG_ARCHIVE_DEST_3=
'LOCATION=/home/oracle/standy/arch2/
VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)
DB_UNIQUE_NAME=standy'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_DEST_STATE_3=ENABLE
STANDBY_FILE_MANAGEMENT=AUTO
FAL_SERVER=primary
FAL_CLIENT=standy
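With both parameter files shown, a quick sanity check is to confirm that redo is actually reaching each configured destination. This is a generic check, not specific to this configuration:

```sql
-- Run on each site: any row with a non-blank ERROR points at a
-- destination (LOG_ARCHIVE_DEST_n) that is failing to receive redo
SELECT dest_id, status, error
  FROM v$archive_dest_status
 WHERE status <> 'INACTIVE';
```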
Alert log of the "standy" database since SQL Apply startup
Thu Jan 11 15:00:54 2007
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE
Thu Jan 11 15:01:00 2007
alter database add supplemental log data (primary key, unique index) columns
Thu Jan 11 15:01:00 2007
SUPLOG: Updated supplemental logging attributes at scn = 289537
SUPLOG: minimal = ON, primary key = ON
SUPLOG: unique = ON, foreign key = OFF, all column = OFF
Completed: alter database add supplemental log data (primary key, unique index) columns
LOGSTDBY: Unable to register recovery logfiles, will resend
Thu Jan 11 15:01:04 2007
LOGMINER: Error 308 encountered, failed to read missing logfile /home/oracle/standy/arch2/1_68_608031954.arc
Thu Jan 11 15:01:04 2007
LOGMINER: Error 308 encountered, failed to read missing logfile /home/oracle/standy/arch2/1_68_608031954.arc
Thu Jan 11 15:01:04 2007
ALTER DATABASE START LOGICAL STANDBY APPLY (standy)
with optional part
IMMEDIATE
Attempt to start background Logical Standby process
LSP0 started with pid=21, OS id=12165
Thu Jan 11 15:01:05 2007
LOGSTDBY Parameter: DISABLE_APPLY_DELAY =
LOGSTDBY Parameter: REAL_TIME =
Completed: ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE
Thu Jan 11 15:01:07 2007
LOGSTDBY status: ORA-16111: log mining and apply setting up
Thu Jan 11 15:01:07 2007
LOGMINER: Parameters summary for session# = 1
LOGMINER: Number of processes = 3, Transaction Chunk Size = 201
LOGMINER: Memory Size = 30M, Checkpoint interval = 150M
LOGMINER: session# = 1, reader process P000 started with pid=22 OS id=12167
LOGMINER: session# = 1, builder process P001 started with pid=23 OS id=12169
LOGMINER: session# = 1, preparer process P002 started with pid=24 OS id=12171
Thu Jan 11 15:01:17 2007
LOGMINER: Begin mining logfile: /home/oracle/standy/arch2/1_66_608031954.arc
Thu Jan 11 15:01:17 2007
LOGMINER: Turning ON Log Auto Delete
Thu Jan 11 15:01:26 2007
LOGMINER: End mining logfile: /home/oracle/standy/arch2/1_66_608031954.arc
Thu Jan 11 15:01:26 2007
LOGMINER: Begin mining logfile: /home/oracle/standy/arch2/1_67_608031954.arc
Thu Jan 11 15:01:26 2007
LOGMINER: End mining logfile: /home/oracle/standy/arch2/1_67_608031954.arc
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_ATTRCOL$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_CCOL$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_CDEF$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_COL$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_COLTYPE$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_ICOL$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_IND$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_INDCOMPART$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_INDPART$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_INDSUBPART$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_LOB$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_LOBFRAG$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_OBJ$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_TAB$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_TABCOMPART$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_TABPART$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_TABSUBPART$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_TS$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_TYPE$ have been marked unusable
Thu Jan 11 15:01:33 2007
Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_USER$ have been marked unusable
Thu Jan 11 15:02:05 2007
Indexes of table SYSTEM.LOGMNR_ATTRCOL$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_ATTRIBUTE$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_CCOL$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_CDEF$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_COL$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_COLTYPE$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_DICTIONARY$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_ICOL$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_IND$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_INDCOMPART$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_INDPART$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_INDSUBPART$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_LOB$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_LOBFRAG$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_OBJ$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_TAB$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_TABCOMPART$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_TABPART$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_TABSUBPART$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_TS$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_TYPE$ have been rebuilt and are now usable
Indexes of table SYSTEM.LOGMNR_USER$ have been rebuilt and are now usable
LSP2 started with pid=25, OS id=12180
LOGSTDBY Analyzer process P003 started with pid=26 OS id=12182
LOGSTDBY Apply process P008 started with pid=20 OS id=12192
LOGSTDBY Apply process P007 started with pid=30 OS id=12190
LOGSTDBY Apply process P005 started with pid=28 OS id=12186
LOGSTDBY Apply process P006 started with pid=29 OS id=12188
LOGSTDBY Apply process P004 started with pid=27 OS id=12184
Thu Jan 11 15:02:48 2007
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[1]: Assigned to RFS process 12194
RFS[1]: Identified database type as 'logical standby'
Thu Jan 11 15:02:48 2007
RFS LogMiner: Client enabled and ready for notification
Thu Jan 11 15:02:49 2007
RFS LogMiner: RFS id [12194] assigned as thread [1] PING handler
Thu Jan 11 15:02:49 2007
LOGMINER: Begin mining logfile: /home/oracle/standy/arch2/1_66_608031954.arc
Thu Jan 11 15:02:49 2007
LOGMINER: Turning ON Log Auto Delete
Thu Jan 11 15:02:51 2007
LOGMINER: End mining logfile: /home/oracle/standy/arch2/1_66_608031954.arc
Thu Jan 11 15:02:51 2007
LOGMINER: Begin mining logfile: /home/oracle/standy/arch2/1_67_608031954.arc
Thu Jan 11 15:02:51 2007
LOGMINER: End mining logfile: /home/oracle/standy/arch2/1_67_608031954.arc
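Note the "Turning ON Log Auto Delete" lines above: on a logical standby, LogMiner removes archived logs from disk once they have been mined, which is why already-consumed sequences can disappear from the standby's archive directory. If you want mined logs retained, a sketch (APPLY_SET may require SQL Apply to be stopped in 10.2; not verified against this system):

```sql
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
-- Keep mined archived logs on disk instead of letting LogMiner delete them
EXECUTE DBMS_LOGSTDBY.APPLY_SET('LOG_AUTO_DELETE', 'FALSE');
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
```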
Please help me once more!
Thanks.

Hello!
Thank you for the reply.
The archive 1_68_608031954.arc on which the read error occurred did not yet exist at the time of the error; see the listings below:
$ ls -lh /home/oracle/standy/arch2/
total 108M
-rw-r----- 1 oracle oinstall 278K Jan 11 15:00 1_59_608031954.arc
-rw-r----- 1 oracle oinstall 76K Jan 11 15:00 1_60_608031954.arc
-rw-r----- 1 oracle oinstall 110K Jan 11 15:00 1_61_608031954.arc
-rw-r----- 1 oracle oinstall 1.0K Jan 11 15:00 1_62_608031954.arc
-rw-r----- 1 oracle oinstall 2.0K Jan 11 15:00 1_63_608031954.arc
-rw-r----- 1 oracle oinstall 96K Jan 11 15:00 1_64_608031954.arc
-rw-r----- 1 oracle oinstall 42K Jan 11 15:00 1_65_608031954.arc
-rw-r----- 1 oracle oinstall 96M Jan 13 06:10 1_68_608031954.arc
-rw-r----- 1 oracle oinstall 12M Jan 13 13:29 1_69_608031954.arc
$ ls -lh /home/oracle/primary/arch1/
total 112M
-rw-r----- 1 oracle oinstall 278K Jan 11 14:21 1_59_608031954.arc
-rw-r----- 1 oracle oinstall 76K Jan 11 14:33 1_60_608031954.arc
-rw-r----- 1 oracle oinstall 110K Jan 11 14:46 1_61_608031954.arc
-rw-r----- 1 oracle oinstall 1.0K Jan 11 14:46 1_62_608031954.arc
-rw-r----- 1 oracle oinstall 2.0K Jan 11 14:46 1_63_608031954.arc
-rw-r----- 1 oracle oinstall 96K Jan 11 14:55 1_64_608031954.arc
-rw-r----- 1 oracle oinstall 42K Jan 11 14:55 1_65_608031954.arc
-rw-r----- 1 oracle oinstall 4.2M Jan 11 14:56 1_66_608031954.arc
-rw-r----- 1 oracle oinstall 5.5K Jan 11 14:56 1_67_608031954.arc
-rw-r----- 1 oracle oinstall 96M Jan 13 06:09 1_68_608031954.arc
-rw-r----- 1 oracle oinstall 12M Jan 13 13:28 1_69_608031954.arc
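The listings show sequences 66 and 67 still present on the primary but already gone from the standby's arch2 (consistent with Log Auto Delete), while 1_68 only appeared on Jan 13, two days after the read error. One way to reconcile what SQL Apply has registered with what is on disk, and to register a log copied back manually, is sketched below (the path is the one from this thread; not verified against this system):

```sql
-- What SQL Apply has registered, with the file name it expects
SELECT thread#, sequence#, file_name, dict_begin, dict_end
  FROM dba_logstdby_log
 ORDER BY sequence#;

-- After copying the file back from the primary's arch1 directory,
-- make the logical standby aware of it
ALTER DATABASE REGISTER LOGICAL LOGFILE
  '/home/oracle/standy/arch2/1_68_608031954.arc';
```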
Alert log
Thu Jan 11 15:01:00 2007
SUPLOG: Updated supplemental logging attributes at scn = 289537
SUPLOG: minimal = ON, primary key = ON
SUPLOG: unique = ON, foreign key = OFF, all column = OFF
Completed: alter database add supplemental log data (primary key, unique index) columns
LOGSTDBY: Unable to register recovery logfiles, will resend
Thu Jan 11 15:01:04 2007
LOGMINER: Error 308 encountered, failed to read missing logfile /home/oracle/standy/arch2/1_68_608031954.arc
Thu Jan 11 15:01:04 2007
LOGMINER: Error 308 encountered, failed to read missing logfile /home/oracle/standy/arch2/1_68_608031954.arc
Would you know how to help me?
Could this be a bug in Oracle 10g?
Thanks.