[logging] Problems configuring log rotation.
Hi,
I have an .ear application deployed on WebLogic v10.3.1.0.
This application uses java.util.logging to write a log file.
fh = new FileHandler(logFileName,0,1,true);
fh.setFormatter(new XMLFormatter());
logger.addHandler(fh);
FileHandler(String pattern, int limit, int count, boolean append)
pattern - the pattern for naming the output file
limit - the maximum number of bytes to write to any one file. If this is zero, then there is no limit. (Defaults to no limit).
count - the number of files to use
append - specifies append mode
http://www.javadocexamples.com/java/util/logging/java.util.logging.FileHandler.html
logFileName is dynamic, with the date formatted as yyyyMMdd + ApplicationName + ".log".
This file is created, but I also get yyyyMMddSEC.log.1, yyyyMMddSEC.log.2, yyyyMMddSEC.log.3, ...
I DON'T WANT these files; that's why I set limit to 0, count to 1, and append to true.
This code works outside JDeveloper/WebLogic but has no effect in WebLogic.
Q1. Why?
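For what it's worth, here is one plausible explanation, offered as an assumption rather than something confirmed in this thread: java.util.logging falls back to the next "unique" file name (.log.1, .log.2, ...) whenever it cannot obtain the lock on the base file, which typically happens when a FileHandler previously opened on the same pattern (for example by an earlier deployment of the .ear) was never closed. Closing the old handler before opening a new one avoids this. A minimal sketch; the class name, logger name, and file name are illustrative:

```java
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.XMLFormatter;

public class SingleFileLog {
    private static FileHandler fh;

    // Open (or re-open) a single non-rotating log file. Closing the previous
    // handler first releases its .lck file, so java.util.logging can reuse
    // the base file name instead of falling back to pattern.1, pattern.2, ...
    static synchronized Logger open(String logFileName) throws Exception {
        Logger logger = Logger.getLogger("myapp"); // illustrative logger name
        if (fh != null) {
            logger.removeHandler(fh);
            fh.close();
        }
        // limit = 0 (no size limit), count = 1 (one file), append = true
        fh = new FileHandler(logFileName, 0, 1, true);
        fh.setFormatter(new XMLFormatter());
        logger.addHandler(fh);
        return logger;
    }

    public static void main(String[] args) throws Exception {
        String name = System.getProperty("java.io.tmpdir") + "/20100101MyApp.log";
        Logger log = open(name);
        log.info("hello");
    }
}
```

If two handlers genuinely must be open on the same pattern at once (for example, two managed servers), putting %u in the pattern makes the extra generation explicit instead of surprising.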
So I go to Weblogic console: Domain Structure-> DefaultDomain->Logging
Log file name: logs/DefaultDomain.log
Rotation type: None
NONE
Messages accumulate in a single file.
You must erase the contents of the file when the size is too large.
Note that WebLogic Server sets a threshold size limit of 500 MB before it forces a hard rotation to prevent excessive log file growth.
But it doesn't work; WebLogic continues to create log files named <filename>.log.<n>.
Q2. Why?
I have also created a weblogic.xml in ViewControler/WEB-INF,
following this documentation:
http://download.oracle.com/docs/cd/E13222_01/wls/docs103/webapp/weblogic_xml.html#wp1063199
but again, it doesn't work.
Q3. Why?
Q4. If I want applications to manage their own logs, how do I deactivate the logging handler in WebLogic (LogFileMBean?)
Thanks for your help.
You may want to ask in the WebLogic Server - Diagnostics / WLDF / SNMP forum. They own logging.
Similar Messages
-
I am getting this warning on my ASA 5505 when I try to set up logging from my off-site firewall to the central firewall, which is a 5510. What I am trying to do is send the firewall logs through the VPN tunnel to our logging server at 192.168.22.99 behind the central 5510, but allow all other traffic out of the outside interface so customers can hit our web servers down there. Here is an example of my config with fake IPs. I get this error when trying to do "logging inside host 192.168.22.99". If I try to put in "logging Tunnel host 192.168.22.99" instead, I get the "Warning: Security Level is 1" message.
5505
ethe0/0
desc To LA ISP (217.34.122.1)
switchport access vlan2
ethe0/1
desc To Redwood City HQ via VPN Tunnel
switchport access vlan1
ethe0/2
desc To Internal Web Server
switchport access vlan3
VLAN1
desc Tunnel to HQ
ifinterface Tunnel
security level 1
217.34.122.3 255.255.255.248
VLAN3
desc Internal Web Server
ifinterface inside
security level 100
192.168.0.1 255.255.255.0
access-list LosAngeles extended permit ip 192.168.0.0 255.255.255.0 192.168.22.0 255.255.255.0
(No access-group is performed, as I match from the crypto map instead since I have multiple sites going out of HQ - see HQ configs)
route Tunnel 192.168.22.0 255.255.255.0 65.29.211.198
crypto map TO-HQ 10 match address LosAngeles
crypto map TO-HQ set peer ip 65.29.211.198
5510 at HQ
access-list LA extended permit ip 192.168.22.0 255.255.255.0 192.168.0.0 255.255.255.0
(again no access-group, since I have a couple other off sites)
crypto map TO-LA 20 match address LA
crypto map TO-LA 20 set peer ip 217.34.122.3
Hi Jouni,
I have the following configs in place with fake IPs
5505
1 outside interface with security level 0 (vlan1 direct connect to isp 217.33.122.2/30) - goes to ISP
1 Tunnel interface with security level 1 (vlan 2 direct connect to isp 217.33.122.6/30) - goes to Tunnel to our 5510
1 inside interface with security level 100 (servers connected to hub, with vlan3 ip of 192.168.0.1)
access-list LosAngeles extended permit ip 192.168.0.0 255.255.255.0 192.168.22.0 255.255.255.0 - acl to 5510 inside network
route outside 0.0.0.0 0.0.0.0 217.33.122.1 - route for all traffic (except for 192.168.22.0/24) to take the outside connection
route Tunnel 192.168.22.0 255.255.255.0 65.29.211.198 - route for 192.168.22.0 destined traffic to take the Tunnel connection
crypto map TO-HQ 10 match address LosAngeles
crypto map TO-HQ 10 set peer ip 65.29.211.198
tunnel-group 65.29.211.198 type ipsec-l2l
5510
1 outside interface with security level 0 (vlan1 direct connect to isp 65.29.211.198) - goes to isp
1 inside interface with security level 100 (vlan2 connection to corporate servers and SIP 192.168.22.0/24)
access-list LA extended permit ip 192.168.22.0 255.255.255.0 192.168.0.0 255.255.255.0
access-list OUTBOUND extended permit icmp host 217.33.122.6 host 192.168.22.99 (allows the Nagios monitor to ping the DE interface)
access-group OUTBOUND in interface outside
nat (inside,outside) static 192.168.22.99 interface destination static 217.33.122.6
route outside 192.168.0.0 255.255.255.0 217.33.122.6
crypto map TO-LA 20 match address LA
crypto map TO-LA 20 set peer ip 217.33.122.6
tunnel-group 217.33.122.6 type ipsec-l2l
I was mistaken about the 5510 interfaces: they do not use VLANs, and the IP addresses are applied directly to the outside and inside interfaces. -
Problem changing Access Log Configuration
Hi,
I want to change the default configuration for creating access log files in "http-web-site.xml".
After I insert any additional attributes into the <access-log> tag (for example 'split' or 'format') and restart the server, the server no longer generates log files.
Does anyone know about this bug?
I'm running OC4J 9.0.2 (standalone, without Apache).
Any help would be appreciated.
Oliver
Great! Thanks! It worked. I modified server.xml with the following:
<access-log>
<file>../logs/access</file>
<format>%Ses->client.ip% - %Req->vars.auth-user% [%SYSDATE%] "%Req->reqpb.clf-request%" %Req->srvhdrs.clf-status% %Req->srvhdrs.content-length% "%Req->headers.referer%" "%Req->headers.user-agent%"</format>
</access-log>
Then I stopped and restarted the Web server (/bin/stopserv, /bin/startserv). The access log now shows the additional data.
Thanks for the reply! -
How to configure logs and trace files
Hello people,
We have just implemented ESS-MSS. Around 25,000 people use this service, and every two days the logs and trace files on the server fill up and the portal goes down.
Please suggest how to solve this problem. How can I reduce the trace and log files? Is there any configuration or setting for this? Please explain how it can be done.
Biren
Hi,
You can control which messages get logged depending on the severity.
This can be configured using the Log Configurator; see this guide on how to set the severity for different locations.
Netweaver Portal Log Configuration & Viewing (Part 1)
Regards,
Praveen Gudapati -
Java.util.logging - Problem with setting different Levels for each Handler
Hello all,
I am having issues setting up the java.util.logging system to use multiple handlers.
I will paste the relevant code below, but basically I have three Handlers. One is a custom handler that opens a JOptionPane dialog with the specified error; the others are ConsoleHandler and FileHandler. I want Console and File to display ALL levels, and the custom handler to display only SEVERE levels.
As it is now, all log levels are being displayed in the JOptionPane, and the Console is displaying duplicates.
Here is the code that sets up the logger:
logger = Logger.getLogger("lib.srr.applet");
// I have tried both with and without the following statement
logger.setLevel(Level.ALL);
// Log to file for all levels FINER and up
FileHandler fh = new FileHandler("mylog.log");
fh.setFormatter(new SimpleFormatter());
fh.setLevel(Level.FINER);
// Log to console for all levels FINER and up
ConsoleHandler ch = new ConsoleHandler();
ch.setLevel(Level.FINER);
// Log SEVERE levels to the User, through a JOptionPane message dialog
SRRUserAlertHandler uah = new SRRUserAlertHandler();
uah.setLevel(Level.SEVERE);
uah.setFormatter(new SRRUserAlertFormatter());
// Add handlers
logger.addHandler(fh);
logger.addHandler(ch);
logger.addHandler(uah);
logger.info(fh.getLevel().toString() + " -- " + ch.getLevel().toString() + " -- " + uah.getLevel().toString());
logger.info("Logger Initialized.");
Both of those logger.info() calls display in the SRRUserAlertHandler, despite its level being set to SEVERE.
The getLevel() calls display the proper levels: "FINER -- FINER -- SEVERE"
When I start up the applet, I get the following in the console:
Apr 28, 2009 12:01:34 PM lib.srr.applet.SRR initLogger
INFO: FINER -- FINER -- SEVERE
Apr 28, 2009 12:01:34 PM lib.srr.applet.SRR initLogger
INFO: FINER -- FINER -- SEVERE
Apr 28, 2009 12:01:40 PM lib.srr.applet.SRR initLogger
INFO: Logger Initialized.
Apr 28, 2009 12:01:40 PM lib.srr.applet.SRR initLogger
INFO: Logger Initialized.
Apr 28, 2009 12:01:41 PM lib.srr.applet.SRR init
INFO: Preparing Helper Files.
Apr 28, 2009 12:01:41 PM lib.srr.applet.SRR init
INFO: Preparing Helper Files.
Apr 28, 2009 12:01:42 PM lib.srr.applet.SRR init
INFO: Getting PC Name.
Apr 28, 2009 12:01:42 PM lib.srr.applet.SRR init
INFO: Getting PC Name.
Apr 28, 2009 12:01:42 PM lib.srr.applet.SRR init
INFO: Finished Initialization.
Apr 28, 2009 12:01:42 PM lib.srr.applet.SRR init
INFO: Finished Initialization.
Notice they all display twice. Each of them is also shown to the user through the JOptionPane dialogs.
Any ideas how I can properly set this up to send ONLY SEVERE to the user, and FINER and up to the File/Console?
Thanks!
Edit:
Just in case, here is the code for my SRRUserAlertHandler:
public class SRRUserAlertHandler extends Handler {
    public void close() throws SecurityException {}
    public void flush() {}
    public void publish(LogRecord arg0) {
        JOptionPane.showMessageDialog(null, arg0.getMessage());
    }
}
Edited by: compbry15 on Apr 28, 2009 9:44 AM
For now I have fixed the issue of setLevel not working by making a Filter class:
public class SRRUserAlertFilter implements Filter {
    public boolean isLoggable(LogRecord arg0) {
        if (arg0.getLevel().intValue() >= Level.WARNING.intValue()) {
            System.err.println(arg0.getLevel().intValue() + " -- " + Level.WARNING.intValue());
            return true;
        }
        return false;
    }
}
My new SRRUserAlertHandler goes like this now:
public class SRRUserAlertHandler extends Handler {
    public void close() throws SecurityException {}
    public void flush() {}
    public void publish(LogRecord arg0) {
        Filter theFilter = this.getFilter();
        if (theFilter.isLoggable(arg0))
            JOptionPane.showMessageDialog(null, arg0.getMessage());
    }
}
This is ugly as sin, but I cannot require changing an external config file when this is going into an applet.
After much searching around, this logging API is quite annoying at times. I have seen numerous other people run into problems with it not logging specific levels, logging too many levels, etc. A developer should be able to completely configure the system without having to modify external config files.
Does anyone else have another solution? -
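Two assumptions seem worth stating, since the thread never quite names them: (1) a custom Handler is itself responsible for calling isLoggable() inside publish(); the stock ConsoleHandler and FileHandler do this internally, which is why setLevel() appears to work for them but not for a hand-rolled handler. (2) The duplicated console lines come from records propagating up to the root logger's own ConsoleHandler, which setUseParentHandlers(false) switches off. A sketch along those lines (class names simplified); the alert handler collects messages in a list so it can run headless, where the applet would call JOptionPane.showMessageDialog at the marked line:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.ConsoleHandler;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// Surfaces only the records its own level/filter allow.
class UserAlertHandler extends Handler {
    final List<String> shown = new ArrayList<>();

    public void close() {}
    public void flush() {}

    public void publish(LogRecord record) {
        if (!isLoggable(record)) return; // honors setLevel() and setFilter()
        shown.add(record.getMessage());  // applet: JOptionPane dialog here
    }
}

public class LevelsDemo {
    public static void main(String[] args) {
        Logger logger = Logger.getLogger("lib.srr.applet");
        logger.setLevel(Level.ALL);
        logger.setUseParentHandlers(false); // stops records reaching the root
                                            // ConsoleHandler (the duplicates)
        ConsoleHandler ch = new ConsoleHandler();
        ch.setLevel(Level.FINER);

        UserAlertHandler uah = new UserAlertHandler();
        uah.setLevel(Level.SEVERE);

        logger.addHandler(ch);
        logger.addHandler(uah);

        logger.info("Logger Initialized."); // console only
        logger.severe("Something broke.");  // console and alert
        System.out.println(uah.shown);      // prints [Something broke.]
    }
}
```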
Java.io.IOException: Failed to rename log file on attempt to rotate logs
Hello.
I'm currently using Weblogic 5.1 SP6 on WinNT Server 4.0 SP6.
I set up the weblogic.properties file like this so that "access.log" is
rotated every day at midnight.
-- weblogic.properties --
weblogic.httpd.enableLogFile=true
weblogic.httpd.logFileName=D:/WLSlog/access.log
weblogic.httpd.logFileFlushSecs=60
weblogic.httpd.logRotationType=date
weblogic.httpd.logRotationPeriodMins=1440
weblogic.httpd.logRotationBeginTime=11-01-2000-00:00:00
-- weblogic.properties <end>--
The rotation had been working well, but one day when I checked my
weblogic.log, I saw some errors.
I found out that my "access.log" wasn't being rotated (nor written or
flushed) after this error came out.
After rebooting WebLogic, the problem went away.
Does anyone have any clues as to why WebLogic failed to rename the log file?
-- weblogic.log --
? 2 04 00:00:00 JST 2001:<E> <HTTP> Exception flushing HTTP log file
java.io.IOException: Failed to rename log file on attempt to rotate logs
at weblogic.t3.srvr.httplog.LogManagerHttp.rotateLog(LogManagerHttp.java, Compiled Code)
at java.lang.Exception.<init>(Exception.java, Compiled Code)
at java.io.IOException.<init>(IOException.java, Compiled Code)
at weblogic.t3.srvr.httplog.LogManagerHttp.rotateLog(LogManagerHttp.java, Compiled Code)
at weblogic.t3.srvr.httplog.LogManagerHttp.access$2(LogManagerHttp.java:271)
at weblogic.t3.srvr.httplog.LogManagerHttp$RotateLogTrigger.trigger(LogManagerHttp.java:539)
at weblogic.time.common.internal.ScheduledTrigger.executeLocally(ScheduledTrigger.java, Compiled Code)
at weblogic.time.common.internal.ScheduledTrigger.execute(ScheduledTrigger.java, Compiled Code)
at weblogic.time.server.ScheduledTrigger.execute(ScheduledTrigger.java, Compiled Code)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java, Compiled Code)
? 2 04 00:00:25 JST 2001:<E> <HTTP> Exception flushing HTTP log file
java.io.IOException: Bad file descriptor
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java, Compiled Code)
at weblogic.utils.io.DoubleBufferedOutputStream.flushBuffer(DoubleBufferedOutputStream.java, Compiled Code)
at weblogic.utils.io.DoubleBufferedOutputStream.flush(DoubleBufferedOutputStream.java, Compiled Code)
at weblogic.t3.srvr.httplog.LogManagerHttp$FlushLogStreamTrigger.trigger(LogManagerHttp.java, Compiled Code)
at weblogic.time.common.internal.ScheduledTrigger.executeLocally(ScheduledTrigger.java, Compiled Code)
at weblogic.time.common.internal.ScheduledTrigger.execute(ScheduledTrigger.java, Compiled Code)
at weblogic.time.server.ScheduledTrigger.execute(ScheduledTrigger.java, Compiled Code)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java, Compiled Code)
-- weblogic.log <end> --
note:
? 2 04 00:00:25 JST 2001:<E> <HTTP> Exception flushing HTTP log file
java.io.IOException: Bad file descriptor
keeps appearing every minute from then on.
I suppose this is because I have set the HTTP log to be flushed every
minute.
Thanks in advance.
Ryotaro
I'm also getting this error on WebLogic 6.1.1.
It only occurs if you set the format to "extended".
Is there any fix or workaround for this? -
DB2 transaction log problem after setting the DB manually to rollforward
Hello,
I have a problem when setting the DB2 database manually into rollforward mode. After I issue the command db2rfpen ON E35, my database (E35) is in rollforward mode.
Now, with the command "db2 rollforward db E35 query status", I can see the last committed transaction,
but the timestamp of the last committed transaction is 2106-02-07-06.28.15.000000 UTC,
and it is certainly not the year 2106!
Did anyone have a similar issue or a solution for my problem?
Is it possible to manually update the table via SQL? I don't know which table I should update, though.
Thanks in advance.
Robin
Hi,
following is the output of "db2 get snapshot for all databases", because "db2 get snapshot for database E35" does not work.
There, I think, the timestamp is also OK. Thanks for your help!
regards
Robin
Database Snapshot
Database name = E35
Database path = /db2/E35/db2e35/NODE0000/SQL00001/MEMBER0000/
Input database alias =
Database status = Active
Catalog database partition number = 0
Catalog network node name = srvnetapp07
Operating system running at database server= LINUXAMD64
Location of the database = Local
First database connect timestamp = 08/11/2014 12:16:58.594371
Last reset timestamp =
Last backup timestamp = 08/08/2014 09:57:42.000000
Snapshot timestamp = 08/11/2014 12:17:46.494342
Number of automatic storage paths = 4
Automatic storage path = /db2/E35/sapdata1
Node number = 0
State = In Use
File system ID = 64782
Storage path free space (bytes) = 23376351232
File system used space (bytes) = 3045990400
File system total space (bytes) = 26422341632
Automatic storage path = /db2/E35/sapdata2
Node number = 0
State = In Use
File system ID = 64783
Storage path free space (bytes) = 23376351232
File system used space (bytes) = 3045990400
File system total space (bytes) = 26422341632
Automatic storage path = /db2/E35/sapdata3
Node number = 0
State = In Use
File system ID = 64784
Storage path free space (bytes) = 23376351232
File system used space (bytes) = 3045990400
File system total space (bytes) = 26422341632
Automatic storage path = /db2/E35/sapdata4
Node number = 0
State = In Use
File system ID = 64785
Storage path free space (bytes) = 23376351232
File system used space (bytes) = 3045990400
File system total space (bytes) = 26422341632
High water mark for connections = 13
Application connects = 9
Secondary connects total = 11
Applications connected currently = 1
Appls. executing in db manager currently = 0
Agents associated with applications = 11
Maximum agents associated with applications= 13
Maximum coordinating agents = 13
Number of Threshold Violations = 0
Locks held currently = 29
Lock waits = 2
Time database waited on locks (ms) = 56
Lock list memory in use (Bytes) = 77568
Deadlocks detected = 0
Lock escalations = 0
Exclusive lock escalations = 0
Agents currently waiting on locks = 0
Lock Timeouts = 0
Number of indoubt transactions = 0
Total Private Sort heap allocated = 0
Total Shared Sort heap allocated = 6
Shared Sort heap high water mark = 245
Post threshold sorts (shared memory) = 0
Total sorts = 2
Total sort time (ms) = 2
Sort overflows = 0
Active sorts = 0
Buffer pool data logical reads = 68507
Buffer pool data physical reads = 1312
Buffer pool temporary data logical reads = 0
Buffer pool temporary data physical reads = 0
Asynchronous pool data page reads = 899
Buffer pool data writes = 0
Asynchronous pool data page writes = 0
Buffer pool index logical reads = 52871
Buffer pool index physical reads = 449
Buffer pool temporary index logical reads = 0
Buffer pool temporary index physical reads = 0
Asynchronous pool index page reads = 0
Buffer pool index writes = 0
Asynchronous pool index page writes = 0
Buffer pool xda logical reads = 0
Buffer pool xda physical reads = 0
Buffer pool temporary xda logical reads = 0
Buffer pool temporary xda physical reads = 0
Buffer pool xda writes = 0
Asynchronous pool xda page reads = 0
Asynchronous pool xda page writes = 0
Total buffer pool read time (milliseconds) = 4912
Total buffer pool write time (milliseconds)= 0
Total elapsed asynchronous read time = 3219
Total elapsed asynchronous write time = 0
Asynchronous data read requests = 454
Asynchronous index read requests = 0
Asynchronous xda read requests = 0
No victim buffers available = 0
LSN Gap cleaner triggers = 0
Dirty page steal cleaner triggers = 0
Dirty page threshold cleaner triggers = 0
Time waited for prefetch (ms) = 982
Unread prefetch pages = 0
Direct reads = 986
Direct writes = 2
Direct read requests = 77
Direct write requests = 1
Direct reads elapsed time (ms) = 243
Direct write elapsed time (ms) = 0
Database files closed = 0
Host execution elapsed time = 5.785568
Commit statements attempted = 16
Rollback statements attempted = 2
Dynamic statements attempted = 6950
Static statements attempted = 23
Failed statement operations = 0
Select SQL statements executed = 6936
Xquery statements executed = 0
Update/Insert/Delete statements executed = 6
DDL statements executed = 0
Inactive stmt history memory usage (bytes) = 0
Internal automatic rebinds = 0
Internal rows deleted = 0
Internal rows inserted = 0
Internal rows updated = 0
Internal commits = 21
Internal rollbacks = 0
Internal rollbacks due to deadlock = 0
Number of MDC table blocks pending cleanup = 0
Rows deleted = 0
Rows inserted = 0
Rows updated = 13838
Rows selected = 13897
Rows read = 141752
Binds/precompiles attempted = 0
Log space available to the database (Bytes)= 333797877
Log space used by the database (Bytes) = 26523
Maximum secondary log space used (Bytes) = 0
Maximum total log space used (Bytes) = 37070
Secondary logs allocated currently = 0
Log pages read = 0
Log read time (sec.ns) = 0.000000000
Log pages written = 13
Log write time (sec.ns) = 0.006250000
Number write log IOs = 9
Number read log IOs = 0
Number partial page log IOs = 7
Number log buffer full = 0
Log data found in buffer = 0
Log to be redone for recovery (Bytes) = 26523
Log accounted for by dirty pages (Bytes) = 26523
Node number = 0
File number of first active log = 90
File number of last active log = 94
File number of current active log = 90
File number of log being archived = Not applicable
Package cache lookups = 6955
Package cache inserts = 20
Package cache overflows = 0
Package cache high water mark (Bytes) = 1026593
Application section lookups = 20824
Application section inserts = 29
Catalog cache lookups = 740
Catalog cache inserts = 55
Catalog cache overflows = 0
Catalog cache high water mark = 429098
Catalog cache statistics size = 0
Workspace Information
Number of hash joins = 4
Number of hash loops = 0
Number of hash join overflows = 0
Number of small hash join overflows = 0
Post threshold hash joins (shared memory) = 0
Active hash joins = 0
Number of OLAP functions = 0
Number of OLAP function overflows = 0
Active OLAP functions = 0
Statistic fabrications = 0
Synchronous runstats = 0
Asynchronous runstats = 0
Total statistic fabrication time (milliseconds) = 0
Total synchronous runstats time (milliseconds) = 0
Memory usage for database:
Node number = 0
Memory Pool Type = Backup/Restore/Util Heap
Current size (bytes) = 65536
High water mark (bytes) = 65536
Configured size (bytes) = 204800000
Node number = 0
Memory Pool Type = Package Cache Heap
Current size (bytes) = 1179648
High water mark (bytes) = 1179648
Configured size (bytes) = 472317952
Node number = 0
Memory Pool Type = Other Memory
Current size (bytes) = 196608
High water mark (bytes) = 196608
Configured size (bytes) = 20971520
Node number = 0
Memory Pool Type = Catalog Cache Heap
Current size (bytes) = 524288
High water mark (bytes) = 524288
Configured size (bytes) = 10485760
Node number = 0
Memory Pool Type = Buffer Pool Heap
Secondary ID = 1
Current size (bytes) = 494010368
High water mark (bytes) = 494010368
Configured size (bytes) = 494010368
Node number = 0
Memory Pool Type = Buffer Pool Heap
Secondary ID = System 32k buffer pool
Current size (bytes) = 1835008
High water mark (bytes) = 1835008
Configured size (bytes) = 1835008
Node number = 0
Memory Pool Type = Buffer Pool Heap
Secondary ID = System 16k buffer pool
Current size (bytes) = 1572864
High water mark (bytes) = 1572864
Configured size (bytes) = 1572864
Node number = 0
Memory Pool Type = Buffer Pool Heap
Secondary ID = System 8k buffer pool
Current size (bytes) = 1441792
High water mark (bytes) = 1441792
Configured size (bytes) = 1441792
Node number = 0
Memory Pool Type = Buffer Pool Heap
Secondary ID = System 4k buffer pool
Current size (bytes) = 1376256
High water mark (bytes) = 1376256
Configured size (bytes) = 1376256
Node number = 0
Memory Pool Type = Shared Sort Heap
Current size (bytes) = 2359296
High water mark (bytes) = 3932160
Configured size (bytes) = 40435712
Node number = 0
Memory Pool Type = Lock Manager Heap
Current size (bytes) = 159645696
High water mark (bytes) = 159645696
Configured size (bytes) = 159711232
Node number = 0
Memory Pool Type = Database Heap
Current size (bytes) = 87687168
High water mark (bytes) = 87687168
Configured size (bytes) = 119472128
Node number = 0
Memory Pool Type = Application Heap
Secondary ID = 107
Current size (bytes) = 65536
High water mark (bytes) = 65536
Configured size (bytes) = 1048576
Node number = 0
Memory Pool Type = Application Heap
Secondary ID = 106
Current size (bytes) = 65536
High water mark (bytes) = 65536
Configured size (bytes) = 1048576
Node number = 0
Memory Pool Type = Application Heap
Secondary ID = 105
Current size (bytes) = 65536
High water mark (bytes) = 65536
Configured size (bytes) = 1048576
Node number = 0
Memory Pool Type = Application Heap
Secondary ID = 104
Current size (bytes) = 65536
High water mark (bytes) = 65536
Configured size (bytes) = 1048576
Node number = 0
Memory Pool Type = Application Heap
Secondary ID = 103
Current size (bytes) = 65536
High water mark (bytes) = 65536
Configured size (bytes) = 1048576
Node number = 0
Memory Pool Type = Application Heap
Secondary ID = 102
Current size (bytes) = 65536
High water mark (bytes) = 65536
Configured size (bytes) = 1048576
Node number = 0
Memory Pool Type = Application Heap
Secondary ID = 101
Current size (bytes) = 196608
High water mark (bytes) = 196608
Configured size (bytes) = 1048576
Node number = 0
Memory Pool Type = Application Heap
Secondary ID = 100
Current size (bytes) = 65536
High water mark (bytes) = 65536
Configured size (bytes) = 1048576
Node number = 0
Memory Pool Type = Application Heap
Secondary ID = 99
Current size (bytes) = 65536
High water mark (bytes) = 65536
Configured size (bytes) = 1048576
Node number = 0
Memory Pool Type = Application Heap
Secondary ID = 98
Current size (bytes) = 65536
High water mark (bytes) = 65536
Configured size (bytes) = 1048576
Node number = 0
Memory Pool Type = Application Heap
Secondary ID = 97
Current size (bytes) = 131072
High water mark (bytes) = 131072
Configured size (bytes) = 1048576
Node number = 0
Memory Pool Type = Applications Shared Heap
Current size (bytes) = 458752
High water mark (bytes) = 458752
Configured size (bytes) = 81920000 -
Problem applying logs automatically to standby
DBAs,
In my Data Guard setup, the primary database is archiving to the standby location.
But when we verify using the following queries, it shows
that the recently archived log has not been applied.
Also, from the primary it looks like the logs were shipped twice.
I am using Maximum Performance protection mode. Is it necessary to run the
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
command every time to
synchronize the standby with the primary?
Will it be updated automatically whenever the SCN changes?
More specifically: how can I automatically update a standby database
from the primary?
I think the redo logs are not being applied correctly to the standby.
I am sure I configured it as per the documentation.
Can anybody suggest a solution?
I can show you the status of the logs too:
Primary =>
SQL> select SEQUENCE# ,FIRST_TIME , NEXT_TIME ,ARCHIVED,APPLIED,CREATOR
2 from v$archived_log;
SEQUENCE# FIRST_TIM NEXT_TIME ARC APP CREATOR
139 26-FEB-07 26-FEB-07 YES NO ARCH
140 26-FEB-07 26-FEB-07 YES NO ARCH
141 26-FEB-07 26-FEB-07 YES NO ARCH
142 26-FEB-07 27-FEB-07 YES NO ARCH
143 27-FEB-07 07-APR-07 YES NO ARCH
144 07-APR-07 16-MAR-07 YES NO ARCH
145 16-MAR-07 20-MAR-07 YES NO ARCH
146 20-MAR-07 20-MAR-07 YES NO ARCH
147 20-MAR-07 21-MAR-07 YES NO FGRD
148 21-MAR-07 21-MAR-07 YES NO ARCH
149 21-MAR-07 21-MAR-07 YES NO ARCH
SEQUENCE# FIRST_TIM NEXT_TIME ARC APP CREATOR
150 21-MAR-07 21-MAR-07 YES NO FGRD
151 21-MAR-07 22-MAR-07 YES NO ARCH
152 22-MAR-07 22-MAR-07 YES NO ARCH
152 22-MAR-07 22-MAR-07 YES NO ARCH
153 22-MAR-07 22-MAR-07 YES NO FGRD
153 22-MAR-07 22-MAR-07 YES YES FGRD
154 22-MAR-07 22-MAR-07 YES NO ARCH
154 22-MAR-07 22-MAR-07 YES YES ARCH
155 22-MAR-07 24-MAR-07 YES NO FGRD
155 22-MAR-07 24-MAR-07 YES NO FGRD
156 24-MAR-07 24-MAR-07 YES NO ARCH
SEQUENCE# FIRST_TIM NEXT_TIME ARC APP CREATOR
156 24-MAR-07 24-MAR-07 YES YES ARCH
157 24-MAR-07 26-MAR-07 YES NO ARCH
157 24-MAR-07 26-MAR-07 YES NO ARCH
158 26-MAR-07 26-MAR-07 YES NO FGRD
158 26-MAR-07 26-MAR-07 YES NO FGRD
27 rows selected.
Standby =>
SQL> select SEQUENCE# ,FIRST_TIME , NEXT_TIME ,ARCHIVED,APPLIED,CREATOR
2 from v$archived_log;
SEQUENCE# FIRST_TIM NEXT_TIME ARC APP CREATOR
152 22-MAR-07 22-MAR-07 YES YES ARCH
153 22-MAR-07 22-MAR-07 YES YES FGRD
154 22-MAR-07 22-MAR-07 YES YES ARCH
155 22-MAR-07 24-MAR-07 YES YES FGRD
156 24-MAR-07 24-MAR-07 YES YES ARCH
157 24-MAR-07 26-MAR-07 YES NO ARCH
158 26-MAR-07 26-MAR-07 YES NO FGRD
7 rows selected.
SQL> select sequence#, archived, applied
2 from v$archived_log order by sequence#;
SEQUENCE# ARC APP
152 YES YES
153 YES YES
154 YES YES
155 YES YES
156 YES YES
157 YES NO
158 YES NO
7 rows selected.
Regards,
Raj
I have been facing this problem from the very moment I configured Data Guard: when I create some objects on the primary and then check the standby, they are missing there. So I ran all the queries above.
Here are the last few lines from the alert log, but I think they look normal.
Primary =>
Sat Mar 24 15:03:36 2007
ARCH: Beginning to archive log 1 thread 1 sequence 155
Creating archive destination LOG_ARCHIVE_DEST_2: 'stby4'
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/admin/mudra/arch/arch_155.arc'
ARCH: Completed archiving log 1 thread 1 sequence 155
Sat Mar 24 15:21:31 2007
Thread 1 advanced to log sequence 157
Current log# 3 seq# 157 mem# 0: /oracle/oradata/mudra/redo03.log
Sat Mar 24 15:21:31 2007
ARC0: Evaluating archive log 2 thread 1 sequence 156
ARC0: Beginning to archive log 2 thread 1 sequence 156
Creating archive destination LOG_ARCHIVE_DEST_2: 'stby4'
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/admin/mudra/arch/arch_156.arc'
ARC0: Completed archiving log 2 thread 1 sequence 156
Mon Mar 26 14:49:06 2007
Thread 1 advanced to log sequence 158
Current log# 1 seq# 158 mem# 0: /oracle/oradata/mudra/redo01.log
Mon Mar 26 14:49:06 2007
ARC1: Evaluating archive log 3 thread 1 sequence 157
ARC1: Beginning to archive log 3 thread 1 sequence 157
Creating archive destination LOG_ARCHIVE_DEST_2: 'stby4'
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/admin/mudra/arch/arch_157.arc'
ARC1: Completed archiving log 3 thread 1 sequence 157
Mon Mar 26 17:16:15 2007
Thread 1 advanced to log sequence 159
Current log# 2 seq# 159 mem# 0: /oracle/oradata/mudra/redo02.log
Mon Mar 26 17:16:15 2007
ARCH: Evaluating archive log 1 thread 1 sequence 158
Mon Mar 26 17:16:15 2007
ARC0: Evaluating archive log 1 thread 1 sequence 158
ARC0: Unable to archive log 1 thread 1 sequence 158
Log actively being archived by another process
Mon Mar 26 17:16:15 2007
ARCH: Beginning to archive log 1 thread 1 sequence 158
Creating archive destination LOG_ARCHIVE_DEST_2: 'stby4'
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/admin/mudra/arch/arch_158.arc'
ARCH: Completed archiving log 1 thread 1 sequence 158
Standby =>
Also, when I run the query
select SEQUENCE# from v$log_history;
the primary shows SEQUENCE# => 1...158
and the standby shows SEQUENCE# => 1...156 -
Managing and configuring log files for Oracle 9iAS
Hi all,
I'm wondering where I can find documentation on managing and configuring log files like:
ORACLE_HOME/admin/sid/*dump/*
ORACLE_HOME/assistants/opca/install.log
ORACLE_HOME/webcache/logs/*
ORACLE_HOME/dcm/logs/*
ORACLE_HOME/ldap/log/*
ORACLE_HOME/opmn/logs/*
ORACLE_HOME/sysman/log/*
ORACLE_HOME/j2ee/OC4J_instance/log/*/*
ORACLE_HOME/config/schemaload.log
ORACLE_HOME/config/useinfratool.log
because I didn't find anything in documents like:
Oracle9i Application Server
Administrator's Guide
Release 2 (9.0.2)
May 2002
Part No. A92171-02
So, if anyone has any idea...
Thanks in advanceDoes anyone know how or if it is possible to send the stdout and/or stderr to the log4j type of logging? I already capture the stdout and stderr to the flat file. But I would like to timestamp every line to compare and diagnose problems with each application that encounters problems. Each web app is using log4j to their own application log4j log file. When they encounter errors or resource draining problems, I would like to see the container logs and see what was occuring inside the container at around that same time.
Any ideas? -
DAQmx configure logging (TDMS) and counter
Hi,
I'm trying to stream analog data jointly with counter data to one TDMS file with use of DAQmx configure logging vi. This is quite new but powerful function but I can't manage with it successfully. I'm able to acquire analog data only and display no. of counts simultaneously (attached diagram). I have no idea how attach counter data to TDMS stream as separate channel group.
It is important for me to stream data with this VI because I'm sampling analog data at 95 kHz per channel (PCI-6143 board).
Could you post any ideas how to resolve this problem?
Best regards
Mark
Attachments:
acq_analog_and_contr.PNG 44 KB

Eric, here is short info about DAQmx Configure Logging: http://zone.ni.com/devzone/cda/tut/p/id/9574.
Yes, I thought about a producer/consumer structure; that is the first alternative way.
There is also another very simple way - connecting the encoder to an analog input and acquiring it together with the analog signals. Of course it will be necessary to process the signal from the encoder after recording. I'm also considering such a solution.
OK, I have to sleep on this problem
Best regards
Mark -
WebGate: Could not configure log engine (Linux)
After installing the WebGate on Linux AS 3 with Apache 2.x I am getting the error message in apache error.log:
WebGate: Could not configure log engine
whenever I am trying to access http://localhost
Could anyone offer some suggestion on how to fix it?
Thanks, Kris

Thanks for the response. I was using the default version of oblog_config_wg.xml initially. Below is the current one I am using.
I end up with the error in my apache error log every time I navigate to the server from a web browser. The sections I changed are below in bold. Thanks for any insight you can provide.
<CompoundList
xmlns="http://www.oblix.com" ListName="logframework.xml.staging">
<SimpleList>
<NameValPair
ParamName="LOG_THRESHOLD_LEVEL" Value="LOGLEVEL_WARNING">
</NameValPair>
<NameValPair
ParamName="AUTOSYNC" Value="True"></NameValPair>
</SimpleList>
<CompoundList
xmlns="http://www.oblix.com"
ListName="LOG_CONFIG">
<!-- Write all FATAL logs to the system logger. -->
<ValNameList
xmlns="http://www.oblix.com"
ListName="LogFatal2Sys">
<NameValPair
ParamName="LOG_LEVEL"
Value="LOGLEVEL_FATAL">
</NameValPair>
<NameValPair
ParamName="LOG_WRITER"
Value="SysLogWriter">
</NameValPair>
<NameValPair
ParamName="LOG_STATUS"
Value="On">
</NameValPair>
</ValNameList>
<!-- Write all logs to the Oracle log file. -->
<ValNameList
xmlns="http://www.oblix.com"
ListName="LogAll2File">
<NameValPair
ParamName="LOG_LEVEL"
Value="LOGLEVEL_ALL">
</NameValPair>
<NameValPair
ParamName="LOG_WRITER"
Value="*FileLogWriter*">
</NameValPair>
<NameValPair
ParamName="FILE_NAME"
Value="*/usr/local/apache2/logs/webgate.log*">
</NameValPair>
<!-- Buffer up to 128 KB (131072 bytes) of log entries before flushing to the file. -->
<NameValPair
ParamName="BUFFER_SIZE"
Value="131072">
</NameValPair>
<!-- Rotate the log file once it exceeds 50 MB (expressed in bytes). -->
<NameValPair
ParamName="MAX_ROTATION_SIZE"
Value="52428800">
</NameValPair>
<!-- Rotate the log file after 24 hours (expressed in seconds). -->
<NameValPair
ParamName="MAX_ROTATION_TIME"
Value="86400">
</NameValPair>
<NameValPair
ParamName="LOG_STATUS"
Value="On">
</NameValPair>
</ValNameList>
</CompoundList>
</CompoundList> -
Configure Logging VI start and stop without losing or dropping data
Hi there,
I am currently using a M-series PCI-6280 and a counter card PCI-6601 to do measurements on Labview 2010.
I have 3 tasks set up, 2 linear encoder counter tasks on the 6601 and 1 analog input task with 4 x inputs on my 6280. These are all timed with a counter on the 6601 through an RTSI line.
On all of these tasks, I have a similar set-up as the picture in http://zone.ni.com/devzone/cda/tut/p/id/9574 except they are all set up as "Log" only and "create/replace" file for the configure logging.vi.
Since I want the encoders to continuously track my position, I never "stop" the counter tasks. However, after every move along the path I need to use the acquired data to obtain and process the location and the subsequent analog input data points. What I have done is "stop.vi" my sample clock on the counter, then open the TDMS file, read it, get the array from it, close the TDMS file, then "run.vi" the sample clock on the counter. However, I found that when I stop the sample clock, not ALL the data points have been streamed into the file.
For example, say I move my carriage 100 mm to the right while logging, then stop the sample clock, obtain the array from the TDMS file, and note the size of this array, "X", for future use. Then I move the carriage back to the origin WITHOUT starting the clock again. THEN I start the sample clock again and move 100 mm to the left. After this is done, I again stop the sample clock and obtain the data from my TDMS file, starting the read offset at "X+1". I EXPECT that this array SHOULD start at 0 and then move 100 mm to the left. However, what it actually does is start at 100 mm to the right for a few data points, then JUMP to 0, then move to the left as expected.
I propose that this means that when I stop the sample clock, not all the data from the counters (buffer?) has been streamed to the TDMS file yet, and when I start the clock again, the remainder gets dumped in. Here I am confused, since I thought that "configure logging.vi" streams directly into the file without a buffer.
I was wondering if I am doing anything wrong or expecting the wrong things, and if there is a way to implement what I want. That is, is there a way to flush the remaining data in the "buffer" after I stop the clock (I tried the flush TDMS VI, but that didn't work and I didn't think it was relevant), or is my implementation completely off?
Thanks for reading,
Lester
PS. Having a way to read the most recent point from the counters would be awesome too. However, every time I set the offset/relative-to to most current, it says that I can't do that while logging, even though I saw a community post saying that it works with analog inputs. Maybe this is off topic, but currently I'm just trying to get the arrays from TDMS working. THANKS!
Solved! Go to Solution.

Hello, Lester!
There are a few solutions for you here.
First, since you don't need the excess data points, you can set the "DAQmx Read" Property Node to "Overwrite" any data from previous acquisitions.
To do this, first select a "DAQmx Read Property Node" from the functions palette. The first property should default to "RelativeTo." Click on this and instead select "OverWrite Mode" (see attached). Next, right-click on the input, create a constant, and select "Overwrite Unread Samples" from the drop-down enum. After wiring the other inputs and outputs on the node, this should accomplish what you're looking for. If for some reason it doesn't, consider the following options.
Your second option is to limit the buffer size. If you know how long you want to acquire data for, you can use the sampling rate (it looks like 5000Hz, in your case) to determine the number of samples to acquire. Simply use the "DAQmx Buffer Property Node" and input the number of samples desired. When the required number of samples is acquired, the buffer will begin to overwrite the old samples. This, of course, is not a very flexible method, and obviously requires a pre-set number of data points.
Your final option is to do a brute-force open and close of your task with each acquisition. Simply use the "Clear Task" and "Create Task" functions with every new set of data desired. This should clear the buffer, but will be fairly slow (especially if you eventually change your program to take many sets of data very quickly).
As mentioned, the "OverWrite" property in the "DAQmx Read" node should take care of this for you, but feel free to try the other options if they better suit your needs.
Let us know if you have any further questions!
Best,
Will H | Applications Engineer | National Instruments
Will Hilzinger | Switch Product Support Engineer | National Instruments -
Hi,
I got this doubt when searching logs on the servers. I have 2 WFEs in my farm, and I got an error from an end user. In which WFE server do I need to check the logs?
How do I configure the log path? Is it possible to specify our own log path instead of the 14 hive folder?
Badri

That is a really bad idea, especially with idle disconnects and other unreliability of CIFS.
You should instead check out the command Merge-SPLogFiles, which will allow you to combine ULS logs from multiple servers into a single file.
You can certainly specify your own path, but the path must be available on all servers. For example, if you specified D:\Logs, D:\Logs must exist on all SharePoint servers within the farm.
Trevor Seward
This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs. -
Hello!
I want to write 1 digital port from a PXI-6536 with streaming to a TDMS file.
I'm writing with 'DAQmx Configure Logging.vi' and get a TDMS file with 8 boolean channels.
How can I write to 1 integer channel instead?
Attachments:
1.JPG 27 KB

Hey Atrina,
The actual data stored on disk is just the raw data (that is, a byte per sample in your case). It's really just a matter of how that data is being represented in LabVIEW whenever you read back the TDMS file.
I'm not sure if there is a better way to do this, but here is a way to accomplish what you're wanting:
Read back the TDMS file as a digital waveform. Then there's a conversion function in LabVIEW called DWDT Digital to Binary. This function will convert that set of digital channels into the "port format" that you're wanting. I've attached an example of what I mean.
Note: When looking at this VI, there are a few things that the downgrade process did to the VI that I would not recommend for these TDMS files. It added a 1.0 constant on the TDMS Open function, and it set "disable buffering" on the TDMS Open function to false; you can get rid of both of those constants.
Message Edited by AndrewMc on 01-27-2010 11:21 AM
Thanks,
Andy McRorie
NI R&D
Attachments:
digitalconvert.vi 13 KB -
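The "port format" conversion Andy describes is, at bottom, a bit-packing of the 8 boolean line channels into one integer value per sample, with line 0 as the least significant bit. A plain-code sketch of that packing for illustration only - this is not NI API code, and the class and method names are hypothetical:

```java
import java.util.Arrays;

/**
 * Illustrates the bit-packing behind converting N boolean digital-line
 * channels into one integer "port" value per sample (what DWDT Digital
 * to Binary does in LabVIEW). Hypothetical sketch, not NI API code.
 */
public class PortPack {

    /** Packs per-line boolean samples [line][sample] into one integer per sample. */
    public static int[] packPort(boolean[][] lines) {
        int numSamples = lines[0].length;
        int[] port = new int[numSamples];
        for (int s = 0; s < numSamples; s++) {
            int value = 0;
            for (int ch = 0; ch < lines.length; ch++) {
                if (lines[ch][s]) {
                    value |= 1 << ch; // line index = bit position, line 0 is the LSB
                }
            }
            port[s] = value;
        }
        return port;
    }

    public static void main(String[] args) {
        // Two samples on 8 lines: sample 0 has lines 0 and 2 high -> 0b00000101 = 5
        boolean[][] lines = new boolean[8][2];
        lines[0][0] = true;
        lines[2][0] = true;
        lines[7][1] = true; // sample 1: only line 7 high -> 128
        int[] port = packPort(lines);
        System.out.println(Arrays.toString(port)); // prints [5, 128]
    }
}
```

Since the TDMS file stores one raw byte per sample anyway, this is purely a read-back representation issue: the same bits can be viewed as 8 boolean channels or as one integer channel.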
error logging problem:
I would like to implement an error logger that performs the following tasks when an error/exception arises:
- suppress the DacfErrorPopupLogger
- alert the user that an error has occurred with a simplified popup (create a global listener, then use the ErrorAttributes to create the text of the popup)
- log the error in a file with a timestamp and all error information
- later, if the above works, I would like to add the error attributes (timestamp, error type) to an Oracle object / JDev domain.
Questions:
What is the best technique to use: ErrorManager, error logger, or some combination?
How do I use the ErrorManager to register listeners for the errors?
In the following code I am not sure how to access the ErrorAttributes[] array that is returned by loggerReader.getErrors();
Any general tips or pointers to sample code on ErrorManager or associated interfaces will be appreciated.
I used the OutputStreamLogger to write error information to a FileOutputStream, then a LoggerReader to get the error attributes from the file. The reason I went in this direction is that I found some sample code on the OutputStreamLogger.
package DACVideo;

import oracle.dacf.util.errorloggers.*;
import oracle.dacf.util.errormanager.*;
import oracle.dacf.util.errorloggers.InputStreamLoggerReader.ErrorAttributes;
import java.io.*;

/**
 * Custom error logger.
 * @author Adam Maddox
 */
public class ErrorLogger extends Object {

    static OutputStreamLogger logger = null;
    static InputStreamLoggerReader loggerReader = null;

    public ErrorLogger() {
        System.out.println("==============ErrorLogger Created==============");
        // Remove the default error logger (popup logger).
        ErrorManager.removeErrorLogger(ErrorManager.findLoggerByName(DacfErrorPopupLogger.NAME));
        try {
            logger = new OutputStreamLogger(new FileOutputStream("out.dat"));
            loggerReader = new InputStreamLoggerReader(new FileInputStream("out.dat"));
        } catch (java.io.IOException e) {
            System.err.println("Error!");
        }
        try {
            ErrorManager.addErrorLogger(logger);
        } catch (NameAlreadyRegisteredException e) {
            System.err.println("A logger with this name is already registered.");
        }
    }

    private void closeErrorLog() {
        // Close the OutputStream to force flushing.
        logger.closeOutputStream();
        ErrorManager.removeErrorLogger(logger);
    }

    public static void showErrorLog() {
        ErrorAttributes[] errorArray = loggerReader.getErrors(); // <<< CANNOT GET ERROR ATTRIBUTES ??
    }
}
JDev team, could you help?
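As an aside, the timestamped-file requirement alone can be met with plain java.util.logging, the same API used in the original question, rather than round-tripping through OutputStreamLogger and InputStreamLoggerReader. A minimal sketch under that assumption - the class name and message format below are mine, not DACF's:

```java
import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

/**
 * Sketch of a timestamped file error log using plain java.util.logging
 * (not DACF-specific). Each record carries a timestamp, the message,
 * and the full stack trace of the throwable.
 */
public class SimpleErrorLog {
    private static final Logger LOG = Logger.getLogger(SimpleErrorLog.class.getName());

    public static void init(String fileName) throws IOException {
        // One file, no size limit, append mode; SimpleFormatter prefixes
        // each record with a timestamp.
        FileHandler fh = new FileHandler(fileName, 0, 1, true);
        fh.setFormatter(new SimpleFormatter());
        LOG.addHandler(fh);
    }

    public static void logError(Throwable t) {
        // Logs message, timestamp, and stack trace in one call.
        LOG.log(Level.SEVERE, "Application error: " + t.getMessage(), t);
    }
}
```

A popup listener would then hang off the same code path: call logError from the global listener, then build the simplified popup text from the exception.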