Log Files on UNIX
We are trying to debug the Biller Direct application by using either the SAP Logging Framework or standard System.out.println statements. On Windows the logging is written to either C:\usr\sap\AZ1\JC00\j2ee\cluster\server0\log\console_logs\output.log or C:\usr\sap\AZ1\JC00\j2ee\cluster\server0\log\applications\com.sap.fin.ebpp\ebpp.trc.0. On UNIX we do not see anything written to these files. We have verified that the Severity is set to debug and the Log Destination for application_sap.com/com.sap.fin.ebpp_com.sap.fin.destination is ./log/applications/com.sap.fin.ebpp/ebpp.trc.
Does anyone know what the problem is?
Make sure you have the J2EE server set to DEBUG mode in the Config Tool. Also, in the Visual Administrator, under Kernel, make sure the LogManager attribute ForceSingleTraceFile is set to 'NO'. Restart the server and you should see the ebpp.trc file.
Similar Messages
-
How to clear trc and log files in unix
How do I clear the alert log, trace, and diag files on UNIX? The database is 11g R2. Is there any way to set up jobs through EM (we are using 11g EM)?
Create a scripted job to rotate or delete old files, for example those older than one month.
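A minimal sketch of such a cleanup job, assuming a generic trace directory and a 30-day retention (both placeholders; in practice you would point it at your diag destination and schedule it from cron or EM):

```shell
#!/bin/sh
# Sketch: purge trace/log files older than a retention period.
# purge_old_logs DIR DAYS -- deletes *.trc and *.log files in DIR older than DAYS days.
purge_old_logs() {
    dir=$1
    days=$2
    find "$dir" -type f \( -name '*.trc' -o -name '*.log' \) -mtime +"$days" -exec rm -f {} \;
}

# Demo against a scratch directory with one artificially old file.
demo=/tmp/purge_demo
rm -rf "$demo"
mkdir -p "$demo"
touch -t 202001010000 "$demo/old.trc"   # pretend this trace is from 2020
touch "$demo/recent.trc"                # fresh file, should survive
purge_old_logs "$demo" 30
ls "$demo"                              # only recent.trc remains
```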
Cheers -
Redirect DBMS_OUTPUT to calling application and to log file
Hi,
I have a procedure that inserts a set of records into a table using a MERGE statement and captures the inserted record count.
Currently I display the record count using DBMS_OUTPUT in the Oracle SQL Developer tool, using DBMS_OUTPUT.ENABLE.
How do I redirect this output to both the calling application and a log file on the UNIX server?
I have more DBMS_OUTPUT statements in exception handling to handle failed inserts. How do I redirect these statements to the calling application and a log file on UNIX?
Can we send an email to a group from PL/SQL if the program fails and the exception handler is triggered, or if it completes successfully?
I appreciate your responses.
DBMS_OUTPUT is not the correct tool for outputting information. It writes data to a buffer on the server, and it is then up to the client tool to read the data out of that buffer using the DBMS_OUTPUT.GET_LINE call.
You could try implementing something like that in your own application if you wanted, but in truth, if you want to capture a trace of what is happening in your application, you are better off logging those things to a table via an autonomous-transaction procedure, and then having whatever application you want simply query that table. -
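On the UNIX side, one common way to get the same output to both the caller and a log file is to duplicate the client's stdout with tee; a sketch (the sqlplus command in the comment is illustrative only, and a stand-in command produces the demo output):

```shell
#!/bin/sh
# Sketch: duplicate a program's output to both the caller and a log file
# with tee. In practice the producer would be something like:
#   sqlplus -s user/pass @merge_job.sql | tee /var/log/merge_job.log
# Here a stand-in command plays the producer.
LOGFILE=/tmp/merge_job_demo.log

printf '42 rows merged\n' | tee "$LOGFILE"
```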
How to Monitor oracle Application in unix through log files?
Hello Everybody,
I would like to know the log file locations to monitor start-up and shutdown error entries in the logs: for example, where the web server keeps its log files, the forms server, etc.
Is it in $COMMON_TOP? If yes, where?
One last question: what should I do to be a pro-active apps DBA?
Using UNIX (AIX) commands.
Thanks and Regards,
Startup/Shutdown logs
$COMMON_TOP/admin/log/<SID_hostname>
Apache logs
$APACHE_TOP/Apache/Jserv/logs
$APACHE_TOP/Apache/Jserv/logs/jvm
Concurrent Manager Log Files
$APPLCSF/$APPLLOG
$APPLCSF/$APPLOUT
Database Log Files
$ORACLE_HOME/admin/<SID_hostname>/bdump
You can also use Oracle Applications Manager (OAM). With OAM you can view information on general system activity, including the statuses of the database, concurrent managers and other services, concurrent requests, and Oracle Workflow processes. You can also start or stop services, submit concurrent requests, and view configuration information such as initialization parameters and profile options. -
Scrolling SSL message in PS Unix Process Scheduler PSAESRV log file
This message is being spammed to the AESRV log file and it's doing its best to fill up the partition. We recently migrated to new hardware, and it looks like the problem started on day 1 from there. There are no other known issues at the moment. Any idea what's causing this, how to stop it, and/or how to put a muzzle on it?
Here's the line:
PSAESRV.15511 (2) \[08/24/09 00:06:06 <USERNAME>@<SERVER> RunAeAsync2\](2) Invalid or No 'CA' entry in SSL Config File
$ grep "entry in SSL Config File" AESRV_0824.LOG|wc -l
11799608
Environment is HR89 8.47.12
SunOS 5.9
Edited by: user7342576 on Aug 24, 2009 1:51 PM
Where do you upload the root certificates to the DB? I now think this issue may be arising out of some custom encryption that was done a while back and has since progressively gotten worse with higher utilization. Any idea where the root certificate would be found for a custom encryption project?
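Until the root cause is fixed, the spam can at least be trimmed out of the log between rotations; a sketch with an assumed sample log (this treats the symptom only, and the demo path is a placeholder for the live AESRV log):

```shell
#!/bin/sh
# Sketch: squelch one spammed message from a process-scheduler log.
# The sample log path and message text mirror the post; in practice you
# would run this from cron against the live AESRV log.
LOG=/tmp/AESRV_demo.LOG

# Build a small sample log for the demo.
cat > "$LOG" <<'EOF'
PSAESRV.15511 (2) [08/24/09 00:06:06 RunAeAsync2](2) Invalid or No 'CA' entry in SSL Config File
PSAESRV.15511 (2) [08/24/09 00:06:07 RunAeAsync2](2) Normal message we want to keep
PSAESRV.15511 (2) [08/24/09 00:06:08 RunAeAsync2](2) Invalid or No 'CA' entry in SSL Config File
EOF

# Keep every line except the spammed one; write the trimmed log back.
grep -v "entry in SSL Config File" "$LOG" > "$LOG.trimmed" && mv "$LOG.trimmed" "$LOG"

wc -l < "$LOG"   # one line left
```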
-
Startall log file in 11.5.10.2/unix
Hi, gurus:
I should know but I couldn't find the adstrll.sh log file. I searched website and it says under $COMMON_TOP/admin/log/CONTEXT_NAME. What is the name?
Sorry to bother you for such a basic question.
${dd}.log, where dd is mmddhhmm (month, day, hour, minute).
For example:
-rw-r--r-- 1 applmgr dba 4233 Mar 9 18:37 *03091837*.log
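The naming can be reproduced with date, which is handy for locating the current file; a small sketch (the $COMMON_TOP and CONTEXT_NAME path components are placeholders):

```shell
#!/bin/sh
# The startup script names its log after the start time, mmddhhmm.
# Reproducing that stamp lets you point straight at today's file.
dd=$(date +%m%d%H%M)

# COMMON_TOP and CONTEXT_NAME are placeholders for your environment values.
echo "\$COMMON_TOP/admin/log/CONTEXT_NAME/${dd}.log"
```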
Thanks,
Hussein -
Steps to empty SAPDB (MaxDB) log file
Hello All,
I am on Red Hat UNIX with NW 7.1 CE and SAP DB as the back end. I am trying to log in but my log file is full. I want to empty the log file, but I haven't done any data backup yet. Can anybody guide me how to proceed to handle this problem?
I do have some idea what to do like the steps below
1. Take a data backup (but I want to skip this step if possible, since this is a QA system and we are not a production company).
2. Take a log backup using the same method as the data backup, but with the Log type (am I right, or is there something else?).
3. It will automatically overwrite the log after log backups.
Or should I use this as an alternative? I found this in Note 869267 - FAQ: SAP MaxDB LOG area:
Can the log area be overwritten cyclically without having to make a log backup?
Yes, the log area can be automatically overwritten without log backups. Use the DBM command
util_execute SET LOG AUTO OVERWRITE ON
to set this status. The behavior of the database corresponds to the DEMO log mode in older versions. With version 7.4.03 and above, this behavior can be set online.
Log backups are not possible after switching on automatic overwrite. Backup history is broken down and flagged by the abbreviation HISTLOST in the backup history (dbm.knl file). The backup history is restarted when you switch off automatic overwrite without log backups using the command
util_execute SET LOG AUTO OVERWRITE OFF
and by creating a complete data backup in the ADMIN or ONLINE status.
Automatic overwrite of the log area without log backups is NOT suitable for production operation. Since no backup history exists for the following changes in the database, you cannot track transactions in the case of recovery.
Any reply will be highly appreciated.
Thanks
Mani
Hello Mani,
1. Please review the document "Using SAP MaxDB X Server Behind a Firewall" in the MaxDB library:
http://maxdb.sap.com/doc/7_7/44/bbddac91407006e10000000a155369/content.htm
"To enable access to X Server (and thus the database) behind a firewall using a client program such as Database Studio, open the necessary ports in your firewall and restrict access to these ports to only those computers that need to access the database."
Is the database server behind a firewall? If yes, then the X Server port needs to be open. You could restrict access to this port to the computers of your database administrators, for example.
Is "nq2host" the name of the database server? Could you ping to the server "nq2host" from your machine?
2. If the database server and your PC are in the same local area network, you could start the x_server on the database server and connect to the database using DB Studio on your PC, as Lars already told you.
See the document "Network Communication" at
http://maxdb.sap.com/doc/7_7/44/d7c3e72e6338d3e10000000a1553f7/content.htm
Thank you and best regards, Natalia Khlopina -
Alert Email notification for Log file alerts
Hi,
Scenario: SCOM 2012 R2 UR4.
Unix/Linux log file monitoring objects have been created, and in the SCOM console I can view alerts related to Unix/Linux log file monitoring. Email notification is set for severity Warning or Critical and priority Medium or High. The Unix/Linux log file alerts are severity Warning and priority Medium.
In my inbox there are emails for alerts (severity Warning or Critical, priority Medium or High) except for the Unix/Linux monitoring ones.
The question is:
How to enable email notification for unix/linux log file monitoring?
Thanks in advance!
Hello,
If you go into the "Subscription" in the Notifications section of the Operations Console\Administration, you should be able to see the Description of the subscription criteria. Could you copy paste that in a reply?
Thanks,
Kris
www.operatingquadrant.com -
How to see data for particular date from a alert log file
Hi Experts,
I would like to know how I can see data for a particular date from alert_db.log in a UNIX environment. I'm using Oracle 9i on UNIX.
Right now I'm using tail -500 alert_db.log > alert.txt and then viewing the whole thing, but is there any easier way to see a particular date or time?
Thanks
Shaan
Hi Jaffar,
Here I have to pass the exact date and time. Is there any way to see records for, say, Nov 23 2007? Because when I used
tail -500 alert_sid.log | grep " Nov 23 2007" > alert_date.txt
it's not working. Here is the sample log file:
Mon Nov 26 21:42:43 2007
Thread 1 advanced to log sequence 138
Current log# 3 seq# 138 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
Mon Nov 26 21:42:43 2007
ARCH: Evaluating archive log 1 thread 1 sequence 137
Mon Nov 26 21:42:43 2007
ARC1: Evaluating archive log 1 thread 1 sequence 137
ARC1: Unable to archive log 1 thread 1 sequence 137
Log actively being archived by another process
Mon Nov 26 21:42:43 2007
ARCH: Beginning to archive log 1 thread 1 sequence 137
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_137
.dbf'
ARCH: Completed archiving log 1 thread 1 sequence 137
Mon Nov 26 21:42:44 2007
Thread 1 advanced to log sequence 139
Current log# 2 seq# 139 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
Mon Nov 26 21:42:44 2007
ARC0: Evaluating archive log 3 thread 1 sequence 138
ARC0: Beginning to archive log 3 thread 1 sequence 138
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_138
.dbf'
Mon Nov 26 21:42:44 2007
ARCH: Evaluating archive log 3 thread 1 sequence 138
ARCH: Unable to archive log 3 thread 1 sequence 138
Log actively being archived by another process
Mon Nov 26 21:42:45 2007
ARC0: Completed archiving log 3 thread 1 sequence 138
Mon Nov 26 21:45:12 2007
Starting control autobackup
Mon Nov 26 21:45:56 2007
Control autobackup written to SBT_TAPE device
comment 'API Version 2.0,MMS Version 5.0.0.0',
media 'WP0033'
handle 'c-2861328927-20071126-01'
Clearing standby activation ID 2873610446 (0xab47d0ce)
The primary database controlfile was created using the
'MAXLOGFILES 5' clause.
The resulting standby controlfile will not have enough
available logfile entries to support an adequate number
of standby redo logfiles. Consider re-creating the
primary controlfile using 'MAXLOGFILES 8' (or larger).
Use the following SQL commands on the standby database to create
standby redo logfiles that match the primary database:
ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
Tue Nov 27 21:23:50 2007
Starting control autobackup
Tue Nov 27 21:30:49 2007
Control autobackup written to SBT_TAPE device
comment 'API Version 2.0,MMS Version 5.0.0.0',
media 'WP0280'
handle 'c-2861328927-20071127-00'
Tue Nov 27 21:30:57 2007
ARC1: Evaluating archive log 2 thread 1 sequence 139
ARC1: Beginning to archive log 2 thread 1 sequence 139
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_139
.dbf'
Tue Nov 27 21:30:57 2007
Thread 1 advanced to log sequence 140
Current log# 1 seq# 140 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo1.log
Tue Nov 27 21:30:57 2007
ARCH: Evaluating archive log 2 thread 1 sequence 139
ARCH: Unable to archive log 2 thread 1 sequence 139
Log actively being archived by another process
Tue Nov 27 21:30:58 2007
ARC1: Completed archiving log 2 thread 1 sequence 139
Tue Nov 27 21:30:58 2007
Thread 1 advanced to log sequence 141
Current log# 3 seq# 141 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
Tue Nov 27 21:30:58 2007
ARCH: Evaluating archive log 1 thread 1 sequence 140
ARCH: Beginning to archive log 1 thread 1 sequence 140
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_140
.dbf'
Tue Nov 27 21:30:58 2007
ARC1: Evaluating archive log 1 thread 1 sequence 140
ARC1: Unable to archive log 1 thread 1 sequence 140
Log actively being archived by another process
Tue Nov 27 21:30:58 2007
ARCH: Completed archiving log 1 thread 1 sequence 140
Tue Nov 27 21:33:16 2007
Starting control autobackup
Tue Nov 27 21:34:29 2007
Control autobackup written to SBT_TAPE device
comment 'API Version 2.0,MMS Version 5.0.0.0',
media 'WP0205'
handle 'c-2861328927-20071127-01'
Clearing standby activation ID 2873610446 (0xab47d0ce)
The primary database controlfile was created using the
'MAXLOGFILES 5' clause.
The resulting standby controlfile will not have enough
available logfile entries to support an adequate number
of standby redo logfiles. Consider re-creating the
primary controlfile using 'MAXLOGFILES 8' (or larger).
Use the following SQL commands on the standby database to create
standby redo logfiles that match the primary database:
ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
Wed Nov 28 21:43:31 2007
Starting control autobackup
Wed Nov 28 21:43:59 2007
Control autobackup written to SBT_TAPE device
comment 'API Version 2.0,MMS Version 5.0.0.0',
media 'WP0202'
handle 'c-2861328927-20071128-00'
Wed Nov 28 21:44:08 2007
Thread 1 advanced to log sequence 142
Current log# 2 seq# 142 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
Wed Nov 28 21:44:08 2007
ARCH: Evaluating archive log 3 thread 1 sequence 141
ARCH: Beginning to archive log 3 thread 1 sequence 141
Wed Nov 28 21:44:08 2007
ARC1: Evaluating archive log 3 thread 1 sequence 141
ARC1: Unable to archive log 3 thread 1 sequence 141
Log actively being archived by another process
Wed Nov 28 21:44:08 2007
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_141
.dbf'
Wed Nov 28 21:44:08 2007
ARC0: Evaluating archive log 3 thread 1 sequence 141
ARC0: Unable to archive log 3 thread 1 sequence 141
Log actively being archived by another process
Wed Nov 28 21:44:08 2007
ARCH: Completed archiving log 3 thread 1 sequence 141
Wed Nov 28 21:44:09 2007
Thread 1 advanced to log sequence 143
Current log# 1 seq# 143 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo1.log
Wed Nov 28 21:44:09 2007
ARCH: Evaluating archive log 2 thread 1 sequence 142
ARCH: Beginning to archive log 2 thread 1 sequence 142
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_142
.dbf'
Wed Nov 28 21:44:09 2007
ARC0: Evaluating archive log 2 thread 1 sequence 142
ARC0: Unable to archive log 2 thread 1 sequence 142
Log actively being archived by another process
Wed Nov 28 21:44:09 2007
ARCH: Completed archiving log 2 thread 1 sequence 142
Wed Nov 28 21:44:36 2007
Starting control autobackup
Wed Nov 28 21:45:00 2007
Control autobackup written to SBT_TAPE device
comment 'API Version 2.0,MMS Version 5.0.0.0',
media 'WP0202'
handle 'c-2861328927-20071128-01'
Clearing standby activation ID 2873610446 (0xab47d0ce)
The primary database controlfile was created using the
'MAXLOGFILES 5' clause.
The resulting standby controlfile will not have enough
available logfile entries to support an adequate number
of standby redo logfiles. Consider re-creating the
primary controlfile using 'MAXLOGFILES 8' (or larger).
Use the following SQL commands on the standby database to create
standby redo logfiles that match the primary database:
ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
Thu Nov 29 21:36:44 2007
Starting control autobackup
Thu Nov 29 21:42:53 2007
Control autobackup written to SBT_TAPE device
comment 'API Version 2.0,MMS Version 5.0.0.0',
media 'WP0206'
handle 'c-2861328927-20071129-00'
Thu Nov 29 21:43:01 2007
Thread 1 advanced to log sequence 144
Current log# 3 seq# 144 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
Thu Nov 29 21:43:01 2007
ARCH: Evaluating archive log 1 thread 1 sequence 143
ARCH: Beginning to archive log 1 thread 1 sequence 143
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_143
.dbf'
Thu Nov 29 21:43:01 2007
ARC1: Evaluating archive log 1 thread 1 sequence 143
ARC1: Unable to archive log 1 thread 1 sequence 143
Log actively being archived by another process
Thu Nov 29 21:43:02 2007
ARCH: Completed archiving log 1 thread 1 sequence 143
Thu Nov 29 21:43:03 2007
Thread 1 advanced to log sequence 145
Current log# 2 seq# 145 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
Thu Nov 29 21:43:03 2007
ARCH: Evaluating archive log 3 thread 1 sequence 144
ARCH: Beginning to archive log 3 thread 1 sequence 144
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_144
.dbf'
Thu Nov 29 21:43:03 2007
ARC0: Evaluating archive log 3 thread 1 sequence 144
ARC0: Unable to archive log 3 thread 1 sequence 144
Log actively being archived by another process
Thu Nov 29 21:43:03 2007
ARCH: Completed archiving log 3 thread 1 sequence 144
Thu Nov 29 21:49:00 2007
Starting control autobackup
Thu Nov 29 21:50:14 2007
Control autobackup written to SBT_TAPE device
comment 'API Version 2.0,MMS Version 5.0.0.0',
media 'WP0280'
handle 'c-2861328927-20071129-01'
Thanks
Shaan -
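For the date-filtering question above: the literal grep pattern can never match because the alert log timestamp carries the time between the day and the year ("Mon Nov 26 21:42:43 2007"). A sketch that instead tracks the most recent timestamp line with awk and prints all entries for one requested day (the sample log and paths are illustrative):

```shell
#!/bin/sh
# Print alert log lines belonging to entries stamped with a given day.
# Timestamp lines look like "Mon Nov 26 21:42:43 2007"; awk remembers whether
# the last timestamp seen matches the requested day and prints lines while it does.
LOG=/tmp/alert_demo.log
cat > "$LOG" <<'EOF'
Mon Nov 26 21:42:43 2007
Thread 1 advanced to log sequence 138
Tue Nov 27 21:23:50 2007
Starting control autobackup
Tue Nov 27 21:30:49 2007
Control autobackup written to SBT_TAPE device
EOF

show_day() {   # usage: show_day "Nov 27 2007" logfile
    awk -v day="$1" '
        /^(Mon|Tue|Wed|Thu|Fri|Sat|Sun) / {
            # Rebuild "Mon DD YYYY" from the timestamp fields and compare.
            on = (($2 " " $3 " " $5) == day)
        }
        on { print }
    ' "$2"
}

show_day "Nov 27 2007" "$LOG" > /tmp/alert_nov27.txt
cat /tmp/alert_nov27.txt   # only the two Nov 27 entries and their messages
```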
Log.0000000001: log file open failed
I have been seeing an error off and on recently where my app will go along just fine writing to dbxml - and then for no apparent reason, blow up with
log.0000000001: log file open failed: No such file or directory
PANIC: No such file or directory
When I go look - there is indeed no log.00000001 in my dbxml directory.
What is the story with log.00000001? When is it created? What would cause its creation to fail? I have seen this problem on both an XP system and a Unix system.
I think I have made this problem go away by manually creating an empty log.0000001 file before I start my app - but this seems bogus.
Any tips appreciated.
Hi,
If you have multiple applications or processes using Berkeley DB XML (including our utility programs) you may have set up a separate log directory for your log files or they simply were created in another directory. For this reason you may want to consider using a DB_CONFIG file and setting the location for your log files there.
For more information about this please look at these references:
http://www.sleepycat.com/docs/ref/env/db_config.html
DB_CONFIG
http://www.sleepycat.com/docs/api_c/env_set_lg_dir.html
http://www.sleepycat.com/docs/api_c/env_set_data_dir.html
An example for how to insert this information in a DB_CONFIG file is:
set_data_dir datadir
set_lg_dir logdir
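A sketch of putting those directives in place, assuming a demo environment home (substitute your real environment directory, and note that the directories must exist before the environment is opened):

```shell
#!/bin/sh
# Sketch: pin Berkeley DB log and data locations with a DB_CONFIG file, so
# every process opening the environment agrees on where log.* files live.
# ENV_HOME is a demo path; use your real environment home.
ENV_HOME=/tmp/dbxml_env_demo
mkdir -p "$ENV_HOME/logdir" "$ENV_HOME/datadir"

cat > "$ENV_HOME/DB_CONFIG" <<'EOF'
set_lg_dir logdir
set_data_dir datadir
EOF

cat "$ENV_HOME/DB_CONFIG"
```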
Regards,
Ron Cohen
Berkeley DB XML Support
Oracle Corporation -
How to view weblogic log files from a browser
Hi,
I am running WebLogic Server 7.03 on Solaris 8.
I have one Admin and multiple Managed servers running.
Each creates its own log file.
Is there any way I can access these log files from the browser?
In Apache, you can create a link from htdocs dir to the logs dir
and then view the log files from the browser. Is there a similar
mechanism in Weblogic server.
A quick response is well appreciated.
Thanks in advance.
-Anil Varma
If you are on a unix system you can do something similar by making an open directory webapp with symbolic links to the weblogic log directories. I suggest that you protect that webapp with administration access only.
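A sketch of that symlink setup with placeholder paths (the real webapp and log directories depend on your domain layout):

```shell
#!/bin/sh
# Sketch: expose a server's log directory through a webapp directory via a
# symbolic link. Both paths are demo placeholders.
WEBAPP=/tmp/logviewer_webapp
LOGS=/tmp/wl_server_logs
mkdir -p "$WEBAPP" "$LOGS"
echo "sample log line" > "$LOGS/myserver.log"

# Link the log directory into the open-directory webapp.
rm -f "$WEBAPP/logs"
ln -s "$LOGS" "$WEBAPP/logs"

cat "$WEBAPP/logs/myserver.log"   # reachable through the webapp path
```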
Sam
-
How to configure Log file generation
Hi,
I am in a migration project. Currently the OS is UNIX; after migration it will be Windows.
So we want the log files currently being created on UNIX to be created on Windows instead.
Can anyone suggest any settings in SAP for the log files?
Regards,
Gijoy
Hi Gijoy,
can you please reformulate your question for better understanding?
The log location and tracing severity setup mechanism is platform independent.
After the migration there are no necessary steps to take; the logs will be created the same way on Windows as on UNIX, under your current SAP installation folder (e.g. defaultTrace is under /usr/sap/.../j2ee/cluster/server<n>/log on UNIX; on Windows this will be <DRIVE:>\usr\sap\...\j2ee\cluster\server<n>\log).
I hope this answers your question.
Best Regards,
Ervin -
Hi,
I am trying to log in with the user name and password, but I am not able to log in to the server.
The main issue is that the database is not created under the standard path. I want to find the trace files on UNIX. How is that possible?
Thanks and Regards.
If you are able to log in to the database, then check the values of these parameters, which control where the trace files are created (user_dump_dest, background_dump_dest, db_recovery_file_dest).
Also use the unix 'find' command to search for the alert log file and .trc files.
http://content.hccfl.edu/pollock/unix/findcmd.htm
http://javarevisited.blogspot.com/2011/03/10-find-command-in-unix-examples-basic.html -
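A sketch of such a find-based search, run here against a throwaway directory tree standing in for a non-standard database home:

```shell
#!/bin/sh
# Sketch: locate alert log and trace files when the database was not created
# under the standard path. The search root and results file are demo placeholders.
ROOT=/tmp/oradata_demo
rm -rf "$ROOT"
mkdir -p "$ROOT/nonstandard/trace"
touch "$ROOT/nonstandard/trace/alert_ORCL.log" "$ROOT/nonstandard/trace/ora_1234.trc"

# Find alert logs and .trc files anywhere under the search root.
find "$ROOT" -type f \( -name 'alert_*.log' -o -name '*.trc' \) > /tmp/found_traces.txt
sort /tmp/found_traces.txt
```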
Why is there no error when checkpointing after db log files are removed?
I would like to test a scenario when an application's embedded database is corrupted somehow. The simplest test I could think of was removing the database log files while the application is running. However, I can't seem to get any failure. To demonstrate, below is a code snippet that demonstrates what I am trying to do. (I am using JE 3.3.75 on Mac OS 10.5.6):
import java.io.File;
import java.io.FilenameFilter;
import com.sleepycat.je.*;

public class FileRemovalTest {
    public static void main(String[] args) throws Exception {
        // Set up the DB environment
        EnvironmentConfig ec = new EnvironmentConfig();
        ec.setAllowCreate(true);
        ec.setTransactional(true);
        ec.setConfigParam(EnvironmentConfig.ENV_RUN_CLEANER, "false");
        ec.setConfigParam(EnvironmentConfig.ENV_RUN_CHECKPOINTER, "false");
        ec.setConfigParam(EnvironmentConfig.CLEANER_EXPUNGE, "true");
        ec.setConfigParam("java.util.logging.FileHandler.on", "true");
        ec.setConfigParam("java.util.logging.level", "FINEST");
        Environment env = new Environment(new File("."), ec);
        // Create a database
        DatabaseConfig dbConfig = new DatabaseConfig();
        dbConfig.setAllowCreate(true);
        dbConfig.setTransactional(true);
        Database db = env.openDatabase(null, "test", dbConfig);
        // Insert an entry and checkpoint the database
        db.put(
            null,
            new DatabaseEntry("key".getBytes()),
            new DatabaseEntry("value".getBytes()));
        CheckpointConfig checkpointConfig = new CheckpointConfig();
        checkpointConfig.setForce(true);
        env.checkpoint(checkpointConfig);
        // Delete the DB log files
        File[] dbFiles = new File(".").listFiles(new DbFilenameFilter());
        if (dbFiles != null) {
            for (File file : dbFiles) {
                file.delete();
            }
        }
        // Add another entry and checkpoint the database again.
        db.put(
            null,
            new DatabaseEntry("key2".getBytes()),
            new DatabaseEntry("value2".getBytes()));
        // Q: Why does this 'put' succeed?
        env.checkpoint(checkpointConfig);
        // Q: Why does this checkpoint succeed?
        // Close the database and the environment
        db.close();
        env.close();
    }

    private static class DbFilenameFilter implements FilenameFilter {
        public boolean accept(File dir, String name) {
            return name.endsWith(".jdb");
        }
    }
}
This is what I see in the logs:
2009-03-05 12:53:30:631:CST CONFIG Recovery w/no files.
2009-03-05 12:53:30:677:CST FINER Ins: bin=2 ln=1 lnLsn=0x0/0xe9 index=0
2009-03-05 12:53:30:678:CST FINER Ins: bin=5 ln=4 lnLsn=0x0/0x193 index=0
2009-03-05 12:53:30:688:CST FINE Commit:id = 1 numWriteLocks=1 numReadLocks = 0
2009-03-05 12:53:30:690:CST FINEST size interval=0 lastCkpt=0x0/0x0 time interval=0 force=true runnable=true
2009-03-05 12:53:30:703:CST FINER Ins: bin=8 ln=7 lnLsn=0x0/0x48b index=0
2009-03-05 12:53:30:704:CST CONFIG Checkpoint 1: source=recovery success=true nFullINFlushThisRun=4 nDeltaINFlushThisRun=0
2009-03-05 12:53:30:705:CST CONFIG Recovery finished: Recovery Infonull> useMinReplicatedNodeId=0 useMaxNodeId=0 useMinReplicatedDbId=0 useMaxDbId=0 useMinReplicatedTxnId=0 useMaxTxnId=0 numMapINs=0 numOtherINs=0 numBinDeltas=0 numDuplicateINs=0 lnFound=0 lnNotFound=0 lnInserted=0 lnReplaced=0 nRepeatIteratorReads=0
2009-03-05 12:53:30:709:CST FINEST Environment.open: name=test dbConfig=allowCreate=true
exclusiveCreate=false
transactional=true
readOnly=false
duplicatesAllowed=false
deferredWrite=false
temporary=false
keyPrefixingEnabled=false
2009-03-05 12:53:30:713:CST FINER Ins: bin=2 ln=10 lnLsn=0x0/0x7be index=1
2009-03-05 12:53:30:714:CST FINER Ins: bin=5 ln=11 lnLsn=0x0/0x820 index=1
2009-03-05 12:53:30:718:CST FINE Commit:id = 2 numWriteLocks=0 numReadLocks = 0
2009-03-05 12:53:30:722:CST FINEST Database.put key=107 101 121 data=118 97 108 117 101
2009-03-05 12:53:30:728:CST FINER Ins: bin=13 ln=12 lnLsn=0x0/0x973 index=0
2009-03-05 12:53:30:729:CST FINE Commit:id = 3 numWriteLocks=1 numReadLocks = 0
2009-03-05 12:53:30:729:CST FINEST size interval=0 lastCkpt=0x0/0x581 time interval=0 force=true runnable=true
2009-03-05 12:53:30:735:CST FINER Mod: bin=5 ln=4 lnIdx=0 oldLnLsn=0x0/0x193 newLnLsn=0x0/0xb61
2009-03-05 12:53:30:736:CST FINER Mod: bin=5 ln=11 lnIdx=1 oldLnLsn=0x0/0x820 newLnLsn=0x0/0xc3a
2009-03-05 12:53:30:737:CST FINER Ins: bin=8 ln=15 lnLsn=0x0/0xd38 index=0
2009-03-05 12:53:30:738:CST CONFIG Checkpoint 2: source=api success=true nFullINFlushThisRun=6 nDeltaINFlushThisRun=0
2009-03-05 12:53:30:741:CST FINEST Database.put key=107 101 121 50 data=118 97 108 117 101 50
2009-03-05 12:53:30:742:CST FINER Ins: bin=13 ln=16 lnLsn=0x0/0xeaf index=1
2009-03-05 12:53:30:743:CST FINE Commit:id = 4 numWriteLocks=1 numReadLocks = 0
2009-03-05 12:53:30:744:CST FINEST size interval=0 lastCkpt=0x0/0xe32 time interval=0 force=true runnable=true
2009-03-05 12:53:30:746:CST FINER Mod: bin=5 ln=4 lnIdx=0 oldLnLsn=0x0/0xb61 newLnLsn=0x0/0x1166
2009-03-05 12:53:30:747:CST FINER Mod: bin=5 ln=11 lnIdx=1 oldLnLsn=0x0/0xc3a newLnLsn=0x0/0x11e9
2009-03-05 12:53:30:748:CST FINER Ins: bin=8 ln=17 lnLsn=0x0/0x126c index=0
2009-03-05 12:53:30:748:CST CONFIG Checkpoint 3: source=api success=true nFullINFlushThisRun=4 nDeltaINFlushThisRun=0
2009-03-05 12:53:30:750:CST FINEST Database.close: name=test
2009-03-05 12:53:30:751:CST FINE Close of environment . started
2009-03-05 12:53:30:751:CST FINEST size interval=0 lastCkpt=0x0/0x1363 time interval=0 force=true runnable=true
2009-03-05 12:53:30:754:CST FINER Mod: bin=5 ln=4 lnIdx=0 oldLnLsn=0x0/0x1166 newLnLsn=0x0/0x14f8
2009-03-05 12:53:30:755:CST FINER Mod: bin=5 ln=11 lnIdx=1 oldLnLsn=0x0/0x11e9 newLnLsn=0x0/0x15a9
2009-03-05 12:53:30:756:CST FINER Ins: bin=8 ln=18 lnLsn=0x0/0x16ab index=0
2009-03-05 12:53:30:757:CST CONFIG Checkpoint 4: source=close success=true nFullINFlushThisRun=4 nDeltaINFlushThisRun=0
2009-03-05 12:53:30:757:CST FINE About to shutdown daemons for Env .
Hi,
OS X, being Unix-like, probably isn't actually deleting file 00000000.jdb since JE still has it open -- the file deletion is deferred until it is closed. JE keeps N files open, where N is configurable.
We do corruption testing ourselves, in the following test by overwriting a file and then attempting to read back the entire database:
test/com/sleepycat/je/util/DbScavengerTest.java
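The deferred-deletion behaviour described above is easy to demonstrate from the shell: a file unlinked while a descriptor is still open on it remains readable through that descriptor, which is why JE kept writing without complaint:

```shell
#!/bin/sh
# Demonstrate Unix deferred deletion: an unlinked file stays readable
# through a descriptor that was opened before the unlink.
f=/tmp/unlink_demo.jdb
out=/tmp/unlink_demo_out.txt
echo "still here" > "$f"

exec 3< "$f"       # open a read descriptor on the file
rm "$f"            # unlink it; the inode survives while fd 3 is open

cat <&3 > "$out"   # the data is still reachable through fd 3
exec 3<&-          # close the descriptor; now the storage is released
cat "$out"         # prints "still here"
```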
--mark -
Errors appeared in alert log file should send an email
Hi,
I have one requirement to do, as i am new dba, i dont know how to do this,
sendmail is configured on my UNIX machine, but I don't know how to email the errors that appear in the alert log file to the administrator.
Daily, it has to check for errors in the alert log file; if any errors occur (ORA- errors or WARNING errors) it should send an email to the administrator.
Please help me figure out how to do it.
Hi,
There are many methods for interrogating the alert log and sending e-mail. Here are my notes:
http://www.dba-oracle.com/t_alert_log_monitoring_errors.htm
- PL/SQL or Java with e-mail alert: http://www.dba-village.com/village/dvp_papers.PaperDetails?PaperIdA=2383
- Shell script - Database independent and on same level as alert log file and e-mail.
- OEM - Too inflexible for complex alert log analysis rules
- SQL against the alert log - You can define the alert log file as an external table and detect messages with SQL and then e-mail.
Because the alert log is a server-side flat file and because e-mail is also at the OS level, I like to use a server-side shell script. It's also far more robust than OEM, especially when combining and evaluating multiple alert log events. Jon Emmons has great notes on this:
http://www.lifeaftercoffee.com/2007/12/04/when-to-use-shell-scripts/
If you are on Windows, see here:
http://www.dba-oracle.com/t_windows_alert_log_script.htm
For UNIX, Linux, Jon Emmons has a great alert log e-mail script.
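A minimal sketch of that kind of script: pull ORA-/WARNING lines from the alert log into a report that would then be mailed (paths are placeholders, and the mail step is left as a comment rather than executed):

```shell
#!/bin/sh
# Sketch of the shell-script approach: collect ORA-/WARNING lines from the
# alert log and hand them to mail. Paths are placeholders; here the errors
# are written to a report file instead of being mailed.
ALERT=/tmp/alert_demo2.log
REPORT=/tmp/alert_errors.txt

# Sample alert log content for the demo.
cat > "$ALERT" <<'EOF'
Thread 1 advanced to log sequence 200
ORA-00600: internal error code
Completed checkpoint
ORA-01555: snapshot too old
EOF

grep -E 'ORA-|WARNING' "$ALERT" > "$REPORT"

# In a real cron job you would then do something like:
#   mailx -s "Alert log errors on $(hostname)" dba@example.com < "$REPORT"
wc -l < "$REPORT"   # two error lines captured
```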
Hope this helps . . . .
Donald K. Burleson
Oracle Press author