Database log files

Hi all.
I am new to Oracle and to databases in general and need some help. I was wondering whether the SQL queries issued to an Oracle database instance by a particular user, through Oracle Forms or SQL*Plus, are logged somewhere? And if so, where?
Any information is appreciated. Thanks

Hi!
No, the queries aren't logged anywhere by default. But you can use Fine-Grained Auditing (FGA) to log access to sensitive data; for more information on FGA, see the Security Guide in the Oracle documentation.
Out of the box, only DML statements (INSERT, UPDATE, DELETE) leave a trace, and that trace is in the online and archived redo logs.
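If you want to try FGA, here is a minimal sketch of adding a policy (the HR.EMPLOYEES table and the policy name are placeholders for your own objects, not anything from your system):
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'HR',               -- placeholder schema
    object_name     => 'EMPLOYEES',        -- placeholder table
    policy_name     => 'EMP_SELECT_AUDIT', -- any name you like
    statement_types => 'SELECT');          -- audit SELECTs against this table
END;
/
Audited statements, including their SQL text, then show up in the DBA_FGA_AUDIT_TRAIL view.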
Regards,
Petra

Similar Messages

  • Database Log file Shrink information

    Hello Team,
Database log file shrink information, due to a space problem:
One of my databases is 600 GB, and its log file is 260 GB. On this database we take a full backup daily and no log backups. If we shrink the log file, will there be any impact? What happens internally when we shrink the log file?
Another of my databases is also 600 GB with a 260 GB log file; on this one we take a full backup daily and a log backup every 15 minutes. In this scenario, what happens if we shrink the log file?

    Hello,
You should not try to shrink the log file regularly, because it is resource-intensive, takes a lot of time, and creates fragmentation on the disk storage.
If you don't back up the log files of a database (option 1), you don't have the ability to restore to a point in time between full and differential backups. If you don't want to back up the log files, then it makes sense to change the recovery model to SIMPLE.
Backing up the logs regularly minimizes the risk of the logs getting full and extending their size.
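For the second scenario, scheduled log backups along these lines keep the log truncated so it stops growing (the database name and path are placeholders):
-- hypothetical names and path; schedule it e.g. every 15 minutes
BACKUP LOG MyDatabase
TO DISK = N'E:\Backups\MyDatabase_log.trn';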
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • Crystal Report Server Database Log File Growth Out Of Control?

We are hosting Crystal Report Server 11.5 on Microsoft SQL Server 2005 Enterprise. Our Crystal Report Server SQL 2005 database file size = 6,272 KB, and the log file that goes with the database has a size = 23,839,552.
I have been reviewing the application logs, and this log file is auto-growing about three times a week.
We back up the database each night and run maintenance routines to check database integrity, reorganize indexes, rebuild indexes, update statistics, and back up the database.
Is it "normal" to have such a large LOG file compared to the DATABASE file?
Can you tell me if there is a recommended way to SHRINK the log file?
Some technical documents suggest first truncating the log and then using the DBCC SHRINKFILE command:
    USE CRS
    GO
    --Truncate the log by changing the database recovery model to SIMPLE
    ALTER DATABASE CRS
    SET RECOVERY SIMPLE;
    --Shrink the truncated log file to 1 gigabyte
    DBCC SHRINKFILE (CRS_log, 1000);
    GO
    --Reset the database recovery model.
    ALTER DATABASE CRS
    SET RECOVERY FULL;
    GO
    Do you think this approach would help?
    Do you think this approach would cause any problems?

My bad, you didn't put the K on the 2nd number.
Looking at my SQL Server, that's crazy big; my logs are in the KBs, like 4-8.
I think someone enabled some type of debugging on your SQL Server. It's more of a Microsoft issue, as our product doesn't require it, judging from my own SQL databases.
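If it helps, here is a quick way to see how full that log really is, and what is holding it up, before you shrink anything (CRS as in your script):
-- log usage per database, plus what is preventing log truncation
DBCC SQLPERF (LOGSPACE);
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'CRS';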
    Regards,
    Tim

  • Help! SQL server database log file increasing enormously

I have 5 SSIS jobs running in the SQL Server job agent, and some of them pull transactional data into our database at 4-hour intervals. The problem is that the log file of our database is growing rapidly: in a day it eats up 160 GB of disk space. Since our requirements don't need point-in-time recovery, I set the recovery model to SIMPLE; even so, the log consumes more than 160 GB in a day. Because the disk fills up, the scheduled jobs often fail. Temporarily, I am using the DETACH approach to clean up the log.
FYI: all the SSIS packages in the job use transactions on some tasks, e.g. a Sequence Container.
I want a permanent solution that keeps the log file within a particular size limit, and as I said earlier, I don't want the log data for future point-in-time recovery, so there is no need to take log backups at all.
And one more problem: in our database, the transactional table has 10 million records in it and some master tables have over 1000 records, but our mdf file size is about 50 GB now. I don't believe these 10 million records should amount to 50 GB. What's the problem here?
Help me on these issues. Thanks in advance.

For the SSIS part of the question, it would be better to ask in the SSIS forum, although nothing is going to change about the logging behavior. You can add some space for the log file, and you should also batch your transactions, as already suggested.
Regarding the memory question about SQL Server: once it takes memory, it is not going to release it unless the Windows OS faces memory pressure and SQLOS asks SQL Server to trim its memory consumption. So if you have set max server memory to somewhere near 50, SQL Server will eventually utilize that much memory, and what you are seeing is totally normal. Remember it is a costly task for SQL Server to release and reacquire memory, so it avoids that by caching as much as possible; it also caches aggressively to avoid physical reads, which are costly.
When the log file is getting full, what does the query below return?
select log_reuse_wait_desc from sys.databases where name='db_name'
Can you manually introduce a checkpoint in the ETL query? Try it; it might help you.
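As a minimal sketch of that idea (the database name is a placeholder): in SIMPLE recovery, a checkpoint issued between ETL batches lets the committed part of the log be reused instead of growing the file:
USE MyEtlDb;  -- hypothetical database
CHECKPOINT;   -- run between batches so log space can be reused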
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Wiki Article
    MVP

  • Database Log File becomes very big, What's the best practice to handle it?

The log of my production database is getting very big and the hard disk is almost full. I am pretty new to SAP but familiar with SQL Server; can anybody give me advice on the best practice to handle this issue?
Should I shrink the database?
I know increasing the disk is needed for the long term.
    Thanks in advance.

    Hi Finke,
Usually the log file fills up and grows huge due to not having regular transaction log backups. If your database is in FULL recovery mode, every transaction is logged in the transaction log file, and the file is only cleared when you take a log backup. If it is a production system and you don't have regular transaction log backups, the problem is just sitting there waiting to explode when you need a point-in-time restore. Please check your backup/restore strategy.
Follow these steps to get the transaction log file back into normal shape (see the combined sketch after the steps):
1.) Take a transaction log backup.
2.) Shrink the log file: DBCC SHRINKFILE('logfilename', 10240)
      The above command will shrink the file to 10 GB (a recommended size for highly transactional systems).
Finke Xie wrote:
> Should I shrink the database?
"NEVER SHRINK DATA FILES"; shrink only the log file.
3.) Schedule log backups every 15 minutes.
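Putting steps 1 and 2 together, a sketch with placeholder names and path (the logical log file name differs per system):
BACKUP LOG MySapDb TO DISK = N'E:\Backups\MySapDb_log.trn';  -- step 1
USE MySapDb;
DBCC SHRINKFILE (N'MySapDb_log', 10240);  -- step 2: target size in MB (10 GB)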
    Thanks
    Mush

  • Database Log File Size

    We are in the process of migrating disabled users to a new Exchange 2013 database on secondary storage. I've noticed that the Logs folder is abnormally large (182 GB). I was wondering if there was a way to clean this up?
    We have other Exchange 2013 databases whose Logs folder is much smaller in comparison (~200 MB). How can I go about cleaning these log files?

    Hi,
Have you checked the suggestion above to do a full backup and looked at the result?
Is there any update on your issue?
    Best regards,
    If you have feedback for TechNet Subscriber Support, contact
    [email protected]
    Amy Wang
    TechNet Community Support
Sorry to reply to this so late, but I wanted to provide somewhat of an update. I am running a DPM job on the database now; we'll see if it resolves the issue. When I ran the DPM job originally it had an error, I believe because the drive was at capacity.
    I've since expanded the drive and kicked off the DPM job again.

  • Database Log File getting full by Reindex Job

    Hey guys
I have an issue with one of my databases during a reindex job. Most of the time the log file is 99% free, but during the reindex job the log file fills up and runs out of space, so the job fails and I also get errors from the DB due to log file space. Any suggestions?

Please note that changing to BULK_LOGGED recovery will make you lose point-in-time recovery: the ALTER INDEX ... REBUILD would be minimally logged, and for the period this job is running you lose point-in-time recovery, so plan accordingly. Plus, you need to take a log backup after changing back to FULL recovery.
I guess Ola's scripts would suffice; if not, you would have to increase space on the drive where the log file resides. An index rebuild is fully logged in FULL recovery.
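As a sketch of that switch (database, table, and backup path are placeholders):
ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;  -- minimally logged rebuild
ALTER INDEX ALL ON dbo.MyBigTable REBUILD;
ALTER DATABASE MyDb SET RECOVERY FULL;
BACKUP LOG MyDb TO DISK = N'E:\Backups\MyDb_log.trn';  -- restart the log chain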
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Wiki Article
    MVP

  • Large and Many Replication Manager Database Log Files

    Hi All,
I've recently added replication manager support to our database systems. After enabling the replication manager I end up with many log.* files of many gigabytes apiece on the master. This makes backing up the database difficult. Is there a way to purge the log files more often?
It also seems that the replication slave never finishes synchronizing with the master.
    Thank you,
    Rob

So, I set up a debug environment on test machines, with a snapshot of the DB. We now set rep_set_limit to 5 MB.
Now it's failing to sync, so I recompiled with --enable-diagnostic and enabled DB_VERB_REPLICATION.
    On the master we see this:
    2007-06-06 18:40:26.646768500 DBMSG: ERROR:: sendpages: 2257, page lsn [293276][4069284]
    2007-06-06 18:40:26.646775500 DBMSG: ERROR:: bulk_msg: Copying LSN [640947][6755391] of 4152 bytes to 0x2afb35e370
    2007-06-06 18:40:26.646782500 DBMSG: ERROR:: sendpages: 2257, lsn [640947][6755391]
    2007-06-06 18:40:26.646794500 DBMSG: ERROR:: sendpages: 2258, page lsn [309305][9487507]
    2007-06-06 18:40:26.646801500 DBMSG: ERROR:: bulk_msg: Copying LSN [640947][6755391] of 4152 bytes to 0x2afb35f3b4
    2007-06-06 18:40:26.646803500 DBMSG: ERROR:: sendpages: 2258, lsn [640947][6755391]
    2007-06-06 18:40:26.646809500 DBMSG: ERROR:: send_bulk: Send 562140 (0x893dc) bulk buffer bytes
    2007-06-06 18:40:26.646816500 DBMSG: ERROR:: /ask/bloglines/db/sitedb rep_send_message: msgv = 3 logv 12 gen = 12 eid 0, type bulk_page, LSN [640947][6755391] nobuf
    2007-06-06 18:40:26.647064500 DBMSG: ERROR:: wrote only 147456 bytes to site 10.0.3.235:9003
    2007-06-06 18:40:26.648559500 DBMSG: ERROR:: /ask/bloglines/db/sitedb rep_process_message: msgv = 3 logv 12 gen = 12 eid 0, type page_req, LSN [0][0]
    2007-06-06 18:40:26.648561500 DBMSG: ERROR:: page_req: file 0 page 2124 to 2124
    2007-06-06 18:40:26.648562500 DBMSG: ERROR:: page_req: Open 0 via mpf_open
    2007-06-06 18:40:26.648563500 DBMSG: ERROR:: sendpages: file 0 page 2124 to 2124
    2007-06-06 18:40:26.649966500 DBMSG: ERROR:: /ask/bloglines/db/sitedb rep_process_message: msgv = 3 logv 12 gen = 12 eid 0, type page_req, LSN [0][0]
    2007-06-06 18:40:26.649968500 DBMSG: ERROR:: page_req: file 0 page 2124 to 2124
    2007-06-06 18:40:26.649970500 DBMSG: ERROR:: page_req: Open 0 via mpf_open
    2007-06-06 18:40:26.649971500 DBMSG: ERROR:: sendpages: file 0 page 2124 to 2124
    2007-06-06 18:40:26.651699500 DBMSG: ERROR:: sendpages: 2124, page lsn [516423][7302945]
    2007-06-06 18:40:26.651702500 DBMSG: ERROR:: bulk_msg: Copying LSN [640947][6755391] of 4152 bytes to 0x2afb3d801c
    2007-06-06 18:40:26.651704500 DBMSG: ERROR:: sendpages: 2124, lsn [640947][6755391]
    2007-06-06 18:40:26.651705500 DBMSG: ERROR:: send_bulk: Send 4164 (0x1044) bulk buffer bytes
    2007-06-06 18:40:26.651706500 DBMSG: ERROR:: /ask/bloglines/db/sitedb rep_send_message: msgv = 3 logv 12 gen = 12 eid 0, type bulk_page, LSN [640947][6755391] nobuf
    2007-06-06 18:40:26.652858500 DBMSG: ERROR:: sendpages: 2124, page lsn [516423][7302945]
    2007-06-06 18:40:26.652860500 DBMSG: ERROR:: bulk_msg: Copying LSN [640947][6755391] of 4152 bytes to 0x2afb2d701c
    2007-06-06 18:40:26.652861500 DBMSG: ERROR:: sendpages: 2124, lsn [640947][6755391]
    2007-06-06 18:40:26.652862500 DBMSG: ERROR:: send_bulk: Send 4164 (0x1044) bulk buffer bytes
    2007-06-06 18:40:26.652864500 DBMSG: ERROR:: /ask/bloglines/db/sitedb rep_send_message: msgv = 3 logv 12 gen = 12 eid 0, type bulk_page, LSN [640947][6755391] nobuf
    2007-06-06 18:40:38.951290500 1 28888 dbnet: 0,0: MSG: ** checkpoint start **
    2007-06-06 18:40:38.951321500 1 28888 dbnet: 0,0: MSG: ** checkpoint end **
    On the slave, we see this:
    2007-06-06 18:40:26.668636500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:26.668637500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
    2007-06-06 18:40:26.668644500 DBMSG: ERROR:: rep_bulk_page: p 0x2afb66c1fc ep 0x2afb671344 pgrec data 0x2afb66c1fc, size 4152 (0x1038)
    2007-06-06 18:40:26.668645500 DBMSG: ERROR:: PAGE: Received page 2254 from file 0
    2007-06-06 18:40:26.668658500 DBMSG: ERROR:: PAGE: Received duplicate page 2254 from file 0
    2007-06-06 18:40:26.668664500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:26.668666500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
    2007-06-06 18:40:26.668672500 DBMSG: ERROR:: rep_bulk_page: p 0x2afb66d240 ep 0x2afb671344 pgrec data 0x2afb66d240, size 4152 (0x1038)
    2007-06-06 18:40:26.668674500 DBMSG: ERROR:: PAGE: Received page 2255 from file 0
    2007-06-06 18:40:26.668686500 DBMSG: ERROR:: PAGE: Received duplicate page 2255 from file 0
    2007-06-06 18:40:26.668703500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:26.668704500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
    2007-06-06 18:40:26.668706500 DBMSG: ERROR:: rep_bulk_page: p 0x2afb66e284 ep 0x2afb671344 pgrec data 0x2afb66e284, size 4152 (0x1038)
    2007-06-06 18:40:26.668707500 DBMSG: ERROR:: PAGE: Received page 2256 from file 0
    2007-06-06 18:40:26.668714500 DBMSG: ERROR:: PAGE: Received duplicate page 2256 from file 0
    2007-06-06 18:40:26.668715500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:26.668722500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
    2007-06-06 18:40:26.668723500 DBMSG: ERROR:: rep_bulk_page: p 0x2afb66f2c8 ep 0x2afb671344 pgrec data 0x2afb66f2c8, size 4152 (0x1038)
    2007-06-06 18:40:26.668730500 DBMSG: ERROR:: PAGE: Received page 2257 from file 0
    2007-06-06 18:40:26.668743500 DBMSG: ERROR:: PAGE: Received duplicate page 2257 from file 0
    2007-06-06 18:40:26.668750500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:26.668752500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
    2007-06-06 18:40:26.668758500 DBMSG: ERROR:: rep_bulk_page: p 0x2afb67030c ep 0x2afb671344 pgrec data 0x2afb67030c, size 4152 (0x1038)
    2007-06-06 18:40:26.668760500 DBMSG: ERROR:: PAGE: Received page 2258 from file 0
    2007-06-06 18:40:26.668772500 DBMSG: ERROR:: PAGE: Received duplicate page 2258 from file 0
    2007-06-06 18:40:26.668779500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:26.690980500 DBMSG: ERROR:: /ask/bloglines/db/sitedb-slave rep_process_message: msgv = 3 logv 12 gen = 12 eid 0, type bulk_page, LSN [640947][6755391]
    2007-06-06 18:40:26.690982500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
    2007-06-06 18:40:26.690983500 DBMSG: ERROR:: rep_bulk_page: p 0x736584 ep 0x7375bc pgrec data 0x736584, size 4152 (0x1038)
    2007-06-06 18:40:26.690985500 DBMSG: ERROR:: PAGE: Received page 2124 from file 0
    2007-06-06 18:40:26.690986500 DBMSG: ERROR:: PAGE: Received duplicate page 2124 from file 0
    2007-06-06 18:40:26.690992500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:36.289310500 DBMSG: ERROR:: election thread is exiting
I have the full log files if that would help; these are just the ends of them.
    Any ideas? Thanks...
    -Paul

  • Correct cmdlet for moving my database/log files?

Currently, my database and logs reside on an external USB 1 TB drive (D:\Mailbox\Mail400).
That being said, I need to move both the database and logs to the C:\ drive:
    Database - "C:\Program Files\Microsoft\Exchange Server\V15\Mailbox\Mail400"
    Logs-  "C:\Program Files\Microsoft\Exchange Server\V15\Mailbox\Mail400\Logs\". 
    Is this the correct cmdlet I would use to accomplish this move?
    Move-DatabasePath "Mail400" -EdbFilePath "C:\Program Files\Microsoft\Exchange Server\V15\Mailbox\Mail400\Mail400.edb" -LogFolderPath "C:\Program Files\Microsoft\Exchange Server\V15\Mailbox\Mail400\Logs\"
I realize the best practice is to separate the database file and transaction logs onto separate disks, but that isn't an option for us at this time.
    Ex2013sp1
    PennyM

    Hi PennyM,
    I have a test in my environment using the following cmdlet. I use Exchange server 2013.
    Move-DatabasePath -Identity 'Mailbox Database 1294421375' -EdbFilePath 'C:\Data\Mailbox Database 1294421375\test.edb' -LogFolderPath 'C:\Data\Mailbox Database 1294421375'
There is no need to create these folders manually; when you run the above cmdlet, the folders you specify will be created automatically.
Besides, the move will take some time; it depends on the size of the database and transaction log files being moved.
    Hope it helps.
    Best regards,
    If you have feedback for TechNet Subscriber Support, contact
    [email protected]
    Amy Wang
    TechNet Community Support

  • Database-Specific Log Files?

    Hi,
Assuming that you are using an environment, is there some way to specify a database-specific (rather than environment-specific) location for log files? Or, put another way, can you ask for log files to be created for each database rather than for each environment?
    Thanks,
    -- [email protected]
    Andrew Bell
    Iowa State University

    Hi Andrew,
Could you let us know what goal you want to achieve by using per-database log files?
You could open a single database per environment, thus having the log files associated with a single database, but this is not a scalable solution at all. Or you could create your own logging mechanism, which might be quite complicated, as it will affect related operations such as checkpoints, recovery, etc.
    Regards,
    Andrei

  • How to configure log files in SQL 2012?

I'm installing a SharePoint solution on Microsoft SQL 2012, and I have limited knowledge of installing SQL. I simply ran the wizard and created the DB for SharePoint.
Are there any links/materials available demonstrating how to correctly configure the log files, step by step, on a specific partition during the SQL installation for SharePoint 2013?
Thanks,

    Hello,
SQL Server database log files benefit from RAID 1 and RAID 10 configurations; RAID 10 is recommended for both data files and log files.
To control the growth of transaction log files, please back them up regularly. The following article explains in detail why:
http://technet.microsoft.com/en-us/library/ms175495.aspx
Monitor the growth of the log files using Performance Monitor and the SQLServer:Databases --> Log Growths counter. Adjust the size of the log files until Log Growths is constantly zero.
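As a sketch of pre-sizing a log file so autogrowth (and thus the Log Growths counter) stays at zero; the database and logical file names here are placeholders, not actual SharePoint defaults:
ALTER DATABASE WSS_Content
MODIFY FILE (NAME = N'WSS_Content_log', SIZE = 4096MB);
ALTER DATABASE WSS_Content
MODIFY FILE (NAME = N'WSS_Content_log', FILEGROWTH = 512MB);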
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • Exchange 2013 Window Backup failure - fails to clean up log files - error FFFFFFFC

    Hello,
    We currently have an Exchange 2013 - Exchange 2007 coexistence setup.
    1x Windows Server 2008 R2 with Exchange 2007 CAS/HUB
    1x Windows Server 2008 R2 with Exchange 2007 MBX
    1x Windows Server 2008 R2 with Exchange 2013 CU 3 (all roles)
Since our current backup solution (Symantec Backup Exec 2010 R3) does not support backing up Exchange 2013 CU3 databases, we are using the built-in Windows Backup feature to back up the Exchange 2013 CU3 databases.
This worked for a while, until all of a sudden the log files were no longer being cleared, causing our log disk to fill up.
    Our Exchange 2013 server looks like this:
    C: Windows + Exchange installation
    D: Databases
    E: Database Log Files
    F: Archive Databases
    G: Archive Database Log Files
    S: Dedicated backup volume
The backup is scheduled to run every day at 21:00, performing a full VSS backup of D:, E:, F:, and G: to the volume S:.
The Windows backup log says the backup completed with exceptions, leaving the Exchange logs untouched and filling the drive.
    The Windows Backup Command Line tools are NOT installed since they are not compatible with Exchange 2013.
    In the eventvwr we can see the following error:
    The Microsoft Exchange Replication service VSS Writer (Instance a33ec440-7f12-4c50-806b-1bc5ceaf8aad) failed with error FFFFFFFC when processing the backup completion event.
    The command VSSADMIN LIST WRITERS displays the following:
    Writer name: 'Microsoft Exchange Writer'
       Writer Id: {76fe1ac4-15f7-4bcd-987e-8e1acb462fb7}
       Writer Instance Id: {cf84a907-76dc-428c-90e0-c18a3f9db493}
       State: [1] Stable
       Last error: Retryable error
Restarting the Microsoft Exchange Replication service (MSExchangeRepl) clears the writer error, as does a reboot; however, the error comes back after the next backup.
    Any thoughts on how to resolve this issue?
    Thanks.

    Hello,
    Here is a blog for your reference.
    http://blogs.technet.com/b/timmcmic/archive/2012/03/11/exchange-and-vss-my-exchange-writer-is-in-a-failed-retryable-state.aspx
If there is any useful information after you change the logging to expert, please feel free to let me know.
    Cara Chen
    TechNet Community Support

  • Why is there no error when checkpointing after db log files are removed?

I would like to test a scenario in which an application's embedded database is corrupted somehow. The simplest test I could think of was removing the database log files while the application is running. However, I can't seem to get any failure. To demonstrate, below is a code snippet of what I am trying to do (I am using JE 3.3.75 on Mac OS X 10.5.6):
    import java.io.File;
    import java.io.FilenameFilter;
    import com.sleepycat.je.*;

    public class FileRemovalTest {
        public static void main(String[] args) throws Exception {
            // Set up the DB environment
            EnvironmentConfig ec = new EnvironmentConfig();
            ec.setAllowCreate(true);
            ec.setTransactional(true);
            ec.setConfigParam(EnvironmentConfig.ENV_RUN_CLEANER, "false");
            ec.setConfigParam(EnvironmentConfig.ENV_RUN_CHECKPOINTER, "false");
            ec.setConfigParam(EnvironmentConfig.CLEANER_EXPUNGE, "true");
            ec.setConfigParam("java.util.logging.FileHandler.on", "true");
            ec.setConfigParam("java.util.logging.level", "FINEST");
            Environment env = new Environment(new File("."), ec);

            // Create a database
            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            dbConfig.setTransactional(true);
            Database db = env.openDatabase(null, "test", dbConfig);

            // Insert an entry and checkpoint the database
            db.put(
                null,
                new DatabaseEntry("key".getBytes()),
                new DatabaseEntry("value".getBytes()));
            CheckpointConfig checkpointConfig = new CheckpointConfig();
            checkpointConfig.setForce(true);
            env.checkpoint(checkpointConfig);

            // Delete the DB log files
            File[] dbFiles = new File(".").listFiles(new DbFilenameFilter());
            if (dbFiles != null) {
                for (File file : dbFiles) {
                    file.delete();
                }
            }

            // Add another entry and checkpoint the database again.
            db.put(
                null,
                new DatabaseEntry("key2".getBytes()),
                new DatabaseEntry("value2".getBytes())); // Q: Why does this 'put' succeed?
            env.checkpoint(checkpointConfig); // Q: Why does this checkpoint succeed?

            // Close the database and the environment
            db.close();
            env.close();
        }

        private static class DbFilenameFilter implements FilenameFilter {
            public boolean accept(File dir, String name) {
                return name.endsWith(".jdb");
            }
        }
    }
    This is what I see in the logs:
    2009-03-05 12:53:30:631:CST CONFIG Recovery w/no files.
    2009-03-05 12:53:30:677:CST FINER Ins: bin=2 ln=1 lnLsn=0x0/0xe9 index=0
    2009-03-05 12:53:30:678:CST FINER Ins: bin=5 ln=4 lnLsn=0x0/0x193 index=0
    2009-03-05 12:53:30:688:CST FINE Commit:id = 1 numWriteLocks=1 numReadLocks = 0
    2009-03-05 12:53:30:690:CST FINEST size interval=0 lastCkpt=0x0/0x0 time interval=0 force=true runnable=true
    2009-03-05 12:53:30:703:CST FINER Ins: bin=8 ln=7 lnLsn=0x0/0x48b index=0
    2009-03-05 12:53:30:704:CST CONFIG Checkpoint 1: source=recovery success=true nFullINFlushThisRun=4 nDeltaINFlushThisRun=0
    2009-03-05 12:53:30:705:CST CONFIG Recovery finished: Recovery Infonull> useMinReplicatedNodeId=0 useMaxNodeId=0 useMinReplicatedDbId=0 useMaxDbId=0 useMinReplicatedTxnId=0 useMaxTxnId=0 numMapINs=0 numOtherINs=0 numBinDeltas=0 numDuplicateINs=0 lnFound=0 lnNotFound=0 lnInserted=0 lnReplaced=0 nRepeatIteratorReads=0
    2009-03-05 12:53:30:709:CST FINEST Environment.open: name=test dbConfig=allowCreate=true
    exclusiveCreate=false
    transactional=true
    readOnly=false
    duplicatesAllowed=false
    deferredWrite=false
    temporary=false
    keyPrefixingEnabled=false
    2009-03-05 12:53:30:713:CST FINER Ins: bin=2 ln=10 lnLsn=0x0/0x7be index=1
    2009-03-05 12:53:30:714:CST FINER Ins: bin=5 ln=11 lnLsn=0x0/0x820 index=1
    2009-03-05 12:53:30:718:CST FINE Commit:id = 2 numWriteLocks=0 numReadLocks = 0
    2009-03-05 12:53:30:722:CST FINEST Database.put key=107 101 121 data=118 97 108 117 101
    2009-03-05 12:53:30:728:CST FINER Ins: bin=13 ln=12 lnLsn=0x0/0x973 index=0
    2009-03-05 12:53:30:729:CST FINE Commit:id = 3 numWriteLocks=1 numReadLocks = 0
    2009-03-05 12:53:30:729:CST FINEST size interval=0 lastCkpt=0x0/0x581 time interval=0 force=true runnable=true
    2009-03-05 12:53:30:735:CST FINER Mod: bin=5 ln=4 lnIdx=0 oldLnLsn=0x0/0x193 newLnLsn=0x0/0xb61
    2009-03-05 12:53:30:736:CST FINER Mod: bin=5 ln=11 lnIdx=1 oldLnLsn=0x0/0x820 newLnLsn=0x0/0xc3a
    2009-03-05 12:53:30:737:CST FINER Ins: bin=8 ln=15 lnLsn=0x0/0xd38 index=0
    2009-03-05 12:53:30:738:CST CONFIG Checkpoint 2: source=api success=true nFullINFlushThisRun=6 nDeltaINFlushThisRun=0
    2009-03-05 12:53:30:741:CST FINEST Database.put key=107 101 121 50 data=118 97 108 117 101 50
    2009-03-05 12:53:30:742:CST FINER Ins: bin=13 ln=16 lnLsn=0x0/0xeaf index=1
    2009-03-05 12:53:30:743:CST FINE Commit:id = 4 numWriteLocks=1 numReadLocks = 0
    2009-03-05 12:53:30:744:CST FINEST size interval=0 lastCkpt=0x0/0xe32 time interval=0 force=true runnable=true
    2009-03-05 12:53:30:746:CST FINER Mod: bin=5 ln=4 lnIdx=0 oldLnLsn=0x0/0xb61 newLnLsn=0x0/0x1166
    2009-03-05 12:53:30:747:CST FINER Mod: bin=5 ln=11 lnIdx=1 oldLnLsn=0x0/0xc3a newLnLsn=0x0/0x11e9
    2009-03-05 12:53:30:748:CST FINER Ins: bin=8 ln=17 lnLsn=0x0/0x126c index=0
    2009-03-05 12:53:30:748:CST CONFIG Checkpoint 3: source=api success=true nFullINFlushThisRun=4 nDeltaINFlushThisRun=0
    2009-03-05 12:53:30:750:CST FINEST Database.close: name=test
    2009-03-05 12:53:30:751:CST FINE Close of environment . started
    2009-03-05 12:53:30:751:CST FINEST size interval=0 lastCkpt=0x0/0x1363 time interval=0 force=true runnable=true
    2009-03-05 12:53:30:754:CST FINER Mod: bin=5 ln=4 lnIdx=0 oldLnLsn=0x0/0x1166 newLnLsn=0x0/0x14f8
    2009-03-05 12:53:30:755:CST FINER Mod: bin=5 ln=11 lnIdx=1 oldLnLsn=0x0/0x11e9 newLnLsn=0x0/0x15a9
    2009-03-05 12:53:30:756:CST FINER Ins: bin=8 ln=18 lnLsn=0x0/0x16ab index=0
    2009-03-05 12:53:30:757:CST CONFIG Checkpoint 4: source=close success=true nFullINFlushThisRun=4 nDeltaINFlushThisRun=0
    2009-03-05 12:53:30:757:CST FINE About to shutdown daemons for Env .

    Hi,
    OS X, being Unix-like, probably isn't actually deleting file 00000000.jdb since JE still has it open -- the file deletion is deferred until it is closed. JE keeps N files open, where N is configurable.
We do corruption testing ourselves; the following test overwrites a file and then attempts to read back the entire database:
    test/com/sleepycat/je/util/DbScavengerTest.java
    --mark

  • How to Monitor oracle Application in unix through log files?

    Hello everybody,
    I would like to know the log file locations for monitoring startup and shutdown error entries, e.g. where the web server keeps its log files, the forms server, etc.
    Are they in $COMMON_TOP? If yes, where?
    One last question: what should I do to be a proactive apps DBA?
    (Using Unix/AIX commands.)
    Thanks and Regards,

    Startup/Shutdown logs
    $COMMON_TOP/admin/log/<SID_hostname>
    Apache logs
    $APACHE_TOP/Apache/Jserv/logs
    $APACHE_TOP/Apache/Jserv/logs/jvm
    Concurrent Manager Log Files
    $APPLCSF/$APPLLOG
    $APPLCSF/$APPLOUT
    Database Log Files
    $ORACLE_HOME/admin/<SID_hostname>/bdump
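    If it helps, on these releases you can also confirm the database alert log directory (the bdump path above) from SQL*Plus:
    select value from v$parameter where name = 'background_dump_dest';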
    You can also use Oracle Applications Manager (OAM). With OAM you can view information on general system activity, including the statuses of the database, concurrent managers and other services, concurrent requests, and Oracle Workflow processes. You can also start or stop services and submit concurrent requests, and view configuration information such as initialization parameters, profile options, etc.

  • MS_SQL Shrinking Log Files.

    Hi experts,
    We have checked the documentation we received from SAP (SBO customer portal), based on the EarlyWatch Alert.
    As per the SAP requisition, we have minimized the size of the test database log file through MS SQL Management Studio (restricted file growth 10 percent, size 10 MB).
    Initially it was 50 percent, 1000 MB.
    My doubt is: will any problem occur in the future because of this change to the log files?
    Kindly help me.
    Based on your reply, I will update the live production database.
    By
    kart

    The risk in shrinking the log file is fairly small. Current hardware and software are much more reliable than before. When you shrink your log file, you just lose some history that nobody even knows has any value in it.
    On the contrary, if you keep a very large log file, it may cause more trouble than it does good.
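    For reference, growth limits like the ones you describe can also be set in T-SQL (the database and logical file names are placeholders):
    ALTER DATABASE TestDb MODIFY FILE (NAME = N'TestDb_log', FILEGROWTH = 10%);
    ALTER DATABASE TestDb MODIFY FILE (NAME = N'TestDb_log', MAXSIZE = 10MB);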
    Thanks,
    Gordon
