New Log Files Not 100% Utilization

I recently deployed my web app with new configurations requiring my BDB to have at least 65% disk utilization (up from the default 50%) and 25% minimum file utilization (up from the default 5%). At application startup, I temporarily added a call to env.cleanLog() to force a clean of the logs and bring the DB up to these utilization parameters.
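For reference, the batch-clean pattern looks roughly like this (a sketch, assuming an open JE Environment named env; note that files processed by the cleaner are only deleted after a subsequent checkpoint):

import com.sleepycat.je.CheckpointConfig;

// Clean repeatedly until no more files qualify, then force a checkpoint
// so that the files processed by the cleaner can actually be deleted.
boolean anyCleaned = false;
while (env.cleanLog() > 0) {
    anyCleaned = true;
}
if (anyCleaned) {
    CheckpointConfig force = new CheckpointConfig();
    force.setForce(true);
    env.checkpoint(force);
}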
I've deployed the app, and it's been chugging away at the clean process now for about 2.5 hours. It is generating a new 100 MB log file approximately every minute as it attempts to compact the DB. However, at 2.5 hours, I just ran DbSpace and the DB utilization is only up to 51%. Perhaps most surprising are the following two facts:
1) The newly created log files from the cleaner are nowhere near 100% utilization
2) Some of the newly created log files have already been cleaned (deleted) themselves.
There are 0 updates happening to the BDB while this process is running. My understanding was that when the cleaner removes old files and writes new ones, only good/valid data is written to the new files. However, this doesn't seem to be the case. Can someone enlighten me? I'm happy to read the docs if someone can point me in the right direction.
Thanks.

Sorry, somehow I missed that you weren't doing any updates, even though you said so. I apologize.
However, your cache size is much too small for your data set. Running DbCacheSize with a rough approximation of your data set size gives:
~/je.cvs$ java -jar build/lib/je.jar DbCacheSize -records 50000000 -key 30 -data 300
Inputs: records=50000000 keySize=30 dataSize=300 nodeMax=128 binMax=128 density=80% overhead=10%
=== Cache Sizing Summary ===

    Cache Size       Btree Size   Description
 3,543,168,702    3,188,851,832   Minimum, internal nodes only
 3,963,943,955    3,567,549,560   Maximum, internal nodes only
22,209,835,368   19,988,851,832   Minimum, internal nodes and leaf nodes
22,630,610,622   20,367,549,560   Maximum, internal nodes and leaf nodes

=== Memory Usage by Btree Level ===

Minimum Bytes    Maximum Bytes      Nodes    Level
3,157,713,227    3,532,713,035    488,281        1
   30,834,656       34,496,480      4,768        2
      297,482          332,810         46        3
        6,467            7,235          1        4

Apparently when you started, the cleaner was "behind" -- utilization was low. Without enough cache to hold the internal nodes -- 3.5 GB according to DbCacheSize -- the cleaner will either take a very long time or never be able to clean up to 50% utilization.
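If more memory can be given to the JVM, a hypothetical sketch of sizing the cache to hold the internal nodes (the 4 GB figure is illustrative, rounded up from the DbCacheSize maximum above):

import com.sleepycat.je.EnvironmentConfig;

EnvironmentConfig envConfig = new EnvironmentConfig();
// ~4 GB covers the 3.5-4.0 GB of internal nodes reported by DbCacheSize.
envConfig.setCacheSize(4L * 1024 * 1024 * 1024);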
Do you really only have that much memory available, or is this just a dev environment issue?
--mark

Similar Messages

  • Not able to add new log file to the 11g database.

    Hi DBAs,
    I am not able to add a log file; I get the following error when altering the database:
    SQL> alter database add logfile group 3 ('/oracle/DEV/db/apps_st/data/log03a.dbf','/oracle/DEV/db/apps_st/data/log03a.dbf') size 50m reuse;
    alter database add logfile group 3 ('/oracle/DEV/db/apps_st/data/log03a.dbf','/oracle/DEV/db/apps_st/data/log03a.dbf') size 50m reuse
    ERROR at line 1:
    ORA-01505: error in adding log files
    ORA-01577: cannot add log file '/oracle/DEV/db/apps_st/data/log03a.dbf' - file
    already part of database
    SQL> select a.group#, member, a.status from v$log a, v$logfile b where a.group# = b.group# order by 1;
    GROUP# MEMBER                                 STATUS
         1 /oracle/DEV/db/apps_st/data/log01a.dbf ACTIVE
         1 /oracle/DEV/db/apps_st/data/log01b.dbf ACTIVE
         2 /oracle/DEV/db/apps_st/data/log02a.dbf CURRENT
         2 /oracle/DEV/db/apps_st/data/log02b.dbf CURRENT
    Kindly help me to add the new log file to my database.
    Thanks,
    SG

    Hi Sawwan,
    V$LOGMEMBER was mentioned in the document; I queried the log members as below:
    1)select a.group#, member, a.status from v$log a, v$logfile b where a.group# = b.group# order by 1;
    GROUP# MEMBER                                 STATUS
         1 /oracle/DEV/db/apps_st/data/log01a.dbf INACTIVE
         1 /oracle/DEV/db/apps_st/data/log01b.dbf INACTIVE
         2 /oracle/DEV/db/apps_st/data/log02a.dbf CURRENT
         2 /oracle/DEV/db/apps_st/data/log02b.dbf CURRENT
    2)SQL> select group#,member,status from v$logfile;
    GROUP# MEMBER                                 STATUS
         2 /oracle/DEV/db/apps_st/data/log02a.dbf
         2 /oracle/DEV/db/apps_st/data/log02b.dbf
         1 /oracle/DEV/db/apps_st/data/log01a.dbf
         1 /oracle/DEV/db/apps_st/data/log01b.dbf
    But I am a little bit confused: per the above queries there is no group called "group 3" or log file called "log03a.dbf", so how can I drop that group and file?
    I also cross-verified in the data top directory whether the files exist, and they do not. But I still get the same error that the file I want to create already exists.
    Can I issue the query below to drop the group, even though I don't think it exists?
    SQL>alter database drop logfile group 3;
    Thanks in advance.
    Regards,
    SG

  • Thread 1 cannot allocate new log - Checkpoint not complete caused by small

    Hello, one of my customers' databases (set up by the customer) is running in archivelog mode. Every day we find in the alert log more than 10 occurrences of the error "...cannot allocate new log - Checkpoint not complete".
    The system has these parameters:
    select group#,members,bytes from v$log;
    GROUP#  MEMBERS    BYTES
         1        2  8388608
         2        2  8388608
         3        2  8388608
         4        2  8388608
         5        2  8388608
         6        2  8388608
         7        2  8388608
         8        2  8388608
         9        2  8388608
        10        2  8388608
    I asked the former admin why the redo logs are only 8 MB in size. The answer was: "To find out when there was big activity in the database. The count of archive logs is bigger if you have small redo logs. If you use 250 MB redo logs, we cannot see when the database had a big load. But this is important in case of a recovery to the point in time before the big load took place." ... ???
    Okay, if the customer wants to have 8 MB redo logs ... I don't care. But how can I now prevent the error message? The only way I see is to set up more redo log groups, but it would be very confusing to have 50 or 70 groups of redo logs.
    What else can I do to prevent the error?

    What is the log switch frequency?
    What is the logfile size?
    Can you check fast_start_mttr_target as well?
    Check the following description:
    Sometimes you can see the following corresponding messages in your alert.log file:
    Thread 1 advanced to log sequence 248
    Current log# 2 seq# 248 mem# 0: /prod1/oradata/logs/redologs02.log
    Thread 1 cannot allocate new log, sequence 249
    Checkpoint not complete
    This message indicates that Oracle wants to reuse a redo log file, but the current checkpoint position is still in that log. In this case, Oracle must wait until the checkpoint position passes that log. Because the incremental checkpoint target never lags the current log tail by more than 90% of the smallest log file size, this situation may be encountered if DBWR writes too slowly, or if a log switch happens before the log is completely full, or if log file sizes are too small.
    When the database waits on checkpoints, redo generation is stopped until the log switch is done.

  • BACKGROUND JOB WAS NOT SCHEDULED/LOG FILE NOT YET GENERATED

    Hello,
    To generate the log report, the /VIRSA/ZVFATBAK program is scheduled on an hourly basis, but sometimes the report does not get generated even though the background job shows as successfully finished.
    If we manually view the log report for the FFID, the error message below is displayed:
    "BACKGROUND JOB WAS NOT SCHEDULED/LOG FILE NOT YET GENERATED"
    Can anyone guide me in solving this issue?
    Thanks in advance.
    Best Regards,
    Prashant Dubey

    Hi,
    First, check the status of the job by selecting it and checking the job status (Ctrl+Shift+F12).
    Since it is a periodically scheduled job, there will be a RELEASED job after every active job.
    Try copying it into another job using the copy option, and give it a new name that you will remember.
    The moment you copy it, you will find the copied job in SCHEDULED status.
    From there, try to run it again on an hourly basis.
    After copying the job, unschedule the old released job; otherwise two will run at the same time.
    rgds,

  • Resisting the creation of new log files when SQL SERVER is restarted

    Hi,
    I know that when SQL Server is restarted, new log files are created. But is it possible to prevent the creation of new log files and instead write log data to the existing log files that were in use before restarting SQL Server?

    Hello,
    I guess Raghvendra answered your question. As per your previous post, it is not clear what you want to ask, and you did not revert. If your issue is solved, please mark the answer and vote the posts helpful.
    "Can I continue to log in the same file?"
    What does this line mean exactly? Yes, SQL Server will continue to use the same transaction log file (LDF file) for writing information as it was using before shutdown. If you are talking about the errorlog file, a new errorlog file is created, which you can read using
    sp_readerrorlog
    Even if you stopped the SQL Server service mistakenly, it's not that the server is gone. When you stopped the server, all in-flight transactions were rolled back. When SQL Server comes back online, it undergoes crash recovery and brings all the databases online by reading the transaction log file and performing redo and undo. All committed transactions are rolled forward, and uncommitted ones are rolled back.
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it

  • Each time I start up my computer and have my iPhone plugged in, I get a message window that reads "New Wave" file not found in your files... How do I get this file I am missing?


    Hey Podieboy,
    Thanks for the question. It sounds like you are receiving a message about missing files when you start iTunes. The following resource may provide more information:
    iTunes: Finding lost media and downloads - Apple Support
    http://support.apple.com/en-us/TS1408
    If the file/song is not important, you can simply delete it from your library to avoid this error message in the future:
    iTunes 12 for Windows: Delete songs, playlists, or other items
    http://support.apple.com/kb/PH20391
    Thanks,
    Matt M.

  • New Log file

    I just noticed that there is a new log file in /var/log called secure.log. It is grayed out and when I click on it I get a "you don't have permission to see this file" message - and I am the administrator. I don't remember this file from a while ago and I have been running Tiger since it came out. Anyone know what this is about? Thanks, Bill

    Owner: root. Group: admin. Rights -rw------- or -rw-r----- The former seems to be the rights you have on your computer. On mine I have the latter on secure.log and secure.log.2.gz, the others being only rw for the owner...

  • Empty Log files not deleted by Cleaner

    Hi,
    we have a NoSql database installed on 3 nodes with a replication factor of 3 (see exact topology below).
    We ran a test which consisted of the following operations repeated in a loop: store a LOB, read it, delete it.
    store.putLOB(key, new ByteArrayInputStream(source),Durability.COMMIT_SYNC, 5, TimeUnit.SECONDS);
    store.getLOB(key,Consistency.NONE_REQUIRED, 5, TimeUnit.SECONDS);
    store.deleteLOB(key, Durability.COMMIT_SYNC, 5, TimeUnit.SECONDS);
    During the test, the space occupied by the database continued to grow!
    Cleaner threads are running but log these warnings:
    2015-02-03 14:32:58.936 UTC WARNING [rg3-rn2] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
    2015-02-03 14:32:58.937 UTC WARNING [rg3-rn2] JE: Cleaner has 12 files not deleted because they are protected by replication.
    2015-02-03 14:32:58.920 UTC WARNING [rg3-rn1] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
    2015-02-03 14:32:58.921 UTC WARNING [rg3-rn1] JE: Cleaner has 12 files not deleted because they are protected by replication.
    2015-02-03 14:32:58.908 UTC WARNING [rg3-rn3] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
    2015-02-03 14:32:58.909 UTC WARNING [rg3-rn3] JE: Cleaner has 12 files not deleted because they are protected by replication.
    2015-02-03 14:33:31.704 UTC INFO [rg3-rn2] JE: Chose lowest utilized file for cleaning. fileChosen: 0xc (adjustment disabled) totalUtilization: 1 bestFileUtilization: 0 isProbe: false
    2015-02-03 14:33:32.137 UTC INFO [rg3-rn2] JE: CleanerRun 13 ends on file 0xc probe=false invokedFromDaemon=true finished=true fileDeleted=false nEntriesRead=1129 nINsObsolete=64 nINsCleaned=2 nINsDead=0 nINsMigrated=2 nBINDeltasObsolete=2 nBINDeltasCleaned=0 nBINDeltasDead=0 nBINDeltasMigrated=0 nLNsObsolete=971 nLNsCleaned=88 nLNsDead=0 nLNsMigrated=88 nLNsMarked=0 nLNQueueHits=73 nLNsLocked=0 logSummary=<CleanerLogSummary endFileNumAtLastAdjustment="0xe" initialAdjustments="5" recentLNSizesAndCounts=""> inSummary=<INSummary totalINCount="68" totalINSize="7570" totalBINDeltaCount="2" totalBINDeltaSize="254" obsoleteINCount="66" obsoleteINSize="7029" obsoleteBINDeltaCount="2" obsoleteBINDeltaSize="254"/> estFileSummary=<summary totalCount="2072" totalSize="13069531" totalINCount="68" totalINSize="7570" totalLNCount="1059" totalLNSize="13024352" maxLNSize="102482" obsoleteINCount="66" obsoleteLNCount="971" obsoleteLNSize="12974449" obsoleteLNSizeCounted="971" getObsoleteSize="13019405" getObsoleteINSize="7347" getObsoleteLNSize="12974449" getMaxObsoleteSize="13019405" getMaxObsoleteLNSize="12974449" getAvgObsoleteLNSizeNotCounted="NaN"/> recalcFileSummary=<summary totalCount="2072" totalSize="13069531" totalINCount="68" totalINSize="7570" totalLNCount="1059" totalLNSize="13024352" maxLNSize="0" obsoleteINCount="66" obsoleteLNCount="971" obsoleteLNSize="12974449" obsoleteLNSizeCounted="971" getObsoleteSize="13019405" getObsoleteINSize="7347" getObsoleteLNSize="12974449" getMaxObsoleteSize="13019405" getMaxObsoleteLNSize="12974449" getAvgObsoleteLNSizeNotCounted="NaN"/> lnSizeCorrection=NaN newLnSizeCorrection=NaN estimatedUtilization=0 correctedUtilization=0 recalcUtilization=0 correctionRejected=false
    Log files are not deleted even when empty, as seen using the DbSpace utility:
    java -cp /mam2g/kv-3.2.5/lib/kvstore.jar com.sleepycat.je.util.DbSpace -h /mam2g/data/sn1/u01/rg2-rn1/env
      File    Size (KB)  % Used
    00000000      12743       0
    00000001      12785       0
    00000002      12725       0
    00000003      12719       0
    00000004      12703       0
    00000005      12751       0
    00000006      12795       0
    00000007      12725       0
    00000008      12752       0
    00000009      12720       0
    0000000a      12723       0
    0000000b      12764       0
    0000000c      12715       0
    0000000d      12799       0
    0000000e      12724       1
    0000000f       5717       0
    TOTALS      196867       0
    Here is the configured topology:
    kv-> show topology
    store=MMS-KVstore  numPartitions=90 sequence=106
      zn: id=zn1 name=MAMHA repFactor=3 type=PRIMARY
      sn=[sn1] zn:[id=zn1 name=MAMHA] 192.168.144.11:5000 capacity=3 RUNNING
        [rg1-rn1] RUNNING
                 single-op avg latency=4.414467 ms   multi-op avg latency=0.0 ms
        [rg2-rn1] RUNNING
                 single-op avg latency=1.5962526 ms   multi-op avg latency=0.0 ms
        [rg3-rn1] RUNNING
                 single-op avg latency=1.3068943 ms   multi-op avg latency=0.0 ms
      sn=[sn2] zn:[id=zn1 name=MAMHA] 192.168.144.12:6000 capacity=3 RUNNING
        [rg1-rn2] RUNNING
                 single-op avg latency=1.5670061 ms   multi-op avg latency=0.0 ms
        [rg2-rn2] RUNNING
                 single-op avg latency=8.637241 ms   multi-op avg latency=0.0 ms
        [rg3-rn2] RUNNING
                 single-op avg latency=1.370075 ms   multi-op avg latency=0.0 ms
      sn=[sn3] zn:[id=zn1 name=MAMHA] 192.168.144.35:7000 capacity=3 RUNNING
        [rg1-rn3] RUNNING
                 single-op avg latency=1.4707285 ms   multi-op avg latency=0.0 ms
        [rg2-rn3] RUNNING
                 single-op avg latency=1.5334034 ms   multi-op avg latency=0.0 ms
        [rg3-rn3] RUNNING
                 single-op avg latency=9.05199 ms   multi-op avg latency=0.0 ms
      shard=[rg1] num partitions=30
        [rg1-rn1] sn=sn1
        [rg1-rn2] sn=sn2
        [rg1-rn3] sn=sn3
      shard=[rg2] num partitions=30
        [rg2-rn1] sn=sn1
        [rg2-rn2] sn=sn2
        [rg2-rn3] sn=sn3
      shard=[rg3] num partitions=30
        [rg3-rn1] sn=sn1
        [rg3-rn2] sn=sn2
        [rg3-rn3] sn=sn3
    Why are empty files not deleted by the cleaner? And why are empty log files protected by replication if all the replicas seem to be aligned with the master?
    java -jar /mam2g/kv-3.2.5/lib/kvstore.jar ping -host 192.168.144.11 -port 5000
    Pinging components of store MMS-KVstore based upon topology sequence #106
    Time: 2015-02-03 13:44:57 UTC
    MMS-KVstore comprises 90 partitions and 3 Storage Nodes
    Storage Node [sn1] on 192.168.144.11:5000    Zone: [name=MAMHA id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC  Build id: 7ab4544136f5
            Rep Node [rg1-rn1]      Status: RUNNING,MASTER at sequence number: 24,413 haPort: 5011
            Rep Node [rg2-rn1]      Status: RUNNING,REPLICA at sequence number: 13,277 haPort: 5012
            Rep Node [rg3-rn1]      Status: RUNNING,REPLICA at sequence number: 12,829 haPort: 5013
    Storage Node [sn2] on 192.168.144.12:6000    Zone: [name=MAMHA id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC  Build id: 7ab4544136f5
            Rep Node [rg3-rn2]      Status: RUNNING,REPLICA at sequence number: 12,829 haPort: 6013
            Rep Node [rg2-rn2]      Status: RUNNING,MASTER at sequence number: 13,277 haPort: 6012
            Rep Node [rg1-rn2]      Status: RUNNING,REPLICA at sequence number: 24,413 haPort: 6011
    Storage Node [sn3] on 192.168.144.35:7000    Zone: [name=MAMHA id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC  Build id: 7ab4544136f5
            Rep Node [rg1-rn3]      Status: RUNNING,REPLICA at sequence number: 24,413 haPort: 7011
            Rep Node [rg2-rn3]      Status: RUNNING,REPLICA at sequence number: 13,277 haPort: 7012
            Rep Node [rg3-rn3]      Status: RUNNING,MASTER at sequence number: 12,829 haPort: 7013

    Solved by setting the undocumented parameter "je.rep.minRetainedVLSNs".
    The solution is described in the NoSql forum thread: Store cleaning policy
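    For anyone hitting this with plain JE HA rather than the NoSQL admin tooling, a hedged sketch of setting that parameter on the replicated environment's config (the value is illustrative, not a recommendation):
    import com.sleepycat.je.rep.ReplicationConfig;
    // Assumption: the undocumented je.rep.minRetainedVLSNs property is accepted
    // as a generic config param; in Oracle NoSQL Database it is applied through
    // the store's administrative configuration instead of application code.
    ReplicationConfig repConfig = new ReplicationConfig();
    repConfig.setConfigParam("je.rep.minRetainedVLSNs", "100");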

  • Empty/underutilized log files not removed

    I have an application that runs the cleaner and the checkpointer explicitly (instead of relying on the database to do it).
    Here are the relevant environment settings: je.env.runCheckpointer=false, je.env.runCleaner=false, je.cleaner.minUtilization=5, je.cleaner.expunge=true.
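    For context, those properties could be applied programmatically roughly as follows (a sketch; they could equally be set in a je.properties file):
    import com.sleepycat.je.EnvironmentConfig;
    EnvironmentConfig cfg = new EnvironmentConfig();
    cfg.setConfigParam("je.env.runCheckpointer", "false"); // application drives checkpoints
    cfg.setConfigParam("je.env.runCleaner", "false");      // application drives cleaning
    cfg.setConfigParam("je.cleaner.minUtilization", "5");
    cfg.setConfigParam("je.cleaner.expunge", "true");      // delete cleaned files instead of renaming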
    When running the application, I noticed that the first few dozen log files were removed, but later (even though the cleaner was executed at regular intervals) no more log files were removed.
    I have run the DbSpace utility on the environment and found the following result:
    File      Size (KB)  % Used
    00000033      97656       0
    00000034      97655       0
    00000035      97656       0
    00000036      97656       0
    00000037      97656       0
    00000038      97655       2
    00000039      97656       0
    0000003a      97656       0
    0000003b      97655       0
    0000003c      97655       0
    0000003d      97655       0
    0000003e      97655       0
    0000003f      97656       0
    00000040      97655       0
    00000041      97656       0
    00000042      97656       0
    00000043      97656       0
    00000044      97655       0
    00000045      97655       0
    00000046      97656       0
    This goes on for a long time. I had the database tracing enabled at CONFIG level. Here are the last lines of the log just before the last log file (0x32) is removed:
    2009-05-06 08:41:51:111:CDT INFO CleanerRun 49 on file 0x30 begins backlog=2
    2009-05-06 08:41:52:181:CDT SEVERE CleanerRun 49 on file 0x30 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206347 nINsObsolete=6365 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199971 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:41:52:182:CDT INFO CleanerRun 50 on file 0x31 begins backlog=1
    2009-05-06 08:41:53:223:CDT SEVERE CleanerRun 50 on file 0x31 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205475 nINsObsolete=6319 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199144 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:41:53:224:CDT INFO CleanerRun 51 on file 0x32 begins backlog=0
    2009-05-06 08:41:54:292:CDT SEVERE CleanerRun 51 on file 0x32 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205197 nINsObsolete=6292 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=198893 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:42:24:300:CDT INFO CleanerRun 52 on file 0x33 begins backlog=1
    2009-05-06 08:42:24:546:CDT CONFIG Checkpoint 963: source=api success=true nFullINFlushThisRun=13 nDeltaINFlushThisRun=0
    2009-05-06 08:42:24:931:CDT SEVERE Cleaner deleted file 0x32
    2009-05-06 08:42:24:938:CDT SEVERE Cleaner deleted file 0x31
    2009-05-06 08:42:24:946:CDT SEVERE Cleaner deleted file 0x30
    Here are a few log lines right after the last log message with cleaner deletion (until the next checkpoint):
    2009-05-06 08:42:25:339:CDT SEVERE CleanerRun 52 on file 0x33 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=204164 nINsObsolete=6277 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=197865 nLNsCleaned=11 nLNsDead=0 nLNsMigrated=0 nLNsMarked=11 nLNQueueHits=9 nLNsLocked=0
    2009-05-06 08:42:25:340:CDT INFO CleanerRun 53 on file 0x34 begins backlog=0
    2009-05-06 08:42:26:284:CDT SEVERE CleanerRun 53 on file 0x34 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=203386 nINsObsolete=6281 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=197091 nLNsCleaned=2 nLNsDead=2 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:42:56:290:CDT INFO CleanerRun 54 on file 0x35 begins backlog=4
    2009-05-06 08:42:57:252:CDT SEVERE CleanerRun 54 on file 0x35 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205497 nINsObsolete=6312 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199164 nLNsCleaned=10 nLNsDead=3 nLNsMigrated=0 nLNsMarked=7 nLNQueueHits=6 nLNsLocked=0
    2009-05-06 08:42:57:253:CDT INFO CleanerRun 55 on file 0x39 begins backlog=4
    2009-05-06 08:42:58:097:CDT SEVERE CleanerRun 55 on file 0x39 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=204553 nINsObsolete=6301 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=198238 nLNsCleaned=2 nLNsDead=0 nLNsMigrated=0 nLNsMarked=2 nLNQueueHits=1 nLNsLocked=0
    2009-05-06 08:42:58:098:CDT INFO CleanerRun 56 on file 0x3a begins backlog=3
    2009-05-06 08:42:59:261:CDT SEVERE CleanerRun 56 on file 0x3a invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=204867 nINsObsolete=6270 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=198586 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:42:59:262:CDT INFO CleanerRun 57 on file 0x36 begins backlog=2
    2009-05-06 08:43:02:185:CDT SEVERE CleanerRun 57 on file 0x36 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206158 nINsObsolete=6359 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199786 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:43:02:186:CDT INFO CleanerRun 58 on file 0x37 begins backlog=2
    2009-05-06 08:43:03:243:CDT SEVERE CleanerRun 58 on file 0x37 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206160 nINsObsolete=6331 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199817 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:43:03:244:CDT INFO CleanerRun 59 on file 0x3b begins backlog=1
    2009-05-06 08:43:04:000:CDT SEVERE CleanerRun 59 on file 0x3b invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206576 nINsObsolete=6385 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=200179 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:43:04:001:CDT INFO CleanerRun 60 on file 0x38 begins backlog=0
    2009-05-06 08:43:08:180:CDT SEVERE CleanerRun 60 on file 0x38 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205460 nINsObsolete=6324 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=194125 nLNsCleaned=4999 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=4999
    2009-05-06 08:43:08:224:CDT INFO CleanerRun 61 on file 0x3c begins backlog=0
    2009-05-06 08:43:09:099:CDT SEVERE CleanerRun 61 on file 0x3c invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206589 nINsObsolete=6343 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=200235 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:43:24:548:CDT CONFIG Checkpoint 964: source=api success=true nFullINFlushThisRun=12 nDeltaINFlushThisRun=0
    I could not see anything fundamentally different between the log messages from when log files were removed and from when they were not. The DbSpace utility confirmed that there are plenty of log files under the minimum utilization, so I can't quite explain why the log file removal stopped all of a sudden.
    Any help would be appreciated (JE version: 3.3.75).

    Hi Bertold,
    My first guess is that one or more transactions have accidentally not been ended (committed or aborted), or cursors not closed.
    A clue is the nLNsLocked=4999 in the second set of trace messages. This means that 4999 records were locked by your application and could not be migrated by the cleaner. The cleaner will wait until these record locks are released before deleting any log files. Record locks are held by transactions and cursors.
    If this doesn't ring a bell and you need to look further, one thing you can do is print the EnvironmentStats periodically (System.out.println(Environment.getStats(null))). Take a look at the nPendingLNsProcessed and nPendingLNsLocked. The former is the number of records the cleaner attempts to migrate because they were locked earlier. The latter is the number that are still locked and cannot be migrated.
    --mark
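    A minimal sketch of the periodic stats dump suggested above (assuming an open Environment named env; getter names follow the JE EnvironmentStats javadoc):
    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentStats;
    void dumpCleanerStats(Environment env) throws DatabaseException {
        EnvironmentStats stats = env.getStats(null); // null = default StatsConfig
        // Records the cleaner re-attempts to migrate because they were locked
        // earlier, and those still locked (these block log file deletion).
        System.out.println("nPendingLNsProcessed=" + stats.getNPendingLNsProcessed()
            + " nPendingLNsLocked=" + stats.getNPendingLNsLocked());
    }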

  • How do I create a new mailbox file, not folder. My primary mailbox file is getting too large and I want to split it into multiple physical files.

    I have done this multiple times in the past (I have several files - not folders - that contain mail). However, due to the fact that I was somewhat brain damaged several years ago, I can no longer remember (or figure out) how to do it. For instance, I have a "MoreJunque" mailbox file that is (from the "Properties" dialog) at mailbox:///C:/Users/Daniel Mathews/AppData/Roaming/Thunderbird/Profiles/bxk6ngnt.default/Mail/pop.att.yahoo.com/MoreJunque. However, my inbox is in mailbox:///C:/Users/Daniel Mathews/AppData/Roaming/Thunderbird/Profiles/bxk6ngnt.default/Mail/pop.att.yahoo.com/Inbox. These are unique files that contain folders.

    right click on your account on the left and select new folder.

  • Cannot publish get error message - log file not being created

    When trying to publish a FlashHelp project, I get an error message window that says "Publishing has been cancelled. Failed to create file: (project name).log". When I click okay in the message window, the publishing process stops. However, if I look in the SSL folder, I see the log file. It is a text file.
    I had this problem in January 09, but it seemed to be an issue with the password and path in the FTP command window. I fixed it and it worked fine. However, I haven't published since the end of January. Now, when I try to publish, it is giving me the same error message. I checked and reviewed the FTP window fields and they are fine. But I'm still getting the error message and can't publish. Why?
    I need to get this problem fixed ASAP and ensure that it doesn't occur again. What's strange is that I've got 3 other projects and this is the only 1 that gets this error message.

    Yes, the generation worked. I checked the log file from the last time it worked, and it seems to be the same as the log file that is generated when I get the error message.
    I created a new FlashHelp layout and got the same error message. What's really weird is that there is a log file in the SSL folder, but when you click OK in the error message, it stops the publish function.
    Last time I had to blow away the cpd file, as if this were a corrupt project. But that gets to be painful, as I use templates to put change dates in the footers of topics, and templates get lost when you blow away the cpd.
    Any other thoughts?

  • Log file not generated

    I followed these steps:
    1. In the application, set profile FND: Debug Log Level to "Statement".
    2. Restart Apache.
    3. Run debug from Help --> Diagnostics --> Debug.
    4. Locate the debug log file, which should be in the directory given by:
    select value from v$parameter where name like 'utl_file%'
    But no log file is created, and I don't know why (these steps were provided in an SR).
    thnx

    What about "FND: Debug Log Filename for Middle-Tier" and "FND: Diagnostics" profile options?
    Note: 372209.1 - How to Collect an FND Diagnostics Trace (aka FND:Debug)
    https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=372209.1
    If the above does not help, set the debug log at the user level and check then.
    Note: 390881.1 - How To Set The Debug Log At User Level?
    https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=390881.1

  • Archived log files not registered in the Database

    I have Windows Server 2008 R2.
    I have Oracle 11g R2.
    I configured primary and standby databases on 2 physical servers; please find the verification below.
    I am using DG Broker.
    Recently I did a failover from primary to standby.
    Then I did REINSTATE DATABASE to return the old primary to standby mode.
    Then I did a switchover again.
    My problem is that archived logs are not being registered and not applied.
    SQL> select max(sequence#) from v$archived_log; 
    MAX(SEQUENCE#)
             16234
    I did ALTER SYSTEM SWITCH LOGFILE, then issued the following statement to check, and found the same number; it has not changed on either the primary or the standby.
    SQL> select max(sequence#) from v$archived_log;
    MAX(SEQUENCE#)
             16234
    Can anybody help, please?
    Regards

    Thanks for the reply.
    What I mean is: after I do ALTER SYSTEM SWITCH LOGFILE, I can see the archived log file generated on the physical disk, but when I run
    select MAX(SEQUENCE#) FROM V$ARCHIVED_LOG;
    the sequence number does not change, although it should increase by 1 whenever I do a switch logfile.
    However, I did as you asked; please find the result below:
    SQL> alter system switch logfile;
    System altered.
    SQL> /
    System altered.
    SQL> /
    System altered.
    SQL> /
    System altered.
    SQL> SELECT DB_NAME,HOSTNAME,LOG_ARCHIVED,LOG_APPLIED_02,LOG_APPLIED_03,APPLIED_TIME,LOG_ARCHIVED - LOG_APPLIED_02 LOG_GAP_02,
      2  LOG_ARCHIVED - LOG_APPLIED_03 LOG_GAP_03
      3  FROM (SELECT NAME DB_NAME FROM V$DATABASE),
      4  (SELECT UPPER(SUBSTR(HOST_NAME, 1, (DECODE(INSTR(HOST_NAME, '.'),0, LENGTH(HOST_NAME),(INSTR(HOST_NAME, '.') - 1))))) HOSTNAME FROM V$INSTANCE),
      5  (SELECT MAX(SEQUENCE#) LOG_ARCHIVED FROM V$ARCHIVED_LOG WHERE DEST_ID = 1 AND ARCHIVED = 'YES'),
      6  (SELECT MAX(SEQUENCE#) LOG_APPLIED_02 FROM V$ARCHIVED_LOG WHERE DEST_ID = 2 AND APPLIED = 'YES'),
      7  (SELECT MAX(SEQUENCE#) LOG_APPLIED_03 FROM V$ARCHIVED_LOG WHERE DEST_ID = 3 AND APPLIED = 'YES'),
      8  (SELECT TO_CHAR(MAX(COMPLETION_TIME), 'DD-MON/HH24:MI') APPLIED_TIME FROM V$ARCHIVED_LOG WHERE DEST_ID = 2 AND APPLIED = 'YES');
    DB_NAME  HOSTNAME      LOG_ARCHIVED  LOG_APPLIED_02  LOG_APPLIED_03  APPLIED_TIME  LOG_GAP_02  LOG_GAP_03
    EPPROD   CORSKMBBOR01         16252           16253          (null)  15-JAN/12:04          -1      (null)

  • Thread 1 cannot allocate new log:Checkpoint not complete

    Hi,
    I get this statement frequently in the alert log file.
    The current database is in noarchivelog mode, with 2 log groups of 2 members each.
    Please tell me how to overcome this issue.
    regards
    dba

    This may be helpful:
    cannot allocate new log
    Cannot allocate new log
    cannot allocate new log, sequence xxxx
    Cheers,
    A.

  • URGENT: New JAR file NOT being pushed to web clients.

    Greetings...
    When our web app first came to market, some of our customers were still using a dial-up connection. Therefore, since one of our web pages uses an applet, it was decided to ship the necessary JAR files to clients on an installation CD, rather than burden our dialup customers with having the JAR files pushed out to them over the line. The jar files are installed in the C:\Program Files\JavaSoft\JRE\1.3\lib\ext directory on the client computers.
    Don't ask why, but we are now attempting to have our web site push out a new JAR file to just one of our clients, but the problem I'm running into is that if the OLD jar file already exists in the C:\Program Files\JavaSoft\JRE\1.3\lib\ext directory, the NEW jar file isn't pushed out by the website. My understanding was that the system would automatically detect if the JAR file on the client was older than the JAR file on the server and would then push the newer JAR file out to the client, but obviously, I'm missing something.
    Any help/suggestions you can provide would be greatly appreciated. Following is the kludgy ASP code for this...
    <%szCustomer = getCustomer();%>
    <%if (szCustomer == "SccTest") {%>
    <!-- NEW CODE: Push out the NEW JAR file with the meters to feet change. -->
    <OBJECT classid="clsid:8AD9C840-044E-11D1-B3E9-00805F499D93" width="600" height="400" name="SccMapplet"
    codebase="http://java.sun.com/products/plugin/1.3/jinstall-13-win32.cab#Version=1,3,0,0" ID="IMSMap" VIEWASTEXT>
    <PARAM NAME="code" VALUE="com.scc.mapplet.SccMapplet">
    <param name="archive" value="SccMapplet.jar,xml.jar,iiimp.jar,jai_codec.jar,jai_core.jar,mlibwrapper_jai.jar">
    <PARAM NAME="scriptable" VALUE="true">
    <PARAM NAME="type" VALUE="application/x-java-applet;version=1.3">
    <PARAM NAME="WebAXL" VALUE="/EweData/<%=getCustomer()%>.axl">
    <PARAM NAME="Session" VALUE="<%=SCCSession.SessionID%>">
    <COMMENT>
    <EMBED type="application/x-java-applet;version=1.3" width="600" height="400"
    code="com.scc.mapplet.SccMapplet" archive="SccMapplet.jar,xml.jar,iiimp.jar,jai_codec.jar,jai_core.jar,mlibwrapper_jai.jar"
    pluginspage="http://java.sun.com/products/plugin/1.3/plugin-install.html">
    <NOEMBED>
    </COMMENT>
    No Java 2 SDK, Standard Edition v 1.3 support for APPLET!!
    </NOEMBED></EMBED>
    </OBJECT>
    <%} else {%>
    <!-- OLD CODE: Use the JAR file already installed on C:\Program Files\JavaSoft\JRE\1.3\lib\ext -->
    <object classid="clsid:8AD9C840-044E-11D1-B3E9-00805F499D93"
    width=600 height=400 ID="IMSMap" name="SccMapplet" VIEWASTEXT>
    <param NAME="code" VALUE="com/scc/mapplet/SccMapplet">
    <param NAME="name" VALUE="IMSMap">
    <param NAME="SCRIPTABLE" VALUE="TRUE">
    <param NAME="type" VALUE="application/x-java-applet;version=1.2">
    <PARAM NAME="WebAXL" VALUE="/EweData/<%=getCustomer()%>.axl">
    <PARAM NAME="Session" VALUE="<%=SCCSession.SessionID%>">
    <comment>
    <embed type="application/x-java-applet;version=1.2" width="600" height="480" code="com/esri/ae/applet/IMSMap" name="IMSMap">
    <noembed>
    </COMMENT>
    No JDK 1.2 support for APPLET!!
    </noembed></embed>
    </object>
    <%}%>

    I guess the problem is that your local jar file's classes, copied to the JRE's ext directory, have higher priority. The JRE does not store your applet jar locally, so it does not overwrite anything you may have installed; it only executes the jar file given by the browser, which may or may not cache it.
    Why don't you use the Java Web Start technology? It's pretty simple to use. Customers would have to download each new jar version only once, and Web Start will make sure the local version is up to date every time your app is started.
