Empty/underutilized log files not removed

I have an application that runs the cleaner and the checkpointer explicitly (instead of relying on the database to do it).
Here are the relevant environment settings: je.env.runCheckpointer=false, je.env.runCleaner=false, je.cleaner.minUtilization=5, je.cleaner.expunge=true.
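The driving code looks roughly like the sketch below (a minimal sketch: the class, environment path, and scheduling are illustrative; cleanLog() and checkpoint() are the standard JE calls):
import java.io.File;
import com.sleepycat.je.CheckpointConfig;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;

public class ExplicitMaintenance {
    public static void main(String[] args) throws Exception {
        EnvironmentConfig ec = new EnvironmentConfig();
        ec.setAllowCreate(true);
        ec.setTransactional(true);
        // Disable the built-in daemons; the application drives them itself.
        ec.setConfigParam("je.env.runCheckpointer", "false");
        ec.setConfigParam("je.env.runCleaner", "false");
        ec.setConfigParam("je.cleaner.minUtilization", "5");
        ec.setConfigParam("je.cleaner.expunge", "true");
        Environment env = new Environment(new File("/path/to/env"), ec);

        // Run at regular intervals by the application: clean as many files
        // as possible, then checkpoint so cleaned files can be deleted.
        int cleaned = env.cleanLog();
        CheckpointConfig force = new CheckpointConfig();
        force.setForce(true);
        env.checkpoint(force);
        System.out.println("Log files cleaned this pass: " + cleaned);

        env.close();
    }
}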
When running the application, I noticed that the first few dozen log files were removed, but later, even though the cleaner was executed at regular intervals, no more log files were removed.
I have run the DbSpace utility on the environment and found the following result:
  File    Size (KB)  % Used
00000033      97656       0
00000034      97655       0
00000035      97656       0
00000036      97656       0
00000037      97656       0
00000038      97655       2
00000039      97656       0
0000003a      97656       0
0000003b      97655       0
0000003c      97655       0
0000003d      97655       0
0000003e      97655       0
0000003f      97656       0
00000040      97655       0
00000041      97656       0
00000042      97656       0
00000043      97656       0
00000044      97655       0
00000045      97655       0
00000046      97656       0
This goes on for a long time. I had the database tracing enabled at CONFIG level. Here are the last lines of the log just before the last log file (0x32) is removed:
2009-05-06 08:41:51:111:CDT INFO CleanerRun 49 on file 0x30 begins backlog=2
2009-05-06 08:41:52:181:CDT SEVERE CleanerRun 49 on file 0x30 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206347 nINsObsolete=6365 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199971 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
2009-05-06 08:41:52:182:CDT INFO CleanerRun 50 on file 0x31 begins backlog=1
2009-05-06 08:41:53:223:CDT SEVERE CleanerRun 50 on file 0x31 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205475 nINsObsolete=6319 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199144 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
2009-05-06 08:41:53:224:CDT INFO CleanerRun 51 on file 0x32 begins backlog=0
2009-05-06 08:41:54:292:CDT SEVERE CleanerRun 51 on file 0x32 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205197 nINsObsolete=6292 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=198893 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
2009-05-06 08:42:24:300:CDT INFO CleanerRun 52 on file 0x33 begins backlog=1
2009-05-06 08:42:24:546:CDT CONFIG Checkpoint 963: source=api success=true nFullINFlushThisRun=13 nDeltaINFlushThisRun=0
2009-05-06 08:42:24:931:CDT SEVERE Cleaner deleted file 0x32
2009-05-06 08:42:24:938:CDT SEVERE Cleaner deleted file 0x31
2009-05-06 08:42:24:946:CDT SEVERE Cleaner deleted file 0x30
Here are a few log lines right after the last log message with cleaner deletion (until the next checkpoint):
2009-05-06 08:42:25:339:CDT SEVERE CleanerRun 52 on file 0x33 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=204164 nINsObsolete=6277 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=197865 nLNsCleaned=11 nLNsDead=0 nLNsMigrated=0 nLNsMarked=11 nLNQueueHits=9 nLNsLocked=0
2009-05-06 08:42:25:340:CDT INFO CleanerRun 53 on file 0x34 begins backlog=0
2009-05-06 08:42:26:284:CDT SEVERE CleanerRun 53 on file 0x34 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=203386 nINsObsolete=6281 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=197091 nLNsCleaned=2 nLNsDead=2 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
2009-05-06 08:42:56:290:CDT INFO CleanerRun 54 on file 0x35 begins backlog=4
2009-05-06 08:42:57:252:CDT SEVERE CleanerRun 54 on file 0x35 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205497 nINsObsolete=6312 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199164 nLNsCleaned=10 nLNsDead=3 nLNsMigrated=0 nLNsMarked=7 nLNQueueHits=6 nLNsLocked=0
2009-05-06 08:42:57:253:CDT INFO CleanerRun 55 on file 0x39 begins backlog=4
2009-05-06 08:42:58:097:CDT SEVERE CleanerRun 55 on file 0x39 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=204553 nINsObsolete=6301 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=198238 nLNsCleaned=2 nLNsDead=0 nLNsMigrated=0 nLNsMarked=2 nLNQueueHits=1 nLNsLocked=0
2009-05-06 08:42:58:098:CDT INFO CleanerRun 56 on file 0x3a begins backlog=3
2009-05-06 08:42:59:261:CDT SEVERE CleanerRun 56 on file 0x3a invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=204867 nINsObsolete=6270 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=198586 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
2009-05-06 08:42:59:262:CDT INFO CleanerRun 57 on file 0x36 begins backlog=2
2009-05-06 08:43:02:185:CDT SEVERE CleanerRun 57 on file 0x36 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206158 nINsObsolete=6359 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199786 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
2009-05-06 08:43:02:186:CDT INFO CleanerRun 58 on file 0x37 begins backlog=2
2009-05-06 08:43:03:243:CDT SEVERE CleanerRun 58 on file 0x37 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206160 nINsObsolete=6331 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199817 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
2009-05-06 08:43:03:244:CDT INFO CleanerRun 59 on file 0x3b begins backlog=1
2009-05-06 08:43:04:000:CDT SEVERE CleanerRun 59 on file 0x3b invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206576 nINsObsolete=6385 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=200179 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
2009-05-06 08:43:04:001:CDT INFO CleanerRun 60 on file 0x38 begins backlog=0
2009-05-06 08:43:08:180:CDT SEVERE CleanerRun 60 on file 0x38 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205460 nINsObsolete=6324 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=194125 nLNsCleaned=4999 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=4999
2009-05-06 08:43:08:224:CDT INFO CleanerRun 61 on file 0x3c begins backlog=0
2009-05-06 08:43:09:099:CDT SEVERE CleanerRun 61 on file 0x3c invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206589 nINsObsolete=6343 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=200235 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
2009-05-06 08:43:24:548:CDT CONFIG Checkpoint 964: source=api success=true nFullINFlushThisRun=12 nDeltaINFlushThisRun=0
I could not see anything fundamentally different between the log messages from when log files were removed and from when they were not. The DbSpace utility confirmed that there are plenty of log files under the minimum utilization, so I can't quite explain why the log file removal stopped all of a sudden.
Any help would be appreciated (JE version: 3.3.75).

Hi Bertold,
My first guess is that one or more transactions have accidentally not been ended (committed or aborted), or cursors have not been closed.
A clue is the nLNsLocked=4999 in the second set of trace messages. This means that 4999 records were locked by your application and could not be migrated by the cleaner. The cleaner will wait until these record locks are released before deleting any log files. Record locks are held by transactions and cursors.
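For example, the usual way to guarantee that record locks are released is to close every cursor and end every transaction, typically in finally blocks (a sketch; the class and method are illustrative):
import com.sleepycat.je.*;

class LockHygiene {
    static void scanAndUpdate(Environment env, Database db) throws DatabaseException {
        Cursor cursor = db.openCursor(null, null);
        try {
            DatabaseEntry key = new DatabaseEntry();
            DatabaseEntry data = new DatabaseEntry();
            while (cursor.getNext(key, data, null) == OperationStatus.SUCCESS) {
                // ... each record read here is locked until the cursor moves on or closes
            }
        } finally {
            cursor.close(); // releases the cursor's record locks
        }

        Transaction txn = env.beginTransaction(null, null);
        try {
            db.put(txn, new DatabaseEntry("k".getBytes()), new DatabaseEntry("v".getBytes()));
            txn.commit(); // releases the transaction's write locks
        } catch (DatabaseException e) {
            txn.abort(); // an abort releases them too
            throw e;
        }
    }
}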
If this doesn't ring a bell and you need to look further, one thing you can do is print the EnvironmentStats periodically (System.out.println(env.getStats(null))). Take a look at nPendingLNsProcessed and nPendingLNsLocked. The former is the number of records the cleaner attempts to migrate because they were locked earlier. The latter is the number that are still locked and cannot be migrated.
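A periodic dump might look like this (a sketch; env is the open Environment, and the getter names follow the stat names above):
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentStats;

class CleanerStatsDump {
    static void dump(Environment env) throws Exception {
        EnvironmentStats stats = env.getStats(null); // null selects the default StatsConfig
        System.out.println(stats); // full dump, as suggested above
        System.out.println("nPendingLNsProcessed=" + stats.getNPendingLNsProcessed()
                + " nPendingLNsLocked=" + stats.getNPendingLNsLocked());
    }
}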
--mark

Similar Messages

  • BACKGROUND JOB WAS NOT SCHEDULED/LOG FILE NOT YET GENERATED

    Hello,
    To generate the log report, the /VIRSA/ZVFATBAK program is scheduled on an hourly basis, but sometimes the report doesn't get generated, even though the background job shows as successfully finished.
    If we manually check the log report for the FFID, the below error message is displayed.
    " BACKGROUND JOB WAS NOT SCHEDULED/LOG FILE NOT YET GENERATED"
    Can anyone guide me to solve the issue.
    Thanks in advance.
    Best Regards,
    Prashant Dubey

    Hi,
    First check the status of the job by selecting it and checking the job status (Ctrl+Shift+F12).
    Since it is a periodically scheduled job, there will be a RELEASED job after every active job.
    Try copying it into another job using the copy option, and give it a new name that you will remember. The moment you copy it, you will find the copied job in SCHEDULED status.
    From there, try to run it again on an hourly basis.
    After copying the job, unschedule the old released job; otherwise two will run at the same time.
    rgds,

  • Empty Log files not deleted by Cleaner

    Hi,
    we have a NoSql database installed on 3 nodes with a replication factor of 3 (see exact topology below).
    We ran a test which consisted of the following operations repeated in a loop: store a LOB, read it, delete it.
    store.putLOB(key, new ByteArrayInputStream(source),Durability.COMMIT_SYNC, 5, TimeUnit.SECONDS);
    store.getLOB(key,Consistency.NONE_REQUIRED, 5, TimeUnit.SECONDS);
    store.deleteLOB(key, Durability.COMMIT_SYNC, 5, TimeUnit.SECONDS);
    During the test the space occupied by the database continues to grow!
    The cleaner threads are running but log these warnings:
    2015-02-03 14:32:58.936 UTC WARNING [rg3-rn2] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
    2015-02-03 14:32:58.937 UTC WARNING [rg3-rn2] JE: Cleaner has 12 files not deleted because they are protected by replication.
    2015-02-03 14:32:58.920 UTC WARNING [rg3-rn1] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
    2015-02-03 14:32:58.921 UTC WARNING [rg3-rn1] JE: Cleaner has 12 files not deleted because they are protected by replication.
    2015-02-03 14:32:58.908 UTC WARNING [rg3-rn3] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
    2015-02-03 14:32:58.909 UTC WARNING [rg3-rn3] JE: Cleaner has 12 files not deleted because they are protected by replication.
    2015-02-03 14:33:31.704 UTC INFO [rg3-rn2] JE: Chose lowest utilized file for cleaning. fileChosen: 0xc (adjustment disabled) totalUtilization: 1 bestFileUtilization: 0 isProbe: false
    2015-02-03 14:33:32.137 UTC INFO [rg3-rn2] JE: CleanerRun 13 ends on file 0xc probe=false invokedFromDaemon=true finished=true fileDeleted=false nEntriesRead=1129 nINsObsolete=64 nINsCleaned=2 nINsDead=0 nINsMigrated=2 nBINDeltasObsolete=2 nBINDeltasCleaned=0 nBINDeltasDead=0 nBINDeltasMigrated=0 nLNsObsolete=971 nLNsCleaned=88 nLNsDead=0 nLNsMigrated=88 nLNsMarked=0 nLNQueueHits=73 nLNsLocked=0 logSummary=<CleanerLogSummary endFileNumAtLastAdjustment="0xe" initialAdjustments="5" recentLNSizesAndCounts=""> inSummary=<INSummary totalINCount="68" totalINSize="7570" totalBINDeltaCount="2" totalBINDeltaSize="254" obsoleteINCount="66" obsoleteINSize="7029" obsoleteBINDeltaCount="2" obsoleteBINDeltaSize="254"/> estFileSummary=<summary totalCount="2072" totalSize="13069531" totalINCount="68" totalINSize="7570" totalLNCount="1059" totalLNSize="13024352" maxLNSize="102482" obsoleteINCount="66" obsoleteLNCount="971" obsoleteLNSize="12974449" obsoleteLNSizeCounted="971" getObsoleteSize="13019405" getObsoleteINSize="7347" getObsoleteLNSize="12974449" getMaxObsoleteSize="13019405" getMaxObsoleteLNSize="12974449" getAvgObsoleteLNSizeNotCounted="NaN"/> recalcFileSummary=<summary totalCount="2072" totalSize="13069531" totalINCount="68" totalINSize="7570" totalLNCount="1059" totalLNSize="13024352" maxLNSize="0" obsoleteINCount="66" obsoleteLNCount="971" obsoleteLNSize="12974449" obsoleteLNSizeCounted="971" getObsoleteSize="13019405" getObsoleteINSize="7347" getObsoleteLNSize="12974449" getMaxObsoleteSize="13019405" getMaxObsoleteLNSize="12974449" getAvgObsoleteLNSizeNotCounted="NaN"/> lnSizeCorrection=NaN newLnSizeCorrection=NaN estimatedUtilization=0 correctedUtilization=0 recalcUtilization=0 correctionRejected=false
    Log files are not deleted even when empty, as seen using the DbSpace utility:
    java -cp /mam2g/kv-3.2.5/lib/kvstore.jar com.sleepycat.je.util.DbSpace -h /mam2g/data/sn1/u01/rg2-rn1/env/ib
      File    Size (KB)  % Used
    00000000      12743       0
    00000001      12785       0
    00000002      12725       0
    00000003      12719       0
    00000004      12703       0
    00000005      12751       0
    00000006      12795       0
    00000007      12725       0
    00000008      12752       0
    00000009      12720       0
    0000000a      12723       0
    0000000b      12764       0
    0000000c      12715       0
    0000000d      12799       0
    0000000e      12724       1
    0000000f       5717       0
    TOTALS      196867       0
    Here is the configured topology:
    kv-> show topology
    store=MMS-KVstore  numPartitions=90 sequence=106
      zn: id=zn1 name=MAMHA repFactor=3 type=PRIMARY
      sn=[sn1] zn:[id=zn1 name=MAMHA] 192.168.144.11:5000 capacity=3 RUNNING
        [rg1-rn1] RUNNING
                 single-op avg latency=4.414467 ms   multi-op avg latency=0.0 ms
        [rg2-rn1] RUNNING
                 single-op avg latency=1.5962526 ms   multi-op avg latency=0.0 ms
        [rg3-rn1] RUNNING
                 single-op avg latency=1.3068943 ms   multi-op avg latency=0.0 ms
      sn=[sn2] zn:[id=zn1 name=MAMHA] 192.168.144.12:6000 capacity=3 RUNNING
        [rg1-rn2] RUNNING
                 single-op avg latency=1.5670061 ms   multi-op avg latency=0.0 ms
        [rg2-rn2] RUNNING
                 single-op avg latency=8.637241 ms   multi-op avg latency=0.0 ms
        [rg3-rn2] RUNNING
                 single-op avg latency=1.370075 ms   multi-op avg latency=0.0 ms
      sn=[sn3] zn:[id=zn1 name=MAMHA] 192.168.144.35:7000 capacity=3 RUNNING
        [rg1-rn3] RUNNING
                 single-op avg latency=1.4707285 ms   multi-op avg latency=0.0 ms
        [rg2-rn3] RUNNING
                 single-op avg latency=1.5334034 ms   multi-op avg latency=0.0 ms
        [rg3-rn3] RUNNING
                 single-op avg latency=9.05199 ms   multi-op avg latency=0.0 ms
      shard=[rg1] num partitions=30
        [rg1-rn1] sn=sn1
        [rg1-rn2] sn=sn2
        [rg1-rn3] sn=sn3
      shard=[rg2] num partitions=30
        [rg2-rn1] sn=sn1
        [rg2-rn2] sn=sn2
        [rg2-rn3] sn=sn3
      shard=[rg3] num partitions=30
        [rg3-rn1] sn=sn1
        [rg3-rn2] sn=sn2
        [rg3-rn3] sn=sn3
    Why are empty files not deleted by the cleaner? Why are empty log files protected by replication if all the replicas seem to be aligned with the master?
    java -jar /mam2g/kv-3.2.5/lib/kvstore.jar ping -host 192.168.144.11 -port 5000
    Pinging components of store MMS-KVstore based upon topology sequence #106
    Time: 2015-02-03 13:44:57 UTC
    MMS-KVstore comprises 90 partitions and 3 Storage Nodes
    Storage Node [sn1] on 192.168.144.11:5000    Zone: [name=MAMHA id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC  Build id: 7ab4544136f5
            Rep Node [rg1-rn1]      Status: RUNNING,MASTER at sequence number: 24,413 haPort: 5011
            Rep Node [rg2-rn1]      Status: RUNNING,REPLICA at sequence number: 13,277 haPort: 5012
            Rep Node [rg3-rn1]      Status: RUNNING,REPLICA at sequence number: 12,829 haPort: 5013
    Storage Node [sn2] on 192.168.144.12:6000    Zone: [name=MAMHA id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC  Build id: 7ab4544136f5
            Rep Node [rg3-rn2]      Status: RUNNING,REPLICA at sequence number: 12,829 haPort: 6013
            Rep Node [rg2-rn2]      Status: RUNNING,MASTER at sequence number: 13,277 haPort: 6012
            Rep Node [rg1-rn2]      Status: RUNNING,REPLICA at sequence number: 24,413 haPort: 6011
    Storage Node [sn3] on 192.168.144.35:7000    Zone: [name=MAMHA id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC  Build id: 7ab4544136f5
            Rep Node [rg1-rn3]      Status: RUNNING,REPLICA at sequence number: 24,413 haPort: 7011
            Rep Node [rg2-rn3]      Status: RUNNING,REPLICA at sequence number: 13,277 haPort: 7012
            Rep Node [rg3-rn3]      Status: RUNNING,MASTER at sequence number: 12,829 haPort: 7013

    Solved by setting an undocumented parameter, je.rep.minRetainedVLSNs.
    The solution is described in the NoSQL forum thread "Store cleaning policy".
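    For reference, on Oracle NoSQL Database a JE property like this is normally pushed to the replication nodes through the configProperties parameter; the exact plan syntax and the value below are assumptions and may vary by release:
    kv-> plan change-parameters -all-rns -wait -params "configProperties=je.rep.minRetainedVLSNs=100000"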

  • Log files not being removed.

    Hello,
    I've upgraded an application from Berkeley DB 5.1.25 to 5.3.21, and after that, log files are no longer automatically removed. This is the only change in the application. It's an application written in C.
    The environment of the application is created with the flag DB_LOG_AUTO_REMOVE
    dbenv->log_set_config(dbenv, DB_LOG_AUTO_REMOVE, TRUE).
    The application has a thread to periodically checkpoint the data
    dbenv->txn_checkpoint(dbenv, 0, 0, 0)
    So far, so good: with version 5.1.25 this was enough to remove unused log files (I don't need to be able to do catastrophic recovery). But it no longer works with version 5.3.21.
    If I run db_archive (no options), it shows nothing, suggesting that all log files are still needed. But if I run db_hot_backup on the database, all but the last log files are removed (in the backup), as wanted.
    Note: I don't normally want to run db_archive or any other external tool to remove unused log files. I hope what is inside the application is enough to remove them.
    Is this a known issue, did something change, or can you suggest something to look for?
    Thanks for your help
    José-Marcio

    Thank you for giving us a test program. This helped tremendously in fully understanding what you are doing. In 5.3 we fixed a bug dealing with the way log files are archived in an HA environment. What you are running into is a consequence of that bug fix.
    In the test program you are using DB_INIT_REP. This is the key indicating that you want an HA environment. With HA, there is a master and some number of read-only clients. By default we treat the initiating database as the master. This is what is happening in your case. In an HA (replicated) environment, we cannot archive log files until we can be assured that the clients have applied the contents of those log files.
    Our belief is that you are not really running in an HA environment and you do not need the DB_INIT_REP flag. In our initial testing, where we said it worked for us, this was because we did not use the DB_INIT_REP flag, as there was no mention of replication being needed in the post.
    Recommendation: Please remove the use of the DB_INIT_REP flag or properly set up an HA environment (details in our docs).
    thanks
    mike

  • Log4j log file not being created

    Using WebSphere for a web app. At first I was getting the error log4j:WARN No appenders could be found for logger....
    So I created the properties file and, I assume, referenced it correctly. The error went away and my logging messages are showing up in the WebSphere console, but the .log file specified in my log4j.properties file is not being written to... it is only writing to my SystemOut.log.
    If I remove the ROOT.File line it still does not create the file (I've done a search of the IBM directory).
    #Default log level to ERROR. Other levels are INFO and DEBUG.
    log4j.rootLogger=INFO,ROOT
    log4j.appender.ROOT=org.apache.log4j.RollingFileAppender
    log4j.appender.ROOT.File=c:\myapplication.log
    log4j.appender.ROOT.MaxFileSize=1000KB
    #Keep 5 old files around.
    log4j.appender.ROOT.MaxBackupIndex=5
    log4j.appender.ROOT.layout=org.apache.log4j.PatternLayout
    #Format almost same as WebSphere's common log format.
    log4j.appender.ROOT.layout.ConversionPattern=[%d] %t %c %-5p - %m%n
    #Optionally override log level of individual packages or classes
    log4j.logger.com.webage.ejbs=INFO       
    private static final Logger logger = Logger.getLogger(LoginAction.class);

    public ActionForward execute(ActionMapping mapping, ActionForm form,
            HttpServletRequest request, HttpServletResponse response)
            throws IOException, ServletException {
        initializeLogger();
        // ...
    }

    private void initializeLogger() {
        org.apache.log4j.BasicConfigurator.configure();
        // Trying the above just to get it to work, because by default this
        // should look in WEB-INF/classes/log4j.properties... I thought.
        /*
        try {
            String log4jUrl = servlet.getServletContext().getInitParameter(
                    "LOG4J_XML");
            if (!(log4jUrl == null || log4jUrl.equals("")))
                DOMConfigurator.configure(servlet.getServletContext()
                        .getResource(log4jUrl));
        } catch (MalformedURLException e) {
            e.printStackTrace();
        } catch (FactoryConfigurationError e) {
            e.printStackTrace();
        }
        */
    }

    OK, I changed to an XML file and found a few things out.
    Now when I debug, the logger that was created has an empty level... but if I look at the parent logger, it is correctly pulling the root logger from my XML (if I change the priority attribute, it changes when debugging the code).
    <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/" debug="true">
        <!-- this appender would be the same as having a System.out -->
        <appender name="console" class="org.apache.log4j.ConsoleAppender">
            <param name="Target" value="System.out"/>
            <layout class="org.apache.log4j.PatternLayout">
                <param name="ConversionPattern" value="%-5p %c{1} - %m%n"/>
            </layout>
        </appender>
        <appender name="rollingFileAppender" class="org.apache.log4j.RollingFileAppender">
            <!-- name and location of the file to log to -->
            <param name="File" value="c:/appLog.log"/>
            <!-- the maximum size the file will be before it rolls the file -->
            <param name="MaxFileSize" value="1000kb"/>
            <!-- the number of backups you want to maintain -->
            <param name="MaxBackupIndex" value="5"/>
            <!--
                This is the layout of your messages; you can do a lot with this.
                See the javadocs for the class PatternLayout for an explanation of
                the different values you can have.
            -->
            <layout class="org.apache.log4j.PatternLayout">
                <param name="ConversionPattern" value="%t %-5p %c{2} - %m%n"/>
            </layout>
        </appender>
        <root>
            <priority value="error"/>
            <appender-ref ref="rollingFileAppender"/>
            <appender-ref ref="console"/>
        </root>
    </log4j:configuration>

  • Temp Log file not shrinking

    hi,
    I am using MS SQL 2005 for my SAP ERP 6. After running the transaction SGEN, the temp log file grew and reached 48 GB. Now when I try to shrink it with the MS SQL tools or the DBCC SHRINKFILE command, it shows that only 1.5 GB is used and nearly 46.5 GB is unused, but the main problem is that it is not shrinking.
    help needed.....
    thanks in advance

    Hi,
    Execute the following commands in sequence.
    1. BACKUP LOG <SID> WITH NO_LOG
    2. DBCC SHRINKFILE (<name_logfile>, <size>)
    or
    Just execute.
    BACKUP LOG <SID> WITH TRUNCATE_ONLY
    The above command will only remove the unused space (inactive parts) from the log file and will not do a backup of the log file.
    Refer [SAP Note 625546 - Size of transaction log file is too big|https://service.sap.com/sap/support/notes/625546] to get more info.
    Regards,
    Bhavik G. Shroff

  • Distiller produces log file, not a pdf

    I'm trying to create a PDF from a quark file and getting this:
    %%[ Error: ioerror; OffendingCommand: charpath ]%%
    It's only happening with one file, but it's one pretty important file.
    I tried creating a .ps file, putting it on another computer (an identical computer), and it PDF'd perfectly... which leads me to believe it's a problem with Distiller, not the file.
    Any thoughts?
    Working in Mac 10.5.5, Distiller 8.1.2, Quark 7.3

    >But wouldn't it indicate the bad font in the log?
    Not in general, no.
    >
    >Also, there appears to be no rhyme or reason: we work from templates. All of the pages come from the same quark templates and use the same fonts; some will pdf, others won't.
    It's all there is to go on in the log.
    >
    >And again, when I brought the ps file to another computer, Distiller PDF'd it fine.
    Local font files could be relevant.
    >
    >I tried removing the art, too, which didn't work either.
    The art it won't be. "charpath" is a PostScript instruction directly
    related to text. (Unless it is art in the shape of a character, or
    some such).
    Try dropping all the text from this document. If it works now, you can
    use divide and conquer techniques to isolate the problem.
    Aandi Inston

  • OES2 SP3 AFP How to empty AFP log file

    Hello All,
    I can find no information on how to empty the AFP log file /var/log/afptcpd/afptcp.log. It has grown to 1.2 GB and it is very uncomfortable to look for information in this big file.
    Any ideas ?
    Thank you
    Andreas

    If you get a lot of AFP activity, you're probably best off if you just set the log to rotate.
    Create a file under /etc/logrotate.d/ and name it whatever you want.
    Then just enter something like this:
    /var/log/afptcpd/afptcp.log {
        compress
        dateext
        maxage 365
        rotate 99
        size=+4096k
        notifempty
        missingok
        create 644 root root
        postrotate
            /etc/init.d/novell-afptcpd reload
        endscript
    }

  • Log file not generated

    I followed these steps:
    1. In the application, set the profile FND: Debug Log Level to "Statement".
    2. Restart Apache.
    3. Run the debug from Help --> Diagnostics --> Debug.
    4. Retrieve the debug log file, which should be in the directory shown by:
    select value from v$parameter where name like 'utl_file%'
    But there is no log file created and I don't know why (these steps were provided in an SR).
    thnx

    What about "FND: Debug Log Filename for Middle-Tier" and "FND: Diagnostics" profile options?
    Note: 372209.1 - How to Collect an FND Diagnostics Trace (aka FND:Debug)
    https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=372209.1
    If the above does not help, set the debug log at the user level and check then.
    Note: 390881.1 - How To Set The Debug Log At User Level?
    https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=390881.1

  • Server has not enough memory for operation (Some .rpt files not removing from Temp folder )

    We have a web application developed on the ASP.NET 4.0 framework and published on IIS. And we are using version 13_0_8 of CR.
    I am creating report files and exporting them as PDF, and I am disposing of the streams and report documents at the end. Initially there wasn't any problem, and the temporary files created by Crystal Reports were all deleted. But now requests to the web application have increased to about 50,000 a day, some .rpt files are staying in the Temp folder, and I can't delete them. After recycling the application pool, all files are removed by IIS. Then, after 1 or 2 hours, new .rpt files are created in the Temp folder. And after a while, the application throws "Server has not enough memory for operation". IMHO the reason is the temp files. Here is the code I am using to export the report as PDF.
    Questions:
    1. Is the reason of this exception is temp files in Temp folder?
    2. What is wrong in that code?
    ReportDocument report = DownloadPDF.GetReport(id);
    MemoryStream stream = (MemoryStream)report.ExportToStream(CrystalDecisions.Shared.ExportFormatType.PortableDocFormat);
    Response.ContentType = "application/pdf";
    Response.AddHeader("content-disposition", "attachment; filename=" + id + ".pdf");
    report.Close();
    report.Dispose();
    try
    {
        Response.BinaryWrite(stream.ToArray());
        Response.End();
    }
    catch (Exception)
    {
        // Response.End() throws ThreadAbortException; intentionally ignored.
    }
    finally
    {
        stream.Flush();
        stream.Close();
        stream.Dispose();
    }
    Here is the StackTrace

    Hi Farhad
    At 50,000 requests, you are more than likely running into the CR engine limit. E.g.; you're pushing way too hard... The following will be good reads for you:
    Crystal Reports 2008 Component Engine Scalability | SCN
    (The above doc does apply to current versions of CR - e.g.; no changes.)
    Crystal Reports Maximum Report Processing Jobs ... | SCN
    Scaling Crystal Reports for Visual Studio .NET
    Choosing the Right Business Objects SDK for Your Needs
    Choose the Right SDK for the Right Task
    How Can I Optimize Scalability?
    All of the above apply to your version of CR, and thus the next question is how to proceed:
    1) Bigger, faster servers will not hurt.
    2) Web farms.
    How Do I Use Crystal Reports in a Web Farm or Web Garden?
    3) Crystal Reports Application Server, or perhaps even SAP BusinessObjects BI Platform 4.1
    Crystal Enterprise Report Application Server - Overview
    - Ludek
    Senior Support Engineer AGS Product Support, Global Support Center Canada
    Follow us on Twitter

  • Why is there no error when checkpointing after db log files are removed?

    I would like to test a scenario where an application's embedded database is corrupted somehow. The simplest test I could think of was removing the database log files while the application is running. However, I can't seem to get any failure. To demonstrate, below is a code snippet that shows what I am trying to do. (I am using JE 3.3.75 on Mac OS 10.5.6):
    import java.io.File;
    import java.io.FilenameFilter;
    import com.sleepycat.je.CheckpointConfig;
    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;

    public class FileRemovalTest {
        public static void main(String[] args) throws Exception {
            // Set up the DB environment
            EnvironmentConfig ec = new EnvironmentConfig();
            ec.setAllowCreate(true);
            ec.setTransactional(true);
            ec.setConfigParam(EnvironmentConfig.ENV_RUN_CLEANER, "false");
            ec.setConfigParam(EnvironmentConfig.ENV_RUN_CHECKPOINTER, "false");
            ec.setConfigParam(EnvironmentConfig.CLEANER_EXPUNGE, "true");
            ec.setConfigParam("java.util.logging.FileHandler.on", "true");
            ec.setConfigParam("java.util.logging.level", "FINEST");
            Environment env = new Environment(new File("."), ec);

            // Create a database
            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            dbConfig.setTransactional(true);
            Database db = env.openDatabase(null, "test", dbConfig);

            // Insert an entry and checkpoint the database
            db.put(
                null,
                new DatabaseEntry("key".getBytes()),
                new DatabaseEntry("value".getBytes()));
            CheckpointConfig checkpointConfig = new CheckpointConfig();
            checkpointConfig.setForce(true);
            env.checkpoint(checkpointConfig);

            // Delete the DB log files
            File[] dbFiles = new File(".").listFiles(new DbFilenameFilter());
            if (dbFiles != null) {
                for (File file : dbFiles) {
                    file.delete();
                }
            }

            // Add another entry and checkpoint the database again.
            db.put(
                null,
                new DatabaseEntry("key2".getBytes()),
                new DatabaseEntry("value2".getBytes()));
            // Q: Why does this 'put' succeed?

            env.checkpoint(checkpointConfig);
            // Q: Why does this checkpoint succeed?

            // Close the database and the environment
            db.close();
            env.close();
        }

        private static class DbFilenameFilter implements FilenameFilter {
            public boolean accept(File dir, String name) {
                return name.endsWith(".jdb");
            }
        }
    }
    This is what I see in the logs:
    2009-03-05 12:53:30:631:CST CONFIG Recovery w/no files.
    2009-03-05 12:53:30:677:CST FINER Ins: bin=2 ln=1 lnLsn=0x0/0xe9 index=0
    2009-03-05 12:53:30:678:CST FINER Ins: bin=5 ln=4 lnLsn=0x0/0x193 index=0
    2009-03-05 12:53:30:688:CST FINE Commit:id = 1 numWriteLocks=1 numReadLocks = 0
    2009-03-05 12:53:30:690:CST FINEST size interval=0 lastCkpt=0x0/0x0 time interval=0 force=true runnable=true
    2009-03-05 12:53:30:703:CST FINER Ins: bin=8 ln=7 lnLsn=0x0/0x48b index=0
    2009-03-05 12:53:30:704:CST CONFIG Checkpoint 1: source=recovery success=true nFullINFlushThisRun=4 nDeltaINFlushThisRun=0
    2009-03-05 12:53:30:705:CST CONFIG Recovery finished: Recovery Info null> useMinReplicatedNodeId=0 useMaxNodeId=0 useMinReplicatedDbId=0 useMaxDbId=0 useMinReplicatedTxnId=0 useMaxTxnId=0 numMapINs=0 numOtherINs=0 numBinDeltas=0 numDuplicateINs=0 lnFound=0 lnNotFound=0 lnInserted=0 lnReplaced=0 nRepeatIteratorReads=0
    2009-03-05 12:53:30:709:CST FINEST Environment.open: name=test dbConfig=allowCreate=true
    exclusiveCreate=false
    transactional=true
    readOnly=false
    duplicatesAllowed=false
    deferredWrite=false
    temporary=false
    keyPrefixingEnabled=false
    2009-03-05 12:53:30:713:CST FINER Ins: bin=2 ln=10 lnLsn=0x0/0x7be index=1
    2009-03-05 12:53:30:714:CST FINER Ins: bin=5 ln=11 lnLsn=0x0/0x820 index=1
    2009-03-05 12:53:30:718:CST FINE Commit:id = 2 numWriteLocks=0 numReadLocks = 0
    2009-03-05 12:53:30:722:CST FINEST Database.put key=107 101 121 data=118 97 108 117 101
    2009-03-05 12:53:30:728:CST FINER Ins: bin=13 ln=12 lnLsn=0x0/0x973 index=0
    2009-03-05 12:53:30:729:CST FINE Commit:id = 3 numWriteLocks=1 numReadLocks = 0
    2009-03-05 12:53:30:729:CST FINEST size interval=0 lastCkpt=0x0/0x581 time interval=0 force=true runnable=true
    2009-03-05 12:53:30:735:CST FINER Mod: bin=5 ln=4 lnIdx=0 oldLnLsn=0x0/0x193 newLnLsn=0x0/0xb61
    2009-03-05 12:53:30:736:CST FINER Mod: bin=5 ln=11 lnIdx=1 oldLnLsn=0x0/0x820 newLnLsn=0x0/0xc3a
    2009-03-05 12:53:30:737:CST FINER Ins: bin=8 ln=15 lnLsn=0x0/0xd38 index=0
    2009-03-05 12:53:30:738:CST CONFIG Checkpoint 2: source=api success=true nFullINFlushThisRun=6 nDeltaINFlushThisRun=0
    2009-03-05 12:53:30:741:CST FINEST Database.put key=107 101 121 50 data=118 97 108 117 101 50
    2009-03-05 12:53:30:742:CST FINER Ins: bin=13 ln=16 lnLsn=0x0/0xeaf index=1
    2009-03-05 12:53:30:743:CST FINE Commit:id = 4 numWriteLocks=1 numReadLocks = 0
    2009-03-05 12:53:30:744:CST FINEST size interval=0 lastCkpt=0x0/0xe32 time interval=0 force=true runnable=true
    2009-03-05 12:53:30:746:CST FINER Mod: bin=5 ln=4 lnIdx=0 oldLnLsn=0x0/0xb61 newLnLsn=0x0/0x1166
    2009-03-05 12:53:30:747:CST FINER Mod: bin=5 ln=11 lnIdx=1 oldLnLsn=0x0/0xc3a newLnLsn=0x0/0x11e9
    2009-03-05 12:53:30:748:CST FINER Ins: bin=8 ln=17 lnLsn=0x0/0x126c index=0
    2009-03-05 12:53:30:748:CST CONFIG Checkpoint 3: source=api success=true nFullINFlushThisRun=4 nDeltaINFlushThisRun=0
    2009-03-05 12:53:30:750:CST FINEST Database.close: name=test
    2009-03-05 12:53:30:751:CST FINE Close of environment . started
    2009-03-05 12:53:30:751:CST FINEST size interval=0 lastCkpt=0x0/0x1363 time interval=0 force=true runnable=true
    2009-03-05 12:53:30:754:CST FINER Mod: bin=5 ln=4 lnIdx=0 oldLnLsn=0x0/0x1166 newLnLsn=0x0/0x14f8
    2009-03-05 12:53:30:755:CST FINER Mod: bin=5 ln=11 lnIdx=1 oldLnLsn=0x0/0x11e9 newLnLsn=0x0/0x15a9
    2009-03-05 12:53:30:756:CST FINER Ins: bin=8 ln=18 lnLsn=0x0/0x16ab index=0
    2009-03-05 12:53:30:757:CST CONFIG Checkpoint 4: source=close success=true nFullINFlushThisRun=4 nDeltaINFlushThisRun=0
    2009-03-05 12:53:30:757:CST FINE About to shutdown daemons for Env .

    Hi,
    OS X, being Unix-like, probably isn't actually deleting file 00000000.jdb since JE still has it open -- the file deletion is deferred until it is closed. JE keeps N files open, where N is configurable.
    We do corruption testing ourselves, in the following test by overwriting a file and then attempting to read back the entire database:
    test/com/sleepycat/je/util/DbScavengerTest.java
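    Outside of JE, the deferred-unlink behavior is easy to demonstrate with plain java.io (a minimal sketch, not JE-specific; on Unix-like systems the data stays readable until the last open handle is closed):
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileWriter;

    public class UnlinkOpenFile {
        public static void main(String[] args) throws Exception {
            File f = new File("scratch.dat");
            FileWriter w = new FileWriter(f);
            w.write("hello");
            w.close();

            FileInputStream in = new FileInputStream(f); // hold the file open
            System.out.println("delete() returned: " + f.delete()); // true on Unix-like systems
            System.out.println("still readable: " + (char) in.read()); // prints 'h'
            in.close(); // the disk space is reclaimed only once the last handle closes
        }
    }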
    --mark

  • Archived log files not registered in the Database

    I have Windows Server 2008 R2.
    I have Oracle 11g R2.
    I configured primary and standby databases on 2 physical servers; please find the verification below.
    I am using DG Broker.
    Recently I did a failover from primary to standby.
    Then I did REINSTATE DATABASE to return the old primary to standby mode.
    Then I did a switchover again.
    The problem I have is that archive logs are not registered and not applied.
    SQL> select max(sequence#) from v$archived_log; 
    MAX(SEQUENCE#)
             16234
    I did ALTER SYSTEM SWITCH LOGFILE, then issued the following statement to check, and I found the same number; it has not changed on either primary or standby.
    SQL> select max(sequence#) from v$archived_log;
    MAX(SEQUENCE#)
             16234
    Any body can help please?
    Regards

    Thanks for the reply.
    What I mean is that after I do ALTER SYSTEM SWITCH LOGFILE, I can see the archived log files being generated on the physical disk, but when I run
    select MAX(SEQUENCE#) FROM V$ARCHIVED_LOG;
    the sequence number has not changed; it should increase by 1 whenever I do a logfile switch.
    However, I did as you asked; please find the result below:
    SQL> alter system switch logfile;
    System altered.
    SQL> /
    System altered.
    SQL> /
    System altered.
    SQL> /
    System altered.
    SQL> SELECT DB_NAME,HOSTNAME,LOG_ARCHIVED,LOG_APPLIED_02,LOG_APPLIED_03,APPLIED_TIME,LOG_ARCHIVED - LOG_APPLIED_02 LOG_GAP_02,
      2  LOG_ARCHIVED - LOG_APPLIED_03 LOG_GAP_03
      3  FROM (SELECT NAME DB_NAME FROM V$DATABASE),
      4  (SELECT UPPER(SUBSTR(HOST_NAME, 1, (DECODE(INSTR(HOST_NAME, '.'),0, LENGTH(HOST_NAME),(INSTR(HOST_NAME, '.') - 1))))) HOSTNAME FROM V$INSTANCE),
      5  (SELECT MAX(SEQUENCE#) LOG_ARCHIVED FROM V$ARCHIVED_LOG WHERE DEST_ID = 1 AND ARCHIVED = 'YES'),
      6  (SELECT MAX(SEQUENCE#) LOG_APPLIED_02 FROM V$ARCHIVED_LOG WHERE DEST_ID = 2 AND APPLIED = 'YES'),
      7  (SELECT MAX(SEQUENCE#) LOG_APPLIED_03 FROM V$ARCHIVED_LOG WHERE DEST_ID = 3 AND APPLIED = 'YES'),
      8  (SELECT TO_CHAR(MAX(COMPLETION_TIME), 'DD-MON/HH24:MI') APPLIED_TIME FROM V$ARCHIVED_LOG WHERE DEST_ID = 2 AND APPLIED = 'YES');
    DB_NAME  HOSTNAME       LOG_ARCHIVED  LOG_APPLIED_02  LOG_APPLIED_03  APPLIED_TIME  LOG_GAP_02  LOG_GAP_03
    EPPROD   CORSKMBBOR01          16252           16253          (null)  15-JAN/12:04          -1      (null)

  • Online redo log files being removed physically

    Grid Infra version: 11.2.0.4
    RDBMS Version: 11.2.0.4
    Although this is a RAC DB, this is not a RAC-specific question. Hence posting it here.
    A few months back, I remember issuing a command similar to the one below (DROP LOGFILE GROUP ...) and the redo log files were still physically present in the diskgroup.
    If I remember correctly, the file is not deleted physically so that we can use the REUSE functionality (ALTER DATABASE ADD LOGFILE MEMBER '+REDO/orcl/onlinelog/redo1b.log' REUSE TO GROUP 11;), i.e. you can use the REUSE clause to add a logfile of the same name, which is physically present in the OS filesystem/diskgroup, to a redo log group.
    But today, after I issued the below command, I checked the diskgroup location from ASMCMD
    SQL> alter database drop logfile group 31;
    Database altered.
    From ASMCMD, I can see that the file has disappeared physically. Is this a new feature in 11.2.0.4, or am I missing something here?
    ASMCMD> ls +DATA/msblprd/onlinelog/group_31.548.833154995
    ASMCMD-8002: entry 'group_31.548.833154995' does not exist in directory '+DATA/msblprd/onlinelog/'

    Just to add to what Aman has said.
    It is bad practice not to let OMF decide the placement of online redo logs, especially when you use ASM, because of this issue.
    Executing the rm command in Linux/Unix is easy, but dropping ASM aliases in the disk group can be a hassle.
    This is documented.
    "When a redo log member is dropped from the database, the operating system file is not deleted from disk. Rather, the control files of the associated database are updated to drop the member from the database structure. After dropping a redo log file, ensure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped redo log file."
    http://docs.oracle.com/cd/E11882_01/server.112/e25494/onlineredo.htm#ADMIN11324
    BTW, you don't even need to set db_create_online_log_dest_n to enable OMF for ORLs.
    SQL> show parameter log_dest
    NAME                                 TYPE        VALUE
    db_create_online_log_dest_1          string
    db_create_online_log_dest_2          string
    db_create_online_log_dest_3          string
    db_create_online_log_dest_4          string
    db_create_online_log_dest_5          string
    SQL> show parameter db_create_file_dest
    NAME                                 TYPE        VALUE
    db_create_file_dest                  string      +MBL_DATA
    alter database add logfile thread 4
    group 31 ('+MBL_DATA','+MBL_FRA') size 4096M,
    group 32 ('+MBL_DATA','+MBL_FRA') size 4096M,
    group 33 ('+MBL_DATA','+MBL_FRA') size 4096M,
    group 34 ('+MBL_DATA','+MBL_FRA') size 4096M ;
    Database altered.
    And redo logs will be neatly placed as shown below
       INST     GROUP# MEMBER                                             STATUS           ARC
             4         31 +MBL_DATA/bsblprd/onlinelog/group_31.276.832605441 UNUSED           YES
                          +MBL_FRA/bsblprd/onlinelog/group_31.297.832605445  UNUSED           YES
                       32 +MBL_DATA/bsblprd/onlinelog/group_32.547.832605451 UNUSED           YES
                          +MBL_FRA/bsblprd/onlinelog/group_32.372.832605457  UNUSED           YES
                       33 +MBL_DATA/bsblprd/onlinelog/group_33.548.832605463 UNUSED           YES
                          +MBL_FRA/bsblprd/onlinelog/group_33.284.832605469  UNUSED           YES
                       34 +MBL_DATA/bsblprd/onlinelog/group_34.549.832605475 UNUSED           YES
                          +MBL_FRA/bsblprd/onlinelog/group_34.359.832605481  UNUSED           YES

  • Archived log file not displaying

    While navigating around the "home" page for OCS as an administrator...I was trying to run a report under Reports>Conferences>Diagnostics.
    The links says:
    Click the link below to view comprehensive conference diagnostics. To see the log file correctly, use Internet Explorer 6.0 or higher.
    I am using IE 6 and the page shows up as being done...but it is blank. Any idea what is wrong? The URL reads:
    https://mywebserver/imtapp/logs/imtLogs.jsp?fileName=D:/ocs_onebox/mtier/imeeting/logs/sessions/12.20.2004/10000-clbsvr_OCS_home_mid.mywebserver.imt-collab.0-06_34_01.xml
    The file is there on the filesystem.
    TIA.

    "Stages" means transformations in the data flow...
    Transformation names are not displayed correctly in the log file.
    For example, if I give the name "TC_table_name" to a Table Comparison transformation, only "Table Comparison" is displayed in the log file.

  • New Log Files Not 100% Utilization

    I just recently deployed my web app with new configurations requiring my BDB to have at least 65% disk utilization (up from the default 50%) and 25% minimum file utilization (up from the default 5%). On startup of my app, I temporarily coded an env.cleanLog() to force a clean of the logs to bring the DB up to these utilization parameters (roughly the batch-clean idiom sketched below).
    I've deployed the app, and it's been chugging away at the clean process now for about 2.5 hours. It is generating a new 100 MB log file approximately every minute as it attempts to compact the DB. However, at 2.5 hours, I just ran DbSpace and the DB utilization is only up to 51%. Perhaps most surprising are the following two facts:
    1) The newly created log files from the cleaner are nowhere near 100% utilization.
    2) Some of the newly created log files have already been cleaned (deleted) themselves.
    There are 0 updates happening to the BDB while this process is running. My understanding was that when the cleaner removes old files and creates new files, only good/valid data is written to the new files. However, this doesn't seem to be the case. Can someone enlighten me? I'm happy to read docs if someone can point me in the right direction.
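    The batch-clean idiom referred to above, as a minimal sketch (the method wrapper is illustrative; cleanLog/checkpoint are the standard JE calls):
    import com.sleepycat.je.CheckpointConfig;
    import com.sleepycat.je.Environment;

    class StartupClean {
        static void cleanFully(Environment env) throws Exception {
            int total = 0;
            int cleaned;
            // Keep cleaning until the cleaner finds nothing more to do.
            while ((cleaned = env.cleanLog()) > 0) {
                total += cleaned;
            }
            // Checkpoint so the files cleaned above become deletable.
            CheckpointConfig force = new CheckpointConfig();
            force.setForce(true);
            env.checkpoint(force);
            System.out.println("Log files cleaned: " + total);
        }
    }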
    Thanks.

    Sorry, somehow I missed that you weren't doing any updates, even though you said so. I apologize.
    However, your cache size is much too small for your data set. Running DbCacheSize with a rough approximation of your data set size gives:
    ~/je.cvs$ java -jar build/lib/je.jar DbCacheSize -records 50000000 -key 30 -data 300
    Inputs: records=50000000 keySize=30 dataSize=300 nodeMax=128 binMax=128 density=80% overhead=10%
    === Cache Sizing Summary ===
        Cache Size       Btree Size    Description
     3,543,168,702    3,188,851,832   Minimum, internal nodes only
     3,963,943,955    3,567,549,560   Maximum, internal nodes only
    22,209,835,368   19,988,851,832   Minimum, internal nodes and leaf nodes
    22,630,610,622   20,367,549,560   Maximum, internal nodes and leaf nodes
    === Memory Usage by Btree Level ===
    Minimum Bytes    Maximum Bytes      Nodes    Level
    3,157,713,227    3,532,713,035    488,281        1
       30,834,656       34,496,480      4,768        2
          297,482          332,810         46        3
            6,467            7,235          1        4
    Apparently when you started, the cleaner was "behind" -- utilization was low. Without enough cache to hold the internal nodes -- 3.5 GB according to DbCacheSize -- the cleaner will either take a very long time or never be able to clean up to 50% utilization.
    Do you really only have that much memory available, or is this just a dev environment issue?
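    For scale, sizing the JE cache to the DbCacheSize estimate would look something like this (a sketch; the 4 GB figure is taken from the "internal nodes only" numbers above):
    import com.sleepycat.je.EnvironmentConfig;

    class CacheSizing {
        static EnvironmentConfig configWithBigCache() {
            EnvironmentConfig ec = new EnvironmentConfig();
            // ~3.5-4 GB is needed to hold the internal nodes alone,
            // per the DbCacheSize output above.
            ec.setCacheSize(4L * 1024 * 1024 * 1024);
            return ec;
        }
    }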
    --mark
