Temp Log file not shrinking

Hi,
I am using MS SQL Server 2005 for my SAP ERP 6.0 system. After running transaction SGEN, the transaction log file grew and reached 48 GB. When I try to shrink it with the SQL Server tools or with the DBCC SHRINKFILE command, it reports that only about 1.5 GB is used and nearly 46.5 GB is unused, but the main problem is that the file does not shrink.
Help needed...
Thanks in advance

Hi,
Execute the following commands in sequence.
1. BACKUP LOG <SID> WITH NO_LOG
2. DBCC SHRINKFILE (<name_logfile>, <size>)
or
Just execute.
BACKUP LOG <SID> WITH TRUNCATE_ONLY
The above command only removes the unused space (the inactive part) from the log file; it does not take a backup of the log.
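For reference, the full sequence might look like this (a sketch only; the database name PRD and the logical log file name PRDLOG1 are hypothetical, so look up the real logical name first):
USE PRD;
-- Look up the logical name and current size of the log file (size is in 8 KB pages)
SELECT name, size/128 AS size_mb FROM sys.database_files WHERE type_desc = 'LOG';
-- Clear the inactive part of the log (TRUNCATE_ONLY exists in SQL Server 2005 but was removed in later releases)
BACKUP LOG PRD WITH TRUNCATE_ONLY;
-- Shrink the log file to roughly 1 GB (the target size is in MB)
DBCC SHRINKFILE (PRDLOG1, 1024);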
Refer [SAP Note 625546 - Size of transaction log file is too big|https://service.sap.com/sap/support/notes/625546] to get more info.
Regards,
Bhavik G. Shroff

Similar Messages

  • BACKGROUND JOB WAS NOT SCHEDULED/LOG FILE NOT YET GENERATED

    Hello,
    To generate the log report, the /VIRSA/ZVFATBAK program is scheduled on an hourly basis, but sometimes the report does not get generated, even though the background job shows as successfully finished.
    If we manually check the log report for the FFID, the error message below is displayed:
    " BACKGROUND JOB WAS NOT SCHEDULED/LOG FILE NOT YET GENERATED"
    Can anyone guide me to solve this issue?
    Thanks in advance.
    Best Regards,
    Prashant Dubey

    Hi,
    First, check the status of the job by selecting it and displaying the job status (Ctrl+Shift+F12).
    Since it is a periodically scheduled job, there will be a RELEASED job after every active one.
    Try copying it into another job using the copy option, and give it a new name that you will remember.
    The moment you copy it, you will find the copied job in SCHEDULED status.
    From there, try to run it again on an hourly basis.
    After copying the job, deschedule the old released job; otherwise two will run at the same time.
    rgds,

  • Log file not generated

    I followed these steps:
    1. In the application, set the profile FND: Debug Log Level to "Statement".
    2. Restart Apache.
    3. Run debug from Help --> Diagnostics --> Debug.
    4. Secure the debug log file, which should be in the directory returned by:
    select value from v$parameter where name like 'utl_file%'
    But no log file is created, and I don't know why (these steps were provided in an SR).
    Thanks

    What about "FND: Debug Log Filename for Middle-Tier" and "FND: Diagnostics" profile options?
    Note: 372209.1 - How to Collect an FND Diagnostics Trace (aka FND:Debug)
    https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=372209.1
    If the above does not help, set the debug log at the user level and check then.
    Note: 390881.1 - How To Set The Debug Log At User Level?
    https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=390881.1
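    If the profile options are set and there is still no file on disk, note that debug messages may land in the FND_LOG_MESSAGES table rather than in a file. A quick check (a sketch; the user name SYSADMIN is only an example):
    SELECT log_sequence, module, message_text
    FROM fnd_log_messages
    WHERE user_id = (SELECT user_id FROM fnd_user WHERE user_name = 'SYSADMIN')
    ORDER BY log_sequence DESC;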

  • Empty Log files not deleted by Cleaner

    Hi,
    we have a NoSQL database installed on 3 nodes with a replication factor of 3 (see the exact topology below).
    We ran a test which consisted of the following operations repeated in a loop: store a LOB, read it, delete it.
    store.putLOB(key, new ByteArrayInputStream(source),Durability.COMMIT_SYNC, 5, TimeUnit.SECONDS);
    store.getLOB(key,Consistency.NONE_REQUIRED, 5, TimeUnit.SECONDS);
    store.deleteLOB(key, Durability.COMMIT_SYNC, 5, TimeUnit.SECONDS);
    During the test the space occupied by the database keeps growing!
    Cleaner threads are running but log these warnings:
    2015-02-03 14:32:58.936 UTC WARNING [rg3-rn2] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
    2015-02-03 14:32:58.937 UTC WARNING [rg3-rn2] JE: Cleaner has 12 files not deleted because they are protected by replication.
    2015-02-03 14:32:58.920 UTC WARNING [rg3-rn1] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
    2015-02-03 14:32:58.921 UTC WARNING [rg3-rn1] JE: Cleaner has 12 files not deleted because they are protected by replication.
    2015-02-03 14:32:58.908 UTC WARNING [rg3-rn3] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
    2015-02-03 14:32:58.909 UTC WARNING [rg3-rn3] JE: Cleaner has 12 files not deleted because they are protected by replication.
    2015-02-03 14:33:31.704 UTC INFO [rg3-rn2] JE: Chose lowest utilized file for cleaning. fileChosen: 0xc (adjustment disabled) totalUtilization: 1 bestFileUtilization: 0 isProbe: false
    2015-02-03 14:33:32.137 UTC INFO [rg3-rn2] JE: CleanerRun 13 ends on file 0xc probe=false invokedFromDaemon=true finished=true fileDeleted=false nEntriesRead=1129 nINsObsolete=64 nINsCleaned=2 nINsDead=0 nINsMigrated=2 nBINDeltasObsolete=2 nBINDeltasCleaned=0 nBINDeltasDead=0 nBINDeltasMigrated=0 nLNsObsolete=971 nLNsCleaned=88 nLNsDead=0 nLNsMigrated=88 nLNsMarked=0 nLNQueueHits=73 nLNsLocked=0 logSummary=<CleanerLogSummary endFileNumAtLastAdjustment="0xe" initialAdjustments="5" recentLNSizesAndCounts=""> inSummary=<INSummary totalINCount="68" totalINSize="7570" totalBINDeltaCount="2" totalBINDeltaSize="254" obsoleteINCount="66" obsoleteINSize="7029" obsoleteBINDeltaCount="2" obsoleteBINDeltaSize="254"/> estFileSummary=<summary totalCount="2072" totalSize="13069531" totalINCount="68" totalINSize="7570" totalLNCount="1059" totalLNSize="13024352" maxLNSize="102482" obsoleteINCount="66" obsoleteLNCount="971" obsoleteLNSize="12974449" obsoleteLNSizeCounted="971" getObsoleteSize="13019405" getObsoleteINSize="7347" getObsoleteLNSize="12974449" getMaxObsoleteSize="13019405" getMaxObsoleteLNSize="12974449" getAvgObsoleteLNSizeNotCounted="NaN"/> recalcFileSummary=<summary totalCount="2072" totalSize="13069531" totalINCount="68" totalINSize="7570" totalLNCount="1059" totalLNSize="13024352" maxLNSize="0" obsoleteINCount="66" obsoleteLNCount="971" obsoleteLNSize="12974449" obsoleteLNSizeCounted="971" getObsoleteSize="13019405" getObsoleteINSize="7347" getObsoleteLNSize="12974449" getMaxObsoleteSize="13019405" getMaxObsoleteLNSize="12974449" getAvgObsoleteLNSizeNotCounted="NaN"/> lnSizeCorrection=NaN newLnSizeCorrection=NaN estimatedUtilization=0 correctedUtilization=0 recalcUtilization=0 correctionRejected=false
    Log files are not deleted even when empty, as seen using the DbSpace utility:
    java -cp kvstore.jar com.sleepycat.je.util.DbSpace -h /mam2g/data/sn1/u01/rg2-rn1/env/ib
      File    Size (KB)  % Used
    00000000      12743       0
    00000001      12785       0
    00000002      12725       0
    00000003      12719       0
    00000004      12703       0
    00000005      12751       0
    00000006      12795       0
    00000007      12725       0
    00000008      12752       0
    00000009      12720       0
    0000000a      12723       0
    0000000b      12764       0
    0000000c      12715       0
    0000000d      12799       0
    0000000e      12724       1
    0000000f       5717       0
    TOTALS      196867       0
    Here is the configured topology:
    kv-> show topology
    store=MMS-KVstore  numPartitions=90 sequence=106
      zn: id=zn1 name=MAMHA repFactor=3 type=PRIMARY
      sn=[sn1] zn:[id=zn1 name=MAMHA] 192.168.144.11:5000 capacity=3 RUNNING
        [rg1-rn1] RUNNING
                 single-op avg latency=4.414467 ms   multi-op avg latency=0.0 ms
        [rg2-rn1] RUNNING
                 single-op avg latency=1.5962526 ms   multi-op avg latency=0.0 ms
        [rg3-rn1] RUNNING
                 single-op avg latency=1.3068943 ms   multi-op avg latency=0.0 ms
      sn=[sn2] zn:[id=zn1 name=MAMHA] 192.168.144.12:6000 capacity=3 RUNNING
        [rg1-rn2] RUNNING
                 single-op avg latency=1.5670061 ms   multi-op avg latency=0.0 ms
        [rg2-rn2] RUNNING
                 single-op avg latency=8.637241 ms   multi-op avg latency=0.0 ms
        [rg3-rn2] RUNNING
                 single-op avg latency=1.370075 ms   multi-op avg latency=0.0 ms
      sn=[sn3] zn:[id=zn1 name=MAMHA] 192.168.144.35:7000 capacity=3 RUNNING
        [rg1-rn3] RUNNING
                 single-op avg latency=1.4707285 ms   multi-op avg latency=0.0 ms
        [rg2-rn3] RUNNING
                 single-op avg latency=1.5334034 ms   multi-op avg latency=0.0 ms
        [rg3-rn3] RUNNING
                 single-op avg latency=9.05199 ms   multi-op avg latency=0.0 ms
      shard=[rg1] num partitions=30
        [rg1-rn1] sn=sn1
        [rg1-rn2] sn=sn2
        [rg1-rn3] sn=sn3
      shard=[rg2] num partitions=30
        [rg2-rn1] sn=sn1
        [rg2-rn2] sn=sn2
        [rg2-rn3] sn=sn3
      shard=[rg3] num partitions=30
        [rg3-rn1] sn=sn1
        [rg3-rn2] sn=sn2
        [rg3-rn3] sn=sn3
    Why are empty files not deleted by the cleaner? Why are empty log files protected by replication if all the replicas seem to be aligned with the master?
    java -jar /mam2g/kv-3.2.5/lib/kvstore.jar ping -host 192.168.144.11 -port 5000
    Pinging components of store MMS-KVstore based upon topology sequence #106
    Time: 2015-02-03 13:44:57 UTC
    MMS-KVstore comprises 90 partitions and 3 Storage Nodes
    Storage Node [sn1] on 192.168.144.11:5000    Zone: [name=MAMHA id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC  Build id: 7ab4544136f5
            Rep Node [rg1-rn1]      Status: RUNNING,MASTER at sequence number: 24,413 haPort: 5011
            Rep Node [rg2-rn1]      Status: RUNNING,REPLICA at sequence number: 13,277 haPort: 5012
            Rep Node [rg3-rn1]      Status: RUNNING,REPLICA at sequence number: 12,829 haPort: 5013
    Storage Node [sn2] on 192.168.144.12:6000    Zone: [name=MAMHA id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC  Build id: 7ab4544136f5
            Rep Node [rg3-rn2]      Status: RUNNING,REPLICA at sequence number: 12,829 haPort: 6013
            Rep Node [rg2-rn2]      Status: RUNNING,MASTER at sequence number: 13,277 haPort: 6012
            Rep Node [rg1-rn2]      Status: RUNNING,REPLICA at sequence number: 24,413 haPort: 6011
    Storage Node [sn3] on 192.168.144.35:7000    Zone: [name=MAMHA id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC  Build id: 7ab4544136f5
            Rep Node [rg1-rn3]      Status: RUNNING,REPLICA at sequence number: 24,413 haPort: 7011
            Rep Node [rg2-rn3]      Status: RUNNING,REPLICA at sequence number: 13,277 haPort: 7012
            Rep Node [rg3-rn3]      Status: RUNNING,MASTER at sequence number: 12,829 haPort: 7013

    Solved by setting an undocumented parameter, "je.rep.minRetainedVLSNs".
    The solution is described in the NoSQL forum thread: Store cleaning policy

  • Archived log files not registered in the Database

    I have Windows Server 2008 R2.
    I have Oracle 11g R2.
    I configured primary and standby databases on 2 physical servers; please find the verification below.
    I am using DG Broker.
    Recently I did a failover from primary to standby.
    Then I did REINSTATE DATABASE to return the old primary to standby mode.
    Then I did a switchover again.
    My problem is that archived logs are not being registered and applied.
    SQL> select max(sequence#) from v$archived_log; 
    MAX(SEQUENCE#)
             16234
    I did ALTER SYSTEM SWITCH LOGFILE, then issued the following statement to check, and found that the number is the same on primary and standby; it has not changed.
    SQL> select max(sequence#) from v$archived_log;
    MAX(SEQUENCE#)
             16234
    Can anybody help, please?
    Regards

    Thanks for the reply.
    What I mean is: after I do ALTER SYSTEM SWITCH LOGFILE, I can see the archived log file being generated on the physical disk, but when I run
    select MAX(SEQUENCE#) FROM V$ARCHIVED_LOG;
    the sequence number does not change. It should increase by 1 whenever I do a log switch.
    However, I did as you asked; please find the result below:
    SQL> alter system switch logfile;
    System altered.
    SQL> /
    System altered.
    SQL> /
    System altered.
    SQL> /
    System altered.
    SQL> SELECT DB_NAME,HOSTNAME,LOG_ARCHIVED,LOG_APPLIED_02,LOG_APPLIED_03,APPLIED_TIME,LOG_ARCHIVED - LOG_APPLIED_02 LOG_GAP_02,
      2  LOG_ARCHIVED - LOG_APPLIED_03 LOG_GAP_03
      3  FROM (SELECT NAME DB_NAME FROM V$DATABASE),
      4  (SELECT UPPER(SUBSTR(HOST_NAME, 1, (DECODE(INSTR(HOST_NAME, '.'),0, LENGTH(HOST_NAME),(INSTR(HOST_NAME, '.') - 1))))) HOSTNAME FROM V$INSTANCE),
      5  (SELECT MAX(SEQUENCE#) LOG_ARCHIVED FROM V$ARCHIVED_LOG WHERE DEST_ID = 1 AND ARCHIVED = 'YES'),
      6  (SELECT MAX(SEQUENCE#) LOG_APPLIED_02 FROM V$ARCHIVED_LOG WHERE DEST_ID = 2 AND APPLIED = 'YES'),
      7  (SELECT MAX(SEQUENCE#) LOG_APPLIED_03 FROM V$ARCHIVED_LOG WHERE DEST_ID = 3 AND APPLIED = 'YES'),
      8  (SELECT TO_CHAR(MAX(COMPLETION_TIME), 'DD-MON/HH24:MI') APPLIED_TIME FROM V$ARCHIVED_LOG WHERE DEST_ID = 2 AND APPLIED = 'YES');
    DB_NAME  HOSTNAME      LOG_ARCHIVED  LOG_APPLIED_02  LOG_APPLIED_03  APPLIED_TIME  LOG_GAP_02  LOG_GAP_03
    EPPROD   CORSKMBBOR01  16252         16253           (null)          15-JAN/12:04  -1          (null)
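    Two follow-up queries can narrow this down (a sketch; it assumes DEST_ID 1 is the local destination and DEST_ID 2 the standby). Note that a failover changes the database incarnation, so MAX(SEQUENCE#) taken across all incarnations can hide the fact that the new incarnation restarted at a lower sequence:
    -- latest archived sequence per destination
    SELECT dest_id, MAX(sequence#) AS max_seq
    FROM v$archived_log
    GROUP BY dest_id
    ORDER BY dest_id;
    -- split by incarnation as well
    SELECT resetlogs_id, dest_id, MAX(sequence#) AS max_seq
    FROM v$archived_log
    GROUP BY resetlogs_id, dest_id
    ORDER BY resetlogs_id, dest_id;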

  • Why is the transaction log file not truncated though its simple recovery model?

    My database is in the simple recovery model, and when I view the free space in the log file it shows 99%. Why doesn't my log file truncate the committed data automatically to free space in the ldf file? When I shrink it, it does shrink. Please advise.
    mayooran99

    If log records were never deleted (truncated) from the transaction log, it would not show as 99% free. In the simple recovery model, log truncation automatically frees space in the logical log for reuse by the transaction log, and that is what you are seeing. Truncation does not change the file size; it is more like log clearing, marking parts of the log as free for reuse.
    As you said, "When I shrink it does shrink," so I don't see any issue here. Log truncation and shrinking the file are two different things.
    Please read the link below for an explanation of "Transaction log truncate vs. shrink":
    http://blog.sqlxdetails.com/transaction-log-truncate-why-it-didnt-shrink-my-log/
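    To see the two effects side by side, a short sketch (the database name MyDb and the logical log file name MyDb_log are hypothetical):
    -- Percentage of each transaction log currently in use
    DBCC SQLPERF (LOGSPACE);
    -- Truncation has already freed internal space; shrinking returns it to the OS
    USE MyDb;
    DBCC SHRINKFILE (MyDb_log, 512);  -- target size in MB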

  • How to send output from SQL script to the specified log file (not *.sql)

    ## 1 - Write the SQL commands into a sql file
    echo "SELECT * FROM DBA_USERS;">results.sql
    echo "quit;">>results.sql
    ## 2 - Run sqlplus, run the sql file, and send the output/results to the jo.log file
    %ORACLE_HOME/bin/sqlplus / as sysdba<results.sql>>jo.log
    It doesn't work; please advise.

    $ echo "set pages 9999" >results.sql ### this is only to make the output more readable
    $ echo "SELECT * FROM DBA_USERS;" >>results.sql
    $ echo "quit" >>results.sql
    $ cat results.sql
    set pages 9999
    SELECT * FROM DBA_USERS;
    quit
    $ sqlplus -s "/ as sysdba" @results >jo.log
    $ cat jo.log
    USERNAME                          USER_ID PASSWORD
    ACCOUNT_STATUS                   LOCK_DATE  EXPIRY_DAT
    DEFAULT_TABLESPACE             TEMPORARY_TABLESPACE           CREATED
    PROFILE                        INITIAL_RSRC_CONSUMER_GROUP
    EXTERNAL_NAME
    SYS                                     0 D4C5016086B2DC6A
    OPEN
    SYSTEM                         TEMP                           06/12/2003
    DEFAULT                        SYS_GROUP
    SYSTEM                                  5 D4DF7931AB130E37
    OPEN
    SYSTEM                         TEMP                           06/12/2003
    DEFAULT                        SYS_GROUP
    DBSNMP                                 19 E066D214D5421CCC
    OPEN
    SYSTEM                         TEMP                           06/12/2003
    DEFAULT                        DEFAULT_CONSUMER_GROUP
    SCOTT                                  60 F894844C34402B67
    OPEN
    USERS                          TEMP                           06/12/2003
    DEFAULT                        DEFAULT_CONSUMER_GROUP
    HR                                     47 4C6D73C3E8B0F0DA
    OPEN
    EXAMPLE                        TEMP                           06/12/2003
    DEFAULT                        DEFAULT_CONSUMER_GROUP
    That's only a part of the file, it's too long :-)
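    An alternative that keeps the redirection inside SQL*Plus is SPOOL, so no shell redirection is needed (same hypothetical file names as above):
    set pages 9999
    spool jo.log
    SELECT * FROM DBA_USERS;
    spool off
    quit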

  • Archived log file not displaying

    While navigating around the "home" page for OCS as an administrator...I was trying to run a report under Reports>Conferences>Diagnostics.
    The link says:
    Click the link below to view comprehensive conference diagnostics. To see the log file correctly, use Internet Explorer 6.0 or higher.
    I am using IE 6 and the page shows up as being done...but it is blank. Any idea what is wrong? The URL reads:
    https://mywebserver/imtapp/logs/imtLogs.jsp?fileName=D:/ocs_onebox/mtier/imeeting/logs/sessions/12.20.2004/10000-clbsvr_OCS_home_mid.mywebserver.imt-collab.0-06_34_01.xml
    The file is there on the filesystem.
    TIA.

    "Stages" means transformations in the data flow.
    Transformation names are not displayed correctly in the log file.
    For example, if I give the name "TC_table_name" to a Table Comparison transform, only "Table Comparison" is displayed in the log file.

  • Log files not being removed.

    Hello,
    I've upgraded an application from Berkeley DB 5.1.25 to 5.3.21, and after that, log files are no longer automatically removed. This is the only change in the application. It's an application written in C.
    The environment of the application is created with the flag DB_LOG_AUTO_REMOVE
    dbenv->log_set_config(dbenv, DB_LOG_AUTO_REMOVE, TRUE).
    The application has a thread to periodically checkpoint the data
    dbenv->txn_checkpoint(dbenv, 0, 0, 0)
    So far, so good: with version 5.1.25 this was enough to remove unused log files (I don't need to be able to do catastrophic recovery). But it no longer works with version 5.3.21.
    If I run db_archive (no options), it shows nothing, suggesting that all log files are still needed. But if I run db_hotbackup on the database, all but the last log files are removed (in the backup), as wanted.
    Note: Usually, I don't want to run db_archive or any external tool to remove unused log files. I hope what is inside the application is enough to remove them.
    Is this a known issue, did something change, or can you suggest something to look for?
    Thanks for your help
    José-Marcio
    Edited by: user564597 on Mar 24, 2013 6:35 PM
    Edited by: user564597 on Mar 24, 2013 6:38 PM
    Edited by: user564597 on Mar 25, 2013 8:57 AM

    thank you for giving us a test program. This helped tremendously to fully understand what you are doing. In 5.3 we fixed a bug dealing with the way log files are archived in an HA environment, and what you are running into is a consequence of that bug fix. In the test program you are using DB_INIT_REP. This is the flag that says you want an HA environment. With HA, there is a master and some number of read-only clients; by default we treat the initiating database as the master, which is what is happening in your case. In an HA (replicated) environment, we cannot archive log files until we can be assured that the clients have applied the contents of those log files. Our belief is that you are not really running in an HA environment and do not need the DB_INIT_REP flag. In our initial testing, where we said it worked for us, it was because we did not use the DB_INIT_REP flag, as there was no mention of replication being needed in the post.
    Recommendation: Please remove the use of the DB_INIT_REP flag or properly set up an HA environment (details in our docs).
    thanks
    mike

  • Empty/underutilized log files not removed

    I have an application that runs the cleaner and the checkpointer explicitly (instead of relying on the database to do it).
    Here are the relevant environment settings: je.env.runCheckpointer=false, je.env.runCleaner=false, je.cleaner.minUtilization=5, je.cleaner.expunge=true.
    When running the application, I noticed that the first few dozen log files were removed, but later (even though the cleaner was executed at regular intervals) no more log files were removed.
    I have run the DbSpace utility on the environment and found the following result:
      File    Size (KB)  % Used
    00000033      97656       0
    00000034      97655       0
    00000035      97656       0
    00000036      97656       0
    00000037      97656       0
    00000038      97655       2
    00000039      97656       0
    0000003a      97656       0
    0000003b      97655       0
    0000003c      97655       0
    0000003d      97655       0
    0000003e      97655       0
    0000003f      97656       0
    00000040      97655       0
    00000041      97656       0
    00000042      97656       0
    00000043      97656       0
    00000044      97655       0
    00000045      97655       0
    00000046      97656       0
    This goes on for a long time. I had the database tracing enabled at CONFIG level. Here are the last lines of the log just before the last log file (0x32) is removed:
    2009-05-06 08:41:51:111:CDT INFO CleanerRun 49 on file 0x30 begins backlog=2
    2009-05-06 08:41:52:181:CDT SEVERE CleanerRun 49 on file 0x30 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206347 nINsObsolete=6365 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199971 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:41:52:182:CDT INFO CleanerRun 50 on file 0x31 begins backlog=1
    2009-05-06 08:41:53:223:CDT SEVERE CleanerRun 50 on file 0x31 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205475 nINsObsolete=6319 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199144 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:41:53:224:CDT INFO CleanerRun 51 on file 0x32 begins backlog=0
    2009-05-06 08:41:54:292:CDT SEVERE CleanerRun 51 on file 0x32 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205197 nINsObsolete=6292 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=198893 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:42:24:300:CDT INFO CleanerRun 52 on file 0x33 begins backlog=1
    2009-05-06 08:42:24:546:CDT CONFIG Checkpoint 963: source=api success=true nFullINFlushThisRun=13 nDeltaINFlushThisRun=0
    2009-05-06 08:42:24:931:CDT SEVERE Cleaner deleted file 0x32
    2009-05-06 08:42:24:938:CDT SEVERE Cleaner deleted file 0x31
    2009-05-06 08:42:24:946:CDT SEVERE Cleaner deleted file 0x30
    Here are a few log lines right after the last log message with cleaner deletion (until the next checkpoint):
    2009-05-06 08:42:25:339:CDT SEVERE CleanerRun 52 on file 0x33 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=204164 nINsObsolete=6277 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=197865 nLNsCleaned=11 nLNsDead=0 nLNsMigrated=0 nLNsMarked=11 nLNQueueHits=9 nLNsLocked=0
    2009-05-06 08:42:25:340:CDT INFO CleanerRun 53 on file 0x34 begins backlog=0
    2009-05-06 08:42:26:284:CDT SEVERE CleanerRun 53 on file 0x34 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=203386 nINsObsolete=6281 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=197091 nLNsCleaned=2 nLNsDead=2 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:42:56:290:CDT INFO CleanerRun 54 on file 0x35 begins backlog=4
    2009-05-06 08:42:57:252:CDT SEVERE CleanerRun 54 on file 0x35 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205497 nINsObsolete=6312 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199164 nLNsCleaned=10 nLNsDead=3 nLNsMigrated=0 nLNsMarked=7 nLNQueueHits=6 nLNsLocked=0
    2009-05-06 08:42:57:253:CDT INFO CleanerRun 55 on file 0x39 begins backlog=4
    2009-05-06 08:42:58:097:CDT SEVERE CleanerRun 55 on file 0x39 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=204553 nINsObsolete=6301 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=198238 nLNsCleaned=2 nLNsDead=0 nLNsMigrated=0 nLNsMarked=2 nLNQueueHits=1 nLNsLocked=0
    2009-05-06 08:42:58:098:CDT INFO CleanerRun 56 on file 0x3a begins backlog=3
    2009-05-06 08:42:59:261:CDT SEVERE CleanerRun 56 on file 0x3a invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=204867 nINsObsolete=6270 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=198586 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:42:59:262:CDT INFO CleanerRun 57 on file 0x36 begins backlog=2
    2009-05-06 08:43:02:185:CDT SEVERE CleanerRun 57 on file 0x36 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206158 nINsObsolete=6359 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199786 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:43:02:186:CDT INFO CleanerRun 58 on file 0x37 begins backlog=2
    2009-05-06 08:43:03:243:CDT SEVERE CleanerRun 58 on file 0x37 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206160 nINsObsolete=6331 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199817 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:43:03:244:CDT INFO CleanerRun 59 on file 0x3b begins backlog=1
    2009-05-06 08:43:04:000:CDT SEVERE CleanerRun 59 on file 0x3b invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206576 nINsObsolete=6385 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=200179 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:43:04:001:CDT INFO CleanerRun 60 on file 0x38 begins backlog=0
    2009-05-06 08:43:08:180:CDT SEVERE CleanerRun 60 on file 0x38 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205460 nINsObsolete=6324 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=194125 nLNsCleaned=4999 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=4999
    2009-05-06 08:43:08:224:CDT INFO CleanerRun 61 on file 0x3c begins backlog=0
    2009-05-06 08:43:09:099:CDT SEVERE CleanerRun 61 on file 0x3c invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206589 nINsObsolete=6343 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=200235 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:43:24:548:CDT CONFIG Checkpoint 964: source=api success=true nFullINFlushThisRun=12 nDeltaINFlushThisRun=0
    I could not see anything fundamentally different between the log messages from runs where log files were removed and runs where they were not. The DbSpace utility confirmed that there are plenty of log files under the minimum utilization, so I can't quite explain why the log file removal stopped all of a sudden.
    Any help would be appreciated (JE version: 3.3.75).

    Hi Bertold,
    My first guess is that one or more transactions have accidentally not been ended (committed or aborted), or cursors not closed.
    A clue is the nLNsLocked=4999 in the second set of trace messages. This means that 4999 records were locked by your application and were unable to be migrated by the cleaner. The cleaner will wait until these record locks are released before deleting any log files. Record locks are held by transactions and cursors.
    If this doesn't ring a bell and you need to look further, one thing you can do is print the EnvironmentStats periodically (System.out.println(Environment.getStats(null))). Take a look at the nPendingLNsProcessed and nPendingLNsLocked. The former is the number of records the cleaner attempts to migrate because they were locked earlier. The latter is the number that are still locked and cannot be migrated.
    --mark

  • Pfirewall.log file not updating

    Hi,
    I configured Windows Firewall to allow logging in my DC Firewall GPO. The settings are:
    Log dropped packets - enabled
    Log successful connections - enabled
    Log file path and name:
    %systemroot%\system32\logfiles\firewall\pfirewall.log
    Size limit: 32767
    The pfirewall.log file is located in this area but is not updating itself. Is there something I need to do to enable the file to get an updated time stamp and overwrite itself after it reaches a certain size? I changed the size limit from 1024 to 32767 today, but it still does not seem to update itself.
    My 2003 domain's pfirewall.log IS updating, and it points to the c:\windows directory. Is there a service that needs NTFS permissions on the folder where the log file resides, or a different GPO setting that handles this?
    Thanks,
    Kevin C.

    Hello Kevin,
    Please check this link.
    http://www.grouppolicy.biz/2010/07/how-to-manage-windows-firewall-settings-using-group-policy/
    Before exporting, go to the properties (check the last part of the link), where you see the logging option and configure your logging settings. Then import these settings in your GPO.

  • TGW on Linux - log files not deleting

    I have been running TGW 3.5 on Linux for a few months and noticed the disk is full. It seems the TGW logs are not being cleared out, and I have had to delete loads of log files manually.
    From the logs there seems to be a process which checks how long the logs have been there, but for some reason it is not deleting them.

    By design, TGW doesn't clear out old log files, on the theory that keeping them helps in troubleshooting. But we are planning to enhance it to clear log files older than x number of days.
    As you mentioned, the current fix is to manually clear out old log files. How often do you have to clear the logs? Increasing the available disk space may help reduce how often you need to clean up old logs.

  • Log file not creating

    Hi, I tried to configure logging for my application, but the log files are not getting created and I am getting the following error.
    My log configuration:
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE log-configuration SYSTEM "log-configuration.dtd">
    <log-configuration>
         <log-destinations>
              <log-destination
                   count="10"
                   effective-severity="ALL"
                   limit="10000"
                   name="LogTestLog"
                   pattern="./log/applications/TestLog/LogTestLog.%g.log"
                   type="FileLog"/>
         </log-destinations>
         <log-controllers>
              <log-controller
                   effective-severity="ALL"
                   name="System.out">
                   <associated-destinations>
                        <destination-ref
                             association-type="LOG"
                             name="LogTestLog"/>
                   </associated-destinations>
              </log-controller>
              <log-controller
                   effective-severity="ALL"
                   name="com.giri.test.LogServlet">
                   <associated-destinations>
                        <destination-ref
                             association-type="LOG"
                             name="LogTestLog"/>
                        <anonymous-destination
                             association-type="LOG"
                             type="FileLog"/>
                   </associated-destinations>
              </log-controller>
         </log-controllers>
    </log-configuration>
    #1.5#0030485D5AE617CD000000000000142000042AAA6682264C#1172811259302#/System/Logging##com.sap.tc.logging.FileLogInfoData.[getFileHeader(String fileName, int cntHeadLines)]######147f6460c87a11dba2920030485d5ae6#SAPEngine_System_Thread[impl:5]_99##0#0#Warning##Java#SAP_LOGGING_UNEXPECTED##Unexcepted error occured on !#1#FileHeader parsing#
    #1.5#0030485D5AE617CD000000010000142000042AAA6682297F#1172811259302#/System/Logging##/System/Logging######147f6460c87a11dba2920030485d5ae6#SAPEngine_System_Thread[impl:5]_99##0#0#Path##Java###Caught #1#java.lang.Exception: ./log/applications/TestLog/LogTestLog.0.log (The system cannot find the path specified)
         at com.sap.tc.logging.FileLogInfoData.getEOLLength(FileLogInfoData.java:432)
         at com.sap.tc.logging.FileLogInfoData.getFileHeaderLines(FileLogInfoData.java:348)
         at com.sap.tc.logging.FileLogInfoData.getFileHeaderLines(FileLogInfoData.java:334)
         at com.sap.tc.logging.FileLogInfoData.loadFileLogHeader(FileLogInfoData.java:320)
         at com.sap.tc.logging.FileLogInfoData.init(FileLogInfoData.java:260)
         at com.sap.tc.logging.FileLogInfoData.<init>(FileLogInfoData.java:119)
         at com.sap.tc.logging.FileLog.init(FileLog.java:373)
         at com.sap.tc.logging.FileLog.<init>(FileLog.java:282)
         at com.sap.tc.logging.FileLog.<init>(FileLog.java:246)
         at com.sap.engine.services.log_configurator.admin.LogConfigurator.adjustConfiguration(LogConfigurator.java:665)
         at com.sap.engine.services.log_configurator.admin.LogConfigurator.applyConfiguration(LogConfigurator.java:1488)
         at com.sap.engine.services.log_configurator.LogConfiguratorContainer.prepareStart(LogConfiguratorContainer.java:545)
         at com.sap.engine.services.deploy.server.application.StartTransaction.prepareCommon(StartTransaction.java:239)
         at com.sap.engine.services.deploy.server.application.StartTransaction.prepare(StartTransaction.java:187)
         at com.sap.engine.services.deploy.server.application.ApplicationTransaction.makeAllPhasesOnOneServer(ApplicationTransaction.java:301)
         at com.sap.engine.services.deploy.server.application.ParallelAdapter.makeAllPhasesImpl(ParallelAdapter.java:327)
         at com.sap.engine.services.deploy.server.application.ParallelAdapter.runMe(ParallelAdapter.java:74)
         at com.sap.engine.services.deploy.server.application.ParallelAdapter$1.run(ParallelAdapter.java:218)
         at com.sap.engine.frame.core.thread.Task.run(Task.java:64)
         at com.sap.engine.core.thread.impl5.SingleThread.execute(SingleThread.java:79)
         at com.sap.engine.core.thread.impl5.SingleThread.run(SingleThread.java:150)

    I have the same problem. I also see many similar exceptions from different apps, such as when writing to the file "./log/applications/cms/default.0.trc".
    Do I have to create the log file before I use the logging service?
    I changed "ForceSingleTraceFile" to "NO" in "LogManager" using the Visual Administrator. Could that be a problem?
    I am running SAP Web AS 6.40 and deploying using SDM. I have a log-configuration.xml in the EAR file. This is my log configuration:
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE log-configuration SYSTEM "log-configuration.dtd">
    <log-configuration>
         <log-formatters/>
         <log-destinations>
              <log-destination
                   count="10"
                   effective-severity="ALL"
                   limit="1000000"
                   name="TraceLog01"
                   pattern="./log/file.%g.trc"
                   type="FileLog"/>
         </log-destinations>
         <log-controllers>
              <log-controller
                   effective-severity="ALL"
                   name="LogController01">
                   <associated-destinations>
                        <destination-ref
                             association-type="LOG"
                             name="TraceLog01"/>
                   </associated-destinations>
              </log-controller>
         </log-controllers>
    </log-configuration>

  • Log4j log file not being created

    Using WebSphere for a web app. At first I was getting the error "log4j:WARN No appenders could be found for logger...".
    So I created the properties file and, I assume, referenced it correctly. The error went away and my logging messages are showing up in the WebSphere console, but the .log file specified in my log4j.properties file is not being written to; output only goes to SystemOut.log.
    If I remove the ROOT.File line it still does not create the file (I've done a search of the IBM directory).
    #Default log level to ERROR. Other levels are INFO and DEBUG.
    log4j.rootLogger=INFO,ROOT
    log4j.appender.ROOT=org.apache.log4j.RollingFileAppender
    #Note: a single backslash is an escape character in .properties files, so use a forward slash (or double the backslash)
    log4j.appender.ROOT.File=c:/myapplication.log
    log4j.appender.ROOT.MaxFileSize=1000KB
    #Keep 5 old files around.
    log4j.appender.ROOT.MaxBackupIndex=5
    log4j.appender.ROOT.layout=org.apache.log4j.PatternLayout
    #Format almost same as WebSphere's common log format.
    log4j.appender.ROOT.layout.ConversionPattern=[%d] %t %c %-5p - %m%n
    #Optionally override log level of individual packages or classes
    log4j.logger.com.webage.ejbs=INFO       
    private static final Logger logger = Logger.getLogger(LoginAction.class);

    public ActionForward execute(ActionMapping mapping, ActionForm form,
            HttpServletRequest request, HttpServletResponse response)
            throws IOException, ServletException {
        initializeLogger();
        // ...
    }

    private void initializeLogger() {
        org.apache.log4j.BasicConfigurator.configure();
        // trying the above just to get it to work... because by default this
        // should look in WEB-INF/classes/log4j.properties... I thought
        /*
        try {
            String log4jUrl = servlet.getServletContext().getInitParameter("LOG4J_XML");
            if (!(log4jUrl == null || log4jUrl.equals("")))
                DOMConfigurator.configure(servlet.getServletContext().getResource(log4jUrl));
        } catch (MalformedURLException e) {
            e.printStackTrace();
        } catch (FactoryConfigurationError e) {
            e.printStackTrace();
        }
        */
    }

    Edited by: gmachamer on Nov 30, 2007 6:37 AM

    OK, I changed to an XML file and found a few things out.
    Now when I debug, the logger that was created has an empty level; but if I look at the parent logger, it is correctly pulling the root logger from my XML (if I change the priority attribute, then it changes when debugging the code).
    <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/" debug="true">
         <!-- this appender would be the same as having a System.out -->
         <appender name="console" class="org.apache.log4j.ConsoleAppender">
              <param name="Target" value="System.out"/>
              <layout class="org.apache.log4j.PatternLayout">
                         <param name="ConversionPattern" value="%-5p %c{1} - %m%n"/>
                  </layout>
           </appender>
           <appender name="rollingFileAppender" class="org.apache.log4j.RollingFileAppender">
              <!-- name and location of the file to log to -->
                 <param name="File" value="c:/appLog.log"/>
              <!-- the maximum size the file will be before it rolls the file -->
                 <param name="MaxFileSize" value="1000kb"/>
              <!-- the number of backups you want to maintain -->
              <param name="MaxBackupIndex" value="5"/>
              <!--
                   This is the layout of your messages, you can do alot with this.
                   See the java docs for the class PatternLayout for an explanation of
                   the different values you can have.
              -->
                 <layout class="org.apache.log4j.PatternLayout">
                          <param name="ConversionPattern" value="%t %-5p %c{2} - %m%n"/>
                      </layout>          
              </appender>
           <root>
                  <priority value ="error" />
                  <appender-ref ref="rollingFileAppender" />
                  <appender-ref ref="console" />
           </root> 
    </log4j:configuration>

  • Distiller produces log file, not a pdf

    I'm trying to create a PDF from a quark file and getting this:
    %%[ Error: ioerror; OffendingCommand: charpath ]%%
    It's only happening with one file, but it's one pretty important file.
    I tried creating a PS file, putting it on another computer (an identical computer), and it PDF'd perfectly... which leads me to believe it's a problem with Distiller, not the file.
    Any thoughts?
    Working in Mac 10.5.5, Distiller 8.1.2, Quark 7.3

    >But wouldn't it indicate the bad font in the log?
    Not in general, no.
    >
    >Also, there appears to be no rhyme or reason: we work from templates. All of the pages come from the same quark templates and use the same fonts; some will pdf, others won't.
    It's all there is to go on in the log.
    >
    >And again, when I brought the ps file to another computer, Distiller PDF'd it fine.
    Local font files could be relevant.
    >
    >I tried removing the art, too, which didn't work either.
    The art it won't be. "charpath" is a PostScript instruction directly
    related to text. (Unless it is art in the shape of a character, or
    some such).
    Try dropping all the text from this document. If it works now, you can
    use divide and conquer techniques to isolate the problem.
    Aandi Inston
