Distiller produces a log file, not a PDF

I'm trying to create a PDF from a quark file and getting this:
%%[ Error: ioerror; OffendingCommand: charpath ]%%
It's only happening with one file, but it's one pretty important file.
I tried creating a PS file, putting it on another computer (an identical machine), and it PDF'd perfectly... which leads me to believe it's a problem with Distiller, not the file.
Any thoughts?
Working in Mac OS X 10.5.5, Distiller 8.1.2, Quark 7.3

>But wouldn't it indicate the bad font in the log?
Not in general, no.
>
>Also, there appears to be no rhyme or reason: we work from templates. All of the pages come from the same quark templates and use the same fonts; some will pdf, others won't.
It's all there is to go on in the log.
>
>And again, when I brought the ps file to another computer, Distiller PDF'd it fine.
Local font files could be relevant.
>
>I tried removing the art, too, which didn't work either.
It won't be the art: "charpath" is a PostScript operator directly
related to text (unless the art is in the shape of a character, or
some such).
Try dropping all the text from this document. If it works now, you can
use divide and conquer techniques to isolate the problem.
Aandi Inston
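The divide-and-conquer step suggested above can be sketched generically: split the document's text elements in half, test each half, and recurse into the half that still fails. A minimal sketch in Python, where `items` and the `fails` predicate are hypothetical stand-ins for the document's text boxes and a Distiller run on a subset:

```python
def isolate_failure(items, fails):
    """Return the single item that makes `fails` report an error.

    `items` is an ordered list (e.g. the text boxes in the document) and
    `fails(subset)` answers True when distilling that subset errors out.
    Assumes exactly one offending item.
    """
    while len(items) > 1:
        mid = len(items) // 2
        first, second = items[:mid], items[mid:]
        # Recurse into whichever half still reproduces the error.
        items = first if fails(first) else second
    return items[0]
```

With a single offending element this takes only about log2(n) test runs instead of n, which matters when each run means re-distilling the file.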

Similar Messages

  • BACKGROUND JOB WAS NOT SCHEDULED/LOG FILE NOT YET GENERATED

    Hello,
    To generate the log report, the /VIRSA/ZVFATBAK program is scheduled on an hourly basis, but sometimes the report doesn't get generated, even though the background job shows as successfully finished.
    If we check the log report for the FFID manually, the error message below is displayed:
    " BACKGROUND JOB WAS NOT SCHEDULED/LOG FILE NOT YET GENERATED"
    Can anyone guide me on how to solve this issue?
    Thanks in advance.
    Best Regards,
    Prashant Dubey

    Hi,
    First check the status of the job by selecting it and checking the job status (Ctrl+Shift+F12).
    Since it is a periodically scheduled job, there will be a RELEASED job after every active job.
    Try to copy it into another job using the copy option, and give it a new name that you will remember.
    The moment you copy it, you will find the copied job in SCHEDULED status.
    From there, try to run it again on an hourly basis.
    After copying the job, set the old released job back to scheduled; otherwise two will run at the same time.
    rgds,

  • Adobe Acrobat 9 Pro making .log files instead of .pdf files

    I just uninstalled Acrobat 9 from one computer and installed it on my new one. Now when I try to create a PDF, it creates a .log file instead. I have tried uninstalling and reinstalling it. Also, there is NOT a PDF hiding somewhere else. The program worked before on my other computer.

    If you are receiving a .log file, it typically means that an error occurred during the creation of the PDF file from a PostScript or EPS file via Distiller. Open the .log file in Notepad (assuming you are on Windows) and see what it says! Likely, the problem is that your new system is missing some fonts from your old system.
            - Dov

  • APEX on Oracle XE PDF printing produces: Format error: not a PDF or corrupted.

    Dear fellow Apexers and Oracle Gurus,
    I have the following configuration:
    Oracle XE 11gR2
    APEX 4.2.3.00.08
    Listener 2.0.5
    On this setup I can create workspaces and applications as I please.
    Now I want to print a PDF report.
    I have set up PDF printing to "Oracle Listener" in the "manage Instance" settings in the instance administration.
    I have created a classical report on the EMPLOYEES table (Select * from EMPLOYEES)
    and enabled PDF printing in the "Printing" area of the "Print Attributes" of the page.
    When I run the page I do get the "print" link on the bottom of the page.
    Clicking the link does produce a .PDF, but opening the file triggers an error in my PDF reader: "Format error: not a PDF or corrupted."
    Opening the .PDF file in a text editor reveals the corrupt content:
    %PDF-1.4
    %ª«¬
    Unknown function: gatherContextInfo
    The same setup works fine and produces the expected PDF file with the report on the following configuration:
    Oracle Vbox with Developer days image;
    DB 11gR2
    Upgraded to apex 4.2.3.00.08
    Listener 2.0.5
    Since the PDF shows "unknown function", I suspected the XE configuration of lacking some of the necessary rights; maybe I forgot to configure the ACLs correctly.
    So I compared the ACL info on both configurations. Alas, on both machines they return the same result:
    SQL> SELECT * FROM DBA_NETWORK_ACLS
    HOST         LOWER_PORT    UPPER_PORT    ACL
    localhost    null          null           /sys/acls/local-access-users.xml
    *            null          null           /sys/acls/power_users.xml    
    SQL> select * from dba_network_acl_privileges
    ACL                                  PRINCIPAL      PRIVILEGE   IS_GRANT    INVERT
    /sys/acls/local-access-users.xml     APEX_040200    connect     true        false
    /sys/acls/power_users.xml            APEX_040200    connect     true        false 
    Anyone any idea why this works fine on the Vbox and not on the local XE configuration?
    Any hint or answer as to where the problem might be is appreciated
    TIA
    Wouter

    I'm having the same issue. I'm using Oracle XE 11gR2 as well. I've tried with APEX 4.2.2 and APEX 4.2.4. I have set up the Oracle Listener in the instance settings and set the report to print, and I get the same result as you. Have you made any progress yet?
    Thanks
    Jason

  • Log file not downloaded

    Hello Experts,
    We have configured a new application on our existing server and set the XCM parameters 'appinfo' and 'logfiledownload' to 'True'. Despite these settings, when we try to take a Java log of the application, we get a blank zip file; the text file is not present in the zip folder. Are there any settings I have missed? In the Visual Administrator, the parameter ForceSingleTraceFile is set to 'NO'.
    Any pointers will be helpful.
    Thanks and Regards,

    Hi Mukta,
    See the SAP Notes below; they will help you understand how log configuration works in ISA. Some of the notes include a PDF attachment describing how to do it:
    SAP Note 569976 - SAP Internet Sales - Creation of session logs
    SAP Note 812332 - How to set up logging on a remote J2EE client
    SAP Note 921409 - Enable session tracing in mySAP CRM 5.0 Java components
    SAP Note 1017756 - E-Commerce 5.0 - Creating own log/trace files
    SAP Note 1032305 - ECO - Separate tracing and runtime entries for B2B and B2C
    SAP Note 1090753 - Creation of logs for B2B and B2C Release 5.0
    SAP Note 975115 - 5.0 How to turn on/off ECO RUNTIME trace in 7.0 J2EE
    I hope this info will help you.
    eCommerce Developer

  • Log file not generated

    I followed these steps:
    1. In the application, set the profile FND: Debug Log Level to "Statement".
    2. Restart Apache.
    3. Run debug from Help > Diagnostics > Debug.
    4. Secure the debug log file, which should be in the directory returned by:
    select value from v$parameter where name like 'utl_file%'
    But no log file is created, and I don't know why (these steps were provided by an SR).
    Thanks

    What about "FND: Debug Log Filename for Middle-Tier" and "FND: Diagnostics" profile options?
    Note: 372209.1 - How to Collect an FND Diagnostics Trace (aka FND:Debug)
    https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=372209.1
    If the above does not help, set the debug log at the user level and check then.
    Note: 390881.1 - How To Set The Debug Log At User Level?
    https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=390881.1

  • Empty Log files not deleted by Cleaner

    Hi,
    we have a NoSql database installed on 3 nodes with a replication factor of 3 (see exact topology below).
    We run a test which consisted in the following operations repeated in a loop : store a LOB, read it , delete it.
    store.putLOB(key, new ByteArrayInputStream(source),Durability.COMMIT_SYNC, 5, TimeUnit.SECONDS);
    store.getLOB(key,Consistency.NONE_REQUIRED, 5, TimeUnit.SECONDS);
    store.deleteLOB(key, Durability.COMMIT_SYNC, 5, TimeUnit.SECONDS);
    During the test, the space occupied by the database kept growing.
    The cleaner threads are running, but they log these warnings:
    2015-02-03 14:32:58.936 UTC WARNING [rg3-rn2] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
    2015-02-03 14:32:58.937 UTC WARNING [rg3-rn2] JE: Cleaner has 12 files not deleted because they are protected by replication.
    2015-02-03 14:32:58.920 UTC WARNING [rg3-rn1] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
    2015-02-03 14:32:58.921 UTC WARNING [rg3-rn1] JE: Cleaner has 12 files not deleted because they are protected by replication.
    2015-02-03 14:32:58.908 UTC WARNING [rg3-rn3] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
    2015-02-03 14:32:58.909 UTC WARNING [rg3-rn3] JE: Cleaner has 12 files not deleted because they are protected by replication.
    2015-02-03 14:33:31.704 UTC INFO [rg3-rn2] JE: Chose lowest utilized file for cleaning. fileChosen: 0xc (adjustment disabled) totalUtilization: 1 bestFileUtilization: 0 isProbe: false
    2015-02-03 14:33:32.137 UTC INFO [rg3-rn2] JE: CleanerRun 13 ends on file 0xc probe=false invokedFromDaemon=true finished=true fileDeleted=false nEntriesRead=1129 nINsObsolete=64 nINsCleaned=2 nINsDead=0 nINsMigrated=2 nBINDeltasObsolete=2 nBINDeltasCleaned=0 nBINDeltasDead=0 nBINDeltasMigrated=0 nLNsObsolete=971 nLNsCleaned=88 nLNsDead=0 nLNsMigrated=88 nLNsMarked=0 nLNQueueHits=73 nLNsLocked=0 logSummary=<CleanerLogSummary endFileNumAtLastAdjustment="0xe" initialAdjustments="5" recentLNSizesAndCounts=""> inSummary=<INSummary totalINCount="68" totalINSize="7570" totalBINDeltaCount="2" totalBINDeltaSize="254" obsoleteINCount="66" obsoleteINSize="7029" obsoleteBINDeltaCount="2" obsoleteBINDeltaSize="254"/> estFileSummary=<summary totalCount="2072" totalSize="13069531" totalINCount="68" totalINSize="7570" totalLNCount="1059" totalLNSize="13024352" maxLNSize="102482" obsoleteINCount="66" obsoleteLNCount="971" obsoleteLNSize="12974449" obsoleteLNSizeCounted="971" getObsoleteSize="13019405" getObsoleteINSize="7347" getObsoleteLNSize="12974449" getMaxObsoleteSize="13019405" getMaxObsoleteLNSize="12974449" getAvgObsoleteLNSizeNotCounted="NaN"/> recalcFileSummary=<summary totalCount="2072" totalSize="13069531" totalINCount="68" totalINSize="7570" totalLNCount="1059" totalLNSize="13024352" maxLNSize="0" obsoleteINCount="66" obsoleteLNCount="971" obsoleteLNSize="12974449" obsoleteLNSizeCounted="971" getObsoleteSize="13019405" getObsoleteINSize="7347" getObsoleteLNSize="12974449" getMaxObsoleteSize="13019405" getMaxObsoleteLNSize="12974449" getAvgObsoleteLNSizeNotCounted="NaN"/> lnSizeCorrection=NaN newLnSizeCorrection=NaN estimatedUtilization=0 correctedUtilization=0 recalcUtilization=0 correctionRejected=false
    Log files are not deleted even when empty, as seen using the DbSpace utility:
    java -cp lib/kvstore.jar com.sleepycat.je.util.DbSpace -h /mam2g/data/sn1/u01/rg2-rn1/env
      File    Size (KB)  % Used
    00000000      12743       0
    00000001      12785       0
    00000002      12725       0
    00000003      12719       0
    00000004      12703       0
    00000005      12751       0
    00000006      12795       0
    00000007      12725       0
    00000008      12752       0
    00000009      12720       0
    0000000a      12723       0
    0000000b      12764       0
    0000000c      12715       0
    0000000d      12799       0
    0000000e      12724       1
    0000000f       5717       0
    TOTALS      196867       0
    Here is the configured topology:
    kv-> show topology
    store=MMS-KVstore  numPartitions=90 sequence=106
      zn: id=zn1 name=MAMHA repFactor=3 type=PRIMARY
      sn=[sn1] zn:[id=zn1 name=MAMHA] 192.168.144.11:5000 capacity=3 RUNNING
        [rg1-rn1] RUNNING
                 single-op avg latency=4.414467 ms   multi-op avg latency=0.0 ms
        [rg2-rn1] RUNNING
                 single-op avg latency=1.5962526 ms   multi-op avg latency=0.0 ms
        [rg3-rn1] RUNNING
                 single-op avg latency=1.3068943 ms   multi-op avg latency=0.0 ms
      sn=[sn2] zn:[id=zn1 name=MAMHA] 192.168.144.12:6000 capacity=3 RUNNING
        [rg1-rn2] RUNNING
                 single-op avg latency=1.5670061 ms   multi-op avg latency=0.0 ms
        [rg2-rn2] RUNNING
                 single-op avg latency=8.637241 ms   multi-op avg latency=0.0 ms
        [rg3-rn2] RUNNING
                 single-op avg latency=1.370075 ms   multi-op avg latency=0.0 ms
      sn=[sn3] zn:[id=zn1 name=MAMHA] 192.168.144.35:7000 capacity=3 RUNNING
        [rg1-rn3] RUNNING
                 single-op avg latency=1.4707285 ms   multi-op avg latency=0.0 ms
        [rg2-rn3] RUNNING
                 single-op avg latency=1.5334034 ms   multi-op avg latency=0.0 ms
        [rg3-rn3] RUNNING
                 single-op avg latency=9.05199 ms   multi-op avg latency=0.0 ms
      shard=[rg1] num partitions=30
        [rg1-rn1] sn=sn1
        [rg1-rn2] sn=sn2
        [rg1-rn3] sn=sn3
      shard=[rg2] num partitions=30
        [rg2-rn1] sn=sn1
        [rg2-rn2] sn=sn2
        [rg2-rn3] sn=sn3
      shard=[rg3] num partitions=30
        [rg3-rn1] sn=sn1
        [rg3-rn2] sn=sn2
        [rg3-rn3] sn=sn3
    Why are empty files not deleted by the cleaner? Why are empty log files protected by the replicas if all the replicas seem to be aligned with the master?
    java -jar /mam2g/kv-3.2.5/lib/kvstore.jar ping -host 192.168.144.11 -port 5000
    Pinging components of store MMS-KVstore based upon topology sequence #106
    Time: 2015-02-03 13:44:57 UTC
    MMS-KVstore comprises 90 partitions and 3 Storage Nodes
    Storage Node [sn1] on 192.168.144.11:5000    Zone: [name=MAMHA id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC  Build id: 7ab4544136f5
            Rep Node [rg1-rn1]      Status: RUNNING,MASTER at sequence number: 24,413 haPort: 5011
            Rep Node [rg2-rn1]      Status: RUNNING,REPLICA at sequence number: 13,277 haPort: 5012
            Rep Node [rg3-rn1]      Status: RUNNING,REPLICA at sequence number: 12,829 haPort: 5013
    Storage Node [sn2] on 192.168.144.12:6000    Zone: [name=MAMHA id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC  Build id: 7ab4544136f5
            Rep Node [rg3-rn2]      Status: RUNNING,REPLICA at sequence number: 12,829 haPort: 6013
            Rep Node [rg2-rn2]      Status: RUNNING,MASTER at sequence number: 13,277 haPort: 6012
            Rep Node [rg1-rn2]      Status: RUNNING,REPLICA at sequence number: 24,413 haPort: 6011
    Storage Node [sn3] on 192.168.144.35:7000    Zone: [name=MAMHA id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC  Build id: 7ab4544136f5
            Rep Node [rg1-rn3]      Status: RUNNING,REPLICA at sequence number: 24,413 haPort: 7011
            Rep Node [rg2-rn3]      Status: RUNNING,REPLICA at sequence number: 13,277 haPort: 7012
            Rep Node [rg3-rn3]      Status: RUNNING,MASTER at sequence number: 12,829 haPort: 7013

    Solved by setting an undocumented parameter, je.rep.minRetainedVLSNs.
    The solution is described in the NoSql forum thread "Store cleaning policy".

  • Archived log files not registered in the Database

    I have Windows Server 2008 R2.
    I have Oracle 11g R2.
    I configured primary and standby databases on 2 physical servers; please find the verification below.
    I am using DG Broker.
    Recently I did a failover from primary to standby.
    Then I did REINSTATE DATABASE to return the old primary to standby mode.
    Then I did a switchover again.
    The problem is that archived logs are not being registered or applied.
    SQL> select max(sequence#) from v$archived_log; 
    MAX(SEQUENCE#)
             16234
    I did ALTER SYSTEM SWITCH LOGFILE, then issued the following statement to check, and found the same number on both primary and standby; it has not changed.
    SQL> select max(sequence#) from v$archived_log;
    MAX(SEQUENCE#)
             16234
    Can anybody help, please?
    Regards

    Thanks for the reply.
    What I mean is: after I do ALTER SYSTEM SWITCH LOGFILE, I can see the archived log file generated on the physical disk, but when I run
    select MAX(SEQUENCE#) FROM V$ARCHIVED_LOG;
    the sequence number does not change; it should increase by 1 whenever I switch the logfile.
    However, I did as you asked; please find the result below:
    SQL> alter system switch logfile;
    System altered.
    SQL> /
    System altered.
    SQL> /
    System altered.
    SQL> /
    System altered.
    SQL> SELECT DB_NAME,HOSTNAME,LOG_ARCHIVED,LOG_APPLIED_02,LOG_APPLIED_03,APPLIED_TIME,LOG_ARCHIVED - LOG_APPLIED_02 LOG_GAP_02,
      2  LOG_ARCHIVED - LOG_APPLIED_03 LOG_GAP_03
      3  FROM (SELECT NAME DB_NAME FROM V$DATABASE),
      4  (SELECT UPPER(SUBSTR(HOST_NAME, 1, (DECODE(INSTR(HOST_NAME, '.'),0, LENGTH(HOST_NAME),(INSTR(HOST_NAME, '.') - 1))))) HOSTNAME FROM V$INSTANCE),
      5  (SELECT MAX(SEQUENCE#) LOG_ARCHIVED FROM V$ARCHIVED_LOG WHERE DEST_ID = 1 AND ARCHIVED = 'YES'),
      6  (SELECT MAX(SEQUENCE#) LOG_APPLIED_02 FROM V$ARCHIVED_LOG WHERE DEST_ID = 2 AND APPLIED = 'YES'),
      7  (SELECT MAX(SEQUENCE#) LOG_APPLIED_03 FROM V$ARCHIVED_LOG WHERE DEST_ID = 3 AND APPLIED = 'YES'),
      8  (SELECT TO_CHAR(MAX(COMPLETION_TIME), 'DD-MON/HH24:MI') APPLIED_TIME FROM V$ARCHIVED_LOG WHERE DEST_ID = 2 AND APPLIED = 'YES');
    DB_NAME  HOSTNAME      LOG_ARCHIVED  LOG_APPLIED_02  LOG_APPLIED_03  APPLIED_TIME  LOG_GAP_02  LOG_GAP_03
    EPPROD   CORSKMBBOR01  16252         16253           (null)          15-JAN/12:04  -1          (null)

  • Archived log file not displaying

    While navigating around the "home" page for OCS as an administrator, I was trying to run a report under Reports > Conferences > Diagnostics.
    The link says:
    Click the link below to view comprehensive conference diagnostics. To see the log file correctly, use Internet Explorer 6.0 or higher.
    I am using IE 6 and the page shows up as done, but it is blank. Any idea what is wrong? The URL reads:
    https://mywebserver/imtapp/logs/imtLogs.jsp?fileName=D:/ocs_onebox/mtier/imeeting/logs/sessions/12.20.2004/10000-clbsvr_OCS_home_mid.mywebserver.imt-collab.0-06_34_01.xml
    The file is there on the filesystem.
    TIA.

    Stages means transformations in the data flow.
    Transformation names are not displayed correctly in the log file.
    For example, if I give the name "TC_table_name" to a Table Compare transformation, only "Table Comparison" is displayed in the log file.

  • Log files not being removed.

    Hello,
    I've upgraded an application from Berkeley DB 5.1.25 to 5.3.21, and after that, log files are no longer automatically removed. This is the only change in the application. It's an application written in C.
    The environment of the application is created with the flag DB_LOG_AUTO_REMOVE
    dbenv->log_set_config(dbenv, DB_LOG_AUTO_REMOVE, TRUE).
    The application has a thread to periodically checkpoint the data
    dbenv->txn_checkpoint(dbenv, 0, 0, 0)
    So far, so good: with version 5.1.25 this was enough to remove unused log files (I don't need to be able to do catastrophic recovery). But it no longer works with version 5.3.21.
    If I run db_archive (no options), it shows nothing, suggesting that all log files are still needed. But if I run db_hot_backup on the database, all but the last log files are removed (in the backup), as wanted.
    Note: usually I don't want to run db_archive or any external tool to remove unused log files; I hope what is inside the application is enough.
    Is this something known, did something change, or can you suggest something to look for?
    Thanks for your help
    José-Marcio

    Thank you for giving us a test program; it helped tremendously in fully understanding what you are doing. In 5.3 we fixed a bug in the way log files are archived in an HA environment, and what you are running into is a consequence of that bug fix. In the test program you are using DB_INIT_REP, which signals that you want an HA environment. With HA, there is a master and some number of read-only clients; by default we treat the initiating database as the master, which is what is happening in your case. In an HA (replicated) environment, we cannot archive log files until we can be assured that the clients have applied the contents of each log file. Our belief is that you are not really running in an HA environment and do not need the DB_INIT_REP flag. In our initial testing, when we said it worked for us, it was because we did not use the DB_INIT_REP flag, as there was no mention of replication being needed in the post.
    Recommendation: Please remove the use of the DB_INIT_REP flag or properly set up an HA environment (details in our docs).
    thanks
    mike

  • Empty/underutilized log files not removed

    I have an application that runs the cleaner and the checkpointer explicitly (instead of relying on the database to do it).
    Here are the relevant environment settings: je.env.runCheckpointer=false, je.env.runCleaner=false, je.cleaner.minUtilization=5, je.cleaner.expunge=true.
    When running the application, I noticed that the first few dozen log files were removed, but later (even though the cleaner was executed at regular intervals) no more log files were removed.
    I have run the DbSpace utility on the environment and found the following result:
    File Size (KB) % Used
    00000033 97656 0
    00000034 97655 0
    00000035 97656 0
    00000036 97656 0
    00000037 97656 0
    00000038 97655 2
    00000039 97656 0
    0000003a 97656 0
    0000003b 97655 0
    0000003c 97655 0
    0000003d 97655 0
    0000003e 97655 0
    0000003f 97656 0
    00000040 97655 0
    00000041 97656 0
    00000042 97656 0
    00000043 97656 0
    00000044 97655 0
    00000045 97655 0
    00000046 97656 0
    This goes on for a long time. I had the database tracing enabled at CONFIG level. Here are the last lines of the log just before the last log file (0x32) is removed:
    2009-05-06 08:41:51:111:CDT INFO CleanerRun 49 on file 0x30 begins backlog=2
    2009-05-06 08:41:52:181:CDT SEVERE CleanerRun 49 on file 0x30 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206347 nINsObsolete=6365 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199971 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:41:52:182:CDT INFO CleanerRun 50 on file 0x31 begins backlog=1
    2009-05-06 08:41:53:223:CDT SEVERE CleanerRun 50 on file 0x31 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205475 nINsObsolete=6319 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199144 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:41:53:224:CDT INFO CleanerRun 51 on file 0x32 begins backlog=0
    2009-05-06 08:41:54:292:CDT SEVERE CleanerRun 51 on file 0x32 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205197 nINsObsolete=6292 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=198893 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:42:24:300:CDT INFO CleanerRun 52 on file 0x33 begins backlog=1
    2009-05-06 08:42:24:546:CDT CONFIG Checkpoint 963: source=api success=true nFullINFlushThisRun=13 nDeltaINFlushThisRun=0
    2009-05-06 08:42:24:931:CDT SEVERE Cleaner deleted file 0x32
    2009-05-06 08:42:24:938:CDT SEVERE Cleaner deleted file 0x31
    2009-05-06 08:42:24:946:CDT SEVERE Cleaner deleted file 0x30
    Here are a few log lines right after the last log message with cleaner deletion (until the next checkpoint):
    2009-05-06 08:42:25:339:CDT SEVERE CleanerRun 52 on file 0x33 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=204164 nINsObsolete=6277 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=197865 nLNsCleaned=11 nLNsDead=0 nLNsMigrated=0 nLNsMarked=11 nLNQueueHits=9 nLNsLocked=0
    2009-05-06 08:42:25:340:CDT INFO CleanerRun 53 on file 0x34 begins backlog=0
    2009-05-06 08:42:26:284:CDT SEVERE CleanerRun 53 on file 0x34 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=203386 nINsObsolete=6281 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=197091 nLNsCleaned=2 nLNsDead=2 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:42:56:290:CDT INFO CleanerRun 54 on file 0x35 begins backlog=4
    2009-05-06 08:42:57:252:CDT SEVERE CleanerRun 54 on file 0x35 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205497 nINsObsolete=6312 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199164 nLNsCleaned=10 nLNsDead=3 nLNsMigrated=0 nLNsMarked=7 nLNQueueHits=6 nLNsLocked=0
    2009-05-06 08:42:57:253:CDT INFO CleanerRun 55 on file 0x39 begins backlog=4
    2009-05-06 08:42:58:097:CDT SEVERE CleanerRun 55 on file 0x39 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=204553 nINsObsolete=6301 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=198238 nLNsCleaned=2 nLNsDead=0 nLNsMigrated=0 nLNsMarked=2 nLNQueueHits=1 nLNsLocked=0
    2009-05-06 08:42:58:098:CDT INFO CleanerRun 56 on file 0x3a begins backlog=3
    2009-05-06 08:42:59:261:CDT SEVERE CleanerRun 56 on file 0x3a invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=204867 nINsObsolete=6270 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=198586 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:42:59:262:CDT INFO CleanerRun 57 on file 0x36 begins backlog=2
    2009-05-06 08:43:02:185:CDT SEVERE CleanerRun 57 on file 0x36 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206158 nINsObsolete=6359 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199786 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:43:02:186:CDT INFO CleanerRun 58 on file 0x37 begins backlog=2
    2009-05-06 08:43:03:243:CDT SEVERE CleanerRun 58 on file 0x37 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206160 nINsObsolete=6331 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=199817 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:43:03:244:CDT INFO CleanerRun 59 on file 0x3b begins backlog=1
    2009-05-06 08:43:04:000:CDT SEVERE CleanerRun 59 on file 0x3b invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206576 nINsObsolete=6385 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=200179 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:43:04:001:CDT INFO CleanerRun 60 on file 0x38 begins backlog=0
    2009-05-06 08:43:08:180:CDT SEVERE CleanerRun 60 on file 0x38 invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=205460 nINsObsolete=6324 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=194125 nLNsCleaned=4999 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=4999
    2009-05-06 08:43:08:224:CDT INFO CleanerRun 61 on file 0x3c begins backlog=0
    2009-05-06 08:43:09:099:CDT SEVERE CleanerRun 61 on file 0x3c invokedFromDaemon=false finished=true fileDeleted=false nEntriesRead=206589 nINsObsolete=6343 nINsCleaned=0 nINsDead=0 nINsMigrated=0 nLNsObsolete=200235 nLNsCleaned=0 nLNsDead=0 nLNsMigrated=0 nLNsMarked=0 nLNQueueHits=0 nLNsLocked=0
    2009-05-06 08:43:24:548:CDT CONFIG Checkpoint 964: source=api success=true nFullINFlushThisRun=12 nDeltaINFlushThisRun=0
    I could not see anything fundamentally different between the log messages when log files were removed and when they were not. The DbSpace utility confirmed that there are plenty of log files under the minimum utilization, so I can't quite explain why the log file removal stopped all of a sudden.
    Any help would be appreciated (JE version: 3.3.75).

    Hi Bertold,
    My first guess is that one or more transactions have accidentally not been ended (committed or aborted), or cursors not closed.
    A clue is the nLNsLocked=4999 in the second set of trace messages. This means that 4999 records were locked by your application and were unable to be migrated by the cleaner. The cleaner will wait until these record locks are released before deleting any log files. Records locks are held by transactions and cursors.
    If this doesn't ring a bell and you need to look further, one thing you can do is print the EnvironmentStats periodically (System.out.println(Environment.getStats(null))). Take a look at the nPendingLNsProcessed and nPendingLNsLocked. The former is the number of records the cleaner attempts to migrate because they were locked earlier. The latter is the number that are still locked and cannot be migrated.
    --mark

  • Pfirewall.log file not updating

    Hi,
    I configured Windows Firewall to allow logging in my DC Firewall GPO. The settings are:
    Log dropped packets - enabled
    Log successful connections - enabled
    Log file path and Name:
    %systemroot%\system32\logfiles\firewall\pfirewall.log
    size limit  32767
    The pfirewall.log file is located in this area but is not updating itself. Is there something I need to do so the file gets an updated time stamp and overwrites itself after it reaches a certain size? I changed the size limit from 1024 to 32767 today, but it still does not seem to update itself.
    My 2003 domain's pfirewall.log IS updating, and it is pointing to the c:\windows directory. Is there a service that needs NTFS permissions on the folder where the log file resides, or a different GPO setting that handles this?
    Thanks,
    Kevin C.

    Hello Kevin,
    Please check this link:
    http://www.grouppolicy.biz/2010/07/how-to-manage-windows-firewall-settings-using-group-policy/
    Before exporting, go to the properties (see the last part of the link), where you will find the logging option, and configure your logging settings there. Then import these settings into your GPO.

  • Temp Log file not shrinking

    hi,
    I am using MS SQL 2005 for my SAP ERP6. After running the T-Code SGEN, the temp log file increased and reached 48 GB. When I try to shrink it with the MS SQL tools or the DBCC SHRINKFILE command, it shows that only 1.5 GB is used and nearly 46.5 GB is unused, but the main problem is it's not shrinking.
    help needed.....
    thanks in advance

    Hi,
    Execute the following commands in sequence:
    1. BACKUP LOG <SID> WITH NO_LOG
    2. DBCC SHRINKFILE (<name_logfile>, <size>)
    or just execute:
    BACKUP LOG <SID> WITH TRUNCATE_ONLY
    The above command only removes the unused space (inactive parts) from the log file and does not take a backup of the log file.
    Refer to [SAP Note 625546 - Size of transaction log file is too big|https://service.sap.com/sap/support/notes/625546] for more info.
    Regards,
    Bhavik G. Shroff

  • TGW on Linux - log files not deleting

    I have been running TGW 3.5 for a few months on Linux and noticed the disk space is full. It seems the TGW logs are not being cleared out; I have had to manually delete loads of log files.
    From the logs there seems to be a process which checks how long the logs have been there, but for some reason it is not deleting them.

    By design, TGW doesn't clear out old log files, on the theory that keeping them helps with troubleshooting. But we are planning to enhance it to clear log files older than x number of days.
    As you mentioned, the current fix is to manually clear out old log files. How often do you have to clear them? Increasing the available disk space may help reduce how often you need to clean up old logs.
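    Until that enhancement ships, the manual cleanup can be scripted. Below is a minimal sketch of an age-based cleanup; the directory path and 30-day retention period are hypothetical placeholders, not part of TGW itself:

```java
import java.io.File;
import java.util.concurrent.TimeUnit;

public class LogCleaner {
    // Delete *.log files in logDir whose last modification time is older
    // than maxAgeDays. Returns the number of files actually deleted.
    public static int deleteOldLogs(File logDir, int maxAgeDays) {
        long cutoff = System.currentTimeMillis() - TimeUnit.DAYS.toMillis(maxAgeDays);
        int deleted = 0;
        File[] files = logDir.listFiles();
        if (files == null) return 0; // not a directory, or an I/O error
        for (File f : files) {
            if (f.isFile() && f.getName().endsWith(".log") && f.lastModified() < cutoff) {
                if (f.delete()) deleted++;
            }
        }
        return deleted;
    }

    public static void main(String[] args) {
        // Hypothetical TGW log directory; adjust to the real installation path.
        int n = deleteOldLogs(new File("/opt/tgw/logs"), 30);
        System.out.println("Deleted " + n + " old log files");
    }
}
```

    Run from cron once a day, this keeps the disk from filling while still leaving recent logs available for troubleshooting.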

  • Log file not creating

    Hi, I tried to configure logging for my application, but the log files are not getting created, and I am getting the following error.
    My log configuration:
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE log-configuration SYSTEM "log-configuration.dtd">
    <log-configuration>
         <log-destinations>
              <log-destination
                   count="10"
                   effective-severity="ALL"
                   limit="10000"
                   name="LogTestLog"
                   pattern="./log/applications/TestLog/LogTestLog.%g.log"
                   type="FileLog"/>
         </log-destinations>
         <log-controllers>
              <log-controller
                   effective-severity="ALL"
                   name="System.out">
                   <associated-destinations>
                        <destination-ref
                             association-type="LOG"
                             name="LogTestLog"/>
                   </associated-destinations>
              </log-controller>
              <log-controller
                   effective-severity="ALL"
                   name="com.giri.test.LogServlet">
                   <associated-destinations>
                        <destination-ref
                             association-type="LOG"
                             name="LogTestLog"/>
                        <anonymous-destination
                             association-type="LOG"
                             type="FileLog"/>
                   </associated-destinations>
              </log-controller>
         </log-controllers>
    </log-configuration>
    #1.5#0030485D5AE617CD000000000000142000042AAA6682264C#1172811259302#/System/Logging##com.sap.tc.logging.FileLogInfoData.[getFileHeader(String fileName, int cntHeadLines)]######147f6460c87a11dba2920030485d5ae6#SAPEngine_System_Thread[impl:5]_99##0#0#Warning##Java#SAP_LOGGING_UNEXPECTED##Unexcepted error occured on !#1#FileHeader parsing#
    #1.5#0030485D5AE617CD000000010000142000042AAA6682297F#1172811259302#/System/Logging##/System/Logging######147f6460c87a11dba2920030485d5ae6#SAPEngine_System_Thread[impl:5]_99##0#0#Path##Java###Caught #1#java.lang.Exception: .\log\applications\TestLog\LogTestLog.0.log (The system cannot find the path specified)
         at com.sap.tc.logging.FileLogInfoData.getEOLLength(FileLogInfoData.java:432)
         at com.sap.tc.logging.FileLogInfoData.getFileHeaderLines(FileLogInfoData.java:348)
         at com.sap.tc.logging.FileLogInfoData.getFileHeaderLines(FileLogInfoData.java:334)
         at com.sap.tc.logging.FileLogInfoData.loadFileLogHeader(FileLogInfoData.java:320)
         at com.sap.tc.logging.FileLogInfoData.init(FileLogInfoData.java:260)
         at com.sap.tc.logging.FileLogInfoData.<init>(FileLogInfoData.java:119)
         at com.sap.tc.logging.FileLog.init(FileLog.java:373)
         at com.sap.tc.logging.FileLog.<init>(FileLog.java:282)
         at com.sap.tc.logging.FileLog.<init>(FileLog.java:246)
         at com.sap.engine.services.log_configurator.admin.LogConfigurator.adjustConfiguration(LogConfigurator.java:665)
         at com.sap.engine.services.log_configurator.admin.LogConfigurator.applyConfiguration(LogConfigurator.java:1488)
         at com.sap.engine.services.log_configurator.LogConfiguratorContainer.prepareStart(LogConfiguratorContainer.java:545)
         at com.sap.engine.services.deploy.server.application.StartTransaction.prepareCommon(StartTransaction.java:239)
         at com.sap.engine.services.deploy.server.application.StartTransaction.prepare(StartTransaction.java:187)
         at com.sap.engine.services.deploy.server.application.ApplicationTransaction.makeAllPhasesOnOneServer(ApplicationTransaction.java:301)
         at com.sap.engine.services.deploy.server.application.ParallelAdapter.makeAllPhasesImpl(ParallelAdapter.java:327)
         at com.sap.engine.services.deploy.server.application.ParallelAdapter.runMe(ParallelAdapter.java:74)
         at com.sap.engine.services.deploy.server.application.ParallelAdapter$1.run(ParallelAdapter.java:218)
         at com.sap.engine.frame.core.thread.Task.run(Task.java:64)
         at com.sap.engine.core.thread.impl5.SingleThread.execute(SingleThread.java:79)
         at com.sap.engine.core.thread.impl5.SingleThread.run(SingleThread.java:150)

    I have the same problem. I also see many similar exceptions from different apps, such as when writing to the file ".\log\applications\cms\default.0.trc".
    Do I have to create the log file before I use the logging service?
    I changed "ForceSingleTraceFile" to "NO" in "LogManager" via the Visual Administrator. Could that be the problem?
    I am running SAP Web AS 6.4 and deploying using SDM. I have a log-configuration.xml in the ear file. This is my log configuration:
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE log-configuration SYSTEM "log-configuration.dtd">
    <log-configuration>
         <log-formatters/>
         <log-destinations>
              <log-destination
                   count="10"
                   effective-severity="ALL"
                   limit="1000000"
                   name="TraceLog01"
                   pattern="./log/file.%g.trc"
                   type="FileLog"/>
         </log-destinations>
         <log-controllers>
              <log-controller
                   effective-severity="ALL"
                   name="LogController01">
                   <associated-destinations>
                        <destination-ref
                             association-type="LOG"
                             name="TraceLog01"/>
                   </associated-destinations>
              </log-controller>
         </log-controllers>
    </log-configuration>
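    The "system cannot find the path specified" exception above suggests the relative destination directory (./log/applications/TestLog) does not exist when the FileLog is initialized. One workaround, sketched below, is to pre-create the directory part of the destination pattern at startup; this is an assumption-based workaround using plain java.io, not an official part of the SAP logging service:

```java
import java.io.File;

public class EnsureLogDir {
    // Create the destination directory for a FileLog pattern such as
    // "./log/applications/TestLog/LogTestLog.%g.log" if it is missing.
    // Returns true if the directory exists (or was created) afterwards.
    public static boolean ensureParentDir(String pattern) {
        File parent = new File(pattern).getParentFile();
        if (parent == null) return true; // pattern has no directory part
        return parent.isDirectory() || parent.mkdirs();
    }

    public static void main(String[] args) {
        boolean ok = ensureParentDir("./log/applications/TestLog/LogTestLog.%g.log");
        System.out.println("log directory ready: " + ok);
    }
}
```

    Calling something like this before the logging service opens its destinations would avoid the FileNotFound-style failure, since java.io file logs generally do not create missing parent directories on their own.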
