Empty log files in STMS

Hello,
For some transport requests (but not all of them), when I try to view the import logs (via SE01 or STMS), I only get a line of "##########". The physical log file under UNIX is empty. I can't find anything relevant in the SAP notes.
Has anybody ever experienced this?
We have 1 central instance and 4 application servers. The directory /usr/sap/trans is mounted via NFS.
Rgds,
Y.

This happened to us once when the directory was not mounted properly. We reported the problem to the admin, who said the mount had not been done properly; after a remount it was all OK again.
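A quick way to double-check this (paths per the standard transport directory layout, adjust if yours differ) is to verify on each application server that the transport directory is really an NFS mount and is reachable:
    df -h /usr/sap/trans
    mount | grep /usr/sap/trans
    ls -l /usr/sap/trans/log
If the mount is missing or stale on one of the application servers, the import log may end up being written locally or not at all, which could explain why only some transport logs are affected.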
Hope it helps..
Br,
Sri
Award points for helpful answers.

Similar Messages

  • Empty Log File - log settings will not save

    Description of Problem or Question:
    Cannot get logging to work in folder D:\Program Files\Business Objects\Dashboard and Analytics 12.0\server\log
    (empty log file is created)
    Product\Version\Service Pack\Fixpack (if applicable):
    BO Enterprise 12.0
    Relevant Environment Information (OS & version, java or .net & version, DB & version):
    Server: Windows Server 2003 Enterprise SP2
    Database: Oracle 10g
    Client: Windows Vista
    Sporadic or Consistent (if applicable):
    Consistent
    What has already been tried (where have you searched for a solution to your question/problem):
    Searched forum, SMP
    Steps to Reproduce (if applicable):
    From InfoViewApp, logged in as Admin
    Open -> Dashboard and Analytics Setup -> Parameters -> Trace
    Check "Log to folder" and "SQL Queries", Click Apply.
    Now, navigate away and return to this page - the "Log to folder" option is unchecked again, and an empty log file is created.

    Send Apple feedback. They won't answer, but at least they will know there is a problem. If enough people send feedback, it may get the problem solved sooner.
    Feedback
    Or you can use your Apple ID to register with this site and go to the Apple BugReporter. Supposedly you will get an answer if you submit feedback.
    Feedback via Apple Developer
    Do a backup.
    Quit the application.
    Go to Finder and select your user/home folder. With that Finder window as the front window, either select Finder/View/Show View Options or press Command-J. When the View Options window opens, check ’Show Library Folder’. That should make your user Library folder visible in your user/home folder. Select Library, then go to Preferences/com.apple.systempreferences.plist and move the .plist to your desktop.
    Restart, open the application and test. If it works okay, delete the plist from the desktop.
    If the application behaves the same, return the .plist to where you got it from, overwriting the newer one.
    Thanks to leonie for some information contained in this.

  • Empty Log files not deleted by Cleaner

    Hi,
    we have a NoSQL database installed on 3 nodes with a replication factor of 3 (see the exact topology below).
    We ran a test which consisted of the following operations repeated in a loop: store a LOB, read it, delete it.
    store.putLOB(key, new ByteArrayInputStream(source),Durability.COMMIT_SYNC, 5, TimeUnit.SECONDS);
    store.getLOB(key,Consistency.NONE_REQUIRED, 5, TimeUnit.SECONDS);
    store.deleteLOB(key, Durability.COMMIT_SYNC, 5, TimeUnit.SECONDS);
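    (For reference, a self-contained version of this test loop looks roughly like the sketch below; the key path, payload size and loop count are placeholders rather than our real test parameters, and the store name/helper host are taken from the topology shown further down.)
        import java.io.ByteArrayInputStream;
        import java.util.concurrent.TimeUnit;
        import oracle.kv.Consistency;
        import oracle.kv.Durability;
        import oracle.kv.KVStore;
        import oracle.kv.KVStoreConfig;
        import oracle.kv.KVStoreFactory;
        import oracle.kv.Key;
        import oracle.kv.lob.InputStreamVersion;

        public class LobCycleTest {
            public static void main(String[] args) throws Exception {
                KVStore store = KVStoreFactory.getStore(
                        new KVStoreConfig("MMS-KVstore", "192.168.144.11:5000"));
                byte[] source = new byte[100 * 1024];            // dummy payload
                // LOB keys must end with the configured LOB suffix (".lob" by default)
                Key key = Key.createKey("test", "payload.lob");
                for (int i = 0; i < 1000; i++) {
                    store.putLOB(key, new ByteArrayInputStream(source),
                            Durability.COMMIT_SYNC, 5, TimeUnit.SECONDS);
                    InputStreamVersion isv =
                            store.getLOB(key, Consistency.NONE_REQUIRED, 5, TimeUnit.SECONDS);
                    if (isv != null) {
                        isv.getInputStream().close();            // read side not relevant here
                    }
                    store.deleteLOB(key, Durability.COMMIT_SYNC, 5, TimeUnit.SECONDS);
                }
                store.close();
            }
        }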
    During the test, the space occupied by the database kept growing.
    The cleaner threads are running but log these warnings:
    2015-02-03 14:32:58.936 UTC WARNING [rg3-rn2] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
    2015-02-03 14:32:58.937 UTC WARNING [rg3-rn2] JE: Cleaner has 12 files not deleted because they are protected by replication.
    2015-02-03 14:32:58.920 UTC WARNING [rg3-rn1] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
    2015-02-03 14:32:58.921 UTC WARNING [rg3-rn1] JE: Cleaner has 12 files not deleted because they are protected by replication.
    2015-02-03 14:32:58.908 UTC WARNING [rg3-rn3] JE: Replication prevents deletion of 12 files by Cleaner. Start file=0x0 holds CBVLSN 1, end file=0xe holds last VLSN 24,393
    2015-02-03 14:32:58.909 UTC WARNING [rg3-rn3] JE: Cleaner has 12 files not deleted because they are protected by replication.
    2015-02-03 14:33:31.704 UTC INFO [rg3-rn2] JE: Chose lowest utilized file for cleaning. fileChosen: 0xc (adjustment disabled) totalUtilization: 1 bestFileUtilization: 0 isProbe: false
    2015-02-03 14:33:32.137 UTC INFO [rg3-rn2] JE: CleanerRun 13 ends on file 0xc probe=false invokedFromDaemon=true finished=true fileDeleted=false nEntriesRead=1129 nINsObsolete=64 nINsCleaned=2 nINsDead=0 nINsMigrated=2 nBINDeltasObsolete=2 nBINDeltasCleaned=0 nBINDeltasDead=0 nBINDeltasMigrated=0 nLNsObsolete=971 nLNsCleaned=88 nLNsDead=0 nLNsMigrated=88 nLNsMarked=0 nLNQueueHits=73 nLNsLocked=0 logSummary=<CleanerLogSummary endFileNumAtLastAdjustment="0xe" initialAdjustments="5" recentLNSizesAndCounts=""> inSummary=<INSummary totalINCount="68" totalINSize="7570" totalBINDeltaCount="2" totalBINDeltaSize="254" obsoleteINCount="66" obsoleteINSize="7029" obsoleteBINDeltaCount="2" obsoleteBINDeltaSize="254"/> estFileSummary=<summary totalCount="2072" totalSize="13069531" totalINCount="68" totalINSize="7570" totalLNCount="1059" totalLNSize="13024352" maxLNSize="102482" obsoleteINCount="66" obsoleteLNCount="971" obsoleteLNSize="12974449" obsoleteLNSizeCounted="971" getObsoleteSize="13019405" getObsoleteINSize="7347" getObsoleteLNSize="12974449" getMaxObsoleteSize="13019405" getMaxObsoleteLNSize="12974449" getAvgObsoleteLNSizeNotCounted="NaN"/> recalcFileSummary=<summary totalCount="2072" totalSize="13069531" totalINCount="68" totalINSize="7570" totalLNCount="1059" totalLNSize="13024352" maxLNSize="0" obsoleteINCount="66" obsoleteLNCount="971" obsoleteLNSize="12974449" obsoleteLNSizeCounted="971" getObsoleteSize="13019405" getObsoleteINSize="7347" getObsoleteLNSize="12974449" getMaxObsoleteSize="13019405" getMaxObsoleteLNSize="12974449" getAvgObsoleteLNSizeNotCounted="NaN"/> lnSizeCorrection=NaN newLnSizeCorrection=NaN estimatedUtilization=0 correctedUtilization=0 recalcUtilization=0 correctionRejected=false
    Log files are not deleted even when empty, as seen using the DbSpace utility:
    java -cp /mam2g/kv-3.2.5/lib/kvstore.jar com.sleepycat.je.util.DbSpace -h /mam2g/data/sn1/u01/rg2-rn1/env
      File    Size (KB)  % Used
    00000000      12743       0
    00000001      12785       0
    00000002      12725       0
    00000003      12719       0
    00000004      12703       0
    00000005      12751       0
    00000006      12795       0
    00000007      12725       0
    00000008      12752       0
    00000009      12720       0
    0000000a      12723       0
    0000000b      12764       0
    0000000c      12715       0
    0000000d      12799       0
    0000000e      12724       1
    0000000f       5717       0
    TOTALS      196867       0
    Here is the configured topology:
    kv-> show topology
    store=MMS-KVstore  numPartitions=90 sequence=106
      zn: id=zn1 name=MAMHA repFactor=3 type=PRIMARY
      sn=[sn1] zn:[id=zn1 name=MAMHA] 192.168.144.11:5000 capacity=3 RUNNING
        [rg1-rn1] RUNNING
                 single-op avg latency=4.414467 ms   multi-op avg latency=0.0 ms
        [rg2-rn1] RUNNING
                 single-op avg latency=1.5962526 ms   multi-op avg latency=0.0 ms
        [rg3-rn1] RUNNING
                 single-op avg latency=1.3068943 ms   multi-op avg latency=0.0 ms
      sn=[sn2] zn:[id=zn1 name=MAMHA] 192.168.144.12:6000 capacity=3 RUNNING
        [rg1-rn2] RUNNING
                 single-op avg latency=1.5670061 ms   multi-op avg latency=0.0 ms
        [rg2-rn2] RUNNING
                 single-op avg latency=8.637241 ms   multi-op avg latency=0.0 ms
        [rg3-rn2] RUNNING
                 single-op avg latency=1.370075 ms   multi-op avg latency=0.0 ms
      sn=[sn3] zn:[id=zn1 name=MAMHA] 192.168.144.35:7000 capacity=3 RUNNING
        [rg1-rn3] RUNNING
                 single-op avg latency=1.4707285 ms   multi-op avg latency=0.0 ms
        [rg2-rn3] RUNNING
                 single-op avg latency=1.5334034 ms   multi-op avg latency=0.0 ms
        [rg3-rn3] RUNNING
                 single-op avg latency=9.05199 ms   multi-op avg latency=0.0 ms
      shard=[rg1] num partitions=30
        [rg1-rn1] sn=sn1
        [rg1-rn2] sn=sn2
        [rg1-rn3] sn=sn3
      shard=[rg2] num partitions=30
        [rg2-rn1] sn=sn1
        [rg2-rn2] sn=sn2
        [rg2-rn3] sn=sn3
      shard=[rg3] num partitions=30
        [rg3-rn1] sn=sn1
        [rg3-rn2] sn=sn2
        [rg3-rn3] sn=sn3
    Why are empty files not deleted by the cleaner? Why are empty log files protected by replication if all the replicas seem to be aligned with the master?
    java -jar /mam2g/kv-3.2.5/lib/kvstore.jar ping -host 192.168.144.11 -port 5000
    Pinging components of store MMS-KVstore based upon topology sequence #106
    Time: 2015-02-03 13:44:57 UTC
    MMS-KVstore comprises 90 partitions and 3 Storage Nodes
    Storage Node [sn1] on 192.168.144.11:5000    Zone: [name=MAMHA id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC  Build id: 7ab4544136f5
            Rep Node [rg1-rn1]      Status: RUNNING,MASTER at sequence number: 24,413 haPort: 5011
            Rep Node [rg2-rn1]      Status: RUNNING,REPLICA at sequence number: 13,277 haPort: 5012
            Rep Node [rg3-rn1]      Status: RUNNING,REPLICA at sequence number: 12,829 haPort: 5013
    Storage Node [sn2] on 192.168.144.12:6000    Zone: [name=MAMHA id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC  Build id: 7ab4544136f5
            Rep Node [rg3-rn2]      Status: RUNNING,REPLICA at sequence number: 12,829 haPort: 6013
            Rep Node [rg2-rn2]      Status: RUNNING,MASTER at sequence number: 13,277 haPort: 6012
            Rep Node [rg1-rn2]      Status: RUNNING,REPLICA at sequence number: 24,413 haPort: 6011
    Storage Node [sn3] on 192.168.144.35:7000    Zone: [name=MAMHA id=zn1 type=PRIMARY]    Status: RUNNING   Ver: 12cR1.3.2.5 2014-12-05 01:47:33 UTC  Build id: 7ab4544136f5
            Rep Node [rg1-rn3]      Status: RUNNING,REPLICA at sequence number: 24,413 haPort: 7011
            Rep Node [rg2-rn3]      Status: RUNNING,REPLICA at sequence number: 13,277 haPort: 7012
            Rep Node [rg3-rn3]      Status: RUNNING,MASTER at sequence number: 12,829 haPort: 7013

    Solved by setting the undocumented parameter "je.rep.minRetainedVLSNs".
    The solution is described in the NoSQL forum thread: Store cleaning policy

  • Getting empty log files with log4j and WebLogic 10.0

    Hi!
    I get empty log files with log4j 1.2.13 and WebLogic 10.0. If I don't run the application in the application server, then the logging works fine.
    The properties file is located in a jar in the LIB folder of the deployed project. If I change the log file name in the properties file, it just creates a new empty file with the new name.
    What could be wrong?
    Thanks!

    I assume that when you change the name of the expected log file in the properties file, the new empty file is that name, correct?
    That means you're at least getting that properties file loaded by log4j, which is a good sign.
    As the file ends up empty, it appears that no logging statements are being executed at a level high enough for the configured threshold. Can you throw in a logger.error() call at a point you're certain is executed?
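    For example, something as small as this (class name and message are arbitrary), called from code you know runs:
        import org.apache.log4j.Logger;

        public class Log4jSmokeTest {
            private static final Logger LOG = Logger.getLogger(Log4jSmokeTest.class);

            public static void ping() {
                // ERROR is almost never filtered out by the configured level
                LOG.error("log4j smoke test - this line should always reach the appender");
            }
        }
    If even that never shows up in the file, the problem is more likely which log4j configuration actually wins on the server classpath than the logger levels themselves.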

  • Data Services 4.0 Designer. Job Execution but empty log file no matter what

    Hi all,
    I am running DS 4.0. When I execute my batch job via Designer, the log window pops up but is blank, i.e. I cannot see any trace messages.
    It doesn't matter if I select "Print all trace messages" in the execution properties.
    The job server is running on a separate server. The only thing I have locally is the Designer.
    If I log into the Data Services Management Console and select the job server, I can see the trace and error logs from the job. So I guess what I need is for this information to show up in my Designer?
    Did I miss a step somewhere?
    I can't find anything in the docs about this.
    thanks

    Awesome, thanks Manoj.
    I found the log file. The relevant lines in it for the last job I ran are:
    (14.0) 05-11-11 16:52:27 (2272:2472) JobServer:  Starting job with command line -PLocaleUTF8 -Utip_coo_ds_admin
                                                    -P+04000000001A030100100000328DE1B2EE700DEF1C33B1277BEAF1FCECF6A9E9B1DA41488E99DA88A384001AA3A9A82E94D2D9BCD2E48FE2068E59414B12E
                                                    48A70A91BCB  -ek********  -G"70dd304a_4918_4d50_bf06_f372fdbd9bb3" -r1000 -T1073745950  -ncollect_cache_stats
                                                    -nCollectCacheSize  -ClusterLevelJOB  -Cmxxx -CaDesigner -Cjxxx -Cp3500 -CtBatch  -LocaleGV
                                                    -BOESxxx.xxx.xxx.xxx -BOEAsecLDAP -BOEUi804716
                                                    -BOEP+04000000001A0301001000003F488EB2F5A1CAB2F098F72D7ED1B05E6B7C81A482A469790953383DD1CDA2C151790E451EF8DBC5241633C1CE01864D93
                                                    72DDA4D16B46E4C6AD -Sxxx.xxx.xxx -NMicrosoft_SQL_Server -Qlocal_repo  coo ds local
                                                    repo_azdzgq4dnuxbm4xeriey1_e" -l"C:\Program Files (x86)\SAP BusinessObjects\Data Services/log/js01/tip coo ds local
                                                    repo_azdzgq4dnuxbm4xeriey1_e/trace_05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3.txt" -z"C:\Program Files
                                                    (x86)\SAP BusinessObjects\Data Services/log/js01/tip coo ds local
                                                    repo_azdzgq4dnuxbm4xeriey1_e/error_05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3.txt" -w"C:\Program Files
                                                    (x86)\SAP BusinessObjects\Data Services/log/js01/tip coo ds local
                                                    repo_azdzgq4dnuxbm4xeriey1_e/monitor_05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3.txt" -Dt05_11_2011_16_52_27_9
                                                    (BODI-850052)
    (14.0) 05-11-11 16:52:27 (2272:2472) JobServer:  StartJob : Job '05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3' with pid '148' is kicked off
                                                    (BODI-850048)
    (14.0) 05-11-11 16:52:28 (2272:2072) JobServer:  Sending notification to <inet:10.165.218.xxx:56511> with message type <4> (BODI-850170)
    (14.0) 05-11-11 16:52:28 (2272:2472) JobServer:  AddChangeInterest: log change interests for <05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3> from client
                                                    <inet:10.165.218.xxx:56511>. (BODI-850003)
    (14.0) 05-11-11 17:02:32 (2272:2472) JobServer:  RemoveChangeInterest: log change interests for <05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3> from client
                                                    <inet:10.165.218.xxx:56511>. (BODI-850003)
    (14.0) 05-11-11 19:57:45 (2272:2468) JobServer:  GetRunningJobs() success. (BODI-850058)
    (14.0) 05-11-11 19:57:45 (2272:2468) JobServer:  PutLastJobs Success.  (BODI-850001)
    (14.0) 05-11-11 19:57:45 (2272:2072) JobServer:  Sending notification to <inet:10.165.218.xxx:56511> with message type <5> (BODI-850170)
    (14.0) 05-11-11 19:57:45 (2272:2472) JobServer:  GetHistoricalLogStatus()  Success. 05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3 (BODI-850001)
    (14.0) 05-11-11 19:57:45 (2272:2472) JobServer:  GetHistoricalLogStatus()  Success. 05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3 (BODI-850001)
    It does not look like I have any errors with respect to connectivity (or any errors at all).
    Please advise on anything you notice from the log file and/or next steps I can take.
    Thanks.

  • Centralized logging producing empty log files searches

    Not sure what I am doing wrong here. I am experimenting with Lync 2013 centralized logging. I started the AlwaysOn scenario, which was off by default. I checked the directories on all 3 of my FE servers and a bunch of ETL files are present, so it is doing something.
    The thing is, no matter how I search, the output (or the log file, if I pipe the output to one) is always empty. I am following the Microsoft documentation on centralized logging, and they make it look so easy. Has anyone had success with this tool? It seems like a nice feature and more convenient than OCSLogger, but it is not producing the correct search results.

    I am quickly finding out that this utility is nothing but a headache. I am getting errors in the Lync Server log telling me the threshold for logging has been reached. I changed CacheFileMaxDiskUsage from 80 to 20. 80%! Seriously, who wants a utility to take 80% of the disk space? Even at 20%, with a 125 GB drive, I should be able to go up to 25 GB. The ETL file was 14 MB and I started getting errors saying the threshold was reached!
    Then I could not stop the scenario. I tried 3 times; either it would keep running or I got some weird error. I finally spelled AlwaysOn with the capitals, as if it were case sensitive, and it worked. This utility is whacked. Maybe I am doing something wrong.
    According to the MS article, CacheFileMaxDiskUsage is defined as the percentage of disk space that can be used by the cache files. So 20 for this value means 20% of 125 GB or, if it is talking about free disk space, 18 GB in my case. Below is the error I am getting; 90,479,939,584 is the amount of free space on the disk. I did the search again and it did work this time. I restarted the agent on all FE servers. If I can get around this threshold error, I think I am in business.
    Lync Server Centralized Logging Service Agent Service reached the local disk usage threshold and no network share is configured
    EtlFileFolder:  c:\temp\tracing - 90,479,939,584 (67.47 %)
    CacheFileLocalMaxDiskUsage: 20 %
    CacheFileLocalFolders:
      c:\temp\tracing - 90,479,939,584 (67.47 %)
    CacheFileNetworkFolder: Not set
    Cause: Lync Server Centralized Logging Service Agent Service will stop tracing when the local disk usage threshold is reached and no network share is configured. Verify CLS configuration using Get-CsCentralizedLoggingConfiguration. Necessary scenarios will
    need to be re-enabled once the more space is made available locally or a network share is configured
    Resolution:
    Free up local disk space, or increase disk usage threshold for CLS, or configure network share with write permissions for Network Service account

  • Discoverer 3i server trace yields empty log file

    Has anyone had issues with setting up a Discoverer 3i trace on the server? Our viewer shows version 3.3.62.02. I have tried Registry entries under 'HKEY_CURRENT_USER\Software\Oracle\Discoverer 3.1' (per the documentation) and also in 'Discoverer 3.3', in case it thought that was the version. I have varied file names and parameters, stopping and restarting the Discoverer server service each time (and sometimes rebooting, just to be sure).
    Each time I enter Viewer, it creates a new 'Discoverer.log' file in 'Winnt/System32' of zero bytes, whether or not I indicate that is supposed to be the file name, but never writes anything into it, whether the workbook works correctly or not. I have done this before on a workstation version with no problems.
    Am I missing something on the server side?
    Thanks,
    Ron


  • Steps to empty SAPDB (MaxDB) log file

    Hello All,
    I am on Red Hat Linux with NW 7.1 CE and SAP MaxDB as the back end. I am trying to log in, but the log area is full. I want to empty the log, but I haven't done any data backup yet. Can anybody guide me on how to proceed with this problem?
    I do have some idea of what to do, like the steps below:
    1. Take a data backup (but I want to skip this step if possible, since this is a QA system and not a production system).
    2. Take a log backup using the same method as the data backup, but with type Log (am I right, or is there something else?).
    3. It will then automatically overwrite the log after log backups.
    Or should I use the following as an alternative? I found it in Note 869267 - FAQ: SAP MaxDB LOG area:
    Can the log area be overwritten cyclically without having to make a log backup?
    Yes, the log area can be automatically overwritten without log backups. Use the DBM command
    util_execute SET LOG AUTO OVERWRITE ON
    to set this status. The behavior of the database corresponds to the DEMO log mode in older versions. With version 7.4.03 and above, this behavior can be set online.
    Log backups are not possible after switching on automatic overwrite. Backup history is broken down and flagged by the abbreviation HISTLOST in the backup history (dbm.knl file). The backup history is restarted when you switch off automatic overwrite without log backups using the command
    util_execute SET LOG AUTO OVERWRITE OFF
    and by creating a complete data backup in the ADMIN or ONLINE status.
    Automatic overwrite of the log area without log backups is NOT suitable for production operation. Since no backup history exists for the following changes in the database, you cannot track transactions in the case of recovery.
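    I guess that, issued through dbmcli, this would look roughly like the following (the database name and the DBM operator user/password are of course specific to the installation):
        dbmcli -d <SID> -u control,<password> util_execute SET LOG AUTO OVERWRITE ON
        dbmcli -d <SID> -u control,<password> util_execute SET LOG AUTO OVERWRITE OFF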
    Any reply will be highly appreciated.
    Thanks
    Mani

    Hello Mani,
    1. Please review the document "Using SAP MaxDB X Server Behind a Firewall" in the MaxDB library:
    http://maxdb.sap.com/doc/7_7/44/bbddac91407006e10000000a155369/content.htm
    "To enable access to X Server (and thus the database) behind a firewall using a client program such as Database Studio, open the necessary ports in your firewall and restrict access to these ports to only those computers that need to access the database."
    Is the database server behind a firewall? If yes, then the X Server port needs to be open. You could restrict access to this port to the computers of your database administrators, for example.
    Is "nq2host" the name of the database server? Could you ping to the server "nq2host" from your machine?
    2. And if the database server and your PC in the local area NetWork you could start the x_server on the database server & connect to the database using the DB studio on your PC, as you already told by Lars.
    See the document "Network Communication" at
    http://maxdb.sap.com/doc/7_7/44/d7c3e72e6338d3e10000000a1553f7/content.htm
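    For example, on the database host you would typically just run
        x_server start
    as the MaxDB software owner, and then check from your PC that the X Server port (7210 by default, unless you changed it) is reachable.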
    Thank you and best regards, Natalia Khlopina

  • Hardening & Keeping Log files in 10.9

    I'm not in IT, but I'm trying to harden our Macs to please a client. I found several hardening tips & guides written for older versions of OS X, but none for 10.9. Does anyone know of a hardening guide written with commands for 10.9?
    Right now I have found a guide written for 10.8 and have been mostly successful implementing it, except for a couple of sticking points.
    They suggested keeping security.log files for 30 days, but I found out that they got rid of security.log and most of its functionality is now in authd.log. I can't figure out how to keep the authd logs for 30 days. Does anyone know how I can set this?
    I also need to keep install.log for 30 days, but I'm not seeing a way to control this in /etc/newsyslog.conf. Does anyone know how to set this as well?
    Does anyone know if the following audit flags should still work: lo,ad,fd,fm,-all?
    I'm trying to keep system.log & appfirewall.log for 30 days as well. I've figured out these have moved from /etc/newsyslog.conf to /etc/asl.conf, but I'm not sure if I've set this correctly. Right now I have added "store_ttl=30" to these 2 lines in asl.conf. Should this work? Is there a better way to do this?
              > system.log mode=0640 format=bsd rotate=seq compress file_max=5M all_max=100M store_ttl=30
              ? [= Facility com.apple.alf.logging] file appfirewall.log file_max=5M all_max=100M store_ttl=30

    Hi Alex...
    Jim,
    Who came up with this solution?
    I got these solutions for creating log files and reconstructing the database from this forum a while back, probably last year sometime.
    Up until recently after doing this, there has been no problem - server runs as it should.
    I dare to say pure luck.
    The reason I do this is because if I don't, the server does NOT automatically create new empty .log files, and when it fills the current log file, it "crashes" with the "unknown mailbox path" displayed for all mailboxes.
    I would think you have some fundamental underlying issue there.
    I assume by "unkown mailbox path" problem you mean a
    corrupt cyrus database?
    Yes, I believe that db corruption is the case...
    You should never, ever manually modify anything inside cyrus' configuration database. This is just a disaster waiting to happen.
    If your database gets regularly corrupted, we need to investigate why. Many possible reasons: related processes crashing, disk failure, power failure/surges and so on.
    Aha!...about a month ago - thinking back to when this problem started - there was a power outage here, over a weekend! The hard drive was "kicked out" of the server box when I returned to work on that Monday....and that's when this problem started!
    I suggest you increase the logging level for a few days and keep an eye on things. Then post log extracts and /etc/imapd.conf and we'll take it from there.
    Alex
    Ok, thanks, will do!
    P.S. Download mailbfr from here:
    http://osx.topicdesk.com/downloads/
    This will allow you to easily rebuild if needed and, most importantly, to do proper backups of your mail services.
    Thanks for that, too. I will check it out and return to this forum with an update in the near future.
    Jim
    Mac OS X (10.3.9)

  • Log file locations

    Where are the OC4J server logs? I have found several empty log files in the j2ee_home\logs directory. How do I configure more log detail? Also, do servlet/JSP logs go somewhere else, i.e. to the Apache logs directory?
    I am using Oracle9iAS Release 2 (9.0.3)
    Thanks,
    Allan

    In an Oracle9iAS environment, also see:
    /ora9ias/opmn/logs
    You could use log4j as well.
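    For example, a minimal log4j.properties that sends everything to a file could look roughly like this (the file path is only an example):
        log4j.rootLogger=DEBUG, file
        log4j.appender.file=org.apache.log4j.FileAppender
        log4j.appender.file.File=/ora9ias/j2ee/home/log/application.log
        log4j.appender.file.layout=org.apache.log4j.PatternLayout
        log4j.appender.file.layout.ConversionPattern=%d %-5p %c - %m%n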
    -Prasad

  • Check Event Alert failed with error - No errors in the log file.

    Hi All,
    I am developing a simple event based alert on PO_HEADERS table. I want to send alerts when a PO is created.
    I did all the steps according to the metalink note How To Send An Email In A Simple Periodic Or Event Alert? [ID 1162153.1]
    When I create the PO, the alert triggers and the Check Event Alert concurrent program runs, but the program completes with an error.
    The output file is empty and the log file shows no errors.
    What can I do here to find out what the problem is? There is also nothing in the Alert Manager - History form. I have set the days to keep to 7.
    Thanks!
    M

    Can you find any details about the error from the "View Detail" button (the same window where you check the log and output files)?
    I found the Workflow logs. I am not sure what I am looking for, but I am not seeing any errors reported.
    The event alert is supposed to send an email, so do you see anything in the logs that could be related?
    Thanks,
    Hussein

  • Teradata fast load log file empty

    Hi all,
    after updating to ODI 11g, the Teradata FastLoad script is not running. The error says to see the log file, but the log is empty.
    Any solution?
    Naseer

    Any solution, please?

  • OES2 SP3 AFP How to empty AFP log file

    Hello All,
    I can find no information on how to empty the AFP log file /var/log/afptcpd/afptcp.log. It has grown to 1.2 GB, and it is very awkward to look for information in such a big file.
    Any ideas ?
    Thank you
    Andreas

    If you get a lot of AFP activity, you're probably best off just setting the log to rotate.
    Create a file under /etc/logrotate.d/ and name it whatever you want.
    Then just enter something like this:
    /var/log/afptcpd/afptcp.log {
        compress
        dateext
        maxage 365
        rotate 99
        size=+4096k
        notifempty
        missingok
        create 644 root root
        postrotate
            /etc/init.d/novell-afptcpd reload
        endscript
    }
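    You can dry-run the new rule before relying on it, for example (assuming you named the file afptcp):
        logrotate -d /etc/logrotate.d/afptcp
    The -d switch only simulates the rotation and prints what would happen, without touching the log.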

  • Odi 11g - IKM SQL to Hyperion Essbase (DATA) log file always empty

    In ODI 11g, when using "IKM SQL to Hyperion Essbase (DATA)" with "LOG_ENABLED" = true,
    only an empty log file is generated.
    Just the "LOG_ERRORS" file (if errors occur) is created.
    Is this just my issue?
    Can someone help me?
    P.S.: I get the same issue with "IKM SQL to Hyperion Planning".
    Thanks in advance, Paolo

    Thanks John for your suggestion.
    Here is the patch: "Patch 10302682: IKM SQL TO PLANNING: LOG FILE IS CREATED BUT NOTHING INSIDE."
    I didn't see any other one for Essbase...
    I will keep checking the support site.
    Paolo

  • Calendar not synchronized - log file empty

    I'm using Desktop Manager 4.5 and I'm trying to synchronize the calendar of an 8310 (firmware 4.5) with Outlook 2003, but DM doesn't do it.
    The preferences are set up and I have mapped the folders, but when I launch the sync task, DM synchronizes my contacts correctly but fails to synchronize the calendar; looking at the log file, I don't find any trace of the activity.
    Can someone help me?
    thanks,
    Enrico

