Thousands of log entries for systemd-tmpfiles-clean.timer on boot

I'm running a 32-bit Arch install as a VMware ESXi 5.1 guest. Whenever the guest boots up, I get several thousand of the following entries in the system log:
Feb 18 12:49:01 squid systemd[1]: systemd-tmpfiles-clean.timer: time change, recalculating next elapse.
The most recent boot had almost 20,000 entries within 5 seconds:
$ sudo journalctl -b | grep systemd-tmpfiles-clean.timer | wc -l
19693
$ sudo journalctl -b | grep systemd-tmpfiles-clean.timer | sed -n '1p;$p'
Feb 18 12:49:01 squid systemd[1]: systemd-tmpfiles-clean.timer: time change, recalculating next elapse.
Feb 18 12:49:06 squid systemd[1]: systemd-tmpfiles-clean.timer: time change, recalculating next elapse.
I've pasted the entry into Google but have not come up with anything helpful.
I have disabled host-guest time sync:
$ vmware-toolbox-cmd timesync status
Disabled
There is an NTP daemon running that syncs time with a single Windows server (which is also a guest on the same ESXi host).
As far as I'm aware there shouldn't be anything else playing with the time, but there's obviously something going on.
Can anyone please help me troubleshoot?
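Before hunting for the culprit, it can help to bucket the messages by timestamp to see whether they arrive in one burst (a single clock step at boot) or keep trickling in. A small sketch; `journal.txt` is a hypothetical file standing in for saved `journalctl -b` output (piping `journalctl -b` in directly works the same way):

```shell
# Sample lines standing in for `journalctl -b` output (hypothetical host "squid"):
cat > journal.txt <<'EOF'
Feb 18 12:49:01 squid systemd[1]: systemd-tmpfiles-clean.timer: time change, recalculating next elapse.
Feb 18 12:49:01 squid systemd[1]: systemd-tmpfiles-clean.timer: time change, recalculating next elapse.
Feb 18 12:49:02 squid systemd[1]: systemd-tmpfiles-clean.timer: time change, recalculating next elapse.
EOF

# Count "time change" messages per journal timestamp (month day HH:MM:SS),
# busiest second first.
grep 'time change, recalculating next elapse' journal.txt \
  | awk '{print $1, $2, $3}' \
  | sort | uniq -c | sort -rn
rm -f journal.txt
```

A burst confined to a few seconds at boot points at a one-off clock step (e.g. the hypervisor or an NTP step on startup) rather than something continuously adjusting the time.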

I've had the same problem and I don't know what's going wrong. But I have a workaround:
If you're booting into a graphical environment you can disable the vmtoolsd service
# systemctl disable vmtoolsd
and add the following line to your ~/.xinitrc:
vmware-user-suid-wrapper
Your ~/.xinitrc will then start the VMware tools for your session instead.
This solved two problems for me:
1. No more messages like you posted in my log file.
2. The virtual machine shuts down promptly (see vmtoolsd not stopping)
Last edited by BertiBoeller (2013-03-14 13:40:21)

Similar Messages

  • [SOLVED] systemd-tmpfiles-clean takes a very long time to run

    I've been having an issue for a while with systemd-tmpfiles-clean.service taking a very long time to run. I've tried to just ignore it, but it's really bothering me now.
    Measuring by running:
    # time systemd-tmpfiles --clean
    systemd-tmpfiles --clean 11.63s user 110.37s system 10% cpu 19:00.67 total
    I don't seem to have anything funky in any tmpfiles.d:
    # ls /usr/lib/tmpfiles.d/* /run/tmpfiles.d/* /etc/tmpfiles.d/* | pacman -Qo -
    ls: cannot access /etc/tmpfiles.d/*: No such file or directory
    error: No package owns /run/tmpfiles.d/kmod.conf
    /usr/lib/tmpfiles.d/gvfsd-fuse-tmpfiles.conf is owned by gvfs 1.20.1-2
    /usr/lib/tmpfiles.d/lastlog.conf is owned by shadow 4.1.5.1-9
    /usr/lib/tmpfiles.d/legacy.conf is owned by systemd 212-3
    /usr/lib/tmpfiles.d/libvirt.conf is owned by libvirt 1.2.4-1
    /usr/lib/tmpfiles.d/lighttpd.conf is owned by lighttpd 1.4.35-1
    /usr/lib/tmpfiles.d/lirc.conf is owned by lirc-utils 1:0.9.0-71
    /usr/lib/tmpfiles.d/mkinitcpio.conf is owned by mkinitcpio 17-1
    /usr/lib/tmpfiles.d/nscd.conf is owned by glibc 2.19-4
    /usr/lib/tmpfiles.d/postgresql.conf is owned by postgresql 9.3.4-1
    /usr/lib/tmpfiles.d/samba.conf is owned by samba 4.1.7-1
    /usr/lib/tmpfiles.d/slapd.conf is owned by openldap 2.4.39-1
    /usr/lib/tmpfiles.d/sudo.conf is owned by sudo 1.8.10.p2-1
    /usr/lib/tmpfiles.d/svnserve.conf is owned by subversion 1.8.8-1
    /usr/lib/tmpfiles.d/systemd.conf is owned by systemd 212-3
    /usr/lib/tmpfiles.d/systemd-nologin.conf is owned by systemd 212-3
    /usr/lib/tmpfiles.d/tmp.conf is owned by systemd 212-3
    /usr/lib/tmpfiles.d/uuidd.conf is owned by util-linux 2.24.1-6
    /usr/lib/tmpfiles.d/x11.conf is owned by systemd 212-3
    How do I debug why it is taking so long? I've looked in man 8 systemd-tmpfiles and on Google, hoping to find some sort of --debug option, but there seems to be none.
    Is it somehow possible to get a list of the directories that it looks at when it runs?
    Does anyone have any suggestions on how else to fix this?
    Anyone else have this issue?
    Thanks,
    Gary
    Last edited by garyvdm (2014-05-08 18:57:43)

    Thank you very much falconindy. SYSTEMD_LOG_LEVEL=debug helped me find my issue.
    The cause of the problem was thousands of directories in /var/tmp/ created by a test suite with a broken clean-up method. systemd-tmpfiles-clean was recursing through these, but not deleting them.
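For anyone hitting the same thing, the shape of the problem is easy to reproduce: a large pile of leftover directories makes the cleaner's recursive walk slow even when nothing gets deleted. A sketch with made-up directory names (the real culprit here was a test suite writing under /var/tmp/):

```shell
# Simulate a test suite that leaves many directories behind, then count them
# the same way you would check /var/tmp before blaming systemd-tmpfiles.
tmp=$(mktemp -d)
for i in $(seq 1 500); do
  mkdir "$tmp/testsuite-leftover-$i"
done
find "$tmp" -mindepth 1 -maxdepth 1 -type d | wc -l   # prints 500
rm -rf "$tmp"
```

Running the cleaner by hand with SYSTEMD_LOG_LEVEL=debug set in the environment (the tip credited to falconindy above) then shows each path it walks.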

  • Thousands of eventlog entries for Diagnosis-PCW

    My event log is filled with thousands of these:
    Counter <#> of instance ({ecebd0ea-e19b-4208-afa0-6e4fd89ddfab}, VCPkg, 1) could not be modified. Error: "Element not found."
    I couldn't find anything relevant by searching. Anyone know what it is?

    Hi,
    PCW means Performance Counters of Windows, which is used to provide information as to how well the operating system or an application, service, or driver is performing. The counter data can help determine system bottlenecks and fine-tune system and application
    performance. The event in your event log should record the actions or errors for Windows performance or diagnostic results of performance.
    Would you please share the full saved log here for our research?
    Please upload the event log (contains the events you mentioned) into OneDrive or similar network drives and post back the shared link:
    How to Save Event Logs
    http://msdn.microsoft.com/en-us/library/gg163107.aspx
    Kate Li
    TechNet Community Support

  • Creating action log entry for incident via SDK in C#

    Hi,
    Does anyone have any example code, or pointer to, of how to add an action log entry (with icon) to an incident? I can't work out what the target for the relationship should be or how to configure it...
    With Thanks,
    Rob

    Anton,
    Thanks for your response! I think the problem may be in how I'm creating "WorkItemMP". In the method below I'm trying to pass in an issue Id parameter to add an action log item to an Issue. 
    How are you creating the  "WorkItemMP"?
    public void UpdateActionLog(string nsId)
    {
        EnterpriseManagementGroup emg1 =
            new EnterpriseManagementGroup("server01.xyx.com");

        ManagementPackClass classIncident = emg1.EntityTypes.GetClass(
            new Guid(SYSTEM_WORKITEM_INCIDENT_CLASSS)); // A604B942-4C7B-2FB2-28DC-61DC6F465C68

        EnterpriseManagementObjectProjection incidentProjection =
            new EnterpriseManagementObjectProjection(emg1, classIncident);

        // System.WorkItem.Incident.Library MP
        ManagementPack WorkItemMP = emg1.ManagementPacks.GetManagementPack(
            new Guid("DD26C521-7C2D-58C0-0980-DAC2DACB0900"));

        CreatableEnterpriseManagementObject cemoIncident =
            new CreatableEnterpriseManagementObject(emg1, classIncident);
        cemoIncident[classIncident, "Id"].Value = nsId;

        ManagementPackClass typeActionLog = emg1.EntityTypes.GetClass(
            "System.WorkItem.TroubleTicket.ActionLog", WorkItemMP);

        CreatableEnterpriseManagementObject objectActionLog =
            new CreatableEnterpriseManagementObject(emg1, typeActionLog);
        objectActionLog[typeActionLog, "Id"].Value = Guid.NewGuid().ToString();
        objectActionLog[typeActionLog, "Description"].Value = "Incident updated via SDK.\n";
        objectActionLog[typeActionLog, "Title"].Value = "Incident updated via SDK";
        objectActionLog[typeActionLog, "EnteredBy"].Value = "Administrator";
        objectActionLog[typeActionLog, "EnteredDate"].Value = DateTime.Now.ToUniversalTime();

        ManagementPackEnumeration enumeration6 =
            WorkItemMP.GetEnumerations().GetItem("System.WorkItem.ActionLogEnum.TaskExecuted");
        objectActionLog[typeActionLog, "ActionType"].Value = enumeration6.Id;

        ManagementPackRelationship relationship2 = emg1.EntityTypes.GetRelationshipClass(
            "System.WorkItem.TroubleTicketHasActionLog", WorkItemMP);

        if (incidentProjection != null)
        {
            incidentProjection.Add(objectActionLog, relationship2.Target);
            incidentProjection.Commit();
        }
    }

  • SSIS log provider for Text files - Clean logs

    I have SQL Server 2012 with package deployment model.
    I'm thinking what is best practise for logging.
    Does SSIS create a new log file each day, or does it always log to the same file forever?
    How do I keep the size of the files in the logging folder under control? Is it a manual process to clean (remove) logs, or is there an automated way to remove logs older than 30 days etc.?
    Kenny_I

    In all SSIS versions the logging to file is an append operation, or create-then-append if the log file does not exist. To remedy the file growth one needs to create a scheduled job.
    Another option is to create a new file in the package http://goo.gl/4c1O3n - this is helpful if you want to get rid of the files older than x days. IMO the easiest way to remove these is through
    Using Robocopy to delete old files from folder
    Arthur My Blog
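On a non-Windows box (or under WSL), the same 30-day retention can be sketched with find instead of Robocopy. The folder name below is made up; on a real SQL Server host you would more likely schedule a PowerShell or Robocopy equivalent:

```shell
# Keep only the last 30 days of text logs: delete *.log files whose
# modification time is more than 30 days old.
mkdir -p ssis-logs
touch -d '40 days ago' ssis-logs/old-run.log   # stand-in for an aged log
touch ssis-logs/todays-run.log                 # recent log, must survive
find ssis-logs -name '*.log' -type f -mtime +30 -delete
ls ssis-logs                                   # only todays-run.log remains
rm -rf ssis-logs
```

Scheduled daily (cron, or a SQL Agent job shelling out), this keeps the logging folder bounded without touching the current file, since appends refresh its modification time.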

  • Log Entries for Terminal Services in Event Viewer?

    Hello
    I wasn't sure exactly where to post this. Answers.microsoft.com directed me here for an answer.
    I'm running Windows 7 Professional 32-bit. It's a standalone PC, not joined to a domain, never configured as a server. I'm puzzled. When I review entries in the Event Viewer, all logon and logoff entries are located in Event Viewer/Applications and Services
    Logs/Microsoft/Windows/Terminal Server/Local Session Manager/Operational. Every logon/logoff event is recorded here, although I have always had Remote Desktop Services disabled in Services. I would think that logon/logoff events would be recorded in
    Applications and Services Logs/Microsoft/Windows/Winlogon. That makes more sense to me. Some of these user entries have Address: LOCAL and some are blank. There have been no major hardware or software changes that might have caused this. The Event Viewer only goes back
    6 months (1 MB) and then it's overwritten. Can anyone explain this to me? Thanks for your help.

    Hi,
    The path Event Viewer/Applications and Services Logs/Microsoft/Windows/Terminal Server/Local Session Manager is used to record Remote Desktop Services activity even though it's disabled.
    Windows logon and logoff activity is recorded in another path: Windows Logs/Security.
    Karen Hu
    TechNet Community Support

  • Excessive 'SecurityServer' log entries for ServerEventAgent after Adaptive Firewall

    Hello all,
    I'm running an OS X Server running 10.8.2. After enabling the Adaptive Firewall last night ( http://support.apple.com/kb/HT5519, http://support.apple.com/kb/TS4418 ), I started noticing a massive number of logs in /var/log/system.log that look like this:
    Jan 11 17:44:59 <hostname> com.apple.SecurityServer[21]: Succeeded authorizing right 'system.privilege.admin'
    by client '/Applications/Server.app/Contents/ServerRoot/usr/libexec/ServerEventAgent' [131] for authorization
    created by '/Applications/Server.app/Contents/ServerRoot/usr/libexec/ServerEventAgent' [131] (2,0)
    Jan 11 17:44:59 <hostname> com.apple.SecurityServer[21]: Succeeded authorizing right 'system.privilege.admin'
    by client '/Library/PrivilegedHelperTools/com.apple.serverd' [71] for authorization created by
    '/Applications/Server.app/Contents/ServerRoot/usr/libexec/ServerEventAgent' [131] (100000,0)
    Does anyone have thoughts on this? They generally come in pairs like above. I've seen other SecurityServer logs while managing the server, but the number of them (and the ServerEventAgent string) has really jumped up after trying to enable the Adaptive Firewall. I'm not even sure the firewall is working at this point, as running hb_summary tells me there have been 0 blocks in the last 24 hours. Yesterday, before trying to enable the AF, the server was blocking login bots every few minutes, so I'm not sure everything is hooked up correctly.
    It should be noted that I had some trouble with the second KB article linked above because I had previously tried using IceFloor to manage the new pf firewall. Apparently IceFloor removes some lines from /etc/pf.anchors/com.apple and doesn't put them back when you uninstall the program. I re-added the two missing lines at the end (with Apple's edits):
    anchor "400.AdaptiveFirewall/*"
    load anchor "400.AdaptiveFirewall" from "/Applications/Server.app/Contents/ServerRoot/private/etc/pf.anchors/400.AdaptiveFirewall"
    Any help would be greatly appreciated!

    Ahhhhhhh...that's gotta be it!
    Um, I mean no, I did not have relations with that application.
    Thanks!

  • Since applying Feb 2013 Sharepoint 2010 CUs - Critical event log entries for Blob cache and missing images

    Hi,
    Since applying the February 2013 SharePoint 2010 updates, we are getting lots of entries in our event logs along the following:
    Category: Content Management / Publishing Cache, Event ID: 5538, Level: Critical
    An error occurred in the blob cache. The exception message was 'The system cannot find the file specified. (Exception from HRESULT: 0x80070002)'
    In pretty much all of these cases, the image/file reported as missing in the ULS logs is not actually in the collaboration site, master page / HTML etc., so the fix needs to go back to the site owner to make the correction to avoid the 404 (if they make it!). This has only started happening, I believe, since the Feb 2013 SP2010 cumulative updates.
    I didn't see this mentioned as a change in the fix list of the February updates, i.e. that it flags up a critical error in our event logs. So with a lot of sites and a lot of missing images, your event log can quickly fill up.
    Obviously you can suppress them in Monitoring -> Web Content Management -> Publishing Cache = None & None, which is not ideal.
    So my question is... are others seeing this, and was a change made by Microsoft to flag a 404 missing image/file as a critical error in the event log when the blob cache is enabled?
    If I log this with MS they will just say "you need to fix up the missing files in the site", but it would be nice to know this had changed beforehand! I also deleted and recreated the blob cache and this made no difference.
    thanks
    Brad

    I'm facing the same error on our SharePoint 2013 farm. We are on the Aug 2013 CU, and if the Dec CU (which is supposed to be the latest) doesn't solve it, then what else could be done?
    Some users started getting the message "Server is busy now try again later" with a correlation id. I looked up ULS with that correlation id and found these errors in addition to hundreds of "Micro Trace Tags (none)" and "forced due to logging gap":
    "GetFileFromUrl: FileNotFoundException when attempting get file Url /favicon.ico The system cannot find the file specified. (Exception from HRESULT: 0x80070002)"
    "Error in blob cache. System.IO.FileNotFoundException: The system cannot find the file specified. (Exception from HRESULT: 0x80070002)"
    "Unable to cache URL /FAVICON.ICO. File was not found"
    Looks like this is a bug and MS hasn't fixed it in the Dec CU.
    "The opinions expressed here represent my own and not those of anybody else"

  • Log Entries for T001W

    The table T001W (Plant Table) has LOG CHANGES ticked in the technical settings. Can I know where it will be stored, as in which LOG table, and how the search key would be formed?

    Hi,
    You can get the log changes through Tcode SCU3.
    Or you can go to the CDHDR/CDPOS tables, where all the changes for that table are recorded.
    Regards,
    Naveen

  • Event ID 62464 - Getting Thousands of Log Entries

    Hello all,
    If your event logs are getting overwhelmed by this informational, there is a way to suppress such loggings.
    Look at the followup post for the solution.

    Here's the solution:
    CAUTION: The solution requires a registry edit so proceed with caution!
    This solution is courtesy of "Your Fairy Godmother" (that is her real forum name)!
    This is a way to suppress those useless informational logs:
    Edit the registry value:
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\Atierecord\eRecordEnable and set it to 0.
    Then reboot.
    Note: You will have to do the above every time you install a new driver (which should not be often).
    This solution was extracted directly from the thread below. All praise for this solution goes to "Your Fairy Godmother."
    http://forums.amd.com/game/messageview.cfm?catid=260&threadid=138421

  • WorkManager entry for max-stuck-thread-time does not override the default

    Hi All,
    I am using WLS 10.0 and have configured the following workmanager entry in the config.xml:
    <work-manager>
    <name>myworkmanager</name>
    <target>AdminServer</target>
    <ignore-stuck-threads>false</ignore-stuck-threads>
    <work-manager-shutdown-trigger>
    <max-stuck-thread-time>60</max-stuck-thread-time>
    <stuck-thread-count>5</stuck-thread-count>
    </work-manager-shutdown-trigger>
    </work-manager>
    I expect the <BEA-000337> message to be logged after the work-manager-shutdown-trigger's max-stuck-thread-time is exceeded (after 60 seconds).
    If more than 5 threads of applications dispatched to this workmanager become stuck for more than 60 sec., additional requests to these applications are responded to with HTTP code 503.
    However, no <BEA-000337> message is written into the server's log file.
    The message is only written to the log file when the stuck-thread-max-time element (e.g. set to 150 secs) of the server's configuration is exceeded.
    Sample:
    ####<09.01.2009 13:31:58 IST> <Error> <WebLogicServer> <appcashd>
    <AdminServerrefs1m1vm1> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default
    (self-tuning)'> <<WLS Kernel>> <> <> <1231504318065>
    <BEA-000337> <[STUCK] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)' has been busy for "151" seconds working on the request "Http Request: /testservlet/sleep", which is more than the configured time (StuckThreadMaxTime) of "150" seconds.
    Stack trace:
    java.lang.Thread.sleep(Native Method)
    at.compny.testservlet.SleepServlet.doGet(SleepServlet.java:61)
    I believe that the <BEA-000337> message should already be logged once the work-manager-shutdown-trigger's max-stuck-thread-time is exceeded (i.e. after 60 seconds).
    I would like to know if this is the default behavior or if we are encountering a bug.
    Any pointers would be appreciated.

    Hi,
    what I was thinking about is that the work manager indeed works, since the next request after the specified number of stuck threads is returned with a 503 error.
    This makes me believe that it works!
    However, the behavior of the server regarding the <stuck-thread-max-time> is not overridden.
    I have specified this value as 60 seconds in the work manager.
    However, the message is only written to the log file when the stuck-thread-max-time element (e.g. set to 150 secs) of the server's configuration is exceeded.
    Any idea if this is the default behavior or if I am encountering a bug?

  • Systemd-tmpfiles and gvfs - permission denied

    systemd-tmpfiles[2346]: stat(/run/user/1000/gvfs) failed: Permission denied
    Can anyone tell me why this is in my journalctl? Sometimes it appears many times, depending on the application I use; other times, once or twice in 5 hours of uptime.

    same here with systemd. From log file:
    localhost systemd-tmpfiles[12898]: stat(/run/user/1000/gvfs) failed: Permission denied
    [gabx@magnolia:1000]$ ls -al
    dr-x------ 2 gabx users 0 Aug 31 21:32 gvfs
    [gabx@magnolia:1000]$ chmod -R u+w gvfs
    [gabx@magnolia:1000]$ ls -al
    dr-x------ 2 gabx users 0 Aug 31 21:32 gvfs
    As you can see, I can't even change the permissions to let user gabx write to the directory.
    EDIT:
    [gabx@magnolia:~]$ systemctl status systemd-tmpfiles-setup.service
    systemd-tmpfiles-setup.service - Recreate Volatile Files and Directories
    Loaded: loaded (/usr/lib/systemd/system/systemd-tmpfiles-setup.service; enabled)
    Active: active (exited) since Fri, 31 Aug 2012 21:32:00 +0200; 46min ago
    Docs: man:tmpfiles.d(5)
    Process: 353 ExecStart=/usr/bin/systemd-tmpfiles --create --remove (code=exited, status=0/SUCCESS)
    CGroup: name=systemd:/system/systemd-tmpfiles-setup.service
    [gabx@magnolia:~]$ sudo systemctl status systemd-tmpfiles-setup.service
    systemd-tmpfiles-setup.service - Recreate Volatile Files and Directories
    Loaded: loaded (/usr/lib/systemd/system/systemd-tmpfiles-setup.service; enabled)
    Active: active (exited) since Fri, 31 Aug 2012 21:32:00 +0200; 47min ago
    Docs: man:tmpfiles.d(5)
    Process: 353 ExecStart=/usr/bin/systemd-tmpfiles --create --remove (code=exited, status=0/SUCCESS)
    CGroup: name=systemd:/system/systemd-tmpfiles-setup.service
    Following the above status, it seems everything is OK, so I am not sure this permission denied is really an issue.
    Last edited by gabx (2012-08-31 23:31:01)

  • Windowserver log entries: kCGErrorIllegalArgument:

    I'm a newbie to Mac & OS X - I'm seeing lots of entries (sample below) in the log which I don't understand.
    Any help appreciated.
    Dec 09 11:18:23 [55] kCGErrorIllegalArgument: CGXSetWindowListTags: Operation on a window 0x2 not owned by caller SecurityAgent
    Dec 09 11:18:23 [55] kCGErrorIllegalArgument: Set a breakpoint at CGErrorBreakpoint() to catch errors as they are returned
    Dec 09 11:18:23 [55] kCGErrorIllegalArgument: CGXOrderWindow: Operation on a window 0x2 not owned by caller SecurityAgent
    Dec 09 11:29:53 [55] kCGErrorIllegalArgument: CGXSetWindowListTags: Operation on a window 0x6 not owned by caller SystemUIServer
    Dec 09 11:44:49 [55] kCGErrorIllegalArgument: CGXSetWindowListTags: Operation on a window 0x6 not owned by caller Tunnelblick

  • Log Entries not sorted in Problem work items

    We have noticed that the Log Entries in all Problem work items appear to be randomly sorted. You can manually sort them by clicking on the column headers.
    Log entries for Service Request and Incident work items are sorted by Created date as default which I guess is how most people would want them.  Has anyone else noticed this or can this be configured locally somehow?
    Thanks

    Hi,
    I checked my lab, and saw that only incident's log entries is sorted by date:
    Log entries for SR and Problem are not sorted:
    And this is hard-coded, if you want to sort them by date, we should click Date Time.
    Regards,
    Yan Li
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]

  • WATERMARK log entries

    Hi guys,
    by any chance, do you know what the IPC-WATERMARK/ICC-WATERMARK log messages are? I googled, and everything points to them being cosmetic, which is not the case, as the SIP-400 stops passing traffic, sometimes even directly connected, until the box is completely reloaded; a module reload does not help. The router is an ISG terminating a few K PPPoE sessions, and is one of the anycast RPs for the access plant. Without a reload it causes a massive 1-2 hour outage.
    Entries keep on logging, even after the multicast sources were stopped.
    Any input would be much appreciated.
    Thank you,
    Elnur
