Error in ULS Logs for File System Cache

Hello,
We have a SharePoint 2010 farm in a three-tier architecture with multiple WFEs and APP servers.
Roughly once a week we will have a number of WFEs seize up and jump to 100% CPU usage. Usually they come in pairs; two servers will jump to 100% at the same time while all the other servers are fine in the 20% - 50% range.
Corresponding to the 100% CPU spike, the following appear in the ULS logs:
"File system cache monitor encoutered error, flushing in memory cache: System.IO.InternalBufferOverflowException: Too many changes at once in directory:C:\ProgramData\Microsoft\SharePoint\Config\<GUID>\."
When these appear, hundreds of them flood the ULS logs back-to-back.
I have yet to figure out how to stop these and bring CPU usage down while an incident is happening, or how to prevent them in the future.
While the incident is happening, I have tried clearing the configuration cache: shutting down the timer service on each server, deleting all the files except cache.ini in the folder listed above, resetting cache.ini to 1, and restarting the timer service. The CPU will drop momentarily during this process, but as soon as all the timer services are restarted the CPUs jump back to 100% on the same servers.
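In PowerShell terms, the procedure I follow is roughly this sketch (assuming the default ProgramData path and the SharePoint 2010 timer service name, SPTimerV4; the GUID folder name varies per farm):

    # Stop the timer service before touching the cache folder.
    Stop-Service SPTimerV4

    # Locate the GUID config-cache folder (the one containing cache.ini).
    $configDir = Get-ChildItem 'C:\ProgramData\Microsoft\SharePoint\Config' |
        Where-Object { $_.PSIsContainer -and (Test-Path (Join-Path $_.FullName 'cache.ini')) } |
        Select-Object -First 1

    # Delete the cached XML files but keep cache.ini, then reset its counter
    # to 1 so the timer service rebuilds the cache on startup.
    Get-ChildItem $configDir.FullName -Filter '*.xml' | Remove-Item
    Set-Content -Path (Join-Path $configDir.FullName 'cache.ini') -Value '1'

    Start-Service SPTimerV4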
This week, as part of my weekly maintenance, I thought I'd be proactive and clear the cache even though the behavior wasn't occurring and all CPUs were normal. As soon as I finished, the CPUs on two servers that were previously fine jumped to 100% and wouldn't come down. Needless to say, users complain of latency when servers are at 100% CPU.
So I am frustrated. The only thing I have found that works when the CPUs jump to 100% with these errors is a reboot. Nothing else works, including an IISReset and stopping/starting the admin and timer services. These being production systems, reboots in the middle of the day are bad.
Any ideas? I have scoured Internet resources on this error and have come up relatively empty-handed. All the articles reference clearing the configuration cache, which, in my case, does not get rid of these issues, and can even trigger them.
Thanks,
Joseph Irvine

Take a look at http://support.microsoft.com/kb/952167 for Microsoft's list of recommended antivirus exclusions for SharePoint.
Trevor Seward

Similar Messages

  • Search event logs for file system access

    I'm looking to create a script that will allow me to search Windows Server 2012 security event logs for access to specific folders. Ideally it would offer the granularity to search for read-access events (4663) and to specify which users to view. One example would be to show events for drive F:\ where the folder name is JSmith (including subfolders) and the username is not JSmith.
    I've tried something like this, but can't see how to filter.
    Get-EventLog security | ? {$_.Message.contains("F:\JSmith")}

    Is the match exact? How can I use wildcards? How can I exclude events?
    I recommend asking a search engine and doing some initial research. Here's a starter:
    https://technet.microsoft.com/en-us/library/hh849682.aspx
    http://blogs.msdn.com/b/powershell/archive/2009/06/11/windows-event-log-in-powershell-part-ii.aspx
    http://blogs.technet.com/b/ashleymcglone/archive/2013/08/28/powershell-get-winevent-xml-madness-getting-details-from-event-logs.aspx
    http://blogs.technet.com/b/heyscriptingguy/archive/2011/01/24/use-powershell-cmdlet-to-filter-event-log-for-easy-parsing.aspx
    https://richardspowershellblog.wordpress.com/2009/03/08/get-winevent/
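    To make those concrete, here is a minimal sketch of my own (not taken from the links above); it assumes the 4663 events carry ObjectName and SubjectUserName fields in their event data, which is typical for object-access auditing:

        # Read 4663 (object access) events from the Security log, keep hits
        # under F:\JSmith, and exclude activity by JSmith himself.
        # -like answers the wildcard question; -ne handles the exclusion.
        Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4663 } |
            Where-Object {
                $data = @{}
                foreach ($d in ([xml]$_.ToXml()).Event.EventData.Data) {
                    $data[$d.Name] = $d.'#text'
                }
                ($data['ObjectName'] -like 'F:\JSmith\*') -and
                ($data['SubjectUserName'] -ne 'JSmith')
            } |
            Select-Object TimeCreated, Id, Message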

  • [SOLVED] ERROR: Unable to determine the file system type of /dev/root:

    :: Running Hook [udev]
    :: Triggering uevents...done
    Root device '804' doesn't exist.
    Creating root device /dev/root with major 8 and minor 4.
    error: /dev/root: No such device or address
    ERROR: Unable to determine the file system type of /dev/root:
    Either it contains no filesystem, an unknown filesystem,
    or more than one valid file system signature was found.
    Try adding
    rootfstype=your_filesystem_type
    to the kernel command line.
    You are now being dropped into an emergency shell.
    /bin/sh: can't access tty; job control turned off
    [ramfs /]# [ 1.376738] Refined TSC clocksource calibration: 3013.000 MHz.
    [ 1.376775] Switching to clocksource tsc
    That's what I get when I boot my Arch system. It worked fine for quite a while, but suddenly it ran into an error where the SCSI driver module was corrupt. I fixed it by reinstalling util-linux-ng and kernel26; however, I now run into this issue. http://www.pastie.org/2163181 < Link to /var/log/pacman.log for the month of July, just in case. Yes, I bought a new ATI/AMD Radeon HD 5450 this Saturday, but it seemed to work fine until this broke, and it works fine on Ubuntu too, so I suppose we can rule it out.

    Autodetection failed on your first image, in both your previous kernel installs:
    [2011-07-04 16:14] find: `/sys/devices': No such file or directory
    Which means that sysfs was not mounted. You should be able to boot from the fallback image, which does not use autodetect. Figure out why /sys isn't mounted, as well, and fix that.
    This is also a somewhat crappy bug in mkinitcpio that lets you create an autodetect image that's useless. It'll be fixed in the next version of mkinitcpio that makes it to core.

  • Is it possible to control tape library slots 1 - 10 for file system backup

    hi ..
    I am new to OSB; I have just installed and set it up, and I have a question below. I hope an expert can help me.
    env: testing
    RHEL 5.5
    tape library with 20 slots
    file system backup
    1. Is it possible for OSB to use only slots 1 - 10 for file system backup? Amanda can restrict the configuration to slot x - slot y.
    2. How do I label the tapes for slots 1 - 10 with obtool? How do I control which tape OSB auto-loads for the next backup? Where do I check the log that says the next tape is tape-02?
    thanks ..

    hi dcooksey
    How do I use a list for a tape drive... for example, if I want tape drive A to use only slots 1 - 10, from obtool or the web tool?
    Because I am new to backup solutions and OSB (I have always used Ghost or Acronis to clone images), my thinking is as below; please correct me if I am wrong:
    slots 1 - 10 for daily backup
    slots 11 - 16 for full system backup
    slots 17 - 20 reserved (these tapes are used only for a full system backup before applying application upgrade patches)
    daily backup Mon - Fri (2 weeks, no backup on Saturday and Sunday), server application offline
    full system backup Friday (the 1st and 14th on the calendar): every 2 weeks, perform a full system backup after the daily backup has completed
    For an application upgrade: perform a full system backup after the daily backup, then release the server to the application team to perform the upgrade.
    So how do I set up my media families for the above? Is the slot configuration controlled by the media family?
    hope you can help ...

  • Unknown error please see log for details...

    Hi,
    I have been having major issues with not being able to connect to or administer a site, so I deleted the site from the server and uploaded a fresh version, as the error suggested the files were corrupt.
    Now I get the error "Unknown error please see log for details"; the log is as follows:
    Date: 12/22/2006
    LocalTime: 12:36
    Host: ftp.caitlin-labradoodles.co.uk
    Port:
    LoginID: caitlinlauk
    Path: /
    Passive Enabled: false
    ProxyHost: None
    Contribute Alternate Rename: no
    Contribute Optimized: yes
    ======================== Test Results
    ==========================
    NOTE: For more information on FTP server compatibility
    issues, please see
    http://www.macromedia.com/support/contribute/ts/documents/ftp_info.htm
    Login: SUCCESS!
    Changing Directory To: / SUCCESS!
    Directory Listing Test: SUCCESS!
    Make Directory Test: SUCCESS!
    Change Directory Test: SUCCESS!
    Upload a File Test: REMOTE_IO_ERROR - Error reading or
    writing remote resource.
    ----------------------- FTP log from the last operation
    > CWD /mm_diagnose_9qw83
    < 250 Directory successfully changed.
    > PORT 192,168,165,54,207,26
    < 200 PORT command successful. Consider using PASV.
    > TYPE I
    < 200 Switching to Binary mode.
    > STOR upload_test1_reg_.htm
    < 425 Failed to establish connection.
    Cleaning Test Directory: Contribute could not create
    "mm_diagnose" because it already exists. Please remove this folder.
    ----------------------- FTP log from the last operation
    > CWD /
    < 250 Directory successfully changed.
    > RMD mm_diagnose_9qw83
    < 550 Remove directory operation failed.
    > CWD /mm_diagnose_9qw83
    < 250 Directory successfully changed.
    > PORT 192,168,165,54,207,27
    < 200 PORT command successful. Consider using PASV.
    > TYPE A
    < 200 Switching to ASCII mode.
    > LIST
    < 150 Here comes the directory listing.
    < -rw-r--r-- 1 6518 99 0 Dec 22 12:34
    upload_test1_reg_.htm
    < 226 Directory send OK.
    Any suggestions would be great at this stage, as I'm scratching my head with confusion and my customer is getting very irate because she cannot do anything.

    Hi,
    Can you please verify whether the user has read/write/delete/append/create-directory rights as an FTP user on the FTP home directory? If not, grant them and then try creating the connection again.
    Please let me know if it helps.
    Regards,
    Manoj

  • Error in codepage mapping for source system

    Hi Experts,
    I am getting the following error while loading data from ECC, only for the Account COPA datasource:
    Error in codepage mapping for source system. Error text:
    Message no. RSDS_ACCESS013
    Regards,
    Sandeep Sharma

    Hi Sandeep,
    This is an issue related to the IDoc/RFC connections from the source system. You can contact your BASIS team to solve it.
    On their end, they have to check the unprocessed/errored IDocs in BD87.
    They have to run the report RBDAGAIN to reprocess the errored IDocs in the source system that are in status 02.
    This is only a temporary fix; if the error occurs regularly, they have to change the settings in SM59, which is important.
    Refer to the exact note for this: 784381.
    SAP Note 613389 may also be relevant for the error message "Could not find code page for receiving system"; please check the information given in the note and see if you can use it to resolve the problem.
    Regards,
    Marasa.

  • Error in codepage mapping for source system. Error

    Hi BW Experts,
    I am facing the following error:
    Error message: Error in codepage mapping for source system. Error text:
    Details: Inbound Processing ( 1000  Records ) : Errors occurred
                Error in codepage mapping for source system. Error text:
                Update PSA ( 0 Records posted ) : Missing messages
    I repeated the delta and everything worked fine.
    Does anybody know why this error occurs?

    Run RSA13 (for BI 7.0), find the source system you are using for the data transfer, double-click on it, and under the special options select the option I mentioned already.
    Please search SDN; you can find threads related to this one.
    If not, let me know.
    Regards.

  • Error in codepage mapping for source system. Error text:

    guys,
    I'm facing the below issue in prod:
    Error in codepage mapping for source system. Error text:
    Message no. RSDS_ACCESS013
    Collection in source system ended
    Error message during processing in BI
    Diagnosis
    An error occurred in BI while processing the data. The error is documented in an error message.
    System Response
    A caller 01, 02 or equal to or greater than 20 contains an error message.
    Further analysis:
    The error message(s) was (were) sent by:
    Inbound Processing
    Procedure
    Check the error message (pushbutton below the text).
    Select the message in the message dialog box, and look at the long text for further information.
    Follow the instructions in the message.
    need your guidance,
    cheerz,
    raps.

    Hi,
    Kindly go through the following threads for your issue resolution:
    Error in code page mapping for Source System
    Error in source system
    Regards,
    Arpit

  • URM Adapter for File System issue.

    Hi, I am just starting out on using the URM Adapter for File System and I have a few questions about issues I am facing.
    1.     When I try to create multiple searches and map them to Folders/Retention Categories in URM, it does not work. I am able to map one search via one URM source to one Folder/Retention Category (without my custom attribute from question 1). However in Adapter’s Search Preview I am able to perform a search on the documents successfully. Would different searches require different URM sources in Adapter?
    2.     Does the adapter work with other Custom Attributes? I have added an attribute in addition and in the same way as "URMCrawlTimeGMT" is added in Oracle Secure Enterprise Search (I created a custom Document Service and Pipeline to add a metadata value) and in the URM Adapter’s config.properties file and when I create a search in Adapter based on the custom attribute, it does not map the documents into URM. I am however able to search the documents in Adapter’s Search Preview window with the custom attribute displaying correctly.
    Any help with this topic would be really appreciated. Thank you.
    Regards,
    Amar

    Hi Srinath,
    Thanks for the response, as to your questions,
    1. I am not sure how to enable Records Manager in adapter mode. But I am able to login to the Records Manager web page after starting it up through StartManagedWebLogic.cmd URM_server1.
    2. The contents of the file system should be searchable in Records Manager, and should be able to apply retention policies to the documents in the file system, I do not need to have SES, but apparently the adapter needs to have SES as a pre requisite.
    Upon further investigation I found that in the AGENT_DATA table the values being inserted were "User ID" (UA_KEY) and NULL (UA_VALUE), so I just made the UA_VALUE column nullable and was able to get past that step. Is this the wrong approach to fixing the issue?
    Could you please let me know about enabling Records Manager in adapter mode, I am not able to find documentation online, I have been through the Adapter installation and administration guides. Thank you once again.
    Regards,
    Amar

  • Warming up File System Cache for BDB Performance

    Hi,
    We are using BDB DPL - JE package for our application.
    With our current machine configuration, we have
    1) 64 GB RAM
    2) 40-50 GB -- Berkley DB Data Size
    To warm up the file system cache, we cat the .jdb files to /dev/null (to minimize disk access), e.g.:
         // Read all .jdb files in the directory. The glob and the redirection
         // are shell features, so the command is passed through /bin/sh -c;
         // a bare exec(String) would not expand them.
         p = Runtime.getRuntime().exec(new String[] {
                 "/bin/sh", "-c", "cat " + dirPath + "*.jdb > /dev/null 2>&1" });
    Our application checks whether new data is available every 15 minutes. If new data is available, it clears all old references and loads the new data, again running cat *.jdb > /dev/null.
    I would like to know whether something like this can be done to improve BDB read performance, and if not, whether there is a better way to warm up the file system cache?
    Thanks,

    We've done a lot of performance testing with how to best utilize memory to maximize BDB performance.
    You'll get the best and most predictable performance by having everything in the DB cache. If the on-disk size of 40-50GB that you mention includes the default 50% utilization, then it should be able to fit. I probably wouldn't use a JVM larger than 56GB and a database cache percentage larger than 80%. But this depends a lot on the size of the keys and values in the database. The larger the keys and values, the closer the DB cache size will be to the on disk size. The preload option that Charles points out can pull everything into the cache to get to peak performance as soon as possible, but depending on your disk subsystem this still might take 30+ minutes.
    If everything does not fit in the DB cache, then your best bet is to devote as much memory as possible to the file system cache. You'll still need a large enough database cache to store the internal nodes of the btree databases. For our application and a dataset of this size, this would mean a JVM of about 5GB and a database cache percentage around 50%.
    I would also experiment with using CacheMode.EVICT_LN or even CacheMode.EVICT_BIN to reduce the pressure on the garbage collector. If you have something in the file system cache, you'll get reasonably fast access to it (maybe 25-50% as fast as if it's in the database cache, whereas pulling it from disk is 1-5% as fast), so unless you have very high locality between requests you might not want to put it into the database cache. What we found was that data was pulled in from disk, put into the DB cache, stayed there long enough to be promoted during GC to the old generation, and then was evicted from the DB cache. This long-lived garbage put a lot of strain on the garbage collector and led to very high stop-the-world GC times. If your application doesn't have latency requirements, then this might not matter as much to you. By setting the cache mode for a database to CacheMode.EVICT_LN, you effectively tell BDB not to put the value (leaf node = LN) into the cache.
    Relying on the file system cache is more unpredictable unless you control everything else that happens on the system since it's easy for parts of the BDB database to get evicted. To keep this from happening, I would recommend reading the files more frequently than every 15 minutes. If the files are in the file system cache, then cat'ing them should be fast. (During one test we ran, "cat *.jdb > /dev/null" took 1 minute when the files were on disk, but only 8 seconds when they were in the file system cache.) And if the files are not all in the file system cache, then you want to get them there sooner rather than later. By the way, if you're using Linux, then you can use "echo 1 > /proc/sys/vm/drop_caches" to clear out the file system cache. This might come in handy during testing. Something else to watch out for with ZFS on Solaris is that sequentially reading a large file might not pull it into the file system cache. To prevent the cache from being polluted, it assumes that sequentially reading through a large file doesn't imply that you're going to do a lot of random reads in that file later, so "cat *.jdb > /dev/null" might not pull the files into the ZFS cache.
    That sums up our experience with using the file system cache for BDB data, but I don't know how much of it will translate to your application.

  • File system cache on APO for ATP

    Hi, aces,
    Do you have any recommended percentage of file system cache on AIX for an APO/ATP environment? My system was configured with 20% min - 80% max, but I am not sure if this is good for APO/ATP.
    I suspect the file system cache takes a lot of memory and leaves less memory for the APO work processes.
    Regards,
    Dwight

    sar will give you what you need....

  • Error: WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=26

    Hi everyone,
    Today I met a problem: the application cannot connect to the database because the database hangs (I also cannot connect to the database with sqlplus). Checking the alert log, there is only one error:
    WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=26
    This error did not appear just once; it happens every month. I have to restart the server for the database to release all its memory, but I don't think that is a true solution!
    Could you give a recommendation for this?
    Regards.

    The Row Cache is actually the Data Dictionary Cache. It is where definitions from the data dictionary (tablespaces, objects, users etc) are loaded into memory.
    There would be an associated trace file written with the occurrence of this warning.
    See Oracle Support Note 278316.1 for more information
    Hemant K Chitale

  • File system cache performance

    hi.
    i was wondering if anyone could offer any insight into how to assess the performance of the file system cache. I am interested in things like the hit rate (what percentage of pages read come from the cache instead of from disk), the amount of data read from the cache over a time span, etc.
    Outside of the ::memstat dcmd for mdb, I cannot seem to find a whole lot about this topic.
    thanks.
    thanks.

    sar will give you what you need....

  • Windows Embedded Standard File system cache

    Hey, I am new to Windows Embedded.
    I am using Windows Embedded Standard (XP-based), and I am looking for information regarding the cache and file system in the OS.
    File systems are designed to reduce disk hits. File write operations do not go to disk immediately unless the flush API is used, and the flush API makes the system slower; the OS, on the other hand, keeps flushing the data in an optimized way.
    We need to know:
    1. How frequently does Windows Embedded Standard flush the data?
    2. How much data does it keep in the file system cache (RAM) before flushing?
    3. Can we change the things mentioned in the above two points from code?

    OK, thank you very much.
    How much data does it keep in the file system cache (RAM) before flushing? How much cache memory do I have in RAM?
    How do we know this?
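    Not an authoritative answer, but where PowerShell's Get-Counter cmdlet is available, one way to observe the current size of the file system cache is through the Memory performance counters (English-locale counter names shown; older XP-based images may lack this cmdlet):

        # Sample the current and peak size of the Windows file system cache.
        # CookedValue is in bytes; it is converted to megabytes for display.
        Get-Counter '\Memory\Cache Bytes', '\Memory\Cache Bytes Peak' |
            Select-Object -ExpandProperty CounterSamples |
            Select-Object Path, @{ n = 'MB'; e = { [math]::Round($_.CookedValue / 1MB, 1) } }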

  • AIX File system caching

    Dear Experts,
    How to disable file system caching in aix environment?
    Thanks in advance.
    Regards,
    Rudi

    > How to disable file system caching in aix environment?
    This depends on the filesystem used.
    Check http://stix.id.au/wiki/Tuning_the_AIX_file_caches
    Markus

Maybe you are looking for

  • BLOB in Oracle Streams

    Oracle 10.2.0.4: I am new to Oracle streams and just reading docs at this point. I read in http://download.oracle.com/docs/cd/B19306_01/server.102/b14229.pdf doc that BLOB are not supported by streams. I am just looking for basic stream configuration

  • BAPI for PR Creation  and Equipment master updation

    Hi Gurus , Please provide me the function module (BAPI) for the following. 1. Create Purchase Requisition 2. Update/Change Equipment master(Iq02) Thanks Tausif

  • Combo drive noisy/CD playback freezes

    I've had my ibook for about a year and a half. There are one or two CDs i own that my combo drive has never been able to play-- it makes a lot of noise (whirring, like the lens is shifting around a lot) and i just get the spinning beach ball in iTune

  • Upper limits of iTunes library?

    I have a very large music collection -- roughly estimated at over 2,500 cds. I have spent years working (off and on) on digitizing it, and although I don't really know exactly what percentage of it I've managed to digitize, I have about 150 gigs of h

  • Shmmax for 64bit redhat w/16GB Memory

    I'm getting conflicting responses for what should be the correct shmmax setting on a Linux system. I currently have 8GB of memory and about to add another 8GB totaling 16GB, I am then going to make my MAX_SGA_SIZE 10GB. Should I be setting the shmmax