File index caching

Is there any way to speed up accessing file indexes in GTK2 apps?  Whenever I save or open a file in a GTK2 app, it takes several seconds (10 to 20) to open the file directory.  This is noticeably slower than KDE apps.
It's a pain, since even re-opening recently accessed directories takes just as long.
Is there any way of caching file indexes to prevent this?

My /etc/hosts contains the following, which I believe is correct:
127.0.0.1        localhost.localdomain    localhost
192.168.0.2        XP1800.domain    XP1800
and rc.conf contains:
HOSTNAME="XP1800"

Similar Messages

  • IMAP Dovecot Error - Corrupted index cache 10.6.4

    My CEO reported that he was unable to find a series of emails sent to and received from a specific email address. As the day progressed, we discovered that additional emails were missing.
    When checking the IMAP Mail Logs I noticed a number of emails for his UID had the following error:
    Nov 29 17:01:44 cb1 dovecot[78]: IMAP(*): Maildir filename has wrong W= value, can't fix it automatically: /Volumes/RAID_1/ServiceData/Mail/47263EEC-802E-4CB3-9C95-1651C3ED6341/.INBOX.Product Planning.CDM/cur/1265646497.M883398P65151.my.domain.com,W=12024,S=11768:2,Sa
    Nov 29 22:23:31 cb1 dovecot[558]: IMAP(*): Corrupted index cache file /Volumes/RAID_1/ServiceData/Mail/1F31D1AA-obfuscated/.Non-CB/dovecot.index.cache: Corrupted virtual size for uid=202: 0 != 3018
    Nov 29 22:23:31 cb1 dovecot[558]: IMAP(*): Corrupted index cache file /Volumes/RAID_1/ServiceData/Mail/1F31D1AA-obfuscated/.Non-CB/dovecot.index.cache: Corrupted physical size for uid=202: 0 != 2967
    Any ideas?

    I am not sure if this is the best procedure and there may be unintended consequences, but this is what I did to resolve the issue.
    I stopped the mail service.
    From a shell, I navigated to each directory where an issue was reported.
    In these directories, I renamed the following files, prepending them with "old."
    dovecot.index
    dovecot.index.cache
    dovecot.index.log
    EXAMPLE: mv dovecot.index old.dovecot.index
    I then restarted the mail service. These 3 files were recreated for each IMAP folder on client access.
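
    For a mail store with many affected folders, the same rename can be scripted. This is only a sketch: the root path is taken from the log lines above and may differ on your system, and the mail service should be stopped first, as described.

        #!/bin/sh
        # Prepend "old." to the dovecot index files under every IMAP folder
        MAILROOT="/Volumes/RAID_1/ServiceData/Mail"   # assumed root, from the log excerpt above
        find "$MAILROOT" -type f \
            \( -name 'dovecot.index' -o -name 'dovecot.index.cache' -o -name 'dovecot.index.log' \) |
        while read -r f; do
            mv "$f" "$(dirname "$f")/old.$(basename "$f")"
        done

    As in the manual procedure, Dovecot recreates the three files for each folder on the next client access.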

  • Index cache sizing

    Hi,
    I'm using a BSO cube, with the .ind file size being 1.92 GB.
    Can anyone please suggest how large I should set my index cache?

    Please show the output of these ESSCMD commands:
    select "YOUR_APP" "YOUR_CUBE";
    GETDBINFO;
    GETDBSTATE;
    GETDBSTATS;
    GETAPPINFO;
    GETAPPSTATE;
    Also show your essbase.cfg and your aggregation script.
    P.S. From Chapter 52, "Optimizing Essbase Caches":
    Index cache size setting - the combined size of all essn.ind files, if possible; otherwise, as large as possible. Do not set this cache size higher than the total index size, because no performance improvement results.
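
    If it helps, the commands above can be run non-interactively by putting them in an ESSCMD script. A minimal sketch, assuming a local server and placeholder credentials (adjust the server, user, password, and app/cube names for your environment):

        # Collect the database statistics needed to size the index cache
        cat > dbinfo.scr <<'EOF'
        LOGIN "localhost" "admin" "password";
        SELECT "YOUR_APP" "YOUR_CUBE";
        GETDBINFO;
        GETDBSTATE;
        GETDBSTATS;
        GETAPPINFO;
        GETAPPSTATE;
        EXIT;
        EOF
        ESSCMD dbinfo.scr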

  • I can't open Pages files from USB and iCloud due to: "The necessary file index.xml is missing".

    I can't open Pages files from USB and iCloud due to: "The necessary file index.xml is missing". The original file was written on an iMac (2014), saved to a USB stick, and then transferred to a MacBook Pro (2009, Yosemite) for editing. Then I shared the files via iCloud, but I am not able to open them on the iMac again. Neither can I use the USB stick to transfer the edited files back. I'm running Pages '09 4.1 (923) on the MacBook and Pages 5.5.2 (2120) on the iMac.
    Håkan

    Pages 5.5.2 is extremely incompatible even with other Macs, let alone the vast majority of PC and MS Word users out there, so it is not a good idea to leap into the fire with both eyes shut.
    After breaking compatibility with the first Pages 5 files, Apple has repeatedly done the same with even the most minor point updates. Apple can't even anticipate its own erratic changes, let alone what everyone else needs.
    Peter

  • ATS files in cache hang system at startup

    We have this inconsistent problem happening on all our Macs (10.4.10). The problem does not happen every day or on consecutive days. Here is what happens. We start the Mac in the morning. When it gets past the startup screen and starts to load the desktop, Finder, etc., it just hangs. What you see is the Spotlight icon in the top right, but the rest of the menu bar does not show up. None of the desktop icons appear either. If you move the mouse towards Spotlight, you see the spinning ball. What we do then is hard-shutdown the Mac, then restart in Safe Boot mode. Once the Mac is running, we clean out the ATS files in the Library caches, then the ATS caches in System/Library. Restart the Mac and everything is normal.
    Somehow this hang is related to the ATS cache files, but we're not sure how it hangs the system.
    Anyone else have this problem? It is causing us growing frustration; some of our operators are leaving their computers running overnight so as not to have to deal with the issue when they turn on the Mac.
    Thanks
    Steve

    Well, if it is working now...
    You could check to see what the version of the Finder is as it is not reported in the crash log. The application is here:
    /System/Library/CoreServices/Finder; the version should be 10.6.7. If you suspect a problem caused by updating to 10.6.6, then I suggest downloading and installing the Combo update.
    bd
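
    Going back to the workaround in the original post: a scripted version of the ATS cache cleanup might look like the following. The exact paths are assumptions for 10.4-era systems; check what ATS items actually exist in those Caches folders before deleting anything, and reboot afterwards so the caches are rebuilt.

        # Remove the ATS font cache folders described above (assumed 10.4 locations)
        sudo rm -rf /Library/Caches/com.apple.ATS
        sudo rm -rf /System/Library/Caches/com.apple.ATS*
        sudo reboot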

  • How to print a file index in mac os 10.9

    How do I print a file index in 10.9?

    I don't have a plain CD, but I tried a CD-RW and the ability to leave it appendable was not available. I'm not sure if that is related to CD-RW or if the feature was dropped in Mavericks.
    Check the More Like This links on the right.
    The process previously was to create an image of the folder that held the things you wanted to burn. Then, in Disk Utility, burn the image to disk and check the box to leave it appendable. This was only available for CDs, not DVDs.
    You might try some other burning software.
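
    For reference, the image-then-burn workflow described above can also be done from the command line. A rough sketch with placeholder paths (whether the disc can be left appendable still depends on the burn options your drive and OS expose):

        # Build a hybrid image from the folder you want on the disc
        hdiutil makehybrid -o ~/Desktop/folder.iso ~/Documents/ToBurn
        # Burn the image to the inserted CD
        hdiutil burn ~/Desktop/folder.iso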

  • How can I see the stored files in cache?

    How can I see the stored files in cache? I need them!
    Thanks for helping me.

    The History menu lets you revisit the page, and from there you can drag any page element out of it.
    How would you sort through thousands (tens of thousands) of files found in a cache?

  • File system cache performance

    Hi,
    I was wondering if anyone could offer any insight into how to assess the performance of the file system cache. I am interested in things like the hit rate (what percentage of pages read are coming from the cache instead of from disk), the amount of data read from the cache over a time span, etc.
    Outside of the ::memstat dcmd for mdb, I cannot seem to find a whole lot about this topic.
    Thanks.

    sar will give you what you need....
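
    On Solaris, for example, sar's buffer activity report is one starting point; %rcache and %wcache are the read and write cache hit percentages (this covers the buffer cache, so treat it as a rough indicator rather than a full page cache picture):

        # Sample buffer activity every 5 seconds, 12 times
        sar -b 5 12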

  • Warming up File System Cache for BDB Performance

    Hi,
    We are using BDB DPL - JE package for our application.
    With our current machine configuration, we have
    1) 64 GB RAM
    2) 40-50 GB -- Berkley DB Data Size
    To warm up the file system cache, we cat the .jdb files to /dev/null (to minimize disk access),
    e.g.:
         // Read all jdb files in the directory; the glob and redirection need a shell, so go through /bin/sh -c
         Process p = Runtime.getRuntime().exec(new String[] {
             "/bin/sh", "-c", "cat " + dirPath + "*.jdb >/dev/null 2>&1" });
    Our application checks whether new data is available every 15 minutes. If new data is available, it clears all old references and loads the new data, again running cat *.jdb > /dev/null.
    I would like to know whether something like this can be done to improve BDB read performance, and if not, whether there is a better method to warm up the file system cache.
    Thanks,

    We've done a lot of performance testing with how to best utilize memory to maximize BDB performance.
    You'll get the best and most predictable performance by having everything in the DB cache. If the on-disk size of 40-50GB that you mention includes the default 50% utilization, then it should be able to fit. I probably wouldn't use a JVM larger than 56GB and a database cache percentage larger than 80%. But this depends a lot on the size of the keys and values in the database. The larger the keys and values, the closer the DB cache size will be to the on disk size. The preload option that Charles points out can pull everything into the cache to get to peak performance as soon as possible, but depending on your disk subsystem this still might take 30+ minutes.
    If everything does not fit in the DB cache, then your best bet is to devote as much memory as possible to the file system cache. You'll still need a large enough database cache to store the internal nodes of the btree databases. For our application and a dataset of this size, this would mean a JVM of about 5GB and a database cache percentage around 50%.
    I would also experiment with using CacheMode.EVICT_LN or even CacheMode.EVICT_BIN to reduce the pressure on the garbage collector. If you have something in the file system cache, you'll get reasonably fast access to it (maybe 25-50% as fast as if it's in the database cache, whereas pulling it from disk is 1-5% as fast), so unless you have very high locality between requests you might not want to put it into the database cache. What we found was that data was pulled in from disk, put into the DB cache, stayed there long enough to be promoted during GC to the old generation, and then was evicted from the DB cache. This long-lived garbage put a lot of strain on the garbage collector, and led to very high stop-the-world GC times. If your application doesn't have latency requirements, then this might not matter as much to you. By setting the cache mode for a database to CacheMode.EVICT_LN, you effectively tell BDB not to put the value (the leaf node, or LN) into the cache.
    Relying on the file system cache is more unpredictable unless you control everything else that happens on the system since it's easy for parts of the BDB database to get evicted. To keep this from happening, I would recommend reading the files more frequently than every 15 minutes. If the files are in the file system cache, then cat'ing them should be fast. (During one test we ran, "cat *.jdb > /dev/null" took 1 minute when the files were on disk, but only 8 seconds when they were in the file system cache.) And if the files are not all in the file system cache, then you want to get them there sooner rather than later. By the way, if you're using Linux, then you can use "echo 1 > /proc/sys/vm/drop_caches" to clear out the file system cache. This might come in handy during testing. Something else to watch out for with ZFS on Solaris is that sequentially reading a large file might not pull it into the file system cache. To prevent the cache from being polluted, it assumes that sequentially reading through a large file doesn't imply that you're going to do a lot of random reads in that file later, so "cat *.jdb > /dev/null" might not pull the files into the ZFS cache.
    That sums up our experience with using the file system cache for BDB data, but I don't know how much of it will translate to your application.
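
    If you do end up relying on the file system cache, one simple way to re-touch the files more often than the 15-minute data check, as suggested above, is a small background loop. The path and interval here are placeholders:

        # Re-read the BDB log files every 5 minutes to keep them warm in the file system cache
        while true; do
            cat /path/to/bdb/env/*.jdb > /dev/null 2>&1
            sleep 300
        done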

  • 888k Error in ULS Logs for File System Cache

    Hello,
    We have a SharePoint 2010 farm in a three-tier architecture with multiple WFEs and APP servers.
    Roughly once a week we will have a number of WFEs seize up and jump to 100% CPU usage. Usually they come in pairs; two servers will jump to 100% at the same time while all the other servers are fine in the 20% - 50% range.
    Corresponding to the 100% CPU spike, the following appear in the ULS logs:
    "File system cache monitor encoutered error, flushing in memory cache: System.IO.InternalBufferOverflowException: Too many changes at once in directory:C:\ProgramData\Microsoft\SharePoint\Config\<GUID>\."
    When these appear, the ULS logs will show hundreds of these entries back-to-back, flooding the logs.
    I have yet to figure out how to stop these and bring the CPU usage down while the incident is happening, and how to prevent them in the future.
    While the incident is happening, I have tried clearing the configuration cache, shutting the timer jobs down on each server, deleting all the files but config.ini in the folder listed above, changing config.ini to 1, and restarting the timer. The CPU will drop momentarily during this process, but as soon as all the timer jobs are restarted the CPUs jump back to 100% on the same servers.
    This week, as part of my weekly maintenance, I thought I'd be proactive and clear the cache even though the behavior wasn't happening and all CPUs were normal. As soon as I finished, the CPU on two servers that were previously fine jumped to 100% and wouldn't come down. Needless to say, users complain of latency when servers are at 100% CPU.
    So I am frustrated. The only thing I have found that works when the CPUs jump to 100% with these errors is a reboot. Nothing else works, including IISReset and stopping/starting the admin and timer job services. Being Production systems, reboots during the middle of the day are bad.
    Any ideas? I have scoured the Internet resources on this error and have come up relatively empty-handed. All the articles reference clearing the configuration cache, which, in my instance, does not get rid of these issues, and can even trigger them.
    Thanks,
    Joseph Irvine

    Take a look at http://support.microsoft.com/kb/952167 for the list of recommended exclusions per Microsoft.
    Trevor Seward

  • File system cache on APO for ATP

    Hi, aces,
    Do you have any recommended percentage of file system cache on AIX for an APO/ATP environment?  My system is configured with a 20% minimum and 80% maximum, but I am not sure if this is good for APO/ATP.
    I suspect the file system cache takes a lot of memory and leaves less memory for APO work processes.
    Regards,
    Dwight

    sar will give you what you need....
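
    The 20% min / 80% max figures above correspond to the AIX VMM tunables minperm% and maxperm%. A quick, read-only way to see how they are set and how much memory file pages are actually using right now (nothing here changes any settings):

        # Current file-cache tunables and actual file-page usage on AIX
        vmo -a | grep -E 'minperm%|maxperm%|maxclient%'
        vmstat -v | grep -i perm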

  • AIX File system caching

    Dear Experts,
    How do I disable file system caching in an AIX environment?
    Thanks in advance.
    Regards,
    Rudi

    > How do I disable file system caching in an AIX environment?
    This depends on the filesystem used.
    Check http://stix.id.au/wiki/Tuning_the_AIX_file_caches
    Markus
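
    As the linked page discusses, one common approach for JFS2 filesystems is to mount with direct or concurrent I/O so that reads and writes bypass the file system cache. A hedged example with a placeholder filesystem name (CIO/DIO have side effects, so check your database vendor's recommendation first):

        # Remount a JFS2 filesystem with concurrent I/O, bypassing file caching
        umount /oracle/data
        mount -o cio /oracle/data
        # Make the option persistent in /etc/filesystems
        chfs -a options=cio /oracle/data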

  • Why does Encore copy the file into Cache?

    It seems that Encore copies the m2v file into ProjectDir\Cache\Media Cache Files\mlf.cache.MACC.
    The file is rather large, nearly 4 GB (about 2 GB for each AC3 file).
    I used to run the previous version, and I don't think it made a copy in the cache.
    Why does Encore do that?
    Any way to make Encore just use the m2v file directly without copying it into Cache?
    Thanks!

    Same problem for me. It is very annoying, because the program itself is very useful, probably the best DVD menu making system ever, but I was trying to set up an audio DVD with my own design. I prepared the sound files, converting them all to AC3 format, but every time it is the same: despite deleting the cache, the program recreates and doubles these unknown large files, taking up an irrationally large amount of space. A normal 4.7 GB DVD project would in this case require 50 GB or more, as I have calculated. More than ten times larger than one DVD. Ridiculous.
    If there's no need to transcode AC3 files, then why should they be stored in the cache time after time, mostly taking up double the space? It should work like CD or DVD burning, where you're offered two options: either create an image file on the hard disk, or burn on the fly, which doesn't take any disk space and is more direct.
    So what should I do now? Look for another program, or surrender to this one and let it eat up more than 50 GB?
    The awful thing is that I don't have time to go looking for other programs.
    But I don't have that amount of disk space either.
    My project must be ready in a few days, but there's no hope of that like this.
    Help me, it's urgent. Thank you.

  • How to Delete Temp files and Cache on MBP 15" Late 2011

    Hi,
    Of late I am experiencing lag on my Mac and it runs a bit slowly. I wanted to get a clear picture of how to clear the unwanted temp files and cache to make my Mac run smoothly. Any other suggestions to make it run fast are welcome and would be of great help.
    Regards,

    The assumption that unwanted temp files and cache are causing poor performance is unjustified. OS X does not maintain unwanted cache files. They exist to increase performance and are updated or deleted as necessary. So are temporary files. Certain font cache files are moved to the Trash during Safe Boot: OS X: What is Safe Boot, Safe Mode? - Apple Support. Do not use any other utility.
    If your Mac is running slowly, determine if the same symptoms occur in Safe Mode.
    Back up your Mac if you have not done so already. To learn how to use Time Machine read Mac Basics: Time Machine backs up your Mac - Apple Support.

  • Anyone noticed issues when UCM contributor data files are indexed in GSA?

    Hi Guys,
    We are using Google search appliance to crawl UCM content (native documents).
    We don't have any issues with search results in this way. We are using dynamic converters to convert these documents into HTML in site studio web sites.
    But we have plans to move to site studio contributor data files (XML format).
    From your experience, has anyone noticed any issues in search results with UCM Site Studio contributor data files indexed by GSA?
    Thank you in advance.

    Hi Don,
    Thanks for the reply. I would discard the first one, because I already built the whole site using XML Data Sets, and the idea from the start was to use my own Atom feed to update the site. But the second one seems like a good choice, but I'm a bit puzzled. I was already using spry:content for the pages that don't index correctly, putting them on an empty <span> tag so that they hid unloaded references... could you elaborate on that second choice, then, please?
    Here's a sample of the code:
    <!--start main content-->
    <div id="secondary-content" spry:detailregion="dsBase" class="wrapper">
      <div id="leftnav" class="frontpage">
        <div spry:state="loading">Loading content. Please wait...</div>
        <ul spry:repeatchildren="dsBase">
          <li class="Frontpage"><span spry:content="{title}"></span></li>
          <li class="subtitleFront">Posted on <span spry:content="{simpleDate}"></span></li>
          <li class="post"><span spry:content="{content}"></span></li>
        </ul>
        <p align="left"><br />
          Click <a href="archive.html">here</a> to see older posts.</p>
      </div>
      <div id="content-right" class="frontpage">
        <div class="wrapper">
          <h4 align="center">Featured art :</h4>
          <p align="center"> </p>
          <p align="center"><a href="gallery.html?row=4"><img src="images/tns/tn-AgainstAllOdds.gif" alt="Against All Odds" width="81" height="160"/></a></p>
          <p align="center">&quot;Against All Odds&quot;<br />
            Tobías Bartolomé</p>
          <p align="center"><a href="gallery.html?row=4">See more art at the gallery!</a></p>
        </div>
      </div>
      <div style="clear: both"></div>
    </div>
    And here's the link for the main page and Google's index for the site:
    http://www.cosmicollective.org/
    http://www.google.com/search?q=site:www.cosmicollective.org&hl=en
    Thanks again!
    Tomas
