AIX File system caching

Dear Experts,
How can I disable file system caching in an AIX environment?
Thanks in advance.
Regards,
Rudi

> How can I disable file system caching in an AIX environment?
This depends on the filesystem used.
Check http://stix.id.au/wiki/Tuning_the_AIX_file_caches
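For JFS2 two things usually matter: mounting the filesystem so the file cache is bypassed, and capping how much memory the VMM spends on file pages. A rough sketch - the mount point and values below are only examples, so check the wiki above for what fits your AIX level:
    # bypass the file system cache for one filesystem (remount needed)
    mount -o dio /oracle/data               # direct I/O; -o cio (concurrent I/O) is common for databases
    chfs -a options=rw,dio /oracle/data     # make the option persistent in /etc/filesystems
    # steer the VMM away from file caching globally (lru_file_repage applies to AIX 5.3/6.1)
    vmo -p -o minperm%=3 -o maxperm%=90 -o maxclient%=90 -o lru_file_repage=0
    # check current file-cache usage
    vmstat -v | grep -i perm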
Markus

Similar Messages

  • File system cache on APO for ATP

    Hi, aces,
    Do you have any recommended percentage for the file system cache on AIX in an APO/ATP environment?  My system was configured with 20% min and 80% max, but I am not sure whether this is good for APO/ATP.
    I suspect the file system cache takes a lot of memory and leaves less memory for the APO work processes.
    Regards,
    Dwight

    sar will give you what you need....
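    In addition to sar, the AIX commands below show how much memory the file cache actually occupies and how to lower its ceiling; a rough sketch, and the 30% values are only an example, not an SAP recommendation:
        vmstat -v | grep -i perm      # numperm/minperm/maxperm percentages currently in effect
        svmon -G                      # global memory usage, including file pages
        # lower the file-cache ceiling if it squeezes the work processes
        vmo -p -o maxperm%=30 -o maxclient%=30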

  • File system cache performance

    Hi,
    I was wondering if anyone could offer any insight into how to assess the performance of the file system cache. I am interested in things like the hit rate (what percentage of pages read come from the cache instead of from disk), the amount of data read from the cache over a time span, etc.
    Outside of the ::memstat dcmd for mdb, I cannot seem to find a whole lot about this topic.
    Thanks.

    sar will give you what you need....
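    On Solaris the closest standard counters are the sar buffer-cache hit ratios plus the memstat breakdown already mentioned; a small sketch (interval and count values are arbitrary):
        sar -b 5 10                 # %rcache / %wcache = buffer cache read/write hit ratios
        vmstat -p 5 5               # fpi/fpo columns = file system page-ins/outs from disk
        echo ::memstat | mdb -k     # how much memory the page cache currently holds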

  • Warming up File System Cache for BDB Performance

    Hi,
    We are using the BDB DPL - JE package for our application.
    With our current machine configuration, we have
    1) 64 GB RAM
    2) 40-50 GB -- Berkeley DB data size
    To warm up the file system cache, we cat the .jdb files to /dev/null (to minimize disk access), e.g.
         // Read all jdb files in the directory; Runtime.exec() needs a shell to expand the
         // glob and handle the redirection, so the command is wrapped in "sh -c"
         p = Runtime.getRuntime().exec(new String[] { "sh", "-c", "cat " + dirPath + "*.jdb > /dev/null 2>&1" });
    Our application checks every 15 minutes whether new data is available. If new data is available, it clears all old references and loads the new data, again running cat *.jdb > /dev/null.
    I would like to know whether something like this is a reasonable way to improve BDB read performance, and if not, whether there is a better method to warm up the file system cache?
    Thanks,

    We've done a lot of performance testing with how to best utilize memory to maximize BDB performance.
    You'll get the best and most predictable performance by having everything in the DB cache. If the on-disk size of 40-50GB that you mention includes the default 50% utilization, then it should be able to fit. I probably wouldn't use a JVM larger than 56GB and a database cache percentage larger than 80%. But this depends a lot on the size of the keys and values in the database. The larger the keys and values, the closer the DB cache size will be to the on disk size. The preload option that Charles points out can pull everything into the cache to get to peak performance as soon as possible, but depending on your disk subsystem this still might take 30+ minutes.
    If everything does not fit in the DB cache, then your best bet is to devote as much memory as possible to the file system cache. You'll still need a large enough database cache to store the internal nodes of the btree databases. For our application and a dataset of this size, this would mean a JVM of about 5GB and a database cache percentage around 50%.
    I would also experiment with using CacheMode.EVICT_LN or even CacheMode.EVICT_BIN to reduce the pressure on the garbage collector. If you have something in the file system cache, you'll get reasonably fast access to it (maybe 25-50% as fast as if it's in the database cache, whereas pulling it from disk is 1-5% as fast), so unless you have very high locality between requests you might not want to put it into the database cache. What we found was that data was pulled in from disk, put into the DB cache, stayed there long enough to be promoted during GC to the old generation, and then it was evicted from the DB cache. This long-lived garbage put a lot of strain on the garbage collector, and led to very high stop-the-world GC times. If your application doesn't have latency requirements, then this might not matter as much to you. By setting the cache mode for a database to CacheMode.EVICT_LN, you effectively tell BDB not to put the value (the leaf node, or LN) into the cache.
    Relying on the file system cache is more unpredictable unless you control everything else that happens on the system since it's easy for parts of the BDB database to get evicted. To keep this from happening, I would recommend reading the files more frequently than every 15 minutes. If the files are in the file system cache, then cat'ing them should be fast. (During one test we ran, "cat *.jdb > /dev/null" took 1 minute when the files were on disk, but only 8 seconds when they were in the file system cache.) And if the files are not all in the file system cache, then you want to get them there sooner rather than later. By the way, if you're using Linux, then you can use "echo 1 > /proc/sys/vm/drop_caches" to clear out the file system cache. This might come in handy during testing. Something else to watch out for with ZFS on Solaris is that sequentially reading a large file might not pull it into the file system cache. To prevent the cache from being polluted, it assumes that sequentially reading through a large file doesn't imply that you're going to do a lot of random reads in that file later, so "cat *.jdb > /dev/null" might not pull the files into the ZFS cache.
    That sums up our experience with using the file system cache for BDB data, but I don't know how much of it will translate to your application.
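    As a concrete illustration of the warm-up and the Linux-only cache-clearing step mentioned above, a small shell sketch; the environment directory below is made up:
        # warm the file system cache by reading every JE log file
        cat /data/bdb-env/*.jdb > /dev/null 2>&1
        # re-read more often than every 15 minutes to keep the pages resident, e.g. from cron:
        # */5 * * * *  cat /data/bdb-env/*.jdb > /dev/null 2>&1
        # during testing on Linux, drop the page cache (as root) to measure cold-cache behaviour
        sync; echo 1 > /proc/sys/vm/drop_caches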

  • 888k Error in ULS Logs for File System Cache

    Hello,
    We have a SharePoint 2010 farm in a three-tier architecture with multiple WFEs and APP servers.
    Roughly once a week we will have a number of WFEs seize up and jump to 100% CPU usage. Usually they come in pairs; two servers will jump to 100% at the same time while all the other servers are fine in the 20% - 50% range.
    Corresponding to the 100% CPU spike, the following appear in the ULS logs:
    "File system cache monitor encoutered error, flushing in memory cache: System.IO.InternalBufferOverflowException: Too many changes at once in directory:C:\ProgramData\Microsoft\SharePoint\Config\<GUID>\."
    When these appear, the ULS logs will show hundreds back-to-back flooding the logs.
    I have yet to figure out how to stop these and bring the CPU usage down while the incident is happening, and how to prevent them in the future.
    While the incident is happening, I have tried clearing the configuration cache, shutting the timer jobs down on each server, deleting all the files but config.ini in the folder listed above, changing config.ini to 1, and restarting the timer. The CPU will drop momentarily during this process, but as soon as all the timer jobs are restarted the CPUs jump back to 100% on the same servers.
    This week, as part of my weekly maintenance, I thought I'd be proactive and clear the cache even though the behavior wasn't happening and all CPUs were normal. As soon as I finished, the CPU on two servers that were previously fine jumped to 100% and wouldn't come down. Needless to say, users complain of latency when servers are at 100% CPU.
    So I am frustrated. The only thing I have found that works when the CPUs jump to 100% with these errors is a reboot. Nothing else, including an IISReset and stopping/starting the admin and timer job services, works. Being production systems, reboots in the middle of the day are bad.
    Any ideas? I have scoured the Internet resources on this error and have come up relatively empty-handed. All the articles reference clearing the configuration cache, which, in my instance, does not get rid of these issues, and can even trigger them.
    Thanks,
    Joseph Irvine

    Take a look at http://support.microsoft.com/kb/952167 for the list of recommended exclusions per Microsoft.
    Trevor Seward
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Help Library on AIX file system

    Dear Experts,
    Users in my company use SAP GUI for Windows as the front-end tool to access the SAP application.
    The application is running on AIX. I had configured the help files on a Windows machine: I shared a directory on that machine for everyone, put the help files there, entered the path in SR13, and it was working fine.
    Now I want to move this help directory to the AIX file system, and I don't know how to make that file system accessible to the SAP GUI.
    Could you please help me with how to use a file system on AIX as a shared folder containing the help files, accessible from the Windows SAP GUI?
    Thanks
    Sherif

    Hello Sherif,
    Please go to transaction FILE. There you will find an entry for NETSCAPE_PATH. This is the path you need to give in SR13. It is actually a logical path which has to be mapped to the physical file path, which is simply the location of your online help files. The mapping is done in FILE itself.
    Next you need to make the settings in SR13 and give this path.
    Additionally check:
    http://www.jt77.com/human2/resources-13782.html
    Also see OSS note 101481.
    Regards.
    Ruchit.
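    If the help files really must sit on the AIX file system and be reached as a Windows share (rather than via a web server URL maintained in SR13), one hedged option is to export the directory with Samba, assuming Samba is installed on the AIX host; the share name and path below are invented:
        # /etc/samba/smb.conf fragment (example values)
        [saphelp]
            path = /usr/sap/helpdata
            read only = yes
            browseable = yes
        # then map \\<aix-host>\saphelp from the Windows PCs and point SR13 at that path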

  • Windows Embedded Standard File system cache

    Hey, I am new to Windows Embedded.
    I am using Windows Embedded Standard (XP-based) and am looking for information about the cache and the file system in the OS.
    File systems are designed to reduce disk hits. File write operations do not go to disk immediately unless we use the flush API, but the flush API makes the system slower. The OS, on the other hand, keeps flushing the data in an optimized way.
    We need to know:
    1- How frequently does Windows Embedded Standard flush the data?
    2- How much data does it keep in the file system cache (RAM) before flushing?
    3- Can we change the behaviour described in the two points above from code?

    OK, thank you very much.
    How much data does it keep in the file system cache (RAM) before flushing? How much cache memory do I have in RAM, and how do we find this out?

  • Oracle cache and File System cache

    At a checkpoint, the Oracle buffer cache is written to disk. But if an Oracle database keeps its datafiles on a file system, it is likely that the data still sits in the file system cache. I don't understand how Oracle can keep the data consistent in that case.

    Thanks for your feedback. I am almost clear about this issue now, except for one point that needs to be confirmed: do you mean that on Linux or UNIX, if required, we can set "direct to disk" at the OS level, whereas on Windows "direct to disk" is the default and we do not need to set it manually?
    And I have a further question: if a database is stored on a SAN disk, say a volume from a disk array, and the disk array can take block-level snapshots of a volume, we want to implement an online backup of the database. The steps are: alter tablespace begin backup, alter system suspend, then take a snapshot of the volume which stores all database files, including datafiles, redo logs, archived redo logs, control file, server parameter file, network parameter files and password file. Do you think this backup is consistent or not? Please note that we do not flush the file system cache before these steps. Let's assume the SAN cache is flushed automatically. Can I assume the backup is consistent because the redo writes are synchronous?
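    For reference, the OS-level "direct to disk" behaviour discussed above is usually switched on either with a direct-I/O mount option or by letting Oracle request it through filesystemio_options; a hedged sketch, the mount points and device names are examples only:
        # Solaris UFS: force direct I/O for a whole filesystem
        mount -o forcedirectio /dev/dsk/c1t0d0s5 /u02/oradata
        # AIX JFS2: concurrent I/O
        mount -o cio /u02/oradata
        # or from inside Oracle (10g syntax, instance restart required):
        #   SQL> alter system set filesystemio_options=SETALL scope=spfile;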

  • AIX File system design

    Dear Friends,
    I am going to install ECC 6.0 ABAP on AIX 5.3.
    I would like to know how I should design the file systems on AIX, and I would also appreciate help with the concepts of volume groups and logical volumes.
    Regards
    Ayush

    Hi,
    Here is a brief overview of VGs (volume groups) and LVs (logical volumes).
    A VG is a collection of PVs (physical volumes): physical hard disks are grouped into a volume group, and one PV can belong to only one VG.
    An LV (logical volume) can reside in only one VG, but it can spread across several disks within that VG.
    File systems are then created on the LVs; on AIX they are of type JFS or JFS2.
    There are a number of rules to consider before designing this structure: which disks have to be mirrored for data security as well as performance, and making sure the mirrored log and original log (mirrlog and origlog) do not end up on the same disk.
    Also, to reduce I/O traffic on the disks, the placement of tablespaces and file systems needs attention when designing the layout.
    Allocation happens at the PP (physical partition) level. LPs and PPs are mapped to each other: the PPs are not contiguous blocks, but with the help of the LPs the allocation is made to appear contiguous. Internally each LP is mapped to one PP (or to several when mirrored).
    VG > LV > FS (logical) ---- LP (logical partition)
    PV (physical)          ---- PP (physical partition)
    You can get more information on designing and configuring the file systems from the IBM Redbooks.
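    A minimal command-level sketch of those layers (disk names, sizes and mount points are invented for illustration):
        # group two physical volumes (disks) into a volume group
        mkvg -y sapvg hdisk2 hdisk3
        # create a JFS2-type logical volume of 100 logical partitions in that VG
        mklv -y sapdatalv -t jfs2 sapvg 100
        # create a JFS2 filesystem on the LV and mount it
        crfs -v jfs2 -d sapdatalv -m /usr/sap/data -A yes
        mount /usr/sap/data
        # inspect the layout
        lsvg -l sapvg
        lslv sapdatalv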
    Thanks.

  • AIX file system

    Hi,
    What command would be used to check the type of the file system on AIX?
    Thx,
    Edited by: nemohm on Jan 6, 2009 11:25 AM

    Check this on an AIX or UNIX forum:
    http://www.ibm.com/developerworks/forums/forum.jspa?forumID=747
    or
    http://www.unix.com/
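    On AIX itself a couple of standard commands show the type directly; for example (the mount point is just an example):
        lsfs                  # lists all defined filesystems with their VFS type (jfs, jfs2, nfs, ...)
        lsfs -q /oracle       # details for one mount point
        mount                 # the "vfs" column of the mounted-filesystem list also shows the type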
    -Arun

  • Verifying and setting file system cache parameters

    I have a Solaris 10 system with 64 GB of memory, running a Sybase database on raw devices. Based on the output of "echo ::memstat | mdb -k" it looks like I have about 5 GB of memory being chewed up by filesystem caching, which is really not a big deal for us. Can anyone point me to the way of changing the default filesystem caching parameters so I can free up some of this memory?
    EDIT: One last thing is that we're using VxVM for this system with all non-system filesystems being VxFS. That's basically just our dump and tempdb filesystems.
    # echo ::memstat | mdb -k
    Page Summary         Pages       MB  %Tot
    Kernel              424258     3314    5%
    Anon               7004059    54719   85%
    Exec and libs        21785      170    0%
    Page cache           57433      448    1%
    Free (cachelist)    664030     5187    8%
    Free (freelist)      48494      378    1%
    Total              8220059    64219
    Physical           8189297    63978
    Edited by: trouphaz on May 10, 2010 12:49 PM

    So, the memory listed under Free (cachelist) is also usable by applications? I thought that memory was dedicated to the filesystem cache, which is really unnecessary for our system. Almost all I/O on this system goes through raw devices and the rest is on VxFS filesystems.
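    Since almost nothing on this box benefits from file caching, the knobs usually involved are the kernel's segmap size and the VxFS caching mount options; a hedged sketch (values and mount point are examples, check the Solaris Tunable Parameters guide and test before touching production):
        # /etc/system: shrink the file-cache mapping segment (reboot required)
        set segmap_percent=2
        # VxFS: bypass the page cache for a given filesystem
        mount -F vxfs -o remount,mincache=direct,convosync=direct /tempdb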

  • Local persistent caching file system or RDBMS?

    Hello,
    I need to cache Oracle BLOB data locally on disk on the client machine. I am running a plain vanilla Java app which connects to Oracle using type 4 JDBC connectivity.
    My problem: what should I use to cache the data, the file system or an RDBMS? Currently I see about 3 BLOB columns from the same table on the server which need to be cached locally. With a file-system caching mechanism, developing a file hierarchy strategy is simple enough, and I have the advantage of not increasing the complexity of the client application by including a local RDBMS, but I have to do the plumbing for data retrieval myself.
    If I use a local RDBMS, I understand that the data-retrieval plumbing is not my headache, but I am not sure which lightweight DB would support columns of this size (KBs of data). Any suggestions for lightweight DBs which are free and cross-platform would be useful.
    Also, are there any known patterns for local disk caching?
    Thank you,
    Sameer

    Look into http://hsqldb.org/. You can actually bundle it up with your app as a jar and run the DB in-process, or you can deploy it as a separate component on your desktop (or whatever it is you are deploying on).
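    A rough sketch of the two modes (file names and paths are invented, and the class and flag names below are from older HSQLDB releases, so check the docs for your version): in-process mode needs nothing more than the jar on the classpath and a file-based JDBC URL, while the standalone server is a single java command.
        # in-process: the app simply opens  jdbc:hsqldb:file:/path/to/cachedb
        # standalone server as a separate desktop component:
        java -cp hsqldb.jar org.hsqldb.Server -database.0 file:cachedb -dbname.0 cache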

  • Raw devices versus Cluster File Systems in RAC 10gR2

    Hi,
    Is anyone using cluster file systems in a RAC 10gR2 installation, specifically IBM's GPFS?
    I've visited a company that is running RAC 10gR2 on AIX over raw devices. Why would someone choose raw devices, with all the problems of administering them, when all the modern file systems are so powerful? Are there any issues when using cluster file systems with RAC? Are there considerable performance benefits when using raw devices with RAC?
    I've always used Oracle stand-alone instances on file systems (since version 7), and performance was always very good. I tested raw devices almost 10 years ago, and even at that time (the hardware today is much better - SAN, 15K rpm disks, huge caches - and the file system software today is much better) the cost of administering them did not compensate for the benefits (only about 5% faster than file systems in Oracle 7).
    So, besides any limitations imposed by RAC, why use raw devices nowadays?
    Regards,
    Antonio Belloni

  • Windows 8.1 File System Performance Down Compared to Windows 7

    I have a good workstation and a fast SSD array as my boot volume. 
    Ever since installing Windows 8.1 I have found the file system performance to be somewhat slower than that of Windows 7.
    There's nothing wrong with my setup - in fact it runs as stably as it did under Windows 7 on the same hardware with a similar configuration. 
    The NTFS file system simply isn't quite as responsive on Windows 8.1.
    For example, under Windows 7 I could open Windows Explorer, navigate to the root folder of C:, select all the files and folders, then choose Properties. The system would count up all the files in all the folders at a rate of about 30,000 files per second the first time, then about 50,000 files per second the next time, when all the file system data was already cached in RAM.
    Windows 8.1 will enumerate roughly 10,000 files per second the first time, and around 18,000 files per second the second time - roughly a 3x slowdown. The reduced speed once the data is cached in RAM implies that something in the operating system is the bottleneck.
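    For anyone who wants to reproduce the comparison, the enumeration itself can be timed from a command prompt; a rough sketch (run it twice to see the cold vs. cached case):
        REM time a full enumeration of C:\ ; the second run measures the cached case
        powershell -Command "Measure-Command { Get-ChildItem C:\ -Recurse -Force -ErrorAction SilentlyContinue | Out-Null }"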
    Not every operation is slower.  I've benchmarked raw disk I/O, and Windows 8.1 can sustain almost the same data rate, though the top speed is a little lower.  For example, Windows 7 vs. 8 comparisons using the ATTO speed benchmark:
    (ATTO benchmark screenshots: Windows 7 vs. Windows 8)
    -Noel
    Detailed how-to in my eBooks:  
    Configure The Windows 7 "To Work" Options
    Configure The Windows 8 "To Work" Options

    No worries, and thanks for your response.
    The problem can be characterized most quickly by the slowdown in enumerating files in folders.  Unfortunately, besides some benchmarks that show only an incremental degradation in file read/write performance, I don't have any good before/after measurements of other actual file operations.
    Since posting the above I have verified:
    My system has 8dot3 support disabled (the same as my Windows 7 setup).
    Core Parking is disabled; CPU benchmarks are roughly equivalent to what they were.
    File system caching is configured the same.
    CHKDSK reports no problems
    C:\TEMP>fsutil fsinfo ntfsInfo C:
    NTFS Volume Serial Number :       0xdc00eddf00edc11e
    NTFS Version   :                  3.1
    LFS Version    :                  2.0
    Number Sectors :                  0x00000000df846fff
    Total Clusters :                  0x000000001bf08dff
    Free Clusters  :                  0x000000000c9c57c5
    Total Reserved :                  0x0000000000001020
    Bytes Per Sector  :               512
    Bytes Per Physical Sector :       512
    Bytes Per Cluster :               4096
    Bytes Per FileRecord Segment    : 1024
    Clusters Per FileRecord Segment : 0
    Mft Valid Data Length :           0x0000000053f00000
    Mft Start Lcn  :                  0x00000000000c0000
    Mft2 Start Lcn :                  0x0000000000000002
    Mft Zone Start :                  0x0000000008ad8180
    Mft Zone End   :                  0x0000000008ade6a0
    Resource Manager Identifier :     2AFD1794-8CEE-11E1-90F4-005056C00008
    C:\TEMP>fsutil fsinfo volumeinfo c:
    Volume Name : C - NoelC4 SSD
    Volume Serial Number : 0xedc11e
    Max Component Length : 255
    File System Name : NTFS
    Is ReadWrite
    Supports Case-sensitive filenames
    Preserves Case of filenames
    Supports Unicode in filenames
    Preserves & Enforces ACL's
    Supports file-based Compression
    Supports Disk Quotas
    Supports Sparse files
    Supports Reparse Points
    Supports Object Identifiers
    Supports Encrypted File System
    Supports Named Streams
    Supports Transactions
    Supports Hard Links
    Supports Extended Attributes
    Supports Open By FileID
    Supports USN Journal
    I am continuing to investigate:
    Whether file system fragmentation could be an issue.  I think not, since I measured the slowdown immediately after installing Windows 8.1.
    All of the settings in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem
    Thank you in advance for any and all suggestions.
    -Noel
    Detailed how-to in my eBooks:  
    Configure The Windows 7 "To Work" Options
    Configure The Windows 8 "To Work" Options

  • Cluster file systems performance issues

    Hi all,
    I've been running a 3-node 10gR2 RAC cluster on Linux using the OCFS2 filesystem for some time as a test environment which is due to go into production.
    Recently I noticed some performance issues when reading from disk, so I did some comparisons, and the results don't seem to make any sense.
    For the purposes of my tests I created a single node instance and created the following tablespaces:
    i) a local filesystem using ext3
    ii) an ext3 filesystem on the SAN
    iii) an OCFS2 filesystem on the SAN
    iv) and a raw device on the SAN.
    I created a similar table with exactly the same data (900,000 rows) in each tablespace and created the same index on each table.
    (I was trying to generate an I/O-intensive select statement, but also one which is realistic for our application.)
    I then ran the same query against each table (making sure to flush the buffer cache between each query execution).
    I checked that the explain plan were the same for all queries (they were) and the physical reads (from an autotrace) were also comparable.
    The results from the ext3 filesystems (both local and SAN) were approx 1 second, whilst the results from OCFS2 and the raw device were between 11 and 19 seconds.
    I have tried this test every day for the past 5 days and the results are always in this ballpark.
    We currently cannot put this environment into production, as queries which read from disk are cripplingly slow...
    I have tried comparing simple file copies from an OS level and the speed differences are not apparent - so the issue only manifests itself when the data is read via an oracle db.
    Judging from this and many other forums, OCFS2 is in quite wide use, so this cannot be an inherent problem with this type of filesystem.
    Also, given the results from my raw device test, I am not sure that moving to ASM would provide any benefits either...
    If anyone has any advice, I'd be very grateful.

    Hi,
    Spontaneously, my question would be: how did you eliminate the influence of the Linux file system cache on ext3? OCFS2 is accessed with the O_DIRECT flag - there will be no caching. The same holds true for raw devices. This could have an influence on your test, and I did not see a configuration step to avoid it.
    What I saw, though, is the "counter test": "I have tried comparing simple file copies from an OS level and the speed differences are not apparent - so the issue only manifests itself when the data is read via an oracle db." - and I have no good answer to that one.
    Maybe this paper has: http://www.oracle.com/technology/tech/linux/pdf/Linux-FS-Performance-Comparison.pdf - it's a bit older, but explains some of the interdependencies.
    Last question: While you spent a lot of effort on proving that this one query is slower on OCFS2 or RAW than on ext3 for the initial read (that's why you flushed the buffer cache before each run), how realistic is this scenario when this system goes into production? I mean, how many times will this query be read completely from disk as opposed to use some block from the buffer? If you consider that, what impact does the "IO read time from disk" have on the overall performance of the system? If you do not isolate the test to just a read, how do writes compare?
    Just some questions. Thanks.
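    For what it's worth, one way to take the ext3 page cache out of the comparison is to flush both caches before every run; a hedged sketch (needs root and a SYSDBA connection, and /proc/sys/vm/drop_caches requires a 2.6.16+ kernel):
        # drop the Linux page cache so the ext3 reads really come from disk
        sync; echo 3 > /proc/sys/vm/drop_caches
        # flush Oracle's own buffer cache before each run (10g and later)
        echo "alter system flush buffer_cache;" | sqlplus -s / as sysdba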
