File system cache on APO for ATP

Hi, aces,
Do you have any recommended percentage of file system cache on AIX for an APO/ATP environment? My system is configured with 20% min / 80% max, but I am not sure whether this is good for APO/ATP.
I suspect the file system cache takes a lot of memory and leaves less memory for APO work processes.
Regards,
Dwight

sar will give you what you need....
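On AIX, for instance, you could start with something like the following to see how much memory the file cache is actually holding, and then cap it. A rough sketch; the values are illustrative, not SAP recommendations:

    # show current file cache usage ("numperm percentage") and the tunables
    vmstat -v
    vmo -a | grep -E "minperm%|maxperm%"
    # sample memory/paging activity every 5 seconds, 10 samples
    sar -r 5 10
    # lower the file cache ceiling (add -p to persist across reboots)
    vmo -o minperm%=5 -o maxperm%=30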

Similar Messages

  • Warming up File System Cache for BDB Performance

    Hi,
    We are using BDB DPL - JE package for our application.
    With our current machine configuration, we have
    1) 64 GB RAM
    2) 40-50 GB -- Berkeley DB Data Size
    To warm up the file system cache, we cat the .jdb files to /dev/null (to minimize disk access), e.g.:
         // Read all .jdb files in the directory. Runtime.exec does not invoke a
         // shell, so the glob and the redirection must go through sh -c:
         p = Runtime.getRuntime().exec(new String[] {"sh", "-c", "cat " + dirPath + "*.jdb > /dev/null 2>&1"});
    Our application checks if new data is available every 15 minutes. If new data is available, it clears all old references and loads the new data, along with cat *.jdb > /dev/null.
    I would like to know whether something like this can be done to improve BDB read performance, and if not, whether there is a better method to warm up the file system cache?
    Thanks,

    We've done a lot of performance testing with how to best utilize memory to maximize BDB performance.
    You'll get the best and most predictable performance by having everything in the DB cache. If the on-disk size of 40-50GB that you mention includes the default 50% utilization, then it should be able to fit. I probably wouldn't use a JVM larger than 56GB and a database cache percentage larger than 80%. But this depends a lot on the size of the keys and values in the database. The larger the keys and values, the closer the DB cache size will be to the on disk size. The preload option that Charles points out can pull everything into the cache to get to peak performance as soon as possible, but depending on your disk subsystem this still might take 30+ minutes.
    If everything does not fit in the DB cache, then your best bet is to devote as much memory as possible to the file system cache. You'll still need a large enough database cache to store the internal nodes of the btree databases. For our application and a dataset of this size, this would mean a JVM of about 5GB and a database cache percentage around 50%.
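    For reference, the database cache percentage mentioned above is usually set through JE's je.properties file (or EnvironmentConfig); a minimal sketch with illustrative values, not recommendations from this thread:
         # je.properties in the environment home directory
         # share of the JVM heap given to the JE database cache
         je.maxMemoryPercent=80
    The JVM heap itself is capped separately, e.g. java -Xmx56g.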
    I would also experiment with using CacheMode.EVICT_LN or even CacheMode.EVICT_BIN to reduce the pressure on the garbage collector. If you have something in the file system cache, you'll get reasonably fast access to it (maybe 25-50% as fast as if it's in the database cache, whereas pulling it from disk is 1-5% as fast), so unless you have very high locality between requests you might not want to put it into the database cache. What we found was that data was pulled in from disk, put into the DB cache, stayed there long enough to be promoted during GC to the old generation, and then was evicted from the DB cache. This long-lived garbage put a lot of strain on the garbage collector and led to very high stop-the-world GC times. If your application doesn't have latency requirements, then this might not matter as much to you. By setting the cache mode for a database to CacheMode.EVICT_LN, you effectively tell BDB not to put the value (leaf node = LN) into the cache.
    Relying on the file system cache is more unpredictable unless you control everything else that happens on the system since it's easy for parts of the BDB database to get evicted. To keep this from happening, I would recommend reading the files more frequently than every 15 minutes. If the files are in the file system cache, then cat'ing them should be fast. (During one test we ran, "cat *.jdb > /dev/null" took 1 minute when the files were on disk, but only 8 seconds when they were in the file system cache.) And if the files are not all in the file system cache, then you want to get them there sooner rather than later. By the way, if you're using Linux, then you can use "echo 1 > /proc/sys/vm/drop_caches" to clear out the file system cache. This might come in handy during testing. Something else to watch out for with ZFS on Solaris is that sequentially reading a large file might not pull it into the file system cache. To prevent the cache from being polluted, it assumes that sequentially reading through a large file doesn't imply that you're going to do a lot of random reads in that file later, so "cat *.jdb > /dev/null" might not pull the files into the ZFS cache.
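    For what it's worth, the warm-up and cache-clear steps described above boil down to something like this on Linux (the data path is illustrative):
         # warm the file system cache by reading all the JE log files
         cat /data/bdb/*.jdb > /dev/null 2>&1
         # as root: drop the page cache before a cold-cache test run
         echo 1 > /proc/sys/vm/drop_caches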
    That sums up our experience with using the file system cache for BDB data, but I don't know how much of it will translate to your application.

  • 888k Error in ULS Logs for File System Cache

    Hello,
    We have a SharePoint 2010 farm in a three-tier architecture with multiple WFEs and APP servers.
    Roughly once a week we will have a number of WFEs seize up and jump to 100% CPU usage. Usually they come in pairs; two servers will jump to 100% at the same time while all the other servers are fine in the 20% - 50% range.
    Corresponding to the 100% CPU spike, the following appear in the ULS logs:
    "File system cache monitor encoutered error, flushing in memory cache: System.IO.InternalBufferOverflowException: Too many changes at once in directory:C:\ProgramData\Microsoft\SharePoint\Config\<GUID>\."
    When these appear, the ULS logs will show hundreds of them back-to-back, flooding the logs.
    I have yet to figure out how to stop these and bring the CPU usage down while an incident is happening, or how to prevent them in the future.
    While the incident is happening, I have tried clearing the configuration cache, shutting the timer jobs down on each server, deleting all the files but config.ini in the folder listed above, changing config.ini to 1, and restarting the timer. The CPU will drop momentarily during this process, but as soon as all the timer jobs are restarted the CPUs jump back to 100% on the same servers.
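    For reference, the reset sequence I follow looks roughly like this from an elevated prompt on each server (the GUID folder varies per farm, and the file is conventionally cache.ini rather than config.ini):
         net stop SPTimerV4
         cd /d C:\ProgramData\Microsoft\SharePoint\Config\<GUID>
         del *.xml
         echo 1 > cache.ini
         net start SPTimerV4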
    This week, as part of my weekly maintenance, I thought I'd be proactive and clear the cache even though the behavior wasn't happening and all CPUs were normal. As soon as I finished, the CPU on two servers that were previously fine jumped to 100% and wouldn't come down. Needless to say, users complain of latency when servers are at 100% CPU.
    So I am frustrated. The only thing I have found that works when the CPUs jump to 100% with these errors is a reboot. Nothing else, including IISReset and stopping/starting the admin and timer job services, works. Being Production systems, reboots during the middle of the day are bad.
    Any ideas? I have scoured the Internet resources on this error and have come up relatively empty-handed. All the articles reference clearing the configuration cache, which, in my instance, does not get rid of these issues, and can even trigger them.
    Thanks,
    Joseph Irvine

    Take a look at http://support.microsoft.com/kb/952167 for the list of recommended exclusions per Microsoft.
    Trevor Seward
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • File system cache performance

    Hi,
    I was wondering if anyone could offer any insight into how to assess the performance of the file system cache. I am interested in things like the hit rate (what % of pages read come from the cache instead of from disk), the amount of data read from the cache over a time span, etc.
    Outside of the ::memstat dcmd for mdb, I cannot seem to find a whole lot about this topic.
    Thanks.

    sar will give you what you need....
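    On Solaris, for example, something like the following should cover both questions; a sketch:
         # buffer cache read/write hit rates (%rcache / %wcache), every 5s, 10 samples
         sar -b 5 10
         # paging activity over the same interval
         sar -g 5 10
         # kernel memory summary, including the cachelist
         echo ::memstat | mdb -k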

  • Windows Embedded Standard File system cache

    Hey, I am new to Windows Embedded.
    I am using Windows Embedded Standard XP, and I am looking for information regarding the cache and the file system in the OS.
    File systems are designed to reduce disk hits. File write operations do not write to disk immediately unless we use the flush API. The flush API makes the system slower, though. The OS, on the other hand, keeps flushing the data in an optimized way.
    We need to know 
    1- How frequently does Windows Embedded Standard flush the data?
    2- How much data does it keep in the file system cache (RAM) before flushing?
    3- Can we change the things mentioned in the above two points using code?

    OK, thank you very much.
    How much data does it keep in the file system cache (RAM) before flushing? How much cache memory do I have in RAM?
    How do we know this?

  • AIX File system caching

    Dear Experts,
    How do I disable file system caching in an AIX environment?
    Thanks in advance.
    Regards,
    Rudi

    > How do I disable file system caching in an AIX environment?
    This depends on the filesystem used.
    Check http://stix.id.au/wiki/Tuning_the_AIX_file_caches
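    For JFS2, for example, file system caching can be bypassed per mount with Direct I/O or Concurrent I/O; a sketch with an illustrative mount point (CIO is generally only appropriate for database files):
         # mount a JFS2 filesystem with Direct I/O (bypasses the file cache)
         mount -o dio /oradata
         # or with Concurrent I/O (Direct I/O plus relaxed inode locking)
         mount -o cio /oradata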
    Markus

  • Oracle cache and File System cache

    On a checkpoint, the Oracle cache is written to disk. But if an Oracle database sits on file system datafiles, it is likely that the data still remains in the file system cache. I don't know how Oracle can keep the data consistent.

    Thanks for your feedback. I am almost clear about this issue now, except for one point that needs to be confirmed: do you mean that on Linux or Unix we can, if required, set "direct to disk" at the OS level, whereas on Windows "direct to disk" is the default and we do not need to set it manually?
    And I have a further question: if a database is stored on a SAN disk, say a volume from a disk array where the array can take block-level snapshots, and we need to implement an online backup of the database, the steps are: alter tablespace begin backup, alter system suspend, take a snapshot of the volume which stores all database files, including datafiles, redo logs, archived redo logs, control file, server parameter file, network parameter files, and password file. Do you think this backup is consistent or not? Please note we do not flush the FS cache before any of these steps. Let's assume the SAN cache is flushed automatically. Can I assume it is consistent because the redo writes are synchronous?
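    For reference, the sequence I mean looks roughly like this (a sketch using a single tablespace USERS for brevity):
         sqlplus / as sysdba
         SQL> ALTER TABLESPACE users BEGIN BACKUP;
         SQL> ALTER SYSTEM SUSPEND;
         -- take the array-level snapshot of the volume here --
         SQL> ALTER SYSTEM RESUME;
         SQL> ALTER TABLESPACE users END BACKUP;
         SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;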

  • Required "/" (root) file system size on UNIX for Solution Manager.

    Hello SAP Gurus,
       I am setting up SAP Solution Manager 3.2 on HP-UX. It is asking me for about 350MB free space on the "/" file system for the Central Instance installation and about 120MB free space on "/" for the Database Instance installation.
       I am installing everything onto a shared disk mounted under /usr/sap. Why does it need free space in the "/" file system? Is there any workaround to get rid of this requirement, as I have very little free space on "/" and I don't want to take the risks involved in increasing its size?
       Are there any SAP recommended sizes for the "/" file system?
       I am stuck in the middle of setting up an SAP landscape on HP-UX (11.23). I searched through the installation documents but couldn't find anything helpful in this regard. It is an urgent requirement, so please let me know any solution or workaround ASAP.
       Any help is greatly appreciated.
    Thanks in advance.
    Regards,
    cvr/

    Hi Vaibhav.
    Normally "canonical path not available for (folder name)" means:
    1. Wrong username/password. Please double-check your credentials.
    2. The resource cannot be linked from the portal server. Please be sure that you can connect to the next ports in windows server from the Unix Server:
    a. NetBIOS Session Service TCP 139 This port is used to connect file shares for example.
    b. TCP 445 The SMB (Server Message Block) protocol is used among other things for file sharing in Windows NT/2000/XP. In windows NT it ran on top of NetBT (NetBIOS over TCP/IP), which used the famous ports 137, 138 (UDP) and 139 (TCP). In Windows 2000/XP/2003, Microsoft added the possibility to run SMB directly over TCP/IP, without the extra layer of NetBT. For this they use TCP port 445.
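    A quick way to verify those ports from the Unix side, assuming a netcat binary is available:
         # test the NetBIOS session service and SMB-over-TCP ports
         nc -vz windows-host 139
         nc -vz windows-host 445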
    I hope these things help somebody.
    Best Regards,
    Jheison A. Urzola H.

  • How to create a 2 digit (gigabyte) file system to be used for 3 digit tiny size files and not running out of inodes

    I have an 83Gb file system that apparently is not full, but it is actually full because I have run out of inodes; I have 10 million files in there. So I want to create another file system that caters for this inode issue. My question is: if I use the
    newfs -T
    option, will it work, given that it is only going to be 90Gb?

    You probably use a UFS file system, right? In that case, use the "-i" option with a value less than 8192. 8192 is the default value for a UFS filesystem between 3 GB and 1 TB in size.
    But if you want to store 10M files, note that multiterabyte UFS is limited to 1M inodes per TB: 64-bit: Support of Multiterabyte UFS File Systems (System Administration Guide: Devices and File Systems).
    In that case, use a ZFS filesystem instead of UFS; it supports 2^48 entries!
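    For UFS, that would look roughly like this (device and nbpi value are illustrative):
         # one inode per 2048 bytes of data space instead of the 8192 default,
         # i.e. roughly four times as many inodes on the same 90Gb
         newfs -i 2048 /dev/rdsk/c0t1d0s0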

  • EFS (Encrypting File System) folder: will it work on a cloned Windows 7 PC?

    Dear All,
      I am using EFS  (Encrypting File System) on a Windows 7 notebook to encrypt a folder.
      I would like to clone this notebook to several other notebooks.
      Will the EFS still work on the cloned PC?

    Hi,
    As I know, after cloning the system the SID may change, so the EFS folder may fail to open.
    But if you have the EFS certificate, you can still use the folder with no issue.
    In theory, EFS can work on the cloned PC; please back up the certificate first.
    Back up Encrypting File System (EFS) certificate:
    http://windows.microsoft.com/en-in/windows/back-up-efs-certificate#1TC=windows-7
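    From an elevated command prompt, the built-in cipher tool can do this backup (the output file name is illustrative; you will be prompted for a password):
         REM export the EFS certificate and private key to a .pfx file
         cipher /x C:\backup\efs-cert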
    Hope it helps.
    Regards,
    Blair Deng
    TechNet Community Support

  • Verifying and setting file system cache parameters

    I have a Solaris 10 system that has 64Gb of memory that is running a Sybase database with raw devices. Based on the output of "echo ::memstat | mdb -k" it looks like I have about 5Gb of memory being chewed up by filesystem caching which is really not a big deal for us. Can anyone point me to the way for changing the default filesystem caching parameters so I can free up some of this memory?
    EDIT: One last thing is that we're using VxVM for this system with all non-system filesystems being VxFS. That's basically just our dump and tempdb filesystems.
    # echo ::memstat | mdb -k
    Page Summary            Pages      MB    %Tot
    Kernel                 424258    3314      5%
    Anon                  7004059   54719     85%
    Exec and libs           21785     170      0%
    Page cache              57433     448      1%
    Free (cachelist)       664030    5187      8%
    Free (freelist)         48494     378      1%
    Total                 8220059   64219
    Physical              8189297   63978

    So, the memory listed under Free (cachelist) is also usable by applications? I thought that memory was dedicated to the filesystem cache, which is really unnecessary for our system. Almost all IO on this system goes through raw devices, and the rest is on VxFS filesystems.
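    (Pages on the cachelist are reclaimable by applications. As for capping the cache itself, one place to look is the segmap_percent tunable in /etc/system, plus the per-filesystem VxFS tunables; a sketch with illustrative values:)
         # /etc/system -- cap segmap at 2% of RAM (default is 12%); needs a reboot
         set segmap_percent=2
         # inspect (and optionally adjust) VxFS caching behavior per filesystem
         vxtunefs /tempdb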

  • Discussion: What file system do you use for external backups and why?

    I just got a brand new Seagate 2TB hdd to act as an external backup.
    I use ext2 with it, because it's USB (sata->usb enclosure) and I'm not likely to connect it directly to a PC anytime soon (also my computer is old and I like to keep RAM usage down as much as possible)...
    What filesystems do you use/prefer for external backups, and why?
    Also, what type of connection do you use (USB, eSATA, etc.) and what size (i.e. larger than a disk-on-key)?

    I used to use ext4 on a 1.5TB HD, but recently switched back to ext3, plus a tiny NTFS partition for the ext2fsd install file. I realized I use this drive a lot more outside of my Linux environment than I thought, and it was a hassle (and also unsafe) to disable driver signatures at Windows boot just to get a dev version of ext2fsd to work.
    pseudonomous wrote:If portability to other operating systems is a concern, then fat32 is, unfortunately, still the best choice, unless you have single files bigger than 4gb, which would probably mean you want to use ntfs or ext2.
    While helping a friend with her external hard drive, I noticed another limitation of fat32: only 65535 entries per directory. She tried to back up her entire NTFS Windows C: drive to a fat32 one. Took us a good while until we figured that out, and I couldn't find much about it on the net.

  • Local persistent caching file system or RDBMS?

    Hello,
    I have a need to cache Oracle BLOB data locally on disk on the client machine. I am running a plain vanilla Java app which connects to Oracle using Type 4 JDBC connectivity.
    My problem: what should I use to cache the data, the file system or an RDBMS? Currently I see about 3 BLOB columns from the same table on the server which need to be locally cached. With a file system caching mechanism, developing a file hierarchy strategy is simple enough, and I have the advantage of not increasing complexity in the client application by not including a local RDBMS. But I have to do the plumbing for data retrieval myself.
    If I use a local RDBMS, then I understand the data retrieval plumbing is not my headache, but I am not sure which lightweight DB would support KBs of column data. Any suggestions for lightweight DBs which are free and cross-platform would be useful.
    Also, are there any known patterns for local disk caching?
    thank you
    Sameer

    Look into http://hsqldb.org/. You can actually bundle it with your app as a jar and run the DB in-process, or you can deploy it as a separate component on your desktop (or whatever it is you are deploying on).

  • Lucreate - "Cannot make file systems for boot environment"

    Hello!
    I'm trying to use LiveUpgrade to upgrade one of "my" Sparc servers from Solaris 10 U5 to Solaris 10 U6. To do that, I first installed the patches listed in Infodoc 72099 (http://sunsolve.sun.com/search/document.do?assetkey=1-9-72099-1) and then installed SUNWlucfg, SUNWlur and SUNWluu from the S10U6 sparc DVD iso. I then did:
    --($ ~)-- time sudo env LC_ALL=C LANG=C PATH=/usr/bin:/bin:/sbin:/usr/sbin:$PATH lucreate -n S10U6_20081207  -m /:/dev/md/dsk/d200:ufs
    Discovering physical storage devices
    Discovering logical storage devices
    Cross referencing storage devices with boot environment configurations
    Determining types of file systems supported
    Validating file system requests
    Preparing logical storage devices
    Preparing physical storage devices
    Configuring physical storage devices
    Configuring logical storage devices
    Analyzing system configuration.
    Comparing source boot environment <d100> file systems with the file
    system(s) you specified for the new boot environment. Determining which
    file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Searching /dev for possible boot environment filesystem devices
    Updating system configuration files.
    The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
    Creating configuration for boot environment <S10U6_20081207>.
    Source boot environment is <d100>.
    Creating boot environment <S10U6_20081207>.
    Creating file systems on boot environment <S10U6_20081207>.
    Creating <ufs> file system for </> in zone <global> on </dev/md/dsk/d200>.
    Mounting file systems for boot environment <S10U6_20081207>.
    Calculating required sizes of file systems for boot environment <S10U6_20081207>.
    ERROR: Cannot make file systems for boot environment <S10U6_20081207>.
    So the problem is:
    ERROR: Cannot make file systems for boot environment <S10U6_20081207>.
    Well - why's that?
    I can do a "newfs /dev/md/dsk/d200" just fine.
    When I try to remove the incomplete S10U6_20081207 BE, I get yet another error :(
    /bin/nawk: can't open file /etc/lu/ICF.2
    source code line number 1
    Boot environment <S10U6_20081207> deleted.
    I get this error consistently (I ran the lucreate many times now).
    lucreate used to work fine, "once upon a time", when I brought the system from S10U4 to S10U5.
    Would anyone maybe have an idea about what's broken there?
    --($ ~)-- LC_ALL=C metastat
    d200: Mirror
        Submirror 0: d20
          State: Okay        
        Pass: 1
        Read option: roundrobin (default)
        Write option: parallel (default)
        Size: 31458321 blocks (15 GB)
    d20: Submirror of d200
        State: Okay        
        Size: 31458321 blocks (15 GB)
        Stripe 0:
            Device     Start Block  Dbase        State Reloc Hot Spare
            c1t1d0s0          0     No            Okay   Yes
    d100: Mirror
        Submirror 0: d10
          State: Okay        
        Pass: 1
        Read option: roundrobin (default)
        Write option: parallel (default)
        Size: 31458321 blocks (15 GB)
    d10: Submirror of d100
        State: Okay        
        Size: 31458321 blocks (15 GB)
        Stripe 0:
            Device     Start Block  Dbase        State Reloc Hot Spare
            c1t0d0s0          0     No            Okay   Yes
    d201: Mirror
        Submirror 0: d21
          State: Okay        
        Submirror 1: d11
          State: Okay        
        Pass: 1
        Read option: roundrobin (default)
        Write option: parallel (default)
        Size: 2097414 blocks (1.0 GB)
    d21: Submirror of d201
        State: Okay        
        Size: 2097414 blocks (1.0 GB)
        Stripe 0:
            Device     Start Block  Dbase        State Reloc Hot Spare
            c1t1d0s1          0     No            Okay   Yes
    d11: Submirror of d201
        State: Okay        
        Size: 2097414 blocks (1.0 GB)
        Stripe 0:
            Device     Start Block  Dbase        State Reloc Hot Spare
            c1t0d0s1          0     No            Okay   Yes
    hsp001: is empty
    Device Relocation Information:
    Device   Reloc  Device ID
    c1t1d0   Yes    id1,sd@THITACHI_DK32EJ-36NC_____434N5641
    c1t0d0   Yes    id1,sd@SSEAGATE_ST336607LSUN36G_3JA659W600007412LQFN
    --($ ~)-- /bin/df -k | grep md
    /dev/md/dsk/d100     15490539 10772770 4562864    71%    /
    Thanks,
    Michael

    Hello.
    (sys01)root# devfsadm -Cv
    (sys01)root#
    To be on the safe side, I even rebooted after having run devfsadm.
    --($ ~)-- sudo env LC_ALL=C LANG=C lustatus
    Boot Environment           Is       Active Active    Can    Copy     
    Name                       Complete Now    On Reboot Delete Status   
    d100                       yes      yes    yes       no     -        
    --($ ~)-- sudo env LC_ALL=C LANG=C lufslist d100
                   boot environment name: d100
                   This boot environment is currently active.
                   This boot environment will be active on next system boot.
    Filesystem              fstype    device size Mounted on          Mount Options
    /dev/md/dsk/d100        ufs       16106660352 /                   logging
    /dev/md/dsk/d201        swap       1073875968 -                   -
    In the rebooted system, I re-did the original lucreate:
    --($ ~)-- time sudo env LC_ALL=C LANG=C PATH=/usr/bin:/bin:/sbin:/usr/sbin:$PATH lucreate -n S10U6_20081207 -m /:/dev/md/dsk/d200:ufs
    Copying.
    Excellent! It now works!
    Thanks a lot,
    Michael

  • (Read-only file system), EXT3-fs error (device sda6) in start_transaction, for my MCS7825I3-K9-CMC1

    I got this problem on my CUCM, system version 7.0.2.20000-5.
    Product Part Number: MCS7825I3-K9-CMC1
    java.io.FileNotFoundException: /var/log/active/platform/log/cli.bin (Read-only file system)
    log4j:ERROR No output stream or file set for the appender named [CLI_LOG].
    EXT3-fs error (device sda6) in start_transaction: Journal has aborted   SeverityMatch - Critical kernel: EXT3-fs error (device sda6) in start_transaction: Journal has aborted   SeverityMatch - Critical kernel: EXT3-fs error (device sda6) in
    Any advice on how to solve this issue?
    Kind Regards
    Mohammed khamis
    java.io.FileNotFoundException: /var/log/active/platform/log/cli.bin (Read-only file system)
            at java.io.RandomAccessFile.open(Native Method)
            at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
        at com.cisco.iptplatform.fappend.ciscoRollingFileAppender.restoreIndex(ciscoRollingFileAppender.java:100)
        at com.cisco.iptplatform.fappend.ciscoRollingFileAppender.setFile(ciscoRollingFileAppender.java:43)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:585)
        at org.apache.log4j.config.PropertySetter.setProperty(PropertySetter.java:196)
        at org.apache.log4j.config.PropertySetter.setProperty(PropertySetter.java:155)
        at org.apache.log4j.xml.DOMConfigurator.setParameter(DOMConfigurator.java:530)
        at org.apache.log4j.xml.DOMConfigurator.parseAppender(DOMConfigurator.java:182)
        at org.apache.log4j.xml.DOMConfigurator.findAppenderByName(DOMConfigurator.java:140)
        at org.apache.log4j.xml.DOMConfigurator.findAppenderByReference(DOMConfigurator.java:153)
        at org.apache.log4j.xml.DOMConfigurator.parseChildrenOfLoggerElement(DOMConfigurator.java:415)
        at org.apache.log4j.xml.DOMConfigurator.parseRoot(DOMConfigurator.java:384)
        at org.apache.log4j.xml.DOMConfigurator.parse(DOMConfigurator.java:783)
        at org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:666)
        at org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:616)
        at org.apache.log4j.xml.DOMConfigurator.doConfigure(DOMConfigurator.java:584)
        at org.apache.log4j.xml.DOMConfigurator.configure(DOMConfigurator.java:687)
        at sdMain.main(sdMain.java:511)
    java.lang.NullPointerException
        at com.cisco.iptplatform.fappend.ciscoRollingFileAppender.updateIndex(ciscoRollingFileAppender.java:117)
        at com.cisco.iptplatform.fappend.ciscoRollingFileAppender.nextFileName(ciscoRollingFileAppender.java:92)
        at com.cisco.iptplatform.fappend.ciscoRollingFileAppender.append(ciscoRollingFileAppender.java:74)
        at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:221)
        at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:57)
        at org.apache.log4j.Category.callAppenders(Category.java:187)
        at org.apache.log4j.Category.forcedLog(Category.java:372)
        at org.apache.log4j.Category.info(Category.java:674)
        at sdMain.main(sdMain.java:525)
    log4j:ERROR No output stream or file set for the appender named [CLI_LOG].
       Welcome to the Platform Command Line Interface
    admin:java.io.FileNotFoundException: /var/log/active/platform/log/cli.bin (Read-only file system):java.io.FileNotFoundException: /var/log/active/platform/log/cli.bin (Read-only file system)

    Hello,
    Thanks, Manish.
    I found many articles from Cisco which say that this error is related to a bug that needs a firmware upgrade, but I don't have support from Cisco because of EOS.
    https://supportforums.cisco.com/document/49511/updated-16-february-2011-ibm-7816-i4-782x-i4-filesystem-errors
    Solution
    The file-system going read-only issue which has recently been affecting server models MCS-7816-I4, MCS-7825-I4, and MCS-7828-I4 (or their IBM equivalents) in the field is addressed by CSCti52867 - "IBM 7816-I4 and 782x-I4 READONLY file system".
    The fix for CSCti52867 is now available and requires the application of two patch files.  Install both of these patch files in the order listed below.
    1. First install ciscocm.ibm-diskex-1.0.cop.sgn 
         The Readme file ciscocm.ibm-diskex-1.0.cop.sgn includes installation instructions for this .cop.sgn.
         Make sure to only install this utility when show hardware CLI output indicates the array is in a healthy state.
         If your server has never had the filesystem go readonly then this step is optional. 
    2. Next install Cisco-HDD-FWUpdate-3.0.1-I.ISO .
         The Readme file Cisco-HDD-FWUpdate-3.0.1-I.Readme.pdf includes installation instructions for this ISO.
         This installer is completely independent of the OS installed on the server.
    Note:  Installing the FWUpdate v3.0(1) or later will get you firmware with the fix for this defect.  It is always recommended that you apply the latest FWUCD available for your server.
    Refer to the Release Note of CSCti52867 and the Readme file for each of the above mentioned patch files for more details.
    Symptoms
    The file system goes READONLY, then CUCM services may go down, the server may become "unresponsive" meaning that it is not possible to ssh into the server, login to the console, or web into the server although it may still respond to pings.
    Traces from all services stop writing (including syslog)
    You see the following error on the server console
    EXT3-fs error (device sda6) in start_transaction: Journal has aborted
