QFS doesn't update file system space after a failover

When I perform a failover (take down the master node of the QFS device) while a file is being deleted, the space on the file system becomes inconsistent (df shows more used space than du does), and I have to run a file system check on it to get the free space back.

Thanks,
QFS version: 4.6
Sun Cluster version: 3.2
Solaris 10 8/07
The mcf file is:
# Equipment          Eq  Eq   Family  Device  Additional
# Identifier         Ord Type Set     State   Parameters
kml                  1   ms   kml     on      shared
/dev/did/dsk/d7s0    10  md   kml     on
/dev/did/dsk/d8s0    11  md   kml     on
/dev/did/dsk/d9s0    12  md   kml     on
/dev/did/dsk/d10s0   13  md   kml     on
/dev/did/dsk/d11s0   14  md   kml     on
/dev/did/dsk/d12s0   15  md   kml     on
/dev/did/dsk/d13s0   16  md   kml     on
/dev/did/dsk/d14s0   17  md   kml     on
/dev/did/dsk/d15s0   18  md   kml     on
/dev/did/dsk/d16s0   19  md   kml     on
/dev/did/dsk/d21s0   20  md   kml     on
/dev/did/dsk/d22s0   21  md   kml     on
/dev/did/dsk/d23s0   22  md   kml     on
/dev/did/dsk/d24s0   23  md   kml     on
/dev/did/dsk/d25s0   24  md   kml     on
/dev/did/dsk/d26s0   25  md   kml     on
/dev/did/dsk/d27s0   26  md   kml     on
/dev/did/dsk/d28s0   27  md   kml     on
/dev/did/dsk/d29s0   28  md   kml     on
/dev/did/dsk/d30s0   29  md   kml     on
# samfsinfo kml
samfsinfo: filesystem kml is mounted.
name: kml version: 2 shared
time: Thursday, April 10, 2008 4:48:05 PM PYT
count: 20
capacity: 000000003d064400 DAU: 64
space: 000000003d04e480
ord eq capacity space device
0 10 00000000030d1d00 00000000030d0580 /dev/did/dsk/d7s0
1 11 00000000030d1d00 00000000030d1c00 /dev/did/dsk/d8s0
2 12 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d9s0
3 13 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d10s0
4 14 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d11s0
5 15 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d12s0
6 16 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d13s0
7 17 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d14s0
8 18 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d15s0
9 19 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d16s0
10 20 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d21s0
11 21 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d22s0
12 22 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d23s0
13 23 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d24s0
14 24 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d25s0
15 25 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d26s0
16 26 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d27s0
17 27 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d28s0
18 28 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d29s0
19 29 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d30s0
# df -k | grep kml
kml 1023820800 5384 1023815416 1% /oradata_kml1
# clrs status -g qfs-kml-rg +
=== Cluster Resources ===
Resource Name   Node Name   State     Status Message
qfs-kml-rs      m9ka        Online    Online - Service is online.
                m9kb        Offline   Offline
Best regards,
Ivan
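For reference, the file system check mentioned above would normally be done with samfsck; a hedged sketch (run on the metadata server with the file system unmounted; "kml" is the family set name from the mcf above):

```shell
# Check first, then repair; repair mode is what reclaims the orphaned blocks.
samfsck kml        # check only, report inconsistencies
samfsck -F kml     # repair mode
```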

Similar Messages

  • When I download files on MEGA it says that I don't have enough system space, but on MEGA I only have a 128MB file and my disk has 120GB. What's wrong?

    I'm trying to download small files of about 190MB each (at most 1GB in total) with MEGA. The download reaches 100%, but then I get a message that I don't have enough system space. On MEGA I only have around 1GB, and my disk has 120GB. I don't know what to do.

    Hi copycat667,
    Please check your system resources to find out how much space is left for this MEGA download you are having difficulties with.
    I do not completely understand your question, but I will do my best to answer. If you are downloading files from a website called MEGA in Firefox, it may be that the folder it is downloading to has run out of space, or that the website limits how much you can download in a certain time period.
    A similar topic:
    http://forums.mozillazine.org/viewtopic.php?f=7&t=2657825
    MEGA uses a filesystem API to detect the space left: http://developer.chrome.com/apps/fileSystem - this may point you in the right direction.

  • Export a 500GB database to a 100GB file system in Oracle 10g

    Hi All,
    Please let me know the procedure to export a 500GB database to a 100GB file system.

    user533548 wrote:
    Hi Linda,
    The database version is 10g and the OS is Linux. Can we use the FILESIZE parameter for the export? Please advise.
    FILESIZE will limit the size of each file when you specify multiple dump files. You can also specify multiple dump directories (on different file systems) when giving multiple dump files.
    For instance:
    dumpfile=dump_dir1:file1,dump_dir2:file2,dump_dir3:file3...
    Nicolas.
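    Nicolas's suggestion can be sketched as a Data Pump parameter file (all directory object and file names here are hypothetical; each directory object would point at a different file system, so no single 100GB file system has to hold the whole dump):

```
# export.par - hypothetical Data Pump parameter file
# Each dump file is capped at 20G and written to a different directory
# object, spreading the export across several smaller file systems.
DUMPFILE=dump_dir1:file1_%U.dmp,dump_dir2:file2_%U.dmp,dump_dir3:file3_%U.dmp
FILESIZE=20G
FULL=Y
```

    It would be run with something like "expdp system/password parfile=export.par".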

  • ZFS file system space issue

    Hi All,
    Please help me resolve a ZFS file system space issue. I deleted nearly 20 GB, but the reported file system usage remains the same. Please advise.
    Thanks,
    Kumar

    The three reasons I'm aware of that deleting files may not return space are: 1) the file is linked under multiple names; 2) the file is still open by a process; 3) the file is held by a snapshot. While it is possible you deleted 20GB of multiply-linked and/or open files, I'd guess snapshots are the most likely cause.
    For multiple "hard" links, you can see the link count (before you delete the file) in the second field of "ls -l" output. If it is greater than one, deleting the file won't free up space; you have to delete all names linked to the file to free the space.
    For open files, you can use the pfiles command to see which files a process has open. The space won't be recovered until every process holding the file open closes it. Killing the process will do the job; if you like a big hammer, a reboot kills everything.
    For snapshots: use the "zfs list -t snapshot" command and look at the snapshots. The space used by a snapshot indicates how much space it is holding. Deleting a snapshot frees space unless the space is still held by another snapshot. To free the space held by a file, you have to delete every snapshot that contains that file.
    Hopefully, I got all of this right.
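    The hard-link case above is easy to demonstrate anywhere (a minimal sketch; the directory and file names are made up):

```shell
# Create a file, give it a second hard link, then remove the first name.
# The data and its space survive until the last link is gone, which is
# why removing one name may free nothing.
dir=$(mktemp -d)
cd "$dir"
printf 'some data\n' > orig
ln orig copy        # the inode's link count is now 2 (second field of ls -l)
rm orig             # removes a name, not the data
cat copy            # still readable: prints "some data"
cd /
rm -r "$dir"        # removing the last link is what frees the space
```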

  • Problem reducing the root file system space

    Hi All ,
    The root file system has reached 86%. We cleared 1 GB of data under /var, but the root file system still shows 86%. Please note that /var is not a separate file system.
    I have included the df -h output below for reference. Please provide a solution as soon as possible.
    /dev/dsk/c1t0d0s0 2.9G 2.4G 404M 86% /
    /devices 0K 0K 0K 0% /devices
    ctfs 0K 0K 0K 0% /system/contract
    proc 0K 0K 0K 0% /proc
    mnttab 0K 0K 0K 0% /etc/mnttab
    swap 30G 1.0M 30G 1% /etc/svc/volatile
    objfs 0K 0K 0K 0% /system/object
    /dev/dsk/c1t0d0s3 6.7G 3.7G 3.0G 56% /usr
    /platform/SUNW,Sun-Fire-T200/lib/libc_psr/libc_psr_hwcap1.so.1
    2.9G 2.4G 404M 86% /platform/sun4v/lib/libc_psr.so.1
    /platform/SUNW,Sun-Fire-T200/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
    2.9G 2.4G 404M 86% /platform/sun4v/lib/sparcv9/libc_psr.so.1
    fd 0K 0K 0K 0% /dev/fd
    swap 33G 3.5G 30G 11% /tmp
    swap 30G 48K 30G 1% /var/run
    /dev/dsk/c1t0d0s4 45G 30G 15G 67% /www
    /dev/dsk/c1t0d0s5 2.9G 1.1G 1.7G 39% /export/home
    Regards,
    R. Rajesh Kannan.

    I don't know whether the root partition filled up suddenly (and thus was due to deleting an in-use file) or from some other cause. However, I have noticed that VAST amounts of space are used up just through the normal patching process.
    After I installed Solaris 10 11/06, my 12GB root partition was 48% full. Now, about 2 months later, after applying the available patches, it is 53% full. That is about 600 MB taken up by superseded versions of the installed patches. This is ridiculous. I patched using Sun Update Manager, which by default does not use the patchadd -d option (the one that skips backing up old patch versions), so superseded patches accumulate in /var, wasting massive amounts of space.
    Are Solaris users just supposed to put up with this, or is there some other way we should manage patches? It is time-consuming and dangerous to manually clean up old patch versions by using patchrm to delete all revisions of a patch and then patchadd to re-install only the latest one.
    Thank you.
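    To see where that /var space is actually going before cleaning anything up, a quick du sweep helps (a minimal sketch; /var/sadm is where the saved "undo" copies of patched files live, and TARGET can be pointed anywhere):

```shell
# List the five largest directories (by KB) under a target tree.
TARGET=${TARGET:-/var/sadm}
du -k "$TARGET" 2>/dev/null | sort -n | tail -5
```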

  • "iPod cannot update - file not found" after getting a new iPod

    I had my iPod replaced today at the Apple Store because of a bad hard drive. It seems the one they gave me was brand new. Anyway, after following the restore/charge directions, I plugged it into my dock and iTunes told me it can't update the iPod because a file is missing. I definitely have the latest updater installed, and I even tested playlist syncs with the new iPod - it definitely updates songs and playlists - but after that it gives me the error message about not being able to update.
    I ran the latest updater, and it says my iPod is up to date.
    What's going wrong? I must have done something wrong.

    Or not... I spoke too soon!
    It's doing it AGAIN. After it started updating following the restore, all was well; then it tried to add my whole library, which won't fit.
    When it automatically adds a "library selection" instead, it fills up my iPod so that the photos won't update.
    Then when I re-mount the iPod, it gives me the same old error message again about missing the file required for updating!
    I wish my preference to "manually add songs" (instead of syncing the library) would be saved in iTunes when I restore my iPod. It's making it impossible for me to get things working properly after restoring.

  • File system corrupted *after* it was unmounted

    Hi all,
    I'd be interested in any opinions on this problem.
    I recently bought a 1TB Samsung Story Station 3 usb hard drive for use with Time Machine.
    This model of drive has a power switch on the front. Most of the time I don't need hourly backups so my plan is to leave the drive turned off most of the time and only turn it on when I feel the need for automatic hourly backups.
    A beautiful setup... except... The file system keeps getting corrupted and it is not always repairable.
    What is happening?
    My procedure is to eject the disk then switch off the drive.
    About half the time an immediate disk verify will report that the disk requires repair.
    I have tried variations of this with very similar results. Here is a typical failure scenario:-
    1 - Start Time Machine "Back Up Now"
    2 - Let it copy some data then stop the backup (if necessary)
    3 - Eject the disk
    4 - Switch off the drive
    5 - Switch on the drive
    6 - Verify
    Variations on the theme...
    I have tried the drive with and without a bootable image restored to it and therefore with and without journalling on the file system (I believe).
    1, 2 - I have tried copying data using finder instead of using time machine.
    3 - I have tried umount -f and diskutil unmount
    Note that umount without -f reports the drive is busy but other ejection/unmount mechanisms work.
    Is this related?
    The only thing that may have made a difference is that powering down the mac, instead of just the drive, after ejection has not yet resulted in a damaged file system.
    The obvious conclusion to draw is that a buffer somewhere is not being flushed before I turn off the drive, but is this buffer in OS X or in the drive itself?
    In either case I assume other people are having the same problem.
    For now I will just leave the drive turned on all the time. Maybe this is what everyone else does, so nobody else sees the problem.

    I need help with my MP3 player. This is the second one I have bought, and it keeps going to 'File System Corrupted'. It's an RCA MP3 player that holds 5GB. My first one lasted about 6 months and this one only 3 months. I don't drop it - maybe once, but on a soft floor - and it's never been thrown or beaten on; there are no marks on it from being dropped or hit on anything. Also, I can't get it to respond to my computer: I plug it in and it says 'File System Corrupted'. There needs to be a way to fix it; otherwise the people who built it are getting money easily with a device that only works for a short, limited amount of time. I've pressed my power button and held it, and nothing happens. I'm really annoyed with this device; I've wasted much money on it. Is there any way at all to fix it? ~Dark Warrior~

  • Increasing file system space and /

    Running Solaris 10 SPARC
    V445R
    I have 2x 73GB disks (mirrored).
    I would like to add 2 additional 73GB disks,
    mirroring 0 against 1 and 2 against 3.
    I would like to increase /usr (c0t0d0s4), /var (c0t0d0s3) and /generic (c0t0d0s6).
    I believe the file system is limited to 7 usable partitions (slices s0-s6).
    Slice s6 would be entirely on the second set of disks.
    With Solaris 10, is there an easier way to add space to the file systems than backing everything up, splitting the mirror, formatting, partitioning, re-mirroring, and reloading the backups?
    Thanks

    Assuming you're using SVM, see the following links to see if they help. I don't have a system I can alter just now, but I have these bookmarked. Good luck.
    http://docs.sun.com/app/docs/doc/816-4520/6manpiek9?a=view
    http://docs.sun.com/app/docs/doc/816-5166/6mbb1kq27?a=view
    Just in case, I'd make sure to have a backup. ;)
    -Marc
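    Once the new disks are in SVM mirrors, the usual pattern for the non-root slices is to grow each submirror and then grow the UFS online; a hedged sketch (all metadevice and slice names are hypothetical, taken to be the mirror behind /usr):

```
# d4 is a mirror of submirrors d14 and d24.
metattach d14 /dev/dsk/c0t2d0s4   # concatenate new space onto one submirror
metattach d24 /dev/dsk/c0t3d0s4   # ...and onto the other
growfs -M /usr /dev/md/rdsk/d4    # expand the mounted UFS to use it
```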

  • Updating satellite system data after an upgrade in Solman

    Hi All,
    We are using Solman 7.01 connected to a BI 7.0 system as its satellite system. We recently upgraded our BI system to BI 7.01, but we do not see the updated version in the Solman system.
    We checked in SMSY and it is still showing BI 7.0; we have also used the "Read data from system" option, but it did not work.
    Please suggest how to update the system data in Solman.
    Regards,
    Shivam Mittal

    Hi Shivam Mittal,
    When you clicked "Read System Data Remote", was the information returned successfully in the logs?
    Usually the information isn't pushed because the READ RFCs are failing.
    If you run the job LANDSCAPE FETCH, is the information collected correctly?

  • Dfc: Display file system space usage using graph and colors

    Hi all,
    I wrote a little tool, somewhat similar to df(1), which I named dfc.
    To present it, nothing beats a screenshot (because of the colors); see the project website below for one.
    There are a few options available (as of version 3.0.0):
    Usage: dfc [OPTION(S)] [-c WHEN] [-e FORMAT] [-p FSNAME] [-q SORTBY] [-t FSTYPE] [-u UNIT]
    Available options:
    -a  print all mounted file systems
    -b  do not show the graph bar
    -c  choose color mode (read the manpage for details)
    -d  show used size
    -e  export to the specified format (read the manpage for details)
    -f  disable auto-adjust mode (force display)
    -h  print this message
    -i  show information about inodes
    -l  only show information about locally mounted file systems
    -m  use metric (SI units)
    -n  do not print the header
    -o  show mount flags
    -p  filter by file system name (read the manpage for details)
    -q  sort the output (read the manpage for details)
    -s  sum the total usage
    -t  filter by file system type (read the manpage for details)
    -T  show the file system type
    -u  choose the unit in which to show the values (read the manpage for details)
    -v  print the program version
    -w  use a wider bar
    -W  wide filename (do not truncate)
    If you find it interesting, you may install it from the AUR: http://aur.archlinux.org/packages.php?ID=57770
    (it is also available on the archlinuxfr repository for those who have it enabled).
    For further explanations, there is a manpage or the wiki on the official website.
    Here is the official website: http://projects.gw-computing.net/projects/dfc
    If you encounter a bug (or several!), it would be nice to inform me. If you wish a new feature to be implemented, you can always ask me by sending me an email (you can find my email address in the manpage or on the official website).
    Cheers,
    Rolinh
    Last edited by Rolinh (2012-05-31 00:36:48)

    bencahill wrote:There were the decently major changes (e.g. -t changing from 'don't show type' to 'filter by type'), but I suppose this is to be expected from such young software.
    I know I changed the options a lot with the 2.1.0 release. I thought it would be better to have -t for filtering and -T for printing the file system type, so someone used to the original df would not be surprised.
    I'm sorry for the inconvenience. There should not be any more changes like this in the future, though; I thought this one was needed (especially because of the unit options).
    bencahill wrote:
    Anyway, I now cannot find any way of having colored output showing only some mounts (that aren't all the same type), without modifying the code.
    Two suggestions:
    1. Introduce a --color option like ls and grep (--color=WHEN, where WHEN is always,never,auto)
    OK, I'll implement this one for the 2.2.0 release. It'll be more like "-c always", "-c never" and "-c auto" (the default), because I do not use long options, but I think this would be OK, right?
    bencahill wrote:2. Change -t to be able to filter multiple types (-t ext4,ext3,etc), and support negative matching (! -t tmpfs,devtmpfs,etc)
    This was already planned for the 2.2.0 release.
    bencahill wrote:Both of these would be awesome, if you have time. I've simply reverted for now.
    This is what I would have suggested.
    bencahill wrote:By the way, awesome software.
    Thanks, I'm glad you like it!
    bencahill wrote:P.S. I'd already written this up before I noticed the part in your post about sending feature requests to your email. I decided to post it anyway, as I figured others could benefit from your answer as well. Please forgive me if this is not acceptable.
    This is perfectly fine. Moreover, I seem to have some trouble with my e-mail address... so it's actually better that you posted your requests here!

  • Solaris file system space

    Hi All,
    While trying to use the df -k command on my Solaris box, I get the output shown below.
    Filesystem 1024-blocks Used Available Capacity Mounted on
    rpool/ROOT/solaris-161 191987712 6004395 140577816 5% /
    /devices 0 0 0 0% /devices
    /dev 0 0 0 0% /dev
    ctfs 0 0 0 0% /system/contract
    proc 0 0 0 0% /proc
    mnttab 0 0 0 0% /etc/mnttab
    swap 4184236 496 4183740 1% /system/volatile
    objfs 0 0 0 0% /system/object
    sharefs 0 0 0 0% /etc/dfs/sharetab
    /usr/lib/libc/libc_hwcap1.so.1 146582211 6004395 140577816 5% /lib/libc.so.1
    fd 0 0 0 0% /dev/fd
    swap 4183784 60 4183724 1% /tmp
    rpool/export 191987712 35 140577816 1% /export
    rpool/export/home 191987712 32 140577816 1% /export/home
    rpool/export/home/123 191987712 13108813 140577816 9% /export/home/123
    rpool/export/repo 191987712 11187204 140577816 8% /export/repo
    rpool/export/repo2010_11 191987712 31 140577816 1% /export/repo2010_11
    rpool 191987712 5238974 140577816 4% /rpool
    /export/home/123 153686630 13108813 140577816 9% /home/12
    My question is: why does the /usr/lib/libc/libc_hwcap1.so.1 file system have the same size as the / root file system, and what is the significance of the /usr/lib/libc/libc_hwcap1.so.1 file system?
    Thanks in advance for your help.

    You must have a lot of small files on the file system.
    There are a couple of ways to deal with it; the simplest is to increase the size of the file system.
    Alternatively, you can create a new file system with a higher inode count, so you can utilize the space and still have enough inodes. Check out the mkfs_ufs man page and the nbpi=n option.
    my 2 bits
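    A quick way to confirm the inode theory before rebuilding anything (a hedged sketch; "df -o i" is the Solaris spelling, "df -i" the Linux one):

```shell
# Show inode usage for the root file system; exhausted inodes alongside
# free blocks means the inode count, not the space, is the problem.
df -o i / 2>/dev/null || df -i /
# If inodes really are exhausted, rebuilding with a smaller nbpi (bytes per
# inode) creates more of them - hypothetical device, destroys existing data:
#   newfs -i 2048 /dev/rdsk/c1t0d0s6
```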

  • Problem with File System Repository after release change

    Hello,
    We have upgraded the portal from EP 6.0 to SAP NetWeaver Portal 7.0 SP17. The OS of the host is Windows 2003 Server.
    When I open the component monitor to check the repositories, the state of the file system repositories is red, with this error message:
    Startup Error: getting mapped math - Logon failure: unknown user name or bad password
    I checked the logon credentials... they are OK. I also checked the access from the portal host to the Windows file system... it's OK too.
    In our previous portal release the FSRs were fine!
    Is there a difference between the file system repository configuration in Portal 6.0 and Portal 7.0?
    Thanks and regards
    Tom

    Hello Hussain,
    I have checked the username and password of the user who has full access to the Windows file system, but the same error occurs. In the network path we have configured a domain user for the file system access. Here is the configuration of the network path:
    Name: Test
    Description: Test
    Networkpath:
    Host\share
    Password: ****
    Re-enter Password: ****
    User: domain\user
    Here the configuration of the repository:
    Name: Test
    Description: Test
    Prefix: /Test
    Lookup Mode: caseless
    Root Directory:
    Host\share
    Repository Services: Nothing
    Property Search Manager: com.sapportals.wcm.repository.manager.generic.search.SimpleManagerSearchManager
    Security Manager: AclSecurityManager
    ACL Manager Cache: ca_cm_rep_acl
    Windows Landscape System: Microsoft_Windows_KM
    Read only: unchecked
    Could it be that I need a user mapping for access to the Windows file system?
    Or perhaps the configuration of the Windows Landscape System is wrong?
    regards
    Tom

  • Monitor free file system space in GB

    Hi All,
    I need to check all the file systems of our database server and get an alert when any of them drops below a certain number of GB free.
    For example, my FS has 100GB in total, and because I put some non-Oracle stuff there (or Oracle stuff, it doesn't matter), the free space is now lower than 10GB.
    I want to get an alert if the free space is lower than 10GB.
    I didn't find any metric that can help me with this.
    Can somebody help?
    Thanks,
    Zvika

    Hi,
    I tried the Directory Size metric, but it just gives me the size of the directory; I need to check whether the FS has less than a threshold value in GB.
    I also had some errors trying to get the metric to work on directories:
    File or Directory Attribute Not Found: File or Directory Name XXX
    Error in computing size for: XXX
    Angrydot, you are right, but 10% of 100GB is not OK while 10% of 500GB is OK. I want to set one threshold for all file systems.
    Thanks,
    Zvika
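    One simple approach outside the built-in metrics is a small cron-able script that applies a single GB threshold to every file system you list (a hedged sketch; the threshold, the file system list, and the plain echo are placeholders for whatever alerting mechanism you use):

```shell
#!/bin/sh
# Alert when a file system's free space drops below THRESHOLD_GB.
THRESHOLD_GB=10

check_fs() {
    fs=$1
    # POSIX 'df -P' guarantees one output line per file system;
    # field 4 is the available space in 1024-byte blocks.
    avail_kb=$(df -kP "$fs" | awk 'NR==2 {print $4}')
    avail_gb=$((avail_kb / 1024 / 1024))
    if [ "$avail_gb" -lt "$THRESHOLD_GB" ]; then
        echo "ALERT: $fs has only ${avail_gb}GB free (threshold ${THRESHOLD_GB}GB)"
    fi
}

for fs in / /tmp; do   # list the file systems to watch here
    check_fs "$fs"
done
```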

  • Spooling a file adds blank lines after each row

    Hi
    I am spooling the result of a SQL query into a CSV file on Linux, but the file contains a lot of blank lines after each row. How can this be eliminated while spooling?
    Thanks in advance
    Sas

    Thanks a lot for your reply. Is there any way to suppress the display of the query result? I don't want to see the result on screen; I just want the output file.
    Thanks
    Sas
    Edited by: SasDutta on Mar 24, 2010 4:20 PM
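    The blank lines usually come from SQL*Plus pagination and line padding; the settings below are the usual cure, and "set termout off" also addresses the follow-up about hiding the on-screen output (a hedged sketch; table and path names are examples, and termout off only takes effect when the script is run with @, not pasted interactively):

```
-- spool_csv.sql - example SQL*Plus script
set pagesize 0     -- no page breaks, titles, or column headers
set linesize 1000  -- wide enough that rows do not wrap
set trimspool on   -- strip trailing blanks from spooled lines
set feedback off   -- no "n rows selected" message
set termout off    -- do not echo the result to the screen
spool /tmp/result.csv
select col1 || ',' || col2 from some_table;
spool off
```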

  • Hi, I have started using a MacBook Air. I updated my system yesterday, and after that I am not able to make video calls through Skype.

    Hi, I am not able to make video calls through Skype after a software update. Please help.

    Welcome to Apple Support Communities
    It's a known problem in OS X 10.8.5. Apple is working on a fix.
