ZFS file system space issue

Hi All,
Kindly help me resolve a ZFS file system space issue. I have deleted nearly 20 GB of files, but the file system usage remains the same. Kindly advise.
Thanks,
Kumar

The three reasons I'm aware of that cause deleted files not to return space are: 1) the file is linked to multiple names, 2) the file is still open by a process, and 3) the file is captured in a snapshot. While it is possible you deleted 20 GB in multiply linked and/or open files, I'd guess snapshots are the most likely cause.
For multiple "hard" links, you can see the link count (before you delete the file) in the second field of "ls -l" output. If it is greater than one, deleting that name won't free up space; you have to delete all names linked to the file to free the space.
For open files, you can use the pfiles command to see which files a process has open. The space won't be recovered until every process that has the file open closes it. Killing the process will do the job; if you'd like to use a big hammer, a reboot kills everything.
For snapshots: use the "zfs list -t snapshot" command and look at the snapshots. The USED column indicates how much space is held by each snapshot. Deleting a snapshot will free space unless that space is still held by another snapshot. To free the space held by a file, you have to delete every snapshot that contains that file.
Hopefully, I got all of this right.
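A minimal sketch of those checks, assuming a pool and file system named tank/home and a snapshot named oldsnap (placeholder names, not from the original post):
# ls -l /tank/home/bigfile          (the second field is the hard-link count)
# fuser -c /tank/home               (lists processes with files open on that mount)
# zfs list -t snapshot -r tank      (the USED column shows space pinned by each snapshot)
# zfs destroy tank/home@oldsnap     (frees the space held only by that snapshot)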

Similar Messages

  • ZFS file system options

    Dear Friends,
    Which command creates a ZFS file system similar to the following command?
    newfs -i 200000 -c 256 -C 8 -m 1 /dev/rdsk/c1t1d0s0
    Thanks,
    Sal.

    Dear Darren & Friends,
    I am trying to install Lotus Domino on a T5240 server, where the mail files will be stored on a ZFS file system. I would like to tune the ZFS file system for Domino before going to production. Unfortunately Sun doesn't have any documentation on ZFS tuning for Domino, but there is plenty for UFS. For UFS tuning, Sun suggests the following:
    "Modern disk subsystems can perform larger data transfers than their predecessors, and Solaris can take
    advantage of this to read more data than Domino asks for in hopes of having the next piece of a file already in
    memory as soon as Domino asks for it. Unfortunately, Domino doesn't always use the “anticipated” data but
    instead next asks for data from an entirely different region of the file, so the effort spent reading the extra data
    may be wasted. In extreme circumstances and with modern disk systems Solaris can be fooled into reading
    fifty or sixty times as much data as is actually needed, wasting more than 98% of the I/O effort.
    To prevent this pathological behavior, build or tune the file systems that hold NSF databases so that Solaris
    won't try to read more than about 64KB at a time from them. The instructions that follow show how to do this for
    the default Unix File System (UFS); if you are using an alternative file system, consult its documentation.
    If you are in a position to build or rebuild the file systems, we suggest using the command
    newfs -i 200000 -c 256 -C 8 -m 1 /dev/rdsk/...
    The important option is -C 8, which limits the amount of read-ahead to no more than eight pages of 8KB each.
    The other options are less important, and may even cause newfs to issue warning messages for some disk
    sizes; these warnings can be ignored since newfs will adjust the values to suit the actual disk."
    Can I build a ZFS file system with at least an option similar to -C 8?
    Thanks,
    Sal.
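    For what it's worth, the closest ZFS knob to the -C 8 read-ahead limit is the recordsize property, which caps the block size ZFS uses for a dataset. A hedged sketch, with a made-up pool/dataset name and 64K mirroring the value quoted above; this is an analogy with the UFS guidance, not Domino-specific documentation:
    # zfs create -o recordsize=64k datapool/domino
    # zfs get recordsize datapool/domino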

  • Export 500gb database size to a 100gb file system space in oracle 10g

    Hi All,
    Please let me know the procedure to export a 500 GB database to a 100 GB file system.

    user533548 wrote:
    Hi Linda,
    The database version is 10g and the OS is Linux. Can we use the FILESIZE parameter for the export? Please advise on this.

    FILESIZE will limit the size of each file in case you specify multiple dump files. You could also specify multiple dump directories (in different file systems) when giving multiple dump files.
    For instance:
    dumpfile=dump_dir1:file1,dump_dir2:file2,dump_dir3:file3...
    Nicolas.
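    A hedged sketch of what that might look like with Data Pump (the directory object names, the file-size cap and the dump file templates are placeholders; each dump_dirN should point at a different file system with enough free space):
    $ expdp system/password full=y filesize=20G \
        dumpfile=dump_dir1:exp1_%U.dmp,dump_dir2:exp2_%U.dmp,dump_dir3:exp3_%U.dmp \
        logfile=dump_dir1:exp_full.log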

  • ZFS file system mount in solaris 11

    Create a ZFS file system for the package repository in the root pool:
    # zfs create rpool/export/repoSolaris11
    # zfs list
    The atime property controls whether the access time for files is updated when the files are read.
    Turning this property off avoids producing write traffic when reading files.
    # zfs set atime=off rpool/export/repoSolaris11
    Create the required pkg repository infrastructure so that you can copy the repository
    # pkgrepo create /export/repoSolaris11
    # cat sol-11-1111-repo-full.iso-a sol-11-1111-repo-full.iso-b > \
    sol-11-1111-repo-full.iso
    # mount -F hsfs /export/repoSolaris11/sol-11-1111-repo-full.iso /mnt
    # ls /mnt
    # df -k /mnt
    Using the tar command as shown in the following example can be a faster way to move the
    repository from the mounted file system to the repository ZFS file system.
    # cd /mnt/repo; tar cf - . | (cd /export/repoSolaris11; tar xfp -)
    # cd /export/repoSolaris11
    # ls /export/repoSolaris11
       pkg5.repository README
       publisher sol-11-1111-repo-full.iso
    # df -k /export/repoSolaris11
    # umount /mnt
    # pkgrepo -s /export/repoSolaris11 refresh
    =============================================
    # zfs create -o mountpoint=/export/repoSolaris11 rpool/repoSolaris11
    ==============================================
    I am trying to reconfigure the package repository with the above steps. When I reached the step below:
    # zfs create -o mountpoint=/export/repoSolaris11 rpool/repoSolaris11
    it created the mount point but did not mount it, giving the error message
    cannot mount: directory not empty
    When I restarted the box, it dropped to the service administration screen with an error message saying it was not able to mount all points. Please advise, and thanks in advance.

    Hi.
    Don't confuse the contents of the directory used as a mount point with what you see after the file system is mounted.
    For any other ZFS file system the underlying mount point directory is also empty; what you see after mounting is the content of the ZFS file system itself.
    To check, you can unmount any other ZFS file system and see that its mount point directory is likewise empty.
    Regards.
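    A hedged sketch of one way to get past the "directory not empty" error (paths as in the thread): check whether something is already sitting under that path, and clear it or reuse it rather than creating a second dataset on top of it:
    # zfs list -r rpool/export          (is /export/repoSolaris11 already its own dataset?)
    # ls -A /export/repoSolaris11       (what files are hiding under the mount point?)
    If it is only leftover files, move them aside and retry the zfs create; if a dataset is already mounted there, there is no need to create another one for the same path.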

  • Best practices for ZFS file systems when using live upgrade?

    I would like feedback on how to lay out the ZFS file system to deal with files that are constantly changing during the Live Upgrade process. For the rest of this post, let's assume I am building a very active FreeRadius server with log files that are constantly updated and must be preserved in any boot environment during the LU process.
    Here is the ZFS layout I have come up with (swap, home, etc omitted):
    NAME                                USED  AVAIL  REFER  MOUNTPOINT
    rpool                              11.0G  52.0G    94K  /rpool
    rpool/ROOT                         4.80G  52.0G    18K  legacy
    rpool/ROOT/boot1                   4.80G  52.0G  4.28G  /
    rpool/ROOT/boot1/zones-root         534M  52.0G    20K  /zones-root
    rpool/ROOT/boot1/zones-root/zone1   534M  52.0G   534M  /zones-root/zone1
    rpool/zone-data                      37K  52.0G    19K  /zones-data
    rpool/zone-data/zone1-runtime        18K  52.0G    18K  /zones-data/zone1-runtime
    There are 2 key components here:
    1) The ROOT file system - This stores the / file systems of the local and global zones.
    2) The zone-data file system - This stores the data that will be changing within the local zones.
    Here is the configuration for the zone itself:
    <zone name="zone1" zonepath="/zones-root/zone1" autoboot="true" bootargs="-m verbose">
      <inherited-pkg-dir directory="/lib"/>
      <inherited-pkg-dir directory="/platform"/>
      <inherited-pkg-dir directory="/sbin"/>
      <inherited-pkg-dir directory="/usr"/>
      <filesystem special="/zones-data/zone1-runtime" directory="/runtime" type="lofs"/>
      <network address="192.168.0.1" physical="e1000g0"/>
    </zone>
    The key components here are:
    1) The local zone / is shared in the same file system as global zone /
    2) The /runtime file system in the local zone is stored outside of the global rpool/ROOT file system in order to maintain data that changes across the live upgrade boot environments.
    The system (local and global zone) will operate like this:
    The global zone is used to manage zones only.
    Application software that has constantly changing data will be installed in the /runtime directory within the local zone. For example, FreeRadius will be installed in: /runtime/freeradius
    During a live upgrade the / file system in both the local and global zones will get updated, while /runtime is mounted untouched in whatever boot environment that is loaded.
    Does this make sense? Is there a better way to accomplish what I am looking for? Is this setup going to cause any problems?
    What I would really like is to not have to worry about any of this and just install the application software wherever the software supplier sets its defaults. It would be great if this system somehow magically knew to leave my changing data alone across boot environments.
    Thanks in advance for your feedback!
    --Jason

    Hello "jemurray".
    Have you read this document? (page 198)
    http://docs.sun.com/app/docs/doc/820-7013?l=en
    Then the solution is:
    01.- Create an alternate boot enviroment
    a.- In a new rpool
    b.- In the same rpool
    02.- Upgrade this new enviroment
    03.- Then I've seen that you have the "radious-zone" in a sparse zone (it's that right??) so, when you update the alternate boot enviroment you will (at the same time) upgrading the "radious-zone".
    This maybe sound easy but you should be carefull, please try this in a development enviroment
    Good luck
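    A minimal sketch of that alternate-BE flow (the BE name and the install image path are placeholders, not from the thread):
    # lucreate -n newBE                                   (create the alternate boot environment in the same rpool)
    # luupgrade -u -n newBE -s /net/installserver/s10u8   (upgrade it from an install image)
    # luactivate newBE
    # init 6                                              (boot into the upgraded environment)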

  • Root disk craches how to retrive the data from ZFS file systems.

    Hi Friends,
    The Solaris 10 OS root disk has crashed. All disks except the root disk are configured as ZFS file systems, which we use for our application. Is there any way to retrieve the data?
    Server model - V880
    Please help me.
    Thanks in advance.

    If the OS wasn't on ZFS, then just rebuild the server, hook up the drives and run 'zpool import'. It should find the pool on the disks and offer it up to be imported.
    Darren
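    A hedged sketch of that recovery (the pool name is whatever the scan reports; "datapool" below is a placeholder):
    # zpool import              (scans attached disks and lists importable pools)
    # zpool import datapool     (imports the pool found by the scan)
    # zpool import -f datapool  (only if it complains the pool was last in use by the old host)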

  • Oc 11gR1 update 3: doesn't show ZFS file systems created on brownfield zone

    Subject is a pretty concise description here. I have several brownfield Solaris 10U10 containers running on M5000s, and I have delegated three zpools to each container for use by Oracle. Below is the relevant output from zonecfg export for one of these containers. They were all built in the same manner, then placed under management by OC. (Wish I'd been able to build them as greenfield with Ops Center, but there just wasn't enough time to learn how to configure Ops Center the way I needed to use it.)
    set name=Oracle-DB-Instance
    set type=string
    set value="Oracle e-Business Suite PREPROD"
    end
    add dataset
    set name=PREPRODredoPOOL
    end
    add dataset
    set name=PREPRODarchPOOL
    end
    add dataset
    set name=PREPRODdataPOOL
    end
    The problem is, none of the file systems built on these delegated pools in the container appear in the Ops Center File System Utilization charts. Does anyone have a suggestion for how to get OC to monitor the file systems in the zone?
    Here's the output from zfs list within the zone described by the zonecfg output above:
    [root@acdpreprod ~]# zfs list
    NAME                  USED  AVAIL  REFER  MOUNTPOINT
    PREPRODarchPOOL      8.91G  49.7G    31K  none
    PREPRODarchPOOL/d05  8.91G  49.7G  8.91G  /d05
    PREPRODdataPOOL       807G   364G    31K  none
    PREPRODdataPOOL/d02  13.4G  36.6G  13.4G  /d02
    PREPRODdataPOOL/d03   782G   364G   782G  /d03
    PREPRODdataPOOL/d06  11.4G  88.6G  11.4G  /d06
    PREPRODredoPOOL      7.82G  3.93G    31K  none
    PREPRODredoPOOL/d04  7.82G  3.93G  7.82G  /d04
    None of the file systems in the delegated datasets appear in Ops Center for this zone. Are there any suggestions for how I correct this?

    Do you mean adopt the zone? That requires the zone be halted and it also says something about copying all file systems to the pool created for the zone. Of the 12 zones I have (four on each of three M5000s), seven of them are already in "production" status, and four of those seven now support 7x24 world-wide operations. A do-over is not an option here.

  • Problem in Reducing the root file system space

    Hi All ,
    The root file system has reached 86%. We have cleared 1 GB of data in /var, but the root file system still shows 86%. Please note that /var is not a separate file system.
    I have included the df -h output below for your reference. Please provide a solution as soon as possible.
    /dev/dsk/c1t0d0s0 2.9G 2.4G 404M 86% /
    /devices 0K 0K 0K 0% /devices
    ctfs 0K 0K 0K 0% /system/contract
    proc 0K 0K 0K 0% /proc
    mnttab 0K 0K 0K 0% /etc/mnttab
    swap 30G 1.0M 30G 1% /etc/svc/volatile
    objfs 0K 0K 0K 0% /system/object
    /dev/dsk/c1t0d0s3 6.7G 3.7G 3.0G 56% /usr
    /platform/SUNW,Sun-Fire-T200/lib/libc_psr/libc_psr_hwcap1.so.1
    2.9G 2.4G 404M 86% /platform/sun4v/lib/libc_psr.so.1
    /platform/SUNW,Sun-Fire-T200/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
    2.9G 2.4G 404M 86% /platform/sun4v/lib/sparcv9/libc_psr.so.1
    fd 0K 0K 0K 0% /dev/fd
    swap 33G 3.5G 30G 11% /tmp
    swap 30G 48K 30G 1% /var/run
    /dev/dsk/c1t0d0s4 45G 30G 15G 67% /www
    /dev/dsk/c1t0d0s5 2.9G 1.1G 1.7G 39% /export/home
    Regards,
    R. Rajesh Kannan.

    I don't know whether the root partition filling up was sudden, and thus due to deleting an in-use file, or some other problem. However, I have noticed that VAST amounts of space are used up just through the normal patching process.
    After I installed Sol 10 11/06, my 12GB root partition was 48% full. Now, about 2 months later, after applying available patches, it is 53% full. That is about 600 MB being taken up by the superseded versions of the installed patches. This is ridiculous. I have patched using Sun Update Manager, which by default does not use the patchadd -d option that would not back up old patch versions, so the superseded patches are building up in /var, wasting massive amounts of space.
    Are Solaris users just supposed to put up with this, or is there some other way we should manage patches? It is time consuming and dangerous to manually clean up the old patch versions by using patchrm to delete all versions of a patch and then using patchadd to re-install only the latest revision.
    Thank you.
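    A hedged way to see where the space on / is actually going (Solaris du; the -d flag keeps it from crossing into other file systems):
    # du -dk / | sort -n | tail -20                     (the twenty largest directories on the root file system, in KB)
    # du -sk /var/sadm/pkg/*/save | sort -n | tail      (rough size of the saved patch backout data, if that is the suspect)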

  • Cluster file systems performace issues

    hi all,
    I've been running a 3 node 10gR2 RAC cluster on linux using OCFS2 filesystem for some time as a test environment which is due to go into production.
    Recently I noticed some performance issues when reading from disk so I did some comparisons and the results don't seem to make any sense.
    For the purposes of my tests I created a single node instance and created the following tablespaces:
    i) a local filesystem using ext3
    ii) an ext3 filesystem on the SAN
    iii) an OCFS2 filesystem on the SAN
    iv) and a raw device on the SAN.
    I created a similar table with exactly the same data (900,000 rows) in each tablespace and created the same index on each table.
    (I was trying to generate an I/O-intensive select statement, but also one which is realistic for our application.)
    I then ran the same query against each table (making sure to flush the buffer cache between each query execution).
    I checked that the explain plan were the same for all queries (they were) and the physical reads (from an autotrace) were also comparable.
    The results from the ext3 filesystems (both local and SAN) were approx 1 second, whilst the results from OCFS2 and the raw device were between 11 and 19 seconds.
    I have tried this test every day for the past 5 days and the results are always in this ballpark.
    we currently cannot put this environment into production as queries which read from disk are cripplingly slow....
    I have tried comparing simple file copies from an OS level and the speed differences are not apparent - so the issue only manifests itself when the data is read via an oracle db.
    judging from this, and many other forums, OCFS2 is in quite wide use so this cannot be an inherent problem with this type of filesystem.
    Also, given the results from my raw device test I am not sure that moving to ASM would provide any benefits either...
    if anyone has any advice, I'd be very grateful

    Hi,
    Off the top of my head, my question would be: how did you eliminate the influence of the Linux file system cache on ext3? OCFS2 is accessed with the o_direct flag - there will be no caching. The same holds true for RAW devices. This could have an influence on your test, and I did not see a configuration step to avoid it.
    What I saw, though, is "counter test": "I have tried comparing simple file copies from an OS level and the speed differences are not apparent - so the issue only manifests itself when the data is read via an oracle db." and I have no good answer to that one.
    Maybe this paper has: http://www.oracle.com/technology/tech/linux/pdf/Linux-FS-Performance-Comparison.pdf - it's a bit older, but explains some of the interdependencies.
    Last question: While you spent a lot of effort on proving that this one query is slower on OCFS2 or RAW than on ext3 for the initial read (that's why you flushed the buffer cache before each run), how realistic is this scenario when this system goes into production? I mean, how many times will this query be read completely from disk as opposed to use some block from the buffer? If you consider that, what impact does the "IO read time from disk" have on the overall performance of the system? If you do not isolate the test to just a read, how do writes compare?
    Just some questions. Thanks.
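    One hedged way to take the ext3 page cache out of the picture between timed runs (assuming the test box runs a 2.6.16 or later kernel, which is where this interface appeared):
    # sync
    # echo 3 > /proc/sys/vm/drop_caches     (drops the page cache, dentries and inodes before the next run)
    Alternatively, setting the Oracle parameter filesystemio_options=SETALL makes the instance open ext3 datafiles with direct I/O, which is closer to how OCFS2 and the raw device are being accessed.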

  • QFS doesn't update file system space after a failover

    When I do a failover (bringing down the master node of the QFS device) while deleting a file, the space on the file system becomes inconsistent (the df command shows more space in use than the du command does), and I have to run a file system check on it to get the free space back.

    Thanks,
    The version of QFS is VERSION.4.6
    SUN Cluster version 3.2
    Solaris 10 8/07
    The file "mcf" is
    # Equipment Eq Eq Family Device Additional
    # Identifier Ord Type Set State Parameters
    kml 1 ms kml on shared
    /dev/did/dsk/d7s0 10 md kml on
    /dev/did/dsk/d8s0 11 md kml on
    /dev/did/dsk/d9s0 12 md kml on
    /dev/did/dsk/d10s0 13 md kml on
    /dev/did/dsk/d11s0 14 md kml on
    /dev/did/dsk/d12s0 15 md kml on
    /dev/did/dsk/d13s0 16 md kml on
    /dev/did/dsk/d14s0 17 md kml on
    /dev/did/dsk/d15s0 18 md kml on
    /dev/did/dsk/d16s0 19 md kml on
    /dev/did/dsk/d21s0 20 md kml on
    /dev/did/dsk/d22s0 21 md kml on
    /dev/did/dsk/d23s0 22 md kml on
    /dev/did/dsk/d24s0 23 md kml on
    /dev/did/dsk/d25s0 24 md kml on
    /dev/did/dsk/d26s0 25 md kml on
    /dev/did/dsk/d27s0 26 md kml on
    /dev/did/dsk/d28s0 27 md kml on
    /dev/did/dsk/d29s0 28 md kml on
    /dev/did/dsk/d30s0 29 md kml on
    # samfsinfo kml
    samfsinfo: filesystem kml is mounted.
    name: kml version: 2 shared
    time: Thursday, April 10, 2008 4:48:05 PM PYT
    count: 20
    capacity: 000000003d064400 DAU: 64
    space: 000000003d04e480
    ord eq capacity space device
    0 10 00000000030d1d00 00000000030d0580 /dev/did/dsk/d7s0
    1 11 00000000030d1d00 00000000030d1c00 /dev/did/dsk/d8s0
    2 12 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d9s0
    3 13 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d10s0
    4 14 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d11s0
    5 15 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d12s0
    6 16 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d13s0
    7 17 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d14s0
    8 18 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d15s0
    9 19 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d16s0
    10 20 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d21s0
    11 21 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d22s0
    12 22 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d23s0
    13 23 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d24s0
    14 24 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d25s0
    15 25 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d26s0
    16 26 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d27s0
    17 27 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d28s0
    18 28 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d29s0
    19 29 00000000030d1d00 00000000030d1c40 /dev/did/dsk/d30s0
    # df -k | grep kml
    kml 1023820800 5384 1023815416 1% /oradata_kml1
    # clrs status -g qfs-kml-rg +
    === Cluster Resources ===
    Resource Name    Node Name    State      Status Message
    qfs-kml-rs       m9ka         Online     Online - Service is online.
                     m9kb         Offline    Offline
    Best regards,
    Ivan
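    A hedged sketch of the check that recovers the space (the family set name comes from the mcf above; the file system has to be unmounted on all nodes first, and -F repairs rather than just reports):
    # samfsck -F kml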

  • Solaris file system space

    Hi All,
    While running the df -k command on my Solaris box, I get the output shown below.
    Filesystem 1024-blocks Used Available Capacity Mounted on
    rpool/ROOT/solaris-161 191987712 6004395 140577816 5% /
    /devices 0 0 0 0% /devices
    /dev 0 0 0 0% /dev
    ctfs 0 0 0 0% /system/contract
    proc 0 0 0 0% /proc
    mnttab 0 0 0 0% /etc/mnttab
    swap 4184236 496 4183740 1% /system/volatile
    objfs 0 0 0 0% /system/object
    sharefs 0 0 0 0% /etc/dfs/sharetab
    /usr/lib/libc/libc_hwcap1.so.1 146582211 6004395 140577816 5% /lib/libc.so.1
    fd 0 0 0 0% /dev/fd
    swap 4183784 60 4183724 1% /tmp
    rpool/export 191987712 35 140577816 1% /export
    rpool/export/home 191987712 32 140577816 1% /export/home
    rpool/export/home/123 191987712 13108813 140577816 9% /export/home/123
    rpool/export/repo 191987712 11187204 140577816 8% /export/repo
    rpool/export/repo2010_11 191987712 31 140577816 1% /export/repo2010_11
    rpool 191987712 5238974 140577816 4% /rpool
    /export/home/123 153686630 13108813 140577816 9% /home/12
    My question here is: why does the /usr/lib/libc/libc_hwcap1.so.1 file system show the same size as the / root file system, and what is the significance of the /usr/lib/libc/libc_hwcap1.so.1 file system?
    Thanks in Advance for your help..

    You must have a lot of small files on the file system.
    There are a couple of ways to deal with that; the simplest is to increase the size of the file system.
    Alternatively, you can create a new file system with a higher inode count so you can utilize the space and still have enough inodes. Check out the mkfs_ufs man page and the nbpi=n option.
    My 2 bits.
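    A hedged sketch of that second option (the device name is a placeholder; a smaller bytes-per-inode value means more inodes on the same slice):
    # newfs -i 2048 /dev/rdsk/c0t0d0s7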

  • Increasing file system space and /

    Running Solaris 10 SPARC
    V445R
    I have (2ea) 73GB disk (Mirrored)
    I would like to add 2 additional 73GB disk
    Mirror 0 against 1 and Mirror 2 against 3
    I would like to increase /usr (c0t0d0s4), /var (c0t0d0s3) and /generic (c0t0d0s6)
    I believe the file system is limited to 7 partitions (slices s0-6).
    slice s6 would be entirely on the second set of disks.
    With Solaris 10, is there an easier way to add space to the file systems than backing everything up, splitting the mirror, formatting, partitioning, re-mirroring, and restoring the backups?
    Thanks

    Assuming you're using SVM, see the following links to see if they help. I don't have a system I can alter just now, but have these bookmarked. Good luck.
    http://docs.sun.com/app/docs/doc/816-4520/6manpiek9?a=view
    http://docs.sun.com/app/docs/doc/816-5166/6mbb1kq27?a=view
    Just in case, I'd make sure to have a backup. ;)
    -Marc
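    If those are SVM mirrors, a hedged sketch of the general idea from those docs (metadevice and slice names are placeholders): attach a slice from each new disk to the corresponding submirror, then grow the UFS in place:
    # metattach d14 c2t0d0s4              (grow the first submirror with a slice from one new disk)
    # metattach d24 c3t0d0s4              (grow the second submirror with a slice from the other)
    # growfs -M /usr /dev/md/rdsk/d4      (expand the mounted /usr file system onto the new space)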

  • Dfc: Display file system space usage using graph and colors

    Hi all,
    I wrote a little tool, somewhat similar to df(1) which I named dfc.
    To present it, nothing better than a screenshot (because of colors):
    And there are a few options available (as of version 3.0.0):
    Usage: dfc [OPTIONS(S)] [-c WHEN] [-e FORMAT] [-p FSNAME] [-q SORTBY] [-t FSTYPE] [-u UNIT]
    Available options:
    -a print all mounted filesystem
    -b do not show the graph bar
    -c choose color mode. Read the manpage for details
    -d show used size
    -e export to specified format. Read the manpage for details
    -f disable auto-adjust mode (force display)
    -h print this message
    -i info about inodes
    -l only show information about locally mounted file systems
    -m use metric (SI unit)
    -n do not print header
    -o show mount flags
    -p filter by file system name. Read the manpage for details
    -q sort the output. Read the manpage for details
    -s sum the total usage
    -t filter by file system type. Read the manpage for details
    -T show filesystem type
    -u choose the unit in which to show the values. Read the manpage for details
    -v print program version
    -w use a wider bar
    -W wide filename (un truncate)
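    For instance, an invocation exercising a few of those options might look like this (illustrative only, not taken from the project's documentation):
    $ dfc -l -T -s      (only locally mounted file systems, with their types and a summed total)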
    If you find it interesting, you may install it from the AUR: http://aur.archlinux.org/packages.php?ID=57770
    (it is also available on the archlinuxfr repository for those who have it enabled).
    For further explanations, there is a manpage or the wiki on the official website.
    Here is the official website: http://projects.gw-computing.net/projects/dfc
    If you encounter a bug (or several!), it would be nice to inform me. If you wish a new feature to be implemented, you can always ask me by sending me an email (you can find my email address in the manpage or on the official website).
    Cheers,
    Rolinh
    Last edited by Rolinh (2012-05-31 00:36:48)

    bencahill wrote:There were the decently major changes (e.g. -t changing from 'don't show type' to 'filter by type'), but I suppose this is to be expected from such young software.
    I know I changed the options a lot with the 2.1.0 release. I thought it would be better to have -t for filtering and -T for printing the file system type, so someone using the original df would not be surprised.
    I'm sorry for the inconvenience. There should not be any changes like this in the future, though; I thought it was needed (especially because of the unit options).
    bencahill wrote:
    Anyway, I now cannot find any way of having colored output showing only some mounts (that aren't all the same type), without modifying the code.
    Two suggestions:
    1. Introduce a --color option like ls and grep (--color=WHEN, where WHEN is always,never,auto)
    Ok, I'll implement this one for the 2.2.0 release. It'll be more like "-c always", "-c never" and "-c auto" (default) because I do not use long options, but I think this would be OK, right?
    bencahill wrote:2. Change -t to be able to filter multiple types (-t ext4,ext3,etc), and support negative matching (! -t tmpfs,devtmpfs,etc)
    This was already planned for the 2.2.0 release.
    bencahill wrote:Both of these would be awesome, if you have time. I've simply reverted for now.
    This is what I would have suggested.
    bencahill wrote:By the way, awesome software.
    Thanks! I'm glad you like it!
    bencahill wrote:P.S. I'd already written this up before I noticed the part in your post about sending feature requests to your email. I decided to post it anyway, as I figured others could benefit from your answer as well. Please forgive me if this is not acceptable.
    This is perfectly fine. Moreover, I seem to have some trouble with my e-mail address... so it's actually better that you posted your requests here!

  • ZFS File System - Need to add space

    Dear All,
    Please help me in the below case.
    I have the df -h output as below.
    rpool/export/home/chaitsri
    134G 35K 126G 1% /export/home/chaitsri
    datapool 134G 32K 134G 1% /datapool
    datapool/test1 20G 31K 20G 1% /datapool/test1
    The request is to add the space of /datapool/test1 to /export/home/chaitsri. Please let me know how we can achieve this.

    It's not clear from your post whether you want to move just /export/home/chaitsri or all of /export/home. The procedure will be similar regardless. I'll assume you want to do the latter. This is untested so please don't blindly copy/paste until you've thought about what you want to achieve.
    1) Make sure no non-root users are logged on to the system
    2) Create the appropriate dataset (filesystem) on datapool, eg:
    # zfs create datapool/export/home
    3) For any local users, create the appropriate datasets on datapool and copy the data from rpool/export/home/<username> to datapool/export/home/<username>. Any automounted home directories don't need this because the data will be on the home servers.
    4) Change the mountpoint property for rpool/export/home (and any local users)
    # zfs set mountpoint=/origexport/home rpool/export/home
    5) Change the mountpoint property for datapool/export/home
    # zfs set mountpoint=/export/home datapool/export/home
    6) Once you're happy that things work, then you can delete the rpool/export/home dataset(s).
    HTH
    Steve

  • Trace Files 11g (Space Issue)

    Friends, we are running an 11g (11.1.0.7.0) database in our environment. Currently our OS slice is facing space-related issues due to the huge number of trace files being generated. Please share any helpful notes/ideas.
    Regards,
    Irfan Ahmad

    Please see the following information for the trace-related parameters:
    log_archive_trace                  3    0        0
    sql_trace                          1    FALSE    FALSE
    sec_protocol_error_trace_action    2    TRACE    TRACE
    tracefile_identifier               2
    trace_enabled                      1    TRUE     TRUE
    Any ideas?
    regards
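    In 11g the trace files live under DIAGNOSTIC_DEST (the ADR), and adrci can purge them by age. A hedged sketch (the homepath and the 7-day/10080-minute retention below are placeholders):
    $ adrci
    adrci> show homes
    adrci> set homepath diag/rdbms/orcl/orcl
    adrci> purge -age 10080 -type trace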
