Thoughts on ZFS-FUSE stability?

I'm considering putting ZFS on my home file server for the data drives, and setting it up for mirrored RAID. I was wondering if anyone had any experiences with it as far as stability and data loss, good or bad. Performance isn't a HUGE concern, since it's just for my own use. I do keep backups on an external hard drive in any case, but that's one of those things you hope you never have to use.
So has anyone used ZFS-FUSE extensively, and if so, how was your experience with it? Is it ready for prime time, or not?
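For context, the kind of setup I have in mind is just a simple two-disk mirror, something like this (device names are placeholders):
zpool create tank mirror /dev/disk/by-id/<first-disk> /dev/disk/by-id/<second-disk>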

Well, since few people seem to be talking about the current state of ZFS on Linux, I'll offer my experience:
Other than a solvable bug that prevents some filesystems from being exported over NFS (it affects other filesystems besides ZFS), I've found ZFS-FUSE to be a pleasure to use.  ZFS is much more mature than BTRFS, and I think a production-ready kernel module is not far away.  BTRFS has had recent data-loss issues and I wouldn't trust my important data to it yet.
I am running a RAIDZ2 setup of four 2TB disks that gives me ~3.7TB of usable space with dual redundancy.  If you decide to expand your storage, ZFS makes it a breeze.  NFS exporting with ZFS is also super easy, as long as you organize your ZFS 'filesystems' properly.  Thinking in terms of ZFS filesystems takes a bit of adjustment, but now that I'm used to it I really like it (individual permissions and sharing settings, variable size, nestable).  I also have no trouble sharing the filesystems over Samba to my Windows boxes.
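For illustration, here is a minimal sketch of that kind of layout. The pool and dataset names are only examples, and whether zfs-fuse acts on the sharenfs property itself may vary; exporting the dataset mountpoints through /etc/exports or smb.conf works in any case.
zpool create tank raidz2 <disk1> <disk2> <disk3> <disk4>
zfs create tank/media
zfs create tank/media/video
zfs set quota=500G tank/media/video
zfs set sharenfs=on tank/media
Each dataset gets its own mountpoint and its own properties, which is what makes the per-share permissions and quotas so convenient.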
ZFS-FUSE is not blazing fast, but it's fast enough for my NAS, which runs backups and serves high-bandwidth media.  The self-checking/self-healing feature gives me peace of mind about my data that I haven't had before.  It's easy to get status and statistics about the current state of the filesystem from the zfs and zpool commands.  I only wish I'd switched my NAS to ZFS sooner!
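For status and statistics, the commands I mean are just the usual ones, for example (pool name as in the sketch above):
zpool status -v tank
zpool list
zfs list
zfs get used,available,compressratio tank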
EDIT: This post by a BSD user describes well how I feel about ZFS:
ZFS is not just another filesystem. And there are faster filesystems out there.
But if you need the features of ZFS, it is the best you have ever worked with.
http://hub.opensolaris.org/bin/view/Com … zfs/whatis
Last edited by doublerebel (2011-09-22 20:26:20)

Similar Messages

  • Please critique my zfs-fuse setup - wiki to follow - help needed

    Hi all,
    I am writing a new wiki for zfs-fuse that will hopefully find a niche and help others.
    I am having a couple of rough spots:
    For example, I'm trying to figure out why, when I do a
    # zpool create pool /dev/sdb /dev/sdc /dev/sdd /dev/sde
    with 2TB, 750GB, 750GB, and 500GB drives, the resulting size comes out to ~3.5TB instead of 4TB, and likewise when I create it with 2TB, 500GB, and 500GB drives the size comes out to ~2.68TB. Does zpool automatically use the smallest drive as a cache drive or something? Or am I just that bad at 1024*1024*xxx?
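    (For a rough sanity check, assuming the gap is mostly decimal TB versus the binary TiB that zpool reports: 2000 + 750 + 750 + 500 = 4000 GB, and 4000 * 10^9 bytes / 2^40 is ~3.64 TiB; likewise 3000 GB is ~2.73 TiB. So most of the "missing" space is probably just the unit difference, with any small remainder going to pool labels, metadata, and rounding.)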
    Another question I have: what role does mdadm play if I created a linear span array underneath the zpool? Would this still be considered ZFS?
    https://wiki.archlinux.org/index.php/Us … ementation
    Here's the wiki; please critique it, comment on it, bash it, help it, anything.  I'm more or less finished until someone can point out any mistakes in the way I created the partition tables, or places where I used a zpool when I shouldn't have, and whether the way I have done it, with the bf00 partition type 'Solaris root', is indeed ZFS.
    I'm looking for complete filesystem integrity for a backup drive array, that's all.  Since it's only a backup array, and I don't yet have a minimum of six like-sized hard drives, I don't expect a nice RAIDZ1 setup with two separate vdevs that can take full advantage of the checksum-based repair that striped arrays offer; I just want to get the ZFS system up and running on one array until I get more hard drives.
    https://wiki.archlinux.org/index.php/Us … ementation
    Last edited by wolfdogg (2013-01-18 05:16:45)

    So, is that article a draft you are planning to add as a regular Arch wiki article?
    If that's the case, then read the Help:Style article; it has the rules for how wiki articles should be written on the ArchWiki.
    For instance, you write a lot in the first person and include personal comments, like this:
    1) to get to step one on the ZFS-FUSE page https://wiki.archlinux.org/index.php/ZFS_on_FUSE i had to do a few things. that was to install yaourt, which was not necessarily straight forward. I will be vague in these instructions since i have already completed these steps so they are coming from memory
    That style of writing is more fitting for a blog than for a wiki article. The style article mentions:
    Write objectively: do not include personal comments on articles, use discussion pages for this purpose. In general, do not write in first person.
    Check it here.
    So, instead of saying things similar to this:
    "I had to install blabla, I did it with yaourt like this: yaourt -S blabla, but you can download it manually from AUR, makepkg it, and then install it with pacman"
    It's better if you say it like this:
    "Install the package blabla from the AUR"
    Of course, use the wiki conventions to make "Install" a link to pacman, "blabla" a link to the AUR page of the package, and "AUR" a link to the AUR wiki article.
    Just read the article, and you will know how you should write it.

  • Zfs-fuse issues

    zfs-fuse will no longer function properly.
    I just installed a new system yesterday.  zfs-fuse was unable to run on it, despite an identical kernel and support packages to a running system on my network where zfs-fuse works properly.
    By copying: /usr/sbin/zfuse, zfs, and zpool from the running system to the new system, it came right up.
    I then tried to re-install zfs-fuse from the AUR and it stopped working again.
    I wouldn't know where to start debugging this, or even how to discover who the AUR package maintainer is.
    Last edited by TomB17 (2011-06-13 20:09:31)

    The maintainer can be found on zfs-fuse's aur page.
    Last edited by Stebalien (2011-06-14 03:14:10)

  • Hard drive array losing access - suspect controller - zfs

    I am having a problem with one of my arrays; it is a ZFS filesystem.  It consists of 1x500GB, 2x750GB, and 1x2TB drives in a linear array, and the pool is named 'pool'.  I have to mention here that I don't have enough hard drives for a raidz (RAID5-like) setup yet, so there is no actual redundancy: ZFS can't auto-repair from a second copy because there is none, so all auto-repair features can be thrown out the door in this situation.  That means I believe the filesystem could easily be corrupted by the controller in this specific case, which is what I suspect.  Please keep that in mind while reading the following.
    I just upgraded my binaries: I removed zfs-fuse and installed archzfs.  Did I remove zfs-fuse completely?  I'm not sure.  I wasn't able to get my array back up and running until I fiddled with the SATA cables, moved the SATA connectors around, and tinkered with the BIOS drive detection.  After I got it running, I copied some files off of it over Samba, thinking it might not last long.  The copy was successful, but problems began surfacing again shortly after, so now I suspect I have a bad controller on my Gigabyte board.  I recently found someone else who had this issue, so I'm thinking it's not the hard drive.
    I did some smartmontools tests last night and found that all drives show good on a short test; they all passed.  Today I'm not having so much luck getting access: there are hangs on reboot, and the drive light stays on.  When I try to run zfs and zpool commands, the system hangs.  I have been getting what appear to be hard drive errors as well.  I'll have to type them in manually, since I can't copy and paste from the console to the machine I'm posting from, and the errors aren't showing up via SSH or I would copy them from the terminal I currently have open.
    ata7: SRST failed (errno=-16)
    reset failed, giving up,
    end_request I/O error, dev sdc, sector 637543760
    end_request I/O error, dev sdc, sector 637543833
    sd 6:0:0:0 got wrong page
    sd 6:0:0:0 asking for cache data failed
    sd 6:0:0:0 assuming drive cache: write through
    info task txg_sync:348 blocked for more than 120 seconds
    And so forth.  When I boot I see this each time, which makes me feel that the hard drive is going bad; however, I still want to believe it's the controller.
    Note: it seems only those two sectors show up.  Is it possible that the controller wrote those two sectors out with bad data?  (Note: I previously had a Windows system installed on this motherboard, and after a few months of running it lost a couple of RAID arrays of data as well.)
    failed command: WRITE DMA EXT
    ... more stuff here...
    ata7.00 error DRDY ERR
    ICRC ABRT
    blah blah blah.
    So now I can give you some info from the diagnosis I'm doing on it, copied from a shell terminal.  Note that the following metadata errors JUST appeared after I tried to delete some files; copying didn't cause this, so it appears either something is currently degrading, or it was inevitable with a bad controller.
    [root@falcon wolfdogg]# zpool status -v
    pool: pool
    state: ONLINE
    status: One or more devices are faulted in response to IO failures.
    action: Make sure the affected devices are connected, then run 'zpool clear'.
    see: http://zfsonlinux.org/msg/ZFS-8000-HC
    scan: resilvered 33K in 0h0m with 0 errors on Sun Jul 21 03:52:53 2013
    config:
    NAME STATE READ WRITE CKSUM
    pool ONLINE 0 26 0
    ata-ST2000DM001-9YN164_W1E07E0G ONLINE 6 41 0
    ata-ST3750640AS_5QD03NB9 ONLINE 0 0 0
    ata-ST3750640AS_3QD0AD6E ONLINE 0 0 0
    ata-WDC_WD5000AADS-00S9B0_WD-WCAV93917591 ONLINE 0 0 0
    errors: Permanent errors have been detected in the following files:
    <metadata>:<0x0>
    <metadata>:<0x1>
    <metadata>:<0x14>
    <metadata>:<0x15>
    <metadata>:<0x16d>
    <metadata>:<0x171>
    <metadata>:<0x277>
    <metadata>:<0x179>
    If one of the devices is faulted, then why are all four of them showing ONLINE???
    [root@falcon dev]# smartctl -a /dev/sdc
    smartctl 6.1 2013-03-16 r3800 [x86_64-linux-3.9.9-1-ARCH] (local build)
    Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org
    === START OF INFORMATION SECTION ===
    Vendor: /6:0:0:0
    Product:
    User Capacity: 600,332,565,813,390,450 bytes [600 PB]
    Logical block size: 774843950 bytes
    scsiModePageOffset: response length too short, resp_len=47 offset=50 bd_len=46
    scsiModePageOffset: response length too short, resp_len=47 offset=50 bd_len=46
    >> Terminate command early due to bad response to IEC mode page
    A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.
    My drive list:
    [root@falcon wolfdogg]# ls -lah /dev/disk/by-id/
    total 0
    drwxr-xr-x 2 root root 280 Jul 21 03:52 .
    drwxr-xr-x 4 root root 80 Jul 21 03:52 ..
    lrwxrwxrwx 1 root root 9 Jul 21 03:52 ata-_NEC_DVD_RW_ND-2510A -> ../../sr0
    lrwxrwxrwx 1 root root 9 Jul 21 03:52 ata-ST2000DM001-9YN164_W1E07E0G -> ../../sdc
    lrwxrwxrwx 1 root root 9 Jul 21 03:52 ata-ST3250823AS_5ND0MS6K -> ../../sdb
    lrwxrwxrwx 1 root root 10 Jul 21 03:52 ata-ST3250823AS_5ND0MS6K-part1 -> ../../sdb1
    lrwxrwxrwx 1 root root 10 Jul 21 03:52 ata-ST3250823AS_5ND0MS6K-part2 -> ../../sdb2
    lrwxrwxrwx 1 root root 10 Jul 21 03:52 ata-ST3250823AS_5ND0MS6K-part3 -> ../../sdb3
    lrwxrwxrwx 1 root root 10 Jul 21 03:52 ata-ST3250823AS_5ND0MS6K-part4 -> ../../sdb4
    lrwxrwxrwx 1 root root 9 Jul 21 03:52 ata-ST3750640AS_3QD0AD6E -> ../../sde
    lrwxrwxrwx 1 root root 9 Jul 21 03:52 ata-ST3750640AS_5QD03NB9 -> ../../sdd
    lrwxrwxrwx 1 root root 9 Jul 21 03:52 ata-WDC_WD5000AADS-00S9B0_WD-WCAV93917591 -> ../../sda
    lrwxrwxrwx 1 root root 9 Jul 21 03:52 wwn-0x5000c50045406de0 -> ../../sdc
    lrwxrwxrwx 1 root root 9 Jul 21 03:52 wwn-0x50014ee1ad3cc907 -> ../../sda
    And this one I don't get:
    [root@falcon dev]# zfs list
    no datasets available
    I remember creating a dataset last year; why is it reporting none, yet still working?
    Is anybody seeing any patterns here?  I'm prepared to destroy the pool and recreate it just to see if it's bad data.  But what I'm thinking of doing now, since the problem appears to be happening only on the 2TB drive, is to work out whether the controller just can't handle it or the drive itself is bad.  To rule out the controller there might be hope: I have an add-in controller card (PCI to SATA, it shows up as SCSI) that one of the drives in the array is connected to, since I only have 4 SATA ports on the motherboard.  I keep the 500GB drive connected there and have not yet tried the 2TB there.  So if I connect this 2TB drive to that card, I should see the problems disappear, unless the drive is already corrupted.
    Does anyone experienced on the Arch forums know what's going on here?  Did I mess up by not completely removing zfs-fuse, is my hard drive going bad, is my controller bad, or did ZFS just get misconfigured?
    Last edited by wolfdogg (2013-07-21 19:38:51)

    OK, something interesting happened when I connected it (the badly behaving 2TB drive) to the PCI controller card.  First of all, no errors on boot.  Then take a look at this: some clues pointing to remnants of the older zfs-fuse setup, and a working pool.
    [root@falcon wolfdogg]# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    pool 2.95T 636G 23K /pool
    pool/backup 2.95T 636G 3.49G /backup
    pool/backup/falcon 27.0G 636G 27.0G /backup/falcon
    pool/backup/redtail 2.92T 636G 2.92T /backup/redtail
    [root@falcon wolfdogg]# zpool status
    pool: pool
    state: ONLINE
    status: The pool is formatted using a legacy on-disk format. The pool can
    still be used, but some features are unavailable.
    action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
    pool will no longer be accessible on software that does not support
    feature flags.
    scan: resilvered 33K in 0h0m with 0 errors on Sun Jul 21 04:52:52 2013
    config:
    NAME STATE READ WRITE CKSUM
    pool ONLINE 0 0 0
    ata-ST2000DM001-9YN164_W1E07E0G ONLINE 0 0 0
    ata-ST3750640AS_5QD03NB9 ONLINE 0 0 0
    ata-ST3750640AS_3QD0AD6E ONLINE 0 0 0
    ata-WDC_WD5000AADS-00S9B0_WD-WCAV93917591 ONLINE 0 0 0
    errors: No known data errors
    Am I looking at a needed BIOS update here, so the onboard controller can talk to the 2TB drive properly?
    Last edited by wolfdogg (2013-07-21 19:50:18)

  • ZFS and NFS

    I'm trying to share a zfs file system via nfs to several hosts. I've looked at the official Oracle documentation and it seems to be lacking (or I'm looking in the wrong place).
    I can do a zfs set sharenfs=rw data/set and use the mount command on the NFS clients, and that works, but the volume is mounted read-only. How do I mount it read-write?
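    Concretely, what I'm doing looks roughly like this (the server name and client mount point are placeholders). On the server:
    zfs set sharenfs=rw data/set
    and on the client:
    mount -t nfs server:/data/set /mnt/dataset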
    Also, I thought the 'zfs' command had a mount option for ZFS/NFS volumes; is that no longer the case (or perhaps it never was)?
    Some google results have turned up mention of /etc/zfs/exports being updated whenever a sharenfs option is changed, but that file isn't being created for me. Has it been deprecated?
    Any help appreciated, thanks!

    Nik,
    You are correct, the file system is mounted rw after all. The ownership of the mount point on the client is nobody, so I su'd to nobody and tried to create files, but couldn't. I can as root, though.
    I've changed the ownership of the NFS file system on the server to oracle:dba. I have an oracle:dba on the client with the same uid/gid, but the ownership of the files still says nobody. How do I make it show up as oracle?
    Also, is the mount command the correct way to mount ZFS NFS volumes?
    Thanks!

  • Questions about ZFS

    Hello
    After having some serious issues with data corruption on ext3 and ext4 over time, I have decided to start using ZFS on the disk. From what I have read, ZFS is a superb filesystem for avoiding data corruption. As far as I understand, zfs-fuse (http://aur.archlinux.org/packages.php?ID=8003) is the way to use ZFS with Linux, or is there a better option?
    And I've heard that ZFS demands a lot from the machine? And are there any other alternatives?

    He means until it's stable in the kernel, which could take a long while I think (personal guess: 2 years or so).

  • [SOLVED - kind-of] Problems with ZFS

    I had an HDD with ZFS on it in my old computer (I was using zfs-fuse at the time). Then I moved it to my new computer.
    I installed the zfs kernel module from the AUR, but I could not mount the filesystems:
    zpool import -a
    zfs mount -a
    and nothing happened.
    Then I upgraded the zpool to a newer version (it was 6, upgraded to 28).
    Still nothing. Then I tried to upgrade the ZFS filesystems:
    zfs upgrade -r -a
    cannot set property for 'home': pool and or dataset must be upgraded to set this property or value
    cannot set property for 'home/user1': pool and or dataset must be upgraded to set this property or value
    cannot set property for 'home/shared': pool and or dataset must be upgraded to set this property or value
    cannot set property for 'home/user2': pool and or dataset must be upgraded to set this property or value
    0 filesystems upgraded
    The current ZFS version of the filesystems is 1, and the version supported by the module is 5.
    What's wrong?
    UPDATED:
    Sorry folks - didn't read carefully http://zfsonlinux.org:
    "Sorry, the ZFS Posix Layer (ZPL) which allows you to mount the file system is still a work in progress."
    Last edited by SpiegS (2010-12-15 00:05:20)

    Re iphoto, back up first then rebuild the library as follows.
    http://support.apple.com/kb/ht2638
    If you still have problems post in iphoto forum
    There is also a rebuild mailbox option in Mavericks Mail.

  • [solved] format a zfs drive to what, with what

    I'm going to rebuild my ZFS array, and I'm not sure I did it right the first time.  Can somebody please point out to me what command-line tool to use, and what filesystem I need to put on there?  I once thought it was a ZFS filesystem, but is that just a layer on top of another filesystem?  I clearly remember not having to format the drives the last time I built an array, and they had been ext3 if I remember correctly; I'm not sure if I wiped them again after that.  Is this normal, wrong, or what am I missing?
    Note: my drives have GPT partition tables.
    Edit: a thought.  Since ZFS can use a GPT partition table, and if I created the partition to use the entire disk using gdisk, then is that all that is needed?  Should I have stopped there, or do I need to do some kind of formatting?
    Last edited by wolfdogg (2013-09-30 20:58:39)

    wolfdogg wrote: So can someone clarify: if I use gdisk to zap the partition table, then recreate a new GPT, then create a partition (the whole drive), is that it?  Then I can create the zpools no problem?  Or does it just use the disk as is?  I don't ever remember the drives going through a long format process when setting up zpools; they are just available for use right away.
    ZFS uses the disk as is; that means there is no need to create a partition or filesystem. Just clean the drives with dd (see the sketch below) and use zpool create afterwards.
    Wiki: https://wiki.archlinux.org/index.php/ZF … orage_pool
    EDIT: Remember to add all your drives to the command, for example like this:
    RaidZ1:
    zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    There are lots of examples on the manpage of zpool.
    Last edited by teateawhy (2013-09-30 12:26:04)

  • ZFS on Linux & mplayer

    Hi,
    I have a problem with the combination of zfsonlinux and mplayer.
    Actually I'm not quite sure if one of them is the culprit, but I have some pointers.
    Problem: When playing a movie (e.g. an mkv ~ 2GB or a vob ~ 6GB), every few minutes the movie briefly stutters (picture & audio).
    Observations:
    1) This didn't happen with zfs-fuse which is supposedly slower than zfs on linux.
    2) I can copy a vob file in under a minute to my home directory (ext4), hence read performance should be reasonable.
    3) When playing the movie from my home dir, no stuttering occurs. Hence I'd rule out an mplayer or sound card issue.
    Now I don't know what I can do about ZFS on Linux. Read performance seems to be OK and there seem to be no sound card issues. When I run bonnie or iozone, CPU usage stays below 20%, so that shouldn't be causing an issue either.
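    (For reference, one more thing that might narrow it down is watching pool I/O while a movie plays; assuming the pool is named 'tank':
    zpool iostat -v tank 1
    If the stutter lines up with stalls or bursts in that output, the pool itself is the bottleneck; if not, the problem is more likely above the filesystem.)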
    Any hints or tips on what I could look at next? Thanks!

  • Homeserver Storage considerations - Suggestions please

    Hi all,
    I'm currently upgrading my homeserver from an old Via-C7 with 1 HDD to a new system. The new one is an Atom D510 in the nice Chenbro ES34069 case (Mini-ITX with 4x SATA hotswap). Hardware-wise all is working out nicely so far, and Arch runs perfectly on the box. Now here is my problem: how to lay out the storage software-wise (LVM/RAID/whatever).
    What I have:
    1x 2TB HDD holding the data from the old server
    3x 1.5TB HDD free
    all attached to the server
    What I want:
    The drives should be assembled in an array that provides the following:
    - Accessible as one single "drive"
    - encrypted at block level (dm-crypt / luks)
    - provides redundancy (array survives failing of at least one drive)
    - array is extensible (Disks can be swapped for bigger ones / new disks can be added later [doing this online would be optimal])
    - be as performant as possible
    - it must be possible to first build the array out of 3 Disks, copy the data from the 4th to the array and then add the 4th Disk to the array
    The system has to be Linux/Arch, since I use software that only runs on Linux (VDR), so switching to Solaris/BSD/... is not possible.
    Encryption is a must for me; I'm not here to debate whether it's necessary or not.
    What I tried/considered so far:
    - Linux Software RAID 5
      Pros:
        - extensible, one can reshape the array online
        - redundant
        - encryption layer on top is possible
        - good usage of the available space
      Cons:
        - questionable reliability (raid 5 write hole problem, silent data/parity corruption)
        - weak write/rewrite performance
    - Linux Software Raid 10
      Pros:
        - faster than RAID 5
        - encryption on top is possible
        - redundant (up to 2 disks can fail, depending on which)
      Cons:
        - losing half of the HDD space to redundancy
        - not extensible (raid10 driver doesn't support reshaping)
    - ZFS (through ZFS-Fuse)
      Pros:
        - raidz does RAID 5 but right (no write hole or silent data corruption problems)
        - extensible, even online
      Cons:
        - stability of zfs-fuse is questionable; I don't know how stable it really is
        - no encryption layer on top of the pool is possible
        - writes are quite slow in my tests (partly because of the point above: every disk has to be encrypted separately, which is too much for the Atom to handle; see the sketch after this list)
    - 2x Software RAID 1 with LVM on top
      Pros:
        - extensible, even online
        - should be quite fast
        - encryption layer on top is possible
      Cons:
        - losing half of the HDD space to redundancy
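    For reference, the per-disk encryption layout behind the ZFS-FUSE notes above looks roughly like this; device names are placeholders and passphrase/keyfile handling is omitted:
    cryptsetup luksFormat /dev/sdb
    cryptsetup luksOpen /dev/sdb crypt_b
    (repeat for the other disks)
    zpool create tank raidz /dev/mapper/crypt_b /dev/mapper/crypt_c /dev/mapper/crypt_d
    Every write then has to be encrypted separately on each member disk, which is what overwhelms the Atom.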
    So that's what I have considered so far. What is the Arch community using for similar setups? Any suggestions for other methods (BTRFS??) that could satisfy my needs? Which of the above would you choose / have you chosen?
    I'm thankful for any constructive suggestions.
    seiichiro0185

    Bronxal:
    There are several options. Since you are looking to be able to retrieve them with iPhoto at a later date you might give this a try.
    1 - select multiple rolls until iPhoto says you've got maybe 550MB (if you're using CDs) or 6G (for DVDs).
    2 - go to the Share->Burn Disk menu option and follow the instructions.
    This will create a disk that can be mounted in iPhoto at a later time (See example) so you can view them, copy back to the library in case you need to print, etc. Once assured they are on the disk and OK you can delete the rolls from the library. One warning, currently iPhoto 5 has a bug that loses the keywords on these disks. It's a minor issue but just be forewarned.
    That method seems to fit what you want to do. Now regarding your aging G4: like mine, it could crash at any time. If the budget permits, get an external FireWire HD for backup purposes. I've got one that is much bigger than my boot drive, so I've cloned my system to it as well as backing up my library and other important documents on a daily or routine basis with Synk. If my boot drive gives up the ghost I can boot off the external until I get the boot drive fixed or replaced. FW drives are pretty affordable these days and are excellent insurance. I prefer the self-powered ones so I can easily turn them off when I don't need them; that way the drive doesn't run all the time.
    There are other methods of backing up just the jpg files. If that's what you're really interested in post back and we can discuss that. Good luck.

  • Is it possible to sandbox the entire system?

    DISCLAIMER
    This is partly just thinking out loud.
    There may be some completely obvious solution for achieving this that I have not come across.
    My ideas may be flawed.
    I saw the other thread about sandboxing but that had a different focus and went in a different direction than this hopefully will.
    First, by sandboxing I mean the following:
    * let an application see the actual system, but only selectively, e.g. make /usr visible but /home inaccessible
    * intercept all writes to the system
    * let an application see all intercepted writes as though they have actually occurred
    * intercept all network communication and allow the user to approve or deny it, e.g. enable a source download from one site but prevent the application from calling home to another
    * the application cannot escape the sandbox
    * the application should not be able to detect the sandbox
    Is this possible?
    First I thought about using FUSE to mask the entire filesystem, but this would affect all applications and probably wouldn't work on a running system.
    Then I thought about using virtualization. Maybe it would be possible to create a fake base image of the live host system and then add an overlay to that to create a sandboxed virtual clone of the host system. The network connection could probably be handled by the host in that case.
    I don't know if it would be at all possible though to create a fake base image of the live host system. I also don't know if it would need to be static or if the image could remain dynamic. In the latter case, it would probably be possible to create the image with FUSE. Using FUSE it might even be possible to forgo the overlay image, as FUSE itself could intercept the writes. There are obvious complexities in that though, such as how to present changes made to a file by the host to the guest if the guest has modified it previously. I also have no idea if the guest system could use a clone of the host's filesystem.
    Why I would want to do this:
    * "Safely" test-run anything while protecting your system (hide your sensitive data, protect all of your files, control network access)**.
    * Simplified package building: build the application as it's meant to be built in the sandbox, compare the sandbox to the host and then use the differences to build the package***.
    * It would be cool.
    ** Before anyone interjects with the "only run trusted apps" mantra, this would also apply to trusted apps which might contain bugs. Let's face it, most people do not plough through source code of trusted apps before building/installing/running them.
    *** This was prompted by my ongoing installation of SAGE, which is built in the post-install function instead of the PKGBUILD itself due to the complexities of the build process. The general idea is to create a way in which all applications that can be built can be packaged uniformly.

    Are you sure that you can change the permissions of symlinks themselves? I think I've tried to make files read-only via symlinks on a local server but ended up using bindfs because it wasn't possible. Even if you can, symlinking everything that might be necessary for a given environment would not be ideal, plus I don't think symlinks can be used across different filesystems.
    If a real-life human can figure out whether he/she is in a chroot and break out of it, then he/she can write a script to do the same. I want a sandbox that could run malicious code with no effect on the system (if that's possible). Also, I think if the chroot idea were truly feasible, makepkg would have been using it for years already to simply install packages in the chroot as you normally would and then package them. There would also be several sandbox applications that could run applications safely. So far I have yet to find any.
    I admit that I haven't looked into using a chroot in detail, though, and of course I may have missed some application which creates such a setup. Right now I think using per-application namespaces with FUSE seems the most promising, but I won't know until I've finished implementing a test application. If it turns out to be a dead end I'll take a harder look at chroot, but it really doesn't seem to be able to do what I want.
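    For what it's worth, here is a minimal sketch of just the 'selective visibility' part, using a mount namespace via util-linux's unshare instead of FUSE; it is run as root and does nothing about intercepting writes or filtering network traffic:
    unshare -m bash
    mount --make-rprivate /
    mkdir -p /tmp/empty
    mount --bind /tmp/empty /home
    Inside that shell /home appears empty while the rest of the system is visible as usual, and processes outside the namespace are unaffected.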

  • Delete download/cached pkg files?

    In addition to a 20tb Solaris server, I also run Solaris 11 Express in a VM on my notebook for portable external ZFS storage (long story on how and why). As you might imagine, I'm interested in keeping the system virtual image (which is stored on my notebook drive) small.
    After doing a pkg update which turned out to be pretty substantial, the on-disk size (within the VM not just the virtual container) grew a good deal.
    So my question: Does the package manager store downloaded package files even though it doesn't need them any more? And if so, can they be safely deleted?
    E.g., Debian Linux systems have "apt-get clean", which deletes the package cache. Any Solaris analogue? (I know Solaris isn't Linux. I mean I really, REALLY know! ;-)
    Thanks in advance!
    -Jim
    +PS: In case this question gets derailed with debate about Solaris not being a suitable solution for portable external notebook-based ZFS storage, let me just completely agree! However, after about 100 hours of work (in total over a couple of years), I've developed a solution that is rock-solid, gets a reliable 20-30 MB/s throughput [not amazing but surprising given the concept], and doesn't get confused by changing device ports, IDs, etc. I push it hard every day, all day. The drives are velcro'ed to the notebook lid. Of course I first preferentially tried Linux and Btrfs, then ZFS-FUSE, then BSD ZFS, but this is hands-down the best solution. But it wasn't easy. In fact it was really, really hard to get everything ironed out. But the only commercial alternative...doesn't exist. Yet. (But I'm working on it.)+

    doublemeat wrote:
    Thanks, that was just the answer I was looking for! Well except for the "you may have ruined your system" part. Fortunately I was able to restore from a previous snapshot.
    I didn't want to do a full roll back, so I copied everything under /var/pkg/cache from a previous snapshot to the current one. When I ran # pkg update, it complained about sources being wrong or something (the output was buggy, clearly missing lines, and didn't make sense). It recommended I run # pkg remove-source solaris. Which didn't seem very wise, but I did it anyway. Then I played around with the GUI version, thinking that might get the package source set straight. Now when I run # pkg update, it eventually just comes back with No updates available for this image. Which is the right answer, so hopefully it's OK now!
    I have no idea what "remove-source" is; that's certainly never been a valid pkg(1) subcommand.
    You also didn't restore all of the directory content you should have (I would have suggested all of /var/pkg).
    As a result, I would suggest running "pkg verify" to determine the state of your system.
    For systems that are running a version of Solaris 11 Express (only), you can safely remove any /var/pkg/publisher/*/file directories. (Just the 'file' directories and their contents.)
    I've been having a tough time finding what old package/log/temp/dump files can be safely deleted. (So far this was the most helpful but not very complete: http://docs.oracle.com/cd/E23824_01/html/821-1451/sysresdiskuse-19.html.) As with many things Solaris - esp. recent versions - one can spend all day Googling and scouring, and still not find a satisfactory answer. Over a couple of years I've been compiling the answers I find and/or work out myself. Here's the section on what I believe are all and only files that can be safely deleted once and/or regularly:
    The directory /var/pkg/cache and its contents may be safely removed, but it should not be necessary to manually remove it or its contents as it is managed automatically by the package system.
    Most of the cache management issues you might have had should be resolved in Solaris 11 FCS (the general release).
    flush-content-cache-on-success True is the default in S11 FCS as well.
    Further refinements to the cache management system will be made as time goes on.
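    (For reference, a harmless way to see what those directories are actually holding before removing anything, using the paths mentioned above:
    du -sh /var/pkg/cache /var/pkg/publisher/*/file
    followed by 'pkg verify' to confirm the image is still consistent.)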

  • WRTU54G-TM Internet / Blue-light & XP SP3 Issues

    So I've been wrestling with getting the WRTU54G-TM to maintain a constant internet and "blue light" connection for over a month now.  I thought I had finally stabilized the situation (I downloaded the setup wizard from the Linksys site, as others recommended, and used that to set up the router), and everything seemed to be rock solid for a week or so.
    Then I installed both iTunes 8 (which installs the new version of the Bonjour service) and XP Service Pack 3 in the same weekend, and now the WRTU54G-TM gets flaky every time I turn on my computer (it's fine when the computer is off).
    At first I thought it was the Apple Bonjour application, but after un-installing that, the WRTU54G-TM was still flaky.  I then upgraded the router to the latest firmware version (v1.00.09), but still no luck.  The router will lose the internet connection (and the blue light), and after a few minutes the connection will come back and the blue light will come back a few minutes after that (at least no power cycle is needed).
    So I'm left wondering if XP SP3 is what is now causing the problem and is doing something to mess with the router connectivity.
    Does anyone know of SP3 issues with the WRTU54G-TM and how to resolve these?
    Thanks!
    Ian

    I did a little bit more research and found that XP SP3 is indeed causing a number of routers to crash and reboot (which is what I'm experiencing when my computer is on).
    Apparently there is some issue involving DHCP and "Option 43" which can cause some routers to reboot.  There's more information and a hotfix here http://support.microsoft.com/kb/953761 which I'm going to try tonight (crossing my fingers); hopefully it will solve the problem...

  • Filesystems: Media storage

    Hey guys, I've got a new 1TB drive coming in the next few days, and I was planning on using it for storing large media files (mostly videos). In the past I've used NTFS for storing such files, but since I never use Windows for anything other than playing games nowadays, booting into Windows to periodically defrag the storage partition has become an unbearable chore.
    So, in your opinion, what is the best Linux filesystem for storing large files? If you have your own dedicated storage partition, are there any specific mount options you use with it to help prevent against data corruption (caused by unclean shutdowns or whatever)? Is it worth using LVM on the disk, and maybe freeing up one of my other terabyte drives, putting LVM on that, and having a RAID0? Is there any significant processor hit when using RAID0?
    Also, as a side topic, MBR or GPT as the partition table? (It's kinda irrelevant, but never mind).

    I've had various arrays over the years, mostly using md + xfs.  It works pretty well, as long as the drives and cables are stable.  It does not, however, protect against bit rot or bad cables.  I've had several bad SATA cables that were tough to track down.
    My first 8 x 2TB array had nothing but problems.  I would get it set up and transfer data to it, thinking it was going to be great.  It was months before I realized a high percentage of my files had become corrupted.  That's scary.
    zfs-fuse solved these problems.  It allowed me to discover drives with bad sectors and the bad cables.  With zfs-fuse, it will even tell you what drive is experiencing the difficulty.  Pretty sweet.
    I'm running WD20EARS drives.  I know the mantra is not to use them in an array but expensive drives would not have been any more reliable in the face of the bad cable problem and I can't imagine they would be much better on the bad sector issue, either.  The point is, zfs-fuse weathered these problems, told me what was wrong... right down to the serial number of the drive having issues, and repaired the data once the problem was corrected.
    zfs-fuse is slow and not appropriate for boot drives.  I wouldn't run it on a root drive, either.  It's pretty much bullet proof on reliability though, so I really like it on arrays that hold bulk data.  For uses like ZoneMinder and MythTV recording, zfs-fuse is fast enough... barely.
    By the way, it got about 50% faster in version 0.7.  I'm getting about 40 MBps on an older dual core 2.5 GHz athlon.
    Some have reported 100 MBps.  That could well be possible with smoking fast drives in mirror mode but my fastest quad core server can't break 45 MBps on RAIDZ1 arrays.
    ... just my 2 cents. 
    Last edited by TomB17 (2011-09-06 04:58:17)

  • Checksumming filesystems

    Are BTRFS, ZFS, or ZFS Fuse ready for production use?
    I care a whole lot about the integrity of my data.  I've had problems with XFS and EXT4 over the md device driver that turned out to be a bug in the Silicon Image 3132 chipset.  It quietly corrupts data.
    These checksumming filesystems sound like the answer but are they ready?
    BTRFS is pretty new but might have the most mature code of the three on the linux platform.
    The paint is still wet on native Linux ZFS but I have to assume the original source code was rock solid before they ported it so it shouldn't take long to stabilize.
    ZFS Fuse is said to be slow but I don't care much about speed.  Can I trust my data to it?
    My main storage is on two very large RAID 5 arrays.  Currently, one is on EXT4 and the other is on XFS.  They are rsync'd regularly for redundancy.  Is it reasonable to convert the primary array to ZFS?

    I see nobody is interested in commenting.  These discussions frequently seem to turn into jihads based on platform politics so maybe this is why there are a small number of huge threads with vitriol and uninformed commentary.  I'm going to try to boil it down to some pretty lame basics that should be obvious, but aren't entirely.
    This is based on weeks of reading, a few facts, lots of assumptions, and some experience.  I invite comments or criticism.
    Preface: This research started when I realized my 14TB disk array was quietly corrupting my files.  ...  almost all of my files.  It was shocking that neither RAID nor the filesystem was able to exert any positive influence on the integrity of my data.
    Here's a CERN paper on data integrity.  It seems to be a key paper that has people buzzing, either in disbelief or panic.  Consider me in the latter, reactive category.  lol!
    http://indico.cern.ch/getFile.py/access … nfId=13797
    The options:
    md_mod/XFS -> Fast.  Venerable.  This setup is popular and will protect data against some types of drive failure but leaves data subject to bit rot and various forms of quiet corruption.  Your files can easily become corrupt with no warning or errors of any kind.
    hardware RAID/XFS -> Fast (typically a touch slower than md_mod but far less taxing on the CPU than md_mod).  This setup will protect data against some types of drive failure but leaves data subject to bit rot and various forms of quiet corruption.  Your files can easily become corrupt with no warning or errors of any kind.
    BTRFS - New.  Interesting.  Fast for a checksumming filesystem.  Still in the birthing process, it should be sufficiently stable for critical application use in 4~5 years.
    Native ZFS - Very new.  Fast for a checksumming filesystem.  Highly venerable pedigree.  Too new to fully trust, but with its impeccable heritage and the relatively small amount of Linux glue code versus actual filesystem code, it should be well stable in 12~18 months.
    ZFS Fuse - Mature.  Adequate speed.  This filesystem seems to have been stable since roughly 2008.  Given the bulletproof Sun roots of the filesystem and relatively long-term use, it seems reasonable that the Linux integration code is stable at this time.  While ZFS Fuse will not pick up bit rot and quiet corruption dynamically, it can be tested occasionally with "zpool scrub <pool>".
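    For reference, that occasional check is just the following, with 'tank' as an example pool name:
    zpool scrub tank
    zpool status -v tank
    The status output shows scrub progress and lists any files with unrecoverable errors.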
    Risk assessment criteria:
    - data integrity
    Risk assessment in order of least risk to most risk
    - ZFS Fuse
    - md_mod/XFS
    - hardware RAID/XFS
    - native ZFS
    - BTRFS
    Conclusion:
    If the criterion is purely data integrity in a large data store, it looks to me like ZFS Fuse is the system to go with.
