Zfs-fuse issues

zfs-fuse will no longer function properly.
I just installed a new system yesterday. It was unable to run, despite an identical kernel and support packages to a running system on my network that has zfs-fuse working properly.
By copying /usr/sbin/zfuse, zfs, and zpool from the running system to the new system, it came right up.
I then tried to re-install zfs-fuse from the AUR and it stopped working again.
I wouldn't know where to start debugging this, or even how to find out who the AUR package maintainer is.
Last edited by TomB17 (2011-06-13 20:09:31)
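A few starting points for the debugging question, as a hedged sketch (the package name and paths here assume the zfs-fuse AUR package of that era). First check which package owns the binaries that had to be copied across, and compare the version and build date between the working and broken machines:
# pacman -Qo /usr/sbin/zfs /usr/sbin/zpool
# pacman -Qi zfs-fuse
Then start the daemon from a root shell; startup errors usually land in the system log (the daemon binary is assumed here to be /usr/sbin/zfs-fuse; adjust to whatever name your package installs, e.g. the "zfuse" mentioned above):
# /usr/sbin/zfs-fuse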

The maintainer can be found on zfs-fuse's AUR page.
Last edited by Stebalien (2011-06-14 03:14:10)

Similar Messages

  • Thoughts on ZFS-FUSE stability?

    I'm considering putting ZFS on my home file server for the data drives, and setting it up for mirrored RAID. I was wondering if anyone had any experiences with it as far as stability and data loss, good or bad. Performance isn't a HUGE concern, since it's just for my own use. I do keep backups on an external hard drive in any case, but that's one of those things you hope you never have to use.
    So has anyone used ZFS-FUSE extensively, and if so, how was your experience with it? Is it ready for prime time, or not?

    Well, since few people seem to be talking about the current state of ZFS on Linux, I'll offer my experience:
    Other than a solvable bug preventing some filesystems from NFS export (affects ZFS and other FS as well), I've found ZFS-FUSE to be a pleasure to use.  ZFS is much more mature than BTRFS and I think a production-functional kernel module is not far away.  BTRFS has had recent data loss issues and I wouldn't trust my important data to it yet.
    I am running a 4-disk RAIDZ2 setup with 2TB disks that gives me ~3.7TB of usable space with dual redundancy.  If you decide to expand your storage, ZFS makes it a breeze.  NFS exporting with ZFS is also super easy, as long as you organize your ZFS 'filesystems' properly.  It's a bit different thinking in terms of ZFS filesystems, but I really like it (individual permissions and sharing settings, variable size, nestable) now that I'm used to it.  I also have no trouble sharing the filesystems over Samba to my Windows boxes.
    ZFS-FUSE is not blazing fast but it's fast enough for my NAS running backups and serving high-bandwidth media.  The self-checking/self-healing feature gives me a calm feeling about data I haven't had before.  It's easy to get status and statistics about the current state of the FS from the zfs and zpool commands.  I only wish I'd switched my NAS to ZFS sooner!
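    As a concrete illustration of the per-filesystem layout described above (a minimal sketch; the pool name "tank" and the filesystem names are examples, not taken from the post):
    # zfs create tank/media
    # zfs create tank/backups
    # zfs set quota=500G tank/backups
    Each filesystem gets its own mountpoint (e.g. /tank/media), so it can be exported over NFS or shared via Samba individually, with its own permissions and properties.  Status and usage are then one command away:
    # zpool status tank
    # zfs list -o name,used,available,mountpoint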
    EDIT: This post by a BSD user describes well how I feel about ZFS:
    ZFS is not just another filesystem, and there are faster filesystems out there.
    But if you need the features of ZFS, it is the best you have ever worked with.
    http://hub.opensolaris.org/bin/view/Com … zfs/whatis
    Last edited by doublerebel (2011-09-22 20:26:20)

  • Please critique my zfs-fuse setup - wiki to follow - help needed

    Hi all,
    I am writing a new wiki for zfs-fuse that will hopefully find a niche and help others.
    I am having a couple of rough spots:
    For example, I'm trying to figure out why, when I do a
    # zpool create pool /dev/sdb /dev/sdc /dev/sdd /dev/sde
    with a 2TB, 750GB, 750GB, and 500GB drive, the resulting size comes out at ~3.5TB instead of 4TB, and likewise, when I create it with a 2TB, 500GB, and 500GB drive, the size comes out at ~2.68TB. Does the zpool automatically use the smallest drive as a cache drive or something? Or am I just that bad at 1024*1024*xxx? (A rough unit-conversion check follows this post.)
    Another question I have is what role mdadm plays: if I created a linear span array under the zpool, would this still be considered ZFS?
    https://wiki.archlinux.org/index.php/Us … ementation
    Here's the wiki. Please critique it, comment on it, bash it, help it, anything.  I'm sort of finished until someone can point out any lame mistakes in the way I created the partition tables, or used a zpool when I shouldn't have, and whether the way I have done it with the bf00 partition type ('Solaris root') is indeed ZFS.
    I'm looking for complete filesystem integrity for a backup drive array, that's all.  Since it's only a backup array and I don't have a minimum of six like-sized hard drives, I don't expect a nice RAIDZ1 setup with two separate vdevs that can take full advantage of the checksum-based repair that striped arrays offer yet; I just want to get the ZFS system up and running on one array until I get more hard drives.
    https://wiki.archlinux.org/index.php/Us … ementation
    Last edited by wolfdogg (2013-01-18 05:16:45)
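    Regarding the pool-size question above: the gap is mostly the difference between the drive vendors' decimal terabytes and the binary TiB that zpool/zfs report (plus a little pool metadata, and drives shipping slightly under their nominal size); no drive is silently set aside as a cache device. A rough check of the arithmetic, assuming the reported figures are binary TiB:
    # echo "scale=2; 4000 * 10^9 / 1024^4" | bc
    prints roughly 3.63 for the 2TB + 750GB + 750GB + 500GB case, and
    # echo "scale=2; 3000 * 10^9 / 1024^4" | bc
    prints roughly 2.72 for the 2TB + 500GB + 500GB case, which lines up with the ~3.5TB and ~2.68TB figures once overhead is subtracted.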

    So, that article you have there is like a draft you are planning to add as a regular Arch wiki article?
    If that's the case, then read the Help:Style article; it has the rules on how you should write wiki articles on the ArchWiki.
    For instance, you are talking a lot in the first person, and giving personal comments. Like this:
    1) to get to step one on the ZFS-FUSE page https://wiki.archlinux.org/index.php/ZFS_on_FUSE i had to do a few things. that was to install yaourt, which was not necessarily straight forward. I will be vague in these instructions since i have already completed these steps so they are coming from memory
    That style of writing is more fitting for a blog than a wiki article. The style article mentions:
    Write objectively: do not include personal comments on articles, use discussion pages for this purpose. In general, do not write in first person.
    Check it here.
    So, instead of saying things similar to this:
    "I had to install blabla, I did it with yaourt like this: yaourt -S blabla, but you can download it manually from AUR, makepkg it, and then install it with pacman"
    It is better if you say it like this:
    "Install the package blabla from the AUR"
    Of course, using the wiki conventions for making "Install" a link to pacman, "blabla" a link to the aur page of the package, and "AUR" a link to the aur wiki article.
    Just read the article, and you will know how you should write it.

  • ZFS dataset issue after zone migration

    Hi,
    I thought I'd document this as I could not find any references to people having run into this problem during zone migration.
    Last night I moved a full-root zone from a Solaris 10u4 host to a Solaris 10u7 host. It has a delegated zfs pool.
    The migration was smooth, with a zoneadm halt, followed by a zoneadm detach on the other node.
    An unmount of the ufs SAN LUN (which contained the zone root) on host A and a mount on host B (which is sharing the storage between the two nodes).
    The zoneadm attach worked after complaining about missing patches and packages (since the zone was Solaris 10 u4 as well).
    A zoneadm attach -F started the zone on host B, but did not detect the ZFS pool.
    After searching for possible fixes, trying to identify the issue, I halted the zone again on host B and did a zoneadm attach -u (which upgraded the zone to u7).
    At which point, a zoneadm attach and zoneadm boot resulted in the ZFS dataset being visible again...
    All in all a smooth process, but I got a couple of gray hairs trying to figure out why the dataset was not visible after force-attaching the zone...
    Any insights from Sun Gurus are welcome.
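    For reference, a rough outline of the move described above (the zone name and paths here are illustrative, not taken from the post). On host A:
    # zoneadm -z myzone halt
    # zoneadm -z myzone detach
    # umount /zones/myzone
    On host B, after mounting the same UFS LUN at the zone root path:
    # zoneadm -z myzone attach -u
    # zoneadm -z myzone boot
    # zlogin myzone zfs list
    The attach -u (update on attach) step is what brought the packages and patches in line with u7 and, in the case above, made the delegated dataset visible again.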

    I am looking at a similar migration scenario, so my question is did you get the webserver back up as well?
    Cheers,
    Davy

  • ZFS TED issues

    Hi guys
    I have changed dns record in my distributor, after that ZFS TED is not working.
    zfs-startup.log shows:
    2012.02.15 11:55:07 [Console:Module] Module com.novell.application.zenworks.services.console.Console is done initializing.
    2012.02.15 11:55:07 [Console:Module] Module com.novell.application.zenworks.services.console.Console is done.
    2012.02.15 11:55:07 [main] Install directory: svc1:\zenworks\PDS\TED
    2012.02.15 11:55:07 [TED:Module] Module com.novell.application.zenworks.ted.TED is waiting for dependencies to initialize...
    2012.02.15 11:55:07 [main] Module started: com.novell.application.zenworks.ted.TED
    2012.02.15 11:55:07 [ZWS:Module] Module com.novell.application.zenworks.services.webserver.ZenWebServer is done waiting.
    2012.02.15 11:55:07 [ZWS:Module] Module com.novell.application.zenworks.services.webserver.ZenWebServer is initializing...
    2012.02.15 11:55:09 [ZWS:Module] Starting the ZENworks RPC Web Server (ZWS)
    2012.02.15 11:55:09 [ZWS:Module] Loading the zws.properties file.
    2012.02.15 11:55:09 [ZWS:Module] Loading servlets
    2012.02.15 11:55:09 [ZWS:Module] Loading servlet com.novell.application.zenworks.services.servlets.XMLRPCServlet.class.....
    2012.02.15 11:55:09 [ZWS:Module] Servlet loaded
    2012.02.15 11:55:09 [ZWS:Module] Socket timeout is set to 30000
    2012.02.15 11:55:09 [ZWS:Module] Creating a server socket on port 8089
    2012.02.15 11:55:09 [ZWS:Module] Module com.novell.application.zenworks.services.webserver.ZenWebServer is done initializing.
    2012.02.15 11:55:09 [ZWS:Module] Module com.novell.application.zenworks.services.webserver.ZenWebServer is running...
    2012.02.15 11:55:09 [TED:Module] Module com.novell.application.zenworks.ted.TED is done waiting.
    2012.02.15 11:55:09 [ZENLoader Admin:Module] Module com.novell.application.zenworks.loader.ZENLoaderXMLRPC is done waiting.
    2012.02.15 11:55:09 [ZENLoader Admin:Module] Module com.novell.application.zenworks.loader.ZENLoaderXMLRPC is initializing...
    2012.02.15 11:55:09 [ZENLoader Admin:Module] Module com.novell.application.zenworks.loader.ZENLoaderXMLRPC is done initializing.
    2012.02.15 11:55:09 [ZENLoader Admin:Module] Module com.novell.application.zenworks.loader.ZENLoaderXMLRPC is done.
    2012.02.15 11:55:09 [TED:Module] INFO: This PDS service's primary host is 10.x.x.1;server1.sub.domain.com
    2012.02.15 11:55:09 [TED:Module] Hosts switch = 10.x.x.1;server1.sub.domain.com
    2012.02.15 11:55:09 [TED:Module] *** Exception: java.net.UnknownHostException: 10.x.x.1;server1.sub.domain.com: 10.x.x.1;server1.sub.domain.com
    2012.02.15 11:55:09 [TED:Module] java.net.UnknownHostException: 10.x.x.1;server1.sub.domain.com: 10.x.x.1;server1.sub.domain.com
    at java.net.InetAddress.getAllByName0(InetAddress.java:1015)
    at java.net.InetAddress.getAllByName0(InetAddress.java:985)
    at java.net.InetAddress.getAllByName(InetAddress.java:979)
    at com.novell.application.zenworks.ted.TED.figureHosts(TED.java:901)
    at com.novell.application.zenworks.ted.TED.initialize(TED.java:1249)
    at com.novell.application.zenworks.loader.ZENModuleThread.run(ZENModuleThread.java:53)
    2012.02.15 11:55:09 [TED:Module] Exit code: -100
    2012.02.15 11:55:09 [Policy Package Agent:Module] Module com.novell.application.servman.facilitator.Facilitator is done waiting.
    2012.02.15 11:55:09 [Policy Package Agent:Module] Module com.novell.application.servman.facilitator.Facilitator is initializing...
    2012.02.15 11:55:09 [TED:Module] *** Exception: java.lang.ThreadDeath
    2012.02.15 11:55:09 [Policy Package Agent:Module] SM.ServerDN = SERVER1.OUNAME.ONAME
    2012.02.15 11:55:09 [TED:Module] java.lang.ThreadDeath
    at com.novell.application.zenworks.ted.TED.exit(TED.java:424)
    at com.novell.application.zenworks.ted.TED.figureHosts(TED.java:940)
    at com.novell.application.zenworks.ted.TED.initialize(TED.java:1249)
    at com.novell.application.zenworks.loader.ZENModuleThread.run(ZENModuleThread.java:53)
    - I updated the zfs-startup.xml file to add IP and dns record in the sections: hosts and primary hosts.
    <Parameter Name="Security">On</Parameter>
    <Parameter Name="Version2">Off</Parameter>
    <Parameter Name="HTTP">Off</Parameter>
    <Parameter Name="Hosts">10.x.x.1;server1.sub.domain.com</Parameter>
    <Parameter Name="Domain">DOMAINNAME</Parameter>
    <Parameter Name="DistributorDN">Distributor-SERVER1.ZEN.OUNAME.ONAME</Parameter>
    <Parameter Name="Password">distributor_password</Parameter>
    <Parameter Name="PrimaryHost">10.x.x.1;server1.sub.domain.com </Parameter>
    <Parameter Name="eDirectoryServers">
    <Value>server-ds1.domain.com</Value>
    - I have checked all ZFS eDirectory objects using C1 and iManager to verify the DNS record; the network addresses are correct on the Other properties tab in C1 for the Distributor and Subscriber.
    - I'd appreciate any recommendations from you.
    Thanks in advance
    William

    wbelandria,
    It appears that in the past few days you have not received a response to your
    posting. That concerns us, and has triggered this automated reply.
    Has your problem been resolved? If not, you might try one of the following options:
    - Visit http://support.novell.com and search the knowledgebase and/or check all
    the other self support options and support programs available.
    - You could also try posting your message again. Make sure it is posted in the
    correct newsgroup. (http://forums.novell.com)
    Be sure to read the forum FAQ about what to expect in the way of responses:
    http://forums.novell.com/faq.php
    If this is a reply to a duplicate posting, please ignore and accept our apologies
    and rest assured we will issue a stern reprimand to our posting bot.
    Good luck!
    Your Novell Product Support Forums Team
    http://forums.novell.com/

  • Zfs boot issues

    Hi,
    I've got the following issue: when I try to boot my server, I'm getting the following message:
    Executing last command: boot disk11
    Boot device: /pci@8f,2000/scsi@1/disk@1,0  File and args:
    SunOS Release 5.10 Version Generic_147440-26 64-bit
    Copyright (c) 1983, 2012, Oracle and/or its affiliates. All rights reserved.
    NOTICE: Can not read the pool label from '/pci@8f,2000/scsi@1/disk@1,0:a'
    NOTICE: spa_import_rootpool: error 5
    Cannot mount root on /pci@8f,2000/scsi@1/disk@1,0:a fstype zfs
    panic[cpu0]/thread=1810000: vfs_mountroot: cannot mount root
    000000000180d950 genunix:vfs_mountroot+370 (1898400, 18c2000, 0, 1295400, 1299000, 1)
      %l0-3: 00000300038a6008 000000000188f9e8 000000000113a400 00000000018f8c00
      %l4-7: 0000000000000600 0000000000000200 0000000000000800 0000000000000200
    000000000180da10 genunix:main+120 (189c400, 18eb000, 184ee40, 0, 1, 18f5800)
      %l0-3: 0000000000000001 0000000070002000 0000000070002000 0000000000000000
      %l4-7: 0000000000000000 000000000181d400 000000000181d6a8 0000000001297c00
    skipping system dump - no dump device configured
    rebooting...
    Resetting ...
    I'm able to boot the server in failsafe mode and successfully import the zfs pool. The pool is fine.
    Here is the zfs pool configuration:
    # zpool status
      pool: zfsroot
    state: ONLINE
    scan: scrub repaired 0 in 0h35m with 0 errors on Thu Jan 30 13:10:36 2014
    config:
            NAME          STATE     READ WRITE CKSUM
            zfsroot       ONLINE       0     0     0
              mirror-0    ONLINE       0     0     0
                c2t0d0s2  ONLINE       0     0     0
                c2t1d0s2  ONLINE       0     0     0
    errors: No known data errors
    Could somebody help?
    Thanks in advance.
    PS:
    SunOS Release 5.10 Version Generic_147440-01 64-bit
    Server: Fujitsu PRIMEPOWER850 2-slot 12x SPARC64 V
    OBP: 3.21.9-1

    Here are the details:
    format output:
    AVAILABLE DISK SELECTIONS:
           0. c0t1d0 <FUJITSU-MAP3735NC-3701 cyl 24345 alt 2 hd 8 sec 737>
              /pci@87,2000/scsi@1/sd@1,0
           1. c2t0d0 <COMPAQ-BD14689BB9-HPB1 cyl 65533 alt 2 hd 5 sec 875>
              /pci@8f,2000/scsi@1/sd@0,0
           2. c2t1d0 <COMPAQ-BD14689BB9-HPB1 cyl 65533 alt 2 hd 5 sec 875>
              /pci@8f,2000/scsi@1/sd@1,0
    show-disks in OBP:
    {0} ok show-disks
    a) /pci@8d,4000/scsi@3,1/disk
    b) /pci@8d,4000/scsi@3/disk
    c) /pci@8f,2000/scsi@1,1/disk
    d) /pci@8f,2000/scsi@1/disk
    e) /pci@85,4000/scsi@3,1/disk
    f) /pci@85,4000/scsi@3/disk
    g) /pci@87,2000/scsi@1,1/disk
    h) /pci@87,2000/scsi@1/disk
    q) NO SELECTION
    Enter Selection, q to quit:
    The boot-device value is "disk1:c", but I switched off autoboot (for debug purposes) and boot the server manually with the command "boot disk11..."
    Here is devalias output:
    {0} ok devalias
    tape                     /pci@87,2000/scsi@1,1/tape@5,0
    cdrom                    /pci@87,2000/scsi@1,1/disk@4,0:f
    disk11                   /pci@8f,2000/scsi@1/disk@1,0
    disk10                   /pci@8f,2000/scsi@1/disk@0,0
    disk1                    /pci@87,2000/scsi@1/disk@1,0
    disk0                    /pci@87,2000/scsi@1/disk@0,0
    disk                     /pci@87,2000/scsi@1/disk@0,0
    scsi                     /pci@87,2000/scsi@1
    obp-net                  /pci@87,4000/network@1,1
    net                      /pci@87,4000/network@1,1
    ttyb                     /pci@87,4000/ebus@1/FJSV,se@14,400000:b
    ttya                     /pci@87,4000/ebus@1/FJSV,se@14,400000:a
    scf                      /pci@87,4000/ebus@1/FJSV,scfc@14,200000

  • SCXI-1001 Fuse Issues

    I returned from the weekend to find that one of my SCXI-1001 chassis had blown a main power fuse. I replaced the fuse and turned it back on, upon which the fuse blew again. I then took out all the modules, replaced the fuse, and turned it on again; normal power-up occurred. I then turned off the chassis, inserted a module, turned it back on, and repeated this process until the fuse blew again (4 modules later). This would lead me to believe that the specific module was bad. So, I repeated the steps with modules known to be OK. After 3 modules, again, another fuse gone.
    Is there any more troubleshooting I should do? What actions should I take to get this problem resolved and ensure all the modules involved are OK?

    Hello MTL,
    Have you seen this forum post:
    http://forums.ni.com/ni/board/message?board.id=300&message.id=1380&requireLogin=False
    Something I would also try is to determine whether it is the combination of modules. If it is not the modules, the last thing it could be is that the fuse is not a slow-blow type; at the very beginning there is a little more current than later. The forum post also has a link to the recommended fuse (if you are not using NI's).
    Hope this helps; if not, I would call us to arrange sending the unit back for repair.
    Yardov
    Gerardo O.
    RF Systems Engineering
    National Instruments

  • ZFS mounting issue, help!

    Hello,
    I'm running a server with Solaris 11 and a single SATA disk for testing the OS out. I had a couple of Zones running and VirtualBox installed with a couple of VMs running some trading software. I came into work this morning; the server was off, and after switching it back on Solaris wouldn't boot. :( I re-installed Solaris on another drive and want to mount my old drive at a mountpoint on my new system to (hopefully) retrieve the VHD images. I looked around a few other forums, but haven't found a useful solution to my problem. Any help would be great! PS, I'm used to Linux, so I'm presuming I am able to do something like "mount /dev/sde1 /mynewmountpoint", but with "zfs mount xyz" instead....
    Thanks a lot,
    Tobias

    You can get a list of the zpools that are available for import with:
    # zpool import
    From there, you will probably find that there is a zpool called rpool that you can import. Unfortunately, you probably already have a zpool named rpool imported - that would be the one from which Solaris is currently booted. You can import the zpool with a different name. You will also want to import it at an alternate root. For example:
    # zpool import -R /tmp/altroot rpool oldrpool
    Then you will find the things you are looking for under /tmp/altroot
    If you want to get the zones from the old rpool to the new rpool, you should be able to do that pretty easily too. Assuming your zones are in the dataset oldrpool/zones:
    # zfs snapshot -r oldrpool/zones@saveme
    # zfs send -rc oldrpool/zones@saveme | zfs recv newrpool/zones
    # cp /tmp/altroot/etc/zones/$zonename.xml /etc/zones/temp-$zonename.xml
    # zonecfg -z $zonename create -t temp-$zonename
    # rm /etc/zones/temp-$zonename.xml
    # zoneadm -z $zonename attach -u
    I did not test the procedure above. I believe it will work but there may be some small detail missing or typo.
    When you are done with the old rpool:
    # zpool export oldrpool

  • Hard drive array losing access - suspect controller - zfs

    I am having a problem with one of my arrays; this is a ZFS filesystem.  It consists of a 1x500GB, 2x750GB, and 1x2TB linear array. The pool is named 'pool'.    I have to mention here that I don't have enough hard drives for a raidz (RAID5) setup yet, so there is no actual redundancy: ZFS can't auto-repair from a copy because there is none, so all the auto-repair features can be thrown out the door in this equation. That means I believe it is possible for the filesystem to be easily corrupted by the controller in this specific case, which is what I suspect.  Please keep that in mind while reading the following.
    I just upgraded my binaries: I removed zfs-fuse and installed archzfs.  Did I remove it completely?  Not sure.  I wasn't able to get my array back up and running until I fiddled with the SATA cables, moved around the SATA connectors, and tinkered with the BIOS drive detection. After I got it running, I copied some files off of it over Samba, thinking it might not last long.  The copy was successful, but problems began surfacing again shortly after.  So now I suspect I have a bad controller on my Gigabyte board.  I found recently someone else who had this issue, so I'm thinking it's not the hard drive. 
    I did some smartmontools tests last night and found that all drives show good on a short test; they all passed.  Today I'm not having so much luck with getting access.  There are hangs on reboot, and the drive light stays on.  When I try to run zfs and zpool commands, the system hangs.  I have been getting what appear to be HD errors as well; I'll have to type them in manually here, since there is no copy and paste from the console to the machine I'm posting from, and the errors aren't showing up via SSH or I would copy them from the terminal that I currently have open.
    ata7: SRST failed (errno=-16)
    reset failed, giving up,
    end_request I/O error, dev sdc, sector 637543760
    ' ' ' ' '''' ' ' ''' sector 637543833
    sd 6:0:0:0 got wrong page
    ' ' ' ' ' '' asking for cache data failed
    ' ' ' ' ' ' assuming drive cache: write through
    info task txg_sync:348 blocked for more than 120 seconds
    And so forth. When I boot I see this each time, which makes me feel that the HD is going bad; however, I still want to believe it's the controller.
    Note, it seems only those two sectors show up; is it possible that the controller shot out those two sectors with bad data?  {Note: I had a Windows system previously installed on this motherboard, and after a few months of running it lost a couple of RAID arrays of data as well.}   
    failed command: WRITE DMA EXT
    ... more stuff here...
    ata7.00 error DRDY ERR
    ICRC ABRT
    blah blah blah.
    So now I can give you some info from the diagnosis that I'm doing on it, copied from a shell terminal.  Note that the following metadata errors JUST appeared after I was trying to delete some files; copying didn't cause this, so it appears either something is currently degrading, or it just inevitably happened because of a bad controller.
    [root@falcon wolfdogg]# zpool status -v
    pool: pool
    state: ONLINE
    status: One or more devices are faulted in response to IO failures.
    action: Make sure the affected devices are connected, then run 'zpool clear'.
    see: http://zfsonlinux.org/msg/ZFS-8000-HC
    scan: resilvered 33K in 0h0m with 0 errors on Sun Jul 21 03:52:53 2013
    config:
    NAME                                          STATE     READ WRITE CKSUM
    pool                                          ONLINE       0    26     0
      ata-ST2000DM001-9YN164_W1E07E0G             ONLINE       6    41     0
      ata-ST3750640AS_5QD03NB9                    ONLINE       0     0     0
      ata-ST3750640AS_3QD0AD6E                    ONLINE       0     0     0
      ata-WDC_WD5000AADS-00S9B0_WD-WCAV93917591   ONLINE       0     0     0
    errors: Permanent errors have been detected in the following files:
    <metadata>:<0x0>
    <metadata>:<0x1>
    <metadata>:<0x14>
    <metadata>:<0x15>
    <metadata>:<0x16d>
    <metadata>:<0x171>
    <metadata>:<0x277>
    <metadata>:<0x179>
    If one of the devices is faulted, then why are all 4 of them stating ONLINE???
    [root@falcon dev]# smartctl -a /dev/sdc
    smartctl 6.1 2013-03-16 r3800 [x86_64-linux-3.9.9-1-ARCH] (local build)
    Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org
    === START OF INFORMATION SECTION ===
    Vendor: /6:0:0:0
    Product:
    User Capacity: 600,332,565,813,390,450 bytes [600 PB]
    Logical block size: 774843950 bytes
    scsiModePageOffset: response length too short, resp_len=47 offset=50 bd_len=46
    scsiModePageOffset: response length too short, resp_len=47 offset=50 bd_len=46
    >> Terminate command early due to bad response to IEC mode page
    A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.
    my drive list
    [root@falcon wolfdogg]# ls -lah /dev/disk/by-id/
    total 0
    drwxr-xr-x 2 root root 280 Jul 21 03:52 .
    drwxr-xr-x 4 root root 80 Jul 21 03:52 ..
    lrwxrwxrwx 1 root root 9 Jul 21 03:52 ata-_NEC_DVD_RW_ND-2510A -> ../../sr0
    lrwxrwxrwx 1 root root 9 Jul 21 03:52 ata-ST2000DM001-9YN164_W1E07E0G -> ../../sdc
    lrwxrwxrwx 1 root root 9 Jul 21 03:52 ata-ST3250823AS_5ND0MS6K -> ../../sdb
    lrwxrwxrwx 1 root root 10 Jul 21 03:52 ata-ST3250823AS_5ND0MS6K-part1 -> ../../sdb1
    lrwxrwxrwx 1 root root 10 Jul 21 03:52 ata-ST3250823AS_5ND0MS6K-part2 -> ../../sdb2
    lrwxrwxrwx 1 root root 10 Jul 21 03:52 ata-ST3250823AS_5ND0MS6K-part3 -> ../../sdb3
    lrwxrwxrwx 1 root root 10 Jul 21 03:52 ata-ST3250823AS_5ND0MS6K-part4 -> ../../sdb4
    lrwxrwxrwx 1 root root 9 Jul 21 03:52 ata-ST3750640AS_3QD0AD6E -> ../../sde
    lrwxrwxrwx 1 root root 9 Jul 21 03:52 ata-ST3750640AS_5QD03NB9 -> ../../sdd
    lrwxrwxrwx 1 root root 9 Jul 21 03:52 ata-WDC_WD5000AADS-00S9B0_WD-WCAV93917591 -> ../../sda
    lrwxrwxrwx 1 root root 9 Jul 21 03:52 wwn-0x5000c50045406de0 -> ../../sdc
    lrwxrwxrwx 1 root root 9 Jul 21 03:52 wwn-0x50014ee1ad3cc907 -> ../../sda
    And this one I don't get:
    [root@falcon dev]# zfs list
    no datasets available
    I remember creating a dataset last year; why is it reporting none, but still working?
    Is anybody seeing any patterns here?  I'm prepared to destroy the pool and recreate it just to see if it's bad data. But what I'm thinking to do now is this: since the problem appears to only be happening on the 2TB drive, either the controller just can't handle it, or the drive is bad.  So, to rule out the controller there might be hope.  I have a SCSI card (PCI to SATA) connected that one of the drives in the array is attached to, since I only have 4 SATA slots on the mobo; I keep the 500GB connected there and have not yet tried the 2TB there.  So if I connect this 2TB drive to that card I should see the problems disappear, unless the drive got corrupted already. 
    Does anyone experienced on the Arch forums know what's going on here?  Did I mess up by not completely removing zfs-fuse, is my HD going bad, is my controller bad, or did ZFS just get misconfigured?
    Last edited by wolfdogg (2013-07-21 19:38:51)
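    The status output above already names the next step once the hardware side is sorted out; as a minimal sketch (assuming the pool really is named 'pool' as shown, and that the cabling/controller problem has been dealt with first):
    # zpool clear pool
    # zpool scrub pool
    # zpool status -v pool
    zpool clear resets the error counters, and the scrub then re-reads everything reachable, so any permanent damage left behind by the earlier I/O errors will show up in the status output.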

    OK, something interesting happened when I connected it (the badly behaving 2TB drive) to the PCI SATA card.  First of all, no errors on boot.... Then take a look at this: some clues, some remnants of the older zfs-fuse setup, and a working pool.
    [root@falcon wolfdogg]# zfs list
    NAME                  USED  AVAIL  REFER  MOUNTPOINT
    pool                 2.95T   636G    23K  /pool
    pool/backup          2.95T   636G  3.49G  /backup
    pool/backup/falcon   27.0G   636G  27.0G  /backup/falcon
    pool/backup/redtail  2.92T   636G  2.92T  /backup/redtail
    [root@falcon wolfdogg]# zpool status
    pool: pool
    state: ONLINE
    status: The pool is formatted using a legacy on-disk format. The pool can
    still be used, but some features are unavailable.
    action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
    pool will no longer be accessible on software that does not support
    feature flags.
    scan: resilvered 33K in 0h0m with 0 errors on Sun Jul 21 04:52:52 2013
    config:
    NAME                                          STATE     READ WRITE CKSUM
    pool                                          ONLINE       0     0     0
      ata-ST2000DM001-9YN164_W1E07E0G             ONLINE       0     0     0
      ata-ST3750640AS_5QD03NB9                    ONLINE       0     0     0
      ata-ST3750640AS_3QD0AD6E                    ONLINE       0     0     0
      ata-WDC_WD5000AADS-00S9B0_WD-WCAV93917591   ONLINE       0     0     0
    errors: No known data errors
    Am I looking at a needed BIOS update here, so the controller can talk to the 2TB drive properly?
    Last edited by wolfdogg (2013-07-21 19:50:18)

  • Questions about ZFS

    Hello
    After having some serious issues with data corruption on ext3 and ext4 over time, I have decided to start using ZFS on the disk. From what I have read, ZFS is a superb filesystem for avoiding data corruption. As far as I understand, zfs-fuse (http://aur.archlinux.org/packages.php?ID=8003) is the way to use ZFS with Linux, or is there any better option?
    And I've heard that ZFS demands a lot of the machine? And are there any other alternatives?

    He means until it's stable in the kernel, which could take a long while I think (personal guess: 2 years or so).

  • ZFS on Linux & mplayer

    Hi,
    I have a problem with the combination of zfsonlinux and mplayer.
    Actually I'm not quite sure if one of them is the culprit, but I have some pointers.
    Problem: When playing a movie (e.g. an mkv ~ 2GB or a vob ~ 6GB), every few minutes the movie briefly stutters (picture & audio).
    Observations:
    1) This didn't happen with zfs-fuse which is supposedly slower than zfs on linux.
    2) I can copy a vob file in under a minute to my home directory (ext4), hence read performance should be reasonable.
    3) When playing the movie from my home dir, no stuttering occurs. Hence I'd rule out an mplayer or sound card issue.
    Now I don't know what I can do about zfs on linux. Read performance seems to be ok and there seem to be no sound card issues. When I run bonnie or iozone, cpu usage stays below 20%, so that shouldn't cause an issue as well.
    Any hints or tips on what I could look at next? Thanks!
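    One way to put a number on the read-rate comparison in observation 2 (a minimal sketch; the file paths are illustrative):
    # dd if=/tank/movies/film.vob of=/dev/null bs=1M count=2048
    # dd if=/home/user/film.vob of=/dev/null bs=1M count=2048
    Running both against a file that hasn't been read since boot avoids measuring the page cache, and dd reports the throughput for each, so the ZFS mount and the ext4 home directory can be compared directly.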

  • [SOLVED] systemd NTFS partition issues

    Hey archers,
    hope someone here can help me
    I recently began testing systemd and I am facing a problem where I have to press Ctrl+D or give the root password during every boot, due to systemd having problems with my NTFS partition (I mount it at boot, as I have symlinks to that partition for documents and programs which run in Wine).
    I have not enabled anything to do with mounting, or even added the fuse module to load, as systemd has already picked that up!
    Here are the entries I think are related to this, from journalctl:
    Aug 29 07:57:37 b0x ntfs-3g[568]: Version 2012.1.15 external FUSE 29
    Aug 29 07:57:37 b0x ntfs-3g[568]: Mounted /dev/sdb1 (Read-Write, label "Win7-sys", NTFS 3.1)
    Aug 29 07:57:37 b0x ntfs-3g[568]: Cmdline options: rw,noatime,sync,gid=100,umask=002
    Aug 29 07:57:37 b0x ntfs-3g[568]: Mount options: rw,sync,allow_other,nonempty,noatime,fsname=/dev/sdb1,blkdev,blksize=4096,default_permissions
    Aug 29 07:57:37 b0x ntfs-3g[568]: Global ownership and permissions enforced, configuration type 7
    Aug 29 07:57:37 b0x ntfs-3g[568]: Warning : using problematic uid==0 and gid!=0
    Aug 29 07:57:37 b0x mount[572]: Mount is denied because the NTFS volume is already exclusively opened.
    Aug 29 07:57:37 b0x mount[572]: The volume may be already mounted, or another software may use it which
    Aug 29 07:57:37 b0x mount[572]: could be identified for example by the help of the 'fuser' command.
    Aug 29 07:57:37 b0x systemd[1]: media-Win7.mount mount process exited, code=exited status=16
    Aug 29 07:57:37 b0x systemd[1]: Job local-fs.target/start failed with result 'dependency'.
    Aug 29 07:57:37 b0x systemd[1]: Triggering OnFailure= dependencies of local-fs.target.
    Aug 29 07:57:37 b0x systemd[1]: Job systemd-user-sessions.service/start failed with result 'dependency'.
    Aug 29 07:57:37 b0x systemd[1]: Job lightdm.service/start failed with result 'dependency'.
    Aug 29 07:57:37 b0x systemd[1]: Job graphical.target/start failed with result 'dependency'.
    Aug 29 07:57:37 b0x systemd[1]: Job multi-user.target/start failed with result 'dependency'.
    Aug 29 07:57:37 b0x systemd[1]: Job systemd-logind.service/start failed with result 'dependency'.
    Aug 29 07:57:37 b0x systemd[1]: Job dbus.service/start failed with result 'dependency'.
    Aug 29 07:57:37 b0x systemd[1]: Job [email protected]/start failed with result 'dependency'.
    Aug 29 07:57:37 b0x systemd[1]: Job hwclock.service/start failed with result 'dependency'.
    Aug 29 07:57:37 b0x systemd[1]: Job syslog-ng.service/start failed with result 'dependency'.
    Aug 29 07:57:37 b0x systemd[1]: Job network.service/start failed with result 'dependency'.
    Aug 29 07:57:37 b0x systemd[1]: Job cronie.service/start failed with result 'dependency'.
    Aug 29 07:57:37 b0x systemd[1]: Job snmpd.service/start failed with result 'dependency'.
    Aug 29 07:57:37 b0x systemd[1]: Job samba.service/start failed with result 'dependency'.
    Aug 29 07:57:37 b0x systemd[1]: Job webmin.service/start failed with result 'dependency'.
    Aug 29 07:57:37 b0x systemd[1]: Job systemd-tmpfiles-clean.timer/start failed with result 'dependency'.
    Aug 29 07:57:37 b0x systemd-journal[181]: Journal stopped
    Aug 29 07:57:37 b0x systemd-journal[584]: Journal started
    Aug 29 07:57:37 b0x ntfs-3g[568]: Unmounting /dev/sdb1 (Win7-sys)
    Aug 29 07:57:37 b0x systemd-udevd[224]: '/usr/sbin/alsactl restore 0' [500] terminated by signal 15 (Terminated)
    Aug 29 07:57:38 b0x systemd[1]: Startup finished in 3s 111ms 648us (kernel) + 6s 425ms 155us (userspace) = 9s 536ms 803us.
    Aug 29 07:57:38 b0x systemd[582]: Failed at step EXEC spawning /bin/plymouth: No such file or directory
    Aug 29 07:58:25 b0x systemd[1]: Cannot add dependency job for unit avani-dnsconfd.service, ignoring: Unit avani-dnsconfd.service failed to load: No such file or directory. See system lo...e' for details.
    Aug 29 07:58:25 b0x systemd[1]: Socket service syslog.service not loaded, refusing.
    Aug 29 07:58:26 b0x arch-modules-load[609]: mkdir: cannot create directory ‘/run/modules-load.d’: File exists
    Aug 29 07:58:26 b0x systemd-modules-load[706]: Module 'vhba' is already loaded
    Aug 29 07:58:26 b0x systemd-modules-load[706]: Module 'fuse' is already loaded
    Aug 29 07:58:26 b0x systemd-fsck[646]: public: clean, 385878/2039808 files, 5060668/8159011 blocks
    Aug 29 07:58:26 b0x systemd-fsck[653]: VM: clean, 228/5677056 files, 5637221/22680575 blocks
    Aug 29 07:58:26 b0x systemd-fsck[644]: Home: clean, 90204/1327104 files, 984778/5305458 blocks
    Aug 29 07:58:26 b0x ntfs-3g[871]: Version 2012.1.15 external FUSE 29
    Aug 29 07:58:26 b0x ntfs-3g[871]: Mounted /dev/sdb1 (Read-Write, label "Win7-sys", NTFS 3.1)
    Aug 29 07:58:26 b0x ntfs-3g[871]: Cmdline options: rw,gid=100,fmask=113,dmask=002
    Aug 29 07:58:26 b0x ntfs-3g[871]: Mount options: rw,allow_other,nonempty,relatime,fsname=/dev/sdb1,blkdev,blksize=4096,default_permissions
    Aug 29 07:58:26 b0x ntfs-3g[871]: Global ownership and permissions enforced, configuration type 7
    Aug 29 07:58:26 b0x ntfs-3g[871]: Warning : using problematic uid==0 and gid!=0
    Here is the entry in /etc/fstab for this partition:
    ## Entry for /dev/sdb1 SYSTEM:(Win7)
    UUID=44083B9668A3E0CC /media/Win7 ntfs-3g gid=users,fmask=113,dmask=002 0 0
    I have been all over Google and am unable to find anything which can help.
    As stated before, I have links to this partition and so really want/need this partition to be mounted at boot.
    Any help on this will be greatly appreciated
    Thanks in advance
    EDIT #1
    rebooted again, still the same happening
    ran:
    $ sudo mount -l
    proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
    sys on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
    dev on /dev type devtmpfs (rw,nosuid,relatime,size=3022708k,nr_inodes=755677,mode=755)
    run on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755)
    /dev/sda1 on / type ext4 (rw,relatime,data=ordered) [Arch-sys]
    securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
    tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
    devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
    tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,mode=755)
    cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
    cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
    cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
    cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
    cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
    cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
    cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls)
    cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
    systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=30,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
    debugfs on /sys/kernel/debug type debugfs (rw,relatime)
    mqueue on /dev/mqueue type mqueue (rw,relatime)
    hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
    fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
    tmpfs on /tmp type tmpfs (rw,nosuid,nodev,relatime)
    /dev/sdb3 on /media/wine type ext4 (rw,noatime,errors=remount-ro,data=ordered) [wine] <<<THIS SHOULD NOT BE HERE!<<<<<<<<<
    /dev/sdd1 on /media/spare2 type ext4 (rw,noatime,errors=remount-ro,data=ordered) [spare2] <<<THIS SHOULD NOT BE HERE!<<<<<<<<<<
    /dev/sdc1 on /media/spare type ext4 (rw,noatime,errors=remount-ro,data=ordered) [spare] <<<THIS SHOULD NOT BE HERE!<<<<<<<<<<<
    /dev/sde1 on /media/USB-HDD2 type vfat (rw,noatime,sync,gid=100,fmask=0002,dmask=0002,allow_utime=0020,codepage=cp437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro) [USB-HDD2] <<<<<THIS IS NORMAL
    /dev/sdb4 on /media/pac type ext4 (rw,noatime,errors=remount-ro,data=ordered) [pac] <<<THIS SHOULD NOT BE HERE!
    /dev/sdd1 on /media/Spare2 type ext4 (rw,relatime,errors=remount-ro,data=ordered) [spare2] <<<<<THIS IS NORMAL
    /dev/sdc1 on /media/Spare type ext4 (rw,relatime,errors=remount-ro,data=ordered) [spare] <<<<<THIS IS NORMAL
    /dev/sdb2 on /media/VM type ext4 (rw,relatime,errors=remount-ro,data=ordered) [VM] <<<<<THIS IS NORMAL
    /dev/sdb3 on /var/wine type ext4 (rw,relatime,errors=remount-ro,data=ordered) [wine] <<<<<THIS IS NORMAL
    /dev/sdb4 on /var/cache/pacman type ext4 (rw,relatime,errors=remount-ro,data=ordered) [pac] <<<<<THIS IS NORMAL
    /dev/sdb1 on /media/Win7 type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096) [Win7-sys] <<<<<THIS IS NORMAL
    /dev/sda3 on /public type ext4 (rw,relatime,errors=remount-ro,data=ordered) [public] <<<<<THIS IS NORMAL
    /dev/sda5 on /home type ext4 (rw,relatime,errors=remount-ro,data=ordered) [Home] <<<<<THIS IS NORMAL
    gvfs-fuse-daemon on /run/user/1000/gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,relatime,user_id=1000,group_id=100)
    binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
    gvfs-fuse-daemon on /root/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,relatime,user_id=0,group_id=0)
    As you can see, my partitions are being mounted TWICE, which is not what I want or expected!
    Is there more documentation on what systemd does with mounts that could explain why I have multiple mount points for partitions? Or is this due to systemd discovering my partitions and mounting them at points based on label names, and then parsing my fstab as well?
    I have read the wiki, but there is very little info there and the links have not provided an explanation for this unwanted behaviour.
    EDIT #2
    >>>>>>>>>>>SOLVED<<<<<<<<<<<<<<
    not an NTFS or FUSE issue
    I had previously installed mnttools!
    removed & now all is well
    sorry
    Last edited by t0m5k1 (2012-08-29 08:30:11)

    OK,
    After being spurred on to try to do this thing properly, this is what I came up with today.
    My fstab line (for a USB NTFS disk):
    /dev/sdb1 /media/samsung ntfs-3g noauto,users,rw,nodev 0 0
    Then I created the /media/samsung folder and gave the audio group read/write permissions.
    It seems that non-root users can only mount an ntfs partition if they use a version of ntfs-3g with fuse included, so I replaced ntfs-3g with the version from AUR, having removed from the PKGBUILD file the option "-with-fuse=external" (see this thread: http://bbs.archlinux.org/viewtopic.php?id=44844 ).   I also had to set
    the ntfs-3g binary to setuid-root, dealt with here: http://www.tuxera.com/community/ntfs-3g … privileged (note- the instructions say this is discouraged, but it seems using ntfs partitions in linux requires some compromises).
    I can now mount the drive as an ordinary user.
    Then I set mpd back to run as user mpd, checked the audio group had access to all the mpd folders, and all was well.
    One hiccup which you might not have: mpd was unable to access my (external) sound card at first.  To solve this one, I used
    chmod 770 /dev/snd -R && chgrp audio /dev/snd -R
    As far as I can remember, that's everything.
    Last edited by Henry Flower (2010-04-20 12:54:26)

  • [SOLVED - kind-of] Problems with ZFS

    I had a HDD with ZFS on it in my old computer (was using zfs-fuse at the time). Then I moved it to my new computer.
    I installed the zfs kernel module from the AUR, but I could not mount the filesystems:
    zpool import -a
    zfs mount -a
    and nothing happened.
    Then I upgraded the zfs pool to a newer version (it was 6, upgraded to 28).
    Still nothing. Then I tried to upgrade the zfs filesystems:
    zfs upgrade -r -a
    cannot set property for 'home': pool and or dataset must be upgraded to set this property or value
    cannot set property for 'home/user1': pool and or dataset must be upgraded to set this property or value
    cannot set property for 'home/shared': pool and or dataset must be upgraded to set this property or value
    cannot set property for 'home/user2': pool and or dataset must be upgraded to set this property or value
    0 filesystems upgraded
    The current zfs version for the filesystems is 1, and the version supported by the module is 5.
    What's wrong?
    UPDATED:
    Sorry folks - didn't read carefully http://zfsonlinux.org:
    "Sorry, the ZFS Posix Layer (ZPL) which allows you to mount the file system is still a work in progress."
    Last edited by SpiegS (2010-12-15 00:05:20)

  • Why is my iMac 27" blowing fuses in plug??

    Please, why is my iMac 27" blowing fuses in the plug???

    I'd start by checking your electrical system. Obviously it's a hardware issue; my guess is it's not the iMac. If you can, take it to a neighbor's and test it there. If it still blows fuses, take it in immediately to your local Apple Store or AASP.

  • Delete download/cached pkg files?

    In addition to a 20tb Solaris server, I also run Solaris 11 Express in a VM on my notebook for portable external ZFS storage (long story on how and why). As you might imagine, I'm interested in keeping the system virtual image (which is stored on my notebook drive) small.
    After doing a pkg update which turned out to be pretty substantial, the on-disk size (within the VM not just the virtual container) grew a good deal.
    So my question: Does the package manager store downloaded package files even though it doesn't need them any more? And if so, can they be safely deleted?
    E.g., Debian Linux systems have "apt-get clean", which deletes the package cache. Any Solaris analogue? (I know Solaris isn't Linux. I mean I really, REALLY know! ;-)
    Thanks in advance!
    -Jim
    +PS: In case this question gets derailed with debate about Solaris not being a suitable solution for portable external notebook-based ZFS storage, let me just completely agree! However, after about 100 hours of work (in total over a couple of years), I've developed a solution that is rock-solid, gets a reliable 20-30 MB/s throughput [not amazing but surprising given the concept], and doesn't get confused by changing device ports, IDs, etc. I push it hard every day, all day. The drives are velcro'ed to the notebook lid. Of course I first preferentially tried Linux and Btrfs, then ZFS-FUSE, then BSD ZFS, but this is hands-down the best solution. But it wasn't easy. In fact it was really, really hard to get everything ironed out. But the only commercial alternative...doesn't exist. Yet. (But I'm working on it.)+

    doublemeat wrote:
    Thanks, that was just the answer I was looking for! Well except for the "you may have ruined your system" part. Fortunately I was able to restore from a previous snapshot.
    I didn't want to do a full roll back so I copied everything under /var/pkg/cache from a previous snapshot, to the current. When I ran # pkg update, it complained about sources being wrong or something (the output was buggy, clearly missing lines, and didn't make sense). It recommended I run # pkg remove-source solaris. Which didn't seem very wise, but I did it anyway. Then played around with the GUI version, thinking that might get the package source set straight. Now when I run # pkg update, it eventually just comes back with No updates available for this image. Which is the right answer, so hopefully it's OK now!
    I have no idea what "remove-source" is; that's certainly never been a valid pkg(1) subcommand.
    You also didn't restore all of the directory content you should have (I would have suggested all of /var/pkg).
    As a result, I would suggest running "pkg verify" to determine the state of your system.
    For systems that are running a version of Solaris 11 Express (only), you can safely remove any /var/pkg/publisher/*/file directories. (Just the 'file' directories and their contents.)
    I've been having a tough time finding what old package/log/temp/dump files can be safely deleted. (So far this was the most helpful but not very complete: http://docs.oracle.com/cd/E23824_01/html/821-1451/sysresdiskuse-19.html.) As with many things Solaris - esp. recent versions - one can spend all day Googling and scouring, and still not find a satisfactory answer. Over a couple of years I've been compiling the answers I find and/or work out myself. Here's the section on what I believe are all and only files that can be safely deleted once and/or regularly:
    The directory /var/pkg/cache and its contents may be safely removed, but it should not be necessary to manually remove it or its contents as it is managed automatically by the package system.
    Most of the cache management issues you might have had should be resolved in Solaris 11 FCS (the general release).
    flush-content-cache-on-success True is the default in S11 FCS as well.
    Further refinements to the cache management system will be made as time goes on.
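    Condensing the cleanup advice above into commands (a sketch, applicable to Solaris 11 Express as stated; take a snapshot of the boot environment first):
    # du -sh /var/pkg/cache
    # rm -rf /var/pkg/cache/*
    # rm -rf /var/pkg/publisher/*/file
    # pkg verify
    The first command shows how much the download cache currently occupies, the two rm lines remove only the content the reply identifies as safe to delete, and pkg verify sanity-checks the installed image afterwards.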
