ZFS help

I am using OpenSolaris and need help with ZFS. I have a pool on the 3rd partition of a drive. The other partitions are NTFS. The disk is 160 GB and the 3rd partition is about 108 GB. The pool's size is 20.3G and available is 5.05G. I recently filled it up, so I moved some files off of it. There is plenty of space on the drive, and I want to increase the pool's size. I have looked at the documentation for zfs and zpool, but could not determine how to do it.
zpool status rpool
pool: rpool
state: ONLINE
scrub: scrub completed after 0h14m with 0 errors on Sun Jan 11 17:30:27 2009
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
c6d0s0 ONLINE 0 0 0
errors: No known data errors
zfs list rpool
NAME USED AVAIL REFER MOUNTPOINT
rpool 20.3G 5.05G 58.5K /rpool
zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 20.3G 5.05G 58.5K /rpool
rpool@install 19K - 55.5K -
rpool/ROOT 6.57G 5.05G 18K /rpool/ROOT
rpool/ROOT@install 15K - 18K -
rpool/ROOT/opensolaris 1.05G 5.05G 4.17G legacy
rpool/ROOT/opensolaris-1 5.52G 5.05G 3.63G /tmp/tmpelfMp0
rpool/ROOT/opensolaris-1@static:-:2008-11-22-12:25:21 1.88G - 3.28G -
rpool/ROOT/opensolaris-1/opt 3.82M 5.05G 3.60M /tmp/tmpelfMp0/opt
rpool/ROOT/opensolaris-1/opt@static:-:2008-11-20-23:55:11 0 - 3.60M -
rpool/ROOT/opensolaris-1/opt@static:-:2008-11-21-01:43:22 0 - 3.60M -
rpool/ROOT/opensolaris-1/opt@static:-:2008-11-22-12:25:21 0 - 3.60M -
rpool/export 13.7G 5.05G 19K /export
rpool/export@install 15K - 19K -
rpool/export/home 13.7G 5.05G 13.7G /export/home
rpool/export/home@install 20K - 21K -
Thanks,
wor

The VTOC label inside the Solaris partition appears to think the size of the partition is only 20G. Unfortunately, the Solaris tools don't make it easy to change that number; they only set it when the disk (or partition, in this case) is created. So instead of "resizing" the VTOC label, you have to destroy it (just the label, not the data on the disk), then create a new one with the bigger size.
If you're not familiar with Solaris partition and slice setups, it might be faster and safer to just nuke the partition and reinstall/restore.
Otherwise, to record the current slice sizes, you use either 'prtvtoc' or 'format -> partition -> print'. They show the same information in slightly different ways. Both give you where the slices begin and how big they are.
Use 'fdisk' to delete the Solaris partition and create a new one. I don't think it'll let you do this if you're booted from the disk; you'll need to boot from alternate media (like a CD or something) if that's the case. Once you create a new Solaris partition, it should create a label inside with the right size.
When you run 'prtvtoc' or 'format' after that, it should show the full partition size, not 20G. At that point you use 'format -> partition' to recreate the slices as they existed before. The only difference is that slice 0 can be extended to go all the way to the end of the (hopefully now larger) partition. If all the slices are in the same spots, then the data in them will be accessible once again. Still, since it's very easy to mess this up, I wouldn't attempt this without having a backup in place.
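A rough command-level sketch of the above (an outline only, not a tested recipe; c6d0 is the disk shown in the zpool status output, and the exact slice layout must come from your own saved copy, stored somewhere off this disk):
# prtvtoc /dev/rdsk/c6d0s2 > c6d0.vtoc      <- record the current slice layout first
(boot from alternate media, since you can't repartition the disk you're booted from)
# fdisk /dev/rdsk/c6d0p0                    <- delete the Solaris partition and recreate it at its full size
# format                                    <- select c6d0, then partition -> recreate the slices, extending slice 0 to the end
(boot from the disk again and check 'zpool status rpool' and 'zfs list')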
Darren

Similar Messages

  • ZFS help - disk id has changed

    Hi. I'm running Solaris 10 on x86 platform.
    I had 3 disks on my computer, 1 IDE (with solaris 10) and 2 SATA.
    Both SATA drives are (were) using zfs, though in two different pools.
    One of the drives crashed and I had to remove it, but I forgot to remove it from the pool.
    Naturally, things started complaining after the reboot, but the system booted. I managed to remove the broken drive from the pool by removing the whole pool (I no longer need it).
    But after removing the broken drive from the computer, the device ID (c0d0, c1d0, c2d0, etc.) for the remaining SATA drive changed. Before removal this SATA drive was c1d0s0; after removal it became c2d0s0 (I think s0, or maybe p0). I don't know why the enumeration changed.
    The question is, is there a way to tell zfs to change the drive in the zpool from c1d0 to c2d0 without erasing its (the remaining drive) contents?
    Kind Regards,
    Yaerek

    Problem resolved. I had to reset the memory buffer inside the 1 TB drive; without the reset, the motherboard was recognizing it as a 33 MB drive.
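    For the original question (not the fix that ended up mattering here): ZFS identifies pool members by an on-disk label/GUID rather than by the cNdN device name, so the usual way to pick up a renamed device is simply to export and re-import the pool. A sketch only, with a made-up pool name:
    # zpool export datapool
    # zpool import datapool        <- rescans the disks and finds the member under its new c2d0 name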

  • ZFS Filesystem for FUSE/Linux progressing

    About
    ZFS is an advanced modern filesystem from Sun Microsystems, originally designed for Solaris/OpenSolaris.
    This project is a port of ZFS to the FUSE framework for the Linux operating system.
    It is being sponsored by Google, as part of the Google Summer of Code 2006 program.
    Features
    ZFS has many features which can benefit all kinds of users - from the simple end-user to the biggest enterprise systems. A list of ZFS features:
          Provable integrity - it checksums all data (and meta-data), which makes it possible to detect hardware errors (hard disk corruption, flaky IDE cables..). Read how ZFS helped to detect a faulty power supply after only two hours of usage, which was previously silently corrupting data for almost a year!
          Atomic updates - means that the on-disk state is consistent at all times, there's no need to perform a lengthy filesystem check after forced reboots/power failures.
          Instantaneous snapshots and clones - it makes it possible to have hourly, daily and weekly backups efficiently, as well as experiment with new system configurations without any risks.
          Built-in (optional) compression
          Highly scalable
          Pooled storage model - creating filesystems is as easy as creating a new directory. You can efficiently have thousands of filesystems, each with its own quotas and reservations, and different properties (compression algorithm, checksum algorithm, etc.).
          Built-in stripes (RAID-0), mirrors (RAID-1) and RAID-Z (it's like software RAID-5, but more efficient due to ZFS's copy-on-write transactional model).
          Among others (variable sector sizes, adaptive endianness, ...)
    http://www.wizy.org/wiki/ZFS_on_FUSE
    http://developer.berlios.de/project/sho … up_id=6836
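    To make the pooled storage point above concrete, a minimal (hypothetical) session might look like this - the pool name and device names are made up:
    # zpool create tank /dev/sdb /dev/sdc
    # zfs create tank/home
    # zfs create tank/home/alice
    # zfs set compression=on tank/home/alice      <- per-filesystem properties
    # zfs set quota=10G tank/home/alice
    # zfs snapshot tank/home/alice@monday         <- instantaneous snapshot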

    One workaround for this test was to drop down to NFSv3. That's fine for testing, but when I get ready to roll this thing into production, I hope there are no problems doing v4 from my NetApp hardware.

  • Please critique my zfs-fuse setup - wiki to follow - help needed

    Hi all,
    I am writing a new wiki for zfs-fuse that will hopefully find a niche and help others.
    I am having a couple of rough spots:
    For example, I'm trying to figure out why, when I do a
    # zpool create pool /dev/sdb /dev/sdc /dev/sdd /dev/sde
    with a 2TB, 750GB, 750GB and 500GB drive, the resulting size comes out at ~3.5TB instead of 4TB, and likewise when I create with a 2TB, 500GB, 500GB the size comes out at ~2.68TB. Does the zpool automatically use the smallest drive as a cache drive or something? Or am I just that bad at 1024*1024*xxx?
    Another question I have is what role mdadm plays: if I created a linear span array under the zpool, would this still be considered ZFS?
    https://wiki.archlinux.org/index.php/Us … ementation
    Here's the wiki - please critique it, comment on it, bash it, help it, anything. I'm sort of finished until someone can point out any lame mistakes in the way I created the partition tables, or used a zpool where I shouldn't have, and whether the way I have done it with the bf00 partition type 'Solaris root' is indeed ZFS.
    I'm looking for complete filesystem integrity for a backup drive array, that's all. Since it's only a backup array, and without a minimum of 6 like-sized hard drives, I don't expect to have a nice raidz1 setup with two separate vdevs that can take full advantage of the checksum-based correction that striped arrays offer yet; I just want to get the ZFS system up and running on one array until I get more hard drives.
    https://wiki.archlinux.org/index.php/Us … ementation
    Last edited by wolfdogg (2013-01-18 05:16:45)
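    The missing space described above is not answered in the thread, but the numbers are consistent with drives sold in decimal terabytes being reported back in binary units; a quick check with bc:
    # echo 'scale=2; (2000+750+750+500)*10^9 / 2^40' | bc
    3.63
    # echo 'scale=2; (2000+500+500)*10^9 / 2^40' | bc
    2.72
    which is close to the ~3.5TB and ~2.68TB observed once ZFS metadata overhead is allowed for.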

    So, that article you have there is like a draft you are planning to add as a regular Arch wiki article?
    If that's the case, then read the Help:Style article; it has the rules about how you should write wiki articles on the ArchWiki.
    For instance, you are writing a lot in the first person and giving personal comments. Like this:
    1) to get to step one on the ZFS-FUSE page https://wiki.archlinux.org/index.php/ZFS_on_FUSE i had to do a few things. that was to install yaourt, which was not necessarily straight forward. I will be vague in these instructions since i have already completed these steps so they are coming from memory
    That style of writing is more fitting for a blog than a wiki article. The style article mentions:
    Write objectively: do not include personal comments on articles, use discussion pages for this purpose. In general, do not write in first person.
    Check it here.
    So, instead of saying things similar to this:
    "I had to install blabla, I did it with yaourt like this: yaourt -S blabla, but you can download it manually from AUR, makepkg it, and then install it with pacman"
    It's better to say it like this:
    "Install the package blabla from the AUR"
    Of course, using the wiki conventions for making "Install" a link to pacman, "blabla" a link to the AUR page of the package, and "AUR" a link to the AUR wiki article.
    Just read the article, and you will know how you should write it.

  • ZFS mounting issue, help!

    Hello,
    I'm running a server with Solaris 11 and a single SATA disk for testing the OS out. I had a couple of Zones running and Virtualbox installed with a couple of VMs running some trading software. I came into work this morning; the server was off, and after switching it back on Solaris wouldn't boot. :( I re-installed Solaris on another drive and want to mount my old drive to a mountpoint on my new system to (hopefully) retrieve the VHD images. I looked around a few other forums, but haven't found a useful solution to my problem. Any help would be great! PS, I'm used to Linux, so I'm presuming I am able to do something like "mount /dev/sde1 /mynewmountpoint", but with "zfs mount xyz" instead....
    Thanks a lot,
    Tobias

    You can get a list of the zpools that are available for import with:
    # zpool import
    From there, you will probably find that there is a zpool called rpool that you can import. Unfortunately, you probably already have a zpool named rpool imported - that would be the one from which Solaris is currently booted. You can import the zpool with a different name. You will also want to import it at an alternate root. For example:
    # zpool import -R /tmp/altroot rpool oldrpool
    Then you will find the things you are looking for under /tmp/altroot
    If you want to get the zones from the old rpool to the new rpool, you should be able to do that pretty easily too. Assuming your zones are in the dataset oldrpool/zones:
    # zfs snapshot -r oldrpool/zones@saveme
    # zfs send -rc oldrpool/zones@saveme | zfs recv newrpool/zones
    # cp /tmp/altroot/etc/zones/$zonename.xml /etc/zones/temp-$zonename.xml
    # zonecfg -z $zonename create -t temp-$zonename
    # rm /etc/zones/temp-$zonename.xml
    # zoneadm -z $zonename attach -u
    I did not test the procedure above. I believe it will work but there may be some small detail missing or typo.
    When you are done with the old rpool:
    # zpool export oldrpool
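    One detail not mentioned above: if having two pools called rpool makes the import ambiguous, 'zpool import' with no arguments prints a numeric ID for each importable pool, and that ID can be used in place of the name (the ID below is made up):
    # zpool import                                                <- lists importable pools with their IDs
    # zpool import -R /tmp/altroot 1234567890123456789 oldrpool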

  • Help - Custom Jumpstart 10/09 x86 with ZFS

    Hi,
    I'm trying to Jumpstart Solaris 10/09 x86 with a ZFS root partition. Everything seems to be going okay. My problems are:
    1. The server does not automatically boot after completing the Jumpstart.
    2. After installation, every time I reboot the newly jumpstarted machine, I see the following messages on the console, which I don't see with a regular installation:
    # init 6
    propagating updated GRUB menu
    File </boot/grub/menu.lst> propagation successful
    File </etc/lu/GRUB_backup_menu> propagation successful
    File </etc/lu/menu.cksum> propagation successful
    File </sbin/bootadm> propagation successful
    Do you know why I see these propagating messages?
    My rules.ok file is
    root@dev # cat rules.ok
    any - - my_profile -
    # version=2 checksum=3418
    root@dev #
    root@dev # cat my_profile
    install_type initial_install
    system_type standalone
    cluster SUNWCreq
    pool rpool auto auto auto mirror c1t0d0s0 c1t1d0s0
    bootenv installbe bename s10x_u8wos_08a dataset /var
    root@dev #
    Thank you,
    Jacob

    Okay, I figured out why I'm getting the "File </boot/grub/menu.lst> propagation successful", etc. messages. It's because I had
    bootenv installbe bename s10x_u8wos_08a dataset /var
    in my Jumpstart profile. When that line exists, another boot environment is installed and Live Upgrade is installed as well. With that line removed, Live Upgrade is not installed and no alternate boot environment is created. But now the problem is that I would like to have a separate dataset for /var, and unfortunately it does not seem possible to create one without the bootenv line.
    Does anyone know how I could have the "best of both worlds", i.e. have a separate dataset for /var without having to install a new boot environment?
    Thank you,
    Jacob.

  • ZFS - Can't make raidz pool available. Please Help

    Hi All,
    Several months ago I created a raidz pool on a 6-disk external Sun array. It was working fine until the other day when I lost a drive. I took out the old drive and put in the new drive, and am unable to bring the pool back up. It won't let me issue a zpool replace, or an online, or anything. Here is hopefully all the info you need to see what's going on (if you need more info, let me know):
    Piece of dmesg from after the reboot.
    Dec 19 14:17:14 stzehlsun fmd: [ID 441519 daemon.error] SUNW-MSG-ID: ZFS-8000-CS, TYPE: Fault, VER: 1, SEVERITY: Major
    Dec 19 14:17:14 stzehlsun EVENT-TIME: Tue Dec 19 14:17:14 EST 2006
    Dec 19 14:17:14 stzehlsun PLATFORM: SUNW,Ultra-2, CSN: -, HOSTNAME: stzehlsun
    Dec 19 14:17:14 stzehlsun SOURCE: zfs-diagnosis, REV: 1.0
    Dec 19 14:17:14 stzehlsun EVENT-ID: 644874cf-084d-413d-88c6-c195db617041
    Dec 19 14:17:14 stzehlsun DESC: A ZFS pool failed to open. Refer to http://sun.com/msg/ZFS-8000-CS for more information.
    Dec 19 14:17:14 stzehlsun AUTO-RESPONSE: No automated response will occur.
    Dec 19 14:17:14 stzehlsun IMPACT: The pool data is unavailable
    Dec 19 14:17:14 stzehlsun REC-ACTION: Run 'zpool status -x' and either attach the missing device or
    Dec 19 14:17:14 stzehlsun restore from backup.
    # zpool status
    pool: array
    state: FAULTED
    status: One or more devices could not be opened. There are insufficient
    replicas for the pool to continue functioning.
    action: Attach the missing device and online it using 'zpool online'.
    see: http://www.sun.com/msg/ZFS-8000-D3
    scrub: none requested
    config:
    NAME STATE READ WRITE CKSUM
    array UNAVAIL 0 0 0 insufficient replicas
    c0t9d0 ONLINE 0 0 0
    c0t10d0 ONLINE 0 0 0
    c0t11d0 ONLINE 0 0 0
    c0t12d0 ONLINE 0 0 0
    c0t13d0 UNAVAIL 0 0 0 cannot open
    c0t14d0 ONLINE 0 0 0
    # zpool online array c0t13d0
    cannot open 'array': pool is currently unavailable
    run 'zpool status array' for detailed information
    # zpool replace array c0t13d0
    cannot open 'array': pool is currently unavailable
    run 'zpool status array' for detailed information
    As you can see, I've replaced c0t13d0 with the new drive; format sees it just fine, and it appears to be up and running. What do I need to do to get this new drive into the raidz pool and get my pool back online? I just don't see what I'm missing here. Thanks!
    Steve

    Sadly, I never received an answer on this forum, so I opened a ticket with Sun, and they got right back to me. For anyone following this thread, I'll pass along what they told me.
    Basically, I THOUGHT I had created a raidz pool; apparently I did not, and had only created a striped (RAID0) pool. So with the one disk gone there was no parity to rebuild the array, it remained faulted, and there was no way to fix it - the only solution was to destroy the pool and start again. I really thought I had created a raidz, but now that I have created a raidz pool, I can see the difference in the zpool status command.
    Before: (MUST have been RAID0)
    NAME STATE READ WRITE CKSUM
    array UNAVAIL 0 0 0 insufficient replicas
    c0t9d0 ONLINE 0 0 0
    c0t10d0 ONLINE 0 0 0
    c0t11d0 ONLINE 0 0 0
    c0t12d0 ONLINE 0 0 0
    c0t13d0 UNAVAIL 0 0 0 cannot open
    c0t14d0 ONLINE 0 0 0
    After creating a REAL raidz pool:
    NAME STATE READ WRITE CKSUM
    array ONLINE 0 0 0
    raidz ONLINE 0 0 0
    c0t9d0 ONLINE 0 0 0
    c0t10d0 ONLINE 0 0 0
    c0t11d0 ONLINE 0 0 0
    c0t12d0 ONLINE 0 0 0
    c0t13d0 ONLINE 0 0 0
    c0t14d0 ONLINE 0 0 0
    Note the added raidz line.
    I asked the tech support guy if it was possible that I HAD created a raidz but, due to the disk loss and reboots, a bug was only showing it as RAID0. He said there are no reported cases of such an incident and he really didn't think so. So I guess I just messed up when I created it in the first place, and since I didn't know what a raidz pool would look like, I had no way of knowing I hadn't created one. (Yes, I know I could have added up the disk space and realized no disk was being used for parity, but I didn't.)
    So the moral here is to make sure you created what you thought you created - then it will do what you expect.
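    For anyone reading later, the difference between those two status listings comes down to one word at creation time; a sketch using the same device names as this thread:
    # zpool create array c0t9d0 c0t10d0 c0t11d0 c0t12d0 c0t13d0 c0t14d0          <- plain stripe, no redundancy
    # zpool create array raidz c0t9d0 c0t10d0 c0t11d0 c0t12d0 c0t13d0 c0t14d0    <- single-parity raidz, survives one failed disk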

  • Help: External SATA with ZFS raidz2 extremely slow, freezes system

    Hello,
    I need some help with the following configuration:
    Dell 850 with Intel D940 (3.2 GHz dual core), 1 GB RAM, two 80 GB SATA drives
    Norco DS-1220 external SATA enclosure
    (http://www.norcotek.com/item_detail.php?categoryid=12&modelno=ds1220)
    Sil3124 4-port SATA card
    (http://www.norcotek.com/item_detail.php?categoryid=8&modelno=norco-4618)
    8 Seagate 7200.10 500 GB SATA2 drives
    I'm using Solaris 10 x86 u3. It sees the SATA card at c3 and the 8 disks as:
    /dev/rdsk/c3t2d0p0 /dev/rdsk/c3t545d0p0 /dev/rdsk/c3t608d0p0
    /dev/rdsk/c3t3d0p0 /dev/rdsk/c3t576d0p0 /dev/rdsk/c3t609d0p0
    /dev/rdsk/c3t544d0p0 /dev/rdsk/c3t577d0p0
    I created a raidz2 pool with all 8 drives. It appeared to work OK:
    # zpool status array0
    pool: array0
    state: ONLINE
    scrub: none requested
    config:
    NAME STATE READ WRITE CKSUM
    array0 ONLINE 0 0 0
    raidz2 ONLINE 0 0 0
    c3t2d0p0 ONLINE 0 0 0
    c3t3d0p0 ONLINE 0 0 0
    c3t544d0p0 ONLINE 0 0 0
    c3t545d0p0 ONLINE 0 0 0
    c3t576d0p0 ONLINE 0 0 0
    c3t577d0p0 ONLINE 0 0 0
    c3t608d0p0 ONLINE 0 0 0
    c3t609d0p0 ONLINE 0 0 0
    errors: No known data errors
    As a quick and dirty test, I did:
    time cp -d -r /usr /export/array0/
    The /usr partition is about 2.7 GB. So far, after about 2 hours, 680 MB have been copied.
    Looking at iostat, I see the following:
    # iostat -x -C
    extended device statistics
    device r/s w/s kr/s kw/s wait actv svc_t %w %b
    c0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    sd0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    c3 0.5 11.7 23.4 108.3 2.0 0.4 203.6 0 44
    sd1 0.1 1.2 3.4 13.6 0.2 0.1 180.3 1 6
    sd2 0.1 1.4 3.0 13.6 0.2 0.1 194.4 2 7
    sd3 0.1 1.3 3.3 13.6 0.3 0.1 242.3 4 7
    sd4 0.1 1.5 2.8 13.4 0.3 0.1 266.9 3 9
    sd5 0.1 1.4 2.5 13.5 0.2 0.0 187.1 1 3
    sd6 0.1 1.6 2.6 13.3 0.4 0.1 300.7 3 10
    sd7 0.1 1.6 3.0 13.7 0.2 0.0 99.5 0 0
    sd8 0.1 1.6 2.8 13.5 0.2 0.0 152.7 1 1
    cmdk0 0.0 0.0 0.2 0.0 0.0 0.0 2.4 0 0
    cmdk1 12.6 1.4 74.6 16.1 0.1 0.0 5.8 0 2
    nfs1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    When I run iostat with an update interval, I see a lot of screens like this:
    extended device statistics
    device r/s w/s kr/s kw/s wait actv svc_t %w %b
    c0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    sd0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    c3 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0 100
    sd1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    sd2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    sd3 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0 100
    sd4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    sd5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    sd6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    sd7 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    sd8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    cmdk0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    cmdk1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    nfs1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    The drive reporting 100 moves around between sd1 and sd6 (I think sd7 and sd8 too, but I'm not positive). Sometimes both %w and %b report 100. Also, occasionally two drives will both report 100. They stay at 100 for 30 seconds or longer, followed by a flurry of data, and then another drive reports 100.
    Here's another example:
    extended device statistics
    device r/s w/s kr/s kw/s wait actv svc_t %w %b
    c0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    sd0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    c3 0.0 0.0 0.0 0.0 0.0 2.0 0.0 0 200
    sd1 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0 100
    sd2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    sd3 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0 100
    sd4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    sd5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    sd6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    sd7 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    sd8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    cmdk0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    cmdk1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    nfs1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
    Anyone have any ideas what could be causing this? Or whether this is a known SATA problem? Or a known problem with the Sil3124 cards or the Sil port multipliers?
    Thanks,
    Greg

    I was told the port multiplier code in the Solaris and OpenSolaris systems is not production ready. It is a known bug and is being worked on, but no ETA. I was given the impression that it was a low priority issue.
    After struggling with this for several days I had to give up using Solaris for this project and had to migrate to Linux. In my case, I had to use CentOS or Red Hat ES due to binary drivers from Silicon Image as the open source Sil3124 drivers in Linux also had problems with port multipliers (slightly different symptoms than Solaris). Even with the binary drivers from the vendor, I am still seeing port multipliers drop all 5 drives at seemingly random times.
    Moral of the story: Don't use port multipliers in production environments or anywhere that you value your data. At least, not the Silicon Image 3726 port multipliers. They'll just give you more headaches than they are worth.
    Had I the chance to do it over, I would not have purchased the NorcoTek (Norco) DS-1220 enclosure; I would have done something with SCSI or with direct channels to all drives and no port multipliers. One day they'll probably be OK.
    I had wanted to use raidz2 for my array. When I switched to Linux I used raid6 (double parity, similar to raidz2), but with the port multipliers dropping 5 drives at a time, I had to reconfigure to RAID 1+0. Nice performance, but I'm really annoyed at the loss of capacity. My 5 TB array is now 3 TB.

  • New to Solaris question help please

    Just successfully installed Solaris 10 on VMware ESXi and assigned it 50GB of HDD space. When I type df -h I see that 79% of my disk is used and the rest of the storage is nowhere to be found.
    I'm new to Solaris; is this because the rest of the drive hasn't been partitioned? Some help would be greatly appreciated so that I can take advantage of the rest of my volume...
    Thanks!
    Filesystem size used avail capacity Mounted on
    /dev/dsk/c0d0s0 7.9G 6.1G 1.7G 79% /
    /devices 0K 0K 0K 0% /devices
    ctfs 0K 0K 0K 0% /system/contract
    proc 0K 0K 0K 0% /proc
    mnttab 0K 0K 0K 0% /etc/mnttab
    swap 1.5G 880K 1.5G 1% /etc/svc/volatile
    objfs 0K 0K 0K 0% /system/object
    /usr/lib/libc/libc_hwcap1.so.1
    7.9G 6.1G 1.7G 79% /lib/libc.so.1
    fd 0K 0K 0K 0% /dev/fd
    swap 1.5G 2.2M 1.5G 1% /tmp
    swap 1.6G 84M 1.5G 6% /var/run
    /dev/dsk/c0d0s7 7.9G 8.0M 7.8G 1% /export/home
    /dev/dsk/c0d0s6 6.8G 7.0M 6.8G 1% /export/software
    /hgfs 16G 4.0M 16G 1% /hgfs
    /tmp/VMwareDnD 0K 0K 0K 0% /var/run/vmblock

    > I don't recall allocating anything during installation.
    Which means that you did a default installation with no customization. Looking over your file system layout confirms this.
    I have an updated df -h listing as well (I installed on a different system):
    # df -h
    Filesystem size used avail capacity Mounted on
    /dev/dsk/c1t0d0s0 4.6G 4.0G 596M 88% /
    /devices 0K 0K 0K 0% /devices
    ctfs 0K 0K 0K 0% /system/contract
    proc 0K 0K 0K 0% /proc
    mnttab 0K 0K 0K 0% /etc/mnttab
    swap 1.7G 868K 1.7G 1% /etc/svc/volatile
    objfs 0K 0K 0K 0% /system/object
    /usr/lib/libc/libc_hwcap1.so.1
    4.6G 4.0G 596M 88% /lib/libc.so.1
    fd 0K 0K 0K 0% /dev/fd
    swap 1.7G 120K 1.7G 1% /tmp
    swap 1.7G 24K 1.7G 1% /var/run
    /dev/dsk/c1t0d0s7 44G 245M 43G 1% /export/home
    > From this, I understand that /dev/dsk/c1t0d0s0 has a size of 4.6GB (assuming the OS is installed here). When I download items to the desktop, it appears they are placed in this "disk"?
    Depends on where you told Firefox to place them by default. You could just as easily configure it: Edit, Preferences, Main, Downloads, and tell it to place the downloads somewhere else.
    > Also, /dev/dsk/c1t0d0s7 is where 44GB of free storage resides, I believe this is located in /export/home (as a folder with 44GB of space available) as far as my knowledge of Solaris is concerned.
    Correct. Solaris is a multi-user system that leaves plenty of space for user home directories, which default to /export/home. IMNSHO the space that it leaves behind for / is usually small enough to make the system unusable after applying only a few patches. If you switch to OpenSolaris 2008.11 you can use ZFS, which will mitigate most of these problems.
    > Just a little confusing to a newbie, coming from a Microsoft background (forgive me!). I am used to seeing volumes as icons in My Computer.
    The Windows tool for managing disks is nice and graphical.
    > What's your advice for managing volumes, formatting to ZFS, maybe some visual method of understanding volumes?
    If you haven't done much with the system then you might consider switching to OpenSolaris, with the caveat that some parts of Solaris are not available in OpenSolaris because of licensing issues. However, for someone who's just starting out, you won't miss them.
    For understanding, go to http://www.docs.sun.com, then Solaris, then Administration, and look for the two books on Disk Management: one is for ZFS (ZFS Administration) and the other is for UFS (Volume Manager Administration).
    alan

  • Trouble installing ZFS in archlinux kernel 3.6.3-1-ARCH

    I've been trying to install ZFS on my system, and I can't get past a build error for SPL. Here is my install output:
    ==> Downloading zfs PKGBUILD from AUR...
    x zfs_preempt.patch
    x zfs.install
    x PKGBUILD
    Comment by: modular on Wed, 24 Oct 2012 03:09:04 +0000
    @demizer
    I don't/won't run ZFS as a root file system. I'm getting the following build error:
    http://pastebin.com/ZcWiaViK
    Comment by: demizer on Wed, 24 Oct 2012 04:11:54 +0000
    @modular, You're trying to build with the 3.6.2 kernel. The current version (rc11) does not work with the 3.6.2 kernel. If you want to use it, you will have to downgrade to the 3.5.6 kernel (linux and linux-headers). https://wiki.archlinux.org/index.php/Downgrading_Packages
    Thanks!
    Comment by: MilanKnizek on Wed, 24 Oct 2012 08:07:19 +0000
    @demizer: there still seemed to be a problem during upgrading - zfs/spl requires a kernel of a certain version (hard-coded), and this blocks the upgrade (the old installed zfs/spl requires the old kernel, the kernel can't be upgraded without breaking the zfs/spl dependency, and therefore the build of the new zfs/spl fails, too).
    So far, I have had to remove zfs/spl, upgrade the kernel, rebuild and install spl/zfs, and manually run depmod against the new kernel (i.e. the postinst depmod -a does not work until the next reboot), and only then reboot to load the new kernel ZFS modules successfully.
    That is quite clumsy and error-prone - I hope it will be resolved via DKMS.
    Comment by: srf21c on Sun, 28 Oct 2012 04:00:31 +0000
    All, if you're suffering zfs kernel upgrade pain fatigue, seriously consider going with the LTS (long term support) kernel. I just successfully built zfs on a system that I switched to the linux-lts 3.0.48-1. All you have to do is install the linux-lts and linux-lts-headers packages, reboot to the lts kernel, and change any instances of depends= or makedepends= lines in the package build file like so:
    Before:
    depends=('linux=3.5' "spl=${pkgver}" "zfs-utils=${pkgver}")
    makedepends=('linux-headers=3.5')
    After:
    depends=('linux-lts=3.0' "spl=${pkgver}" "zfs-utils=${pkgver}")
    makedepends=('linux-lts-headers=3.0')
    Then build and install each package in this order: spl-utils,spl,zfs-utils,zfs.
    Worked like a champ for me.
    Comment by: stoone on Mon, 29 Oct 2012 12:09:29 +0000
    If you keep the linux and linux-headers packages while using the LTS kernel, you don't need to modify the PKGBUILDs, because the checks will pass and it will build the packages against your currently running kernel.
    Comment by: demizer on Mon, 29 Oct 2012 15:56:27 +0000
    Hey everybody, just a quick update. The new build tool I have been working on is now in master, https://github.com/demizer/aur-zfs. With it you can build and package two different groups of packages, one for AUR and one for split. Again, building the split packages is more efficient. I still have a lot of work to be done, but it is progressing. I will be adding git, dkms, and lts packages after I set up my repo. My next step is to add unofficial repository support to my build tool so I can easily set up a repo with precompiled binaries. I will be hosting the repo on my website at http://demizerone.com/archzfs. Initially it will only be for 64-bit code, since the ZOL FAQ states that ZOL is very unstable with 32-bit code due to memory management differences between Solaris and Linux. I will notify you all in the future when that is ready to go.
    @MilanKnizek, Yes, updating is a pain. ZFS itself is hard-coded to the Linux version at build time. The ZFS build tool puts the modules in "/usr/lib/modules/3.5.6-1-ARCH/addon/zfs/", and this is the primary reason it has to be rebuilt on each upgrade, even minor point releases. Nvidia, for example, puts their module in "/usr/lib/modules/extramodules-3.5-ARCH/", so minor point releases are still good and the nvidia package doesn't need to be re-installed. A possible reason for ZOL being hard-coded like this is that ZOL is still technically very beta code.
    I do have a question for the community, does anyone use ZFS on a 32bit system?
    Thanks!
    First Submitted: Thu, 23 Sep 2010 08:50:51 +0000
    zfs 0.6.0_rc11-2
    ( Unsupported package: Potentially dangerous ! )
    ==> Edit PKGBUILD ? [Y/n] ("A" to abort)
    ==> ------------------------------------
    ==> n
    ==> zfs dependencies:
    - linux>=3.5 (already installed)
    - linux-headers>=3.5 (already installed)
    - spl>=0.6.0_rc11 (building from AUR)
    - zfs-utils>=0.6.0_rc11 (building from AUR)
    ==> Edit zfs.install ? [Y/n] ("A" to abort)
    ==> ---------------------------------------
    n
    ==> Continue building zfs ? [Y/n]
    ==> -----------------------------
    ==>
    ==> Building and installing package
    ==> Install or build missing dependencies for zfs:
    ==> Downloading spl PKGBUILD from AUR...
    x spl.install
    x PKGBUILD
    Comment by: timemaster on Mon, 15 Oct 2012 22:42:32 +0000
    I am not able to compile this package after the upgrade to the 3.6 kernel. Anyone else? Any ideas?
    Comment by: mikers on Mon, 15 Oct 2012 23:34:17 +0000
    rc11 doesn't support Linux 3.6; there are some patches on GitHub that might apply against it (I've not done it myself), see:
    https://github.com/zfsonlinux/spl/pull/179
    https://github.com/zfsonlinux/zfs/pull/1039
    Otherwise downgrade to Linux 3.5.x or linux-lts and wait for rc12.
    Comment by: timemaster on Mon, 15 Oct 2012 23:54:03 +0000
    Yes, I saw that too late.
    https://github.com/zfsonlinux/zfs/commit/ee7913b644a2c812a249046f56eed39d1977d706
    Comment by: demizer on Tue, 16 Oct 2012 07:00:16 +0000
    Looks like the patches have been merged, now we wait for rc12.
    Comment by: vroomanj on Fri, 26 Oct 2012 17:07:19 +0000
    @demizer: 3.6 support is available in the master builds, which are stable but not officially released yet. Can't the build be updated to use the master tars?
    https://github.com/zfsonlinux/spl/tarball/master
    https://github.com/zfsonlinux/zfs/tarball/master
    Comment by: demizer on Fri, 26 Oct 2012 17:51:42 +0000
    @vroomanj, I plan on working on the git packages this weekend. All I have to figure out is whether it is going to be based on an actual git clone or just on the download links you provided. They are pretty much the same, but I'm not really clear on what the Arch Package Guidelines say about this yet. Also, I don't think the current packages in AUR should be based off of git master. They should be based off of the ZOL stable releases (rc10, rc11, ...). That's why I am making git packages, so people can use them if they want to upgrade to the latest kernel and the stable release hasn't been made yet, as is the case currently.
    First Submitted: Sat, 26 Apr 2008 14:34:31 +0000
    spl 0.6.0_rc11-2
    ( Unsupported package: Potentially dangerous ! )
    ==> Edit PKGBUILD ? [Y/n] ("A" to abort)
    ==> ------------------------------------
    ==> n
    ==> spl dependencies:
    - linux>=3.5 (already installed)
    - spl-utils>=0.6.0_rc11 (already installed)
    - linux-headers>=3.5 (already installed)
    ==> Edit spl.install ? [Y/n] ("A" to abort)
    ==> ---------------------------------------
    ==> n
    ==> Continue building spl ? [Y/n]
    ==> -----------------------------
    ==>
    ==> Building and installing package
    ==> Making package: spl 0.6.0_rc11-2 (Tue Oct 30 11:34:13 CET 2012)
    ==> Checking runtime dependencies...
    ==> Checking buildtime dependencies...
    ==> Retrieving Sources...
    -> Downloading spl-0.6.0-rc11.tar.gz...
    % Total % Received % Xferd Average Speed Time Time Time Current
    Dload Upload Total Spent Left Speed
    0 178 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
    100 136 100 136 0 0 154 0 --:--:-- --:--:-- --:--:-- 293
    100 508k 100 508k 0 0 357k 0 0:00:01 0:00:01 --:--:-- 1245k
    ==> Validating source files with md5sums...
    spl-0.6.0-rc11.tar.gz ... Passed
    ==> Extracting Sources...
    -> Extracting spl-0.6.0-rc11.tar.gz with bsdtar
    ==> Starting build()...
    configure.ac:34: warning: AM_INIT_AUTOMAKE: two- and three-arguments forms are deprecated. For more info, see:
    configure.ac:34: http://www.gnu.org/software/automake/manual/automake.html#Modernize-AM_INIT_AUTOMAKE-invocation
    checking metadata... yes
    checking build system type... i686-pc-linux-gnu
    checking host system type... i686-pc-linux-gnu
    checking target system type... i686-pc-linux-gnu
    checking whether to enable maintainer-specific portions of Makefiles... no
    checking whether make supports nested variables... yes
    checking for a BSD-compatible install... /usr/bin/install -c
    checking whether build environment is sane... yes
    checking for a thread-safe mkdir -p... /usr/bin/mkdir -p
    checking for gawk... gawk
    checking whether make sets $(MAKE)... yes
    checking for gcc... gcc
    checking whether the C compiler works... yes
    checking for C compiler default output file name... a.out
    checking for suffix of executables...
    checking whether we are cross compiling... no
    checking for suffix of object files... o
    checking whether we are using the GNU C compiler... yes
    checking whether gcc accepts -g... yes
    checking for gcc option to accept ISO C89... none needed
    checking for style of include used by make... GNU
    checking dependency style of gcc... gcc3
    checking how to print strings... printf
    checking for a sed that does not truncate output... /bin/sed
    checking for grep that handles long lines and -e... /usr/bin/grep
    checking for egrep... /usr/bin/grep -E
    checking for fgrep... /usr/bin/grep -F
    checking for ld used by gcc... /usr/bin/ld
    checking if the linker (/usr/bin/ld) is GNU ld... yes
    checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B
    checking the name lister (/usr/bin/nm -B) interface... BSD nm
    checking whether ln -s works... yes
    checking the maximum length of command line arguments... 1572864
    checking whether the shell understands some XSI constructs... yes
    checking whether the shell understands "+="... yes
    checking how to convert i686-pc-linux-gnu file names to i686-pc-linux-gnu format... func_convert_file_noop
    checking how to convert i686-pc-linux-gnu file names to toolchain format... func_convert_file_noop
    checking for /usr/bin/ld option to reload object files... -r
    checking for objdump... objdump
    checking how to recognize dependent libraries... pass_all
    checking for dlltool... no
    checking how to associate runtime and link libraries... printf %s\n
    checking for ar... ar
    checking for archiver @FILE support... @
    checking for strip... strip
    checking for ranlib... ranlib
    checking command to parse /usr/bin/nm -B output from gcc object... ok
    checking for sysroot... no
    checking for mt... no
    checking if : is a manifest tool... no
    checking how to run the C preprocessor... gcc -E
    checking for ANSI C header files... yes
    checking for sys/types.h... yes
    checking for sys/stat.h... yes
    checking for stdlib.h... yes
    checking for string.h... yes
    checking for memory.h... yes
    checking for strings.h... yes
    checking for inttypes.h... yes
    checking for stdint.h... yes
    checking for unistd.h... yes
    checking for dlfcn.h... yes
    checking for objdir... .libs
    checking if gcc supports -fno-rtti -fno-exceptions... no
    checking for gcc option to produce PIC... -fPIC -DPIC
    checking if gcc PIC flag -fPIC -DPIC works... yes
    checking if gcc static flag -static works... yes
    checking if gcc supports -c -o file.o... yes
    checking if gcc supports -c -o file.o... (cached) yes
    checking whether the gcc linker (/usr/bin/ld) supports shared libraries... yes
    checking whether -lc should be explicitly linked in... no
    checking dynamic linker characteristics... GNU/Linux ld.so
    checking how to hardcode library paths into programs... immediate
    checking whether stripping libraries is possible... yes
    checking if libtool supports shared libraries... yes
    checking whether to build shared libraries... yes
    checking whether to build static libraries... yes
    checking spl license... GPL
    checking linux distribution... arch
    checking default package type... arch
    checking whether rpm is available... no
    checking whether rpmbuild is available... no
    checking whether dpkg is available... no
    checking whether dpkg-buildpackage is available... no
    checking whether alien is available... no
    checking whether pacman is available... yes (4.0.3)
    checking whether makepkg is available... yes (4.0.3)
    checking spl config... kernel
    checking kernel source directory... /usr/src/linux-3.6.3-1-ARCH
    checking kernel build directory... /usr/src/linux-3.6.3-1-ARCH
    checking kernel source version... 3.6.3-1-ARCH
    checking kernel file name for module symbols... Module.symvers
    checking whether debugging is enabled... no
    checking whether basic debug logging is enabled... yes
    checking whether basic kmem accounting is enabled... yes
    checking whether detailed kmem tracking is enabled... no
    checking whether modules can be built... yes
    checking whether atomic types use spinlocks... no
    checking whether kernel defines atomic64_t... yes
    checking whether kernel defines atomic64_cmpxchg... no
    checking whether kernel defines atomic64_xchg... yes
    checking whether kernel defines uintptr_t... yes
    checking whether INIT_WORK wants 3 args... no
    checking whether register_sysctl_table() wants 2 args... no
    checking whether set_shrinker() available... no
    checking whether shrinker callback wants 3 args... no
    checking whether struct path used in struct nameidata... yes
    checking whether task_curr() is available... no
    checking whether unnumbered sysctl support exists... no
    checking whether struct ctl_table has ctl_name... no
    checking whether fls64() is available... yes
    checking whether device_create() is available... yes
    checking whether device_create() wants 5 args... yes
    checking whether class_device_create() is available... no
    checking whether set_normalized_timespec() is available as export... yes
    checking whether set_normalized_timespec() is an inline... yes
    checking whether timespec_sub() is available... yes
    checking whether init_utsname() is available... yes
    checking whether header linux/fdtable.h exists... yes
    checking whether files_fdtable() is available... yes
    checking whether __clear_close_on_exec() is available... yes
    checking whether header linux/uaccess.h exists... yes
    checking whether kmalloc_node() is available... yes
    checking whether monotonic_clock() is available... no
    checking whether struct inode has i_mutex... yes
    checking whether struct mutex has owner... yes
    checking whether struct mutex owner is a task_struct... yes
    checking whether mutex_lock_nested() is available... yes
    checking whether on_each_cpu() wants 3 args... yes
    checking whether kallsyms_lookup_name() is available... yes
    checking whether get_vmalloc_info() is available... no
    checking whether symbol *_pgdat exist... yes
    checking whether first_online_pgdat() is available... no
    checking whether next_online_pgdat() is available... no
    checking whether next_zone() is available... no
    checking whether pgdat_list is available... no
    checking whether global_page_state() is available... yes
    checking whether page state NR_FREE_PAGES is available... yes
    checking whether page state NR_INACTIVE is available... no
    checking whether page state NR_INACTIVE_ANON is available... yes
    checking whether page state NR_INACTIVE_FILE is available... yes
    checking whether page state NR_ACTIVE is available... no
    checking whether page state NR_ACTIVE_ANON is available... yes
    checking whether page state NR_ACTIVE_FILE is available... yes
    checking whether symbol get_zone_counts is needed... no
    checking whether user_path_dir() is available... yes
    checking whether set_fs_pwd() is available... no
    checking whether set_fs_pwd() wants 2 args... yes
    checking whether vfs_unlink() wants 2 args... yes
    checking whether vfs_rename() wants 4 args... yes
    checking whether vfs_fsync() is available... yes
    checking whether vfs_fsync() wants 2 args... yes
    checking whether struct fs_struct uses spinlock_t... yes
    checking whether struct cred exists... yes
    checking whether groups_search() is available... no
    checking whether __put_task_struct() is available... yes
    checking whether proc_handler() wants 5 args... yes
    checking whether kvasprintf() is available... yes
    checking whether rwsem_is_locked() acquires sem->wait_lock... no
    checking whether invalidate_inodes() is available... no
    checking whether invalidate_inodes_check() is available... no
    checking whether invalidate_inodes() wants 2 args... yes
    checking whether shrink_dcache_memory() is available... no
    checking whether shrink_icache_memory() is available... no
    checking whether symbol kern_path_parent exists in header... no
    checking whether kern_path_parent() is available... no
    checking whether zlib_deflate_workspacesize() wants 2 args... yes
    checking whether struct shrink_control exists... yes
    checking whether struct rw_semaphore member wait_lock is raw... yes
    checking that generated files are newer than configure... done
    configure: creating ./config.status
    config.status: creating Makefile
    config.status: creating lib/Makefile
    config.status: creating cmd/Makefile
    config.status: creating module/Makefile
    config.status: creating module/spl/Makefile
    config.status: creating module/splat/Makefile
    config.status: creating include/Makefile
    config.status: creating scripts/Makefile
    config.status: creating spl.spec
    config.status: creating spl-modules.spec
    config.status: creating PKGBUILD-spl
    config.status: creating PKGBUILD-spl-modules
    config.status: creating spl.release
    config.status: creating dkms.conf
    config.status: creating spl_config.h
    config.status: executing depfiles commands
    config.status: executing libtool commands
    make all-recursive
    make[1]: Entering directory `/tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11'
    Making all in module
    make[2]: Entering directory `/tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module'
    make -C /usr/src/linux-3.6.3-1-ARCH SUBDIRS=`pwd` CONFIG_SPL=m modules
    make[3]: Entering directory `/usr/src/linux-3.6.3-1-ARCH'
    CC [M] /tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-debug.o
    CC [M] /tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-proc.o
    CC [M] /tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-kmem.o
    CC [M] /tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-thread.o
    CC [M] /tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-taskq.o
    CC [M] /tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-rwlock.o
    CC [M] /tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-vnode.o
    /tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-vnode.c: In function 'vn_remove':
    /tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-vnode.c:327:2: error: implicit declaration of function 'path_lookup' [-Werror=implicit-function-declaration]
    cc1: some warnings being treated as errors
    make[5]: *** [/tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl/../../module/spl/spl-vnode.o] Error 1
    make[4]: *** [/tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module/spl] Error 2
    make[3]: *** [_module_/tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module] Error 2
    make[3]: Leaving directory `/usr/src/linux-3.6.3-1-ARCH'
    make[2]: *** [modules] Error 2
    make[2]: Leaving directory `/tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11/module'
    make[1]: *** [all-recursive] Error 1
    make[1]: Leaving directory `/tmp/yaourt-tmp-alex/aur-spl/src/spl-0.6.0-rc11'
    make: *** [all] Error 2
    ==> ERROR: A failure occurred in build().
    Aborting...
    ==> ERROR: Makepkg was unable to build spl.
    ==> Restart building spl ? [y/N]
    ==> ----------------------------
    ... I'm stuck here, can anyone help me with this one? Please!

    Did you read the comments, either on the AUR page or in the output that you posted? They explain it.

  • Performance problems when running PostgreSQL on ZFS and tomcat

    Hi all,
    I need help with some analysis and a solution for the case below.
    The long story:
    I'm running into some massive performance problems on two 8-way HP ProLiant DL385 G5 servers with 14 GB RAM and a ZFS storage pool in raidz configuration. The servers are running Solaris 10 x86 10/09.
    The configuration of the two is pretty much the same, so the problem seems generic to the setup.
    Within a non-global zone I'm running a Tomcat application (an institutional repository) connecting via localhost to a PostgreSQL database (the OS-provided version). The processor load is typically not very high, as seen below:
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU                            
        49 postgres  749M  669M   4,7%   7:14:38  13%
         1 jboss    2519M 2536M    18%  50:36:40 5,9%
    We are not 100% sure why we run into performance problems, but when it happens the application slows down and swaps out (see below). When it settles, everything seems to go back to normal. When the problem is acute, the application is totally unresponsive.
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
        1 jboss    3104M  913M   6,4%   0:22:48 0,1%
    #sar -g 5 5
    SunOS vbn-back 5.10 Generic_142901-03 i86pc    05/28/2010
    07:49:08  pgout/s ppgout/s pgfree/s pgscan/s %ufs_ipf
    07:49:13    27.67   316.01   318.58 14854.15     0.00
    07:49:18    61.58   664.75   668.51 43377.43     0.00
    07:49:23   122.02  1214.09  1222.22 32618.65     0.00
    07:49:28   121.19  1052.28  1065.94  5000.59     0.00
    07:49:33    54.37   572.82   583.33  2553.77     0.00
    Average     77.34   763.71   771.43 19680.67     0.00
    Making more memory available to tomcat seemed to worsen the problem, or at least didn't prove to have any positive effect.
    My suspicion is currently focused on PostgreSQL. Turning off fsync boosted performance and made the problem appear less often.
    An unofficial performance evaluation of the database with "vacuum analyze" took 19 minutes on the server and only 1 minute on a desktop PC. This is horrific considering the hardware.
    The short story:
    I'm trying different steps but running out of ideas. We've read that the database block size and the file system block size should match: PostgreSQL uses 8 KB and ZFS defaults to 128 KB. I didn't find much information on the matter, so if anyone can help, please recommend how to make this change...
    Any other recommendations and ideas we could follow? We know from other installations that the above setup runs without a single problem on Linux on much smaller hardware without specific tuning. What makes Solaris in this configuration so darn slow?
    Any help appreciated and I will try to provide additional information on request if needed…
    Thanks in advance,
    Kasper

    raidz isn't a good match for databases. Databases tend to require good write performance, for which mirroring works better.
    Adding a pair of SSDs as a ZIL would probably also help, but chances are it's not an option for you.
    You can change the record size with "zfs set recordsize=8k <dataset>".
    It will only take effect for newly written data, not existing data.
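    A minimal sketch of checking and changing it (the dataset name is hypothetical; as noted above, only files written after the change use the new record size, so an existing database would need to be dumped and reloaded, or copied, to benefit):
    # zfs get recordsize tank/pgdata
    # zfs set recordsize=8k tank/pgdata
    # zfs get recordsize tank/pgdata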

  • Can someone please tell me how to format a new disk to ZFS format?

    I have a Sun V240 with Solaris 10 update 8 installed on a single 73GB hard disk. Everything is working fine. I just purchased another identical hard disk online. I plugged the disk into my V240 and ran 'devfsadm' and Solaris found the new disk. I want to add this disk to my existing ZFS pool as a mirror. However, this disk was originally formatted with a UFS file system. So when I run:
    zpool attach rpool c1t0d0 c1t1d0
    I get:
    /dev/dsk/c1t1d0s0 contains a ufs filesystem.
    I understand the error message but I don't know how to format the disk to have a ZFS file system instead. Note that I am extremely new to Solaris, ZFS, and pretty much everything Sun - I bought this server on eBay so that I could learn more about it. It's been pretty fun so far, but I need some help here and there.
    For some reason I can't find a single hit on Google telling me how to simply format a disk for ZFS. Can I use the 'format' command? Maybe you don't "format" disks for ZFS? I have no idea. I might not have the right terminology. If so, apologies. Can anyone help me with this?
    Thanks a lot! =D
    Jonathon

    Yes, you were right. The partitions were totally different. Here is what I saw:
    For c1t0d0:
    # format
    Part      Tag    Flag     Cylinders         Size            Blocks
      0       root    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
      1 unassigned    wm       0                0         (0/0/0)             0
      2     backup    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
      3 unassigned    wm       0                0         (0/0/0)             0
      4 unassigned    wm       0                0         (0/0/0)             0
      5 unassigned    wm       0                0         (0/0/0)             0
      6 unassigned    wm       0                0         (0/0/0)             0
      7 unassigned    wm       0                0         (0/0/0)             0
    For c1t1d0:
    # format
    Part      Tag    Flag     Cylinders         Size            Blocks
      0       root    wm       0 - 12865       62.43GB    (12866/0/0) 130924416
      1       swap    wu   12866 - 14079        5.89GB    (1214/0/0)   12353664
      2     backup    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
      3 unassigned    wm   14080 - 14086       34.78MB    (7/0/0)         71232
      4 unassigned    wm       0                0         (0/0/0)             0
      5 unassigned    wm       0                0         (0/0/0)             0
      6 unassigned    wm       0                0         (0/0/0)             0
      7 unassigned    wm       0                0         (0/0/0)             0
    So then I ran the following:
    # prtvtoc /dev/rdsk/c1t0d0s0 | fmthard -s - /dev/rdsk/c1t1d0s0
    fmthard:  New volume table of contents now in place.
    Then I rechecked the partition table for c1t1d0:
    # format
    Part      Tag    Flag     Cylinders         Size            Blocks
      0       root    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
      1 unassigned    wu       0                0         (0/0/0)             0
      2     backup    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
      3 unassigned    wu       0                0         (0/0/0)             0
      4 unassigned    wu       0                0         (0/0/0)             0
      5 unassigned    wu       0                0         (0/0/0)             0
      6 unassigned    wu       0                0         (0/0/0)             0
      7 unassigned    wu       0                0         (0/0/0)             0
    Woo-hoo!! It matches the first disk now! :)
    Then I tried to attach the new disk to the pool again:
    # zpool attach -f rpool c1t0d0s0 c1t1d0s0
    Please be sure to invoke installboot(1M) to make 'c1t1d0s0' bootable.
    Make sure to wait until resilver is done before rebooting.
    bash-3.00# zpool status
      pool: rpool
    state: ONLINE
    status: One or more devices is currently being resilvered.  The pool will
            continue to function, possibly in a degraded state.
    action: Wait for the resilver to complete.
    scrub: resilver in progress for 0h0m, 0.40% done, 0h58m to go
    config:
            NAME          STATE     READ WRITE CKSUM
            rpool         ONLINE       0     0     0
              mirror-0    ONLINE       0     0     0
                c1t0d0s0  ONLINE       0     0     0
                c1t1d0s0  ONLINE       0     0     0  30.3M resilvered
    errors: No known data errors
    Boo-yah!!! ++Does little dance++
    Then, after resilvering completed I ran:
    # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
    I think I'm starting to understand this now. I also shut down the server to the OpenBoot prompt and booted off of the new disk, and it worked! Also, my boot time to login has drastically decreased - I would say it's about half the time it was before I added the mirror disk. So I believe the server is properly reading from both disks simultaneously to get better bandwidth. Cool! :)
    Thanks for the help!
    Jonathon

  • How can I create a ZFS slice in the finish script of a Jumpstart?

    Hi All,
    I need advice for a particular task.
    I have a Jumpstart which creates a certain number of UFS slices (/, /var, etc.).
    It's working well, but I have a final goal: with half of the remaining free space, I want to create a ZFS pool with the help of the finish script.
    I think the best way is to use the "format" or "fdisk" command driven by a script, like "fdisk /dev/dsk/c0d0 < script.sh",
    and after that a simple "zpool create ..." command for creating the ZFS pool.
    So I have 2 questions:
    Do you think this is the best way?
    How can I write "script.sh" so that it uses only half of the free space?
    Thanks

    Why not make another slice for ZFS to use? Then just set up the ZFS pool in your finish script. I use JET here as a friendly front end to Jumpstart. You could just have your Jumpstart setup create the slice (base_config_profile_s?_size) with no mountpoint (base_config_profile_s?_mtpt), then use that slice when you make the ZFS pool later in the finish script.
    I do not believe you will be able to easily get ZFS to use just part of a device without some sort of partitioning. Do some reading on zpool (man zpool) under the vdev section.
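    A rough sketch of the finish-script end of that, assuming the profile reserved slice 7 of c0d0 with no mountpoint (the slice number and pool name are made up for illustration, and whether the pool is best created here or on first boot is worth testing):
    # in the finish script, after the profile has created the spare slice:
    zpool create -f pool01 c0d0s7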

  • Mounting a shared zfs onto a remote system : how to restrict permissions

    I'm trying to set up some basic NFS shares out of a zpool and failing miserably as it is.
    Here's what I have; all my machines are in 10.154.22.0/24 .. it's basically a test (pre-prod) network
    - oslo, the nfs server, 10.154.22.1, SunOS 5.10-x86
    - helsinki, the remote machine, 10.154.22.4, linux 2.6.22
    On oslo I've created a zpool named pool2 with a zfs filesytem called tools. pool2/tools is mounted in /tools. I've further restricted access to /tools with : chown 0:nfsusers /tools && chmod 770 /tools . I want to ensure that only users from the group nfsusers will be able to read/write/execute into /tools.
    I have a user, dbusr who is part of the nfsusers group. He can access the FS as he wants. All usernames / uids / gids are identical across the whole network .
    Ok now, on helsinki, I have a directory /export/helsinki/tools . This directory is also chowned 0:nfsusers and chmoded 770.
    Now, on helsinki, every time I try: mount -t nfs oslo:/tools /export/helsinki/tools I get:
    mount.nfs: oslo:/tools failed, reason given by server: Permission denied
    Server-side I've modified /etc/default/nfs so that both client and server run at NFSv3 (I've read somewhere that NFSv4 is not that well supported on Linux). The zfs share is set up like this:
    zfs set sharenfs=rw=10.154.22.0/255.255.255.0 pool2/tools .
    I'd like all my users in the group nfsusers to be able to write on the remote nfs FS, and optionally, root to be able, too.
    What am I missing, here ?
    Regards,
    Jeff

    It was not a problem. The raw partition with the soft partition or ZFS filesystem works fine in the local zone.
    Thank you for your help.
    -Yong

  • Help error while booting sunfire v440 (error)

    Hi Solaris admins,
    I am getting the following error while trying to install the Solaris 10 OS on a Sun Fire V440 server. Please can someone help me out here; I think the problem is with the hard drives or something.
    =================================================
    Sun Fire V440, No Keyboard
    Copyright 2005 Sun Microsystems, Inc. All rights reserved.
    OpenBoot 4.17.2, 8192 MB memory installed, Serial #63181061.
    Ethernet address 0:3:ba:c4:11:5, Host ID: 83c41105.
    Rebooting with command: boot cdrom
    Boot device: /pci@1e,600000/ide@d/cdrom@0,0:f File and args:
    SunOS Release 5.10 Version Generic_118833-33 64-bit
    Copyright 1983-2006 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    =======================================================
    Hardware watchdog enabled
    Configuring devices.
    WARNING: /pci@1f,700000/scsi@2/sd@0,0 (sd1):
    Corrupt label; wrong magic number
    WARNING: /pci@1f,700000/scsi@2/sd@2,0 (sd4):
    Corrupt label; wrong magic number
    ========================================================
    internal error: Bad file number
    svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: exit status 134
    ====================================================
    Please, any advice and ideas are highly appreciated.
    We Brain server.
    Regards
    Noa Junior Sys Admin

    No clue why it's doing that when you're booting off
    the CD though... I didn't think it checked. Hmmm.
    boot cdrom will try and start the installer - not sure what happens if no valid disk label.
    I do recall going through this (or something similar)
    when the HD in my Blade 150 failed and I swapped in a
    new one. I just formatted the drive, then I was
    fine. Not sure HOW I formatted it though. Never
    bothered to write it down, it seemed obvious at the
    time :-(Could first try boot cdrom -s to just get a single user shell, and then run format just to create the disk labels.
    If manually running format successfully labels the disks, then try running the installer (which should give the option of how to partition the disks).
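    In command form, that suggestion boils down to something like this (a sketch only; format will offer to label an unlabelled disk when you select it):
    ok boot cdrom -s
    # format
    (select the disk from the menu and answer 'y' when asked to label it; repeat for the second disk)
    (then 'boot cdrom' again to run the installer)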
