ZFS - overfilled pool

I installed Solaris 11 Express on my server machine a while ago. I created a RAID-Z2 pool over five HDDs and created a few ZFS filesystems on it.
Once I (unintentionally) managed to fill the pool completely with data, and (to my surprise) the filesystems stopped working - I could not read or delete any data, and once I had unmounted the pool I could not even mount it again.
I've heard that this is standard behavior of ZFS and that the correct way to avoid such problems in the future is not to use the full capacity of the pool.
Now I'm thinking about setting quotas on my filesystems (as described in the article "ZFS: Set or create a filesystem quota"), but I am wondering whether that is enough.
I have a tree hierarchy of filesystems on the pool, something like this (pool is the name of the zpool and also the name of the root filesystem on the pool):
/pool
/pool/svn
/pool/home
Is it OK to just set a quota on "pool" (as it should propagate to all sub-filesystems)? I mean, is this enough to prevent such an event from happening again? For instance, would it prevent me from taking a new fs snapshot once the quota is reached?
How much space should I reserve, i.e. make unavailable (I read somewhere that it is good practice to use only about 80% of the pool capacity)?
Finally, is there a better/more suitable solution to my original problem than setting quotas on the filesystems?
Thank you very much for your advice.
Dusan

ZFS has a couple of different quotas, and each of them applies only to the dataset it has been configured on. That means the quota properties do not propagate down to descendant file systems, so keep that in mind. Another thing one should always keep in mind is that ZFS gets really slow if you fill the underlying zpool more than about 80% - so watch out for that as well.
Now for the quotas - there are basically two of them, plus one thing called a reservation, which is also a good way to keep a zfs fs from filling up completely.
quota vs. refquota: quota sets the amount of space a zfs fs can occupy including all the data that belongs to that zfs fs - so snapshots and descendant datasets are included - whereas refquota only refers to the amount of "active" data of that zfs fs, which leaves snapshots out of the equation. You'll want to use refquota e.g. on file servers, where you want to enforce a quota on the actual working data but want to be free to create as many snapshots as you like.
Reservations ensure the minimum space that has to be available to the zfs fs they are set on. If you set a reservation of 10G on each zfs fs, you won't be able to fill your zpool beyond the capacity of the zpool minus the combined reservations.
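For example, a minimal sketch of how these properties could be set on the filesystems from the question (the sizes are placeholders, not recommendations):
    # limit the active data of pool/home to 100G; snapshots do not count against it
    zfs set refquota=100G pool/home
    # limit everything pool/svn uses - data, snapshots and descendants - to 50G
    zfs set quota=50G pool/svn
    # guarantee pool/home at least 20G even if the rest of the pool fills up
    zfs set reservation=20G pool/home
    # verify the settings
    zfs get quota,refquota,reservation pool/home pool/svn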
You can read about this in the excellent Oracle® Solaris ZFS Administration Guide, which is available on Oracle's site.
Cheers,
budy

Similar Messages

  • Scrub ZFS root pool

    Does anyone see any issue in having a cron job that scrubs the ZFS root pool rpool periodically?
    Let's say every Sunday at midnight (00:00 of Sunday).
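    A root crontab entry implementing that schedule might look like this (a sketch; the fields are minute, hour, day of month, month and day of week, with 0 meaning Sunday):
        # scrub rpool every Sunday at 00:00
        0 0 * * 0 /usr/sbin/zpool scrub rpool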

    Hi,
    What you need to do is very easy. Here is a procedure that I use:
    1) make a snapshot of the rpool
           zfs snapshot -r rpool@<snapshotname>
    2) send that snapshot somewhere safe (the dump and swap snapshots are not needed, so drop them first)
           zfs destroy rpool/dump@<snapshotname>
           zfs destroy rpool/swap@<snapshotname>
           zfs send -R rpool@<snapshotname> | gzip > /net/<ipaddress>/<share>/<snapshotname>.gz
    3) Once the above is done you can do the following.
         Boot from DVD, make sure you have a disk available and start creating the rpool.
         The rpool can be created with an EFI or SMI label, for example:
           to use an EFI label: zpool create rpool c0d0
           to use an SMI label: zpool create rpool c0d0s0 => make sure that the disk is labeled and that all cylinders are in s0.
    4) create the new rpool that will hold the boot environment
          zpool create rpool <disk>
    5) import the data again.
         gzcat /mnt/<snapshotname>.gz | zfs receive -Fv rpool
         zfs create -V 4G rpool/dump
         zfs create -V 4G rpool/swap
    6) check the list of boot environments, reinstall the boot loader and activate the restored boot environment
            beadm list
            beadm mount <bootenv> /tmp/mnt
            bootadm install-bootloader -P rpool
            devfsadm -Cn -r /tmp/mnt
            touch /tmp/mnt/reconfigure
            beadm umount <bootenv>
            beadm activate <bootenv>
    This is for Solaris 11, but it also works for Solaris 10; only the last part, number 6, is different.
    I need to look this up again, but if I remember correctly, for Solaris 10 you need to set the bootfs property on the rpool.
    If you want, I have a script that makes a backup of the rpool to an NFS share.
    Hope this helps
    Regards
    Filip

  • Question about using ZFS root pool for whole disk?

    I played around with the newest version of Solaris 10/08 over my vacation by loading it onto a V210 with dual 72GB drives. I used the ZFS root partition configuration and all seemed to go well.
    After I was done, I wondered whether using the whole disk for the zpool was a good idea or not. I did some looking around, but I didn't see anything that suggested it was good or bad.
    I already know about some of the flash archive issues and will be playing with those shortly, but I am curious how others are setting up their root ZFS pools.
    Would it be smarter to set up, say, a 9GB slice on both drives so that the root zpool is created on one and mirrored to the other, and then create another ZFS pool from the remaining disk space?

    route1 wrote:
    > Just a word of caution when using ZFS as your boot disk. There are tons of bugs in ZFS boot that can make the system un-bootable and un-recoverable.
    Can you expand upon that statement with supporting evidence (BugIDs and such)? I have a number of local zones (sparse and full) on three Sol10u6 SPARC machines and they've been booting fine. I am having problems LiveUpgrading (lucreate) that I'm scratching my head to resolve. But I haven't had any ZFS boot/root corruption.
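    A layout along the lines the original poster describes could look roughly like this (a sketch only; device and pool names are illustrative, and on SPARC the second disk still needs its boot block installed afterwards):
        # after installing to a 9 GB slice 0 on the first disk, mirror the root pool to the second disk
        zpool attach rpool c0t0d0s0 c0t1d0s0
        # build a separate mirrored data pool from the remaining space (slice 7 on both disks)
        zpool create datapool mirror c0t0d0s7 c0t1d0s7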

  • How so I protect my root file system? - x86 solaris 10 - zfs data pools

    Hello all:
    I'm new to ZFS and am trying to understand it better before I start building a new file server. I'm looking for a low-cost file server for smaller projects I support and would like to use the ZFS capabilities. If I install Solaris 10 on an x86 platform and add a bunch of drives to it to create a zpool (raidz), how do I protect my root filesystem? The files in the ZFS file system are well protected, but what about my operating system files down in the root ufs filesystem? If the root filesystem gets corrupted, do I lose the zfs filesystem too? Or can I independently rebuild the root filesystem and just remount the zfs filesystem? Should I install Solaris 10 on a mirrored set of drives? Can the root filesystem be zfs too? I'd like to be able to use a fairly simple PC to do this, perhaps one that doesn't have built-in RAID. I'm not looking for 10 terabytes of storage, maybe just four 500GB SATA disks connected into a raidz zpool.
    thanks,

    patrickez wrote:
    > If I install Solaris 10 on an x86 platform and add a bunch of drives to it to create a zpool (raidz), how do I protect my root filesystem?
    Solaris 10 doesn't yet support ZFS for a root filesystem, but it is working in some OpenSolaris distributions.
    You could use Sun Volume Manager to create a mirror for your root filesystem.
    > The files in the ZFS file system are well protected, but what about my operating system files down in the root ufs filesystem? If the root filesystem gets corrupted, do I lose the zfs filesystem too?
    No. They're separate filesystems.
    > Or can I independently rebuild the root filesystem and just remount the zfs filesystem?
    Yes. (Actually, you can import the ZFS pool you created.)
    > Should I install Solaris 10 on a mirrored set of drives?
    If you have one, that would work as well.
    > Can the root filesystem be zfs too?
    Not currently in Solaris 10. The initial root support in OpenSolaris will require the root pool to be only a single disk or mirrors. No striping, no raidz.
    Darren
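    A minimal sketch of the import step Darren mentions, assuming the raidz pool was named tank (its data is untouched by a root reinstall):
        # list pools that are available for import
        zpool import
        # import the pool by name; -f is needed if it was never cleanly exported
        zpool import -f tank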

  • ZFS Root Pool Restore of a EFI labelled Disk

    Hi Team
    Please let me know the procedure for backup and restore of an EFI-labelled root pool using zfs send/receive.
    Note - the original operating system is installed on a T5 server with the latest firmware; here the default disk label is EFI instead of SMI, which was the default with earlier firmware versions.
    The operating system is Solaris 11.1.
    I also need to know how to expand a LUN formatted with an EFI label without losing its data.
    Expecting a positive response soon
    Regards
    Arun

    Hi,
    What you need to do is very easy. Here is a procedure that I use:
    1) make a snapshot of the rpool
           zfs snapshot -r rpool@<snapshotname>
    2) send that snapshot somewhere safe (the dump and swap snapshots are not needed, so drop them first)
           zfs destroy rpool/dump@<snapshotname>
           zfs destroy rpool/swap@<snapshotname>
           zfs send -R rpool@<snapshotname> | gzip > /net/<ipaddress>/<share>/<snapshotname>.gz
    3) Once the above is done you can do the following.
         Boot from DVD, make sure you have a disk available and start creating the rpool.
         The rpool can be created with an EFI or SMI label, for example:
           to use an EFI label: zpool create rpool c0d0
           to use an SMI label: zpool create rpool c0d0s0 => make sure that the disk is labeled and that all cylinders are in s0.
    4) create the new rpool that will hold the boot environment
          zpool create rpool <disk>
    5) import the data again.
         gzcat /mnt/<snapshotname>.gz | zfs receive -Fv rpool
         zfs create -V 4G rpool/dump
         zfs create -V 4G rpool/swap
    6) check the list of boot environments, reinstall the boot loader and activate the restored boot environment
            beadm list
            beadm mount <bootenv> /tmp/mnt
            bootadm install-bootloader -P rpool
            devfsadm -Cn -r /tmp/mnt
            touch /tmp/mnt/reconfigure
            beadm umount <bootenv>
            beadm activate <bootenv>
    This is for Solaris 11, but it also works for Solaris 10; only the last part, number 6, is different.
    I need to look this up again, but if I remember correctly, for Solaris 10 you need to set the bootfs property on the rpool.
    If you want, I have a script that makes a backup of the rpool to an NFS share.
    Hope this helps
    Regards
    Filip
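    For the Solaris 10 difference Filip mentions, a minimal sketch (assuming the restored boot environment dataset is rpool/ROOT/<be-name> and a SPARC boot disk of c0d0s0 - adjust both to your system):
            # tell the pool which dataset to boot from
            zpool set bootfs=rpool/ROOT/<be-name> rpool
            # reinstall the ZFS boot block (Solaris 10 SPARC example)
            installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0d0s0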

  • Can ZFS storage pools share a physical drive w/ the root (UFS) file system?

    I wonder if I'm missing something here, because I was under the impression ZFS offered ultimate flexibility until I encountered the following fine print 50 pages into the ZFS Administration Guide:
    "Before creating a storage pool, you must determine which devices will store your data. These devices must be disks of at least 128 Mbytes in size, and _they must not be in use by other parts of the operating system_. The devices can be individual slices on a preformatted disk, or they can be entire disks that ZFS formats as a single large slice."
    I thought it was frustrating that ZFS couldn't be used as a boot disk, but the fact that I can't even use the rest of the space on the boot drive for ZFS is aggravating. Or am I missing something? The following text appears elsewhere in the guide, and suggests that I can use the 7th slice:
    "A storage device can be a whole disk (c0t0d0) or _an individual slice_ (c0t0d0s7). The recommended mode of operation is to use an entire disk, in which case the disk does not need to be specially formatted."
    Currently, I've just installed Solaris 10 (6/11) on an Ultra 10. I removed the slice for /export/users (c0t0d0s7) from the default layout during the installation. So there's approx 6 GB in UFS space, and 1/2 GB in swap space. I want to make the 70GB of unused HDD space a ZFS pool.
    Suggestions? I read somewhere that the other slices must be unmounted before creating a pool. How do I unmount the root partition, then use the ZFS tools that reside in that unmounted space to create a pool?
    Edited by: MindFuq on Oct 20, 2007 8:12 PM

    It's not convenient for me to post that right now, because my ultra 10 is offline (for some reason the DNS never got set up properly, and creating an /etc/resolv.conf file isn't enough to get it going).
    Anyway, you're correct, I can see that there is overlap with the cylinders.
    During installation, I removed slice 7 from the table. However, under the covers the installer created a 'backup' partition (slice 2), which used the rest of the space (~74.5GB), so the installer didn't leave the space unused as I had expected. Strangely, the backup partition overlapped; it started at zero as the swap partition did, and it ended ~3000 cylinders beyond the root partition. I trusted the installer to be correct about things, and simply figured it was acceptable for multiple partitions to share a cylinder. So I deleted slice 2, and created slice 7 using the same boundaries as slice 2.
    So next I'll have to remove the zfs pool, and shrink slice 7 so it goes from cylinder 258 to ~35425.
    [UPDATE] It worked. Thanks Alex! When I ran zpool create tank c0t0d0s7, there was no error.
    Edited by: MindFuq on Oct 22, 2007 8:15 PM
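    For anyone following along, the slice boundaries can be double-checked before creating the pool (a sketch; the device names match the thread):
        # print the VTOC so you can verify slice 7 does not overlap the root or swap slices
        prtvtoc /dev/rdsk/c0t0d0s2
        # then create the pool on the dedicated slice
        zpool create tank c0t0d0s7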

  • Zfs storage pools with EMC Snapview clones

    We're currently using Veritas VM and Snapview clones to present multiple copies of our database data to the same host. I've found that Veritas doesn't like multiple copies of the same disk group being imported on a host and found a way around this. My question is: does zfs have this problem? If I switch to zfs for our new system and create a storage pool for each group of data files, (we put the data files with our tables on one filesystem, indexes on another, redo logs another, etc.) can I mount a clone of the indexes storage pool (for example) on the same host as the original?

    > We're currently using Veritas VM and Snapview clones to present multiple copies of our database data to the same host. I've found that Veritas doesn't like multiple copies of the same disk group being imported on a host and found a way around this.
    VxVM 5.0 has some tools to redo the ID on the copy. That might make it easier to deal with. Of course that brings up other issues if you ever want to use that copy and roll it back as primary.
    (If you don't have 5.0, you can still do it, but it's a lot more fiddly.)
    > My question is: does zfs have this problem? If I switch to zfs for our new system and create a storage pool for each group of data files (we put the data files with our tables on one filesystem, indexes on another, redo logs on another, etc.), can I mount a clone of the indexes storage pool (for example) on the same host as the original?
    No. Same issue. The pool/volume group has a (hopefully) unique identifier to find the pieces of the storage. When the identified piece starts showing up in multiple locations, it knows things are wrong. You'd have to have some method of modifying that data on the copied disks. Today, I don't think there's any support in ZFS for doing that.
    Darren

  • S10 x86 ZFS on VMWare - Increase root pool?

    I'm running Solaris 10 x86 on VMWare.
    I need more space in the zfs root pool.
    I doubled the provisioned space in Hard disk 1, but it is not visible to the VM (format).
    I tried creating a 2nd HD, but root pool can't have multiple VDEVs.
    How can I add space to my root pool without rebuilding?

    Hi,
    This is what I did in single user (it may fail in multi user):
    -> format -> partition -> print
    Current partition table (original):
    Total disk cylinders available: 1302 + 2 (reserved cylinders)
    Part Tag Flag Cylinders Size Blocks
    0 root wm 1 - 1301 9.97GB (1301/0/0) 20900565
    1 unassigned wm 0 0 (0/0/0) 0
    2 backup wm 0 - 1301 9.97GB (1302/0/0) 20916630
    3 unassigned wm 0 0 (0/0/0) 0
    4 unassigned wm 0 0 (0/0/0) 0
    5 unassigned wm 0 0 (0/0/0) 0
    6 unassigned wm 0 0 (0/0/0) 0
    7 unassigned wm 0 0 (0/0/0) 0
    8 boot wu 0 - 0 7.84MB (1/0/0) 16065
    9 unassigned wm 0 0 (0/0/0) 0
    -> format -> fdisk
    Total disk size is 1566 cylinders
    Cylinder size is 16065 (512 byte) blocks
    Cylinders
    Partition Status Type Start End Length %
    ========= ====== ============ ===== === ====== ===
    1 Active Solaris2 1 1304 1304 83
    -> format -> fdisk -> delete partition 1
    -> format -> fdisk -> create SOLARIS2 partition with 100% of the disk
    Total disk size is 1566 cylinders
    Cylinder size is 16065 (512 byte) blocks
    Cylinders
    Partition Status Type Start End Length %
    ========= ====== ============ ===== === ====== ===
    1 Active Solaris2 1 1565 1565 100
    format -> partition -> print
    Current partition table (original):
    Total disk cylinders available: 1563 + 2 (reserved cylinders)
    Part Tag Flag Cylinders Size Blocks
    0 unassigned wm 0 0 (0/0/0) 0
    1 unassigned wm 0 0 (0/0/0) 0
    2 backup wu 0 - 1562 11.97GB (1563/0/0) 25109595
    3 unassigned wm 0 0 (0/0/0) 0
    4 unassigned wm 0 0 (0/0/0) 0
    5 unassigned wm 0 0 (0/0/0) 0
    6 unassigned wm 0 0 (0/0/0) 0
    7 unassigned wm 0 0 (0/0/0) 0
    8 boot wu 0 - 0 7.84MB (1/0/0) 16065
    9 unassigned wm 0 0 (0/0/0) 0
    -> format -> partition -> 0 cyl=1 size=1562e
    Current partition table (unnamed):
    Total disk cylinders available: 1563 + 2 (reserved cylinders)
    Part Tag Flag Cylinders Size Blocks
    0 unassigned wm 1 - 1562 11.97GB (1562/0/0) 25093530
    1 unassigned wm 0 0 (0/0/0) 0
    2 backup wu 0 - 1562 11.97GB (1563/0/0) 25109595
    3 unassigned wm 0 0 (0/0/0) 0
    4 unassigned wm 0 0 (0/0/0) 0
    5 unassigned wm 0 0 (0/0/0) 0
    6 unassigned wm 0 0 (0/0/0) 0
    7 unassigned wm 0 0 (0/0/0) 0
    8 boot wu 0 - 0 7.84MB (1/0/0) 16065
    9 unassigned wm 0 0 (0/0/0) 0
    -> format -> partition -> label
    zpool set autoexpand=on rpool
    zpool list
    zpool scrub rpool
    zpool status
    Best regards,
    Ibraima
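    If the pool does not pick up the new size after the relabel, the expansion can also be triggered explicitly on releases that support it (a sketch; the device name is illustrative):
        # ask ZFS to expand this device of the root pool to its new size
        zpool online -e rpool c1t0d0s0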

  • Performance problems when running PostgreSQL on ZFS and tomcat

    Hi all,
    I need help with some analysis and problem solution related to the below case.
    The long story:
    I'm running into some massive performance problems on two 8-way HP ProLiant DL385 G5 servers with 14 GB RAM and a ZFS storage pool in raidz configuration. The servers are running Solaris 10 x86 10/09.
    The configuration between the two is pretty much the same and the problem therefore seems generic for the setup.
    Within a non-global zone I’m running a tomcat application (an institutional repository) connecting via localhost to a Postgresql database (the OS provided version). The processor load is typically not very high as seen below:
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU                            
        49 postgres  749M  669M   4,7%   7:14:38  13%
         1 jboss    2519M 2536M    18%  50:36:40 5,9%
    We are not 100% sure why we run into performance problems, but when it happens we experience that the application slows down and swaps out (according to below). When it settles, everything seems to turn back to normal. When the problem is acute, the application is totally unresponsive.
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
        1 jboss    3104M  913M   6,4%   0:22:48 0,1%
    #sar -g 5 5
    SunOS vbn-back 5.10 Generic_142901-03 i86pc    05/28/2010
    07:49:08  pgout/s ppgout/s pgfree/s pgscan/s %ufs_ipf
    07:49:13    27.67   316.01   318.58 14854.15     0.00
    07:49:18    61.58   664.75   668.51 43377.43     0.00
    07:49:23   122.02  1214.09  1222.22 32618.65     0.00
    07:49:28   121.19  1052.28  1065.94  5000.59     0.00
    07:49:33    54.37   572.82   583.33  2553.77     0.00
    Average     77.34   763.71   771.43 19680.67     0.00
    Making more memory available to tomcat seemed to worsen the problem, or at least didn’t prove to have any positive effect.
    My suspicion is currently focused on PostgreSQL. Turning off fsync boosted performance and made the problem appear less often.
    An unofficial performance evaluation on the database with “vacuum analyze” took 19 minutes on the server and only 1 minute on a desktop pc. This is horrific when taking the hardware into consideration.
    The short story:
    I’m trying different steps but running out of ideas. We’ve read that the database block size and file system block size should match. PostgreSQL uses 8 KB blocks, while the default ZFS recordsize is 128 KB. I didn’t find much information on the matter, so if anyone can help, please recommend how to make this change…
    Any other recommendations and ideas we could follow? We know from other installations that the above setup runs without a single problem on Linux on much smaller hardware without specific tuning. What makes Solaris in this configuration so darn slow?
    Any help appreciated and I will try to provide additional information on request if needed…
    Thanks in advance,
    Kasper

    raidz isn't a good match for databases. Databases tend to require good write performance, for which mirroring works better.
    Adding a pair of SSDs as a ZIL (log device) would probably also help, but chances are it's not an option for you.
    You can change the record size with "zfs set recordsize=8k <dataset>".
    It will only take effect for newly written data, not existing data.
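    Because the new recordsize only applies to blocks written after the change, one way to get the whole database onto 8k records is to create a fresh dataset and copy the data into it (a sketch; the dataset and directory names are made up for illustration):
        # new dataset with an 8k recordsize for the PostgreSQL data directory
        zfs create -o recordsize=8k pool/pgdata
        # stop PostgreSQL, copy the cluster over, then point it at the new location
        cp -rp /var/postgres/data/* /pool/pgdata/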

  • More Major Issues with ZFS + Dedup

    I'm having more problems - this time, very, very serious ones, with ZFS and deduplication. Deduplication is basically making my iSCSI targets completely inaccessible to the clients trying to access them over COMSTAR. I have two commands right now that are completely hung:
    1) zfs destroy pool/volume
    2) zfs set dedup=off pool
    The first command I started four hours ago, and it has barely removed 10G of the 50G that were allocated to that volume. It also seems to periodically cause the ZFS system to stop responding to any other I/O requests, which in turn causes major issues on my iSCSI clients. I cannot kill or pause the destroy command, and I've tried renicing it, to no avail. If anyone has any hints or suggestions on what I can do to overcome this issue, I'd very much appreciate that. I'm open to suggestions that will kill the destroy command, or will at least reprioritize it such that other I/O requests have precedence over this destroy command.
    Thanks,
    Nick

    To add some more detail, I've been reviewing iostat and zpool iostat for a couple of hours, and am seeing some very, very strange behavior. There seem to be three distinct patterns going on here.
    The first is extremely heavy writes. Using zpool iostat, I see write bandwidth in the 15MB/s range sustained for a few minutes. I'm guessing this is when ZFS is allowing normal access to volumes and when it is actually removing some of the data for the volume I tried to destroy. This only lasts for two to three minutes at a time before progressing to the next pattern.
    The second pattern is characterized by heavy, heavy read access - several thousand read operations per second and several MB/s of bandwidth. This also lasts for five or ten minutes before proceeding to the third pattern. During this time there is very little, if any, write activity.
    The third and final pattern is characterized by absolutely no write activity (0s in both the write ops/sec and write bandwidth columns) and very, very little read activity. By small read activity, I mean 100-200 read ops per second and 100-200K of read bandwidth per second. This lasts for 30 to 40 minutes, and then the pattern goes back to the first one.
    I have no idea what to make of this, and I'm out of my league in terms of ZFS tools to figure out what's going on. This is extremely frustrating because all of my iSCSI clients are essentially dead right now - this destroy command has completely taken over my ZFS storage, and it seems like all I can do is sit and wait for it to finish, which, as this rate, will be another 12 hours.
    Also, during this time, if I look at the plain iostat command, I see that the read ops for the physical disk and the actv are within normal ranges, as are asvc_t and %w. %b, however is pegged at 99-100%.
    Edited by: Nick on Jan 4, 2011 10:57 AM
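    For what it's worth, the usual suspect for destroys this slow is a dedup table that no longer fits in RAM; its size can be inspected with zdb (a sketch, run against the affected pool):
        # print dedup table statistics and histogram for the pool
        zdb -DD <pool>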

  • Solaris 10 (sparc) + ZFS boot + ZFS zonepath + liveupgrade

    I would like to set up a system like this:
    1. Boot device on 2 internal disks in ZFS mirrored pool (rpool)
    2. Non-global zones on external storage array in individual ZFS pools e.g.
    zone alpha has zonepath=/zones/alpha where /zones/alpha is mountpoint for ZFS dataset alpha-pool/root
    zone bravo has zonepath=/zones/bravo where /zones/bravo is mountpoint for ZFS dataset bravo-pool/root
    3. Ability to use liveupgrade
    I need the zones to be separated on external storage because the intent is to use them in failover data services within Sun Cluster (er, Solaris Cluster).
    With Solaris 10 10/08, it looks like I can do 1 & 2 but not 3 or I can do 1 & 3 but not 2 (using UFS instead of ZFS).
    Am I missing something that would allow me to do 1, 2, and 3? If not is such a configuration planned to be supported? Any guess at when?
    --Frank

    Nope, that is still work in progress. Quite frankly, I wonder if you would even want such a feature considering the way the filesystem works. It is possible to recover if your OS doesn't boot anymore by forcing your rescue environment to import the zfs pool, but it's less elegant than merely mounting a specific slice.
    I think zfs is ideal for data and data-like places (/opt, /export/home, /opt/local), but I somewhat question the advantages of moving slices like / or /var into it. It's too early to draw conclusions since the product isn't ready yet, but at this moment I can only think of disadvantages.

  • Can't boot with zfs root - Solaris 10 u6

    Having installed Solaris 10 u6 on one disk with native ufs and made this work by adding the following entries
    /etc/driver_aliases
    glm pci1000,f
    /etc/path_to_inst
    <long pci string for my scsi controller> glm
    which are needed since the driver selected by default is the ncrs scsi controller driver, which does not work in 64-bit mode.
    Now I would like to create a new boot env. on a second disk on the same scsi controller, but use zfs instead.
    Using Live Upgrade to create a new boot env on the second disk with zfs as file system worked fine.
    But when trying to boot of it I get the following error
    spa_import_rootpool: error 22
    panic[cpu0]/thread=fffffffffbc26ba0: cannot mount root path /pci@0,0-pci1002,4384@14,4/pci1000@1000@5/sd@1,0:a
    Well that's the same error I got with ufs before making the above-mentioned changes to /etc/driver_aliases and path_to_inst.
    But that seems not to be enough when using zfs.
    What am I missing ??

    Hmm I dropped the live upgrade from ufs to zfs because I was not 100% sure it worked.
    Then I did a reinstall, selecting zfs during the install, and made the changes to driver_aliases and path_to_inst before the first reboot.
    The system came up fine on the first reboot, used the glm scsi driver and ran in 64-bit mode.
    But that was it. When the system was then rebooted (at which point it built a new boot archive) it stopped working. Same error as before.
    I have managed to get it to boot in 32-bit mode, but I still get the same error (that's independent of which scsi driver is used).
    In all cases it does display the SunOS Release banner, it does load the driver (ncrs or glm) and it detects the disks at the correct path and numbering.
    But it fails to mount the root file system.
    So basically the current status is no-go if you need the ncrs/glm scsi driver to access the disks holding your zfs root pool.
    Failsafe boot works and can mount the zfs root pool, but that's no fun as a server OS :(
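    In case it helps anyone hitting the same panic: since the failure shows up right after a new boot archive is built, one thing worth trying from the failsafe environment (which can mount the ZFS root BE on /a) is rebuilding the archive against the on-disk root - a sketch, not a confirmed fix:
        # regenerate the boot archive for the root mounted on /a, then reboot
        bootadm update-archive -R /a
        umount /a
        reboot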

  • JET install and ZFS failure

    Hi - I have a JET (jumpstart) server that I've used many times before to install various Solaris SPARC servers with - from V240's to T4-1's. However when I try to install a brand new T4-2 I keep seeing this on screen and the install reverts to a manual install:
    svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: one or more file systems failed to mount
    There's been a previous post about this but I can't see the MOS doc that is mentioned in the last post.
    The server came pre-installed with Sol11 and I can see the disks:
    AVAILABLE DISK SELECTIONS:
           0. c0t5000CCA016C3311Cd0 <HITACHI-H109030SESUN300G-A31A cyl 46873 alt 2 hd 20 sec 625>  solaris
              /scsi_vhci/disk@g5000cca016c3311c
              /dev/chassis//SYS/SASBP/HDD0/disk
           1. c0t5000CCA016C33AB4d0 <HITACHI-H109030SESUN300G-A31A cyl 46873 alt 2 hd 20 sec 625>  solaris
              /scsi_vhci/disk@g5000cca016c33ab4
              /dev/chassis//SYS/SASBP/HDD1/disk
    If I drop to the ok prompt there is no hardware RAID configured and raidctl also shows nothing:
    root@solarist4-2:~# raidctl
    root@solarist4-2:~#
    The final post I've found on this forum for someone with this same problem was "If you have an access to MOS, please check doc ID 1008139.1"
    Any help would be appreciated.
    Thanks - J.

    Hi Julian,
    I'm not convinced that your problem is the same one that is described in this discussion:
    Re: Problem installing Solaris 10 1/13, disks no found
    Do you see the missing volume message (Volume 130 is missing) as described in this thread?
    A google search shows that there are issues with a T4 Solaris 10 install due to a network driver problem, and also if the system is using
    a virtual CD or device through an LDOM.
    What happens when you boot your T4 from the installation media or server into single-user mode? You say that you can see the disks, but can you create a ZFS storage pool on one of these disks manually:
    # zpool create test c0t5000CCA016C3311Cd0s0
    # zpool destroy test
    For a T4 and a Solaris 10 install, the disk will need an SMI (VTOC) label, but I would expect a different error message if that was a problem.
    Thanks, Cindy

  • Cloning a ZFS rooted zone does a copy rather than snapshot and clone?

    Solaris 10 05/08 and 10/08 on SPARC
    When I clone an existing zone that is stored on a ZFS filesystem, the system creates a copy rather than taking a ZFS snapshot and clone as the documentation suggests;
    Using ZFS to Clone Non-Global Zones and Other Enhancements
    Solaris 10 6/06 Release: When the source zonepath and the target zonepath both reside on ZFS and are in the same pool,
    zoneadm clone now automatically uses the ZFS clone feature to clone a zone. This enhancement means that zoneadm
    clone will take a ZFS snapshot of the source zonepath and set up the target zonepath.
    Currently I have a ZFS root pool for the global zone; the boot environment is s10u6:
    rpool 10.4G 56.5G 94K /rpool
    rpool/ROOT 7.39G 56.5G 18K legacy
    rpool/ROOT/s10u6 7.39G 56.5G 6.57G /
    rpool/ROOT/s10u6/zones 844M 56.5G 27K /zones
    rpool/ROOT/s10u6/zones/moetutil 844M 56.5G 844M /zones/moetutil
    My first zone is called moetutil and is up and running. I create a new zone ready to clone the original one;
    -bash-3.00# zonecfg -z newzone 'create; set autoboot=true; set zonepath=/zones/newzone; add net; set address=192.168.0.10; set physical=ce0; end; verify; commit; exit'
    -bash-3.00# zoneadm list -vc
    ID NAME STATUS PATH BRAND IP
    0 global running / native shared
    - moetutil installed /zones/moetutil native shared
    - newzone configured /zones/newzone native shared
    Now I clone it;
    -bash-3.00# zoneadm -z newzone clone moetutil
    Cloning zonepath /zones/moetutil...
    I'm expecting to see;
    -bash-3.00# zoneadm -z newzone clone moetutil
    Cloning snapshot rpool/ROOT/s10u6/zones/moetutil@SUNWzone1
    Instead of copying, a ZFS clone has been created for this zone.
    What am I missing?
    Thanks
    Mark

    Hi Mark,
    Sorry, I don't have an answer but I'm seeing the exact same behavior - also with S10u6. Please let me know if you get an answer.
    Thanks!
    Dave
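    For reference, the behaviour the documentation describes is roughly equivalent to doing the snapshot and clone by hand (a sketch using the dataset names from the listing above; zoneadm is supposed to do this automatically when the conditions are met):
        # snapshot the source zone's dataset
        zfs snapshot rpool/ROOT/s10u6/zones/moetutil@SUNWzone1
        # clone it to serve as the new zone's zonepath dataset
        zfs clone rpool/ROOT/s10u6/zones/moetutil@SUNWzone1 rpool/ROOT/s10u6/zones/newzone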

  • SAN Replication with ZFS

    We have Solaris 10 Oracle database servers, with ZFS filesystems on an HP XP SAN array. We are going to replicate the data to a backup data center using the XP array itself. We will build identical servers at the backup data center. My question is, how will the servers at the backup data center recognize the data once it is replicated by the SAN? Will I simply need to import the zpool, and is it that simple? I can't find any documentation on the topic. Thanks.

    I've used Business Copy (BC) on an HP XP1024 array and imported a copy of a ZFS storage pool on a different host using something like ( Veritas Volume Mgr example ) :
    zpool import -d /dev/vx/dsk/hpxpdg1 -f poolHPXP0
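    If the pool name is not known up front, running the import without a pool name first simply lists what is visible in that device directory (a sketch following the same example):
        # list pools that can be imported from the replicated devices
        zpool import -d /dev/vx/dsk/hpxpdg1
        # then import the one you want, forcing if it was never exported on the source host
        zpool import -d /dev/vx/dsk/hpxpdg1 -f poolHPXP0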
