Resize mirrored ZFS rpool

I have a ZFS rpool mirrored across 2 hard disks and I need to grow it to a larger capacity. Below are my questions:
1. Can we resize a mirrored rpool?
2. If yes, how? (Please point me to the documentation.) Thanks.
3. If the rpool is on local disks and I have other storage LUNs that can be attached, what is the best practice for increasing the rpool size?
4. If resizing is not possible, can someone shed some light on how we could migrate a mount point from the rpool (local disk) to a new pool (SAN storage)? Please advise.
Thank you.

Yes, it is possible to increase the root pool size, either by attaching a larger disk and then detaching the smaller disk, or by using zpool replace to do an outright replacement. If you attach a disk from remote storage, so that you have one local disk and one remote disk in the mirrored root pool, there is a risk that the pool comes up degraded if the remote disk is slow to appear after a boot; the same applies if you use remote storage for both root pool disks. Forcing the relevant drivers to load early in the boot process can help with this.
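For example, assuming the current mirror is on c0t0d0s0/c0t1d0s0 and the new, larger disk is c0t2d0 (device names here are made up), the attach/detach route on Solaris 10 looks roughly like this; the new disk needs an SMI (VTOC) label with the space in slice 0, and the doc below covers the exact steps for your release:
# zpool attach rpool c0t0d0s0 c0t2d0s0
# zpool status rpool          (wait for the resilver to complete)
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t2d0s0
  (on x86: installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t2d0s0)
# zpool detach rpool c0t0d0s0
Repeat with a second large disk to rebuild the mirror. On later Solaris 10 updates, "zpool set autoexpand=on rpool" (or "zpool online -e rpool <device>") makes the pool pick up the extra space once both sides of the mirror are the larger size.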
Which Solaris 10 release is this?
Does the existing root pool disk have existing unused slices or is all the disk space in slice 0?
There is a way to expand the root pool disk slice if unused slices exist on the disk. It's a bit more involved, but I can walk you through it.
The doc that describes root pool disk replacement is here:
http://docs.oracle.com/cd/E23823_01/html/819-5461/ghzvz.html#scrolltoc
How to Replace a Disk in the ZFS Root Pool
Thanks,
Cindy

Similar Messages

  • Install Solaris 11 on a RAID-1 mirror ZFS pool?

    Hi all,
    Sorry if this question has been asked here before.
    I've searched docs.oracle.com about Solaris 11 and didn't see any related info.
    I am installing the new Solaris 11 on my Dell Tower desktop workstation with two 3TB SATA hard drives.
    I am planning to set up a RAID-1 mirror with ZFS, just like my previous Solaris 10 x86 installation.
    After trying several of the Solaris 11 installation media, I couldn't find an option to create a RAID-1 mirror for the ZFS root partition/pool.
    Could someone give me a hint, or must I roll back to Solaris 10?
    Thanks in advance.

    Yes, it looks like this on a SPARC system:
    ok boot net:dhcp
    Boot device: /pci@780/pci@0/pci@1/network@0:dhcp  File and args:
    1000 Mbps full duplex  Link up
    Timed out waiting for BOOTP/DHCP reply
    <time unavailable> wanboot info: WAN boot messages->console
    <time unavailable> wanboot info: configuring /pci@780/pci@0/pci@1/network@0:dhcp
    1000 Mbps full duplex  Link up
    <time unavailable> wanboot info: Starting DHCP configuration
    <time unavailable> wanboot info: DHCP configuration succeeded
    <time unavailable> wanboot progress: wanbootfs: Read 368 of 368 kB (100%)
    <time unavailable> wanboot info: wanbootfs: Download complete
    Mon Jul  1 14:28:03 wanboot progress: miniroot: Read 249370 of 249370 kB (100%)
    Mon Jul  1 14:28:03 wanboot info: miniroot: Download complete
    SunOS Release 5.11 Version 11.1 64-bit
    Copyright (c) 1983, 2012, Oracle and/or its affiliates. All rights reserved.
    Remounting root read/write
    Welcome to the Oracle Solaris installation menu
            1  Install Oracle Solaris
            2  Install Additional Drivers
            3  Shell
            4  Terminal type (currently xterm)
            5  Reboot
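    As far as I know, the Solaris 11 interactive installers only let you pick a single disk for the root pool, so the usual approach is to install to one disk and attach the second one afterwards (use whatever device naming zpool status shows for the existing rpool disk; c7t0d0/c7t1d0 below are placeholders):
    # zpool attach rpool c7t0d0 c7t1d0
    # zpool status rpool          (wait for the resilver to complete)
    On Solaris 11.1 the boot blocks should be applied to the new disk automatically on attach; if not, "bootadm install-bootloader" can reinstall them. With an Automated Installer (AI) manifest you can also describe a mirrored rpool up front in the target section.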

  • Resize the zfs on solaris 5.10

    hi All,
    I have Solaris 5.10 serving ZFS filesystems via EqualLogic iSCSI. I have resized the LUNs from the EqualLogic GUI, but I do not know how to grow them on the Solaris server. What are the steps, and can I do it while the filesystems are online?

    Thanks guys for your replies. Here is the output of the command "zpool status -v".
    Can someone give me a step-by-step on how to resize? Also, can I do the resize while the filesystems are online, and is there anything else I need to worry about during this resize process?
    pool: scratch1
    state: ONLINE
    scrub: none requested
    config:
    NAME STATE READ WRITE CKSUM
    scratch1 ONLINE 0 0 0
    c2t6090A01870227DAACFDFA437AB2792ADd0 ONLINE 0 0 0
    errors: No known data errors
    pool: wgsi1-1
    state: ONLINE
    scrub: none requested
    config:
    NAME STATE READ WRITE CKSUM
    wgsi1-1 ONLINE 0 0 0
    c2t0690A0184007CD6F716594E001003050d0 ONLINE 0 0 0
    c2t6090A01840075DB43DDFB4421A04A0A9d0 ONLINE 0 0 0
    errors: No known data errors
    pool: wgsi1-2
    state: ONLINE
    scrub: none requested
    config:
    NAME STATE READ WRITE CKSUM
    wgsi1-2 ONLINE 0 0 0
    c2t0690A01840079D717165D4E0010090B5d0 ONLINE 0 0 0
    c2t6090A01840078DAC3DDF74351A04C019d0 ONLINE 0 0 0
    errors: No known data errors
    pool: wgsi1-3
    state: ONLINE
    scrub: none requested
    config:
    NAME STATE READ WRITE CKSUM
    wgsi1-3 ONLINE 0 0 0
    c2t0690A01840075D73716514E1010030FBd0 ONLINE 0 0 0
    c2t6090A01840074D9F3DDF341F1A04A09Dd0 ONLINE 0 0 0
    errors: No known data errors
    pool: wgsi1-4
    state: ONLINE
    scrub: none requested
    config:
    NAME STATE READ WRITE CKSUM
    wgsi1-4 ONLINE 0 0 0
    c2t0690A01840070D75716554E1010090C3d0 ONLINE 0 0 0
    c2t6090A01840077D933DDF640B1A042020d0 ONLINE 0 0 0
    errors: No known data errors
    pool: wgsi1-5
    state: ONLINE
    scrub: none requested
    config:
    NAME STATE READ WRITE CKSUM
    wgsi1-5 ONLINE 0 0 0
    c2t0690A01840078D76716594E10100F05Dd0 ONLINE 0 0 0
    c2t6090A0184007BD703DDF64D1190420E6d0 ONLINE 0 0 0
    errors: No known data errors
    pool: wgsi1-6
    state: ONLINE
    scrub: none requested
    config:
    NAME STATE READ WRITE CKSUM
    wgsi1-6 ONLINE 0 0 0
    c2t0690A01840073D787165D4E101005005d0 ONLINE 0 0 0
    c2t6090A0184007DD5C3DDFF4AF1904E022d0 ONLINE 0 0 0
    errors: No known data errors
    pool: wgsi1-7
    state: ONLINE
    scrub: none requested
    config:
    NAME STATE READ WRITE CKSUM
    wgsi1-7 ONLINE 0 0 0
    c2t0690A0184007DD79716514E20100700Cd0 ONLINE 0 0 0
    c2t6090A0184007BD2F3DDF1465190420B6d0 ONLINE 0 0 0
    errors: No known data errors
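    For what it's worth, once a LUN has been grown on the EqualLogic side, the general pattern on a reasonably recent Solaris 10 update is (using scratch1 from the output above as the example):
    # zpool set autoexpand=on scratch1
    # zpool online -e scratch1 c2t6090A01870227DAACFDFA437AB2792ADd0
    Both can be done with the filesystems online. If the device still reports the old size, the label may need to be refreshed first (format -> select the disk -> label), and on older Solaris 10 releases without autoexpand/online -e the fallback is attaching a bigger LUN as a mirror and detaching the old one. I'd take a backup before relabeling anything, just in case.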

  • Solaris 10/08 w/ ZFS rpool NFS doesn't mount at boot time?

    I've got an issue with NFS not mounting vfstab entered mounts during boot and was wondering if the latest Sol 10/08 with ZFS had anything to do with it.
    1) #vi /etc/vfstab
    server:/vol/share - /share nfs - yes rw,bg,hard,nointr,rsize=32768,wsize=32768,noac,vers=3,timeo=600
    server2:/vol/share2 - /share2 nfs - yes rw,bg,hard,nointr,rsize=32768,wsize=32768,noac,vers=3,timeo=600
    2) reboot -- -rv
    Upon rebooting, the shares do not show up, but doing a mountall mounts them finally:
    #mountall
    mount: /tmp is already mounted or swap is busy
    3) Doing a df -k at this point shows me they mounted.
    So what gives? Why are they not mounting at boot? I tried to get around it by removing the vfstab entries, adding them to auto_master and then using cachefs, but my employer was not satisfied with autofs and wants them mounted at boot and persistent via the vfstab. So what can be done to get this to work?
    Thank you,
    --tom

    It appears I didn't have all of the services set online correctly:
    #svcs |grep nfs
    online Jan_05 svc:/network/nfs/status:default
    online Jan_05 svc:/network/nfs/nlockmgr:default
    So, I enabled the following:
    #svcadm enable nfs/mapid
    #svcadm enable nfs/cbd
    #svcadm enable nfs/client
    Checked again:
    #svcs |grep nfs
    online Jan_05 svc:/network/nfs/status:default
    online Jan_05 svc:/network/nfs/nlockmgr:default
    online 15:48:24 svc:/network/nfs/mapid:default
    online 15:48:33 svc:/network/nfs/cbd:default
    online 15:48:42 svc:/network/nfs/client:default
    Tested to make sure no errors were reported:
    # svcs -xv
    Then rebooted again.
    Voila! It worked! Sometimes reading the documents on docs.sun.com pays off. I was just being too lazy to do it and gave up when I posted this. The solution is to check the services, ensure nfs/client is running correctly, and then try rebooting again.
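    For anyone else hitting this, the short version (assuming the same root cause) is simply:
    # svcadm enable -r svc:/network/nfs/client:default
    # svcs -xv
    # init 6
    The -r flag also enables the service's dependencies, and svcs -xv should come back clean before the reboot.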

  • Solaris 10 (sparc) + ZFS boot + ZFS zonepath + liveupgrade

    I would like to set up a system like this:
    1. Boot device on 2 internal disks in ZFS mirrored pool (rpool)
    2. Non-global zones on external storage array in individual ZFS pools e.g.
    zone alpha has zonepath=/zones/alpha where /zones/alpha is mountpoint for ZFS dataset alpha-pool/root
    zone bravo has zonepath=/zones/bravo where /zones/bravo is mountpoint for ZFS dataset bravo-pool/root
    3. Ability to use liveupgrade
    I need the zones to be separated on external storage because the intent is to use them in failover data services within Sun Cluster (er, Solaris Cluster).
    With Solaris 10 10/08, it looks like I can do 1 & 2 but not 3 or I can do 1 & 3 but not 2 (using UFS instead of ZFS).
    Am I missing something that would allow me to do 1, 2, and 3? If not is such a configuration planned to be supported? Any guess at when?
    --Frank

    Nope, that is still a work in progress. Quite frankly, I wonder if you would even want such a feature, considering the way the filesystem works. It is possible to recover when your OS no longer boots by forcing your rescue environment to import the ZFS pool, but it's less elegant than merely mounting a specific slice.
    I think ZFS is ideal for data and data-like places (/opt, /export/home, /opt/local), but I somewhat question the advantages of moving filesystems like / or /var into it. It's too early to draw conclusions since the product isn't finished yet, but at this moment I can only think of disadvantages.
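    For reference, the per-zone dataset layout described in the question can be set up roughly like this (the array device name is invented); it was the Live Upgrade part, not this part, that was the limitation at the time:
    # zpool create alpha-pool c4t600A0B800011223344d0
    # zfs create alpha-pool/root
    # zfs set mountpoint=/zones/alpha alpha-pool/root
    # chmod 700 /zones/alpha
    # zonecfg -z alpha
    zonecfg:alpha> create
    zonecfg:alpha> set zonepath=/zones/alpha
    zonecfg:alpha> commit
    zonecfg:alpha> exit
    The zonepath has to be mode 700 before zoneadm install will accept it.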

  • Jumpstart and zfs on new machine

    Hello
    I have been told to install Solaris via JumpStart and use ZFS.
    I guess the Solaris installer will ask me and then I can choose ZFS, so this shouldn't be an issue. However:
    Do I still need to set up mirroring using metadb, metattach, etc.? I thought I saw somewhere that there are different commands for mirroring ZFS.
    Also, if this is a new install of Solaris, can I use JumpStart at all yet, since I would need to get to the OK> prompt to run JumpStart (and there is no OS installed yet; after I exit the ILOM it goes into a Solaris installer)?
    Can anybody quickly tell me what I need to do to get JumpStart working quickly on this new box? We have a JumpStart server which I have inherited, but I'm not sure how to set the clients up. Is it just a case of adding config to the ethers file and that's it?
    Thanks

    wooke01 wrote:
    Hello
    I have been told to install Solaris via JumpStart and use ZFS.
    I guess the Solaris installer will ask me and then I can choose ZFS, so this shouldn't be an issue. However:
    Do I still need to set up mirroring using metadb, metattach, etc.? I thought I saw somewhere that there are different commands for mirroring ZFS.
    The meta* commands are not applicable with ZFS, as you do not have to use Solaris Volume Manager. ZFS integrates its own volume management, and you can easily construct striped, mirrored, raidz (RAID-5) and raidz2 (RAID-6) zpools.
    For example, to attach a mirror disk to your rpool, you just have to run "zpool attach rpool c0t0d0s0 c0t1d0s0".
    A "man zpool" should give you more information.
    >
    Also, if this is a new install of Solaris, can I use JumpStart at all yet, since I would need to get to the OK> prompt to run JumpStart (and there is no OS installed yet; after I exit the ILOM it goes into a Solaris installer)?
    Can anybody quickly tell me what I need to do to get JumpStart working quickly on this new box? We have a JumpStart server which I have inherited, but I'm not sure how to set the clients up. Is it just a case of adding config to the ethers file and that's it?
    Thanks
    From your client you have to boot from the network with "boot net - install", but before that you will have to create a new profile on your JumpStart server; I cannot tell you in detail how to do this. You should be able to find plenty of information on docs.sun.com about it, or perhaps somebody else can give more details.
    Groucho_fr
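    If you go the profile route, a minimal JumpStart profile sketch for a mirrored ZFS root (disk names assumed) looks like this:
    install_type initial_install
    cluster SUNWCXall
    pool rpool auto auto auto mirror c0t0d0s0 c0t1d0s0
    bootenv installbe bename s10be
    The fields of the "pool" keyword are pool name, pool size, swap size, dump size and the vdev list; "auto" lets the installer size them. On the JumpStart server you register the client with add_install_client, then boot it with "boot net - install" from the ok prompt.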

  • CLUSTERING WITH ZFS BOOT DISK

    hi guys,
    I'm looking to create a new cluster on two standalone servers.
    The two servers boot from a ZFS rpool, and I don't know whether the boot disk was laid out during installation with a dedicated slice for the global devices.
    Is it possible to install Sun Cluster with a ZFS rpool boot disk?
    What do I have to do?
    Alessio

    Hi!
    I have a 10-node Sun Cluster.
    All nodes have a ZFS rpool with a mirror.
    Is it better to create the mirrored ZFS boot disk after installation of Sun Cluster, or not? I create the ZFS mirror when I install the Solaris 10 OS, but I don't see any problem with doing this after installing Sun Cluster or Solaris 10 either.
    P.S. You may also use UFS for the global devices filesystem with a ZFS root.
    Anatoly S. Zimin

  • ZFS pool frequently going offline

    I am setting up some servers with ZFS RAIDs and am finding that all of them suffer from I/O errors that cause the pool to go offline (and when that happens everything freezes and I have to power cycle... then everything boots up fine).
    T1000, V245, and V240 systems all exhibit the same behavior.
    Root is mirrored ZFS.
    The RAID is configured as one big LUN (3 to 8 TB depending on the system) and that LUN is the entire pool. In other words, there is no ZFS redundancy; my thinking was that I would let the RAID handle that.
    Based on some searches I decided to try setting
    set sd:sd_max_throttle=20
    in /etc/system and rebooting, but that made no difference.
    My sense is that the troubles start when there is a lot of activity. I ran these for many days with light activity and no problems; the problems only started once I began migrating the data over from the old systems. Here is a typical error log:
    Jun 6 16:13:15 newserver scsi: WARNING: /pci@1f,700000/pci@0,2/scsi@1,1 (mpt3):
    Jun 6 16:13:15 newserver Connected command timeout for Target 0.
    Jun 6 16:13:15 newserver scsi: WARNING: /pci@1f,700000/pci@0,2/scsi@1,1 (mpt3):
    Jun 6 16:13:15 newserver Target 0 reducing sync. transfer rate
    Jun 6 16:13:16 newserver scsi: WARNING: /pci@1f,700000/pci@0,2/scsi@1,1/sd@0,0 (sd2):
    Jun 6 16:13:16 newserver SCSI transport failed: reason 'reset': retrying command
    Jun 6 16:13:19 newserver scsi: WARNING: /pci@1f,700000/pci@0,2/scsi@1,1/sd@0,0 (sd2):
    Jun 6 16:13:19 newserver Error for Command: read(10) Error Level: Retryable
    Jun 6 16:13:19 newserver scsi: Requested Block: 182765312 Error Block: 182765312
    Jun 6 16:13:19 newserver scsi: Vendor: IFT Serial Number: 086A557D-00
    Jun 6 16:13:19 newserver scsi: Sense Key: Unit Attention
    Jun 6 16:13:19 newserver scsi: ASC: 0x29 (power on, reset, or bus reset occurred), ASCQ: 0x0, FRU: 0x0
    Jun 6 16:13:19 newserver scsi: WARNING: /pci@1f,700000/pci@0,2/scsi@1,1/sd@0,0 (sd2):
    Jun 6 16:13:19 newserver incomplete read- retrying
    Jun 6 16:13:20 newserver scsi: WARNING: /pci@1f,700000/pci@0,2/scsi@1,1/sd@0,0 (sd2):
    Jun 6 16:13:20 newserver incomplete write- retrying
    Jun 6 16:13:20 newserver scsi: WARNING: /pci@1f,700000/pci@0,2/scsi@1,1/sd@0,0 (sd2):
    Jun 6 16:13:20 newserver incomplete write- retrying
    Jun 6 16:13:20 newserver scsi: WARNING: /pci@1f,700000/pci@0,2/scsi@1,1/sd@0,0 (sd2):
    Jun 6 16:13:20 newserver incomplete write- retrying
    <... ~80 similar lines deleted ...>
    Jun 6 16:13:21 newserver scsi: WARNING: /pci@1f,700000/pci@0,2/scsi@1,1/sd@0,0 (sd2):
    Jun 6 16:13:21 newserver incomplete read- retrying
    Jun 6 16:13:21 newserver scsi: WARNING: /pci@1f,700000/pci@0,2/scsi@1,1/sd@0,0 (sd2):
    Jun 6 16:13:21 newserver incomplete read- giving up
    At this point everything is hung and I am forced to power cycle.
    I'm very confused about how to proceed with this... since it is happening on all three systems, I am reluctant to blame the hardware.
    I would be very grateful to any suggestions on how to get out from under this!
    Thanks,
    David C

    Which S10 release are you running? You could try increasing the timeout value and see if that helps (see mpt(7d) - mpt-on-bus-time). It could be that when the RAID controller is busy, it takes longer to service something it is trying to correct. I've seen drives just go out to lunch for a while (presumably the SMART firmware is doing something) and then come back fine, but the delay in response causes problems.
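    If you want to try the timeout suggestion, the property goes in the driver's .conf file rather than /etc/system; check mpt(7d) on your release for the exact property name and units before touching it, but the shape is roughly:
    # echo 'mpt-on-bus-time=60;' >> /kernel/drv/mpt.conf
    # reboot -- -r
    Separately (my own suggestion, not from the reply above): the pool's failmode property decides whether I/O blocks (wait), returns errors (continue) or panics when the pool loses its device. "zpool get failmode <pool>" / "zpool set failmode=continue <pool>" may at least keep the box from freezing solid while you chase the underlying SCSI resets.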

  • OK to use fdisk/100% "SOLARIS System" partition for RAID6 Virtual Drive?

    Solaris newb here - I am configuring an x4270 with sixteen 135 GB drives. The basic approach is:
    D0, D1: RAID 1 (Boot volume, Solaris, Oracle Software)
    D2-D13: RAID 6 (Oracle dB files)
    D14, D15: global spares
    After configuring the RAID's w/WebBIOS Utility, I am now trying to format/partition the RAID 6 Virtual Drive, which shows up as 1.327 TB 'Optimal' in the MegaRAID Storage Manager. After hunting around the ether for advice on how to do this, I came across http://docs.oracle.com/cd/E23824_01/html/821-1459/disksxadd-50.html#disksxadd-54639
    "Creating a Solaris fdisk Partition That Spans the Entire Drive"
    which is painfully simple: after 'format', just do an 'fdisk' and accept the default 100% "SOLARIS System" partition. After doing this, partition>print and prtvtoc show this:
    partition> print
    Current partition table (original):
    Total disk cylinders available: 59125 + 2 (reserved cylinders)
    Part Tag Flag Cylinders Size Blocks
    0 unassigned wm 0 0 (0/0/0) 0
    1 unassigned wm 0 0 (0/0/0) 0
    2 backup wu 0 - 59124 1.33TB (59125/0/0) 2849529375
    3 unassigned wm 0 0 (0/0/0) 0
    4 unassigned wm 0 0 (0/0/0) 0
    5 unassigned wm 0 0 (0/0/0) 0
    6 unassigned wm 0 0 (0/0/0) 0
    7 unassigned wm 0 0 (0/0/0) 0
    8 boot wu 0 - 0 23.53MB (1/0/0) 48195
    9 unassigned wm 0 0 (0/0/0) 0
    # prtvtoc /dev/dsk/c0t1d0s2
    * /dev/dsk/c0t1d0s2 partition map
    * Dimensions:
    * 512 bytes/sector
    * 189 sectors/track
    * 255 tracks/cylinder
    * 48195 sectors/cylinder
    * 59127 cylinders
    * 59125 accessible cylinders
    * Flags:
    * 1: unmountable
    * 10: read-only
    * Unallocated space:
    * First Sector Last
    * Sector Count Sector
    * 48195 2849481180 2849529374
    * First Sector Last
    * Partition Tag Flags Sector Count Sector Mount Directory
    2 5 01 0 2849529375 2849529374
    8 1 01 0 48195 48194
    My question: is there anything inherently wrong with this default partitioning? Database is for OLTP & fairly small (<200 GB), with about 140 GB being LOB images.
    Thanks,
    Barry

    First off, RAID-5 or RAID-6 is fine for database performance unless you have some REALLY strict and REALLY astronomical performance requirements. Requirements that someone with lots of money is willing to pay to meet.
    You're running a single small x86 box with only onboard storage.
    So no, you're not operating in that type of environment.
    Here's what I'd do, based upon a whole lot of experience with Solaris 10 and not so much with Solaris 11, and also assuming this box is going to be around for a good long time as an Oracle DB server:
    1. Don't use SVM for your boot drives. Use the onboard RAID controller to make TWO 2-disk RAID-1 mirrors. Use these for TWO ZFS root pools. Why two? Because if you use live upgrade to patch the OS, you want to create a new boot environment in a separate ZFS pool. If you use live upgrade to create new boot environments in the same ZFS pool, you wind up with a ZFS clone/snapshot hell. If you use two separate root pools, each new boot environment is a pool-to-pool actual copy that gets patched, so there are no ZFS snapshot/clone dependencies between the boot environments. Those snapshot/clone dependencies can cause a lot of problems with full disk drives if you wind up with a string of boot environments, and at best they can be a complete pain in the buttocks to clean up - assuming live upgrade doesn't mess up the clones/snapshots so badly you CAN'T clean them up (yeah, it has been known to do just that...). You do your first install with a ZFS rpool, then create rpool2 on the other mirror. Each time you do an lucreate to create a new boot environment from the current boot environment, create the new boot environment in the rpool that ISN'T the one the current boot environment is located in. That makes for ZERO ZFS dependencies between boot environments (at least in Solaris 10. Although with separate rpools, I don't see how that could change....), and there's no software written that can screw up a dependency that doesn't exist.
    2. Create a third RAID-1 mirror, either with the onboard RAID controller or ZFS. Use those two drives for home directories. You do NOT want home directories located on an rpool within a live upgrade boot environment. If you put home directories inside a live upgrade boot environment, 1) that can be a LOT of data that gets copied, 2) if you have to revert back to an old boot environment because the latest OS patches broke something, you'll also revert every user's home directory back.
    3. That leaves you 10 drives for a RAID-6 array for DB data. 8 data and two parity. Perfect. I'd use the onboard RAID controller if it supports RAID-6, otherwise I'd use ZFS and not bother with SVM.
    This also assumes you'd be pretty prompt in replacing any failed disks as there are no global spares. If there would be significant time before you'd even know you had a failed disk (days or weeks), let alone getting them replaced, I'd rethink that. In that case, if there were space I'd probably put home directories in the 10-disk RAID-6 drive, using ZFS to limit how big that ZFS file system could get. Then use the two drives freed up for spares.
    But if you're prompt in recognizing failed drives and getting them replaced, you probably don't need to do that. Although you might want to just for peace of mind if you do have the space in the RAID-6 pool.
    And yes, using four total disks for two OS root ZFS pools seems like overkill. But you'll be happy when four years from now you've had no problems doing OS upgrades when necessary, with minimal downtime needed for patching, and with the ability to revert to a previous OS patch level with a simple "luactivate BENAME; init 6" command.
    If you have two or more of these machines set up like that in a cluster with Oracle data on shared storage you could then do OS patching and upgrades with zero database downtime. Use lucreate to make new boot envs on each cluster member, update each new boot env, then do rolling "luactivate BENAME; init 6" reboots on each server, moving on to the next server after the previous one is back and fully operational after its reboot to a new boot environment.
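    A sketch of the two-pool flow from point 1 (BE and device names are arbitrary):
    # zpool create rpool2 mirror c0t2d0s0 c0t3d0s0
    # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t2d0s0   (and c0t3d0s0, so BEs in rpool2 are bootable)
    # lucreate -n be1 -p rpool2
    (patch or upgrade be1 however you normally would, e.g. luupgrade -t; check luupgrade(1M) for the exact syntax)
    # luactivate be1
    # init 6
    The next time around, run lucreate from be1 with -p rpool, and keep alternating pools.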

  • Solaris patching on solaris 10 x86

    Just wondering if anyone can assist. I have just installed the Solaris 10 x86 recommended patches on a 16-disk server, where the first 2 disks are a mirror called rpool and the remaining 14 disks are a raidz pool called spool. After installing the patches successfully and rebooting the server, I get the following error:
    NOTICE: Can not read the pool label from '/pci@0,0/pci8086,25f8@4/pci111d,801c@0/pci111d,801c@4/pci108e,286@0/disk@0,0:a /pci@0,0/pci8086,25f8@4/pci111d,801c@0/pci111d,801c@4/pci108e,286@0/disk@1,0:a'
    NOTICE: spa_import_rootpool: error 5
    Cannot mount root on /pci@0,0/pci8086,25f8@4/pci111d,801c@0/pci111d,801c@4/pci108e,286@0/disk@0,0:a /pci@0,0/pci8086,25f8@4/pci111d,801c@0/pci111d,801c@4/pci108e,286@0/disk@1,0:a fstype zfs
    panic[cpu0]/thread=fffffffffbc28820: vfs_mountroot: cannot mount root
    fffffffffbc4b190 genunix:vfs_mountroot+323 ()
    fffffffffbc4b1d0 genunix:main+a9 ()
    fffffffffbc4b1e0 unix:_start+95 ()
    skipping system dump - no dump device configured
    rebooting...
    It looks like the Solaris 10 OS cannot find the ZFS filesystem and keeps rebooting when booted normally, but when I go to failsafe mode and type "zfs list", I can see the ZFS rpool (mirrored) filesystem but cannot see the ZFS spool (raidz) filesystem. Any ideas on how I can fix the boot problem and get the spool back in the safest way possible?...

    I finally figured it out. It seems that kernel patch 137138-09 was the culprit. It took me a while to test each kernel patch, but I found out that if you have a mirrored volume or a virtual logical volume, this will affect you. Also, after I did all this testing I finally found the SunSolve article referencing this patch. It seems you can install the patch, but you have to install 125556-01 first. Here's the article: http://sunsolve.sun.com/search/document.do?assetkey=1-66-246206-1
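    In other words, the safe order before applying that kernel patch again is roughly (paths assume the patches are unpacked under /var/tmp; adjust to wherever your patch cluster lives):
    # showrev -p | grep 125556
    # patchadd /var/tmp/125556-01     (if it is not already installed)
    # patchadd /var/tmp/137138-09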

  • Solaris boot problem after Patch cluster install

    Just wondering if anyone can assist. I have just installed the Solaris 10 x86 recommended patches on a 16-disk server, where the first 2 disks are a mirror called rpool and the remaining 14 disks are a raidz pool called spool. After installing the patches successfully and rebooting the server, I get the following error:
    NOTICE: Can not read the pool label from '/pci@0,0/pci8086,25f8@4/pci111d,801c@0/pci111d,801c@4/pci108e,286@0/disk@0,0:a /pci@0,0/pci8086,25f8@4/pci111d,801c@0/pci111d,801c@4/pci108e,286@0/disk@1,0:a'
    NOTICE: spa_import_rootpool: error 5
    Cannot mount root on /pci@0,0/pci8086,25f8@4/pci111d,801c@0/pci111d,801c@4/pci108e,286@0/disk@0,0:a /pci@0,0/pci8086,25f8@4/pci111d,801c@0/pci111d,801c@4/pci108e,286@0/disk@1,0:a fstype zfs
    panic[cpu0]/thread=fffffffffbc28820: vfs_mountroot: cannot mount root
    fffffffffbc4b190 genunix:vfs_mountroot+323 ()
    fffffffffbc4b1d0 genunix:main+a9 ()
    fffffffffbc4b1e0 unix:_start+95 ()
    skipping system dump - no dump device configured
    rebooting...
    It looks like the Solaris 10 OS cannot find the ZFS filesystem and keeps rebooting when booted normally, but when I go to failsafe mode and type "zfs list", I can see the ZFS rpool (mirrored) filesystem but cannot see the ZFS spool (raidz) filesystem. Any ideas on how I can fix the boot problem and get the spool back in the safest way possible?...

    Same problem here with the current recommended patch set.
    NOTICE: Can not read the pool label from '/pci@0,0/pci10de,375@f/pci1000,3150@0/sd@0,0:a /pci@0,0/pci10de,375@f/pci1000,3150@0/sd@1,0:a'
    NOTICE: spa_import_rootpool: error 5
    Cannot mount root on /pci@0,0/pci10de,375@f/pci1000,3150@0/sd@0,0:a /pci@0,0/pci10de,375@f/pci1000,3150@0/sd@1,0:a fstype zfs
    panic[cpu0]/thread=fffffffffbc28820: vfs_mountroot: cannot mount root
    fffffffffbc4b190 genunix:vfs_mountroot+323 ()
    fffffffffbc4b1d0 genunix:main+a9 ()
    fffffffffbc4b1e0 unix:_start+95 ()
    What the hell happened to the boot sector, or the pool, or whatever?
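    If anyone hits this and needs to back the patch out rather than reinstall, a rough recovery path (assuming the culprit is the same 137138-09 patch mentioned in the previous thread) is to boot the failsafe entry from the GRUB menu, let it mount the root BE on /a, then:
    # patchrm -R /a 137138-09
    # bootadm update-archive -R /a
    # init 6
    Obviously verify first which patch is actually at fault on your system.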

  • Fdisk, jumpstart, and x86boot

    When I do an Interactive install of Solaris 8 from CD, I end up with a working system, but when I do my jumpstart install with no fdisk commands in the install profile, I end up with a system that says "bad PBR sig" upon rebooting.
    I added fdisk commands to the profile to create the x86boot and Solaris partitions, and the install did in fact create the partitions, however it still came up with the "bad PBR sig" error after the install. So my question is: What am I doing wrong in the profile?
    Does anybody have a working profile that will take a system on which you do not want to preserve any existing data, install an x86boot partition and a Solaris partition, and end up with a system that boots to Solaris successfully?
    (The documentation about the fdisk keywords in the profile that I found on docs.sun.com doesn't seem to be helping me with this issue.)
    Hopefully,
    Lusty


  • Apply patchset on Sol10-x86

    Hi
    I have to apply the latest patchset to a system running Solaris 10 x86 with root on ZFS. As the usual backout strategy of breaking the root mirror will not work here, I would like some opinions on a backout strategy. In particular, if I create an alternate boot environment with lucreate and then patch the primary boot environment in single-user mode, can the ABE be used to boot the system back to its previous state if I do not want to use luupgrade to apply the patchset? Or is using luupgrade in this case the same as using the installpatchset command?
    OR
    Will just taking a snapshot of the root pool and keeping it on the same machine work if I need to roll back to the original environment? What would the steps be for such a rollback?
    Appreciate any insights on this.
    TIA and Rgds

    Hello
    With a ZFS rpool you have to use Live Upgrade (LU), and LU will do all the work for you, taking advantage of ZFS by snapshotting and cloning the current BE; you then patch the new BE it creates.
    There are 3 main commands: lucreate, luupgrade and luactivate (the last one activates either the new BE or the old BE).
    Check this doc: https://community.oracle.com/docs/DOC-887132
    If you are on SPARC there is a boot command that will print the BEs that are on the system:
    boot -L  --> then you will be able to choose the one you want; this covers the worst case where the new BE you have tried to boot panics or similar.
    On x86 you will see the BE entries in the GRUB menu.
    Regards
    Ezequiel
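    A compact version of that flow (the BE name and patch path are just examples):
    # lucreate -n patchedBE
    # luupgrade -t -n patchedBE -s /var/tmp/10_x86_Recommended/patches
      (check luupgrade(1M) for the exact -t usage, or apply the patchset to the ABE as described in its README)
    # luactivate patchedBE
    # init 6
    If patchedBE misbehaves, boot the old BE from the GRUB menu (x86) or via boot -L (SPARC), or luactivate the old BE and init 6 again.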

  • Recovering last good BE

    I am using the latest Solaris release and my configuration is mirrored ZFS drives. I created a new BE for patching, applied patches to that BE, and then activated the new BE. This is the message I got:
    The target boot environment has been activated. It will be used when you
    reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
    MUST USE either the init or the shutdown command when you reboot. If you
    do not use either init or shutdown, the system will not boot using the
    target BE.
    In case of a failure while booting to the target BE, the following process
    needs to be followed to fallback to the currently working boot environment:
    1. Boot from Solaris failsafe or boot in single user mode from the Solaris
    Install CD or Network.
    2. Mount the Parent boot environment root slice to some directory (like /mnt). You can use the following command to mount:
    mount -Fzfs /dev/dsk/c0d0s0 /mnt
    3. Run the <luactivate> utility without any arguments from the Parent boot
    environment root slice, as shown below:
    /mnt/sbin/luactivate
    4. luactivate, activates the previous working boot environment and
    indicates the result.
    5. Exit Single User mode and reboot the machine.
    I ran into problems booting off of the patched BE, so I followed the steps above to get to the last good BE, but I got the following message:
    cannot open '/dev/dsk/c0t0d0s0': invalid dataset name

    That message is a known bug; you cannot mount your ZFS filesystem with a normal 'mount' command. The correct procedure for step 2 is:
    zpool import rpool
    zfs inherit -r mountpoint rpool/ROOT/<last good BE>
    zfs set mountpoint=<mountpointName> rpool/ROOT/<last good BE>
    zfs mount rpool/ROOT/<last good BE>
    .. this is fixed in the live upgrade patch.
    .7/M.

  • File upload problem. Can't write to server

    Hi guys,
    I am currently trying to upload some images to the server I am using, but I can't get it to work.
    I am using a form consisting of 3 text fields, 1 textarea and an upload field.
    The whole thing works fine on my localhost: all the items are inserted into the database and the files are saved to the path.
    But when I try to upload to the commercial server, the text fields are read and inserted into the database, yet the files just aren't written!!
    my tree structure is
    /home/domain/www-domain/webapps
    and that's what I am getting when using getServletContext().getRealPath(). The full line for getting the real path is:
    FileWriter fw2 = new FileWriter(getServletContext().getRealPath("//upload"+fileName+".jsp"));
    I am using Tomcat 4.0, and all the other servlets work just fine, as does this one apart from the file-writing part.
    am I doing something wrong with the getServletContext thing?
    And if so, why is it working just fine on my pc?
    I am only suspecting that I am not getting the path correctly
    I want to store the images under
    webapps/upload/images
    but it doesn't work even when I am trying to store under webapps (root directory)
    Is there any way that I am not getting permission to write to the server??
    Any help is highly appreciated!!
    cheers

    The thing is that I am not getting any exception back in my browser; some of the values are inserted into the database (the ones that don't refer to any uploaded items). I guess I have to do a printStackTrace() but I am not sure at which point...
    OK, I am posting the full code to give you an idea. Your interest is much appreciated, thank you.
    import java.sql.*;
    import java.text.ParsePosition;
    import java.text.SimpleDateFormat;
    import java.util.*;
    import java.util.Date;
    import gr.cretasport.util.*;
    import org.apache.commons.fileupload.*;
    import java.awt.Image;
    import java.io.*;
    import javax.servlet.RequestDispatcher;
    import javax.servlet.ServletConfig;
    import javax.servlet.ServletContext;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest ;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;
    * @author myron.veligradis
    * 17/09/2005
    * TODO To change the template for this generated type comment go to
    * Window - Preferences - Java - Code Style - Code Templates
    public class InsertArticle extends HttpServlet{
      * global variables
    String fieldName;
    Image imageIn;
    Resize rsImg = new Resize();
    Mirror mrImg = new Mirror();
    //ImageProcess ImPr = new ImageProcess();
    Date curDate;
    String datecur;
    String inputtext;
    BufferedWriter bw;
    String aRString ;
    String photoPath;
    File savedFile;
    String txtPath;
    String txtPathFullText;
    String author;
    String eidos;
    String kathgoriaID;
    String title;
    String rating;
    String kathgoria;
    String keimeno;
    String keimenoRest;
    String fullKeimeno;
    String hmeromhniaD;
    final String jspPrefix="<%@ page contentType=\"text/html; charset=iso-8859-7\" language=\"java\"  errorPage=\"\" %>";
    Connection con=null;
    Statement statement = null;
    ResultSet rs, rs1 = null;
    int kathgoriaIDInt;
    int ratingInt;
    public void init(ServletConfig conf) throws ServletException  {
      super.init(conf);
    public void doPost (HttpServletRequest req, HttpServletResponse res)
    throws ServletException, IOException, UnsupportedEncodingException {
    //  try {
    //   req.setCharacterEncoding("iso-8859-7");
    //   catch (UnsupportedEncodingException uee)
      boolean isMultipart = FileUpload.isMultipartContent(req);
      HttpSession session = req.getSession();
      ServletContext sc = getServletContext();
       * GET THE FILE NAME
      curDate=new Date();
      datecur = curDate.toString ();
        String fileName = datecur.replaceAll("\\W","_");
        String strDate = curDate.toString();
        SimpleDateFormat formatter= new SimpleDateFormat ("EEE MMM dd hh:mm:ss zzz yyyy");
      Date date = formatter.parse(strDate,new ParsePosition(0));
      hmeromhniaD = new SimpleDateFormat("yyyy-MM-dd").format(date);
       *  GET THE PARAMETERS
       *  FILE UPLOAD *******************************************************
      System.out.println(isMultipart);
      DiskFileUpload upload = new DiskFileUpload();
      try {
       req.setCharacterEncoding("iso-8859-7");
       List items = upload.parseRequest(req);
       Iterator itr = items.iterator();
       while(itr.hasNext()) {
        FileItem item = (FileItem) itr.next();
    //    String articleTitle = new String(req.getParameter("articleTitle").getBytes("iso-8859-1"),"iso-8859-7");
       // check if the current item is a form field or an uploaded file
         if(item.isFormField()) {
         // get the name of the field
         // if it is name, we can set it in request to thank the user
         if (item.getFieldName().equals("author")) {
                author = new String(item.getString().getBytes("iso-8859-1"),"iso-8859-7");
         if (item.getFieldName().equals("eidos")) {
                eidos = new String( item.getString().getBytes("iso-8859-1"),"iso-8859-7");
         if (item.getFieldName().equals("keimeno")) {
                keimeno =item.getString();
                FileWriter fw1 = new FileWriter(getServletContext().getRealPath("webapps//upload"+"pro_"+fileName +".jsp"));
           fw1.write(jspPrefix+"<html><body>"+keimeno+"</body></html>");
           fw1.close();
           txtPath = ("upload/text/"+"pro_"+fileName+".jsp");
         if (item.getFieldName().equals("keimenorest")) {
                keimenoRest =item.getString();
                FileWriter fw2 = new FileWriter(getServletContext().getRealPath("//upload"+fileName+".jsp"));
           fw2.write(jspPrefix+"<html><body>"+keimeno+" "+keimenoRest+"</body></html>");
           fw2.close();
           txtPathFullText = ("upload/text/fulltext/"+fileName+".jsp");
         if (item.getFieldName().equals("title")) {
               // title = new String(item.getString().getBytes("iso-8859-1"),"iso-8859-7");
                System.out.println ("titlos "+title);
                title=getServletContext().getRealPath("webapps//upload");
         if (item.getFieldName().equals("rating")) {
                rating = item.getString();
         if (item.getFieldName().equals("kathgoriaID")) {
                kathgoriaID = item.getString();
       else {
        // the item must be an uploaded file save it to disk. Note that there
        // seems to be a bug in item.getName() as it returns the full path on
        // the client's machine for the uploaded file name, instead of the file
        // name only. To overcome that, I have used a workaround using
        // fullFile.getName().
        File fullFile  = new File(item.getName()); 
        //check if input file is actually an image
        if ( item.getName().toLowerCase().endsWith(".jpg") ||
        item.getName().toLowerCase().endsWith(".gif") ||
        item.getName().toLowerCase().endsWith(".bmp"))
         File temp = new File(getServletContext().getRealPath("webapps//upload")," temp.jpg");
         item.write(temp);
         //to class Resize pernei 2 inputs, to file kai to path string
         rsImg.resizeImage(temp,getServletContext().getRealPath("webapps//upload"+fileName+".jpg"));
         //to class Mirror pernei kai ayto 2 inputs, to file poy exei dhmiourgithei apo panw
         //pou to kanei overwrite epeidh pernoun to idio onoma me to time stamp
         Mirror.mirrorImage(new File(getServletContext().getRealPath("webapps"+fileName+".jpg")),
           getServletContext().getRealPath("webapps//upload"+fileName+".jpg"));
         System.out.println("upload/images/"+fileName+".jpg");
      kathgoriaIDInt= new Integer(kathgoriaID).intValue();
      ratingInt   = new Integer(rating).intValue();
       * HTML UPLOAD
      catch (Exception fe) {
      //get the string names to insert to database
      photoPath="upload/images/"+fileName+".jpg";
       * INSERT THE DATA INTO THE DATABASE
      InsertToDatabase();
      String url="/insertarticle.jsp";
      RequestDispatcher rd = sc.getRequestDispatcher(url);
      rd.forward(req,res);
    //res.sendRedirect( res.encodeRedirectURL( "indextest.jsp "));
    public void InsertToDatabase() {
      try {
       con=null;
       statement = con.createStatement();
       String insertprefix = "insert into articles(eidos,photopath,textpath,textpathfull,title,author,dateen,rating, kathgoriaID) VALUES"
         + "('" + eidos.trim() +"','" + photoPath.trim()  + "','"   +
         txtPath + "','" + txtPathFullText + "','" +title + "','"+ author + "','"+ hmeromhniaD + "','"+ ratingInt+ "','"
         +kathgoriaIDInt + "')";
       statement.execute(insertprefix);
       statement.close();
       con.close();
       catch (Exception e)
    }
