Solaris 10 upgrade and zfs pool import

Hello folks,
I am currently running "Solaris 10 5/08 s10x_u5wos_10 X86" on a Sun Thumper box where two drives are a mirrored UFS boot volume and the rest are used in ZFS pools. I would like to upgrade the system to "10/08 s10x_u6wos_07b X86" so that I can use ZFS for the boot volume. I've seen documentation that describes how to break the mirror, create a new BE and so on. This system is only used as an iSCSI target for Windows systems, so there is really nothing on the box that I need other than my ZFS pools. Could I simply pop the DVD in, perform a clean install, and select my current UFS drives as the install location, basically telling Solaris to wipe them clean and create an rpool out of them? Once the installation is complete, would I be able to import my existing ZFS pools?
Thank you very much

Sure. As long as you don't write over any of the disks in your ZFS pool you should be fine.
Darren
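For reference, a minimal command sequence for that approach might look like the following; the data pool name "tank" is just an example, substitute your own pools:
# before booting the install DVD (optional, but it leaves the pool cleanly exported)
zpool export tank
# after the clean install has created the new rpool on the old UFS disks
zpool import          # with no arguments: lists pools visible on the attached disks
zpool import tank     # import by name; add -f if the pool was never exported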

Similar Messages

  • Replace FC Card and ZFS Pools

    I have to replace a Qlogic ISP2200 dual port Fibre Channel card with a new card in a V480 server. I have 2 ZFS Pools that mount via that card. Would I have to export and import the ZFS pools when replacing the card? I've read you have to when moving the pools to a different server.
    Naturally the World Wide Number (WWN) would be different on the new FC card and other than changing my SAN switch zone information I'm not sure how ZFS would deal with this situation. The storage itself would not change.
    Any ideas are welcome.
    Running Solaris 10 (11/06) with kernel patch 125100-07
    Thanks,
    Chris

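    For what it's worth, the usual belt-and-braces sequence around a hardware swap like that (pool names below are placeholders) would be:
    zpool export pool1            # before shutting down to swap the FC card
    # swap the card, update the SAN switch zoning, boot
    zpool import                  # ZFS finds pools by the labels on the disks, not the old device paths
    zpool import pool1            # repeat for each pool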

  • Upgraded and can't import photos

    Hi There,
    First, I have to admit that I have tampered with my iPhoto library. I didn't think it would have such dire consequences, as most apps are so user friendly and drag-and-drop friendly.
    So, I'll try to give a concise explanation of what's happening. I don't think it's a dire situation, I'd just love to rebuild my library so that iPhoto is happy again.
    1. Was running 10.2.8 with iPhoto 2.0.
    2. About 6 months ago, I renamed all of my folders in the library. Realized this was bad when iPhoto lost all my pix, but managed to rebuild the library (not with original dates).
    3. Last week, upgraded to Tiger (10.4). All of my apps seemed fine, except iPhoto. Still running version 2.0, and it would crash on opening.
    4. Installed iLife 04, upgraded iPhoto to 4.0.3. Now I can open iPhoto (I held Option and created a new library), but there are only 5 pictures in my library (ones that I have added from my camera since the upgrade).
    5. I have a backup copy of my old iPhoto library on my desktop. When I try to import these photos, the bar says "working" for about a second, then the highlighted button in iPhoto switches from "import" to "organize". It just won't let me import anything from the old library. I tried to drag the old library folder into the new one (bad idea?) and that didn't work either.
    6. Have considered taking all of the photos out of their library folder on the desktop and trying to import as individual jpegs. Is that a bad idea?
    Any thoughts on how I can fix the mess I've made?
    Any input appreciated...

    Julia:
    Welcome to the Apple Discussions. As you've found out, you've committed the cardinal sin of iPhoto: don't tamper with files in the iPhoto Library folder from the Finder.
    Version 4, when trying to import photos from another iPhoto library folder, imports the thumbnail files as well as the full sized files. Not a good thing.
    I suggest you use iPhoto Extractor to copy all of the full sized files to the three new folders that the application will create on your desktop. Then you can import those three folders into your new library. Of course you will lose all of your roll, album, and keyword information, but you will have your photos back in a working library. When you have them all in your new working library and are satisfied that you got them all, you can delete the old library and the three temp folders that were created.
    Hope this has been of some help. Good luck.
    OT

  • iSCSI array died, held ZFS pool. Now box hangs

    I was doing some iSCSI testing and, on an x86 EM64T server running an out-of-the box install of Solaris 10u5, created a ZFS pool on two RAID-0 arrays on an IBM DS300 iSCSI enclosure.
    One of the disks in the array died, the DS300 got really flaky, and now the Solaris box gets hung in boot. It looks like it's trying to mount the ZFS filesystems. The box has two ZFS pools, or had two, anyway. The other ZFS pool has some VirtualBox images filling it.
    Originally, I got a few iSCSI target offline messages on the console, so I booted to failsafe and tried to run iscsiadm to remove the targets, but that wouldn't work. So I just removed the contents of /etc/iscsi and all the iSCSI instances in /etc/path_to_inst on the root drive.
    Now the box hangs with no error messages.
    Anyone have any ideas what to do next? I'm willing to nuke the iSCSI ZFS pool as it's effectively gone anyway, but I would like to save the VirtualBox ZFS pool, if possible. But they are all test images, so I don't have to save them. The host itself is a test host with nothing irreplaceable on it, so I could just reinstall Solaris. But I'd prefer to figure out how to save it, even if only for the learning experience.

    Try this. Disconnect the iSCSI drives completely, then boot. My fallback plan on ZFS, if things get screwed up, is to physically disconnect the ZFS drives so that Solaris doesn't see them on boot. It marks them failed and should boot. Once it's up, zpool destroy the pools WITH THE DRIVES DISCONNECTED so that it doesn't think there's a pool anymore. THEN reconnect the drives and try to do a "zpool import -f".
    The pools that are on intact drives should still be OK. In theory :)
    BTW, if you removed devices, you probably should do a reconfiguration boot (create a /a/reconfigure in failsafe mode) and make sure the devices get reprobed. Does the thing boot in single user (pass -s after the multiboot line in GRUB)? If it does, you can disable the iSCSI svcs with "svcadm disable network/iscsi_initiator; svcadm disable iscsitgt".
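    Pulling those steps together, a rough sketch of the sequence (the pool names are placeholders for the dead iSCSI pool and the VirtualBox pool, not the real names from this system):
    # 1. with the iSCSI drives disconnected, boot and forget the dead pool
    zpool destroy -f iscsi_pool
    # 2. stop boot from waiting on the missing targets
    svcadm disable network/iscsi_initiator
    svcadm disable iscsitgt
    # 3. from failsafe, request a reconfiguration boot so devices are reprobed
    touch /a/reconfigure
    # 4. reconnect the drives, reboot, then re-import the surviving pool
    zpool import -f vbox_pool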

  • Max File size in UFS and ZFS

    Hi,
    Can anyone share what the maximum file size is that can be created on Solaris 10 UFS and ZFS?
    What is the maximum file size that can be compressed using tar and gzip?
    Regards
    Siva

    from 'man ufs':
    A sparse file can have a logical size of one terabyte. However, the actual amount of data that can be stored in a file is approximately one percent less than one terabyte because of file system overhead.
    As for ZFS, well, it's a 128-bit filesystem, and the maximum size of a file or directory is 2^64 bytes, which works out to 16 EiB (16384 PiB, roughly 18.4 decimal exabytes).
    http://www.sun.com/software/solaris/ds/zfs.jsp
    .7/M.
    Edited by: abrante on Feb 28, 2011 7:31 AM
    fixed layout and 2^64
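    For what it's worth, the arithmetic is easy to check from a shell; bc copes with the big integers:
    echo '2^64' | bc            # 18446744073709551616 bytes
    echo '2^64 / 2^60' | bc     # 16  -> 16 EiB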

  • Solaris 9 upgrade do 10 - zfs problem

    Hi,
    I have Solaris 9 on an x86 server, which was upgraded to Solaris 10, and UFS was migrated to ZFS. Now when the system reboots or shuts down, I have a problem mounting some datasets, e.g.
    rpool/some/foo on /local/foo: it is mounted as legacy, and I get a message that the directory /local/foo isn't empty, even though it is empty. Everything is OK when I set the mountpoint manually with "zfs set mountpoint=/local/foo rpool/some/foo".
    Has anybody seen a similar problem? This server runs the Solaris 10 05/09 release.

    ZFS filesystems mount as legacy if they're mounted via vfstab rather than through the ZFS mountpoint property.
    So check that the filesystem isn't mentioned in /etc/vfstab.
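    Concretely, with the dataset from this thread, the check and fix would look something like this:
    grep /local/foo /etc/vfstab                  # if it shows up here, remove that line
    zfs set mountpoint=/local/foo rpool/some/foo
    zfs get mountpoint,mounted rpool/some/foo    # should report the mountpoint and mounted=yes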

  • Oracle 10, Solaris and ZFS

    Hello,
    I'm planning to run Oracle 10 under Solaris 10 with a ZFS filesystem. Is Oracle 10 compatible with ZFS? The ZFS ARC on Solaris uses most of the available memory (RAM) for caching; as other processes demand more memory, the ARC releases it. Is such dynamic memory allocation compatible with Oracle, or does Oracle need fixed memory allocations?
    Thanks,
    - Karl-Josef

    In principle all should be fine. ZFS obeys all filesystem semantics, and Oracle will access it through the normal filesystem APIs. I'm not sure Oracle needs to officially state that they are compatible with ZFS. I would have thought it was the other way around: ZFS needs to state it is a fully compatible file system, and so any application will work on it.
    ZFS has many neat design features in it. But be aware: it is a copy-on-write file system. It never updates an existing block on disk. Instead it writes out a new block in a new location with the updated data in it, and also writes out new parent metadata blocks that point to this block, and so on. This has some benefits for snapshotting a file system and for quick, reliable recovery in the event of a system crash. However, one update in one data block can cause a cascaded series of writes of many blocks to the disk.
    This can have a major impact if you put your redo logs on ZFS. You need to consider this, and if possible do some comparison tests between ZFS and UFS with logging and direct I/O. Redo log writes on COMMIT are synchronous and must go all the way to the disk device itself. This could cause ZFS to have to do many physical disk writes, just for writing one redo log block.
    Oracle needs its SGA memory up front, permanently allocated. Solaris should handle this properly, and release as much filesystem cache memory as needed when the Oracle shared memory is allocated. If it doesn't then Sun have messed up big time. But I cannot imagine this, so I am sure your Oracle SGA will be created fine.
    I like the design of ZFS a lot. It has similarities with Oracle's ASM - a built in volume manager that abstracts underlying raw disks to a pool of directly useful storage. ASM abstracts to pools for database storage objects, ZFS abstracts to pools for filesystems. Much better than simple volume managers that abstract raw disks to just logical disks. You still end up with disks, and other management issues. I'm still undecided as to whether it makes sense to store an OLTP database on it that needs to process a high transaction rate, given the extra writes incurred by ZFS.
    I also assume you are going to match the database block size and the ZFS recordsize, e.g. an 8 KB database block size with recordsize=8k on the dataset holding the data files? You don't want small database writes turning into bigger ZFS writes, and vice versa.
    John
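    Along those lines, a common (and hedged) starting point is to give the data files their own dataset with a recordsize matching the 8 KB database block size, and keep the redo logs on a separate dataset; the pool and dataset names below are only examples:
    zfs create -o recordsize=8k dbpool/oradata    # data files: recordsize matches db_block_size
    zfs create dbpool/oralogs                     # redo logs: separate dataset, default recordsize
    zfs get recordsize dbpool/oradata dbpool/oralogs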

  • Upgraded to 10.4 and now iTunes imports really slowly???

    Upgraded to 10.4 and now CDs import much slower in iTunes. I used to average 9.0X - 10.0X. Now I'm lucky to get up to 9.0X; it usually hovers between 5.0X and 8.0X. This happened immediately after I finished with the upgrade disc.
    Any idea how to solve this problem?

    Got advice - disable widgets.

  • Since I upgraded to Yosemite, when importing photos to iPhoto, they no longer appear in the 'events' summary in the library. Why is this and how can I amend it?

    Since I upgraded to Yosemite, when importing photos to iPhoto, they no longer appear in the 'events' summary in the library. Why is this and how can I amend it?

    The events may be incorrectly sorted. Try to sort the events by date, while in "Events" view.
    Go to the "View" menu: Sort Photos > By Date > Descending. That should bring the most recent events to the top of the list.

  • I updated to 8.1.3 on my 5s and my very important notes are gone! I bought extra space in iCloud and backed up to iCloud before the upgrade. Should my notes not be there somewhere? Where?

    I updated my 5s to 8.1.3 and my very important notes are gone. Before the upgrade I bought extra space in iCloud and backed up to iCloud. Should my notes not be there? Where? How do I get them back?

    I did go to Notes in Gmail and none of them were there, but I haven't connected my iPod to a computer for about a month, so the majority of the important ones will be backed up, I hope to God. Thank you.

  • HT4236 I was trying to import photos to my computer and was asked to upgrade my iPhone before importing. In restoring my iPhone, it deleted all 200 of my new photos and they are now gone. Is there any way to retrieve them?

    I was trying to import photos to my computer and was asked to upgrade my iPhone before importing. In restoring my iPhone, it deleted all 200 of my new photos and they are now gone. Is there any way to retrieve them?

    You should ALWAYS import all pics (and anything else for that matter) before any update.
    If you failed to do this, then the pics are likely gone.  You can try restoring from backup.

  • Zfs pool I/O failures

    Hello,
    I've been using an external SAS/SATA tray connected to a T5220 via a SAS cable as storage for a media library. The weekly scrub cron job failed last week with all disks reporting I/O failures:
    zpool status
      pool: media_NAS
    state: SUSPENDED
    status: One or more devices are faulted in response to IO failures.
    action: Make sure the affected devices are connected, then run 'zpool clear'.
       see: http://www.sun.com/msg/ZFS-8000-HC
    scan: scrub in progress since Thu Apr 30 09:43:00 2015
        2.34T scanned out of 9.59T at 14.7M/s, 143h43m to go
        0 repaired, 24.36% done
    config:
            NAME        STATE     READ WRITE CKSUM
            media_NAS   UNAVAIL  10.6K    75     0  experienced I/O failures
              raidz2-0  UNAVAIL  21.1K    10     0  experienced I/O failures
                c6t0d0  UNAVAIL    212     6     0  experienced I/O failures
                c6t1d0  UNAVAIL    216     6     0  experienced I/O failures
                c6t2d0  UNAVAIL    225     6     0  experienced I/O failures
                c6t3d0  UNAVAIL    217     6     0  experienced I/O failures
                c6t4d0  UNAVAIL    202     6     0  experienced I/O failures
                c6t5d0  UNAVAIL    189     6     0  experienced I/O failures
                c6t6d0  UNAVAIL    187     6     0  experienced I/O failures
                c6t7d0  UNAVAIL    219    16     0  experienced I/O failures
                c6t8d0  UNAVAIL    185     6     0  experienced I/O failures
                c6t9d0  UNAVAIL    187     6     0  experienced I/O failures
    The console outputs this repeated error:
    SUNW-MSG-ID: ZFS-8000-FD, TYPE: Fault, VER: 1, SEVERITY: Major
    EVENT-TIME: 20
    PLATFORM: SUNW,SPARC-Enterprise-T5220, CSN: -, HOSTNAME: t5220-nas
    SOURCE: zfs-diagnosis, REV: 1.0
    EVENT-ID: e935894e-9ab5-cd4a-c90f-e26ee6a4b764
    DESC: The number of I/O errors associated with a ZFS device exceeded acceptable levels.
    AUTO-RESPONSE: The device has been offlined and marked as faulted. An attempt will be made to activate a hot spare if available.
    IMPACT: Fault tolerance of the pool may be compromised.
    REC-ACTION: Use 'fmadm faulty' to provide a more detailed view of this event. Run 'zpool status -x' for more information. Please refer to the associated reference document at http://sun.com/msg/ZFS-8000-FD for the latest service procedures and policies regarding this diagnosis.
    Chassis | major: Host detected fault, MSGID: ZFS-8000-FD
    /var/adm/messages has an error message for each disk in the data pool, this being the error for sd7:
    May  3 16:24:02 t5220-nas scsi: [ID 107833 kern.warning] WARNING: /pci@0/pci@0/pci@9/scsi@0/disk@2,0 (sd7):
    May  3 16:24:02 t5220-nas       Error for Command: read(10)                Error Level: Fatal
    May  3 16:24:02 t5220-nas scsi: [ID 107833 kern.notice]         Requested Block: 1815064264                Error Block: 1815064264
    I have tried rebooting the system and running zpool clear as the ZFS link in the console errors suggests. Sometimes the system will reboot fine; other times it requires issuing a break from the LOM, because the shutdown command is still trying after more than an hour. The console usually outputs more messages as the reboot completes, basically saying the faulted hardware has been restored and no additional action is required. A scrub is recommended in the console message. When I check the pool status, the previously suspended scrub picks up where it left off:
    zpool status
      pool: media_NAS
    state: ONLINE
    scan: scrub in progress since Thu Apr 30 09:43:00 2015
        5.83T scanned out of 9.59T at 165M/s, 6h37m to go
        0 repaired, 60.79% done
    config:
            NAME        STATE     READ WRITE CKSUM
            media_NAS   ONLINE       0     0     0
              raidz2-0  ONLINE       0     0     0
                c6t0d0  ONLINE       0     0     0
                c6t1d0  ONLINE       0     0     0
                c6t2d0  ONLINE       0     0     0
                c6t3d0  ONLINE       0     0     0
                c6t4d0  ONLINE       0     0     0
                c6t5d0  ONLINE       0     0     0
                c6t6d0  ONLINE       0     0     0
                c6t7d0  ONLINE       0     0     0
                c6t8d0  ONLINE       0     0     0
                c6t9d0  ONLINE       0     0     0
    errors: No known data errors
    Then after an hour or two all the disks go back into an I/O error state. I thought it might be the SAS controller card, PCI slot, or maybe the cable, so I tried using the other PCI slot in the riser card first (I don't have another cable available). Now the system is back online and again trying to complete the previous scrub:
    zpool status
      pool: media_NAS
    state: ONLINE
    scan: scrub in progress since Thu Apr 30 09:43:00 2015
        5.58T scanned out of 9.59T at 139M/s, 8h26m to go
        0 repaired, 58.14% done
    config:
            NAME        STATE     READ WRITE CKSUM
            media_NAS   ONLINE       0     0     0
              raidz2-0  ONLINE       0     0     0
                c6t0d0  ONLINE       0     0     0
                c6t1d0  ONLINE       0     0     0
                c6t2d0  ONLINE       0     0     0
                c6t3d0  ONLINE       0     0     0
                c6t4d0  ONLINE       0     0     0
                c6t5d0  ONLINE       0     0     0
                c6t6d0  ONLINE       0     0     0
                c6t7d0  ONLINE       0     0     0
                c6t8d0  ONLINE       0     0     0
                c6t9d0  ONLINE       0     0     0
    errors: No known data errors
    The ZFS file systems are mounted:
    bash# df -h|grep media
    media_NAS               14T   493K   6.3T     1%    /media_NAS
    media_NAS/archive       14T   784M   6.3T     1%    /media_NAS/archive
    media_NAS/exercise      14T    42G   6.3T     1%    /media_NAS/exercise
    media_NAS/ext_subs      14T   3.9M   6.3T     1%    /media_NAS/ext_subs
    media_NAS/movies        14T   402K   6.3T     1%    /media_NAS/movies
    media_NAS/movies/bluray    14T   4.0T   6.3T    39%    /media_NAS/movies/bluray
    media_NAS/movies/dvd    14T   585K   6.3T     1%    /media_NAS/movies/dvd
    media_NAS/movies/hddvd    14T   176G   6.3T     3%    /media_NAS/movies/hddvd
    media_NAS/movies/mythRecordings    14T   329K   6.3T     1%    /media_NAS/movies/mythRecordings
    media_NAS/music         14T   347K   6.3T     1%    /media_NAS/music
    media_NAS/music/flac    14T    54G   6.3T     1%    /media_NAS/music/flac
    media_NAS/mythTV        14T    40G   6.3T     1%    /media_NAS/mythTV
    media_NAS/nuc-celeron    14T   731M   6.3T     1%    /media_NAS/nuc-celeron
    media_NAS/pictures      14T   5.1M   6.3T     1%    /media_NAS/pictures
    media_NAS/television    14T   3.0T   6.3T    33%    /media_NAS/television
    but the format command is not seeing any of the disks:
    format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
           0. c1t0d0 <SEAGATE-ST9146803SS-0006 cyl 65533 alt 2 hd 2 sec 2187>
              /pci@0/pci@0/pci@2/scsi@0/sd@0,0
           1. c1t1d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
              /pci@0/pci@0/pci@2/scsi@0/sd@1,0
           2. c1t2d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
              /pci@0/pci@0/pci@2/scsi@0/sd@2,0
           3. c1t3d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>  solaris
              /pci@0/pci@0/pci@2/scsi@0/sd@3,0
    Before moving the card into the other slot in the riser card, format saw each disk in the ZFS pool. I'm not sure why the disks are not seen by format, but the ZFS pool seems to be available to the OS. The disks in the attached tray were set up for Solaris using the Sun StorageTek RAID Manager; they were passed to Solaris as 2 TB RAID-0 components, and format saw them as available 2 TB disks. Any suggestions as to how to proceed if the scrub completes with the SAS card in the new I/O slot? Should I force a reconfigure of devices on the next reboot? If the disks fault out again with I/O errors in this slot, the next steps would be to try a new SAS card and/or cable. Does that sound reasonable?
    Thanks,

    Was the system online (and the ZFS pool) too when you moved the card? That might explain why the disks are confused. Obviously, this system is experiencing some higher level problem like a bad card or cable because disks generally don't fall over at the same time. I would let the scrub finish, if possible, and shut the system down. Bring the system to single-user mode, and review the zpool import data around the device enumeration. If the device info looks sane, then import the pool. This should re-read the device info. If the device info is still not available during the zpool import scan, then you need to look at a higher level.
    Thanks, Cindy
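    In command terms, the sequence Cindy describes might look roughly like this (media_NAS is the pool from this thread; treat it as a sketch, not a verified procedure):
    init 0                       # after the scrub completes, shut down cleanly
    ok boot -rs                  # reconfiguration boot into single-user mode from the OBP
    zpool export media_NAS       # only needed if it auto-imported from the cache file
    zpool import                 # no arguments: review the devices and pools ZFS can see
    zpool import media_NAS       # import only if the device enumeration looks sane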

  • Upgrade a degraded pool

    I know that this is not necessarily the right forum, but the FreeBSD forums haven't been able to help me...
    I recently updated my FreeBSD 8.0 RC3 to 8.1 and after the update I can't import my zpool. My computer says that no such pool exists, even though it can be seen with the zpool status command. I assume that it's due to different ZFS versions. That should be solved by a zpool upgrade, BUT the problem is that I also have a failed disk. What happens to my data if I upgrade a degraded pool? Furthermore, a disk label was lost and ZFS tried to replace the disk with a disk that won't be available once I get the disk re-labeled. I have no clue what to do... :s
    Any help would be most appreciated!!!

    If you can see the pool with "zpool status", it's already imported. Just replace the failed disk.
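    A hedged sketch of that, with placeholder pool and device names, would be (run the upgrade only after the resilver has finished):
    zpool status -x tank        # confirm which device has failed
    zpool replace tank da2 da5  # replace the failed disk with the new one
    zpool status tank           # wait for the resilver to complete
    zpool upgrade tank          # then bring the pool up to the newer ZFS version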

  • Will Solaris 10 (Jun/2006) zfs edition consider DiskSuite

    Hi..
    I previously upgraded a Sol 8 server to Sol 10 (Jan/06, without ZFS), but it could not perform an upgrade as the disk array (which was under a DiskSuite/metadevice configuration) had to be unplugged. So I had to perform a fresh install and reconfigure DiskSuite, which picked up the data and databases correctly for use after the install. This was all with help from a Sun Support case. Anyway, does anyone know if the ZFS version of the O/S is smarter and knows how to deal with metadevices during an upgrade? I have a production box that is still on Sol 8, with a complex metadevice configuration that I will have to upgrade one day soon. But the test box that I took from Sol 8 to Sol 10 (Jun/2006) was difficult, as that version of the upgrade could not deal with the DiskSuite-configured 3310 disk array.
    Please email me at [email protected] if anyone has an update or suggestions
    Thanks for reading...Cheers...Roger

    Hi Darren...
    Well, I raised CASE 10815118 with support. On the Live Upgrade attempt from Sol 8 to 10 (Jan 2006), the install_log produced the following...
    =========>
    Error opening file /a/var/sadm/system/admin/CLUSTER.
    getInstalledPkgs: Unable to access /a/var/sadm/pkg
    copyOldClustertoc: could not copy /a/var/sadm/system/admin/.clustertoc to /tmp/clustertocs/.old.clustertoc
    Error:
    Error: ERROR: The specified root and/or boot was not found or was not upgradeable
    Pfinstall failed. Exit stat= java.lang.UNIXProcess@20ca8b 2
    word must be specified if an upgrade with disk space reallocation is required
    Processing profile
    Checking c2t0d0s0 for an upgradeable Solaris image
         Unable to start Solaris Volume Manager for unknown, c2t0d0s0 is not upgradeable
    ERROR: The specified root and/or boot was not found or was not upgradeable
    ===============<
    It seems to look for a disk (controller c2) that did not exist. I copied the DiskSuite files and other system files to another server. The /etc/lvm/md.tab under Sol 8 looked like the following...
    ================>
    # root (on O/S disks 5 GB)
    d0 -m d10
    d10 1 1 c1t0d0s0
    d20 1 1 c1t1d0s0
    # swap (on O/S disks 8 GB)
    d1 -m d11
    d11 1 1 c1t0d0s1
    d21 1 1 c1t1d0s1
    # var (on O/S disks 4 GB)
    d3 -m d13
    d13 1 1 c1t0d0s3
    d23 1 1 c1t1d0s3
    # archive (on O/S disks 5 GB)
    d4 -m d14
    d14 1 1 c1t0d0s4
    d24 1 1 c1t1d0s4
    # usr/openv (on O/S disks 9 GB)
    d6 -m d16
    d16 1 1 c1t0d0s6
    d26 1 1 c1t1d0s6
    # /export/home (on O/S disks 3.7 GB)
    d7 -m d17
    d17 1 1 c1t0d0s7
    d27 1 1 c1t1d0s7
    # array 3310
    # /software ( approx 140 GB)
    d30 -m d40
    d40 1 6 c3t8d0s1 c3t9d0s1 c3t10d0s1 c3t11d0s1 c3t12d0s1 c3t13d0s1 -i 128b
    d50 1 6 c4t8d0s1 c4t9d0s1 c4t10d0s1 c4t11d0s1 c4t12d0s1 c4t13d0s1 -i 128b
    # /databases ( approx 204 GB)
    d31 -m d41
    d41 1 6 c3t8d0s3 c3t9d0s3 c3t10d0s3 c3t11d0s3 c3t12d0s3 c3t13d0s3 -i 128b
    d51 1 6 c4t8d0s3 c4t9d0s3 c4t10d0s3 c4t11d0s3 c4t12d0s3 c4t13d0s3 -i 128b
    # /spare ( approx 92 GB)
    d32 -m d42
    d42 1 6 c3t8d0s4 c3t9d0s4 c3t10d0s4 c3t11d0s4 c3t12d0s4 c3t13d0s4 -i 128b
    d52 1 6 c4t8d0s4 c4t9d0s4 c4t10d0s4 c4t11d0s4 c4t12d0s4 c4t13d0s4 -i 128b
    ===============<
    We had c1 for the O/S disks and c3 & c4 for the 3310 array. Why it was looking for c2 was never determined???
    The Sun support mob under that CASE ID suggested after many attempts to perform an upgrade...
    =================>
    In this situation, I recommend:
    1. Remove Disk Array from Server i.e detach the connecting cable
    2. Please check if Internal Disks are visible at OK prompt.:
    ok probe-scsi-all
    ==================<
    So I unplugged the array, but because there was no spare disk I ended up performing a fresh install, then attached and reconfigured the array exactly as before; the data and, more importantly, the Oracle databases and applications were recovered and usable.
    So the question now is ... the server is on Sol 10 and I have now received the Sol 10 Jun 2006 media... is it possible to perform an upgrade, and how will ZFS deal with the DiskSuite definitions? It had real problems under the Sol 10 Jan 2006 edition.
    Also, even with the Sun engineers on that CASE ID, we never got to the bottom of the problem with upgrading from Sol 8 to 10 with DiskSuite implemented.
    Cheers...Roger
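    Whichever way the installer handles it, it's worth capturing the SVM/DiskSuite state before any upgrade attempt so the metadevices can be rebuilt by hand if the upgrade balks again; a rough pre-upgrade checklist (standard paths, verify on your own box):
    metastat -p > /var/tmp/metastat-p.out           # md.tab-style dump of all metadevices
    metadb -i   > /var/tmp/metadb-i.out             # location and status of the state database replicas
    cp -p /etc/lvm/md.tab /etc/lvm/md.cf /var/tmp/  # keep copies of the SVM configuration files
    cp -p /etc/vfstab /var/tmp/vfstab.pre-upgrade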

  • First Solaris 11 upgrade fails

    Trying to update Solaris 11, it says "unable to clone the current boot environment",
    manual creation gives this:
    sudo beadm create test
    updatevfstab: failed to open vfstab (/tmp/.be.5Saiwr/etc/vfstab): No such file or directory
    be_copy: failed to update new BE's vfstab (test)
    be_copy: destroying partially created boot environment
    Unable to create test.
    Unable to find message for error code: 1
    cat /etc/vfstab
    #device          device          mount          FS     fsck     mount     mount
    #to mount     to fsck          point          type     pass     at boot     options
    /devices     -     /devices     devfs     -     no     -
    /proc     -     /proc     proc     -     no     -
    ctfs     -     /system/contract     ctfs     -     no     -
    objfs     -     /system/object     objfs     -     no     -
    sharefs     -     /etc/dfs/sharetab     sharefs     -     no     -
    fd     -     /dev/fd     fd     -     no     -
    swap     -     /tmp     tmpfs     -     yes     -
    rpool/ROOT/release11-1     -     /     zfs     -     no     -
    /dev/zvol/dsk/ssd/swap     -     -     swap     -     no     -

    Originally it was OpenSolaris, then upgraded to Solaris 11 Express and then to Solaris 11:
    beadm list -ds
    Error getting boot configuration from pool rpool: Error while processing the /rpool/boot/grub/menu.lst file ([Errno 13] Permission denied: '/rpool/boot/grub/menu.lst')
    BE/Dataset/Snapshot Active Mountpoint Space Policy Created
    dev-latest-2
    rpool/ROOT/dev-latest-2 - - 5.20G static 2010-07-27 15:52
    orsol
    rpool/ROOT/orsol - - 5.72M static 2011-01-13 14:46
    release11
    rpool/ROOT/release11 - - 3.65M static 2011-11-17 18:32
    release11-1
    rpool/ROOT/release11-1 NR / 33.53G static 2011-11-17 18:51
    rpool/ROOT/release11-1@2010-09-07-09:32:33 - - 2.22G static 2010-09-07 13:32
    rpool/ROOT/release11-1@2011-08-30-11:54:22 - - 1.59G static 2011-08-30 15:54
    rpool/ROOT/release11-1@2011-09-26-14:15:49 - - 45.35M static 2011-09-26 18:15
    rpool/ROOT/release11-1@2011-10-17-08:30:51 - - 221.18M static 2011-10-17 12:30
    rpool/ROOT/release11-1@2011-11-07-10:25:09 - - 287.96M static 2011-11-07 14:25
    rpool/ROOT/release11-1@2011-11-17-14:32:46 - - 18.38M static 2011-11-17 18:32
    rpool/ROOT/release11-1@2011-11-17-14:51:04 - - 11.81M static 2011-11-17 18:51
    rpool/ROOT/release11-1@2011-12-02-09:25:26 - - 410.0K static 2011-12-02 13:25
    rpool/ROOT/release11-1@2011-12-02-09:37:22 - - 326.5K static 2011-12-02 13:37
    rpool/ROOT/release11-1@2011-12-02-09:40:54 - - 276.0K static 2011-12-02 13:40
    rpool/ROOT/release11-1@2011-12-02-09:43:06 - - 229.5K static 2011-12-02 13:43
    rpool/ROOT/release11-1@2011-12-02-09:45:16 - - 290.0K static 2011-12-02 13:45
    rpool/ROOT/release11-1@2011-12-02-10:03:45 - - 35.0K static 2011-12-02 14:03
    rpool/ROOT/release11-1@2011-12-02-10:04:08 - - 35.5K static 2011-12-02 14:04
    rpool/ROOT/release11-1@2011-12-02-10:05:40 - - 301.5K static 2011-12-02 14:05
    rpool/ROOT/release11-1@2011-12-05-08:46:41 - - 273.0K static 2011-12-05 12:46
    rpool/ROOT/release11-1@2011-12-05-08:50:44 - - 279.0K static 2011-12-05 12:50
    rpool/ROOT/release11-1@2011-12-05-08:54:36 - - 260.5K static 2011-12-05 12:54
    rpool/ROOT/release11-1@2011-12-05-08:57:39 - - 118.5K static 2011-12-05 12:57
    rpool/ROOT/release11-1@dev - - 696.84M static 2010-06-24 12:50
    rpool/ROOT/release11-1@install - - 2.19G static 2008-09-08 15:45
    rpool/ROOT/release11-1@snapshot1 - - 6.07M static 2011-09-09 14:03
    rpool/ROOT/release11-1@snapshot2 - - 3.37M static 2011-09-09 15:47
    support
    rpool/ROOT/support - - 3.28M static 2011-08-30 15:54
    support-11
    rpool/ROOT/support-11 - - 3.69M static 2011-09-26 18:15
    support-12
    rpool/ROOT/support-12 - - 5.73M static 2011-10-17 12:30
    support-13
    rpool/ROOT/support-13 - - 20.76M static 2011-11-07 14:25
