Removing a slice from Live Upgrade

Some time ago I used Live Upgrade to update a Solaris 9 4/04 V490 to Solaris 10 6/06. The internal disks (2 × 146 GB) are mirrored using SVM. I used a couple of extra slices for the upgrade (/ and /var). I now need to set up a new filesystem for an Oracle Grid Control install, which means reallocating one of the slices currently allocated to Solaris 9 (/var). I assume that I need to remove the slice/metadevice from Live Upgrade and then delete the SVM submirrors and mirror.
What steps/commands are needed to cleanup Live Upgrade and reuse the meta disk?
Thanks,
sysglen

Assuming you have a filesystem on the volume, then you can't.
Solaris has no support for shrinking filesystems.
Well, not until ZFS, but that's a different issue.
If you don't have a filesystem on the volume, you should be able to use metaclear to scrap the volume, then re-add the devices you do want.
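For what it's worth, here is a sketch of the cleanup sequence. The BE name sol9 and the metadevice/slice names d30, d31, d32, c1t0d0s5 and c1t1d0s5 are hypothetical; check lustatus and metastat for your real ones first.
# lustatus ;# confirm which BE still references the old /var slices
# ludelete sol9 ;# delete the stale Solaris 9 boot environment
# metaclear d30 ;# clear the old /var mirror
# metaclear d31 d32 ;# clear its submirrors, freeing the slices
# metainit d31 1 1 c1t0d0s5 ;# rebuild submirrors on the freed slices
# metainit d32 1 1 c1t1d0s5
# metainit d30 -m d31 ;# create a one-way mirror first
# metattach d30 d32 ;# attach the second submirror (syncs in the background)
# newfs /dev/md/rdsk/d30 ;# new filesystem for the Oracle Grid Control install
Note that ludelete will refuse to remove the active BE, so make sure the Solaris 10 BE is the one you are booted from.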

Similar Messages

  • How to delete file systems from a Live Upgrade environment

    How to delete non-critical file systems from a Live Upgrade boot environment?
    Here is the situation.
    I have a Sol 10 upd 3 machine with 3 disks which I intend to upgrade to Sol 10 upd 6.
    Current layout
    Disk 0: 16 GB:
    /dev/dsk/c0t0d0s0 1.9G /
    /dev/dsk/c0t0d0s1 692M /usr/openwin
    /dev/dsk/c0t0d0s3 7.7G /var
    /dev/dsk/c0t0d0s4 3.9G swap
    /dev/dsk/c0t0d0s5 2.5G /tmp
    Disk 1: 16 GB:
    /dev/dsk/c0t1d0s0 7.7G /usr
    /dev/dsk/c0t1d0s1 1.8G /opt
    /dev/dsk/c0t1d0s3 3.2G /data1
    /dev/dsk/c0t1d0s4 3.9G /data2
    Disk 2: 33 GB:
    /dev/dsk/c0t2d0s0 33G /data3
    The data file systems are not in use right now, and I was thinking of
    partitioning the data3 into 2 or 3 file systems and then creating
    a new BE.
    However, the system already has a BE (named s10) and that BE lists
    all of the filesystems, including the data ones.
    # lufslist -n 's10'
    boot environment name: s10
    This boot environment is currently active.
    This boot environment will be active on next system boot.
    Filesystem fstype device size Mounted on Mount Options
    /dev/dsk/c0t0d0s4 swap 4201703424 - -
    /dev/dsk/c0t0d0s0 ufs 2098059264 / -
    /dev/dsk/c0t1d0s0 ufs 8390375424 /usr -
    /dev/dsk/c0t0d0s3 ufs 8390375424 /var -
    /dev/dsk/c0t1d0s3 ufs 3505453056 /data1 -
    /dev/dsk/c0t1d0s1 ufs 1997531136 /opt -
    /dev/dsk/c0t1d0s4 ufs 4294785024 /data2 -
    /dev/dsk/c0t2d0s0 ufs 36507484160 /data3 -
    /dev/dsk/c0t0d0s5 ufs 2727290880 /tmp -
    /dev/dsk/c0t0d0s1 ufs 770715648 /usr/openwin -
    I browsed the Solaris 10 Installation Guide and the man pages for the lu commands, but cannot find how to remove the data file systems from the BE.
    How do I do a live upgrade on this system?
    Thanks for your help.

    Thanks for the tips.
    I commented out the entries in /etc/vfstab, and also had to remove the files /etc/lutab and /etc/lu/ICF.1;
    after that I could create the boot environment from scratch.
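    For anyone else hitting this, the reset described above comes down to something like the following. It throws away all Live Upgrade state, so take backups first; the vfstab entries to comment out are the /data ones listed earlier.
    # cp /etc/lutab /etc/lutab.bak ;# keep a copy of the LU state
    # rm /etc/lutab /etc/lu/ICF.1 ;# remove the boot environment definitions
    # vi /etc/vfstab ;# comment out the /data1, /data2, /data3 entries
    # lucreate -c s10 -n s10u6 -m /:/dev/dsk/c0t2d0s0:ufs ;# recreate a BE; the target slice here is only an example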
    I was also able to create another boot environment and copy into it,
    but now I'm facing a different problem: an error when trying to upgrade.
    # lustatus
    Boot Environment           Is       Active Active    Can    Copy     
    Name                       Complete Now    On Reboot Delete Status   
    s10                        yes      yes    yes       no     -        
    s10u6                      yes      no     no        yes    -
    Now, I have the Solaris 10 Update 6 DVD image on another machine
    which shares out the directory. I mounted it on this machine,
    did a lofiadm and mounted that at /cdrom.
    # ls -CF /cdrom /cdrom/boot /cdrom/platform
    /cdrom:
    Copyright                     boot/
    JDS-THIRDPARTYLICENSEREADME   installer*
    License/                      platform/
    Solaris_10/
    /cdrom/boot:
    hsfs.bootblock   sparc.miniroot
    /cdrom/platform:
    sun4u/   sun4us/  sun4v/
    Now I did luupgrade and I get this error:
    # luupgrade -u -n s10u6 -s /cdrom    
    ERROR: The media miniroot archive does not exist </cdrom/boot/x86.miniroot>.
    ERROR: Cannot unmount miniroot at </cdrom/Solaris_10/Tools/Boot>.
    I find it strange that this sparc machine is complaining about x86.miniroot.
    BTW, the machine on which the DVD image is happens to be x86 running Sol 10.
    I thought that wouldn't matter, as it is just NFS sharing a directory which has a DVD image.
    What am I doing wrong?
    Thanks.
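    (For reference, the lofiadm step described above is typically done like this; the image path and lofi device number are examples only:)
    # lofiadm -a /export/images/sol10u6.iso ;# attach the DVD image; prints the lofi device, e.g. /dev/lofi/1
    # mount -F hsfs -o ro /dev/lofi/1 /cdrom ;# mount the image read-only at /cdrom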

  • Remove Blank Values from Slicer in Power View

    Hi all,
    I am trying to create a Power View report from a Power Pivot data model. After creating the model, when I try to use the slicer in Power View I see a Blank record. I checked that dimension in the model, and there are no blank values in it.
    I also checked the fact table that has a relationship with that dimension, and I don't see any blank values there either.
    I am currently blocked by this issue. Experts, please jump in and give some ideas on how to remove the blank value.
    Thanks

    As MM-99 already stated correctly, the BLANK member is created if there are one or more rows in your fact table that do not exist in the dimension table.
    Maybe you have some issues with upper and lower case?
    Or maybe there are some whitespace characters in the text?
    To identify the issue you may want to create a calculated column in your fact table as =RELATED('DimTable'[MyKeyColumn])
    Then you can filter on that column and see if any blank values appear.
    hth,
    gerhard
    Gerhard Brueckl
    blogging @ http://blog.gbrueckl.at
    working @ http://www.pmOne.com

  • Live Upgrade 2.0 (from Sol8 K23 to Sol9 K05)

    Installed LU 2.0 from solaris 9 CDs
    Created a new BE (= copy of my Sol8 K23)
    Start upgrade on my inactive BE to Sol9
    insert Solaris 9 CD 1of2
    luupgrade -u -n <INACT_BE> -s /cdrom/sol_9_403_sparc/s0
    --> runs fine
    eject cdrom
    insert Solaris 9 CD 2of2
    luupgrade -i -n <INACT_BE> -s /cdrom/sol_9_405_sparc_2 -O '-nodisplay'
    After a few questions, the upgrade starts;
    it first upgrades Live Upgrade OK,
    then it starts upgrading Solaris,
    and then it fails.
    I checked the logs on the <INACT_BE> and found in
    /var/sadm/install/logs/Solaris_9_packages...
    that it failed installing SUNWnsm (Netscape 7) because it was already installed!
    It is right that I had SUNWnsm on my Solaris 8 system!
    Why is this causing LU to fail?
    It should just skip that package and go on to the next.
    For the sake of it I deinstalled Netscape 7 from my <INACT_BE> using pkgrm -R.
    I then restarted the LU using CD 2of2; now it goes further, but fails on package SUNWjhrt (Java), which also already existed!
    Am I missing something, or is LU just unusable?? Thanks

    Fred,
    I personally have never read that caveat; what is recommended is to always run the same version on components that use the same firmware bundle. In other words, for a B series upgrade you need the Infrastructure bundle (which includes firmware for the Fabric Interconnects, IOMs and UCSM) and also the Server bundle (which includes the firmware for the CIMC, BIOS and adapters).
    Bottom line, the recommendation is to run exactly the same version on components whose firmware comes from the same bundle, BUT UCSM 2.1 introduces an enhancement, "Mixed version support (for infra and server bundles firmware)", which allows the combination of SOME infrastructure bundles with some server bundles.
    http://www.cisco.com/en/US/docs/unified_computing/ucs/release/notes/UCS_28313.html#wp58530 << Look for
    "Operational enhancements"
    These are the possible configurations I am aware of:
    2.1(1f) infrastructure and 2.0(5a)+ server firmware
    2.1(2a) infrastructure and 2.1(1f)+ server firmware
    I hope that helps.
    Rate ALL helpful answers.
    -Kenny

  • Live Upgrade from 8 to 10 keeping Oracle install

    Hi everyone,
    I'm trying to figure out how to do a Live Upgrade from Solaris 8 (SPARC) to 10 and still keep my Oracle available after the reboot.
    Since all of the Oracle datafiles are on /un or /opt/oracle, I figure I can just create a new BE for /, /etc and /var. From there I'd just need to edit /etc/vfstab and /etc/init.d/ (for db startups), copy over /var/opt/oracle/, and mount /opt/oracle.
    Does that sound right? Has anyone done this?
    On a side note, I'm still trying to figure out Live Upgrade. My system is configured with RAID 1 under Solstice (metatool). I'm concerned about being able to access the metadb once the BE switch goes through. Should I set up a new mirror for the new BE prior to running LU? Or should I configure the mirror for the new BE once the switchover has gone through? See the sketch below for one possibility.
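    One approach, sketched here with hypothetical device and metadevice names, is to let lucreate build the mirror for the new BE itself rather than pre-creating it:
    # lucreate -n sol10 -m /:/dev/md/dsk/d110:ufs,mirror -m /:/dev/dsk/c1t1d0s0,d111:attach
    The state database replicas live on their own dedicated slices rather than inside a boot environment, so they should stay reachable after the switch; verify with metadb -i before and after.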

    Hello dfkoch,
    To upgrade from ColdFusion 8/ColdFusion 9 to ColdFusion 10, please download the setup for ColdFusion 10 from http://www.adobe.com/support/coldfusion/downloads.html#cf10productdownloads and install it with the serial number.
    The upgrade price can be checked at www.adobe.com or alternatively you can call our sales team.
    Hope this helps.
    Regards,
    Anit Kumar

  • Live upgrade from solaris 8 to 10

    How do I upgrade from Solaris 8 to 10 using Live Upgrade on a Sun Netra T1 machine?

    There is a good intro to LU at
    http://www.sun.com/bigadmin/collections/installation.html
    Beyond that, see docs.sun.com.
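    In outline, the procedure in those docs comes down to the following; the BE names, target slice and media path are illustrative only, and the Solaris 10 versions of the LU packages must be installed on the Solaris 8 system first.
    # lucreate -c sol8 -n sol10 -m /:/dev/dsk/c0t1d0s0:ufs ;# copy the running BE to a spare slice
    # luupgrade -u -n sol10 -s /cdrom/cdrom0 ;# upgrade the inactive BE from the Solaris 10 media
    # luactivate sol10 ;# make the upgraded BE the boot default
    # init 6 ;# reboot with init or shutdown, not reboot, or the activation does not complete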

  • Slicers Removed from Template: "Removed Records: Slicer Cache from /xl/slicerCaches/slicerCache3.xml part (Slicer Cache)"

    Product: Microsoft Office Professional Plus 2010
    I have created a template file with a data table, 2 pivot tables, and each of those 2 pivot tables has a pivotchart and 2 slicers.  I have some code that updates the pivot table ranges.  When that occurs the pivot tables, pivotcharts and slicers
    all update successfully.  
    The problem comes after saving the .xlsm file. Upon opening the file I receive a message saying "Excel found unreadable content in 'myTemplate.xlsm'. Do you want to recover the contents of this workbook? If you trust the source of this workbook,
    click Yes."
    I then get a list of removed records and the repair that was done.
    Removed Records: Slicer Cache from /xl/slicerCaches/slicerCache3.xml part (Slicer Cache)
    Removed Records: Slicer Cache from /xl/slicerCaches/slicerCache4.xml part (Slicer Cache)
    Removed Records: Slicers from /xl/slicers/slicer2.xml part (Slicer)
    Removed Records: Drawing from /xl/drawings/drawing4.xml part (Drawing shape)
    Repaired Records: Named range from /xl/workbook.xml part (Workbook)
    Most of the file is OK with the exception of my last worksheet.  The pivot table and chart get updated, but the slicers are gone.  Any idea why this is happening?  Can something be done to prevent this?
    Thanks,
    Rich

    Hi Rich,
    Based on your description, my understanding is that the slicers are removed after you get the error message and repair the file. You want to know why the slicers are lost after repairing; it seems the file corruption caused the issue.
    Please try to update the pivot table ranges manually, without macros. According to my test, the slicers won't be lost without macros. Please test this method in your own environment. If it works fine with macros disabled, please check your code.
    If my understanding is incorrect, could you upload a sample via OneDrive and explain your problem a bit more precisely, so that we can find a more accurate solution? I am glad to help and look forward to your reply.
    Hope it's helpful.
    Regards,

  • RAID removing missing/damaged slice from mirror

    I had a disk fail in a mirrored RAID config. I replaced it with a new drive and added it to the array, then repaired the array. Now I end up with an extra slice for the disk that failed originally. How do I get rid of it?
    Name: Humongous
    Unique ID: 5A6F0ECF-C86A-4BD6-9B55-050A2AE74B70
    Type: Mirror
    Status: Degraded
    Device Node: disk8
    Apple RAID Version: 2
    # Device Node UUID Status
    0 disk7s3 86264B1D-B5FB-4D8F-B5F2-73DE1F3516B9 Online
    2 disk10s3 E688249B-D352-4F90-A7FB-E8F81153EC77 Online
    2 Unknown Missing/Damaged
    ----------------------------------------------------------------------

    Just remove each disk from the array. They will appear on your Desktop with the same names, so you may want to rename them. Then delete the array entry in the RAID panel of Disk Utility. This is a simple process with a mirrored RAID, and there should be no loss of data on either drive. I've done this several times over the years.
    I am a strong believer in CYA, so usually I have backups of the array just in case, but so far I've never encountered any problems with breaking a mirrored RAID.
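    If you prefer the command line, the rough equivalent is below. The member UUID is taken from the listing above, and the exact syntax should be double-checked against man diskutil before running anything.
    # diskutil appleRAID list ;# find the UUIDs of the set and its members
    # diskutil appleRAID remove 86264B1D-B5FB-4D8F-B5F2-73DE1F3516B9 disk8 ;# pull one member out of the mirror
    # diskutil appleRAID delete disk8 ;# dissolve the set entirely, leaving plain volumes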

  • Patching broken in Live Upgrade from Solaris 9 to Solaris 10

    I'm using Live Upgrade to upgrade a Solaris 9 system to Solaris 10. I installed the
    LU packages from Solaris 10 11/06 plus the latest Live Upgrade patches. Everything
    went fine until I attempted to use `luupgrade -t' to apply Recommended and Security patches to the
    Solaris 10 boot environment. It gave me this error:
    ERROR: The boot environment <sol10env> supports non-global zones. The current boot environment does not support non-global zones. Releases prior to Solaris 10 cannot be used to maintain Solaris 10 and later releases that include support for non-global zones. You may only execute the specified operation on a system with Solaris 10 (or later) installed.
    Can anyone tell me if there is a way to get the Recommended patches installed without having to first activate and boot up to 10?
    Thanks in advance.
    'chele

    Tried it - got kind of excited for a couple of seconds... but then it failed:
    # ./install_cluster -R /.alt.sol10env
    Patch cluster install script for Solaris 10 Recommended Patch Cluster
    WARNING SYSTEMS WITH LIMITED DISK SPACE SHOULD NOT INSTALL PATCHES:
    .(standard stuff about space)
    in only partially loaded patches. Check and be sure adequate disk space
    is available before continuing.
    Are you ready to continue with install? [y/n]: y
    Determining if sufficient save space exists...
    Sufficient save space exists, continuing...
    ERROR: OS is not valid for this cluster. Exiting.

  • How to upgrade from Solaris 10 u2 to Solaris 10 u3 using Live Upgrade?

    Does anyone know how I can use "live upgrade" (or any other method) to upgrade my Solaris 10 u2 to Solaris 10 u3?
    I can't find instructions anywhere! ugh.
    Thank you for your help!

    Try the following documentation:
    http://docs.sun.com/app/docs/doc/819-6396
    For future reference, I found this by going to sun.com -> documentation -> Solaris Operating Systems -> Solaris 10 Operating System -> Solaris 10 11/06 Release and Installation Collection.
    -- Alan

  • Solaris 10 upgrade from 9 - Upgrade option not presented

    I have moved the system disk from a v880 to a v480 and successfully booted.
    I've successfully tested the upgrade using pfinstall.
    /usr/sbin/install.d/pfinstall -D /tmp/profile
    I had the following text in /tmp/profile
    install_type upgrade
    root_device /dev/dsk/c1t0d0s0
    This finished with:
    Test run complete. Exit status 0.
    I had to mount /dev/dsk/c1t0d0s0 on /a before running pfinstall, otherwise the program would fail with Exit status 2.
    Original slice layout is:
    # df -h | grep c1t0
    /dev/dsk/c1t0d0s0 9.8G 4.2G 5.6G 43% /
    /dev/dsk/c1t0d0s3 9.8G 3.7G 6.1G 38% /usr
    /dev/dsk/c1t0d0s4 9.8G 3.9G 5.9G 40% /var
    /dev/dsk/c1t0d0s5 9.8G 1.8G 8.0G 19% /opt
    /dev/dsk/c1t0d0s6 20G 2.0G 18G 11% /export/home
    Because of http://docs.sun.com/app/docs/doc/820-4041/6nfhjnlao?l=Ja&a=view
    I moved var from slice 3 to / on slice 0
    I also removed "swap" entries in vfstab, and even removed the second disk in the system (which didn't have an OS)
    ok> boot cdrom - nowin
    This results in no upgrade option being presented.
    Can someone tell me what I can do? I don't have experience with Jumpstart or Live Upgrade.
    Thanks

    Hi,
    Yes and no.
    11gR1 and 11gR2 are fully supported (premier support). 10gR1 and 10gR2 are in extended support. 9.2 is a bit more complicated, as its general extended support has ended; check note 161818.1 for further details.
    After the extended support there is still the sustaining support.
    Read this to understand the terminology:
    http://www.oracle.com/us/support/library/lifetime-support-technology-069183.pdf
    Best regards
    Phil

  • Live upgrade, zones and separate mount points

    Hi,
    We have a quite large zone environment based on Solaris zones located on VxVM/VxFS. I know this is a doubtful configuration, but the choice was made before I got here, and now we need to upgrade the environment. Veritas guides say it's fine to locate zones on Veritas, but I am not sure Sun would approve.
    Anyway, since all zones are located on a separate volume, I want to create a new one for every zonepath, something like:
    lucreate -n upgrade -m /:/dev/dsk/c2t1d0s0:ufs -m /zones/zone01:/dev/vx/dsk/zone01/zone01_root02:ufs
    This works fine for a while after the integration of 6620317 in 121430-23, but when the new environment is to be activated I get errors, see below [1]. If I look at the commands executed by lucreate, I see that the global root is mounted, but my zone root does not seem to have been mounted before the call to zoneadmd [2]. While this might not be a supported configuration, VxVM seems to be supported, and I think there are a few people out there with zonepaths on separate disks. Live Upgrade seems to have no issues with the files moved from the VxFS filesystem - that part has been done - but the new filesystems do not seem to get mounted correctly.
    Anyone tried something similar, or has any idea how to solve this?
    The system is s10s_u4 with kernel 127111-10 and Live Upgrade patches 121430-25, 121428-10.
    1:
    Integrity check OK.
    Populating contents of mount point </>.
    Populating contents of mount point </zones/zone01>.
    Copying.
    Creating shared file system mount points.
    Copying root of zone <zone01>.
    Creating compare databases for boot environment <upgrade>.
    Creating compare database for file system </zones/zone01>.
    Creating compare database for file system </>.
    Updating compare databases on boot environment <upgrade>.
    Making boot environment <upgrade> bootable.
    ERROR: unable to mount zones:
    zoneadm: zone 'zone01': can't stat /.alt.upgrade/zones/zone01/root: No such file or directory
    zoneadm: zone 'zone01': call to zoneadmd failed
    ERROR: unable to mount zone <zone01> in </.alt.upgrade>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: umount: warning: /dev/dsk/c2t1d0s0 not in mnttab
    umount: /dev/dsk/c2t1d0s0 not mounted
    ERROR: cannot unmount </dev/dsk/c2t1d0s0>
    ERROR: cannot mount boot environment by name <upgrade>
    ERROR: Unable to determine the configuration of the target boot environment <upgrade>.
    ERROR: Update of loader failed.
    ERROR: Unable to umount ABE <upgrade>: cannot make ABE bootable.
    Making the ABE <upgrade> bootable FAILED.
    ERROR: Unable to make boot environment <upgrade> bootable.
    ERROR: Unable to populate file systems on boot environment <upgrade>.
    ERROR: Cannot make file systems for boot environment <upgrade>.
    2:
    0 21191 21113 /usr/lib/lu/lumount -f upgrade
    0 21192 21191 /etc/lib/lu/plugins/lupi_bebasic plugin
    0 21193 21191 /etc/lib/lu/plugins/lupi_svmio plugin
    0 21194 21191 /etc/lib/lu/plugins/lupi_zones plugin
    0 21195 21192 mount /dev/dsk/c2t1d0s0 /.alt.upgrade
    0 21195 21192 mount /dev/dsk/c2t1d0s0 /.alt.upgrade
    0 21196 21192 mount -F tmpfs swap /.alt.upgrade/var/run
    0 21196 21192 mount swap /.alt.upgrade/var/run
    0 21197 21192 mount -F tmpfs swap /.alt.upgrade/tmp
    0 21197 21192 mount swap /.alt.upgrade/tmp
    0 21198 21192 /bin/sh /usr/lib/lu/lumount_zones -- /.alt.upgrade
    0 21199 21198 /bin/expr 2 - 1
    0 21200 21198 egrep -v ^(#|global:) /.alt.upgrade/etc/zones/index
    0 21201 21198 /usr/sbin/zonecfg -R /.alt.upgrade -z test exit
    0 21202 21198 false
    0 21205 21204 /usr/sbin/zoneadm -R /.alt.upgrade list -i -p
    0 21206 21204 sed s/\([^\]\)::/\1:-:/
    0 21207 21203 zoneadm -R /.alt.upgrade -z zone01 mount
    0 21208 21207 zoneadmd -z zone01 -R /.alt.upgrade
    0 21210 21203 false
    0 21211 21203 gettext unable to mount zone <%s> in <%s>
    0 21212 21203 /etc/lib/lu/luprintf -Eelp2 unable to mount zone <%s> in <%s> zone01 /.alt.up
    Edited by: henrikj_ on Sep 8, 2008 11:55 AM Added Solaris release and patch information.

    I updated my manual pages and got a reminder about the zonename field for the -m option of lucreate. But I still have no success: if I have the root filesystem for the zone in vfstab, it tries to mount the current zone root into the alternate BE:
    # lucreate -u upgrade -m /:/dev/dsk/c2t1d0s0:ufs -m /:/dev/vx/dsk/zone01/zone01_rootvol02:ufs:zone01
    <snip>
    Creating file systems on boot environment <upgrade>.
    Creating <ufs> file system for </> in zone <global> on </dev/dsk/c2t1d0s0>.
    Creating <ufs> file system for </> in zone <zone01> on </dev/vx/dsk/zone01/zone01_rootvol02>.
    Mounting file systems for boot environment <upgrade>.
    ERROR: UX:vxfs mount: ERROR: V-3-21264: /dev/vx/dsk/zone01/zone01_rootvol is already mounted, /.alt.tmp.b-gQg.mnt/zones/zone01 is busy,
    allowable number of mount points exceeded
    ERROR: cannot mount mount point </.alt.tmp.b-gQg.mnt/zones/zone01> device </dev/vx/dsk/zone01/zone01_rootvol>
    ERROR: failed to mount file system </dev/vx/dsk/zone01/zone01_rootvol> on </.alt.tmp.b-gQg.mnt/zones/zone01>
    ERROR: unmounting partially mounted boot environment file systems
    If I try to do the same but with the filesystem removed from vfstab, then I get another error:
    <snip>
    Creating boot environment <upgrade>.
    Creating file systems on boot environment <upgrade>.
    Creating <ufs> file system for </> in zone <global> on </dev/dsk/c2t1d0s0>.
    Creating <ufs> file system for </> in zone <zone01> on </dev/vx/dsk/zone01/zone01_upgrade>.
    Mounting file systems for boot environment <upgrade>.
    Calculating required sizes of file systems for boot environment <upgrade>.
    Populating file systems on boot environment <upgrade>.
    Checking selection integrity.
    Integrity check OK.
    Populating contents of mount point </>.
    Populating contents of mount point </zones/zone01>.
    [...]
    Making the ABE <upgrade> bootable FAILED.
    ERROR: Unable to make boot environment <upgrade> bootable.
    ERROR: Unable to populate file systems on boot environment <upgrade>.
    ERROR: Cannot make file systems for boot environment <upgrade>.
    If I let lucreate copy the zonepath to the same slice as the OS, the creation of the BE works fine:
    # lucreate -n upgrade -m /:/dev/dsk/c2t1d0s0:ufs

  • Upgrading Sol 10 u3 x86 -- Sol 10 u6 x86 without using LU/Live Upgrade ?

    Hi all,
    I've got a group of SunFire x4200 M2 hosts currently running Solaris 10 x86 u3, fully patched. They are using the internal LSI Logic SAS controller in a RAID1 config for the disk environment.
    When they were originally installed, the thought of running up an SVM mirror wasn't on the cards for our SE who installed them - nor was the potential for using Live Upgrade.
    It occurs to me that, whilst Sun will continue to release patches for u3, I need to actually migrate to u6 in order to get certain functionality (e.g. the new ZFS genesis, iscsitadm/iscsiadm, root zpools). I don't believe this will be "patched in" later for older versions, will it?
    How can I cleanly upgrade my hosts without losing any config data or information? Is it even possible? These are production hosts, so I want to do this in the safest way possible. Many have told me I can simply slip in the Sol 10 update 6 media, boot from it, and find that it will detect the old OS, but I am a little skeptical.
    If someone could point me in the right direction here, that would be appreciated!
    Thanks.
    z

    The installer polls many different things on the existing installation. If any of them aren't exactly right, it won't offer the upgrade option, and it won't give you any hints about what tests didn't pass. (Did I find a root slice, is there a root filesystem on it? Does the /etc/vfstab on that partition make sense? Do I have the space in those filesystems to upgrade? ...)
    The most common ones to fail have to do with disk layout. It's possible to move /var around, or have an extra partition mounted that the installer doesn't realize is irrelevant for the upgrade.
    If that's not the problem, it's very difficult to pin down. At one level, most of the installer logic is just scripts that you could read to understand the logic, but it's difficult to tell which script is running when, so following the logic is not easy.
    Darren
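    A hedged way to run roughly those same checks by hand before booting the installer (the device names follow the layout quoted earlier in the thread):
    # mount /dev/dsk/c1t0d0s0 /a ;# can the root slice be found and mounted?
    # cat /a/etc/vfstab ;# does every entry reference a device that actually exists?
    # df -h /a ;# is there enough free space to upgrade?
    # umount /a
    Whichever of these looks wrong is a good candidate for the check the installer is silently failing.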

  • Live Upgrade fails on cluster node with zfs root zones

    We are having issues using Live Upgrade in the following environment:
    -UFS root
    -ZFS zone root
    -Zones are not under cluster control
    -System is fully up to date for patching
    We also use Live Upgrade with the exact same system configuration on other nodes, except that the zones are UFS root, and there Live Upgrade works fine.
    Here is the output of a Live Upgrade:
    bash-3.2# lucreate -n sol10-20110505 -m /:/dev/md/dsk/d302:ufs,mirror -m /:/dev/md/dsk/d320:detach,attach,preserve -m /var:/dev/md/dsk/d303:ufs,mirror -m /var:/dev/md/dsk/d323:detach,attach,preserve
    Determining types of file systems supported
    Validating file system requests
    The device name </dev/md/dsk/d302> expands to device path </dev/md/dsk/d302>
    The device name </dev/md/dsk/d303> expands to device path </dev/md/dsk/d303>
    Preparing logical storage devices
    Preparing physical storage devices
    Configuring physical storage devices
    Configuring logical storage devices
    Analyzing system configuration.
    Comparing source boot environment <sol10> file systems with the file
    system(s) you specified for the new boot environment. Determining which
    file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    The device </dev/dsk/c0t1d0s0> is not a root device for any boot environment; cannot get BE ID.
    Creating configuration for boot environment <sol10-20110505>.
    Source boot environment is <sol10>.
    Creating boot environment <sol10-20110505>.
    Creating file systems on boot environment <sol10-20110505>.
    Preserving <ufs> file system for </> on </dev/md/dsk/d302>.
    Preserving <ufs> file system for </var> on </dev/md/dsk/d303>.
    Mounting file systems for boot environment <sol10-20110505>.
    Calculating required sizes of file systems for boot environment <sol10-20110505>.
    Populating file systems on boot environment <sol10-20110505>.
    Checking selection integrity.
    Integrity check OK.
    Preserving contents of mount point </>.
    Preserving contents of mount point </var>.
    Copying file systems that have not been preserved.
    Creating shared file system mount points.
    Creating snapshot for <data/zones/img1> on <data/zones/img1@sol10-20110505>.
    Creating clone for <data/zones/img1@sol10-20110505> on <data/zones/img1-sol10-20110505>.
    Creating snapshot for <data/zones/jdb3> on <data/zones/jdb3@sol10-20110505>.
    Creating clone for <data/zones/jdb3@sol10-20110505> on <data/zones/jdb3-sol10-20110505>.
    Creating snapshot for <data/zones/posdb5> on <data/zones/posdb5@sol10-20110505>.
    Creating clone for <data/zones/posdb5@sol10-20110505> on <data/zones/posdb5-sol10-20110505>.
    Creating snapshot for <data/zones/geodb3> on <data/zones/geodb3@sol10-20110505>.
    Creating clone for <data/zones/geodb3@sol10-20110505> on <data/zones/geodb3-sol10-20110505>.
    Creating snapshot for <data/zones/dbs9> on <data/zones/dbs9@sol10-20110505>.
    Creating clone for <data/zones/dbs9@sol10-20110505> on <data/zones/dbs9-sol10-20110505>.
    Creating snapshot for <data/zones/dbs17> on <data/zones/dbs17@sol10-20110505>.
    Creating clone for <data/zones/dbs17@sol10-20110505> on <data/zones/dbs17-sol10-20110505>.
    WARNING: The file </tmp/.liveupgrade.4474.7726/.lucopy.errors> contains a
    list of <2> potential problems (issues) that were encountered while
    populating boot environment <sol10-20110505>.
    INFORMATION: You must review the issues listed in
    </tmp/.liveupgrade.4474.7726/.lucopy.errors> and determine if any must be
    resolved. In general, you can ignore warnings about files that were
    skipped because they did not exist or could not be opened. You cannot
    ignore errors such as directories or files that could not be created, or
    file systems running out of disk space. You must manually resolve any such
    problems before you activate boot environment <sol10-20110505>.
    Creating compare databases for boot environment <sol10-20110505>.
    Creating compare database for file system </var>.
    Creating compare database for file system </>.
    Updating compare databases on boot environment <sol10-20110505>.
    Making boot environment <sol10-20110505> bootable.
    ERROR: unable to mount zones:
    WARNING: zone jdb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/jdb3-sol10-20110505 does not exist.
    WARNING: zone posdb5 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/posdb5-sol10-20110505 does not exist.
    WARNING: zone geodb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/geodb3-sol10-20110505 does not exist.
    WARNING: zone dbs9 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs9-sol10-20110505 does not exist.
    WARNING: zone dbs17 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs17-sol10-20110505 does not exist.
    zoneadm: zone 'img1': "/usr/lib/fs/lofs/mount /.alt.tmp.b-tWc.mnt/global/backups/backups/img1 /.alt.tmp.b-tWc.mnt/zoneroot/img1-sol10-20110505/lu/a/backups" failed with exit code 111
    zoneadm: zone 'img1': call to zoneadmd failed
    ERROR: unable to mount zone <img1> in </.alt.tmp.b-tWc.mnt>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
    ERROR: Unable to remount ABE <sol10-20110505>: cannot make ABE bootable
    ERROR: no boot environment is mounted on root device </dev/md/dsk/d302>
    Making the ABE <sol10-20110505> bootable FAILED.
    ERROR: Unable to make boot environment <sol10-20110505> bootable.
    ERROR: Unable to populate file systems on boot environment <sol10-20110505>.
    ERROR: Cannot make file systems for boot environment <sol10-20110505>.
    Any ideas why it can't mount that "backups" lofs filesystem into /.alt? I am going to try to remove the lofs from the zone configuration and try again. But if that works, I still need to find a way to use LOFS filesystems in the zones while using Live Upgrade.
    Thanks

    I was able to successfully do a Live Upgrade with Zones with a ZFS root in Solaris 10 update 9.
    When attempting to do a "lumount s10u9c33zfs", it gave the following error:
    ERROR: unable to mount zones:
    zoneadm: zone 'edd313': "/usr/lib/fs/lofs/mount -o rw,nodevices /.alt.s10u9c33zfs/global/ora_export/stage /zonepool/edd313 -s10u9c33zfs/lu/a/u04" failed with exit code 111
    zoneadm: zone 'edd313': call to zoneadmd failed
    ERROR: unable to mount zone <edd313> in </.alt.s10u9c33zfs>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: No such file or directory: error unmounting <rpool1/ROOT/s10u9c33zfs>
    ERROR: cannot mount boot environment by name <s10u9c33zfs>
    The solution in this case was:
    zonecfg -z edd313
    info                 ;# display current settings
    remove fs dir=/u05   ;# remove the filesystem linked to a "/global/" filesystem in the global zone
    verify               ;# check the change
    commit               ;# commit the change
    exit

  • Solaris 9 Live Upgrade

    Dear All
    I am upgrading Solaris 8 to Solaris 9 using Live Upgrade on a Sun Enterprise 450 server. All the steps are completed, but I am having a problem only while activating the Solaris 9 boot environment. The upgrade completed successfully from the Solaris 9 (1 of 2) CD. When I activate the ABE for Solaris 9, it says that the upgrade is not complete and some packages still need to be installed, giving Error 142. If I continue upgrading from the (2 of 2) CD, it is not accepted. Can anyone suggest how to complete the upgrade process so that I can activate the boot environment?
    Thanks / Regards

    Use remove SUNWXall then upgrade SUNWXall in your OS-Profile.
