RFE: smpatch for Live Upgrade boot environments

We use smpatch extensively, along with a local patch server, to keep our Solaris servers
and workstations up to date on patches. I'm relatively satisfied with this facility.
I'd like to use smpatch to apply patches to a Live Upgrade boot environment, but it
doesn't offer that option. All I really need to do is to point it at an alternate root to do
the analysis and patch download. Live Upgrade already has the ability to apply patches
from a local directory. I've had to turn to the competition, pca, to do the analysis and
download.
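For reference, the workaround looks roughly like this (a hedged sketch; pca option names vary by version, and the BE name, directories and patch ID below are only placeholders):
# lumount patchbe /.alt.patchbe        (mount the alternate BE)
# cd /var/tmp/patches
# pca -R /.alt.patchbe -l missing      (pca analysis against the alternate root)
# pca -R /.alt.patchbe -d missing      (download the missing patches into the current directory)
# luumount patchbe
# luupgrade -t -n patchbe -s /var/tmp/patches 111111-11     (apply from the local directory)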
Please request that this ability be added to smpatch.

Unfortunately, man pages are not usually updated after an initial release; however, change request 6481979 exists to add this option to the man pages. The option now appears only in the smpatch usage message (shown when the command is run with no parameters), as "-b boot-env". As an example:
$ smpatch add -b altboot -i 111111-11
The relevant change requests were 6366823 for Update Connection and, historically, 4974240 for smpatch. Because the realization detection used during an analysis may depend on active software or drivers to extract data, it cannot be performed statically against a system image, so a correct analysis of an inactive boot environment is not possible; that appears to be why only the add, remove and update subcommands were given the boot-environment option.
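Putting the pieces together, a minimal sketch of how the boot-environment option fits the usual flow (the patch ID and BE name are placeholders; per the limitation above, the analysis and download are still performed against the active BE):
$ smpatch analyze
$ smpatch download -i 111111-11
$ smpatch add -b altboot -i 111111-11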

Similar Messages

  • Creating Boot Environment for Live Upgrade

    Hello.
    I'd like to upgrade a Sun Fire 280R system running Solaris 8 to Solaris 10 U4, and I'd like to use Live Upgrade to do this. As that's going to be my first LU of a system, I've got some questions. Before I start, I'd like to mention that I have read the "Solaris 10 8/07 Installation Guide: Solaris Live Upgrade and Upgrade Planning" document (820-0178, http://docs.sun.com/app/docs/doc/820-0178). Nonetheless, I'd also appreciate pointers to more "hands-on" documentation or a howto regarding Live Upgrade.
    The system that I'd like to upgrade has these filesystems:
    (winds02)askwar$ df
    Filesystem 1k-blocks Used Available Use% Mounted on
    /dev/md/dsk/d30 4129290 684412 3403586 17% /
    /dev/md/dsk/d32 3096423 1467161 1567334 49% /usr
    /dev/md/dsk/d33 2053605 432258 1559739 22% /var
    swap 7205072 16 7205056 1% /var/run
    /dev/dsk/c3t1d0s6 132188872 61847107 69019877 48% /u04
    /dev/md/dsk/d34 18145961 5429315 12535187 31% /opt
    /dev/md/dsk/d35 4129290 77214 4010784 2% /export/home
    It has two built-in hard disks, which form those metadevices. You can find the "metastat" output at http://askwar.pastebin.ca/697380. I'm now planning to break the mirrors for /, /usr, /var and /opt. To do so, I'd run
    metadetach d33 d23
    metaclear d23
    d23 is/used to be c1t1d0s4. I'd do this for d30, d32 and d34 as well. The plan is that I'd be able to use these newly freed slices on c1t1d0 for LU. I know that I'm in trouble if c1t0d0 dies now. But that's okay, as that system isn't being used anyway right now...
    Or wait, I can use lucreate to do that as well, can't I? So, instead of manually detaching the mirror, I could do:
    lucreate -n s8_2_s10 -m /:/dev/md/dsk/d30:preserve,ufs \
    -m /usr:/dev/md/dsk/d32:preserve,ufs \
    -m /var:/dev/md/dsk/d33:preserve,ufs \
    -m /opt:/dev/md/dsk/d34:preserve,ufs
    Does that sound right? I'd assume that I'd then have a new boot environment called "s8_2_s10", which uses the contents of the old metadevices. Or would the correct command rather be:
    lucreate -n s8_2_s10_v2 \
    -m /:/dev/md/dsk/d0:mirror,ufs \
    -m /:/dev/md/dsk/d20:detach,attach,preserve \
    -m /usr:/dev/md/dsk/d2:mirror,ufs \
    -m /usr:/dev/md/dsk/d22:detach,attach,preserve \
    -m /var:/dev/md/dsk/d3:mirror,ufs \
    -m /var:/dev/md/dsk/d23:detach,attach,preserve \
    -m /opt:/dev/md/dsk/d4:mirror,ufs \
    -m /opt:/dev/md/dsk/d24:detach,attach,preserve
    What would be the correct way to create the new boot environment? As I said, I haven't done this before, so I'd really appreciate some help.
    Thanks a lot,
    Alexander Skwar

    I replied to this thread: Re: lucreate and non-global zones, so as not to duplicate content, but for some reason it was locked. So I'll post here...
    The thread was locked because you were not replying to it. You were hijacking that other person's discussion from 2012 to ask your own new post.
    You have now properly asked your question, and people can pay attention to you and not confuse you with that other person.

  • Best practices for ZFS file systems when using live upgrade?

    I would like feedback on how to lay out ZFS file systems to deal with files that are constantly changing during the Live Upgrade process. For the rest of this post, let's assume I am building a very active FreeRadius server with log files that are constantly updating and must be preserved in any boot environment during the LU process.
    Here is the ZFS layout I have come up with (swap, home, etc omitted):
    NAME                                USED  AVAIL  REFER  MOUNTPOINT
    rpool                              11.0G  52.0G    94K  /rpool
    rpool/ROOT                         4.80G  52.0G    18K  legacy
    rpool/ROOT/boot1                   4.80G  52.0G  4.28G  /
    rpool/ROOT/boot1/zones-root         534M  52.0G    20K  /zones-root
    rpool/ROOT/boot1/zones-root/zone1   534M  52.0G   534M  /zones-root/zone1
    rpool/zone-data                      37K  52.0G    19K  /zones-data
    rpool/zone-data/zone1-runtime        18K  52.0G    18K  /zones-data/zone1-runtime
    There are 2 key components here:
    1) The ROOT file system - This stores the / file systems of the local and global zones.
    2) The zone-data file system - This stores the data that will be changing within the local zones.
    Here is the configuration for the zone itself:
    <zone name="zone1" zonepath="/zones-root/zone1" autoboot="true" bootargs="-m verbose">
      <inherited-pkg-dir directory="/lib"/>
      <inherited-pkg-dir directory="/platform"/>
      <inherited-pkg-dir directory="/sbin"/>
      <inherited-pkg-dir directory="/usr"/>
      <filesystem special="/zones-data/zone1-runtime" directory="/runtime" type="lofs"/>
      <network address="192.168.0.1" physical="e1000g0"/>
    </zone>
    The key components here are:
    1) The local zone / is shared in the same file system as global zone /
    2) The /runtime file system in the local zone is stored outside of the global rpool/ROOT file system in order to maintain data that changes across the live upgrade boot environments.
    The system (local and global zone) will operate like this:
    The global zone is used to manage zones only.
    Application software that has constantly changing data will be installed in the /runtime directory within the local zone. For example, FreeRadius will be installed in: /runtime/freeradius
    During a live upgrade the / file system in both the local and global zones will get updated, while /runtime is mounted untouched in whatever boot environment that is loaded.
    Does this make sense? Is there a better way to accomplish what I am looking for? Is this setup going to cause any problems?
    What I would really like is to not have to worry about any of this and just install the application software wherever the software supplier defaults to. It would be great if this system somehow magically knew to leave my changing data alone across boot environments.
    Thanks in advance for your feedback!
    --Jason

    Hello "jemurray".
    Have you read this document? (page 198)
    http://docs.sun.com/app/docs/doc/820-7013?l=en
    Then the solution is:
    01.- Create an alternate boot environment
    a.- In a new rpool, or
    b.- In the same rpool
    02.- Upgrade this new environment
    03.- Then, since I see that you have the "radius" zone as a sparse zone (is that right?), when you upgrade the alternate boot environment you will, at the same time, be upgrading that zone.
    This may sound easy, but you should be careful; please try this in a development environment first.
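    In command form, the steps above would look roughly like this (a hedged sketch; the BE name and media path are placeholders, and -p is only needed if the new BE goes into a different rpool):
    # lucreate -n newBE                  (add -p otherpool for case a, omit it for case b)
    # luupgrade -u -n newBE -s /path/to/solaris/media
    # luactivate newBE
    # init 6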
    Good luck

  • Zones or Containers Live Upgrade Solaris10 Update 8

    Good Day,
    I am running my Solaris OS in a ZFS root pool. My containers/zones are running in their own ZPOOLS. They are not part of the root pool.
    How can I get Live Upgrade to not make snapshots of my running containers/zones?
    To shut the containers down and detach them is not an option.
    Many Thanks,
    Gilbert

    Our client has a strange request :-)
    The idea is to run Live Upgrade and patch the new boot environment. On the day of the reboot, the containers are detached before the reboot. Once the Global Domain has rebooted successfully, the containers/zones are attached again.
    They have limited space and don't have the luxury of letting Live Upgrade make clones of their container/zone environments.
    I tried commenting out the running zones in /etc/zones/index, but somehow Live Upgrade still detects the running zones and makes a snapshot of the ZFS filesystem in that ZPOOL.
    Any suggestions?
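    For reference, the detach-before-reboot flow described above would look roughly like this (a hedged sketch; the zone and BE names are placeholders, and this does not stop Live Upgrade from snapshotting the zone zpools during the patching itself):
    # zoneadm -z zone1 halt           (after the applications in the zone have been shut down cleanly)
    # zoneadm -z zone1 detach
    # luactivate patchedBE
    # init 6
    # zoneadm -z zone1 attach -u      (once the global zone is back up; -u updates the zone to the new patch level)
    # zoneadm -z zone1 boot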

  • How to delete file systems from a Live Upgrade environment

    How to delete non-critical file systems from a Live Upgrade boot environment?
    Here is the situation.
    I have a Sol 10 upd 3 machine with 3 disks which I intend to upgrade to Sol 10 upd 6.
    Current layout
    Disk 0: 16 GB:
    /dev/dsk/c0t0d0s0 1.9G /
    /dev/dsk/c0t0d0s1 692M /usr/openwin
    /dev/dsk/c0t0d0s3 7.7G /var
    /dev/dsk/c0t0d0s4 3.9G swap
    /dev/dsk/c0t0d0s5 2.5G /tmp
    Disk 1: 16 GB:
    /dev/dsk/c0t1d0s0 7.7G /usr
    /dev/dsk/c0t1d0s1 1.8G /opt
    /dev/dsk/c0t1d0s3 3.2G /data1
    /dev/dsk/c0t1d0s4 3.9G /data2
    Disk 2: 33 GB:
    /dev/dsk/c0t2d0s0 33G /data3
    The data file systems are not in use right now, and I was thinking of
    partitioning the data3 into 2 or 3 file systems and then creating
    a new BE.
    However, the system already has a BE (named s10) and that BE lists
    all of the filesystems, including the data ones.
    # lufslist -n 's10'
    boot environment name: s10
    This boot environment is currently active.
    This boot environment will be active on next system boot.
    Filesystem fstype device size Mounted on Mount Options
    /dev/dsk/c0t0d0s4 swap 4201703424 - -
    /dev/dsk/c0t0d0s0 ufs 2098059264 / -
    /dev/dsk/c0t1d0s0 ufs 8390375424 /usr -
    /dev/dsk/c0t0d0s3 ufs 8390375424 /var -
    /dev/dsk/c0t1d0s3 ufs 3505453056 /data1 -
    /dev/dsk/c0t1d0s1 ufs 1997531136 /opt -
    /dev/dsk/c0t1d0s4 ufs 4294785024 /data2 -
    /dev/dsk/c0t2d0s0 ufs 36507484160 /data3 -
    /dev/dsk/c0t0d0s5 ufs 2727290880 /tmp -
    /dev/dsk/c0t0d0s1 ufs 770715648 /usr/openwin -
    I browsed the Solaris 10 Installation Guide and the man pages
    for the lu commands, but cannot find how to remove the data
    file systems from the BE.
    How do I do a live upgrade on this system?
    Thanks for your help.

    Thanks for the tips.
    I commented out the entries in /etc/vfstab, also had to remove the files /etc/lutab and /etc/lu/ICF.1
    and then could create the Boot Environment from scratch.
    I was also able to create another boot environment and copy into it,
    but now I'm facing a different problem: an error when trying to upgrade.
    # lustatus
    Boot Environment           Is       Active Active    Can    Copy     
    Name                       Complete Now    On Reboot Delete Status   
    s10                        yes      yes    yes       no     -        
    s10u6                      yes      no     no        yes    -
    Now, I have the Solaris 10 Update 6 DVD image on another machine
    which shares out the directory. I mounted it on this machine,
    did a lofiadm and mounted that at /cdrom.
    # ls -CF /cdrom /cdrom/boot /cdrom/platform
    /cdrom:
    Copyright                     boot/
    JDS-THIRDPARTYLICENSEREADME   installer*
    License/                      platform/
    Solaris_10/
    /cdrom/boot:
    hsfs.bootblock   sparc.miniroot
    /cdrom/platform:
    sun4u/   sun4us/  sun4v/
    Now I did luupgrade and I get this error:
    # luupgrade -u -n s10u6 -s /cdrom    
    ERROR: The media miniroot archive does not exist </cdrom/boot/x86.miniroot>.
    ERROR: Cannot unmount miniroot at </cdrom/Solaris_10/Tools/Boot>.
    I find it strange that this SPARC machine is complaining about x86.miniroot.
    BTW, the machine on which the DVD image is happens to be x86 running Sol 10.
    I thought that wouldn't matter, as it is just NFS sharing a directory which has a DVD image.
    What am I doing wrong?
    Thanks.
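    (For reference, the lofiadm/mount/luupgrade sequence described above normally looks like this; a hedged sketch with a placeholder ISO path, and it does not by itself explain the x86.miniroot error.)
    # lofiadm -a /export/isos/sol-10-u6-sparc-dvd.iso       (prints the lofi device, e.g. /dev/lofi/1)
    # mount -F hsfs -o ro /dev/lofi/1 /cdrom
    # luupgrade -u -n s10u6 -s /cdrom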

  • Looking for information on best practices using Live Upgrade to patch LDOMs

    This is on Solaris 10. I'm relatively new to this style of patching... I have a T5240 with 4 LDOMs: a control LDOM and three clients. I have some fundamental questions I'd like help with.
    Namely:
    #1. The client LDOMs have zones running in them. Do I need to init 0 the zones, or can I just zoneadm halt them regardless of state? I.e., if a zone is running a database, will halting the zone essentially snapshot it, or will it attempt to shut it down? Is this even a necessary step?
    #2. What is the recommended reboot order for the LDOMs? Do I need to init 0 the client LDOMs and then reboot the control LDOM, or can I leave the client LDOMs running, just reboot the control, and then reboot the clients after the control comes up?
    #3. Oracle: it's running in several of the zones on the client LDOMs; what considerations need to be made for this?
    I am sure other things will come up during the conversation but I have been looking for an hour on Oracle's site for this and the only thing I can find is old Sun Docs with broken links.
    Thanks for any help you can provide,
    pipelineadmin

    Before you use live upgrade, or any other patching technique for Solaris, please be sure to read http://docs.oracle.com/cd/E23823_01/html/E23801/index.html which includes information on upgrading systems with non-global zones. Also, go to support.oracle.com and read Oracle Solaris Live Upgrade Information Center [ID 1364140.1]. These really are MANDATORY READING.
    For the individual questions:
    #1. During the actual maintenance you don't have to do anything to the zone - just operate it as normal. That's the purpose of the "live" in "live upgrade" - you're applying patches on a live, running system under normal operations. When you are finished with that process you can then reboot into the new "boot environment". This will become clearer after reading the above documents. Do as you normally would do before taking a planned outage: shut the databases down using the database commands for a graceful shutdown. A zone halt will abruptly stop the zone and is not a good idea for a database. Alternatively, if you can take application outages, you could (smoothly) shut down the applications and then their domains, detach the zones (zoneadm detach) and then do a live upgrade. Some people like that because it makes things faster. After the live upgrade you would reboot and then zoneadm attach the zones again. The fact that the Solaris instance is running within a logical domain really is mostly beside the point with respect to this process.
    As you can see, there are a LOT of options and choices here, so it's important to read the doc. I ***strongly*** recommend you practice on a test domain so you can get used to the procedure. That's one of the benefits of virtualization: you can easily set up test environments so you can test out procedures. Do it! :-)
    #2 First, note that you can update the domains individually at separate times, just as if they were separate physical machines. So, you could update the guest domains one week (all at once or one at a time), reboot them into the new Solaris 10 software level, and then a few weeks later (or whenever) update the control domain.
    If you had set up your T5240 in a split-bus configuration with an alternate I/O domain providing virtual I/O for the guests, you would be able to upgrade the extra I/O domain and the control domain one at a time in a rolling upgrade - without ever having to reboot the guests. That's really powerful for providing continuous availability. Since you haven't done that, the answer is that at the point you reboot the control domain the guests will lose their I/O. They don't crash, and technically you could just have them continue until the control domain comes back up, at which time the I/O devices reappear. For an important application like a database I wouldn't recommend that. Instead: shut down the guests, then reboot the control domain, then bring the guest domains back up.
    #3. The fact that the Oracle database is running in zones inside those domains really isn't an issue. You should study the zones administration guide to understand the operational aspects of running with zones, and make sure that the patches are compatible with the version of Oracle.
    I STRONGLY recommend reading the documents mentioned at top, and setting up a test domain to practice on. It shouldn't be hard for you to find documentation. Go to www.oracle.com and hover your mouse over "Oracle Technology Network". You'll see a window with a menu of choices, one of which is "Documentation" - click on that. From there, click on System Software, and it takes you right to the links for Solaris 10 and 11.

  • Volume as install disk for Guest Domain and Live Upgrade

    Hi Folks,
    I am new to LDOMs and have some questions - any pointers, examples would be much appreciated:
    (1) With support for volumes to be used as whole disks added in LDOM release 1.0.3, can we export a whole LUN under either VERITAS DMP or MPxIO control to a guest domain and install Solaris on it? Any gotchas or special config required to do this?
    (2) Can Solaris Live Upgrade be used with guest LDOMs, or is this ability limited to control domains?
    Thanks

    The answer to your #1 question is YES.
    Here's my mpxio enabled device.
    non-STMS device name -> STMS device name
    /dev/rdsk/c2t50060E8010029B33d16 -> /dev/rdsk/c4t4849544143484920373730313036373530303136d0
    /dev/rdsk/c3t50060E8010029B37d16 -> /dev/rdsk/c4t4849544143484920373730313036373530303136d0
    Create the virtual disk using slice 2:
    ldm add-vdsdev /dev/dsk/c4t4849544143484920373730313036373530303136d0s2 77bootdisk@primary-vds01
    Add the virtual disk to the guest domain:
    ldm add-vdisk apps bootdisk@primary-vds01 ldom1
    The virtual disk will be imported as c0d0, which is the whole LUN itself.
    Bind and start ldom1 and install the OS (I used JumpStart); it partitioned the boot disk c0d0 as / 15GB, with swap on the remaining space (10GB).
    When you run format and use the print command on this disk in both the guest and primary domain, you'll see the same slice/size information:
    Part Tag Flag Cylinders Size Blocks
    0 root wm 543 - 1362 15.01GB (820/0/0) 31488000
    1 swap wu 0 - 542 9.94GB (543/0/0) 20851200
    2 backup wm 0 - 1362 24.96GB (1363/0/0) 52339200
    3 unassigned wm 0 0 (0/0/0) 0
    4 unassigned wm 0 0 (0/0/0) 0
    5 unassigned wm 0 0 (0/0/0) 0
    6 unassigned wm 0 0 (0/0/0) 0
    7 unassigned wm 0 0 (0/0/0) 0
    I haven't used DMP, but HDLM (Hitachi Dynamic Link Manager) doesn't seem to be supported by LDOMs, as I cannot make it work :(
    I have no answer to your second question, unfortunately.
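    (Putting the pieces above together, the whole-LUN export plus the bind and start steps would look roughly like this; a hedged sketch with example device, volume and domain names.)
    # ldm add-vdsdev /dev/dsk/<mpxio-device>s2 bootdisk@primary-vds01
    # ldm add-vdisk bootdisk bootdisk@primary-vds01 ldom1
    # ldm bind ldom1
    # ldm start ldom1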

  • Boot archive corrupt on both mirrors after live upgrade to Sparc U8

    I upgraded from U5 to U8 for an Oracle 11g upgrade requirement. I used LU to break my SVM mirror and upgrade one mirror. After the upgrade the U8 mirror was patched, and finally I activated the new BE and ran shutdown -i6. I was presented with the "file loaded is not bootable" error, so I attempted to boot from my original BE. Unfortunately that disk was not bootable either. I eventually got the system booted over the network, where I used bootadm to re-create the boot archive. The problem I am struggling with is how both of my boot archives could have become corrupt, as my CIO is not allowing our upgrade to move forward until I provide an RCA, even though the server is already upgraded and the DB upgrade is completely unrelated. I have a case open with Oracle for this, but they are not making any headway, so I thought I would post and see if anyone has any idea how the boot archives could both have been affected by the live upgrade. The corrupt boot archives are close in size and update time, but they are not identical, so it would seem that I possibly encountered two issues.
    Here is the corrupt boot archive of the PBE, which I saved in case it could possibly help with RCA:
    # mount /dev/dsk/c0t0d0s0 /mnt
    root@wwpxpsdb01 # ls -l /mnt/platform/sun4u/boot_archive.shawn
    -rw-r--r-- 1 root root 1161216 Oct 16 17:28 /mnt/platform/sun4u/boot_archive.shawn
    Here is the corrupt boot archive of the ABE that was upgraded to U8:
    # ls -l /platform/sun4u/boot_archive.shawn
    -rw-r--r-- 1 root root 1169408 Oct 16 17:33 /platform/sun4u/boot_archive.shawn
    I've found on Google that others have had this issue before, but it seems that was due to ZFS root disks; I am using UFS. Any help or suggestions are greatly appreciated.

    If you have an Oracle contract, have you tried the Oracle communities? There is an active Solaris OS booting community, and an upgrade community. Look here:
    https://communities.oracle.com
    and search on "Oracle Solaris" and you should see the following (among others)
    Oracle Solaris File Systems . ..
    Oracle Solaris Installation, Booting and patching
    Oracle Solaris System Administration
    Whatever the method, I hope you get your answer soon!
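    (For reference, the boot archive rebuild mentioned in the question is normally done along these lines; a hedged sketch using the root slice from the example above.)
    ok boot net -s                          (boot single-user over the network)
    # mount /dev/dsk/c0t0d0s0 /mnt          (mount the root slice of the unbootable BE)
    # bootadm update-archive -R /mnt        (rebuild the boot archive under the alternate root)
    # umount /mnt
    # init 6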

  • Solaris 10 5/08 live upgrade only for customers with serviceplan ?

    Live Upgrade fails due to a missing /usr/bin/7za,
    which seems to be installed by adding patch 137322-01 on x86, according to the release notes: http://docs.sun.com/app/docs/doc/820-4078/installbugs-114?l=en&a=view
    But this patch (and the same may be true of the SPARC patch) is only available to customers with a valid service plan.
    Does this mean that from now on it is required to purchase a service plan if you want to run Solaris 10 and use the normal procedures for system upgrades?
    A bit disappointing ...
    Regards
    /Flemming
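    (For reference, a minimal sketch of the check and the workaround, assuming the patch can be obtained; the download location is just an example.)
    # ls -l /usr/bin/7za                    (confirm the binary really is missing)
    # cd /var/tmp && unzip 137322-01.zip
    # patchadd /var/tmp/137322-01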


  • Live upgrade only for zfs root?

    Is live upgrade only for ZFS root on 5/09? Is this true? I have tried to do live upgrades previously and have had no luck, particularly on my old Blade 1000 with an 18 GB drive.

    Reading over this post I see it is a little unclear. I am trying to upgrade a u6 installation that has a zfs root to u7.

  • Going live check for SAP upgrade projects

    Hello,
    I am interested in the "going live check" service and procedures for my upgrade projects. Where can I find documentation? Do I have to pay for it?
    Thanking you in advance, and a happy new year!
    Kind regards
    Nilüfer Çalışkan

    Hi Nilüfer,
    Some input follows:
    Source : http://www.thespot4sap.com/upgrade_guide_v2.pdf
    This service is included as part of your annual maintenance fee.
    The underlying concept of the SAP GoingLive Functional Upgrade Check is
    to ensure smooth operation of your mySAP.com solution by taking action
    proactively, before severe technical problems occur.
    The GoingLive Functional Upgrade Check is made up of three sessions
    (Planning, Analysis and Verification)—two sessions (Planning and Analysis)
    before the upgrade and the other (Verification) after the upgrade.
    Planning Session: This session should be performed as much in advance as
    possible when the upgrade is first being considered.
    Analyses compatibility for the target release with regards to connected
    systems in the Solution Landscape, OS version, DB version, installed Add-
    Ons, Plug-ins, Country Versions, and so on.
    Analysis Session: It is generally performed two months before the
    production upgrade (consider lead time for hardware procurement).
    • The focus is on resource planning and system configuration.
    • There is a high-level hardware plausibility check (this is not a hardware sizing exercise).
    • Potential resource bottlenecks are identified.
    • Necessary changes will be recommended to prepare the system for the productive use of the new release.
    Verification Session: Generally performed four to six weeks after the
    production upgrade, once the recommendations from the analysis session have been
    implemented.
    • Comparison of performance with results prior to upgrade
    • Recommendations on configuration and optimization
    More information and ordering: for more information, see the GoingLive
    Functional Upgrade Check web page at:
    http://service.sap.com/goinglive-fu
    To order a GoingLive Functional Upgrade Check, contact SAP Customer Care
    Center at least eight weeks prior to the planned upgrade of your production
    system.
    For customers of value-added resellers (VARs) such as CBS customers in the
    US, special conditions apply. Contact your VAR.
    Remote Upgrade Service
    Remote Upgrade Service is a technical upgrade that is performed by SAP
    remote consultants. Pricing is based on the complexity of the customer’s
    environment.
    SAP performs the technical upgrade of your system, allowing you to focus on
    the functional and training side of the upgrade. You are still responsible for
    resolving object conflicts.
    See SAP Notes 106447 and 84044.
    The Remote Upgrade Service is an attractive option for CBS sized customers
    with small or non-existent mySAP Technology group.
    For more information see the Remote Upgrade Service web page at:
    http://service.sap.com/remoteupgrade
    Resources
    The following list shows some of the resources you can use as you prepare for
    your upgrade of SAP. These resources include:
    • Your installation partner
    • Customers and others you have networked with at customer functions and
    conferences, such as ASUG
    Hope this Info helps you.
    Advanced New Year wishes.
    Br,
    Sri
    Award points for helpful answers

  • Live Upgrade on Solaris 11

    Following my posts on LU on Solaris 10, I now need to do the same on Solaris 11.2.  The two machines are brand new - not being used in anger.
    I saw the useful blog https://blogs.oracle.com/oem/entry/using_ops_center_to_update
    One of our machines has the following:
    beadm list
    BE                 Active Mountpoint Space  Policy Created
    solaris            -      -          12.99M static 2014-06-21 05:39
    solaris-1          NR     /          25.76G static 2014-09-10 12:00
    solaris-1-backup-1 -      -          323.0K static 2014-09-29 11:45
    solaris-1-backup-2 -      -          172.0K static 2014-11-12 12:26
    When asked, the person who looked after the machine was not sure how these were created. I would like to know how I can ensure that the two boot environments are the same and, if that is the case, whether one can be deleted; also whether the backups are required and whether they are auto-generated. Essentially, I am a total newbie on Live Upgrade, but I see it as the only way to apply patches and other packages.
    Regards
    SC

    Hi,
    please note that Live Upgrade is obsolete and isn't used on Solaris 11 and above. Solaris 11
    uses pkg(1) and beadm(1M) to manage boot environments.
    The output from beadm list shows the currently existing boot environments. Those environments
    are created by pkg(1) when updating the system or in certain situations when destructive package
    operations take place (the "-backup" environments).
    Please see
    Updating to Oracle Solaris 11.2
    for more information on upgrading Solaris.
    In your example output the solaris boot environment is most likely the result of an
    initial installation. Then someone later updated the environment resulting in a new
    environment solaris-1 (which is also the current boot environment). Some pkg
    operations then caused some backup boot environments to be created. This is
    done to make sure that a fall-back exists should there be a problem with the newly
    installed packages. If you don't need to go back to that stage you can also remove them.
    It is recommended to always keep a known-good boot environment around just in case.
    If you are happy with the current one you can ask beadm to create such an environment
    like this:
    # beadm create my-known-good-be
    Please note: When using the pkg command to update the system you can
    also specify a custom name for the new boot environment, e.g.
    # pkg update --be-name=s11.2
    would name the new environment s11.2 instead of some generic name like "solaris-X".
    Regards,
      Ronald
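    (To round out the advice above: removing one of the unneeded backup BEs, using a name from the listing in the question; a hedged sketch.)
    # beadm list
    # beadm destroy solaris-1-backup-1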

  • DiskSuite and Live Upgrade 2.0

    I have two Solaris 7 boxes running DiskSuite to mirror the O/S disk onto another drive.
    I need to upgrade to Solaris 8. In the past I have used Live Upgrade to do so, when I had enough free disk space to partition an existing disk or to use an unused disk for the Solaris 8 system files.
    In this case, I do not have sufficient free space on the boot disk. So, what is the best approach? It seems that I would have to:
    1. unmirror the file system
    2. install Solaris 8 onto the old mirror drive using LU 2.0
    3. make the old mirror drive the boot drive
    4. re-establish mirroring, being sure that it goes the right way from the Solaris 8 disk to the old boot disk
    Comments, suggestions?
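    For what it's worth, I imagine steps 1 and 2 would look roughly like this (a hedged sketch; the metadevice, slice and BE names are only examples, and the mirror would be re-created afterwards with metattach once the Solaris 8 BE is proven):
    # metadetach d10 d12                              (detach one submirror of the root metadevice)
    # metaclear d12
    # lucreate -n sol8 -m /:/dev/dsk/c0t1d0s0:ufs     (build the new BE on the freed slice)
    # luupgrade -u -n sol8 -s /path/to/solaris8/media
    # luactivate sol8
    # init 6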

    I recently built a system (specs below) and installed this card (MSI GF4 Ti4200 VTD8X MS8894, 128MB DDR), and when I try to use Live Update 2 (version 3.33.000, from the CD that came with the card), I get the same message:
    "Warning!!! Your Display Card does not support MSI Live Update 2 function.  Note: MSI Live Update 2 supports the Display Cards of MSI only."
    I'm using the drivers/BIOS that came on the CD: Driver version 6.13.10.4107, BIOS version 4.28.20.05.11.  I see on the nVidia site that they have the 4109 drivers out now, should I try those?  ?(
    I have also made sure to do the suggested modifications to IE (and I don't have PC-cillin installed):
    "Note: In order to operate this application properly, please note the following suggests.
    -Set the IE security setting 'Download signed ActiveX controls' to [Enable] or [Prompt]. (System default is [Prompt]).
    -Disable 'WebTrap' of PC-cillin(R) or any web based anti-virus application when executing MSITM Live Update 2TM.
    -Update Microsoft® Windows® Installer"
    I downloaded a newer version of LiveUpdate (3.35.000), and installed it (after completely uninstalling the old version), and got the same results.  Nothing on my system is currently overclocked.
    Help!
    System specs:
    -Soyo SY-KT400 DRAGON Ultra (Platinum Edition) with latest BIOS & Chipset Drivers
    -AMD Athlon XP Thoroughbred 2100+
    -MSI GF4 Ti4200 VTD8X (MS-8894)
    -WD Caviar Special Edition 80 GB HDD, 8 MB Cache
    -512 MB Crucial PC2700 DDR (one stick, in DIMM #1)
    -TDK 40/12/48 CD R/RW
    -Daewoo 905DF Dynaflat 19" Monitor
    -Windows XP Home Edition, SP1/all other updates current
    -On-Board CMedia 6-channel audio
    -On-Board VIA 10/100 Ethernet
    -Altec-Lansing ATP3 Speakers

  • Sparse zones live upgrade

    Hi
    I have a problem with live upgrade from Solaris 10 9/10 to 8/11 on a sparse zone.
    The installation in the global zone is good, but the sparse zone cannot boot because the zonepath changed.
    bash-3.2# zoneadm list -cv
    ID NAME STATUS PATH BRAND IP
    0 global running / native shared
    - pbspfox1 installed /zoneprod/pbspfox1-s10u10-baseline native excl
    the initial path is /zoneprod/pbspfox1
    #zfs list
    zoneprod/pbspfox1@s10u10-baseline 22.4M - 2.18G -
    zoneprod/pbspfox1-s10u10-baseline
    # luactivate zoneprod/pbspfox1@s10u10-baseline
    ERROR: The boot environment Name <zoneprod/pbspfox1@s10u10-baseline> is too long - a BE name may contain no more than 30 characters.
    ERROR: Unable to activate boot environment <zoneprod/pbspfox1@s10u10-baseline>.
    How do I upgrade pbspfox1?
    Please help
    Walter

    I'm not exactly sure what happened here, but the zone name doesn't change. If the zone path is wrong, I'd try using zonecfg to change the zone path to the proper value and then boot the zone normally.
    zonecfg -z pbspfox1
    set zonepath=/zone/pbspfox1 (or whatever is the proper path)
    verify
    commit
    exit
    zoneadm -z pbspfox1 boot
    If the zone didn't get properly updated, you might be able to update it by detaching the zone:
    zoneadm -z pbspfox1 detach
    and doing an update reattach
    zoneadm -z pbspfox1 attach -u
    Disclaimer: All of the above was done from memory without testing, I may have gotten some things wrong.
    Hopefully this will help. I've had similar issues in the past but I'm not sure I've had exactly the same problem, so I can't tell for sure whether this will help you or not. It is what I'd try. Of course, try to avoid getting yourself into a position where you can't back something out if necessary. This kind of thing can be messy and may require more than one try. If I remember correctly, there were some issues with the live upgrade software as shipped with Solaris 10 8/11. I'd get it patched up to current levels ASAP to avoid additional issues.

  • Solaris 10 update 9 - live upgrade issues with ZFS

    Hi
    After doing a live upgrade from Solaris 10 update 8 to Solaris 10 update 9 the alternate boot environment I created is no longer bootable.
    I have completed all the pre-upgrade steps like:
    - Installing the latest version of live upgrade from the update 9 ISO.
    - Create and test the new boot environment.
    - Create a sysidcfg file used by the live upgrade that has auto_reg=disable in it.
    There are also no errors while creating the boot environment or even when activating it.
    Here is the error I get:
    SunOS Release 5.10 Version Generic_14489-06 64-bit
    Copyright (c) 1983, 2010, Oracle and/or its affiliates. All rights reserved.
    NOTICE: zfs_parse_bootfs: error 22
    Cannot mount root on altroot/37 fstype zfs
    *panic[cpu0]/thread=fffffffffbc28040: vfs mountroot: cannot mount root*
    ffffffffffbc4a8d0 genunix:main+107 ()
    Skipping system dump - no dump device configured
    Does anyone know how I can fix this?
    Edited by: user12099270 on 02-Feb-2011 04:49
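    (For reference, the pre-upgrade steps listed above usually look something like this; a hedged sketch where the media path and BE name are placeholders, and the -k auto-registration option depends on the Live Upgrade version shipped with update 9.)
    # pkgrm SUNWlucfg SUNWluu SUNWlur                                  (remove the old Live Upgrade packages)
    # pkgadd -d /cdrom/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu    (add the update 9 versions from the media)
    # echo "auto_reg=disable" > /var/tmp/no-autoreg
    # lucreate -n s10u9
    # luupgrade -u -n s10u9 -s /cdrom -k /var/tmp/no-autoreg
    # luactivate s10u9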

    Found the culprit... *142910-17*... breaks it
    System has findroot enabled GRUB
    Updating GRUB menu default setting
    GRUB menu default setting is unaffected
    Saving existing file </boot/grub/menu.lst> in top level dataset for BE <s10x_u8wos_08a> as <mount-point>//boot/grub/menu.lst.prev.
    File </etc/lu/GRUB_backup_menu> propagation successful
    Successfully deleted entry from GRUB menu
    Validating the contents of the media </admin/x86/Patches/10_x86_Recommended/patches>.
    The media contains 204 software patches that can be added.
    Mounting the BE <s10x_u8wos_08a_Jan2011>.
    Adding patches to the BE <s10x_u8wos_08a_Jan2011>.
    Validating patches...
    Loading patches installed on the system...
    Done!
    Loading patches requested to install.
    Done!
    The following requested patches have packages not installed on the system
    Package SUNWio-tools from directory SUNWio-tools in patch 142910-17 is not installed on the system. Changes for package SUNWio-tools will not be applied to the system.
    Package SUNWzoneu from directory SUNWzoneu in patch 142910-17 is not installed on the system. Changes for package SUNWzoneu will not be applied to the system.
    Package SUNWpsm-ipp from directory SUNWpsm-ipp in patch 142910-17 is not installed on the system. Changes for package SUNWpsm-ipp will not be applied to the system.
    Package SUNWsshdu from directory SUNWsshdu in patch 142910-17 is not installed on the system. Changes for package SUNWsshdu will not be applied to the system.
    Package SUNWsacom from directory SUNWsacom in patch 142910-17 is not installed on the system. Changes for package SUNWsacom will not be applied to the system.
    Package SUNWmdbr from directory SUNWmdbr in patch 142910-17 is not installed on the system. Changes for package SUNWmdbr will not be applied to the system.
    Package SUNWopenssl-commands from directory SUNWopenssl-commands in patch 142910-17 is not installed on the system. Changes for package SUNWopenssl-commands will not be applied to the system.
    Package SUNWsshdr from directory SUNWsshdr in patch 142910-17 is not installed on the system. Changes for package SUNWsshdr will not be applied to the system.
    Package SUNWsshcu from directory SUNWsshcu in patch 142910-17 is not installed on the system. Changes for package SUNWsshcu will not be applied to the system.
    Package SUNWsshu from directory SUNWsshu in patch 142910-17 is not installed on the system. Changes for package SUNWsshu will not be applied to the system.
    Package SUNWgrubS from directory SUNWgrubS in patch 142910-17 is not installed on the system. Changes for package SUNWgrubS will not be applied to the system.
    Package SUNWzoner from directory SUNWzoner in patch 142910-17 is not installed on the system. Changes for package SUNWzoner will not be applied to the system.
    Package SUNWmdb from directory SUNWmdb in patch 142910-17 is not installed on the system. Changes for package SUNWmdb will not be applied to the system.
    Package SUNWpool from directory SUNWpool in patch 142910-17 is not installed on the system. Changes for package SUNWpool will not be applied to the system.
    Package SUNWudfr from directory SUNWudfr in patch 142910-17 is not installed on the system. Changes for package SUNWudfr will not be applied to the system.
    Package SUNWxcu4 from directory SUNWxcu4 in patch 142910-17 is not installed on the system. Changes for package SUNWxcu4 will not be applied to the system.
    Package SUNWarc from directory SUNWarc in patch 142910-17 is not installed on the system. Changes for package SUNWarc will not be applied to the system.
    Package SUNWtftp from directory SUNWtftp in patch 142910-17 is not installed on the system. Changes for package SUNWtftp will not be applied to the system.
    Package SUNWaccu from directory SUNWaccu in patch 142910-17 is not installed on the system. Changes for package SUNWaccu will not be applied to the system.
    Package SUNWppm from directory SUNWppm in patch 142910-17 is not installed on the system. Changes for package SUNWppm will not be applied to the system.
    Package SUNWtoo from directory SUNWtoo in patch 142910-17 is not installed on the system. Changes for package SUNWtoo will not be applied to the system.
    Package SUNWcpc from directory SUNWcpc.i in patch 142910-17 is not installed on the system. Changes for package SUNWcpc will not be applied to the system.
    Package SUNWftdur from directory SUNWftdur in patch 142910-17 is not installed on the system. Changes for package SUNWftdur will not be applied to the system.
    Package SUNWypr from directory SUNWypr in patch 142910-17 is not installed on the system. Changes for package SUNWypr will not be applied to the system.
    Package SUNWlxr from directory SUNWlxr in patch 142910-17 is not installed on the system. Changes for package SUNWlxr will not be applied to the system.
    Package SUNWdcar from directory SUNWdcar in patch 142910-17 is not installed on the system. Changes for package SUNWdcar will not be applied to the system.
    Package SUNWnfssu from directory SUNWnfssu in patch 142910-17 is not installed on the system. Changes for package SUNWnfssu will not be applied to the system.
    Package SUNWpcmem from directory SUNWpcmem in patch 142910-17 is not installed on the system. Changes for package SUNWpcmem will not be applied to the system.
    Package SUNWlxu from directory SUNWlxu in patch 142910-17 is not installed on the system. Changes for package SUNWlxu will not be applied to the system.
    Package SUNWxcu6 from directory SUNWxcu6 in patch 142910-17 is not installed on the system. Changes for package SUNWxcu6 will not be applied to the system.
    Package SUNWpcmci from directory SUNWpcmci in patch 142910-17 is not installed on the system. Changes for package SUNWpcmci will not be applied to the system.
    Package SUNWarcr from directory SUNWarcr in patch 142910-17 is not installed on the system. Changes for package SUNWarcr will not be applied to the system.
    Package SUNWscpu from directory SUNWscpu in patch 142910-17 is not installed on the system. Changes for package SUNWscpu will not be applied to the system.
    Package SUNWcpcu from directory SUNWcpcu in patch 142910-17 is not installed on the system. Changes for package SUNWcpcu will not be applied to the system.
    Package SUNWopenssl-include from directory SUNWopenssl-include in patch 142910-17 is not installed on the system. Changes for package SUNWopenssl-include will not be applied to the system.
    Package SUNWdtrp from directory SUNWdtrp in patch 142910-17 is not installed on the system. Changes for package SUNWdtrp will not be applied to the system.
    Package SUNWhermon from directory SUNWhermon in patch 142910-17 is not installed on the system. Changes for package SUNWhermon will not be applied to the system.
    Package SUNWpsm-lpd from directory SUNWpsm-lpd in patch 142910-17 is not installed on the system. Changes for package SUNWpsm-lpd will not be applied to the system.
    Package SUNWdtrc from directory SUNWdtrc in patch 142910-17 is not installed on the system. Changes for package SUNWdtrc will not be applied to the system.
    Package SUNWhea from directory SUNWhea in patch 142910-17 is not installed on the system. Changes for package SUNWhea will not be applied to the system.
    Package SUNW1394 from directory SUNW1394 in patch 142910-17 is not installed on the system. Changes for package SUNW1394 will not be applied to the system.
    Package SUNWrds from directory SUNWrds in patch 142910-17 is not installed on the system. Changes for package SUNWrds will not be applied to the system.
    Package SUNWnfsskr from directory SUNWnfsskr in patch 142910-17 is not installed on the system. Changes for package SUNWnfsskr will not be applied to the system.
    Package SUNWudf from directory SUNWudf in patch 142910-17 is not installed on the system. Changes for package SUNWudf will not be applied to the system.
    Package SUNWixgb from directory SUNWixgb in patch 142910-17 is not installed on the system. Changes for package SUNWixgb will not be applied to the system.
    Checking patches that you specified for installation.
    Done!
    Approved patches will be installed in this order:
    142910-17
    Checking installed patches...
    Executing prepatch script...
    Installing patch packages...
    Patch 142910-17 has been successfully installed.
    See /a/var/sadm/patch/142910-17/log for details
    Executing postpatch script...
    Creating GRUB menu in /a
    Installing grub on /dev/rdsk/c2t0d0s0
    stage1 written to partition 0 sector 0 (abs 16065)
    stage2 written to partition 0, 273 sectors starting at 50 (abs 16115)
    Patch packages installed:
    BRCMbnx
    SUNWaac
    SUNWahci
    SUNWamd8111s
    SUNWcakr
    SUNWckr
    SUNWcry
    SUNWcryr
    SUNWcsd
    SUNWcsl
    SUNWcslr
    SUNWcsr
    SUNWcsu
    SUNWesu
    SUNWfmd
    SUNWfmdr
    SUNWgrub
    SUNWhxge
    SUNWib
    SUNWigb
    SUNWintgige
    SUNWipoib
    SUNWixgbe
    SUNWmdr
    SUNWmegasas
    SUNWmptsas
    SUNWmrsas
    SUNWmv88sx
    SUNWnfsckr
    SUNWnfscr
    SUNWnfscu
    SUNWnge
    SUNWnisu
    SUNWntxn
    SUNWnv-sata
    SUNWnxge
    SUNWopenssl-libraries
    SUNWos86r
    SUNWpapi
    SUNWpcu
    SUNWpiclu
    SUNWpsdcr
    SUNWpsdir
    SUNWpsu
    SUNWrge
    SUNWrpcib
    SUNWrsgk
    SUNWses
    SUNWsmapi
    SUNWsndmr
    SUNWsndmu
    SUNWtavor
    SUNWudapltu
    SUNWusb
    SUNWxge
    SUNWxvmpv
    SUNWzfskr
    SUNWzfsr
    SUNWzfsu
    PBE GRUB has no capability information.
    PBE GRUB has no versioning information.
    ABE GRUB is newer than PBE GRUB. Updating GRUB.
    GRUB update was successfull.
    Unmounting the BE <s10x_u8wos_08a_Jan2011>.
    The patch add to the BE <s10x_u8wos_08a_Jan2011> completed.
    Still need to know how to resolve it though...
