Solaris Live Upgrade with NGZ

Hi
I am trying to perform a Live Upgrade on my two servers. Both have non-global zones (NGZ) installed, and those NGZ sit on a different zpool than the rpool, on an external disk.
I have installed all the latest patches required for LU to work properly, but when I run lucreate I start having problems... (new_s10BE is the new BE I'm creating)
On my 1st Server:
I have a global zone and one NGZ named mddtri. This is the error I am getting:
ERROR: unable to mount zone <mddtri> in </.alt.tmp.b-VBb.mnt>.
zoneadm: zone 'mddtri': zone root /zoneroots/mddtri/root already in use by zone mddtri
zoneadm: zone 'mddtri': call to zoneadm failed
ERROR: unable to mount non-global zones of ABE: cannot make bootable
ERROR: cannot unmount </.alt.tmp.b-VBb.mnt/var/run>
ERROR: unable to make boot environment <new_s10BE> bootable
On my 2nd Server:
I have a global zone and 10 NGZ. This is the error I am getting:
WARNING: Directory </zoneroots/zone1> zone <global> lies on a filesystem shared between BEs, remapping path to </zoneroots/zone1/zone1-new_s10BE>
WARNING: Device <zone1> is shared between BEs, remapping to <zone1-new_s10BE>
(This happens for every running NGZ.)
Duplicating ZFS datasets from PBE to ABE.
ERROR: The dataset <zone1-new_s10BE> is on top of ZFS pool. Unable to clone. Please migrate the zone to dedicated dataset.
ERROR: Unable to create a duplicate of <zone1> dataset in PBE. <zone1-new_s10BE> dataset in ABE already exists.
Reverting state of zones in PBE <old_s10BE>
ERROR: Unable to copy file system from boot environment <old_s10BE> to BE <new_s10BE>
ERROR: Unable to populate file systems from boot environment <new_s10BE>
Help, I need to sort this out a.s.a.p!
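For reference, a minimal sketch of the kind of lucreate run described above (the BE name is the one from this post; the pool name "zonepool" and the zone layout are assumptions, and the exact options depend on your setup):
# Check the zones and their datasets before creating the ABE
zoneadm list -cv            # zone states as seen from the global zone
zfs list -r zonepool        # "zonepool" is a placeholder for the non-root pool holding the NGZ
# On a ZFS root, lucreate clones the current BE (zones included) into the new BE
lucreate -n new_s10BE
lustatus                    # verify both BEs afterwards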

Hi,
I have the same problem with an attached A5200 with mirrored disks (Solaris 9, Solaris Volume Manager). The "critical" partitions are supposed to be copied to a second system disk, while the mirrored partitions should be shared.
Here is my lucreate script:
#!/bin/sh
# Create a new boot environment "disk0" on c2t0d0; log and error output go to ${Logdir}
Logdir=/usr/local/LUscripts/logs
if [ ! -d ${Logdir} ]
then
    echo "${Logdir} does not exist"
    exit 1
fi
/usr/sbin/lucreate \
-l ${Logdir}/$0.log \
-o ${Logdir}/$0.error \
-m /:/dev/dsk/c2t0d0s0:ufs \
-m /var:/dev/dsk/c2t0d0s3:ufs \
-m /opt:/dev/dsk/c2t0d0s4:ufs \
-m -:/dev/dsk/c2t0d0s1:swap \
-n disk0
And here is the output
root@ahbgbld800x:/usr/local/LUscripts >./lucreate_disk0.sh
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
INFORMATION: Unable to determine size or capacity of slice </dev/md/RAID-INT/dsk/d0>.
ERROR: An error occurred during creation of configuration file.
ERROR: Cannot create the internal configuration file for the current boot environment <disk3>.
Assertion failed: *ptrKey == (unsigned long long)_lu_malloc, file lu_mem.c, line 362
Abort - core dumped
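Before rerunning lucreate it may be worth checking the slice the message complains about; a quick read-only sanity check with standard SVM tools (assuming RAID-INT in the device path is a named disk set, which is a guess from the path alone):
metastat -s RAID-INT -p            # does SVM still know about d0, and what is it built on?
prtvtoc /dev/md/RAID-INT/rdsk/d0   # can the label/size be read? lucreate could not determine the capacity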

Similar Messages

  • Live Upgrade with Zones - still not working ?

    Hi Guys,
    I'm trying to do a Live Upgrade from Solaris 10 update 3 to update 4 with a non-global zone installed. It's driving me crazy now.
    I did everything as described in the documentation: installed SUNWlucfg and supposedly updated SUNWluu and SUNWlur (supposedly, because they are exactly the same as in update 3), both from packages and with the script from the update 4 DVD, and installed all patches mentioned in 72099. The lucreate process still complains about missing patches, and I've checked five times whether they're installed. They are. It doesn't even let me create a second BE. Once I detached the zone, everything went smoothly, but I had the impression that Live Upgrade with zones would work in update 4.
    It did create a second BE before SUNWlucfg was installed, but failed at the update stage with exactly the same message: install patches according to 72099. After installing SUNWlucfg, the Live Upgrade process fails instantly; that's real progress, I must admit.
    Is it still "mission impossible" to Live Upgrade with non-global zones installed? Or have I missed something?
    Any ideas or success stories are greatly appreciated. Thanks.

    I upgraded from u3 to u5.
    The upgrade went fine, the zones boot up but there are problems.
    sshd doesn't work
    svcs -xv prints out this:
    svc:/network/rpc/gss:default (Generic Security Service)
    State: uninitialized since Fri Apr 18 09:54:33 2008
    Reason: Restarter svc:/network/inetd:default is not running.
    See: http://sun.com/msg/SMF-8000-5H
    See: man -M /usr/share/man -s 1M gssd
    Impact: 8 dependent services are not running:
    svc:/network/nfs/client:default
    svc:/system/filesystem/autofs:default
    svc:/system/system-log:default
    svc:/milestone/multi-user:default
    svc:/system/webconsole:console
    svc:/milestone/multi-user-server:default
    svc:/network/smtp:sendmail
    svc:/network/ssh:default
    svc:/network/inetd:default (inetd)
    State: maintenance since Fri Apr 18 09:54:41 2008
    Reason: Restarting too quickly.
    See: http://sun.com/msg/SMF-8000-L5
    See: man -M /usr/share/man -s 1M inetd
    See: /var/svc/log/network-inetd:default.log
    Impact: This service is not running.
    It seems as though the container was not upgraded.
    more /etc/release in the container shows this:
    Solaris 10 11/06 s10s_u3wos_10 SPARC
    Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
    Use is subject to license terms.
    Assembled 14 November 2006
    How do I get it to fix the inetd service?
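    If inetd is only stuck in maintenance after the reboot, the usual SMF recovery steps are worth trying first (standard svcs/svcadm usage; whether this is enough depends on why the zone did not get upgraded):
    svcs -xv svc:/network/inetd:default          # why is it in maintenance?
    tail /var/svc/log/network-inetd:default.log  # the log referenced in the output above
    svcadm clear svc:/network/inetd:default      # clear the maintenance state
    svcs -xv                                     # confirm the dependent services come back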

  • Solaris Live Upgrade

    I didn't find a specific place for this topic, so maybe you can move it.
    I read a lot about Solaris Live Upgrade, but I would like to know whether I can copy a BE from one computer to another over the network via NFS.
    Or whether there is other software that can make an image of my system and deploy it to the other computers (they are all identical), so I won't need to configure each of them.
    Thanks for any help.

    Thanks for answering.
    I read about JumpStart installation, but is it possible to install a customized image, not just with the partitioning, packages and patches selected? I want to replicate a whole configured system: for example the DNS, IP, and user account settings.
    So far I have only found a way to share an installation image with the selected packages and patches, but you still need to configure the package files on the clients afterwards.
    Is it possible?
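    One common way to clone a fully configured system is a Solaris Flash archive installed via JumpStart; a minimal sketch, assuming a Solaris 10 master system and an NFS share for the archive (names and paths are placeholders):
    # On the configured master system: capture the whole system into a flash archive
    flarcreate -n golden-image -x /var/tmp /var/tmp/golden-image.flar
    # JumpStart profile entry on the install server that deploys the archive
    #   install_type      flash_install
    #   archive_location  nfs install-server:/export/flars/golden-image.flar
    #   partitioning      explicit
    Host-specific identity (hostname, IP, name service) is normally supplied at install time via sysidcfg rather than baked into the archive.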

  • Solaris 10 Live Upgrade with Veritas Volume Manager 4.1

    What is the latest version of Live Upgrade?
    I need to upgrade our systems from Solaris 8 to Solaris 10. All our systems have Veritas VxVM 4.1, with the OS disks encapsulated and mirrored.
    What's the best way to do the Live Upgrade? Does anyone have clean documents for this?

    There are more things that you need to do.
    Read the Veritas install guide; it has a pretty good section on what needs to be done.
    http://www.sun.com/products-n-solutions/hardware/docs/Software/Storage_Software/VERITAS_Volume_Manager/

  • Sun Live Upgrade with local zones Solaris 10

    I have an M800 server running the global root (/) file system on a local disk and 6 local zones on another local disk. I am running Solaris 10 8/07.
    I used Live Upgrade to patch the system and created a new BE with lucreate. Both root file systems are mirrored (RAID-1).
    When I ran lucreate, it copied all 6 local zone root file systems to the global root file system and failed with not enough space.
    What is the best procedure for using LU with local zones?
    Note: I have used LU with the global zone only, and it worked without any problem.
    regards,

    I have been trying to use luupgrade for Solaris10 on Sparc, 05/09 -> 10/09.
    lucreate is successful, but luactivate directs me to install 'the rest of the packages' in order to make the BE stable enough to activate. I tried to find the packages indicated, but found only "virtual packages" which contain only a pkgmap.
    I installed upgrade 6 on a spare disk to make sure my u7 installation was not defective, but got similar results.
    I got beyond luactivate on x86 a while ago, but had other snags which I left unattended.
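    Going back to the original question (all six zone roots being copied into the ABE's root file system): one commonly suggested approach is to give the zone file system its own -m mapping so it gets a separate slice in the ABE instead of landing under /. A rough sketch only, with placeholder metadevices and assuming the zones live under /zones:
    lucreate -n patchBE \
      -m /:/dev/md/dsk/d110:ufs \
      -m /zones:/dev/md/dsk/d120:ufs
    lustatus
    Whether this fits depends on the actual disk layout, so treat it as a starting point rather than a recipe.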

  • Sun cluster 3.20, live upgrade with non-global zones

    I have a two-node cluster with 4 HA-container resource groups holding 4 non-global zones, running Sol 10 8/07 (U4), which I would like to upgrade to Sol 10 10/08 (U6). The root filesystem of the non-global zones is ZFS and on shared SAN disks so that it can be failed over.
    For the Live Upgrade I need to convert the root ZFS to UFS, which should be straightforward.
    The tricky part is going to be performing a Live Upgrade on the non-global zones, as their root fs is on the shared disk. I have a free internal disk on each of the nodes for the ABE. But when I run lucreate, is it going to put the ABE of the zones on the internal disk as well, or can I specify the ABE location for the non-global zones? Ideally I want this to be on the shared disk.
    Any assistance gratefully received

    Hi,
    I am not sure whether this document:
    http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach
    has been on the list of docs you found already.
    If you click on the download link, it won't work. But if you use the Tools icon in the upper right-hand corner and click on attachments, you'll find the document. Its content is solely based on configurations with ZFS as root and zone root, but should have valuable information for other deployments as well.
    Regards
    Hartmut

  • Solaris 10 upgrade with mirrored OS (meta-device) partition

    I am going to upgrade my host from Solaris 10 5/08 (U5) to Solaris 10 10/09 (U8). The installation media is a CD-ROM.
    On my host, I used Solaris Volume Manager (SVM) to mirror /, /var, and swap.
    My Question is:
    Before the upgrade, do I just need to break one side of the mirrors (which would be sufficient for the upgrade process), or should I convert them back to physical devices?
    How should I proceed?
    Thanks.

    chewr wrote:
    "Firstly, thanks for your answer. I hope that it will work as you said. But I am thinking that the OS upgrade process will boot from the CD-ROM and will only look for physical devices to do the upgrade. That's why I am concerned whether the OS booted from the CD-ROM will be able to see the metadevices."
    Solaris 10 boot media should be able to see and recognize the metadevices. Previous versions could not.
    chewr wrote:
    "On the other hand, if I need to remove all metadevices for the upgrade, will the data be safe and intact on the physical devices when the OS is booted from the CD-ROM?"
    Safe? If you do reconfigure the system to not use any SVM devices for the OS, then yes, the data is still there. I'm not sure what you're asking, or how the data might be at risk.
    Darren
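    Whichever way you go, it is worth confirming that the SVM mirrors and state database replicas are healthy before booting the upgrade media; a quick read-only check with standard SVM commands:
    metadb -i    # state database replicas should all be healthy and spread across both disks
    metastat     # every mirror/submirror should report State: Okay before you upgrade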

  • Live Upgrade with VDI

    Is there any reason why LU will not work with VDI and its built-in MySQL cluster?
    I attempted to LU a VDI 3.2.1 environment, and after rebooting a secondary node the service would not come back up. Unconfiguring and reconfiguring brought everything back to normal. Was this just an anomaly, or is there a procedure for using LU with VDI?

    Well LU does work fine with VDI. I upgraded a second VDI cluster without problems.

  • Live upgrade with zones

    Hi,
    While trying to create a new boot environment in Solaris 10 update 6, I'm getting the following errors for my zone:
    Updating compare databases on boot environment <zfsBE>.
    Making boot environment <zfsBE> bootable.
    ERROR: unable to mount zones:
    zoneadm: zone 'OTM1_wa_lab': "/usr/lib/fs/lofs/mount -o ro /.alt.tmp.b-AKc.mnt/swdump /zones/app/OTM1_wa_lab-zfsBE/lu/a/swdump" failed with exit code 33
    zoneadm: zone 'OTM1_wa_lab': call to zoneadmd failed
    ERROR: unable to mount zone <OTM1_wa_lab> in </.alt.tmp.b-AKc.mnt>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: cannot mount boot environment by icf file </etc/lu/ICF.1>
    ERROR: Unable to remount ABE <zfsBE>: cannot make ABE bootable
    ERROR: no boot environment is mounted on root device <rootpool/ROOT/zfsBE>
    Making the ABE <zfsBE> bootable FAILED.
    Although my zone is running fine
    zoneadm -z OTM1_wa_lab list -v
    ID NAME STATUS PATH BRAND IP
    3 OTM1_wa_lab running /zones/app/OTM1_wa_lab native shared
    Does anybody know what could be the reason for this?

    http://opensolaris.org/jive/thread.jspa?messageID=322728
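    The linked thread describes a similar failure. The usual workaround (a sketch only, assuming the swdump mount in the error comes from a lofs fs resource in the zone configuration) is to drop that fs resource before lucreate and add it back afterwards:
    zonecfg -z OTM1_wa_lab info fs                                   # list fs resources; look for the swdump entry
    zonecfg -z OTM1_wa_lab "remove fs dir=/swdump; verify; commit"
    # ... run lucreate / luupgrade / luactivate ...
    zonecfg -z OTM1_wa_lab "add fs; set dir=/swdump; set special=/swdump; set type=lofs; end; verify; commit"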

  • Live Upgrade fails on cluster node with zfs root zones

    We are having issues using Live Upgrade in the following environment:
    -UFS root
    -ZFS zone root
    -Zones are not under cluster control
    -System is fully up to date for patching
    We also use Live Upgrade with the exact same system configuration on other nodes, except the zones are UFS root, and there Live Upgrade works fine.
    Here is the output of a Live Upgrade:
    bash-3.2# lucreate -n sol10-20110505 -m /:/dev/md/dsk/d302:ufs,mirror -m /:/dev/md/dsk/d320:detach,attach,preserve -m /var:/dev/md/dsk/d303:ufs,mirror -m /var:/dev/md/dsk/d323:detach,attach,preserve
    Determining types of file systems supported
    Validating file system requests
    The device name </dev/md/dsk/d302> expands to device path </dev/md/dsk/d302>
    The device name </dev/md/dsk/d303> expands to device path </dev/md/dsk/d303>
    Preparing logical storage devices
    Preparing physical storage devices
    Configuring physical storage devices
    Configuring logical storage devices
    Analyzing system configuration.
    Comparing source boot environment <sol10> file systems with the file
    system(s) you specified for the new boot environment. Determining which
    file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    The device </dev/dsk/c0t1d0s0> is not a root device for any boot environment; cannot get BE ID.
    Creating configuration for boot environment <sol10-20110505>.
    Source boot environment is <sol10>.
    Creating boot environment <sol10-20110505>.
    Creating file systems on boot environment <sol10-20110505>.
    Preserving <ufs> file system for </> on </dev/md/dsk/d302>.
    Preserving <ufs> file system for </var> on </dev/md/dsk/d303>.
    Mounting file systems for boot environment <sol10-20110505>.
    Calculating required sizes of file systems for boot environment <sol10-20110505>.
    Populating file systems on boot environment <sol10-20110505>.
    Checking selection integrity.
    Integrity check OK.
    Preserving contents of mount point </>.
    Preserving contents of mount point </var>.
    Copying file systems that have not been preserved.
    Creating shared file system mount points.
    Creating snapshot for <data/zones/img1> on <data/zones/img1@sol10-20110505>.
    Creating clone for <data/zones/img1@sol10-20110505> on <data/zones/img1-sol10-20110505>.
    Creating snapshot for <data/zones/jdb3> on <data/zones/jdb3@sol10-20110505>.
    Creating clone for <data/zones/jdb3@sol10-20110505> on <data/zones/jdb3-sol10-20110505>.
    Creating snapshot for <data/zones/posdb5> on <data/zones/posdb5@sol10-20110505>.
    Creating clone for <data/zones/posdb5@sol10-20110505> on <data/zones/posdb5-sol10-20110505>.
    Creating snapshot for <data/zones/geodb3> on <data/zones/geodb3@sol10-20110505>.
    Creating clone for <data/zones/geodb3@sol10-20110505> on <data/zones/geodb3-sol10-20110505>.
    Creating snapshot for <data/zones/dbs9> on <data/zones/dbs9@sol10-20110505>.
    Creating clone for <data/zones/dbs9@sol10-20110505> on <data/zones/dbs9-sol10-20110505>.
    Creating snapshot for <data/zones/dbs17> on <data/zones/dbs17@sol10-20110505>.
    Creating clone for <data/zones/dbs17@sol10-20110505> on <data/zones/dbs17-sol10-20110505>.
    WARNING: The file </tmp/.liveupgrade.4474.7726/.lucopy.errors> contains a
    list of <2> potential problems (issues) that were encountered while
    populating boot environment <sol10-20110505>.
    INFORMATION: You must review the issues listed in
    </tmp/.liveupgrade.4474.7726/.lucopy.errors> and determine if any must be
    resolved. In general, you can ignore warnings about files that were
    skipped because they did not exist or could not be opened. You cannot
    ignore errors such as directories or files that could not be created, or
    file systems running out of disk space. You must manually resolve any such
    problems before you activate boot environment <sol10-20110505>.
    Creating compare databases for boot environment <sol10-20110505>.
    Creating compare database for file system </var>.
    Creating compare database for file system </>.
    Updating compare databases on boot environment <sol10-20110505>.
    Making boot environment <sol10-20110505> bootable.
    ERROR: unable to mount zones:
    WARNING: zone jdb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/jdb3-sol10-20110505 does not exist.
    WARNING: zone posdb5 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/posdb5-sol10-20110505 does not exist.
    WARNING: zone geodb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/geodb3-sol10-20110505 does not exist.
    WARNING: zone dbs9 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs9-sol10-20110505 does not exist.
    WARNING: zone dbs17 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs17-sol10-20110505 does not exist.
    zoneadm: zone 'img1': "/usr/lib/fs/lofs/mount /.alt.tmp.b-tWc.mnt/global/backups/backups/img1 /.alt.tmp.b-tWc.mnt/zoneroot/img1-sol10-20110505/lu/a/backups" failed with exit code 111
    zoneadm: zone 'img1': call to zoneadmd failed
    ERROR: unable to mount zone <img1> in </.alt.tmp.b-tWc.mnt>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
    ERROR: Unable to remount ABE <sol10-20110505>: cannot make ABE bootable
    ERROR: no boot environment is mounted on root device </dev/md/dsk/d302>
    Making the ABE <sol10-20110505> bootable FAILED.
    ERROR: Unable to make boot environment <sol10-20110505> bootable.
    ERROR: Unable to populate file systems on boot environment <sol10-20110505>.
    ERROR: Cannot make file systems for boot environment <sol10-20110505>.
    Any ideas why it can't mount that "backups" lofs filesystem into /.alt? I am going to try to remove the lofs from the zone configuration and try again. But if that works, I still need to find a way to use lofs filesystems in the zones while using Live Upgrade.
    Thanks

    I was able to successfully do a Live Upgrade with Zones with a ZFS root in Solaris 10 update 9.
    When attempting to do a "lumount s10u9c33zfs", it gave the following error:
    ERROR: unable to mount zones:
    zoneadm: zone 'edd313': "/usr/lib/fs/lofs/mount -o rw,nodevices /.alt.s10u9c33zfs/global/ora_export/stage /zonepool/edd313 -s10u9c33zfs/lu/a/u04" failed with exit code 111
    zoneadm: zone 'edd313': call to zoneadmd failed
    ERROR: unable to mount zone <edd313> in </.alt.s10u9c33zfs>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: No such file or directory: error unmounting <rpool1/ROOT/s10u9c33zfs>
    ERROR: cannot mount boot environment by name <s10u9c33zfs>
    The solution in this case was:
    zonecfg -z edd313
    info ;# display current setting
    remove fs dir=/u05 ;#remove filesystem linked to a "/global/" filesystem in the GLOBAL zone
    verify ;# check change
    commit ;# commit change
    exit

  • DB upgrade during a Solaris upgrade using live upgrade feature

    Hello,
    Here is the scenario. We are currently upgrading our OS from Solaris 9 to Solaris 10. And there is also a major database upgrade involved too. To help mitigate downtime, the Solaris Live Upgrade feature is being used by the SA's. The DB to be upgraded is currently at 9i and the proposed upgrade end state is at 10.2.0.3.
    Does anyone know if it is possible to do at least part of the database upgrade in the alternate boot environment created during the live upgrade process? So let's say that I am able to get the database partly upgraded to 10.2.0.1. Then I want to run the patch set for 10.2.0.3 in the alternate boot environment and do the upgrade of the instance there too, so that when the alternate boot environment is activated and becomes the primary boot environment, I can, in effect, just start up the database.
    That's sort of a high level simplified version of what I think may be possible.
    Does anyone have any recommendations? We don't have any other high availability solutions currently in place so options such as using data guard do not apply.
    Thanks in advance!

    Hi magNorth,
    I'm not a Solaris expert, but I'd always recommend not doing both steps (OS upgrade and database upgrade) at the same time. I've seen too many live scenarios where either the one or the other has caused issues, and it's fairly tough for all three parties (you as the customer, Sun Support, Oracle Support) to find out in the end what was/is causing the issues.
    So my recommendation would be:
    1) Do your Solaris upgrade from 9 to 10 and check for the most recent Solaris patches - once this has been finished and run for one or two weeks successfully in production then go to step 2
    2) Do your database upgrade. I would suggest not going to 10.2.0.3 but directly to 10.2.0.4, because 10.2.0.4 has ~2000 bug fixes plus the security fixes from April 2008 - but that's your decision. If 10.2.0.3 is a must, then at least check Note:401435.1 for known issues and alerts in 10.2.0.3, and in either case the Upgrade Companion: Note:466181.1
    Kind regards
    Mike

  • Looking for information on best practices using Live Upgrade to patch LDOMs

    This is in Solaris 10, and I'm relatively new to this style of patching. I have a T5240 with 4 LDOMs: a control LDOM and three clients. I have some fundamental questions I'd like help with.
    Namely:
    #1. The client LDOMs have zones running in them. Do I need to init 0 the zones, or can I just "zoneadm halt" them regardless of state? I.e., if a zone is running a database, will halting the zone essentially snapshot it, or will it attempt to shut it down? Is this even a necessary step?
    #2. What is the recommended reboot order for the LDOMs? Do I need to init 0 the client LDOMs and then reboot the control LDOM, or can I leave the client LDOMs running, reboot the control, and then reboot the clients after the control comes up?
    #3. Oracle is running in several of the zones on the client LDOMs; what considerations need to be made for this?
    I am sure other things will come up during the conversation but I have been looking for an hour on Oracle's site for this and the only thing I can find is old Sun Docs with broken links.
    Thanks for any help you can provide,
    pipelineadmin

    Before you use live upgrade, or any other patching technique for Solaris, please be sure to read http://docs.oracle.com/cd/E23823_01/html/E23801/index.html which includes information on upgrading systems with non-global zones. Also, go to support.oracle.com and read Oracle Solaris Live Upgrade Information Center [ID 1364140.1]. These really are MANDATORY READING.
    For the individual questions:
    #1. During the actual maintenance you don't have to do anything to the zone - just operate it as normal. That's the purpose of the "live" in "live upgrade" - you're applying patches on a live, running system under normal operations. When you are finished with that process you can then reboot into the new "boot environment". This will become more clear after reading the above documents. Do as you normally would do before taking a planned outage: shut the databases down using the database commands for a graceful shutdown. A zone halt will abruptly stop the zone and is not a good idea for a database. Alternatively, if you can take application outages, you could (smoothly) shut down the applications and then their domains, detach the zones (zoneadm detach) and then do a live upgrade. Some people like that because it makes things faster. After the live upgrade you would reboot and then zoneadm attach the zones again. The fact that the Solaris instance is running within a logical domain really is mostly beside the point with respect to this process.
    As you can see, there are a LOT of options and choices here, so it's important to read the doc. I ***strongly*** recommend you practice on a test domain so you can get used to the procedure. That's one of the benefits of virtualization: you can easily set up test environments so you can test out procedures. Do it! :-)
    #2 First, note that you can update the domains individually at separate times, just as if they were separate physical machines. So, you could update the guest domains one week (all at once or one at a time), reboot them into the new Solaris 10 software level, and then a few weeks later (or whenever) update the control domain.
    If you had set up your T5240 in a split-bus configuration with an alternate I/O domain providing virtual I/O for the guests, you would be able to upgrade the extra I/O domain and the control domain one at a time in a rolling upgrade - without ever having to reboot the guests. That's really powerful for providing continuous availability. Since you haven't done that, the answer is that at the point you reboot the control domain the guests will lose their I/O. They don't crash, and technically you could just have them continue until the control domain comes back up, at which time the I/O devices reappear. For an important application like a database I wouldn't recommend that. Instead: shut down the guests, then reboot the control domain, then bring the guest domains back up.
    3. The fact that Oracle database is running in zones inside those domains really isn't an issue. You should study the zones administration guide to understand the operational aspects of running with zones, and make sure that the patches are compatible with the version of Oracle.
    I STRONGLY recommend reading the documents mentioned at top, and setting up a test domain to practice on. It shouldn't be hard for you to find documentation. Go to www.oracle.com and hover your mouse over "Oracle Technology Network". You'll see a window with a menu of choices, one of which is "Documentation" - click on that. From there, click on System Software, and it takes you right to the links for Solaris 10 and 11.
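    As a concrete illustration of the detach/upgrade/attach option mentioned above (a sketch only; the zone name is a placeholder, and the databases should already have been shut down cleanly inside the zone):
    # In the guest domain, before the live upgrade
    zoneadm -z dbzone1 halt       # stop the zone after the graceful application shutdown
    zoneadm -z dbzone1 detach     # detach it so it is not processed by lucreate
    # ... lucreate / luupgrade / luactivate, then reboot into the new BE ...
    # After booting the new BE
    zoneadm -z dbzone1 attach -u  # reattach and update the zone's packages to match the new BE
    zoneadm -z dbzone1 boot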

  • Volume as install disk for Guest Domain and Live Upgrade

    Hi Folks,
    I am new to LDOMs and have some questions - any pointers, examples would be much appreciated:
    (1) With support for volumes to be used as whole disks added in LDOM release 1.0.3, can we export a whole LUN under either VERITAS DMP or mpxio control to a guest domain and install Solaris on it? Any gotchas or special config required to do this?
    (2) Can Solaris Live Upgrade be used with guest LDOMs? Or is this ability limited to control domains?
    Thanks

    The answer to your #1 question is YES.
    Here's my mpxio enabled device.
    non-STMS device name STMS device name
    /dev/rdsk/c2t50060E8010029B33d16 /dev/rdsk/c4t4849544143484920373730313036373530303136d0
    /dev/rdsk/c3t50060E8010029B37d16 /dev/rdsk/c4t4849544143484920373730313036373530303136d0
    create the virtual disk using slice 2
    ldm add-vdsdev /dev/dsk/c4t4849544143484920373730313036373530303136d0s2 77bootdisk@primary-vds01
    add the virtual disk to the guest domain
    ldm add-vdisk apps bootdisk@primary-vds01 ldom1
    The virtual disk will be imported as c0d0, which is the whole LUN itself.
    Bind and start ldom1, then install the OS (I used JumpStart); it partitioned the boot disk c0d0 as / (15GB) with swap in the remaining space (10GB).
    When you run format and print the partition table for this disk on both the guest and primary domain, you'll see the same slice/size information:
    Part Tag Flag Cylinders Size Blocks
    0 root wm 543 - 1362 15.01GB (820/0/0) 31488000
    1 swap wu 0 - 542 9.94GB (543/0/0) 20851200
    2 backup wm 0 - 1362 24.96GB (1363/0/0) 52339200
    3 unassigned wm 0 0 (0/0/0) 0
    4 unassigned wm 0 0 (0/0/0) 0
    5 unassigned wm 0 0 (0/0/0) 0
    6 unassigned wm 0 0 (0/0/0) 0
    7 unassigned wm 0 0 (0/0/0) 0
    I haven't used DMP, but HDLM (Hitachi Dynamic Link Manager) seems not to be supported by LDOMs, as I cannot make it work :(
    I have no answer to your other question, unfortunately.
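    For what it's worth, the export can be double-checked from the control domain with the standard listing commands (names match the example above):
    ldm list-services primary    # shows the virtual disk service and the backing device for the volume
    ldm list -l ldom1            # shows the vdisk as it is presented to the guest domain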

  • After Solaris Live Upgrade disks unavailable

    Hi All,
    We have two Sun Fire 6800 cluster nodes. OS and kernel version: Solaris 9 9/05 s9s_u8wos_05 SPARC. After the Solaris Live Upgrade the new root disks keep failing. During boot with the new boot environment Solaris_9_905 the system waits, then the boot disks become unavailable, the cluster crashes and starts with the old boot environment. If the failed devices are removed and then reinserted, they become available again.
    Could anybody help us? Any idea?
    regards
    Josef

    Hi Tim,
    We have Sun Cluster 3.1. Could that be a possible cause of the error? I'll look for a description of LU.
    However, it seems like a simple hardware failure; it just lost the disks.
    Messages:
    root@mcl01:~#metastat -p
    d60 -m d61 d62 d63 1
    d61 1 1 c1t1d0s6
    d62 1 1 c1t5d0s6
    d63 1 1 c1t4d0s6
    d50 -m d51 d52 d53 1
    d51 1 1 c1t1d0s5
    d52 1 1 c1t5d0s5
    d53 1 1 c1t4d0s5
    d30 -m d31 d32 d33 1
    d31 1 1 c1t1d0s3
    d32 1 1 c1t5d0s3
    d33 1 1 c1t4d0s3
    d0 -m d1 d2 d3 1
    d1 1 1 c1t1d0s0
    d2 1 1 c1t5d0s0
    d3 1 1 c1t4d0s0
    0. c1t0d0
    /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w21000000870e3de0,0
    1. c1t2d0
    /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w21000000871421d4,0
    2. c1t3d0
    /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w21000000871432fc,0
    3. c1t5d0
    /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w21000000871421a1,0
    root@mcl02:~#metastat -p
    d60 -m d61 d62 d63 1
    d61 1 1 c0t1d0s6
    d62 1 1 c4t1d0s6
    d63 1 1 c4t2d0s6
    d50 -m d51 d52 d53 1
    d51 1 1 c0t1d0s5
    d52 1 1 c4t1d0s5
    d53 1 1 c4t2d0s5
    d30 -m d31 d32 d33 1
    d31 1 1 c0t1d0s3
    d32 1 1 c4t1d0s3
    d33 1 1 c4t2d0s3
    d0 -m d1 d2 d3 1
    d1 1 1 c0t1d0s0
    d2 1 1 c4t1d0s0
    d3 1 1 c4t2d0s0
    0. c0t0d0
    /ssm@0,0/pci@18,700000/pci@1/scsi@2/sd@0,0
    !!! 1. c0t1d0
    /ssm@0,0/pci@18,700000/pci@1/scsi@2/sd@1,0
    2. c0t2d0
    /ssm@0,0/pci@18,700000/pci@1/scsi@2/sd@2,0
    297. c4t0d0
    /ssm@0,0/pci@1c,700000/pci@1/scsi@2/sd@0,0
    !!! 298. c4t1d0
    /ssm@0,0/pci@1c,700000/pci@1/scsi@2/sd@1,0
    !!! 299. c4t2d0
    /ssm@0,0/pci@1c,700000/pci@1/scsi@2/sd@2,0
    And
    "format" doesn't show some of these disks or says "disk type undefined" (- something like this)
    thx
    J

  • Creating Boot Environment for Live Upgrade

    Hello.
    I'd like to upgrade a Sun Fire 280R system running Solaris 8 to Solaris 10 U4. I'd like to use Live Upgrade to do this. As that's going to be my first LU of a system, I've got some questions. Before I start, I'd like to mention that I have read the "Solaris 10 8/07 Installation Guide: Solaris Live Upgrade and Upgrade Planning" ([820-0178|http://docs.sun.com/app/docs/doc/820-0178]) document. Nonetheless, I'd also appreciate pointers to more "hands-on" documentation/howtos regarding Live Upgrade.
    The system that I'd like to upgrade has these filesystems:
    (winds02)askwar$ df
    Filesystem 1k-blocks Used Available Use% Mounted on
    /dev/md/dsk/d30 4129290 684412 3403586 17% /
    /dev/md/dsk/d32 3096423 1467161 1567334 49% /usr
    /dev/md/dsk/d33 2053605 432258 1559739 22% /var
    swap 7205072 16 7205056 1% /var/run
    /dev/dsk/c3t1d0s6 132188872 61847107 69019877 48% /u04
    /dev/md/dsk/d34 18145961 5429315 12535187 31% /opt
    /dev/md/dsk/d35 4129290 77214 4010784 2% /export/home
    It has 2 built-in hard disks, which form those metadevices. You can find the "metastat" output at http://askwar.pastebin.ca/697380. I'm now planning to break the mirrors for /, /usr, /var and /opt. To do so, I'd run:
    metadetach d33 d23
    metaclear d23
    d23 is (or used to be) c1t1d0s4. I'd do this for d30, d32 and d34 as well. The plan is that I'd be able to use these newly freed slices on c1t1d0 for LU. I know that I'm in trouble if c1t0d0 dies now, but that's okay, as that system isn't being used right now anyway...
    Or wait, I can use lucreate to do that as well, can't I? So, instead of manually detaching the mirror, I could do:
    lucreate -n s8_2_s10 -m /:/dev/md/dsk/d30:preserve,ufs \
    -m /usr:/dev/md/dsk/d32:preserve,ufs \
    -m /var:/dev/md/dsk/d33:preserve,ufs \
    -m /opt:/dev/md/dsk/d34:preserve,ufs
    Does that sound right? I'd assume that I'd then have a new boot environment called "s8_2_s10", which uses the contents of the old metadevices. Or would the correct command rather be:
    lucreate -n s8_2_s10_v2 \
    -m /:/dev/md/dsk/d0:mirror,ufs \
    -m /:/dev/md/dsk/d20:detach,attach,preserve \
    -m /usr:/dev/md/dsk/d2:mirror,ufs \
    -m /usr:/dev/md/dsk/d22:detach,attach,preserve \
    -m /var:/dev/md/dsk/d3:mirror,ufs \
    -m /var:/dev/md/dsk/d23:detach,attach,preserve \
    -m /opt:/dev/md/dsk/d4:mirror,ufs \
    -m /opt:/dev/md/dsk/d24:detach,attach,preserve
    What would be the correct way to create the new boot environment? As I said, I haven't done this before, so I'd really appreciate some help.
    Thanks a lot,
    Alexander Skwar
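    Whichever lucreate form ends up being used, the result can be inspected before going any further (standard LU status commands; the BE name is the one proposed above):
    lustatus              # both BEs should show as complete before any activation
    lufslist s8_2_s10     # shows which devices the new BE's /, /usr, /var and /opt ended up on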

    I replied to this thread: Re: lucreate and non-global zones, so as not to duplicate content, but for some reason it was locked. So I'll post here...
    The thread was locked because you were not replying to it.
    You were hijacking that other person's discussion from 2012 to ask your own new post.
    You have now properly asked your question and people can pay attention to you and not confuse you with that other person.
