LDOM Guest domains show c0d0 as 100% using iostat -cxn

I have a T5220 split into one primary and three guest domains. When the primary is rebooted, iostat -cxn on the guest domains shows c0d0 as 100% in the %b column. This is causing BMC Patrol to report a problem.
Searches of SunSolve and Google have not turned up any pointers, possibly due to the vagueness of the question.
This situation only seems to occur when the primary domain is rebooted without the guest LDoms being rebooted.
Any help would be much appreciated.
Greg Hitchcock
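One monitoring-side workaround (a hypothetical sketch, not a vendor fix) is to alert only when a device is both 100% busy and actually doing I/O; the stale-%b condition after a primary reboot shows 100 in %b with zero r/s and w/s. The `flag_stuck` helper and the canned iostat-style sample below are illustrative assumptions, not from the original post:

```shell
# Print devices that iostat reports as 100% busy but with no actual I/O --
# the signature of the stale %b seen on guest domains after a primary reboot.
# Assumes `iostat -xn`-style columns:
#   r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
flag_stuck() {
    awk 'NR > 2 && $10 == 100 && $1 == 0 && $2 == 0 { print $11 }'
}

# Canned sample: c0d0 is "100% busy" yet completely idle (false positive)
flag_stuck <<'EOF'
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0  0.0  1.0    0.0    0.0   0 100 c0d0
   12.3    4.5  100.1   55.0  0.0  0.2    0.1    3.2   0  14 c1t0d0
EOF
# prints: c0d0
```

A monitor such as BMC Patrol could call a filter like this instead of alarming on %b alone.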

I'm not at liberty to give you a specific timeline, but have a look at this: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=2191993

Similar Messages

  • Can I install Sun Cluster on LDOM guest domains? Is Oracle RAC a supported configuration?

    Hello,
    Can I install Sun Cluster on LDOM guest domains? Is Oracle RAC on LDOM guest domains of two physical servers a supported configuration from Oracle?
    Many thanks in advance
    Ushas Symon

    Hello,
    The motive behind using LDom guest domains as RAC nodes is to have better control of resource allocation, since I will have more than one guest domain, each performing a different function. The customer wants Oracle RAC alone (without Sun Cluster).
    I will have two T5120s and one 2540 as shared storage.
    My plan is to configure:
    a control & I/O domain with 8 VCPUs and 6 GB of memory
    one guest domain on each physical machine with 8 VCPUs and 8 GB of memory, with shared network and disks, participating as RAC nodes (don't know yet if I will use Solaris Cluster or not)
    one guest domain on each physical machine with 12 VCPUs and 14 GB of memory, with shared network and disks, participating as BEA WebLogic cluster nodes (not on Solaris Cluster)
    one guest domain on each physical machine with 4 VCPUs and 4 GB of memory, with shared network and disks, participating as an Apache web cluster (on Solaris Cluster)
    Now, my question is: is it a supported configuration to have guest domains as Oracle RAC participants for 11gR2 (either with or without Solaris Cluster)?
    If I need to configure the RAC nodes on Solaris Cluster, is it possible to have two independent clusters on LDoms: one two-node cluster for RAC and another two-node cluster for Apache web?
    Kindly advise
    Many thanks in advance
    Ushas Symon
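    As a rough illustration, the split described above would translate into ldm commands along these lines (a sketch only; the domain names are illustrative, and shrinking the control domain takes effect after a delayed-reconfiguration reboot):

```shell
# Control & I/O domain: 8 VCPUs, 6 GB
ldm set-vcpu 8 primary
ldm set-memory 6g primary

# RAC node: 8 VCPUs, 8 GB (domain names illustrative)
ldm add-domain racnode1
ldm add-vcpu 8 racnode1
ldm add-memory 8g racnode1

# WebLogic node: 12 VCPUs, 14 GB
ldm add-domain weblogic1
ldm add-vcpu 12 weblogic1
ldm add-memory 14g weblogic1

# Apache node: 4 VCPUs, 4 GB
ldm add-domain apache1
ldm add-vcpu 4 apache1
ldm add-memory 4g apache1
```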

  • Guest Domain Freezes with 100% utilization

    I set up LDoms on a Sun Fire T2000 with one control domain and one guest domain. The guest domain shares the disk (on slice 6) with the control domain.
    The slice is set as the boot disk for the guest domain. Upon starting the guest domain, utilization goes to 100%. A telnet to the virtual console gets connection refused.
    Even after I stop and unbind the guest domain (and reboot), the slice is still unusable: neither format nor newfs can operate on it. What's wrong?
    sc>showhost
    Sun-Fire-T2000 System Firmware 6.4.4  2007/04/20 10:13
    Host flash versions:
       Hypervisor 1.4.1 2007/04/02 16:37
       OBP 4.26.1 2007/04/02 16:26
       POST 4.26.0 2007/03/26 16:45
    # Control Domain
    $ cd LDoms_Manager-1_0-RR
    $ Install/install-ldm
    $ ldm add-vdiskserver primary-vds0 primary
    $ ldm add-vconscon port-range=5000-5100 primary-vcc0 primary
    $ ldm add-vswitch net-dev=e1000g0 primary-vsw0 primary
    $ ldm set-mau 1 primary
    $ ldm set-vcpu 4 primary
    $ ldm set-memory 4g primary
    $ ldm add-config initial
    $ shutdown -i6 -g0 -y
    # Guest Domain
    $ ldm add-domain myldom1
    $ ldm add-vcpu 4 myldom1
    $ ldm add-memory 2g myldom1
    $ ldm add-vnet vnet1 primary-vsw0 myldom1
    $ ldm add-vdiskserverdevice /dev/dsk/c0t1d0s6 vol1@primary-vds0
    $ ldm add-vdisk vdisk1 vol1@primary-vds0 myldom1
    $ ldm set-variable auto-boot\?=false myldom1
    $ ldm set-variable boot-device=vdisk1 myldom1
    $ ldm bind-domain myldom1
    $ ldm start-domain myldom1
    $ telnet localhost 5000

    Truss output of the format command:
    AVAILABLE DISK SELECTIONS:
    write(1, "\n\n A V A I L A B L E  ".., 29)      = 29
    ioctl(0, TCGETA, 0xFFBFFB34)                    = 0
    ioctl(1, TCGETA, 0xFFBFFB34)                    = 0
    ioctl(0, TCGETA, 0xFFBFFACC)                    = 0
    ioctl(1, TCGETA, 0xFFBFFACC)                    = 0
    ioctl(1, TIOCGWINSZ, 0xFFBFFB40)                = 0
    open("/dev/tty", O_RDWR|O_NDELAY)               = 3
    ioctl(3, TCGETS, 0x000525BC)                    = 0
    ioctl(3, TCSETS, 0x000525BC)                    = 0
    ioctl(3, TCGETS, 0x000525BC)                    = 0
    ioctl(3, TCSETS, 0x000525BC)                    = 0
           0. c0t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    write(1, "               0 .   c 0".., 56)      = 56
              /pci@780/pci@0/pci@9/scsi@0/sd@1,0
    write(1, "                     / p".., 45)      = 45
    ioctl(3, TCSETS, 0x000525BC)                    = 0
    ioctl(3, TCSETS, 0x000525BC)                    = 0
    close(3)                                        = 0
    ioctl(0, TCGETA, 0xFFBFF1EC)                    = 0
    fstat64(0, 0xFFBFF108)                          = 0
    Specify disk (enter its number): write(1, " S p e c i f y   d i s k".., 33)     = 33
    read(0, 0xFF2700F8, 1024)       (sleeping...)
    0
    read(0, " 0\n", 1024)                           = 2
    open("/dev/rdsk/c0t1d0s2", O_RDWR|O_NDELAY)     = 3
    brk(0x00058810)                                 = 0
    brk(0x00068810)                                 = 0
    brk(0x00068810)                                 = 0
    brk(0x00078810)                                 = 0
    selecting c0t1d0
    write(1, " s e l e c t i n g   c 0".., 17)      = 17
    ioctl(3, 0x04C9, 0xFFBFFA54)                    = 0
    [disk formatted]
    write(1, " [ d i s k   f o r m a t".., 17)      = 17
    open("/etc/mnttab", O_RDONLY)                   = 4
    ioctl(4, (('m'<<8)|7), 0xFFBFFA64)              = 0
    open("/dev/rdsk/c0t1d0s0", O_RDWR|O_NDELAY)     = 5
    fstat(5, 0xFFBFF5E0)                            = 0
    ioctl(5, 0x0403, 0xFFBFF59C)                    = 0
    close(5)                                        = 0
    llseek(4, 0, SEEK_CUR)                          = 0
    close(4)                                        = 0
    fstat64(2, 0xFFBFEBA0)                          = 0
    Warning: Current Disk has mounted partitions.
    write(2, " W a r n i n g :   C u r".., 46)      = 46
    resolvepath("/", "/", 1024)                     = 1
    sysconfig(_CONFIG_PAGESIZE)                     = 8192
    open("/dev/.devlink_db", O_RDONLY)              = 4
    fstat(4, 0xFFBFF1F8)                            = 0
    mmap(0x00000000, 40, PROT_READ, MAP_SHARED, 4, 0) = 0xFEF50000
    mmap(0x00000000, 24576, PROT_READ, MAP_SHARED, 4, 32768) = 0xFEF38000
    open("/devices/pseudo/devinfo@0:devinfo", O_RDONLY) = 5
    ioctl(5, 0xDF82, 0x00000000)                    = 57311

    After failing to configure LDoms, I booted from a Solaris DVD to reinstall the OS. It looks like e1000g0 has some kind of fault.
    {0} ok boot cdrom
    Boot device: /pci@7c0/pci@0/pci@1/pci@0/ide@8/cdrom@0,0:f  File and args:
    SunOS Release 5.10 Version Generic_118833-33 64-bit Copyright 1983-2006 Sun Microsystems, Inc.  All rights reserved.
    Use is subject to license terms.
    WARNING: mac_open e1000g0 failed
    WARNING: mac_open e1000g0 failed
    WARNING: mac_open e1000g0 failed
    WARNING: Unable to setup switching mode
    Configuring devices.
    WARNING: bypass cookie failure 71ece
    NOTICE: tavor0: error during attach: hw_init_eqinitall_fail
    NOTICE: tavor0: driver attached (for maintenance mode only)
    NOTICE: pciex8086,105e - e1000g[0] : Adapter 1000Mbps full duplex copper link is up.

    What is the meaning of this?
    Message was edited by:
    JoeChris@Sun

    Hi,
    In the case of bug 6530040, the recovery method in the Release Notes is to reboot the system. In my case, even after the reboot the vds still does not close the device. I suspect I might be rebooting the system the wrong way. Can you give me an example for a system with a control domain (primary) and a single guest domain (myldom1)?
    My steps would be as follow:
    $ ldm stop-domain myldom1
    $ ldm unbind-domain myldom1
    $ reboot
    Am I missing any step?
    LDoms also seems to have a problem releasing the network interface e1000g0. How do I release it?
    {0} ok boot cdrom
    Boot device: /pci@7c0/pci@0/pci@1/pci@0/ide@8/cdrom@0,0:f  File and args:
    SunOS Release 5.10 Version Generic_118833-33 64-bit Copyright 1983-2006 Sun Microsystems, Inc.  All rights reserved.
    Use is subject to license terms.
    WARNING: mac_open e1000g0 failed
    WARNING: mac_open e1000g0 failed
    WARNING: mac_open e1000g0 failed
    WARNING: Unable to setup switching mode
    Configuring devices.
    WARNING: bypass cookie failure 71ece
    NOTICE: tavor0: error during attach: hw_init_eqinitall_fail
    NOTICE: tavor0: driver attached (for maintenance mode only)
    NOTICE: pciex8086,105e - e1000g[0] : Adapter 1000Mbps full duplex copper link is up.

    Thank you
    Message was edited by:
    JoeChris@Sun

  • LDOMs on T2000 and 2540, guest domains from FC LUNs

    Hi,
    I have some T2000 servers and one 2540 FC storage array. Each server has only one HBA and 73 GB of mirrored internal disk space.
    I would like to implement LDoms here. I would like to confirm whether the guest domains can be booted off the SAN storage LUNs (guest OS to be installed on LUNs exported to the T2000 from the 2540 storage).
    Any help will be highly appreciated.
    Many thanks in advance
    Ushas symon

    Hi,
    Guest LDom images can be on anything, as the back end is transparent to the LDom. Remember that the control domain is actually serving the filesystems to the guest LDoms. So they can be whole LUNs, ZFS devices, mounted filesystems, anything really. You could even have an LDom guest image on an NFS filesystem if you really wanted.
    I have set up the majority of systems with the SAN attached to the control domain, then created ZFS filesystems on those LUNs and placed disk images on the ZFS filesystems. This means we can take ZFS snapshots on the control domain if we need to do any patching, etc.
    I would also suggest that you have a minimum of two connections to each of your SAN devices. One connection is bad, m'kay? :D
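    The setup described above can be sketched roughly as follows (the pool name, LUN device path and image path are illustrative, not from the post):

```shell
# Pool on a SAN LUN presented to the control domain
zpool create ldompool c2t600A0B80002FD822d0
zfs create ldompool/images

# Flat-file back end for the guest's virtual disk
mkfile 10g /ldompool/images/ldom1.img
ldm add-vdsdev /ldompool/images/ldom1.img vol1@primary-vds0
ldm add-vdisk vdisk1 vol1@primary-vds0 ldom1

# Snapshot before patching the guest; roll back if patching goes wrong
zfs snapshot ldompool/images@pre-patch
# zfs rollback ldompool/images@pre-patch
```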
    Edited by: krankyd on Sep 23, 2009 1:01 AM

  • LDOM Guests with FC Library - LUNS not showing

    Hi
    I've been having issues starting LDOMs and getting storage errors.
    I decided to start from scratch: I removed all my LDom guests, removed the LUNs from the library, and removed the library.
    Then in CAM on the 6140 I removed the LUNs.
    I then created new LUNs on the 6140 in CAM.
    I created a new FC library.
    When I click to add LUNs to the library, it only finds the old LUNs that are no longer there, and does not find the new ones.
    Am I missing a step?
    Thanks.

    I fixed this by reinstalling the three T-series boxes.
    Once they were all reinstalled, adding the LUNs into the library showed the correct LUNs.

  • LDOMs - exporting virtual disks multiple times to different guest domains

    Experts,
    From the control domain, can I export one storage LUN to two guest domains?
    ldm add-vdsdev /dev/dsk/c1t5d0s0 disk1@primary-vds0
    ldm add-vdisk disk1 disk1@primary-vds0 ldom1
    OK.
    ldm add-vdsdev /dev/dsk/c1t5d0s0 disk2@primary-vds0
    This gives an error; with the -f flag it works.
    ldm add-vdisk disk2 disk2@primary-vds0 ldom2
    Is this a permitted configuration?

    A virtual disk back end can be exported multiple times either through the same or different virtual disk servers. Each exported instance of the virtual disk back end can then be assigned to either the same or different guest domains.
    When a virtual disk back end is exported multiple times, it should not be exported with the exclusive (excl) option. Specifying the excl option will only allow exporting the back end once. The back end can be safely exported multiple times as a read-only device with the ro option.
    Cheers,
    ME
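    Put concretely, the supported pattern described above looks like this (the -f and options=ro usages are a sketch of the add-vdsdev syntax; sharing one back end read-write between two domains is only safe with cluster-aware software on top):

```shell
# Same back end exported twice from the same vds; -f overrides the
# duplicate-export check. Do not combine multiple exports with options=excl.
ldm add-vdsdev /dev/dsk/c1t5d0s0 disk1@primary-vds0
ldm add-vdsdev -f /dev/dsk/c1t5d0s0 disk2@primary-vds0
ldm add-vdisk disk1 disk1@primary-vds0 ldom1
ldm add-vdisk disk2 disk2@primary-vds0 ldom2

# Safe variant: export the extra instances read-only
ldm add-vdsdev -f options=ro /dev/dsk/c1t5d0s0 disk3@primary-vds0
```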

  • Guest domain OBP alias changed after upgrade to 1.0.3

    Hello all,
    Recently I upgraded from 1.0.1 to 1.0.3 (and firmware to 6.6.5) using the procedure at http://blogs.sun.com/jbeloro/entry/upgrading_from_ldoms_1_0. I found that my boot-device setting on all guest domains no longer worked. They were set to "boot-device=vdisk"; however, after the upgrade to 1.0.3, vdisk was no longer a valid device alias in the OBP of the guest domains. The other device aliases (vdisk0, disk) were there and worked fine. Anyone else seen this? The solution was simply to change the boot-device setting, but I'm curious about the "why". Details below.
    LDom primary domain:
    $ uname -a
    SunOS isdsyddev13 5.10 Generic_137111-03 sun4v sparc SUNW,Sun-Fire-T200
    $ cat /etc/release
    Solaris 10 6/06 s10s_u2wos_09a SPARC
    Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
    Use is subject to license terms.
    Assembled 09 June 2006
    $ prtconf -V
    OBP 4.25.9 2007/08/23 14:17
    Guest domain setup:
    NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
    itgsyddev57 inactive ----- 1 1G
    VARIABLES
    auto-boot?=false
    boot-device=vdisk
    NETWORK
    NAME SERVICE DEVICE MAC
    vnet0 primary-vsw0 00:14:4f:f9:61:83
    DISK
    NAME VOLUME TOUT DEVICE SERVER
    Guest domain devalias output post upgrade
    ok devalias
    vdisk0 /virtual-devices@100/channel-devices@200/disk@0
    vnet0 /virtual-devices@100/channel-devices@200/network@0
    net /virtual-devices@100/channel-devices@200/network@0
    disk /virtual-devices@100/channel-devices@200/disk@0
    virtual-console /virtual-devices/console@1
    name aliases
    I don't have a pre-upgrade devalias output, but I do have the output from an eeprom command...
    root@itgsyddev57# grep boot-device eeprom.out
    boot-device=vdisk
    Thanks.
    Edited by: paebersold on Sep 23, 2008 9:45 PM

    I think this is because, with LDoms 1.0.3, device aliases are automatically created using the names you used to define your vdisks. So after the upgrade, the old aliases have probably been
    wiped out and replaced with the automatically generated ones.
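    In other words, with 1.0.3 the alias in the guest's OBP follows the vdisk name chosen at add-vdisk time, so boot-device should match that name; for example (names illustrative):

```shell
# The vdisk name becomes the guest's OBP devalias under LDoms 1.0.3
ldm add-vdisk vdisk0 vol1@primary-vds0 myguest

# ...so boot-device should reference that same name
ldm set-variable boot-device=vdisk0 myguest
```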

  • Vds_add_vd() error and frozen guest domain

    Hello all,
    trying to set up a guest domain but getting a frozen domain at the end of the process. It seems to be a combination of some other issues on the forum. Commands run:
    # ldm add-domain myldom1
    # ldm add-vcpu 8 myldom1
    # ldm add-memory 4G myldom1
    # ldm add-vnet vnet1 primary-vsw0 myldom1
    # ldm add-vdsdev /dev/dsk/c0t3d0s2 vol1@primary-vds0
    # ldm add-vdisk vdisk1 vol1@primary-vds0 myldom1
    # ldm set-variable auto-boot\?=false myldom1
    # ldm set-variable boot-device=/virtual-devices@100/channel-devices@200/disk@0 myldom1
    # ldm bind-domain myldom1
    # ldm list-domain
    Name State Flags Cons VCPU Memory Util Uptime
    primary active -t-cv SP 4 2G 3.2% 6m
    myldom1 bound ----- 5000 8 4G
    # ldm list-bindings myldom1
    Name: myldom1
    State: bound
    Flags:
    OS:
    Util:
    Uptime:
    Vcpu: 8
    vid pid util strand
    0 4 100%
    1 5 100%
    2 6 100%
    3 7 100%
    4 8 100%
    5 9 100%
    6 10 100%
    7 11 100%
    Memory: 4G
    real-addr phys-addr size
    0x8000000 0x88000000 4G
    Vars: auto-boot?=false
    boot-device=/virtual-devices@100/channel-devices@200/disk@0
    Vldcc: vldcc0 [Domain Services]
    service: primary-vldc0 @ primary
    [LDC: 0x0]
    Vnet: vnet1
    mac-addr=0:14:4f:f9:f7:be
    service: primary-vsw0 @ primary
    [LDC: 0x1]
    Vdisk: vdisk1 vol1@primary-vds0
    service: primary-vds0 @ primary
    [LDC: 0x2]
    Vcons: [via LDC:3]
    myldom1@primary-vcc0 [port:5000]
    # ldm start-domain myldom1
    LDom myldom1 started
    # ldm list-domain
    Name State Flags Cons VCPU Memory Util Uptime
    primary active -t-cv SP 4 2G 0.6% 7m
    myldom1 active -t--- 5000 8 4G 100% 7s
    # telnet localhost 5000
    Trying 127.0.0.1...
    telnet: connect to address 127.0.0.1: Connection refused
    Trying ::1...
    telnet: Unable to connect to remote host: Network is unreachable
    and in /var/adm/messages...
    May 23 11:38:11 isdsyddev13 vds: [ID 556514 kern.info] vds_add_vd(): Failed to add vdisk ID 0
    Tried stopping/unbinding the domain and then rebooting the primary domain, still doesn't work. Removed the guest domain, rebooted, recreated, still doesn't work. Any ideas on what else I can try? Thanks

    Hi merwick,
    thanks for the reply. I'm running Solaris 10 11/06 (u3) with patch 124921. The disk wasn't in use anywhere else, but VxVM is installed on the box, which got me thinking. It seems I was getting hit by the Veritas DMP issue (bug 6522993) even though the disk in question wasn't being used by VxVM directly. A "vxdisk rm" solved the issue. So the first guest domain is up and running :) Happy days. I uninstalled VxVM from the system as well and am off to try out ZFS boot disks (as described in the LDoms admin guide). Thanks again.

  • How to reboot a guest domain when hung and ldm stop-domain doesn't work

    Hi, the configuration is as follows.
    SF T1000 (32 threads / 16 GB memory)
    Latest firmware and the LDoms patch (-02) applied.
    This is how the LDoms are set up:
    Instance CPUs Memory
    Service domain 4 2g
    ldom1 4 2g
    ldom2 4 2g
    ldom3 4 2g
    ldom4 4 2g
    ldom5 4 2g
    ldom6 4 2g
    ldom7 4 1.9g
    All guest domains are running on disk images on a mirrored boot environment on the service domain, each around 7 GB with SUNWCXall installed.
    However, I have had a few hangs, especially when working over the virtual switch on the domains.
    At the moment ldom1 is totally hung. See below for info:
    bash-3.00# ldm list-domain
    Name State Flags Cons VCPU Memory Util Uptime
    primary active -t-cv SP 4 2G 0.5% 1d 1h 17m
    ldom1 active -t--- 5000 4 2G 25% 2h 14m
    ldom2 active -t--- 5001 4 2G 0.2% 2h 35m
    ldom3 active ----- 5002 4 2G 0.2% 47m
    ldom4 active ----- 5003 4 2G 0.2% 1d 1h 10m
    ldom5 active -t--- 5004 4 2G 0.3% 1d 1h 10m
    ldom6 active -t--- 5005 4 2G 0.2% 1d 1h 10m
    ldom7 active -t--- 5006 4 1900M 0.2% 7h 29m
    bash-3.00#
    bash-3.00# ldm stop-domain ldom1
    LDom ldom1 stop notification failed
    bash-3.00#
    bash-3.00# telnet localhost 5000
    Trying 127.0.0.1...
    Connected to localhost.
    Escape character is '^]'.
    Connecting to console "ldom1" in group "ldom1" ....
    Press ~? for control options ..
    <COMMENT: ~w sent!>
    Warning: another user currently has write permission
    to this console and forcibly removing him/her will terminate
    any current write action and all work will be lost.
    Would you like to continue?[y/n] y
    < COMMENT: I don't get any response when hitting enter and ~# (break) doesn't seem to work....>
    I cannot ssh to ldom1 since it appears to be dead!
    Anyone know if I can send some sort of reset to this hung domain? How can I troubleshoot it?
    Regards,
    Daniel
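    For a domain that ignores a normal stop, one option worth trying before a full power cycle is a forced stop from the control domain (a sketch; the -f option forces the stop at the hypervisor level, and the unbind/bind cycle is optional):

```shell
ldm stop-domain -f ldom1   # force-stop the hung domain
ldm unbind-domain ldom1    # release its resources
ldm bind-domain ldom1
ldm start-domain ldom1
```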

    UPDATE 2
    =========
    When I attached to ldom3 through the console services, this domain had also hung.
    Below is some LDOM information.
    bash-3.00# ldm list-services
    Vldc: primary-vldc0
    Vldc: primary-vldc3
    Vds: primary-vds0
    vdsdev: vol1 device=/ldoms/be/ldom_1.img
    vdsdev: vol5 device=/ldoms/be/ldom_5.img
    vdsdev: vol6 device=/ldoms/be/ldom_6.img
    vdsdev: vol7 device=/ldoms/be/ldom_7.img
    vdsdev: vol2 device=/ldoms/be/ldom_2.img
    vdsdev: vol3 device=/ldoms/be/ldom_3.img
    vdsdev: vol4 device=/ldoms/be/ldom_4.img
    Vcc: primary-vcc0
    port-range=5000-5100
    Vsw: primary-vsw0
    mac-addr=0:14:4f:f8:66:9f
    net-dev=bge0
    mode=prog,promisc
    Vsw: primary-vsw1
    mac-addr=0:14:4f:f9:dd:53
    net-dev=bge1
    mode=prog,promisc
    bash-3.00# ldm list-devices
    vCPU:
    vCPUID %FREE
    MAU:
    Free MA-Units:
    cpuset (0, 1, 2, 3)
    cpuset (4, 5, 6, 7)
    cpuset (8, 9, 10, 11)
    cpuset (12, 13, 14, 15)
    cpuset (16, 17, 18, 19)
    cpuset (20, 21, 22, 23)
    cpuset (24, 25, 26, 27)
    cpuset (28, 29, 30, 31)
    Memory:
    Available mblocks:
    PADDR SIZE
    0x3fec00000 20M (0x1400000)
    I/O Devices:
    Free Devices:
    bash-3.00# ldm list-domains
    Unknown command list-domains; use --help option for list of available commands
    bash-3.00# ldm list-domain
    Name State Flags Cons VCPU Memory Util Uptime
    primary active -t-cv SP 4 2G 0.7% 1d 4h 8m
    ldom1 active -t--- 5000 4 2G 0.3% 1h 24m
    ldom2 active -t--- 5001 4 2G 0.6% 5h 26m
    ldom3 active ----- 5002 4 2G 25% 3h 38m
    ldom4 active ----- 5003 4 2G 0.1% 1d 4h 1m
    ldom5 active -t--- 5004 4 2G 0.1% 1d 4h 1m
    ldom6 active -t--- 5005 4 2G 0.7% 1d 4h 1m
    ldom7 active -t--- 5006 4 1900M 0.1% 10h 20m
    bash-3.00#
    bash-3.00# ldm list-bindings
    Name: primary
    State: active
    Flags: transition,control,vio service
    OS:
    Util: 0.5%
    Uptime: 1d 4h 11m
    Vcpu: 4
    vid pid util strand
    0 0 0.9% 100%
    1 1 0.8% 100%
    2 2 0.2% 100%
    3 3 0.3% 100%
    Memory: 2G
    real-addr phys-addr size
    0x8000000 0x8000000 2G
    Vars: reboot-command=boot
    IO: pci@780 (bus_a)
    pci@7c0 (bus_b)
    Vldc: primary-vldc0
    [LDC: 0x0]
    [(HV Control channel)]
    [LDC: 0x1]
    [LDom primary   (Domain Services channel)]
    [LDC: 0x3]
    [LDom primary   (FMA Services channel)]
    [LDC: 0xb]
    [LDom ldom1     (Domain Services channel)]
    [LDC: 0x22]
    [LDom ldom5     (Domain Services channel)]
    [LDC: 0x27]
    [LDom ldom6     (Domain Services channel)]
    [LDC: 0x2d]
    [LDom ldom7     (Domain Services channel)]
    [LDC: 0x10]
    [LDom ldom2     (Domain Services channel)]
    [LDC: 0x18]
    [LDom ldom3     (Domain Services channel)]
    [LDC: 0x1d]
    [LDom ldom4     (Domain Services channel)]
    Vldc: primary-vldc3
    [LDC: 0x14]
    [spds (SP channel)]
    [LDC: 0xd]
    [system-management (SP channel)]
    [LDC: 0x6]
    [sunvts (SP channel)]
    [LDC: 0x7]
    [sunmc (SP channel)]
    [LDC: 0x8]
    [explorer (SP channel)]
    [LDC: 0x9]
    [led (SP channel)]
    [LDC: 0xa]
    [flashupdate (SP channel)]
    Vds: primary-vds0
    vdsdev: vol1 device=/ldoms/be/ldom_1.img
    vdsdev: vol5 device=/ldoms/be/ldom_5.img
    vdsdev: vol6 device=/ldoms/be/ldom_6.img
    vdsdev: vol7 device=/ldoms/be/ldom_7.img
    vdsdev: vol2 device=/ldoms/be/ldom_2.img
    vdsdev: vol3 device=/ldoms/be/ldom_3.img
    vdsdev: vol4 device=/ldoms/be/ldom_4.img
    [LDom  ldom1, dev-name: vol1]
    [LDC: 0xe]
    [LDom  ldom5, dev-name: vol5]
    [LDC: 0x25]
    [LDom  ldom6, dev-name: vol6]
    [LDC: 0x2a]
    [LDom  ldom7, dev-name: vol7]
    [LDC: 0x30]
    [LDom  ldom2, dev-name: vol2]
    [LDC: 0x13]
    [LDom  ldom3, dev-name: vol3]
    [LDC: 0x1b]
    [LDom  ldom4, dev-name: vol4]
    [LDC: 0x20]
    Vcc: primary-vcc0
    [LDC: 0xf]
    [LDom ldom1, group: ldom1, port: 5000]
    [LDC: 0x26]
    [LDom ldom5, group: ldom5, port: 5004]
    [LDC: 0x2c]
    [LDom ldom6, group: ldom6, port: 5005]
    [LDC: 0x31]
    [LDom ldom7, group: ldom7, port: 5006]
    [LDC: 0x15]
    [LDom ldom2, group: ldom2, port: 5001]
    [LDC: 0x1c]
    [LDom ldom3, group: ldom3, port: 5002]
    [LDC: 0x21]
    [LDom ldom4, group: ldom4, port: 5003]
    port-range=5000-5100
    Vsw: primary-vsw0
    mac-addr=0:14:4f:f8:66:9f
    net-dev=bge0
    [LDC: 0xc]
    [LDom ldom1, name: vnet1, mac-addr: 0:14:4f:fa:1e:4d]
    [LDC: 0x23]
    [LDom ldom5, name: vnet0, mac-addr: 0:14:4f:f9:ae:a1]
    [LDC: 0x28]
    [LDom ldom6, name: vnet0, mac-addr: 0:14:4f:f8:27:b8]
    [LDC: 0x2e]
    [LDom ldom7, name: vnet0, mac-addr: 0:14:4f:f9:1f:5d]
    [LDC: 0x11]
    [LDom ldom2, name: vnet0, mac-addr: 0:14:4f:f8:c9:7c]
    [LDC: 0x19]
    [LDom ldom3, name: vnet0, mac-addr: 0:14:4f:fb:d9:6d]
    [LDC: 0x1e]
    [LDom ldom4, name: vnet0, mac-addr: 0:14:4f:fb:df:2c]
    mode=prog,promisc
    Vsw: primary-vsw1
    mac-addr=0:14:4f:f9:dd:53
    net-dev=bge1
    [LDC: 0x2b]
    [LDom ldom1, name: vnet2, mac-addr: 0:14:4f:fa:b1:f0]
    [LDC: 0x24]
    [LDom ldom5, name: vnet1, mac-addr: 0:14:4f:f9:b2:b0]
    [LDC: 0x29]
    [LDom ldom6, name: vnet1, mac-addr: 0:14:4f:fb:f5:c3]
    [LDC: 0x2f]
    [LDom ldom7, name: vnet1, mac-addr: 0:14:4f:f8:3a:3e]
    [LDC: 0x12]
    [LDom ldom2, name: vnet1, mac-addr: 0:14:4f:f9:88:a0]
    [LDC: 0x1a]
    [LDom ldom3, name: vnet1, mac-addr: 0:14:4f:fa:aa:57]
    [LDC: 0x1f]
    [LDom ldom4, name: vnet1, mac-addr: 0:14:4f:f9:33:59]
    mode=prog,promisc
    Vldcc: vldcc1 [FMA Services]
    service: ldmfma
    service: primary-vldc0 @ primary
    [LDC: 0x4]
    Vldcc: vldcc2 [SP channel]
    service: spfma
    [LDC: 0x5]
    Vldcc: vldcc0 [Domain Services]
    service: primary-vldc0 @ primary
    [LDC: 0x2]
    Vldcc: hvctl [Hypervisor Control]
    service: primary-vldc0 @ primary
    [LDC: 0x0]
    Vcons: SP
    Name: ldom1
    State: active
    Flags: transition
    OS:
    Util: 0.3%
    Uptime: 1h 27m
    Vcpu: 4
    vid pid util strand
    0 4 0.5% 100%
    1 5 0.6% 100%
    2 6 0.1% 100%
    3 7 0.0% 100%
    Memory: 2G
    real-addr phys-addr size
    0x8000000 0x88000000 2G
    Vars: auto-boot?=false
    boot-device=/virtual-devices@100/channel-devices@200/disk@0:a vdisk
    nvramrc=devalias vnet /virtual-devices@100/channel-devices@200/network@0
    use-nvramrc?=true
    Vnet: vnet1 [LDC: 0xb]
    [Peer LDom: ldom5, mac-addr: 0:14:4f:f9:ae:a1] [LDC: 0xd]
    [Peer LDom: ldom6, mac-addr: 0:14:4f:f8:27:b8] [LDC: 0xf]
    [Peer LDom: ldom7, mac-addr: 0:14:4f:f9:1f:5d] [LDC: 0x4]
    [Peer LDom: ldom2, mac-addr: 0:14:4f:f8:c9:7c] [LDC: 0x6]
    [Peer LDom: ldom3, mac-addr: 0:14:4f:fb:d9:6d] [LDC: 0x8]
    [Peer LDom: ldom4, mac-addr: 0:14:4f:fb:df:2c]
    mac-addr=0:14:4f:fa:1e:4d
    service: primary-vsw0 @ primary
    [LDC: 0x1]
    Vnet: vnet2 [LDC: 0xc]
    [Peer LDom: ldom5, mac-addr: 0:14:4f:f9:b2:b0] [LDC: 0xe]
    [Peer LDom: ldom6, mac-addr: 0:14:4f:fb:f5:c3] [LDC: 0x10]
    [Peer LDom: ldom7, mac-addr: 0:14:4f:f8:3a:3e] [LDC: 0x5]
    [Peer LDom: ldom2, mac-addr: 0:14:4f:f9:88:a0] [LDC: 0x7]
    [Peer LDom: ldom3, mac-addr: 0:14:4f:fa:aa:57] [LDC: 0x9]
    [Peer LDom: ldom4, mac-addr: 0:14:4f:f9:33:59]
    mac-addr=0:14:4f:fa:b1:f0
    service: primary-vsw1 @ primary
    [LDC: 0xa]
    Vdisk: vdisk1 vol1@primary-vds0
    service: primary-vds0 @ primary
    [LDC: 0x2]
    Vcons: [via LDC:3]
    ldom1@primary-vcc0 [port:5000]
    Vldcc: vldcc0 [Domain Services]
    service: primary-vldc0 @ primary
    [LDC: 0x0]
    Name: ldom2
    State: active
    Flags: transition
    OS:
    Util: 0.1%
    Uptime: 5h 29m
    Vcpu: 4
    vid pid util strand
    0 8 0.6% 100%
    1 9 0.1% 100%
    2 10 0.0% 100%
    3 11 0.2% 100%
    Memory: 2G
    real-addr phys-addr size
    0x8000000 0x108000000 2G
    Vars: boot-device=vdisk
    Vnet: vnet0 [LDC: 0x2]
    [Peer LDom: ldom1, mac-addr: 0:14:4f:fa:1e:4d] [LDC: 0x3]
    [Peer LDom: ldom5, mac-addr: 0:14:4f:f9:ae:a1] [LDC: 0x4]
    [Peer LDom: ldom6, mac-addr: 0:14:4f:f8:27:b8] [LDC: 0x5]
    [Peer LDom: ldom7, mac-addr: 0:14:4f:f9:1f:5d] [LDC: 0xd]
    [Peer LDom: ldom3, mac-addr: 0:14:4f:fb:d9:6d] [LDC: 0xf]
    [Peer LDom: ldom4, mac-addr: 0:14:4f:fb:df:2c]
    mac-addr=0:14:4f:f8:c9:7c
    service: primary-vsw0 @ primary
    [LDC: 0x1]
    Vnet: vnet1 [LDC: 0x7]
    [Peer LDom: ldom1, mac-addr: 0:14:4f:fa:b1:f0] [LDC: 0x8]
    [Peer LDom: ldom5, mac-addr: 0:14:4f:f9:b2:b0] [LDC: 0x9]
    [Peer LDom: ldom6, mac-addr: 0:14:4f:fb:f5:c3] [LDC: 0xa]
    [Peer LDom: ldom7, mac-addr: 0:14:4f:f8:3a:3e] [LDC: 0xe]
    [Peer LDom: ldom3, mac-addr: 0:14:4f:fa:aa:57] [LDC: 0x10]
    [Peer LDom: ldom4, mac-addr: 0:14:4f:f9:33:59]
    mac-addr=0:14:4f:f9:88:a0
    service: primary-vsw1 @ primary
    [LDC: 0x6]
    Vdisk: vdisk2 vol2@primary-vds0
    service: primary-vds0 @ primary
    [LDC: 0xb]
    Vcons: [via LDC:12]
    ldom2@primary-vcc0 [port:5001]
    Vldcc: vldcc0 [Domain Services]
    service: primary-vldc0 @ primary
    [LDC: 0x0]
    Name: ldom3
    State: active
    Flags:
    OS:
    Util: 24%
    Uptime: 3h 42m
    Vcpu: 4
    vid pid util strand
    0 12 100% 100%
    1 13 1.4% 100%
    2 14 1.4% 100%
    3 15 1.4% 100%
    Memory: 2G
    real-addr phys-addr size
    0x8000000 0x188000000 2G
    Vars: boot-device=vdisk
    Vnet: vnet0 [LDC: 0x2]
    [Peer LDom: ldom1, mac-addr: 0:14:4f:fa:1e:4d] [LDC: 0x3]
    [Peer LDom: ldom5, mac-addr: 0:14:4f:f9:ae:a1] [LDC: 0x4]
    [Peer LDom: ldom6, mac-addr: 0:14:4f:f8:27:b8] [LDC: 0x5]
    [Peer LDom: ldom7, mac-addr: 0:14:4f:f9:1f:5d] [LDC: 0x6]
    [Peer LDom: ldom2, mac-addr: 0:14:4f:f8:c9:7c] [LDC: 0xf]
    [Peer LDom: ldom4, mac-addr: 0:14:4f:fb:df:2c]
    mac-addr=0:14:4f:fb:d9:6d
    service: primary-vsw0 @ primary
    [LDC: 0x1]
    Vnet: vnet1 [LDC: 0x8]
    [Peer LDom: ldom1, mac-addr: 0:14:4f:fa:b1:f0] [LDC: 0x9]
    [Peer LDom: ldom5, mac-addr: 0:14:4f:f9:b2:b0] [LDC: 0xa]
    [Peer LDom: ldom6, mac-addr: 0:14:4f:fb:f5:c3] [LDC: 0xb]
    [Peer LDom: ldom7, mac-addr: 0:14:4f:f8:3a:3e] [LDC: 0xc]
    [Peer LDom: ldom2, mac-addr: 0:14:4f:f9:88:a0] [LDC: 0x10]
    [Peer LDom: ldom4, mac-addr: 0:14:4f:f9:33:59]
    mac-addr=0:14:4f:fa:aa:57
    service: primary-vsw1 @ primary
    [LDC: 0x7]
    Vdisk: vdisk3 vol3@primary-vds0
    service: primary-vds0 @ primary
    [LDC: 0xd]
    Vcons: [via LDC:14]
    ldom3@primary-vcc0 [port:5002]
    Vldcc: vldcc0 [Domain Services]
    service: primary-vldc0 @ primary
    [LDC: 0x0]
    Name: ldom4
    State: active
    Flags:
    OS:
    Util: 0.2%
    Uptime: 1d 4h 4m
    Vcpu: 4
    vid pid util strand
    0 16 0.4% 100%
    1 17 0.3% 100%
    2 18 0.1% 100%
    3 19 0.0% 100%
    Memory: 2G
    real-addr phys-addr size
    0x8000000 0x208000000 2G
    Vars: boot-device=vdisk
    Vnet: vnet0 [LDC: 0x2]
    [Peer LDom: ldom1, mac-addr: 0:14:4f:fa:1e:4d] [LDC: 0x3]
    [Peer LDom: ldom5, mac-addr: 0:14:4f:f9:ae:a1] [LDC: 0x4]
    [Peer LDom: ldom6, mac-addr: 0:14:4f:f8:27:b8] [LDC: 0x5]
    [Peer LDom: ldom7, mac-addr: 0:14:4f:f9:1f:5d] [LDC: 0x6]
    [Peer LDom: ldom2, mac-addr: 0:14:4f:f8:c9:7c] [LDC: 0x7]
    [Peer LDom: ldom3, mac-addr: 0:14:4f:fb:d9:6d]
    mac-addr=0:14:4f:fb:df:2c
    service: primary-vsw0 @ primary
    [LDC: 0x1]
    Vnet: vnet1 [LDC: 0x9]
    [Peer LDom: ldom1, mac-addr: 0:14:4f:fa:b1:f0] [LDC: 0xa]
    [Peer LDom: ldom5, mac-addr: 0:14:4f:f9:b2:b0] [LDC: 0xb]
    [Peer LDom: ldom6, mac-addr: 0:14:4f:fb:f5:c3] [LDC: 0xc]
    [Peer LDom: ldom7, mac-addr: 0:14:4f:f8:3a:3e] [LDC: 0xd]
    [Peer LDom: ldom2, mac-addr: 0:14:4f:f9:88:a0] [LDC: 0xe]
    [Peer LDom: ldom3, mac-addr: 0:14:4f:fa:aa:57]
    mac-addr=0:14:4f:f9:33:59
    service: primary-vsw1 @ primary
    [LDC: 0x8]
    Vdisk: vdisk4 vol4@primary-vds0
    service: primary-vds0 @ primary
    [LDC: 0xf]
    Vcons: [via LDC:16]
    ldom4@primary-vcc0 [port:5003]
    Vldcc: vldcc0 [Domain Services]
    service: primary-vldc0 @ primary
    [LDC: 0x0]
    Name: ldom5
    State: active
    Flags: transition
    OS:
    Util: 0.2%
    Uptime: 1d 4h 4m
    Vcpu: 4
    vid pid util strand
    0 20 0.6% 100%
    1 21 0.0% 100%
    2 22 0.3% 100%
    3 23 0.0% 100%
    Memory: 2G
    real-addr phys-addr size
    0x8000000 0x288000000 2G
    Vars: boot-device=vdisk
    Vnet: vnet0 [LDC: 0x2]
    [Peer LDom: ldom1, mac-addr: 0:14:4f:fa:1e:4d] [LDC: 0xd]
    [Peer LDom: ldom6, mac-addr: 0:14:4f:f8:27:b8] [LDC: 0xf]
    [Peer LDom: ldom7, mac-addr: 0:14:4f:f9:1f:5d] [LDC: 0x3]
    [Peer LDom: ldom2, mac-addr: 0:14:4f:f8:c9:7c] [LDC: 0x5]
    [Peer LDom: ldom3, mac-addr: 0:14:4f:fb:d9:6d] [LDC: 0x9]
    [Peer LDom: ldom4, mac-addr: 0:14:4f:fb:df:2c]
    mac-addr=0:14:4f:f9:ae:a1
    service: primary-vsw0 @ primary
    [LDC: 0x1]
    Vnet: vnet1 [LDC: 0x7]
    [Peer LDom: ldom1, mac-addr: 0:14:4f:fa:b1:f0] [LDC: 0xe]
    [Peer LDom: ldom6, mac-addr: 0:14:4f:fb:f5:c3] [LDC: 0x10]
    [Peer LDom: ldom7, mac-addr: 0:14:4f:f8:3a:3e] [LDC: 0x4]
    [Peer LDom: ldom2, mac-addr: 0:14:4f:f9:88:a0] [LDC: 0x8]
    [Peer LDom: ldom3, mac-addr: 0:14:4f:fa:aa:57] [LDC: 0xa]
    [Peer LDom: ldom4, mac-addr: 0:14:4f:f9:33:59]
    mac-addr=0:14:4f:f9:b2:b0
    service: primary-vsw1 @ primary
    [LDC: 0x6]
    Vdisk: vdisk5 vol5@primary-vds0
    service: primary-vds0 @ primary
    [LDC: 0xb]
    Vcons: [via LDC:12]
    ldom5@primary-vcc0 [port:5004]
    Vldcc: vldcc0 [Domain Services]
    service: primary-vldc0 @ primary
    [LDC: 0x0]
    Name: ldom6
    State: active
    Flags: transition
    OS:
    Util: 0.3%
    Uptime: 1d 4h 4m
    Vcpu: 4
    vid pid util strand
    0 24 0.5% 100%
    1 25 0.3% 100%
    2 26 0.5% 100%
    3 27 0.0% 100%
    Memory: 2G
    real-addr phys-addr size
    0x8000000 0x308000000 2G
    Vars: boot-device=vdisk
    Vnet: vnet0 [LDC: 0x2]
    [Peer LDom: ldom1, mac-addr: 0:14:4f:fa:1e:4d] [LDC: 0x6]
    [Peer LDom: ldom5, mac-addr: 0:14:4f:f9:ae:a1] [LDC: 0xf]
    [Peer LDom: ldom7, mac-addr: 0:14:4f:f9:1f:5d] [LDC: 0x3]
    [Peer LDom: ldom2, mac-addr: 0:14:4f:f8:c9:7c] [LDC: 0x5]
    [Peer LDom: ldom3, mac-addr: 0:14:4f:fb:d9:6d] [LDC: 0xa]
    [Peer LDom: ldom4, mac-addr: 0:14:4f:fb:df:2c]
    mac-addr=0:14:4f:f8:27:b8
    service: primary-vsw0 @ primary
    [LDC: 0x1]
    Vnet: vnet1 [LDC: 0x8]
    [Peer LDom: ldom1, mac-addr: 0:14:4f:fa:b1:f0] [LDC: 0xc]
    [Peer LDom: ldom5, mac-addr: 0:14:4f:f9:b2:b0] [LDC: 0x10]
    [Peer LDom: ldom7, mac-addr: 0:14:4f:f8:3a:3e] [LDC: 0x4]
    [Peer LDom: ldom2, mac-addr: 0:14:4f:f9:88:a0] [LDC: 0x9]
    [Peer LDom: ldom3, mac-addr: 0:14:4f:fa:aa:57] [LDC: 0xb]
    [Peer LDom: ldom4, mac-addr: 0:14:4f:f9:33:59]
    mac-addr=0:14:4f:fb:f5:c3
    service: primary-vsw1 @ primary
    [LDC: 0x7]
    Vdisk: vdisk6 vol6@primary-vds0
    service: primary-vds0 @ primary
    [LDC: 0xd]
    Vcons: [via LDC:14]
    ldom6@primary-vcc0 [port:5005]
    Vldcc: vldcc0 [Domain Services]
    service: primary-vldc0 @ primary
    [LDC: 0x0]
    Name: ldom7
    State: active
    Flags: transition
    OS:
    Util: 0.4%
    Uptime: 10h 23m
    Vcpu: 4
    vid pid util strand
    0 28 0.6% 100%
    1 29 0.1% 100%
    2 30 0.3% 100%
    3 31 0.2% 100%
    Memory: 1900M
    real-addr phys-addr size
    0x8000000 0x388000000 1900M
    Vars: boot-device=vdisk
    Vnet: vnet0 [LDC: 0x2]
    [Peer LDom: ldom1, mac-addr: 0:14:4f:fa:1e:4d] [LDC: 0x6]
    [Peer LDom: ldom5, mac-addr: 0:14:4f:f9:ae:a1] [LDC: 0x7]
    [Peer LDom: ldom6, mac-addr: 0:14:4f:f8:27:b8] [LDC: 0x3]
    [Peer LDom: ldom2, mac-addr: 0:14:4f:f8:c9:7c] [LDC: 0x5]
    [Peer LDom: ldom3, mac-addr: 0:14:4f:fb:d9:6d] [LDC: 0xb]
    [Peer LDom: ldom4, mac-addr: 0:14:4f:fb:df:2c]
    mac-addr=0:14:4f:f9:1f:5d
    service: primary-vsw0 @ primary
    [LDC: 0x1]
    Vnet: vnet1 [LDC: 0x9]
    [Peer LDom: ldom1, mac-addr: 0:14:4f:fa:b1:f0] [LDC: 0xd]
    [Peer LDom: ldom5, mac-addr: 0:14:4f:f9:b2:b0] [LDC: 0xe]
    [Peer LDom: ldom6, mac-addr: 0:14:4f:fb:f5:c3] [LDC: 0x4]
    [Peer LDom: ldom2, mac-addr: 0:14:4f:f9:88:a0] [LDC: 0xa]
    [Peer LDom: ldom3, mac-addr: 0:14:4f:fa:aa:57] [LDC: 0xc]
    [Peer LDom: ldom4, mac-addr: 0:14:4f:f9:33:59]
    mac-addr=0:14:4f:f8:3a:3e
    service: primary-vsw1 @ primary
    [LDC: 0x8]
    Vdisk: vdisk7 vol7@primary-vds0
    service: primary-vds0 @ primary
    [LDC: 0xf]
    Vcons: [via LDC:16]
    ldom7@primary-vcc0 [port:5006]
    Vldcc: vldcc0 [Domain Services]
    service: primary-vldc0 @ primary
    [LDC: 0x0]
    bash-3.00#
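Guests whose Flags line reads "transition" can be picked out of this long-format output mechanically; a minimal sketch (the helper name is mine, sample rows lifted from the listing above):

```shell
# list_transition: read 1.0-era `ldm ls -l` long-format output on stdin and
# print the name of every domain whose Flags line reads "transition"
list_transition() {
  awk '/^[[:space:]]*Name:/  { name = $2 }
       /^[[:space:]]*Flags:/ && $2 == "transition" { print name }'
}

# Sample rows lifted from the listing above:
printf 'Name: ldom7\nState: active\nFlags: transition\n' | list_transition
# prints: ldom7
```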

  • LDOMs guest hung when LDOM primary reboots

    Hi.
    I have a T5440 with LDOM software v1.3 and a split-bus configuration:
    2 service domains, each one with a pair of buses that addresses:
    * 2 quad Gbit Ethernet card
    * 1 dual port FC HBA
    Now we are defining LUNs in a NetApp storage allowing visibility to both service domains.
    We configured MPGROUP and TOUT in order to get I/O failover in case of a reboot of the primary LDOM, but it doesn't work. The guest LDOMs stay frozen, and we have to stop them (with -f) to continue working.
    We are assuming that the config is OK and supported, but we are totally lost.
    Some help will be appreciated.
    Thanks in advance
    P.S. Here is the vdisk configuration:
    # ldm ls -o disk
    NAME
    primary
    VDS
    NAME VOLUME OPTIONS MPGROUP DEVICE
    primary-vds0 t5k32c01-d2_rootdisk t5k32c01-d2 /dev/rdsk/c4t60A98000572D426F5434575573544D50d0s2
    t5k32c01-d2_zones t5k32c01-d2-1 /dev/rdsk/c4t60A98000572D426F5434575577706B63d0s2
    t5k32c01-d4_rootdisk t5k32c01-d4 /dev/rdsk/c4t60A98000572D426F543457557364654Fd0s2
    t5k32c01-d4_zones t5k32c01-d4-1 /dev/rdsk/c4t60A98000572D426F5434575577746231d0s2
    t5k32c01-d1_rootdisk t5k32c01-d1 /dev/rdsk/c4t60A98000572D426F54345755727A6A44d0s2
    t5k32c01-d1_zones t5k32c01-d1-1 /dev/rdsk/c4t60A98000572D426F5434575577784A52d0s2
    t5k32c01-d3_rootdisk t5k32c01-d3 /dev/rdsk/c4t60A98000572D426F54345755732F5772d0s2
    t5k32c01-d3_zones t5k32c01-d3-1 /dev/rdsk/c4t60A98000572D426F5434575578306F66d0s2
    NAME
    alternate
    VDS
    NAME VOLUME OPTIONS MPGROUP DEVICE
    alternate-vds0 t5k32c01-d2_rootdisk t5k32c01-d2 /dev/rdsk/c0t60A98000572D426F5434575573544D50d0s2
    t5k32c01-d2_zones t5k32c01-d2-1 /dev/rdsk/c0t60A98000572D426F5434575577706B63d0s2
    t5k32c01-d4_rootdisk t5k32c01-d4 /dev/rdsk/c0t60A98000572D426F543457557364654Fd0s2
    t5k32c01-d4_zones t5k32c01-d4-1 /dev/rdsk/c0t60A98000572D426F5434575577746231d0s2
    t5k32c01-d1_rootdisk t5k32c01-d1 /dev/rdsk/c0t60A98000572D426F54345755727A6A44d0s2
    t5k32c01-d1_zones t5k32c01-d1-1 /dev/rdsk/c0t60A98000572D426F5434575577784A52d0s2
    t5k32c01-d3_rootdisk t5k32c01-d3 /dev/rdsk/c0t60A98000572D426F54345755732F5772d0s2
    t5k32c01-d3_zones t5k32c01-d3-1 /dev/rdsk/c0t60A98000572D426F5434575578306F66d0s2
    NAME
    t5k32c01-d1
    DISK
    NAME VOLUME TOUT ID DEVICE SERVER MPGROUP
    rootdisk t5k32c01-d1_rootdisk@primary-vds0 1 0 disk@0 primary t5k32c01-d1
    zones t5k32c01-d1_zones@primary-vds0 1 1 disk@1 primary t5k32c01-d1-1
    NAME
    t5k32c01-d2
    DISK
    NAME VOLUME TOUT ID DEVICE SERVER MPGROUP
    rootdisk t5k32c01-d2_rootdisk@primary-vds0 1 0 disk@0 primary t5k32c01-d2
    zones t5k32c01-d2_zones@primary-vds0 1 1 disk@1 primary t5k32c01-d2-1
    NAME
    t5k32c01-d3
    DISK
    NAME VOLUME TOUT ID DEVICE SERVER MPGROUP
    rootdisk t5k32c01-d3_rootdisk@alternate-vds0 1 0 disk@0 alternate t5k32c01-d3
    zones t5k32c01-d3_zones@alternate-vds0 1 1 disk@1 alternate t5k32c01-d3-1
    NAME
    t5k32c01-d4
    DISK
    NAME VOLUME TOUT ID DEVICE SERVER MPGROUP
    rootdisk t5k32c01-d4_rootdisk@alternate-vds0 1 0 disk@0 alternate t5k32c01-d4
    zones t5k32c01-d4_zones@alternate-vds0 1 1 disk@1 alternate t5k32c01-d4-1
    Edited by: cesar.rivero on May 14, 2010 8:17 AM

    Hi.
    From your response I can't see clearly what is wrong.
    My config shows two different VDSes, one defined in the primary LDOM and one in the alternate:
    t5k32c01:/root # ldm ls -l
    NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
    primary active -n-cv- SP 8 4G 1.8% 2d 23h 2m
    SOFTSTATE
    Solaris running
    IO
    DEVICE PSEUDONYM OPTIONS
    pci@400 pci_0
    pci@500 pci_1
    VCC
    NAME PORT-RANGE
    primary-vcc0 5000-5100
    VSW
    NAME MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE
    vsw-backup-int 00:14:4f:fb:9e:3e aggr1 0 switch@0 1 1 3 1500
    vsw-prep 00:14:4f:fb:40:8a aggr2 1 switch@1 1 1 320,83,82 1500
    vsw-buses 00:14:4f:f8:46:42 aggr3 2 switch@2 1 1 24 1500
    vsw-ora 00:14:4f:fb:63:de aggr4 3 switch@3 1 1 7,8,9 1500
    VDS
    NAME VOLUME OPTIONS MPGROUP DEVICE
    primary-vds0 t5k32c01-d2_rootdisk t5k32c01-d2 /dev/rdsk/c4t60A98000572D426F5434575573544D50d0s2
    t5k32c01-d2_zones t5k32c01-d2-1 /dev/rdsk/c4t60A98000572D426F5434575577706B63d0s2
    t5k32c01-d4_rootdisk t5k32c01-d4 /dev/rdsk/c4t60A98000572D426F543457557364654Fd0s2
    t5k32c01-d4_zones t5k32c01-d4-1 /dev/rdsk/c4t60A98000572D426F5434575577746231d0s2
    t5k32c01-d1_rootdisk t5k32c01-d1 /dev/rdsk/c4t60A98000572D426F54345755727A6A44d0s2
    t5k32c01-d1_zones t5k32c01-d1-1 /dev/rdsk/c4t60A98000572D426F5434575577784A52d0s2
    t5k32c01-d3_rootdisk t5k32c01-d3 /dev/rdsk/c4t60A98000572D426F54345755732F5772d0s2
    t5k32c01-d3_zones t5k32c01-d3-1 /dev/rdsk/c4t60A98000572D426F5434575578306F66d0s2
    VCONS
    NAME SERVICE PORT
    SP
    NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
    alternate active -n--v- 5000 8 4G 14% 2d 23h 46m
    SOFTSTATE
    Solaris running
    IO
    DEVICE PSEUDONYM OPTIONS
    pci@600 pci_2
    pci@700 pci_3
    VSW
    NAME MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE
    vsw-backup-int-alt 00:14:4f:f9:22:6d aggr11 0 switch@0 1 1 3 1500
    vsw-prep-alt 00:14:4f:f8:2a:03 aggr12 1 switch@1 1 1 320,83,82 1500
    vsw-buses-alt 00:14:4f:fa:76:8d aggr13 2 switch@2 1 1 24 1500
    vsw-ora-alt 00:14:4f:f8:d0:d0 aggr14 3 switch@3 1 1 7,8,9 1500
    VDS
    NAME VOLUME OPTIONS MPGROUP DEVICE
    alternate-vds0 t5k32c01-d2_rootdisk t5k32c01-d2 /dev/rdsk/c0t60A98000572D426F5434575573544D50d0s2
    t5k32c01-d2_zones t5k32c01-d2-1 /dev/rdsk/c0t60A98000572D426F5434575577706B63d0s2
    t5k32c01-d4_rootdisk t5k32c01-d4 /dev/rdsk/c0t60A98000572D426F543457557364654Fd0s2
    t5k32c01-d4_zones t5k32c01-d4-1 /dev/rdsk/c0t60A98000572D426F5434575577746231d0s2
    t5k32c01-d1_rootdisk t5k32c01-d1 /dev/rdsk/c0t60A98000572D426F54345755727A6A44d0s2
    t5k32c01-d1_zones t5k32c01-d1-1 /dev/rdsk/c0t60A98000572D426F5434575577784A52d0s2
    t5k32c01-d3_rootdisk t5k32c01-d3 /dev/rdsk/c0t60A98000572D426F54345755732F5772d0s2
    t5k32c01-d3_zones t5k32c01-d3-1 /dev/rdsk/c0t60A98000572D426F5434575578306F66d0s2
    VCONS
    NAME SERVICE PORT
    alternate primary-vcc0@primary 5000
    NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
    t5k32c01-d1 active -n---- 5005 16 8G 0.0% 3d 16h 38m
    SOFTSTATE
    Solaris running
    MAC
    00:14:4f:f9:92:aa
    HOSTID
    0x84f992aa
    CONTROL
    failure-policy=ignore
    DEPENDENCY
    master=
    VCPU
    VID PID UTIL STRAND
    0 16 0.2% 100%
    1 17 0.0% 100%
    2 18 0.0% 100%
    3 19 0.0% 100%
    4 20 0.0% 100%
    5 21 0.0% 100%
    6 22 0.0% 100%
    7 23 0.0% 100%
    8 24 0.0% 100%
    9 25 0.0% 100%
    10 26 0.0% 100%
    11 27 0.0% 100%
    12 28 0.0% 100%
    13 29 0.0% 100%
    14 30 0.0% 100%
    15 31 0.5% 100%
    MAU
    ID CPUSET
    2 (16, 17, 18, 19, 20, 21, 22, 23)
    3 (24, 25, 26, 27, 28, 29, 30, 31)
    MEMORY
    RA PA SIZE
    0x2000000 0x212000000 8G
    VARIABLES
    auto-boot?=true
    boot-device=disk
    NETWORK
    NAME SERVICE ID DEVICE MAC MODE PVID VID MTU LINKPROP
    vnet-backup-int vsw-backup-int@primary 1 network@1 00:14:4f:fb:db:97 1 3 1500 phys-state
    vnet-backup-int-alt vsw-backup-int-alt@alternate 11 network@11 00:14:4f:f8:b3:5d 1 3 1500 phys-state
    vnet-apps vsw-ora@primary 2 network@2 00:14:4f:fa:98:b7 1 8 1500 phys-state
    vnet-apps-alt vsw-ora-alt@alternate 12 network@12 00:14:4f:f9:99:18 1 8 1500 phys-state
    vnet-ora vsw-ora@primary 3 network@3 00:14:4f:fa:25:c3 1 9 1500 phys-state
    vnet-ora-alt vsw-ora-alt@alternate 13 network@13 00:14:4f:f9:00:f3 1 9 1500 phys-state
    DISK
    NAME VOLUME TOUT ID DEVICE SERVER MPGROUP
    rootdisk t5k32c01-d1_rootdisk@primary-vds0 1 0 disk@0 primary t5k32c01-d1
    zones t5k32c01-d1_zones@primary-vds0 1 1 disk@1 primary t5k32c01-d1-1
    VCONS
    NAME SERVICE PORT
    group1 primary-vcc0@primary 5005
    Thanks for your help
    Edited by: cesar.rivero on May 17, 2010 1:23 AM
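For reference rather than a fix: the two-path mpgroup pattern in that listing boils down to commands of this shape, shown here with the d1 root-disk back-ends from the config above (a sketch, not the original command history):

```shell
# Same mpgroup name on both back-ends ties the two paths together
ldm add-vdsdev mpgroup=t5k32c01-d1 \
    /dev/rdsk/c4t60A98000572D426F54345755727A6A44d0s2 t5k32c01-d1_rootdisk@primary-vds0
ldm add-vdsdev mpgroup=t5k32c01-d1 \
    /dev/rdsk/c0t60A98000572D426F54345755727A6A44d0s2 t5k32c01-d1_rootdisk@alternate-vds0

# The guest gets a single vdisk referencing one path; timeout=1 makes I/O
# return an error rather than block forever if no path is available
ldm add-vdisk timeout=1 rootdisk t5k32c01-d1_rootdisk@primary-vds0 t5k32c01-d1
```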

  • Volume as install disk for Guest Domain and Live Upgrade

    Hi Folks,
    I am new to LDOMs and have some questions - any pointers, examples would be much appreciated:
    (1) With support for volumes used as whole disks added in LDOM release 1.0.3, can we export a whole LUN, under either VERITAS DMP or mpxio control, to a guest domain and install Solaris on it? Are there any gotchas or special config required to do this?
    (2) Can Solaris Live Upgrade be used with guest LDOMs, or is this ability limited to control domains?
    Thanks

    The answer to your #1 question is YES.
    Here's my mpxio enabled device.
    non-STMS device name STMS device name
    /dev/rdsk/c2t50060E8010029B33d16 /dev/rdsk/c4t4849544143484920373730313036373530303136d0
    /dev/rdsk/c3t50060E8010029B37d16 /dev/rdsk/c4t4849544143484920373730313036373530303136d0
    create the virtual disk using slice 2
    ldm add-vdsdev /dev/dsk/c4t4849544143484920373730313036373530303136d0s2 bootdisk@primary-vds01
    add the virtual disk to the guest domain
    ldm add-vdisk apps bootdisk@primary-vds01 ldom1
    the virtual disk will be imported into the guest as c0d0, which is the whole LUN itself.
    Bind and start ldom1, then install the OS (I used JumpStart); it partitioned the boot disk c0d0 as / 15GB, with swap taking the remaining space (10GB).
    When you run format's print command on this disk in both the guest and primary domains, you'll see the same slice/size information:
    Part Tag Flag Cylinders Size Blocks
    0 root wm 543 - 1362 15.01GB (820/0/0) 31488000
    1 swap wu 0 - 542 9.94GB (543/0/0) 20851200
    2 backup wm 0 - 1362 24.96GB (1363/0/0) 52339200
    3 unassigned wm 0 0 (0/0/0) 0
    4 unassigned wm 0 0 (0/0/0) 0
    5 unassigned wm 0 0 (0/0/0) 0
    6 unassigned wm 0 0 (0/0/0) 0
    7 unassigned wm 0 0 (0/0/0) 0
    I haven't used DMP, but HDLM (Hitachi Dynamic Link Manager) does not seem to be supported by LDOMs, as I cannot make it work :(
    Unfortunately I have no answer to your second question.

  • Error "NOTICE: [0] disk access failed" during guest domain network booting

    Hi,
    Could you please tell me what is the problem with my configuration?
    I created guest domain on my T1000 server.
    As a disk I used disk from disk array: /dev/dsk/c0t18d0
    I added disk using commands:
    # ldm add-vdsdev /dev/dsk/c0t18d0 vol1@primary-vds0
    # ldm add-vdisk vdisk1 vol1@primary-vds0 myldom1
    # ldm set-variable auto-boot\?=false myldom1
    # ldm set-variable boot-device=/virtual-devices@100/channel-devices@200/disk@0 myldom1
    Then I logged in to the guest domain and booted from the network to install the OS from the JumpStart server:
    {0} ok boot net - install
    Boot device: /virtual-devices@100/channel-devices@200/network@0 File and args: - install
    Requesting Internet Address for 0:14:4f:f9:78:19
    SunOS Release 5.10 Version Generic_137137-09 64-bit
    Copyright 1983-2008 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Configuring devices.
    NOTICE: [0] disk access failed.
    Checking rules.ok file...
    Using begin script: install_begin
    Using finish script: patch_finish
    Executing SolStart preinstall phase...
    Executing begin script "install_begin"...
    Begin script install_begin execution completed.
    ERROR: No disks found
    - Check to make sure disks are cabled and powered up
    Solaris installation program exited.
    Configuration:
    [root@gt1000a /]# ldm list-bindings
    NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
    primary active -n-cv- SP 4 2G 0.5% 2h 23m
    MAC
    00:14:4f:9f:71:4e
    HOSTID
    0x849f714e
    VCPU
    VID PID UTIL STRAND
    0 0 5.3% 100%
    1 1 0.5% 100%
    2 2 0.5% 100%
    3 3 0.4% 100%
    MAU
    ID CPUSET
    0 (0, 1, 2, 3)
    MEMORY
    RA PA SIZE
    0x8000000 0x8000000 2G
    VARIABLES
    keyboard-layout=US-English
    IO
    DEVICE PSEUDONYM OPTIONS
    pci@780 bus_a
    pci@7c0 bus_b
    VCC
    NAME PORT-RANGE
    primary-vcc0 5000-5100
    CLIENT PORT
    myldom1@primary-vcc0 5000
    VSW
    NAME MAC NET-DEV DEVICE DEFAULT-VLAN-ID PVID VID MODE
    primary-vsw0 00:14:4f:fa:ca:94 bge0 switch@0 1 1
    PEER MAC PVID VID
    vnet0@myldom1 00:14:4f:f9:78:19 1
    VDS
    NAME VOLUME OPTIONS MPGROUP DEVICE
    primary-vds0 vol1 /dev/dsk/c0t18d0
    CLIENT VOLUME
    vdisk1@myldom1 vol1
    VCONS
    NAME SERVICE PORT
    SP
    NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
    myldom1 active -n---- 5000 12 2G 0.1% 2h 18m
    MAC
    00:14:4f:f9:e7:ae
    HOSTID
    0x84f9e7ae
    VCPU
    VID PID UTIL STRAND
    0 4 0.5% 100%
    1 5 0.0% 100%
    2 6 0.0% 100%
    3 7 0.0% 100%
    4 8 0.0% 100%
    5 9 0.0% 100%
    6 10 0.0% 100%
    7 11 0.0% 100%
    8 12 0.0% 100%
    9 13 0.0% 100%
    10 14 0.0% 100%
    11 15 0.0% 100%
    MEMORY
    RA PA SIZE
    0x8000000 0x88000000 2G
    VARIABLES
    auto-boot?=false
    boot-device=/virtual-devices@100/channel-devices@200/disk@0
    NETWORK
    NAME SERVICE DEVICE MAC MODE PVID VID
    vnet0 primary-vsw0@primary network@0 00:14:4f:f9:78:19 1
    PEER MAC MODE PVID VID
    primary-vsw0@primary 00:14:4f:fa:ca:94 1
    DISK
    NAME VOLUME TOUT DEVICE SERVER MPGROUP
    vdisk1 vol1@primary-vds0 disk@0 primary
    VCONS
    NAME SERVICE PORT
    myldom1 primary-vcc0@primary 5000
    [root@gt1000a /]#
    Kind regards,
    Daniel

    Issue solved.
    There was a wrong disk name:
    primary-vds0 vol1 /dev/dsk/c0t18d0
    I changed it to c0t18d0s2 and have now successfully installed the OS from JumpStart.
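A cheap guard against this mistake is to check the back-end path before handing it to ldm add-vdsdev; a minimal sketch (the helper is hypothetical, and the s2 convention matches the fix above):

```shell
# check_backend: warn when a block-device back-end path has no explicit
# slice suffix -- exporting /dev/dsk/c0t18d0 instead of c0t18d0s2 is
# exactly what broke the install above
check_backend() {
  case "$1" in
    *s[0-7]) echo "ok: $1" ;;
    *)       echo "warning: $1 has no slice suffix (use s2 for the whole disk)" ;;
  esac
}

check_backend /dev/dsk/c0t18d0     # prints the warning
check_backend /dev/dsk/c0t18d0s2   # prints: ok: /dev/dsk/c0t18d0s2
```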

  • Mounting Cdrom Drive In Guest Domain running 1.0.3

    Folks,
    I have followed the Admin Guide on how to export a CD-ROM/DVD drive from the service domain to a guest domain; however, once it is allocated to the guest domain we are unable to mount the device. Any help is appreciated.
    Here are my bindings
    ldm list-bindings primary
    NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
    primary active -n-cv SP 4 2G 0.8% 7d 18h 56m
    MAC
    00:14:4f:02:ca:6a
    VCPU
    VID PID UTIL STRAND
    0 0 1.0% 100%
    1 1 0.5% 100%
    2 2 0.7% 100%
    3 3 0.8% 100%
    MAU
    CPUSET
    (0, 1, 2, 3)
    MEMORY
    RA PA SIZE
    0x8000000 0x8000000 2G
    VARIABLES
    boot-device=/pci@780/pci@0/pci@9/scsi@0/disk@1,0:a /pci@780/pci@0/pci@9/scsi@0/disk@0,0:a
    IO
    DEVICE PSEUDONYM OPTIONS
    pci@780 bus_a
    pci@7c0 bus_b
    VCC
    NAME PORT-RANGE
    primary-vcc0 5000-5031
    CLIENT PORT
    ldom3@ldom3 5002
    VDS
    NAME VOLUME OPTIONS DEVICE
    primary-vds0 cdrom /dev/dsk/c1t0d0s2
    vol1 /ldom1/bootfile1
    vol11 /dev/rdsk/c6t60060480000287750594534653353445d0s2
    vol2 /ldom2/bootfile2
    vol3 /bootpool/bootfile3
    CLIENT VOLUME
    vdisk0@ldom3 vol3
    vdisk11@ldom3 vol11
    cdrom@ldom3 cdrom
    VSW
    NAME MAC NET-DEV DEVICE MODE
    primary-vsw0 00:14:4f:fa:c6:14 e1000g0 switch@0 prog,promisc
    PEER MAC
    vnet1@ldom3 00:14:4f:f9:c8:15
    VCONS
    NAME SERVICE PORT
    SP
    ldm list-bindings ldom3
    root@host# ldm list-bindings ldom3
    NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
    ldom3 active -n--- 5002 4 1900M 0.1% 15m
    MAC
    00:14:4f:f8:6c:1d
    VCPU
    VID PID UTIL STRAND
    0 4 0.3% 100%
    1 5 0.0% 100%
    2 6 0.0% 100%
    3 7 0.0% 100%
    MEMORY
    RA PA SIZE
    0x8000000 0x88000000 1900M
    VARIABLES
    keyboard-layout=UK-English
    NETWORK
    NAME SERVICE DEVICE MAC
    vnet1 primary-vsw0@primary network@0 00:14:4f:f9:c8:15
    PEER MAC
    primary-vsw0@primary 00:14:4f:fa:c6:14
    DISK
    NAME VOLUME TOUT DEVICE SERVER
    vdisk0 vol3@primary-vds0 disk@0 primary
    vdisk11 vol11@primary-vds0 disk@1 primary
    cdrom cdrom@primary-vds0 disk@2 primary
    VCONS
    NAME SERVICE PORT
    ldom3 primary-vcc0@primary 5002
    We can see the exported cdrom volume in /dev/dsk as a new device and also at the ok prompt after running devalias.
    However, we need to mount this device so that we can install software in our domain. All our attempts at mounting just hang, and I end up having to stop/start the whole domain.
    Is there an easy way to mount CD-ROM/DVD drives in a guest domain?
    TIA

    There is a recently filed bug where DVDs (and ISO images) that do not have a VTOC in their disk label
    cannot be mounted in the guest domain (e.g. application software DVDs, as opposed to OS installation images).
    6708257 DVD-ROM (not OS installation disk) can not mount from guest domain of LDOM 1.0.2
    # mount -F hsfs /dev/dsk/c0d1s0 /mnt
    mount: I/O error
    mount: cannot mount /dev/dsk/c0d1s0
    It's strange that you are getting a hang though (the reported error is an I/O error).
    Are there any messages in /var/adm/message of the control domain?
    What OS/patches are running on the guest and control domains (i.e is 127127-11 installed on both) ?
    I've just had a thought though: exporting the DVD device as a slice instead may work
    (it works for me on an ISO image, but my machine is thousands of miles away so I can't stick a DVD in it
    to try it out),
    e.g. ldm add-vdsdev options=slice /dev/dsk/c1t0d0s2 cdrom@primary-vds0
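Along the same lines, an ISO image of the software can be exported in place of the physical drive. A sketch, with a hypothetical image path, reusing the service and domain names from the bindings above:

```shell
# Hypothetical image path; primary-vds0 and ldom3 come from the bindings above.
# On the control domain, export the ISO as a virtual disk:
ldm add-vdsdev /export/isos/software.iso iso_vol@primary-vds0
ldm add-vdisk vdisk_iso iso_vol@primary-vds0 ldom3

# Inside ldom3, mount the new disk read-only as HSFS (the cXdN index
# depends on how many vdisks the guest already has):
mount -F hsfs -o ro /dev/dsk/c0d3s0 /mnt
```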

  • Backup the Complete Guest Domain

    Hi All,
    What is the best way to back up a complete guest domain, including the configuration and all, so that it can be migrated to another server?
    Thanks in advance

    The best way in my opinion is NOT to copy the guest domain.
    Instead, have the vdisk back-end device be on shared storage (via SAN or NFS) and then just use LDOM migration to move the guest domain to another server.
    Ori
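To complement that, the guest's definition itself can be captured as constraints XML and replayed on the target machine; a sketch (ldom1 is just an example name, and the vdisk back-ends must already be reachable from the target):

```shell
# On the source control domain: save the guest's constraints as XML
ldm list-constraints -x ldom1 > ldom1.xml

# On the target control domain: recreate the guest from that XML,
# then bind and start it
ldm add-domain -i ldom1.xml
ldm bind-domain ldom1
ldm start-domain ldom1
```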

  • Cdrom install in new guest domain

    Hi all --
    I'm waiting for my T1000 to show up (should be here tomorrow) and I'm whiling away my time with reading the Admin Guide.
    Our Solaris machines (and these LDOMs) are on a network that already has a Red Hat Kickstart server. I don't really feel like messing with that, and our volume of Solaris installs is so small that I don't want to spend the time setting up a JumpStart server.
    It seems that it's possible to boot and start a cd-rom based install of Solaris 10 from within a guest domain, but the docs don't mention how to do it.
    Can someone shed some light for me?
    Thanks!

    Well, the T1000 is here, and lo and behold, no cdrom drive.
    So, the question now becomes: is it possible to have it boot off of an .iso, or is netboot my only option?
