Configure Solaris Cluster to fail over a guest domain when its NICs are down

Hi,
I am running Solaris 11 as the control domain on 2 clustered nodes running Solaris Cluster 4. A Solaris 10 guest domain is managed by the Solaris Cluster in failover mode.
Two virtual switches, connected to two different network switches, are presented to the guest domain. I would like to use link-based IPMP to provide HA for the network connections, and I understand that in this case IPMP can only be configured within the guest domain. Now the question is: how do I configure things so that the guest domain fails over to the second cluster node (the standby control domain) if both network interfaces are down? Thanks.

The Solaris Cluster 4.1 Installation and Concepts Guides are available at :-
http://docs.oracle.com/cd/E29086_01/index.html
Thanks.
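For reference, a minimal sketch of link-based IPMP inside the Solaris 10 guest domain, assuming the two virtual interfaces show up as vnet0 and vnet1 (check with dladm show-dev) and using a placeholder address:
# cat /etc/hostname.vnet0
192.168.10.21 netmask + broadcast + group ipmp0 up
# cat /etc/hostname.vnet1
group ipmp0 up
Note that for the guest to see a link-down event when a physical NIC fails, the vnets may also need physical link state propagation enabled from the control domain (ldm set-vnet linkprop=phys-state ...), depending on the LDoms release.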

Similar Messages

  • Can I install Sun Cluster on LDOM guest domains? Is Oracle RAC a supported configuration?

    Hello,
    Can I install Sun Cluster on LDOM guest domains? Is Oracle RAC on LDOM guest domains of 2 physical servers a supported configuration from Oracle?
    Many thanks in advance
    Ushas Symon

    Hello,
    The motive behind using LDom guest domains as RAC nodes is to have better control of resource allocation, since I will have more than one guest domain, each performing a different function. The customer wants to have Oracle RAC alone (without Sun Cluster).
    I will have two T5120s and one 2540 shared storage array.
    My plan of configuration is to have:
    Control & I/O domain with 8 VCPUs, 6GB memory
    One LDOM guest domain on each physical machine with 8 VCPUs, 8GB of memory, shared network and disks, participating as RAC nodes. (I don't know yet if I will use Solaris Cluster or not.)
    One guest domain on each physical machine with 12 VCPUs, 14GB of memory, shared network and disks, participating as BEA WebLogic cluster nodes (not on Solaris Cluster).
    One guest domain on each physical machine with 4 VCPUs, 4GB of memory, shared network and disks, participating as an Apache web cluster (on Solaris Cluster).
    Now, my question is: is it a supported configuration to have guest domains as Oracle RAC participants for 11gR2 (either with or without Solaris Cluster)?
    If I need to configure the RAC nodes on Solaris Cluster, is it possible to have two independent clusters on LDOMs: one 2-node cluster for RAC and another 2-node cluster for the Apache web tier?
    Kindly advise
    Many thanks in advance
    Ushas Symon

  • How to reboot a guest domain when hung and ldm stop-domain doesn't work

    Hi, the configuration is as follows.
    SF T1000 (32 threads, 16 GB memory)
    Latest Firmware and the LDOM patch (-02) applied.
    This is how the LDOMs are setup.
    Instance CPUs Memory
    Service domain 4 2g
    ldom1 4 2g
    ldom2 4 2g
    ldom3 4 2g
    ldom4 4 2g
    ldom5 4 2g
    ldom6 4 2g
    ldom7 4 1.9g
    All guest domains run on disk images on a mirrored BE on the service domain, around 7 GB each, with SUNWCXall installed.
    However, I have had a few hangs, especially when working over the virtual switch on the domains.
    At the moment ldom1 is totally hung. See below for info:
    bash-3.00# ldm list-domain
    Name State Flags Cons VCPU Memory Util Uptime
    primary active -t-cv SP 4 2G 0.5% 1d 1h 17m
    ldom1 active -t--- 5000 4 2G 25% 2h 14m
    ldom2 active -t--- 5001 4 2G 0.2% 2h 35m
    ldom3 active ----- 5002 4 2G 0.2% 47m
    ldom4 active ----- 5003 4 2G 0.2% 1d 1h 10m
    ldom5 active -t--- 5004 4 2G 0.3% 1d 1h 10m
    ldom6 active -t--- 5005 4 2G 0.2% 1d 1h 10m
    ldom7 active -t--- 5006 4 1900M 0.2% 7h 29m
    bash-3.00#
    bash-3.00# ldm stop-domain ldom1
    LDom ldom1 stop notification failed
    bash-3.00#
    bash-3.00# telnet localhost 5000
    Trying 127.0.0.1...
    Connected to localhost.
    Escape character is '^]'.
    Connecting to console "ldom1" in group "ldom1" ....
    Press ~? for control options ..
    <COMMENT: ~w sent!>
    Warning: another user currently has write permission
    to this console and forcibly removing him/her will terminate
    any current write action and all work will be lost.
    Would you like to continue?[y/n] y
    < COMMENT: I don't get any response when hitting enter and ~# (break) doesn't seem to work....>
    I cannot ssh to ldom1 since it appears to be dead!
    Anyone know if I can send some sort of reset to this hung domain? How can I troubleshoot it?
    Regards,
    Daniel

    UPDATE 2
    =========
    When I attached to ldom3 through the console service, this domain had also hung.
    Below is some LDOM information.
    bash-3.00# ldm list-services
    Vldc: primary-vldc0
    Vldc: primary-vldc3
    Vds: primary-vds0
    vdsdev: vol1 device=/ldoms/be/ldom_1.img
    vdsdev: vol5 device=/ldoms/be/ldom_5.img
    vdsdev: vol6 device=/ldoms/be/ldom_6.img
    vdsdev: vol7 device=/ldoms/be/ldom_7.img
    vdsdev: vol2 device=/ldoms/be/ldom_2.img
    vdsdev: vol3 device=/ldoms/be/ldom_3.img
    vdsdev: vol4 device=/ldoms/be/ldom_4.img
    Vcc: primary-vcc0
    port-range=5000-5100
    Vsw: primary-vsw0
    mac-addr=0:14:4f:f8:66:9f
    net-dev=bge0
    mode=prog,promisc
    Vsw: primary-vsw1
    mac-addr=0:14:4f:f9:dd:53
    net-dev=bge1
    mode=prog,promisc
    bash-3.00# ldm list-devices
    vCPU:
    vCPUID %FREE
    MAU:
    Free MA-Units:
    cpuset (0, 1, 2, 3)
    cpuset (4, 5, 6, 7)
    cpuset (8, 9, 10, 11)
    cpuset (12, 13, 14, 15)
    cpuset (16, 17, 18, 19)
    cpuset (20, 21, 22, 23)
    cpuset (24, 25, 26, 27)
    cpuset (28, 29, 30, 31)
    Memory:
    Available mblocks:
    PADDR SIZE
    0x3fec00000 20M (0x1400000)
    I/O Devices:
    Free Devices:
    bash-3.00# ldm list-domains
    Unknown command list-domains; use --help option for list of available commands
    bash-3.00# ldm list-domain
    Name State Flags Cons VCPU Memory Util Uptime
    primary active -t-cv SP 4 2G 0.7% 1d 4h 8m
    ldom1 active -t--- 5000 4 2G 0.3% 1h 24m
    ldom2 active -t--- 5001 4 2G 0.6% 5h 26m
    ldom3 active ----- 5002 4 2G 25% 3h 38m
    ldom4 active ----- 5003 4 2G 0.1% 1d 4h 1m
    ldom5 active -t--- 5004 4 2G 0.1% 1d 4h 1m
    ldom6 active -t--- 5005 4 2G 0.7% 1d 4h 1m
    ldom7 active -t--- 5006 4 1900M 0.1% 10h 20m
    bash-3.00#
    bash-3.00# ldm list-bindings
    Name: primary
    State: active
    Flags: transition,control,vio service
    OS:
    Util: 0.5%
    Uptime: 1d 4h 11m
    Vcpu: 4
    vid pid util strand
    0 0 0.9% 100%
    1 1 0.8% 100%
    2 2 0.2% 100%
    3 3 0.3% 100%
    Memory: 2G
    real-addr phys-addr size
    0x8000000 0x8000000 2G
    Vars: reboot-command=boot
    IO: pci@780 (bus_a)
    pci@7c0 (bus_b)
    Vldc: primary-vldc0
    [LDC: 0x0]
    [(HV Control channel)]
    [LDC: 0x1]
    [LDom primary   (Domain Services channel)]
    [LDC: 0x3]
    [LDom primary   (FMA Services channel)]
    [LDC: 0xb]
    [LDom ldom1     (Domain Services channel)]
    [LDC: 0x22]
    [LDom ldom5     (Domain Services channel)]
    [LDC: 0x27]
    [LDom ldom6     (Domain Services channel)]
    [LDC: 0x2d]
    [LDom ldom7     (Domain Services channel)]
    [LDC: 0x10]
    [LDom ldom2     (Domain Services channel)]
    [LDC: 0x18]
    [LDom ldom3     (Domain Services channel)]
    [LDC: 0x1d]
    [LDom ldom4     (Domain Services channel)]
    Vldc: primary-vldc3
    [LDC: 0x14]
    [spds (SP channel)]
    [LDC: 0xd]
    [system-management (SP channel)]
    [LDC: 0x6]
    [sunvts (SP channel)]
    [LDC: 0x7]
    [sunmc (SP channel)]
    [LDC: 0x8]
    [explorer (SP channel)]
    [LDC: 0x9]
    [led (SP channel)]
    [LDC: 0xa]
    [flashupdate (SP channel)]
    Vds: primary-vds0
    vdsdev: vol1 device=/ldoms/be/ldom_1.img
    vdsdev: vol5 device=/ldoms/be/ldom_5.img
    vdsdev: vol6 device=/ldoms/be/ldom_6.img
    vdsdev: vol7 device=/ldoms/be/ldom_7.img
    vdsdev: vol2 device=/ldoms/be/ldom_2.img
    vdsdev: vol3 device=/ldoms/be/ldom_3.img
    vdsdev: vol4 device=/ldoms/be/ldom_4.img
    [LDom  ldom1, dev-name: vol1]
    [LDC: 0xe]
    [LDom  ldom5, dev-name: vol5]
    [LDC: 0x25]
    [LDom  ldom6, dev-name: vol6]
    [LDC: 0x2a]
    [LDom  ldom7, dev-name: vol7]
    [LDC: 0x30]
    [LDom  ldom2, dev-name: vol2]
    [LDC: 0x13]
    [LDom  ldom3, dev-name: vol3]
    [LDC: 0x1b]
    [LDom  ldom4, dev-name: vol4]
    [LDC: 0x20]
    Vcc: primary-vcc0
    [LDC: 0xf]
    [LDom ldom1, group: ldom1, port: 5000]
    [LDC: 0x26]
    [LDom ldom5, group: ldom5, port: 5004]
    [LDC: 0x2c]
    [LDom ldom6, group: ldom6, port: 5005]
    [LDC: 0x31]
    [LDom ldom7, group: ldom7, port: 5006]
    [LDC: 0x15]
    [LDom ldom2, group: ldom2, port: 5001]
    [LDC: 0x1c]
    [LDom ldom3, group: ldom3, port: 5002]
    [LDC: 0x21]
    [LDom ldom4, group: ldom4, port: 5003]
    port-range=5000-5100
    Vsw: primary-vsw0
    mac-addr=0:14:4f:f8:66:9f
    net-dev=bge0
    [LDC: 0xc]
    [LDom ldom1, name: vnet1, mac-addr: 0:14:4f:fa:1e:4d]
    [LDC: 0x23]
    [LDom ldom5, name: vnet0, mac-addr: 0:14:4f:f9:ae:a1]
    [LDC: 0x28]
    [LDom ldom6, name: vnet0, mac-addr: 0:14:4f:f8:27:b8]
    [LDC: 0x2e]
    [LDom ldom7, name: vnet0, mac-addr: 0:14:4f:f9:1f:5d]
    [LDC: 0x11]
    [LDom ldom2, name: vnet0, mac-addr: 0:14:4f:f8:c9:7c]
    [LDC: 0x19]
    [LDom ldom3, name: vnet0, mac-addr: 0:14:4f:fb:d9:6d]
    [LDC: 0x1e]
    [LDom ldom4, name: vnet0, mac-addr: 0:14:4f:fb:df:2c]
    mode=prog,promisc
    Vsw: primary-vsw1
    mac-addr=0:14:4f:f9:dd:53
    net-dev=bge1
    [LDC: 0x2b]
    [LDom ldom1, name: vnet2, mac-addr: 0:14:4f:fa:b1:f0]
    [LDC: 0x24]
    [LDom ldom5, name: vnet1, mac-addr: 0:14:4f:f9:b2:b0]
    [LDC: 0x29]
    [LDom ldom6, name: vnet1, mac-addr: 0:14:4f:fb:f5:c3]
    [LDC: 0x2f]
    [LDom ldom7, name: vnet1, mac-addr: 0:14:4f:f8:3a:3e]
    [LDC: 0x12]
    [LDom ldom2, name: vnet1, mac-addr: 0:14:4f:f9:88:a0]
    [LDC: 0x1a]
    [LDom ldom3, name: vnet1, mac-addr: 0:14:4f:fa:aa:57]
    [LDC: 0x1f]
    [LDom ldom4, name: vnet1, mac-addr: 0:14:4f:f9:33:59]
    mode=prog,promisc
    Vldcc: vldcc1 [FMA Services]
    service: ldmfma
    service: primary-vldc0 @ primary
    [LDC: 0x4]
    Vldcc: vldcc2 [SP channel]
    service: spfma
    [LDC: 0x5]
    Vldcc: vldcc0 [Domain Services]
    service: primary-vldc0 @ primary
    [LDC: 0x2]
    Vldcc: hvctl [Hypervisor Control]
    service: primary-vldc0 @ primary
    [LDC: 0x0]
    Vcons: SP
    Name: ldom1
    State: active
    Flags: transition
    OS:
    Util: 0.3%
    Uptime: 1h 27m
    Vcpu: 4
    vid pid util strand
    0 4 0.5% 100%
    1 5 0.6% 100%
    2 6 0.1% 100%
    3 7 0.0% 100%
    Memory: 2G
    real-addr phys-addr size
    0x8000000 0x88000000 2G
    Vars: auto-boot?=false
    boot-device=/virtual-devices@100/channel-devices@200/disk@0:a vdisk
    nvramrc=devalias vnet /virtual-devices@100/channel-devices@200/network@0
    use-nvramrc?=true
    Vnet: vnet1 [LDC: 0xb]
    [Peer LDom: ldom5, mac-addr: 0:14:4f:f9:ae:a1] [LDC: 0xd]
    [Peer LDom: ldom6, mac-addr: 0:14:4f:f8:27:b8] [LDC: 0xf]
    [Peer LDom: ldom7, mac-addr: 0:14:4f:f9:1f:5d] [LDC: 0x4]
    [Peer LDom: ldom2, mac-addr: 0:14:4f:f8:c9:7c] [LDC: 0x6]
    [Peer LDom: ldom3, mac-addr: 0:14:4f:fb:d9:6d] [LDC: 0x8]
    [Peer LDom: ldom4, mac-addr: 0:14:4f:fb:df:2c]
    mac-addr=0:14:4f:fa:1e:4d
    service: primary-vsw0 @ primary
    [LDC: 0x1]
    Vnet: vnet2 [LDC: 0xc]
    [Peer LDom: ldom5, mac-addr: 0:14:4f:f9:b2:b0] [LDC: 0xe]
    [Peer LDom: ldom6, mac-addr: 0:14:4f:fb:f5:c3] [LDC: 0x10]
    [Peer LDom: ldom7, mac-addr: 0:14:4f:f8:3a:3e] [LDC: 0x5]
    [Peer LDom: ldom2, mac-addr: 0:14:4f:f9:88:a0] [LDC: 0x7]
    [Peer LDom: ldom3, mac-addr: 0:14:4f:fa:aa:57] [LDC: 0x9]
    [Peer LDom: ldom4, mac-addr: 0:14:4f:f9:33:59]
    mac-addr=0:14:4f:fa:b1:f0
    service: primary-vsw1 @ primary
    [LDC: 0xa]
    Vdisk: vdisk1 vol1@primary-vds0
    service: primary-vds0 @ primary
    [LDC: 0x2]
    Vcons: [via LDC:3]
    ldom1@primary-vcc0 [port:5000]
    Vldcc: vldcc0 [Domain Services]
    service: primary-vldc0 @ primary
    [LDC: 0x0]
    Name: ldom2
    State: active
    Flags: transition
    OS:
    Util: 0.1%
    Uptime: 5h 29m
    Vcpu: 4
    vid pid util strand
    0 8 0.6% 100%
    1 9 0.1% 100%
    2 10 0.0% 100%
    3 11 0.2% 100%
    Memory: 2G
    real-addr phys-addr size
    0x8000000 0x108000000 2G
    Vars: boot-device=vdisk
    Vnet: vnet0 [LDC: 0x2]
    [Peer LDom: ldom1, mac-addr: 0:14:4f:fa:1e:4d] [LDC: 0x3]
    [Peer LDom: ldom5, mac-addr: 0:14:4f:f9:ae:a1] [LDC: 0x4]
    [Peer LDom: ldom6, mac-addr: 0:14:4f:f8:27:b8] [LDC: 0x5]
    [Peer LDom: ldom7, mac-addr: 0:14:4f:f9:1f:5d] [LDC: 0xd]
    [Peer LDom: ldom3, mac-addr: 0:14:4f:fb:d9:6d] [LDC: 0xf]
    [Peer LDom: ldom4, mac-addr: 0:14:4f:fb:df:2c]
    mac-addr=0:14:4f:f8:c9:7c
    service: primary-vsw0 @ primary
    [LDC: 0x1]
    Vnet: vnet1 [LDC: 0x7]
    [Peer LDom: ldom1, mac-addr: 0:14:4f:fa:b1:f0] [LDC: 0x8]
    [Peer LDom: ldom5, mac-addr: 0:14:4f:f9:b2:b0] [LDC: 0x9]
    [Peer LDom: ldom6, mac-addr: 0:14:4f:fb:f5:c3] [LDC: 0xa]
    [Peer LDom: ldom7, mac-addr: 0:14:4f:f8:3a:3e] [LDC: 0xe]
    [Peer LDom: ldom3, mac-addr: 0:14:4f:fa:aa:57] [LDC: 0x10]
    [Peer LDom: ldom4, mac-addr: 0:14:4f:f9:33:59]
    mac-addr=0:14:4f:f9:88:a0
    service: primary-vsw1 @ primary
    [LDC: 0x6]
    Vdisk: vdisk2 vol2@primary-vds0
    service: primary-vds0 @ primary
    [LDC: 0xb]
    Vcons: [via LDC:12]
    ldom2@primary-vcc0 [port:5001]
    Vldcc: vldcc0 [Domain Services]
    service: primary-vldc0 @ primary
    [LDC: 0x0]
    Name: ldom3
    State: active
    Flags:
    OS:
    Util: 24%
    Uptime: 3h 42m
    Vcpu: 4
    vid pid util strand
    0 12 100% 100%
    1 13 1.4% 100%
    2 14 1.4% 100%
    3 15 1.4% 100%
    Memory: 2G
    real-addr phys-addr size
    0x8000000 0x188000000 2G
    Vars: boot-device=vdisk
    Vnet: vnet0 [LDC: 0x2]
    [Peer LDom: ldom1, mac-addr: 0:14:4f:fa:1e:4d] [LDC: 0x3]
    [Peer LDom: ldom5, mac-addr: 0:14:4f:f9:ae:a1] [LDC: 0x4]
    [Peer LDom: ldom6, mac-addr: 0:14:4f:f8:27:b8] [LDC: 0x5]
    [Peer LDom: ldom7, mac-addr: 0:14:4f:f9:1f:5d] [LDC: 0x6]
    [Peer LDom: ldom2, mac-addr: 0:14:4f:f8:c9:7c] [LDC: 0xf]
    [Peer LDom: ldom4, mac-addr: 0:14:4f:fb:df:2c]
    mac-addr=0:14:4f:fb:d9:6d
    service: primary-vsw0 @ primary
    [LDC: 0x1]
    Vnet: vnet1 [LDC: 0x8]
    [Peer LDom: ldom1, mac-addr: 0:14:4f:fa:b1:f0] [LDC: 0x9]
    [Peer LDom: ldom5, mac-addr: 0:14:4f:f9:b2:b0] [LDC: 0xa]
    [Peer LDom: ldom6, mac-addr: 0:14:4f:fb:f5:c3] [LDC: 0xb]
    [Peer LDom: ldom7, mac-addr: 0:14:4f:f8:3a:3e] [LDC: 0xc]
    [Peer LDom: ldom2, mac-addr: 0:14:4f:f9:88:a0] [LDC: 0x10]
    [Peer LDom: ldom4, mac-addr: 0:14:4f:f9:33:59]
    mac-addr=0:14:4f:fa:aa:57
    service: primary-vsw1 @ primary
    [LDC: 0x7]
    Vdisk: vdisk3 vol3@primary-vds0
    service: primary-vds0 @ primary
    [LDC: 0xd]
    Vcons: [via LDC:14]
    ldom3@primary-vcc0 [port:5002]
    Vldcc: vldcc0 [Domain Services]
    service: primary-vldc0 @ primary
    [LDC: 0x0]
    Name: ldom4
    State: active
    Flags:
    OS:
    Util: 0.2%
    Uptime: 1d 4h 4m
    Vcpu: 4
    vid pid util strand
    0 16 0.4% 100%
    1 17 0.3% 100%
    2 18 0.1% 100%
    3 19 0.0% 100%
    Memory: 2G
    real-addr phys-addr size
    0x8000000 0x208000000 2G
    Vars: boot-device=vdisk
    Vnet: vnet0 [LDC: 0x2]
    [Peer LDom: ldom1, mac-addr: 0:14:4f:fa:1e:4d] [LDC: 0x3]
    [Peer LDom: ldom5, mac-addr: 0:14:4f:f9:ae:a1] [LDC: 0x4]
    [Peer LDom: ldom6, mac-addr: 0:14:4f:f8:27:b8] [LDC: 0x5]
    [Peer LDom: ldom7, mac-addr: 0:14:4f:f9:1f:5d] [LDC: 0x6]
    [Peer LDom: ldom2, mac-addr: 0:14:4f:f8:c9:7c] [LDC: 0x7]
    [Peer LDom: ldom3, mac-addr: 0:14:4f:fb:d9:6d]
    mac-addr=0:14:4f:fb:df:2c
    service: primary-vsw0 @ primary
    [LDC: 0x1]
    Vnet: vnet1 [LDC: 0x9]
    [Peer LDom: ldom1, mac-addr: 0:14:4f:fa:b1:f0] [LDC: 0xa]
    [Peer LDom: ldom5, mac-addr: 0:14:4f:f9:b2:b0] [LDC: 0xb]
    [Peer LDom: ldom6, mac-addr: 0:14:4f:fb:f5:c3] [LDC: 0xc]
    [Peer LDom: ldom7, mac-addr: 0:14:4f:f8:3a:3e] [LDC: 0xd]
    [Peer LDom: ldom2, mac-addr: 0:14:4f:f9:88:a0] [LDC: 0xe]
    [Peer LDom: ldom3, mac-addr: 0:14:4f:fa:aa:57]
    mac-addr=0:14:4f:f9:33:59
    service: primary-vsw1 @ primary
    [LDC: 0x8]
    Vdisk: vdisk4 vol4@primary-vds0
    service: primary-vds0 @ primary
    [LDC: 0xf]
    Vcons: [via LDC:16]
    ldom4@primary-vcc0 [port:5003]
    Vldcc: vldcc0 [Domain Services]
    service: primary-vldc0 @ primary
    [LDC: 0x0]
    Name: ldom5
    State: active
    Flags: transition
    OS:
    Util: 0.2%
    Uptime: 1d 4h 4m
    Vcpu: 4
    vid pid util strand
    0 20 0.6% 100%
    1 21 0.0% 100%
    2 22 0.3% 100%
    3 23 0.0% 100%
    Memory: 2G
    real-addr phys-addr size
    0x8000000 0x288000000 2G
    Vars: boot-device=vdisk
    Vnet: vnet0 [LDC: 0x2]
    [Peer LDom: ldom1, mac-addr: 0:14:4f:fa:1e:4d] [LDC: 0xd]
    [Peer LDom: ldom6, mac-addr: 0:14:4f:f8:27:b8] [LDC: 0xf]
    [Peer LDom: ldom7, mac-addr: 0:14:4f:f9:1f:5d] [LDC: 0x3]
    [Peer LDom: ldom2, mac-addr: 0:14:4f:f8:c9:7c] [LDC: 0x5]
    [Peer LDom: ldom3, mac-addr: 0:14:4f:fb:d9:6d] [LDC: 0x9]
    [Peer LDom: ldom4, mac-addr: 0:14:4f:fb:df:2c]
    mac-addr=0:14:4f:f9:ae:a1
    service: primary-vsw0 @ primary
    [LDC: 0x1]
    Vnet: vnet1 [LDC: 0x7]
    [Peer LDom: ldom1, mac-addr: 0:14:4f:fa:b1:f0] [LDC: 0xe]
    [Peer LDom: ldom6, mac-addr: 0:14:4f:fb:f5:c3] [LDC: 0x10]
    [Peer LDom: ldom7, mac-addr: 0:14:4f:f8:3a:3e] [LDC: 0x4]
    [Peer LDom: ldom2, mac-addr: 0:14:4f:f9:88:a0] [LDC: 0x8]
    [Peer LDom: ldom3, mac-addr: 0:14:4f:fa:aa:57] [LDC: 0xa]
    [Peer LDom: ldom4, mac-addr: 0:14:4f:f9:33:59]
    mac-addr=0:14:4f:f9:b2:b0
    service: primary-vsw1 @ primary
    [LDC: 0x6]
    Vdisk: vdisk5 vol5@primary-vds0
    service: primary-vds0 @ primary
    [LDC: 0xb]
    Vcons: [via LDC:12]
    ldom5@primary-vcc0 [port:5004]
    Vldcc: vldcc0 [Domain Services]
    service: primary-vldc0 @ primary
    [LDC: 0x0]
    Name: ldom6
    State: active
    Flags: transition
    OS:
    Util: 0.3%
    Uptime: 1d 4h 4m
    Vcpu: 4
    vid pid util strand
    0 24 0.5% 100%
    1 25 0.3% 100%
    2 26 0.5% 100%
    3 27 0.0% 100%
    Memory: 2G
    real-addr phys-addr size
    0x8000000 0x308000000 2G
    Vars: boot-device=vdisk
    Vnet: vnet0 [LDC: 0x2]
    [Peer LDom: ldom1, mac-addr: 0:14:4f:fa:1e:4d] [LDC: 0x6]
    [Peer LDom: ldom5, mac-addr: 0:14:4f:f9:ae:a1] [LDC: 0xf]
    [Peer LDom: ldom7, mac-addr: 0:14:4f:f9:1f:5d] [LDC: 0x3]
    [Peer LDom: ldom2, mac-addr: 0:14:4f:f8:c9:7c] [LDC: 0x5]
    [Peer LDom: ldom3, mac-addr: 0:14:4f:fb:d9:6d] [LDC: 0xa]
    [Peer LDom: ldom4, mac-addr: 0:14:4f:fb:df:2c]
    mac-addr=0:14:4f:f8:27:b8
    service: primary-vsw0 @ primary
    [LDC: 0x1]
    Vnet: vnet1 [LDC: 0x8]
    [Peer LDom: ldom1, mac-addr: 0:14:4f:fa:b1:f0] [LDC: 0xc]
    [Peer LDom: ldom5, mac-addr: 0:14:4f:f9:b2:b0] [LDC: 0x10]
    [Peer LDom: ldom7, mac-addr: 0:14:4f:f8:3a:3e] [LDC: 0x4]
    [Peer LDom: ldom2, mac-addr: 0:14:4f:f9:88:a0] [LDC: 0x9]
    [Peer LDom: ldom3, mac-addr: 0:14:4f:fa:aa:57] [LDC: 0xb]
    [Peer LDom: ldom4, mac-addr: 0:14:4f:f9:33:59]
    mac-addr=0:14:4f:fb:f5:c3
    service: primary-vsw1 @ primary
    [LDC: 0x7]
    Vdisk: vdisk6 vol6@primary-vds0
    service: primary-vds0 @ primary
    [LDC: 0xd]
    Vcons: [via LDC:14]
    ldom6@primary-vcc0 [port:5005]
    Vldcc: vldcc0 [Domain Services]
    service: primary-vldc0 @ primary
    [LDC: 0x0]
    Name: ldom7
    State: active
    Flags: transition
    OS:
    Util: 0.4%
    Uptime: 10h 23m
    Vcpu: 4
    vid pid util strand
    0 28 0.6% 100%
    1 29 0.1% 100%
    2 30 0.3% 100%
    3 31 0.2% 100%
    Memory: 1900M
    real-addr phys-addr size
    0x8000000 0x388000000 1900M
    Vars: boot-device=vdisk
    Vnet: vnet0 [LDC: 0x2]
    [Peer LDom: ldom1, mac-addr: 0:14:4f:fa:1e:4d] [LDC: 0x6]
    [Peer LDom: ldom5, mac-addr: 0:14:4f:f9:ae:a1] [LDC: 0x7]
    [Peer LDom: ldom6, mac-addr: 0:14:4f:f8:27:b8] [LDC: 0x3]
    [Peer LDom: ldom2, mac-addr: 0:14:4f:f8:c9:7c] [LDC: 0x5]
    [Peer LDom: ldom3, mac-addr: 0:14:4f:fb:d9:6d] [LDC: 0xb]
    [Peer LDom: ldom4, mac-addr: 0:14:4f:fb:df:2c]
    mac-addr=0:14:4f:f9:1f:5d
    service: primary-vsw0 @ primary
    [LDC: 0x1]
    Vnet: vnet1 [LDC: 0x9]
    [Peer LDom: ldom1, mac-addr: 0:14:4f:fa:b1:f0] [LDC: 0xd]
    [Peer LDom: ldom5, mac-addr: 0:14:4f:f9:b2:b0] [LDC: 0xe]
    [Peer LDom: ldom6, mac-addr: 0:14:4f:fb:f5:c3] [LDC: 0x4]
    [Peer LDom: ldom2, mac-addr: 0:14:4f:f9:88:a0] [LDC: 0xa]
    [Peer LDom: ldom3, mac-addr: 0:14:4f:fa:aa:57] [LDC: 0xc]
    [Peer LDom: ldom4, mac-addr: 0:14:4f:f9:33:59]
    mac-addr=0:14:4f:f8:3a:3e
    service: primary-vsw1 @ primary
    [LDC: 0x8]
    Vdisk: vdisk7 vol7@primary-vds0
    service: primary-vds0 @ primary
    [LDC: 0xf]
    Vcons: [via LDC:16]
    ldom7@primary-vcc0 [port:5006]
    Vldcc: vldcc0 [Domain Services]
    service: primary-vldc0 @ primary
    [LDC: 0x0]
    bash-3.00#
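    No resolution was posted for this one. For what it's worth, when a guest ignores a normal stop request, a forced stop from the control domain may still work (any unsaved state in the guest is lost):
    # ldm stop-domain -f ldom1
    # ldm start-domain ldom1
    If even a forced stop fails, the remaining options are generally a reboot of the primary domain or a power cycle from the SP.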

  • Auto boot Guest Domains when control domain is restarted

    We have a user who has asked us to set up the LDOMs to boot when the host machine boots up.
    Can this be done?
    All I found was this setting:
    ldm set-variable auto-boot\?=true goldldom
    Or do I need to write a script to do this?

    http://www.opensolaris.org/jive/thread.jspa?threadID=109629&tstart=0
    "If you save your configuration to the SP (ldm add-config foo) while
    the guest domains are active then they will start & boot when the
    system is powercycled (assuming auto-boot? is true also)."
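    Putting the two pieces together, something like this should do it (the configuration name 'foo' is just an example):
    # ldm set-variable auto-boot\?=true goldldom
    # ldm add-config foo
    With the guests active when the configuration is saved, they will start and boot automatically after a power cycle.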

  • Solaris 10 installation in a guest domain - Timed out waiting for TFTP reply

    I have the following problem.
    I have a Sun Fire T1000 and I want to install two guest LDOMs, each with Solaris 10. I installed LDOM Manager 1.0.3; its service is "online", as are the other required services.
    I first set up the 2 guest LDOMs in the primary domain successfully. Here is a description of the configuration:
    # ldm list-bindings
    NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
    primary active -n-cv SP 4 1G 0.1% 2h 14m
    MAC
    00:14:4f:a7:85:1e
    VCPU
    VID PID UTIL STRAND
    0 0 0.2% 100%
    1 1 0.2% 100%
    2 2 0.1% 100%
    3 3 0.1% 100%
    MAU
    ID CPUSET
    0 (0, 1, 2, 3)
    MEMORY
    RA PA SIZE
    0x8000000 0x8000000 1G
    VARIABLES
    diag-switch?=true
    security-#badlogins=0
    IO
    DEVICE PSEUDONYM OPTIONS
    pci@780 bus_a
    pci@7c0 bus_b
    VCC
    NAME PORT-RANGE
    primary-vcc0 5000-5100
    CLIENT PORT
    aplica_srss1@primary-vcc0 5001
    aplica_prod@primary-vcc0 5000
    VSW
    NAME MAC NET-DEV DEVICE MODE
    primary-vsw0 00:14:4f:a7:85:1e bge0 switch@0
    PEER MAC
    vnet2@aplica_srss1 00:14:4f:fb:f4:e0
    vnet1@aplica_prod 00:14:4f:fb:2c:60
    VDS
    NAME VOLUME OPTIONS DEVICE
    primary-vds0 vol1 /dev/dsk/c0t0d0s3
    CLIENT VOLUME
    vdisk1@aplica_prod vol1
    VCONS
    NAME SERVICE PORT
    SP
    NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
    aplica_prod active -t--- 5000 4 2G 15% 9m
    MAC
    00:14:4f:fb:2c:60
    VCPU
    VID PID UTIL STRAND
    0 4 35% 100%
    1 5 0.0% 100%
    2 6 7.0% 100%
    3 7 84% 100%
    MEMORY
    RA PA SIZE
    0x8000000 0x48000000 2G
    VARIABLES
    auto-boot?=true
    boot-device=
    use-nvramrc?=true
    NETWORK
    NAME SERVICE DEVICE MAC
    vnet1 primary-vsw0@primary network@0 00:14:4f:fb:2c:60
    PEER MAC
    primary-vsw0@primary 00:14:4f:a7:85:1e
    vnet2@aplica_srss1 00:14:4f:fb:f4:e0
    DISK
    NAME VOLUME TOUT DEVICE SERVER
    vdisk1 vol1@primary-vds0 disk@0 primary
    VCONS
    NAME SERVICE PORT
    aplica_prod primary-vcc0@primary 5000
    NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
    aplica_srss1 bound ----- 5001 24 13184M
    MAC
    00:14:4f:fb:ad:fb
    VCPU
    VID PID UTIL STRAND
    0 8 100%
    1 9 100%
    2 10 100%
    3 11 100%
    4 12 100%
    5 13 100%
    6 14 100%
    7 15 100%
    8 16 100%
    9 17 100%
    10 18 100%
    11 19 100%
    12 20 100%
    13 21 100%
    14 22 100%
    15 23 100%
    16 24 100%
    17 25 100%
    18 26 100%
    19 27 100%
    20 28 100%
    21 29 100%
    22 30 100%
    23 31 100%
    MEMORY
    RA PA SIZE
    0x8000000 0xc8000000 13184M
    VARIABLES
    auto-boot?=true
    boot-device=
    NETWORK
    NAME SERVICE DEVICE MAC
    vnet2 primary-vsw0@primary network@0 00:14:4f:fb:f4:e0
    PEER MAC
    primary-vsw0@primary 00:14:4f:a7:85:1e
    vnet1@aplica_prod 00:14:4f:fb:2c:60
    VCONS
    NAME SERVICE PORT
    aplica_srss1 primary-vcc0@primary 5001
    Then I mounted the ISO images of the Solaris OS 5/08 and installed the jumpstart server without problems. Here are the configuration files needed for the proper setup of the service, as a guide:

    # ./add_install_client -d -e 0:14:4f:fb:2c:60 -s Zolder:/var/jump_start/ sun4v
    cleaning up preexisting install client "0:14:4f:fb:2c:60"
    To disable 0:14:4f:fb:2c:60 in the DHCP server,
    remove the entry with Client ID 0100144FFB2C60
    To enable 0100144FFB2C60 in the DHCP server, ensure that
    the following Sun vendor-specific options are defined
    (SinstNM, SinstIP4, SinstPTH, SrootNM, SrootIP4,
    SrootPTH, and optionally SbootURI, SjumpCF and SsysidCF),
    and add a macro to the server named 0100144FFB2C60,
    containing the following option values:
    Install server (SinstNM) : Zolder
    Install server IP (SinstIP4) : 172.24.0.10
    Install server path (SinstPTH) : /var/jump_start/
    Root server name (SrootNM) : Zolder
    Root server IP (SrootIP4) : 172.24.0.10
    Root server path (SrootPTH) : /mnt/s0/Solaris_10/Tools/Boot
    Boot file (BootFile) : 0100144FFB2C60
    # cat /etc/inet/hosts     (the contents of this file exactly match the ipnodes file)
    127.0.0.1 localhost
    172.24.0.10 Zolder loghost
    172.24.0.9 primary-vsw0
    172.24.0.8 aplica_prod
    172.24.0.7 aplica_srss1
    # cat /etc/ethers
    00:14:4f:fb:2c:60 aplica_prod
    # cat /etc/bootparams
    aplica_prod root=Zolder:/mnt/s0/Solaris_10/Tools/Boot install=Zolder:/var/jump_start/ boottype=:in rootopts=:rsize=8192
    # cat /etc/dfs/dfstab
    share -F nfs -o ro,anon=0 /var/jump_start/
    share -F nfs -o ro,anon=0 /mnt/s0/Solaris_10/Tools/Boot
    share -F nfs -o ro,anon=0 /mnt/s1
    # ifconfig -a
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
    inet 127.0.0.1 netmask ff000000
    vsw0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    inet 172.24.0.10 netmask fffffe00 broadcast 172.24.1.255
    ether 0:14:4f:a7:85:1e
    vsw0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    inet 172.24.0.9 netmask fffffe00 broadcast 172.24.1.255
    Here I plumbed up the network interfaces, set up the vsw0 device, and changed the MAC address of vsw0 to match that of the bge0 device.
    Then I logged in to the guest LDOM aplica_prod:
    ----------START CONSOLE 1-------------------------------
    # telnet localhost 5000
    Trying 127.0.0.1...
    Connected to localhost.
    Escape character is '^]'.
    Connecting to console "aplica_prod" in group "aplica_prod" ....
    Press ~? for control options ..
    ----END CONSOLE 1----
    At the same time, I opened another console to start up the guest LDOM aplica_prod and ran snoop on the vsw0 interface:
    ---START CONSOLE 2---
    # ldm start aplica_prod
    LDom aplica_prod started
    -bash-3.00# snoop -d vsw0 | grep aplica_prod
    Using device /dev/vsw0 (promiscuous mode)
    ---END CONSOLE 2-----
    The other console produced the following output:
    ----START CONSOLE 1------
    Sun Fire(TM) T1000, No Keyboard
    Copyright 2008 Sun Microsystems, Inc. All rights reserved.
    OpenBoot 4.28.9, 2048 MB memory available, Serial #66792544.
    Ethernet address 0:14:4f:fb:2c:60, Host ID: 83fb2c60.
    Boot device: File and args:
    ERROR: boot-read fail
    Evaluating:
    Can't locate boot device
    {0} ok boot vnet1 install
    Boot device: /virtual-devices@100/channel-devices@200/network@0 File and args: install
    Requesting Internet Address for 0:14:4f:fa:82:17
    Requesting Internet Address for 0:14:4f:fa:82:17
    Timed out waiting for TFTP reply
    ---END CONSOLE 1-----
    In console 2, snoop shows the network traffic on the vsw0 device:
    ----START CONSOLE 2--------
    primary-vsw0 -> aplica_prod RARP R 0:14:4f:fb:2c:60 is 172.24.0.8, aplica_prod
    aplica_prod -> BROADCAST TFTP Read "AC180008" (octet)
    primary-vsw0 -> aplica_prod RARP R 0:14:4f:fb:2c:60 is 172.24.0.8, aplica_prod
    Zolder -> aplica_prod TFTP Error: access violation
    aplica_prod -> BROADCAST TFTP Read "AC180008" (octet)
    Zolder -> aplica_prod TFTP Error: access violation
    aplica_prod -> BROADCAST TFTP Read "AC180008" (octet)
    Zolder -> aplica_prod TFTP Error: access violation
    aplica_prod -> BROADCAST TFTP Read "AC180008" (octet)
    Zolder -> aplica_prod TFTP Error: access violation
    aplica_prod -> BROADCAST TFTP Read "AC180008" (octet)
    Zolder -> aplica_prod TFTP Error: access violation
    aplica_prod -> BROADCAST TFTP Read "AC180008" (octet)
    ----END CONSOLE 2----
    Once snoop started producing messages, I pinged the IPs that should be assigned in the hosts file and all answered successfully, BUT the Solaris 10 installation still did not start:
    # ping 172.24.0.8
    172.24.0.8 is alive
    # ping 172.24.0.9
    172.24.0.9 is alive
    # ping 172.24.0.10
    172.24.0.10 is alive
    I really need help with this problem, and any help or advice will be truly appreciated.
    Thanks
    PS: any other required configuration file can be provided if needed.
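    A note on the symptoms above: the file name the client keeps requesting, AC180008, is simply its assigned IP address 172.24.0.8 written in hex (AC=172, 18=24, 00=0, 08=8), which is what a RARP-style network boot fetches over TFTP. The repeated "access violation" errors suggest that no file of that name exists under /tftpboot, which fits with add_install_client having been run with -d (DHCP-style boot setup) while the guest actually boots via RARP. A hedged suggestion, assuming the standard jumpstart layout:
    # ls /tftpboot     (check whether an AC180008 boot file exists)
    # cd /mnt/s0/Solaris_10/Tools
    # ./add_install_client -e 0:14:4f:fb:2c:60 aplica_prod sun4v
    Re-running add_install_client without -d should create the RARP/TFTP boot entries in /tftpboot; aplica_prod is the host name from the /etc/ethers file above.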

  • Solaris Cluster 3.2 with ZFS failover filesystem failed. How can I recover?

    Hi all,
    I have just installed and configured Solaris Cluster 3.2U3 using ZFS for both the root filesystem and the shared storage filesystem.
    The cluster had been operating cleanly, but today I can no longer see the zpool for the shared storage, although I can still see the storage volume in the output of the format command.
    As a result all my resources changed to offline status and my application failed.
    How can I recover this cluster? Can anybody help me? :(

    Have you used a SUNW.HAStoragePlus (HASP) resource to control your zpool? If not, the zpool probably needs importing; that is what the HASP resource would do for you. You would also need a dependency from your application on the HASP resource to ensure that your application does not try to start up before the storage is available.
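    A rough sketch of what that looks like (resource group, resource, and pool names below are placeholders):
    # clresourcetype register SUNW.HAStoragePlus
    # clresource create -g app-rg -t SUNW.HAStoragePlus -p Zpools=mypool hasp-rs
    # clresource set -p Resource_dependencies=hasp-rs app-rs     (make the application resource depend on the storage)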
    Regards,
    Tim

  • Prerequisites : 2-node Solaris Cluster 4.1 using VirtualBox.

    Hi,
    I am going to try building a 2-node Solaris Cluster 4.1 using VirtualBox. I have downloaded Solaris 11.1 ISO. Can someone please help me with the right configuration for the 2 nodes/guests, particularly the NICs and shared storage?
    Thanks,
    Shankar

    https://blogs.oracle.com/TF/entry/new_white_paper_practicing_solaris
    it's a bit dated but should still get you there.

  • Solaris Cluster Private Link Failure

    Hi,
    I have configured Solaris Cluster 3.3 and added two back-to-back interconnect cables.
    Sun Cluster is working fine, but the private link has failed: I cannot ping clusternode2-priv and clusternode1-priv from each other, and some commands fail.
    ~ # ping clusternode2-priv
    no answer from clusternode2-priv
    ~ # metaset -s nfsds -a -h t1u331 t1u332
    metaset: 172.16.4.1: metad client create: RPC: Rpcbind failure
    ~ # scstat
    -- Cluster Nodes --
    Node name Status
    Cluster node: n1u332 Online
    Cluster node: n1u331 Online
    -- Cluster Transport Paths --
    Endpoint Endpoint Status
    Transport path:   n1u332:nxge2           n1u331:nxge2           Path online
    Transport path:   n1u332:nxge1           n1u331:nxge1           Path online
    -- Quorum Summary from latest node reconfiguration --
    Quorum votes possible: 3
    Quorum votes needed: 2
    Quorum votes present: 3
    -- Quorum Votes by Node (current status) --
    Node Name Present Possible Status
    Node votes: n1u332 1 1 Online
    Node votes: n1u331 1 1 Online
    -- Quorum Votes by Device (current status) --
    Device Name Present Possible Status
    Device votes: /dev/did/rdsk/d4s2 1 1 Online
    -- Device Group Servers --
    Device Group Primary Secondary
    -- Device Group Status --
    Device Group Status
    -- Multi-owner Device Groups --
    Device Group Online Status
    -- Resource Groups and Resources --
    Group Name Resources
    -- Resource Groups --
    Group Name Node Name State Suspended
    -- Resources --
    Resource Name Node Name State Status Message
    -- IPMP Groups --
    Node Name Group Status Adapter Status
    [root @ n1u332]
    ~ # ifconfig -a
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
    inet 127.0.0.1 netmask ff000000
    e1000g0: flags=1000802<BROADCAST,MULTICAST,IPv4> mtu 1500 index 2
    inet 0.0.0.0 netmask 0
    ether 0:15:17:e3:a4:e8
    vsw0: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 3
    inet 10.131.58.76 netmask ffffff00 broadcast 10.131.58.255
    groupname ipmp-grp
    ether 0:14:4f:f9:1:bd
    vsw0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
    inet 10.131.58.75 netmask ffffff00 broadcast 10.131.58.255
    vsw1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 4
    inet 10.131.58.77 netmask ffffff00 broadcast 10.131.58.255
    groupname ipmp-grp
    ether 0:14:4f:fb:44:4
    nxge1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 7
    inet 172.16.0.129 netmask ffffff80 broadcast 172.16.0.255
    ether 0:14:4f:a0:81:d9
    nxge2: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 6
    inet 172.16.1.1 netmask ffffff80 broadcast 172.16.1.127
    ether 0:14:4f:a0:81:da
    clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 8
    inet 172.16.4.1 netmask fffffe00 broadcast 172.16.5.255
    ether 0:0:0:0:0:1
    [root @ n1u332]
    ~ # dladm show-dev
    vsw0 link: up speed: 1000 Mbps duplex: full
    vsw1 link: up speed: 1000 Mbps duplex: full
    e1000g0 link: down speed: 0 Mbps duplex: half
    e1000g1 link: up speed: 1000 Mbps duplex: full
    e1000g2 link: unknown speed: 0 Mbps duplex: half
    e1000g3 link: unknown speed: 0 Mbps duplex: half
    nxge0 link: up speed: 100 Mbps duplex: full
    nxge1 link: up speed: 1000 Mbps duplex: full
    nxge2 link: up speed: 1000 Mbps duplex: full
    nxge3 link: up speed: 100 Mbps duplex: full
    e1000g4 link: unknown speed: 0 Mbps duplex: half
    e1000g5 link: up speed: 1000 Mbps duplex: full
    clprivnet0              link: unknown   speed: 0     Mbps       duplex: unknown

    If your private interconnect had really failed, then one or other of the cluster nodes would have panicked. I think it is more likely that you have changed the nsswitch.conf entry for hosts such that it does not include 'cluster' first, although I would have expected that to result in an unresolved host name. The other option is that you have hardened your machine in some way with IP Filter or security settings.
    Has it ever worked?
    Tim
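    For reference, the hosts entry in /etc/nsswitch.conf on a cluster node would normally look something like:
    hosts: cluster files dns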

  • Solaris Cluster 3.3u2 configuration - Solaris LDoms

    Hi,
    I have 2 Sun Blade T5-1B modules and I created 4 LDoms on each blade; I also installed Solaris Cluster 3.3u2, and it all completed successfully. I have failed resources over between the nodes several times, and everything seems to work as expected. However, I have an issue when testing the public transport: I removed both physical public network cables from node1, but the resources did not fail over to node2. I am a little confused, as they should fail over to node2. Please let me know what I have missed to get proper failover.
    Please advise.
    regards,   

    Hi M10vir,
    this means you are running SC3.3u2 in guest domains and pulling the cables from the primary domain?
    If so, the IPMP group used in the guest domain needs to see a link-down event before Solaris Cluster will do a failover when logical hostname resources are configured. How are the network interfaces configured, and what is the status of the network interfaces when the cables are removed? Furthermore, do the domains recognize that the network is gone, and are the relevant messages in /var/adm/messages?
    Thanks,
      Juergen
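    Two things that may be worth checking here (the domain and vnet names below are placeholders): whether the virtual network devices propagate the physical link state into the guests at all, and what the guests actually observe when the cables are pulled. A sketch:
    From the control domain (requires an LDoms release that supports linkprop):
    # ldm set-vnet linkprop=phys-state vnet0 guest1
    In the guest domain:
    # dladm show-dev     (do the vnets report link down?)
    # tail /var/adm/messages     (look for in.mpathd link failure entries)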

  • OIM is not able to restart the domain when I am trying to configure it with config.sh

    Hi,
    I am a newbie here. Below are the complete details of my problem:
    I have installed WLS1211 (64-bit) on OEL 6.3 and also installed OIM 11.1.1.7 (64-bit) on the same machine.
    When I try to configure OIM (using the config.sh file), the system fails to restart the domain, which in turn fails the configuration. I navigated to the domain created for OIM and verified that the log file displays the following error:
    ####<Sep 6, 2013 7:41:16 PM IST> <Info> <Server> <blr2211427> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1378476676195> <BEA-002609> <Channel Service initialized.>
    ####<Sep 6, 2013 7:41:16 PM IST> <Info> <Socket> <blr2211427> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1378476676216> <BEA-000415> <System has file descriptor limits of soft: 65,536, hard: 65,536>
    ####<Sep 6, 2013 7:41:16 PM IST> <Info> <Socket> <blr2211427> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1378476676218> <BEA-000416> <Using effective file descriptor limit of: 65,536 open sockets and files.>
    ####<Sep 6, 2013 7:41:16 PM IST> <Info> <Socket> <blr2211427> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1378476676218> <BEA-000406> <PosixSocketMuxer was built on Apr 24 2007 16:05:00>
    ####<Sep 6, 2013 7:41:16 PM IST> <Info> <Socket> <blr2211427> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1378476676238> <BEA-000436> <Allocating 3 reader threads.>
    ####<Sep 6, 2013 7:41:16 PM IST> <Info> <Socket> <blr2211427> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1378476676238> <BEA-000446> <Native I/O enabled.>
    ####<Sep 6, 2013 7:41:16 PM IST> <Info> <IIOP> <blr2211427> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1378476676483> <BEA-002014> <IIOP subsystem enabled.>
    ####<Sep 6, 2013 7:41:20 PM IST> <Error> <Security> <blr2211427> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1378476680681> <BEA-090892> <The loading of OPSS java security policy provider failed due to exception, see the exception stack trace or the server log file for root cause. If still see no obvious cause, enable the debug flag -Djava.security.debug=jpspolicy to get more information. Error message: JPS-06514: Opening of file based keystore failed.>
    ####<Sep 6, 2013 7:41:20 PM IST> <Critical> <WebLogicServer> <blr2211427> <AdminServer> <main> <<WLS Kernel>> <> <> <1378476680682> <BEA-000386> <Server subsystem failed. Reason: weblogic.security.SecurityInitializationException: The loading of OPSS java security policy provider failed due to exception, see the exception stack trace or the server log file for root cause. If still see no obvious cause, enable the debug flag -Djava.security.debug=jpspolicy to get more information. Error message: JPS-06514: Opening of file based keystore failed.
    weblogic.security.SecurityInitializationException: The loading of OPSS java security policy provider failed due to exception, see the exception stack trace or the server log file for root cause. If still see no obvious cause, enable the debug flag -Djava.security.debug=jpspolicy to get more information. Error message: JPS-06514: Opening of file based keystore failed.
        at weblogic.security.service.CommonSecurityServiceManagerDelegateImpl.loadOPSSPolicy(CommonSecurityServiceManagerDelegateImpl.java:1402)
        at weblogic.security.service.CommonSecurityServiceManagerDelegateImpl.initialize(CommonSecurityServiceManagerDelegateImpl.java:1022)
        at weblogic.security.service.SecurityServiceManager.initialize(SecurityServiceManager.java:873)
        at weblogic.security.SecurityService.start(SecurityService.java:148)
        at weblogic.t3.srvr.SubsystemRequest.run(SubsystemRequest.java:64)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:256)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:221)
    Caused By: oracle.security.jps.JpsRuntimeException: JPS-06514: Opening of file based keystore failed.
        at oracle.security.jps.internal.policystore.PolicyDelegationController.<init>(PolicyDelegationController.java:170)
        at oracle.security.jps.internal.policystore.JavaPolicyProvider.<init>(JavaPolicyProvider.java:383)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at java.lang.Class.newInstance(Class.java:374)
        at weblogic.security.service.CommonSecurityServiceManagerDelegateImpl.loadOPSSPolicy(CommonSecurityServiceManagerDelegateImpl.java:1343)
        at weblogic.security.service.CommonSecurityServiceManagerDelegateImpl.initialize(CommonSecurityServiceManagerDelegateImpl.java:1022)
        at weblogic.security.service.SecurityServiceManager.initialize(SecurityServiceManager.java:873)
        at weblogic.security.SecurityService.start(SecurityService.java:148)
        at weblogic.t3.srvr.SubsystemRequest.run(SubsystemRequest.java:64)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:256)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:221)
    Caused By: oracle.security.jps.JpsException: JPS-06514: Opening of file based keystore failed.
        at oracle.security.jps.internal.policystore.PolicyUtil.getDefaultPDPService(PolicyUtil.java:2984)
        at oracle.security.jps.internal.policystore.PolicyUtil.getPDPService(PolicyUtil.java:3226)
        at oracle.security.jps.internal.policystore.PolicyDelegationController.<init>(PolicyDelegationController.java:167)
        at oracle.security.jps.internal.policystore.JavaPolicyProvider.<init>(JavaPolicyProvider.java:383)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at java.lang.Class.newInstance(Class.java:374)
        at weblogic.security.service.CommonSecurityServiceManagerDelegateImpl.loadOPSSPolicy(CommonSecurityServiceManagerDelegateImpl.java:1343)
        at weblogic.security.service.CommonSecurityServiceManagerDelegateImpl.initialize(CommonSecurityServiceManagerDelegateImpl.java:1022)
        at weblogic.security.service.SecurityServiceManager.initialize(SecurityServiceManager.java:873)
        at weblogic.security.SecurityService.start(SecurityService.java:148)
        at weblogic.t3.srvr.SubsystemRequest.run(SubsystemRequest.java:64)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:256)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:221)
    Caused By: oracle.security.jps.service.keystore.KeyStoreServiceException: JPS-06514: Opening of file based keystore failed.
        at oracle.security.jps.internal.keystore.file.FileKeyStoreManager.openKeyStore(FileKeyStoreManager.java:406)
        at oracle.security.jps.internal.keystore.file.FileKeyStoreManager.openKeyStore(FileKeyStoreManager.java:352)
        at oracle.security.jps.internal.keystore.file.FileKeyStoreServiceImpl.doInit(FileKeyStoreServiceImpl.java:122)
        at oracle.security.jps.internal.keystore.file.FileKeyStoreServiceImpl.<init>(FileKeyStoreServiceImpl.java:88)
        at oracle.security.jps.internal.keystore.KeyStoreProvider.getInstance(KeyStoreProvider.java:164)
        at oracle.security.jps.internal.keystore.KeyStoreProvider.getInstance(KeyStoreProvider.java:91)
        at oracle.security.jps.internal.keystore.KeyStoreProvider.getInstance(KeyStoreProvider.java:68)
        at oracle.security.jps.internal.core.runtime.ContextFactoryImpl.findServiceInstance(ContextFactoryImpl.java:139)
        at oracle.security.jps.internal.core.runtime.ContextFactoryImpl.getContext(ContextFactoryImpl.java:170)
        at oracle.security.jps.internal.core.runtime.ContextFactoryImpl.getContext(ContextFactoryImpl.java:191)
        at oracle.security.jps.internal.core.runtime.JpsContextFactoryImpl.getContext(JpsContextFactoryImpl.java:132)
        at oracle.security.jps.internal.core.runtime.JpsContextFactoryImpl.getContext(JpsContextFactoryImpl.java:127)
        at oracle.security.jps.internal.policystore.PolicyUtil$3.run(PolicyUtil.java:2956)
        at oracle.security.jps.internal.policystore.PolicyUtil$3.run(PolicyUtil.java:2950)
        at java.security.AccessController.doPrivileged(Native Method)
        at oracle.security.jps.internal.policystore.PolicyUtil.getDefaultPDPService(PolicyUtil.java:2950)
        at oracle.security.jps.internal.policystore.PolicyUtil.getPDPService(PolicyUtil.java:3226)
        at oracle.security.jps.internal.policystore.PolicyDelegationController.<init>(PolicyDelegationController.java:167)
        at oracle.security.jps.internal.policystore.JavaPolicyProvider.<init>(JavaPolicyProvider.java:383)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at java.lang.Class.newInstance(Class.java:374)
        at weblogic.security.service.CommonSecurityServiceManagerDelegateImpl.loadOPSSPolicy(CommonSecurityServiceManagerDelegateImpl.java:1343)
        at weblogic.security.service.CommonSecurityServiceManagerDelegateImpl.initialize(CommonSecurityServiceManagerDelegateImpl.java:1022)
        at weblogic.security.service.SecurityServiceManager.initialize(SecurityServiceManager.java:873)
        at weblogic.security.SecurityService.start(SecurityService.java:148)
        at weblogic.t3.srvr.SubsystemRequest.run(SubsystemRequest.java:64)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:256)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:221)
    >
    ####<Sep 6, 2013 7:41:20 PM IST> <Notice> <WebLogicServer> <blr2211427> <AdminServer> <main> <<WLS Kernel>> <> <> <1378476680728> <BEA-000365> <Server state changed to FAILED.>
    ####<Sep 6, 2013 7:41:20 PM IST> <Error> <WebLogicServer> <blr2211427> <AdminServer> <main> <<WLS Kernel>> <> <> <1378476680728> <BEA-000383> <A critical service failed. The server will shut itself down.>
    ####<Sep 6, 2013 7:41:20 PM IST> <Notice> <WebLogicServer> <blr2211427> <AdminServer> <main> <<WLS Kernel>> <> <> <1378476680735> <BEA-000365> <Server state changed to FORCE_SHUTTING_DOWN.>
    ####<Sep 6, 2013 7:41:20 PM IST> <Info> <WebLogicServer> <blr2211427> <AdminServer> <main> <<WLS Kernel>> <> <> <1378476680754> <BEA-000236> <Stopping execute threads.>
    When I googled for information, I found that I need to give full access (0777) to the cwallet.sso file under MW_HOME/user_projects/domains/domain_name/config/fmwconfig/bootstrap/. I did this but still did not succeed.
    I followed this link: http://www.techpaste.com/2012/04/jpsruntimeexception-jps-06514-opening-file-based-keystore-failed/
    I also tried the other solutions mentioned there, but none worked.
    Please help.

    Have you tried taking a backup of the keystore.xml and cwallet.sso files, deleting them, and then restarting the admin server?
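    In concrete terms, a minimal sketch of that suggestion (file names as mentioned above; whether they exist and are regenerated depends on your configuration, so take backups first):
    $ cd MW_HOME/user_projects/domains/domain_name/config/fmwconfig
    $ cp keystore.xml /tmp/keystore.xml.bak
    $ cp bootstrap/cwallet.sso /tmp/cwallet.sso.bak
    $ rm keystore.xml bootstrap/cwallet.sso
    $ MW_HOME/user_projects/domains/domain_name/bin/startWebLogic.sh     (restart the admin server)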

  • Oracle ASM Configuration on Solaris Cluster - Oracle 11.2.0.3

    Hi,
    I want some clarifications!
    I need to set up an active/passive cluster on the Solaris 10 SPARC operating system; the HA software is Solaris Cluster, with Oracle 11.2.0.3.
    1) I understand that "single-instance Oracle ASM is not supported with Oracle 11g release 2", so we need to go for clustered ASM - is it required to use the RAC framework in this case?
    2) If I use the RAC framework, do I need to have a license for RAC?
    I am new to Oracle; any help is appreciated.
    Regards,
    Shashank

  • Solaris Cluster 4.1 quorum configuration: best with metaset or ZFS?

    For a Solaris Cluster 4.1 quorum device, which is the better basis: an SVM metaset or a ZFS zpool?

    If you want to use a quorum device - in contrast to a quorum server - then you'll need a LUN to configure your quorum device on.
    It does not matter whether this LUN will be used later as a zpool or as an SVM metaset.
    There is one exception that should be mentioned in the docs: if the LUN used for the quorum device is later used as a disk for a zpool, and this disk gets a new EFI label, then, I think, the quorum information can get overwritten. So be careful in this specific situation and consult the docs before doing so.
    Hartmut
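    Either way, once the LUN is visible as a DID device on both nodes, adding it as the quorum device is the same step (d4 below is a placeholder; check the DID name with cldevice list):
    # clquorum add d4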

  • HOWTO: Create 2-node Solaris Cluster 4.1/Solaris 11.1(x64) using VirtualBox

    I did this on VirtualBox 4.1 on Windows 7 and VirtualBox 4.2 on Linux x64. Basic prerequisites are: 40GB disk space, 8GB RAM, and a 64-bit-guest-capable VirtualBox.
    Please read all the descriptive messages/prompts shown by 'scinstall' and 'clsetup' before answering.
    0) Download from OTN
    - Solaris 11.1 Live Media for x86(~966 MB)
    - Complete Solaris 11.1 IPS Repository Image (total 7GB)
    - Oracle Solaris Cluster 4.1 IPS Repository image (~73MB)
    1) Run VirtualBox Console, create VM1 : 3GB RAM, 30GB HDD
    2) The new VM1 has 1 NIC, add 2 more NICs (total 3). Setting the NIC to any type should be okay, 'VirtualBox Host Only Adapter' worked fine for me.
    3) Start VM1, point the "Select start-up disk" to the Solaris 11.1 Live Media ISO.
    4) Select "Oracle Solaris 11.1" in the GRUB menu. Select Keyboard layout and Language.
    VM1 will boot and the Solaris 11.1 Live Desktop screen will appear.
    5) Click <Install Oracle Solaris> from the desktop, supply necessary inputs.
    Default Disk Discovery (iSCSI not needed) and Disk Selection are fine.
    Disable the "Support Registration" connection info
    6) The alternate user created during the install has root privileges (sudo). Set appropriate VM1 name
    7) When the VM has to be rebooted after the installation is complete, make sure the Solaris 11.1 Live ISO is ejected or else the VM will again boot from the Live CD.
    8) Repeat steps 1-6, create VM2 and install Solaris.
    9) FTP(secure) the Solaris 11.1 Repository IPS and Solaris Cluster 4.1 IPS onto both the VMs e.g under /home/user1/
    10) We need to set up both packages: the Solaris 11.1 repository and Solaris Cluster 4.1
    11) All commands now to be run as root
    12) By default the 'solaris' repository is of type online (pkg.oracle.com); that needs to be updated to the local ISO we downloaded :-
    $ sudo sh
    # lofiadm -a /home/user1/sol-11_1-repo-full.iso
    //output : /dev/lofi/N
    # mount -F hsfs /dev/lofi/N /mnt
    # pkg set-publisher -G '*' -M '*' -g /mnt/repo solaris
    13) Set up the ha-cluster package :-
    # lofiadm -a /home/user1/osc-4_1-ga-repo-full.iso
    //output : /dev/lofi/N
    # mkdir /mnt2
    # mount -F hsfs /dev/lofi/N /mnt2
    # pkg set-publisher -g file:///mnt2/repo ha-cluster
    14) Verify both packages are fine :-
    # pkg publisher
    PUBLISHER                   TYPE     STATUS P LOCATION
    solaris                     origin   online F file:///mnt/repo/
    ha-cluster                  origin   online F file:///mnt2/repo/
    15) Install the complete SC4.1 package by installing 'ha-cluster-full'
    # pkg install ha-cluster-full
    Repeat steps 12-15 on VM2; now both VMs have the OS and SC4.1 installed.
    16) By default the 3 NICs are in the "Automatic" profile and have DHCP configured. We need to activate the Fixed profile and put the 3 NICs into it. Only 1 interface, the public interface, needs to be
    configured. The other 2 are for the cluster interconnect and will be automatically configured by scinstall. Execute the following commands :-
    # netadm enable -p ncp defaultfixed
    //verify
    # netadm list -p ncp defaultfixed
    //Configure the public interface.
    //Verify none of the interfaces are listed, then add all 3.
    # ipadm show-if
    //run dladm show-phys or dladm show-link to check interface names : must be net0/net1/net2
    # ipadm create-ip net0
    # ipadm create-ip net1
    # ipadm create-ip net2
    # ipadm show-if
    //select a proper IP and configure the public interface. I have used 192.168.56.171 & 172
    # ipadm create-addr -T static -a 192.168.56.171/24 net0/publicip
    //IP plumbed, restart
    # ipadm down-addr -t net0/publicip
    # ipadm up-addr -t net0/publicip
    //Verify publicip is fine by pinging the host
    # ping 192.168.56.1
    //Verify: net0 should be up, net1/net2 should be down
    # ipadm
    17) Repeat step 16 on VM2
    18) Verify both VMs can ping each other using the public IP. Add entries to each other's /etc/hosts
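    The entries, using the names and addresses from this walkthrough, would look like:
    192.168.56.171   solvm1
    192.168.56.172   solvm2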
    Now we are ready to run scinstall and create/configure the 2-node cluster
    19)
    # cd /usr/cluster/bin
    # ./scinstall
    select 1) Create a new cluster ...
    select 1) Create a new cluster
    select 2) Custom in "Typical or Custom Mode"
    Enter cluster name : mycluster1 (e.g)
    Add the 2 nodes : solvm1 & solvm2 and press <ctrl-d>
    Accept default "No" for <Do you need to use DES authentication>"
    Accept default "Yes" for <Should this cluster use at least two private networks>
    Enter "No" for <Does this two-node cluster use switches>
    Select "1)net1" for "Select the first cluster transport adapter"
    If there is a warning about unexpected traffic on "net1", ignore it
    Enter "net1" when it asks corresponding adapter on "solvm2"
    Select "2)net2" for "Select the second cluster transport adapter"
    Enter "net2" when it asks corresponding adapter on "solvm2"
    Select "Yes" for "Is it okay to accept the default network address"
    Select "Yes" for "Is it okay to accept the default network netmask"Now the IP addresses 172.16.0.0 will be plumbed in the 2 private interfaces
    Select "yes" for "Do you want to turn off global fencing"
    (These are SATA serial disks, so no fencing)
    Enter "Yes" for "Do you want to disable automatic quorum device selection"
    (we will add quorum disks later)
    Enter "Yes" for "Proceed with cluster creation"
    Select "No" for "Interrupt cluster creation for cluster check errors"
    The second node will be configured and rebooted.
    The first node will be configured and rebooted.
    After both nodes have rebooted, verify the cluster has been created and both nodes joined.
    On both nodes :-
    # cd /usr/cluster/bin
    # ./clnode status
    //should show both nodes Online.
    At this point there are no quorum disks, so one of the nodes will be designated the quorum vote. That node's VM has to be up for the other node to come up and the cluster to form.
    To check the current quorum status, run :-
    # ./clquorum show
    //one of the nodes will have 1 vote and the other 0 (zero).
    20)
    Now the cluster is in 'Installation Mode' and we need to add a quorum disk.
    Shut down both nodes, as we will be adding shared disks to both of them
    21)
    Create 2 VirtualBox HDDs (VDI Files) on the host, 1 for quorum and 1 for shared filesystem. I have used a size of 1 GB for each :-
    *$ vboxmanage createhd --filename /scratch/myimages/sc41cluster/sdisk1.vdi --size 1024 --format VDI --variant Fixed*
    *0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%*
    *Disk image created. UUID: 899147b9-d21f-4495-ad55-f9cf1ae46cc3*
    *$ vboxmanage createhd --filename /scratch/myimages/sc41cluster/sdisk2.vdi --size 1024 --format VDI --variant Fixed*
    *0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%*
    *Disk image created. UUID: 899147b9-d22f-4495-ad55-f9cf15346caf*
    22)
    Attach these disks to both the VMs as shared type
    *$ vboxmanage storageattach solvm1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium /scratch/myimages/sc41cluster/sdisk1.vdi --mtype shareable*
    *$ vboxmanage storageattach solvm1 --storagectl "SATA" --port 2 --device 0 --type hdd --medium /scratch/myimages/sc41cluster/sdisk2.vdi --mtype shareable*
    *$ vboxmanage storageattach solvm2 --storagectl "SATA" --port 1 --device 0 --type hdd --medium /scratch/myimages/sc41cluster/sdisk1.vdi --mtype shareable*
    *$ vboxmanage storageattach solvm2 --storagectl "SATA" --port 2 --device 0 --type hdd --medium /scratch/myimages/sc41cluster/sdisk2.vdi --mtype shareable*
    The disks are attached to SATA ports 1 & 2 of each VM. On my VirtualBox on Linux, the controller type is "SATA", whereas on Windows it is "SATA Controller".
    The "--mtype shareable' parameter is important
    23)
    Mark both disks as shared :-
    *$ vboxmanage modifyhd /scratch/myimages/sc41cluster/sdisk1.vdi --type shareable*
    *$ vboxmanage modifyhd /scratch/myimages/sc41cluster/sdisk2.vdi --type shareable*
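    Optionally, sanity-check the result before booting the VMs (a verification sketch using the stock VirtualBox CLI):
    *$ vboxmanage showhdinfo /scratch/myimages/sc41cluster/sdisk1.vdi*
    *$ vboxmanage showhdinfo /scratch/myimages/sc41cluster/sdisk2.vdi*
    The "Type" field in the output should read "shareable" for both disks.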
    24) Start both VMs. We need to format the 2 shared disks
    25) From VM1, run format. In my case, the 2 new shared disks show up as 'c7t1d0' and 'c7t2d0'.
    +# format+
    select disk 1 (c7t1d0)
    [disk formatted]
    FORMAT MENU
    fdisk
    Type 'y' to accept default partition
    partition
    0
    <enter>
    <enter>
    1
    995mb
    print
    label
    <yes>
    quit
    quit
    26) Repeat step 25) for the 2nd disk (c7t2d0)
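    To double-check the labels from the shell, a quick verification sketch (slice 2 covers the whole disk; c7t1d0/c7t2d0 are the device names from my setup):
    +# prtvtoc /dev/rdsk/c7t1d0s2+
    +# prtvtoc /dev/rdsk/c7t2d0s2+
    +//each should list slice 0 with a size of roughly 995 MB+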
    27) Make sure the shared disks can be used for quorum :-
    On VM1
    +# ./cldevice refresh+
    +# ./cldevice show+
    On VM2
    +# ./cldevice refresh+
    +# ./cldevice show+
    The shared disks should have the same DID (d2,d3,d4 etc). Note down the DID that you are going to use for quorum (e.g d2)
    By default, global fencing is enabled for these disks. We need to turn it off for all disks as these are SATA disks :-
    +# cldevice set -p default_fencing=nofencing-noscrub d1+
    +# cldevice set -p default_fencing=nofencing-noscrub d2+
    +# cldevice set -p default_fencing=nofencing-noscrub d3+
    +# cldevice set -p default_fencing=nofencing-noscrub d4+
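    You can verify that the property took effect (a sketch; d2 is the quorum candidate DID noted above):
    +# ./cldevice show d2+
    +//the default_fencing property should now read nofencing-noscrub+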
    28) It is better to do one more reboot of both VMs; otherwise you may hit an error when adding the quorum disk (I did)
    29) Run clsetup to add quorum disk and to complete cluster configuration :-
    +# ./clsetup+
    === Initial Cluster Setup ===
    Enter 'Yes' for "Do you want to continue"
    Enter 'Yes' for "Do you want add any quorum devices"
    Select '1) Directly Attached Shared Disk' for the type of device
    Enter 'Yes' for "Is it okay to continue"
    Enter 'd2' (or 'd3') for 'Which global device do you want to use'
    Enter 'Yes' for "Is it okay to proceed with the update"
    The command 'clquorum add d2' is run
    Enter 'No' for "Do you want to add another quorum device"
    Enter 'Yes' for "Is it okay to reset "installmode"?"Cluster initialization is complete.!!!
    30) Run 'clquorum status' to confirm both nodes and the quorum disk have 1 vote each
    31) Run other cluster commands to explore!
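    A few useful ones to start with (a sketch; all of these live under /usr/cluster/bin):
    +# ./cluster status+
    +# ./clnode status+
    +# ./clquorum status+
    +# ./clresourcegroup status+
    +# ./clinterconnect status+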
    I will cover Data services and shared file system in another post. Basically the other shared disk
    can be used to create a UFS filesystem and mount it on all nodes.
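    As a preview, a minimal sketch, assuming the second shared disk was assigned DID d3 and slice 0 was sized as in step 25 (run newfs from one node only; the vfstab line goes on both nodes):
    +# newfs /dev/global/rdsk/d3s0+
    +# mkdir -p /global/shared+
    +//add to /etc/vfstab on both nodes :-+
    +/dev/global/dsk/d3s0 /dev/global/rdsk/d3s0 /global/shared ufs 2 yes global,logging+
    +# mount /global/shared+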

    The Solaris Cluster 4.1 Installation and Concepts Guides are available at :-
    http://docs.oracle.com/cd/E29086_01/index.html
    Thanks.

  • Guest Domain Freezes with 100% utilization

    I set up LDoms on a Sun Fire T2000 with one control domain and one guest domain. The guest domain is sharing the disk (on slice 6) with the control domain.
    The slice is set as the boot disk for the guest domain. Upon starting the guest domain, the utilization goes to 100%. A telnet to the virtual console gets a refused connection.
    When I try to stop and unbind the guest domain (with a reboot), the slice is no longer usable; the format and newfs commands both fail to operate on it. What's wrong?
    sc>showhost
    Sun-Fire-T2000 System Firmware 6.4.4  2007/04/20 10:13
    Host flash versions:
       Hypervisor 1.4.1 2007/04/02 16:37
       OBP 4.26.1 2007/04/02 16:26
       POST 4.26.0 2007/03/26 16:45
    # Control Domain
    $ cd LDoms_Manager-1_0-RR
    $ Install/install-ldm
    $ ldm add-vdiskserver primary-vds0 primary
    $ ldm add-vconscon port-range=5000-5100 primary-vcc0 primary
    $ ldm add-vswitch net-dev=e1000g0 primary-vsw0 primary
    $ ldm set-mau 1 primary
    $ ldm set-vcpu 4 primary
    $ ldm set-memory 4g primary
    $ ldm add-config initial
    $ shutdown -i6 -g0 -y
    # Guest Domain
    $ ldm add-domain myldom1
    $ ldm add-vcpu 4 myldom1
    $ ldm add-memory 2g myldom1
    $ ldm add-vnet vnet1 primary-vsw0 myldom1
    $ ldm add-vdiskserverdevice /dev/dsk/c0t1d0s6 vol1@primary-vds0
    $ ldm add-vdisk vdisk1 vol1@primary-vds0 myldom1
    $ ldm set-variable auto-boot\?=false myldom1
    $ ldm set-variable boot-device=vdisk1 myldom1
    $ ldm bind-domain myldom1
    $ ldm start-domain myldom1
    $ telnet localhost 5000
    Truss output of the format command:
    AVAILABLE DISK SELECTIONS:
    write(1, "\n\n A V A I L A B L E  ".., 29)      = 29
    ioctl(0, TCGETA, 0xFFBFFB34)                    = 0
    ioctl(1, TCGETA, 0xFFBFFB34)                    = 0
    ioctl(0, TCGETA, 0xFFBFFACC)                    = 0
    ioctl(1, TCGETA, 0xFFBFFACC)                    = 0
    ioctl(1, TIOCGWINSZ, 0xFFBFFB40)                = 0
    open("/dev/tty", O_RDWR|O_NDELAY)               = 3
    ioctl(3, TCGETS, 0x000525BC)                    = 0
    ioctl(3, TCSETS, 0x000525BC)                    = 0
    ioctl(3, TCGETS, 0x000525BC)                    = 0
    ioctl(3, TCSETS, 0x000525BC)                    = 0
           0. c0t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    write(1, "               0 .   c 0".., 56)      = 56
              /pci@780/pci@0/pci@9/scsi@0/sd@1,0
    write(1, "                     / p".., 45)      = 45
    ioctl(3, TCSETS, 0x000525BC)                    = 0
    ioctl(3, TCSETS, 0x000525BC)                    = 0
    close(3)                                        = 0
    ioctl(0, TCGETA, 0xFFBFF1EC)                    = 0
    fstat64(0, 0xFFBFF108)                          = 0
    Specify disk (enter its number): write(1, " S p e c i f y   d i s k".., 33)     = 33
    read(0, 0xFF2700F8, 1024)       (sleeping...)
    0
    read(0, " 0\n", 1024)                           = 2
    open("/dev/rdsk/c0t1d0s2", O_RDWR|O_NDELAY)     = 3
    brk(0x00058810)                                 = 0
    brk(0x00068810)                                 = 0
    brk(0x00068810)                                 = 0
    brk(0x00078810)                                 = 0
    selecting c0t1d0
    write(1, " s e l e c t i n g   c 0".., 17)      = 17
    ioctl(3, 0x04C9, 0xFFBFFA54)                    = 0
    [disk formatted]
    write(1, " [ d i s k   f o r m a t".., 17)      = 17
    open("/etc/mnttab", O_RDONLY)                   = 4
    ioctl(4, (('m'<<8)|7), 0xFFBFFA64)              = 0
    open("/dev/rdsk/c0t1d0s0", O_RDWR|O_NDELAY)     = 5
    fstat(5, 0xFFBFF5E0)                            = 0
    ioctl(5, 0x0403, 0xFFBFF59C)                    = 0
    close(5)                                        = 0
    llseek(4, 0, SEEK_CUR)                          = 0
    close(4)                                        = 0
    fstat64(2, 0xFFBFEBA0)                          = 0
    Warning: Current Disk has mounted partitions.
    write(2, " W a r n i n g :   C u r".., 46)      = 46
    resolvepath("/", "/", 1024)                     = 1
    sysconfig(_CONFIG_PAGESIZE)                     = 8192
    open("/dev/.devlink_db", O_RDONLY)              = 4
    fstat(4, 0xFFBFF1F8)                            = 0
    mmap(0x00000000, 40, PROT_READ, MAP_SHARED, 4, 0) = 0xFEF50000
    mmap(0x00000000, 24576, PROT_READ, MAP_SHARED, 4, 32768) = 0xFEF38000
    open("/devices/pseudo/devinfo@0:devinfo", O_RDONLY) = 5
    ioctl(5, 0xDF82, 0x00000000)                    = 57311
    After failing to configure the LDOM, I booted from a Solaris DVD to reinstall the OS. It looks like e1000g0 has some kind of fault.
    {0} ok boot cdrom
    Boot device: /pci@7c0/pci@0/pci@1/pci@0/ide@8/cdrom@0,0:f  File and args:
    SunOS Release 5.10 Version Generic_118833-33 64-bit Copyright 1983-2006 Sun Microsystems, Inc.  All rights reserved.
    Use is subject to license terms.
    WARNING: mac_open e1000g0 failed
    WARNING: mac_open e1000g0 failed
    WARNING: mac_open e1000g0 failed
    WARNING: Unable to setup switching mode
    Configuring devices.
    WARNING: bypass cookie failure 71ece
    NOTICE: tavor0: error during attach: hw_init_eqinitall_fail
    NOTICE: tavor0: driver attached (for maintenance mode only)
    NOTICE: pciex8086,105e - e1000g[0] : Adapter 1000Mbps full duplex copper link is up.
    What is the meaning of this?
    Message was edited by:
    JoeChris@Sun

    Hi,
    In the case of bug 6530040, the recovery method in the Release Notes is to reboot the system. In my case, even after the reboot the vds still does not close the device. I suspect I might be rebooting the system the wrong way. Can you give me an example for a system with a control domain (primary) and a single guest domain (myldom1)?
    My steps would be as follows:
    $ ldm stop-domain myldom1
    $ ldm unbind-domain myldom1
    $ reboot
    Am I missing any step?
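    For comparison, a fuller teardown sequence might look like this (just a sketch, assuming the names from my earlier post; remove-vdiskserverdevice is the long form of rm-vdsdev):
    $ ldm stop-domain myldom1
    $ ldm unbind-domain myldom1
    $ ldm remove-vdisk vdisk1 myldom1
    $ ldm remove-vdiskserverdevice vol1@primary-vds0
    $ shutdown -i6 -g0 -y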
    LDoms also seems to have a problem releasing the network interface e1000g0. How do I release it?
    {0} ok boot cdrom
    Boot device: /pci@7c0/pci@0/pci@1/pci@0/ide@8/cdrom@0,0:f  File and args:
    SunOS Release 5.10 Version Generic_118833-33 64-bit Copyright 1983-2006 Sun Microsystems, Inc.  All rights reserved.
    Use is subject to license terms.
    WARNING: mac_open e1000g0 failed
    WARNING: mac_open e1000g0 failed
    WARNING: mac_open e1000g0 failed
    WARNING: Unable to setup switching mode
    Configuring devices.
    WARNING: bypass cookie failure 71ece
    NOTICE: tavor0: error during attach: hw_init_eqinitall_fail
    NOTICE: tavor0: driver attached (for maintenance mode only)
    NOTICE: pciex8086,105e - e1000g[0] : Adapter 1000Mbps full duplex copper link is up.
    Thank you
    Message was edited by:
    JoeChris@Sun

  • Solaris cluster 3.2 Sparc

    Hi folks
    First things first. I may not have great knowledge about Solaris clusters, so please be merciful :)
    Here it is what I have:
    - 2 x Netra T1 AC200, each with 1 GB RAM, 2 x 18 GB disks, a 500 MHz SPARC CPU, and a 4-port Ethernet card
    - 1 Netra D130 array with 3 x 36 GB disks
    - cables et al., switches, you name it
    So, I set up the OS, all OK. I set up the cluster, all SEEMS to be OK.
    But when I define my resources and such, all goes fine, except when I try to bring the resource group online.
    On another configuration I tested the shared logical hostname and it works fine.
    Group Name Resources
    Resources: ingresc nodec ingresr
    -- Resource Groups --
    Group Name Node Name State Suspended
    Group: ingresc node2 Unmanaged No
    Group: ingresc node1 Unmanaged No
    -- Resources --
    Resource Name Node Name State Status Message
    Resource: nodec node2 Offline Offline
    Resource: nodec node1 Offline Offline
    Resource: ingresr node2 Offline Offline
    Resource: ingresr node1 Offline Offline
    scswitch: (C969069) Request failed because resource group ingresc is in ERROR_STOP_FAILED state and requires operator attention
    Now, in /var/adm/messages I spotted this :
    Mar 6 17:09:03 node2 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <hafoip_stop> for resource <nodec>, resource group <IngresNCG>, node <node2>, timeout <300> seconds
    Mar 6 17:09:03 node2 Cluster.RGM.rgmd: [ID 510020 daemon.notice] 46 fe_rpc_command: cmd_type(enum):<1>:cmd=</usr/cluster/lib/rgm/rt/hafoip/hafoip_stop>:tag=<IngresNCG.nodec.1>: Calling security_clnt_connect(..., host=<node2>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    A little bit of research points in the direction of a bug (see CR 6565601)
    Here it is what I see as my options:
    1 - Reinstall the Solaris OS, but not Solaris Cluster 3.2, using Solaris Express 10/07 or 2/08 instead. But will this combination work? Or will it work only as the combination of Solaris Cluster Express and Solaris Express Developer Edition? If the latter, which versions will work together?
    2 - Beg for a Solaris Cluster 3.2 patch, although in my humble opinion this should be free, since it looks to me that once you write your own stuff you run into the bug, and after all it is education
    Any ideas, help, greatly appreciated
    Many thanks
    Armand

    Although the names are different since I used two setups, this is the relevant part of /var/adm/messages.
    It looks to me like the Ingres resource is failing:
    Mar  6 17:08:03 node2 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <hafoip_prenet_start> for resource <nodec>, resource group <IngresNCG>, node <node2>, timeout <300> seconds
    Mar  6 17:08:03 node2 Cluster.RGM.rgmd: [ID 510020 daemon.notice] 46 fe_rpc_command: cmd_type(enum):<1>:cmd=</usr/cluster/lib/rgm/rt/hafoip/hafoip_prenet_start>:tag=<IngresNCG.nodec.10>: Calling security_clnt_connect(..., host=<node2>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Mar  6 17:08:05 node2 svc.startd[8]: [ID 652011 daemon.warning] svc:/system/cluster/scsymon-srv:default: Method "/usr/cluster/lib/svc/method/svc_scsymon_srv start" failed with exit status 96.
    Mar  6 17:08:05 node2 svc.startd[8]: [ID 748625 daemon.error] system/cluster/scsymon-srv:default misconfigured: transitioned to maintenance (see 'svcs -xv' for details)
    Mar  6 17:08:09 node2 Cluster.RGM.rgmd: [ID 515159 daemon.notice] method <hafoip_prenet_start> completed successfully for resource <nodec>, resource group <IngresNCG>, node <node2>, time used: 1% of timeout <300 seconds>
    Mar  6 17:08:09 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nodec state on node node2 change to R_PRENET_STARTED
    Mar  6 17:08:09 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nodec state on node node2 change to R_STARTING
    Mar  6 17:08:09 node2 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <hafoip_start> for resource <nodec>, resource group <IngresNCG>, node <node2>, timeout <500> seconds
    Mar  6 17:08:09 node2 Cluster.RGM.rgmd: [ID 510020 daemon.notice] 46 fe_rpc_command: cmd_type(enum):<1>:cmd=</usr/cluster/lib/rgm/rt/hafoip/hafoip_start>:tag=<IngresNCG.nodec.0>: Calling security_clnt_connect(..., host=<node2>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Mar  6 17:08:11 node2 Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource nodec status on node node2 change to R_FM_ONLINE
    Mar  6 17:08:11 node2 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource nodec status msg on node node2 change to <LogicalHostname online.>
    Mar  6 17:08:11 node2 Cluster.RGM.rgmd: [ID 515159 daemon.notice] method <hafoip_start> completed successfully for resource <nodec>, resource group <IngresNCG>, node <node2>, time used: 0% of timeout <500 seconds>
    Mar  6 17:08:11 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nodec state on node node2 change to R_JUST_STARTED
    Mar  6 17:08:11 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nodec state on node node2 change to R_ONLINE_UNMON
    Mar  6 17:08:11 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource IngresNCR state on node node2 change to R_STARTING
    Mar  6 17:08:11 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nodec state on node node2 change to R_MON_STARTING
    Mar  6 17:08:11 node2 Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource IngresNCR status on node node2 change to R_FM_UNKNOWN
    Mar  6 17:08:11 node2 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource IngresNCR status msg on node node2 change to <Starting>
    Mar  6 17:08:11 node2 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <bin/ingres_server_start> for resource <IngresNCR>, resource group <IngresNCG>, node <node2>, timeout <300> seconds
    Mar  6 17:08:11 node2 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <hafoip_monitor_start> for resource <nodec>, resource group <IngresNCG>, node <node2>, timeout <300> seconds
    Mar  6 17:08:11 node2 Cluster.RGM.rgmd: [ID 510020 daemon.notice] 46 fe_rpc_command: cmd_type(enum):<1>:cmd=</global/disk2s0/ing_nc_1/ingresclu/bin/ingres_server_start>:tag=<IngresNCG.IngresNCR.0>: Calling security_clnt_connect(..., host=<node2>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Mar  6 17:08:11 node2 Cluster.RGM.rgmd: [ID 268902 daemon.notice] 45 fe_rpc_command: cmd_type(enum):<1>:cmd=</usr/cluster/lib/rgm/rt/hafoip/hafoip_monitor_start>:tag=<IngresNCG.nodec.7>: Calling security_clnt_connect(..., host=<node2>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Mar  6 17:08:12 node2 Cluster.RGM.rgmd: [ID 515159 daemon.notice] method <hafoip_monitor_start> completed successfully for resource <nodec>, resource group <IngresNCG>, node <node2>, time used: 0% of timeout <300 seconds>
    Mar  6 17:08:12 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nodec state on node node2 change to R_ONLINE
    Mar  6 17:08:13 node2 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource IngresNCR status msg on node node2 change to <Bringing Ingres DBMS server online.>
    Mar  6 17:08:30 node2 sendmail[534]: [ID 702911 mail.alert] unable to qualify my own domain name (node2) -- using short name
    Mar  6 17:08:30 node2 sendmail[535]: [ID 702911 mail.alert] unable to qualify my own domain name (node2) -- using short name
    Mar  6 17:08:31 node2 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource IngresNCR status msg on node node2 change to <Bringing Ingres DBMS server offline.>
    Mar  6 17:08:45 node2 SC[Ingres.ingres_server,IngresNCG,IngresNCR,stop]: [ID 147958 daemon.error] ERROR : HA-Ingres failed to stop.
    Mar  6 17:08:46 node2 Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource IngresNCR status on node node2 change to R_FM_FAULTED
    Mar  6 17:08:46 node2 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource IngresNCR status msg on node node2 change to <Ingres DBMS server faulted.>
    Mar  6 17:08:46 node2 SC[Ingres.ingres_server,IngresNCG,IngresNCR,start]: [ID 335575 daemon.error] ERROR : Stop method failed for the HA-Ingres data service.
    Mar  6 17:08:46 node2 Cluster.RGM.rgmd: [ID 938318 daemon.error] Method <bin/ingres_server_start> failed on resource <IngresNCR> in resource group <IngresNCG> [exit code <1>, time used: 11% of timeout <300 seconds>]
    Mar  6 17:08:46 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource IngresNCR state on node node2 change to R_START_FAILED
    Mar  6 17:08:46 node2 Cluster.RGM.rgmd: [ID 529407 daemon.notice] resource group IngresNCG state on node node2 change to RG_PENDING_OFF_START_FAILED
    Mar  6 17:08:46 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource IngresNCR state on node node2 change to R_STOPPING
    Mar  6 17:08:46 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nodec state on node node2 change to R_MON_STOPPING
    Mar  6 17:08:46 node2 Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource IngresNCR status on node node2 change to R_FM_UNKNOWN
    Mar  6 17:08:46 node2 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource IngresNCR status msg on node node2 change to <Stopping>
    Mar  6 17:08:46 node2 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <bin/ingres_server_stop> for resource <IngresNCR>, resource group <IngresNCG>, node <node2>, timeout <300> seconds
    Mar  6 17:08:46 node2 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <hafoip_monitor_stop> for resource <nodec>, resource group <IngresNCG>, node <node2>, timeout <300> seconds
    Mar  6 17:08:46 node2 Cluster.RGM.rgmd: [ID 510020 daemon.notice] 46 fe_rpc_command: cmd_type(enum):<1>:cmd=</global/disk2s0/ing_nc_1/ingresclu/bin/ingres_server_stop>:tag=<IngresNCG.IngresNCR.1>: Calling security_clnt_connect(..., host=<node2>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Mar  6 17:08:46 node2 Cluster.RGM.rgmd: [ID 268902 daemon.notice] 45 fe_rpc_command: cmd_type(enum):<1>:cmd=</usr/cluster/lib/rgm/rt/hafoip/hafoip_monitor_stop>:tag=<IngresNCG.nodec.8>: Calling security_clnt_connect(..., host=<node2>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Mar  6 17:08:47 node2 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource IngresNCR status msg on node node2 change to <Bringing Ingres DBMS server offline.>
    Mar  6 17:08:48 node2 Cluster.RGM.rgmd: [ID 515159 daemon.notice] method <hafoip_monitor_stop> completed successfully for resource <nodec>, resource group <IngresNCG>, node <node2>, time used: 0% of timeout <300 seconds>
    Mar  6 17:08:48 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nodec state on node node2 change to R_ONLINE_UNMON
    Mar  6 17:09:00 node2 SC[Ingres.ingres_server,IngresNCG,IngresNCR,stop]: [ID 147958 daemon.error] ERROR : HA-Ingres failed to stop.
    Mar  6 17:09:02 node2 Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource IngresNCR status on node node2 change to R_FM_FAULTED
    Mar  6 17:09:02 node2 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource IngresNCR status msg on node node2 change to <Ingres DBMS server faulted.>
    Mar  6 17:09:03 node2 Cluster.RGM.rgmd: [ID 938318 daemon.error] Method <bin/ingres_server_stop> failed on resource <IngresNCR> in resource group <IngresNCG> [exit code <2>, time used: 5% of timeout <300 seconds>]
    Mar  6 17:09:03 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource IngresNCR state on node node2 change to R_STOP_FAILED
    Mar  6 17:09:03 node2 Cluster.RGM.rgmd: [ID 529407 daemon.notice] resource group IngresNCG state on node node2 change to RG_PENDING_OFF_STOP_FAILED
    Mar  6 17:09:03 node2 Cluster.RGM.rgmd: [ID 424774 daemon.error] Resource group <IngresNCG> requires operator attention due to STOP failure
    Mar  6 17:09:03 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nodec state on node node2 change to R_STOPPING
    Mar  6 17:09:03 node2 Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource nodec status on node node2 change to R_FM_UNKNOWN
    Mar  6 17:09:03 node2 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource nodec status msg on node node2 change to <Stopping>
    Mar  6 17:09:03 node2 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <hafoip_stop> for resource <nodec>, resource group <IngresNCG>, node <node2>, timeout <300> seconds
    Mar  6 17:09:03 node2 Cluster.RGM.rgmd: [ID 510020 daemon.notice] 46 fe_rpc_command: cmd_type(enum):<1>:cmd=</usr/cluster/lib/rgm/rt/hafoip/hafoip_stop>:tag=<IngresNCG.nodec.1>: Calling security_clnt_connect(..., host=<node2>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Mar  6 17:09:04 node2 ip: [ID 678092 kern.notice] TCP_IOC_ABORT_CONN: local = 192.168.005.085:0, remote = 000.000.000.000:0, start = -2, end = 6
    Mar  6 17:09:04 node2 ip: [ID 302654 kern.notice] TCP_IOC_ABORT_CONN: aborted 0 connection
    Mar  6 17:09:04 node2 Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource nodec status on node node2 change to R_FM_OFFLINE
    Mar  6 17:09:04 node2 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource nodec status msg on node node2 change to <LogicalHostname offline.>
    Mar  6 17:09:04 node2 Cluster.RGM.rgmd: [ID 515159 daemon.notice] method <hafoip_stop> completed successfully for resource <nodec>, resource group <IngresNCG>, node <node2>, time used: 0% of timeout <300 seconds>
    Mar  6 17:09:04 node2 Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource nodec state on node node2 change to R_OFFLINE
    Mar  6 17:09:04 node2 Cluster.RGM.rgmd: [ID 529407 daemon.notice] resource group IngresNCG state on node node2 change to RG_ERROR_STOP_FAILED
    Mar  6 17:09:04 node2 Cluster.RGM.rgmd: [ID 424774 daemon.error] Resource group <IngresNCG> requires operator attention due to STOP failure
    Mar  6 17:09:04 node2 Cluster.RGM.rgmd: [ID 663692 daemon.error] failback attempt failed on resource group <IngresNCG> with error <resource group in ERROR_STOP_FAILED state requires operator attention>
    Mar  6 17:09:10 node2 java[1652]: [ID 807473 user.error] pkcs11_softtoken: Keystore version failure.
    Thank you
    Armand
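    For the record, the usual operator action for a resource stuck in STOP_FAILED is to clear the error flag and retry the group; a sketch using the resource/group names from these logs:
    # clresource clear -f STOP_FAILED IngresNCR
    # clresourcegroup offline IngresNCG
    # clresourcegroup online IngresNCG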
