Cluster node reboots after network failure

hi all,
Our Sun Cluster 3.1 8/05 with two nodes (E2900) was working fine, with no errors from sccheck.
Yesterday one node rebooted, reporting a network failure. The errors in the messages file are:
Jan 17 08:00:36 PRD in.mpathd[221]: [ID 594170 daemon.error] NIC failure detected on ce0 of group sc_ipmp0
Jan 17 08:00:36 PRD Cluster.PNM: [ID 890413 daemon.notice] sc_ipmp0: state transition from OK to DOWN.
Jan 17 08:00:47 PRD Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource PROD status on node PRD change to R_FM_DEGRADED
Jan 17 08:00:47 PRD Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource PROD status msg on node PRD change to <IPMP Failure.>
Jan 17 08:00:50 PRD Cluster.RGM.rgmd: [ID 529407 daemon.notice] resource group CFS state on node PRD change to RG_PENDING_OFFLINE
Jan 17 08:00:50 PRD Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource PROD state on node PRD change to R_MON_STOPPING
Jan 17 08:00:50 PRD Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <hafoip_monitor_stop> for resource <PROD>, resource group <CFS>, timeout <300> seconds
Jan 17 08:00:50 PRD Cluster.RGM.rgmd: [ID 736390 daemon.notice] method <hafoip_monitor_stop> completed successfully for resource <PROD>, resource group <CFS>, time used: 0% of timeout <300 seconds>
Jan 17 08:00:50 PRD Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource PROD state on node PRD change to R_ONLINE_UNMON
Jan 17 08:00:50 PRD Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource PROD state on node PRD change to R_STOPPING
Jan 17 08:00:50 PRD Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <hafoip_stop> for resource <PROD>, resource group <CFS>, timeout <300> seconds
Jan 17 08:00:50 PRD Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource PROD status on node PRD change to R_FM_UNKNOWN
Jan 17 08:00:50 PRD Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource PROD status msg on node PRD change to <Stopping>
Jan 17 08:00:51 PRD ip: [ID 678092 kern.notice] TCP_IOC_ABORT_CONN: local = 172.016.005.025:0, remote = 000.000.000.000:0, start = -2, end = 6
Jan 17 08:00:51 PRD ip: [ID 302654 kern.notice] TCP_IOC_ABORT_CONN: aborted 53 connections
What can be the reason for the reboot?
Is there any way to avoid this, with only a failover?
rgds

What is in that resource group? The cause is probably a resource with Failover_mode=HARD set. Check the manual's reference section for this property. The option would be to set Failover_mode=SOFT.
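For reference, the property can be inspected and changed with scrgadm on Sun Cluster 3.1. This is only a sketch: the resource and group names (PROD, CFS) are taken from the log above, and whether the property can be changed while the resource is online depends on the release, so verify against the man pages first.

```shell
# Show the current Failover_mode of the logical-host resource
scrgadm -pvv -j PROD | grep -i failover_mode

# Change it so an IPMP failure degrades / fails over instead of rebooting the node
scrgadm -c -j PROD -y Failover_mode=SOFT

# Verify resource and resource-group state afterwards
scstat -g
```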
Tim
---

Similar Messages

  • Oracle Cluster Node Reboots Abruptly

    One of our RAC 11gR2 cluster nodes rebooted abruptly. We found the following error in the Grid home alert log file and the ocssd.log file:
    [cssd(6014)]CRS-1611:Network communication with node mumchora12 (1) missing for 75% of timeout interval. Removal of this node from cluster in 6.190 seconds
    We need to find the root cause for this node reboot. Kindly assist.
    OS Version : RHEL 5.8
    GRID : 11.2.0.2
    Database : 11.2.0.2.10

    Hi,
    Looking at the logs, this seems to be a private interconnect problem. I would suggest you refer to a nice Metalink note on the same issue:
    Node reboot or eviction: How to check if your private interconnect CRS can transmit network heartbeats [ID 1445075.1]
    Hope it helps you identify the root cause of the node eviction.
    Thanks
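    A few quick sanity checks on the interconnect are worth running alongside that note. This is a sketch only; the address shown is an example, not taken from this system:

    ```shell
    # Confirm which network is registered as the cluster interconnect
    $GRID_HOME/bin/oifcfg getif

    # From each node, verify the private address of the peer answers (example address)
    ping -c 3 192.168.1.2

    # Check the CSS timeout governing eviction (typically 30s on 11.2)
    $GRID_HOME/bin/crsctl get css misscount
    ```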

  • Cluster node fails after testing removing both interconnects in a two node

    Hi,
    A cluster node panics and fails to rejoin the cluster after testing removal of both interconnects in a two-node cluster. The cluster is up on one node, but the panicked node fails to rejoin, saying there is no sufficient quorum yet and both cluster interconnects failed (even after reconnecting the interconnects). The quorum device used is a shared disk.
    Is this a bug?
    Any workaround or solution?
    Cluster is 3.2 SPARC
    Thanking you
    Ushas Symon

    Sounds like a networking problem to me. If the failed node genuinely can't communicate with the remaining node, then it will not be allowed to join the cluster, hence the quorum message. I would suspect either:
    * Misconnected cables
    * A switch that has blocked or disabled the port
    * A failed auto-negotiation
    This is, of course, without knowing anything about what your network infrastructure actually is!
    Tim
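    On Solaris 10 those suspicions can be checked from the node itself. A sketch (the ce0 NIC name is an example, following the original post):

    ```shell
    # Link state, speed and duplex of each data link
    dladm show-dev

    # Per-interface kernel link statistics for a suspect NIC
    kstat -p ce:0 | egrep 'link_up|link_speed|link_duplex'

    # The cluster's own view of the transport (interconnect) paths
    scstat -W
    ```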
    ---

  • Cluster node reboots repeatedly

    We have a two-node 10.1.0.3 cluster setup. We had a problem with an HBA card for the Fibre Channel to the SAN, and after replacing it, one of the cluster nodes keeps rebooting itself right after the cluster processes start up.
    We have had this issue once before and Support suggested the following. However, the same solution is not working this time around. Any ideas?
    Check that the output of the Unix command hostname is node1.
    Please rename the cssnorun file in the /etc/oracle/scls_scr/node1/root directory. Then issue "touch /etc/oracle/scls_scr/node1/root/crsdboot" and change the permissions and ownership of the file to match those on node 2. Please check whether there are any differences in permissions, ownership, or group for any files or directories under /etc/oracle between the two nodes.
    Please reboot node 1 after this change and see if you run into the same problem.
    Please check if there is any /tmp/crsctl* files.

    Well, especially if you are on Linux RH4, the new controller card will have caused the device names to change. Check that out. It could be that you are no longer seeing your vote and CRS partitions. This can happen on other operating systems too if the devices get new names because the controller card has changed.
    For Linux, try the man pages on udev, and search for udev on OTN.
    Regards
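    As a sketch of that udev approach: a rule keyed on the disk's SCSI serial gives the vote/OCR partition a stable name regardless of controller enumeration order. The serial number, symlink name and ownership below are hypothetical, and udev rule syntax differs between RHEL4-era and current versions, so check the udev man page for your release:

    ```
    # /etc/udev/rules.d/99-oracle.rules (hypothetical example)
    # Match the partition by its SCSI serial and create a stable symlink
    KERNEL=="sd?1", ENV{ID_SERIAL}=="36006016012345678", \
        SYMLINK+="oracle/vote1", OWNER="oracle", GROUP="oinstall", MODE="0640"
    ```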

  • Both cluster node reboot

    There is a two-node cluster running an Oracle RAC DB. Yesterday both nodes rebooted at the same time (less than a few seconds apart). We don't know whether it was caused by Oracle CRS or the server itself.
    Here is the log:
    /var/log/messages in node 1
    Dec 8 15:14:38 dc01locs01 kernel: 493 http://RAIDarray.mppdcsgswsst6140:1:0:2 Cmnd failed-retry the same path. vcmnd SN 18469446 pdev H3:C0:T0:L2 0x02/0x04/0x01 0x08000002 mpp_status:1
    Dec 8 15:14:38 dc01locs01 kernel: 493 http://RAIDarray.mppdcsgswsst6140:1:0:2 Cmnd failed-retry the same path. vcmnd SN 18469448 pdev H3:C0:T0:L2 0x02/0x04/0x01 0x08000002 mpp_status:1
    Dec 8 15:17:20 dc01locs01 syslogd 1.4.1: restart.
    Dec 8 15:17:20 dc01locs01 kernel: klogd 1.4.1, log source = /proc/kmsg started.
    Dec 8 15:17:20 dc01locs01 kernel: Linux version 2.6.18-128.7.1.0.1.el5 ([email protected]) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-44)) #1 SMP Mon Aug 24 14:07:09 EDT 2009
    Dec 8 15:17:20 dc01locs01 kernel: Command line: ro root=/dev/vg00/root rhgb quiet crashkernel=128M@16M
    Dec 8 15:17:20 dc01locs01 kernel: BIOS-provided physical RAM map:
    ocssd.log in node 1
    CSSD2009-12-08 15:14:33.467 1134680384 >TRACE: clssgmDispatchCMXMSG: msg type(13) src(2) dest(1) size(123) tag(00000000) incarnation(148585637)
    CSSD2009-12-08 15:14:33.468 1134680384 >TRACE: clssgmHandleDataInvalid: grock HB+ASM, member 2 node 2, birth 1
    CSSD2009-12-08 15:19:00.217 >USER: Copyright 2009, Oracle version 11.1.0.7.0
    CSSD2009-12-08 15:19:00.217 >USER: CSS daemon log for node dc01locs01, number 1, in cluster ocsprodrac
    clsdmtListening to (ADDRESS=(PROTOCOL=ipc)(KEY=dc01locs01DBG_CSSD))
    CSSD2009-12-08 15:19:00.235 1995774848 >TRACE: clssscmain: Cluster GUID is 79db6803afc7df32ffd952110f22702c
    CSSD2009-12-08 15:19:00.239 1995774848 >TRACE: clssscmain: local-only set to false
    /var/log/messages in node 2
    Dec 8 15:14:38 dc01locs02 kernel: 493 http://RAIDarray.mppdcsgswsst6140:1:0:2 Cmnd failed-retry the same path. vcmnd SN 18561465 pdev H3:C0:T0:L2 0x02/0x04/0x01 0x08000002 mpp_status:1
    Dec 8 15:14:38 dc01locs02 kernel: 493 http://RAIDarray.mppdcsgswsst6140:1:0:2 Cmnd failed-retry the same path. vcmnd SN 18561463 pdev H3:C0:T0:L2 0x02/0x04/0x01 0x08000002 mpp_status:1
    Dec 8 15:17:14 dc01locs02 syslogd 1.4.1: restart.
    Dec 8 15:17:14 dc01locs02 kernel: klogd 1.4.1, log source = /proc/kmsg started.
    Dec 8 15:17:14 dc01locs02 kernel: Linux version 2.6.18-128.7.1.0.1.el5 ([email protected]) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-44)) #1 SMP Mon Aug 24 14:07:09 EDT 2009
    Dec 8 15:17:14 dc01locs02 kernel: Command line: ro root=/dev/vg00/root rhgb quiet crashkernel=128M@16M
    Dec 8 15:17:14 dc01locs02 kernel: BIOS-provided physical RAM map:
    ocssd.log in node 2
    CSSD2009-12-08 15:14:35.450 1264081216 >TRACE: clssgmExecuteClientRequest: Received data update request from client (0x2aaaac065a00), type 1
    CSSD2009-12-08 15:14:36.909 1127713088 >TRACE: clssgmDispatchCMXMSG: msg type(13) src(1) dest(1) size(123) tag(00000000) incarnation(148585637)
    CSSD2009-12-08 15:14:36.909 1127713088 >TRACE: clssgmHandleDataInvalid: grock HB+ASM, member 1 node 1, birth 0
    CSSD2009-12-08 15:18:55.047 >USER: Copyright 2009, Oracle version 11.1.0.7.0
    clsdmtListening to (ADDRESS=(PROTOCOL=ipc)(KEY=dc01locs02DBG_CSSD))
    CSSD2009-12-08 15:18:55.047 >USER: CSS daemon log for node dc01locs02, number 2, in cluster ocsprodrac
    CSSD2009-12-08 15:18:55.071 3628915584 >TRACE: clssscmain: Cluster GUID is 79db6803afc7df32ffd952110f22702c
    CSSD2009-12-08 15:18:55.077 3628915584 >TRACE: clssscmain: local-only set to false

    Hi!
    I suppose this one is easy: you have a device at 'http://RAIDarray.mppdcsgswsst6140:1:0:2' (a RAID array, perhaps?) which failed. Logically, all servers connected to this RAID went down at the same time.
    It seems to be no Oracle problem. Good luck!

  • IMac running 10.9 not rebooting after network issues

    This is weird, and while I am going through Apple's Knowledgebase steps - so far without success - I thought I'd post here to see if anyone else experienced a similar issue.
    I was having network issues (or what I think/thought were network issues) for about a week. The symptoms looked like high packet loss, i.e. websites would simply not load reliably: sometimes not load at all, sometimes load partially. This started with some smaller websites and eventually it failed to load Google and Facebook. It took me some time to notice that other devices on the same WiFi did not have this issue, so yesterday I decided to reboot my iMac. Network Diagnostics didn't show any issue and I was able to connect to the WiFi router just fine, just not reach any websites, so I thought a reboot might fix it.
    I have not been able to boot it back up since. It stalls at the grey loading screen with the Apple logo and a spinning wheel forever (I've let it sit for hours). Safe boot does not work, and starting in Verbose mode hangs at "executing fsck_hfs" and doesn't seem to do anything else. I've booted from the install disc (actually a USB drive) and ran Disk Utility from there, but it showed no disk errors at all. It did repair a handful of permissions (I only saw iBooks-related stuff and something about etc/apache2/users), apparently successfully, but it's still not booting.
    Disconnecting my external Time Machine drive and unplugging the iMac from my surge protector didn't help, either. I am now creating a new image from the partition just to have an additional backup on top of my Time Machine, and I'm going to try Archive and Install next, which is the next to last step in the Knowledgebase.
    I do have two partitions on the Mac (one 10.9 and the other 10.7), by the way, and booting into the older partition seemed to work right away. It's just that I don't have any use for that other partition right now, it's a leftover from back when I migrated to Mavericks.
    I am baffled how all of this connects together. Network issues, boot up failure without apparent disk issues? Has anyone else seen anything like this?

    I am still working on restoring my iMac with 10.9.
    I've cloned the affected partition. I used Disk Utility and restored it, without apparent issues, onto an external drive. So I tried booting from that drive, but I get the same result as booting from the partition directly. Not a major surprise, I guess, since this is supposed to be an identical copy.
    I have also created a dmg image out of the partition, and I was able to open that on my MBP without problems! All the files seem to be there. It seems to be something specific with the, uh, boot sector or, uh, something?
    Next, I booted into Single User mode and ran fsck -fy. The Knowledgebase says it's okay to do it if Safe Mode doesn't work, right? The first time it ran, I got conflicting results. It said both
    ** The volume appears to be OK
    **THE VOLUME WAS MODIFIED**
    The Knowledgebase seems to suggest that it should only say one or the other?
    Running it again, it said
    ** The volume appears to be OK
    Alas, rebooting it still doesn't work, it continues hanging at the Apple logo.

  • Cluster node reboot and Quick Migration of VMs instead of Live Migration...

    Hi to all,
    how can one configure a Windows Server 2012 multi-node failover cluster so that VMs are migrated via Live Migration, and NOT via Quick Migration, when one node of the failover cluster is rebooted?
    Thanks in advance
    Joerg

    Hi Aidan,
    only for the record:
    We get the requested functionality - live-migrating all VMs on reboot without first pausing the cluster - when we do the following:
    Change the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\PreshutdownOrder
    from the default
    vmms
    wuauserv
    gpsvc
    trustedinstall
    to
    clussvc
    vmms
    wuauserv
    gpsvc
    trustedinstall
    Now the cluster service stops first when we trigger a reboot, and all VMs migrate as configured per the MoveTypeThreshold cluster setting.
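    For the record, that registry change can be scripted. A sketch using reg.exe; back up the key first, and note that in a REG_MULTI_SZ value passed to reg add, \0 separates the entries (the backup file name is hypothetical):

    ```bat
    :: Export the current key as a backup before editing (hypothetical file name)
    reg export "HKLM\SYSTEM\CurrentControlSet\Control" control-backup.reg

    :: Put clussvc in front of the default preshutdown order
    reg add "HKLM\SYSTEM\CurrentControlSet\Control" /v PreshutdownOrder ^
        /t REG_MULTI_SZ /d "clussvc\0vmms\0wuauserv\0gpsvc\0trustedinstall" /f
    ```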
    Greetings
    Joerg

  • Behaviour after network failure

    I am running two systems connected via Coherence Extend. I have a nearscheme on each and a backing map is connected via extend. I'd just like to clarify how Coherence should behave in the following scenarios. The backing map of system A is populated with 20 objects, system B now loads an object that resides in the backingmap of system A. B should have access to this object correct? If the WAN between the two systems goes down, should the backing map of both systems then both contain those 20 objects? If while in this 'island' state another 40 documents are saved into system B's backing map (A now has 20 and B has 60) how will the backing maps react once the WAN comes back up? Which objects will have precedence and is there any hook into this process of re-synchronisation so that custom logic can be applied to this process?
    Thanks
    Richard

    Another question to add is this. If I have two members sharing a distributed cache, they are part of the same cluster, and my cluster has two members. Now if I connect two members via a distributed cache using Extend, I have two clusters with two separate members. Is there any way to make my Extend members part of the same cluster?
    The reason I ask this is as follows. If I have two members on the same box connected via a distributed cache and put 10 objects in member 1, I can then call invokeAll on member 2. This results in roughly 5 objects on each member responding to the invokeAll. This is as I expect, as the distributed cache has done its job and balanced the load over both members. Now if I run the above scenario in the Extend situation, when I call invokeAll on member 2 the cache contains no items, as they all reside in member 1. Is there any way to force a load balance when the two members of the distributed cache are connected via Extend?
    Richard

  • [SOLVED] How to cancel systemd-fsck during reboot after power failure.

    Hello
    If power fails (or if I, for any reason, force a physical shutdown ) on my computer, it will display during boot :
    systemd-fsck[171] : arch_data was not cleanly unmounted, check forced
    It then hangs for a long time (10 minutes) while checking the arch_home partition, which is a 1 TB ext4 partition, and then finishes the boot.
    Sometimes I want this behavior, but I may need to have my computer up and running as fast as possible, and I don't seem to be able to cancel this fsck.
    Ctrl+C and Escape have no effect.
    How can I allow cancellation of systemd-fsck for this boot, postponing it until the next boot?
    my fstab :
    # /etc/fstab: static file system information
    # <file system> <dir> <type> <options> <dump> <pass>
    # UUID=cfdf8739-d512-4d99-9893-437a6a3c9bf4 LABEL=Arch root
    /dev/sda12 / ext4 rw,relatime,data=ordered 0 2
    # UUID=98b01aa3-0f7f-4777-a941-8e676a68adce LABEL=Arch boot
    /dev/sda11 /boot ext2 rw,relatime 0 2
    # UUID=8909c168-5f1e-4ae7-974c-3c681237af7a LABEL=Arch var
    /dev/sda13 /var ext4 rw,relatime,data=ordered 0 2
    # UUID=a13efc24-cf66-44d0-b26c-5bb5260627a0 LABEL=Arch tmp
    /dev/sda14 /tmp ext4 rw,relatime,data=ordered 0 2
    # UUID=779aeb69-9360-4df0-af84-da385b7117d1 LABEL=Arch home
    /dev/sdb4 /home ext4 rw,relatime,data=ordered 0 2
    /dev/sdb5 /home/glow/data ext4 rw,relatime,data=ordered 0 2

    Maybe you can add a menu item to the GRUB boot menu so you can pick it from the menu instead of editing the GRUB line by hand?
    I'm using syslinux, but I can have menu items that differ only in, e.g., the 'quiet' parameter:
    APPEND root=/dev/sda3 rw init=/usr/lib/systemd/systemd quiet
    APPEND root=/dev/sda3 rw init=/usr/lib/systemd/systemd
    Everything else is the same.
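    A variant of the same idea: systemd-fsck honours the fsck.mode= kernel parameter, so a second menu entry that skips the check entirely avoids hand-editing at boot. A sketch of a syslinux entry; the root device follows the example lines above (not the poster's fstab), and the kernel/initramfs paths are assumptions for a typical Arch install:

    ```
    LABEL arch-skipfsck
        MENU LABEL Arch Linux (skip fsck)
        LINUX ../vmlinuz-linux
        APPEND root=/dev/sda3 rw init=/usr/lib/systemd/systemd fsck.mode=skip
        INITRD ../initramfs-linux.img
    ```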

  • OES2 SP2a cluster node freeze

    Hi all.
    I have a 3-node cluster based on OES2 SP2a, fully patched. There are a couple of resources: Master_IP and an NSS volume.
    The cluster is virtualized on ESXi 4.1, fully patched, and vmware-tools are installed and up to date.
    If I do an "rcnetwork stop" on a node, it remains with no network for about 20 seconds, and then freezes. It does not reboot; it only freezes. The resource fails over correctly, but the server remains hung.
    This behaviour is the same on a server with a cluster resource on it and on a server with no cluster resource on it. It always hangs.
    The correct behaviour should be a reboot, shouldn't it?
    Any hints?
    Thanks in advance.

    The node does not reboot because ....
    9.11 Preventing a Cluster Node Reboot after a Node Shutdown
    If LAN connectivity is lost between a cluster node and the other nodes in the cluster, it is possible that the lost node will be automatically shut down by the other cluster nodes. This is normal cluster operating behavior, and it prevents the lost node from trying to load cluster resources because it cannot detect the other cluster nodes. By default, cluster nodes are configured to reboot after an automatic shutdown.
    On certain occasions, you might want to prevent a downed cluster node from rebooting so you can troubleshoot problems.
    Section 9.11.1, OES 2 SP2 with Patches and Later
    Section 9.11.2, OES 2 SP2 Release Version and Earlier
    9.11.1 OES 2 SP2 with Patches and Later
    Beginning in the OES 2 SP2 Maintenance Patch for May 2010, the Novell Cluster Services reboot behavior conforms to the kernel panic setting for the Linux operating system. By default the kernel panic setting is set for no reboot after a node shutdown.
    You can set the kernel panic behavior in the /etc/sysctl.conf file by adding a kernel.panic command line. Set the value to 0 for no reboot after a node shutdown. Set the value to a positive integer value to indicate that the server should be rebooted after waiting the specified number of seconds. For information about the Linux sysctl, see the Linux man pages on sysctl and sysctl.conf.
    1.
    As the root user, open the /etc/sysctl.conf file in a text editor.
    2.
    If the kernel.panic token is not present, add it.
    kernel.panic = 0
    3.
    Set the kernel.panic value to 0 or to a positive integer value, depending on the desired behavior.
    No Reboot: To prevent an automatic cluster reboot after a node shutdown, set the kernel.panic token to a value of 0. This allows the administrator to determine what caused the kernel panic condition before manually rebooting the server. This is the recommended setting.
    kernel.panic = 0
    Reboot: To allow a cluster node to reboot automatically after a node shutdown, set the kernel.panic token to a positive integer value that represents the seconds to delay the reboot.
    kernel.panic = <seconds>
    For example, to wait 1 minute (60 seconds) before rebooting the server, specify the following:
    kernel.panic = 60
    4.
    Save your changes.
    9.11.2 OES 2 SP2 Release Version and Earlier
    In the OES 2 SP2 release version and earlier, you can modify the /opt/novell/ncs/bin/ldncs file to cause the server not to automatically reboot after a shutdown.
    1.
    Open the /opt/novell/ncs/bin/ldncs file in a text editor.
    2.
    Find the following line:
    echo -n $TOLERANCE > /proc/sys/kernel/panic
    3.
    Replace $TOLERANCE with a value of 0 to cause the server to not automatically reboot after a shutdown.
    4.
    After editing the ldncs file, you must reboot the server to cause the change to take effect.
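    To confirm which behaviour a node currently has, the live value can be read from /proc without rebooting. A minimal sketch (changing the value requires root):

    ```shell
    # Read the live kernel.panic value; 0 means no automatic reboot after a panic,
    # a positive number is the delay in seconds before the reboot
    current=$(cat /proc/sys/kernel/panic 2>/dev/null || echo unknown)
    echo "kernel.panic is currently ${current}"

    # To change it at runtime (as root), e.g. reboot 60 seconds after a panic:
    #   sysctl -w kernel.panic=60
    ```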

  • Cluster Node paused

    Hi there
    My Setup:
    2 Cluster Nodes (HP DL380 G7 & HP DL380 Gen8)
    HP P2000 G3 FC MSA (MPIO)
    The Gen8 cluster node pauses after a few minutes, but stays online if the G7 is paused (no drain). My troubleshooting has led me to believe that there is a problem with the Cluster Shared Volume:
    00001508.000010b4::2015/02/19-14:51:14.189 INFO  [RES] Network Name: Agent: Sending request Netname/RecheckConfig to NN:cf2dec1d-ee88-4fb6-a86d-0c2d1aa888b4:Netbios
    00000d1c.0000299c::2015/02/19-14:51:14.615 INFO  [API] s_ApiGetQuorumResource final status 0.
    00000d1c.0000299c::2015/02/19-14:51:14.616 INFO  [RCM [RES] Virtual Machine VirtualMachine1 embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
    00001508.000010b4::2015/02/19-14:51:15.010 INFO  [RES] Network Name <Cluster Name>: Getting Read only private properties
    00000d1c.00002294::2015/02/19-14:51:15.096 INFO  [API] s_ApiGetQuorumResource final status 0.
    00000d1c.00002294::2015/02/19-14:51:15.121 INFO  [API] s_ApiGetQuorumResource final status 0.
    000014a8.000024f4::2015/02/19-14:51:15.269 INFO  [RES] Physical Disk <Quorum>: VolumeIsNtfs: Volume
    \\?\GLOBALROOT\Device\Harddisk1\ClusterPartition2\ has FS type NTFS
    00000d1c.00002294::2015/02/19-14:51:15.343 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQ's DLL is not present on this node.  Attempting to find a good node...
    00000d1c.00002294::2015/02/19-14:51:15.352 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQTriggers's DLL is not present on this node.  Attempting to find a good node...
    000014a8.000024f4::2015/02/19-14:51:15.386 INFO  [RES] Physical Disk: HardDiskpQueryDiskFromStm: ClusterStmFindDisk returned device='\\?\mpio#disk&ven_hp&prod_p2000_g3_fc&rev_t250#1&7f6ac24&0&36304346463030314145374646423434393243353331303030#{53f56307-b6bf-11d0-94f2-00a0c91efb8b}'
    000014a8.000024f4::2015/02/19-14:51:15.386 ERR   [RES] Physical Disk: HardDiskpGetDiskInfo: GetVolumeInformation failed for
    \\?\GLOBALROOT\Device\Harddisk3\ClusterPartition2\, status 3
    000014a8.000024f4::2015/02/19-14:51:15.386 ERR   [RES] Physical Disk: HardDiskpGetDiskInfo: failed to get partition size for
    \\?\GLOBALROOT\Device\Harddisk3\ClusterPartition2\, status 3
    00000d1c.00001420::2015/02/19-14:51:15.847 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQ's DLL is not present on this node.  Attempting to find a good node...
    00000d1c.00001420::2015/02/19-14:51:15.855 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQTriggers's DLL is not present on this node.  Attempting to find a good node...
    000014a8.000024f4::2015/02/19-14:51:15.887 INFO  [RES] Physical Disk: HardDiskpQueryDiskFromStm: ClusterStmFindDisk returned device='\\?\mpio#disk&ven_hp&prod_p2000_g3_fc&rev_t250#1&7f6ac24&0&36304346463030314145374646423434393243353331303030#{53f56307-b6bf-11d0-94f2-00a0c91efb8b}'
    000014a8.000024f4::2015/02/19-14:51:15.888 ERR   [RES] Physical Disk: HardDiskpGetDiskInfo: GetVolumeInformation failed for
    \\?\GLOBALROOT\Device\Harddisk3\ClusterPartition2\, status 3
    000014a8.000024f4::2015/02/19-14:51:15.888 ERR   [RES] Physical Disk: HardDiskpGetDiskInfo: failed to get partition size for
    \\?\GLOBALROOT\Device\Harddisk3\ClusterPartition2\, status 3
    00000d1c.00001420::2015/02/19-14:51:15.928 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQ's DLL is not present on this node.  Attempting to find a good node...
    00000d1c.00001420::2015/02/19-14:51:15.939 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQTriggers's DLL is not present on this node.  Attempting to find a good node...
    000014a8.000024f4::2015/02/19-14:51:15.968 INFO  [RES] Physical Disk: HardDiskpQueryDiskFromStm: ClusterStmFindDisk returned device='\\?\mpio#disk&ven_hp&prod_p2000_g3_fc&rev_t250#1&7f6ac24&0&36304346463030314145374646423434393243353331303030#{53f56307-b6bf-11d0-94f2-00a0c91efb8b}'
    000014a8.000024f4::2015/02/19-14:51:15.969 ERR   [RES] Physical Disk: HardDiskpGetDiskInfo: GetVolumeInformation failed for
    \\?\GLOBALROOT\Device\Harddisk3\ClusterPartition2\, status 3
    000014a8.000024f4::2015/02/19-14:51:15.969 ERR   [RES] Physical Disk: HardDiskpGetDiskInfo: failed to get partition size for
    \\?\GLOBALROOT\Device\Harddisk3\ClusterPartition2\, status 3
    00000d1c.00001420::2015/02/19-14:51:16.005 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQ's DLL is not present on this node.  Attempting to find a good node...
    00000d1c.00001420::2015/02/19-14:51:16.015 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQTriggers's DLL is not present on this node.  Attempting to find a good node...
    000014a8.000024f4::2015/02/19-14:51:16.059 INFO  [RES] Physical Disk: HardDiskpQueryDiskFromStm: ClusterStmFindDisk returned device='\\?\mpio#disk&ven_hp&prod_p2000_g3_fc&rev_t250#1&7f6ac24&0&36304346463030314145374646423434393243353331303030#{53f56307-b6bf-11d0-94f2-00a0c91efb8b}'
    000014a8.000024f4::2015/02/19-14:51:16.059 ERR   [RES] Physical Disk: HardDiskpGetDiskInfo: GetVolumeInformation failed for
    \\?\GLOBALROOT\Device\Harddisk3\ClusterPartition2\, status 3
    000014a8.000024f4::2015/02/19-14:51:16.059 ERR   [RES] Physical Disk: HardDiskpGetDiskInfo: failed to get partition size for
    \\?\GLOBALROOT\Device\Harddisk3\ClusterPartition2\, status 3
    00000d1c.00002568::2015/02/19-14:51:17.110 INFO  [GEM] Node 1: Deleting [2:395 , 2:396] (both included) as it has been ack'd by every node
    00000d1c.0000299c::2015/02/19-14:51:17.444 INFO  [RCM [RES] Virtual Machine VirtualMachine2 embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
    00000d1c.0000299c::2015/02/19-14:51:18.103 INFO  [RCM] rcm::DrainMgr::PauseNodeNoDrain: [DrainMgr] PauseNodeNoDrain
    00000d1c.0000299c::2015/02/19-14:51:18.103 INFO  [GUM] Node 1: Processing RequestLock 1:164
    00000d1c.00002568::2015/02/19-14:51:18.104 INFO  [GUM] Node 1: Processing GrantLock to 1 (sent by 2 gumid: 1470)
    00000d1c.0000299c::2015/02/19-14:51:18.104 INFO  [GUM] Node 1: executing request locally, gumId:1471, my action: /nsm/stateChange, # of updates: 1
    00000d1c.00001420::2015/02/19-14:51:18.104 INFO  [DM] Starting replica transaction, paxos: 99:99:50133, smartPtr: HDL( c9b16cf1e0 ), internalPtr: HDL( c9b21
    This issue has been bugging me for some time now. The cluster is fully functional and works great until the node gets paused again. I've read somewhere that the MSMQ errors can be ignored, but I can't find anything about the
    HardDiskpGetDiskInfo: GetVolumeInformation failed messages. There are no errors in the SAN or the server event logs. Drivers and firmware are up to date. Any help would be greatly appreciated.
    Best regards

    Thank you for your replies.
    First, some information I left out of my original post: we're using Windows Server 2012 R2 Datacenter and are currently only hosting virtual machines on the cluster.
    I did some testing over the weekend, including a firmware update on the SAN and a cluster validation.
    The problem doesn't seem to be related to backup. We use Microsoft DPM to make a full express backup once a day; the GetVolumeInformation failed error gets logged periodically every half hour.
    Excerpts from the validation report:
    Validate Disk Failover
    Description: Validate that a disk can fail over successfully with
    data intact.
    Start: 21.02.2015 18:02:17.
    Node Node2 holds the SCSI PR on Test Disk 3
    and brought the disk online, but failed in its attempt to write file data to
    partition table entry 1. The disk structure is corrupted and
    unreadable.
    Stop: 21.02.2015 18:02:37.
    Node Node1 holds the SCSI PR on Test Disk 3
    and brought the disk online, but failed in its attempt to write file data to
    partition table entry 1. The disk structure is corrupted and unreadable.
    Validate File System
    Description: Validate that the file system on disks in shared
    storage is supported by failover clusters and Cluster Shared Volumes (CSVs).
    Failover cluster physical disk resources support NTFS, ReFS, FAT32, FAT, and
    RAW. Only volumes formatted as NTFS or ReFS are accessible in disks added as
    CSVs.
    The test was canceled.
    Validate Simultaneous Failover
    Description: Validate that disks can fail over simultaneously with
    data intact.
    The test was canceled.
    Validate Storage Spaces Persistent Reservation
    Description: Validate that storage supports the SCSI-3 Persistent
    Reservation commands needed by Storage Spaces to support clustering.
    Start: 21.02.2015 18:01:00.
    Verifying there are no Persistent Reservations, or Registration
    keys, on Test Disk 3 from node Node1. Issuing Persistent Reservation REGISTER AND IGNORE EXISTING KEY
    using RESERVATION KEY 0x0 SERVICE ACTION RESERVATION KEY 0x30000000a for Test
    Disk 3 from node Node1.
    Issuing Persistent Reservation RESERVE on Test Disk 3 from node 
    Node1 using key 0x30000000a.
    Issuing Persistent Reservation REGISTER AND IGNORE EXISTING KEY
    using RESERVATION KEY 0x0 SERVICE ACTION RESERVATION KEY 0x3000100aa for Test
    Disk 3 from node Node2.
    Issuing Persistent Reservation REGISTER using RESERVATION KEY
    0x30000000a SERVICE ACTION RESERVATION KEY 0x30000000b for Test Disk 3 from node 
    Node1 to change the registered key while holding the
    reservation for the disk.
    Verifying there are no Persistent Reservations, or Registration
    keys, on Test Disk 2 from node Node1.
    Issuing Persistent Reservation REGISTER AND IGNORE EXISTING KEY
    using RESERVATION KEY 0x0 SERVICE ACTION RESERVATION KEY 0x20000000a for Test
    Disk 2 from node Node1.
    Issuing Persistent Reservation RESERVE on Test Disk 2 from node 
    Node1 using key 0x20000000a.
    Issuing Persistent Reservation REGISTER AND IGNORE EXISTING KEY
    using RESERVATION KEY 0x0 SERVICE ACTION RESERVATION KEY 0x2000100aa for Test
    Disk 2 from node Node2.
    Issuing Persistent Reservation REGISTER using RESERVATION KEY
    0x20000000a SERVICE ACTION RESERVATION KEY 0x20000000b for Test Disk 2 from node 
    Node1 to change the registered key while holding the
    reservation for the disk.
    Verifying there are no Persistent Reservations, or Registration
    keys, on Test Disk 0 from node Node1.
    Issuing Persistent Reservation REGISTER AND IGNORE EXISTING KEY
    using RESERVATION KEY 0x0 SERVICE ACTION RESERVATION KEY 0xa for Test Disk 0
    from node Node1.
    Issuing Persistent Reservation RESERVE on Test Disk 0 from node 
    Node1 using key 0xa.
    Issuing Persistent Reservation REGISTER AND IGNORE EXISTING KEY
    using RESERVATION KEY 0x0 SERVICE ACTION RESERVATION KEY 0x100aa for Test Disk 0
    from node Node2.
    Issuing Persistent Reservation REGISTER using RESERVATION KEY
    0xa SERVICE ACTION RESERVATION KEY 0xb for Test Disk 0 from node 
    Node1 to change the registered key while holding the
    reservation for the disk.
    Verifying there are no Persistent Reservations, or Registration
    keys, on Test Disk 1 from node Node1.
    Issuing Persistent Reservation REGISTER AND IGNORE EXISTING KEY
    using RESERVATION KEY 0x0 SERVICE ACTION RESERVATION KEY 0x10000000a for Test
    Disk 1 from node Node1.
    Issuing Persistent Reservation RESERVE on Test Disk 1 from node 
    Node1 using key 0x10000000a.
    Issuing Persistent Reservation REGISTER AND IGNORE EXISTING KEY
    using RESERVATION KEY 0x0 SERVICE ACTION RESERVATION KEY 0x1000100aa for Test
    Disk 1 from node Node2.
    Issuing Persistent Reservation REGISTER using RESERVATION KEY
    0x10000000a SERVICE ACTION RESERVATION KEY 0x10000000b for Test Disk 1 from node 
    Node1 to change the registered key while holding the
    reservation for the disk.
    Failure. Persistent Reservation not present on Test Disk 3 from
    node Node1 after successful call to update reservation holder's
    registration key 0x30000000b.
    Failure. Persistent Reservation not present on Test Disk 1 from
    node Node1 after successful call to update reservation holder's
    registration key 0x10000000b.
    Failure. Persistent Reservation not present on Test Disk 0 from
    node Node1 after successful call to update reservation holder's
    registration key 0xb.
    Failure. Persistent Reservation not present on Test Disk 2 from
    node Node1 after successful call to update reservation holder's
    registration key 0x20000000b.
    Test Disk 0 does not support SCSI-3 Persistent Reservations
    commands needed by clustered storage pools that use the Storage Spaces
    subsystem. Some storage devices require specific firmware versions or settings
    to function properly with failover clusters. Contact your storage administrator
    or storage vendor for help with configuring the storage to function properly
    with failover clusters that use Storage Spaces.
    Test Disk 1 does not support SCSI-3 Persistent Reservations
    commands needed by clustered storage pools that use the Storage Spaces
    subsystem. Some storage devices require specific firmware versions or settings
    to function properly with failover clusters. Contact your storage administrator
    or storage vendor for help with configuring the storage to function properly
    with failover clusters that use Storage Spaces.
    Test Disk 2 does not support SCSI-3 Persistent Reservations
    commands needed by clustered storage pools that use the Storage Spaces
    subsystem. Some storage devices require specific firmware versions or settings
    to function properly with failover clusters. Contact your storage administrator
    or storage vendor for help with configuring the storage to function properly
    with failover clusters that use Storage Spaces.
    Test Disk 3 does not support SCSI-3 Persistent Reservations
    commands needed by clustered storage pools that use the Storage Spaces
    subsystem. Some storage devices require specific firmware versions or settings
    to function properly with failover clusters. Contact your storage administrator
    or storage vendor for help with configuring the storage to function properly
    with failover clusters that use Storage Spaces.
    Stop: 21.02.2015 18:01:02
    Thank you for your help.
    David

  • Changing Cluster node hostname

    Dear all
Can I change the hostname of a box in a cluster environment?
    Regards
    DR

According to SysAdmin magazine (it's not on their site, but it is in the May 2006 edition) you can change the hostnames of cluster nodes by performing the following:
Reboot the cluster nodes into non-cluster mode (reboot -- -x)
Change the hostname of the system (/etc/nodename, /etc/hosts, etc.)
Change the hostname on all nodes within the files under /etc/cluster/ccr
Regenerate the checksum for each changed file using ccradm -i /etc/cluster/ccr/FILENAME -o
Reboot every cluster node into the cluster.
I have no idea if this works, but if it does then let me know.
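The edit-and-regenerate step above can be sketched as follows. This is a minimal illustration against a throwaway copy in /tmp, not the real CCR: the file name, key name, and hostnames are placeholders, and on a real node you would only do this in non-cluster mode and then run ccradm to fix the checksum.

```shell
# Illustrative only: rewrite a hostname in a mock copy of a CCR table.
# Real files live under /etc/cluster/ccr; "oldnode"/"newnode" are placeholders.
mkdir -p /tmp/ccr_demo
printf 'cluster.nodes.1.name\toldnode\n' > /tmp/ccr_demo/infrastructure

# Replace every occurrence of the old hostname with the new one.
sed -i 's/oldnode/newnode/g' /tmp/ccr_demo/infrastructure

# On the real file you would then regenerate its checksum, per the magazine:
#   ccradm -i /etc/cluster/ccr/infrastructure -o
grep newnode /tmp/ccr_demo/infrastructure
```

Repeating the sed over each file under the CCR directory (and then rebooting every node back into the cluster) completes the procedure as described.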

  • After reboot cluster node went into maintenance mode (CONTROL-D)

    Hi there!
    I have configured a 2-node cluster on 2 x Sun Enterprise 220R and a StorEdge D1000.
    Each time I reboot any of the cluster nodes I get the following error during boot-up:
    The / file system (/dev/rdsk/c0t1d0s0) is being checked.
    /dev/rdsk/c0t1d0s0: UNREF DIR I=35540 OWNER=root MODE=40755
    /dev/rdsk/c0t1d0s0: SIZE=512 MTIME=Jun 5 15:02 2006 (CLEARED)
    /dev/rdsk/c0t1d0s0: UNREF FILE I=1192311 OWNER=root MODE=100600
    /dev/rdsk/c0t1d0s0: SIZE=96 MTIME=Jun 5 13:23 2006 (RECONNECTED)
    /dev/rdsk/c0t1d0s0: LINK COUNT FILE I=1192311 OWNER=root MODE=100600
    /dev/rdsk/c0t1d0s0: SIZE=96 MTIME=Jun 5 13:23 2006 COUNT 0 SHOULD BE 1
    /dev/rdsk/c0t1d0s0: LINK COUNT INCREASING
    /dev/rdsk/c0t1d0s0: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
    In maintenance mode I do:
    # fsck -y -F ufs /dev/rdsk/c0t1d0s0
    and it managed to correct the problem ... but the problem occurred again after each reboot, on each cluster node!
    I have installed Sun Cluster 3.1 on Solaris 9 SPARC.
    How can i get rid of it?
    Any ideas?
    Brgds,
    Sergej

    Hi, I get this:
    112941-09 SunOS 5.9: sysidnet Utility Patch
    116755-01 SunOS 5.9: usr/snadm/lib/libadmutil.so.2 Patch
    113434-30 SunOS 5.9: /usr/snadm/lib Library and Differential Flash Patch
    112951-13 SunOS 5.9: patchadd and patchrm Patch
    114711-03 SunOS 5.9: usr/sadm/lib/diskmgr/VDiskMgr.jar Patch
    118064-04 SunOS 5.9: Admin Install Project Manager Client Patch
    113742-01 SunOS 5.9: smcpreconfig.sh Patch
    113813-02 SunOS 5.9: Gnome Integration Patch
    114501-01 SunOS 5.9: drmproviders.jar Patch
    112943-09 SunOS 5.9: Volume Management Patch
    113799-01 SunOS 5.9: solregis Patch
    115697-02 SunOS 5.9: mtmalloc lib Patch
    113029-06 SunOS 5.9: libaio.so.1 librt.so.1 and abi_libaio.so.1 Patch
    113981-04 SunOS 5.9: devfsadm Patch
    116478-01 SunOS 5.9: usr platform links Patch
    112960-37 SunOS 5.9: patch libsldap ldap_cachemgr libldap
    113332-07 SunOS 5.9: libc_psr.so.1 Patch
    116500-01 SunOS 5.9: SVM auto-take disksets Patch
    114349-04 SunOS 5.9: sbin/dhcpagent Patch
    120441-03 SunOS 5.9: libsec patch
    114344-19 SunOS 5.9: kernel/drv/arp Patch
    114373-01 SunOS 5.9: UMEM - abi_libumem.so.1 patch
    118558-27 SunOS 5.9: Kernel Patch
    115675-01 SunOS 5.9: /usr/lib/liblgrp.so Patch
    112958-04 SunOS 5.9: patch pci.so
    113451-11 SunOS 5.9: IKE Patch
    112920-02 SunOS 5.9: libipp Patch
    114372-01 SunOS 5.9: UMEM - llib-lumem patch
    116229-01 SunOS 5.9: libgen Patch
    116178-01 SunOS 5.9: libcrypt Patch
    117453-01 SunOS 5.9: libwrap Patch
    114131-03 SunOS 5.9: multi-terabyte disk support - libadm.so.1 patch
    118465-02 SunOS 5.9: rcm_daemon Patch
    113490-04 SunOS 5.9: Audio Device Driver Patch
    114926-02 SunOS 5.9: kernel/drv/audiocs Patch
    113318-25 SunOS 5.9: patch /kernel/fs/nfs and /kernel/fs/sparcv9/nfs
    113070-01 SunOS 5.9: ftp patch
    114734-01 SunOS 5.9: /usr/ccs/bin/lorder Patch
    114227-01 SunOS 5.9: yacc Patch
    116546-07 SunOS 5.9: CDRW DVD-RW DVD+RW Patch
    119494-01 SunOS 5.9: mkisofs patch
    113471-09 SunOS 5.9: truss Patch
    114718-05 SunOS 5.9: usr/kernel/fs/pcfs Patch
    115545-01 SunOS 5.9: nss_files patch
    115544-02 SunOS 5.9: nss_compat patch
    118463-01 SunOS 5.9: du Patch
    116016-03 SunOS 5.9: /usr/sbin/logadm patch
    115542-02 SunOS 5.9: nss_user patch
    116014-06 SunOS 5.9: /usr/sbin/usermod patch
    116012-02 SunOS 5.9: ps utility patch
    117433-02 SunOS 5.9: FSS FX RT Patch
    117431-01 SunOS 5.9: nss_nis Patch
    115537-01 SunOS 5.9: /kernel/strmod/ptem patch
    115336-03 SunOS 5.9: /usr/bin/tar, /usr/sbin/static/tar Patch
    117426-03 SunOS 5.9: ctsmc and sc_nct driver patch
    121319-01 SunOS 5.9: devfsadmd_mod.so Patch
    121316-01 SunOS 5.9: /kernel/sys/doorfs Patch
    121314-01 SunOS 5.9: tl driver patch
    116554-01 SunOS 5.9: semsys Patch
    112968-01 SunOS 5.9: patch /usr/bin/renice
    116552-01 SunOS 5.9: su Patch
    120445-01 SunOS 5.9: Toshiba platform token links (TSBW,Ultra-3i)
    112964-15 SunOS 5.9: /usr/bin/ksh Patch
    112839-08 SunOS 5.9: patch libthread.so.1
    115687-02 SunOS 5.9:/var/sadm/install/admin/default Patch
    115685-01 SunOS 5.9: sbin/netstrategy Patch
    115488-01 SunOS 5.9: patch /kernel/misc/busra
    115681-01 SunOS 5.9: usr/lib/fm/libdiagcode.so.1 Patch
    113032-03 SunOS 5.9: /usr/sbin/init Patch
    113031-03 SunOS 5.9: /usr/bin/edit Patch
    114259-02 SunOS 5.9: usr/sbin/psrinfo Patch
    115878-01 SunOS 5.9: /usr/bin/logger Patch
    116543-04 SunOS 5.9: vmstat Patch
    113580-01 SunOS 5.9: mount Patch
    115671-01 SunOS 5.9: mntinfo Patch
    113977-01 SunOS 5.9: awk/sed pkgscripts Patch
    122716-01 SunOS 5.9: kernel/fs/lofs patch
    113973-01 SunOS 5.9: adb Patch
    122713-01 SunOS 5.9: expr patch
    117168-02 SunOS 5.9: mpstat Patch
    116498-02 SunOS 5.9: bufmod Patch
    113576-01 SunOS 5.9: /usr/bin/dd Patch
    116495-03 SunOS 5.9: specfs Patch
    117160-01 SunOS 5.9: /kernel/misc/krtld patch
    118586-01 SunOS 5.9: cp/mv/ln Patch
    120025-01 SunOS 5.9: ipsecconf Patch
    116527-02 SunOS 5.9: timod Patch
    117155-08 SunOS 5.9: pcipsy Patch
    114235-01 SunOS 5.9: libsendfile.so.1 Patch
    117152-01 SunOS 5.9: magic Patch
    116486-03 SunOS 5.9: tsalarm Driver Patch
    121998-01 SunOS 5.9: two-key mode fix for 3DES Patch
    116484-01 SunOS 5.9: consconfig Patch
    116482-02 SunOS 5.9: modload Utils Patch
    117746-04 SunOS 5.9: patch platform/sun4u/kernel/drv/sparcv9/pic16f819
    121992-01 SunOS 5.9: fgrep Patch
    120768-01 SunOS 5.9: grpck patch
    119438-01 SunOS 5.9: usr/bin/login Patch
    114389-03 SunOS 5.9: devinfo Patch
    116510-01 SunOS 5.9: wscons Patch
    114224-05 SunOS 5.9: csh Patch
    116670-04 SunOS 5.9: gld Patch
    114383-03 SunOS 5.9: Enchilada/Stiletto - pca9556 driver
    116506-02 SunOS 5.9: traceroute patch
    112919-01 SunOS 5.9: netstat Patch
    112918-01 SunOS 5.9: route Patch
    112917-01 SunOS 5.9: ifrt Patch
    117132-01 SunOS 5.9: cachefsstat Patch
    114370-04 SunOS 5.9: libumem.so.1 patch
    114010-02 SunOS 5.9: m4 Patch
    117129-01 SunOS 5.9: adb Patch
    117483-01 SunOS 5.9: ntwdt Patch
    114369-01 SunOS 5.9: prtvtoc patch
    117125-02 SunOS 5.9: procfs Patch
    117480-01 SunOS 5.9: pkgadd Patch
    112905-02 SunOS 5.9: ippctl Patch
    117123-06 SunOS 5.9: wanboot Patch
    115030-03 SunOS 5.9: Multiterabyte UFS - patch mount
    114004-01 SunOS 5.9: sed Patch
    113335-03 SunOS 5.9: devinfo Patch
    113495-05 SunOS 5.9: cfgadm Library Patch
    113494-01 SunOS 5.9: iostat Patch
    113493-03 SunOS 5.9: libproc.so.1 Patch
    113330-01 SunOS 5.9: rpcbind Patch
    115028-02 SunOS 5.9: patch /usr/lib/fs/ufs/df
    115024-01 SunOS 5.9: file system identification utilities
    117471-02 SunOS 5.9: fifofs Patch
    118897-01 SunOS 5.9: stc Patch
    115022-03 SunOS 5.9: quota utilities
    115020-01 SunOS 5.9: patch /usr/lib/adb/ml_odunit
    113720-01 SunOS 5.9: rootnex Patch
    114352-03 SunOS 5.9: /etc/inet/inetd.conf Patch
    123056-01 SunOS 5.9: ldterm patch
    116243-01 SunOS 5.9: umountall Patch
    113323-01 SunOS 5.9: patch /usr/sbin/passmgmt
    116049-01 SunOS 5.9: fdfs Patch
    116241-01 SunOS 5.9: keysock Patch
    113480-02 SunOS 5.9: usr/lib/security/pam_unix.so.1 Patch
    115018-01 SunOS 5.9: patch /usr/lib/adb/dqblk
    113277-44 SunOS 5.9: sd and ssd Patch
    117457-01 SunOS 5.9: elfexec Patch
    113110-01 SunOS 5.9: touch Patch
    113077-17 SunOS 5.9: /platform/sun4u/kernal/drv/su Patch
    115006-01 SunOS 5.9: kernel/strmod/kb patch
    113072-07 SunOS 5.9: patch /usr/sbin/format
    113071-01 SunOS 5.9: patch /usr/sbin/acctadm
    116782-01 SunOS 5.9: tun Patch
    114331-01 SunOS 5.9: power Patch
    112835-01 SunOS 5.9: patch /usr/sbin/clinfo
    114927-01 SunOS 5.9: usr/sbin/allocate Patch
    119937-02 SunOS 5.9: inetboot patch
    113467-01 SunOS 5.9: seg_drv & seg_mapdev Patch
    114923-01 SunOS 5.9: /usr/kernel/drv/logindmux Patch
    117443-01 SunOS 5.9: libkvm Patch
    114329-01 SunOS 5.9: /usr/bin/pax Patch
    119929-01 SunOS 5.9: /usr/bin/xargs patch
    113459-04 SunOS 5.9: udp patch
    113446-03 SunOS 5.9: dman Patch
    116009-05 SunOS 5.9: sgcn & sgsbbc patch
    116557-04 SunOS 5.9: sbd Patch
    120241-01 SunOS 5.9: bge: Link & Speed LEDs flash constantly on V20z
    113984-01 SunOS 5.9: iosram Patch
    113220-01 SunOS 5.9: patch /platform/sun4u/kernel/drv/sparcv9/upa64s
    113975-01 SunOS 5.9: ssm Patch
    117165-01 SunOS 5.9: pmubus Patch
    116530-01 SunOS 5.9: bge.conf Patch
    116529-01 SunOS 5.9: smbus Patch
    116488-03 SunOS 5.9: Lights Out Management (lom) patch
    117131-01 SunOS 5.9: adm1031 Patch
    117124-12 SunOS 5.9: platmod, drmach, dr, ngdr, & gptwocfg Patch
    114003-01 SunOS 5.9: bbc driver Patch
    118539-02 SunOS 5.9: schpc Patch
    112837-10 SunOS 5.9: patch /usr/lib/inet/in.dhcpd
    114975-01 SunOS 5.9: usr/lib/inet/dhcp/svcadm/dhcpcommon.jar Patch
    117450-01 SunOS 5.9: ds_SUNWnisplus Patch
    113076-02 SunOS 5.9: dhcpmgr.jar Patch
    113572-01 SunOS 5.9: docbook-to-man.ts Patch
    118472-01 SunOS 5.9: pargs Patch
    122709-01 SunOS 5.9: /usr/bin/dc patch
    113075-01 SunOS 5.9: pmap patch
    113472-01 SunOS 5.9: madv & mpss lib Patch
    115986-02 SunOS 5.9: ptree Patch
    115693-01 SunOS 5.9: /usr/bin/last Patch
    115259-03 SunOS 5.9: patch usr/lib/acct/acctcms
    114564-09 SunOS 5.9: /usr/sbin/in.ftpd Patch
    117441-01 SunOS 5.9: FSSdispadmin Patch
    113046-01 SunOS 5.9: fcp Patch
    118191-01 gtar patch
    114818-06 GNOME 2.0.0: libpng Patch
    117177-02 SunOS 5.9: lib/gss module Patch
    116340-05 SunOS 5.9: gzip and Freeware info files patch
    114339-01 SunOS 5.9: wrsm header files Patch
    122673-01 SunOS 5.9: sockio.h header patch
    116474-03 SunOS 5.9: libsmedia Patch
    117138-01 SunOS 5.9: seg_spt.h
    112838-11 SunOS 5.9: pcicfg Patch
    117127-02 SunOS 5.9: header Patch
    112929-01 SunOS 5.9: RIPv2 Header Patch
    112927-01 SunOS 5.9: IPQos Header Patch
    115992-01 SunOS 5.9: /usr/include/limits.h Patch
    112924-01 SunOS 5.9: kdestroy kinit klist kpasswd Patch
    116231-03 SunOS 5.9: llc2 Patch
    116776-01 SunOS 5.9: mipagent patch
    117420-02 SunOS 5.9: mdb Patch
    117179-01 SunOS 5.9: nfs_dlboot Patch
    121194-01 SunOS 5.9: usr/lib/nfs/statd Patch
    116502-03 SunOS 5.9: mountd Patch
    113331-01 SunOS 5.9: usr/lib/nfs/rquotad Patch
    113281-01 SunOS 5.9: patch /usr/lib/netsvc/yp/ypbind
    114736-01 SunOS 5.9: usr/sbin/nisrestore Patch
    115695-01 SunOS 5.9: /usr/lib/netsvc/yp/yppush Patch
    113321-06 SunOS 5.9: patch sf and socal
    113049-01 SunOS 5.9: luxadm & liba5k.so.2 Patch
    116663-01 SunOS 5.9: ntpdate Patch
    117143-01 SunOS 5.9: xntpd Patch
    113028-01 SunOS 5.9: patch /kernel/ipp/flowacct
    113320-06 SunOS 5.9: patch se driver
    114731-08 SunOS 5.9: kernel/drv/glm Patch
    115667-03 SunOS 5.9: Chalupa platform support Patch
    117428-01 SunOS 5.9: picl Patch
    113327-03 SunOS 5.9: pppd Patch
    114374-01 SunOS 5.9: Perl patch
    115173-01 SunOS 5.9: /usr/bin/sparcv7/gcore /usr/bin/sparcv9/gcore Patch
    114716-02 SunOS 5.9: usr/bin/rcp Patch
    112915-04 SunOS 5.9: snoop Patch
    116778-01 SunOS 5.9: in.ripngd patch
    112916-01 SunOS 5.9: rtquery Patch
    112928-03 SunOS 5.9: in.ndpd Patch
    119447-01 SunOS 5.9: ses Patch
    115354-01 SunOS 5.9: slpd Patch
    116493-01 SunOS 5.9: ProtocolTO.java Patch
    116780-02 SunOS 5.9: scmi2c Patch
    112972-17 SunOS 5.9: patch /usr/lib/libssagent.so.1 /usr/lib/libssasnmp.so.1 mibiisa
    116480-01 SunOS 5.9: IEEE 1394 Patch
    122485-01 SunOS 5.9: 1394 mass storage driver patch
    113716-02 SunOS 5.9: sar & sadc Patch
    115651-02 SunOS 5.9: usr/lib/acct/runacct Patch
    116490-01 SunOS 5.9: acctdusg Patch
    117473-01 SunOS 5.9: fwtmp Patch
    116180-01 SunOS 5.9: geniconvtbl Patch
    114006-01 SunOS 5.9: tftp Patch
    115646-01 SunOS 5.9: libtnfprobe shared library Patch
    113334-03 SunOS 5.9: udfs Patch
    115350-01 SunOS 5.9: ident_udfs.so.1 Patch
    122484-01 SunOS 5.9: preen_md.so.1 patch
    117134-01 SunOS 5.9: svm flasharchive patch
    116472-02 SunOS 5.9: rmformat Patch
    112966-05 SunOS 5.9: patch /usr/sbin/vold
    114229-01 SunOS 5.9: action_filemgr.so.1 Patch
    114335-02 SunOS 5.9: usr/sbin/rmmount Patch
    120443-01 SunOS 5.9: sed core dumps on long lines
    121588-01 SunOS 5.9: /usr/xpg4/bin/awk Patch
    113470-02 SunOS 5.9: winlock Patch
    119211-07 NSS_NSPR_JSS 3.11: NSPR 4.6.1 / NSS 3.11 / JSS 4.2
    118666-05 J2SE 5.0: update 6 patch
    118667-05 J2SE 5.0: update 6 patch, 64bit
    114612-01 SunOS 5.9: ANSI-1251 encodings file errors
    114276-02 SunOS 5.9: Extended Arabic support in UTF-8
    117400-01 SunOS 5.9: ISO8859-6 and ISO8859-8 iconv symlinks
    113584-16 SunOS 5.9: yesstr, nostr nl_langinfo() strings incorrect in S9
    117256-01 SunOS 5.9: Remove old OW Xresources.ow files
    112625-01 SunOS 5.9: Dcam1394 patch
    114600-05 SunOS 5.9: vlan driver patch
    117119-05 SunOS 5.9: Sun Gigabit Ethernet 3.0 driver patch
    117593-04 SunOS 5.9: Manual Page updates for Solaris 9
    112622-19 SunOS 5.9: M64 Graphics Patch
    115953-06 Sun Cluster 3.1: Sun Cluster sccheck patch
    117949-23 Sun Cluster 3.1: Core Patch for Solaris 9
    115081-06 Sun Cluster 3.1: HA-Sun One Web Server Patch
    118627-08 Sun Cluster 3.1: Manageability and Serviceability Agent
    117985-03 SunOS 5.9: XIL 1.4.2 Loadable Pipeline Libraries
    113896-06 SunOS 5.9: en_US.UTF-8 locale patch
    114967-02 SunOS 5.9: FDL patch
    114677-11 SunOS 5.9: International Components for Unicode Patch
    112805-01 CDE 1.5: Help volume patch
    113841-01 CDE 1.5: answerbook patch
    113839-01 CDE 1.5: sdtwsinfo patch
    115713-01 CDE 1.5: dtfile patch
    112806-01 CDE 1.5: sdtaudiocontrol patch
    112804-02 CDE 1.5: sdtname patch
    113244-09 CDE 1.5: dtwm patch
    114312-02 CDE1.5: GNOME/CDE Menu for Solaris 9
    112809-02 CDE:1.5 Media Player (sdtjmplay) patch
    113868-02 CDE 1.5: PDASync patch
    119976-01 CDE 1.5: dtterm patch
    112771-30 Motif 1.2.7 and 2.1.1: Runtime library patch for Solaris 9
    114282-01 CDE 1.5: libDtWidget patch
    113789-01 CDE 1.5: dtexec patch
    117728-01 CDE1.5: dthello patch
    113863-01 CDE 1.5: dtconfig patch
    112812-01 CDE 1.5: dtlp patch
    113861-04 CDE 1.5: dtksh patch
    115972-03 CDE 1.5: dtterm libDtTerm patch
    114654-02 CDE 1.5: SmartCard patch
    117632-01 CDE1.5: sun_at patch for Solaris 9
    113374-02 X11 6.6.1: xpr patch
    118759-01 X11 6.6.1: Font Administration Tools patch
    117577-03 X11 6.6.1: TrueType fonts patch
    116084-01 X11 6.6.1: font patch
    113098-04 X11 6.6.1: X RENDER extension patch
    112787-01 X11 6.6.1: twm patch
    117601-01 X11 6.6.1: libowconfig.so.0 patch
    117663-02 X11 6.6.1: xwd patch
    113764-04 X11 6.6.1: keyboard patch
    113541-02 X11 6.6.1: XKB patch
    114561-01 X11 6.6.1: X splash screen patch
    113513-02 X11 6.6.1: platform support for new hardware
    116121-01 X11 6.4.1: platform support for new hardware
    114602-04 X11 6.6.1: libmpg_psr patch
    Is there a bundle to install, or do I have to install each patch separately?
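Whether the patches arrive individually or via a Recommended Patch Cluster, it helps to diff what is installed against what you still need. A sketch, using made-up sample data in /tmp (on a real system the installed list would come from `showrev -p`):

```shell
# Sample of saved `showrev -p` output; contents here are invented examples.
cat > /tmp/showrev.out <<'EOF'
Patch: 118558-27 Obsoletes:  Requires:  Incompatibles:  Packages: SUNWcsu
Patch: 117949-22 Obsoletes:  Requires:  Incompatibles:  Packages: SUNWscr
EOF
# A hypothetical list of required patch revisions, kept sorted for comm(1).
printf '117949-23\n118558-27\n' > /tmp/required.txt

# Field 2 of each showrev line is the patch ID and revision.
awk '{print $2}' /tmp/showrev.out | sort > /tmp/installed.txt

# Lines only in the required list = patches/revisions still missing.
comm -13 /tmp/installed.txt /tmp/required.txt   # -> 117949-23
```

Note that comm compares exact strings, so an older installed revision (117949-22 vs 117949-23) correctly shows up as still required.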

  • Simple two node Cluster Install - Hung after reboot of first node

    Hello,
    Over the past couple of days I have tried to install a simple two node cluster using two identical SunFire X4200s, firstly following the recipe in: http://www.sun.com/software/solaris/howtoguides/twonodecluster.jsp
    and, when that failed, referring to http://docs.sun.com/app/docs/doc/819-0912 and http://docs.sun.com/app/docs/doc/819-2970.
    I am trying to keep the install process as simple as possible: no switch, just back-to-back connections for the internal networking (node1 e1000g0 <--> node2 e1000g0, node1 e1000g1 <--> node2 e1000g1).
    I ran the installer on both X4200s with default answers. This went through smoothly without problems.
    I ran scinstall on node1, first time through choosing "typical" as suggested in the how-to guide. Everything goes OK (no errors) and node2 reboots, but node1 just sits there waiting for node2: no errors, nothing....
    I also tried rerunning scinstall choosing "Custom", and then selecting the no-switch option. The same thing happened.
    I must be doing something stupid, it's such a simple setup! Any ideas??
    Here's the final screen from node1 (dcmds0) in both cases:
    Cluster Creation
    Log file - /var/cluster/logs/install/scinstall.log.940
    Checking installation status ... done
    The Sun Cluster software is installed on "dcmds0".
    The Sun Cluster software is installed on "dcmds1".
    Started sccheck on "dcmds0".
    Started sccheck on "dcmds1".
    sccheck completed with no errors or warnings for "dcmds0".
    sccheck completed with no errors or warnings for "dcmds1".
    Configuring "dcmds1" ... done
    Rebooting "dcmds1" ...
    Output from scconf on node2 (dcmds1):
    bash-3.00# scconf -p
    Cluster name: dcmdscluster
    Cluster ID: 0x47538959
    Cluster install mode: enabled
    Cluster private net: 172.16.0.0
    Cluster private netmask: 255.255.248.0
    Cluster maximum nodes: 64
    Cluster maximum private networks: 10
    Cluster new node authentication: unix
    Cluster authorized-node list: dcmds0 dcmds1
    Cluster transport heart beat timeout: 10000
    Cluster transport heart beat quantum: 1000
    Round Robin Load Balancing UDP session timeout: 480
    Cluster nodes: dcmds1
    Cluster node name: dcmds1
    Node ID: 1
    Node enabled: yes
    Node private hostname: clusternode1-priv
    Node quorum vote count: 1
    Node reservation key: 0x4753895900000001
    Node zones: <NULL>
    CPU shares for global zone: 1
    Minimum CPU requested for global zone: 1
    Node transport adapters: e1000g0 e1000g1
    Node transport adapter: e1000g0
    Adapter enabled: no
    Adapter transport type: dlpi
    Adapter property: device_name=e1000g
    Adapter property: device_instance=0
    Adapter property: lazy_free=1
    Adapter property: dlpi_heartbeat_timeout=10000
    Adapter property: dlpi_heartbeat_quantum=1000
    Adapter property: nw_bandwidth=80
    Adapter property: bandwidth=70
    Adapter port names: <NULL>
    Node transport adapter: e1000g1
    Adapter enabled: no
    Adapter transport type: dlpi
    Adapter property: device_name=e1000g
    Adapter property: device_instance=1
    Adapter property: lazy_free=1
    Adapter property: dlpi_heartbeat_timeout=10000
    Adapter property: dlpi_heartbeat_quantum=1000
    Adapter property: nw_bandwidth=80
    Adapter property: bandwidth=70
    Adapter port names: <NULL>
    Cluster transport switches: <NULL>
    Cluster transport cables
    Endpoint Endpoint State
    Quorum devices: <NULL>
    Rob.

    I have found out why the install hung - this needs to be added to the install guide(s) at once!! It's VERY frustrating when an install guide is incomplete!
    The solution is posted in the HA-Cluster OpenSolaris forums at:
    http://opensolaris.org/os/community/ha-clusters/ohac/Documentation/SCXdocs/relnotes/#bugs
    In particular, my problem was that I selected to make my Solaris install secure (A good idea, I thought!). Unfortunately, this stops Sun Cluster from working. To fix the problem you need to perform the following steps on each secured node:
    Problem Summary: During Solaris installation, selecting a restricted network profile disables external access to network services that Sun Cluster functionality uses, i.e. the RPC communication service, which is required for cluster communication.
    Workaround: Restore external access to RPC communication.
    Perform the following commands to restore external access to RPC communication.
    # svccfg
    svc:> select network/rpc/bind
    svc:/network/rpc/bind> setprop config/local_only=false
    svc:/network/rpc/bind> quit
    # svcadm refresh network/rpc/bind:default
    # svcprop network/rpc/bind:default | grep local_only
    Once I applied these commands, the install process continued ... AT LAST!!!
    Rob.

  • Problem in NODE 1 after reboot

    Hi,
    Oracle Version: 11gR2
    Operating System: CentOS
    We have a problem on node 1 after a sudden reboot of both nodes: when the servers came up, the database on node 2 started automatically, but on node 1 we had to start it manually.
    But the CRSCTL command shows that the node 1 database is down, as shown below.
    [root@rac1 bin]# ./crsctl stat res -t
    NAME           TARGET  STATE        SERVER                   STATE_DETAILS
    Local Resources
    ora.ASM_DATA.dg
                   ONLINE  ONLINE       rac1
                   ONLINE  ONLINE       rac2
    ora.ASM_FRA.dg
                   ONLINE  ONLINE       rac1
                   ONLINE  ONLINE       rac2
    ora.LISTENER.lsnr
                   ONLINE  ONLINE       rac1
                   ONLINE  ONLINE       rac2
    ora.OCR_VOTE.dg
                   ONLINE  ONLINE       rac1
                   ONLINE  ONLINE       rac2
    ora.asm
                   ONLINE  ONLINE       rac1                     Started
                   ONLINE  ONLINE       rac2                     Started
    ora.eons
                   ONLINE  ONLINE       rac1
                   ONLINE  ONLINE       rac2
    ora.gsd
                   OFFLINE OFFLINE      rac1
                   OFFLINE OFFLINE      rac2
    ora.net1.network
                   ONLINE  ONLINE       rac1
                   ONLINE  ONLINE       rac2
    ora.ons
                   ONLINE  ONLINE       rac1
                   ONLINE  ONLINE       rac2
    ora.registry.acfs
                   ONLINE  ONLINE       rac1
                   ONLINE  ONLINE       rac2
    Cluster Resources
    ora.LISTENER_SCAN1.lsnr
          1        ONLINE  ONLINE       rac2
    ora.LISTENER_SCAN2.lsnr
          1        ONLINE  ONLINE       rac1
    ora.LISTENER_SCAN3.lsnr
          1        ONLINE  ONLINE       rac1
    ora.oc4j
          1        OFFLINE OFFLINE
    ora.qfundrac.db
          1        OFFLINE OFFLINE
          2        ONLINE  ONLINE       rac2                     Open
    ora.rac1.vip
          1        ONLINE  ONLINE       rac1
    ora.rac2.vip
          1        ONLINE  ONLINE       rac2
    ora.scan1.vip
          1        ONLINE  ONLINE       rac2
    ora.scan2.vip
          1        ONLINE  ONLINE       rac1
    ora.scan3.vip
          1        ONLINE  ONLINE       rac1
    But for the command below it shows that both nodes are up:
    SQL> select inst_id,status,instance_role,active_state from gv$instance;
       INST_ID STATUS       INSTANCE_ROLE      ACTIVE_ST
             1 OPEN         PRIMARY_INSTANCE   NORMAL
             2 OPEN         PRIMARY_INSTANCE   NORMAL
    Here is the output from cluvfy:
    [grid@rac1 bin]$ ./cluvfy stage -post crsinst -n rac1,rac2 -verbose
    Performing post-checks for cluster services setup
    Checking node reachability...
    Check: Node reachability from node "rac1"
      Destination Node                      Reachable?
      rac2                                  yes
      rac1                                  yes
    Result: Node reachability check passed from node "rac1"
    Checking user equivalence...
    Check: User equivalence for user "grid"
      Node Name                             Comment
      rac2                                  passed
      rac1                                  passed
    Result: User equivalence check passed for user "grid"
    Checking time zone consistency...
    Time zone consistency check passed.
    Checking Cluster manager integrity...
    Checking CSS daemon...
      Node Name                             Status
      rac2                                  running
      rac1                                  running
    Oracle Cluster Synchronization Services appear to be online.
    Cluster manager integrity check passed
    UDev attributes check for OCR locations started...
    Result: UDev attributes check passed for OCR locations
    UDev attributes check for Voting Disk locations started...
    Result: UDev attributes check passed for Voting Disk locations
    Check default user file creation mask
      Node Name     Available                 Required                  Comment
      rac2          0022                      0022                      passed
      rac1          0022                      0022                      passed
    Result: Default user file creation mask check passed
    Checking cluster integrity...
      Node Name
      rac1
      rac2
    Cluster integrity check passed
    Checking OCR integrity...
    Checking the absence of a non-clustered configuration...
    All nodes free of non-clustered, local-only configurations
    ASM Running check passed. ASM is running on all cluster nodes
    Checking OCR config file "/etc/oracle/ocr.loc"...
    OCR config file "/etc/oracle/ocr.loc" check successful
    Disk group for ocr location "+OCR_VOTE" available on all the nodes
    Checking size of the OCR location "+OCR_VOTE" ...
    Size check for OCR location "+OCR_VOTE" successful...
    Size check for OCR location "+OCR_VOTE" successful...
    WARNING:
    This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.
    OCR integrity check passed
    Checking CRS integrity...
    The Oracle clusterware is healthy on node "rac2"
    The Oracle clusterware is healthy on node "rac1"
    CRS integrity check passed
    Checking node application existence...
    Checking existence of VIP node application
      Node Name     Required                  Status                    Comment
      rac2          yes                       online                    passed
      rac1          yes                       online                    passed
    Result: Check passed.
    Checking existence of ONS node application
      Node Name     Required                  Status                    Comment
      rac2          no                        online                    passed
      rac1          no                        online                    passed
    Result: Check passed.
    Checking existence of GSD node application
      Node Name     Required                  Status                    Comment
      rac2          no                        does not exist            ignored
      rac1          no                        does not exist            ignored
    Result: Check ignored.
    Checking existence of EONS node application
      Node Name     Required                  Status                    Comment
      rac2          no                        online                    passed
      rac1          no                        online                    passed
    Result: Check passed.
    Checking existence of NETWORK node application
      Node Name     Required                  Status                    Comment
      rac2          no                        online                    passed
      rac1          no                        online                    passed
    Result: Check passed.
    Checking Single Client Access Name (SCAN)...
      SCAN VIP name     Node          Running?      ListenerName  Port          Running?
      qfund-rac.qfund.net  rac2          true          LISTENER      1521          true
    Checking name resolution setup for "qfund-rac.qfund.net"...
      SCAN Name     IP Address                Status                    Comment
      qfund-rac.qfund.net  192.168.8.118             passed
      qfund-rac.qfund.net  192.168.8.119             passed
      qfund-rac.qfund.net  192.168.8.117             passed
    Verification of SCAN VIP and Listener setup passed
    OCR detected on ASM. Running ACFS Integrity checks...
    Starting check to see if ASM is running on all cluster nodes...
    ASM Running check passed. ASM is running on all cluster nodes
    Starting Disk Groups check to see if at least one Disk Group configured...
    Disk Group Check passed. At least one Disk Group configured
    Task ACFS Integrity check passed
    Checking Oracle Cluster Voting Disk configuration...
    Oracle Cluster Voting Disk configuration check passed
    Checking to make sure user "grid" is not in "root" group
      Node Name     Status                    Comment
      rac2          does not exist            passed
      rac1          does not exist            passed
    Result: User "grid" is not part of "root" group. Check passed
    Checking if Clusterware is installed on all nodes...
    Check of Clusterware install passed
    Checking if CTSS Resource is running on all nodes...
    Check: CTSS Resource running on all nodes
      Node Name                             Status
      rac2                                  passed
      rac1                                  passed
    Result: CTSS resource check passed
    Querying CTSS for time offset on all nodes...
    Result: Query of CTSS for time offset passed
    Check CTSS state started...
    Check: CTSS state
      Node Name                             State
      rac2                                  Observer
      rac1                                  Observer
    CTSS is in Observer state. Switching over to clock synchronization checks using NTP
    Starting Clock synchronization checks using Network Time Protocol(NTP)...
    NTP Configuration file check started...
    The NTP configuration file "/etc/ntp.conf" is available on all nodes
    NTP Configuration file check passed
    Checking daemon liveness...
    Check: Liveness for "ntpd"
      Node Name                             Running?
      rac2                                  yes
      rac1                                  yes
    Result: Liveness check passed for "ntpd"
    Checking NTP daemon command line for slewing option "-x"
    Check: NTP daemon command line
      Node Name                             Slewing Option Set?
      rac2                                  yes
      rac1                                  yes
    Result:
    NTP daemon slewing option check passed
    Checking NTP daemon's boot time configuration, in file "/etc/sysconfig/ntpd", for slewing option "-x"
    Check: NTP daemon's boot time configuration
      Node Name                             Slewing Option Set?
      rac2                                  yes
      rac1                                  yes
    Result:
    NTP daemon's boot time configuration check for slewing option passed
    NTP common Time Server Check started...
    NTP Time Server ".INIT." is common to all nodes on which the NTP daemon is running
    NTP Time Server ".LOCL." is common to all nodes on which the NTP daemon is running
    Check of common NTP Time Server passed
    Clock time offset check from NTP Time Server started...
    Checking on nodes "[rac2, rac1]"...
    Check: Clock time offset from NTP Time Server
    Time Server: .INIT.
    Time Offset Limit: 1000.0 msecs
      Node Name     Time Offset               Status
      rac2          0.0                       passed
      rac1          0.0                       passed
    Time Server ".INIT." has time offsets that are within permissible limits for nodes "[rac2, rac1]".
    Time Server: .LOCL.
    Time Offset Limit: 1000.0 msecs
      Node Name     Time Offset               Status
      rac2          -29.328                   passed
      rac1          -84.385                   passed
    Time Server ".LOCL." has time offsets that are within permissible limits for nodes "[rac2, rac1]".
    Clock time offset check passed
    Result: Clock synchronization check using Network Time Protocol(NTP) passed
    Oracle Cluster Time Synchronization Services check passed
    Post-check for cluster services setup was successful.
    [grid@rac1 bin]$
    Please help me solve this problem.
    Thanks & regards
    Poorna Prasad.S

    Hi All,
    Now I have rebooted again manually, and the database is not up on either node.
    Here is the output of a few commands:
    [grid@rac1 bin]$ ./crs_stat -t
    Name           Type           Target    State     Host
    ora....DATA.dg ora....up.type OFFLINE   OFFLINE
    ora.ASM_FRA.dg ora....up.type OFFLINE   OFFLINE
    ora....ER.lsnr ora....er.type ONLINE    ONLINE    rac1
    ora....N1.lsnr ora....er.type ONLINE    ONLINE    rac1
    ora....N2.lsnr ora....er.type ONLINE    ONLINE    rac2
    ora....N3.lsnr ora....er.type ONLINE    ONLINE    rac2
    ora....VOTE.dg ora....up.type ONLINE    ONLINE    rac1
    ora.asm        ora.asm.type   ONLINE    ONLINE    rac1
    ora.eons       ora.eons.type  ONLINE    ONLINE    rac1
    ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
    ora....network ora....rk.type ONLINE    ONLINE    rac1
    ora.oc4j       ora.oc4j.type  OFFLINE   OFFLINE
    ora.ons        ora.ons.type   ONLINE    ONLINE    rac1
    ora....drac.db ora....se.type OFFLINE   OFFLINE
    ora....SM1.asm application    ONLINE    ONLINE    rac1
    ora....C1.lsnr application    ONLINE    ONLINE    rac1
    ora.rac1.gsd   application    OFFLINE   OFFLINE
    ora.rac1.ons   application    ONLINE    ONLINE    rac1
    ora.rac1.vip   ora....t1.type ONLINE    ONLINE    rac1
    ora....SM2.asm application    ONLINE    ONLINE    rac2
    ora....C2.lsnr application    ONLINE    ONLINE    rac2
    ora.rac2.gsd   application    OFFLINE   OFFLINE
    ora.rac2.ons   application    ONLINE    ONLINE    rac2
    ora.rac2.vip   ora....t1.type ONLINE    ONLINE    rac2
    ora....ry.acfs ora....fs.type ONLINE    ONLINE    rac1
    ora.scan1.vip  ora....ip.type ONLINE    ONLINE    rac1
    ora.scan2.vip  ora....ip.type ONLINE    ONLINE    rac2
    ora.scan3.vip  ora....ip.type ONLINE    ONLINE    rac2
    [grid@rac1 bin]$ srvctl status nodeapps -n rac1,rac2
    -bash: srvctl: command not found
    [grid@rac1 bin]$ ./srvctl status nodeapps -n rac1,rac2
    PRKO-2003 : Invalid command line option value: rac1,rac2
    [grid@rac1 bin]$ ./srvctl status nodeapps -n rac1
    -n <node_name> option has been deprecated.
    VIP rac1-vip is enabled
    VIP rac1-vip is running on node: rac1
    Network is enabled
    Network is running on node: rac1
    GSD is disabled
    GSD is not running on node: rac1
    ONS is enabled
    ONS daemon is running on node: rac1
    eONS is enabled
    eONS daemon is running on node: rac1
    [grid@rac1 bin]$ ./srvctl status nodeapps -n rac2
    -n <node_name> option has been deprecated.
    VIP rac2-vip is enabled
    VIP rac2-vip is running on node: rac2
    Network is enabled
    Network is running on node: rac2
    GSD is disabled
    GSD is not running on node: rac2
    ONS is enabled
    ONS daemon is running on node: rac2
    eONS is enabled
    eONS daemon is running on node: rac2
    Here is the output of crsctl stat res -t:
    [grid@rac1 bin]$ ./crsctl stat res -t
    NAME           TARGET  STATE        SERVER                   STATE_DETAILS
    Local Resources
    ora.ASM_DATA.dg
                   OFFLINE OFFLINE      rac1
                   OFFLINE OFFLINE      rac2
    ora.ASM_FRA.dg
                   OFFLINE OFFLINE      rac1
                   OFFLINE OFFLINE      rac2
    ora.LISTENER.lsnr
                   ONLINE  ONLINE       rac1
                   ONLINE  ONLINE       rac2
    ora.OCR_VOTE.dg
                   ONLINE  ONLINE       rac1
                   ONLINE  ONLINE       rac2
    ora.asm
                   ONLINE  ONLINE       rac1                     Started
                   ONLINE  ONLINE       rac2                     Started
    ora.eons
                   ONLINE  ONLINE       rac1
                   ONLINE  ONLINE       rac2
    ora.gsd
                   OFFLINE OFFLINE      rac1
                   OFFLINE OFFLINE      rac2
    ora.net1.network
                   ONLINE  ONLINE       rac1
                   ONLINE  ONLINE       rac2
    ora.ons
                   ONLINE  ONLINE       rac1
                   ONLINE  ONLINE       rac2
    ora.registry.acfs
                   ONLINE  ONLINE       rac1
                   ONLINE  ONLINE       rac2
    Cluster Resources
    ora.LISTENER_SCAN1.lsnr
          1        ONLINE  ONLINE       rac1
    ora.LISTENER_SCAN2.lsnr
          1        ONLINE  ONLINE       rac2
    ora.LISTENER_SCAN3.lsnr
          1        ONLINE  ONLINE       rac2
    ora.oc4j
          1        OFFLINE OFFLINE
    ora.qfundrac.db
          1        OFFLINE OFFLINE
          2        OFFLINE OFFLINE
    ora.rac1.vip
          1        ONLINE  ONLINE       rac1
    ora.rac2.vip
          1        ONLINE  ONLINE       rac2
    ora.scan1.vip
          1        ONLINE  ONLINE       rac1
    ora.scan2.vip
          1        ONLINE  ONLINE       rac2
    ora.scan3.vip
          1        ONLINE  ONLINE       rac2
    What is going wrong here?
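In the crsctl stat res -t output above, the diskgroup resources (ora.ASM_DATA.dg, ora.ASM_FRA.dg) and the database resource (ora.qfundrac.db) show TARGET=OFFLINE as well as STATE=OFFLINE. A TARGET of OFFLINE means Clusterware is not even attempting to start those resources — typical after a manual srvctl stop, or with an AUTO_START policy of restore (CRS restores the pre-reboot state) — rather than a startup failure. A minimal sketch of checking the policy and starting the resources; the diskgroup names (ASM_DATA, ASM_FRA) and the database unique name (qfundrac) are assumptions read from the output above, so adjust them to your environment:

```shell
#!/bin/sh
# Sketch only, not a verified fix. Resource names below are assumptions
# taken from the crs_stat/crsctl output in this thread.
# Guarded so the script is a harmless no-op where Grid binaries are absent.

if command -v crsctl >/dev/null 2>&1; then
    # AUTO_START=restore means CRS re-creates the state the resource had
    # before the reboot; if it was stopped then, it stays OFFLINE now.
    crsctl stat res ora.qfundrac.db -p | grep '^AUTO_START'
fi

if command -v srvctl >/dev/null 2>&1; then
    srvctl start diskgroup -g ASM_DATA    # mount the data diskgroup on all nodes
    srvctl start diskgroup -g ASM_FRA     # mount the FRA diskgroup
    srvctl start database -d qfundrac     # start all instances of the database
    srvctl status database -d qfundrac    # confirm both instances are running
else
    echo "srvctl not on PATH; run from \$GRID_HOME/bin or export PATH first" >&2
fi
```

Run it as the Grid/Oracle software owner on one of the cluster nodes; once the database is up, TARGET should flip to ONLINE in crsctl stat res -t.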
    Thanks & Regards,
    Poorna Prasad.S
    Edited by: SIDDABATHUNI on Apr 30, 2011 2:06 PM
