NODE GETTING EVICTED?

Hello All,
We have a two-node production RAC 11gR2 setup running on VMs, and we are facing frequent node evictions: a node gets evicted at least once a day. We even tried switching to single-node operation and stopped all services on the other node, but the node still gets rebooted. Please shed some light on the issue and provide some help.
I am uploading the logs from /var/log/messages below:
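Besides /var/log/messages, the clusterware alert log and the CSSD log are usually the first places to look after an eviction. A minimal sketch of how to pull the tail of each (the Grid home path here is an assumption; adjust it to your install):

```shell
#!/bin/sh
# Assumed Grid Infrastructure home -- change to match your environment
GRID_HOME=${GRID_HOME:-/u01/app/11.2.0/grid}
HOST=$(hostname -s)

# Clusterware alert log and ocssd.log: evictions triggered by missed
# network heartbeats or voting-disk I/O timeouts are logged here
for f in \
  "$GRID_HOME/log/$HOST/alert$HOST.log" \
  "$GRID_HOME/log/$HOST/cssd/ocssd.log"; do
  echo "checking $f"
  [ -f "$f" ] && tail -n 200 "$f"
done
```

Lines mentioning "clssnmPollingThread", "missed checkpoints", or voting-file access errors in ocssd.log usually distinguish a network-heartbeat eviction from a disk-heartbeat one.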
Sep  5 02:23:34 wmprddb1 kernel: imklog 5.8.10, log source = /proc/kmsg started.
Sep  5 02:23:34 wmprddb1 rsyslogd: [origin software="rsyslogd" swVersion="5.8.10" x-pid="1553" x-info="http://www.rsyslog.com"] start
Sep  5 02:23:34 wmprddb1 kernel: Initializing cgroup subsys cpuset
Sep  5 02:23:34 wmprddb1 kernel: Initializing cgroup subsys cpu
Sep  5 02:23:34 wmprddb1 kernel: Linux version 2.6.32-358.6.1.el6.x86_64 ([email protected]) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) ) #1 SMP Fri Mar 29 16:51:51 EDT 2013
Sep  5 02:23:34 wmprddb1 kernel: Command line: ro root=UUID=5a2261d9-fe14-4b3b-8f86-94426a33bccc clocksource=acpi_pm clocksource_failover=hpet rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
Sep  5 02:23:34 wmprddb1 kernel: KERNEL supported cpus:
Sep  5 02:23:34 wmprddb1 kernel:  Intel GenuineIntel
Sep  5 02:23:34 wmprddb1 kernel:  AMD AuthenticAMD
Sep  5 02:23:34 wmprddb1 kernel:  Centaur CentaurHauls
Sep  5 02:23:34 wmprddb1 kernel: BIOS-provided physical RAM map:
Sep  5 02:23:34 wmprddb1 kernel: BIOS-e820: 0000000000000000 - 000000000009fc00 (usable)
Sep  5 02:23:34 wmprddb1 kernel: BIOS-e820: 000000000009fc00 - 00000000000a0000 (reserved)
Sep  5 02:23:34 wmprddb1 kernel: BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved)
Sep  5 02:23:34 wmprddb1 kernel: BIOS-e820: 0000000000100000 - 00000000dfffe000 (usable)
Sep  5 02:23:34 wmprddb1 kernel: BIOS-e820: 00000000dfffe000 - 00000000e0000000 (reserved)
Sep  5 02:23:34 wmprddb1 kernel: BIOS-e820: 00000000feffc000 - 00000000ff000000 (reserved)
Sep  5 02:23:34 wmprddb1 kernel: BIOS-e820: 00000000fffc0000 - 0000000100000000 (reserved)
Sep  5 02:23:34 wmprddb1 kernel: BIOS-e820: 0000000100000000 - 000000101ff00000 (usable)
Sep  5 02:23:34 wmprddb1 kernel: DMI 2.4 present.
Sep  5 02:23:34 wmprddb1 kernel: SMBIOS version 2.4 @ 0xFDAB0
Sep  5 02:23:34 wmprddb1 kernel: last_pfn = 0x101ff00 max_arch_pfn = 0x400000000
Sep  5 02:23:34 wmprddb1 kernel: x86 PAT enabled: cpu 0, old 0x70406, new 0x7010600070106
Sep  5 02:23:34 wmprddb1 kernel: last_pfn = 0xdfffe max_arch_pfn = 0x400000000
Sep  5 02:23:34 wmprddb1 kernel: init_memory_mapping: 0000000000000000-00000000dfffe000
Sep  5 02:23:34 wmprddb1 kernel: init_memory_mapping: 0000000100000000-000000101ff00000
Sep  5 02:23:34 wmprddb1 kernel: RAMDISK: 3706f000 - 37fef992
Sep  5 02:23:34 wmprddb1 kernel: ACPI: RSDP 00000000000fd830 00014 (v00 BOCHS )
Sep  5 02:23:34 wmprddb1 kernel: ACPI: RSDT 00000000dfffe380 00034 (v01 BOCHS  BXPCRSDT 00000001 BXPC 00000001)
Sep  5 02:23:34 wmprddb1 kernel: ACPI: FACP 00000000dfffff80 00074 (v01 BOCHS  BXPCFACP 00000001 BXPC 00000001)
Sep  5 02:23:34 wmprddb1 kernel: ACPI: DSDT 00000000dfffe3c0 011A9 (v01   BXPC   BXDSDT 00000001 INTL 20100528)
Sep  5 02:23:34 wmprddb1 kernel: ACPI: FACS 00000000dfffff40 00040
Sep  5 02:23:34 wmprddb1 kernel: ACPI: SSDT 00000000dffff6e0 00858 (v01 BOCHS  BXPCSSDT 00000001 BXPC 00000001)
Sep  5 02:23:34 wmprddb1 kernel: ACPI: APIC 00000000dffff5b0 00090 (v01 BOCHS  BXPCAPIC 00000001 BXPC 00000001)
Sep  5 02:23:34 wmprddb1 kernel: ACPI: HPET 00000000dffff570 00038 (v01 BOCHS  BXPCHPET 00000001 BXPC 00000001)
Sep  5 02:23:34 wmprddb1 kernel: Setting APIC routing to flat.
Sep  5 02:23:34 wmprddb1 kernel: No NUMA configuration found
Sep  5 02:23:34 wmprddb1 kernel: Faking a node at 0000000000000000-000000101ff00000
Sep  5 02:23:34 wmprddb1 kernel: Bootmem setup node 0 0000000000000000-000000101ff00000
Sep  5 02:23:34 wmprddb1 kernel:  NODE_DATA [000000000004a000 - 000000000007dfff]
Sep  5 02:23:34 wmprddb1 kernel:  bootmap [0000000000100000 -  0000000000303fdf] pages 204
Sep  5 02:23:34 wmprddb1 kernel: (8 early reservations) ==> bootmem [0000000000 - 101ff00000]
Sep  5 02:23:34 wmprddb1 kernel:  #0 [0000000000 - 0000001000]   BIOS data page ==> [0000000000 - 0000001000]
Sep  5 02:23:34 wmprddb1 kernel:  #1 [0000006000 - 0000008000]       TRAMPOLINE ==> [0000006000 - 0000008000]
Sep  5 02:23:34 wmprddb1 kernel:  #2 [0001000000 - 000201b0a4]    TEXT DATA BSS ==> [0001000000 - 000201b0a4]
Sep  5 02:23:34 wmprddb1 kernel:  #3 [003706f000 - 0037fef992]          RAMDISK ==> [003706f000 - 0037fef992]
Sep  5 02:23:34 wmprddb1 kernel:  #4 [000009fc00 - 0000100000]    BIOS reserved ==> [000009fc00 - 0000100000]
Sep  5 02:23:34 wmprddb1 kernel:  #5 [000201c000 - 000201c051]              BRK ==> [000201c000 - 000201c051]
Sep  5 02:23:34 wmprddb1 kernel:  #6 [0000008000 - 000000c000]          PGTABLE ==> [0000008000 - 000000c000]
Sep  5 02:23:34 wmprddb1 kernel:  #7 [000000c000 - 000004a000]          PGTABLE ==> [000000c000 - 000004a000]
Sep  5 02:23:34 wmprddb1 kernel: found SMP MP-table at [ffff8800000fdad0] fdad0
Sep  5 02:23:34 wmprddb1 kernel: Reserving 133MB of memory at 48MB for crashkernel (System RAM: 66047MB)
Sep  5 02:23:34 wmprddb1 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep  5 02:23:34 wmprddb1 kernel: kvm-clock: cpu 0, msr 0:1c25681, boot clock
Sep  5 02:23:34 wmprddb1 kernel: Zone PFN ranges:
Sep  5 02:23:34 wmprddb1 kernel:  DMA      0x00000001 -> 0x00001000
Sep  5 02:23:34 wmprddb1 kernel:  DMA32    0x00001000 -> 0x00100000
Sep  5 02:23:34 wmprddb1 kernel:  Normal   0x00100000 -> 0x0101ff00
Sep  5 02:23:34 wmprddb1 kernel: Movable zone start PFN for each node
Sep  5 02:23:34 wmprddb1 kernel: early_node_map[3] active PFN ranges
Sep  5 02:23:34 wmprddb1 kernel:    0: 0x00000001 -> 0x0000009f
Sep  5 02:23:34 wmprddb1 kernel:    0: 0x00000100 -> 0x000dfffe
Sep  5 02:23:34 wmprddb1 kernel:    0: 0x00100000 -> 0x0101ff00
Sep  5 02:23:34 wmprddb1 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep  5 02:23:34 wmprddb1 kernel: Setting APIC routing to flat.
Sep  5 02:23:34 wmprddb1 kernel: ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
Sep  5 02:23:34 wmprddb1 kernel: ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
Sep  5 02:23:34 wmprddb1 kernel: ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
Sep  5 02:23:34 wmprddb1 kernel: ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
Sep  5 02:23:34 wmprddb1 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep  5 02:23:34 wmprddb1 kernel: ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0])
Sep  5 02:23:34 wmprddb1 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep  5 02:23:34 wmprddb1 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep  5 02:23:34 wmprddb1 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep  5 02:23:34 wmprddb1 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep  5 02:23:34 wmprddb1 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep  5 02:23:34 wmprddb1 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep  5 02:23:34 wmprddb1 kernel: Using ACPI (MADT) for SMP configuration information
Sep  5 02:23:34 wmprddb1 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep  5 02:23:34 wmprddb1 kernel: SMP: Allowing 4 CPUs, 0 hotplug CPUs
Sep  5 02:23:34 wmprddb1 kernel: PM: Registered nosave memory: 000000000009f000 - 00000000000a0000
Sep  5 02:23:34 wmprddb1 kernel: PM: Registered nosave memory: 00000000000a0000 - 00000000000f0000
Sep  5 02:23:34 wmprddb1 kernel: PM: Registered nosave memory: 00000000000f0000 - 0000000000100000
Sep  5 02:23:34 wmprddb1 kernel: PM: Registered nosave memory: 00000000dfffe000 - 00000000e0000000
Sep  5 02:23:34 wmprddb1 kernel: PM: Registered nosave memory: 00000000e0000000 - 00000000feffc000
Sep  5 02:23:34 wmprddb1 kernel: PM: Registered nosave memory: 00000000feffc000 - 00000000ff000000
Sep  5 02:23:34 wmprddb1 kernel: PM: Registered nosave memory: 00000000ff000000 - 00000000fffc0000
Sep  5 02:23:34 wmprddb1 kernel: PM: Registered nosave memory: 00000000fffc0000 - 0000000100000000
Sep  5 02:23:34 wmprddb1 kernel: Allocating PCI resources starting at e0000000 (gap: e0000000:1effc000)
Sep  5 02:23:34 wmprddb1 kernel: Booting paravirtualized kernel on KVM
Sep  5 02:23:34 wmprddb1 kernel: NR_CPUS:4096 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep  5 02:23:34 wmprddb1 kernel: PERCPU: Embedded 31 pages/cpu @ffff880028200000 s94552 r8192 d24232 u524288
Sep  5 02:23:34 wmprddb1 kernel: pcpu-alloc: s94552 r8192 d24232 u524288 alloc=1*2097152
Sep  5 02:23:34 wmprddb1 kernel: pcpu-alloc: [0] 0 1 2 3
Sep  5 02:23:34 wmprddb1 kernel: kvm-clock: cpu 0, msr 0:28216681, primary cpu clock
Sep  5 02:23:34 wmprddb1 kernel: kvm-stealtime: cpu 0, msr 2820e840
Sep  5 02:23:34 wmprddb1 kernel: Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 16545530
Sep  5 02:23:34 wmprddb1 kernel: Policy zone: Normal
Sep  5 02:23:34 wmprddb1 kernel: Kernel command line: ro root=UUID=5a2261d9-fe14-4b3b-8f86-94426a33bccc clocksource=acpi_pm clocksource_failover=hpet rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=133M@0M  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
Sep  5 02:23:34 wmprddb1 kernel: PID hash table entries: 4096 (order: 3, 32768 bytes)
Sep  5 02:23:34 wmprddb1 kernel: Checking aperture...
Sep  5 02:23:34 wmprddb1 kernel: No AGP bridge found
Sep  5 02:23:34 wmprddb1 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep  5 02:23:34 wmprddb1 kernel: Placing 64MB software IO TLB between ffff880020000000 - ffff880024000000
Sep  5 02:23:34 wmprddb1 kernel: software IO TLB at phys 0x20000000 - 0x24000000
Sep  5 02:23:34 wmprddb1 kernel: Memory: 65951972k/67632128k available (5221k kernel code, 524688k absent, 1155468k reserved, 7120k data, 1264k init)
Sep  5 02:23:34 wmprddb1 kernel: Hierarchical RCU implementation.
Sep  5 02:23:34 wmprddb1 kernel: NR_IRQS:33024 nr_irqs:440
Sep  5 02:23:34 wmprddb1 kernel: Console: colour VGA+ 80x25
Sep  5 02:23:34 wmprddb1 kernel: console [tty0] enabled
Sep  5 02:23:34 wmprddb1 kernel: allocated 268435456 bytes of page_cgroup
Sep  5 02:23:34 wmprddb1 kernel: please try 'cgroup_disable=memory' option if you don't want memory cgroups
Sep  5 02:23:34 wmprddb1 kernel: Detected 2792.998 MHz processor.
Sep  5 02:23:34 wmprddb1 kernel: Calibrating delay loop (skipped) preset value.. 5585.99 BogoMIPS (lpj=2792998)
Sep  5 02:23:34 wmprddb1 kernel: pid_max: default: 32768 minimum: 301
Sep  5 02:23:34 wmprddb1 kernel: Security Framework initialized
Sep  5 02:23:34 wmprddb1 kernel: SELinux:  Initializing.
Sep  5 02:23:34 wmprddb1 kernel: Dentry cache hash table entries: 8388608 (order: 14, 67108864 bytes)
Sep  5 02:23:34 wmprddb1 kernel: Inode-cache hash table entries: 4194304 (order: 13, 33554432 bytes)
Sep  5 02:23:34 wmprddb1 kernel: Mount-cache hash table entries: 256
Sep  5 02:23:34 wmprddb1 kernel: Initializing cgroup subsys ns
Sep  5 02:23:34 wmprddb1 kernel: Initializing cgroup subsys cpuacct
Sep  5 02:23:34 wmprddb1 kernel: Initializing cgroup subsys memory
Sep  5 02:23:34 wmprddb1 kernel: Initializing cgroup subsys devices
Sep  5 02:23:34 wmprddb1 kernel: Initializing cgroup subsys freezer
Sep  5 02:23:34 wmprddb1 kernel: Initializing cgroup subsys net_cls
Sep  5 02:23:34 wmprddb1 kernel: Initializing cgroup subsys blkio
Sep  5 02:23:34 wmprddb1 kernel: Initializing cgroup subsys perf_event
Sep  5 02:23:34 wmprddb1 kernel: Initializing cgroup subsys net_prio
Sep  5 02:23:34 wmprddb1 kernel: CPU: Physical Processor ID: 0
Sep  5 02:23:34 wmprddb1 kernel: CPU: Processor Core ID: 0
Sep  5 02:23:34 wmprddb1 kernel: mce: CPU supports 10 MCE banks
Sep  5 02:23:34 wmprddb1 kernel: alternatives: switching to unfair spinlock
Sep  5 02:23:34 wmprddb1 kernel: ACPI: Core revision 20090903
Sep  5 02:23:34 wmprddb1 kernel: ftrace: converting mcount calls to 0f 1f 44 00 00
Sep  5 02:23:34 wmprddb1 kernel: ftrace: allocating 21432 entries in 85 pages
Sep  5 02:23:34 wmprddb1 kernel: Enabling x2apic
Sep  5 02:23:34 wmprddb1 kernel: Enabled x2apic
Sep  5 02:23:34 wmprddb1 kernel: Setting APIC routing to physical x2apic
Sep  5 02:23:34 wmprddb1 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep  5 02:23:34 wmprddb1 kernel: CPU0: Intel(R) Xeon(R) CPU           X5560  @ 2.80GHz stepping 05
Sep  5 02:23:34 wmprddb1 kernel: Performance Events: Nehalem events, Broken BIOS detected, complain to your hardware vendor.
Sep  5 02:23:34 wmprddb1 kernel: [Firmware Bug]: the BIOS has corrupted hw-PMU resources (MSR 186 is 53003c)
Sep  5 02:23:34 wmprddb1 kernel: Intel PMU driver.
Sep  5 02:23:34 wmprddb1 kernel: CPUID marked event: 'bus cycles' unavailable
Sep  5 02:23:34 wmprddb1 kernel: ... version:                2
Sep  5 02:23:34 wmprddb1 kernel: ... bit width:              48
Sep  5 02:23:34 wmprddb1 kernel: ... generic registers:      4
Sep  5 02:23:34 wmprddb1 kernel: ... value mask:             0000ffffffffffff
Sep  5 02:23:34 wmprddb1 kernel: ... max period:             000000007fffffff
Sep  5 02:23:34 wmprddb1 kernel: ... fixed-purpose events:   3
Sep  5 02:23:34 wmprddb1 kernel: ... event mask:             000000070000000f
Sep  5 02:23:34 wmprddb1 kernel: NMI watchdog enabled, takes one hw-pmu counter.
Sep  5 02:23:34 wmprddb1 kernel: Booting Node   0, Processors  #1
Sep  5 02:23:34 wmprddb1 kernel: kvm-clock: cpu 1, msr 0:28296681, secondary cpu clock
Sep  5 02:23:34 wmprddb1 kernel: kvm-stealtime: cpu 1, msr 2828e840
Sep  5 02:23:34 wmprddb1 kernel: #2
Sep  5 02:23:34 wmprddb1 kernel: kvm-clock: cpu 2, msr 0:28316681, secondary cpu clock
Sep  5 02:23:34 wmprddb1 kernel: kvm-stealtime: cpu 2, msr 2830e840
Sep  5 02:23:34 wmprddb1 kernel: #3 Ok.
Sep  5 02:23:34 wmprddb1 kernel: kvm-clock: cpu 3, msr 0:28396681, secondary cpu clock
Sep  5 02:23:34 wmprddb1 kernel: kvm-stealtime: cpu 3, msr 2838e840
Sep  5 02:23:34 wmprddb1 kernel: Brought up 4 CPUs
Sep  5 02:23:34 wmprddb1 kernel: Total of 4 processors activated (22343.98 BogoMIPS).
Sep  5 02:23:34 wmprddb1 kernel: devtmpfs: initialized
Sep  5 02:23:34 wmprddb1 kernel: regulator: core version 0.5
Sep  5 02:23:34 wmprddb1 kernel: NET: Registered protocol family 16
Sep  5 02:23:34 wmprddb1 kernel: ACPI: bus type pci registered
Sep  5 02:23:34 wmprddb1 kernel: PCI: Using configuration type 1 for base access
Sep  5 02:23:34 wmprddb1 kernel: bio: create slab <bio-0> at 0
Sep  5 02:23:34 wmprddb1 kernel: ACPI: Interpreter enabled
Sep  5 02:23:34 wmprddb1 kernel: ACPI: (supports S0 S3 S4 S5)
Sep  5 02:23:34 wmprddb1 kernel: ACPI: Using IOAPIC for interrupt routing
Sep  5 02:23:34 wmprddb1 kernel: ACPI: No dock devices found.
Sep  5 02:23:34 wmprddb1 kernel: HEST: Table not found.
Sep  5 02:23:34 wmprddb1 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep  5 02:23:34 wmprddb1 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep  5 02:23:34 wmprddb1 kernel: pci_root PNP0A03:00: host bridge window [io  0x0000-0x0cf7]
Sep  5 02:23:34 wmprddb1 kernel: pci_root PNP0A03:00: host bridge window [io  0x0d00-0xffff]
Sep  5 02:23:34 wmprddb1 kernel: pci_root PNP0A03:00: host bridge window [mem 0x000a0000-0x000bffff]
Sep  5 02:23:34 wmprddb1 kernel: pci_root PNP0A03:00: host bridge window [mem 0xe0000000-0xfebfffff]
Sep  5 02:23:34 wmprddb1 kernel: pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
Sep  5 02:23:34 wmprddb1 kernel: pci 0000:00:01.3: quirk: [io  0xb100-0xb10f] claimed by PIIX4 SMB
Sep  5 02:23:34 wmprddb1 kernel: ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
Sep  5 02:23:34 wmprddb1 kernel: ACPI: PCI Interrupt Link [LNKB] (IRQs 5 *10 11)
Sep  5 02:23:34 wmprddb1 kernel: ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11)
Sep  5 02:23:34 wmprddb1 kernel: ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11)
Sep  5 02:23:34 wmprddb1 kernel: ACPI: PCI Interrupt Link [LNKS] (IRQs *9)
Sep  5 02:23:34 wmprddb1 kernel: vgaarb: device added: PCI:0000:00:02.0,decodes=io+mem,owns=io+mem,locks=none
Sep  5 02:23:34 wmprddb1 kernel: vgaarb: loaded
Sep  5 02:23:34 wmprddb1 kernel: vgaarb: bridge control possible 0000:00:02.0
Sep  5 02:23:34 wmprddb1 kernel: SCSI subsystem initialized
Sep  5 02:23:34 wmprddb1 kernel: usbcore: registered new interface driver usbfs
Sep  5 02:23:34 wmprddb1 kernel: usbcore: registered new interface driver hub
Sep  5 02:23:34 wmprddb1 kernel: usbcore: registered new device driver usb
Sep  5 02:23:34 wmprddb1 kernel: PCI: Using ACPI for IRQ routing
Sep  5 02:23:34 wmprddb1 kernel: NetLabel: Initializing
Sep  5 02:23:34 wmprddb1 kernel: NetLabel:  domain hash size = 128
Sep  5 02:23:34 wmprddb1 kernel: NetLabel:  protocols = UNLABELED CIPSOv4
Sep  5 02:23:34 wmprddb1 kernel: NetLabel:  unlabeled traffic allowed by default
Sep  5 02:23:34 wmprddb1 kernel: HPET: 3 timers in total, 0 timers will be used for per-cpu timer
Sep  5 02:23:34 wmprddb1 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep  5 02:23:34 wmprddb1 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep  5 02:23:34 wmprddb1 kernel: Switching to clocksource kvm-clock
Sep  5 02:23:34 wmprddb1 kernel: pnp: PnP ACPI init
Sep  5 02:23:34 wmprddb1 kernel: ACPI: bus type pnp registered
Sep  5 02:23:34 wmprddb1 kernel: pnp: PnP ACPI: found 6 devices
Sep  5 02:23:34 wmprddb1 kernel: ACPI: ACPI bus type pnp unregistered
Sep  5 02:23:34 wmprddb1 kernel: Override clocksource acpi_pm is not HRT compatible. Cannot switch while in HRT/NOHZ mode
Sep  5 02:23:34 wmprddb1 kernel: NET: Registered protocol family 2
Sep  5 02:23:34 wmprddb1 kernel: IP route cache hash table entries: 524288 (order: 10, 4194304 bytes)
Sep  5 02:23:34 wmprddb1 kernel: TCP established hash table entries: 524288 (order: 11, 8388608 bytes)
Sep  5 02:23:34 wmprddb1 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
Sep  5 02:23:34 wmprddb1 kernel: TCP: Hash tables configured (established 524288 bind 65536)
Sep  5 02:23:34 wmprddb1 kernel: TCP reno registered
Sep  5 02:23:34 wmprddb1 kernel: NET: Registered protocol family 1
Sep  5 02:23:34 wmprddb1 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep  5 02:23:34 wmprddb1 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep  5 02:23:34 wmprddb1 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Sep  5 02:23:34 wmprddb1 kernel: ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 11
Sep  5 02:23:34 wmprddb1 kernel: pci 0000:00:01.2: PCI INT D -> Link[LNKD] -> GSI 11 (level, high) -> IRQ 11
Sep  5 02:23:34 wmprddb1 kernel: pci 0000:00:01.2: PCI INT D disabled
Sep  5 02:23:34 wmprddb1 kernel: Trying to unpack rootfs image as initramfs...
Sep  5 02:23:34 wmprddb1 kernel: Freeing initrd memory: 15874k freed
Sep  5 02:23:34 wmprddb1 kernel: audit: initializing netlink socket (disabled)
Sep  5 02:23:34 wmprddb1 kernel: type=2000 audit(1378365736.552:1): initialized
Sep  5 02:23:34 wmprddb1 kernel: HugeTLB registered 2 MB page size, pre-allocated 0 pages
Sep  5 02:23:34 wmprddb1 kernel: VFS: Disk quotas dquot_6.5.2
Sep  5 02:23:34 wmprddb1 kernel: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep  5 02:23:34 wmprddb1 kernel: msgmni has been set to 32768
Sep  5 02:23:34 wmprddb1 kernel: alg: No test for stdrng (krng)
Sep  5 02:23:34 wmprddb1 kernel: ksign: Installing public key data
Sep  5 02:23:34 wmprddb1 kernel: Loading keyring
Sep  5 02:23:34 wmprddb1 kernel: - Added public key CD7AEEA03C3767CE
Sep  5 02:23:34 wmprddb1 kernel: - User ID: Red Hat, Inc. (Kernel Module GPG key)
Sep  5 02:23:34 wmprddb1 kernel: - Added public key D4A26C9CCD09BEDA
Sep  5 02:23:34 wmprddb1 kernel: - User ID: Red Hat Enterprise Linux Driver Update Program <[email protected]>
Sep  5 02:23:34 wmprddb1 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep  5 02:23:34 wmprddb1 kernel: io scheduler noop registered
Sep  5 02:23:34 wmprddb1 kernel: io scheduler anticipatory registered
Sep  5 02:23:34 wmprddb1 kernel: io scheduler deadline registered
Sep  5 02:23:34 wmprddb1 kernel: io scheduler cfq registered (default)
Sep  5 02:23:34 wmprddb1 kernel: pci_hotplug: PCI Hot Plug PCI Core version: 0.5
Sep  5 02:23:34 wmprddb1 kernel: pciehp: PCI Express Hot Plug Controller Driver version: 0.4
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [3] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [4] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [5] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [6] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [7] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [8] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [9] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [10] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [11] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [12] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [13] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [14] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [15] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [16] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [17] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [18] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [19] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [20] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [21] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [22] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [23] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [24] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [25] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [26] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [27] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [28] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [29] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [30] registered
Sep  5 02:23:34 wmprddb1 kernel: acpiphp: Slot [31] registered
Sep  5 02:23:34 wmprddb1 kernel: ipmi message handler version 39.2
Sep  5 02:23:34 wmprddb1 kernel: IPMI System Interface driver.
Sep  5 02:23:34 wmprddb1 kernel: ipmi_si: Adding default-specified kcs state machine
Sep  5 02:23:34 wmprddb1 kernel: ipmi_si: Trying default-specified kcs state machine at i/o address 0xca2, slave address 0x0, irq 0
Sep  5 02:23:34 wmprddb1 kernel: ipmi_si: Interface detection failed
Sep  5 02:23:34 wmprddb1 kernel: ipmi_si: Adding default-specified smic state machine
Sep  5 02:23:34 wmprddb1 kernel: ipmi_si: Trying default-specified smic state machine at i/o address 0xca9, slave address 0x0, irq 0
Sep  5 02:23:34 wmprddb1 kernel: ipmi_si: Interface detection failed
Sep  5 02:23:34 wmprddb1 kernel: ipmi_si: Adding default-specified bt state machine
Sep  5 02:23:34 wmprddb1 kernel: ipmi_si: Trying default-specified bt state machine at i/o address 0xe4, slave address 0x0, irq 0
Sep  5 02:23:34 wmprddb1 kernel: ipmi_si: Interface detection failed
Sep  5 02:23:34 wmprddb1 kernel: ipmi_si: Unable to find any System Interface(s)
Sep  5 02:23:34 wmprddb1 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Sep  5 02:23:34 wmprddb1 kernel: ACPI: Power Button [PWRF]
Sep  5 02:23:34 wmprddb1 kernel: ERST: Table is not found!
Sep  5 02:23:34 wmprddb1 kernel: GHES: HEST is not enabled!
Sep  5 02:23:34 wmprddb1 kernel: Non-volatile memory driver v1.3
Sep  5 02:23:34 wmprddb1 kernel: Linux agpgart interface v0.103
Sep  5 02:23:34 wmprddb1 kernel: crash memory driver: version 1.1
Sep  5 02:23:34 wmprddb1 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep  5 02:23:34 wmprddb1 kernel: brd: module loaded
Sep  5 02:23:34 wmprddb1 kernel: loop: module loaded
Sep  5 02:23:34 wmprddb1 kernel: input: Macintosh mouse button emulation as /devices/virtual/input/input1
Sep  5 02:23:34 wmprddb1 kernel: Fixed MDIO Bus: probed
Sep  5 02:23:34 wmprddb1 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Sep  5 02:23:34 wmprddb1 kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Sep  5 02:23:34 wmprddb1 kernel: uhci_hcd: USB Universal Host Controller Interface driver
Sep  5 02:23:34 wmprddb1 kernel: uhci_hcd 0000:00:01.2: PCI INT D -> Link[LNKD] -> GSI 11 (level, high) -> IRQ 11
Sep  5 02:23:34 wmprddb1 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Sep  5 02:23:34 wmprddb1 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Sep  5 02:23:34 wmprddb1 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c0c0
Sep  5 02:23:34 wmprddb1 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001
Sep  5 02:23:34 wmprddb1 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Sep  5 02:23:34 wmprddb1 kernel: usb usb1: Product: UHCI Host Controller
Sep  5 02:23:34 wmprddb1 kernel: usb usb1: Manufacturer: Linux 2.6.32-358.6.1.el6.x86_64 uhci_hcd
Sep  5 02:23:34 wmprddb1 kernel: usb usb1: SerialNumber: 0000:00:01.2
Sep  5 02:23:34 wmprddb1 kernel: usb usb1: configuration #1 chosen from 1 choice
Sep  5 02:23:34 wmprddb1 kernel: hub 1-0:1.0: USB hub found
Sep  5 02:23:34 wmprddb1 kernel: hub 1-0:1.0: 2 ports detected
Sep  5 02:23:34 wmprddb1 kernel: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep  5 02:23:34 wmprddb1 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep  5 02:23:34 wmprddb1 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep  5 02:23:34 wmprddb1 kernel: mice: PS/2 mouse device common for all mice
Sep  5 02:23:34 wmprddb1 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input2
Sep  5 02:23:34 wmprddb1 kernel: rtc_cmos 00:01: RTC can wake from S4
Sep  5 02:23:34 wmprddb1 kernel: rtc_cmos 00:01: rtc core: registered rtc_cmos as rtc0
Sep  5 02:23:34 wmprddb1 kernel: rtc0: alarms up to one day, 114 bytes nvram, hpet irqs
Sep  5 02:23:34 wmprddb1 kernel: cpuidle: using governor ladder
Sep  5 02:23:34 wmprddb1 kernel: cpuidle: using governor menu
Sep  5 02:23:34 wmprddb1 kernel: EFI Variables Facility v0.08 2004-May-17
Sep  5 02:23:34 wmprddb1 kernel: usbcore: registered new interface driver hiddev
Sep  5 02:23:34 wmprddb1 kernel: usbcore: registered new interface driver usbhid
Sep  5 02:23:34 wmprddb1 kernel: usbhid: v2.6:USB HID core driver
Sep  5 02:23:34 wmprddb1 kernel: TCP cubic registered
Sep  5 02:23:34 wmprddb1 kernel: Initializing XFRM netlink socket
Sep  5 02:23:34 wmprddb1 kernel: NET: Registered protocol family 17
Sep  5 02:23:34 wmprddb1 kernel: registered taskstats version 1
Sep  5 02:23:34 wmprddb1 kernel: rtc_cmos 00:01: setting system clock to 2013-09-05 07:22:17 UTC (1378365737)
Sep  5 02:23:34 wmprddb1 kernel: Initalizing network drop monitor service
Sep  5 02:23:34 wmprddb1 kernel: Freeing unused kernel memory: 1264k freed
Sep  5 02:23:34 wmprddb1 kernel: Write protecting the kernel read-only data: 10240k
Sep  5 02:23:34 wmprddb1 kernel: Freeing unused kernel memory: 904k freed
Sep  5 02:23:34 wmprddb1 kernel: Freeing unused kernel memory: 1676k freed
Sep  5 02:23:34 wmprddb1 kernel: dracut: dracut-004-303.el6
Sep  5 02:23:34 wmprddb1 kernel: dracut: rd_NO_LUKS: removing cryptoluks activation
Sep  5 02:23:34 wmprddb1 kernel: dracut: rd_NO_LVM: removing LVM activation
Sep  5 02:23:34 wmprddb1 kernel: device-mapper: uevent: version 1.0.3
Sep  5 02:23:34 wmprddb1 kernel: device-mapper: ioctl: 4.23.6-ioctl (2012-07-25) initialised: [email protected]
Sep  5 02:23:34 wmprddb1 kernel: udev: starting version 147
Sep  5 02:23:34 wmprddb1 kernel: dracut: Starting plymouth daemon
Sep  5 02:23:34 wmprddb1 kernel: dracut: rd_NO_DM: removing DM RAID activation
Sep  5 02:23:34 wmprddb1 kernel: dracut: rd_NO_MD: removing MD RAID activation
Sep  5 02:23:34 wmprddb1 kernel: scsi0 : ata_piix
Sep  5 02:23:34 wmprddb1 kernel: scsi1 : ata_piix
Sep  5 02:23:34 wmprddb1 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc100 irq 14
Sep  5 02:23:34 wmprddb1 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc108 irq 15
Sep  5 02:23:34 wmprddb1 kernel: usb 1-1: new full speed USB device number 2 using uhci_hcd
Sep  5 02:23:34 wmprddb1 kernel: ata1.00: ATA-7: QEMU HARDDISK, 1.4.2, max UDMA/100
Sep  5 02:23:34 wmprddb1 kernel: ata1.00: 134217728 sectors, multi 16: LBA48
Sep  5 02:23:34 wmprddb1 kernel: ata1.01: ATA-7: QEMU HARDDISK, 1.4.2, max UDMA/100
Sep  5 02:23:34 wmprddb1 kernel: ata1.01: 6442450944 sectors, multi 16: LBA48
Sep  5 02:23:34 wmprddb1 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 1.4.2, max UDMA/100
Sep  5 02:23:34 wmprddb1 kernel: ata1.00: configured for MWDMA2
Sep  5 02:23:34 wmprddb1 kernel: ata2.00: configured for MWDMA2
Sep  5 02:23:34 wmprddb1 kernel: ata1.01: configured for MWDMA2
Sep  5 02:23:34 wmprddb1 kernel: scsi 0:0:0:0: Direct-Access     ATA      QEMU HARDDISK    1.4. PQ: 0 ANSI: 5
Sep  5 02:23:34 wmprddb1 kernel: scsi 0:0:1:0: Direct-Access     ATA      QEMU HARDDISK    1.4. PQ: 0 ANSI: 5
Sep  5 02:23:34 wmprddb1 kernel: scsi 1:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     1.4. PQ: 0 ANSI: 5
Sep  5 02:23:34 wmprddb1 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep  5 02:23:34 wmprddb1 kernel: ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 10
Sep  5 02:23:34 wmprddb1 kernel: virtio-pci 0000:00:03.0: PCI INT A -> Link[LNKC] -> GSI 10 (level, high) -> IRQ 10
Sep  5 02:23:34 wmprddb1 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001
Sep  5 02:23:34 wmprddb1 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=5
Sep  5 02:23:34 wmprddb1 kernel: usb 1-1: Product: QEMU USB Tablet
Sep  5 02:23:34 wmprddb1 kernel: usb 1-1: Manufacturer: QEMU
Sep  5 02:23:34 wmprddb1 kernel: usb 1-1: SerialNumber: 42
Sep  5 02:23:34 wmprddb1 kernel: usb 1-1: configuration #1 chosen from 1 choice
Sep  5 02:23:34 wmprddb1 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/input/input4
Sep  5 02:23:34 wmprddb1 kernel: generic-usb 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Pointer [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Sep  5 02:23:34 wmprddb1 kernel: sd 0:0:0:0: [sda] 134217728 512-byte logical blocks: (68.7 GB/64.0 GiB)
Sep  5 02:23:34 wmprddb1 kernel: sd 0:0:1:0: [sdb] 6442450944 512-byte logical blocks: (3.29 TB/3.00 TiB)
Sep  5 02:23:34 wmprddb1 kernel: sd 0:0:1:0: [sdb] Write Protect is off
Sep  5 02:23:34 wmprddb1 kernel: sd 0:0:1:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Sep  5 02:23:34 wmprddb1 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep  5 02:23:34 wmprddb1 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Sep  5 02:23:34 wmprddb1 kernel: sdb:
Sep  5 02:23:34 wmprddb1 kernel: sda: sda1 sda2 sda3
Sep  5 02:23:34 wmprddb1 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep  5 02:23:34 wmprddb1 kernel: sdb1
Sep  5 02:23:34 wmprddb1 kernel: sd 0:0:1:0: [sdb] Attached SCSI disk
Sep  5 02:23:34 wmprddb1 kernel: sr0: scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep  5 02:23:34 wmprddb1 kernel: Uniform CD-ROM driver Revision: 3.20
Sep  5 02:23:34 wmprddb1 kernel: Refined TSC clocksource calibration: 2793.158 MHz.
Sep  5 02:23:34 wmprddb1 kernel: EXT4-fs (sda2): INFO: recovery required on readonly filesystem
Sep  5 02:23:34 wmprddb1 kernel: EXT4-fs (sda2): write access will be enabled during recovery
Sep  5 02:23:34 wmprddb1 kernel: EXT4-fs (sda2): recovery complete
Sep  5 02:23:34 wmprddb1 kernel: EXT4-fs (sda2): mounted filesystem with ordered data mode. Opts:
Sep  5 02:23:34 wmprddb1 kernel: dracut: Mounted root filesystem /dev/sda2
Sep  5 02:23:34 wmprddb1 kernel: SELinux:  Disabled at runtime.
Sep  5 02:23:34 wmprddb1 kernel: type=1404 audit(1378365759.705:2): selinux=0 auid=4294967295 ses=4294967295
Sep  5 02:23:34 wmprddb1 kernel: dracut:
Sep  5 02:23:34 wmprddb1 kernel: dracut: Switching root
Sep  5 02:23:34 wmprddb1 kernel: udev: starting version 147
Sep  5 02:23:34 wmprddb1 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0
Sep  5 02:23:34 wmprddb1 kernel: e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI
Sep  5 02:23:34 wmprddb1 kernel: e1000: Copyright (c) 1999-2006 Intel Corporation.
Sep  5 02:23:34 wmprddb1 kernel: ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10
Sep  5 02:23:34 wmprddb1 kernel: e1000 0000:00:12.0: PCI INT A -> Link[LNKB] -> GSI 10 (level, high) -> IRQ 10
Sep  5 02:23:34 wmprddb1 kernel: e1000 0000:00:12.0: eth0: (PCI:33MHz:32-bit) ce:6d:78:74:c8:1b
Sep  5 02:23:34 wmprddb1 kernel: e1000 0000:00:12.0: eth0: Intel(R) PRO/1000 Network Connection
Sep  5 02:23:34 wmprddb1 kernel: e1000 0000:00:13.0: PCI INT A -> Link[LNKC] -> GSI 10 (level, high) -> IRQ 10
Sep  5 02:23:34 wmprddb1 kernel: e1000 0000:00:13.0: eth1: (PCI:33MHz:32-bit) 46:07:2f:27:23:9e
Sep  5 02:23:34 wmprddb1 kernel: e1000 0000:00:13.0: eth1: Intel(R) PRO/1000 Network Connection
Sep  5 02:23:34 wmprddb1 kernel: e1000 0000:00:14.0: PCI INT A -> Link[LNKD] -> GSI 11 (level, high) -> IRQ 11
Sep  5 02:23:34 wmprddb1 kernel: e1000 0000:00:14.0: eth2: (PCI:33MHz:32-bit) b2:ca:27:f7:1f:9a
Sep  5 02:23:34 wmprddb1 kernel: e1000 0000:00:14.0: eth2: Intel(R) PRO/1000 Network Connection
Sep  5 02:23:34 wmprddb1 kernel: microcode: CPU0 sig=0x106a5, pf=0x1, revision=0x1
Sep  5 02:23:34 wmprddb1 kernel: platform microcode: firmware: requesting intel-ucode/06-1a-05
Sep  5 02:23:34 wmprddb1 kernel: microcode: CPU1 sig=0x106a5, pf=0x1, revision=0x1
Sep  5 02:23:34 wmprddb1 kernel: platform microcode: firmware: requesting intel-ucode/06-1a-05
Sep  5 02:23:34 wmprddb1 kernel: microcode: CPU2 sig=0x106a5, pf=0x1, revision=0x1
Sep  5 02:23:34 wmprddb1 kernel: platform microcode: firmware: requesting intel-ucode/06-1a-05
Sep  5 02:23:34 wmprddb1 kernel: microcode: CPU3 sig=0x106a5, pf=0x1, revision=0x1
Sep  5 02:23:34 wmprddb1 kernel: platform microcode: firmware: requesting intel-ucode/06-1a-05
Sep  5 02:23:34 wmprddb1 kernel: Microcode Update Driver: v2.00 <[email protected]>, Peter Oruba
Sep  5 02:23:34 wmprddb1 kernel: sd 0:0:0:0: Attached scsi generic sg0 type 0
Sep  5 02:23:34 wmprddb1 kernel: sd 0:0:1:0: Attached scsi generic sg1 type 0
Sep  5 02:23:34 wmprddb1 kernel: sr 1:0:0:0: Attached scsi generic sg2 type 5
Sep  5 02:23:34 wmprddb1 kernel: EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts:
Sep  5 02:23:34 wmprddb1 kernel: EXT4-fs (dm-1): mounted filesystem with ordered data mode. Opts:
Sep  5 02:23:34 wmprddb1 kernel: EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts:
Sep  5 02:23:34 wmprddb1 kernel: Adding 16777208k swap on /dev/sda3.  Priority:-1 extents:1 across:16777208k
Sep  5 02:23:34 wmprddb1 kernel: Loading iSCSI transport class v2.0-870.
Sep  5 02:23:34 wmprddb1 kernel: iscsi: registered transport (tcp)
Sep  5 02:23:34 wmprddb1 kernel: NET: Registered protocol family 10
Sep  5 02:23:34 wmprddb1 kernel: lo: Disabled Privacy Extensions
Sep  5 02:23:34 wmprddb1 kernel: iscsi: registered transport (iser)
Sep  5 02:23:34 wmprddb1 kernel: libcxgbi:libcxgbi_init_module: tag itt 0x1fff, 13 bits, age 0xf, 4 bits.
Sep  5 02:23:34 wmprddb1 kernel: libcxgbi:ddp_setup_host_page_size: system PAGE 4096, ddp idx 0.
Sep  5 02:23:34 wmprddb1 kernel: Chelsio T3 iSCSI Driver cxgb3i v2.0.0 (Jun. 2010)
Sep  5 02:23:34 wmprddb1 kernel: iscsi: registered transport (cxgb3i)
Sep  5 02:23:34 wmprddb1 kernel: Chelsio T4 iSCSI Driver cxgb4i v0.9.1 (Aug. 2010)
Sep  5 02:23:34 wmprddb1 kernel: iscsi: registered transport (cxgb4i)
Sep  5 02:23:34 wmprddb1 kernel: cnic: Broadcom NetXtreme II CNIC Driver cnic v2.5.13 (Sep 07, 2012)
Sep  5 02:23:34 wmprddb1 kernel: Broadcom NetXtreme II iSCSI Driver bnx2i v2.7.2.2 (Apr 26, 2012)
Sep  5 02:23:34 wmprddb1 kernel: iscsi: registered transport (bnx2i)
Sep  5 02:23:34 wmprddb1 kernel: iscsi: registered transport (be2iscsi)
Sep  5 02:23:34 wmprddb1 kernel: In beiscsi_module_init, tt=ffffffffa032d620
Sep  5 02:23:34 wmprddb1 kernel: ip6_tables: (C) 2000-2006 Netfilter Core Team
Sep  5 02:23:34 wmprddb1 kernel: nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
Sep  5 02:23:34 wmprddb1 kernel: ADDRCONF(NETDEV_UP): eth0: link is not ready
Sep  5 02:23:34 wmprddb1 kernel: ADDRCONF(NETDEV_UP): eth1: link is not ready
Sep  5 02:23:34 wmprddb1 kernel: e1000: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
Sep  5 02:23:34 wmprddb1 kernel: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Sep  5 02:23:34 wmprddb1 kernel: ADDRCONF(NETDEV_UP): eth2: link is not ready
Sep  5 02:23:34 wmprddb1 kernel: e1000: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
Sep  5 02:23:34 wmprddb1 kernel: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Sep  5 02:23:34 wmprddb1 kernel: e1000: eth2 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
Sep  5 02:23:34 wmprddb1 kernel: ADDRCONF(NETDEV_CHANGE): eth2: link becomes ready
Sep  5 02:23:35 wmprddb1 kernel: scsi2 : iSCSI Initiator over TCP/IP
Sep  5 02:23:35 wmprddb1 kernel: scsi 2:0:0:0: RAID              IET      Controller       0001 PQ: 0 ANSI: 5
Sep  5 02:23:35 wmprddb1 kernel: scsi 2:0:0:0: Attached scsi generic sg3 type 12
Sep  5 02:23:35 wmprddb1 kernel: scsi 2:0:0:1: Direct-Access     IET      VIRTUAL-DISK     0001 PQ: 0 ANSI: 5
Sep  5 02:23:35 wmprddb1 kernel: sd 2:0:0:1: Attached scsi generic sg4 type 0
Sep  5 02:23:35 wmprddb1 kernel: scsi 2:0:0:2: Direct-Access     IET      VIRTUAL-DISK     0001 PQ: 0 ANSI: 5
Sep  5 02:23:35 wmprddb1 kernel: sd 2:0:0:2: Attached scsi generic sg5 type 0
Sep  5 02:23:35 wmprddb1 kernel: sd 2:0:0:1: [sdc] 1048576000 512-byte logical blocks: (536 GB/500 GiB)
Sep  5 02:23:35 wmprddb1 kernel: sd 2:0:0:2: [sdd] 1048576000 512-byte logical blocks: (536 GB/500 GiB)
Sep  5 02:23:35 wmprddb1 kernel: sd 2:0:0:1: [sdc] Write Protect is off
Sep  5 02:23:35 wmprddb1 kernel: sd 2:0:0:1: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Sep  5 02:23:35 wmprddb1 kernel: sd 2:0:0:2: [sdd] Write Protect is off
Sep  5 02:23:35 wmprddb1 kernel: sd 2:0:0:2: [sdd] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Sep  5 02:23:35 wmprddb1 kernel: sdc:
Sep  5 02:23:35 wmprddb1 kernel: sdd: sdc1 sdc2 sdc3 sdc4
Sep  5 02:23:35 wmprddb1 kernel: sd 2:0:0:1: [sdc] Attached SCSI disk
Sep  5 02:23:36 wmprddb1 kernel: unknown partition table
Sep  5 02:23:36 wmprddb1 kernel: sd 2:0:0:2: [sdd] Attached SCSI disk
Sep  5 02:23:36 wmprddb1 kdump: kexec: loaded kdump kernel
Sep  5 02:23:36 wmprddb1 kdump: started up
Sep  5 02:23:36 wmprddb1 iscsid: Connection1:0 to [target: iqn.2013-05.com.mot-mobility.iscsi:prddb, portal: 10.34.8.38,3260] through [iface: default] is operational now
Sep  5 02:23:36 wmprddb1 rpc.statd[1732]: Version 1.2.3 starting
Sep  5 02:23:36 wmprddb1 sm-notify[1733]: Version 1.2.3 starting
Sep  5 02:23:36 wmprddb1 kernel: RPC: Registered named UNIX socket transport module.
Sep  5 02:23:36 wmprddb1 kernel: RPC: Registered udp transport module.
Sep  5 02:23:36 wmprddb1 kernel: RPC: Registered tcp transport module.
Sep  5 02:23:36 wmprddb1 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Sep  5 02:23:37 wmprddb1 kernel: Slow work thread pool: Starting up
Sep  5 02:23:37 wmprddb1 kernel: Slow work thread pool: Ready
Sep  5 02:23:37 wmprddb1 kernel: FS-Cache: Loaded
Sep  5 02:23:37 wmprddb1 kernel: Registering the id_resolver key type
Sep  5 02:23:37 wmprddb1 kernel: FS-Cache: Netfs 'nfs' registered for caching
Sep  5 02:23:37 wmprddb1 acpid: starting up
Sep  5 02:23:37 wmprddb1 acpid: 1 rule loaded
Sep  5 02:23:37 wmprddb1 acpid: waiting for events: event logging is off
Sep  5 02:23:37 wmprddb1 acpid: client connected from 1886[68:68]
Sep  5 02:23:37 wmprddb1 acpid: 1 client rule loaded
Sep  5 02:23:38 wmprddb1 ntpd[1923]: ntpd [email protected] Thu Jan 10 15:17:40 UTC 2013 (1)
Sep  5 02:23:38 wmprddb1 ntpd[1924]: precision = 0.176 usec
Sep  5 02:23:38 wmprddb1 ntpd[1924]: Listening on interface #0 wildcard, 0.0.0.0#123 Disabled
Sep  5 02:23:38 wmprddb1 ntpd[1924]: Listening on interface #1 wildcard, ::#123 Disabled
Sep  5 02:23:38 wmprddb1 ntpd[1924]: Listening on interface #2 lo, ::1#123 Enabled
Sep  5 02:23:38 wmprddb1 ntpd[1924]: Listening on interface #3 eth1, fe80::4407:2fff:fe27:239e#123 Enabled
Sep  5 02:23:38 wmprddb1 ntpd[1924]: Listening on interface #4 eth0, fe80::cc6d:78ff:fe74:c81b#123 Enabled
Sep  5 02:23:38 wmprddb1 ntpd[1924]: Listening on interface #5 eth2, fe80::b0ca:27ff:fef7:1f9a#123 Enabled
Sep  5 02:23:38 wmprddb1 ntpd[1924]: Listening on interface #6 lo, 127.0.0.1#123 Enabled
Sep  5 02:23:38 wmprddb1 ntpd[1924]: Listening on interface #7 lo:1, 10.34.8.115#123 Enabled
Sep  5 02:23:38 wmprddb1 ntpd[1924]: Listening on interface #8 eth0, 10.34.8.10#123 Enabled
Sep  5 02:23:38 wmprddb1 ntpd[1924]: Listening on interface #9 eth0:1, 10.34.8.102#123 Enabled
Sep  5 02:23:38 wmprddb1 ntpd[1924]: Listening on interface #10 eth1, 192.168.90.50#123 Enabled
Sep  5 02:23:38 wmprddb1 ntpd[1924]: Listening on interface #11 eth2, 192.168.91.50#123 Enabled
Sep  5 02:23:38 wmprddb1 ntpd[1924]: Listening on routing socket on fd #28 for interface updates
Sep  5 02:23:38 wmprddb1 ntpd[1924]: kernel time sync status 2040
Sep  5 02:23:38 wmprddb1 ntpd[1924]: frequency initialized 2.864 PPM from /var/lib/ntp/drift
Sep  5 02:23:39 wmprddb1 abrtd: Init complete, entering main loop
Sep  5 02:23:40 wmprddb1 logger: Oracle HA daemon is enabled for autostart.
Sep  5 02:23:40 wmprddb1 rhnsd[2100]: Red Hat Network Services Daemon starting up, check in interval 240 minutes.
Sep  5 02:23:40 wmprddb1 kernel: hcpdriver: module license 'Proprietary' taints kernel.
Sep  5 02:23:40 wmprddb1 kernel: Disabling lock debugging due to kernel taint
Sep  5 02:23:40 wmprddb1 kernel: hcp: INFO: hcp driver loaded: 4.4.0 build 18616, NR_CPUS: 4096
Sep  5 02:23:40 wmprddb1 kernel: hcp: INFO: hcp_watchdog: started.
Sep  5 02:23:41 wmprddb1 logger: exec /p02/app/11.2.0/grid/perl/bin/perl -I/p02/app/11.2.0/grid/perl/lib /p02/app/11.2.0/grid/bin/crswrapexece.pl /p02/app/11.2.0/grid/crs/install/s_crsconfig_wmprddb1_env.txt /p02/app/11.2.0/grid/bin/ohasd.bin "reboot"
Sep  5 02:23:42 wmprddb1 /p02/app/11.2.0/grid/bin/crswrapexece.pl[2139]: executing "/p02/app/11.2.0/grid/bin/ohasd.bin reboot"
Sep  5 02:24:03 wmprddb1 mDNSResponder (Engineering Build) (Jul 31 2009 08:51:29) [2315]: starting
Sep  5 02:24:04 wmprddb1 mDNSResponder: Oracle mDNSResponder starting
Sep  5 02:24:07 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:07 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:07 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:07 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:07 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:07 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:09 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:09 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:09 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:09 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:09 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:09 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:12 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:12 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:12 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:13 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:13 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:13 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:24 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:24 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:24 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:24 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:24 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:24 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:25 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:25 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:25 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:25 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:25 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:25 wmprddb1 mDNSResponder: ERROR: read_msg: Connection reset by peer
Sep  5 02:24:56 wmprddb1 ntpd[1924]: Listening on interface #12 eth0:2, 10.34.8.106#123 Enabled
Sep  5 02:27:58 wmprddb1 ntpd[1924]: synchronized to 10.177.128.120, stratum 1
Sep  5 02:29:18 wmprddb1 kernel: Clocksource tsc unstable (delta = -8590540721 ns)
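One thing that stands out in the messages above is the `Clocksource tsc unstable` line; on VM guests an unstable TSC can stall timekeeping long enough for CSSD heartbeats to be missed. A minimal sketch to check which clocksource each guest is actually running on (assuming a Linux guest with sysfs mounted; the kernel command line above already lists `acpi_pm` and `hpet` as failovers):

```shell
# Report the clocksource the kernel is currently using and the alternatives
# it could fall back to (e.g. hpet, acpi_pm per the boot command line above).
cs_dir=/sys/devices/system/clocksource/clocksource0
if [ -r "$cs_dir/current_clocksource" ]; then
    current=$(cat "$cs_dir/current_clocksource")
    available=$(cat "$cs_dir/available_clocksource")
else
    # Not a Linux guest, or sysfs not mounted; nothing to report.
    current=unknown
    available=unknown
fi
echo "current clocksource:    $current"
echo "available clocksources: $available"
```

Running this on both nodes would show whether either guest is still on `tsc` despite the instability warning.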
Details from the cluster alert log file:
[cssd(2379)]CRS-1612:Network communication with node wmprddb2 (2) missing for 50% of timeout interval.  Removal of this node from cluster in 10.460 seconds
2013-09-05 02:23:52.072
[ohasd(2139)]CRS-2112:The OLR service started on node wmprddb1.
2013-09-05 02:23:53.042
[ohasd(2139)]CRS-8011:reboot advisory message from host: wmprddb1, component: ag093053, with time stamp: L-2013-09-04-02:02:31.795
[ohasd(2139)]CRS-8013:reboot advisory message text: Rebooting after limit 28250 exceeded; disk timeout 27940, network timeout 28250, last heartbeat from CSSD at epoch seconds 1378278113.113, 38684 milliseconds ago based on invariant clock value of 480878270
2013-09-05 02:23:55.486
[ohasd(2139)]CRS-8011:reboot advisory message from host: wmprddb1, component: mo104940, with time stamp: L-2013-09-04-11:14:44.788
[ohasd(2139)]CRS-8013:reboot advisory message text: Rebooting after limit 26320 exceeded; disk timeout 26320, network timeout 25410, last heartbeat from CSSD at epoch seconds 1378311258.322, 26464 milliseconds ago based on invariant clock value of 577500
2013-09-05 02:24:01.318
[ohasd(2139)]CRS-8017:location: /etc/oracle/lastgasp has 354 reboot advisory log files, 2 were announced and 0 errors occurred
2013-09-05 02:24:03.101
[ohasd(2139)]CRS-2772:Server 'wmprddb1' has been assigned to pool 'Free'.
2013-09-05 02:24:06.854
[cssd(2388)]CRS-1713:CSSD daemon is started in clustered mode
2013-09-05 02:24:16.288
[cssd(2388)]CRS-1707:Lease acquisition for node wmprddb1 number 1 completed
2013-09-05 02:24:16.318
[cssd(2388)]CRS-1605:CSSD voting file is online: /dev/asm-disk1; details in /p02/app/11.2.0/grid/log/wmprddb1/cssd/ocssd.log.
2013-09-05 02:24:27.929
[cssd(2388)]CRS-1601:CSSD Reconfiguration complete. Active nodes are wmprddb1 wmprddb2 .
2013-09-05 02:24:28.877
[ctssd(2492)]CRS-2403:The Cluster Time Synchronization Service on host wmprddb1 is in observer mode.
2013-09-05 02:24:29.724
[ctssd(2492)]CRS-2401:The Cluster Time Synchronization Service started on host wmprddb1.
2013-09-05 02:24:30.387
[ctssd(2492)]CRS-2407:The new Cluster Time Synchronization Service reference node is host wmprddb2.
2013-09-05 02:24:30.395
[ctssd(2492)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-05 02:24:45.834
[crsd(2694)]CRS-1012:The OCR service started on node wmprddb1.
2013-09-05 02:24:47.959
[crsd(2694)]CRS-1201:CRSD started on node wmprddb1.
2013-09-05 02:29:23.327
[cssd(2388)]CRS-1612:Network communication with node wmprddb2 (2) missing for 50% of timeout interval.  Removal of this node from cluster in 14.330 seconds
2013-09-05 02:37:00.963
[ctssd(2492)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
Please provide an update as soon as possible. This has been going on for more than a week now.
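The CRS-8013 reboot advisory lines above already quantify the failure: the gap since the last CSSD heartbeat exceeded the configured reboot limit. A quick sketch to pull the numbers out of one of those lines (the sample line is copied verbatim from the alert log above; assuming POSIX `sed`):

```shell
# Parse a CRS-8013 reboot advisory line and compare the CSSD heartbeat gap
# against the configured reboot limit. Sample line copied from the log above.
line='[ohasd(2139)]CRS-8013:reboot advisory message text: Rebooting after limit 28250 exceeded; disk timeout 27940, network timeout 28250, last heartbeat from CSSD at epoch seconds 1378278113.113, 38684 milliseconds ago based on invariant clock value of 480878270'

# Extract the limit (ms) and the actual heartbeat gap (ms).
limit=$(echo "$line" | sed -n 's/.*after limit \([0-9]*\) exceeded.*/\1/p')
gap=$(echo "$line"   | sed -n 's/.*, \([0-9]*\) milliseconds ago.*/\1/p')

echo "limit=${limit}ms gap=${gap}ms"
if [ "$gap" -gt "$limit" ]; then
    echo "CSSD heartbeat gap exceeded the reboot limit by $((gap - limit)) ms"
fi
```

For this sample line the heartbeat was 38684 ms late against a 28250 ms limit, i.e. more than 10 seconds past the point where the node self-evicts.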

Hello Levi,
Thanks for the update.
However, we have already created an SR and are working with Oracle, but we are still facing the same issues.
The node restarted multiple times today.
Please check the log below and suggest some more workarounds if possible.
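Since the same few CRS codes keep repeating in the alert log, it may help to tally them to see which failure mode dominates: network heartbeat loss (CRS-1612/1611), voting-disk I/O stalls (CRS-1614), or clock skew (CRS-2409). A rough sketch (the sample lines are copied from the alert log; in practice you would point the loop at the real alert log file under the Grid home, path assumed):

```shell
# Count the recurring CRS warning codes to see which failure mode dominates.
# Sample lines copied from the alert log; replace with e.g.
#   $GRID_HOME/log/wmprddb1/alertwmprddb1.log   (path is an assumption)
log=$(cat <<'EOF'
[cssd(14650)]CRS-1612:Network communication with node wmprddb2 (2) missing for 50% of timeout interval.
[cssd(14650)]CRS-1611:Network communication with node wmprddb2 (2) missing for 75% of timeout interval.
[cssd(14650)]CRS-1614:No I/O has completed after 75% of the maximum interval.
[ctssd(2524)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time.
EOF
)
for code in CRS-1612 CRS-1611 CRS-1614 CRS-2409; do
    n=$(printf '%s\n' "$log" | grep -c "$code")
    echo "$code: $n"
done
```

On the real log, a high CRS-1614 count would point at the shared storage path, while CRS-1612/1611 would point at the interconnect.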
2013-09-06 15:26:31.620
[crsd(14929)]CRS-2773:Server 'wmprddb2' has been removed from pool 'ora.ebizprd_NEWSRV1'.
2013-09-06 15:28:26.527
[cssd(14650)]CRS-1601:CSSD Reconfiguration complete. Active nodes are wmprddb1 wmprddb2 .
2013-09-06 15:29:00.370
[crsd(14929)]CRS-2772:Server 'wmprddb2' has been assigned to pool 'Generic'.
2013-09-06 15:29:00.372
[crsd(14929)]CRS-2772:Server 'wmprddb2' has been assigned to pool 'ora.ebizprd'.
2013-09-06 15:29:00.372
[crsd(14929)]CRS-2772:Server 'wmprddb2' has been assigned to pool 'ora.ebizprd_NEWSRV1'.
2013-09-06 15:29:00.372
[crsd(14929)]CRS-2772:Server 'wmprddb2' has been assigned to pool 'ora.ebizprd_NEWSRV'.
2013-09-06 15:36:56.873
[cssd(14650)]CRS-1612:Network communication with node wmprddb2 (2) missing for 50% of timeout interval.  Removal of this node from cluster in 12.280 seconds
2013-09-06 19:28:02.570
[cssd(14650)]CRS-1611:Network communication with node wmprddb2 (2) missing for 75% of timeout interval.  Removal of this node from cluster in 5.320 seconds
2013-09-07 03:31:55.801
[cssd(14650)]CRS-1614:No I/O has completed after 75% of the maximum interval. Voting file /dev/asm-disk1 will be considered not functional in 6060 milliseconds
2013-09-09 01:12:58.503
[cssd(14650)]CRS-1604:CSSD voting file is offline: /dev/asm-disk1; details at (:CSSNM00058:) in /p02/app/11.2.0/grid/log/wmprddb1/cssd/ocssd.log.
2013-09-09 01:12:58.503
[cssd(14650)]CRS-1606:The number of voting files available, 0, is less than the minimum number of voting files required, 1, resulting in CSSD termination to ensure data integrity; details at (:CSSNM00018:) in /p02/app/11.2.0/grid/log/wmprddb1/cssd/ocssd.log
2013-09-09 01:12:58.506
[ohasd(2144)]CRS-8011:reboot advisory message from host: wmprddb1, component: mo224047, with time stamp: L-2013-09-09-01:12:58.504
[ohasd(2144)]CRS-8013:reboot advisory message text: Rebooting after limit 28270 exceeded; disk timeout 27500, network timeout 28270, last heartbeat from CSSD at epoch seconds 1378706815.931, 362510 milliseconds ago based on invariant clock value of 301809314
2013-09-09 01:14:31.574
[crsd(14929)]CRS-2765:Resource 'ora.ebizprd.db' has failed on server 'wmprddb1'.
2013-09-09 01:20:38.387
[ohasd(2167)]CRS-2112:The OLR service started on node wmprddb1.
2013-09-09 01:20:43.215
[ohasd(2167)]CRS-8011:reboot advisory message from host: wmprddb1, component: mo224047, with time stamp: L-2013-09-09-01:12:58.504
[ohasd(2167)]CRS-8013:reboot advisory message text: Rebooting after limit 28270 exceeded; disk timeout 27500, network timeout 28270, last heartbeat from CSSD at epoch seconds 1378706815.931, 362510 milliseconds ago based on invariant clock value of 301809314
2013-09-09 01:20:48.400
[ohasd(2167)]CRS-8017:location: /etc/oracle/lastgasp has 368 reboot advisory log files, 1 were announced and 0 errors occurred
2013-09-09 01:20:50.801
[ohasd(2167)]CRS-2772:Server 'wmprddb1' has been assigned to pool 'Free'.
2013-09-09 01:20:54.688
[cssd(2413)]CRS-1713:CSSD daemon is started in clustered mode
2013-09-09 01:21:06.177
[cssd(2413)]CRS-1707:Lease acquisition for node wmprddb1 number 1 completed
2013-09-09 01:21:06.231
[cssd(2413)]CRS-1605:CSSD voting file is online: /dev/asm-disk1; details in /p02/app/11.2.0/grid/log/wmprddb1/cssd/ocssd.log.
2013-09-09 01:21:18.080
[cssd(2413)]CRS-1601:CSSD Reconfiguration complete. Active nodes are wmprddb1 wmprddb2 .
2013-09-09 01:21:19.679
[ctssd(2533)]CRS-2403:The Cluster Time Synchronization Service on host wmprddb1 is in observer mode.
2013-09-09 01:21:20.511
[ctssd(2533)]CRS-2401:The Cluster Time Synchronization Service started on host wmprddb1.
2013-09-09 01:21:21.214
[ctssd(2533)]CRS-2407:The new Cluster Time Synchronization Service reference node is host wmprddb2.
2013-09-09 01:21:21.250
[ctssd(2533)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-09 01:21:38.561
[crsd(2744)]CRS-1012:The OCR service started on node wmprddb1.
2013-09-09 01:21:40.684
[crsd(2744)]CRS-1201:CRSD started on node wmprddb1.
2013-09-09 05:27:10.018
[ctssd(2533)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-09 09:35:08.407
[ctssd(2533)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-09 13:42:01.043
[ctssd(2533)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-09 17:55:00.185
[ctssd(2533)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-09 22:02:24.885
[ctssd(2533)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-10 02:14:22.832
[ctssd(2533)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-10 06:27:21.917
[ctssd(2533)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-10 10:35:43.068
[ctssd(2533)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-10 14:11:20.820
[cssd(2413)]CRS-1612:Network communication with node wmprddb2 (2) missing for 50% of timeout interval.  Removal of this node from cluster in 14.180 seconds
2013-09-10 14:55:56.051
[ctssd(2533)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-10 19:19:16.100
[ctssd(2533)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-10 23:35:54.158
[ctssd(2533)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-11 03:46:23.286
[ctssd(2533)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-11 07:59:27.716
[ctssd(2533)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-11 12:14:17.259
[ctssd(2533)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-11 16:29:50.113
[ctssd(2533)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-11 20:55:26.122
[ctssd(2533)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-12 01:13:44.095
[ctssd(2533)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-12 02:34:26.298
[ohasd(2155)]CRS-2112:The OLR service started on node wmprddb1.
2013-09-12 02:34:31.876
[ohasd(2155)]CRS-8011:reboot advisory message from host: wmprddb1, component: ag012106, with time stamp: L-2013-09-12-02:32:31.817
[ohasd(2155)]CRS-8013:reboot advisory message text: Rebooting after limit 27880 exceeded; disk timeout 27880, network timeout 27650, last heartbeat from CSSD at epoch seconds 1378971123.548, 28271 milliseconds ago based on invariant clock value of 262545873
2013-09-12 02:34:36.770
[ohasd(2155)]CRS-8011:reboot advisory message from host: wmprddb1, component: mo224047, with time stamp: L-2013-09-09-01:12:58.504
[ohasd(2155)]CRS-8013:reboot advisory message text: Rebooting after limit 28270 exceeded; disk timeout 27500, network timeout 28270, last heartbeat from CSSD at epoch seconds 1378706815.931, 362510 milliseconds ago based on invariant clock value of 301809314
2013-09-12 02:34:46.087
[ohasd(2155)]CRS-8017:location: /etc/oracle/lastgasp has 370 reboot advisory log files, 2 were announced and 0 errors occurred
2013-09-12 02:34:47.539
[ohasd(2155)]CRS-2772:Server 'wmprddb1' has been assigned to pool 'Free'.
2013-09-12 02:34:51.530
[cssd(2386)]CRS-1713:CSSD daemon is started in clustered mode
2013-09-12 02:35:03.390
[cssd(2386)]CRS-1707:Lease acquisition for node wmprddb1 number 1 completed
2013-09-12 02:35:03.426
[cssd(2386)]CRS-1605:CSSD voting file is online: /dev/asm-disk1; details in /p02/app/11.2.0/grid/log/wmprddb1/cssd/ocssd.log.
2013-09-12 02:35:40.068
[cssd(2386)]CRS-1601:CSSD Reconfiguration complete. Active nodes are wmprddb1 wmprddb2 .
2013-09-12 02:35:41.503
[ctssd(2524)]CRS-2403:The Cluster Time Synchronization Service on host wmprddb1 is in observer mode.
2013-09-12 02:35:41.530
[ctssd(2524)]CRS-2407:The new Cluster Time Synchronization Service reference node is host wmprddb2.
2013-09-12 02:35:41.538
[ctssd(2524)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-12 02:35:42.282
[ctssd(2524)]CRS-2401:The Cluster Time Synchronization Service started on host wmprddb1.
2013-09-12 02:36:12.633
[crsd(2747)]CRS-1012:The OCR service started on node wmprddb1.
2013-09-12 02:36:15.517
[crsd(2747)]CRS-1201:CRSD started on node wmprddb1.
2013-09-12 02:36:19.256
[crsd(2747)]CRS-2772:Server 'wmprddb1' has been assigned to pool 'Generic'.
2013-09-12 02:36:19.256
[crsd(2747)]CRS-2772:Server 'wmprddb1' has been assigned to pool 'ora.ebizprd'.
2013-09-12 02:36:19.257
[crsd(2747)]CRS-2772:Server 'wmprddb1' has been assigned to pool 'ora.ebizprd_NEWSRV1'.
2013-09-12 02:36:19.330
[crsd(2747)]CRS-2772:Server 'wmprddb1' has been assigned to pool 'ora.ebizprd_NEWSRV'.
2013-09-12 02:39:03.666
[crsd(2747)]CRS-2772:Server 'wmprddb2' has been assigned to pool 'Generic'.
2013-09-12 02:39:03.666
[crsd(2747)]CRS-2772:Server 'wmprddb2' has been assigned to pool 'ora.ebizprd'.
2013-09-12 02:39:03.667
[crsd(2747)]CRS-2772:Server 'wmprddb2' has been assigned to pool 'ora.ebizprd_NEWSRV1'.
2013-09-12 02:39:03.667
[crsd(2747)]CRS-2772:Server 'wmprddb2' has been assigned to pool 'ora.ebizprd_NEWSRV'.
2013-09-12 02:56:34.796
[cssd(2386)]CRS-1612:Network communication with node wmprddb2 (2) missing for 50% of timeout interval.  Removal of this node from cluster in 14.240 seconds
2013-09-12 03:04:30.467
[ctssd(2524)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-12 03:56:40.283
[ctssd(2524)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-12 04:01:29.311
[ctssd(2524)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-12 04:30:15.619
[ctssd(2524)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-12 04:47:47.914
[cssd(2386)]CRS-1614:No I/O has completed after 75% of the maximum interval. Voting file /dev/asm-disk1 will be considered not functional in 6070 milliseconds
2013-09-12 04:51:29.577
[ctssd(2524)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-12 04:52:50.706
[ctssd(2524)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-12 05:06:34.196
[ctssd(2524)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-12 05:20:50.567
[ctssd(2524)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-12 05:37:57.880
[ctssd(2524)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-12 06:10:12.542
[ctssd(2524)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-12 06:17:10.910
[ctssd(2524)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-12 06:56:09.891
[ohasd(2156)]CRS-2112:The OLR service started on node wmprddb1.
2013-09-12 06:56:13.926
[ohasd(2156)]CRS-8011:reboot advisory message from host: wmprddb1, component: ag012106, with time stamp: L-2013-09-12-02:32:31.817
[ohasd(2156)]CRS-8013:reboot advisory message text: Rebooting after limit 27880 exceeded; disk timeout 27880, network timeout 27650, last heartbeat from CSSD at epoch seconds 1378971123.548, 28271 milliseconds ago based on invariant clock value of 262545873
2013-09-12 06:56:16.041
[ohasd(2156)]CRS-8011:reboot advisory message from host: wmprddb1, component: mo224047, with time stamp: L-2013-09-09-01:12:58.504
[ohasd(2156)]CRS-8013:reboot advisory message text: Rebooting after limit 28270 exceeded; disk timeout 27500, network timeout 28270, last heartbeat from CSSD at epoch seconds 1378706815.931, 362510 milliseconds ago based on invariant clock value of 301809314
2013-09-12 06:56:22.248
[ohasd(2156)]CRS-8017:location: /etc/oracle/lastgasp has 372 reboot advisory log files, 2 were announced and 0 errors occurred
2013-09-12 06:56:24.514
[ohasd(2156)]CRS-2772:Server 'wmprddb1' has been assigned to pool 'Free'.
2013-09-12 06:56:28.435
[cssd(2383)]CRS-1713:CSSD daemon is started in clustered mode
2013-09-12 06:56:40.174
[cssd(2383)]CRS-1707:Lease acquisition for node wmprddb1 number 1 completed
2013-09-12 06:56:40.226
[cssd(2383)]CRS-1605:CSSD voting file is online: /dev/asm-disk1; details in /p02
2013-09-12 06:56:51.094
[cssd(2383)]CRS-1601:CSSD Reconfiguration complete. Active nodes are wmprddb1 wm
2013-09-12 06:56:52.415
[ctssd(2502)]CRS-2403:The Cluster Time Synchronization Service on host wmprddb1
2013-09-12 06:56:52.832
[ctssd(2502)]CRS-2407:The new Cluster Time Synchronization Service reference nod
2013-09-12 06:56:52.840
[ctssd(2502)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-12 06:56:53.197
[ctssd(2502)]CRS-2401:The Cluster Time Synchronization Service started on host w
2013-09-12 06:57:13.173
[crsd(2716)]CRS-1012:The OCR service started on node wmprddb1.
2013-09-12 06:57:14.675
[crsd(2716)]CRS-1201:CRSD started on node wmprddb1.
2013-09-12 07:00:50.465
[cssd(2383)]CRS-1612:Network communication with node wmprddb2 (2) missing for 50
hasib replied (in response to hasib) 1 week ago
The node rebooted again; both nodes rebooted thrice in half an hour, giving the following error:
CRS-1612:Network communication with node wmprddb2 (2) missing for 50% of timeout interval.  Removal of this node from cluster in 12.480 seconds
2013-09-05 10:37:04.806
[ctssd(2540)]CRS-2409:The clock on host wmprddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2013-09-05 10:39:31.565
[ohasd(2128)]CRS-2112:The OLR service started on node wmprddb1.
2013-09-05 10:39:31.793
[ohasd(2128)]CRS-8011:reboot advisory message from host: wmprddb1, component: ag093053, with time stamp: L-2013-09-04-02:02:31.795
[ohasd(2128)]CRS-8013:reboot advisory message text: Rebooting after limit 28250 exceeded; disk timeout 27940, network timeout 28250, last heartbeat from CSSD at epoch seconds 1378278113.113, 38684 milliseconds ago based on invariant clock value of 480878270
2013-09-05 10:39:32.036
[ohasd(2128)]CRS-8011:reboot advisory message from host: wmprddb1, component: mo102023, with time stamp: L-2013-09-05-10:38:06.313
[ohasd(2128)]CRS-8013:reboot advisory message text: Rebooting after limit 25570 exceeded; disk timeout 6560, network timeout 25570, last heartbeat from CSSD at epoch seconds 1378395458.791, 27520 milliseconds ago based on invariant clock value of 243974
2013-09-05 10:39:32.072
[ohasd(2128)]CRS-8011:reboot advisory message from host: wmprddb1, component: mo104940, with time stamp: L-2013-09-04-11:14:44.788
[ohasd(2128)]CRS-8013:reboot advisory message text: Rebooting after limit 26320 exceeded; disk timeout 26320, network timeout 25410, last heartbeat from CSSD at epoch seconds 1378311258.322, 26464 milliseconds ago based on invariant clock value of 577500
2013-09-05 10:39:32.719
[ohasd(2128)]CRS-8017:location: /etc/oracle/lastgasp has 358 reboot advisory log files, 3 were announced and 0 errors occurred
2013-09-05 10:39:34.258
[ohasd(2128)]CRS-2772:Server 'wmprddb1' has been assigned to pool 'Free'.
2013-09-05 10:39:37.919
[cssd(2378)]CRS-1713:CSSD daemon is started in clustered mode
2013-09-05 10:39:49.264
[cssd(2378)]CRS-1707:Lease acquisition for node wmprddb1 number 1 completed
2013-09-05 10:39:49.292
[cssd(2378)]CRS-1605:CSSD voting file is online: /dev/asm-disk1; details in /p02/app/11.2.0/grid/log/wmprddb1/cssd/ocssd.log.
2013-09-05 10:40:13.946
[cssd(2378)]CRS-1601:CSSD Reconfiguration complete. Active nodes are wmprddb1 wmprddb2 .
The log writer is taking 99% of I/O.
Levi-Pereira replied (in response to hasib) 2 days ago
Deploying Oracle RAC in a virtualized environment must be done carefully, because RAC/Clusterware is very sensitive to poor performance (split-brain evictions occur due to I/O or CPU starvation).
I can see:
1: CRS-1612:Network communication with node wmprddb2 (2) missing for 50% of timeout interval. Removal of this node from cluster in 12.480 seconds
2013-09-05 10:37:04.806
2: Rebooting after limit 25570 exceeded; disk timeout 6560, network timeout 25570, last heartbeat from CSSD at epoch seconds 1378395458.791, 27520 milliseconds ago based on invariant clock value of 243974
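The CRS-8013 advisory text can be decoded mechanically: it already states the configured timeouts and how long ago the last CSSD heartbeat was. A minimal sketch (the advisory string below is copied from the log above; the field positions are assumptions based on the CRS-8013 format shown in this thread):

```shell
# Advisory line copied verbatim from the CRS-8013 message above
advisory='Rebooting after limit 25570 exceeded; disk timeout 6560, network timeout 25570, last heartbeat from CSSD at epoch seconds 1378395458.791, 27520 milliseconds ago based on invariant clock value of 243974'

# Extract the heartbeat gap and the configured network timeout (both in ms)
gap_ms=$(echo "$advisory" | sed -n 's/.*, \([0-9]*\) milliseconds ago.*/\1/p')
net_timeout_ms=$(echo "$advisory" | sed -n 's/.*network timeout \([0-9]*\),.*/\1/p')

if [ "$gap_ms" -gt "$net_timeout_ms" ]; then
  echo "heartbeat gap ${gap_ms}ms exceeded network timeout ${net_timeout_ms}ms"
fi
```

Here the gap (27520 ms) exceeds the 25570 ms network timeout, which is exactly why CSSD fenced the node: the heartbeat stopped arriving, whether because the interconnect broke or because the VM was simply not scheduled for that long.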

Similar Messages

  • Oracle RAC 2-node architecture: node 2 always gets evicted

    Hi,
    I have an Oracle RAC DB with a simple 2-node architecture (hosts RHEL 5.5 x86_64). The problem we are facing is that whenever there is a network failure on either node, node 2 always gets evicted (rebooted). We do not see any abnormal errors in the alert.log file on either node.
    The steps followed and results are:
    **Node-1#service network restart**
    **Result: Node-2 evicted**
    **Node-2# service network restart**
    **Result: Node-2 evicted**
    I would like to know why node-1 never gets evicted even if the network is down or restarted on node-1 itself. Is this normal?
    Regards,
    Raj

    Hi,
    Please find the output below:
    2011-06-03 16:36:02.817: [    CSSD][1216194880]clssnmPollingThread: node prddbs02 (2) at 50% heartbeat fatal, removal in 14.120 seconds
    2011-06-03 16:36:02.817: [    CSSD][1216194880]clssnmPollingThread: node prddbs02 (2) is impending reconfig, flag 132108, misstime 15880
    2011-06-03 16:36:02.817: [    CSSD][1216194880]clssnmPollingThread: local diskTimeout set to 27000 ms, remote disk timeout set to 27000, impending reconfig status(1)
    2011-06-03 16:36:05.994: [    CSSD][1132276032]clssnmvSchedDiskThreads: DiskPingMonitorThread sched delay 760 > margin 750 cur_ms 1480138014 lastalive 1480137254
    2011-06-03 16:36:07.493: [    CSSD][1226684736]clssnmSendingThread: sending status msg to all nodes
    2011-06-03 16:36:07.493: [    CSSD][1226684736]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-06-03 16:36:08.084: [    CSSD][1132276032]clssnmvSchedDiskThreads: DiskPingMonitorThread sched delay 850 > margin 750 cur_ms 1480140104 lastalive 1480139254
    2011-06-03 16:36:09.831: [    CSSD][1216194880]clssnmPollingThread: node prddbs02 (2) at 75% heartbeat fatal, removal in 7.110 seconds
    2011-06-03 16:36:10.122: [    CSSD][1132276032]clssnmvSchedDiskThreads: DiskPingMonitorThread sched delay 880 > margin 750 cur_ms 1480142134 lastalive 1480141254
    2011-06-03 16:36:11.112: [    CSSD][1132276032]clssnmvSchedDiskThreads: DiskPingMonitorThread sched delay 860 > margin 750 cur_ms 1480143124 lastalive 1480142264
    2011-06-03 16:36:12.212: [    CSSD][1132276032]clssnmvSchedDiskThreads: DiskPingMonitorThread sched delay 950 > margin 750 cur_ms 1480144224 lastalive 1480143274
    2011-06-03 16:36:12.487: [    CSSD][1226684736]clssnmSendingThread: sending status msg to all nodes
    2011-06-03 16:36:12.487: [    CSSD][1226684736]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-06-03 16:36:13.840: [    CSSD][1216194880]clssnmPollingThread: local diskTimeout set to 200000 ms, remote disk timeout set to 200000, impending reconfig status(0)
    2011-06-03 16:36:14.881: [    CSSD][1205705024]clssgmTagize: version(1), type(13), tagizer(0x494dfe)
    2011-06-03 16:36:14.881: [    CSSD][1205705024]clssgmHandleDataInvalid: grock HB+ASM, member 2 node 2, birth 21
    2011-06-03 16:36:17.487: [    CSSD][1226684736]clssnmSendingThread: sending status msg to all nodes
    2011-06-03 16:36:17.487: [    CSSD][1226684736]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-06-03 16:36:22.486: [    CSSD][1226684736]clssnmSendingThread: sending status msg to all nodes
    2011-06-03 16:36:22.486: [    CSSD][1226684736]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-06-03 16:36:23.162: [ GIPCNET][1205705024]gipcmodNetworkProcessRecv: [network] failed recv attempt endp 0x2eb80c0 [0000000001fed69c] { gipcEndpoint : localAddr 'gipc://prddbs01:80b3-6853-187b-4d2e#192.168.7.1#33842', remoteAddr 'gipc://prddbs02:gm_prddbs-cluster#192.168.7.2#60074', numPend 4, numReady 1, numDone 0, numDead 0, numTransfer 0, objFlags 0x1e10, pidPeer 0, flags 0x2616, usrFlags 0x0 }, req 0x2aaaac308bb0 [0000000001ff4b7d] { gipcReceiveRequest : peerName '', data 0x2aaaac2e3cd8, len 10240, olen 0, off 0, parentEndp 0x2eb80c0, ret gipc
    2011-06-03 16:36:23.162: [ GIPCNET][1205705024]gipcmodNetworkProcessRecv: slos op : sgipcnTcpRecv
    2011-06-03 16:36:23.162: [ GIPCNET][1205705024]gipcmodNetworkProcessRecv: slos dep : Connection reset by peer (104)
    2011-06-03 16:36:23.162: [ GIPCNET][1205705024]gipcmodNetworkProcessRecv: slos loc : recv
    2011-06-03 16:36:23.162: [ GIPCNET][1205705024]gipcmodNetworkProcessRecv: slos info: dwRet 4294967295, cookie 0x2aaaac308bb0
    2011-06-03 16:36:23.162: [    CSSD][1205705024]clssgmeventhndlr: Disconnecting endp 0x1fed69c ninf 0x2aaab0000f90
    2011-06-03 16:36:23.162: [    CSSD][1205705024]clssgmPeerDeactivate: node 2 (prddbs02), death 0, state 0x80000001 connstate 0x1e
    2011-06-03 16:36:23.162: [GIPCXCPT][1205705024]gipcInternalDissociate: obj 0x2eb80c0 [0000000001fed69c] { gipcEndpoint : localAddr 'gipc://prddbs01:80b3-6853-187b-4d2e#192.168.7.1#33842', remoteAddr 'gipc://prddbs02:gm_prddbs-cluster#192.168.7.2#60074', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x1e10, pidPeer 0, flags 0x261e, usrFlags 0x0 } not associated with any container, ret gipcretFail (1)
    2011-06-03 16:36:32.494: [    CSSD][1226684736]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-06-03 16:36:37.493: [    CSSD][1226684736]clssnmSendingThread: sending status msg to all nodes
    2011-06-03 16:36:37.494: [    CSSD][1226684736]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-06-03 16:36:40.598: [    CSSD][1216194880]clssnmPollingThread: node prddbs02 (2) at 90% heartbeat fatal, removal in 2.870 seconds, seedhbimpd 1
    2011-06-03 16:36:42.497: [    CSSD][1226684736]clssnmSendingThread: sending status msg to all nodes
    2011-06-03 16:36:42.497: [    CSSD][1226684736]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-06-03 16:36:43.476: [    CSSD][1216194880]clssnmPollingThread: Removal started for node prddbs02 (2), flags 0x20000, state 3, wt4c 0
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmDoSyncUpdate: Initiating sync 178830908
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssscUpdateEventValue: NMReconfigInProgress val 1, changes 57
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmDoSyncUpdate: local disk timeout set to 27000 ms, remote disk timeout set to 27000
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmDoSyncUpdate: new values for local disk timeout and remote disk timeout will take effect when the sync is completed.
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmDoSyncUpdate: Starting cluster reconfig with incarnation 178830908
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmSetupAckWait: Ack message type (11)
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmSetupAckWait: node(1) is ALIVE
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmSendSync: syncSeqNo(178830908), indicating EXADATA fence initialization complete
    2011-06-03 16:36:43.476: [    CSSD][1237174592]List of nodes that have ACKed my sync: NULL
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmSendSync: syncSeqNo(178830908)
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmWaitForAcks: Ack message type(11), ackCount(1)
    2011-06-03 16:36:43.476: [    CSSD][1247664448]clssnmHandleSync: Node prddbs01, number 1, is EXADATA fence capable
    2011-06-03 16:36:43.476: [    CSSD][1247664448]clssscUpdateEventValue: NMReconfigInProgress val 1, changes 58
    2011-06-03 16:36:43.476: [    CSSD][1247664448]clssnmHandleSync: local disk timeout set to 27000 ms, remote disk timeout set t:
    2011-06-03 16:36:43.476: [    CSSD][1247664448]clssnmQueueClientEvent: Sending Event(2), type 2, incarn 178830907
    2011-06-03 16:36:43.476: [    CSSD][1247664448]clssnmQueueClientEvent: Node[1] state = 3, birth = 178830889, unique = 1305623432
    2011-06-03 16:36:43.476: [    CSSD][1247664448]clssnmQueueClientEvent: Node[2] state = 5, birth = 178830907, unique = 1307103307
    2011-06-03 16:36:43.476: [    CSSD][1247664448]clssnmHandleSync: Acknowledging sync: src[1] srcName[prddbs01] seq[73] sync[178830908]
    2011-06-03 16:36:43.476: [    CSSD][1247664448]clssnmSendAck: node 1, prddbs01, syncSeqNo(178830908) type(11)
    2011-06-03 16:36:43.476: [    CSSD][1240850064]clssgmStartNMMon: node 1 active, birth 178830889
    2011-06-03 16:36:43.476: [    CSSD][1247664448]clssnmHandleAck: src[1] dest[1] dom[0] seq[0] sync[178830908] type[11] ackCount(0)
    2011-06-03 16:36:43.476: [    CSSD][1240850064]clssgmStartNMMon: node 2 active, birth 178830907
    2011-06-03 16:36:43.476: [    CSSD][1240850064]NMEVENT_SUSPEND [00][00][00][06]
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmSendSync: syncSeqNo(178830908), indicating EXADATA fence initialization complete
    2011-06-03 16:36:43.476: [    CSSD][1240850064]clssgmUpdateEventValue: CmInfo State val 5, changes 190
    2011-06-03 16:36:43.476: [    CSSD][1237174592]List of nodes that have ACKed my sync: 1
    2011-06-03 16:36:43.476: [    CSSD][1240850064]clssgmSuspendAllGrocks: Issue SUSPEND
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmWaitForAcks: done, msg type(11)
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmSetMinMaxVersion:node1 product/protocol (11.2/1.4)
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmSetMinMaxVersion: properties common to all nodes: 1,2,3,4,5,6,7,8,9,10,11,12,13,14
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmSetMinMaxVersion: min product/protocol (11.2/1.4)
    2011-06-03 16:36:43.476: [    CSSD][1240850064]clssgmQueueGrockEvent: groupName(IG+ASMSYS$USERS) count(2) master(1) event(2), incarn 22, mbrc 2, to member 1, events 0x0, state 0x0
    2011-06-03 16:36:43.477: [    CSSD][1237174592]clssnmSetMinMaxVersion: max product/protocol (11.2/1.4)
    2011-06-03 16:36:43.477: [    CSSD][1237174592]clssnmNeedConfReq: No configuration to change
    etc.etc....
    Let me know if any other logfile is required. No unusual messages in /var/log/messages.
    Regards,
    Raj
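One thing that stands out in the excerpt above is the repeated `clssnmvSchedDiskThreads: ... sched delay ... > margin 750` warnings: CSSD threads not being scheduled within the margin is commonly cited as a sign of CPU starvation on the node rather than a genuine network fault. A quick way to count such warnings (a sketch; the two sample lines are copied from the ocssd.log excerpt above, and on a real node you would point `grep` at the actual ocssd.log instead of this temp file):

```shell
# Two sample lines copied from the ocssd.log excerpt above
cat > /tmp/ocssd_sample.log <<'EOF'
2011-06-03 16:36:05.994: [    CSSD][1132276032]clssnmvSchedDiskThreads: DiskPingMonitorThread sched delay 760 > margin 750 cur_ms 1480138014 lastalive 1480137254
2011-06-03 16:36:07.493: [    CSSD][1226684736]clssnmSendingThread: sending status msg to all nodes
2011-06-03 16:36:08.084: [    CSSD][1132276032]clssnmvSchedDiskThreads: DiskPingMonitorThread sched delay 850 > margin 750 cur_ms 1480140104 lastalive 1480139254
EOF

# Count scheduling-delay warnings in the sample
delays=$(grep -c 'sched delay' /tmp/ocssd_sample.log)
echo "$delays scheduling-delay warnings found"
```

A steadily climbing count of these around eviction time is worth correlating with host CPU load before blaming the interconnect.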

  • In Oracle RAC, if a node is evicted while a user's SELECT query is still fetching data, how does failover to another node happen internally?


    The query is re-issued as a flashback query and the client process can continue to fetch from the cursor. This is described in the Net Services Administrator's Guide, in the section on Transparent Application Failover.
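For reference, that SELECT-failover behaviour is enabled through the FAILOVER_MODE clause of the TNS connect descriptor. A sketch (the alias, host, and service names are placeholders, not taken from this thread; RETRIES/DELAY values are illustrative):

```
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 20)
        (DELAY = 5))))
```

TYPE = SELECT is what lets an open cursor resume fetching after the session fails over; TYPE = SESSION would reconnect but abandon in-flight fetches.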

  • Where does the data of the health service watcher node get stored in OperationsManagerDB/DW?

    Hi,
    We have integrated SCOM 2007 R2 with a third-party tool that takes health service data from the OperationsManagerDB/DW and shows the availability of each server in graphical format. But we have noticed that the availability gets affected due to issues with the health service. Hence, we have thought to add the health service watcher node data as a component in that tool rather than the health service only. But we are unable to find the exact location of the health service watcher data in the OperationsManagerDB/DW.
    Could anyone please help me with the location of the health service watcher node data in the OperationsManagerDB/DW?
    What is the table name where the health service watcher node data gets collected in the OperationsManagerDB/DW?
    Thanks in anticipation.

    Hi,
    A watcher node is an agent that runs monitors and rules that test an application or feature on another computer.
    A watcher node can either be the agent with the application or feature installed, or it can be a separate agent. If the watcher node is a separate computer, then in addition to ensuring that the application or feature is healthy, the watcher node can validate
    that clients can connect to it and test such additional features as security, network availability, and firewalls.
    I think the watcher node is configured differently according to the applications monitored; the article below talks about the Lync watcher node, you may want to check it:
    http://technet.microsoft.com/en-us/library/jj204943.aspx
    Regards,
    Yan Li

  • Is there any possibility to write and execute code before nodes get created in the content?

    Hi,
         I have created a dialog, and after clicking OK the data is stored in the content. But I have the following requirement: after clicking the OK button on the dialog, and before the data is stored into the content, I have to perform some action (I want to write some code). Is it possible? Where can I write the code to perform the action before the nodes get created? Let me know the solution. Your comments are welcome.
    Thanks & Regards,
    Arya

    This forum is only for discussions on the forums themselves. You should look in here for the forum corresponding to the Adobe product you are using and post your question there:
    http://forums.adobe.com/index.jspa?view=discussions
    When you do, please don't forget to provide enough information. We not only don't know what program you are talking about, we don't even know whether you are on Mac or Windows.

  • Node 1 evicted due to ORA- 29740

    Hi Guys,
    We have a two-node cluster with 10.2.0.2 on HP-UX 11.31.
    Yesterday node 1 was evicted by the other node due to an ORA-29740 error.
    When I checked the alert log file I saw some IPC errors; below are some excerpts from the alert log files of both nodes.
    Node 1 Alert log file
    Mon Aug 24 22:03:00 2009
    Thread 1 advanced to log sequence 10484
    Current log# 7 seq# 10484 mem# 0: +DATADG/orcl/onlinelog/group_7.298.670427121
    Mon Aug 24 22:03:00 2009
    SUCCESS: diskgroup FLASHDG was mounted
    SUCCESS: diskgroup FLASHDG was dismounted
    Mon Aug 24 22:50:04 2009
    IPC Send timeout detected. Receiver ospid 15041
    Mon Aug 24 22:51:08 2009
    *Trace dumping is performing id=[cdmp_20090824225031]*
    Mon Aug 24 22:52:27 2009
    Errors in file /u01/app/oracle/db/admin/orcl/bdump/orcl1_lmon_15039.trc:
    ORA-29740: evicted by member 1, group incarnation 10
    Mon Aug 24 22:52:27 2009
    LMON: terminating instance due to error 29740
    Mon Aug 24 22:52:27 2009
    Errors in file /u01/app/oracle/db/admin/orcl/bdump/orcl1_lms1_15045.trc:
    ORA-29740: evicted by member , group incarnation
    Mon Aug 24 22:52:27 2009
    Errors in file /u01/app/oracle/db/admin/orcl/bdump/orcl1_lms0_15043.trc:
    ORA-29740: evicted by member , group incarnation
    Mon Aug 24 22:52:30 2009
    Errors in file /u01/app/oracle/db/admin/orcl/bdump/orcl1_rbal_15336.trc:
    ORA-29740: evicted by member , group incarnation
    Mon Aug 24 22:52:59 2009
    Shutting down instance (abort)
    License high water mark = 254
    Mon Aug 24 22:53:02 2009
    Instance terminated by LMON, pid = 15039
    Mon Aug 24 22:53:04 2009
    Instance terminated by USER, pid = 8745
    Mon Aug 24 22:53:13 2009
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Node 2 Alert log file
    Mon Aug 24 19:55:31 2009
    Thread 2 advanced to log sequence 6803
    Current log# 10 seq# 6803 mem# 0: +DATADG/orcl/onlinelog/group_10.301.670427207
    Mon Aug 24 19:55:31 2009
    SUCCESS: diskgroup FLASHDG was mounted
    SUCCESS: diskgroup FLASHDG was dismounted
    Mon Aug 24 22:50:03 2009
    IPC Send timeout detected.Sender: ospid 6382
    Receiver: inst 1 binc 275179919 ospid 15041
    Mon Aug 24 22:50:04 2009
    IPC Send timeout detected.Sender: ospid 25897
    Receiver: inst 1 binc 275179919 ospid 15041
    Mon Aug 24 22:50:05 2009
    IPC Send timeout detected.Sender: ospid 26617
    Receiver: inst 1 binc 275179919 ospid 15041
    Mon Aug 24 22:50:06 2009
    IPC Send timeout detected.Sender: ospid 25678
    Receiver: inst 1 binc 275179919 ospid 15041
    Mon Aug 24 22:50:07 2009
    IPC Send timeout detected.Sender: ospid 21344
    Receiver: inst 1 binc 275179919 ospid 15041
    Mon Aug 24 22:50:31 2009
    IPC Send timeout to 0.0 inc 8 for msg type 12 from opid 198
    Mon Aug 24 22:50:31 2009
    Communications reconfiguration: instance_number 1+
    Mon Aug 24 22:50:33 2009
    IPC Send timeout to 0.0 inc 8 for msg type 12 from opid 112
    Mon Aug 24 22:50:35 2009
    Trace dumping is performing id=[cdmp_20090824225031]
    Mon Aug 24 22:50:35 2009
    IPC Send timeout detected.Sender: ospid 984
    Receiver: inst 1 binc 275179919 ospid 15041
    Mon Aug 24 22:50:35 2009
    IPC Send timeout to 0.0 inc 8 for msg type 12 from opid 15
    Mon Aug 24 22:50:49 2009
    IPC Send timeout to 0.0 inc 8 for msg type 12 from opid 16
    Mon Aug 24 22:50:52 2009
    IPC Send timeout detected.Sender: ospid 12489
    Receiver: inst 1 binc 275179919 ospid 15041
    Mon Aug 24 22:50:57 2009
    IPC Send timeout to 0.0 inc 8 for msg type 12 from opid 84
    Mon Aug 24 22:51:00 2009
    IPC Send timeout to 0.0 inc 8 for msg type 12 from opid 97
    Mon Aug 24 22:51:07 2009
    IPC Send timeout to 0.0 inc 8 for msg type 12 from opid 75
    Mon Aug 24 22:51:08 2009
    IPC Send timeout detected.Sender: ospid 8900
    Receiver: inst 1 binc 275179919 ospid 15041
    Mon Aug 24 22:51:25 2009
    Receiver: inst 1 binc 275179919 ospid 15041
    Mon Aug 24 22:52:09 2009
    Mon Aug 24 22:52:42 2009
    Waiting for instances to leave:
    *1*
    Mon Aug 24 22:52:57 2009
    IPC Send timeout detected.Sender: ospid 6378
    Receiver: inst 1 binc 275179919 ospid 15041
    Mon Aug 24 22:53:02 2009
    Reconfiguration started (old inc 8, new inc 12)
    List of nodes:
    1
    Global Resource Directory frozen
    * dead instance detected - domain 0 invalid = TRUE
    Communication channels reestablished
    Master broadcasted resource hash value bitmaps
    Non-local Process blocks cleaned out
    Mon Aug 24 22:53:02 2009
    LMS 0: 10 GCS shadows cancelled, 2 closed
    Mon Aug 24 22:53:02 2009
    LMS 1: 1 GCS shadows cancelled, 0 closed
    Set master node info
    Submitted all remote-enqueue requests
    Dwn-cvts replayed, VALBLKs dubious
    All grantable enqueues granted
    Post SMON to start 1st pass IR
    Mon Aug 24 22:53:04 2009
    LMS 0: 317502 GCS shadows traversed, 0 replayed
    Mon Aug 24 22:53:04 2009
    LMS 1: 302589 GCS shadows traversed, 0 replayed
    Mon Aug 24 22:53:04 2009
    Submitted all GCS remote-cache requests
    Post SMON to start 1st pass IR
    Fix write in gcs resources
    Mon Aug 24 22:53:04 2009
    Instance recovery: looking for dead threads
    Mon Aug 24 22:53:04 2009
    Beginning instance recovery of 1 threads
    Reconfiguration complete
    Mon Aug 24 22:53:06 2009
    parallel recovery started with 3 processes
    Mon Aug 24 22:53:07 2009
    Started redo scan
    Mon Aug 24 22:53:07 2009
    Completed redo scan
    53 redo blocks read, 30 data blocks need recovery
    Mon Aug 24 22:53:07 2009
    Started redo application at
    Thread 1: logseq 10484, block 40586
    Mon Aug 24 22:53:07 2009
    Recovery of Online Redo Log: Thread 1 Group 7 Seq 10484 Reading mem 0
    Mem# 0 errs 0: +DATADG/orcl/onlinelog/group_7.298.670427121
    Mon Aug 24 22:53:08 2009
    Completed redo application
    Mon Aug 24 22:53:08 2009
    Completed instance recovery at
    Thread 1: logseq 10484, block 40639, scn 1479311755
    30 data blocks read, 32 data blocks written, 53 redo blocks read
    Switch log for thread 1 to sequence 10485
    Mon Aug 24 22:53:27 2009
    Reconfiguration started (old inc 12, new inc 14)
    List of nodes:
    0 1
    Global Resource Directory frozen
    Communication channels reestablished
    * domain 0 valid = 1 according to instance 0
    Mon Aug 24 22:53:27 2009
    Master broadcasted resource hash value bitmaps
    Non-local Process blocks cleaned out
    Mon Aug 24 22:53:27 2009
    LMS 0: 0 GCS shadows cancelled, 0 closed
    Mon Aug 24 22:53:27 2009
    LMS 1: 0 GCS shadows cancelled, 0 closed
    Set master node info
    Submitted all remote-enqueue requests
    Dwn-cvts replayed, VALBLKs dubious
    All grantable enqueues granted
    Mon Aug 24 22:53:28 2009
    LMS 1: 11913 GCS shadows traversed, 4001 replayed
    Mon Aug 24 22:53:28 2009
    LMS 0: 11725 GCS shadows traversed, 4001 replayed
    Mon Aug 24 22:53:28 2009
    LMS 0: 11680 GCS shadows traversed, 4001 replayed
    Mon Aug 24 22:53:28 2009
    LMS 1: 11945 GCS shadows traversed, 4001 replayed
    Mon Aug 24 22:53:28 2009
    LMS 1: 11808 GCS shadows traversed, 4001 replayed
    LMS 1: 239 GCS shadows traversed, 80 replayed
    Mon Aug 24 22:53:28 2009
    LMS 0: 8065 GCS shadows traversed, 2737 replayed
    Mon Aug 24 22:53:28 2009
    Submitted all GCS remote-cache requests
    Fix write in gcs resources
    Reconfiguration complete
    Tue Aug 25 02:11:36 2009
    Thread 2 advanced to log sequence 6804
    Current log# 12 seq# 6804 mem# 0: +DATADG/orcl/onlinelog/group_12.303.670427257
    I checked the CPU performance and saw one Oracle process, SMON, utilising 86% CPU:
    CPU TTY PID USERNAME PRI NI SIZE RES STATE TIME %WCPU %CPU COMMAND
    *1 ? 6378 oracle 241 20 17060M 18200K run 1951:13 86.48 86.33 ora_smon_orcl*
    Please help me investigate this issue.

    Check this link --> [this link|http://forums.oracle.com/forums/thread.jspa?messageID=3631288]

  • File system getting full and server node going down

    Hi Team,
    Currently we are using IBM Power 6 AIX operating system.
    And in our environment, the development system's file system is getting full, and the development system is getting slow to access.
    Can you please let me know what exactly the problem is, which command is used to see the file system size, and how to resolve the issue by deleting the core files or something similar. Please help me.
    Thanks
    Manoj K
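For the two concrete questions here (checking file system usage and cleaning up core files), a sketch of the usual commands; on AIX `df -g` reports sizes in GB (the `df -h` fallback is for other systems), and core dumps from crashed processes are plain files named `core`. The demo below creates a stand-in core file under /tmp so the `find` has something to match; on the real system you would search e.g. /usr/sap instead:

```shell
# Report usage of a file system (AIX: df -g; fall back to df -h elsewhere)
df -g /tmp 2>/dev/null || df -h /tmp

# Demo: create a stand-in 'core' file, then locate core files under a tree
mkdir -p /tmp/core_demo
: > /tmp/core_demo/core
find /tmp/core_demo -name core -type f
```

Once located, core files can be reviewed and deleted to reclaim space; the JVM GC lines quoted below are a separate matter and look like normal minor collections, not the cause of a full file system.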

    Hi Orkun Gedik,
    When I executed the commands df -lg and find . -name core, nothing was displayed, but if I execute the command df it shows me the information below. This is the original output, in which I have masked the SID.
    Filesystem    512-blocks      Free %Used    Iused %Iused Mounted on
    /dev/fslv10     52428800  16279744   69%   389631    15% /usr/sap/SID
    The server0 node is giving the problem; it is going down all the time.
    And if I check the std_server0.out file in /usr/sap/SID/<Instance>/work for that server node, the below information is written in the file:
    framework started for 73278 ms.
    SAP J2EE Engine Version 7.00   PatchLevel 81863.450 is running! PatchLevel 81863.450 March 10, 2010 11:48 GMT
    94.539: [GC 94.539: [ParNew: 239760K->74856K(261888K), 0.2705150 secs] 239760K->74856K(2009856K), 0.2708720 secs] [Times: user=0.00 sys=0.36, real=0.27 secs]
    105.163: [GC 105.164: [ParNew: 249448K->80797K(261888K), 0.2317650 secs] 249448K->80797K(2009856K), 0.2320960 secs] [Times: user=0.00 sys=0.44, real=0.23 secs]
    113.248: [GC 113.248: [ParNew: 255389K->87296K(261888K), 0.3284190 secs] 255389K->91531K(2009856K), 0.3287400 secs] [Times: user=0.00 sys=0.58, real=0.33 secs]
    Please advise.
    thanks in advance
    Manoj K

  • org.w3c.dom.Node: get number of items

    hi,
    how can I get the number of items of one node?
    thanks

    Have you tried something like - getChildNodes().getLength()?

  • CCMS job monitoring -- RZ20 job history node get duplicated record

    Hi,
    I configured CCMS/Solution Manager auto-monitoring of job failures. When a job fails on my satellite system, it creates an alert and sends a notification to my CEN, which is my Solution Manager. Then Solution Manager sends an e-mail to a distribution group.
    This was working fine until last week. Now when a job fails, in the job history node of RZ20 on my satellite system, the alerts are created multiple times even though the job actually only failed once. For some reason, the history node alerts are updated every 5 minutes with the same job failure. More interestingly, after one hour it does not create alerts any more. However, as a result I get many e-mails about the same job failure in this one hour, where I expect only one e-mail.
    Do you guys encounter the same problem? I am wondering whether something has been unintentionally turned on so that it updates every 5 minutes; I want to turn it off. I looked in SM37 to find out whether some batch jobs are doing this but did not find anything running every 5 minutes.
    Any helpful answers will definitely be appreciated and rewarded with points.
    Yujun Ran

    Hi Suhas,
    Once we had the same issue, but once we updated the current ccmsagent on both the Solution Manager and the satellite system, the issue got resolved.
    Regards,
    Rafikul Hussain

  • How can I control which iView gets updated when a nav node gets clicked?

    Hello,
    Following setting: the content area of the desktop inner page inside the default framework page usually consists of just one iView.
    But what happens when there are 3 iViews inside? How do I tell the page loader which of the 3 iViews should get updated when a nav node of the detailed navigation is clicked?
    Thanx

    Hi
    Since you will be using the Merger Id property, they will be linked to a single link in the navigation area.
    Refreshing the page / clicking the link in the navigation pane will result in refreshing all of the iViews.
    Regards
    Chander Kararia
    # Please close the thread once you get the correct answer. Give reward points for answers.

  • Calling Tcode Solar02 in ITS - Nodes getting Locked

    Dear all,
    To call tcode SOLAR02 from the web I am using the ITS URL
    http://host:port/sap/bc/gui/sap/its/webgui?~transaction=solar02
    But when I access the nodes through this link, the nodes are getting locked automatically.
    Can I use the ITS URL for the SOLAR02 tcode?
    Regards,
    Shamila

    You can check via SE93 whether a particular transaction is meant to be worked with in ITS/SapGui.

  • Oracle RAC nodes getting rebooted when the preferred controller fails

    When we disconnect both fiber cables from preferred Controller A, or pull the Controller A card out of the disk array (IBM DS 4300), after 90 seconds both servers reboot.
    During this time the complete RAC network goes out of service for approx 5 minutes. After the reboot, both servers come up with both instances without any manual intervention.
    It's a critical issue for us because we are losing high availability. Let us know how we can resolve this critical issue.
    Detail of Network:
    1. Software: Oracle 10g Release 2
    2. OS: Red Hat Linux 3 (kernel version 2.4.21-27.ELsmp)
    3. Shared storage: IBM DS 4300
    4. Multipathing driver: RDAC (rdac-LINUX-09.00 A5.13)
    5. Nodes: IBM 346
    6. Database on ASM
    7. ASM, OCR & voting disk preferred controller is A
    8. Hangcheck timer value is 210 seconds
    9. Both servers have 2 HBA ports: one HBA port is connected to Controller A and the second HBA port is connected to Controller B of the SAN disk array.
    As per my understanding, the voting disk resides in the disk array and Controller A is the preferred owner of the voting disk LUN. When I disconnect both fiber cables from preferred Controller A, the Clusterware on both nodes tries to contact the voting disk; when it cannot reach the voting disk within the specified time period, both nodes reboot.
    I tested controller failure both with the Oracle RAC software and without it. Without Oracle it works fine, because the disk array waits approximately 300 seconds before changing the preferred controller from A to B.
    But with Oracle, the Clusterware reboots both nodes before the controller can shift from A to B.
    So, to conclude, someone with a good understanding of Oracle Clusterware on Linux and the IBM RDAC multipath driver could help me here.
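    The timing conflict described above can be sketched with the numbers from this post (90 seconds until the nodes reboot, roughly 300 seconds for the DS 4300 to move the LUNs to Controller B); this is an illustration only, not an Oracle tool:

    ```shell
    # Approximate values taken from the description above
    clusterware_reboot=90   # seconds after the cable pull until both nodes reboot
    array_failover=300      # seconds the DS 4300 waits before shifting the LUN to controller B

    if [ "$clusterware_reboot" -lt "$array_failover" ]; then
      gap=$((array_failover - clusterware_reboot))
      echo "Clusterware gives up ${gap}s before the controller failover completes"
    fi
    ```

    So the fix has to come from either speeding up the RDAC controller failover or raising the Clusterware disk timeouts (e.g. the CSS misscount-related settings in 10g); the exact parameters to change should be confirmed against Oracle support notes for your versions.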
    When we install Oracle RAC on Linux, it is required to configure the hangcheck timer.
    Oracle recommends 180 seconds.
    This means that if one node is hanging, the second node will wait 180 seconds; if the situation is not resolved within 180 seconds, the hung node will be rebooted.
    I think the hangcheck timer configuration is required only on Linux.
    Configuration file:
    cat >> /etc/rc.d/rc.local << EOF
    modprobe hangcheck-timer hangcheck_tick=15 hangcheck_margin=60
    EOF

    Sorry, the correct hangcheck timer configuration file is:
    cat >> /etc/rc.d/rc.local << EOF
    modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
    EOF
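    As a quick sanity check on those values (a sketch, not an official formula): the module resets a node once a hang exceeds hangcheck_tick + hangcheck_margin seconds, which with the recommended values matches the 210 seconds listed in the setup details above:

    ```shell
    # Recommended 10g hangcheck-timer values from the post above
    hangcheck_tick=30     # seconds between hang checks
    hangcheck_margin=180  # extra seconds of hang tolerated before a reset

    reset_threshold=$((hangcheck_tick + hangcheck_margin))
    echo "node is reset after ${reset_threshold}s of hang"
    ```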

  • GSD and ONS shutting down automatically when listener is started.

    Issue with RAC database Node 2
    OS: Windows 2003 Server (64-bit)
    Problem 1:
    After patching Oracle to a higher version, i.e. from 10.2.0.3.0 P31 to 10.2.0.4.0 P35, the second database node was not starting up properly. On startup, the node hangs with a blue screen.
    Cause:
    The above problem occurred because the second node, when starting up the cluster-related services,
    was not able to communicate with the first node through the cluster interconnect network (heartbeat).
    The node tries to ping the heartbeat several times, then gets evicted from the cluster, resulting in a BSOD.
    We found that this type of node eviction had occurred several times.
    Due to the repeated evictions of node 2, node 1 locks the voting disk to prevent corruption.
    Node 2, while starting, again tries to communicate with the voting disk to join the cluster.
    But since those files are locked by node 1, node 2 is not able to access them, and this also results in a BSOD.
    The above information was found in the ocssd logs.
    ocssd.log
    =========
    WARNING: clssnmLocalJoinEvent: takeover aborted due to ALIVE node on Disk
    WARNING: clssnmRcfgMgrThread: not possible to join the cluster. Please reboot the node.
    Solution:
    The only way to release the lock was to reboot node 1. We rebooted node 1 and the lock was released.
    Now both nodes were able to communicate with the configuration files, and there were no more BSODs on node 2.
    All the cluster-related services started without any issues and the node joined the cluster.
    Problem 2:
    After problem 1 got solved, we noticed one more issue with node 2.
    We noticed that on node 2, when the listener is started, the GSD and ONS node applications die.
    When we stop the listener, GSD and ONS start.
    Also, when I try to start the instance on node 2, the startup command hangs;
    Oracle is not able to start the instance, and the configuration assistants, srvctl, etc. were not working.
    I understand that if GSD is not started, then DBCA, srvctl and other commands do not work.
    But how do we resolve the issue described (highlighted) in problem 2?
    Please help me.
    Thanks & Regards,
    Mahesh Menon,
    Oracle DBA,
    Key Information Technology LLC.

    Hi Bibii, and welcome to Discussions.
    First off, you have posted in the Mac Pro section, not the MacBook Pro section: http://discussions.apple.com/category.jspa?categoryID=190
    Anyway, is this really a shutdown, or is it just sleeping?
    If it is sleeping, check the Energy Saver settings in System Preferences for when your MBP should go to sleep when inactive.
    Regards
    Stefan

  • How do I define 2 disk groups for ocr and voting disks at the oracle grid infrastructure installation window

    Hello,
    It may sound too easy to someone, but I need to ask it anyway.
    I am in the middle of building Oracle RAC 11.2.0.3 on Linux. I have 2 storage arrays, and I created three LUNs on each, so 6 LUNs in total are visible on both servers. If I choose NORMAL as the redundancy level, is it right to choose all 6 disks created for OCR_VOTE during the grid installation? Or should I choose only 3 of them and configure mirroring at a later stage?
    The reason I am asking is that I did not find any place to create an ASM disk group for the OCR and voting disks in the Oracle Grid Infrastructure installation window. In fact, I would like to create two disk groups, one containing the three disks brought from storage 1 and the other containing the remaining three disks that come from the second storage.
    I believe that you will understand the manner and help me on choosing proper way.
    Thank you.

    Hi,
    You have 2 storage arrays.
    You will need to configure a quorum ASM disk to store a voting disk,
    because if you lose half or more of all of your voting disks, nodes get evicted from the cluster, or nodes kick themselves out of the cluster.
    You must have an odd number of voting disks (i.e. 1, 3 or 5), one voting disk per ASM disk, so one storage array will hold more voting disks than the other
    (e.g. with 3 voting disks, 2 are stored on stg-A and 1 on stg-B).
    If the storage holding the majority of the voting disks fails, the whole cluster (all nodes) goes down. It does not matter whether the other storage is still online.
    What happens then is described here:
    https://forums.oracle.com/message/9681811#9681811
    You must configure your Clusterware the same way as extended RAC.
    Check this link:
    http://www.oracle.com/technetwork/database/clusterware/overview/grid-infra-thirdvoteonnfs-131158.pdf
    Explaining: How to store OCR, Voting disks and ASM SPFILE on ASM Diskgroup (RAC or RAC Extended) | Levi Pereira | Oracl…
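    The majority rule behind this advice can be sketched as plain arithmetic (an illustration, not an Oracle tool):

    ```shell
    # CSS keeps a node in the cluster only while it can see a strict
    # majority of the configured voting disks.
    voting_disks=3
    majority=$((voting_disks / 2 + 1))    # disks that must stay accessible
    max_lost=$((voting_disks - majority)) # disks the cluster can afford to lose
    echo "with ${voting_disks} voting disks: ${majority} must be online, at most ${max_lost} may fail"
    ```

    With 3 voting disks split 2/1 across two arrays, losing the array that holds 2 of them drops you below the majority, which is exactly why the third (quorum) voting disk should sit on independent storage such as an NFS share, as the linked paper describes.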

  • Split brain syndrome in RAC

    As per split brain syndrome in Oracle RAC, in case of interconnect failures the master node will evict the other/dead nodes.
    Say in a 2-node RAC configuration node 1 is defined as the master node (by some parameter such as load); in case of a network failure, node 1 will terminate node 2 from the cluster.
    What happens if the master node, in this case node 1, fails? What will terminate node 1, and will node 2 become the master node?

    Hi,
    It occurs when the instance members in a RAC fail to ping/connect to each other via the private interconnect, but the servers are all physically up and running and the database instance on each of these servers is also running. These individual nodes are running fine and can conceptually accept user connections and work independently. So basically, due to the lack of communication, each instance thinks that the other instance it cannot connect to is down, and it needs to do something about the situation. The problem is that if we leave these instances running, the same block might get read and updated in these individual instances, and there would be a data integrity issue, as blocks changed in one instance would not be locked and could be overwritten by another instance. Oracle has implemented an efficient check for the split brain syndrome.
    In RAC, if any node becomes inactive, or if other nodes are unable to ping/connect to a node in the RAC, then the node that first detects that one of the nodes is not accessible will evict that node from the RAC group. E.g. if there are 4 nodes in a RAC cluster and node 3 becomes unavailable, and node 1 tries to connect to node 3 and finds it not responding, then node 1 will evict node 3 from the RAC group, leaving only node 1, node 2 and node 4 in the group to continue functioning.
    The split brain concept can become more complicated in large RAC setups. For example, say there are 10 RAC nodes in a cluster, and 4 nodes are not able to communicate with the other 6, so 2 groups are formed in this 10-node cluster (one group of 4 nodes and one of 6). Now the nodes quickly try to affirm their membership by locking the control file; the node that locks the control file then checks the votes of the other nodes. The group with the largest number of active nodes gets preference and the others are evicted. That said, I have only ever seen this eviction issue with a single node getting evicted and the rest functioning fine, so I cannot testify from experience that this is exactly how it works, but this is the theory behind it.
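    The sub-cluster arithmetic described above can be sketched like this (illustration only; the real membership protocol runs inside CSS):

    ```shell
    # Hypothetical 10-node cluster partitioned by an interconnect failure
    cohort_a=6
    cohort_b=4
    # The larger cohort wins the reconfiguration; the smaller one is evicted
    if [ "$cohort_a" -ge "$cohort_b" ]; then
      echo "cohort of ${cohort_a} nodes survives; ${cohort_b} nodes are evicted"
    else
      echo "cohort of ${cohort_b} nodes survives; ${cohort_a} nodes are evicted"
    fi
    ```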
    When a node is evicted, Oracle RAC will usually reboot that node and then do a cluster reconfiguration to include the evicted node back.
    You will see Oracle error ORA-29740 when there is a node eviction in RAC. There are many reasons for a node eviction, such as the heartbeat not being received via the control file, or being unable to communicate with the Clusterware, etc.
    You can also go through Metalink Note ID 219361.1.
