LDAP JNDI bootstrap info at each cluster node
Any suggestions on the best way to distribute LDAP JNDI bootstrap
information (host:port/DN/pwd/searchbase) to each node in the cluster? I've
thought of deploying individual bootstrap files, but then I'd have to
encrypt them, and I don't like the idea of these authentication files lying
around on (potentially) distributed hosts. Clustered EJBs could use
an abstraction layer that hides this stuff from them, but the abstraction
layer needs the bootstrap info before accessing LDAP through JNDI.
I feel like there's some simple, elegant manner of doing this, but I'm
missing it because my head is full of LDAP searchbases and filter syntax....
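For what it's worth, one low-ceremony pattern is to package the bootstrap values once, inside the deployed application archive itself, and have every node assemble its JNDI environment from that single resource, so nothing credential-bearing sits loose on each host's filesystem. A minimal sketch (class and property names here are illustrative, not from the thread):

```java
import java.util.Hashtable;
import java.util.Properties;

import javax.naming.Context;

// Hypothetical helper: the bootstrap values (host:port/DN/pwd) are loaded
// from one centrally deployed properties resource, and the JNDI environment
// is assembled at runtime instead of shipping per-node bootstrap files.
public class LdapBootstrap {

    // Pure assembly step: turn loaded bootstrap properties into a JNDI env.
    public static Hashtable<String, String> buildEnv(Properties p) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, p.getProperty("url"));      // e.g. ldap://host:389
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, p.getProperty("dn")); // bind DN
        env.put(Context.SECURITY_CREDENTIALS, p.getProperty("pwd"));
        return env;
    }
}
```

A DirContext would then be built with new InitialDirContext(buildEnv(props)), where props is loaded once from a resource inside the deployed archive, so the values travel with the deployment rather than living in per-host files.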
Similar Messages
-
Local NFS / LDAP on cluster nodes
Hi,
I have a 2-node cluster (3.2 1/09) on Solaris 10 U8, providing NFS (/home) and LDAP for clients. I would like to configure LDAP and NFS clients on each cluster node, so they share user information with the rest of the machines.
I assume the right way to do this is to configure the cluster nodes the same as the other clients, using the HA logical hostnames for the LDAP and NFS servers; this way, there's always a working LDAP and NFS server for each node. However, what happens if both nodes reboot at once (for example, after a power failure)? As the first node boots, there is no working LDAP or NFS server, because neither has been started yet. Will this cause the boot to fail and require manual intervention, or will the cluster boot without the NFS and LDAP clients enabled, allowing me to fix it later? Thanks.
In that case, is it safe to configure the NFS-exported filesystem as a global mount, and symlink e.g. "/home" -> "/global/home", so home directories are accessible via the normal path on both nodes? (I understand global filesystems have worse performance, but this would just be for administrators logging in with their LDAP accounts.)
For LDAP, my concern is that if svc:/network/ldap/client:default fails during startup (because no LDAP server is running yet), it might prevent the cluster services from starting, even though all names required by the cluster are available from /etc. -
SCVMM losing connection to cluster nodes
Hey guys 'n girls, I hope this is the right forum for this question. I already opened a ticket with MS support as well, because it's indirectly impacting our production environment, but even after a week there's been no contact. Losing faith in MS support there.
The problem we're having is that in SCVMM a host enters the 'needs attention' state, with WinRM error 0x80338126. I guess it has something to do with the network or with Kerberos, and I've found some info on it, but I still haven't been able to solve it. Do you guys have any ideas?
Problem summary:
We are seeing an issue on our new hyper-v platform. The platform should have been in production last week, but this issue is delaying our project as we can't seem to get it stable.
The problem we are experiencing is that SCVMM loses the connection to some of the Hyper-V nodes, not one specific node. Last week it happened to two nodes, and today it happened to another node. I see issues with WinRM, and I suspect something to do with Kerberos. See the bottom of this post for background details and software versions.
The host gets the status 'needs attention', and if you look at the status of the machine, WinRM gives an error:
Error (2916)
VMM is unable to complete the request. The connection to the agent cc1-hyp-10.domaincloud1.local was lost.
WinRM: URL: [http://cc1-hyp-10.domaincloud1.local:5985], Verb: [ENUMERATE], Resource: [http://schemas.microsoft.com/wbem/wsman/1/wmi/root/cimv2/Win32_Service], Filter: [select * from Win32_Service where Name="WinRM"]
Unknown error (0x80338126)
Recommended Action
Ensure that the Windows Remote Management (WinRM) service and the VMM agent are installed and running and that a firewall is not blocking HTTP/HTTPS traffic. Ensure that VMM server is able to communicate with cc1-hyp-10.domaincloud1.local over WinRM by successfully
running the following command:
winrm id -r:cc1-hyp-10.domaincloud1.local
This problem can also be caused by a Windows Management Instrumentation (WMI) service crash. If the server is running Windows Server 2008 R2, ensure that KB 982293 (http://support.microsoft.com/kb/982293) is installed on it.
If the error persists, restart cc1-hyp-10.domaincloud1.local and then try the operation again. Refer to http://support.microsoft.com/kb/2742275 for more details.
Doing a simple test from the VMM server to the problematic cluster node shows this error:
PS C:\> hostname
CC1-VMM-01
PS C:\> winrm id -r:cc1-hyp-10.domaincloud1.local
WSManFault
Message = WinRM cannot complete the operation. Verify that the specified computer name is valid, that the computer is accessible over the network, and that a firewall exception for the WinRM service is enabled and allows access from this
computer. By default, the WinRM firewall exception for public profiles limits access to remote computers within the same local subnet.
Error number: -2144108250 0x80338126
WinRM cannot complete the operation. Verify that the specified computer name is valid, that the computer is accessible over the network, and that a firewall exception for the WinRM service is enabled and allows access from this computer. By default, the WinRM
firewall exception for public profiles limits access to remote computers within the same local subnet.
I CAN connect from other hosts to this problematic cluster node:
PS C:\> hostname
CC1-HYP-16
PS C:\> winrm id -r:cc1-hyp-10.domaincloud1.local
IdentifyResponse
ProtocolVersion =
http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd
ProductVendor = Microsoft Corporation
ProductVersion = OS: 6.3.9600 SP: 0.0 Stack: 3.0
SecurityProfiles
SecurityProfileName =
http://schemas.dmtf.org/wbem/wsman/1/wsman/secprofile/http/spnego-kerberos
And I can connect from the vmm server to all other cluster nodes:
PS C:\> hostname
CC1-VMM-01
PS C:\> winrm id -r:cc1-hyp-11.domaincloud1.local
IdentifyResponse
ProtocolVersion =
http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd
ProductVendor = Microsoft Corporation
ProductVersion = OS: 6.3.9600 SP: 0.0 Stack: 3.0
SecurityProfiles
SecurityProfileName =
http://schemas.dmtf.org/wbem/wsman/1/wsman/secprofile/http/spnego-kerberos
So at this point only the test from the cc1-vmm-01 to cc1-hyp-10 seems to be problematic.
I followed the steps on the page https://support.microsoft.com/kb/2742275 (which is referred to above). I tried the VMMCA, but I can't really get it working the way I want, and it seems to give outdated recommendations.
I tried checking for duplicate SPNs by running setspn -x on the affected machines. No results (although I don't fully understand what an SPN is or how it works). I rebuilt the performance counters.
I tried setting 'sc config winrm type= own' as described in [http://blinditandnetworkadmin.blogspot.nl/2012/08/kb-how-to-troubleshoot-needs-attention.html].
If I reboot the cc1-hyp-10 machine, it will start working perfectly again. However, then I can't troubleshoot the issue, and it will happen again.
I want this problem solved, so VMM never again loses its connection to the hypervisors it's managing!
Background information:
We've set up a platform with Hyper-V to run a VM workload. The platform consists of the following hardware:
2 Dell R620's with 32GB of RAM, running hyper-v to virtualize the cloud management layer (DC's, VMM, SQL). These machines are called cc1-hyp-01 and cc1-hyp-02. They run the management vm's like cc1-dc-01/02, cc1-sql-01, cc1-vmm-01, etc. The names are self-explanatory.
The VMM machine is NOT clustered.
8 Dell M620 blades with 320GB of RAM, running hyper-v to virtualize the customer workload. The machines are called cc1-hyp-10 through cc1-hyp-17. They are in a cluster.
2 Equallogic units form a SAN (premium storage), and we have a Dell R515 running iscsi target (budget storage).
We have Dell Force10 switches and Cisco C3750X switches to connect everything together (mostly 10GB links).
All hosts run Windows Server 2012 R2 Datacenter edition. The VMM server runs System Center Virtual Machine Manager 2012 R2.
All the latest Windows updates are installed on every host. There are no firewalls between any host (vmm and hypervisors) at this level. Windows firewalls are all disabled. No antivirus software is installed, no symantec software is installed.
The only non-standard software that is installed is the Dell Host Integration Tools 4.7.1, Dell Openmanage Server Administrator, and some small stuff like 7-zip, bginfo, net-snap, etc.
The SCVMM service runs under the domain account DOMAINCLOUD1\scvmm. This account is in the local administrators group of each cluster node.
On top of this cloud layer we're running the tenant layer with a lot of VMs for a specific customer (although they are all off now).
I think I found the culprit: after an hour of analyzing wireshark dumps I found the VMM had jumbo frames enabled on the management interface to the hosts (while the underlying infrastructure does not). Now my winrm commands started working again.
-
After reboot, cluster node went into maintenance mode (CONTROL-D)
Hi there!
I have configured a 2-node cluster on 2 x Sun Enterprise 220R and a StorEdge D1000.
Each time I reboot any of the cluster nodes, I get the following error during boot-up:
The / file system (/dev/rdsk/c0t1d0s0) is being checked.
/dev/rdsk/c0t1d0s0: UNREF DIR I=35540 OWNER=root MODE=40755
/dev/rdsk/c0t1d0s0: SIZE=512 MTIME=Jun 5 15:02 2006 (CLEARED)
/dev/rdsk/c0t1d0s0: UNREF FILE I=1192311 OWNER=root MODE=100600
/dev/rdsk/c0t1d0s0: SIZE=96 MTIME=Jun 5 13:23 2006 (RECONNECTED)
/dev/rdsk/c0t1d0s0: LINK COUNT FILE I=1192311 OWNER=root MODE=100600
/dev/rdsk/c0t1d0s0: SIZE=96 MTIME=Jun 5 13:23 2006 COUNT 0 SHOULD BE 1
/dev/rdsk/c0t1d0s0: LINK COUNT INCREASING
/dev/rdsk/c0t1d0s0: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
In maintenance mode I do:
# fsck -y -F ufs /dev/rdsk/c0t1d0s0
and it manages to correct the problem ... but the problem occurs again after each reboot, on each cluster node!
I have installed Sun Cluster 3.1 on Solaris 9 SPARC.
How can I get rid of it?
Any ideas?
Brgds,
Sergej
Hi, I get this:
112941-09 SunOS 5.9: sysidnet Utility Patch
116755-01 SunOS 5.9: usr/snadm/lib/libadmutil.so.2 Patch
113434-30 SunOS 5.9: /usr/snadm/lib Library and Differential Flash Patch
112951-13 SunOS 5.9: patchadd and patchrm Patch
114711-03 SunOS 5.9: usr/sadm/lib/diskmgr/VDiskMgr.jar Patch
118064-04 SunOS 5.9: Admin Install Project Manager Client Patch
113742-01 SunOS 5.9: smcpreconfig.sh Patch
113813-02 SunOS 5.9: Gnome Integration Patch
114501-01 SunOS 5.9: drmproviders.jar Patch
112943-09 SunOS 5.9: Volume Management Patch
113799-01 SunOS 5.9: solregis Patch
115697-02 SunOS 5.9: mtmalloc lib Patch
113029-06 SunOS 5.9: libaio.so.1 librt.so.1 and abi_libaio.so.1 Patch
113981-04 SunOS 5.9: devfsadm Patch
116478-01 SunOS 5.9: usr platform links Patch
112960-37 SunOS 5.9: patch libsldap ldap_cachemgr libldap
113332-07 SunOS 5.9: libc_psr.so.1 Patch
116500-01 SunOS 5.9: SVM auto-take disksets Patch
114349-04 SunOS 5.9: sbin/dhcpagent Patch
120441-03 SunOS 5.9: libsec patch
114344-19 SunOS 5.9: kernel/drv/arp Patch
114373-01 SunOS 5.9: UMEM - abi_libumem.so.1 patch
118558-27 SunOS 5.9: Kernel Patch
115675-01 SunOS 5.9: /usr/lib/liblgrp.so Patch
112958-04 SunOS 5.9: patch pci.so
113451-11 SunOS 5.9: IKE Patch
112920-02 SunOS 5.9: libipp Patch
114372-01 SunOS 5.9: UMEM - llib-lumem patch
116229-01 SunOS 5.9: libgen Patch
116178-01 SunOS 5.9: libcrypt Patch
117453-01 SunOS 5.9: libwrap Patch
114131-03 SunOS 5.9: multi-terabyte disk support - libadm.so.1 patch
118465-02 SunOS 5.9: rcm_daemon Patch
113490-04 SunOS 5.9: Audio Device Driver Patch
114926-02 SunOS 5.9: kernel/drv/audiocs Patch
113318-25 SunOS 5.9: patch /kernel/fs/nfs and /kernel/fs/sparcv9/nfs
113070-01 SunOS 5.9: ftp patch
114734-01 SunOS 5.9: /usr/ccs/bin/lorder Patch
114227-01 SunOS 5.9: yacc Patch
116546-07 SunOS 5.9: CDRW DVD-RW DVD+RW Patch
119494-01 SunOS 5.9: mkisofs patch
113471-09 SunOS 5.9: truss Patch
114718-05 SunOS 5.9: usr/kernel/fs/pcfs Patch
115545-01 SunOS 5.9: nss_files patch
115544-02 SunOS 5.9: nss_compat patch
118463-01 SunOS 5.9: du Patch
116016-03 SunOS 5.9: /usr/sbin/logadm patch
115542-02 SunOS 5.9: nss_user patch
116014-06 SunOS 5.9: /usr/sbin/usermod patch
116012-02 SunOS 5.9: ps utility patch
117433-02 SunOS 5.9: FSS FX RT Patch
117431-01 SunOS 5.9: nss_nis Patch
115537-01 SunOS 5.9: /kernel/strmod/ptem patch
115336-03 SunOS 5.9: /usr/bin/tar, /usr/sbin/static/tar Patch
117426-03 SunOS 5.9: ctsmc and sc_nct driver patch
121319-01 SunOS 5.9: devfsadmd_mod.so Patch
121316-01 SunOS 5.9: /kernel/sys/doorfs Patch
121314-01 SunOS 5.9: tl driver patch
116554-01 SunOS 5.9: semsys Patch
112968-01 SunOS 5.9: patch /usr/bin/renice
116552-01 SunOS 5.9: su Patch
120445-01 SunOS 5.9: Toshiba platform token links (TSBW,Ultra-3i)
112964-15 SunOS 5.9: /usr/bin/ksh Patch
112839-08 SunOS 5.9: patch libthread.so.1
115687-02 SunOS 5.9:/var/sadm/install/admin/default Patch
115685-01 SunOS 5.9: sbin/netstrategy Patch
115488-01 SunOS 5.9: patch /kernel/misc/busra
115681-01 SunOS 5.9: usr/lib/fm/libdiagcode.so.1 Patch
113032-03 SunOS 5.9: /usr/sbin/init Patch
113031-03 SunOS 5.9: /usr/bin/edit Patch
114259-02 SunOS 5.9: usr/sbin/psrinfo Patch
115878-01 SunOS 5.9: /usr/bin/logger Patch
116543-04 SunOS 5.9: vmstat Patch
113580-01 SunOS 5.9: mount Patch
115671-01 SunOS 5.9: mntinfo Patch
113977-01 SunOS 5.9: awk/sed pkgscripts Patch
122716-01 SunOS 5.9: kernel/fs/lofs patch
113973-01 SunOS 5.9: adb Patch
122713-01 SunOS 5.9: expr patch
117168-02 SunOS 5.9: mpstat Patch
116498-02 SunOS 5.9: bufmod Patch
113576-01 SunOS 5.9: /usr/bin/dd Patch
116495-03 SunOS 5.9: specfs Patch
117160-01 SunOS 5.9: /kernel/misc/krtld patch
118586-01 SunOS 5.9: cp/mv/ln Patch
120025-01 SunOS 5.9: ipsecconf Patch
116527-02 SunOS 5.9: timod Patch
117155-08 SunOS 5.9: pcipsy Patch
114235-01 SunOS 5.9: libsendfile.so.1 Patch
117152-01 SunOS 5.9: magic Patch
116486-03 SunOS 5.9: tsalarm Driver Patch
121998-01 SunOS 5.9: two-key mode fix for 3DES Patch
116484-01 SunOS 5.9: consconfig Patch
116482-02 SunOS 5.9: modload Utils Patch
117746-04 SunOS 5.9: patch platform/sun4u/kernel/drv/sparcv9/pic16f819
121992-01 SunOS 5.9: fgrep Patch
120768-01 SunOS 5.9: grpck patch
119438-01 SunOS 5.9: usr/bin/login Patch
114389-03 SunOS 5.9: devinfo Patch
116510-01 SunOS 5.9: wscons Patch
114224-05 SunOS 5.9: csh Patch
116670-04 SunOS 5.9: gld Patch
114383-03 SunOS 5.9: Enchilada/Stiletto - pca9556 driver
116506-02 SunOS 5.9: traceroute patch
112919-01 SunOS 5.9: netstat Patch
112918-01 SunOS 5.9: route Patch
112917-01 SunOS 5.9: ifrt Patch
117132-01 SunOS 5.9: cachefsstat Patch
114370-04 SunOS 5.9: libumem.so.1 patch
114010-02 SunOS 5.9: m4 Patch
117129-01 SunOS 5.9: adb Patch
117483-01 SunOS 5.9: ntwdt Patch
114369-01 SunOS 5.9: prtvtoc patch
117125-02 SunOS 5.9: procfs Patch
117480-01 SunOS 5.9: pkgadd Patch
112905-02 SunOS 5.9: ippctl Patch
117123-06 SunOS 5.9: wanboot Patch
115030-03 SunOS 5.9: Multiterabyte UFS - patch mount
114004-01 SunOS 5.9: sed Patch
113335-03 SunOS 5.9: devinfo Patch
113495-05 SunOS 5.9: cfgadm Library Patch
113494-01 SunOS 5.9: iostat Patch
113493-03 SunOS 5.9: libproc.so.1 Patch
113330-01 SunOS 5.9: rpcbind Patch
115028-02 SunOS 5.9: patch /usr/lib/fs/ufs/df
115024-01 SunOS 5.9: file system identification utilities
117471-02 SunOS 5.9: fifofs Patch
118897-01 SunOS 5.9: stc Patch
115022-03 SunOS 5.9: quota utilities
115020-01 SunOS 5.9: patch /usr/lib/adb/ml_odunit
113720-01 SunOS 5.9: rootnex Patch
114352-03 SunOS 5.9: /etc/inet/inetd.conf Patch
123056-01 SunOS 5.9: ldterm patch
116243-01 SunOS 5.9: umountall Patch
113323-01 SunOS 5.9: patch /usr/sbin/passmgmt
116049-01 SunOS 5.9: fdfs Patch
116241-01 SunOS 5.9: keysock Patch
113480-02 SunOS 5.9: usr/lib/security/pam_unix.so.1 Patch
115018-01 SunOS 5.9: patch /usr/lib/adb/dqblk
113277-44 SunOS 5.9: sd and ssd Patch
117457-01 SunOS 5.9: elfexec Patch
113110-01 SunOS 5.9: touch Patch
113077-17 SunOS 5.9: /platform/sun4u/kernel/drv/su Patch
115006-01 SunOS 5.9: kernel/strmod/kb patch
113072-07 SunOS 5.9: patch /usr/sbin/format
113071-01 SunOS 5.9: patch /usr/sbin/acctadm
116782-01 SunOS 5.9: tun Patch
114331-01 SunOS 5.9: power Patch
112835-01 SunOS 5.9: patch /usr/sbin/clinfo
114927-01 SunOS 5.9: usr/sbin/allocate Patch
119937-02 SunOS 5.9: inetboot patch
113467-01 SunOS 5.9: seg_drv & seg_mapdev Patch
114923-01 SunOS 5.9: /usr/kernel/drv/logindmux Patch
117443-01 SunOS 5.9: libkvm Patch
114329-01 SunOS 5.9: /usr/bin/pax Patch
119929-01 SunOS 5.9: /usr/bin/xargs patch
113459-04 SunOS 5.9: udp patch
113446-03 SunOS 5.9: dman Patch
116009-05 SunOS 5.9: sgcn & sgsbbc patch
116557-04 SunOS 5.9: sbd Patch
120241-01 SunOS 5.9: bge: Link & Speed LEDs flash constantly on V20z
113984-01 SunOS 5.9: iosram Patch
113220-01 SunOS 5.9: patch /platform/sun4u/kernel/drv/sparcv9/upa64s
113975-01 SunOS 5.9: ssm Patch
117165-01 SunOS 5.9: pmubus Patch
116530-01 SunOS 5.9: bge.conf Patch
116529-01 SunOS 5.9: smbus Patch
116488-03 SunOS 5.9: Lights Out Management (lom) patch
117131-01 SunOS 5.9: adm1031 Patch
117124-12 SunOS 5.9: platmod, drmach, dr, ngdr, & gptwocfg Patch
114003-01 SunOS 5.9: bbc driver Patch
118539-02 SunOS 5.9: schpc Patch
112837-10 SunOS 5.9: patch /usr/lib/inet/in.dhcpd
114975-01 SunOS 5.9: usr/lib/inet/dhcp/svcadm/dhcpcommon.jar Patch
117450-01 SunOS 5.9: ds_SUNWnisplus Patch
113076-02 SunOS 5.9: dhcpmgr.jar Patch
113572-01 SunOS 5.9: docbook-to-man.ts Patch
118472-01 SunOS 5.9: pargs Patch
122709-01 SunOS 5.9: /usr/bin/dc patch
113075-01 SunOS 5.9: pmap patch
113472-01 SunOS 5.9: madv & mpss lib Patch
115986-02 SunOS 5.9: ptree Patch
115693-01 SunOS 5.9: /usr/bin/last Patch
115259-03 SunOS 5.9: patch usr/lib/acct/acctcms
114564-09 SunOS 5.9: /usr/sbin/in.ftpd Patch
117441-01 SunOS 5.9: FSSdispadmin Patch
113046-01 SunOS 5.9: fcp Patch
118191-01 gtar patch
114818-06 GNOME 2.0.0: libpng Patch
117177-02 SunOS 5.9: lib/gss module Patch
116340-05 SunOS 5.9: gzip and Freeware info files patch
114339-01 SunOS 5.9: wrsm header files Patch
122673-01 SunOS 5.9: sockio.h header patch
116474-03 SunOS 5.9: libsmedia Patch
117138-01 SunOS 5.9: seg_spt.h
112838-11 SunOS 5.9: pcicfg Patch
117127-02 SunOS 5.9: header Patch
112929-01 SunOS 5.9: RIPv2 Header Patch
112927-01 SunOS 5.9: IPQos Header Patch
115992-01 SunOS 5.9: /usr/include/limits.h Patch
112924-01 SunOS 5.9: kdestroy kinit klist kpasswd Patch
116231-03 SunOS 5.9: llc2 Patch
116776-01 SunOS 5.9: mipagent patch
117420-02 SunOS 5.9: mdb Patch
117179-01 SunOS 5.9: nfs_dlboot Patch
121194-01 SunOS 5.9: usr/lib/nfs/statd Patch
116502-03 SunOS 5.9: mountd Patch
113331-01 SunOS 5.9: usr/lib/nfs/rquotad Patch
113281-01 SunOS 5.9: patch /usr/lib/netsvc/yp/ypbind
114736-01 SunOS 5.9: usr/sbin/nisrestore Patch
115695-01 SunOS 5.9: /usr/lib/netsvc/yp/yppush Patch
113321-06 SunOS 5.9: patch sf and socal
113049-01 SunOS 5.9: luxadm & liba5k.so.2 Patch
116663-01 SunOS 5.9: ntpdate Patch
117143-01 SunOS 5.9: xntpd Patch
113028-01 SunOS 5.9: patch /kernel/ipp/flowacct
113320-06 SunOS 5.9: patch se driver
114731-08 SunOS 5.9: kernel/drv/glm Patch
115667-03 SunOS 5.9: Chalupa platform support Patch
117428-01 SunOS 5.9: picl Patch
113327-03 SunOS 5.9: pppd Patch
114374-01 SunOS 5.9: Perl patch
115173-01 SunOS 5.9: /usr/bin/sparcv7/gcore /usr/bin/sparcv9/gcore Patch
114716-02 SunOS 5.9: usr/bin/rcp Patch
112915-04 SunOS 5.9: snoop Patch
116778-01 SunOS 5.9: in.ripngd patch
112916-01 SunOS 5.9: rtquery Patch
112928-03 SunOS 5.9: in.ndpd Patch
119447-01 SunOS 5.9: ses Patch
115354-01 SunOS 5.9: slpd Patch
116493-01 SunOS 5.9: ProtocolTO.java Patch
116780-02 SunOS 5.9: scmi2c Patch
112972-17 SunOS 5.9: patch /usr/lib/libssagent.so.1 /usr/lib/libssasnmp.so.1 mibiisa
116480-01 SunOS 5.9: IEEE 1394 Patch
122485-01 SunOS 5.9: 1394 mass storage driver patch
113716-02 SunOS 5.9: sar & sadc Patch
115651-02 SunOS 5.9: usr/lib/acct/runacct Patch
116490-01 SunOS 5.9: acctdusg Patch
117473-01 SunOS 5.9: fwtmp Patch
116180-01 SunOS 5.9: geniconvtbl Patch
114006-01 SunOS 5.9: tftp Patch
115646-01 SunOS 5.9: libtnfprobe shared library Patch
113334-03 SunOS 5.9: udfs Patch
115350-01 SunOS 5.9: ident_udfs.so.1 Patch
122484-01 SunOS 5.9: preen_md.so.1 patch
117134-01 SunOS 5.9: svm flasharchive patch
116472-02 SunOS 5.9: rmformat Patch
112966-05 SunOS 5.9: patch /usr/sbin/vold
114229-01 SunOS 5.9: action_filemgr.so.1 Patch
114335-02 SunOS 5.9: usr/sbin/rmmount Patch
120443-01 SunOS 5.9: sed core dumps on long lines
121588-01 SunOS 5.9: /usr/xpg4/bin/awk Patch
113470-02 SunOS 5.9: winlock Patch
119211-07 NSS_NSPR_JSS 3.11: NSPR 4.6.1 / NSS 3.11 / JSS 4.2
118666-05 J2SE 5.0: update 6 patch
118667-05 J2SE 5.0: update 6 patch, 64bit
114612-01 SunOS 5.9: ANSI-1251 encodings file errors
114276-02 SunOS 5.9: Extended Arabic support in UTF-8
117400-01 SunOS 5.9: ISO8859-6 and ISO8859-8 iconv symlinks
113584-16 SunOS 5.9: yesstr, nostr nl_langinfo() strings incorrect in S9
117256-01 SunOS 5.9: Remove old OW Xresources.ow files
112625-01 SunOS 5.9: Dcam1394 patch
114600-05 SunOS 5.9: vlan driver patch
117119-05 SunOS 5.9: Sun Gigabit Ethernet 3.0 driver patch
117593-04 SunOS 5.9: Manual Page updates for Solaris 9
112622-19 SunOS 5.9: M64 Graphics Patch
115953-06 Sun Cluster 3.1: Sun Cluster sccheck patch
117949-23 Sun Cluster 3.1: Core Patch for Solaris 9
115081-06 Sun Cluster 3.1: HA-Sun One Web Server Patch
118627-08 Sun Cluster 3.1: Manageability and Serviceability Agent
117985-03 SunOS 5.9: XIL 1.4.2 Loadable Pipeline Libraries
113896-06 SunOS 5.9: en_US.UTF-8 locale patch
114967-02 SunOS 5.9: FDL patch
114677-11 SunOS 5.9: International Components for Unicode Patch
112805-01 CDE 1.5: Help volume patch
113841-01 CDE 1.5: answerbook patch
113839-01 CDE 1.5: sdtwsinfo patch
115713-01 CDE 1.5: dtfile patch
112806-01 CDE 1.5: sdtaudiocontrol patch
112804-02 CDE 1.5: sdtname patch
113244-09 CDE 1.5: dtwm patch
114312-02 CDE1.5: GNOME/CDE Menu for Solaris 9
112809-02 CDE:1.5 Media Player (sdtjmplay) patch
113868-02 CDE 1.5: PDASync patch
119976-01 CDE 1.5: dtterm patch
112771-30 Motif 1.2.7 and 2.1.1: Runtime library patch for Solaris 9
114282-01 CDE 1.5: libDtWidget patch
113789-01 CDE 1.5: dtexec patch
117728-01 CDE1.5: dthello patch
113863-01 CDE 1.5: dtconfig patch
112812-01 CDE 1.5: dtlp patch
113861-04 CDE 1.5: dtksh patch
115972-03 CDE 1.5: dtterm libDtTerm patch
114654-02 CDE 1.5: SmartCard patch
117632-01 CDE1.5: sun_at patch for Solaris 9
113374-02 X11 6.6.1: xpr patch
118759-01 X11 6.6.1: Font Administration Tools patch
117577-03 X11 6.6.1: TrueType fonts patch
116084-01 X11 6.6.1: font patch
113098-04 X11 6.6.1: X RENDER extension patch
112787-01 X11 6.6.1: twm patch
117601-01 X11 6.6.1: libowconfig.so.0 patch
117663-02 X11 6.6.1: xwd patch
113764-04 X11 6.6.1: keyboard patch
113541-02 X11 6.6.1: XKB patch
114561-01 X11 6.6.1: X splash screen patch
113513-02 X11 6.6.1: platform support for new hardware
116121-01 X11 6.4.1: platform support for new hardware
114602-04 X11 6.6.1: libmpg_psr patch
Is there a bundle to install, or do I have to install each patch separately? -
Does using the same oracle account on 2 cluster nodes cause a problem?
Does using the same oracle account on 2 cluster nodes cause a problem? If I use the same oracle account on 2 cluster nodes running 2 databases, then when failover happens both databases will be running on one node. Do the 2 oracle accounts cause a shared memory (SHM) conflict?
Or do I have to use an oracle01 account on node1 and an oracle02 account on node2? Can I not use the same account name?
Thanks.
I'm not 100% certain I understood the question, so I'll rephrase them and answer them.
Q. If I have the same Oracle account on each cluster node, e.g. uid=100 (oracle) gid=100 (oinstall), groups dba=200, can I run two databases, one on each cluster node without problems?
A. Yes. Having multiple DBs on one node is not a problem and doesn't cause shared memory problems. Obviously each database needs a different database name and thus different SID.
Q. Can I have two different Oracle accounts on each cluster node e.g. uid=100 (oraclea) gid=100 (oinstall), groups dba=200 and e.g. uid=300 (oracleb) gid=100 (oinstall), groups dba=200, and run two databases, one for each Oracle user?
A. Yes. The different Oracle user names would need to be associated with different Oracle installations, i.e. Oracle HOMEs. So you might have /oracle/oracle/product/10.2.0/db_1 (oraclea) and /oracle/oracle/product/11.0.1.0/db_1 (oracleb). The ORACLE_HOME is then used to determine the Oracle user name by checking the owner of the Oracle binary in the ${ORACLE_HOME}/bin directory.
Tim
--- -
JNDI Lookup for multiple server instances with multiple cluster nodes
Hi Experts,
I need help with retrieving log files for multiple server instances with multiple cluster nodes. The system is NetWeaver 7.01.
There are 3 server instances, all with 3 cluster nodes.
There are EJB session beans deployed on them to retrieve the log information for each server node.
In the session bean there is a method:
public List getServers() {
    List servers = new ArrayList();
    ClassLoader saveLoader = Thread.currentThread().getContextClassLoader();
    try {
        Properties prop = new Properties();
        prop.setProperty(Context.INITIAL_CONTEXT_FACTORY, "com.sap.engine.services.jndi.InitialContextFactoryImpl");
        prop.put(Context.SECURITY_AUTHENTICATION, "none");
        Thread.currentThread().setContextClassLoader(
                com.sap.engine.services.adminadapter.interfaces.RemoteAdminInterface.class.getClassLoader());
        InitialContext mInitialContext = new InitialContext(prop);
        RemoteAdminInterface rai = (RemoteAdminInterface) mInitialContext.lookup("adminadapter");
        ClusterAdministrator cadm = rai.getClusterAdministrator();
        ConvenienceEngineAdministrator cea = rai.getConvenienceEngineAdministrator();
        int[] nodeId = cea.getClusterNodeIds();
        for (int i = 0; i < nodeId.length; i++) {
            // skip everything except dispatcher nodes (type 1 here)
            if (cea.getClusterNodeType(nodeId[i]) != 1)
                continue;
            Properties dispatcherProp = cadm.getNodeInfo(nodeId[i]);
            String dispatcherIP = dispatcherProp.getProperty("Host", "localhost");
            String p4Port = cea.getServiceProperty(nodeId[i], "p4", "port");
            String[] loc = new String[3];
            loc[0] = dispatcherIP;
            loc[1] = p4Port;
            loc[2] = null;
            servers.add(loc);
        }
        mInitialContext.close();
    } catch (NamingException e) {
        // ignored
    } catch (RemoteException e) {
        // ignored
    } finally {
        Thread.currentThread().setContextClassLoader(saveLoader);
    }
    return servers;
}
and the retrieved server information is used here in another class:
public void run() {
    ReadLogsSession readLogsSession;
    int total = servers.size();
    for (Iterator iter = servers.iterator(); iter.hasNext();) {
        if (keepAlive) {
            try {
                Thread.sleep(500);
            } catch (InterruptedException e) {
                status = status + e.getMessage();
                System.err.println("LogReader Thread Exception" + e.toString());
                e.printStackTrace();
            }
            String[] serverLocs = (String[]) iter.next();
            searchFilter.setDetails("[" + serverLocs[1] + "]");
            Properties prop = new Properties();
            prop.put(Context.INITIAL_CONTEXT_FACTORY, "com.sap.engine.services.jndi.InitialContextFactoryImpl");
            prop.put(Context.PROVIDER_URL, serverLocs[0] + ":" + serverLocs[1]);
            System.err.println("LogReader run [" + serverLocs[0] + ":" + serverLocs[1] + "]");
            status = " Reading :[" + serverLocs[0] + ":" + serverLocs[1] + "] servers :[" + currentIndex + "/" + total + " ] ";
            prop.put("force_remote", "true");
            prop.put(Context.SECURITY_AUTHENTICATION, "none");
            try {
                Context ctx = new InitialContext(prop);
                Object ob = ctx.lookup("com.xom.sia.ReadLogsSession");
                ReadLogsSessionHome readLogsSessionHome = (ReadLogsSessionHome) PortableRemoteObject.narrow(ob, ReadLogsSessionHome.class);
                status = status + "Found ReadLogsSessionHome [" + readLogsSessionHome + "]";
                readLogsSession = readLogsSessionHome.create();
                if (readLogsSession != null) {
                    status = status + " Created [" + readLogsSession + "]";
                    List l = readLogsSession.getAuditLogs(searchFilter);
                    serverLocs[2] = String.valueOf(l.size());
                    status = status + serverLocs[2];
                    allRecords.addAll(l);
                } else {
                    status = status + " unable to create readLogsSession ";
                }
                ctx.close();
            } catch (NamingException e) {
                status = status + e.getMessage();
                System.err.println(e.getMessage());
                e.printStackTrace();
            } catch (CreateException e) {
                status = status + e.getMessage();
                System.err.println(e.getMessage());
                e.printStackTrace();
            } catch (IOException e) {
                status = status + e.getMessage();
                System.err.println(e.getMessage());
                e.printStackTrace();
            } catch (Exception e) {
                status = status + e.getMessage();
                System.err.println(e.getMessage());
                e.printStackTrace();
            }
            currentIndex++;
        }
    }
    jobComplete = true;
}
The application works for multiple server instances with a single cluster node each, but not in a multi-node clustered environment.
Does anybody know what should be changed to handle more cluster nodes?
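As an editorial aside, the dispatcher-collection step in getServers() above can be sketched independently of the SAP admin API. NodeInfo below is a hypothetical holder for the values read from getNodeInfo()/getServiceProperty(), not an SAP class; the point is that a clustered landscape must yield one distinct host:port target per dispatcher, not just the first one found:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Hypothetical descriptor; in the post the values come from
// cadm.getNodeInfo(...) and cea.getServiceProperty(..., "p4", "port").
class NodeInfo {
    final String host;
    final String p4Port;
    final boolean dispatcher;
    NodeInfo(String host, String p4Port, boolean dispatcher) {
        this.host = host; this.p4Port = p4Port; this.dispatcher = dispatcher;
    }
}

public class ProviderUrls {
    // One JNDI provider URL per dispatcher process, deduplicated, so a
    // landscape with several instances yields several lookup targets.
    public static List<String> fromNodes(List<NodeInfo> nodes) {
        Set<String> urls = new LinkedHashSet<>();
        for (NodeInfo n : nodes) {
            if (!n.dispatcher) continue; // only dispatcher nodes expose the P4 port here
            urls.add(n.host + ":" + n.p4Port);
        }
        return new ArrayList<>(urls);
    }
}
```

Each resulting "host:port" string would then become a separate Context.PROVIDER_URL for the per-server lookup loop, as in the run() method above.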
Thanks,
Gergely
Thanks for the response.
I was afraid it would be something like that, although I was hoping for something closer to the application pools we use with IIS to isolate sites and limit the impact one badly behaving one can have on another.
mmr
"Ian Skinner" <[email protected]> wrote in message
news:fe5u5v$pue$[email protected]..
> Run CF with one instance. Look at your processes and see how much memory
> the "JRun" process is using, multiply this by the number of other CF
> instances.
>
> You are most likely going to end up implementing a "handful" of
> instances versus "dozens" of instances on all but the beefiest of servers.
>
> This can be affected by how much memory each instance uses. An
> application that puts major amounts of data into persistent scopes such as
> application and|or session will have a larger footprint than a leaner
> application that does not put much data into memory and|or leave it there
> for a very long time.
>
> I know the first time we made use of CF in its multi-home flavor, we went
> a bit overboard and created way too many. After nearly bringing a
> moderate server to its knees, we consolidated until we had three or four
> or so IIRC. A couple dedicated to each of our largest and most critical
> applications and a couple of general instances that ran many smaller
> applications each.
-
Question about cluster node NodeWeight property
Hi,
I have a three-node (A/B/C) Windows 2008 R2 SP1 cluster, testCluster, and installed KB2494036 on all three nodes. Suppose node A is the active node.
I configured node C's NodeWeight property to 0, while node A and node B keep the default (NodeWeight=1). I also added a shared disk Q for the cluster quorum.
So I want to know: if node C and node B are down, does the cluster testCluster go down for loss of quorum, or stay up?
At first I thought testCluster should stay up, because the cluster has 2 votes (node A and the quorum disk); node B is down and node C doesn't join the voting. But in testing, testCluster went down with loss of quorum.
Does anybody know the reason? Thanks.
Hello mark.gao,
Let me see if I understand correctly your steps, so I can think that if you create your cluster with three nodes at the beginning your quorum model should be "Node Majority", then you have three votes one per each node.
Then was removed the vote for Node "C" and added a disk to be witness for cluster quorum, at this point we have two out of three votes from the original configuration on "Node Majority"
Question:
At some point, did you change the quorum model to "Node and Disk Majority"?
Maybe this is the issue: you are stuck on "Node Majority", and when nodes "B" and "C" are down there is only one vote, from node "A", so there is no quorum to keep the service online.
On 2012 we have the awesome option to configure a Dynamic Quorum:
Dynamic quorum management
In Windows Server 2012, as an advanced quorum configuration option, you can choose to enable dynamic quorum management by cluster. When this option is enabled, the cluster dynamically manages
the vote assignment to nodes, based on the state of each node. Votes are automatically removed from nodes that leave active cluster membership, and a vote is automatically assigned when a node rejoins the cluster. By default, dynamic quorum management is enabled.
Note
With dynamic quorum management, the cluster quorum majority is determined by the set of nodes that are active members of the cluster at any time. This is an important distinction from the cluster quorum in Windows Server 2008 R2, where the quorum
majority is fixed, based on the initial cluster configuration.
With dynamic quorum management, it is also possible for a cluster to run on the last surviving cluster node. By dynamically adjusting the quorum majority requirement, the cluster can sustain
sequential node shutdowns to a single node.
The cluster-assigned dynamic vote of a node can be verified with the DynamicWeight common property of the cluster node by using the Get-ClusterNode Windows PowerShell cmdlet. A value of 0 indicates that the node does not have a quorum vote. A value of 1 indicates that the node has a quorum vote.
The vote assignment for all cluster nodes can be verified by using the Validate Cluster Quorum validation test.
Additional considerations
Dynamic quorum management does not allow the cluster to sustain a simultaneous failure of a majority of voting members. To continue running, the cluster must always have a quorum majority at the time of a node shutdown or failure.
If you have explicitly removed the vote of a node, the cluster cannot dynamically add or remove that vote.
Configure and Manage the Quorum in a Windows Server 2012 Failover Cluster
https://technet.microsoft.com/en-us/library/jj612870.aspx#BKMK_dynamic
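The vote arithmetic described above can be sketched as a toy model (plain Java, purely illustrative; this is not a Windows clustering API). In the poster's scenario the quorum model stayed on "Node Majority", so only A and B carry configured votes (C has NodeWeight=0) and the disk witness is not counted:

```java
// Toy model of "Node Majority" quorum arithmetic (illustrative, not a cluster API).
// Configured votes: A=1, B=1, C=0 (NodeWeight removed); the disk witness is NOT
// counted because the quorum model was never switched to "Node and Disk Majority".
public class QuorumModel {
    public static boolean hasQuorum(int configuredVotes, int liveVotes) {
        // A strict majority of the configured votes must be online.
        return liveVotes > configuredVotes / 2;
    }

    public static void main(String[] args) {
        int configured = 2;                           // only A and B carry votes
        System.out.println(hasQuorum(configured, 2)); // all voters up -> true
        System.out.println(hasQuorum(configured, 1)); // B and C down, only A left -> false
    }
}
```

With dynamic quorum (2012), the departing nodes' votes would be removed as they leave, so the denominator shrinks along with the numerator and the last node can keep quorum.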
Hope this info helps you to reach your goal. :D
5ALU2 ! -
Get a list of users logged on each server node in a clustered environment
Hi,
does anybody know if there is a way to get a list of the users logged on to each server node in a clustered environment? Or is there maybe an API for that, so I could write an application that does it?
Regards
Ladislav
Hi,
about the code I was looking at - you can easily find out that these iViews are components of <b>com.sap.portal.admin.monitor.par</b>. The "Thread Overview" iView is implemented by <i>com.sapportals.portal.admin.psm.PortalServerMonitor</i>. In the <i>getThreadsData</i> method there is a parameter <b>portalMonitor</b>, which led me to this code in <i>doContent</i>:
/* 143*/ IPortalMonitor portalMonitor = (IPortalMonitor)PortalRuntime.getRuntimeResources().getService("com.sap.portal.runtime.application.monitor.PortalMonitor");
/* 144*/ IPSMData psmData = portalMonitor.getIPSMDataInstance();
So I have found <i>com.sap.portal.runtime.application.monitor.par</i>, and there is a service implemented by <i>com.sapportals.portal.prt.service.monitor.PortalMonitor</i>, which uses <i>com.sapportals.portal.prt.service.monitor.MonitorCommunication</i>. And this is the service that calls all the nodes in the cluster, asking for info about the threads, requests, ... Here is a method <i>getMonitoringData</i> which works with the object <i>com.sapportals.portal.prt.service.monitor.MonitorData</i>, and here, in the method <i>getThreadOverview</i>:
/* 396*/ ApplThreadOverview oATOs[] = OverviewMonitor.getApplThreads();
So in this <i>OverviewMonitor</i> the method <i>getApplThreads</i>:
public static ApplThreadOverview[] getApplThreads()
/* 50*/ return ApplThreadMonitor.getApplThreads(false);
from here:
/* 130*/ ApplThreadMonitor hlpAppl = TaskMonitor.getApplThreadList();
and this <i>ApplThreadMonitor</i> has:
private String userName;
private String reqName;
private String taskName;
private String compName;
private String action;
So if you can get this on each node (even though most of the classes mentioned are in the CORE part, not the API) and ask for it from one place, you have the user list across the cluster nodes...
In fact there may be a shortcut to get this info - this is what I have found ;o)
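As a plain-Java illustration of that fan-out-and-merge idea (the SAP classes above are internal, so this interface is hypothetical - in the portal the per-node data would come from something like ApplThreadMonitor):

```java
import java.util.List;
import java.util.Map;
import java.util.SortedSet;
import java.util.TreeSet;

// Hypothetical sketch: each cluster node reports its logged-on user names,
// and one place merges the per-node lists into a cluster-wide view.
public class ClusterUserList {
    public static SortedSet<String> mergeUsers(Map<String, List<String>> usersPerNode) {
        SortedSet<String> all = new TreeSet<>();
        for (List<String> nodeUsers : usersPerNode.values()) {
            all.addAll(nodeUsers); // duplicates collapse: a user may be active on several nodes
        }
        return all;
    }

    public static void main(String[] args) {
        Map<String, List<String>> byNode = Map.of(
                "node1", List.of("alice", "bob"),
                "node2", List.of("bob", "carol"));
        System.out.println(mergeUsers(byNode)); // [alice, bob, carol]
    }
}
```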
Romano
PS: and thanks for the stars! -
What are the preferred methods for backing up a cluster node bootdisk?
Hi,
I would like to use flarcreate to backup the bootdisks for each of the nodes in my cluster... but I cannot see this method mentioned in any cluster documentation...
Has anybody used flash backups for cluster nodes before (and, more importantly, successfully restored a cluster node from a flash image)?
Thanks very much,
Trevor
Hi, some background on this - I need to patch some production cluster nodes, and obviously would like to back up the root disk of each node before doing this.
What I really need is some advice on the best method to back up and patch my cluster nodes (with a recovery method as well).
The Sun documentation for this says to use ufsdump, which I have used in the past - but will FLAR do the same job? Has anyone had experience using FLAR to restore a cluster node?
Or does someone have other solutions for patching the nodes? Maybe offline my root mirror (SVM), patch the root disk, and, barring any major problems, online the mirror again?
Cheers, Trevor -
JNDI lookup on a specific server node
Hi experts
I am facing the following issue: we are loading data from ECC tables into PI Java memory to improve performance at runtime. This is done by a JCo call, and it works fine within one Java node. But when we tested it in the productive system (with 2 Java nodes) it failed, because the data is stored in just one Java node, so if the message does not go through that node, it does not find the data.
This is part of the code from the UDF where we are loading data into Java memory:
props.put(javax.naming.Context.INITIAL_CONTEXT_FACTORY, "com.sap.engine.services.jndi.InitialContextFactoryImpl");
props.put(javax.naming.Context.PROVIDER_URL, "sapms://localhost:8110");
props.put("domain", "true");
And this is part of the code from the UDF where we are getting data from memory:
javax.naming.Context ctx = null;
java.util.Hashtable props = new java.util.Hashtable(1);
props.put("domain", "true");
props.put(javax.naming.Context.INITIAL_CONTEXT_FACTORY, "com.sap.engine.services.jndi.InitialContextFactoryImpl");
props.put(javax.naming.Context.PROVIDER_URL, "sapms://localhost:8110");
Can someone shed some light on how to send the data to both nodes, or how to do the data lookup on a specific node?
Thanks in advance.
Emmanuel
Hi,
I guess what you are trying to achieve is to build a cache of ECC data in PI memory. In this case you have to maintain a local cache of the data on each server node. Hopefully the amount of memory required will not impact system stability.
Regarding the lookup on a different server node: although it is technically possible, you would need to bind a remote object in JNDI and use costly remote communication to transfer the data between the server nodes.
I guess you also have to think of some kind of update / eviction strategy for your cache.
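A minimal per-node cache with a time-based eviction strategy, as suggested above, might look like this (a plain-Java sketch, not an SAP API; the names are illustrative - each PI Java node would build and refresh its own copy, so lookups never depend on which node a message lands on):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of a per-server-node cache with TTL-based eviction (illustrative).
public class LocalTtlCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long expiresAt;
        Entry(V value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final ConcurrentMap<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public LocalTtlCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    public void put(K key, V value) {
        map.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMillis));
    }

    public V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() > e.expiresAt) {
            map.remove(key, e); // evict the stale entry; the caller reloads from ECC
            return null;
        }
        return e.value;
    }
}
```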
Hope this helps!
Best Regards,
Dimitar -
Error: Halting this cluster node due to unrecoverable service failure
Our cluster has experienced some sort of fault that has only become apparent today. The origin appears to have been nearly a month ago yet the symptoms have only just manifested.
The node in question is a standalone instance running a DistributedCache service with local storage. It output the following to stdout on Jan-22:
Coherence <Error>: Halting this cluster node due to unrecoverable service failure
It finally failed today with OutOfMemoryError: Java heap space.
We're running coherence-3.5.2.jar.
Q1: It looks like this node failed on Jan-22 yet we did not notice. What is the best way to monitor node health?
Q2: What might the root cause be for such a fault?
I found the following in the logs:
2011-01-22 01:18:58,296 Coherence Logger@9216774 3.5.2/463 ERROR 2011-01-22 01:18:58.296/9910749.462 Oracle Coherence EE 3.5.2/463 <Error> (thread=Cluster, member=33): Attempting recovery (due to soft timeout) of Guard{Daemon=DistributedCache}
2011-01-22 01:19:04,772 Coherence Logger@9216774 3.5.2/463 ERROR 2011-01-22 01:19:04.772/9910755.938 Oracle Coherence EE 3.5.2/463 <Error> (thread=Cluster, member=33): Terminating guarded execution (due to hard timeout) of Guard{Daemon=DistributedCache}
2011-01-22 01:19:05,785 Coherence Logger@9216774 3.5.2/463 ERROR 2011-01-22 01:19:05.785/9910756.951 Oracle Coherence EE 3.5.2/463 <Error> (thread=Termination Thread, member=33): Full Thread Dump
Thread[Reference Handler,10,system]
java.lang.Object.wait(Native Method)
java.lang.Object.wait(Object.java:485)
java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)
Thread[DistributedCache,5,Cluster]
java.nio.Bits.copyToByteArray(Native Method)
java.nio.DirectByteBuffer.get(DirectByteBuffer.java:224)
com.tangosol.io.nio.ByteBufferInputStream.read(ByteBufferInputStream.java:123)
java.io.DataInputStream.readFully(DataInputStream.java:178)
java.io.DataInputStream.readFully(DataInputStream.java:152)
com.tangosol.util.Binary.readExternal(Binary.java:1066)
com.tangosol.util.Binary.<init>(Binary.java:183)
com.tangosol.io.nio.BinaryMap$Block.readValue(BinaryMap.java:4304)
com.tangosol.io.nio.BinaryMap$Block.getValue(BinaryMap.java:4130)
com.tangosol.io.nio.BinaryMap.get(BinaryMap.java:377)
com.tangosol.io.nio.BinaryMapStore.load(BinaryMapStore.java:64)
com.tangosol.net.cache.SerializationPagedCache$WrapperBinaryStore.load(SerializationPagedCache.java:1547)
com.tangosol.net.cache.SerializationPagedCache$PagedBinaryStore.load(SerializationPagedCache.java:1097)
com.tangosol.net.cache.SerializationMap.get(SerializationMap.java:121)
com.tangosol.net.cache.SerializationPagedCache.get(SerializationPagedCache.java:247)
com.tangosol.net.cache.AbstractSerializationCache$1.getOldValue(AbstractSerializationCache.java:315)
com.tangosol.net.cache.OverflowMap$Status.registerBackEvent(OverflowMap.java:4210)
com.tangosol.net.cache.OverflowMap.onBackEvent(OverflowMap.java:2316)
com.tangosol.net.cache.OverflowMap$BackMapListener.onMapEvent(OverflowMap.java:4544)
com.tangosol.util.MultiplexingMapListener.entryDeleted(MultiplexingMapListener.java:49)
com.tangosol.util.MapEvent.dispatch(MapEvent.java:214)
com.tangosol.util.MapEvent.dispatch(MapEvent.java:166)
com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
com.tangosol.net.cache.AbstractSerializationCache.dispatchEvent(AbstractSerializationCache.java:338)
com.tangosol.net.cache.AbstractSerializationCache.dispatchPendingEvent(AbstractSerializationCache.java:321)
com.tangosol.net.cache.AbstractSerializationCache.removeBlind(AbstractSerializationCache.java:155)
com.tangosol.net.cache.SerializationPagedCache.removeBlind(SerializationPagedCache.java:348)
com.tangosol.util.AbstractKeyBasedMap$KeySet.remove(AbstractKeyBasedMap.java:556)
com.tangosol.net.cache.OverflowMap.removeInternal(OverflowMap.java:1299)
com.tangosol.net.cache.OverflowMap.remove(OverflowMap.java:380)
com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.clear(DistributedCache.CDB:24)
com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onClearRequest(DistributedCache.CDB:32)
com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ClearRequest.run(DistributedCache.CDB:1)
com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheRequest.onReceived(DistributedCacheRequest.CDB:12)
com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:136)
com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
java.lang.Thread.run(Thread.java:619)
Thread[Finalizer,8,system]
java.lang.Object.wait(Native Method)
java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159)
Thread[PacketReceiver,7,Cluster]
java.lang.Object.wait(Native Method)
com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketReceiver.onWait(PacketReceiver.CDB:2)
com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
java.lang.Thread.run(Thread.java:619)
Thread[RMI TCP Accept-0,5,system]
java.net.PlainSocketImpl.socketAccept(Native Method)
java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)
java.net.ServerSocket.implAccept(ServerSocket.java:453)
java.net.ServerSocket.accept(ServerSocket.java:421)
sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:369)
sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:341)
java.lang.Thread.run(Thread.java:619)
Thread[PacketSpeaker,8,Cluster]
java.lang.Object.wait(Native Method)
com.tangosol.coherence.component.util.queue.ConcurrentQueue.waitForEntry(ConcurrentQueue.CDB:16)
com.tangosol.coherence.component.util.queue.ConcurrentQueue.remove(ConcurrentQueue.CDB:7)
com.tangosol.coherence.component.util.Queue.remove(Queue.CDB:1)
com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketSpeaker.onNotify(PacketSpeaker.CDB:62)
com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
java.lang.Thread.run(Thread.java:619)
Thread[Logger@9216774 3.5.2/463,3,main]
java.lang.Object.wait(Native Method)
com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
java.lang.Thread.run(Thread.java:619)
Thread[PacketListener1,8,Cluster]
java.net.PlainDatagramSocketImpl.receive0(Native Method)
java.net.PlainDatagramSocketImpl.receive(PlainDatagramSocketImpl.java:136)
java.net.DatagramSocket.receive(DatagramSocket.java:712)
com.tangosol.coherence.component.net.socket.UdpSocket.receive(UdpSocket.CDB:20)
com.tangosol.coherence.component.net.UdpPacket.receive(UdpPacket.CDB:4)
com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketListener.onNotify(PacketListener.CDB:19)
com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
java.lang.Thread.run(Thread.java:619)
Thread[main,5,main]
java.lang.Object.wait(Native Method)
com.tangosol.net.DefaultCacheServer.main(DefaultCacheServer.java:79)
com.networkfleet.cacheserver.Launcher.main(Launcher.java:122)
Thread[Signal Dispatcher,9,system]
Thread[RMI TCP Accept-41006,5,system]
java.net.PlainSocketImpl.socketAccept(Native Method)
java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)
java.net.ServerSocket.implAccept(ServerSocket.java:453)
java.net.ServerSocket.accept(ServerSocket.java:421)
sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:369)
sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:341)
java.lang.Thread.run(Thread.java:619)
ThreadCluster
java.lang.Object.wait(Native Method)
com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onWait(Grid.CDB:9)
com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
java.lang.Thread.run(Thread.java:619)
Thread[TcpRingListener,6,Cluster]
java.net.PlainSocketImpl.socketAccept(Native Method)
java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)
java.net.ServerSocket.implAccept(ServerSocket.java:453)
java.net.ServerSocket.accept(ServerSocket.java:421)
com.tangosol.coherence.component.net.socket.TcpSocketAccepter.accept(TcpSocketAccepter.CDB:18)
com.tangosol.coherence.component.util.daemon.TcpRingListener.acceptConnection(TcpRingListener.CDB:10)
com.tangosol.coherence.component.util.daemon.TcpRingListener.onNotify(TcpRingListener.CDB:9)
com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
java.lang.Thread.run(Thread.java:619)
Thread[PacketPublisher,6,Cluster]
java.lang.Object.wait(Native Method)
com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketPublisher.onWait(PacketPublisher.CDB:2)
com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
java.lang.Thread.run(Thread.java:619)
Thread[RMI TCP Accept-0,5,system]
java.net.PlainSocketImpl.socketAccept(Native Method)
java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)
java.net.ServerSocket.implAccept(ServerSocket.java:453)
java.net.ServerSocket.accept(ServerSocket.java:421)
sun.management.jmxremote.LocalRMIServerSocketFactory$1.accept(LocalRMIServerSocketFactory.java:34)
sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:369)
sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:341)
java.lang.Thread.run(Thread.java:619)
Thread[PacketListenerN,8,Cluster]
java.net.PlainDatagramSocketImpl.receive0(Native Method)
java.net.PlainDatagramSocketImpl.receive(PlainDatagramSocketImpl.java:136)
java.net.DatagramSocket.receive(DatagramSocket.java:712)
com.tangosol.coherence.component.net.socket.UdpSocket.receive(UdpSocket.CDB:20)
com.tangosol.coherence.component.net.UdpPacket.receive(UdpPacket.CDB:4)
com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketListener.onNotify(PacketListener.CDB:19)
com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
java.lang.Thread.run(Thread.java:619)
Thread[Invocation:Management,5,Cluster]
java.lang.Object.wait(Native Method)
com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onWait(Grid.CDB:9)
com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
java.lang.Thread.run(Thread.java:619)
Thread[DistributedCache:PofDistributedCache,5,Cluster]
java.lang.Object.wait(Native Method)
com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onWait(Grid.CDB:9)
com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
java.lang.Thread.run(Thread.java:619)
Thread[Invocation:Management:EventDispatcher,5,Cluster]
java.lang.Object.wait(Native Method)
com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
com.tangosol.coherence.component.util.daemon.queueProcessor.Service$EventDispatcher.onWait(Service.CDB:7)
com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
java.lang.Thread.run(Thread.java:619)
Thread[Termination Thread,5,Cluster]
java.lang.Thread.dumpThreads(Native Method)
java.lang.Thread.getAllStackTraces(Thread.java:1487)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
java.lang.reflect.Method.invoke(Method.java:597)
com.tangosol.net.GuardSupport.logStackTraces(GuardSupport.java:791)
com.tangosol.coherence.component.net.Cluster.onServiceFailed(Cluster.CDB:5)
com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid$Guard.terminate(Grid.CDB:17)
com.tangosol.net.GuardSupport$2.run(GuardSupport.java:652)
java.lang.Thread.run(Thread.java:619)
2011-01-22 01:19:06,738 Coherence Logger@9216774 3.5.2/463 INFO 2011-01-22 01:19:06.738/9910757.904 Oracle Coherence EE 3.5.2/463 <Info> (thread=main, member=33): Restarting Service: DistributedCache
2011-01-22 01:19:06,738 Coherence Logger@9216774 3.5.2/463 ERROR 2011-01-22 01:19:06.738/9910757.904 Oracle Coherence EE 3.5.2/463 <Error> (thread=main, member=33): Failed to restart services: java.lang.IllegalStateException: Failed to unregister: DistributedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=16, BackupPartitions=16}
2011-01-22 01:19:06,738 Coherence Logger@9216774 3.5.2/463 ERROR 2011-01-22 01:19:06.738/9910757.904 Oracle Coherence EE 3.5.2/463 <Error> (thread=main, member=33): Failed to restart services: java.lang.IllegalStateException: Failed to unregister: Distr
butedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=16, BackupPartitions=16}Hi
It seems the problem in this case is the call to clear(), which will try to load every entry stored in the overflow scheme in order to emit potential cache events to listeners. This probably requires much more memory than there is Java heap available, hence the OOM.
Our recommendation in this case is to call destroy() instead, since this bypasses the event firing.
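To see why this matters, here is a tiny self-contained stand-in (not the Coherence API; the class and method bodies are invented for illustration): clear() must materialize and visit every entry so it can fire a per-entry event, while destroy() simply drops the backing storage without firing anything.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for an overflow-backed cache -- NOT the Coherence API.
public class OverflowCacheSketch {
    private Map<String, String> storage = new HashMap<>();
    private int eventsFired = 0;

    void put(String k, String v) { storage.put(k, v); }

    void clear() {
        // Must visit each entry to emit a removal event for it; with an
        // overflow scheme this forces every entry back into the heap.
        for (String key : new HashMap<>(storage).keySet()) {
            storage.remove(key);
            eventsFired++;          // one event per entry
        }
    }

    void destroy() {
        storage = new HashMap<>();  // release the storage, no events fired
    }

    public static void main(String[] args) {
        OverflowCacheSketch cache = new OverflowCacheSketch();
        for (int i = 0; i < 1000; i++) cache.put("k" + i, "v" + i);
        cache.clear();
        System.out.println("events after clear(): " + cache.eventsFired);

        for (int i = 0; i < 1000; i++) cache.put("k" + i, "v" + i);
        cache.eventsFired = 0;
        cache.destroy();
        System.out.println("events after destroy(): " + cache.eventsFired);
    }
}
```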
/Charlie -
Why does the non-clustered SQL Server appear in the cluster node list?
1. I installed the node RS6 standalone. Why does it appear in the cluster node list when I query the DMV?
2. How do I remove RS6 from the cluster node list?
By "set -clusterownernode -resource "XXXASQL" -owners NODE1,NODE2"?
But how do I find the resource name? I tried the Windows cluster name, the SQL cluster name, and the SQL role name; all of them fail with "failed to get the cluster object".
3. How do I set the owners to {}? I tried the command below, but it failed.
IMHO, the sys.dm_os_cluster_nodes DMV is associated with the SQL Server Operating System (SQLOS); sys.dm_os_cluster_nodes returns one row for each node in the failover cluster configuration.
As you are running a standalone instance on the cluster, I am assuming this information is being picked up from the OS and not from the RS6 SQL instance.
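Both halves of that statement can be checked directly (sys.dm_os_cluster_nodes and SERVERPROPERTY are documented SQL Server interfaces; treat this as a sketch to run against your own instance):

```sql
-- One row per node in the Windows failover cluster configuration,
-- as seen by SQLOS, even when this instance is installed standalone
SELECT NodeName FROM sys.dm_os_cluster_nodes;

-- Reports whether THIS instance is clustered (0 = standalone)
SELECT SERVERPROPERTY('IsClustered') AS IsClustered;
```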
As you have confirmed that Is_cluster is false, and you don't see the RS6 instance in Failover Cluster Manager, I don't think anything is damaged here. Everything looks as expected; don't change the owner node, as it is a standalone instance. -
How to use SVM metadevices with cluster - sync metadb between cluster nodes
Hi guys,
I feel like I've searched the whole internet regarding this matter but found nothing, so hopefully someone here can help me.
<b>Situation:</b>
I have a running server with Sol 10 U2. SAN storage is attached to the server, but without any virtualization in the SAN network;
the virtualization is done by Solaris Volume Manager.
The customer has decided to extend the environment with a second server to build a cluster. According to our standards we
have to use Symantec Veritas Cluster, but I think that regarding my question it doesn't matter which cluster software is used.
The SVM configuration is nothing special. The internal disks are configured with mirroring, the SAN LUNs are partitioned via format,
and each slice is a metadevice.
d100 p 4.0GB d6
d6 m 44GB d20 d21
d20 s 44GB c1t0d0s6
d21 s 44GB c1t1d0s6
d4 m 4.0GB d16 d17
d16 s 4.0GB c1t0d0s4
d17 s 4.0GB c1t1d0s4
d3 m 4.0GB d14 d15
d14 s 4.0GB c1t0d0s3
d15 s 4.0GB c1t1d0s3
d2 m 32GB d12 d13
d12 s 32GB c1t0d0s1
d13 s 32GB c1t1d0s1
d1 m 12GB d10 d11
d10 s 12GB c1t0d0s0
d11 s 12GB c1t1d0s0
d5 m 6.0GB d18 d19
d18 s 6.0GB c1t0d0s5
d19 s 6.0GB c1t1d0s5
d1034 s 21GB /dev/dsk/c4t600508B4001064300001C00004930000d0s5
d1033 s 6.0GB /dev/dsk/c4t600508B4001064300001C00004930000d0s4
d1032 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004930000d0s3
d1031 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004930000d0s1
d1030 s 5.0GB /dev/dsk/c4t600508B4001064300001C00004930000d0s0
d1024 s 31GB /dev/dsk/c4t600508B4001064300001C00004870000d0s5
d1023 s 512MB /dev/dsk/c4t600508B4001064300001C00004870000d0s4
d1022 s 2.0GB /dev/dsk/c4t600508B4001064300001C00004870000d0s3
d1021 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004870000d0s1
d1020 s 5.0GB /dev/dsk/c4t600508B4001064300001C00004870000d0s0
d1014 s 8.0GB /dev/dsk/c4t600508B4001064300001C00004750000d0s5
d1013 s 1.7GB /dev/dsk/c4t600508B4001064300001C00004750000d0s4
d1012 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004750000d0s3
d1011 s 256MB /dev/dsk/c4t600508B4001064300001C00004750000d0s1
d1010 s 4.0GB /dev/dsk/c4t600508B4001064300001C00004750000d0s0
d1004 s 46GB /dev/dsk/c4t600508B4001064300001C00004690000d0s5
d1003 s 6.0GB /dev/dsk/c4t600508B4001064300001C00004690000d0s4
d1002 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004690000d0s3
d1001 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004690000d0s1
d1000 s 5.0GB /dev/dsk/c4t600508B4001064300001C00004690000d0s0
<b>The problem is the following:</b>
The SVM configuration on the second server (cluster node 2) must be the same for the devices d1000-d1034.
Generally speaking, the metadb needs to be in sync.
- How can I manage this?
- Do I have to use disk sets?
- Will a copy of the md.cf/md.tab and an initialization with metainit do it?
It would be great to have several options for how one can manage this.
Thanks and regards,
Markus
Dear Tim,
Thank you for your answer.
I can confirm that Veritas Cluster doesn't support SVM by default. Of course they want to sell their own volume manager ;o).
But that wouldn't be the big problem. With SVM I expect the same behaviour as with VxVM if I have to use disk sets,
and for that I can write a custom agent.
My problem is not the cluster implementation. It's more a fundamental problem with syncing the SVM config for a set
of metadevices between two hosts. I'm far from implementing the devices into the cluster config as long as I don't know
how to let both nodes know about these devices.
Currently only the host that initialized the volumes knows about them. The second node doesn't know anything about the
devices d1000-d1034.
What I need to know in this state is:
- How can I "register" the already initialized metadevices d1000-d1034 on the second cluster node?
- Do I have to use disk sets?
- Can I simply copy and paste the appropriate lines of the md.cf/md.tab?
- Generally speaking: how can one configure SVM so that different hosts see the same metadevices?
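As far as I know, the stock SVM answer to the last two questions is a shared disk set: the set keeps its own state database replicas on the shared disks themselves, so every host in the set sees the same metadevices. A command sketch (the set name is made up; the disk is one of the LUNs from the listing above, assuming both hosts see the same LUNs):

```shell
# Create a named disk set and register both hosts in it
metaset -s sharedset -a -h node1 node2

# Add a shared SAN disk to the set; metaset repartitions it and places
# the set's own state database replicas on the disk itself
metaset -s sharedset -a c4t600508B4001064300001C00004690000d0

# Create metadevices inside the set (visible to every host in the set)
metainit -s sharedset d1000 1 1 c4t600508B4001064300001C00004690000d0s0

# Ownership of the whole set moves between hosts as a unit
metaset -s sharedset -t    # take the set on this node
metaset -s sharedset -r    # release it so the other node can take it
```

By contrast, copying md.cf/md.tab only replays the configuration into each host's local metadb; it does not keep the two copies in sync afterwards, which is exactly the gap disk sets were designed to close.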
Hope that someone can help me!
Thanks,
Markus -
I installed Sun Cluster 3.1 and it all seemed successful. However, the node IDs and the private hostnames seem twisted: "comdb1" has the node ID of 2 and "comdb2" has the node ID of 1. I installed the software from "comdb1", so it should have been used as node 1, right? I've pasted below some info from 'scconf -p':
Cluster node name: comdb1
Node ID: 2
Node enabled: yes
Node private hostname: clusternode2-priv
Node quorum vote count: 1
Node reservation key: 0x4472697D00000002
Node transport adapters: ce4 ce1
Cluster node name: comdb2
Node ID: 1
Node enabled: yes
Node private hostname: clusternode1-priv
Node quorum vote count: 1
Node reservation key: 0x4472697D00000001
Node transport adapters: ce4 ce1
Thank you in advance.
Consequently, when installing Oracle 10g RAC, the database named "db1" is created on node 2 and "db2" is created on node 1, since it relies on the private node name and the node ID. Otherwise I wouldn't bother with how the cluster software names its node IDs.
Thanks again,
Luke -
Hi,
We have one SunFire 12K and one 15K, and two clusters are configured. However, the cluster nodes in each cluster are on the same server (15K or 12K), which we feel is not very good. We need to move the cluster nodes across the SunFire servers. Following are my queries:
1) Is it possible to move a node from the 12K to the 15K, provided all the I/O boards are moved from the 12K to the 15K
along with all network and FC interfaces and the root disk?
In this case, do we need to reconfigure the cluster?
2) Is there any other way to implement this with minimal outage on services?
Thanks in advance,
Raj
I know your post is over 3 years old now, but did you ever find the problem leading to this behaviour?
I get this error on two different 2008 R2 two-node clusters holding lots of DFSR resources when failing over or back.
Service packs and hotfixes (including all DFSR-related ones) are up to date; everything is set up using Microsoft best practices.
Hardware specs are fine (cluster 1: 78 GB memory, 8 cores; cluster 2: 128 GB memory, 32 cores), with EMC storage connected using multiple failsafe and load-balancing FC8 connections.
The storage does not see any unusual load when failing over, and the disk queue length on the cluster nodes is <= 1 when failing over.
The debug logs show these entries:
+ [Error:9101(0x238d) FrsReplicator::GetReplicaSetConfiguration frsreplicatorserver.cpp:2836 2892 C The registry key was not found.]
+ [Error:9101(0x238d) Config::XmlReader::ReadReplicaSetConfig xml.cpp:3034 2892 C The registry key was not found.]
+ [Error:9101(0x238d) Config::RegReader::ReadReplicaConfigValues reg.cpp:1201 2892 C The registry key was not found.]
+ [Error:9101(0x238d) Config::RegConfig::TranslateWin32StatusToConfigFrsStatus reg.cpp:650 2892 C The registry key was not found.]
+ [Error:2(0x2) BaseRegKey::Open regkey.cpp:165 2892 W The system cannot find the specified file.]
+ [Error:9116(0x239c) FrsReplicator::GetReplicaSetConfiguration frsreplicatorserver.cpp:2831 2892 C The configuration was not found.]
The registry keys do exist, with correct permissions.
It seems there is some kind of timing issue with the unloading/loading of the DFSR-related cluster registry entries.
Maybe you have found some solution for this problem?
------------------ Roman Fischer, AUT