Cluster node (1 out of 6) is in error for a file channel - SAP XI

Hi,
In a sender file channel for one of our interfaces, one Java node out of the six configured ones has failed. The error message says "Login Incorrect", while the other cluster nodes are polling properly. I have tried updating the password in the channel's configuration in the Integration Directory and activating the channel, but this doesn't help. Please advise!
Thanks in advance!
Regards,
Kumaran

Hello Kumaran,
The status of the file adapter is not being reflected properly, but this should not have an impact. It may simply be that no message has arrived at that node yet; once the node receives a message for processing, the status will change.
Anil

Similar Messages

  • Regarding Pulling out network cable from cluster node

    I have two cluster nodes installed with my application.
    I pulled the network cable out of the primary node where my application is running, so the primary is not reachable from a remote box (cannot ping the primary).
    I found the following error message:
    SUNW,hme0 : No response from Ethernet network : Link down -- cable problem?
    I found the device group and resource group still online on the primary, and Sun Cluster did not fail over to the secondary node. Does Sun Cluster support this scenario?
    Or do I need some additional configuration? Can I get clarification on this?

    Hi Sudheer,
    if you have two interfaces in your IPMP group, I am missing the test address.
    http://docs.sun.com/app/docs/doc/819-3000/emybr?l=en&q=ipmp&a=view
    states a hostname.hme0 as:
    192.168.85.19 netmask + broadcast + group testgroup1 up \
         addif 192.168.85.21 deprecated -failover netmask + broadcast + up
    and for hostname.hme1:
    192.168.85.20 netmask + broadcast + group testgroup1 up \
         addif 192.168.85.22 deprecated -failover netmask + broadcast + up
    You can safely replace the addresses with names if they are in /etc/hosts.
    In this case, the -failover flag on the physical address in your example is wrong.
    If you only have one adapter, a single line in /etc/hostname.hme0 like the one you stated in your example is correct.
    This is from one of my clusters:
    deulwork20 group sc_ipmp0 -failover
    It is the IPMP group Sun Cluster creates for you if you do not specify anything else, so for a single adapter one line like "hadev1 group sc_ipmp0 -failover" is correct.
    Detlef
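    A quick way to double-check the resulting setup after a reboot (a sketch based on the interface and group names in the example above; adjust for your own interfaces):
    ifconfig -a    # each hme interface should show "groupname testgroup1"; the test addresses (hme0:1, hme1:1) should carry the DEPRECATED and NOFAILOVER flags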

  • Help with a Blind Configuration of a G5 Cluster node

    So I bought 2 G5 cluster nodes to dedicate some audiovisual processes to them. My only other Mac computer is a Core 2 Duo MacBook Pro.
    Using Pacifist, I was able to do a clean install of Mac OS X onto the internal drive by putting it into an external enclosure.
    Now here is my problem: the cluster nodes have no video card.
    I plan on using them through the OS X Screen Sharing function once they are connected to the network, but I don't know how to do the initial configuration of Mac OS X on them, since my MacBook Pro cannot boot a system that uses the Apple Partition Map, and the cluster node will not boot from the GUID partition scheme.
    Can anyone please help me?
    Thanks,
    Chuck

    Assuming you're running Mac OS X Server on the cluster node, just boot the server normally - it will run a special first-time-boot process that sets up a network listener.
    You can then install the Server Admin tools on your MacBook Pro and run Server Assistant. Server Assistant will look out over the network and find the new servers, then give you the opportunity to configure them remotely (assign account data, IP address, etc.).
    (note you can also do this as part of the initial install process - boot the server from the Install DVD and run the entire OS installation and configuration remotely via Server Assistant)
    Note: if you're not running Mac OS X Server on the cluster nodes, then the above doesn't apply.

  • Distributed Transaction Coordinator not displaying remotely on a server core cluster node..

    We set up a Server Core single-node cluster (W2012 R2). The MS DTC is running, and the Distributed Transaction Coordinator firewall rules are enabled. I can connect to the firewall rules and compmgmt.msc remotely for this server. When I attempt to connect to the Component Services management console for this server, the MS DTC object is not displayed; only the COM+ applications are displayed. We've set up other Server Core instances without clustering and DTC is displayed.

    Hi Steve,
    We just finished the local test and found this behavior is by design. It is expected that we can't see the DTC remotely from Component Services for a cluster node.
    In my lab, node3 (2012 R2) is a single-node cluster and has the DTC role.

  • Question about cluster node NodeWeight property

    Hi,
    I have a three-node (A/B/C) Windows 2008 R2 SP1 cluster, testCluster, with KB2494036 installed on all three nodes; suppose node A is the active node.
    I configured node C's NodeWeight property to 0, and node A and node B keep the default (NodeWeight=1). I also added a shared disk Q for the cluster quorum.
    So I want to know: if node C and node B are down, does the cluster testCluster go down due to loss of quorum, or does it stay up?
    At first I thought testCluster should stay up, because the cluster has two votes (node A and the quorum disk); node B is down and node C does not participate in voting. But after testing, testCluster went down due to loss of quorum.
    Does anybody know the reason? Thanks.

    Hello mark.gao,
    Let me see if I understand your steps correctly. If you created your cluster with three nodes at the beginning, your quorum model should be "Node Majority", which gives you three votes, one per node.
    Then the vote for node "C" was removed and a disk was added as witness for the cluster quorum; at this point we have two of the original three votes under "Node Majority".
    Question:
    At some point, did you change the quorum model to "Node and Disk Majority"?
    Maybe this is the issue: you are still on "Node Majority", and when nodes "B" and "C" are down there is only one vote, from node "A", so there is no quorum to keep the service online.
    On 2012 we have the awesome option to configure a Dynamic Quorum:
    Dynamic quorum management
    In Windows Server 2012, as an advanced quorum configuration option, you can choose to enable dynamic quorum management by cluster. When this option is enabled, the cluster dynamically manages the vote assignment to nodes, based on the state of each node. Votes are automatically removed from nodes that leave active cluster membership, and a vote is automatically assigned when a node rejoins the cluster. By default, dynamic quorum management is enabled.
    Note
    With dynamic quorum management, the cluster quorum majority is determined by the set of nodes that are active members of the cluster at any time. This is an important distinction from the cluster quorum in Windows Server 2008 R2, where the quorum majority is fixed, based on the initial cluster configuration.
    With dynamic quorum management, it is also possible for a cluster to run on the last surviving cluster node. By dynamically adjusting the quorum majority requirement, the cluster can sustain sequential node shutdowns to a single node.
    The cluster-assigned dynamic vote of a node can be verified with the DynamicWeight common property of the cluster node by using the Get-ClusterNode Windows PowerShell cmdlet. A value of 0 indicates that the node does not have a quorum vote. A value of 1 indicates that the node has a quorum vote.
    The vote assignment for all cluster nodes can be verified by using the Validate Cluster Quorum validation test.
    Additional considerations
    Dynamic quorum management does not allow the cluster to sustain a simultaneous failure of a majority of voting members. To continue running, the cluster must always have a quorum majority at the time of a node shutdown or failure.
    If you have explicitly removed the vote of a node, the cluster cannot dynamically add or remove that vote.
    Configure and Manage the Quorum in a Windows Server 2012 Failover Cluster
    https://technet.microsoft.com/en-us/library/jj612870.aspx#BKMK_dynamic
    Hope this info helps you to reach your goal. :D
    5ALU2 !
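    To put numbers on that (a worked example of the vote math described above): under "Node Majority" with node C's vote removed, the cluster has two voters (A and B) and still needs two votes for a majority, so losing B leaves only one vote and the cluster goes down, which matches what you observed. If the model had actually been switched to "Node and Disk Majority", the voters would be A, B and the witness disk (three votes), and with B and C down you would still have two of three (A plus the disk), so the cluster would stay up.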

  • Hyper-V Failover Cluster Node Corruption

    Dear All,
    Some of my nodes are showing abnormal behavior: they are restarting every now and then. I had updated the cluster nodes, but all updates were OS-specific; there was nothing specific with respect to hardware updates.
    I have analyzed the crash dumps and found that the following is causing the crashes:
    page_fault_in_nonpaged_area
    Does anyone have any idea about this?
    Thanks in advance.

    Hi ,
    What is the OS of the cluster node?
    Did you try removing the protection client for troubleshooting?
    If it is a 2008R2 cluster , please refer to this thread :
    http://social.technet.microsoft.com/Forums/en-US/32ab6a85-6002-4c3c-97ea-27cb1091e9b3/windows-cluster-server-is-getting-restarted?forum=winservergen
    Hope it helps
    Best Regards
    Elton Ji

  • Live Upgrade fails on cluster node with zfs root zones

    We are having issues using Live Upgrade in the following environment:
    -UFS root
    -ZFS zone root
    -Zones are not under cluster control
    -System is fully up to date for patching
    We also use Live Upgrade with the exact same system configuration on other nodes, except that the zones there are UFS root, and Live Upgrade works fine on those.
    Here is the output of a Live Upgrade:
    bash-3.2# lucreate -n sol10-20110505 -m /:/dev/md/dsk/d302:ufs,mirror -m /:/dev/md/dsk/d320:detach,attach,preserve -m /var:/dev/md/dsk/d303:ufs,mirror -m /var:/dev/md/dsk/d323:detach,attach,preserve
    Determining types of file systems supported
    Validating file system requests
    The device name </dev/md/dsk/d302> expands to device path </dev/md/dsk/d302>
    The device name </dev/md/dsk/d303> expands to device path </dev/md/dsk/d303>
    Preparing logical storage devices
    Preparing physical storage devices
    Configuring physical storage devices
    Configuring logical storage devices
    Analyzing system configuration.
    Comparing source boot environment <sol10> file systems with the file
    system(s) you specified for the new boot environment. Determining which
    file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    The device </dev/dsk/c0t1d0s0> is not a root device for any boot environment; cannot get BE ID.
    Creating configuration for boot environment <sol10-20110505>.
    Source boot environment is <sol10>.
    Creating boot environment <sol10-20110505>.
    Creating file systems on boot environment <sol10-20110505>.
    Preserving <ufs> file system for </> on </dev/md/dsk/d302>.
    Preserving <ufs> file system for </var> on </dev/md/dsk/d303>.
    Mounting file systems for boot environment <sol10-20110505>.
    Calculating required sizes of file systems for boot environment <sol10-20110505>.
    Populating file systems on boot environment <sol10-20110505>.
    Checking selection integrity.
    Integrity check OK.
    Preserving contents of mount point </>.
    Preserving contents of mount point </var>.
    Copying file systems that have not been preserved.
    Creating shared file system mount points.
    Creating snapshot for <data/zones/img1> on <data/zones/img1@sol10-20110505>.
    Creating clone for <data/zones/img1@sol10-20110505> on <data/zones/img1-sol10-20110505>.
    Creating snapshot for <data/zones/jdb3> on <data/zones/jdb3@sol10-20110505>.
    Creating clone for <data/zones/jdb3@sol10-20110505> on <data/zones/jdb3-sol10-20110505>.
    Creating snapshot for <data/zones/posdb5> on <data/zones/posdb5@sol10-20110505>.
    Creating clone for <data/zones/posdb5@sol10-20110505> on <data/zones/posdb5-sol10-20110505>.
    Creating snapshot for <data/zones/geodb3> on <data/zones/geodb3@sol10-20110505>.
    Creating clone for <data/zones/geodb3@sol10-20110505> on <data/zones/geodb3-sol10-20110505>.
    Creating snapshot for <data/zones/dbs9> on <data/zones/dbs9@sol10-20110505>.
    Creating clone for <data/zones/dbs9@sol10-20110505> on <data/zones/dbs9-sol10-20110505>.
    Creating snapshot for <data/zones/dbs17> on <data/zones/dbs17@sol10-20110505>.
    Creating clone for <data/zones/dbs17@sol10-20110505> on <data/zones/dbs17-sol10-20110505>.
    WARNING: The file </tmp/.liveupgrade.4474.7726/.lucopy.errors> contains a
    list of <2> potential problems (issues) that were encountered while
    populating boot environment <sol10-20110505>.
    INFORMATION: You must review the issues listed in
    </tmp/.liveupgrade.4474.7726/.lucopy.errors> and determine if any must be
    resolved. In general, you can ignore warnings about files that were
    skipped because they did not exist or could not be opened. You cannot
    ignore errors such as directories or files that could not be created, or
    file systems running out of disk space. You must manually resolve any such
    problems before you activate boot environment <sol10-20110505>.
    Creating compare databases for boot environment <sol10-20110505>.
    Creating compare database for file system </var>.
    Creating compare database for file system </>.
    Updating compare databases on boot environment <sol10-20110505>.
    Making boot environment <sol10-20110505> bootable.
    ERROR: unable to mount zones:
    WARNING: zone jdb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/jdb3-sol10-20110505 does not exist.
    WARNING: zone posdb5 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/posdb5-sol10-20110505 does not exist.
    WARNING: zone geodb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/geodb3-sol10-20110505 does not exist.
    WARNING: zone dbs9 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs9-sol10-20110505 does not exist.
    WARNING: zone dbs17 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs17-sol10-20110505 does not exist.
    zoneadm: zone 'img1': "/usr/lib/fs/lofs/mount /.alt.tmp.b-tWc.mnt/global/backups/backups/img1 /.alt.tmp.b-tWc.mnt/zoneroot/img1-sol10-20110505/lu/a/backups" failed with exit code 111
    zoneadm: zone 'img1': call to zoneadmd failed
    ERROR: unable to mount zone <img1> in </.alt.tmp.b-tWc.mnt>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
    ERROR: Unable to remount ABE <sol10-20110505>: cannot make ABE bootable
    ERROR: no boot environment is mounted on root device </dev/md/dsk/d302>
    Making the ABE <sol10-20110505> bootable FAILED.
    ERROR: Unable to make boot environment <sol10-20110505> bootable.
    ERROR: Unable to populate file systems on boot environment <sol10-20110505>.
    ERROR: Cannot make file systems for boot environment <sol10-20110505>.
    Any ideas why it can't mount that "backups" lofs filesystem into /.alt? I am going to try to remove the lofs mount from the zone configuration and try again. But if that works, I still need to find a way to use lofs filesystems in the zones while using Live Upgrade.
    Thanks

    I was able to successfully do a Live Upgrade with Zones with a ZFS root in Solaris 10 update 9.
    When attempting to do a "lumount s10u9c33zfs", it gave the following error:
    ERROR: unable to mount zones:
    zoneadm: zone 'edd313': "/usr/lib/fs/lofs/mount -o rw,nodevices /.alt.s10u9c33zfs/global/ora_export/stage /zonepool/edd313 -s10u9c33zfs/lu/a/u04" failed with exit code 111
    zoneadm: zone 'edd313': call to zoneadmd failed
    ERROR: unable to mount zone <edd313> in </.alt.s10u9c33zfs>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: No such file or directory: error unmounting <rpool1/ROOT/s10u9c33zfs>
    ERROR: cannot mount boot environment by name <s10u9c33zfs>
    The solution in this case was:
    zonecfg -z edd313
    info                  ;# display the current settings
    remove fs dir=/u05    ;# remove the filesystem linked to a /global/ filesystem in the global zone
    verify                ;# check the change
    commit                ;# commit the change
    exit
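    To confirm the change took effect before re-running lumount, the remaining fs resources can be listed non-interactively (a sketch assuming the zone name used above):
    zonecfg -z edd313 info fs    # the lofs entry should no longer be listed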

  • WDRuntimeException: Failed to create J2EE cluster node in SLD

    Hello,
    I am getting the below error, but to my knowledge I have everything set up properly.  Let me briefly outline the logistics (I am running everything LOCALLY (will move to remote later)):
    WAS 6.4 SP12
    Set up JCo and tests fine
    Set up Visual Administrator / SLD Data Supplier / HTTP and CIM configured and seem to test fine
    Created SLD and it tests OK
    Created Technical Landscape
    I have noticed that in SP12, in the SLD config, I actually have a NEW category called "System Landscape" above my "Technical Landscape" link. I have not seen this option in previous versions (SP9 or SP11). Is it mandatory to configure this?
    Also, I created a model for Adaptive RFC and found the function I needed successfully.
    Anyway, here is the error when trying to deploy...
    com.sap.tc.webdynpro.services.exceptions.WDRuntimeException: Error while obtaining JCO connection.
         at com.sap.tc.webdynpro.services.datatypes.core.DataTypeBroker$1.fillSldConnection(DataTypeBroker.java:90)
    Caused by: com.sap.tc.webdynpro.services.sal.sl.api.WDSystemLandscapeException: Error while obtaining JCO connection.
    Caused by: com.sap.tc.webdynpro.services.exceptions.WDRuntimeException: Failed to create J2EE cluster node in SLD for 'J2E.SystemHome.bc347792': com.sap.lcr.api.cimclient.LcrException: CIM_ERR_NOT_FOUND: No such instance: SAP_J2EEEngineCluster.CreationClassName="SAP_J2EEEngineCluster",Name="J2E.SystemHome.bc347792"
    Any help will be appreciated!

    I figured it out, for those who may have a similar problem.
    Although I had created and tested my JCos properly and they were working fine, somehow, and I still don't know why, they went RED in the JCo Maintenance screen.
    I had to create them again and it works fine now.

  • After reboot, cluster node went into maintenance mode (CONTROL-D)

    Hi there!
    I have configured a 2-node cluster on 2 x Sun Enterprise 220R and a StorEdge D1000.
    Each time I reboot any of the cluster nodes, I get the following error during boot-up:
    The / file system (/dev/rdsk/c0t1d0s0) is being checked.
    /dev/rdsk/c0t1d0s0: UNREF DIR I=35540 OWNER=root MODE=40755
    /dev/rdsk/c0t1d0s0: SIZE=512 MTIME=Jun 5 15:02 2006 (CLEARED)
    /dev/rdsk/c0t1d0s0: UNREF FILE I=1192311 OWNER=root MODE=100600
    /dev/rdsk/c0t1d0s0: SIZE=96 MTIME=Jun 5 13:23 2006 (RECONNECTED)
    /dev/rdsk/c0t1d0s0: LINK COUNT FILE I=1192311 OWNER=root MODE=100600
    /dev/rdsk/c0t1d0s0: SIZE=96 MTIME=Jun 5 13:23 2006 COUNT 0 SHOULD BE 1
    /dev/rdsk/c0t1d0s0: LINK COUNT INCREASING
    /dev/rdsk/c0t1d0s0: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
    In maintenance mode I do:
    # fsck -y -F ufs /dev/rdsk/c0t1d0s0
    and it manages to correct the problem, but the problem occurs again after each reboot on each cluster node!
    I have installed Sun Cluster 3.1 on Solaris 9 SPARC.
    How can I get rid of it?
    Any ideas?
    Brgds,
    Sergej

    Hi, I get this:
    112941-09 SunOS 5.9: sysidnet Utility Patch
    116755-01 SunOS 5.9: usr/snadm/lib/libadmutil.so.2 Patch
    113434-30 SunOS 5.9: /usr/snadm/lib Library and Differential Flash Patch
    112951-13 SunOS 5.9: patchadd and patchrm Patch
    114711-03 SunOS 5.9: usr/sadm/lib/diskmgr/VDiskMgr.jar Patch
    118064-04 SunOS 5.9: Admin Install Project Manager Client Patch
    113742-01 SunOS 5.9: smcpreconfig.sh Patch
    113813-02 SunOS 5.9: Gnome Integration Patch
    114501-01 SunOS 5.9: drmproviders.jar Patch
    112943-09 SunOS 5.9: Volume Management Patch
    113799-01 SunOS 5.9: solregis Patch
    115697-02 SunOS 5.9: mtmalloc lib Patch
    113029-06 SunOS 5.9: libaio.so.1 librt.so.1 and abi_libaio.so.1 Patch
    113981-04 SunOS 5.9: devfsadm Patch
    116478-01 SunOS 5.9: usr platform links Patch
    112960-37 SunOS 5.9: patch libsldap ldap_cachemgr libldap
    113332-07 SunOS 5.9: libc_psr.so.1 Patch
    116500-01 SunOS 5.9: SVM auto-take disksets Patch
    114349-04 SunOS 5.9: sbin/dhcpagent Patch
    120441-03 SunOS 5.9: libsec patch
    114344-19 SunOS 5.9: kernel/drv/arp Patch
    114373-01 SunOS 5.9: UMEM - abi_libumem.so.1 patch
    118558-27 SunOS 5.9: Kernel Patch
    115675-01 SunOS 5.9: /usr/lib/liblgrp.so Patch
    112958-04 SunOS 5.9: patch pci.so
    113451-11 SunOS 5.9: IKE Patch
    112920-02 SunOS 5.9: libipp Patch
    114372-01 SunOS 5.9: UMEM - llib-lumem patch
    116229-01 SunOS 5.9: libgen Patch
    116178-01 SunOS 5.9: libcrypt Patch
    117453-01 SunOS 5.9: libwrap Patch
    114131-03 SunOS 5.9: multi-terabyte disk support - libadm.so.1 patch
    118465-02 SunOS 5.9: rcm_daemon Patch
    113490-04 SunOS 5.9: Audio Device Driver Patch
    114926-02 SunOS 5.9: kernel/drv/audiocs Patch
    113318-25 SunOS 5.9: patch /kernel/fs/nfs and /kernel/fs/sparcv9/nfs
    113070-01 SunOS 5.9: ftp patch
    114734-01 SunOS 5.9: /usr/ccs/bin/lorder Patch
    114227-01 SunOS 5.9: yacc Patch
    116546-07 SunOS 5.9: CDRW DVD-RW DVD+RW Patch
    119494-01 SunOS 5.9: mkisofs patch
    113471-09 SunOS 5.9: truss Patch
    114718-05 SunOS 5.9: usr/kernel/fs/pcfs Patch
    115545-01 SunOS 5.9: nss_files patch
    115544-02 SunOS 5.9: nss_compat patch
    118463-01 SunOS 5.9: du Patch
    116016-03 SunOS 5.9: /usr/sbin/logadm patch
    115542-02 SunOS 5.9: nss_user patch
    116014-06 SunOS 5.9: /usr/sbin/usermod patch
    116012-02 SunOS 5.9: ps utility patch
    117433-02 SunOS 5.9: FSS FX RT Patch
    117431-01 SunOS 5.9: nss_nis Patch
    115537-01 SunOS 5.9: /kernel/strmod/ptem patch
    115336-03 SunOS 5.9: /usr/bin/tar, /usr/sbin/static/tar Patch
    117426-03 SunOS 5.9: ctsmc and sc_nct driver patch
    121319-01 SunOS 5.9: devfsadmd_mod.so Patch
    121316-01 SunOS 5.9: /kernel/sys/doorfs Patch
    121314-01 SunOS 5.9: tl driver patch
    116554-01 SunOS 5.9: semsys Patch
    112968-01 SunOS 5.9: patch /usr/bin/renice
    116552-01 SunOS 5.9: su Patch
    120445-01 SunOS 5.9: Toshiba platform token links (TSBW,Ultra-3i)
    112964-15 SunOS 5.9: /usr/bin/ksh Patch
    112839-08 SunOS 5.9: patch libthread.so.1
    115687-02 SunOS 5.9:/var/sadm/install/admin/default Patch
    115685-01 SunOS 5.9: sbin/netstrategy Patch
    115488-01 SunOS 5.9: patch /kernel/misc/busra
    115681-01 SunOS 5.9: usr/lib/fm/libdiagcode.so.1 Patch
    113032-03 SunOS 5.9: /usr/sbin/init Patch
    113031-03 SunOS 5.9: /usr/bin/edit Patch
    114259-02 SunOS 5.9: usr/sbin/psrinfo Patch
    115878-01 SunOS 5.9: /usr/bin/logger Patch
    116543-04 SunOS 5.9: vmstat Patch
    113580-01 SunOS 5.9: mount Patch
    115671-01 SunOS 5.9: mntinfo Patch
    113977-01 SunOS 5.9: awk/sed pkgscripts Patch
    122716-01 SunOS 5.9: kernel/fs/lofs patch
    113973-01 SunOS 5.9: adb Patch
    122713-01 SunOS 5.9: expr patch
    117168-02 SunOS 5.9: mpstat Patch
    116498-02 SunOS 5.9: bufmod Patch
    113576-01 SunOS 5.9: /usr/bin/dd Patch
    116495-03 SunOS 5.9: specfs Patch
    117160-01 SunOS 5.9: /kernel/misc/krtld patch
    118586-01 SunOS 5.9: cp/mv/ln Patch
    120025-01 SunOS 5.9: ipsecconf Patch
    116527-02 SunOS 5.9: timod Patch
    117155-08 SunOS 5.9: pcipsy Patch
    114235-01 SunOS 5.9: libsendfile.so.1 Patch
    117152-01 SunOS 5.9: magic Patch
    116486-03 SunOS 5.9: tsalarm Driver Patch
    121998-01 SunOS 5.9: two-key mode fix for 3DES Patch
    116484-01 SunOS 5.9: consconfig Patch
    116482-02 SunOS 5.9: modload Utils Patch
    117746-04 SunOS 5.9: patch platform/sun4u/kernel/drv/sparcv9/pic16f819
    121992-01 SunOS 5.9: fgrep Patch
    120768-01 SunOS 5.9: grpck patch
    119438-01 SunOS 5.9: usr/bin/login Patch
    114389-03 SunOS 5.9: devinfo Patch
    116510-01 SunOS 5.9: wscons Patch
    114224-05 SunOS 5.9: csh Patch
    116670-04 SunOS 5.9: gld Patch
    114383-03 SunOS 5.9: Enchilada/Stiletto - pca9556 driver
    116506-02 SunOS 5.9: traceroute patch
    112919-01 SunOS 5.9: netstat Patch
    112918-01 SunOS 5.9: route Patch
    112917-01 SunOS 5.9: ifrt Patch
    117132-01 SunOS 5.9: cachefsstat Patch
    114370-04 SunOS 5.9: libumem.so.1 patch
    114010-02 SunOS 5.9: m4 Patch
    117129-01 SunOS 5.9: adb Patch
    117483-01 SunOS 5.9: ntwdt Patch
    114369-01 SunOS 5.9: prtvtoc patch
    117125-02 SunOS 5.9: procfs Patch
    117480-01 SunOS 5.9: pkgadd Patch
    112905-02 SunOS 5.9: ippctl Patch
    117123-06 SunOS 5.9: wanboot Patch
    115030-03 SunOS 5.9: Multiterabyte UFS - patch mount
    114004-01 SunOS 5.9: sed Patch
    113335-03 SunOS 5.9: devinfo Patch
    113495-05 SunOS 5.9: cfgadm Library Patch
    113494-01 SunOS 5.9: iostat Patch
    113493-03 SunOS 5.9: libproc.so.1 Patch
    113330-01 SunOS 5.9: rpcbind Patch
    115028-02 SunOS 5.9: patch /usr/lib/fs/ufs/df
    115024-01 SunOS 5.9: file system identification utilities
    117471-02 SunOS 5.9: fifofs Patch
    118897-01 SunOS 5.9: stc Patch
    115022-03 SunOS 5.9: quota utilities
    115020-01 SunOS 5.9: patch /usr/lib/adb/ml_odunit
    113720-01 SunOS 5.9: rootnex Patch
    114352-03 SunOS 5.9: /etc/inet/inetd.conf Patch
    123056-01 SunOS 5.9: ldterm patch
    116243-01 SunOS 5.9: umountall Patch
    113323-01 SunOS 5.9: patch /usr/sbin/passmgmt
    116049-01 SunOS 5.9: fdfs Patch
    116241-01 SunOS 5.9: keysock Patch
    113480-02 SunOS 5.9: usr/lib/security/pam_unix.so.1 Patch
    115018-01 SunOS 5.9: patch /usr/lib/adb/dqblk
    113277-44 SunOS 5.9: sd and ssd Patch
    117457-01 SunOS 5.9: elfexec Patch
    113110-01 SunOS 5.9: touch Patch
    113077-17 SunOS 5.9: /platform/sun4u/kernal/drv/su Patch
    115006-01 SunOS 5.9: kernel/strmod/kb patch
    113072-07 SunOS 5.9: patch /usr/sbin/format
    113071-01 SunOS 5.9: patch /usr/sbin/acctadm
    116782-01 SunOS 5.9: tun Patch
    114331-01 SunOS 5.9: power Patch
    112835-01 SunOS 5.9: patch /usr/sbin/clinfo
    114927-01 SunOS 5.9: usr/sbin/allocate Patch
    119937-02 SunOS 5.9: inetboot patch
    113467-01 SunOS 5.9: seg_drv & seg_mapdev Patch
    114923-01 SunOS 5.9: /usr/kernel/drv/logindmux Patch
    117443-01 SunOS 5.9: libkvm Patch
    114329-01 SunOS 5.9: /usr/bin/pax Patch
    119929-01 SunOS 5.9: /usr/bin/xargs patch
    113459-04 SunOS 5.9: udp patch
    113446-03 SunOS 5.9: dman Patch
    116009-05 SunOS 5.9: sgcn & sgsbbc patch
    116557-04 SunOS 5.9: sbd Patch
    120241-01 SunOS 5.9: bge: Link & Speed LEDs flash constantly on V20z
    113984-01 SunOS 5.9: iosram Patch
    113220-01 SunOS 5.9: patch /platform/sun4u/kernel/drv/sparcv9/upa64s
    113975-01 SunOS 5.9: ssm Patch
    117165-01 SunOS 5.9: pmubus Patch
    116530-01 SunOS 5.9: bge.conf Patch
    116529-01 SunOS 5.9: smbus Patch
    116488-03 SunOS 5.9: Lights Out Management (lom) patch
    117131-01 SunOS 5.9: adm1031 Patch
    117124-12 SunOS 5.9: platmod, drmach, dr, ngdr, & gptwocfg Patch
    114003-01 SunOS 5.9: bbc driver Patch
    118539-02 SunOS 5.9: schpc Patch
    112837-10 SunOS 5.9: patch /usr/lib/inet/in.dhcpd
    114975-01 SunOS 5.9: usr/lib/inet/dhcp/svcadm/dhcpcommon.jar Patch
    117450-01 SunOS 5.9: ds_SUNWnisplus Patch
    113076-02 SunOS 5.9: dhcpmgr.jar Patch
    113572-01 SunOS 5.9: docbook-to-man.ts Patch
    118472-01 SunOS 5.9: pargs Patch
    122709-01 SunOS 5.9: /usr/bin/dc patch
    113075-01 SunOS 5.9: pmap patch
    113472-01 SunOS 5.9: madv & mpss lib Patch
    115986-02 SunOS 5.9: ptree Patch
    115693-01 SunOS 5.9: /usr/bin/last Patch
    115259-03 SunOS 5.9: patch usr/lib/acct/acctcms
    114564-09 SunOS 5.9: /usr/sbin/in.ftpd Patch
    117441-01 SunOS 5.9: FSSdispadmin Patch
    113046-01 SunOS 5.9: fcp Patch
    118191-01 gtar patch
    114818-06 GNOME 2.0.0: libpng Patch
    117177-02 SunOS 5.9: lib/gss module Patch
    116340-05 SunOS 5.9: gzip and Freeware info files patch
    114339-01 SunOS 5.9: wrsm header files Patch
    122673-01 SunOS 5.9: sockio.h header patch
    116474-03 SunOS 5.9: libsmedia Patch
    117138-01 SunOS 5.9: seg_spt.h
    112838-11 SunOS 5.9: pcicfg Patch
    117127-02 SunOS 5.9: header Patch
    112929-01 SunOS 5.9: RIPv2 Header Patch
    112927-01 SunOS 5.9: IPQos Header Patch
    115992-01 SunOS 5.9: /usr/include/limits.h Patch
    112924-01 SunOS 5.9: kdestroy kinit klist kpasswd Patch
    116231-03 SunOS 5.9: llc2 Patch
    116776-01 SunOS 5.9: mipagent patch
    117420-02 SunOS 5.9: mdb Patch
    117179-01 SunOS 5.9: nfs_dlboot Patch
    121194-01 SunOS 5.9: usr/lib/nfs/statd Patch
    116502-03 SunOS 5.9: mountd Patch
    113331-01 SunOS 5.9: usr/lib/nfs/rquotad Patch
    113281-01 SunOS 5.9: patch /usr/lib/netsvc/yp/ypbind
    114736-01 SunOS 5.9: usr/sbin/nisrestore Patch
    115695-01 SunOS 5.9: /usr/lib/netsvc/yp/yppush Patch
    113321-06 SunOS 5.9: patch sf and socal
    113049-01 SunOS 5.9: luxadm & liba5k.so.2 Patch
    116663-01 SunOS 5.9: ntpdate Patch
    117143-01 SunOS 5.9: xntpd Patch
    113028-01 SunOS 5.9: patch /kernel/ipp/flowacct
    113320-06 SunOS 5.9: patch se driver
    114731-08 SunOS 5.9: kernel/drv/glm Patch
    115667-03 SunOS 5.9: Chalupa platform support Patch
    117428-01 SunOS 5.9: picl Patch
    113327-03 SunOS 5.9: pppd Patch
    114374-01 SunOS 5.9: Perl patch
    115173-01 SunOS 5.9: /usr/bin/sparcv7/gcore /usr/bin/sparcv9/gcore Patch
    114716-02 SunOS 5.9: usr/bin/rcp Patch
    112915-04 SunOS 5.9: snoop Patch
    116778-01 SunOS 5.9: in.ripngd patch
    112916-01 SunOS 5.9: rtquery Patch
    112928-03 SunOS 5.9: in.ndpd Patch
    119447-01 SunOS 5.9: ses Patch
    115354-01 SunOS 5.9: slpd Patch
    116493-01 SunOS 5.9: ProtocolTO.java Patch
    116780-02 SunOS 5.9: scmi2c Patch
    112972-17 SunOS 5.9: patch /usr/lib/libssagent.so.1 /usr/lib/libssasnmp.so.1 mibiisa
    116480-01 SunOS 5.9: IEEE 1394 Patch
    122485-01 SunOS 5.9: 1394 mass storage driver patch
    113716-02 SunOS 5.9: sar & sadc Patch
    115651-02 SunOS 5.9: usr/lib/acct/runacct Patch
    116490-01 SunOS 5.9: acctdusg Patch
    117473-01 SunOS 5.9: fwtmp Patch
    116180-01 SunOS 5.9: geniconvtbl Patch
    114006-01 SunOS 5.9: tftp Patch
    115646-01 SunOS 5.9: libtnfprobe shared library Patch
    113334-03 SunOS 5.9: udfs Patch
    115350-01 SunOS 5.9: ident_udfs.so.1 Patch
    122484-01 SunOS 5.9: preen_md.so.1 patch
    117134-01 SunOS 5.9: svm flasharchive patch
    116472-02 SunOS 5.9: rmformat Patch
    112966-05 SunOS 5.9: patch /usr/sbin/vold
    114229-01 SunOS 5.9: action_filemgr.so.1 Patch
    114335-02 SunOS 5.9: usr/sbin/rmmount Patch
    120443-01 SunOS 5.9: sed core dumps on long lines
    121588-01 SunOS 5.9: /usr/xpg4/bin/awk Patch
    113470-02 SunOS 5.9: winlock Patch
    119211-07 NSS_NSPR_JSS 3.11: NSPR 4.6.1 / NSS 3.11 / JSS 4.2
    118666-05 J2SE 5.0: update 6 patch
    118667-05 J2SE 5.0: update 6 patch, 64bit
    114612-01 SunOS 5.9: ANSI-1251 encodings file errors
    114276-02 SunOS 5.9: Extended Arabic support in UTF-8
    117400-01 SunOS 5.9: ISO8859-6 and ISO8859-8 iconv symlinks
    113584-16 SunOS 5.9: yesstr, nostr nl_langinfo() strings incorrect in S9
    117256-01 SunOS 5.9: Remove old OW Xresources.ow files
    112625-01 SunOS 5.9: Dcam1394 patch
    114600-05 SunOS 5.9: vlan driver patch
    117119-05 SunOS 5.9: Sun Gigabit Ethernet 3.0 driver patch
    117593-04 SunOS 5.9: Manual Page updates for Solaris 9
    112622-19 SunOS 5.9: M64 Graphics Patch
    115953-06 Sun Cluster 3.1: Sun Cluster sccheck patch
    117949-23 Sun Cluster 3.1: Core Patch for Solaris 9
    115081-06 Sun Cluster 3.1: HA-Sun One Web Server Patch
    118627-08 Sun Cluster 3.1: Manageability and Serviceability Agent
    117985-03 SunOS 5.9: XIL 1.4.2 Loadable Pipeline Libraries
    113896-06 SunOS 5.9: en_US.UTF-8 locale patch
    114967-02 SunOS 5.9: FDL patch
    114677-11 SunOS 5.9: International Components for Unicode Patch
    112805-01 CDE 1.5: Help volume patch
    113841-01 CDE 1.5: answerbook patch
    113839-01 CDE 1.5: sdtwsinfo patch
    115713-01 CDE 1.5: dtfile patch
    112806-01 CDE 1.5: sdtaudiocontrol patch
    112804-02 CDE 1.5: sdtname patch
    113244-09 CDE 1.5: dtwm patch
    114312-02 CDE1.5: GNOME/CDE Menu for Solaris 9
    112809-02 CDE:1.5 Media Player (sdtjmplay) patch
    113868-02 CDE 1.5: PDASync patch
    119976-01 CDE 1.5: dtterm patch
    112771-30 Motif 1.2.7 and 2.1.1: Runtime library patch for Solaris 9
    114282-01 CDE 1.5: libDtWidget patch
    113789-01 CDE 1.5: dtexec patch
    117728-01 CDE1.5: dthello patch
    113863-01 CDE 1.5: dtconfig patch
    112812-01 CDE 1.5: dtlp patch
    113861-04 CDE 1.5: dtksh patch
    115972-03 CDE 1.5: dtterm libDtTerm patch
    114654-02 CDE 1.5: SmartCard patch
    117632-01 CDE1.5: sun_at patch for Solaris 9
    113374-02 X11 6.6.1: xpr patch
    118759-01 X11 6.6.1: Font Administration Tools patch
    117577-03 X11 6.6.1: TrueType fonts patch
    116084-01 X11 6.6.1: font patch
    113098-04 X11 6.6.1: X RENDER extension patch
    112787-01 X11 6.6.1: twm patch
    117601-01 X11 6.6.1: libowconfig.so.0 patch
    117663-02 X11 6.6.1: xwd patch
    113764-04 X11 6.6.1: keyboard patch
    113541-02 X11 6.6.1: XKB patch
    114561-01 X11 6.6.1: X splash screen patch
    113513-02 X11 6.6.1: platform support for new hardware
    116121-01 X11 6.4.1: platform support for new hardware
    114602-04 X11 6.6.1: libmpg_psr patch
    Is there a bundle to install, or do I have to install each patch separately?

  • OrainstRoot.sh: Failure to promote local gpnp setup to other cluster nodes

    I'm trying to build a 2-node cluster and everything appeared to be going swimmingly until the end of the first node's run of the orainstRoot.sh script.
    The following is the end of the output:
    Disk Group OCR_VOTE created successfully.
    clscfg: -install mode specified
    Successfully accumulated necessary OCR keys.
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    CRS-4256: Updating the profile
    Successful addition of voting disk 4e3f692529584f8bbf7f16146bd90346.
    Successful addition of voting disk 728bed918cf54f6cbf904d37638c674b.
    Successful addition of voting disk 8ac20793405d4fdcbfcafc7e311f877d.
    Successfully replaced voting disk group with +OCR_VOTE.
    CRS-4256: Updating the profile
    CRS-4266: Voting file(s) successfully replaced
    ## STATE File Universal Id File Name Disk group
    1. ONLINE 4e3f692529584f8bbf7f16146bd90346 (ORCL:VOTE01) [OCR_VOTE]
    2. ONLINE 728bed918cf54f6cbf904d37638c674b (ORCL:VOTE02) [OCR_VOTE]
    3. ONLINE 8ac20793405d4fdcbfcafc7e311f877d (ORCL:VOTE03) [OCR_VOTE]
    Located 3 voting disk(s).
    Failed to rmtcopy "/tmp/fileLgKPGV" to "/u01/app/11.2.0/grid/gpnp/manifest.txt" for nodes {ilprevzedb01,ilprevzedb02}, rc=256
    Failed to rmtcopy "/u01/app/11.2.0/grid/gpnp/ilprevzedb01/profiles/peer/profile.xml" to "/u01/app/11.2.0/grid/gpnp/profiles/peer/profile.xml" for nodes {ilprevzedb01,ilprevzedb02}, rc=256
    rmtcopy aborted
    Failed to promote local gpnp setup to other cluster nodes at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 6504.
    /u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl execution failed
    Has anyone run into this problem and found a solution?
    Thanks in advance!

    Ok, for everyone out there, I resolved the issue. Hopefully this will help others encountering the same problem.
    It turns out that when the OS was installed, iptables firewall was enabled. This will cause havoc with the installer scripts.
    My first inkling should have been when the installer stalled at 65% trying to copy home directories between nodes, the first time I ran through the installer.
    At that time, Googling around found that iptables might be the problem and indeed it was running, so I just did a 'service iptables stop' WITHOUT REBOOTING THE NODES and re-ran the installer.
    Well, it looks as though NOT REBOOTING THE NODES doesn't quite cut it. I then did a 'chkconfig iptables off' and REBOOTED BOTH NODES.
    Oracle support simply provided me with: How to Proceed from Failed 11gR2 Grid Infrastructure (CRS) Installation (Doc ID 942166.1), which didn't really work all that well, lots of failures, errors, etc. So I just deleted the 11.2.0 directory and tried running the installer again.
    This time the install went through without problems.
    Thanks!
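    For anyone hitting the same thing, here is the fix from above condensed into commands (a sketch assuming a RHEL/OEL-style install where service and chkconfig manage iptables; run as root on every cluster node, then restart the installer):
    service iptables stop     # stop the firewall for the current session
    chkconfig iptables off    # keep it from coming back at boot
    reboot                    # reboot the node; simply stopping the service was not enough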

  • VMM Thinks Cluster Node is in Maintenance

    I'm running VMM 2012 SP1 (version 3.1.6020.0). The cluster in question is Windows Server 2012 Datacenter.
    I performed maintenance on one of my Hyper-V failover clusters (installed the KBs in this article), and when I took one of the nodes out of maintenance I successfully migrated VMs between the two nodes via the Failover Cluster Manager console. However, I noticed that VMM still had the exclamation mark on the cluster name. I didn't notice this until a couple of days later, and now I'm trying to do a cross-cluster migration and it's not allowing me because VMM thinks the node is in maintenance. I've tried rebooting the VMM server, refreshing the cluster, and refreshing all the VMs, with no luck.
    When I go into the Failover Cluster Manager on each of the cluster nodes, both nodes show as in production (not in maintenance). Any ideas?
    Note: I took the node out of maintenance via the Failover Cluster Manager console and NOT through the VMM console, as the VMM server was unavailable at the time.

    It is interesting that VMM was unavailable at the time you were doing this. Are you able to refresh this particular host and see if anything changes? Is the option to "stop maintenance mode" available for this host from VMM?
    Anyhow, the root cause here will be that the data in the VMM database is not consistent with your resources, so as a last resort you could remove and re-add your cluster, just so that the database performs a cleanup of the objects.
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )

  • Could not open (SQL) error log on passive cluster node

    Hi. We have an SCCM 2012 SP1 installation using a clustered SQL instance running on a 2-node SQL Server 2012 cluster.
    On the passive SQL node that is not running the SCCM SQL instance, there are repeated errors in the event log with ID 17058, source MSSQL$SCCM: "initerrlog: Could not open error log file 'K:\MSSQL11.SCCM\MSSQL\Log\ERRORLOG'. Operating system error = 3(The system cannot find the path specified.)."
    While searching I've found out this is caused by the "SMS_SITE_SQL_BACKUP_<siteservername>" service, which is registered on both SQL nodes with a startup type of Automatic and therefore runs on both nodes simultaneously.
    In the smssqlbackup.log file on the passive node, there are errors "SMS_SITE_SQL_BACKUPFailed to start SQL Server.Error code = 0x0.", which makes sense, since the SQL instance is already running on the other node. Therefore it fails.
    This seems like a very bad design bug; can it be prevented or fixed? I could configure our monitoring system to ignore this error, but I would rather not...
    Why is that service running on both (all) SQL cluster nodes simultaneously? Why isn't it made part of the cluster resource group or something like that?

    On the passive node, try removing the SQLServerName\InstanceName entry from the following registry value:
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SMS\Components\SMS_SITE_SQL_BACKUP_<SiteServer>\SQL Server Instance

  • Setup of G5 Cluster Node as a Standalone Server....

    I have tried, to no avail, to set up a G5 Xserve cluster node as a new Mac OS X 10.6 server. Here is where I screwed up: I pulled the drive out, not realizing that creating a new partition on my MacBook Pro would not be compatible, and toasted the partition, or rather made it Intel-based instead of PowerPC-based, and then went about installing the OS via "target mode", went all the way through, and realized that the server would not recognize it. So tonight I went back, created 2 new partitions on the drive, and made them PowerPC-based, but the server still does not recognize the drives when I boot up using the Option key. I get 2 buttons (refresh and, I'm guessing, a next), but the refresh puts up a clock for about 10 - 15 seconds and comes back, and the next does nothing visible.
    Here is what I have tried so far:
    1) Tried booting up with the different boot commands:
    a) #3 Start up from internal drive
    b) #5 Setup in Target mode, but I don't have a PowerPC Mac to install from
    c) #6 Reset NVRAM
    2) Boot with the letter "c", but nothing happens other than getting to a window with a single folder that alternates between a question mark and the Mac guy logo (sorry, don't know the exact name)
    3) Boot with a FireWire external Blu-ray DVD player, but it does not seem to recognize it at all (could be the Blu-ray, I guess; have not thought of that)
    And I'm sure I have tried a few other things, but I'm currently at wits' end. I have a video card (the 3rd one was the charm) so I have video, but I have no idea how to get my Mac OS X Server software installed on this machine....
    Any help or suggestions would be greatly appreciated - oh, I'm sure by now you know I'm new to Macs - I was an old Apple ][e guy but have been on PCs since the late 80's and finally got back to Apple - love them.

    I have tried, to no avail, to set up a G5 Xserve cluster node as a new Mac OS X 10.6 server.
    Stop right there.
    10.6 is Intel-only. It won't boot a PowerPC-based server. It doesn't matter about the disk format, or anything else. 10.5.x is as far as you can go with this machine.

  • Question about cluster node majority voting

    We've been having problems with a DB instance crashing regularly. This weekend when it crashed, it seems to have taken the node it was on down with it - or that was a separate incident...
    Right now I have 3 nodes in the cluster. 2 nodes are running 3 instances (2 on one of them). The 3rd node is in a state where the OS is mostly unusable and the Cluster service will not start.
    Event Log:
    "The failover cluster database could not be unloaded. If restarting the cluster service does not fix the problem, please restart the machine."
    Cluster Log from that machine:
    00003768.000067a0::2014/01/06-03:28:05.393 INFO  -----------------------------+ LOG BEGIN +-----------------------------
    00003768.000067a0::2014/01/06-03:28:05.393 INFO  [CS] Starting clussvc as a service
    00003768.000067a0::2014/01/06-03:28:05.394 INFO  [CS] cluster service logging level is 2
    00003768.00004c30::2014/01/06-03:28:05.521 DBG   [NETFTAPI] received NsiInitialNotification
    00003768.00004c30::2014/01/06-03:28:05.523 DBG   [NETFTAPI] received NsiInitialNotification
    00003768.000031f4::2014/01/06-03:28:05.588 DBG   [NETFTAPI] received NsiAddInstance  for 169.254.3.47
    00003768.00004eb4::2014/01/06-03:28:05.590 ERR   [DM] Error while restoring (refreshing) the hive: STATUS_INVALID_PARAMETER(c000000d
    00003768.00004eb4::2014/01/06-03:28:05.592 ERR   [DM] mscs::DmAgent::Start: STATUS_INVALID_PARAMETER(c000000d' because of 'Load(NOTHROW(), securityAttributes, discardError )'
    00003768.00004eb4::2014/01/06-03:28:05.592 ERR   [DM] Node 3: failed to unload cluster hive, error 87.
    00003768.00004eb4::2014/01/06-03:28:05.592 ERR   Hive unload failed (status = 87)
    00003768.00004eb4::2014/01/06-03:28:05.592 ERR   FatalError is Calling Exit Process.
    This is a 3-node cluster set to Node Majority; I don't have an available drive letter for a witness disk. Since the cluster service won't start, I'm not certain how the cluster is still running, but I am thankful that it is.
    A reboot might fix everything, but I'm very worried that if I reboot the server and the cluster service still fails to start, it may prevent the entire cluster from starting and we won't be able to run the instances on the other 2 nodes.
    Does the 3rd server still count toward the odd number of voters, even if the cluster service won't start? If I reboot and the cluster service still fails to start, will the cluster itself be able to stay in an UP state and run the DB instances on the other nodes?
    I already need to open an MS Support incident on the DB instance crashing, so I'd rather not have to open a 2nd one just to answer this hopefully simple question.
    Thanks in advance!
    Mark

    I'll answer it here, since it matters fundamentally to SQL High Availability.
    There are a couple of entities you are conflating here, leading to much confusion. There is a difference between the Cluster and the cluster service.
    The cluster service will run on a node once the Failover Cluster feature is installed on that node. The cluster service will run even if a cluster is not created. It may generate errors and not participate in a Cluster if it cannot talk to the other nodes, but it will not shut down.
    The Cluster itself requires a quorum, that is, a majority of votes, in order to operate. With three nodes, you should choose the Node Majority quorum model, which sounds like what you have. Any two votes will count, so the third node being offline does not matter. You can safely restart the cluster service on the failed node, and even restart the node. Note that with the third node down, you have no redundancy. (Windows 2012 and 2012 R2 have dynamic quorum, which adjusts the quorum count based on the last "settled" quorum vote, but that doesn't apply here.)
    I am concerned with your statement that you are out of drive letters. With three instances, you should have plenty of drive letters left. I suggest investigating mount points. You only need one drive letter per instance when using mount points.
    Geoff N. Hiten Principal Consultant Microsoft SQL Server MVP
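    To make the vote math concrete (a short worked example based on the explanation above): with three nodes under Node Majority there are three votes and the majority is two, so with only the third node down the remaining two nodes still hold two of three votes and the cluster stays up; only when a second voter is lost does one of three stop being a majority, and then the whole cluster goes down.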

  • Common memory place across the cluster nodes

    Hi All,
    I am a WebSphere Application Server v6.1 user. I am running an application that uses a HashMap to store common information in the form of key-value pairs. The application works fine in a single-server environment, but the same application fails in a cluster environment. This happens because the HashMap contents are not available to the other cluster nodes, which run in different JVMs.
    Could anybody suggest a good design where I can use a common place to store the HashMap information, such as a queue, a database, or any common memory area that is available across the cluster nodes? I am not really familiar with the memory facilities offered by WebSphere. (Using a central database is the option I would most like to avoid, as the application makes several calls to the database, resulting in deadlocks and 100% CPU utilization.)
    Also, the values are added to the HashMap dynamically, so the shared store should allow me to add values dynamically at runtime.
    Please suggest any other way, or any links to refer to, to achieve the above.
    Thanks in advance
    -Sandeep

    For a similar scenario we maintain a version flag in the DB, based on which we reload the HashMap: each node keeps the map in local memory and re-reads the data only when the version value in the database changes, so the database is hit for a cheap version check rather than the full data on every call. I'm also interested in finding a design that avoids the DB.
