Xserve G5 Cluster problem

Hi,
Recently I inherited an Xserve G5 Cluster server. Unfortunately, it fails to boot.
After the blue LEDs draw a line and then fall back, the power LED blinks in a series of 3 shorts - nothing - 3 shorts - nothing, etc.
I wasn't able either to enter Open Firmware for diagnostics or to boot the OS.
Can someone please enlighten me as to what we can do, or what those lights mean?
Thank you!

Hi
They have to be PC-3200 DDR400 ECC SDRAM. (Three short flashes at startup on these machines generally indicates that no installed RAM passed the memory test, so a failed or unseated module is the likely culprit.)
Have you tried Googling for a suitable outlet? Failing that, you could always go to your nearest Apple Store or Apple Repair Specialist and give them the serial number of your Xserve. They should be able to supply you with the relevant RAM as a service replacement.
Have you seen this?
http://mactracker.dreamhosters.com/
Tony

Similar Messages

  • Xserve Remote Diagnostics - problem

    I've got a faulty Xserve G5 node in our cluster, and I'd like to use the Xserve Remote Diagnostics software to perform a quick hardware check (I don't expect much from this software, but it's better than nothing).
    Now, the problem is:
    For some reason the node will not connect to the headnode, which is running the xrdiags software.
    I've got NetBoot set up correctly on the headnode, and the node, once booted using the front-panel diagnostics mode, will connect to the headnode and download the appropriate xrdiags NetBoot image. No problems there. Then the node reboots, and then nothing. I hooked up a display to the node, and it looks like the node has booted correctly, but it cannot 'find' any machines running the xrdiags software...
    It just sits there cycling through the same message over and over.
    If I start the node normally, and then run xrdiags on the headnode using "xrdiags -q -v -u <username>:<password> -remote <nodeipaddress>", it finds the node, connects, authorizes, sets up the NetBoot images, and then reboots the node... but then nothing...
    The headnode displays "Waiting for systems that have initiated testing... Press Control-C to abort." and then
    it sits there waiting for a connection that never comes.
    Any idea how to fix this?
    Xserve G5 cluster Mac OS X (10.3.9)

    I have not used xrdiags much, and this information may not help with this problem, but it may help down the road.
    How long did you wait after you saw "Waiting for systems that have initiated testing... Press Control-C to abort"? The entire process of running xrdiags takes longer than what is mentioned in the Apple docs.
    I did not run verbose mode when I did it. Rather, I used the output flag, as in "xrdiags .....-o /home/user/xrdiagsout ....". After the message on the headnode said "Restarting remote server", I waited a couple of minutes and then did an ls of the output file to see if it had been created. Running ls -l every several minutes, I saw the file actually getting larger in size.
    I did have a problem with the password I supplied in the xrdiags command. Not sure which characters, but some non-alphanumeric characters are not allowed when running xrdiags. I actually had to temporarily change the client password to get this to work.
    On a side note: if you use the -r (reboot) option on the xrdiags command, I had problems getting the client machine to reboot after xrdiags was done. I actually had to do a hard reboot and use the front panel LEDs to set it to boot from the hard drive (third LED).
    Also, when xrdiags is working properly, the CPU LEDs on the front panel will be going in a jigsaw pattern.
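    The "watch the output file grow" approach described above can be sketched as a small script. The xrdiags invocation itself is commented out and its flags are quoted from the post, not verified; paths and credentials are placeholders, and a local append stands in for xrdiags writing results.

```shell
# Sketch: confirm a long-running diagnostic is still making progress by
# polling its output file for growth. The real command would be something
# like the (hypothetical, commented-out) line below.
OUT=/tmp/xrdiagsout
# xrdiags -q -u admin:secret -o "$OUT" -remote 10.0.1.50 &

: > "$OUT"                               # stand-in: create an empty output file
prev=0
for i in 1 2 3; do
  echo "diagnostic chunk $i" >> "$OUT"   # stands in for xrdiags appending results
  size=$(wc -c < "$OUT" | tr -d ' ')
  if [ "$size" -gt "$prev" ]; then
    echo "still running: output is now $size bytes"
  fi
  prev=$size
done
```

    In a real run you would sleep a few minutes between checks instead of looping instantly; the point is simply that a growing file means the run has not hung.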

  • Ironport c160 cluster problems

    Hi!
    I have two IronPort C160s in cluster mode. Tonight one of them stopped working, and I cannot access it, though it still responds to ping.
    In the system log I found only the following line:
    Mon Mar 12 15:30:39 2012 Warning: Error connecting to cluster machine xxxxx (Serial#: xxxxxx-xxxxxx) at IP xx.xxx.xxx.x - Operation timed out - Timeout connecting to remotehost cluster
    Mon Mar 12 15:31:09 2012 Info: Attempting to connect via IPxxxxx toxxxxxxxx port 22 (Explicitly configured)
    My version is:6.5.3-007
    Which logs can I check to find the cause of the problem?
    How can I find out what the problem is?
    How can it be solved?
    Thank you very much

    Well, "queuereset" is not a valid command; what you mean is "resetqueue", which I would strongly recommend not using without a very good reason, because this command removes all messages from the work queue, delivery queues, and quarantines. There are usually less destructive ways to fix a cluster problem.
    BTW, version 5.5 has long been gone, so we won't need to reference any bugs from there any more.
    Regards,
    Andreas
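    To narrow down when the node dropped out, one simple (illustrative) step is to count the cluster connection errors in a saved copy of the log. The sample lines below are abbreviated from the post above; on the appliance itself you would page through the logs from its own CLI instead.

```shell
# Count cluster connection errors in a saved log copy (sample data inline).
cat > /tmp/ironport.log <<'EOF'
Mon Mar 12 15:30:39 2012 Warning: Error connecting to cluster machine host1 - Operation timed out
Mon Mar 12 15:31:09 2012 Info: Attempting to connect via IP x.x.x.x port 22 (Explicitly configured)
EOF
errors=$(grep -c 'Error connecting to cluster' /tmp/ironport.log)
echo "$errors cluster connection error(s) found"
```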

  • SPF is not supported SCVMM cluster problems, when repairing ?


    See:
    *http://forums.sdn.sap.com/thread.jspa?threadID=2056183&tstart=45#10718101

  • Leopard - QMaster and Virtual Cluster problem

    Hi guys,
    Up until yesterday I had my MacPro Octo running under 10.4, where I successfully set up a Virtual Cluster using 4 instances for Compressor. It worked like a charm and my MacPro was doing its job perfectly.
    Today, I made a bootable backup of my 10.4 install and installed 10.5 using the erase and install option (clean install). I installed all my software again and tried setting up my Virtual Cluster again, using the same settings I had under 10.4. Sadly I can't seem to get it working.
    In the QMaster Preferences pane, I have the QuickCluster with Services option checked. For the Compressor entry in the Services I have the Share option checked and used 4 instances for the selected service. The QuickCluster received a decent name, and the option to include unmanaged services from other computers is checked.
    I have the default options set in the Advanced tab (nothing checked except log service activity to log file and Show QMaster service status in the Menu Bar). I then started the cluster using the Start Sharing button.
    Now I open up Compressor and add a file to process (QT encode to iPod), but when I hit the Submit button, my Virtual Cluster doesn't show up in the Cluster dropdown. If I leave the Compressor GUI open for 5 minutes, it will eventually show up in the list, and I can pick it. Sadly, picking it from the list at this point and hitting the Submit button makes Compressor hang.
    I checked my logs, but the only thing concerning Compressor I could find is this :
    4/12/07 20:12:35 Compressor[242] Could not find image named 'MPEG1-Output'.
    4/12/07 20:12:35 Compressor[242] Could not find image named 'MPEG1-Output'.
    4/12/07 20:12:35 Compressor[242] Could not find image named 'MPEG1-Output'.
    4/12/07 20:12:41 Batch Monitor[190] * CDOClient::connect2: CException [NSException raised by 'NSPortTimeoutException', reason = '[NSPortCoder sendBeforeTime:sendReplyPort:] timed out (218488391.647220 218488361.647369) 1'], server [tcp://10.0.1.199:49167]
    4/12/07 20:12:41 Batch Monitor[190] exception caught in -[ClusterStatus getNewStatusFromController:withOptions:withQueryList:]: NSException raised by 'NSPortTimeoutException', reason = '[NSPortCoder sendBeforeTime:sendReplyPort:] timed out (218488391.647220 218488361.647369) 1'
    4/12/07 20:17:55 Batch Monitor[190] * CDOClient::connect2: CException [NSException raised by 'NSPortTimeoutException', reason = '[NSPortCoder sendBeforeTime:sendReplyPort:] timed out (218488705.075513 218488675.075652) 1'], server [tcp://10.0.1.199:49167]
    I tried stopping and then restarting sharing, and I noticed the following entries in my log:
    4/12/07 20:23:26 compressord[210] can't refresh cache from file "/Library/Application Support/Apple Qmaster/qmasterservices.plist"
    4/12/07 20:23:26 compressord[211] can't refresh cache from file "/Library/Application Support/Apple Qmaster/qmasterservices.plist"
    4/12/07 20:23:26 compressord[213] can't refresh cache from file "/Library/Application Support/Apple Qmaster/qmasterservices.plist"
    4/12/07 20:23:26 qmasterca[269] can't refresh cache from file "/Library/Application Support/Apple Qmaster/qmasterservices.plist"
    4/12/07 20:23:26 qmasterqd[199] can't refresh cache from file "/Library/Application Support/Apple Qmaster/qmasterservices.plist"
    4/12/07 20:23:27 QmasterStatusMenu[178] * CDOClient::connect2: CException [NSException raised by 'NSPortTimeoutException', reason = '[NSPortCoder sendBeforeTime:sendReplyPort:] timed out (218489009.603992 218489007.604126) 1'], server [tcp://10.0.1.199:49407]
    4/12/07 20:23:27 Batch Monitor[190] * CDOClient::connect2: CException [NSException raised by 'NSPortTimeoutException', reason = '[NSPortCoder sendBeforeTime:sendReplyPort:] timed out (218489037.738080 218489007.738169) 1'], server [tcp://10.0.1.199:49407]
    4/12/07 20:23:27 Batch Monitor[190] exception caught in -[ClusterStatus getNewStatusFromController:withOptions:withQueryList:]: NSException raised by 'NSPortTimeoutException', reason = '[NSPortCoder sendBeforeTime:sendReplyPort:] timed out (218489037.738080 218489007.738169) 1'
    Batch Monitor immediately detects the cluster being active again, but Compressor doesn't, leaving me only This Computer available in the Cluster dropdown when submitting a batch.
    In my Activity Monitor, I notice that CompressorTranscoder is not responding (the 4 CompressorTranscoderX processes are fine), and the ContentAgent process isn't responding either.
    Does anyone have any clue on what I could check next or how I could fix my problems?
    Thanks a lot in advance,
    Stefaan

    Bah, this is crazy; today it doesn't work anymore. Yesterday my cluster was showing up in the dropdown window, and I could submit a batch to it, and it got processed over my virtual cluster.
    Today, after finishing the second part of my movie, I tried it again. I didn't change anything in my settings, my machine hasn't even rebooted (just recovered from sleep mode), and my cluster isn't showing up at all anymore. Even the Qmaster menu doesn't show it.
    Guess I'll have to wait until it appears again, or try a few things out.
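    The repeated "can't refresh cache" lines in the log above point at /Library/Application Support/Apple Qmaster/qmasterservices.plist being missing or unreadable. A quick pre-flight check before restarting sharing might look like this sketch (it uses a stand-in path and writes a minimal plist so it is safe to run anywhere; on a real machine you would point PLIST at the actual file and skip the printf):

```shell
# Hypothetical pre-flight check: is the Qmaster services plist readable?
PLIST="/tmp/qmasterservices.plist"   # real path: /Library/Application Support/Apple Qmaster/qmasterservices.plist
printf '<?xml version="1.0"?>\n<plist version="1.0"><dict/></plist>\n' > "$PLIST"   # stand-in file for this sketch

if [ -r "$PLIST" ]; then
  status="readable"
else
  status="missing or unreadable -- try re-saving the Qmaster preferences"
fi
echo "qmasterservices.plist: $status"
```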

  • August Patch Cluster Problems

    Has anyone had the following issue after installing the latest Patch Cluster?
    After a reboot I get
    couldn't set locale correctly
    To correct this I have to edit /etc/default/init
    and remove
    LC_COLLATE=en_GB.ISO8859-1
    LC_CTYPE=en_GB.ISO8859-1
    LC_MESSAGES=C
    LC_MONETARY=en_GB.ISO8859-1
    LC_NUMERIC=en_GB.ISO8859-1
    LC_TIME=en_GB.ISO8859-1
    If I then create a flash archive and use this flash archive, the JumpStart process puts the locale info back and the problem appears again.
    It's not critical, as I don't need to be on the latest Patch Cluster, but I wondered if I'm the only one having issues.
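    The manual fix described above (removing the LC_* lines from /etc/default/init) can be scripted so it survives re-jumpstarting. This sketch works on a copy with sample contents; on the real system you would point it at /etc/default/init and keep a backup first.

```shell
# Strip the LC_* locale lines from a copy of /etc/default/init.
INIT=/tmp/init.sample   # stand-in for /etc/default/init
cat > "$INIT" <<'EOF'
TZ=GB
LC_COLLATE=en_GB.ISO8859-1
LC_CTYPE=en_GB.ISO8859-1
LC_MESSAGES=C
LC_MONETARY=en_GB.ISO8859-1
EOF
sed '/^LC_/d' "$INIT" > "$INIT.new"   # write the cleaned copy alongside
remaining=$(grep -c '^LC_' "$INIT.new" || true)
echo "LC_ lines remaining: $remaining"
```

    Running the same cleanup from a JumpStart finish script would stop the flash archive from reintroducing the problem.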

    If you open the directory in CDE's file manager, you can right-click on the zipped file and select unzip. The cluster will be unzipped to a directory structure called x86_recommended or something of the sort. Change to that directory to run the patch cluster install script. The patch script is looking for that directory structure.
    Lee

  • NVGRE Gateway Cluster Problem

    Hello
    We have following setup:
    Management Hyper-V hosts running WAP, SPF and SCVMM 2012 R2 components
    Gateway Hyper-V host: single node gateway hyper-v host, configured as a single node cluster to be able to join extra hardware in the future
    this Hyper-V host runs 2 Windows Server Gateway VMs,configured as a failover cluster.
    The following script is used to deploy these windows server gateway VMs as a high available NVGRE gateway service:
    http://www.hyper-v.nu/archives/mscholman/2015/01/hyper-v-nvgre-gateway-toolkit/
    two tenant Hyper-V hosts running VMs which are using network virtualization
    The setup is completed successfully and when creating a tenant in WAP and creating VM network for this tenant using NAT, the VMs of this tenant are accessible and can access Internet using the HA Gateway cluster.
    The Gateway Hyper-V host and NVGRE Gateway VMs are running in a DMZ zone, in a DMZ Active Directory Domain.
    Management and Tenant Hyper-V hosts, incl all Management VMs, are running in a dedicated internal Active Directory domain.
    Problems start when we failover the Windows Server Gateway service to the other VM node of the NVGRE Gateway cluster. We see in the lookup records on the Gateway Hyper-V host that the MAC address of the gateway record for tenants is updated with the new
    MAC address of the VM node running the gateway service.
    But in SCVMM, apparently, this record is not updated. The tenant hosts still use the old MAC address of the other Gateway VM node.
    When looking in the SCVMM database, we can also see that in the VMNetworkGateway table that the record representing the gateway of the tenant, still points to the MAC address of the PA network adapter of the other node of the NVGRE Gateway cluster, not to the
    new node on which the gateway service is running after initiating a failover.
    On the tenant hyper-v hosts, the lookup record for the gateway also points to the old node as well.
    When manually changing the record in the VMNetworkGateway table to the new MAC address, and refreshing the tenant hosts in SCVMM, all starts working again and the tenant VMs can access the gateway again.
    Anybody else facing this issue? Or is running a NVGRE Gateway cluster on a single Hyper-V node not supported?
    To be complete, the deployed VMs running the gateway service are not configured as HA VMs.
    Regards
    Stijn

    If I understand your post correctly, you have a single Hyper-V host running 2 GW VMs. I think the problem is that when you deploy an HA VM Gateway cluster, it wants to create a cluster resource (the PA IP address) on the Hyper-V host as well. So when you run 2 Hyper-V hosts and 2 GW VMs and you move the active role to another host, it will move the Provider Address to the other Hyper-V host as well. I believe this is by design. You should also ask yourself why you are running 2 VMs in a cluster on the same node ;-)
    I would recommend using a 2-node Hyper-V host cluster (this is needed for the HA PA address, not for your GW VMs).
    Then run the deployment toolkit again. When that's done, take a close look at how the active node has the corresponding PA assigned on that Hyper-V host. Then do a failover, refresh the cluster manager, and take notice of the PA address that has moved along to the other Hyper-V host, which is now the active one. It is difficult to explain in a couple of sentences, but I hope you have the opportunity to build the 2nd Hyper-V host as well and create the cluster.
    Side note: if you want to keep the existing VM Gateway cluster, remove all gateways from VM networks and remove the gateway service from VMM. Then provision the second Hyper-V host, configure the cluster, and live-migrate 1 GW VM node to it. Reconfigure the shared VHDX for quorum and CSV, and then add the network service back again. Don't try to leave it as a network service in VMM and move the VM to another node; it will not fail over correctly.
    Best regards, Mark Scholman. Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.

  • Reformatting a XServe G5 Cluster Node

    I got my hands on a pair of G5 cluster nodes (the ones with a single drive bay and no optical drive) and I need to install Mac OS X Server on one. I tried removing the hard drive and using my MacBook Pro to install to it, but I can't install Mac OS X using the Apple Partition Map, since I am working from an Intel machine.
    The G5 cluster node has no video card, so I can't even plug in my external reader and install it from there.
    A little help, please? I am not a server guy, I'm a video guy, but this happens to have fallen into my hands.
    Thanks,
    Charles

    Hey Chuck,
    You can install whatever partition map you need from the OS X Server installer. Before you begin the installation, go to "Utilities" and open Disk Utility.
    Select the server's hard drive (be careful NOT to select your MacBook's drive; you don't want to erase that). Go to the "Partition" tab. Select "1 Partition" from the Volume Scheme popup. Then click on the "Options" button and select "Apple Partition Map". Hit Apply. This should ensure that your Xserve has the appropriate boot record.
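    The same Disk Utility steps have a command-line equivalent via diskutil. In this sketch the disk identifier and volume name are placeholders (check "diskutil list" for the real ID first), and the command is only printed rather than executed, since running it erases the drive:

```shell
# Build the diskutil command for a single APM partition (what a PowerPC G5
# needs in order to boot). disk2 and ServerHD are placeholders.
TARGET=disk2
cmd="diskutil partitionDisk $TARGET 1 APM JHFS+ ServerHD 100%"
echo "would run: $cmd"
# To actually repartition (destructive!), run:  eval "$cmd"
```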

  • Cluster Problems??

    Hi All,
    Need some help. We have an SAP 4.6C install on a Microsoft cluster with an MS SQL database. One node in the cluster is corrupt and needs to be rebuilt. My question to you all is: can one node of the cluster be rebuilt, or will both nodes have to be rebuilt?
    If so, where can I find the documentation to do this, and can it result in any other problems?
    Thanks
    John

    Hello - The nature of MSCS is failover. Thus one node failure = one node recovery. MSCS documentation would suffice here.
    Regards.

  • Xserve G5 Cluster as Metadata Controller?

    We just got a great deal on a G5 Cluster. Would it make a good metadata controller for our future SAN? The MDC does not use local storage for the metadata right? How much RAM should it have? Thanks for all the info! All you guys are great!

    Thanks for the advice. I do have ARD. It's great on the LAN, but I find it easier to get things done over the WAN using Timbuktu. I wish ARD would allow me to monitor Windows servers also. I have to keep a couple around and they're a pain to monitor. We always get video cards added to the Xserves anyway. We have a couple of big KVM switches. Sometimes you just have to see the real thing. I'm not into Terminal!

  • BorderManager Cluster problems

    I have set up a 2-node NW 6.5 SP8 cluster to run BorderManager 3.9 SP2. I don't have a 'Split Brain Detector' (SBD) partition; the servers only monitor each other through the LAN heartbeat signal sent by the master and the replies from the slave. This has worked well from a high availability perspective, but I keep running into a situation where both nodes will go 'active'.
    Usually, I have Node 0 set as both the cluster master and the host of the NBM proxy resource. Node 1 is then in standby - ready to load the proxy service and assume the proxy IP address if node 0 dies. At some point (the time is variable in days 2 - 5 and doesn't seem to be related to network load) Node 0 will think that Node 1 has failed and will show that on the Cmon console. Shortly afterwards Node 1 will think that Node 0 has failed and bind the proxy IP and cluster master IP and load the proxy. At this time I have two servers; both with the same Cluster Master IP bound and the proxy IP bound and proxy.nlm loaded!
    I can access Node 0 through rconj and it appears to be working fine. If I do a 'display secondary ipaddress' I can see it has both the proxy IP and Cluster Master IP bound to it. The same thing is the case for Node 1. I unload the proxy on Node 0 and reset the server. When it comes back up, it joins the cluster just fine and there doesn't appear to be any other problem.
    Has anyone else seen this behavior? (Craig???)
    thanks,
    Dan

    In article <[email protected]>, Dchuntdnc wrote:
    > but I keep running into a situation where
    > both nodes will go 'active'.
    I've got one of those situations too, at a client.
    >
    > Usually, I have Node 0 set as both the cluster master and the host of
    > the NBM proxy resource. Node 1 is then in standby - ready to load the
    > proxy service and assume the proxy IP address if node 0 dies. At some
    > point (the time is variable in days 2 - 5 and doesn't seem to be related
    > to network load) Node 0 will think that Node 1 has failed and will show
    > that on the Cmon console.
    This sounds familiar, except for me it happens within hours.
    > Shortly afterwards Node 1 will think that
    > Node 0 has failed and bind the proxy IP and cluster master IP and load
    > the proxy. At this time I have two servers; both with the same Cluster
    > Master IP bound and the proxy IP bound and proxy.nlm loaded!
    Yep. Gets annoying, to say the least!
    >
    > I can access Node 0 through rconj and it appears to be working fine.
    > If I do a 'display secondary ipaddress' I can see it has both the proxy
    > IP and Cluster Master IP bound to it. The same thing is the case for
    > Node 1. I unload the proxy on Node 0 and reset the server. When it
    > comes back up, it joins the cluster just fine and there doesn't appear
    > to be any other problem.
    Yep.
    >
    > Has anyone else seen this behavior? (Craig???)
    I have definitely fought this issue, but only on one (of many) BM cluster.
    Both nodes of the cluster are on old servers, and when the proxy is
    active, it is exceptionally busy. (More than 2000 users, and plenty of LAN
    bandwidth). I was on site at the client working on this (and a lot of
    other projects) and I never was able to get to the bottom of it. The fact
    that the server was so busy (24x7) made it hard to experiment on. My hope
    at this point is to get decent newer hardware in there to replace the
    7-year old nodes.
    This happened when one server was BM 3.8 and the other BM 3.9, but it
    continued to happen when I upgraded both to 3.9sp2. It also happened even
    though I moved the heartbeat to dedicated nics with a crossover cable.
    I'm thinking that something causes the LAN drivers to hiccup long enough
    for the server to stop responding to heartbeat - but the proxy seems to
    work continuously without showing a 30-second pause anywhere.
    For the time being, I've left the oldest node not loading cluster
    services. It's a manual failover at this time, but that's better than
    nothing. (And the primary node is quite stable anyway, for months and
    months at a time).
    Craig Johnson
    Novell Support Connection SysOp
    *** For a current patch list, tips, handy files and books on
    BorderManager, go to http://www.craigjconsulting.com ***

  • Patchin portal cluster problem

    I am trying to run Portal patch 13 on a WAS cluster.
    The problem I am getting is that the patch installation asks for a "username and password" for the administrator.
    When I enter the details I get an error.
    My question is: if it is in safe mode, how is the install checking these details? I cannot log into the Visual Admin when the cluster is in safe mode.
    Anybody else have this problem?
    Thanks

    I think I just figured out how safe mode works. It basically just limits the cluster to 1 server and 1 dispatcher. You're right, the same result can be achieved with the config tool.
    Thanks

  • Oracle IAS Cluster Problem

    Can someone tell me how the iAS cluster works?
    Does it need a clustering component, like DB clusterware or WebLogic clustering?
    Thanks very much!

    "Cluster" is a pretty wide/open term. A bit more detail about what you have (version of Oracle Application Server, kind of installation, your topology) and what you want to achieve by having a cluster will help us suggest something. There are multiple types of cluster configurations, varying by Oracle Application Server release.
    So please be a bit more specific about your problem/request.
    Thanks
    Shail

  • Compressor 4 cluster problem

    I have just set up my MacBook Pro and iMac with Compressor 4 as a cluster. Everything looks fine, but when I send the file out to render it fails with an error "error reading source.....no such file or directory". The file renders fine when I don't tick the "this computer plus" box.
    I've followed the path Compressor is using and it points to the alias in the events folder which goes from the "original media" folder back to the actual location of the .mov files. Sure enough, in Finder, OS X tells me the alias has failed. However, when I try to fix it and browse to the original file, the "OK" button lights up but nothing happens when I click it. This seems to be the case for ALL .mov files in every project I have edited.
    The weird thing is that FCP X can obviously see all of these files as everything works fine - I can edit and render with no problem. The issue only arises when I choose "this computer plus" or pick a cluster.
    So it looks like the aliases do point to the correct files but cannot be accessed directly from the Finder or when Compressor 4 looks for them in cluster mode.
    I hope that makes sense.
    Hopefully someone has seen similar behaviour.
    Thanks,
    Jack.

    Hi Studio X, not sure how to do that but I just worked it out. It was (as you seem to have worked out) the alias that was the clue.
    I had put together these FCP X projects on a different USB drive. As I wanted to be more organised I copied over all the projects to a new 1TB USB drive which I'm only using for storage and editing. The good thing about this is that I can simply remove the drive and plug it into a different Mac and FCP X sees everything - events, projects, original files. However, there must be some reference to the name of the old USB drive as part of the alias which Compressor doesn't like when in cluster mode. I started a quick project on the new drive and Compressor 4 worked as a 3 machine cluster with no problems.
    I can't quite understand why FCP X finds the original video at all if it is looking for the drive name and not just the path to the files, but it seems not to care.
    Anyone have any ideas about this?

  • Cluster problem

              Hi,
              when I examine my logs I find these lines. What are these threads trying to
              do? The two reported servers didn't log anything for 5 minutes before the timestamps here.
              ###<Sept 7, 2002 6:37:54 PM PDT> <Warning> <RJVM> <asd> <srv1> <ExecuteThread:
              '5' for queue: 'default'> <kernel identity> <> <000519> <Unable to connect to
              a remote server on address 10.11.21.50 and port 7001 with protocol t3. The Exception
              is java.net.ConnectException: Operation timed out: connect>
              ####<Sept 7, 2002 10:05:47 PM PDT> <Warning> <RJVM> <asd> <srv1> <ExecuteThread:
              '3' for queue: 'default'> <kernel identity> <> <000519> <Unable to connect to
              a remote server on address 10.11.21.51 and port 7001 with protocol t3. The Exception
              is java.net.SocketException: Connection reset by peer: JVM_recv in socket input
              stream read>
              ####<Sept 7, 2002 10:05:47 PM PDT> <Warning> <RJVM> <asd> <srv1> <ExecuteThread:
              '1' for queue: 'default'> <kernel identity> <> <000519> <Unable to connect to
              a remote server on address 10.11.21.512 and port 7001 with protocol t3. The Exception
              is java.net.SocketException: Connection reset by peer: socket write error>
              

              I have the problem too. I am using WLS 7.0 SP1. Is it because of some network setting?
              After some time the threads get stuck.
              ####<Oct 23, 2002 11:12:29 AM PST> <Warning> <RJVM> <SP2> <srv2> <ExecuteThread:
              '16' for queue: 'default'> <kernel identity> <> <000519> <Unable to connect to
              a remote server on address 10.2.20.62 and port 7001 with protocol t3. The Exception
              is java.net.ConnectException: Operation timed out: connect>
              ####<Oct 23, 2002 11:13:16 AM PST> <Warning> <RJVM> <SP2> <srv2> <ExecuteThread:
              '10' for queue: 'default'> <kernel identity> <> <000519> <Unable to connect to
              a remote server on address 10.2.20.62 and port 7001 with protocol t3. The Exception
              is java.net.ConnectException: Operation timed out: connect>
              ####<Oct 23, 2002 11:13:16 AM PST> <Info> <WebLogicServer> <SP2> <srv2> <ExecuteThread:
              '10' for queue: 'default'> <kernel identity> <> <000339> <ExecuteThread: '10'
              for queue: 'default' has become "unstuck".>
              ####<Oct 23, 2002 11:13:16 AM PST> <Info> <WebLogicServer> <SP2> <srv2> <ExecuteThread:
              '16' for queue: 'default'> <kernel identity> <> <000339> <ExecuteThread: '16'
              for queue: 'default' has become "unstuck".>
              Rajesh Mirchandani <[email protected]> wrote:
              >Which server and service pack are you using?
              >
              >This message should show up on one server when one server in the cluster
              >drops out or
              >is force-killed.
              >
              >They are harmless anyway.
              >
              >Joe wrote:
              >
              >> They are clustered managed servers. The message is logged on a managed
              >server.
              >>
              >> Kumar Allamraju <[email protected]> wrote:
              >> >Which server is logging the following messages? Is it admin
              >> >or managed server?
              >> >
              >> >Who are these 10.11.21.50,51 etc..?
              >> >
              >> >Kumar
              >> >
              >
              >--
              >Rajesh Mirchandani
              >Developer Relations Engineer
              >BEA Support
              >
              >
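              As quick triage for warnings like the ones in this thread, the unreachable address:port pairs can be pulled out of a saved copy of the server log to see exactly which cluster members are timing out. The sample lines below are abbreviated from the messages above; the path is a placeholder.

```shell
# Extract the unreachable address:port pairs from WebLogic RJVM warnings.
cat > /tmp/wls.log <<'EOF'
<Unable to connect to a remote server on address 10.11.21.50 and port 7001 with protocol t3. The Exception
<Unable to connect to a remote server on address 10.11.21.51 and port 7001 with protocol t3. The Exception
EOF
addrs=$(sed -n 's/.*on address \([0-9.]*\) and port \([0-9]*\).*/\1:\2/p' /tmp/wls.log | sort -u)
echo "$addrs"
```

              Cross-checking the resulting list against the cluster's configured member addresses shows whether the warnings point at a real node that dropped out or at a stale/mistyped address.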
              
