WAP 561 CLUSTER PROBLEM

Dear Support,
May I know whether I can cluster a WAP561 with a WAP321?
Regards,
Danny

Hello Danny,
No, you cannot cluster a WAP321 and a WAP561 together. Clustering works only with like devices (WAP321s together, or a WAP551/WAP561 combination).
https://supportforums.cisco.com/docs/DOC-40281
Hope this helps.
Nagaraja

Similar Messages

  • Ironport c160 cluster problems

    Hi!
    I have two IronPort C160s in cluster mode. Tonight one of them stopped working and I cannot access it, but it still responds to ping.
    In the system log I found only the following line:
    Mon Mar 12 15:30:39 2012 Warning: Error connecting to cluster machine xxxxx (Serial#: xxxxxx-xxxxxx) at IP xx.xxx.xxx.x - Operation timed out - Timeout connecting to remotehost cluster
    Mon Mar 12 15:31:09 2012 Info: Attempting to connect via IPxxxxx toxxxxxxxx port 22 (Explicitly configured)
    My version is: 6.5.3-007
    What can I log to find the cause of the problem?
    How can I find out what the problem is?
    How can it be solved?
    Thank you very much

    Well, "queuereset" is not a valid command; what you mean is "resetqueue", which I would strongly recommend not using without a very good reason, because this command removes all messages from the work queue, delivery queues, and quarantines. There are usually less destructive ways to fix a cluster problem.
    BTW, version 5.5 is long gone, so we no longer need to reference any bugs from it.
    Regards,
    Andreas

  • WAP 561 Web Browsing only

    I am looking for some help configuring a Cisco WAP 561 to allow only web browsing. Currently, I can configure an ACL to deny specific ports: I set rules to deny those ports, and the last rule allows everything else. This works OK, but I can only configure 10 rules.
    I would rather allow only ports 80 and 443. Is there a way to do this? If so, I'm having no luck figuring it out.
    Also, the WAP561 does not have a command-line interface, only web configuration.
    Below is what I have configured. I am denying share drives, remote desktop, and some specific internal IPs.
    <acl name="GuestAccess">
    <acl-type>ipv4</acl-type>
    <in-use>1</in-use>
    </acl>
    <rule>
    <acl-name>GuestAccess</acl-name>
    <acl-type>ipv4</acl-type>
    <action>deny</action>
    <protocol>tcp</protocol>
    <dst-port>135</dst-port>
    <index>19</index>
    <commit>3</commit>
    <rule-index>1</rule-index>
    </rule>
    <rule>
    <acl-name>GuestAccess</acl-name>
    <acl-type>ipv4</acl-type>
    <action>deny</action>
    <protocol>tcp</protocol>
    <dst-port>445</dst-port>
    <index>20</index>
    <commit>3</commit>
    <rule-index>2</rule-index>
    </rule>
    <rule>
    <acl-name>GuestAccess</acl-name>
    <acl-type>ipv4</acl-type>
    <action>deny</action>
    <protocol>udp</protocol>
    <dst-port>137</dst-port>
    <index>21</index>
    <commit>3</commit>
    <rule-index>3</rule-index>
    </rule>
    <rule>
    <acl-name>GuestAccess</acl-name>
    <acl-type>ipv4</acl-type>
    <action>deny</action>
    <protocol>udp</protocol>
    <dst-port>138</dst-port>
    <index>22</index>
    <commit>3</commit>
    <rule-index>4</rule-index>
    </rule>
    <rule>
    <acl-name>GuestAccess</acl-name>
    <acl-type>ipv4</acl-type>
    <action>deny</action>
    <protocol>tcp</protocol>
    <dst-port>3389</dst-port>
    <index>23</index>
    <commit>3</commit>
    <rule-index>5</rule-index>
    </rule>
    <rule>
    <acl-name>GuestAccess</acl-name>
    <acl-type>ipv4</acl-type>
    <action>deny</action>
    <protocol>ip</protocol>
    <dst-ip>192.168.24.16</dst-ip>
    <dst-ip-mask>0.0.0.0</dst-ip-mask>
    <index>24</index>
    <commit>3</commit>
    <rule-index>6</rule-index>
    </rule>
    <rule>
    <acl-name>GuestAccess</acl-name>
    <acl-type>ipv4</acl-type>
    <action>deny</action>
    <protocol>ip</protocol>
    <dst-ip>192.168.25.164</dst-ip>
    <dst-ip-mask>0.0.0.0</dst-ip-mask>
    <index>25</index>
    <commit>3</commit>
    <rule-index>7</rule-index>
    </rule>
    <rule>
    <acl-name>GuestAccess</acl-name>
    <acl-type>ipv4</acl-type>
    <action>permit</action>
    <every>yes</every>
    <index>26</index>
    <commit>3</commit>
    <rule-index>8</rule-index>
    </rule>

    Hi Shane,
    Thank you for reaching the Small Business Support Community.
    Notice that there is an implicit "deny" at the end of every ACL, so what I suggest is creating just one ACL with two rules that "permit" TCP 80 and 443 respectively; the implicit "deny" will then block everything else. Something like this:
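    As a sketch only, reusing the element names from your own dump (the exact export format may differ on your firmware), the permit-only ACL could look like this:

```xml
<acl name="GuestAccess">
<acl-type>ipv4</acl-type>
<in-use>1</in-use>
</acl>
<rule>
<acl-name>GuestAccess</acl-name>
<acl-type>ipv4</acl-type>
<action>permit</action>
<protocol>tcp</protocol>
<dst-port>80</dst-port>
<rule-index>1</rule-index>
</rule>
<rule>
<acl-name>GuestAccess</acl-name>
<acl-type>ipv4</acl-type>
<action>permit</action>
<protocol>tcp</protocol>
<dst-port>443</dst-port>
<rule-index>2</rule-index>
</rule>
```

    With no explicit permit-everything rule at the end, the implicit deny drops all other traffic, so the seven deny rules become unnecessary.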
    For details, please refer to the admin guide, page 111:
    http://www.cisco.com/en/US/docs/wireless/access_point/csbap/wap5x1/administration/guide/WAP551_561_admin_guide.pdf
    Please do not hesitate to reach me back if there is any further assistance I may help you with.
    Kind regards,
    Jeffrey Rodriguez S. .:|:.:|:.
    Cisco Customer Support Engineer
    *Please rate the post so others will know when an answer has been found.

  • SPF is not supported SCVMM cluster problems, when repairing ?

    SPF is not supported SCVMM cluster problems, when repairing ?

    See:
    http://forums.sdn.sap.com/thread.jspa?threadID=2056183&tstart=45#10718101

  • NVGRE Gateway Cluster Problem

    Hello
    We have following setup:
    Management Hyper-V hosts running WAP, SPF and SCVMM 2012 R2 components
    Gateway Hyper-V host: a single-node gateway Hyper-V host, configured as a single-node cluster so extra hardware can be joined in the future.
    This Hyper-V host runs 2 Windows Server Gateway VMs, configured as a failover cluster.
    The following script is used to deploy these windows server gateway VMs as a high available NVGRE gateway service:
    http://www.hyper-v.nu/archives/mscholman/2015/01/hyper-v-nvgre-gateway-toolkit/
    two tenant Hyper-V hosts running VMs which are using network virtualization
    The setup is completed successfully and when creating a tenant in WAP and creating VM network for this tenant using NAT, the VMs of this tenant are accessible and can access Internet using the HA Gateway cluster.
    The Gateway Hyper-V host and NVGRE Gateway VMs are running in a DMZ zone, in a DMZ Active Directory Domain.
    Management and Tenant Hyper-V hosts, incl all Management VMs, are running in a dedicated internal Active Directory domain.
    Problems start when we failover the Windows Server Gateway service to the other VM node of the NVGRE Gateway cluster. We see in the lookup records on the Gateway Hyper-V host that the MAC address of the gateway record for tenants is updated with the new
    MAC address of the VM node running the gateway service.
    But in SCVMM, apparently, this record is not updated. The tenant hosts still use the old MAC address of the other Gateway VM node.
    When looking in the SCVMM database, we can also see that in the VMNetworkGateway table that the record representing the gateway of the tenant, still points to the MAC address of the PA network adapter of the other node of the NVGRE Gateway cluster, not to the
    new node on which the gateway service is running after initiating a failover.
    On the tenant hyper-v hosts, the lookup record for the gateway also points to the old node as well.
    When manually changing the record in the VMNetworkGateway table to the new MAC address, and refreshing the tenant hosts in SCVMM, all starts working again and the tenant VMs can access the gateway again.
    Anybody else facing this issue? Or is running a NVGRE Gateway cluster on a single Hyper-V node not supported?
    To be complete, the deployed VMs running the gateway service are not configured as HA VMs.
    Regards
    Stijn

    If I understand your post correctly, you have a single Hyper-V host running 2 GW VMs. I think the problem is that when you deploy an HA VM gateway cluster, it wants to create a cluster resource (the PA IP address) on the Hyper-V host as well. So when you run 2 Hyper-V hosts and 2 GW VMs and you move the active role to another host, it will move the Provider Address to the other Hyper-V host as well. I believe this is by design. You should also ask yourself why you are running the 2 VMs of a cluster on the same node ;-)
    I would recommend using a 2-node Hyper-V host cluster (this is needed for the HA PA address, not for your GW VMs).
    Then run the deployment toolkit again. When that's done, take a close look at how the active node has the corresponding PA assigned on its Hyper-V host. Then do a failover, refresh the cluster manager, and take note of the PA address that has moved along to the other Hyper-V host, which is now the active one. It is difficult to explain in a couple of sentences, but I hope you have the opportunity to build the 2nd Hyper-V host as well and create a cluster.
    Side note: if you want to keep the existing VM gateway cluster, remove all gateways from the VM networks and remove the gateway service from VMM. Then provision the second Hyper-V host, configure the cluster, and live-migrate 1 GW VM node to it. Reconfigure the shared VHDX for quorum and CSV and then add the network service back again. Don't try to leave it as a network service in VMM and move the VM to another node; it will not work on failover.
    Best regards, Mark Scholman. Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.

  • Leopard - QMaster and Virtual Cluster problem

    Hi guys,
    Up until yesterday I had my Mac Pro octo running under 10.4, where I had successfully set up a virtual cluster using 4 instances for Compressor. It worked like a charm and my Mac Pro was doing its job perfectly.
    Today, I made a bootable backup of my 10.4 install and installed 10.5 using the erase-and-install option (clean install). I installed all my software again and tried setting up my virtual cluster again, using the same settings I had under 10.4. Sadly, I can't seem to get it working.
    In the Qmaster preference pane, I have the QuickCluster with Services option checked. For the Compressor entry in Services I have the share option checked and use 4 instances for the selected service. The QuickCluster received a decent name, and the option to include unmanaged services from other computers is checked.
    I have the default options set in the Advanced tab (nothing checked except logging service activity to a log file and showing Qmaster service status in the menu bar). I then started the cluster using the Start Sharing button.
    Now I open up Compressor and add a file to process (QT encode to iPod), but when I hit the Submit button, my virtual cluster doesn't show up in the cluster drop-down. If I leave the Compressor GUI open for 5 minutes, it will eventually show up in the list and I can pick it. Sadly, picking it from the list at this point and hitting the Submit button makes Compressor hang.
    I checked my logs, but the only thing concerning Compressor I could find is this :
    4/12/07 20:12:35 Compressor[242] Could not find image named 'MPEG1-Output'.
    4/12/07 20:12:35 Compressor[242] Could not find image named 'MPEG1-Output'.
    4/12/07 20:12:35 Compressor[242] Could not find image named 'MPEG1-Output'.
    4/12/07 20:12:41 Batch Monitor[190] * CDOClient::connect2: CException [NSException raised by 'NSPortTimeoutException', reason = '[NSPortCoder sendBeforeTime:sendReplyPort:] timed out (218488391.647220 218488361.647369) 1'], server [tcp://10.0.1.199:49167]
    4/12/07 20:12:41 Batch Monitor[190] exception caught in -[ClusterStatus getNewStatusFromController:withOptions:withQueryList:]: NSException raised by 'NSPortTimeoutException', reason = '[NSPortCoder sendBeforeTime:sendReplyPort:] timed out (218488391.647220 218488361.647369) 1'
    4/12/07 20:17:55 Batch Monitor[190] * CDOClient::connect2: CException [NSException raised by 'NSPortTimeoutException', reason = '[NSPortCoder sendBeforeTime:sendReplyPort:] timed out (218488705.075513 218488675.075652) 1'], server [tcp://10.0.1.199:49167]
    I tried Stop Sharing and then Start Sharing again, and I noticed the following entries in my log:
    4/12/07 20:23:26 compressord[210] can't refresh cache from file "/Library/Application Support/Apple Qmaster/qmasterservices.plist"
    4/12/07 20:23:26 compressord[211] can't refresh cache from file "/Library/Application Support/Apple Qmaster/qmasterservices.plist"
    4/12/07 20:23:26 compressord[213] can't refresh cache from file "/Library/Application Support/Apple Qmaster/qmasterservices.plist"
    4/12/07 20:23:26 qmasterca[269] can't refresh cache from file "/Library/Application Support/Apple Qmaster/qmasterservices.plist"
    4/12/07 20:23:26 qmasterqd[199] can't refresh cache from file "/Library/Application Support/Apple Qmaster/qmasterservices.plist"
    4/12/07 20:23:27 QmasterStatusMenu[178] * CDOClient::connect2: CException [NSException raised by 'NSPortTimeoutException', reason = '[NSPortCoder sendBeforeTime:sendReplyPort:] timed out (218489009.603992 218489007.604126) 1'], server [tcp://10.0.1.199:49407]
    4/12/07 20:23:27 Batch Monitor[190] * CDOClient::connect2: CException [NSException raised by 'NSPortTimeoutException', reason = '[NSPortCoder sendBeforeTime:sendReplyPort:] timed out (218489037.738080 218489007.738169) 1'], server [tcp://10.0.1.199:49407]
    4/12/07 20:23:27 Batch Monitor[190] exception caught in -[ClusterStatus getNewStatusFromController:withOptions:withQueryList:]: NSException raised by 'NSPortTimeoutException', reason = '[NSPortCoder sendBeforeTime:sendReplyPort:] timed out (218489037.738080 218489007.738169) 1'
    Batch Monitor immediately detects the cluster being active again, but Compressor doesn't, leaving me with only This Computer available in the cluster drop-down when submitting a batch.
    In Activity Monitor, I notice that CompressorTranscoder is not responding (the 4 CompressorTranscoderX processes are fine), and the ContentAgent process isn't responding either.
    Does anyone have any clue about what I could check next or how I could fix my problems?
    Thanks a lot in advance,
    Stefaan

    Bah, this is crazy; today it doesn't work anymore. Yesterday my cluster was showing up in the drop-down, I could submit a batch to it, and it got processed over my virtual cluster.
    Today, after finishing the second part of my movie, I tried it again. I didn't change anything in my settings, and my machine hasn't even rebooted (just recovered from sleep mode), but my cluster isn't showing up at all anymore. Even the Qmaster menu doesn't show it.
    I guess I'll have to wait until it appears again, or try a few things out.

  • August Patch Cluster Problems

    Has anyone had the following issue after installing the latest Patch Cluster?
    After a reboot I get
    couldn't set locale correctly
    To correct this I have to edit /etc/default/init
    and remove
    LC_COLLATE=en_GB.ISO8859-1
    LC_CTYPE=en_GB.ISO8859-1
    LC_MESSAGES=C
    LC_MONETARY=en_GB.ISO8859-1
    LC_NUMERIC=en_GB.ISO8859-1
    LC_TIME=en_GB.ISO8859-1
    If I then create a flash archive and use it, the JumpStart process puts the locale info back and the problem reappears.
    It's not critical, as I don't need to be on the latest Patch Cluster, but I wondered if I'm the only one having these issues.
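    The edit described above can be scripted. A minimal sketch, shown against a scratch copy rather than the live /etc/default/init (and assuming GNU sed; with stock Solaris sed, write the output to a new file instead of using -i):

```shell
# Work on a scratch copy that mimics the relevant part of /etc/default/init.
printf 'TZ=GB\nLC_COLLATE=en_GB.ISO8859-1\nLC_CTYPE=en_GB.ISO8859-1\nLC_MESSAGES=C\n' > /tmp/init.demo
# Delete every LC_* override line, leaving the other settings intact.
sed -i '/^LC_/d' /tmp/init.demo
grep -c '^LC_' /tmp/init.demo || true   # counts remaining LC_ lines (0 once they are gone)
```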

    If you open the directory in CDE's file manager, right click on the zipped file and select unzip. The cluster will be unzipped to a directory structure called x86_recommended or something of the sort. Change to that directory to run the patch cluster install script. The patch script is looking for that directory structure.
    Lee

  • Cluster Problems??

    Hi All,
    Need some help. We have an SAP 4.6C installation on a Microsoft cluster with an MS SQL database. One node in the cluster is corrupt and needs to be rebuilt. My question to you all: can one node of the cluster be rebuilt, or will both nodes have to be rebuilt?
    If so, where can I find the documentation to do this, and can it result in any other problems?
    Thanks
    John

    Hello - The nature of MSCS is failover. Thus one node failure = one node recovery. MSCS documentation would suffice here.
    Regards.

  • BorderManager Cluster problems

    I have set up a 2 node NW 6.5 SP8 cluster to run BorderManager 3.9 SP2. I don't have a 'Split Brain Detector' (SBD) partition; the servers only monitor each other through the LAN heartbeat signal that is being sent by the master and replies by the slave. This has worked well from a high availability perspective but I keep running into a situation where both nodes will go 'active'.
    Usually, I have Node 0 set as both the cluster master and the host of the NBM proxy resource. Node 1 is then in standby - ready to load the proxy service and assume the proxy IP address if node 0 dies. At some point (the time is variable in days 2 - 5 and doesn't seem to be related to network load) Node 0 will think that Node 1 has failed and will show that on the Cmon console. Shortly afterwards Node 1 will think that Node 0 has failed and bind the proxy IP and cluster master IP and load the proxy. At this time I have two servers; both with the same Cluster Master IP bound and the proxy IP bound and proxy.nlm loaded!
    I can access Node 0 through rconj and it appears to be working fine. If I do a 'display secondary ipaddress' I can see it has both the proxy IP and Cluster Master IP bound to it. The same thing is the case for Node 1. I unload the proxy on Node 0 and reset the server. When it comes back up, it joins the cluster just fine and there doesn't appear to be any other problem.
    Has anyone else seen this behavior? (Craig???)
    thanks,
    Dan

    In article <[email protected]>, Dchuntdnc wrote:
    > but I keep running into a situation where
    > both nodes will go 'active'.
    I've got one of those situations too, at a client.
    >
    > Usually, I have Node 0 set as both the cluster master and the host of
    > the NBM proxy resource. Node 1 is then in standby - ready to load the
    > proxy service and assume the proxy IP address if node 0 dies. At some
    > point (the time is variable in days 2 - 5 and doesn't seem to be related
    > to network load) Node 0 will think that Node 1 has failed and will show
    > that on the Cmon console.
    This sounds familiar, except for me it happens within hours.
    > Shortly afterwards Node 1 will think that
    > Node 0 has failed and bind the proxy IP and cluster master IP and load
    > the proxy. At this time I have two servers; both with the same Cluster
    > Master IP bound and the proxy IP bound and proxy.nlm loaded!
    Yep. Gets annoying, to say the least!
    >
    > I can access Node 0 through rconj and it appears to be working fine.
    > If I do a 'display secondary ipaddress' I can see it has both the proxy
    > IP and Cluster Master IP bound to it. The same thing is the case for
    > Node 1. I unload the proxy on Node 0 and reset the server. When it
    > comes back up, it joins the cluster just fine and there doesn't appear
    > to be any other problem.
    Yep.
    >
    > Has anyone else seen this behavior? (Craig???)
    I have definitely fought this issue, but only on one (of many) BM cluster.
    Both nodes of the cluster are on old servers, and when the proxy is
    active, it is exceptionally busy. (More than 2000 users, and plenty of LAN
    bandwidth). I was on site at the client working on this (and a lot of
    other projects) and I never was able to get to the bottom of it. The fact
    that the server was so busy (24x7) made it hard to experiment on. My hope
    at this point is to get decent newer hardware in there to replace the
    7-year old nodes.
    This happened when one server was BM 3.8 and the other BM 3.9, but it
    continued to happen when I upgraded both to 3.9sp2. It also happened even
    though I moved the heartbeat to dedicated nics with a crossover cable.
    I'm thinking that something causes the LAN drivers to hiccup long enough
    for the server to stop responding to heartbeat - but the proxy seems to
    work continuously without showing a 30-second pause anywhere.
    For the time being, I've left the oldest node not loading cluster
    services. It's a manual failover at this time, but that's better than
    nothing. (And the primary node is quite stable anyway, for months and
    months at a time).
    Craig Johnson
    Novell Support Connection SysOp
    *** For a current patch list, tips, handy files and books on
    BorderManager, go to http://www.craigjconsulting.com ***

  • Patchin portal cluster problem

    I am trying to run Portal patch 13 on a WAS cluster.
    The problem I am getting is that the patch installation asks for a username and password for an administrator.
    When I enter the details I get an error.
    My question is: if it is in safe mode, how is the installer checking these details? I cannot log into the Visual Administrator when the cluster is in safe mode.
    Anybody else have this problem?
    Thanks

    I think I just figured out how to use safe mode. It basically just limits the cluster to one server and one dispatcher. You're right, the same result can be achieved with the Config Tool.
    Thanks

  • Oracle IAS Cluster Problem

    Can someone tell me how the iAS cluster works?
    Does it need a clustering component, like database clusterware or WebLogic clustering?
    Thanks very much!

    "Cluster" is a pretty wide/open term. A bit more detail about what you have (version of Oracle Application Server, kind of installation, your topology) and what you want to achieve by having a cluster will help us suggest something. There are multiple types of cluster configurations, varying by Oracle Application Server release.
    So please be a bit more specific about your problem/request.
    Thanks
    Shail

  • Compressor 4 cluster problem

    I have just set up my MacBook Pro and iMac with Compressor 4 as a cluster. Everything looks fine, but when I send the file out to render it fails with an error "error reading source.....no such file or directory". The file renders fine when I don't tick the "this computer plus" box.
    I've followed the path Compressor is using and it points to the alias in the events folder which goes from the "original media" folder back to the actual location of the .mov files. Sure enough, in Finder, OS X tells me the alias has failed. However, when I try to fix it and browse to the original file, the "OK" button lights up but nothing happens when I click it. This seems to be the case for ALL .mov files in every project I have edited.
    The weird thing is that FCP X can obviously see all of these files as everything works fine - I can edit and render with no problem. The issue only arises when I choose "this computer plus" or pick a cluster.
    So it looks like the aliases do point to the correct files but cannot be accessed directly from the Finder or when Compressor 4 looks for them in cluster mode.
    I hope that makes sense.
    Hopefully someone has seen similar behaviour.
    Thanks,
    Jack.

    Hi Studio X, not sure how to do that but I just worked it out. It was (as you seem to have worked out) the alias that was the clue.
    I had put together these FCP X projects on a different USB drive. As I wanted to be more organised I copied over all the projects to a new 1TB USB drive which I'm only using for storage and editing. The good thing about this is that I can simply remove the drive and plug it into a different Mac and FCP X sees everything - events, projects, original files. However, there must be some reference to the name of the old USB drive as part of the alias which Compressor doesn't like when in cluster mode. I started a quick project on the new drive and Compressor 4 worked as a 3 machine cluster with no problems.
    I can't quite understand why FCP X finds the original video at all if it is looking for the drive name and not just the path to the files, but it seems not to care.
    Anyone have any ideas about this?

  • Cluster problem

              Hi,
              when I examine my logs I find these lines. What are these threads trying to
              do? The two servers reported did not log anything for 5 minutes before the timestamps here.
              ###<Sept 7, 2002 6:37:54 PM PDT> <Warning> <RJVM> <asd> <srv1> <ExecuteThread:
              '5' for queue: 'default'> <kernel identity> <> <000519> <Unable to connect to
              a remote server on address 10.11.21.50 and port 7001 with protocol t3. The Exception
              is java.net.ConnectException: Operation timed out: connect>
              ####<Sept 7, 2002 10:05:47 PM PDT> <Warning> <RJVM> <asd> <srv1> <ExecuteThread:
              '3' for queue: 'default'> <kernel identity> <> <000519> <Unable to connect to
              a remote server on address 10.11.21.51 and port 7001 with protocol t3. The Exception
              is java.net.SocketException: Connection reset by peer: JVM_recv in socket input
              stream read>
              ####<Sept 7, 2002 10:05:47 PM PDT> <Warning> <RJVM> <asd> <srv1> <ExecuteThread:
              '1' for queue: 'default'> <kernel identity> <> <000519> <Unable to connect to
              a remote server on address 10.11.21.51 and port 7001 with protocol t3. The Exception
              is java.net.SocketException: Connection reset by peer: socket write error>
              

              I have the problem too. I am using WLS 7.0 SP1. Is it because of some network setting?
              After some time the threads get stuck.
              ####<Oct 23, 2002 11:12:29 AM PST> <Warning> <RJVM> <SP2> <srv2> <ExecuteThread:
              '16' for queue: 'default'> <kernel identity> <> <000519> <Unable to connect to
              a remote server on address 10.2.20.62 and port 7001 with protocol t3. The Exception
              is java.net.ConnectException: Operation timed out: connect>
              ####<Oct 23, 2002 11:13:16 AM PST> <Warning> <RJVM> <SP2> <srv2> <ExecuteThread:
              '10' for queue: 'default'> <kernel identity> <> <000519> <Unable to connect to
              a remote server on address 10.2.20.62 and port 7001 with protocol t3. The Exception
              is java.net.ConnectException: Operation timed out: connect>
              ####<Oct 23, 2002 11:13:16 AM PST> <Info> <WebLogicServer> <SP2> <srv2> <ExecuteThread:
              '10' for queue: 'default'> <kernel identity> <> <000339> <ExecuteThread: '10'
              for queue: 'default' has become "unstuck".>
              ####<Oct 23, 2002 11:13:16 AM PST> <Info> <WebLogicServer> <SP2> <srv2> <ExecuteThread:
              '16' for queue: 'default'> <kernel identity> <> <000339> <ExecuteThread: '16'
              for queue: 'default' has become "unstuck".>
              Rajesh Mirchandani <[email protected]> wrote:
              >Which server and service pack are you using?
              >
              >This message should show up on one server when one server in the cluster
              >drops out or
              >is force-killed.
              >
              >They are harmless anyway.
              >
              >Joe wrote:
              >
              >> They are clustered managed servers. The message is logged on a managed
              >server.
              >>
              >> Kumar Allamraju <[email protected]> wrote:
              >> >Which server is logging the following messages? Is it admin
              >> >or managed server?
              >> >
              >> >Who are these 10.11.21.50,51 etc..?
              >> >
              >> >Kumar
              >> >
              >> >Joe wrote:
              >> >> Hi,
              >> >> When I examine my logs I find the lines below. What are these threads trying to do? The two servers reported here didn't log anything in the 5 minutes before these timestamps.
              >> >>
              >> >> ####<Sept 7, 2002 6:37:54 PM PDT> <Warning> <RJVM> <asd> <srv1> <ExecuteThread: '5' for queue: 'default'> <kernel identity> <> <000519> <Unable to connect to a remote server on address 10.11.21.50 and port 7001 with protocol t3. The Exception is java.net.ConnectException: Operation timed out: connect>
              >> >>
              >> >> ####<Sept 7, 2002 10:05:47 PM PDT> <Warning> <RJVM> <asd> <srv1> <ExecuteThread: '3' for queue: 'default'> <kernel identity> <> <000519> <Unable to connect to a remote server on address 10.11.21.51 and port 7001 with protocol t3. The Exception is java.net.SocketException: Connection reset by peer: JVM_recv in socket input stream read>
              >> >>
              >> >> ####<Sept 7, 2002 10:05:47 PM PDT> <Warning> <RJVM> <asd> <srv1> <ExecuteThread: '1' for queue: 'default'> <kernel identity> <> <000519> <Unable to connect to a remote server on address 10.11.21.51 and port 7001 with protocol t3. The Exception is java.net.SocketException: Connection reset by peer: socket write error>
              >> >>
              >> >
              >
              >--
              >Rajesh Mirchandani
              >Developer Relations Engineer
              >BEA Support
              >
              

  • Servlet cluster problem

    I am having trouble with a WLS 6.1 cluster. I am trying to write a PDF out
              via a servlet. When I run the following code with clustering turned
              off I have no problems. If I turn it on, the servlet returns no
              data. I am including the servlet and the stack trace in case someone
              can help. GenericFileObject.getTheFile returns a byte array.
              Jeff
              public void service(HttpServletRequest request, HttpServletResponse response)
                      throws ServletException, IOException {
                  DataOutputStream activityreportOut =
                          new DataOutputStream(response.getOutputStream());
                  try {
                      HttpSession session = request.getSession(true);
                      response.setContentType("application/pdf");
                      String fileid = request.getParameter("fileid");
                      String type = request.getParameter("type");
                      ClientFacadeHome cfhome = (ClientFacadeHome) EJBHomeFactory.getInstance()
                              .getBeanHome(Constants.CLASS_CLIENT_FACADE, Constants.JNDI_CLIENT_FACADE);
                      ClientFacade cf = cfhome.create();
                      GenericFileObject file = (GenericFileObject) cf.getFile(fileid, type);
                      byte[] buffer = (byte[]) file.getThefile();
                      activityreportOut.write(buffer);
                  } catch (Exception e) {
                      e.printStackTrace();
                  }
                  activityreportOut.flush();
              }
              java.io.IOException: Broken pipe
              at java.net.SocketOutputStream.socketWrite(Native Method)
              at java.net.SocketOutputStream.write(Unknown Source)
              at weblogic.servlet.internal.ChunkUtils.writeChunkTransfer(ChunkUtils.java:189)
              at weblogic.servlet.internal.ChunkUtils.writeChunks(ChunkUtils.java:165)
              at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:248)
              at weblogic.servlet.internal.ChunkOutput.checkForFlush(ChunkOutput.java:306)
              at weblogic.servlet.internal.ChunkOutput.write(ChunkOutput.java:197)
              at weblogic.servlet.internal.ChunkOutputWrapper.write(ChunkOutputWrapper.java:121)
              at weblogic.servlet.internal.ServletOutputStreamImpl.write(ServletOutputStreamImpl.java:155)
              at java.io.DataOutputStream.write(Unknown Source)
              at java.io.FilterOutputStream.write(Unknown Source)
              at com.bi.micardis.security.clientaction.ActivityAndScriptServlet.service(ActivityAndScriptServlet.java:41)
              at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
              at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:265)
              at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:200)
              at weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:2456)
              at weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:2039)
              at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:139)
              at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
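
A "Broken pipe" at this point in the trace means the client (or a proxy in front of the cluster) closed the connection while the servlet was still writing. One common mitigation is to assemble the whole payload first and call response.setContentLength() before the first write, so the container does not fall back to chunked transfer. Below is a minimal, container-free sketch of just the buffering step; the class and method names are illustrative, not from the original post:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class PayloadBuffer {
    // Collect all chunks into one byte[] so the total length is known
    // before any bytes go on the wire. In the servlet you would then call
    // response.setContentLength(payload.length) prior to
    // activityreportOut.write(payload).
    static byte[] buildPayload(byte[][] chunks) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte[] chunk : chunks) {
            out.write(chunk);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = buildPayload(new byte[][] {
                "%PDF-".getBytes(), "1.4\n".getBytes()
        });
        // Length of "%PDF-1.4\n" is known up front
        System.out.println(payload.length);
    }
}
```

With the length set up front, a client disconnect still raises an IOException, but the response is never left half-chunked.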
              

              "Subir Das" <[email protected]> wrote:
              >So, Applet#1 will always talk to the servlet hosted by WLInstance#1 and
              >Applet#2 will always talk to the servlet hosted by WLInstance#2.
              This statement is not entirely true.
              Suppose WLInstance#1 were brought down (for whatever reason); Applet#1 would
              then talk to the servlet hosted by WLInstance#2.
              Server pinning can also be changed by different load-balancing algorithms,
              configurable via the containers (or hardware).
              So don't count on which servlet instance your applet is going to be served by.
              Instead, consider taking a second look at the design of the servlet's data
              structure (object):
              1. Read it from the data store, if it has been persisted.
              2. If the data is client-related, consider sticking it into the session, which
              would then replicate to the other WL instances.
              3. Stateless EJBs in a cluster? Don't know much about this (yet).
              My 2 cents. Good luck.
              Rama
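
One caveat on point 2: anything stored in a replicated session must implement Serializable, since the container copies session attributes to the secondary instance by serialization. Here is a self-contained sketch of that round trip; the CartState class is hypothetical, purely for illustration:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SessionReplicationDemo {
    // Anything passed to session.setAttribute(...) in a replicated
    // session must implement Serializable, or replication will fail.
    static class CartState implements Serializable {
        private static final long serialVersionUID = 1L;
        final String clientId;
        final int itemCount;
        CartState(String clientId, int itemCount) {
            this.clientId = clientId;
            this.itemCount = itemCount;
        }
    }

    // Simulate what the container does on replication: serialize the
    // attribute and rebuild it on the secondary server.
    static CartState replicate(CartState state)
            throws IOException, ClassNotFoundException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(buf);
        oos.writeObject(state);
        oos.flush();
        ObjectInputStream ois =
                new ObjectInputStream(new ByteArrayInputStream(buf.toByteArray()));
        return (CartState) ois.readObject();
    }

    public static void main(String[] args) throws Exception {
        CartState copy = replicate(new CartState("applet-1", 3));
        System.out.println(copy.clientId + ":" + copy.itemCount);
    }
}
```

If an attribute holds a non-serializable field (an open JDBC connection, say), the replication step above is exactly where it breaks.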
              

  • DAG - Cluster problem

    Hi,
    I have MS Exchange 2010 SP2 servers on different subnets on 2 sites:
    Server Exch2010-1, subnet 192.168.1.0/24, on site 1 (PROD - active mailboxes)
    Server Exch2010-2, subnet 192.168.1.1/24, on site 2 (DRP - passive mailboxes)
    I created a DAG with these 2 servers as members. The DAG IP is in subnet
    192.168.1.0/24 (the same subnet as site 1). Replication is only enabled on a
    3rd subnet, 192.168.1.3/24 (a stretched VLAN across sites 1 and 2).
    Everything seems OK (OWA/MAPI/backups...) and the DAG appears in AD and DNS.
    I have 1 question and 2 problems.
    Question:
    Should my DAG also have an IP address in the subnet of site 2?
    Problems:
    When site 2 loses the connection with site 1, the cluster fails (mail is
    delayed / backups fail).
    When sites 1 and 2 are reconnected and the cluster is brought back online
    (manually), the mailboxes are active on site 2, whereas site 2 has activation
    preference 2 and site 1 has activation preference 1.
    Any idea?
    Thank you in advance

    OK, gotcha. I'm not a fan of stretched VLANs across a WAN. But if there is only one true subnet, then you only need one IP address for the cluster.
    Since you are crossing a WAN, ensure your File Share Witness is in the primary data center and that you have enabled DAC mode.
    Also see:
    http://social.technet.microsoft.com/Forums/exchange/en-US/36311bb6-6193-4370-ab47-cc0e831354e7/automount-consensus-not-reached-database-wont-mount-exchange-2013-2-node-dag-and-file-share?forum=exchangesvravailabilityandisasterrecovery
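
For reference, DAC mode is enabled on the DAG object itself via the Exchange Management Shell. A sketch follows; the DAG name (DAG1), witness server, and directory are placeholders to substitute with your own values:

```powershell
# Enable Datacenter Activation Coordination (DAC) mode on the DAG
Set-DatabaseAvailabilityGroup -Identity DAG1 -DatacenterActivationMode DagOnly

# Keep the File Share Witness in the primary datacenter
Set-DatabaseAvailabilityGroup -Identity DAG1 `
    -WitnessServer FS01.contoso.com -WitnessDirectory C:\DAG1FSW

# Verify the settings took effect
Get-DatabaseAvailabilityGroup DAG1 -Status |
    Format-List Name,DatacenterActivationMode,WitnessServer
```

With DAC mode on, a recovering datacenter cannot mount databases on its own after a WAN split, which addresses the "mailboxes active on site 2" symptom above.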
