Queues Clustering Question

Hi,
we are evaluating Sun Java System Message Queue for our J2EE application, and after browsing the documentation we still have one doubt which is actually slowing down our purchase.
The question is pretty simple. Given several J2EE application servers running on different machines and accessed via a hardware load balancer,
is it possible to have different brokers (one per machine) with a shared store on a network drive? Or is it better to have a "balanced" pair of queues (that is, several queues with separate stores synchronized with each other) that are presented as a single queue (via the hardware load balancer)?
Thanks for your precious help
Michele Zaina

A couple of initial clarifications -
While it is the default with the Sun Application Server configuration, it's optional whether or not you run a cluster of brokers with a cluster of app servers (you can run one or many).
A single broker will be faster, but it won't provide failover (specifically, service availability) if something goes wrong.
If you want a single broker for all AS instances, you need to set the service up as "REMOTE".
In MQ, unless you set it up differently, all queues are "global". This means that you set up a single queue, and clients can produce to or consume from any broker in the cluster. This means you don't need to set up the system to see the queue as the same on all brokers. Messages are stored on the broker they are produced to. E.g.:
You have a cluster of two brokers (broker1 and broker2); a producer is on broker1 and a consumer is on broker2. The producer sends a message to broker1, where it is stored and then forwarded to broker2, where it is delivered to the consumer.
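For illustration, here is a minimal JMS sketch of that two-broker scenario (not from the original answer: broker host names, the port, and the queue name are invented, and instantiating the com.sun.messaging classes directly is just one way to obtain a connection factory):

    // Sketch only: the producer connects to broker1, the consumer to broker2,
    // and the message is stored on broker1 and forwarded to broker2.
    import javax.jms.*;
    import com.sun.messaging.ConnectionConfiguration;

    public class GlobalQueueDemo {
        public static void main(String[] args) throws Exception {
            // Consumer side, attached to broker2 (host/port are placeholders)
            com.sun.messaging.ConnectionFactory consumerCF = new com.sun.messaging.ConnectionFactory();
            consumerCF.setProperty(ConnectionConfiguration.imqAddressList, "mq://broker2:7676");
            Connection consumerConn = consumerCF.createConnection();
            Session consumerSession = consumerConn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = consumerSession.createConsumer(new com.sun.messaging.Queue("myQueue"));
            consumerConn.start();

            // Producer side, attached to broker1
            com.sun.messaging.ConnectionFactory producerCF = new com.sun.messaging.ConnectionFactory();
            producerCF.setProperty(ConnectionConfiguration.imqAddressList, "mq://broker1:7676");
            Connection producerConn = producerCF.createConnection();
            Session producerSession = producerConn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            producerSession.createProducer(new com.sun.messaging.Queue("myQueue"))
                           .send(producerSession.createTextMessage("hello"));

            // The consumer on broker2 receives the message produced to broker1
            System.out.println(((TextMessage) consumer.receive(5000)).getText());
            producerConn.close();
            consumerConn.close();
        }
    }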
As for the shared store: MQ does not support a true "shared store" at this time (e.g. a database or persistent store used by all brokers).
If you assign the brokers different instance names, you can use an NFS drive and place all file-based stores under the same directory, e.g. (paths vary by platform):
<location>/var/imq/instances/imqbroker1 - first broker's filestore
<location>/var/imq/instances/imqbroker2 - second broker's filestore
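For example, the two instances might be started along these lines (a sketch only; the host names are placeholders, and the exact imqbrokerd options are described in the MQ admin guide):

    On host1:  imqbrokerd -name imqbroker1 -port 7676 -cluster host1:7676,host2:7677
    On host2:  imqbrokerd -name imqbroker2 -port 7677 -cluster host1:7676,host2:7677

Each instance then keeps its filestore under its own instance directory, as shown above.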

Similar Messages

  • JMS/Queue cluster question

              Hi
              I have some very basic cluster questions on JMS queues. Let's say Q1> I have 3 WLS
              in a cluster. I create the queue in only WLS#1 - then all the other WLS (#2 and #3)
              should have a stub in their JNDI tree for the queue which points to the queue in
              #1 - right? Basically what I am trying to achieve is to have the queue in one server
              and have all the other servers hold a pointer to it - I believe this is possible in a WLS
              cluster - right??
              Q2> Is there any way a client to the queue running on a WLS can tell whether the
              queue handle it's using is local (i.e. in the same server) or remote? Is the API createQueue("./queuename")
              going to help here??
              Q3> Is there any way to create a queue dynamically - I guess JMX is the answer - right?
              But I will take this question a bit further - let's say the answer to Q1 is yes. In this
              case if server #1 crashes - then #2 and #3 have no queues. So if they try to create
              a replica of the queue (as on server #1) - pointing to the same filestore - can they
              do it?? - I want only one of them to succeed in creating the queue, and the queue
              should also have all the data of the #1 queue (a 1-to-1 replica).
              All I want is the concept of a primary and secondary queue in a cluster. Go on using
              the primary queue - but if it fails, use the secondary queue. Kind of like the HttpSession
              replication concept in clusters. My cluster purpose is more for failover than load balancing.
              TIA
              Anamitra
              

              Anamitra wrote:
              > Hi Tom
              > 7.0 is definitely an option for me. So let's take the scenario of a JMS cluster
              > and 7.0.
              >
              > I do not understand what you mean by an HA framework?
              An HA framework is a third-party product that can be used to automatically restart a failed server
              (perhaps on a new machine), and that will guarantee that the same server isn't started in two
              different places (that would be bad). There are a few of these HA products; "Veritas" is one of
              them. Note that if you are using JMS file stores or transactions, both of which depend on the disk,
              you must make sure that the files are available on the new machine. One approach to this is to use
              what is known as a "dual-ported" disk.
              > If I am using a cluster of 3 WLS
              > 7.0 servers - as you have said, I can create a distributed queue with a fwd delay attribute
              > set to 0 if I have the consumer only in one server, say server #1.
              > But still, if server #1 goes down, you say that the queues in server #2 and server
              > #3 will not have access to the messages which were stuck in the server #1 queue when
              > it went down - right?
              Right, but is there a point in forwarding the messages to your consumer's destination if your
              application is down?
              If your application can tolerate it, you may wish to consider allowing multiple instances of it (one
              per physical destination). That way if something goes down, only those messages are out-of-business
              until the application comes back up...
              >
              >
              > Why can't the other servers see them - they all point to the same store, right??
              > thanks
              > Anamitra
              >
              Again, multiple JMS servers cannot share a store. Nor can multiple stores share a file. That will
              cause corruption. Multiple stores CAN share a database, but can't use the same tables in the
              database.
              Tom
              >
              > Tom Barnes <[email protected]> wrote:
              > >
              > >
              > >Anamitra wrote:
              > >
              > >> Hi
              > >> I have some very basic cluster questions on JMS Queues. Lets say Q1>I
              > >have 3 WLS
              > >> in cluster. I create the queue in only WLS#1 - then all the other WLS
              > >(#2 and #3)
              > >> should have a stub in their JNDI tree for the Queue which points to the
              > >Queue in
              > >> #1 - right?
              > >
              > >It's not a stub, but essentially right.
              > >
              > >> Basically what I am trying to achieve is to have the queue in one server
              > >> and all the other servers have a pointer to it - I believe this is possible
              > >in WLS
              > >> cluster - right??
              > >
              > >Certainly.
              > >
              > >>
              > >> Q2> Is there any way a client to the queue running on a WLS can tell whether
              > >the
              > >> Queue handle it's using is local (i.e. in the same server) or remote. Is
              > >the API createQueue(./queuename)
              > >> going to help here??
              > >
              > >That would do it. This returns the queue on the CF side of the established
              > >Connection.
              > >
              > >>
              > >> Q3>Is there any way to create a Queue dynamically - I guess JMX is the
              > >answer -right?
              > >> But I will take this question a bit further - lets say Q1 answer is yes.
              > >In this
              > >> case if server #1 crashes - then #2 and #3 have no Queues. So if they
              > >try to create
              > >> a replica of the Queue (as on server#1) - pointing to the same filestore
              > >- can they
              > >> do it??
              > >> - I want only one of them to succeed in creating the Queue and also the
              > >Queue
              > >> should have all the data of the #1 Queue (1 to 1 replica).
              > >
              > >No. Not possible. Corruption city.
              > >Only one server may safely access a store at a time.
              > >If you have an HA framework that can ensure this atomicity fine, or are
              > >willing
              > >to ensure this manually then fine.
              > >
              > >>
              > >>
              > >> All I want is the concept of primary and secondary queue in a cluster.
              > >Go on using
              > >> the primary queue - but if it fails use the 2ndry queue. Kind of HttpSession
              > >replication
              > >> concept in clusters. My cluster purpose is more for failover rather than
              > >loadbalancing.
              > >
              > >If you use 7.0 you could use a distributed destination, with a high weight
              > >on the destination
              > >you want used most. Optionally, 7.0 will automatically forward messages
              > >from distr. dest
              > >members that have no consumers to those that do.
              > >
              > >In 6.1 you can emulate a distributed destination this way (from an upcoming
              > >white-paper):
              > >Approximating Distributed Queues in 6.1
              > >
              > >If you wish to distribute the destination across several servers in a cluster,
              > >use the distributed destination features built into WL 7.0. If 7.0 is not an
              > >option, you can still approximate a simple distributed destination when
              > >running JMS servers in a "single-tier" configuration. Single-tier indicates
              > >that there is a local JMS server on each server that a connection factory is
              > >targeted at. Here is a typical scenario, where producers randomly pick which
              > >server and consequently which part of the distributed destination to produce
              > >to, while consumers in the form of MDBs are pinned to a particular destination
              > >and are replicated homogeneously to all destinations:
              > >
              > >· Create JMS servers on multiple servers in the cluster. The servers will
              > >collectively host the distributed queue "A". Remember, the JMS servers (and
              > >WL servers) must be named differently.
              > >
              > >· Configure a queue on each JMS server. These become the physical destinations
              > >that collectively make up the distributed destination. Each destination should
              > >have the same name "A".
              > >
              > >· Configure each queue to have the same JNDI name "JNDI_A", and also take care
              > >to set the destination's "JNDINameReplicated" parameter to false. The
              > >"JNDINameReplicated" parameter is available in 7.0, 6.1SP3 or later, or 6.1SP2
              > >with patch CR061106.
              > >
              > >· Create a connection factory, and target it at all servers that have a JMS
              > >server with "A".
              > >
              > >· Target the same MDB pool at each server that has a JMS server with
              > >destination "A", and configure its destination to be "JNDI_A". Do not specify
              > >a connection factory URL when configuring the MDB, as it can use the server's
              > >default JNDI context that already contains the destination.
              > >
              > >· Producers look up the connection factory, create a connection, then a
              > >session as usual. Then producers look up the destination by calling
              > >javax.jms.QueueSession.createQueue(String). The parameter to createQueue
              > >requires a special syntax: "./<queue name>", so "./A" works in this example.
              > >This will return a physical destination of the distributed destination that
              > >is local to the producer's connection. This syntax is available on 7.0,
              > >6.1SP3 or later, and 6.1SP2 with patch CR072612.
              > >
              > >This design pattern allows for high availability: if one server goes down,
              > >the distributed destination is still available and only the messages on that
              > >one server become unavailable. It also allows for high scalability, as speedup
              > >is directly proportional to the number of servers on which the distributed
              > >destination is deployed.
              > >
              > >
              > >
              > >>
              > >> TIA
              > >> Anamitra
              > >
              > >
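              A minimal producer-side sketch of the "./<queue name>" lookup described in the
              reply above (the cluster URL, connection factory JNDI name, and distributed
              queue name "A" are placeholders):

              import java.util.Hashtable;
              import javax.jms.*;
              import javax.naming.*;

              public class LocalMemberProducer {
                  public static void main(String[] args) throws Exception {
                      Hashtable env = new Hashtable();
                      env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
                      env.put(Context.PROVIDER_URL, "t3://mycluster:7001"); // placeholder cluster address
                      Context ctx = new InitialContext(env);

                      QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("MyConnectionFactory");
                      QueueConnection conn = qcf.createQueueConnection();
                      QueueSession session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);

                      // "./A" returns the member of distributed queue "A" that is local to
                      // this connection's server (7.0, 6.1SP3+, or 6.1SP2 with CR072612)
                      Queue queue = session.createQueue("./A");

                      QueueSender sender = session.createQueueSender(queue);
                      sender.send(session.createTextMessage("hello"));
                      conn.close();
                  }
              }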
              

  • Jms queue clustering

    hi,
              Connection to the queue is not established after a crash of one node.
              I'd be grateful if anyone could confirm that my JMS clustered queues are configured and handled correctly from an API point of view.
              I have:
              - created a connection factory targeted to all servers in the cluster
              - each server in the cluster has its own JMS server
              - created a distributed queue targeted to all servers in the cluster
              In the program I:
              - look up one server within the cluster via JNDI and connect to the queue
              - start the connection for both consumer and producer
              - consumer and producer implement ExceptionListener
              - in the onException method, create a new connection to the second node

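     A rough sketch of that reconnect-on-exception pattern (assuming WebLogic's JNDI context factory; the URLs and the connection factory name are placeholders):

     import java.util.Hashtable;
     import javax.jms.*;
     import javax.naming.*;

     public class FailoverQueueClient implements ExceptionListener {
         private QueueConnection conn;

         public void connect(String url) throws Exception {
             Hashtable env = new Hashtable();
             env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
             env.put(Context.PROVIDER_URL, url);
             Context ctx = new InitialContext(env);
             QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("MyConnectionFactory");
             conn = qcf.createQueueConnection();
             conn.setExceptionListener(this); // called back if the connection dies
             conn.start();
             // ... create sessions, producer, and consumer here ...
         }

         // On a node crash the provider calls onException; re-establish the
         // connection against the surviving node.
         public void onException(JMSException e) {
             try {
                 connect("t3://node2:7001"); // placeholder for the second node
             } catch (Exception retry) {
                 // production code would log and retry with a backoff
             }
         }
     }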

  • IronPort Clustering questions

    Hello all,
    I have some questions about clustering in IronPort.
    Currently I have one IronPort C150 in standalone mode, with an IP address that takes the mail flow (192.168.1.34).
    We received a second IronPort to set up a cluster configuration between them.
    My questions are:
    1) What happens to the mail flow if the first IronPort (192.168.1.34) moves to a cluster configuration?
    Do I have to configure a virtual address to match the original mail-flow IP address (192.168.1.34), or does the cluster take over the original configuration of the first IronPort?
    2) If one IronPort fails, does the second IronPort automatically take the mail, or do I have to reconfigure the IP address manually?
    Thanks for your help.
    PS: Sorry for my English

    I agree with your thoughts on MX records. The biggest benefit to using a load balancer is with the management. Once you start getting a large number of hosts in an MX record, you start running into problems with senders correctly resolving your MX records due to improper DNS configuration on the internet (UDP vs. TCP). Standing up a large number of hosts behind some load balancers is one potential solution. This of course comes with its own set of challenges.
    I'm still using MX records, but at some point will need to look at having multiple machines behind each host in my MX records to cut down on the size of the returned record.
    I just wish I could get all of my application developers to write their apps to understand MX records. Load balancers have worked well for my outbound environment where most applications are pointing at a host name instead of an MX record.
    Joe
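     As an aside, a small sketch of what an "MX-aware" application might do in Java, using the JNDI DNS provider ("example.com" is a placeholder domain):

     import java.util.Hashtable;
     import javax.naming.Context;
     import javax.naming.directory.*;

     public class MxLookup {
         public static void main(String[] args) throws Exception {
             Hashtable env = new Hashtable();
             env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory");
             DirContext dns = new InitialDirContext(env);

             // Each value looks like "10 mail1.example.com." - lower preference
             // numbers should be tried first.
             Attributes attrs = dns.getAttributes("example.com", new String[] {"MX"});
             Attribute mx = attrs.get("MX");
             for (int i = 0; mx != null && i < mx.size(); i++) {
                 System.out.println(mx.get(i));
             }
         }
     }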

  • OAS Clustering question

    Hi
    I have a question here: if I would like to create a file-based repository cluster for 2 nodes (J2EE container and web cache) on 10.1.2.0.2, what should I do first?
    1. Create the cluster, join the 2 nodes, and then deploy my OC4J application?
    2. Deploy my OC4J application to both nodes and then cluster them up?
    My target: load balancing.
    Edited by: user12259190 on Feb 8, 2010 7:04 AM

    You may wish to create the clusters first and then deploy your application.
    The following link would help if you are using 10gR2.
    [Oracle Application Server Clusters (OC4J)|http://download.oracle.com/docs/cd/B14099_19/core.1012/b14003/midtiermanage.htm#i1031723]
    thanks!
    AMN

  • Broker clustering question

    In our current setup, we have a few brokers clustered together without a master broker. These brokers serve the same client applications and therefore should have the same set of topics/destinations. My question is: should I create these topics on each of the brokers, or should I just create them on one of the brokers and let the JMS provider service replicate them to all the other brokers when they are clustered? I created these topics on the broker before I put it into the cluster, and I have seen some strange behaviour with the cluster; I'm not sure if it's related. Does anybody know?
    Thanks in advance for your feedback.

    Currently we create all the destinations on a broker before we add it to the cluster. These destinations are the same as the ones already created on all of the brokers in the cluster. Will that confuse the JMS service provider, because it will try to forward the information to all the brokers in the cluster? We saw messages stuck on ACTIVE subscribers. We saw messages get lost. We saw messages go through for a few hours and then a subscriber become INACTIVE for no reason. I'm not sure if these are related, but I have no idea where to look.
    "Is there any reason why you cannot use a master broker?" We didn't want to use a master broker because our requirements didn't allow a single point of failure. With a designated master node, if that node dies, will the cluster still function? Another reason is that we have a pre-defined set of destinations and connection factories on all of the brokers, so we don't think a master node is necessary. Is that correct?
    Thanks.
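    For reference, a sketch of pre-creating the same destination on each broker with imqcmd (the broker addresses and the destination name are placeholders; see the MQ admin guide for the exact syntax):

     imqcmd create dst -t t -n myTopic -b brokerhost1:7676
     imqcmd create dst -t t -n myTopic -b brokerhost2:7676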

  • CF8 and JRun 4 Clustering question

    I have a production CF8 environment that consists of:
    3 Windows 2003 IIS 6.0 servers (behind a load balancer)
    JRun 4.0 Updater 7 with a CF8 instance installed on each server.
    On each JRun server I created a cluster and added the CF8 instance from each server to it.
    I connected IIS 6.0 to the local cluster via the JRun Connector tool.
    My question is: when a request is sent to the IIS web server for a CF page, would JRun try to process the CF request locally and, if it could not, send it to another server in the cluster? Or would it simply do a round robin, whereby 2/3 of the requests that come into that JRun server would be sent off to the other servers?
    Thanks.

    From what I know, the requests go in a round-robin process.

  • BAM HA (clustering) Question - Where to run the IIS services?

    Referring to the BAM HA guide I have a set up similar to what is recommended for a "medium" sized installation:
    Cluster 1 with two nodes running ADC, Event Service, Enterprise Link, IIS
    Cluster 2 with two nodes running the Report Service, IIS
    The question is which cluster should be the target for users and which should be the target for BPEL sensor actions?
    Right now I have individual users pointed to cluster 2 and the BPEL processes pointed to cluster 1. I originally did it this way because the ADC is on cluster 1 and I thought that performance might be better from the perspective of the BPEL web service calls (for sensor actions), since that is where my primary load is coming from. Doing this seems to require IIS running on all nodes, which to me seems incorrect. If I point everything (users, report users, and BPEL sensor actions) to cluster 2, is that more correct (i.e. better balancing of load)?
    Thanks!


  • XI R2 clustering question

    I am adding new hardware to a BusinessObjects XI R2 SP3 FP3.4 environment.  We are not upgrading versions.  When running the install, I would like to configure the new servers with their own, independent CMS database instead of creating an extended installation or simply using the existing CMS database at install time.  The benefit would be that I can install the product at any time without impact to users, and take a relatively small outage to cluster the new servers with the existing servers.  Are there any drawbacks to this method?

    Hi Alan,
    I wasn't able to find it at a glance, but I do believe that some clustering instructions are in the Admin guide.  But here are your basic steps:
    - Make sure that your File Repositories are on a network share.  This is a biggie...if they're not, your reports will not run correctly from both servers.
    - Make sure that your CMS database is on a shared database server.  The default installation, I believe, is to create a local MySQL repository.  That will probably not work (someone feel free to jump in here).
    - In the CCM, give your cluster a name...in the properties section.  The cluster name starts with @.  For instance, the cluster could be @BOE_CompanyName_PROD.
    - Go through your BusinessObjects servers and make sure they're all pointed to the cluster name instead of "servername:6400"
    Semi-optional step:
    - Change the default login on the Infoview and CMC logins to point to the clustered name instead of "servername:6400."  If you don't do this, it's pretty likely that only one CMS is going to be utilized.
    There are probably a few things that I'm missing, but that's the general idea.  Overall, clustering is surprisingly easy.  You'll know it's working when you enter the cluster name in the Infoview login screen (make sure you remember the @ at the login screen) and are able to log in.
    Good luck!

  • CUC 8.x Clustering Questions / Concerns

    Hi all,
    I am looking into deploying a CUC 8.x cluster with the Pub and Sub at two separate locations connecting over a 45 Mbps DS3 link. I have about 1500 subscribers with 48 voice messaging ports on each server. According to the Cisco specs for CUC 8.x below, I am completely within spec. I am wondering if anyone has run into this scenario with CUC 8.x and could share their experience. Per the Cisco recommendation, the Subscriber server will be the primary call processor and the Publisher will handle replication, maintaining the DB, and MWI. I am wondering if there will be any issues with MWI with the Pub and Sub connecting over a WAN link, even though my WAN link is completely within Cisco specs.
    In my existing Unity 4.0.5 with failover environment, I have the primary Unity server and the Exchange server located in the same building and the secondary Unity server located at the remote site. Whenever I fail over to the secondary Unity server, I always have issues with MWI. I understand Cisco has completely changed the architecture with Unity Connection; I hope this won't be an issue.
    Connection Cluster Requirements When the Servers Are in Separate Buildings or Sites
    Revised April 16, 2010
    •Both servers must meet specifications according to the Cisco Unity Connection 8.x Supported Platforms List at
    http://www.cisco.com/en/US/partner/docs/voice_ip_comm/connection/8x/supported_platforms/8xcucspl.html.
    •For a cluster with two physical servers, both servers must have the same platform overlay.
    •For a cluster with two virtual servers, both servers must have the same virtual platform overlay.
    •For a cluster with one physical and one virtual server:
    –The platform-overlay numbers of the physical server and the virtual server must match.
    –With Platform Overlay 1 servers, you must add 2 GB of RAM to the physical server so the amount of RAM in the physical server matches the 4 GB of vRAM configured for Connection on the virtual server.
    •Depending on the number of voice messaging ports on each Connection server, the path of connectivity must have the following guaranteed bandwidth with no steady-state congestion:
    –For 50 voice messaging ports on each server—7 Mbps
    –For 100 voice messaging ports on each server—14 Mbps
    –For 150 voice messaging ports on each server—21 Mbps
    –For 200 voice messaging ports on each server—28 Mbps
    –For 250 voice messaging ports on each server—35 Mbps
    Thanks in advance !!! I appreciate any inputs / suggestions !!
    D.

    I was out and about earlier so I wanted to elaborate on my response:
    I still think you ultimately have to do what works best for your environment.  If that means running everything on the Publisher then so be it.  BUT, I think there are too many variables to just move to that solution without further investigation both internally and from TAC.  Understandably, a single server can handle the load - if it couldn't, there wouldn't be a standalone option for CUC.  However, the concept of clustering is not simply about load balancing to provide high availability.  That's a big part of it - but the separation of roles is also intended to provide better performance and scalability, as well as active-active HA fault tolerance.
    If you meet the specs for clustering over the WAN, then I think the product should perform as expected.  My expectation would be for there to be very little, if any, noticeable lag in performance, especially as far as end-user operations are concerned.  If what you are experiencing is par for the course for clustering over the WAN with CUC, I would say there are a lot of customers that would be disappointed.  I know I would.
    In saying that, it doesn't mean that there is an inherent issue with CUC.  It could be perfectly fine.  However, given that you are running an 8.0 release, I would push for some further inspection of the health of the system.  If everything is up to spec, then there could be something on the WAN link that is at fault.  These types of things always take a little more time to investigate but, as Markus said, it would still be in your best interest to figure out what is going on here.  Even if you have the Publisher sitting at your HQ site doing all the work, it still has to replicate data within the cluster, which would be traversing the WAN.
    So, I would want to pursue both sides of the coin.  You may still end up running everything from one box, but at least you will have tried to rule out as many potential issues as possible.
    Hailey
    Please rate helpful posts!

  • Another clustering question

    What are all the debug messages:
              <Debug> <Cluster> <nemesis> <nemesis> <ExecuteThread: '11' for queue:
              'default'> <> <> <000000> <dropped fragment from foreign domain/cluster
              domainhash=1113319721 clusterhash=-548483879>
              about?
              


  • WAP321 clustering question

    I have two WAP321s at one of my offices.  They are in a cluster.  When an end user connects, they do not see each WAP; they just see the cluster name under "available wireless networks."  How does the cluster "decide" which WAP a user will connect to, and how (if at all) are handoffs handled between WAPs if a client moves out of range of one of the devices in the cluster?  Are these parameters that can be adjusted?
    Thanks,
    -Mat

    Case Solution:
    Please check the links below for the WAP321. These links will help you understand WAP321 clustering and its process:
    Link-1
    http://www.cisco.com/en/US/docs/wireless/access_point/csbap/wap121/administration/guide/WAP121_321_AG_en.pdf
    Link-2
    http://www.cisco.com/en/US/prod/collateral/wireless/ps5678/ps12237/ps12249/c78-697406_data_sheet.pdf

  • WL51 Clustering Question

    Folks,
              I am curious as to how many people are using WL51 with clustering
              successfully in a large environment. We have been unable to get this up and
              running with BEA support. They have so far told us that it may be because of
              the large number of beans being deployed or a synchronization bug. We are
              only deploying 64 ejb's and consider that to be a small to mid-sized
              deployment.
              Our production/development system is using 2 dual PIII 500's with 512 meg
              RAM, WIN2K, and IIS. Our settings are:
              max heap 200
              executethreadcount 65
              pools 2 with 25 max connections each
              ejb's deployed 64
              Please respond if you are using a similar or larger system successfully.
              Please post the server specs and the weblogic specs above. Feel free to
              respond to me by email.
              Thank You,
              Mica Cooper
              

    NOTE:
              This is issue 31575 and it was NOT fixed in service pack 4.
              "Mica Cooper" <[email protected]> wrote in message
              news:[email protected]...
              > I just tried starting up with no ejb's and the cluster seems to work
              > perfectly! These are the same ejbs that are working in production 4.52
              > modified for EJB 1.1. What gives? What is the next step? The instances are
              > staying in sync and I am not getting the "Timed Out" messages.
              > Help....
              > thanks,
              > Mica Cooper
              >
              >
              >
              > "Kumar Allamraju" <[email protected]> wrote in message
              > news:[email protected]...
              > > Deployment of EJBs has nothing to do with the server being up and running. There's
              > > no limitation on the number of beans that can be deployed in WLS.
              > > What happens if you don't deploy EJBs at all? Does the server come up?
              > > Have you also tried deploying 10 beans and then adding more and more?
              > >
              > > We usually do not recommend a heap size of 200 MB for a production
              > > environment, although you will
              > > not see this problem at startup.
              > >
              > > The cluster docs has been recently updated. Would you mind checking 'em
              > > again.
              > > http://www.weblogic.com/docs51/cluster/index.html
              > >
              > > --
              > > Kumar
              > >
              > >
              > > Mica Cooper wrote:
              > > >
              > > > Folks,
              > > >
              > > > I am curious as to how many people are using WL51 with clustering
              > > > successfully in a large environment. We have been unable to get this
              up
              > and
              > > > running with BEA support. They have so far told us that it may be
              > because of
              > > > the large number of beans being deployed or a synchronization bug. We
              > are
              > > > only deploying 64 ejb's and consider that to be a small to mid-sized
              > > > deployment.
              > > >
              > > > Our production/development system is using 2 dual PIII 500's with 512
              > meg
              > > > RAM, WIN2K, and IIS. Our settings are:
              > > > max heap 200
              > > > executethreadcount 65
              > > > pools 2 with 25 max connections each
              > > > ejb's deployed 64
              > > >
              > > > Please respond if you are using a similar or larger system
              successfully.
              > > > Please post the server specs and the weblogic specs above. Feel free
              to
              > > > respond to me by email.
              > > >
              > > > Thank You,
              > > > Mica Cooper
              >
              >
              

  • Quick 5.1 JSP clustering question...

    Does JSP/servlet clustering (WL 5.1) absolutely require a proxy sitting in
              front of the clustered servers?
              Brian Dainton
              Pervado Systems, Inc.
              www.pervado.com
              

    Yes.
              Brian Dainton wrote:
              > Does JSP/servlet clustering (WL 5.1) absolutely require a proxy sitting in
              > front of the clustered servers?
              >
              > Brian Dainton
              > Pervado Systems, Inc.
              > www.pervado.com
              

  • Clustering questions

    Hi,
    I've read with interest the dev center article - Clustering
    CF MX for JRun J2EE (
    http://www.adobe.com/devnet/coldfusion/j2ee/articles/endtoend.html).
    However, it does not address one clustering scenario which
    I'd like to cover.
    Is it possible to deploy more than one CF instance within a
    single JRun instance?
    That is, I'd like to run multiple instances of my CF
    application within a single JRun instance
    on a single server. If this is possible, is it also possible
    to configure the JRun Apache connector
    to load balance across the multiple CF instances? I can see
    how the connector load balances
    across IP:Port combinations, but not across URLs on the same
    IP:Port combination.
    The article I've referenced depicts various clustering
    alternatives, but they all show a one-to-one
    relationship between CF instances and JRun instances. Is this
    definitive?
    Thanks,

