Cluster Question

          If I have a cluster which contains 2 nodes (e.g. 192.168.0.1 and 192.168.0.2), I need
          to provide a cluster address in the admin console, where I might put 192.168.0.1
          and 192.168.0.2.
          In this case, I might need to bind those IP addresses to a single DNS name and use it
          as the cluster address.
          How can I bind those IP addresses? Do I need a DNS server to do that?
          Another question: when the WLS cluster receives a request, what is the first point
          responsible for passing the request on to the nodes? Is it a Java class,
          or something else?
          

          Hello Ramy,
          If I have a proxy, then I might need to have a cluster for the proxy server
          as well. Does that mean I need a local director in front of the proxy cluster?
          Thanks,
          Friend
          "Ramy Saad" <[email protected]> wrote:
          >
          >Hello Friend,
          >
          >I think you need a proxy server (for example with load balancing) which
          >can handle
          >a cluster. In your application you can use the IP address of the proxy server
          >and
          >the proxy decides to which WLS the connection will be established. I think
          >a plug-in
          >for the Apache server is shipped with the BEA software...
          >
          >Regards,
          >Ramy.
          >
          >"Friend" <[email protected]> wrote:
          >>
          >>If I have a cluster which contains 2 nodes (e.g. 192.168.0.1 and 192.168.0.2),
          >>I need to provide a cluster address in the admin console, where I might put
          >>192.168.0.1 and 192.168.0.2.
          >>In this case, I might need to bind those IP addresses to a single DNS name
          >>and use it as the cluster address.
          >>How can I bind those IP addresses? Do I need a DNS server to do that?
          >>Another question: when the WLS cluster receives a request, what is the first
          >>point responsible for passing the request on to the nodes? Is it a Java class,
          >>or something else?
          >
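
          A minimal sketch of the proxy setup Ramy describes, using the Apache plug-in
          that ships with WebLogic. The node addresses come from the question; the port
          and module path are illustrative assumptions:

          # httpd.conf fragment (hypothetical) - the plug-in load-balances across the
          # listed cluster members and fails over if one is down, so the two IPs do
          # not need to be bound to a single DNS name.
          LoadModule weblogic_module modules/mod_wl.so
          <IfModule mod_weblogic.c>
            WebLogicCluster 192.168.0.1:7001,192.168.0.2:7001
          </IfModule>
          <Location />
            SetHandler weblogic-handler
          </Location>

          The plug-in (or the HttpClusterServlet, when a WLS instance is the front end)
          is the "first point" that passes incoming requests on to the cluster nodes.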
          

Similar Messages

  • JMS/Queue cluster question

              Hi
              I have some very basic cluster questions on JMS queues.
              Q1> Let's say I have 3 WLS in a cluster and I create the queue on WLS #1 only -
              then all the other WLS (#2 and #3) should have a stub in their JNDI tree for
              the queue which points to the queue on #1 - right? Basically what I am trying
              to achieve is to have the queue on one server and all the other servers have
              a pointer to it - I believe this is possible in a WLS cluster - right??
              Q2> Is there any way a client of the queue running on a WLS can tell whether
              the queue handle it is using is local (i.e. in the same server) or remote? Is
              the API createQueue("./queuename") going to help here??
              Q3> Is there any way to create a queue dynamically - I guess JMX is the answer,
              right? But I will take this question a bit further - let's say the answer to
              Q1 is yes. In that case, if server #1 crashes, then #2 and #3 have no queues.
              So if they try to create a replica of the queue (as on server #1) - pointing
              to the same file store - can they do it?? I want only one of them to succeed
              in creating the queue, and the queue should have all the data of the #1 queue
              (a 1-to-1 replica).
              All I want is the concept of a primary and secondary queue in a cluster: go
              on using the primary queue, but if it fails use the secondary queue - kind of
              like the HttpSession replication concept in clusters. My cluster's purpose is
              more failover than load balancing.
              TIA
              Anamitra
              

              Anamitra wrote:
              > Hi Tom
              > 7.0 is definitely an option for me. So let's take the scenario of a JMS cluster
              > and 7.0.
              >
              > I do not understand what you mean by HA framework?
              An HA framework is a third-party product that can be used to automatically restart a failed server
              (perhaps on a new machine), and that will guarantee that the same server isn't started in two
              different places (that would be bad). There are a few of these HA products; "Veritas" is one of
              them. Note that if you are using JMS file stores or transactions, both of which depend on the disk,
              you must make sure that the files are available on the new machine. One approach to this is to use
              what is known as a "dual-ported" disk.
              > If I am using a cluster of 3 WLS
              > 7.0 servers - as you have said, I can create a distributed queue with a forward delay attribute
              > set to 0 if I have the consumer on only one server, say server #1.
              > But still, if server #1 goes down, you say that the queues on server #2 and server
              > #3 will not have access to the messages which were stuck in the server #1 queue when
              > it went down - right?
              Right, but is there a point in forwarding the messages to your consumer's destination if your
              application is down?
              If your application can tolerate it, you may wish to consider allowing multiple instances of it (one
              per physical destination). That way if something goes down, only those messages are out-of-business
              until the application comes back up...
              >
              >
              > Why can't the other servers see them - they all point to the same store, right?
              > thanks
              > Anamitra
              >
              Again, multiple JMS servers cannot share a store. Nor can multiple stores share a file. That will
              cause corruption. Multiple stores CAN share a database, but can't use the same tables in the
              database.
              Tom
              >
              > Tom Barnes <[email protected]> wrote:
              > >
              > >
              > >Anamitra wrote:
              > >
              > >> Hi
              > >> I have some very basic cluster questions on JMS queues. Let's say Q1> I
              > >> have 3 WLS in a cluster. I create the queue on WLS #1 only - then all the
              > >> other WLS (#2 and #3) should have a stub in their JNDI tree for the queue
              > >> which points to the queue on #1 - right?
              > >
              > >Its not a stub. But essentially right.
              > >
              > >> Basically what I am trying to achieve is to have the queue on one server
              > >> and all the other servers have a pointer to it - I believe this is possible
              > >> in a WLS cluster - right??
              > >
              > >Certainly.
              > >
              > >>
              > >> Q2> Is there any way a client of the queue running on a WLS can tell
              > >> whether the queue handle it is using is local (i.e. in the same server)
              > >> or remote? Is the API createQueue("./queuename")
              > >> going to help here??
              > >
              > >That would do it. This returns the queue on the CF side of the established
              > >Connection.
              > >
              > >>
              > >> Q3> Is there any way to create a queue dynamically - I guess JMX is the
              > >> answer, right?
              > >> But I will take this question a bit further - let's say the answer to Q1
              > >> is yes. In that case, if server #1 crashes, then #2 and #3 have no queues.
              > >> So if they try to create a replica of the queue (as on server #1) -
              > >> pointing to the same file store - can they do it??
              > >> I want only one of them to succeed in creating the queue, and the queue
              > >> should have all the data of the #1 queue (a 1-to-1 replica).
              > >
              > >No. Not possible. Corruption city.
              > >Only one server may safely access a store at a time.
              > >If you have an HA framework that can ensure this atomicity fine, or are
              > >willing
              > >to ensure this manually then fine.
              > >
              > >>
              > >>
              > >> All I want is the concept of a primary and secondary queue in a cluster:
              > >> go on using the primary queue, but if it fails use the secondary queue -
              > >> kind of like the HttpSession replication concept in clusters. My cluster's
              > >> purpose is more failover than load balancing.
              > >
              > >If you use 7.0 you could use a distributed destination, with a high weight
              > >on the destination
              > >you want used most. Optionally, 7.0 will automatically forward messages
              > >from distr. dest
              > >members that have no consumers to those that do.
              > >
              > >In 6.1 you can emulate a distributed destination this way (from an upcoming
              > >white-paper):
              > >Approximating Distributed Queues in 6.1
              > >
              > >If you wish to distribute the destination across several servers in a cluster,
              > >use the distributed
              > >destination features built into WL 7.0. If 7.0 is not an option, you can
              > >still approximate a simple
              > >distributed destination when running JMS servers in a "single-tier"
              > >configuration. Single-tier indicates
              > >that there is a local JMS server on each server that a connection factory
              > >is targeted at. Here is a
              > >typical scenario, where producers randomly pick which server and consequently
              > >which part of the
              > >distributed destination to produce to, while consumers in the form of MDBs
              > >are pinned to a particular
              > >destination and are replicated homogeneously to all destinations:
              > >
              > >· Create JMS servers on multiple servers in the cluster. The servers will
              > >collectively host the
              > >distributed queue "A". Remember, the JMS servers (and WL servers) must
              > >be named differently.
              > >
              > >· Configure a queue on each JMS server. These become the physical destinations
              > >that collectively become
              > >the distributed destination. Each destination should have the same name
              > >"A".
              > >
              > >· Configure each queue to have the same JNDI name "JNDI_A", and also take
              > >care to set the destination's
              > >"JNDINameReplicated" parameter to false. The "JNDINameReplicated" parameter
              > >is available in 7.0, 6.1SP3
              > >or later, or 6.1SP2 with patch CR061106.
              > >
              > >· Create a connection factory, and target it at all servers that have a
              > >JMS server with "A".
              > >
              > >· Target the same MDB pool at each server that has a JMS server with destination
              > >"A", and configure its
              > >destination to be "JNDI_A". Do not specify a connection factory URL when
              > >configuring the MDB, as it can
              > >use the server's default JNDI context that already contains the destination.
              > >
              > >· Producers look up the connection factory, create a connection, then a
              > >session as usual. Then producers
              > >look up the destination by calling javax.jms.QueueSession.createQueue(String).
              > >The parameter to
              > >createQueue requires a special syntax, the syntax is "./<queue name>", so
              > >"./A" works in this example.
              > >This will return a physical destination of the distributed destination that
              > >is local to the producer's
              > >connection. This syntax is available on 7.0, 6.1SP3 or later, and 6.1SP2
              > >with patch CR072612.
              > >
              > >This design pattern allows for high availability, as if one server goes
              > >down, the distributed destination
              > >is still available and only the messages on that one server become unavailable.
              > > It also allows for high
              > >scalability as speedup is directly proportional to the number of servers
              > >on which the distributed
              > >destination is deployed.
              > >
              > >
              > >
              > >>
              > >> TIA
              > >> Anamitra
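
              A minimal Java sketch of the producer steps in the white-paper excerpt quoted
              above. The connection factory JNDI name is an illustrative assumption; the
              "./A" syntax is the one the excerpt documents (7.0, 6.1SP3 or later, or
              6.1SP2 with patch CR072612):

              import javax.jms.Queue;
              import javax.jms.QueueConnection;
              import javax.jms.QueueConnectionFactory;
              import javax.jms.QueueSender;
              import javax.jms.QueueSession;
              import javax.jms.Session;
              import javax.naming.InitialContext;

              public class LocalMemberProducer {
                  public static void main(String[] args) throws Exception {
                      InitialContext ctx = new InitialContext();
                      // "myCF" is a hypothetical JNDI name for a factory targeted at
                      // every server hosting a member of distributed queue "A".
                      QueueConnectionFactory cf =
                          (QueueConnectionFactory) ctx.lookup("myCF");
                      QueueConnection con = cf.createQueueConnection();
                      QueueSession session =
                          con.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                      // "./A" resolves to the member destination that is local to
                      // this connection, per the excerpt above.
                      Queue queue = session.createQueue("./A");
                      QueueSender sender = session.createSender(queue);
                      sender.send(session.createTextMessage("hello"));
                      con.close();
                  }
              }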
              

  • Front-end/back-end cluster question

              

    Patrick Power wrote:
              > Thanks for your reply, Prasad. I was surprised none of the BEA engineers
              > wished to touch this one. What do you suppose is up with that? Either
              > they are too busy, or possibly my question is too dumb.
              >
              I am from BEA, so it's not that we are not responding ;).
              >
              > Back to the issue: Yes, we will use NES to bridge/proxy into the servlet front-end
              > cluster, potentially with Directors on the very front of the topology for
              > balancing. Your diagram as such:
              >
              > <Netscape/IIS/Apache/WLS FRONT END> ----- <CLUSTER OF WEBLOGIC SERVER
              > > SERVING SERVLETS> --- <CLUSTER OF WEBLOGIC SERVERS SERVING EJB>
              >
              > 1) Does <Netscape/IIS/Apache/WLS FRONT END> mean NES with proxy shared lib,
              > with a WLS service definition into cluster in obj.conf? I assume yes.
              Yes.
              >
              > 2) I would assume that <CLUSTER OF WEBLOGIC SERVERS SERVING SERVLETS> would
              > need the WLS HttpClusterServlet to reach the <CLUSTER OF WEBLOGIC SERVERS SERVING
              > EJB> all the way in the back.
              No. I was splitting presentation logic (namely servlets and jsp) and business
              logic (ejb) into two layers. Again you don't have to split it into two. You can
              colocate them both together. You could use NES or IIS or Apache or WLS. You
              don't need HttpClusterServlet.
              Let's get this straight.
              1. You need our proxy plug-in for failover and to load-balance the requests that
              are going to the presentation logic.
              2. From the presentation-logic layer, when you talk to backend business-logic
              providers (like an EJB cluster), if you use stateless session beans we provide
              failover and load balancing. In the future we will support clustered stateful
              session beans as well. Therefore you don't need a load balancer here.
              3. HttpClusterServlet should run only in front of the presentation-logic cluster,
              and it supports HTTP only.
              Hope this helps.
              - Prasad
              > The NES proxy would only proxy into the f/e
              > cluster, right? You're not suggesting an external proxy of some type, are
              > you? The HttpClusterServlet is for WLS cluster-to-cluster proxies.
              > 3) A load balancer between the wls f/e and wls b/e clusters? That doesn't
              > seem applicable here. Once again, it should be HttpClusterServlet for WLS
              > cluster-to-cluster proxies.
              > 4) "use two or three proxy servers to avoid single point of failure."
              > Hmmm, once again - are we talking the WLS HttpClusterServlet proxy? Well,
              > that's the inital question: Can I have more than one HttpClusterServlet
              > proxy in the front-end cluster, proxying to the back-end cluster?
              > Otherwise, internally from this WLS architecture perspective, it is a single
              > point of failure.
              >
              > An example: 10 instances in the f/e cluster. Can more than one of these
              > instances have the WLS HttpClusterServlet proxy to the b/e cluster? Or are
              > there instances of the WLS HttpClusterServlet proxy in all 10 f/e cluster
              > instances?
              >
              > Cheers, Pat
              >
              > Prasad Peddada <[email protected]> wrote in message
              > news:[email protected]...
              > >
              > >
              > > Patrick Power wrote:
              > >
              > > > I know that this topic was addressed to some degree here in an earlier
              > > > posting, but I still have a question regarding the architecture
              > > > design:
              > > >
              > > > If configuring a front-end cluster for servlets/sessions and a
              > > > back-end cluster for remote services -- you route requests to the
              > > > back-end using the WLS proxy servlet. ok, got that part.
              > >
              > > Not quite. The typical scenario is
              > >
              > > <Netscape/IIS/Apache/WLS FRONT END> ----- <CLUSTER OF WEBLOGIC SERVER
              > > SERVING SERVLETS> --- <CLUSTER OF WEBLOGIC SERVERS SERVING EJB>
              > >
              > > You don't proxy and serve servlets from the same server.
              > >
              > > >
              > > > The question: Is there a single instance of the wls proxy servlet in
              > > > the front-end cluster? Or, is it on every instance in the front-end
              > > > cluster? What is the failover mechanism, in the case of a single
              > > > instance of proxy servlet in the f-e cluster failing?
              > >
              > > To prevent that, you need to use some kind of h/w or software load
              > > balancer and then use two or three proxy servers to avoid a single
              > > point of failure.
              > >
              > > > Is it a single point of failure between the 2 clusters?
              > > >
              > > > Thanx in advance for your help.
              > > >
              > > > BTW, I think Wei, Kumar and the other BEA folks cruising this group
              > > > have been doing a bang-up job of providing badly-needed detail on this
              > > > subject area - material that is largely absent from the documentation.
              > > > Good job.
              > > >
              > > >
              > >
              > > --
              > > Cheers
              > >
              > > - Prasad
              > >
              > >
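
     A minimal sketch of how the HttpClusterServlet discussed above is registered in
     the proxying server's web.xml. Host names and port are illustrative, and the
     servlet class path can vary by WLS release (an assumption, not taken from the
     thread):

     <!-- hypothetical web.xml fragment for the WLS instance acting as the proxy -->
     <servlet>
       <servlet-name>HttpClusterServlet</servlet-name>
       <servlet-class>weblogic.servlet.proxy.HttpClusterServlet</servlet-class>
       <init-param>
         <param-name>WebLogicCluster</param-name>
         <param-value>servletNode1:7001|servletNode2:7001</param-value>
       </init-param>
     </servlet>
     <servlet-mapping>
       <servlet-name>HttpClusterServlet</servlet-name>
       <url-pattern>/</url-pattern>
     </servlet-mapping>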
              

  • Windows 2008 Cluster question on using a new cluster drive source from shrinking existing disk

     I have a two-node Windows 2008 R2 Enterprise SP1 cluster. It has a basic cluster setup of one quorum disk (Q:) and one data disk (E:) which is 2.7 TB in size. This cluster is connected to a shared Dell disk array.
     My question is: can I safely shrink the 2.7 TB drive down, carve out a 500 GB disk from the same disk, and use it for a new cluster disk resource? We want to install Globalscape SFTP software on this new disk for use as a cluster resource.
     Will this work without crashing the cluster?
     Thanks,
     Gonzolean

     Hi,
    Thank you for posting your issue in the forum.
    I am trying to involve someone familiar with this topic to further look at this issue. There might be some time delay. Appreciate your patience.
    Thank you for your understanding and support.
    Best Regards,
    Andy Qi
    TechNet Community Support

  • Sun Cluster question

    Hello everyone
     I've inherited an Oracle Solaris system holding Sybase ASE databases. The system consists of two nodes inside a Sun Cluster. Each of the nodes hosts 2 Sybase database instances, where one of the nodes is active and the other is standing by. The scenario at hand is that when any of the databases on one node fails for whatever reason, the whole system gets shifted to the second node to keep the environment going. That works fine.
     My intended scenario:
     Each node holds 2 database instances, and both nodes ARE working at the same time, so that each one is serving one instance of the database. In the event of failure on one node, the other one should assume the role of BOTH database instances till the first one gets fixed.
     The question is: is that possible? And if it is, does that require breaking the whole cluster and rebuilding it, or can this be done online without bringing down the system?
    Thanks a lot in advance

     What you propose will not work either. E.g. there is no logic implemented to fence the underlying zpool from one node to the other in such a configuration.
     Also, the current SUNW.HAStoragePlus(5) manpage documents:
             Note - SUNW.HAStoragePlus does not support file systems
                    created on ZFS volumes.
                    You cannot use SUNW.HAStoragePlus to manage a ZFS
                    storage pool that contains a file system for which
                    the ZFS mountpoint property is set to legacy or
                    none. [...]
    Greets
    Thorsten

  • Newbie cluster question: where does the admin server live?

     Hello, I'm looking at clustering for our application where I work, and so I'm reading through the cluster-related documentation for 11g. I have a sort of architecture question: where should the admin server be started? Hypothetically speaking, if you have two nodes in your cluster, would the AdminServer run on a 3rd box which sort of stood in front of the two nodes?
    Thanks much -

     The ideal situation would be for your admin server to be separate from the machines hosting your managed servers. This allows you to control access to the admin server, eliminates the performance impact of having the admin and managed servers on the same host, and limits the impact if your #1 host fails, etc.
     But companies may be unwilling to invest in a distinct (3rd) host just for the admin server, especially if you have multiple environments (prod, testing, dev, etc.).
     So usually the admin server winds up sharing a host with a managed server.

  • MDB/Topic/WLS cluster question

              Hi
              I was going through some WLS 8.1 docs on JMS and had a question about Topics and WLS
              in a cluster config, where say I have 3 servers, with server #1 hosting the Topic
              [not a distributed destination]. I have an ear file containing an MDB with
              no pool size limit. After deploying the ear in the cluster - let's say that each
              server in the cluster has 5 instances of the MDB [just an example] and a message
              is published on the Topic.
              Q1> Will all the 3 servers get a [one and only one] copy of that message? [my guess
              is yes]
              Q2> Only 1 instance [out of 5] of the MDB per server will get the message - right?
              Q3> Had I had a separate deployment of the same MDB class in the EAR file for
              the same Topic - that's just going to get treated as a completely separate subscriber
              independent of the first MDB, though the implementing class is the same - right?
              thanks
              Anamitra
              

              Anamitra wrote:
              > Hi
              > I was going through some WLS 8.1 docs on JMS and had a question about Topics and WLS
              > in a cluster config, where say I have 3 servers, with server #1 hosting the Topic
              > [not a distributed destination]. I have an ear file containing an MDB with
              > no pool size limit. After deploying the ear in the cluster - let's say that each
              > server in the cluster has 5 instances of the MDB [just an example] and a message
              > is published on the Topic.
              >
              > Q1> Will all the 3 servers get a [one and only one] copy of that message? [my guess
              > is yes]
              Yes.
              > Q2> Only 1 instance [out of 5] of the MDB per server will get the message - right?
              Yes.
              > Q3> Had I had a separate deployment of the same MDB class in the EAR file for
              > the same Topic - that's just going to get treated as a completely separate subscriber
              > independent of the first MDB, though the implementing class is the same - right?
              Yes.
              >
              > thanks
              > Anamitra
              >
              For a little more information, I'm attaching notes on durable
              subscriber MDBs.
              A JMS durable subscription is uniquely identified within a cluster by a combination of "connection-id" and "subscription-id". Only one active connection may use a particular "connection-id" within a WebLogic cluster.
              In WebLogic 8.1 and previous, a durable topic subscriber MDB uses its name to generate its client-id. Since JMS enforces uniqueness on this client-id, this means that if a durable subscriber MDB is deployed to multiple servers only one server will be able to connect. Some applications want a different behavior where
              each MDB pool on each server gets its own durable subscription.
              The MDB connection id, which is unique within a cluster, comes from:
              1) The "ClientId" attribute configured on the WebLogic connection factory.
              This defaults to null. Note that if the ClientId is set on a connection
              factory, only one connection created by the factory
              may be active at a time.
              2) If (1) is not set, then, as with the subscriber-id,
              the connection-id is derived from the jms-client-id descriptor attribute:
              <jms-client-id>MyClientID</jms-client-id>
              (the weblogic dtd)
              3) If (1) and (2) are not set, then, as with the subscriber-id,
              the connection-id is derived from the ejb name.
              The MDB durable subscription id, which must be unique on its topic, comes from:
              1) <jms-client-id>MyClientID</jms-client-id>
              (the weblogic dtd)
              2) if (1) is not set then the client-id
              comes from the ejb name.
              The above prevents a durable topic subscriber MDB from running on multiple servers. When an instance of the MDB starts on another server, it deploys successfully, but a conflict is detected and the MDB fails to fully connect to JMS. The work-around is the following:
              A) Create a custom connection-factory for each server:
              1) configure "JNDIName" to the same value across all servers
              ("myMDBCF" in this example)
              2) configure "ClientId" to a unique value per server
              3) enable "UserTransactionsEnabled"
              4) enable "XAConnectionFactoryEnabled"
              5) set "AcknowledgePolicy" to "ACKNOWLEDGE_PREVIOUS"
              6) target the CF at a single WebLogic server
              (Number 5 is required for non-transactional topic MDBs)
              B) In the MDB's weblogic-ejb-jar.xml descriptor, set the MDB's connection
              factory to the JNDI name of the custom connection factories configured in
              (A). Optionally, also specify the subscriber-id via the jms-client-id
              attribute.
              <weblogic-ejb-jar>
                <weblogic-enterprise-bean>
                  <ejb-name>exampleBean</ejb-name>
                  <message-driven-descriptor>
                    <connection-factory-jndi-name>myMDBCF</connection-factory-jndi-name>
                    <jms-client-id>myClientID</jms-client-id>
                  </message-driven-descriptor>
                </weblogic-enterprise-bean>
              </weblogic-ejb-jar>
              C) Target the application at the same servers that have the custom connection
              factories targeted at them.
              Notes/Limitations:
              1) If the MDB is moved from one server to another, the MDB's corresponding
              connection-factory must be moved with it.
              2) This work-around will not work if the destination is not in the same
              cluster as the MDB. (The MDB can not use the local connection factory, which
              contains the connection-id, as connection factories do not work unless they
              are in the same cluster as the destination.)
              3) This work-around will not work for non-WebLogic JMS topics.
              4) A copy of each message is sent to each server's MDB pool.
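
              For reference, a minimal sketch of a durable topic subscriber MDB matching
              the descriptor above; the class body is hypothetical and elides any business
              logic (the durable subscription itself is declared in the deployment
              descriptors, not in code):

              import javax.ejb.MessageDrivenBean;
              import javax.ejb.MessageDrivenContext;
              import javax.jms.JMSException;
              import javax.jms.Message;
              import javax.jms.MessageListener;
              import javax.jms.TextMessage;

              public class ExampleBean implements MessageDrivenBean, MessageListener {
                  private MessageDrivenContext ctx;

                  public void setMessageDrivenContext(MessageDrivenContext ctx) {
                      this.ctx = ctx;
                  }
                  public void ejbCreate() {}
                  public void ejbRemove() {}

                  // With the per-server connection factories described above, each
                  // server's MDB pool gets its own durable subscription.
                  public void onMessage(Message msg) {
                      try {
                          if (msg instanceof TextMessage) {
                              System.out.println("Received: " + ((TextMessage) msg).getText());
                          }
                      } catch (JMSException e) {
                          e.printStackTrace();
                      }
                  }
              }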
              

  • Cluster (question à 10 cent)

     Hello everyone,
     The context:
     a "relatively" complex project comprising multiple VIs and sub-VIs.
     A cluster "runs through" most of these VIs and sub-VIs .... a bit like a "data bus".
     I am in the development and debugging phase .... so I regularly modify the contents of this "cluster_data_bus" (it is a typedef, of course).
     This cluster contains x elements .... including the element "toto".
     Question:
     Is the element "toto" still useful?
     In other words ... is the element "toto" still used somewhere? (or is it no longer needed)
     My solution (rather crude) ...
     remove the element "toto" from the cluster and "see" whether the code becomes broken somewhere.
     Is there a "nicer solution" to this type of situation?
     Thanks
     Solved!
     Go to Solution.

     A big cluster ...
     It is not really a "big big" cluster.
     What I am saying is simply the result of an observation.
     Yes, each sub-VI does not need the whole thing.
     But "splitting" it up according to need ... increases the number of clusters to "pass around" and increases (a bit everywhere) the number of shift registers.
     I have tried "both" ... at the start I had 4 separate clusters ... plus some parameters that I passed "independently".
     I grouped everything into one single cluster ... and I use the data, as needed, with unbundle_by_name.
     The result is certainly not "slower" ... (even a little the opposite)
     and the graphical readability is much greater.
     Moreover, I have "everything" at hand in one single cluster ... no more wondering "what" is "where".
     That said ... once again ... it is not really an "enormous" cluster.
     Attached (it is bus.ctl)
     Attachments:
     CTL.zip 95 KB

  • Conventional Cluster questions (client reconect & sizing)

     hi *,
     I have studied the documentation of MQ 4.2 and I have a few specific questions about conventional cluster mode.
     Suppose I set up a cluster like this:
     datacenter1: host1 runs broker1 (master) and appserver1, with a consumer and producer for q1 (persistent)
     datacenter2: host2 runs broker2 (slave) and appserver2, with a consumer and producer for q1
     Consumers and producers are configured to take the local (nearest) broker as their home broker (mq://localhost:7676,mq://otherdatacenterhost:7676).
     1) What exactly is the impact if broker1 (the master, and only broker1) fails, besides the data that was in transit on this broker being unavailable? What will I not be able to do?
     2) When point 1 takes place (broker1 fails), the consumers and producers of appserver1 will switch to broker2, right? When broker1 comes up again, will the clients on appserver1 at some point try to switch back to their home broker?
     3) Since datacenters 1 and 2 are separated geographically, what delay between brokers is acceptable for JMQ? And how do the brokers communicate, in particular?
     4) Sizing: on our current STCMS JMS implementation we have traffic of
     ~ 1,500,000 messages / day
     ranging from 0 KB to 100,000 KB payload,
     distributed over 1000 JMS queues, which are distributed over
     8 STCMS JMS servers (2 servers for every business domain: 1 warehouse, 2 finance, ...).
     Would it be feasible to just create one cluster consisting of 1 master broker and 8 cluster broker members (servers),
     set up like this?
     host1: appserver warehouse1, appserver finance1, appserver otherbusiness1, appserver otherotherbusiness1,
            broker warehouse1, broker finance1, broker otherbusiness1, broker otherotherbusiness1,
            plus the master broker (which does basically nothing but mastering - does it need to be standalone, doing nothing but admin tasks?)
     host2: appserver warehouse2, appserver finance2, appserver otherbusiness2, appserver otherotherbusiness2,
            broker warehouse2, broker finance2, broker otherbusiness2, broker otherotherbusiness2
     e.g.
     broker warehouse1 basically speaks to appserver warehouse1, maybe to appserver warehouse2 in failover cases, and seldom interroutes messages to other brokers, e.g. finance1, finance2 or otherbusiness1;
     broker finance1 basically speaks to appserver finance1 (failover: appserver finance2) and seldom interroutes messages to other brokers, e.g. warehouse1 or warehouse2.
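
     A minimal sketch of the client-side broker address list described above, using the Sun/Open MQ client API. The property names are from com.sun.messaging.ConnectionConfiguration; the hosts and behavior settings are illustrative assumptions:

     import com.sun.messaging.ConnectionConfiguration;
     import com.sun.messaging.ConnectionFactory;

     public class HomeBrokerFactory {
         public static ConnectionFactory create() throws javax.jms.JMSException {
             ConnectionFactory cf = new ConnectionFactory();
             // Nearest broker first; the client fails over to the other datacenter
             // when its home broker is unreachable.
             cf.setProperty(ConnectionConfiguration.imqAddressList,
                 "mq://localhost:7676,mq://otherdatacenterhost:7676");
             cf.setProperty(ConnectionConfiguration.imqAddressListBehavior, "PRIORITY");
             cf.setProperty(ConnectionConfiguration.imqReconnectEnabled, "true");
             return cf;
         }
     }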

  • Cisco ise 1.2 install certificates for ise cluster question

     hello all, I have an ISE cluster of 4 devices: 1 primary admin/secondary monitoring, 1 secondary admin/primary monitoring, and 2 policy nodes.
     I need to install public CA certs on them. Can I generate 1 CSR on one of the nodes that includes a SAN with the DNS names of all the nodes,
     and therefore get only 1 cert from the CA, then export and import the same cert into all the other nodes?
     Or do I have to generate 1 CSR for each node and purchase 4 certs? Wildcard certs are not an option. Thanks,

    ISE allows you to install a certificate with multiple Subject Alternative Name (SAN) fields. A browser reaching the ISE using any of the listed SAN names will accept the certificate without any error as long as it trusts the CA that signed the certificate.
    The CSR for such a certificate cannot be generated from the ISE GUI. http://www.cisco.com/c/en/us/support/docs/security/identity-services-engine-software/113675-ise-binds-multi-names-00.html
    Cisco ISE checks for a matching subject name as follows:
    1. Cisco ISE looks at the subject alternative name (SAN) extension of the certificate. If the SAN contains one or more DNS names, then one of the DNS names must match the FQDN of the Cisco ISE node. If a wildcard certificate is used, then the wildcard domain name must match the domain in the Cisco ISE node's FQDN.
    2. If there are no DNS names in the SAN, or if the SAN is missing entirely, then the Common Name (CN) in the Subject field of the certificate or the wildcard domain in the Subject field of the certificate must match the FQDN of the node.
    3. If no match is found, the certificate is rejected.
    Regards,
    Jatin Katyal
    *Do rate helpful posts*
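
     A minimal sketch of generating such a multi-SAN CSR with OpenSSL, outside the ISE GUI as the linked document describes; all host names are illustrative assumptions:

     # ise-san.cnf (hypothetical)
     [ req ]
     default_bits       = 2048
     prompt             = no
     distinguished_name = dn
     req_extensions     = v3_req
     [ dn ]
     CN = ise-admin1.example.com
     [ v3_req ]
     subjectAltName = @alt_names
     [ alt_names ]
     DNS.1 = ise-admin1.example.com
     DNS.2 = ise-admin2.example.com
     DNS.3 = ise-psn1.example.com
     DNS.4 = ise-psn2.example.com

     # one CSR and key to submit to the CA:
     openssl req -new -newkey rsa:2048 -nodes -keyout ise.key -out ise.csr -config ise-san.cnf

     The signed certificate (with its key) can then be imported into each node, since every node's FQDN appears in the SAN list.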

  • SQL Server Failover Cluster Questions

     Dear All,
     I am building a two-node failover cluster on SQL Server 2012 SP1 (inside Hyper-V as a guest cluster) and want clarification on a few things that I am facing.
     1. I am receiving an MSDTC warning. I can go ahead and create the cluster, but I want to understand whether MSDTC is to be configured as a role on the cluster or not. I plan to run SCVMM, SCOM, Orchestrator and Windows Azure Pack databases
     and reports through it, so in such a scenario, do I need MSDTC? If yes, how big should the MSDTC drive be? Is the following process correct?
     http://www.sqlnotebook.info/configure-msdtc-on-windows-cluster-2012/
     2. During first-node configuration, one needs to provide the "SQL CLUSTER RESOURCE GROUP NAME". Does it have any bearing on how it will be accessed by other servers for databases and logs, or is it just how the cluster resource group
     will be named? Will it be required for every instance that is created inside the cluster? Just to be clear, can one name it according to the instance name?
     3. During instance creation, one needs to provide the "SQL Server Network Name". As stated above, I plan to run SCVMM, SCOM, Orchestrator and Windows Azure Pack databases and reports through it, so would I be required to provide this
     for all instances that I create, or is this only required once in the cluster?
     4. During instance creation, one needs to provide the features required for installation, i.e. instance features and shared features. As stated above, I plan to run SCVMM, SCOM, Orchestrator and Windows Azure Pack databases and reports through
     it, so which features should be selected so that there is less workload on the server?
     5. All the instances use tempdb for the databases that are present inside them. What would be the best practice with respect to tempdb: one tempdb for all instances on the servers on a separate LUN, or each instance having its own tempdb LUN? What
     should be the ideal size of the tempdb LUN?
     6. Should all the disks required for DBs and logs be added to the cluster? Should they be added as normal disks or CSV volumes?
     Thanks in advance.

    Hello,
     1. You can run the Microsoft Distributed Transaction Coordinator service (MSDTC) as a clustered resource on a failover cluster server for increased reliability, based on the failover capabilities of the clustered servers. You can
     refer to the MSDTC section of the following reference to determine whether the MSDTC cluster resource must be created.
     Reference: http://msdn.microsoft.com/en-us/library/ms189910.aspx#MSDTC
     2. The Cluster Resource Group is where the SQL Server failover cluster resources will be placed. Each clustered SQL Server instance belongs to a Failover
     Cluster Resource Group. For example, if you had configured a two-node SQL Server cluster, the clustered instance on those two nodes would belong to the same Cluster Resource Group.
     You can change the Cluster Resource Group name, but note that the following names are reserved and already used as Resource Group names: Available Storage, Cluster Group.
     3. Each SQL Server cluster is assigned a virtual network name and IP address, which client applications use to connect to the clustered SQL Server.
     4. I am not familiar with SCVMM, SCOM and Orchestrator, but you should install the Database Engine Services and the SQL Server Management Tools. If you want to use SQL Server Reporting Services, you can install Reporting Services, but the Report Server service cannot participate
     in a failover cluster.
     5. You can use an isolated disk for the user databases and tempdb of each SQL Server cluster.
     6. Yes. You should use cluster disks, added as Cluster Shared Volumes, to host the data files and logs of the databases.
    http://www.pythian.com/blog/how-to-install-a-clustered-sql-server-2012-instance-step-by-step-part-1/
    Regards,
    Fanny Liu
    TechNet Community Support

  • Extremely basic cluster questions

    I posted this in the Qmaster section, but it's been 24 hours with no response, so I thought I'd try at this more heavily-trafficked board.
    I have a Dual 2.5 G5 and a dual 1.25 G4. I moved all my Final Cut/Adobe licenses from the G4 to the G5 when I bought it.
    1) Do I have to buy a fully working copy of Final Cut Studio (i.e. spend $1300) to create a cluster that will render Compressor and After Effects? (And now that I think about it, will I have to buy another seat of After Effects too?)
    2) Do I need to pay to upgrade the OS on the G4 (currently 10.3.x)?
    3) Will clustering a G4 to my G5 significantly impact my render times?
    Thanks in advance.

     Qmaster won't help you with After Effects renders, AFAIK. You need the AE render engine, which ships free with AE, or GridIron's Nitro products.
    Networked rendering has workflow issues that you must research carefully before you attempt to engage. For instance, AE's distributed rendering creates only image sequences, not QT movies. Converting them to a useful format can cancel any gains from the render.
    The traffic on the Qmaster forum represents its utility. I don't know anyone who can make it work well enough to actually use it and, if there are successful users, they don't post. If I remember the past discussions, G4s are not recommended. You need at least 4 machines to gain any appreciable render improvements and you need to be doing a huge volume of work to justify the management overhead.
    But just scrolling down the Qmaster list of posts turned up several threads that might help you out. Did you look?
    bogiesan

  • Patch cluster question.

     What if, in my Solaris 10 05/08 installation, about 30 patches are missing, including the following:
     118731-01 122660-10 119254-59 138217-01 ....
     How can I fill in those 30 patches? Should I run patchadd separately for every one, or apply the whole patch cluster in single-user (S) mode? Should I do it in the global zone? Must I stop the zones before running the patchadd command?
     Please help.

    Hi Hartmut,
     I kind of got the idea. Just want to make sure. The zones 'romantic' and 'modern' show "installed" as the current status at cluster-1. These 2 zones are in fact running and online at cluster-2. So I will issue your commands below at cluster-2 to detach these zones to "configured" status:
    cluster-2 # zoneadm -z romantic detach
    cluster-2 # zoneadm -z modern detach
    Afterwards, I apply the Solaris patch at cluster-2. Then, I go to cluster-1 and apply the same Solaris patch. Once I am done patching both cluster-1 and cluster-2, I will
    go back to cluster-2 and run the following commands to force these zones back to "installed" status :
    cluster-2 # zoneadm -z romantic attach -f
    cluster-2 # zoneadm -z modern attach -f
    CORRECT ?? Please let me know if I am wrong or if there's any step missing. Thanks much, Humphrey
     root@cluster-1# zoneadm list -iv
       ID  NAME       STATUS     PATH             BRAND   IP
        0  global     running    /                native  shared
       15  classical  running    /zone-classical  native  shared
        -  romantic   installed  /zone-romantic   native  shared
        -  modern     installed  /zone-modern     native  shared

  • HFM Cluster Question

     We are having some issues with cluster configuration. We have two servers, A and B. As there is no round-robin algorithm followed by the HFM server, every time a user logs on he is assigned a fixed server. We want to know how to overcome this issue, as we don't want to assign a specific server to the user but want to assign it by the job sequence. Can anyone help me out?
    Thanks in advance.

    We use the F5 load balancer for this. Not sure if it's the best brand of load balancer, but our I/T group seems to approve of the performance.
    http://www.f5.com/glossary/load-balancing.html

  • ZSM cluster question 7SP1

    Hello everyone,
    I'm running a NW6.5 cluster that currently has ZSM6.5SP1 on it. Rather
    than move to SP2, I'm going to upgrade to 7SP1. The current
    configuration runs 2 distributors - in separate cluster resources that
    never cross paths.
    I want to make some changes so that I can manage the base cluster
    servers as well. I want to run a distributor in a cluster resource and
    I also want to run subscribers on every cluster node.
    The documentation discusses cluster-ready and cluster-aware as two
    options. I want both.
    Cluster-ready puts the distributor/subscriber in a cluster resource
    and lets me migrate it to any of its associated nodes - which I want.
    This is fine for the distributor, but it leaves the base servers
    unmanageable by ZSM.
    Cluster-aware lets me install the subscribers to the base cluster
    nodes, which I also want.
    I'm going to be installing in a test environment using this
    configuration - install as cluster-aware so that we can have a
    subscriber on each cluster node. Then I'm planning to install a
    distributor/subscriber as cluster-ready to a cluster resource so that
    I can migrate it across its assigned servers.
    Has anyone done this?
    Comments/suggestions would be appreciated.
    Thanks
    Carol Ann

    On Wed, 26 Jul 2006 16:18:58 GMT, Carol Ann wrote:
    > Has anyone done this?
     No, that doesn't work. You can either install the subscriber on the node or on
     the resource, not on both.
    Marcus Breiden
    If you are asked to email me information please change -- to - in my e-mail
    address.
    The content of this mail is my private and personal opinion.
    http://www.edu-magic.net
