Conventional cluster questions (client reconnect & sizing)

hi *,
i have studied the documentation of MQ 4.2 and have a few specific questions about conventional cluster mode.
when i set up a cluster like this:
datacenter1                                           datacenter2
host1                                                 host2
broker1 (master)                                      broker2 (slave)
appserver1                                            appserver2
consumer + producer for q1 (persistent)               consumer + producer for q1
consumers and producers are configured to take the local (nearest) broker as their home broker (mq://localhost:7676,mq://otherdatacenterhost:7676).
1) what exactly is the impact if broker1 (the master, and only broker1) fails and is not available, besides the loss of the data that was in transit on that broker? what will i not be able to do?
2) when point one happens (broker1 fails), the consumers & producers of appserver1 will switch to broker2, right? when broker1 comes up again, will the clients on appserver1 at some point try to switch back to their home broker?
3) since datacenter1 and datacenter2 are geographically separated, what latency between the brokers is acceptable for JMQ? and how exactly do the brokers communicate?
4) sizing: on our current STCMS JMS implementation we have a traffic of
~ 1 500 000 messages / day
ranging from 0 KB to 100 000 KB payload,
distributed over 1000 JMS queues, which are in turn distributed over
8 STCMS JMS servers (2 servers for every business domain (1 warehouse, 2 finance, ...)).
would it be feasible to just create one cluster consisting of 1 master broker and 8 cluster broker members (servers),
set up like this?
host1                                                 host2
appserver warehouse1                                  appserver warehouse2
appserver finance1                                    appserver finance2
appserver otherbusiness1                              appserver otherbusiness2
appserver otherotherbusiness1                         appserver otherotherbusiness2
master broker (does basically nothing but mastering; does it need to be a standalone broker doing nothing but admin tasks?)
broker warehouse1                                     broker warehouse2
broker finance1                                       broker finance2
broker otherbusiness1                                 broker otherbusiness2
broker otherotherbusiness1                            broker otherotherbusiness2
e.g.
broker warehouse1 basically talks to appserver warehouse1, maybe to appserver warehouse2 in failover cases, and only seldom routes messages to other brokers, e.g. finance1, finance2 or otherbusiness1
broker finance1 basically talks to appserver finance1 (failover: appserver finance2) and only seldom routes messages to other brokers, e.g. warehouse1 or warehouse2
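for reference, a minimal sketch of how the client side of such a setup might be configured with the Open MQ client library (com.sun.messaging); the host names, reconnect values and the PRIORITY behavior are illustrative assumptions, not a recommendation:

    import javax.jms.Connection;
    import javax.jms.JMSException;

    import com.sun.messaging.ConnectionConfiguration;
    import com.sun.messaging.ConnectionFactory;

    // Sketch: each appserver lists its local broker first and the broker in the
    // other datacenter second, so the "home" broker is preferred and the remote
    // one is only used for failover. Host names are placeholders.
    public class LocalFirstConnection {
        public static Connection connect() throws JMSException {
            ConnectionFactory cf = new ConnectionFactory();
            cf.setProperty(ConnectionConfiguration.imqAddressList,
                    "mq://localhost:7676,mq://otherdatacenterhost:7676");
            // PRIORITY = always try the addresses in the order listed above.
            cf.setProperty(ConnectionConfiguration.imqAddressListBehavior, "PRIORITY");
            cf.setProperty(ConnectionConfiguration.imqReconnectEnabled, "true");
            cf.setProperty(ConnectionConfiguration.imqReconnectAttempts, "5");
            cf.setProperty(ConnectionConfiguration.imqReconnectInterval, "3000");
            return cf.createConnection();
        }
    }

whether (and when) a client that failed over to broker2 migrates back to broker1 once it recovers is exactly what question 2 above asks; the snippet only shows the address-list side of it.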

Similar Messages

  • Client reconnection to conventional cluster taking about 9 minutes

    Hi,
    I've set up a 3-node OpenMQ (4.4) conventional cluster and am having trouble with client reconnections when I simulate a halt on a node.
    I have two threads within the same process, sending to a pre-configured destination and replying using a temporary queue. Each has a JMS connection to the same home broker. If I bring the home broker down with either the imqcmd command or with a ctrl+c, the goodbye messages are sent and the client connections are immediately re-established by the connection factory with the next broker. But if I remove the broker from the cluster by simulating a BSOD, power failure, etc., no goodbye messages are sent (obviously) and the connected clients are left 'connected' to the dead broker and only reconnect to another broker after about 9 minutes.
    I read in the docs that the 'imqPingInterval' property can be used to test the client connections, but it doesn't seem to be doing the trick. After I've cut the power to the broker's virtual machine, I'm still seeing the ping messages in the logs, e.g.:
    FINEST: Outbound Packet:PING(54):296-127.0.1.1(bd:ea:13:9b:ae:a)-51155-1267024410861;BrokerAddress=10.59.148.9:7676(1082), ConnectionID=4090718717872600064, ReconnectEnabled: true, IsConnectedToHABroker: false
    At this point, telnet cannot connect to 10.59.148.9:7676 so it's definitely gone.
    Finally after about 9 minutes, the following shows up in the logs:
    24-Feb-2010 15:26:26 com.sun.messaging.jmq.jmsclient.ExceptionHandler throwJMSException
    FINER: I501
    com.sun.messaging.jms.JMSException: [C4002]: Read packet failed. - cause: java.net.SocketException: No route to host
            at com.sun.messaging.jmq.jmsclient.ExceptionHandler.getJMSException(ExceptionHandler.java:380)
            at com.sun.messaging.jmq.jmsclient.ExceptionHandler.handleException(ExceptionHandler.java:331)
            at com.sun.messaging.jmq.jmsclient.ProtocolHandler.readPacket(ProtocolHandler.java:1796)
            at com.sun.messaging.jmq.jmsclient.ReadChannel.run(ReadChannel.java:1197)
            at java.lang.Thread.run(Thread.java:619)
    Caused by: java.net.SocketException: No route to host
            at java.net.SocketInputStream.socketRead0(Native Method)
            at java.net.SocketInputStream.read(SocketInputStream.java:129)
            at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
            at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
            at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
            at com.sun.messaging.jmq.io.ReadOnlyPacket.readFully(ReadOnlyPacket.java:271)
            at com.sun.messaging.jmq.io.ReadOnlyPacket.readFixedHeader(ReadOnlyPacket.java:191)
            at com.sun.messaging.jmq.io.ReadOnlyPacket.readPacket(ReadOnlyPacket.java:151)
            at com.sun.messaging.jmq.io.ReadWritePacket.readPacket(ReadWritePacket.java:82)
            at com.sun.messaging.jmq.jmsclient.ProtocolHandler.readPacket(ProtocolHandler.java:1758)
            ... 2 more
    24-Feb-2010 15:26:26 com.sun.messaging.jmq.jmsclient.ConnectionImpl logLifeCycle
    FINE: Connection closed.  The connection is closed due to a network problem, broker crashed, or internal error: BrokerAddress=10.59.148.9:7676(1082), ConnectionID=4090718717872604160, ReconnectEnabled: true, IsConnectedToHABroker: false
    And we can then send/receive again.
    The connection factory details are as follows:
    Class:                  com.sun.messaging.ConnectionFactory
    getVERSION():           3.0
    isReadonly():           false
    getProperties():        {imqOverrideJMSPriority=false, imqConsumerFlowLimit=1000, imqOverrideJMSExpiration=false, imqAddressListIterations=1, imqLoadMaxToServerSession=true, imqConnectionType=TCP, imqPingInterval=30, imqSetJMSXUserID=false, imqConfiguredClientID=, imqSSLProviderClassname=com.sun.net.ssl.internal.ssl.Provider, imqJMSDeliveryMode=PERSISTENT, imqConnectionFlowLimit=1000, imqConnectionURL=http://localhost/imq/tunnel, imqBrokerServiceName=, imqJMSPriority=4, imqBrokerHostName=localhost, imqJMSExpiration=0, imqAckOnProduce=, imqEnableSharedClientID=false, imqAckTimeout=10000, imqAckOnAcknowledge=, imqConsumerFlowThreshold=50, imqDefaultPassword=guest, imqQueueBrowserMaxMessagesPerRetrieve=1000, imqDefaultUsername=guest, imqReconnectEnabled=true, imqConnectionFlowCount=100, imqAddressListBehavior=PRIORITY, imqReconnectAttempts=1, imqSetJMSXAppID=false, imqConnectionHandler=com.sun.messaging.jmq.jmsclient.protocol.tcp.TCPStreamHandler, imqSetJMSXRcvTimestamp=false, imqBrokerServicePort=0, imqDisableSetClientID=false, imqSetJMSXConsumerTXID=false, imqOverrideJMSDeliveryMode=false, imqBrokerHostPort=7676, imqQueueBrowserRetrieveTimeout=60000, imqSetJMSXProducerTXID=false, imqSSLIsHostTrusted=false, imqConnectionFlowLimitEnabled=false, imqReconnectInterval=3000, imqAddressList=mq://10.59.148.9,mq://10.59.148.17,mq://10.59.148.11, imqOverrideJMSHeadersToTemporaryDestinations=false}
    The brokers are virtualbox guests running 'WinXP 64' and the producer and consumer are both running on Ubuntu 9.10.
    Handling a machine failure seems like a pretty standard scenario for a cluster, so I'm sure there's something I've mis-configured.
    Is there something I can do to fix this?
    If not, the only alternatives I see are to either:
    1) Try the HA cluster. But I'm not sure this will fix the problem if the client is waiting on a dead socket.
    2) Lower the message ack timeout to something like 10 seconds and reconnect to the cluster for a retry every time we get a JMSException. But this is obviously not ideal either.
    Any suggestions would be great.
    Thanks,
    Nick
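    For reference, a minimal sketch of how the timeout-related client properties mentioned above might be tightened programmatically (the property names are the ones shown in the factory dump; the values are illustrative, not a recommendation, and whether a faster ping actually detects a silently dead peer still depends on the TCP stack):

        import javax.jms.JMSException;

        import com.sun.messaging.ConnectionConfiguration;
        import com.sun.messaging.ConnectionFactory;

        // Sketch: tighten client-side timeouts so a broker that disappears without
        // closing its sockets (power cut, BSOD, ...) is noticed sooner.
        public class TightTimeoutFactory {
            public static ConnectionFactory create() throws JMSException {
                ConnectionFactory cf = new ConnectionFactory();
                cf.setProperty(ConnectionConfiguration.imqAddressList,
                        "mq://10.59.148.9,mq://10.59.148.17,mq://10.59.148.11");
                cf.setProperty(ConnectionConfiguration.imqReconnectEnabled, "true");
                // Give up on a pending acknowledgement after 10s instead of blocking
                // on the dead socket indefinitely.
                cf.setProperty(ConnectionConfiguration.imqAckTimeout, "10000");
                // Ping the connection every 10 seconds so idle connections are
                // exercised too.
                cf.setProperty(ConnectionConfiguration.imqPingInterval, "10");
                // Walk the address list only once per reconnect cycle.
                cf.setProperty(ConnectionConfiguration.imqAddressListIterations, "1");
                return cf;
            }
        }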

    Hi Nigel,
    The producer is configured to send messages every 5 seconds and the consumer is sitting with a message listener on the queue.
    The producer send is failing each time with the following trace...
    5-Feb-2010 12:13:19 com.sun.messaging.jmq.jmsclient.ProtocolHandler writePacketNoAck
    FINEST: Outbound Packet:OBJECT_MESSAGE(5):30-127.0.1.1(a2:3f:b1:e2:fd:33)-45573-1267099999075;BrokerAddress=10.59.148.9:7676(1184), ConnectionID=882185479630361088, ReconnectEnabled: true, IsConnectedToHABroker: false
    25-Feb-2010 12:13:19 com.sun.messaging.jmq.jmsclient.ProtocolHandler writePacketNoAck
    FINEST: sent packet ... OBJECT_MESSAGE(5):30-127.0.1.1(a2:3f:b1:e2:fd:33)-45573-1267099999075
    25-Feb-2010 12:13:19 com.sun.messaging.jmq.jmsclient.ProtocolHandler writePacketNoAck
    FINEST: Outbound Packet:PING(54):31-127.0.1.1(eb:66:36:db:8:79)-45574-1267099999156;BrokerAddress=10.59.148.9:7676(1184), ConnectionID=882185479630365184, ReconnectEnabled: true, IsConnectedToHABroker: false
    25-Feb-2010 12:13:19 com.sun.messaging.jmq.jmsclient.ProtocolHandler writePacketNoAck
    FINEST: sent packet ... PING(54):31-127.0.1.1(eb:66:36:db:8:79)-45574-1267099999156
    25-Feb-2010 12:13:24 com.sun.messaging.jmq.jmsclient.ProtocolHandler writePacketNoAck
    FINEST: Outbound Packet:PING(54):32-127.0.1.1(a2:3f:b1:e2:fd:33)-45573-1267100004156;BrokerAddress=10.59.148.9:7676(1184), ConnectionID=882185479630361088, ReconnectEnabled: true, IsConnectedToHABroker: false
    25-Feb-2010 12:13:24 com.sun.messaging.jmq.jmsclient.ProtocolHandler writePacketNoAck
    FINEST: sent packet ... PING(54):32-127.0.1.1(a2:3f:b1:e2:fd:33)-45573-1267100004156
    25-Feb-2010 12:13:24 com.sun.messaging.jmq.jmsclient.ProtocolHandler writePacketNoAck
    FINEST: Outbound Packet:PING(54):33-127.0.1.1(eb:66:36:db:8:79)-45574-1267100004156;BrokerAddress=10.59.148.9:7676(1184), ConnectionID=882185479630365184, ReconnectEnabled: true, IsConnectedToHABroker: false
    25-Feb-2010 12:13:24 com.sun.messaging.jmq.jmsclient.ProtocolHandler writePacketNoAck
    FINEST: sent packet ... PING(54):33-127.0.1.1(eb:66:36:db:8:79)-45574-1267100004156
    25-Feb-2010 12:13:29 com.sun.messaging.jmq.jmsclient.AckQueue printInfo
    WARNING: [W2003]: Broker not responding [OBJECT_MESSAGE(5)] for 10 seconds. Still trying..., broker addr=10.59.148.9:7676(1184), connectionID=882185479630361088, clientID=null, consumerID=14
    25-Feb-2010 12:13:29 com.sun.messaging.jmq.jmsclient.ExceptionHandler throwJMSException
    FINER: I501
    com.sun.messaging.jms.JMSException: [C4000]: Packet acknowledge failed. user=guest, broker=10.59.148.9:7676(1184)
         at com.sun.messaging.jmq.jmsclient.ProtocolHandler.writePacketWithAck(ProtocolHandler.java:712)
         at com.sun.messaging.jmq.jmsclient.ProtocolHandler.writePacketWithAck(ProtocolHandler.java:575)
         at com.sun.messaging.jmq.jmsclient.ProtocolHandler.writePacketWithReply(ProtocolHandler.java:430)
         at com.sun.messaging.jmq.jmsclient.ProtocolHandler.writeJMSMessage(ProtocolHandler.java:1919)
         at com.sun.messaging.jmq.jmsclient.WriteChannel.sendWithFlowControl(WriteChannel.java:154)
         at com.sun.messaging.jmq.jmsclient.WriteChannel.writeJMSMessage(WriteChannel.java:107)
         at com.sun.messaging.jmq.jmsclient.SessionImpl.writeJMSMessage(SessionImpl.java:770)
         at com.sun.messaging.jmq.jmsclient.MessageProducerImpl.writeJMSMessage(MessageProducerImpl.java:203)
         at com.sun.messaging.jmq.jmsclient.MessageProducerImpl.writeJMSMessage(MessageProducerImpl.java:192)
         at com.sun.messaging.jmq.jmsclient.MessageProducerImpl.send(MessageProducerImpl.java:624)
         at com.sun.messaging.jmq.jmsclient.QueueSenderImpl.send(QueueSenderImpl.java:97)
         at uk.co.mydomain.jms.ClientJmsConnection.sendMessage(ClientJmsConnection.java:278)
         at uk.co.mydomain.jms.JmsClientMessageBroker.sendMessage(JmsClientMessageBroker.java:147)
         at test.uk.co.mydomain.messaging.quicktest.QuickTest$2.run(QuickTest.java:67)
    Is there any trace I could get that may be useful?
    Thanks,
    Nick

  • WLAN AP & Client subnet sizing

    Does anyone know of any recommendations regarding sizing of:
    - AP subnets
    - Client subnets
    when designing Cisco wireless networks?
    I've checked out the design guides and various FAQs etc., but haven't come across anything obvious.
    In the case of AP subnets, I wonder if there is a recommended point at which the number of APs in a subnet becomes too high. There must be a break-point where the level of broadcast traffic starts to have a negative impact on performance for all APs in the subnet. I often use an AP subnet range per switch stack or per floor, which seems to work fine, but may not be the best use of limited IP address space. But would it really be advisable to create a 24-bit AP range and then put 250 APs into it?
    The same question applies to client subnets. Again, if I have 500 users, I wouldn't usually create a single 23-bit subnet to accommodate them and then allow that single range to be assigned to a single SSID to cover a campus. Generally, I would use a number of ranges and use AP groups on an SSID to keep the broadcast domains down to reasonable sizes on the client side. Again, what is a 'reasonable' size (in terms of numbers of clients on a subnet)?
    I'm guessing there are a lot of variables here (for instance the levels & types of traffic), but I would be interested to hear of any tried & tested (or Cisco-recommended) rules of thumb.
    Thanks in advance.
    Nigel.

    Just to add another consideration to this discussion, I'd like to throw in multicasting.
    The main argument underpinning the sizing considerations discussed above is the fact that the WLC does not forward broadcasts to clients, allowing large subnets to be used with no issues.
    However, with the growth of BYOD etc. recently, there is a growing demand for multicasting due to the services provided by Bonjour for Apple devices (e.g. Apple TV, AirPrint etc.).
    I'd be interested to hear if anyone has any views on how the potential growth in multicast traffic for Bonjour services is going to impact client subnet sizing (if at all?).
    There is a great guide about Bonjour deployment from Cisco at: http://www.cisco.com/en/US/products/hw/wireless/ps4570/products_tech_note09186a0080bb1d7c.shtml
    I'm guessing that IGMP snooping should ensure that only clients that need to receive a multicast stream will get it. But, even so, I'm guessing this will have some detrimental impact as many clients on the same subnet may receive the same stream?
    Anyone any useful input on this?
    Nigel.

  • JMS/Queue cluster question

              Hi
              I have some very basic cluster questions on JMS Queues. Lets say Q1>I have 3 WLS
              in cluster. I create the queue in only WLS#1 - then all the other WLS (#2 and #3)
              should have a stub in their JNDI tree for the Queue which points to the Queue in
              #1 - right? Basically what I am trying to acheive is to have the queue in one server
              and all the other servers have a pointer to it - I beleive this is possible in WLS
              cluster - right??
              Q2> Is there any way a client to the queue running on a WLS can tell whether the
              Queue handle its using is local (ie in the same server) or remote. Is the API createQueue(./queuename)
              going to help here??
              Q3>Is there any way to create a Queue dynamically - I guess JMX is the answer -right?
              But I will take this question a bit further - lets say Q1 answer is yes. In this
              case if server #1 crashes - then #2 and #3 have no Queues. So if they try to create
              a replica of the Queue (as on server#1) - pointing to the same filestore - can they
              do it?? - I want only one of them to succed in creating the Queue and also the Queue
              should have all the data of the #1 Queue (1 to 1 replica).
              All I want is the concept of primary and secondary queue in a cluster. Go on using
              the primary queue - but if it fails use the 2ndry queue. Kind of HttpSession replication
              concept in clusters. My cluster purpose is more for failover rather than loadbalancing.
              TIA
              Anamitra
              

              Anamitra wrote:
              > Hi Tom
              > 7.0 is definitely an option for me. So lets take the scenarion on case of JMS cluster
              > and 7.0.
              >
              > I do not understand what u mean by HA framework?
              An HA framework is a third-party product that can be used to automatically restart a failed server
              (perhaps on a new machine), and that will guarantee that the same server isn't started in two
              different places (that would be bad). There are a few of these HA products; "Veritas" is one of
              them. Note that if you are using JMS file stores or transactions, both of which depend on the disk,
              you must make sure that the files are available on the new machine. One approach to this is to use
              what is known as a "dual-ported" disk.
              > If I am using a cluster of 3 WLS
              > 7.0 servers - as u have said I can create a distrubuted Queue with a fwd delay attribute
              > set to 0 if I have the consumer only in one server say server #1.
              > But still if the server #1 goes down u say that the Queues in server #2 and server
              > #3 will not have access to the messages which were stuck in the server #1 Queue when
              > it went down -right?
              Right, but is there a point in forwarding the messages to your consumer's destination if your
              application is down?
              If your application can tolerate it, you may wish to consider allowing multiple instances of it (one
              per physical destination). That way if something goes down, only those messages are out-of-business
              until the application comes back up...
              >
              >
              > Why cant the other servers see them - they all point to the same store right??
              > thanks
              > Anamitra
              >
              Again, multiple JMS servers can not share a store. Nor can multiple stores share a file. That will
              cause corruption. Multiple stores CAN share a database, but can't use the same tables in the
              database.
              Tom
              >
              > Tom Barnes <[email protected]> wrote:
              > >
              > >
              > >Anamitra wrote:
              > >
              > >> Hi
              > >> I have some very basic cluster questions on JMS Queues. Lets say Q1>I
              > >have 3 WLS
              > >> in cluster. I create the queue in only WLS#1 - then all the other WLS
              > >(#2 and #3)
              > >> should have a stub in their JNDI tree for the Queue which points to the
              > >Queue in
              > >> #1 - right?
              > >
              > >Its not a stub. But essentially right.
              > >
              > >> Basically what I am trying to acheive is to have the queue in one server
              > >> and all the other servers have a pointer to it - I beleive this is possible
              > >in WLS
              > >> cluster - right??
              > >
              > >Certainly.
              > >
              > >>
              > >> Q2> Is there any way a client to the queue running on a WLS can tell whether
              > >the
              > >> Queue handle its using is local (ie in the same server) or remote. Is
              > >the API createQueue(./queuename)
              > >> going to help here??
              > >
              > >That would do it. This returns the queue on the CF side of the established
              > >Connection.
              > >
              > >>
              > >> Q3>Is there any way to create a Queue dynamically - I guess JMX is the
              > >answer -right?
              > >> But I will take this question a bit further - lets say Q1 answer is yes.
              > >In this
              > >> case if server #1 crashes - then #2 and #3 have no Queues. So if they
              > >try to create
              > >> a replica of the Queue (as on server#1) - pointing to the same filestore
              > >- can they
              > >> do it??
              > >> - I want only one of them to succed in creating the Queue and also the
              > >Queue
              > >> should have all the data of the #1 Queue (1 to 1 replica).
              > >
              > >No. Not possible. Corruption city.
              > >Only one server may safely access a store at a time.
              > >If you have an HA framework that can ensure this atomicity fine, or are
              > >willing
              > >to ensure this manually then fine.
              > >
              > >>
              > >>
              > >> All I want is the concept of primary and secondary queue in a cluster.
              > >Go on using
              > >> the primary queue - but if it fails use the 2ndry queue. Kind of HttpSession
              > >replication
              > >> concept in clusters. My cluster purpose is more for failover rather than
              > >loadbalancing.
              > >
              > >If you use 7.0 you could use a distributed destination, with a high weight
              > >on the destination
              > >you want used most. Optionally, 7.0 will automatically forward messages
              > >from distr. dest
              > >members that have no consumers to those that do.
              > >
              > >In 6.1 you can emulate a distributed destination this way (from an upcoming
              > >white-paper):
              > >Approximating Distributed Queues in 6.1
              > >
              > >If you wish to distribute the destination across several servers in a cluster,
              > >use the distributed
              > >destination features built into WL 7.0. If 7.0 is not an option, you can
              > >still approximate a simple
              > >distributed destination when running JMS servers in a "single-tier" configuration.
              > >Single-tier indicates that there is a local JMS server on each server that a
              > >connection factory is targeted at. Here is a typical scenario, where producers
              > >randomly pick which server and consequently which part of the distributed
              > >destination to produce to, while consumers in the form of MDBs are pinned to a
              > >particular destination and are replicated homogeneously to all destinations:
              > >
              > >· Create JMS servers on multiple servers in the cluster. The servers will
              > >collectively host the distributed queue "A". Remember, the JMS servers
              > >(and WL servers) must be named differently.
              > >
              > >· Configure a queue on each JMS server. These become the physical destinations
              > >that collectively become the distributed destination. Each destination should
              > >have the same name "A".
              > >
              > >· Configure each queue to have the same JNDI name "JNDI_A", and also take care
              > >to set the destination's "JNDINameReplicated" parameter to false. The
              > >"JNDINameReplicated" parameter is available in 7.0, 6.1SP3 or later, or 6.1SP2
              > >with patch CR061106.
              > >
              > >· Create a connection factory, and target it at all servers that have a
              > >JMS server with "A".
              > >
              > >· Target the same MDB pool at each server that has a JMS server with destination
              > >"A", and configure its destination to be "JNDI_A". Do not specify a connection
              > >factory URL when configuring the MDB, as it can use the server's default JNDI
              > >context that already contains the destination.
              > >
              > >· Producers look up the connection factory, create a connection, then a session
              > >as usual. Then producers look up the destination by calling
              > >javax.jms.QueueSession.createQueue(String). The parameter to createQueue
              > >requires a special syntax: "./<queue name>", so "./A" works in this example.
              > >This will return a physical destination of the distributed destination that is
              > >local to the producer's connection. This syntax is available on 7.0, 6.1SP3 or
              > >later, and 6.1SP2 with patch CR072612.
              > >
              > >This design pattern allows for high availability, as if one server goes
              > >down, the distributed destination
              > >is still available and only the messages on that one server become unavailable.
              > > It also allows for high
              > >scalability as speedup is directly proportional to the number of servers
              > >on which the distributed
              > >destination is deployed.
              > >
              > >
              > >
              > >>
              > >> TIA
              > >> Anamitra
              > >
              > >
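              For what it's worth, a minimal sketch of the producer-side lookup described in the
              quoted excerpt above (a hedged example, not official sample code; the factory JNDI
              name "MyCF" and the queue name "A" are placeholders):

                  import javax.jms.QueueConnection;
                  import javax.jms.QueueConnectionFactory;
                  import javax.jms.QueueSender;
                  import javax.jms.QueueSession;
                  import javax.jms.Session;
                  import javax.naming.InitialContext;

                  // Sketch: look up the CF, then use the "./<queue name>" syntax so the
                  // returned queue is the physical member local to this connection.
                  public class LocalMemberProducer {
                      public static void send(String text) throws Exception {
                          InitialContext ctx = new InitialContext();
                          QueueConnectionFactory cf = (QueueConnectionFactory) ctx.lookup("MyCF");
                          QueueConnection con = cf.createQueueConnection();
                          try {
                              QueueSession session =
                                      con.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                              // "./A" resolves to the member of distributed queue "A" that is
                              // local to this connection (7.0, 6.1SP3+, or 6.1SP2 + CR072612).
                              QueueSender sender = session.createSender(session.createQueue("./A"));
                              sender.send(session.createTextMessage(text));
                          } finally {
                              con.close();
                          }
                      }
                  }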
              

  • Reconfiguring a conventional cluster (add a new node) must there be outage?

    hi *,
    i just wanted to know whether a conventional cluster can be expanded (e.g. by adding a new node) without any outage.
    questions like: do i have to shut down the master broker?
    or
    do i have to shut down every node?
    or
    do i have to shut down everything?
    does anyone of you know the details?
    is an updated broker config file recognized by the running processes?
    regards chris

    hi *,
    i have already found almost all the things i needed in http://docs.sun.com/app/docs/doc/820-4916/gbnlp?l=en&q=sun+java+system+message+queue&a=view
    regards chris

  • MDB/Topic/WLS cluster question

              Hi
              I was going through some WLS 8.1 docs on JMS and had a question abt Topics & WLS
              in cluster config where say I have 3 servers with say server#1 hosting the Topic
              [not a distributed destination]. I have an an ear file containing an MDB with
              no pool size limit. After deploying the ear in the cluster - lets say that each
              server on the cluster has 5 instances of the MDB [just an example] and a message
              is published on the Topic.
              Q1>Will all the 3 servers get a [one and only one] copy of that message? [my guess
              is yes]
              Q2>Only 1 instance [out of 5] of the MDB/per server will get the message - right?
              Q3> Had I had a separate deployment of the same MDB class in the EAR file for
              the same Topic - thats just going to get treated as a completely separate subscriber
              independent of the first MDB though the implementing class is the same - right?
              thanks
              Anamitra
              

              Anamitra wrote:
              > Hi
              > I was going through some WLS 8.1 docs on JMS and had a question abt Topics & WLS
              > in cluster config where say I have 3 servers with say server#1 hosting the Topic
              > [not a distributed destination]. I have an an ear file containing an MDB with
              > no pool size limit. After deploying the ear in the cluster - lets say that each
              > server on the cluster has 5 instances of the MDB [just an example] and a message
              > is published on the Topic.
              >
              > Q1>Will all the 3 servers get a [one and only one] copy of that message? [my guess
              > is yes]
              Yes.
              > Q2>Only 1 instance [out of 5] of the MDB/per server will get the message - right?
              Yes.
              > Q3> Had I had a separate deployment of the same MDB class in the EAR file for
              > the same Topic - thats just going to get treated as a completely separate subscriber
              > independent of the first MDB though the implementing class is the same - right?
              Yes.
              >
              > thanks
              > Anamitra
              >
              For a little more information, I'm attaching notes on durable
              subscriber MDBs.
              A JMS durable subscription is uniquely identified within a cluster by a combination of "connection-id" and "subscription-id". Only one active connection may use a particular "connection-id" within a WebLogic cluster.
              In WebLogic 8.1 and previous, a durable topic subscriber MDB uses its name to generate its client-id. Since JMS enforces uniqueness on this client-id, this means that if a durable subscriber MDB is deployed to multiple servers only one server will be able to connect. Some applications want a different behavior where
              each MDB pool on each server gets its own durable subscription.
              The MDB connection id, which is unique within a cluster, comes from:
              1) The "ClientId" attribute configured on the WebLogic connection factory.
              This defaults to null. Note that if the ClientId is set on a connection
              factory, only one connection created by the factory
              may be active at a time.
              2) If (1) is not set, then, as with the subscriber-id,
              the connection-id is derived from jms-client-id descriptor attribute:
              <jms-client-id>MyClientID</jms-client-id>
              (the weblogic dtd)
              3) If (1) and (2) are not set, then, as with the subscriber-id,
              the connection-id is derived from the ejb name.
              The MDB durable subscription id, which must be unique on its topic, comes from:
              1) <jms-client-id>MyClientID</jms-client-id>
              (the weblogic dtd)
              2) if (1) is not set then the client-id
              comes from the ejb name.
              The above prevents a durable topic subscriber MDB from running on multiple servers. When an instance of the MDB starts on another server, it deploys successfully, but a conflict is detected and the MDB fails to fully connect to JMS. The work-around is the following:
              A) Create a custom connection-factory for each server:
              1) configure "JNDIName" to the same value across all servers
              ("myMDBCF" in this example)
              2) configure "ClientId" to a unique value per server
              3) enable "UserTransactionsEnabled"
              4) enable "XAConnectionFactoryEnabled"
              5) set "AcknowledgePolicy" to "ACKNOWLEDGE_PREVIOUS"
              6) target the CF at a single WebLogic server
              (Number 5 is required for non-transactional topic MDBs)
              B) In the MDB's weblogic-ejb-jar.xml descriptor, set the MDB's connection
              factory to the JNDI name of the custom connection factories configured in
              (A). Optionally, also specify the subscriber-id via the jms-client-id
              attribute.
              <weblogic-ejb-jar>
                <weblogic-enterprise-bean>
                  <ejb-name>exampleBean</ejb-name>
                  <message-driven-descriptor>
                    <connection-factory-jndi-name>myMDBCF</connection-factory-jndi-name>
                    <jms-client-id>myClientID</jms-client-id>
                  </message-driven-descriptor>
                </weblogic-enterprise-bean>
              </weblogic-ejb-jar>
              C) Target the application at the same servers that have the custom connection
              factories targeted at them.
              Notes/Limitations:
              1) If the MDB is moved from one server to another, the MDB's corresponding
              connection-factory must be moved with it.
              2) This work-around will not work if the destination is not in the same
              cluster as the MDB. (The MDB can not use the local connection factory, which
              contains the connection-id, as connection factories do not work unless they
              are in the same cluster as the destination.)
              3) This work-around will not work for non-WebLogic JMS topics.
              4) A copy of each message is sent to each server's MDB pool.
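              For completeness, a minimal sketch of what the bean class referenced by the
              weblogic-ejb-jar.xml above (ejb-name "exampleBean") might look like. This is a
              plain EJB 2.x message-driven bean; the durable-subscription behavior comes from
              the descriptors, not from this code, and the body is illustrative:

                  import javax.ejb.MessageDrivenBean;
                  import javax.ejb.MessageDrivenContext;
                  import javax.jms.Message;
                  import javax.jms.MessageListener;
                  import javax.jms.TextMessage;

                  public class ExampleBean implements MessageDrivenBean, MessageListener {
                      private MessageDrivenContext ctx;

                      public void setMessageDrivenContext(MessageDrivenContext ctx) {
                          this.ctx = ctx;
                      }

                      // Required no-arg create method for EJB 2.x MDBs.
                      public void ejbCreate() {
                      }

                      public void ejbRemove() {
                          ctx = null;
                      }

                      public void onMessage(Message message) {
                          try {
                              if (message instanceof TextMessage) {
                                  System.out.println("received: " + ((TextMessage) message).getText());
                              }
                          } catch (Exception e) {
                              // A real bean would log this and decide whether to roll back.
                              e.printStackTrace();
                          }
                      }
                  }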
              

  • Front-end/back-end cluster question

              

    Patrick Power wrote:
              > Thanx for your reply Prasad. I was surprised none of the Bea engineers
              > wished to touch this one. What do you suppose is up with that? Either
              > they are too busy, or possibly my question is too dumb.
              >
              I am from BEA, so it's not that we are not responding ;).
              >
              > Back to the issue: Yes, we will NES bridge/proxy into servlet front-end
              > cluster, potentially with Directors on the very front of the topology for
              > balancing. Your diagram as such:
              >
              > <Netscape/IIS/Apache/WLS FRONT END> ----- <CLUSTER OF WEBLOGIC SERVER
              > > SERVING SERVLETS> --- <CLUSTER OF WEBLOGIC SERVERS SERVING EJB>
              >
              > 1) Does <Netscape/IIS/Apache/WLS FRONT END> mean NES with proxy shared lib,
              > with a WLS service definition into cluster in obj.conf? I assume yes.
              Yes.
              >
              > 2) I would assume that <CLUSTER OF WEBLOGIC SERVERS SERVING SERVLETS> would
              > need the WLS HttpClusterServlet to the <CLUSTER OF WEBLOGIC SERVERS SERVING
              > EJB> all the way in the back.
              No. I was splitting presentation logic (namely servlets and JSPs) and business
              logic (EJBs) into two layers. Again, you don't have to split it into two; you can
              colocate them both together. You could use NES or IIS or Apache or WLS. You
              don't need HttpClusterServlet.
              Let's get this straight:
              1. You need our proxy plugin for failover and to load balance the requests that
              are going to the presentation logic.
              2. From the presentation logic layer, when you talk to backend business logic
              providers (like an EJB cluster), if you use stateless session beans we provide
              failover and load balancing. In the future we will support clustered stateful
              session beans as well. Therefore you don't need a load balancer here.
              3. HttpClusterServlet should run only in front of the presentation logic cluster,
              and it supports HTTP only.
              Hope this helps.
              - Prasad
              > The NES proxy would only proxy into the f/e
              > cluster, right? You're not suggesting an external proxy of some type, are
              > you? The HttpClusterServlet is for WLS cluster-to-cluster proxies.
              > 3) A load balancer between the wls f/e and wls b/e clusters? That doesn't
              > seem applicable here. Once again, it should be HttpClusterServlet for WLS
              > cluster-to-cluster proxies.
              > 4) "use two or three proxy servers to avoid single point of failure."
              > Hmmm, once again - are we talking the WLS HttpClusterServlet proxy? Well,
              > that's the inital question: Can I have more than one HttpClusterServlet
              > proxy in the front-end cluster, proxying to the back-end cluster?
              > Otherwise, internally from this WLS architecture perspective, it is a single
              > point of failure.
              >
              > An example: 10 instances in f/e cluster. can more than one of these
              > instances have the WLS HttpClusterServlet proxy to the b/e cluster? Or, are
              > there instances of WLS HttpClusterServlet proxy in all 10 f/e cluster
              > instances?
              >
              > Cheers, Pat
              >
              > Prasad Peddada <[email protected]> wrote in message
              > news:[email protected]...
              > >
              > >
              > > Patrick Power wrote:
              > >
              > > > I know that this topic was addressed to some degree here in an earlier
              > > > posting, but I still have a question regarding the architecture
              > > > design:
              > > >
              > > > If configuring a front-end cluster for servlets/sessions and a
              > > > back-end cluster for remote services -- you route requests to the
              > > > back-end using the WLS proxy servlet. ok, got that part.
              > >
              > > Not quite. The typical scenario is
              > >
              > > <Netscape/IIS/Apache/WLS FRONT END> ----- <CLUSTER OF WEBLOGIC SERVER
              > > SERVING SERVLETS> --- <CLUSTER OF WEBLOGIC SERVERS SERVING EJB>
              > >
              > > You don't proxy and serve servlets from the same server.
              > >
              > > >
              > > > The question: Is there a single instance of the wls proxy servlet in
              > > > the front-end cluster? Or, is it on every instance in the front-end
              > > > cluster? What is the failover mechanism, in the case of a single
              > > > instance of proxy servlet in the f-e cluster failing?
              > >
              > > To prevent that you need to use some kind of h/w or software load
              > > balancer and then use two or three proxy servers to avoid single point
              > > of failure.
              > >
              > > > Is it a single point of failure between the 2 clusters?
              > > >
              > > > Thanx in advance for your help.
              > > >
              > > > BTW, I think Wei, Kumar and the other Bea folks cruising this group
              > > > have been doing a bang-up job of providing badly-needed detail on this
              > > > subject area - material this largely absent from the documentation.
              > > > Good job.
              > > >
              > > >
              > >
              > > --
              > > Cheers
              > >
              > > - Prasad
              > >
              > >
              

  • How do you make sure the cluster keeps clients macs in it?

    Every time the client Macs are turned off and then back on again, they don't rejoin the cluster, so each time I have to go around manually and join every Mac to the cluster. There must be some way of making them stay in the cluster, or at least a Unix command I can send to make them automatically rejoin the cluster?

    You may need to remake the cluster.
    Also, I've found that a cluster will work best with machines and software of the same spec. Make sure all the same software, including pro-apps updates, is running.
    QT will need to be up to date on all computers, with the same QT components as well.
    Failing that, you might want to start from scratch using Digital Rebellion's Compressor Repair, which is an awesome piece of freeware.
    Regards,
    SJ

  • 10g instant client "Zero Sized Response"

    I tried to get the Instant Client downloads for Solaris 64-bit, Solaris 32-bit and AIX 64-bit.
    All of them give:
    ERROR
    The requested URL could not be retrieved
    While trying to retrieve the URL: http://download-uk.oracle.com/otn/solaris/instantclient/instantclient-basic-solaris32-10.1.0.3.zip
    The following error was encountered:
    * Zero Sized Reply
    Squid did not receive any data for this request.
    Your cache administrator is [email protected]
    Generated Mon, 29 Nov 2004 15:19:47 GMT by undertow.inchinnan.grahamtech.co.uk (squid/2.5.STABLE1)

    A custom client install is the path we chose, because you cannot upgrade the Instant Client (thanks to another post for that).

  • SQL Server Failover Cluster Questions

    Dear All,
    I am building a two-node failover cluster on SQL Server 2012 SP1 (inside Hyper-V as a guest cluster) and want clarification on a few things that I am facing.
    1. I am receiving an MSDTC warning. I can go ahead and create the cluster, but I want to understand whether MSDTC is to be configured as a role on the cluster or not. I plan to run the SCVMM, SCOM, Orchestrator and Windows Azure Pack databases and reports through it, so in such a scenario, do I need MSDTC? If yes, how large should the MSDTC drive be? Is the following process correct?
    http://www.sqlnotebook.info/configure-msdtc-on-windows-cluster-2012/
    2. During first-node configuration, one needs to provide the "SQL CLUSTER RESOURCE GROUP NAME". Does it have any bearing on how the instance will be accessed by other servers for databases and logs, or is it just how the cluster resource group will be named? Would it be required for every instance that is created inside the cluster? Just to be clear, can one name it according to the instance name?
    3. During instance creation, one needs to provide the "SQL Server Network Name". As stated above, I plan to run the SCVMM, SCOM, Orchestrator and Windows Azure Pack databases and reports through it, so would I be required to provide this for every instance that I create, or is it only required once in the cluster?
    4. During instance creation, one needs to select the features required for installation, i.e. instance features and shared features. As stated above, I plan to run the SCVMM, SCOM, Orchestrator and Windows Azure Pack databases and reports through it, so which features should be selected so that there is less workload on the server?
    5. All the instances use TempDB for the databases that are present inside them. What would be the best practice with respect to TempDB: one TempDB LUN for all instances on the servers, or each instance having its own TempDB LUN? What should be the ideal size of the TempDB LUN?
    6. Should all the disks required for DBs and logs be added to the cluster? Should they be added as normal disks or as CSV volumes?
    Thanks in advance.

    Hello,
    1. You can run the Microsoft Distributed Transaction Coordinator service (MSDTC) as a clustered resource on a failover cluster server for increased reliability, based on the failover capabilities of the clustered servers. You can refer to the MSDTC section of the following reference about determining whether the Microsoft Distributed Transaction Coordinator (MSDTC) cluster resource must be created.
    Reference: http://msdn.microsoft.com/en-us/library/ms189910.aspx#MSDTC
    2. The cluster resource group is where the SQL Server failover cluster resources will be placed. Each clustered SQL Server instance belongs to a failover cluster resource group. For example, if you configure a two-node SQL Server cluster, each clustered instance on the two nodes belongs to the same cluster resource group.
    You can change the cluster resource group name, but note that the following names are reserved and already used as resource group names: Available Storage, Cluster Group.
    3. Each SQL Server cluster is assigned a virtual network name and IP address, which client applications use to connect to the clustered SQL Server.
    4. I am not familiar with SCVMM, SCOM and Orchestrator, but you should install the Database Engine Services and the SQL Server Management tools. If you want to use SQL Server Reporting Services, you can install the Reporting Services components, but the Report Server service cannot participate in a failover cluster.
    5. You can use an isolated disk for the user databases and tempdb of each clustered SQL Server instance.
    6. Yes. You should use cluster disks, added as Cluster Shared Volumes, to host the data files and logs of the databases.
    http://www.pythian.com/blog/how-to-install-a-clustered-sql-server-2012-instance-step-by-step-part-1/
    Regards,
    Fanny Liu
    TechNet Community Support

  • Windows 2008 Cluster question on using a new cluster drive source from shrinking existing disk

    I have a two-node Windows 2008 R2 Enterprise SP1 cluster. It has a basic cluster setup of one quorum disk (Q:) and a data disk (E:) which is 2.7 TB in size. This cluster is connected to a shared Dell disk array.
    My question is: can I safely shrink the 2.7 TB drive down and carve out a 500 GB disk from the same disk to use as a new cluster disk resource? We want to install Globalscape SFTP software on this new disk for use as a cluster resource.
    Will this work without crashing the cluster?
    Thanks,
    Gonzolean

    Hi,
    Thank you for posting your issue in the forum.
    I am trying to involve someone familiar with this topic to further look at this issue. There might be some time delay. Appreciate your patience.
    Thank you for your understanding and support.
    Best Regards,
    Andy Qi
    TechNet Community Support

  • Cluster setup and APS sizing

    Hi,
    I have 2 Windows servers here with BI 4.0 SP2 installed on each of them, and I have clustered the two CMS. Now, in CMC, I see 18 servers for each node - so a total of 36 servers that are running and enabled (I am talking about the APS, AJS, CMS, IFRS, OFRS, dashboard servers etc.). My questions are:
    1) Should I leave them all enabled and running at all times? My understanding is that I should, because that is how the load balancing and failover will happen. But on a couple of forums, I have read that some people leave the servers of one node off.
    2) I read in some forum that by clustering the two BOBJ installations on two servers, basically only the CMS is being load balanced. Is that true? Does this mean that other servers like the APS, AJS, dashboard server, Crystal server and the corresponding services within them are not really being load balanced even though both are running at the same time?
    3) My installation is default and we haven't gone live yet. I haven't really created or modified any of the servers like the APS, but I am reading that in a production system you should create multiple APS servers and distribute similar services among them - is this true? Note #1580280 and the sizing companion for BO 4.0 talk about it too. Should I really worry about this even if I am only going to have 100 users in production?
    Thanks,
    Jason

    1) Should I leave them all enabled and running at all times? My understanding is that I should, because that is how the load balancing and failover will happen. But on a couple of forums, I have read that some people leave the servers of one node off.
    You should stop and disable all default APS and AJS servers and then create specific APS/AJS servers with only the services that you need.
    There are several KBs on APS issues in BI4, and most are resolved by splitting them up.
    For non-APS servers, you should stop those that are not part of your reporting requirements.
    2) I read in some forum that by clustering the two BOBJ installations on two servers, basically only the CMS is being load balanced. Is that true? Does this mean that other servers like the APS, AJS, dashboard server, Crystal server and the corresponding services within them are not really being load balanced even though both are running at the same time?
    That is not true. When 2 BI4 nodes are clustered, over time the load of each service is balanced.
    3) My installation is default and we haven't gone live yet. I haven't really created or modified any of the servers like the APS, but I am reading that in a production system you should create multiple APS servers and distribute similar services among them - is this true? Note #1580280 and the sizing companion for BO 4.0 talk about it too. Should I really worry about this even if I am only going to have 100 users in production?
    Yes, the default APS are not good at all and should never be used in production, even if you only have 5 users.

  • Sun Cluster question

    Hello everyone
    I've inherited an Oracle Solaris system holding Sybase ASE databases. The system consists of two nodes inside a Sun Cluster. Each of the nodes hosts 2 Sybase database instances, where one of the nodes is active and the other is standing by. The scenario at hand is that when any of the databases on one node fails for whatever reason, the whole system gets shifted to the second node to keep the environment going. That works fine.
    My intended scenario:
    Each node holds 2 database instances, and both nodes ARE working at the same time, so that each one is serving one instance of the database. In the event of a failure on one node, the other one should assume the role of BOTH database instances till the first one gets fixed.
    The question is: is that possible? And if it is, does that require breaking the whole cluster and rebuilding it, or can this be done online without bringing down the system?
    Thanks a lot in advance

    What you propose will not work either. E.g. there is no logic implemented to fence the underlying zpool from one node to the other in such a configuration.
    Also, the current SUNW.HAStoragePlus(5) manpage documents:
            Note -   SUNW.HAStoragePlus does not support file systems created on ZFS volumes.
                     You cannot use SUNW.HAStoragePlus to manage a ZFS storage pool that contains
                     a file system for which the ZFS mountpoint property is set to legacy or none. [...]
    Greets
    Thorsten

  • Question about relative sizing on JPanels

    Hi,
    My question is about relative sizing of components that have not been drawn yet. For example, I want to draw a JLabel at three quarters of a JPanel's height, but the JPanel's height is 0 as long as it is not shown on the screen. Here is some sample code:
    JPanel activityPnl = new JPanel();

    private void buildActivityPnl() {
        // setting JPanel's look and feel
        activityPnl.setLayout(null);
        activityPnl.setBackground(Color.WHITE);
        int someValue = 30;  // I use this value to decide the width of my JPanel
        activityPnl.setPreferredSize(new Dimension(someValue, 80));
        // The JLabel's height is 1 pixel and its width is equal to the JPanel's width.
        // I want to draw it at 3/4 of the JPanel's height.
        JLabel timeline = new JLabel();
        timeline.setOpaque(true);
        timeline.setBackground(Color.RED);
        timeline.setBounds(0, (activityPnl.getSize().height * 75) / 100, someValue, 1);
        activityPnl.add(timeline);
    }
    Thanks a lot for your help
    SD
    Edited by: swingDeveloper on Feb 24, 2010 11:41 PM

    And use a layout manager. It can adjust automatically for a change in the frame size.
    Read the Swing tutorial on Using Layout Managers for examples of the different layout managers.
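    If you do want to keep the null layout for some reason, another option is to compute the label's bounds once the panel actually has a size, e.g. from a ComponentListener. A minimal, self-contained sketch (class and variable names are illustrative):

        import java.awt.Color;
        import java.awt.event.ComponentAdapter;
        import java.awt.event.ComponentEvent;
        import javax.swing.JFrame;
        import javax.swing.JLabel;
        import javax.swing.JPanel;
        import javax.swing.SwingUtilities;

        public class TimelineDemo {
            public static void main(String[] args) {
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        final JPanel activityPnl = new JPanel(null);   // still absolute positioning
                        activityPnl.setBackground(Color.WHITE);
                        final JLabel timeline = new JLabel();
                        timeline.setOpaque(true);
                        timeline.setBackground(Color.RED);
                        activityPnl.add(timeline);
                        // Recompute the bounds whenever the panel is (re)sized, so the
                        // 3/4-height position is based on the real height, not 0.
                        activityPnl.addComponentListener(new ComponentAdapter() {
                            public void componentResized(ComponentEvent e) {
                                timeline.setBounds(0, activityPnl.getHeight() * 75 / 100,
                                        activityPnl.getWidth(), 1);
                            }
                        });
                        JFrame frame = new JFrame("timeline demo");
                        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                        frame.add(activityPnl);
                        frame.setSize(400, 200);
                        frame.setVisible(true);
                    }
                });
            }
        }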

  • Cluster Question

              If I have a cluster which contains 2 nodes (e.g. 192.168.0.1 and 192.168.0.2), then in the
              admin console I need to provide a cluster address, which I might set to 192.168.0.1
              and 192.168.0.2.
              In this case, I might need to bind those IP addresses to a single DNS name and put that
              in as the cluster address.
              How can I bind those IP addresses? Do I need a DNS server to do that?
              Another question: when the WLS cluster receives a request, what is the first point
              responsible for passing the request on to the nodes? Is that a Java class
              or something else?
              

              Hello Ramy,
              If I have a proxy, then I might also need a cluster for the proxy server
              as well. Does that mean I need a local director in front of the proxy cluster?
              thanks,
              Friend
              "Ramy Saad" <[email protected]> wrote:
              >
              >Hello Friend,
              >
              >I think you need a proxy-server (for example with load balancing) which
              >can handel
              >a cluster. In your application you can use the IP-Address of the proxy-server
              >and
              >the proxy decides to which WLS the connection will be established. I think
              >a plug-in
              >for the appache server is shipped with the bea software...
              >
              >Regards,
              >Ramy.
              >
              >"Friend" <[email protected]> wrote:
              >>
              >>If I have a cluster which contains 2 nodes (e.g 192.168.0.1 and 192.168.0.2).
              >>In the
              >>admin console, I need to provide a cluster address which I might put 192.168.0.1
              >>and 192.168.0.2.
              >>In this case, I might need to bind those ip addresses in a single DNS and
              >>put it
              >>as the cluster address.
              >>How can I bind those ip addresses ? Do I need a DNS server to do that ?
              >>another question is, when the WLS cluster receive a request, where is the
              >>first point
              >>which is responsible for passing the requests into the nodes ? Is that
              >a
              >>java class
              >>or ?
              >
              
