OSB 10gR3 (WLS 10.3) - Distributed Queues & Load Balancing

I have a question about distributed queues and their JMS proxy service consumers in OSB.
I've set up a uniform distributed queue deployed via a sub-deployment, so the queue is targeted to the respective JMS servers in the cluster.
I've then set up a messaging service using JMS as the transport with the following URI
jms://server1:7011,server2:7012/weblogic.jms.XAConnectionFactory/myQueue
When I look at the monitoring tab of my distributed queue, I can see 16 current consumers on one of the members but none on the other. My understanding is that the proxy is just a mere MDB, and as such I thought WLS was optimised to make sure all MDB instances would listen to all members of the distributed queue. Why do I have 16 consumers on one member only?
Since only one member has consumers, any producer will always push messages to this member only. (I believe WLS is optimised to pick a member with consumer(s), if any are available.)
I've also tried a custom connection factory, deployed the same way my distributed queue was, with load balancing enabled on the connection factory. But no success with this either.
jms://server1:7011,server2:7012/jms.MyConnectionFactory/myQueue
I looked at the deployment - though it was performed not by me directly but by the bus console - and it looks like the application is targeted to the cluster.
How can I achieve true load balancing here, ensuring both members are consumed by my JMS proxy service?
In that case, would any produced message then go to either member, as both would have consumers?
Also, is the load balancing decision made by the producer when the queue connection is created?
If so, how do you achieve true load balancing? Do you need to ask for a new queue connection each time you want to send a message, rather than caching the connection?
Hope I am clear enough
Thanks
Arnaud
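
To make the last two questions concrete, here is a minimal sketch of the new-connection-per-send pattern being asked about, using the standard javax.jms API. The JNDI names come from the URIs above; whether each fresh connection actually lands on a different member depends on the connection factory's load-balancing and affinity settings, so treat this as an illustration, not a recommendation:

    import javax.jms.*;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import java.util.Hashtable;

    public class PerSendConnectionSender {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "weblogic.jndi.WLInitialContextFactory");
            env.put(Context.PROVIDER_URL, "t3://server1:7011,server2:7012");
            Context ctx = new InitialContext(env);
            QueueConnectionFactory cf =
                    (QueueConnectionFactory) ctx.lookup("jms.MyConnectionFactory");
            Queue queue = (Queue) ctx.lookup("myQueue");

            // A fresh connection per send, so any connect-time balancing
            // decision is re-made for every message.
            for (int i = 0; i < 5; i++) {
                QueueConnection conn = cf.createQueueConnection();
                try {
                    QueueSession session =
                            conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                    QueueSender sender = session.createSender(queue);
                    sender.send(session.createTextMessage("message " + i));
                } finally {
                    conn.close(); // a cached connection would be reused instead
                }
            }
        }
    }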

This confused me too!
The way I understand it is that, as you say, a proxy service is like a single MDB. The MDB will bind to the queue member it first finds when it connects.
The URL you specify contains your two servers, but only the first address in the URL is used for the connection. If the first server is unavailable, then the second one is used.
If you have a distributed queue, this doesn't help much, as you do end up with one of the queue members having no consumers on it.
You can configure a forward delay for the distributed queue, which will cause WLS to forward messages to a member with consumers, but this isn't a good idea if you have large JMS messages, as WLS needs to serialize and de-serialize across the network to move each message (a configuration sketch follows below).
I think what you have to do is define two proxy services, one connecting to the first server and the other connecting to the second.
I haven't found a better way so far. It does seem a bit over the top, but then, if you wrote an external Java client which attached to a distributed queue, you would specify the connection URL and it would behave in the same way: if you wanted it to bind to both distributed destination members, you would have to code it or run two instances. So maybe it's just working as it should, even though it seems strange.
I think the producer will simply load balance across the distributed queue members; it doesn't pay any attention to whether there are any consumers attached. This happened to me the other day!
Pete
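
For reference, the forward delay Pete mentions is a setting on the distributed queue itself. A sketch of how it might look in a WLS 10.3 JMS module descriptor; the element names are from memory of the weblogic-jms schema and the queue/sub-deployment names are placeholders, so verify against a console-generated file:

    <uniform-distributed-queue name="myQueue">
      <sub-deployment-name>myQueueSubDeployment</sub-deployment-name>
      <!-- Seconds a member with messages but no consumers waits before
           forwarding them to a member that has consumers; -1 (the default)
           disables forwarding. -->
      <forward-delay>10</forward-delay>
      <jndi-name>myQueue</jndi-name>
    </uniform-distributed-queue>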

Similar Messages

  • 3rd party distributed SW load balancing with In-Memory Replication

              Hi,
              Could someone please comment on the feasibility of the following setup?
              I've started testing replication with a software load-balancing product. This
              product lets all nodes receive all packets and uses a kernel-level filter
              to let only one node at a time receive each packet. Since there is a minimum of
              one heartbeat between the nodes, there are several NICs in each node.
              At the moment it seems like it doesn't work. I use the SessionServlet with
              a 2-node cluster: I first have the 2 nodes up and I access it with a single client.
              The LB is configured to be sticky w.r.t. source IP address, so the same node gets
              all the traffic. When I stop the node receiving the traffic, the other node takes
              over (I changed the colours of SessionServlet); however, the counter restarts
              at zero.
              From what I read of the in-memory replication documentation, I thought that it
              might work also with a distributed software load-balancing cluster. Any comments
              on the feasibility of this?
              Is there a way to debug replication (in WLS6SP1)? I don't see any replication
              messages in the logs, so I'm not even sure that it works at all. I do get a
              message about "Clustering Services starting" when I start the examples server
              on each node. Is there anything to look for in the console to make sure that
              things are working? The evaluation license for WLS6SP1 on NT seems to support
              In-Memory Replication and Cluster. However, I've also seen a Cluster-II somewhere:
              is that needed?
              Thanks for your attention!
              Regards, Frank Olsen
              

    We are considering Resonate as one of the software load balancers. We haven't certified
              them yet. I have no idea how long it's going to take.
              As a base rule, if the SWLB can do the load balancing and maintain stickiness, that is fine
              with us, as long as it doesn't modify the cookie or the URL if URL rewriting is enabled.
              Having said that, if you run into problems we won't be able to support you, since it is not
              certified.
              -- Prasad
              Frank Olsen wrote:
              > Prasad Peddada <[email protected]> wrote:
              > > Frank Olsen wrote:
              > > > Hi,
              > >
              > > We don't support any 3rd party software load balancers.
              >
              > Does that mean that there are technical reasons why it won't work, or just that
              > you haven't tested it?
              >
              > > As I said before, I am thinking your configuration is incorrect if in-memory
              > > replication is not working. I would strongly suggest you look at the webapp
              > > deployment descriptor and then the config.xml file.
              >
              > OK.
              >
              > > Also, doing sticky based on source IP address is not good. You should do it
              > > based on passive cookie persistence or active cookie persistence (with cookie
              > > insert, a new one).
              >
              > I agree that various source-based sticky options (IP, port, network) are not the
              > best solution. In our current implementation we can't do this because the SW load
              > balancer is based on filtering IP packets at the driver level.
              >
              > Currently I'm more interested in understanding whether our SW load balancer
              > can work with your replication at all.
              >
              > What makes me think that it could work is that in WLS6.0 a session that fails
              > over to any cluster node can recover the replicated session.
              >
              > Can there be a problem with the cookies?
              > - are the P/S for replication put in the cookie by the node itself or by the
              > proxy/HW load balancer?
              >
              > > The options are -Dweblogic.debug.DebugReplication=true and
              > > -Dweblogic.debug.DebugReplicationDetails=true
              >
              > Great, thanks!
              >
              > Regards,
              > Frank Olsen
              

  • JMS cluster and distributed destination load balancing question

              Hi All
              Scenario: 2 WL 7 servers in a cluster, with a distributed queue member on both of them, and
              both servers have an MDB deployed for the queue. Now if a producer in server
              #1 writes to the queue - he will write to the local queue member - right?
              In that case, will the local MDB pick up the message, or can that be load balanced?
              Or can the write itself be load balanced?
              I really want either the write or the read to be load balanced - but I suspect
              server affinity will make a mess here. Can anyone please clarify?
              thanks
              Anamitra
              


  • WLS proxy plugin does not load balance

    I have a cluster created with two app servers in separate boxes and a Weblogic proxy plug-in to forward the client requests to the servers. However, the proxy doesn't distribute the load equally. Very often 90% of the user sessions go to one server and 10% to the other. Both boxes have the same hardware specs.
              Does the WLS plugin really support round-robin load balancing? I'd appreciate any information to solve this problem.
              Thanks
              - Miguel
              I'm using WLS 6.1 SP2.
              

    Are you load balancing the web servers? What kind of web servers are you
              using?
              

  • WLS 5.1 SP8 improper load balancing of EJB

    I have noticed a strange behaviour in the load balancing of EJBs.
              I have 3 instances of WLS 5.1 SP8 running in a cluster, let's say X, Y and
              Z. Now if I access the EJB in the cluster from within one of the WLS
              instances (e.g. a servlet), let's say Y, then I always get the reference from that
              same machine "Y" no matter what load-balancing algorithm I set. No matter
              how many times I access this EJB from machine Y, it will only give me a
              reference to the EJB on Y. Somehow this seems wrong; it should round-robin
              between X, then Y, then Z, and then X again, no matter where I get the
              reference from.
              Now if I access the EJB from outside of the cluster, a fourth JVM, then each
              time I get the reference from a different machine, and clustering works fine
              and load balances as expected.
              Any explanation or workaround? Most of the EJB referencing we are doing is
              within the cluster itself. This is causing problems because everything
              works, but on a single machine, even though we have a cluster set up. We only
              reference the EJB externally once; that does get clustered, but from that
              point on, everything happens on that machine.
              

              What you really want to do is use CallRouter - I've used it with clustered RMI;
              I have not used it with EJB (but EJBs use RMI, so you should be able to make it
              work).
              Personally, I'd be writing my number-crunching bits in C about now.
              Mike
              "Haider Kazmi" <[email protected]> wrote:
              >Thanks Mike, let me give creating InitialContext with specific IP address
              >a
              >try , maybe we can put in load sharing logic in our code this way.
              >
              >As for using C, I think as with all products, our marketing team wouldn't
              >like that but thats a definate approach to improve performance.
              >
              >cheers
              >
              >"Mike Reiche" <[email protected]> wrote in message
              >news:[email protected]...
              >>
              >> You're doing heavy computation in Java - it's going to be slow.
              >> Write it in C, call it using JNI and it will be about 10 times as fast.
              >>
              >> There 'executing sequentially' is not a function of the EJB
              >specification. I
              >> assume your code looks something like...
              >>
              >>
              >> result1= ejb1.calculate1 ( a, b, c );
              >> result2= ejb2.calculate2 ( e, f, result1);
              >> result3= ejb3.calcualte3 ( x, result3);
              >>
              >> caculate3 won't be called until calculate2 has finished, which won't
              >be
              >called
              >> until calculate1 has finished. Sequential.
              >>
              >> If you really want to force the EJBs on a specific WL instance, just
              >specify the
              >> IP address and port number when you create the InitialContext.
              >>
              >> Mike
              >>
              >>
              >>
              >> "Haider Kazmi" <[email protected]> wrote:
              >> >Hi Mike
              >> >
              >> >> First you gotta understand that calling an EJB on a remote JVM costs
              >> >about
              >> >10 times
              >> >> as much as calling one on a local instance. That's what I tried
              >to
              >> >say in
              >> >the
              >> >> first followup.
              >> >>
              >> >> Second, if whatever is calling the EJB (JSP, Servlet) is load balanced,
              >> >then -
              >> >> presto - all the EJB calls are load balanced as well.
              >> >
              >> >Thats definately true, I think I might have created some confusion,
              >what
              >> >I
              >> >was really trying to say is that we are doing a lot of processing
              >and
              >> >its
              >> >mathematical in nature unlike the standard credit card transaction
              >or
              >> >website transaction. We have optimized this to a large extent, the
              >real
              >> >problem we get into is when the consumers of this result try to consume
              >> >them, each of which are stateless session beans.
              >> >The result gets posted on a JMS topic, servlet was an example, what
              >we
              >> >actually have are JMS clients calling more EJBs once a result is posted
              >> >on
              >> >the JMS topic. So here is how it goes, once the processing of workflow
              >> >is
              >> >done, this published on the relevent JMS topic based on the result
              >of
              >> >the
              >> >workflow process. All the related JMS clients subscribed to this topic
              >> >pick
              >> >it up. They call relevent EJBs with this result. So we don't have
              >control
              >> >over load balancing our clients.
              >> >
              >> >> The only time that such a configuration would load only one instance
              >> >is if
              >> >you
              >> >> only had one such EJB call at a time. But this is still faster
              >than
              >> >having the
              >> >> EJBs load balanced on remote JVM. Load balanced or not, the EJB
              >calls
              >> >are
              >> >NOT
              >> >> made in parallel, thus the calling method has to wait for the first
              >> >call
              >> >to finish,
              >> >> then the second, then the third... so that if each call takes longer
              >> >on a
              >> >remote
              >> >> JVM than locally, your total wait time is longer.
              >> >thanks for this info. I think I better read the EJB specs in more
              >details.
              >> >however whats not clear to me is even if the EJBs are different (not
              >> >just
              >> >different instances of the same EJBs and what about different instances
              >> >of
              >> >the same EJB, are things still done sequentially) will only one method
              >> >be
              >> >called at a time??
              >> >
              >> >> Maybe that's not clear - distributing the calls does not make anything
              >> >happen
              >> >> any faster - it just spreads it out.
              >> >Is this also true for stateless session beans, specifically among
              >different
              >> >instances of the same bean?
              >> >
              >> >> You should be spending your time figuring out how to reduce the
              >amount
              >> >of
              >> >processing.
              >> >This is where I am stuck at now. We are at a point where a lot of
              >the
              >> >optimization is done. The problem arizes when processing is done and
              >> >the
              >> >result is posted on the JMS, a bunch of stateless session bean try
              >to
              >> >consume the result all at once.
              >> >
              >> >
              >>
              >
              >
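
              As an aside, a minimal sketch of the workaround Mike describes (pinning lookups to a specific WL instance by creating the InitialContext against that server's address). The address and JNDI name below are placeholders:

                  import javax.naming.Context;
                  import javax.naming.InitialContext;
                  import java.util.Hashtable;

                  public class PinnedLookup {
                      public static void main(String[] args) throws Exception {
                          Hashtable env = new Hashtable();
                          env.put(Context.INITIAL_CONTEXT_FACTORY,
                                  "weblogic.jndi.WLInitialContextFactory");
                          // Point at one specific cluster member rather than the
                          // cluster address, so the home lookup favours that instance.
                          env.put(Context.PROVIDER_URL, "t3://10.0.0.2:7001"); // placeholder
                          Context ctx = new InitialContext(env);
                          Object home = ctx.lookup("MyEjbHome"); // hypothetical JNDI name
                          System.out.println("Looked up: " + home);
                      }
                  }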
              

  • Best way for HTTP load balancing in OSB

    Hi everybody,
    We have setup an OSB cluster and we need to load balance HTTP requests across managed servers. Looking for info about load balancing in OSB I found that there are mainly two options: using a hardware load balancer or a software solution like Weblogic HttpClusterServlet. At the moment we have no hardware balancer available so we will have to take the software option. I found some articles about configuring HttpClusterServlet like http://redstack.wordpress.com/2010/12/20/using-weblogic-as-a-load-balancer.
    But I have a question about this configuration. If we use a managed server as an HTTP proxy that balances requests between OSB managed servers, what would happen if this server goes down? I think one of the main goals of a clustered deployment is avoiding a single point of failure but with that setup all requests would depend on the availability of the proxy managed server.
    Could you recommend us a setup for implementing load balancing in OSB?
    Thank you in advance,
    Daniel.

    Load balancing in a cluster for HTTP requests can be achieved in at least 4 different ways:
    (1) use a hardware load balancer like F5 BIG-IP LTM
    (2) use a web server with the WebLogic plugin to front-end the cluster
    (3) use WebLogic with HttpClusterServlet
    (4) use DNS round robin - this works if you have managed servers running on 2 machines (say mach1, mach2) but on the same port. HTTP clients use the hostname 'mach' to access the URLs and the DNS does a round-robin name resolution of mach to the mach1 and mach2 IP addresses.
    All the options except (1) achieve only load balancing, not automatic failover, on all instances. Hardware load balancers have the extra feature of probing (sending periodic pings to the targets), by which they can detect whether the target resource is alive and, if not, send the traffic to other nodes which are alive. This is why hardware load balancers are worth their investment.
    The other options may work if the client is coded to retry on failure, so that on the 2nd or subsequent attempt the routing is done to a machine which is alive.
    For options (1), (2) and (3), you also need some redundancy of the load-balancing device (web server, WebLogic or hardware load balancer) to prevent a single point of failure. Hardware load balancers are usually deployed in redundant pairs to achieve this. A configuration sketch for option (3) follows below.
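
    For reference, a minimal sketch of the HttpClusterServlet configuration for option (3), as it would appear in the proxy server's web.xml. The host:port values are placeholders for the OSB managed servers; see the redstack article linked above for the complete setup:

        <servlet>
          <servlet-name>HttpClusterServlet</servlet-name>
          <servlet-class>weblogic.servlet.proxy.HttpClusterServlet</servlet-class>
          <init-param>
            <!-- pipe-separated list of cluster members to balance across -->
            <param-name>WebLogicCluster</param-name>
            <param-value>osbhost1:7011|osbhost2:7011</param-value>
          </init-param>
        </servlet>
        <servlet-mapping>
          <servlet-name>HttpClusterServlet</servlet-name>
          <url-pattern>/</url-pattern>
        </servlet-mapping>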

  • Help: newbie of iPlanet clustering & load balancing

    As I went through the sample app, "bank", that comes along with iPlanet, I ended up with a few questions. It would be highly appreciated if somebody could give me a helping hand by answering the questions below.
    1) In ias-web.xml and web.xml, the app is set as "distributable" and the session is synchronized by "dsync". Also, all servlets are set "sticky". My question is why the JSPs don't need to be set as "sticky". If I have a web site made up of several tens of thousands of JSP pages, that would be a big problem.
    2) Let's say the sample included a stateful session bean. Do I have to set that stateful session bean to "sticky"? If so,
    does it mean the invocation from servlet to stateful session bean will take place in the same JVM process?
    3) Is there any latency in the "dsync" session-synchronization process? Suppose the application is set as "distributable" (supporting load balancing), but the servlet is NOT set as sticky. Will it cause any trouble due to the latency of session synchronization among iPlanet servers?
    regards
    Danny

    Hi
    Firstly, thanks for your reply. I would be very thankful if you could provide me more detailed information on the questions below.
    1) As a stateful session bean is set sticky, does it mean all requests from the same session will be directed from the web tier to the same JVM that initialized that stateful session bean? Also, will all invocations of entity beans/other session beans from that sticky stateful session bean take place within the same JVM as the stateful session bean? Am I correct?
    2) Is there a latency for "dsync" to replicate the state of the HTTP session and stateful session bean? If so, is there a chance that the state of the HTTP session / stateful session bean will be lost if a JVM crash happens in between replications? Am I correct? Is there any way to avoid it?
    regards
    Danny

  • Sticky Load balancing

    Does WLS 5.1 support sticky load balancing?
              Thank you.
              Rob.
              

    http://www.weblogic.com/docs51/classdocs/javadocs/weblogic/rmi/extensions/CallRouter.html
              - Prasad
              Rob wrote:
              > Looking at the online docs from BEA I found something that I think is what I
              > need. It is called parameter-based routing.
              >
              > It seems that WebLogic Clusters support several algorithms to address this
              > kind of load balancing (something like sticky load balancing).
              >
              > The next text is from the online BEA docs:
              >
              > Parameter-based routing
              >
              > It is also possible to gain finer-grained control over load balancing. Any
              > clustered object can be assigned a CallRouter. This is a plug-in that is
              > called before each invocation with the parameters of the call. The
              > CallRouter is free to examine the parameters and return the name of the
              > server to which the call should be routed.
              >
              > If this is correct (that this type of load balancing is roughly the same as
              > sticky load balancing) then the question is now:
              >
              > What exactly is a CallRouter, and where can I see an example of this or more
              > documentation?
              Cheers
              - Prasad
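
              For illustration, a sketch of what a CallRouter implementation might look like, based on the parameter-based routing description quoted above. The interface signature is taken from the CallRouter javadoc linked at the top as I recall it, and the method name, server names and routing rule are invented, so check the documentation before relying on this:

                  import java.lang.reflect.Method;
                  import weblogic.rmi.extensions.CallRouter;

                  // Routes invocations on a clustered object based on call parameters.
                  public class AccountCallRouter implements CallRouter {
                      public String[] getServerList(Method m, Object[] params) {
                          if (m.getName().equals("lookupAccount")
                                  && params != null && params[0] instanceof String) {
                              String name = (String) params[0];
                              // Hypothetical rule: split the keyspace across two servers.
                              return name.charAt(0) < 'N'
                                      ? new String[] { "serverA" }
                                      : new String[] { "serverB" };
                          }
                          return null; // null = no preference, use the default algorithm
                      }
                  }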
              

  • OSB 11g: Business Service not getting reply back from distributed queue

    Here is the scenario:
    OSB Domain A has a Business service that does a JMS send to a distributed JMS queue hosted on another WLS domain.
    It expects a response from a distributed queue sitting on this same remote WLS domain.
    The remote WLS domain is configured as a Foreign JNDI Provider in the OSB domain, with the request and response queues defined. Testing this business service, we see that messages are being posted to the remote request queue fine, and that the messages are being picked up and processed. The response was sent to the response queue, but the message just sits there; the OSB business service did not pick it up. I can see 16 active consumers on the response queue, presumably from the OSB business service?
    Using CorrelationID.
    URI for request/response queues are using the locally defined JNDI name for the queues and connection factories from the Foreign JNDI Provider definition:
    - URI for request queue: jms:///[RemoteXAConnectionFactoryX]/[LocalJNDINameForRequestQueue]
    - URI for response queue: jms:///[RemoteXAConnectionFactoryX]/[LocalJNDINameForResponseQueue]
    Completely stumped at the moment...does anybody have any ideas?
    Thanks,
    Melvin
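
    For what it's worth, a sketch of the responder-side correlation pattern that a CorrelationID-based request/response usually relies on (standard javax.jms; the payload is a placeholder). If the remote service does not copy the request's JMSMessageID into the response's JMSCorrelationID like this, the waiting consumers will never match the response:

        import javax.jms.*;

        public class Responder implements MessageListener {
            private final Session session;
            private final MessageProducer responseProducer; // bound to the response queue

            public Responder(Session session, Destination responseQueue) throws JMSException {
                this.session = session;
                this.responseProducer = session.createProducer(responseQueue);
            }

            public void onMessage(Message request) {
                try {
                    TextMessage response = session.createTextMessage("result payload");
                    // Echo the request's message ID back as the correlation ID,
                    // which is what a CorrelationID-based requestor matches on.
                    response.setJMSCorrelationID(request.getJMSMessageID());
                    responseProducer.send(response);
                } catch (JMSException e) {
                    throw new RuntimeException(e);
                }
            }
        }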

    Hi,
    I am also facing a similar kind of issue. After placing the message into the request queue, I need that message ID returned as the response to the business service. Please provide a solution for this issue. Thanks in advance.
    Thanks,
    bharath.

  • OSB distributed queue proxy configuration

    I have set up a distributed queue in WebLogic Server and I wonder how I should configure an OSB proxy to consume JMS messages from it. I have read the documentation at the Oracle site and it says a distributed queue groups a bunch of local JMS queues on different servers that can be accessed through a common JNDI name. This provides load balancing and failover for messages put in the queue.
    In an OSB proxy, a JMS destination is configured with the following URL: jms://server:port/destination. So if I have a distributed queue that maps to two JMS servers, what server and port must I set in the proxy URL? Do I need to set up two different proxies, one for JMS server1 and another for JMS server2?
    Regards,
    DC.

    For WebLogic JMS, message producer load balancing happens at 3 different places:
    1- At initial context lookup. This depends upon the URI you specify in the initial context call, e.g. if you specify t3://localhost:7002,localhost:7003, all context lookups happen on port 7002. Port 7003 is used only if 7002 is down. Thus this supports failover rather than load balancing. If you want load balancing for context lookups, you have to use a DNS-based cluster address where the DNS resolves the name to a different address each time.
    2- When a JMS connection is created. The JMS connection can be created to any managed server to which the connection factory is targeted. Thus you can have your context lookup on 7002, but the actual JMS connection can get created on 7003. If server affinity is enabled for the connection factory, then the JMS connection will be created to the same managed server instance on which the context lookup happened.
    3- When a message producer send is executed. The send can land on any DD member in the cluster if load balancing is enabled. If server affinity is enabled, then the message will end up on the DD member on the same managed server instance to which the JMS connection was made. For example, assume your JMS client app has a JMS connection on ms1 and the message produced ends up on ms2; then the following path would have been taken:
    jms client app --------over JMS connection-------> ms1 -------internally forwards----------> ms2 ------puts message---> dd2
    I would recommend you read the JMS chapter of the Professional Oracle WebLogic Server book, where this is explained clearly. A sketch of the three stages in client code follows below.
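
    To make those three stages concrete, a sketch in plain JMS client terms (URLs and JNDI names are placeholders):

        import javax.jms.*;
        import javax.naming.Context;
        import javax.naming.InitialContext;
        import java.util.Hashtable;

        public class ThreeStageDemo {
            public static void main(String[] args) throws Exception {
                // Stage 1: initial context lookup. With a comma-separated URL the
                // first reachable address is used; the rest are failover candidates.
                Hashtable<String, String> env = new Hashtable<String, String>();
                env.put(Context.INITIAL_CONTEXT_FACTORY,
                        "weblogic.jndi.WLInitialContextFactory");
                env.put(Context.PROVIDER_URL, "t3://server1:7011,server2:7012");
                Context ctx = new InitialContext(env);
                ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms.MyConnectionFactory");
                Destination dd = (Destination) ctx.lookup("myDistributedQueue");

                // Stage 2: the connection may be created on any server the factory
                // is targeted to, not necessarily the one the lookup used.
                Connection conn = cf.createConnection();
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(dd);

                // Stage 3: with load balancing on and affinity off, each send may
                // land on a different distributed destination member.
                for (int i = 0; i < 6; i++) {
                    producer.send(session.createTextMessage("message " + i));
                }
                conn.close();
            }
        }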

  • Distributed Queue - Unable To Load Balance Between Each Time A Send Method Is Called

    Hi,
              According to the JMS documentation, I should be able to get the
              distributed queue to load balance between each time the message producer
              calls send(). I was not able to achieve this; however, I noticed the load
              balancing happens when a JMS client is stopped and restarted (meaning
              totally exiting the JVM and restarting the JVM).
              Here is my configuration:
              WLS 8.1 SP2 on XP
              One cluster with two nodes (running on the same machine w/ different ports)
              Each node hosts one JMS server, which hosts one physical queue and uses a
              JDBC store
              One distributed queue with two physical members, one from each of the JMS
              servers.
              The JMS connection factory is configured with "Load Balancing Enabled" set
              to yes, and "Server Affinity Enabled" set to no. This connection factory is
              targeted to the cluster.
              The queue session for the queue sender is created with the transacted
              setting set to false.
              Any hints and ideas would be greatly appreciated.
              Here is the content of config.xml:
              ========================================================================
              <?xml version="1.0" encoding="UTF-8"?>
              <Domain ConfigurationVersion="8.1.0.0" Name="odh">
              <Cluster ClusterAddress="localhost:8001,localhost:9001"
              MulticastAddress="237.0.0.1" Name="odhCluster_1"/>
              <Server ListenAddress="" ListenPort="7001" Machine="localhost"
              Name="odhAdmin" NativeIOEnabled="true" ServerVersion="8.1.2.0">
              <SSL Enabled="false" HostnameVerificationIgnored="false"
              IdentityAndTrustLocations="KeyStores" Name="odhAdmin"/>
              </Server>
              <Server Cluster="odhCluster_1" ExpectedToRun="false"
              IIOPEnabled="false" ListenAddress="" ListenPort="8001"
              Machine="localhost" Name="odhManagedServer_1"
              NativeIOEnabled="true" ServerVersion="8.1.2.0">
              <SSL Enabled="false" IdentityAndTrustLocations="KeyStores"
              Name="odhManagedServer_1"/>
              <ExecuteQueue Name="weblogic.kernel.Default" ThreadCount="15"/>
              </Server>
              <Server Cluster="odhCluster_1" ExpectedToRun="false"
              IIOPEnabled="false" ListenAddress="" ListenPort="9001"
              Machine="localhost" Name="odhManagedServer_2"
              NativeIOEnabled="true" ServerVersion="8.1.2.0">
              <SSL Enabled="false" IdentityAndTrustLocations="KeyStores"
              Name="odhManagedServer_2"/>
              <ExecuteQueue Name="weblogic.kernel.Default" ThreadCount="15"/>
              </Server>
              <MigratableTarget Cluster="odhCluster_1"
              Name="odhManagedServer_1 (migratable)"
              Notes="This is a system generated default migratable target for a
              server. Do not delete manually."
              UserPreferredServer="odhManagedServer_1"/>
              <MigratableTarget Cluster="odhCluster_1"
              Name="odhManagedServer_2 (migratable)"
              Notes="This is a system generated default migratable target for a
              server. Do not delete manually."
              UserPreferredServer="odhManagedServer_2"/>
              <Machine Name="localhost">
              <NodeManager ListenAddress="localhost" Name="localhost"/>
              </Machine>
              <JMSConnectionFactory AcknowledgePolicy="All"
              DefaultDeliveryMode="Persistent"
              JNDIName="com.neoforma.ConnectionFactory"
              Name="odhConnectionFactory" ServerAffinityEnabled="false"
              Targets="odhCluster_1" XAConnectionFactoryEnabled="true"/>
              <JMSDistributedQueue JNDIName="com.neoforma.odhDistributedQueue_1"
              LoadBalancingPolicy="Round-Robin" Name="odhDistributedQueue_1"
              Targets="odhCluster_1">
              <JMSDistributedQueueMember JMSQueue="odhQueue_1"
              Name="DistributedQueueMember_1"/>
              <JMSDistributedQueueMember JMSQueue="odhQueue_2"
              Name="DistributedQueueMember_2"/>
              </JMSDistributedQueue>
              <JMSJDBCStore ConnectionPool="odhMessagePool"
              Name="odhJMSJDBCStore_1" PrefixName="Order1_"/>
              <JMSJDBCStore ConnectionPool="odhMessagePool"
              Name="odhJMSJDBCStore_2" PrefixName="Order2_"/>
              <JMSServer Name="odhJMSServer_1" Store="odhJMSJDBCStore_1"
              Targets="odhManagedServer_1">
              <JMSQueue CreationTime="1076439896999"
              JNDIName="com.neoforma.odhQueue_1" Name="odhQueue_1"
              StoreEnabled="true"/>
              </JMSServer>
              <JMSServer Name="odhJMSServer_2" Store="odhJMSJDBCStore_2"
              Targets="odhManagedServer_2">
              <JMSQueue CreationTime="1076439664343"
              JNDIName="com.neoforma.odhQueue_2" Name="odhQueue_2"
              StoreEnabled="true"/>
              </JMSServer>
              <JDBCConnectionPool
              DriverName="oracle.jdbc.xa.client.OracleXADataSource"
              Name="odhConnectionPool" Password="...."
              Properties="user=..." Targets="odhCluster_1"
              TestTableName="SQL SELECT 1 FROM DUAL" URL="................."/>
              <JDBCConnectionPool DriverName="oracle.jdbc.driver.OracleDriver"
              Name="odhMessagePool" Password="....."
              Properties="user=....." Targets="odhCluster_1"
              TestTableName="SQL SELECT 1 FROM DUAL" URL="............."/>
              <JDBCMultiPool AlgorithmType="High-Availability"
              Name="odhJDBCMultiPool_1"
              PoolList="odhConnectionPool,odhMessagePool"
              Targets="odhCluster_1"/>
              <JDBCTxDataSource EnableTwoPhaseCommit="false"
              JNDIName="com.neoforma.order.orderDS" Name="odhJDBCDataSource_1"
              PoolName="odhConnectionPool" Targets="odhCluster_1"/>
              <Security Name="odh" PasswordPolicy="wl_default_password_policy"
              Realm="wl_default_realm" RealmSetup="true"/>
              <!--
              <EmbeddedLDAP
              Credential="{3DES}j+xkS9y1EYJUfic+M9ZJ+5DqGjiwTaVnt+Ti0TQWxXg="
              Name="odh"/>
              <SecurityConfiguration
              Credential="{3DES}OiyDMEOJS4gPLumKeKYWC+Kj9xWib6MhbmrNjeBmjJ7bpJypNb6Z7bUtAQF/bvi2RrFMs+3kqKerWNyD3NyT3QsrsyPoBDT0"
              Name="odh" RealmBootStrapVersion="1"/>
              -->
              <Realm FileRealm="wl_default_file_realm" Name="wl_default_realm"/>
              <FileRealm Name="wl_default_file_realm"/>
              <PasswordPolicy Name="wl_default_password_policy"/>
              <Application Deployed="true" Name="odh.ear"
              Path="D:\bea\user_projects\domains\odh\applications\odh.ear"
              StagedTargets="odhManagedServer_1,odhManagedServer_2"
              StagingMode="stage" TwoPhase="true">
              <EJBComponent Name="odh.jar" Targets="odhCluster_1" URI="odh.jar"/>
              </Application>
              <StartupClass ClassName="com.neoforma.startup.JMXMBeanStartup"
              DeploymentOrder="1" Name="ODH MBean Startup Class"
              Notes="ODH MBean Startup Class - Note" Targets="odhAdmin"/>
              <EmbeddedLDAP
              Credential="{3DES}YFY55/dsdxI9HL/AKGRXHuR1VwyJewNFdAHdrtk/WMM="
              Name="odh"/>
              <SecurityConfiguration
              Credential="{3DES}ZCPa1Bsrj3z2DhVKVUbq32zTYipDVff+LDB9+1b2Dr4VLhz5yjZyHgPheqS/kum4VVZamDYN07Hyb6rALiCTHhwt1EzK5+M+"
              Name="odh" RealmBootStrapVersion="1"/>
              </Domain>
              

    Thanks for that, Makiey. I am surprised that BEA hasn't come back with any
              info.
              Hien
              On 7 Jul 2004 01:51:01 -0700, makiey <[email protected]> wrote:
              >
              > Hi Hien Luu,
              >
              > We also have a problem with load balancing, tested with WLS 7.0 SP4 and
              > WLS 8.1 SP2 (HP-UX). The only "working" configuration is load-balancing
              > policy = random (CF deployed to cluster, load balancing enabled, affinity
              > disabled). With the "round-robin" policy we cannot utilize more than 50%
              > of the distributed queue's members.
              >
              > I'm trying to prepare a reproducer...
              >
              > greetings,
              > makiey
              

  • Distributed queue - uneven load-balancing

    I've read the relevant threads; "JMS: load balancing of messages is not happening uniformly between all the..." is close, but no help.
    I have 6 managed nodes in a cluster: nodes 1-3 on box A, nodes 4-6 on box B. I have a distributed queue plus a connection factory with server affinity disabled and load balancing enabled. Both the queue and the connection factory are deployed (targeted) to the entire cluster. The sender on node 1 pushed 100k messages into the queue, and 33k ended up evenly distributed (right down to a message!) across the physical queues on the 3 nodes on box A. The queues on nodes 4-6 received no messages. During the test all physical queues were alive and well, producers and consumers (MDBs) were alive and well, all nodes were up, and so on. In the cluster config I said I've got 6 nodes (Number Of Servers In Cluster Address field). Please tell me something is wrong with my config and this is not a bug that I have to bring up with Support.
    I am on 10.3.3 on Linux 64-bit with JRockit 1.6.

    Hi,
    No, this is not the correct way the JNDI tree is supposed to look for a distributed queue in a single cluster spanning two hosts. It means that either your configuration is wrong or the two hosts are not connected to each other properly.
    In your scenario this should be your architecture:
    Box-A
    =====
    AS
    MS-1, MS-2, MS-3 under Cluster
    JMSServer-1 => MS-1
    JMSServer-2 => MS-2
    JMSServer-3 => MS-3
    JMS_Module => Cluster
    SubDeployment_UDQ => JMSServer-1, JMSServer-2, JMSServer-3
    ConnFacty => Cluster
    UDQ => SubDeployment_UDQ
    Box-B
    ======
    MS-4, MS-5, MS-6 under Cluster
    JMSServer-4 => MS-4
    JMSServer-5 => MS-5
    JMSServer-6 => MS-6
    JMS_Module => Cluster
    SubDeployment_UDQ => JMSServer-4, JMSServer-5, JMSServer-6
    ConnFacty => Cluster
    UDQ => SubDeployment_UDQ
    Where: *=>* means targeted to
    In Box-A and Box-B, the Cluster, SubDeployment_UDQ, ConnFacty and UDQ are the same, NOT different.
    Once this is done, you should be able to see UDQ and ConnFacty in the JNDI tree of all the servers; then it is the right configuration, and when you send messages to UDQ using ConnFacty with affinity disabled, you will see the messages getting distributed properly across all 6 servers. A descriptor sketch follows below.
    Hope this helps you.
    Regards,
    Ravish Mody
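
    To illustrate the layout above, a sketch of the corresponding JMS system module entries. The element names are from memory of the WLS 10.3 weblogic-jms descriptor and the resource names mirror Ravish's example, so verify against a console-generated module file:

        <!-- inside the JMS_Module targeted to the Cluster -->
        <connection-factory name="ConnFacty">
          <jndi-name>ConnFacty</jndi-name>
          <load-balancing-params>
            <load-balancing-enabled>true</load-balancing-enabled>
            <server-affinity-enabled>false</server-affinity-enabled>
          </load-balancing-params>
        </connection-factory>
        <uniform-distributed-queue name="UDQ">
          <!-- SubDeployment_UDQ targets JMSServer-1 through JMSServer-6 -->
          <sub-deployment-name>SubDeployment_UDQ</sub-deployment-name>
          <jndi-name>UDQ</jndi-name>
        </uniform-distributed-queue>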

  • OSB 10gR3 - Create load balanced endpoint URI with WLST

    Hi,
    I need to create load-balanced endpoint URIs for a business service listening to a JMS queue. The configuration can be done through the console as shown below:
    Protocol: JMS
    Load Balancing Algorithm: round-robin
    URI 1 - jms://localhost:7001/loggingXACF/loggingQueue
    URI 2 - jms://localhost:7002/loggingXACF/loggingQueue
    I would like to do the same using the ALSB customization API in WLST. Any pointers on this would be helpful.
    Regards
    Vikas

    > Anyone have any idea what a CLUSTER-BROADCAST message is? And where it would be coming from?
    A cluster broadcast message is one of the ways in which WebLogic Server instances in a cluster communicate with one another. Details are here -
    http://download.oracle.com/docs/cd/E14571_01/web.1111/e13709/features.htm#i1021836
    Having little idea about this, I may not comment on the exact reason behind 21.4 million CLUSTER-BROADCAST messages in 70 minutes, but you may get a better and faster response in the WebLogic clustering forum -
    WebLogic Server - Clustering
    If you have Oracle Support, then I suggest you track this through an SR.
    Regards,
    Anuj

  • OSB 10gR3 (i.e. 10.3.1 on WLS 10.3.0) download

    Hello,
    I am looking for the download links to an older version of OSB (10gR3 or 10.3.1). On the product overview page, this version is mentioned (see http://www.oracle.com/technetwork/middleware/service-bus-fs/overview/index.html) but when you click on the download link (http://www.oracle.com/technetwork/middleware/service-bus/downloads/index-100284.html), only the latest (11g) version of the product is available. I've been searching through the site with no luck. Is there some "archived downloads" area which I am missing?
    The windows binary is called "osb1031_wls103_win32.exe" and I would like to download a Linux version.

    Contact Oracle Support to get the installable. You may also search on http://edelivery.oracle.com/
    Regards,
    Anuj

  • Reading messages from all distributed queues on a cluster (WLS 9.2)

    Hi,
    I have a following problem with distributed queues on WebLogic Server 9.2 MP1.
    Here's a brief description of my setup:
    I've got a cluster called 'myCluster', and two cluster nodes on it, 'nodeA' and 'nodeB'. I also have two JMS servers, 'jmsA' and 'jmsB'. Then I have a jms module 'myModule' with a subdeployment 'mySubdeployment' targeted on jms servers 'jmsA' and 'jmsB'. I have a jms connection factory 'TestFactory' that is targeted on myCluster (default targetting), and a jms queue 'TestQueue' (Uniform Distributed Queue) using subdeployment 'mySubdeployment' (targeted to 'jmsA' and 'jmsB').
    Now, the queue 'TestQueue' is used as a location where messages are stored when the system has met a problem in the environment that is preventing the normal handling of messages. Messages are kept in this queue until the problem is over, that's when the system administrator uses a browser application that requests the system to read the messages and handle them normally.
    The problem is that the cluster node which gets the request seems to be able to read only its own queue, and the messages in the other node's queue are left untouched. I know that I could send a message to the other node, for example via a 'purge_your_messages' topic, but that's not suitable for this case. I need to sort the messages by their IDs (set as a message property), and because of that I need exactly one node executing the purge.
    Any advice?
    - jj

    Browsers and receivers always attach to a single member of a distributed destination.
    WebLogic MDBs, on the other hand, automatically handle the task of attaching receivers to every member, and are quite simple to code and use these days. If you have the option of using WL MDBs, I recommend using them. (There's no equivalent for browsers.)
    Spring won't do the same OOTB, but there does appear to be a work-around for the Spring receiver issue (albeit not for browsers - just receivers). Here's a sample Spring impl that attaches a subscription to each and every member of a distributed topic:
    http://sleeplessinslc.blogspot.com/2011/12/weblogic-jms-partitioned-distributed.html.
    If the above isn't helpful, and you must cycle through every message on every server in the cluster, then you'll need to write special-case code to check each member separately. There are two common options for enumerating the destinations and working with each one: the JMX MBean "message management" APIs (WLST/Jython scripting or Java based) and the weblogic.jms.extensions destination availability APIs.
    HTH,
    Tom
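
    For completeness, a minimal sketch of the WebLogic MDB approach Tom recommends (EJB 3 annotations; the JNDI name is a placeholder for the distributed queue). Deployed to the cluster, WebLogic automatically attaches an instance to every member of the distributed destination:

        import javax.ejb.ActivationConfigProperty;
        import javax.ejb.MessageDriven;
        import javax.jms.Message;
        import javax.jms.MessageListener;
        import javax.jms.TextMessage;

        @MessageDriven(
            mappedName = "TestQueue", // placeholder: JNDI name of the distributed queue
            activationConfig = {
                @ActivationConfigProperty(propertyName = "destinationType",
                                          propertyValue = "javax.jms.Queue")
            })
        public class TestQueueMDB implements MessageListener {
            // Invoked for messages from every distributed destination member,
            // since WebLogic binds MDB receivers to each member automatically.
            public void onMessage(Message msg) {
                try {
                    if (msg instanceof TextMessage) {
                        System.out.println("Handled: " + ((TextMessage) msg).getText());
                    }
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
        }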
