Help: newbie to iPlanet clustering & load balancing

While going through the sample app "bank" that ships with iPlanet, I ran into a few questions. I would really appreciate it if somebody could lend a hand by answering the questions below:
1) In ias-web.xml and web.xml, the app is marked "distributable" and the session is synchronized by "dsync". Also, all servlets are marked "sticky". My question is: why don't the JSPs need to be set as "sticky"? If I have a web site made up of several tens of thousands of JSP pages, that would be a big problem.
2) Say the sample included a stateful session bean: would I have to set that stateful session bean to "sticky" as well? If so, does that mean the invocation from the servlet to the stateful session bean will take place in the same JVM process?
3) Is there any latency when the session is synchronized by the "dsync" process? If the application is set as "distributable" (to support load balancing) but the servlet is NOT set as sticky, will that cause any trouble due to the latency of session synchronization among the iPlanet servers?
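For reference, this is roughly how the sample's descriptors are set up (retyped from memory, so the ias-web.xml element names may not be exact, and "AccountServlet" is just a stand-in for one of the bank servlets):
<!-- web.xml: the web app is marked distributable -->
<web-app>
    <distributable/>
</web-app>
<!-- ias-web.xml (approximate): dsync-backed sessions plus a per-servlet sticky flag -->
<ias-web-app>
    <session-info>
        <impl>dsync</impl>
    </session-info>
    <servlet>
        <servlet-name>AccountServlet</servlet-name>
        <sticky>true</sticky>
    </servlet>
</ias-web-app>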
regards
Danny

Hi
Firstly, thanks for your reply. I would be very grateful if you could provide more detailed information on the questions below:
1) When a stateful session bean is set sticky, does that mean all requests from the same session will be directed from the web tier to the same JVM that initialized that stateful session bean? Also, am I correct that all invocations to entity beans/other session beans from that sticky stateful session bean will take place within the same JVM as the stateful session bean itself?
2) Is there any latency while "dsync" replicates the state of the HttpSession and stateful session bean? If so, am I correct that the HttpSession/stateful session bean state could be lost if the JVM crashes in between replications? Is there any way to avoid that?
regards
Danny

Similar Messages

  • Help: AM Agent working with load balancing AM Server

    Hi,
    We are trying to set up the policy agent to work with two AM Servers behind a load balancer.
    The agent deployment document said that in the AMAgent.properties we must set
    com.sun.am.loadBalancer_enable=true
    According to the AM deployment guide (http://docs.sun.com/source/817-7644/appE_loadbalancerconfig.html),
    we also set in the AMConfig.properties something like
    com.iplanet.am.lbcookie.name=server1
    com.iplanet.am.lbcookie.value=server1
    The load balancing just does not work. Can anyone explain how the AM agent works in such a deployment
    environment? Some people say the agent can find the real server using the naming service, but not much
    explanation can be found.
    More info on our two machines:
    The two AM servers are named server1.domain and server2.domain. The virtual LB name is server.domain.
    The two AM servers were installed using the host name server.domain. We added the servers' real name
    in the AM's fqdnMap. At the agent config file, the name service is pointing to the LB.
    Any advice would be really appreciated.
    Regards,
    Henry

    Thanks for your reply.
    We finally figured it out, thanks to help from Bernhard. Here are the steps (a consolidated sketch of the key settings follows the list):
    1) Install the AM servers using each machine's own host name, pointing both at the same LDAP server.
    2) In AMAgent.properties, set com.sun.am.loadBalancer_enable=true
    3) In the AM server Platform list, add all the machines' names.
    4) In the Organization aliases, add both machines' names.
    5) In the fqdnMap, add the load balancer's name.
    6) On the LB, set cookie stickiness based on the JSESSIONID cookie.
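    For anyone hitting the same problem, a consolidated sketch of the key settings (host names are the ones from this thread; the cookie name "amlbcookie" is my placeholder, and my reading of appendix E is that the cookie name stays the same on both servers while the value is unique per server, so double-check against the guide):
    # AMAgent.properties on the agent host
    com.sun.am.loadBalancer_enable=true
    # AMConfig.properties on server1.domain
    com.iplanet.am.lbcookie.name=amlbcookie
    com.iplanet.am.lbcookie.value=server1
    # AMConfig.properties on server2.domain
    com.iplanet.am.lbcookie.name=amlbcookie
    com.iplanet.am.lbcookie.value=server2
    # Load balancer: cookie-based stickiness keyed on JSESSIONID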

  • IPlanet sticky load balancing question

    We have two iPlanet Application Servers v6.0 sp3 and two iPlanet
    Enterprise Web servers v4.1. All machines are on the same domain name.
    All machines point to the same LDAP server. The application has been
    installed on both App servers. Clustering has been configured in the
    App server Admin tool to be based upon system load. For non-sticky load
    balancing, all works fine, and session information is carried over
    properly. However, for sticky load balancing, hits going into either of
    the Web servers do not always go back to the App server that first
    started the session.
    The online iPlanet documentation says it can be done, but we have found
    differently. Has anybody else gotten this to work?
    Thank you,
    David Shade

    Hi David,
    With sticky load balancing enabled, the first time you execute the application the request will go to any KJS,
    depending on the load-balancing criteria you set (round robin, server response, or whatever). This applies only
    to the first request; afterwards it will be executed in that particular KJS process only, as long as that KJS is
    alive, whatever your load-balancing criteria may be.
    Kill that KJS and watch: you will then see the failover.
    Feel free to mail me for any further information.
    Sanjeev,
    Developer Support Team iAS-India.

  • OSB jms clustering - load balancing seems to be not working

    Hi All,
    I have one admin server and two managed servers running in a cluster (one of the managed servers runs on a remote Linux machine).
    I have a connection factory created with load balancing enabled (round robin) and server affinity disabled.
    I have a queue created as a uniform distributed queue.
    I have a proxy service with load balancing set to round robin and the endpoint URI below:
    jms://rdoelapp001011:61703,rdoelapp001013:61703/synergyConnectionFactory1/MM_gridQ0
    When I execute this proxy and send messages, they always go to one server only; no messages go to the other server.
    If I shut down the server that receives the messages, the other server starts receiving them. So failover seems to be working, but not load balancing.
    One point that may be worth mentioning: from the admin console, the cluster's servers page shows the following:
    Name: synergyOSBServer1 | State: RUNNING | Drop-out Frequency: Never | Remote Groups Discovered: 0 | Local Group Leader: synergyOSBServer1 | Total Groups: 1 | Discovered Group Leaders: synergyOSBServer1 | Groups: {synergyOSBServer1} | Primary: 0
    Name: synergyOSBServer2 | State: RUNNING | Drop-out Frequency: Never | Remote Groups Discovered: 0 | Local Group Leader: synergyOSBServer1 | Total Groups: 1 | Discovered Group Leaders: synergyOSBServer1 | Groups: {synergyOSBServer1, synergyOSBServer2} | Primary: 0
    One server lists its Groups as {synergyOSBServer1} instead of {synergyOSBServer1, synergyOSBServer2}. Does that look correct?
    Here is my JMS module XML file:
    <?xml version='1.0' encoding='UTF-8'?>
    <weblogic-jms xmlns="http://xmlns.oracle.com/weblogic/weblogic-jms" xmlns:sec="http://xmlns.oracle.com/weblogic/security" xmlns:wls="http://xmlns.oracle.com/weblogic/security/wls" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.oracle.com/weblogic/weblogic-jms http://xmlns.oracle.com/weblogic/weblogic-jms/1.1/weblogic-jms.xsd">
    <connection-factory name="synergyConnectionFactory1">
    <sub-deployment-name>synergySubDeploy1</sub-deployment-name>
    <default-targeting-enabled>false</default-targeting-enabled>
    <jndi-name>synergyConnectionFactory1</jndi-name>
    <client-params>
    <client-id-policy>Restricted</client-id-policy>
    <subscription-sharing-policy>Exclusive</subscription-sharing-policy>
    <messages-maximum>10</messages-maximum>
    </client-params>
    <transaction-params>
    <xa-connection-factory-enabled>false</xa-connection-factory-enabled>
    </transaction-params>
    <load-balancing-params>
    <load-balancing-enabled>true</load-balancing-enabled>
    <server-affinity-enabled>false</server-affinity-enabled>
    </load-balancing-params>
    <security-params>
    <attach-jmsx-user-id>false</attach-jmsx-user-id>
    </security-params>
    </connection-factory>
    <uniform-distributed-queue name="errorQ">
    <sub-deployment-name>synergySubDeploy1</sub-deployment-name>
    <default-targeting-enabled>false</default-targeting-enabled>
    <jndi-name>errorQ</jndi-name>
    <load-balancing-policy>Round-Robin</load-balancing-policy>
    <forward-delay>-1</forward-delay>
    <reset-delivery-count-on-forward>true</reset-delivery-count-on-forward>
    </uniform-distributed-queue>
    <uniform-distributed-queue name="undlvQ">
    <sub-deployment-name>synergySubDeploy1</sub-deployment-name>
    <default-targeting-enabled>false</default-targeting-enabled>
    <jndi-name>undlvQ</jndi-name>
    <load-balancing-policy>Round-Robin</load-balancing-policy>
    <forward-delay>-1</forward-delay>
    <reset-delivery-count-on-forward>true</reset-delivery-count-on-forward>
    </uniform-distributed-queue>
    <uniform-distributed-queue name="MM_gridQ0">
    <sub-deployment-name>synergySubDeploy1</sub-deployment-name>
    <default-targeting-enabled>false</default-targeting-enabled>
    <jndi-name>MM_gridQ0</jndi-name>
    <load-balancing-policy>Round-Robin</load-balancing-policy>
    <forward-delay>5</forward-delay>
    <reset-delivery-count-on-forward>true</reset-delivery-count-on-forward>
    </uniform-distributed-queue>
    <saf-imported-destinations name="synergySAFImportedDest1">
    <sub-deployment-name>synergySubDeploy1</sub-deployment-name>
    <default-targeting-enabled>false</default-targeting-enabled>
    <saf-queue name="gridQ0">
    <remote-jndi-name>MB_gridQ0</remote-jndi-name>
    <local-jndi-name>gridQ0</local-jndi-name>
    <non-persistent-qos>At-Least-Once</non-persistent-qos>
    <time-to-live-default>0</time-to-live-default>
    <use-saf-time-to-live-default>false</use-saf-time-to-live-default>
    <unit-of-order-routing>Hash</unit-of-order-routing>
    </saf-queue>
    <jndi-prefix>MB_</jndi-prefix>
    <saf-remote-context>synergySAFContext1</saf-remote-context>
    <saf-error-handling>synergySAFErrorHndlr1</saf-error-handling>
    <time-to-live-default>0</time-to-live-default>
    <use-saf-time-to-live-default>false</use-saf-time-to-live-default>
    <unit-of-order-routing>Hash</unit-of-order-routing>
    </saf-imported-destinations>
    <saf-remote-context name="synergySAFContext1">
    <saf-login-context>
    <loginURL>t3://rdoelapp001013:7001</loginURL>
    <username>weblogic</username>
    <password-encrypted>{AES}z9VY/K4M7ItAr2Vedvhx+j9htR/HkbY2LRh1ED+Cz5Y=</password-encrypted>
    </saf-login-context>
    <compression-threshold>2147483647</compression-threshold>
    </saf-remote-context>
    <saf-error-handling name="synergySAFErrorHndlr1">
    <policy>Log</policy>
    <log-format xsi:nil="true"></log-format>
    <saf-error-destination xsi:nil="true"></saf-error-destination>
    </saf-error-handling>
    </weblogic-jms>
    Any help will be greatly appreciated.
    Edited by: 818591 on Feb 16, 2011 11:28 AM

    I am not following you here: "the right approach is to make OSB run on the managed server cluster and not on the admin server."
    I have a JMS proxy service that I created from the admin console,
    and I have also gone through step 5 in the link below:
    http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/deploy/config.html#wp1524235
    If I am not wrong, the proxy service endpoint URI determines where it points; in a cluster environment it should point to the cluster address.
    My proxy has the endpoint URI below:
    jms://rdoelapp001011:61703,rdoelapp001013:61703/synergyConnectionFactory1/MM_gridQ0
    and rdoelapp001011:61703,rdoelapp001013:61703 is my cluster address
    As per your suggestion: "To fix your problem, make OSB run on the cluster and specify the same URL for the JMS proxy service."
    Could you please provide some instructions on how I would "make the OSB JMS proxy service run in a cluster"?
    As a note, I have the queue defined as a distributed queue, the connection factory is targeted to the cluster, and the UDQ is also targeted to the cluster.
    Just for testing, I created another managed server running locally on the machine where my admin server is running,
    and I created a proxy by following the steps mentioned above, with the endpoint URI below:
    jms://rdoelapp001011:61703,rdoelapp001013:61703,rdoelapp001011:61700/synergyConnectionFactory1/MM_gridQ0
    where the new cluster address is rdoelapp001011:61703,rdoelapp001013:61703,rdoelapp001011:61700.
    It did create consumers on both of the managed servers that run locally, but no consumers on the remote managed server.
    So I am leaning towards thinking there is some incorrect setup on the remote managed server, and maybe the admin server cannot communicate with the remote server for some reason, but I am not sure.
    As a note, the cluster is set up to communicate over a "unicast" channel,
    and I created a channel in each managed server with the same name.
    Here is the cluster configuration:
    <name>synergyCluster1</name>
    <cluster-address>rdoelapp001011:61703,rdoelapp001013:61703,rdoelapp001011:61700</cluster-address>
    <default-load-algorithm>round-robin</default-load-algorithm>
    <cluster-messaging-mode>unicast</cluster-messaging-mode>
    <cluster-broadcast-channel>synergyChannel1</cluster-broadcast-channel>
    <number-of-servers-in-cluster-address>3</number-of-servers-in-cluster-address>
    </cluster>
    Here are the two OSB server configurations:
    <server>
    <name>synergyOSBServer1</name>
    <machine xsi:nil="true"></machine>
    <listen-port>61703</listen-port>
    <cluster>synergyCluster1</cluster>
    <web-server>
    <web-server-log>
    <number-of-files-limited>false</number-of-files-limited>
    </web-server-log>
    </web-server>
    <server-debug>
    <debug-scope>
    <name>weblogic.jms.saf</name>
    <enabled>true</enabled>
    </debug-scope>
    <debug-jmssaf>true</debug-jmssaf>
    <debug-saf-sending-agent>true</debug-saf-sending-agent>
    </server-debug>
    <listen-address>localhost</listen-address>
    <network-access-point>
    <name>synergyChannel1</name>
    <protocol>cluster-broadcast</protocol>
    <listen-address>localhost</listen-address>
    <listen-port>61702</listen-port>
    <http-enabled-for-this-protocol>true</http-enabled-for-this-protocol>
    <tunneling-enabled>false</tunneling-enabled>
    <outbound-enabled>true</outbound-enabled>
    <enabled>true</enabled>
    <two-way-ssl-enabled>false</two-way-ssl-enabled>
    <client-certificate-enforced>false</client-certificate-enforced>
    </network-access-point>
    <jta-migratable-target>
    <user-preferred-server>synergyOSBServer1</user-preferred-server>
    <cluster>synergyCluster1</cluster>
    </jta-migratable-target>
    </server>
    <server>
    <name>synergyOSBServer2</name>
    <ssl>
    <enabled>false</enabled>
    </ssl>
    <machine xsi:nil="true"></machine>
    <listen-port>61703</listen-port>
    <listen-port-enabled>true</listen-port-enabled>
    <cluster>synergyCluster1</cluster>
    <web-server>
    <web-server-log>
    <number-of-files-limited>false</number-of-files-limited>
    </web-server-log>
    </web-server>
    <listen-address>rdoelapp001013</listen-address>
    <network-access-point>
    <name>synergyChannel1</name>
    <protocol>cluster-broadcast</protocol>
    <listen-address>rdoelapp001013</listen-address>
    <listen-port>61702</listen-port>
    <http-enabled-for-this-protocol>true</http-enabled-for-this-protocol>
    <tunneling-enabled>false</tunneling-enabled>
    <outbound-enabled>true</outbound-enabled>
    <enabled>true</enabled>
    <two-way-ssl-enabled>false</two-way-ssl-enabled>
    <client-certificate-enforced>false</client-certificate-enforced>
    </network-access-point>
    <java-compiler>javac</java-compiler>
    <jta-migratable-target>
    <user-preferred-server>synergyOSBServer2</user-preferred-server>
    <cluster>synergyCluster1</cluster>
    </jta-migratable-target>
    <client-cert-proxy-enabled>false</client-cert-proxy-enabled>
    </server>
    <server>
    Edited by: 818591 on Feb 18, 2011 11:26 AM

  • XIR3 Clustered load balancing - how is it handled?

    Can anyone help me with this, or point to some useful documentation on the following? The scenario is this. I plan to deploy the following 2 servers:
    1. Server 1, with its CMS and system database
    2. Server 2, clustered with Server 1 (different machine), using the same system database and FRS (on a NAS).
    Questions
    1. How is the load balancing handled? Is there an SIA for each server, or one SIA handling each server and its respective CMS? By what process/servers is the load balancing handled?

    That's strange. The client side of DeskI doesn't store what connections are being used or how many, so how could it evaluate which server to use before connecting? Surely if I am asking it to connect to Server B and it finds Server B then it would use Server B? If it couldn't find Server B then I can understand how it would redirect to Server A.
    In my case the CMS for Server B is up and running, and my registry settings are configured to search for A and B, so with Server B unused and its CMS up and running, a DeskI client looking for Server B should find Server B and not Server A, right?
    Is there any documentation on this?

  • Hardware clustering/load balancing/failover with Tomcat

    Hello forum!
    I recently bought a Cisco 1801, and it sure is capable! Anyhow, I've got a hobby website that is getting a fair bit of traffic - approaching too much for one node to handle and it's time to start thinking about distributing the load.
    I'd like to do a little clustering of server nodes running Apache Geronimo, which is J2EE running atop Apache Tomcat. For the sake of keeping things generic, let's just call it Tomcat because it configures the same way.
    I do not run Apache HTTP Server as a proxy, I only run Tomcat directly connected to the internet. I do this for performance reasons.
    Anyhow, I'm wondering if any of you evil geniuses could suggest a way that I could cluster Tomcat nodes directly using the router to serve as a hardware load balancer and have the whole sticky session thing with failover, etc... All of the documents I find on the subject discuss clustering by way of Apache HTTP with Mod_JK.
    I have already asked this question on the hardware side, and got great information about the capable load balancing features my router sports (but limited compared to Cisco CSS products.)
    Now I'm wondering if anyone has experience taking an open source application server like Geronimo or Tomcat or JBoss and clustering it using hardware load balancing. What kinds of Tomcat configurations, if any, do I need to add for things like sticky sessions and failover? Or, is all that automatic?
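    From what I can piece together from the Tomcat clustering docs, the server-side pieces would be roughly the sketch below (unverified on my setup; the cluster class name differs between Tomcat 5.5 and 6, and jvmRoute only matters if the front end keys stickiness off it):
    <!-- conf/server.xml on each node: a unique jvmRoute plus the default session-replication cluster -->
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="node1">
        <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
    </Engine>
    <!-- WEB-INF/web.xml of the webapp: sessions must be marked distributable -->
    <distributable/>
    My understanding is that with the router doing source-IP or cookie stickiness in front, the Cluster element is only there so sessions survive a node failure, not for normal routing.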
    Thanks so much for reading and for any replies. If there is a better forum for my question, please direct me there.
    Cheers,
    Dave Woldrich
    http://CardMeeting.com

    This occurs rarely when the Tomcat process is not able to connect to the database. The database connection problem is an internal cause which manifests externally as missing fields in reports.
    Workaround: Restart the Apache process and the Tomcat process. From the CLI on your CiscoWorks Server, enter the following commands in the specified sequence:
    1. pdterm Apache
    2. pdterm Tomcat
    3. pdexec Tomcat
    4. pdexec Apache

  • Query on Clustering,Load Balancing and Dehydration in Oracle BPEL PM

    Hi,
    I had a Few queries on Dehydration ,Clustering and Load Balancing:
    1) In section 5.1, "Use Case for Asynchronous Web Services", insertion of a dehydration point is mentioned. What exactly is meant by this? Can a dehydration point be inserted explicitly, or do we have to put in a wait activity or something so that dehydration happens?
    2) If a dehydration point needs to be inserted explicitly, aren't we assuming that the invoke-reply will take long? This might not always be the case; it is also possible that an asynchronous process is not a long-running one.
    3) Can we configure administratively how long the process waits (say 10 seconds or so) before it gets dehydrated?
    4) Consider a load-balancing scenario where we have 2 BPEL PMs (PM1 and PM2) running on 2 different app servers (SRV1 and SRV2) in a clustered environment.
    Scenario:
    1)We have an asynchronous process which makes a call to a human task activity.
    2)A request from client comes for this process.
    3) The load-balancing server forwards it to PM1 on SRV1. PM1 processes the request and calls the human task.
    4) PM1 has not dehydrated the process.
    5) The response from the human user comes back to the load balancer before the process is dehydrated by PM1. The load balancer forwards this response to PM2. PM2 searches for the process based on the correlation ID and does not find it.
    Can you tell me how this scenario is handled?
    I would be grateful if someone can answer these queries or direct me to some place where it is already explained.
    Thanks
    Dileep

    1) In section 5.1, "Use Case for Asynchronous Web Services", insertion of a dehydration point is mentioned. What exactly is meant by this? Can a dehydration point be inserted explicitly, or do we have to put in a wait activity or something so that dehydration happens?
    Insert a Java embedding snippet that calls "checkpoint();" (a sketch follows below).
    3) Can we configure administratively how long the process waits (say 10 seconds or so) before it gets dehydrated?
    No, this is due to the performance of your database.
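    For reference, the Java embedding mentioned above would look something like this in the .bpel source (a sketch only; the activity name is arbitrary and the exact bpelx:exec syntax should be checked against your 10g release):
    <bpelx:exec name="forceDehydration" language="java" version="1.5">
        <![CDATA[
            // persist the current instance state, i.e. force a dehydration point
            checkpoint();
        ]]>
    </bpelx:exec>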

  • JMS Clustering : Load Balancing expected Behaviour

    Hi All,
              I have a cluster with 2 managed servers, A and B. A ConnectionFactory is deployed to the cluster, and Server B hosts the JMS server. The destinations on the JMS server are not distributed, but their JNDI names are replicated across the cluster. Both load balancing and server affinity are enabled on the connection factory (I assume these attributes matter only when the destinations are distributed).
              An application containing MDBs and EJBs is deployed to the cluster; in onMessage the MDB looks up a facade and makes calls on it. An external Java client sets up the InitialContext using the cluster address and starts sending messages to the destination.
              What should the expected behaviour be in this scenario? According to my understanding:
              - Even though the connection factory is deployed across the cluster, since the physical destinations exist only on the WebLogic server hosting the JMS server (Server B), the actual message handling (MDB invocation) would be done only there.
              - When the MDBs are invoked on Server B, they would perform a lookup for the facade. Because of the collocation optimisation, the replica-aware stub used would be the one on Server B, and thereafter all the method processing should be done on Server B.
              Is this correct? Wouldn't this also mean that no load balancing happens, because of the collocation optimisation? Do I need to use a distributed destination to enable load balancing in this scenario?
              Any help would be greatly appreciated..
              thanks,
              Josh


  • Help choose the appropriate etherchannel load balance method

    Hi
    I have 2 network architectures: case A and case B (see the architecture below).
    Case A: one server connected to the switch at each site.
    Case B: 3 servers connected to the switch behind a router at each site.
    The 2 sites are connected by 2 wireless links; each wireless link has 105 Mbps of bandwidth (I absolutely need the aggregate 210 Mbps).
    The headquarters site is the principal site, and the backup site is used to back up data located on the principal site.
    I use Gigabit Cisco 2960 switches.
    I use EtherChannel to aggregate the 2 switch ports (port 1 and port 2) where the 2 wireless links are connected.
    I configured src-mac load balancing for case A, but all traffic is sent over only one wireless link.
    Please help me choose the most appropriate load-balance method to spread traffic between the 2 links for case A and for case B.
    Please advise
    Thanks in advance

    Your Case A might be handled by port hashing, but unfortunately most Cisco platforms don't support it.
    Your Case B isn't much better, as you only have 3 hosts on each side, and according to your drawing, they are behind routers, so you don't want to use MAC hashes.  If you don't have port hashing, next best choice might be src-dest-IP hashing.  Again, though, with just 3 hosts, your distribution will likely not be very balanced, especially over shorter time intervals.
    To obtain best utilization of your links, you need some kind of better link bonding, such as MLPPP (unfortunately, usually won't scale to FE rates) or a hardware MUX.  Next best option, if you could route across the links, would be something like Cisco's OER/PfR which can dynamically load balance.
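    If the 2960's IOS exposes it, switching the hash is a one-line global change on both switches (a sketch only; verify which options your IOS version actually lists with the second command):
    Switch(config)# port-channel load-balance src-dst-ip
    Switch# show etherchannel load-balance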

  • Multiple dgraph clustering (load balancing) OOTB available?

    Hello,
    We have 2 dgraphs running in parallel with the same data, to share the client load.
    I would like to know whether there is any OOTB solution provided by Endeca whereby these two dgraphs can share the load, or whether I need to use an external load balancer for this.
    If there is any document that explains the configuration for an OOTB Endeca solution, please let me know.
    Thanks!

    Hi,
    Yes, I know...
    I tried to check whether they were still available, but found none;
    that's why I said that if you want them, email me (I downloaded them from EDeN) and the files aren't too big.
    regards
    Saleh
    29/11/12: a copy of the document called "Simple MDEX Load Balancing with Apache HTTP Server" (Endeca) follows:
    Endeca Solution Article
    Simple MDEX Load Balancing with Apache HTTP Server
    By Robert Dennis
    Last Updated: December 2010
    Endeca Product Versions: 5.0+
    This document describes how to set up a simple load balancing and failover solution using Apache HTTP server. This provides a cost effective mechanism leveraging widely available open source technologies to address simple infrastructure needs. This document includes the following sections:
    • Introduction
    • Configuration Steps
    • References
    Introduction
    Load balancers are the preferred solution for providing scalability, redundancy, and fail-over for MDEX Engine queries. Typically, load balancing and failover are accomplished with the use of dedicated load balancing hardware. For some Endeca implementations, however, a robust mechanism for load balancing may not be available or required by the customer. For more information on the basics of load balancing an Endeca application, see the “Endeca Load Balancing Best Practices” Solution Article available on EDeN.
    This document describes the configuration steps involved in leveraging Apache’s default load balancing mechanisms to provide a simple load balancing solution for an Endeca application. In this scenario, Apache will sit between the Endeca presentation API housed in a web application and redundant MDEX engines. Apache performs the duties of a simple load balancer and failover broker, managing query requests from the application tier to specific MDEX engines.
    Configuration Steps
    The described configuration leverages Apache 2.2 HTTP Server as the load balancing mechanism between two identical MDEX engines all residing on a single server. The sample configuration expects the Apache HTTP server to be listening on port 5555 and the MDEX engines to be listening on ports 8000 and 8001. These can be changed as appropriate for a given environment.
    1. In the Apache httpd.conf, enable the server for listening on port 5555.
    2. Enable the following modules by un-commenting the appropriate loadmodule statements in the httpd.conf file. These modules include: mod_proxy, mod_proxy_balancer, mod_proxy_connect, mod_proxy_http, mod_negotiation.
    3. Include the httpd-vhosts.conf file by un-commenting the appropriate line in the httpd.conf file.
    4. Save the httpd.conf file and open the httpd-vhosts.conf file. Append the below information and save the file.
    NameVirtualHost *:5555
    <VirtualHost *:5555>
        ServerName localhost
        ProxyPass / balancer://dgraphs/
        ProxyPassReverse / balancer://dgraphs/
        <Proxy balancer://dgraphs>
            BalancerMember http://localhost:8000 loadfactor=1 retry=0
            BalancerMember http://localhost:8001 loadfactor=1 retry=0
        </Proxy>
    </VirtualHost>
    <Location /balancer-manager>
        SetHandler balancer-manager
    </Location>
    The "retry" parameter associated with each BalancerMember sets the period of inactivity for a particular worker after Apache determines it is offline. The default is 60 seconds; it is recommended to set this to a low number such as 0, which effectively disables the lockout.
    In environments where particular MDEX engines are targeted for additional load, the "loadfactor" parameter on each BalancerMember can be adjusted; higher values cause the load-balancing algorithm to route proportionally more traffic to that member.
    5. Restart Apache.
    6. Within the UI application, configure the host and port of the HttpENEConnection to the host and port of the load balancer (e.g. localhost:5555).
    Apache HTTP Server is now properly configured to serve as a load balancer for the MDEX Engines.
    References
    “Endeca Load Balancing Best Practices” Solution Article (EDeN)
    Apache Module mod_proxy
    Apache Module mod_proxy_balancer
    Apache Module mod_proxy_connect
    Apache Module mod_proxy_http
    Apache Module mod_negotiation
    Edited by: sabdelhalim on Nov 28, 2012 5:46 PM

  • RMI Clustering/Load Balancing

    I want to be able to access a stateless application deployed in an iAS 10g cluster from a remote client using RMI, and have the RMI requests load balanced and failed over across the cluster. The application doesn't use EJBs.
    Can anybody tell me how the client RMI requests could be load balanced and failed over across the cluster?

    This is the document you want :- http://download.oracle.com/docs/cd/E12825_01/epm.111/epm_high_avail/launch.html
    or pdf version - http://download.oracle.com/docs/cd/E12825_01/epm.111/epm_high_avail.pdf
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Clustering load balancing

    How do you set up clustering/load balancing?
    I know it's done with APS, but I can't find the documentation for 11.1.1.
    Thank you,
    Jz

    This is the document you want :- http://download.oracle.com/docs/cd/E12825_01/epm.111/epm_high_avail/launch.html
    or pdf version - http://download.oracle.com/docs/cd/E12825_01/epm.111/epm_high_avail.pdf
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Setting up Load Balancing / Clustering in BIP installation

    Hi,
    We're upgrading Siebel 7.8 to 8.1.1.5. As part of this process, we'll be replacing the two clustered Actuate Servers with two BI Publisher installations.
    We need BIP to behave like-for-like compared to Actuate (clustered/load balanced), but I cannot find any detailed instructions on implementing load balancing/clustering of BIP anywhere.
    We use a Loadbalancer.org appliance to balance the Siebel URL - my plan is to use this to support load balancing of the Web Services across the two BIP instances. However, I also need the two BIP installations to share the same repository.
    I've come across an Oracle White Paper (http://www.oracle.com/technetwork/middleware/bi-publisher/bip-cluster-deployment-366859.pdf) that states:
    The repository is shared among different servers in the cluster
    However, looking at the installation guide for Oracle BIP 10.1.x.x.x, there are no details on how to achieve this.
    Anyone know where to find documentation explaining, in detail, how to load balance multiple BIP servers to achieve the same level of high availability as we had with Actuate?
    Thanks in advance!
    mroshaw

    http://e-docs.bea.com/wls/docs60/////adminguide/apache.html
              "Kit Chan" <[email protected]> wrote in message news:[email protected]..
              > After I have gone through the documentation of Administering the server and
              > Clustering, I still do not know where should I start with and what settings
              > I need to add. Is there any detailed document concerning setting up a
              > cluster with Apache plug-in and two WL6.0 servers?
              >
              > Can anyone please help?
              >
              > Thanks a lot.
              >
              > Kit Chan
              > [email protected]
              >
              >
              >
              

  • Oracle Web Cache - Load Balancer Mode

    Hello,
    I am facing a problem related to fault tolerance in Oracle Web Cache
    My environment:
    2 Nodes of Oracle Application Server 10.1.3 configured in cluster.
    I have configured Oracle Web Cache (in load balancer mode only) to simulate BigIP load balancing in front of these two OC4J nodes. This is intended as fault tolerance, so that if an HTTP request fails, the load balancer directs it to the second OC4J (its HTTP server).
    I have used the defaults recommended by the Oracle documentation for configuring a clustering load balancer with Oracle Web Cache, with JSESSIONID used for session binding with the oc4j type. I left all other cluster/server settings at their defaults.
    Problem:
    The problem is that when I shut down one OC4J instance, Oracle Web Cache tries to keep the session with the unavailable OC4J node instead of directing the session to the other OC4J node.
    If I choose to start a new session (losing current work), then it does work with the second (live) OC4J node, but then where is the fault tolerance?
    What do I need to do to keep working with the same session on the second OC4J node? My deployed web app is enabled to work in a cluster via the Oracle Application Server console. Here is its orion-application.xml:
    <?xml version="1.0"?>
    <orion-application xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://xmlns.oracle.com/oracleas/schema/orion-application-10_0.xsd" deployment-version="10.1.3.1.0" default-data-source="jdbc/OracleDS" component-classification="external"
    schema-major-version="10" schema-minor-version="0" >
        <web-module id="session_rep" path="session_rep.war" />
        <persistence path="persistence" />
        <principals path="principals.xml" />
        <jazn provider="XML" />
        <log>
            <file path="application.log" />
        </log>
        <cluster allow-colocation="false">
            <replication-policy trigger="onRequestEnd" scope="modifiedAttributes"/>
            <flow-control-policy enabled="false" />
            <protocol>
                <peer>
                    <opmn-discovery/>
                </peer>
            </protocol>
        </cluster>
    </orion-application>
    Will appreciate any help !

    Is OHS/Apache deployed between Web Cache and your OC4J? If so, I think that since OHS/Apache still looks alive to Web Cache, Web Cache keeps routing to it.
    Maybe in your script you can shut down both OHS/Apache and OC4J together, so Web Cache can route correctly (rough sketch below).
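    A rough sketch of what that scripted shutdown could look like with opmnctl (the OC4J instance name "home" is only a placeholder; substitute your own component names):
    # stop the OC4J instance and its HTTP Server together, so Web Cache marks the whole origin as down
    opmnctl stopproc process-type=home
    opmnctl stopproc ias-component=HTTP_Server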

  • Weblogic 10.3.6 - Load balancing - sending duplicate requests after 5 min

    Hello,
    We have configured clustering and load balancing for our integration solution as described at http://abhinavgupta3.blogspot.in/2012/02/osb-clustering-load-balancer.html
    But whenever a request goes through the load balancer and takes longer than 5 minutes (300 sec), the load balancer generates a new request, which gets redirected to another node.
    Because of this we end up in an ambiguous situation. From my reading of blogs, I found there is a setting that needs to be configured on the load balancer to avoid falling into this situation; it's a property called 'Idempotent'.
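    If the front end is the WebLogic proxy plug-in (Apache/OHS mod_wl), I believe the parameters in question look like the sketch below; WLIOTimeoutSecs defaults to 300 seconds, which matches the 5-minute resend, and a hardware balancer would have its own equivalent timeout/retry knobs. The host:port values are placeholders:
    <IfModule mod_weblogic.c>
        WebLogicCluster node1:7001,node2:7001
        # do not let the plug-in replay the request to another node on timeout
        Idempotent OFF
        # raise the read timeout beyond the longest expected response time
        WLIOTimeoutSecs 900
    </IfModule>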
    Please help me on this.
    Thanks,
    Satyendra

    "The datasource is created successfully from the console so there's no problem with the jar file."
    Did you also test the data source when you created it?
    - If yes, then WebLogic can indeed find derbyclient.jar, as the driver org.apache.derby.jdbc.ClientDriver is part of it.
    "I specified in the weblogic.xml that the antlr.* classes should be taken from the application. Also tried this option for org.apache.derby.* classes."
    Have you only defined a
    <prefer-application-packages>
        <package-name>antlr.*</package-name>
        <package-name>org.apache.derby.*</package-name>
    </prefer-application-packages>
    in weblogic.xml, or did you also turn on prefer-web-inf-classes?
    It is smart to start from scratch and delete all the jar files in all the directories. To make the derbyclient.jar part of the WebLogic classpath you can do the following:
    - edit setDomainEnv: When you open setDomainEnv there is an entry like "SET THE CLASSPATH". Just before - if [ "${JAVA_VENDOR}" != "BEA" ] ; then
    you can put something like: CLASSPATH=${CLASSPATH}${CLASSPATHSEP}location/derbyclient.jar
    If the class must be available on every server in a cluster, you can edit the commEnv file (located in the ${WL_HOME}/common/bin directory)
    Search for the entry "set up WebLogic Server's class path". And below the WEBLOGIC_CLASSPATH you can add something like:
    WEBLOGIC_CLASSPATH=${WEBLOGIC_CLASSPATH}${CLASSPATHSEP}location/derbyclient.jar
    Hope this helps you a little
