URL-Based Load Balancing

I'm having a difficult time trying to configure load balancing on my CSM based on the URL entered. Here is my scenario:
Two web servers (WebA & WebB), load balanced on a CSM. WebA & WebB have 90% the same content, so most traffic can be load balanced between them without a problem. The problem (for me anyway) comes in where WebA has certain web sites that WebB doesn't, and vice versa. So I need to load balance to both for 90% of the traffic, and point traffic to a particular server the other 10% of the time based on the URL entered.
Below is the test config I have so far (which doesn't work correctly). What I am trying for in this example is that any URL containing /vhosts/ or /programs/ be directed to WebA, any URL containing /platform/ or /ssl/ be directed to WebB, and all other traffic be load balanced evenly between the two. (For testing purposes the servers are being load balanced in "bridge mode"; in production they will be in "routed mode". I didn't want to go through the change controls to change the IP addresses for the test servers!)
module ContentSwitchingModule 2
 vlan 605 client
  ip address 10.63.240.4 255.255.255.0
  gateway 10.63.240.1
 vlan 606 server
  ip address 10.63.240.4 255.255.255.0
 natpool URL-POLICY-TEST 10.63.240.204 10.63.240.204 netmask 255.255.255.254
 map SRV-A url
  match protocol http url /vhosts/*
  match protocol http url /programs/*
 map SRV-B url
  match protocol http url /platform/*
  match protocol http url /ssl/*
 serverfarm URL-POLICY-TEST
  nat server
  nat client URL-POLICY-TEST
  real 10.40.109.100
   inservice
  real 10.40.109.101
   inservice
 serverfarm URL-TESTA
  nat server
  nat client URL-POLICY-TEST
  real 10.40.109.100
   inservice
 serverfarm URL-TESTB
  nat server
  nat client URL-POLICY-TEST
  real 10.40.109.101
   inservice
 policy TESTWEB-A
  url-map SRV-A
  serverfarm URL-TESTA
 policy TESTWEB-B
  url-map SRV-B
  serverfarm URL-TESTB
 vserver URL-POLICY_TEST
  virtual 10.63.240.10 tcp 0
  vlan 605
  serverfarm URL-POLICY-TEST
  sticky 1
  persistent rebalance
  slb-policy TESTWEB-A
  slb-policy TESTWEB-B
  inservice

Thanks for the reply Gilles....I've been out of the office for a while.
Well, right now nothing is working, except that all traffic is going to the default server farm assigned to the vserver. Here are the URLs I am testing with:
**************TEST A************
http://10.63.240.10/manual/vhosts/fd-limits.xml
http://10.63.240.10/manual/programs/apachectl.xml
**************TEST B************
http://10.63.240.10/manual/platform/ebcdic.xml
http://10.63.240.10/manual/ssl/ssl_compat.xml
***************BOTH****************
http://10.63.240.10/manual/howto/htaccess.xml
http://10.63.240.10/manual/howto/cgi.xml
When I try attaching to the first URL for example, here is the connection info (I trimmed it down so it will fit here):
MOSL1S1A#sh mod csm 2 real
real             server farm        Conns/hits
10.40.109.100    URL-POLICY-TEST    1
10.40.109.101    URL-POLICY-TEST    0
10.40.109.100    URL-TESTA          0
10.40.109.101    URL-TESTB          0
MOSL1S1A#
MOSL1S1A#sh mod csm 2 conn
     prot  vlan  source              destination
In   TCP   605   10.47.10.10:3738    10.63.240.10:80
Out  TCP   605   10.40.109.101:80    10.63.240.204:8820
I've tried changing the syntax of the URL statement in the map as follows (see the sketch after the list):
/manual/*
*/manual/*
/manual/
*manual*
/manual*
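
For reference, a leading-wildcard form of the two maps would look like this. The test URLs all live under /manual/, and an expression anchored at the start of the path (such as /vhosts/*) only matches URLs that begin with /vhosts/; this sketch alone may not be the whole answer, since */manual/* was also tried:
map SRV-A url
 match protocol http url */vhosts/*
 match protocol http url */programs/*
map SRV-B url
 match protocol http url */platform/*
 match protocol http url */ssl/*
It may also be worth checking whether the vserver needs a specific port (e.g. virtual 10.63.240.10 tcp www) rather than tcp 0 for the Layer 7 policies to be applied.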

Similar Messages

  • Session based load balance + Prepared statements

    Experts,
    From the docs I understand that there are 3 load balancing techniques. One is client side and two are server side. Of the two, one is session-count-based load balancing, and per the docs it is the recommended setting for connection pools.
    My question is: if I have prepared statements originally created using a connection to node1, and the listener redirects the connection to another node, node2, will the prepared statements work on node2?
    Thanks
    Vissu

    Just to clarify, the question is:
    Are the prepared statements usable when we use session-count-based load balancing?

  • Health based load balancing.

    I know that RM can provide health-based load balancing, e.g. RM will stop sending load to a WFE server if it is not healthy. We have an F5 load balancer; can't we get health-based load balancing using the F5?
    Regards Restless Spirit

    I think you can. You can specify the number of monitors that must report a pool member as being available before that member is defined as being in an up state (see the sketch below).
    Check this support article; it describes the different load balancing methods for a pool:
    Load Balancing pool
    Thanks -WS MCITP (SharePoint 2010, 2013) Blog: http://wscheema.com/blog
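
    As a rough tmsh sketch of that idea (the pool name, member addresses and choice of monitors are made up; the "min 1 of" clause is what sets how many monitors must report a member as up):
    create ltm pool wef_pool members add { 10.0.0.11:80 10.0.0.12:80 } monitor min 1 of { http tcp }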

  • Interesting ACE URL Header & Load-balance & SSL on 2 VIPs

    Hi There
    I have an interesting situation that I am trying to solve. I have 4 websites, each one with SSL Off-Loading on the ACE on the outside. All FOUR websites run on a single server on the inside, but each website is using a different port number for differentiation. Also, they are currently only available on TWO IPs on the outside! I know.....it's a mare!
    So, RSERVER = SERVER = 192.168.0.1
    Each website has SSL Certs on the outside. https://website1.abc.com - https://website4.abc.com
    But, DNS is only bound to 2 IPs on the outside, as that is all we have available currently, until we free up more IPs.
    OUTSIDE:
    website1.abc.com = 172.16.0.1:443
    website2.abc.com = 172.16.0.1:443
    website3.abc.com = 172.16.0.2:443
    website4.abc.com = 172.16.0.2:443
    On the server we have:
    INSIDE: 192.168.0.1
    SERVER:8001 = website1.abc.com
    SERVER:8002 = website2.abc.com
    SERVER:8003 = website3.abc.com
    SERVER:8004 = website4.abc.com
    So, in a nutshell what I need to do is:
    Terminate SSL for each website, then match the HTTP header, and pass it to the SERVER on the right port. Sounds easy enough.
    But I am struggling like hell. The VIPs (virtual IPs) on the OUTSIDE are causing me grief; my steps seem to be breaking my ruleset. Individually they all work, but once I tie them to the VIPs on the outside it seems to stop. The first site matched in each CM (class-map) within the PM (policy-map) works, but the subsequent site just breaks.
    I would post my config, but right now I have sooooooooooooo many variations, it looks like a dog's breakfast.
    Can anyone give advice on the process flow to follow to get this to work? My issue is around the VIPs mainly. To be honest, I don't really care about load balancing right now. That will come later when more servers are added to the mix. And then we might have to do inbound NAT too to the server farm, but that can wait! :-o
    I have created a HEADER map for the headers, individual SERVER FARMs for each port on the RSERVER, ACLs matching the VIPs inbound on 443, CLASS-MAPs matching the HEADER and applying them to the SFARMs, POLICY-MAPs matching the CMAPs and doing load balancing with SSL-PROXYs for the SSL headers, and a SERVICE-POLICY tying it all together on the interface.
    But .... things are going haywire.
    So, steps are:
    RSERVER
    SFARMs = RSERVER:PORTs
    ACLs = VIPs
    CMAP = HEADER = URL
    LB PMAP = HEADER CMAP & SFARM
    PMAP MULTIM = ACL CMAP + LB PMAP & SSL-Proxy
    SVC-POL = PMAP MULTIM
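
    To make that flow concrete, here is a stripped-down sketch for one outside VIP carrying two of the sites. The object names, VLAN number and cert/key file names are made up, and it assumes the cert presented on that VIP covers both host names (for example a SAN or wildcard cert), since a single ssl-proxy service is applied under the VIP class:
    rserver host WEB-SRV
      ip address 192.168.0.1
      inservice
    serverfarm host SF-SITE1
      rserver WEB-SRV 8001
        inservice
    serverfarm host SF-SITE2
      rserver WEB-SRV 8002
        inservice
    ssl-proxy service SSL-VIP1
      key vip1.key
      cert vip1.crt
    class-map match-all VIP1-HTTPS
      2 match virtual-address 172.16.0.1 tcp eq https
    class-map type http loadbalance match-all L7-SITE1
      2 match http header Host header-value "website1.abc.com"
    class-map type http loadbalance match-all L7-SITE2
      2 match http header Host header-value "website2.abc.com"
    policy-map type loadbalance first-match LB-VIP1
      class L7-SITE1
        serverfarm SF-SITE1
      class L7-SITE2
        serverfarm SF-SITE2
    policy-map multi-match OUTSIDE-IN
      class VIP1-HTTPS
        loadbalance vip inservice
        loadbalance policy LB-VIP1
        ssl-proxy server SSL-VIP1
    interface vlan 100
      service-policy input OUTSIDE-IN
      no shutdown
    The second VIP (172.16.0.2) would get its own virtual-address class-map, its own first-match policy for website3/website4 and its own ssl-proxy, referenced under an additional class line in the same multi-match policy.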

    Hi Surya
    Thanks for the prompt reply. I'm not quite sure what you mean when you say it can only handle 2 certs. Can you elaborate please?
    It would appear to me that you can actually only bind one cert to an IP, based on using a VIP address for the server farm as per the CM in the PM. I can hack out the irrelevant bits tomorrow and post what I have done thus far. I have played with multiple lines of code and various ways of trying to do this, but the end result is that it appears that once I have the CM set per VIP I can only set one SSL-proxy, and so only one cert. If I use multiple CMs, as per the multi-match policy, it matches the first CM against the VIP and doesn't appear to move on per the HTTP Host header. Does any of that make sense?
    regards
    Sent from Cisco Technical Support iPad App

  • Cookie based Load Balancing

    If 3 real servers in a non-load-balancing environment are setting session cookies with different cookie names, e.g.
    server1 response
    set-Cookie: SESSIDSAAAAAA=DMNNNELCECNCKDIIDCPOIMGG
    Server2 response
    set-Cookie: SESSIDSBBBBBB=DAAMMNELCECNCKPYTWPOIPOP
    Server3 response
    set-Cookie: SESSIDSCCCCCC=POHYTUOIPOPPLKJHTERIQOKJ
    then how can the CSM be configured for cookie-based stickiness?
    I tried cookie insert on the CSM with a NULL value assigned to "COOKIE_INSERT_EXPIRATION_DATE".
    It resulted in two Set-Cookie responses (one from the server and one from the CSM).
    I am wondering how the CSM will react (when cookie insert is used) if the client request carries two cookie name-value pairs.
    Clients are behind a megaproxy, so cookie-based stickiness is needed.
    Thanks

    If you look into an HTTP client request you will see that it often carries more than one cookie.
    The most important thing is to make sure the CSM inserts a cookie with a different name.
    Create your own name.
    The client will receive both the CSM cookie and the server cookie and will send both when opening a new connection.
    The CSM is able to locate its own cookie in the list and do the stickiness.
    Gilles.
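
    A minimal sketch of what Gilles describes on the CSM, with the sticky group number, cookie name, addresses and farm/vserver names all made up for illustration:
    sticky 20 cookie CSM-STICKY insert timeout 30
    serverfarm WEB-FARM
     real 10.1.1.11
      inservice
     real 10.1.1.12
      inservice
    vserver WEB-VIP
     virtual 10.1.1.100 tcp www
     serverfarm WEB-FARM
     sticky 30 group 20
     persistent rebalance
     inservice
    The CSM then adds its own Set-Cookie alongside the servers' session cookies and keys the stickiness off its own cookie name, as described above.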

  • IP source based Load balancing?

    Hi all;
    We encounter the following issue:
    A load balancer directs requests in a round-robin fashion to several servers. We want the load balancer to direct requests based on the source IP address, so that the same host is directed to the same server each time it reconnects. Is this possible using the CSM module, given that NAT is implemented?
    Regards

    Yes, this is possible with a source-IP sticky group:
    sticky 8 netmask 255.255.255.255 address source timeout 90
    vserver VAPP
     virtual 10.1.1.11 tcp 2514
     serverfarm SAPP
     sticky 90 group 8
     idle 5400
     persistent rebalance
     inservice
    This should make the sessions stick to the same real server per client source IP.

  • Rv042 dual-wan threshold based load balance?

    I have an RV042 (it's old, silver/dark grey plastic front one) w/ firmware 1.3.13.02-tm.
    The reason we bought this (long ago) was to balance two WAN connections, one with unlimited data and one capped monthly.  It did that once, but for a couple years both connections have been unmetered so it's just been balancing them 50/50.  As of today one WAN connection (the new much faster one) is back to being metered but I can't figure out how to configure the RV042 as it once was to prefer sending traffic over the slow, unmetered connection first, and only use the faster metered connection when necessary.
    It's been a long time and honestly I only vaguely remember the ability to prioritize a connection based on % of bandwidth used so that all traffic would go over the unlimited connection 1st until it was flooded, and only then fall over to the metered connection.  This is totally different than the weighted round robin, or smart link backup.
    I found this third-party forum post that supports that vague memory and suggests this was eliminated between firmware 1.23 and 1.3:
    http://www.linksysinfo.org/index.php?threads/rv042-load-balancing-options-from-the-manual-where-to-find.15512/#post-69948
    So I humbly ask: is it possible to replicate this functionality with the current firmware? If so, how? If not, how do I roll back to firmware 1.23?
    It sounded like perhaps I could assign WAN1 a bandwidth of 100000 (even though it's really 1500) and then assign WAN2 a bandwidth of 1 (even though it's really 20000), and the result might be the prioritization I'm looking to achieve... but I feel like I'm stumbling in the dark at this point.
    Just FYI, I'm not at all opposed to buying new hardware to achieve this if it's not terribly expensive (i.e. < $200). I'd rather not, but I've got to solve this quickly.

    Hi Jon,
    I Also have one of these routers.
    On the bottom mine says (v02) which means its hardware version is 2.
    I just got this one brand new for home as I have been using them for a very long time now. However I have been using them for VPN and now I am needing the same functionality as you.
    I am currently running Firmware Version: 1.3.12.19-tm
    If you login to the web management (eg 192.168.1.1) and go to System Management > Dual-WAN
    Down the bottom you will see "Protocol Binding".
    This is all I know of to send specific ports or applications via a specific WAN.
    I'll give you an example of how I am using it currently (BTW it seems to be working OK, but you are on a higher firmware).
    E.g.: WAN1 is more reliable than WAN2, which is a cheap unlimited service.
    So I bind port 5060 (SIP), port 80 (HTTP) and port 443 (HTTPS) to WAN1 so that my VoIP phone is on the good service, and so is all web traffic,
    while all the other stuff can use the unlimited connection.
    Also, My current bandwidth settings are
    WAN          UPSTREAM          DOWNSTREAM
    1                384                       8000
    2                384                       10000
    And Under: System Management > Bandwidth Management you can also prioritize those ports.
    This may help you in some way, so maybe you can help me:
    Your post has made me not want to upgrade the firmware. Can you please confirm that this functionality still exists?
    Thanks

  • Two gateways, port-based load balancing

    Hello,
    I have a simple question on Mac OS X Leopard/SL Server regarding the use of 2 distinct internet connections on a single LAN.
    Gateway #1 : 10.0.1.1 (delivering IPs) - 18 mbps
    Gateway #2 : 10.0.1.254 - 4 mbps
    Any computer accessing the network is delivered an IP by the DHCP server (10.0.1.1) and thus uses #1 as its main gateway.
    The main server (10.0.1.16) is running DNS services and a Squid proxy-cache.
    Now, is it possible to set up all the computers that connect to the network so that they use the main server as their main gateway and have their requests redirected to #1 or #2 according to the port in use?
    For example:
    mail,http,https,jabber -> #1
    skype,rtsp,... -> #2
    Thank you very much for your help
    Tha

    is it possible to set up all the computers that connect to the network so that they use the main server as their main gateway and have their requests redirected to #1 or #2 according to the port in use?
    No. Routing is based on destination IP address, not port.
    Therefore each client will send all traffic for a specific address to a specific router address. It doesn't matter whether it's talking HTTP, SMTP, IMAP, POP, AIM, or any other protocol - any traffic for that IP will go to the same router.
    You have three ways of getting around this.
    One is to install a router that supports dual WAN connections. Point all internal clients to the LAN address of the router and let it do the work of routing the traffic as needed, based on its routing policies (routers may be able to route based on port).
    Option two is to set up a proxy server for specific services - for example, you could set up an HTTP/HTTPS proxy server on a machine that has router #1 as its default gateway and configure the clients to talk to router #2. All traffic on the clients will go over router #2 except the proxied traffic, which will go to the proxy and then out via router #1.
    This is relatively simple to set up, but is limited to traffic that can be easily proxied (e.g. that probably excludes email).
    The third option is static routing. Look at the servers each machine is contacting and set up static routes for the smaller set of addresses. For example, if you're only splitting off traffic to Skype's servers, then set each client with a default route of router #1 and static routes to Skype's servers via router #2. Now all traffic except that to Skype will use router #1 (see the example at the end of this reply).
    This is really only viable if you have a relatively small number of destination addresses you're trying to divert. That's why it works well for Skype (single server address), but wouldn't work well for something more generic such as 'web traffic' since you cannot predict which web servers (and therefore which IP addresses) need static routes.
    Of the three options, only option #1 will cover all protocols for all clients, but it's also the only option that costs $$s if your current router doesn't support multiple WAN interfaces.
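
    As an illustration of option three, a per-client static route on Mac OS X could look like the line below; 198.51.100.10 is a hypothetical destination standing in for the service you want to divert, and 10.0.1.254 is gateway #2 from the question. Routes added this way do not survive a reboot, so they would need to be re-added by a startup script:
    sudo route -n add -host 198.51.100.10 10.0.1.254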

  • CSS Load Balancing with Cookies

    We are trying to load balance 2 backend servers hosted on WebSphere with the advanced-balance cookies method.
    Restrictions
    ServerA is unable to accept cookies generated from ServerB.
    ServerA and ServerB are generating random cookies
    Unable to modify cookie string with a constant.
    How can we load balance based on cookies considering the above restrictions?
    We have attempted to do hash based load balancing with cookies but the problem we run into is the servers do not accept cookies generated from another server.
    The configuration we tried is written below:
    service ServerA
      ip address 192.168.10.2
      keepalive type tcp
      keepalive port 80
      active
    service ServerB
      ip address 192.168.20.2
      keepalive type tcp
      keepalive port 80
      active
    content ABC
      url "/*"
      add service ServerA
      string prefix "JSESSIONID="
      advanced-balance cookies
      port 80
      add service ServerB
      string skip-length 5
      string process-length 16
      string operation hash-xor
      protocol tcp
      vip address 172.16.32.1
      active
    Can we change the string prefix to JSESSION instead of JSESSIONID= ?
    The only place the app guys can add a constant string to match on is before the = sign.
    Is it possible for the CSS to match on a constant string before the = sign, e.g. as below:
    service ServerA
      ip address 192.168.10.2
      keepalive type tcp
      keepalive port 80
      string id567=
      active
    service ServerB
      ip address 192.168.20.2
      keepalive type tcp
      keepalive port 80
      string id123=
      active
    content ABC
      url "/*"
      add service ServerA
      string prefix "JSESSION"
      advanced-balance cookies
      port 80
      add service ServerB
      string skip-length 0
      string process-length 6
      protocol tcp
      vip address 172.16.32.1
      active

    It should work.
    There is no reason for it not to work...
    This is the best method you can have on the CSS for stickiness.
    Get a sniffer trace on the client and server with arrowpoint cookie configured on the CSS (a sample content rule is sketched at the end of this reply) and capture a failure so we can see what is going on.
    Also send me the config so I can verify everything is OK.
    If you have a service request open with the TAC, you can also give the SR # so I can review what has been done.
    Gilles.
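
    For reference, the arrowpoint-cookie variant Gilles mentions would look roughly like this on the content rule (service definitions unchanged from the question). With advanced-balance arrowpoint-cookie the CSS generates and tracks its own cookie, so it does not depend on the servers accepting each other's JSESSIONID values:
    content ABC
      vip address 172.16.32.1
      protocol tcp
      port 80
      url "/*"
      add service ServerA
      add service ServerB
      advanced-balance arrowpoint-cookie
      active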

  • Sticky load balancing across 2 ports with cookies

    Hi,
    I have a server configuration where I have 1 top level Apache server that deals with SSL termination (and handles static content) and proxy passes dynamic content onto 2 Tomcat servers on 2 ports, one for http requests (9001) and one for the requests that were secure, but have now been un-encrypted by Apache (9002).  My 2 Tomcat servers are load balanced using a CSS and I need this load balancing to stick to the tomcat servers regardless of port so that the user is stuck to the same Tomcat server for their entire session. 
    I would like to use arrowpoint cookies to perform this stickiness, but the documentation suggests that arrowpoint cookie load balancing (in fact any cookie-based load balancing) requires the port to be specified in the content rule. Is this correct? Is my only option to use the source IP for stickiness? I don't understand why the port should be required if the stickiness is via a cookie. Can I not simply configure my 2 Tomcat servers as services with no port and add a single content rule that load balances these services using arrowpoint-cookie advanced balancing?
    service tomcat1
      ip address x.x.x.x
      active
    service tomcat2
      ip address x.x.x.x
      active
    owner me
       content sticky
         vip address x.x.x.x
         protocol tcp
         url "/*"
         add service tomcat1
         add service tomcat2
         advanced-balance arrowpoint-cookie
         active

    Angela-
    The issue with port is that cookies are very specifically HTTP-only, and the CSS has no way of knowing what protocol will hit a VIP prior to trying to parse it as HTTP. Your issue is actually a bit clearer than it initially appears - you can still use 2 different rules by using the configuration below.
    However, you might be headed for a headache if you don't implicitly control the client's actions. By default, browsers don't generally send cookies cross-protocol and definitely not cross-domain. Use something like HttpWatch or IEWatch to check out the headers your client sends to your site. Make sure that when the 200 OK arrives with the Set-Cookie, the client sends that cookie in all subsequent requests, both HTTP and HTTPS.
    service tomcat1
      string "tomcat1"
      ip address x.x.x.x
      active
    service tomcat2
      string "tomcat2"
      ip address x.x.x.x
      active
    owner me
       content sticky9001
         vip address x.x.x.x
         protocol tcp
         url "/*"
         port 9001
         add service tomcat1
         add service tomcat2
         advanced-balance arrowpoint-cookie
         active
       content sticky9002
         vip address x.x.x.x
         protocol tcp
         url "/*"
         port 9002
         add service tomcat1
         add service tomcat2
         advanced-balance arrowpoint-cookie
         active
    With this configuration, the CSS will use the "string" as the cookie value. So if the client were to receive Set-Cookie: ArrowpointCookie=tomcat1, it should use it for either rule and end up on tomcat1 when accessing either VIP.
    Regards,
    Chris

  • Recommended configuration for load balanced Portal with load balancer, multiple gateways and multiple servers.

    Does anyone have a recommended network, hardware and software configuration guide for a Portal installation running with multiple gateways load balanced (ie one URL) that talk to multiple servers?

    David,
    We've used Resonate (software) to load balance the gateways. It allows you to group all the gateways under one virtual URL and load balance the incoming connections over each gateway depending on the rules that you define in Resonate. Look in the Sun Portal whitepapers; there is one that talks about it specifically.
    As far as load balancing the calls to the portals, the gateways will automatically load balance across all the portals that they know about using a simple round-robin rotation. You may be able to use Resonate in front of the portals, but you may need to activate persistence within Resonate to ensure that the user always ends up on the portal where he established his initial connection (if you want that); check with Sun on this one.

  • How to change the OraSSO login link in webcache/load balance

    Hi
    we have 10gAS R1 installed as a Portal instance. We have a 6-server setup:
    load balancer => webcache as load balancer (listening on port 80)
    webcache1 and webcache2 => webcache (listening on port 7777)
    portal1 and portal2 => Portal (listening on 7778)
    infra => Infrastructure with repository Portal/Oracle SSO (listening on 7777)
    This setup is working fine for our intranet. Now we need to open it up for a couple of external clients. Initially we need to open port 80 on the load balancer server for the external team to access; it works fine when we make it public access.
    Now we need to make it SSO (SiteMinder) enabled: when users click on the login link, it first goes to Oracle SSO and then internally redirects the page to the SiteMinder SSO.
    I have noted that the SSO server details are specified in the Global Settings SSO/OID details. Since we need to open this for an external client, we have to add a DNS entry so that we can allow access through the firewall.
    I have made the DNS name change at my infra server level; now I need to update the change at the load balancer server (where webcache is running).
    Does anyone know how to change the URL at the load balancer?
    I am stuck at this point; please suggest how I should proceed.
    Thanks,

    Extract from Personalization Guide - Page Footer - Personalization Considerations
    * If you wish to personalize the URL that points to the Privacy Statement for a page that displays a standard Copyright and Privacy (that is, its Auto Footer property is set to true), set the Scope to OA Footer, in the Choose Personalization Context page of the Personalization UI.
    * If you wish to personalize the URL that points to the Privacy Statement for a page that displays a custom Copyright and Privacy (that is, its Auto Footer property is set to false), set the Scope to Page in the Choose Personalization Context page of the Personalization UI. In the following Page Hierarchy Personalization page , identify and personalize the Privacy page element.

  • Lync 2013 Enterprise load balancing on the front end and edge pool

    Hi,
    I am setting up a Lync 2013 Enterprise deployment consisting of a Front End pool (2 FE servers) and an Edge pool (2 Edge servers). I'm seeing some conflicting advice regarding load balancing using hardware or DNS for the front end and the edge.
    On the front end I have 2 internal DNS records for 'lyncfepool1.contoso.local', each of which maps to one of the IPs of the FE servers. I've used my details to populate the Detailed Design Planner Excel spreadsheet and am told that I require an HLB to load balance my front end pool. I'm aware of the need to load balance HTTPS traffic internally (which will be done by TMG); however, can other traffic to the front end (SIP, etc.) be balanced by DNS only, without requiring an HLB?
    Can someone clarify the front end requirement?
    Also, looking now at the edge pool: this site again has two edge servers in a pool. We are using a total of six private IP addresses, two per edge service (2 x av.contoso.com, 2 x sip.contoso.com and 2 x webcon.contoso.com). These will be NAT'ed by the external firewall and directed to the respective external (DMZ) IP addresses on the Edge servers on port 443. I know this isn't true round robin due to the intelligence of the Lync client when connecting (in that the Lync client will connect to one of the public IPs and, if it can't connect, it will know to connect to the other service IP); however, I want to clarify this setup, particularly the need to direct the external public IP traffic at the DMZ Edge IP specified in the topology builder.
    I've attached a basic diagram of the external/DMZ/Edge side which hopefully helps with this question.
    Persevere, Persevere, Per..

    That is because you will always need an HLB for a Front End server, since it hosts the Lync web services, which use HTTP/HTTPS traffic.
    The description on the calculation tool also describes this correctly:
    Supports Standard and Enterprise pools (up to 12 nodes), with pure device-based load balancing or a combination of DNS load balancing and device-based load balancing (for Lync web services)
    You can use either hardware or DNS load balancing for SIP traffic only, but you will always need an HLB for the web services. Both are applicable for the Front End, so you have either:
    full HLB for both SIP and HTTP(S) traffic
    DNS LB for SIP traffic and HLB for HTTP(S) traffic (a DNS sketch follows at the end of this reply)
    Hope this is more clear :-)
    Lync Server MVP | MCITP Lync Server 2010
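
    For illustration, DNS load balancing on the SIP side just means publishing one internal A record per Front End server against the pool FQDN, roughly as below (the pool name is from the question; the addresses are made up). The web services FQDNs, by contrast, point at the HLB (TMG) VIP:
    lyncfepool1.contoso.local.    IN A    10.10.10.11
    lyncfepool1.contoso.local.    IN A    10.10.10.12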

  • Load balancing on iplanet

    What are the capabilities of the load balancing plug-in "Resonate Command Module" for iPlanet? Is this the only load balancing feature that iPlanet provides?
    Also, does anyone know of other reasonably priced load balancing solutions for the iPlanet web server? Preferably software based only (no load balancing hardware).
    I looked at the documentation and deduced that "Resonate" alone cannot provide load balancing for iPlanet web servers.
    TIA,
    Mo

    Actually, Resonate is a software-based load balancing solution. The confusion is that it doesn't come with the iPlanet web server. The command module is only an interface to Resonate that tells it how to react to iPlanet internals. So you would still need to buy Resonate Central Dispatch for this to be of any use to you. Resonate software is expensive; expect to pay about 60k to start, last I checked.
    There aren't that many other software load balancers that are cheap. I found one open source thing called pen. Check out http://siag.nu/pen
    It's really not bad. It supports sticky load balancing and can actually work for any simple TCP protocol: HTTP, HTTPS, LDAP, IMAP, POP, etc.
    The only thing is that you need to set up your own failover for pen, so that if the pen process goes down another machine will service the requests.
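
    For example, a minimal pen invocation balancing two hypothetical backends is shown below; by default pen maps clients to backends by source address, so returning clients keep hitting the same server, and the failover of the pen process itself still has to be arranged separately as noted above:
    pen 80 web1.example.com:80 web2.example.com:80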

  • Virtual Hosts for Load Balancing

    Hi all,
    So I have two identical servers (RHEL 4) that have SOA Suite installed, pointing to the same database. I have configured the mod_oc4j.conf file for round robin, and then I created a virtual host in the httpd.conf file to point to the load balancing hardware.
    It worked, but with a slight issue. The load balancer is orasoaqa.tmpw.net:7777, which opens the main Application Server page. However, when I click on BPEL Console or ESB, it continues using the orasoaqa.tmpw.net:7777 virtual host, which, of course, breaks even trying to log in.
    What am I missing here? Here's the entry I added to mod_oc4j.conf:
    Oc4jSelectMethod roundrobin:local
    Here's the entry I added to the httpd.conf on server 1:
    Port 7777
    Listen 7777
    NameVirtualHost XXX.XXX.XXX.XXX:7777
    <VirtualHost orasoaqaapp101.ma.tmpw.net:7777>
    ServerName orasoaqa.tmpw.net
    ServerAlias orasoaqaapp101.ma.tmpw.net
    Port 7777
    </VirtualHost>
    and on server 2:
    Port 7777
    Listen 7777
    NameVirtualHost XXX.XXX.XXX.XXX:7777
    <VirtualHost orasoaqaapp102.ma.tmpw.net:7777>
    ServerName orasoaqa.tmpw.net
    ServerAlias orasoaqaapp102.ma.tmpw.net
    Port 7777
    </VirtualHost>
    Yes, the port for Application Server and the port on the Load Balancer are exactly the same, I hope this isn't the issue.
    Any help would be greatly appreciated.
    Thanks!

    Seeing that you have an external load balancer available, you really should not be trying software load balancing. Windows NLB is software-based load balancing and is not really dependent on the physical/virtual hosting of the OS. There are certain gotchas with the NLB modes, since they depend on network protocols such as ARP, propagation of MAC addresses, and usage of the MAC in the L2 forwarding tables of network devices.
    You should take the help of the cloud service provider, as only they would have an understanding of their underlying networking infrastructure, such as VLANs, addressing, routing rules and so on.
    You could also refer to the VMware support forum for KBs such as
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006580
    Most importantly, NLB is a base-OS-related service and you're more likely to get a response in the Windows Server forum. In the specific case of BizTalk, Windows NLB is only used for hosting the out-of-process services such as those associated with HTTP/S receive, orchestrations published as schemas, web services (ASMX) and/or WCF endpoints.
    Regards.
