Session-based load balancing + prepared statements

Experts,
From the docs I understand that there are three load balancing techniques: one is client-side and two are server-side. Of the two server-side techniques, one is session-count-based load balancing, which the docs recommend for connection pool settings.
My question: if I have prepared statements originally created on a connection to node1, and the listener redirects the connection to another node (node2), will those prepared statements still work on node2?
Thanks
Vissu

Just to clarify, the question is:
Are prepared statements still usable when we use session-count-based load balancing?
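For reference, the session-count (connection) load balancing goal is a property of the database service itself. A minimal sketch of how such a service might be configured for connection pools (hypothetical database and service names; the exact srvctl flag spelling varies by Oracle release):

srvctl modify service -db orcl -service oltp_pool -clbgoal LONG
srvctl config service -db orcl -service oltp_pool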

Similar Messages

  • Health based load balancing.

    I know that RM can provide health-based load balancing, e.g. RM will stop sending load to a WFE server if it is not healthy. We have an F5 load balancer; can't we get health-based load balancing using the F5?
    Regards Restless Spirit

    I think you can. You can specify the number of monitors that must report a pool member as being available before that member is defined as being in an up state.
    Check this support article, which describes the different methods of load balancing:
    Load Balancing pool
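    For what it's worth, a minimal tmsh sketch of the monitor-count idea described above (hypothetical object names and member addresses; exact syntax can vary by TMOS version):
    create ltm monitor http wfe_http_mon defaults-from http
    create ltm pool sp_wfe_pool members add { 10.0.0.11:80 10.0.0.12:80 } monitor min 1 of { wfe_http_mon gateway_icmp }
    The "min 1 of" clause is what expresses how many monitors must report the member as available before it is marked up.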

  • Sticky sessions and Load Balancing in WL Clusters

    We are using iPlanet Web Server 4.1 with WebLogic App Server; and would like
    to implement load balancing with sticky sessions and in-memory state
    replication.
    The documentation in Weblogic says that -
    When using in-memory state replication, your WebLogic Server Cluster must
    live behind one or more proxy servers. The proxy servers are smart enough to
    send servlet requests, belonging to the same HTTP session, back to the same
    server in the cluster that holds the session data.
    (Ref: http://www.weblogic.com/docs51/cluster/setup.html)
    Does this mean that the sticky session configuration has to be done on the
    iPlanet Web Server itself ?
    Also, if WebLogic is used as the Web server, does WebLogic provide any
    support for sticky sessions?
    Any help, suggestions or links to useful info are welcome.
    Regards,
    Milind.

    Mike,
    I'm curious as to why you would recommend using WebLogic as a web server in 6.1.
    I would not, for the following reasons:
    - it costs 10x more per CPU (list price)
    - it doesn't support hardware accelerator cards (AFAIK; please let me know if this has changed)
    iPlanet is really good at serving up static HTML and GIFs, especially over SSL if you have a hardware accelerator card. So if you have a site with lots of graphics and you use SSL a lot, I think it's still a better solution.
    -Joel
    Mike Reiche wrote:
    You get sticky round-robin by default.
    You need to have session tracking turned on (I think it is on by default), and you need to have the WL plugin configured in iPlanet.
    When WL creates an HttpSession, it writes a cookie (or rewrites the URL) back to the browser. On subsequent requests, the browser sends the cookie, and the iPlanet plug-in directs the request to the correct WL instance based on the IP address of the WL server embedded in the cookie.
    If you are using WLS 6.1, I would recommend using it as a web server (and not using iPlanet). I imagine that it supports sticky load balancing as well.
    Mike
    Joel Nylund <[email protected]> wrote:
    You get round robin by default; if you want a different scheme you can use one of the other three options (weight, random, or parameter). I think weight can be set in the weblogic properties. I haven't used any scheme other than round robin.
    -Joel

  • Cookie based Load Balancing

    If 3 real servers in a non-load-balanced environment are setting session cookies with different cookie names, e.g.
    Server1 response
    Set-Cookie: SESSIDSAAAAAA=DMNNNELCECNCKDIIDCPOIMGG
    Server2 response
    Set-Cookie: SESSIDSBBBBBB=DAAMMNELCECNCKPYTWPOIPOP
    Server3 response
    Set-Cookie: SESSIDSCCCCCC=POHYTUOIPOPPLKJHTERIQOKJ
    then how can the CSM be configured with cookie-based stickiness?
    I tried cookie insert on the CSM with a NULL value assigned to "COOKIE_INSERT_EXPIRATION_DATE".
    It resulted in two Set-Cookie responses (one from the server and one from the CSM).
    I am wondering how the CSM will react (when cookie insert is used) if the client request carries two cookie name-value pairs.
    The clients are behind a megaproxy, so cookie-based stickiness is needed.
    Thanks

    If you look into an HTTP client request you will see that there is often more than one cookie.
    The most important thing is to make sure the CSM inserts a cookie with a different name.
    Create your own name.
    The client will receive both the CSM cookie and the server cookie and will send both when opening a new connection.
    The CSM is able to locate its own cookie in the list and do the stickiness.
    Gilles.
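    A minimal sketch of what that can look like on the CSM (VIP, names, and sticky-group numbers are hypothetical; check the cookie-insert syntax against your CSM release):
    module ContentSwitchingModule 2
     sticky 5 cookie CSM-STICKY insert
     vserver WEB-VIP
      virtual 10.1.1.10 tcp www
      serverfarm WEBFARM
      sticky 30 group 5
      persistent rebalance
      inservice
    Here the CSM inserts its own cookie (CSM-STICKY), so it never has to parse the three different server-side session cookies.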

  • IP source based Load balancing?

    Hi all;
    We encounter the following issue:
    A load balancer directs requests in a round-robin fashion to several servers. We want the load balancer to direct requests based on the source IP address, so that the same host is directed to the same server each time it reconnects. Is this possible when using the CSM module, given that NAT is implemented?
    Regards

    Yes, this is possible with a configuration along these lines:
    vserver VAPP
     virtual 10.1.1.11 tcp 2514
     serverfarm SAPP
     sticky 90 group 8
     idle 5400
     persistent rebalance
     inservice
    sticky 8 netmask 255.255.255.255 address source timeout 90
    This should make the sessions sticky based on the source IP.

  • URL-Based Load Balancing

    I'm having a difficult time trying to configure load balancing on my CSM based on the URL entered. Here is my scenario:
    Two web servers (WebA & WebB), load balanced on a CSM. WebA & WebB have 90% the same content, so most traffic can be load balanced between them without a problem. The problem (for me anyway) comes in where WebA has certain web sites that WebB doesn't, and vice versa. So I need to load balance between both for 90% of the traffic, and point traffic to a particular server the other 10% of the time based on the URL entered.
    Below is the test config I have so far (which doesn't work correctly). What I am trying for in this example is that any URL that contains /vhosts/ or /programs/ is directed to WebA, any URL that contains /platform/ or /ssl/ is directed to WebB, and all other traffic is load balanced between the two evenly. (For testing purposes, the servers are being load balanced in "bridge mode"; in production they will be in "routed mode". I didn't want to go through the change controls to change the IP addresses for the test servers!)
    module ContentSwitchingModule 2
     vlan 605 client
      ip address 10.63.240.4 255.255.255.0
      gateway 10.63.240.1
     vlan 606 server
      ip address 10.63.240.4 255.255.255.0
     natpool URL-POLICY-TEST 10.63.240.204 10.63.240.204 netmask 255.255.255.254
     map SRV-A url
      match protocol http url /vhosts/*
      match protocol http url /programs/*
     map SRV-B url
      match protocol http url /platform/*
      match protocol http url /ssl/*
     serverfarm URL-POLICY-TEST
      nat server
      nat client URL-POLICY-TEST
      real 10.40.109.100
       inservice
      real 10.40.109.101
       inservice
     serverfarm URL-TESTA
      nat server
      nat client URL-POLICY-TEST
      real 10.40.109.100
       inservice
     serverfarm URL-TESTB
      nat server
      nat client URL-POLICY-TEST
      real 10.40.109.101
       inservice
     policy TESTWEB-A
      url-map SRV-A
      serverfarm URL-TESTA
     policy TESTWEB-B
      url-map SRV-B
      serverfarm URL-TESTB
     vserver URL-POLICY_TEST
      virtual 10.63.240.10 tcp 0
      vlan 605
      serverfarm URL-POLICY-TEST
      sticky 1
      persistent rebalance
      slb-policy TESTWEB-A
      slb-policy TESTWEB-B
      inservice

    Thanks for the reply, Gilles... I've been out of the office for a while.
    Well, right now nothing is working, except that all traffic is going to the default server farm assigned to the vserver. Here are the URLs I am testing with:
    **************TEST A************
    http://10.63.240.10/manual/vhosts/fd-limits.xml
    http://10.63.240.10/manual/programs/apachectl.xml
    **************TEST B************
    http://10.63.240.10/manual/platform/ebcdic.xml
    http://10.63.240.10/manual/ssl/ssl_compat.xml
    ***************BOTH****************
    http://10.63.240.10/manual/howto/htaccess.xml
    http://10.63.240.10/manual/howto/cgi.xml
    When I try attaching to the first URL for example, here is the connection info (I trimmed it down so it will fit here):
    MOSL1S1A#sh mod csm 2 real
     real            server farm       Conns/hits
     10.40.109.100   URL-POLICY-TEST   1
     10.40.109.101   URL-POLICY-TEST   0
     10.40.109.100   URL-TESTA         0
     10.40.109.101   URL-TESTB         0
    MOSL1S1A#
    MOSL1S1A#sh mod csm 2 conn
         prot  vlan  source             destination
     In  TCP   605   10.47.10.10:3738   10.63.240.10:80
     Out TCP   605   10.40.109.101:80   10.63.240.204:8820
    I've tried changing the syntax on the URL statement in the map as such:
    /manual/*
    */manual/*
    /manual/
    *manual*
    /manual*
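    One thing that may be worth checking (an assumption on my part, not something confirmed in this thread): if the CSM compares the map expression against the request path from its first character, then test URLs under /manual/ would never match /vhosts/* or /platform/*, and everything would fall through to the default serverfarm, which matches what you are seeing. A sketch of maps that include the full prefix:
    map SRV-A url
     match protocol http url /manual/vhosts/*
     match protocol http url /manual/programs/*
    map SRV-B url
     match protocol http url /manual/platform/*
     match protocol http url /manual/ssl/*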

  • Rv042 dual-wan threshold based load balance?

    I have an RV042 (it's the old silver/dark grey plastic-front one) with firmware 1.3.13.02-tm.
    The reason we bought this (long ago) was to balance two WAN connections, one with unlimited data and one capped monthly. It did that once, but for a couple of years both connections have been unmetered, so it's just been balancing them 50/50. As of today one WAN connection (the new, much faster one) is back to being metered, but I can't figure out how to configure the RV042 as it once was: to prefer sending traffic over the slow, unmetered connection first, and only use the faster metered connection when necessary.
    It's been a long time, and honestly I only vaguely remember the ability to prioritize a connection based on the percentage of bandwidth used, so that all traffic would go over the unlimited connection first until it was saturated, and only then spill over to the metered connection. This is totally different from weighted round robin or smart link backup.
    I found this third-party forum post that supports that vague memory and suggests this feature was eliminated between firmware 1.23 and 1.3:
    http://www.linksysinfo.org/index.php?threads/rv042-load-balancing-options-from-the-manual-where-to-find.15512/#post-69948
    So I humbly ask: is it possible to replicate this functionality with the current firmware? If so, how? If not, how do I roll back to firmware 1.23?
    It sounded like perhaps I could assign WAN1 a bandwidth of 100000 (even though it's really 1500) and then assign WAN2 a bandwidth of 1 (even though it's really 20000), and the result might be the prioritization I'm looking to achieve... but I feel like I'm stumbling in the dark at this point.
    Just FYI, I'm not at all opposed to buying new hardware to achieve this if it's not terribly expensive (i.e. <$200). I'd rather not, but I've got to solve this quickly.

    Hi Jon,
    I also have one of these routers.
    On the bottom mine says (v02), which means its hardware version is 2.
    I just got this one brand new for home, as I have been using them for a very long time. However, I have been using them for VPN, and now I need the same functionality as you.
    I am currently running Firmware Version: 1.3.12.19-tm
    If you log in to the web management interface (e.g. 192.168.1.1) and go to System Management > Dual-WAN, down at the bottom you will see "Protocol Binding".
    This is all I know of to send specific ports or applications via a specific WAN.
    I'll give you an example of how I am using it currently (BTW, it seems to be working OK, but you are on a higher firmware):
    e.g. WAN1 is more reliable than WAN2, which is a cheap unlimited service.
    So I bind port 5060 (SIP), port 80 (HTTP) and port 443 (HTTPS) to WAN1, so that my VoIP phone is on the good service and so is all web traffic,
    and all the other stuff can use the unlimited connection.
    Also, My current bandwidth settings are
    WAN   Upstream   Downstream
    1     384        8000
    2     384        10000
    And Under: System Management > Bandwidth Management you can also prioritize those ports.
    This may help you in some way. So maybe you can help me:
    your post has made me not want to upgrade the firmware. Can you please confirm that this functionality still exists?
    Thanks

  • Two gateways, port-based load balancing

    Hello,
    I have a simple question on Mac OS X Leopard/SL Server regarding the use of 2 distinct internet connections on a single LAN.
    Gateway #1 : 10.0.1.1 (delivering IPs) - 18 mbps
    Gateway #2 : 10.0.1.254 - 4 mbps
    Any computer accessing the network is delivered an IP by the DHCP server (10.0.1.1), and thus uses #1 as its main gateway.
    The main server (10.0.1.16) is running DNS services and a Squid proxy-cache.
    Now, is it possible to set up all the computers that connect to the network so that they use the main server as their main gateway and have their requests redirected to #1 or #2 according to the port in use?
    For example:
    mail,http,https,jabber -> #1
    skype,rtsp,... -> #2
    Thank you very much for your help
    Tha

    Is it possible to set up all the computers that connect to the network so that they use the main server as their main gateway and have their requests redirected to #1 or #2 according to the port in use?
    No. Routing is based on destination IP address, not port.
    Therefore each client will send all traffic for a specific address to a specific router address. It doesn't matter whether it's talking HTTP, SMTP, IMAP, POP, AIM, or any other protocol - any traffic for that IP will go to the same router.
    You have three ways of getting around this.
    One is to install a router that supports dual WAN connections. Point all internal clients to the LAN address of the router and let it do the work of routing the traffic as needed, based on its routing policies (routers may be able to route based on port).
    Option two is to set up a proxy server for specific services - for example you could set up an HTTP/HTTPS proxy server on a machine that has router #1 as its default gateway, and configure the clients to use router #2. All traffic from the clients will go over router #2 except the proxied traffic, which will go to the proxy and then out via router #1.
    This is relatively simple to set up, but is limited to traffic that can be easily proxied (e.g. that probably excludes email).
    The third option is static routing. Look at the servers each machine is contacting and set up static routes for the smaller set of addresses. For example, if you're only splitting off traffic to Skype's servers, then give each client a default route of router #1 and static routes to Skype's servers via router #2. Now all traffic except that to Skype will use router #1.
    This is really only viable if you have a relatively small number of destination addresses you're trying to divert. That's why it works well for Skype (single server address), but wouldn't work well for something more generic such as 'web traffic' since you cannot predict which web servers (and therefore which IP addresses) need static routes.
    Of the three options, only option #1 will cover all protocols for all clients, but it's also the only option that costs $$s if your current router doesn't support multiple WAN interfaces.
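    To illustrate the third option, a one-line sketch of a static route on a Mac OS X client (198.51.100.10 is a placeholder for one of Skype's server addresses; 10.0.1.254 is gateway #2 from the original post):
    sudo route add -host 198.51.100.10 10.0.1.254
    Note that such a route is not persistent across reboots; you would need a startup item or similar to re-add it.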

  • SA520 Load Balancing Problems

    Hi,
    we've got an SA520 with Load Balancing activated for two ISPs on the two WAN ports. Both WAN ports are showing "WAN status UP".
    The problem is that every connection stalls after a few minutes (for example a download, a web radio live stream, or a PPTP VPN connection). It seems that the load balancing is permanently switching lines for all sessions, which doesn't make any sense. How can I configure session-based load balancing without binding protocols to a specific WAN port?
    Best Regards, Klaus

    Hello Klaus,
    Thank you so much for your inquiry.
    >> It seems that the load balancing is switching the lines permanently for all sessions, which doesn't make any sense.
    With Load Balancing enabled, packets traverse the gateway with no initial regard to protocol assignment to a specific WAN port. After the SA learns the routes to destination networks, it uses the best route, usually the shortest. The router will automatically switch back and forth, literally balancing the load based on packet volume, with no regard to protocols.
    >> How can I configure session-based load balancing without binding protocols to a specific WAN port?
    That said, protocol binding is necessary in order to direct the traffic, based on protocol assignment, in a specified manner. The nature of Load Balancing requires protocol binding in order to prevent a protocol, even HTTP, from jumping WAN ports. I hope this helps!

  • Oracle RAC load balancing sessions

    Hi Gurus,
    I have just implemented SAP on a 2-node RAC. We had RAC1 shut down for maintenance, and all sessions failed over to RAC2. Upon bringing RAC1 back up, all the sessions remain connected to RAC2. New sessions are load balanced between the 2 nodes. Hence, RAC2 is much busier than RAC1.
    How can I redistribute the sessions/load in Oracle after bringing a RAC node back?
    Thanks,
    Tzyy Ming

    Hi,
    >> I am using 11g SCAN. I have switched all the SCAN services to RAC1. RAC2 has over 300 sessions. After the entire afternoon, RAC1 only managed to build up fewer than 100 sessions, while RAC2 still has 300 sessions. Most users are connected to SAP throughout the day. It would be nice if we could average out to 200 sessions on each node.
    You shouldn't expect an SAP work process to kill itself and reconnect to another resource (RAC1) at runtime. Only newly created processes will connect to the newly started RAC node; that is why RAC2 has 300 sessions and RAC1 has 100.
    >> Relocating the service manually requires human intervention and creates human error. How can we configure RAC so that it knows 1 node is busier than the others and automatically redirects connections to the lighter node?
    As far as I know, this is a manual operation, not automatic.
    I hope that is clear.
    Best regards,
    Orkun Gedik
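    For the manual relocation Orkun describes, the usual tool is srvctl; a minimal sketch (assuming instance names RAC1 and RAC2 as in the post, with ORCL and SAP_SRV as hypothetical database and service names; this moves where new connections land, it does not migrate sessions already connected to RAC2):
    srvctl relocate service -d ORCL -s SAP_SRV -i RAC2 -t RAC1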

  • MPLS/VPN network load balancing in the core

    Hi,
    I have a question about CEF-based load balancing in the MPLS core in an MPLS/VPN environment. With flow-based load balancing, the path (outgoing interface) is chosen based on the source and destination IP addresses. What about in an MPLS/VPN environment? On the P and PE routers, is the hash based on the PE routers' source/destination loopback addresses, or on the VRF packet's source/destination addresses? The topology would be:
    CE---PE===P===PE---CE
    I'm interested in the load balancing efficiency if I duplicate the link between the P and PE routers.
    Thank you for your help!
    Gabor

    Hi,
    On the PE router you can set up different types and 2 levels of load balancing.
    For instance, in the case of a dual-homed site, the subnet A prefix for VPN A could be advertised into the VPN by PE1 or PE2.
    PE1 receives this prefix via an eBGP session from CE1 and keeps this route as best due to its external state.
    PE2 receives this prefix via an eBGP session from CE2 and keeps this route as best due to its external state.
                       eBGP
             PE1 ----------- CE1 \
    PE3 --- P1                     Subnet A
             PE2 ----------- CE2 /
                       eBGP
    Therefore, from PE3's point of view, 2 routes are available, assuming that the IGP metric for PE3-PE1 is equal to that for PE3-PE2.
    A first level of load sharing can then be achieved thanks to the "maximum-paths ibgp <number>" command.
    2 MP-BGP routes are received on PE3:
    PE3->PE1->CE1->subnet A
    PE3->PE2->CE2->subnet A
    To use both routes you must set the number to at least 2: maximum-paths ibgp 2
    But guess what: in the real world an MPLS backbone hardly ever guarantees an equal IGP cost towards 2 egress PEs for a given prefix.
    So it is often necessary to ignore the IGP metric by adding the "unequal-cost" keyword: maximum-paths unequal-cost ibgp 2
    By default the load balancing is "per-session": source and destination addresses are used to choose the path and the outgoing interface, which avoids reordering packets towards the target site. Otherwise it is possible to use "per-packet" load balancing.
    Then a 2nd level of load sharing can occur.
    For instance:
             __ P1 __ PE1 __ CE1 \
    PE3 ---<        ><             Subnet A
             \_ P2 __ PE2 __ CE2 /
    (each P router connects to both PE1 and PE2)
    There are still 2 MP-BGP paths:
    PE3->PE1->CE1->subnet A
    PE3->PE2->CE2->subnet A
    But this time, for those 2 MP-BGP paths, 4 IGP paths are available:
    PE3->P1->PE1->CE1->subnet A
    PE3->P1->PE2->CE2->subnet A
    PE3->P2->PE1->CE1->subnet A
    PE3->P2->PE2->CE2->subnet A
    For load balancing to be active between those 4 paths, they must all exist in the routing table, thanks to the "maximum-paths 4" command in the IGP (e.g. OSPF) process.
    Therefore, if those 4 paths are equal-cost IGP paths, a 2nd level of load balancing is achieved. The default behavior is the same source/destination mechanism for selecting the "per-session" path, as mentioned before.
    On an LSP, each LSR can use this feature.
    BR
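    To pull the commands above together, a compact sketch (the AS number, VRF name and OSPF process ID are hypothetical, and the keyword order for the unequal-cost option can vary by IOS release):
    router bgp 65000
     address-family ipv4 vrf VPN_A
      maximum-paths unequal-cost ibgp 2
     exit-address-family
    !
    router ospf 1
     maximum-paths 4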

  • Question about Cluster/DSync/Load Balance

    According to the admin doc of iplanet, primary server is
    the "manager" for data sync, is there any impact on
    load balance when the iAS run as primary or backup?
    will the primary kxs get the request first and do dispatching?
    Thanks.
    Heng

    First of all, let's discuss load balancing.
    The type of load balancing you are using determines which process manages the load balancing. If you are using response time (per-server or per-component response time) or round robin (regular or weighted), the web connector does the load balancing. If you are using User Defined (iAS-based) load balancing, then the kxs process becomes involved with load balancing of requests, since the "Load Balancing System" is part of the kxs process.
    Now for Dsync and how it impacts load balancing.
    When a server is in a sync primary or sync backup role it is doing more work. For the sync primary, the extra work is making sure the backup has the latest Dsync data and processing requests from the other servers in the cluster about the distributed data. All state/session information is updated/created/deleted on the sync primary; when this happens, the sync primary immediately updates the sync backup(s) with this new information. As you can guess, managing the Dsync information and making the updates to the sync backups causes extra processing on the sync primary, so this will impact the overall performance of the machine (whether in server load or in response time). All lookups of state/session information are done on the sync primary only, so the more lookups/updates you have, the more impact on the server.
    The sync backup(s) also have the extra work of managing their copy of the Dsync data, which will impact server performance, but to a lesser degree than on the sync primary.
    Ultimately the extra overhead involved does have an impact on load balancing due to the extra load on the sync primary and sync backups.
    Hope that helps,
    Chris Buzzetta

  • Sticky load balancing across 2 ports with cookies

    Hi,
    I have a server configuration with 1 top-level Apache server that deals with SSL termination (and handles static content) and proxies dynamic content onto 2 Tomcat servers on 2 ports: one for HTTP requests (9001) and one for requests that were secure but have now been decrypted by Apache (9002). My 2 Tomcat servers are load balanced using a CSS, and I need this load balancing to stick to the Tomcat servers regardless of port, so that the user is stuck to the same Tomcat server for their entire session.
    I would like to use ArrowPoint cookies to perform this stickiness, but the documentation suggests that ArrowPoint cookie load balancing (in fact any cookie-based load balancing) requires the port to be specified in the content rule. Is this correct? Is my only option to use the source IP for stickiness? I don't understand why the port should be required if the stickiness is via a cookie. Can I not simply configure my 2 Tomcat servers as services with no port and add a single content rule that load balances these services using arrowpoint-cookie advanced balancing?
    service tomcat1
      ip address x.x.x.x
      active
    service tomcat2
      ip address x.x.x.x
      active
    owner me
       content sticky
         vip address x.x.x.x
         protocol tcp
         url "/*"
         add service tomcat1
         add service tomcat2
         advanced-balance arrowpoint-cookie
         active

    Angela-
    The issue with the port is that cookies are very specifically HTTP-only, and the CSS has no way of knowing what protocol will hit a VIP prior to trying to parse it as HTTP. Your issue is actually a bit simpler than it initially appears - you can still use 2 different rules by using the configuration below.
    However, you might be headed for a headache if you don't implicitly control the client's actions. By default, browsers don't generally send cookies cross-protocol and definitely not cross-domain. Use something like HttpWatch or IEWatch to check the headers your client sends to your site. Make sure that when the 200 OK arrives with the Set-Cookie, the client then sends that cookie in all subsequent requests, both HTTP and HTTPS.
    service tomcat1
      string "tomcat1"
      ip address x.x.x.x
      active
    service tomcat2
      string "tomcat2"
      ip address x.x.x.x
      active
    owner me
       content sticky9001
         vip address x.x.x.x
         protocol tcp
         url "/*"
         port 9001
         add service tomcat1
         add service tomcat2
         advanced-balance arrowpoint-cookie
         active
       content sticky9002
         vip address x.x.x.x
         protocol tcp
         url "/*"
         port 9002
         add service tomcat1
         add service tomcat2
         advanced-balance arrowpoint-cookie
         active
    With this configuration, the CSS will use the "string" as the cookie value. So if the client receives Set-Cookie: ArrowpointCookie=tomcat1, it should use it for either rule and end up on tomcat1 when accessing either VIP.
    Regards,
    Chris

  • How to load balance everything

    Edit:
    More up to date list is available on the wiki:
    http://wiki.oracle.com/page/WCI+Load+Balancing
    It's hard to find information related to deployment anymore, as there is no deployment guide for 10gR3 and a lot of information is scattered around in outdated blog posts. I'd like to keep a list of how to design every component for HA. I don't doubt that there are a lot of errors and things that need to be filled in, so please reply with your additions and corrections.
    h3. Types of load balancing referenced:
    h4. external
    refers to hardware- or software-based load balancing handled outside the WebCenter software
    h4. MPPE
    the 'massively parallel portlet engine' is the portal's ability to internally load balance web services that are configured with round robin DNS
    h4. cold failover
    I might have the terminology wrong, but I'm referring to having another instance of the product installed but disabled. The instance can be turned on in the event of an outage of the primary component, but it is not automatically available.
    h1. Portal
    Load balanced with an external load balancer (sticky session enabled)
    h3. References:
    http://edocs.bea.com/alui/deployment/docs604/networking/c_loadbalancing.html
    h1. API
    h1. Publisher
    Publisher is able to be load balanced by breaking it up into components: publisher admin, publisher redirect, published content
    h2. Publisher admin
    Cannot be load balanced, use cold failover
    h2. Publisher redirector
    MPPE (or external?)
    h2. Published content
    external
    h3. References:
    http://fsanglier.blogspot.com/2008/02/alui-publisher-increase-performance.html
    h1. Collaboration
    External / MPPE (with collab internal clustering)
    (Although I'm currently having issues with this, and someone reported that you can do without the collab internal clustering)
    h3. Collab's API
    When using the IDK to connect to collab, custom applications bypass the MPPE and communicate directly to the collab host, losing the benefit of the MPPE. In order to load balance in this situation, the collab host must use external load balancing. (is this true?)
    h3. "Search" service (the collab one) (only applies to 4.5 or newer)
    Install on same servers as collab?
    http://download.oracle.com/docs/cd/E13158_01/alui/collaboration/docs103/install/install.htm#i1138897
    h3. References:
    http://edocs.bea.com/alui/deployment/docs604/networking/c_loadbalancing.html
    recent collab outage related questions
    h1. Document Repository
    External
    h1. (AD/LDAP) Identity Web Services
    MPPE to the Web Services
    External between the Web Services and the AD/LDAP servers (or use HOSTS files to point each ADAWS server to a different AD server?)
    h1. Search
    (grid search: 6.1 or newer)The portal can load balance search requests internally. Each search node has knowledge of other nodes, so only 1 node needs to be reported to the portal. When the portal starts up, the 1 search node that is registered with the portal MUST be available.
    h1. Analytics
    Analytics UI
    (I'm not sure - I'm guessing anything would work here because the admin UI is pretty much read-only on the database)
    Analytics collector can be load balanced (as of 2.5)
    http://download-llnw.oracle.com/docs/cd/E13158_01/alui/analytics/docs103/installALI/quickstart.html#wp1063387
    h1. Automation
    Load balancing for redundancy is not possible. However, the work can be split up if Automation servers are assigned to different folders. Don't assign 2 Automation servers to the same folder, as they can compete for jobs. (?)
    h1. Content Upload
    (mppe / external)?
    h1. ALUI Directory Service
    h1. Remote Portlet Service
    ? (I'm guessing MPPE)
    h1. Notification
    ? no idea
    h1. What else am I missing?

    Here are a few bits of info:
    Load balancing for Analytics isn't officially supported yet for the UI and Administration components. It might work with sticky sessions from portal to Analytics, but that doesn't get you much. Improving the options there would be a good enhancement request.
    For the collector, I think there are two important things to remember. Use broadcast mode. I've never actually seen it in unicast mode, but broadcast definitely works. Also, remember that this is all managed by Portal and Analytics - you don't use your own load balancer between Portal instances and the Analytics Collector instances.
    Finally, ALUI Directory is not certified to be load balanced yet. On a separate note, ALUI Directory has a socket leak on Windows due to a bug in the version of Jrockit that ships with it. Upgrading the embedded application server for the ALUI/WCI installation to jrockit-R27.5.0-jdk1.5.0_14 will resolve it.

  • CEF and per-packet load balancing

    We have four OC3 links across the Atlantic and I was looking for a solution that would allow load balancing across the four links on a per-packet basis (not per-session). The objective is both resiliency, i.e. being able to handle link failures transparently, and balancing the load across all the links. BGP multipath looked like the ideal solution. However, I was told that CEF per-packet load balancing is no longer supported by Cisco. Is this correct? Is it applicable to all models? Are there any other potential solutions?
    Appreciate a response from the experts.

    Hello Rittick,
    An MPLS pseudowire will use only one of the 4 links, based on the inner MPLS label; it cannot be spread over multiple parallel links.
    The MPLS pseudowire can travel within an MPLS TE LSP that can be protected by FRR.
    per packet load balancing does not apply to your scenario.
    You need to mark traffic of the critical application with an appropriate EXP settings. The EXP bits are copied to the outer (external) label.
    On the OC-3 physical interfaces you will configure a CBWFQ scheduler providing 100 Mbps of bandwidth to traffic with specific EXP marking. This is elastic and over unused links bandwidth will be left available to other traffic.
    On the LAN interface you need to mark the EXP bits in received packets using a policy-map
    access-list 101 permit tcp host x.x.x.x host y.y.y.y
    !
    class-map CLASSIFY-BACKUP
     match access-group 101
    !
    policy-map MARKER
     class CLASSIFY-BACKUP
      set mpls exp 3
     class class-default
      set mpls exp 0
    !
    int gex/y/z
     service-policy in MARKER
    !
    class-map BACKUP
     match mpls exp 3
    !
    policy-map SCHED-OC3
     class BACKUP
      bandwidth 100000
     class class-default
      fair-queue
    !
    int posx/y/z
     service-policy out SCHED-OC3
    applied on all pos interfaces.  The MPLS pseudowire will use one link only. Different pseudowires can use different OC-3 links. Load balancing of MPLS traffic is based on internal label (the VC label of the pseudowire)
    Note:
    you should check whether it is possible to mark traffic received on the incoming interface of the pseudowire; otherwise you need to mark IP precedence nearer to the host.
    Hope to help
    Giuseppe
