Load balancing application

I am developing an application that dynamically balances load across servers.
Can I connect one client to multiple servers at the same time using socket programming?
If there is a way to do this, please suggest an approach.
This is urgent, so a quick reply would be much appreciated.
Could you also provide sample code for this?

Please search for the Exchange 2010/2013 SDK on MSDN.
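
Since the original post asks for sample code, here is a minimal sketch of one client holding socket connections to several servers at once and spreading requests across them. It is only an illustration under assumed conditions: the host names, port, and line-based request/response protocol are placeholders, not part of any real product.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;
    import java.util.ArrayList;
    import java.util.List;
    // Sketch: one client, several simultaneous server connections, round-robin dispatch.
    public class MultiServerClient {
        public static void main(String[] args) throws IOException {
            String[] hosts = {"server1.example.com", "server2.example.com"}; // placeholder hosts
            int port = 9000;                                                 // placeholder port
            List<Socket> connections = new ArrayList<>();
            for (String host : hosts) {
                try {
                    connections.add(new Socket(host, port)); // open a connection to every server up front
                } catch (IOException e) {
                    System.err.println("Could not reach " + host + ": " + e.getMessage());
                }
            }
            if (connections.isEmpty()) {
                throw new IOException("No servers reachable");
            }
            int next = 0; // simple client-side balancing: rotate over the open sockets
            for (int i = 0; i < 10; i++) {
                Socket s = connections.get(next);
                next = (next + 1) % connections.size();
                PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                out.println("request " + i);        // send one request line
                System.out.println(in.readLine());  // read one response line
            }
            for (Socket s : connections) {
                s.close();
            }
        }
    }

A real load balancer would also track per-server load or response times and retry on failures; the shuffle-and-retry idea in the RMI thread further down is one simple way to add fault tolerance.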

Similar Messages

  • Load balancing Application Server

    Hi
    I am new to PeopleSoft DBA work.
    It would be great if somebody could point me to the steps required for setting up load balancing for the PeopleSoft application server (not the web server).
    In particular, I want to know where to look for information on directing certain loads to a particular server.
    Thanks a lot
    Cyril

    Are you talking about load balancing from the web server to multiple application servers in four-tier mode? See the configuration.properties setup here:
    http://download.oracle.com/docs/cd/E13292_01/pt849pbr0/eng/psbooks/tsvt/book.htm?File=tsvt/htm/tsvt14.htm#H4003
    Or are you talking about load balancing in three-tier mode? See TUXEDO Connect String in the profile (Configuration Manager):
    http://download.oracle.com/docs/cd/E13292_01/pt849pbr0/eng/psbooks/tsvt/book.htm?File=tsvt/htm/tsvt11.htm#H4032
    Nicolas.
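    For the four-tier case, the web server's configuration.properties simply lists several application server domains. A hedged sketch of the relevant entry (host names and JSL ports are placeholders; the exact property layout for your PeopleTools release is in the PeopleBook linked above):
    # configuration.properties on the PIA web server -- placeholder hosts/ports
    psserver=APPSRV1:9000,APPSRV2:9000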

  • ACE load balancing and testing using soapUI

    Hey, I am trying to crowdsource a solution for this problem.
    A client is testing with soapUI against an application that is load balanced via ACE. There are two web servers behind the VIP servicing the client requests. When the client tests, the requests time out per the soapUI log. A packet capture was taken, and it clearly shows that the ACE is not forwarding the HTTP data back to the client. When the client tests by bypassing the ACE load balancer, it works fine. However, clients from other applications make successful connections to the load-balanced application via the VIP.
    Question: is there anything unique about making HTTP/XML-based requests using soapUI? The LB configuration is shown below:
    class-map match-all EAI_PWS_9083
      2 match virtual-address 10.5.68.29 tcp eq 9083
    serverfarm host EAI_PWS_9083
      description WebSphere Production
      failaction purge
      probe tcp9083
      rserver ESSWSPAPP01 9083
        inservice
      rserver ESSWSPAPP02 9083
        inservice
    policy-map type loadbalance first-match L7_POLICY_EAI_PWS_9083
      class class-default
        serverfarm EAI_PWS_9083
    policy-map multi-match L4SLBPOLICY
    class EAI_PWS_9083
        loadbalance vip inservice
        loadbalance policy L7_POLICY_EAI_PWS_9083
        loadbalance vip icmp-reply active
        appl-parameter http advanced-options CASE_PARAM
    parameter-map type http CASE_PARAM
      case-insensitive

    Hi,
    Your configuration looks fine. I am not familiar with soapUI, but if it behaves like a normal TCP connection followed by HTTP requests, I don't see why this shouldn't work.
    Do you know if there is any difference between a request made with soapUI and a normal request from a browser? One way to check is sketched below.
    Regards,
    Kanwal
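    To isolate whether soapUI itself is the variable, you can replay the same SOAP-style POST with a plain HTTP client, once against the VIP and once directly against a real server, and compare. A hedged sketch using the Java 11+ HttpClient (the VIP address and port come from the configuration above; the URL path and envelope body are placeholders):
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    // Replays a minimal SOAP-style POST so the result can be compared with soapUI's behaviour.
    public class VipSoapProbe {
        public static void main(String[] args) throws Exception {
            String envelope = "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">"
                            + "<soapenv:Body/></soapenv:Envelope>";             // placeholder payload
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://10.5.68.29:9083/service"))           // VIP/port from the config; path is a placeholder
                    .header("Content-Type", "text/xml; charset=utf-8")
                    .POST(HttpRequest.BodyPublishers.ofString(envelope))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }
    If the raw POST works through the VIP while soapUI times out, compare the request headers in the packet captures (for example Expect: 100-continue, chunked transfer encoding, keep-alive behaviour), since those are the usual differences between test clients.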

  • R1213 Load Balance using F5 load balancers on Sun/Linux

    Hi,
    We have the following requirement: perform the upgrade and load balance the applications.
    1. Load balance web and Courion services using F5 load balancers after the R1213 upgrade.
    Any idea about Courion services and how we can load balance them on Apps R1213? The load balancers would be configured for sticky sessions for consistency.
    2. How can we achieve load-balanced applications with SSL off-loading?
    3. What are the best methods, and are there any whitepapers covering the same?
    Please let me know.
    Thanks,
    Bhargava

    Any idea about Courion services and how we can load balance them on Apps R1213? The load balancers would be configured for sticky sessions for consistency.
    Please elaborate more on this.
    2. How can we achieve load-balanced applications with SSL off-loading?
    How To Redirect HTTP Traffic to HTTPS On A BIG-IP F5 Load Balancer [ID 889308.1]
    3. What are the best methods, and are there any whitepapers covering the same?
    How To Check Session Persistence On BigIP F5, Cisco Ace, Citrix Netscaler or Radware AppDirector Load Balancer Appliances [ID 601694.1]
    Tips and Queries for Troubleshooting Advanced Topologies [ID 364439.1]
    You can also find more details on Steven Chan's blog (search for "load balancer") -- http://blogs.oracle.com/stevenChan/
    Thanks,
    Hussein
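    On point 2, a common BIG-IP pattern when the F5 terminates SSL is an iRule on the plain-HTTP virtual server that bounces clients to HTTPS. The snippet below is only a rough sketch of that idea; the authoritative steps for E-Business Suite are in note 889308.1 referenced above, so verify the exact form there and against your BIG-IP version:
    when HTTP_REQUEST {
        # send requests that arrive on the HTTP virtual server over to HTTPS
        HTTP::redirect "https://[HTTP::host][HTTP::uri]"
    }
    On the EBS side, SSL offload also typically requires the web-entry/context changes described in that note so that the applications generate https URLs.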

  • APEX SSO and Load balancing: Could not determine workspace for application

    We had a single HTTP Server serving APEX from a 10.2.0.2 database, configured with SSO for use by the developers. APEX has been registered as a partner application, and the login URL is protected by CA SiteMinder so that the SM_USER details are forwarded in the header for the application to use for authorization. Everything is fine so far.
    Now we have added an HTTP Server on another host and set it all up for APEX, pointing at the same database. APEX_ADMIN access works as normal, but applications previously using SSO now get the following error after entering the URL:
    Expecting p_company or wwv_flow_company cookie to contain security group id of application owner.
    Error ERR-7620 Could not determine workspace for application ().
    Using HTTP Watch I find that the application is not even trying to redirect to the login page.
    What is wrong here?

    APEX has been registered as a partner application as described in
    http://www.oracle.com/technology/products/database/application_express/howtos/sso_partner_app.html
    In the meantime I found Metalink document 368746.1, which describes the cause of this problem. Please read carefully what I wrote: everything works when the new APEX web server is taken out of the server farm on the load balancer and requests are directed through the original web server. When running regapp.sql, the hostname in the listener token was the virtual hostname. This works fine as long as the request comes from the original APEX server, which proves that there is nothing wrong with the installation and setup of SSO. When the request is directed to the new APEX web server, the APEX_ADMIN page still works; only existing workspaces using SSO no longer work, resulting in the error described in the subject.
    Regarding the causes of this error named in Metalink document 368746.1:
    - there are no duplicate entries in WWSEC_ENABLER_CONFIG_INFO$
    - LISTENER_TOKEN clearly works for requests coming from the first web server
    - theoretically the web server listener port could be changed from 7777, but port 80 needs to be maintained here because production is mimicked as far down as possible.
    Is there some cache table which can be cleared? And how is it that the flows schema (APEX engine) cannot find the workspace when the request comes from the new web server, which can nevertheless access the APEX_ADMIN pages?
    anyone?
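    A quick way to re-check the "duplicate entries" cause is to look at the enabler table from the APEX engine (flows) schema. A hedged sketch; the schema name and the columns that matter vary by APEX release, so select everything and inspect the listener-token and login-URL values per web server:
    -- connected as the APEX engine (flows) schema
    SELECT * FROM wwsec_enabler_config_info$;
    If a row carries the physical hostname of only one web server rather than the virtual (load-balancer) hostname, re-running regapp.sql with the virtual hostname for the listener token would be the next thing to try.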

  • Reverse Proxy and Load Balancer for SMP 2.3 and Agentry Application

    Hi Expert,
    I'm putting in place a mobile solution composed of SMP 2.3 SPS 4 and SAP ECC 6.0. In SMP 2.3 I created the Agentry server and deployed my Agentry application.
    My SMP/Agentry infrastructure consists of two servers, so I need a load balancer to distribute the load across them. I also need to use a reverse proxy in my DMZ.
    Based on what is indicated in SAP note "1904213 - SAP Mobile Platform Server Release Information", the Apache reverse proxy is not supported for Agentry clients; Agentry uses nginx for the reverse proxy.
    I also found the document How-to-Guide for Reverse Proxy and Load Balancing in SAP Mobile Platform 3.x, which explains how to set up a reverse proxy and load balancer with nginx and Apache.
    Both the SAP note and the how-to document refer to SMP 3.0, not SMP 2.3.
    I would like to know whether nginx must also be used for SMP 2.3.
    Any suggestion/information is appreciated.
    Thanks in advance
    g.

    Please see Agentry Network Landscapes
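    For reference, an nginx reverse proxy/load balancer in the DMZ is usually built around an upstream block that lists both SMP/Agentry servers. The fragment below is only a hedged sketch with placeholder host names, port, and certificate paths; the Agentry-specific settings (and whether nginx is required at all on SMP 2.3) should be taken from the note and how-to guide referenced above:
    # nginx.conf fragment -- placeholders only
    upstream agentry_backend {
        server smp-host1.example.com:8081;   # placeholder SMP/Agentry ports
        server smp-host2.example.com:8081;
    }
    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/certs/proxy.crt;   # placeholder paths
        ssl_certificate_key /etc/nginx/certs/proxy.key;
        location / {
            proxy_pass https://agentry_backend;
        }
    }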

  • Load balancing across multiple application servers not working with JCo RFC

    We have a problem where inbound messages to the Mapping Runtime engine (ABAP -> J2EE) are not load balanced over application servers. However, load balancing does take place across server nodes within one application server.
    Our system comprises the following:
    Central Instance (2 X server nodes)
    Database Instance
    2 X Dialog Instances (with 2 X server nodes each)
    The 1st application server that starts is usually the one that is used for inbound messaging.
    We have looked at the SAP gateway configuration and tried various options without much luck,
    e.g. local gateways vs. one central gateway, and changing the load-balancing type via the parameter gw/reg_lb_level; see: http://help.sap.com/saphelp_nw70/helpdata/EN/bb/9f12f24b9b11d189750000e8322d00/frameset.htm
    Here are our release levels:
    SAP_ABA     700     0012     SAPKA70012
    SAP_BASIS     700     0012     SAPKB70012
    PI_BASIS     2005_1_700     0012     SAPKIPYJ7C
    ST-PI     2005_1_700     0005     SAPKITLQI5
    SAP_BW     700     0013     SAPKW70013
    ST-A/PI     01J_BCO700     0000          -
    Any help would be greatly appreciated.
    Many thanks

    Tim
    Did you follow the guide here:
    How to Scale Up SAP Exchange Infrastructure 3.0  
    Learn what the most likely scaled system architecture looks like, and read about a step by step procedure to install additional dialog instances. The guide also walks you through additional configuration steps and the application of Support Package Stacks.
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/c3d9d710-0d01-0010-7486-9a51ab92b927
    We followed this guide for XI 3.0 and PI 7.0 and it works successfully!
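    For reference, the gateway parameter mentioned above is maintained in the instance profile of each application server; a minimal, hedged sketch (the exact value and its semantics depend on your release, so confirm against the SAP Help page linked in the question before changing anything):
    # instance profile (transaction RZ10) of each application server -- confirm the value first
    gw/reg_lb_level = 1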

  • Server load balancing for application access using multiple servers

    1. What methods do Cisco switches support for load balancing?
    2. I want users in different locations to access one particular IP, while physically a few servers handle the application and data.

    Well, some servers allow you to install routing protocols on them, so you could run OSPF across some links.
    Or you could use NLB if it is a Microsoft server; this uses a heartbeat network, a virtual MAC, and an IP address bound to that virtual MAC.
    You could use NIC teaming; Broadcom NICs on Dell servers allow you to configure them for load balancing, failover, and a few other options.
    Or, if the servers are mirrored using MSCS or something similar (i.e. configured the same but independent), you could simply load balance using DNS, as sketched below.
    Hope this helps. Just some quick ideas off the top of my head.
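    The DNS round-robin option is just several A records for the same name, so successive lookups rotate through the server addresses. A hedged zone-file sketch (name, TTL, and addresses are placeholders):
    ; zone file fragment -- round-robin A records
    app.example.com.   60   IN   A   10.0.0.11
    app.example.com.   60   IN   A   10.0.0.12
    app.example.com.   60   IN   A   10.0.0.13
    Note that plain DNS round robin has no health checking, so a dead server keeps receiving a share of the lookups until its record is pulled.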

  • Load balancing on RMI Server Application

    Hi there. I'd like to know more about load balancing for an RMI server application. I've been digging around the web for some time but couldn't find much information. Does anyone know of a website, or have knowledge of load balancing with RMI to share? Something like its design and implementation.
    Thanks,
    Jax

    I want performance and fault tolerance. In that case, is it necessary to have a main RMI server that clients access, which picks one of the RMI servers for the client?
    Well, if you have one main RMI server, you have a single point of failure again. Why can't your application know about all the RMI servers and randomly choose one? If it's down, pick a different one (a sketch of this follows below). If your application is an applet, you'll have to sign it. Do that and put it on two redundant web servers. You get performance by splitting the load, and you get fault tolerance by having x identical RMI servers.
    If you're saying what I think, why would you have one main RMI server that picks a different RMI server? That's more work AND your redundancy goes bye-bye, i.e. if your main RMI server dies...
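    A minimal sketch of that shuffle-and-fail-over idea in plain Java RMI. The Compute interface, binding name, hosts, and registry port are all hypothetical placeholders, not an existing API:
    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;
    // Hypothetical remote interface -- the real one would live on both client and servers.
    interface Compute extends Remote {
        String work(String input) throws RemoteException;
    }
    public class BalancedRmiClient {
        public static void main(String[] args) {
            List<String> hosts = Arrays.asList("rmi1.example.com", "rmi2.example.com", "rmi3.example.com");
            Collections.shuffle(hosts);              // random choice spreads load across the identical servers
            for (String host : hosts) {              // if the chosen server is down, fall through to the next
                try {
                    Registry registry = LocateRegistry.getRegistry(host, 1099);
                    Compute service = (Compute) registry.lookup("ComputeService"); // hypothetical binding name
                    System.out.println(service.work("hello"));
                    return;                          // stop after the first server that answers
                } catch (Exception e) {
                    System.err.println(host + " unavailable, trying next: " + e.getMessage());
                }
            }
            System.err.println("All RMI servers are down");
        }
    }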

  • Advantages of using a webserver inbetween a load balancer and application servers

    I am building out a new WebLogic domain.
    I am wondering which of these configurations to go with:
    1. Load balancer > WebLogic servers
    2. Load balancer > web server > WebLogic servers
    Could someone tell me the specific advantages of having web servers in between a load balancer and the application servers (besides caching static content and acting as a proxy)?
    Thanks in advance
    Srini

    Other than hosting the static content, nothing much really. We have our load balancer go straight to WebLogic for applications without static content, and route to a web server if there is static content. Easy enough to do it both ways, best of both worlds. (A sample web server proxy plug-in configuration is sketched below.)
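    If a web server does go in the middle, the WebLogic proxy plug-in is what fans requests out to the cluster. A hedged httpd.conf sketch; the module file name differs by Apache/plug-in version, and the hosts, ports, and context path are placeholders:
    # httpd.conf fragment -- WebLogic proxy plug-in (placeholder paths, hosts, ports)
    LoadModule weblogic_module modules/mod_wl_24.so
    <IfModule weblogic_module>
        WebLogicCluster wls1.example.com:7001,wls2.example.com:7001
    </IfModule>
    <Location /myapp>
        SetHandler weblogic-handler
    </Location>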

  • Ask the Expert: Configuration and Troubleshooting the Cisco Application Control Engine (ACE) load balancer

    With Ajay Kumar and Telmo Pereira 
    Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about configuring and troubleshooting the Cisco Application Control Engine (ACE) load balancer with Cisco experts Ajay Kumar and Telmo Pereira. The Cisco ACE Application Control Engine Module for Cisco Catalyst 6500 Series Switches and Cisco 7600 Series Routers is a next-generation load-balancing and application-delivery solution. A member of the Cisco family of Data Center 3.0 solutions, the module:
    - Helps ensure business continuity by increasing application availability
    - Improves business productivity by accelerating application and server performance
    - Reduces data center power, space, and cooling needs through a virtualized architecture
    - Helps lower operational costs associated with application provisioning and scaling
    Ajay Kumar  is a customer support engineer in the Cisco Technical Assistance Center in Brussels, covering content delivery network technologies including Cisco Application Control Engine, Cisco Wide Area Application Services, Cisco Content Switching Module, Cisco Content Services Switches, and others. He has been with Cisco for more than four years, working with major customers to help resolve their issues related to content products. He holds DCASI and VCP certifications. 
    Telmo Pereira is a customer support engineer in the Cisco Technical Assistance Center in Brussels, where he covers all Cisco content delivery network technologies including Cisco Application Control Engine (ACE), Cisco Wide Area Application Services (WAAS), and Digital Media Suite. He has worked with multiple customers around the globe, helping them solve interesting and often highly complex issues. Pereira has worked in the networking field for more than 7 years. He holds a computer science degree as well as multiple certifications including CCNP, DCASI, DCUCI, and VCP
    Remember to use the rating system to let Ajay know if you have received an adequate response.
    Ajay and Telmo might not be able to answer each question due to the volume expected during this event. Remember that you can continue the conversation on the Data Center sub-community discussion forum Application Networking shortly after the event.
    This event lasts through July 26, 2013. Visit this forum often to view responses to your questions and the questions of other community members.

    Hello Krzysztof,
    Another set of good/interesting questions posted. Thanks! 
    I will try to clarify your doubts.
    In the output below, both resources (proxy-connections and ssl-connections rate) are configured with a minimum percentage of resources (column Min), while 'Max' is set equal to the min.
    ACE/Context# show resource usage
                                                         Allocation
            Resource         Current       Peak        Min        Max       Denied
    -- outputs omitted for brevity --
      proxy-connections             0      16358      16358      16358      17872
      ssl-connections rate          0        626        626        626      23204
    Most columns are self-explanatory: 'Current' is the current usage, 'Peak' is the maximum value reached, and the most important counter to monitor, 'Denied', represents the number of packets denied/dropped due to exceeding the configured limits.
    On the resources themselves, Proxy-connections is simply the amount of proxied connections, in other words all connections handled at layer 7 (SSL connections are proxied, as are any connections with layer 7 load balance policies, or inspection).
    So in this particular case for the proxy-connections we see that Peak is equal to the Max allocated, and as we have denies we can conclude that you have surpassed the limits for this resource. We see there were 17872 connections dropped due to that.
    ssl-connections rate should be read in the same manner; however, all values for this resource are in bytes/s, except for the Denied counter, which is simply the number of packets that were dropped due to exceeding this resource.
    For your particular tests you have allocated a min percentage and set max equal to min; this way you make sure that this context will not use any additional resources.
    If you had set the max to unlimited during resource allocation, the ACE would be allowed to use additional resources on top of those guaranteed, if those resources were available.
    This might sound like a great idea, but resource planning on the ACE should be done carefully to avoid any sort of oversubscription, especially if you have business-critical contexts.
    We have a good reference for ACE resource planning that also contains a description of all the resources (this will help you understand the output better):
    http://www.cisco.com/en/US/docs/interfaces_modules/services_modules/ace/v3.00_A2/configuration/virtualization/guide/config.html#wp1008224
    1) When a resource is utilized to its maximum limit, the ACE denies additional requests made by any context for that resource. In other words, the action is to drop. The ACE should, in theory, drop silently (no RST is sent back to the client). So unless something changed in the code, this is what you should see.
    To give more context, seeing resets on SSL connections is not necessarily synonymous with drops, as it is usual to see them during normal transactions.
    For instance, Microsoft servers usually terminate SSL connections ungracefully with a RESET. Also, when there is renegotiation during an SSL transaction you may see RESETs, but this passes unnoticed by end users.
    2) ACE will simply drop/ignore new connections when we reach the maximum number of proxied connections for that context. Existing connections will continue.
    As ACE doesn't respond back, the client will simply retransmit, and if it is lucky, it may be able to establish the connection on the next attempt.
    To overcome the denies, you will definitely have to increase the resource allocation. This of course, assuming you are not reaching any physical limit of the box.
    As mentioned setting max as unlimited might work for you, assuming there are a lot of unused resources on the box.
    3) If a new connection comes in with a sticky value that matches the sticky entry of a real server which is already in MAXCONNS state, then the ACE (module or appliance) should reject the connection, and that sticky entry would be removed.
    The client would at that point re-establish a new connection, and the ACE would associate a new sticky entry with the flow for a new rserver after the load-balancing decision.
    I hope this makes things clearer! Uff...
    Regards,
    Telmo
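    For reference, the guarantees discussed above are assigned per context through a resource class. This is only a hedged sketch; the resource keywords, and whether a rate resource such as the SSL connection rate takes a separate form, vary by release, so check the virtualization guide linked above before applying anything:
    resource-class GOLD_CONTEXTS
      limit-resource all minimum 0.00 maximum unlimited
      limit-resource proxy-connections minimum 10.00 maximum equal-to-min
    context CUSTOMER_A
      member GOLD_CONTEXTS
    Setting "maximum unlimited" instead of "equal-to-min" lets a context borrow spare capacity beyond its guaranteed share, which is the trade-off Telmo describes above.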

  • CSS - Load balancing to Microsoft 2008 Sharepoint Application

    We are trying to load balance, using a CSS 11503, to two servers running Microsoft SharePoint 2008. Everything is working fine as far as load balancing is concerned. But if the Microsoft SharePoint 2008 application is down on one server, we do not want any requests for this application to be sent to that server. What sort of keepalive should we be using? TCP port 80 is still up and responds even when the SharePoint application is down on that server.
    I do not know much about how the Microsoft SharePoint 2008 application interfaces/interacts with IIS, port 80, etc.
    Any suggestions?

    Partial Config:
    ===============
    service FRED30
    ip address x.x.x.100
    protocol tcp
    port 80
    redundant-index 3
    keepalive port 80
    keepalive type http
    active
    service FRED31
    ip address x.x.x.101
    protocol tcp
    port 80
    redundant-index 4
    keepalive port 80
    keepalive type http
    active
    When we configure the above with
    "keepalive type http"
    and then do a show keepalive, we get the State as DOWN - why? If we take the keepalive type http command out of the above services, we no longer see the state as DOWN.
    But even when it says DOWN, we can still connect to port 80 without a problem.
    CSS# sh keepalive AUTO_FRED30
    Name: AUTO_FRED30 Index: 7 State: Down
    Description: Auto generated for service for FRED30
    Address: x.x.x.100 Port: 80
    Type: HTTP:HEAD:/
    Keepalive Error: General failure
    Frequency: 5
    Max Failures: 3
    Retry Frequency: 5
    Dependent Services:
    FRED30
    sh keepalive FRED31
    Name: AUTO_FRED31 Index: 9 State: Down
    Description: Auto generated for service FRED31
    Address: x.x.x.101 Port: 80
    Type: HTTP:HEAD:/
    Keepalive Error: General failure
    Frequency: 5
    Max Failures: 3
    Retry Frequency: 5
    Dependent Services:
    FRED31
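    Since a bare TCP keepalive (and even an HTTP HEAD of "/") can stay up while SharePoint itself is broken, point the keepalive at a page that only renders when the application is healthy. A hedged sketch of one service; the URI is a placeholder, and if IIS answers it with a 401 or a redirect for anonymous requests, the keepalive will still report DOWN, which may also explain the "General failure" above:
    service FRED30
      ip address x.x.x.100
      protocol tcp
      port 80
      keepalive type http
      keepalive method get
      keepalive uri "/Pages/Default.aspx"
      active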

  • Patch applying on Two node application server(load balancing)

    Hi,
    We have two application servers load balanced with PCP.
    I want to confirm the order for applying patches:
    first the patch has to be applied on the primary application node,
    and next it has to be applied on the secondary application node.
    Please confirm.
    Regards,
    maleem

    maleem wrote:
    Hi Mapps,
    We do not have a shared application tier. I think in that case we have to apply the patches on both application nodes.
    Am I right? Please correct me if I am wrong.
    Regards,
    maleem
    Correct.
    Thanks,
    Hussein
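    In outline, with a non-shared application tier the same patch run is simply repeated on each node; a hedged sketch (the patch number and directory are placeholders, and the concurrent managers/PCP should be handled as the patch readme instructs):
    # on the primary application node first, then repeat on the secondary node
    cd /u01/patches/1234567        # placeholder patch directory
    adpatch                        # supply the driver and answer the prompts per the readme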

  • Load balancing UDP application in ACE

    Hi all,
    What's the proper way to load balance a UDP application (the NTP protocol) using ACE? We used to do it on our CSS using a content rule to load balance and a source group to source-NAT the UDP replies from the servers to the VIP. I guess this should be implemented using NAT in the ACE, but I can't find any example.
    According to the manual, src-natting to VIPs is supported only in A1(8) and it is supposed to be used "when there is a limited number of real-world IP addresses on the client-side network".
    This is not our case, we just need to ensure that the client receives the UDP replies as coming from the VIP, not from real IP address of the server. This is not a problem in TCP-based applications, because the NAT from the rserver IP to the VIP is automatic. What is the proper way to obtain this behaviour for UDP applications? Thanks a lot!
    Regards,
    Pedro

    Pedro,
    reverse NATing is not required in the ACE world.
    This is done automatically.
    So the server response will automatically be NATed to the VIP address on its way back to the client (a sketch of a UDP VIP configuration follows below).
    If you have an appliance and are just deploying now, I would recommend version A3(2.1).
    If you have a module, go for A2(1.3).
    Gilles
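    Mirroring the ACE HTTP example earlier on this page, a UDP/NTP VIP looks roughly like the sketch below (all names and addresses are placeholders). No source group or explicit reverse-NAT statement is needed, since the ACE translates the server replies back to the VIP automatically:
    rserver host NTP1
      ip address 10.1.1.11
      inservice
    rserver host NTP2
      ip address 10.1.1.12
      inservice
    serverfarm host NTP_FARM
      rserver NTP1
        inservice
      rserver NTP2
        inservice
    class-map match-all NTP_VIP
      2 match virtual-address 10.1.1.100 udp eq 123
    policy-map type loadbalance first-match NTP_LB
      class class-default
        serverfarm NTP_FARM
    policy-map multi-match L4_VIPS
      class NTP_VIP
        loadbalance vip inservice
        loadbalance policy NTP_LB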

  • Load-balancing by application version

    Hi there. I have a pair of CSS-11501's that I'm using for load-balancing incoming connections for a specific software application. We have 2 versions of the software that connect to the same TCP port on the server side. Is there any way to have the CSS distinguish between the application versions so I can direct traffic to different clusters based on version, without the customer knowing?

    No way with the CSS.
    The CSS can understand HTTP but not other applications.
    The ACE module, in its next software release, will be able to match data from any application and take an action such as load balancing on it (a rough sketch of that idea follows below).
    Gilles.
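    On the ACE, the idea referred to above is generic payload matching: classify on bytes that the two client versions send and steer each to its own server farm. A heavily hedged sketch; the class-map/policy-map names, the regex, and the availability of generic matching all depend on the ACE software release, so treat this as an assumption to verify in the release notes:
    class-map type generic match-any APP_V2
      2 match layer4-payload regex "VERSION2.*"
    policy-map type loadbalance generic first-match VERSION_SPLIT
      class APP_V2
        serverfarm FARM_V2
      class class-default
        serverfarm FARM_V1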
