Load balancing across remote systems

Hi everyone,
When a request is routed to a remote machine (assuming that the service is not
available on the local machine) load balancing is based on the load factors (plus
NETLOAD, if set) of requests previously sent to the remote systems from the local
system. My question is: is the load accumulation ever reset? As I see it, a
reset could probably happen in one or more of the following ways:
1. A rolling time period that will cause earlier load values to be progressively
discarded.
2. The load accumulations are reset every time a system is booted.
3. The load accumulations are reset whenever the tuxconfig is modified.
4. The accumulations are reset when the tuxconfig file is removed and recreated
(I guess this is obvious, assuming this is where the accumulations would be stored
between boots).
The question is prompted by the following scenario. Machines A, B and C have
been configured to offer the same service, but the service has only been enabled
on A and B. Machine D does not offer this service, and therefore requests from
D will be balanced between A and B based on the accumulated load recorded on D.
This situation has been in operation for a couple of months without rebooting
(quite possible - after all, this is Tuxedo we are talking about!), and it has
now been decided to activate the service on machine C (without rebooting). The
big question is: Will all the requests be routed to C until such time as it catches
up to the load processed by A and B?
Thanks to anyone who can shed some light on this.
Regards,
Malcolm.
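
To make the scenario concrete, here is a minimal sketch (plain Python, not Tuxedo code; it simply assumes the local machine always routes to the candidate with the lowest accumulated load, as described above) of what happens when machine C is enabled with an empty accumulator:

    # Hypothetical accumulated-load table kept on machine D; numbers are
    # illustrative only.
    accumulated = {"A": 120000, "B": 118000}
    SVC_LOAD = 50  # illustrative load factor charged per request

    def route_one():
        # Pick the remote machine with the lowest accumulated load so far.
        target = min(accumulated, key=accumulated.get)
        accumulated[target] += SVC_LOAD
        return target

    # The service is now also enabled on machine C, whose accumulator starts at zero.
    accumulated["C"] = 0

    counts = {"A": 0, "B": 0, "C": 0}
    for _ in range(5000):
        counts[route_one()] += 1
    print(counts)
    # If the accumulators are never reset, C receives every request until its
    # accumulator catches up with B and then A; if they are reset periodically
    # (option 1 above), the imbalance only lasts until the next reset.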

Where are you looking for the load accumulation?
"load done" with psr command?
In my case, with Tux 6.5, these stats are only reset with a reboot.
Christian
"Malcolm Freeman" <[email protected]> wrote:
>
Thanks, Scott - I found confirmation that stats are reset every SANITYSCAN interval in some old 4.2.1 documentation.
Regards,
Malcolm.
Scott Orshan <[email protected]> wrote:
Malcolm,
This was an issue a number of years ago, back in one of the early 4.x releases.
But now the stats are reset, I believe every SANITYSCAN interval. It might even
happen as soon as the new service comes into operation. I'm too lazy to look, but
it's easy to test.
     Scott

Similar Messages

  • Load balancing across 4 web servers in same datacentre - advice please

    Hi All
    I'm looking for some advice please.
    The apps team have asked me about load balancing across some servers, but I'm not that well up on it for applications.
    Basically we have 4 Apache web servers with about 2000 clients connecting to them. They would like to load balance connections across all these servers, and they all need the same DNS name etc.
    What load balancing methods would I need for this? I believe they run on Linux.
    Would I need some sort of device, or can the servers run software that does this? How would it work, and how would load balancing be achieved here?
    Cheers

    Carl,
    What you have mentioned sounds very straightforward, so everything should go well.
    The ACE is a load balancer that takes load balancing decisions based on different matching methods, such as matching a virtual address, URL, or source address. Once the load balancing decision has been taken, the ACE balances the traffic according to the load balancing method you have configured (if you do not configure anything it uses the default, which is round robin), sends the traffic to one of the servers it has available, and the client gets the content.
    If you want more details about the load balancing methods, here they are:
    http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA3_1_0/configuration/slb/guide/overview.html#wp1000976
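    For intuition only, here is a tiny sketch (plain Python, reusing the rserver names from the configuration below) of the default round-robin predictor mentioned above: each new load-balancing decision simply takes the next server in the farm, regardless of load:
    from itertools import cycle

    # The three real servers from the serverfarm below, cycled in order.
    serverfarm = cycle(["lnx1", "lnx2", "lnx3"])

    def pick_rserver():
        # Round robin: each new connection goes to the next server in turn.
        return next(serverfarm)

    print([pick_rserver() for _ in range(7)])
    # ['lnx1', 'lnx2', 'lnx3', 'lnx1', 'lnx2', 'lnx3', 'lnx1']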
    For ACE deployments, the most common designs are the following:
    Bridge Mode
    One Arm Mode
    Routed Mode
    Here you have a link for Bridge Mode and a sample for that:
    http://docwiki.cisco.com/wiki/Basic_Load_Balancing_Using_Bridged_Mode_on_the_Cisco_Application_Control_Engine_Configuration_Example
    Here you have a link for One Arm Mode and a sample for that:
    http://docwiki.cisco.com/wiki/Basic_Load_Balancing_Using_One_Arm_Mode_with_Source_NAT_on_the_Cisco_Application_Control_Engine_Configuration_Example
    Here you have a link for Routed Mode and a sample for that:
    http://docwiki.cisco.com/wiki/Basic_Load_Balancing_Using_Routed_Mode_on_the_Cisco_Application_Control_Engine_Configuration_Example
    As you can see in those links, you may end up with a configuration like this:
    interface vlan 40
      description "Default gateway of real servers"
      ip address 192.168.1.1 255.255.255.0
      service-policy input remote-access
      no shutdown
    ip route 0.0.0.0 0.0.0.0 172.16.1.1
    class-map match-all slb-vip
      2 match virtual-address 172.16.1.100 any
    policy-map multi-match client-vips
      class slb-vip
        loadbalance vip inservice
        loadbalance policy slb
    policy-map type loadbalance http first-match slb
      class class-default
        serverfarm web
    serverfarm host web
      rserver lnx1
        inservice
      rserver lnx2
        inservice
      rserver lnx3
        inservice
    rserver host lnx1
      ip address 192.168.1.11
      inservice
    rserver host lnx2
      ip address 192.168.1.12
      inservice
    rserver host lnx3
      ip address 192.168.1.13
      inservice
    Please mark it if it answered your question, so other users can use it as a reference in the future.
    Hope this helps!
    Jorge

  • Load balancing across DMZs - Revisited

    I know this question has been asked before, and the answer is to have separate content switches per DMZ in order to maintain the security policy. There is an option to place the content switch in front of the firewall and then use only one content switch to load balance across multiple DMZs. Is this an acceptable design, or is the recommendation to have a separate content switch behind the firewall for each DMZ?
    Can a Cisco 6500 with CSM be configured for multiple layer 2 load balanced VLANs, thus achieving a multiple DMZ load balancing scenario with only one switch/CSM?

    How do you connect the router to the firewall?
    The problem is the response from the server to a client on the internet.
    Traffic needs to get back to the CSS; if the firewall's default gateway is the router, the response will not go through the CSS and the CSS will reset the connection.
    If you configure the default gateway of the firewall to be the CSS, then all traffic from your network to the outside will go through the CSS.
    This could be a concern as well.
    If you don't need to know the IP address of the client for your reporting, you can enable client NAT on the CSS to guarantee that the server response is sent to the CSS without having the firewall's default gateway pointing at the CSS.
    Gilles.
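    As a rough illustration of the client NAT idea (a plain Python sketch, not CSS configuration; the addresses are hypothetical), the point is that the server only ever sees a CSS-owned source address, so its reply comes back through the CSS regardless of where the firewall's default gateway points:
    # Hypothetical addresses, for illustration only.
    CSS_NAT_POOL_IP = "10.1.1.250"   # address owned by the CSS
    nat_table = {}                   # css-side port -> original client (ip, port)

    def to_server(client_ip, client_port, css_port):
        # Client NAT: rewrite the source so the server replies to the CSS.
        nat_table[css_port] = (client_ip, client_port)
        return (CSS_NAT_POOL_IP, css_port)        # what the server sees

    def from_server(css_port):
        # Reverse translation when the server's response reaches the CSS.
        return nat_table[css_port]                # original client (ip, port)

    print(to_server("203.0.113.7", 51000, 4001))  # ('10.1.1.250', 4001)
    print(from_server(4001))                      # ('203.0.113.7', 51000)
    The trade-off Gilles mentions is visible here: the server no longer sees the real client IP, which matters if you need it for reporting.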

  • Load balancing across multiple machines

    I am looking for assistance in configuring Tuxedo to perform load balancing across
    multiple machines. I have successfully performed load balancing for a service
    across different servers hosted on one machine but not to another server that's
    hosted on a different machine.
    Any assistance in this matter is greatly appreciated.

    Hello, Christina.
    Load balancing with multiple machines is a little bit different than load balancing within a single machine. One of the important resources in this kind of application is network bandwidth, so Tuxedo tries to keep the traffic among the machines as low as possible. It only balances the load (calls services on another machine) when all the servers for the service are busy on the machine where the call is made.
    I mean, if you have workstation clients attached to only one machine, then Tuxedo will call services on that machine until all its servers are busy.
    If you want load balancing, try to put one WSL on each machine, with the corresponding configuration in your WSC (with the | to make Tuxedo randomly choose one or the other), or spread your native clients among all the machines.
    And be careful with the routing!
    Ramón Gordillo
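    If it helps, here is a minimal sketch (plain Python, hypothetical host names and ports, not actual WSC code) of the random selection Ramón describes: each workstation client given a |-separated list of WSL addresses simply picks one at random, which spreads the clients, and therefore their service calls, across the machines:
    import random

    # Hypothetical WSL addresses, one per machine, separated by "|"
    # as in the WSC configuration described above.
    wsnaddr = "//machineA:7000|//machineB:7000"

    def pick_wsl(addr_string):
        # Randomly choose one of the |-separated listener addresses.
        return random.choice(addr_string.split("|"))

    # Each client connection chooses independently.
    print([pick_wsl(wsnaddr) for _ in range(5)])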
    "Christina" <[email protected]> wrote:
    >
    I am looking for assistance in configuring Tuxedo to perform load balancing
    across
    multiple machines. I have successfully performed load balancing for a
    service
    across different servers hosted on one machine but not to another server
    that's
    hosted on a different machine.
    Any assistance in this matter is greatly appreciated.

  • Load balancing across multiple paths to Internet

    Hello,
    I have a 2821 router. Currently, I have two bonded T-1 circuits to the Internet.
    I would like to add a DSL circuit to augment the T1s. I would also like to load balance across all of the circuits. Currently, IOS performs inherent load balancing for the T1 circuits. The DSL circuit is from a different provider than the T1s.
    The T1s are coming from a local ISP that runs no routing protocols within their infrastructure. (They run static routes and rely on the upstream provider for BGP.) The DSL provider is a national telecom carrier.
    What is the best way to perform load balancing for this scenario?

    Here is the answer (sort of) for anyone reading this post with the same question:
    No matter which way I choose to do it, the trick is to have the local ISP subnet advertised via BGP through both pipes. The national telecom DSL provider will not advertise a /20 subnet down a DSL pipe. (Ahh, why not? =:)
    Had the secondary pipe been a T-1, T-3, or other traditional pipe, I could have used a load balancer like a BigIP or FatPipe device, or possibly CEF within the IOS.
    Case closed. Thanks to everyone that took a look.
    Doug.

  • IPTV load balancing across broadcast servers.

    I know that the IPTV Control Server will load balance across Archive servers in the same cluster; is there a similar function with Broadcast servers? I know broadcast servers use a different delivery mechanism (multicast). We have multiple broadcast servers that take in an identical live stream, but the only way to advertise them through a URL is a separate URL per server. Is there some way to hide the multiple URLs from the client population?

    No. There is no way to load balance across multiple broadcast servers for live streams. Since this is going to be multicast, there should not be any additional load on the servers as the number of users grows.

  • ACE30 load balancing across two slightly different rservers

    Hi,
    is there a possibility to get load balancing across two rservers such that:
    when a client sends http://vip/ and it goes to rserver1, the URL is sent without change
    when a client sends http://vip/ and it goes to rserver2, the URL is modified to http://vip/xyz/
    Or maybe load balancing can be done across two serverfarms?
    thanks

    Ryszard,
    I hope you are doing great.
    I do not think that's possible, since the ACE just load balances the traffic to the servers; once the load balancing decision has been taken, it passes the "ball" to the chosen server.
    Think about this: let's say user A needs to go to Server1 but, based on the load balancing decision, is sent to Server2, which unfortunately does not have what the user was looking for. OK, fine, user A closes the connection and tries again, but now Server1 is down, so the only server available is Server2 and the ACE sends the request to Server2 again. User A just decides to leave. You see how bad that can be.
    A better approach would be to have either 2 VIPs (different IP addresses) or 2 VIPs with the same IP address but listening on different ports, perhaps one port per server.
    Hope this helps!
    Jorge

  • Load balancing across database connection

    Do you provide load balancing across database connections and allow RDBMS load
    balancing for read only access?
    Thanks in advance.


  • Tuxedo load balancing across system

    Hi there,
    does BEA Tuxedo version 8 or above have a feature, like WebLogic, to configure load balancing/failover across multiple systems?
    Thanks,
    Simon

    Simon,
    Tuxedo does offer load balancing. A high-level description of this feature
    for Tuxedo 8.0 is at http://edocs.bea.com/tuxedo/tux80/atmi/intatm24.htm
    A description of the configuration file parameters used to set up load
    balancing is at http://edocs.bea.com/tuxedo/tux80/atmi/rf537.htm ; look for
    the LDBAL, NETLOAD, and LOAD parameters.
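    As a rough sketch of how those parameters interact (plain Python with illustrative numbers; see the rf537 page above for the real semantics), the LOAD of each request is accumulated per candidate, and NETLOAD is added when a candidate sits on a remote machine, which biases the choice toward local servers:
    # Illustrative only: accumulated LOAD per candidate, with NETLOAD added
    # for remote machines when the candidates are compared.
    NETLOAD = 100

    candidates = [
        {"name": "local_server",  "remote": False, "accumulated": 500},
        {"name": "remote_server", "remote": True,  "accumulated": 300},
    ]

    def effective_load(c):
        # Remote candidates are penalised by NETLOAD in the comparison.
        return c["accumulated"] + (NETLOAD if c["remote"] else 0)

    chosen = min(candidates, key=effective_load)
    print(chosen["name"])   # 'remote_server' here: 300 + 100 < 500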

  • Load balancing across multiple application servers not working with JCo RFC

    We have a problem where inbound messages to the Mapping Runtime engine (ABAP -> J2EE) are not load balanced over application servers. However, load balancing does take place across server nodes within one application server.
    Our system comprises of the following:
    Central Instance (2 X server nodes)
    Database Instance
    2 X Dialog Instances (with 2 X server nodes each)
    The 1st application server that starts is usually the one that is used for inbound messaging.
    We have looked at the sap gateway configuration and have tried various options without much luck:
    i.e.: local gateways vs. one central gateway, load balancing type by changing parameter gw/reg_lb_level, see: http://help.sap.com/saphelp_nw70/helpdata/EN/bb/9f12f24b9b11d189750000e8322d00/frameset.htm
    Here are our release levels (component / release / level / support package):
    SAP_ABA     700     0012     SAPKA70012
    SAP_BASIS     700     0012     SAPKB70012
    PI_BASIS     2005_1_700     0012     SAPKIPYJ7C
    ST-PI     2005_1_700     0005     SAPKITLQI5
    SAP_BW     700     0013     SAPKW70013
    ST-A/PI     01J_BCO700     0000          -
    Any help would be greatly appreciated.
    Many thanks

    Tim
    Did you follow the guide here:
    How to Scale Up SAP Exchange Infrastructure 3.0  
    Learn what the most likely scaled system architecture looks like, and read about a step by step procedure to install additional dialog instances. The guide also walks you through additional configuration steps and the application of Support Package Stacks.
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/c3d9d710-0d01-0010-7486-9a51ab92b927
    We followed this guide for XI 3.0 and PI 7.0 and it works successfully!

  • CF 10 Load-Balancing with Remote Instances

    I was reading an article on Clustering/LB/HA using CF8, but have not found any updates for CF10.
    Using VM VirtualBox to set up a few virtual servers, I am looking to set up load balancing of ColdFusion 10 on 2 remote instances. The goal would be to have the ColdFusion Cluster Manager point HTTP requests to one of the two servers based on load/availability. Not really having a hardware cluster/failover setup, just managing resources on two CF instances instead of a standalone.
    The servers are Windows Server 2008 R2 with IIS 7.5 and ColdFusion 10 Enterprise installed on 3 of these machines. Let's call them CF-LBManager, CF-Web1, and CF-Web2. In the CF Docs, they show the Cluster Manager adding the local CF instance and "if you want" a remote instance. However, this scenario would require the main instance to be running and not fail in order for it to direct traffic to the other instance.
    I am trying to set this up now with CF-LBManager as just a manager of the requests coming in. In the Enterprise Manager >> Instance Manager, the local instance is shown and I add the two remote instances with the correct Remote Port, JVM Route, etc. I also made sure the <Cluster>...</Cluster> block was added to the two remote instances (CF-Web1 and CF-Web2) \runtime\conf\server.xml file too, Jetty Services also is running. Now under the Enterprise Manager >> Cluster Manager I add the two remote instances to the cluster, not the local instance on CF-LBManager with Multicast Port and Sticky Sessions enabled. On Submit, I get a green message "You must restart all the server instances and any configured webservers for these changes to take effect.". I go ahead and reboot the servers and come back.
    I now browse to the ColdFusion page as a test on CF-Web1 and CF-Web2 to make sure CF is running properly, they do. I then browse the IP of the CF-LBManager, however it only returns the local IIS web site and not redirect to one of the two cluster members. I am not seeing any message on the coldfusion-out.log on the remote instances. Am I not setting this up correctly or not enabling the Cluster Manager to take over and pass along the requests to those in the cluster?

    Unfortunately I don't have a lot of experience with CF10 on Windows, but if you are running CF behind IIS I think you will need to update the Tomcat connector configuration to do load balancing. I'm not sure if re-running the wsconfig tool on all of the servers will do this or not, but that is what I would suggest trying first. If that doesn't work you will need to update the Tomcat connector configuration manually. You can find more information on load balancing with the Tomcat connector here: http://tomcat.apache.org/connectors-doc/generic_howto/loadbalancers.html.
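    For intuition, here is a small sketch (plain Python, hypothetical instance names; not the actual connector code) of the sticky-session behaviour the connector provides once configured: a request whose session ID carries a jvmRoute suffix goes back to that instance, and everything else is balanced across the workers:
    from itertools import cycle

    # Hypothetical CF instance names used as jvmRoute values.
    workers = ["cfweb1", "cfweb2"]
    rr = cycle(workers)

    def route(session_id=None):
        # Sticky sessions: a session ID like "ABC123.cfweb2" is pinned to the
        # instance named after the dot (its jvmRoute).
        if session_id and "." in session_id:
            suffix = session_id.rsplit(".", 1)[1]
            if suffix in workers:
                return suffix
        # Otherwise fall back to simple round robin across the cluster.
        return next(rr)

    print(route())                      # new session -> next worker in turn
    print(route("ABC123.cfweb2"))       # existing session stays on cfweb2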

  • Server Load-balancing Across Two Data centers on Layer 3

    Hi,
    I have a customer who would like to load balance two Microsoft Exchange 2010 CAS Servers which are residing across two data centers.
    Which is the best solution for this? Cisco ACE or Cisco ACE GSS or both?

    I would go with source NATting the clients' IP addresses, so that return traffic from the servers is routed correctly.
    It saves you the trouble with maintaining PBR as well.
    Source NAT can be done on the ACE, by applying the configuration to either the load balancing policy, or adding the configuration to the class-map entries in the multi-match policy.
    Cheers,
    Søren
    Sent from Cisco Technical Support iPad App

  • App.server load balancing for SAP System with 1 PS

    Hi,
    In SAP CPS 7.0 (Build M26.12) I have a SAP system with Central Instance + 10 App.servers, but all instances are managed by 1 ProcessServer.
    After activating the "App.server load balancing" setting in the SAP system definition, the application servers become visible in CPS with their load factors (number of BGD work processes on the app.servers) and load numbers (number of active jobs on the app.servers).
    This is fine so far, but the additional functionality is not working as I would expect; I have issues with 2 pieces of functionality:
    1. Based on the documentation, after also activating the XAL connection, CPS should submit the job on the app.server with the best performance, based on XAL monitoring data, by filling the TARGET_SERVER parameter.
    This functionality is not working for me at all.
    2. A useful piece of functionality after activating the "App.server load balancing" setting is that the ProcessServer goes into "Overloaded" status when all BGD work processes of the SAP system are occupied, thus restricting the submission of new jobs during an overload situation. But I had an issue with this functionality as well: after the SAP system recovered from the overload situation, CPS still remained in Overloaded status (so no new jobs were submitted).
    As a workaround I increased the threshold values for loads on all app.servers for this SAP system, which was fine for several days, but after a while I believe this was the reason for unexpected performance issues in CPS, so I have deactivated the "App.server load balancing" setting altogether for this ProcessServer.
    I would appreciate your feedbacks with this functionality.
    Thanks and Regards,
    Ernest Liczki

    Hi Preetish,
    This connect string option is to load balance RFC connections. These are balanced upon login; once you are connected to a particular application server (AS) you stay on that server until you reconnect.
    Since CPS uses multiple RFC connections, this will result in the connections being distributed over the available AS resources, which is fine as long as they are generally evenly loaded. If you have certain AS hosts that are continuously more loaded than the rest, then you probably don't want the CPS RFC connections to end up on those servers.
    The original question is about load balancing of batch jobs over the available AS resources, and this is done independently of the RFC connection load balancing. Even if all CPS RFC connections are pinned to the DB/CI host, you can still load balance jobs over the available SAP AS hosts, either by using SAP's built-in balancing or the CPS algorithm, by activating the checkbox as indicated in the first entry in this thread.
    Finally, to reply to Ernest's question: I believe there are some fixes to the app load balancing in the latest release; M26.17 should be available on the SWDC now.
    Regards,
    Anton.
    Edited by: Anton Goselink on May 29, 2009 9:06 PM

  • Load Balancing across Domains

    We have been carrying out some tests with the WebLogic Tuxedo Connector (WTC),
    and have observed some strange behaviour. Can anyone out there explain it?
    We are using the 8.1 version of both Tux and WLS/WTC, and have set up a Tux domain
    consisting of a master and slave machine (actually, both are set up on a single
    Windows 2000 system using PMID). Each machine has a local domain (TDOM1 and TDOM2),
    each going to a separate instance of a WebLogic Server (WDOM1 and WDOM2). The
    domain config on the Tux side is pretty standard, and the only service listed
    is as follows (TOLOWER is provided by each of the WLS instances via WTC):
    *DM_REMOTE_SERVICES
    TOLOWER RDOM="WDOM1"
    TOLOWER RDOM="WDOM2"
    We have modified the client calling TOLOWER to use a tpacall with TPNOREPLY (so
    that we can fire off 1000 calls one after the other). Load Balancing is turned
    on, and we have a NETLOAD value of 0. When we run the client on the slave machine
    it load balances beautifully - 500 each through TDOM1 and TDOM2; but when we run
    the same client on the master machine it sends the first request via the slave
    (TDOM2) and the other 999 through the master (TDOM1). If we run the client on
    the slave again immediately afterwards, it load balances perfectly as before,
    so there does not seem to be any bottleneck.
    One thing we noticed when doing a psr is that the count for the number of messages
    is correct for each GWTDOMAIN server, but the load is always zero irrespective
    of how many messages have been processed (we have not specified any load factors
    - the *SERVICES section is empty).
    Any ideas?
    Thanks for any feedback,
    Malcolm.
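    Not an answer, but one way to see why the zero load readings on the GWTDOMAIN servers could matter (a hypothetical Python sketch, assuming a lowest-accumulated-load comparison; it is not a confirmed explanation): if every candidate's accumulator stays at zero, the comparison is always a tie, and a stable tie-break sends essentially everything to the same domain:
    # Hypothetical: if accumulated load never increases, "pick the lowest
    # accumulated load" is always a tie and the first entry wins every time.
    accumulated = {"TDOM1": 0, "TDOM2": 0}

    def route_one():
        target = min(accumulated, key=accumulated.get)   # always TDOM1 on a tie
        # accumulated[target] += load   <- never happens if the load stays 0
        return target

    counts = {"TDOM1": 0, "TDOM2": 0}
    for _ in range(1000):
        counts[route_one()] += 1
    print(counts)   # {'TDOM1': 1000, 'TDOM2': 0}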

    We're looking into another load balancing issue that might be related to
    this. I'll pass this message on to the person who is looking into this.

  • OSPF load balancing across multiple port channels

    I have googled/searched for this everywhere but haven't been able to find a solution. Forgive me if I leave something out but I will try to convey all relevant information. Hopefully someone can provide some insight and many thanks in advance.
    I have three switches (A, B, and C) that are all running OSPF and LACP port channelling among themselves on a production network. Each port channel interface contains two physical interfaces and trunks a single vlan (so a vlan connecting each switch over a port channel). OSPF is running on each vlan interface.
    Switch A - ME3600
    Switch B - 3550
    Switch C - 3560G
    This is just a small part of a much larger topology. This part forms a triangle, if you will, where A is the source and C is the destination. A and C connect directly via a port channel and are OSPF neighbors. A and B connect directly via a port channel and are OSPF neighbors. B and C connect directly via a port channel and are OSPF neighbors. Currently, all traffic from A to C traverses B. I would like to load balance traffic sourced from A with a destination of C on the direct link and on the links through B. If all traffic is passed through B, traffic is evenly split on the two interfaces on the port channel. If all traffic is pushed onto the direct A-C link, traffic is evenly balanced on the two interfaces on that port channel. If OSPF load balancing is configured on the two vlans from A (so A-C and A-B), the traffic is divided to each port channel but only one port on each port channel is utilized while the other one passes nothing. So half of each port channel remains unused. The port channel on B-C continues to load balance, evenly splitting the traffic received from half of the port channel from A.
    A and C port channel load balancing is configured for src-dst-ip. B is a 3550 and does not have this option, so it is set to src-mac.
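    The behaviour described above is what per-flow hashing is expected to do: with a single source/destination pair, an src-dst-ip hash always lands in the same bucket, so only one member of each port channel carries the traffic. A minimal sketch (plain Python, toy hash; not the platform's actual algorithm):
    import ipaddress

    def member_link(src, dst, n_members=2):
        # Toy src-dst-ip hash: XOR the two addresses and take the result modulo
        # the number of physical members. Real switches use their own hash, but
        # the property is the same: one (src, dst) pair -> one member link.
        s = int(ipaddress.ip_address(src))
        d = int(ipaddress.ip_address(dst))
        return (s ^ d) % n_members

    # One source talking to one destination always selects the same member,
    # leaving the other member of the port channel idle.
    print(member_link("10.82.15.1", "10.82.4.10"))
    print(member_link("10.82.15.1", "10.82.4.10"))   # same bucket every time

    # Many distinct host pairs spread across both members.
    print({member_link(f"10.82.15.{i}", "10.82.4.10") for i in range(1, 20)})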
    Relevant configuration:
    Switch A:
    interface Port-channel1
    description Link to B
     port-type nni
     switchport trunk allowed vlan 11
     switchport mode trunk
    interface Vlan11
     ip address x.x.x.134 255.255.255.254
    interface Port-channel3
    description Link to C
     port-type nni
     switchport trunk allowed vlan 10
     switchport mode trunk
    interface Vlan10
     ip address x.x.x.152 255.255.255.254
    Switch B:
    interface Port-channel1
     description Link to A
     switchport trunk encapsulation dot1q
     switchport trunk allowed vlan 11
     switchport mode trunk
    interface Vlan11
     ip address x.x.x.135 255.255.255.254
    interface Port-channel2
     description Link to C
     switchport trunk encapsulation dot1q
     switchport trunk allowed vlan 12
     switchport mode trunk
    interface Vlan12
     ip address x.x.x.186 255.255.255.254
    Switch C:
    interface Port-channel1
     description Link to B
     switchport trunk encapsulation dot1q
     switchport trunk allowed vlan 12
     switchport mode trunk
    interface Vlan12
     ip address x.x.x.187 255.255.255.254
    interface Port-channel3
     description Link to A
     switchport trunk encapsulation dot1q
     switchport trunk allowed vlan 10
     switchport mode trunk
    interface Vlan10
     ip address x.x.x.153 255.255.255.254

    This is more FYI. 10.82.4.0/24 is a subnet on switch C. The path to it is split across vlans 10 and 11 but once it hits the port channel interfaces only one side of each is chosen. I'd like to avoid creating more vlan interfaces but right now that appears to be the only way to load balance equally across the four interfaces out of switch A.
    ME3600#sh ip route 10.82.4.0
    Routing entry for 10.82.4.0/24
      Known via "ospf 1", distance 110, metric 154, type extern 1
      Last update from x.x.x.153 on Vlan10, 01:20:46 ago
      Routing Descriptor Blocks:
        x.x.x.153, from 10.82.15.1, 01:20:46 ago, via Vlan10
          Route metric is 154, traffic share count is 1
      * x.x.x.135, from 10.82.15.1, 01:20:46 ago, via Vlan11
          Route metric is 154, traffic share count is 1
    ME3600#sh ip cef 10.82.4.0
    10.82.4.0/24
      nexthop x.x.x.135 Vlan11
      nexthop x.x.x.153 Vlan10
    ME3600#sh ip cef 10.82.4.0 internal       
    10.82.4.0/24, epoch 0, RIB[I], refcount 5, per-destination sharing
    sources: RIB 
    ifnums:
    Vlan10(1157): x.x.x.153
    Vlan11(1192): x.x.x.135
    path 093DBC20, path list 0937412C, share 1/1, type attached nexthop, for IPv4
    nexthop x.x.x.135 Vlan11, adjacency IP adj out of Vlan11, addr x.x.x.135 08EE7560
    path 093DC204, path list 0937412C, share 1/1, type attached nexthop, for IPv4
    nexthop x.x.x.153 Vlan10, adjacency IP adj out of Vlan10, addr x.x.x.153 093A4E60
    output chain:
    loadinfo 088225C0, per-session, 2 choices, flags 0003, 88 locks
    flags: Per-session, for-rx-IPv4
    16 hash buckets             
    < 0 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
    < 1 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
    < 2 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
    < 3 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
    < 4 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
    < 5 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
    < 6 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
    < 7 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
    < 8 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
    < 9 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
    <10 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
    <11 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
    <12 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
    <13 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
    <14 > IP adj out of Vlan11, addr x.x.x.135 08EE7560
    <15 > IP adj out of Vlan10, addr x.x.x.153 093A4E60
    Subblocks:                                                                                  
    None
