Question on WCCPv2 - bucket assignment for WCCPv2 load balancing

Hello,
I would like to know if anyone has tried running Cisco's WAAS/WAFS/WAE or
Squid proxy as a cache cluster to leverage the load balancing support in WCCPv2.
I am trying to understand WCCP-based transparent network redirection in a lab setup using only Squid's WCCP support and Cisco routers. When I tried two proxies for load balancing, I saw that the router *always* allocates buckets in the reverse order of the specified assignment - it's confusing, as this behavior isn't mentioned in Cisco's WCCPv2 protocol drafts.
In my case, the lead cache (the one with the lowest IP) specifies buckets 0-127 for itself and 128-255 for the other cache, but the router assigns buckets 0-127 to the second cache and 128-255 to the lead cache.
I have attached the Ethereal trace. Can someone explain what is going wrong here?
The issue was found in the following router versions:
Cisco 3600, IOS 12.3(1a);
Cisco 2600 IOS 12.3(9a);
Cisco 2800 IOS 12.4(3d)
Squid proxy:
2.5
WCCP status output - all the routers above show the same behavior.
From the trace, 192.168.8.231 specifies the bucket distribution, as it's the
lead cache with a lower IP than 192.168.41.232.
router#sh ip wccp 99 detail
WCCP Cache-Engine information:
Web Cache ID: 192.168.41.232
Protocol Version: 2.0
State: Usable
Initial Hash Info: 00000000000000000000000000000000
00000000000000000000000000000000
Assigned Hash Info: 00000000000000000000000000000000
FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
Hash Allotment: 128 (50.00%)
Packets Redirected: 0
Connect Time: 00:23:06
Bypassed Packets
Process: 0
Fast: 0
CEF: 0
Web Cache ID: 192.168.8.231
Protocol Version: 2.0
State: Usable
Initial Hash Info: 00000000000000000000000000000000
00000000000000000000000000000000
Assigned Hash Info: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
00000000000000000000000000000000
Hash Allotment: 128 (50.00%)
Packets Redirected: 0
Connect Time: 00:23:05
Bypassed Packets
Process: 0
Fast: 0
CEF: 0
Thanks in advance.

I'm not an expert on the details of WCCP, but it looks like Squid is not setting the bucket info correctly.
From the draft [below], the low-order bit of each bucket byte is the A flag - for alternative hashing.
And whether alternative hashing is used is determined by another flag in the service info.
So why is Squid setting this bit?
I feel like they forgot to shift the index one bit to the left.
Bucket 0-255
Contents of the Redirection Hash Table. The content of each bucket is a
web-cache index value in the range 0-31. If set the A flag indicates
that alternative hashing should be used for this web-cache. The value
0xFF indicates no web-cache has been assigned to the bucket.
0 1 2 3 4 5 6 7
+-+-+-+-+-+-+-+-+
|    Index    |A|
+-+-+-+-+-+-+-+-+
I'm double-checking with the developers of our cache, but I feel like this is the explanation.
More info to come if I'm wrong.
Gilles.
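To make the bucket layout quoted above concrete, here is a minimal sketch of the encoding it describes, assuming the usual convention that bit 0 is the most significant bit of the byte; the function names are just for illustration. If a sender stores the raw web-cache index without shifting it past the A flag, the receiver decodes a different index with alternative hashing set, which is the kind of mix-up suspected above.

```python
# Sketch of one bucket byte per the draft excerpt above: the top 7 bits
# carry the web-cache index (0-31), the low bit is the A flag, and 0xFF
# means the bucket is unassigned.

UNASSIGNED = 0xFF

def encode_bucket(index, alt_hash=False):
    """Pack a web-cache index and its A flag into one bucket byte."""
    return (index << 1) | (1 if alt_hash else 0)

def decode_bucket(byte):
    """Return (index, alt_hash) for a bucket byte, or (None, None) if unassigned."""
    if byte == UNASSIGNED:
        return None, None
    return byte >> 1, bool(byte & 1)

# If the index is written unshifted, the receiver misreads it:
print(decode_bucket(1))                  # -> (0, True): index 1 read as index 0 plus A flag
print(decode_bucket(encode_bucket(1)))   # -> (1, False): the intended assignment
```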

Similar Messages

  • Best way for HTTP load balancing in OSB

    Hi everybody,
We have set up an OSB cluster and we need to load balance HTTP requests across the managed servers. Looking for info about load balancing in OSB, I found that there are mainly two options: using a hardware load balancer or a software solution like the WebLogic HttpClusterServlet. At the moment we have no hardware balancer available, so we will have to take the software option. I found some articles about configuring HttpClusterServlet, like http://redstack.wordpress.com/2010/12/20/using-weblogic-as-a-load-balancer.
But I have a question about this configuration. If we use a managed server as an HTTP proxy that balances requests between the OSB managed servers, what would happen if this server goes down? I think one of the main goals of a clustered deployment is avoiding a single point of failure, but with that setup all requests would depend on the availability of the proxy managed server.
Could you recommend a setup for implementing load balancing in OSB?
    Thank you in advance,
    Daniel.

Load balancing in a cluster for HTTP requests can be achieved in at least 4 different ways:
(1) use a hardware load balancer like F5 BIG-IP LTM
(2) use a web server with the WebLogic plugin to front-end the cluster
(3) use WebLogic with HttpClusterServlet
(4) use DNS round robin - this works if you have managed servers running on 2 machines (say mach1, mach2) but on the same port. HTTP clients use the hostname 'mach' to access the URLs, and DNS does a round-robin name resolution of mach to the mach1 and mach2 IP addresses.
All the options except (1) achieve only load balancing, not automatic failover. Hardware load balancers have the extra feature of probing (sending periodic health checks to the targets), by which they can detect whether a target resource is alive and, if not, send the traffic to other nodes which are alive. This is why hardware load balancers are worth their investment.
The other options may still work if the client is coded to retry on failure, so that on the 2nd or a subsequent attempt the request is routed to a machine which is alive (see the client-side sketch below).
For options (1), (2) and (3), you also need some redundancy in the load balancing device (web server, WebLogic or hardware load balancer) to prevent a single point of failure. Hardware load balancers are usually deployed in redundant pairs to achieve this.
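For option (4) and the retry point above, here is a minimal client-side sketch. The hostname 'mach' and port 7001 are hypothetical, standing in for a DNS name that resolves to both managed-server machines.

```python
# Minimal sketch of DNS round robin plus client-side retry (option 4 above).
# "mach" and port 7001 are hypothetical placeholders.
import socket
import urllib.request

def fetch_with_failover(host, port, path="/"):
    """Try every A record behind the DNS name until one of them answers."""
    records = socket.getaddrinfo(host, port,
                                 family=socket.AF_INET,
                                 type=socket.SOCK_STREAM)
    last_error = None
    for _, _, _, _, (ip, _) in records:
        try:
            # Note: connecting by IP skips the Host header a real client would send.
            with urllib.request.urlopen(f"http://{ip}:{port}{path}", timeout=5) as resp:
                return resp.status
        except OSError as exc:          # refused, timed out, HTTP error ...
            last_error = exc
    raise RuntimeError(f"all addresses behind {host} failed") from last_error

# fetch_with_failover("mach", 7001)
```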

  • GSS act as an authoritative DNS for non-load balanced sites?

    I have a client asking if a GSS can be the authoritative DNS server for their entire domain.  This would include sites that are not load balanced.
    TIA,
    Dan

    Hi Dan,
    Yes, you would just have to create a new domain under the Domain Lists, create an Answer Group associated to that domain and then you can start adding DNS answers. For non-load balanced sites you would just have one answer in your answer group.
    Sincerely,
    Kyle

  • Forms Services availability checking for BIGIP Load Balancer

    We are load balancing across a number of 10.1.2.2 Forms servers using a BIGIP load balancer. Currently our load balancing is done based on which server has the "least connections" to the BIGIP. So far we have been using the following test URL to allow BIGIP to check the availability of the Forms Services on each server.
    http://server:port/forms/frmservlet?ifcmd=status
This works well; however, it only checks through to the HTTP level within Forms Services. We encountered a problem where the Forms Services failed to work on a particular server, yet the above URL showed that everything was OK. The effect of this was that all new users attempting to log in were directed to the failed server, as this server had the "least number of connections".
After raising an SR with Oracle, they advised that the forking of runtime processes had probably failed and that this was not detectable by the load balancer with the above URL. So they have recommended a number of options for checking the status of the Forms Services. These are:
    a) http://server:port/forms/frmservlet
    This loads the default Form and therefore by definition tests the forking of runtime processes. However BIGIP is unable to automatically process the information to distinguish whether the service is up or down. Oracle recommended that if using this method we would need to customise BIGIP to handle the various FRM-xxxx error codes.
b) http://server:port/forms/frmservlet?userid=scott/tiger@YOURDB&form=yourtestform.fmx
Even more thorough would be to actually log on to the database using a test form, as above.
My question is: does anyone out there have experience in checking Forms Services availability using these last two methods? I'm not sure how to customize the load balancer so that it can handle the output of these URLs. Also, when using the original URL, is it normal to load balance using a "least connections" method, or do people out there use a different algorithm?
Thanks for any help/advice that you can give.
    Regards,
    Philippe

Well, the SR was followed up and it looks like the only course of action is to use the standard HTTP check: http://server:port/forms/frmservlet?ifcmd=status ...
... unless, that is, you want to do some serious customisation (a rough sketch of such a check follows at the end of this thread). Oracle doesn't support any other form of checking.
    I'm guessing from the lack of responses to this thread that this hasn't been an issue for anybody else ... ???
    Any thoughts/suggestions really welcome as we go into production in 4 weeks.
    a) What do people recommend for load balancing Forms ... least connection, round robin ... ?
    b) Do people use http://server:port/forms/frmservlet?ifcmd=status or have some of you used something else?
    Thanks,
    Philippe
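For completeness, here is a rough sketch (in Python) of the kind of custom check option (a) implies. Everything beyond the URL is an assumption: the "FRM-xxxxx in the body means failure" rule and the stdout/exit-code reporting convention must be validated against your own frmservlet output and the BIG-IP external monitor documentation before use.

```python
# Hypothetical external check for the deeper test described in option (a):
# request /forms/frmservlet (which forks a runtime) and treat any FRM-xxxxx
# code in the body as a failure. The failure marker and the reporting
# convention (stdout vs. exit code) are assumptions.
import re
import sys
import urllib.request

def forms_is_healthy(server, port):
    url = f"http://{server}:{port}/forms/frmservlet"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read().decode("utf-8", errors="replace")
    except OSError:
        return False                         # no usable HTTP answer at all
    return re.search(r"FRM-\d{5}", body) is None   # assumed failure marker

if __name__ == "__main__":
    server, port = sys.argv[1], sys.argv[2]
    if forms_is_healthy(server, port):
        print("up")                          # some monitors key off stdout
        sys.exit(0)
    sys.exit(1)
```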

  • Coherence Extend Address-provider for dynamic load balancing

    Hi,
I want to have load balancing across my proxies, so I am trying to use the <address-provider> feature.
Here is my setup:
I have two proxies and many clients. I am configuring both the proxies and the clients to use a load-sharing scheme.
There are classes implementing the AddressProvider interface, so I am going to use one of them.
Here are the proxy server settings:
<proxy-scheme>
  <service-name>ExtendTcpProxyService</service-name>
  <thread-count>10</thread-count>
  <acceptor-config>
    <tcp-acceptor>
      <address-provider>
        <class-name>com.tangosol.net.RefreshableAddressProvider</class-name>
      </address-provider>
    </tcp-acceptor>
  </acceptor-config>
  <autostart>true</autostart>
</proxy-scheme>
I know there is some silly mistake in there.
I am getting the following error message in the log:
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.emc.srm.common.daemon.SrmDaemon.start(SrmDaemon.java:59)
    ... 5 more
    Caused by: (Wrapped: Missing or inaccessible constructor "com.tangosol.net.RefreshableAddressProvider()"
    <address-provider>
    <class-name>com.tangosol.net.RefreshableAddressProvider</class-name>
    </address-provider>) java.lang.InstantiationException
    at com.tangosol.util.Base.ensureRuntimeException(Base.java:293)
    at com.tangosol.run.xml.XmlHelper.createInstance(XmlHelper.java:2542)
    at com.tangosol.run.xml.XmlHelper.createInstance(XmlHelper.java:2426)
    at com.tangosol.net.ConfigurableAddressProvider.createAddressProvider(ConfigurableAddressProvider.java:277)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.peer.acceptor.TcpAcceptor.configure(TcpAcceptor.CDB:78)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ProxyService.configure(ProxyService.CDB:67)
    at com.tangosol.coherence.component.util.SafeService.startService(SafeService.CDB:6)
    at com.tangosol.coherence.component.util.SafeService.ensureRunningService(SafeService.CDB:27)
    at com.tangosol.coherence.component.util.SafeService.start(SafeService.CDB:14)
    at com.tangosol.net.DefaultConfigurableCacheFactory.ensureServiceInternal(DefaultConfigurableCacheFactory.java:1057)
    at com.tangosol.net.DefaultConfigurableCacheFactory.ensureService(DefaultConfigurableCacheFactory.java:892)
    at com.tangosol.net.DefaultCacheServer.startServices(DefaultCacheServer.java:81)
    at com.tangosol.net.DefaultCacheServer.intialStartServices(DefaultCacheServer.java:250)
    at com.tangosol.net.DefaultCacheServer.startAndMonitor(DefaultCacheServer.java:55)
    at com.tangosol.net.DefaultCacheServer.main(DefaultCacheServer.java:197)
    ... 10 more
    Caused by: java.lang.InstantiationException
    at sun.reflect.InstantiationExceptionConstructorAccessorImpl.newInstance(InstantiationExceptionConstructorAccessorImpl.java:30)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at java.lang.Class.newInstance0(Class.java:355)
    at java.lang.Class.newInstance(Class.java:308)
    at com.tangosol.util.ClassHelper.newInstance(ClassHelper.java:587)
    at com.tangosol.run.xml.XmlHelper.createInstance(XmlHelper.java:2501)
    ... 23 more
    Cannot start daemon
Two questions:
1) Without an address-provider section in the proxy server configuration XML, will it use a default address provider implementation?
2) What's wrong with the proxy server settings XML that causes the above error?
    Thanks
    Prab

    Hi LuK,
Sounds like no load balancing is happening in my environment.
I didn't use the address-provider tag, since the AddressProvider interface API mentions that by default the ConfigurableAddressProvider implementation is used.
Here is an XML snippet from the remote client cache configuration XML:
<tcp-initiator>
  <remote-addresses>
    <socket-address>
      <address>PROXY-A</address>
      <port>9099</port>
    </socket-address>
    <socket-address>
      <address>PROXY-B</address>
      <port>9099</port>
    </socket-address>
  </remote-addresses>
  <connect-timeout>10s</connect-timeout>
</tcp-initiator>
What I mean by load balancing is: I will be having a lot of web service queries which run very fast, and I would like to see both proxies serving the web requests.
What I am observing is that one connection is made to one proxy when the client starts up, and the client holds on to it (no matter whether the other proxy is free or not).
Is it a bug, or am I making some mistake?
    Thanks
    Prab

  • SUNW.gds for jboss + Load Balancing Group = Failed to connect to host ...

    Hi all!
    In a simple two node cluster (Solaris cluster 3.2) with quorum server I've created a resource for jboss 5.1.0 using SUNW.gds. It is supposed to be load-balanced. To achieve that I've followed instructions from [http://download.oracle.com/docs/cd/E18728_01/html/821-1258/gds-25.html]
    The command I've used to create the resource was:
    clresource create -g scalable-rg -t SUNW.gds -p resource_dependencies=vip -p Scalable=TRUE -p Start_timeout=400 -p Stop_timeout=400 -p Probe_timeout=30 -p Port_list=8080/tcp -p Start_command="/opt/jboss-5.1.0.GA/bin/run.sh -b 0.0.0.0" -p Child_mon_level=0 -p Failover_enabled=TRUE -p Stop_signal=15 -p Load_balancing_policy=LB_STICKY_WILD jboss-rs
The whole configuration seems to work, but when the second node joins the cluster, the resource with JBoss can't bind to the shared IP address. There are many entries in /var/adm/messages like:
    Jan 19 13:46:35 play SC[,SUNW.gds:6,scalable-rg,jboss-rs,gds_svc_start]: [ID 141062 daemon.error] Failed to connect to host vip and port 8080: Connection refused.
    Jan 19 13:46:35 play SC[,SUNW.gds:6,scalable-rg,jboss-rs,gds_svc_start]: [ID 805735 daemon.error] Failed to connect to the host <vip> and port <8080>.
    Jan 19 13:46:37 play SC[,SUNW.gds:6,scalable-rg,jboss-rs,gds_svc_start]: [ID 141062 daemon.error] Failed to connect to host vip and port 8080: Connection refused.
    Jan 19 13:46:37 play SC[,SUNW.gds:6,scalable-rg,jboss-rs,gds_svc_start]: [ID 805735 daemon.error] Failed to connect to the host <vip> and port <8080>.
    Jan 19 13:46:39 play SC[,SUNW.gds:6,scalable-rg,jboss-rs,gds_svc_start]: [ID 141062 daemon.error] Failed to connect to host vip and port 8080: Connection refused.
    Jan 19 13:46:39 play SC[,SUNW.gds:6,scalable-rg,jboss-rs,gds_svc_start]: [ID 805735 daemon.error] Failed to connect to the host <vip> and port <8080>.
    Jan 19 13:46:41 play SC[,SUNW.gds:6,scalable-rg,jboss-rs,gds_svc_start]: [ID 141062 daemon.error] Failed to connect to host vip and port 8080: Connection refused.
    Jan 19 13:46:41 play SC[,SUNW.gds:6,scalable-rg,jboss-rs,gds_svc_start]: [ID 805735 daemon.error] Failed to connect to the host <vip> and port <8080>.
    Jan 19 13:46:43 play SC[,SUNW.gds:6,scalable-rg,jboss-rs,gds_svc_start]: [ID 141062 daemon.error] Failed to connect to host vip and port 8080: Connection refused.
    Jan 19 13:46:43 play SC[,SUNW.gds:6,scalable-rg,jboss-rs,gds_svc_start]: [ID 805735 daemon.error] Failed to connect to the host <vip> and port <8080>.
However, after some time, it finally somehow binds to the shared IP and port 8080.
Is this something I should be worried about? Or is it a normal thing, since it takes some time, e.g., to bring up the interface with the shared IP? I've never had to install such a configuration, and I have neither the intuition nor the experience. Any help would be very appreciated.
    Thanks a bunch,
    Bart

    Hi, a couple of things to check:
    - did you check that both JBOSS instances were up and running?
- can you check in the logs (on both nodes) when a message saying something like "service .... registered..." showed up? This should show up for both JBoss instances. From the time of the second "registered" message, the load balancer should be sending incoming requests to both instances using its distribution mechanism
- did you check the vip address - it should be on an external interface on one of the nodes and on lo0 on the other?
    - connection refused looks like a server problem; can you connect to the JBOSS instance locally?
    Hth
    Hartmut

  • Not able to find mod_oc4j.conf file for setting Load Balancing on ESB

    Hi Guys,
I am trying to set up load balancing on ESB. I got a document from the Oracle site about setting up load balancing on ESB. According to that document, we have to change the mod_oc4j.conf file to set the roundrobin:weighted, metric or random algorithms.
But I am not able to find the mod_oc4j.conf file in my ORACLE_HOME directory.
The expected path is
ORACLE_HOME/Apache/Apache/conf/mod_oc4j.conf
I have all the folders up to conf, but in the conf folder there is no mod_oc4j.conf file.
I just want to know: is there an installation problem, or do we have to create the mod_oc4j.conf file in that conf folder to set up load balancing on ESB?
Kindly give me the exact solution on this.
    Thanks in advance

    Hi Chintan,
I have the Apache folder, and inside it I have the conf folder, but in that folder I am not able to find mod_oc4j.conf. I know ESB maintains load in a transparent way, but in my project we have a heavy load on ESB and we have to manage this load in a proper way.
Please give me the exact solution to set up load balancing on ESB.
    Thank you
    Bollineni

  • Geting IP address for Internal Load Balancer

I've recently been experimenting with internal load balancing for VMs. I'm able to create and delete an internal load balancer (ILB) using the .NET wrapper for the API (https://github.com/Azure/azure-sdk-for-net). What I cannot do, though, is actually get the internal address for it. Nor does it seem you can get it from the REST API (which, as far as I know, is what the .NET wrapper wraps). The only method I can see that claims to get the address is PowerShell.
Can anyone confirm whether there is any way, using the REST API or its .NET wrapper, to obtain the internal address for the ILB?

I have not looked into the .NET wrapper that you mentioned here, but according to this PowerShell script:
    http://msdn.microsoft.com/en-us/library/azure/dn690125.aspx
    $svc="<Cloud Service Name>"
    $ilb="<Name of your ILB instance>"
    $subnet="<Name of the subnet within your virtual network-optional>"
    $IP="<The IPv4 address to use on the subnet-optional>"
Add-AzureInternalLoadBalancer -ServiceName $svc -InternalLoadBalancerName $ilb -SubnetName $subnet -StaticVNetIPAddress $IP
    IP address is optional, so maybe the wrapper hasn't implemented this, which is kind of undesirable. But maybe it allows you to specify the IP?
    Frank

  • Azure: "Cloud Services" for VM - Load Balancing, yes, and other things?

I'm trying to get a handle on the significance of the cloud service (that is created when a new VM is created). I understand that a group of VMs needs to belong to the same cloud service in order to participate in load balancing. I can't see any other reason to group VMs into a single cloud service. On the other hand, it seems like overkill to create a cloud service for each VM.
Are there any advantages/reasons to adding a group of VMs to a cloud service other than load balancing?

    Hi,
If you make a group of VMs into a cloud service, you can configure and manage them yourself: you can select Linux or Windows Server VMs and either compose the VM images in the cloud or upload a VHD you've previously created using Hyper-V. You can capture a VM and add it to your image gallery for easy reuse. You can also run a product like Active Directory, SQL Server or SharePoint Server successfully, etc.
I suggest you have a look at the following article (creating a VM as a cloud service belongs to IaaS):
http://davidpallmann.blogspot.in/2012/07/windows-azure-is-3-lane-highway-how-to.html
    Best Regards

  • ConnCacheBean Setting for a Load Balance  URL

    Hi,
We are using the ConnCacheBean to set the URL property. We have a new load-balanced database; how can I pass that on as a parameter? Thanks.
    <jsp:setProperty name="tdbean" property="URL" value= "jdbc:oracle:thin:@----" />
    Thanks.

    Hi,
    I would pass this to the database forum as this class belongs to the Oracle JDBC package
    Frank

  • GATP: ATP bucket assignment for supply elements - how to use BucketLogic ?

    Hello experts,
    I have a little issue with applying correct bucket logic - hoping somebody has a hint for me on this.
    My case:
    I have a PurchaseOrder coming in on Date_X, Time 00:00 in R3 --> same day and time in Product view in APO
    I have a ProductionOrder coming in on Date_X, Time 24:00 in R3 --> same day and time = 23:59:59 in Product view in APO
.. this means both elements should create ATP supply qty to confirm SalesOrders for Date_X
    My general ATP settings in APO are very simple:
    "ShiftReceipts" = 00:00, "IssueLimit" = 00:00 (there is no offset defined etc., demand & supply bucket are in synch and should behave like R3 ATP check)
    Now if I apply in the "progressiv" Bucket Logic in the ATP group
    --> the PurchaseOrder becomes available a day before Date_X (problem, too early)
    --> the ProductionOrder becomes available on Date_X (correct)
    Now if I apply in the "conservative" Bucket Logic in the ATP group
    --> the PurchaseOrder becomes available on Date_X (correct)
    --> the ProductionOrder becomes available on Date_X+1 (problem, too late)
I believe it's somehow linked to how SAP handles the supply element assignment to an ATP bucket when it falls exactly onto a "bucket cut line"... any help is very appreciated!
    Regards
    Thomas

    Hello Michael,
thanks for the hint. I'll give it a try, but ...
I want to avoid the "exact" logic, since SAP does not recommend it for performance reasons (we are running massive BOPs multiple times a day for several plants). Basically we want to gain the BOP performance advantages over R3 by using the ATP time series aggregation with gATP - therefore I'm playing around with "conservative" and "progressive" in the ATP bucket logic.
I simply need to have all supply elements of a day in R3 assigned to the same day as available qty in APO. I wonder why such a simple case is not manageable in APO - meanwhile my guess is that we have a bug here (using SCM 5.0 SP13) - but I haven't found anything related in OSS.
    Regards
    Thomas

  • Help for ACE load balancing process

    Hi All,
How can servers behind the ACE access a VIP on the same unit/context?
The scenario is this:
10.51.72.247 --> 204.110.71.140 --> 10.51.70.41
As you can see, it is traffic within the same ACE context.
Could you please share the document for the same?
    Thanks in advance,
    Reg
    Mohamed.

    Hope you are doing well!
Since you are having issues with ACE, have you considered using Citrix NetScaler with Cisco RISE integration? I wanted to mention the new features Automated Policy Based Routing (APBR) and RISE (Remote Integrated Services Engine) that are available on Citrix NetScaler, which might ease your pain points in configuring services and provide significant OPEX and CAPEX savings when used with a Nexus-series switch like the Nexus 7000.
    Here are some details and links
    RISE (Remote Integrated Services Engine) is an innovative, industry-first architecture conceived by the Nexus Services engineering team to seamlessly integrate Nexus switches with appliances offering L2/L3/L4-L7 services. RISE makes the service appliance look like a line card in the Nexus 7K series. This integration allows any appliance to take advantage of the benefits of an in-chassis module such as increased application performance, high application availability, and data center consolidation.
    RISE press release on Wall Street Journal : http://online.wsj.com/article/PR-CO-20140408-905573.html
    RISE At A Glance white paper: http://www.cisco.com/c/dam/en/us/products/collateral/switches/nexus-7000-series-switches/at-a-glance-c45-731306.pdf
    RISE announcement blog: http://blogs.cisco.com/datacenter/rise
    RISE Video at Interop: https://www.youtube.com/watch?v=1HQkew4EE2g
    Cisco RISE page: www.cisco.com/go/rise
    Gartner blog on RISE: “Cisco and Citrix RISE to the Occasion”: http://blogs.gartner.com/andrew-lerner/2014/03/31/cisco-and-citrix-rise-to-the-adc-occasion/
    Please contact us for a demo/presentation/POC. Please send email to [email protected]
    Thanks
    Avni

  • Cookie for HTTP Load Balancing

    I'm getting a lot of bots hitting my site.
    Log entries are very similar (except for the source IP):
1.247.32.58 - - [11/Dec/2012:22:57:03 -0800] "POST /?ptrxcz_Ah5qDayLi6TrEbzVtPwSqMtGmJgDa7 HTTP/1.1" 403 3985 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
Can someone give me an example of how I can filter these out based on the "/?ptrxcz..." part? Most of these requests have this string in them.
    100 match http cookie testcookie1 cookie-value ptrxcz?
    Do I need a secondary name? I don't quite understand the syntax.
    Thanks!

    Here you go:
    policy-map type loadbalance first-match abc.ca.prod.http-l7slb
      class abc.ca.http-l7class
        drop
      class class-default
        serverfarm SF_nocms.prod
    policy-map multi-match int194-webhosting
      class abc.ca.prod.http
        loadbalance vip inservice
        loadbalance policy abc.ca.prod.http-l7slb
    class-map match-all abc.ca.prod.http
      2 match virtual-address 111.111.111.167 tcp eq www
    class-map type http loadbalance match-all abc.ca.http-l7class
      10 match http cookie secondary ptrxcz.* cookie-value ".*"
    Here's a bigger snippet of what I see in the logs:
    187.244.110.209 - - [12/Dec/2012:15:31:35 -0800] "POST /?ptrxcz_uCVmQegPo4Y4Y3YYoCqB0mj5Ptk8ev HTTP/1.1" 403 3985 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
    87.69.255.148 - - [12/Dec/2012:15:31:35 -0800] "POST /?ptrxcz_MMMMMMMMMMMMMNNNNNNNNNNNNNNNNN HTTP/1.1" 403 3985 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
    180.246.22.189 - - [12/Dec/2012:15:31:36 -0800] "POST /?ptrxcz_555555566666666666667777777777 HTTP/1.1" 403 3985 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
    201.137.39.236 - - [12/Dec/2012:15:31:36 -0800] "POST /?ptrxcz_pppqqqqqqqrrrrrrrssssssstttttu HTTP/1.1" 403 3985 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
    203.127.8.98 - - [12/Dec/2012:15:31:36 -0800] "POST /?ptrxcz_WXXXXXXXYYYYYYYYYYYZZZZZZZZZZZ HTTP/1.1" 403 3985 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
    Thanks again.

  • Best Option (if any) for Load Balancing Distribution Point(s) on Same LAN

    Hey Guys - 
I've got a simple question this time. We use SCCM 2012 R2 and manage ~800 systems at 3 locations, but perform most work at our main office, where this scenario takes place. Here, we have a single DP on-site, which is a separate VM from our Primary, which is also local.
Recently, our PC Lab tried running our OSD TS on 16 systems which were each started 1-2 minutes apart. When run on a single brand new PC connected via a GB network, the OSD TS takes a couple of hours to complete, so with 16 it really caused some issues. We had a couple of Programs/Packages/Applications which actually timed out due to the default 120 minute max run time. We don't usually image 16 at a time, but often do 2-3 at once, so we are looking for a solution to speed things up anyway.
    My Question
    What is the best solution / method for implementing any type of load balancing on a single LAN?  I'm not asking for true load balancing, but simply any solution where multiple systems running a TS can pull from more than a single local source if possible.
    We do not use multicasting and from what I've been told it will not be a possibility as it causes havoc on networks so it's out. I know that some clients can share content depending on deployment configuration, but don't know how / if this applies to OSD
    Task Sequences.
    Any suggestions or ideas?  Thanks! 
    Ben K.

Agree, 16 machines is not a lot. I would normally go for about 2000 machines per DP, depending on pkg/img size etc. What's the size of that rebuild? Image + packages? Do you do a full download before starting, or is it started from WinPE?
Our BranchCache tools will of course help, regardless of how fast your gig link is, since data will be pulled from more sources, but I think your issue is more network related. If the image is 5 GB and you add another 5 GB of packages in the sequence, a 1 Gb/s link should pull that 16 * 10 GB = 160 GB in about half an hour. So I don't think you are getting 1 Gb/s from the server to the clients (see the arithmetic sketch below).
//Andreas
http://2pintSoftware.com
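A quick back-of-the-envelope check of the figures above, assuming an ideal, fully utilized 1 Gb/s link with no protocol overhead:

```python
# Back-of-the-envelope check of the figures above: 16 machines pulling
# roughly 10 GB each over a single, ideal 1 Gb/s link.
machines = 16
gb_per_machine = 10                     # image + packages, in gigabytes
total_bytes = machines * gb_per_machine * 1e9
link_bits_per_sec = 1e9                 # 1 Gb/s, ignoring protocol overhead
seconds = total_bytes * 8 / link_bits_per_sec
print(f"{total_bytes / 1e9:.0f} GB over 1 Gb/s ~= {seconds / 60:.0f} minutes")
# -> 160 GB over 1 Gb/s ~= 21 minutes, i.e. roughly the half hour quoted above
```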

  • N2000 weighted load balance question

    Hi --
    I have a question about how to use a weighted load balancing configuration to support a failover condition.
My goal is to have an active and a standby configuration. This is not a web application, so it doesn't follow the same type of rules. In particular, I have a situation where there are multiple clients establishing long-term connections to a server. If the server goes down, I would like the LB to close the connections so that when the clients try to reconnect they are routed to the standby system.
    My question is: If I configure the 'weighting' for the active server to be 1 and for the standby server to be 0, will it always result in incoming connections being routed to the active server (and never the standby server)? If the active server is down, will it still route to the standby even though the weight is zero?
    Any thoughts/ideas/suggestions are greatly appreciated.
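Not an N2000-specific answer, but here is a sketch of the convention many load balancers follow, just to make the question concrete: a weight of 0 keeps a member out of normal selection, and it only receives traffic once every positively weighted member is down. Whether the N2000 actually implements it this way is exactly what needs confirming against its documentation.

```python
# Illustration only: one common interpretation of weight-0 members in a
# weighted pool (active weight 1, standby weight 0). Verify against the
# N2000 documentation before relying on it.
import random

def pick_server(servers):
    """servers: list of dicts with 'name', 'weight', 'alive'."""
    candidates = [s for s in servers if s["alive"] and s["weight"] > 0]
    if not candidates:
        # Fall back to weight-0 standbys only when no weighted member is up.
        candidates = [s for s in servers if s["alive"]]
    if not candidates:
        raise RuntimeError("no server available")
    weights = [max(s["weight"], 1) for s in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

pool = [{"name": "active", "weight": 1, "alive": True},
        {"name": "standby", "weight": 0, "alive": True}]
print(pick_server(pool)["name"])        # -> "active" while it is alive
pool[0]["alive"] = False
print(pick_server(pool)["name"])        # -> "standby" once the active is down
```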

