CSS11506 - flow-timeout-multiplier

Hello,
I have a pair of Sun Directory Proxy servers behind our CSS with the following config...
<<< START CONFIG >>>
!************************** SERVICE **************************
service DirProxy_mmcdif22_636
keepalive type tcp
keepalive tcp-close fin
keepalive port 636
ip address 172.16.30.72
active
service DirProxy_mmcdif62_636
keepalive type tcp
keepalive tcp-close fin
keepalive port 636
ip address 172.16.30.76
active
!*************************** OWNER ***************************
owner Security
content DirProxy_pdd4_636
add service DirProxy_mmcdif22_636
add service DirProxy_mmcdif62_636
protocol tcp
port 636
vip address 123.123.102.201
balance aca
flow-timeout-multiplier 200
active
!*************************** GROUP ***************************
group v4DirProxy_group
add destination service DirProxy_mmcdif22_636
add destination service DirProxy_mmcdif62_636
vip address 172.16.30.12
active
<<< END CONFIG >>>
During a recent outage of mmcdif62, all existing connections appear to have been 'orphaned' on the CSS for approximately 53 minutes... which correlates with the 'flow-timeout-multiplier 200' config on this content rule.
Is there any way to overcome these 'orphaned' connections during a failure scenario as shown above?
Also, is it possible to configure the CSS to act upon source IP address info? If so, perhaps this would be a solution to our problem.
Thanks,
-Adam

Adam,
We believe the application should recover from this by itself.
If the client keeps retransmitting and the server does not respond, the application should reset the connection and open a new one, which would then be load balanced to a working server.
The ACE module has a feature to automatically kill connections linked to a dead server.
Unfortunately this feature does not exist on the CSS.
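For reference, on the ACE this is typically done per serverfarm with 'failaction purge'; a minimal sketch with hypothetical names (exact syntax depends on the ACE software version):
serverfarm host MY_FARM
  failaction purge
  rserver SRV1
    inservice
With failaction purge, existing connections to a real server are removed as soon as a probe marks it failed.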
Regarding the client IP address: you have configured a group to do client NAT.
The server will therefore lose the original client information.
This is, however, not related to the connection hang issue.
Gilles.

Similar Messages

  • Tcp-flow-timeout on outgoing connections

    Hello,
    We have clients connecting to a server, through a CSS.
    The server, in some specific cases, has to connect to the clients (on a different port).
    Since IP addresses are the same, the connection has to go through the CSS, which in this case is acting as a gateway.
    We're facing issues because of torn-down flows at the CSS level; we'd like to change the inactivity timeout, but can't find an easy way.
    So far, the only thing I found is to set a permanent port, but it's not really the best solution, as connections which were not closed correctly would accumulate in the system.
    Would there be an easy way (I'd prefer to avoid having to create content rules) for the outgoing flows, on a specific port, to have a different inactivity timeout than the default one?
    Thanks in advance for your help.
    Cheers
    Mael

    Hi Mael,
    CSS will not break routed connections just going through it.
    Here's what happens:
    1) The CSS gets a packet directed towards an IP which is not configured as a virtual server. When it does, it creates a connection for that flow even if the packet is not the original SYN of the three-way handshake.
    2) If no data is received on this connection for 16 seconds, it is moved to the free-flows list. Once it is there, the CSS will continue to use the information located in it to forward traffic, but the connection will be removed as soon as we need some room for new entries.
    3) Once the connection is removed even from the free-flows list and we get a new packet for the connection, you go back to point #1, since the CSS doesn't check for stateful information (i.e. it doesn't check whether the first packet is a SYN).
    So even though the idle timeout of the connections going through it is 16 seconds, a routed TCP flow through the CSS *will never time out*.
    To change inactivity timeouts for load-balanced connections, you can use the command below, which needs to be applied to content rules/source groups.
    flow-timeout-multiplier
    Note: if you set the number to 2, the CSS will multiply it by 16 and the actual timeout will be 32 seconds.
    The permanent-port option is also a good one, but it can be a problem if you have high traffic on that port. You can also use cmd-scheduler along with a permanent port to clear the flows periodically.
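    As a worked check of that x16 rule: a multiplier of 2 gives 2 x 16 = 32 seconds, while the 'flow-timeout-multiplier 200' from the opening post of this thread gives 200 x 16 = 3200 seconds, roughly 53 minutes, which lines up with the orphaned-connection window described there.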
    Regards,
    Kanwal

  • General query on CSM and CSS flow timeout values

    Hi all,
    I have an SLB Application Processor Complex module on my Cisco 6504 which basically does some load balancing work. I am pretty new to this device, but the configuration and setup look somewhat similar to the Cisco ACE; I only have some experience with the Cisco CSS.
    What I would like to know is what the equivalent command to the CSS "flow timeout" is on the CSM. Would that be the "idle timeout" command? I understand that the "pending timeout" is more about governing how long it takes to set up a 3-way handshake from client to server, and the "idle timeout" is what I am looking for. Please correct me if I am wrong...
    On the CSS, the flow timeout is 16 secs for most standard ports and 8 secs for HTTP. I would like to know what the default setting is for the CSM idle timeout? Thanks a lot!!
    Daniel

    Hi Daniel,
    For idle timeout, the default is 1 hour / 3600 sec.
    As you know, for the Cisco CSM there are 2 timers per vserver:
    Idle timeout
    Pending timeout
    If a connection is timed out, it's because of one of these timers.
    Idle timeout per vserver - applies if there is no traffic from either the client or the server. Idle connection timer duration in seconds; the range is from 0 (connection remains open indefinitely) to 13500000. The default is 1 hour. If you do not specify a duration value, the default value is applied.
    Examples
    This example shows how to specify an idle timer duration of 4000:
    Cat6k-2(config-slb-vserver)# idle 4000
    Pending timeout per vserver - this is the max time allowed to complete the 3-way handshake. The default is 30 sec. The range is from 1 to 65535. This is an SLB virtual server configuration submode command. The pending connection timeout sets the response time for terminating connections if a switch becomes flooded with traffic. If the 3-way handshake does not complete within this time, the connection is dropped.
    The CSM expects to see 2-way traffic within the pending timeout. If no traffic is received from the server, the session is removed.
    Examples
    This example shows how to set the number to wait for a connection to be made to the server:
    Cat6k-2(config-slb-vserver)# pending 300
    These are not counted as failures.
    A failure is when the server does not respond or responds with a reset.
    The CSM can hold 1 million connections in memory at the max.
    So, if you set the idle timeout to 10 hours, your maximum sustained connection rate is 1 M / (10 * 3600) = ~28 conn/sec,
    assuming they would all be opened and then sit idle.
    When the number of pending connections exceeds a configurable threshold, the CSM begins using the SYN cookies feature, encrypting all of the connection state information in the sequence numbers that it generates. This action prevents the CSM from consuming any flow state for pending (not fully established) TCP connections. This behavior is fully implemented in hardware and provides a good protection against SYN attacks.
    Generic TCP termination
    Some connections may not require TCP termination for Layer 7 load balancing. You can configure any virtual server to terminate all incoming TCP connections before load balancing those connections to the real servers. This configuration allows you to take advantage of all the CSM DoS features located in Layer 4 load-balancing environments.
    To select the traffic type and appropriate timeout value, use the unidirectional command in the SLB virtual server submode.
    [no | default] unidirectional
    Some protocols automatically set the 'unidirectional' function.
    For example: UDP.
    You can see whether a vserver is unidirectional or bidirectional by doing a 'sho mod csm X vser name detail'.
    When a virtual server is configured as unidirectional, it no longer uses the pending timer. Instead, the idle timer will determine when to close idle or errant flows. Because the idle timer has a much longer default duration than the pending timer, be sure to set the idle timer to an appropriate value.
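    For example, enabling it on a virtual server looks like the other submode examples above (a minimal illustration; the prompt and vserver follow the earlier examples):
    Cat6k-2(config-slb-vserver)# unidirectional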
    Use the command "show module csm slot# stats" to get the details of connections.
    The statistics counters are 32-bit. Totals are accumulated since the last time the counters were cleared.
    Examples
    This example shows how to display SLB statistics:
    Cat6k-2# show module csm 4 stats
    Connections Created:       180
    Connections Destroyed:     180
    Connections Current:       0
    Connections Timed-Out:     0
    Connections Failed:        0
    Server initiated Connections:
          Created:0, Current:0, Failed:0
    L4 Load-Balanced Decisions:180
    L4 Rejected Connections:   0
    L7 Load-Balanced Decisions:0
    L7 Rejected Connections:
          Total:0, Parser:0,
          Reached max parse len:0, Cookie out of mem:0,
          Cfg version mismatch:0, Bad SSL2 format:0
    L4/L7 Rejected Connections:
          No policy:0, No policy match 0,
          No real:0, ACL denied 0,
          Server initiated:0
    Checksum Failures: IP:0, TCP:0
    Redirect Connections:0,  Redirect Dropped:0
    FTP Connections:           0
    MAC Frames:
          Tx:Unicast:1506, Multicast:0, Broadcast:50898,
              Underflow Errors:0
          Rx:Unicast:2385, Multicast:6148349, Broadcast:53916,
              Overflow Errors:0, CRC Errors:0
    The table below describes the fields in the display ("show module csm stats" command field information):
    Connections Created: number of connections that have been created on the CSM.
    Connections Destroyed: number of connections that have been destroyed on the CSM.
    Connections Current: number of current connections at the time the command was issued.
    Connections Timed-Out: number of connections that have timed out, which can occur for the following reasons:
    •the connection has been idle (in one or both directions) for longer than the configured idle timeout.
    •the TCP connection setup was not completed successfully.
    Connections Failed: number of connections that failed because the server did not respond within the timeout period, or the server replied with a reset.
    Server initiated Connections: number of connections created by real servers, the number of current connections, and the number of connections that failed (because the destination is unreachable).
    L4 Load-Balanced Decisions: number of Layer 4 load-balancing decisions attempted.
    L4 Rejected Connections: number of Layer 4 connections rejected because no real server was available.
    L7 Load-Balanced Decisions: number of Layer 7 load-balancing decisions attempted.
    L7 Rejected Connections: Total: number of Layer 7 connections rejected.
    L7 Rejected Connections: Parser: number of Layer 7 connections rejected because the Layer 7 processor in the CSM ran out of session buffers to save the parsing state for multi-packet HTTP headers. The "show module csm tech-support proc 3" command will show detailed buffer usage.
    L7 Rejected Connections: Reached max parse len: number of Layer 7 connections rejected because the HTTP header in the packet is longer than max-parse-len. When a virtual server is configured with HTTP persistent rebalancing or cookie matching/sticky, the CSM must parse to the end of the HTTP header. The default max-parse-len value is 2000 bytes.
    L7 Rejected Connections: Cookie out of mem: number of Layer 7 connections rejected because there was no memory to store cookies. When a virtual server is configured with cookie matching, the CSM must save the cookie contents in memory.
    L7 Rejected Connections: Cfg version mismatch: number of Layer 7 connections rejected because part of the request was processed with an older version of the configuration. This counter should only increase after configuration changes.
    L7 Rejected Connections: Bad SSL2 format: number of Layer 7 connections rejected because the request uses an unsupported SSL format or is not valid SSL.
    L4/L7 Rejected Connections: number of Layer 4 and Layer 7 connections rejected for policy-related reasons:
    No policy: connection rejected because the request matched a virtual server, but this virtual server did not have a policy configured.
    No policy match: connection rejected because the request matched a virtual server, but the request did not match any policy configured on the virtual server.
    No real: connection rejected because no real server was available to service the request.
    ACL denied: connection rejected because a request matched a policy with a client-access-list entry and the entry is configured to deny the request.
    Server initiated: a connection initiated by a real server was rejected.
    Checksum Failures: number of checksum failures detected (there are separate counters for IP and TCP failures).
    Redirect Connections: number of connections redirected, and the number of redirect connections dropped.
    FTP Connections: number of FTP connections opened.
    MAC Frames: number of MAC frames received and transmitted on the CSM backplane connection.
    For details on all of these commands, kindly refer to the Catalyst 6500 Series Switch Content Switching Module Command Reference, 4.2, at the URL below:
    http://cisco.biz/en/US/docs/interfaces_modules/services_modules/csm/4.2.x/command/reference/cmdrfIX.html
    Kindly Rate.
    HTH
    Sachin Garg

  • Flow Timeouts & TCP FIN/RST

    I've been reading for a few hours, both the docs and the web, and I can't seem to definitively answer if the CSS cleans flows internally or if it closes them to the server too. It seems to be a local cleanup process from what I've read. I also wonder if there's a way to have the CSS send FINs or RSTs to the server for old flows. Basically, I'm about to help on an issue where the servers are not having their connections closed properly and they end up out of ports. I don't know if it's the app or the lb but I am curious anyway if there's a way to have the CSS send FINs or RSTs to the server for old entries. If I understand properly, the CSS proxies requests between the client and server and there should be nothing to stop FINs or RSTs from being sent back and forth. Thanks!

    The CSS does not send any RESET when deleting a flow.
    A reset is sent to the client only when a flow has been deleted AND the client then sends a packet that would have matched that flow.
    I believe there is a feature request to reset the client and server on flow deletion, but it has not been implemented yet.
    You should look for a solution on the server side.
    Reduce the TCP idle timeout or something equivalent.
    Also, I know some servers do not like the RESET sent by the CSS keepalive when using a TCP keepalive.
    There is a command, 'keepalive tcp-close fin', to change the reset into a FIN.
    If you think the keepalive could be part of the problem, I would recommend using this command.
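    A minimal sketch of where that command sits, mirroring the service configuration at the top of this thread (the service name, address, and port here are hypothetical):
    service my-service
      ip address 10.1.1.10
      keepalive type tcp
      keepalive port 80
      keepalive tcp-close fin
      active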
    Regards,
    Gilles.

  • ********** CSS TIMEOUT PROBLEMS ********

    Hi, is there a way to stop the CSS from trying to process client requests when all services are down?
    I would like the CSS to basically send an immediate "timeout" to the client side, knowing that all services are down, instead of leaving the client stuck waiting for a timeout.
    I am positive there is a way because I have come across it in the past in some documentation, but I cannot remember where I read about it. Can anyone help?

    (config-owner-content) flow-timeout-multiplier. This command can be used to control how long an idle flow can exist before the CSS tears it down. For more information on this command and its usage, have a look at the following URL.
    http://www.cisco.com/univercd/cc/td/doc/product/webscale/css/css_730/cmdrefgd/cmdowcnt.htm#wp1140473
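    A minimal sketch of the command under a content rule (hypothetical owner and rule names; per the note elsewhere in this thread the value is multiplied by a 16-second base, so 2 gives roughly 32 seconds of idle time before teardown):
    owner my-owner
      content my-rule
        flow-timeout-multiplier 2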

  • CSS 11501 web timeout

    Hi,
    I have a CSS11501 running 8.10.0.02 software. I have 2 Windows 2003 web servers that connect to a backend database. I am receiving complaints that intermittently the end user is getting a session timeout.
    I set the flow-timeout-multiplier to 2700 so the flow would stay active for 12 hours, but when I issue the show flows command I see flows disappearing after just a few minutes.
    Is this normal behavior?
    Does my config look correct otherwise?
    service A
    ip address 192.168.248.17
    keepalive type http
    active
    service B
    keepalive type http
    ip address 192.168.248.18
    active
    !*************************** OWNER ***************************
    owner test1
    content web
    port 80
    protocol tcp
    add service A
    add service B
    vip address 192.168.248.16
    balance aca
    advanced-balance sticky-srcip
    sticky-inact-timeout 720
    flow-timeout-multiplier 2700
    active
    Thanks
    Frank

    Frank,
    If the client or the server closes the connection, it will disappear.
    Sniff the traffic on both sides of the CSS and see which device is closing the connection.
    Don't forget that servers also come with an idle timeout, which is usually far less than 12 hours.
    Gilles.

  • Extended flows being torn down?

    Hello.
    I have a SAN with an external client. The inbound backup flow goes through an 11506 with ver 7.2~. Backups of 2 to 3 hours work fine, but others with a longer time die around 8 - 10 hours into the session.
    I modified the flow-timeout multiplier to cover 20 hours, but that didn't help, as I expected. The session is dying mid-activity and not during an idle period. Has anyone encountered similar behavior in long-term flows?
    As a separate question - how is the flow-timeout value calculated? Isn't the default 16 (16*16=256)? It would seem then that if I opened a terminal window into a backend server and let it sit, I should see it terminate within 5 minutes. But usually, the window will stay active until an overnight period... then close. Modifying the flow-timeout value fixes this problem, but doesn't quite explain why the default value didn't time out the session sooner.
    Thanks,
    Chad

    Gilles,
    There is no RESET.
    On the content rule, I set the flow-timeout.
    And yes, the content is hitting the right rule.
    I understand the CSS doesn't "proxy" the connection, but does session information (layer 5, OSI) get exchanged between the client and CSS in place of the server?
    For example, I had a Solaris 2.6 client that FTP'd large files (2GB) to a server without issue. When I placed the server behind the CSS, the FTP would fail on these large files, around 1.65 GB. If I put the same file on another client - say a Linux box - the FTP succeeds to the same server. This leads me to believe that something isn't being agreed on correctly in the session or presentation portions - but how do I confirm that?
    So previous experiences like this add further confusion to my current problem, where small volumes from the client to the SAN succeed, but a larger, longer-term volume backup fails after 8-10 hours.
    If the CSS resources were being maxed, would it kill off active long-term flows? I have no indication that the CSS is maxed on resources, but at this point I'm trying to consider all options.
    Thanks,
    Chad

  • Host on Demand (HOD) Frequent disconnects

    The CSS is disconnecting HOD clients. When HOD is pointed at the real servers, the connections work fine. The disconnects do not happen all the time. I am wondering if the sticky is causing a problem with the HOD disconnects. There is a correlation between using the VIP and not using the VIP.
    The port 443 flow timeout defaults to 16 secs. This needs to be raised so that the long inactivity of the HOD clients does not cause disconnects. Is this a true statement?
    content xxxx:23
    vip address xxxx
    protocol tcp
    port 23
    advanced-balance sticky-srcip
    add service xxxx:443
    add service xxxx
    active
    content hod-xxxx:443
    vip address xxxx
    protocol tcp
    port 443
    add service xxxxx:443
    add service xxxxx
    flow-timeout-multiplier 113
    active
    Any ideas to what is causing the disconnects? Running 07.50.0.04.

    I don't think it is a problem with the sticky. Yes, you may have to raise the port 443 flow timeout value from its default toward the maximum, so that it will not disconnect the HOD clients. Hope this helps.
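    As a rough check of the value already configured above, assuming the usual 16-second base timeout: flow-timeout-multiplier 113 gives 113 x 16 = 1808 seconds, i.e. roughly 30 minutes of allowed inactivity before the flow is torn down.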

  • Trying to understand SSL sticky with CSS 11506 / ssl-l4-fallback behavior

    Dear experts
    I have a CSS 11506 (v7.50) which is used to load balance several SSL-based sites. We use the following textbook content rule:
    content mysite-SSL
    vip address 10.0.0.1
    add service s01
    add service s02
    add service s03
    port 443
    protocol tcp
    advanced-balance ssl
    application ssl
    flow-timeout-multiplier 225
    active
    If I read the manual correctly, SSLv3 session IDs are going to be used until a flow is set up. Then the ssl-l4-fallback directive (it is enabled) kicks in and load balancing is done based on the source IP and destination port.
    However, my stats show:
    Sticky Statistics - SFM Slot 1, Subslot 1:
    Total number of new sticky entries is 4937735
    Total number of sticky table hits is 33476045
    Total number of sticky rejects (no entry) is 0
    Total number of sticky collision is 0
    Total number of available sticky entries is 0
    Total number of used sticky entries is 131071
    Total L3 sticky entries are 131
    Total L4 sticky entries are 0
    Total SSL sticky entries are 130940
    Total WAP sticky entries are 0
    Total number of SIPCID sticky entries is 0
    So, why don't I see anything in the L4 sticky entries?
    Also, I would expect that once the ssl-l4-fallback kicks in, a client will always be directed to the same server (since the CSS now uses source IP and destination port for load balancing). However, if I close and restart my browser I hit a different server.
    Your thoughts and suggestions are highly appreciated.
    John.

    Hi Gilles
    Thank you for your response. If I may ask the group for a final further clarification, so as to put this matter to rest. Since there are a lot of frames transmitted in either direction, I would expect the following to be happening and overriding the use of SSLv3 session IDs. Following is the section of the manual that seems to contradict what you say (and I see on the stats). Am I reading the manual wrong?
    "Cisco Content Services Switch
    Content Load-Balancing
    Configuration Guide
    Software Version 8.20
    November 2006
    page 11-14
    Configuring SSL-Layer 4 Fallback
    Insertion of the Layer 4 hash value into the sticky table occurs when more than
    three frames are transmitted in either direction (client-to-server, server-to-client)
    or if SSL version 2 is in use on the network. If either condition occurs, the CSS
    inserts the Layer 4 hash value into the sticky table, overriding the further use of
    the SSL version 3 session ID."

  • Problem with load balancing on CSS

    Hi, we have a pair of servers that are load balanced for port 80 traffic to 198.x.x.21 using the h_www_tcp:80 content rule. I recently got a request that port 80 traffic directed to 198.x.x.21/apps/auction be directed only to one of the load-balanced servers. I created a service and content rule r_auction_tcp:80, but traffic still seems to be matching the load-balancing content rule and going to the wrong server (the /apps/auction site only exists on the first server).
    I thought it might be stickiness, so I experimented by assigning my PC IP addresses that have not been used, and I would still end up on the wrong server with some of the addresses.
    Any ideas?
    content h_www_tcp:80
      vip address 198.x.x.21
      port 80
      protocol tcp
      advanced-balance sticky-srcip
      add service webwa1.1_tcp:80
      add service webwa2.1_tcp:80
      redundant-index 1064
      sticky-inact-timeout 480
      active
    service webwa1.1_tcp:80
      ip address 10.6.3.30
      protocol tcp
      port 80
      keepalive type tcp
      keepalive port 80
      redundant-index 130
      active
    service webwa2.1_tcp:80
      ip address 10.6.3.31
      protocol tcp
      port 80
      keepalive type tcp
      keepalive port 80
      redundant-index 140
      active
    content r_auction_tcp:80
      vip address 198.x.x.21
      redundant-index 1077
      add service webwa1.1-auction_tcp:80
      port 80
      protocol tcp
      url "/apps/auction/*"
      active
    service webwa1.1-auction_tcp:80
      ip address 10.6.3.30
      protocol tcp
      port 80
      keepalive type tcp
      keepalive port 80
      redundant-index 131
      active

    Does that never work, or is it just that some requests sometimes fail to be remapped to the appropriate server?
    It should normally work, but with the current config it is possible to see failures for sessions that stayed idle.
    The solution is then to increase the flow-timeout-multiplier under each content rule to 50.
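    A minimal sketch of that change on the two rules from the config above (50 x 16 gives roughly 800 seconds of allowed idle time):
    content h_www_tcp:80
      flow-timeout-multiplier 50
    content r_auction_tcp:80
      flow-timeout-multiplier 50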
    Gilles.

  • Exploring CSS 11503 sticky table / sticky mask

    Hi All
    I am currently undergoing some testing with a client.
    We have a VIP load balancing 8 instances. We are testing with the following configs
    content test-test
        add service a
        add service b
        add service c
        add service d
        add service e
        add service f
        add service g
        add service h
        vip address 10.10.10.1
        flow-timeout-multiplier 225
        sticky-mask 255.255.255.252
        redundant-index 1000
        port 443
        protocol tcp
        advanced-balance sticky-srcip-dstport
        sticky-inact-timeout 360
        balance leastconn
    active
    We have traffic being sourced from 32 IP addresses and want all 8 instances to be used/hit, but this is not happening in all cases.
    (From the above config, 4 consecutive IPs will be stuck to the same instance based on the sticky mask -- yes?)
    For instance, I would expect the following with the test IP addresses used, based on the sticky mask:
    10.120.1.168
    10.120.1.169
    10.120.1.170
    10.120.1.171 
    (to be stuck to maybe instance a)
    10.120.1.176
    10.120.1.177
    10.120.1.178
    10.120.1.179
    (to be stuck to maybe instance b)
    I have tried the following command during tests:
    show sticky-table l4-sticky ipaddress 10.10.10.1  255.255.255.252  443
    and get an empty table back.
    L4 Sticky List on Slot 1, subslot 1:
    Entries for page 1.
    Entry   Hash    Rule Rule  Srv  Srv      Time(Sec)     Hit Col  Elem Inact
    Number  Value   Indx State Indx State    Elapsed       Cnt Cnt  Type Cfg(Min)
    Total number of entries found is 0.
    L4 Sticky List on Slot 2, subslot 1:
    Entries for page 1.
    Entry   Hash    Rule Rule  Srv  Srv      Time(Sec)     Hit Col  Elem Inact
    Number  Value   Indx State Indx State    Elapsed       Cnt Cnt  Type Cfg(Min)
    Total number of entries found is 0.
    I would like to ascertain which source IP address is being stuck to which load-balanced instance at any one time.
    I have tried looking at the flow table, but that clears out quite quickly, so it is not really an accurate method.
    Thanks!


  • CSS Citrix CAG Load Balancing

    Hi,
    I'm looking to get an opinion as to whether we should see even load balancing over two services. The content rule is configured as follows:
    content secure_cag
      add service citrix_cag_1
      port 443
      protocol tcp
      vip address 10.80.2.150
      balance srcip
      add service citrix_cag_2
      sticky-inact-timeout 240
      flow-timeout-multiplier 1800
      active
    Services :-
    service citrix_cag_x
      keepalive type tcp
      keepalive port 443
      ip address 10.200.16.18
      active
    At present we only have around 40 users using it but at times we are seeing a very uneven distribution of sessions, as much as 80% on one server.  Do we have too few users to see effective load balancing? Maybe our long timeout settings are breaking load balancing?
    Thanks for any insight anyone can share.

    Hi Chris,
    You might want to try balance leastconn for your balancing method.  Also, note that you are not currently configured for sticky, so the sticky timeout you have configured isn't doing anything.  Do you require sticky?  If you do not require sticky, then leastconn should give you the best distribution across services at any given point in time.  Adding sticky, such as with advanced-balance sticky-srcip, will skew load balancing as clients become stuck to one service.
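    A minimal sketch of that change on the rule above (only if sticky is not required, per the caveat; depending on the software version the rule may need to be suspended before changing the balance method):
    content secure_cag
      balance leastconn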
    Hope this helps,
    Sean

  • CSS load balancing issue: url isn't accessible even though services are up

    service Server1:80
      ip address 10.10.10.34
      protocol tcp
      port 80
      keepalive type http
      keepalive uri "/test.asp"
      active
    service Server2:80
      protocol tcp
      port 80
      keepalive type http
      keepalive uri "/test.asp"
      ip address 10.10.10.35
      active
    owner Ow1
    content LBR1:80
        vip address 192.168.1.159
        port 80
        protocol tcp
        url "/*"
        balance weightedrr
        add service Server1:80
        add service Server2:80
        advanced-balance sticky-srcip
        sticky-inact-timeout 21
        flow-timeout-multiplier 8
        active
    service Server1:80
      ip address 10.10.10.34
      protocol tcp
      port 80
      keepalive type http
      keepalive uri "/test.asp"
      active
    service Server2:80
      protocol tcp
      port 80
      keepalive type http
      keepalive uri "/test.asp"
      ip address 10.10.10.35
      active
    owner OW1
      content LBR2:80
        vip address 192.168.1.98
        protocol tcp
        port 80
        url "/*"
        balance weightedrr
        add service Server1:80
        add service Server2:80
        advanced-balance sticky-srcip
        sticky-inact-timeout 21
        flow-timeout-multiplier 8
        active
    All services are alive all the time and both contexts are alive all the time.
    When a user tries to access LBR2:80's URL it works all the time, but when a user tries to access LBR1:80's URL it works sometimes and sometimes it doesn't.
    Could you advise what the issue could be?

    When the SYN comes in, the CSS will first check for the source IP in the sticky database and, if it finds a match, will forward to the stuck server. If the source IP is not in the sticky database, the request will be load balanced using weightedrr and a server selected. That sticky server will then be added to the sticky database.
    If sticky-srcip is used on 2 content rules, each will use a separate sticky table.
    You may need to take a packet capture to understand what is really failing, along with
    the following outputs:
    sh flow
    sh rule Ow1 LBR1:80 ser
    regards
    Andrew

  • HT4113 On the 6th passcode attemp entered correctly, does the attempts will revert to zero?and does it need to wait to see the result?

    I tried once and failed, then it was OK on the second attempt. After that I tried again with a wrong passcode and it went back to showing one failed attempt, when it should have been the second failed attempt. If you enter the correct passcode, does the count revert to zero automatically, or is there some interval? And if you fail the 6th attempt, is the only way out to restore from the MacBook or PC where I backed up the phone? Thanks

    Server-side flows will also be timed out when flows are needed by the CSS. In your situation I see 3 possible options:
    1. Increase the flow timeout using "flow-timeout-multiplier number"
    or
    2. Use "flow permanent port1 " (a sketch follows after this list)
    It will prevent cleanup of the idle flows for the specified port and will make these flows permanent. This can severely affect scalability, as you may run out of available flows on the box. In order to reclaim these flows, you can either remove the idle flows manually (by doing "no flow permanent port1" and then "flow permanent port1 " again, every night maybe) or write a script to do that for you on the CSS.
    or
    3. Add HeartBeat functionality in the application.
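    A minimal sketch of option 2 with a hypothetical port number (the reclaim step mirrors the manual approach described above):
    flow permanent port1 8080
    ! later, to reclaim the idle flows for that port, remove and re-add it
    no flow permanent port1
    flow permanent port1 8080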
    HTH
    Syed Iftekhar Ahmed

  • CSS CItrix Nfuse Connections Drop

    We are using a CSS-11501 version 7.5 to load balance SSL connections to a pair of Citrix Web Interface servers. The CSS is connected to a DMZ interface of an ASA5520 on one side, and a 3550 with the web interface servers on the other side. Citrix app servers are in the internal network.
    The problem is that users are dropped after 45-75 minutes. If the load balancer is bypassed by suspending the service and connecting to the server IP, the drops stop occurring. Sniffer traces indicate it is the Citrix 1494 connection between the Web server and the internal Citrix server that is being dropped.
    Tried extending the TCP flow and sticky timeouts, but no change.
    Is it possible to disable the NAT function on the 1494 backend connection and still allow load balancing of the 443 client connection?
    Thanks, Dave

    Where and how did you apply the flow-timeout-multiplier?
    You need it under the content rule and under the group.
    You can apply NATing to a specific port by using an ACL.
    Instead of doing an 'add destination service' under the group, you leave it empty [except for the VIP] and use an ACL to decide when to use the group,
    i.e.:
    acl 1
    clause 10 permit tcp any destination content owner/rule sourcegroup
    Gilles.
