Best practices...ACE

I am trying to get myself familiar with the ACE theoretically before an actual deployment, so here are some questions...
First, we often hear the term "offloading SSL to the ACE" to make it work at L7 and make decisions based on headers. The question arises: how simple is it to work on the ACE at Layer 7 rather than at Layer 4? Is it really worth going for, and in which scenarios?
Multiple contexts: is there any central way to measure the overall resource consumption of all (or selected) contexts? What are the benchmarks/limitations (not just the count) for the ACE module in a 6500? Is it productive or wasteful to have FT for each context, or is FT one per box?
How useful is it to have HTTP over SSL? (I know it depends on the requirements; I need to know the impact on the box itself.)
TCP reuse, persistent LB and header insert: can these features be used in an L4 model, or are they really an L7 thing?
Can a single context of the ACE be used with multiple contexts of a boundary ASA? (I don't know why I am asking this.)

SSL offloading is not needed just for making L7-based decisions; it is also beneficial
for performance and certificate management.
* On web servers, native SSL processing typically happens in software (unless they use SSL acceleration cards); as a result they handle fewer requests, have far slower response times, and deliver significantly lower total throughput.
* SSL processing is computationally intensive; SSL offloading takes the SSL processing off the web server.
* Since the back-end web/app servers do not need to process any encrypted data, they serve data and run applications more effectively.
* Overall certificate management gets easier (you deploy/renew only one cert, versus N certs for N servers).
When load balancing SSL traffic using Layer 4 parameters, you can ensure persistence only by using the source IP or the SSL ID (the unencrypted fields). Source-IP-based persistence is not recommended if mega proxies are in use, and SSL-ID-based persistence is not reliable because some IE browsers renegotiate the SSL ID during a session. The only viable option is to use cookie/header values to stick a client to a single server for the duration of a session. If SSL offload is not configured, the load balancer cannot read the headers and therefore cannot ensure persistence (when IP-based persistence is not an option).
Similarly, there are scenarios where you want to make decisions based on L7 information, but the load balancer can't read the headers because the traffic is encrypted. For example, suppose you are running an Internet-facing application in three languages. If the load balancer can read the headers (SSL offloading in place), it can select Server-X, dedicated to language X, for language-X requests and Server-Y for language-Y requests. There is a lot of valuable information in the headers that a load balancer can inspect and utilize; if the traffic is encrypted and SSL offloading is not in place, it won't be able to make "intelligent decisions".
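A minimal sketch of what that header-based selection plus cookie persistence could look like on the ACE (the farm names, sticky group and the Accept-Language match are illustrative assumptions, not a verified config):

```
sticky http-cookie ACE-SESSION STICKY-DEFAULT
  cookie insert
  serverfarm FARM-DEFAULT

class-map type http loadbalance match-any LANG-FR
  2 match http header Accept-Language header-value "fr.*"

policy-map type loadbalance first-match LB-BY-LANG
  class LANG-FR
    serverfarm FARM-FR
  class class-default
    sticky-serverfarm STICKY-DEFAULT
```

This only works behind an SSL-terminating VIP; without offload the ACE never sees the Accept-Language header or the cookie.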
"show resource usage" will show you the resources used by each context.
One FT link is used for all contexts; every context shares the same link.
Within the configuration you can select which context will be active on which ACE peer.
Any feature that uses only TCP fields is L4; any feature that goes beyond that can be called L7.
Persistence can be L4 (source IP) or L7 (cookies, headers, ...).
Header insert is Layer 7 (you are opening the HTTP header).
You can share an ASA on the inside with multiple ACE contexts. (Not true for the FWSM.)
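For the resource question, the per-context accounting can be pulled centrally from the Admin context; something along these lines (the context name is a placeholder, and the exact keyword set is worth confirming with "show resource usage ?"):

```
switch/Admin# show resource usage
switch/Admin# show resource usage context WEB-CTX
switch/Admin# show resource usage summary
```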
HTH
Syed Iftekhar Ahmed

Similar Messages

  • Best Practice to use one Key on ACE for new CSR?

    We generate multiple CSRs on our ACE, but our previous network admin was only using one key for all new CSR requests.
    I.e., we have samplekey.pem on our ACE and use samplekey.pem to generate CSRs for multiple certs.
    Is this best practice, or should we be using a new key for each new CSR?
    Also, is it OK to delete old CSRs on the LB, since the limit is only 8? Thanks.


  • HTTP/HTTPS on the same ACE VIP - best practice

    I currently have a VIP representing one server farm that contains two http servers:-
    class-map match-all VIP-HTTP-xxxxx.co.uk
      2 match virtual-address 10.79.18.10 tcp eq www
    class-map match-all VIP-SSL-xxxxx.co.uk
      2 match virtual-address 10.79.18.10 tcp eq https
    I have port 80 and 443 open on the VIP and SSL termination performed on the ACE (both http servers are the same and configured for default load balancing behaviour - I've also specified port 80 for ACE to server traffic). Having 80 and 443 on the same VIP (meaning the site can be accessed via one NAT'd external IP) came from a request from the business so the site can have one domain.
    The majority of the http server(s) web content is standard http but there is a specific sub-directory of interactive forms that requires https termination.
    I have a couple of queries with regards to URL re-writes:-
    1) Is the SSL URL re-write functionality limited to just the host part of the URL or can the ACE enforce https for specific sub-directories, i.e. can the ACE intercept and re-write a URL if a user tries to go to a particular https page/directory using http (by just deleting the s from the URL within their browser)? A possible example being:-
    ssl url rewrite location "www\.cisco\.com\secure-forms"
    2) Can the ACE re-direct users back to a standard http page if they try to 'secure' their session by changing http to https within their browser (basically the opposite of the above).
    Basically as I have 80 and 443 on the same VIP I'm interested in the best practice methods of enforcing http and https content segregation using just the ACE (as opposed to having Apache doing the re-writes, etc).
    Web services functionality (in terms of SSL and URL re-writes) has traditionally fallen within the domain of a dedicated web development team (who use Apache, Tomcat, etc.). The introduction of the ACE as a load-balancing appliance that is primarily managed by the networks team, but with functionality that crosses traditional team boundaries, has resulted in lots of questions from web development about what functionality can be moved from Apache, etc. onto the ACE.
    Any advice or personal experiences would be gratefully received.
    Thanks
    Matthew

    Back again!
    Could someone possibly cast their eye over the following config?
    The only bit I'm not sure on (syntactically and whether it can even be done on the ACE) is how to specify a DO NOT match regular expression, i.e. how to capture https URLs that do not match my secure pages so I can re-direct the request back to the normal http URL (class-map type http loadbalance Non-Secure_Pages). What I'd like to avoid is re-directing requests that don't need to be, i.e. re-directing all requests that don't match /secure back to http when the majority will be correctly going to a normal http URL :-
    rserver host server1
      description *** HTTP server 1 ***
      ip address 10.100.194.2
      inservice
    rserver host server2
      description *** HTTP server 2 ***
      ip address 10.100.194.3
      inservice
    rserver redirect REDIRECT_TO_HTTPS
      webhost-redirection https://www.website.co.uk/%p 302
      inservice
    rserver redirect REDIRECT_TO_HTTP
      webhost-redirection http://www.website.co.uk/%p 302
      inservice
    class-map type http loadbalance Secure_Pages
      match http url /secure.*
    class-map type http loadbalance Non-Secure_Pages
      *** DO NOT *** match http url /secure.*
    class-map match-all VIP-HTTP-website.co.uk
      2 match virtual-address 10.79.18.10 tcp eq www
    class-map match-all VIP-SSL-website.co.uk
      2 match virtual-address 10.79.18.10 tcp eq https
    policy-map type loadbalance first-match VIP-LB-HTTP-website.co.uk
      class Secure_Pages
        serverfarm REDIRECT_TO_HTTPS
      class class-default
        serverfarm serverfarm-website.co.uk
    policy-map type loadbalance first-match VIP-LB-SSL-website.co.uk
      class Non-Secure_Pages
        serverfarm REDIRECT_TO_HTTP
      class class-default
        serverfarm serverfarm-website.co.uk
    serverfarm host serverfarm-website.co.uk
      failaction purge
      rserver server1 80
        probe PING_SERVER
        probe http-website.co.uk
        inservice
      rserver server2 80
        probe PING_SERVER
        probe http-website.co.uk
        inservice
    serverfarm redirect REDIRECT_TO_HTTPS
      rserver REDIRECT_TO_HTTPS
        inservice
    serverfarm redirect REDIRECT_TO_HTTP
      rserver REDIRECT_TO_HTTP
        inservice
    many thanks

  • ACE best practice for proxy servers

    Dear,
    I would like to know the best practice scenario to load balance proxy servers:
    1- Is it best practice to have a transparent proxy or a proxy setting in the web browser?
    2- For a transparent proxy: is it best practice to use ip wccp or a route-map pointing to the ACE VIP?
    3- What are the advantages and disadvantages of a transparent proxy vs. a web browser proxy setting?
    Regards,
    Pierre

    Hi,
    Sorry, that seems to be an internal link.
    You can also check the post below, where a sample config for a transparent cache is posted:
    https://supportforums.cisco.com/thread/129106
    Best practice:
    * The VIP should be a catch-all address.
    * To optimize caching, the predictor "hash url" is used.
    * You can also use mac-sticky on the interface so proper flow persistence is maintained within the ACE.
    * The mode is transparent, so we preserve the destination IP address.
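    Pulling those points together, a rough sketch (the names and VLAN are illustrative assumptions, not a verified config) might look like:

    ```
    serverfarm host CACHE-FARM
      transparent
      predictor hash url
      rserver CACHE1
        inservice

    class-map match-any VIP-CATCH-ALL
      2 match virtual-address 0.0.0.0 0.0.0.0 tcp eq www

    interface vlan 100
      mac-sticky enable
    ```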
    Regards,
    Siva

  • ACE access-list best practice

    Hi,
    I was wondering what was the best practice for the access-list's on the Cisco ACE.
    Should we permit Any in the access-list, and classify the traffic in the class-maps as seen in a brief example:
    access-list ANY line 10 extended permit ip any any
    access-list EXCH-DMZ-INTERNET-OUT line 10 extended permit tcp 10.134.10.0 255.255.254.0 any eq www
    access-list EXCH-DMZ-INTERNET-OUT line 15 extended permit tcp 10.134.10.0 255.255.254.0 any eq https
    class-map match-all EXCH-DMZ-INTERNET-OUT
      2 match access-list EXCH-DMZ-INTERNET-OUT
    policy-map multi-match EXCH-DMZ-OUT
    class EXCH-DMZ-INTERNET-OUT
        nat dynamic 1 vlan 1001
    interface vlan 756
      description VLAN 744 EXCH DMZ BE
      ip address 10.134.11.253 255.255.255.0
      alias 10.134.11.254 255.255.255.0
      peer ip address 10.134.11.252 255.255.255.0
    access-group input ANY
      service-policy input EXCH-DMZ-OUT
    Or should we instead use the specific access-list in the access-group on the interface, as seen below:
    access-list EXCH-DMZ-INTERNET-OUT line 10 extended permit tcp 10.134.10.0 255.255.254.0 any eq www
    access-list EXCH-DMZ-INTERNET-OUT line 15 extended permit tcp 10.134.10.0 255.255.254.0 any eq https
    class-map match-all EXCH-DMZ-INTERNET-OUT
      2 match access-list EXCH-DMZ-INTERNET-OUT
    policy-map multi-match EXCH-DMZ-OUT
    class EXCH-DMZ-INTERNET-OUT
        nat dynamic 1 vlan 1001
    interface vlan 756
      description VLAN 744 EXCH DMZ BE
      ip address 10.134.11.253 255.255.255.0
      alias 10.134.11.254 255.255.255.0
      peer ip address 10.134.11.252 255.255.255.0
      access-group input EXCH-DMZ-INTERNET-OUT
      service-policy input EXCH-DMZ-OUT
    Regards,

    Hello,
    I don't think you'll find a "best practice" for this scenario.  It really just comes down to meeting your needs.  Your first example is far and away the more commonly seen configuration: you'll only NAT the traffic matching EXCH-DMZ-INTERNET-OUT, but all other traffic will still be forwarded by the ACE, whether it is load balanced or not.  The second way will only allow the NAT'd traffic and deny all others.
    Hope this helps,
    Sean

  • Best Practice for Mass Deleting Transactions

    Hi Gurus
    Can you please guide on this: we need to mass delete Leads from the system. I can use the CRM_ORDER_DELETE FM, but I want to know if there are any best practices to follow, or anything else that I should consider before deleting these transactions from the system.
    We have our archiving policy under discussion, which may take some time, but due to the large volume of redundant data we have some performance issues. For example, when searching for leads using ACE, the system goes through all the lead data.
    That is the reason we are planning to delete those old records. My concern is: when using CRM_ORDER_DELETE, would it clear all the tables for those deleted transactions, and are there any best practices to follow?
    Thanks in Advance.
    Regards.
    -MP
    Edited by: Mohanpreet Singh on Apr 15, 2010 5:18 PM

    Hi,
    Please go through the AppModel application which is available at: http://developers.sun.com/prodtech/javatools/jscreator/reference/codesamples/sampleapps.html
    The OnePage Table Based example shows exactly how to use deleting multiple rows from a datatable...
    Hope this helps.
    Thanks,
    RK.

  • Best practice for GSS design

    Please advise as to what records need to go in the public DNS server in a scenario where I have a URL, say x.y.com, which is listed in the Domain List of the GSS-P, so that the GSS-P or GSS-S can hand out the respective external VIP to clients requesting the URL in case one of the GSSes/sites (GSS-P and GSS-S) becomes unavailable.
    Please also specify the communication path of a client accessing x.y.com.
    Please advise on the best practice.
    Thanks in advance
    ~EM

    Hi,
    I am new to the GSS. I would appreciate it if someone could help me with the design. I want to know if I need to put the GSS inline after the Internet-facing firewall and before the ACE module, or use it in one-arm mode. I am trying to figure out the best fit in the design.
    FWSM1 >>> GSS >>> ACE
    or just put the GSS in one-arm mode, hanging off the path between FWSM1 and the ACE:
    FWSM1 >>> ACE
             |
            GSS
    Thanks in advance,
    Nav

  • 2K8 - Best practice for setting the DNS server list on a DC/DNS server for an interface

    We have been referencing the article 
    "DNS: DNS servers on <adapter name> should include their own IP addresses on their interface lists of DNS servers"
    http://technet.microsoft.com/en-us/library/dd378900%28WS.10%29.aspx but there are some parts that are a bit confusing.  In particular is this statement
    "The inclusion of its own IP address in the list of DNS servers improves performance and increases availability of DNS servers. However, if the DNS server is also a domain
    controller and it points only to itself for name resolution, it can become an island and fail to replicate with other domain controllers. For this reason, use caution when configuring the loopback address on an adapter if the server is also a domain controller.
    The loopback address should be configured only as a secondary or tertiary DNS server on a domain controller.”
    The paragraph switches from using the term "its own IP address" to "loopback address". This is confusing because technically they are not the same: loopback addresses are 127.0.0.1 through 127.255.255.255. The resolution section then goes on and adds the "loopback address" 127.0.0.1 to the list of DNS servers for each interface.
    In the past we always setup DCs to use their own IP address as the primary DNS server, not 127.0.0.1.  Based on my experience and reading the article I am under the impression we could use the following setup.
    Primary DNS:  Locally assigned IP of the DC (i.e. 192.168.1.5)
    Secondary DNS: The assigned IP of another DC (i.e. 192.168.1.6)
    Tertiary DNS:  127.0.0.1
    I guess the secondary and tertiary addresses could be swapped based on the article.  Is there a document that provides clearer guidance on how to setup the DNS server list properly on Windows 2008 R2 DC/DNS servers?  I have seen some other discussions
    that talk about the pros and cons of using another DC/DNS as the Primary.  MS should have clear guidance on this somewhere.

    Actually, my suggestion, which seems to be the mostly agreed method, is:
    Primary DNS:  Locally assigned IP of the DC (i.e. 192.168.1.5)
    Secondary DNS: The assigned IP of another DC (i.e. 192.168.1.6)
    Tertiary DNS:  empty
    The tertiary more than likely won't be hit, (besides it being superfluous and the list will reset back to the first one) due to the client side resolver algorithm time out process, as I mentioned earlier. Here's a full explanation on how
    it works and why:
    This article discusses:
    WINS NetBIOS, Browser Service, Disabling NetBIOS, & Direct Hosted SMB (DirectSMB).
    The DNS Client Side Resolver algorithm.
    If one DC or DNS goes down, does a client logon to another DC?
    DNS Forwarders Algorithm and multiple DNS addresses (if you've configured more than one forwarders)
    Client side resolution process chart
    http://msmvps.com/blogs/acefekay/archive/2009/11/29/dns-wins-netbios-amp-the-client-side-resolver-browser-service-disabling-netbios-direct-hosted-smb-directsmb-if-one-dc-is-down-does-a-client-logon-to-another-dc-and-dns-forwarders-algorithm.aspx
    DNS
    Client side resolver service
    http://technet.microsoft.com/en-us/library/cc779517.aspx 
    The DNS Client Service Does Not Revert to Using the First Server in the List in Windows XP
    http://support.microsoft.com/kb/320760
    Ace Fekay
    MVP, MCT, MCITP EA, MCTS Windows 2008 & Exchange 2007 & Exchange 2010, Exchange 2010 Enterprise Administrator, MCSE & MCSA 2003/2000, MCSA Messaging 2003
    Microsoft Certified Trainer
    Microsoft MVP - Directory Services
    Complete List of Technical Blogs: http://www.delawarecountycomputerconsulting.com/technicalblogs.php
    This posting is provided AS-IS with no warranties or guarantees and confers no rights.
    I agree with this proposed solution as well:
    Primary DNS:  Locally assigned IP of the DC (i.e. 192.168.1.5)
    Secondary DNS: The assigned IP of another DC (i.e. 192.168.1.6)
    Tertiary DNS:  empty
    One thing to note, in this configuration the Best Practice Analyzer will throw the error:
    The network adapter Local Area Connection 2 does not list the loopback IP address as a DNS server, or it is configured as the first entry.
    Even if you add the loopback address as a Tertiary DNS address the error will still appear. The only way I've seen this error eliminated is to add the loopback address as the second entry in DNS, so:
    Primary DNS:  The assigned IP of another DC (i.e. 192.168.1.6)
    Secondary DNS: 127.0.0.1
    Tertiary DNS:  empty
    I'm not comfortable not having the local DC/DNS address listed so I'm going with the solution Ace offers.
    Opinion?

  • Resource class best practice

    I have created a reserved context with a 20% minimum and maximum equal to minimum in every resource, including sticky.
    I also have the default resource class.
    I have also created another resource class with 20% sticky but left everything else at the default 0-100%.
    Our network traffic doesn't put a heavy load on the new load balancer, but what is a good rule of thumb?
    Most of the traffic is HTTP, and at this point we will create about 2 contexts after the Admin.

    Hello!
    This is a very pertinent question, however as many things in life there is no one size fits all here.
    We basically recommend, as best practice, allocating to each specific context only the estimated needed resources. These values should always come from a prior study of the network patterns/load.
    To accommodate growth and scalability it is strongly advised to initially keep as many resources reserved as possible and allocate the unused resources as needed. To accomplish this goal, you should create a reserved resource class, as you did already, with a guarantee of 20 to 40 percent of all ACE resources, and configure a virtual context solely for the purpose of ensuring that these resources stay reserved.
    As you might already know, the ACE protects resources in use; this means that when decreasing a context's resources, the resources must be unused before they can be reused by another context. Although it is possible to decrease the resource allocations in real time, it typically requires additional overhead to clear any used resources before reducing them.
    Based on the traffic patterns, number of connections, throughput, concurrent SSL connections, etc. for each of the sites you will be deploying, you will have a better idea of the estimated needed resources and can then assign them to each of the contexts. This is something that greatly depends on the customer's network environment.
    Hope this helps to clarify your doubts.
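    As a sketch of the reserved-resource-class approach described above (the 30% figure and the names are illustrative; note that sticky is not covered by "all" and must be allocated explicitly):

    ```
    resource-class RESERVED-POOL
      limit-resource all minimum 30.00 maximum equal-to-min
      limit-resource sticky minimum 30.00 maximum equal-to-min

    context PLACEHOLDER-CTX
      member RESERVED-POOL
    ```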

  • _msdcs subdomain best practice with NS records?

    I have the _msdcs subfolder under my domain (the grey folder). example below
    It has only one DC inside of it as an NS server. This DC is old and no longer exists. I checked my test environment and it has the same scenario (an old DC that no longer exists). Example below.
    I'm just wondering:
    1) Is this normal, should this folder update itself with other servers?
    2) should I be adding one of my other DC's? and removing the original?
    I have a single-forest, single-domain setup at the 2008 functional level. My normal _msdcs zone does behave as expected and removes and adds the appropriate records. Thanks.

    I apologize for the late response. I see you've gone further than what I recommended.
    No, you shouldn't have deleted the _msdcs.parent.local zone! I'm not sure why you did that. Are you working with someone else on this who recommended doing that? If not, you're over-thinking it. I provided specifics to fix it by simply updating the NS records, that's it. If you only found that the _msdcs folder had the wrong record, then that's all you had to change.
    In cases where DCs are removed, replaced, upgraded, etc., it's also best practice to check a few things to make sure everything is in order, and one of them is to check the NS records on all zones and delegations. A delegation's NS records won't update automatically with changes, but zone NS records will if DCs are properly demoted.
    The _msdcs delegated zone is required by Active Directory. And yes, per your thread subject, it's best practice. When Windows 2000 came out, and IF you had created the initial domain with it, it was not set up this way, but all domains initially created with Windows 2003 and newer are designed this way. If you upgraded from 2000 to 2003, one of the steps that must be performed is to create the _msdcs delegation.
    Please re-create it in this order:
    In the DNS console, right-click Forward Lookup Zones, and then click New Zone. Click Next.
    On the Zone Type page in the New Zone Wizard, click Primary zone, and then select the Store the zone in Active Directory check box. Click Next.
    On the Active Directory Zone Replication Scope page, click "To all DNS servers in the Active Directory forest parent.local".
    On the Zone Name page, in the Zone Name box, type _msdcs.parent.local.
    Complete the wizard by accepting all the default options.
    After you've done that:
    Delete the _msdcs subfolder under parent.local.
    Right-click parent.local, choose New Delegation.
    Type in _msdcs
    In the Nameserver page, type in the name of your server, and its IP address.
    Complete the wizard
    You should now see a grayed out _msdcs folder under parent.local.
    Go to c:\windows\system32\config\ folder
    Find netlogon.dns and rename it to netlogon.dns.old
    Find netlogon.dnb and rename it to netlogon.dnb.old
    Open a command prompt
    Run ipconfig /registerdns
    Run net stop netlogon
    Run net start netlogon
    Wait a few minutes, then click on the _msdcs.parent.local zone, and click the F5 button to refresh it.
    You should see the data populate.
    Ace Fekay
    MVP, MCT, MCITP/EA, MCTS Windows 2008/R2 & Exchange 2007, Exchange 2010 EA, MCSE & MCSA 2003/2000, MCSA Messaging 2003
    Microsoft Certified Trainer
    Microsoft MVP - Directory Services
    Technical Blogs & Videos: http://www.delawarecountycomputerconsulting.com/
    This post is provided AS-IS with no warranties or guarantees and confers no rights.

  • Best Practices to Mass Delete Leads

    Hi Gurus
    Can you please guide on this: we need to mass delete Leads from the system. I can use the CRM_ORDER_DELETE FM, but I want to know if there are any best practices to follow, or anything else that I should consider before deleting these transactions from the system.
    We have our archiving policy under discussion, which may take some time, but due to the large volume of redundant data we have some performance issues. For example, when searching for leads using ACE, the system goes through all the lead data.
    That is the reason we are planning to delete those old records. My concern is: when using CRM_ORDER_DELETE, would it clear all the tables for those deleted transactions, and are there any best practices to follow?
    Thanks in Advance.
    Regards.
    -MP

    Hello,
    as the root is a single-label domain, you can only get rid of it by migrating to a new forest. You should therefore build a lab first and test the steps. As a tool you can use ADMT.
    http://blogs.msmvps.com/mweber/2010/03/25/migrating-active-directory-to-a-new-forest/
    You might also rethink your design and consider whether an empty root is really needed; there is no technical requirement for it, and it costs only additional hardware and licenses.
    Keep in mind that the new forest MUST use different domain/NetBIOS names, otherwise you cannot create the trust required for the migration steps.
    You can NOT switch a sub-domain to the root, or vice versa.
    Best regards
    Meinolf Weber
    MVP, MCP, MCTS
    Microsoft MVP - Directory Services
    My Blog: http://blogs.msmvps.com/MWeber
    Disclaimer: This posting is provided AS IS with no warranties or guarantees and confers no rights.

  • Best practice SSL End-to-End in Exchange 2010 CAS loadbalancing

    Hi,
    I was wondering if there is a best practice for deploying SSL End-to-End in Exchange 2010 CAS loadbalancing.
    We have ACE modules A5(1.1) and ANM 5.1(0). Although there seems to be a template available in ANM, it doesn't work; it throws an error when deploying, and I believe the template is corrupt.
    As I am under some pressure to deploy this ASAP, I am looking for a sample config. I found one for SSL offloading, but I need one for end-to-end SSL.
    Thanks in advance,
    Dion

    Hi Dion,
    You can open up a case with TAC to have that template reviewed and confirm if the problem is at the ACE or ANM side.
    In the meantime here is a nice example for End-To-End SSL that can help you to get that working:
    http://www.cisco.com/en/US/products/hw/modules/ps2706/products_configuration_example09186a00809c6f37.shtml
    For CAS load balancing there's nothing special other than opening the right ports; I'd advise you to get SSL working first and take it from there. If any problem comes up you can post it here and we'll give you a hand.
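    In outline, end-to-end SSL on the ACE pairs an "ssl-proxy server" (termination) on the client side with an "ssl-proxy client" (initiation) toward the servers. The skeleton below is a hedged sketch with placeholder names and key/cert files, so check it against the linked configuration example before use:

    ```
    ssl-proxy service SSL-TERM
      key mykey.pem
      cert mycert.pem
    ssl-proxy service SSL-INIT

    policy-map type loadbalance first-match LB-CAS
      class class-default
        serverfarm SF-CAS
        ssl-proxy client SSL-INIT

    policy-map multi-match CLIENT-VIPS
      class VIP-443
        loadbalance vip inservice
        loadbalance policy LB-CAS
        ssl-proxy server SSL-TERM
    ```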
    HTH
    Pablo

  • Resource Bundles and Embedded fonts - best practice

    Hello,
    I am in digging into creating a localized app and I would
    like to use embedded fonts.
    Ideally, I would like to have two locales (for example), each
    with a different embedded font and/or unicode range.
    For example, how do I set up my localized app so that EN uses
    Arial (Latin Range), and JP uses Arial Unicode MS with Japanese
    Kanji unicode range ?
    Note that I do know how to embed fonts with different ranges,
    I just don't know how to properly embed them into resource bundles
    and access them easily in style sheets.
    Thanks!
    -Daniel


  • Best practice document for ACE30

    Can someone point me to a best practice document for the ACE30?  I am specifically looking at best practices as they relate to resource allocation, logging, FT, and SNMP.  I am migrating from the CSM, so the VIP/server configuration is basically set.  I am looking for areas that pertain to the ACE as a whole.
    Thank you

    Good afternoon,
    I'm afraid there isn't a best practices document as such, however, I would suggest you to have a look at the ACE section in doc-wiki (http://docwiki.cisco.com/wiki/Cisco_Application_Control_Engine_%28ACE%29_Troubleshooting_Guide).
    This document can give you some useful insights on different topics, including (but not limited to) resource allocation.
    I hope this helps
    Daniel

  • Is account lockout policy still best practice

    Windows Server 2008 r2 (will be moving to 2012 r2)
    Since implementing an account lockout policy two days ago, we've been bombarded by calls to unlock accounts, and after a few minutes the same users get their accounts locked again.
    My question: since we are already using a strong password policy (8 chars min, 90 days max to expire), at this day and age is it still best practice to rely on an account lockout policy, keeping in mind the above flood of calls?

    Just to add, I think it would have been a better idea to broadcast the planned changes organization-wide before implementing something like this.
    Place to check that we usually check and possibly good to let people know:
    Desktops
    Extra Laptops that may not be on site
    Mobile phone Exchange accounts or Office 365 hybrid ADFS accounts
    WIFI profiles on laptops, iPads, other tablets, mobile phones, etc
    Locked workstations that have not been logged off
    Services using a user account or with old credentials - usually I see devs doing this
    Mapped Drives with explicit permissions
    Current running RDP/RDS sessions
    Scheduled Tasks with old credentials
    VPN connections
    etc
    Troubleshooting account lockout the Microsoft PSS way
    http://blogs.technet.com/b/instan/archive/2009/09/01/troubleshooting-account-lockout-the-pss-way.aspx
    Account Lockout and Management Tools
    http://www.microsoft.com/en-us/download/details.aspx?id=18465
    Ace Fekay
    MVP, MCT, MCSE 2012, MCITP EA & MCTS Windows 2008/R2, Exchange 2013, 2010 EA & 2007, MCSE & MCSA 2003/2000, MCSA Messaging 2003
    Microsoft Certified Trainer
    Microsoft MVP - Directory Services
    Complete List of Technical Blogs: http://www.delawarecountycomputerconsulting.com/technicalblogs.php
    This posting is provided AS-IS with no warranties or guarantees and confers no rights.

  • Logical level in Fact tables - best practice

    Hi all,
    I am currently working on a complex OBIEE project/solution where I am going straight to the production tables, so the fact (and dimension) tables are pretty complex, since I am using more sources in the logical tables to increase performance. Anyway, what I am often struggling with is the Logical Levels (in the Content tab), where the level of each dimension is to be set. In a star schema (one-to-many) this is pretty straightforward and easy to set up, but when the Business Model (and physical model) gets more complex I sometimes struggle with the aggregates, to get them to work/appear with different dimensions. (Using the menu "More" - "Get levels" does not always give the best solution... far from it.) I have some combinations of left and right outer joins as well, making it even more complicated for the BI server.
    For instance, I have about 10-12 different dimensions; should all of them always be connected to each fact table, either at the Detail or Total level? I can see the use of the logical levels when using aggregate fact tables (on quarter, month, etc.), but is it better just to skip the logical level setup when no aggregate tables are used? Sometimes it seems like that is the easiest approach...
    Does anyone have a best practice concerning this issue? I have googled for this but I haven't found anything good yet. Any ideas/articles are highly appreciated.

    Hi User,
    For instance - I have about 10-12 different dimensions - should all of them always be connected to each fact table, either at Detail or Total level? It is not necessary to connect to all dimensions; it depends on the report you are creating. But as a best practice we should maintain all of them at the Detail level when you mention any join conditions in the physical layer.
    For example, for the sales table, if you want to report at the ProductDimension.Productname level then you should use the Detail level, otherwise the Total level (at the Product, Employee level).
    Get Levels (available only for fact tables) changes the aggregation content. If joins do not exist between fact table sources and dimension table sources (for example, if the same physical table is in both sources), the aggregation content determined by the Administration Tool will not include the aggregation content of this dimension.
    Source: Admin guide (Get Levels definition)
    thanks,
    Saichand.v
