Best practices for network design on WLC 2504 and 5508

Dear all:
I'm looking for some recommendations on the WLC 2504 and 5508 about the following:
Maximum number of APs per port
The scenarios in which to use all ports on both WLCs
Maximum number of clients (users) per port
Bandwidth consumption of management vs. data traffic, to decide whether to dedicate one port to management
I've just found this:
Cisco 5508 controllers have eight Gigabit Ethernet distribution system ports, through which the controller can manage multiple access points. The 5508-12, 5508-25, 5508-50, 5508-100, and 5508-250 models allow a total of 12, 25, 50, 100, or 250 access points to join the controller. Cisco 5508 controllers have no restrictions on the number of access points per port. However, Cisco recommends using link aggregation (LAG) or configuring dynamic AP-manager interfaces on each Gigabit Ethernet port to automatically balance the load. If more than 100 access points are connected to the 5500 series controller, make sure that more than one gigabit Ethernet interface is connected to the upstream switch.
http://www.cisco.com/c/en/us/td/docs/wireless/controller/6-0/configuration/guide/Controller60CG/c60mint.html
Thanks for your help.

The 5508-12, 5508-25, 5508-50, 5508-100, and 5508-250 models allow a total of 12, 25, 50, 100, or 250 access points to join the controller.
This is an old document.  The 5508 can now support up to 500 APs if you run firmware 7.x, and the 2504 can support up to 75 APs if you run firmware 7.4.x.
I'm looking for some recommendations on the WLC 2504 and 5508 about the following:
The best practice and recommendation is to LAG all ports so that you get link redundancy: if one link goes down, the remaining links keep pushing traffic.
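A minimal sketch of what that looks like, assuming a 5508 with all distribution ports wired to one upstream switch (interface numbers are hypothetical). Note the WLC does not negotiate LACP or PAgP, so the switch-side channel-group must be set to mode "on":

    (Cisco Controller) >config lag enable
    (Cisco Controller) >save config
    (Cisco Controller) >reset system

    ! Upstream switch side (hypothetical ports)
    interface range GigabitEthernet1/0/1 - 8
     switchport mode trunk
     channel-group 1 mode on

After the reboot, all distribution ports bundle into a single EtherChannel and the controller balances traffic across whichever links stay up.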

Similar Messages

  • Best Practice For Cube Design

    All,
    First post here, and I was wondering if anyone out there has best practices for cube design or optimisation. We currently have 7 cubes that have been populated for the last 6 months, and I am now looking at ways of speeding up their population.
    Are there any hard and fast rules about dimensions?
    Should they be kept to a percentage of the fact table?
    When should line item dimensions be used?
    Regards
    Gary Boyle

    Hi Gary,
    Ideally the DIM tables should be at most 20% of the size of the fact table, and preferably less; for example, a 10-million-row fact table should have dimension tables of no more than about 2 million rows each. You can check the size ratios in RSRV using the Database tables test > Database info about InfoProvider tables. Line item dimensions should be employed where the characteristic has a large number of unique values (like 0MATERIAL or 0CUSTOMER), so that another DIM ID is not created and the SID values are used directly in the fact table.
    See these for more:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/media/uuid/10b589ad-0701-0010-0299-e5c282b7aaad
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/08f1b622-0c01-0010-618c-cb41e12c72be
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/6ce7b0a4-0b01-0010-52ac-a6e813c35a84
    Hope this helps...

  • Best Practices For Dashboard design  through BI

    Hi
    If I want to create a dashboard through BI 7, what is the best way to suggest to the client? Which approach does SAP recommend as best practice for dashboard design in BI 7?
    Thanks & Regards,
    Praveen

    Solved

  • Best Practices for CS6 - Multi-instance (setup, deployment and LBQ)

    Hi everyone,
    We recently upgraded from CS5.5 to CS6 and migrated to a multi-instance server from a single-instance. Our current applications are .NET-based (C#, MVC) and are using SOAP to connect to the InDesign server. All in all it is working quite well.
    Now that we have CS6 (multi-instance) we are looking at migrating our applications to use the LBQ features to help balance the workload on the INDS server(s). Where can I find some best practices for code deployment/configuration, etc for a .NET-based platform to talk to InDesign?
    We will be using the LBQ to help with load management for sure.
    Thanks for any thoughts and direction you can point me to.
    ~Allen

    Please see if the Metalink note below guides you:
    Symmetrical Network Acceleration with Oracle E-Business Suite Release 12 [ID 967992.1]
    Thanks,
    JD

  • Best practice for GSS design

    Please advise as to what records need to go in the public DNS server in a scenario where I have a URL, say x.y.com, which is listed in the domain list of the GSS-P, so that the GSS-P or GSS-S can hand out the respective external VIP to clients requesting the URL in case one of the GSSes/sites (GSS-P and GSS-S) becomes unavailable.
    Please also specify the communication path of a client accessing x.y.com.
    Please advise on the best practice.
    Thanks in advance
    ~EM

    Hi,
    I am new to GSS. I would appreciate it if someone could help me with the design. I want to know if I need to put the GSS inline after the internet-facing firewall and before the ACE module, or use it in one-arm mode. I'm trying to figure out the best fit in the design.
    FWSM1 >>> GSS >>> ACE
    or
    just put the GSS in one-arm mode hanging off the FWSM1 >>> ACE path:
    FWSM1 >>> ACE
              |
             GSS
    Thanks in advance,
    Nav

  • Best Practice for Networking in UCS required

    Hi
    We are planning to deploy UCS in our environment. Fabric Interconnects A and B will need to connect to a pair of Catalyst 4900M switches. What is the best practice for connecting them? How should the 4900M switches be configured? Can I do port channels in UCS?
    Appreciate your help.
    Regards
    Kumar

    I highly recommend you review Brad Hedlund's videos regarding UCS networking here:
    http://bradhedlund.com/2010/06/22/cisco-ucs-networking-best-practices/
    You may want to focus on Part 10 in particular, as this talks about running UCS in end-host mode without vPC or VSS.
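    And yes, port channels are supported in UCS: uplink port channels are created in UCS Manager (under the LAN tab) and they negotiate LACP. A hedged sketch of the matching configuration on one 4900M, with hypothetical interface numbers, for the uplinks coming from Fabric Interconnect A:

        ! Catalyst 4900M side; UCS FI uplink port channels run LACP, hence "mode active"
        interface Port-channel10
         switchport mode trunk
        !
        interface range TenGigabitEthernet1/1 - 2
         switchport mode trunk
         channel-group 10 mode active

    Repeat the same on the second 4900M for Fabric Interconnect B; with the FIs in end-host mode, no vPC or VSS is needed between the two switches.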
    Regards,
    Matt

  • What is the Best Practice for publishing Offline Root CA Cert and CRL to Active Directory?

    Hi,
    I've read and seen in a few labs different approaches to what is published in Active Directory for an Offline Root CA. I've seen just the Root Cert published to AD, as well as the Root Cert and the Root CRL published to AD.
    I can understand why the Root Cert is published to AD, but why would the Root CRL need to be published to AD, especially if my Offline Root CA just issues the Cert for my Subordinate Issuing CA?  So looking for Best Practices here.
    Thanks for your help! SdeDot

    On Sun, 22 Feb 2015 18:44:25 +0000, Andrzej Kazmierczak wrote:
    Best practice is to publish CRL to 2 alternative paths - LDAP for your internal users to access them on the first place and HTTP as an alternative option to LDAP and as the only option for your external users.
    No, the current recommended best practice is to publish to a highly available HTTP location first (and possibly make it the only CDP) that is available both internally and externally. This covers Windows and non-Windows devices, domain-joined and non-domain-joined devices, internal and external devices, as well as multi-forest scenarios with no trust between forests.
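    For the mechanics, a hedged sketch (file names and the web share are hypothetical; run certutil from an elevated prompt on a domain-joined machine):

        :: Publish the offline root's certificate (and, if you choose to, its CRL) to AD
        certutil -dspublish -f OfflineRootCA.crt RootCA
        certutil -dspublish -f OfflineRootCA.crl
        :: For the HTTP CDP/AIA location, copy the files to the web server
        copy OfflineRootCA.crt \\webserver\pki$\
        copy OfflineRootCA.crl \\webserver\pki$\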
    Paul Adare - FIM CM MVP

  • Best practices for realtime communication between background tasks and main app

    I am developing (in fact, porting to a WinRT universal app) an application that connects to Bluetooth medical devices. To support background connectivity, it seems best to use background tasks triggered by a device connection. However, some of these devices provide a stream of data which has to be passed to the main app in real time when it is active, e.g. to show an ECG on the screen. So my task ideally should receive and store data all the time (both background and foreground) and additionally let the main app receive it live when the app is in the foreground.
    My question is: how do I make the background task pass real-time data to the app when it is active? The documentation talks about using storage, but that does not seem optimal for real-time messaging. I'm looking for best practices and advice. The platform is Windows 8.1 and Windows Phone 8.1.

    Hi Michael,
    Windows Phone apps have resource quotas. To prevent those quotas from interfering with real-time communication functionality, background tasks using the ControlChannelTrigger and PushNotificationTrigger receive guaranteed resource quotas for every running task. You can find more information at
    https://msdn.microsoft.com/en-us/library/windows/apps/xaml/Hh977056(v=win.10).aspx (see the "Background task resource guarantees for real-time communication" section). ControlChannelTrigger is not supported on Windows Phone, so have a look at the PushNotificationTrigger class instead:
    https://msdn.microsoft.com/en-us/library/windows/apps/xaml/windows.applicationmodel.background.pushnotificationtrigger.aspx
    Regards,

  • Sanity check - Best practice for network configuration

    Basic configuration is this:
    The physical server has two interfaces on two different networks; generically we reference them as Front End and Back End.
    Back End is dedicated to storage (nfs from netapp or Sun 74xx)
    xenbr0 associated with eth0 (front end)
    xenbr1 associated with eth1 (back end)
    For each virtual machine, we have been creating two interfaces: vif0 on xenbr0 and vif1 on xenbr1.
    The desire is to have all disk I/O use the vif1 -> xenbr1 -> eth1 path. So far it seems to be working that way.
    We are questioning the setup because we have seen this sort of error when shutting down a VM:
    nfs: server axqntap1 not responding, still trying
    In case it matters, mount options inside the vm are: rw,bg,hard,intr,timeo=600,proto=tcp,vers=3,rsize=32768,wsize=32768
    Any advice, ideas? Are we all wrong with the bridge config? Mount options?
    Thank you - Randall

    Shut down applications within the guest, then either power off from Oracle VM Manager or run 'xm shutdown xxx' from the command line.
    It is possible one or more NFS files are still open when the shutdown is initiated.
    We have also found at least one case of a misconfigured IP which resulted in disk access going via the 'Front End' interface rather than the Back End.
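    A quick, hedged sanity check to confirm NFS traffic really leaves via the back end (the IP below is hypothetical; use your filer's back-end address):

        # Ask the kernel which interface it would use to reach the filer
        ip route get 10.0.1.50
        # Expect something like: 10.0.1.50 dev eth1 src 10.0.1.20

    If the output shows eth0, the guest's IP or routing configuration is pushing storage traffic out the front end.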
    Thanks

  • Advice re best practice for managing the scan listener logs and list logs

    Hi friends,
    I've just started a job as a RAC DBA for some big 24x7 systems; I've never worked with Clusterware or RAC before.
    Two space problems:
    1) Very large listener_scan2.log in /u01/11.2.0/grid/log/diag/tnslsnr/<server name>/listener_scan2/trace folder
    2) Heaps of log_nnn.xml files in /u01/11.2.0/grid/log/diag/tnslsnr/<server name>/listener_scan2/alert folder (4Gb used up)
    I'd welcome advice on the best way to manage these in the short term (i.e. delete manually) and on the recommended, safest long-term practice (ADRCI maybe? I'm not sure how it works with SCAN listeners).
    I'd also welcome commands that could be used to safely clean these up, plus a robust mechanism for logfile management in RAC and Clusterware systems.
    Finally, should I be checking the log files in /u01/11.2.0/grid/log/diag/tnslsnr/<server name>/listener_scan2/alert regularly?
    My experience with listener logs is that they are only looked at when there are major connectivity issues and on the whole are ignored.
    Thanks for your help,
    Cheers, Rob

    Have you had any issues that require them for investigative purposes? If not, just remove them. Are the logs required for some sort of audit process? If yes, gzip them to a location where you can use your OS tape backup policies to retain them for n-days. Once you remove an active file, it should recreate the file and continue without interruption.
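    Since these files live under ADR, here is a hedged adrci sketch (the home path below is hypothetical; list yours with "show homes" first). The age is given in minutes, and note the active listener_scan2.log text file may still need manual rotation:

        adrci exec="show homes"
        # 10080 minutes = 7 days; purges expired alert (log_nnn.xml) and trace data
        adrci exec="set home diag/tnslsnr/myserver/listener_scan2; purge -age 10080"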

  • About max local MAC filtering can be register in WLC 2504 and 5508

    Hi all
    My customer is considering using the WLC MAC filtering feature (using the local database, not an external RADIUS server). So before buying, they are concerned about the maximum number of local MAC filtering entries that can be registered on the WLC 2504 and WLC 5508 (the number of APs is about 20, but there are more than 200 MACs).
    I tried to search, but I could not find any spec that mentions it. If anyone knows, please help answer this.
    Rgds

    I looked at this before. I want to say it's capped at 2,048 entries regardless of the model:
    http://www.cisco.com/c/en/us/support/docs/wireless-mobility/wlan-security/91901-mac-filters-wlcs-config.html
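    For reference, local entries are added and checked from the WLC CLI like this (the MAC address and WLAN ID below are hypothetical):

        (Cisco Controller) >config macfilter add 00:11:22:33:44:55 1
        (Cisco Controller) >show macfilter summary

    So 200+ entries should be well within the limit on either model.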

  • Universe Design Best Practices for Oracle

    Hello All,
    We recently moved from XI R2 on MS SQL 2005 to XI 3.1 on Oracle. This has been a difficult move for us, as my team is new to Oracle. I'm currently working on several performance issues between BOBJ and Oracle and am looking for documentation on best practices for universe design with Oracle. I've found tidbits here and there regarding the JOIN_BY_SQL and BOUNDARY_WEIGHT_TABLE parameters, and I'm wondering if there are other options out there that might help. We have queries taking 45+ minutes to run, and that is totally unacceptable.
    thanks
    Andrea

    I am not sure if you are looking for optimization or something else; sorry if I've misread. The following links might help, considering Oracle as the DB.
    Link:[Universe Optimization 1|http://www.bidwtoday.com/business-objects/universe-designer/business-objects-universe-optimization/]
    Link:[Universe Optimization 2|http://forums.sdn.sap.com/post!reply.jspa?messageID=8721932]
    --Kuldeep

  • Best Practices for Using Photoshop (and Computing in General)

    I've been seeing some threads that lead me to realize that not everyone knows the best practices for doing Photoshop on a computer, and for doing conscientious computing in general.  I thought it might be a good idea for those of us with some experience to contribute and discuss best practices for making the Photoshop and computing experience more reliable and enjoyable.
    It'd be great if everyone would contribute their ideas, and especially their personal experience.
    Here are some of my thoughts on data integrity (this shouldn't be the only subject of this thread):
    Consider paying more for good hardware. Computers have almost become commodities, and price shopping abounds, but there are some areas where spending a few dollars more can be beneficial.  For example, the difference in price between a top-of-the-line high performance enterprise class hard drive and the cheapest model around with, say, a 1 TB capacity is less than a hundred bucks!  Disk drives do fail!  They're not all created equal.  What would it cost you in aggravation and time to lose your data?  Imagine it happening at the worst possible time, because that's exactly when failures occur.
    Use an Uninterruptible Power Supply (UPS).  Unexpected power outages are TERRIBLE for both computer software and hardware.  Lost files and burned out hardware are a possibility.  A UPS that will power the computer and monitor can be found at the local high tech store and doesn't cost much.  The modern ones will even communicate with the computer via USB to perform an orderly shutdown if the power failure goes on too long for the batteries to keep going.  Again, how much is it worth to you to have a computer outage and loss of data?
    Work locally, copy files elsewhere.  Photoshop likes to be run on files on the local hard drive(s).  If you are working in an environment where you have networking, rather than opening a file right off the network, then saving it back there, consider copying the file to your local hard drive then working on it there.  This way an unexpected network outage or error won't cause you to lose work.
    Never save over your original files.  You may have a library of original images you have captured with your camera or created.  Sometimes these are in formats that can be re-saved.  If you're going to work on one of those files (e.g., to prepare it for some use, such as printing), and it's a file type that can be overwritten (e.g., JPEG), as soon as you open the file save the document in another location, e.g., in Photoshop .psd format.
    Save your master files in several places.  While you are working in Photoshop, especially if you've done a lot of work on one document, remember to save your work regularly, and you may want to save it in several different places (or copy the file after you have saved it to a backup folder, or save it in a version management system).  Things can go wrong and it's nice to be able to go back to a prior saved version without losing too much work.
    Make Backups.  Back up your computer files, including your Photoshop work, ideally to external media.  Windows now ships with a quite good backup system, and external USB drives with surprisingly high capacity (e.g., Western Digital MyBook) are very inexpensive.  The external drives aren't that fast, but a backup you've set up to run late at night can finish by morning, and if/when you have a failure or loss of data, you'll be glad it did.  And if you're really concerned with backup integrity, you can unplug an external drive and take it to another location.
    This stuff is kind of "motherhood and apple pie" but it's worth getting the word out I think.
    Your ideas?
    -Noel

    APC Back-UPS XS 1300.  $169.99 at Best Buy.
    Our power outages here are usually only a few seconds; this should give my server about 20 or 25 minutes run-time.
    I'm setting up the PowerChute software now to shut down the computer when 5 minutes of power is left.  The load with the monitor sleeping is 171 watts.
    This has surge protection and other nice features as well.
    -Noel

  • Best practice for Global Address?

    Good Morning,
    I am new to Cisco firewalls and would like to know the best practice for exposing an external IP address and port into my network and then redirecting that traffic to a specific machine.  I am thinking of using a global IP address and then only allowing this type of traffic to talk to the specific destination on that specific port.  Is this the correct course of action?  Or is there a better or more efficient way of doing this using ASDM?
    Troy

    Hi,
    Basically when you are attempting to allow traffic from the external public network to some of your servers/hosts you will either use Static NAT or Static PAT
    Static NAT is when you bind a single public IP address to be used by only one internal host. This is usually the preferred option if you can spare a single public IP address for your server, meaning you probably have a small public subnet from your ISP.
    Static PAT is when you only allocate certain ports on your public IP address and map them to a local port on the host. This is usually the option when you only have a single public IP address that is configured on your ASA's external interface, or perhaps in a situation where you just want to conserve your public IP addresses even though you might have a few of them.
    In Static NAT case you configure the Static NAT and use the interface ACL to allow the services you require.
    In Static PAT you only create a translation for a specific port/service so only connections to that port are possible. Naturally you will also have to allow those services/ports in the interface ACL just like with Static NAT.
    Again if you can spare the public IP addresses then I would go with Static NAT or if you only have a single or few IP addresses you can consider Static PAT (Port Forward) also.
    I don't personally use ASDM for configurations, but I can help you with the required CLI-format configurations. These can actually be entered through ASDM as well, from the Tools -> Command Line Interface menu at the top.
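    For example, a hedged sketch in ASA 8.3+ syntax (the object name, inside host, and port are hypothetical) of a Static PAT that forwards TCP/3389 on the outside interface IP to one internal server:

        object network OBJ-RDP-SERVER
         host 192.168.1.10
         nat (inside,outside) static interface service tcp 3389 3389
        !
        access-list OUTSIDE-IN extended permit tcp any object OBJ-RDP-SERVER eq 3389
        access-group OUTSIDE-IN in interface outside

    For Static NAT you would use a spare public IP in place of "interface service tcp 3389 3389" and open the required ports in the same ACL.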
    Hope this helps
    - Jouni

  • Best Practices for FSCM Multiple systems scenario

    Hi guys,
    We have a scenario to implement FSCM credit, collections and dispute management solution for our landscape comprising the following:
    a 4.6c system
    a 4.7 system
    an ECC 5 system
    2 ECC6 systems
    I have documented my design, but would like to double-check and compare notes with colleagues regarding the following areas/questions.
    Business partner replication and synchronization: what is the best practice for the initial replication of customers in each of the different systems to business partners in the FSCM system? (a) for the initial creation, and (b) for on-going synchronization of new customers and changes to existing customers?
    Credit Management: what is the best practice for update of exposures from SD and FI-AR from each of the different systems? Should this be real-time for each transaction from SD and AR  (synchronous) or periodic, say once a day? (assuming we can control this in the BADI)
    Is there any particular point to note in dispute management?
    Any other general note regarding this scenario?
    Thanks in advance. Comments appreciated.

    Hi,
    I guess when you have information that SAP can read and act on, the interface has to be asynchronous (from non-SAP to FSCM).
    But when the credit analysis is done by a non-SAP party like Experian, SAP sends the information about invoices paid and not paid, and the non-SAP agency returns a rating for that customer. All banks and big companies in the world do the same, and for that you have the synchronous interface. This interface will update FSCM-CR (Credit), blocking the customer or not, and decreasing or increasing their limit amount to buy.
    So, for those 1,000 sales orders, you'll have to think with your PI team about how to create an interface for this volume. What parameters does SAP have to check? Is there a time interval to receive and send back? Will it be synchronous or asynchronous?
    Contact your PI team to help think through this information exchange.
    Have I understood your question?
    JPA
