WLC Configuration Best Practices - no updates since 2008?

There have been no updates to this doc for almost 4 years.
http://www.cisco.com/en/US/partner/tech/tk722/tk809/technologies_tech_note09186a0080810880.shtml
That's a long time for wireless, especially since it still references release 5.2 and we're now on 7.0.  Plus quite a few new AP families have been announced, along with 802.11n, CleanAir, etc.  I think this document is overdue for an update.  Have there been no lessons learned since 2008?  Can anyone from Cisco comment on this?

Guys:
I agree with you. Many docs are old, pretty old.
You can use the Feedback button at the bottom of the doc page and send your feedback to Cisco.
Most of the time they will reply to you, and you can discuss your concern that the doc is very old.
I've done this with more than one doc and config example that described the configuration using images from version 3.x. They updated some of the docs to reflect later releases (6.x and 7.x).
They have no problem updating the docs; they have a good team that creates and updates them. Just be positive, hit that "Feedback" button, and tell them, and they'll surely help. (If not, please tell me; I have a personal contact with the wireless docs manager.)
HTH,
Amjad
You want to say "Thank you"?
Don't. Just rate the useful answers,
that is more useful than "Thank you".

Similar Messages

  • SQL 2008 R2 Best Practices for Updating Statistics for a 1.5 TB VLDB

    We currently have a ~1.5 TB VLDB (SQL 2008 R2) that services both OLTP and DSS workloads pretty much on a 24x7x365 basis. For many years we have been updating statistics (full scan, i.e. 100% sample size) for this VLDB once a week on the weekend, which
    is currently taking up to 30 hours to complete.
    Somewhat recently we have been experiencing intermittent issues while statistics are being updated, which I doubt is just a coincidence. I'd like to understand exactly why the process of updating statistics can cause these issues (timeouts/errors). My theory
    is that the optimizer is forced to choose an inferior execution plan while the needed statistics are in "limbo" (stuck between the "old" and the "new"), but that is again just a theory. I'm somewhat surprised that the "old" statistics couldn't continue to
    get used while the new/current statistics are being generated (like the process for rebuilding indexes online), but I don't know all the facts behind this mechanism yet so that may not even apply here.
    I understand that we have the option of reducing the sample percentage/size for updating statistics, which is currently set at 100% (full scan).  Reducing the sample percentage/size for updating statistics will reduce the total processing time, but
    it's also my understanding that doing so will leave the optimizer with less than optimal statistics for choosing the best execution plans. This seems to be a classic case of not being able to have one’s cake and eat it too.
    So in a nutshell I'm looking to fully understand why the process of updating statistics can cause access issues and I'm also looking for best practices in general for updating statistics of such a VLDB. Thanks in advance.
    Bill Thacker

    I'm with you. Yikes is exactly right with regard to suspending all index optimizations for so long. I'll probably start a separate forum thread about that in the near future, but for now let's stick to the best practices for updating statistics.
    I'm a little disappointed that multiple people haven't already chimed in about this and offered up some viable solutions. Like I said previously, I can't be the first person in need of such a thing. This database has 552 tables, with far more statistics
    objects than that associated with those tables. The metadata has to be there for determining which statistics objects can go (barely or never utilized, so delete them, and also produce an actual script to delete the useless ones identified) and what
    the proper sample percentage/size should be for updating the remaining, utilized statistics (again, also produce a script that can be used for executing the appropriate UPDATE STATISTICS commands for each table based on cardinality).
    The above solution would be much more ideal IMO than just issuing a single UPDATE STATISTICS command that samples the same percentage/size for every table (e.g. 10%). That's what we're doing today at 100% (full scan).
    Come on SQL Server Community. Show me some love :)
    Bill Thacker
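
    A rough sketch of the kind of helper described above, i.e. a script that emits per-table UPDATE STATISTICS commands with a sample size scaled by cardinality. This is only an illustration: the connection string, the row-count thresholds and the generated options are all assumptions, not an official recommendation.

        import pyodbc

        # Hypothetical connection details; adjust for your environment.
        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
            "DATABASE=MyVLDB;Trusted_Connection=yes;"
        )
        cursor = conn.cursor()

        # Approximate row counts per table (heap or clustered index rows).
        cursor.execute("""
            SELECT s.name AS schema_name, t.name AS table_name, SUM(p.rows) AS row_count
            FROM sys.tables t
            JOIN sys.schemas s ON s.schema_id = t.schema_id
            JOIN sys.partitions p ON p.object_id = t.object_id AND p.index_id IN (0, 1)
            GROUP BY s.name, t.name
        """)

        def sample_option(rows):
            # Illustrative tiering only: small tables can afford FULLSCAN,
            # very large ones get a reduced sample.
            if rows < 1_000_000:
                return "FULLSCAN"
            if rows < 100_000_000:
                return "SAMPLE 25 PERCENT"
            return "SAMPLE 5 PERCENT"

        for schema_name, table_name, row_count in cursor.fetchall():
            option = sample_option(row_count or 0)
            print(f"UPDATE STATISTICS [{schema_name}].[{table_name}] WITH {option};")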

  • Best Practice for Updating children UIComponents in a Container?

    What is the best practice for updating children UIComponents in response to a Container being changed? For instance, when a Canvas is resized, I would like to update all the child UIComponents' height and width so the content scales properly.
    Right now I am trying to loop over the children, calling invalidateProperties(), invalidateSize(), and invalidateDisplayList() on each. I know some of the containers, such as VBox and HBox, have layout managers; is there a way to leverage something like that?
    Thanks.

    You would only do that if it makes your job easier. Generally speaking, it would not.
    When trying to sync sound and animation, I think most authors find it easiest to use graphic symbols because you can see their animation when scrubbing the main timeline. With movieclips you only see their animation when testing.
    However, if you're going to use ActionScript to control some of your symbols, those symbols should be movieclips.

  • EBS Supplier best practice to update vendor site code, update or create a new one

    I have a question related to the EBS Supplier vendor site code. The application lets you update the vendor site code, but what is the best practice for updating it? Would you inactivate the existing one and create a new one, or would you just update the existing value?

    Ok,
    My workaround was to put a commit action in my task flow. After that I put two more actions (execute) and then went back to my page. This works, but I would like to know if there is a more efficient way to do this just when I am inserting.
    Regards

  • RAID Level Configuration Best Practices

    Hi Guys ,
    We are building a new virtual environment for SQL Server and have to define the RAID level configuration for the SQL Server setup.
    Please share your thoughts on RAID configuration for SQL data, log, tempdb, and backup files.
    Files  RAID Level 
    SQL Data File -->
    SQL Log Files-->
    Tempdb Data-->
    Tempdb log-->
    Backup files--> .
    Any other configuration best practices are more than welcome,
    such as memory settings at the OS level and LUN settings.
    Best practices to configure SQL Server in Hyper-V with clustering.
    Thank you
    Please Mark As Answer if it is helpful. \\Aim To Inspire Rather to Teach A.Shah

    Hi,
    If you can spend some bucks, you should go for RAID 10 for all files. Also, as a best practice, keeping the database log and data files on different physical drives gives optimum performance. Tempdb can be placed with the data files or on a different drive depending on
    usage. It's always good to use a dedicated drive for tempdb.
    For memory settings, please refer to this link for setting max server memory.
    You should monitor SQL Server memory usage using the counters below, taken from this link.
    SQLServer:Buffer Manager--Buffer Cache Hit Ratio (BCHR): If your BCHR is high (90 to 100), it points to the fact that you don't have memory pressure. Keep in mind that if somebody runs a query which requests a large number of pages,
    BCHR might momentarily come down to 60 or 70, or maybe less, but that does not mean memory pressure; it means the query requires a lot of memory and will take it. After the query completes you will see BCHR rising again.
    SQLServer:Buffer Manager--Page Life Expectancy (PLE): PLE shows how long a page remains in the buffer pool; the longer it stays, the better. It's a common misconception to take 300 as a baseline for PLE, but it is not: I read in
    Jonathan Kehayias' book (Troubleshooting SQL Server) that this value was a baseline when SQL Server 2000 was current and the most RAM you would see was 4-6 GB. Now, with 200 GB of RAM in the picture, that value is no longer correct. He also gave a (tentative) formula
    for calculating it: take the base counter value of 300 presented by most resources, and then determine a multiple of this value based on the configured buffer cache size, which is the 'max server memory' sp_configure option in SQL Server, divided by 4 GB.
    So, for a server with 32 GB allocated to the buffer pool, the PLE value should be at least (32/4)*300 = 2400. So far this has worked well for me, so I would recommend you use it.
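    For reference, the formula above works out like this (a trivial Python sketch; 32 GB is just the worked example from this post and 128 GB an arbitrary larger value):

        def ple_baseline(max_server_memory_gb):
            # (buffer pool size in GB / 4 GB) * 300, per the tentative formula above.
            return (max_server_memory_gb / 4) * 300

        print(ple_baseline(32))   # 2400.0 -> matches the example above
        print(ple_baseline(128))  # 9600.0 -> a bigger buffer pool raises the bar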
    SQLServer:Buffer Manager--Checkpoint Pages/sec: This counter is important for spotting memory pressure, because if the buffer cache is low then lots of new pages need to be brought in and flushed out of the buffer pool;
    under load the checkpoint's work increases and it starts flushing out dirty pages very frequently. If this counter is high then your SQL Server buffer pool is not able to cope with the incoming requests, and you need to address it by increasing buffer pool memory
    or by adding physical RAM and then making adequate changes to the buffer pool size. Technically this value should be low; if you are looking at the line graph in perfmon, it should stay near the baseline on a stable system.
    SQLServer:Buffer Manager--Free Pages: This value should not be low; you always want to see a high value for it.
    SQLServer:Memory Manager--Memory Grants Pending: If you see memory grants pending, your server is facing a SQL Server memory crunch and increasing memory would be a good idea. For memory grants please read this article:
    http://blogs.msdn.com/b/sqlqueryprocessing/archive/2010/02/16/understanding-sql-server-memory-grant.aspx
    SQLServer:Memory Manager--Target Server Memory: This is the amount of memory SQL Server is trying to acquire.
    SQLServer:Memory Manager--Total Server Memory: This is the memory SQL Server has currently acquired.
    For other settings I would suggest you discuss with your vendor. Storage questions IMO should be directed to the vendor.
    The following would surely be a good read:
    SAN storage best practices for SQL Server
    SQLCAT best practices for SQL Server storage
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it.
    My TechNet Wiki Articles

  • Server 2008 R2 RDS HA Licensing configuration best practices

    Hello
    What is the best practice for setting up an HA licensing environment for RDS? I'm using a mixture of RDS CALs for my internal/AD users and an External Connector license for my external/Internet users.
    Daddio

    Hi,
    To ensure high availability you want to have a fallback license server in your environment. The recommended method for configuring Terminal Services
    Licensing servers for high availability is to install at least two Terminal Services Licensing servers in Enterprise Mode with available Terminal Services CALs. Each server will then advertise itself in Active Directory as an enterprise license server under
    the following Lightweight Directory Access Protocol (LDAP) path: //CN=TS-Enterprise-License-Server,CN=site name,CN=sites,CN=configuration-container.
    To get more details on how to set up your license server environment for redundancy and fallback, go over the "Configuring License Servers for High Availability"
    section in the Windows Server 2003 Terminal Server Licensing whitepaper.
    Regards,
    Dollar Wang
    Forum Support
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support,
    contact [email protected]
    Technology changes life……

  • SRST Configuration - Best Practices

    We are starting a new Unified Communication deployment and will have an SRST at each remote location. I am wondering if there are any best practices in regards to the configuration of the SRST.
    For example, does it matter which interface is specified as the source address? I have seen some say it needs to be the LAN address and others say it needs to be a loopback address. Since the phones themselves will be attached to a VLAN on a switch that is connected to the router, is there a benefit either way? Are there any considerations not really covered in the base configuration that should be taken into account as a best practice?
    I am sure I will have more questions as we progress so thanks for the patience in advance...
    Brent                    

    Hi Brent,
    The loopback is used because it is an interface that remains up regardless of the physical layer, so provided that appropriate routing is in place, the loopback address will be reachable through the physical interfaces.
    Best practices on the top of my mind should include looking at the release notes for the software version you're using, check network requirements and compatibility matrix, interworking, caveats, and reserve time for testing.
    I'm sure you'll be just fine
    hth

  • Best practice for updating ATWRT (Characteristic Value) in AUSP

    I've noticed that when we change the characteristic value of a classification, it does not update in the MM record. We have to go into MM02 for each material number that references the characteristic value and manually change it for the row in AUSP to get updated.
    Should I just create a report to loop through and update table AUSP directly? Or is there a better way to do this via a function module or BAPI, etc.? I want to know what best practice is recommended.

    Hi Scott
    You can use a BAPI to do that.
    Check the following thread:
    BAPI to update characteristics in Material master?
    BR
    Caetano

  • DNS Configured-Best Practice on Snow Leopard Server?

    How many of you configure and run DNS on your Snow Leopard server as a best practice, even if that server is not the primary DNS server on the network and you are not using Open Directory? Is configuring DNS a best practice if your server has an FQDN? Does it run better?
    I had an Apple engineer once tell me (this is back in the Tiger Server days) that the servers just run better when DNS is configured correctly, even if all you are doing is file sharing. Is there some truth to that?
    I'd like to hear from you either way, whether you're an advocate for configuring DNS in such an environment, or if you're not.
    Thanks.

    Ok, local DNS services (unicast DNS) are typically straightforward to set up, very useful to have, and can be necessary for various modern network services, so I'm unsure why this is even particularly an open question, which leads me to wonder what other factors might be under consideration here, or what I'm missing.
    The Bonjour mDNS stuff is certainly very nice, too.  But not everything around supports Bonjour, unfortunately.
    As for being authoritative, the self-hosted out-of-the-box DNS server is authoritative for its own zone.  That's how DNS works for this stuff.
    And as for querying other DNS servers from that local DNS server (or, if you decide to reconfigure it and deploy and start using DNS services on your LAN), then that's how DNS servers work.
    And yes, the caching of DNS responses both within the DNS clients and within the local DNS server is typical. This also means that there is no need for references to ISP or other DNS servers on your LAN for frequent translations; no other caching servers and no other forwarding servers are required.

  • Best Practice on Updating From a DB

    Hi Everyone,
    What are some best practices surrounding getting data from an Oracle database into the cache layer when a data change event (insert, update, delete) happens? I've searched far and wide and the best answer I can find is to use Extractor/Replicator -> JMS -> Subscriber -> cache.
    Thank you for your help.

    You're right, DCN is an interesting idea, but it's again a case where the technology works on simple Hello World things but fails to deliver in the real world.
    To me DCN looks like an unfinished Oracle project: a lot of marketing, but poor features. It's good mostly for student projects or test labs, not for real-world complexity.
    Two reasons:
    1. DCN has severe limitations on the complexity of joins and queries if you plan to use the query change notification feature.
    2. It puts too much pressure on the database by creating tons of events you don't need and don't expect, because it's too generic.
    Instead of DCN, create ordinary Oracle AQ queues, using a tiny SQL object type event as the payload, then create triggers and/or PL/SQL stored procedures which fill the event with all the primary keys you need and the unique ID of the object you need to extract.
    Triggers will filter out unnecessary updates, sending events only when you wish.
    If conditions are too complex for triggers, you may create and place events either by a call from the event source app itself or on a scheduled basis; it's entirely up to you. Also, the technique of creating object views and using an INSTEAD OF trigger on the object view works pretty well.
    And finally, implement a listener on the Coherence side, which reads the event, makes the necessary extracts and assembles a Java object ready to be placed into the cache, based on the event ID and the set of the event's primary keys. After the Java object is assembled, you can place it into the cache.
    Don't use Hibernate, TopLink or any other relational-to-object frameworks; they're too slow and add excessive, unnecessary overhead to the process. Use standard Oracle database features; they're much faster and transaction-safe. Usage of these frameworks with a 10g or 11g database is obsolete and caused mainly by a lack of knowledge among Java developers about the database features in this regard.
    In order to make the whole system fail-safe and scalable, you have to implement the listener in a fail-safe fashion, in the form of a work manager plus slave processes spawned on the other nodes. The work manager has to be auto fail-safe and auto scalable, so that if the node holding the work manager instance fails due to a cache cluster member departure, a reset or something else, another work manager is automatically spawned on the first available node.
    Also, the work manager should spread and synchronize the work among the slave listener processes based on the current cache cluster members, automatically re-balancing and recovering work in case of cache member join/departure.
    Out-of-the-box Coherence has an implementation of a work manager, but it's not fail-safe and does not provide the automatic scale-up/recover-work features described above, so you have to implement your own.
    All the features I've described are implemented and happily used in a complex OLTP + workflow system backed by a big Oracle RAC cluster with a huge workload, processing millions of transactions per day.
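
    To make the queue-plus-listener idea above a bit more concrete, here is a minimal, heavily simplified sketch. The original design uses Java, Coherence and an object-typed AQ payload; this version only assumes a hypothetical RAW queue called EVENT_QUEUE carrying JSON, a hypothetical ORDERS table, a plain dict standing in for the cache, and the python-oracledb driver. It is a sketch of the pattern, not the poster's actual implementation.

        import json
        import oracledb

        # Hypothetical connection details and queue name.
        conn = oracledb.connect(user="app", password="secret", dsn="dbhost/orclpdb1")

        cache = {}  # a local dict stands in for the real cache (Coherence in the post)

        def load_object(cursor, keys):
            # Re-read the row identified by the event and assemble the cache value.
            cursor.execute(
                "SELECT order_id, status, amount FROM orders WHERE order_id = :1",
                [keys["order_id"]],
            )
            row = cursor.fetchone()
            return {"order_id": row[0], "status": row[1], "amount": row[2]} if row else None

        queue = conn.queue("EVENT_QUEUE")   # RAW queue; payload arrives as bytes
        queue.deqoptions.wait = 10          # block up to 10 seconds per dequeue

        cursor = conn.cursor()
        while True:
            msg = queue.deqone()            # None when the wait times out
            if msg is None:
                continue
            event = json.loads(msg.payload) # e.g. {"id": "order:42", "keys": {"order_id": 42}}
            value = load_object(cursor, event["keys"])
            if value is not None:
                cache[event["id"]] = value  # place the assembled object into the "cache"
            conn.commit()                   # the dequeue is transactional; commit removes it

    In the real system this loop would live inside the fail-safe work manager / slave listener arrangement described above rather than a bare while True.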

  • IP over Infiniband network configuration best practices

    Hi EEC Team,
    A question I've been asked a few times, do we have any best practices or ideas on how best to implement the IPoIB network?
    Should it be Class B or C?
    Also, what are your thoughts regarding the netmask? If we use /24 it doesn't give us the ability to visually separate two different racks (i.e. Exalogic / Exadata), whereas with a /23 netmask we can do something like:
    Exalogic : 192.168.10.0
    Exadata : 192.168.11.0
    while still being on the same subnet.
    Your thoughts?
    Gavin

    I think it depends on a couple of factors, such as the following:
    a) How many racks will be connected together on the same IPoIB fabric
    b) What rack configuration do you have today, and do you foresee any expansion in the future? It is possible that you will move from a purely physical environment to a virtual environment, and you should consider the number of virtual hosts and their IP requirements when choosing a subnet mask.
    Class C (/24) with 256 IP values is a good start. However, you may want to choose a mask of length 23 or even 22 to ensure that you have enough IPs for running the required number of WLS, OHS, Coherence Server instances on two or more compute nodes assigned to a department for running its application.
    In general, when setting a net mask, it is always important that you consider such growth projections and possibilities.
    By the way, in my view, Exalogic and Exadata need not be in the same IP subnet, especially if you want to separate application traffic from database traffic. Of course, they can be separated by VLANs too.
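    As a quick way to sanity-check the numbers in this thread (a minimal sketch using Python's standard ipaddress module; the 192.168.10.x / 192.168.11.x values are just the example addresses from the question):

        import ipaddress

        # Usable host counts for the candidate masks (network/broadcast excluded).
        for prefix in (24, 23, 22):
            net = ipaddress.ip_network(f"192.168.10.0/{prefix}", strict=False)
            print(f"/{prefix}: {net.num_addresses - 2} usable hosts in {net}")

        # With a /23, the two "visually separated" rack ranges still share one subnet.
        supernet = ipaddress.ip_network("192.168.10.0/23")
        print(ipaddress.ip_address("192.168.10.5") in supernet)  # True (Exalogic example)
        print(ipaddress.ip_address("192.168.11.5") in supernet)  # True (Exadata example)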
    Hope this helps.
    Thanks
    Guru

  • Best practice to update to Snow Leopard

    I just placed my family pack order on Amazon.com for Snow Leopard. But this will be the first time for me doing an OS upgrade on a Mac (all 4 Macs in my house came with Leopard on them so we've only done the "software update" variety). I am a reformed PC guy so humor me!
    What is the best practice to upgrade from 10.5.8 to 10.6? On a PC, my inclination would be to back up my data, reformat the whole drive and install Windows fresh... then all my apps.. then the data. I hate that and it takes hours.
    What is the best practice way to upgrade the Mac OS?

    The best option is Erase and Install. The next best option is Archive and Install. Use the latter if you do not want to or can't erase your startup volume.
    How to Perform an Archive and Install
    An Archive and Install will NOT erase your hard drive, but you must have sufficient free space for a second OS X installation which could be from 3-9 GBs depending upon the version of OS X and selected installation options. The free space requirement is over and above normal free space requirements which should be at least 6-10 GBs. Read all the linked references carefully before proceeding.
    1. Be sure to use Disk Utility first to repair the disk before performing the Archive and Install.
    Repairing the Hard Drive and Permissions
    Boot from your OS X Installer disc. After the installer loads select your language and click on the Continue button. When the menu bar appears select Disk Utility from the Installer menu (Utilities menu for Tiger). After DU loads select your hard drive entry (mfgr.'s ID and drive size) from the left-side list. In the DU status area you will see an entry for the S.M.A.R.T. status of the hard drive. If it does not say "Verified" then the hard drive is failing or failed. (SMART status is not reported on external Firewire or USB drives.) If the drive is "Verified" then select your OS X volume from the list on the left (sub-entry below the drive entry), click on the First Aid tab, then click on the Repair Disk button. If DU reports any errors that have been fixed, then re-run Repair Disk until no errors are reported. If no errors are reported, then quit DU and return to the installer.
    2. Do not proceed with an Archive and Install if DU reports errors it cannot fix. In that case use Disk Warrior and/or TechTool Pro to repair the hard drive. If neither can repair the drive, then you will have to erase the drive and reinstall from scratch.
    3. Boot from your OS X Installer disc. After the installer loads select your language and click on the Continue button. When you reach the screen to select a destination drive click once on the destination drive then click on the Option button. Select the Archive and Install option. You have an option to preserve users and network preferences. Only select this option if you are sure you have no corrupted files in your user accounts. Otherwise leave this option unchecked. Click on the OK button and continue with the OS X Installation.
    4. Upon completion of the Archive and Install you will have a Previous System Folder in the root directory. You should retain the PSF until you are sure you do not need to manually transfer any items from the PSF to your newly installed system.
    5. After moving any items you want to keep from the PSF you should delete it. You can back it up if you prefer, but you must delete it from the hard drive.
    6. You can now download a Combo Updater directly from Apple's download site to update your new system to the desired version as well as install any security or other updates. You can also do this using Software Update.

  • Best practice for DHCP Server 2008 utilization of IP Addresses

    I am currently using 85% of the addresses on my DHCP server running Windows Server 2008. Does Microsoft recommend a particular utilization percentage before building another scope? Or what is the industry's best practice or Microsoft's
    recommendation for building another scope?

    Hi,
    As far as I know, there is no standard for DHCP scope usage. Just make sure that the IP address pool isn't exhausted.
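    As a rough illustration of how scope utilization is usually tracked (plain arithmetic; the 85% figure is simply the number from the question above, not an official Microsoft threshold, and the scope numbers are made up):

        def scope_utilization(pool_size, excluded, active_leases):
            # Percentage of the distributable address pool currently leased.
            available = pool_size - excluded
            return 100.0 * active_leases / available

        # Hypothetical /24 scope: 254 assignable addresses, 20 excluded, 199 leased.
        util = scope_utilization(254, 20, 199)
        print(f"{util:.1f}% used")   # ~85.0%
        if util >= 85:               # the threshold the poster is asking about
            print("Consider adding a scope or widening the range.")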
    For DHCP best practices, please refer to the articles below:
    DHCP Best Practices
    http://technet.microsoft.com/en-us/library/cc780311(v=WS.10).aspx
    Recommended tasks for the DHCP server role
    http://technet.microsoft.com/en-us/library/cc731392.aspx
    Hope this helps.
    Steven Lee
    TechNet Community Support

  • Best practice to update inline/publish folio?

    Hi there,
    I think it's all in my question.
    I have an online application with an online folio and I need to update the same folio with a new version.
    What is the best practice to organize my work?
    Do I have to keep working in InDesign with the same ID but not update/republish it in Folio Producer (this option scares me totally... what if my draft goes online???)
    Or do I have to create another folio and, after testing it, publish it with the same folio name and description? (Not sure it will update the same file, as it is not the same.)
    What is the best practice to organize me/my work/my files?
    Thank you


  • Best Practice for Updating Infotype HRP1001 via Class / Methods

    I want to update an existing (custom) relationship between two positions.
    For example I want
    Position 1 S  = '50007200'
    Position 2 S =  '50007202'
    Relationship = 'AZCR'
    effective today through 99991231
    Is there a best practice or generally accepted way for doing this using classes/methods rather than RH_INSERT_INFTY ?
    If so, please supply an example.
    Thanks...
    ....Mike
