HA or Dual Controllers

Hello,
We want to expand our wireless network. We currently have a 5508 controller with 150 APs, we will be adding another 100 APs soon, and we are considering adding either an HA controller or a second standalone controller. The current APs are almost all 1142s, with about ten 1131s; the new APs will most likely be 2602i's.
I see that the new 5760s are out, and I like the fact that they are based on IOS, but I don't think I can justify the cost of that controller for a small/medium deployment like ours.
I was leaning towards an HA 5508 controller, mainly because it would be a simple design and I would not have to deal with configuring two controllers.
Any recommendations?
Does anyone know if the 5508's will be EOL soon?
Thanks,
Dan.

Wow..... I would talk to your local Cisco SE about that, or search the forums for any EOL notices that have been published. I don't want to break any NDAs with Cisco. The Bug Toolkit, along with the release notes, can offer some hints as well.
Thanks,
Scott
Help out others by using the rating system and marking answered questions as "Answered"

Similar Messages

  • Dual Controllers, Single Point of Failure?

    As a member of a professional services group within an IT sales organization, my team's focus is on evaluating our customers' business problems and engineering solutions, in the form of the products and services we offer, to fix the issue. We are, by definition, "problem solvers." This reminds me of the old adage, "there is more than one way to skin a cat." Well, Mr. Customer, there is more than one way to solve your storage issue, more than one way to clear up that excessive network traffic, and more than one way to leverage virtualization to increase consolidation and availability. Read More..
    This topic first appeared in the Spiceworks Community

    Hi Friend,
    Your question is very broad.
    Redundancy can apply to the LAN, the WAN, routing, etc.
    On the LAN there are many features you can use to get complete redundancy, such as STP (a primary root bridge and a secondary root bridge), EtherChannel between access and distribution and between distribution and core, and UplinkFast for fast convergence;
    you can also run HSRP. So what I am saying is that there are many features available to achieve complete redundancy.
    On the WAN side you can implement leased lines, Frame Relay, and ISDN as backup, or you can play around with your routing protocols and static routes for redundancy.
    Nowadays there are many IOS features for the LAN and WAN that can be used for complete redundancy, so if you are aware of these features you can design your network very well.
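    To make the HSRP idea above concrete, here is a minimal sketch in Cisco IOS CLI. It is only an illustration; the VLAN number, addresses, and priorities are made-up examples, not anything from the original question.
    ! Distribution switch A - active gateway for VLAN 10 (example values)
    interface Vlan10
     ip address 10.0.10.2 255.255.255.0
     standby 10 ip 10.0.10.1
     standby 10 priority 110
     standby 10 preempt
    ! Distribution switch B - standby gateway for the same virtual IP
    interface Vlan10
     ip address 10.0.10.3 255.255.255.0
     standby 10 ip 10.0.10.1
     standby 10 priority 90
     standby 10 preempt
    Hosts on VLAN 10 point their default gateway at the virtual IP 10.0.10.1, so if switch A fails, switch B takes over the gateway address without any client reconfiguration.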
    HTH
    Ankur

  • Dual HBA and dual RAID controllers

    PCI Dual Ultra3 SCSI Host adapter, single host (V210), StorEdge 3320 with dual RAID controllers.
    Looking for best speed and reliability - can't find any documentation explaining how best to utilize dual controllers in SE-3320 RAID array.
    The only reference is in the "Sun StorEdge 3000 Family Best Practices" document, which says:
    Use dual-controller arrays to avoid a single point of failure. A dual-controller SCSI array features a default active-to-active controller configuration. This configuration improves application availability because, in the unlikely event of a controller failure, the array automatically fails over to a second controller, resulting in no interruption of data flow.
    Does the HBA provide the same active-active configuration?
    Can BOTH ports of the HBA be connected to the dual controllers (one-to-one) to provide redundancy/failover at both ends?
    Does this provide better throughput?

    problem is starting to look like hardware or firmware...
    installed patch 124750-02, T2000 firmware update
    installed patch 123305-02, QLogic firmware update
    after updates: port 1 works fine, port 2 still has flashing lights
    as long as the fiber port lights flash, there is no communication occurring,
    so there are no LDs or LUNs to report, even at OBP.
    the fiber cable is proven good, the controller port is proven good...
    guessing it's a bad HBA GBIC port; I can't explain it any other way.
    turned on extended-logging=1; in qlc.conf
    /var/adm/messages reports a problem on port 2 during boot...
    qlc(0): ql_check_isp_firmware: Load RISC code
    qlc(0): ql_fw_ready: mailbox_reg[1] = 4h
    ql_async_event, 8013h LIP received
    etc...
    NOTICE: Qlogic qlc(0): Loop ONLINE
    NOTICE: qlc(0): Firmware version 4.0.23
    qlc(1): ql_check_isp_firmware: Load RISC code
    qlc(1): ql_fw_ready: mailbox_reg[1] = 4h
    NOTICE: Qlogic qlc(1): Loop OFFLINE
    NOTICE: qlc(1): Firmware version 4.0.23
    qlc(1): ql_fw_ready: failed = 100h

  • VPCs for Dual NetApp Controllers

    OK, here is our setup:
    1 3240 NetApp with Dual Controllers (Gen 1)
    2 5548s
    2 2232s
    Servers with Dual Port Emulex CNAs (Gen 2)
    1 4507 to legacy network
    I am trying to figure out how to be as redundant as possible, so if we lose a 2K or a 5K the servers can still get to the SAN.
    Questions:
    How are the port channels set up? How are the vPCs set up? LACP is enabled on the NetApp. Can the 5Ks present a single vPC to each controller?
    Thanks for any advice or help you can provide,
    P.

    Run vPC on the two NX5K switches.
    From the NetApp FAS, connect one Ethernet link to the first 5K and the second Ethernet link to the second 5K.
    On each 5K, configure a port-channel whose member interface goes down to the NetApp FAS.
    Associate each port-channel with a vPC ID; the vPC ID must be the same on both switches.
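    To make those steps concrete, here is a minimal NX-OS sketch to be mirrored on both 5548s. It is only an illustration; the vPC domain ID, keepalive addresses, port-channel numbers, and interface numbers are placeholders, not taken from the original setup.
    feature vpc
    feature lacp
    vpc domain 10
      peer-keepalive destination 192.168.1.2 source 192.168.1.1
    interface port-channel 1
      description vPC peer-link between the two 5548s
      switchport mode trunk
      vpc peer-link
    interface port-channel 20
      description vPC down to NetApp controller A (one member link per 5548)
      switchport mode trunk
      vpc 20
    interface Ethernet1/10
      description member link to NetApp controller A
      switchport mode trunk
      channel-group 20 mode active
    A second port-channel/vPC (for example vpc 21) would be built the same way for NetApp controller B, and the NetApp side would run one LACP interface group per controller with one link going to each 5548.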

  • Unix layout question: single vs. multiple logical volumes

    Hello friends,
    I have a question on which I have seen various points of view. I'm hoping you might be able to give me better insight so I can either confirm my own sanity or accept a new paradigm shift in laying out the file system for best performance.
    Here are the givens:
    Unix systems (AIX, HP-UX, Solaris, and/or Linux).
    Hardware RAID system on a large SAN (in this case, RAID-5 striped over more than 100 physical disks).
    (We are using AIX 6.1 with CIO turned on for the database files).
    Each physical volume is literally striped over at least 100 physical disks (spindles).
    Each Logical Volume is also striped over at least 100 spindles (all the same spindles for each lvol).
    Oracle software binaries are on their own separate physical volume.
    Oracle backups, exports, flash-back-query, etc., are on their own separate physical volume.
    Oracle database files, including all tablespaces, redo logs, undo ts, temp ts, and control files, are in their own separate physical volume (made up of logical volumes that are each striped over at least 100 physical disks (spindles)).
    The question is whether it makes any sense (and WHY) to break up the physical volume that is used for the Oracle database files themselves into multiple logical volumes. At what point does it make sense to create individual logical volumes for each datafile, or each type of file, versus putting them all in a single logical volume?
    Does this do anything at all for performance? If the volumes are logical, then what difference would it make to put them into individual logical volumes that are striped across the same one-hundred (+) disks?
    Basically ALL database files are in a single physical volume (LUN), but does it help (and WHY) to break up the physical volume into several logical volumes for placing each of the individual data files (e.g., separating system ts, from sysaux, from temp, from undo, from data, from indexes, etc.) if the physical volume is created on a RAID-5 (or RAID-10) disk array on a SAN that literally spans across hundreds of high-speed disks?
    If this does make sense, why?
    From a physical standpoint, there are only 4 hardware paths for each LUN, so what difference does it make to create multiple 'logical' volumes for each datafile, or for separating types of data files?
    From an I/O standpoint, the multi-threading of the operating system should only be able to use the number of pathways available based on the various operating system options (e.g., multicore CPUs using SMT, simultaneous multithreading). But I believe these are still based on physical paths, not on logical volumes.
    I look forward to hearing back from you.
    Thanks.
    ji li

    Thanks for your reply damorgan.
    We have dual HBAs in our servers as standard equipment, along with dual controllers.
    I totally agree with the idea of getting rid of RAID-5, but that is not my choice.
    We have a very large (massive) data center, and the decision to use RAID-5 was at the discretion of our Unix team some time ago. Their idea is one-size-fits-all. When I questioned it, my concerns were dismissed. After all, what do I know? I've only been a sysadmin for 10 years (but on HP-UX and Solaris, not on AIX), and I've only been an Oracle DBA for nearly 20 years.
    For whatever it is worth, they also mirror their RAID-5, so in essence, it is a RAID 5-1-0 (RAID-50).
    Anyway, as for the hardware paths, from my understanding there are only 4 physical hardware paths going from the servers to the switches, to the SAN, and back. Their claim (the Unix team's) is that by using multiple logical volumes within a single physical volume, they increase the number of 'threads' available to pull data from the stripe. This is the part I don't understand, and it may be specific to AIX.
    So if each logical volume is a stripe within a physical volume, and each physical volume is striped across more than one hundred disks, I still don't understand how multiple logical volumes can increase I/O throughput. From my understanding, if we only have four paths and there are 100+ spindles, even if it did increase I/O somehow by the way AIX uses multipathing with its CPUs, how could it have any effect on the I/O? And if it did, the effect would still have to be negligible.
    Two years ago, I personally set up three LUNs on a pair of Sun V480s (RAC'd) connected to a Sun Storage 3510 SAN: one LUN for Oracle binaries, one for database datafiles, and one for backups and archive logs. I then put all my datafiles in a single logical volume on one LUN and had fantastic performance for a very intense database that literally had 12,000 to 16,000 simultaneous active* connections using WebSphere connection pools. While that was a Sun system and now I'm dealing with an AIX P6 570 system, I can't imagine the concepts being that much different, especially when the servers are basically comparable.
    Any comments or feedback appreciated.
    ji li
    Edited by: ji li on Jan 28, 2013 7:51 AM

  • Map a SAN LUN to a WS2012 R2 virtual machine

    Hello,
    I need some help from someone who has dealt with this before.
    I have:
    an HP DL380 G6 running the unlimited eval copy of Hyper-V Server 2012 R2.
    an HP MSA 2040 SAN enclosure.
    a WS2012 R2 Std. x64 virtual machine running on the Hyper-V Server 2012 R2 host.
    an HP/LSI SAS 9207-8e host adapter.
    I've been asked to configure the WS2012 R2 Std. x64 virtual machine as a file server, but really the data will be on an MSA 2040 SAN enclosure volume, so I need to map a LUN from the MSA 2040 to the WS2012 R2 Std. x64 virtual machine.
    Of course I can't actually install the HP/LSI SAS 9207-8e host adapter into the WS2012 R2 Std. x64 virtual machine, but is there an emulation/software solution for this?
    If I install the physical HP/LSI SAS 9207-8e host adapter into the physical Hyper-V Server 2012 R2 host, will I be able to add a virtual version of said host adapter to my VM?
    I'm searching and reading posts now, but if someone who knows has the answer, I would really appreciate a pointer or two.
    Thanks in advance for your time and help.
    SOLVED!
    Hello Elton_Ji/Darshana Jayathilake
    And thanks for trying to help, but please stop repeating disinformation.
    It CAN be done; I did it last week with the specific hardware/software listed.
    Domain Environment - Server 2003 AD with DFL/FFL set to Windows Server 2003.
    1 - HP DL380 G6 running the unlimited eval of Hyper-V Server 2012 R2 (core).
    1 - member server (desktop class hardware) running WS2012 R2 Std. (used for remote mgmt.).
    (there are 5/6 netsh advfirewall firewall commands that need to be run in order to use the Computer Mgmt. console on a WS2012 R2 Std box to manage the Hyper-V Server core install; look 'em up, or see the sketch just after this hardware list.).
    1 - HP MSA 2040 Enclosure with Dual Controllers - purchased in the last two months; no firmware changes 
    have been made.
    1 - HP/LSI SAS 9207-8e HBA - purchased in the last two months.
    2 - SFF-8644 to SFF-8088 SAS HBA connector cables - purchased in the last two months.
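    For reference, the firewall rule groups I had to open are sketched below. The exact set you need may differ, and these are the standard English group names, so treat them as an assumption and adjust for your environment (some, such as Remote Volume Management, must also be enabled on the managing WS2012 R2 box, not just the core host).
    netsh advfirewall firewall set rule group="Windows Firewall Remote Management" new enable=yes
    netsh advfirewall firewall set rule group="Remote Event Log Management" new enable=yes
    netsh advfirewall firewall set rule group="Remote Service Management" new enable=yes
    netsh advfirewall firewall set rule group="Remote Volume Management" new enable=yes
    netsh advfirewall firewall set rule group="Remote Scheduled Tasks Management" new enable=yes
    netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=yes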
    Two SAN volumes were successfully mapped to the phys. SAS 9207-8e HBA in the Hyper-V Server using the
    dual-controller connection scenario per the MSA 2040 User Guide, p. 36:
    http://h20565.www2.hp.com/hpsc/doc/public/display?docId=emr_na-c03792322
    one server/one hba/dual path 
    connection from ctrlr A sas port A1 
    connection from ctrlr B sas port B1 
    all other ports on both ctrlrs were deselected 
    ctrlr A connected to right hand port interface on phys sas card in Hyper-V Server 2012 R2 (core). 
    ctrlr B connected to left hand port interface on phys sas card in Hyper-V Server 2012 R2 (core).
    link led was green on both ctrlrs; activity led was off on both ctrlrs 
    applied the settings: "The host mapping was modified successfully. Note: It may take a minute to update the table."
    I didn't have to choose another LUN while configuring ctrlr B; LUN 1 is still set.
    The file server VM is OFF!
    the File Server VM's settings > SCSI ctrlr > Hard Drive settings DID change; the "physical disk option" 
    was NOW available with
    disk 1 1115.72 GB bus 0 lun 1 target 1 
    and 
    disk 2 1117.11 GB bus 0 lun 2 target 2 
    a rescan disks in disk mgmt. on Hyper-V Server 2012 R2 (core) did not show any new drives yet; that's 
    OK, you won't see them there.
    I selected scsi disk 1 in the File Server VM's settings > scsi ctrlr > hard drive > physical disk 
    option, and did likewise with the second scsi hard drive phy. disk 2.
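    For anyone who would rather script that step than click through the VM settings, here is a rough PowerShell equivalent run on the Hyper-V host. The VM name and disk numbers are placeholders; check the Get-Disk output for your own numbers first.
    # On the Hyper-V Server 2012 R2 host: find the MSA 2040 LUNs and confirm they are offline,
    # because a disk must be offline on the host before it can be passed through to a VM.
    Get-Disk | Format-Table Number, FriendlyName, Size, OperationalStatus, IsOffline
    Set-Disk -Number 1 -IsOffline $true
    Set-Disk -Number 2 -IsOffline $true
    # Attach both physical disks to the powered-off file server VM's SCSI controller.
    Add-VMHardDiskDrive -VMName "FileServerVM" -ControllerType SCSI -ControllerNumber 0 -DiskNumber 1
    Add-VMHardDiskDrive -VMName "FileServerVM" -ControllerType SCSI -ControllerNumber 0 -DiskNumber 2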
    Then I started the File Server VM, and logged in > opened the computer mgmt. console > disk mgmt. > 
    EUREKA! > there were two, new, unknown/offline/unallocated disks. 
    The properties of both offline disks identified them as an "HP MSA 2040 SAS SCSI Disk Device" 
    the offline disk info balloon reads "offline (the disk is offline because of policy set by an administrator)".
    Disk 1 was set to online and initialized as GPT (went with GPT due to size/plans for future growth).
    I created a simple volume partition and formatted it as ntfs (1115.6 GB) 
    new simple volume > used all available space (1142370 MB) > drive letter E > format partition > file 
    system = ntfs > allocation unit size = default > volume label = DATA > perform quick format deselected. 
    the activity led on ctrlr A was now green and blinking. 
    1:38 PM on 4/2/15 1% 
    2:03 PM 22 % 
    2:22 PM 37% 
    3:40 PM done 
    I opened a unc window to \\VirtualFileServer\e$, pasted a text file and deleted it.
    I'm domain admin, so I was able to open the window without sharing or setting ntfs permissions.
    fantástico!

    Hi Sir,
    >>so I need to map an LUN from the MSA 2040 to the WS2012 R2 Std. x64 virtual machine.
    Please ensure NPIV is enabled on the HBA and the SAN switch.
    "Virtual Fibre Channel support in Windows Server 2012 leverages Fibre Channel HBAs and switches that are compatible with N_Port ID Virtualization (
    NPIV ).  NPIV is leveraged by Windows Server 2012 Hyper-V and the FREE Hyper-V Server 2012 to define virtualized World Wide Node Names (
    WWNNs ) and World Wide Port Names (
    WWPNs ) that can be assigned to virtual Fibre Channel HBAs within the settings of each VM.  These virtualized World Wide Names can then be zoned into the storage and masked into
    the LUNs that should be presented to each clustered Virtual Machine."
    http://blogs.technet.com/b/keithmayer/archive/2013/03/21/virtual-machine-guest-clustering-with-windows-server-2012-become-a-virtualization-expert-in-20-days-part-14-of-20.aspx
    Also :
    http://blogs.msdn.com/b/robertvi/archive/2014/07/11/getting-started-with-virtual-fibre-channel-inside-a-hyper-v-vm.aspx
    Best Regards,
    Elton Ji
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected] .

  • NAS connection with Exadata on Network Port

    Hi,
    Here is what I found regarding Exadata and NAS:
    Can I connect the database servers to external storage?
    Yes, you may use the network ports on the database servers to connect to external storage via NFS or iSCSI. However, the Fibre Channel Over Ethernet (FCoE) protocol is not supported or allowed.
    My question is: what would be the impact on performance if we connect Exadata to NAS on the network port via NFS?
    Is anyone doing this? I want to have all backups on disk, and then go from disk to tape.
    Thanks

    If you can afford it, look at using ZFS storage over InfiniBand for the best bandwidth, if you go for the 73xx or 74xx models. You will also get dual controllers for full redundancy.
    Oracle's benchmark backup figure of 7 TB an hour is based on this setup. If you go for a standard or 10 Gig NFS connection, you will get considerably slower speeds.
    Thanks
    Edited by: robinsc on Apr 19, 2012 8:12 AM

  • Which Promise? VTE610fD or VTrak E/J-Class for Mac OS X

    Hello everyone,
    Promise suggests the VTrak E/J-Class for Mac OS X. Has anyone used the VTE610fD?
    The server will be an Xserve with FC.
    Best regards
    Kostas

    Hi Guys,
    Thanks for the info here. I'm trying to plan my budget for next year, and I'd love to upgrade to a dual controller RAID system for our Mac-based corporate network, which does not utilize XSAN. Our needs are pretty straightforward: lots of relatively fast storage for graphic design and general office files.
    I'm currently running an Intel Xserve with an Enhance Technology UltraStor RS16 single-controller RAID unit connected via FC. It's worked quite well thus far and provides around 90-95 MB/s read/write to clients even on the currently fragmented volumes. But it'll turn 3 years old this year, and I don't want to tempt fate.
    I've been eyeing the Promise VTE610fD, but the total lack of support for it on the Mac from either Apple or Promise is troubling. I really can't afford a fully-populated unit with overpriced hard drives, and I don't want to pay a premium for the Apple-certified version of the VTrak.
    Beatle, other than Xsan certification, why would you recommend the Apple version of the VTrak over the regular versions? I just need to be able to set it up via its Web UI, then be able to format and mount the volumes on the Mac server. Will I run into difficulties?
    Enhance Technologies has announced the new UltraStor ES3160 FS with dual controllers, which could also meet my needs, but it may not come in a bare version.
    Thanks!

  • 2530/2540 array cache size vs. 6140

    Greetings,
    Needing some new storage, we are considering the new 2540 array. Everything looks great except that the cache size only goes up to 1 GB, i.e. 512 MB per controller. We are currently using 3510 storage arrays with dual controllers and 1 GB of cache per controller.
    So at this point I'm thinking we should opt to go with a 6140 instead.
    We've ruled out the 3510 just because we'd like this storage to take us 3-4 years down the road.
    Any thoughts on this?
    Thanks.

    IMHO the 2540 is positioned as a "low cost entry" storage system; midrange is the 6140 with 2 GB or 4 GB cache, and everything higher is the 6540/9990 ;)
    I personally prefer the ST6140/2GB as an "entry" system (better scalability, nice IOPS)
    -- randy

  • Areca RAID 1212 PCI Card Drivers ( and possibly others ) not 10.7.4 compatible.

    Definitely beware of the Lion 10.7.4 update if you're an Areca 1212 (or other) RAID card user/owner.
    I have a definite issue ... system freeze ... with their current driver.
    In contact with them ... awaiting reply.

    Yes, it looks like the switch and the Ethernet ports were the problem. I had them plugged into the HP ProCurve switch hoping they would get along, then noticed that the Mac Pro would not give the Ethernet connection an IP without disabling my AirPort and adding in Ethernet 1 and 2. I then plugged them both into my AirPort Extreme and everyone is dancing!! Thanks for the breadcrumbs to lead my path out of my dark thoughts of sledgehammers and sparks flying. I was thinking I had spent way too much to piece and part this thing together from four different sellers on eBay, at about $1050: the LSI FC PCI Express card ($125), the two brand-new cables in box ($60), 10 500 GB Apple modules, plug and play ($575), and the cabinet loaded with 4 500 GB drives, dual power supplies, dual controllers, and dual fans, shipped, for $325. Now that I look at uptime, and count in the Drobos taking a dump on my data when I was transitioning Macs, losing 3,500 photos from 10 years, it is a bargain. That loss was devastating. Now I feel a lot safer, and a new Drobo backs up this set. Two is always better than one... CHEERS BEATLE! Too bad Mountain Lion is not supported, but for a professional, who cares if it is more "Phone Apps" than a Mac man's meat and potatoes...

  • Fiber Cabling

    I have a Promise VTrak E-Series with dual controllers configured in LUN affinity mode. The Xserve (Intel) has two identical Fiber cards, each with two ports. Should I connect all four optical ports using 4 Gb copper Fiber cabling? (I know that only two at most will work at one time.) Will the two cables running from Controller #2 to the second Fibre Channel card give me fail-over protection if either the first card fails or Controller #1 fails? I run another setup this way using a single four-port fiber card to dual controllers, but I am wondering if the Xserve will treat the two Fibre Channel cards connected to the same LUN as fail-over, as opposed to a single card.

    Here is a Cisco Press book that should help.
    <http://www.ciscopress.com/articles/article.asp?p=170740&seqNum=7>
    Also, an optical networking book co-authored by a Cisco employee (Rajiv Ramaswami):
    <http://www.amazon.com/Optical-Networks-Practical-Perspective-Networking/dp/1558606556>
    You will also find info on Corning's website:
    <http://www.corning.com/products_services/telecommunications/index.aspx>
    Good reading!

  • Best architecture for 2 servers, 1 storage

    I'm planning an architecture with two servers sharing a single storage array. One server is primarily for mail and one is for file sharing. I'm considering using two Mac minis as the servers and a Promise VTrak e30 as the storage.
    I noticed the vTrak e30 has dual controllers with 4x8Gb FC each. Is it possible to connect the servers to the storage without an FC switch? Any downsides?
    If using Xsan, do I need to have a third dedicated server setup as the metadata controller or can I assign one or both of the servers for that task (in addition to their main tasks)? Should the metadata traffic be separated from other IP traffic into a dedicated ethernet to get adequate performance?

    As much as I want to keep Xsan alive, I think it is the wrong solution for you.  You have two diametrically opposed objectives here:  Email server = millions of small files and file sharing = random file size.  Defining a single Xsan volume that will work effectively for both of these scenarios will be impossible.  And the performance penalty that you will get will make you want to throw the whole solution out.
    Also, keep in mind that you need the SanLink adapters from Promise if you want to get a mini connected to FC storage. Ah, but here is the rub: the SanLink is 4 Gb FC and the vTrak is 8 Gb FC, so you are not getting full throughput at the link level. Also, the vTrak has 8 total FC ports, the SanLink has two, and the mini has only one Thunderbolt port.
    However, if you are considering this as a method of utilizing a common storage array and splitting load across two head units, then I would suggest creating two volumes on the vTrak.  One volume set to handle the small file writes of mail and a second to handle the variability of file services.  Then assign each volume to a controller or LUN mask them to the host.  Or, if you want an FC switch, then simply zone the connections. 
    Now, you will not be able to mount the file sharing volume on the mail server or the inverse unless you move cables, but this can keep you out of the overhead of Xsan. 
    Hope that made sense. 

  • Dual Stepper Motor Controllers on a CAN Bus

    Hello All,
    I hope this is the right forum for this message.  If not, please let me know where I should post it.
    Friends of mine are working on a project involving controlling many dual-stepper motor assemblies, using LabVIEW as the main control code running in the control hub computer.  The stepper assemblies will be located remotely, up to several hundred feet away from the control hub.  I suggested that they consider using a CAN bus for communications, and they seem interested.
    Can any of you recommend drop-in stepper motor controllers for this application that will integrate easily with LabVIEW and NI-CAN?
    Is CAN the right com bus for this application?  If not, what would you suggest?  What else should my friends be considering?
    Many thanks in advance for any suggestions you can provide.
    Sincerely,
    Forbes Black

    Just post a link to the new forum post so that anyone who wants to answer will be directed to the correct one. You cannot remove it yourself. Have a great day.
    Alex D
    Applications Engineer
    National Instruments

  • Two dual-channel scsi controllers on S10 8/07 x86?

    I'm attempting an install of Solaris 10 8/07 x86 on a server with four channels of scsi devices. The problem is that the install procedure only recognises the controller on the motherboard, and not the additional controller on the pci bus with the boot disk.
    The motherboard is a Supermicro X7DB8+ with integrated Adaptec AIC-7902 (dual U320), and the additional controller is an Adaptec 39320A (dual U320) installed in PCI-X slot 3. The scsi bios gives the following display:
    AIC-7902 A at slot 07 04:02:00
    AIC-7902 B at slot 07 04:02:01
    39320A A at slot 03 07:01:00
    39320A B at slot 03 07:01:01
    The controller master is set to the 39320A, and the system disk (id 0 on 39320A channel A) is enabled as a bootable device on the motherboard bios. Everything appears fine at both the scsi and motherboard bios levels - all drives are visible and all diagnostics work as expected.
    However, trying all permutations and combinations I can think of, the S10 installation routine persists in only recognising the drives on AIC-7902 channels A and B (as c0xxxxxx and c1xxxxxx). I cannot see the drives on the 39320A nor enable any of them as a boot disk for installation as desired.
    Anyone with any ideas? I'm stumped now.

    Thanks for the tip! I was sure I disabled HostRaid, but apparently it was not disabled on the 39320A. Everything works as expected now.
    Regards,
    Adam.

  • Dual 5508 controllers ?

    I currently have one 5508 WLC running in prod with a 25 AP license. Unfortunately, wireless access has really caught on in the company,
    and I am now at my max for APs, but I need more.
    I suffered a mild stroke from the sticker shock of the 50 AP license upgrade. However, I also have a 5508 WLC in my testing environment that
    came with a 25 AP license.
    Am I able to run both in prod at the same time, so half of my APs connect to one controller and the other half to the 2nd WLC?
    Looking for some design advice if I need to run them both. What options do I have?
    Cheers
    Dave

    Dave,
    As Michael said, you can do that.
    Create the same SSIDs on both WLCs with the same security method and security keys (if any), the same VLAN, the same DHCP scope, etc.
    Use the same mobility group name and the same RF group name.
    This way clients can move among APs on different controllers smoothly (provided the RF environment and cell adjacency requirements are met correctly).
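    For illustration, a rough AireOS CLI sketch of that idea is below. The WLAN ID, SSID, pre-shared key, MAC address, IP address, and group names are placeholders, and the exact commands should be checked against your code version.
    config wlan create 2 Corp-WLAN Corp-SSID
    config wlan security wpa akm 802.1x disable 2
    config wlan security wpa akm psk enable 2
    config wlan security wpa akm psk set-key ascii MyPresharedKey 2
    config wlan enable 2
    config mobility group domain CORP-MOBILITY
    config mobility group member add 00:1a:2b:3c:4d:5e 10.10.20.5
    config network rf-network-name CORP-RF
    show mobility summary
    Repeat the same WLAN, mobility group, and RF group settings on the second WLC (adding the first controller's MAC and management IP to its mobility member list) so clients can roam between the two.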
    If you need any assistance with this, just use our family friend Google;
    search for "cisco wireless controller rf groups", for example, to check how to configure them.
    Good luck.
    Amjad
