B.Y.O.x, authentication, best practice, mixed OSes

Morning, me again...
I have 2 x 5508 WLCs in place with around 50 APs split between the 2.
As there is currently a low uptake of wireless devices, all of which we own, I have up until now been using WPA2 and MAC filtering to control access to the network.
This all needs to change as we are about to embark on the B.Y.O.x revolution, which means being able to support a wide range of OSes from Windows to Android. That in itself presents a whole series of issues, but right now I'm trying to explore how much of the burden the WLCs can take.
For instance, I was thinking about setting up web auth on the controllers that would authenticate against an external RADIUS server - this seems fairly straightforward.
If this were a bog-standard Windows network I could set up a Microsoft NPS server to control and define policies for mobile devices, but as this is going to be a mixed environment that's not a solution I can use.
What other features do the controllers provide that would be useful in my situation - can you, for instance, automatically direct traffic to a specific VLAN based on authentication information?
Many thanks.

Well, it's hard to say since I don't know exactly what you want to do. If you want to be able to determine what device is on the network and create a policy for it, then you need Cisco ISE. Windows IAS, NPS, Cisco ACS, and other RADIUS servers will just look at the authentication piece and will have no idea what type of device it is. So if, for example, you have iPads that are corporate owned and want to allow those on the network but not non-corporate iPads, you either put a certificate on the iPad and run EAP-TLS, or you can use ISE to run EAP-TLS, profile the device, verify that it has a valid certificate, and put that device on VLAN X. If the device is not approved, you can put it in the guest VLAN.
Sent from Cisco Technical Support iPhone App
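To the VLAN question specifically: yes. If you enable "Allow AAA Override" on the WLAN and create a dynamic interface for each target VLAN, the 5508 will move a client to whatever VLAN the RADIUS server returns in the Access-Accept. A rough sketch in FreeRADIUS users-file syntax (the user name, password and VLAN ID below are placeholders; the same result can also be achieved with the Airespace interface-name VSA instead of a VLAN number):

# hypothetical entry: returns VLAN 20 to the WLC on successful authentication
byod-user1  Cleartext-Password := "changeme"
        Tunnel-Type = VLAN,
        Tunnel-Medium-Type = IEEE-802,
        Tunnel-Private-Group-Id = "20"

With NPS or ACS the equivalent is done by adding the same three IETF attributes to the authorization/network policy response.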

Similar Messages

  • User Account Authentication across multiple Solaris servers - Best Practice

    Hi,
    I am new to Solaris admin and would like to know the best practice/setup for authenticating user accounts across multiple Solaris servers.
    Currently we have 20 - 30 Solaris 8 & 10 servers which each have their own user accounts setup. I am planning to replace these with a similar number of Solaris 10 servers and would like to centralise the user accounts and their authentication.
    I would be grateful for any suggestions on the best setup and any links to tutorials.
    Thanks
    Jools

    I would suggest LDAP + Kerberos: LDAP for name lookups and krb5 for authentication. That provides secure auth plus an extensible directory for users and other apps if needed. It also gives you a decent springboard for adding other Unix platforms into the mix, since this will support any Unix/Linux/BSD platform. You could integrate this design with a Windows AD environment as well if you want.
    [http://www.sun.com/bigadmin/features/articles/kerberos_s10.jsp] Solaris + LDAP + AD
    [http://docs.lucidinteractive.ca/index.php/Solaris_LDAP_client_with_OpenLDAP_server] Solaris + LDAP (OpenLDAP)
    [http://aput.net/~jheiss/krbldap/howto.html] Solaris + LDAP + krb5
    These links all use somewhat different approaches, but they should give you some ideas as to what's out there. Solaris 10 ships with Sun's LDAP server, and you can use the krb5 server that comes with it as well. There are many, many different ways to do this, and many more links out there; these are just a few.
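    For the client side, a rough sketch of pointing a Solaris 10 box at the directory with ldapclient (server address, base DN and proxy account are placeholders; the exact attribute set depends on your directory and on whether you use a client profile instead of manual mode):

    # point the Solaris 10 client at the directory for name lookups
    ldapclient manual \
        -a defaultServerList=192.0.2.10 \
        -a defaultSearchBase=dc=example,dc=com \
        -a domainName=example.com \
        -a credentialLevel=proxy \
        -a authenticationMethod=simple \
        -a proxyDN=cn=proxyagent,ou=profile,dc=example,dc=com \
        -a proxyPassword=secret

    Kerberos authentication is then layered on top through pam_krb5 entries in /etc/pam.conf (or by running the kclient utility against your KDC), which is broadly what the third link above covers.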

  • Best practices of BO/BW SSO SAP Authentication transports

    Hi Friends,
    We are going to integrate our BW system with BO (SAP authentication). All the queries are built through BICS connections, and we have various reporting tools that need SSO SAP authentication (Webi, Crystal, Dashboards, Design Studio, etc.).
    As part of the process there are certain activities which have to be performed at the BW level,
    e.g. BW role creation (PFCG, with Crystal role enablement) and assignment to BO users.
    Once that is created in BW, we have to do the integration at the BO level (in the CMC application) by selecting the authentication and role import, followed by groups, users, folders and access levels.
    My questions are:
    1. Transports of BW objects for BO SSO (SAP) authentication (such as the roles created for users and the keystore certificate uploads): will these objects be transported by the BW team, or will the certificate be downloaded/uploaded separately in each system (QAS, PROD, etc.)?
    2. At the BO level, once I have integrated BO SSO, do I need to repeat the manual integration in the QAS and production systems as well, or can it be transported with BO promotion management?
    3. Can this SSO (SAP) authentication be applied to all tools in BI Launch Pad (Design Studio, Webi, web applications, Crystal, etc.), since all users are required to have SSO to every BO tool?
    4. Regarding the Lumira tool, can we do SSO authentication?
    Please share your thoughts and experience.
    It would be great if I could get a BO administration best-practices document covering BW-BO SSO and user and group management in the CMC.
    Thanks in advance

    Hi ,
    Please find my answers below:
    1. The roles will be created in BW and should automatically appear under BO CMC Authentication > SAP roles, provided there is connectivity set up between BO and BW, irrespective of the SSO. The roles are transported by the BW security team.
    2. Every environment will have a unique connection to the corresponding SAP BW environment. For example, SAP BW DEV will be mapped to BO DEV and SAP BW PROD to BO PROD, so these settings cannot be migrated through Promotion Management.
    3. This authentication can be applied to all tools; the SSO does not depend on the tool, it depends on the integration between the two systems, which in this case are BO and SAP BW.
    As mentioned earlier, after integration all tools can have SSO.
    You can refer to a lot of help documents on this site which will help you set up the integration between SAP BW and SAP BO.
    Kind Regards,
    Priyanka

  • External System Authentication Credentials Best practice

    We are in the process of a 5.0 upgrade.
    We are using NTLM as our authentication source to get the users and groups and to authenticate against the source. So currently we only have the NT userid and group info (the NT domain password is not stored).
    We need to get user credentials for other systems/applications so that we can pass them on to the specific applications when we search/crawl or integrate with those apps/systems.
    We were thinking of getting the credentials (app userid and password) for the other applications by developing a custom Profile Web Service to gather the information specific to these users. However, we don't know whether the external application password is kept secure when it is retrieved from the external repository via a PWS and stored in the portal database.
    Is this the best approach to take to gather the above information? If not, please recommend the best practice to follow.
    Alternatively, we could have the users enter the external system credentials by having them edit their user profile. However, this approach is not preferred.
    If we can't store the user credentials for the external apps, we won't be able to enhance the user experience when doing a search or click-through to the other applications.
    Any insight would be appreciated.
    Thanks.
    Vanita

    Hi Vanita,
    So your solution sounds fine - however, it might be easier to use an SSO token or the Plumtree UserID in your external applications as a definitive authentication token.
    For example, if you have some external application that requires a username and password, then in a portlet view of the application it should be able to take the userid Plumtree sends it to authenticate that it is the correct user.  You should limit this sort of password bypass to traffic being gatewayed by the portal (i.e. coming from the portal server only).
    If you want to write a Profile Web Service, the data that gets stored in the Plumtree database is exactly what the Profile Web Service sends it as the value for a particular attribute.  For example, if your PWS tells Plumtree that the APP1UserName and APP1Password for user My Domain\Akash are "Akash" and "password", then that is what we save.  If your PWS encrypts the password using some 2-way encryption beforehand, then that is what we will save.  These properties are simply attached to the user and can be sent to different portlets.
    Hope this helps,
    -aki-
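    On the "2-way encryption" point: the idea is simply that the PWS hands the portal a reversibly encrypted value instead of the clear-text password, and your portlet code decrypts it again when it needs to sign on to the external app. A minimal, purely illustrative Java sketch (key handling, property names and error handling are all up to you):

    import java.nio.charset.StandardCharsets;
    import java.security.SecureRandom;
    import java.util.Base64;
    import javax.crypto.Cipher;
    import javax.crypto.spec.GCMParameterSpec;
    import javax.crypto.spec.SecretKeySpec;

    // Hypothetical helper for reversible (2-way) encryption of an external-app password
    // before the PWS stores it as a user profile property.
    public final class PasswordCodec {
        private static final int IV_LEN = 12; // 96-bit IV for AES-GCM

        public static String encrypt(String plain, byte[] key) throws Exception {
            byte[] iv = new byte[IV_LEN];
            new SecureRandom().nextBytes(iv);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
                   new GCMParameterSpec(128, iv));
            byte[] ct = c.doFinal(plain.getBytes(StandardCharsets.UTF_8));
            byte[] out = new byte[iv.length + ct.length];
            System.arraycopy(iv, 0, out, 0, iv.length);
            System.arraycopy(ct, 0, out, iv.length, ct.length);
            return Base64.getEncoder().encodeToString(out); // value saved in the profile
        }

        public static String decrypt(String stored, byte[] key) throws Exception {
            byte[] in = Base64.getDecoder().decode(stored);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"),
                   new GCMParameterSpec(128, in, 0, IV_LEN));
            byte[] pt = c.doFinal(in, IV_LEN, in.length - IV_LEN);
            return new String(pt, StandardCharsets.UTF_8);
        }
    }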

  • Cisco ISE: 802.1x Timers Best Practices / Re-authentication Timers [EAP-TLS]

    Dear Folks,
    Kindly suggest the best recommended values for the 802.1x (EAP-TLS) timers. Should I keep them all at their defaults or change some of them?
    Also, why do we need re-authentication timers? Is there any benefit to using them? Do they prompt the users or are they invisible? And what are the best values, in case we do need to use them?
    Thanks,
    Regards,
    Mubasher
    My Interface Configuration is as below;
    interface GigabitEthernet1/34
    switchport access vlan 131
    switchport mode access
    switchport voice vlan 195
    ip access-group ACL-DEFAULT in
    authentication event fail action authorize vlan 131
    authentication event server dead action authorize vlan 131
    authentication event server alive action reinitialize
    authentication open
    authentication order dot1x mab
    authentication priority dot1x mab
    authentication port-control auto
    mab
    snmp trap mac-notification change added
    dot1x pae authenticator
    dot1x timeout tx-period 5
    storm-control broadcast level 30.00
    spanning-tree portfast
    spanning-tree bpduguard enable

    Hello Mubashir,
    Many timers can be modified as needed in a deployment. Unless you are experiencing a specific problem where adjusting the timer may correct unwanted behavior, it is recommended to leave all timers at their default values except for the 802.1X transmit timer (tx-period).
    The tx-period timer defaults to a value of 30 seconds. Leaving this value at 30 seconds provides a default wait of 90 seconds (3 x tx-period) before a switchport will begin the next method of authentication, and begin the MAB process for non-authenticating devices.
    Based on numerous deployments, the best-practice recommendation is to set the tx-period value to 10 seconds to provide the optimal time for MAB devices. Setting the value below 10 seconds may result in the port moving to MAC authentication bypass too quickly.
    Configure the tx-period timer.
    C3750X(config-if-range)#dot1x timeout tx-period 10
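    On the re-authentication part of the question: with EAP-TLS a periodic re-auth is invisible to the user (the supplicant simply re-presents its certificate), and it is normally only enabled when you want the switch to revalidate sessions against ISE on a schedule. A hedged sketch of the common way to turn it on, letting ISE push the interval via the RADIUS Session-Timeout attribute instead of hard-coding it per port:

    interface GigabitEthernet1/34
    ! enable periodic re-authentication and take the interval from the
    ! Session-Timeout attribute returned by the RADIUS server (ISE)
    authentication periodic
    authentication timer reauthenticate server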

  • "Best Practices" for using different Authentication Schemes ?

    Hi
    We are using different authentication schemes in different environments (Dev/QA/Prod). Changing the authentication scheme between environments is currently a manual step during installation. I am wondering if there are better "best practices" to follow, where the scheme is set programmatically as part of the build/load process for a specific environment... or any other ideas.
    We refrained from merging the authentication schemes (which is possible) for the following reasons:
    - the authentication code becomes unnecessary complex
    - some functions required in some environments are not available in all environments (LDAP integration through centrally predefined APIs), requiring dynamic execution
    Any suggestions / experience / recommendation to share are appreciated.
    Regards,
    - Thomas
    [On Apex 4.1.0]

    t-o-b wrote:
    Thanks Vikram ... I stumbled over this post; I was more interested in what the "work-around" / "best practices" are, given these restrictions.
    So I take it that:
    * load & change; or
    * maintain multiple exports
    seem to be the only viable options
    ... in addition to the one referred to in my questions.
    Best,
    - Thomas
    Thomas,
    It's up to you really and depends on many criteria (I think it's more a question of release process and version control).
    I haven't come across a similar scenario before, but I would maintain multiple exports so that the installation can be automated (no manual intervention required).
    Once the API is published (who knows when that will be) you can just maintain one export with an extra script to call the API.
    I guess you can do the same thing with the load & change approach, but I would recommend avoiding manual intervention.
    Cheers,
    Vikram

  • Looking for best practice Port Authentication

    Hello,
    I'm currently deploying 802.1x on a campus with Catalyst 2950 and 4506.
    There are lots of printers and non-802.1x devices (around 200) which should be controlled by their MAC address. Is there any "best practice" besides using sticky MAC address learning?
    I'm thinking of a central place where all MAC addresses are stored (e.g. ACS).
    Another method would be checking only the first part of the MAC address (the vendor OUI) on the switch ports.
    Any ideas out there??
    regards
    Hubert

    Check out the following link; it provides info on port-based authentication. See if it helps:
    http://www.cisco.com/en/US/products/hw/switches/ps628/products_configuration_guide_chapter09186a00801cde59.html
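    The more current approach for printers and similar non-802.1x devices is MAC Authentication Bypass (MAB), with the MAC addresses held centrally on ACS (typically as accounts whose username and password are the MAC address) rather than sticky-learned per port. A rough port sketch, assuming a switch and IOS version that support MAB (the 4506 should; the 2950 is far more limited), with interface and VLAN numbers made up for illustration:

    interface FastEthernet2/0/10
    ! printer port: try 802.1X first, then fall back to MAB against ACS
    switchport mode access
    switchport access vlan 150
    authentication order dot1x mab
    authentication priority dot1x mab
    authentication port-control auto
    mab
    dot1x pae authenticator
    dot1x timeout tx-period 10
    spanning-tree portfast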

  • Wireless authentication network design questions... best practices... etc...

    Working on a wireless deployment for a client... wanted to get updated on what the latest best practices are for enterprise wireless.
    Right now, I've got the corporate SSID integrated with AD authentication on the back end via RADIUS.
    I would like to implement certificates in addition to the user-based authentication so we have some level of two-factor authentication.
    If a machine is lost, I don't want a certificate alone to allow an unauthorized user access to the wireless network.  I also don't want poorly managed AD credentials (written on a sticky note, for example) opening up the network to an unauthorized user either... is it possible to do an AND condition, so that both are required to get access to the wireless network?

    There really isn't a true two-factor authentication you can do with just RADIUS unless it's ISE and you're doing EAP Chaining.  One workaround that works with ACS or ISE is to use the "Was machine authenticated" condition.  This again only works for domain computers.  The way Microsoft works :) is that you have a setting for user or computer... this does not mean user AND computer.  So when a Windows machine boots up, it will send its system name first and then the user credentials.  System-name (machine) authentication only happens once, and that is during boot-up.  User authentication happens every time a full authentication has to occur.
    Check out these threads and it explains it pretty well.
    https://supportforums.cisco.com/message/3525085#3525085
    https://supportforums.cisco.com/thread/2166573
    Thanks,
    Scott
    Help out others by using the rating system and marking answered questions as "Answered"

  • NX7K M and F series mixed chassis best practice?

    I have NX7010 chassis with mixed M and F series line cards. Functions to implement include VDC, vPC, VRF, and L3 routing. What are the best practices for a mixed chassis? I once saw a Cisco document talking about it but can't find it right now.
    Thanks in advance

    I understand that Layer 3 functions should be performed by M1 ports. But if I use F2e 10G ports to build a trunk between the 2 NX7Ks, then use an SVI to form a Layer 3 adjacency across this trunk, is that OK?
    I can use M1 1G ports to build this trunk too, but I would prefer the F2e 10G ports if that is OK.
    Thanks

  • Localization: Best practice suggestions Apps with mixed UI and Content languages?

    I am trying to write a simple Universal app that can be easily localized to different UI languages. But the app also needs to display content whose language is determined by user settings. For example, I would like the app UI to display in the user's region language (English, Russian, etc.) while at the same time having fields on the page whose strings come from other language resources (Latin "la", Spanish, etc.).
    The samples are pretty good about how to set up resources with respect to the UI (e.g. Strings/en-us/Resources.resw), but not about what to do if you also want to draw strings from a different language. When the words in the content fields show in Latin, I don't want the UI to also be in Latin.
    Suggestions on best way to do this?
    Thanks,
    -Tom19

    Hi Tom19,
    I did not receive the email notification on my mailbox for your reply, that's weird. Sorry for the late response.
    Basically we have best-practice documentation for you:
    Creating and retrieving resources in Windows Store apps, and also
    Quickstart: Using string resources. Take a look at those documents to see if they help.
    --James

  • Sessions and Controllers best-practice in JSF2

    Hi,
    I've not done web development work since last using Apache Struts for its MVC framework (about 6 years ago now), so bear with me if my questions do not make sense:
    SESSIONS
    1) Reading through the JSF2 spec PDF, it mentions state saving via the StateManager. I presume this is also the same StateManager that is used to store managed beans that are @SessionScoped ?
    2) In relation to session-scoped managed beans, when does a JSF implementation starts a new session ? That is, when does the implementation such as Mojarra call ExternalContext.getSession( true ) .. and when does it simply uses an existing session ( calling ExternalContext.getSession( false ) ) ?
    3) In relation to session-scoped managed beans, when does a JSF implementation invalidate a session ? That is, when does the implementation call ExternalContext.invalidateSession() ?
    4) Does ExternalContext.getSession( true ) or ExternalContext.invalidateSession() even make sense if the state-saving mechanism is client ? ( javax.faces.STATE_SAVING_METHOD = client ) Will the JSF implementation ever call these methods if the state-saving mechanism is client ?
    CONTROLLERS
    Most of the JSF2 tutorials that I have been reading online use the same backing bean when performing an action on the form (when doing a POST or a GET or a post-back to the same page).
    Is this best practice? It looks like mixing what should have been a simple POJO with additional logic that should really be in a separate class.
    What have others done ?

    gimbal2 wrote:
    jmsjr wrote:
    EJP wrote:
    It's better because it ensures the bean gets instantiated, stuck in the session (which gets instantiated itself), initialised, resource-injected, etc. Your way goes behind the scenes and hopes for the best, and raises complicated questions that don't really need answers.
    Thanks.
    1) But if I only want to check that the bean is in the session, and I do NOT want to create an instance of the bean itself if it does not exist, then I presume I should still use ExternalApplication.getSessionMap.get(<beanName>).
    I can't think of a single reason why you would ever need to do that. Checking if a property of a bean in the session is populated, however, is far more reasonable to me.
    In my case, there is an external application (e.g. a workflow system from a vendor) that will open a page in the JSF webapp.
    The user is already authenticated in the workflow system, and the external system from the vendor sends along the username and password and some parameters that define what the request is about ( e.g. whether to start a new case, or open an existing case ). There will be no login page in the JSF webapp as the authentication was already done externally by the workflow system.
    Basically, I was thinking of implementing a PhaseListener that would:
    1) Parse the request from the external system, and store the relevant username / password and other information into a bean which I store into the session.
    2) If the request parameter does not exist, then I go look for a bean in the session to see if the actual request came from within the JSF webapp itself ( e.g. if it was not triggered from the external workflow system ).
    3) If this bean does not exist at all ( e.g. It was triggered by something else other than the external workflow system that I was expecting ) then I would prefer that it would avoid all the JSF lifecycle for the current request and immediately do a redirect to a different page ( be it a static HTML, or another JSF page ).
    4) If the bean exist, then proceed with the normal JSF lifecycle.
    I could also, between [1] and [2], do a quick check to verify that the username and password is indeed valid on the external system ( they have a Java API to do that ), and if the credentials are not valid, I would also avoid all the JSF lifecycle for the current request and redirect to a different page.
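    A bare-bones sketch of that PhaseListener, in case it is useful. Everything here is illustrative: the parameter names, the session key and the redirect target are assumptions, and the credential check against the vendor API would slot in before the session attribute is stored. The listener would be registered in faces-config.xml under <lifecycle><phase-listener>.

    import java.io.IOException;
    import java.util.Map;
    import javax.faces.context.ExternalContext;
    import javax.faces.context.FacesContext;
    import javax.faces.event.PhaseEvent;
    import javax.faces.event.PhaseId;
    import javax.faces.event.PhaseListener;

    // Hypothetical listener that accepts hand-offs from the external workflow system.
    public class WorkflowHandoffListener implements PhaseListener {

        public PhaseId getPhaseId() {
            return PhaseId.RESTORE_VIEW;     // run before the rest of the lifecycle
        }

        public void beforePhase(PhaseEvent event) {
            FacesContext fc = event.getFacesContext();
            ExternalContext ec = fc.getExternalContext();
            Map<String, String> params = ec.getRequestParameterMap();
            Map<String, Object> session = ec.getSessionMap();

            if (params.containsKey("wfUser")) {
                // request came from the workflow system: validate and remember the caller
                session.put("wfUser", params.get("wfUser"));
            } else if (!session.containsKey("wfUser")) {
                // neither a hand-off nor an existing session: short-circuit the lifecycle
                try {
                    ec.redirect(ec.getRequestContextPath() + "/notAuthorized.html");
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
                fc.responseComplete();
            }
        }

        public void afterPhase(PhaseEvent event) {
            // nothing to do after RESTORE_VIEW
        }
    }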

  • Row level security with session variables, not a best practice?

    Hello,
    We are about to implement row-level security in our BI project using OBIEE, and the solution we found most convenient for our requirement was to use session variables with initialization blocks.
    The problem is that this method is listed as a "non best practice" in the Oracle documentation.
    Alternative Security Administration Options - 11g Release 1 (11.1.1)
    (This appendix describes alternative security administration options included for backward compatibility with upgraded systems and are not considered a best practice.)
    Managing Session Variables
    System session variables obtain their values from initialization blocks and are used to authenticate Oracle Business Intelligence users against external sources such as LDAP servers or database tables. Every active BI Server session generates session variables and initializes them. Each session variable instance can be initialized to a different value. For more information about how session variable and initialization blocks are used by Oracle Business Intelligence, see "Using Variables in the Oracle BI Repository" in Oracle Fusion Middleware Metadata Repository Builder's Guide for Oracle Business Intelligence Enterprise Edition.
    How confusing... what is the best practice then?
    Thank you for your help.
    Joao Moreira

    The authentication/authorization part is taken care of by WebLogic; the USER variable is then initialized, and you may use it in any init blocks for security.
    Init blocks for authentication/authorization and session variables used for row-level security are different things; I guess you are mixing the two.
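    For the row-level security itself, the usual pattern (best practice or not) is a row-wise session init block plus a data filter on the application role. A rough sketch with invented table and column names:

    -- row-wise session init block: returns one row per region the user may see,
    -- populating the session variable ALLOWED_REGIONS
    SELECT 'ALLOWED_REGIONS', region_code
    FROM   user_region_map
    WHERE  UPPER(user_id) = UPPER(':USER')

    The matching data filter on the role would then be along the lines of "Dim Region"."Region Code" = VALUEOF(NQ_SESSION."ALLOWED_REGIONS").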

  • Best practice for keeping a mail session open in web application?

    Hello,
    We have a webmail like application where users login with their IMAP credentials, then are taken to an authenticated area of the site where they can manage different things about their email account.
    Right now the application is opening and closing a mail store connection (via a new javax.mail.Session) for each page load based on the current logged in user credentials. To me this seems like it would be a bad practice to keep opening and closing a connection each page load.
    Are there any best practices for this situation? It would be nice to be able to open the connection to the mail server on login, then keep that connection open until the person logs out, session expires, etc.
    I can probably put the javax.mail.Session into the HTTP session, but that seems like it would break any clustering functionality of Tomcat. This would be fine if the machine the user is on didn't fail, but I'd assume that if they failed over to another machine the mail session would be gone. Maybe keeping the mail session in the HTTP session, checking for a connection, and first attempting to reconnect with the logged-in credentials before giving up would be a possibility?
    Any pointers would be appreciated

    If you keep the connection open across pages, you're going to need to deal with
    timeouts - from the http session and from the mail server.
    If you don't keep the connection open, you're going to need to "resynchronize"
    your view of the store/folder with each operation, in case the folder is modified
    by another session.
    The former is easier in the common cases, especially if you don't care how gracefully
    you handle failures. The latter is more difficult in the common cases, but handles
    failure better, and in particular handles clustering better. You'll need to measure it to
    see if it meets your performance and scalability requirements. You may need to mix
    the two approaches to get acceptable performance.
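    If you do go the "cache it in the HTTP session and reconnect on demand" route, a small, purely illustrative helper along these lines covers the common case (the attribute name, protocol and host are assumptions; note that a JavaMail Store is not Serializable, so as discussed above it will not survive cluster fail-over):

    import java.util.Properties;
    import javax.mail.MessagingException;
    import javax.mail.Session;
    import javax.mail.Store;
    import javax.servlet.http.HttpSession;

    // Hypothetical helper: reuse the Store kept in the HTTP session if it is still
    // connected, otherwise reconnect with the user's stored credentials.
    public final class MailStoreHelper {

        public static Store getConnectedStore(HttpSession http, String user, String password)
                throws MessagingException {
            Store store = (Store) http.getAttribute("imapStore");
            if (store != null && store.isConnected()) {
                return store;                              // live connection, reuse it
            }
            Properties props = new Properties();
            props.put("mail.store.protocol", "imaps");
            Session mailSession = Session.getInstance(props);
            store = mailSession.getStore("imaps");
            store.connect("imap.example.com", user, password);  // host is a placeholder
            http.setAttribute("imapStore", store);
            return store;
        }
    }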

  • Syncing App IDs across servers -- Best Practice?

    This is prompted by a comment chrisstephens made in the thread at non-existent applications in non-existent workspaces reserving app id's
    Our developers are convinced that the application IDs between our dev + staging + production environments need to be synchronized.
    Our team also keeps our dev, test, and prod server app IDs synchronized -- for instance, the Widget Reporting App is always app #38 on all three servers. For us, it's not something we see as REQUIRED, but it is convenient, and a general sanity check. If the numbers didn't sync, it seems it would be all too easy to get values mixed up and accidentally field an app to the wrong place (possibly overwriting some other application).
    What is the community's opinions on this? Would you consider this an Apex Best Practice? Just a habit for some groups? Or overly rigid thinking?
    (I personally fall in the Best Practice group.)

    One good reason to keep them the same is so that there are no differences between what is tested in one environment and what is deployed in another. Case in point, just last week someone demonstrated that an application's authentication scheme failed when the application ID was changed from xxx to xxxxxxxxx (a longer string of digits). Of course this was due to a previously unknown bug, but that's what testing should reveal.
    Another good reason is to make it possible to export application components (pages, etc.) from one database (say, dev) and install them into an application in another database (say, prod). This is not possible if the application IDs are different.
    Scott
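    If you do keep the IDs in sync, the install side can also be pinned explicitly when importing an export from SQL*Plus using the APEX_APPLICATION_INSTALL API available in recent APEX releases; a rough sketch with made-up workspace ID, schema and file name:

    -- force the imported application to keep ID 38 in the target instance
    begin
      apex_application_install.set_workspace_id( 1234567890 );
      apex_application_install.set_application_id( 38 );
      apex_application_install.generate_offset;
      apex_application_install.set_schema( 'WIDGET_APP' );
    end;
    /
    @f38.sql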

  • AD Sites and Services and Best Practices

    Hey All,
    I am new to OES, but not new to AD. I am in an environment in which DSfW was recently setup to support VDI testing.
    I notice that there is no configuration under AD Sites and Services. We have multiple sites, with DCs set up at each site. The consequence of not having Sites and Services configured is that machines/users in site "A" are logging in through site "B" domain controllers. Obviously, this is neither ideal nor best practice. Secondly, this leads me to wonder how the domain controllers are replicating, since I do not see NTDS entries in the Sites and Services MMC for the domain controllers, yet I can see that AD data is replicating by comparing databases (when I add a new user on one DC, I see it appear on the secondary DCs). So I know it's replicating, but apparently not using the AD schema?
    One other question I have about DSfW is regarding the migration from a mixed environment to a full AD environment. We are deploying AD primarily due to VDI initiatives, and are currently only testing this. Looking further down the road for planning purposes, I have to wonder if it's possible to stand up a 2008 R2 server, join it to the domain, dcpromo it, transfer the FSMO roles, and then decommission the DSfW systems. This would leave us with a purely Windows DC environment for authentication. Is this something people have done before? Is it the recommended path for migrating? I also see others creating a second AD environment and then building trusts between DSfW's domain and the "new" domain (assuming these are not in the same forest). That would be less than ideal.
    Thanks in advance for any responses...

    Originally Posted by jmarton
    DSfW does not currently support "sites and services", but it's on the roadmap and currently targeted for OES 11 SP2.
    Excellent! I feel sane now :) I can live with this, as long as it's expected/normal.
    It sounds like you need sites and services, but once that's in DSfW,
    why migrate from DSfW to MAD if DSfW works for your VDI initiative?
    You are correct. I am simply planning and making sure all the options are in play here.
    I would rather not get too deeply reliant on DSfW if it will make any future possible migration more difficult. Otherwise, DSfW is extremely convenient....I am impressed actually.
    I also believe there may be a way we can control the DC used for specific "contexts" (or OUs, as Microsoft calls them). So if I have a group of users in a particular OU who reside at a particular branch, I think I should be able to set their preferred domain controller... and if so, that means Sites & Services becomes nearly irrelevant. I would be interested to talk to people who are using DSfW with multiple sites in play.
