ISE policy creation question - best practices

Ok, I am a rookie ISE user here and am trying to learn as I go. I have an 802.1x policy for our corporate users on both wired and wireless, and a wireless guest policy that redirects to the guest portal to enter credentials created in the sponsor portal. The corporate user has access to corporate resources and the guest basically has access to just the internet.
I need to make what I am calling a Vendor policy that is basically a hybrid of the corporate user and the guest user. These would be vendors that are on-site to assist with programming and need access longer than what a guest account can be created for. This would also have specific ACLs that grant them access to the specific resources they would need. I would like to tie this into AD authentication, since in most cases they have an AD account created to be able to access those corporate resources. My first question is: do I have a single policy that is tweaked as vendors come and go, or do I create a specific policy for each vendor? My second question is: should I create unique SSIDs for each vendor?
As I said, I am just now getting into configuring ISE. I am just not sure what is considered a best practice or a secure way to make things happen. In regard to the policies I have created, they work, but I think I have a couple of holes to address.
Thanks ...
Brent

Mostly makes sense. I have the AD part; I just need to get an AD group created for my test subject.
I created an Endpoint Identity Group to place the vendors' devices into so that we can allow a laptop to connect but not a phone. Got that.
I think I can handle the Authorization Profile. It will be something like: if VendorAsset and AD1:ExternalGroups Equals VendorADGroup, then VendorPermissions. VendorPermissions would be the ACL that limits where they can go. I also need to create a non-802.1x-based SSID and add it to the Authorization Profile, but it can still be generic enough to be usable by all vendors.
I think it is my Authentication rules that I need to modify for Vendor, as my corporate policies use dot1x and I need a policy that does not use dot1x. Right?

Similar Messages

  • Data warehousing question/best practices

    I have been given the task of copying a few tables from our production database to a data warehousing database on a once-a-day (overnight) basis. The number of tables will grow over time; currently there are 10. I am interested not only in task success but also in best practices. Here's what I've come up with:
    1) drop the table in the destination database.
    2) re-create the destination table from the script provided by SQL Developer when you click on the 'SQL' tab while you're viewing the table.
    3) INSERT INTO the destination table from the source table using a database link. Note: I am not aware of any columns in the tables themselves which could be used to filter added/deleted/modified rows only.
    4) After data import, create primary key and indexes.
    Questions:
    1) SQL Developer included the following lines when generating the table creation script:
    <table creation DDL commands>
    then
    PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE (INITIAL 251658240 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBLSPC_PGROW"
    it generated this code snippet for the table, the primary key and every index.
    Is this necessary to include in my code if they are all default values? For example, one of the indexes gets scripted as follows:
    CREATE INDEX "XYZ"."PATIENT_INDEX" ON "XYZ"."PATIENT" ("Patient")
    -- do I need the following four lines?
    PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
    STORAGE(INITIAL 60817408 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBLSPC_IGROW"
    2) Anyone with advice on best practices for warehousing data like this, I am very willing to learn from your experience.
    Thanks in advance,
    Carl

    I would strongly suggest not dropping and recreating tables every day.
    The simplest option would be to create a materialized view on the destination database that queries the source database and to do a nightly refresh of that materialized view. You could then create a materialized view log on the source table and then do an incremental refresh of the materialized view.
    You can schedule the refresh of the materialized view either in the materialized view definition, as a separate job, or by creating a refresh group and adding one or more materialized views.
    Justin

  • [CS5.5/6] - XML / Data Merge questions & Best practice.

    Fellow Countrymen (and women),
    I work as a graphic designer for a large outlet chain retailer which is constantly growing our base of centers. This growth has turned a workload that used to be manageable by just two people into a never-ending sprint for five. Much of what we do is print, which is not my forte, but it is also generally a disorganized, ad-hoc affair into which I am wading to try to help reduce overall strain.
    Upon picking up InDesign I noted the power of the simple Data Merge function and have added it to our repertoire for mass-merging data sources. There are some critical failures I see in this as a tool going forward for our purposes, however:
    1) Data Merge cannot handle information stored and categorized in a single column well. As an example, we have centers in many cities, and each center has its own list of specific stores. Data Merge cannot handle a single-column, or even multi-column, list of these stores very easily and has forced us into some manual operations to concatenate the data into one cell and then, using delimiter characters, find and replace hard returns to separate them.
    2) Data Merge offers no method of alternate alignment of data, or selection by ranges. That is to say: I cannot tell Data Merge to start at Cell 1 in one column, and in another column select, say, Cell 42 as the starting point.
    3) Data Merge only accepts data organized in a very specific and generally inflexible pattern.
    These are just a few limitations.
    ON TO MY ACTUAL DILEMMA aka Convert to XML or not?
    Recently my coworker has suggested we move toward using XML as a repository / delivery system that helps us quickly get data from our SQL database into a usable form in InDesign. 
    I've watched some tutorials on Lynda.com and haven't yet seen a clear answer to a very simple question:
    "Can XML help to 'merge' large, dynamic, data sets like a list of 200 stores per center over 40 centers based off of a single template file?"
    What I've seen is that I would need to manually duplicate pages, linking the correct XML entry as I go rather than the program generating a set of merged pages like that from Data Merge with very little effort on my part.  Perhaps setting up a master page would allow for easy drag and drop fields for my XML data?
    I'm not an idiot, I'm simply green with this -- and it's kind of scary because I genuinely want us to proceed forward with the most flexible, reliable, trainable and sustainable solution.  A tall order, I know.  Correct me if I'm wrong, but XML is that beast, no?
    Formatting the XML
    Currently I'm afraid our XML feed for our centers isn't formatted correctly, with the current format looking as such:
    <BRANDS>
         <BRAND>
              • BrandID = xxxx
              [Brand Name]
              [Description]
              [WebMoniker]
              <CATEGORIES>
                   <CATEGORY>
                        • xmlns = URL
                        • WebMoniker = category_type
              <STORES>
                   <STORE>
                        • StoreID = ID#
                        • CenterID = ID#
    I don't think this is currently usable, because if I wanted to create a list of stores from a particular center, that information is stored as an attribute of the <Store> tag, buried deep within the data, making it impossible to 'drag-n-drop'. 
    Not to mention much of the important data is held in attributes rather than text fields which are children of the tag.
    I'm thinking of proposing the following organizational layout:
    <CENTERS>
         <CENTER>
         [Center_name]
         [Center_location]
              <CATEGORIES>
                   <CATEGORY>
                        [Category_Type]
                        <BRANDS>
                             <BRAND>
                                  [Brand_name]
    My thought is that if I have the <CENTER> tag then I can simply drag that into a frame and it will auto populate all of the brands by Category (as organized in the XML) for that center into the frame.
    Why is this important?
    This is used on multiple documents in different layout styles, and since our store list is constantly changing as leases end or begin, over 40 centers this becomes a big hairy monster. We want this to be as automated as possible, but I'd settle for a significant amount of dragging and dropping as long as it is simple and straightforward. I have a high tolerance for drudging through code and creating workarounds, but my co-workers do not. This needs to be a system that is repeatable and understandable, and it needs to be able to function whether I'm here or not -- mainly because I would like to step away from the responsibility of setting it up every time.
    I'd love to hear your raw, unadulterated thoughts on the subject of Data merge and XML usage to accomplish these sorts of tasks.  What are your best practices and how would you / do you accomplish these operations?
    Regards-
    Robert

    From what I've gleaned through watching Lynda tutorials on the subject, what I'm hoping to do is indeed possible.
    Peter, I don't disagree with you that there is a steep learning curve for me as the instigator/designer of this method for our team, but in terms of my teammates and end-users that will be softened considerably. Even so, I'm used to steep learning curves and the associated frustrations -- but I cope well with new learning and am self-taught in many tools and programs.
    Flow based XML structures:
    It seems as though as long as the initial page is set up correctly using imported XML, individual data records that cascade in a logical fashion can be flowed automatically into new pages.  Basically what you do is to create an XML based layout with the dynamic portion you wish to flow in a single frame, apply paragraph styles to the different tags appropriately and then after deleting unused records, reimport the XML with some specific boxes checked (depending on how you wish to proceed).
    From there, simply dragging the data root into the frame will cause overset text as it imports all the XML information into the frame. Assuming that everything is cascaded correctly, using auto-flow will cause new pages to be automatically generated with the tags correctly placed, in a similar fashion to Data Merge -- but far more powerful and flexible. 
    The issue then again comes down to data organization in the XML file.  In order to use this method the data must be organized in the same order in which it will be displayed.  For example if I had a Lastname field, and a Firstname field in that order, I could not call the Firstname first without faulting the document using the flow method.  I could, however, still drag and drop content from each tag into the frame and it would populate correctly regardless of the order of appearance in the XML.
    Honestly, either method would be fantastic for our current set of projects; however, the flow method may be particularly useful in jobs that require more than 40 spreads, or in simple layouts with huge amounts of data to be merged.

  • Question - Best practice data source for Vs2008 and Crystal Reports 2008

    I have posted a question here
    CR2008 using data from .NET data provider (ADO.NET DATASET from a .DLL)
    but I think that perhaps I need general community advice on best practice with data sources.
    In Crystal reports I can choose the data source location from any number of connection types, eg ado.net(xml), com, oledb, odbc.
    Now, in regard to that post: the reports were all created in Crystal Reports 6.3, upgraded to Crystal XI, and now I'm using the latest and greatest. I wrote the Crystal Reports 6.3/XI reports back in the day to do the following: the reports use a function from a COM object which returns an ADO recordset, which is then consumed fine.
    So I don't want to rewrite all these reports, of which there are many.
    I would like to know if any developers are actually using .NET Class libraries to return ADO.NET datasets via the method call or if you are connecting directly to XML data via whatever source ( disk, web service, http request etc).
    I have not been able to eliminate the problem listed in the post mentioned above, which is that the Crystal Report is calling the .NET class library method twice before displaying the data. I have confirmed this by debugging the class lib.
    So any guidance or tips is appreciated.
    Thanks

    This is already being discussed in one of your other threads. Let's close this one out and concentrate on the one I've already replied to.
    Thanks

  • Cisco ISE and WLC Timeout Best Practices

    I am fairly new to ISE. Our Cisco WLC is using 802.1x and ISE is configured for PEAP with all inner methods enabled.
    I am looking for some guidance around where I should be configuring timeouts. There is a PEAP Session timeout in ISE, a session timeout on the WLC and a RADIUS reauthentication timeout that can be set in the Authorization profile results object in ISE.
    Currently I have the WLC configured for its default 1800 second timeout and ISE PEAP timeout at the default 7,200 value.

    I ended up answering my own question. The authorization session timeouts should be set in ISE if at all.
    Once I removed the session timeout value from the WLC and used the re-auth value in the ISE policy, I had fewer complaints about disconnects.
    The session timeout in the PEAP settings has not caused any ill effects at its default. Session resume has taken a huge load off of AAA, though; it's worth turning on.

  • Cisco ISE: 802.1x Timers Best Practices / Re-authentication Timers [EAP-TLS]

    Dear Folks,
    Kindly suggest the best recommended values for the timers in 802.1x (EAP-TLS). Should I keep them all at their defaults or change some of them?
    Also, why do we need reauthentication timers? Is there any benefit to using them? Do they prompt users or are they invisible to them? And what are the best values, in case we do need to use them?
    Thanks,
    Regards,
    Mubasher
    My Interface Configuration is as below;
    interface GigabitEthernet1/34
    switchport access vlan 131
    switchport mode access
    switchport voice vlan 195
    ip access-group ACL-DEFAULT in
    authentication event fail action authorize vlan 131
    authentication event server dead action authorize vlan 131
    authentication event server alive action reinitialize
    authentication open
    authentication order dot1x mab
    authentication priority dot1x mab
    authentication port-control auto
    mab
    snmp trap mac-notification change added
    dot1x pae authenticator
    dot1x timeout tx-period 5
    storm-control broadcast level 30.00
    spanning-tree portfast
    spanning-tree bpduguard enable

    Hello Mubashir,
    Many timers can be modified as needed in a deployment. Unless you are experiencing a specific problem where adjusting the timer may correct unwanted behavior, it is recommended to leave all timers at their default values except for the 802.1X transmit timer (tx-period).
    The tx-period timer defaults to a value of 30 seconds. Leaving this value at 30 seconds provides a default wait of 90 seconds (3 x tx-period) before a switchport will begin the next method of authentication, and begin the MAB process for non-authenticating devices.
    Based on numerous deployments, the best-practice recommendation is to set the tx-period value to 10 seconds to provide the optimal time for MAB devices. Setting the value below 10 seconds may result in the port moving to MAC authentication bypass too quickly.
    Configure the tx-period timer.
    C3750X(config-if-range)#dot1x timeout tx-period 10

  • Question: Best practices for dealing with multiple AM configurations

    Hello all,
    I have a project using ADF Business Components and ADF Faces. I would like to set up multiple configurations for the Application Modules to support the following scenarios:
    1). Local testing and debugging - using a connection defined in JDeveloper and AM Pooling turned off.
    2). Testing and debugging on an application server - using a JDBC Data Source and AM Pooling turned off
    3). Production deployment - using a JDBC Data Source and AM Pooling turned on.
    It is no problem to create multiple AM configurations to reflect this scenario. In order for the web part of the application to use the correct configurations, the DataBindings.cpx file must specify the correct ones. I was thinking to have 3 different DataBindings.cpx files and to change the CpxFileName context-param in the web.xml file as needed.
    My questions:
    1). Does this make sense as an approach? It should be better than having to change a single AM configuration every time I deploy or test. Is there any easy way to keep multiple DataBindings.cpx files in sync, given that we may add new pages from time to time? Alternatively, can we do some type of "include" processing to include just the dataControlUsages section into a common DataBindings.cpx file?
    2). How would you manage the build-and-deploy process? For the most part, in JDev we would be using configuration #1. The only time to switch to configuration #2 or #3 would be to build an EAR file for deployment. Is this something that it would make sense to accomplish with ANT? I'm not an ANT expert at all. The ANT script would have "build-test-ear" and "build-prod_ear" targets which would swap in a correct web.xml file, recompile everything, build the EAR, then put the development web.xml file back. I'm relatively sure this is possible... comments?
    3). Is there some other recommended approach?
    I appreciate any insights from experience, or even just ideas or thoughts that I can test out.
    Best regards,
    John

    Hi K,
    Sorry for the long, long delay in responding; I've been traveling - and thanks for the e-mail tickler too...
    To answer your question in short, I do think that ANT is the right way to go; there is an extra ANT task called XMLTask that I was able to download and play with, and it seems it would make this manipulation of the cpx file (or the xcfg file, for that matter) pretty straightforward. I don't have any code to post; it's just in the conceptual stage for me right now. I didn't see anything magical in JDev 11 TP3 that solved this problem for me either.
    Having said all of that, it's more complicated than it might appear. In addition to the DataBindings.cpx file (stores, among other things, which AM configuration to use for each data control), it's certainly possible to programmatically access an AM (specifying the configuration either directly in the code or via a properties file/etc). I'm not sure what the most common use case for AM configurations is, but in my case, I have a Test configuration and a Prod configuration. The Test config, among other things, disables AM pooling. When I am developing/testing, I always use the Test config; in Production, I always use the Prod config. Perhaps the best way for me to do this would be to have an "Active" config and use ANT tasks to copy either Test or Prod to "Active." However, our Subversion repository is going to have a few complaints about this.
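    As a rough illustration of the "specify the configuration directly in the code" option mentioned above, here is a minimal Java sketch. The module definition name model.AppModule, the configuration names, and the am.config system property are hypothetical, and it assumes the standard ADF BC oracle.jbo.client.Configuration API:
    import oracle.jbo.ApplicationModule;
    import oracle.jbo.client.Configuration;
    public class AmConfigSelector {
        public static void main(String[] args) {
            // Choose the AM configuration at runtime, e.g. -Dam.config=AppModuleProd;
            // fall back to the local test configuration by default.
            String configName = System.getProperty("am.config", "AppModuleLocal");
            // Create the root application module with the chosen configuration.
            ApplicationModule am =
                    Configuration.createRootApplicationModule("model.AppModule", configName);
            try {
                System.out.println("Using AM configuration: " + configName);
                // ... work with the application module here ...
            } finally {
                // Release the AM so pooled resources/connections are returned.
                Configuration.releaseRootApplicationModule(am, true);
            }
        }
    }
    This only covers programmatic access; the web application's data controls would still pick up whatever the DataBindings.cpx points at, which is why the ANT/XMLTask swap discussed above is still needed there.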
    John

  • Ffmpeg question - best practice

    I have a script I saved and have used for a while without any issues:
    #!/bin/bash
    for i in *.mkv
    do
    ffmpeg -i "$i" -acodec ac3 -vcodec copy "${i%.mkv}.mp4"
    done
    which gives me:
    Stream mapping:
      Stream #0:0 -> #0:0 (copy)
      Stream #0:1 -> #0:1 (ac3 (native) -> ac3 (native))
    My question is, is it better for me to copy the audio instead and is AC3 -> AC3 going to give me an issue?
    Sometimes I get AAC source audio, which is why I specify AC3.

    psjbeisler wrote:is AC3 -> AC3 going to give me an issue?
    No, other than wasting time re-encoding. You probably wouldn't notice a difference in quality. If something weird happens, like a change in channel layout, then it should be reported upstream.
    psjbeisler wrote:Sometimes i get AAC souce audio which is why i specify AC3
    AAC is the most common audio format for the MP4 container, so stream copying it would be the best option.
    qubodup wrote:
    You could check what the codec is:
    codec=`ffprobe video.mkv 2>&1 >/dev/null |grep Stream.*Audio | sed -e 's/.*Audio: //' -e 's/[, ].*//'`
    You can avoid the redirection, grep, and sed (see FFmpeg Wiki: FFprobe Tips).
    $ ffprobe -v error -select_streams a:0 -show_entries stream=codec_name -of default=nw=1:nk=1 input.mkv
    aac
    Note that only the first audio stream will be probed in this example. If there are others they will be ignored. Change "-select_streams a:0" to "-select_streams a" if you want to list all.
    Last edited by DrZaius (2015-04-18 23:41:57)

  • Session question; best practice

    Hi,
    One of our high-profile applications serves queries/updates to user sessions, but we want to improve user query performance and reduce general database activity.
    This piece of the application causes an auto-refresh to execute every 60 seconds. These queries execute against order tables looking for statuses on active orders, are user specific, and in some cases are not optimally tuned, producing very high database buffer-get and disk-read activity. On average, 1,500 executions of various flavors of these queries run hourly.
    My questions are:
    1) How can we get maximum performance?
    2) Can we cache these query results, say for 30 seconds?
    3) How can we cache them so that user sessions access the cache?
    -sharma

    Well, you could load the data and put it in application scope (in memory) with a timeout, so that it isn't used past a set age; once it expires, a request would have to go to the DB for the newer data. A rough sketch of that idea follows.
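    Here is a minimal sketch of that idea in a plain servlet environment; the class name, the 30-second TTL (taken from the question), and loadFromDatabase() are placeholders rather than an existing API:
    import java.util.Collections;
    import java.util.List;
    import javax.servlet.ServletContext;
    public class OrderStatusCache {
        private static final long TTL_MILLIS = 30_000L;   // serve cached rows for up to 30 seconds
        private static final String DATA_KEY = "orderStatusData";
        private static final String TIME_KEY = "orderStatusLoadedAt";
        // Returns the cached result if it is still fresh; otherwise re-runs the query
        // once and shares the new copy with every user session via application scope.
        public static synchronized List<String> getStatuses(ServletContext ctx) {
            Long loadedAt = (Long) ctx.getAttribute(TIME_KEY);
            @SuppressWarnings("unchecked")
            List<String> cached = (List<String>) ctx.getAttribute(DATA_KEY);
            boolean stale = cached == null || loadedAt == null
                    || System.currentTimeMillis() - loadedAt > TTL_MILLIS;
            if (stale) {
                cached = loadFromDatabase();                       // hypothetical DAO call
                ctx.setAttribute(DATA_KEY, cached);
                ctx.setAttribute(TIME_KEY, System.currentTimeMillis());
            }
            return cached;
        }
        private static List<String> loadFromDatabase() {
            // ... run the order-status query against the order tables here ...
            return Collections.emptyList();
        }
    }
    Each user's 60-second auto-refresh then reads the shared copy instead of hitting the order tables, so the query runs at most a couple of times per minute regardless of how many sessions are polling. If the statuses must be per-user, the same pattern works with a per-user key.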

  • Redirection question - best practice

    I have a managed session-scoped bean named UserBean, which, as its name implies, stores user information. Now, if the session has expired (or was never created), a lot of its methods will return null values and result in an error. What I'd like to know is the best way to redirect to a login page if the UserBean is null. My first idea was the following:
    <navigation-rule>
            <from-view-id>*</from-view-id>
            <navigation-case>
                <from-outcome>#{UserBean == null}</from-outcome>
                <to-view-id>/login.xhtml</to-view-id>
                <redirect/>
            </navigation-case>
    </navigation-rule>
    However, it didn't work. Am I onto something? If not -- what's the best solution?
    I appreciate your help.

    ServletRequest is an interface [1]. In an HTTP servlet environment the ServletRequest instance in the Filter is an implementation of HttpServletRequest [2]. So cast it back (see the sketch after the links below).
    [1] http://java.sun.com/javaee/5/docs/api/javax/servlet/ServletRequest.html
    [2] http://java.sun.com/javaee/5/docs/api/javax/servlet/http/HttpServletRequest.html
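    As a side note, the navigation rule above most likely fails because <from-outcome> is compared literally against the string returned by an action method; it is not evaluated as an EL condition. A servlet filter, as suggested above, checks the session before JSF is even involved. A minimal sketch, assuming the session-scoped bean is stored under the attribute name "UserBean" and that /login.xhtml should stay reachable (adjust both to your setup):
    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;
    public class LoginFilter implements Filter {
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            // In an HTTP servlet environment the ServletRequest really is an
            // HttpServletRequest, so cast it back to reach the session and URLs.
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;
            HttpSession session = request.getSession(false);
            boolean loggedIn = session != null && session.getAttribute("UserBean") != null;
            boolean loginPage = request.getRequestURI().endsWith("/login.xhtml");
            if (loggedIn || loginPage) {
                chain.doFilter(req, res);                    // let the request through
            } else {
                // Session expired or never created: send the user to the login page.
                response.sendRedirect(request.getContextPath() + "/login.xhtml");
            }
        }
        public void init(FilterConfig config) throws ServletException { }
        public void destroy() { }
    }
    Map the filter in web.xml to the protected pages (for example *.xhtml) so every request is checked, not just postbacks.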

  • MiniDV work flow question-best practice

    I've got a client with a Canon DC100 miniDV camcorder. This unit does not seem to have a FireWire or USB port.
    What I have are 17 of these little puppies that I need to get into iMovie so I can teach him the iMovie basics.
    What I think I need is some freestanding reader that plugs into the firewire/usb port.
    Is there a better way?
    Thanks

    Michael:
    Take a look at this camera here:
    http://www.camcorderinfo.com/content/Canon-DC100-Camcorder-Review.htm
    It records in MPEG-2 format onto miniDVD. You can insert the miniDVDs into your G5's drive, then take the movies out and convert them to DV. As far as I know, you can run into problems inserting miniDVDs/CDs in slot-loading drives, but not in a standard tray-loading one.
    The camera has an AV output, but you need an A/D converter to digitize the video. If your customer wants to learn to edit his home videos, he should switch to a miniDV (tape) consumer camera rather than getting other hardware to work with this one.
      Alberto

  • Database Primary Key Question - Best Practices

    I posted this in the ADDT forum, but I imagine I'll get more responses here:
    All you database developers - how do you deal with primary keys? Do you ALWAYS use an AutoIncrement/AutoNumber? Or only sometimes? Is there an argument to NOT use AutoIncrement? I know how I create databases and how I usually do things. I know how a few of my colleagues work. But how about the rest of the world? (Research for a MS Access book I am involved with.)
    Alec
    Adobe Community Expert

    .oO(Alec)
    >I posted this in the ADDT forum, but I imagine I'll get more responses here:
    >All you database developers - how do you deal with primary keys? Do you
    >ALWAYS use an AutoIncrement/AutoNumber?
    No.
    >Or only sometimes? Is there an argument to NOT use AutoIncrement?
    AUTO_INCREMENT is a proprietary MySQL feature. For some people this might be an argument against it, but it doesn't have to be. Every DBMS has its own special features. You just have to decide whether you want to keep your code/queries as portable as possible or want to get the most out of your DB. Usually I prefer performance/features over portability, simply because for me and my projects it's very unlikely that I'll have to change the DBMS. I've chosen MySQL for good reasons and will stay with it for quite a while.
    >I know how I create databases and how I usually do things. I know how a few of my colleagues
    >work. But how about the rest of the world? (Research for a MS Access book I am involved with.)
    It always depends on the table itself, what data it contains, what I want to do with it, and also some personal preferences. In n:m tables, for example, there's no need for an extra numeric PK, since the entire record already is the PK, built from two or more FKs.
    But if I need a numeric PK, I usually use sequences. Some DBMS support them natively; in MySQL they can be emulated with an extra table. It simply means that the PK number is generated _before_ the record itself is inserted. For me and my framework this has some advantages (it makes the internal work a bit easier), but of course in other cases an AUTO_INCREMENT might be more appropriate.
    So IMHO there's no general solution. If an AUTO_INCREMENT or something similar fits your needs, you should use it. I don't see a real problem with that.
    Micha

  • Best practices for placing images in epubs

    Okay, I've read the books, I've watched the tutes, and I'm still at a loss on the best way to add images to InDesign 5.5 documents that I will convert to ePub. The images are created in Photoshop at 300 dpi and sized at 800 by 600. And yet when I place them in InDesign and create the ePub, the images display poorly in ADE and other readers. What is the magic formula for adding images to my ePubs?
    Thanks in advance
    Chris

    Steve/Jongware:
    My thanks for your responses.
    Very good point to look at the images within the epub file. They are 72 dpi images, so it would seem that InDesign is indeed lowering the resolution. I had thought I tried every variation of the available settings within InDesign to avoid this but it would seem to not be the case. If you have any specific suggestions about what setting is causing this, that would be great.
    Viewing the images in the target device is of course ideal. I'm creating this for a wide range of online bookstores, so what device it will be viewed on can't be known. Since ADE drives more than 50 devices, I had hoped that would at least prove to be a reliable base of some sort -- disappointing to hear that ADE can not be trusted in that regard. I had assumed I could proof the book in that before converting to mobi and proofing on a Kindle.
    Why would anyone insert 300 dpi images, you ask? In its publishing guidelines, Amazon says: "To future-proof the content, save images in 300 dpi." Is this then bad advice on their part? Elizabeth Castro echoes this recommendation in her excellent book on the subject, by the way. It's rather difficult to know just what to do, I must admit. But I guess we're still in the early days of eBook creation, with best practices still in a state of flux.
    Once again, my thanks for sharing your experience.
    Chris

  • Best Practice: Deploying Group Policy to Users on different OUs

    Greetings, everyone! I need some advice on how to deploy some Group Policy objects to specific users stored in different OUs.
    Let me set the stage: I work for a large school district and have recently taken over the district's career center. The idea behind the career center is that students from different high schools around the city come in to take classes based on their choice of career, such as radio broadcasting or auto mechanics. The AD structure is set up so that each school has its own OU. When a user (staff, student, etc.) is assigned to a school OU, they are automatically added to their school's security group (e.g. EASTHIGH-STUDENT), and when any user moves from one school to another, we have to move their AD account to that school's OU, which removes the old school's security group and applies the new school's security group.
    For the career center, since we have students coming from different buildings every day, rather than trying to find a way to move their AD accounts from their high school OU to the career center OU, the previous techs created generic accounts (such as tv001, tv002, etc.) in AD and stored them in the career center OU. This way, teachers can assign students a particular generic account so that they can access the drives and printers at the career center, as well as access the career center network drives while they are at their home high school.
    Since I have moved to the career center, and apparently I have more knowledge about Group Policy than most of the techs in the district, the district system engineers want me to remove all of the generic accounts from the career center OU and have students use their own AD accounts. Obviously I also want to do this, since the generic accounts are very confusing to me, but I'm trying to figure out the best way to do it.
    For simplicity's sake, I'm just going to start off by figuring out how to set up a group policy for mapping the career center drives. I know that the best way would be to create security groups for each career area and add students to those groups so that only those particular students get the GPO for the career center, but my question is: where should I link the group policies? Do I need to link them at the root of the domain so that every OU is hit?
    Thanks!

    Don't link it to the root. Apply the drive mapping as a policy at the OU, or apply the drive mapping using Group Policy Preferences with security group targeting. I would also strongly recommend you check out my articles:
    Best Practice: Active Directory Structure Guidelines – Part 1
    Best Practice: Group Policy Design Guidelines – Part 2
    Hope it helps...

  • Is it best practice to use account lockout policy

    Windows Server 2008 r2 (will be moving to 2012 r2)
    Since implementing an account lockout policy two days ago, we've been bombarded by calls to unlock accounts, and after a few minutes the same users get their accounts locked again.
    My question: since we are already using a strong password policy (8 characters minimum, 90 days maximum before expiry), is it still best practice at this day and age to rely on an account lockout policy, keeping in mind the above flood of calls?

    Account lockout is generally considered unnecessary if you have implemented a very strong password complexity/history policy.
    There are many discussions on the topic of password/passphrase "strength", and it's important to consider the various factors involved and how they affect your organisation's view of "security".
    I would say that 8 characters is not very strong. You should also consider whether password aging/expiry is a useful control at all.
    Since this forum is related to Group Policy, and password/security is really quite a separate topic, you should consider the DS forum or the security forum, or separate research or consulting services, to get a broad understanding of the things to consider for your particular requirements/scenario.
    Other considerations include any security standards which can be useful reading to understand the nature of the topic (e.g. PCI DSS, HIPAA, FIPS, etc)
    Don
