Wireless authentication network design questions... best practices... etc...

Working on a wireless deployment for a client... wanted to get updated on what the latest best practices are for enterprise wireless.
Right now, I've got the corporate SSID integrated with AD authentication on the back end via RADIUS.
Would like to implement certificates in addition to the user-based authentication so we have some level of dual-factor authentication.
If a machine is lost, I don't want a certificate to allow an unauthorized user access to a wireless network. I also don't want poorly managed AD credentials (written on a sticky note, for example) opening up the network to an unauthorized user either... is it possible to do an AND condition, so that both are required to get access to a wireless network?

There really isn't a true two-factor authentication you can just do with RADIUS unless it's ISE and you're doing EAP Chaining.  One workaround that works with ACS or ISE is to use "Was machine authenticated".  This again only works for Domain Computers.  The way Microsoft works :) is that you have a setting for user or computer authentication... this does not mean user AND computer.  So when a Windows machine boots up, it will send its system name first and then the user credentials.  System name or machine authentication only happens once, and that is during boot-up.  User authentication happens every time a full authentication has to occur.
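As a rough sketch, that workaround ends up as an authorization rule along these lines (names are illustrative; ISE/ACS express the check as a condition on the Network Access:WasMachineAuthenticated attribute):
Rule: Corp-Wireless-Access
     IF   Network Access:WasMachineAuthenticated EQUALS True
     AND  AD1:ExternalGroups EQUALS "corp.local/Users/Domain Users"
     THEN PermitAccess
Since the machine half is only recorded at boot, a client that reauthenticates without a reboot can fail the first condition, which is one reason this is a workaround rather than true EAP Chaining.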
Check out these threads; they explain it pretty well.
https://supportforums.cisco.com/message/3525085#3525085
https://supportforums.cisco.com/thread/2166573
Thanks,
Scott
Help out others by using the rating system and marking answered questions as "Answered"

Similar Messages

  • Design Patterns/Best Practices etc...

    fellow WLI gurus,
    I am looking for design patterns/best practices especially in EAI / WLI.
    Books ? Links ?
With patterns/best practices I mean, for instance:
* When to use asynchronous/synchronous application view calls
* where to do validation (if you're connecting 2 EIS: in both EIS, or only in WLI?)
    * what if an EIS is unavailable? How to handle this in your workflow?
    * performance issues
Anyone want to share his/her thoughts on this?
    Kris

              Hi.
              I recently bought WROX Press book Professional J2EE EAI, which discusses Enterprise
              Integration. Maybe not on a Design Pattern-level (if there is one), but it gave
me a good overview and helped me make some design decisions. I'm not sure if it's
              technical enough for those used to such decisions, but it proved useful to me.
              http://www.wrox.com/ACON11.asp?WROXEMPTOKEN=87620ZUwNF3Eaw3YLdhXRpuVzK&ISBN=186100544X
              HTH
              Oskar
              

  • Network Design Review - Best Practices

    Looking to start a discussion around best practices for inbound network design at the core. 
The planned devices are as follows:
    Edge Routing / DMVPN - Cisco 2951
    Cisco UCM / IP Phone VPN Concentrator - Cisco ASA 5512-X
    Cisco AnyConnect SSL Client Concentrator - Cisco ASA 5515-X
    Cisco FirePower / IPS Device - Cisco ASA 5515-X
    The plan is as follows:
    All traffic enters through the 2951. 
    DMVPN traffic will go directly to the FirePower Device and then to the core network.
    IP Phones will pass-through 2951, enter 5512-X for VPN, go to FirePower and then to the core network.
    AnyConnect Clients will pass-through 2951, enter 5515-X for VPN, go to FirePower and then to the core network. 
Wondering if anyone else has completed a similar setup and any issues you may have run into.
    Basic diagram attached. 
    Thanks!


  • Wireless Authentication/Security Design questions

Wireless newbie here...I was required to quickly stand up a wireless deployment at a new warehouse/office building. I have the basic network up and working. My remote AP's have associated with the 2106 in the main office and users can associate and authenticate with the 1130G AP's and can access the office network. I did the basic configs and am now looking to tighten up security. My questions are as follows:
1) The user clients are Dell Laptops with integrated wireless. They authenticate using LEAP... how do I migrate to EAP, or do I need to? I have a Cisco ACS doing RADIUS authentication now.
    2) Should I be using some kind of supplicant client on the laptops?
3) How do I filter MACs so rogue AP's and rogue clients can't try to associate?
    4) Am I correct in assuming the connections between the 1130 AP's and 2106 are secured and if so do I need to tweak anything to tighten them up?
    5) I have an AP in the main office building that I want to setup to detect rogue AP's. Do I have it associate as a regular AP and push some kind of policy to turn it into a detector?
    I have attached a diagram to help explain. Any help would be appreciated.
    v/r
    Chad

1. LEAP is a form of EAP, so you must already have something terminating your EAP sessions. The WLC can do this to some extent, or ACS. Which one you choose will be based upon your requirements for manageability, scalability and feature-richness. I would suggest that PEAP-MSCHAPv2 provides a good balance of usability and security, and is significantly better than LEAP.
2. No, stick with the Windows XP SP2 supplicant. This can be configured using domain policy (2k3 SP1 or better) and is pretty good. Just make sure your laptops have new Intel drivers on them. Dell in particular has been quite bad about shipping old drivers in its builds.
3. MAC authentication is now largely regarded as a waste of time. It is so easy to spoof a MAC address it's ridiculous, and it's a fair amount of work for the admin(s).
4. The LWAPP tunnel encrypts all management / config / security related traffic between the AP and WLC, while user data is simply encapsulated in LWAPP, so it can potentially be read if packets are captured.
5. All APs will do rogue detection; you don't really need dedicated APs unless you're REALLY paranoid. The main benefit is quicker detection, but the drawback is that the 'detector' AP won't serve clients.
    Regards,
    Richard

  • Process modelling : Design patterns & Best Practices

    Hi
Could someone please suggest/share any technical information or documents related to 'Process modelling - Design Patterns & Best Practices'
    Thanks in Advance
    Santosh K.
    Edited by: Santosh539 on Jul 29, 2010 4:07 PM

    Hi Santosh,
    There is no specific site with all the information you asked for.
    But I think these links would be helpful...
    on Work Flow Patterns: http://www.workflowpatterns.com/
    on BPM Service Pattern: http://enterprisearchitecture.nih.gov/ArchLib/AT/TA/WorkflowServicePattern.htm
    HTH
    Sharma

  • Design Pattern / Best Practice Question

    Hi,
    I have been using Flex for a while now, but there is a
    scenario which I still have not found a solution I'm entirely happy
    with. I'm wondering if anyone else out there might have suggestions
    on a design pattern or best practice.
    Suppose I have a view which depends on model data which
    resides in some back end systems. That model data may or may not
    have been loaded (e.g. via a web service or remote object call) at
    the time the view is displayed.
    I don't know if the user will ever visit this part of the
    application so I would prefer to defer retrieval of the data until
    the user actually navigates to this view. Or I want to retrieve the
    data each time the view is displayed because the data is dynamic
    and could change between one presentation of the view and the next.
    Because the data comes from several systems, I cannot simply
    make one service call and display the view when it completes and
    all the data is available. I need to call several services which
    could complete in any order but I only want to display my view
    after I know all of them have completed and all of the model data
    is available. Otherwise, I can present the user an incomplete view
    (e.g. some combo boxes are empty until the corresponding service
    call to get the data completes).
    The solution I like best so far is to dispatch a single event
    (I am using Cairngorm) handled by a single command which acts as
    the caller and responder for all of the services. This command then
    remembers which responses it has received and dispatches another
    event to navigate to the view once all the results have returned.
    If the services being called are used in different
    combinations on different screens, this results in proliferation of
    events and commands. An event and command for each service and
    additional events and commands to bundle the services and the
    handling of their responses in the right combinations for each of
    the views.
    Another approach is to have some helper class listen for all
    of the model changes and only display the view when the model
    enters some state that is acceptable. It is sometimes difficult to
    determine just by looking at the model whether it is in the right
    state (e.g. how can I tell that a collection is the new collection
    that should just have been requested versus an old one lingering
    from a previous call). The logic required can get kind of
    convoluted and brittle.
    Basically, all of the solutions I've come up with so far seem
    less than ideal and a little hackish. I keep thinking there is some
    elegant solution out there that I am just missing ... but so far,
    no luck finding it. Thoughts?
    Thanks.
    Bill

I think a service class is right - to coordinate your calls.
I would have 1 event per call (so you could listen to individual responses if you wanted to).
Then I would use a flag. If you want to check for staleness, you would probably want two objects to map your service flag to lastRequested and lastCompleted. When you check, check that it's completed, that it's not stale, and that your lastRequested is less than lastCompleted (meaning that you're not currently waiting, i.e. you've returned since making a request). Then make the request and update your lastRequested.
Here's a snippet of what I mean.
./paul
public static const SVC1_LOADED:int = 1;
public static const SVC2_LOADED:int = 2;
public static const SVC3_LOADED:int = 4;
public static const SVCALL_LOADED:int = 7; // bitwise OR of the three flags above
private var completedFlag:int = 0;
Then each call would have its own callback:
private function onSvc1Complete( evt:Event ):void {
    completedFlag |= SVC1_LOADED;              // mark service 1 as complete
    lastCompleted[ SVC1_LOADED ] = getTimer(); // record when it completed, for staleness checks
    dispatchEvent( new Event( "svc1complete" ) );
    checkDone();
}
private function checkDone():void {
    if ( completedFlag == SVCALL_LOADED )      // all three services have reported back
        dispatchEvent( new Event( "allLoaded" ) );
}

  • ISE policy creation question - best practices

    Ok, I am a rookie ISE user here and am trying to learn as I go. I have a 802.1x policy for our corporate users on both wired and wireless and a wireless guest policy that redirects to the guest portal to enter credentials created in the sponsor portal. The corporate user has access to corporate resources and the guest basically has access to just the internet.
I need to make what I am calling a Vendor policy that is basically a hybrid of the corporate user and the guest user. These would be vendors that are on-site to assist with programming and need access longer than what a guest account can be created for. This would also have specific ACLs that grant them access to the specific resources they would need. I would like to tie this into AD authentication since they have an AD account created to be able to access those corporate resources in most cases. My first question is do I have a single policy that is tweaked as vendors come and go, or do I simply create a specific policy for each vendor? My second question is do I or should I create unique SSIDs for each vendor?
As I said, I am just now getting into configuring ISE. I am just not sure of what is considered a best practice or what is considered a secure way to make things happen. In regards to the policies I have created, they work, but I think I have a couple of holes to address.
    Thanks ...
    Brent

Mostly makes sense. I have the AD part; I just need to get an AD group created for my test subject.
I created an Endpoint Identity Group to place the vendors' devices into so that we can allow a laptop to connect but not a phone. Got that.
I think I can handle the Authorization Profile. It will be something like: if VendorAsset and AD1:ExternalGroups Equals VendorADGroup, then VendorPermissions. VendorPermissions would be the ACL that limits where they can go. I also need to create a non-802.1X-based SSID and add this to the Authorization Profile, but it can still be generic enough to be usable by all vendors.
    I think it is my Authentication rules that I need to modify for Vendor as my Corporate based policies use Dot1x and I need a policy that does not use dot1x. Right?
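Pulling those pieces together, the authorization rule would look roughly like this (VendorAsset, VendorADGroup and VendorPermissions are the names from the posts above; the layout is an illustrative sketch of ISE rule logic, not exact screens):
Rule: Vendor-Access
     IF   Endpoint Identity Group EQUALS VendorAsset
     AND  AD1:ExternalGroups EQUALS VendorADGroup
     THEN apply VendorPermissions (the authorization profile carrying the restrictive ACL)
On the authentication side, that would mean a rule matching the new non-802.1X SSID (e.g. via a WLAN/SSID condition) rather than the Dot1x conditions the corporate policies use.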

  • [CS5.5/6] - XML / Data Merge questions & Best practice.

    Fellow Countrymen (and women),
I work as a graphic designer for a large outlet chain retailer which is constantly growing our base of centers. This growth has brought a workload that used to be manageable with but two people to a never-ending sprint with five. Much of what we do is print, which is not my forte, but is also generally a disorganized, ad-hoc affair into which I am wading to try to help reduce overall strain.
Upon picking up InDesign I noted the power of the simple Data Merge function and have added it to our repertoire for mass-merging data sources. There are some critical failures I see in this as a tool going forward for our purposes, however:
1) Data Merge cannot handle information stored and categorized in a singular column well. As an example, we have centers in many cities, and each center has its own list of specific stores. Data Merge cannot handle a single-column, or even multiple-column, list of these stores very easily and has forced us into some manual operations to concatenate the data into one cell and then, using delimiter characters, find and replace hard returns to separate them.
    2) Data Merge offers no method of alternate alignment of data, or selection by ranges.  That is to say:  I cannot tell Data merge to start at Cell1 in one column, and in another column select say... Cell 42 as the starting point.
    3) Data merge only accepts data organized in a very specific, and generally inflexible pattern.
    These are just a few limitations.
    ON TO MY ACTUAL DILEMMA aka Convert to XML or not?
    Recently my coworker has suggested we move toward using XML as a repository / delivery system that helps us quickly get data from our SQL database into a usable form in InDesign. 
I've watched some tutorials on Lynda.com and haven't yet seen a clear answer to a very simple question:
    "Can XML help to 'merge' large, dynamic, data sets like a list of 200 stores per center over 40 centers based off of a single template file?"
    What I've seen is that I would need to manually duplicate pages, linking the correct XML entry as I go rather than the program generating a set of merged pages like that from Data Merge with very little effort on my part.  Perhaps setting up a master page would allow for easy drag and drop fields for my XML data?
    I'm not an idiot, I'm simply green with this -- and it's kind of scary because I genuinely want us to proceed forward with the most flexible, reliable, trainable and sustainable solution.  A tall order, I know.  Correct me if I'm wrong, but XML is that beast, no?
    Formatting the XML
Currently I'm afraid our XML feed for our centers isn't formatted correctly, with the current format looking as such:
<BRANDS>
     <BRAND BrandID="xxxx">
          [Brand Name]
          [Description]
          [WebMoniker]
          <CATEGORIES>
               <CATEGORY xmlns="URL" WebMoniker="category_type"/>
          </CATEGORIES>
          <STORES>
               <STORE StoreID="ID#" CenterID="ID#"/>
          </STORES>
     </BRAND>
</BRANDS>
I don't think this is currently usable because if I wanted to create a list of stores from a particular center, that information is stored as an attribute of the <STORE> tag, buried deep within the data, making it impossible to 'drag-n-drop'.
Not to mention much of the important data is held in attributes rather than text fields which are children of the tag.
I'm thinking of proposing the following organizational layout:
<CENTERS>
     <CENTER>
          [Center_name]
          [Center_location]
          <CATEGORIES>
               <CATEGORY>
                    [Category_Type]
                    <BRANDS>
                         <BRAND>
                              [Brand_name]
                         </BRAND>
                    </BRANDS>
               </CATEGORY>
          </CATEGORIES>
     </CENTER>
</CENTERS>
    My thought is that if I have the <CENTER> tag then I can simply drag that into a frame and it will auto populate all of the brands by Category (as organized in the XML) for that center into the frame.
    Why is this important?
This is used on multiple documents in different layout styles, and since our store list is ever-changing as leases end or begin, over 40 centers this becomes a big hairy monster. We want this to be as automated as possible, but I'd settle for a significant amount of dragging and dropping as long as it is simple and straightforward. I have a high tolerance for drudging through code and creating workarounds, but my co-workers do not. This needs to be a system that is repeatable and understandable and needs to be able to function whether I'm here or not -- mainly because I would like to step away from the responsibility of setting it up every time.
    I'd love to hear your raw, unadulterated thoughts on the subject of Data merge and XML usage to accomplish these sorts of tasks.  What are your best practices and how would you / do you accomplish these operations?
    Regards-
    Robert

From what I've gleaned through watching Lynda tutorials on the subject, what I'm hoping to do is indeed possible.
Peter, I don't disagree with you that there is a steep learning curve for me as the instigator / designer of this method for our team, but in terms of my teammates and end-users that will be softened considerably. Even so, I'm used to steep learning curves and the associated frustrations -- but I cope well with new learning and am self-taught in many tools and programs.
Flow-based XML structures:
    It seems as though as long as the initial page is set up correctly using imported XML, individual data records that cascade in a logical fashion can be flowed automatically into new pages.  Basically what you do is to create an XML based layout with the dynamic portion you wish to flow in a single frame, apply paragraph styles to the different tags appropriately and then after deleting unused records, reimport the XML with some specific boxes checked (depending on how you wish to proceed).
From there, simply dragging the data root into the frame will cause overset text as it imports all the XML information into the frame. Assuming that everything is cascaded correctly, using auto-flow will cause new pages to be automatically generated with the tags correctly placed in a similar fashion to Data Merge -- but far more powerful and flexible.
    The issue then again comes down to data organization in the XML file.  In order to use this method the data must be organized in the same order in which it will be displayed.  For example if I had a Lastname field, and a Firstname field in that order, I could not call the Firstname first without faulting the document using the flow method.  I could, however, still drag and drop content from each tag into the frame and it would populate correctly regardless of the order of appearance in the XML.
    Honestly either method would be fantastic for our current set of projects, however the flow method may be particularly useful in jobs that would require more than 40 spreads or simple layouts with huge amounts of data to be merged.

  • Question - Best practice data source for Vs2008 and Crystal Reports 2008

    I have posted a question here
    CR2008 using data from .NET data provider (ADO.NET DATASET from a .DLL)
but think that perhaps I need general community advice on best practice with data sources.
    In Crystal reports I can choose the data source location from any number of connection types, eg ado.net(xml), com, oledb, odbc.
Now in regard to the post, the reports were all created in Crystal Reports 6.3, upgraded to Crystal XI, and now I'm using the latest and greatest. I wrote the Crystal Reports 6.3 / XI reports back in the day to do the following: the reports use a function from a COM object which returns an ADO recordset, which is then consumed fine.
    So I don't want to rewrite all these reports, of which there are many.
    I would like to know if any developers are actually using .NET Class libraries to return ADO.NET datasets via the method call or if you are connecting directly to XML data via whatever source ( disk, web service, http request etc).
    I have not been able to eliminate the problem listed in the post mentioned above, which is that the Crystal Report is calling the .NET class library method twice before displaying the data. I have confirmed this by debugging the class lib.
    So any guidance or tips is appreciated.
    Thanks

This is already being discussed in one of your other threads. Let's close this one out and concentrate on the one I've already replied to.
    Thanks

  • Subclass design problems/best practices

    Hello gurus -
I have a question regarding the domain objects I'm sticking in my cache. I have a Product object and would like to create a few subclasses - say BookProduct and MovieProduct (along with the standard Product objects). These really need to be contained in the same cache. The issue/concern here is that both subclasses have attributes that I'd like to index AND query on.
When I try to create an index on a subclass's attribute (one which only exists on that subclass) while there are just "standard" products in the cache, I get the following error:
    2010-10-20 11:08:43.280/227.055 Oracle Coherence GE 3.5.2/463 <Error> (thread=DistributedCache:MyCache, member=2): Exception occured during index rebuild: java.lang.RuntimeException: Missing or inaccessible method: com.test.domain.product.Product.getAuthors()
    So I'm not sure the indexing is working or stopping once it hits this exception.
    Furthermore, I get a similar error when attempting to Filter based on that attribute. So if I want to add the following filter:
    Filter filter = new ContainsAnyFilter( "getAuthors", authors );
    I will receive the following exception:
    Caused by: Portable(java.lang.RuntimeException): Missing or inaccessible method: com.test.domain.product.Product.getAuthors()
What is considered best practice for this, assuming these really should be part of the same named cache? Should I attempt to subclass the extractors to "inspect" the object for its class type during indexing or applying filters? Or should I just add all the attributes of BookProduct and MovieProduct into the parent object and just forget about subclassing? That seems to have a pretty high "yuck" factor to me. I'm assuming people have run into this issue before and am looking for some best practices or perhaps something that deals with this that I'm missing. We're currently using Coherence 3.5.2. Not sure if it matters, but we are using the POF format for serialization.
    Thanks!
    Chris

    Hi Chris,
I had a similar problem. The way I solved it was to use a subclass of ChainedExtractor that catches all RuntimeExceptions occurring during extraction, like the following:
/**
 * {@link ChainedExtractor} that catches any exceptions during extraction and returns null instead.
 * Use this for cases where you're not certain that an object contains the necessary methods to be extracted.
 * E.g. an object in the cache does not contain the getSomeProperty() method, but other objects do.
 * When these are put together in the same cache we might want to use a {@link ChainedExtractor} like the following:
 * new ChainedExtractor("getSomeProperty.getSomeNestedProperty"). However, this will result in a RuntimeException
 * for those entries that don't contain someProperty. Using this class instead won't result in the exception.
 */
public class SafeChainedExtractor extends ChainedExtractor {
     public SafeChainedExtractor() {
          super();
     }
     public SafeChainedExtractor( String sMethod ) {
          super( sMethod );
     }
     @Override
     public Object extract( Object entry ) {
          try {
               return super.extract( entry );
          } catch ( RuntimeException e ) {
               return null;
          }
     }
     @Override
     public Object extractFromEntry( Entry entry ) {
          try {
               return super.extractFromEntry( entry );
          } catch ( RuntimeException e ) {
               return null;
          }
     }
}
For all indexes and filters we then use extractors that subclass SafeChainedExtractor, like the following:
public class NestedPropertyExtractor extends SafeChainedExtractor {
     private static final long serialVersionUID = 1L;
     public NestedPropertyExtractor() {
          super( "getSomeProperty.getSomeNestedProperty" );
     }
}
// adding an index:
myCache.addIndex( new NestedPropertyExtractor(), false, null );
// using a filter:
myCache.keySet( new EqualsFilter( new NestedPropertyExtractor(), "myNestedProperty" ) );
This way, the extractor will just return null when a property doesn't exist on the target class.
    Regards
    Jan

  • Data warehousing question/best practices

    I have been given the task of copying a few tables from our production database to a data warehousing database on a once-a-day (overnight) basis. The number of tables will grow over time; currently it is 10. I am interested in not only task success but also best practices. Here's what I've come up with:
    1) drop the table in the destination database.
    2) re-create the destination table from the script provided by SQL Developer when you click on the 'SQL' tab while you're viewing the table.
3) INSERT INTO the destination table from the source table using a database link (see the sketch after this list). Note: I am not aware of any columns in the tables themselves which could be used to filter added/deleted/modified rows only.
    4) After data import, create primary key and indexes.
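For what it's worth, step 3 is roughly the following SQL (table and link names are illustrative, borrowing the PATIENT table from the question below):
INSERT /*+ APPEND */ INTO patient
     SELECT * FROM patient@prod_link;
COMMIT;
The APPEND hint requests a direct-path insert, which is usually faster for bulk copies like this.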
    Questions:
    1) SQL Developer included the following lines when generating the table creation script:
    <table creation DDL commands>
    then
    PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE (INITIAL 251658240 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBLSPC_PGROW"
    it generated this code snippet for the table, the primary key and every index.
    Is this necessary to include in my code if they are all default values? For example, one of the indexes gets scripted as follows:
    CREATE INDEX "XYZ"."PATIENT_INDEX" ON "XYZ"."PATIENT" ("Patient")
    -- do I need the following four lines?
    PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
    STORAGE(INITIAL 60817408 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBLSPC_IGROW"
    2) Anyone with advice on best practices for warehousing data like this, I am very willing to learn from your experience.
    Thanks in advance,
    Carl

    I would strongly suggest not dropping and recreating tables every day.
    The simplest option would be to create a materialized view on the destination database that queries the source database and to do a nightly refresh of that materialized view. You could then create a materialized view log on the source table and then do an incremental refresh of the materialized view.
    You can schedule the refresh of the materialized view either in the materialized view definition, as a separate job, or by creating a refresh group and adding one or more materialized views.
    Justin
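A minimal sketch of Justin's suggestion, again borrowing the PATIENT table and an illustrative database link name (prod_link):
-- on the source (production) database, to enable incremental (fast) refresh:
CREATE MATERIALIZED VIEW LOG ON patient;
-- on the warehouse database, refreshed nightly at 2:00 AM:
CREATE MATERIALIZED VIEW patient_mv
     BUILD IMMEDIATE
     REFRESH FAST
     START WITH SYSDATE
     NEXT TRUNC(SYSDATE) + 1 + 2/24
     AS SELECT * FROM patient@prod_link;
With the materialized view log in place, each refresh ships only the changed rows instead of re-copying the whole table.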

  • XDK -Performance best practices etc

    All ,
I'm looking for some best practices, with specific emphasis on performance, for the Oracle XDK.
Can anyone share such a doc or point me to white papers etc.?
Thanks

    The following article discusses how to choose the most performant parsing strategy based on your application requirements.
    Parsing XML Efficiently
    http://www.oracle.com/technology/oramag/oracle/03-sep/o53devxml.html
    -Blaise
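For a quick sense of the trade-off that article covers: DOM materializes the whole document in memory, while SAX streams it. A minimal JAXP SAX sketch (file path and handler are illustrative):
import java.io.File;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public static void parseStreaming( String path ) throws Exception {
    SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
    parser.parse( new File( path ), new DefaultHandler() {
        @Override
        public void startElement( String uri, String localName, String qName, Attributes attrs ) {
            // called per element as the document streams past; no tree is built
            System.out.println( "element: " + qName );
        }
    } );
}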

  • Design question - best order of implementation

    Please see attached an existing core network infrastructure design. I am planning to implement the following:
    1. A secondary DMVPN hub (dual DMVPN hub)
    2. A secondary ASA (active/active configuration)
    3. A secondary ISP (BGP multihoming)
    What would be the best/right order to start implementing these technologies?
    Thanks,

    Hello jimiohara,
    jimiohara wrote:
The question I have is what would be the best way to store these constant parameters as strings so they can be retrieved using a single identifier such as the graph's type or class name (bearing in mind there are about 20+ different graphs)?
I am not really sure whether I understand the question right. But why not use a hash table (e.g. HashMap)? In this key/value list you can store whatever you like. The key only has to be "hashable" (implement equals() and hashCode(), e.g. String!!!). If you want to use TreeMap you also have to define an order with Comparable or Comparator.
    Or use the Properties-class where the key and the value are always Strings.
    regards
    tk
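A minimal sketch of the HashMap approach tk describes (the graph names and parameter strings are made up for illustration):
import java.util.HashMap;
import java.util.Map;

// one map, keyed by graph type/class name:
Map<String, String> graphParams = new HashMap<String, String>();
graphParams.put("BarGraph", "xAxis=time;yAxis=count");
graphParams.put("PieGraph", "legend=right;labels=percent");
// retrieval by a single identifier:
String params = graphParams.get("BarGraph");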

  • Network Design Questions

    Hello All,
I am in the process of replacing some of our current Cisco equipment with newer ones, as well as incorporating additional third-party hardware: a SonicWall NSA 5500. I am attaching the preliminary network diagram.
    -The SonicWalls are in Active/Standby mode
    -The Core 1 switch is the primary HSRP gateway as well as the primary STP root for all Vlans.
    -Core switches perform all of the inter-vlan routing
-The uplinks FROM the Core switches TOWARDS the WAN-ACCESS-STACK will be Port-Channels in trunk mode, carrying traffic for VLAN 2 (infrastructure VLAN between Cores, Wan-Access-Switches and Sonicwalls), VLAN 254 (management VLAN, which is the same throughout the entire network), and the native VLAN 999.
    I have a few questions and would appreciate your input on them:
-I would like to carry the management VLAN all the way to the DMZ-ACCESS-STACK, and ultimately to the small DMZ-PUB switches (located on different floors). What is the best/safest method of doing this? Should I or shouldn't I extend the management VLAN all the way to the DMZ zone? The DMZ zone doesn't use any directly assigned public IP addresses.
    -Should the uplinks FROM the WAN-ACCESS-STACK TOWARDS the Sonicwalls be:
                  -each link in access mode (VLAN2)
                  -each link in trunk mode (VLAN2, VLAN254, VLAN999)
                  -all links combined into one port-channel access mode (VLAN2)
                  -all links combined into one port-channel trunk mode (Vlan 2, 254, 999).
** SonicWall does support port-channeling, I have tested it successfully.
    Is this design valid? Any suggestions?
    Thank you for your input in advance.

    Hey Jon, 
You have a good and valid point about whether the SonicWall interfaces are L3 or L2. Since they are assigned an IP address I assume that they are L3; however, what throws me off is the VLAN ID tag field. I am attaching the screenshot of it.
Moreover, what I have decided to do is the following:
1. Created port-channel in trunk mode from Core 1 towards WAN-ACCESS-STACK allowing vlans 2,254,999.
    2. Created port-channel in trunk mode from Core 2 towards WAN-ACCESS-STACK allowing vlans 2,254,999.
    3. Created 1 port-channel in access mode for VLAN 2 from WAN-ACCESS-STACK towards the Sonicwalls.
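For reference, the switch side of that would look roughly like this in IOS (port-channel numbers are illustrative; the encapsulation command depends on platform):
interface Port-channel10
 description Trunk to WAN-ACCESS-STACK
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 999
 switchport trunk allowed vlan 2,254,999
 switchport mode trunk
!
interface Port-channel20
 description Access link to SonicWalls
 switchport mode access
 switchport access vlan 2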
Everything seems fine, however, except one thing. I can't ping the SonicWall IP address 10.100.2.254, nor any other address on the Internet such as 8.8.8.8, from the WAN-ACCESS-STACK or from the ACCESS-LAYER-SW1 switch that is connected directly to the Cores. I have no such problem pinging from the Core.
    To summarize,
    I CAN:
    -from WAN-ACCESS-STACK ping my ip default-gateway (vlan 254) 10.100.254.1
    -from WAN-ACCESS-STACK ping ACCESS-LAYER-SW1 switch (vlan 254) 10.100.254.15
    -from ACCESS-LAYER-SW1 switch ping my ip default-gateway (vlan 254) 10.100.254.1
-from ACCESS-LAYER-SW1 ping WAN-ACCESS-STACK switch (vlan 254) 10.100.254.20
    -from the CORE switches ping WAN-ACCESS-STACK and ACCESS-LAYER-SW1, along with the SONICWALL LAN IP 10.100.2.254 as well as any address on the Internet such as 8.8.8.8
    I CAN'T:
    -from WAN-ACCESS-STACK ping the SONICWALL LAN IP 10.100.2.254
    -from WAN-ACCESS-STACK ping any Internet address such as 8.8.8.8
    -from ACCESS-LAYER-SW1 ping the SONICWALL LAN IP 10.100.2.254
    -from ACCESS-LAYER-SW1 ping any Internet address such as 8.8.8.8
When I do the traceroute on the WAN-ACCESS-STACK, the ICMP packets get delivered to the active Core and go nowhere from there. See below:
    WAN-ACCESS-STACK#traceroute 8.8.8.8
    Type escape sequence to abort.
    Tracing the route to 8.8.8.8
    VRF info: (vrf in name/id, vrf out name/id)
      1 10.100.254.2 0 msec 0 msec 10 msec
      2  *  *  *
      3  *  *  *
      4  *  *  *
      5  *  *  *
      6  *  *  *
      7  *  *  *
      8  *  *  *
      9  *  *  *
     10  *  *  *
When I traceroute to the SonicWall I get the same result:
    WAN-ACCESS-STACK#traceroute 10.100.2.254
    Type escape sequence to abort.
    Tracing the route to 10.100.2.254
    VRF info: (vrf in name/id, vrf out name/id)
      1 10.100.254.2 10 msec 0 msec 0 msec
      2  *  *  *
      3  *  *  *
      4  *  *
ACCESS-LAYER-SW1 provides exactly the same output. I am currently confused as to why the ping works from the Core switches but not from the WAN stack and the access-layer switches. Since the Core is the default gateway, it should route this traffic to the appropriate areas of the network. What do you think? Thank you

  • Design question - best way to design a page for layout at runtime

    I have an application that I want to port to ADF faces. The application currently generates HTML for both the layout and for the data at runtime. All of the examples I have researched rely on a layout defined at design time, which will not work in my case as I have no way of knowing exactly what the layout will be until runtime.
    My question is what is the best way to use ADF faces to dynamically build a web page where the page layout can not be known at design time, only at runtime. Is there a way to build the component tree that will generate the HTML at runtime?
    Here are the specifics:
    I have an existing application that generates repeating sections of HTML for a user view -- a set of step by step instructions. Each step in the instructions contains one or more of the following elements
    - a heading
    - some text
    - a table with text in the table cells
    There can be one or more steps in a document. Some documents will have a few steps, some will have many steps.
    I am looking for the best way to generate the repeating HTML steps in a single HTML document.
One idea: can I use a fragment for each step and bind to the data at runtime? If this works, how would I create an iterator or loop that would be able to include the fragment n times to render a single HTML view showing the sequence of all steps?

    Thanks for the suggestions, still don't have a good strategy, however.
    What I am really trying to do here is to generate HTML at runtime. This is easy to do with servlets and or jsp, it looks like it is very difficult or impossible with ADF Faces. Using Java EE technologies, I simply include or exclude the HTML markup as an output of my servlet or jsp. I can use JSTL and a backing bean to do most of this. Unfortunately, in ADF faces, it seems that all controls are defined at design time in XML. Editing the XML files at runtime does not seem to be a logical approach. I have tried adding some controls in the backing bean at runtime, and got them to render. However, I also got an el error at runtime, indicating that the framework cannot find the accessor methods, which of course makes sense, as they don't exist in the backing bean. I don't really see any way to effectively modify the backing bean to accommodate this use case.
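As an aside, the "add controls in the backing bean" approach can be sketched with the plain JSF component API (illustrative names; values are set directly on the components rather than bound through EL, which sidesteps the missing-accessor error described above):
import java.util.List;
import javax.faces.component.html.HtmlOutputText;
import javax.faces.component.html.HtmlPanelGroup;
import javax.faces.context.FacesContext;

public void buildSteps( HtmlPanelGroup parent, List<String> stepHeadings ) {
    FacesContext ctx = FacesContext.getCurrentInstance();
    for ( String heading : stepHeadings ) {
        // create each component programmatically instead of declaring it at design time
        HtmlOutputText text = (HtmlOutputText)
            ctx.getApplication().createComponent( HtmlOutputText.COMPONENT_TYPE );
        text.setValue( heading );          // direct value, no EL accessor required
        parent.getChildren().add( text );  // attach to the component tree at runtime
    }
}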
    Interestingly, a coworker is using .Net to do a similar project. In this project we want to print labels using convention instead of configuration. In the convention, we have a data object that contains named fields. We try to match up the field name in a label template with the field name in the data structure, using reflection. Any fields that cannot be matched up end up as input text boxes with associated prompts. Since we cannot know how many fields will match up at design time, we dynamically build a web page containing one or more text inputs for the user to enter data that cannot be pulled from our data structure. We return this page to the user for data input. In short, using .Net we can create a single tool that can be used to print any current or future label our users may want to create. Apparently, .Net can do this quite easily.
    This is a somewhat common approach, that I have used in a number of businesses. Did Oracle really miss this use case with ADF Faces, or is there some way to approach these types of problems?
    Thanks,
    Steve
