Disabling IPv6 on 2008R2 Domain Controllers... Best Practice?

At the end of last year I had a call with Microsoft Support in which I spoke with a member of the Directory Services team regarding an issue.  The issue was resolved with no further problems, but while conversing with the Technical Support Engineer I queried him on another issue regarding a second copy of our DNS zone in Active Directory.  He looked at it (remoted in via RDP), then looked at my NIC properties and stated that the reason it happened was that we are running IPv6 on our DCs.
I told him we do that on all our servers (leave IPv6 enabled).  He then stated that we should not do that, expanding by saying that "Microsoft is in the process of rewriting documentation as IPv6 is no longer supported on Domain Controllers."
Needless to say, I could not believe this.  I told him how Exchange on an SBS server cannot have IPv6 disabled, as the server will stop booting, but he was very adamant about it; he even put me on hold for 10 minutes, then came back saying he had confirmed this with the "Documentation Team" and that the new Best Practices would be released within the next month. In the meantime he recommended I disable IPv6 on all my DCs. (I work in consulting, so that's a lot of DCs at various business entities.)
I didn't believe him then, and I don't believe him now.  The FAQ linked from http://support.microsoft.com/kb/929852 says that Microsoft does not recommend disabling IPv6.  Of course no documentation ever came out, nor have I found anything to agree with his statements. (We solved the duplicate partition issue ourselves.)
I just wanted to post here and see if anyone else has heard of this; maybe I'm the one who's not up to date on my info.  Does Microsoft plan on reversing course on the IPv6 technology that 2008 and up are built on?  I would think that quite preposterous!
Thanks,
Christopher Long
Science is a way of thinking much more than it is a body of knowledge. -- Carl Sagan

There are cases where you DO WANT to disable IPv6 on a domain controller. 
Example: you have an IPv4 network and do not have IPv6 deployed. In this case, if you are not using IPv6 but leave it enabled, then Windows will assign itself an IPv6 address at random (the IPv6 equivalent of the APIPA process). That IP address can and does change when you reboot the
server... so I bet you see the problem here. 
If you build a domain controller with IPv6 enabled - it will register its IPv6 address in DNS as offering AD services. Then when you reboot that domain controller and that address changes - BOOM. AD comes crashing down. AD relies heavily on DNS. Windows
thinks it's smarter than you and registers its auto-assigned IPv6 address in DNS. Now that's a problem. Particularly because Windows Server 2008+ prefers IPv6 over IPv4. So communication can blow up even if a valid IPv4 network is available. 
So yes - there are instances where you do want to - in fact need to - disable IPv6 on domain controllers. Microsoft's documentation does not reflect this, but it should. At a minimum, if they want you to leave it on, they should at least remind you to set a
static IPv6 address if you're running an IPv4-only network. 
(ask me how I know all this over a beer some time)
I opted to just disable it. Despite MS's documentation warning to the contrary - I've seen no adverse impacts. Exchange, SharePoint, AD, etc. all hum along fine. 
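If you do go down either road (prefer IPv4, or disable IPv6 outright), the knob Microsoft actually documents is the DisabledComponents registry value from KB929852, not unticking the protocol on the NIC. A minimal PowerShell sketch only - values per the KB, and a reboot is required:
# Sketch: set DisabledComponents per KB929852.
# 0x20 = keep IPv6 enabled but prefer IPv4 in prefix policies; 0xFF = disable IPv6 (loopback excepted).
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters' `
    -Name 'DisabledComponents' -PropertyType DWord -Value 0x20 -Force
# The change only takes effect after a reboot.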

Similar Messages

  • Is it possible for Windows 2008R2 Domain Controllers to audit when programs are installed/uninstalled on clients and send alerts to Admins?

    We have a program called Audit Wizard that we used with Windows 2003 that monitored all clients and alerted my department when a program was installed/uninstalled. Since upgrading to Windows Server 2008R2, the program no longer works correctly.
    So we are wondering if it is possible for Windows 2008R2 Domain Controllers (running at a 2008R2 forest and domain level) to audit when programs are installed/uninstalled on clients and send alerts to our Admins?
    If so, How?
    Thanks in advance for your help!
    Pete Macias

    Hi Pete,
    >>So we are wondering if it is possible for Windows 2008R2 Domain Controllers (running at a 2008R2 forest and domain level) to audit when programs are installed/uninstalled on clients and send alerts to our Admins?
    As far as I know, Group Policy can't help us do this. If you are interested, we can take a look at System Center Operations Manager and ask for suggestions in the following SCOM forum.
    Operations Guide for System Center 2012 - Operations Manager
    https://technet.microsoft.com/en-us/library/hh212887.aspx
    System Center Operations Manager forum
    https://social.technet.microsoft.com/Forums/systemcenter/en-US/home?category=systemcenteroperationsmanager
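    That said, if all you need is a record of MSI-based installs/uninstalls, one possible workaround (just a sketch, assuming PowerShell remoting is enabled on the clients and that the applications write the standard Windows Installer events to the Application log) is to pull the MsiInstaller events from the clients and mail them to the admins. The client list, addresses and SMTP server below are placeholders:
    # Sketch: collect Windows Installer install (11707) and uninstall (11724) events from the last day.
    $clients = Get-Content .\clients.txt
    $events = Invoke-Command -ComputerName $clients -ScriptBlock {
        Get-WinEvent -FilterHashtable @{
            LogName      = 'Application'
            ProviderName = 'MsiInstaller'
            Id           = 11707, 11724          # install / uninstall completed
            StartTime    = (Get-Date).AddDays(-1)
        } -ErrorAction SilentlyContinue
    }
    if ($events) {
        Send-MailMessage -To '[email protected]' -From '[email protected]' `
            -SmtpServer 'mail.contoso.com' -Subject 'Software install/uninstall report' `
            -Body ($events | Format-List TimeCreated, MachineName, Message | Out-String)
    }
    Note this only catches MSI-based installs; software deployed without Windows Installer won't show up there.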
    Best regards,
    Frank Shen 
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]

  • Sessions and Controllers best-practice in JSF2

    Hi,
    I've not done web development work since last using Apache Struts for its MVC framework ( about 6 years ago now ). So bear with me if my questions do not make sense:
    SESSIONS
    1) Reading through the JSF2 spec PDF, it mentions state-saving via the StateManager. I presume this is also the same StateManager that is used to store managed beans that are @SessionScoped ?
    2) In relation to session-scoped managed beans, when does a JSF implementation start a new session ? That is, when does an implementation such as Mojarra call ExternalContext.getSession( true ), and when does it simply use an existing session ( calling ExternalContext.getSession( false ) ) ?
    3) In relation to session-scoped managed beans, when does a JSF implementation invalidate a session ? That is, when does the implementation call ExternalContext.invalidateSession() ?
    4) Does ExternalContext.getSession( true ) or ExternalContext.invalidateSession() even make sense if the state-saving mechanism is client ? ( javax.faces.STATE_SAVING_METHOD = client ) Will the JSF implementation ever call these methods if the state-saving mechanism is client ?
    CONTROLLERS
    Most of the JSF2 tutorials that I have been reading online use the same backing bean when performing an action on the form ( when doing a POST or a GET or a post-back to the same page ).
    Is this best practice ? It looks like mixing what should have been a simple POJO with additional logic that should really be in a separate class.
    What have others done ?

    gimbal2 wrote:
    jmsjr wrote:
    EJP wrote:
    It's better because it ensures the bean gets instantiated, stuck in the session (which gets instantiated itself), the bean gets initialised, resource-injected, etc. Your way goes behind the scenes and hopes for the best, and raises complicated questions that don't really need answers.
    Thanks.
    1) But if I only want to check that the bean is in the session, and I do NOT want to create an instance of the bean itself if it does not exist, then I presume I should still use ExternalContext.getSessionMap().get(<beanName>).
    I can't think of a single reason why you would ever need to do that. Checking if a property of a bean in the session is populated, however, is far more reasonable to me.
    In my case, there is an external application ( e.g. a workflow system from a vendor ) that will open a page in the JSF webapp.
    The user is already authenticated in the workflow system, and the external system from the vendor sends along the username and password and some parameters that define what the request is about ( e.g. whether to start a new case, or open an existing case ). There will be no login page in the JSF webapp as the authentication was already done externally by the workflow system.
    Basically, I was thinking of implementing a PhaseListener that would:
    1) Parse the request from the external system, and store the relevant username / password and other information into a bean which I store into the session.
    2) If the request parameter does not exist, then I go look for a bean in the session to see if the actual request came from within the JSF webapp itself ( e.g. if it was not triggered from the external workflow system ).
    3) If this bean does not exist at all ( e.g. It was triggered by something else other than the external workflow system that I was expecting ) then I would prefer that it would avoid all the JSF lifecycle for the current request and immediately do a redirect to a different page ( be it a static HTML, or another JSF page ).
    4) If the bean exist, then proceed with the normal JSF lifecycle.
    I could also, between [1] and [2], do a quick check to verify that the username and password are indeed valid on the external system ( they have a Java API to do that ), and if the credentials are not valid, I would likewise skip the JSF lifecycle for the current request and redirect to a different page.

  • Windows 2012 Domain Controllers and RC4

    We are using QualysGuard as our vulnerability scanner, and we are getting QID 38601, "SSL/TLS use of weak RC4 cipher". While we have created a GPO to disable RC4 on the 2008/2012 servers, we have 4 Domain Controllers that we haven't included in
    the GPO yet. I'm wondering if disabling RC4 on 2012 Domain Controllers will cause problems that I'm not foreseeing right now.
    Does someone out there have any knowledge of this through experience or otherwise?
    Thanks in advance.

     
    Hi,
    As far as I know, disabling RC4 cipher usage in SSL/TLS shouldn't affect Kerberos-related services on a Domain Controller, since the Key Distribution Center (KDC) simply uses the available Kerberos encryption types (such as RC4_HMAC_NT) to encrypt the tickets requested by clients, and those are controlled separately from the Schannel cipher settings.
    More information for you:
    Disabling RC4 Cipher KB2868725 relation to Kerberos
    https://social.technet.microsoft.com/Forums/sqlserver/en-US/836eba80-a070-486d-98b2-69b6325cb40e/disabling-rc4-cipher-kb2868725-relation-to-kerberos?forum=winserversecurity
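    For reference, the GPO approach usually boils down to the Schannel cipher registry entries below; a PowerShell sketch only (test on one DC first, and note a reboot is needed for Schannel to pick up the change):
    # Sketch: disable the RC4 cipher suites for Schannel (SSL/TLS) on this server.
    foreach ($cipher in 'RC4 128/128', 'RC4 56/128', 'RC4 40/128') {
        # The forward slash is part of the key name, so use the .NET registry API
        # rather than the HKLM: drive provider (which treats / as a path separator).
        $key = [Microsoft.Win32.Registry]::LocalMachine.CreateSubKey(
            "SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\$cipher")
        $key.SetValue('Enabled', 0, [Microsoft.Win32.RegistryValueKind]::DWord)
        $key.Close()
    }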
    Best Regards,
    Amy
    Please remember to mark the replies as answers if they help and un-mark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]

  • Running Best Practice Analyzer on remote 2008 R2 domain controllers

    Hello Powershell World,
    I'll start out by first mentioning that I am a PowerShell rookie, so I gladly welcome any input to help me improve or work more efficiently.  Anyway, I recently used PowerShell to run the Best Practices Analyzer for DNS on all of our domain controllers.
     The way I went about it was pretty tedious and inefficient but still got the job done through a series of one-liners and exported the report to a UNC path as follows:
    Enable-PSremoting -Force (I logged into all of the domain controllers individually and ran this before running the one-liners below from my workstation)
    New-PSSession -Name <Session Name> -ComputerName <Hostname>
    Enter-PSSession -Name <Session Name>
    Import-Module bestpractices
    Invoke-BPAModel Microsoft/Windows/DNSServer
    Get-BPAResult Microsoft/Windows/DNSServer | Select ModelId,Severity,Category,Title,Problem,Impact,Resolution,Compliance,Help | Sort Category | Export-CSV \\server\share\BPA_DNS_SERVERNAME.csv
    I'm looking to do this again, but for the Directory Services Best Practices Analyzer, without having to individually enable remoting on the domain controllers, and I'd also like to provide a list of servers for the script to run against. 
    Thanks in advance for all your help!

    What do you mean by "without having to individually enable remoting"?
    You cannot remote without enabling remoting.  You only need to enable remoting once; it is a configuration change.  If you have done it once you do not need to do it again.
    Here is how to run it from a list of DCs:
    $sb = {
        Import-Module BestPractices
        Invoke-BPAModel Microsoft/Windows/DNSServer
        Get-BPAResult Microsoft/Windows/DNSServer |
            Select ModelId,Severity,Category,Title,Problem,Impact,Resolution,Compliance,Help |
            Sort Category |
            Export-Csv "\\server\share\BPA_DNS_$env:COMPUTERNAME.csv"
        Invoke-BPAModel Microsoft/Windows/DirectoryServices
        # etc... (repeat Get-BPAResult/Export-Csv for the DirectoryServices model)
    }
    ForEach ($dc in $listofDCs) {
        Invoke-Command -ScriptBlock $sb -ComputerName $dc
    }
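    If you want to build the list of DCs automatically (assuming the ActiveDirectory module is available where you run this), something like this would do:
    # Sketch: every DC in the current domain
    Import-Module ActiveDirectory
    $listofDCs = Get-ADDomainController -Filter * | Select-Object -ExpandProperty HostName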
    ¯\_(ツ)_/¯

  • Best Practices for Setting up a Windows 2012 R2 STD Domain Controller in a Remote Site

    So I'm looking for an article or writeup similar to the "Adding Domain Controllers in Remote Sites" TechNet article but for Windows Server 2012 STD R2.  Here is my scenario:
    1.  I want to set up the domain controller at Site A, where the primary domain controller is located.  The primary domain controller is Windows Server 2008 R2. 
    2.  Once the DC is set up I plan on leaving it on our network for a few days before shipping it to remote Site B for installation.
    Other key items:
    1.  The remote Site B will have a different IP range than Site A but will be connected to Site A via a single VPN tunnel.  All the DCs that replicate with each other are on the same domain. 
    2.  The 2012 DC that I setup for Site B (same domain in same forest) will be a DHCP, DNS, and WSUS server all replicating to the primary DC at Site A
    Questions:
    1.  What items can I set up while it's at Site A without affecting or conflicting with the existing network and domain controller?  Can I set up a scope once the DHCP role is added? 
    2.  All of our DCs replicate through Sites and Services; do I have to manually add this to our primary DC for the new DC going to remote Site B?  Or does this happen automatically when I promote the DC? 
    All in all, I'm just looking for a list of Best Practices for 2012 or a Step by Step Guide.  Any help would be appreciated. 

    Hi,
    Thanks for your posting.
    When you install AD DS in the hub or staging site, disconnect the installed domain controller, and then ship the computer to the remote site, you are disconnecting a viable domain controller from the replication topology.
    For more detailed information, please refer to:
    Best Practices for Adding Domain Controllers in Remote Sites
    http://technet.microsoft.com/en-us/library/cc794962(v=ws.10).aspx
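    If it helps, the Sites and Services side can also be prepared from PowerShell on a 2012 R2 DC (a sketch only, assuming the ActiveDirectory module is loaded; the site name, subnet and DC name are placeholders for your environment):
    # Sketch: create the remote site and its subnet, then move the new DC into that site.
    New-ADReplicationSite -Name 'SiteB'
    New-ADReplicationSubnet -Name '10.20.30.0/24' -Site 'SiteB'
    Move-ADDirectoryServer -Identity 'NEWDC01' -Site 'SiteB'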
    Regards.
    Vivian Wang

  • Best Practice - IPv6 Turn Off? Windows 2008 (and R2) Servers

    Hi,
    I work for a large organization.  We have close to 10,000 servers globally.  A question has come up if we should disable IPv6 on our Windows 2008 servers.  Our network is not yet capable of using IPv6.
    It seems like you have to go out of your way to disable IPv6; therefore, I would assume that it is best to leave IPv6 turned on.
    Does anyone have any thoughts about this?  It would be most helpful to have links to pros/cons - best practice.
    Thank you.

    No, do not disable IPv6. There is no reason to do so unless you have a very specific application issue which is highly unlikely (although there are a few documented ones out there).
    http://blogs.technet.com/b/ipv6/archive/2007/11/08/disabling-ipv6-doesn-t-help.aspx
    http://blogs.technet.com/b/jlosey/archive/2011/02/02/why-you-should-leave-ipv6-alone.aspx
    Jason | http://myitforum.com/cs2/blogs/jsandys | Twitter @JasonSandys
    Whoever marked this as an "answer" obviously doesn't remember the bad old days of "Everything's on by default! It just works!!" and the billions of dollars in lost productivity that ensued from the hundreds (thousands?) of exploits that followed this
    painfully short-sighted decision. You'd think Microsoft would have learned their lesson from the "IIS is on by default!" debacle, but that doesn't seem to be the case.
    Best practices for system administration say "turn off the services and protocols you don't use." As such,
    IPv6 should ALWAYS be disabled on servers and business workstations if you're not currently using it, just like any other protocol or service you haven't implemented or aren't using. Not using FTP? Don't install or enable
    an FTP server. Not using IPv6? Why would you have IPv6 enabled? Just because IPv6 is the newest, shiniest thing on the block doesn't mean you have to use it, and it definitely shouldn't require you to start hacking about in the registry to turn it off.
    The only exception to this rule would be laptops or mobile devices where the user might (conceivably) take them into an IPv6 environment. Even then, though, you have to balance the "convenience" of not having to switch on IPv6 against the risk that
    your unconfigured IPv6 stack might be exploitable by nefarious parties. Even so, the fact that almost nobody has switched their internal networks to IPv6 means that turning it off on laptops is still probably a "good idea." 
    For example, it might make sense to leave IPv6 enabled on my laptop's wireless interface--I never know when I'll walk into an environment that uses IPv6 and want to use my wireless. But in the office when I'm hard-connected? We don't use IPv6, so it needs
    to be "off." Even just a bit of code in the Windows firewall that could say "reject IPv6 traffic on x interface" would be a big improvement. An absolute home run would be the addition of logic that allows Windows to dynamically disable
    IPv6 when there aren't any NICs or WNICs that are allowed to use it.
    Finally, none of this should be construed to mean I'm advising against implementing IPv6, because I'm not. However, like any major technology project, you should plan carefully, making sure you understand ALL of the consequences (not just the marketing hype)
    and making sure you'll be able to administer the changed environment as well as you administer the current one. You definitely should not let a vendor (of all things) force the decision to use it upon you because it's "on by default."

  • What is the best practice and Microsoft best recommended procedure of placing "FSMO Roles on Primary Domain Controller (PDC) and Additional Domain Controller (ADC)"??

    Hi,
    I have Windows Server 2008 Enterprise  and have
    2 Domain Controllers in my Company:
    Primary Domain Controller (PDC)
    Additional Domain Controller (ADC)
    My (PDC) was down due to Hardware failure, but somehow I got a chance to get it up and transferred
    (5) FSMO Roles from (PDC) to (ADC).
    Now my (PDC) is rectified and UP with same configurations and settings.  (I did not install new OS or Domain Controller in existing PDC Server).
    Finally, I want to move the (FSMO Roles) back from
    (ADC) to (PDC) to get my (PDC) up and operational as Primary. 
    (Before Disaster my PDC had 5 FSMO Roles).
    Here I want to know the best practice and Microsoft best recommended procedure for the placement of “FSMO Roles both on (PDC) and (ADC)” ?
    In case if Primary (DC) fails then automatically other Additional (DC) should take care without any problem in live environment.
    Example like (FSMO Roles Distribution between both Servers) should be……. ???
    Primary Domain Controller (PDC) Should contains:????
    Schema Master
    Domain Naming Master
    Additional Domain Controller (ADC) Should contains:????
    RID
    PDC Emulator
    Infrastructure Master
    Please let me know the best practice and Microsoft best recommended procedure for the placement of “FSMO Roles.
    I will be waiting for your valuable comments.
    Regards,
    Muhammad Daud

    Here I want to know the best practice
    and Microsoft best recommended procedure for the placement of “FSMO Roles both on (PDC) and (ADC)” ?
    There is a good article I would like to share with you: http://oreilly.com/pub/a/windows/2004/06/15/fsmo.html
    For me, I do not really see a need to have FSMO roles on multiple servers in your case. I would recommend making it simple and have a single DC holding all the FSMO roles.
    In case if
    Primary (DC) fails then automatically other Additional (DC) should take care without any problem in live environment.
    No. This is not true. Each FSMO role is unique and if a DC fails, FSMO roles will not be automatically transferred.
    There are two approaches that can be followed when an FSMO role holder is down:
    1. If the DC can be recovered quickly, then I would recommend taking no action.
    2. If the DC will be down for a long time or cannot be recovered, then I would recommend that you seize the FSMO roles and do a metadata cleanup.
    Attention! For (2), the old FSMO holder should never be brought up and online again if the FSMO roles were seized. Otherwise, your AD may be facing huge impacts and side effects.
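    For the actual move back to the original DC, you can use ntdsutil or, where the Active Directory PowerShell module is available (it needs a 2008 R2 or later DC, or the AD Management Gateway Service, in the domain), something like the sketch below; the DC name is a placeholder:
    # Sketch: transfer all five FSMO roles to the recovered DC.
    # Add -Force only when seizing from a dead role holder - and never bring the old
    # holder back online afterwards.
    Move-ADDirectoryServerOperationMasterRole -Identity 'PDC01' -OperationMasterRole `
        SchemaMaster, DomainNamingMaster, PDCEmulator, RIDMaster, InfrastructureMaster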
    This posting is provided "AS IS" with no warranties or guarantees , and confers no rights.

  • Best practices of having a different external/internal domain

    In the midst of migrating from a joint Windows/Mac server environment to a completely Apple one. Previously, DNS was hosted on the Windows machine using the companyname.local internal domain. When we set up the Apple server, our Apple contact created a new internal domain, called companyname.ltd. (Supposedly there was some conflict in having a 10.5 server be part of a .local domain - either way, it was no big deal.) Companyname.net is our website.
    The goal now is to have the Leopard server run everything - DNS, Kerio mailserver, website, the works. In setting up the DNS on the Mac server this go around, we were advised to just use companyname.net as the internal domain name instead of .ltd or .local or something like that. I happen to like having a separate local domain just for clarity's sake - users know if they are internal/external, but supposedly the Kerio setup would respond much better to just the one companyname.net.
    So after all that - what's the best practice of what I should do? Is it OK to have companyname.net be the local domain, even when companyname.net is also the address of our external website? Or should the local domain be something different from that public URL? Or does it really not matter one way or the other? I've been running companyname.net as the local domain for a week or so now with pretty much no issues; I'd just hate to hit a point where something breaks long term because of an initial setup mixup.
    Thanks in advance for any advice you all can offer!

    Part of this is personal preference, but there are some technical elements to it, too.
    You may find that your decision is swayed by the number of mobile users in your network. If your internal machines are all stationary then it doesn't matter if they're configured for companyname.local (or any other internal-only domain), but if you're a mobile user (e.g. on a laptop that you take to/from work/home/clients/starbucks, etc.) then you'll find it a huge PITA to have to reconfigure things like your mail client to get mail from mail.companyname.local when you're in the office but mail.companyname.net when you're outside.
    For this reason we opted to use the same domain name internally as well as externally. Everyone can set their mail client (and other apps) to use one hostname and DNS controls where they go - e.g. if they're in the office or on VPN, the office DNS server hands out the internal address of the mail server, but if they're remote they get the public address.
    For the most part, users don't know the difference - most of them wouldn't know how to tell anyway - and using one domain name puts the onus on the network administrator to make sure it's correct which IMHO certainly raises the chance of it working correctly when compared to hoping/expecting/praying that all company employees understand your network and know which server name to use when.
    Now one of the downsides of this is that you need to maintain two copies of your companyname.net domain zone data - one for the internal view and one for external (but that's not much more effort than maintaining companyname.net and companyname.local) and make sure you edit the right one.
    It also means you cannot use Apple's Server Admin to manage your DNS on a single machine - Server Admin only understands one view (either internal or external, but not both at the same time). If you have two DNS servers (one for public use and one for internal-only use) then that's not so much of an issue.
    Of course, you can always drive DNS manually by editing the zone files directly.
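    Under the hood, Leopard Server's DNS service is BIND, so a split-horizon setup ends up looking something like this in named.conf (a sketch only; the internal network range and zone file names are placeholders):
    view "internal" {
        match-clients { 192.168.1.0/24; localhost; };
        zone "companyname.net" {
            type master;
            file "db.companyname.net.internal";
        };
    };
    view "external" {
        match-clients { any; };
        zone "companyname.net" {
            type master;
            file "db.companyname.net.external";
        };
    };
    (Bear in mind that once you use views, every zone on that server has to live inside a view.)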

  • DNS best practice in local domain network of Windows 2012?

    Hello.
    We have a small local domain network in our office. Which is the best practice for DNS: to set up a DNS server in our network that forwards to public DNS servers, or to use public DNS directly on all computers, including the
    server?
    Thanks.
    Selim

    Hi Selim,
    Definitely the first option - "set up a DNS server in our network forwarding to public DNSs" - with all computers, including the server, configured to use the local DNS.
    An even better practice would be to have this local DNS point to a standalone DNS server in a DMZ which queries the public DNS.
    Using a centralized DNS utilizes the DNS cache to answer similar queries, resulting in faster response times and less internet usage for repeated queries.
    An additional DNS layer also helps protect your internal DNS data from attackers out on the internet.
    Using the internal DNS on all the computers will also let you host intranet websites and reach them directly. Moreover, when you are on an AD domain, the computers' DNS must be configured properly for AD authentication to happen.
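    On a Windows 2012 DNS server the forwarders can be set from PowerShell; a sketch only - the addresses below are documentation placeholders, so substitute your DMZ or public resolvers:
    # Sketch: point the internal DNS server at upstream resolvers (forwarders are tried before root hints).
    Add-DnsServerForwarder -IPAddress 192.0.2.10, 192.0.2.11 -PassThru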
    Regards,
    Satyajit
    Please “Vote As Helpful”
    if you find my contribution useful or “Mark As Answer” if it does answer your question. That will encourage me - and others - to take time out to help you.

  • Best Practice for VPC Domain failover with One M2 per N7K switch and 2 sups

    I Have been testing some failover scenarios with 4 nexus 7000 switches with an M2 and an F2 card in each. Each Nexus has two supervisor modules.
    I have 3 VDC's Admin, F2 and M2
    all ports in the M2 are in the M2 VDC and all ports on the F2 are in the F2 VDC.
    All vPC's are connected on the M2 cards, configured in the M2 VDC
    We have 2 Nexus representing each "site"
    In one site we have a vPC domain "100"
    The vPC Peer link is connected on ports E1/3 and E1/4 in Port channel 100
    The peer-keepalive is configured to use the management ports. This is patched from both Sups into our 3750s. (This will eventually be on an out-of-band management switch.)
    Please see the diagram.
    There are 2 vPC's 1&2 connected at each site which represent the virtual port channels that connect back to a pair of 3750X's (the layer 2 switch icons in the diagram.)
    There is also the third vPC that connects the 4 Nexus's together. (po172)
    We are stretching vlan 900 across the "sites" and would like to keep spanning tree out of this as much as we can, and minimise outages based on link failures, module failures, switch failures, sup failures etc.
    ONLY the management vlans (100,101) are allowed on the port-channel between the 3750's, so vlan 900 spanning tree shouldn't have to make this decision.
    We are only concerned about layer two for this part of the testing.
    As we are connecting the vPC peer link to only one module in each switch (a single M2), we have configured object tracking as follows:
    n7k-1(config)# track 1 interface ethernet 1/1 line-protocol
    n7k-1(config)# track 2 interface ethernet 1/2 line-protocol
    n7k-1(config)# track 5 interface ethernet 1/5 line-protocol
    n7k-1(config)# track 101 list boolean OR
    n7k-1(config-track)# object 1
    n7k-1(config-track)# object 2
    n7k-1(config-track)# object 5
    n7k-1(config-track)# end
    n7k-1(config)# vpc domain 101
    n7k-1(config-vpc-domain)# track 101
    The other site is the same, just 100 instead of 101.
    We are not tracking port-channel 101, nor the member interfaces of this port channel, as this is the peer link, and apparently tracking upstream interfaces and the peer link is only necessary when you have ONE link and one module per switch.
    As the interfaces we are tracking are member ports of a vPC, is this a chicken-and-egg scenario when checking whether these 3 interfaces are up? Or is line-protocol purely layer 1 - so that the vPC isn't downing these member ports at layer 2 when it sees a local vPC domain failure, which would make the track fail?
    I see most people are monitoring upstream layer 3 ports that connect back to a core. What about what we are doing - monitoring upstream (the 3750's) and downstream layer 2 (the other site) interfaces that are part of the very vPC we are trying to protect?
    We wanted all 3 of these to be down, for example if the local M2 card failed, so that the keepalive would send the message to the remote peer to take over.
    What are the best practices here? Which objects should we be tracking? Should we also track the peer-link Port-channel 101?
    We saw minimal outages using this design. When reloading the M2 modules, usually 1-3 pings were lost between the laptops in the different sites across the stretched vlan. Obviously no outages when breaking any link in a vPC.
    Any wisdom would be greatly appreciated.
    Nick

    Nick,
    I was not talking about the mgmt0 interface. The vlan that you are testing will have a link blocked between the two 3750 port-channels if the root is on the Nexus vPC pair.
    Logically your topology is like this:
        |                             |
        |        Nexus Pair           |
    3750-1-----------------------3750-2
    Since you have this triangle setup, one of the links will be in a blocking state for any vlan configured on these devices.
    When you are talking about vPC and L3, are you talking about L3 routing protocols or just inter-VLAN routing?
    Inter-VLAN routing is fine. Running L3 routing protocols over the peer-link and forming an adjacency with an upstream router over L2 links is not recommended. The following link should give you an idea of what I am talking about here:
    http://bradhedlund.com/2010/12/16/routing-over-nexus-7000-vpc-peer-link-yes-and-no/
    HSRP is fine.
    As mentioned, the purpose of the tracking feature is to avoid black-holing traffic. It completely depends on your network setup. I don't think you would need to track all the interfaces.
    JayaKrishna

  • Webaccess Domain Best Practice

    With GroupWise 8, best practice was to put the WebAccess domain on the same server as WebAccess. While designing our GW 2014 system, security is much more important. In an effort to make GroupWise more secure, I don't think I like the idea any longer of putting a secondary domain on a host that has direct internet access.
    What are other people doing?

    Thanks
    >>> On 2/2/2015 at 3:56 PM, magic31<[email protected]> wrote:
    kwhite;2345909 Wrote:
    > With GroupWise 8, best practice was to put the Webaccess domain on the
    > same server as Webaccess. While designing our GW 2014 system security
    > is much more important. In efforts to make GroupWise more secure, I
    > don't think I like the idea any longer putting a secondary domain on a
    > host that has direct internet access.
    >
    > What are other people doing?
    In short, no need for a secondary domain on the WebAccess server. I
    haven't done so since GroupWise 2012. As a note, it was not a necessity
    with GroupWise 8 and lower, as you could install the WebAccess agent on
    a server that was running on the LAN, and only install the
    WebApplication on the server in the DMZ.
    One main thing that has changed with WebAccess, as of GroupWise 2012, is
    that the WebAccess application doesn't make use of gwinter anymore
    (meaning there's no more Web Access agent component in 2012 and 2014).
    It's now a standalone (client) component that talks directly to the
    POA(s).
    So all you need is a SLES or Windows server in the DMZ and install and
    configure the WebAccess component on that.
    There are also no more eDir counterparts for WebAccess. All that is
    needed is a port opened to the POA's (for SOAP, which defaults to 7191)
    and since 2014 also port 8500 needs to be opened from POA(s) to the
    server running WebAccess. 8500 is needed for the auto refresh
    functionality that's new in WebAccess 2014.
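    If the WebAccess box in the DMZ happens to be a Windows server, opening that inbound auto-refresh port might look like the sketch below (on SLES you would do the equivalent with the SuSE firewall/iptables):
    # Sketch: allow the POA(s) to reach the WebAccess server on TCP 8500.
    New-NetFirewallRule -DisplayName 'GroupWise WebAccess auto-refresh (8500)' `
        -Direction Inbound -Protocol TCP -LocalPort 8500 -Action Allow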
    Cheers,
    Willem

  • What is the best practice way of stopping a sub-domain from being indexed?

    Hi there
    I notice that a client site is being indexed as both xxx.com.au [their primary domain] as well as xxx.PARTNERDOMAIN.com.au.
    I have Googled quite a bit on the subject and have browsed the forums, but can't seem to find any specific best practice approach to only having the primary domain indexed.
    One method that seems to be the most recommended is having a second robots.txt file for the sub-domain xxx.PARTNERDOMAIN.com.au with Disallow: /
    Does anyone have a definitive recommendation?
    Many thanks
    Gavin

    Sorry, I assumed they were two different sites - they are the same "content", just two different URLs?
    Canonical links will help, but they won't stop or remove you from being indexed; they only add higher index weight to the canonically linked URL. Plus, only search engines that support that meta tag will honour it.
    You essentially need two robots.txt files to do this effectively, or add the META TAG if you can split the sites somehow.
    There is a more complex way: you could host the second domain somewhere else and use .htaccess or similar to do a reverse proxy to the main site, pulling the contents in real time - all except the robots.txt file. This way you could have two sites with only one to update, but still have two robots.txt files.
    http://en.wikipedia.org/wiki/Reverse_proxy
    I've done this for a few sites; you are essentially adding a middle man, so it will be a tad slower depending on how far apart the two servers are, but it is like having a CNAME domain with total control.
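    For completeness, the sub-domain's own robots.txt would just be:
    User-agent: *
    Disallow: /
    And if you go the reverse-proxy route on Apache, the robots.txt exception is a couple of mod_rewrite lines in the partner sub-domain's .htaccess (a sketch only; it assumes mod_rewrite and mod_proxy are enabled, and the domain names are from the post above):
    RewriteEngine On
    # Serve this sub-domain's own robots.txt locally
    RewriteRule ^robots\.txt$ - [L]
    # Proxy everything else to the primary domain
    RewriteRule ^(.*)$ http://xxx.com.au/$1 [P]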

  • Joining a Mac to a Windows domain - what are the best practices?

    Hi,
    I work in an MNC environment and we have been using Windows-based systems; 95% of our servers are on Windows and, as of now, 100% of our users are on Windows. Now we are looking forward to giving our management some Macs. I wanted to know what would be the best practice to follow in order to add Macs to our existing domains and use our AD. At the same time, we have Windows-based file servers which are mapped for the user via Windows scripts in the user profile.
    Thanks & Regards,
    Aj_Mac

    1) Use section name instead of Title View to name your report. This way sections can be collapsed and user can still see report name.
    2) Enable alternate coloring in tables and pivots for easy readability and set table and pivot widths to 100% (for reports in dashboards) to reduce white space and achieve a more "professional look."
    3) Use column selectors and view selectors to reduce the width of reports and reduce the amount of columns user sees to a "practical minimum."

  • Best practices for disabling an employee's account, but leaving the mailbox available for others while not accepting messages

    I'm sure that other organizations have some policy for this. In our case, we want to keep the mailbox available for others to still access, but disable the user account and remove it from OWA.
    In this case, I've disabled the AD object, disabled OWA from the features, and set the mailbox to only receive emails from a dummy mailbox (so that no new emails are accepted).
    This all works fine and senders receive an NDR that their mail was rejected; however, I'd also like to set a friendlier custom NDR telling senders to call the office instead when they attempt to send email to that recipient.
    What would the best practices or suggestions be for this behavior?

    Hi,
    According to your description, the user object in AD has been disabled.
    In this case, the mailbox most likely cannot be accessed, so an Out of Office (OOF) reply may not help you.
    If I misunderstand your meaning, please feel free to let me know.
    Instead, we can depend on a transport rule with the condition "The recipient is" and the action
    "send rejection message to sender with enhanced status code":
    http://technet.microsoft.com/en-us/library/bb123506(v=exchg.141).aspx
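    The same rule can be created from the Exchange Management Shell; a minimal sketch, with the address, wording and status code as placeholders:
    # Sketch: reject mail sent to the departed employee with a friendlier message.
    New-TransportRule -Name 'Departed employee - reject with notice' `
        -SentTo '[email protected]' `
        -RejectMessageReasonText 'This mailbox is no longer monitored. Please call the office on 555-0100.' `
        -RejectMessageEnhancedStatusCode '5.7.1'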
    Thanks,
    Angela Shi
    TechNet Community Support

    In thumbnail view all pix are visible, but if I go to edit view, some are shown only with black. Some of these pix are in a folder witch I synchronize with my iPhone. On iPhone all pix are visible. This phenomen is not a new one, but now the number o