WebAccess Domain Best Practice

With GroupWise 8, the best practice was to put the WebAccess domain on the same server as WebAccess. As we design our GroupWise 2014 system, security is a much higher priority. In the interest of making GroupWise more secure, I no longer like the idea of putting a secondary domain on a host that has direct internet access.
What are other people doing?

Thanks
>>> On 2/2/2015 at 3:56 PM, magic31<[email protected]> wrote:
kwhite;2345909 Wrote:
> With GroupWise 8, the best practice was to put the WebAccess domain
> on the same server as WebAccess. As we design our GroupWise 2014
> system, security is a much higher priority. In the interest of making
> GroupWise more secure, I no longer like the idea of putting a
> secondary domain on a host that has direct internet access.
>
> What are other people doing?
In short, no need for a secondary domain on the WebAccess server. I
haven't done so since GroupWise 2012. As a note, it was not a necessity
with GroupWise 8 and lower, as you could install the WebAccess agent on
a server that was running on the LAN, and only install the
WebApplication on the server in the DMZ.
One main thing that has changed with WebAccess, as of GroupWise 2012, is
that the WebAccess application doesn't make use of gwinter anymore
(meaning there's no more Web Access agent component in 2012 and 2014).
It's now a standalone (client) component that talks directly to the
POA(s).
So all you need is a SLES or Windows server in the DMZ with the
WebAccess component installed and configured on it.
There are also no longer any eDirectory counterparts for WebAccess.
All that is needed is a port opened from the WebAccess server to the
POA(s) (for SOAP, which defaults to 7191), and since 2014 port 8500
also needs to be open from the POA(s) to the server running WebAccess.
Port 8500 is needed for the auto-refresh functionality that's new in
WebAccess 2014.
Cheers,
Willem
Knowledge Partner (voluntary sysop)
magic31's Profile: https://forums.novell.com/member.php?userid=2303
View this thread: https://forums.novell.com/showthread.php?t=481627
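
For anyone setting this up, a quick way to sanity-check the two ports Willem mentions is a simple TCP test from each side. This is only a minimal sketch; the host names are hypothetical, and Test-NetConnection assumes the box you test from is Windows 8/Server 2012 or later:

# From the WebAccess server in the DMZ: is the POA's SOAP port reachable?
Test-NetConnection -ComputerName poa1.example.local -Port 7191

# From the POA host back to the WebAccess server: is 8500 open for auto refresh?
Test-NetConnection -ComputerName webaccess.example.local -Port 8500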

Similar Messages

  • Parent child domain best practice

    Currently we have multiple locations; each location has its own AD and DNS, and they are not connected to each other.
    Mostly, the users at these locations do not log in to or access resources of the other locations. The few users who need to log in to or access resources at multiple locations have one account per location. This was fine while we had very few users who
    needed multiple accounts, but now that their number is growing it is creating problems for many of the users.
    We are planning to redo our AD infrastructure by installing new ADs on Windows 2012 R2 servers. We would like to set up one parent domain and multiple child domains (one per location).
    Users created in the parent domain should be able to log in to and access resources at each location, whereas users of a child domain should only be able to log in to and access resources at their own location.
    Can someone please recommend the best way to do this?
    SKR

    If you are planning on redoing your AD infrastructure, do not create additional AD domains, but rather CONSOLIDATE what you already have into one AD forest with one AD domain. Create OUs to manage objects differently or to allow different teams their own delegation,
    and create AD sites/subnets to optimize replication and authentication.
    To consolidate AD domains see:
    http://jorgequestforknowledge.wordpress.com/2006/12/27/migrating-stuff-with-admtv3/
    http://jorgequestforknowledge.wordpress.com/2014/06/19/microsoft-released-an-admt-version-to-also-support-w2k12r2/
    Cheers,
    Jorge de Almeida Pinto
    Principal Consultant | MVP Directory Services | IAM Technologies
    DISCLAIMER: This post is provided "AS IS" with no warranties of any kind, either expressed or implied, and confers no rights! Always evaluate/test yourself before using/implementing this!
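
    If you go the consolidation route described above, the per-location structure can be carved out with OUs instead of separate domains. A minimal sketch, assuming the ActiveDirectory PowerShell module, a hypothetical domain corp.example.com, and made-up location names:

    Import-Module ActiveDirectory

    # One OU per location under a common parent; delegation is then granted per OU
    New-ADOrganizationalUnit -Name "Locations" -Path "DC=corp,DC=example,DC=com"
    "London","Mumbai","Singapore" | ForEach-Object {
        New-ADOrganizationalUnit -Name $_ -Path "OU=Locations,DC=corp,DC=example,DC=com"
    }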

  • Best practices of having a different external/internal domain

    In the midst of migrating from a joint Windows/Mac server environment to a completely Apple one. Previously, DNS was hosted on the Windows machine using the companyname.local internal domain. When we set up the Apple server, our Apple contact created a new internal domain, called companyname.ltd. (Supposedly there was some conflict in having a 10.5 server be part of a .local domain; either way, it wasn't a problem.) Companyname.net is our website.
    The goal now is to have the Leopard server run everything - DNS, Kerio mailserver, website, the works. In setting up the DNS on the Mac server this go around, we were advised to just use companyname.net as the internal domain name instead of .ltd or .local or something like that. I happen to like having a separate local domain just for clarity's sake - users know if they are internal/external, but supposedly the Kerio setup would respond much better to just the one companyname.net.
    So after all that - what's the best practice of what I should do? Is it ok to have companyname.net be the local domain, even when companyname.net is also the address to our external website? Or should the local domain be something different from that public URL? Or does it really not matter one way or the other? I've been running companyname.net as the local domain for a week or so now with pretty much no issues, I'd just hate to hit a point where something breaks long term because of an initial setup mixup.
    Thanks in advance for any advice you all can offer!

    Part of this is personal preference, but there are some technical elements to it, too.
    You may find that your decision is swayed by the number of mobile users in your network. If your internal machines are all stationary then it doesn't matter if they're configured for companyname.local (or any other internal-only domain), but if you're a mobile user (e.g. on a laptop that you take to/from work/home/clients/starbucks, etc.) then you'll find it a huge PITA to have to reconfigure things like your mail client to get mail from mail.companyname.local when you're in the office but mail.companyname.net when you're outside.
    For this reason we opted to use the same domain name internally as well as externally. Everyone can set their mail client (and other apps) to use one hostname and DNS controls where they go - e.g. if they're in the office or on VPN, the office DNS server hands out the internal address of the mail server, but if they're remote they get the public address.
    For the most part, users don't know the difference - most of them wouldn't know how to tell anyway - and using one domain name puts the onus on the network administrator to make sure it's correct which IMHO certainly raises the chance of it working correctly when compared to hoping/expecting/praying that all company employees understand your network and know which server name to use when.
    Now one of the downsides of this is that you need to maintain two copies of your companyname.net domain zone data - one for the internal view and one for external (but that's not much more effort than maintaining companyname.net and companyname.local) and make sure you edit the right one.
    It also means you cannot use Apple's Server Admin to manage your DNS on a single machine - Server Admin only understands one view (either internal or external, but not both at the same time). If you have two DNS servers (one for public use and one for internal-only use) then that's not so much of an issue.
    Of course, you can always drive DNS manually by editing the zone files directly.
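
    For reference, the "two copies of the zone" setup described above is typically done with BIND views when you drive DNS by hand. This is only a rough sketch; the subnet and zone file names are invented:

    view "internal" {
        match-clients { 192.168.1.0/24; localhost; };   // LAN/VPN clients (example subnet)
        zone "companyname.net" {
            type master;
            file "db.companyname.net.internal";         // records point at internal IPs
        };
    };
    view "external" {
        match-clients { any; };                         // everyone else sees the public data
        zone "companyname.net" {
            type master;
            file "db.companyname.net.external";         // records point at public IPs
        };
    };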

  • Best Practices for Setting up a Windows 2012 R2 STD Domain Controller in a Remote Site

    So I'm looking for an article or writeup similar to the "Adding Domain Controllers in Remote Sites" TechNet article, but for Windows Server 2012 R2 Standard.  Here is my scenario:
    1.  I want to set up the domain controller at Site A, where the primary domain controller is located.  The primary domain controller is Windows Server 2008 R2.
    2.  Once the DC is set up, I plan on leaving it on our network for a few days before shipping it to remote Site B for installation.
    Other key items:
    1.  The remote Site B will have a different IP range than Site A but will be connected to Site A via a single VPN tunnel.  All the DCs that replicate with each other are on the same domain. 
    2.  The 2012 DC that I setup for Site B (same domain in same forest) will be a DHCP, DNS, and WSUS server all replicating to the primary DC at Site A
    Questions:
    1.  What items can I set up while it's at Site A without affecting or conflicting with the existing network and domain controller?  Can I set up a scope once the DHCP role is added?
    2.  All of our DCs replicate through Sites and Services; do I have to manually add this to our primary DC for the new DC going to remote Site B, or does this happen automatically when I promote the DC?
    All in all, I'm just looking for a list of best practices for 2012 or a step-by-step guide.  Any help would be appreciated.

    Hi,
    Thanks for your posting.
    When you install AD DS in the hub or staging site, disconnect the installed domain controller, and then ship the computer to the remote site, you are disconnecting a viable domain controller from the replication topology.
    For more detailed information, please refer to:
    Best Practices for Adding Domain Controllers in Remote Sites
    http://technet.microsoft.com/en-us/library/cc794962(v=ws.10).aspx
    Regards.
    Vivian Wang
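
    On the Sites and Services question: the site, subnet, and site link for Site B can be staged ahead of time so the new DC lands in the right site once promoted and shipped. A rough sketch using the ActiveDirectory module from a 2012 R2 box; the site names, subnet, and link settings are placeholders:

    Import-Module ActiveDirectory

    New-ADReplicationSite -Name "SiteB"
    New-ADReplicationSubnet -Name "10.2.0.0/24" -Site "SiteB"    # Site B's IP range
    New-ADReplicationSiteLink -Name "SiteA-SiteB" `
        -SitesIncluded "SiteA","SiteB" -Cost 100 -ReplicationFrequencyInMinutes 180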

  • DNS best practice in local domain network of Windows 2012?

    Hello.
    We have a small local domain network in our office. Which is the better practice for DNS: to set up a DNS server in our network that forwards to public DNS servers, or to use public DNS directly on all computers, including the
    server?
    Thanks.
    Selim

    Hi Selim,
    Definitely the first option, "set up a DNS in our network forwarding to public DNSs", with all computers, including the server, configured to use the local DNS.
    An even better practice would be for this local DNS to point to a standalone DNS server in the DMZ, which queries the public DNS.
    Using a centralized DNS utilizes the DNS cache to answer similar queries, resulting in faster response times and less internet usage for repeated queries.
    Also, an additional DNS layer helps protect your internal DNS data from attackers out on the internet.
    Using internal DNS on all the computers will also let you host intranet websites and access them directly. Moreover, when you are on an AD domain, the computers' DNS needs to be configured properly for AD authentication to happen.
    Regards,
    Satyajit
    Please “Vote As Helpful”
    if you find my contribution useful or “Mark As Answer” if it does answer your question. That will encourage me - and others - to take time out to help you.
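
    On Windows Server 2012 the forwarding described above is a one-liner with the DnsServer module; a small sketch (the forwarder addresses are just examples; substitute your ISP's or DMZ resolver's addresses):

    # Forward anything the local DNS server cannot answer itself
    Add-DnsServerForwarder -IPAddress 8.8.8.8, 8.8.4.4

    # Verify the forwarder list
    Get-DnsServerForwarder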

  • Best Practice for VPC Domain failover with One M2 per N7K switch and 2 sups

    I Have been testing some failover scenarios with 4 nexus 7000 switches with an M2 and an F2 card in each. Each Nexus has two supervisor modules.
    I have 3 VDC's Admin, F2 and M2
    all ports in the M2 are in the M2 VDC and all ports on the F2 are in the F2 VDC.
    All vPC's are connected on the M2 cards, configured in the M2 VDC
    We have 2 Nexus representing each "site"
    In one site we have a vPC domain "100"
    The vPC Peer link is connected on ports E1/3 and E1/4 in Port channel 100
    The peer-keepalive is configured to use the management ports. This is patched from both Sups into our 3750s. (This will eventually be on an out-of-band management switch.)
    Please see the diagram.
    There are 2 vPC's 1&2 connected at each site which represent the virtual port channels that connect back to a pair of 3750X's (the layer 2 switch icons in the diagram.)
    There is also the third vPC that connects the 4 Nexus's together. (po172)
    We are stretching vlan 900 across the "sites" and would like to keep spanning tree out of this as much as we can, and minimise outages based on link failures, module failures, switch failures, sup failures etc..
    ONLY the management vlans (100,101) are allowed on the port-channel between the 3750's, so vlan 900 spanning tree shouldn't have to make this decision.
    We are only concerned about layer two for this part of the testing.
    As we are connecting the vPC peer link to only one module in each switch (a single M2), we have configured object tracking as follows:
    n7k-1(config)#track 1 interface ethernet 1/1 line-protocol
    n7k-1(config)#track 2 interface ethernet 1/2 line-protocol
    n7k-1(config)#track 5 interface ethernet 1/5 line-protocol
    track 101 list boolean OR
    n7k-1(config-track)# object 1
    n7k-1(config-track)# object 2
    n7k-1(config-track)# object 5
    n7k-1(config-track)# end
    n7k-1(config)# vpc domain 101
    n7k-1(config-vpc-domain)# track 101
    The other site is the same, just 100 instead of 101.
    We are not tracking port-channel 101, nor the member interfaces of that port channel, as this is the peer link, and apparently tracking upstream interfaces and the peer link is only necessary when you have ONE link and one module per switch.
    As the interfaces we are tracking are member ports of a vPC, is this a chicken-and-egg scenario when seeing if these 3 interfaces are up? Or is line-protocol purely layer 1, so that the vPC isn't downing these member ports at layer 2 when it sees a local vPC domain failure, causing the track to fail?
    I see most people are monitoring upstream layer 3 ports that connect back to a core. What about what we are doing, monitoring upstream (the 3750's) and downstream layer 2 (the other site) ports that are part of the very vPC we are trying to protect?
    We wanted all 3 of these to be down, for example if the local M2 card failed, so that the keepalive would send the message to the remote peer to take over.
    What are the best practices here? Which objects should we be tracking? Should we also track the peer-link port-channel 101?
    We saw minimal outages using this design: when reloading the M2 modules, usually 1-3 pings lost between the laptops in the different sites across the stretched vlan, and obviously no outages when breaking any link in a vPC.
    Any wisdom would be greatly appreciated.
    Nick

    Nick,
    I was not talking about the mgmt0 interface. The vlan that you are testing will have a link blocked between the two 3750 port-channel if the root is on the nexus vPC pair.
    Logically your topology is like this:
             Nexus Pair
            /          \
    3750-1--------------3750-2
    Since you have this triangle setup one of the links will be in blocking state for any vlan configured on these devices.
    When you are talking about vPC and L3, are you talking about L3 routing protocols or just intervlan routing?
    Intervlan routing is fine. Running L3 routing protocols over the peer-link and forming an adjacency with an upstream router over L2 links is not recommended. The following link should give you an idea of what I am talking about here:
    http://bradhedlund.com/2010/12/16/routing-over-nexus-7000-vpc-peer-link-yes-and-no/
    HSRP is fine.
    As mentioned, the purpose of the tracking feature is to avoid black-holing traffic. It completely depends on your network setup; I don't think you would need to track all the interfaces.
    JayaKrishna

  • Disabling IPv6 on 2008R2 Domain Controllers... Best Practice?

    At the end of last year I had a call with Microsoft Support in which I spoke with a member of the Directory Services team regarding an issue.  The issue was resolved with no further problems, but while conversing with the Technical Support Engineer
    I queried him on another issue regarding a second copy of our DNS zone in Active Directory.  He looked at it (remoted in via RDP) then looked at my NIC properties and stated that the reason it happened is because we are running IPv6 on our DCs. 
    I told him we do that on all our servers. (leave IPv6 enabled.)  He then stated that we should not do that, expanding by saying that "Microsoft is in the process of rewriting documentation as IPv6 is no longer supported on Domain Controllers."    
    Needless to say I could not believe this.  I told him how Exchange on an SBS server cannot have IPv6 disabled as the server will stop booting, but he was very adamant about it; he even put me on hold for 10 minutes then came back saying he confirmed
    that this is the case and spoke with the "Documentation Team" and the new Best Practices would be released within the next month. In the meantime he recommended I disable IPv6 on all my DCs. (I work in Consulting so that's a lot of DCs at various different
    business entities.)
    I didn't believe him then, and I don't believe him now.  The FAQ linked through http://support.microsoft.com/kb/929852 says that Microsoft does not recommend disabling IPv6.  Of course, no documentation ever came out, nor have I
    found anything to agree with his statements. (We solved the duplicate partition issue ourselves.)
    I just wanted to post here and see if anyone else has heard of this, maybe I'm the one not up and up on my info.  Has or does Microsoft plan on reversing course on the new IPv6 technology that 2008 and up are built on?  I would think that quite
    preposterous!
    Thanks,
    Christopher Long
    Science is a way of thinking much more than it is a body of knowledge. -- Carl Sagan

    There are cases where you DO WANT to disable IPv6 on a domain controller. 
    Example: you have an IPv4 network and do not have IPv6 deployed. In this case, if you are not using IPv6 but leave it enabled, then Windows will assign itself an IPv6 address automatically via autoconfiguration. That address can and does change when you reboot the
    server... so I bet you see the problem here.
    If you build a domain controller with IPv6 enabled, it will register its IPv6 address in DNS as offering AD services. Then when you reboot that domain controller and that address changes - BOOM. AD comes crashing down. AD relies heavily on DNS. Windows
    thinks it's smarter than you and registers the autoconfigured IPv6 address in DNS. Now that's a problem, particularly because Windows Server 2008+ prefers IPv6 over IPv4. So communication can blow up even if a valid IPv4 network is available.
    So yes - there are instances where you do want to - in fact need to - disable IPv6 on domain controllers. Microsoft's documentation does not reflect this but it should. At a minimum if they want you to leave it on they should at least remind you to set a
    static IPv6 address if you're running an IPv4 network. 
    (ask me how I know all this over a beer some time)
    I opted to just disable it. Despite MS's documentation warning to the contrary, I've seen no adverse impacts. Exchange, SharePoint, AD, etc. all hum along fine.
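
    Both of the options above (a static IPv6 address, or disabling IPv6 the supported way) can be scripted. A rough sketch only: the adapter name and address are hypothetical, New-NetIPAddress needs Server 2012/Windows 8 or later, and the registry value follows KB929852 and requires a reboot:

    # Option 1: give the DC a static IPv6 address instead of relying on autoconfiguration
    New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress "fd00:10:1::10" -PrefixLength 64

    # Option 2: disable IPv6 components via the registry (per KB929852), then reboot
    New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters" `
        -Name "DisabledComponents" -PropertyType DWord -Value 0xFF -Force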

  • Running Best Practice Analyzer on remote 2008 R2 domain controllers

    Hello Powershell World,
    I'll start out by first mentioning that I am a PowerShell rookie, so I gladly welcome any input to help me improve or work more efficiently.  Anyway, I recently used PowerShell to run the Best Practices Analyzer for DNS on all of our domain controllers.
     The way I went about it was pretty tedious and inefficient, but it still got the job done through a series of one-liners that exported the report to a UNC path as follows:
    Enable-PSremoting -Force (I logged into all of the domain controllers individually and ran this before running the one-liners below from my workstation)
    New-PSSession -Name <Session Name> -ComputerName <Hostname>
    Enter-PSSession -Name <Session Name>
    Import-Module bestpractices
    Invoke-BPAModel Microsoft/Windows/DNSServer
    Get-BPAResult Microsoft/Windows/DNSServer | Select ModelId,Severity,Category,Title,Problem,Impact,Resolution,Compliance,Help | Sort Category | Export-CSV \\server\share\BPA_DNS_SERVERNAME.csv
    I'm looking to do this again, but for the Directory Services Best Practices Analyzer, without having to individually enable remoting on the domain controllers, and also to provide a list of servers for the script to run against.
    Thanks in advance for all your help!

    What do you mean by "without having to individually enable remoting "?
    You cannot remote without enabling remoting.  You only need to enable remoting once.  It is a configuration change.  If you have done it once you do not need to do it again.
    Here is how to run it from a list of DCs.
    $sb = {
        Import-Module BestPractices
        Invoke-BPAModel Microsoft/Windows/DNSServer
        Get-BPAResult Microsoft/Windows/DNSServer |
            Select ModelId,Severity,Category,Title,Problem,Impact,Resolution,Compliance,Help |
            Sort Category |
            Export-CSV "\\server\share\BPA_DNS_$env:COMPUTERNAME.csv"
        Invoke-BPAModel Microsoft/Windows/DirectoryServices
        # etc...
    }
    # One way to build the list of DCs (assumes the ActiveDirectory module is available)
    $listofDCs = Get-ADDomainController -Filter * | Select-Object -ExpandProperty HostName
    ForEach($dc in $listofDCs){
        Invoke-Command -ScriptBlock $sb -ComputerName $dc
    }
    ¯\_(ツ)_/¯

  • What is the best practice way of stopping a sub-domain from being indexed?

    Hi there
    I notice that a client site is being indexed as both xxx.com.au [their primary domain] as well as xxx.PARTNERDOMAIN.com.au.
    I have Googled quite a bit on the subject and have browsed the forums, but can't seem to find any specific best practice approach to only having the primary domain indexed.
    One method that seems to be the most recommended is having a separate robots.txt for the sub-domain xxx.PARTNERDOMAIN.com.au with Disallow: /
    Does anyone have a definitive recommendation?
    Many thanks
    Gavin

    Sorry I assumed they were two different sites, they are the same "content" just two different URLs?
    Canonical links will help, but they won't stop you from being indexed or remove existing entries; they only add higher index weight to the canonically linked URL. Plus, only search engines that support that meta tag will honour it.
    You essentially need two robots.txt files to do this effectively, or add the META tag if you can split the sites somehow.
    There is a more complex way, you could host the second domain somewhere else, use htaccess or similar to do a reverse proxy to the main site to pull the contents in realtime, all except the robots.txt file. This way you could have two sites with only 1 to update but still have two robots.txt's
    http://en.wikipedia.org/wiki/Reverse_proxy
    I've done this for a few sites, you are essentially adding a middle man, it will be a tad slower depending on how far the two servers are apart, but it is like having a cname domain but with total control.
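
    For completeness, the "disallow everything" robots.txt described above, served only for xxx.PARTNERDOMAIN.com.au, is simply:

    User-agent: *
    Disallow: /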

  • Joining mac to windows domain what are the best practice?

    Hi,
    I work in an MNC environment and we have been using Windows-based systems; 95% of our servers are on Windows and, as of now, 100% of our users are on Windows. Now we are looking to give our management some Macs. I wanted to know what the best practice would be for adding Macs to our existing domains and using our AD. At the same time, we have Windows-based file servers which are mapped for users via Windows scripts in the user profile.
    Thanks & Regards,
    Aj_Mac

    1) Use section name instead of Title View to name your report. This way sections can be collapsed and user can still see report name.
    2) Enable alternate coloring in tables and pivots for easy readablity and set table and pivot widths to 100% (for reports in dashboards) to reduce white space and achieve a more "professional look."
    3) Use column selectors and view selectors to reduce the width of reports and reduce the amount of columns user sees to a "practical minimum."

  • What is the best practice and Microsoft best recommended procedure of placing "FSMO Roles on Primary Domain Controller (PDC) and Additional Domain Controller (ADC)"??

    Hi,
    I have Windows Server 2008 Enterprise  and have
    2 Domain Controllers in my Company:
    Primary Domain Controller (PDC)
    Additional Domain Controller (ADC)
    My (PDC) was down due to Hardware failure, but somehow I got a chance to get it up and transferred
    (5) FSMO Roles from (PDC) to (ADC).
    Now my (PDC) is rectified and UP with same configurations and settings.  (I did not install new OS or Domain Controller in existing PDC Server).
    Finally, I want to move the (FSMO Roles) back from
    (ADC) to (PDC), so that my (PDC) is up and operational as primary.
    (Before Disaster my PDC had 5 FSMO Roles).
    Here I want to know the best practice and Microsoft best recommended procedure for the placement of “FSMO Roles both on (PDC) and (ADC)” ?
    In case if Primary (DC) fails then automatically other Additional (DC) should take care without any problem in live environment.
    Example like (FSMO Roles Distribution between both Servers) should be……. ???
    Primary Domain Controller (PDC) Should contains:????
    Schema Master
    Domain Naming Master
    Additional Domain Controller (ADC) Should contains:????
    RID
    PDC Emulator
    Infrastructure Master
    Please let me know the best practice and Microsoft best recommended procedure for the placement of “FSMO Roles.
    I will be waiting for your valuable comments.
    Regards,
    Muhammad Daud

    Here I want to know the best practice
    and Microsoft best recommended procedure for the placement of “FSMO Roles both on (PDC) and (ADC)” ?
    There is a good article I would like to share with you:http://oreilly.com/pub/a/windows/2004/06/15/fsmo.html
    For me, I do not really see a need to have FSMO roles on multiple servers in your case. I would recommend making it simple and have a single DC holding all the FSMO roles.
    In case if
    Primary (DC) fails then automatically other Additional (DC) should take care without any problem in live environment.
    No. This is not true. Each FSMO role is unique and if a DC fails, FSMO roles will not be automatically transferred.
    There are two approaches that can be followed when an FSMO role holder is down:
    (1) If the DC can be recovered quickly, then I would recommend taking no action.
    (2) If the DC will be down for a long time or cannot be recovered, then I would recommend that you seize the FSMO roles and do a metadata cleanup.
    Attention! For (2), the old FSMO holder should never be brought back up and online again if the FSMO roles were seized. Otherwise, your AD may face huge impacts and side effects.
    This posting is provided "AS IS" with no warranties or guarantees , and confers no rights.
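
    If the roles do need to come back to the recovered DC, a graceful transfer (not a seizure) can be done from any box with the Active Directory PowerShell module, or with ntdsutil. A sketch with a hypothetical server name:

    Import-Module ActiveDirectory

    # Transfer all five FSMO roles to the recovered DC; -Force would be added only to seize from a dead holder
    Move-ADDirectoryServerOperationMasterRole -Identity "PDC01" `
        -OperationMasterRole SchemaMaster, DomainNamingMaster, PDCEmulator, RIDMaster, InfrastructureMaster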

  • Best practices for setting up users on a small office network?

    Hello,
    I am setting up a small office and am wondering what the best practices/steps are to setup/manage the admin, user logins and sharing privileges for the below setup:
    Users: 5 users on new iMacs (x3) and upgraded G4s (x2)
    Video Editing Suite: Want to connect a new iMac and a Mac Pro, on an open login (multiple users)
    All machines are to be able to connect to the network, peripherals and external hard drive. Also, I would like to setup drop boxes as well to easily share files between the computers (I was thinking of using the external harddrive for this).
    Thank you,


  • Printing best practices

    I've been asking a bunch of questions recently on TS printing and realize that I should just start from scratch. Since I'm not sure what best practices are for this environment, I would like to get everyone's opinion. 
    This environment:
    20 Server 2008 R2 terminal servers and approximately 200 fat clients (mixed XP and 7). Currently, all network printers are installed on each TS individually (not shared). We also have about 10 USB printers that redirect. Our network printers are set up on 5 different local
    servers since we have multiple locations. We print both from local desktops and terminal servers.
    What we need is for all network printers to be on each server as they are currently, but I'd like to eliminate the need to manage each one on each and every server whenever there is a change. Our current environment was set up by previous IT personnel and
    I'm not sure if it's optimal.
    I understand there are multiple ways to deploy printers but I don't know what is best for our environment. I've tried Print Management but I need to be able to set preferences. I've tried GPP in Computer Configuration but it doesn't seem to work (possibly because
    of the current set up). I would like to know how others would manage the printers in this environment, even if I need to delete everything and start over. I am also inexperienced with servers and group policy so I will ask follow up questions to most responses.
    Sorry in advance!
    Edit:To be more clear about my scope of knowledge- I know where the Active Directory and Group Policy Management reside. I have modified existing group policies
    but not made new ones. Since all of our changes always apply to all users/terminal servers/roaming profiles, I've never needed to create OU's or use any kind of item-level targeting so I am not familiar with those.
    Also, I would greatly appreciate not being redirected to another site/forum for answers. I've read hundreds
    and am getting mixed responses since I'm not sure what is appropriate for this particular environment. That and because I need layman's terms :) Thank you!

    Hey Lynnette
    I read through some of the other questions you were asking. 
    Deployed Printers from Print Management is only for adding printer connections; it's not for adding local printers, and Deployed Printers does not support setting a default printer.
    Group Policy Preferences supports adding local printers and connections.  It can be used to set the default, but I'm not sure if that applies to connections or local printers.
    If the end result is to have the same configuration of local printers on multiple machines, I suggest using \windows\system32\spool\tools\PrintBRM.exe to backup the local printers from your Primary machine, then restore to all the other targets. 
    You can create a scheduled task to perform the backup and restores.
    If you are looking to add printer connections in the "Computer" context (all users logging on will get the connection to the shared printer), you can achieve this using the local machine policy or using a domain policy that only applies to a specific
    set of computers.  But once again no default is set but it's fairly easy to set the default with printui.exe or prnmngr.vbs both included with the operating system.
    Alan Morris Windows Printing Team
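
    A rough sketch of the PrintBRM backup/restore mentioned above, wrapped in PowerShell so it can be dropped into a scheduled task (the server names and paths are made up):

    $printbrm = Join-Path $env:windir 'System32\spool\tools\PrintBrm.exe'

    # Back up queues, drivers and ports from the "primary" print server
    & $printbrm -b -s \\PRINTSRV01 -f C:\PrintBackup\printers.printerExport

    # Restore the same configuration onto one of the terminal servers
    & $printbrm -r -s \\TS01 -f C:\PrintBackup\printers.printerExport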

  • Best practice for integrating a 3 point metro-e in to our network.

    Hello,
    We have just started to integrate a new 3-point metro-e WAN connection to our main school office. We are moving from point-to-point T-1s to 10 Mb metro-e. At the main office we have a 50 Mb connection going out to 3 other sites at 10 Mb each. For two of the remote sites we have purchased new routers, which should be straightforward configurations. We are having an issue connecting the main office with the 3rd site.
    At the main office we have a Catalyst 4006, and at the 3rd site we are trying to connect to a Catalyst 4503.
    I have attached configurations from both the main office and the 3rd remote site, as well as a basic diagram of how everything physically connects. These configurations are not working; we feel that it is a gateway-type problem, but have reached no great solutions. We have tried posting to a different forum, but so far have been unable to find a solution that helps.
    The problem I am having is on the remote side. I can reach the remote Catalyst from the main site, but I cannot reach the devices on the other side of the remote Catalyst; however, the remote Catalyst can see devices on its side as well as devices at the main site.
    We have also tried trunking the ports on both sides and using encapsulation dot1q, but when we do this the 3rd site is able to pick up a DHCP address from the main office, and we do not feel that is correct. It works, but isn't this creating one large broadcast domain?
    If you have any questions or need further configuration data, please let me know.
    The previous connection was a T1 through a 2620, but this is not compatible with metro-e, so we are trying to connect directly through the Catalysts.
    The other two connection points will connect through Cisco routers that are compatible with metro-e, so I don't think I'll have problems with those sites.
    Any and all help is greatly welcome, as this is our 1st metro-e project and we want to make sure we are following best practices for this type of integration.
    Thank you in advance for your help.
    Jeff

    Jeff, from your config it seems your main site and remote site are not EIGRP-adjacent.
    Try adding a network statement for the 171.0 link and forming a neighborship between the main and remote site for the L3 routing to work.
    Upon this you should be able to reach the remote site hosts.
    HTH-Cheers,
    Swaroop

  • Best Practices when replacing 2003 server R2 with a new domainname and server 2012 r2 on same lan network

    I have a small office (10 computers with five users) that have a Windows 2003 server that has a corrupted AD. Their 2003 server R2 is essentially a file server and provides authentication.  They purchased a new Dell 2012 R2 server.  
    It seems easier to me to just create a new domain (using their public domain name).  
    But I need as little office downtime as possible. Therefore, I would like to promote this server to its new domain on the same LAN as the current domain server.  I plan to manually replicate the users and folder permissions.  Once done, I plan to
    remove the old server from the network and join the office computers to the new domain.
    They are also running a legacy application that will require some tweaking by another tech. I have been hoping to prep the new domain prior to the legacy tech arriving.  That is why I would like both domains to co-exist temporarily. I have read
    that the major issues involved in this kind of temporary configuration will be related to setting up DNS.  They are using the firewall to provide DHCP.
    Are there any best practices documents for this situation?
    Or is there a better or simpler strategy?
    Gary Metz

    I followed the two links below. I think it should be the same even though the links describe 2008 R2 migration steps.
    http://kpytko.pl/active-directory-domain-services/adding-first-windows-server-2008-r2-domain-controller-within-windows-2003-network/
    http://blog.zwiegnet.com/windows-server/migrate-server-2003-to-2008r2-active-directory-and-fsmo-roles/
    Hope this helps!
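
    If you follow the migration route from those links rather than standing up a new domain, the 2012 R2 box is joined to the existing domain and then promoted with the ADDSDeployment cmdlets; a minimal sketch (the domain name and credential are placeholders, and you will be prompted for the DSRM password):

    # On the new 2012 R2 server, already joined to the existing domain
    Install-WindowsFeature AD-Domain-Services -IncludeManagementTools

    Install-ADDSDomainController -DomainName "oldoffice.local" `
        -InstallDns -Credential (Get-Credential OLDOFFICE\Administrator)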
