Best practices for having different external/internal domains

We're in the midst of migrating from a joint Windows/Mac server environment to a completely Apple one. Previously, DNS was hosted on the Windows machine using the companyname.local internal domain. When we set up the Apple server, our Apple contact created a new internal domain called companyname.ltd. (Supposedly there was some conflict in having a 10.5 server be part of a .local domain; either way, it wasn't a problem.) Companyname.net is our website.
The goal now is to have the Leopard server run everything - DNS, the Kerio mail server, the website, the works. In setting up DNS on the Mac server this go-around, we were advised to just use companyname.net as the internal domain name instead of .ltd or .local or something like that. I happen to like having a separate local domain just for clarity's sake - users know whether they are internal or external - but supposedly the Kerio setup would respond much better to just the one companyname.net.
So after all that - what's the best practice here? Is it OK to have companyname.net be the local domain, even when companyname.net is also the address of our external website? Or should the local domain be something different from that public URL? Or does it really not matter one way or the other? I've been running companyname.net as the local domain for a week or so now with pretty much no issues; I'd just hate to hit a point where something breaks long-term because of an initial setup mix-up.
Thanks in advance for any advice you all can offer!

Part of this is personal preference, but there are some technical elements to it, too.
You may find that your decision is swayed by the number of mobile users on your network. If your internal machines are all stationary, then it doesn't matter if they're configured for companyname.local (or any other internal-only domain). But if you're a mobile user (e.g. on a laptop that you take between work, home, clients, Starbucks, etc.), you'll find it a huge PITA to have to reconfigure things like your mail client to get mail from mail.companyname.local when you're in the office but mail.companyname.net when you're outside.
For this reason we opted to use the same domain name internally as well as externally. Everyone can set their mail client (and other apps) to use one hostname and DNS controls where they go - e.g. if they're in the office or on VPN, the office DNS server hands out the internal address of the mail server, but if they're remote they get the public address.
For the most part, users don't know the difference - most of them wouldn't know how to tell anyway - and using one domain name puts the onus on the network administrator to make sure it's correct, which IMHO certainly raises the chance of it working correctly compared to hoping/expecting/praying that all company employees understand your network and know which server name to use when.
One of the downsides of this is that you need to maintain two copies of your companyname.net zone data - one for the internal view and one for the external view (though that's not much more effort than maintaining companyname.net and companyname.local) - and make sure you edit the right one.
It also means you cannot use Apple's Server Admin to manage your DNS on a single machine - Server Admin only understands one view (either internal or external, but not both at the same time). If you have two DNS servers (one for public use and one for internal-only use) then that's not so much of an issue.
Of course, you can always drive DNS manually by editing the zone files directly.
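To make that concrete, here is a minimal sketch of what the split-horizon setup can look like in BIND's named.conf (which is what the DNS service on OS X Server drives under the hood); the network range, zone file names and comments are placeholders to adapt to your own setup:

    // Internal clients get one copy of the zone, everyone else gets the other
    acl "internal-nets" { 192.168.1.0/24; 127.0.0.1; };

    view "internal" {
        match-clients { "internal-nets"; };
        zone "companyname.net" {
            type master;
            file "db.companyname.net.internal";   // mail/www records point at private IPs
        };
    };

    view "external" {
        match-clients { any; };
        zone "companyname.net" {
            type master;
            file "db.companyname.net.external";   // mail/www records point at public IPs
        };
    };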

Similar Messages

  • Incoming Sharepoint Mail: External/Internal Domain Environment

    We have set up incoming SharePoint 2010 mail on both the SharePoint side and the Exchange 2007 side (send connector setup), and we have no problem delivering mail from Exchange to SharePoint.
    We have an external/internal domain setup.
    Our Windows DNS does not own the "A" record for our external domain name @external.Domain.com.
    All mailboxes/mail-enabled contacts/UDGs/USGs are stamped with our external domain name @external.Domain.com, because within Exchange an accepted domain (internal relay type) was created for @external.Domain.com.
    SharePoint is part of our internal Windows domain. SharePoint mail-enabled contacts are created as
    [email protected]
    Per the incoming mail technote, we created an Active Directory OU for the SharePoint mail-enabled contacts. These contacts replicated to the Exchange Recipient Management Console and then, of course, to the Outlook Global Address List; however, the contacts do not have a routable address because they are stamped @windows.Internal.Domain.com. If we add @external.Domain.com, which is a routable address, mail is delivered to the SharePoint site.
    Q. One thing we do not want is for our Outlook Global Address List to show two SMTP domain names (@windows.Internal.Domain.com and @external.Domain.com); that would be too confusing for our users. Also, changing each mail-enabled contact to the routable address (@external.Domain.com) by hand would be a nightmare. Any suggestions or assistance would be greatly appreciated.

    It doesn't need to be a nightmare; simply create a Recipient Update Policy and apply it to the OU containing your contacts/DLs. You also need to configure SharePoint to use external.domain.com instead of the internal domain.
    See http://thesharepointfarm.com/2013/02/a-practical-guide-to-implementing-incoming-email-using-the-sharepoint-directory-management-service/
    for more info on how this is done.
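    For what it's worth, in Exchange 2007 the stamping itself is handled by e-mail address policies (the successor to the Recipient Update Service). A rough Exchange Management Shell sketch is below; the policy name, OU path and address template are placeholders, and the parameter names and template syntax should be verified against your Exchange version:

        New-EmailAddressPolicy -Name "SharePoint Contacts" `
            -RecipientContainer "windows.Internal.Domain.com/SharePoint Contacts" `
            -IncludedRecipients MailContacts `
            -EnabledEmailAddressTemplates "SMTP:%[email protected]" `
            -Priority 1

        # Apply the policy so the contacts get re-stamped with the external address
        Update-EmailAddressPolicy -Identity "SharePoint Contacts"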
    Trevor Seward
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Best practice for having separate clone data for development purposes?

    Hi
    I am on a hosted Apex environment
    I have a workspace containing two instances/copies of the application: DEV and PROD.
    I would like to be able to develop functionality and data in/with the DEV instance and then promote it to PROD.
    I gather that I can copy pages from DEV to PROD via Create -> New page as copy -> Page in another application.
    But I don't know how I can mimic this process with database objects, e.g. if I want to create a new table or manipulate the data in an existing table in a DEV environment before implementing it in the PROD environment.
    Ideally this would be done in such a way that minimises changing table names etc when elevating pages from DEV to PROD.
    Would it be possible to create a clone schema that could contain the same tables (with the same names) as PROD?
    Any tips, best practices appreciated :)
    Thanks

    Hi,
    Ideally you should have a little more separation between your dev and prod environments. At a minimum you should have separate workspaces, each addressing separate schemas. Apex can be a little difficult if you want to move individual Apex application objects, such as pages, between applications (a much-requested improvement), but this can be overcome by exporting and importing the whole application. You should also have some form of version control/backup of export files.
    As far as database objects go (tables etc.), if you have TNS access to your hosted environment, then you can use SQL Developer to develop, maintain and synchronize your development and production schemas, and objects in the different environments should have identical names. If you don't have that access, then you can use the Apex SQL Workshop features, but these are a little more cumbersome than a tool like SQL Developer. Once again, scripts for creating and upgrading your database schemas should be kept under some sort of version control.
    All of this supposes your hosting solution allows more than one workspace and schema; if not, you may have to incur the cost of a second environment. One other option would be to do your development locally in an instance of Oracle XE, ensuring you don't have any version conflicts between the different database object features and the Apex version.
    I hope this helps.
    Regards
    Andre

  • Need advise for best practice when using Toplink with external transaction

    Hello;
    Our project is trying to switch from TopLink-controlled transactions to external transactions, so we can perform database operations and JMS operations within a single transaction.
    Some of our team tried out the TopLink support for external transactions and came up with the following initial recommendations.
    Since we are not familiar with external transactions, I would like members of this forum, and experts, to comment on whether these recommendations are valid and in line with best practice. And for folks that have done this in their projects, what did you do?
    Any help will be most appreciated.
    Data Access Objects must be enhanced to support reading from a TopLink unit of work when using an external transaction controller. Developers must consider what impact a global transaction will have on the methods in their data access objects (DAOs).
    The following findSomeObject method is representative of a "finder" in the current implementation of our DAOs. It is not especially designed to execute in the context of a global transaction, nor to read from a unit of work.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        ClientSession clientSession = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            obj = (SomeObject) clientSession.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            clientSession.release();
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    However, after making the changes shown below, the findSomeObject method will read from a unit of work while executing in the context of a global transaction.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        Session session = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            if (TransactionController.getInstance().useExternalTransactionControl()) {
                session = session.getActiveUnitOfWork();
                readObjectQuery.conformResultsInUnitOfWork();
            }
            obj = (SomeObject) session.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            if (TransactionController.getInstance().notUseExternalTransactionControl()) {
                session.release();
            }
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    When getting the TOPLink client session and reading from the unit of work in the context of a global transaction, new objects need to be cached.
    public UnitOfWork getUnitOfWork(ILoginUser aUser) throws DataAccessException {
        ClientSession clientSession = getClientSession(aUser);
        UnitOfWork uow = null;
        if (TransactionController.getInstance().useExternalTransactionControl()) {
            uow = clientSession.getActiveUnitOfWork();
            uow.setShouldNewObjectsBeCached(true);
        } else {
            uow = clientSession.acquireUnitOfWork();
        }
        return uow;
    }

    As it generally is with this sort of question, there is no exact answer.
    The only required update when working with an external transaction is that getActiveUnitOfWork() is called instead of acquireUnitOfWork(); other than that, the semantics of the calls and when you use a UnitOfWork are still dependent on the requirements of your application. For instance, I noticed that originally the findSomeObject method did not perform a transactional read (no UnitOfWork). Have the requirements for this method changed? If they have not, then there is still no need to perform a transactional read, and the method would not need to change.
    As for the requirement that new objects be cached, this is only required if you are not conforming the transactional queries, and it adds a slight performance boost for find-by-primary-key queries. In order to use this, however, objects must be assigned primary keys by the application before they are registered in the UnitOfWork.
    --Gordon

  • What is the best practice for connecting to different schemas?

    Hi all,
    We are porting an application from SQL Server to oracle and would like to know what the best practices are in oracle for user connections to an Oracle instance.
    More or less the question could be put like this:
    1) The equivalent of a SQL Server Database in Oracle is a Schema. (more or less)
    2) A specific application has its own schema where it keeps all related objects (tables, etc.)
    3) In SQL Server you grant access to the Database and its objects (Tables, etc) to all users of the application.
    4) In Oracle do you grant access to the Schema and its objects (Tables, etc) to all users of the application also? Or do all users log
    in as the schema owner?
    So in Oracle if there existed [SchemaApplication].[table1], how would [userChris] and [userDave] query [SchemaApplication].[table1]?
    Would Chris and Dave log in as [userChris] and [userDave], or would they normally log in as [userApplication]?
    Finally, is it good practice to log in as a unique user, e.g. [userChris], and then issue the
    alter session set current_schema = schemaApplication;
    command to change the way references to tables are interpreted?

    We are porting an application from SQL Server to oracle and would like to know what the best practices are in oracle for user connections to an Oracle instance.
    More or less the question could be put like this:
    1) The equivalent of a SQL Server Database in Oracle is a Schema. (more or less)
    2) A specific application has it's own schema where it keeps all related objects (Tables, etc)
    3) In SQL Server you grant access to the Database and its objects (Tables, etc) to all users of the application.
    4) In Oracle do you grant access to the Schema and its objects (Tables, etc) to all users of the application also? Or do all users log
    in as the schema owner?
    There are a couple of ways to implement this.
    Case 1.
    Create different roles, such as APP_ROLE and READONLY_ROLE. Create public synonyms for the objects in the SchemaApplication schema. Grant these roles to a single user, say appUser, which is different from your SchemaApplication user. Use appUser to connect to the application, and for individual users like userChris and userDave provide another layer of security in the application itself. Say userDave is allowed to deal only with cash-related transactions; then allow him to open only those screens that relate to cash transactions.
    Case 2.
    Create public synonyms and grant privileges on the tables from SchemaApplication directly to the different users (say userChris and userDave).
    So in Oracle if there existed [SchemaApplication].[table1], how would [userChris] and [userDave] query [SchemaApplication].[table1]?
    This is resolved by the public synonym. There are private synonyms as well, which you can use instead, but in that case you have to create a private synonym for each of the users.
    Would Chris and Dave log in as [userChris] and [userDave], or would they normally log in as [userApplication]?
    I would suggest you connect either with a separate application user (Case 1) or with each user's own account in the database (Case 2).
    Finally, is it good practice to log in as a unique user, e.g. [userChris], and then issue the
    alter session set current_schema = schemaApplication;
    command to change the way references to tables are interpreted?
    No, it is not good practice to allow users to log in to the database as the application owner. The public/private synonym can be used to resolve the schema.object reference instead. For example, if SchemaApplication has a table T, then you can create a public synonym with 'CREATE PUBLIC SYNONYM T FOR SchemaApplication.T'; and you can then refer to this table as T from any other schema (user).
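    As a rough SQL sketch of the two cases (using the example names from this thread):

        -- Shared setup: expose the application table through a public synonym
        CREATE PUBLIC SYNONYM table1 FOR SchemaApplication.table1;

        -- Case 1: one application account (appUser) holds the privileges via a role
        CREATE ROLE app_role;
        GRANT SELECT, INSERT, UPDATE, DELETE ON SchemaApplication.table1 TO app_role;
        GRANT app_role TO appUser;

        -- Case 2: grant directly to the individual database users instead
        GRANT SELECT, INSERT, UPDATE, DELETE ON SchemaApplication.table1 TO userChris;
        GRANT SELECT, INSERT, UPDATE, DELETE ON SchemaApplication.table1 TO userDave;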
    HTH
    Virendra

  • Best practice when deleting from different table simultainiously

    Greetings people,
    I have two tables joined with a foreign key constraint. They are written at the same time to keep the constraint happy, but I don't know the best way of deleting from them as far as rowsets and data models are concerned. Are there "gotchas", like needing to delete the row in the foreign key (child) table first?
    I am reading thread:http://swforum.sun.com/jive/thread.jspa?forumID=123&threadID=49918
    and getting my head around it.
    Is there a tutorial which deals with this topic?
    I was wondering the best way to go.
    Many Thanks.
    Phil
    is there a "best practice" method for

    Without knowing many details about your specifics... I can suggest a few alternatives -
    You can definitely build coordination of the deletes into your application: automatically delete any FK-related entries prior to deleting the master, or refuse to delete the master until the user explicitly deletes the children; it just depends on how you want to manage it.
    Also, in many databases you can build the cascading delete rules into your database tables themselves, so that when you delete the master the deletes cascade automatically. This is typically declared when creating the FK constraint (delete cascade and update cascade rules), as in the sketch below.
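    For example, in generic SQL (the exact syntax varies a little by database):

        -- Deleting a parent row automatically removes its child rows
        CREATE TABLE parent (
            id INTEGER PRIMARY KEY
        );

        CREATE TABLE child (
            id        INTEGER PRIMARY KEY,
            parent_id INTEGER NOT NULL,
            CONSTRAINT fk_child_parent FOREIGN KEY (parent_id)
                REFERENCES parent (id) ON DELETE CASCADE
        );

        DELETE FROM parent WHERE id = 1;  -- matching rows in child are removed too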
    hth,
    v

  • Best Practice???  Change from internal boot disk to external disk

    I have a mini running 10.5.6 Server and it currently boots off its internal disk. I was hoping to get some feedback/input from others on a good process to convert the system from its internal boot disk to an external boot disk (FireWire).
    I want minimum downtime during the conversion and, of course, a complete snapshot on the new boot disk. Lastly, I do not have a local keyboard and console on the system, although I could connect one if that seemed much easier.
    In general, I am thinking of the following:
    1) Boot into the Leopard Server CD.
    2) Use Disk Utility to restore from the internal boot disk to the new external boot disk.
    3) Choose the new boot disk as the startup disk.
    4) Reboot onto new disk.
    Is Disk Utility the best bet? Meaning, will it work this way if the drives are different sizes?
    Should I try to clone the disk from the internal boot disk (assuming I shut off services first) using SuperDuper or Carbon Copy Cloner? I believe they do not copy over all log files, etc.
    Or does anyone have a quick overview of a methodology they have used in the past, or a suggestion for a better process than the ones I described?
    In summary:
    1) From the CD, clone using Disk Utility and change the boot disk
    2) From the running OS, clone using SuperDuper or CCC and then reboot onto the new disk
    3) Something else??
    Thank you in advance.

    For cloning the machine, your approach is fine. Disk Utility is fine and booting from the CD is the best method. Simply use the Restore function.
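    If you prefer the command line, asr (the tool behind Disk Utility's Restore tab) can do the same copy from Terminal on the install DVD; a rough sketch, with the volume names as placeholders:

        # Block-copy the internal volume onto the external FireWire volume
        sudo asr restore --source /Volumes/Server_HD --target /Volumes/External_HD --erase

    Afterwards, set the external volume as the startup disk and reboot, as in your steps 3 and 4.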
    But why on earth would you want to boot from an external FireWire drive? First, there is the issue of speed. You have a mini; let's assume it is a one-generation-back Intel. It has an internal SATA drive on a 1.5 Gbps connection. You want to move that to a 400 Mbps FireWire bus? Next, beyond the speed issue, you have a persistence issue. You are taking the boot volume and moving it onto a transitory bus. One of FireWire's greatest strengths is easy connection/disconnection; persistence is not a strong point.
    Next, if your plan is to move the boot volume to some form of a Firewire RAID, then you are even penalizing yourself more. The mini has one FireWire port. If you are using two devices and creating a mirror RAID, then you need to daisy chain. Talk about points of failure, asynchronous startup time, bus blocking, etc. Not wise.
    Plus, I cannot count how many external FireWire devices have burnt up in the effort to have small footprints. LaCie and the "let's put a drive in a metal case with no fan" approach = melted drive. Western Digital and LaCie with the "let's make a completely un-reusable external power brick that either breaks in a small breeze or falls out when the heavy guy walks by the server" approach.
    If you are looking at a Firewire RAID enclosure, then you are missing the objective of speed as you are limited by the 400 bus. It is nice to say that you have a four drive SATA 2 RAID case running a RAID 5, but you are defeating the purpose of why you bought the raid. The RAID 5 can provide an exponential increase in I/O performance. But that goes out the window because of the slow bus.
    If your argument is that "this is a server and my bottleneck is Ethernet," that too does not hold up; you are likely running on a gigabit network.
    For system details check out http://developer.apple.com/documentation/HardwareDrivers/Conceptual/Macmini_0602/Articles/architecture.html.
    Take this with a grain of salt. You caught me on a grumpy day as yesterday I dealt with a melted external firewire drive.
    My advice is to buy real server-class hardware. What is your objective? Drive redundancy? Capacity? A mini is a great dev server, not a production server. This is your data - presumably the data that makes your business function. Don't trust it to a single platter, and don't trust it to a consumer-level, disposable system. I am not trying to malign the mini. It is a fine machine for its role. Its role, however, is not to be a production file server. As a web server, we are talking about a different situation.
    Ok, I am rambling. Hope this helps in some way.

  • What is the best practice to incorporate a custom external swf form?

    I have been working on creating an FLA file that loads several SWFs and random background images. However, I have a custom form that I want to implement and I am having a problem figuring out how to do so. I want it to work like an inner popup, where it blurs and unblurs the background image (like the schedule on hbo.com).
    Here's my setup:
    FLA file contains-
    UILoader for swf section
    UILoader for random background image
    Let's say I load a section called home.swf into the section loader. Within home.swf I have a button for someone to call up form.swf. Is it possible for the form to blur the random background image when loaded, then unblur it when it's done? If so, how would you recommend I go about programming it? (I also have no idea how to unload it.)

    You can apply a blur filter to your background image after converting it to an object (like a movie clip).
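    For example, a rough AS3 sketch (bgLoader is a placeholder for the instance name of your background clip or UILoader):

        import flash.filters.BlurFilter;

        // When form.swf is opened: blur whatever the background is showing
        bgLoader.filters = [new BlurFilter(8, 8, 2)];

        // When the form is dismissed: remove the blur again
        bgLoader.filters = [];

    For unloading the form, a plain Loader's unloadAndStop() is one option, or you can simply point the UILoader's source at something else.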

  • Best practice for having an access point giving out only a specific range

    Hey All,
    I have an access point which is currently set to relay all DHCP requests to the server DC-01. However, the range that has been set up is running low on available IP addresses, so I have been asked if it is possible to set up another range for the AP only.
    Is there a way to set the DHCP up with a new range so that anything from that access point is given a 192.168.2.x subnet address as opposed to the standard 192.168.1.x subnet?
    Or would it be easier/better to create a superscope and slowly migrate the users to a new subnet with a larger range?
    Any help suggestions would be appreciated
    thanks
    Anthony

    Hi,
    Maybe we could configure a DHCP superscope to achieve your target.
    For details, please refer to the following articles.
    Configuring a DHCP Superscope
    http://technet.microsoft.com/en-us/library/dd759168.aspx
    Create a superscope to solve the problem of dwindling IP addresses
    http://www.techrepublic.com/article/create-a-superscope-to-solve-the-problem-of-dwindling-ip-addresses/
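    If you go the superscope route, a rough netsh sketch on DC-01 might look like the following; the scope names and ranges are placeholders, and the exact commands should be checked against your DHCP server version:

        rem Create the additional 192.168.2.x scope and its lease range
        netsh dhcp server add scope 192.168.2.0 255.255.255.0 "AP-Scope"
        netsh dhcp server scope 192.168.2.0 add iprange 192.168.2.10 192.168.2.200

        rem Group the existing and new scopes into one superscope
        netsh dhcp server scope 192.168.1.0 set superscope "Office-Superscope" 1
        netsh dhcp server scope 192.168.2.0 set superscope "Office-Superscope" 1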
    Best Regards,
    Andy Qi
    TechNet Community Support

  • Best Practice for External Libraries Shared Libraries and Web Dynrpo

    Two blogs have been written on sharing libraries with Web Dynpro DC, but I would
    like to know the best practice for doing this.
    External libraries seem to work great at compile time, but when deploying there is often an error related to the external library not being a deployed component.
    Is there a workaround for this besides creating a shared J2EE library, which I have been able to get working? I am not interested in just something that works; what is really the best practice here? What is the best way to limit the number of JARs that need to be kept in a shared library/external library? And when is sharing a reference to a service/etc. a valid approach vs. hunting down the JARs in the portal libraries and storing them in an external library?

    Security is mainly about mitigation rather than being 100% secure; "we have unknown unknowns". The component needs to talk to SQL Server. You could continue to use HTTP to talk to SQL Server, perhaps even get SOAP transactions working, but personally I'd have more worries about using such a 'less trodden' path, since that is exactly the area where more security problems are discovered. I don't know about your specific design issues, so there might be even more ways to mitigate the risk, but in general you're using a DMZ as a decent way to mitigate risk. I would recommend asking your security team what they'd deem acceptable.
    http://pauliom.wordpress.com

  • SQL Server 2012 Infrastructure Best Practice

    Hi,
    I would welcome some pointers (direct advice or pointers to good web sites) on setting up a hosted infrastructure for SQL Server 2012. I am limited to using VMs on a hosted site. I currently have a single 2012 instance with DB, SSIS, SSAS on the same server.
    I currently RDP onto another server which holds the BI Tools (VS2012, SSMS, TFS etc), and from here I can create projects and connect to SQL Server.
    Up to now, I have been heavily restricted by the (shared tenancy) host environment due to security issues, and have had to use various local accounts on each server. I need to put forward a preferred environment that we can strive towards, which is relatively
    scalable and allows me to separate Dev/Test/Live operations and utilise Windows Authentication throughout.
    Any help in creating a straw man would be appreciated.
    Some of the things I have been thinking through are:
    1. Separate server for Live Database, and another server for Dev/Test databases
    2. Separate server for SSIS (for all 3 environments)
    3. Separate server for SSAS (not currently using cubes, but this is a future requirement. Perhaps do not need dedicated server?)
    4. Separate server for development (holding VS2012, TFS2012, SSMS etc.). Is it worth having a local SQL Server DB on this machine? I was unsure where SQL Server Agent jobs are best run from, i.e. from the Live DB only, from another SQL Server instance, or whether to utilise SQL Server Agent on all (Live, Test and Dev) SQL Server DB instances. Running from one place would allow me to have everything executable from one place, with centralised package reporting etc. I would also benefit from some licence cost reductions (Kingsway tools).
    5. Separate server to hold SSRS, Tableau Server and SharePoint?
    6. Separate Terminal Server or integrated onto Development Server?
    7. I need server to hold file (import and extract) folders for use by SSIS packages which will be accessible by different users
    I know (and apologise that) I have given little info about the requirement. I have an opportunity to put forward my requirements for x months into the future, and there is a mass of info out there which is not distilled in a way I can utilise. It would be helpful to know what I should aim for in terms of separate servers for the different services and/or environments (Live/Test/Dev), and specifically best practice for where SQL Server Agent jobs should be run from, and perhaps a little info on how best to handle deployment/change control. (Note my main interest is not in application development; it is in setting up packages to load/refresh data marts for reporting purposes.)
    Many thanks,
    Ken

    Hello,
    In all cases, consider that having a separate server may increase licensing or hosting costs.
    Please allow me to recommend Windows Azure for cloud services.
    Answers:
    1. This is always a best practice.
    2. Having SSIS on a separate server allows you to isolate import/export packages, but may increase network traffic between servers. I don't know if your provider charges for incoming or outgoing traffic.
    3. SSAS on a separate server is certainly a best practice too. It contributes to better performance and scalability.
    4. SQL Server Developer Edition costs only about $50. Are you talking about centralizing job scheduling on an on-premises computer rather than having jobs enabled on a cloud service? Consider PowerShell to automate tasks.
    5. If you will use Reporting Services in SharePoint integrated mode, you should install Reporting Services on the same server where SharePoint is located.
    6. SQL Server can coexist with Terminal Services, with the exception of clustered environments.
    7. SSIS packages may be competing with users for access to files. Copying the files to a disk resource available to the SSIS server may be a better solution.
    A few more things to consider:
    Performance of the storage subsystem on the cloud service.
    How many cores? How much RAM?
    Creating a Domain Controller or using Active Directory services.
    These resources may be useful.
    http://www.iis.net/learn/web-hosting/configuring-servers-in-the-windows-web-platform/sql-2008-for-hosters
    http://azure.microsoft.com/blog/2013/02/14/choosing-between-sql-server-in-windows-azure-vm-windows-azure-sql-database/
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • Best way to merge 2 different networks/companies in same building

    I would like to get some thoughts on best practice regarding joining two different networks in the same building. Two different companies, two different networks; we are merging. Once the networks are joined we will trust the Windows domains.
    Both networks are using 3750s for core switching. So I would assume running fiber from Company1's core to Company2's core via trunking, and sharing select VLANs across the cores, would be the least expensive and most secure route?
    Other ideas, or flaws in the idea I have presented?
    Thanks!

    Other than the usual subnet and routing issues, stringing trunk fiber between the switches sounds good.
    If there are multiple firewalls and ISPs involved, you'll have to pay close attention to the routing topology, or re-engineer to reduce the complexity.
    If there is overlap in subnet usage, you might want to renumber one side. Using NAT internally will be an ongoing maintenance headache.
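    As a sketch, the trunk on each 3750 core might look something like this (interface and VLAN numbers are placeholders):

        interface GigabitEthernet1/0/49
         description Trunk to the other company's core
         switchport trunk encapsulation dot1q
         switchport mode trunk
         switchport trunk allowed vlan 10,20,30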
    -- Jim Leinweber, WI State Lab of Hygiene

  • Server Core 2008 R2 SP1 - AD DS Best Practice Analyzer Scans Don't Produce Any Output

    Hi,
    This is a re-post moving this discussion to the recommended forum "Server Core" from here:
    http://social.technet.microsoft.com/Forums/en-US/winservergen/thread/cc33d429-88e0-4450-a73c-361e395fd217.
    I am having problems producing any output for any AD DS Best Practice Analyzer Scans on a Windows Server Core 2008 R2 SP1 Domain Controller.
    I have imported the "ServerManager" and "BestPractices" PS modules on that Server by running the following commands:
    Import-Module ServerManager
    Import-Module BestPractices
    I then ran Get-BPAModel to find out what best practice scan models are available, which returns the following output:
    Id                                      LastScanTime
    --                                      ------------
    Microsoft/Windows/DirectoryServices     Never
    Microsoft/Windows/DNSServer             Never
    I then run all the BPA scans on that box:
    Get-BPAModel | Invoke-BPAModel
    This returns the following output:
    ModelId                                 Success  Detail
    -------                                 -------  ------
    Microsoft/Windows/DirectoryServices     True     (InvokeBpaModelOutputDetail)
    Microsoft/Windows/DNSServer             True     (InvokeBpaModelOutputDetail)
    Since the BPA invocation results weren’t displayed automatically, I entered the following command to see them:
    Get-BPAModel | Get-BPAResult | Out-File "D:\Source\BPA.txt"
    This command will create a text file with the scan results but I only see the results of the DNSServer scan, not the DirectoryServices scan.
    I have also tried to view the results in a HTML format by running the following command but still only see the DNSServer scan results:
    Get-BPAModel | Get-BPAResult | ConvertTo-Html | Set-Content d:\Source\BPA.htm
    I have also tried executing the scan ONLY for the "Microsoft/Windows/DirectoryServices" model but can't get any results to be returned. I have also connected using Server Manager from a full install of Server 2008 R2 SP1, but that doesn't seem to show any results under the "Best Practices Analyzer" section when the "Active Directory Domain Services" node is selected; all 4 tabs ("Noncompliant", "Excluded", "Compliant" and "All") show zero (0). However, the summary text above the tabs does show when the last scan was performed, which seems to be correct.
    Is there something special that needs to be done to produce the BPA results for the "Microsoft/Windows/DirectoryServices" BPA model on Server Core 2008 R2 SP1?
    BTW: The Forest/Domain is W2K3R2 Native, this is the first W2K8R2 DC in the environment and I have installed .NET 4 framework (Server Core) to support Powershell 3, also installed.
    Thanks, Paul.
    belpad

    Hi Diana,
    OK, pretty sure I've now found the root cause of the issue I've described above.
    I was also looking into Windows Update Agent issues for these W2K8R2 Server Core DC's, where no updates would be applied via WSUS (configured via GPO) and would fail with "FATAL: CBS called Error with 0x8000ffff windows update agent server
    core". 
    Yesterday, I managed to get one of the W2K8R2 Server Core DCs (WSUS updates) working again by removing one of the .NET 4 Framework security updates (KB2600211), which was manually applied when the server was initially set up. .NET 4 (Server Core edition,
    http://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=22833) was installed as a prerequisite for PowerShell 3. Once this update was removed, the affected Server Core DC was restarted and WSUS updates started to get applied.
    So I followed the same procedure on the other server core DC but this did not resolve the WSUS issue this time.  Next, I did further investigation into the Windows Update Agent problem.  This led me to the following article:
    http://blogs.technet.com/b/brad_rutkowski/archive/2008/07/03/windows-update-fails-with-8000ffff-e-unexpected.aspx which described an issue with NTFS permissions being set incorrectly on C: drive, with the "BUILTIN\Users" group completely
    missing on the C: drive.
    I found the affected Server Core DC also had this issue. When "BUILTIN\Users" was assigned permissions on the C: drive as described above and the Windows Update Agent was restarted, the Server Core DC started to install all required updates configured via WSUS.
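    Something along the following lines restores the missing ACEs (a sketch mirroring the permissions on the working DC in scenario 2 below; check it against your own baseline before applying it):

        icacls C:\ /grant "BUILTIN\Users":(OI)(CI)RX
        icacls C:\ /grant "BUILTIN\Users":(CI)(AD)
        icacls C:\ /grant "BUILTIN\Users":(CI)(IO)(WD)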
    Next, I ran the Directory Service BPA, which now produces the desired output either locally or remotely via Server Manager.
    Therefore, I can only assume that the Directory Service BPA also uses "Network Service" much like WUAUSERV (Windows Update Agent), which requires access to the C: drive via the "BUILTIN\Users" assignment.
    So this has subsequently led me to check the C: drive (%systemdrive%) permissions across multiple W2K8R2 machines, all of which showed differing assigned permissions, as follows:
    1. W2K8R2 Server Core DC - With Directory Services BPA and Windows Update Agent Not Working
    C:\>icacls c:\
    c:\ BUILTIN\Administrators:(OI)(CI)(F)
        CREATOR OWNER:(OI)(CI)(IO)(F)
        NT AUTHORITY\INTERACTIVE:(OI)(CI)(RX)
        NT AUTHORITY\SYSTEM:(OI)(CI)(F)
    2. W2K8R2 Server Core DC - With Directory Services BPA and Windows Update Agent Working OK
    C:\>icacls c:\
    c:\ NT AUTHORITY\SYSTEM:(OI)(CI)(F)
        BUILTIN\Administrators:(OI)(CI)(F)
        BUILTIN\Users:(OI)(CI)(RX)
        BUILTIN\Users:(CI)(AD)
        BUILTIN\Users:(CI)(IO)(WD)
        CREATOR OWNER:(OI)(CI)(IO)(F)
    3. W2K8R2 Full DC - With Directory Services BPA and Windows Update Agent Working OK
    C:\>icacls c:
    c: NT SERVICE\TrustedInstaller:(F)
       NT SERVICE\TrustedInstaller:(CI)(IO)(F)
       NT AUTHORITY\SYSTEM:(M)
       NT AUTHORITY\SYSTEM:(OI)(CI)(IO)(F)
       BUILTIN\Administrators:(M)
       BUILTIN\Administrators:(OI)(CI)(IO)(F)
       BUILTIN\Users:(RX)
       BUILTIN\Users:(OI)(CI)(IO)(GR,GE)
       CREATOR OWNER:(OI)(CI)(IO)(F)
    4. W2K8R2 Server Core DHCP Server (Migrated from W2K3 with Server Migration Tools) - With DHCP BPA and Windows Update Agent Working OK
    C:\>icacls c:
    c: NT AUTHORITY\SYSTEM:(OI)(CI)(F)
       BUILTIN\Administrators:(OI)(CI)(F)
    5. W2K8R2 Server Core DHCP Server (Migrated from W2K3 with netsh) - With DHCP BPA and Windows Update Agent Working OK
    C:\>icacls c:
    c: NT AUTHORITY\SYSTEM:(OI)(CI)(F)
       BUILTIN\Administrators:(OI)(CI)(F)
       BUILTIN\Users:(OI)(CI)(RX)
       BUILTIN\Users:(CI)(AD)
       BUILTIN\Users:(CI)(IO)(WD)
       CREATOR OWNER:(OI)(CI)(IO)(F)
    None of the above servers have a Group Policy or any in-house scripts defined that configure C: drive permissions. It seems odd that there should be such a variance in the C: (%systemdrive%) drive permissions across the above servers, with only scenarios 2 and 5 above having matching permissions. I can only imagine that maybe some software or software update might be causing this.
    By reviewing the above output, it seems there is also a difference between the C: drive permissions of W2K8R2 Server Core and W2K8R2 Full. Not sure if this is by design?
    Is there any Microsoft documentation describing what the default %systemdrive% NTFS permissions should be for W2K8R2 Server Core and Full? Furthermore, do these permissions change when the various infrastructure roles are installed and enabled, i.e. Domain Controller, DHCP, etc.? I ask since I would like to use the correct set of permissions for %systemroot% in each scenario. Please advise if I should be asking this question in a different forum.
    belpad

  • Best practices for handling elements and symbols (including preloading)

    I am trying to learn Edge Animate and I have not seen enough animations to know how this is typically handled and I searched the forum and have not found an answer either.
    If you have many different elements and symbols for a project, what is the best practice for having them appear, disappear, etc. on the timeline? I ask this question not only from a performance based perspective, but also keeping in mind the idea of preloading. This is a 2 part question:
    Part 1: Using elements and symbols later in the timeline:
    Since artwork is always imported directly to the stage in an "always on" status, should we place a visibility OFF on every item until we need it?
    or should they be opacity 0 until I need them?
    or should they be set to visibility hidden until I need them?
    Which of these is the best option if you don't want the element / symbol visible until later in the timeline? Does it matter?
    Part 2: Impact on page loading
    Does the above question have any impact upon page loading speed
    or is this something handled in preloading?
    or do you need to make a special preloader?
    Thanks for the help.

    Hi, escargo-
    Good questions!
    Part 1: Using elements and symbols later in the timeline:
    Since artwork is always imported directly to the stage in an "always on" status, should we place a visibility OFF on every item until we need it?
    or should they be opacity 0 until I need them?
    or should they be set to visibility hidden until I need them?
    Which of these is the best option if you don't want the element / symbol visible until later in the timeline? Does it matter?
    I would recommend that you set your visibility to "off" instead of simply changing the opacity.  The reason I suggest this is that when your visibility is set to off, your object's hit points also disappear.  If you have any type of interactivity, having the object still visible but with 0 opacity will interfere with anything you have underneath it in the display order.
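    In code, the same toggle from a trigger looks roughly like this (a sketch; "form" is a placeholder for your element's name):

        // keep the element (and its hit area) out of the way until it's needed
        sym.$("form").hide();

        // later, when you want it on screen
        sym.$("form").show();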
    Part 2: Impact on page loading
    Does the above question have any impact upon page loading speed
    or is this something handled in preloading?
    or do you need to make a special preloader?
    Thanks for the help.
    No, none of this has any impact on page load.  As you already noticed, all of the assets of your project will load before it displays.  If you want only part of your composition to load, you may want to do what we call a multi-composition project.  There's a sample of that in the Edge Animate API in the Advanced section, and plenty of posts in the forums (and one in the team's blog) explaining how to do that.
    http://www.adobe.com/devnet-docs/edgeanimate/api/current/index.html
    https://blogs.adobe.com/edge/
    Hope that helps!
    -Elaine

  • Best Practice for a Print Server

    What is the best practice for having a print server serving over 25 printers, 10 of which are colour lasers and the rest black-and-white lasers?
    Hardware
    At the moment we have one server, a 2GHz dual G5 with 4GB RAM and an Xserve RAID. The server is also our main Open Directory server, with about 400+ clients.
    I want to order a new server and want to know the best type of setup for the optimal print server.
    Thanks

    Since print servers need RAM and spool space, but not a lot of processing power, I'd go with a Mac mini packed with RAM and the biggest HD you can get into it. Then load a copy of Mac OS X Server (Tiger) on it and configure your print server there.
    Another option, if you don't mind used equipment, is to pick up an old G4 or G5 Xserve, load it up with RAM and disk space, and put Tiger on that.
    Good luck!
    -Gregg
