Server Refresh - Best Practice

We are normally a Unix shop, but we do have some Windows servers, notably a new installation of SAP Solution Manager, with BEx Broadcaster and SAP Portal coming soon.  All of these systems will be on Windows for reasons of cost and available technology.
We will be refreshing our hardware, in practice, every 3 years.
In a Unix environment this would not pose a problem, since SAP installations and databases copy easily from system to system.
My question, to my learned peers, is: what is the best approach to maintaining a Windows installation across hardware generations?
1) Is it simply to re-install the application and environment, then copy the database?
2) Is it to use a 3rd party tool such as PowerConvert?

Thanks, Yaroslav. Resourcing is always an issue with VMware. I would like to keep this thread open a little longer. Windows has been the OS for many large companies, and hardware refreshes (not to mention DRP) happen as a matter of course. My feeling is that the best practice is simply to re-install and then copy the database.  I am looking for an alternate solution. -alan
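For reference, the "re-install, then copy the database" approach can be sketched as below for a SQL Server-backed SAP system. The host names, database name, and paths are placeholders, and a real migration should follow SAP's homogeneous system copy guide rather than this sketch:

```shell
# On the old host: take a full backup of the SAP database (names are examples).
sqlcmd -S OLDHOST -E -Q "BACKUP DATABASE SOLMAN TO DISK = N'E:\backup\SOLMAN.bak' WITH INIT"

# Copy E:\backup\SOLMAN.bak to the new host, then restore it there.
# Add WITH MOVE clauses if the data/log file paths differ on the new hardware.
sqlcmd -S NEWHOST -E -Q "RESTORE DATABASE SOLMAN FROM DISK = N'E:\backup\SOLMAN.bak' WITH REPLACE"
```

After the restore, the SAP-side post-copy steps (license keys, profiles, host name references) still have to be done on the new machine.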

Similar Messages

  • License type of SQL Server 2005 Best Practices Analyzer

    Hi everybody.
    I need to install the software "SQL Server 2005 Best Practices Analyzer" in my organization, but I need to know whether this application is licensed free of charge. Several web sites say this tool is free, but the official Microsoft
    web page does not. So, where can I find official Microsoft information about the licensing of "SQL Server 2005 Best Practices Analyzer"?
    Thanks for your support

    Hello Erland.
    I followed your advice and read the terms of use of this software. I stopped at point 3 (which I have highlighted). Based on that point, I have doubts about using this application. Furthermore, nowhere does it say that this software is free to use.
    Would appreciate if someone can clarify this to me.
     =============================================================
    MICROSOFT SOFTWARE LICENSE TERMS
    MICROSOFT SQL SERVER 2005 BEST PRACTICES ANALYZER:
    These license terms are an agreement between Microsoft Corporation (or based on where you live, one of its affiliates) and you. 
    Please read them.  They apply to the software named above, which includes the media on which you received it, if any. 
    The terms also apply to any Microsoft
    *  updates,
    *  supplements,
    *  Internet-based services, and
    *  support services
    for this software, unless other terms accompany those items. 
    If so, those terms apply.
    BY USING THE SOFTWARE, YOU ACCEPT THESE TERMS. 
    IF YOU DO NOT ACCEPT THEM, DO NOT USE THE SOFTWARE.
    If you comply with these license terms, you have the rights below.
    1. INSTALLATION AND USE RIGHTS.  You may install and use any number of copies of the software on your devices.
    2. INTERNET-BASED SERVICES.  Microsoft provides Internet-based services with the software.  It may change or cancel them at any time.
    3. SCOPE OF LICENSE.  The software is licensed, not sold. This agreement only gives you some rights to use the software.  Microsoft reserves all other rights.  Unless applicable law gives you more rights despite this limitation, you may use the software only as expressly permitted in this agreement.  In doing so, you must comply with any technical limitations in the software that only allow you to use it in certain ways.  You may not
    *  work around any technical limitations in the software;
    *  reverse engineer, decompile or disassemble the software, except and only to the extent that applicable law expressly permits, despite this limitation;
    *  make more copies of the software than specified in this agreement or allowed by applicable law, despite this limitation;
    *  publish the software for others to copy;
    *  rent, lease or lend the software;
    *  transfer the software or this agreement to any third party; or
    *  use the software for commercial software hosting services.
    4. BACKUP COPY.  You may make one backup copy of the software.  You may use it only to reinstall the software.
    5. DOCUMENTATION.  Any person that has valid access to your computer or internal network may copy and use the documentation for your internal, reference purposes.
    6. EXPORT RESTRICTIONS.  The software is subject to United States export laws and regulations.  You must comply with all domestic and international export laws and regulations that apply to the software.  These laws include restrictions on destinations, end users and end use.  For additional information, see www.microsoft.com/exporting.
    7. SUPPORT SERVICES.  Because this software is "as is," we may not provide support services for it.
    8. ENTIRE AGREEMENT.  This agreement, and the terms for supplements, updates, Internet-based services and support services that you use, are the entire agreement for the software and support services.
    9. APPLICABLE LAW.
    a.  United States.  If you acquired the software in the United States, Washington state law governs the interpretation of this agreement and applies to claims for breach of it, regardless of conflict of laws principles.  The laws of the state where you live govern all other claims, including claims under state consumer protection laws, unfair competition laws, and in tort.
    b.  Outside the United States.  If you acquired the software in any other country, the laws of that country apply.
    10. LEGAL EFFECT.  This agreement describes certain legal rights.  You may have other rights under the laws of your country.  You may also have rights with respect to the party from whom you acquired the software.  This agreement does not change your rights under the laws of your country if the laws of your country do not permit it to do so.
    11. DISCLAIMER OF WARRANTY.  THE SOFTWARE IS LICENSED "AS-IS."  YOU BEAR THE RISK OF USING IT.  MICROSOFT GIVES NO EXPRESS WARRANTIES, GUARANTEES OR CONDITIONS.  YOU MAY HAVE ADDITIONAL CONSUMER RIGHTS UNDER YOUR LOCAL LAWS WHICH THIS AGREEMENT CANNOT CHANGE.  TO THE EXTENT PERMITTED UNDER YOUR LOCAL LAWS, MICROSOFT EXCLUDES THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT.
    12. LIMITATION ON AND EXCLUSION OF REMEDIES AND DAMAGES.  YOU CAN RECOVER FROM MICROSOFT AND ITS SUPPLIERS ONLY DIRECT DAMAGES UP TO U.S. $5.00.  YOU CANNOT RECOVER ANY OTHER DAMAGES, INCLUDING CONSEQUENTIAL, LOST PROFITS, SPECIAL, INDIRECT OR INCIDENTAL DAMAGES.
    This limitation applies to
    *  anything related to the software, services, content (including code) on third party Internet sites, or third party programs; and
    *  claims for breach of contract, breach of warranty, guarantee or condition, strict liability, negligence, or other tort to the extent permitted by applicable law.
    It also applies even if Microsoft knew or should have known about the possibility of the damages.  The above limitation or exclusion may not apply to you because your country may not allow the exclusion or limitation of incidental, consequential or other damages.
    Please note: As this software is distributed in Quebec, Canada, some of the clauses in this agreement are provided below in French.

  • Adobe Premiere Pro CC + shared server, best practices

    Where should we place projects, media caches, preview files, etc.?
    A project can be opened on different stations (not simultaneously, of course) during the day.
    I have obtained no information from my Adobe contact.
    Regards, Vince

    Thank you very much for the explanation. I have a follow-up question about our setup: 6 shows, a first assembly (editing) room, 2 technical stations for backup and ingest, and finally 2 rushes stations running Prelude. Each show lives in a directory named after the subject, containing the project, the media, and the supplied files. The caches, on the other hand, are also on the server, in a cache directory common to all machines. Is there a maximum number of cache files that must not be exceeded? Or must each machine have its own cache directory?

  • SQL Server 2008 - Best Practices Analyzer

    Is there a version of SQL Server 2008 Best Practices Analyzer available for download? If not, can I use the BPA for SQL Server 2005 to run a DB assessment on a SQL Server 2008 database? Please let me know your recommendation.
    Thanks

    Microsoft® SQL Server® 2008 R2 Best Practices Analyzer was released a few months ago.
    More details here
    http://www.microsoft.com/downloads/en/details.aspx?displaylang=en&FamilyID=0fd439d7-4bff-4df7-a52f-9a1be8725591

  • Failover cluster File Server role best practices

    We recently implemented a Hyper-V Server Core 2012 R2 cluster with the sole purpose of running our server environment.  I started with our file servers and decided to create multiple file servers and put them in a cluster for high
    availability.  So now I have a cluster of VMs, which I have since learned is called a guest cluster, and I added the File Server role to this cluster.  It then struck me that I could have just as easily created the File Server role under my Hyper-V
    Server cluster and removed this extra virtual layer.
    I'm reaching out to this community to see if there are any best practices on using the File Server role.  Are there any benefits to having a guest cluster provide file shares? Or am I making things overly complicated for no reason?
    Just to be clear, I'm just trying to make a simple Windows file server with folder shares that have security enabled on them for users to access internally. I'm using Hyper-V Core server 2012 R2 on my physical servers and right now I have Windows
    Server Standard 2012 R2 on the VMs in the guest cluster.
    Thanks for any information you can provide.

    Hi,
    Generally, when Hyper-V VMs are available, we install all roles into virtual machines, as that makes management easier.
    In your situation the host system is a Server Core installation, so managing file shares from a VM with a GUI is much more convenient.
    I cannot find an article specifically about best practices for setting up a failover cluster for file services. Here are 2 articles covering building a guest cluster (which you have already done) and the steps to create a file server cluster.
    Hyper-V Guest Clustering Step-by-Step Guide
    http://blogs.technet.com/b/mghazai/archive/2009/12/12/hyper-v-guest-clustering-step-by-step-guide.aspx
    Failover Cluster Step-by-Step Guide: Configuring a Two-Node File Server Failover Cluster
    https://technet.microsoft.com/en-us/library/cc731844(v=ws.10).aspx

  • Client on Server installation best practice

    Hi all,
    I wonder about this subject; I searched and found nothing relevant, so I ask here:
    Is there any best practice/state of the art for when you have a client application installed on the same machine as the database?
    I know the client app uses the server binaries, but should I avoid that?
    Should I install an Oracle client home and configure the client app to use the client libraries?
    In 11g there is no changeperm.sh anymore; does that prove Oracle is fine with client apps using server libraries?
    For precision: I'm on AIX 6 (or 7) + Oracle 11g.
    The client app will be an ETL tool, which explains why it runs on the DB machine.

    GReboute wrote:
    EdStevens wrote:
    Given the premise "+*when*+ you have a client application installed on the same machine as the database", I'd say you are already violating "best practice".
    So I deduce from what you wrote that you're absolutely against coexisting client app and DB server, which I understand, and usually agree with.
    Then you deduce incorrectly. I'm not saying there can't be a justifiable reason for having the app live on the same box, but as a general rule it should be avoided. It is generally not considered "best practice".
    But in my case, should I load or extract 100s of millions of rows, with GBs flowing through the network and possible disconnection issues, when I could have done it locally?
    Your potentially extenuating circumstances were not revealed until this architecture was questioned. We can only respond to what we see.
    The answer I'm seeking is a bit more elaborate than "shouldn't do that".
    By the way, CPU or Memory resources shouldn't be an issue, as we are running on a strong P780.

  • Storage Server 2012 best practices? Newbie to larger storage systems.

    I have many years managing and planning smaller Windows server environments, however, my non-profit has recently purchased
    two StoreEasy 1630 servers and we would like to set them up using best practices for networking and Windows storage technologies. The main goal is to build an infrastructure so we can provide SMB/CIFS services across our campus network to our 500+ end user
    workstations, taking into account redundancy, backup and room for growth. The following describes our environment and vision. Any thoughts / guidance / white papers / directions would be appreciated.
    Networking
    The server closets all have Cisco 1000T switching equipment. What type of networking is desired/required? Do we
    need switch-hardware based LACP, or will the Windows 2012 NIC-teaming options be sufficient across the 4 1000T ports on the StoreEasy?
    NAS Enclosures
    There are 2 StoreEasy 1630 Windows Storage servers. One in Brooklyn and the other in Manhattan.
    Hard Disk Configuration
    Each of the StoreEasy servers has 14 3TB drives for a total RAW storage capacity of 42TB. By default the StoreEasy
    servers were configured with 2 RAID 6 arrays with 1 hot standby disk in the first bay. One RAID 6 array is made up of disks 2-8 and is presenting two logical drives to the storage server. There is a 99.99GB OS partition and a 13872.32GB NTFS D: drive. The second
    RAID 6 array resides on disks 9-14 and is partitioned as one 11177.83GB NTFS drive.
    Storage Pooling
    In our deployment we would like to build in room for growth by implementing storage pooling that can be later
    increased in size when we add additional disk enclosures to the rack. Do we want to create VHDX files on top of the logical NTFS drives? When physical disk enclosures, with disks, are added to the rack and present a logical drive to the OS, would we just create
    additional VHDX files on the expansion enclosures and add them to the storage pool? If we do use VHDX virtual disks, what size virtual hard disks should we make? Is there a max capacity? 64TB? Please let us know what the best approach for storage pooling will
    be for our environment.
    Windows Sharing
    We were thinking that we would create a single Share granting all users within the AD FullOrganization User group
    read/write permission. Then within this share we were thinking of using NTFS permissioning to create subfolders with different permissions for each departmental group and subgroup. Is this the correct approach or do you suggest a different approach?
    DFS
    In order to provide high availability and redundancy we would like to use DFS replication on shared folders to
    mirror storage01, located in our Brooklyn server closet, and storage02, located in our Manhattan server closet. Presently there is a 10TB DFS replication limit in Windows 2012. Is this replication limit per share, or for the total of all files under DFS? We have been
    informed that HP will provide an upgrade to 2012 R2 Storage Server when it becomes available. In the meanwhile, how should we design our storage and replication strategy around the limits?
    Backup Strategy
    I read that Windows Server backup can only backup disks up to 2TB in size. We were thinking that we would like
    our 2 current StoreEasy servers to backup to each other (to an unreplicated portion of the disk space) nightly until we can purchase a third system for backup. What is the best approach for backup? Should we use Windows Server Backup to be capturing the data
    volumes?
    Should we use a third party backup software?

    Hi,
    Sorry for the delay in reply.
    I'll try to reply each of your questions. However for the first one, you may have a try to post to Network forum for further information, or contact your device provider (HP) to see if there is any recommendation.
    For Storage Pooling:
    From your description you would like to create VHDX files on the RAID 6 disks to allow for later growth. That is fine, and as you said, the VHDX format is limited to 64 TB. See:
    Hyper-V Virtual Hard Disk Format Overview
    http://technet.microsoft.com/en-us/library/hh831446.aspx
    Another possible solution is Storage Spaces, a new feature in Windows Server 2012. See:
    Storage Spaces Overview
    http://technet.microsoft.com/en-us/library/hh831739.aspx
    It can add hard disks to a storage pool and create virtual disks from the pool. You can add disks to the pool later and create new virtual disks if needed.
    For Windows Sharing:
    Generally you will end up with additional shared folders over time. Creating all shares under a single root folder sounds good, but in practice it may not be achievable, so it depends on your actual environment.
    For the DFS replication limitation:
    I assume the 10TB limitation comes from this link:
    http://blogs.technet.com/b/csstwplatform/archive/2009/10/20/what-is-dfs-maximum-size-limit.aspx
    I contacted the DFSR team about the limitation. DFS-R can actually replicate more data and does not have an exact limit. As you can see, that article was written in 2009.
    For Backup:
    As you said, Windows Server Backup has a size limitation (2 TB for a single backup volume, as you noted), so if it cannot meet your requirements you will need to find a third-party solution.
    Backup limitation
    http://technet.microsoft.com/en-us/library/cc772523.aspx
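If you do stay with the in-box tooling for the interim cross-backup, the nightly job might be as simple as the following sketch (the target share and volume letter are placeholders for your environment):

```shell
# Back up the D: data volume to a share on the peer StoreEasy server,
# without prompting. Schedule this with Task Scheduler for nightly runs.
wbadmin start backup -backupTarget:\\storage02\backups -include:D: -quiet
```

Bear in mind the per-volume size limits discussed above still apply to this approach.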

  • Report server setup best practice info needed -SOLVED-

    Hello, I'm looking for some best practice info on how to set up the report server to handle multiple reports from multiple apps (hopefully in separate directories).
    We are converting Forms 5 apps to 10g. Currently reports live in the same directory as the form files, each in their own application directory. Moving to 10g, the report.conf file specifies a reports directory. It does not seem that multiple directories can be listed in the sourceDir parameter in order to handle the multiple directories where reports can live. Is it possible to set it up so it can find reports in any of our 20 application directories? Do we have to have only one directory from which all reports are served (if so we'll have an issue, as reports from different apps could have the same name)?
    How have you folks solved this situation?
    Thanks for any info,
    Gary

    Got it working! Thanks to all for your input! I found a reference on Metalink to a known issue with running on Sun Solaris, which was causing me problems.
    Bottom line, here's what I did to get it all working:
    1) Report server .conf file:
    - Comment out sourceDir line in the engine entry.
    - Add environment entries for each app before the </server> line at the end .
    <environment id="prs">
    <envVariable name="REPORTS_PATH" value="(path to dir where reports live for this app)"/>
    </environment>
    - Bounce the server (not sure if this is necessary)
    2) $ORACLE_HOME/bin/reports.sh:
    - Comment out line that sets REPORTS_PATH
    This was necessary for Sun Solaris (the bug as mentioned on Metalink)
    3) The app .fmb that calls the report:
    - Set the report object property to specify the environment ID before calling
    run_report_object():
    set_report_object_property(rpt_id, REPORT_OTHER, 'ENVID="prs"');
    Blue Skies,
    Gary

  • Flash Lite Server Redirect - best practice?

    Hey all,
    I work in a small creative department in Japan, and we are
    starting to take orders for mobile Flash apps. We have, until
    recently, only done inline Flash movies, but I've been put in
    charge of our first full-Flash app. The only thing I'm not clear on
    is this: How should I implement the switch on the server side that
    a) redirects people trying to get to the content from a PC to a PC
    page; and b) keeps people from being able to access the content
    directly so they can download and decompile the Flash file?
    Capcom, for example, is doing an excellent job at this, and
    even if I spoof the server with a mobile user-agent, I still can't
    get the mobile content to come up on my PC.
    Anyway, I was hoping maybe somebody would know something
    about this and would be able to advise on best practices: PHP?
    .htaccess? JavaScript redirects? Also how do you hide the content
    (no need to type up an example script - but what are the key
    parameters that I should be testing against)?
    Thanks in advance!
    Ryoma

    KuneriLite is free, but with the Basic version you can install only one SIS application at a time.
    Yeah, it does have plugins, etc., but then comes the pain: registering at Symbian Signed, UIDs, etc.
    For a simple choice and a quick SIS, nothing fancy, swf2go might be good.
    At least this is my initial thought. I haven't tested it yet.

  • Portal server deployment best practices

    Does anyone out there know the right way to deploy a portal server into a production environment, instead of manually copying all the folders and running the necessary commands? Is there a better way to deploy the portal server? Are there any best practices I should follow for deploying a portal server?

    From the above, what I understood is that you would like to transfer your existing portal server configuration to the new one. I don't think there is an easy method to do it.
    One thing you can do is take an LDIF backup from the existing portal server.
    First install the portal server on the new box, then export the existing portal server's directory data (note: db2ldif exports to LDIF; ldif2db imports):
    # /opt/netscape/directory4/slapd-<host>/db2ldif /tmp/profile.ldif
    Edit the /tmp/profile.ldif file and replace <hostname> and <Domain name> with the new system's values.
    Import the file on the new server using
    # /opt/netscape/directory4/slapd-<host>/ldif2db -i /tmp/profile.ldif
    and also copy the file slapd.user_at.conf under /opt/netscape/directory4/slapd-<hostname>/config to the new system.
    Restarting the server lets you access the portal server with the configuration of the old one.
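The "edit the LDIF" step can be scripted instead of done by hand. Here is a minimal sketch using sed; the host and domain names are placeholders, and the demo input below stands in for the real exported profile LDIF:

```shell
#!/bin/sh
# Demo input standing in for the exported profile LDIF.
cat > /tmp/profile.ldif <<'EOF'
dn: cn=server,o=olddomain.com
host: oldhost.olddomain.com
EOF

# Replace the old host and domain names before importing with ldif2db -i.
sed -e 's/oldhost\.olddomain\.com/newhost.newdomain.com/g' \
    -e 's/olddomain\.com/newdomain.com/g' \
    /tmp/profile.ldif > /tmp/profile-new.ldif

cat /tmp/profile-new.ldif
```

The longer host substitution runs first so the bare-domain rule cannot clobber it half-way.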

  • Mac Pro 10.7 Server DMZ best practice

    The Mac Pro has 2 GigE ports; what is the best practice for Lion Server and a DMZ?
    Should I ignore one port and put the server in the DMZ, firewalling from the LAN to the server (a pain for file sharing), or use one port for the DMZ and one for the LAN?
    I have been trying to use the two ports, but Lion Server seems to want to bind to only one address (10.1.1.1 DMZ or 192.168.1.1 LAN).
    Does anyone have a best practice for this? I am using a Cisco ASA 5500 for the firewall.

    If you put your server in a DMZ, all traffic will be sent to it unfiltered, in which case the server firewall would be your only line of defense against attack.
    For better security, set firewall rules in the Cisco that pass traffic to the ports you want open and deny traffic on all other ports.  You can also restrict access to specific ports by allowing or denying specific IP addresses or address blocks in the firewall settings.
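As a rough illustration of that advice, the ASA access list might look like the following. The ACL name, the server address, and the permitted ports are placeholders; adapt them to your ASA version and interface names:

```
access-list DMZ_ACCESS extended permit tcp any host 10.1.1.10 eq 80
access-list DMZ_ACCESS extended permit tcp any host 10.1.1.10 eq 443
access-list DMZ_ACCESS extended deny ip any any
access-group DMZ_ACCESS in interface outside
```

The explicit deny at the end makes the "default closed" intent visible in the config even though the ASA denies unmatched traffic anyway.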

  • Java Server Faces best Practices

    Hello,
    I am starting to like JSF, but I want to know the following.
    Where is the controller in a JSF application: is it the managed bean or a backing bean?
    Should I put my Business Delegate in a managed bean or in a backing bean? What is the difference?
    Can anyone tell?
    I would also like to know about JSF best practices and blueprints.
    Thanks.
    Saludos,
    <Rory/>

    Hi Senthruan,
    The documentation referenced by the following links is based on an older version of JavaServer Faces technology.
    Backing Beans:
    http://java.sun.com/webservices/docs/1.3/tutorial/doc/JFUsing3.html
    UI Components:
    http://java.sun.com/webservices/docs/1.3/tutorial/doc/JFUsing3.html (just as a visual example).
    You should instead use the J2EE tutorial (http://java.sun.com/j2ee/1.4/docs/tutorial/doc/index.html), which is based on version 1.1 of JavaServer Faces technology. The JavaServer Faces material starts with chapter 17. The topics have moved around a bit, but the tutorial has a nice search engine, so you shouldn't have trouble finding the information you need.
    Jennifer

  • DI Server + UDF best practices

    If I want to add a UDF to an object, what's the best way to go for?
    Let's say I want to add a role to a contact person.
    I would do this:
    - Add a user defined table, containing an integer (id) and a string (description)
    - Add an integer UDF to the object to store a reference to the role.
    Would you advice to do it like this or would you even advice not to do it like this, but recommend another way?
    Thanks for your time!
    Vincent

    Hi Vincent,
    I don't think there's a best practice as such but there are sometimes a few ways to set up the same solution. When making your choice of which configuration to use, it's best to consider aspects like data security, maintenance etc.
    In your example, your solution would work fine. Using a UDT makes it very easy to add new roles and you can also add columns to your UDT to store other role-based data. However, it's harder to set the security to prevent users opening the UDT form, if that's a requirement. The other approach would be to add the roles as valid values in the UDF itself. This makes it easier to set basic security but it's not a good solution if the number of roles will change frequently as you need everyone out of the system to update the UDF valid values.
    Also bear in mind that XL Reporter cannot read from UDTs.
    Kind Regards,
    Owen

  • OS X Server DNS Best practice?

    Hello,
    I am having a little trouble with my OS X Server DNS.
    I have set up server.example.com and that works fine but now from my internal network I cannot get to:
    example.com or
    www.example.com
    example.com is a website I have set up on a remote webserver.
    My records currently look like this.
    Primary Zone: example.com
    server.example.com - machine
    server.example.com - nameserver
    Reverse Zone: 50.168.192.in-addr.arpa
    192.168.50.25 - reverse mapping
    server.portalpie.com - nameserver
    My webserver for example.com has an IP of something like 175.117.174.19
    How would I get example.com and www.example.com to point to 175.117.174.19?
    Thanks
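One common fix for the situation described above is to add A records for the bare domain and www to the internal example.com zone, pointing at the external web server. A sketch follows; on OS X Server the zone is usually managed through Server.app, and the file path below is an assumption, so prefer the GUI where possible:

```shell
# Append A records for example.com and www.example.com to the internal zone.
# /var/named/example.com.zone is an assumed path -- check your installation.
sudo tee -a /var/named/example.com.zone <<'EOF'
example.com.      IN A  175.117.174.19
www.example.com.  IN A  175.117.174.19
EOF
# Bump the zone's serial number, then reload DNS (e.g. sudo rndc reload).
```

Since the internal server is authoritative for example.com, any name not defined in its zone will fail to resolve internally, which is exactly the symptom described.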

    tsumugari wrote:
    Hello. DNS for simple name resolution works correctly for internal and external names. Internally it is .lan and externally .fr.
    I think there is perhaps an SRV entry to add.
    Please do not use .LAN as your top-level domain.  That's not a valid top-level domain right now, but it's also not reserved for this sort of use.  Either use a real and registered domain, or a subdomain of a real and registered domain, or — if you squat in a domain or try to use a TLD that's not registered — expect to have problems as new top-level domains are added.  At the rate that the new TLDs are coming online from ICANN, I'd expect to see .LAN get allocated and used, too.    .GURU, .RIP, .PLUMBING and dozens of other new top-level domains are already online, and probably thousands more are coming online.  
    SRV records are not related to accessing the Internet, those are service records which some applications use to access certain network services; they're a way to locate a target server and a port for specific applications — CalDAV does use an SRV record, but that's not related to the original posting's issues.   If you're having issues similar to the OP, then access your server and launch Terminal.app from Applications > Utilities and verify local DNS with the (harmless, diagnostic) command-line command:
    sudo changeip -checkhostname
    Enter your administrative password.  That command might show a one-time informational message about the use of sudo, and will usually then show some network configuration information about your server, and then an indication that no problems were found, or some indications of issues.  If there are errors reported, your IP network or your local DNS is not configured correctly — I'm here assuming a NAT network.
    I usually do this DNS set-up in a couple of steps.  First, get private DNS services configured and working.  This is always the first step, right after assigning the IP addresses.   It's just too convenient not to have DNS running on your local LAN, once you get to the point of having and running a server.   Then for external access for (for instance) web services, get port-forwarding working at the firewall/NAT/gateway box working; get your public static IP address mapped to the server's internal, private, static IP address.  Then get the public DNS configuration to resolve your external domain name to your public static IP address.
    My preference is to use separate DNS domains or a domain and a subdomain inside and outside.  Using real and registered domains, and not using any domains associated with a dynamic DNS provider — that's possible, but a little more tricky to configure.  This internal and external domain usage simplifies certain steps, and it avoids having to deal with cases where — for instance — some of your services have public IP addresses — such as a mail server you might be using — and other services might be entirely private.  If you have one domain (or subdomain) be public and one be private, then you don't have to track external IP address changes in your private DNS services; public DNS has just your public stuff, and your private domain (or subdomain) has just your private stuff.  Also obviously easy to tell what's inside your firewall, and what's outside, using this. 
    If you're thinking of running a publicly-accessible mail server, you'll need additional steps in the public DNS.
    Little of the above probably makes sense, so here's a write-up on configuring DNS on OS X Server.   All of the Server.app stuff works about the same for general DNS setup, too.  More recent Server.app is usually more flexible and capable than the older Server.app stuff, though.

  • Windows java server node best practice?

    I'm coming from the Windows/.NET world, where I would usually expect a service that can be managed from the Services applet or via the usual command-line remote admin utilities, and that doesn't need to interact with the console. Starting a Java instance in a command window seems a bit odd to me.
    I imagine it would need some mechanism to run at startup without requiring a logged-in account or manual intervention. What's the best way to do this?

    You can also check Apache Daemon Procrun (http://commons.apache.org/daemon/procrun.html). I haven't tried it, but it should be able to do the job.
    - Aleks
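If you go the Procrun route, registering the JVM as a Windows service looks roughly like this. The service name, paths, and main class are placeholders; see the Procrun documentation for the full parameter list:

```shell
# Install the Java server as a Windows service via prunsrv (Windows cmd line;
# ^ is the cmd continuation character). All names/paths here are examples.
prunsrv.exe //IS//JavaServerNode --DisplayName="Java Server Node" ^
  --Jvm=auto --StartMode=jvm --StopMode=jvm ^
  --Classpath=C:\app\server.jar ^
  --StartClass=com.example.ServerMain ^
  --StopClass=com.example.ServerMain --StopMethod=stop ^
  --Startup=auto
```

Once installed, the service starts at boot without a logged-in session and can be controlled with `net start JavaServerNode` / `net stop JavaServerNode` or from the Services applet, which matches the Windows expectations described above.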
