Moving mail server from Xserve G4 to Intel: best practice? Recommendations?

Hi!
I will receive a new Intel Xserve soon and may want to move mail services from the currently used Xserve G4 (which is working fine) to the new Intel Xserve.
The Xserve G4 is running a heavily modified mail setup thanks to pterobyte's excellent tutorials on fixing, updating, extending, dare I say "pimping" Mac OS X Server's mailserver setup.
What I want to achieve in the long run:
Have mail services run on the Intel Xserve and have the Xserve G4 act as a mail backup. (They will be connected via a permanent VPN, but sit in different LANs on different ISPs.) They will then serve email for at least three distinct domains, all low volume; currently the G4 serves a single domain using WGM aliases. I want (and need) to switch to Postfix aliases.
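Since the switch from WGM aliases to Postfix aliases is planned anyway, a virtual alias map is the usual way to handle several domains in Postfix. A minimal sketch, with placeholder domains and users (not from the original setup):

```
# /etc/postfix/main.cf (excerpt)
virtual_alias_domains = example.org, example.net
virtual_alias_maps = hash:/etc/postfix/virtual

# /etc/postfix/virtual
info@example.org        maclemon
info@example.net        maclemon
postmaster@example.org  admin
```

After editing the map, run postmap /etc/postfix/virtual and postfix reload so Postfix picks up the changes.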
What I need to consider:
My client desperately wants/needs to update to Leopard Server once it becomes available. Both Xserves will definitely be upgraded to Leopard Server then.
Time is not an issue at the moment, as the G4 is working very well. I want to keep the work to a minimum with regard to the Leopard switch. I am fine with an interim solution, even if it is somewhat inelegant, as long as it runs fine. The additional domains are not urgent at the moment; it will be fine for them to move to the Intel Xserve once we run Leopard.
Questions:
Does it pay to do all the work of moving from the G4 to the Intel Xserve (I'd need to compile and configure SpamAssassin, ClamAV, amavisd-new, etc. again) and to move all the mailboxes, users, IMAP and SMTP, given that there will be a clean install once Leopard comes out? (I am definitely no fan of upgrading a Mac OS X Server in place; experience has shown me that this does not work reliably.)
Are there any recommendations or best practice hints from your experience when moving a server from PPC to Intel?
Thanks in advance
MacLemon

By all means do a clean install. If time is not an issue, make sure Leopard has been on the market for 2-3 months before you do so.
Here is what I would do:
1. Clean install of Intel Server
2. Update all components
3. Copy all needed configuration files from PPC to Intel Server
4. Backup PPC mail server with mailbfr
5. Restore mail backup with mailbfr to Intel Server
This is all that needs to be done.
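Sketched as shell commands, steps 4 and 5 might look roughly like this. The exact mailbfr flags and all paths/hostnames here are assumptions, so check mailbfr's own usage output on your install before relying on them:

```
# On the G4 (PPC): back up the mail store, databases and config with mailbfr
sudo mailbfr -b /Volumes/Backup/mailbackup

# Copy the backup over to the new machine (placeholder hostname)
scp -r /Volumes/Backup/mailbackup admin@intel-xserve:/Volumes/Backup/

# On the Intel Xserve: restore from that backup
sudo mailbfr -r /Volumes/Backup/mailbackup
```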
If you want to keep the G4 as a backup server, just configure it as a secondary MX in case your primary is down. Trying to keep mailboxes redundant is only possible in a cluster and is a massive pain to configure (Leopard should change that, though).
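On the DNS side, a secondary MX is just two MX records with different preference values (the lower number is tried first). A sketch with placeholder hostnames:

```
example.org.    IN  MX  10  mail.example.org.      ; Intel Xserve, primary
example.org.    IN  MX  20  backup.example.org.    ; Xserve G4, secondary
```

The G4 then needs its Postfix configured to accept and queue mail for the domain (e.g. via relay_domains) so it can hand the mail back to the primary once it returns.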
HTH,
Alex

Similar Messages

  • What are Microsoft's best practice recommendations for giving external internet access to an intranet portal?

    Hi
    What are the best practices recommended by Microsoft?
    I have an intranet portal in my organization used by employees, and I want to give employees access from the internet as well.
    Can I use the same URL for employees accessing the intranet portal internally and externally, or different URLs,
    like https://extranet.xyz.com.in and http://intranet.xyz.com.in?
    The internal URL accessed by employees is http://intranet.xyz.com.in,
    and the portal is configured with claims-based authentication.
    We have an F5 for load balancing:
    a request from external users to the F5 is HTTPS, and from the F5 to the SharePoint server it is HTTP;
    the SharePoint server replies to the F5 over HTTP, and the F5 returns an HTTPS response to external users.
    When I change the relevant settings in Alternate Access Mappings, all links change to HTTPS,
    but the authentication link still shows HTTP and the authentication page does not open.
    adil

    Hi,
    One of my clients has an environment similar to yours, with an internal pair of F5s and a pair used for access from the internet.
    I am only going to focus on the method using an F5 load balancer with SSL offloading. The setup of the F5 will not be covered in detail, but a reference to the documentation supporting SharePoint and SSL offloading is provided below.
    Since you are going to be using SSL offloading, you do not need to extend your web applications to separate IIS websites with unique IP addresses:
    Configure the F5 with SSL offloading.
    Configure an internal AAM for SSL (HTTPS) for each web application that maps to that web application's public HTTP FQDN AAM setting.
    Our environment has an additional component: we require RSA authentication for all internet-facing sites, so we have the extra step of extending each web application to a separate IIS website and configuring RSA for each extended website.
    Reference SharePoint F5 configuration:
    http://www.f5.com/featured/video/ssl-offloading/
    -Ivan

  • Where does one find the Oracle best practice/recommendations for how to DR ODI?

    What is the Oracle Best Practice for install/deployment and configuration of ODI 11g for Disaster Recovery?
    We have a project that is using Oracle ODI 11g (11.1.1.5).
    We have configured all the other Oracle FMW components as per the Oracle DR EDG guides, basically using the host IP name/aliasing concept to 'trick' the secondary site into thinking
    it is primary and to continue working with minimal (or no) manual reconfiguration. But will this work for ODI? The FMW DR guide has sections for SOA, WebCenter and IdM, but nothing for ODI.
    Since ODI stores so much configuration information in the Master Repository, when this database gets Data Guarded to the secondary site and promoted to primary, ODI will still think it is at the 'other' site. Will this break the agents actually running the scenarios?
    Where does one find the Oracle Best Practice/recommendations for how to DR ODI properly?
    We are looking for a solution that will allow a graceful switchover/failover with minimal manual re-configuration.

    user8804554 wrote:
    Hi all,
    I'm currently testing external components with Windows Server and I want to test Oracle 11g R2.
    The only resource I have is this website, and the only binaries seem to be for Linux.
    You have one other HUGE resource that, while it won't answer your current question, you'd better start getting familiar with if you are going to use Oracle: the complete and official documentation, found at tahiti.oracle.com.
    >
    Does anybody know how I can upgrade my Oracle 11.1.0.7 version to the R2 release?
    Thanks,
    Bertrand

  • Highest Quality Live Video Streaming | Best Practice Recommendations?

    Hi,
    When using FlashCollab Server, how can we achieve best quality publishing a live stream?
    Can you provide a bullet list for best practice recommendations?
    Our requirement is publishing a single presenter to many viewers (generally around 5 to 50 viewers).
    Also, can it make any difference if the publisher is running Flash Player 10 vs Flash Player 9?
              Thanks,
              g

    Hi Greg,
    For achieving best quality
    a) You should use an RTMFP connection instead of RTMPS; RTMFP has lower latency.
    b) You should use the Player 10 SWC.
    c) If bandwidth is not a restriction for you, you can use the highest quality values. The WebcamPublisher class has a property for setting quality.
    d) You can use a lower keyframeInterval value, which will send full video frames more often instead of relying on interframe compression.
    e) You should use the Speex codec, which is again provided with the Player 10 SWC.
    These are some suggestions that can improve your quality depending on your requirements.
    Thanks
    Hironmay Basu

  • Could you point me to an ADF development best practice recommendation doc

    Could you point me to an ADF development best practice recommendations document for ADF 11g that could be used as a guideline for developers?
    Naming conventions
    Usage of models, implementing validation in BC...
    Best practices for the UI with ADF Faces...
    Recommendations.
    Thanks

    The right place to start :
    http://groups.google.com/group/adf-methodology
    Also you may take a look at this:
    http://www.oracle.com/technology/products/jdev/collateral/4gl/papers/Introduction_Best_Practices.pdf
    Also
    http://groups.google.com/group/adf-methodology/browse_thread/thread/e7c9d557ab03b1cb?hl=en#
    There are some interesting tips there.

  • [CS4-CS5] Table from XML: what's the best practice?

    Hi,
    I have to build a huge table (20-25 pages long...) from an XML file.
    No problem with that; I wrote an XSLT file to convert my client's XML into the "Table/Cell structure" InDesign needs, with all style parameters.
    The problem is that it takes InDesign a long time (4-5 hours) to build the whole table.
    I wonder if this is still the best practice with such a huge amount of data (the input XML is 1.1 MB).
    I also tried to build the table using a script (JavaScript), but from some timing tests I can see the problem is even worse.
    I'm currently using an iMac (Mac OS X 10.6.2) with a 3.06 GHz Intel Core 2 Duo and 8 GB RAM; it's not exactly the worst computer in this world...
    Is there a best practice for this kind of work?
    Client is becoming a pain in the arse...
    Thanks in advance!

    First transform the XML through XSLT separately, and then import the resulting XML into InDesign.
    Hope it helps.
    Regards,
    Anil Yadav
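    The separate pre-transform suggested above can be run outside InDesign on the command line, for example with xsltproc if it is installed (file names here are placeholders, not from the original post):

```
# Run the XSLT transform once, up front; InDesign then only imports the result
xsltproc client-table.xsl client-data.xml > table-for-indesign.xml
```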

  • Webchannel B2B accessed from CRM Java: multi-language best practice?

    Hi,
    We are accessing Webchannel from the CRM Portal, and we have the requirement to access Webchannel B2B in the same language as the portal user.
    Is there a best practice for doing this? I've seen creating a URL iView with Spanish and English URLs using a parameter called language, and also one iView for English and one for Spanish, pointing to a system for English and a system for Spanish respectively.
    Is there another way to do this, or a recommended way?
    I hope somebody else has solved this.
    Thanks in advance!
    Kind Regards,
    Gerardo J

    My concern would be how to pass the parameters (for language or country) from the Portal to Webchannel B2B; otherwise you end up having a URL iView for each different locale. Can we pass such information programmatically from the portal to Webchannel, say through HTTP headers or cookies? That would be seamless.
    But to answer your question: yes, a URL ISA iView is pretty common if you have only very few languages. More importantly, if you want deep integration with the portal, you must use the ISA iView configuration provided by SAP; then this is the only way. See [Note 1021959 - Portal settings for ISA iViews|https://service.sap.com/sap/support/notes/1021959] for details of the available features.

  • What are Best Practice Recommendations for Java EE 7 Property File Configuration?

    Where does application configuration belong in modern Java EE applications? What best practice(s) recommendations do people have?
    By application configuration, I mean settings like connectivity settings to services on other boxes, including external ones (e.g. Twitter and our internal Cassandra servers...for things such as hostnames, credentials, retry attempts) as well as those relating business logic (things that one might be tempted to store as constants in classes, e.g. days for something to expire, etc).
    Assumptions:
    We are deploying to a Java EE 7 server (Wildfly 8.1) using a single EAR file, which contains multiple wars and one ejb-jar.
    We will be deploying to a variety of environments: Unit testing, local dev installs, cloud based infrastructure for UAT, Stress testing and Production environments. **Many of  our properties will vary with each of these environments.**
    We are not opposed to coupling property configuration to a DI framework if that is the best practice people recommend.
    All of this is for new development, so we don't have to comply with legacy requirements or restrictions. We're very focused on the current, modern best practices.
    Does configuration belong inside or outside of an EAR?
    If outside of an EAR, where and how best to reliably access them?
    If inside of an EAR we can store it anywhere in the classpath to ease access during execution. But we'd have to re-assemble (and maybe re-build) with each configuration change. And since we'll have multiple environments, we'd need a means to differentiate the files within the EAR. I see two options here:
    Utilize expected file names (e.g. cassandra.properties) and then build multiple environment-specific EARs (e.g. appxyz-PROD.ear).
    Build one EAR (e.g. appxyz.ear) and put all of our various environment configuration files inside it, appending an environment name to each config file name (e.g. cassandra-PROD.properties), and of course adding an environment variable (to the VM or otherwise) so that the code will know which file to pick up.
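    As a small illustration of the second option, the file-name lookup by environment variable could be sketched like this in shell (APP_ENV and the cassandra-*.properties naming are just the question's own example, not an established convention):

```shell
# Select a per-environment properties file name, defaulting to DEV
# when APP_ENV is not set in the deployment environment.
ENV_NAME="${APP_ENV:-DEV}"
CONFIG_FILE="cassandra-${ENV_NAME}.properties"
echo "$CONFIG_FILE"
```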
    What are the best practices people can recommend for solving this common challenge?
    Thanks.

    Hi Bob,
    Sometimes when you create a model using a local WSDL file, the logical port refers to, say, the "C:\temp" folder from which you picked up that file instead of the URL mentioned in the WSDL file; you can check the target address of the logical port. Because of this, when you deploy the application on the server, it tries to find the service at the "C:\temp" path instead of the path specified at the soap:address location in the WSDL file.
    The best way is to re-import your Adaptive Web Services model using the URL specified as the soap:address location in the WSDL file,
    like http://<IP>:<PORT>/XISOAPAdapter/MessageServlet?channel<xirequest>,
    or you can ask your XI developer for the web service URL and the server username and password.

  • Best practice recommendation--BC set

    Dear friends,
    I am using the BC set concept to capture my configurations. I am a PP consultant.
    Consider one scenario: configuring plant parameters in transaction OPPQ.
    My requirement is:
    A.   Define Floats: (Schedule Margin key)
    SM key: 001
    opening period: 1day
    Float before prod: 2day
    Float After prod: 1 day
    Release period: 1 day
    B.   Number range
    Maintain internal number range as: 10-from:010000000999999999 (for planned orders).
    This is my configuration requirement.
    Method M1:
    Name of the BC set: ZBC_MRP1
    While creating the BC set for the first time, while defining the floats, I wrongly captured/activated the opening period as 100 instead of 001. But I correctly captured the value for the number range (for my planned orders).
    Now if you look at the activation log for my BC set, it shows a "GREEN" light: Version 1, successfully activated, but the activated values are wrong.
    So I want to change my BC set values and reactivate the BC set with the correct value. I activate the same BC set again with the correct opening period (001). After reactivating the BC set, the activation log shows one more version (Version 2), also with a "GREEN" light.
    So two versions are visible in my activation log:
    If I activate Version 1, the wrong values will be written to the configuration.
    If I activate Version 2, the correct values will be written to the configuration.
    Both versions can be activated at any point in time; the most recently activated version is always on top.
    <b>So method 1 (M1) means maintaining different versions of the BC set under one BC set name</b> and activating the version you need.
    Method 2 (M2):
    Instead of creating versions within the same BC set, create a second BC set to capture the new values.
    If I then activate the second BC set, the configuration will be updated.
    Please suggest which method is best practice (M1 or M2).
    Thanks
    Senthil

    I am familiar with resource bundles, but wonder if there is a better approach within JDeveloper.
    Resource bundles are the Java-native way of handling locale-specific texts.
    Are there any plans to enhance this area in 9.0.3?
    For BC4J, in 9.0.3, all control hints and custom validation messages (a new feature) are generated in resource bundles rather than XML files, to make it easier to extend them for multiple locales.

  • Performance Tuning Best Practices/Recommendations

    We recently went live on an ECC 6.0 system. We have three application servers that are showing a lot of swaps in ST02.
    Our buffers were initially set based on SAP Go-Live Analysis checks, but it is becoming apparent that we will need to enlarge some of our buffers.
    Are there any tips and tricks I should be aware of when tuning the buffers? 
    Does making them too big decrease performance?
    I am just wanting to adjust the system to allow the best performance possible, so any recommendations or best practices would be appreciated.
    Thanks.

    Hi,
    Please increase the values of the parameters in small increments. If you set the parameters too large, memory is wasted, and this can result in paging if too much memory is taken from the operating system and allocated to SAP buffers.
    For example, if abap/buffersize is 500000, change this to 600000 or 650000. Then analyze the performance and adjust parameters accordingly.
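    For instance, such a change would be made in the instance profile; the value shown is only the reply's illustrative figure (abap/buffersize is given in kB):

```
# Instance profile excerpt: program (PXA) buffer enlarged in one small step
abap/buffersize = 600000
```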
    Please check out <a href="http://help.sap.com/saphelp_nw04/helpdata/en/c4/3a6f4e505211d189550000e829fbbd/content.htm">this link</a> and all embedded links. The documentation provided there is fairly elaborate. Moreover, the thread mentioned by Prince Jose is very good for a guideline as well.
    Best regards

  • Any best practice recommendations for controlling access to dashboards?

    Everyone,
         I understand that an Xcelsius dashboard compiled into a .swf file contains no means for providing access control to limit who can or how many times they can run the dashboard. Basically, if they have a copy of the .swf they can use it as much as they'd like. To protect access to sensitive data I'd like to be able to control who can access the dashboard and how many times or how long they can access it for.
         From what I've read it seems the simplest way to do this is to embed the swf file into a web portal that requires a user to authenticate before accessing the file. I suppose I can then handle how long they can access it from the back end.
         If I do this, is there anyway a user can do something like <right click - save as> on the flash file to save it on their local machine? Is there a best practice means for properly protecting the dashboard?
    Any advice would be appreciated,
    Jerry Winner


  • How often should the Cisco 6509 and 3750 switches be rebooted? Does Cisco have a best practice recommendation?

    How often should 6509 and 3750 switches be rebooted?
    Does Cisco have a best practice document on this, with a recommendation for how long a switch should be up before it gets rebooted?
    Why is a reboot needed if there are no indications of issues in the log?

    I'd agree with Larry here.
    If you're not seeing any issues with your IOS revision and there are no relevant PSIRTs (security notices applicable to the features and/or exposure of your device requiring an IOS upgrade), then you can go a very long time without rebooting, if ever.
    I'm sure it's far from a record, but our corporate distribution router that supports >1000 downstream devices day in and day out has never been rebooted since installation just over 5 years ago. I have a top of rack Layer 2 switch (2900 series running CatOS) that's almost at 10 years.
    That said, you should have some monitoring scheme that assures you everything is healthy. But as long as memory and cpu are happy, the device will run forever.

  • Just updated to CC 2014. Interested in best practice recommendations for converting INDD hi-resolution print layout files to .jpeg for use on a portfolio preview website

    Seeking recommendations for best practices for converting hi-res magazine INDD docs to JPEGs for a web portfolio.

    Export to a hi-res PDF, then do your conversion in Photoshop where you have more control.

  • Best practice recommendations for Payables month end inbound jobs

    During Payables month end, we hold inbound invoice files and release them once the new period is open, so that invoices get created in the new fiscal period. Is this an efficient way to do it? Please advise on best practice for this business process.
    Thanks

    Hi,
    Can someone provide your valuable suggestions.
    Thanks
    Rohini.

  • Dual 7010 - Layer 3 Peering Best Practice/Recommendation

    I have 2 Nexus 7010s with 2 Nexus 5548s dual connected to each 7K. The 7010s are acting as redundant core devices. Dual Sup2E in each.
    Can someone tell me what the best practice is for layer 3 peering (EIGRP) between these devices. I can't seem to find any example documents.
    vPCs are used.
    Approximately 20 VLANs. Multiple functions, lots of virtualized servers (200+) on UCS and VMware.
    A firewall HA pair will be connected, one to each 7K; this leads to the internet and DMZ.
    One MPLS WAN router will be connected to the primary 7K.
    Let me know if you need any additional info. Thanks!

    I'm not sure that I understand your question. Is it about EIGRP peering across the vPC links between the core switches?
    If so see below an example for OSPF but the concepts are the same for EIGRP
    http://bradhedlund.com/2010/12/16/routing-over-nexus-7000-vpc-peer-link-yes-and-no/
    Don't forget to rate all posts that are helpful
