Migrating from SunFire v1280 to T4 servers

Hi all,
I will soon be tasked with migrating from a SunFire v1280 server to a new T4 server (not sure yet whether it will be a T4-1 or a T4-2). The v1280 is running Solaris 10 with a number of applications that the client wants to preserve and keep running; they include Oracle DB, WebLogic and a few others.
Is there a way I can clone the OS on the v1280 and install it on the T4? A flash archive, maybe, or some other method? I've been looking through the Oracle documentation but I can't find what I need; maybe I will find it here.
The problem is that there are a bunch of software components at work here and the client wants minimal downtime. Both machines are SPARC servers (yes, I know they are almost a decade apart, but that's where the Solaris binary compatibility guarantee kicks in, right?).
If you need more info let me know :)
P.S.: I posted something similar a few months ago; not sure if it was on this forum. If it was, sorry for the double post.

Hi,
I have already done several migrations from systems running Solaris 10 to new hardware.
The best way that I have found is the following:
1) On the new system, make sure you have the latest Solaris 10, or even Solaris 11.
2) On the machine that you want to migrate, stop all processes and create a flash archive (flar) of the system.
If there is external storage connected to that system, make sure you exclude that filesystem from the archive, so that you can simply import those LUNs on the new system.
3) On the new system, create a zone with the same hostid and IP address.
4) Import the flar into the zone.
If you want, I can give you a concrete example that I have already tested: I migrated an E2900 + V440 to an M5000, where the external storage held a ZFS filesystem. A rough sketch of the sequence is below.
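A sketch only, assuming Solaris 11 on the T4 (which provides the solaris10 brand); the zone name, hostid value, paths and the excluded SAN mount point are examples to adapt:

  # On the v1280, with the applications stopped (-x keeps the SAN
  # filesystem out of the archive so its LUNs can simply be re-imported):
  flarcreate -n v1280-os -x /u01/oradata /var/tmp/v1280-os.flar

  # On the T4, create a solaris10 branded zone reusing the old identity:
  zonecfg -z v1280zone
      create -t SYSsolaris10
      set zonepath=/zones/v1280zone
      set hostid=84f0c32e
      exit

  # Install the zone from the flar, preserving the system configuration (-p),
  # then boot it; add network configuration to match the old IP address:
  zoneadm -z v1280zone install -p -a /var/tmp/v1280-os.flar
  zoneadm -z v1280zone boot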
Regards
Filip

Similar Messages

  • DB2 Migration from Windows - Linux supported with Backup/Restore ?

    Hi folks,
    we have to do a DB2 V9.1 migration from Windows -> Linux. Both servers are Intel-based. Is it officially supported by SAP to do this without the SAP migration tools, e.g. with a backup/restore or a redirected restore? I have heard about it, but I'm not sure.
    Thanks a lot
    Jochen
    Edited by: Jochen Raab on Mar 1, 2010 5:40 PM

    Hi Jochen,
    Please have a look at the DB2 docu.
    http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.admin.ha.doc/doc/c0005960.html
    It clearly states that Linux and Windows are not compatible:
    DB2® database systems support some backup and restore operations between different operating systems and hardware platforms.
    The supported platforms for DB2 backup and restore operations can be grouped into one of three families:
    Big-endian Linux® and UNIX®
    Little-endian Linux and UNIX
    Windows®
    A database backup from one platform family can only be restored on any system within the same platform family. For Windows operating systems, you can restore a database created on DB2 Universal Database (UDB) V8 on a DB2 Version 9 database system. For Linux and UNIX operating systems, as long as the endianness (big endian or little endian) of the backup and restore platforms is the same, you can restore backups that were produced on DB2 UDB V8 on DB2 Version 9.
    So you need to do an export/import.
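    For an SAP system the supported path is the SAP heterogeneous system copy (R3load-based export/import). At the plain DB2 level, a cross-platform copy without backup/restore would look roughly like this (a sketch; PRDDB and the file names are placeholders):

      # on the Windows source
      db2look -d PRDDB -e -l -o prddb.ddl    # extract DDL incl. tablespaces/bufferpools
      db2move PRDDB export                   # unload all table data into IXF files
      # copy prddb.ddl and the IXF files to the Linux host, create the empty
      # database there, then replay the DDL and reload the data
      db2 -tvf prddb.ddl
      db2move PRDDB load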
    Regards,
      Joachim

  • MIgration from physical servers to Oracle Solaris Containers

    We are in the process of migrating our Oracle databases from physical Sun SPARC servers to Oracle Solaris Containers.
    Do we need any extra settings on the Oracle side or the server side to run Oracle databases in a virtualized environment?
    Any comment will be of help.
    Thanks
    Abdul

    Oracle databases work fine in zones.
    Some planning is required to find out whether you need dedicated CPUs or other resource-management options.
    Use projects to set the appropriate memory settings.
    Depending on the database version, the installer may choke and complain about missing entries in /etc/system (like shminfo_shmmax), but those complaints can be safely ignored; on Solaris 10 and later, those parameters are managed through resource controls.
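    For example, a minimal sketch for the memory side (the project name and the 8G cap are only illustrations; size it to your SGA):

      # give the oracle user its own project with a shared-memory cap
      projadd -c "Oracle DB" -K "project.max-shm-memory=(privileged,8G,deny)" user.oracle
      # verify the resource control
      prctl -n project.max-shm-memory -i project user.oracle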

  • ActiveSync stops working after migrating from Exchange 2007 to Exchange 2013

    We have started the migration from Exchange 2007 to Exchange 2013. We've followed best practices and everything is working great except ActiveSync. I've performed Exchange migrations in the past, so this is nothing new for me. I've also been referring to a great guide which has been a big help:
    http://www.msexchange.org/articles-tutorials/exchange-server-2013/migration-deployment/planning-and-migrating-small-organization-exchange-2007-2013-part1.html
    Once a user is migrated from Exchange 2007 to 2013, ActiveSync stops working properly. Email can be pulled to the device (a Nokia Lumia 625 running Windows Phone 8) by performing a manual sync, but DirectPush is not working. The strange part is that it's not affecting everyone who has been migrated; anyone who is still on Exchange 2007 is unaffected.
    At first I thought it was our wildcard certificate. 99% of our users are running Outlook 2013 on Windows 7 or higher, but we do have a few terminal servers still running Outlook 2010, and Outlook 2010 was giving us certificate errors. I realized it was the wildcard certificate, and rather than making changes to the OutlookProvider I simply obtained a new SAN certificate. Although that resolved the issues for the Outlook 2010 users, ActiveSync was still a problem.
    Rebooting the phones and removing the email account from the user's device and re-adding it didn't resolve the issue either.
    Then I performed an iisreset on the CAS server. This didn't help either. I didn't know it at the time, but I was getting closer...
    I tried using the cmdlet Test-ActiveSyncConnectivity but it gave me the following error:
    WARNING: Test user 'extest_0d9a45b025374' isn't accessible, so this cmdlet won't be able to test Client Access server
    connectivity.
    Could not find or sign in with user DOMAIN.com\extest_0d9a45b025374. If this task is being run without
    credentials, sign in as a Domain Administrator, and then run Scripts\new-TestCasConnectivityUser.ps1 to verify that
    the user exists on Mailbox server EX02.DOMAIN.COM
    I started reviewing how Exchange 2013 proxied information from the CAS to the mailbox server and realized the issue may in fact be on the mailbox server.
    I performed an iisreset on the mailbox server and all of a sudden ActiveSync started working again. Awesome!
    I can't explain why. The only thing I can assume is that when some users were migrated from 2007 to 2013, something wasn't being triggered on the Exchange 2013 side; resetting IIS resolved it. I guess I'll have to do an IIS reset after I perform each batch of migrations. Disabling ActiveSync and re-enabling it for the affected users didn't help - only the IISRESET resolved the issue.
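    (A side note: iisreset restarts every site on the box. Next time I may try recycling just the ActiveSync application pool first; this is untested in this scenario, and the pool name assumes a default Exchange 2013 install:)

      %windir%\system32\inetsrv\appcmd.exe recycle apppool /apppool.name:"MSExchangeSyncAppPool"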
    If anyone has any information as to why this happens, please chime in. Also, if anyone knows why I can't run the Test-ActiveSyncConnectivity cmdlet, I'd appreciate the help.
    Thanks.
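    Regarding the Test-ActiveSyncConnectivity warning: it normally just means the test mailbox has never been created. As the error text itself suggests, run the bundled script from the Exchange Management Shell on the Mailbox server (the path assumes a default Exchange 2013 install):

      cd "C:\Program Files\Microsoft\Exchange Server\V15\Scripts"
      .\new-TestCasConnectivityUser.ps1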


  • Cross Forest Migration from Exchange 2007 to Exchange 2013

    Hi
    Could anybody advise me on the steps, and the pros and cons, for the environment below if we go for a cross-forest migration?
    Source 
    Domain -   test.local
    Active Directory -  Windows 2003
    Exchange Server - 2007
    Target
    Domain -   test.net
    Active Directory -  Windows 2012
    Exchange Server - 2013
    Also, if it is possible, how could I remove the source environment, including the Exchange servers, after the migration?
    Regards
    Muralee

    Hi Oliver,
    Please advise us.
    In my environment we are planning to migrate from Exchange 2007 to Exchange 2013 (a cross-forest migration).
    Source: Exchange 2007 with SP3 RU10
    Target: Exchange 2013 with CU2 (new environment, yet to be created)
    Trust: Forest trust in place (two-way)
    Domain and forest functional level: 2003 in both target and source
    Migration steps:
    Step 1:
    We plan to execute 'preparemoverequest.ps1' first in the target forest, so that we get the disabled MEUs (mail-enabled users) in the target forest.
    Step 2:
    Then we are going to use ADMT to migrate the users' SIDs and passwords.
    Step 3:
    Then we are going to move the mailboxes with New-MoveRequest.
    Please have a look at our steps and advise whether we are proceeding with the migration in the right way; if anything needs to be changed, please let me know.
    Thanks 
    S.Nithyanandham 
    Hey there,
    Sorry for taking a little while to get back to you; I've been busy working on hosted Lync deployments!
    Use ADMT first; then, when running the preparemoverequest.ps1 script, use the -UseLocalObject parameter. This will tie the move request up to the ADMT-migrated account. A sketch follows the link below.
    More info in this thread here: http://social.technet.microsoft.com/Forums/windowsserver/en-US/2916e931-36a0-4ba4-8c04-196dbe792b44/preparemoverequestps1-and-admt?forum=winserverMigration
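    A rough sketch of that order, with placeholder DC names and user (run from the Exchange 2013 shell in the target forest after the ADMT pass; verify the exact parameters against your CU, as this is from memory):

      $srcCred = Get-Credential test.local\administrator
      cd $exscripts
      .\Prepare-MoveRequest.ps1 -Identity "user@test.local" -RemoteForestDomainController "dc01.test.local" -RemoteForestCredential $srcCred -LocalForestDomainController "dc01.test.net" -UseLocalObject
      New-MoveRequest -Identity "user@test.net" -RemoteLegacy -RemoteGlobalCatalog "dc01.test.local" -RemoteCredential $srcCred -TargetDeliveryDomain "test.net"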
    Oliver
    Oliver Moazzezi | Exchange MVP, MCSA:M, MCITP:Exchange 2010,MCITP:Exchange 2013, BA (Hons) Anim | http://www.exchange2010.com | http://www.cobweb.com | http://twitter.com/OliverMoazzezi

  • Migrating from Exchange 2007 to exchange 2013 ( special case )

    Hello,
    What is the BEST scenario (fastest, most efficient, most secure in terms of data loss) to migrate from Exchange 2007 (one server, all Exchange roles installed on it, 1,200 mailboxes) to Exchange 2013?
    Bear in mind that my users need to be connected to their mailboxes 24/7!
    It's very frustrating, and I have no clue if this is even the right place to post about this; if not, please point me to where I should post.
    Also, all my domain controllers are 2008.

    It's fine to post your question here, and you are fine with Server 2008 Domain Controllers - that is a supported scenario.
    If you haven't performed such an upgrade and you need to have 24/7 mailbox availability, I would seriously recommend that you duplicate the production environment on a test network and run through the upgrade at least once.
    Most people neglect the Outlook client requirements: the clients need to be updated and include several specific updates, which allow the automatic reconfiguration of internal clients. If you are preparing for this upgrade, you should be aware that all internal Outlook clients switch to Outlook Anywhere; clients that miss these updates will get connectivity problems.
    Another common problem is the configuration of the Exchange URLs, meaning the Exchange 2013 URLs and the modified Exchange 2007 URLs that allow the coexistence. In your case, you definitely need to plan for coexistence; that includes requesting and installing a new Exchange UCC (multiple-domain) certificate on both Exchange servers, configuring split DNS (or preferably pinpoint DNS zones), and correct timing when replacing the existing certificate on the Exchange 2007 server. Failure to configure the correct URLs (and it's quite easy to miss one, so triple-check them) will get you in trouble.
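    A quick way to triple-check them from the Exchange Management Shell (a sketch; run it on both servers, and note that property names differ slightly between 2007 and 2013):

      Get-OutlookAnywhere | fl Server,*Hostname*
      Get-OwaVirtualDirectory | fl Server,InternalUrl,ExternalUrl
      Get-WebServicesVirtualDirectory | fl Server,InternalUrl,ExternalUrl
      Get-ActiveSyncVirtualDirectory | fl Server,InternalUrl,ExternalUrl
      Get-OabVirtualDirectory | fl Server,InternalUrl,ExternalUrl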
    Once you get through the switchover (switching the mail flow and Client Access through the Exchange 2013 server), move just a couple of test mailboxes and check the result.
    Finally, if you are moving the public folders, make sure that the lock is really applied before you complete the process. Most people proceed right away, and that gets the process stuck. If you can afford it (the mailboxes are already on the Exchange 2013 server at that point), just restart the Exchange 2007 server (after locking the public folders) and then complete the public folder migration.
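    For the lock itself, a sketch (run against the legacy organization, and give it time to take effect before completing the migration):

      Set-OrganizationConfig -PublicFoldersLockedForMigration:$true
      Get-OrganizationConfig | fl PublicFoldersLockedForMigration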
    Good Luck with the project!
    Step by Step Screencasts and Video Tutorials

  • Migrating from Exchange 2007 to Exchange 2013 Public Folders coexistence

    Hi all, I'm migrating from Exchange 2007 SP3 to Exchange 2013 SP1.
    I have an Exchange 2007 server (Client Access, Hub Transport and Mailbox roles) in Site A, holding the mailboxes and public folders.
    In another site, Site B, I have already installed an Exchange 2013 server (Client Access and Mailbox roles).
    I am migrating test users, and the problem is that users migrated to Exchange Server 2013 cannot see the public folders.
    How can I let the migrated users view the Exchange 2007 public folders during coexistence?
    Is there a "how to" for migrating public folders from Exchange 2007 to Exchange 2013?
    Thank you very much
    Microsoft Certified IT Professional Server Administrator

    Hi,
    In Exchange 2013, public folders are stored in public folder mailboxes, instead of in a public folder database as in Exchange 2007.
    Due to the changes in how public folders are stored, legacy Exchange mailboxes are unable to access the public folder hierarchy on Exchange 2013 servers. However, user mailboxes on Exchange 2013 servers or Exchange Online can connect to legacy public folders. Exchange 2013 public folders and legacy public folders can't exist in your Exchange organization simultaneously; this effectively means that there's no coexistence between versions.
    For this reason, it's recommended that, prior to migrating your public folders, you first migrate all your legacy mailboxes to Exchange 2013. For more information about migrating public folders from previous versions, please refer to:
    http://technet.microsoft.com/en-us/library/jj150486(v=exchg.150).aspx
    (Please note the What do you need to know before you begin part in this link)
    Regards,
    Winnie Liang
    TechNet Community Support

  • Web Analysis Report Migration from 9.3.1. to 11.1.1.3

    Hi All,
    We are migrating from 9.3.1 to 11.1.1.3. We have a couple of Web Analysis reports running on the old 9.3.1 server, and now I have to migrate those reports to the new 11.1.1.3 server. The 9.3.1 reports are on Windows servers, and they should be migrated to 11.1.1.3, which is on a Solaris server. Please let me know if you have any ideas regarding this issue.
    Thanks,
    Ram.
    Edited by: KRK on Sep 29, 2009 8:56 AM
    Edited by: KRK on Nov 6, 2009 12:53 PM

    Hi Vijay,
    I have checked the migration utility in version 9.x; basically, this migration utility is compatible with reports from version 7.x to 9.x, and it is not supported for reports from 9.x to 11.x.
    So I finally followed this procedure:
    I exported the reports from 9.x and imported them into 11.x.
    After that, I pointed the database connections to the new 11.x server.
    Now I am facing an issue: when I open a report in read-only mode (i.e., as a business user), I get a screen asking for database details (I have given the business user read access to Essbase).
    Basically, these reports are still pointing to my old Essbase server, i.e., the version 9.x server.
    Please let me know if I am missing something while exporting and importing the reports.
    Thanks,
    Ram.

  • NQS ERROR:14025 NO FACT TABLE EXISTS -after migrating from 10g to 11g

    We are getting NQS ERROR:14025 NO FACT TABLE EXISTS AT THE REQUESTED LEVEL OF DETAIL in all reports after migrating from 10g to 11g.
    We applied the one-off patch for bug 11850704 for that error, but after applying it we are still getting the same error.
    The patch's instructions file does give post-deployment instructions to create a variable:
    Post-install instructions:
    - To revert to the 10g navigator behavior for handling conforming dimensions, you must set the following session variable via an init block in the RPD:
    NO_FORCE_TO_DETAIL_BIN=1
    The default value for the above variable is 0.
    - Restart all servers (Admin Server and all Managed Servers).
    However, we could not find the procedure for creating the specified variable and initialization block in the RPD.
    Can you please suggest how to go further?
    Our questions are:

    Hi
    Refer to the thread below; it might help you:
    obiee 11g non-conforming dimensions and nQSError 14025
    Thanks,
    satya
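    For reference, creating the variable is done in the repository rather than on disk. In the Administration Tool, open the RPD, go to Manage > Variables, create a new Session Initialization Block, point its data source at a trivial query such as the one below, and map the result to a new session variable named NO_FORCE_TO_DETAIL_BIN with default 1 (this is an outline from memory; check it against your Admin Tool version):

      SELECT 1 FROM DUAL

    Then restart the servers as the patch readme says.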

  • How to Migrate from SAP BO XI 3.1 system to SAP BI 4.1

    Hello Gurus,
    I have got a new project in which I have to do an upgrade and migration from BO XI R3.1 to BI 4.1. Please help me out here; more details are given below. Thanks in advance.
    1.1    Technical Scope
    Installation of only a production SAP BI 4.1 environment.
    Repository is currently on DB2 but will be on SQL Server for the BI 4.1 implementation.
    All VMware machines: The new architecture calls for 12 VM servers.
    Row-level security in the universes for authorization of content, and Enterprise authentication (no SSO). Matrix security model with custom-level groups, giving Basic, Intermediate and Advanced level users access to pre-defined folders and content.
    Migrate content (objects, universe and instances) from SAP BO XI 3.1 to SAP BO BI 4.1 for the technical upgrade, the details are below:
    Universes will stay in the UNV format and will not be converted to UNX.
    Only WebI Documents
    All Controlled Folders - (~5600 documents)
    174 Total objects in Public Documents folders (All documents to upgrade & remediate)
    5,433 Total WebI reports in Corporate and other folders (All documents to upgrade, & remediate)
    User Folders – (~6,000 documents)
    6000 + Webi documents accessed in 2013 ( All 6000 + documents to upgrade, NO remediation)
    Inbox – None will migrate
    Total Documents to remediate: up to 5,607
    Migrate all universes and connections
    Migrate all Xcelsius and agnostic documents with no remediation.
    The estimated report counts for remediation, by complexity: Low 1,525; Medium 150; High 50.
    This assumes the 10% report-remediation effort described in earlier sections.
    Report remediation: should it exceed the base assumptions made in this document, it will be implemented as a change order. The effort for such a change will be mutually agreed between the parties, and the price to the project is determined using this effort and blended rates.
    Testing:
    Conduct planning and inventory analysis; Reusable templates for Migration Plan, Validation.
    Perform migration
    Use the Right Sized Testing framework to plan and conduct testing
    Use the automated report-compare tools to compare large volumes of Excel/XML data
    Template-based remediation ensures quality control
    Thanking you best regards.
    AK.
    Message was edited by: Simone Caneparo
    reduced title length

    Hello Mark,
    Thanks for your help, much appreciated.
    Does anyone know how to create a report, for audit purposes, of the BO 3.1 universes' connections, database types, network layers and so on? I want to pull all the info you can see in the pictures into a Webi report.
    Please see the attached file.
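    If it helps, one low-tech way to inventory universes and their connections is the Query Builder (the AdminTools web application on the BO server); a sketch using the standard CMS object kinds (verify the SI_KIND values on your system):

      SELECT SI_ID, SI_NAME, SI_KIND
      FROM CI_APPOBJECTS
      WHERE SI_KIND = 'Universe' OR SI_KIND = 'MetaData.DataConnection'

    Query Builder returns HTML rather than a Webi document, so for a real report you would still export the results or build it on the audit database if auditing is enabled.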

  • DB quries wrt migration from oim 9.1 to oim 11g (11.1.1.5)

    Hi All,
    We have to migrate from OIM 9.1 to OIM 11g (11.1.1.5) with a new DB and new app servers.
    The high-level steps are as follows:
    1. Import the existing OIM 9.1 DB into the new DB.
    2. Create the additional schemas (SOA, MDS) on the new DB.
    3. Install the OIM 11g and SOA applications on the new servers.
    4. Migrate the new DB's OIM 9.1 schema to support OIM 11g (by running Oracle_IDM_Home/bin/ua.sh).
    5. Some other related tasks (as mentioned in the upgrade guide).
    6. Migrate the OIM application middle tier.
    7. Other tasks (as mentioned in the upgrade guide).
    Our requirement is this:
    For step 1, we have to import the Dev DB data into a new temp DB and proceed with that temp DB for the migration.
    After that, we want to import the temp DB into the new Dev DB, and then replace the new Dev DB data (which was imported from the temp DB) with the QA DB's data (the QA OIM 9.1 DB). (We have limitations on taking the QA DB at the first stage; that's why we are taking the Dev DB data in step 1.)
    My questions are:
    1. What changes are required for moving an OIM 11g DB to another DB (by dumping the same DB)?
    2. After performing the migration, can we dump only the OIM 9.1 DB data (QA) into the OIM 11g DB? If yes, will it affect the OIM 11g DB schema?
    Please advise, or let me know if you need any other information.
    Thanks.
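    (For the dump itself in step 1, the usual tool would be Oracle Data Pump; a sketch with placeholder schema names and connect strings:)

      # on the source host
      expdp system@devdb schemas=OIM_USER directory=DATA_PUMP_DIR dumpfile=oim91.dmp logfile=oim91_exp.log
      # on the target host, after creating the matching tablespaces and users
      impdp system@tempdb schemas=OIM_USER directory=DATA_PUMP_DIR dumpfile=oim91.dmp logfile=oim91_imp.log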

    I think my question #1 is essentially the same as an Oracle Identity Manager database host and port change, so I can use the steps mentioned in the link http://download.oracle.com/docs/cd/E21764_01/doc.1111/e14308/handlinglcm.htm#CIAJBHHH (Section 13.1.1, Oracle Identity Manager Database Host and Port Changes). Do we need to make any additional modifications in any configuration files? Please, can anyone confirm?
    And can anyone help with my question #2?
    Edited by: user13285646 on Aug 11, 2011 12:46 PM

  • JMS issues when migration from weblogic 9.2 to 10.3.5

    We are facing some issues when migrating from WebLogic 9.2 to 10.3.5.
    In WebLogic 9.2:
    The BMP entity EJBs used in our project are read-only in nature and use the entity cache; the configuration details are below.
    <!DOCTYPE weblogic-ejb-jar PUBLIC "-//BEA Systems, Inc.//DTD WebLogic 6.0.0 EJB//EN" "http://www.bea.com/servers/wls600/dtd/weblogic-ejb-jar.dtd">
    <weblogic-ejb-jar>
      <weblogic-enterprise-bean>
        <ejb-name>Company</ejb-name>
        <entity-descriptor>
          <pool>
            <max-beans-in-free-pool>300</max-beans-in-free-pool>
            <initial-beans-in-free-pool>150</initial-beans-in-free-pool>
          </pool>
          <entity-cache>
            <max-beans-in-cache>3500</max-beans-in-cache>
            <idle-timeout-seconds>100000</idle-timeout-seconds>
            <read-timeout-seconds>0</read-timeout-seconds>
            <concurrency-strategy>ReadOnly</concurrency-strategy>
          </entity-cache>
        </entity-descriptor>
      </weblogic-enterprise-bean>
    </weblogic-ejb-jar>
    The entity beans get refreshed via JMS messages. Within the MDB descriptor files (weblogic-ejb-jar.xml) we use the provider URL directly, and the XA-enabled connection factory is set to false.
    Migration to WebLogic 10.3.5:
    With the same configuration, the MDBs do not get deployed on WebLogic 10 (they fail with an exception), so we removed the provider URL from weblogic-ejb-jar.xml and changed the JMS configuration to use a foreign JMS server, with the XA-enabled connection factory set to true. Now, whenever a JMS message is triggered, the entity bean is not refreshed with the updated values, i.e. the values are stale.
    Can someone look into this and provide input to resolve the issue?

    I think the entity bean refresh problem is unrelated to the MDBs. The MDB is only responsible for getting the message to your application (which in turn interacts with the entity beans). You might want to try posting your question to an EJB newsgroup.
    Tom

  • LabVIEW DSC: Migration from 6.1 to 8.6 problems

    Colleagues,
    I need help from someone experienced with LabVIEW DSC. I would like to recompile a pretty old application written in LabVIEW 6.1/DSC 6.1 under LabVIEW 8.6, and I have run into a lot of trouble with this.
    First, I tried to migrate my old .scf file as described in "Migrating from LabVIEW DSC 7.1 to 8.0".
    Well, that seemed to be OK, and a LabVIEW.lvlib library with the variables was created, but when I double-click some of the items, an exception occurs in LabVIEW (see dsc_exception.png in the attachments).
    Can you please open the test project (attached to this post) and double-click the Slave005_A0 item? Does the crash happen only for me, or for someone else as well?
    The second problem is one of understanding.
    In LabVIEW DSC 6.1 I used the "Read Tag.vi"/"Write Tag.vi" VIs to access the items. When my VI is opened in LabVIEW/DSC 8.6, these calls are replaced with "legacy_Write_Tag_(analog)7x.vi" (see screenshot). I am unable to find corresponding VIs in DSC 8.6. How can I write/read my tags in the latest version? As far as I understand, I can use shared variables directly; is this correct? But then how can I read multiple tags, through the DataSocket VIs?
    The same goes for "legacy_Get_Tag_List7x.vi": how can I get the item list in DSC 8.6 programmatically?
    Or should I leave all the legacy* VIs in my application?
    thanks in advance and best regards,
    Andrey.
    Attachments:
    dsc_exception.png ‏26 KB
    dsc_legacy_Write_Tag.png ‏3 KB
    TestProject.zip ‏4 KB

    Hi Andrey,
    Yes, my LabVIEW crashes as well. As you may have noticed, a lot changed in LabVIEW 8.0 with regards to DSC, the most important change being that tags were replaced with shared variables. I would recommend that you go through the variables and create each one yourself to ensure the most reliable behavior.
    If you want to read 'tags', you just drag a shared variable onto the block diagram (that's the direct way). If you want to do this programmatically, have a look at the DSC Module -> Engine Control -> Variables & I/O Servers -> Get Shared Variable List palette on the block diagram; you can then use DataSocket to access the shared variables.
    Don't leave the legacy VIs on your block diagram. Upgrade your whole project; shared variables are here to stay. Have a look at the following article to get a thorough understanding of them:
    Using the LabVIEW Shared Variable
    Let us know if you have more questions.
    Adnan Zafar
    Certified LabVIEW Architect
    Coleman Technologies

  • Complications migrating from Snow Leopard Server to Mountain Lion Server.

    I'm migrating from Snow Leopard Server to Mountain Lion Server. The article "OS X Server: Upgrade and migration" (http://support.apple.com/kb/HT5381) says:
    "Make sure that any DNS or DHCP servers on which your server depends remain running during the upgrade"
    This advice is reinforced by the details of the article "OS X Server: Steps to take before upgrading or migrating the Open Directory database" (http://support.apple.com/kb/HT5300).
    As the server I'm migrating from provides these services, it will need to be running during the migration process. This would seem to limit my options to doing the migration from a Time Machine backup (or making a separate clone of the server's drive and connecting it externally to the new box).
    My main concern is the seemingly inevitable clash that is going to occur on the network as the new server takes on the roles of the old one while the old one is still running.
    What are my options here?
    This is my second attempt: on my first try, I did the migration from the TM backup with the network down, and none of my local network users or their home directories were migrated, although the settings for the mount points were; there were no actual directories where they pointed!
    Clear directions on how to proceed would be VERY MUCH appreciated.
    Thank you.

    Moving from Snow Leopard to Mountain Lion means first installing the client (non-Server) version of Mountain Lion and then installing Server.app; this means that for at least part of the process you will not be running DNS, DHCP or Open Directory.
    If you are going to end up using the same DNS name and IP address after the change, then an approach you could follow would be as follows:
    Destroy any Open Directory replicas
    Archive your Open Directory master (to make a backup)
    Note down your DNS records in case they get messed up
    Export your users and groups via Workgroup Manager (you might not need this, but better safe than sorry); make sure you do not include the diradmin account
    Keep a full backup of the server (you should always have backups)
    Note down your DHCP server settings in case they get messed up
    Note down any other service settings
    Install Mountain Lion
    Install Server.app
    Install Workgroup Manager (extra free download)
    Run Server.app
    Make sure the settings for services are as much as possible the same as before
    If you're lucky, that may be all you need to do; otherwise...
    Restore the Open Directory archive; if you're lucky, that will be all you need to do; otherwise...
    Make a new Open Directory master
    Run Workgroup Manager
    Import the users and groups you previously exported
    You will then have to set passwords for each user, as these are not preserved via a Workgroup Manager export
    When I did this, I was also being forced to change all my IP addresses, so I had no choice but to use Workgroup Manager to export and import accounts.

  • Migration from F5 to Cisco ACE

    Hi All,
    I am preparing for a migration from F5 to ACE. Can someone help me with the technical approach I should take to make this migration successful?
    Rishi

    Hi,
    What I would do:
    Is it a straight copy/paste migration?
    Is it a layer 2 or layer 3 design?
    - list all IP addresses for the ACE (physical addresses for the VLAN interfaces + aliases (the VRRP-like addresses))
    - list all VIP addresses + services (e.g. http, dns, ftp...)
    - list all real servers that will be used, ordered per VIP (serverfarm)
    - list all probes per real server
    - list all specific exceptions (e.g. stickiness, cookie insertion, NAT...)
    - list all network parameters (e.g. static routes, default routes, ACLs for management...)
    I think this should already help you. The rest depends on the design and complexity of your setup; a hypothetical skeleton of how those lists map onto ACE configuration follows.
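    A minimal, invented example (all names and addresses are placeholders; check the syntax against your ACE software version):

      probe http HTTP_PROBE
        interval 10
        expect status 200 200

      rserver host WEB1
        ip address 10.1.1.11
        inservice

      serverfarm host WEB_FARM
        probe HTTP_PROBE
        rserver WEB1
          inservice

      class-map match-all VIP_HTTP
        2 match virtual-address 10.1.1.100 tcp eq www

      policy-map type loadbalance first-match LB_HTTP
        class class-default
          serverfarm WEB_FARM

      policy-map multi-match CLIENT_VIPS
        class VIP_HTTP
          loadbalance vip inservice
          loadbalance policy LB_HTTP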
    Hope this helps.
    Regards,
    Dario
