Migration downtime

Hi, we are about to migrate an SBS 2003 server to Windows Server 2012 Essentials, and I would like to know what downtime this is going to create for the users on the network.
Once the new server is on site, I will run the migration wizard, but will THIS PART have any effect on users working (i.e. Exchange, data, etc.), or does it just set up the new server and allow it to run alongside the existing DC? Basically, should this
wizard be run out of hours, or does it not matter?
I was hoping that was the case, so that I could manually switch over to Office 365 from the old Exchange server, manually move the data and set up the client PCs at a later date, but I wasn't sure.

Hi,
Windows Server 2012 Essentials no longer includes Exchange Server as a component product, so you'll need to work out a plan for migrating the mailboxes based on where you are migrating them to. The blog post linked below goes into more detail on the e-mail migration options.
For more detailed information, please refer to:
Migrating to Windows Server 2012 Essentials
http://blogs.technet.com/b/sbs/archive/2012/08/24/migrating-to-windows-server-2012-essentials.aspx
Migrate Windows Small Business Server 2003 to Windows Server 2012 Essentials
http://technet.microsoft.com/en-us/library/jj200112.aspx
Meanwhile, for the Exchange migration, I think you should ask in:
http://social.technet.microsoft.com/Forums/en-US/home?forum=exchangesvrdeploylegacy
Regards.
Vivian Wang

Similar Messages

  • Migrate to new Hardware ( RAC-2-RAC )

    Dear all
    We have a 4-node Oracle 10g R2 RAC and want to migrate to new machines; the database size is 2 terabytes. We took a backup, restored it on the new machines, and everything is OK. However, we plan to migrate in a week and I don't want to restore the whole database again. What concerns me is that I have to open the database at the destination with the RESETLOGS option.
    My question is: what is the best way to keep the migration downtime to a minimum? I want to restore only a level 1 incremental.
    Thank you
    Kamy

    Hi Kamy,
    I will skip the backup and restore-controlfile parts of your question; I think you know how to do those.
    If you copy your backup pieces to the new system you can use them to perform a restore/recovery (but do not open the database) in mount phase:
    <
    RUN {
      SET UNTIL TIME "TO_DATE('define_time','yyyy-mm-dd:hh24:mi:ss')";
      RESTORE DATABASE;
      RECOVER DATABASE;
    }
    >
    For the next recovery step, you can copy the newly created archivelogs (or the backup pieces containing them) to the new system, catalog them, and apply them with:
    <
    RUN {
      SET UNTIL TIME "TO_DATE('define_time','yyyy-mm-dd:hh24:mi:ss')";
      RECOVER DATABASE;
    }
    >
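    At cut-over time, the final step could look roughly like this (a sketch only; the path is just an example of wherever you copy the level 1 incremental pieces to):
    <
    CATALOG START WITH '/backup/incr_level1/' NOPROMPT;   # register the copied level 1 pieces
    RECOVER DATABASE;        # applies the level 1 incremental, then any cataloged archivelogs
    ALTER DATABASE OPEN RESETLOGS;
    >
    The OPEN RESETLOGS is what finally detaches the copy from the source, so do it only in the cut-over window, after the last incremental and archivelogs have been applied.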
    Regards,
    Tycho

  • What are the Cleansing steps we can perform outside the DMU tool?

    We need your advice on this.
    According to the DMU documentation, there are three possible reasons for a value to have an invalid binary representation in the database, with their respective fixes using the DMU tool:
    a) An application stores binary values.
    Fix: To cleanse the invalid binary representation issues for a column storing binary values, migrate the data type of the column to RAW or LONG RAW.
    b) An application stores character values in a pass-through configuration and the values are encoded in one or more character sets different from the database character set.
    Fix: To identify the actual character set of data in the column, select the column in the Cleansing Editor and repeatedly change the selected value in the Character Set drop-down list, until all values in the column are legible.
    c) An application stores values that have some character codes corrupted due to software defects or hardware failures
    Fix:To cleanse randomly corrupted values, edit and correct them in the Edit Data dialog box.
    Which of these fixes can we perform outside of the DMU tool for the above issues, and how?

    Data cleansing is potentially one of the most time-consuming steps in the migration process depending on the data volume and the extent of data exceptions found. The DMU is designed to allow most of the cleansing actions to be performed prior to the conversion downtime window without impact to the production environment. You can choose to have the cleansing actions committed to the database immediately (immediate cleansing) or saved and executed later as part of the conversion phase (scheduled cleansing). Many of the cleansing actions may not be easy to accomplish outside of the DMU or could require significant manual workload otherwise. In your case, I think you have several options:
    1) Upgrade to a DMU-supported database release first and work on Unicode migration separately from the upgrade. This way you can leverage the DMU cleansing features to address most of the data issues beforehand and only deal with any incremental data issues in the migration downtime.
    If you must do the upgrade and Unicode migration in the same downtime window:
    2) Prepare scripts for operations such as enlarging column sizes or migrating column data types, based on the latest scan iteration, to speed up the process (see the sketch after this list). Keep in mind that you may still need extra work, as incremental data changes since the last iteration could affect the cleansing requirements. For invalid binary representation issues, if they are caused by all data being stored in a character set different from the database character set, then set the assumed database character set instead of setting assumed character sets for individual columns.
    3) Use DMU and Streams setup to achieve near-zero downtime migration, see the page below for details:
    http://www.oracle.com/technetwork/database/globalization/dmu/learnmore/nzd-migration-524223.html
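    As an illustration of option 2, a minimal sketch of the kind of cleansing DDL/DML you might script ahead of the downtime window (the schema, table and column names are hypothetical and the lengths are examples only):
    <
    -- Enlarge a column whose post-conversion byte length would exceed the current limit
    ALTER TABLE app.customer_notes MODIFY (note_text VARCHAR2(4000 CHAR));

    -- Migrate a column that actually stores binary values to RAW
    ALTER TABLE app.document_keys ADD (doc_key_raw RAW(64));
    UPDATE app.document_keys SET doc_key_raw = UTL_RAW.CAST_TO_RAW(doc_key);
    -- After verifying the copy, drop the old column and rename doc_key_raw to doc_key.
    >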

  • Looking for HA experiences for SAP on IBM i

    Hi,
    We are very experienced in disasters; it seems impossible, but it is true: in the last three years we have suffered three major outages, with big downtimes (from 13 to 36 hours) in our main production system, our R/3, which is supposed to be a 24x7 system.
    We started our HA architecture several years ago, using MIMIX to do a logical replica of our production database. It worked, but it needed a lot of administration at the time. So we moved to a hardware replica, using DS8000 storage subsystems and moving SAP to an iASP; we also use the Rochester Copy Services toolkit to manage this landscape. At first we used MetroMirror, a synchronous protocol, because we still owned our machines and they were physically close. Recently we evolved our HA architecture, moving our systems to different sites of an outsourcing partner, and we were forced to change the replication protocol to the asynchronous GlobalMirror because of the distances.
    We know what a 'rare' hardware failure on our storage subsystem is: it started to write zeroes on a disk with practically no detection, and lots of SAP tables were damaged. We have also suffered a human error that deleted an online disk and killed the whole iASP (the first real tape recovery of my life). And finally, we know what a power failure in our partner's technical room is. Imagine all these failures with a database that is 3 TB in size. Do you know how much time is needed to restore from tape and run APYJRNCHGX or rebuild access paths? I know...
    As you can imagine, we have invested a lot of money trying to protect our data, and it has worked, because we have never lost a single bit of information, but our recovery times are always far from the ones needed.
    I'm looking for experiences of how other SAP on IBM i customers are managing HA in their critical systems, and if possible to compare real experiences of similar outages. What are we doing wrong? We cannot be the only ones...
    Regards,
    Joan B. Altadill

    Hi Joan,
    We run MIMIX replication for our ERP system/partition and 4 other partitions with BW, Portal, PI, SRM, and Solution Manager in them. There is some administration, but it has been worth it for us. We have duplicate 570 hardware in an offsite DC 35 miles away for failover. We also do our backups on the replicated systems. We have been running MIMIX since going live with SAP in 1998.
    Several years ago we used MIMIX replication to migrate to new servers during a lease replacement, which cut our migration downtime from 8 hours for backup/restore to about 1 hour, during which we shut down the system on the old servers, started up the ERP system on the new servers and checked all the interface connections.
    But the real payoff came in March this year, when our production server went down hard during a hot maintenance procedure. We were able to do a MIMIX switch to our DR server in under 1 hour; the business ran on the DR server for two weeks while we reverse-replicated, and then we switched back.
    We have sub-second replication, so we did not lose any data and there were no incomplete transactions on the DR side after the switch. MIMIX paid for itself, including the administration, in that one incident.
    Hope this helps,
    Margie Teppo
    Perrigo Co.

  • Active/passive policies

    My organization wants me to build a two-node Active/Passive cluster on Windows Server 2012, running SQL Server 2012 with multiple instances.
    node1 = active, node2 = passive
    I have created the cluster and installed SQL Server, but the problem is automatic failover.
    1) Do I need to select both nodes in the advanced policies under resources, and also as preferred owners?
    2) What are the pre- and post-migration steps for migrating standalone instance databases (2008) to the SQL cluster (2012)?
    Please help me.

    SQL Server needs to be installed on both nodes as a clustered instance: on the first node you install a new SQL Server failover cluster instance, and on the second node you run "Add node" to join it to that SQL cluster instance. Hopefully this has already been done.
    In Failover Cluster Manager, in the SQL Server cluster resource properties, both nodes should be ticked as "possible owners". Selecting "preferred owners" is optional; it simply indicates which node the SQL instance should run on when both nodes are online.
    There are various ways to move databases from one instance (SQL 2008) to a higher version (SQL 2012): back up the databases on SQL 2008, copy the backup files to the SQL 2012 server and restore them there; or detach the databases on SQL 2008, copy the detached .mdf, .ndf and .ldf files to SQL 2012 and attach them there; or, to reduce the amount of downtime during the migration, set up log shipping between SQL 2008 and SQL 2012 and, during cutover, break log shipping and restore the latest log backups with recovery (see the sketch after the links below).
    Upgrade a Database Using Detach and Attach - http://msdn.microsoft.com/en-us/library/ms189625.aspx
    Copy Databases with Backup and Restore - http://msdn.microsoft.com/en-IN/library/ms190436(v=sql.110).aspx
    Minimizing DB Migration Downtime Using Log Shipping - http://blogs.msdn.com/b/sqlgardner/archive/2011/09/16/minimizing-db-migration-downtime-using-log-shipping.aspx
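    A minimal sketch of that log-shipping cut-over on the SQL 2012 side (the database and file names are hypothetical):
    <
    -- On SQL 2008: take a final ("tail") log backup and copy it to the SQL 2012 server.
    -- On SQL 2012: restore that last log backup WITH RECOVERY to bring the database online;
    -- the database is upgraded to the 2012 format as it comes online.
    RESTORE LOG SalesDB
        FROM DISK = N'D:\LogShip\SalesDB_tail.trn'
        WITH RECOVERY;
    >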
    Keerthi Deep | Blog SQLServerF1 | Facebook

  • Mirror between FC and iscsi

    Is that type of mirror possible? We are currently running a 2-node cluster with NW 6.5 SP8 NSS volumes connected to an AX100 Fibre Channel SAN, and we're planning to migrate to OES 2 Linux VMs connected to an EQ PS5000 iSCSI SAN. If I can mirror the volumes first, our migration downtime should be reduced quite a bit.
    If this scenario is possible, my next question is: will users see a performance impact from having a mirror over iSCSI between the time we mirror and the time we migrate? I know iSCSI performance isn't too good on NetWare. I imagine the initial mirror setup would take a while if iSCSI is slow, too. Thanks for any info.
    Tim

    Update: I set up a test cluster volume on the AX100 fiber san and created a mirror on the EQ iscsi san and it worked like a champ. The documentation was a bit confusing, so there was a hiccup when I tried to create a partition on the EQ first, but I blew away the partition and then was able to find the free space on the device.
    I tested upload speeds from my desktop to both the new mirrored volume and a non-mirrored san volume on the same server and got almost exactly the same speed on a 547 MB upload.

  • Migrate Data from one SAN to other in AlwaysON configuration with no/minimal downtime

    Team,
    There is a requirement to move the current DBs from the existing SAN to new IBM SAN disks. I have outlined the steps below; please add your inputs/challenges/risks.
    This is an AlwaysOn setup and we have 3 nodes:
    Server A, Server B and Server C
    A and B are synchronous.
    C hosts the async replica.
    Note: the SQL binaries are installed on the E: drive, which is a SAN drive. Is this going to be impacted by my steps below?
    1. Present the new SAN disks of the same size on all the nodes in the availability cluster.
    2. Break the secondary replica. On the secondary replica, migrate the data by physically moving the DB files from the old SAN to the new SAN.
    3. Rename the new SAN drives back to the original names used earlier. Bring the services up and rejoin the availability group. Check whether the DBs are synchronized with the primary.
    4. Now fail over from the primary to the secondary replica; this will be a short outage.
    5. Follow the same steps 1-3 on the (former) primary.
    6. Check the synchronization of the DBs and perform all the sanity checks.
    Thanks!
    Sharath
    Best Regards, Sharath

    Hi Sharath,
    This can easily be achieved by a storage-level migration done by the storage vendor. In your case IBM should be able to do it, since they have sophisticated tools (one of them is storage virtualization for data migration and consolidation) for this kind of SAN migration, and they don't need downtime during the storage sync process. If the SAN vendor does the data migration for you, your downtime will only be the cut-over time.
    Otherwise,
    1. Take the read-only replica out of the AlwaysOn availability group.
    2. Since SQL is also installed on a SAN drive, I strongly suggest you rebuild the SQL Server as new (no copy of data from SAN to SAN).
            Note: drive allocation and drive letters should be identical.
    3. Once you have rebuilt the SQL Server, one challenge is migrating the SQL logins (you must migrate logins keeping the same SIDs, to avoid permission issues; see the sketch after these steps).
    4. Then add it to the AlwaysOn availability group and wait for the databases to sync.
    5. Then manually fail the availability group's read/write (primary) replica over to the new node.
    6. Follow steps 1, 2, 3 and 4 in sequence.
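    A minimal sketch for the login part of step 3 (the login name and SID value are placeholders only; Microsoft's sp_help_revlogin support script can generate such statements for all logins):
    <
    -- On the original instance: capture the login's SID
    SELECT name, sid FROM sys.server_principals WHERE name = N'app_login';

    -- On the rebuilt node: recreate the login with the same SID so the database users stay mapped
    CREATE LOGIN app_login
        WITH PASSWORD = N'UseAStrongPasswordHere1!',
             SID = 0x912E246BB1E7654DB818050B4B7D68CC;   -- placeholder; use the SID returned above
    >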
    Hope this will help you.

  • Zero Downtime Migration from Oracle to Sybase

    Is there any way or tool to migrate from Oracle to Sybase with zero downtime?
    Thanks

    Better answered on a Sybase forum I suppose...

  • Migrating Hyper-V 2008 R2 HA Clustered to Hyper-V 2012R HA Clustered with minimal downtime to new hardware and storage

    Folks:
    Alright, let's hear it.
    I am tasked with migrating an existing Hyper-V HA Clustered environment from v2008R2 to new server hardware and storage running v2012R2.
    Web research is not panning out; it seems we are looking at a lot of downtime. I am a VMware guy, and at this point I would likely do a V2V migration with minimal downtime.
    What are my options in the Hyper-V world?  Help a brother out.

    Merging does require some extra disk space, but not much. 
    In most cases the data in the differencing disk is changed data, not additional files.
    The absolute worst case is that the amount of disk space necessary would be the total of the root plus the snapshot.
    Quite honestly, I have seen merges succeed with folks being down to 10 GB free.
    But, low disk free space will cause the merge to take longer. So you always want to free up storage space to speed up the process.
    Merge is designed not to lose data, and that is really what takes the time in the background: ensuring that a partial merge will still allow a machine to run, and that a full merge has everything.
    Folks have problems when their free space hits that critical level of 10GB, and if they have some type of disk failure during the process.
    It is always best to let the merge process happen and do its work.  You can't push it, and you cannot stop it once it starts (you can only cause it to pause).  That said, you can break it by trying to second guess or manipulate it.
    Brian Ehlert
    http://ITProctology.blogspot.com
    Learn. Apply. Repeat.

  • Zero-downtime Database Migration tool ?

    We are exploring/evaluating tools provided by Oracle (or its partners) that ensure zero-downtime database migration. The migration should include:
    - Migration of data from one version of the application to another version, with or without changes to the database schema.
    - Migration of data from staging to production, where staging was used for beta testing to host customers who created live data which needs to be migrated to production. (Oracle to Oracle, SQL Server to Oracle, MySQL to Oracle, etc.)
    - If a data type changes (say int to varchar) for a particular column of a table in the staging database, the change should be migrated to the production database as well.
    - If a column is added/deleted in a table of the staging database, the same table alteration should be migrated to the production database.
    - Records in the production database should not be deleted/truncated during data/schema migration.
    - Maintain zero downtime.
    By Zero-Downtime we mean: both the source and the target should be up and accept updates in real time during migration process. This should again be synced across and hence help to eliminate downtime during migration between various vendor databases.
    We are not looking for any ETL product, but out-of-the-box products like GoldenGate and Celona that ensure Zero-downtime database migration.

    Hi,
    I don't think there is any easy answer. It looks like a huge project, so it should be done part by part.
    If I understand correctly:
    1) you have created a staging database with all the changes
    2) production is in the old structure
    3) now you want to merge these two databases into one, or apply all changes from staging to prod?
    I see one solution there: clone your staging database and create a new prod from it. When it's done, switch connections to your new prod database.
    Regards,
    Tom

  • Public Folder Migration - 2010 to 2013 - Downtime incurred at "Lockdown" Stage

    Looking for some information from those who have completed a successful Public Folder migration from 2010 to 2013.
    1.  Size of 2010 Public Folder Store
    2.  Downtime incurred at the "lock down 2010 public folders" stage, once all mailboxes have been moved to 2013 and you are ready to cut over to 2013

    I would like to refer you to this well-explained TechNet library article; check whether you have followed all the required prerequisites for a successful public folder migration from Exchange 2010 to 2013: http://technet.microsoft.com/en-us/library/jj150486%28v=exchg.150%29.aspx
    Also, please check this other article: http://www.msexchange.org/articles-tutorials/exchange-server-2013/migration-deployment/migrating-public-folders-exchange-2013-part1.html
    Apart from the above resources, you may also consider this application (http://www.exchangemigrationtool.com/), which can be an alternative approach for migrating public folder databases between two Exchange servers.

  • Migration to Simple Finance downtime

    Hi-
    assuming that the preparatory steps for a migration to Simple Finance have been completed successfully and assuming a migration from Classic GL, how long approximately will the business downtime of the productive system be?
    Best regards
    Lutz

    Hi Lutz,
    You can expect a minimum of 24 hours of production system downtime.
    For example, for an actual migration we did, we experienced the following figures:
    Data transfer (93 million rows): 50 minutes
    Data migration: 11 hours elapsed time
    Overall, the elapsed time breakdown should roughly work out to be:
    Technical install: 10%
    GL -> new GL: 30%
    sfin data migration: 50%
    activate NAA: 10%
    Hope it helps,
    Arnaud

  • Migrating SQL Server Published databases with minimal downtime

    All,
    I have a requirement in which the databases on Server A have to be migrated to Server B. Server A has two databases in merge replication and one in transactional replication. Server A acts as the distributor itself.
    My question is: how can I migrate all these production databases, which are also publishers, to the new Server B with minimal downtime and without breaking replication?
    I read that if Server B is renamed as Server A, then replication will continue without any errors. Please suggest how to migrate the publisher to Server B with minimal downtime.
    Also, if the replication topology is Server A -> Server C -> Server D and I migrate Server A to Server B without dropping replication, will it affect the replication setup from Server C to Server D? Will it cause any replication errors?
    Thanks in advance!

    Just to throw some thoughts out there.
    It sounds like the "blocker" in the overall migration strategy is the distribution database and the fact that it is local to a publisher.
    Once you have migrated the distributor, the remainder of the problem is more straightforward, using solutions such as database mirroring to fail over to a new server with minimal downtime, for example.
    If I were performing this migration project, I would agree with the business that there will be a short replication service interruption whilst we migrate to a new distribution database/server. I would also apply Hillary's recommendation to use the opportunity to split out the distribution database to a dedicated server.
    Using this approach, you could build and configure the new distribution database before the migration "cut over" phase. The cut-over would require that you drop the replication topology and re-create it using the new distributor DB/server. You can script ALL the steps required for this ahead of time, resulting in the actual cut-over to the new distributor being completed in a couple of minutes at most (the databases involved are already in sync).
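    As a rough illustration of that scripting (the publication, server and password values are hypothetical, and merge publications use the sp_dropmergesubscription/sp_dropmergepublication variants instead):
    <
    -- On the old publisher/distributor: tear down the existing transactional topology
    EXEC sp_dropsubscription @publication = N'SalesPub', @article = N'all', @subscriber = N'all';
    EXEC sp_droppublication  @publication = N'SalesPub';
    EXEC sp_dropdistributor  @no_checks = 0;

    -- On the new dedicated distributor: configure distribution ahead of the cut-over
    EXEC sp_adddistributor    @distributor = N'NEWDIST01', @password = N'UseAStrongPasswordHere1!';
    EXEC sp_adddistributiondb @database = N'distribution';
    -- sp_adddistpublisher and the scripted publication/subscription re-creation then follow
    >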
    Once done, you can proceed ahead with the remainder of the migration.
    Overall this is not a trivial project you have on your hands and in my opinion you absolutely must perform the process in a test environment before attempting to do so in Production.
    Good luck.
    John Sansom | SQL Server MCM
    Blog |
    Twitter | LinkedIn |
    SQL Consulting

  • Migrate entire database to ASM on another server via RMAN minimal downtime

    Hi
    I was looking for a procedure to migrate non ASM production databases to ASM via RMAN on a separate server with minimal downtime (backup as copy/switch database to copy technique). We have TDPO for tape backup and I normally rman clone test databases between servers but this involves too much downtime for production. The procedure in the ASM Admin Guide (Chapter 8) assumes the databases are on the same server.
    Thanks
    Tom Cullen

    Dear Tom, why do you think you'll have downtime in the production database? The database will keep running while the clone is being processed. Check the following link:
    http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmdupdb.htm#BGBDCDGJ
    RMAN DUPLICATE DATABASE From ASM to ASM: Example
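    A minimal sketch of such a duplication into ASM (assuming 11g active database duplication is available; the connection strings, source path and disk group name are only examples):
    <
    # Start an auxiliary instance NOMOUNT on the new server, then connect with:
    # rman TARGET sys@prod AUXILIARY sys@newhost
    DUPLICATE TARGET DATABASE TO prod
      FROM ACTIVE DATABASE
      SPFILE
        PARAMETER_VALUE_CONVERT '/u01/oradata/', '+DATA/'
        SET db_create_file_dest '+DATA';
    >
    Production stays open while the duplicate is built; the only downtime is the final switch of users and connection strings to the new server.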
    Kamran Agayev A.
    Oracle ACE
    My Oracle Video Tutorials - http://kamranagayev.wordpress.com/oracle-video-tutorials/

  • Migrate large database to MAA, minimum downtime

    We have one standalone production DB on a standalone Linux server; the plan is to move to MAA with minimum downtime.
    Are there any docs on how to migrate?
    Do I need to create a physical or logical standby?
    We are not using Grid; is that a problem?
    We are talking about a 2 TB database here.
    v11.2, Linux 5
    The primary DB is on a virtual server, so we are also moving platforms!

    Hi,
    I can only advise and help you if you already have an idea of how to perform this work. These questions are not specific issues or doubts; they are the work itself. Try Googling it, and if you have a question about a specific doubt, we can help you.
    http://www.oracle.com/technetwork/database/features/availability/maa-090890.html
