Best practice to migrate file servers

I have three file servers in two different domains:
FileS01 and FileS02 in domainA (both Windows Server 2003 R2)
FileS03 in domainB (Windows Server 2003 R2)
I want to migrate to a new corporate server running Windows Server 2012; it will also be named FileS01 and will be in domainA.
How would you advise me to do this?
Thanks

I recommend you consult this guide:
http://technet.microsoft.com/en-us/library/jj863566.aspx
Yes, it is a ton of information, but it also covers all the angles: what to name the servers, when to rename the servers, using DFSN to make things smoother, the migration tools, migrating local users, etc. Given the sheer amount of information, ignore the areas that don't apply to your implementation (BranchCache, for example). HOWEVER, I do recommend you consider their suggestion to move this into a DFS Namespace. That way, the next time you need to move to a new server, the users will barely feel it.
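If you go that route, here is a minimal sketch of creating the namespace on the new 2012 server from an elevated PowerShell prompt (all names here are hypothetical, and the DFS Namespaces role service plus its management tools must be installed first):

    # Create a domain-based namespace rooted on the new server
    New-DfsnRoot -Path \\domainA.local\Files -TargetPath \\FileS01\Files -Type DomainV2
    # Publish an existing share as a folder under the namespace
    New-DfsnFolder -Path \\domainA.local\Files\Public -TargetPath \\FileS01\Public

Users then map \\domainA.local\Files rather than the server name, so a future server swap only means repointing the folder targets.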
Best of luck. 

Similar Messages

  • How to migrate File server to NAS device?

    All right, so here's the scenario.
    An external forest-level trust is set up between two forests, A and B, as we recently acquired company B.
    Users from forest B have already been migrated to domain A.
    Now I need to migrate a file server from domain B to my domain A, but the target we need to move the file shares to is a NAS device, so we cannot do a simple server migration from domain B to domain A.
    We need to copy the data + ACLs + share permissions from the file server in domain B to a NAS device in domain A. Of course, the ACLs on the file server in domain B are set to source-domain user accounts; for example, they currently show as "B\user1".
    So what's the best way to perform this in order to maintain the permissions and data?
    Another question: how do we copy the share permissions from the file server to the NAS? I can robocopy the data with its security permissions, but how do I copy all the share permissions to the destination NAS?

    Hi,
    You need to retain the sIDHistory attribute when migrating users across domains, to ensure users in the target domain can still access the files during the migration.
    For more detailed information, please refer to the threads below:
    ACL migration
    http://social.technet.microsoft.com/Forums/exchange/en-US/3c75a116-6bd3-407f-a76c-0d825d4f525a/acl-migration
    File server migration using FSMT 1.2 in NAS environment
    http://social.technet.microsoft.com/Forums/en-US/fb4bf505-2d95-4409-9777-8fd4a1c0c471/file-server-migration-using-fsmt-12-in-nas-environment?forum=winserverfiles
    In addition, you can back up the registry key SYSTEM\CurrentControlSet\Services\LanmanServer\Shares to copy the share permissions.
    Saving and restoring existing Windows shares
    http://support.microsoft.com/kb/125996
    ADMT and network mapped drives
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/9752e31d-2f35-4d7d-ae4d-2f9fe9400bfe/admt-and-network-mapped-drives?forum=winserverDS
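    One rough sketch, with hypothetical server and share names: pre-seed the data and NTFS ACLs with Robocopy, and export the Shares key as a record of the share permissions (a non-Windows NAS cannot import the key, so there it mainly serves as documentation for re-creating the share ACLs by hand):
        # Copy data plus NTFS security, owner, and auditing info
        robocopy \\FSB01\Data \\NAS01\Data /E /COPYALL /R:2 /W:5 /LOG:C:\Logs\data.log
        # Export the share definitions (including share permissions) from the source server
        reg export "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Shares" C:\Backup\shares.reg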
    Regards,
    Mandy

  • Migrate File Server data from one volume to another

    I am looking for the best way to handle this situation. We have a VHD with a 4 KB cluster size that is getting close to the 16 TB mark, so it cannot be expanded past that because of the cluster size. In the past, whenever I needed to pull this off, I would preload as much data as possible with Robocopy, then during a maintenance window take the share offline, do a refresh with Robocopy, and then flip everything I needed to flip (drive letter, share setup, etc.).
    I have the space to do a complete copy like this, so that is not an issue. But there are other things to keep in mind: the data is deduplicated, so we are talking 20 TB total, and the file server is backed up at the file level with DPM, so DPM will see this as a new volume, which will be an issue.
    At this point I have time to plan and am just looking for ideas.

    Hi,
    If you want to copy files/folders from one volume to another, you can use the File Server Migration Toolkit (FSMT) or Robocopy.
    Either tool can move all of the files from the shares on your original volume to the new volume.
    FSMT and Robocopy copy only NTFS permissions, not share permissions. So if the drive letter will not change, you can back up and restore the share permissions with the steps here:
    Saving and restoring existing Windows shares
    http://support.microsoft.com/kb/125996
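    A sketch of that preload-and-refresh pattern (drive letters and paths are examples):
        # Initial preload while the share stays online
        robocopy D:\Shares E:\Shares /E /COPYALL /R:1 /W:1 /LOG:C:\Logs\preload.log
        # Final pass in the maintenance window; /MIR also deletes destination files removed at the source
        robocopy D:\Shares E:\Shares /MIR /COPYALL /R:1 /W:1 /LOG:C:\Logs\refresh.log
        # Keep a copy of the share definitions in case they need to be restored after the swap
        reg export "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Shares" C:\Backup\shares.reg
    After the drive-letter swap, importing the .reg file and restarting the Server service brings the shares back.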
    Regards.

  • BPC 7M SP6 - best practice for multi server setup

    Experts,
    We are considering purchasing new hardware for our BPC 7M implementation. My question is: what is the recommended or best-practice setup for SQL Server and Analysis Services? Should they be on the same server, or each on a dedicated server?
    The hardware we're looking at would have 4 dual-core processors and 32 GB RAM on an x64 base. Would this adequately support both services?
    Our primary application cube is just under 2 GB and the appset database is about 12 GB. We have over 1400 users and a concurrency count of 250 users. We'll have 5 app/web servers to handle this concurrency.
    Please let me know if I am missing information to be able to answer this question.
    Thank you,
    Hitesh

    I don't think there's really a preference on that point. As long as it's 64-bit, the servers scale well (CPU, RAM), so SQL Server and SSAS can be on the same server. But it is important to look beyond CPU and RAM and make sure there are no other bottlenecks, such as storage (best practice is to split the database files across several disks and, of course, to keep the logs on disks used only for logs). The memory allocation for SQL Server and OLAP should also be adjusted so that each has enough memory at all times.
    Another point to consider is high availability. Clustering is quite common on that tier, and you could consider having the active node for SQL Server on one server and the active node for OLAP (SSAS) on the other. It costs more in SQL licensing, but you get to fully utilize both servers, at the cost of degraded performance in the event of a failover.
    Bruno
    Edited by: Bruno Ranchy on Jul 3, 2010 9:13 AM

  • Best practice for licence server for RDS Farm & Certificate errors

    Hello,
    I am in the process of creating an RDS farm using Server 2008 R2.  I have three Session Hosts and a Connection Broker.
    I have a set of 10 user CALs available and also another 20 on our current RDS server which will need migrating once we go live with the farm.
    I understand the user CALs need to be installed on another Server 2008 R2 machine, and I am wondering what best practice is. We are running an entirely virtual environment, and it would be simple enough to create another server and install the CALs on it.
    The only issue with that is that I would need to create a replica of this new machine for DR purposes, which would take up valuable space that may not be necessary.
    We are planning on creating replicas of one of the Session Hosts and the broker for DR, so I am guessing I would need to install some CALs on the Session Host that is going to be replicated.
    There are a few options and I am just wondering what is the best way to go about things.
    Also, as an aside, I am getting an annoying certificate error each time I log a test user onto the RDS farm; I think this is because I am using the DNS alias of the RDS farm to log on. Is there an easy way to get around this, other than ticking 'Do not show this message again'? I have been doing some research, and the world of certificates is very confusing!
    Thanks,
    Caroline
    C.Rafferty

    Hi Caroline,
    Firstly, for the licensing issue: you can install RD Licensing on any VM, including the new VM you create as a replica for the RDSH server. Just be sure to install the RD Licensing role service, activate the license server, and then install the RDS CALs on it. To be safe, avoid installing RD Licensing on the same server as the RD Connection Broker if possible; keep it separate. You can also install RD Licensing on a domain controller, or make a replica of that and install RD Licensing there.
    Best practices for setting up Remote Desktop Licensing (Terminal Server Licensing) across Active Directory Domains/Forests or Workgroup
    http://support.microsoft.com/kb/2473823
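    If you do stand up a dedicated license server, a minimal sketch on 2008 R2 from an elevated PowerShell prompt (the feature name is from memory, so confirm it first):
        Import-Module ServerManager
        Get-WindowsFeature *RDS*           # confirm the exact feature name on your build
        Add-WindowsFeature RDS-Licensing   # then activate the server and install the CALs in RD Licensing Manager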
    What is the exact certificate error you are receiving?
    If you're going to allow users to connect externally and they will not be part of your domain, you will need to deploy certificates from a public CA. In the meantime, the blog below gives a good overview of the certificate requirements.
    Certificate Requirements for Windows 2008 R2 and Windows 2012 Remote Desktop Services
    http://blogs.technet.com/b/askperf/archive/2014/01/24/certificate-requirements-for-windows-2008-r2-and-windows-2012-remote-desktop-services.aspx
    Hope it helps!
    Thanks.
    Dharmesh Solanki

  • Methods to Migrate File Server from MacOS to SMB

    Hey everyone,
    First time submitting a question, so please excuse (and advise) if I've done this incorrectly.
    I currently have an Apple file server (~600 GB) which I'm in the process of migrating to a 2008 R2 server. I wanted to find out if anyone else has done this, and whether they have any tips on the best methods. I'm trying to find out if there is a program that can truncate file names to make them NTFS-compatible, as well as remove the permissions on the files so they can be transferred successfully. At this point I'm manually moving files folder by folder and troubleshooting problems as they arise, but there has to be an easier way to do this.
    Any suggestions would be greatly appreciated.  Thanks everyone!
    Best,
    Dan

    I never tried it with a Mac, but I have copied files back and forth between Windows and Linux, and I believe the same program should also work on Mac OS.
    Check out WinSCP.
    Alternatively, try installing the NFS service on your Windows server box and check whether you can copy from the Mac to Windows.
    http://technet.microsoft.com/en-us/library/cc753302(v=ws.10).aspx
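    If you try the NFS route, a quick sketch for installing Services for NFS on 2008 R2 from an elevated PowerShell prompt (the exact feature name varies between versions, so it is listed first; FS-NFS-Services is an assumption):
        Import-Module ServerManager
        Get-WindowsFeature *NFS*            # list the NFS role services available on your build
        Add-WindowsFeature FS-NFS-Services  # assumed name; confirm it with the line above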
    Every second counts..make use of it. Disclaimer: This posting is provided AS IS with no warranties or guarantees and confers no rights.

  • Best Practices for File Organization/Project Explorer

    So we are finally getting SCC at my organization to manage our LabVIEW development, and that is good! 
    Now, we are starting in on discussions about how we should organize our files on disk and how we should use the Project Explorer. When I started here about 3 years ago, I wasn't very familiar with the project explorer, so I read the article at http://zone.ni.com/devzone/cda/tut/p/id/7197. Two of the main things I took away from that article are:
    1. Organize Files in a logical manner on disk. Whatever that is, it is not a flat file structure.
    2. The top level VI should be separate from other source code. Preferably, it should reside in the application folder.
    Push Back Against These Recommendations
    Before I was hired, most, if not all, LabVIEW development was done using a flat file structure, and the top-level VI lived with the rest of the source code. Since we didn't have proper SCC, each individual organized files as he saw fit. So I started using the Project Explorer (even its use is not totally accepted right now) and began to follow recommendations 1 and 2 above. I haven't always followed #1 very strictly, but I have been working towards it, and I have always followed #2 religiously.
    Now that we are starting these discussions on how we should organize files on disk, I'm starting to get some pushback on following these two recommendations.
    The arguments I get in favor of using a flat file structure are that you always know where every file is, including the top-level VI. It is also argued that it takes a lot of effort to organize and search for VIs when they all reside in different folders. I think the fear is that by getting "clever" and organizing our files this way, we'll make things complicated and somehow shoot ourselves in the foot.
    The argument I get against separating the top level VI from the rest of the source code is that it:
    (a) It won't be clear where it is (as if it were buried within hundreds of VIs). However, it is argued, you can just put a "!" in front of the file name and then it is always at the top of the flat file structure.
    (b) As an extension of argument (a), things either look or seem messy when VIs (including the top-level VI) don't live in a sub-folder and are just hanging out next to the Project Explorer file.
    (c) I think there may be some fear of breaking the VI by moving it and altering the dependencies for the VI. 
    Convincing Others its Good to Follow These Recommendations
    So, if I want to follow NI's recommendations, I need to come up with reasons we should follow them. I should also state that I care about following these recommendations because it's what NI recommends. They've been around the block a few times, and I'm sure there are good reasons why these are best practices. However, I don't think I've made a very compelling case for why these recommendations should be followed.
    So I'll tell you all what I think good reasons are for these recommendations, and perhaps I can get some feedback or additional support. If I'm crazy for wanting to follow these recommendations, maybe someone can point out why.
    (a) Arguments for Following Both
    I. I passed the CLAD a couple of weeks ago, and I have started studying for the CLD. Part of the CLD is following both of these recommendations (see page 6 of http://ftp.ni.com/evaluation/certification/cld/cld_exam_prep_guide_english.pdf). While this isn't a reason in and of itself, it suggests that if it is important for certification, it is important in practice!
    II. If we hire new developers who are familiar with LabVIEW, they will most likely be familiar with these recommendations, especially if they are certified. That will lead to increased productivity from day one because they won't have to learn our special way of doing things.
    (b) Arguments for Organized File Structure
    I. Unused VIs are easier to identify and remove. Right now we never remove VIs because we don't know if they are used or not. This leads to a lot of VI bloat.
    II. In a flat file structure, it is hard to know what a specific VI's function is just by looking at its name.
    (c) Arguments for Separating Top Level VI from Source Code
    I. The application folder is an intuitive place for the top-level VI. As long as the top-level VI is the only VI in the application folder, there is no mistaking which VI it is, especially once you open it. This makes it easy for new developers to find the top-level VI. I'd argue it isn't very intuitive for new developers to know that the VI in the source-code folder prefaced with a "!" is the top-level VI.
    Summary
    So that is what I think so far. Is there anything else I am missing to support following those two recommendations or am I just being inflexible?
    Thanks!

    zenthoef,
    As a CLA, I have struggled with file structure over the years.  Here are my recommendations:
    1.  Put the top level VI and the project in the top-level folder.  This makes it very clear where to begin.
    2.  Put the remaining user interface VIs in a separate folder.  Again, it makes it very clear what the functionality of these VIs are.
    3.  If you are using objects, put each object in a separate folder. Place the family of objects in one folder, with each object in a subfolder.
    4.  Keep the remaining VIs in a single folder. This can contain a small number of subfolders if your project is large, but too many folders make it hard to figure out where your VIs are. For example, you might have a DAQ subfolder, an Analysis subfolder, and a Report subfolder. But if you had a Test1 folder and a Test2 folder, and a VI that was used by both tests, where would it go? Keep it simple.
    5.  You implied that it is hard to figure out what a VI does by its name. That implies that 1) you need better names, and 2) your VIs are too complicated. A VI should do a single function which can be adequately described by its name. That VI might be something like Analyze Data.vi, which would contain a bunch more subVIs (like Get 1st Harmonics.vi), but each VI would contain a single function. You wouldn't save the data to a report in Analyze Data.vi, for example.
    The most compelling reason for following these suggestions is that it is easier to figure out what the code is doing after you haven't looked at it for a while. Once you have an application that is working and bug-free, you shouldn't have to touch the code until you want to add features. If that is even six months later, you will probably have forgotten how the code works. As a consultant, I have had to update other people's code, and just figuring out where to start can be a challenge.
    Tom Brass
    Certified LabVIEW Architect
    Saint Bernard Engineering, Inc.
    www.saintbernardengineering.com

  • Best practices for file I/O within producer/consumer loops

    I'm looking to add file recording and playback functionality to a pre-existing data collection program. The original program is based on a Moore-style state machine, to which I have added four additional states: Record Start, Record Stop, Playback Start, and Playback Stop.
    What I have done, and what has since been identified as poor programming practice, was to "initialize" (either create or load) the appropriate file within the state machine loop during the "Start" command (for record or playback), and then provide the file reference as an indicator, which is linked to for the appropriate read or write operation (whether I'm playing back or recording). The actual I/O occurs within the consumer loop (screenshots attached).
    This is my first LabVIEW project outside of tutorials and other small examples, so any advice and constructive criticisms are welcome, specifically with regard to file I/O and refnum routing (it gets a little hairy in the consumer loop)!
    I'm running LabVIEW 8.6 on Vista Business.
    Attachments:
    Playback Start.JPG ‏71 KB
    Consumer_Loop.JPG ‏122 KB
    File Playback subvi.JPG ‏14 KB

    jamoore84 wrote:
    Ben,
    Thanks for the suggestion. I think it's a little outside my ken at the moment, but I'll look it over. Despite any grievous coding transgressions, I have experienced some limited success with the current setup. While the use of Action Engines/functional globals may constitute best practice, I might revise my post to read "acceptable and/or easily absorbable practices" instead.
    Are there any other opinions on this?  Let me start by listing a problem and posing a question:
    Problem: I am able to play back a file only once; subsequent attempts at file playback do not work.
    Thanks in advance,
    jimmy 
    Hi Jimmy,
    I don't give up easily.
    Let me try to explain the issue of race conditions with a contrived example, the "command by mailbox" case.
    Imagine you had a job where you received your orders via an old-fashioned mailbox. You never really saw your supervisor, but relied on getting orders via the mailbox. Now imagine the mailbox could only hold one message, and any time a new message was inserted, the old one would fall into the trash.
    So you come to work each day, check your mailbox, do what was ordered, and everyone is happy.
    The next day you come in and, without your knowing, you are assigned to do the work of two bosses. As long as you check your mailbox more often than the two of them assign work, everything is fine... until you take a day of vacation! You come back the day after vacation, check your mailbox, and work away until you catch hell for ignoring orders!?! Well, it turns out the order from boss 1 was replaced by the order from boss 2 while you were on vacation. Oh bother!
    How can we fix it?
    1) Expand the mailbox so it holds more than one order. You just process them in the order they are received.
    2) Change the mailbox to not accept a new order until you have removed the old one.
    Now back to reality!
    Local variables act like the funky mailbox: the last message inserted overrides the previous one.
    Multiple variable writers are like multiple bosses.
    Queues operate like the expanded mailbox, letting you handle each message in order.
    Action Engines operate like the "mutexed" mailbox.
    Why don't I want to encourage you to "just patch up" what you have?
    All of the less-than-ideal solutions either over-sample (checking the mailbox twice as often as the bosses assign work wastes CPU, and is an exercise in futility if you are coding in a non-real-time environment like Windows) or use a mutex to control access to the shared resource (in this case, local variables).
    LV offers mutexes through semaphores (found on the synchronization palette), but...
    WHY WORK SO HARD?
    In my AE nugget I explain that the execution of an AE is automatically protected by LV. So in the long run it will actually be easier to learn how to use the AE programming construct than to learn how to solve the problem without it.
    So GO FOR IT! Take the lazy route and learn how to use the AE construct. Use the synchronization palette for queues.
    Just trying to help,
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction

  • Best practice migration

    Hi
    I am searching for a link that explains the best practice for migrating SCCM 2007 to SCCM 2012 R2. (For example: is it necessary to configure a discovery method in SCCM 2012 before starting the migration? And after a computer is migrated, does the new client deploy automatically or not?)
    Thanks

    Hi,
    There is a CM 2012 migration guide below that collects a lot of articles and blog posts to help you with the migration process.
    http://anoopcnair.com/2012/07/06/sccm-configmgr-2007-to-2012-migration-reference-guide/
    Note: Microsoft provides third-party contact information to help you find technical support. This contact information may change without notice. Microsoft does not guarantee the accuracy of this third-party contact information.
    Best Regards,
    Joyce

  • Best practice to configure a DHCP server for NAC

    hi all,
    Any idea what the best practice is for deploying DHCP on the CAS? I tried following the user guide to configure DHCP on the CAS, but it is still not running smoothly; users only get an IP on the authentication VLAN.
    - The CCA agent is very slow to appear when the user gets a DHCP IP during authentication. Any idea?
    - How do I integrate Profiler with the NAC appliance?

    Hi Ahmed,
    You have configured your CAS to be your DHCP server. That's fine, because you are using Real-IP mode, which supports the CAS acting as a DHCP server.
    Remember: this setting is only for your authentication VLAN, so that your client gets an IP while authenticating.
    When your client switches to the access VLAN, the client traffic no longer flows through the CAS, so the CAS is no longer responsible for DHCP.
    You'll have to configure another DHCP server on the trusted side, which can lease IPs to the access-VLAN members.
    Since you have configured OOB, your client ends up in the access VLAN and does not come in contact with the CAS, so you need the trusted-side DHCP server to give the client an IP address.
    In your scenario, the access VLANs are 2022 and 2044.
    Hope this helps. Do reply after testing.
    Thank You
    Regards
    Edward

  • Best practice for DHCP Server 2008 utilization of IP Addresses

    I am currently using 85% of the addresses on my DHCP server running Windows Server 2008. Does Microsoft recommend a particular utilization percentage before building another scope? Or what is the industry best practice, or Microsoft's recommendation, for when to build another scope?

    Hi,
    As far as I know, there is no standard for DHCP scope usage. Just make sure that the IP address pool isn't exhausted.
    For the best practices of DHCP, please refer to the article below,
    DHCP Best Practices
    http://technet.microsoft.com/en-us/library/cc780311(v=WS.10).aspx
    Recommended tasks for the DHCP server role
    http://technet.microsoft.com/en-us/library/cc731392.aspx
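    To see how close each scope actually is to exhaustion, one quick check (the server name is an example) is the DHCP MIB counters, which report addresses in use and free per scope:
        netsh dhcp server \\DHCP01 show mibinfo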
    Hope this helps.
    Steven Lee
    TechNet Community Support

  • Best practices: migrating from Aperture 2 to Aperture 3

    What are the best practices for moving from Aperture 2 to Aperture 3? One thing I do know from reading the discussion boards is to turn off Faces recognition until everything is working. What else?

    Make sure you do a full backup and rebuild of the Aperture 2 library before you migrate (hold Option-Command when you open Aperture 2).
    Aperture 3 may slow down on you; don't be scared, it will finish. Mine took about a day and a half, for roughly 30,000 RAW photos / 800 GB.
    Don't reprocess masters until Aperture is done with everything it needs to do; keep an eye on the Activity window.
    After you reprocess masters it will take a long time to generate thumbnails; again, don't worry.
    Bill Debevc
    sshaphotos.com

  • Imaging solution for EBS: Best practice guide for server setup

    Hi,
    We have to implement an imaging solution for EBS using the AXF adapter. For this, the customer is going to procure and implement SOA and WebCenter Content from scratch.
    We are now faced with the choice of whether to recommend SOA and WCC on the same WebLogic server or on separate WebLogic servers. Is there any best-practice guide available for setting up the application adapters for WCC?
    Thanks
    Arijit

    Hi ,
    I think this documentation would at least help you start planning: http://docs.oracle.com/cd/E23943_01/doc.1111/e15483/toc.htm
    Thanks,
    Srinath

  • Best Practice: Migrating transports to Prod (system down etc.)

    Hi all
    This is more of a process and governance question as opposed to a ChaRM question.
    We use ChaRM to migrate transports to Production systems. For example, we have a Minor BAU Release (every 2 weeks), a Minor Initiative Release (every 4 weeks) and a Major Release (every 3 months).
    We realise that some of the major releases may require SAP to be taken offline. But what is SAP best practice for ANY release into production? For example, for our minor BAU releases we never shut down any production systems, never stop batch jobs, never lock users, etc.
    What does SAP recommend when migrating transports to Prod?
    Thanks
    Shaun

    Have you checked out the "Two Value Releases Per Year" whitepaper for SAP recommendations?  Section 6 is applicable.
    Lifetime Support by SAP » Two Value Releases per Year
    The "real-world" answer is going to depend on how risk-adverse versus downtime adverse your company is.  I think most companies would choose to keep the systems running except when SAP forces an outage or there is a real risk of data corruption (some data conversions and data loads, for example).
    Specific to your minor BAU releases, it may be wise to make a process whereby anything that requires a production shutdown, stopped batch jobs, locked users, etc. needs to be in a different release type. But if you don't have the kind of control, your process will need to allow for these things to happen with those releases.
    Also, with regards to stopping batch jobs in the real world, you always need to balance the desire to take full advantage of the available systems versus the pain of managing the variations.  If your batch schedule is full, how are you going to make sure the critical jobs complete on time when you do need to take the system down?  If it isn't full, why do you need that time?  Can you make sure only non-critical batch jobs run during those times?  Do you have a good method of implementing an alternate batch schedule when need be?

  • Best Practice EPA files...

    Hi,
    I found the following EPA files from SAP Best Practices for Portals V1.70:
    1) BP_ALL.EPA
    2) BP_EC_ALL.EPA
    3) BP_ECC_BL.EPA
    The question is whether I can use the iViews in these files for my users, or whether these iViews are only for demo purposes (I can see that these are prototype files). If these files are prototypes, can I find the original EPA files anywhere else?
    Rgds,
    P.Navakanth

    Apple has already hidden about half of the folders on a typical OS X install. The ones that make up the Unix core of the OS. So unless you know how to find those, you'll never have to worry about them.
    Aside from that, you just want to keep away from the System folder and any of the Library folders. Nothing else should be too crucial to the OS running.
    Of course if you wanted to elaborate on "a little strange" we might be able to be a bit more helpful.
