Newbie question - cloning to a new server

Hi all,
I am very new to the EBS world, so please forgive the "newbie" question. We are running 12.0.4, and we have approximately 12 different environments. We probably do 3 to 4 clones a week, generally from production to one of the existing environments.
However, we now have a couple of brand new Linux servers, and we want to set them up as a new environment to clone to (one will be the apps server, one the DB server). I am looking at some of the Metalink documents on how to do this. But my basic question is this: do I first have to do a basic EBS installation on these servers before I will be able to clone an environment over to them?
Thanks!!


Similar Messages

  • Java manual installation: is it required in the case of R12 cloning on a new server?

    Hi experts,
    I need some suggestions. I have to clone my production database to a completely new test server. I know I first have to prepare the new test server for EBS R12: install the RPM packages, set up the kernel parameters, create the OS groups and users, and so on.
    My confusion is this: do I have to install Java manually when cloning the database to a completely new test server (first-time direct cloning on that server)?
    As far as I know, Java is already bundled with R12 and does not need to be installed manually when installing R12 for the first time on a new server. But what about the case of cloning?
    Please point me in the right direction. I have to do the cloning at a client site, so I am a little nervous.

    "As far as I know, Java is already bundled with R12 and does not need to be installed manually when installing R12 for the first time on a new server. But what about the case of cloning?"
    The same thing applies to cloning: Java comes over with the cloned file system. So you need to install the OS software and prerequisite packages (which you are already aware of), then run Rapid Clone (preclone, copy files, postclone).
    Cloning Oracle Applications Release 12 with Rapid Clone (Doc ID 406982.1)
    Rapid Clone Documentation Resources For Release 11i and 12 (Doc ID 799735.1)
    Thanks,
    Hussein
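
    The Rapid Clone flow Hussein describes can be sketched roughly as follows. Treat this as an outline only: the exact script locations vary by release and context name, so follow Doc ID 406982.1 for the authoritative steps.

```shell
# On the SOURCE system (run as the owning OS users; paths are illustrative):
perl adpreclone.pl dbTier      # from $ORACLE_HOME/appsutil/scripts/<CONTEXT_NAME>
perl adpreclone.pl appsTier    # from $INST_TOP/admin/scripts

# Copy the database tier and application tier file systems to the new server
# (tar/scp/rsync: datafiles, ORACLE_HOME, APPL_TOP, COMMON_TOP, etc.)

# On the TARGET system, configure the clone:
perl adcfgclone.pl dbTier      # from <RDBMS_ORACLE_HOME>/appsutil/clone/bin
perl adcfgclone.pl appsTier    # from <COMMON_TOP>/clone/bin
```

    In other words, you do not run a full EBS install on the new servers first; preparing the OS (packages, kernel parameters, users) plus Rapid Clone brings the environment over.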

  • Total Newbie Question ... Sorry :-(

    I know it's a Windows thing and I am now converted to Mac, but I gotta know this because it's doing my head in. It's a complete stupid green-gilled newbie question.
    When installing new programs on a Mac, can you create shortcuts to the programs on the Dock? I did what I THOUGHT it would be, i.e. I made an alias and stuck it in the Dock, but on rebooting my Mac later on, in place of the shortcuts were 3 question marks which, when clicked on, did absolutely nothing.
    Help?
    A.L.I
    Windows XP Pro Desktop, Macbook Pro, 60GB iPod Video   Mac OS X (10.4.5)   OS X

    You aren't installing something from a dmg file, are you? The dmg is a disk image, kind of a virtual CD. So when you double-click the dmg and then get the little disk/hard-drive/custom icon on your desktop, that is the same as if you had mounted a CD. You then need to drag the application off of that "CD" into your Applications folder. Then it is truly installed.
    You can then "eject" the icon on your desktop. This is what happened when you shut down: without the image remounted, your Dock shortcut can't find the original.
    Just a thought.

  • APEX newbie question: HTTP server?

    I am a newbie at APEX. I have a DBA background, but no experience with Oracle applications.
    I DO respect your time, and have tried to find this info in the docs, but honestly cannot find it. And I have looked everywhere: Metalink, OTN, Googling. I thought I would find it on these forums in some form of FAQ, but have not found the answer. :-(
    I have installed Oracle Database 11g for Windows on my laptop, and when installing APEX 3.2, it asks me to stop the HTTP server and ... :-)
    Now, I have a question regarding the Oracle HTTP Server, which is supposed to be installed with 11g, instead of from the Companion CD, which no longer exists.
    First question: we have an HTTP server that is used by Oracle EM Database Control, correct? I do not see a service for it in the Services list. The APEX install asks for stopping/starting the HTTP server. How do we do that? And where are the HTTP server binaries located? I did a search for Apache, and found it under C:\app\oracle\product\11.1.0\db_1\perl\site\5.8.3\lib\Apache, but there are no binaries there, or under ORACLE_BASE\bin either.
    What do I set for the ORACLE_HTTPSERVER Windows environment variable on a brand-new 11g database install?
    Now, I see that we can get an HTTP server from a BEA install. I was trying to keep APEX from needing a web server tier here. To me, the beauty of APEX is not having to manage BEA or any other tier. Keep it simple, at least for development: keep everything in the DB and simply use the native HTTP server.
    So, this is a VERY basic question, but how do I do that? I will worry about what I can and cannot do with the native HTTP server or BEA later. I just want to get this up and running so that I can start playing with it.
    Thanks,
    Henrique

    Here is my latest update before I go to bed. :-(
    I was able to install the HTTP server, but had some issues further down in the APEX install.
    C:\app\oracle\product\10g-iAS-http-server\opmn\bin>opmnctl status
    Processes in Instance: IAS-1
    --------------------+--------------------+---------+---------
    ias-component       | process-type       |     pid | status
    --------------------+--------------------+---------+---------
    HTTP_Server         | HTTP_Server        |    5204 | Alive
    I am able to see the welcome page for the web server here:
    http://localhost:7777/
    I am able to reset the admin password (by the way, I did set up the gateway as well :-) ). I am not sure what happens when you have both the gateway and the web server set up. Now, when I try to log in to the APEX admin account using either
    http://localhost:7777/apex/apex_admin or
    http://localhost:7777/pls/apex/apex_admin, nothing happens besides an HTTP-404 error.
    Now, when I had the port set to 8080, via the commands
    SQL> EXEC DBMS_XDB.SETHTTPPORT(8080);
    PL/SQL procedure successfully completed.
    SQL> SELECT DBMS_XDB.GETHTTPPORT FROM DUAL;
    GETHTTPPORT
    8080
    and tried to log in again here
    http://localhost:7777/pls/apex/apex_admin
    I do get a prompt, but when I enter admin and the password I set up, it does not let me in. :-(
    Now, I need some explanation of this DAD file, as I am not sure whether I need it simply to log in during the install. Here is the relevant section from the install doc:
    3.4.11.1 Creating a Workspace Manually
    To create an Oracle Application Express workspace manually:
    Log in to Oracle Application Express Administration Services. Oracle Application Express Administration Services is a separate application for managing an entire Oracle Application Express instance. You log in using the ADMIN account and password created or reset during the installation process.
    In a Web browser, navigate to the Oracle Application Express Administration Services application.
    If your setup uses Apache and mod_plsql, go to:
    http://hostname:port/pls/apex/apex_admin
    Where:
    hostname is the name of the system where Oracle HTTP Server is installed.
    port is the port number assigned to Oracle HTTP Server. In a default installation, this number is 7777.
    pls is the indicator to use the mod_plsql cartridge.
    ***apex is the database access descriptor (DAD) defined in the mod_plsql configuration file.***
    Here it mentions this DAD, and I am not sure whether I need it now, or even how to reference it. My database is called apex, but I do not have anything in my dads.conf file. Should I have something there? I see a sample such as this in the README.DADs file:
    <Location /plsqlapp>
    SetHandler pls_handler
    Order deny,allow
    Allow from all
    AllowOverride None
    PlsqlDatabaseUsername scott
    PlsqlDatabasePassword tiger
    PlsqlDatabaseConnectString orcl
    PlsqlAuthenticationMode Basic
    PlsqlDefaultPage scott.home
    PlsqlDocumentTablename scott.wwdoc_document
    PlsqlDocumentPath docs
    PlsqlDocumentProcedure scott.wwdoc_process.process_download
    </Location>
    It is a bit late tonight. Tomorrow I will get back at it.
    Thanks for any help. I must be missing something really obvious here, and I can't see it. :-(
    Cheers,
    Henrique
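
    For reference, a mod_plsql DAD for APEX in dads.conf typically follows the sketch below, mirroring the pattern in the install guide quoted above. The connect string and password are placeholders to adjust for your system, and the Location path must match the /pls/apex part of the login URL; this is a hedged sample, not a verified config for this exact setup.

```apache
<Location /pls/apex>
  SetHandler pls_handler
  Order deny,allow
  Allow from all
  AllowOverride None
  PlsqlDatabaseUsername APEX_PUBLIC_USER
  PlsqlDatabasePassword your_apex_public_user_password
  PlsqlDatabaseConnectString localhost:1521:orcl SIDFormat
  PlsqlDefaultPage apex
  PlsqlDocumentTablename wwv_flow_file_objects$
  PlsqlDocumentPath docs
  PlsqlDocumentProcedure wwv_flow_file_mgr.process_download
  PlsqlAuthenticationMode Basic
</Location>
```

    Note that the DAD path only matters for the /pls/apex/apex_admin URL through Oracle HTTP Server; the port in that URL must be the HTTP Server's port, not the XDB port set with DBMS_XDB.SETHTTPPORT, which serves the gateway-style /apex/apex_admin URL instead.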

  • Add New Server with Configtool Questions

    Hello All,
    We are running EP7 SP12 on W2K3 with 16GB RAM and 4 CPUs.
    As part of one of the Go Live checks done to the system a while back, it was recommended that we increase the number of server processes in the instance.
    So, I followed the information here (http://help.sap.com/saphelp_nw70/helpdata/en/68/dcde416fb3c417e10000000a155106/frameset.htm) to add 3 additional servers to the instance.  When I restarted the J2EE engine, it basically sat there in a 'Starting Apps' status.  I waited about 2 hours before I stopped the startup process.  I went back into Configtool and did a 'Remove Server' on the last, Server3, process and restarted J2EE.  This time, everything did start OK, but now I have a few questions I hope someone can answer.
    1.  Is there a limit to the number of Server processes that can exist in a single instance?  If not, what would cause the J2EE servers to essentially hang on startup although they didn't look hung, just extremely slow starting up.
    2.  I thought when you did the initial 'Add Server' it was supposed to create a duplicate Server process.  But, when I check the directory structures of the newly created Server processes, they aren't anywhere close to being the same as the original Server0 process.  For example, we have a custom redirect in place when people logon to the Portal, but this wasn't transferred/copied to the new Server1 or Server2.  The same was true for other customizations.  Also, when looking at the structure, Server0 has approx 80,000 files and 15,000 sub-folders.  When looking at the new Server1 & Server2, they are both different in size in both files and folders in respect to each other and compared to Server0.  Shouldn't they all be the same?
    3.  A follow up to question 2.  If they aren't the same and they are supposed to be the same, can we just copy the missing files & folders from Server0 to the new Servers?
    4.  Lastly, since I did a 'Remove Server' in Configtool for Server3, it does not appear in the MMC when the J2EE engine starts.  This I expect.  But, it did not remove the directory structure for Server3.  If I try to manually delete the Server3 directory, it simply says it is in use and won't let me delete the structure.  So, is it safe to delete this structure since it isn't being used anymore?  If so, I'll stop SAP and delete the structure offline.  Do I have to do any database cleanup once I do this?  If so, can someone point me to some documentation as to what needs to cleaned up in the DB and how?
    Thanks,
    Tom

    Thanks for the info.
    Interesting though.  I open a message with SAP and posed these same questions.  Their response was similar to yours, but it opened up a whole new set of questions.  What follows is the text of that message for others to benefit from (clipped for clarity):
    SAP's response to the original set of questions posted here:
    =====================
    ....1. The number of server nodes depends upon the CPU speed. The J2EE Engine
    can support up to 21 server nodes in an instance.
    If the number of server nodes is big, it will slow down the startup process.
    2. When you create the "New Server" it is not a duplicate server.
    It is a new server. You cannot copy missing files from one server to the other.
    The J2EE engine synchronises the server nodes in the instance.
    3. Yes. You need to manually delete the file system. Configtool deletes
    the server node from the DB....
    =====================
    To which I replied:
    =====================
    .....2. I thought when you did the initial 'Add Server' it was supposed to
    create a duplicate Server process. But, when I check the directory
    structures of the newly created Server processes, they aren't anywhere
    close to being the same as the original Server0 process. For example,
    we have a custom redirect in place when people logon to the Portal, but
    this wasn't transferred/copied to the new Server1 or Server2. The same
    was true for other customizations. Also, when looking at the structure,
    Server0 has approx 80,000 files and 15,000 sub-folders. When looking at
    the new Server1 & Server2, they are both different in size in both
    files and folders in respect to each other and compared to Server0.
    Shouldn't they all be the same?"
    You replied:
    "2. When you create the "New Server" it is not a duplicate server.
    It is a new server. You cannot copy missing files from one server to the other.
    The J2EE engine synchronises the server nodes in the instance."
    But, from actual experience, it does NOT synchronize the nodes, rather
    it is a partial synchronization.
    So, are the following statements true:
    -When a new server process is added, ONLY standard SAP delivered files
    & folders are synchronized. True or False?
    -No custom files/folders are synchronized. True or False?
    -Custom values, whether part of standard SAP deliverables or custom
    deliverables, are NOT synchronized. True or False?
    -Each server node needs to be configured independently. True or False?
    The reason I'm asking such specific questions is because of what we are
    seeing. I'll use the same example from before. We modified the
    index.html file for the Portal in Server0. When we added the new
    server nodes, this index.html file was not synchronized. Instead, the
    default index.html file was created for the new server nodes. That is
    an example at the file level. Here is an example from a configuration
    perspective. In Server0, in the Visual Admin tool, we have a TREX
    server specified in TREX service properties. When the new server nodes
    were created, this setting was NOT transferred. Therefore, TREX didn't
    work until I specifically went back into VA and added the TREX settings
    to the new server nodes. This is a major problem and opens up three
    more questions:
    1.How do we know what settings were transferred vs. those that weren't
    transferred?
    2.Also, if you can't copy missing files/folders from one server to
    another, then how are we supposed to get those files/folders into the
    new server(s)?
    3. Is there a way to do a comparison between the server nodes to see
    what settings are missing?.....
    =====================
    SAP's reply:
    =====================
    .....-When a new server process is added, ONLY standard SAP delivered files
    & folders are synchronized. True or False? TRUE
    -No custom files/folders are synchronized. True or False? TRUE
    -Custom values, whether part of standard SAP deliverables or custom
    deliverables, are NOT synchronized. True or False? TRUE
    -Each server node needs to be configured independently. True or False?
    YES.
    If any new server node is added, the J2EE Engine synchronises the information
    at the time of the next restart......
    ======================
    My Reply:
    ======================
    .....1.How do we know what settings were transferred vs. those that weren't
    transferred?
    2.Also, if you can't copy missing files/folders from one server to
    another, then how are we supposed to get those files/folders into the
    new server(s)?
    3. Is there a way to do a comparison between the server nodes to see
    what settings are missing?.....
    ======================
    SAP's Reply:
    ======================
    .....1.How do we know what settings were transferred vs. those that weren't
    transferred?
    NO there is no way.
    2.Also, if you can't copy missing files/folders from one server to
    another, then how are we supposed to get those files/folders into the
    new server(s)?
    You need to deploy the applications again on the new server.
    3. Is there a way to do a comparison between the server nodes to see
    what settings are missing?
    No there is no way......
    ======================
    So, although I really didn't get the answers I was looking for, I confirmed the message. It seems ludicrous to me that there is no way of doing a comparison or manual sync between two server nodes. I also can't believe you have to configure each node independently, let alone that when you do a deployment you now have to deploy to each server individually, and I haven't found any docs explaining how to do that. So if someone here has a suggestion I'd appreciate it. At this point, given that the server nodes are so out of sync as to make them almost unusable, we might just delete them all and go back to one node, although I don't want to have to do that.
    Thanks,
    Tom
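
    Even though SAP says there is no supported comparison, at the file-system level you can at least spot drift between node directories with a plain recursive diff. A rough sketch (the SAP paths in the comment are illustrative, not guaranteed for your layout):

```shell
# List files whose contents differ, plus files present on only one side.
compare_nodes() {
    diff -rq "$1" "$2"
}

# Example usage (adjust to your instance layout):
#   compare_nodes /usr/sap/<SID>/JC00/j2ee/cluster/server0 \
#                 /usr/sap/<SID>/JC00/j2ee/cluster/server1
```

    Note this only covers the file system; settings stored in the database and changed through Visual Admin (like the TREX server properties mentioned above) will not show up in such a diff.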

  • Question on adding new server to existing pool in OVM 2.2 environment

    I have an existing iscsi based server pool with one server attached to the pool. I am adding a second ovm server. I have configured the iscsi and multipath modules and can see the repository lun.
    I then added the server to the pool via OVM Manager, but the server is not fully participating, as the agent did not create a mount point for /OVS, so I can't see the repos directories from the new server. A "df" does not show the device as it does on the first server.
    The manager gui does show the server as part of the pool even though we received an error.
    In the ovm manager log, we see the following messages.
    Check prerequisites to add server (ovm2.highlinehosting.com) to server pool (san_pool1) succeed
    During adding servers ([ovm2.domain.com]) to server pool (san_pool1), Cluster setup failed: (OVM-1011 OVM Manager communication with ovm1.domain.com for operation HA Setup for Oracle VM Agent 2.2.0 failed: errcode=00000, errmsg=Unexpected error: <Exception: ha_check_cpu_compatibility failed:<Exception: CPU not compatible! {'ovm1.domain.com': 'vendor_id=GenuineIntel;cpu_family=6;model=44', 'ovm2.highlinehosting.com': 'vendor_id=GenuineIntel;cpu_family=6;model=46'}> )
    I'm not sure why it's complaining about cpu compatibility as we are not using any HA features at this time, and both servers are identical in make and model. They are both equipped with the same cpu type and speed.
    I checked ocfs2 network connectivity by executing the command "nc -zv nodename 7777", which ran successfully from both nodes. The manager was able to propagate a valid cluster.conf file to both servers. I restarted the ovs-agent service on the new server.
    Are there any other configuration steps that I need to take on the new server before trying to add it to the pool? I do need to tread carefully as I have live guests running in the pool.
    Thx
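
    The agent's complaint maps directly to fields in /proc/cpuinfo: model 44 vs model 46 means the two CPUs really are different silicon, even if the servers' make and model look identical. A quick way to see exactly what the check compares, run on each server (a Linux-only sketch):

```shell
# Print the CPU identity fields the OVM agent compares across pool members:
# vendor_id, cpu family, and model (deliberately excluding "model name").
cpu_identity() {
    grep -E '^(vendor_id|cpu family|model)[[:space:]]*:' /proc/cpuinfo | sort -u
}

# Run cpu_identity on every server and diff the outputs; any mismatch trips
# ha_check_cpu_compatibility even when you are not using HA features.
```
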

    Bob Weinmann wrote:
    "If I have to go the route of using a separate pool for each server, do you have any suggestions on how I would be able to access the repos of server1 from server2 if server1 were to go down hard? I don't mean to access it as a repos for running the vms, but just to be able to copy the vm files over to the running server."
    This would be possible if you were using NFS for your storage, but not if you're using FC or iSCSI, as the LUN is formatted with the OCFS2 cluster ID of each pool, so it probably wouldn't be able to be mounted by another pool. Your best bet is to upgrade to Oracle VM 3.0 so that you can create a pool that contains both servers: you won't have live migration, but HA will still work just fine.

  • Newbie cluster question: where does the admin server live?

    Hello, I'm looking at clustering for our application where I work, and so I'm reading through the cluster-related documentation for 11g. I have a sort of an architecture question: where should the admin server be started? Hypothetically speaking, if you have two nodes for your cluster, would the AdminServer run on a 3rd box which sort of stood in front of the two nodes?
    Thanks much -

    The ideal situation would be for your admin server to be separate from the machines hosting your managed server. This allows you to control access to the admin server, eliminate the performance impact if you have the admin + managed on the same host, and limits impact if youir #1 host fails, etc.
    But companies may be unwilling to invest in a distinct (3rd) host just for the admin server, especially if you have multiple environments ( prod, testing, dev, etc. ).
    So usually the admin winds up sharing the host with a managed server.

  • How to move or copy a database to a new server

    Greetings All,
    Oracle Enterprise 11g r2, on a Windows2008 platform.
    I would appreciate some advice regarding moving/copying a database to a new server. Some of the information below may not be pertinent to my goal. Please be patient as I am a newbie.
    I have installed oracle and created a database (prod03) on the new/target server. I created the same tablespaces (and datafiles/names) as are on the existing/source server (prod01), except that on the new/target server (prod03) there is 1 more data file for the USERS tablespace than there is on the existing/source server (prod01).
    My initial thought was to perform a expdp full=y.
    The database contains 220 schemas; when I performed an expdp full=y estimate-only run, it indicated 220 GB. I think this would take much more time to export and then import than what I hope to be able to do below.
    I would like to be able to copy the datafiles from the source server (prod01) over to the target server (prod03); some names/locations will change.
    One scenario I found (http://www.dba-oracle.com/oracle_tips_db_copy.htm) was to backup the control file to trace on the old/source server (prod01). Copy everything to the new/target server. Tweak the file that creates the new control file.
    Step 4 of the above mentioned link says to change
    CREATE CONTROLFILE REUSE DATABASE "PROD01" NORESETLOGS to
    CREATE CONTROLFILE SET DATABASE "PROD03" RESETLOGS
    Notice the change from REUSE to SET. I am not sure if this is right for my situation.
    Could I issue a backup control file to trace on the target server (prod03), add the reference to the additional datafile, and copy over all of the datafiles for all of the tablespaces (USERS, SYSTEM, SYSAUX, UNDOTBS1, TEMP)?
    Then delete the existing control file and generate the new control file,
    and perhaps issue a startup resetlogs or startup recover?
    Thanks for your time,
    Bob
    Edited by: Snyds on May 17, 2012 12:26 PM
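
    A hedged sketch of the controlfile-recreation route described above. Note the trace is generated on the SOURCE (prod01), not the target; every name and path below is a placeholder to verify against your own trace output (shown unix-shell-style for brevity, even though the platform here is Windows):

```shell
# On the source (prod01): dump a CREATE CONTROLFILE script, then shut down.
sqlplus / as sysdba <<'SQL'
ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS 'C:\temp\ctl.sql';
SHUTDOWN IMMEDIATE
SQL

# Copy the datafiles to prod03, then edit ctl.sql so it reads:
#   CREATE CONTROLFILE SET DATABASE "PROD03" RESETLOGS ...
# SET (not REUSE) because the database name changes; RESETLOGS because the
# online logs from prod01 are not carried over. Fix any changed file paths
# and add the extra USERS datafile reference.

# On the target (prod03):
sqlplus / as sysdba <<'SQL'
STARTUP NOMOUNT
@C:\temp\ctl.sql
ALTER DATABASE OPEN RESETLOGS;
SQL
```
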

    "So unless someone provides me with an rman script I can't use rman."
    Google is your friend.
    "Simply telling someone to get the experience does not help. So your post is useless to me."
    I suppose you do not have experience with "old-school" manual cloning either.
    Import of an entire 200 GB DB with Data Pump or imp will also require some experience; otherwise it will be a long, long exercise.
    So, basically, any advice may be useless to you, because of your "the fact is I don't have the experience. Nor do I have the time to obtain the experience."
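
    For what it's worth, on 11gR2 an active-database RMAN duplicate can be as short as the sketch below. The connect strings are assumptions, and an auxiliary instance started NOMOUNT on the target with a password file and network setup are prerequisites; see the RMAN documentation before relying on this:

```shell
rman target sys@prod01 auxiliary sys@prod03 <<'RMAN'
DUPLICATE TARGET DATABASE TO prod03
  FROM ACTIVE DATABASE
  NOFILENAMECHECK;
RMAN
```
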

  • Move database to the new server

    Hello,
    I have a 10.2.0.1 Standard Edition database (7 GB, not in archivelog mode) on a Windows 2003 server, and need to move it to a new server (same environment). I thought to use the OEM Clone option, but it appears cloning to another host works only in Grid Control, and I have DB Control.
    I know about manual cloning/recreation, but am now looking into the export and import option. I am thinking of installing the Oracle database on the new server, then importing the relevant tablespaces exported from the old database. I will use the Data Pump utility in OEM, under the SYSTEM username.
    Is that a good approach? What rules should be applied to avoid problems in that case? Sorry if it's a very basic question; I have never moved a database before.
    Marina

    Options for porting the database from one server to another after installing Oracle on the new server include:
    Export/Import using expdp/impdp
    RMAN duplicate to compatible hardware
    RMAN file conversion to non-compatible hardware
    One advantage of expdp/impdp is that this option supports reconfiguring the database tablespace/file/object storage layout as part of the migration, as well as the adoption of new features like ASSM.
    A disadvantage is that using expdp/impdp will probably be slower and involve more DBA work than using file copies via RMAN. There are approaches to reduce the clock time necessary to support export/import, and the work is often beneficial for long-term database management efforts.
    There have been threads on this topic before, and fairly recently. You might want to hunt one or two of them down.
    HTH -- Mark D Powell --
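
    The expdp/impdp option above, in its most basic form; the directory object name and file names are placeholders, the directory object must already exist and be writable by the database, and on a 7 GB database this is usually quick:

```shell
# On the old server: full export through a directory object.
expdp system DIRECTORY=dp_dir DUMPFILE=full.dmp LOGFILE=exp_full.log FULL=y

# Copy full.dmp to the new server, then import into the freshly
# created database there:
impdp system DIRECTORY=dp_dir DUMPFILE=full.dmp LOGFILE=imp_full.log FULL=y
```
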

  • SG200-08 Newbie Questions

    I have recently purchased the SG200-08 Smart Switch, but I have a few "newbie questions" about it as I get started using it.
    The onboard firmware shows 1.0.1.0. Is that the latest firmware for the switch?
    Do I need to enable IPv6 Auto Configuration and DHCPv6 in my switch settings to be ready for IPv6 as my ISP rolls it out down the road?
    How do I go about changing the switch's username? I was able to easily change the password, but I'm having issues getting the username to change.
    Do I need to do anything about the LLDP-MED settings? What exactly is that?
    How do I configure the System Time Settings so the switch functions in my time zone (USA Central Time)?
    Thanks a bunch for any assistance!

    Hi Nathan,
    My guess is that NAT is already on: you have one public IP address from your ISP. Your router will use NAT (network address translation) to allow multiple clients (either dynamically assigned private IPs via DHCP, or set statically) to connect to the internet using the one public IP. It also sounds like your RV042G is assigning both IPv6 and IPv4 addresses, and there's nothing wrong with that. Unless you have specific information re: IPv6 from your ISP, however, I would suggest not worrying about it until you hear from them. Are your Macs connected to the router via the SG200 switch? If so, it looks like it's passing IPv6 just fine. UPnP is something completely different: that's about opening ports like you mentioned. It's a way for your devices to communicate with the router to automatically enable the proper port forwarding for the device/application.
    Regarding the username, create a new user account. I don't think you can edit the cisco user, but try deleting it after creating and testing a new user account.
    I'm not familiar with the Polycom system, but I would leave the settings at default unless you are using true IP phones (rather than an ATA adapter). From a quick Google of the Polycom device, I don't think you will gain anything from LLDP/CDP, as the handsets use regular cordless phone frequencies. With my setup, we use Cisco IP desk phones and cordless wifi phones; CDP makes life easy as the Cisco access point, wifi phones, Cisco switch, and Cisco desk phones (connected via Ethernet) see each other and know what they're dealing with automatically.
    I don't see the SNTP setting for unicast/broadcast that you're looking at. For the switch to get the time from an SNTP server, under Administration -> Time -> SNTP Settings, add a server, and then back on Time -> System Time, enable SNTP server as the main clock source. What are you using as your SNTP source? Do you have an internal SNTP server? You don't need to enable DHCP on the SNTP server.
    May I also point you to the two manuals; I think they may be helpful: RV042G & SG200
    Hope that's helpful.
    Best,
    David
    Please rate any helpful posts.

  • What happens to my local user data? -newbie question sorry

    Hi All,
    Firstly apologies if this seems a dumb question, I've scoured the forums but I require something that fits my specific situation.
    I've had a (my first) MacBook for about 9 months and have built up a fairly healthy local user, set up just how I like it: MobileMe, iTunes, Chrome, iPhoto library, lots of other apps, and so forth.
    I'm setting up a Mac Mini Server, and was wondering what I can do to join the new server but take all my settings/downloads/iTunes etc. with me. I don't want it all stored on the server, but I come from a Micro$oft Windows background. With MS, when you add a PC to a domain and log in with the appropriate user account, you get a fresh profile: no settings, no files, no customisations, and so on. Is this also the case when I hit that Join Network Account Server button on my Mac? Will I get a blank, fresh account on my MacBook?
    I'm guessing this must happen quite often as people start their way into Apple technology and build up a nice healthy local account before branching further into the Apple world...

    The two laptops I use everyday have access to all the servers via my network account. It is set so that my user account is listed as having "no home" So I log into the laptop with my local user account with a UID of 501 but access all the network services via the go menu and my network account of the same name but with a UID of 1034.
    For all other users in the company, if they are on a laptop, I use network accounts. The machines are managed to ask if the user wants to create a mobile account when they login. For permanently assigned laptop users, the answer is yes. This puts their home on the laptop and ties them to that machine. I use mobile account syncing to make sure their critical data is copied to the server for backup.
    By having the machine ask to create the mobile account, users can answer no and login to their network home. The use of the laptop may be needed temporarily if a regular workstation is down.
    Once in a while I will need to convert a local account to a network account. While a bit more laborious that setting it up correctly at the beginning, it can be done.
    But I never let any user account have the UID of 501. I would set that up as the local admin account I use for installing updates and performing other maintenance. If needed, I would back up the user data and erase and re-install the OS.
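
    Converting a local account to use a network identity mostly comes down to re-owning the home folder with the network UID once the account mapping is changed. A minimal sketch, using the UIDs from this thread; the username, group, and path are assumptions to verify with `id`, and you should have a backup before touching ownership:

```shell
# Local account was UID 501; network account is UID 1034 (per the post).
# Re-own the home directory so the network account can use it
# ('username' and the 'staff' group are placeholders for your setup).
sudo chown -R 1034:staff /Users/username
```
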

  • SBS2008: Move email from Exchange 2007 to new server with Exchange 2013

    We have an old server (SBS2008) and plan to buy a new server (Server 2012). I need to move all the Exchange emails, contacts & calendars to the new server. We will no longer use the old server.
    Is there a document or migration tool that will help me understand how to move this data from the old Exchange server to the new one?
    Old Server:
    SBS2008 running Exchange 2007
    New Server:
    Server 2012
    Exchange 2013
    Any help is appreciated!

    Hi Dave,
    It can be done, and as Larry suggested you will consider two Server 2012 installs in order to achieve an environment that looks like your current SBS roles; Exchange 2013 on an Active Directory controller isn't a good long-term solution (SBS did this for
    you in the past).
    For your size operation, a virtual server host, with a Windows Server 2012 license, and two virtual machines would probably be a suitable design model.  In this manner, you have Server 2012 license that permits 1 +2 licenses (one host for virtualization,
    up to 2 Virtual Machines on same host).
    There's no migration tool; the path comes with experience and usually trial and error. You earn the skills in this migration path, and for the average SBS support person you should plan on spending 3x (or more) your estimated hours planning the migration.
    You can find a recommended migration path at the link below to give you an idea of the steps. It is not a point-by-point guide to an SBS2008 to Server 2012 with Exchange 2013 migration, but the high points are there. If it looks like something you would be comfortable with, then research further:
    http://blogs.technet.com/b/infratalks/archive/2012/09/07/transition-from-small-business-server-to-standard-windows-server.aspx
    Specifically for integrating Exchange 2013 into an Exchange 2007 environment, guidance can be found here:
    http://technet.microsoft.com/en-us/library/jj898582(v=exchg.150).aspx
    If that looks like something beyond your comfort level, you might instead build the new Server 2012 with Exchange 2013 environment from scratch, manually export your Exchange 2007 mailbox contents (to PST), import them into the new mail server, and migrate your workstations out of the old domain into the new one. Whether this is more or less work at your workstation count depends on many variables.
    If you have more questions about the process, update the thread and we'll try to assist.
    Hopefully this info answered your original question.
    Cheers,
    -Jason
    Jason Miller B.Comm (Hons), MCSA, MCITP, Microsoft MVP

  • No data in web analytics report after moving to new server

    Hi,
    I have a Sharepoint 2010 portal, we need to move that portal to a new server.
    For this, I took the backup of the Web Analytics Reporting db and Staging db and restored them in the SQL of new server. Then, I created new Web Analytics Service Application on the new server with reference to the restored databases.
    Now, when I go to check Web Analytics report, there is no data in it.
    Is this because the url of my portal is now changed? Will I be able to get all my data when I change the AAM of new server to the old portal's url?
    Kindly help

    Hi,
    First, check whether data exists starting from the date the move finished. If not, please make sure Web Analytics is working in your environment.
    You could also check ULS log for relevant error information.
    Here is a similar issue for your reference:
    http://sharepoint.stackexchange.com/questions/42881/web-analytics-in-central-admin-not-showing-all-data
    More information:
    http://blogs.msdn.com/b/sharepoint_strategery/archive/2012/03/16/troubleshooting-sharepoint-2010-web-analytics.aspx
    http://blogs.technet.com/b/manharsharma/archive/2012/10/13/sharepoint-2010-web-analytics-troubleshooting-reporting-db.aspx
    Regards,
    Rebecca Tu
    TechNet Community Support

  • How do I move mail from an old server to a new server?

    I am rebuilding my server. The new server runs on OS X 10.9.4 with Server 3.1.2. The old server ran OS X 10.9.x and Server 3.x (the exact versions are not known).
    Within the folder /Library/Server/Mail, I found the email stores for both systems.  I have gone through each folder and identified the 36 character string that serves to identify the user's mailbox and paired each one to a user id on both systems.  On the old system, there are multiple mailboxes for some users, and I think it is a result of the users being deleted and recreated: perhaps the system identified the identical name and assumed that the user might be different and therefore created a unique 36 character id for the mail system.
    The trick is, I am trying to recover the mail from the old server.
    I have attempted to copy the files which are human readable and formatted for SMTP transmission to the new server under the correct mailbox corresponding to the owning user (see screen shots below). The simple act of copying the files has not made these files visible via the IMAP protocol. I have tried restarting the mail service hoping that the Server app would rebuild whatever indexes need to be built so that the mail can be served via IMAP, and that has not worked either.
    The question is, how do I get the mail from the old server mail boxes into the new server mailboxes?
    This screen shot shows the location of one mail collection at /Library/Server/Mail/Data/mail/[userid].  Mail sits in the "new" folder only for a moment before being processed and put into the "cur" folder.  Copying mail from the old server into the "new" folder produces an empty "new" folder, but one can see the files populate briefly before they are moved into the "cur" folder.
    The next screen shot shows one email opened in TextEdit.  The format should look very familiar.  This is the same format that one would use to send SMTP requests to an SMTP server.  This particular example happens to be an email from a Gmail account to the PediatricHeartCenter.org domain to test the mail system when the old server was set up.  It was sent on 24 Jan 2014 and had text reading "Intended for Mavericks1. -Jared".

    On further research, I have learned that OS X Server sets Dovecot to use the MailDir format.  The email messages can be removed from the folders and put back, and as long as they were present in the folder to begin with (received by Dovecot originally), they reflect in the Mail.app on client computers.  Deleting a file in the "cur" folder causes the file to disappear in Mail.app. Copying the file back into the "cur" folder will cause the file to reappear without any modification of an index file or any other system component, as long as the file was properly formatted by Dovecot to be identifiable to that folder.
    According to Dovecot.org's review of MailDir found here (http://wiki2.dovecot.org/MailboxFormat/Maildir), the file name can be broken into simple pieces: "[unixtimestamp].[process id].[hostName],S=<message size>,W=<virtual message size>/2,[status tags]".  The original MailDir++ specification requires the string ":2," to appear after the virtual size, but this file naming format is not legal in Mac OS X, so Dovecot is modified by Apple to use "/2," instead.
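    As an illustration only, the filename layout above can be parsed with a short Python sketch. The field names and the "/2," separator follow the Apple-modified convention just described; real filenames may carry extra uniqueness identifiers that this simple pattern does not handle:

```python
import re

# Sketch: parse the Apple-modified Maildir filename layout described above,
# i.e. [unixtimestamp].[pid].[host],S=<size>,W=<virtual size>/2,[status tags]
FILENAME_RE = re.compile(
    r"(?P<timestamp>\d+)\.(?P<pid>\d+)\.(?P<host>[^,]+)"
    r",S=(?P<size>\d+),W=(?P<virtual_size>\d+)"
    r"/2,(?P<flags>.*)$"
)

def parse_maildir_name(name):
    """Return the filename's components as a dict, or raise ValueError."""
    m = FILENAME_RE.match(name)
    if m is None:
        raise ValueError("not a recognized Maildir filename: %r" % name)
    return m.groupdict()
```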
    The Dovecot's wiki describes inserting new messages as follows:
    Mail delivery
    Qmail's how a message is delivered page suggests to deliver the mail like this:
    1. Create a unique filename (only "time.pid.host" here; later the Maildir spec was updated to allow more uniqueness identifiers).
    2. Do stat(tmp/<filename>). If the stat() found a file, wait 2 seconds and go back to step 1.
    3. Create and write the message to tmp/<filename>.
    4. link() it into the new/ directory. Although not mentioned here, the link() could again fail if the mail existed in the new/ dir. In that case you should probably go back to step 1.
    All this trouble is rather pointless. Only the first step is what really guarantees that the mails won't get overwritten, the rest just sounds nice. Even though they might catch a problem once in a while, they give no guaranteed protection and will just as easily pass duplicate filenames through and overwrite existing mails.
    Step 2 is pointless because there's a race condition between steps 2 and 3. PID/host combination by itself should already guarantee that it never finds such a file. If it does, something's broken and the stat() check won't help since another process might be doing the same thing at the same time, and you end up writing to the same file in tmp/, causing the mail to get corrupted.
    In step 4 the link() would fail if an identical file already existed in the maildir, right? Wrong. The file may already have been moved to cur/ directory, and since it may contain any number of flags by then you can't check with a simple stat() anymore if it exists or not.
    Step 2 was pointed out to be useful if clock had moved backwards. However again this doesn't give any actual safety guarantees, because an identical base filename could already exist in cur/. Besides if the system was just rebooted, the file in tmp/ could probably be even overwritten safely (assuming it wasn't already link()ed to new/).
    So really, all that's important in not getting mails overwritten in your maildir is the step 1: Always create filenames that are guaranteed to be unique. Forget about the 2 second waits and such that the Qmail's man page talks about.
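    Assuming a standard Maildir layout on disk, the delivery steps quoted above can be sketched in Python. The fsync and cleanup details are illustrative; a real MDA would also add the extra uniqueness identifiers the wiki mentions:

```python
import os
import socket
import time

def deliver(maildir, message_bytes):
    """Sketch of Maildir delivery as described above: write the message
    to tmp/ under a unique name, then link() it into new/.  A reader
    later moves it to cur/ and appends the '/2,<flags>' suffix."""
    # Step 1: a name unique across time, processes, and hosts.
    unique = "%d.%d.%s" % (time.time(), os.getpid(), socket.gethostname())
    tmp_path = os.path.join(maildir, "tmp", unique)
    new_path = os.path.join(maildir, "new", unique)
    # Step 3: write the full message to tmp/ and flush it to disk.
    with open(tmp_path, "wb") as f:
        f.write(message_bytes)
        f.flush()
        os.fsync(f.fileno())
    # Step 4: link() into new/, then remove the tmp/ copy.
    os.link(tmp_path, new_path)
    os.unlink(tmp_path)
    return new_path
```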
    The process described by the QMail man page referenced above suggests that as long as a file is placed in the "new" folder, a mail reader can access it.  The mail reader then moves the file to the "cur" folder and "cleans up" the "new" folder.  This is clearly happening in OS X, because the messages are moving from "new" to "cur", but IMAP is still not serving these foreign messages to the remote readers.
    The thought crossed my mind that perhaps it is the fact that the host name does not match, which would cause the failure, however changing the "host" portion of the name from the old-server to the new-server did not fix the issue.  Even with the new server name in the file name, the inserted message fails to appear in client Mail applications.
    Within the file there is header information that still references the old machine. I would prefer not to change the email files, because doing so would violate the integrity of the messages, could take a lot of time, and carries the risks of poor automated processing. The header information should not matter to Dovecot, because the wiki page describing Maildir notes that neither Dovecot nor Dovecot's implementation of IMAP refers to a message's header information when moving and serving these mail files.
    Unlike when using mbox as mailbox format, where mail headers (for example Status, X-UID, etc.) are used to determine and store meta-data, the mail headers within maildir files are (usually) not used for this purpose by dovecot; neither when mails are created/moved/etc. via IMAP nor when maildirs are placed (e.g. copied or moved in the filesystem) in a mail location (and then "imported" by dovecot). Therefore, it is (usually) not necessary to strip any such mail headers at the MTA, MDA or LDA (as it is recommended with mbox).
    This paragraph leads me to believe that once the mailbox is identified, the content of the file becomes irrelevant to the system which manages it. This suggests that we should be able to inject messages into a mailbox and have the system serve them as though they had belonged in that mailbox all along. Yet I have not found a way to do this.
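    One way to test the injection idea above without hand-crafting filenames is Python's standard-library mailbox.Maildir class, which performs the tmp/ to new/ delivery steps itself and generates fresh unique names. The directory paths here are placeholders, and whether Dovecot's IMAP view then picks the messages up is exactly the open question in this thread:

```python
import mailbox

def import_messages(src_dir, dst_dir):
    """Copy every message from one Maildir into another.  Maildir.add()
    writes each message to tmp/ and moves it into new/ under a freshly
    generated unique filename, so old hostnames never appear in the name."""
    src = mailbox.Maildir(src_dir, factory=None, create=False)
    dst = mailbox.Maildir(dst_dir, factory=None, create=True)
    count = 0
    for key in src.keys():
        dst.add(src.get_message(key))
        count += 1
    return count
```

    If messages copied this way still fail to appear over IMAP, the remaining suspect would be Dovecot's own index files (e.g. dovecot-uidlist) rather than the message files themselves.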

  • How can I move the distribution database to a new server?

    I need to migrate an old distribution database to a new VM. My understanding is that you can detach/attach the distribution DB to make this easier. What are the 'gotchas' in this process? Do I need to detach/attach the system databases as well? The distributor is facilitating data from Oracle to SQL Server.
    Another question: what are some good benchmarks for figuring out how much horsepower I should have in the VM that runs distribution?
    Thanks,
    phil

    Hi philliptackett77,
    As you described, you want to migrate the distribution database to a new server. Based on my research, you need to remove replication, create the distribution on the new server, and recreate the publications and subscriptions, as in Satish's post. So you don't need to detach or attach the distribution database or the system databases.
    To make this process simple, you could use SQL Server Management Studio (SSMS) to generate scripts, then run those scripts to recreate or drop publications and subscriptions, as in the screenshot below. Checking ‘To create or enable the components’ generates the script for creating the publications and subscriptions, and checking ‘To drop or disable the components’ generates the script for dropping them.
    Firstly, please use SSMS to generate the script which is used to create publications and subscriptions.
    1. Connect to the Publisher or Subscriber in SSMS, and then expand the server node.
    2. Right-click the Replication folder, and then click Generate Scripts.
    3. In the Generate SQL Script dialog box, check ‘To create or enable the components’.
    4. Click Script to File.
    5. Enter a file name in the Script File Location dialog box, and then click Save. A status message is displayed.
    6. Click OK, and then click Close.
    For more information about the process, please refer to the article:
    http://msdn.microsoft.com/en-us/library/ms152483.aspx
    Secondly, following the steps above, check ‘To drop or disable the components’ to generate the script used to drop publications and subscriptions. Then run the script to drop them.
    Thirdly, please disable distribution using Transact-SQL or SSMS following the steps in the article:
    http://technet.microsoft.com/en-us/library/ms152757(v=sql.105).aspx.
    Fourthly, please create the distribution at the new server using Transact-SQL or SSMS following the steps in the article:
    http://msdn.microsoft.com/en-us/library/ms151192.aspx#TsqlProcedure.
    Last, please run the script generated in the first step to recreate publications and subscriptions.
    Regards,
    Michelle Li
