Import on new server

Hi All,
I have taken an export from one server. On that server, all SQL and PL/SQL run very well.
Now I am trying to import that dump on a new server with the same configuration as the previous one.
But the SQL and PL/SQL do not run well on this server; they take a long time to execute.
What do I need to check?
Is it database statistics, or is there something else that needs to be checked?
Thanks,
Raj

Raj wrote:
Is it database statistics, or is there something else that needs to be checked?

See the advice in "Reports Take more longer time when changing the database version".
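In this situation the first thing to rule out is missing or stale optimizer statistics on the new database. A minimal sketch, assuming a schema named SCOTT (a placeholder; substitute and repeat for each imported schema):

-- Regather optimizer statistics for one imported schema
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'SCOTT',                        -- placeholder schema name
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);                          -- also gather index statistics
END;
/

-- Quick check of when statistics were last gathered
SELECT table_name, num_rows, last_analyzed
FROM   dba_tables
WHERE  owner = 'SCOTT';

If the statistics turn out to be current, comparing optimizer-related parameters between the two instances is the next usual suspect.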

Similar Messages

  • Import to new Server with Standard Database Failing at 5%

    Since the new database edition (Basic, Standard, Premium) preview has gone live, I have been trying to import an automatically exported bacpac onto a new server and into a Standard S1 database using the portal UI. So far every attempt has failed at roughly 5%, and it never appears to move past that point. The database is roughly 26 GB, the bacpac was created with the automatic export on a Business edition database, and I have selected Standard, S1 and 30 GB for the new database. I have tried three times now with no success and have no idea what to try next.
    Any ideas on how to get this database into the new edition?

    Hello,
    I also had the same problem with much smaller databases. The only solution for the moment is to use Premium (maybe Premium 2 or 3) to import the database.
    The reason is that when I try to import the database, I see on the dashboard monitor that I use 100% of the available Write_log metric for the edition, and in some way the server is "slowing" my import requests to keep me within my edition limits. Of course, with this policy it is almost impossible to import relatively large databases into these editions. I hope Microsoft engineers find a solution to this problem, or remove the limit for this metric.
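    As a side note, if you want to see how far the import actually got and which error it ended with, the operation history is exposed in the master database of the logical server. A sketch (the exact column list may vary by service version):

    -- Run against the master database of the Azure SQL logical server
    SELECT operation, state_desc, percent_complete,
           error_code, error_desc, start_time, last_modify_time
    FROM   sys.dm_operation_status
    ORDER  BY start_time DESC;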
    Regards,
    Dimitris

  • New server for hyperion

    Hi,
    I would like to know how I can find out which latest patch I should go to for Hyperion (9.3.1, Planning, SS, Essbase), or what the approach should be. I need to migrate the Hyperion servers and will install and configure the Hyperion components.
    Secondly, I know installation, configuration, backup and migration, but what tests should I run on Windows 2003 R2? I mean, do I need anything like IIS for Planning? And I don't think there is anything to do with the environment variables, as they should be created automatically at installation time. Or should I use the existing environment variables to create them on the new server?
    Regards
    Kumar

    Hi,
    I am going to replace a Hyperion server.
    I know installation, configuration, backup and migration, but what tests should I run on Windows 2003 R2? (I mean, do I need anything like IIS for Planning?)
    Secondly, I don't think there is anything to do with the environment variables, as they should be created automatically at the time of a fresh installation. Or should I take a backup of the environment variables from the old server and then import them to the new server?

  • SBS2008: Move email from Exchange 2007 to new server with Exchange 2013

    We have an old server (SBS2008) and plan to buy a new server (Server 2012). I need to move all the Exchange emails, contacts and calendars to the new server. We will no longer use the old server.
    Is there a document or migration tool that will help me understand how to move this data from the old Exchange server to the new one?
    Old Server:
    SBS2008 running Exchange 2007
    New Server:
    Server 2012
    Exchange 2013
    Any help is appreciated!

    Hi Dave,
    It can be done, and as Larry suggested, you will want to consider two Server 2012 installs in order to achieve an environment that looks like your current SBS roles; Exchange 2013 on an Active Directory domain controller isn't a good long-term solution (SBS did this for you in the past).
    For your size of operation, a virtual server host with a Windows Server 2012 license and two virtual machines would probably be a suitable design model. In this manner, you have a Server 2012 license that permits a 1+2 arrangement (one host for virtualization, up to 2 virtual machines on the same host).
    There's no migration tool. That comes with experience and usually trial and error. You earn the skills in this migration path, and for the average SBS support person you should plan on spending 3x (or more) your effort estimate in hours planning your migration.
    You can find a recommended migration path at this link to give you an idea of the steps; it's not going to cover you point by point for an SBS 2008 to Server 2012 w/Exchange 2013 migration, but the high points are in here. If it looks like something you would be comfortable with, then you should research more.
    http://blogs.technet.com/b/infratalks/archive/2012/09/07/transition-from-small-business-server-to-standard-windows-server.aspx
    Specifically around integrating Exchange 2013 into an Exchange 2007 environment, guidance can be found here:
    http://technet.microsoft.com/en-us/library/jj898582(v=exchg.150).aspx
    If that looks like something beyond your comfort level, then you might consider building a new Server 2012 with Exchange 2013 environment out as new, manually exporting your Exchange 2007 mailbox contents (to PST), importing them into the new mail server, and migrating your workstations out of the old domain into the new domain. Whether this is more or less work at your workstation count depends on a lot of variables.
    If you have more questions about the process, update the thread and we'll try to assist.
    Hopefully this info answered your original question.
    Cheers,
    -Jason
    Jason Miller B.Comm (Hons), MCSA, MCITP, Microsoft MVP

  • New server and/or CA certificate for connection from custom authentication

    We are running Access Manager version 72005Q4 in the Sun ONE Web Server 6.1SP5 B06/23/2005 container with java build 1.5.0_07-b03. I run a custom authentication module which checks sessions against our university single sign on system which is CAS (from Yale/Jasig). The checks are essentially https calls. All this has been working well for us for the last couple of years.
    I would like to migrate the certificate used on the university CAS system from a Verisign certificate to a wildcard certificate issued by the IPS CA in Spain -- these are in most browsers but are not in the standard batch of cacerts CAs -- and are free for .edu domains.
    My other java based authentication plugins (Blackboard, custom apps etc) have worked fine once I import the certificate into the cacerts for the java container, but I'm missing something (obvious probably) about importing this certificate so that my amserver custom authentication module can connect to the CAS server once the CAS server is using the new certificate.
    Could anyone provide guidance on where I need to import this server certificate (or preferably the IPS CA) in order to allow the custom authentication module to work properly? I assume this same problem has been solved by people wishing to connect from the amserver to services with self signed certificates. For some reason I'm finding the debugging unexpectedly difficult, I'll outline some of those details below.
    Relevant things I've tried so far:
    Import both the server cert and the IPS CA into the cacerts of the java container identified in the web server server.xml /usr/jdk/entsys-j2se.
    Import the IPS CA into the web server cert8 style db via the web admin server.
    The debugging has surprised me a bit, as I'm not getting an error that is explicitly SSL related. It almost seems like the URLConnection object ends up using an HttpURLConnection rather than an HttpsURLConnection and never gives me a cert error, but rather a connection refused, since there is no non-SSL service running on CAS. The same code pointed at the server running the Verisign cert works as expected.
    Part of the stack:
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: java.net.ConnectException: Connection refused
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at java.net.PlainSocketImpl.socketConnect(Native Method)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at java.net.Socket.connect(Socket.java:516)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at java.net.Socket.connect(Socket.java:466)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.NetworkClient.doConnect(NetworkClient.java:157)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.www.http.HttpClient.openServer(HttpClient.java:365)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.www.http.HttpClient.openServer(HttpClient.java:477)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.www.http.HttpClient.<init>(HttpClient.java:214)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.www.http.HttpClient.New(HttpClient.java:287)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.www.http.HttpClient.New(HttpClient.java:311)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.www.protocol.http.HttpURLConnection.setNewClient(HttpURLConnection.java:489)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.www.protocol.http.HttpURLConnection.setNewClient(HttpURLConnection.java:477)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.www.protocol.http.HttpURLConnection.writeRequests(HttpURLConnection.java:422)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:937)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at edu.yale.its.tp.cas.util.SecureURL.retrieve(Unknown Source)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at edu.yale.its.tp.cas.client.ServiceTicketValidator.validate(Unknown Source)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at edu.fsu.ucs.authentication.providers.CASAMLoginModule.process(CASAMLoginModule.java:86)
    [28/Mar/2008:17:21:54] warning (25335): CORE3283: stderr: at com.sun.identity.authentication.spi.AMLoginModule.wrapProcess(AMLoginModule.java:729)
    The relevant bit of code from SecureURL.retrieve looks as follows:
    URL u = new URL(url);
    if (!u.getProtocol().equals("https"))
        throw new IOException("only 'https' URLs are valid for this method");
    URLConnection uc = u.openConnection();
    uc.setRequestProperty("Connection", "close");
    r = new BufferedReader(new InputStreamReader(uc.getInputStream()));
    String line;
    StringBuffer buf = new StringBuffer();
    while ((line = r.readLine()) != null)
        buf.append(line + "\n");
    return buf.toString();
    } finally { ...
    The fact that this same code works fine in other authentication modules running outside the amserver (in other web containers as well, Tomcat and Resin for example) on Java 1.5, both with the new CA and with self-signed certs that I've imported into the appropriate cacerts file, leads me to believe that I'm either importing the certificate into the wrong store, or that there is some additional step needed for the amserver in the Sun Web container.
    Thank you very much for any insights and help,
    Ethan

    I thought since this has had a fair number of views I would give an update.
    I have been able to confirm that the custom authentication module is using the cert8 db defined in the AMConfig property com.iplanet.am.admin.cli.certdb.dir as documented. I do seem to have a problem using the certificate to make outgoing connections, even though the certificate verifies correctly for use as a server certificate. This is likely a question for a different forum, but just to show what I'm looking at:
    root@jbc1 providers#/usr/sfw/bin/certutil -V -n "FSU Wildcard Certificate" -d /opt/SUNWwbsvr/alias -P https-jbc1.ucs.fsu.edu-jbc1- -u V
    certutil: certificate is valid
    root@jbc1 providers#/usr/sfw/bin/certutil -V -n "FSU Wildcard Certificate" -d /opt/SUNWwbsvr/alias -P https-jbc1.ucs.fsu.edu-jbc1- -u C
    certutil: certificate is invalid: Certificate type not approved for application.
    root@jbc1 providers#/usr/sfw/bin/certutil -M -n "FSU Wildcard Certificate" -d /opt/SUNWwbsvr/alias -P https-jbc1.ucs.fsu.edu-jbc1- -t uP,uP,uP
    root@jbc1 providers#/usr/sfw/bin/certutil -V -l -n "FSU Wildcard Certificate" -d /opt/SUNWwbsvr/alias -P https-jbc1.ucs.fsu.edu-jbc1- -u C
    FSU Wildcard Certificate : Certificate type not approved for application.
    So it could be that I don't understand how to use certutil to get the permissions I want, or it could be that using the same certificate for both server and client functions is not supported -- though you can see why this would be a common case with wildcard certificates.
    BTW, for those interested, it did seem to be the case that when the certificate failure occurred, the URLConnection then attempted to bind to port 80 in cleartext even though the URL was clearly https. I'm sure this was just an attempt to help out a malformed URL, but it seems that the URLConnection implementation in the amserver would have swapped traffic over to cleartext if that port had been open on the server I was making the https connection to; that seems dangerous to me, and I would not have wanted it to quietly work that way, exposing sensitive information to the network.
    This was why I was getting back a connection refused instead of a certificate exception. The URLConnection implementation used by the amserver is defined by the java.protocol.handler.pkgs=com.iplanet.services.comm argument passed to the JVM, and I imagine this is done because the amserver pre-dates the inclusion of the sun.net.www.protocol handlers, but I don't know; there may be reasons why the amserver wants its own handler. I only noticed that this is what was going on when I was casting the HttpsURLConnection objects to other types while trying to diagnose the certificate problem. I would be interested in hearing if anyone knows of a reason not to use sun.net.www.protocol with the amserver.
    After switching to the sun.net.www.protocol handler I was able to get my certificate errors rather than the "Connection Refused", which is what led me to the above questions about certutil.

  • How do I move mail from an old server to a new server?

    I am rebuilding my server. The new server runs on OS X 10.9.4 with Server 3.1.2. The old server ran OS X 10.9.x and Server 3.x (the exact versions are not known).
    Within the folder /Library/Server/Mail, I found the email stores for both systems.  I have gone through each folder and identified the 36 character string that serves to identify the user's mailbox and paired each one to a user id on both systems.  On the old system, there are multiple mailboxes for some users, and I think it is a result of the users being deleted and recreated: perhaps the system identified the identical name and assumed that the user might be different and therefore created a unique 36 character id for the mail system.
    The trick is, I am trying to recover the mail from the old server.
    I have attempted to copy the files which are human readable and formatted for SMTP transmission to the new server under the correct mailbox corresponding to the owning user (see screen shots below). The simple act of copying the files has not made these files visible via the IMAP protocol. I have tried restarting the mail service hoping that the Server app would rebuild whatever indexes need to be built so that the mail can be served via IMAP, and that has not worked either.
    The question is, how do I get the mail from the old server mail boxes into the new server mailboxes?
    This screen shot shows the location of one mail collection at /Library/Server/Mail/Data/mail/[userid].  Mail sits in the "new" folder only for a moment before being processed and put into the "cur" folder.  Copying mail from the old server into the "new" folder produces an empty "new" folder, but one can see the files populate briefly before they are moved into the "cur" folder.
    The next screen shot shows one email opened in TextEdit.  The format should look very familiar.  This is the same format that one would use to send SMTP requests to an SMTP server.  This particular example happens to be an email from a Gmail account to the PediatricHeartCenter.org domain to test the mail system when the old server was set up.  It was sent on 24 Jan 2014 and had text reading "Intended for Mavericks1. -Jared".

    On further research, I have learned that OS X Server sets Dovecot to use the MailDir format.  The email messages can be removed from the folders and put back, and as long as they were present in the folder to begin with (received by Dovecot originally), they reflect in the Mail.app on client computers.  Deleting a file in the "cur" folder causes the file to disappear in Mail.app. Copying the file back into the "cur" folder will cause the file to reappear without any modification of an index file or any other system component, as long as the file was properly formatted by Dovecot to be identifiable to that folder.
    According to Dovecot.org's review of MailDir found here (http://wiki2.dovecot.org/MailboxFormat/Maildir), the file name can be broken into simple pieces: "[unixtimestamp].[process id].[hostName],S=<message size>,W=<virtual message size>/2,[status tags]". The original MailDir++ specification requires the string ":2," to appear after the virtual size, but this file naming format is not legal in Mac OS X, so Dovecot is modified by Apple to use "/2," instead.
    The Dovecot's wiki describes inserting new messages as follows:
    Mail delivery
    Qmail's how a message is delivered page suggests to deliver the mail like this:
    1. Create a unique filename (only "time.pid.host" here; the later Maildir spec has been updated to allow more uniqueness identifiers).
    2. Do stat(tmp/<filename>). If the stat() found a file, wait 2 seconds and go back to step 1.
    3. Create and write the message to tmp/<filename>.
    4. link() it into the new/ directory. Although not mentioned here, the link() could again fail if the mail existed in the new/ dir. In that case you should probably go back to step 1.
    All this trouble is rather pointless. Only the first step is what really guarantees that the mails won't get overwritten, the rest just sounds nice. Even though they might catch a problem once in a while, they give no guaranteed protection and will just as easily pass duplicate filenames through and overwrite existing mails.
    Step 2 is pointless because there's a race condition between steps 2 and 3. PID/host combination by itself should already guarantee that it never finds such a file. If it does, something's broken and the stat() check won't help since another process might be doing the same thing at the same time, and you end up writing to the same file in tmp/, causing the mail to get corrupted.
    In step 4 the link() would fail if an identical file already existed in the maildir, right? Wrong. The file may already have been moved to cur/ directory, and since it may contain any number of flags by then you can't check with a simple stat() anymore if it exists or not.
    Step 2 was pointed out to be useful if clock had moved backwards. However again this doesn't give any actual safety guarantees, because an identical base filename could already exist in cur/. Besides if the system was just rebooted, the file in tmp/ could probably be even overwritten safely (assuming it wasn't already link()ed to new/).
    So really, all that's important in not getting mails overwritten in your maildir is the step 1: Always create filenames that are guaranteed to be unique. Forget about the 2 second waits and such that the Qmail's man page talks about.
    The process described by the Qmail man page referenced above suggests that as long as a file is placed in the "new" folder, a mail reader can access it. The mail reader then moves the file to the "cur" folder and "cleans up" the "new" folder. This is clearly happening in OS X, because the messages are moving from "new" to "cur", but IMAP is still not serving these foreign messages to the remote readers.
    The thought crossed my mind that perhaps it is the mismatched host name that causes the failure; however, changing the "host" portion of the name from the old server to the new server did not fix the issue. Even with the new server name in the file name, the inserted message fails to appear in client Mail applications.
    Within the file there is header information that still references the old machine. I'd like not to have to change the email files, because this would violate the integrity of the messages. Also, this might take a lot of time or incur risks associated with poor automated processing. The header information should not be referenced by Dovecot, because the wiki page describing MailDir notes that neither Dovecot nor Dovecot's implementation of IMAP refers to the message's header information when moving and serving these mail files:
    Unlike when using mbox as mailbox format, where mail headers (for example Status, X-UID, etc.) are used to determine and store meta-data, the mail headers within maildir files are (usually) not used for this purpose by dovecot; neither when mails are created/moved/etc. via IMAP nor when maildirs are placed (e.g. copied or moved in the filesystem) in a mail location (and then "imported" by dovecot). Therefore, it is (usually) not necessary, to strip any such mail headers at the MTA, MDA or LDA (as it is recommended with mbox).
    This paragraph leads me to believe that once the mailbox is identified, the content of the file becomes irrelevant to the system that manages it. This suggests that we should be able to inject messages into a mailbox and have the system serve them as though they had belonged to that mailbox all along. Yet I have not found a way to do this.

  • Cannot add a new server in existing server pool

    Hi,
    I am trying to add a new server into an existing server pool.
    I have the same agent password, the same root password (i don't think is important).
    It discovers the server, and it shows under Unassigned Servers.
    When I try to add it into the existing server pool, it fails with:
    Job Internal Error (Operation)com.oracle.ovm.mgr.api.exception.FailedOperationException: OVMAPI_4010E Attempt to send command: dispatch to server: vmsibm2 failed. OVMAPI_4004E Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 configure_server_for_cluster lun /dev/mapper/35000144f85151729 0004fb0000050000c696b251dc81a087 , Status: org.apache.xmlrpc.XmlRpcException: exceptions.OSError:[Errno 2] No such file or directory
    Any ideas?
    Regards
    Nicolae

    Hi,
    I can see your point...
    From my error :Server Failed Command: dispatch https://?uname?:[email protected]:8899/api/2 configure_server_for_cluster lun /dev/mapper/35000144f85151729 0004fb0000050000c696b251dc81a087 , Status: org.apache.xmlrpc.XmlRpcException: exceptions.OSError:[Errno 2] No such file or directory
    /dev/mapper/35000144f85151729 is the path that the server pool uses for its own...
    and
    0004fb0000050000c696b251dc81a087 is the pool file system...
    On the Storage menu, under SAN Servers - Unmanaged iSCSI Storage Array - where I see my storage (which is iSCSI), I added this new server at Add/Remove Admin Servers.
    I also went to Rescan Physical Disks for my new server.
    When I connect to my server with PuTTY and run
    df -h
    I don't see any storage...
    I believe I missed one step but I can't find which one...
    Regards
    Nicolae

  • Migrating Reporting Services to new Server - Subscriptions are not transferring

    Hello,
    I have an instance of SQL Server 2008 R2 running on Windows Server 2008. It is set up as a reporting server. There are many subscriptions that are scheduled and run on this server. We want to move to Windows Server 2012 and SQL Server 2012. So we have built out a new VM, and I have exported the ReportServer and ReportServerTempDB databases from the current server and imported them to the new server. I have resolved the one orphaned user that resulted, and then went to look for the subscriptions so that I could disable them so they wouldn't run. I could not find any.
    select * from msdb.dbo.sysjobs where enabled = 1 and category_id = 100.
    no rows...
    I had read from other posts to let it sit for a few days and they will appear.  I have waited 2 weeks.
    So, what am I missing?  I would prefer to do a clean install and migrate the data over rather than upgrading the OS and SQL.
    Thanks

    Hi Sql Dude,
    Per my understanding, you can't find any information related to the subscriptions in the sysjobs table after the migration, right?
    Your issue can be caused by many factors. Please check the details below:
    Please check whether you can see all the subscriptions in Report Manager and can create new subscriptions. The ReportServer database used by SSRS to store the subscriptions maintains a record of the subscription owner (as well as audit fields) which tracks the user accounts that have created/modified the subscriptions.
    If you can't see the subscriptions in Report Manager but can create new subscriptions, the issue can be caused by the subscriptions having been created on the original non-domain server under a Local User account. Once the instance was migrated to a new server on the domain, that Local User was no longer available. Every user with access to the ReportServer database has an entry created in the Users table and a unique GUID generated.
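    To see which owner each subscription currently points at, a query along these lines against the restored ReportServer database should work (table and column names as they appear in a standard ReportServer catalog; verify them against your own instance):

    -- List subscriptions together with the user record that owns them
    SELECT s.SubscriptionID, s.Description, s.LastStatus, u.UserName AS Owner
    FROM   dbo.Subscriptions AS s
           JOIN dbo.Users AS u ON u.UserID = s.OwnerID;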
    To work around the issue, you can run a SQL UPDATE query to change the OwnerID and ModifiedByID fields on the Subscriptions table so that they point to the GUID of the equivalent user on the domain. Tip:
    Change the Owner of SQL Reporting Services Subscription
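    A rough sketch of that kind of fix (the account names 'OLDSERVER\ReportUser' and 'DOMAIN\ReportUser' are hypothetical placeholders; substitute your own and test against a backup copy first):

    -- Re-point subscriptions owned by the orphaned local account to the matching domain account
    UPDATE s
    SET    s.OwnerID      = new_u.UserID,
           s.ModifiedByID = new_u.UserID
    FROM   dbo.Subscriptions AS s
           JOIN dbo.Users AS old_u ON old_u.UserID = s.OwnerID
           JOIN dbo.Users AS new_u ON new_u.UserName = 'DOMAIN\ReportUser'
    WHERE  old_u.UserName = 'OLDSERVER\ReportUser';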
    If you can see all the subscriptions in Report Manager but can't find any jobs, please try to edit and re-save a subscription to see whether that recreates the job, and please also check the log files for error messages; the path is like:
    C:\Program Files\Microsoft SQL Server\MSRS11.SQLEXPRESS\Reporting Services\LogFiles
    If the above didn't help, please refer to the similar thread below:
    Can't access SSRS 2008 R2 subscriptions after migration
    If you still have any problem, please feel free to ask.
    Regards,
    Vicky Liu
    Vicky Liu
    TechNet Community Support

  • Migrating Crystal Reports Server XI to a new Server

    Post Author: david_okeefe
    CA Forum: Deployment
    Hi,
    I am trying to migrate an existing Crystal Reports installation from an old server to a new server. To prevent interruption of service, the new server must be able to be online at the same time as the old server (thus it must have a different name). I installed Crystal Reports Server XI and told it to create its CMS database by copying the existing CMS database. Unfortunately, I received an unspecified error ("failed to received first object from source database"). After that, I opted to create a new, default CMS database. I tried using the import wizard to migrate the reports to the new server. The import wizard managed to bring over all of the folders, users, and groups, but not any of the reports. When I looked at the migration log, it gave me an error stating that it could not find the parent folder. I have no real idea how to address this issue.
    On a related note, I also do not know how to point Crystal at the reports repository on the new server. I tried using the Data Source Migration wizard to copy the reports from the old server to the new server. Unfortunately, it didn't appear to register any of the new reports.
    Given what I have already tried, I am not sure how to proceed. Any help would be appreciated.
    Thanks,
    David


  • Windows users don't work after migrating from old to new server!

    We have done a complete re-install on our XServe with OD. We have about 10 Windows users, and after the installation all their settings and mail are gone. All the "normal" files are there though.
    I'm not sure we have done it the right way though: we did a backup from the old server (a bootable copy with Super Duper), then we formatted and installed everything. We made new accounts (with different names, if that's good to know) and copied the users' home folders to the new location. The Mac clients seem to work well, but all the settings on the Windows clients are gone...
    Is there an easier way to this? We still have the workable copy from the old installation. There seems to be some kind of export/import way to do this, but I haven't got a clue how to do that...
    Please help!

    davidh,
    We didn't reintegrate smb.conf, but set the new server up just like the old one. We did however compare these files to see that the vital parts (netlogon, shares and so on, and of course basic settings) were correct.
    We also copied the user files and profiles and made them identical on the new server, except for placing them under the new usernames.
    Regarding the Local Settings folder, it doesn't exist on the old server; that's one of the weird things. We've checked the profile for a user on the client machine, and it is a roaming profile. That's why we're a bit puzzled as to why the login works and all the files are there, but the user preferences and Outlook don't work.
    I know I've read somewhere that the Local Settings folder isn't replicated like the other files in a roaming profile, but I haven't finished checking up on that. I would expect Windows to take care of Outlook e-mail for a roaming profile as well, though; I mean, the user must be able to read his/her mail from any computer in the domain, so what else would the purpose of a roaming profile be?
    Except for the weird thing about us not being able to find the user preferences or Outlook files for the client amongst the files on the server, I feel we're missing something; apparently Windows isn't as straightforward as one would expect (not sure why I expected anything, come to think of it).
    We're going to give it a new go next weekend. Except for doing further research we're thinking of copying /etc/smb.conf and the files in /var/samba and /var/db/samba to the new server, along with exporting and importing the old user accounts to the new server, and then see if everything works as expected.
    If so, we'll see if we can change the account names in a nice way, it's really desired to do so.
    If not, we really need to do some more research, but if I'm not mistaken, the Samba-related files I just mentioned are the ones that pretty much make up the Windows services in OS X, isn't that so?
    Thanks!

  • Upgrade or re-install?  CE9 to BOE XI R2, new server has BOE XI old version

    We have an old server that's running Crystal Enterprise Reports version 9 and it is the current production machine.
    We also have another server that was purchased a couple years ago and BOE XI was loaded on it and the CE9 stuff was migrated to it.  But the migration was never completed and the person in charge of this left and the project has sat since then due to resource issues.
    I am now tasked with creating a plan for getting BOE XI R2 loaded on the "new" server and migrating the information from the CE9 server.  There are 2 options that I can see.
    1) Upgrade the existing BOE XI in place to R2.  Would doing this preserve configuration that had already been done?  What problems will be presented by the fact that information was already migrated once from CE9 and everything will need to be re-migrated?
    2) Completely uninstall BOE XI and then reinstall BOE XI R2.  Start from scratch with configuration and migrating information from the CE9 server.
    Also, I saw mention in another thread of obtaining the services of a consultant.  We typically do not do that unless it's absolutely necessary.  Would like your opinions on whether this is really complicated enough to warrant hiring outside help or if we should be able to learn what we need from documentation.  The person that did the original migration took training classes and we did not use a consultant.  Thanks for your help.

    Well, you said the XI environment is old, but if you want to, an in-place upgrade will keep what you have. We usually recommend migration vs. upgrade if a production server is installed, but in DEV, if all goes wrong your other option is to reinstall anyway, so you might as well try it out.
    What I think you will end up doing is migrating from CE9 to XI R2. It's a pretty tall task for anyone to assume they can guide you through a migration of 3 product versions in a simple forum post. This is why it's recommended to use a consultant, as they will get to know your existing environment and all the possible "gotchas".
    The import wizard is your tool for migrating reports, users, groups, folders, and even instances from CE9 to XIR2. You are welcome to perform this on your own and if you have a contract you can always open a message with support if you get stuck.
    Regards,
    Tim

  • Configuration of STMS in new server

    Currently I'm configuring STMS for a new server. This is to replace the current development server. I would like to know whether there is any way I can import all the transport requests, including those that are not yet released, to the new server. If the old request numbers are not imported, then all the programs currently in progress will need to be redone by the ABAPers, so is there any solution to this problem?

    Hi,
    Create a virtual system on the current DEV system (domain controller) using these links:
    http://www.saptechies.com/stms-in-single-system-by-using-virtual-system/
    http://help.sap.com/saphelp_smehp1/helpdata/en/44/b4a0db7acc11d1899e0000e829fbbd/content.htm
    After this, set this virtual system as the target system for the transports in the old DEV system. Once you release these transport requests, the data and cofiles for each transport are stored under the DIR_TRANS path (data/ and cofiles/).
    Regards,
    Srikishan

  • Want to install CE10 on a new server, using the old server's config

    Does anyone have a quick and easy set of instructions on how to do this? I have tried this several different ways with several different errors. Each time, I simply get to a dead end, and uninstall-reinstall on the new server.
    I have tried the import wizard, starting from scratch, everything.

    Hi Joe,
    Are most of the issues you're encountering related to scheduling?  If they are,  some of the items that come to mind are:
    -  Database connections (ODBC, native clients, etc...)
    -  If the scheduled jobs are based off of Events,  make sure you take note of the events/triggers and that they exist.
    Please add additional information.
    Regards,
    Wallie

  • Essbase Copy of Users to new Server

    Hi,
    I appreciate any help.
    How can I copy my existing Essbase users (in Shared Services mode) to another server?
    I'm using Hyperion version 11.1.1.3.
    Thanks

    I'm not too sure about the "What version will run on what platform", somebody else can chime in on that.
    As for transferring the applications...
    If you don't have a recent backup of the entire App/DB folders, take one before starting
    1. Log out users, and keep them out (this will help them not lose any data changes they may make during the process)
    2. Export level 0 data (or All data if the apps are small enough) from old server.
    3. If you use partitions, export the partitions to .xml files in EAS.
    4. Create applications/databases on new server with the same name(s) in EAS.
    5. In EAS, open outline of old app/db and "Save As" into new app/db.
    6. At a file level, copy all report scripts (.rep), load rules (.rul) and calc scripts (.csc) from old app/db folder to new app/db folder.
    7. Import the data into new application using the export taken in step 2. If you exported level 0, you need to run rollup scripts.
    8. If you exported partitions, import them to new application(s).
    Validate.
    Edited by: RobertR3 on Nov 12, 2010 8:47 AM
    BTW, this assumes straight Essbase apps, Planning applications would be different.

  • Move replication database to new server

    We have a production database that receives replication from 9 databases, and we are going to move the database to a new server.
    As the database is in production, we cannot stop services. We are thinking of creating a new database on the new server and importing the data from the production database; the size is approximately 100 GB.
    Question:
    I need to know how I can stop services for only a short time in order to change over to the new server.
    Thanks for your suggestions

    Hello Ivan.
    Before I would do anything like this, I would make sure that I have a good, valid backup (just in case!).
    I would check all the applications that are currently connecting to your Oracle database and make sure they are all connecting 'by name', not by IP. Then I would check the DNS server (make sure nslookup is working) to make sure that everything resolves correctly after moving the DB to another machine.
    Then check that the ORACLE_HOME etc. are in the same directories, and check that all the paths etc. in the configuration are the same.
    Hope this helps
    Kind Regards
