Migrating GW8 system to new servers

Going to be moving a GroupWise 8.0.3 system from SuSE 10 to SuSE 11. The system consists of two servers: server 1 runs the primary domain and a secondary domain with two post offices, and server 2 runs another secondary domain plus the GWIA and the WebAccess agent. There is also a server in the DMZ running the WebAccess application with a purchased certificate. What I plan to do is just scp the data over to the new servers and then reinstall and configure the agents on the new boxes. My question has to do with the server in the DMZ. We plan to leave it as is, and I was told that all that is needed is to reinstall WebAccess on the new box and copy the commgr.cfg from the new server to the DMZ server so it knows what to point to. No IP addresses are changing either. Wanted to see if that sounds about right, or if I am missing a step to get the WebAccess server in the DMZ to keep working correctly after the migration.

A little more info on the issue. The GroupWise 8 WebAccess application is running on a web server in the DMZ. The WebAccess agent runs on the SuSE 10 server along with the secondary domain it is associated with. We are moving that domain to a new SuSE 11 server, so the plan was to reinstall the WebAccess agent on the new SuSE 11 server, take the new commgr.cfg file it creates, copy it to the existing web server, and make sure the rights match. Is there anything else that needs to be done? Just want to make sure I am not missing anything here.
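In other words, something along these lines (the paths here - /gwsystem, webac80a, and /opt/novell/groupwise/webaccess - are just examples; substitute the real domain, gateway and WebAccess application locations):
# with the GroupWise agents stopped on the old SuSE 10 box, copy the domain (and post office) data across
scp -rp /gwsystem/secdom root@newsles11:/gwsystem/
# after reinstalling and configuring the WebAccess agent on the new server, push the
# freshly generated commgr.cfg out to the WebAccess application on the DMZ web server
scp /gwsystem/secdom/wpgate/webac80a/commgr.cfg root@dmzweb:/opt/novell/groupwise/webaccess/commgr.cfg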

Similar Messages

  • Migrating Solaris systems to new servers

    OK, here is the scenario: we have currently purchased two shiny new Sun T5220 servers for our NW7 systems.
    Our current systems run on Solaris 9.
    The new systems will be Solaris 10.
    Because of this, the only method I can think of to migrate the data is to do a clean install on the Solaris 10 systems,
    export the database from the Solaris 9 systems (you can't clone a Solaris 9 SAP system back onto a Solaris 10 box, due to the way zones and kernel parameters differ on S10),
    and then import that database into the new clean system on Solaris 10.
    That way, all the patching, config, data etc. should be on the same level.
    This will be done using the SAP standard tools available in sapinst.
    Unless anyone has any other ideas?
    Regards
    James

    SAP suggests using backup/restore or R3load.
    Yes - because that's the most uncomplicated way.
    What if I want to create the trace file manually, shut down database and SAP, copy over the data files/origlogs at OS level to the target system, then recover the system and install SAP?
    There's no issue with that if you know what you're doing.
    If you don't use the SAP-documented procedure, SAP does not provide any support for the copy itself.
    If you have a problem with the copy procedure itself you will be on your own; if the target system is already running you will still get support of course, because nobody asks you later how you did the copy.
    Markus

  • Cloning Apps 11.5.10.2 to new servers

    I am in the process of migrating our setup to new servers in a completely different domain, etc. I have never cloned before, much less tried to move to new servers.
    I have the DB cloned over and working properly. I then installed the infrastructure (it never asked about DB connection settings; not sure if it was supposed to yet).
    I have all the directories, as detailed in doc 230672.1, moved over and expanded in the appropriate new locations.
    When I execute adcfgclone.pl appsTier I get the following:
    Enter the APPS password [APPS]:
    apps_pass
    First Creating a new context file for the cloned system.
    The program is going to ask you for information about the new system:
    ERROR: context creation not completed successfully.
    Please check /tmp/adcfgclone.err file for errors
    The adcfgclone.err file just contains
    ./..jre/1.3.1/bin/java[21]: /usr/bin/basename: not found
    getconf: Unrecognized variable 'CPU_VERSION'
    ./..jre/1.3.1/bin/java[156]: /usr/bin/grep: not found
    ./..jre/1.3.1/bin/java[175]: /usr/bin/grep: not found
    ./..jre/1.3.1/bin/java[217]: ./..jre/1.3.1/bin/ ../bin/PA_RISC/native_threads/: cannot execute - Is a directory
    At this point, do I need to configure the dbc files to point to the new servers/ports etc.? The instructions don't say that I do, but I am at a loss. /usr/bin/basename does not exist on the source server either, so I assume it is looking for something else, but I am not sure what. Did I skip a step when unpacking the files before running adcfgclone.pl?
    Thanks for the input

    Hi,
    I have the DB cloned over and working properly. I then installed the infrastructure (It never asked about DB connection settings, not sure if it was supposed to yet)
    Were the database and the database listener created successfully? If yes, and you are able to connect to the database remotely, then there is nothing to worry about here.
    ./..jre/1.3.1/bin/java[21]: /usr/bin/basename: not found
    getconf: Unrecognized variable 'CPU_VERSION'
    ./..jre/1.3.1/bin/java[156]: /usr/bin/grep: not found
    ./..jre/1.3.1/bin/java[175]: /usr/bin/grep: not found
    ./..jre/1.3.1/bin/java[217]: ./..jre/1.3.1/bin/ ../bin/PA_RISC/native_threads/: cannot execute - Is a directory
    Did you install all the OS prerequisite software and packages as per Note 230672.1?
    At this point do I need to configure the dbc files to point to the new servers/ports etc, the instructions don't say that I do but I am at a loss. The /usr/bin/basename does not exist on the source server either so I assume it is looking for something else but I am not sure what. Did I skip a step when unpacking the files before running adcfgclone.pl?
    You do not have to configure any DBC files, as these will be created by Rapid Clone. Please verify that all the OS software/packages have been installed before re-running the postclone script.
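    As a quick sanity check before re-running adcfgclone.pl, something like this (illustrative only - the authoritative package list is in the note) can confirm that the basic shell utilities the JRE wrapper calls are present on the target node:
    for tool in basename grep awk sed ar ld make; do
        command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
    done
    echo "PATH=$PATH"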
    Thanks,
    Hussein

  • Migration of BW system to New Hardware and Upgrade to NW04s

    Hi,
    I'm not sure this is the forum for this post but here goes...
    We have recently started a project to migrate our current BW3.5 system to new hardware (HP Superdome) and then upgrade to NetWeaver 2004s.
    To do this, we've taken a copy of our current Dev 3.5 system and moved it to the new hardware, and we will convert it to Unicode and upgrade it to NW04s. Once this is complete, we plan to install NEW instances of QA and Prod on NW04s. We will TRANSPORT all our configuration and reload all the providers in QA and Prod. This will be a MASSIVE task, as we have hundreds of InfoProviders. Additionally, I'm concerned we will not pick up objects that are currently in the 3.5 Prod system (e.g. custom queries/workbooks, InfoPackages, process chains, variables, etc.) but not in the current 3.5 Dev system.
    My question is this - can we instead take copies of the current QA and Prod (3.5) systems and migrate, Unicode-convert, and upgrade them on the new Superdome hardware? Per Basis, this is NOT possible due to the size of the current systems - approx. 2.5 terabytes. But maybe half of the database is consumed by old PSAs, unused providers, etc. If we were to clean up our current system and do a database reorg to reduce the database size, would it be possible to do a system copy (Dev to Dev, QA to QA, Prod to Prod)? Any help/insight is much appreciated.
    Thanks,
    Senthil

    I had this experience in my previous project and I want to share it with you. We migrated our entire BW landscape from Alpha servers to HP Superdome servers.
    We took copies of all the boxes (Dev, QA, Prod), migrated them to the Superdome and upgraded to NW04s. In your case, however, you are not taking copies of QA and Production. That is not a realistic scenario: you would have to move all the objects and configuration from Dev to the respective systems and reload the data in Production.
    So taking copies of all the boxes, migrating them to the Superdome and upgrading to NW04s is the right way of doing it.

  • Exchange 2010 Migration - Decommissioning Multi Role Server and Splitting Roles to 2 new servers - Certificate Query

    Hi,
    I have been tasked with decommissioning our single Multi Role Server (CAS/HT/MB) and assigning the roles to 2 new servers. 1 server will be dedicated to CAS and the other new server will be dedicated to HT & MB roles.
    I think I'm OK with moving the HT and MB roles from our current server to the new HT/MB server by following "Ed Crowley's Method for Moving Exchange Servers"; my focus is on the migration of the CAS role from the current server to the new one, as this has the potential to kill our mail flow if I don't move the role correctly.
    The actual introduction of the new CAS server is fairly straight forward but the moving of the certificate is where I need some clarification.
    Our current multi role server has a 3rd Party Certificate with the following information:
    Subject: OWA.DOMAIN.COM.AU
    SANs: internalservername.domain.local
              autodiscover.domain.com.au
    The issue here is the SAN entry "internalservername.domain.local", which will need to be removed in order for the certificate to be used on the new CAS server: firstly because the CAS server has a different name, and secondly because internal FQDNs will no longer be allowed in public certificates from 2015 onwards. So I will need to revoke this certificate and issue a new one with our vendor, Thawte.
    This presents me with an opportunity to simplify our certificate and make changes to the URLs using a new certificate name, so I have proposed the following:
    New Certificate:
    Subject: mail.domain.com.au
    SANs: autodiscover.domain.com.au
              OWA.DOMAIN.COM.AU
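    To generate the request, I assume something along these lines from the Exchange Management Shell (the organisation details in the subject are placeholders):
    $req = New-ExchangeCertificate -GenerateRequest -PrivateKeyExportable $true -SubjectName "c=AU, o=Company Pty Ltd, cn=mail.domain.com.au" -DomainName mail.domain.com.au, autodiscover.domain.com.au, owa.domain.com.au
    # save the request for Thawte, then Import-ExchangeCertificate / Enable-ExchangeCertificate on the new CAS
    Set-Content -Path C:\mail_domain_com_au.req -Value $req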
    I would then configure the URLs using PowerShell:
    Set-ClientAccessServer -Identity NEWCASNAME -AutodiscoverServiceInternalUri https://mail.domain.com.au/autodiscover/autodiscover.xml
    Set-WebServicesVirtualDirectory -Identity "NEWCASNAME\EWS (Default Web Site)" -InternalUrl https://mail.domain.com.au/ews/exchange.asmx
    Set-OABVirtualDirectory -Identity "NEWCASNAME\oab (Default Web Site)" -InternalUrl https://mail.domain.com.au/oab
    Set-OWAVirtualDirectory -Identity "NEWCASNAME\owa (Default Web Site)" -InternalUrl https://mail.domain.com.au/owa
    I would also then set up split DNS on our internal DNS server, creating a new zone called "mail.domain.com.au" and adding a host A record with the internal IP address of the new CAS server.
    Now I know I haven't asked a question yet and the only real question I have is to ask if this line of thinking and my theory is correct.
    Have I missed anything or is there anything I should be wary of that has the potential to blow up in my face?
    Thanks guys, I really appreciate any insights and input you have on this.

    Hi Ed,
    Thanks for your reply, it all makes perfect sense. I guess I was being optimistic by shutting down the old server and then resubscribing the Edge and testing with mailboxes on the new mailbox server.
    I will make sure to move all of the mailboxes over before removing the old server via "Add/Remove Programs". Will I have to move the arbitration mailboxes on the old server across to the new mailbox server? Will having the arbitration mailboxes on the old server stop me from completely removing Exchange?
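    If they do need to move, I assume it would be something along these lines (the target database name is a placeholder):
    Get-Mailbox -Server SVWWMX001 -Arbitration | New-MoveRequest -TargetDatabase "NEWMBXDB"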
    Also, the InternalURL & ExternalURL properties are as follows:
    Autodiscover:
    New CAS - InternalURL: https://svwwmxcas01.pharmacare.local/Autodiscover/Autodiscover.xml
    Old CAS - InternalURL: https://svwwmx001.pharmacare.local/autodiscover/autodiscover.xml
    WebServices:
    New CAS - InternalURL: https://svwwmxcas01.pharmacare.local/EWS/Exchange.asmx
    New CAS - ExternalURL: https://owa.pharmacare.com.au/EWS/Exchange.asmx
    Old CAS - InternalURL: https://svwwmx001.pharmacare.local/ews/exchange.asmx
    Old CAS - ExternalURL: https://owa.pharmacare.com.au/EWS/Exchange.asmx
    OAB:
    New CAS - InternalURL: http://svwwmxcas01.pharmacare.local/OAB
    New CAS - ExternalURL: https://owa.pharmacare.com.au/OAB
    Old CAS - InternalURL: https://svwwmx001.pharmacare.local/oab
    Old CAS - ExternalURL: https://owa.pharmacare.com.au/OAB
    OWA:
    New CAS - InternalURL: https://svwwmxcas01.pharmacare.local/owa
    New CAS - ExternalURL: https://owa.pharmacare.com.au/
    Old CAS - InternalURL: https://svwwmx001.pharmacare.local/owa
    Old CAS - ExternalURL: https://owa.pharmacare.com.au/
    ECP:
    New CAS - InternalURL: https://svwwmxcas01.pharmacare.local/ecp
    New CAS - ExternalURL: https://owa.pharmacare.com.au/ecp
    Old CAS - InternalURL: https://svwwmx001.pharmacare.local/ecp
    Old CAS - ExternalURL: https://owa.pharmacare.com.au/ecp
    Our Public Certificate has the following details:
    Name: OWA.PHARMACARE.COM.AU
    SAN/s: autodiscover.pharmacare.com.au, svwwmx001.pharmacare.local
    From your previous communications you mentioned that this certificate would not need to change and could be exported from the old server and imported to the new one, which I have done. With the InternalURL & ExternalURL information that you see here, can you please confirm that your original recommendation of keeping our public certificate and importing it into the new CAS is correct? Will we keep getting certificate warnings on all of our Outlook clients when we cut over from the old server to the new one, until we get a new certificate with the SAN "svwwmx001.pharmacare.local" removed?
    Also, I am toying with the idea of implementing a CAS array, as I thought that implementing the CAS array would resolve some of the issues I was having on Saturday. I have followed the steps from this website, http://exchangeserverpro.com/how-to-install-an-exchange-server-2010-client-access-server-array/,
    and I have got all the way to the step of creating the CAS array in the Exchange PowerShell, but I have not completed this step for fear of breaking connectivity to all of my Outlook clients (the command I have held off on is sketched after the questions below). By following all of the preceding steps I have created a Windows
    NLB with dedicated NICs on both the old CAS and the new CAS servers (with separate IP addresses on each NIC and a new internal IP address for the dedicated CAS array) and given it the name of "casarray.pharmacare.local" as per the instructions on
    the website. The questions I have on adding the CAS array are:
    1. Do you recommend adding the CAS array using this configuration?
    2. Will this break Outlook connectivity altogether?
    3. Will I have to generate a new public certificate with an external FQDN of "casarray.pharmacare.com.au" pointing back to a public IP, or is it not required?
    4. If this configuration is correct and I add the CAS array as configured, when the time comes to remove the old server, is it as simple as removing the NLB member from the array and everything continues to work smoothly?
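    For reference, the step I have held off on is essentially this (the AD site name is a placeholder):
    New-ClientAccessArray -Name "casarray" -Fqdn "casarray.pharmacare.local" -Site "Default-First-Site-Name"
    # then point the mailbox databases at the array instead of an individual CAS
    Get-MailboxDatabase | Set-MailboxDatabase -RpcClientAccessServer "casarray.pharmacare.local"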
    So, with all of the information at hand my steps for complete and successful migration would be as follows:
    1. Move all mailboxes from old server to new server;
    2. Move arbitration mailboxes if required;
    3. Implement CAS Array and ensure that all Outlook clients connect successfully;
    4. Remove old server;
    5. Shut down old server;
    6. Re-subscribe Edge from new Hub Transport server;
    7. Test internal & external comms;
    We also have internal DNS entries that would need changing:
    1. We have split DNS with a FLZ of "owa.pharmacare.com.au" that has a Host A record going to the old server, this would need changing from "svwwmx001.pharmacare.local" to "svwwmxcas01.pharmacare.local";
    2. The _autodiscover entry that sits under _TCP currently has the IP address of the old server, this would need to be changed to the IP address of the new CAS;
    3. The CNAME that sits in our FLZ for "pharmacare.local" would need to be changed from "svwwmx001.pharmacare.local" to "svwwmxcas01.pharmacare.local".
    4. Or, rather than using the server FQDN where applicable in these DNS changes, would I use the FQDN of the CAS array instead? Please confirm.
    Would you agree that the migration path and DNS change plan is correct?
    Sorry for the long post, I just need to make sure that everything goes right and I don't have egg on my face. I appreciate your help and input.
    Thanks again.
    Regards,
    Jamie

  • Migration of old system with new version and 64bit hardware

    Hi SAP gurus,
    Our requirement: build a new CRM system with the latest software versions on 64-bit hardware. Details are below.
    Old System:
    CRM 4.0 ABAP System
    win 2003
    Oracle 9i
    non-unicode
    32 bit hardware
    New System( which i want to build):
    CRM 5.2 (rampup version) ABAP+JAVA
    win 2003
    Oracle 10g
    Unicode
    64 bit hardware
    The above is a new requirement for us.
    Our plan:
    1. Old system - convert from non-Unicode to Unicode
    2. Old system - use the system copy export method to create the exports
    3. Build the new system on the new Unicode version (5.2) from the old system's exports
    Questions for the SAP gurus:
    1. Will our plan work or not?
    a. If yes - please point me to the guidelines, Notes and documentation.
    b. If no - please provide your opinions/advice.
    2. Will 32-bit exports work on an SAP system on 64-bit hardware?
    Thanks in advance.
    Regards,
    Ramu

    Hello Markus,
    Thanks for your quick response!
    I understood what you mentioned for my question, but I have one clarification on your answer and one new question for you.
    Clarification:
    My old system is non-Unicode. As you said, I will do the system copy export on the old (non-Unicode) system and then use those exports to build the new system.
    My clarification is this:
    Will those non-Unicode exports work when installing the new system as Unicode, or will they cause any problems?
    Question:
    1. Will exports taken from the old system on Oracle 9i work on the new system with Oracle 10g?
    Thanks in advance,
    Bhaskar Rapelli

  • Moving SCOM environment to new servers

    We need to upgrade our SCOM servers from Windows Server 2008 to 2012. In our current environment we have all of our servers in the same environment -- both Prod and Test servers. When we make the move to new servers, we want to fix that, so that we have a proper Test environment with only the Test agents in it, and a Prod environment as well. So, because we don't want to carry forward all of the old data that no longer applies to that environment, we were thinking that we'd just not migrate the SCOM databases, and start fresh. I have an idea of how I plan to tackle this migration, but I am looking for some confirmation that what I have in mind makes sense and would work. So here's how I foresee the migration going...
    1. We install SCOM onto all the new Test environment servers, with a fresh database, everything brand new.
    2. We export the management packs from the current SCOM environment and then import them into the new SCOM environment (roughly as sketched below).
    3. We update the Test SCOM agents and tell them to report to the new Test SCOM environment.
    4. Repeat the same steps for Prod.
    Is it really as simple as that? It seems like if we have the management packs there, then all the overrides we've configured should still be there, as well as the groups, monitors, rules, etc. that we've created in our current environment.
    So if we simply import those into the new environment and tell the agents to go there, then everything should still apply correctly, right?
    Does that make sense? Am I missing something, or is it really this easy?
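    For step 2, I was picturing something along these lines (cmdlet names are from the SCOM 2012 Operations Manager shell - the 2007 R2 shell uses Get-ManagementPack / Export-ManagementPack instead - and the export path is just an example):
    # export only the unsealed MPs (our overrides, groups and custom rules/monitors);
    # sealed vendor MPs get reinstalled in the new management group from their original files
    Get-SCOMManagementPack | Where-Object { -not $_.Sealed } | Export-SCOMManagementPack -Path "C:\MPExport"
    # then, connected to the new management group:
    Import-SCOMManagementPack -Fullname (Get-ChildItem "C:\MPExport\*.xml").FullName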

    Yes, it really is that simple. It's even easier if you have to upgrade the agents. I've already done this to migrate from SCOM 2007 R2 to 2012 R2. I stood up the new environment, imported the MPs and then ran discovery for the systems I wanted in that environment.
    The new SCOM 2012 R2 management group deployed/upgraded the agents and they were "dual-homed", reporting to both the old 2007 and the new 2012 instance. After I was happy with the new 2012 instance, I just removed the configs from the 2007 agents and shut down the 2007 management group.
    "Fear disturbs your concentration"

  • ORACLE 10G 2-node RAC on servers A,B to 3-node 11GR2 RAC on new servers X,Y,Z

    Hi Gurus,
    We have a business requirement to upgrade an existing Oracle 10.2.0.4 2-node RAC on servers A and B (HP-UX 11.31) to a 3-node Oracle 11gR2 RAC on new servers X, Y and Z (Linux, and the servers are from a different vendor).
    We don't have ASM; we have raw file systems.
    This has to be done with near-zero downtime. It is a very busy OLTP system. GoldenGate is not an option, as management is not going for it this time.
    Storage is the same for everything. I am thinking of the following approaches. Please let me know if you have a better plan or if you want to correct my existing plan.
    Initially I thought of the following, and I immediately answered myself:
    Plan A
    1) Storage-copy (BC etc.) the existing 10g RAC database files on A and B to new volumes allocated to servers X, Y and Z (I don't think this is possible as the OS is different; I am not sure whether copying this way is possible at all, and even if it is, I am not sure whether the new OS can identify these files).
    2) Upgrade the 2-node 10.2.0.4 on X and Y to 11gR2.
    3) Add a database node on Z.
    Plan B
    1) Build a brand new 3-node 11gR2 on X, Y and Z.
    2) Plan how to replicate the data (this has to be a logical method):
    a) RMAN - very time consuming, as this is a >50 TB database
    b) GoldenGate - not an option; even if we could use it, I see there are many logical issues with GoldenGate
    c) expdp/impdp - forget about it :)
    d) Physical standby - the versions are different (if it were the same version and same OS, this would have been the best bet)
    What would be the ideal way to do this with minimum downtime and without GoldenGate?
    I think something can be done along the lines of Plan A itself.
    Requesting your help, Gurus.
    Thanks

    Hi,
    Have you considered the possibility of setting up a logical standby and doing a rolling upgrade?
    Please have a look at the following:
    1) http://www.oracle.com/technetwork/database/features/availability/maa-wp-11g-transientlogicalrollingu-1-131927.pdf
    2) http://www.oracle.com/technetwork/database/features/availability/maa-wp-11g-upgrades-made-easy-131972.pdf
    If you can practice on a test setup, it looks like a faster option than the others.
    Regards,
    Ganadeva

  • Upgrading/Migrating Portal System from EP 6.40 SP21 to EP 7.0 SP2

    Upgrading/Migrating Portal System from EP 6.40 SP21 to EP 7.0 SP2
    In EP 6.40 we have ESS/MSS 60.2.2, KM and NWDI; in the back end we have ECC 6.0.
    Could you let me know the list of measures/prerequisites I need to take care of before going for the upgrade?

    Hi,
    You can upgrade your portal with all the usages that you have installed. You need to check the upgrade guide as well.
    You also need to check whether there is any custom development on the portal, so it can be migrated to the new version.
    Thanks
    Sunny

  • Is there a way to delete a migrated account on my new puter?

    The Apple Store migrated my data to the new MacBook Pro.
    I am unable to change the password on my admin account (it won't accept my password, although the same account on the old computer accepts the password).
    I want to delete what they migrated and try the migration again.
    Can I delete the accounts that were migrated?

    Apple menu > System Preferences > Users & Groups ...
    Delete the account using the minus sign at the bottom left; make sure you select the account you intend to delete.

  • Upgrading a RAC to new servers.

    I will be upgrading from 10.2.0.4 (2 node RAC) on Windows 2003 to 11gr2 (2 node RAC) on Windows. I will be creating the new RAC on new servers.
    My question is this: since it isn't an upgrade on the same server, do I just take a full export of the old 10.2.0.4 database and import it into the new 11gR2 database after creating the tablespaces, or just a user export?
    If a full export, then I assume SYS, SYSTEM and SYSAUX would also be brought over to the new machine. Is this correct, and if so, what effect will it have on the new instances?
    I plan on spending a lot of time researching and reading the documentation all summer, since this will be done closer to December, but understanding this will make reading the documentation easier. Thanks for all your help in clarifying this for me.

    If you can use the word "export" it must be a remarkably small database. I would recommend transportable tablespaces.
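    A rough outline of the transportable tablespace route (the tablespace name, directory object and datafile path are examples only):
    # on the 10.2.0.4 source, after putting the application tablespaces in read-only mode
    # (SQL> ALTER TABLESPACE users READ ONLY;), export just the metadata:
    expdp system DIRECTORY=dp_dir DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=users TRANSPORT_FULL_CHECK=y
    # copy the datafiles to the new servers, then plug them into the 11gR2 database:
    impdp system DIRECTORY=dp_dir DUMPFILE=tts.dmp TRANSPORT_DATAFILES='E:\oradata\NEWDB\users01.dbf'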

  • Migrate SBS 2008 to new hardware

    We have an SBS 2008 installation and are testing a disaster recovery solution by re-installing in Migration Mode.
    Step 5 in the migration document from Microsoft discusses:
        Demote and remove the Source Server from the network
    We'd like to fully test this and get the backup box up and running before signing it off as a working solution.
    Our concern is that, even if we isolate the test machine on a different subnet from our production LAN, there is some kind of licensing consequence for our running box - via some kind of de-activation signal sent over the net, perhaps - should we continue through the final steps.
    Can anyone confirm or deny whether any kind of de-activation occurs? Perhaps paranoia, but thought best to check since we really can't have our live systems shut down.

    Hi,
    Before going further, would you please provide more detailed information about your concern? It is difficult for me to understand your needs from the information you mentioned. For details about how to migrate SBS 2008 to new hardware, please refer to the following articles.
    Migrate Windows Small Business Server 2008 to New Hardware
    http://technet.microsoft.com/en-us/library/cc664208(v=ws.10).aspx
    Moving SBS to new hardware
    http://msmvps.com/blogs/kwsupport/archive/2005/06/18/53958.aspx
    Hope it helps.
    Best Regards,
    Andy Qi
    TechNet Community Support

  • Migrate SCCM 2012 to new server

    I need to migrate our SCCM 2012 installation to a new server. Both the old and new servers will be up concurrently if necessary. What steps do I need to take to do this?
    Also, I want to upgrade to SP1 at the same time. Can I install SP1 on the new box alongside my current RTM install and do the migration or do I have to do the upgrade as a completely separate step after migration is complete?
    Thanks.

    Backup ConfigMgr on the old server (using the built-in maintenance task or a SQL backup; keep in mind to also back up the content library, the sources for applications, drivers, WIM files etc.!). Shut down the server. Install the new server (same name + domain + partition layout).
    Install ConfigMgr. Restore the ConfigMgr backup. I don't know if it's supported to change the SP level at the same time; I would not recommend changing too many things simultaneously, though.
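    Besides the site backup itself, the extra items can be grabbed with something as simple as this (paths are examples only):
    robocopy D:\SCCMContentLib \\backupserver\sccm\SCCMContentLib /MIR /R:1 /W:1
    robocopy D:\Sources \\backupserver\sccm\Sources /MIR /R:1 /W:1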
    Torsten Meringer | http://www.mssccmfaq.de

  • How to set bundles to auto-install on new servers?

    What are the correct install/assignment options to set to make sure that new servers that become registered in ZENworks get the bundle pushed to them?
    I want all new servers to install a specific bundle. Do I set an assignment schedule on a dynamic server group that will include all new ZENworks servers? Won't that only assign the bundle to install on the servers that are in that group at the time I set the schedule (and not on future servers that enter the group)?
    If anyone knows the correct way to configure this, please let me know! Thanks.

    Originally Posted by CityRoamer
    Assuming the dynamic group refreshing works, what does the bundle assignment need to be set to in order for future devices to receive the bundle? If I do a date specific schedule (one time), will that be enough to have the bundle pushed down to new devices in that group permanently? I tried looking at the documentation, but I couldn't find any that talked about this specific scenario.
    Thanks
    You could do it in several ways, for example:
    - Date specific: pick today's date and check the box "Process immediately if device unable to execute on schedule".
    - Event: Device boot.
    Make sure that the Install action option is set to "Install once per device"; or, if it's a launch action, set its option to "Run once for each device"; or put a system requirement (file exists/registry key or something else) on the action to check somehow whether the bundle has already been installed on the device.
    Thomas

  • How can I open a project made in 2000 on Final Cut Pro to migrate it to a new version?

    How can I open a project made in late 2000 in Final Cut Pro and migrate it to a new version so I can finish editing it? The files, timeline and unfinished edit have all been stored on a hard drive since early 2001, and I still have all the original MiniDV tapes.

    Do you still have legacy FCP? If not, you need to find someone who can open the project in FCP 6.0.6 or later. Export an XML file of the project. Use 7toX to convert it to an XML format that works with FCP X. Then import the converted XML file into FCP X.
