Migration from 10g to 12c using data pump

Hi there. While I've used Data Pump at the schema level before, I'm rather new to full database imports.
We are attempting a full database migration from 10.2.0.4 to 12c using the full-database Data Pump method over a database link.
The DBA has advised that we avoid moving SYSTEM and SYSAUX objects, and when reviewing the documentation it initially appeared that these objects would not be exported from the source system given TRANSPORTABLE=NEVER. Can someone confirm this? The export/import log refers to objects that I believed would not be targeted:
23-FEB-15 19:41:11.684:
Estimated 3718 TABLE_DATA objects in 77 seconds
23-FEB-15 19:41:12.450: Total estimation using BLOCKS method: 52.93 GB
23-FEB-15 19:41:14.058: Processing object type DATABASE_EXPORT/TABLESPACE
23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE:"UNDOTBS1" already exists
23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE:"SYSAUX" already exists
23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE:"TEMP" already exists
23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE:"USERS" already exists
23-FEB-15 20:10:33.200:
Completed 96 TABLESPACE objects in 1759 seconds
23-FEB-15 20:10:33.208: Processing object type DATABASE_EXPORT/PROFILE
23-FEB-15 20:10:33.445:
Completed 7 PROFILE objects in 1 seconds
23-FEB-15 20:10:33.453: Processing object type DATABASE_EXPORT/SYS_USER/USER
23-FEB-15 20:10:33.842:
Completed 1 USER objects in 0 seconds
23-FEB-15 20:10:33.852: Processing object type DATABASE_EXPORT/SCHEMA/USER
23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"OUTLN" already exists
23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"ANONYMOUS" already exists
23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"OLAPSYS" already exists
23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"MDDATA" already exists
23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"SCOTT" already exists
23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"LLTEST" already exists
23-FEB-15 20:10:52.372:
Completed 1140 USER objects in 19 seconds
23-FEB-15 20:10:52.375: Processing object type DATABASE_EXPORT/ROLE
23-FEB-15 20:10:55.255: ORA-31684: Object type ROLE:"SELECT_CATALOG_ROLE" already exists
23-FEB-15 20:10:55.255: ORA-31684: Object type ROLE:"EXECUTE_CATALOG_ROLE" already exists
23-FEB-15 20:10:55.255: ORA-31684: Object type ROLE:"DELETE_CATALOG_ROLE" already exists
23-FEB-15 20:10:55.256: ORA-31684: Object type ROLE:"RECOVERY_CATALOG_OWNER" already exists
Any insight is most appreciated.

Schemas SYS, CTXSYS, MDSYS and ORDSYS are not exported using exp/expdp
Doc ID: Note 228482.1
I suppose the DBA has already installed the 12c software and created the target database, so when you imported, you saw these "already exists" messages.
Whenever a database is created and the software installed, default objects such as the SYS and SYSTEM users and the SYSTEM and SYSAUX tablespaces are created automatically.
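As the note above indicates, SYS-owned objects are not part of a full export; the ORA-31684 messages are simply Data Pump reporting that default tablespaces, users, and roles already exist on the pre-created 12c database. For reference, here is a minimal sketch of the kind of full network-mode import being discussed; the directory and database link names below are placeholders, and Data Pump parameter files accept # comment lines:
$ cat full_imp.par
# Hypothetical parameter file for a full import pulled over a db link
# from the 10.2.0.4 source into the pre-created 12c target.
FULL=Y
NETWORK_LINK=src10g_link
DIRECTORY=dp_dir
LOGFILE=full_imp.log
TRANSPORTABLE=NEVER
TABLE_EXISTS_ACTION=SKIP
$ impdp system/password parfile=full_imp.par
TABLE_EXISTS_ACTION=SKIP is the default and applies to tables only; non-table objects that already exist (tablespaces, users, roles) are always reported with ORA-31684 and skipped, which is where the lines in your log come from.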

Similar Messages

  • Migration from 10g to 11g using sapinst - system copy

    Hi,
    We are planning to migrate from our current platform to a new platform.
    Current platform: Windows 2003, Oracle 10g, ECC 6.0
    Target platform: Windows 2008 R2, Oracle 11g, ECC 6.0
    The process we are going to follow is a system copy (export/import).
    What are the prerequisites for this?
    Do we need at least SAP NetWeaver 7.0 SR3 to carry out the export/import?
    What about the kernel and the export?
    Is it possible to move from 10g to 11g using a sapinst system copy?
    Thanks,
    Neel

    Hello,
    >> What are the prerequisites for this?
    Read the system copy guide.
    >> Do we need at least SAP NetWeaver 7.0 SR3 to carry out the export/import?
    No, all versions are valid. Read the system copy guide for your NetWeaver version.
    >> Is it possible to move from 10g to 11g using a sapinst system copy?
    Yes, absolutely. On the target system you install the 11g software.
    Thanks

  • How to create/import a target schema from a source schema using data pump?

    I have seen examples where schema remapping is done over a remote database link, which I cannot use because the target schema and the source schema are on the same machine. I am not sure how to do it without using a remote link. It would be great if someone could provide an example.
    Thanks in advance.

    I'm not sure what you are asking here, but I'll try to give some examples on remapping a schema:
    expdp prived_user/password schemas=orig_schema_1 directory=dpump_dir dumpfile=orig_schema_1.dmp
    impdp prived_user/password schemas=orig_schema_1 directory=dpump_dir dumpfile=orig_schema_1.dmp remap_schema=orig_schema_1:new_schema_1
    If you want to try it without a dumpfile, then you need a loop-back db link (a db link on your database that points back to the same database; a sketch of creating one is at the end of this reply). This would be your command:
    impdp prived_user/password schemas=orig_schema_1 directory=dpump_dir network_link=loop_back_link remap_schema=orig_schema_1:new_schema_1
    Hope this helps.  If not, can you explain your question a little more?
    Dean
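    In case the loop-back link itself is the sticking point, here is a minimal sketch of creating one; the user, password, and connect string are placeholders for your own environment:
    SQL> create database link loop_back_link
      2    connect to prived_user identified by password
      3    using 'orcl';
    The USING clause simply names a TNS alias that points back to the same database; after that, the impdp command above with network_link=loop_back_link will work without a dumpfile.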

  • How to consolidate data files using data pump when migrating 10g to 11g?

    We have one 10.2.0.4 database to be migrated to a new box running 11.2.0.1. The 10g database has too many data files scattered across too many file systems. I'd like to consolidate the data files into one or two large chunks in a single file system. Both OSs are RHEL 5. How should I do that using Data Pump Export/Import? I know the REMAP_DATAFILE option could be used, but it only does one-to-one mapping. How can I map multiple old data files into one new data file?

    Hi,
    Data Pump is terribly slow, so make sure you have as much memory as possible allocated to Oracle, but the bottleneck can be I/O throughput.
    Use the PARALLEL option, and also set these:
    * DISK_ASYNCH_IO=TRUE
    * DB_BLOCK_CHECKING=FALSE
    * DB_BLOCK_CHECKSUM=FALSE
    Set these high enough to allow for maximum parallelism:
    * PROCESSES
    * SESSIONS
    * PARALLEL_MAX_SERVERS
    more:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28319/dp_perf.htm
    that's it, patience welcome ;-)
    P.S.
    For maximum throughput, do not set PARALLEL to much more than twice the number of CPUs (two workers for each CPU).
    Edited by: g777 on 2011-02-02 09:53
    P.S.2
    breaking news ;-)
    I am playing with storage performance now, and I turned on the disk cache option (also called write-back cache; it applies at least to RAID 0 and RAID 5, and enabling it does not by itself lose any data on that volume), and it gave me a 1.5 to 2 times speed-up!
    Some say there's a risk of losing more data when an outage happens, but there's always such a risk, even if you can lose less. Anyway, if you can afford it (and with an import it's OK, as it is not production at that moment) I recommend trying it. It takes 15 minutes, but you can gain 2.5 hours out of 10 hours of normal importing.
    Edited by: g777 on 2011-02-02 14:52
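    To make the suggestions above concrete, here is a rough sketch; the values are placeholders, and turning off block checking/checksums trades safety for speed, so it should only be done temporarily for the import window (DISK_ASYNCH_IO, PROCESSES and SESSIONS are static parameters and need SCOPE=SPFILE plus a restart):
    SQL> alter system set db_block_checking = FALSE;
    SQL> alter system set db_block_checksum = FALSE;
    SQL> alter system set parallel_max_servers = 16;
    $ impdp system/password full=y directory=dpump_dir dumpfile=full_%U.dmp parallel=8 logfile=full_imp.log
    The %U wildcard lets each parallel worker read its own dump file, which is what makes PARALLEL effective for file-based imports.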

  • What are the 'gotcha' for exporting using Data Pump(10205) from HPUX to Win

    Hello,
    I have to export a schema using Data Pump from 10205 on HP-UX 64-bit to a Windows 64-bit database at the same 10205 version. What gotchas can I expect from doing this? I mean, Data Pump export is cross-platform, so this sounds straightforward. But are there issues I might face when exporting with Data Pump on the HP-UX platform and then importing the dump on the Windows 2008 platform, same database version 10205? Thank you in advance.

    On the HPUX database, run this statement and look for the value for NLS_CHARACTERSET
    SQL> select * from NLS_DATABASE_PARAMETERS;
    http://docs.oracle.com/cd/B19306_01/server.102/b14237/statviews_4218.htm#sthref2018
    When creating the database on Windows, you have two options - manually create the database or use DBCA. If you plan to create the database manually, specify the database characterset in the CREATE DATABASE statement - http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_5004.htm#SQLRF01204
    If using DBCA, see http://docs.oracle.com/cd/B19306_01/server.102/b14196/install.htm#ADMQS0021 (especially http://docs.oracle.com/cd/B19306_01/server.102/b14196/install.htm#BABJBDIF)
    HTH
    Srini
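    In case a narrower check is handy, this returns just the character set (a standard dictionary view, nothing specific to this thread's databases):
    SQL> select value from NLS_DATABASE_PARAMETERS where parameter = 'NLS_CHARACTERSET';
    Creating the Windows database with the same character set (or a superset such as AL32UTF8) avoids data loss from character-set conversion during the import.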

  • Using  Data Pump when database is read-only

    Hello
    I used Flashback Database to return my database to a past point in time, and then I opened the database read-only.
    Then I wanted to use Data Pump (expdp) to export a schema, but I encountered this error:
    ORA-31626: job does not exist
    ORA-31633: unable to create master table "SYS.SYS_EXPORT_SCHEMA_05"
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.KUPV$FT", line 863
    ORA-16000: database open for read-only access
    However, I could export that schema with the original exp utility.
    My question is: can't I use Data Pump while the database is read-only? Or do you know of any resolution for the issue?
    thanks

    You need to use NETWORK_LINK, so the required tables are created in a read/write database and the data is read from the read only database using a database link:
    SYSTEM@db_rw> create database link db_r_only
      2   connect to system identified by oracle using 'db_r_only';
    $ expdp system/oracle@db_rw network_link=db_r_only directory=data_pump_dir schemas=scott dumpfile=scott.dmp
    But I tried it with 10.2.0.4 and found an error:
    Export: Release 10.2.0.4.0 - Production on Thursday, 27 November, 2008 9:26:31
    Copyright (c) 2003, 2007, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORA-39006: internal error
    ORA-39065: unexpected master process exception in DISPATCH
    ORA-02054: transaction 1.36.340 in-doubt
    ORA-16000: database open for read-only access
    ORA-02063: preceding line from DB_R_ONLY
    ORA-39097: Data Pump job encountered unexpected error -2054
    I found bug 7331929 in Metalink, which is fixed in 11.2. I haven't tested this procedure with earlier versions or with 11g, so I don't know whether this bug affects only 10.2.0.4 or also other 10.x and 11.1.x releases.
    HTH
    Enrique
    PS. If your problem was solved, consider marking the question as answered.

  • Can we load data in chunks using data pump ?

    We are loading data using data pump, and I want to confirm my understanding.
    Please correct me if I am wrong:
    ODI will fetch all data from the source (whether it is INIT or CDC) in one go and unload it into the staging area.
    If that is true, will performance suffer with very large volumes (50 million records at the source) as ODI tries to load all the data in one go? I believe it will perform better if we load in chunks using data pump.
    Please confirm and correct.
    Also, I would like to know how we can configure chunked loading using data pump.
    Thanks in Advance.
    Regards,
    Dinesh.

    You may consider using LKM Oracle to Oracle (datapump):
    http://docs.oracle.com/cd/E28280_01/integrate.1111/e12644/oracle_db.htm#r15c1-t2
    In 11g, ODI reads from the source and writes to the target in parallel. This is the case where you specify a SELECT query in the source command and an INSERT/UPDATE query in the target command. On the source side, ODI reads records from the source and adds them to a data queue. On the target side, a parallel thread reads data from the queue and writes to the target. So the overall performance is bounded by the slower of the read and write processes.
    Thanks,

  • Application has problems when migrated from 10g to 11g

    Hi there,
    I am hoping someone can shed some light on a problem I have in moving an application from Oracle 10g to Oracle 11g. The app works fine on 10g (using the Apache web server with the PL/SQL module, and APEX 3.1.2.00.2), but on 11g, using the embedded web server and APEX 3.2.0.00.27, it doesn't. Most of the app works fine, but there are a couple of pages that provide the ability to add child rows to a parent/child relationship, where the parameter-passing mechanism from one page to the next appears to suffer from some sort of corruption. I have traced this using the "Session" and "Debug" buttons in the developer interface, which show that the values of the parameters get changed inexplicably when branching from one page to the next, even when the page is actually branching to itself.
    I am using the "Set these items" and "With these values" fields in the branch, and have verified that the correct values are being associated with the correct items in Application Builder. But while this does work correctly under 10g, with 11g the wrong values end up being passed. Just prior to the branch the parameters have the correct values, but immediately after the branch, one of the values is NULL, another has the value of a different item, and a third has a totally random value, and I have no idea where it comes from!
    I migrated the application from 10g to 11g using the APEX application developer's Export/Import options. There have been no other changes. Should this have worked? If so, any ideas what might have gone wrong?
    Thanks,
    Sid.

    Well, I managed to solve this, but not in a way that makes much sense.
    In desperation (I had tried almost everything else!) I changed the value of "Cached" in the page settings from "No" to "Yes" and ran the app, but the page didn't render correctly (in either Firefox or IE7); in fact, all that displayed was the developer toolbar at the bottom of the page. I changed the value of "Cached" back to "No", ran the app again, and hey presto, everything worked fine! I actually did this a second time with a fresh import of the app from 10g, just to be sure I wasn't seeing things. I wasn't!
    There was just one further issue - everything worked fine apart from this section of code in a page process:-
    ELSIF (:p9_filter_type = 5) THEN
        IF (:p9_x_gene_list IS NOT NULL) THEN
            :p9_filter := :p9_x_gene_list;
        END IF;
        :p9_entity_types := 'GENE';
    END IF;
    In 11g the value of :p9_entity_types was not being set to 'GENE' when :p9_filter_type was 5. This was (and still is) working in 10g. I changed the code as follows:-
    ELSIF (:p9_filter_type = 5) THEN
        :p9_entity_types := 'GENE';
        IF (:p9_x_gene_list IS NOT NULL) THEN
            :p9_filter := :p9_x_gene_list;
        END IF;
    END IF;
    ... and now it works fine in 11g as well.
    Only wish I knew why!

  • After Migrating from 10g to 11g, Getting Problems with Guided Navigations

    After migrating from 10g to 11g we are getting problems with guided navigations; section navigations are not working either.
    We are getting the following error where we have used navigations: <<Odbc driver returned an error (SQLExecDirectW)>>.
    In 10g we had guided navigation reports to display report links, and intermediate reports to conditionally display dashboard sections (reports), but after migrating to 11g the guided navigation reports and conditional reports are not working.
    We know that in 11g section navigation was replaced with conditions and guided navigation with action links, but
    do we need to recreate those reports as actions and conditions, or is there any workaround to avoid reworking them?

    Hi Both,
    Thanks for the reply ...
    For guided navigation we are getting the error below:
    Odbc driver returned an error (SQLExecDirectW).
    For the conditional dashboard section we are getting the error below:
    "saw.aViewsToRefresh = [];saw.aViewsToRefresh['d:dashboard~p:1egt6il5utl0uu8n~s:3jsmgfs3c1r4tn7c~n:condition'] = true;saw.aViewsToRefresh['d:dashboard~p:1egt6il5utl0uu8n~s:nos5q43jvjmi643b~n:condition'] = true;"

  • Exporting whole database (10GB) using Data Pump export utility

    Hi,
    I have a requirement to export the whole database (10GB) using the Data Pump export utility, because it is not possible to send a single 10GB dump on a CD/DVD to the system vendor of our application (who needs it to analyze a few issues we have).
    Now, when I checked online, a full export is available, but I am not able to understand how it works, as we have never used the Data Pump utility before; we use the normal export method. Also, will Data Pump reduce the size of the dump file so it can fit on a DVD, or can we use a parallel full database export to split the files so they fit on DVDs? Is that possible?
    Please correct me if I am wrong, and kindly help.
    Thanks for your help in advance.

    You need to create a directory object.
    sqlplus user/password
    create directory foo as '/path_here';
    grant all on directory foo to public;
    exit;
    Then run your expdp command.
    Data Pump can compress the dump file if you are on 11.1 and have the appropriate options. The reason for specifying FILESIZE is to limit the size of each dump file. If you have 10 GB, are not compressing, and the total dump size is 10 GB, then by specifying 600 MB you will end up with roughly 10 GB / 600 MB = 17 dump files of 600 MB each. You will have to send them 17 CDs (probably a few more if the dump files don't fill up 100% due to parallelism).
    Data Pump dumpfiles are written by the server, not the client, so the dumpfiles don't get created in the directory where the job is run.
    Dean
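    In case a concrete command helps, here is a rough sketch of a full export split into CD-sized pieces; the directory name matches the example above, while the file size and parallel setting are only illustrative:
    $ expdp system/password full=y directory=foo dumpfile=full_%U.dmp filesize=600M parallel=2 logfile=full_exp.log
    The %U wildcard generates full_01.dmp, full_02.dmp, and so on, each capped at 600 MB by FILESIZE, which matches the 17-or-so files Dean describes.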

  • Best Approach for using Data Pump

    Hi,
    I configured a new database and set it up with schemas imported from another production database. Now, before this database becomes the new production database, I need to re-import the schemas so that the data is up to date.
    Is there a way to use Data Pump so that I don't have to drop all the schemas first? Can I just export the schemas and somehow just overwrite what's in there already?
    Thanks,
    Nora

    Hi, you can use the NETWORK_LINK parameter to import data from another remote database:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_import.htm#i1007380
    Regards.
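    If you would rather work from dump files than a network link, a rough sketch would be the following; the schema, directory, and file names are placeholders, and TABLE_EXISTS_ACTION=REPLACE drops and recreates each existing table from the dump, so test it on a non-critical copy first:
    $ expdp system/password schemas=app_schema directory=dpump_dir dumpfile=app_schema.dmp
    $ impdp system/password schemas=app_schema directory=dpump_dir dumpfile=app_schema.dmp table_exists_action=replace
    Note that TABLE_EXISTS_ACTION only affects tables; other object types that already exist are skipped with ORA-31684 messages.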

  • Tablespace level backup using data pump

    Hi,
    I'm using 10.2.0.4 on RHEL 4.
    I have one doubt: can we take a tablespace-level backup using data pump?
    I don't want to use it for transportable tablespaces, though.
    Thanks.

    Yes, but only for the tables in that tablespace.
    Use the TABLESPACES option to export a list of tablespaces; all of the tables in those tablespaces will be exported.
    You must have the EXP_FULL_DATABASE role to use tablespace mode.
    Have a look at this,
    http://stanford.edu/dept/itss/docs/oracle/10g/server.101/b10825/dp_export.htm#i1007519
    Thanks
    Edited by: Cj on Dec 12, 2010 11:48 PM
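    A minimal sketch of the tablespace-mode export described above; the tablespace, directory, and file names are placeholders:
    $ expdp system/password tablespaces=users,app_data directory=dpump_dir dumpfile=ts_users.dmp logfile=ts_users.log
    This exports the tables (and their dependent objects, such as indexes) stored in the listed tablespaces, with no transportable tablespace handling involved.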

  • Problem while migrating from Sybase to Oracle using Quick Migrate

    Hi,
    For SQL Developer version 2.1, while migrating from Sybase to Oracle using Quick Migrate, during the data move step the migration for a table terminates at any row that has a '' (blank) value in a Sybase TEXT column, which is converted to CLOB in Oracle.
    However, NULL values in Sybase TEXT columns are successfully inserted into Oracle CLOBs.
    How can we overcome this?

    Reproduced, and I see the exception in the console; a bug has been logged.
    Edited by: Jade Zhong on Feb 1, 2010 6:10 PM

  • NQS ERROR:14025 NO FACT TABLE EXISTS -after migrating from 10g to 11g

    We are getting NQS ERROR:14025 NO FACT TABLE EXISTS AT THE REQUESTED LEVEL OF DETAIL in all the reports after migrating from 10g to 11g.
    We then applied the patch (one-off patch for bug 11850704) for the error <<NQS ERROR:14025 NO FACT TABLE EXISTS AT THE REQUESTED LEVEL OF DETAIL>>,
    but after applying that patch we are still getting the same error.
    However, the patch instructions file contains post-deployment instructions to create a variable:
    Post Install Instructions:
    - To revert to the 10g navigator behavior for handling conforming dimensions,
    you must set the following session variable via an init block in the RPD:
    NO_FORCE_TO_DETAIL_BIN=1
    The default value for the above variable is 0.
    - Restart all servers (Admin Server and all Managed Server(s))
    We could not find the procedure to create the specified variable and initialization block in the RPD.
    Can you please suggest how to proceed?
    Our questions are:

    Hi
    Refer to the thread below:
    obiee 11g non-conforming dimensions and nQSError 14025
    It might help you.
    Thanks,
    satya
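    Regarding the step the patch readme leaves out: the usual approach (a general OBIEE technique, not something spelled out in this thread) is to open the RPD in the Administration Tool, go to Manage > Variables, and create a session variable named NO_FORCE_TO_DETAIL_BIN backed by a new initialization block whose initialization string is simply:
    select 1 from dual
    with the single column mapped to that variable, so every session picks up the value 1. Then restart the Admin Server and Managed Servers as the readme instructs.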

  • How to export resource manager consumer groups using Data Pump?

    Hi there,
    Is there any way to export Resource Manager consumer groups/mappings/plans as part of a Data Pump export/import? I ask because I don't fancy doing it manually, and I don't see the object in the database_export_objects view. I can create them manually, but I was wondering whether there's an easier, less involved way of doing it.
    Mark

    Hi,
    I have not tested it, but I think a full database export/import (using Data Pump or traditional exp/imp) may do this, because full database mode also exports/imports SYS schema objects, so there is a chance that it will also bring over the resource consumer groups and resource plans. (A full export/import might not be feasible for you, though.)
    Salman
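    In case it helps to double-check, the view mentioned in the question can be searched for resource-manager-related paths (the LIKE pattern is only an example):
    SQL> select object_path, comments from database_export_objects where object_path like '%RESOURCE%';
    If nothing relevant shows up, the fallback is to script the plans, consumer groups, and mappings on the target with the DBMS_RESOURCE_MANAGER and DBMS_RESOURCE_MANAGER_PRIVS packages.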
