Data Pump - invoking expdp for an 8i DB from a 10g DB
Situation: I am trying to invoke expdp against a remote 8.1.7 database from a 10g database using the NETWORK_LINK parameter (NETWORK_LINK=dblink_name) in a .par file (Windows platform). I get ORA-12560. To clarify, the database link itself is working fine.
My question: can expdp be invoked against an 8.1.7 database from a 10g database using the NETWORK_LINK parameter?
regards,
Anjan
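For reference, a network-mode export is normally driven from a parameter file like the sketch below; every name in it is a placeholder, not taken from this thread. (Data Pump itself only shipped with 10g, so whether the remote end can be 8.1.7 is exactly the question being asked.)

```
# net_export.par -- hypothetical names throughout
NETWORK_LINK=dblink_name
SCHEMAS=SCOTT
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=remote_schema.dmp
LOGFILE=remote_schema.log
```

invoked as: expdp system/password PARFILE=net_export.par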
Thanks. Let me tweak my question: how about taking a conventional export dump from the 8.1.7 database and importing that same .dmp file into a 10g database using impdp? Do you see any compatibility issues?
regards,
Anjan
Similar Messages
-
Several users own objects in the tablespace I want to export, but I only want to pull out the tablespace and the tables belonging to one user. The user also wants some of the tables with all the data in them and some of the tables with no data in them.
I tried all the options, with DATA_ONLY and METADATA_ONLY on both the import and the export, and still have issues.
The export runs and lists all the tables with sizes, which suggests it works, but on import all hell breaks loose. Junior DBA, I need help.
SQL> select owner, table_name, tablespace_name from dba_tables where tablespace_name='USER_SEG';
ORAPROBE OSP_ACCOUNTS USER_SEG
PATSY TRAPP USER_SEG
PATSY TRAPPO USER_SEG
PATSY TRAUDIT USER_SEG
PATSY TRCOURSE USER_SEG
PATSY TRCOURSEO USER_SEG
PATSY TRDESC USER_SEG
PATSY TREMPDATA USER_SEG
PATSY TRFEE USER_SEG
PATSY TRNOTES USER_SEG
PATSY TROPTION USER_SEG
PATSY TRPART USER_SEG
PATSY TRPARTO USER_SEG
PATSY TRPART_OLD USER_SEG
PATSY TRPERCENT USER_SEG
PATSY TRSCHOOL USER_SEG
PATSY TRSUPER USER_SEG
PATSY TRTRANS USER_SEG
PATSY TRUSERPW USER_SEG
PATSY TRUSRDAT USER_SEG
PATSY TRVARDAT USER_SEG
PATSY TRVERIFY USER_SEG
PATSY TRAPPO_RESET USER_SEG
PATSY TRAPP_RESET USER_SEG
PATSY TRCOURSEO_RESET USER_SEG
PATSY TRCOURSE_RESET USER_SEG
PATSY TRPARTO_RESET USER_SEG
PATSY TRPART_RESET USER_SEG
PATSY TRTRANS_RESET USER_SEG
PATSY TRVERIFY_RESET USER_SEG
MAFANY TRVERIFY USER_SEG
MAFANY TRPART USER_SEG
MAFANY TRPARTO USER_SEG
MAFANY TRAPP USER_SEG
MAFANY TRAPPO USER_SEG
MAFANY TRCOURSE USER_SEG
MAFANY TRCOURSEO USER_SEG
MAFANY TRTRANS USER_SEG
JULIE R_REPOSITORY_LOG USER_SEG
JULIE R_VERSION USER_SEG
JULIE R_DATABASE_TYPE USER_SEG
JULIE R_DATABASE_CONTYPE USER_SEG
JULIE R_NOTE USER_SEG
JULIE R_DATABASE USER_SEG
JULIE R_DATABASE_ATTRIBUTE USER_SEG
JULIE R_DIRECTORY USER_SEG
JULIE R_TRANSFORMATION USER_SEG
JULIE R_TRANS_ATTRIBUTE USER_SEG
JULIE R_DEPENDENCY USER_SEG
JULIE R_PARTITION_SCHEMA USER_SEG
JULIE R_PARTITION USER_SEG
JULIE R_TRANS_PARTITION_SCHEMA USER_SEG
JULIE R_CLUSTER USER_SEG
JULIE R_SLAVE USER_SEG
JULIE R_CLUSTER_SLAVE USER_SEG
JULIE R_TRANS_SLAVE USER_SEG
JULIE R_TRANS_CLUSTER USER_SEG
JULIE R_TRANS_HOP USER_SEG
JULIE R_TRANS_STEP_CONDITION USER_SEG
JULIE R_CONDITION USER_SEG
JULIE R_VALUE USER_SEG
JULIE R_STEP_TYPE USER_SEG
JULIE R_STEP USER_SEG
JULIE R_STEP_ATTRIBUTE USER_SEG
JULIE R_STEP_DATABASE USER_SEG
JULIE R_TRANS_NOTE USER_SEG
JULIE R_LOGLEVEL USER_SEG
JULIE R_LOG USER_SEG
JULIE R_JOB USER_SEG
JULIE R_JOBENTRY_TYPE USER_SEG
JULIE R_JOBENTRY USER_SEG
JULIE R_JOBENTRY_COPY USER_SEG
JULIE R_JOBENTRY_ATTRIBUTE USER_SEG
JULIE R_JOB_HOP USER_SEG
JULIE R_JOB_NOTE USER_SEG
JULIE R_PROFILE USER_SEG
JULIE R_USER USER_SEG
JULIE R_PERMISSION USER_SEG
JULIE R_PROFILE_PERMISSION USER_SEG
MAFANY2 TRAPP USER_SEG
MAFANY2 TRAPPO USER_SEG
MAFANY2 TRCOURSE USER_SEG
MAFANY2 TRCOURSEO USER_SEG
MAFANY2 TRPART USER_SEG
MAFANY2 TRPARTO USER_SEG
MAFANY2 TRTRANS USER_SEG
MAFANY2 TRVERIFY USER_SEG
MAFANY BIN$ZY3M1IuZyq3gQBCs+AAMzQ==$0 USER_SEG
MAFANY BIN$ZY3M1Iuhyq3gQBCs+AAMzQ==$0 USER_SEG
MAFANY MYUSERS USER_SEG
I only want the tables owned by PATSY, and I want to move them to another database for her to use, keeping the same tablespace name.
The tables below should carry just the metadata, not the data:
PATSY TRAPP USER_SEG
PATSY TRAPPO USER_SEG
PATSY TRAUDIT USER_SEG
PATSY TRCOURSE USER_SEG
PATSY TRCOURSEO USER_SEG
PATSY TRDESC USER_SEG
PATSY TREMPDATA USER_SEG
PATSY TRFEE USER_SEG
PATSY TRNOTES USER_SEG
PATSY TROPTION USER_SEG
PATSY TRPART USER_SEG
PATSY TRPARTO USER_SEG
PATSY TRPART_OLD USER_SEG
PATSY TRPERCENT USER_SEG
PATSY TRSCHOOL USER_SEG
PATSY TRSUPER USER_SEG
PATSY TRTRANS USER_SEG
PATSY TRUSERPW USER_SEG
PATSY TRUSRDAT USER_SEG
The following tables should carry all of their data:
PATSY TRVERIFY USER_SEG
PATSY TRAPPO_RESET USER_SEG
PATSY TRAPP_RESET USER_SEG
PATSY TRCOURSEO_RESET USER_SEG
PATSY TRCOURSE_RESET USER_SEG
PATSY TRPARTO_RESET USER_SEG
PATSY TRPART_RESET USER_SEG
PATSY TRTRANS_RESET USER_SEG
PATSY TRVERIFY_RESET USER_SEG
I have tried all of the following, and later got stuck in my little effort to document as I go along:
USING DATA PUMP TO EXPORT DATA
First:
Create a directory object, or use one that already exists.
I created a directory object and granted access to it:
CREATE DIRECTORY "DATA_PUMP_DIR" AS '/home/oracle/my_dump_dir';
GRANT READ, WRITE ON DIRECTORY DATA_PUMP_DIR TO user;
where user is the account that will be performing the export or import (for example SYSTEM or a named schema).
For example, to create a directory object named expdp_dir located at /u01/backup/exports, enter the following SQL statement:
SQL> create directory expdp_dir as '/u01/backup/exports';
Then grant read and write permissions to the users who will be performing the Data Pump export and import:
SQL> grant read, write on directory expdp_dir to system, user1, user2, user3;
http://wiki.oracle.com/page/Data+Pump+Export+(expdp)+and+Data+Pump+Import(impdp)?t=anon
To view directory objects that already exist
- use EM: under Administration tab to schema section and select Directory Objects
- DESC the views: ALL_DIRECTORIES, DBA_DIRECTORIES
select * from all_directories;
select * from dba_directories;
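Putting the steps above together, the whole setup in one SQL*Plus session might look like this (the path and user names are examples only):

```sql
-- create the directory object; the OS path must already exist
CREATE DIRECTORY expdp_dir AS '/u01/backup/exports';

-- let the accounts that will run expdp/impdp use it
GRANT READ, WRITE ON DIRECTORY expdp_dir TO system, user1;

-- confirm the object is visible
SELECT directory_name, directory_path FROM dba_directories;
```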
Export the schema using expdp:
expdp system/SCHMOE DUMPFILE=patsy_schema.dmp DIRECTORY=DATA_PUMP_DIR SCHEMAS=PATSY
expdp system/PASS DUMPFILE=METADATA_ONLY_schema.dmp DIRECTORY=DATA_PUMP_DIR TABLESPACES=USER_SEG CONTENT=METADATA_ONLY
expdp system/PASS DUMPFILE=data_only_schema.dmp DIRECTORY=DATA_PUMP_DIR TABLES=PATSY.TRVERIFY,PATSY.TRAPPO_RESET,PATSY.TRAPP_RESET,PATSY.TRCOURSEO_RESET,PATSY.TRCOURSE_RESET,PATSY.TRPARTO_RESET,PATSY.TRPART_RESET,PATSY.TRTRANS_RESET,PATSY.TRVERIFY_RESET CONTENT=DATA_ONLY
You are correct: all the PATSY tables reside in the USER_SEG tablespace in a different database, and I want to move them to a new database of the same version (10g). I have also created a user named patsy there. The tablespace does not exist in the target I want to move to, and neither do the objects and indexes.
So how can I move the schema and tablespace with all the tables, keeping the tablespace name and the same username (patsy) that I created to import into?
Tried again and got some errors. This is better than last time, and that is because I used REMAP_TABLESPACE=USER_SEG:USERS.
But I want the tablespace in the target to be called USER_SEG too. Do I have to create it first, or what?
[oracle@server1 ~]$ impdp system/blue99 remap_schema=patsy:test REMAP_TABLESPACE=USER_SEG:USERS directory=DATA_PUMP_DIR dumpfile=patsy.dmp logfile=patsy.log
Import: Release 10.1.0.3.0 - Production on Wednesday, 08 April, 2009 11:10
Copyright (c) 2003, Oracle. All rights reserved.
Connected to: Oracle Database 10g Release 10.1.0.3.0 - Production
Master table "SYSTEM"."SYS_IMPORT_FULL_03" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_03": system/******** remap_schema=patsy:test REMAP_TABLESPACE=USER_SEG:USERS directory=DATA_PUMP_DIR dumpfile=patsy.dmp logfile=patsy.log
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TBL_TABLE_DATA/TABLE/TABLE_DATA
. . imported "TEST"."TRTRANS" 10.37 MB 93036 rows
. . imported "TEST"."TRAPP" 2.376 MB 54124 rows
. . imported "TEST"."TRAUDIT" 1.857 MB 28153 rows
. . imported "TEST"."TRPART_OLD" 1.426 MB 7183 rows
. . imported "TEST"."TRSCHOOL" 476.8 KB 5279 rows
. . imported "TEST"."TRAPPO" 412.3 KB 9424 rows
. . imported "TEST"."TRUSERPW" 123.8 KB 3268 rows
. . imported "TEST"."TRVERIFY_RESET" 58.02 KB 183 rows
. . imported "TEST"."TRUSRDAT" 54.73 KB 661 rows
. . imported "TEST"."TRNOTES" 51.5 KB 588 rows
. . imported "TEST"."TRCOURSE" 49.85 KB 243 rows
. . imported "TEST"."TRCOURSE_RESET" 47.60 KB 225 rows
. . imported "TEST"."TRPART" 39.37 KB 63 rows
. . imported "TEST"."TRPART_RESET" 37.37 KB 53 rows
. . imported "TEST"."TRTRANS_RESET" 38.94 KB 196 rows
. . imported "TEST"."TRCOURSEO" 30.93 KB 51 rows
. . imported "TEST"."TRCOURSEO_RESET" 28.63 KB 36 rows
. . imported "TEST"."TRPERCENT" 33.72 KB 1044 rows
. . imported "TEST"."TROPTION" 30.10 KB 433 rows
. . imported "TEST"."TRPARTO" 24.78 KB 29 rows
. . imported "TEST"."TRPARTO_RESET" 24.78 KB 29 rows
. . imported "TEST"."TRVERIFY" 20.97 KB 30 rows
. . imported "TEST"."TRVARDAT" 14.13 KB 44 rows
. . imported "TEST"."TRAPP_RESET" 14.17 KB 122 rows
. . imported "TEST"."TRDESC" 9.843 KB 90 rows
. . imported "TEST"."TRAPPO_RESET" 8.921 KB 29 rows
. . imported "TEST"."TRSUPER" 6.117 KB 10 rows
. . imported "TEST"."TREMPDATA" 0 KB 0 rows
. . imported "TEST"."TRFEE" 0 KB 0 rows
Processing object type TABLE_EXPORT/TABLE/GRANT/TBL_OWNER_OBJGRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
ORA-39083: Object type INDEX failed to create with error:
ORA-00959: tablespace 'INDEX1_SEG' does not exist
Failing sql is:
CREATE UNIQUE INDEX "TEST"."TRPART_11" ON "TEST"."TRPART" ("TRPCPID", "TRPSSN") PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 32768 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "INDEX1_SEG" PARALLEL 1
ORA-39083: Object type INDEX failed to create with error:
ORA-00959: tablespace 'INDEX1_SEG' does not exist
Failing sql is:
CREATE INDEX "TEST"."TRPART_I2" ON "TEST"."TRPART" ("TRPLAST") PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "INDEX1_SEG" PARALLEL 1
ORA-39083: Object type INDEX failed to create with error:
ORA-00959: tablespace 'INDEX1_SEG' does not exist
Failing sql is:
CREATE INDEX "TEST"."TRPART_I3" ON "TEST"."TRPART" ("TRPEMAIL") PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "INDEX1_SEG" PARALLEL 1
ORA-39083: Object type INDEX failed to create with error:
ORA-00959: tablespace 'INDEX1_SEG' does not exist
Failing sql is:
CREATE INDEX "TEST"."TRPART_I4" ON "TEST"."TRPART" ("TRPPASSWORD") PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "INDEX1_SEG" PARALLEL 1
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "BILLZ"."TRCOURSE_I1" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"BILLZ"', '"TRCOURSE_I1"', NULL, NULL, NULL, 232, 2, 232, 1, 1, 7, 2, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "BILLZ"."TRTRANS_I1" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"BILLZ"', '"TRTRANS_I1"', NULL, NULL, NULL, 93032, 445, 93032, 1, 1, 54063, 2, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "DANQ"."TRAPP_I1" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TRAPP_I1"', NULL, NULL, NULL, 54159, 184, 54159, 1, 1, 4597, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "DANQ"."TRAPP_I2" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TRAPP_I2"', NULL, NULL, NULL, 54159, 182, 17617, 1, 2, 48776, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "JULIE"."TRAPPO_I1" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRAPPO_I1"', NULL, NULL, NULL, 9280, 29, 9280, 1, 1, 166, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "JULIE"."TRAPPO_I2" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRAPPO_I2"', NULL, NULL, NULL, 9280, 28, 4062, 1, 2, 8401, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "JULIE"."TRCOURSEO_I1" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRCOURSEO_I1"', NULL, NULL, NULL, 49, 2, 49, 1, 1, 8, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "JULIE"."TREMPDATA_I1" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TREMPDATA_I1"', NULL, NULL, NULL, 0, 0, 0, 0, 0, 0, 0, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "DANQ"."TROPTION_I1" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TROPTION_I1"', NULL, NULL, NULL, 433, 3, 433, 1, 1, 187, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"TEST"."TRPART_11" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"TEST"."TRPART_I2" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"TEST"."TRPART_I3" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"TEST"."TRPART_I4" creation failed
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "DANQ"."TRPART_I5" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TRPART_I5"', NULL, NULL, NULL, 19, 1, 19, 1, 1, 10, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "JULIE"."TRPARTO_I1" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRPARTO_I1"', NULL, NULL, NULL, 29, 1, 29, 1, 1, 1, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "JULIE"."TRPARTO_I2" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRPARTO_I2"', NULL, NULL, NULL, 29, 14, 26, 1, 1, 1, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "JULIE"."TRPART_I2" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRPART_I2"', NULL, NULL, NULL, 7180, 19, 4776, 1, 1, 7048, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "JULIE"."TRPART_I3" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRPART_I3"', NULL, NULL, NULL, 2904, 15, 2884, 1, 1, 2879, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "JULIE"."TRPART_I4" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRPART_I4"', NULL, NULL, NULL, 363, 1, 362, 1, 1, 359, 0, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "JULIE"."TRPART_I5" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRPART_I5"', NULL, NULL, NULL, 363, 1, 363, 1, 1, 353, 0, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "DANQ"."TRPART_11" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TRPART_11"', NULL, NULL, NULL, 7183, 29, 7183, 1, 1, 6698, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "DANQ"."TRPERCENT_I1" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TRPERCENT_I1"', NULL, NULL, NULL, 1043, 5, 1043, 1, 1, 99, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "DANQ"."TRSCHOOL_I1" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TRSCHOOL_I1"', NULL, NULL, NULL, 5279, 27, 5279, 1, 1, 4819, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "JULIE"."TRVERIFY_I2" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRVERIFY_I2"', NULL, NULL, NULL, 30, 7, 7, 1, 1, 1, 2, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "STU"."TRVERIFY_I1" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"STU"', '"TRVERIFY_I1"', NULL, NULL, NULL, 30, 12, 30, 1, 1, 1, 2, 2, NULL, FALSE, NULL, NULL, NULL); END;
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYSTEM"."SYS_IMPORT_FULL_03" completed with 29 error(s) at 11:17 -
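One way to keep the original tablespace names, as asked above, is to pre-create the tablespaces on the target before running impdp, so that no REMAP_TABLESPACE is needed at all. A sketch, with datafile paths and sizes invented for illustration:

```sql
-- run on the target database before the import
CREATE TABLESPACE user_seg
  DATAFILE '/u01/oradata/target/user_seg01.dbf' SIZE 500M AUTOEXTEND ON;

-- the log above also failed on the index tablespace, so create that too
CREATE TABLESPACE index1_seg
  DATAFILE '/u01/oradata/target/index1_seg01.dbf' SIZE 200M AUTOEXTEND ON;
```

With both tablespaces in place, the impdp command can drop the REMAP_TABLESPACE clause (keeping remap_schema if still wanted), and the ORA-00959 index creation errors above should go away.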
Does data pump really replace exp/imp?
hi guys,
I've read some people saying we should be using Data Pump instead of exp/imp. But as far as I can see, if I have a database behind a firewall at some other place, and cannot connect to that database directly but need to get some data across, then Data Pump is useless for me and I can only exp and imp the data.
OracleGuy777 wrote:
...and I guess this means that Data Pump does not replace exp and imp.
Well, depending on your database version, it does.
"Original Export is desupported for general use as of Oracle Database 11g. The only supported use of original Export in 11g is backward migration of XMLType data to a database version 10g release 2 (10.2) or earlier. Therefore, Oracle recommends that you use the new Data Pump Export and Import utilities, except in the following situations which require original Export and Import:
* You want to import files that were created using the original Export utility (exp).
* You want to export files that will be imported using the original Import utility (imp). An example of this would be if you wanted to export data from Oracle Database 10g and then import it into an earlier database release."
http://download.oracle.com/docs/cd/E11882_01/server.112/e10701/original_export.htm#SUTIL3634
Coming back to your problem: as already suggested, you have the NETWORK_LINK parameter, so you are able to export data from the source and import it directly into your target database without the need for any intermediate file.
Nicolas. -
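Spelling out Nicolas's suggestion: with a working database link from the target to the source, impdp can pull the data across directly with no dump file in between. A sketch, where the link and schema names are placeholders:

```
impdp system/password NETWORK_LINK=source_db_link SCHEMAS=SCOTT DIRECTORY=DATA_PUMP_DIR LOGFILE=net_imp.log
```

Only the log file touches disk; the table data itself travels over the link.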
Data Pump and Data Manipulation (DML)
Hi all,
I am wondering if there is a way to manipulate table column data as part of a Data Pump export. For example, is it possible to make the following change during the export:
update employee
set firstname = 'some new string';
We are looking for a way to protect sensitive data columns. We already have a script to update column data (i.e. scramble text to protect personal information), but we would like to automate the changes as we export the database schema, rather than having to export, import, run DML and then export again.
Any ideas or advice would be great!
Regards,
Leigh.
Thanks, Francisco!
I haven't read the entire document yet, but at first glance it appears to give me what I need. Do you know if this Data Pump functionality is available in Oracle Database versions 10.1.0.4 and 10.2.0.3 (the versions we currently use)?
Regards,
Leigh.
Edited by: lelliott78 on Sep 10, 2008 9:48 PM -
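For readers on 11g or later: the REMAP_DATA export parameter does exactly this kind of scrambling at export time, by routing each column value through a PL/SQL function. A sketch, where the package, function, table and column names are all invented for illustration:

```sql
-- package owned by the exporting user; the function must accept and
-- return the datatype of the column being remapped
CREATE OR REPLACE PACKAGE scramble_pkg AS
  FUNCTION mask_name (p_in VARCHAR2) RETURN VARCHAR2;
END scramble_pkg;
/
CREATE OR REPLACE PACKAGE BODY scramble_pkg AS
  FUNCTION mask_name (p_in VARCHAR2) RETURN VARCHAR2 IS
  BEGIN
    RETURN 'XXXX';  -- replace every value with a constant
  END mask_name;
END scramble_pkg;
/
```

and on the command line:

expdp hr/password TABLES=employee REMAP_DATA=hr.employee.firstname:hr.scramble_pkg.mask_name DIRECTORY=DATA_PUMP_DIR DUMPFILE=masked.dmp

On the 10.1 and 10.2 releases mentioned above, this parameter does not exist, so the export-import-DML-export workaround stands there.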
What are the 'gotchas' for exporting using Data Pump (10.2.0.5) from HP-UX to Windows
Hello,
I have to export a schema using Data Pump from 10.2.0.5 on HP-UX 64-bit to a Windows 64-bit database on the same 10.2.0.5 version. What gotchas can I expect from doing this? I mean, Data Pump export is cross-platform, so this sounds straightforward. But are there issues I might face exporting on the HP-UX platform and then importing the dump onto a Windows 2008 platform, same database version? Thank you in advance.
On the HP-UX database, run this statement and look for the value of NLS_CHARACTERSET:
SQL> select * from NLS_DATABASE_PARAMETERS;
http://docs.oracle.com/cd/B19306_01/server.102/b14237/statviews_4218.htm#sthref2018
When creating the database on Windows, you have two options: manually create the database, or use DBCA. If you plan to create the database manually, specify the database character set in the CREATE DATABASE statement - http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_5004.htm#SQLRF01204
If using DBCA, see http://docs.oracle.com/cd/B19306_01/server.102/b14196/install.htm#ADMQS0021 (especially http://docs.oracle.com/cd/B19306_01/server.102/b14196/install.htm#BABJBDIF)
HTH
Srini -
Data Pump - expdp and slow performance on specific tables
Hi there
I have a Data Pump export of a schema. Most of the 700 tables are exported very quickly (direct path), but a couple of them seem to be extremely slow.
I have checked:
- no lobs
- no long/raw
- no VPD
- no partitions
- no bitmapped index
- just date, number, varchar2's
I'm running with trace 400300,
but I'm having trouble reading the output from it. It seems that some of the slow-performing tables are running with method 4. Can anyone explain what the method codes in the trace mean:
1 > direct path (i think)
2 > external table (i think)
4 > ?
others?
I have done some stats using v$filestat/v$session_wait (history), and it seems that we always wait for 'db file sequential read', doing lots and lots of SINGLEBLKRDS. No undo is being read.
I have one table of 2.5 GB that exports in 3 minutes,
and then this (in my eyes) similar table of 2.4 GB that takes 1.5 hours.
It has 367,000 blocks (8 KB) and avg row length 71.
I'm on Oracle 11.2 on a Linux box with plenty of RAM and CPU power.
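On the method codes in the trace: 1 and 2 match the poster's guesses (direct path and external table), and further down the trace tags this table as "either", so method 4 plausibly means the worker considers both methods viable and defers the final choice; that reading is an inference from this trace, not from documentation. As an experiment, there is an ACCESS_METHOD parameter (support-level and not formally documented on 11.2, so treat this as an assumption) that can force one method per run for timing comparisons:

```
expdp system/password TABLES=TIA.ACC_PAYMENT_SPECIFICATION ACCESS_METHOD=DIRECT_PATH DIRECTORY=DATA_PUMP_DIR DUMPFILE=one_table.dmp
```

Running the same table once with DIRECT_PATH and once with EXTERNAL_TABLE should show which access path is responsible for the slowdown.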
Trace file /opt/oracle112/diag/rdbms/prod/prod/trace/prod_dw00_24268.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /opt/oracle112/product/11.2.0.2/dbhome_1
System name: Linux
Node name: tiaprod.thi.somethingamt.dk
Release: 2.6.18-194.el5
Version: #1 SMP Mon Mar 29 22:10:29 EDT 2010
Machine: x86_64
VM name: Xen Version: 3.4 (HVM)
Instance name: prod
Redo thread mounted by this instance: 1
Oracle process number: 222
Unix process pid: 24268, image: [email protected] (DW00)
*** 2011-09-20 09:39:39.671
*** SESSION ID:(401.8395) 2011-09-20 09:39:39.671
*** CLIENT ID:() 2011-09-20 09:39:39.671
*** SERVICE NAME:(SYS$BACKGROUND) 2011-09-20 09:39:39.671
*** MODULE NAME:() 2011-09-20 09:39:39.671
*** ACTION NAME:() 2011-09-20 09:39:39.671
KUPP:09:39:39.670: Current trace/debug flags: 00400300 = 4195072
*** MODULE NAME:(Data Pump Worker) 2011-09-20 09:39:39.672
*** ACTION NAME:(SYS_EXPORT_SCHEMA_09) 2011-09-20 09:39:39.672
KUPW:09:39:39.672: 0: ALTER SESSION ENABLE PARALLEL DML called.
KUPW:09:39:39.672: 0: ALTER SESSION ENABLE PARALLEL DML returned.
KUPC:09:39:39.693: Setting remote flag for this process to FALSE
prvtaqis - Enter
prvtaqis subtab_name upd
prvtaqis sys table upd
KUPW:09:39:39.819: 0: KUPP$PROC.WHATS_MY_ID called.
KUPW:09:39:39.819: 1: KUPP$PROC.WHATS_MY_ID returned.
KUPW:09:39:39.820: 1: worker max message number: 1000
KUPW:09:39:39.822: 1: Full cluster access allowed
KUPW:09:39:39.823: 1: Original job start time: 11-SEP-20 09:39:38 AM
KUPW:09:39:39.862: 1: KUPP$PROC.WHATS_MY_NAME called.
KUPW:09:39:39.862: 1: KUPP$PROC.WHATS_MY_NAME returned. Process name: DW00
KUPW:09:39:39.862: 1: KUPV$FT_INT.GET_INSTANCE_ID called.
KUPW:09:39:39.866: 1: KUPV$FT_INT.GET_INSTANCE_ID returned. Instance name: prod
KUPW:09:39:39.870: 1: ALTER SESSION ENABLE RESUMABLE called.
KUPW:09:39:39.870: 1: ALTER SESSION ENABLE RESUMABLE returned.
KUPW:09:39:39.871: 1: KUPF$FILE.INIT called.
KUPW:09:39:39.996: 1: KUPF$FILE.INIT returned.
KUPW:09:39:39.998: 1: KUPF$FILE.GET_MAX_CSWIDTH called.
KUPW:09:39:39.998: 1: KUPF$FILE.GET_MAX_CSWIDTH returned.
KUPW:09:39:39.998: 1: Max character width: 1
KUPW:09:39:39.998: 1: Max clob fetch: 32757
KUPW:09:39:39.998: 1: Max varchar2a size: 32757
KUPW:09:39:39.998: 1: Max varchar2 size: 7990
KUPW:09:39:39.998: 1: In procedure GET_PARAMETERS
KUPW:09:39:40.000: 1: In procedure GET_METADATA_FILTERS
KUPW:09:39:40.001: 1: In procedure GET_METADATA_TRANSFORMS
KUPW:09:39:40.002: 1: In procedure GET_DATA_FILTERS
KUPW:09:39:40.004: 1: In procedure GET_DATA_REMAPS
KUPW:09:39:40.005: 1: In procedure PRINT_MT_PARAMS
KUPW:09:39:40.005: 1: Master table : "SYSTEM"."SYS_EXPORT_SCHEMA_09"
KUPW:09:39:40.005: 1: Metadata job mode : SCHEMA_EXPORT
KUPW:09:39:40.005: 1: Debug enable : TRUE
KUPW:09:39:40.005: 1: Profile enable : FALSE
KUPW:09:39:40.005: 1: Transportable enable : FALSE
KUPW:09:39:40.005: 1: Metrics enable : FALSE
KUPW:09:39:40.005: 1: db version : 11.2.0.2.0
KUPW:09:39:40.005: 1: job version : 11.2.0.0.0
KUPW:09:39:40.005: 1: service name :
KUPW:09:39:40.005: 1: Current Edition : ORA$BASE
KUPW:09:39:40.005: 1: Job Edition :
KUPW:09:39:40.005: 1: Abort Step : 0
KUPW:09:39:40.005: 1: Access Method : AUTOMATIC
KUPW:09:39:40.005: 1: Data Options : 0
KUPW:09:39:40.006: 1: Dumper directory :
KUPW:09:39:40.006: 1: Master only : FALSE
KUPW:09:39:40.006: 1: Data Only : FALSE
KUPW:09:39:40.006: 1: Metadata Only : FALSE
KUPW:09:39:40.006: 1: Estimate : BLOCKS
KUPW:09:39:40.006: 1: Data error logging table :
KUPW:09:39:40.006: 1: Remote Link :
KUPW:09:39:40.006: 1: Dumpfile present : TRUE
KUPW:09:39:40.006: 1: Table Exists Action :
KUPW:09:39:40.006: 1: Partition Options : NONE
KUPW:09:39:40.006: 1: Tablespace Datafile Count: 0
KUPW:09:39:40.006: 1: Metadata Filter Index : 1 Count : 10
KUPW:09:39:40.006: 1: 1 Name - INCLUDE_USER
KUPW:09:39:40.006: 1: Value - TRUE
KUPW:09:39:40.006: 1: Object Name - SCHEMA_EXPORT
KUPW:09:39:40.006: 1: 2 Name - SCHEMA_EXPR
KUPW:09:39:40.006: 1: Value - IN ('TIA')
KUPW:09:39:40.006: 1: 3 Name - NAME_EXPR
KUPW:09:39:40.006: 1: Value - ='ACC_PAYMENT_SPECIFICATION'
KUPW:09:39:40.006: 1: Object - TABLE
KUPW:09:39:40.006: 1: 4 Name - INCLUDE_PATH_EXPR
KUPW:09:39:40.006: 1: Value - IN ('TABLE')
KUPW:09:39:40.006: 1: 5 Name - ORDERED
KUPW:09:39:40.006: 1: Value - FALSE
KUPW:09:39:40.006: 1: Object - TABLE_DATA
KUPW:09:39:40.006: 1: 6 Name - NO_XML
KUPW:09:39:40.006: 1: Value - TRUE
KUPW:09:39:40.006: 1: Object - XMLSCHEMA/EXP_XMLSCHEMA
KUPW:09:39:40.006: 1: 7 Name - XML_OUTOFLINE
KUPW:09:39:40.006: 1: Value - FALSE
KUPW:09:39:40.006: 1: Object - TABLE/TABLE_DATA
KUPW:09:39:40.006: 1: 8 Name - XDB_GENERATED
KUPW:09:39:40.006: 1: Value - FALSE
KUPW:09:39:40.006: 1: Object - TABLE/TRIGGER
KUPW:09:39:40.007: 1: 9 Name - XDB_GENERATED
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: Object - TABLE/RLS_POLICY
KUPW:09:39:40.007: 1: 10 Name - PRIVILEGED_USER
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: MD remap schema Index : 4 Count : 0
KUPW:09:39:40.007: 1: MD remap other Index : 5 Count : 0
KUPW:09:39:40.007: 1: MD Transform ddl Index : 2 Count : 11
KUPW:09:39:40.007: 1: 1 Name - DBA
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - JOB
KUPW:09:39:40.007: 1: 2 Name - EXPORT
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: 3 Name - PRETTY
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: 4 Name - SQLTERMINATOR
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: 5 Name - CONSTRAINTS
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: Object - TABLE
KUPW:09:39:40.007: 1: 6 Name - REF_CONSTRAINTS
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: Object - TABLE
KUPW:09:39:40.007: 1: 7 Name - OID
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - TABLE
KUPW:09:39:40.007: 1: 8 Name - RESET_PARALLEL
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - INDEX
KUPW:09:39:40.007: 1: 9 Name - OID
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - TYPE
KUPW:09:39:40.007: 1: 10 Name - OID
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - INC_TYPE
KUPW:09:39:40.007: 1: 11 Name - REVOKE_FROM
KUPW:09:39:40.008: 1: Value - SYSTEM
KUPW:09:39:40.008: 1: Object - ROLE
KUPW:09:39:40.008: 1: Data Filter Index : 6 Count : 0
KUPW:09:39:40.008: 1: Data Remap Index : 7 Count : 0
KUPW:09:39:40.008: 1: MD remap name Index : 8 Count : 0
KUPW:09:39:40.008: 1: In procedure DISPATCH_WORK_ITEMS
KUPW:09:39:40.009: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:40.009: 1: KUPC$QUEUE.TRANSCEIVE called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:40.036: 1: KUPC$QUEUE.TRANSCEIVE returned. Received 2011
KUPW:09:39:40.036: 1: DBMS_LOB.CREATETEMPORARY called.
KUPW:09:39:40.037: 1: DBMS_LOB.CREATETEMPORARY returned.
KUPW:09:39:40.038: 1: Flags: 18
KUPW:09:39:40.038: 1: Start sequence number:
KUPW:09:39:40.038: 1: End sequence number:
KUPW:09:39:40.038: 1: Metadata Parallel: 1
KUPW:09:39:40.038: 1: Primary worker id: 1
KUPW:09:39:40.041: 1: In procedure GET_TABLE_DATA_OBJECTS
KUPW:09:39:40.041: 1: In procedure CREATE_MSG
KUPW:09:39:40.041: 1: KUPV$FT.MESSAGE_TEXT called.
KUPW:09:39:40.041: 1: KUPV$FT.MESSAGE_TEXT returned.
KUPW:09:39:40.041: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:40.041: 1: KUPC$QUEUE_INT.SEND called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:40.044: 1: KUPC$QUEUE_INT.SEND returned.
KUPW:09:39:40.044: 1: Estimate in progress using BLOCKS method...
KUPW:09:39:40.044: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
KUPW:09:39:40.044: 1: Old Seqno: 0 New Path: SCHEMA_EXPORT/TABLE/TABLE_DATA PO Num: -5 New Seqno: 62
KUPW:09:39:40.046: 1: Created type completion for duplicate 62
KUPW:09:39:40.046: 1: In procedure CREATE_MSG
KUPW:09:39:40.046: 1: KUPV$FT.MESSAGE_TEXT called.
KUPW:09:39:40.046: 1: KUPV$FT.MESSAGE_TEXT returned.
KUPW:09:39:40.046: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:40.046: 1: KUPC$QUEUE_INT.SEND called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:40.047: 1: KUPC$QUEUE_INT.SEND returned.
KUPW:09:39:40.047: 1: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
KUPW:09:39:40.048: 1: In procedure CONFIGURE_METADATA_UNLOAD
KUPW:09:39:40.048: 1: Phase: ESTIMATE_PHASE Filter Name: Filter Value:
KUPW:09:39:40.048: 1: DBMS_METADATA.OPEN11.2.0.0.0 called.
KUPW:09:39:40.182: 1: DBMS_METADATA.OPEN11.2.0.0.0 returned. Source handle: 100001
KUPW:09:39:40.182: 1: DBMS_METADATA.SET_FILTER called. metadata_phase: ESTIMATE_PHASE
KUPW:09:39:40.182: 1: DBMS_METADATA.SET_FILTER returned. In function GET_NOEXP_TABLE
KUPW:09:39:40.194: 1: DBMS_METADATA.SET_PARSE_ITEM called.
*** 2011-09-20 09:39:40.325
KUPW:09:39:40.325: 1: DBMS_METADATA.SET_PARSE_ITEM returned.
KUPW:09:39:40.325: 1: DBMS_METADATA.SET_COUNT called.
KUPW:09:39:40.328: 1: DBMS_METADATA.SET_COUNT returned.
KUPW:09:39:40.328: 1: DBMS_METADATA.FETCH_XML_CLOB called.
*** 2011-09-20 09:39:42.603
KUPW:09:39:42.603: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
KUPW:09:39:42.603: 1: In procedure CREATE_TABLE_DATA_OBJECT_ROWS
KUPW:09:39:42.603: 1: In function GATHER_PARSE_ITEMS
KUPW:09:39:42.603: 1: In function CHECK_FOR_REMAP_NETWORK
KUPW:09:39:42.603: 1: Nothing to remap
KUPW:09:39:42.603: 1: In procedure BUILD_OBJECT_STRINGS
KUPW:09:39:42.604: 1: In procedure LOCATE_DATA_FILTERS
KUPW:09:39:42.604: 1: In function NEXT_PO_NUMBER
KUPW:09:39:42.620: 1: In procedure DETERMINE_METHOD_PARALLEL
KUPW:09:39:42.620: 1: flags mask: 0
KUPW:09:39:42.620: 1: dapi_possible_meth: 1
KUPW:09:39:42.620: 1: data_size: 3019898880
KUPW:09:39:42.620: 1: et_parallel: TRUE
KUPW:09:39:42.620: 1: object: TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION" <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
KUPW:09:39:42.648: 1: l_dapi_bit_mask: 7
KUPW:09:39:42.648: 1: l_client_bit_mask: 7
KUPW:09:39:42.648: 1: TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION" either, parallel: 12 <<<<< Here it says "either" (I thought that was the method?) <<<<<<<<<<<<<<<<
KUPW:09:39:42.648: 1: FORALL BULK INSERT called.
KUPW:09:39:42.658: 1: FORALL BULK INSERT returned.
KUPW:09:39:42.660: 1: DBMS_LOB.TRIM called. v_md_xml_clob
KUPW:09:39:42.660: 1: DBMS_LOB.TRIM returned.
KUPW:09:39:42.660: 1: DBMS_METADATA.FETCH_XML_CLOB called.
KUPW:09:39:42.678: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
KUPW:09:39:42.678: 1: DBMS_LOB.CREATETEMPORARY called.
KUPW:09:39:42.678: 1: DBMS_LOB.CREATETEMPORARY returned.
KUPW:09:39:42.678: 1: In procedure UPDATE_TD_ROW_EXP with seqno: 62
KUPW:09:39:42.680: 1: 1 rows fetched
KUPW:09:39:42.680: 1: In function NEXT_PO_NUMBER
KUPW:09:39:42.680: 1: Next table data array entry: 1 Parallel: 12 Size: 3019898880 Method: 4Creation_level: 0 <<<<<<<<<<<<<<<< HERE IT SAYS METHOD = 4 and PARALLEL=12 (I'm not using the parallel parameter ???) <<<<<<<<<<<<<<<<<<
KUPW:09:39:42.681: 1: In procedure UPDATE_TD_BASE_PO_INFO
KUPW:09:39:42.683: 1: Updated 1 td objects with bpo between 1 and 1
KUPW:09:39:42.684: 1: Send table_data_varray called. Count: 1
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:42.695: 1: Send table_data_varray returned.
KUPW:09:39:42.695: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:42.695: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
KUPW:09:39:42.695: 1: Old Seqno: 62 New Path: PO Num: -5 New Seqno: 0
KUPW:09:39:42.695: 1: Object count: 1
KUPW:09:39:42.697: 1: 1 completed for 62
KUPW:09:39:42.697: 1: DBMS_METADATA.CLOSE called. Handle: 100001
KUPW:09:39:42.697: 1: DBMS_METADATA.CLOSE returned.
KUPW:09:39:42.697: 1: In procedure CREATE_MSG
KUPW:09:39:42.697: 1: KUPV$FT.MESSAGE_TEXT called.
KUPW:09:39:42.698: 1: KUPV$FT.MESSAGE_TEXT returned.
KUPW:09:39:42.698: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:42.698: 1: KUPC$QUEUE_INT.SEND called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:42.699: 1: KUPC$QUEUE_INT.SEND returned.
KUPW:09:39:42.699: 1: Total estimation using BLOCKS method: 2.812 GB
KUPW:09:39:42.699: 1: In procedure CONFIGURE_METADATA_UNLOAD
KUPW:09:39:42.699: 1: Phase: WORK_PHASE Filter Name: BEGIN_WITH Filter Value:
KUPW:09:39:42.699: 1: DBMS_METADATA.OPEN11.2.0.0.0 called.
KUPW:09:39:42.837: 1: DBMS_METADATA.OPEN11.2.0.0.0 returned. Source handle: 200001
KUPW:09:39:42.837: 1: DBMS_METADATA.SET_FILTER called. metadata_phase: WORK_PHASE
KUPW:09:39:42.837: 1: DBMS_METADATA.SET_FILTER returned. In function GET_NOEXP_TABLE
KUPW:09:39:42.847: 1: DBMS_METADATA.SET_PARSE_ITEM called.
KUPW:09:39:42.964: 1: DBMS_METADATA.SET_PARSE_ITEM returned.
KUPW:09:39:42.964: 1: DBMS_METADATA.SET_COUNT called.
KUPW:09:39:42.967: 1: DBMS_METADATA.SET_COUNT returned.
KUPW:09:39:42.967: 1: KUPF$FILE.OPEN_CONTEXT called.
KUPW:09:39:42.967: 1: KUPF$FILE.OPEN_CONTEXT returned.
KUPW:09:39:42.968: 1: DBMS_METADATA.FETCH_XML_CLOB called. Handle: 200001
*** 2011-09-20 09:40:01.798
KUPW:09:40:01.798: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
KUPW:09:40:01.798: 1: Object seqno fetched:
KUPW:09:40:01.799: 1: Object path fetched:
KUPW:09:40:01.799: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:40:01.799: 1: In procedure COMPLETE_EXP_OBJECT
KUPW:09:40:01.799: 1: KUPF$FILE.FLUSH_LOB called.
KUPW:09:40:01.815: 1: KUPF$FILE.FLUSH_LOB returned.
KUPW:09:40:01.815: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
KUPW:09:40:01.815: 1: Old Seqno: 226 New Path: PO Num: -5 New Seqno: 0
KUPW:09:40:01.815: 1: Object count: 1
KUPW:09:40:01.815: 1: 1 completed for 226
KUPW:09:40:01.815: 1: DBMS_METADATA.CLOSE called. Handle: 200001
KUPW:09:40:01.816: 1: DBMS_METADATA.CLOSE returned.
KUPW:09:40:01.816: 1: KUPF$FILE.CLOSE_CONTEXT called.
KUPW:09:40:01.820: 1: KUPF$FILE.CLOSE_CONTEXT returned.
KUPW:09:40:01.821: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:40:01.821: 1: KUPC$QUEUE.TRANSCEIVE called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:40:01.827: 1: KUPC$QUEUE.TRANSCEIVE returned. Received 2012
KUPW:09:40:01.827: 1: DBMS_LOB.CREATETEMPORARY called.
KUPW:09:40:01.828: 1: DBMS_LOB.CREATETEMPORARY returned.
KUPW:09:40:01.828: 1: Process order range: 1..1
KUPW:09:40:01.828: 1: Method: 1
KUPW:09:40:01.828: 1: Parallel: 1
KUPW:09:40:01.828: 1: Creation level: 0
KUPW:09:40:01.830: 1: BULK COLLECT called.
KUPW:09:40:01.830: 1: BULK COLLECT returned.
KUPW:09:40:01.830: 1: In procedure BUILD_OBJECT_STRINGS
KUPW:09:40:01.836: 1: In procedure MOVE_DATA UNLOADing process_order 1 TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION" <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
KUPW:09:40:01.839: 1: KUPD$DATA.OPEN called.
KUPW:09:40:01.840: 1: KUPD$DATA.OPEN returned.
KUPW:09:40:01.840: 1: KUPD$DATA.SET_PARAMETER - common called.
KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - common returned.
KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - flags called.
KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - flags returned.
KUPW:09:40:01.843: 1: KUPD$DATA.START_JOB called.
KUPW:09:40:01.918: 1: KUPD$DATA.START_JOB returned. In procedure GET_JOB_VERSION
This is how I called expdp:
expdp system/xxxxxxxxx schemas=tia directory=expdp INCLUDE=TABLE:\" =\'ACC_PAYMENT_SPECIFICATION\'\" REUSE_DUMPFILES=Y LOGFILE=expdp:$LOGFILE TRACE=400300
Hi there ...
I have read the note - that's where I found the link to the trace note 286496.1 - and went on to set up a trace.
But I still need an explanation of the methods (1, 2, 4, etc.)
regards
Mette -
Need Help in expdp for resolving ORA-39127: unexpected error from call
Hi All,
My Environment is -------> Oracle 11g Database Release 1 On Windows 2003 Server SP2
Requirement is ------------> Data Pump Jobs to be completed without any error message.
I am trying to take a Data Pump export of a schema.
Command Used --> expdp schemas=scott directory=data_pump_dir dumpfile=scorr.dmp version=11.1.0.6.0
Export Log Show this details its completed with 2 error messages
Export: Release 11.1.0.6.0 - Production on Saturday, 23 April, 2011 13:31:10
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the OLAP option
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** schemas=scott directory=data_pump_dir dumpfile=scorr.dmp version=11.1.0.6.0
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
ORA-39127: unexpected error from call to export_string :=SYS.DBMS_CUBE_EXP.schema_info_exp('SCOTT',0,1,'11.01.00.06.00',newblock)
ORA-37111: Unable to load the OLAP API sharable library: (The specified module could not be found.)
ORA-06512: at "SYS.DBMS_CUBE_EXP", line 205
ORA-06512: at "SYS.DBMS_CUBE_EXP", line 280
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_METADATA", line 5980
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
ORA-39127: unexpected error from call to export_string :=SYS.DBMS_CUBE_EXP.schema_info_exp('SCOTT',1,1,'11.01.00.06.00',newblock)
ORA-37111: Unable to load the OLAP API sharable library: (The specified module could not be found.)
ORA-06512: at "SYS.DBMS_CUBE_EXP", line 205
ORA-06512: at "SYS.DBMS_CUBE_EXP", line 280
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_METADATA", line 5980
. . exported "SCOTT"."DEPT" 5.945 KB 4 rows
. . exported "SCOTT"."EMP" 8.585 KB 14 rows
. . exported "SCOTT"."SALGRADE" 5.875 KB 5 rows
. . exported "SCOTT"."ACCTYPE_GL_MAS" 0 KB 0 rows
. . exported "SCOTT"."BONUS" 0 KB 0 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
D:\APP\ADMINISTRATOR\ADMIN\SIPDB\DPDUMP\SCORR.DMP
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" completed with 2 error(s) at 13:40:08
Please help me to resolve this issue.
Thank you,
Shan
Hi Shan,
I am getting an error very similar to yours,
"ORA-37111: Unable to load the OLAP API sharable library: (The specified module could not be found.)"
while creating an OLAP Analytic Workspace with AWM.
(I am creating a workspace for the first time, actually following a tutorial to get some knowledge about OLAP.)
I see you managed to solve your problem.
I wonder how I can get this MOS DOC 852794.1 - is it possible to get it without going to Metalink?
Thanks in advance for any help.
Regards,
SC -
Data pump import from 11g to 10g
I have 2 database: first is 11.2.0.2.0 and second is 10.2.0.1.0
In 10g I created a database link to the 11g database:
CREATE DATABASE LINK "TEST.LINK"
CONNECT TO "monservice" IDENTIFIED BY "monservice"
USING '(DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = host)(PORT = port))) (CONNECT_DATA = (SID = sid)))';
And executed this query to test the dblink, which works fine:
select * from v$[email protected];
After that I tried to call the open function:
declare
h number;
begin
h := dbms_datapump.open('IMPORT', 'TABLE', 'TEST.LINK', null, '10.2');
end;
and get the exception: 39001. 00000 - "invalid argument value"
If I remove 'TEST.LINK' from the arguments, it works fine.
Edited by: 990594 on 26.02.2013 23:41
Hi Richard Harrison,
result for import from 11g to 10g:
impdp user/pass@dburl schemas=SCHEMANAME network_link=TEST_LINK version=10.2
Connected to: Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
ORA-39001: invalid argument value
ORA-39169: Local version of 10.2.0.1.0 cannot work with remote version of 11.2.0.2.0
result for export from 11g to 10g:
expdp user/pass@dburl schemas=SCHEMANAME network_link=TEST_LINK version=10.2
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64 bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing option
ORA-39006: internal error
ORA-39065: unexpected master process exception in DISPATCH
ORA-04052: error occurred when looking up remote object SYS.KUPM$MCP@TEST_LINK
ORA-00604: error occurred at recursive SQL level 3
ORA-06544: PL/SQL: internal, error, arguments: [55916], [], [], [], [], [], [], []
ORA-06553: PLS-801: internal error [55916]
ORA-02063: preceding 2 lines from TEST_LINK
ORA-39097: Data Pump job encountered unexpected error -4052 -
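For what it's worth, the documented pattern for moving data from a higher to a lower release is to run expdp on the higher-release side with VERSION set to the target release, producing a dump file the older database can read, and then run impdp locally on the 10g side. A hedged sketch, not a tested recipe (connect strings, schema and file names below are placeholders):

```shell
# Export on the 11g side, writing a dump file in 10.2-compatible format.
expdp user/pass@db11g schemas=SCHEMANAME version=10.2 \
      directory=DATA_PUMP_DIR dumpfile=schema_v102.dmp logfile=exp_v102.log

# Copy schema_v102.dmp to the 10g server's directory path, then import it there.
impdp user/pass@db10g schemas=SCHEMANAME \
      directory=DATA_PUMP_DIR dumpfile=schema_v102.dmp logfile=imp_v102.log
```

This side-steps the NETWORK_LINK limitation shown above, since each utility then only talks to a database of its own version.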
Syntax for Data Pump in Oracle Database 11g
Hi, thanks to all. I have clarified many doubts myself by referring to the posts in this forum.
I need the syntax for Data Pump in Oracle 11g.
Could you post the syntax please.
Thanks & Regards..
Hello,
You may have all the options in Datapump by typing the statements belows:
expdp help=y
impdp help=y
Else, you'll find many more details and examples in the Utilities book from the Oracle 11g documentation at the link below:
http://www.comp.dit.ie/btierney/Oracle11gDoc/server.111/b28319/part_dp.htm#i436481
Hope this helps.
Best regards,
Jean-Valentin
Edited by: Lubiez Jean-Valentin on Apr 19, 2010 11:07 PM -
How to use Data Pump (expdp) from PL/SQL
Hi,
I want to use the Oracle Data Pump Export utility (expdp) from PL/SQL.
I want to write a stored procedure which will invoke expdp from PL/SQL itself.
Kindly anyone who can help me to proceed.
Regards
Hello,
Instead of writing a PL/SQL procedure, use DBMS_SCHEDULER with a PL/SQL anonymous block:
DECLARE
  l_dp_handle      NUMBER;
  l_last_job_state VARCHAR2 (30) := 'UNDEFINED';
  l_job_state      VARCHAR2 (30) := 'UNDEFINED';
  l_sts            ku$_status;
BEGIN
  l_dp_handle := DBMS_DATAPUMP.open (operation   => 'EXPORT',
                                     job_mode    => 'SCHEMA',
                                     remote_link => NULL,
                                     job_name    => 'RET_EXPORT',
                                     version     => 'LATEST');
  DBMS_DATAPUMP.add_file (handle    => l_dp_handle,
                          filename  => 'myexport.dmp',
                          directory => 'EXPORT_DIR');
  DBMS_DATAPUMP.add_file (handle    => l_dp_handle,
                          filename  => 'myexport.log',
                          directory => 'EXPORT_DIR',
                          filetype  => DBMS_DATAPUMP.ku$_file_type_log_file);
  DBMS_DATAPUMP.metadata_filter (handle => l_dp_handle,
                                 name   => 'SCHEMA_EXPR',
                                 VALUE  => '= ''MYSCHEMA''');
  DBMS_DATAPUMP.start_job (l_dp_handle);
  DBMS_DATAPUMP.detach (l_dp_handle);
END;
/
Regards
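Since the reply mentions DBMS_SCHEDULER but the block above only drives DBMS_DATAPUMP directly, here is a hedged sketch of scheduling such an export: wrap the block in a stored procedure (the name my_export_proc below is made up for illustration) and register it as a recurring job. Job name and interval are example values, not anything from the thread:

```sql
BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'NIGHTLY_SCHEMA_EXPORT',      -- example name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN my_export_proc; END;', -- assumed wrapper procedure
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=1',        -- run nightly at 01:00
    enabled         => TRUE);
END;
/
```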
ORA-39080: failed to create queues "" and "" for Data Pump job
When I am running datapump expdp I receive the following error:
+++Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Produc+++
+++tion+++
+++With the Partitioning, OLAP and Data Mining options+++
+++ORA-31626: job does not exist+++
+++ORA-31637: cannot create job SYS_EXPORT_SCHEMA_01 for user CHESHIRE_POLICE_LOCAL+++
+++ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95+++
+++ORA-06512: at "SYS.KUPV$FT_INT", line 600+++
+++ORA-39080: failed to create queues "" and "" for Data Pump job+++
+++ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95+++
+++ORA-06512: at "SYS.KUPC$QUE_INT", line 1555+++
+++ORA-01403: no data found+++
Sys has the following two objects as invalid at present after running catproc.sql and utlrp.sql and manual compilation:
OBJECT_NAME OBJECT_TYPE
AQ$_KUPC$DATAPUMP_QUETAB_E QUEUE
SCHEDULER$_JOBQ QUEUE
When I run catdpb.sql, the Data Pump queue table is not created:
BEGIN
dbms_aqadm.create_queue_table(queue_table => 'SYS.KUPC$DATAPUMP_QUETAB', multiple_consumers => TRUE, queue_payload_type =>'SYS.KUPC$_MESSAGE', comment => 'DataPump Queue Table', compatible=>'8.1.3');
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -24001 THEN NULL;
ELSE RAISE;
END IF;
END;
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at line 7
Snehashish Ghosh wrote:
When I am running datapump expdp I receive the following error:
+++Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Produc+++
+++tion+++
+++With the Partitioning, OLAP and Data Mining options+++
+++ORA-31626: job does not exist+++
+++ORA-31637: cannot create job SYS_EXPORT_SCHEMA_01 for user CHESHIRE_POLICE_LOCAL+++
+++ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95+++
+++ORA-06512: at "SYS.KUPV$FT_INT", line 600+++
+++ORA-39080: failed to create queues "" and "" for Data Pump job+++
+++ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95+++
+++ORA-06512: at "SYS.KUPC$QUE_INT", line 1555+++
+++ORA-01403: no data found+++
Sys has the following two objects as invalid at present after running catproc.sql and utlrp.sql and manual compilation:
OBJECT_NAME OBJECT_TYPE
AQ$_KUPC$DATAPUMP_QUETAB_E QUEUE
SCHEDULER$_JOBQ QUEUE
While I run catdpb.sql the datapump queue table does not create:
BEGIN
dbms_aqadm.create_queue_table(queue_table => 'SYS.KUPC$DATAPUMP_QUETAB', multiple_consumers => TRUE, queue_payload_type =>'SYS.KUPC$_MESSAGE', comment => 'DataPump Queue Table', compatible=>'8.1.3');
Does it work better when specifying an Oracle version that is from this century, i.e. newer than 8.1? -
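Following the reply's hint about the compatible setting, a corrected call might look like the sketch below. The '10.0' value is an assumption for a 10.2 database, not something confirmed in the thread; verify against the DBMS_AQADM documentation before running this as SYS:

```sql
BEGIN
  dbms_aqadm.create_queue_table(
    queue_table        => 'SYS.KUPC$DATAPUMP_QUETAB',
    multiple_consumers => TRUE,
    queue_payload_type => 'SYS.KUPC$_MESSAGE',
    comment            => 'DataPump Queue Table',
    compatible         => '10.0');  -- assumed value; was '8.1.3' in catdpb.sql
END;
/
```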
How can we import object from 1 user to another user using DATA PUMP
hi,
I have taken a full export with expdp (Data Pump). Now I want to import user objects from one user of one database into another user of a different database.
Please reply with a solution.
Thanks
Hi,
impdp user/user@db DIRECTORY=DATA_PUMP_DIR DUMPFILE=filename.dmp LOGFILE=import.log REMAP_SCHEMA=source_schema:target_schema (DATA_PUMP_DIR is the default directory)
1. Before import, check whether the importing user has READ and WRITE privileges on the directory.
2. Always try to add the LOGFILE clause; this will help.
3. Add TABLE_EXISTS_ACTION=REPLACE if you want target schema tables to be replaced by the source schema tables (if both have the same table).
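Putting the three tips together, a complete invocation might look like this sketch (credentials, schema names and file names are placeholders, not values from the thread):

```shell
impdp system/password@targetdb DIRECTORY=DATA_PUMP_DIR \
      DUMPFILE=full_export.dmp LOGFILE=import_scott_to_hr.log \
      SCHEMAS=SCOTT REMAP_SCHEMA=SCOTT:HR TABLE_EXISTS_ACTION=REPLACE
```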
Regards,
NEerav -
How can I use the data pump export from external client?
I am trying to export a bunch of tables from a DB but I can't figure out how to do it.
I don't have access to a shell terminal on the server itself; I can only log in using TOAD.
I am trying to use TOAD's Data Pump Export utility but I keep getting this error:
ORA-39070: Unable to open the log file.
ORA-39087: directory name D:\TEMP\ is invalid
I don't understand if it's because I am setting up the parameter file wrong, or if the utility is trying to find that directory on the server, whereas I am thinking it's going to dump it to my local filesystem where that directory exists.
I'd hate to have to use SQL*Loader to create ctl files for each and every table...
Here is my parameter file:
DUMPFILE="db_export.dmp"
LOGFILE="exp_db_export.log"
DIRECTORY="D:\temp\"
TABLES=ACCOUNT
CONTENT=ALL
(just trying to test it on one table so far...)
P.S. Oracle 11g
Edited by: trant on Jan 13, 2012 7:58 AM
ORA-39070: Unable to open the log file.
ORA-39087: directory name D:\TEMP\ is invalid
The directory here should not be a physical location; it is a logical representation.
For that you have to create a directory object at the SQL level, like create directory exp_dp..
then you have to use the directory created above as DIRECTORY=exp_dp
HTH -
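The directory-object step described above can be sketched as follows, run as a privileged user. The path and names are examples; note that the path must exist on the database server's filesystem, not on the TOAD client machine:

```sql
-- The path refers to the server's filesystem, not the client's.
CREATE OR REPLACE DIRECTORY exp_dp AS 'D:\temp';
GRANT READ, WRITE ON DIRECTORY exp_dp TO myuser;  -- myuser is a placeholder
```

The parameter file would then say DIRECTORY=exp_dp instead of a Windows path.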
Best Practice for data pump or import process?
We are trying to copy existing schema to another newly created schema. Used export data pump to successfully export schema.
However, we encountered some errors when importing dump file to new schema. Remapped schema and tablespaces, etc.
Most errors occur in PL/SQL... For example, we have views like below in original schema:
CREATE VIEW *oldschema.myview* AS
SELECT col1, col2, col3
FROM *oldschema.mytable*
WHERE coll1 = 10
Quite a few functions, procedures, packages and triggers contain "*oldschema.mytable*" in DML (insert, select, update) statements, for example.
Getting the following errors in import log:
ORA-39082: Object type ALTER_FUNCTION:"TEST"."MYFUNCTION" created with compilation warnings
ORA-39082: Object type ALTER_PROCEDURE:"TEST"."MYPROCEDURE" created with compilation warnings
ORA-39082: Object type VIEW:"TEST"."MYVIEW" created with compilation warnings
ORA-39082: Object type PACKAGE_BODY:"TEST"."MYPACKAGE" created with compilation warnings
ORA-39082: Object type TRIGGER:"TEST"."MYTRIGGER" created with compilation warnings
A lot of actual errors/invalid objects in new schema are due to:
ORA-00942: table or view does not exist
My question is:
1. What can we do to fix those errors?
2. Is there a better way to do the import with such condition?
3. Update PL/SQL and recompile in new schema? Or update in original schema first and export?
Your help will be greatly appreciated!
Thank you!
I routinely get many (MANY) errors as follows, and they always compile when I recompile using utlrp.
ORA-39082: Object type ALTER_FUNCTION:"TKCSOWNER"."RPTSF_WR_LASTOUTPUNCH" created with compilation warnings
ORA-39082: Object type ALTER_FUNCTION:"TKCSOWNER"."RPTSF_WR_REFPERIODENDFOREMP" created with compilation warnings
ORA-39082: Object type ALTER_FUNCTION:"TKCSOWNER"."RPTSF_WR_TAILOFFSECS" created with compilation warnings
ORA-39082: Object type ALTER_FUNCTION:"TKCSOWNER"."FN_GDAPREPORTGATHERER" created with compilation warnings
Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/ALTER_PROCEDURE
ORA-39082: Object type ALTER_PROCEDURE:"TKCSOWNER"."ABSENT_EXCEPTION" created with compilation warnings
ORA-39082: Object type ALTER_PROCEDURE:"TKCSOWNER"."ACCRUAL_BAL_PROJ" created with compilation warnings
ORA-39082: Object type ALTER_PROCEDURE:"TKCSOWNER"."ACCRUAL_DETAILS" created with compilation warnings
ORA-39082: Object type ALTER_PROCEDURE:"TKCSOWNER"."ACCRUAL_SUMMARY" created with compilation warnings
ORA-39082: Object type ALTER_PROCEDURE:"TKCSOWNER"."ACTUAL_SCHEDULE" created with compilation warnings
It works. In all my databases: peoplesoft, kronos, and others...
I should qualify that it may still be necessary to 'debug' specific problems, but most common typical problems are easily resolved using the utlrp.sql. The usual problems I run into are typically because of a database link that points to another database such as in a production environment that we firewall our test and development databases from linking to (for obvious reasons). -
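The recompile-and-verify step mentioned above can be sketched as follows (run as SYSDBA in SQL*Plus, where `?` expands to ORACLE_HOME):

```sql
-- Recompile all invalid objects after the import.
@?/rdbms/admin/utlrp.sql

-- Then check what is still invalid and needs manual debugging.
SELECT owner, object_name, object_type
  FROM dba_objects
 WHERE status = 'INVALID'
 ORDER BY owner, object_name;
```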
Migration from 10g to 12c using data pump
hi there, while I've used data pump at the schema level before, I'm rather new at full database imports.
we are attempting a full database migration from 10.2.0.4 to 12c using the full database data pump method over db link.
the DBA has advised that we avoid moving SYSTEM and SYSAUX objects. but initially when reviewing the documentation it appeared that these objects would not be exported from the target system given TRANSPORTABLE=NEVER. can someone confirm this? the export/import log refers to objects that I believed would not be targeted:
23-FEB-15 19:41:11.684:
Estimated 3718 TABLE_DATA objects in 77 seconds
23-FEB-15 19:41:12.450: Total estimation using BLOCKS method: 52.93 GB
23-FEB-15 19:41:14.058: Processing object type DATABASE_EXPORT/TABLESPACE
23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE:"UNDOTBS1" already exists
23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE:"SYSAUX" already exists
23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE:"TEMP" already exists
23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE:"USERS" already exists
23-FEB-15 20:10:33.200:
Completed 96 TABLESPACE objects in 1759 seconds
23-FEB-15 20:10:33.208: Processing object type DATABASE_EXPORT/PROFILE
23-FEB-15 20:10:33.445:
Completed 7 PROFILE objects in 1 seconds
23-FEB-15 20:10:33.453: Processing object type DATABASE_EXPORT/SYS_USER/USER
23-FEB-15 20:10:33.842:
Completed 1 USER objects in 0 seconds
23-FEB-15 20:10:33.852: Processing object type DATABASE_EXPORT/SCHEMA/USER
23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"OUTLN" already exists
23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"ANONYMOUS" already exists
23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"OLAPSYS" already exists
23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"MDDATA" already exists
23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"SCOTT" already exists
23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"LLTEST" already exists
23-FEB-15 20:10:52.372:
Completed 1140 USER objects in 19 seconds
23-FEB-15 20:10:52.375: Processing object type DATABASE_EXPORT/ROLE
23-FEB-15 20:10:55.255: ORA-31684: Object type ROLE:"SELECT_CATALOG_ROLE" already exists
23-FEB-15 20:10:55.255: ORA-31684: Object type ROLE:"EXECUTE_CATALOG_ROLE" already exists
23-FEB-15 20:10:55.255: ORA-31684: Object type ROLE:"DELETE_CATALOG_ROLE" already exists
23-FEB-15 20:10:55.256: ORA-31684: Object type ROLE:"RECOVERY_CATALOG_OWNER" already exists
any insight most appreciated.
Schemas SYS, CTXSYS, MDSYS and ORDSYS are not exported using exp/expdp.
Doc ID: Note:228482.1
I suppose he already installed the 12c software and created a database, it seems - so when you imported, you got these "already exists" errors.
Whenever a database is created and the software installed, the SYSTEM, SYS and SYSAUX objects are created by default.