Data Pump: include/exclude statistics
I am new to Oracle. Database statistics are stored in the data dictionary (SYS schema), so how can we include or exclude statistics (SYS objects) when doing a Data Pump export or import?
These statistics are for your schema objects (in most cases tables). You have permission to create/update your schema objects' statistics, i.e. to analyze them.
This link gives you a bit more info on what the original export/import utilities do when you use the STATISTICS parameter:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/exp_imp.htm#sthref2786
When using Data Pump export, statistics are always saved for tables, and if the source table has statistics, they are imported.
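For the original question: statistics can be filtered like any other metadata. A minimal sketch of an export parameter file that leaves statistics out might look like this (the directory object, dump file and schema name are placeholder assumptions):

```text
DIRECTORY=dpump_dir1
DUMPFILE=scott_nostats.dmp
SCHEMAS=scott
EXCLUDE=STATISTICS
LOGFILE=scott_nostats.log
```

You would then run something like expdp scott/password PARFILE=nostats.par; EXCLUDE=STATISTICS should cover both the TABLE_STATISTICS and INDEX_STATISTICS paths.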
Similar Messages
-
INCLUDE & EXCLUDE in Data Pump
Is there any reason why Data Pump does not allow the INCLUDE parameter when you have the EXCLUDE parameter in your statement, and vice versa?
"Data Pump offers much greater metadata filtering than original Export and Import. The INCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep in the export job. The EXCLUDE parameter allows you to specify which object (and its dependent objects) you want to keep out of the export job. You cannot mix the two parameters in one job. Both parameters work with Data Pump Import as well, and you can use different INCLUDE and EXCLUDE options for different operations on the same dump file."
Source: http://www.oracle.com/technology/products/database/utilities/pdf/datapump11g2007_quickstart.pdf
Jonathan Ferreira
http://oracle4dbas.blogspot.com -
How to exclude statistics using the Data Pump API?
How to exclude all statistics while exporting data using Oracle Data Pump API (DBMS_DATAPUMP package)?
You would call the metadata filter API like this:
dbms_datapump.metadata_filter(
  handle => your_handle_here,
  name   => 'EXCLUDE_PATH_LIST',
  value  => 'STATISTICS');
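Put in context, a minimal sketch of a complete schema-mode export job that filters out statistics might look like this (the schema name, dump file and directory object are assumptions):

```sql
DECLARE
  h NUMBER;
BEGIN
  -- Open a schema-mode export job
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'scott_nostats.dmp',
                         directory => 'DPUMP_DIR1');
  -- Limit the job to one schema
  DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR',
                                value => 'IN (''SCOTT'')');
  -- Filter out all statistics paths
  DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'EXCLUDE_PATH_LIST',
                                value => 'STATISTICS');
  DBMS_DATAPUMP.START_JOB(handle => h);
  DBMS_DATAPUMP.DETACH(handle => h);
END;
/
```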
Hope this helps.
Dean -
Exclude DBMS_SCHEDULER jobs in DATA PUMP import
Hello,
I need to exclude all DBMS_SCHEDULER jobs during DATA PUMP import.
I tried to do this with: EXCLUDE=JOB but this only works on DBMS_JOB.
Is there a way to exclude all DBMS_SCHEDULER jobs during import?
Kind Regards
There is PROCOBJ, which can be excluded (procedural objects in the selected schemas), but I'm afraid it excludes more than just DBMS_SCHEDULER jobs.
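One hedged workaround, if the scheduler job names are known, is to narrow PROCOBJ with a name clause in the import parameter file (the job names below are placeholders):

```text
EXCLUDE=PROCOBJ:"IN ('MY_SCHED_JOB1','MY_SCHED_JOB2')"
```

On the command line the quotes would need OS-specific escaping, which is one reason a parameter file is easier here.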
Any ideas? -
Data pump + statistics + dba_tab_modifications
Database 11.2, persons involved with replicating data are using data pump and importing the statistics (that's all I know about their process). When I look at this query:
select /*mods.table_owner,
mods.table_name,*/
mods.inserts,
mods.updates,
mods.deletes,
mods.timestamp,
tabs.num_rows,
tabs.table_lock,
tabs.monitoring,
tabs.sample_size,
tabs.last_analyzed
from sys.dba_tab_modifications mods join dba_tables tabs
on mods.table_owner = tabs.owner and mods.table_name = tabs.table_name
where mods.table_name = &tab;
I see this:
INSERTS UPDATES DELETES TIMESTAMP NUM_ROWS TABLE_LOCK MONITORING SAMPLE_SIZE LAST_ANALYZED
119333320 0 0 11/22/2011 19:27 116022939 ENABLED YES 116022939 10/24/2011 23:10
As we can see, the source database last gathered stats on 10/24 and the data was loaded into the destination on 11/22.
The database is giving bad execution plans as indicated in previous thread: Re: Understanding results from dbms_xplan.display_cursor
My first inclination is to run the following, but since they imported the stats, should they already be "good" and it's a matter of dba_tab_modifications getting out of sync? What gives?
exec dbms_stats.gather_schema_stats(
ownname => 'SCHEMA_NAME',
options => 'GATHER AUTO'
)
In your previous post you mentioned that the explain plan has 197 records. That is one big SQL statement, so the CBO has plenty of opportunity to mess it up.
That said, it is a good idea to verify that your statistics are fine. One way to accomplish that is to gather statistics into a pending area and compare the pending area stats with what is currently in the dictionary using DBMS_STATS.DIFF_TABLE_STATS_IN_PENDING.
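A sketch of that pending-statistics comparison (schema and table names are placeholders) might look like:

```sql
BEGIN
  -- Gather into the pending area instead of publishing immediately
  DBMS_STATS.SET_TABLE_PREFS('SCOTT', 'BIG_TABLE', 'PUBLISH', 'FALSE');
  DBMS_STATS.GATHER_TABLE_STATS('SCOTT', 'BIG_TABLE');
END;
/

-- Compare the pending stats with what is currently in the dictionary
SELECT report, maxdiffpct
FROM   TABLE(DBMS_STATS.DIFF_TABLE_STATS_IN_PENDING('SCOTT', 'BIG_TABLE'));
```

Remember to set PUBLISH back to TRUE (or publish the pending stats) once you are satisfied.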
As mentioned by Tubby in your previous post, extended stats are a powerful and easy way to improve the quality of the CBO's plans. Note that in 11.2.0.2 you can create extended stats specifically tailored to your SQL (based on AWR or on the live system) - http://iiotzov.wordpress.com/2011/11/01/get-the-max-of-oracle-11gr2-right-from-the-start-create-relevant-extended-statistics-as-a-part-of-the-upgrade/
Iordan Iotzov
http://iiotzov.wordpress.com/
Edited by: Iordan Iotzov on Jan 6, 2012 12:45 PM -
I am exporting 2 tables from my database as a preliminary test before doing a full export (as part of a server migration to a new server).
I have some concerns about the time the export took (and therefore the time the corresponding import will take, which is typically considerably longer than the export).
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 19.87 GB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SAPPHIRE"."A_SDIDATAITEM" 15.45 GB 88263813 rows
. . exported "SAPPHIRE"."A_SDIDATA" 1.775 GB 14011593 rows
Master table "SYSTEM"."EXPORT_TABLES_LIMSLIVE" successfully loaded/unloaded
Dump file set for SYSTEM.EXPORT_TABLES_LIMSLIVE is:
E:\ORACLE\PRODUCT\10.2.0\ADMIN\LIMSLIVE\DPDUMP\EXP_TABLES_LIMSLIVE.DMP
Job "SYSTEM"."EXPORT_TABLES_LIMSLIVE" successfully completed at 15:43:38
These 2 tables alone took nearly an hour to export. The bulk of the time seemed to be on the line
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Q1. Is that really the line that was taking the time, or was the export actually working on the export of the table on the following line?
Q2. Will such table stats be brought in on an import? i.e. are table stats part of the dictionary, and therefore part of the SYS/SYSTEM schemas, and so will not be brought in on the import to my newly created target database?
Q3. Does anyone know of any performance improvements that can be made to this export/import? I am exporting from the 10.2.0.1 source Data Pump and will be importing on a new target 11gR2 Data Pump. From experimenting with the command line I have found that 10.2.0.1 does not support PARALLEL, so I am not able to use that on the export side (I should be able to use it on the 11gR2 import side).
thanks,
Jim
Jim,
Q1. What difference does it make knowing how long the metadata versus the actual data takes on export? Is this because you could decide to exclude some objects and manually create them before import?
You asked what was taking so long. This was just a test to see whether it was metadata or data. It may help us figure out if there is a problem or not; knowing what is slow would help narrow things down.
With the old exp/imp utility I sometimes manually created the tablespaces and indexes in this manner; however, for Data Pump the metadata contains a lot more than just tablespaces and indexes - I couldn't imagine manually creating all the tables and grants, for example. I guess you can be selective about what objects you include/exclude in the export or import (via the INCLUDE & EXCLUDE settings)?
No, I'm not suggesting that you change your process, just trying to figure out what is slow. Also, old exp/imp and Data Pump treat metadata and data the same way. Just to maybe clear things up - when you say content=metadata_only, it exports everything except for data. It will export tablespaces, grants, users, tables, statistics, etc. Everything but the data. When you say content=data_only, it only exports the data. You can use this method to export and import everything, but it's not the best solution for most. If you create all of your metadata and then load the data, any indexes on the tables need to be maintained while the data is being loaded, and this will slow down the data-only job.
Q2. If I do a DATA ONLY export I presume that means I need to manually pre-create every object I want imported into my target database. Does this mean every tablespace, table, index, grant etc. (not an attractive option)?
Again - I was not suggesting this method, just trying to figure out what was slow. If I were to do it this way, I would run impdp on the metadata-only dump file first, then run the import on the data-only dump file.
Q3. If I use EXCLUDE=statistics, does that mean I can simply regenerate the stats on the target database after the import completes (how would I do that?)
Yes, you can do that. There are different statistics gathering levels. You can collect them per table, per index, per schema, and I think per database. You want to look at the documentation for dbms_stats.gather...
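As a sketch, the levels mentioned above map to DBMS_STATS calls roughly like this (schema, table and index names are placeholders):

```sql
-- Per table
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP');
-- Per index
EXEC DBMS_STATS.GATHER_INDEX_STATS(ownname => 'SCOTT', indname => 'EMP_PK');
-- Per schema
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCOTT');
-- Whole database
EXEC DBMS_STATS.GATHER_DATABASE_STATS;
```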
Dean -
File name substitution with Data pump
Hi,
I'm experimenting with Oracle data pump export, 10.2 on Windows 2003 Server.
On my current export scripts, I am able to create the dump file name dynamically.
This name includes the database name, date, and time such as the
following : exp_testdb_01192005_1105.dmp.
When I try to do the same thing with Data Pump, it doesn't work. Has anyone
had success with this? Thanks.
ed lewis
Hi Ed,
This is an example for your issue:
[oracle@dbservertest backups]$ expdp gsmtest/gsm directory=dpdir dumpfile=exp_testdb_01192005_1105.dmp tables=ban_banco
Export: Release 10.2.0.1.0 - Production on Thursday, 19 January, 2006 12:23:55
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "GSMTEST"."SYS_EXPORT_TABLE_01": gsmtest/******** directory=dpdir dumpfile=exp_testdb_01192005_1105.dmp tables=ban_banco
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/COMMENT
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "GSMTEST"."BAN_BANCO" 7.718 KB 9 rows
Master table "GSMTEST"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
Dump file set for GSMTEST.SYS_EXPORT_TABLE_01 is:
/megadata/clona/exp_testdb_01192005_1105.dmp
Job "GSMTEST"."SYS_EXPORT_TABLE_01" successfully completed at 12:24:18
This works OK.
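That example uses a fixed name, though. To get the dynamic name Ed was after, one option is to build the file name in the calling script before invoking expdp. A minimal sketch (the database name is a placeholder, and the expdp line is commented out here):

```shell
#!/bin/sh
# Build a timestamped dump file name like exp_testdb_01192005_1105.dmp
DB=testdb
STAMP=$(date +%m%d%Y_%H%M)
DUMPFILE="exp_${DB}_${STAMP}.dmp"
echo "$DUMPFILE"
# expdp gsmtest/gsm directory=dpdir dumpfile="$DUMPFILE" tables=ban_banco
```

On Windows the same idea works with a batch script that assembles the name from %date% and %time% before calling expdp.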
Regards,
Wilson -
Database Upgrade using Data Pump
Hi,
I am moving my database from a Windows 2003 server to a Windows 2007 server. At the same time I am upgrading this database from 10g to 11gR2(11.2.0.3).
Therefore I am using the export/import method of upgrade (via Data Pump, not the old exp/imp).
I have successfully exported my source database and have created the empty shell database ready to take the import. However, I have a couple of queries.
Q1. regarding all the SYSTEM objects from the source database - how will they import, given that the new target database already has a SYSTEM tablespace?
I am guessing I need to use the TABLE_EXISTS_ACTION option for the import. However, should I set this to APPEND, SKIP, REPLACE or TRUNCATE - which is best?
Q2. I am planning to slightly change the directory structure on the new database server - would it therefore be better to pre-create the tablespaces, or leave this to the import but use the REMAP_DATAFILE option - what is everyone's experience as to which is the better way to go? Again, if I pre-create the tablespaces, how do I inform the import to ignore the creation of the tablespaces?
Q3. these 2 databases are on the same network, so in theory, instead of a manual export, a copy of the dump file to the new server, and then the import, I could use a Network Link for import. I was just wondering whether there are any cons to this method over using the explicit export dump file?
thanks,
Jim
Jim,
Q1. regarding all the SYSTEM objects from the source database. How will they import given that the new target database already has a SYSTEM tablespace
I am guessing I need to use the TABLE_EXISTS_ACTION option for the import. However should I set this to APPEND, SKIP, REPLACE or TRUNCATE - which is best ?If all you have is the base database and nothing created, then you can do the full=y. In fact, this is probably what you want. The system tablespace will be there so when Data Pump tries to create it , it will just fail that create statement. Nothing else will fail. In most cases, your system tables will already be there, and this is ok too. If you do schema mode imports, you will miss out on some of the other stuff.
Q2. I am planning to slightly change the directory structure on the new database server - would it therefore be better to pre-create the tablespaces, or leave this to the import but use the REMAP_DATAFILE option - what is everyone's experience as to which is the better way to go? Again, if I pre-create the tablespaces, how do I inform the import to ignore the creation of the tablespaces?
If the directory structure is different (which it usually is) then there is no easier way. You can run impdp with sqlfile and say include=tablespace. This will give you all of the create tablespace commands in a text file, and you can edit the text file to change whatever you want to change. You can tell Data Pump to skip the tablespace creation by using exclude=tablespace.
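As a sketch, the two steps just described might look like this from the command line (file names and the directory object are assumptions):

```text
REM Step 1: extract the CREATE TABLESPACE DDL into a text file for editing
impdp system DIRECTORY=dpump_dir1 DUMPFILE=full.dmp SQLFILE=tablespaces.sql INCLUDE=TABLESPACE

REM Step 2: run the real import, skipping tablespace creation
impdp system DIRECTORY=dpump_dir1 DUMPFILE=full.dmp FULL=y EXCLUDE=TABLESPACE
```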
Q3. these 2 databases are on the same network, so in theory, instead of a manual export, a copy of the dump file to the new server, and then the import, I could use a Network Link for import. I was just wondering whether there are any cons to this method over using the explicit export dump file?
The only con could be if you have a slow network. This will make it slower, but if you have to copy the dumpfile over the same network, then you will still see the same basic traffic. The pro is that you don't have to have extra disk space. Here is how I look at it:
1. you need XX GB for the source database
2. you need YY GB for the source dumpfile
3. you need YY GB for the target dumpfile that you copy
4. you need XX GB for the target database.
By doing a network import you get rid of 2*YY GB for the dumpfiles.
Dean -
Hi,
Is there a way to use the Data Pump API to export tables from multiple schemas in the same job? I can't figure out what the filters would be. It seems like I can either specify many tables from 1 schema only, or I can specify multiple schemas but not limit the tables I want to export.
I keep running into this error: ORA-31655: no data or metadata objects selected for job
I'd like to do something like this:
--METADATA FILTER: SPECIFY TABLES TO EXPORT
dbms_datapump.metadata_filter(
handle => hdl,
name => 'NAME_EXPR',
value => 'IN(''schema1.table1'',''schema2.table2'')');
This does not seem to be possible.
Any help would be appreciated.
Thanks,
Nora
A user that has the EXP_FULL_DATABASE role should be able to do what you want.
Search here for that role http://students.kiv.zcu.cz/doc/oracle/server.102/b14215/dp_export.htm#i1007837
It seems like you could do what you want by using that role in conjunction with the exclude and include parameters: http://students.kiv.zcu.cz/doc/oracle/server.102/b14215/dp_export.htm#i1009903 -
Data Pump - How to avoid exporting/importing dbms_scheduler jobs?
Hi,
I am using Data Pump to export a user's objects. When I import them it also imports any jobs that user has created with dbms_scheduler - how can I avoid this? I tried EXCLUDE=JOBS but no luck.
Thanks,
Jon.
Here are my export and import parameter files:
DIRECTORY=dpump_dir1
DUMPFILE=reveal.dmp
CONTENT=METADATA_ONLY
SCHEMAS=REVEAL
EXCLUDE=TABLE_STATISTICS
EXCLUDE=INDEX_STATISTICS
LOGFILE=reveal.log
DIRECTORY=dpump_dir1
DUMPFILE=reveal.dmp
CONTENT=METADATA_ONLY
SCHEMAS=reveal
REMAP_SCHEMA=reveal:reveal_backup
TRANSFORM=SEGMENT_ATTRIBUTES:n
EXCLUDE=TABLE_STATISTICS
EXCLUDE=INDEX_STATISTICS
LOGFILE=reveal.log
Sorry for the reply to an old post.
It seems that now (10.2.0.4) JOB is included in the list of SCHEMA_EXPORT_OBJECTS.
SQL> SELECT OBJECT_PATH FROM SCHEMA_EXPORT_OBJECTS WHERE object_path LIKE '%JOB%';
OBJECT_PATH
JOB
SCHEMA_EXPORT/JOB
Unfortunately, EXCLUDE=JOB still generates an invalid argument on my schema imports. I also don't know whether these are old-style jobs or scheduler jobs. I don't see anything for object_path LIKE '%SCHED%', which is my real interest anyway.
Data Pump is so rich already, I hate to ask for more, but ... may we please have even more? scheduler_programs, scheduler_jobs, scheduler etc.
Thanks
Steve -
Data Pump - expdp and slow performance on specific tables
Hi there
I have a Data Pump export of a schema. Most of the 700 tables are exported very quickly (direct path), but a couple of them seem to be extremely slow.
I have checked:
- no lobs
- no long/raw
- no VPD
- no partitions
- no bitmapped index
- just date, number, varchar2's
I'm running with trace 400300
But I'm having trouble reading the output from it. It seems that some of the slow-performing tables are running with method 4? Can anyone explain the method codes in the trace:
1 > direct path (I think)
2 > external table (I think)
4 > ?
others?
I have done some stats using v$filestat/v$session_wait (history) - and it seems that we always wait for 'db file sequential read' and are doing lots and lots of SINGLEBLKRDS. No undo is read.
I have a table 2.5 GB -> 3 minutes
and then this (in my eyes) similar table: 2.4 GB -> 1.5 hrs.
There are 367,000 blocks (8 KB) and avg row length = 71.
I'm on Oracle 11.2 on a Linux box with plenty of RAM and CPU power.
Trace file /opt/oracle112/diag/rdbms/prod/prod/trace/prod_dw00_24268.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /opt/oracle112/product/11.2.0.2/dbhome_1
System name: Linux
Node name: tiaprod.thi.somethingamt.dk
Release: 2.6.18-194.el5
Version: #1 SMP Mon Mar 29 22:10:29 EDT 2010
Machine: x86_64
VM name: Xen Version: 3.4 (HVM)
Instance name: prod
Redo thread mounted by this instance: 1
Oracle process number: 222
Unix process pid: 24268, image: [email protected] (DW00)
*** 2011-09-20 09:39:39.671
*** SESSION ID:(401.8395) 2011-09-20 09:39:39.671
*** CLIENT ID:() 2011-09-20 09:39:39.671
*** SERVICE NAME:(SYS$BACKGROUND) 2011-09-20 09:39:39.671
*** MODULE NAME:() 2011-09-20 09:39:39.671
*** ACTION NAME:() 2011-09-20 09:39:39.671
KUPP:09:39:39.670: Current trace/debug flags: 00400300 = 4195072
*** MODULE NAME:(Data Pump Worker) 2011-09-20 09:39:39.672
*** ACTION NAME:(SYS_EXPORT_SCHEMA_09) 2011-09-20 09:39:39.672
KUPW:09:39:39.672: 0: ALTER SESSION ENABLE PARALLEL DML called.
KUPW:09:39:39.672: 0: ALTER SESSION ENABLE PARALLEL DML returned.
KUPC:09:39:39.693: Setting remote flag for this process to FALSE
prvtaqis - Enter
prvtaqis subtab_name upd
prvtaqis sys table upd
KUPW:09:39:39.819: 0: KUPP$PROC.WHATS_MY_ID called.
KUPW:09:39:39.819: 1: KUPP$PROC.WHATS_MY_ID returned.
KUPW:09:39:39.820: 1: worker max message number: 1000
KUPW:09:39:39.822: 1: Full cluster access allowed
KUPW:09:39:39.823: 1: Original job start time: 11-SEP-20 09:39:38 AM
KUPW:09:39:39.862: 1: KUPP$PROC.WHATS_MY_NAME called.
KUPW:09:39:39.862: 1: KUPP$PROC.WHATS_MY_NAME returned. Process name: DW00
KUPW:09:39:39.862: 1: KUPV$FT_INT.GET_INSTANCE_ID called.
KUPW:09:39:39.866: 1: KUPV$FT_INT.GET_INSTANCE_ID returned. Instance name: prod
KUPW:09:39:39.870: 1: ALTER SESSION ENABLE RESUMABLE called.
KUPW:09:39:39.870: 1: ALTER SESSION ENABLE RESUMABLE returned.
KUPW:09:39:39.871: 1: KUPF$FILE.INIT called.
KUPW:09:39:39.996: 1: KUPF$FILE.INIT returned.
KUPW:09:39:39.998: 1: KUPF$FILE.GET_MAX_CSWIDTH called.
KUPW:09:39:39.998: 1: KUPF$FILE.GET_MAX_CSWIDTH returned.
KUPW:09:39:39.998: 1: Max character width: 1
KUPW:09:39:39.998: 1: Max clob fetch: 32757
KUPW:09:39:39.998: 1: Max varchar2a size: 32757
KUPW:09:39:39.998: 1: Max varchar2 size: 7990
KUPW:09:39:39.998: 1: In procedure GET_PARAMETERS
KUPW:09:39:40.000: 1: In procedure GET_METADATA_FILTERS
KUPW:09:39:40.001: 1: In procedure GET_METADATA_TRANSFORMS
KUPW:09:39:40.002: 1: In procedure GET_DATA_FILTERS
KUPW:09:39:40.004: 1: In procedure GET_DATA_REMAPS
KUPW:09:39:40.005: 1: In procedure PRINT_MT_PARAMS
KUPW:09:39:40.005: 1: Master table : "SYSTEM"."SYS_EXPORT_SCHEMA_09"
KUPW:09:39:40.005: 1: Metadata job mode : SCHEMA_EXPORT
KUPW:09:39:40.005: 1: Debug enable : TRUE
KUPW:09:39:40.005: 1: Profile enable : FALSE
KUPW:09:39:40.005: 1: Transportable enable : FALSE
KUPW:09:39:40.005: 1: Metrics enable : FALSE
KUPW:09:39:40.005: 1: db version : 11.2.0.2.0
KUPW:09:39:40.005: 1: job version : 11.2.0.0.0
KUPW:09:39:40.005: 1: service name :
KUPW:09:39:40.005: 1: Current Edition : ORA$BASE
KUPW:09:39:40.005: 1: Job Edition :
KUPW:09:39:40.005: 1: Abort Step : 0
KUPW:09:39:40.005: 1: Access Method : AUTOMATIC
KUPW:09:39:40.005: 1: Data Options : 0
KUPW:09:39:40.006: 1: Dumper directory :
KUPW:09:39:40.006: 1: Master only : FALSE
KUPW:09:39:40.006: 1: Data Only : FALSE
KUPW:09:39:40.006: 1: Metadata Only : FALSE
KUPW:09:39:40.006: 1: Estimate : BLOCKS
KUPW:09:39:40.006: 1: Data error logging table :
KUPW:09:39:40.006: 1: Remote Link :
KUPW:09:39:40.006: 1: Dumpfile present : TRUE
KUPW:09:39:40.006: 1: Table Exists Action :
KUPW:09:39:40.006: 1: Partition Options : NONE
KUPW:09:39:40.006: 1: Tablespace Datafile Count: 0
KUPW:09:39:40.006: 1: Metadata Filter Index : 1 Count : 10
KUPW:09:39:40.006: 1: 1 Name - INCLUDE_USER
KUPW:09:39:40.006: 1: Value - TRUE
KUPW:09:39:40.006: 1: Object Name - SCHEMA_EXPORT
KUPW:09:39:40.006: 1: 2 Name - SCHEMA_EXPR
KUPW:09:39:40.006: 1: Value - IN ('TIA')
KUPW:09:39:40.006: 1: 3 Name - NAME_EXPR
KUPW:09:39:40.006: 1: Value - ='ACC_PAYMENT_SPECIFICATION'
KUPW:09:39:40.006: 1: Object - TABLE
KUPW:09:39:40.006: 1: 4 Name - INCLUDE_PATH_EXPR
KUPW:09:39:40.006: 1: Value - IN ('TABLE')
KUPW:09:39:40.006: 1: 5 Name - ORDERED
KUPW:09:39:40.006: 1: Value - FALSE
KUPW:09:39:40.006: 1: Object - TABLE_DATA
KUPW:09:39:40.006: 1: 6 Name - NO_XML
KUPW:09:39:40.006: 1: Value - TRUE
KUPW:09:39:40.006: 1: Object - XMLSCHEMA/EXP_XMLSCHEMA
KUPW:09:39:40.006: 1: 7 Name - XML_OUTOFLINE
KUPW:09:39:40.006: 1: Value - FALSE
KUPW:09:39:40.006: 1: Object - TABLE/TABLE_DATA
KUPW:09:39:40.006: 1: 8 Name - XDB_GENERATED
KUPW:09:39:40.006: 1: Value - FALSE
KUPW:09:39:40.006: 1: Object - TABLE/TRIGGER
KUPW:09:39:40.007: 1: 9 Name - XDB_GENERATED
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: Object - TABLE/RLS_POLICY
KUPW:09:39:40.007: 1: 10 Name - PRIVILEGED_USER
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: MD remap schema Index : 4 Count : 0
KUPW:09:39:40.007: 1: MD remap other Index : 5 Count : 0
KUPW:09:39:40.007: 1: MD Transform ddl Index : 2 Count : 11
KUPW:09:39:40.007: 1: 1 Name - DBA
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - JOB
KUPW:09:39:40.007: 1: 2 Name - EXPORT
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: 3 Name - PRETTY
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: 4 Name - SQLTERMINATOR
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: 5 Name - CONSTRAINTS
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: Object - TABLE
KUPW:09:39:40.007: 1: 6 Name - REF_CONSTRAINTS
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: Object - TABLE
KUPW:09:39:40.007: 1: 7 Name - OID
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - TABLE
KUPW:09:39:40.007: 1: 8 Name - RESET_PARALLEL
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - INDEX
KUPW:09:39:40.007: 1: 9 Name - OID
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - TYPE
KUPW:09:39:40.007: 1: 10 Name - OID
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - INC_TYPE
KUPW:09:39:40.007: 1: 11 Name - REVOKE_FROM
KUPW:09:39:40.008: 1: Value - SYSTEM
KUPW:09:39:40.008: 1: Object - ROLE
KUPW:09:39:40.008: 1: Data Filter Index : 6 Count : 0
KUPW:09:39:40.008: 1: Data Remap Index : 7 Count : 0
KUPW:09:39:40.008: 1: MD remap name Index : 8 Count : 0
KUPW:09:39:40.008: 1: In procedure DISPATCH_WORK_ITEMS
KUPW:09:39:40.009: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:40.009: 1: KUPC$QUEUE.TRANSCEIVE called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:40.036: 1: KUPC$QUEUE.TRANSCEIVE returned. Received 2011
KUPW:09:39:40.036: 1: DBMS_LOB.CREATETEMPORARY called.
KUPW:09:39:40.037: 1: DBMS_LOB.CREATETEMPORARY returned.
KUPW:09:39:40.038: 1: Flags: 18
KUPW:09:39:40.038: 1: Start sequence number:
KUPW:09:39:40.038: 1: End sequence number:
KUPW:09:39:40.038: 1: Metadata Parallel: 1
KUPW:09:39:40.038: 1: Primary worker id: 1
KUPW:09:39:40.041: 1: In procedure GET_TABLE_DATA_OBJECTS
KUPW:09:39:40.041: 1: In procedure CREATE_MSG
KUPW:09:39:40.041: 1: KUPV$FT.MESSAGE_TEXT called.
KUPW:09:39:40.041: 1: KUPV$FT.MESSAGE_TEXT returned.
KUPW:09:39:40.041: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:40.041: 1: KUPC$QUEUE_INT.SEND called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:40.044: 1: KUPC$QUEUE_INT.SEND returned.
KUPW:09:39:40.044: 1: Estimate in progress using BLOCKS method...
KUPW:09:39:40.044: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
KUPW:09:39:40.044: 1: Old Seqno: 0 New Path: SCHEMA_EXPORT/TABLE/TABLE_DATA PO Num: -5 New Seqno: 62
KUPW:09:39:40.046: 1: Created type completion for duplicate 62
KUPW:09:39:40.046: 1: In procedure CREATE_MSG
KUPW:09:39:40.046: 1: KUPV$FT.MESSAGE_TEXT called.
KUPW:09:39:40.046: 1: KUPV$FT.MESSAGE_TEXT returned.
KUPW:09:39:40.046: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:40.046: 1: KUPC$QUEUE_INT.SEND called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:40.047: 1: KUPC$QUEUE_INT.SEND returned.
KUPW:09:39:40.047: 1: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
KUPW:09:39:40.048: 1: In procedure CONFIGURE_METADATA_UNLOAD
KUPW:09:39:40.048: 1: Phase: ESTIMATE_PHASE Filter Name: Filter Value:
KUPW:09:39:40.048: 1: DBMS_METADATA.OPEN11.2.0.0.0 called.
KUPW:09:39:40.182: 1: DBMS_METADATA.OPEN11.2.0.0.0 returned. Source handle: 100001
KUPW:09:39:40.182: 1: DBMS_METADATA.SET_FILTER called. metadata_phase: ESTIMATE_PHASE
KUPW:09:39:40.182: 1: DBMS_METADATA.SET_FILTER returned. In function GET_NOEXP_TABLE
KUPW:09:39:40.194: 1: DBMS_METADATA.SET_PARSE_ITEM called.
*** 2011-09-20 09:39:40.325
KUPW:09:39:40.325: 1: DBMS_METADATA.SET_PARSE_ITEM returned.
KUPW:09:39:40.325: 1: DBMS_METADATA.SET_COUNT called.
KUPW:09:39:40.328: 1: DBMS_METADATA.SET_COUNT returned.
KUPW:09:39:40.328: 1: DBMS_METADATA.FETCH_XML_CLOB called.
*** 2011-09-20 09:39:42.603
KUPW:09:39:42.603: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
KUPW:09:39:42.603: 1: In procedure CREATE_TABLE_DATA_OBJECT_ROWS
KUPW:09:39:42.603: 1: In function GATHER_PARSE_ITEMS
KUPW:09:39:42.603: 1: In function CHECK_FOR_REMAP_NETWORK
KUPW:09:39:42.603: 1: Nothing to remap
KUPW:09:39:42.603: 1: In procedure BUILD_OBJECT_STRINGS
KUPW:09:39:42.604: 1: In procedure LOCATE_DATA_FILTERS
KUPW:09:39:42.604: 1: In function NEXT_PO_NUMBER
KUPW:09:39:42.620: 1: In procedure DETERMINE_METHOD_PARALLEL
KUPW:09:39:42.620: 1: flags mask: 0
KUPW:09:39:42.620: 1: dapi_possible_meth: 1
KUPW:09:39:42.620: 1: data_size: 3019898880
KUPW:09:39:42.620: 1: et_parallel: TRUE
KUPW:09:39:42.620: 1: object: TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION" <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
KUPW:09:39:42.648: 1: l_dapi_bit_mask: 7
KUPW:09:39:42.648: 1: l_client_bit_mask: 7
KUPW:09:39:42.648: 1: TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION" either, parallel: 12 <<<<< Here is says either (I thought that was method ?) <<<<<<<<<<<<<<<<
KUPW:09:39:42.648: 1: FORALL BULK INSERT called.
KUPW:09:39:42.658: 1: FORALL BULK INSERT returned.
KUPW:09:39:42.660: 1: DBMS_LOB.TRIM called. v_md_xml_clob
KUPW:09:39:42.660: 1: DBMS_LOB.TRIM returned.
KUPW:09:39:42.660: 1: DBMS_METADATA.FETCH_XML_CLOB called.
KUPW:09:39:42.678: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
KUPW:09:39:42.678: 1: DBMS_LOB.CREATETEMPORARY called.
KUPW:09:39:42.678: 1: DBMS_LOB.CREATETEMPORARY returned.
KUPW:09:39:42.678: 1: In procedure UPDATE_TD_ROW_EXP with seqno: 62
KUPW:09:39:42.680: 1: 1 rows fetched
KUPW:09:39:42.680: 1: In function NEXT_PO_NUMBER
KUPW:09:39:42.680: 1: Next table data array entry: 1 Parallel: 12 Size: 3019898880 Method: 4Creation_level: 0 <<<<<<<<<<<<<<<< HERE IT SAYS METHOD = 4 and PARALLEL=12 (I'm not using the parallel parameter ???) <<<<<<<<<<<<<<<<<<
KUPW:09:39:42.681: 1: In procedure UPDATE_TD_BASE_PO_INFO
KUPW:09:39:42.683: 1: Updated 1 td objects with bpo between 1 and 1
KUPW:09:39:42.684: 1: Send table_data_varray called. Count: 1
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:42.695: 1: Send table_data_varray returned.
KUPW:09:39:42.695: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:42.695: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
KUPW:09:39:42.695: 1: Old Seqno: 62 New Path: PO Num: -5 New Seqno: 0
KUPW:09:39:42.695: 1: Object count: 1
KUPW:09:39:42.697: 1: 1 completed for 62
KUPW:09:39:42.697: 1: DBMS_METADATA.CLOSE called. Handle: 100001
KUPW:09:39:42.697: 1: DBMS_METADATA.CLOSE returned.
KUPW:09:39:42.697: 1: In procedure CREATE_MSG
KUPW:09:39:42.697: 1: KUPV$FT.MESSAGE_TEXT called.
KUPW:09:39:42.698: 1: KUPV$FT.MESSAGE_TEXT returned.
KUPW:09:39:42.698: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:42.698: 1: KUPC$QUEUE_INT.SEND called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:42.699: 1: KUPC$QUEUE_INT.SEND returned.
KUPW:09:39:42.699: 1: Total estimation using BLOCKS method: 2.812 GB
KUPW:09:39:42.699: 1: In procedure CONFIGURE_METADATA_UNLOAD
KUPW:09:39:42.699: 1: Phase: WORK_PHASE Filter Name: BEGIN_WITH Filter Value:
KUPW:09:39:42.699: 1: DBMS_METADATA.OPEN11.2.0.0.0 called.
KUPW:09:39:42.837: 1: DBMS_METADATA.OPEN11.2.0.0.0 returned. Source handle: 200001
KUPW:09:39:42.837: 1: DBMS_METADATA.SET_FILTER called. metadata_phase: WORK_PHASE
KUPW:09:39:42.837: 1: DBMS_METADATA.SET_FILTER returned. In function GET_NOEXP_TABLE
KUPW:09:39:42.847: 1: DBMS_METADATA.SET_PARSE_ITEM called.
KUPW:09:39:42.964: 1: DBMS_METADATA.SET_PARSE_ITEM returned.
KUPW:09:39:42.964: 1: DBMS_METADATA.SET_COUNT called.
KUPW:09:39:42.967: 1: DBMS_METADATA.SET_COUNT returned.
KUPW:09:39:42.967: 1: KUPF$FILE.OPEN_CONTEXT called.
KUPW:09:39:42.967: 1: KUPF$FILE.OPEN_CONTEXT returned.
KUPW:09:39:42.968: 1: DBMS_METADATA.FETCH_XML_CLOB called. Handle: 200001
*** 2011-09-20 09:40:01.798
KUPW:09:40:01.798: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
KUPW:09:40:01.798: 1: Object seqno fetched:
KUPW:09:40:01.799: 1: Object path fetched:
KUPW:09:40:01.799: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:40:01.799: 1: In procedure COMPLETE_EXP_OBJECT
KUPW:09:40:01.799: 1: KUPF$FILE.FLUSH_LOB called.
KUPW:09:40:01.815: 1: KUPF$FILE.FLUSH_LOB returned.
KUPW:09:40:01.815: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
KUPW:09:40:01.815: 1: Old Seqno: 226 New Path: PO Num: -5 New Seqno: 0
KUPW:09:40:01.815: 1: Object count: 1
KUPW:09:40:01.815: 1: 1 completed for 226
KUPW:09:40:01.815: 1: DBMS_METADATA.CLOSE called. Handle: 200001
KUPW:09:40:01.816: 1: DBMS_METADATA.CLOSE returned.
KUPW:09:40:01.816: 1: KUPF$FILE.CLOSE_CONTEXT called.
KUPW:09:40:01.820: 1: KUPF$FILE.CLOSE_CONTEXT returned.
KUPW:09:40:01.821: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:40:01.821: 1: KUPC$QUEUE.TRANSCEIVE called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:40:01.827: 1: KUPC$QUEUE.TRANSCEIVE returned. Received 2012
KUPW:09:40:01.827: 1: DBMS_LOB.CREATETEMPORARY called.
KUPW:09:40:01.828: 1: DBMS_LOB.CREATETEMPORARY returned.
KUPW:09:40:01.828: 1: Process order range: 1..1
KUPW:09:40:01.828: 1: Method: 1
KUPW:09:40:01.828: 1: Parallel: 1
KUPW:09:40:01.828: 1: Creation level: 0
KUPW:09:40:01.830: 1: BULK COLLECT called.
KUPW:09:40:01.830: 1: BULK COLLECT returned.
KUPW:09:40:01.830: 1: In procedure BUILD_OBJECT_STRINGS
KUPW:09:40:01.836: 1: In procedure MOVE_DATA UNLOADing process_order 1 TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION" <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
KUPW:09:40:01.839: 1: KUPD$DATA.OPEN called.
KUPW:09:40:01.840: 1: KUPD$DATA.OPEN returned.
KUPW:09:40:01.840: 1: KUPD$DATA.SET_PARAMETER - common called.
KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - common returned.
KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - flags called.
KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - flags returned.
KUPW:09:40:01.843: 1: KUPD$DATA.START_JOB called.
KUPW:09:40:01.918: 1: KUPD$DATA.START_JOB returned. In procedure GET_JOB_VERSION
This is how I called expdp:
expdp system/xxxxxxxxx schemas=tia directory=expdp INCLUDE=TABLE:\" =\'ACC_PAYMENT_SPECIFICATION\'\" REUSE_DUMPFILES=Y LOGFILE=expdp:$LOGFILE TRACE=400300
Hi there ...
I have read the note - that's where I found the link to the trace note 286496.1 on how to set up a trace.
But I still need an explanation of the methods (1, 2, 4, etc.).
regards
Mette -
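A side note on the expdp invocation quoted above: the backslash-escaped INCLUDE clause is easy to get wrong and differs between shells. Putting the filter in a parameter file avoids shell escaping entirely. A sketch, assuming the same schema, directory object, and table name as in the post (the file name include.par is made up):

```
# include.par - hypothetical parameter file; no shell escaping needed
schemas=tia
directory=expdp
include=TABLE:"= 'ACC_PAYMENT_SPECIFICATION'"
reuse_dumpfiles=y
trace=400300
```

Then run it as: expdp system/xxxxxxxxx parfile=include.par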
EXPORT ONLY TABLES IN A SCHEMA USING DATA PUMP
Hi folks,
Good day. I would appreciate a Data Pump command to export only TABLE objects in a specific schema.
The server is a 4 node RAC with 16 CPU per node.
Thanks in advance
If all you want is the table definitions, why not use something like:
expdp user/password directory=my_dir dumpfile=my_dump.dmp tables=schema1.table1,schema1.table2,etc content=metadata_only include=table
This will export just the table definitions. If you want the data too, remove content=metadata_only; if you want the dependent objects (indexes, table_statistics, etc.), remove include=table.
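Tying this back to the statistics question at the top of the thread: statistics can also be filtered with EXCLUDE. Since, as quoted earlier, INCLUDE and EXCLUDE cannot be mixed in one job, a statistics-free export has to rely on EXCLUDE alone. A sketch with placeholder names (user, my_dir, my_dump.dmp):

```
expdp user/password directory=my_dir dumpfile=my_dump.dmp \
  schemas=schema1 exclude=statistics
```

The same filter works on import: impdp ... exclude=statistics skips the TABLE_STATISTICS and INDEX_STATISTICS paths in the dump file.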
Dean -
Several users have tables in the tablespace I want to export.
I want to weed out only the tablespace and the tables belonging to one user.
The user also wants some of the tables with all the data in them and some of the tables with no data in them.
I tried all the options
with DATA_ONLY and METADATA_ONLY on both the import and export and have issues.
I have tried exporting, and it gives me all the tables and a size to indicate it works, but on import all hell breaks loose. Junior DBA, need help.
SQL> select owner, table_name, tablespace_name from dba_tables where tablespace_name='USER_SEG';
ORAPROBE OSP_ACCOUNTS USER_SEG
PATSY TRAPP USER_SEG
PATSY TRAPPO USER_SEG
PATSY TRAUDIT USER_SEG
PATSY TRCOURSE USER_SEG
PATSY TRCOURSEO USER_SEG
PATSY TRDESC USER_SEG
PATSY TREMPDATA USER_SEG
PATSY TRFEE USER_SEG
PATSY TRNOTES USER_SEG
PATSY TROPTION USER_SEG
PATSY TRPART USER_SEG
PATSY TRPARTO USER_SEG
PATSY TRPART_OLD USER_SEG
PATSY TRPERCENT USER_SEG
PATSY TRSCHOOL USER_SEG
PATSY TRSUPER USER_SEG
PATSY TRTRANS USER_SEG
PATSY TRUSERPW USER_SEG
PATSY TRUSRDAT USER_SEG
PATSY TRVARDAT USER_SEG
PATSY TRVERIFY USER_SEG
PATSY TRAPPO_RESET USER_SEG
PATSY TRAPP_RESET USER_SEG
PATSY TRCOURSEO_RESET USER_SEG
PATSY TRCOURSE_RESET USER_SEG
PATSY TRPARTO_RESET USER_SEG
PATSY TRPART_RESET USER_SEG
PATSY TRTRANS_RESET USER_SEG
PATSY TRVERIFY_RESET USER_SEG
MAFANY TRVERIFY USER_SEG
MAFANY TRPART USER_SEG
MAFANY TRPARTO USER_SEG
MAFANY TRAPP USER_SEG
MAFANY TRAPPO USER_SEG
MAFANY TRCOURSE USER_SEG
MAFANY TRCOURSEO USER_SEG
MAFANY TRTRANS USER_SEG
JULIE R_REPOSITORY_LOG USER_SEG
JULIE R_VERSION USER_SEG
JULIE R_DATABASE_TYPE USER_SEG
JULIE R_DATABASE_CONTYPE USER_SEG
JULIE R_NOTE USER_SEG
JULIE R_DATABASE USER_SEG
JULIE R_DATABASE_ATTRIBUTE USER_SEG
JULIE R_DIRECTORY USER_SEG
JULIE R_TRANSFORMATION USER_SEG
JULIE R_TRANS_ATTRIBUTE USER_SEG
JULIE R_DEPENDENCY USER_SEG
JULIE R_PARTITION_SCHEMA USER_SEG
JULIE R_PARTITION USER_SEG
JULIE R_TRANS_PARTITION_SCHEMA USER_SEG
JULIE R_CLUSTER USER_SEG
JULIE R_SLAVE USER_SEG
JULIE R_CLUSTER_SLAVE USER_SEG
JULIE R_TRANS_SLAVE USER_SEG
JULIE R_TRANS_CLUSTER USER_SEG
JULIE R_TRANS_HOP USER_SEG
JULIE R_TRANS_STEP_CONDITION USER_SEG
JULIE R_CONDITION USER_SEG
JULIE R_VALUE USER_SEG
JULIE R_STEP_TYPE USER_SEG
JULIE R_STEP USER_SEG
JULIE R_STEP_ATTRIBUTE USER_SEG
JULIE R_STEP_DATABASE USER_SEG
JULIE R_TRANS_NOTE USER_SEG
JULIE R_LOGLEVEL USER_SEG
JULIE R_LOG USER_SEG
JULIE R_JOB USER_SEG
JULIE R_JOBENTRY_TYPE USER_SEG
JULIE R_JOBENTRY USER_SEG
JULIE R_JOBENTRY_COPY USER_SEG
JULIE R_JOBENTRY_ATTRIBUTE USER_SEG
JULIE R_JOB_HOP USER_SEG
JULIE R_JOB_NOTE USER_SEG
JULIE R_PROFILE USER_SEG
JULIE R_USER USER_SEG
JULIE R_PERMISSION USER_SEG
JULIE R_PROFILE_PERMISSION USER_SEG
MAFANY2 TRAPP USER_SEG
MAFANY2 TRAPPO USER_SEG
MAFANY2 TRCOURSE USER_SEG
MAFANY2 TRCOURSEO USER_SEG
MAFANY2 TRPART USER_SEG
MAFANY2 TRPARTO USER_SEG
MAFANY2 TRTRANS USER_SEG
MAFANY2 TRVERIFY USER_SEG
MAFANY BIN$ZY3M1IuZyq3gQBCs+AAMzQ==$0 USER_SEG
MAFANY BIN$ZY3M1Iuhyq3gQBCs+AAMzQ==$0 USER_SEG
MAFANY MYUSERS USER_SEG
I only want the tables from PATSY and want to move them to another database for her to use, keeping the same tablespace name.
The tables below should have just the metadata and not the data:
PATSY TRAPP USER_SEG
PATSY TRAPPO USER_SEG
PATSY TRAUDIT USER_SEG
PATSY TRCOURSE USER_SEG
PATSY TRCOURSEO USER_SEG
PATSY TRDESC USER_SEG
PATSY TREMPDATA USER_SEG
PATSY TRFEE USER_SEG
PATSY TRNOTES USER_SEG
PATSY TROPTION USER_SEG
PATSY TRPART USER_SEG
PATSY TRPARTO USER_SEG
PATSY TRPART_OLD USER_SEG
PATSY TRPERCENT USER_SEG
PATSY TRSCHOOL USER_SEG
PATSY TRSUPER USER_SEG
PATSY TRTRANS USER_SEG
PATSY TRUSERPW USER_SEG
PATSY TRUSRDAT USER_SEG
The following are supposed to have all the data:
PATSY TRVERIFY USER_SEG
PATSY TRAPPO_RESET USER_SEG
PATSY TRAPP_RESET USER_SEG
PATSY TRCOURSEO_RESET USER_SEG
PATSY TRCOURSE_RESET USER_SEG
PATSY TRPARTO_RESET USER_SEG
PATSY TRPART_RESET USER_SEG
PATSY TRTRANS_RESET USER_SEG
PATSY TRVERIFY_RESET USER_SEG
I have tried all the following, and later got stuck in my little effort to document as I go along:
USING DATA PUMP TO EXPORT DATA
First:
Create a directory object or use one that already exists.
I created a directory object and granted access to it:
CREATE DIRECTORY "DATA_PUMP_DIR" AS '/home/oracle/my_dump_dir';
GRANT read, write ON DIRECTORY DATA_PUMP_DIR TO user;
where user is the grantee, such as system or the schema owner.
For example, to create a directory object named expdp_dir located at /u01/backup/exports, enter the following SQL statement:
SQL> create directory expdp_dir as '/u01/backup/exports';
Then grant read and write permissions to the users who will be performing the Data Pump export and import:
SQL> grant read, write on directory expdp_dir to system, user1, user2, user3;
http://wiki.oracle.com/page/Data+Pump+Export+(expdp)+and+Data+Pump+Import(impdp)?t=anon
To view directory objects that already exist
- use EM: under Administration tab to schema section and select Directory Objects
- DESC the views: ALL_DIRECTORIES, DBA_DIRECTORIES
select * from all_directories;
select * from dba_directories;
Export the schema using expdp:
expdp system/SCHMOE DUMPFILE=patsy_schema.dmp DIRECTORY=DATA_PUMP_DIR SCHEMAS=PATSY
expdp system/PASS DUMPFILE=METADATA_ONLY_schema.dmp DIRECTORY=DATA_PUMP_DIR TABLESPACES = USER_SEG CONTENT=METADATA_ONLY
expdp system/PASS DUMPFILE=data_only_schema.dmp DIRECTORY=DATA_PUMP_DIR TABLES=PATSY.TRVERIFY,PATSY.TRAPPO_RESET,PATSY.TRAPP_RESET,PATSY.TRCOURSEO_RESET,PATSY.TRCOURSE_RESET,PATSY.TRPARTO_RESET,PATSY.TRPART_RESET,PATSY.TRTRANS_RESET,PATSY.TRVERIFY_RESET CONTENT=DATA_ONLY
You are correct: all the PATSY tables reside in the tablespace USER_SEG in a different database, and I want to move them to a new database (same version, 10g). I have created a user named patsy there also. The tablespace does not exist in the target I want to move them to; same thing with the objects and indexes, they don't exist in the target.
So how can I move the schema and tablespace with all the tables, keeping the tablespace name and the same username patsy that I created to import into?
I tried again and got some errors. This is better than last time, and that's because I used REMAP_TABLESPACE=USER_SEG:USERS.
But I want to have the tablespace in the target called USER_SEG too. Do I have to create it, or?
[oracle@server1 ~]$ impdp system/blue99 remap_schema=patsy:test REMAP_TABLESPACE=USER_SEG:USERS directory=DATA_PUMP_DIR dumpfile=patsy.dmp logfile=patsy.log
Import: Release 10.1.0.3.0 - Production on Wednesday, 08 April, 2009 11:10
Copyright (c) 2003, Oracle. All rights reserved.
Connected to: Oracle Database 10g Release 10.1.0.3.0 - Production
Master table "SYSTEM"."SYS_IMPORT_FULL_03" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_03": system/******** remap_schema=patsy:test REMAP_TABLESPACE=USER_SEG:USERS directory=DATA_PUMP_DIR dumpfile=patsy.dmp logfile=patsy.log
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TBL_TABLE_DATA/TABLE/TABLE_DATA
. . imported "TEST"."TRTRANS" 10.37 MB 93036 rows
. . imported "TEST"."TRAPP" 2.376 MB 54124 rows
. . imported "TEST"."TRAUDIT" 1.857 MB 28153 rows
. . imported "TEST"."TRPART_OLD" 1.426 MB 7183 rows
. . imported "TEST"."TRSCHOOL" 476.8 KB 5279 rows
. . imported "TEST"."TRAPPO" 412.3 KB 9424 rows
. . imported "TEST"."TRUSERPW" 123.8 KB 3268 rows
. . imported "TEST"."TRVERIFY_RESET" 58.02 KB 183 rows
. . imported "TEST"."TRUSRDAT" 54.73 KB 661 rows
. . imported "TEST"."TRNOTES" 51.5 KB 588 rows
. . imported "TEST"."TRCOURSE" 49.85 KB 243 rows
. . imported "TEST"."TRCOURSE_RESET" 47.60 KB 225 rows
. . imported "TEST"."TRPART" 39.37 KB 63 rows
. . imported "TEST"."TRPART_RESET" 37.37 KB 53 rows
. . imported "TEST"."TRTRANS_RESET" 38.94 KB 196 rows
. . imported "TEST"."TRCOURSEO" 30.93 KB 51 rows
. . imported "TEST"."TRCOURSEO_RESET" 28.63 KB 36 rows
. . imported "TEST"."TRPERCENT" 33.72 KB 1044 rows
. . imported "TEST"."TROPTION" 30.10 KB 433 rows
. . imported "TEST"."TRPARTO" 24.78 KB 29 rows
. . imported "TEST"."TRPARTO_RESET" 24.78 KB 29 rows
. . imported "TEST"."TRVERIFY" 20.97 KB 30 rows
. . imported "TEST"."TRVARDAT" 14.13 KB 44 rows
. . imported "TEST"."TRAPP_RESET" 14.17 KB 122 rows
. . imported "TEST"."TRDESC" 9.843 KB 90 rows
. . imported "TEST"."TRAPPO_RESET" 8.921 KB 29 rows
. . imported "TEST"."TRSUPER" 6.117 KB 10 rows
. . imported "TEST"."TREMPDATA" 0 KB 0 rows
. . imported "TEST"."TRFEE" 0 KB 0 rows
Processing object type TABLE_EXPORT/TABLE/GRANT/TBL_OWNER_OBJGRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
ORA-39083: Object type INDEX failed to create with error:
ORA-00959: tablespace 'INDEX1_SEG' does not exist
Failing sql is:
CREATE UNIQUE INDEX "TEST"."TRPART_11" ON "TEST"."TRPART" ("TRPCPID", "TRPSSN") PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 32768 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "INDEX1_SEG" PARALLEL 1
ORA-39083: Object type INDEX failed to create with error:
ORA-00959: tablespace 'INDEX1_SEG' does not exist
Failing sql is:
CREATE INDEX "TEST"."TRPART_I2" ON "TEST"."TRPART" ("TRPLAST") PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "INDEX1_SEG" PARALLEL 1
ORA-39083: Object type INDEX failed to create with error:
ORA-00959: tablespace 'INDEX1_SEG' does not exist
Failing sql is:
CREATE INDEX "TEST"."TRPART_I3" ON "TEST"."TRPART" ("TRPEMAIL") PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "INDEX1_SEG" PARALLEL 1
ORA-39083: Object type INDEX failed to create with error:
ORA-00959: tablespace 'INDEX1_SEG' does not exist
Failing sql is:
CREATE INDEX "TEST"."TRPART_I4" ON "TEST"."TRPART" ("TRPPASSWORD") PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 131072 NEXT 131072 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "INDEX1_SEG" PARALLEL 1
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "BILLZ"."TRCOURSE_I1" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"BILLZ"', '"TRCOURSE_I1"', NULL, NULL, NULL, 232, 2, 232, 1, 1, 7, 2, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "BILLZ"."TRTRANS_I1" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"BILLZ"', '"TRTRANS_I1"', NULL, NULL, NULL, 93032, 445, 93032, 1, 1, 54063, 2, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "DANQ"."TRAPP_I1" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TRAPP_I1"', NULL, NULL, NULL, 54159, 184, 54159, 1, 1, 4597, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "DANQ"."TRAPP_I2" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TRAPP_I2"', NULL, NULL, NULL, 54159, 182, 17617, 1, 2, 48776, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "JULIE"."TRAPPO_I1" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRAPPO_I1"', NULL, NULL, NULL, 9280, 29, 9280, 1, 1, 166, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "JULIE"."TRAPPO_I2" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRAPPO_I2"', NULL, NULL, NULL, 9280, 28, 4062, 1, 2, 8401, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "JULIE"."TRCOURSEO_I1" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRCOURSEO_I1"', NULL, NULL, NULL, 49, 2, 49, 1, 1, 8, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "JULIE"."TREMPDATA_I1" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TREMPDATA_I1"', NULL, NULL, NULL, 0, 0, 0, 0, 0, 0, 0, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "DANQ"."TROPTION_I1" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TROPTION_I1"', NULL, NULL, NULL, 433, 3, 433, 1, 1, 187, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"TEST"."TRPART_11" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"TEST"."TRPART_I2" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"TEST"."TRPART_I3" creation failed
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"TEST"."TRPART_I4" creation failed
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "DANQ"."TRPART_I5" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TRPART_I5"', NULL, NULL, NULL, 19, 1, 19, 1, 1, 10, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "JULIE"."TRPARTO_I1" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRPARTO_I1"', NULL, NULL, NULL, 29, 1, 29, 1, 1, 1, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "JULIE"."TRPARTO_I2" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRPARTO_I2"', NULL, NULL, NULL, 29, 14, 26, 1, 1, 1, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "JULIE"."TRPART_I2" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRPART_I2"', NULL, NULL, NULL, 7180, 19, 4776, 1, 1, 7048, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "JULIE"."TRPART_I3" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRPART_I3"', NULL, NULL, NULL, 2904, 15, 2884, 1, 1, 2879, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "JULIE"."TRPART_I4" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRPART_I4"', NULL, NULL, NULL, 363, 1, 362, 1, 1, 359, 0, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "JULIE"."TRPART_I5" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRPART_I5"', NULL, NULL, NULL, 363, 1, 363, 1, 1, 353, 0, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "DANQ"."TRPART_11" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TRPART_11"', NULL, NULL, NULL, 7183, 29, 7183, 1, 1, 6698, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "DANQ"."TRPERCENT_I1" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TRPERCENT_I1"', NULL, NULL, NULL, 1043, 5, 1043, 1, 1, 99, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "DANQ"."TRSCHOOL_I1" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"DANQ"', '"TRSCHOOL_I1"', NULL, NULL, NULL, 5279, 27, 5279, 1, 1, 4819, 1, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "JULIE"."TRVERIFY_I2" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"JULIE"', '"TRVERIFY_I2"', NULL, NULL, NULL, 30, 7, 7, 1, 1, 1, 2, 2, NULL, FALSE, NULL, NULL, NULL); END;
ORA-39083: Object type INDEX_STATISTICS failed to create with error:
ORA-20000: INDEX "STU"."TRVERIFY_I1" does not exist or insufficient privileges
Failing sql is:
BEGIN DBMS_STATS.SET_INDEX_STATS('"STU"', '"TRVERIFY_I1"', NULL, NULL, NULL, 30, 12, 30, 1, 1, 1, 2, 2, NULL, FALSE, NULL, NULL, NULL); END;
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYSTEM"."SYS_IMPORT_FULL_03" completed with 29 error(s) at 11:17 -
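About the ORA-00959 errors in the log above: REMAP_TABLESPACE only remapped USER_SEG, while the failing indexes were defined in INDEX1_SEG, which appears in neither the remap nor the target database. Two options, sketched here with made-up datafile paths and sizes: pre-create the missing tablespaces in the target so no remap is needed at all (which also answers the question about keeping the name USER_SEG), or add a second remap.

```sql
-- Option 1: pre-create the source tablespaces in the target database,
-- then run impdp WITHOUT any REMAP_TABLESPACE.
-- Datafile paths and sizes below are placeholders.
CREATE TABLESPACE user_seg
  DATAFILE '/u01/oradata/TEST/user_seg01.dbf' SIZE 500M AUTOEXTEND ON;
CREATE TABLESPACE index1_seg
  DATAFILE '/u01/oradata/TEST/index1_seg01.dbf' SIZE 200M AUTOEXTEND ON;
```

Option 2 is to keep the existing remap and add REMAP_TABLESPACE=INDEX1_SEG:USERS on the same impdp command line. The INDEX_STATISTICS errors naming other owners (BILLZ, DANQ, JULIE, STU) appear to be statistics captured in the dump for indexes that were never created in this single-schema import; adding EXCLUDE=STATISTICS to the import would silence them.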
Data pump + inserting new values in a table
Hello Everybody!!
My question is: is it possible to insert new rows into a table that is part of a running Data Pump export job?
Best regards
Marcin Migdal
Have a look here, never done it myself but seems kind of cool :)
http://www.pythian.com/blogs/766/oracle-data-pump-11g-little-known-new-feature -
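To answer the question directly: yes. Data Pump export does not lock tables against DML, so new rows can be inserted while the job runs; rows inserted after a given table has been unloaded simply will not appear in the dump. If a transactionally consistent image matters, pin the export to a point in time with FLASHBACK_TIME. A sketch with placeholder names (user, my_dir):

```
# Export the schema as of the moment the job starts;
# inserts made during the run are not picked up.
expdp user/password schemas=scott directory=my_dir \
  dumpfile=consistent.dmp flashback_time=systimestamp
```

On older releases FLASHBACK_TIME may need an explicit timestamp expression such as "TO_TIMESTAMP(...)" rather than systimestamp.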
RE: (forte-users) Including/Excluding plans
Pascal,
Before you partition try setting "trc:cf:6".
Good luck,
John Hornsby
DS Data Systems (UK) Ltd.
John.Hornsbydsdata.co.uk
Tel. +44 01908 847100
Mob. +44 07966 546189
Fax. +44 0870 0525800
-----Original Message-----
From: Rottier, Pascal [mailto:Rottier.Pascalpmintl.ch]
Sent: 11 August 2000 10:42
To: 'Forte Users'
Subject: (forte-users) Including/Excluding plans
Hi,
I know there is a way to determine why Forte includes plans in a partition
during partitioning. This shows which plans are (direct or indirect)
supplier plans, but also, which plans are included for other reasons. But I
forgot how to do this. Can anyone point me in the right direction.
Pascal Rottier
Origin Nederland (BAS/West End User Computing)
Tel. +31 (0)10-2661223
Fax. +31 (0)10-2661199
E-mail: Pascal.Rottiernl.origin-it.com
++++++++++++++++++++++++++++
Philip Morris (Afd. MIS)
Tel. +31 (0)164-295149
Fax. +31 (0)164-294444
E-mail: Rottier.Pascalpmintl.ch
For the archives, go to: http://lists.xpedior.com/forte-users and use
the login: forte and the password: archive. To unsubscribe, send in a new
email the word: 'Unsubscribe' to: forte-users-requestlists.xpedior.com
Use fscript :
findplan xxx
findactenv
showapp
This will give you the partitioning scheme, placement of all
service ojects and a list of all included projects per partition.
David Bell
Technical Manager - North Europe
iPlanet Professional Services
Sun Microsystems | Netscape Alliance
St James's House Phone : +44 1344 482100
Oldbury Fax: +44 1344 420905
Bracknell Mobile: +44 7718 808062
RG12 8SA Email : david.j.belluk.sun.com
http://www.iplanet.com