Parallel parameter in EXPDP
Hi all,
I am confused about how to use the PARALLEL parameter in expdp. Does the PARALLEL parameter depend on the number of CPUs the DB server has?
If yes, how do I decide what value to specify for PARALLEL if, say, the DB server has 8 cores?
BR
Sphinx
Hi John,
I have gone through the white paper. Regarding the CPU count, I found:
"Set the degree of parallelism to two times the number of CPUs, then fine-tune from there." Can you explain how they arrived at this proportion, and also how to fine-tune from there?
Regards,
Sphinx
Edited by: $phinx19 on Oct 18, 2012 6:45 AM
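As a sketch of that rule of thumb (this is only a hypothetical starting point, not official guidance; the directory and file names below are made up), an 8-core server would start at PARALLEL=16 and be tuned down if I/O becomes the bottleneck:

```text
# Hypothetical expdp parameter file for an 8-core server
DIRECTORY=dpump_dir
DUMPFILE=full_%U.dmp        # %U wildcard so each worker can write its own file
LOGFILE=full_exp.log
FULL=Y
PARALLEL=16                 # 2 x 8 cores, a starting value to tune from
```

Without the %U substitution variable, workers can be serialized on a single dump file even when a higher PARALLEL value is requested.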
Similar Messages
-
DBMS_DATAPUMP parallel parameter did not work
Hi, I am using DBMS_DATAPUMP with a network link to load a large table directly from one database into another. I used the parallel parameter to improve performance, but parallelism does not seem to work: there is still only one worker handling the import.
The status is
Job: LOADING
Owner: TRI
Operation: IMPORT
Creator Privs: TRUE
GUID: 25DB4B2BE420406B82A7AE159CF1E626
Start Time: Wednesday, 22 April, 2009 15:37:04
Mode: TABLE
Instance: orcl
Max Parallelism: 4
EXPORT Job Parameters:
IMPORT Job Parameters:
Parameter Name Parameter Value:
INCLUDE_METADATA 0
TABLE_EXISTS_ACTION TRUNCATE
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 4
Job Error Count: 0
Worker 1 Status:
Process Name: DW01
State: EXECUTING
Object Schema: TRI
Object Name: ORDER_LINES
Object Type: TABLE_EXPORT/TABLE/TABLE_DATA
Completed Objects: 1
Total Objects: 1
Worker Parallelism: 1
Worker 2 Status:
Process Name: DW02
State: WORK WAITING
My source database is 10.2.0.4 with compatible=10.2.0;
the target database is 11.1.0.6.
My table is around 3 GB.
The API I am using is
my_handle := dbms_datapump.open(operation => 'IMPORT',job_mode => 'TABLE',
remote_link => my_db_link, job_name => my_job_name ,version=>'LATEST' ) ;
dbms_datapump.set_parameter (my_handle, 'TABLE_EXISTS_ACTION','TRUNCATE');
dbms_datapump.set_parameter (my_handle, 'INCLUDE_METADATA',0);
dbms_datapump.metadata_filter (handle => my_handle, name => 'SCHEMA_EXPR',
value=>'IN (''ORDER'')');
dbms_datapump.metadata_filter (handle => my_handle, name => 'NAME_LIST',
value => '''ORDER_LINES''');
dbms_datapump.metadata_remap(my_handle,'REMAP_SCHEMA',old_value=>'ORDER',value=>'TRI');
dbms_datapump.set_parallel(my_handle,16);
dbms_datapump.start_job(my_handle);
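One way to watch whether additional workers ever become active is to poll the job through the same API. This is a rough sketch only: it assumes the my_handle variable from the open() call above is still attached, and it prints via DBMS_OUTPUT.

```sql
DECLARE
  l_job_state VARCHAR2(30);
  l_status    ku$_Status;  -- SYS object type populated by GET_STATUS
BEGIN
  -- Poll the running job for its current status; a timeout of 0 returns
  -- immediately instead of waiting for new work-in-progress messages.
  dbms_datapump.get_status(
    handle    => my_handle,   -- assumed: handle from dbms_datapump.open above
    mask      => dbms_datapump.ku$_status_job_status,
    timeout   => 0,
    job_state => l_job_state,
    status    => l_status);
  dbms_output.put_line('Job state: ' || l_job_state);
END;
/
```

Polling in a loop while the job runs shows whether the extra workers stay in WORK WAITING, as in the status listing above.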
Edited by: tonym on Apr 23, 2009 10:49 AM
I then tested using the API and a network link to export a large table from the remote database with the parallel parameter.
my_handle := dbms_datapump.open(operation => 'EXPORT',job_mode => 'TABLE',remote_link => my_db_link, job_name => my_job_name ,version=>'LATEST' ) ;
DBMS_DATAPUMP.add_file( handle => my_handle, filename => 'test1.dmp', directory => 'DATA_PUMP_DIR');
DBMS_DATAPUMP.add_file( handle => my_handle, filename => 'test2.dmp', directory => 'DATA_PUMP_DIR');
DBMS_DATAPUMP.add_file( handle => my_handle, filename => 'test3.dmp', directory => 'DATA_PUMP_DIR');
DBMS_DATAPUMP.add_file( handle => my_handle, filename => 'test4.dmp', directory => 'DATA_PUMP_DIR');
dbms_datapump.metadata_filter (handle => my_handle, name => 'SCHEMA_EXPR',value=>'IN (''INVENTORY'')');
dbms_datapump.metadata_filter (handle => my_handle, name => 'NAME_LIST',value => '''INV_TRANSACTIONS''');
dbms_datapump.set_parallel(my_handle,4);
dbms_datapump.start_job(my_handle);
It looks like it did not use parallel either. This table is around 3 GB too.
status:
Job: LOADING_2
Operation: EXPORT
Mode: TABLE
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 4
Job Error Count: 0
Dump File: C:\ORACLE\ADMIN\OW\DPDUMP\TEST1.DMP
bytes written: 69,632
Dump File: C:\ORACLE\ADMIN\OW\DPDUMP\TEST2.DMP
bytes written: 4,096
Dump File: C:\ORACLE\ADMIN\OW\DPDUMP\TEST3.DMP
bytes written: 4,096
Dump File: C:\ORACLE\ADMIN\OW\DPDUMP\TEST4.DMP
bytes written: 4,096
Worker 1 Status:
Process Name: DW04
State: WORK WAITING
Worker 2 Status:
Process Name: DW05
State: EXECUTING
Object Schema: INVENTORY
Object Name: INV_TRANSACTIONS
Object Type: TABLE_EXPORT/TABLE/TABLE_DATA
Completed Objects: 1
Total Objects: 1
Completed Rows: 4,652,330
Worker Parallelism: 1
Edited by: tonym on Apr 23, 2009 10:53 AM -
Should I specify the Parallel parameter for a non-RAC database?
The Oracle documentation states the following:
"The Oracle Database 10g Release 2 database controls and balances all parallel operations, based upon available resources, request priorities and actual system load." It shows that Oracle can optimize the parallel level automatically.
Should I specify the Parallel parameter for a non-RAC database? Most of the transactions are small OLTP.
What parallel parameter are you talking about?
Generally, you may benefit from parallelization in much the same way on RAC as on a single-instance system. And in both cases it is not sufficient to change the value of any single initialization parameter to achieve parallelization of queries, DDL or DML.
Kind regards
Uwe
http://uhesse.wordpress.com -
DataPump Import over Network and Parallel parameter
I see and understand the advantages of using the Parallel parameter with Data Pump dump files. What I am trying to figure out is whether the parallel parameter gives any advantage when importing data over the network. I have noticed there are sweet spots between parallelism and the number of dump files, depending on the data: sometimes ten workers and ten dump files are no faster, if not slower, than three. I have not been able to find a clear answer on this. Any advice or explanation would be appreciated.
Thanks
I am trying to figure out whether there is any advantage to the parallel parameter when importing data over the network?
Reading data over the network is slower than the other options available to Data Pump.
The performance of parallel Data Pump export and import is governed by I/O contention. You have to determine the best value for your system by testing. -
I am using expdp or exp, whichever is needed. Now I have to do it fast, so I am using the PARALLEL parameter. On what basis do you decide the PARALLEL value so that the operation will be fast?
As usual, the documentation is your best friend:
http://download.oracle.com/docs/cd/E11882_01/server.112/e16536/dp_export.htm#i1006404 -
expdp user/pass DIRECTORY=Export DUMPFILE=SID.DMP LOGFILE=SID.LOG FULL=Y PARALLEL=8
Parallel=8 and Parallel=4 take the same amount of time. I was wondering if they should, or if there is a parameter that should be set differently in the database.
windows 2008 R2
dual quad core processors (8 cores)
oracle enterprise 11.2.0.3
parallel_instance_group=
parallel_min_percent=0
parallel_min_servers=0
recovery_parallelism=0
parallel_server_instances=1
parallel_servers_target=128
parallel_execution_message_size=16384
parallel_threads_per_cpu=2
parallel_max_servers=320
parallel_min_time_threshold=AUTO
parallel_degree_limit=CPU
parallel_automatic_tuning=FALSE
parallel_force_local=FALSE
parallel_io_cap_enabled=FALSE
parallel_server=FALSE
fast_start_parallel_rollback=LOW
parallel_degree_policy=MANUAL
parallel_adaptive_multi_user=TRUE
So why do you think parallel was not done? I know you said that there was no performance gain, but it could be the size of the job.
There are many factors that could change the parallelism:
1. The Oracle version you are running. The algorithm has changed from version to version to try and squeeze as much parallelism out of the job as possible.
2. The data types contained in the tables.
3. The size of the tables.
4. The speed of the I/O that you have.
When you ran with parallel=4, did you get 4 dump files? How about with parallel=8?
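Besides counting dump files, the degree a running job actually got can be checked in the dictionary. A sketch using the standard Data Pump views (run as a privileged user while the job is executing):

```sql
-- DEGREE shows the parallelism the job is running with;
-- DBA_DATAPUMP_SESSIONS has one row per session attached to the job.
SELECT j.owner_name, j.job_name, j.state, j.degree,
       COUNT(s.saddr) AS attached_sessions
FROM   dba_datapump_jobs j
LEFT   JOIN dba_datapump_sessions s
       ON  s.owner_name = j.owner_name
       AND s.job_name   = j.job_name
GROUP  BY j.owner_name, j.job_name, j.state, j.degree;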
Thanks
Dean -
Expdp with parallel writing to only one file at a time on the OS
Hi friends,
I am facing a strange issue. Despite the parallel=x parameter, expdp is writing to only one file at the OS level at a time, although it is writing into multiple files sequentially (not concurrently).
On other servers I see that expdp is able to start writing to multiple files concurrently. Following is a sample log
of my expdp.
++++++++++++++++++++
Export: Release 10.2.0.3.0 - 64bit Production on Friday, 15 April, 2011 3:06:50
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Starting "CNVAPPDBO4"."EXPDP_BL1_DOCUMENT": CNVAPPDBO4/********@EXTTKS1 tables=BL1_DOCUMENT DUMPFILE=DUMP1_S:Expdp_BL1_DOCUMENT_%U.dmp LOGFILE=LOG1_S:Expdp_BL1_DOCUMENT.log CONTENT=DATA_ONLY FILESIZE=5G EXCLUDE=INDEX,STATISTICS,CONSTRAINT,GRANT PARALLEL=6 JOB_NAME=Expdp_BL1_DOCUMENT
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 23.93 GB
. . exported "CNVAPPDBO4"."BL1_DOCUMENT" 17.87 GB 150951906 rows
Master table "CNVAPPDBO4"."EXPDP_BL1_DOCUMENT" successfully loaded/unloaded
Dump file set for CNVAPPDBO4.EXPDP_BL1_DOCUMENT is:
/tksmig/load2/oracle/postpaidamd/DUMP1_S/Expdp_BL1_DOCUMENT_01.dmp
/tksmig/load2/oracle/postpaidamd/DUMP1_S/Expdp_BL1_DOCUMENT_02.dmp
/tksmig/load2/oracle/postpaidamd/DUMP1_S/Expdp_BL1_DOCUMENT_03.dmp
/tksmig/load2/oracle/postpaidamd/DUMP1_S/Expdp_BL1_DOCUMENT_04.dmp
Job "CNVAPPDBO4"."EXPDP_BL1_DOCUMENT" successfully completed at 03:23:14
++++++++++++++++++++
uname -a
HP-UX ocsmigbrndapp3 B.11.31 U ia64 3522246036 unlimited-user license
Is it hitting any known bug? Please suggest.
regds,
kunwar
PARALLEL is always used with DUMPFILE=filename_%U.dmp. Did you put the same parameter on the target server?
The PARALLEL clause depends on server resources. If the system resources allow, the number of parallel processes should be set to the number of dump files being created. -
Hi,
How do we know how many parallel processes we can give to the PARALLEL parameter in expdp?
And the processes used by the parallel parameter, what kind of processes are they?
Please let me know.
thanks
899329 wrote:
Hi,
How do we know how many parallel processes we can give to the PARALLEL parameter in expdp?
I don't think there is any limit to it, provided it stays under the value of your PROCESSES parameter.
As for what the processes used by the parallel parameter are: they are going to be the worker processes.
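That ceiling can be checked directly; a small sketch using only the standard parameters (nothing site-specific assumed here):

```sql
-- Data Pump worker processes count against the PROCESSES limit,
-- and CPU_COUNT is the usual basis for sizing the PARALLEL value.
SELECT name, value
FROM   v$parameter
WHERE  name IN ('processes', 'cpu_count');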
HTH
Aman.... -
Data Pump - expdp and slow performance on specific tables
Hi there
I have a data pump export of a schema. Most of the 700 tables are exported very quickly (direct path), but a couple of them seem to be extremely slow.
I have checked:
- no lobs
- no long/raw
- no VPD
- no partitions
- no bitmapped index
- just date, number, varchar2's
I'm running with trace 400300.
But I'm having trouble reading the output from it. It seems that some of the slow-performing tables are running with method 4. Can anyone explain the method codes in the trace:
1 > direct path (i think)
2 > external table (i think)
4 > ?
others?
I have gathered some stats using v$filestat/v$session_wait (history), and it seems that we are always waiting for db file sequential read, doing lots and lots of SINGLEBLKRDS. No undo is read.
I have a table of 2.5 GB -> 3 minutes,
and then this (in my eyes) similar table of 2.4 GB -> 1.5 hrs.
There are 367,000 blocks (8 K) and avg row length = 71.
I'm on Oracle 11.2 on a Linux box with plenty of RAM and CPU power.
Trace file /opt/oracle112/diag/rdbms/prod/prod/trace/prod_dw00_24268.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /opt/oracle112/product/11.2.0.2/dbhome_1
System name: Linux
Node name: tiaprod.thi.somethingamt.dk
Release: 2.6.18-194.el5
Version: #1 SMP Mon Mar 29 22:10:29 EDT 2010
Machine: x86_64
VM name: Xen Version: 3.4 (HVM)
Instance name: prod
Redo thread mounted by this instance: 1
Oracle process number: 222
Unix process pid: 24268, image: [email protected] (DW00)
*** 2011-09-20 09:39:39.671
*** SESSION ID:(401.8395) 2011-09-20 09:39:39.671
*** CLIENT ID:() 2011-09-20 09:39:39.671
*** SERVICE NAME:(SYS$BACKGROUND) 2011-09-20 09:39:39.671
*** MODULE NAME:() 2011-09-20 09:39:39.671
*** ACTION NAME:() 2011-09-20 09:39:39.671
KUPP:09:39:39.670: Current trace/debug flags: 00400300 = 4195072
*** MODULE NAME:(Data Pump Worker) 2011-09-20 09:39:39.672
*** ACTION NAME:(SYS_EXPORT_SCHEMA_09) 2011-09-20 09:39:39.672
KUPW:09:39:39.672: 0: ALTER SESSION ENABLE PARALLEL DML called.
KUPW:09:39:39.672: 0: ALTER SESSION ENABLE PARALLEL DML returned.
KUPC:09:39:39.693: Setting remote flag for this process to FALSE
prvtaqis - Enter
prvtaqis subtab_name upd
prvtaqis sys table upd
KUPW:09:39:39.819: 0: KUPP$PROC.WHATS_MY_ID called.
KUPW:09:39:39.819: 1: KUPP$PROC.WHATS_MY_ID returned.
KUPW:09:39:39.820: 1: worker max message number: 1000
KUPW:09:39:39.822: 1: Full cluster access allowed
KUPW:09:39:39.823: 1: Original job start time: 11-SEP-20 09:39:38 AM
KUPW:09:39:39.862: 1: KUPP$PROC.WHATS_MY_NAME called.
KUPW:09:39:39.862: 1: KUPP$PROC.WHATS_MY_NAME returned. Process name: DW00
KUPW:09:39:39.862: 1: KUPV$FT_INT.GET_INSTANCE_ID called.
KUPW:09:39:39.866: 1: KUPV$FT_INT.GET_INSTANCE_ID returned. Instance name: prod
KUPW:09:39:39.870: 1: ALTER SESSION ENABLE RESUMABLE called.
KUPW:09:39:39.870: 1: ALTER SESSION ENABLE RESUMABLE returned.
KUPW:09:39:39.871: 1: KUPF$FILE.INIT called.
KUPW:09:39:39.996: 1: KUPF$FILE.INIT returned.
KUPW:09:39:39.998: 1: KUPF$FILE.GET_MAX_CSWIDTH called.
KUPW:09:39:39.998: 1: KUPF$FILE.GET_MAX_CSWIDTH returned.
KUPW:09:39:39.998: 1: Max character width: 1
KUPW:09:39:39.998: 1: Max clob fetch: 32757
KUPW:09:39:39.998: 1: Max varchar2a size: 32757
KUPW:09:39:39.998: 1: Max varchar2 size: 7990
KUPW:09:39:39.998: 1: In procedure GET_PARAMETERS
KUPW:09:39:40.000: 1: In procedure GET_METADATA_FILTERS
KUPW:09:39:40.001: 1: In procedure GET_METADATA_TRANSFORMS
KUPW:09:39:40.002: 1: In procedure GET_DATA_FILTERS
KUPW:09:39:40.004: 1: In procedure GET_DATA_REMAPS
KUPW:09:39:40.005: 1: In procedure PRINT_MT_PARAMS
KUPW:09:39:40.005: 1: Master table : "SYSTEM"."SYS_EXPORT_SCHEMA_09"
KUPW:09:39:40.005: 1: Metadata job mode : SCHEMA_EXPORT
KUPW:09:39:40.005: 1: Debug enable : TRUE
KUPW:09:39:40.005: 1: Profile enable : FALSE
KUPW:09:39:40.005: 1: Transportable enable : FALSE
KUPW:09:39:40.005: 1: Metrics enable : FALSE
KUPW:09:39:40.005: 1: db version : 11.2.0.2.0
KUPW:09:39:40.005: 1: job version : 11.2.0.0.0
KUPW:09:39:40.005: 1: service name :
KUPW:09:39:40.005: 1: Current Edition : ORA$BASE
KUPW:09:39:40.005: 1: Job Edition :
KUPW:09:39:40.005: 1: Abort Step : 0
KUPW:09:39:40.005: 1: Access Method : AUTOMATIC
KUPW:09:39:40.005: 1: Data Options : 0
KUPW:09:39:40.006: 1: Dumper directory :
KUPW:09:39:40.006: 1: Master only : FALSE
KUPW:09:39:40.006: 1: Data Only : FALSE
KUPW:09:39:40.006: 1: Metadata Only : FALSE
KUPW:09:39:40.006: 1: Estimate : BLOCKS
KUPW:09:39:40.006: 1: Data error logging table :
KUPW:09:39:40.006: 1: Remote Link :
KUPW:09:39:40.006: 1: Dumpfile present : TRUE
KUPW:09:39:40.006: 1: Table Exists Action :
KUPW:09:39:40.006: 1: Partition Options : NONE
KUPW:09:39:40.006: 1: Tablespace Datafile Count: 0
KUPW:09:39:40.006: 1: Metadata Filter Index : 1 Count : 10
KUPW:09:39:40.006: 1: 1 Name - INCLUDE_USER
KUPW:09:39:40.006: 1: Value - TRUE
KUPW:09:39:40.006: 1: Object Name - SCHEMA_EXPORT
KUPW:09:39:40.006: 1: 2 Name - SCHEMA_EXPR
KUPW:09:39:40.006: 1: Value - IN ('TIA')
KUPW:09:39:40.006: 1: 3 Name - NAME_EXPR
KUPW:09:39:40.006: 1: Value - ='ACC_PAYMENT_SPECIFICATION'
KUPW:09:39:40.006: 1: Object - TABLE
KUPW:09:39:40.006: 1: 4 Name - INCLUDE_PATH_EXPR
KUPW:09:39:40.006: 1: Value - IN ('TABLE')
KUPW:09:39:40.006: 1: 5 Name - ORDERED
KUPW:09:39:40.006: 1: Value - FALSE
KUPW:09:39:40.006: 1: Object - TABLE_DATA
KUPW:09:39:40.006: 1: 6 Name - NO_XML
KUPW:09:39:40.006: 1: Value - TRUE
KUPW:09:39:40.006: 1: Object - XMLSCHEMA/EXP_XMLSCHEMA
KUPW:09:39:40.006: 1: 7 Name - XML_OUTOFLINE
KUPW:09:39:40.006: 1: Value - FALSE
KUPW:09:39:40.006: 1: Object - TABLE/TABLE_DATA
KUPW:09:39:40.006: 1: 8 Name - XDB_GENERATED
KUPW:09:39:40.006: 1: Value - FALSE
KUPW:09:39:40.006: 1: Object - TABLE/TRIGGER
KUPW:09:39:40.007: 1: 9 Name - XDB_GENERATED
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: Object - TABLE/RLS_POLICY
KUPW:09:39:40.007: 1: 10 Name - PRIVILEGED_USER
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: MD remap schema Index : 4 Count : 0
KUPW:09:39:40.007: 1: MD remap other Index : 5 Count : 0
KUPW:09:39:40.007: 1: MD Transform ddl Index : 2 Count : 11
KUPW:09:39:40.007: 1: 1 Name - DBA
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - JOB
KUPW:09:39:40.007: 1: 2 Name - EXPORT
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: 3 Name - PRETTY
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: 4 Name - SQLTERMINATOR
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: 5 Name - CONSTRAINTS
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: Object - TABLE
KUPW:09:39:40.007: 1: 6 Name - REF_CONSTRAINTS
KUPW:09:39:40.007: 1: Value - FALSE
KUPW:09:39:40.007: 1: Object - TABLE
KUPW:09:39:40.007: 1: 7 Name - OID
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - TABLE
KUPW:09:39:40.007: 1: 8 Name - RESET_PARALLEL
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - INDEX
KUPW:09:39:40.007: 1: 9 Name - OID
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - TYPE
KUPW:09:39:40.007: 1: 10 Name - OID
KUPW:09:39:40.007: 1: Value - TRUE
KUPW:09:39:40.007: 1: Object - INC_TYPE
KUPW:09:39:40.007: 1: 11 Name - REVOKE_FROM
KUPW:09:39:40.008: 1: Value - SYSTEM
KUPW:09:39:40.008: 1: Object - ROLE
KUPW:09:39:40.008: 1: Data Filter Index : 6 Count : 0
KUPW:09:39:40.008: 1: Data Remap Index : 7 Count : 0
KUPW:09:39:40.008: 1: MD remap name Index : 8 Count : 0
KUPW:09:39:40.008: 1: In procedure DISPATCH_WORK_ITEMS
KUPW:09:39:40.009: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:40.009: 1: KUPC$QUEUE.TRANSCEIVE called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:40.036: 1: KUPC$QUEUE.TRANSCEIVE returned. Received 2011
KUPW:09:39:40.036: 1: DBMS_LOB.CREATETEMPORARY called.
KUPW:09:39:40.037: 1: DBMS_LOB.CREATETEMPORARY returned.
KUPW:09:39:40.038: 1: Flags: 18
KUPW:09:39:40.038: 1: Start sequence number:
KUPW:09:39:40.038: 1: End sequence number:
KUPW:09:39:40.038: 1: Metadata Parallel: 1
KUPW:09:39:40.038: 1: Primary worker id: 1
KUPW:09:39:40.041: 1: In procedure GET_TABLE_DATA_OBJECTS
KUPW:09:39:40.041: 1: In procedure CREATE_MSG
KUPW:09:39:40.041: 1: KUPV$FT.MESSAGE_TEXT called.
KUPW:09:39:40.041: 1: KUPV$FT.MESSAGE_TEXT returned.
KUPW:09:39:40.041: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:40.041: 1: KUPC$QUEUE_INT.SEND called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:40.044: 1: KUPC$QUEUE_INT.SEND returned.
KUPW:09:39:40.044: 1: Estimate in progress using BLOCKS method...
KUPW:09:39:40.044: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
KUPW:09:39:40.044: 1: Old Seqno: 0 New Path: SCHEMA_EXPORT/TABLE/TABLE_DATA PO Num: -5 New Seqno: 62
KUPW:09:39:40.046: 1: Created type completion for duplicate 62
KUPW:09:39:40.046: 1: In procedure CREATE_MSG
KUPW:09:39:40.046: 1: KUPV$FT.MESSAGE_TEXT called.
KUPW:09:39:40.046: 1: KUPV$FT.MESSAGE_TEXT returned.
KUPW:09:39:40.046: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:40.046: 1: KUPC$QUEUE_INT.SEND called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:40.047: 1: KUPC$QUEUE_INT.SEND returned.
KUPW:09:39:40.047: 1: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
KUPW:09:39:40.048: 1: In procedure CONFIGURE_METADATA_UNLOAD
KUPW:09:39:40.048: 1: Phase: ESTIMATE_PHASE Filter Name: Filter Value:
KUPW:09:39:40.048: 1: DBMS_METADATA.OPEN11.2.0.0.0 called.
KUPW:09:39:40.182: 1: DBMS_METADATA.OPEN11.2.0.0.0 returned. Source handle: 100001
KUPW:09:39:40.182: 1: DBMS_METADATA.SET_FILTER called. metadata_phase: ESTIMATE_PHASE
KUPW:09:39:40.182: 1: DBMS_METADATA.SET_FILTER returned. In function GET_NOEXP_TABLE
KUPW:09:39:40.194: 1: DBMS_METADATA.SET_PARSE_ITEM called.
*** 2011-09-20 09:39:40.325
KUPW:09:39:40.325: 1: DBMS_METADATA.SET_PARSE_ITEM returned.
KUPW:09:39:40.325: 1: DBMS_METADATA.SET_COUNT called.
KUPW:09:39:40.328: 1: DBMS_METADATA.SET_COUNT returned.
KUPW:09:39:40.328: 1: DBMS_METADATA.FETCH_XML_CLOB called.
*** 2011-09-20 09:39:42.603
KUPW:09:39:42.603: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
KUPW:09:39:42.603: 1: In procedure CREATE_TABLE_DATA_OBJECT_ROWS
KUPW:09:39:42.603: 1: In function GATHER_PARSE_ITEMS
KUPW:09:39:42.603: 1: In function CHECK_FOR_REMAP_NETWORK
KUPW:09:39:42.603: 1: Nothing to remap
KUPW:09:39:42.603: 1: In procedure BUILD_OBJECT_STRINGS
KUPW:09:39:42.604: 1: In procedure LOCATE_DATA_FILTERS
KUPW:09:39:42.604: 1: In function NEXT_PO_NUMBER
KUPW:09:39:42.620: 1: In procedure DETERMINE_METHOD_PARALLEL
KUPW:09:39:42.620: 1: flags mask: 0
KUPW:09:39:42.620: 1: dapi_possible_meth: 1
KUPW:09:39:42.620: 1: data_size: 3019898880
KUPW:09:39:42.620: 1: et_parallel: TRUE
KUPW:09:39:42.620: 1: object: TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION" <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
KUPW:09:39:42.648: 1: l_dapi_bit_mask: 7
KUPW:09:39:42.648: 1: l_client_bit_mask: 7
KUPW:09:39:42.648: 1: TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION" either, parallel: 12 <<<<< Here is says either (I thought that was method ?) <<<<<<<<<<<<<<<<
KUPW:09:39:42.648: 1: FORALL BULK INSERT called.
KUPW:09:39:42.658: 1: FORALL BULK INSERT returned.
KUPW:09:39:42.660: 1: DBMS_LOB.TRIM called. v_md_xml_clob
KUPW:09:39:42.660: 1: DBMS_LOB.TRIM returned.
KUPW:09:39:42.660: 1: DBMS_METADATA.FETCH_XML_CLOB called.
KUPW:09:39:42.678: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
KUPW:09:39:42.678: 1: DBMS_LOB.CREATETEMPORARY called.
KUPW:09:39:42.678: 1: DBMS_LOB.CREATETEMPORARY returned.
KUPW:09:39:42.678: 1: In procedure UPDATE_TD_ROW_EXP with seqno: 62
KUPW:09:39:42.680: 1: 1 rows fetched
KUPW:09:39:42.680: 1: In function NEXT_PO_NUMBER
KUPW:09:39:42.680: 1: Next table data array entry: 1 Parallel: 12 Size: 3019898880 Method: 4Creation_level: 0 <<<<<<<<<<<<<<<< HERE IT SAYS METHOD = 4 and PARALLEL=12 (I'm not using the parallel parameter ???) <<<<<<<<<<<<<<<<<<
KUPW:09:39:42.681: 1: In procedure UPDATE_TD_BASE_PO_INFO
KUPW:09:39:42.683: 1: Updated 1 td objects with bpo between 1 and 1
KUPW:09:39:42.684: 1: Send table_data_varray called. Count: 1
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:42.695: 1: Send table_data_varray returned.
KUPW:09:39:42.695: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:42.695: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
KUPW:09:39:42.695: 1: Old Seqno: 62 New Path: PO Num: -5 New Seqno: 0
KUPW:09:39:42.695: 1: Object count: 1
KUPW:09:39:42.697: 1: 1 completed for 62
KUPW:09:39:42.697: 1: DBMS_METADATA.CLOSE called. Handle: 100001
KUPW:09:39:42.697: 1: DBMS_METADATA.CLOSE returned.
KUPW:09:39:42.697: 1: In procedure CREATE_MSG
KUPW:09:39:42.697: 1: KUPV$FT.MESSAGE_TEXT called.
KUPW:09:39:42.698: 1: KUPV$FT.MESSAGE_TEXT returned.
KUPW:09:39:42.698: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:42.698: 1: KUPC$QUEUE_INT.SEND called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:39:42.699: 1: KUPC$QUEUE_INT.SEND returned.
KUPW:09:39:42.699: 1: Total estimation using BLOCKS method: 2.812 GB
KUPW:09:39:42.699: 1: In procedure CONFIGURE_METADATA_UNLOAD
KUPW:09:39:42.699: 1: Phase: WORK_PHASE Filter Name: BEGIN_WITH Filter Value:
KUPW:09:39:42.699: 1: DBMS_METADATA.OPEN11.2.0.0.0 called.
KUPW:09:39:42.837: 1: DBMS_METADATA.OPEN11.2.0.0.0 returned. Source handle: 200001
KUPW:09:39:42.837: 1: DBMS_METADATA.SET_FILTER called. metadata_phase: WORK_PHASE
KUPW:09:39:42.837: 1: DBMS_METADATA.SET_FILTER returned. In function GET_NOEXP_TABLE
KUPW:09:39:42.847: 1: DBMS_METADATA.SET_PARSE_ITEM called.
KUPW:09:39:42.964: 1: DBMS_METADATA.SET_PARSE_ITEM returned.
KUPW:09:39:42.964: 1: DBMS_METADATA.SET_COUNT called.
KUPW:09:39:42.967: 1: DBMS_METADATA.SET_COUNT returned.
KUPW:09:39:42.967: 1: KUPF$FILE.OPEN_CONTEXT called.
KUPW:09:39:42.967: 1: KUPF$FILE.OPEN_CONTEXT returned.
KUPW:09:39:42.968: 1: DBMS_METADATA.FETCH_XML_CLOB called. Handle: 200001
*** 2011-09-20 09:40:01.798
KUPW:09:40:01.798: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
KUPW:09:40:01.798: 1: Object seqno fetched:
KUPW:09:40:01.799: 1: Object path fetched:
KUPW:09:40:01.799: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:40:01.799: 1: In procedure COMPLETE_EXP_OBJECT
KUPW:09:40:01.799: 1: KUPF$FILE.FLUSH_LOB called.
KUPW:09:40:01.815: 1: KUPF$FILE.FLUSH_LOB returned.
KUPW:09:40:01.815: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
KUPW:09:40:01.815: 1: Old Seqno: 226 New Path: PO Num: -5 New Seqno: 0
KUPW:09:40:01.815: 1: Object count: 1
KUPW:09:40:01.815: 1: 1 completed for 226
KUPW:09:40:01.815: 1: DBMS_METADATA.CLOSE called. Handle: 200001
KUPW:09:40:01.816: 1: DBMS_METADATA.CLOSE returned.
KUPW:09:40:01.816: 1: KUPF$FILE.CLOSE_CONTEXT called.
KUPW:09:40:01.820: 1: KUPF$FILE.CLOSE_CONTEXT returned.
KUPW:09:40:01.821: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:40:01.821: 1: KUPC$QUEUE.TRANSCEIVE called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia 7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia 7
KUPW:09:40:01.827: 1: KUPC$QUEUE.TRANSCEIVE returned. Received 2012
KUPW:09:40:01.827: 1: DBMS_LOB.CREATETEMPORARY called.
KUPW:09:40:01.828: 1: DBMS_LOB.CREATETEMPORARY returned.
KUPW:09:40:01.828: 1: Process order range: 1..1
KUPW:09:40:01.828: 1: Method: 1
KUPW:09:40:01.828: 1: Parallel: 1
KUPW:09:40:01.828: 1: Creation level: 0
KUPW:09:40:01.830: 1: BULK COLLECT called.
KUPW:09:40:01.830: 1: BULK COLLECT returned.
KUPW:09:40:01.830: 1: In procedure BUILD_OBJECT_STRINGS
KUPW:09:40:01.836: 1: In procedure MOVE_DATA UNLOADing process_order 1 TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION" <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
KUPW:09:40:01.839: 1: KUPD$DATA.OPEN called.
KUPW:09:40:01.840: 1: KUPD$DATA.OPEN returned.
KUPW:09:40:01.840: 1: KUPD$DATA.SET_PARAMETER - common called.
KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - common returned.
KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - flags called.
KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - flags returned.
KUPW:09:40:01.843: 1: KUPD$DATA.START_JOB called.
KUPW:09:40:01.918: 1: KUPD$DATA.START_JOB returned. In procedure GET_JOB_VERSION
This is how I called expdp:
expdp system/xxxxxxxxx schemas=tia directory=expdp INCLUDE=TABLE:\" =\'ACC_PAYMENT_SPECIFICATION\'\" REUSE_DUMPFILES=Y LOGFILE=expdp:$LOGFILE TRACE=400300
Hi there ...
I have read the note - that's where I found the link to note 286496.1 on how to set up a trace.
But I still need an explanation of the methods (1, 2, 4, etc.).
regards
Mette -
Hi
I wanted to refresh my dev environment quickly.
As we also wanted to migrate from exp/imp to Data Pump, I tried these. The only thing that worked for me was impdp in network mode using remap schema.
1) First I wanted to implement standard expdp and impdp procedures (full database export/import).
My expdp failed with the following error on a full export.
Please see below:
C:\Documents and Settings\oracle2>expdp system/<p.w> full=Y directory=DATA_PUMP_DIR dumpfile=OAPFULL.dmp logfile=OAPFULL.log
Export: Release 11.1.0.7.0 - Production on Thursday, 26 April, 2012 16:12:49
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_FULL_02": system/******** full=Y directory=DATA_PUMP_DIR dumpfile=OAPFULL.dmp logfile=OAPFULL.log
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 4.351 GB
Processing object type DATABASE_EXPORT/TABLESPACE
Processing object type DATABASE_EXPORT/PASSWORD_VERIFY_FUNCTION
Processing object type DATABASE_EXPORT/PROFILE
Processing object type DATABASE_EXPORT/SYS_USER/USER
Processing object type DATABASE_EXPORT/SCHEMA/USER
Processing object type DATABASE_EXPORT/ROLE
Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
Processing object type DATABASE_EXPORT/SCHEMA/TABLESPACE_QUOTA
Processing object type DATABASE_EXPORT/RESOURCE_COST
Processing object type DATABASE_EXPORT/SCHEMA/DB_LINK
Processing object type DATABASE_EXPORT/TRUSTED_DB_LINK
Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/SEQUENCE
Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/DIRECTORY/DIRECTORY
Processing object type DATABASE_EXPORT/DIRECTORY/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/CONTEXT
Processing object type DATABASE_EXPORT/SCHEMA/PUBLIC_SYNONYM/SYNONYM
Processing object type DATABASE_EXPORT/SCHEMA/SYNONYM
ORA-39014: One or more workers have prematurely exited.
ORA-39029: worker 1 with process name "DW01" prematurely terminated
ORA-31672: Worker process DW01 died unexpectedly.
Job "SYSTEM"."SYS_EXPORT_FULL_02" stopped due to fatal error at 16:14:56
2) As the above failed, I tried impdp with a network_link import and full=y, and it is a different issue!
impdp system/<p.w> NETWORK_LINK=OAPLIVE.WORLD full=y logfile=OAPDRfull2.log table_exists_action=REPLACE
Import: Release 11.1.0.7.0 - Production on Thursday, 26 April, 2012 13:36:01
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_IMPORT_FULL_01": system/******** NETWORK_LINK=OAPLIVE.WORLD full=y logfile=OAPDRfull2.log table_exists_action=REPLACE
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 4.670 GB
Processing object type DATABASE_EXPORT/TABLESPACE
ORA-31684: Object type TABLESPACE:"SYSAUX" already exists
*************lots more of object already exists errors here............(space limit so cant paste)***************
ORA-31684: Object type SYNONYM:"GEOPROD"."EXT_POSTAIM_PRESORT_61" already exists
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.DISPATCH_WORK_ITEMS [SYNONYM:"GEOPROD"."EXT_POSTAIM_PRESORT_61"]
ORA-31600: invalid input value 200001 for parameter HANDLE in function CLOSE
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.DBMS_METADATA", line 569
ORA-06512: at "SYS.DBMS_METADATA", line 4731
ORA-06512: at "SYS.DBMS_METADATA", line 792
ORA-06512: at "SYS.DBMS_METADATA", line 4732
ORA-06512: at "SYS.KUPW$WORKER", line 2718
ORA-03113: end-of-file on communication channel
ORA-02055: distributed update operation failed; rollback required
ORA-02063: preceding lines from OAPLIVE.WORLD
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 7858
----- PL/SQL Call Stack -----
object line object
handle number name
242F2F2C 18256 package body SYS.KUPW$WORKER
242F2F2C 7885 package body SYS.KUPW$WORKER
242F2F2C 8657 package body SYS.KUPW$WORKER
242F2F2C 1545 package body SYS.KUPW$WORKER
241DDF3C 2 anonymous block
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.UNLOAD_METADATA []
ORA-31642: the following SQL statement fails:
SELECT unique ku$.seq# from sys.metanametrans$ ku$ WHERE ku$.htype='DATABASE_EXPORT' AND ku$.model='ORACLE' AND NOT ( ku$.seq#>=(select a.seq# from sys.metanametrans$ a where
a.model='ORACLE' and a.htype='DATABASE_EXPORT' and a.name ='DATABASE_EXPORT/SCHEMA/SYNONYM')) order by ku$.seq#
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.DBMS_METADATA_INT", line 5002
ORA-01427: single-row subquery returns more than one row
ORA-06512: at "SYS.DBMS_
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.KUPW$WORKER", line 7853
----- PL/SQL Call Stack -----
object line object
handle number name
242F2F2C 18256 package body SYS.KUPW$WORKER
242F2F2C 7885 package body SYS.KUPW$WORKER
242F2F2C 2744 package body SYS.KUPW$WORKER
242F2F2C 8523 package body SYS.KUPW$WORKER
242F2F2C 1545 package body SYS.KUPW$WORKER
241DDF3C 2 anonymous block
Job "SYSTEM"."SYS_IMPORT_FULL_01" stopped due to fatal error at 13:39:48
So these are basically the 2 issues I want to know how to fix:
1. The expdp error: cause and fix.
2. The impdp error: cause and fix. Also, how do I avoid the "object already exists" errors?
Also, for example, packages etc. have the same names, but I want the new package from LIVE; does that mean a package, view, etc. with the same name simply doesn't get updated?
Is there any way to overcome this?
I need the target exactly the same as LIVE (with a few exceptions small enough that I can apply them after impdp finishes fine).
Please help!
Thanks & Regards.
Hi,
Thanks for the links. I applied the tips from each of them, but it didn't work.
Also, my database is 11g, so it is not true that this happens only on 10g.
Things tried:
1) I tried different values for the PARALLEL parameter, but got the same error.
2) I applied the following:
alter system set open_cursors=1024 scope=spfile;
alter system set "_optimizer_cost_based_transformation"=off;
commit;
The 3rd link was a bit better.
I tried to find out where exactly the error occurred using:
expdp attach=SYS_EXPORT_FULL_03
But I can't figure out what the object PUBLIC oracle/context/isearch/GetPage is.
Does it need to be excluded from the export? If so, how?
Can someone help me fix the error now?
Processing object type DATABASE_EXPORT/SCHEMA/SYNONYM
ORA-39014: One or more workers have prematurely exited.
ORA-39029: worker 1 with process name "DW01" prematurely terminated
ORA-31672: Worker process DW01 died unexpectedly.
Job "SYSTEM"."SYS_EXPORT_FULL_03" stopped due to fatal error at 11:29:32
C:\Documents and Settings\ora2>expdp attach=SYS_EXPORT_FULL_03
Export: Release 11.1.0.7.0 - Production on Tuesday, 01 May, 2012 11:35:38
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Username: system
Password:
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Job: SYS_EXPORT_FULL_03
Owner: SYSTEM
Operation: EXPORT
Creator Privs: TRUE
GUID: 8499C802F52A414A8BCACE552DDF6F11
Start Time: Tuesday, 01 May, 2012 11:37:56
Mode: FULL
Instance: geooap
Max Parallelism: 1
EXPORT Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND system/******** parfile=h:\datapump\oapfull.par
State: IDLING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: H:\datapump\oapfull.dmp
bytes written: 4,096
Worker 1 Status:
Process Name: DW01
State: UNDEFINED
Object Schema: PUBLIC
Object Name: oracle/context/isearch/GetPage
Object Type: DATABASE_EXPORT/SCHEMA/PUBLIC_SYNONYM/SYNONYM
Completed Objects: 766
Total Objects: 766
Worker Parallelism: 1 -
All,
Server: Sun Solaris 10
Database: Oracle 10g (stand alone) 10.2.0.4
We are seeing one of our batch processes running slowly. For the period when it runs slowly, we have extracted the AWR and ADDM reports; both are below for reference.
ADDM Report Highlights:
FINDING 1: 70% impact (70089 seconds)
PL/SQL execution consumed significant database time.
RECOMMENDATION 1: SQL Tuning, 48% benefit (47865 seconds)
ACTION: Tune the PL/SQL block with SQL_ID "9knuzfs7zmxmv". Refer to the
"Tuning PL/SQL Applications" chapter of Oracle's "PL/SQL User's Guide
and Reference"
RELEVANT OBJECT: SQL statement with SQL_ID 9knuzfs7zmxmv
BEGIN
SYS.KUPW$WORKER.MAIN('SYS_EXPORT_FULL_06', 'SYS');
END;
RATIONALE: SQL statement with SQL_ID "9knuzfs7zmxmv" was executed 10
times and had an average elapsed time of 5088 seconds.
RATIONALE: Average time spent in PL/SQL execution was 4786 seconds.
RECOMMENDATION 2: SQL Tuning, 22% benefit (22483 seconds)
ACTION: Investigate the SQL statement with SQL_ID "59bh50fscntuj" for
possible performance improvements.
RELEVANT OBJECT: SQL statement with SQL_ID 59bh50fscntuj and
PLAN_HASH 2198587470
CREATE TABLE "ET$024BC2CC0001" (
"GUID",
"LOGTYPE",
"PUBDATALONG"
) ORGANIZATION EXTERNAL
( TYPE ORACLE_DATAPUMP DEFAULT DIRECTORY DPUMP_DIR_CRON ACCESS
PARAMETERS (DEBUG =0 DATAPUMP INTERNAL TABLE
"SYSADM"."PSIBLOGIBINFO" JOB ( "SYS","SYS_EXPORT_FULL_06",1)
WORKERID 2 PARALLEL 3 VERSION COMPATIBLE ENCRYPTPASSWORDISNULL )
LOCATION ('bogus.dat') ) PARALLEL 3 REJECT LIMIT UNLIMITED
AS SELECT /*+ PARALLEL(KU$,3) */ "GUID", "LOGTYPE",
TO_LOB("PUBDATALONG")
FROM RELATIONAL("SYSADM"."PSIBLOGIBINFO" ) KU$
RATIONALE: SQL statement with SQL_ID "59bh50fscntuj" was executed 1
times and had an average elapsed time of 22483 seconds.
RATIONALE: At least one execution of the statement ran in parallel.
RATIONALE: Average time spent in PL/SQL execution was 22224 seconds.
AWR Report Findings:
WORKLOAD REPOSITORY report for
DB Name DB Id Instance Inst Num Release RAC Host
HR8PRD 254316722 hr8prd 1 10.2.0.4.0 NO uxhrpr53
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 34023 25-Jan-13 01:01:02 306 4.2
End Snap: 34026 25-Jan-13 04:00:10 306 4.1
Elapsed: 179.13 (mins)
DB Time: 1,666.75 (mins)
Cache Sizes
~~~~~~~~~~~ Begin End
Buffer Cache: 18,480M 18,480M Std Block Size: 8K
Shared Pool Size: 3,840M 3,840M Log Buffer: 10,320K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 384,182.98 139,221.48
Logical reads: 67,752.53 24,552.38
Block changes: 7,210.41 2,612.93
Physical reads: 8,507.25 3,082.89
Physical writes: 72.47 26.26
User calls: 1,365.84 494.96
Parses: 71.19 25.80
Hard parses: 2.36 0.86
Sorts: 176.16 63.84
Logons: 0.09 0.03
Executes: 891.55 323.08
Transactions: 2.76
% Blocks changed per Read: 10.64 Recursive Call %: 39.52
Rollback per transaction %: 4.06 Rows per Sort: 12.92
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.99 Redo NoWait %: 100.00
Buffer Hit %: 97.25 In-memory Sort %: 100.00
Library Hit %: 99.14 Soft Parse %: 96.68
Execute to Parse %: 92.02 Latch Hit %: 98.84
Parse CPU to Parse Elapsd %: % Non-Parse CPU: 100.00
Shared Pool Statistics Begin End
Memory Usage %: 66.41 47.09
% SQL with executions>1: 96.90 89.42
% Memory for SQL w/exec>1: 97.06 92.53
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
CPU time 2,843 2.8
enq: KO - fast object checkpoi 1,594 810 508 0.8 Applicatio
log file sync 12,011 152 13 0.2 Commit
Streams AQ: qmn coordinator wa 1 5 5000 0.0 Other
cursor: pin S wait on X 533 5 9 0.0 Concurrenc
-------------------------------------------------------------
Looks like ADDM is asking for tuning of the EXPDP job itself! The AWR report, however, shows some bottlenecks in log file sync and Streams waits. I also believe the Streams wait event is caused by EXPDP itself.
Command used to take EXPDP is as under:
expdp userid=\"/ as sysdba\" directory=dpump_dir_cron full=y dumpfile=$expname logfile=$logexpname parallel=10 filesize=50G
Please advise.
Thanks,
Suddhasatwa
Parallelism is always good as long as you have enough processes and CPU resources. The settings for certain initialization parameters can affect the performance of Data Pump Export and Import. In particular, you can try the following settings to improve performance, although the effect may not be the same on all platforms:
DISK_ASYNCH_IO=TRUE
DB_BLOCK_CHECKING=FALSE
DB_BLOCK_CHECKSUM=FALSE
Additionally, the following initialization parameters must have values set high enough to allow for maximum parallelism:
PROCESSES
SESSIONS
PARALLEL_MAX_SERVERS
For parallelism, Oracle says:
Set the degree of parallelism to two times the number of CPUs, then tune from there. For Data Pump Export, the PARALLEL parameter value should be less than or equal to the number of dump files. For Data Pump Import, the PARALLEL parameter value should not be much larger than the number of files in the dump file set. A PARALLEL value greater than one is only available in Enterprise Edition.
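That "two times the number of CPUs" figure is only a starting heuristic, not a fixed rule. A minimal sketch of computing it (the `nproc` command and the fallback value of 4 are my assumptions, not from this thread; on Solaris you would use `psrinfo` instead):

```shell
# Sketch only: derive a starting PARALLEL value (2 x CPU count), then tune from there.
CPUS=$(nproc 2>/dev/null || echo 4)   # fall back to 4 if nproc is unavailable
PARALLEL=$((CPUS * 2))
echo "Starting PARALLEL value: $PARALLEL"
# For export, the DUMPFILE template must allow at least that many files, e.g.:
#   expdp system full=y parallel=$PARALLEL dumpfile=full_%U.dmp directory=DPUMP_DIR
```

On an 8-core server this gives a starting PARALLEL of 16; from there, tune downward if the I/O subsystem or concurrent workload becomes the bottleneck.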
Edited by: Karan on Jan 25, 2013 3:59 PM -
Do we need to create users in expdp
Hello Guys
I need to do expdp impdp to migrate a database to another server with same OS and oracle version.
Do I need to create users? If not, will they take the same passwords automatically?
Also, please tell me how I can make my expdp and impdp run faster; I only know about using the PARALLEL clause in both.
I also think the BUFFER parameter speeds things up if set to a high value, so please tell me on what basis I should set its value.
Oracle 10g
OS: AIX
Hi,
As long as the user running the export/import has the correct rights then the user will be created automatically (which is a big improvement over old exp/imp).
The main way to speed up expdp/impdp is to use parallel as high as possible and have a very large PGA when you import as most of the time is spent building indexes.
In your case I would use a network link between the two databases to avoid creating a file, which should also make things faster (and simpler).
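A network-mode import along those lines might look like the sketch below; the link name, schema, and log file name are placeholders I made up, not values from this thread:

```shell
# Hypothetical sketch of a network-mode import: run on the TARGET host;
# no dump file is ever written because rows stream over the database link.
SRC_LINK="SRCDB.WORLD"   # an assumed database link pointing at the source DB
CMD="impdp system NETWORK_LINK=${SRC_LINK} schemas=APPUSER logfile=net_imp.log"
echo "$CMD"
```

The database link must exist on the target and the importing user needs the IMP_FULL_DATABASE-style privileges appropriate to the mode chosen.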
There is no buffer parameter in expdp/impdp like there was in old style export/import.
I would make sure you are on 10.2.0.5 if possible to make sure you have all bug fixes for datapump as there are various issues that can slow it down.
Cheers,
Harry -
Datapump - Parallelism is not working
Hello,
I am running 11.1.0.7 on AIX.
I am taking an expdp of an table using the value of 4 for parameter PARALLEL.
expdp SYSTEM TABLES=MYTEST.HISTORY DIRECTORY=EXPORT_FILES DUMPFILE=TEST_HIST_%U.EXPDP.DMP LOGFILE=TEST_HIST.EXPDP.LOG PARALLEL=4
But I see only two dump files created, and it seems most of the data is going to only one:
ls -ltr
total 286757112
-rw-r----- 1 oracle staff 32768 Jan 17 15:38 TEST_HIST_02.EXPDP.DMP
-rw-r----- 1 oracle staff 19154370560 Jan 17 15:38 TEST_HIST_01.EXPDP.DMP
Why this behaviour? I thought the data would be distributed across 4 different dump files, since I set it to run in parallel mode and I have 6 CPUs in the box.
Thanks in advance!
This has nothing to do with the parallelism set for the table. DO NOT CHANGE TABLE PARALLELISM for Data Pump. Sorry for the shout, but the table's parallelism setting does not change anything that Data Pump looks at; that suggestion is wrong.
The reason you may only get two dump files comes down to many things. First, let me explain how expdp works with parallelism. When expdp starts, the first work item assigned to a worker process is to export the metadata. The first part of this request is the 'estimation' phase. This phase gets the names of the tables/partitions/subpartitions that need to be exported. This information is sent to the MCP process so it can schedule the data unload. The data unload starts right away if parallel is greater than 1. The worker process that did the estimation then starts unloading metadata, which is written to file #1. In your case, parallel=4, so the MCP will try to split up the data unload that needs to be done. The data can be broken up into 1 or n jobs. Some of the decisions on how many jobs to create are based on these factors:
1. Generally, exporting data using direct path is n times faster than external tables.
2. Direct path does not support parallelism on a single table. This means that worker 2 could be assigned table1 via direct path and worker 3 could be assigned table2. Parallelism is achieved, but by unloading 2 tables at the same time.
3. Some attributes of tables are not supported by direct path, so if a table has those attributes, external tables must be chosen. External tables support parallelism on a single table, but some attributes prohibit single-table parallelism.
4. If the table is not larger than x MB, then the overhead of setting up an external table is not worth the parallelism, so direct path (parallel 1) is used instead.
And the list goes on. From what I can see, you had one worker exporting metadata and writing it to one file, and another worker exporting the data and writing it to your second file. The data for that table was exported using parallel 1. Not sure why, but because you only had 2 dump files, that is the only scenario I can come up with.
Can you do this and post the results:
Use your expdp command, add estimate=blocks, and post the results from the estimate lines. I might be able to tell from that information why the data was exported using parallel 1.
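A sketch of that diagnostic run, built on the command from the question (adding ESTIMATE_ONLY=Y is my suggestion, not Dean's; it reports per-table size estimates without unloading any data, and in that mode DUMPFILE must be omitted):

```shell
# Hypothetical diagnostic: estimate sizes by blocks, write nothing.
CMD="expdp SYSTEM TABLES=MYTEST.HISTORY DIRECTORY=EXPORT_FILES ESTIMATE=BLOCKS ESTIMATE_ONLY=Y LOGFILE=TEST_HIST_EST.LOG"
echo "$CMD"   # the 'estimate' lines Dean asks about appear in the log output
```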
Dean -
Hi,
I'm new to expdp; I recently moved to it from exp, since it has so many more features.
I want to know about the error ORA-39095.
When I searched Google, some suggested using FILESIZE, some suggested listing dump files explicitly (dumpfile=1.dmp,2.dmp,3.dmp), and some suggested using %U.dmp.
So I'm totally confused about the exact resolution of this error; kindly suggest how to proceed.
I think I didn't understand it properly: I had given parallel=4, then I removed the PARALLEL parameter and the backup succeeded. Can anybody explain how PARALLEL and DUMPFILE work together? Does each parallel process write to its own dump file, or something else?
Also, this morning I took a schema export with exp and the dump file was 105 MB, but in the afternoon expdp split it into 4 dump files (1.dmp, 2.dmp, 3.dmp, ...), each around 40 MB.
What do these files do, and if I want to restore, how can I restore using all these dump files?
Please advise; I think these are expdp basics, but I'm struggling and need some help or docs with examples.
Thanks,
M.Murali
1- You can use the dynamic format (i.e. dumpfile=full_%U.dmp):
The 'wildcard' specification for the dump file can expand up to 99 files.
If 99 files have been generated before the export has completed, it will again return the ORA-39095 error.
2- If this is still not enough and more files are needed, a workaround is to specify a bigger FILESIZE parameter.
3- If this is inconvenient, another option is to use this syntax:
dumpfile=fullexp%U.dmp, fullexp2_%U.dmp, fullexp3_%U.dmp
which can expand up to 3*99 files.
If encountering problems containing the dump in a single directory using this solution, you may prefer
this syntax:
dumpfile=dmpdir1:fullexp1_%U.dmp, dmpdir2:fullexp2_%U.dmp, dmpdir3:fullexp3_%U.dmp
(assuming the 3 directory objects listed above had been already created first).
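The 99-file ceiling is easy to see by mimicking the %U expansion by hand; this is just an illustration of the naming, not something Data Pump itself requires you to run:

```shell
# Sketch: Data Pump expands %U to 01..99, so one template caps at 99 files;
# listing three templates (as above) raises the ceiling to 3 x 99.
template="fullexp_%U.dmp"
for n in 01 02 99; do
  echo "${template/\%U/$n}"   # bash substitution standing in for the expansion
done
```

This prints fullexp_01.dmp, fullexp_02.dmp, and fullexp_99.dmp; ORA-39095 fires when every generated name is already full and no template has numbers left.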
Also, here are some links to get you started:
http://www.oracle-base.com/articles/10g/OracleDataPump10g.php
http://www.orafaq.com/wiki/Datapump -
EXTERNAL TABLE PARALLEL (DEGREE DEFAULT)
I'd like to ask: in an external table, does the clause SKIP 1, used with PARALLEL (DEGREE 4), skip the first record for each parallel session? And where can I set the parallel parameter?
Thanks
Default values make sense during insertion. Since you cannot insert into an external table (with some slight oversight in the Concepts guide regarding datapump external tables), what would be the point of such a feature?
Nicolas.