Data Pump - expdp and slow performance on specific tables

Hi there
I have a data pump export of a schema. Most of the 700 tables are exported very quickly (direct path), but a couple of them seem to be extremely slow.
I have checked:
- no lobs
- no long/raw
- no VPD
- no partitions
- no bitmap indexes
- just date, number, varchar2's
I'm running with trace 400300
But I'm having trouble reading the output from it. It seems that some of the slow-performing tables are running with method 4??? Can anyone explain the methods in the trace:
1 > direct path (I think)
2 > external table (I think)
4 > ?
others?
I have done some stats using v$filestat/v$session_wait (history) - and it seems that we always wait for "db file sequential read" - and are doing lots and lots of SINGLEBLKRDS. No undo is read.
I have a table of 2.5 GB -> 3 minutes
and then this (in my eyes) similar table of 2.4 GB -> 1½ hrs.
There are 367,000 blocks (8 KB) and avg row length = 71.
I'm on Oracle 11.2 on a Linux box with plenty of RAM and CPU power.
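
For reference, this is roughly how I did the wait checks mentioned above (a rough sketch only - the SID is just the one from my trace, and the chained-row query is just an idea I want to rule out, since ANALYZE is needed to fill CHAIN_CNT):

select sid, serial#, program
from   v$session
where  program like '%(DW%';          -- find the Data Pump worker (DW00) session

select event, wait_time, seconds_in_wait, state
from   v$session_wait
where  sid = 401;                     -- 401 is just the SID from my trace

-- since we only see single block reads, I also want to compare the two tables' stats;
-- CHAIN_CNT is only populated by ANALYZE TABLE ... COMPUTE STATISTICS, not by DBMS_STATS
select table_name, num_rows, blocks, avg_row_len, chain_cnt
from   dba_tables
where  owner = 'TIA'
and    table_name = 'ACC_PAYMENT_SPECIFICATION';
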
Trace file /opt/oracle112/diag/rdbms/prod/prod/trace/prod_dw00_24268.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /opt/oracle112/product/11.2.0.2/dbhome_1
System name:  Linux
Node name:  tiaprod.thi.somethingamt.dk
Release:  2.6.18-194.el5
Version:  #1 SMP Mon Mar 29 22:10:29 EDT 2010
Machine:  x86_64
VM name:  Xen Version: 3.4 (HVM)
Instance name: prod
Redo thread mounted by this instance: 1
Oracle process number: 222
Unix process pid: 24268, image: [email protected] (DW00)
*** 2011-09-20 09:39:39.671
*** SESSION ID:(401.8395) 2011-09-20 09:39:39.671
*** CLIENT ID:() 2011-09-20 09:39:39.671
*** SERVICE NAME:(SYS$BACKGROUND) 2011-09-20 09:39:39.671
*** MODULE NAME:() 2011-09-20 09:39:39.671
*** ACTION NAME:() 2011-09-20 09:39:39.671
KUPP:09:39:39.670: Current trace/debug flags: 00400300 = 4195072
*** MODULE NAME:(Data Pump Worker) 2011-09-20 09:39:39.672
*** ACTION NAME:(SYS_EXPORT_SCHEMA_09) 2011-09-20 09:39:39.672
KUPW:09:39:39.672: 0: ALTER SESSION ENABLE PARALLEL DML called.
KUPW:09:39:39.672: 0: ALTER SESSION ENABLE PARALLEL DML returned.
KUPC:09:39:39.693: Setting remote flag for this process to FALSE
prvtaqis - Enter
prvtaqis subtab_name upd
prvtaqis sys table upd
KUPW:09:39:39.819: 0: KUPP$PROC.WHATS_MY_ID called.
KUPW:09:39:39.819: 1: KUPP$PROC.WHATS_MY_ID returned.
KUPW:09:39:39.820: 1: worker max message number: 1000
KUPW:09:39:39.822: 1: Full cluster access allowed
KUPW:09:39:39.823: 1: Original job start time: 11-SEP-20 09:39:38 AM
KUPW:09:39:39.862: 1: KUPP$PROC.WHATS_MY_NAME called.
KUPW:09:39:39.862: 1: KUPP$PROC.WHATS_MY_NAME returned. Process name: DW00
KUPW:09:39:39.862: 1: KUPV$FT_INT.GET_INSTANCE_ID called.
KUPW:09:39:39.866: 1: KUPV$FT_INT.GET_INSTANCE_ID returned. Instance name: prod
KUPW:09:39:39.870: 1: ALTER SESSION ENABLE RESUMABLE called.
KUPW:09:39:39.870: 1: ALTER SESSION ENABLE RESUMABLE returned.
KUPW:09:39:39.871: 1: KUPF$FILE.INIT called.
KUPW:09:39:39.996: 1: KUPF$FILE.INIT returned.
KUPW:09:39:39.998: 1: KUPF$FILE.GET_MAX_CSWIDTH called.
KUPW:09:39:39.998: 1: KUPF$FILE.GET_MAX_CSWIDTH returned.
KUPW:09:39:39.998: 1: Max character width: 1
KUPW:09:39:39.998: 1: Max clob fetch: 32757
KUPW:09:39:39.998: 1: Max varchar2a size: 32757
KUPW:09:39:39.998: 1: Max varchar2 size: 7990
KUPW:09:39:39.998: 1: In procedure GET_PARAMETERS
KUPW:09:39:40.000: 1: In procedure GET_METADATA_FILTERS
KUPW:09:39:40.001: 1: In procedure GET_METADATA_TRANSFORMS
KUPW:09:39:40.002: 1: In procedure GET_DATA_FILTERS
KUPW:09:39:40.004: 1: In procedure GET_DATA_REMAPS
KUPW:09:39:40.005: 1: In procedure PRINT_MT_PARAMS
KUPW:09:39:40.005: 1: Master table             : "SYSTEM"."SYS_EXPORT_SCHEMA_09"
KUPW:09:39:40.005: 1: Metadata job mode        : SCHEMA_EXPORT
KUPW:09:39:40.005: 1: Debug enable             : TRUE
KUPW:09:39:40.005: 1: Profile enable           : FALSE
KUPW:09:39:40.005: 1: Transportable enable     : FALSE
KUPW:09:39:40.005: 1: Metrics enable           : FALSE
KUPW:09:39:40.005: 1: db version               : 11.2.0.2.0
KUPW:09:39:40.005: 1: job version              : 11.2.0.0.0
KUPW:09:39:40.005: 1: service name             :
KUPW:09:39:40.005: 1: Current Edition          : ORA$BASE
KUPW:09:39:40.005: 1: Job Edition              :
KUPW:09:39:40.005: 1: Abort Step               : 0
KUPW:09:39:40.005: 1: Access Method            : AUTOMATIC
KUPW:09:39:40.005: 1: Data Options             : 0
KUPW:09:39:40.006: 1: Dumper directory         :
KUPW:09:39:40.006: 1: Master only              : FALSE
KUPW:09:39:40.006: 1: Data Only                : FALSE
KUPW:09:39:40.006: 1: Metadata Only            : FALSE
KUPW:09:39:40.006: 1: Estimate                 : BLOCKS
KUPW:09:39:40.006: 1: Data error logging table :
KUPW:09:39:40.006: 1: Remote Link              :
KUPW:09:39:40.006: 1: Dumpfile present         : TRUE
KUPW:09:39:40.006: 1: Table Exists Action      :
KUPW:09:39:40.006: 1: Partition Options        : NONE
KUPW:09:39:40.006: 1: Tablespace Datafile Count: 0
KUPW:09:39:40.006: 1: Metadata Filter    Index : 1    Count : 10
KUPW:09:39:40.006: 1:         1           Name - INCLUDE_USER
KUPW:09:39:40.006: 1:                    Value - TRUE
KUPW:09:39:40.006: 1:              Object Name - SCHEMA_EXPORT
KUPW:09:39:40.006: 1:         2           Name - SCHEMA_EXPR
KUPW:09:39:40.006: 1:                    Value -  IN ('TIA')
KUPW:09:39:40.006: 1:         3           Name - NAME_EXPR
KUPW:09:39:40.006: 1:                    Value -  ='ACC_PAYMENT_SPECIFICATION'
KUPW:09:39:40.006: 1:                   Object - TABLE
KUPW:09:39:40.006: 1:         4           Name - INCLUDE_PATH_EXPR
KUPW:09:39:40.006: 1:                    Value -  IN ('TABLE')
KUPW:09:39:40.006: 1:         5           Name - ORDERED
KUPW:09:39:40.006: 1:                    Value - FALSE
KUPW:09:39:40.006: 1:                   Object - TABLE_DATA
KUPW:09:39:40.006: 1:         6           Name - NO_XML
KUPW:09:39:40.006: 1:                    Value - TRUE
KUPW:09:39:40.006: 1:                   Object - XMLSCHEMA/EXP_XMLSCHEMA
KUPW:09:39:40.006: 1:         7           Name - XML_OUTOFLINE
KUPW:09:39:40.006: 1:                    Value - FALSE
KUPW:09:39:40.006: 1:                   Object - TABLE/TABLE_DATA
KUPW:09:39:40.006: 1:         8           Name - XDB_GENERATED
KUPW:09:39:40.006: 1:                    Value - FALSE
KUPW:09:39:40.006: 1:                   Object - TABLE/TRIGGER
KUPW:09:39:40.007: 1:         9           Name - XDB_GENERATED
KUPW:09:39:40.007: 1:                    Value - FALSE
KUPW:09:39:40.007: 1:                   Object - TABLE/RLS_POLICY
KUPW:09:39:40.007: 1:         10           Name - PRIVILEGED_USER
KUPW:09:39:40.007: 1:                    Value - TRUE
KUPW:09:39:40.007: 1: MD remap schema    Index : 4    Count : 0
KUPW:09:39:40.007: 1: MD remap other     Index : 5    Count : 0
KUPW:09:39:40.007: 1: MD Transform ddl   Index : 2    Count : 11
KUPW:09:39:40.007: 1:         1           Name - DBA
KUPW:09:39:40.007: 1:                    Value - TRUE
KUPW:09:39:40.007: 1:                   Object - JOB
KUPW:09:39:40.007: 1:         2           Name - EXPORT
KUPW:09:39:40.007: 1:                    Value - TRUE
KUPW:09:39:40.007: 1:         3           Name - PRETTY
KUPW:09:39:40.007: 1:                    Value - FALSE
KUPW:09:39:40.007: 1:         4           Name - SQLTERMINATOR
KUPW:09:39:40.007: 1:                    Value - FALSE
KUPW:09:39:40.007: 1:         5           Name - CONSTRAINTS
KUPW:09:39:40.007: 1:                    Value - FALSE
KUPW:09:39:40.007: 1:                   Object - TABLE
KUPW:09:39:40.007: 1:         6           Name - REF_CONSTRAINTS
KUPW:09:39:40.007: 1:                    Value - FALSE
KUPW:09:39:40.007: 1:                   Object - TABLE
KUPW:09:39:40.007: 1:         7           Name - OID
KUPW:09:39:40.007: 1:                    Value - TRUE
KUPW:09:39:40.007: 1:                   Object - TABLE
KUPW:09:39:40.007: 1:         8           Name - RESET_PARALLEL
KUPW:09:39:40.007: 1:                    Value - TRUE
KUPW:09:39:40.007: 1:                   Object - INDEX
KUPW:09:39:40.007: 1:         9           Name - OID
KUPW:09:39:40.007: 1:                    Value - TRUE
KUPW:09:39:40.007: 1:                   Object - TYPE
KUPW:09:39:40.007: 1:         10           Name - OID
KUPW:09:39:40.007: 1:                    Value - TRUE
KUPW:09:39:40.007: 1:                   Object - INC_TYPE
KUPW:09:39:40.007: 1:         11           Name - REVOKE_FROM
KUPW:09:39:40.008: 1:                    Value - SYSTEM
KUPW:09:39:40.008: 1:                   Object - ROLE
KUPW:09:39:40.008: 1: Data Filter        Index : 6    Count : 0
KUPW:09:39:40.008: 1: Data Remap         Index : 7    Count : 0
KUPW:09:39:40.008: 1: MD remap name      Index : 8    Count : 0
KUPW:09:39:40.008: 1: In procedure DISPATCH_WORK_ITEMS
KUPW:09:39:40.009: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:40.009: 1: KUPC$QUEUE.TRANSCEIVE called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia  7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia  7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia  7
KUPW:09:39:40.036: 1: KUPC$QUEUE.TRANSCEIVE returned. Received 2011
KUPW:09:39:40.036: 1: DBMS_LOB.CREATETEMPORARY called.
KUPW:09:39:40.037: 1: DBMS_LOB.CREATETEMPORARY returned.
KUPW:09:39:40.038: 1: Flags: 18
KUPW:09:39:40.038: 1: Start sequence number:
KUPW:09:39:40.038: 1: End sequence number:
KUPW:09:39:40.038: 1: Metadata Parallel: 1
KUPW:09:39:40.038: 1: Primary worker id: 1
KUPW:09:39:40.041: 1: In procedure GET_TABLE_DATA_OBJECTS
KUPW:09:39:40.041: 1: In procedure CREATE_MSG
KUPW:09:39:40.041: 1: KUPV$FT.MESSAGE_TEXT called.
KUPW:09:39:40.041: 1: KUPV$FT.MESSAGE_TEXT returned.
KUPW:09:39:40.041: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:40.041: 1: KUPC$QUEUE_INT.SEND called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia  7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia  7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia  7
KUPW:09:39:40.044: 1: KUPC$QUEUE_INT.SEND returned.
KUPW:09:39:40.044: 1: Estimate in progress using BLOCKS method...
KUPW:09:39:40.044: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
KUPW:09:39:40.044: 1: Old Seqno: 0 New Path: SCHEMA_EXPORT/TABLE/TABLE_DATA PO Num: -5 New Seqno: 62
KUPW:09:39:40.046: 1: Created type completion for duplicate 62
KUPW:09:39:40.046: 1: In procedure CREATE_MSG
KUPW:09:39:40.046: 1: KUPV$FT.MESSAGE_TEXT called.
KUPW:09:39:40.046: 1: KUPV$FT.MESSAGE_TEXT returned.
KUPW:09:39:40.046: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:40.046: 1: KUPC$QUEUE_INT.SEND called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia  7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia  7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia  7
KUPW:09:39:40.047: 1: KUPC$QUEUE_INT.SEND returned.
KUPW:09:39:40.047: 1: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
KUPW:09:39:40.048: 1: In procedure CONFIGURE_METADATA_UNLOAD
KUPW:09:39:40.048: 1: Phase: ESTIMATE_PHASE Filter Name:  Filter Value:
KUPW:09:39:40.048: 1: DBMS_METADATA.OPEN11.2.0.0.0 called.
KUPW:09:39:40.182: 1: DBMS_METADATA.OPEN11.2.0.0.0 returned. Source handle: 100001
KUPW:09:39:40.182: 1: DBMS_METADATA.SET_FILTER called. metadata_phase: ESTIMATE_PHASE
KUPW:09:39:40.182: 1: DBMS_METADATA.SET_FILTER returned. In function GET_NOEXP_TABLE
KUPW:09:39:40.194: 1: DBMS_METADATA.SET_PARSE_ITEM called.
*** 2011-09-20 09:39:40.325
KUPW:09:39:40.325: 1: DBMS_METADATA.SET_PARSE_ITEM returned.
KUPW:09:39:40.325: 1: DBMS_METADATA.SET_COUNT called.
KUPW:09:39:40.328: 1: DBMS_METADATA.SET_COUNT returned.
KUPW:09:39:40.328: 1: DBMS_METADATA.FETCH_XML_CLOB called.
*** 2011-09-20 09:39:42.603
KUPW:09:39:42.603: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
KUPW:09:39:42.603: 1: In procedure CREATE_TABLE_DATA_OBJECT_ROWS
KUPW:09:39:42.603: 1: In function GATHER_PARSE_ITEMS
KUPW:09:39:42.603: 1: In function CHECK_FOR_REMAP_NETWORK
KUPW:09:39:42.603: 1: Nothing to remap
KUPW:09:39:42.603: 1: In procedure BUILD_OBJECT_STRINGS
KUPW:09:39:42.604: 1: In procedure LOCATE_DATA_FILTERS
KUPW:09:39:42.604: 1: In function NEXT_PO_NUMBER
KUPW:09:39:42.620: 1: In procedure DETERMINE_METHOD_PARALLEL
KUPW:09:39:42.620: 1: flags mask: 0
KUPW:09:39:42.620: 1: dapi_possible_meth: 1
KUPW:09:39:42.620: 1: data_size: 3019898880
KUPW:09:39:42.620: 1: et_parallel: TRUE
KUPW:09:39:42.620: 1: object: TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION"                               <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
KUPW:09:39:42.648: 1: l_dapi_bit_mask: 7
KUPW:09:39:42.648: 1: l_client_bit_mask: 7
KUPW:09:39:42.648: 1: TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION" either, parallel: 12               <<<<< Here it says "either" (I thought that was the method?)  <<<<<<<<<<<<<<<<
KUPW:09:39:42.648: 1: FORALL BULK INSERT called.
KUPW:09:39:42.658: 1: FORALL BULK INSERT returned.
KUPW:09:39:42.660: 1: DBMS_LOB.TRIM called. v_md_xml_clob
KUPW:09:39:42.660: 1: DBMS_LOB.TRIM returned.
KUPW:09:39:42.660: 1: DBMS_METADATA.FETCH_XML_CLOB called.
KUPW:09:39:42.678: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
KUPW:09:39:42.678: 1: DBMS_LOB.CREATETEMPORARY called.
KUPW:09:39:42.678: 1: DBMS_LOB.CREATETEMPORARY returned.
KUPW:09:39:42.678: 1: In procedure UPDATE_TD_ROW_EXP with seqno: 62
KUPW:09:39:42.680: 1: 1 rows fetched
KUPW:09:39:42.680: 1: In function NEXT_PO_NUMBER
KUPW:09:39:42.680: 1: Next table data array entry: 1 Parallel: 12 Size: 3019898880 Method: 4Creation_level: 0               <<<<<<<<<<<<<<<<  HERE IT SAYS METHOD = 4  and PARALLEL=12 (I'm not using the parallel parameter ???)  <<<<<<<<<<<<<<<<<<
KUPW:09:39:42.681: 1: In procedure UPDATE_TD_BASE_PO_INFO
KUPW:09:39:42.683: 1: Updated 1 td objects with bpo between 1 and 1
KUPW:09:39:42.684: 1: Send table_data_varray called.  Count: 1
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia  7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia  7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia  7
KUPW:09:39:42.695: 1: Send table_data_varray returned.
KUPW:09:39:42.695: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:42.695: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
KUPW:09:39:42.695: 1: Old Seqno: 62 New Path:  PO Num: -5 New Seqno: 0
KUPW:09:39:42.695: 1: Object count: 1
KUPW:09:39:42.697: 1: 1 completed for 62
KUPW:09:39:42.697: 1: DBMS_METADATA.CLOSE called. Handle: 100001
KUPW:09:39:42.697: 1: DBMS_METADATA.CLOSE returned.
KUPW:09:39:42.697: 1: In procedure CREATE_MSG
KUPW:09:39:42.697: 1: KUPV$FT.MESSAGE_TEXT called.
KUPW:09:39:42.698: 1: KUPV$FT.MESSAGE_TEXT returned.
KUPW:09:39:42.698: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:39:42.698: 1: KUPC$QUEUE_INT.SEND called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia  7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia  7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia  7
KUPW:09:39:42.699: 1: KUPC$QUEUE_INT.SEND returned.
KUPW:09:39:42.699: 1: Total estimation using BLOCKS method: 2.812 GB
KUPW:09:39:42.699: 1: In procedure CONFIGURE_METADATA_UNLOAD
KUPW:09:39:42.699: 1: Phase: WORK_PHASE Filter Name: BEGIN_WITH Filter Value:
KUPW:09:39:42.699: 1: DBMS_METADATA.OPEN11.2.0.0.0 called.
KUPW:09:39:42.837: 1: DBMS_METADATA.OPEN11.2.0.0.0 returned. Source handle: 200001
KUPW:09:39:42.837: 1: DBMS_METADATA.SET_FILTER called. metadata_phase: WORK_PHASE
KUPW:09:39:42.837: 1: DBMS_METADATA.SET_FILTER returned. In function GET_NOEXP_TABLE
KUPW:09:39:42.847: 1: DBMS_METADATA.SET_PARSE_ITEM called.
KUPW:09:39:42.964: 1: DBMS_METADATA.SET_PARSE_ITEM returned.
KUPW:09:39:42.964: 1: DBMS_METADATA.SET_COUNT called.
KUPW:09:39:42.967: 1: DBMS_METADATA.SET_COUNT returned.
KUPW:09:39:42.967: 1: KUPF$FILE.OPEN_CONTEXT called.
KUPW:09:39:42.967: 1: KUPF$FILE.OPEN_CONTEXT returned.
KUPW:09:39:42.968: 1: DBMS_METADATA.FETCH_XML_CLOB called. Handle: 200001
*** 2011-09-20 09:40:01.798
KUPW:09:40:01.798: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
KUPW:09:40:01.798: 1: Object seqno fetched:
KUPW:09:40:01.799: 1: Object path fetched:
KUPW:09:40:01.799: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:40:01.799: 1: In procedure COMPLETE_EXP_OBJECT
KUPW:09:40:01.799: 1: KUPF$FILE.FLUSH_LOB called.
KUPW:09:40:01.815: 1: KUPF$FILE.FLUSH_LOB returned.
KUPW:09:40:01.815: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
KUPW:09:40:01.815: 1: Old Seqno: 226 New Path:  PO Num: -5 New Seqno: 0
KUPW:09:40:01.815: 1: Object count: 1
KUPW:09:40:01.815: 1: 1 completed for 226
KUPW:09:40:01.815: 1: DBMS_METADATA.CLOSE called.  Handle: 200001
KUPW:09:40:01.816: 1: DBMS_METADATA.CLOSE returned.
KUPW:09:40:01.816: 1: KUPF$FILE.CLOSE_CONTEXT called.
KUPW:09:40:01.820: 1: KUPF$FILE.CLOSE_CONTEXT returned.
KUPW:09:40:01.821: 1: In procedure SEND_MSG. Fatal=0
KUPW:09:40:01.821: 1: KUPC$QUEUE.TRANSCEIVE called.
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia  7
kwqberlst rqan->lascn_kwqiia > 0 block
kwqberlst rqan->lascn_kwqiia  7
kwqberlst ascn -90145310 lascn 22
kwqberlst !retval block
kwqberlst rqan->lagno_kwqiia  7
KUPW:09:40:01.827: 1: KUPC$QUEUE.TRANSCEIVE returned. Received 2012
KUPW:09:40:01.827: 1: DBMS_LOB.CREATETEMPORARY called.
KUPW:09:40:01.828: 1: DBMS_LOB.CREATETEMPORARY returned.
KUPW:09:40:01.828: 1: Process order range: 1..1
KUPW:09:40:01.828: 1: Method: 1
KUPW:09:40:01.828: 1: Parallel: 1
KUPW:09:40:01.828: 1: Creation level: 0
KUPW:09:40:01.830: 1: BULK COLLECT called.
KUPW:09:40:01.830: 1: BULK COLLECT returned.
KUPW:09:40:01.830: 1: In procedure BUILD_OBJECT_STRINGS
KUPW:09:40:01.836: 1: In procedure MOVE_DATA UNLOADing process_order 1 TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION"            <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
KUPW:09:40:01.839: 1: KUPD$DATA.OPEN called.
KUPW:09:40:01.840: 1: KUPD$DATA.OPEN returned.
KUPW:09:40:01.840: 1: KUPD$DATA.SET_PARAMETER - common called.
KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - common returned.
KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - flags called.
KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - flags returned.
KUPW:09:40:01.843: 1: KUPD$DATA.START_JOB called.
KUPW:09:40:01.918: 1: KUPD$DATA.START_JOB returned. In procedure GET_JOB_VERSION

This is how I called expdp:
expdp system/xxxxxxxxx schemas=tia directory=expdp INCLUDE=TABLE:\" =\'ACC_PAYMENT_SPECIFICATION\'\" REUSE_DUMPFILES=Y LOGFILE=expdp:$LOGFILE TRACE=400300
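
If it helps, one idea would be to force the unload method explicitly and compare timings. As far as I know ACCESS_METHOD is an undocumented/unsupported expdp parameter on 11.2, so treat this purely as a test sketch, not something I have verified:

expdp system/xxxxxxxxx schemas=tia directory=expdp INCLUDE=TABLE:\" =\'ACC_PAYMENT_SPECIFICATION\'\" REUSE_DUMPFILES=Y ACCESS_METHOD=DIRECT_PATH
expdp system/xxxxxxxxx schemas=tia directory=expdp INCLUDE=TABLE:\" =\'ACC_PAYMENT_SPECIFICATION\'\" REUSE_DUMPFILES=Y ACCESS_METHOD=EXTERNAL_TABLE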

Hi there ...
I have read the note - that's where I found the link to the trace note 286496.1 - on how to set up a trace.
But I still need an explanation of the methods (1, 2, 4 etc.)
regards
Mette

Similar Messages

  • Laggy and Slow Performance

    We are an architectural company, and we use Photoshop to create master-planning, architectural, and landscape plans. So keep in mind that there's a bunch of layers with drop shadow, textures, bevel & emboss etc. in them.
    We recently upgraded to CS6; our PCs run Windows 7 64-bit with 16 GB of RAM and 1 GB NVIDIA Quadro 2000D graphics cards.
    As Graphics Support, I make sure that updates and fixes are installed and all that stuff. But still, we experience some laggy and slow performance from Photoshop. The file is 660 MB and is set to 150 resolution. It takes a while to open; I can understand that it might be reading all those details, but once opened you won't get a real-time response from Photoshop. This happened before but we were able to fix it with an update that Adobe had released.
    With our PC build, I can't find any reason why we would still get this kind of performance.
    Please let me know if you have some solution to this.
    Thanks.

    CUDA is (and always will be) NVIDIA Proprietary.
    Apple has made a strong commitment to OpenCL (not to be confused with OpenGL) to harness the power of every manufacturer's GPU. OpenCL will work with NVIDIA/CUDA or AMD cards.
    Adobe has committed to OpenCL for the newest versions of their Premiere PRO CC software.
    http://blogs.adobe.com/premierepro/2013/06/adobe-premiere-pro-cc-and-gpu-support.html
    In my opinion it is only a matter of time before most production software supports OpenCL.
    Gaming, on the other hand, may not feel that same pressure. Those guys tend to do what they want.
    Any thoughts on how dramatically a second display would affect performance?
    A second display does not seem to affect these cards in any sort of perceptible way.

  • Data pump, Query "1=2" performance?

    Hi guys
    I am trying to export a schema using Data Pump; however, I need no data from a few of the tables since they are irrelevant, but I'd still like to have the structure of the tables themselves along with any constraints and such.
    I thought of using the QUERY parameter with a "1=2" query, making it so that I can filter out all data from certain tables in the export while giving me everything else.
    While this works, I wonder if Data Pump/Oracle is smart enough not to run this query through the entire table. If it does perform a full table scan, can anybody recommend another way of excluding just the data of certain tables while still getting the table structure itself along with anything else related to it?
    I have been unable to find such information after searching the net for a good while.
    Regards
    Alex

    Thanks.
    Does that mean 1=2 actually scans the entire table so it should be avoided in the future?
    Regards
    Alex
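
    A sketch of two alternatives that avoid relying on "1=2" (shown as a parfile to avoid shell quoting; the schema, directory, and table names are placeholders):

    schemas=scott
    directory=dpump_dir
    dumpfile=scott.dmp
    # exports the DDL of BIG_TABLE1/BIG_TABLE2 but skips their rows entirely
    EXCLUDE=TABLE_DATA:"IN ('BIG_TABLE1','BIG_TABLE2')"

    or, keeping the QUERY idea, the per-table form looks like this:

    QUERY=BIG_TABLE1:"WHERE 1=0"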

  • Data Pump - expdp / impdp utility question

    Hi All,
    As a basic exercise to learn Data pump, I am trying to export schema scott and then want to load the dump file into a test_t schema in the same database.
    --1. created dir object from sysdba
    CREATE DIRECTORY dpump_dir1 AS 'C:\output_dir';
    --2. created the dir on the OS as C:\output_dir
    --3. run expdp
    expdp system/*****@orcl schemas=scott DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=scott_orcl_nov5.dmp PARALLEL=4
    --4. create test_t schema and grant dba to it.
    --5. run impdp
    impdp system/*****@orcl schemas=test_t DIRECTORY=dpump_dir1 JOB_NAME=hr DUMPFILE=scott_orcl_nov5.dmp PARALLEL=8
    It fails here with ORA-39165: schema test_t not found. However, the schema test_t does exist.
    So, I am not sure why it gives this error. It seems that the scott schema from the expdp dump file cannot be loaded into any other schema, only into a schema named scott... Is that right? If yes, then how can I load all the objects of, say, schema scott into another schema, say test_t? It would be helpful if you could please show the respective expdp and impdp commands.
    Thanks a lot
    KS

    The test_t schema does not exist in the export dump file; you should remap from the input scott schema to the target test_t schema.
    REMAP_SCHEMA : http://download.oracle.com/docs/cd/E11882_01/server.112/e22490/dp_import.htm#SUTIL927
    Nicolas.
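
    For example, a sketch based on the commands above (same directory object and dump file; REMAP_SCHEMA does the scott -> test_t mapping):

    impdp system/*****@orcl DIRECTORY=dpump_dir1 DUMPFILE=scott_orcl_nov5.dmp REMAP_SCHEMA=scott:test_t

    Note that the SCHEMAS parameter, if used at all, should name the schema as it exists in the dump file (scott), not the target schema.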

  • Running Aperture and Slow Performance on MacBook Pro

    I am running a MacBook Pro with 4 GB internal memory, lots of HD space, and OS 10.8.4. Often when working with Aperture, system performance seems very sluggish, to the point of becoming unusable. Export of 20 images to Mail can take 10 min. It just becomes so slow, as if strangled for memory. I find I shut down other apps, but I don't think this should be necessary with current OS technology. Any suggestions? Thanks

    Have you checked what is going on when Aperture becomes sluggish?
    I'd recommend launching some diagnostic tools, if you have not already done so:
    Aperture's own "Activity Viewer": From the main menu bar: Window > Show Activity
    This will tell you how Aperture is spending its time: rendering, previews, scanning for faces or places, raw processing. In particular, check if Aperture is hanging while processing one particular image or video over and over again.
    The Console window: Launch it from the Utilities folder in the Applications folder. Look to see if error messages or warnings appear in this window when Aperture starts to hang.
    The Activity Monitor: Launch it from the Utilities folder in the Applications folder. This window will tell you which processes are using the CPU and the RAM and doing page-outs to disk. You can see if other processes are competing with Aperture and slowing it down, or if Aperture is starved for memory.
    How large are your images? Do you have very large raw files or scans, or are your photos moderately sized?
    Is your library on the internal drive or an external drive?
    Do you see this slowness only with your main Aperture library, or also, if you create a new, small library with a few test images?
    Regards
    Léonie

  • Constant freeze and slow performance when wake up

    hi,
    My laptop freezes all the time, like when I play music for a long time, especially when Norton performs a background check.
    Also, when I open it after it was asleep, it becomes very slow when I open Chrome or play a movie.
    A lot of the time there is a sudden stop of the cursor and performance is so slow that it can't even open My Computer.
    I always have to restart Windows.

  • How to retrieve the data from Website and Upload it in SAP table?

    Dear ABAPers,
    I want to retrieve data from a website and upload it into an SAP database table. Is that possible? Please help me - it is very urgent.
    Thanks & Regards,
    Ashok.

    Dear Abhishek,
    Thanks for your reply, but my requirement is not met.
    If I execute the program, it should retrieve the data from a particular website.
    Thanks & Regards,
    Ashok.

  • How to read data from flatfile and insert into other relevant tables ? Please suggest me the query ?

    Hi to all,
    I have flat files in different locations; through FTP I need to fetch those files and load them into the relevant tables of the database.
    Please share the query to do it.

    You would need a ForEach Loop to iterate through the files. Initially the FTP task will pull the files from the locations to a landing folder. Once that's done, the ForEach Loop will iterate through the files in the folder and will have a Data Flow task inside to transfer the file data to the tables.
    If you want a more secure option you can also use SFTP (Secure FTP) and can implement it using the free WinSCP client. I've explained a method of doing it for dynamic files here:
    http://visakhm.blogspot.in/2012/12/implementing-dynamic-secure-ftp-process.html
    for iterating through files see this example
    http://visakhm.blogspot.in/2012/05/package-to-implement-daily-processing.html
    you may not need the validation step inside the loop in your case
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs

  • How to fetch data from XML and store it in internal table

    Hi All,
    Can anyone help me out with fetching data from XML and storing it in an internal table? Is there any standard function module for this?
    Regards,
    Karthick

    To do this you can either develop an XSLT program and use it with the CALL TRANSFORMATION keyword to transform the XML into an itab
    (search the ABAP General forum, I have posted a few samples)
    or simply use the following FM, which converts your XML into an itab of name/value pairs (name would hold the element name and value would hold the value of the element) which you can then loop over and read into your itab.
    " declarations for SMUM_XML_PARSE
    data: xmldata    type xstring.                        " the XML document as an xstring
    data: result_xml type standard table of smum_xmltb.   " parsed element name/value pairs
    data: return     type standard table of bapiret2.     " messages returned by the FM
    CALL FUNCTION 'SMUM_XML_PARSE'
      EXPORTING
        xml_input = xmldata
      TABLES
        xml_table = result_xml
        return    = return.
    Regards
    Raja

  • Expdp and impdp between TWO differnt tables

    DB version: 10g and 11g
    I have two tables with the same data structure, same column names and widths; the only difference is the name:
    orders and order_history
    I took an export of orders for a date range and now want to import it into order_history.
    Can that be done? If so, can you please provide the syntax?
    thanks

    remap_table is only available starting in 11.1.0.6. So, if your target is 11.1.0.6 or later, you should be all set.
    Dean
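
    A sketch of the syntax under those assumptions (the directory, dump file, and credentials are placeholders; TABLE_EXISTS_ACTION=APPEND because order_history already exists):

    impdp system/***** DIRECTORY=dp_dir DUMPFILE=orders.dmp TABLES=orders REMAP_TABLE=orders:order_history TABLE_EXISTS_ACTION=APPEND CONTENT=DATA_ONLY

    CONTENT=DATA_ONLY keeps impdp from trying to re-create indexes and constraints on the existing order_history table.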

  • Spinning beach ball and slower performance

    I have been noticing that the performance of my iMac is becoming slower, with the dreaded spinning beach ball appearing whenever I change between applications to perform relatively basic functions. However, the worst is when I am in iTunes, where I experience the spinning beach ball for several minutes after connecting and choosing a device, and I typically cannot do anything else in other applications until it stops. As for other applications, I have nearly abandoned Aperture because anything from scrolling to adjustments will cause the spinning beach ball to return for several seconds to minutes.
    My iMac is a 2.8 GHz Intel Core 2 Duo with nearly 30 GB left on the hard drive. I have the memory maxed at 4 GB. I am running OS X 10.7.2 with all apps completely up to date. Overall, I am doing basic functions in Bento, Pages, and surfing Safari on a daily basis with only the minimal applications needed open at once.
    Is there anything I can do to resolve this?

    Hi Todd, thanks for the reply.  Actually, I do have CS6 via my Creative Cloud membership but I'm having to work in 5.5 on this particular client's system for a project. 
    I was hoping that the above solution would be the answer I was searching for but alas, that is apparently not the case, since the issues have returned and the spinning beach ball continues to churn... (nothing like the elation of finding a solution only to later feel crushing defeat as the problem returns...)
    The issues were resolved for a while, so perhaps this post may still be able help someone.  I'd rather be working in CS6 right now, and I'm pushing for this client to do the upgrade as well.  Considering how much time is being wasted waiting for the Mac to continue after hanging every other keystroke, I'd think the client would be anxious to make the change!

  • Errors and slow performance copying lots of files between large hard drives

    After upgrading to Snow Leopard, I have VERY slow file transfer speed with external eSATA drives. While trying to upgrade from a 1.5TB drive to a new 2TB drive, the copy took almost 24 hours and then failed on a large Fusion virtual OS file.
    So I removed my eSATA card from my Mac Pro and connected the drives via FireWire 800. Same issue. I have the 10.6.1 update and reset the PRAM with no improvement.
    Tried doing the copy with SuperDuper, and I have the same speed issue, but checking their log I see some of the files that Finder tried to copy, SuperDuper found another way to copy. Others it fails on. Here's a log of one of the files it could copy (but Finder couldn't):
    | 06:19:31 PM | Info | WARNING: Caught I/O exception(12): Cannot allocate memory
    | 06:19:31 PM | Info | WARNING: Source: /Volumes/DPR/Data/Private/Drive Images/Fusion/Windows Vista 64/MS Win Vista Ult 64 SP2.vmwarevm/Windows Vista x64 Edition-s002.vmdk, lstat(): 0
    | 06:19:31 PM | Info | WARNING: Target: /Volumes/DPR2/Data/Private/Drive Images/Fusion/Windows Vista 64/MS Win Vista Ult 64 SP2.vmwarevm/Windows Vista x64 Edition-s002.vmdk, lstat(): 0
    | 06:19:31 PM | Info | Attempting to copy file using copyfile().
    | 06:21:42 PM | Info | Successfully copied file.
    On another try, with no other programs running, after 19 hours, a 1.5TB copy fails with this error:
    | 12:23:56 PM | Info | Error copying /Volumes/DPR/Data/Private/Media/Photo/Family/Originals/2007.03.04 Mpumalanga, South Africa/Canon EOS Digital Rebel XTi [add 07.53.32 hours]/IMG_1189.JPG to /Volumes/DPR2/Data/Private/Media/Photo/Family/Originals/2007.03.04 Mpumalanga, South Africa/Canon EOS Digital Rebel XTi [add 07.53.32 hours]/IMG_1189.JPG of type 8
    | 12:23:56 PM | Error | SDCopy: Error copying /Volumes/DPR/Data/Private/Media/Photo/Family/Originals/2007.03.04 Mpumalanga, South Africa/Canon EOS Digital Rebel XTi [add 07.53.32 hours]/IMG_1189.JPG to /Volumes/DPR2/Data/Private/Media/Photo/Family/Originals/2007.03.04 Mpumalanga, South Africa/Canon EOS Digital Rebel XTi [add 07.53.32 hours]/IMG_1189.JPG of type 8\n: No space left on device
    There is indeed plenty of space left on the device... over 1.5TB in fact.
    Not sure what is going on here. I have 4GB of RAM, and heard Snow Leopard needs twice as much as Leopard. Am I running out of RAM due to the large amount of files? Or is it some incompatibility with large drives and Snow Leopard? I did not have these problems with Leopard, and I was using these same drives.

    I, too, am experiencing extremely long copy times (17 hours on 5GB) when copying numerous (200+) files from an external HD via FW 400. The same happens when I try to copy files using FW 800.
    Is there anything I can do to reduce the copying times?

  • iOS 8.0.2 and slow performance handling photos

    Since updating my iPad Air to iOS 8.0.2 it has significantly reduced the performance of Photos. I use my iPad predominantly for reviewing my photography when I'm on the road, so it is extremely important to me that it works efficiently. The setup worked fine in iOS 7, but since upgrading even advancing from one image to another can take seconds, by which time I've usually swiped the screen several more times only to arrive at an image 3 or 4 past where I want to be once the system has responded. I also use simple image editing apps and need to copy an image to the clipboard to paste into the app. The responsiveness of the system is at its worst in this scenario and can take up to 10 seconds before the copy command button becomes active. My observation is that it has to wait until the line of images either side of the one I want has been re-painted before the command can be completed. With reasonable frequency, opening the Photos app simply crashes the system and it has to be opened again. I've never experienced that on any of my iPads, and I was an early adopter of version 1 of the device. I should therefore like Apple to formally acknowledge these fundamental flaws and move quickly to providing a further iteration of iOS 8 that is not so "flaky".

    Howdy Jake,
    It sounds like your iPad is updated, but it’s slow to respond and even stops responding occasionally. The article linked below provides a lot of great troubleshooting suggestions that will resolve most issues like this.
    iOS: Not responding or does not turn on
    I hope this helps.
    -Jason

  • Time Out and slow performance of query

    Hi Experts,
    We have a MultiProvider giving sales and stock data. We have 1000 articles and 200 sites, and this query doesn't respond at all. It either times out or the Analyzer status is "not responding". However, on single-article input, the query responds in 60 sec. Please help.
    Best gds
    SumaMani

    Hi,
    Did you run the query in RSRT and click on "Performance Info"?
    It will give some messages so that you can take steps to improve the performance of the query.
    Regards,
    Rama Murthy.

  • Oracle Data Pump (expdp) credentials via cron job

    I have Oracle 10.2 on a Red Hat Linux server. In addition to performing appropriate backups of my database, I also have a cron job I use to perform a full logical export using expdp every night to export user objects, in the event that a single object needs to be recovered. This is an extra safeguard for object recovery.
    Currently I do my export (expdp) via a cron job run as the oracle software owner, as a db user with db credentials specified. I would however like to change this script to essentially run as sys by doing something like "expdp / as sysdba...". However, it appears doing so actually requires the password to be supplied and running expdp as "expdp sys/password as sysdba".
    Does anyone have experience performing an expdp as sys without specifying the password... essentially being able to do "/ as sysdba"?
    Hope that makes sense.
    Thanks for any suggestions.

    I appreciate all comments. At the risk of being long-winded, I did not get into all the details. But to be clearer, I want to say:
    1. I am using a parfile so as to hide the password from the ps command.
    2. I also understand that doing it as sysdba is not recommended, but I thought if I did so I could eliminate the need for storing a clear-text password.
    3. This system is on a separate "air-gapped" (a.k.a. "sneaker-net") network, isolated from the outside world, and is better protected than if it were sitting somewhere near the internet.
    4. Historically we have not been permitted to use OPS$ accounts. This may be a legacy issue, and I will (re-)investigate it as an option.
    5. Just to be clear, I really do not want to do a full export as sysdba. In fact, currently I am doing it as a user with the EXP_FULL_DATABASE role. However, that requires the password to be stored in a file (parfile). The file has it in clear text, which is still not optimal because system admins could gain access to this password (yes, I know system admins could do larger damage, but we still need to protect the passwords).
    I am going to look at calling the API directly, and OPS$ if needed.
    Thanks for the suggestions.
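
    One option that avoids both "/ as sysdba" and a clear-text password in the parfile is a secure external password store (Oracle wallet, available in 10.2). A sketch only - the wallet path, TNS alias, and user are placeholders:

    # one-time setup as the oracle software owner (mkstore prompts for the wallet and credential passwords)
    mkstore -wrl /u01/app/oracle/wallet -create
    mkstore -wrl /u01/app/oracle/wallet -createCredential orcl_dp exp_user

    # sqlnet.ora on the client side needs:
    #   WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/u01/app/oracle/wallet)))
    #   SQLNET.WALLET_OVERRIDE=TRUE

    # then the cron job can connect without any password on the command line or in the parfile
    expdp /@orcl_dp parfile=full_export.par

    The exp_user would still need the EXP_FULL_DATABASE role, but the password only lives inside the wallet.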
