Data Pump & Package Names

Hi,
When I export and then import I find that package names have changed to uppercase, are wrapped in double quotes, and are prefixed with the schema name. (I'm doing schema-specific exports.)
so
pkg_name
becomes
SCHEMA_NAME."PKG_NAME"
I use TOAD to compare schemas and this is a bit of a pain. Any ideas how I can stop this from happening?
Kind Regards

Hi,
Yes, my mistake re the schema names, but I'm still having the problem with the package names being uppercased and encased in double quotes.
For example, running
select text from all_source where line = 1 and type = 'PACKAGE' and owner = 'WKF_OWNER'
on the source database gives
TEXT
PACKAGE pkg_wkf_document_input
PACKAGE pkg_wkf_document_output
PACKAGE pkg_wkf_aq
PACKAGE pkg_wkf_config
PACKAGE pkg_wkf_housekeep
on the target database after doing the datapump import I get
TEXT
PACKAGE "PKG_WKF_DOCUMENT_OUTPUT"
PACKAGE "PKG_WKF_DOCUMENT_INPUT"
PACKAGE "PKG_WKF_AQ"
PACKAGE "PKG_WKF_CONFIG"
PACKAGE "PKG_WKF_HOUSEKEEP"
So when I compare schemas it shows the packages as being different due to the name.
Looks like I will have to revert to the original exp utility.
Regards
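(Data Pump regenerates the DDL through DBMS_METADATA, which is why the header comes back as a quoted uppercase identifier.) If the aim is just to stop the TOAD comparison flagging these, one workaround is to normalize the first source line on both sides before diffing - a minimal sketch, where the normalization (strip the double quotes, compare in uppercase) is mine, not a Data Pump setting:

select owner, name,
       upper(replace(text, '"')) as first_line_normalized
  from all_source
 where line = 1
   and type = 'PACKAGE'
   and owner = 'WKF_OWNER';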

Similar Messages

  • Decimal data type package name, Oracle 9i

    Dear Friends,
    Can anyone tell me the package name of the decimal datatype in Oracle 9i, please?
    Thank you

    Not clear - check this:
    ANYDATA TYPE - a self-describing data instance type containing an instance of the type plus a description.
    ANYDATASET TYPE - contains a description of a given type plus a set of data instances of that type.
    ANYTYPE TYPE - contains a type description of any persistent SQL type, named or unnamed, including object types and collection types; or it can be used to construct new transient type descriptions.
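    For what it's worth, there is no package for a decimal datatype in Oracle: DECIMAL is an ANSI alias that Oracle maps onto NUMBER. The ANY* types listed above are the SYS.ANY* object types; a minimal sketch of using SYS.ANYDATA with a NUMBER (the variable names are mine):

    DECLARE
      v  SYS.ANYDATA;
      n  NUMBER;
      rc PLS_INTEGER;
    BEGIN
      v  := SYS.ANYDATA.ConvertNumber(42.5);  -- wrap a NUMBER in a self-describing instance
      rc := v.GetNumber(n);                   -- unwrap it; rc is DBMS_TYPES.SUCCESS on success
      DBMS_OUTPUT.PUT_LINE(v.GetTypeName() || ' = ' || n);
    END;
    /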

  • File name substitution with Data pump

    Hi,
    I'm experimenting with Oracle data pump export, 10.2 on Windows 2003 Server.
    On my current export scripts, I am able to create the dump file name dynamically.
    This name includes the database name, date, and time, such as the following: exp_testdb_01192005_1105.dmp.
    When I try to do the same thing with Data Pump, it doesn't work. Has anyone
    had success with this? Thanks.
    ed lewis

    Hi Ed
    This is an example for your issue:
    [oracle@dbservertest backups]$ expdp gsmtest/gsm directory=dpdir dumpfile=exp_testdb_01192005_1105.dmp tables=ban_banco
    Export: Release 10.2.0.1.0 - Production on Thursday, 19 January, 2006 12:23:55
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Starting "GSMTEST"."SYS_EXPORT_TABLE_01": gsmtest/******** directory=dpdir dumpfile=exp_testdb_01192005_1105.dmp tables=ban_banco
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 64 KB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/COMMENT
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "GSMTEST"."BAN_BANCO" 7.718 KB 9 rows
    Master table "GSMTEST"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
    Dump file set for GSMTEST.SYS_EXPORT_TABLE_01 is:
    /megadata/clona/exp_testdb_01192005_1105.dmp
    Job "GSMTEST"."SYS_EXPORT_TABLE_01" successfully completed at 12:24:18
    This works OK.
    Regards,
    Wilson
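    For the dynamic name itself, one option is to drive the job from PL/SQL, where the file name is just a string you build. A minimal sketch using the DBMS_DATAPUMP API - the directory object DPDIR and table BAN_BANCO follow Wilson's example, and the format mask is an assumption matching exp_testdb_01192005_1105.dmp:

    DECLARE
      h      NUMBER;
      v_file VARCHAR2(128);
    BEGIN
      -- build e.g. exp_testdb_01192006_1223.dmp from the DB name and SYSDATE
      v_file := 'exp_' || SYS_CONTEXT('USERENV', 'DB_NAME') || '_'
                || TO_CHAR(SYSDATE, 'MMDDYYYY_HH24MI') || '.dmp';
      h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'TABLE');
      DBMS_DATAPUMP.ADD_FILE(handle => h, filename => v_file, directory => 'DPDIR');
      DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'NAME_EXPR',
                                    value  => 'IN (''BAN_BANCO'')');
      DBMS_DATAPUMP.START_JOB(h);
      DBMS_DATAPUMP.DETACH(h);
    END;
    /

    From a Windows batch script, the same effect comes from composing the DUMPFILE value in a variable before calling expdp, exactly as the old exp scripts did.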

  • Interface Problems: DBA => Data Pump => Export Jobs (Job Name)

    Hello Folks,
    I need your help in troubleshooting an SQL Developer interface problem.
    DBA => Data Pump => Export Jobs (Job Name) => Data Pump Export => Job Scheduler (Step):
    -a- The Job Name and Job Description fields are not visible. Well, the fields are there, but each of them is just 1/2 character wide. I can't see/enter anything in the fields.
    Import Wizard:
    -b- The Job Name field under the wizard's first "Type" step looks exactly the same as in the Export case.
    -c- I can't see any rows under the "Choose Input Files" section (I see just ~1 mm of the first row and everything else is hidden).
    My env:
    -- Version 3.2.20.09, Build MAIN-09.87
    -- Windows 7 (64 bit)
    It could be related to the fact that I did change fonts in the Preferences. As I don't know what the default font is, I can't change it back to the default and test (let me know what the default is and I will test it).
    PS
    -- I have tried disabling all extensions except DBA Navigator (11.2.0.09.87). It didn't help.
    -- There are no messages in the console if I run SQL Dev under cmd: "sqldeveloper\bin\sqldeveloper.exe"
    Any help is appreciated,
    Yury

    Hi Yury,
    a - I see those 1/2-character-wide text boxes (in my case on Frequency) when the pop-up dialog is too small - do they go away when you make it bigger?
    b - On IMPORT the name starts with IMPORT - if it is the half-character issue, have you tried making the dialog bigger?
    c - I think it is size again, but my dialog at minimum size is already big enough.
    Have you tried a smaller font - or making the dialogs bigger (resizing from the corners)?
    In a 3.2.1 version where I have not changed the fonts, Tools->Preferences->Code Editor->Fonts appears to be:
    Font Name: DialogInput
    Font size: 12
    Turloch
    -SQLDeveloper Team

  • Data Pump .xlsx into a SQL Server Table and the whole 32-Bit, 64-Bit discussion

    First of all...I have a headache!
    I found LOTS of Google hits when trying to data pump a .xlsx file into a SQL Server table, and the whole discussion of the Microsoft ACE 64-bit driver versus the Microsoft Jet 32-bit driver.
    Specifically receiving this error...
    An OLE DB record is available.  Source: "Microsoft Office Access Database Engine"  Hresult: 0x80004005  Description: "External table is not in the expected format.".
    Error: 0xC020801C at Data Flow Task to Load Alere Coaching Enrolled, Excel Source [56]: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER.  The AcquireConnection method call to the connection manager "Excel Connection Manager"
    failed with error code 0xC0202009.
    Strangely enough, if I simply data pump ONE .xlsx file into a SQL Server table using my SSIS package, it seems to work fine. If instead I try to be proactive and allow for multiple .xlsx files by using a Foreach Loop Container and a variable
    @[User::FileName], it errors out... but not really, because it is indeed storing the rows in the SQL Server table. I did check all my Delay
    Why does this have to be sooooooo difficult???
    Can anyone help me out here in trying to set up an SSIS package in a rather constrictive environment to pump a .xlsx file into a SQL Server table? What in God's name am I doing wrong? Or is all this a misnomer? And if it's working, how do I disable the error
    so that it stops erroring out?

    Hi ITBobbyP,
    According to your description, you get the error message when you import data from a .xlsx file into a SQL Server database.
    The error can be caused by the following reasons:
    The Excel file is locked by another process. Please resave this file under another file name and see if the issue is fixed.
    The ACE (Access Database Engine) is not up to date, as Vaibhav mentioned. Please download and install the latest ACE from this link:
    https://www.microsoft.com/en-us/download/details.aspx?id=13255.
    The Office version and the server bitness do not match. To solve the problem, please refer to the following document:
    http://hrvoje.piasevoli.com/2010/09/01/importing-data-from-64-bit-excel-in-ssis/
    If you have any more questions, please feel free to ask.
    Thanks,
    Wendy Fu
    TechNet Community Support

  • How to exclude statistic using Data Pump API?

    How to exclude all statistics while exporting data using Oracle Data Pump API (DBMS_DATAPUMP package)?

    You would call the metadata filter API like this:
    dbms_datapump.metadata_filter(
      handle => your_handle_here,
      name   => 'EXCLUDE_PATH_LIST',
      value  => '''STATISTICS''');
    Hope this helps.
    Dean
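    A fuller sketch in context (the surrounding job setup is mine; it assumes a schema-mode export of HR to the standard DATA_PUMP_DIR directory object):

    DECLARE
      h NUMBER;
    BEGIN
      h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
      DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'no_stats.dmp',
                             directory => 'DATA_PUMP_DIR');
      DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR',
                                    value  => 'IN (''HR'')');
      -- EXCLUDE_PATH_LIST takes a comma-separated list of quoted object paths
      DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'EXCLUDE_PATH_LIST',
                                    value  => '''STATISTICS''');
      DBMS_DATAPUMP.START_JOB(h);
      DBMS_DATAPUMP.DETACH(h);
    END;
    /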

  • Data pump import problem

    I have exported the HR schema from one machine using the following code; it exported successfully, but when I tried to import the exported file on another machine, it gave an error. One thing more: how do I use the package to import the file, the same way we exported using the Data Pump package? Please help.
    C:\Documents and Settings\remote>impdp hr/hr DIRECTORY=DATA_PUMP_DIR DUMPFILE=YAHOO.DMP full=y
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORA-31626: job does not exist
    ORA-31637: cannot create job SYS_IMPORT_FULL_01 for user HR
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.KUPV$FT_INT", line 663
    ORA-39080: failed to create queues "KUPC$C_1_20090320121353" and "KUPC$S_1_20090
    320121353" for Data Pump job
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.KUPC$QUE_INT", line 1665
    ORA-01658: unable to create INITIAL extent for segment in tablespace SYSTEM
    DECLARE
      handle NUMBER;
    BEGIN
      handle := dbms_datapump.open(
        operation => 'EXPORT', job_mode => 'SCHEMA');
      dbms_datapump.add_file(
        handle    => handle,
        filename  => 'YAHOO.dmp',
        directory => 'DATA_PUMP_DIR',
        filetype  => 1);
      dbms_datapump.metadata_filter(
        handle => handle,
        name   => 'SCHEMA_EXPR',
        value  => 'IN(''HR'')');
      dbms_datapump.start_job(handle => handle);
      dbms_datapump.detach(handle => handle);
    END;
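    The ORA-01658 at the bottom of the error stack says the job's queue objects could not allocate an initial extent in the SYSTEM tablespace, so freeing or extending SYSTEM is the first step before any impdp will start. As for driving the import through the package, here is a minimal sketch of the import counterpart of the block above (my own, assuming YAHOO.dmp is already in DATA_PUMP_DIR on the target):

    DECLARE
      handle NUMBER;
    BEGIN
      handle := dbms_datapump.open(
        operation => 'IMPORT', job_mode => 'FULL');
      dbms_datapump.add_file(
        handle    => handle,
        filename  => 'YAHOO.dmp',
        directory => 'DATA_PUMP_DIR',
        filetype  => 1);  -- 1 = dump file, as in the export block
      dbms_datapump.start_job(handle => handle);
      dbms_datapump.detach(handle => handle);
    END;
    /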

    <Moderator edit - deleted contents of MOS Doc 752374.1 - pl do not post such contents - it is a violation of your Support agreement - locking this thread>

  • Data Pump Export with remap tables

    Hello, we have two databases on different servers, both running the same platform: Windows 2008, Oracle 11gR2. We have two stars (schemas) on both servers. Both schemas have partitioned tables, and every partition has its own tablespace. Star one is empty; its name is, say, "Msoon". I want to populate Msoon with ONLY the DATA of the other star, say M2. I have tried different commands, but every time I get an error. Here is my command, in which I'm extracting one table from the database:
    expdp M1396_1447/Aa123456 DIRECTORY=data_pump_dir DUMPFILE=quest_dg2.dmp tables=M1396_1447.M1396_1447_DG2 CONTENT=DATA_ONLY
    Now I want to import its data into the Msoon star table MSOON02_DG2, which is the same (just the name is changed), but I get the following error:
    impdp MSOON/Aa123456 DIRECTORY=data_pump_dir DUMPFILE=QUEST_DG22.dmp remap_table=M1396_1447.M1396_1447_DG2:MSOON02.MSOON02_DG2 CONTENT=DATA_ONLY
    Import: Release 11.2.0.1.0 - Production on Wed Jan 12 20:34:17 2011
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "MSOON"."SYS_IMPORT_FULL_12" successfully loaded/unloaded
    Starting "MSOON"."SYS_IMPORT_FULL_12":  MSOON/******** DIRECTORY=data_pump_dir DUMPFILE=QUEST_DG22.dmp logfile=MY_L.log remap_table=M1396_1447.M1396_1447_DG2:MSOON02.MSOON02_DG2 CONTENT= DATA_ONLY
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    ORA-39126: Worker unexpected fatal error in KUPW$WORKER.UPATE_TD_ROW_IMP [15]
    TABLE_DATA:"M1396_1447"."MSOON02.MSOON02_DG2":"DEF_PART_M1396_1447_DG2"
    ORA-31603: object "MSOON02.MSOON02_DG2" of type TABLE not found in schema "M1396_1447"
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
    ORA-06512: at "SYS.KUPW$WORKER", line 8171
    ----- PL/SQL Call Stack -----
      object      line  object
      handle    number  name
    00000003B2AD5BB0     18990  package body SYS.KUPW$WORKER
    00000003B2AD5BB0      8192  package body SYS.KUPW$WORKER
    00000003B2AD5BB0     18552  package body SYS.KUPW$WORKER
    00000003B2AD5BB0      4105  package body SYS.KUPW$WORKER
    00000003B2AD5BB0      8875  package body SYS.KUPW$WORKER
    00000003B2AD5BB0      1649  package body SYS.KUPW$WORKER
    00000003B29D51D0         2  anonymous block
    ORA-39126: Worker unexpected fatal error in KUPW$WORKER.UPATE_TD_ROW_IMP [15]
    TABLE_DATA:"M1396_1447"."MSOON02.MSOON02_DG2":"DEF_PART_M1396_1447_DG2"
    ORA-31603: object "MSOON02.MSOON02_DG2" of type TABLE not found in schema "M1396_1447"
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
    ORA-06512: at "SYS.KUPW$WORKER", line 8171
    ----- PL/SQL Call Stack -----
      object      line  object
      handle    number  name
    00000003B2AD5BB0     18990  package body SYS.KUPW$WORKER
    00000003B2AD5BB0      8192  package body SYS.KUPW$WORKER
    00000003B2AD5BB0     18552  package body SYS.KUPW$WORKER
    00000003B2AD5BB0      4105  package body SYS.KUPW$WORKER
    00000003B2AD5BB0      8875  package body SYS.KUPW$WORKER
    00000003B2AD5BB0      1649  package body SYS.KUPW$WORKER
    00000003B29D51D0         2  anonymous block
    Job "MSOON"."SYS_IMPORT_FULL_12" stopped due to fatal error at 20:34:19
    Edited by: Oracle Studnet on Jan 12, 2011 7:36 AM
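    A note on the ORA-31603 above: the import treated everything after the colon as a single table name ("MSOON02.MSOON02_DG2"). REMAP_TABLE expects a bare new table name; the schema change belongs in REMAP_SCHEMA, the form the second command below already uses. A hedged corrected version of the import would be something like:

    impdp MSOON02/Aa123456 DIRECTORY=data_pump_dir DUMPFILE=QUEST_DG22.dmp REMAP_SCHEMA=M1396_1447:MSOON02 REMAP_TABLE=M1396_1447_DG2:MSOON02_DG2 CONTENT=DATA_ONLY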

    I have some problems with Data Pump parallel processing. Here is my statement:
    expdp M1396_1447/Aa123456 DIRECTORY=data_pump_dir DUMPFILE=SRV03Msoon02_jt%U.dmp logfile=jt.log tables=M1396_1447.M1396_1447_jt CONTENT=DATA_ONLY parallel=4
    On server two I want to import it:
    impdp MSOON02/Aa123456 DIRECTORY=data_pump_dir DUMPFILE=SRV03Msoon02_jt%U.dmp logfile=jtJT.log REMAP_SCHEMA=M1396_1447:MSOON02 remap_table=M1396_1447_JT:MSOON02_JT CONTENT=DATA_ONLY parallel=4
    I get the following error if I use parallel; if I omit parallel, it takes too long to import the data - it took 2.5 hrs to insert 2.35 GB of data into one partition of the table.
    One more thing: if I use parallel, all partitions of the table containing no records import successfully, but all those partitions that have data sized in GBs fail with the following error. Please help me out of this problem. Here is the log file of the Data Pump run:
    Import: Release 11.2.0.1.0 - Production on Thu Jan 13 14:05:04 2011
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "MSOON02"."SYS_IMPORT_FULL_05" successfully loaded/unloaded
    Starting "MSOON02"."SYS_IMPORT_FULL_05":  msoon02/******** DIRECTORY=data_pump_dir DUMPFILE=SRV03 Msoon02_jt%U.dmp logfile= REMAP_SCHEMA=M1396_1447:MSOON02 remap_table=M1396_1447_JT:MSOON02_JT CONTENT= DATA_ONLY parallel=4
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    ORA-31693: Table data object "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20091201" failed to load/unload and is being skipped due to error:
    ORA-29913: error in executing ODCIEXTTABLEOPEN callout
    ORA-31693: Table data object "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20100101" failed to load/unload and is being skipped due to error:
    ORA-29913: error in executing ODCIEXTTABLEOPEN callout
    ORA-31693: Table data object "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20091101" failed to load/unload and is being skipped due to error:
    ORA-29913: error in executing ODCIEXTTABLEOPEN callout
    . . imported "MSOON02"."MSOON02_JT":"DEF_PART_M1396_1447_JT"      0 KB       0 rows
    . . imported "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20100201"      0 KB       0 rows
    . . imported "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20100301"      0 KB       0 rows
    . . imported "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20100401"      0 KB       0 rows
    . . imported "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20100501"      0 KB       0 rows
    . . imported "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20100601"      0 KB       0 rows
    . . imported "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20100701"      0 KB       0 rows
    . . imported "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20100801"      0 KB       0 rows
    . . imported "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20100901"      0 KB       0 rows
    . . imported "MSOON02"."MSOON02_JT":"PT_M1396_1447_JT_20101001"      0 KB       0 rows
    Job "MSOON02"."SYS_IMPORT_FULL_05" completed with 3 error(s) at 14:21:32Edited by: Oracle Studnet on Jan 13, 2011 1:20 AM

  • Data pump for HR Jobs and HR positions for v11i

    I am trying to use Data Pump (v11i/11.5.2)
    to load the data for HR Jobs and HR Positions. I am loading the HR_PUMP_BATCH_HEADER and HR_PUMP_BATCH_LINE tables with records having the required valid values, but I am getting an error. I have successfully loaded the employee data using the same method before.
    Has anybody done this? Please help.

    I got the below from Metalink.
    Please see whether it is helpful for you, since I have little knowledge about HRMS.
    The HR_PUMP_BATCH_LINE_USER_KEYS table must be seeded with a value in order for the package that follows to work. In some cases, the user must provide this value.
    I ran into this problem when trying to run the create_job_requirement API through the Data Pump. Within my PL/SQL block, this is how I passed the value to my variable:
    pv_id_flex_num_user_key := sel.sun_job_id||sel.name||sel.job_requirement_id||sel.id_flex_num||':ID FLEX NUM USER KEY';
    I then performed the following after calling the insert_batch_line procedure and passing all the parameter values:
    SELECT hpbl.batch_line_id,
    coj_hr_ci.devl_seq_s.nextval,
    coj_hr_ci.devl_seq_s.nextval
    INTO v_batch_line_id,
    v_sequence1,
    v_sequence2
    FROM hr_pump_batch_lines hpbl
    WHERE hpbl.pval058 = pv_id_flex_num_user_key;
    INSERT INTO hr_pump_batch_line_user_keys(batch_line_id,
    unique_key_id,
    user_key_id,
    user_key_value)
    VALUES(v_batch_line_id,
    50141,
    v_sequence2,
    pv_id_flex_num_user_key);

  • Help needed with Export Data Pump using API

    Hi All,
    I am trying to do a Data Pump export using the API.
    While the export as well as the import works fine from the command line, it's failing with the API.
    This is the command-line program:
    expdp pxperf/dba@APPN QUERY=dev_pool_data:\"WHERE TIME_NUM > 1204884480100\" DUMPFILE=EXP_DEV.dmp tables=PXPERF.dev_pool_data
    Could you help me with how I should achieve the same as the above in the Oracle Data Pump API?
    DECLARE
      h1 NUMBER;
    BEGIN
      h1 := dbms_datapump.open('EXPORT','TABLE',NULL,'DP_EXAMPLE10','LATEST');
      dbms_datapump.add_file(h1,'example3.dmp','DATA_PUMP_TEST',NULL,1);
      dbms_datapump.add_file(h1,'example3_dump.log','DATA_PUMP_TEST',NULL,3);
      dbms_datapump.metadata_filter(h1,'NAME_LIST','(''DEV_POOL_DATA'')');
    END;
    Also, in the API I want to know how to export and import multiple tables (selective tables only) using one single criterion like "WHERE TIME_NUM > 1204884480100".

    Yes, I have read the Oracle doc.
    I was able to proceed as below, but it gives an error.
    ============================================================
    SQL> SET SERVEROUTPUT ON SIZE 1000000
    SQL> DECLARE
    2 l_dp_handle NUMBER;
    3 l_last_job_state VARCHAR2(30) := 'UNDEFINED';
    4 l_job_state VARCHAR2(30) := 'UNDEFINED';
    5 l_sts KU$_STATUS;
    6 BEGIN
    7 l_dp_handle := DBMS_DATAPUMP.open(
    8 operation => 'EXPORT',
    9 job_mode => 'TABLE',
    10 remote_link => NULL,
    11 job_name => '1835_XP_EXPORT',
    12 version => 'LATEST');
    13
    14 DBMS_DATAPUMP.add_file(
    15 handle => l_dp_handle,
    16 filename => 'x1835_XP_EXPORT.dmp',
    17 directory => 'DATA_PUMP_DIR');
    18
    19 DBMS_DATAPUMP.add_file(
    20 handle => l_dp_handle,
    21 filename => 'x1835_XP_EXPORT.log',
    22 directory => 'DATA_PUMP_DIR',
    23 filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
    24
    25 DBMS_DATAPUMP.data_filter(
    26 handle => l_dp_handle,
    27 name => 'SUBQUERY',
    28 value => '(where "XP_TIME_NUM > 1204884480100")',
    29 table_name => 'ldev_perf_data',
    30 schema_name => 'XPSLPERF'
    31 );
    32
    33 DBMS_DATAPUMP.start_job(l_dp_handle);
    34
    35 DBMS_DATAPUMP.detach(l_dp_handle);
    36 END;
    37 /
    DECLARE
    ERROR at line 1:
    ORA-39001: invalid argument value
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
    ORA-06512: at "SYS.DBMS_DATAPUMP", line 3043
    ORA-06512: at "SYS.DBMS_DATAPUMP", line 3688
    ORA-06512: at line 25
    ============================================================
    I have a table called LDEV_PERF_DATA and it's in schema XPSLPERF.
    value => '(where "XP_TIME_NUM > 1204884480100")' above is the condition on which I want to filter the data.
    However, the below snippet works fine.
    ============================================================
    SET SERVEROUTPUT ON SIZE 1000000
    DECLARE
    l_dp_handle NUMBER;
    l_last_job_state VARCHAR2(30) := 'UNDEFINED';
    l_job_state VARCHAR2(30) := 'UNDEFINED';
    l_sts KU$_STATUS;
    BEGIN
    l_dp_handle := DBMS_DATAPUMP.open(
    operation => 'EXPORT',
    job_mode => 'SCHEMA',
    remote_link => NULL,
    job_name => 'ldev_may20',
    version => 'LATEST');
    DBMS_DATAPUMP.add_file(
    handle => l_dp_handle,
    filename => 'ldev_may20.dmp',
    directory => 'DATA_PUMP_DIR');
    DBMS_DATAPUMP.add_file(
    handle => l_dp_handle,
    filename => 'ldev_may20.log',
    directory => 'DATA_PUMP_DIR',
    filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
    DBMS_DATAPUMP.start_job(l_dp_handle);
    DBMS_DATAPUMP.detach(l_dp_handle);
    END;
    ============================================================
    I don't want to export all contents as above, but want to export data based on some conditions and only for selected tables.
    Any help is highly appreciated.
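    Two things stand out in the failing block as likely causes of the ORA-39001: table_name is passed in lowercase (object names are stored in uppercase unless created with quotes), and the double quotes inside the value turn the whole predicate into a quoted identifier. A hedged corrected version of just the filter call:

    DBMS_DATAPUMP.data_filter(
      handle      => l_dp_handle,
      name        => 'SUBQUERY',
      value       => 'WHERE XP_TIME_NUM > 1204884480100',
      table_name  => 'LDEV_PERF_DATA',
      schema_name => 'XPSLPERF');

    On the second question: if table_name and schema_name are omitted, a data filter should apply to all tables in the job, so one SUBQUERY filter can cover several selected tables that share the filtering column.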

  • Data Pump - expdp and slow performance on specific tables

    Hi there
    I have a Data Pump export of a schema. Most of the 700 tables are exported very quickly (direct path), but a couple of them seem to be extremely slow.
    I have checked:
    - no LOBs
    - no LONG/RAW
    - no VPD
    - no partitions
    - no bitmapped indexes
    - just DATE, NUMBER, VARCHAR2s
    I'm running with trace 400300,
    but I'm having trouble reading the output from it. It seems that some of the slow-performing tables are running with method 4??? Can anyone find an explanation for the method in the trace:
    1 > direct path (I think)
    2 > external table (I think)
    4 > ?
    others?
    I have done some stats using v$filestat/v$session_wait (history) - and it seems that we always wait for db file sequential read - and are doing lots and lots of SINGLEBLKRDS. No undo is read.
    I have a table of 2.5 GB -> 3 minutes,
    and then this (in my eyes) similar table of 2.4 GB -> 1.5 hrs.
    There are 367,000 blocks (8 K) and avg rowlen = 71.
    I'm on Oracle 11.2 on a Linux box with plenty of RAM and CPU power.
    Trace file /opt/oracle112/diag/rdbms/prod/prod/trace/prod_dw00_24268.trc
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    ORACLE_HOME = /opt/oracle112/product/11.2.0.2/dbhome_1
    System name:  Linux
    Node name:  tiaprod.thi.somethingamt.dk
    Release:  2.6.18-194.el5
    Version:  #1 SMP Mon Mar 29 22:10:29 EDT 2010
    Machine:  x86_64
    VM name:  Xen Version: 3.4 (HVM)
    Instance name: prod
    Redo thread mounted by this instance: 1
    Oracle process number: 222
    Unix process pid: 24268, image: oracle@tiaprod.thi.somethingamt.dk (DW00)
    *** 2011-09-20 09:39:39.671
    *** SESSION ID:(401.8395) 2011-09-20 09:39:39.671
    *** CLIENT ID:() 2011-09-20 09:39:39.671
    *** SERVICE NAME:(SYS$BACKGROUND) 2011-09-20 09:39:39.671
    *** MODULE NAME:() 2011-09-20 09:39:39.671
    *** ACTION NAME:() 2011-09-20 09:39:39.671
    KUPP:09:39:39.670: Current trace/debug flags: 00400300 = 4195072
    *** MODULE NAME:(Data Pump Worker) 2011-09-20 09:39:39.672
    *** ACTION NAME:(SYS_EXPORT_SCHEMA_09) 2011-09-20 09:39:39.672
    KUPW:09:39:39.672: 0: ALTER SESSION ENABLE PARALLEL DML called.
    KUPW:09:39:39.672: 0: ALTER SESSION ENABLE PARALLEL DML returned.
    KUPC:09:39:39.693: Setting remote flag for this process to FALSE
    prvtaqis - Enter
    prvtaqis subtab_name upd
    prvtaqis sys table upd
    KUPW:09:39:39.819: 0: KUPP$PROC.WHATS_MY_ID called.
    KUPW:09:39:39.819: 1: KUPP$PROC.WHATS_MY_ID returned.
    KUPW:09:39:39.820: 1: worker max message number: 1000
    KUPW:09:39:39.822: 1: Full cluster access allowed
    KUPW:09:39:39.823: 1: Original job start time: 11-SEP-20 09:39:38 AM
    KUPW:09:39:39.862: 1: KUPP$PROC.WHATS_MY_NAME called.
    KUPW:09:39:39.862: 1: KUPP$PROC.WHATS_MY_NAME returned. Process name: DW00
    KUPW:09:39:39.862: 1: KUPV$FT_INT.GET_INSTANCE_ID called.
    KUPW:09:39:39.866: 1: KUPV$FT_INT.GET_INSTANCE_ID returned. Instance name: prod
    KUPW:09:39:39.870: 1: ALTER SESSION ENABLE RESUMABLE called.
    KUPW:09:39:39.870: 1: ALTER SESSION ENABLE RESUMABLE returned.
    KUPW:09:39:39.871: 1: KUPF$FILE.INIT called.
    KUPW:09:39:39.996: 1: KUPF$FILE.INIT returned.
    KUPW:09:39:39.998: 1: KUPF$FILE.GET_MAX_CSWIDTH called.
    KUPW:09:39:39.998: 1: KUPF$FILE.GET_MAX_CSWIDTH returned.
    KUPW:09:39:39.998: 1: Max character width: 1
    KUPW:09:39:39.998: 1: Max clob fetch: 32757
    KUPW:09:39:39.998: 1: Max varchar2a size: 32757
    KUPW:09:39:39.998: 1: Max varchar2 size: 7990
    KUPW:09:39:39.998: 1: In procedure GET_PARAMETERS
    KUPW:09:39:40.000: 1: In procedure GET_METADATA_FILTERS
    KUPW:09:39:40.001: 1: In procedure GET_METADATA_TRANSFORMS
    KUPW:09:39:40.002: 1: In procedure GET_DATA_FILTERS
    KUPW:09:39:40.004: 1: In procedure GET_DATA_REMAPS
    KUPW:09:39:40.005: 1: In procedure PRINT_MT_PARAMS
    KUPW:09:39:40.005: 1: Master table             : "SYSTEM"."SYS_EXPORT_SCHEMA_09"
    KUPW:09:39:40.005: 1: Metadata job mode        : SCHEMA_EXPORT
    KUPW:09:39:40.005: 1: Debug enable             : TRUE
    KUPW:09:39:40.005: 1: Profile enable           : FALSE
    KUPW:09:39:40.005: 1: Transportable enable     : FALSE
    KUPW:09:39:40.005: 1: Metrics enable           : FALSE
    KUPW:09:39:40.005: 1: db version               : 11.2.0.2.0
    KUPW:09:39:40.005: 1: job version              : 11.2.0.0.0
    KUPW:09:39:40.005: 1: service name             :
    KUPW:09:39:40.005: 1: Current Edition          : ORA$BASE
    KUPW:09:39:40.005: 1: Job Edition              :
    KUPW:09:39:40.005: 1: Abort Step               : 0
    KUPW:09:39:40.005: 1: Access Method            : AUTOMATIC
    KUPW:09:39:40.005: 1: Data Options             : 0
    KUPW:09:39:40.006: 1: Dumper directory         :
    KUPW:09:39:40.006: 1: Master only              : FALSE
    KUPW:09:39:40.006: 1: Data Only                : FALSE
    KUPW:09:39:40.006: 1: Metadata Only            : FALSE
    KUPW:09:39:40.006: 1: Estimate                 : BLOCKS
    KUPW:09:39:40.006: 1: Data error logging table :
    KUPW:09:39:40.006: 1: Remote Link              :
    KUPW:09:39:40.006: 1: Dumpfile present         : TRUE
    KUPW:09:39:40.006: 1: Table Exists Action      :
    KUPW:09:39:40.006: 1: Partition Options        : NONE
    KUPW:09:39:40.006: 1: Tablespace Datafile Count: 0
    KUPW:09:39:40.006: 1: Metadata Filter    Index : 1    Count : 10
    KUPW:09:39:40.006: 1:         1           Name - INCLUDE_USER
    KUPW:09:39:40.006: 1:                    Value - TRUE
    KUPW:09:39:40.006: 1:              Object Name - SCHEMA_EXPORT
    KUPW:09:39:40.006: 1:         2           Name - SCHEMA_EXPR
    KUPW:09:39:40.006: 1:                    Value -  IN ('TIA')
    KUPW:09:39:40.006: 1:         3           Name - NAME_EXPR
    KUPW:09:39:40.006: 1:                    Value -  ='ACC_PAYMENT_SPECIFICATION'
    KUPW:09:39:40.006: 1:                   Object - TABLE
    KUPW:09:39:40.006: 1:         4           Name - INCLUDE_PATH_EXPR
    KUPW:09:39:40.006: 1:                    Value -  IN ('TABLE')
    KUPW:09:39:40.006: 1:         5           Name - ORDERED
    KUPW:09:39:40.006: 1:                    Value - FALSE
    KUPW:09:39:40.006: 1:                   Object - TABLE_DATA
    KUPW:09:39:40.006: 1:         6           Name - NO_XML
    KUPW:09:39:40.006: 1:                    Value - TRUE
    KUPW:09:39:40.006: 1:                   Object - XMLSCHEMA/EXP_XMLSCHEMA
    KUPW:09:39:40.006: 1:         7           Name - XML_OUTOFLINE
    KUPW:09:39:40.006: 1:                    Value - FALSE
    KUPW:09:39:40.006: 1:                   Object - TABLE/TABLE_DATA
    KUPW:09:39:40.006: 1:         8           Name - XDB_GENERATED
    KUPW:09:39:40.006: 1:                    Value - FALSE
    KUPW:09:39:40.006: 1:                   Object - TABLE/TRIGGER
    KUPW:09:39:40.007: 1:         9           Name - XDB_GENERATED
    KUPW:09:39:40.007: 1:                    Value - FALSE
    KUPW:09:39:40.007: 1:                   Object - TABLE/RLS_POLICY
    KUPW:09:39:40.007: 1:         10           Name - PRIVILEGED_USER
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1: MD remap schema    Index : 4    Count : 0
    KUPW:09:39:40.007: 1: MD remap other     Index : 5    Count : 0
    KUPW:09:39:40.007: 1: MD Transform ddl   Index : 2    Count : 11
    KUPW:09:39:40.007: 1:         1           Name - DBA
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:                   Object - JOB
    KUPW:09:39:40.007: 1:         2           Name - EXPORT
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:         3           Name - PRETTY
    KUPW:09:39:40.007: 1:                    Value - FALSE
    KUPW:09:39:40.007: 1:         4           Name - SQLTERMINATOR
    KUPW:09:39:40.007: 1:                    Value - FALSE
    KUPW:09:39:40.007: 1:         5           Name - CONSTRAINTS
    KUPW:09:39:40.007: 1:                    Value - FALSE
    KUPW:09:39:40.007: 1:                   Object - TABLE
    KUPW:09:39:40.007: 1:         6           Name - REF_CONSTRAINTS
    KUPW:09:39:40.007: 1:                    Value - FALSE
    KUPW:09:39:40.007: 1:                   Object - TABLE
    KUPW:09:39:40.007: 1:         7           Name - OID
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:                   Object - TABLE
    KUPW:09:39:40.007: 1:         8           Name - RESET_PARALLEL
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:                   Object - INDEX
    KUPW:09:39:40.007: 1:         9           Name - OID
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:                   Object - TYPE
    KUPW:09:39:40.007: 1:         10           Name - OID
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:                   Object - INC_TYPE
    KUPW:09:39:40.007: 1:         11           Name - REVOKE_FROM
    KUPW:09:39:40.008: 1:                    Value - SYSTEM
    KUPW:09:39:40.008: 1:                   Object - ROLE
    KUPW:09:39:40.008: 1: Data Filter        Index : 6    Count : 0
    KUPW:09:39:40.008: 1: Data Remap         Index : 7    Count : 0
    KUPW:09:39:40.008: 1: MD remap name      Index : 8    Count : 0
    KUPW:09:39:40.008: 1: In procedure DISPATCH_WORK_ITEMS
    KUPW:09:39:40.009: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:39:40.009: 1: KUPC$QUEUE.TRANSCEIVE called.
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:39:40.036: 1: KUPC$QUEUE.TRANSCEIVE returned. Received 2011
    KUPW:09:39:40.036: 1: DBMS_LOB.CREATETEMPORARY called.
    KUPW:09:39:40.037: 1: DBMS_LOB.CREATETEMPORARY returned.
    KUPW:09:39:40.038: 1: Flags: 18
    KUPW:09:39:40.038: 1: Start sequence number:
    KUPW:09:39:40.038: 1: End sequence number:
    KUPW:09:39:40.038: 1: Metadata Parallel: 1
    KUPW:09:39:40.038: 1: Primary worker id: 1
    KUPW:09:39:40.041: 1: In procedure GET_TABLE_DATA_OBJECTS
    KUPW:09:39:40.041: 1: In procedure CREATE_MSG
    KUPW:09:39:40.041: 1: KUPV$FT.MESSAGE_TEXT called.
    KUPW:09:39:40.041: 1: KUPV$FT.MESSAGE_TEXT returned.
    KUPW:09:39:40.041: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:39:40.041: 1: KUPC$QUEUE_INT.SEND called.
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:39:40.044: 1: KUPC$QUEUE_INT.SEND returned.
    KUPW:09:39:40.044: 1: Estimate in progress using BLOCKS method...
    KUPW:09:39:40.044: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
    KUPW:09:39:40.044: 1: Old Seqno: 0 New Path: SCHEMA_EXPORT/TABLE/TABLE_DATA PO Num: -5 New Seqno: 62
    KUPW:09:39:40.046: 1: Created type completion for duplicate 62
    KUPW:09:39:40.046: 1: In procedure CREATE_MSG
    KUPW:09:39:40.046: 1: KUPV$FT.MESSAGE_TEXT called.
    KUPW:09:39:40.046: 1: KUPV$FT.MESSAGE_TEXT returned.
    KUPW:09:39:40.046: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:39:40.046: 1: KUPC$QUEUE_INT.SEND called.
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:39:40.047: 1: KUPC$QUEUE_INT.SEND returned.
    KUPW:09:39:40.047: 1: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    KUPW:09:39:40.048: 1: In procedure CONFIGURE_METADATA_UNLOAD
    KUPW:09:39:40.048: 1: Phase: ESTIMATE_PHASE Filter Name:  Filter Value:
    KUPW:09:39:40.048: 1: DBMS_METADATA.OPEN11.2.0.0.0 called.
    KUPW:09:39:40.182: 1: DBMS_METADATA.OPEN11.2.0.0.0 returned. Source handle: 100001
    KUPW:09:39:40.182: 1: DBMS_METADATA.SET_FILTER called. metadata_phase: ESTIMATE_PHASE
    KUPW:09:39:40.182: 1: DBMS_METADATA.SET_FILTER returned. In function GET_NOEXP_TABLE
    KUPW:09:39:40.194: 1: DBMS_METADATA.SET_PARSE_ITEM called.
    *** 2011-09-20 09:39:40.325
    KUPW:09:39:40.325: 1: DBMS_METADATA.SET_PARSE_ITEM returned.
    KUPW:09:39:40.325: 1: DBMS_METADATA.SET_COUNT called.
    KUPW:09:39:40.328: 1: DBMS_METADATA.SET_COUNT returned.
    KUPW:09:39:40.328: 1: DBMS_METADATA.FETCH_XML_CLOB called.
    *** 2011-09-20 09:39:42.603
    KUPW:09:39:42.603: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
    KUPW:09:39:42.603: 1: In procedure CREATE_TABLE_DATA_OBJECT_ROWS
    KUPW:09:39:42.603: 1: In function GATHER_PARSE_ITEMS
    KUPW:09:39:42.603: 1: In function CHECK_FOR_REMAP_NETWORK
    KUPW:09:39:42.603: 1: Nothing to remap
    KUPW:09:39:42.603: 1: In procedure BUILD_OBJECT_STRINGS
    KUPW:09:39:42.604: 1: In procedure LOCATE_DATA_FILTERS
    KUPW:09:39:42.604: 1: In function NEXT_PO_NUMBER
    KUPW:09:39:42.620: 1: In procedure DETERMINE_METHOD_PARALLEL
    KUPW:09:39:42.620: 1: flags mask: 0
    KUPW:09:39:42.620: 1: dapi_possible_meth: 1
    KUPW:09:39:42.620: 1: data_size: 3019898880
    KUPW:09:39:42.620: 1: et_parallel: TRUE
    KUPW:09:39:42.620: 1: object: TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION"                               <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    KUPW:09:39:42.648: 1: l_dapi_bit_mask: 7
    KUPW:09:39:42.648: 1: l_client_bit_mask: 7
    KUPW:09:39:42.648: 1: TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION" either, parallel: 12               <<<<< Here is says either (I thought that was method ?)  <<<<<<<<<<<<<<<<
    KUPW:09:39:42.648: 1: FORALL BULK INSERT called.
    KUPW:09:39:42.658: 1: FORALL BULK INSERT returned.
    KUPW:09:39:42.660: 1: DBMS_LOB.TRIM called. v_md_xml_clob
    KUPW:09:39:42.660: 1: DBMS_LOB.TRIM returned.
    KUPW:09:39:42.660: 1: DBMS_METADATA.FETCH_XML_CLOB called.
    KUPW:09:39:42.678: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
    KUPW:09:39:42.678: 1: DBMS_LOB.CREATETEMPORARY called.
    KUPW:09:39:42.678: 1: DBMS_LOB.CREATETEMPORARY returned.
    KUPW:09:39:42.678: 1: In procedure UPDATE_TD_ROW_EXP with seqno: 62
    KUPW:09:39:42.680: 1: 1 rows fetched
    KUPW:09:39:42.680: 1: In function NEXT_PO_NUMBER
    KUPW:09:39:42.680: 1: Next table data array entry: 1 Parallel: 12 Size: 3019898880 Method: 4Creation_level: 0               <<<<<<<<<<<<<<<<  HERE IT SAYS METHOD = 4  and PARALLEL=12 (I'm not using the parallel parameter ???)  <<<<<<<<<<<<<<<<<<
    KUPW:09:39:42.681: 1: In procedure UPDATE_TD_BASE_PO_INFO
    KUPW:09:39:42.683: 1: Updated 1 td objects with bpo between 1 and 1
    KUPW:09:39:42.684: 1: Send table_data_varray called.  Count: 1
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:39:42.695: 1: Send table_data_varray returned.
    KUPW:09:39:42.695: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:39:42.695: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
    KUPW:09:39:42.695: 1: Old Seqno: 62 New Path:  PO Num: -5 New Seqno: 0
    KUPW:09:39:42.695: 1: Object count: 1
    KUPW:09:39:42.697: 1: 1 completed for 62
    KUPW:09:39:42.697: 1: DBMS_METADATA.CLOSE called. Handle: 100001
    KUPW:09:39:42.697: 1: DBMS_METADATA.CLOSE returned.
    KUPW:09:39:42.697: 1: In procedure CREATE_MSG
    KUPW:09:39:42.697: 1: KUPV$FT.MESSAGE_TEXT called.
    KUPW:09:39:42.698: 1: KUPV$FT.MESSAGE_TEXT returned.
    KUPW:09:39:42.698: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:39:42.698: 1: KUPC$QUEUE_INT.SEND called.
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:39:42.699: 1: KUPC$QUEUE_INT.SEND returned.
    KUPW:09:39:42.699: 1: Total estimation using BLOCKS method: 2.812 GB
    KUPW:09:39:42.699: 1: In procedure CONFIGURE_METADATA_UNLOAD
    KUPW:09:39:42.699: 1: Phase: WORK_PHASE Filter Name: BEGIN_WITH Filter Value:
    KUPW:09:39:42.699: 1: DBMS_METADATA.OPEN11.2.0.0.0 called.
    KUPW:09:39:42.837: 1: DBMS_METADATA.OPEN11.2.0.0.0 returned. Source handle: 200001
    KUPW:09:39:42.837: 1: DBMS_METADATA.SET_FILTER called. metadata_phase: WORK_PHASE
    KUPW:09:39:42.837: 1: DBMS_METADATA.SET_FILTER returned. In function GET_NOEXP_TABLE
    KUPW:09:39:42.847: 1: DBMS_METADATA.SET_PARSE_ITEM called.
    KUPW:09:39:42.964: 1: DBMS_METADATA.SET_PARSE_ITEM returned.
    KUPW:09:39:42.964: 1: DBMS_METADATA.SET_COUNT called.
    KUPW:09:39:42.967: 1: DBMS_METADATA.SET_COUNT returned.
    KUPW:09:39:42.967: 1: KUPF$FILE.OPEN_CONTEXT called.
    KUPW:09:39:42.967: 1: KUPF$FILE.OPEN_CONTEXT returned.
    KUPW:09:39:42.968: 1: DBMS_METADATA.FETCH_XML_CLOB called. Handle: 200001
    *** 2011-09-20 09:40:01.798
    KUPW:09:40:01.798: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
    KUPW:09:40:01.798: 1: Object seqno fetched:
    KUPW:09:40:01.799: 1: Object path fetched:
    KUPW:09:40:01.799: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:40:01.799: 1: In procedure COMPLETE_EXP_OBJECT
    KUPW:09:40:01.799: 1: KUPF$FILE.FLUSH_LOB called.
    KUPW:09:40:01.815: 1: KUPF$FILE.FLUSH_LOB returned.
    KUPW:09:40:01.815: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
    KUPW:09:40:01.815: 1: Old Seqno: 226 New Path:  PO Num: -5 New Seqno: 0
    KUPW:09:40:01.815: 1: Object count: 1
    KUPW:09:40:01.815: 1: 1 completed for 226
    KUPW:09:40:01.815: 1: DBMS_METADATA.CLOSE called.  Handle: 200001
    KUPW:09:40:01.816: 1: DBMS_METADATA.CLOSE returned.
    KUPW:09:40:01.816: 1: KUPF$FILE.CLOSE_CONTEXT called.
    KUPW:09:40:01.820: 1: KUPF$FILE.CLOSE_CONTEXT returned.
    KUPW:09:40:01.821: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:40:01.821: 1: KUPC$QUEUE.TRANSCEIVE called.
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:40:01.827: 1: KUPC$QUEUE.TRANSCEIVE returned. Received 2012
    KUPW:09:40:01.827: 1: DBMS_LOB.CREATETEMPORARY called.
    KUPW:09:40:01.828: 1: DBMS_LOB.CREATETEMPORARY returned.
    KUPW:09:40:01.828: 1: Process order range: 1..1
    KUPW:09:40:01.828: 1: Method: 1
    KUPW:09:40:01.828: 1: Parallel: 1
    KUPW:09:40:01.828: 1: Creation level: 0
    KUPW:09:40:01.830: 1: BULK COLLECT called.
    KUPW:09:40:01.830: 1: BULK COLLECT returned.
    KUPW:09:40:01.830: 1: In procedure BUILD_OBJECT_STRINGS
    KUPW:09:40:01.836: 1: In procedure MOVE_DATA UNLOADing process_order 1 TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION"            <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    KUPW:09:40:01.839: 1: KUPD$DATA.OPEN called.
    KUPW:09:40:01.840: 1: KUPD$DATA.OPEN returned.
    KUPW:09:40:01.840: 1: KUPD$DATA.SET_PARAMETER - common called.
    KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - common returned.
    KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - flags called.
    KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - flags returned.
    KUPW:09:40:01.843: 1: KUPD$DATA.START_JOB called.
    KUPW:09:40:01.918: 1: KUPD$DATA.START_JOB returned. In procedure GET_JOB_VERSION
    This is how I called expdp:
    expdp system/xxxxxxxxx schemas=tia directory=expdp INCLUDE=TABLE:\" =\'ACC_PAYMENT_SPECIFICATION\'\" REUSE_DUMPFILES=Y LOGFILE=expdp:$LOGFILE TRACE=400300

    Hi there ...
    I have read the note - that's where I found the link to the trace note 286496.1 - on how to set up a trace.
    But I still need an explanation for the methods (1, 2, 4, etc.)
    regards
    Mette
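    One hedged suggestion while chasing this: later expdp releases expose an ACCESS_METHOD parameter (AUTOMATIC, DIRECT_PATH or EXTERNAL_TABLE) that pins the choice instead of leaving it to the engine; on 11.2 it existed but was not documented, so treat the following as an experiment for a test system rather than a documented interface:

    expdp system/xxxxxxxxx schemas=tia directory=expdp INCLUDE=TABLE:\" =\'ACC_PAYMENT_SPECIFICATION\'\" ACCESS_METHOD=DIRECT_PATH

    If forcing DIRECT_PATH makes the slow table fast, that at least confirms the access-method choice (the "Method: 4" line in the trace) is the variable that matters.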

  • How can I use the data pump export from external client?

    I am trying to export a bunch of tables from a DB, but I can't figure out how to do it.
    I don't have access to a shell terminal on the server itself; I can only log in using TOAD.
    I am trying to use TOAD's Data Pump Export utility, but I keep getting this error:
    ORA-39070: Unable to open the log file.
    ORA-39087: directory name D:\TEMP\ is invalid
    I don't understand if it's because I am setting up the parameter file wrong, or if the utility is trying to find that directory on the server whereas I am thinking it's going to dump it to my local filesystem, where that directory exists.
    I'd hate to have to use SQL Loader to create ctl files for each and every table...
    Here is my parameter file:
    DUMPFILE="db_export.dmp"
    LOGFILE="exp_db_export.log"
    DIRECTORY="D:\temp\"
    TABLES=ACCOUNT
    CONTENT=ALL
    (just trying to test it on one table so far...)
    P.S. Oracle 11g
    Edited by: trant on Jan 13, 2012 7:58 AM

    ORA-39070: Unable to open the log file.
    ORA-39087: directory name D:\TEMP\ is invalid
    The directory here should not be a physical location; it is a logical representation.
    You have to create a directory object at the SQL level, like CREATE DIRECTORY exp_dp ...,
    and then use that created directory as DIRECTORY=exp_dp.
    HTH
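    A minimal sketch of that server-side setup (the path and grantee are hypothetical; the path must exist on the database host, since Data Pump always writes on the server - which also answers the local-filesystem question above):

    CREATE DIRECTORY exp_dp AS '/u01/app/oracle/dpdump';
    GRANT READ, WRITE ON DIRECTORY exp_dp TO your_user;

    Then the parameter file becomes DIRECTORY=exp_dp. Without shell access, the dump file still lands on the server, so getting it to the client machine is a separate transfer step.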

  • Preview transformation file in data manager package

    Dear BPC Experts,
    When we try to preview the transformation file while running a data manager package to import transaction data from BW, we get the following error. We do not get this error if we use the load-from-flat-file package.
    We are on BPC 10 PS06, EPM Add-in SP14 Patch 3.
    Has anybody seen this issue before? We can paste the entire log if required.
    See the end of this message for details on invoking
    just-in-time (JIT) debugging instead of this dialog box.
    ************** Exception Text **************
    System.ArgumentException: Separator cannot be null and must contain only one char
    Parameter name: separator
       at FPMXLClient.DataManager.CsvParser.Parse(String data, String separator, Boolean hasHeader) in d:\Olympus_100_REL_XLCLIENT\src\FPMXLClient\src\DataManager\CsvParser.cs:line 15
       at FPMXLClient.DataManager.UI.Forms.FilePreview.BuildDataArrayFromCsv(String data) in d:\Olympus_100_REL_XLCLIENT\src\FPMXLClient\src\UILayer\DataManagerUI\Forms\FilePreview.cs:line 487
       at FPMXLClient.DataManager.UI.Forms.FilePreview.BuildDataArray(String data, Boolean formatted) in d:\Olympus_100_REL_XLCLIENT\src\FPMXLClient\src\UILayer\DataManagerUI\Forms\FilePreview.cs:line 414
       at FPMXLClient.DataManager.UI.Forms.FilePreview.SpecialFilesProcessing() in d:\Olympus_100_REL_XLCLIENT\src\FPMXLClient\src\UILayer\DataManagerUI\Forms\FilePreview.cs:line 406
       at FPMXLClient.DataManager.UI.Forms.FilePreview.DisplayData() in d:\Olympus_100_REL_XLCLIENT\src\FPMXLClient\src\UILayer\DataManagerUI\Forms\FilePreview.cs:line 351
       at FPMXLClient.DataManager.UI.Forms.FilePreview.InitializePreview() in d:\Olympus_100_REL_XLCLIENT\src\FPMXLClient\src\UILayer\DataManagerUI\Forms\FilePreview.cs:line 102
       at FPMXLClient.DataManager.UI.Forms.FilePreview.FilePreview_Load(Object sender, EventArgs e) in d:\Olympus_100_REL_XLCLIENT\src\FPMXLClient\src\UILayer\DataManagerUI\Forms\FilePreview.cs:line 740
       at System.Windows.Forms.Form.OnLoad(EventArgs e)
       at FPMXLClient.UILayer.Forms.BaseForm.OnLoad(EventArgs e) in d:\Olympus_100_REL_XLCLIENT\src\FPMXLClient\src\UILayer\UI\Forms\Base\BaseForm.cs:line 70
       at System.Windows.Forms.Form.OnCreateControl()
       at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)
       at System.Windows.Forms.Control.CreateControl()
       at System.Windows.Forms.Control.WmShowWindow(Message& m)
       at System.Windows.Forms.Control.WndProc(Message& m)
       at System.Windows.Forms.ScrollableControl.WndProc(Message& m)
       at System.Windows.Forms.Form.WmShowWindow(Message& m)
       at System.Windows.Forms.Form.WndProc(Message& m)
       at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
       at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
       at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
    Best Regards,
    Ashwin.

    Hi Raju,
    Thank you for your reply.
    It seems that it is an SP-related issue. When we downgraded our EPM Add-in to SP13 Patch 4, it did not throw any error.
    Best Regards,
    Ashwin.

  • While using data pump (impdp) how to rename references within objects?

    Using 10g;
    What I want to accomplish is to change schema & tablespace ownership using the Data Pump method via the command line; I have had success using the command line for expdp/impdp. The problem is that there are objects that reference the old schemas that DO NOT get updated (e.g. a procedure may reference usr1.table1 in its PL/SQL statements), and this is where I have been unsuccessful. Anyone know of a way to change references from the old schema name to the new schema name in objects (procedures, views, etc.) via the command line?
    This is what I currently use; it works to change the schema and tablespace, but will not change references within my objects:
    expdp system/<pass> schemas=usr1,usr2 DIRECTORY=dp_dir DUMPFILE=dataPump_BothSchemas.dmp LOGFILE=expdpAllSchema.log parallel=2
    impdp system/<pass> DIRECTORY=dp_dir DUMPFILE=dataPump_BothSchemas.dmp LOGFILE=impbothSchToEE.log remap_schema=usr1:newUsr1,usr2:newUsr2 remap_tablespace=old_ts_tables:new_ts_tables full=y
    Thanks!
    P.S. I have accomplished this using Enterprise Manager.

    "(e.g. a procedure may reference usr1.table1 in the PL/SQL statement)" - If you hard-coded such references in stored procedures, you have to correct them manually. Consider using synonyms if your stored procedures reference other schemas' objects.
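    A hedged sketch of both halves of that advice, using the schema names from the question (NEWUSR1/NEWUSR2 are the remap targets):

    -- locate the hard-coded references left over after the import
    SELECT name, type, line, text
      FROM dba_source
     WHERE owner IN ('NEWUSR1', 'NEWUSR2')
       AND UPPER(text) LIKE '%USR1.%';

    -- if the code is edited to use the unqualified name TABLE1 instead,
    -- a private synonym in each referencing schema repoints it:
    CREATE SYNONYM newusr2.table1 FOR newusr1.table1;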

  • Data Pump Export error - network mounted path

    Hi,
    Please have a look at the Data Pump error I am getting while doing an export. I am running on version 11g. Please help with your feedback.
    I am getting the error due to a network-mounted path for the directory ORALOAD; it works fine with a local path. I have given full permissions on the network path, and UTL_FILE is able to create files, but Data Pump fails with the error messages below.
    Oracle 11g
    Solaris 10
    Getting below error :
    ERROR at line 1:
    ORA-39001: invalid argument value
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
    ORA-06512: at "SYS.DBMS_DATAPUMP", line 3444
    ORA-06512: at "SYS.DBMS_DATAPUMP", line 3693
    ORA-06512: at line 64
    DECLARE
    p_part_name VARCHAR2(30);
    p_msg VARCHAR2(512);
    v_ret_period NUMBER;
    v_arch_location VARCHAR2(512);
    v_arch_directory VARCHAR2(20);
    v_rec_count NUMBER;
    v_partition_dumpfile VARCHAR2(35);
    v_partition_dumplog VARCHAR2(35);
    v_part_date VARCHAR2(30);
    p_partition_name VARCHAR2(30);
    v_partition_arch_location VARCHAR2(512);
    h1 NUMBER; -- Data Pump job handle
    job_state VARCHAR2(30); -- To keep track of job state
    le ku$_LogEntry; -- For WIP and error messages
    js ku$_JobStatus; -- The job status from get_status
    jd ku$_JobDesc; -- The job description from get_status
    sts ku$_Status; -- The status object returned by get_status
    ind NUMBER; -- Loop index
    percent_done NUMBER; -- Percentage of job complete
    --check dump file exist on directory
    l_file utl_file.file_type;   
    l_file_name varchar2(20);
    l_exists boolean;
    l_length number;
    l_blksize number;
    BEGIN
    p_part_name:='P2010110800';
    p_partition_name := upper(p_part_name);
    v_partition_dumpfile :=  chr(39)||p_partition_name||chr(39);
    v_partition_dumplog  :=  p_partition_name || '.LOG';
         SELECT COUNT(*) INTO v_rec_count FROM HDB.PARTITION_BACKUP_MASTER WHERE PARTITION_ARCHIVAL_STATUS='Y';
             IF v_rec_count != 0 THEN
               SELECT
               PARTITION_ARCHIVAL_PERIOD
               ,PARTITION_ARCHIVAL_LOCATION
               ,PARTITION_ARCHIVAL_DIRECTORY
               INTO v_ret_period , v_arch_location , v_arch_directory
               FROM HDB.PARTITION_BACKUP_MASTER WHERE PARTITION_ARCHIVAL_STATUS='Y';
             END IF;
         utl_file.fgetattr('ORALOAD', l_file_name, l_exists, l_length, l_blksize);      
            IF (l_exists) THEN        
             utl_file.FRENAME('ORALOAD', l_file_name, 'ORALOAD', p_partition_name ||'_'|| to_char(systimestamp,'YYYYMMDDHH24MISS') ||'.DMP', TRUE);
         END IF;
        v_part_date := replace(p_partition_name,'P');
            DBMS_OUTPUT.PUT_LINE('inside');
    h1 := dbms_datapump.open (operation => 'EXPORT',
                              job_mode  => 'TABLE');
          dbms_datapump.add_file (handle    => h1,
                                      filename  => p_partition_name ||'.DMP',
                                      directory => v_arch_directory,
                                      filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
              dbms_datapump.add_file (handle    => h1,
                                      filename  => p_partition_name||'.LOG',
                                      directory => v_arch_directory,
                                      filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
              dbms_datapump.metadata_filter (handle => h1,
                                             name   => 'SCHEMA_EXPR',
                                             value  => 'IN (''HDB'')');
              dbms_datapump.metadata_filter (handle => h1,
                                             name   => 'NAME_EXPR',
                                             value  => 'IN (''SUBSCRIBER_EVENT'')');
              dbms_datapump.data_filter (handle      => h1,
                                         name        => 'PARTITION_LIST',
                                        value       => v_partition_dumpfile,
                                        table_name  => 'SUBSCRIBER_EVENT',
                                        schema_name => 'HDB');
              dbms_datapump.set_parameter(handle => h1, name => 'COMPRESSION', value => 'ALL');
              dbms_datapump.start_job (handle => h1);
                  dbms_datapump.detach (handle => h1);              
    END;
    /

    Hi,
    I tried to generate the dump with expdp instead of the API and got more specific error logs;
    the log file did get created on the same path.
    expdp hdb/hdb DUMPFILE=P2010110800.dmp DIRECTORY=ORALOAD TABLES=(SUBSCRIBER_EVENT:P2010110800) logfile=P2010110800.log
    Export: Release 11.2.0.1.0 - Production on Wed Nov 10 01:26:13 2010
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, Automatic Storage Management, OLAP, Data Mining
    and Real Application Testing options
    ORA-39001: invalid argument value
    ORA-39000: bad dump file specification
    ORA-31641: unable to create dump file "/nfs_path/lims/backup/hdb/datapump/P2010110800.dmp"
    ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
    Additional information: 3
    Edited by: Sachin B on Nov 9, 2010 10:33 PM
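    The ORA-27054 at the end is the telling one: Oracle refuses to create dump files on an NFS mount that lacks the mount options it requires, regardless of filesystem permissions (the post above notes UTL_FILE can still write there, which fits - the mount-option check applies to database file I/O, not UTL_FILE). The fix is on the OS side: remount the share with the options Oracle expects for the platform - on Solaris, typically along the lines of rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3, but check My Oracle Support for the exact list for your OS and release - and then rerun the job.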
