Expdp

Hi,
While running a Data Pump export I am getting the following error:
C:\>expdp parfile=dp1.par;
LRM-00109: could not open parameter file 'dp1.par;'
LRM-00113: error when processing file 'dp1.par;'
Please give me a solution for this.

Aman.... wrote:
The semi-colon has nothing to do with the error. The OP has to ensure that the file is available to the binary.
Aman, I have to disagree with you.
The error very clearly says: LRM-00109: could not open parameter file 'dp1.par;'
expdp is looking for a file called dp1.par; (semi-colon included),
so unless your file is actually named dp1.par; the ';' should not be there.
I have just tested it on Windows and I can replicate the error when I specify a ';' at the end of the file name.
Sunil
Edited by: SBF_ on Jul 13, 2011 7:35 PM
Edited by: SBF_ on Jul 13, 2011 7:36 PM
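A minimal sketch of a working setup for anyone who hits the same thing (the directory, schema and file names are only examples); the key point is that the parameter file name is passed with no trailing semi-colon:
contents of dp1.par:
directory=DATA_PUMP_DIR
schemas=scott
dumpfile=scott.dmp
logfile=scott.log
then run:
C:\>expdp system/password parfile=dp1.par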

Similar Messages

  • How to run expdp from client ?

    Hi All,
    I tried searching Google and the forums for my issue, but to no avail. How can I run expdp from the client side, e.g. from my laptop?
    Our PROD database server currently has no space for the expdp dump file, so I want the dump directed to my laptop, which has an external 1 TB USB hard disk, via a client-side expdp.

    Though you can initiate the expdp binary from the client side, the dump file can only be created on the server side; there is no other way. So your best bet would be to free up some space on your server.
    Aman....
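    As an illustration of why the dump always lands on the server (the directory name and path below are hypothetical): the DIRECTORY object is resolved on the database host, not on the client machine that launches expdp.
    -- run as a DBA; /u01/exports must exist on the database server
    CREATE DIRECTORY exp_dir AS '/u01/exports';
    GRANT READ, WRITE ON DIRECTORY exp_dir TO scott;
    -- even when expdp is started from a laptop, the dump file is written to /u01/exports on the server
    expdp scott/tiger@prod schemas=scott directory=exp_dir dumpfile=scott.dmp logfile=scott.log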

  • Data Pump - expdp and slow performance on specific tables

    Hi there
    I have a Data Pump export of a schema. Most of the 700 tables are exported very quickly (direct path), but a couple of them seem to be extremely slow.
    I have checked:
    - no lobs
    - no long/raw
    - no VPD
    - no partitions
    - no bitmapped index
    - just date, number, varchar2's
    I'm running with trace 400300,
    but I'm having trouble reading its output. It seems that some of the slow-performing tables are running with method 4. Can anyone explain the method codes in the trace?
    1 > direct path (I think)
    2 > external table (I think)
    4 > ?
    others?
    I have done some stats using v$filestat/v$session_wait (history), and it seems that we always wait on 'db file sequential read' and are doing lots and lots of SINGLEBLKRDS. No undo is being read.
    One 2.5 GB table takes 3 minutes,
    while this (in my eyes) similar 2.4 GB table takes 1.5 hours.
    It has 367,000 blocks (8 KB) and an average row length of 71.
    I'm on Oracle 11.2 on a Linux box with plenty of RAM and CPU power.
    Trace file /opt/oracle112/diag/rdbms/prod/prod/trace/prod_dw00_24268.trc
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    ORACLE_HOME = /opt/oracle112/product/11.2.0.2/dbhome_1
    System name:  Linux
    Node name:  tiaprod.thi.somethingamt.dk
    Release:  2.6.18-194.el5
    Version:  #1 SMP Mon Mar 29 22:10:29 EDT 2010
    Machine:  x86_64
    VM name:  Xen Version: 3.4 (HVM)
    Instance name: prod
    Redo thread mounted by this instance: 1
    Oracle process number: 222
    Unix process pid: 24268, image: oracle@tiaprod.thi.somethingamt.dk (DW00)
    *** 2011-09-20 09:39:39.671
    *** SESSION ID:(401.8395) 2011-09-20 09:39:39.671
    *** CLIENT ID:() 2011-09-20 09:39:39.671
    *** SERVICE NAME:(SYS$BACKGROUND) 2011-09-20 09:39:39.671
    *** MODULE NAME:() 2011-09-20 09:39:39.671
    *** ACTION NAME:() 2011-09-20 09:39:39.671
    KUPP:09:39:39.670: Current trace/debug flags: 00400300 = 4195072
    *** MODULE NAME:(Data Pump Worker) 2011-09-20 09:39:39.672
    *** ACTION NAME:(SYS_EXPORT_SCHEMA_09) 2011-09-20 09:39:39.672
    KUPW:09:39:39.672: 0: ALTER SESSION ENABLE PARALLEL DML called.
    KUPW:09:39:39.672: 0: ALTER SESSION ENABLE PARALLEL DML returned.
    KUPC:09:39:39.693: Setting remote flag for this process to FALSE
    prvtaqis - Enter
    prvtaqis subtab_name upd
    prvtaqis sys table upd
    KUPW:09:39:39.819: 0: KUPP$PROC.WHATS_MY_ID called.
    KUPW:09:39:39.819: 1: KUPP$PROC.WHATS_MY_ID returned.
    KUPW:09:39:39.820: 1: worker max message number: 1000
    KUPW:09:39:39.822: 1: Full cluster access allowed
    KUPW:09:39:39.823: 1: Original job start time: 11-SEP-20 09:39:38 AM
    KUPW:09:39:39.862: 1: KUPP$PROC.WHATS_MY_NAME called.
    KUPW:09:39:39.862: 1: KUPP$PROC.WHATS_MY_NAME returned. Process name: DW00
    KUPW:09:39:39.862: 1: KUPV$FT_INT.GET_INSTANCE_ID called.
    KUPW:09:39:39.866: 1: KUPV$FT_INT.GET_INSTANCE_ID returned. Instance name: prod
    KUPW:09:39:39.870: 1: ALTER SESSION ENABLE RESUMABLE called.
    KUPW:09:39:39.870: 1: ALTER SESSION ENABLE RESUMABLE returned.
    KUPW:09:39:39.871: 1: KUPF$FILE.INIT called.
    KUPW:09:39:39.996: 1: KUPF$FILE.INIT returned.
    KUPW:09:39:39.998: 1: KUPF$FILE.GET_MAX_CSWIDTH called.
    KUPW:09:39:39.998: 1: KUPF$FILE.GET_MAX_CSWIDTH returned.
    KUPW:09:39:39.998: 1: Max character width: 1
    KUPW:09:39:39.998: 1: Max clob fetch: 32757
    KUPW:09:39:39.998: 1: Max varchar2a size: 32757
    KUPW:09:39:39.998: 1: Max varchar2 size: 7990
    KUPW:09:39:39.998: 1: In procedure GET_PARAMETERS
    KUPW:09:39:40.000: 1: In procedure GET_METADATA_FILTERS
    KUPW:09:39:40.001: 1: In procedure GET_METADATA_TRANSFORMS
    KUPW:09:39:40.002: 1: In procedure GET_DATA_FILTERS
    KUPW:09:39:40.004: 1: In procedure GET_DATA_REMAPS
    KUPW:09:39:40.005: 1: In procedure PRINT_MT_PARAMS
    KUPW:09:39:40.005: 1: Master table             : "SYSTEM"."SYS_EXPORT_SCHEMA_09"
    KUPW:09:39:40.005: 1: Metadata job mode        : SCHEMA_EXPORT
    KUPW:09:39:40.005: 1: Debug enable             : TRUE
    KUPW:09:39:40.005: 1: Profile enable           : FALSE
    KUPW:09:39:40.005: 1: Transportable enable     : FALSE
    KUPW:09:39:40.005: 1: Metrics enable           : FALSE
    KUPW:09:39:40.005: 1: db version               : 11.2.0.2.0
    KUPW:09:39:40.005: 1: job version              : 11.2.0.0.0
    KUPW:09:39:40.005: 1: service name             :
    KUPW:09:39:40.005: 1: Current Edition          : ORA$BASE
    KUPW:09:39:40.005: 1: Job Edition              :
    KUPW:09:39:40.005: 1: Abort Step               : 0
    KUPW:09:39:40.005: 1: Access Method            : AUTOMATIC
    KUPW:09:39:40.005: 1: Data Options             : 0
    KUPW:09:39:40.006: 1: Dumper directory         :
    KUPW:09:39:40.006: 1: Master only              : FALSE
    KUPW:09:39:40.006: 1: Data Only                : FALSE
    KUPW:09:39:40.006: 1: Metadata Only            : FALSE
    KUPW:09:39:40.006: 1: Estimate                 : BLOCKS
    KUPW:09:39:40.006: 1: Data error logging table :
    KUPW:09:39:40.006: 1: Remote Link              :
    KUPW:09:39:40.006: 1: Dumpfile present         : TRUE
    KUPW:09:39:40.006: 1: Table Exists Action      :
    KUPW:09:39:40.006: 1: Partition Options        : NONE
    KUPW:09:39:40.006: 1: Tablespace Datafile Count: 0
    KUPW:09:39:40.006: 1: Metadata Filter    Index : 1    Count : 10
    KUPW:09:39:40.006: 1:         1           Name - INCLUDE_USER
    KUPW:09:39:40.006: 1:                    Value - TRUE
    KUPW:09:39:40.006: 1:              Object Name - SCHEMA_EXPORT
    KUPW:09:39:40.006: 1:         2           Name - SCHEMA_EXPR
    KUPW:09:39:40.006: 1:                    Value -  IN ('TIA')
    KUPW:09:39:40.006: 1:         3           Name - NAME_EXPR
    KUPW:09:39:40.006: 1:                    Value -  ='ACC_PAYMENT_SPECIFICATION'
    KUPW:09:39:40.006: 1:                   Object - TABLE
    KUPW:09:39:40.006: 1:         4           Name - INCLUDE_PATH_EXPR
    KUPW:09:39:40.006: 1:                    Value -  IN ('TABLE')
    KUPW:09:39:40.006: 1:         5           Name - ORDERED
    KUPW:09:39:40.006: 1:                    Value - FALSE
    KUPW:09:39:40.006: 1:                   Object - TABLE_DATA
    KUPW:09:39:40.006: 1:         6           Name - NO_XML
    KUPW:09:39:40.006: 1:                    Value - TRUE
    KUPW:09:39:40.006: 1:                   Object - XMLSCHEMA/EXP_XMLSCHEMA
    KUPW:09:39:40.006: 1:         7           Name - XML_OUTOFLINE
    KUPW:09:39:40.006: 1:                    Value - FALSE
    KUPW:09:39:40.006: 1:                   Object - TABLE/TABLE_DATA
    KUPW:09:39:40.006: 1:         8           Name - XDB_GENERATED
    KUPW:09:39:40.006: 1:                    Value - FALSE
    KUPW:09:39:40.006: 1:                   Object - TABLE/TRIGGER
    KUPW:09:39:40.007: 1:         9           Name - XDB_GENERATED
    KUPW:09:39:40.007: 1:                    Value - FALSE
    KUPW:09:39:40.007: 1:                   Object - TABLE/RLS_POLICY
    KUPW:09:39:40.007: 1:         10           Name - PRIVILEGED_USER
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1: MD remap schema    Index : 4    Count : 0
    KUPW:09:39:40.007: 1: MD remap other     Index : 5    Count : 0
    KUPW:09:39:40.007: 1: MD Transform ddl   Index : 2    Count : 11
    KUPW:09:39:40.007: 1:         1           Name - DBA
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:                   Object - JOB
    KUPW:09:39:40.007: 1:         2           Name - EXPORT
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:         3           Name - PRETTY
    KUPW:09:39:40.007: 1:                    Value - FALSE
    KUPW:09:39:40.007: 1:         4           Name - SQLTERMINATOR
    KUPW:09:39:40.007: 1:                    Value - FALSE
    KUPW:09:39:40.007: 1:         5           Name - CONSTRAINTS
    KUPW:09:39:40.007: 1:                    Value - FALSE
    KUPW:09:39:40.007: 1:                   Object - TABLE
    KUPW:09:39:40.007: 1:         6           Name - REF_CONSTRAINTS
    KUPW:09:39:40.007: 1:                    Value - FALSE
    KUPW:09:39:40.007: 1:                   Object - TABLE
    KUPW:09:39:40.007: 1:         7           Name - OID
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:                   Object - TABLE
    KUPW:09:39:40.007: 1:         8           Name - RESET_PARALLEL
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:                   Object - INDEX
    KUPW:09:39:40.007: 1:         9           Name - OID
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:                   Object - TYPE
    KUPW:09:39:40.007: 1:         10           Name - OID
    KUPW:09:39:40.007: 1:                    Value - TRUE
    KUPW:09:39:40.007: 1:                   Object - INC_TYPE
    KUPW:09:39:40.007: 1:         11           Name - REVOKE_FROM
    KUPW:09:39:40.008: 1:                    Value - SYSTEM
    KUPW:09:39:40.008: 1:                   Object - ROLE
    KUPW:09:39:40.008: 1: Data Filter        Index : 6    Count : 0
    KUPW:09:39:40.008: 1: Data Remap         Index : 7    Count : 0
    KUPW:09:39:40.008: 1: MD remap name      Index : 8    Count : 0
    KUPW:09:39:40.008: 1: In procedure DISPATCH_WORK_ITEMS
    KUPW:09:39:40.009: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:39:40.009: 1: KUPC$QUEUE.TRANSCEIVE called.
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:39:40.036: 1: KUPC$QUEUE.TRANSCEIVE returned. Received 2011
    KUPW:09:39:40.036: 1: DBMS_LOB.CREATETEMPORARY called.
    KUPW:09:39:40.037: 1: DBMS_LOB.CREATETEMPORARY returned.
    KUPW:09:39:40.038: 1: Flags: 18
    KUPW:09:39:40.038: 1: Start sequence number:
    KUPW:09:39:40.038: 1: End sequence number:
    KUPW:09:39:40.038: 1: Metadata Parallel: 1
    KUPW:09:39:40.038: 1: Primary worker id: 1
    KUPW:09:39:40.041: 1: In procedure GET_TABLE_DATA_OBJECTS
    KUPW:09:39:40.041: 1: In procedure CREATE_MSG
    KUPW:09:39:40.041: 1: KUPV$FT.MESSAGE_TEXT called.
    KUPW:09:39:40.041: 1: KUPV$FT.MESSAGE_TEXT returned.
    KUPW:09:39:40.041: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:39:40.041: 1: KUPC$QUEUE_INT.SEND called.
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:39:40.044: 1: KUPC$QUEUE_INT.SEND returned.
    KUPW:09:39:40.044: 1: Estimate in progress using BLOCKS method...
    KUPW:09:39:40.044: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
    KUPW:09:39:40.044: 1: Old Seqno: 0 New Path: SCHEMA_EXPORT/TABLE/TABLE_DATA PO Num: -5 New Seqno: 62
    KUPW:09:39:40.046: 1: Created type completion for duplicate 62
    KUPW:09:39:40.046: 1: In procedure CREATE_MSG
    KUPW:09:39:40.046: 1: KUPV$FT.MESSAGE_TEXT called.
    KUPW:09:39:40.046: 1: KUPV$FT.MESSAGE_TEXT returned.
    KUPW:09:39:40.046: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:39:40.046: 1: KUPC$QUEUE_INT.SEND called.
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:39:40.047: 1: KUPC$QUEUE_INT.SEND returned.
    KUPW:09:39:40.047: 1: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    KUPW:09:39:40.048: 1: In procedure CONFIGURE_METADATA_UNLOAD
    KUPW:09:39:40.048: 1: Phase: ESTIMATE_PHASE Filter Name:  Filter Value:
    KUPW:09:39:40.048: 1: DBMS_METADATA.OPEN11.2.0.0.0 called.
    KUPW:09:39:40.182: 1: DBMS_METADATA.OPEN11.2.0.0.0 returned. Source handle: 100001
    KUPW:09:39:40.182: 1: DBMS_METADATA.SET_FILTER called. metadata_phase: ESTIMATE_PHASE
    KUPW:09:39:40.182: 1: DBMS_METADATA.SET_FILTER returned. In function GET_NOEXP_TABLE
    KUPW:09:39:40.194: 1: DBMS_METADATA.SET_PARSE_ITEM called.
    *** 2011-09-20 09:39:40.325
    KUPW:09:39:40.325: 1: DBMS_METADATA.SET_PARSE_ITEM returned.
    KUPW:09:39:40.325: 1: DBMS_METADATA.SET_COUNT called.
    KUPW:09:39:40.328: 1: DBMS_METADATA.SET_COUNT returned.
    KUPW:09:39:40.328: 1: DBMS_METADATA.FETCH_XML_CLOB called.
    *** 2011-09-20 09:39:42.603
    KUPW:09:39:42.603: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
    KUPW:09:39:42.603: 1: In procedure CREATE_TABLE_DATA_OBJECT_ROWS
    KUPW:09:39:42.603: 1: In function GATHER_PARSE_ITEMS
    KUPW:09:39:42.603: 1: In function CHECK_FOR_REMAP_NETWORK
    KUPW:09:39:42.603: 1: Nothing to remap
    KUPW:09:39:42.603: 1: In procedure BUILD_OBJECT_STRINGS
    KUPW:09:39:42.604: 1: In procedure LOCATE_DATA_FILTERS
    KUPW:09:39:42.604: 1: In function NEXT_PO_NUMBER
    KUPW:09:39:42.620: 1: In procedure DETERMINE_METHOD_PARALLEL
    KUPW:09:39:42.620: 1: flags mask: 0
    KUPW:09:39:42.620: 1: dapi_possible_meth: 1
    KUPW:09:39:42.620: 1: data_size: 3019898880
    KUPW:09:39:42.620: 1: et_parallel: TRUE
    KUPW:09:39:42.620: 1: object: TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION"                               <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    KUPW:09:39:42.648: 1: l_dapi_bit_mask: 7
    KUPW:09:39:42.648: 1: l_client_bit_mask: 7
    KUPW:09:39:42.648: 1: TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION" either, parallel: 12               <<<<< Here is says either (I thought that was method ?)  <<<<<<<<<<<<<<<<
    KUPW:09:39:42.648: 1: FORALL BULK INSERT called.
    KUPW:09:39:42.658: 1: FORALL BULK INSERT returned.
    KUPW:09:39:42.660: 1: DBMS_LOB.TRIM called. v_md_xml_clob
    KUPW:09:39:42.660: 1: DBMS_LOB.TRIM returned.
    KUPW:09:39:42.660: 1: DBMS_METADATA.FETCH_XML_CLOB called.
    KUPW:09:39:42.678: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
    KUPW:09:39:42.678: 1: DBMS_LOB.CREATETEMPORARY called.
    KUPW:09:39:42.678: 1: DBMS_LOB.CREATETEMPORARY returned.
    KUPW:09:39:42.678: 1: In procedure UPDATE_TD_ROW_EXP with seqno: 62
    KUPW:09:39:42.680: 1: 1 rows fetched
    KUPW:09:39:42.680: 1: In function NEXT_PO_NUMBER
    KUPW:09:39:42.680: 1: Next table data array entry: 1 Parallel: 12 Size: 3019898880 Method: 4Creation_level: 0               <<<<<<<<<<<<<<<<  HERE IT SAYS METHOD = 4  and PARALLEL=12 (I'm not using the parallel parameter ???)  <<<<<<<<<<<<<<<<<<
    KUPW:09:39:42.681: 1: In procedure UPDATE_TD_BASE_PO_INFO
    KUPW:09:39:42.683: 1: Updated 1 td objects with bpo between 1 and 1
    KUPW:09:39:42.684: 1: Send table_data_varray called.  Count: 1
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:39:42.695: 1: Send table_data_varray returned.
    KUPW:09:39:42.695: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:39:42.695: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
    KUPW:09:39:42.695: 1: Old Seqno: 62 New Path:  PO Num: -5 New Seqno: 0
    KUPW:09:39:42.695: 1: Object count: 1
    KUPW:09:39:42.697: 1: 1 completed for 62
    KUPW:09:39:42.697: 1: DBMS_METADATA.CLOSE called. Handle: 100001
    KUPW:09:39:42.697: 1: DBMS_METADATA.CLOSE returned.
    KUPW:09:39:42.697: 1: In procedure CREATE_MSG
    KUPW:09:39:42.697: 1: KUPV$FT.MESSAGE_TEXT called.
    KUPW:09:39:42.698: 1: KUPV$FT.MESSAGE_TEXT returned.
    KUPW:09:39:42.698: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:39:42.698: 1: KUPC$QUEUE_INT.SEND called.
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:39:42.699: 1: KUPC$QUEUE_INT.SEND returned.
    KUPW:09:39:42.699: 1: Total estimation using BLOCKS method: 2.812 GB
    KUPW:09:39:42.699: 1: In procedure CONFIGURE_METADATA_UNLOAD
    KUPW:09:39:42.699: 1: Phase: WORK_PHASE Filter Name: BEGIN_WITH Filter Value:
    KUPW:09:39:42.699: 1: DBMS_METADATA.OPEN11.2.0.0.0 called.
    KUPW:09:39:42.837: 1: DBMS_METADATA.OPEN11.2.0.0.0 returned. Source handle: 200001
    KUPW:09:39:42.837: 1: DBMS_METADATA.SET_FILTER called. metadata_phase: WORK_PHASE
    KUPW:09:39:42.837: 1: DBMS_METADATA.SET_FILTER returned. In function GET_NOEXP_TABLE
    KUPW:09:39:42.847: 1: DBMS_METADATA.SET_PARSE_ITEM called.
    KUPW:09:39:42.964: 1: DBMS_METADATA.SET_PARSE_ITEM returned.
    KUPW:09:39:42.964: 1: DBMS_METADATA.SET_COUNT called.
    KUPW:09:39:42.967: 1: DBMS_METADATA.SET_COUNT returned.
    KUPW:09:39:42.967: 1: KUPF$FILE.OPEN_CONTEXT called.
    KUPW:09:39:42.967: 1: KUPF$FILE.OPEN_CONTEXT returned.
    KUPW:09:39:42.968: 1: DBMS_METADATA.FETCH_XML_CLOB called. Handle: 200001
    *** 2011-09-20 09:40:01.798
    KUPW:09:40:01.798: 1: DBMS_METADATA.FETCH_XML_CLOB returned.
    KUPW:09:40:01.798: 1: Object seqno fetched:
    KUPW:09:40:01.799: 1: Object path fetched:
    KUPW:09:40:01.799: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:40:01.799: 1: In procedure COMPLETE_EXP_OBJECT
    KUPW:09:40:01.799: 1: KUPF$FILE.FLUSH_LOB called.
    KUPW:09:40:01.815: 1: KUPF$FILE.FLUSH_LOB returned.
    KUPW:09:40:01.815: 1: In procedure UPDATE_TYPE_COMPLETION_ROW
    KUPW:09:40:01.815: 1: Old Seqno: 226 New Path:  PO Num: -5 New Seqno: 0
    KUPW:09:40:01.815: 1: Object count: 1
    KUPW:09:40:01.815: 1: 1 completed for 226
    KUPW:09:40:01.815: 1: DBMS_METADATA.CLOSE called.  Handle: 200001
    KUPW:09:40:01.816: 1: DBMS_METADATA.CLOSE returned.
    KUPW:09:40:01.816: 1: KUPF$FILE.CLOSE_CONTEXT called.
    KUPW:09:40:01.820: 1: KUPF$FILE.CLOSE_CONTEXT returned.
    KUPW:09:40:01.821: 1: In procedure SEND_MSG. Fatal=0
    KUPW:09:40:01.821: 1: KUPC$QUEUE.TRANSCEIVE called.
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    kwqberlst rqan->lascn_kwqiia > 0 block
    kwqberlst rqan->lascn_kwqiia  7
    kwqberlst ascn -90145310 lascn 22
    kwqberlst !retval block
    kwqberlst rqan->lagno_kwqiia  7
    KUPW:09:40:01.827: 1: KUPC$QUEUE.TRANSCEIVE returned. Received 2012
    KUPW:09:40:01.827: 1: DBMS_LOB.CREATETEMPORARY called.
    KUPW:09:40:01.828: 1: DBMS_LOB.CREATETEMPORARY returned.
    KUPW:09:40:01.828: 1: Process order range: 1..1
    KUPW:09:40:01.828: 1: Method: 1
    KUPW:09:40:01.828: 1: Parallel: 1
    KUPW:09:40:01.828: 1: Creation level: 0
    KUPW:09:40:01.830: 1: BULK COLLECT called.
    KUPW:09:40:01.830: 1: BULK COLLECT returned.
    KUPW:09:40:01.830: 1: In procedure BUILD_OBJECT_STRINGS
    KUPW:09:40:01.836: 1: In procedure MOVE_DATA UNLOADing process_order 1 TABLE_DATA:"TIA"."ACC_PAYMENT_SPECIFICATION"            <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    KUPW:09:40:01.839: 1: KUPD$DATA.OPEN called.
    KUPW:09:40:01.840: 1: KUPD$DATA.OPEN returned.
    KUPW:09:40:01.840: 1: KUPD$DATA.SET_PARAMETER - common called.
    KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - common returned.
    KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - flags called.
    KUPW:09:40:01.843: 1: KUPD$DATA.SET_PARAMETER - flags returned.
    KUPW:09:40:01.843: 1: KUPD$DATA.START_JOB called.
    KUPW:09:40:01.918: 1: KUPD$DATA.START_JOB returned. In procedure GET_JOB_VERSION
    This is how I called expdp:
    expdp system/xxxxxxxxx schemas=tia directory=expdp INCLUDE=TABLE:\" =\'ACC_PAYMENT_SPECIFICATION\'\" REUSE_DUMPFILES=Y LOGFILE=expdp:$LOGFILE TRACE=400300

    Hi there,
    I have read the note - that's where I found the link to the trace note 286496.1 on how to set up a trace.
    But I still need an explanation of the methods (1, 2, 4, etc.).
    regards
    Mette
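    For anyone watching a slow unload like this, a minimal sketch of the kind of monitoring the poster describes (the module name comes from the trace above; adjust the predicates to your own session):
    -- what the Data Pump worker is waiting on right now
    SELECT sid, serial#, event, seconds_in_wait
    FROM   v$session
    WHERE  module = 'Data Pump Worker';
    -- rough progress of the unload
    SELECT sid, opname, target, sofar, totalwork,
           ROUND(sofar/totalwork*100, 1) AS pct_done
    FROM   v$session_longops
    WHERE  opname LIKE '%EXPORT%'
    AND    totalwork > 0;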

  • Error using expdp on Oracle XE

    Hi,
    When I try and do an export in Oracle XE I get the following error:
    D:\>expdp SYSTEM/vodafone SCHEMAS=XE_DATA DIRECTORY=dmpdir DUMPFILE=xedata.dmp
    Export: Release 10.2.0.1.0 - Production on Thursday, 29 June, 2006 10:47:10
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    ORA-39002: invalid operation
    ORA-39070: Unable to open the log file.
    ORA-29283: invalid file operation
    ORA-06512: at "SYS.UTL_FILE", line 475
    ORA-29283: invalid file operation
    Any ideas??
    /John

    Hi guys,
    The problem was that Oracle didn't have access to the directory I created. So I created a directory under the Oracle home directory instead, and it worked.
    Thanks,
    John
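    For reference, ORA-29283 here means the operating-system user running the Oracle service cannot write to the path behind the DIRECTORY object. A sketch of the usual fix, keeping the original directory name (the path is only an example and must exist and be writable by the Oracle service account):
    CREATE OR REPLACE DIRECTORY dmpdir AS 'D:\oraclexe\dpdump';
    GRANT READ, WRITE ON DIRECTORY dmpdir TO system;
    D:\>expdp SYSTEM/password SCHEMAS=XE_DATA DIRECTORY=dmpdir DUMPFILE=xedata.dmp LOGFILE=xedata.log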

  • Error while doing an expdp on a large datafile

    Hello,
    I tried an export using expdp in Oracle 10g Express Edition. It was working perfectly until the database size reached 2.1 GB. Then I got the following error message:
    ---------------- Start of error message ----------------
    Connected to: Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    Starting "SERVICE_2_8"."SYS_EXPORT_SCHEMA_05": service_2_8/******** LOGFILE=3_export.log DIRECTORY=db_pump DUMPFILE=service_2_8.dmp CONTENT=all
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    ORA-39125: Worker unexpected fatal error in KUPW$WORKER.GET_TABLE_DATA_OBJECTS while calling DBMS_METADATA.FETCH_XML_CLOB []
    ORA-01116: error in opening database file 5
    ORA-01110: data file 5: '/usr/lib/oracle/xe/oradata/service_3_0.dbf'
    ORA-27041: unable to open file
    Linux Error: 13: Permission denied
    Additional information: 3
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.KUPW$WORKER", line 6235
    ----- PL/SQL Call Stack -----
    object line object
    handle number name
    0x3b3ce18c 14916 package body SYS.KUPW$WORKER
    0x3b3ce18c 6300 package body SYS.KUPW$WORKER
    0x3b3ce18c 9120 package body SYS.KUPW$WORKER
    0x3b3ce18c 1880 package body SYS.KUPW$WORKER
    0x3b3ce18c 6861 package body SYS.KUPW$WORKER
    0x3b3ce18c 1262 package body SYS.KUPW$WORKER
    0x3b0f9758 2 anonymous block
    Job "SERVICE_2_8"."SYS_EXPORT_SCHEMA_05" stopped due to fatal error at 03:04:34
    ---------------- End of error message ----------------
    SELinux was disabled completely and I have set permissions of 0777 on the appropriate datafile.
    Still, it is not working.
    Can you please tell me how to solve this problem, or do you have any ideas or suggestions?

    Hello rgeier,
    I cannot access this tablespace, service_3_0 (2.1 GB), through a PHP web application or through SQL*Plus. I can access a small tablespace, service_2_8, through the web application or through SQL*Plus. When I tried to access service_3_0 through SQL*Plus, the following error message was returned:
    ---------------- Start of error message ----------------
    ERROR at line 1:
    ORA-01116: error in opening database file 5
    ORA-01110: data file 5: '/usr/lib/oracle/xe/oradata/service_3_0.dbf'
    ORA-27041: unable to open file
    Linux Error: 13: Permission denied
    Additional information: 3
    ---------------- End of error message ----------------
    The following are the last set of entries in the alert_XE.log file in the bdump folder:
    ---------------- Start of alert log ----------------
    db_recovery_file_dest_size of 40960 MB is 9.96% used. This is a
    user-specified limit on the amount of space that will be used by this
    database for recovery-related files, and does not reflect the amount of
    space available in the underlying filesystem or ASM diskgroup.
    Wed Aug 20 05:13:59 2008
    Completed: alter database open
    Wed Aug 20 05:19:58 2008
    Shutting down archive processes
    Wed Aug 20 05:20:03 2008
    ARCH shutting down
    ARC2: Archival stopped
    The value (30) of MAXTRANS parameter ignored.
    kupprdp: master process DM00 started with pid=27, OS id=7463 to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_06', 'SERVICE_2_8', 'KUPC$C_1_20080820054031', 'KUPC$S_1_20080820054031', 0);
    kupprdp: worker process DW01 started with worker id=1, pid=28, OS id=7466 to execute - SYS.KUPW$WORKER.MAIN('SYS_EXPORT_SCHEMA_06', 'SERVICE_2_8');
    Wed Aug 20 05:40:48 2008
    The value (30) of MAXTRANS parameter ignored.
    The value (30) of MAXTRANS parameter ignored.
    The value (30) of MAXTRANS parameter ignored.
    The value (30) of MAXTRANS parameter ignored.
    The value (30) of MAXTRANS parameter ignored.
    ---------------- End of alert log ----------------
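    Since ORA-27041 with "Linux Error: 13: Permission denied" is raised at the OS level, a sketch of what is usually worth checking (paths taken from the error above; the oracle:dba ownership is an assumption about a typical XE install):
    # who owns the datafile, and are the parent directories traversable?
    ls -l /usr/lib/oracle/xe/oradata/service_3_0.dbf
    ls -ld /usr/lib/oracle/xe/oradata
    # the file and every directory in its path must be accessible to the OS user running the instance
    chown oracle:dba /usr/lib/oracle/xe/oradata/service_3_0.dbf
    chmod 640 /usr/lib/oracle/xe/oradata/service_3_0.dbf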

  • Date-wise backup using expdp

    Dear Friends,
    I am using an Oracle 10g database. Every week I take a full export backup using expdp; between those weekly runs I don't take any backup. Now I need the data as it was two days ago. Is it possible to take a backup using Data Pump (expdp) as of two days ago?
    i.e., is there any facility in Oracle 10g Data Pump to take an export backup as of a given date?
    My other question:
    Is there any incremental backup facility available in Oracle 10g Data Pump?
    Waiting for your kind reply ...

    Hi,
    You must not use or treat expdp (a logical export) as a backup that can be relied on for recovering to a point in the past. Please search the forums - there are many threads where this has been explained many times, and Oracle itself advises the same.
    Go for proper custom backups and RMAN backups.
    - Pavan Kumar N
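    For completeness: expdp does have FLASHBACK_TIME/FLASHBACK_SCN parameters, but they can only reach back as far as your undo retention allows, so they normally cannot reconstruct data from two days ago. A sketch of the syntax (timestamp, directory and file names are illustrative; best kept in a parfile to avoid shell quoting):
    schemas=scott
    directory=DATA_PUMP_DIR
    dumpfile=scott_asof.dmp
    logfile=scott_asof.log
    flashback_time="TO_TIMESTAMP('2011-07-11 09:00:00', 'YYYY-MM-DD HH24:MI:SS')"
    For anything older than the undo window, only a physical backup (RMAN) will do, which is the point made above.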

  • Expdp failed job disappear

    Hi all,
    I am testing the expdp on attach job in 10.2R3
    1. expdp job and ^C to stop it
    C:\TEMP>expdp system/oracle directory=testdir dumpfile=test5.dmp logfile=test5.log full=y job_name=expjob
    . . exported "SCOTT"."DEPT" 5.656 KB 4 rows
    . . exported "SCOTT"."EMP" 7.820 KB 14 rows
    . . exported "SCOTT"."SALGRADE" 5.585 KB 5 rows
    . . exported "SYSMAN"."AQ$_MGMT_NOTIFY_QTABLE_S" 7.734 KB 1 rows
    ^C
    2. in sqlplus check job
    select * from dba_datapump_jobs;
    OWNER_NAME JOB_NAME OPERATION JOB_MODE
    STATE DEGREE ATTACHED_SESSIONS DATAPUMP_SESSIONS
    SYSTEM EXPJOB EXPORT FULL
    EXECUTING 1 1 3
    3. Then, after 1-2 minutes, I check the job status again and it has disappeared:
    select * from dba_datapump_jobs;
    no rows selected
    Please advise what is happening.....
    thanks
    andrew

    ^C doesn't stop the job; it should take you to a prompt that looks like
    Export>
    then you can stop the job using
    Export> STOP_JOB
    The job simply finished - that's why you're not seeing it when you do a
    select * from dba_datapump_jobs;
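    A minimal sketch of re-attaching to a running job so it can be monitored or stopped cleanly (job name taken from the post above):
    expdp system/oracle attach=EXPJOB
    then, at the interactive prompt:
    Export> STATUS
    Export> STOP_JOB=IMMEDIATE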

  • Check the status expdp datapump job

    All,
    I have started expdp (Data Pump) on a schema of about 250 GB. I want to know when the job will be completed.
    I tried to find this from the views dba_datapump_sessions, v$session_longops and v$session, but all in vain.
    Is it possible to find completion time of expdp job?
    Your help is really appreciated.

    Hi,
    Have you started the job in interactive mode?
    If yes, then you can get the status of execution with the STATUS command - the default interval is zero (if you set it to a non-zero value, the status is refreshed at that interval, in seconds).
    Second, check dba_datapump_jobs.
    - Pavan Kumar N
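    A sketch of the progress query that usually works for this: while a Data Pump job is running, v$session_longops shows the estimated total work and the amount done so far, keyed by the job name:
    SELECT opname, target_desc, sofar, totalwork,
           ROUND(sofar/totalwork*100, 1) AS pct_done,
           time_remaining
    FROM   v$session_longops
    WHERE  opname IN (SELECT job_name FROM dba_datapump_jobs)
    AND    totalwork > 0;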

  • How to schedule a expdp job on dbconsole 10g

    Hi all,
    Is there any way to create a scheduled expdp job that executes automatically every day in Enterprise Manager 10g (dbconsole)?
    Thanks
    Wander (Brazil)

    Hi,
    In dbconsole I can only schedule a job to execute immediately or at a later time, just once, but I want it to run every day at a specific time.
    Wander(Brazil)
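    One common alternative is to schedule it from inside the database instead of dbconsole. A sketch using DBMS_SCHEDULER to run a shell script that calls expdp every night at 02:00 (job name, script path and time are hypothetical; on 10g an external job also needs the external-job agent configured):
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'DAILY_EXPDP',
        job_type        => 'EXECUTABLE',
        job_action      => '/home/oracle/scripts/daily_expdp.sh',
        repeat_interval => 'FREQ=DAILY;BYHOUR=2;BYMINUTE=0',
        enabled         => TRUE,
        comments        => 'Nightly schema export via expdp');
    END;
    /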

  • STOPPED JOBS with expdp and dbms_scheduler

    Hello.
    I am working with 10g Release 2 in a RAC environment, and I am trying to put an export job in the scheduler.
    To launch the export I have made a shell script that first runs the export process and then launches a bzip2 command to compress the resulting dmp file.
    The problem is that the export process finishes OK, but the file is never compressed, because the scheduler marks the job as STOPPED.
    The log says:
    REASON="Stop job with force called by user: 'SYS'"
    and the expdp OS process launched by the external job stays running forever, as if it were waiting for expdp to exit and it can't, so the script never reaches the part that compresses the file.
    Here is the script I made to export the schema:
    #!/bin/bash
    export ORACLE_HOME=/opt/oracle/product/10.2.0/db
    export PATH=$PATH:$ORACLE_HOME/bin
    export DIRBACK=/ORACLE/BACKUPS/BMR/Dumps
    export dia=`date +%d_%m_%Y_%H_%M_%S`
    export LOG=dump_backup_bmr_$dia.log
    cd $DIRBACK
    $ORACLE_HOME/bin/expdp userid=oracle_backup/orabck@BMR dumpfile="BMR_BMR_$dia.dmp" schemas=BMR directory=Dumps logfile=$LOG
    cd $DIRBACK
    /usr/bin/bzip2 -f --best ./BMR_BMR_$dia.dmp
    cd $DIRBACK
    /bin/mail -s "DUMP BACKUP BMR DIARIO [$dia]" [email protected] < ./dump_backup_bmr_$dia.log
    I have put in several cd $DIRBACK commands to see whether it fails because the script can't find the dmp file.
    Any idea why it is marked STOPPED before the script finishes?
    PS: sorry for my poor English.
    Regards

    Hi,
    A stop is only done in two cases: if a user calls dbms_scheduler.stop_job, or if the database is shut down while a job is running. Make sure the database is not being shut down while the job is running or from inside the job.
    If expdp is still running then this suggests that it is hanging. One possibility for that is that expdp is generating a lot of standard error messages and hanging the job (this is a known issue in 10gR2). You can try redirecting standard output and error to files to see if this helps.
    e.g.
    $ORACLE_HOME/bin/expdp > /tmp/output 2> /tmp/errors
    Hope this helps,
    Ravi.
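    Applied to the script above, that suggestion would look roughly like this (same variables as in the original script; the /tmp file names are just examples):
    $ORACLE_HOME/bin/expdp userid=oracle_backup/orabck@BMR dumpfile="BMR_BMR_$dia.dmp" \
      schemas=BMR directory=Dumps logfile=$LOG \
      > /tmp/expdp_bmr_stdout.log 2> /tmp/expdp_bmr_stderr.log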

  • Expdp syntax for a single package

    Hi
    I have a database running on 10.2.0.5.
    In this database there is a package "ARTICLE_CHANGE" in the "CWODS" schema.
    Now I need to export this package from this db to another database.
    The OS is AIX.
    Please help me with the exact expdp INCLUDE syntax so that I can take a backup of this package.
    Thanks for your help on this.

    expdp CWODS/CWODS@CWODS_db schemas=CWODS include=PACKAGE:"= 'ARTICLE_CHANGE'" directory=TEST_DIR dumpfile=ARTICLE_CHANGE.dmp logfile=expdpARTICLE_CHANGE.log
    Change as per your requirements and take care of the quotation marks.
    Edited by: swapnil kambli on Jun 3, 2013 1:45 AM
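    On a Unix shell such as AIX's, the quotes in that INCLUDE clause usually need escaping, so a parfile is the least error-prone way to run it. A sketch, assuming the directory object TEST_DIR already exists (the parfile name is just an example):
    contents of pkg.par:
    schemas=CWODS
    directory=TEST_DIR
    include=PACKAGE:"= 'ARTICLE_CHANGE'"
    dumpfile=ARTICLE_CHANGE.dmp
    logfile=expdpARTICLE_CHANGE.log
    then run:
    expdp CWODS/password@CWODS_db parfile=pkg.par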

  • How to export selected columns in a table using expdp of oracle10g

    Hi all..
    I have a table with 10 columns and I want to export the data of only 4 selected columns using expdp.
    Please tell me if we can do this and, if yes, what the syntax is.
    Thanks..
    Sekhar

    That's not possible; you can use QUERY to specify a WHERE clause, but not the columns in the SELECT list.
    For example:
    QUERY=employees:'"WHERE department_id > 10 AND salary > 10000"'
    Alternatively, you could use
    create table table2export as select c1,c2,c3,c4 from tableof10columns;
    and export that staging table.
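    A sketch of that workaround end to end (table, column, user and directory names are the placeholders used above):
    -- build a staging table containing only the wanted columns
    CREATE TABLE table2export AS
      SELECT c1, c2, c3, c4 FROM tableof10columns;
    then export just that table and drop it afterwards:
    expdp sekhar/password tables=table2export directory=DATA_PUMP_DIR dumpfile=table2export.dmp logfile=table2export.log
    DROP TABLE table2export PURGE;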

  • Expdp error anyone have an idea on how to fix this?

    I'm trying to get an export of a database that has encrypted columns. Exp really isn't a good option - I'd have to decrypt the columns first every time.
    First I created the directory D:\DPDUMP and gave Everyone, SERVICE and SYSTEM full rights to it. Oracle runs as the Local System account.
    Created new user myuser that has the role DBA with EXP_FULL_DATABASE etc.
    Made directory object.
    CREATE DIRECTORY dpump_dir_01 AS 'D:\DPDUMP';
    grant read, write on directory dpump_dir_01 to myuser;
    Run expdp as myuser with the following PARFILE entries, just testing on the tables that are giving me problems.
    DIRECTORY=dpump_dir_01
    LOGFILE=Encryptest.log
    DUMPFILE=Encryptest.dmp
    FULL=N
    encryption_password=********
    TABLES=MYBANK.CARDAGREEMENT
    CONTENT=ALL
    I get these errors:
    ORA-39006: internal error
    ORA-39213: Metadata processing is not available
    I've added USERID="/ as sysdba" to the PARFILE.
    I've also tried exporting to the default DATA_PUMP_DIR after granting read/write on it, with the same error: expdp TABLES=MYBANK.CARDAGREEMENT.
    No matter what I do I get this error. From what I've seen, a directory permissions issue normally causes it and granting access fixes it, but I'm having no luck at all.

    Here is the error definition; have you tried the recommended action?
    ORA-39213: Metadata processing is not available
    Cause: The Data Pump could not use the Metadata API. Typically, this is caused by the XSL stylesheets not being set up properly.
    Action: Connect AS SYSDBA and execute dbms_metadata_util.load_stylesheets to reload the stylesheets.
    Nicolas.
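    A sketch of that recommended action as it would actually be typed (this is the exact call named in the error documentation quoted above):
    sqlplus / as sysdba
    -- reload the XSL stylesheets used by the Metadata API
    EXEC dbms_metadata_util.load_stylesheets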

  • Expdp with parallel writing in one file at a time on OS

    Hi friends,
    I am facing a strange issue.Despite giving parallel=x parameter the expdp is writing on only one file on OS level at a time,although it is writing into multiple files sequentially (not concurrently)
    While on other servers i see that expdp is able to start writing in multiple files concurrently. Following is the sample log
    of my expdp .
    ++++++++++++++++++++
    Export: Release 10.2.0.3.0 - 64bit Production on Friday, 15 April, 2011 3:06:50
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    Starting "CNVAPPDBO4"."EXPDP_BL1_DOCUMENT": CNVAPPDBO4/********@EXTTKS1 tables=BL1_DOCUMENT DUMPFILE=DUMP1_S:Expdp_BL1_DOCUMENT_%U.dmp LOGFILE=LOG1_S:Expdp_BL1_DOCUMENT.log CONTENT=DATA_ONLY FILESIZE=5G EXCLUDE=INDEX,STATISTICS,CONSTRAINT,GRANT PARALLEL=6 JOB_NAME=Expdp_BL1_DOCUMENT
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 23.93 GB
    . . exported "CNVAPPDBO4"."BL1_DOCUMENT" 17.87 GB 150951906 rows
    Master table "CNVAPPDBO4"."EXPDP_BL1_DOCUMENT" successfully loaded/unloaded
    Dump file set for CNVAPPDBO4.EXPDP_BL1_DOCUMENT is:
    /tksmig/load2/oracle/postpaidamd/DUMP1_S/Expdp_BL1_DOCUMENT_01.dmp
    /tksmig/load2/oracle/postpaidamd/DUMP1_S/Expdp_BL1_DOCUMENT_02.dmp
    /tksmig/load2/oracle/postpaidamd/DUMP1_S/Expdp_BL1_DOCUMENT_03.dmp
    /tksmig/load2/oracle/postpaidamd/DUMP1_S/Expdp_BL1_DOCUMENT_04.dmp
    Job "CNVAPPDBO4"."EXPDP_BL1_DOCUMENT" successfully completed at 03:23:14
    ++++++++++++++++++++
    uname -a
    HP-UX ocsmigbrndapp3 B.11.31 U ia64 3522246036 unlimited-user license
    Is it hitting any known bug? Please suggest.
    regds,
    kunwar

    PARALLEL should always be used with DUMPFILE=filename_%U.dmp. Did you put the same parameters on the other server?
    The PARALLEL clause also depends on server resources. If the system resources allow, the number of parallel processes should be set to the number of dump files being created.
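    A sketch of the usual pattern: one %U wildcard in the dump file name, and PARALLEL set no higher than the number of files the FILESIZE cap will produce (directory name and values are illustrative):
    expdp cnvappdbo4/password tables=BL1_DOCUMENT directory=DUMP1_S \
      dumpfile=Expdp_BL1_DOCUMENT_%U.dmp filesize=5G parallel=4 \
      logfile=Expdp_BL1_DOCUMENT.log
    Note also that a single unpartitioned table is often unloaded by a single worker stream when direct path is chosen, in which case only one dump file is written at a time regardless of the PARALLEL setting - which may be exactly what the log above shows.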

  • Using an "EXISTS" test in the QUERY parameter for an expdp

    Has anyone successfully used EXISTS in the QUERY parameter during a Data Pump export? I can get IN to work, but not EXISTS.
    Works:
    tables=fred.export1
    directory=dpump_dir_dev
    dumpfile=expdpdev_export_test2_20101216.dmp
    logfile=expdpdev_export_test2_20101216.log
    query=export1:"WHERE state in (select state from fred.export2)"
    Doesn't work:
    tables=fred.export1
    directory=dpump_dir_dev
    dumpfile=expdpdev_export_test4_20101216.dmp
    logfile=expdpdev_export_test4_20101216.log
    query=export1:"WHERE exists (select * from fred.export2 where fred.export2.state = fred.export1.state)"
    Oracle unhappy:
    Processing object type TABLE_EXPORT/TABLE/TABLE
    ORA-31693: Table data object "FRED"."EXPORT1" failed to load/unload and is being skipped due to error:
    ORA-00904: "FRED"."EXPORT1"."STATE": invalid identifier
    . . .
    I tried "where export2.state = export1.state" and that returned the same error.
    It's like Oracle is aliasing the table or something, so that when I try to reference it in the WHERE clause of the subquery, it can't recognize it. Any ideas on another way of writing the EXISTS test? I can use IN, but in some cases that might be a list of over 500,000 values. An EXISTS runs faster as a plain SELECT statement, so I thought the same might be true during the Data Pump unload.
    --=Chuck

    Hi,
    It looks like a correlated subquery in QUERY does not work in expdp, whereas the IN clause works because it only refers to a column of the table being exported and matches it against values from the other table.
    "where export2.state = export1.state" will also not work, because you are only exporting the export1 table, so there is no reference to the export2 table in the generated statement (there is no select against export2).
    My 2 cents
    Regards
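    For what it's worth, the original poster's guess about aliasing looks right: as I read the Oracle Utilities documentation for QUERY, Data Pump rewrites the unload as a query over the table under an internal alias named KU$, so a correlated EXISTS has to reference that alias rather than the schema-qualified table name. A sketch worth testing on your release before relying on it:
    query=export1:"WHERE exists (select 1 from fred.export2 e2 where e2.state = KU$.state)"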

  • Need Help in expdp for resolving ORA-39127: unexpected error from call

    Hi All,
    My Environment is -------> Oracle 11g Database Release 1 On Windows 2003 Server SP2
    Requirement is ------------> Data Pump Jobs to be completed without any error message.
    I am trying to take a Data Pump export of a schema.
    Command Used --> expdp schemas=scott directory=data_pump_dir dumpfile=scorr.dmp version=11.1.0.6.0
    The export log shows these details; it completed with 2 error messages:
    Export: Release 11.1.0.6.0 - Production on Saturday, 23 April, 2011 13:31:10
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    With the OLAP option
    FLASHBACK automatically enabled to preserve database integrity.
    Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** schemas=scott directory=data_pump_dir dumpfile=scorr.dmp version=11.1.0.6.0
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 192 KB
    Processing object type SCHEMA_EXPORT/USER
    Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
    Processing object type SCHEMA_EXPORT/ROLE_GRANT
    Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
    ORA-39127: unexpected error from call to export_string :=SYS.DBMS_CUBE_EXP.schema_info_exp('SCOTT',0,1,'11.01.00.06.00',newblock)
    ORA-37111: Unable to load the OLAP API sharable library: (The specified module could not be found.
    ORA-06512: at "SYS.DBMS_CUBE_EXP", line 205
    ORA-06512: at "SYS.DBMS_CUBE_EXP", line 280
    ORA-06512: at line 1
    ORA-06512: at "SYS.DBMS_METADATA", line 5980
    Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    ORA-39127: unexpected error from call to export_string :=SYS.DBMS_CUBE_EXP.schema_info_exp('SCOTT',1,1,'11.01.00.06.00',newblock)
    ORA-37111: Unable to load the OLAP API sharable library: (The specified module could not be found.
    ORA-06512: at "SYS.DBMS_CUBE_EXP", line 205
    ORA-06512: at "SYS.DBMS_CUBE_EXP", line 280
    ORA-06512: at line 1
    ORA-06512: at "SYS.DBMS_METADATA", line 5980
    . . exported "SCOTT"."DEPT" 5.945 KB 4 rows
    . . exported "SCOTT"."EMP" 8.585 KB 14 rows
    . . exported "SCOTT"."SALGRADE" 5.875 KB 5 rows
    . . exported "SCOTT"."ACCTYPE_GL_MAS" 0 KB 0 rows
    . . exported "SCOTT"."BONUS" 0 KB 0 rows
    Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
    Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
    D:\APP\ADMINISTRATOR\ADMIN\SIPDB\DPDUMP\SCORR.DMP
    Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" completed with 2 error(s) at 13:40:08
    Please help me to resolve this issue.
    Thank you,
    Shan

    Hi Shan,
    I am getting an error very similar to yours,
    "ORA-37111: Unable to load the OLAP API sharable library: (The specified module could not be found.",
    while creating an OLAP Analytic Workspace with AWM.
    (I am creating a workspace for the first time, actually following a tutorial to get some knowledge about OLAP.)
    I see you managed to solve your problem.
    I wonder how I can get this MOS DOC 852794.1 - is it possible to get it without going to Metalink?
    Thanks in advance for any help.
    Regards,
    SC
